Seminars given at NaUKMA in Kyiv on the econometrics of panel data
Slides from the first seminar.
Slides from the second seminar.
Slides from the third seminar.
Value added at constant prices per sector (Excel data from Insee, the French national statistical office).
Net fixed capital stock at constant prices per sector (Excel data from Insee, the French national statistical office).
Full-time equivalent employment per sector (Excel data from Insee, the French national statistical office).
Value added at constant prices per sector (Excel data suitable for reading by E-Views).
Net fixed capital stock at constant prices per sector (Excel data for reading by E-Views).
Full-time equivalent employment per sector (Excel data for reading by E-Views).
E-Views program: OLS on raw data (All sectors).
E-Views program: random effects model (All sectors).
E-Views program: OLS on raw data (Industrial sectors).
E-Views program: random effects model (Industrial sectors).
E-Views program: fixed effects model (Industrial sectors).
E-Views program: between model (Industrial sectors).
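The E-Views programs themselves are not reproduced here, but the core estimation they perform — OLS on a log-linear production function relating value added to capital and labour — can be sketched in plain Python. The Cobb-Douglas specification and the synthetic data below are assumptions for illustration, not the seminar's actual Insee data:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y.
    X is a list of rows; each row starts with 1.0 for the intercept."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, k):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, k):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    # Back substitution
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (Xty[r] - sum(XtX[r][c] * b[c] for c in range(r + 1, k))) / XtX[r][r]
    return b

# Hypothetical sector data: ln(value added) = a + b*ln(capital) + c*ln(labour) + noise
random.seed(0)
rows, ys = [], []
for _ in range(200):
    lnK, lnL = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    rows.append([1.0, lnK, lnL])
    ys.append(1.0 + 0.3 * lnK + 0.7 * lnL + random.gauss(0.0, 0.05))

a, b, c = ols(rows, ys)  # estimates should be close to 1.0, 0.3, 0.7
```

Fixed- and random-effects estimators then differ mainly in how the data are transformed (within-demeaning versus quasi-demeaning) before this same OLS step is applied.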
|
{"url":"http://legendre.ovh/NaUKMA/NaUKMA.html","timestamp":"2024-11-12T00:02:53Z","content_type":"text/html","content_length":"2625","record_id":"<urn:uuid:6636887a-a571-44d2-b130-41be36289a01>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00613.warc.gz"}
|
Theoretical Computer Science - TIN
Course details
Theoretical Computer Science
TIN Acad. year 2022/2023 Winter semester 7 credits
An overview of the applications of the formal language theory in modern computer science and engineering (compilers, system modelling and analysis, linguistics, etc.), the modelling and decision
power of formalisms, regular languages and their properties, minimization of finite-state automata, context-free languages and their properties, Turing machines, properties of recursively
enumerable and recursive languages, computable functions, undecidability, undecidable problems of the formal language theory, and the introduction to complexity theory.
Credit+Examination (written+oral)
• 39 hrs lectures
• 10 hrs seminar
• 16 hrs exercises
• 13 hrs projects
• 60 pts final exam (written part)
• 25 pts written tests (written part)
• 15 pts projects
Češka Milan, doc. RNDr., Ph.D.
Holík Lukáš, doc. Mgr., Ph.D.
Lengál Ondřej, Ing., Ph.D.
Rogalewicz Adam, doc. Mgr., Ph.D.
Vojnar Tomáš, prof. Ing., Ph.D.
Subject specific learning outcomes and competences
The students are acquainted with basic as well as more advanced terms, approaches, and results of the theory of automata and formal languages and with basics of the theory of computability and
complexity allowing them to better understand the nature of the various ways of describing and implementing computer-aided systems.
The students acquire basic capabilities for theoretical research activities.
To acquaint students with more advanced parts of the formal language theory, with basics of the theory of computability, and with basic terms of the complexity theory.
The course acquaints students with fundamental principles of computer science and allows them to understand where boundaries of computability lie, what the costs of solving various problems on
computers are, and hence where there are limits of what one can expect from solving problems on computing devices - at least those currently known. Further, the course acquaints students, much more
deeply than in the bachelor studies, with a number of concrete concepts, such as various kinds of automata and grammars, and concrete algorithms over them, which are commonly used in many application
areas (e.g., compilers, text processing, network traffic analysis, optimisation of both hardware and software, modelling and design of computer systems, static and dynamic analysis and verification,
artificial intelligence, etc.). Deeper knowledge of this area will allow the students to not only apply existing algorithms but to also extend them and/or to adjust them to fit the exact needs of the
concrete problem being solved, as is often needed in practice. Finally, the course builds the students' capabilities of abstract and systematic thinking, abilities to read and understand formal texts
(hence allowing them to understand and apply in practice continuously appearing new research results), as well as abilities of exact communication of their ideas.
Prerequisite knowledge and skills
Basic knowledge of discrete mathematics concepts including algebra, mathematical logic, graph theory and formal languages concepts, and basic concepts of algorithmic complexity.
• Češka, M. a kol.: Vyčíslitelnost a složitost, Nakl. VUT Brno, 1993. ISBN 80-214-0441-8
• Češka, M., Rábová, Z.: Gramatiky a jazyky, Nakl. VUT Brno, 1992. ISBN 80-214-0449-3
• Češka, M., Vojnar, T.: Studijní text k předmětu Teoretická informatika (http://www.fit.vutbr.cz/study/courses/TIN/public/Texty/TIN-studijni-text.pdf), 165 str. (in Czech)
• Kozen, D.C.: Automata and Computability, Springer-Verlag, New York, Inc, 1997. ISBN 0-387-94907-0
• Hopcroft, J.E., Motwani, R., Ullman, J.D.: Introduction to Automata Theory, Languages, and Computation, Addison Wesley, 2nd ed., 2000. ISBN 0-201-44124-1
• Meduna, A.: Formal Languages and Computation. New York, Taylor & Francis, 2014.
• Aho, A.V., Ullmann, J.D.: The Theory of Parsing,Translation and Compiling, Prentice-Hall, 1972. ISBN 0-139-14564-8
• Martin, J.C.: Introduction to Languages and the Theory of Computation, McGraw-Hill, Inc., 3rd ed., 2002. ISBN 0-072-32200-4
• Brookshear, J.G. : Theory of Computation: Formal Languages, Automata, and Complexity, The Benjamin/Cummings Publishing Company, Inc, Redwood City, California, 1989. ISBN 0-805-30143-7
1. An introduction to the theory of formal languages, regular languages and grammars, finite automata, regular expressions.
2. Minimization of finite-state automata, pumping theorem, Nerode's theorem, decidable problems of regular languages.
3. Context-free languages and grammars, push-down automata, transformations and normal forms of context-free grammars.
4. Advanced properties of context-free languages, pumping theorem for context-free languages, decidable problems of context-free languages, deterministic context-free languages.
5. Turing machines (TMs), the language accepted by a TM, recursively enumerable and recursive languages and problems.
6. TMs with more tapes, nondeterministic TMs, universal TMs.
7. The relation of TMs and computable functions.
8. TMs and type-0 languages, diagonalization, properties of recursively enumerable and recursive languages, linearly bounded automata and type-1 languages.
9. The Church-Turing thesis, undecidability, the halting problem, reductions, Post's correspondence problem, undecidable problems of the formal language theory.
10. Gödel's incompleteness theorems.
11. An introduction to the computational complexity, Turing complexity, asymptotic complexity.
12. P and NP classes and beyond, polynomial reduction, completeness.
Syllabus of numerical exercises
1. Formal languages, and operations over them. Grammars, the Chomsky hierarchy of grammars and languages.
2. Regular languages and finite-state automata (FSA) and their determinization.
3. Conversion of regular expressions to FSA. Minimization of FSA. Pumping lemma.
4. Context-free languages and grammars. Transformations of context-free grammars.
5. Operations on context-free languages and their closure properties. Pumping lemma for context-free languages.
6. Push-down automata, (nondeterministic) top-down and bottom-up syntax analysis. Deterministic push-down languages.
7. Turing machines.
8. Turing machines and computable functions.
9. Recursive and recursively enumerable languages and their properties.
10. Decidability, semi-decidability, and undecidability of problems, reductions of problems.
11. Complexity classes. Properties of space and time complexity classes.
12. P and NP problems. Polynomial reduction.
Syllabus - others, projects and individual work of students
1. Assignment in the area of regular languages.
2. Assignment in the area of context-free languages and Turing machines.
3. Assignment in the area of undecidability and complexity.
An evaluation of the exams in the 4th week (max. 15 points) and the 9th week (max. 15 points), an evaluation of the assignments (max. 3 × 5 points), and a final exam evaluation (max. 60 points).
A written exam in the 4th week focusing on the fundamental as well as on advanced topics in the area of regular languages. A written exam in the 9th week focusing on advanced topics in the area of
context-free languages, and on Turing machines. Regular evaluation of the assignments and the final written exam.
The requirements to obtain the credit that is required for the final exam: a minimal total score of 18 points achieved from the assignments and from the exams in the 4th and 9th weeks (i.e. out of 40 points).
The final exam has 4 parts. Students have to achieve at least 4 points from each part and at least 25 points in total, otherwise the exam is evaluated by 0 points.
The minimal total score of 18 points achieved from the first two assignments, and from the exams in the 4th and 9th week (i.e. out of 40 points).
Course inclusion in study plans
• Programme IT-MGR-2, field MBI, MBS, MGM, MIN, MIS, MMM, MPV, MSK, 1st year of study, Compulsory
• Programme MITAI, field NADE, NBIO, NCPS, NEMB, NEMB up to 2021/22, NGRI, NHPC, NIDE, NISD, NISY, NISY up to 2020/21, NMAL, NMAT, NNET, NSEC, NSEN, NSPE, NVER, NVIZ, 1st year of study, Compulsory
|
{"url":"https://www.fit.vut.cz/study/course/259407/.en?year=2022","timestamp":"2024-11-10T09:04:53Z","content_type":"text/html","content_length":"102821","record_id":"<urn:uuid:237175b4-d7ee-44b1-a606-f95fb7817d91>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00319.warc.gz"}
|
Competing Foundations?
‘Competing Foundations?’ Conference
Posted by David Corfield
FINAL CFP and EXTENDED DEADLINE: SoTFoM II `Competing Foundations?’, 12-13 January 2015, London.
The focus of this conference is on different approaches to the foundations of mathematics. The interaction between set-theoretic and category-theoretic foundations has had significant philosophical
impact, and represents a shift in attitudes towards the philosophy of mathematics. This conference will bring together leading scholars in these areas to showcase contemporary philosophical research
on different approaches to the foundations of mathematics. To accomplish this, the conference has the following general aims and objectives. First, to bring to a wider philosophical audience the
different approaches that one can take to the foundations of mathematics. Second, to elucidate the pressing issues of meaning and truth that turn on these different approaches. And third, to address
philosophical questions concerning the need for a foundation of mathematics, and whether or not either of these approaches can provide the necessary foundation.
Date and Venue: 12-13 January 2015 - Birkbeck College, University of London.
Confirmed Speakers: Sy David Friedman (Kurt Goedel Research Center, Vienna), Victoria Gitman (CUNY), James Ladyman (Bristol), Toby Meadows (Aberdeen).
Call for Papers: We welcome submissions from scholars (in particular, young scholars, i.e. early career researchers or post-graduate students) on any area of the foundations of mathematics (broadly
construed). While we welcome submissions from all areas concerned with foundations, particularly desired are submissions that address the role of and compare different foundational approaches.
Applicants should prepare an extended abstract (maximum 1,500 words) for blind review, and send it to sotfom [at] gmail [dot] com, with subject `SOTFOM II Submission’.
Submission Deadline: 31 October 2014
Notification of Acceptance: Late November 2014
Scientific Committee: Philip Welch (University of Bristol), Sy-David Friedman (Kurt Goedel Research Center), Ian Rumfitt (University of Birmingham), Carolin Antos-Kuby (Kurt Goedel Research Center),
John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Goedel Research Center), Neil Barton (Birkbeck College), Chris Scambler (Birkbeck College), Jonathan Payne (Institute of
Philosophy), Andrea Sereni (Universita Vita-Salute S. Raffaele), Giorgio Venturi (CLE, Universidade Estadual de Campinas)
Organisers: Sy-David Friedman (Kurt Goedel Research Center), John Wigglesworth (London School of Economics), Claudio Ternullo (Kurt Goedel Research Center), Neil Barton (Birkbeck College), Carolin
Antos-Kuby (Kurt Goedel Research Center)
Conference Website: sotfom [dot] wordpress [dot] com
Further Inquiries: please contact Carolin Antos-Kuby (carolin [dot] antos-kuby [at] univie [dot] ac [dot] at) Neil Barton (bartonna [at] gmail [dot] com) Claudio Ternullo (ternulc7 [at] univie [dot]
ac [dot] at) John Wigglesworth (jmwigglesworth [at] gmail [dot] com)
The conference is generously supported by the Mind Association, the Institute of Philosophy, British Logic Colloquium, and Birkbeck College.
Posted at October 17, 2014 2:09 PM UTC
Re: ‘Competing Foundations?’ Conference
Funny how even the url sotfom [dot] wordpress [dot] com was obfuscated. To prevent robots from visiting the site? Google has already indexed it …
more on topic: I hope they will provide some of the outcomes for non-visitors, such as videos of lectures and downloadable slides.
Posted by: Konrad Voelkel on October 17, 2014 3:32 PM | Permalink | Reply to this
Re: ‘Competing Foundations?’ Conference
The interaction between set-theoretic and category-theoretic foundations has had significant philosophical impact, and represents a shift in attitudes towards the philosophy of mathematics. This
conference will bring together leading scholars in these areas
I don’t see many (if any!) category theorists listed…
Posted by: David Roberts on October 18, 2014 11:27 PM | Permalink | Reply to this
Re: ‘Competing Foundations?’ Conference
David wrote:
I don’t see many (if any!) category theorists listed…
Yeah, really. This looks like a “competition” where one team wasn’t invited. Too bad: it will be much more boring than if they also brought in Voevodsky, Awodey, McLarty and Corfield.
However, I also think the idea of “competing foundations” needs to be criticized (and I’m sure some of these people could do it).
The metaphor of “foundation” makes it sound like there should be just one foundation, on which the building of mathematics rests. But that’s just not how it works. A better terminology would be
“portal” or “entrance”: there are various ways to get into mathematics starting from simple assumptions, but once you’re in you can study all the entrances. You don’t have to choose one entrance as
the “correct” one. There’s still a competition, but we shouldn’t expect or hope for a single winner.
Posted by: John Baez on October 24, 2014 11:53 PM | Permalink | Reply to this
Re: ‘Competing Foundations?’ Conference
There’s a thread taking place at the list FOM (Foundations of Mathematics) where this same theme of competition can be found. Urs has recently posted on this matter at Google+.
The FOM thread is archived here and gets under way approximately here in this post by Harvey Friedman where Voevodsky’s name was brought up. (The posts can be sorted by thread which seems to be
easiest way of following the flow of discussion.) This is the initial post to which Urs responded at FOM.
Almost immediately hackles seem to have been raised, with talk of “throwing down the gauntlet” and various “challenges” being issued. If I may speak frankly, I don’t hold out hope that this sort of
atmosphere will lead to much enlightenment. Certainly not if past history has been any guide (and already I see people talking past one another).
Urs did (in his Google+ post) helpfully point to a minicourse by Mike Shulman for anyone wishing an introduction to HoTT. Really, though, if people at FOM want to know the point of HoTT, I’d say
they should pick up the book and start reading.
Alternatively, if “traditional logicians/set theorists” don’t want to pick up the book and start reading, they could ask someone in their community such as François G. Dorais, who has made an effort
to understand and could help explain. But of course this is predicated on the assumption that people want to learn and not just have a fight or “defend turf”.
Posted by: Todd Trimble on October 25, 2014 6:42 AM | Permalink | Reply to this
Re: ‘Competing Foundations?’ Conference
Yes, I felt the discussion was going nowhere. I’ve bowed out of the discussion, and unsubscribed from the list, to preserve my sanity.
Posted by: David Roberts on October 26, 2014 10:46 AM | Permalink | Reply to this
Re: ‘Competing Foundations?’ Conference
I guess they’re largely recruiting from the local population, and there aren’t too many category theoretic philosophers in the UK.
To the extent I get invited to conferences, it’s not as someone working on any foundations. The introduction to The Philosophy of Mathematical Practice falsely presents me as opposed to foundational research.
On the other hand, James Ladyman is invited and he’s working with Stuart Presnell on a three year project ‘Applying Homotopy Type Theory in Logic, Metaphysics, and Philosophy of Physics’.
Posted by: David Corfield on October 26, 2014 8:36 AM | Permalink | Reply to this
|
{"url":"https://golem.ph.utexas.edu/category/2014/10/competing_foundations_conferen.html","timestamp":"2024-11-06T03:58:42Z","content_type":"application/xhtml+xml","content_length":"24624","record_id":"<urn:uuid:39aa866e-6686-4c13-b62e-0c0ddad972ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00517.warc.gz"}
|
SEm/escapeLab/signal generator
From FSI
Welcome to the SEM escape lab
Find the coefficients and match the signal.
4 sinewaves are combined to build a signal:
• the frequency of the second one is the double of the first one's.
• the frequency of the third one is the double of the second one's.
• the frequency of the fourth one is the double of the third one's.
Each sinewave is damped by a power of 2: in terms of hardware, the signal values are shifted to the right by a given number of bits. The damped sinewaves are then added together to form the signal
at the bottom of the following picture.
Damping factors can be 1, 1/2, or 1/256. The key is the modulo 30 sum of:
• the power of two of the first coefficient (0 for 1, 1 for 1/2 and 8 for 1/256)
• two times the power of two of the second coefficient
• three times the power of two of the third coefficient
• four times the power of two of the fourth coefficient
PS: 1st coefficient is 0
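The key rule above can be checked mechanically. The sketch below uses hypothetical powers for coefficients 2-4 (the real ones must be read off the picture); only the first power, 0, is given by the hint:

```python
def escape_key(powers):
    """powers: the four powers of two (p1..p4) behind the damping factors:
    1 -> 0, 1/2 -> 1, 1/256 -> 8.  Damping by 1/2**p is a right shift by
    p bits in hardware.  Key = (1*p1 + 2*p2 + 3*p3 + 4*p4) mod 30."""
    return sum((i + 1) * p for i, p in enumerate(powers)) % 30

# Hypothetical example: p1 = 0 (given), p2 = 1, p3 = 8, p4 = 0
print(escape_key([0, 1, 8, 0]))  # (0 + 2 + 24 + 0) % 30 = 26
```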
|
{"url":"https://wiki.hevs.ch/fsi/index.php5?title=SEm/escapeLab/signal_generator&oldid=2958","timestamp":"2024-11-02T21:12:31Z","content_type":"text/html","content_length":"16570","record_id":"<urn:uuid:a90d1e76-5c7c-4874-8364-6e87af21fa72>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00628.warc.gz"}
|
Villines Internet Humor Archive - Socio-Math Problems for San Francisco Students
Socio-Math Problems for San Francisco Students
1. Zelda and Jane were given a rottweiler at their commitment ceremony. If their dog needs to be walked two miles a day and they walk at a rate of one mile per hour, how much time will they spend
discussing their relationship in public?
2. Michael has two abusive stepfathers and an alcoholic mother. If his self-esteem is reduced by 20% per dysfunctional parent, but Michael feels 3% better for every person he denigrates, how long
will it take before he's ready to go home if 1 person walks by the cafe every 2 minutes?
3. Sanjeev has 7 piercings. If the likelihood of getting cellulitis on a given day is 10% per piercing, what is the likelihood Sanjeev will need to renew his erythromycin prescription during the next
4. Chad wants to take half a pound of heroin to Orinda and sell it at a 20% profit. If it originally cost him $ 1,500 in food stamps, how much should Nicole write the check for?
5. The City and County of San Francisco decide to destroy 50 rats infesting downtown. If 9,800 animal rights activists hold a candlelight vigil, how many people did each dead rat empower?
6. A red sock, a yellow sock, a blue sock, and a white sock are tossed randomly in a drawer. What is the likelihood that the first two socks drawn will be socks of color?
7. George weighs 245 pounds and drinks two triple lattes every morning. If each shot of espresso contains 490 mg of caffeine, what is George's average caffeine density in mg/pound?
8. There are 4500 homes in Mill Valley and all of them recycle plastic. If each household recycles 10 soda bottles a day and buys one polar fleece pullover per month, does Mill Valley have a monthly
plastic surplus or deficit? Bonus question: Assuming all the plastic bottles are 1 liter size, how much Evian are they drinking?
9. If the average person can eat one pork pot sticker in 30 seconds, and the waitress brings a platter of 12 pot stickers, how long will it take five vegans to not eat them?
10. Todd begins walking down Market Street with 12 $ 1 bills in his wallet. If he always gives panhandlers a single buck, how many legs did he have to step over if he has $ 3 left when he reaches the
other end and met only one double-amputee?
Advanced Placement Students Only
11. Katie, Trip, Ling, John-John and Effie share a three-bedroom apartment on Guerrero for $ 2400 a month. Effie and Trip can share one bedroom, but the other three need their own rooms with separate
ISDN lines to run their web servers. None of them wants to use the futon in the living room as a bed, and they each want to save $ 650 in three months to attend Burning Man. What is their best option?
a) All five roommates accept a $ 12/hour job-share as handgun monitors at Mission High.
b) Ask Miles, the bisexual auto mechanic, to share Effie and Trip's bedroom for $ 500/month.
c) Petition the Board of Supervisors to advance Ling her annual digital-artists-of-color stipend.
d) Rent strike.
|
{"url":"https://villines.com/Internet/socioecon.php","timestamp":"2024-11-03T00:24:39Z","content_type":"text/html","content_length":"8505","record_id":"<urn:uuid:46e49c51-40a2-48fd-a05f-a3b3a4d48c97>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00844.warc.gz"}
|
Y0(3) Linux Programmer's Manual Y0(3)
y0, y0f, y0l, y1, y1f, y1l, yn, ynf, ynl - Bessel functions of the second kind
#include <math.h>
double y0(double x);
double y1(double x);
double yn(int n, double x);
float y0f(float x);
float y1f(float x);
float ynf(int n, float x);
long double y0l(long double x);
long double y1l(long double x);
long double ynl(int n, long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
y0(), y1(), yn():
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _SVID_SOURCE || _BSD_SOURCE
y0f(), y0l(), y1f(), y1l(), ynf(), ynl():
_XOPEN_SOURCE >= 600
|| (_ISOC99_SOURCE && _XOPEN_SOURCE)
|| /* Since glibc 2.19: */ _DEFAULT_SOURCE
|| /* Glibc versions <= 2.19: */ _SVID_SOURCE || _BSD_SOURCE
The y0() and y1() functions return Bessel functions of x of the second
kind of orders 0 and 1, respectively. The yn() function returns the
Bessel function of x of the second kind of order n.
The value of x must be positive.
The y0f(), y1f(), and ynf() functions are versions that take and return
float values. The y0l(), y1l(), and ynl() functions are versions that
take and return long double values.
On success, these functions return the appropriate Bessel value of the
second kind for x.
If x is a NaN, a NaN is returned.
If x is negative, a domain error occurs, and the functions return
-HUGE_VAL, -HUGE_VALF, or -HUGE_VALL, respectively. (POSIX.1-2001 also
allows a NaN return for this case.)
If x is 0.0, a pole error occurs, and the functions return -HUGE_VAL,
-HUGE_VALF, or -HUGE_VALL, respectively.
If the result underflows, a range error occurs, and the functions return 0.0.
If the result overflows, a range error occurs, and the functions return
-HUGE_VAL, -HUGE_VALF, or -HUGE_VALL, respectively. (POSIX.1-2001 also
allows a 0.0 return for this case.)
See math_error(7) for information on how to determine whether an error
has occurred when calling these functions.
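These functions can be exercised without writing C by loading libm from Python via ctypes; the fallback library name `libm.so.6` is an assumption that holds on typical glibc Linux systems:

```python
import ctypes
import ctypes.util

# Locate and load the math library; fall back to the usual glibc name.
libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)

# Declare the signatures from the SYNOPSIS above.
for fn in (libm.y0, libm.y1):
    fn.restype = ctypes.c_double
    fn.argtypes = [ctypes.c_double]
libm.yn.restype = ctypes.c_double
libm.yn.argtypes = [ctypes.c_int, ctypes.c_double]

print(libm.y0(1.0))     # ~0.0883, Bessel function of the second kind, order 0
print(libm.yn(1, 1.0))  # same value as y1(1.0)
```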
The following errors can occur:
Domain error: x is negative
errno is set to EDOM. An invalid floating-point exception
(FE_INVALID) is raised.
Pole error: x is 0.0
errno is set to ERANGE (but see BUGS). No FE_DIVBYZERO exception is returned by fetestexcept(3) for this case.
Range error: result underflow
errno is set to ERANGE. No FE_UNDERFLOW exception is returned
by fetestexcept(3) for this case.
Range error: result overflow
errno is not set for this case. An overflow floating-point exception (FE_OVERFLOW) is raised.
For an explanation of the terms used in this section, see attributes(7).
|Interface | Attribute | Value |
|y0(), y0f(), y0l() | Thread safety | MT-Safe |
|y1(), y1f(), y1l() | Thread safety | MT-Safe |
|yn(), ynf(), ynl() | Thread safety | MT-Safe |
The functions returning double conform to SVr4, 4.3BSD, POSIX.1-2001,
POSIX.1-2008. The others are nonstandard functions that also exist on
the BSDs.
On a pole error, these functions set errno to EDOM, instead of ERANGE
as POSIX.1-2004 requires.
In glibc version 2.3.2 and earlier, these functions do not raise an invalid floating-point exception (FE_INVALID) when a domain error occurs.
This page is part of release 5.05 of the Linux man-pages project. A
description of the project, information about reporting bugs, and the
latest version of this page, can be found at
2017-09-15 Y0(3)
Man Pages Copyright Respective Owners. Site Copyright (C) 1994 - 2024 Hurricane Electric. All Rights Reserved.
|
{"url":"http://man.he.net/man3/y0l","timestamp":"2024-11-12T10:27:33Z","content_type":"text/html","content_length":"5652","record_id":"<urn:uuid:134e420f-3fb6-414f-b126-138117a67357>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00253.warc.gz"}
|
FREE CHP Mathematical Reasoning Questions and Answers - Practice Test Geeks
FREE CHP Mathematical Reasoning Questions and Answers
What is the next number in the sequence: 2, 6, 12, 20, …?
Correct! Wrong!
To find the pattern, observe the differences between consecutive terms: 4, 6, and 8.
The differences are increasing by 2. Therefore, the next difference should be 8 + 2 = 10. Adding 10 to the last term (20): 20 + 10 = 30.
Thus, the next number in the sequence is 30.
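This "constant second difference" reasoning can be written as a small Python sketch (the function name is ours, not the test's):

```python
def next_by_second_difference(seq):
    """Extend a sequence whose consecutive differences grow by a constant step."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    step = diffs[1] - diffs[0]          # constant growth of the differences
    return seq[-1] + diffs[-1] + step   # last term + next difference

print(next_by_second_difference([2, 6, 12, 20]))  # 30
```

Equivalently, the n-th term here is n(n+1): 1·2, 2·3, 3·4, 4·5, so the next is 5·6 = 30.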
A triangle has angles measuring 40 degrees and 85 degrees. What is the measure of the third angle?
Correct! Wrong!
The sum of the angles in a triangle is always 180 degrees. To find the third angle:
Third Angle=180−(40+85)=180−125=55 degrees
A rectangular garden has a length of 12 meters and a width of 5 meters. What is the area of the garden?
Correct! Wrong!
The area of a rectangle is calculated by multiplying the length by the width. For a garden with a length of 12 meters and a width of 5 meters:
Area = Length × Width = 12m × 5m = 60 square meters
A store sells pencils at $2 per pencil. If you buy 3 pencils and use a coupon for $1 off the total purchase, what is the final cost?
Correct! Wrong!
Calculate the cost without the coupon:
Total Cost = 3 pencils × 2 dollars/pencil = 6 dollars
Apply the coupon:
Final Cost = 6 dollars − 1 dollar = 5 dollars
What is the median of the following set of numbers: 3, 7, 8, 5, 12?
Correct! Wrong!
To find the median, first arrange the numbers in ascending order: 3, 5, 7, 8, 12.
The median is the middle number in this ordered list. Since there are 5 numbers (an odd number), the median is the 3rd number:
Median = 7
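The same check can be done with Python's standard library:

```python
import statistics

data = [3, 7, 8, 5, 12]
print(sorted(data))             # [3, 5, 7, 8, 12]
print(statistics.median(data))  # 7
```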
|
{"url":"https://practicetestgeeks.com/free-chp-mathematical-reasoning-questions-and-answers/","timestamp":"2024-11-05T19:42:13Z","content_type":"text/html","content_length":"103809","record_id":"<urn:uuid:fd6d873f-6077-42f1-b1fa-5a277577d951>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00795.warc.gz"}
|
How many kPa in 5/8 pound per square inch?
Contact Us!
Please get in touch with us if you:
1. Have any suggestions
2. Have any questions
3. Have found an error/bug
4. Anything else ...
To contact us, please click HERE.
How many kPa in 5/8 pound per square inch?
5/8 pound per square inch equals 4.30922 kilopascals because 5/8 times 6.89476 (the conversion factor) = 4.30922
All In One Unit Converter
Pounds per square inch to kilopascals Conversion Formula
How to convert 5/8 pound per square inch into kilopascals
To calculate the value in kilopascals, you just need to use the following formula:
Value in kilopascals = value in pounds per square inch × 6.89475728
In other words, you need to multiply the pressure value in pounds per square inch by 6.89475728 to obtain the equivalent value in kilopascals.
For example, to convert 0.625 pounds per square inch to kilopascals, you can plug the value of 5/8 into the above formula to get
kilopascals = 5/8 × 6.89475728 = 4.3092233
Therefore, the pressure is 4.3092233 kilopascals. Note that the resulting value may have to be rounded to a practical or standard value, depending on the application.
By using this converter, you can get answers to questions such as:
• How much is 5/8 pound per square inch in kilopascals;
• How to convert pounds per square inch into kilopascals and
• What is the formula to convert from pounds per square inch to kilopascals, among others.
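The conversion is a single multiplication, so a two-line Python helper reproduces the page's answer (the function name is ours):

```python
PSI_TO_KPA = 6.89475728  # conversion factor used above

def psi_to_kpa(psi):
    """Convert a pressure in pounds per square inch to kilopascals."""
    return psi * PSI_TO_KPA

print(round(psi_to_kpa(5 / 8), 5))  # 4.30922
```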
Pound Per Square Inch to Kilopascals Conversion Chart Near 0.565 pound per square inch
Pounds Per Square Inch to Kilopascals
0.565 pound per square inch 3.896 kilopascals
0.575 pound per square inch 3.964 kilopascals
0.585 pound per square inch 4.033 kilopascals
0.595 pound per square inch 4.102 kilopascals
0.605 pound per square inch 4.171 kilopascals
0.615 pound per square inch 4.24 kilopascals
0.625 pound per square inch 4.309 kilopascals
0.635 pound per square inch 4.378 kilopascals
0.645 pound per square inch 4.447 kilopascals
0.655 pound per square inch 4.516 kilopascals
0.665 pound per square inch 4.585 kilopascals
0.675 pound per square inch 4.654 kilopascals
0.685 pound per square inch 4.723 kilopascals
Note: Values are rounded to 4 significant figures. Fractions are rounded to the nearest 8th fraction.
Despite efforts to provide accurate information on this website, no guarantee of its accuracy is made. Therefore, the content should not be used for decisions regarding health, finances, or property.
|
{"url":"https://www.howmany.wiki/u/How-many--kPa--in--5%7C8--pound-per-square-inch","timestamp":"2024-11-06T05:59:48Z","content_type":"text/html","content_length":"115783","record_id":"<urn:uuid:e8d32112-6886-400d-9884-ade5e65ea637>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00079.warc.gz"}
|
Tutorial videos
Introduction to Analytica webinar
This is an unedited 75-minute recording of the Introduction to Analytica webinar given on 3-Aug-2016. It is the webinar you attend when you Sign up for a Live Webinar on the Lumina Home Page.
Watch: Introduction to Analytica Webinar
You can alternatively watch the demo portion of the webinar, showing off the Enterprise model, recorded separately. This 13 minute streamlined recording skips the Power Point slides (What is
Analytica, Benefits, Key Features, Applications, Users, Editions, Resources), but comprises the core of the demo. There is audio, but you might need to turn up the volume a bit.
Watch: Enterprise Model Demo.mp4
Analytica Cloud Platform (ACP)
Learn how to use the Analytica Cloud Platform (ACP) for collaboration and to create web applications for end users, so they can run models without having to download any software:
• Upload a model instantly to ACP from Analytica using Publish to Web... from the File menu.
• Explore and run a model in ACP via a web browser
• Use the ACP style library to modify the user interface to create a web application, including tab-based navigation, embedding graphs and tables in a diagram, extra diagram styles, sizing a window
for the web, autocalc, and more.
• Set up ACP Group Accounts for multiple users to share models in project directories.
• Set up Group Account member access as Reviewers, Authors, or Admins.
Watch: Analytica Cloud Player (ACP) Webinar
See also: Analytica Cloud Platform, ACP Style Library
Presenter: Max Henrion, CEO of Lumina on 18 Feb 2016
Expression Assist
Expression Assist makes suggestions of matching variables and functions as you type definitions. It helps novices and experts alike. It can dramatically speed up the task of writing Analytica
expressions. It often provides help, saving you from having to consult a reference elsewhere.
Watch: New Expression-Assist.wmv.
Presenter: Lonnie Chrisman, CTO, Lumina Decision System, on 9 Feb 2012
See also: Expression Assist
Table and Array Topics
The Basics of Analytica Arrays and Indexes
These two videos introduce the basic concepts of indexes, multi-dimensional arrays, and Intelligent Array Abstraction. These features are what make Analytica such a powerful tool. Mastering them is
key to using Analytica effectively. Understanding them means letting go of some preconceptions you may have from use of Excel or other modeling languages.
Part 1: Indexes, 1-D arrays and the Subscript/Slice Operator: Watch at Intro-to-arrays (Part 1).wmv
Part 2: array functions, multi-D arrays, and Array abstraction. Watch at Intro-to-arrays (Part 2).wmv
Presented by Lonnie Chrisman, CTO, Lumina on Jan 10 and 17th, 2008.
Download a model containing the examples created during the webinar from Intro to intelligent arrays.ana, and Plane catching decision with EVIU.ana.
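The index-alignment behavior at the heart of Intelligent Arrays can be sketched outside Analytica. In this hedged Python analogy (my own illustration, not Lumina's implementation), an array is a dict keyed by index elements, so an operation aligns cells by index element rather than by position:

```python
def combine(a, b, op):
    """Apply op cell by cell, aligning the two arrays by shared index elements."""
    return {k: op(a[k], b[k]) for k in a if k in b}

revenue = {"2007": 100, "2008": 120, "2009": 150}   # indexed by Years
cost = {"2009": 100, "2007": 80, "2008": 90}        # same index, different order

profit = combine(revenue, cost, lambda r, c: r - c)
```

Element order doesn't matter here, which is the essence of index-based (rather than positional) alignment.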
Local Variables
Date and Time: Thursday, 23 July 2009, 10:00-11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
I'll explain distinctions between different types of local variables that can be used within expressions. These distinctions are of primary interest for people implementing Meta-Inference algorithms,
since they have a lot to do with how Handles are treated. Analytica 4.2 introduces some new distinctions to the types of local variables, designed to make the behavior of local variables cleaner and
more understandable. One type of local variable is the LocalAlias, in which the local variable identifier serves as an alias to another existing object. In contrast, there is the MetaVar, which may
hold a Handle to another object, but does not act as an alias. The only local variable option that existed previously, declared using Var..Do, is a hybrid of these two, which leads to confusion when
manipulating handles. Since LocalAlias..Do and MetaVar..Do have very clean semantics, using these when writing Meta-Inference algorithms should help to reduce that confusion considerably. Inside a User-Defined Function, parameters are also instances of local variables, and depending on how they are declared, may behave as a MetaVar or LocalAlias, so I'll discuss how these fit into the picture, as well as local indexes.
This is appropriate for advanced Analytica modelers.
You can watch a recording of this webinar at Local-Variables.wmv. The analytica file from the webinar is at Local Variables.ana, where I've also implemented the exercises that I had suggested at the
end of the webinar, so you can look in the model for the solutions.
Array Concatenation
Date and Time: Thursday, 25 June 2009 10:00am-11:00 Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Array concatenation combines two (or more) arrays by joining them side-by-side, creating an array having all the elements of both arrays. The special case of list-concatenation joins 1-D arrays or
lists to create a list of elements that can function as an index. Array concatenation is a basic, and common, form of array manipulation.
The Concat function has been improved in Analytica 4.2, so that array concatenation is quite a bit easier in many cases, and the ConcatRows function is now built-in (formerly it was available as a
library function).
I'll take you through examples of array concatenation, including cases that have been simplified with the 4.2 enhancements, to help develop your skills at using Concat and ConcatRows.
This webinar is appropriate for all levels of Analytica modelers.
You can view a recording of this webinar at Array_Concatenation.wmv. The model file created during the webinar is: Array_Concatenation.ana.
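As a rough analogy for Concat (assumed for illustration, not how Analytica implements it), concatenating two keyed 1-D arrays amounts to joining their index elements and values side by side:

```python
def concat(a, b):
    """Join two keyed 1-D arrays side by side; index elements must be unique."""
    assert not (a.keys() & b.keys()), "concatenated index elements must not collide"
    merged = dict(a)     # dicts preserve insertion order in Python 3.7+
    merged.update(b)
    return merged

h1 = {"Jan": 10, "Feb": 12}
h2 = {"Mar": 11, "Apr": 9}
first_third = concat(h1, h2)   # the joined index runs Jan..Apr
```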
Flattening and Unflattening of Arrays
Date and Time: January 31, 2008, 10:00 - 11:00 Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
On occasion you may need to flatten a multi-dimensional array into a 2-D table. The table could be called a relational representation of the data. In some circles it is also referred to as a fact
table. Or, you may need to convert in the other direction -- expanding, or unflattening a relational/fact table into a multi-dimensional array. In Analytica, the MdTable and MdArrayToTable functions
are the primary tools for unflattening and flattening. In this session, I'll introduce these functions and how to use them, several examples, and many variations.
The model developed during this talk is at Flattening_and_Unflatting_Arrays.ana. A recording of the webinar can be viewed at Array-Flattening.wmv
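The round trip between a multi-dimensional array and a relational fact table can be sketched in Python (a loose analogy to MdArrayToTable and MdTable; the sales data is made up):

```python
def flatten(array2d):
    """dict-of-dicts {row: {col: value}} -> list of (row, col, value) fact rows."""
    return [(r, c, v) for r, cols in array2d.items() for c, v in cols.items()]

def unflatten(rows):
    """Inverse of flatten: fact rows -> dict-of-dicts."""
    out = {}
    for r, c, v in rows:
        out.setdefault(r, {})[c] = v
    return out

sales = {"North": {"Q1": 5, "Q2": 7}, "South": {"Q1": 3, "Q2": 4}}
fact_table = flatten(sales)
assert unflatten(fact_table) == sales   # the two directions round-trip
```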
The Aggregate Function
Date and Time: Thursday, 2 July 2009, 10:00am - 11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Aggregation is the process of transforming an array based on a fine-grain index into a smaller array based on a coarser-grain index. For example, you might map a daily cash stream into monthly
revenue (i.e., reindexing from days to months).
This has always been a pretty common operation in Analytica models, with a variety of techniques for accomplishing it, but it has just become more convenient with the Aggregate function, new to
Analytica 4.2.
In the webinar, I'll be demonstrating the use and generality of the Aggregate function. In the process, it will also be a chance to review a number of other basic intelligent array concepts,
including array abstraction, subscripting, re-indexing, etc.
This webinar is appropriate for all levels of Analytica modelers.
A recording of this webinar can be viewed at Aggregate.wmv. The model file created during this webinar is: Aggregate Function.ana.
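The daily-to-monthly example above can be sketched in Python (an analogy to Aggregate, not its implementation; the cash figures are illustrative):

```python
from collections import defaultdict
from datetime import date, timedelta

def aggregate(values_by_day, key, combine=sum):
    """Map a fine-grain keyed array onto a coarser index via a key function."""
    buckets = defaultdict(list)
    for day, v in values_by_day.items():
        buckets[key(day)].append(v)
    return {k: combine(vs) for k, vs in buckets.items()}

start = date(2009, 1, 30)
daily_cash = {start + timedelta(days=i): 100 for i in range(5)}  # Jan 30 .. Feb 3
monthly = aggregate(daily_cash, key=lambda d: (d.year, d.month))
```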
Sorting
Date and Time: Thursday, 6 Aug 2009, 10:00am-11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This webinar will demonstrate the functions in Analytica that are used to sort (i.e. re-order) data -- the functions sortIndex, Rank, and the new to 4.2 Sort. I'll cover the basics of using these
functions, including how they interact with indexes, how to apply them to arrays of data, and their use with array abstraction. I'll then introduce several new 4.2 extensions for handling multi-key
sorts, descending options, and case insensitivity.
This webinar is appropriate for all levels of Analytica modelers.
A recording of this webinar can be viewed at Sorting.wmv. The model file created during the webinar is at Sorting.ana.
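A multi-key sort with a descending key and case-insensitive text, as described above, looks like this in a Python sketch (SortIndex-style, in that the result is a permutation of positions; the data is made up):

```python
names = ["delta", "Alpha", "charlie", "bravo"]
scores = [3, 5, 3, 4]

# Order positions by score descending, breaking ties by name, case-insensitively.
order = sorted(range(len(names)), key=lambda i: (-scores[i], names[i].lower()))
sorted_names = [names[i] for i in order]
```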
Self-Indexes, Lists, and Implicit Dimensions
Date and Time: January 24, 2008, 10:00 - 11:00 Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Every dimension of an Analytica array is associated with an index object. Array Abstraction recognizes when two arrays passed as parameters to an operator or function contain the same indexes. These indexes are most commonly defined by a global index object, i.e., an index object that appears on a diagram as a parallelogram node. However, variable and decision nodes can serve as indexes, and can even have a multi-dimensional value in addition to being an index themselves. This is referred to as a self-index. If a variable identifier is used in an expression, the context in which it appears always
makes it clear whether the identifier is being used as an index, or as a variable with a value. Self-indexes can arise in several ways, which I will cover. In rare cases, when writing an expression,
you may need to be aware of whether you intend to use the index value or the context value of a self-indexed variable. I'll discuss these cases, for example in For..Do loops, and the use of the
IndexValue function.
In some cases, lists may be used in expressions, and when combined with other results, lists can end up serving as an implicit dimension of an array. An implicit dimension is a bit different from a
full-fledged index since it has no name, and hence no way to refer to it in an expression where an index parameter is expected. Yet most built-in Analytica functions can still be employed to operate
over an implicit index. When an implicit index reaches the top level of an expression, it is promoted to be a self-index. I will explain and demonstrate these concepts.
The model developed during this talk is at Self-Indexes_Lists_and_Implicit_dimensions.ana. A recording of the webinar can be viewed at Self-Indexes-Implicit-Dims.wmv
DetermTables
Date and Time: 18 September 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
A DetermTable provides an input view like that of an edit table, allowing you to specify values or expressions in each cell for all index combinations; however, unlike a table, the evaluation of a
determtable conditionally returns only selected values from the table. It is called a determtable because it acts as a deterministic function of one or more discrete-valued variables. You can
conceptualize a determtable as a multi-dimensional generalization of a select-case statement found in many programming languages, or as a value that varies with the path down a decision tree.
DetermTables can be used to encode a table of utilities (or costs) for each outcome in a probabilistic model. In this usage, they combine very naturally with ProbTables (probability tables) for
discrete probabilistic models. They are also extremely useful in combination with Choice pulldowns, allowing you to keep lots of data in your model, but using only a selected part of that for your
analysis. This leads to Selective Parametric Analysis, which is often an effective way of coping with memory capacity limitation in high dimensional models.
In this talk, I'll introduce the DetermTable, show how you create one and describe the requirements for the table indexes. The actual "selection" of slices occurs in the table indexes. Not all
indexes have to be selectors, but I'll explain the difference and how the domain attribute is used to establish the table index, while the value is used to select the slice. When you define the
domain of a variable that will serve as a DetermTable index, you have the option of defining the domain as an index domain. This can be extremely useful in combination with a DetermTable, so I will
cover that feature as well. It is helpful to understand how the functionality of a DetermTable can be replicated using two nodes -- the first containing an Edit Table and the second using Subscript. Despite this equivalence, DetermTable can be especially convenient, both because it simplifies things by requiring one less node, and because an Edit Table can be easily converted into a DetermTable.
You can watch a recording of this webinar at DetermTables.wmv. The examples created while demonstrating the mechanics of DetermTables is saved here: DetermTable intro.ana. Other example models used
were the 2-branch party problem.ana and the Compression post load calculator.ana, both distributed in the Example models folder with Analytica, and the Loan policy selection.ana model.
Edit Table Splicing
Date and Time: Thursday, August 14, 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Edit tables, probability tables and determ tables automatically adjust when their index's values are altered. When new elements are inserted into an index, rows (or columns or slices) are
automatically inserted, and when elements are deleted, rows (or columns or slices) are deleted from the tables. This process of adjusting tables is referred to as splicing.
Some indexes in Analytica may be computed, so that changes to some input variables could result in dramatic changes to the index value, both in terms of the elements that appear and the order of the
elements in the index. This creates a correspondence problem for Analytica -- how do the rows after the change correspond to the rows before the change. Analytica can utilize three different methods
for determining the correspondence: associative, positional, or flexible correspondence. I'll discuss what these are and show you how you can control which method is used for each index.
When slices (rows or columns) are inserted in a table, Analytica will usually insert 0 (zero) as the default value for the new cells. It is possible, however, to explicitly set a default value, and
even to set a different default for each column of the table. Doing so requires some typescripting, but I'll take you through the steps.
Using blank cells as a default value, rather than zero, has some advantages. It becomes quickly apparent which cells need to be filled in after index items are inserted, and Analytica will issue a
warning message if blank cells exist that you haven't yet filled in. I'll take you through the steps of enabling blank cells by default.
You can watch a recording of this webinar at Edit-Table-Splicing.wmv. (Note: There is a gap in the recording's audio from 18:43-27:35).
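Associative correspondence, where cells follow their index element by value and newly inserted elements receive a default, can be sketched as (my own analogy, not Analytica's splicing code):

```python
def splice(table, new_index, default=0):
    """Re-map a keyed table onto a changed index, matching cells by element value.
    Elements removed from the index drop their cells; new elements get default."""
    return {k: table.get(k, default) for k in new_index}

costs = {"steel": 10, "wood": 4, "glass": 7}
new_materials = ["wood", "carbon", "steel"]   # glass removed, carbon added, reordered
assert splice(costs, new_materials) == {"wood": 4, "carbon": 0, "steel": 10}
```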
The StepInterp Function
Date and Time: Thursday, 8 April 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The StepInterp function is useful in a number of scenarios, including:
• Discretizing a continuous quantity into a set of finite buckets.
• Looking up a value from a "schedule table" (e.g., tax-rate table, depreciation table)
• Mapping from a date to its fiscal year, when the fiscal year starts on an arbitrary mid-year date.
• Mapping from a cumulated value back to the index element/position.
• Performing a "nearest" or "robust" Subscript or Slice operation.
• Interpolating value for a relationship that changes in discrete steps
In this webinar, I'll demonstrate how to use the StepInterp function on several simple examples.
This webinar is appropriate for beginning Analytica modelers and up.
You can watch a recording of this webinar at: StepInterp.wmv. You can download the model created during this webinar from Step Interp Intro.ana.
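The "schedule table" use case can be sketched with Python's bisect module (a loose analogy to StepInterp; the brackets and rates below are invented for illustration, not a real tax table):

```python
import bisect

def step_interp(thresholds, results, x):
    """Return results[i] for the first thresholds[i] >= x (a step-wise lookup)."""
    i = bisect.bisect_left(thresholds, x)
    return results[min(i, len(results) - 1)]

brackets = [10_000, 40_000, 90_000]    # upper income limits per bracket
rates = [0.10, 0.22, 0.32, 0.37]       # one more rate than thresholds

rate = step_interp(brackets, rates, 55_000)   # falls in the third bracket
```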
SubTables
Date and Time: Thursday, 31 July 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The SubTable function allows a subset of another edit table to be edited by the user as a different view. To the user, it appears as if he is editing any other edit table; however, the changes are
stored in the original edit table. The rows and columns can be transformed to other dimensions in the Subtable, with different index element orders, based on Subset indexes, and with different number
A recording of this webinar can be viewed at SubTables.wmv. The model file from this webinar is at media:SubTable_webinar.ana.
Edit Table Enhancements in Analytica 4.0
Date and Time: Thursday, Aug 2, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
In this webinar, I will demonstrate several new edit table functionalities in Analytica 4.0, including:
• Insert Choice drop-down controls in table cells.
• Splicing tables based on computed indexes.
• Customizing the default cell value(s).
• Blank cells to catch entries that need to be filled in.
• Using different number formats for each column.
This talk is oriented for model builders with Analytica model-building experience.
The Analytica session that existed by the end of the talk is stored in the following model file: "Edit Table Features.ana".
Modeling Time
Manipulating Dates in Analytica
Date and Time: Thursday, Sept. 13, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
In this talk, I'll cover numerous aspects relating to the manipulation of dates in Analytica. I'll introduce the encoding of dates as integers and the date origin preference. I'll review how to
configure input variables, edit tables, or even individual columns of edit tables to accept (and parse) dates as input. I'll cover date number format capabilities in depth, including how to create
your own custom date formats, understanding how date formats interact with your computer's regional settings, and how to restrict a date format to a single column only. We'll also see how axis
scaling in graphs is date-aware.
Next, we'll examine various ways to manipulate dates in Analytica expressions. This includes use of the new and powerful functions MakeDate, DatePart, and DateAdd, and some interesting ways in which
these can be used, for example, to define date sequences. Finally, we'll practice our array mastery by aggregating results to and from different date granularities, such as aggregating from a month sequence to years, or interpolating from years to months.
The model file resulting by the end of the session is available here: Manipulating Dates in Analytica.ana.
You can watch a recording of this webinar here: Manipulating Dates.wmv (Windows Media Player required) Unfortunately, this one seems to have recorded poorly -- the video size is too small. If you
magnify it in your media player, it does become readable. Sorry -- I don't know why it recorded like this.
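The MakeDate/DatePart/DateAdd pattern maps naturally onto Python's datetime module. A hedged sketch (the month-shift helper below is my own and ignores end-of-month clamping for simplicity):

```python
from datetime import date

def date_add_months(d, months):
    """Shift a date by whole months (roughly DateAdd with a month unit)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

start = date(2007, 9, 13)             # analogous to MakeDate(2007, 9, 13)
year_part = start.year                # analogous to extracting the year with DatePart
quarter_later = date_add_months(start, 3)
```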
The Dynamic Function
Date and Time: Thursday, 12 June 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The Dynamic function is used for modeling or simulating changes over time, in which values of variables at time t depend on the values of those variables at earlier time points. Analytica provides a
special system index named Time that can be used like any other index, but which also has the additional property that it is used by the Dynamic function for dynamic simulation.
This webinar is a brief introduction to the use of the Dynamic function and to the creation of dynamic models. I'll cover the basic syntax of the Dynamic function, as well as various ways in which
you can refer to values at earlier time points within an expression. Dynamic models result in influence diagrams that have directed cycles (i.e., where you can start at a node, follow the arrows
forward and return to where you started), called dynamic loops. Similar cyclic dependencies are disallowed in non-dynamic influence diagrams.
During the webinar, we'll look at several simple examples of Dynamic, oriented especially for those of you with little or no experience with using Dynamic in models. I'll provide some helpful hints
for keeping things straight when building dynamic models. For the more seasoned modelers, I'll also try to fold in a few more detailed tidbits, such as some explanation about how dynamic loops are
evaluated, and how variable identifiers are interpreted somewhat differently from within dynamic loops.
The model developed (extension of Fibonacci's rabbit growth model) can be downloaded here: The Dynamic Function.ana. A recording of the webinar can be viewed at Dynamic-Function.wmv.
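The rabbit-growth recurrence mentioned above, roughly Dynamic(1, 1, Self[Time - 1] + Self[Time - 2]) in Analytica terms, can be sketched as an ordinary loop in Python:

```python
def rabbit_pairs(n_steps):
    """Rabbit pairs per time step (Fibonacci); assumes n_steps >= 2."""
    pairs = [1, 1]                                   # initial conditions, t = 0 and 1
    for t in range(2, n_steps):
        pairs.append(pairs[t - 1] + pairs[t - 2])    # depends on the two prior steps
    return pairs
```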
Modeling Markov Processes in Analytica
Date and Time: Thursday, Sept. 20, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Matthew Bingham, Principal Economist, Veritas Economic Consulting
Mathematical processes characterized by dynamic dependencies between successive random variables are called Markov chains. The rich behavior and wide applicability of Markov chains make
them important in a variety of applied mathematical applications including population and demographics, health outcomes, marketing, genetics, and renewable resources. Analytica’s dynamic modeling
capabilities, robust array handling, and flexible uncertainty capabilities support sophisticated Markov modeling. In this webinar, a Markov modeling application is demonstrated. The model develops
age-structured population simulations using a Leslie matrix structure and dynamic simulation in Analytica.
A recording of this session can be viewed at: Markov-Processes.wmv (requires Windows Media Player)
An article about the model presented here: AnalyticaMarkovtext.pdf
See also Donor/Presenter Dashboard -- a sample model that implements a continuous-time Markov chain in Analytica's discrete-time dynamic simulation environment.
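A single Leslie-matrix step of the kind the webinar describes can be sketched in Python (the age classes and rates below are invented for illustration, not taken from the webinar model):

```python
def leslie_step(pop, fecundity, survival):
    """Advance an age-structured population one time step:
    fecundity rates feed the youngest class; survival rates shift classes up."""
    births = sum(f * n for f, n in zip(fecundity, pop))
    survivors = [s * n for s, n in zip(survival, pop[:-1])]
    return [births] + survivors

pop = [100.0, 50.0, 20.0]        # individuals at ages 0, 1, 2
fecundity = [0.0, 1.0, 0.5]      # offspring per individual, by age
survival = [0.6, 0.4]            # P(age 0 -> 1), P(age 1 -> 2)

next_pop = leslie_step(pop, fecundity, survival)
```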
Analytica Language Features
Local Indexes
Date and Time: Thursday, Dec. 13, 2007 at 10:00 - 11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
A local index is an index object created during the evaluation of an expression using either the Index..Do or MetaIndex..Do construction. Local indexes may exist only temporarily, being reclaimed
when they are no longer used, or they may live on after the evaluation of the expression has completed, as an index of the result. Some operations require the use of local indexes, or otherwise could
not be expressed.
In this talk, I'll introduce simple uses of local indexes, covering how they are declared using Index..Do, with several examples. We'll see how to access a local index using the A.I operator. I'll
discuss the distinctions between local indexes and local variables. I'll show how the name of a local index can be computed dynamically, and I'll briefly cover the IndexNames and IndexesOf functions.
The model created during this talk is here: Webinar_Local_Indexes.ana.
You can watch a recording of this webinar at: Local-Indexes.wmv (Requires Windows Media Player)
Handles and Meta-Inference
Date and Time: Thursday, Dec. 6, 2007 at 10:00 - 11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Meta-inference refers to computations that reason about your model itself, or that actually alter your model. For example, if you were to write an expression that counted how many variables are in
your model, you would be reasoning about your model. Other examples of meta inference include changing visual appearance of nodes to communicate some property, re-arranging nodes, finding objects
with given properties, or even creating a transformed model based on portion of your model's structure.
The ability to implement meta-inferential algorithms in Analytica has been greatly enhanced in Analytica 4.0. The key to implementation of meta-inference is the manipulation of Handles to objects
(formerly referred to as varTerms). This webinar will provide a very brief introduction to handles and using them from within expressions. I will assume you are pretty familiar with creating models and writing expressions in Analytica, but I will not assume that you have previously seen or used Handles. This topic is oriented towards more advanced Analytica users.
The model used/created during this webinar as at: Handle and MetaInference Webinar.ANA.
You can watch a recording of this webinar at: Handles.wmv (Requires Windows Media Player)
The Iterate Function
Date and Time: Thursday, Nov. 29, 2007 at 10:00 - 11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
With Iterate, you can create a recurrent loop around a large model, which can be useful for iterating until a convergence condition is reached, for example. Complex iterations, where many variables are updated at each iteration, require you to structure your model appropriately, bundling and unbundling values within the single iterative loop. With some work, Iterate can be used to simulate the functionality of Dynamic, and thus provides one option when a second Time-like index is needed (although not nearly as convenient as Dynamic).
In this session, we'll explore how Iterate can be used.
Here is the model file developed during the webinar: Iterate Demonstration.ANA
You can watch a recording of this webinar at: Iterate.wmv (Requires Windows Media Player)
Reference and Dereference Operators
Date and Time: Thursday, Nov. 15, 2007 at 10:00 - 11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Abstract: The reference operators make it possible to represent complex data structures like trees or non-rectangular arrays, bundle heterogeneous data into records, maintain arrays of local indexes,
and seize control of array abstraction in a variety of scenarios. Using a reference, an array can be made to look like an atomic element to array abstraction, so that arrays of differing
dimensionality can be bundled into a single array without an explosion of dimensions. The flexibilities afforded by references are generally for the advanced modeler or programmer, but once mastered,
they come in useful fairly often.
Here is the model used during the webinar: Reference and Dereference Operators.ana. Near the end of the webinar, I encountered a glitch that I was not able to resolve until after the webinar was
over. This has been fixed in the attached model. For an explanation of what was occurring, see: Analytica_User_Group/Reference_Webinar_Glitch.
You can watch a recording of this webinar at: Reference-And-Dereference.wmv (Requires Windows Media Player)
Writing User-Defined Functions
Date and Time: Thursday, Sept. 27, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
When you need a specialized function that is not already built into Analytica, never fear -- you can create your own User-Defined Function (UDF). Creating UDFs in Analytica is very easy. I'll
introduce this convenient capability, and demonstrate how UDFs can be organized into libraries and re-used in other models. I'll also review the libraries of functions that come with Analytica,
providing dozens of additional functions.
After this introduction to the basics of UDFs, I'll dive into an in-depth look at Function Parameter Qualifiers. There is a deep richness to function parameter qualifiers, mastery of which can be
used to great benefit. One of the main objectives for a UDF author, and certainly a hallmark of good modeling style, should be to ensure that the function fully array abstracts. Although this usually
comes for free with simple algorithms, it is sometimes necessary to worry about this explicitly. I will demonstrate how this objective can often be achieved through appropriate function parameter qualifiers.
Finally, I will cover how to write a custom distribution function, and how to ensure it works with Mid, Sample and Random.
This talk is appropriate for Analytica modelers from beginning through expert level. At least some experience building Analytica models and writing Analytica expressions is assumed.
The model created during this webinar, complete with the UDFs written during that webinar, can be downloaded here: Writing User Defined Functions.ana.
You can watch this webinar here: Writing-UDFs.wmv (Windows Media Player required)
Custom Distribution Functions
Date and Time: Thursday, 24 July 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Analytica comes with most of the commonly seen distributions built-in, and many additional distribution functions available in the standard libraries. However, in specific application areas, you may
encounter distribution types that aren't already provided, or you may wish to create a variation on an existing distribution based on a different set of parameters. In these cases, you can create
your own User-Defined Distribution Function (UDDF). Once you've created your function, you can utilize it within your model like you would any other distribution function.
User-defined distribution functions are really just instances of User-Defined Functions (UDFs) that behave in certain special ways. This webinar discusses the various functionalities that a
user-defined distribution function should exhibit and various related considerations. Most fundamentally, the defining feature of a UDDF is that it returns a median value when evaluated in Mid mode,
but a sample indexed by Run when evaluated from Sample mode. This contrasts with non-distribution functions whose behavior does not depend on the Mid/Sample evaluation mode. Custom distributions are
most often implemented in terms of existing distributions (which includes Inverse CDF methods for implementing distributions), so that this property is achieved automatically since the existing
distributions already have this property. But in less common cases, UDDFs may treat the two evaluation modes differently.
When you create a UDDF, you may also want to ensure that it works with Random() to generate a single random variate, and supports the Over parameter for generating independent distributions. You may
also want to create a companion function for computing the density (or probability for discrete distributions) at a point, which may be useful in a number of contexts including, for example, during
importance sampling. I'll show you how these features are obtained.
There are several techniques that are often used to implement distribution functions. The two most common, especially in Analytica, are the Inverse CDF technique and the transformation from existing
distributions method. I'll explain and show examples of both of these. The Inverse CDF is particularly convenient in that it supports all sampling methods (Median Latin Hypercube, Random Latin
Hypercube, and Monte Carlo).
A recording of this webinar can be viewed at Custom-Distribution-Functions.wmv. The model file created during the webinar is Custom Distribution Functions.ana.
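The Inverse CDF technique mentioned above can be sketched in Python: draw uniform variates and push them through the inverse CDF of the target distribution (exponential here, chosen for illustration; this is the general technique, not Analytica's sampler):

```python
import math
import random

def exponential_sample(rate, n, seed=42):
    """Draw n exponential variates via the inverse CDF: x = -ln(1 - u) / rate."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

sample = exponential_sample(rate=2.0, n=10_000)
mean = sum(sample) / len(sample)   # should land near 1/rate = 0.5
```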
Regular Expressions
Date and Time: Thursday, 9 July 2009, 10:00am - 11:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Analytica 4.2 exposes a new and powerful ability to utilize Perl-compatible regular expressions for text analysis within expressions. This feature has particular applicability for parsing applications when
importing data. Long known as the feature that makes Perl and Python popular as data file processing languages, that same power is now readily available within Analytica's FindInText, SplitText, and
TextReplace functions.
This talk will only touch on the regular expression language itself (information on which is readily available elsewhere), but instead focuses on the use of regular expressions from Analytica expressions, especially extracting text that matches subpatterns and finding repeated matches.
One relatively complex example that I plan to work through is the parsing of census population data from datafiles downloaded from the U.S. census web site. The task includes parsing highly variable
HTML, as well as multiple CSV files with formatting variations that occur from element to element. These variations, which are typical in many sources of data, demonstrate why the flexibility of
regular expressions can be extremely helpful when parsing data files.
Regular expressions themselves are extremely powerful, but when overused, can be very cryptic. So even though it is possible to get carried away with this power, it is good to know how to balance that power against readability.
This talk is appropriate for moderate to advanced level modelers.
A recording of this webinar can be watched at Regular-Expressions.wmv. If you are new to regular expressions, I've included slides on the regular expression patterns that I made use of in this PowerPoint show (these were not shown during the webinar). The model file developed during the webinar is Regular expressions.ana.
Using the Check Attribute to validate inputs and results
Date and Time: Thursday, 17 July 2008 10:00 Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The check attribute provides a way to validate inputs and computed results. When users of your model enter data, it can provide immediate feedback when they enter values that are out of range or inconsistent. When applied to computed results, it can catch inconsistencies, reducing error rates and the accidental introduction of errors later.
In this talk, I'll demonstrate how to define a check validation for a variable, and how to make the check attribute visible in the Object window. I'll demonstrate how the failed-check alert messages can be customized. And perhaps most interestingly, how the check can be used in edit tables for cell-by-cell validation, so that out-of-range inputs are flagged with a red background, and alert balloons pop up when out-of-range inputs are entered. Cell-by-cell validation works when certain restrictions on the check expression are followed, which I'll discuss.
A recording of this webinar can be viewed at Check-Attribute.wmv (Note: There is audio, but screen is black, for first 50 seconds). The model used during this webinar, with the check attributes
inserted, is at Check attribute -- car costs.ana.
The Performance Profiler
Date and Time: October 9, 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Requires Analytica Enterprise
When you have a model that takes a long time to compute, thrashes in virtual memory, or uses up available memory, the Performance Profiler can tell you where your model is spending its time and how
much memory is being consumed by each variable to cache results. It is not uncommon to find that even in a very large model, a small number (e.g., 2 to 5) of variables account for the lion's share of
time and memory. With this knowledge, you can focus your attention on optimizing the definitions of those few variables. On several occasions I've achieved more than 100-fold speed-ups in computation time on large models using this technique.
The Performance Profiler requires Analytica Enterprise or Optimizer. I'll demonstrate how to use the profiler, along with some basic discussion of what it does and does not measure. One neat aspect of the profiler is that you can actually activate it after the fact. In other words, even though you haven't added profiling to your model, if you happen to notice something taking a long time, you can add it in to find out where the time was spent.
Using the Profiler is pretty simple, so I expect this session will be somewhat shorter than usual. The content will be oriented primarily to people who are unfamiliar with the profiler, although I will also try to provide some behind-the-scenes details and can answer questions about it.
You can watch a recording of this webinar at Performance-Profiler.wmv. The model file containing the first few examples from the webinar can be downloaded from Simple Performance Profiler Example.ana
Organizing Models
Modules and Libraries
Date and Time: 10 Dec 2009 10:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Modules form the basic organizational principle of an Analytica model, allowing models to be structured hierarchically, keeping things simple at every level even in very large complex models. You can
use linked modules to store your model across multiple files. This capability enables reuse of libraries and model logic across different models, and allows you to divide your model into separate
pieces so that different people can work concurrently on different pieces of the model.
In this talk, I will review many aspects of modules and libraries. We'll see how to use linked modules effectively. I'll cover the distinctions between Modules, Libraries, Models and Forms. I'll demonstrate various considerations when adding modules to existing models -- such as whether you want to import system variables or merge (update) existing objects, and some variations on what is possible there. We'll see how to change modules (or libraries) from being embedded to linked, or vice versa, and how to change the file location for a linked module. When distributing a model
consisting of multiple module files, I'll go over directory structure considerations (the relative placement of module files), and also demonstrate how you can store a copy of your model with
everything embedded in a single file for easy distribution.
I'll also discuss definition hiding and browse-only locking. By locking individual modules, you can create libraries with hidden and unchangeable logic that can be used in the context of other people's models, keeping your algorithms hidden. Or, you can distribute individual modules that are locked as browse-only, even in the context of a larger model where the remainder of the model is not.
I'll talk about using linked modules in the context of a source control system, which is often of interest for projects where multiple people are modifying the same model. I'll also reveal an
esoteric feature, the Sys_PreLoadScript attribute, and how this can be used to implement your own licensing and protection of intellectual property.
This webinar is appropriate for all levels of Analytica model builders.
You can watch a recording of this webinar at Linked-Modules.wmv. The starting model used in the webinar can be downloaded from Loan_policy_selection_start.ana, and then you can follow along to
introduce and adjust modules as depicted in the recording if you like.
User interface (UI) basics
Learn how to easily customize and consolidate the key parts of your models that you want to share with clients and stakeholders.
In this short 3-minute video you will learn how to size, group, align and evenly space various inputs and outputs. You will also gain knowledge on how to add buttons, checkboxes, sliders, borders and
gridlines. These features will dictate how users interact with your model in different ways. Watch this tutorial to better understand how to clean up and organize your models.
Uncertainty & Probability Topics
Gentle Introduction to Modeling Uncertainty: Webinar Series
Date and Time:
Session 1: Thursday, 29 Apr 2010 10:00am Pacific Daylight Time
Session 2: Thursday, 6 May 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Are you someone who has never built a model containing explicit representions of uncertainty? Did that Statistics 1A class you took a long time ago instill a belief that probability distributions are
irrelevant to the type of stuff you work on? Are you afraid to start representing uncertainty explicitly because you just don't have the statistics background and don't know much about probability and
probability distributions?
If any of these sentiments resonate with you, then this webinar (series) may be for you.
These are interactive webinars. Be prepared to answer some questions, and have Analytica fired up in the background. You are going to use it to compute the answers to a couple of exercises! Even if you
are watching the recording, be ready to complete the exercises.
This webinar series is most appropriate for:
• Beginning Analytica model builders.
• Users of models that present results with uncertainty.
• Accomplished spreadsheet or Analytica model builders who have not previously incorporated uncertainty.
• People looking to learn the basics of probability for the representation of uncertainty.
Session 1: Uncertainty and Probability
The first session discusses different sources and types of uncertainty, probability distributions and how they can be used to represent uncertainty, different interpretations of probabilities and probability distributions, and reasons why it is valuable to represent uncertainty explicitly in your quantitative models.
A recording of this webinar can be viewed at: Modeling-Uncertainty1.wmv. A copy of the model created by the presenter during the webinar (the scholarship example) can be downloaded from Modeling
uncertainty 1 - princeton scholarship.ana. Power point slides can be downloaded from: Modeling Uncertainty 1.ppt.
Session 2: Probability Distributions
How do you characterize the amount of uncertainty you have regarding a real-valued quantity? This second session explores this question, and introduces the concepts of average deviation (aka absolute
deviation), variance and standard deviation. It then introduces the concept of a probability distribution and the Normal and LogNormal distributions. We examine the expected value of including
uncertainty and do a few modeling exercises that demonstrate how it can be highly misleading, even expensive, to ignore uncertainty.
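The measures introduced in this session are quick to illustrate numerically. Here is a small worked example in Python (using the population convention, dividing by n; some conventions divide by n-1 instead):

```python
import math

data = [4, 8, 6, 5, 3, 7, 9, 6]

mean = sum(data) / len(data)
# Average (absolute) deviation: the mean distance of points from the mean.
avg_dev = sum(abs(x - mean) for x in data) / len(data)
# Variance: the mean squared distance; standard deviation is its square
# root, which puts the spread back in the original units.
variance = sum((x - mean) ** 2 for x in data) / len(data)
std_dev = math.sqrt(variance)
```

For this data set the mean is 6, the average deviation 1.5, and the variance 3.5, so the standard deviation is about 1.87.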
A recording of this webinar can be viewed at Prob-Distributions.wmv. The model built during the webinar can be downloaded from Probability Distributions Webinar.ana. PowerPoint slides are at Modeling Uncertainty 2.ppt.
Session 3: Monte Carlo
Date and Time: Thursday, 13 May 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
In this third webinar in the "Gentle Introduction to Modeling Uncertainty" series, we will see how a probability distribution can be represented as a set of representative samples, and how this leads
to a very general method for propagating uncertainty to computed results. This method is known as Monte Carlo simulation.
Analytica represents uncertainty by storing a representative sample, so we'll be learning about how Analytica actually carries out uncertainty analysis. We explore how all the uncertainty result
views in Analytica are created from the sample, and learn various 'tricks' for nice histograms for PDF views in various situations.
We'll learn about the Run index, and how it places samples across different variables in correspondence. We'll learn about the generality of the Monte Carlo method for propagating uncertainty, and also learn what Latin Hypercube sampling is.
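The sample-based propagation idea can be sketched in a few lines of Python (the distributions and quantities below are illustrative, not taken from the webinar model):

```python
import math
import random

random.seed(1)
N = 20000  # sample size -- the length of Analytica's Run index

# Each uncertain input is represented by N samples, held in correspondence:
price = [random.lognormvariate(math.log(10), 0.2) for _ in range(N)]
units = [random.normalvariate(1000, 150) for _ in range(N)]

# Propagating uncertainty through the model is just element-wise
# arithmetic over the samples:
revenue = [p * u for p, u in zip(price, units)]

mean_revenue = sum(revenue) / N
p05 = sorted(revenue)[int(0.05 * N)]  # rough 5th-percentile estimate
```

Any uncertainty view -- mean, fractiles, histogram, CDF -- is then just a statistic computed over the stored sample.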
A recording of this webinar can be viewed at Monte-Carlo.wmv. The power point slides are at: Monte Carlo Simulation.ppt. Example models created during the webinar include: Mining Example.ana,
Explicit samples.ana and Representing Uncertainty 3 - Misc.ana (product of normals and comparison between Latin Hypercube and Monte Carlo precision).
Session 4: Measures of Risk and Utility
Date and Time: Thursday, 20 May 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This fourth webinar in the "Gentle Introduction to Modeling Uncertainty" series will explore concepts and quantitative measures of Risk and Utility. We'll discuss various conceptions and types of
risk, and explore topics relevant to model-building that include utility and loss functions, expected value, expected utility, risk neutrality, risk aversion, fractiles and Value-at-risk (VaR).
A recording of this webinar can be viewed at Risk-And-Utility.wmv. The power point slides can be viewed at Measures of Risk and Utility.ppt. There is an interesting modeling exercise and exploration
of Expected Shortfall near the end of the power point slides that was not covered during the webinar. The worked out model examples from the webinar, along with a solution to the final example not
covered, can be downloaded from Measures of Risk.ana.
Session 5: Risk Analysis for Portfolios
Date and Time: Thursday, 3 June 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Committing to a single project or investing in a single asset entails a certain amount of risk along with the potential payoff. If you are able to proceed with multiple projects or invest in multiple
assets, the degree of risk may be reduced substantially with small impact on potential return. In this fifth webinar in the "Gentle Introduction to Modeling Uncertainty" series, we'll look at
modeling portfolios, such as portfolios of investments or portfolios of research and development projects, and the impact this has on risk and return. Portfolio analysis is the basis for practices
such as diversification and hedging, and is a key element of risk management.
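The core diversification effect is easy to demonstrate with a toy Monte Carlo sketch in Python (the payoff numbers are invented for illustration):

```python
import math
import random

random.seed(7)
N = 50000  # Monte Carlo sample size

def project_payoff():
    # One risky project: 50% chance of a 200 payoff, 50% chance of losing 50.
    return 200.0 if random.random() < 0.5 else -50.0

# A single project vs. an equal-weight portfolio of 10 independent projects.
single = [project_payoff() for _ in range(N)]
portfolio = [sum(project_payoff() for _ in range(10)) / 10 for _ in range(N)]

def sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Both have the same expected payoff (75), but spreading the bet across
# 10 independent projects shrinks the standard deviation by about sqrt(10).
```

With correlated projects the reduction is smaller, which is why portfolio models need to represent dependence between assets explicitly.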
As with other topics in this webinar series, the presentation and discussion is designed for people who are new to the use of these concepts in a model building context.
You can watch a recording of this webinar at: Portfolio-Risk.wmv. The Power Point slides are at Risk Analysis for Portfolios.ppt. These include some exercises at the end (for homework!) not covered
during the webinar, including continuous portfolio allocations. The model developed during the webinar, augmented to include answers to additional exercises is at Risk Analysis for Portfolios.ana.
Session 6: Common Parametric Distributions
Date and Time: Thursday, 10 June 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
During the first five sessions of the Gentle Introduction to Modeling Uncertainty webinar series, you have been introduced to three distribution functions: Bernoulli, Normal and LogNormal. In this
webinar, we're going to increase this repertoire and learn about other common parametric distributions. I'll discuss situations where specific distributions are particularly convenient or natural for
expressing uncertainty about certain types of quantities, and other reasons for why you might prefer one particular distribution type over another. We'll also examine the distinction between discrete
and continuous distributions.
As with other topics in this webinar series, the presentation and discussion is designed for people who are new to the use of these concepts in a model building context.
A recording of the webinar can be viewed at Parametric-Distributions.wmv. The power point slides are at Common Parametric Distributions.ppt, and the Analytica model containing the exercises and
solutions to exercises not covered during the live recording is at Common-Parametric-Distributions.ana.
Session 7: Expert Assessment of Uncertainty
Date and Time: Thursday, 24 June 2010 10:00am Pacific Daylight Time
Presenter: Max Henrion, Lumina Decision Systems
For most uncertainty analysis, uncertainties about many key quantities must be assessed by expert judgment. There has been a lot of empirical research on human abilities to express their knowledge
and uncertainty in the form of probability distributions. It shows that we are liable to a variety of biases, such as overconfidence and motivational biases. I'll give an introduction to practical
methods developed by decision analysts to avoid or minimize these biases. I'll give some examples from recent work in expert elicitation for the Department of Energy on the future performance of
renewable energy technologies. I'll also discuss ways to aggregate judgments from different experts.
The session is appropriate for people who are new to this area. This probably includes just about everybody!
This session will draw from Chapters 6 and 7 of "Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis" by M Granger Morgan & Max Henrion, Cambridge University
Press, 1992
Note: There is no recording of this webinar.
Session 8: Hypothesis Testing
Date and Time: Thursday, 15 July 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Hypothesis testing from classical statistics addresses the question of whether the apparent support for a given hypothesis is statistically significant. In the field of classical statistics, this is
perhaps the most heavily emphasized application of probability concepts, and the methodology is used (if not required by editors) when publishing results for research studies in nearly every field of
empirical study.
To illustrate the basic idea, suppose a journalist selects 10 Americans at random and asks whether they support a moratorium on deep sea drilling. Seven of the 10 respond "yes", so the next day he publishes his article "The Majority of Americans Support a Moratorium on Deep Sea Drilling". His sample is certainly consistent with this hypothesis, but his conclusion is not credible because
with such a small sample, this majority could have easily been a random quirk (sampling error). Hence we would say that the conclusion is not "statistically significant". But how big does the sample
have to be to achieve statistical significance? Where should we draw the line when determining whether the data's support is statistically significant? These are the types of questions addressed by
this area of statistics.
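For this particular example the p-value can be computed exactly from the binomial distribution. A short Python sketch:

```python
from math import comb

# Under the null hypothesis that true support is exactly 50%, what is the
# probability of seeing 7 or more "yes" answers out of 10 purely by chance?
n, k = 10, 7
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
# p_value = 176/1024, about 0.172 -- far above the usual 0.05 threshold,
# so the headline's conclusion is not statistically significant.
```

In other words, a fair-coin survey of 10 people produces a 7-or-better majority about one time in six, so the journalist's result is entirely consistent with no majority at all.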
Hypothesis testing is a central topic in every introductory Statistics 1A course, often comprising more than half of the total course syllabus. But most introductory courses emphasize a cookbook approach at the expense of conceptual understanding, apparently in the hope of giving people in non-statistical fields step-by-step recipes to follow when they need to publish results in their own fields. As a result, the methodology is possibly misused more often than it is applied correctly, and published results are commonly misinterpreted.
In this seminar, I intend to emphasize a conceptual understanding of the statistical hypothesis methodology rather than the more traditional textbook methodology. After this webinar, when you read
"our hypothesis was confirmed by the data at a p-value=0.02 level", or "the hypothesis was rejected with a p-value of 0.18", you should be able to precisely relay what these statements really do or
do not imply. You should understand what a p-value and confidence level really denote -- they do not represent, as many people think, the probability that the hypothesis is true.
We will also, of course, examine how we can carry out computations of significance levels (i.e., p-values) within Analytica. Statistics texts are filled with numerous "standard" hypothesis tests
(e.g., t-tests, etc), each based on a specific set of assumptions. In this webinar, we'll dive into this in a more general way, where we get to start with our own set of arbitrary assumptions,
leveraging the power of Monte Carlo for computation. This means there are no recipes to remember, you can compute significance levels for any statistical model, even if the same assumptions don't
appear in your statistics texts, and most importantly, you'll be left with a more general understanding of the concepts.
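The general Monte Carlo recipe for the same survey example might look like this in Python (a sketch, not the webinar's Analytica implementation):

```python
import random

random.seed(42)
TRIALS = 100000

def null_survey(n=10, p=0.5):
    # Simulate one survey under the null hypothesis of 50% support.
    return sum(random.random() < p for _ in range(n))

# The Monte Carlo p-value is the fraction of simulated null-hypothesis
# surveys at least as extreme as the observed 7-of-10 result.  The same
# recipe works for any statistical model you can simulate, whether or
# not a textbook test exists for it.
p_value = sum(null_survey() >= 7 for _ in range(TRIALS)) / TRIALS
# close to the exact binomial answer of about 0.172
```

To change the assumptions, you change only the simulation inside `null_survey`; the p-value computation stays the same.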
As a prerequisite, this webinar will assume little more than the introductory background from the earlier webinars in this "Gentle Introduction to Modeling Uncertainty" series. It is appropriate for
people who have never taken a Statistics 1A course, or for the majority of people who have taken that introduction to Statistics but could use a refresher.
You can watch a recording of this webinar at Hypothesis-Testing.wmv. To follow along with the webinar, you'll want to also download the Analytica model file Hypothesis Test S&P Volatility.ana before starting. You'll use the data in that model for the various exercises during the webinar.
Solutions to exercises are saved in this version of the model (created during the webinar): Hypothesis Test S&P Volatility solution.ana. I also inserted a solution to the Parkinson's data test that
wasn't covered in the webinar but is contained in the Power Point slides.
Artificial Intelligence and Business Applications (presented in Spanish)
Date and Time: Thursday, 15 April 2021, 7pm Peru time
Presenter: Professor Jorge Muro Arbulú
Abstract: Artificial intelligence and its applications in business, with examples developed or to be developed in Operations, Projects, and Finance.
Watch: Inteligencia Artificial en los Negocios.mp4 (2 hours).
Alternate: On YouTube (2 hours)
Download: Power Point slides
The value of knowing how little you know
Date and Time: Thursday, 22 April 2021, 11am PDT
Presenter: Max Henrion, Lumina Decision Systems
Abstract: An Analytica demo showing how to estimate the Expected Value of Including Uncertainty (EVIU).
Watch: The Value of Knowing How Little You Know.mp4 (57 minutes)
Alternate: On YouTube (57 minutes)
Sensitivity, uncertainty, and Monte Carlo simulation tips and tricks
Date and Time: Friday, 18 December 2020, 10:00am Pacific Daylight Time
Presenter: Max Henrion, Ph.D., Lumina Decision Systems
Sensitivity, uncertainty, and Monte Carlo simulation
• Approaches
• Influence diagram for risk of offshore wind power
• Risk analysis of offshore wind power
• U1: Parametric and importance analysis with TXC model
• Importance Analysis
• Why express uncertainty as probability distributions?
• Assessing points on a triangular distribution
• U2: The Plane Catching Decision
• Create a chance variable and select a distribution
• U3: Analyze the Plane Catching Decision
• Loss function ignoring uncertainty
• U4: Tornado chart range sensitivity
• Monte Carlo Simulation
• U5: Explore Uncertainty Setup
• Probability distribution
You can download the PowerPoint slides used for the webinar: Uncertainty-MC.pdf.
You can watch a recording of the webinar from Tips-Tricks_MC_Sensitivity.mp4.
To request access to the models, email info@lumina.com.
Expecting the Unexpected: Coping with surprises in Probabilistic and Scenario Forecasting
Date and Time: Thursday, 7 April 2011, 10:00am Pacific Daylight Time
Presenter: Max Henrion, Ph.D., Lumina Decision Systems
The notion of "Black Swans", reinforced by the financial debacles of 2008, confirms decades of research on expert judgment and centuries of anecdotes about the perils of prediction: Our forecasts are
consistently overconfident and we are too often surprised. Henrion will explain why forecasters, risk analysts, and R&D portfolio managers should embrace the inevitable uncertainties using scenarios
or probability distributions. He will describe a range of practical methods including:
• The value of knowing how little you know — why and when to treat uncertainty explicitly
• Elicitation of expert judgment and how to minimize cognitive biases
• Using Monte Carlo for probabilistic forecasting and risk analysis.
• Calibrating probabilistic forecasts against the historical distributions of forecast errors and surprises.
• Brainstorming to identify "Gray Swans" — surprises that are foreseeable, but ignored in conventional forecasting.
Participants will come away with a deeper understanding of when and how to apply these methods.
You can download the PowerPoint slides used for the webinar: Expecting the Unexpected.pptx (note: If your browser changes this into *.zip when downloading, save it and rename to "Expecting the
Unexpected.pptx" before you try to open it). You can watch a recording of the webinar from ExpectingTheUnexpected.wmv.
Correlated and Multivariate Distributions
Date and Time: Thursday, March 13, 2008 10:00 Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This talk will discuss various techniques within Analytica for defining probability distributions with specified marginal distributions, and also being correlated with other uncertain variables.
Techniques include the use of conditional and hierarchical distributions, multivariate distributions, and Iman-Conover rank-correlated distributions.
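One of these techniques, rank-correlation induction in the spirit of Iman-Conover, can be sketched in plain Python. This is an illustrative reordering sketch, not Analytica's actual implementation, and the induced rank correlation lands near (not exactly at) the target:

```python
import math
import random

random.seed(3)
N = 5000
target_rho = 0.8  # desired (approximate) rank correlation

# Independently drawn samples with whatever marginals we want:
xa = [random.lognormvariate(0, 0.5) for _ in range(N)]
xb = [random.uniform(0, 10) for _ in range(N)]

# Correlated standard-normal "scores" via a 2x2 Cholesky factor:
za = [random.gauss(0, 1) for _ in range(N)]
zb = [target_rho * a + math.sqrt(1 - target_rho ** 2) * random.gauss(0, 1)
      for a in za]

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

# Reorder each marginal sample so its ranks follow the correlated scores.
# The marginals are preserved exactly; only the pairing changes.
sa, sb = sorted(xa), sorted(xb)
ya = [sa[r] for r in ranks(za)]
yb = [sb[r] for r in ranks(zb)]

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / math.sqrt(sum((a - mu) ** 2 for a in u) *
                           sum((b - mv) ** 2 for b in v))

rank_corr = pearson(ranks(ya), ranks(yb))  # close to target_rho
```

Because only the ordering of the samples changes, each variable keeps exactly the marginal distribution it was given, which is the key property of rank-correlation methods.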
The model created during session talk is Correlated distributions.ana. You can watch a recording of the webinar from Correlated-Distributions.wmv.
Assessment of Probability Distributions
Date and Time: March 6, 2008 10:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
When building a quantitative model, we usually need to come up with estimates for many of the parameters and input variables that we use in the model. Because these are estimates, it is a good idea to encode them as probability distributions, so that our degree of subjective uncertainty is explicit in the model. The process of encoding a distribution to reflect the level of knowledge that you (or
the experts you work with) have about the true value of the quantity is referred to as probability (or uncertainty) assessment or probability elicitation.
This webinar will be a highly interactive one, where all attendees are expected to participate in a series of uncertainty assessments as we explore the effects of cognitive biases (such as
over-confidence and anchoring), understand what it means to be well-calibrated, and utilize scoring metrics to measure your own degree of calibration. These exercises can help you improve the quality
of your distribution assessments, and serve as tools that can help you when eliciting estimates of uncertainty from other domain experts.
The Analytica model Probability assessment.ana contains a game of sorts that takes you through several probability assessments and scores your responses. Participants of the webinar played this game by running this model; if you are going to watch the webinar, you will want to do the same. You may want to wait until the appropriate point in the webinar (after the preliminary material has been covered)
before starting. You can watch the webinar recording here: Probability-Assessment.wmv. The power point slides from the talk are here: Assessment_of_distributions.ppt.
Statistical Functions
Date and Time: Thursday, 21 Aug 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This topic was presented in Aug 2007, but not recorded at that time.
A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica. I'll describe several built-in statistical functions such as Mean, SDeviation,
GetFract, Pdf, Cdf, and Covariance. I'll demonstrate how all built-in statistical functions can be applied to historical data sets over an arbitrary index, as well as to uncertain samples (the Run
index). I'll discuss how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for example), and how various statistical functions (e.g.,
Frequency, GetFract, Pdf, Cdf, etc) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such things as inferring sample covariance or correlation
matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.
In addition, all built-in statistical functions can compute weighted statistics, where each point is assigned a different weight. I'll briefly touch on this feature as a segue into next week's topic,
Importance Sampling.
This talk can be viewed at Statistical-Functions.wmv. The model built during this talk is available for download at Intro to Statistical Functions.ana.
Rank Correlation
Date and Time: Thursday, 25 March 2010 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Many measures for quantifying the degree of statistical dependence between quantities are used in statistics. The two most commonly used are Pearson's linear correlation and Spearman's rank correlation, computed in Analytica by the functions Correlation and RankCorrel, respectively. Pearson's correlation, which is what people usually mean when they just use the term "correlation", is a measure of how linear the relationship between two variables is. Spearman's rank correlation is a measure of how monotonic the relationship between two variables is.
This talk provides an introduction to the concept of rank correlation, how it is distinguished from standard Pearson correlation, and what it measures. There are several notable and rather diverse
uses of RankCorrel, which include these (and probably many others):
• A quantitative measure of the degree to which two variables are monotonically related. (E.g., the degree to which an increase in one leads to an increase, or decrease, in the other).
• Testing (from measurements) whether two factors are statistically dependent
• Importance analysis: Determining how much the uncertainty of an input contributes to the uncertainty of an output.
• Sampling from joint distributions with arbitrary marginals and specified rank-correlations (Correlate_With and Correlate_Dists)
I will focus mostly on the first two of these uses in this talk (previous webinars on Sensitivity Analysis have covered the Importance Analysis usage to some extent, and a previous webinar on Correlated and Multivariate Distributions has covered the last point).
Standard hypothesis tests exist for determining whether two factors are statistically dependent by testing the hypothesis that their rank correlation is non-zero (null hypothesis that it is zero).
When the P-value of these tests is less than 5% (or 1%), you would be justified in concluding that the two variables are statistically dependent. I will demonstrate how to compute this P-value.
Then I will introduce a new analysis of rank correlation that I came up with, which I think is novel and potentially pretty useful, somewhat related to the classical hypothesis tests just mentioned.
Suppose you gather a small sample of data on two variables in a study and you want to determine how strong the monotonicity between the two variables is. You can compute the sample rank correlation
for the data set, but this is only an estimate since you have a small sample size and thus sampling error may throw off this estimate. So suppose we imagine there is some "true" underlying rank
correlation between the variables (this in itself is a new concept, which I will make precise). From your data set, you have some knowledge about the true value of this underlying rank correlation --
the larger your sample size, the more precise your knowledge is. The new technique I describe here computes a (posterior) distribution over the true underlying rank correlation, from which you can
express your rank correlation result as a range (such as rc=0.6±0.2), and answer questions such as what is the probability that the underlying rank correlation is between -0.1 and 0.1, P(-0.1 < rc <
0.1), or P(rc>0), etc. Although this is essentially a posterior distribution, there is no prior distribution involved or needed to compute it, so it is simply a function of the measured data and of
the sample size. It really is a probability distribution on the underlying rank correlation, not just a P-value, making it much more useful.
This new analysis is also useful for quantifying the probability that two factors are independent in a manner not possible with the classical tests. The classical P-value of the aforementioned tests
measure the probability of a Type II error for the hypothesis that variables are dependent. These tests do not provide the probability of a Type I error, which would be the criteria for concluding
that a claim of statistical independence is statistically significant. This new measure, however, can justifiably be used for quantifying a claim of statistical independence since it allows P(-c<rc<c)
to be computed for any c.
I will demonstrate how this new analysis of rank correlation works and is encoded within Analytica, and show how to read off the interesting results.
A recording of this webinar can be viewed at: Rank-Correlation.wmv. The model files created during the talk are available at: Rank-Correlation-Examples.ana and Rank-Correlation-Analysis.ana.
Statistical Functions in Analytica
Date and Time: Thursday, Aug 16, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex
examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica 4.0. In Analytica 4.0, all built-in statistical functions can now be applied to
historical data sets over an arbitrary index, as well as to uncertain samples (the Run index), eliminating the need for separate function libraries. I will demonstrate this use, as well as several
new statistical functions, e.g., Pdf, Cdf, Covariance. I will explain how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for
example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.
In addition, all statistical functions in Analytica 4.0 can compute weighted statistics, where each point is assigned a different weight. I'll cover the basics of sample weighting, and demonstrate
some simple examples of using this for computing a Bayesian posterior and for importance sampling from an extreme distribution.
The Analytica model file that had resulted by the end of the presentation can be downloaded here: User Group Webinar - Statistical Functions.ANA.
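As a rough analogy of what sample weighting does, here is a hypothetical Python sketch of weighted mean and variance. It uses population-style normalization; Analytica's exact formulas may differ, and the numbers are made up.

```python
def weighted_mean(xs, ws):
    # each point contributes in proportion to its weight
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

def weighted_variance(xs, ws):
    m = weighted_mean(xs, ws)
    return sum(w * (x - m) ** 2 for x, w in zip(xs, ws)) / sum(ws)

xs = [1.0, 2.0, 3.0]
ws = [1.0, 1.0, 2.0]           # the third point counts double
m = weighted_mean(xs, ws)      # (1 + 2 + 6) / 4 = 2.25
v = weighted_variance(xs, ws)  # 0.6875
```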
The Large Sample Library
Date and Time: Thursday, 18 Feb 2010 10:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The Large Sample Library is an Analytica library that lets you run a Monte Carlo simulation for a large model or a large sample size that might otherwise exhaust computer memory, including virtual
memory. It breaks up a large sample into a series of batch samples, each small enough to run in memory. For selected variables, known as the Large Sample Variables or LSVs, it accumulates the batches
into a large sample. You can then view the probability distributions for each LSV using the standard methods — confidence bands, PDF, CDF, etc. — with the full precision of the large sample.
Memory is saved by not storing results for non-LSVs.
This presentation introduces this library and how to use it.
You can watch a recording of this webinar at Large-Sample-Library.wmv. The Large Sample library can be downloaded for use in your own models from the Large Sample Library: User Guide page. The two
example models used during this webinar were: Enterprise model3.ana and Simple example for Large Sample Library.ana.
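The library itself is written in Analytica; purely as an illustration of the batching idea, the hypothetical Python sketch below evaluates a stand-in "model" in small batches and accumulates only the selected output, so memory for intermediate results is bounded by the batch size. The distribution and parameters are made up.

```python
import random

def run_batch(batch_size, rng):
    # stand-in for evaluating the model on one batch of Monte Carlo samples;
    # only the selected output (the "LSV") is kept, intermediates are dropped
    return [rng.gauss(100.0, 15.0) for _ in range(batch_size)]

def large_sample(total, batch_size, seed=7):
    rng = random.Random(seed)
    lsv = []
    done = 0
    while done < total:
        n = min(batch_size, total - done)
        lsv.extend(run_batch(n, rng))   # accumulate batches into one sample
        done += n
    return lsv

samples = large_sample(total=10_000, batch_size=500)
mean = sum(samples) / len(samples)      # full-precision statistics at the end
```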
Sensitivity Analysis Topics
Tornado Charts
Time and Date: Thursday, 20 Mar 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
A tornado chart depicts the result of a local sensitivity analysis, showing how much a computed result would change if each input were varied one input at a time, with all other inputs held to their
baseline value. The result is usually plotted with horizontal bars, sorted with larger bars on top, resulting in a graph resembling the shape of a tornado, hence the name. There are numerous variations on tornado charts, resulting from different ways of varying the inputs, and in some cases, different metrics graphed.
This talk will walk through the steps of setting up a Tornado chart, and explore different variations of varying inputs. We'll also explore some more complex issues that can arise when some inputs
are arrays.
The model used during this talk is here: Tornado Charts.ana (the stuff for the talk was in the Tornado Analysis module). You can watch a recording of this webinar from Tornado-Charts.wmv.
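As a sketch of the underlying computation (with a made-up toy profit model, not one from the webinar), a one-at-a-time sensitivity in Python looks like:

```python
def model(inputs):
    # hypothetical toy model: profit = price * volume - cost
    return inputs["price"] * inputs["volume"] - inputs["cost"]

base = {"price": 10.0, "volume": 100.0, "cost": 400.0}
ranges = {"price": (8.0, 12.0), "volume": (80.0, 120.0), "cost": (300.0, 500.0)}

def tornado(model, base, ranges):
    bars = []
    for name, (lo, hi) in ranges.items():
        scen = dict(base)            # all other inputs held at baseline
        scen[name] = lo
        out_lo = model(scen)
        scen[name] = hi
        out_hi = model(scen)
        bars.append((name, out_lo, out_hi, abs(out_hi - out_lo)))
    bars.sort(key=lambda b: -b[3])   # largest swing on top, hence the tornado
    return bars

bars = tornado(model, base, ranges)
```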
Advanced Tornado Charts -- when inputs are Array-Valued
Date and Time: Thursday, April 17, 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The webinar of 20-Mar-2008 (Tornado-Charts.wmv, see webinar archives) went through the fundamentals of setting up a local sensitivity analysis and plotting the results in the form of a tornado chart.
That webinar also discussed the many variations of tornado analyses (or more generally, local sensitivity analyses) that are possible.
This talk builds on those foundations by going a step further and addressing tornado analyses when some of the input variables are array-valued. The presence of array-valued inputs introduces many
additional possible variations of analyses, as well as many modeling complications. For example, a local sensitivity analysis varies one input at a time, but that could mean you vary each input
variable (as a whole) at a time, or it could mean that you vary each cell of each input array individually. Either is possible, each resulting in a different analysis. Some of these variations
compute the correct result automatically through the magic of array abstraction, once you've set up the basic tornado analysis that we covered in the first talk, while others require quite a bit of additional modeling effort. However, even the ones that produce the correct result can often be made more efficient, particularly when the indexes of each input variable are different across input variables.
When we do opt to vary input arrays one cell at a time, the display of the results may be dramatically affected. Although we can keep the results in an array form, the customary tornado chart requires us to flatten the multi-D arrays and label each bar on the chart with a cell coordinate.
A recording of this webinar can be viewed at Tornados-With-Arrays.wmv. This webinar made use of the following models: Sales Effectiveness Model with tornado.ana, Biotech R&D Portfolio with
Tornado.ana, Sensitivity Analysis Library.ana, and Sensitivity Functions Examples.ana. See The Sensitivity Analysis Library for more information on how to use Sensitivity Analysis Library.ana in your
own models.
Financial Analysis
Internal Rate of Return (IRR) and Modified Internal Rate of Return (MIRR)
Date and Time: 18 Dec 2008, 10:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This is Part 3 of a multi-part webinar series where we have been covering the modeling and evaluation of cash flows over time in an interactive exercise-based webinar format, where concepts are
introduced in the form of modeling exercises, and participants are asked to complete the exercises in Analytica during the webinar. Part 3 covers Internal Rate of Return (IRR) and Modified Internal
Rate of Return (MIRR), and includes seven modeling exercises.
To speed the presentation up, I am providing the exercises in advance: NPV_and_IRR.ppt. I urge you to take a shot at completing them before the webinar begins, and we'll advance through the exercises
more rapidly so as to complete the topic material within the hour. By attempting the exercises in advance, you'll have a good opportunity to compare your solutions to mine, and to ask questions about
things you got stuck on.
A dollar received today is not worth the same as a dollar received next year. Taking this time-value of money (or more generally, time-value of utility) into account is very important when comparing
cash flows over time that result from long-term capital budgeting decisions. Net Present Value (NPV) and Internal Rate of Return (IRR) are the two most commonly used metrics examining the effective
value of an investment's cash flow over time. Both concepts are pervasive in decision-analytic models.
This webinar will be highly interactive. Fire up an instance of Analytica as you join in. As I introduce each concept, I'll provide you with cash flow scenarios, and give you a chance to compute the
result yourself using Analytica. This talk is intended for people who are not already well-versed in NPV and IRR, or for people who already have a good background with those concepts but are new to
Analytica and thus can learn from the interactive practice of addressing these exercises during the talk.
See also the materials from Parts 1 and 2 (Net Present Value, 20 Nov 2008 and 4 Dec 2008) elsewhere on this page. This session begins with the model Cash Flow Metrics 2.ana, and ends with Cash Flow
Metrics 3.ana. You can watch a recording of this webinar at IRR.wmv.
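IRR is the discount rate at which NPV is zero. The webinar solves these exercises in Analytica; as a self-contained illustration of the definition, the Python sketch below finds IRR by bisection (assuming a single sign change of NPV over the search interval, with made-up cash flows):

```python
def npv(rate, cash_flows):
    # cash_flows[0] occurs today; cash_flows[t] at the end of period t
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    # bisection root-finding; assumes exactly one sign change on [lo, hi]
    f_lo = npv(lo, cash_flows)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        f_mid = npv(mid, cash_flows)
        if f_lo * f_mid <= 0:
            hi = mid                 # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid    # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# invest 100 today, receive 60 at the end of each of the next two years
r = irr([-100.0, 60.0, 60.0])
```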
Bond Portfolio Analysis
Date and Time: 11 Dec 2008, 10:00am Pacific Standard Time
Presenter: Rob Brown, Incite! Decision Technologies
I demonstrate how to value a bond portfolio in which bonds are bought and sold with uncertain frequency. The demonstration shows how Intelligent Arrays and related functions can greatly simplify
calculations of multiple dimensions that would typically require multiple interconnected sheets in a spreadsheet or nested do-loops in a procedural language.
You can watch a recording of this webinar at Bond-Portfolio-Analysis.wmv. The model underlying the presentation is Bond Portfolio Valuation.ana, and the power point slides are at Bond Portfolio
Net Present Value (NPV)
Date and Time: Part I : Thursday, 20 Nov 2008, 10:00am Pacific Standard Time
Part II : Thursday, 4 Dec 2008, 10:00am Pacific Standard Time
(Parts 1 & 2 cover NPV -- part 3, listed now separately, covers IRR)
Presenter: Lonnie Chrisman, Lumina Decision Systems
A dollar received today is not worth the same as a dollar received next year. Taking this time-value of money (or more generally, time-value of utility) into account is very important when comparing
cash flows over time that result from long-term capital budgeting decisions. Net Present Value (NPV) and Internal Rate of Return (IRR) are the two most commonly used metrics examining the effective
value of an investment's cash flow over time. Both concepts are pervasive in decision-analytic models.
This multi-part webinar provides an introduction to the concepts of present value, discount rate, NPV and IRR. We'll discuss the interpretation of discount rate, and we'll get practice computing
these metrics in Analytica. We'll examine the pitfalls of each metric, and we'll examine the interplay of each metric with explicitly modelled uncertainty (including the concepts of Expected NPV
(ENPV) and Expected IRR (EIRR)).
This webinar will be highly interactive. Fire up an instance of Analytica as you join in. As I introduce each concept, I'll provide you with cash flow scenarios, and give you a chance to compute the
result yourself using Analytica. This talk is intended for people who are not already well-versed in NPV and IRR, or for people who already have a good background with those concepts but are new to
Analytica and thus can learn from the interactive practice of addressing these exercises during the talk.
I have assembled quite a bit of material, which I believe will fill two webinar sessions. Part 1 will focus mostly on present value, NPV, discount rate, and the use of NPV with uncertainty. Part 2
will focus mostly on IRR, several "gotchas" with IRR, and MIRR.
Note: Part 1 covered 5 exercises, covering present value, discount rate, modeling certain cash flows, computing NPV, and graphing the NPV curve. Part 2 added exercises 6-9, covering cash flows at non-uniformly-spaced time periods, valuing bonds and treasury notes, cash flows with uncertainty, and using the CAPM to find the investor-implied corporate discount rate.
The "class" will continue with Part 3 beginning with Internal Rate of Return.
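The core computation behind these exercises can be sketched in a few lines of Python (uniformly spaced annual periods assumed; the cash-flow numbers below are made up, not taken from the exercises):

```python
def npv(discount_rate, cash_flows):
    # cash_flows[0] is received today (undiscounted);
    # cash_flows[t] is received at the end of period t
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# invest 1000 today, receive 500 at the end of each of 3 years, at a 10% rate
value = npv(0.10, [-1000.0, 500.0, 500.0, 500.0])   # ≈ 243.43
```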
Data Analysis Techniques
Statistical Functions
Date and Time: Thursday, May 22, 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
A statistical function is a function that processes a data set containing many sample points, computing a "statistic" that summarizes the data. Simple examples are Mean and Variance, but more complex
examples may return matrices or tables. In this talk, I'll review statistical functions that are built into Analytica 4.0. In Analytica 4.0, all built-in statistical functions can now be applied to
historical data sets over an arbitrary index, as well as to uncertain samples (the Run index), eliminating the need for separate function libraries. I will demonstrate this use, as well as several
new statistical functions, e.g., Pdf, Cdf, Covariance. I will explain how the domain attribute should be utilized to indicate that numeric-valued data is discrete (such as integer counts, for
example), and how various statistical functions (e.g., Frequency, GetFract, Pdf, Cdf, etc) make use of this information. In the process, I'll demonstrate numerous examples using these functions, such as inferring sample covariance or correlation matrices from data, quickly histogramming arbitrary data and using the coordinate index setting to plot it, or using a weighted Frequency for rapid aggregation.
In addition, all statistical functions in Analytica 4.0 can compute weighted statistics, where each point is assigned a different weight. I'll cover the basics of sample weighting, and demonstrate
some simple examples of using this for computing a Bayesian posterior and for importance sampling from an extreme distribution.
This talk is appropriate for moderate to advanced users.
A recording of this webinar can be watched at Statistical-Functions.wmv. The model created during this webinar is at Statistical Functions.ana.
Principal Components Analysis (PCA)
Date and Time: 15 Jan 2009, 10:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Principal component analysis (PCA) is a widely used data analysis technique for dimensionality reduction and identification of underlying common factors. This webinar will provide a gentle
introduction to PCA and demonstrate how to compute principal components within Analytica. Intended to be at an introductory level, with no prior experience with PCA (or even knowledge of what it is) assumed.
The model developed during this talk, where the principal components were computed for 17 publicly traded stocks based on the previous 2 years of price change data, is Principal Component Analysis.ana. A recording of this webinar can be viewed at PCA.wmv.
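The webinar computes components with Analytica's matrix functions; as a self-contained illustration of the idea, the Python sketch below finds the first principal component of a small made-up data set by power iteration on the sample covariance matrix:

```python
def mean_center(data):
    # data: list of rows, each a list of features
    n = len(data)
    means = [sum(row[j] for row in data) / n for j in range(len(data[0]))]
    return [[row[j] - means[j] for j in range(len(row))] for row in data]

def covariance(data):
    x = mean_center(data)
    n, d = len(x), len(x[0])
    return [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
             for b in range(d)] for a in range(d)]

def first_component(cov, iters=500):
    # power iteration: repeatedly apply the covariance matrix to a vector;
    # the normalized result converges to the dominant eigenvector
    d = len(cov)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# two nearly collinear features: the first PC should point along (1, 1)
data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]]
pc1 = first_component(covariance(data))
```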
Variable Stiffness Cubic Splines
Date and Time: 2 October 2008, 10:00am Pacific Daylight Time
Presenter: Brian Parsonnet, ICE Energy
The Variable Stiffness Cubic Spline is a highly robust data smoothing and interpolation technique. A stiffness parameter adjusts the variability of the curve. At the extreme of minimal stiffness, the
curve approaches a cubic spline (like CubicInterp) that passes through all data points, while at the other extreme of maximal stiffness, the spline curve becomes the best-fit line. Weight parameters
can be used to constrain the curve to include selected points, while smoothing over others. The first, second and third derivatives all exist and are readily available.
I'll introduce and demonstrate User-Defined Functions that compute the variable stiffness cubic spline and interpolate to new points. I'll also show how these curves can be used to detect or
eliminate anomalies in data.
You can watch a recording of this webinar at Variable-Stiffness-Cubic-Splines.wmv. The model and library with the vscs functions will be posted here within a few weeks.
Using Regression Video Tutorial
Date: Thursday, May 1, 2008
Presenter: Lonnie Chrisman, Lumina Decision Systems
Regression analysis is a statistical technique for curve fitting, discovering relationships in data, and testing hypotheses between variables. This webinar shows how to do generalized linear
regression using Analytica's Regression function. It shows several ways to use regression, including fitting simple lines to data, polynomial regression, other non-linear terms, and autoregressive
models (e.g., ARMA). It shows how to estimate the likelihood that the data might have been generated from the particular form of the regression model. It also shows how to determine the level of
uncertainty in inferred parameter values, and then use these uncertainties in a predictive model based on the regression results. It covers functions Regression, RegressionDist, RegressionFitProb, and
Recording: Regression.wmv (or on You Tube).
Model examples: Using Regression.ana
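Behind the simplest use of Regression, fitting a line to data, is ordinary least squares. Here is a closed-form Python sketch with toy data (not Analytica's Regression function itself):

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a + b*x (closed-form solution)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 1 + 2x
a, b = fit_line(xs, ys)
```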
Logistic Regression
Date and Time: Thursday, 5 June 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
(When the webinar was first given, the features required Analytica Optimizer. Not so any more. Where the webinar uses Logistic_Regression(), replace it with LogisticRegression(), which is a built-in
function. )
Logistic regression is a technique for fitting a model to historical data to predict the probability of an event from a set of independent variables. In this talk, I'll introduce the concept of
Logistic regression, explain how it differs from standard linear regression, and demonstrate how to fit a logistic regression model to data in Analytica. Probit regression is for all practical
purposes the same idea as Logistic regression, differing only in the specific functional form for the model. Poisson regression is also similar, except it is appropriate when predicting a probability distribution over a dependent variable that represents integer "counts". All are examples of generalized linear models, and after reviewing these forms of logistic regression, it should be clear how
other generalized linear model forms can be handled within Analytica.
This topic is appropriate for advanced modelers. I will assume familiarity with regression (see the earlier talk on the topic), but will not assume a previous knowledge of logistic regression.
You can watch a recording of this webinar at: Logistic-Regression.wmv. A reorganized version of the model developed during this webinar can be downloaded from Logistic_regression_example.ana. The
model has been reorganized to use the built-in LogisticRegression function and to embed the data so that it can be used from any edition of Analytica. There has also been some reorganization into
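Analytica's LogisticRegression does the fitting for you; the Python sketch below shows the underlying idea, fitting P(y=1|x) = sigmoid(a + b·x) by gradient ascent on the log-likelihood. The data here is made up for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, iters=5000):
    # gradient ascent on the log-likelihood of P(y=1|x) = sigmoid(a + b*x)
    a = b = 0.0
    n = len(xs)
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            err = y - sigmoid(a + b * x)   # contribution to the gradient
            ga += err
            gb += err * x
        a += lr * ga / n
        b += lr * gb / n
    return a, b

# the outcome becomes more likely as x grows, with some overlap
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0,   0,   1,   0,   1,   1]
a, b = fit_logistic(xs, ys)
p_lo = sigmoid(a + b * 0.0)   # fitted probability at x = 0
p_hi = sigmoid(a + b * 5.0)   # fitted probability at x = 5
```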
Bayesian Techniques
Bayesian Posteriors using Importance Sampling
Date and Time: Thursday, September 4, 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Several algorithms for computing Bayesian posterior probabilities are special cases of importance sampling. The webinar of the previous week, Importance Sampling (rare events) introduced importance
sampling, covered the theory behind it, how it is applied, and how Analytica's sample weighting feature can be used for importance sampling. This webinar continues with importance sampling, this time
exploring how it can be used (at least in some cases) to compute Bayesian posterior probabilities.
I'll provide an introduction to what Bayesian posterior probabilities are, describe a couple of importance sampling-based approaches to computing them, and implement a few examples in Analytica. Importance sampling techniques for computing posteriors have limited applicability -- in some cases they work well, in others not. I'll try to characterize what those conditions are.
You can watch a recording of this webinar at Posteriors_using_IS.wmv. About two-thirds through the presentation, we noticed a result that seemed to be coming out incorrectly. I explain what the
problem was and fix it in Posteriors_using_IS_addendum.wmv. The models used during this presentation can be downloaded from Posterior sprinklers.ana and Likelihood weighting.ana.
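The likelihood-weighting model from the talk is in Analytica; the same idea in a hypothetical Python sketch, computing P(rain | wet grass) for a made-up two-variable example:

```python
import random

def posterior_by_likelihood_weighting(n=50_000, seed=3):
    rng = random.Random(seed)
    # prior: P(rain) = 0.2; likelihood: P(wet | rain) = 0.9, P(wet | no rain) = 0.1
    # we observe wet grass and want P(rain | wet)
    w_rain = 0.0
    w_total = 0.0
    for _ in range(n):
        rain = rng.random() < 0.2          # sample from the prior
        weight = 0.9 if rain else 0.1      # likelihood of the observed evidence
        w_total += weight
        if rain:
            w_rain += weight
    return w_rain / w_total

p = posterior_by_likelihood_weighting()
# exact answer: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26 ≈ 0.6923
```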
Importance Sampling (Rare events)
Date and Time: Thursday, 28 Aug 2008, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Importance sampling is a technique that simulates a target probability distribution of interest by sampling from a different sampling distribution and then re-weighting the sampled points so that
computed statistics match those of the target distribution. The technique has applicability when the target distribution is difficult to sample from directly, but where the probability density
function is readily available. The technique produces valid results in the large sample size limit for any selection of sampling distribution (provided it is absolutely continuous with respect to the
target distribution), but best results (i.e., fastest convergence with smaller sample size) are obtained when a good sampling distribution is used. The technique is commonly used for rare-event
sampling, where you want to ensure greater sampling coverage in the tails of distributions, where few samples would occur with standard Monte Carlo sampling. During the talk, we develop a rare event
model. It also has applicability to the computation of Bayesian posteriors, and sampling of complex distributions.
In this talk we cover the theory behind importance sampling and introduce the sample weighting mechanism that is built into Analytica. We develop a rare-event model to demonstrate how the weighting
mechanism is used to achieve the importance sampling. Next week we'll continue with an example of computing a Bayesian posterior probability.
A recording of this webinar can be viewed at Importance-Sampling.wmv. The model developed during this talk can be downloaded from: Importance Sampling rare events.ana.
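As a concrete instance of rare-event sampling (a standard textbook example, not the webinar's model), the Python sketch below estimates the tail probability P(X > 4) for X ~ N(0,1) by sampling from N(4,1), a distribution centered on the rare region, and re-weighting each point by the density ratio:

```python
import math
import random

def phi(x, mu=0.0):
    # normal density with mean mu and unit variance
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def tail_prob_is(threshold=4.0, n=100_000, seed=11):
    # importance sampling: draw from N(threshold, 1) and weight each point
    # by target density / sampling density
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += phi(x) / phi(x, mu=threshold)
    return total / n

p = tail_prob_is()
# true value: 1 - Phi(4) ≈ 3.17e-5; plain Monte Carlo would need millions
# of samples to see even a handful of hits past the threshold
```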
Presenting Models to Others
The Analytica Cloud Player Style Library
Date and Time: Tuesday, 31 Jan 2012 10:00 am Pacific Standard Time
Presenter: Max Henrion or Fred Brunton (TBD), Lumina Decision Systems
How to use the ACP Style Library and custom ACP-based web applications. Good practices for designing Analytica-model applications for the web.
A recording of this webinar can be viewed at ACP-Style-Library.wmv.
Intro to Analytica Cloud Player
Date and Time: Thursday, 26 Jan 2012 10:00am Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This talk provides an introduction to the Analytica Cloud Player (ACP). We'll browse several example models on the web, demonstrating various capabilities and illustrating what a user of models needs
to know. You'll see how to set up an ACP account, and we'll cover free usage of ACP with active support, the details of individual and group plans, session credits and pricing. Finally, you'll see
how to publish (upload) models to the cloud. This talk will not cover how to tailor a model for the web with specific cloud-player style settings or the ACP style library -- those will be covered the
following week.
A recording of this webinar can be viewed at Intro-ACP.wmv. You can also view the Power Point Slides from the talk (The power point slides were a very small part of the webinar).
Guidelines for Model Transparency
Date and Time: 19 Feb 2009, 10:00am Pacific Standard Time
Presenter: Max Henrion, Lumina Decision Systems
What makes Analytica models easy for others to use and understand? I will review some example models that illustrate ways to improve transparency -- or opacity. Feel free to send me your candidates
ahead of time! We'll review some proposed guidelines. I hope to stimulate a discussion about what you think works well or not, and enlist your help in refining these guidelines.
You can watch a recording of this webinar at Transparency-Guidelines.wmv.
Creating Control Panels
Date and Time: Thursday, May 29, 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
It is quite easy to put together "control panels" or "forms" for your Analytica models by creating input and output nodes for the inputs and outputs of interest to your model end users. This webinar
will cover the basic steps involved in creating and arranging these forms, along with some tricks for making the process efficient. We'll cover the different types of input and output controls that
are currently available, the use of text nodes to create visual groupings, use of images and icons, and the alignment commands that make the process very rapid. We'll learn how to change colors, and
look at the use of buttons very briefly. This talk is appropriate for beginning Analytica users.
A recording of this webinar can be viewed at Control-Panels.wmv (requires Windows Media Player). The model used during this webinar is at Building Control Panels.ana.
Sneak preview of Analytica Web Publisher
Date and Time: Thursday, February 21, 2008, 10:00 - 11:00 Pacific Standard Time
Presenter: Max Henrion, Lumina Decision Systems
In this week's webinar, Max Henrion, Lumina's CEO, will provide a sneak preview of the Analytica Web Publisher. AWP offers a way to make Analytica models easily accessible to anyone with a web
browser. Users can open a model, view diagrams and objects, change input variables, and view results as tables and graphs. Users will also be able to save changed models, to revisit them in later
sessions. Model builders can upload models into AWP directly from their desktop. Usually, AWP directories are password protected, so only authorized users can view and use models. But, we also plan
to make a free AWP directory available for people who want to share their models openly.
AWP is nearing release for alpha testing. We welcome your comments and would like to hear how you might envisage using AWP.
This webinar was not recorded.
Application Integration Topics
OLE Linking
Time and Date: Thursday, 27 Mar 2008 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
OLE linking is a commonly used method for linking data from Excel spreadsheets into Analytica and results from Analytica into Excel spreadsheets. It can be used with other applications that support
OLE-linking as well. The basic usage of OLE linking is pretty simple -- it is a lot like copy and paste. This webinar covers the basics of using OLE linking with fixed-size 1-D or 2-D tables. I also
demonstrate the basic tricks you must go through to link index values and multi-D inputs and outputs. In addition, we discuss what some of those OLE-link settings actually do, and explain how
OLE-connected applications connect to their data sources.
A recording of this webinar can be viewed at 2008-03-27-OLE-Linking.wmv.
Note: Another 10 minute fast-paced video (separate from the webinar) demonstrates linking data from Analytica into Excel, computing something from that data, and linking the result back into
Analytica: OLE-to-Excel-and-back.wmv.
Querying an OLAP server
Date and Time: Thursday, February 14, 2008, 10:00 - 11:00 Pacific Standard Time
(Note: Schedule change from an earlier posting. This is now back to the usual Thursday time. )
Presenter: Lonnie Chrisman, Lumina Decision Systems
In this session, I'll show how the MdxQuery function can be used to extract multi-dimensional arrays from an On-Line Analytical Processing (OLAP) server. In particular, during this talk we'll query
Microsoft Analysis Services using MDX. In this talk, I'll introduce some basics regarding OLAP and Analysis Services, discuss the differences between multi-dimensional arrays in OLAP and Analytica,
cover the basics of the MDX query language, show how to form a connection string for MdxQuery, and import data. I'll also show how hierarchical dimensions can be handled once you get your data into Analytica.
Note: Use of the features demonstrated in this webinar require the Analytica Enterprise or Optimizer edition, or the Analytica Power Player. They are also available in ADE.
The model created during this webinar is available here: Using MdxQuery.ana. You can watch a recording of this webinar here: MdxQuery.wmv (requires Microsoft Media Player)
Querying an ODBC relational database
Date and Time: Thursday, February 7, 2008, 10:00 - 11:00 Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
In this talk I'll review the basics of querying an external relational ODBC database using DbQuery. This provides a flexible way to bring in data from SQL Server, Access, Oracle, and mySQL databases,
and can also be used to read CSV-text databases and even Excel. In this talk, I will cover the topics of how to configure and specify the data source, the rudimentary basics of using SQL, the use of
Analytica's DbQuery, DbWrite, DbLabels and DbTable functions.
Note: Use of the features demonstrated in this webinar require the Analytica Enterprise or Optimizer edition, or the Analytica Power Player. They are also available in ADE.
You can grab the model created during this webinar from here: Querying an ODBC relational database.ana. A recording of this webinar can be viewed at Using-ODBC-Queries.wmv.
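DbQuery itself needs an ODBC data source; for a self-contained analogy, the Python sketch below performs the equivalent round trip against an in-memory SQLite table (the table and data are made up for the example):

```python
import sqlite3

# in-memory stand-in for an external ODBC source; DbQuery in Analytica plays
# roughly the role that execute + fetchall play here
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, year INTEGER, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("East", 2007, 120.0), ("West", 2007, 95.0),
                 ("East", 2008, 140.0), ("West", 2008, 110.0)])

# a basic SQL aggregation, like the queries shown in the webinar
rows = con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
```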
Calling External Applications
Date and Time: Thursday, Oct 18, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The RunConsoleProcess function runs an external program, can exchange data with that program, and can be used to perform a computation or acquire data outside of Analytica, that then can be used
within the model. I'll demonstrate how this can be used with a handful of programs, and code written in several programming and scripting languages. I'll demonstrate a user-defined function that
retrieves historical stock data from a web site.
You can watch a recording of this webinar at: Calling-External-Applications.wmv (Requires Windows Media Player)
Files created or used during this webinar are available for download.
The example of retrieving stock data from Yahoo Finance is also detailed in an article here: Retrieving Content From the Web
New Functions for Reading Directly from an Excel File
Date and Time: Thursday, 24 April 2008 10:00 Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
(Feature covered requires Analytica Enterprise or better)
Hidden within the new release of Analytica 4.1 are three new functions for reading values directly from Excel spreadsheets: OpenExcelFile, WorksheetCell, WorksheetRange. These provide an alternative
to OLE linking and ODBC for reading data from spreadsheets, which may be more convenient, flexible and reliable in many situations. We have not yet exposed these functions on the Definitions menu or
in the Users Guide in release 4.1, since they are still in an experimental stage. I would like to know that they have been "beta-tested" in a variety of scenarios before we fully expose them (also, the
symmetric functions for writing don't exist yet). In this webinar, I will introduce and demonstrate these functions, after which you can start using them with your own problems.
The model created during this talk is here: Functions for Reading Excel Worksheets.ana. It reads from the example workbook that comes with Office 2003, to which we added a few range names during the talk, resulting in SolvSamp.xls. Place the Excel file in the same directory as the model. A recording of this webinar can be viewed at Reading-From-Excel.wmv.
Reading Data from URLs to a Model
Date and Time: Thursday, 27 Aug 2009, 10:00am-11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Requires Analytica Enterprise
The new built-in function, ReadFromUrl, can be used to read data (and images) from websites, such as HTTP web pages, FTP pages, or even web services like SOAP. In this webinar, I'll demonstrate the
use of this function in several ways, including reading live stock and stock option price data, posting data to a web form, retrieving a text file from an FTP site, supplying user and password
credentials for a web site or ftp service, downloading and displaying images including customized map and terrain images, and querying a SOAP web service.
You can watch a recording of this webinar at ReadFromUrl.wmv. The model with the examples shown during the webinar is at Reading_Data_From_the_Web.ana.
Using the Analytica Decision Engine (ADE) from ASP.NET
Date and Time: Thursday, April 10, 2008 10:00am Pacific Daylight Time
Presenter: Fred Brunton, Lumina Decision Systems
The Analytica Decision Engine (ADE) allows you to utilize a model developed in Analytica as a computational back-end engine from a custom application. In this webinar, we'll create a simple active
web server application using ASP.NET that sends inputs submitted by a user to ADE, and displays results computed by ADE on a custom web page. In doing this, you will get a flavor of how ADE works and
how you program with it. If you've never created an active server page, you may enjoy seeing how that is done as well. This introductory session is oriented more towards people who do not have
experience using ADE, so that you can learn a bit more about what ADE is and where it is appropriate by way of example.
You can watch a recording of this webinar at ASP-from-ASPNET.wmv. To download the program files that were created during this webinar Click here.
Date and Time: Thursday, February 24, 2011 at 10:00am PST (1:00pm EST, 6:00pm GMT)
Presenter: Paul Sanford, Lumina Decision Systems
Analytica 4.3 is now available for beta testing and will be released in early March. The new version includes expanded optimization capabilities and simplified workflow for encoding optimization
problems. The new Structured Optimization framework in 4.3 is centered around a new function, DefineOptimization(), which replaces all three of the previous type-specific functions: LPDefine(),
QPDefine() and NLPDefine(). It also introduces a new node type, Constraint, which allows you to specify constraints using ordinary expressions. Paul will build up some basic examples using Structured
Optimization and field questions from users.
A recording of this webinar can be viewed at: Structured-Optimization.wmv. The example models used during this webinar are: Beer Distribution LP1.ana,Beer Distribution LP2.ana, File:Plane Allocation
LP.ana,File:Polynomial NLP.ana
Interactive Optimization Workshop
Date and Time: Thursday, 24 March 2011, 10:00am Pacific Daylight Time
Presenter: Paul Sanford, Lumina Decision Systems
This is an interactive workshop where you will learn the basics of creating Structured Optimization models and challenge yourself to set up and solve some basic examples on your own! No prior
training in optimization is required. Trial Downloads of Analytica Optimizer are now available. Attendees are encouraged to have Analytica Optimizer 4.3 installed and running during the workshop.
You can watch a recording of this webinar at: Optimization Workshop.wmv. You can download the models from the talk: Optimal Box.ana and call_center.ana.
Optimizing Parameters in a Complex Model to Match Historical Data
Date and Time: Thursday, 31 March 2011, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Ph.D., Lumina Decision Systems
Almost all quantitative models have parameters that must be assessed by experts or estimated from historical data. Estimation from historical data can be complicated by the presence of variables that
are either unobservable or unavailable in the historical record. Maximum likelihood estimation addresses this by finding the parameter settings that maximize the likelihood of the historical data
predicted by the model. In this talk, I will formulate the parameter fitting task as a structured optimization problem (NLP), providing a hands-on demonstration of the new structured optimization
features in Analytica 4.3.
A webinar recording of this can be viewed at Parameter-Optimization.wmv. The model file developed during the webinar is Parameter_Optimization.ana. The webinar also mentioned the Arbitrage Theorem.
Optimization with Uncertainty
Date and Time: Thursday, 14 April 2011, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Ph.D., Lumina Decision Systems
Analytica analyzes uncertainty by conducting a Monte Carlo analysis. When you optimize decision variables in a model containing uncertainty, you have a choice: You can perform one optimization over
the Monte Carlo analysis, or you can perform a Monte Carlo sampling of optimizations (i.e., the Monte Carlo is inside the optimization, or the optimization is inside the Monte Carlo). The first case
is used when the decision must be taken while the quantities are still uncertain. The second case is used when the values of the uncertain quantities will be resolved before the decisions are taken.
To illustrate, consider the situation faced by a relief organization that provides aid to victims of natural disasters. In one situation, a decision must be made regarding how to allocate resources
among several currently occurring famines. At the time the decision must be made, the actual intensity, progress and aid effectiveness for each famine is uncertain. In a different situation, the
organization wants to characterize the uncertainty in its need for resources for the upcoming year, perhaps forecasting the damage from next year's famines, and using these forecasts in its budgeting
and planning decisions.
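The difference between the two orderings can be sketched in a toy stocking problem (pure Python; the prices, demand distribution, and candidate range below are invented for illustration and are not taken from the webinar model):

```python
import random

random.seed(0)
demand_samples = [random.gauss(100, 20) for _ in range(1000)]

def profit(stock, demand):
    # Hypothetical economics: sell at 5/unit, buy at 3/unit, unsold stock is wasted.
    return 5 * min(stock, demand) - 3 * stock

candidates = range(50, 151)

# Case 1: the decision is taken while demand is still uncertain --
# a single optimization wrapped AROUND the Monte Carlo sample,
# maximizing expected profit across all draws.
stock_now = max(candidates,
                key=lambda s: sum(profit(s, d) for d in demand_samples))

# Case 2: demand resolves before the decision -- an optimization INSIDE
# each Monte Carlo draw, yielding a distribution of optimal decisions.
stock_later = [max(candidates, key=lambda s: profit(s, d))
               for d in demand_samples[:200]]
```

Case 1 yields one robust decision taken under uncertainty; case 2 yields a distribution of decisions, useful for budgeting and planning, mirroring the two famine-relief situations above.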
You can watch a recording of this webinar at Optimization-w-Uncertainty.wmv. The example model developed during the webinar can be downloaded from Famine Relief. You can also download the PowerPoint
slides from the talk.
Neural Networks
Date and Time: Thursday, 21 April 2011, 10:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Ph.D.
A feed-forward artificial neural network is a non-linear function that predicts one or more outputs from a set of inputs. These are usually structured in two layers: the inputs are
weighted and summed, and then passed through a sigmoid function to determine the activations of a hidden layer; then those activations are weighted, summed and passed through a sigmoid function
to predict the final output. A training phase is used to adjust the weights to "fit" an example data set.
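The forward pass of such a two-layer network can be sketched in pure Python (the weights and shapes here are illustrative, not tied to the webinar's Analytica model):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, W1, b1, W2, b2):
    """Inputs -> weighted sums -> sigmoid hidden layer -> weighted sums -> sigmoid outputs."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)) + b)
            for row, b in zip(W2, b2)]
```

Training then amounts to searching over the entries of W1, b1, W2, b2 to minimize prediction error on the example data — exactly the kind of search a structured optimization can perform.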
In this webinar, I'll create a neural network model in Analytica and train it on example data as a demonstration of the use of structured optimization. It provides a simple and easily understood
example of the use of intrinsic indexes in a structured optimization model, while at the same time introducing the basics of the interesting topic of neural networks.
You can watch a recording of this webinar at Neural-Networks.wmv. The neural network model created during the webinar (requires Analytica Optimizer) is Neural-Network.ana.
Introduction to Linear and Quadratic Programming
Date and Time: Thursday, Oct 11, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This talk is an introduction to linear programming and quadratic programming, and an introduction to solving LPs and QPs from inside an Analytica model (via Analytica Optimizer). LPs and QPs can be
efficiently encoded using the Analytica Optimizer functions LpDefine and QpDefine. I'll introduce what a linear program is for the sake of those who are not already familiar, and examine some example
problems that fit into this formalism. We'll encode a few in Analytica and compute optimal solutions. Although LPs and QPs are special cases of non-linear programs (NLPs), they are much more
efficient and reliable to solve, avoid many of the complications present in non-linear optimization, and fully array abstract. Many problems that initially appear to be non-linear can often be
reformulated as an LP or QP. We'll also see how to compute secondary solutions such as dual values (slack variables and reduced prices) and coefficient sensitivities. Finally, LpFindIIS can be useful
for debugging an LP to isolate why there are no feasible solutions.
You can watch a recording of this webinar here: LP-QP-Optimization.wmv (requires Windows Media Player)
The model file created during this webinar is here: LP QP Optimization.ana
Non-Linear Optimization
Date and Time: Thursday, Oct 4, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This talk focuses on the problem of maximizing or minimizing an objective criterion in the presence of constraints. This problem is referred to as a non-linear program, and the capability to solve
problems of this form is provided by the Analytica Optimizer via the NlpDefine function. In this talk, I'll introduce the use of NlpDefine for those who have not previously used this function, and
demonstrate how NLPs are structured within Analytica models. I'll examine various challenges inherent in non-linear optimization, tricks for diagnosing these and some ways to address these. We'll
also examine various ways in which to structure models for parametric analyses (e.g., array abstraction over optimization problems), and optimizations in the presence of uncertainty.
You can watch a recording of this session here: Nonlinear-Optimization.wmv
During the talk, these two models were created:
Vertical Applications and Case Studies
Regional Weather Data Analysis
Date and Time: Thursday, 22 April 2010 10:00am Pacific Daylight Time
Presenter: Brian Parsonnet, Ice Energy
There are numerous sources of weather data on the web. Users of this data face a few common problems: how to gather the data in volume, how to normalize the data regardless of source, and how to
analyze the results to generate insight. Analytica is the perfect tool to address all three issues simply and efficiently. A sample model will be shown illustrating some of these techniques.
A recording of this webinar can be viewed at Regional-Weather-Analysis.wmv. The model shown during the talk is Weather analysis.ana, and the data file used by this model for Burbank weather can be
downloaded from Burbank.zip (remember to Unzip it first to Burbank.txt). To avoid issues with ownership of the data, the temperatures in this file have been randomized (so the data is not accurate)
and other fields zeroed out, but this will still allow you to play with the model and data.
Automated Monitoring and Failure Detection
Date and Time: 5 Feb 2009, 10:00am Pacific Standard Time
Presenter: Brian Parsonnet, ICE Energy
In many complex physical systems, the automatic and proactive detection of system failures can be highly beneficial. Often dozens of sensor readings are collected over time, and a computer analyzes
these to detect when system behavior is deviating from normal. Sounding an alert can then facilitate early intervention, perhaps catching a component that is just starting to go bad.
In a complex physical system with multiple operating modes and placed in a changing environment, anomaly detection is a very difficult problem. Simple sensor thresholds (and other related approaches)
lack context-dependence, often making these simple approaches insufficient for the task. What is normal for any given sensor depends on the system's operating mode, time of day, activities in
progress, and environmental factors. Simple thresholds that don't take such context into account either end up being so loose that they miss legitimate anomalies, or so tight that too many excess
alarms are generated during normal conditions.
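The idea of context-dependent limits can be sketched as follows (pure Python; the contexts and the mean ± k·σ rule are illustrative assumptions, not the actual monitoring rules from the Ice Energy system):

```python
import statistics

def dynamic_limits(readings_by_context, k=3.0):
    """Per-context acceptable ranges: mean +/- k standard deviations, so
    'normal' depends on the operating mode rather than one global threshold."""
    limits = {}
    for context, xs in readings_by_context.items():
        mu = statistics.mean(xs)
        sigma = statistics.pstdev(xs)
        limits[context] = (mu - k * sigma, mu + k * sigma)
    return limits

def is_anomalous(context, reading, limits):
    lo, hi = limits[context]
    return not (lo <= reading <= hi)
```

With historical readings grouped by operating mode, the same sensor value can be flagged as anomalous in one mode yet pass as normal in another — the context-dependence that a single fixed threshold cannot provide.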
In this webinar, I'll show an expert system I've developed in Analytica that detects anomalies and developing failures in our deployed cooling system products. Data from dozens of sensors is
collected in 5 minute intervals and the system transitions through multiple operating modes, daily and seasonal environmental fluctuations, and system demands. The Analytica model provides a
framework in which complex rules that take multiple factors into account can be expressed, and used to estimate acceptable upper and lower operating ranges that are dynamically adjusted across each
moment in time, taking into account whatever context is available. The Analytica environment presents a very readable and understandable language for expressing monitoring rules, and the overall
transparency enables us to spot where other rules are needed and what they need to be.
A recording of this webinar can be watched at Failure-Detection.wmv.
Data Center Capacity Planning
Please note that this presentation will be on Wednesday rather than Thursday this week.
Date and Time: Wednesday, October 21, 2008 10:00am Pacific Daylight Time
Presenter: Max Henrion, Lumina Decision Systems
Data center energy demands are on the rise, creating serious financial as well as infrastructural challenges for data center operators. In 2006, data centers were responsible for a costly 1.5 percent
of total U.S. electricity consumption, and national energy consumption by data centers is expected to nearly double by 2011. For data center operators, this means that many data centers are reaching
the limits of power capacity for which they were originally designed. In fact, Gartner predicts that 50 percent of data centers will discover they have insufficient power and cooling capacity in
This week's presentation will provide an overview of ADCAPT -- the Analytica Data Center Capacity Planning Tool. For this webinar, the User Group will be joining a presentation that is also being
given outside of the Analytica User Group, but I (Lonnie) think it is also of interest to the User Group community in that it shows an example of a re-usable Analytica model, containing several very
interesting and novel techniques, applied to a very interesting application area.
Due to technical difficulties, this webinar was not recorded.
Modeling the Precision Strike Process
Date and Time: Thursday, October 16, 2008, 10:00am Pacific Daylight Time
Presenter: Henry Neimeier, MITRE
We describe a new paradigm for modeling, and apply it to a simple view of the precision strike attack process against mobile targets. The new modeling paradigm employs analytic approximation
techniques that allow rapid model development and execution. These also provide a simple dynamic analytic risk evaluation capability for the first time. The beta distribution is used to summarize a
broad range of target dwell and execution time scenarios in compact form. The data processing and command and control processes are modeled as analytic queues.
You can watch a recording of this webinar at: Precision-Strike-Process.wmv. Several related papers and materials are also available, including:
Modeling Utility Tariffs in Analytica
Presenter: Brian Parsonnet, Ice Energy
Date and Time: Thursday, Nov 8, 2007 at 10:00 - 11:00am Pacific Standard Time
Modeling utility tariffs is a tedious and complicated task. There is no standard approach to how a utility tariff is constructed, and there are 1000’s of tariffs in the U.S. alone. Ice Energy has
made numerous passes at finding a “simple” approach to enable tariff vs. product analysis, including writing VB applications, involved Excel spreadsheets, using 3rd party tools, or outsourcing
projects to consultants. The difficulty stems from the fact that there is little common structure to tariffs, and efforts to standardize on what structure does exist are confounded by an endless list
of exceptions. But using the relatively simple features of Analytica we have created a truly generic model that allows a tariff to be defined and integrated in just a few minutes. The technique is
not fancy by Analytica standards, so this in essence demonstrates how Analytica's novel modeling concept can tackle tough problems.
You can watch a recording of this webinar at: 2007-11-08-Tariff-Modeling (Requires Windows Media Player)
Modeling Energy Efficiency in Large Data Centers
Date and Time: Thursday, Oct 25, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Surya Swamy, Lumina Decision Systems
The U.S. data center industry is witnessing a tremendous growth period stimulated by increasing demand for data processing and storage. This has resulted in a number of important implications
including increased energy costs for business and government, increased emissions from electricity generation, increased strain on the power grid and rising capital costs for data center capacity
expansion. This webinar will illustrate how Analytica's dynamic modeling capabilities, coupled with its advanced uncertainty capabilities, offer tremendous support in building cost models for
planning and development of energy-efficient data centers. The model enables users to explore future technologies, the performance, costs and efficiencies of which are uncertain and hence to
be probabilistically evaluated over time.
You can watch a recording of this presentation at: Data-Center-Model.wmv (Requires Windows Media Player)
Time of Use pricing
Presenter: Lonnie Chrisman, Lumina Decision Systems
Date and Time: Wednesday, Sep 30, 2020
Electricity demand and generation is not constant, varying by time of day and season. For example, solar panels generate only when the sun is out, and demand drops in the wee morning hours when most
people are sleeping. Time-of-use pricing is a rate tariff model used by utility companies that charges more during times when demand tends to exceed supply. This model imports actual usage data from a
spreadsheet obtained from NationalGridUS.com of historic average customer usage, uses that to project average future demand, and then calculates the time-of-use component of PG&E's TOU-C and TOU-D
tariffs. (Note: The historical data came from Massachusetts, the rate plan is from California, but these are used as examples). Developed during a User Group Webinar on 30-Sep-2020, which you can
watch as well to see it built.
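The core arithmetic of a time-of-use energy charge is simple to sketch (the rates and peak window below are made up for illustration; they are not PG&E's actual TOU-C/TOU-D figures):

```python
# Hypothetical two-period tariff: 4pm-9pm is peak, everything else off-peak.
PEAK_HOURS = set(range(16, 21))
PEAK_RATE, OFF_PEAK_RATE = 0.45, 0.25  # $/kWh, illustrative only

def tou_energy_charge(hourly_kwh):
    """Time-of-use energy charge for a 24-entry hourly usage profile."""
    return sum(kwh * (PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE)
               for hour, kwh in enumerate(hourly_kwh))
```

The same usage shifted from a peak hour to an off-peak hour lowers the bill, which is the behavioral incentive these tariffs are designed to create.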
Download: You need both the model Time of use pricing.ana and the accompanying spreadsheet MECOLS0620.xlsx (original source of spreadsheet, on 9-Sep-2020, was: Massachusetts - Class Average Load
Shapes (.xlsx)
Video: Time of use pricing.mp4
Creating Scatter Plots
Date and Time: Thursday, May 15, 2008 at 10:00 - 11:00am Pacific Daylight Time
Date and Time: Thursday, Aug 23, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
This webinar focuses on utilizing graphing functionality new to Analytica 4.0, and specifically, functionality enabling the creative use of scatter plots. The talk will focus primarily on techniques
for simultaneously displaying many quantities on a single 2-D graph. I'll discuss several methods in which multiple data sources (i.e., variable results) can be brought together for display in a
single graph, including the use of result comparison, comparison indexes, and external variables. I'll describe the basic new graphing-role / filler-dimension structure for advanced graphing in
Analytica 4.0, enabling multiple dimensions to be displayed on the horizontal and vertical axes, or as symbol shape, color, or symbol size, and how all these can be rapidly pivoted to quickly explore
the underlying data. I'll discuss how graph settings adapt to changes in pivot or result view (such as Mean, Pdf, Sample views).
A recording of this webinar can be viewed at Scatter-Plots.wmv.
Model used: During this webinar, I started with some example data in the model Chemical elements.ana. The original file is in the form before graph settings were changed. By the end of the webinar,
many graph settings had been altered, and various changes made, resulting in Scatter-Plots.ana (during the Aug 23 presentation, this was the final model: Chemical elements2.ana).
Graph Style Templates
Date and Time: Thursday, February 28, 2008, 10:00 - 11:00 Pacific Standard Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
Graph style templates provide a convenient and versatile way to bundle graph setup options so that they can be reused when viewing other result graphs. For example, if you've discovered a set of
colors and fonts and a layout that creates the perfect pizzazz for your results, you can bundle that into a template where you can quickly select it for any graph. In this talk, I'll introduce how
templates can be used and how you can create and re-use your own. I'll show the basics of using existing templates, previewing what templates will look like, and applying a given template to a single
result or to your entire model. We'll also see how to create your own templates, and in the process I'll discuss what settings can be controlled from within a template. I'll discuss how graph setup
options are a combination of global settings, template settings, and graph-specific overrides. I'll show how to place templates into libraries (thus allowing you to have template libraries that can
be readily re-used in different models), and even show how to control a few settings using templates that aren't selectable from the Graph Setup UI. I'll also touch on how different graph settings are
associated with different aspects of a graph, ultimately determining how the graph adapts to changes in uncertainty view or pivots.
The model created during this webinar is here: Graph style templates.ana. You can watch a recording of the webinar here: Graph-Style-Templates.wmv.
Button Scripting
Date and Time: Thursday, Sept. 6, 2007 at 10:00 - 11:00am Pacific Daylight Time
Presenter: Max Henrion, Lumina Decision Systems
This webinar is an introduction to Analytica's typescript and button scripting. Unlike variable definitions, button scripts can have side-effects, and this can be useful in many circumstances. I'll
cover the syntax of typescript (and button scripts), and how scripts can be used from buttons, picture nodes or choice inputs. I'll introduce some of the Analytica scripting language to those who may
not have seen or used it before. And we'll examine some ways in which button scripting can be used.
You can watch the recording of this webinar here: Button Scripting.wmv (Requires Windows Media Player or equiv). The model files and libraries used during the webinar are in
Analytica User Community
The Analytica Wiki, and How to Contribute
Date and Time: (tentative) Thursday, October 30, 2008, Pacific Daylight Time
Presenter: Lonnie Chrisman, Lumina Decision Systems
The Analytica Wiki is a central repository of resources for active Analytica users. What's more, you -- as an active Analytica user -- can contribute to it. As an Analytica community, we have a lot
to learn from each other, and the Analytica Wiki provides one very nice forum for doing so. You can contribute example models and libraries, hints and tricks, and descriptions of new techniques. You
can fix errors in the Wiki documentation if you spot them, or add to the information that is there when you find subtleties that are not fully described. If you spend a lot of time debugging a
problem, after solving it you could document the issue and how it was solved for your own benefit in the future, as well as for others in the user community who may encounter the same problem. When
you publish a relevant paper, I hope you will add it to the page listing publications that utilize Analytica models.
I will provide a quick tour of the Analytica Wiki as it exists today. I'll then provide a tutorial on contributing to the Wiki -- e.g., the basics of how to edit or add content. The Wikipedia has had
tremendous success with this community content contribution model, and I hope that after this introduction many of you will feel more comfortable contributing to the Wiki as you make use of it.
Due to a problem with the audio on the recording, the recording of this webinar is not available.
Licensing or Installation
Reprise License Manager Tutorial
Date and Time: Wednesday, 11 March 2010, 10:00am Pacific Standard Time
Presenter: Bob Mearns, Reprise Software Inc.
The Reprise License Manager (RLM) allows all Analytica and ADE licenses within an organization to be managed from a central server. RLM can be used with either floating or named-user licenses.
This tutorial on RLM administration is being given by Bob Mearns, lead software developer at Reprise Software, Inc., who has over 15 years' experience developing and supporting software license
managers. This session will focus on:
• Basic RLM Server Setup
• How and where RLM looks for licenses
• Using the RLM Web Server Admin Interface
• Using RLM diagnostics, new in RLM v8
• A systematic approach to diagnosing license server connectivity problems
There is a big focus in this talk on how to debug problems with the RLM license manager, and in the process many of the technical details pertaining to the RLM setup are covered. This talk is most
relevant for IT managers who administer the license server, and for people who may be installing the RLM server who would like a more thorough understanding of how things work. The RLM license
manager is used to host centrally managed licenses, which includes floating and named-user licenses.
This talk is being provided by Reprise Software.
This webinar may be viewed here: RLM-troubleshooting.wmv. The trouble-shooting tips document covered in the talk is at RLM Troubleshooting Tips.
See also
Weights for minimum-variance array pattern synthesis
Since R2022b
wts = minvarweights(pos,ang) computes the minimum-variance weights wts for synthesizing the pattern of a sensor array in the directions specified by ang. Array element positions are specified in pos.
The function optimizes the beamforming weights using a second-order cone programming solver. This function requires Optimization Toolbox™.
wts = minvarweights(pos,ang,cov) also specifies the spatial covariance matrix cov of the array elements.
wts = minvarweights(___,MaskAngle=angm) also specifies angles angm at which mask sidelobe levels are defined in the sllm argument.
wts = minvarweights(___,MaskSidelobeLevel=sllm) also specifies maximum allowable sidelobe levels sllm at the angles defined in angm.
wts = minvarweights(___,NullAngle=angn) also specifies null directions angn for the array.
Compute Optimized ULA Beamforming Weights
Compute optimized beamforming weights of a 31-element, half-wavelength-spacing ULA in the direction of -30° in azimuth. Design the array to keep sidelobe levels less than -23 dB.
Create the optimized weights.
N = 31;
pos = (0:N-1)*0.5;
sll = -23;
wts = minvarweights(pos,-30,MaskSidelobeLevel=sll);
Apply the optimized weights and display the array pattern from -90° to +90° azimuth.
az = -90:.25:90;
pat_opt = arrayfactor(pos,az,wts);
xlabel('Azimuth Angle (deg)')
ylabel('Beam Pattern (dB)')
Optimized Tapered ULA Weights
Design an array to have a tapered beampattern. The array is a 51-element half-wavelength spacing ULA steered in the direction of 25° in azimuth. The pattern synthesis goal is to achieve
sidelobe levels smaller than a tapered mask decreasing linearly from -18 dB to -55 dB at ±90°. Place nulls at -35°, -45°, and 40° azimuth angle.
N = 51;
pos = (0:N-1)*0.5;
ANGmainBeam = 25;
angn = [-35 -45 40];
angm = [-90:.2:22 27:0.2:90];
sllm = [linspace(-55,-18,length(-90:.2:22)) ...
wts = minvarweights(pos,ANGmainBeam,'MaskAngle',angm, ...
Apply optimized weights and display the array pattern from -90° to +90° in azimuth.
az = -90:.25:90;
pat_opt = arrayfactor(pos,az,wts);
axis([-90 90 -125 5])
xlabel('Azimuth Angle (deg)')
ylabel('Beam Pattern (dB)')
Verify that nulls are placed at -35°, -45°, and 40° azimuth angle.
Beamforming Weights in Interference
Calculate the optimized beamforming weights for a 32-element, half-wavelength spacing ULA. The look direction (angs) is 30 degrees in azimuth. There are two interfering signals coming from azimuths
60 and -45 degrees, respectively (angi). The signal-free spatial covariance matrix contains contributions from the noise and the interferers only. The noise is white across all elements and the SNR
(snr) is 10 dB. The pattern synthesis goal is to keep the sidelobe level (sll) below -25 dB and to point nulls at the interferers.
N = 32;
d = 0.5;
pos = (0:N-1)*d;
angs = 30;
angi = [-45 60];
Find the sensor covariance matrix and then the array weights.
snr= 10;
cov = sensorcov(pos,angi,db2pow(-snr));
sll = -25;
wts = minvarweights(pos,angs,cov,'MaskSidelobeLevel',sll);
Find the array pattern.
az = -90:.25:90;
pat_opt = arrayfactor(pos,az,wts);
Plot the array pattern as a function of azimuth (az).
yline(sll,'--','Sidelobe level', ...
xlabel('Azimuth Angle (deg)')
ylabel('Beam Pattern (dB)')
ylim([-70 1])
grid on
Input Arguments
cov — Sensor spatial covariance matrix
eye(N) (default) | N-by-N complex-valued positive definite matrix
Sensor spatial covariance matrix, specified as an N-by-N complex-valued positive-definite matrix. N is the number of array sensor elements. cov is a signal-free covariance matrix, which means it does
not contain the signals of interest and includes only contributions from spatial noise and interference. The default value is the identity matrix indicating isotropic spatially white noise with unit
variance. You can use the sensorcov function to generate a sensor covariance matrix.
Example: [5,0.1;0.1,2]
Data Types: double
Complex Number Support: Yes
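For readers outside MATLAB, the construction that sensorcov performs can be sketched in pure Python for a linear array. This is a simplified analogue — unit-power, uncorrelated interferers plus spatially white noise — and not the full sensorcov feature set:

```python
import math, cmath

def steering_vector(pos, az_deg):
    # pos: element positions in wavelengths along the array axis
    k = 2 * math.pi * math.sin(math.radians(az_deg))
    return [cmath.exp(1j * k * p) for p in pos]

def sensor_covariance(pos, interferer_az, noise_power):
    """Signal-free spatial covariance: sum of a*a' over unit-power
    interferers, plus noise_power on the diagonal (white noise)."""
    n = len(pos)
    R = [[0j] * n for _ in range(n)]
    for az in interferer_az:
        a = steering_vector(pos, az)
        for i in range(n):
            for j in range(n):
                R[i][j] += a[i] * a[j].conjugate()
    for i in range(n):
        R[i][i] += noise_power
    return R
```

The resulting matrix is Hermitian positive definite (for positive noise power), which is what the optimizer requires of cov.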
angm — Mask angles
[] (default) | real-valued 1-by-K vector | real-valued 2-by-K matrix
Angles at which mask sidelobe levels are defined, specified as a real-valued 1-by-K vector or a real-valued 2-by-K matrix where K is the number of mask sidelobe levels. If angm is a 1-by-K vector,
then it contains the azimuth angles of the mask directions. If angm is a 2-by-K matrix, each column specifies the direction in the form [az;el] where az stands for azimuth and el stands for
elevation. Angle units are in degrees.
Data Types: double
sllm — Maximum allowable mask sidelobe levels
non-positive scalar (default) | non-positive real-valued 1-by-K vector
Maximum allowable mask sidelobe levels, specified as a non-positive scalar or non-positive real-valued 1-by-K vector. K is the number of mask sidelobe levels. Sidelobe levels are always less than or
equal to zero.
• If sllm is a scalar, then it contains a uniform mask for all sidelobe levels and angm must be empty.
• If sllm is a 1-by-K vector, then sllm and angm must have the same number of columns; and sllm contains the mask sidelobe levels for corresponding mask angles, angm.
An empty sllm vector means that there are no constraints on the sidelobe levels. Units are in dB.
Data Types: double
angn — Null direction angles
[] (default) | real-valued 1-by-P vector | real-valued 2-by-P matrix
Null direction angles, specified as either a 1-by-P vector or a 2-by-P matrix where P is the number of null directions. If angn is a 1-by-P vector, then it contains only the azimuth angles of
directions. If angn is a 2-by-P matrix, each column specifies the null direction in the form [az;el] where az stands for azimuth and el stands for elevation. Angle units are in degrees.
Data Types: double
Output Arguments
wts — Beamformer weights
N-by-1 complex-valued vector
Beamformer weights, returned as a complex-valued N-by-1 vector. N represents the number of sensor elements of the array. The optimization procedure determining wts attempts to find array weights wts
such that the resulting pattern has nulls in the directions of any interferers present in cov.
Version History
Introduced in R2022b
Algorithm for Enumerating Even Permutations
Simply put, a permutation of a set of N objects is an ordering of the objects into a particular sequence, where each object appears exactly once, without repetition.
If the N objects are distinct, then there are exactly N! permutations (where N! stands for the factorial of N, the product of all positive integers from 1 to N inclusive). Algorithms for enumerating
all permutations abound.
Now, mathematically, permutations can be categorized into odd permutations and even permutations. All permutations can be decomposed into a series of pair-wise swappings. If the N objects are
distinct, then the permutations that result from an even number of swaps (even permutations) never coincide with the permutations that result from an odd number of swaps (odd permutations). Whether a
permutation is even or odd, is called its parity.
While there are algorithms for determining whether a given permutation is even or odd, there do not appear to be any online resources that discuss how to enumerate all even permutations (or, for that
matter, how to enumerate all odd permutations).
This page attempts to remedy that.
Enumerating All Permutations
First, we take a look at a particular algorithm among the many that enumerate all permutations of a set. Later, we will alter this algorithm so that it enumerates only even permutations. This
particular algorithm has some characteristics that make it attractive as our starting point:
• It is fast: its time complexity is linear in the length of the array—O(N).
• It is compact: its space complexity is constant—O(1). It modifies the input array in-place, so that you can call it repeatedly to enumerate all permutations.
• It is stateless: no memory overhead is needed for it to keep track of how far along it is in enumerating the permutations. The input array itself is enough to determine the next permutation.
• Its output is sorted: it enumerates all permutations in lexicographic order.
• Its output has no repetitions: when the input array has duplicate elements, it correctly outputs only distinct permutations.
The input to this algorithm is an array A of length N, which is initially sorted in non-decreasing order, and its output is a boolean value indicating whether A has been modified to contain the
lexicographically next permutation (True), or there are no more permutations and A has been returned to its original non-decreasing order (False).
1. Find the largest array index i such that A[i] < A[i+1]. If no such index exists, then A is lexicographically the greatest permutation, so transform it back to the lexicographical least
permutation by reversing the order of its elements, then return False.
2. Find the largest array index j such that A[i] < A[j]. Since i+1 is such an index, j can always be found, and is always greater than i.
3. Swap A[i] and A[j].
4. Reverse the order of the elements from A[i+1] to the end of the array, and return True.
To enumerate all permutations of an array A, we simply sort it in non-decreasing order and run this algorithm repeatedly until it returns False, at which point we have finished enumerating the permutations.
The initial sorting of the array is not necessary, of course, if we store a copy of it somewhere else and simply run the algorithm repeatedly until A returns to its original state.
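As an illustrative sketch (the page links a C++ implementation; this Python translation and its function name are ours), the four steps map directly onto code:

```python
def next_permutation(a):
    """Advance list `a` in place to its lexicographically next permutation.

    Returns True if `a` now holds the next permutation, or False after
    restoring the lexicographically least (sorted) order.
    """
    # Step 1: largest i such that a[i] < a[i+1].
    i = len(a) - 2
    while i >= 0 and a[i] >= a[i + 1]:
        i -= 1
    if i < 0:
        # `a` was the greatest permutation: reverse back to sorted order.
        a.reverse()
        return False
    # Step 2: largest j such that a[i] < a[j] (always exists, and j > i).
    j = len(a) - 1
    while a[j] <= a[i]:
        j -= 1
    # Step 3: swap A[i] and A[j].
    a[i], a[j] = a[j], a[i]
    # Step 4: reverse the suffix starting at i+1.
    a[i + 1:] = reversed(a[i + 1:])
    return True
```

Starting from a sorted list such as [1, 2, 3], repeated calls yield all 3! = 6 permutations in lexicographic order, and duplicate elements produce only distinct permutations.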
Enumerating Even Permutations
One way of enumerating all even permutations, of course, is to run an algorithm such as the one we described in the preceding section and apply one of the many parity-finding algorithms on each
permutation to determine whether it is even or odd, and discard those that are odd. Unfortunately, algorithms to compute the parity of a permutation tend to be expensive, since the cost of such a
computation adds up significantly when we're trying to enumerate all even permutations.
However, if we look closely at the algorithm we described above, we see that actually there's no need to run a separate algorithm for computing the parity of the output permutations: we already have
enough information to tell what parity the output will have.
In particular, step (3) always flips the parity of the input array, and the reversal of a segment of the array (step (1) when i cannot be found, and step (4)) changes the parity depending on how many
pairs of elements need to be swapped to reverse their order. The latter is a simple computation: to reverse the order of M elements, if M is even, then we simply swap M/2 elements (the front half of
the array segment with the back half); if M is odd, the middle element doesn't need to move, so we swap (M-1)/2 elements. In other words, to reverse the order of M elements requires exactly ⌊M/2⌋
swaps. If the number of swaps is odd, the parity of the array is flipped; otherwise, it does not change.
Therefore, to enumerate only even permutations, we simply add a Boolean variable, ParityFlip, to the algorithm to keep track of whether the parity of the input array has been flipped. If the parity
is flipped when we are about to return, repeat the algorithm instead of stepping through the next permutation until we reach a permutation that exhibits no parity flip from the input array. This way,
if we start with an initial array (which is an even permutation by definition, since zero swaps is an even number of swaps), we will skip over all odd permutations and only return even ones.
In pseudo-code, then, our algorithm to enumerate even permutations is:
1. Set ParityFlip to False.
2. Find the largest array index i such that A[i] < A[i+1].
3. If no such i exists:
1. Reverse the order of A.
2. If ⌊N/2⌋ is odd, invert the value of ParityFlip.
4. Otherwise:
1. Find the largest array index j such that A[i] < A[j].
2. Swap A[i] and A[j]. Invert the value of ParityFlip.
3. Reverse the order of the elements from A[i+1] to the end of the array.
4. If ⌊M/2⌋ is odd, where M is the number of elements swapped, invert the value of ParityFlip.
5. If ParityFlip is True, go back to step 2. Otherwise, return.
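A direct Python sketch of this pseudo-code (our own translation, not the downloadable C++ source; it additionally returns False once the enumeration wraps back to sorted order):

```python
def next_even_permutation(a):
    """Advance `a` in place to the next permutation of the same parity.

    Starting from a sorted list of unique elements (an even permutation,
    since zero swaps is even), repeated calls enumerate the even
    permutations in lexicographic order. Returns False once the
    enumeration wraps back around to the starting order.
    """
    wrapped = False
    parity_flip = False                      # step 1
    while True:
        i = len(a) - 2                       # step 2
        while i >= 0 and a[i] >= a[i + 1]:
            i -= 1
        if i < 0:                            # step 3
            a.reverse()
            if (len(a) // 2) % 2 == 1:
                parity_flip = not parity_flip
            wrapped = True
        else:                                # step 4
            j = len(a) - 1
            while a[j] <= a[i]:
                j -= 1
            a[i], a[j] = a[j], a[i]
            parity_flip = not parity_flip
            m = len(a) - (i + 1)             # elements in the reversed suffix
            a[i + 1:] = reversed(a[i + 1:])
            if (m // 2) % 2 == 1:
                parity_flip = not parity_flip
        if not parity_flip:                  # step 5
            return not wrapped
```

For [1, 2, 3] this yields the three even permutations (1, 2, 3), (2, 3, 1) and (3, 1, 2) — exactly the cyclic rotations.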
It may appear on the surface that this algorithm would be quite inefficient due to its outer loop; however, its amortized complexity does not exceed that of the original algorithm. In practice,
roughly every 4 consecutively generated permutations will consist of two even and two odd permutations, so even the per-iteration cost is negligible.
Important note: even permutations only make sense when the input array contains only unique elements. If there are duplicate elements, then the set of even permutations is identical to the set of odd
permutations, since if a permutation P is obtained through an odd number of swaps, you can make the number of swaps even by simply adding a swap of two identical elements. For this reason, the above
algorithm requires that the input array have unique elements; otherwise its output would be incomplete. It is possible to further alter the algorithm to do the “right thing” under all circumstances,
but that would sacrifice its efficiency, which is where its value lies.
Download C++ Source Code
The preceding discussion gone way over your head? Not interested in the inner workings but just want some real code you can use? No problem! Just download the C++ implementation of the algorithm for
enumerating even permutations.
This implementation uses STL-style templates for maximum reusability. It can enumerate even permutations given any range of an STL bidirectional container, and can be easily extended to user-defined
lexicographic orderings. It also has a more careful handling of return values so that it correctly tells you when the array has “rolled over” back to the lexicographically smallest permutation.
Enumerating Odd Permutations
What if you want to enumerate odd permutations instead of even ones? No problem! Relative to an odd permutation, another odd permutation is even (permutations obey the rule that odd + even = odd, and
even + even = even). Since our enumeration algorithm works relative to the input array's parity, if you hand it an odd permutation, it will return the next odd permutation. To get an odd permutation
in the first place, simply swap the last two elements of the input array. (Swapping any two elements will do, but swapping the last two on a sorted array lets you enumerate all odd permutations in
lexicographic order.)
The algorithm for enumerating all permutations used above is described on Wikipedia's Permutation page. It is credited to 14th century Indian mathematician Narayana Pandita.
Standard enthalpy of formation
Chemical Thermodynamics: Standard enthalpy of formation
The standard enthalpy of formation (ΔHf°) for a reaction is the enthalpy change that occurs when 1 mol of a substance is formed from its component elements in their standard states.
When we say “The standard enthalpy of formation of methanol, CH[3]OH(l) is –238.7 kJ”, it means:
C(graphite) + 2 H[2](g) + ½ O[2](g) → CH[3]OH(l)
has a value of ΔH of –238.7 kJ. Likewise, ∆H[f]° [C[2]H[5]OH(l)] = –279 kJ/mol means that when the reaction below is carried out, 279 kJ of energy is released:

2 C(graphite) + 3 H[2](g) + ½ O[2](g) → C[2]H[5]OH(l)

We may treat ΔH[f]° values as though they were absolute enthalpies in order to determine enthalpy changes for reactions. Note that the standard enthalpy of formation of an element in its standard state is 0. Standard conditions refer to a pressure of 1 atm and a specified temperature of 298 K (25 °C). The standard state of oxygen at 1 atm is O[2](g), and for nitrogen, N[2](g). The stoichiometry of a formation reaction indicates the formation of 1 mole of the respective compound; hence enthalpies of formation are always quoted in kJ/mol.
The process of calculation of standard enthalpy of formation is as follows:
∆H°[rxn] = ∑ n[p] × ∆H[f]°(products) – ∑ n[r] × ∆H[f]°(reactants)
Where the symbol ‘∑’ signifies the summation of several variables. The symbol ‘n’ signifies the stoichiometric coefficient used in front of a chemical symbol or formula.
In other words
1. Add all the values for ΔH[f]° of the products.
2. Add all the values for ΔH[f]° of the reactants.
3. Subtract #2 from #1
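As a sketch, the products-minus-reactants rule can be coded directly. The example applies it to the combustion of methane, using rounded textbook ΔH[f]° values in kJ/mol; the function name is our own:

```python
def enthalpy_of_reaction(products, reactants):
    """ΔH°rxn (kJ) from maps of species -> (coefficient n, ΔHf° in kJ/mol)."""
    side_total = lambda side: sum(n * dhf for n, dhf in side.values())
    return side_total(products) - side_total(reactants)

# CH4(g) + 2 O2(g) -> CO2(g) + 2 H2O(l)
dH = enthalpy_of_reaction(
    products={"CO2(g)": (1, -393.5), "H2O(l)": (2, -285.8)},
    reactants={"CH4(g)": (1, -74.8), "O2(g)": (2, 0.0)},  # element: ΔHf° = 0
)
print(round(dH, 1))  # -890.3 kJ (exothermic)
```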
Calculate a value for the standard enthalpy of formation of propanone, CH[3]COCH[3](l), given the following standard enthalpy changes of combustion:

ΔH[c]°/kJ mol^–1: C(s) = –394, H[2](g) = –286, CH[3]COCH[3](l) = –1821

For the enthalpy change, write an equation representing the change, including state symbols. For the formation of propanone, the equation forming one mole of propanone is the main equation.
3C(s) + 3H[2](g) + 0.5O[2](g) → CH[3]COCH[3](l)
The reactants in this equation, C(s) and H[2](g), can be burned to produce CO[2](g) and H[2]O(l).
Other equations related to this reaction are:
3C(s) + 3O[2](g) → 3CO[2](g); ΔH = 3 × (–394) kJ

where 3 is the stoichiometric coefficient of carbon in the main equation.

3H[2](g) + 1.5O[2](g) → 3H[2]O(l); ΔH = 3 × (–286) kJ

where 3 is the stoichiometric coefficient of H[2] in the main equation.

Also, 1 mole of propanone is formed in the main equation, so the third equation must be reversed. The sign of ΔH is also changed, since the equation is reversed.

3CO[2](g) + 3H[2]O(l) → CH[3]COCH[3](l) + 4O[2](g); ΔH = +1821 kJ
When we add these equations together and cancel down the moles of any substance that appears on both sides of the equation, it results in the main equation and the total of the enthalpy changes gives
the enthalpy change for the main equation.
1. 3C(s) + 3O[2](g) → 3CO[2](g); ΔH = 3 × (–394) kJ
2. 3H[2](g) + 1.5O[2](g) → 3H[2]O(l); ΔH = 3 × (–286) kJ
3. 3CO[2](g) + 3H[2]O(l) → CH[3]COCH[3](l) + 4O[2](g); ΔH = +1821 kJ

Adding 1, 2 and 3 gives: 3C(s) + 3H[2](g) + 4.5O[2](g) + 3CO[2](g) + 3H[2]O(l) → 3CO[2](g) + 3H[2]O(l) + CH[3]COCH[3](l) + 4O[2](g)
Cancel the moles of any substance that appears on both the left and right sides.
This leaves:
3C(s) + 3H[2](g) + 0.5O[2](g) → CH[3]COCH[3](l)
The total is 3 × (–394) + 3 × (–286) + 1821 = –219 kJ/mol. Therefore, the enthalpy of formation of propanone is –219 kJ/mol, and the reaction is exothermic.
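The arithmetic of the worked example can be verified in a couple of lines:

```python
# Enthalpies of combustion from the example (kJ/mol)
dHc_C, dHc_H2, dHc_propanone = -394, -286, -1821

# Hess's law: burn the elements, then "un-burn" the products back to propanone
dHf_propanone = 3 * dHc_C + 3 * dHc_H2 - dHc_propanone
print(dHf_propanone)  # -219 (kJ/mol)
```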
One can also solve the same problem using Hess’s law where,
ΔH[f] = ΣΔH[c] (reactants) – ΣΔH[c] (products)
Scalene Triangle Calculator - Calculator Wow
Scalene Triangle Calculator
In the realm of geometry and construction, the scalene triangle stands out for its unique properties and versatile applications. The Scalene Triangle Calculator emerges as a valuable tool, providing
mathematicians, architects, and engineers with the means to effortlessly compute the area of a scalene triangle. Let’s embark on a journey to unravel the significance of this calculator, understand
its importance, and delve into its usage.
The scalene triangle, characterized by its three unequal sides and angles, presents intriguing challenges and opportunities in geometry and construction. Calculating the area of a scalene triangle is
essential for various applications, including architectural design, surveying, and trigonometry. By accurately determining the area, professionals can make informed decisions regarding materials,
dimensions, and structural integrity, ensuring the success and efficiency of their projects.
How to Use
Utilizing the Scalene Triangle Calculator is a straightforward process, accessible to students, professionals, and enthusiasts alike. Begin by inputting the lengths of the three sides of the scalene
triangle into the designated fields. Once the values are entered, click the “Calculate” button. The calculator swiftly applies Heron’s formula, a mathematical theorem named after the ancient Greek
mathematician Hero of Alexandria, to compute the area of the scalene triangle. The result provides users with valuable insights into the spatial dimensions of the triangle, empowering them to proceed
with confidence in their geometric analyses and design endeavors.
10 FAQs and Answers
1. What is a scalene triangle?
A scalene triangle is a type of triangle characterized by having three unequal sides and three unequal angles.
2. Why is calculating the area of a scalene triangle important?
Calculating the area of a scalene triangle is important for various applications in geometry, architecture, engineering, and surveying. It provides insights into the spatial dimensions of the
triangle, facilitating accurate measurements and design decisions.
3. What is Heron’s formula?
Heron’s formula is a mathematical theorem used to calculate the area of a triangle when the lengths of all three sides are known. It is named after the ancient Greek mathematician Hero of Alexandria.
4. How does Heron’s formula work?
Heron’s formula calculates the area of a triangle using the lengths of its three sides and the semi-perimeter, which is half the sum of the lengths of the three sides.
5. Can the Scalene Triangle Calculator be used for triangles with negative or zero side lengths?
No, the Scalene Triangle Calculator is designed to calculate the area of scalene triangles with positive side lengths only. Negative or zero side lengths are not applicable in geometry.
6. What are some real-world applications of the Scalene Triangle Calculator?
The Scalene Triangle Calculator finds applications in architecture, engineering, construction, cartography, and surveying, where accurate geometric calculations are essential for designing
structures, landscapes, and infrastructure.
7. Can the Scalene Triangle Calculator handle triangles with decimal side lengths?
Yes, the Scalene Triangle Calculator can handle triangles with decimal side lengths, providing accurate results for geometric analyses and design purposes.
8. Is there a limit to the size of triangles that the Scalene Triangle Calculator can handle?
No, the Scalene Triangle Calculator can compute the area of scalene triangles of any size, ranging from small-scale models to large-scale structures, with ease and precision.
9. How can I verify the accuracy of the calculated area?
You can verify the accuracy of the calculated area by comparing it with alternative methods of calculating triangle area, such as trigonometric formulas or geometric constructions.
10. Are there alternative methods for calculating the area of a scalene triangle?
Yes, in addition to Heron’s formula, the area of a scalene triangle can be calculated using trigonometric formulas, such as the sine or cosine rules, or by decomposing the triangle into simpler
shapes and summing their areas.
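The computation the calculator performs can be sketched in a few lines of Python using Heron's formula as described in the FAQs above (the function name is ours):

```python
import math

def scalene_triangle_area(a, b, c):
    """Area via Heron's formula; sides must be positive and satisfy
    the triangle inequality."""
    if min(a, b, c) <= 0:
        raise ValueError("side lengths must be positive")
    s = (a + b + c) / 2          # semi-perimeter
    radicand = s * (s - a) * (s - b) * (s - c)
    if radicand < 0:
        raise ValueError("sides do not form a triangle")
    return math.sqrt(radicand)

# A 3-4-5 right triangle has area (3 * 4) / 2 = 6
print(scalene_triangle_area(3, 4, 5))  # -> 6.0
```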
In conclusion, the Scalene Triangle Calculator serves as a valuable tool for mathematicians, architects, engineers, and students seeking to unravel the mysteries of triangular geometry. By
effortlessly computing the area of scalene triangles using Heron’s formula, this calculator empowers users to make informed decisions, solve complex problems, and unleash their creative potential in
various fields. As the demand for precision and efficiency continues to drive advancements in science and technology, the Scalene Triangle Calculator remains a cornerstone of geometric analysis,
inspiring innovation and discovery across diverse disciplines.
MCAT Basics: Work and Energy
Sam Smith discusses energy and work for the Chem/Phys section of the MCAT. He covers the mathematical and conceptual definitions of work, the sign convention of work, mechanical advantage and path
dependency. He also covers Energy, including the general definition of energy, comparing and contrasting energy and work, the different types of energy, and heat transfer.
• [01:43] Work
• [06:28] Units of Work
• [07:18] Sign Conventions of Work
• [11:42] Work Kinetic Energy Theorem
• [15:38] Path Dependent vs Path Independent Work
• [18:29] Mechanical Advantage
• [21:08] The Efficiency of a Machine
• [23:17] Work and Power
• [24:25] Work and Energy
• [29:10] Types of Heat Transfer
What is Work?
Work is a measure of energy transfer. Also, it is a scalar value – meaning that it does not have a direction, only a magnitude. In some cases, when calculating work, you must account for the angle or
direction in which the force is being applied to move an object.
Work = force × distance
For example, if you are pushing a wooden block across the floor, a force on the block that is perpendicular to the floor will not move the block at all. On the other hand, a force on the block that
is parallel to the floor will move the block most effectively- this is the optimal angle at which force can be applied to move the wooden block. At this angle, all the force being applied on the
block will be converted into work.
When force is applied at an angle (θ):
Work = force × distance × cos θ
When force is applied perfectly parallel to the floor, the angle between the floor and the force will be 0°:
Work = force × distance × cos (0°)
Work = force × distance × 1
Work = force × distance
When force is applied perfectly perpendicular to the floor, the angle between the floor and the force will be 90°:
Work = force × distance × cos (90°)
Work = force × distance × 0
Work = 0
Hence, you are doing no work when you are applying a force perpendicular to the floor.
The Unit Circle is an important part of dealing with these questions. For example, when tackling a question that says force is applied at an angle of 60°, knowing that cos(60°) = ½ is essential.
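The relationship above is easy to verify numerically (a minimal sketch; the function name is ours):

```python
import math

def work_done(force, distance, angle_deg=0.0):
    """Work (J) done by a force (N) over a distance (m), applied at
    angle_deg degrees to the direction of motion."""
    return force * distance * math.cos(math.radians(angle_deg))

print(work_done(10, 2, 0))    # 20 J: force parallel to the motion
print(work_done(10, 2, 60))   # 10 J: cos(60°) = 1/2 halves the work
print(work_done(10, 2, 90))   # ~0 J: a perpendicular force does no work
```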
Units of Work
F [=] Newtons (N) or pound-force (lbf)
d [=] meters (m) or feet (ft)
W [=] work measured in Newton meter (Nm) or foot pound-force (ft-lb)
A Newton meter (Nm) is equal to a joule (J), which is the unit of energy in the SI system. The units of work are the same as the units of energy.
Sign Conventions for Work
Thermodynamic Work
Work can be positive or negative. If work is done to the system, the system is gaining energy so the sign is positive. If work is done by the system, the system is losing energy so the sign is
Mechanical Work
If the applied force on an object and its displacement are in the same direction, then the work is positive. If the applied force on an object and its displacement are in opposite directions, then
the work is negative.
Work Kinetic Energy Theorem
Kinetic energy is the energy associated with a moving object.
Kinetic Energy = ½ × mass × velocity^2
Ek = ½ mv^2
Work causes an object to change its motion. If an object is moving at a constant velocity, no net work is done on the object.
The net work done on an object is equal to the change in that object’s kinetic energy.
If Ek (at initial) = Ek (at final point), then W= 0.
Also, the heavier an object is, the more work needs to be done to get it moving to a certain velocity.
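A quick sketch of the theorem (names are ours):

```python
def net_work(mass, v_initial, v_final):
    """Net work on an object equals its change in kinetic energy (J)."""
    ke = lambda v: 0.5 * mass * v ** 2
    return ke(v_final) - ke(v_initial)

# Accelerating a 2 kg object from rest to 3 m/s takes 9 J of net work;
# constant velocity means zero net work.
print(net_work(2, 0, 3))  # 9.0
print(net_work(2, 3, 3))  # 0.0
```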
Path Dependent vs Path Independent Work
Path dependent variables are variables in which the path between two points that an object takes matters for that variable. Friction is an example of a force that is path dependent. In the presence
of friction, some kinetic energy is always transformed into thermal energy.
Path independent variables are variables in which the path between two points that an object takes does not matter. Gravitational potential energy is an example of a force that is path independent.
Work can be a path dependent and a path independent variable. It depends on what type of force is being applied to do the work.
Friction is a non-conservative force – work done against friction is path dependent. In the presence of friction, some kinetic energy is always transformed to thermal energy, so mechanical energy is
not conserved. Work that is done by conservative forces (forces that don’t leak energy, like gravity), results in work that is path independent.
Mechanical advantage
Mechanical advantage is a measure of force amplification achieved by a mechanical tool or device. It is expressed as a number that is dimensionless; it does not have units. Examples are pulleys,
wedges, inclined planes, wheels, axles, etc.
MA = output force / input force
Hence, if MA = 2, you are getting 2X the amount of force out of the machine than you are putting in.
If MA < 1, then you are putting more work into the machine than you are getting out.
Mechanical tools generally work by applying less force over a greater distance. For a machine or tool, the work in must be equal to the work out:
Work (into the machine) = Work (out of the machine)
∴ Force × distance (into the machine) = Force × distance (out of the machine)
A machine can work in one of 2 ways:
1. An increased distance over which a force is applied
2. An increased force applied over a shorter distance
Real life example: When you’re throwing a baseball, your arm is acting as a lever. What allows pitchers to throw the ball in excess of 100mph is the fact that the arm acts as a lever. The force that
is exerted on a baseball when you’re throwing it is a lot more than the force that is exerted by the shoulder.
Mechanical Efficiency
The efficiency of a machine is the ratio between work the machine supplies and work that is put into the machine.
Efficiency of a machine = output work / input work
In ideal situations, efficiency will be equal to 1 (100%). In real life, however, efficiency will be less than 1; the work out will be less than the work in.
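Both ratios are one-liners (the numbers below are illustrative):

```python
def mechanical_advantage(output_force, input_force):
    """Dimensionless force amplification of a machine."""
    return output_force / input_force

def efficiency(output_work, input_work):
    """Fraction of the input work the machine actually delivers."""
    return output_work / input_work

# A lever that turns a 50 N effort into a 150 N output has MA = 3.
print(mechanical_advantage(150, 50))   # 3.0
# A real machine delivering 80 J for every 100 J supplied is 80% efficient.
print(efficiency(80, 100))             # 0.8
```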
Work and Power
Power is the rate at which work is performed.
Power = work / time
∴ Power = force × distance / time
∴ Power = force × velocity
Power has units, J/s, or Watts.
Work and Energy
Energy is the ability to do work, that is, to exert a force that causes the displacement of an object. Work is the transfer of energy by a force exerted over a particular distance. Energy uses the same units as work: the Joule (a Joule is equal to a Nm).
Kinetic energy
Kinetic energy is the energy in moving objects. The faster an object moves or the heavier it is, the more kinetic energy it has.
Ek = ½ mv^2
Potential energy
Potential energy is energy associated with an object's physical position. Gravitational potential energy is given by:

U = mgh

U – potential energy
m – mass of the object
g – acceleration due to gravity
h – height of the object (relative to a reference point)
For example, a rock on top of a hill has a lot of potential energy. Once it rolls down the hill, its potential energy is converted to kinetic energy as it speeds up. Once it reaches the bottom of the hill, it has no more potential energy, if your reference point is the bottom of the hill.
Total mechanical energy
Total mechanical energy is the sum of kinetic energy and potential energy that exists for an object or system.
Total mechanical energy = potential energy + kinetic energy
Emech = U + K
Thermal energy
Thermal energy is a type of kinetic energy. It is the energy of molecular motion.
For example, when bringing a pot of water to a boil, the water molecules move progressively faster until the water molecules can no longer stay in the liquid phase, and enter the gas phase.
The 3 Types of Heat Transfer
Conduction is heat transfer through direct physical contact.
Convection is heat transfer through the movement of fluid. When fluid is heated, it becomes less dense than fluid at a lower temperature. Due to this difference in density, the heated fluid tends to rise, carrying heat with it as it circulates.
Radiation is heat transfer through electromagnetic radiation.
All-pass Peaking/Bell filter for audio equalisation - ASN Home
A Peaking or Bell filter is a type of audio equalisation filter that boosts or attenuates the magnitude of a specified set of frequencies around a centre frequency in order to perform magnitude equalisation. As seen in the plot below, the filter gets its name from the shape of its magnitude spectrum (blue line), which resembles a bell curve.
Frequency response (magnitude shown in blue, phase shown in purple) of a 2nd order Bell filter peaking at 125Hz.
All-pass filters
Central to the Bell filter is the so called All-pass filter. All-pass filters provide a simple way of altering/improving the phase response of an IIR without affecting its magnitude response. As
such, they are commonly referred to as phase equalisers and have found particular use in digital audio applications.
A second order all-pass filter is defined as:
\( A(z)=\Large\frac{r^2-2rcos \left( \frac{2\pi f_c}{fs}\right) z^{-1}+z^{-2}}{1-2rcos \left( \frac{2\pi f_c}{fs}\right)z^{-1}+r^2 z^{-2}} \)
Notice how the numerator and denominator coefficients are arranged as a mirror image pair of one another. The mirror image property is what gives the all-pass filter its desirable property, namely
allowing the designer to alter the phase response while keeping the magnitude response constant or flat over the complete frequency spectrum.
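This flat-magnitude property can be checked numerically by evaluating \(A(z)\) on the unit circle. The sketch below uses arbitrary illustrative values for r and fc:

```python
import cmath
import math

fs, fc, r = 48000.0, 125.0, 0.9
c = 2 * r * math.cos(2 * math.pi * fc / fs)
num = [r * r, -c, 1.0]   # r^2 - 2r cos(.) z^-1 + z^-2
den = [1.0, -c, r * r]   # mirror image of the numerator

def response(coeffs_num, coeffs_den, w):
    """Evaluate H(e^{jw}) for coefficient lists given in powers of z^-1."""
    z_inv = cmath.exp(-1j * w)
    n = sum(b * z_inv ** k for k, b in enumerate(coeffs_num))
    d = sum(a * z_inv ** k for k, a in enumerate(coeffs_den))
    return n / d

# |A(e^{jw})| = 1 at every frequency -- the all-pass property.
for w in [0.01, 0.5, 1.0, 2.0, 3.0]:
    assert abs(abs(response(num, den, w)) - 1.0) < 1e-12
```

The mirror-image coefficient pairing is what makes every magnitude sample come out as exactly 1, regardless of the chosen r and fc.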
A Bell filter can be constructed from the \(A(z)\) filter via the following transfer function:

\( H(z)=\Large{\frac{1}{2}}\left[\normalsize{(1+K)} + A(z)\normalsize{(1-K)} \right] \)

After some algebraic simplification, we obtain the transfer function for the Peaking or Bell filter as:
\(H(z)=\Large{\frac{1}{2}}\left[\normalsize{(1+K)} + \underbrace{\Large\frac{k_2 + k_1(1+k_2)z^{-1}+z^{-2}}{1+k_1(1+k_2)z^{-1}+k_2 z^{-2}}}_{\text{all-pass filter}}\normalsize{(1-K)} \right] \)
• \(K\) is used to set the gain and sign of the peak
• \(k_1\) sets the peak centre frequency
• \(k_2\) sets the bandwidth of the peak
A Bell filter may easily be implemented in ASN FilterScript as follows:
ClearH1; // clear primary filter from cascade
interface BW = {0,2,0.1,0.5}; // filter bandwidth
interface fc = {0, fs/2,fs/100,fs/4}; // peak/notch centre frequency
interface K = {0,3,0.1,0.5}; // gain/sign
Pz = {1,k1*(1+k2),k2}; // define denominator coefficients
Qz = {k2,k1*(1+k2),1}; // define numerator coefficients
Num = (Pz*(1+K) + Qz*(1-K))/2;
Den = Pz;
Gain = 1;
This code may now be used to design a suitable Bell filter, where the exact values of \(K, f_c\) and \(BW\) may be easily found by tweaking the interface variables and seeing the results in
real-time, as described below.
Designing the filter on the fly
Central to the interactivity of the FilterScript IDE (integrated development environment) are the so called interface variables. An interface variable is simply stated: a scalar input variable that
can be used modify a symbolic expression without having to re-compile the code – allowing designers to design on the fly and quickly reach an optimal solution.
As seen in the code example above, interface variables must be defined in the initialisation section of the code, and may contain constants (such as fs and pi) and the simple mathematical operators multiply (*) and divide (/). Adding functions to an interface variable is not supported.
An interface variable is defined as vector expression:
interface name = {minimum, maximum, step_size, default_value};
Where all entries must be real scalar values. Vectors and complex values will not compile.
Real-time updates
All interface variables are modified via the interface variable controller GUI. After compiling the code, use the interface variable controller to tweak the interface variables values and see the
effects on the transfer function. If testing on live audio, you may stream a loaded audio file and adjust the interface variables in real-time in order to hear the effects of the new settings.
Keras documentation: Hashing layer
Hashing layer
Hashing class
tf.keras.layers.Hashing(num_bins, mask_value=None, salt=None, output_mode="int", sparse=False, **kwargs)
A preprocessing layer which hashes and bins categorical features.
This layer transforms categorical inputs to hashed output. It converts ints or strings element-wise to ints in a fixed range. The stable hash function uses tensorflow::ops::Fingerprint to produce the same output consistently across all platforms.
This layer uses FarmHash64 by default, which provides a consistent hashed output across different platforms and is stable across invocations, regardless of device and context, by mixing the input
bits thoroughly.
If you want to obfuscate the hashed output, you can also pass a random salt argument in the constructor. In that case, the layer will use the SipHash64 hash function, with the salt value serving as
additional input to the hash function.
For an overview and full list of preprocessing layers, see the preprocessing guide.
Example (FarmHash64)
>>> layer = tf.keras.layers.Hashing(num_bins=3)
>>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]
>>> layer(inp)
<tf.Tensor: shape=(5, 1), dtype=int64, numpy=
Example (FarmHash64) with a mask value
>>> layer = tf.keras.layers.Hashing(num_bins=3, mask_value='')
>>> inp = [['A'], ['B'], [''], ['C'], ['D']]
>>> layer(inp)
<tf.Tensor: shape=(5, 1), dtype=int64, numpy=
Example (SipHash64)
>>> layer = tf.keras.layers.Hashing(num_bins=3, salt=[133, 137])
>>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]
>>> layer(inp)
<tf.Tensor: shape=(5, 1), dtype=int64, numpy=
Example (Siphash64 with a single integer, same as salt=[133, 133])
>>> layer = tf.keras.layers.Hashing(num_bins=3, salt=133)
>>> inp = [['A'], ['B'], ['C'], ['D'], ['E']]
>>> layer(inp)
<tf.Tensor: shape=(5, 1), dtype=int64, numpy=
• num_bins: Number of hash bins. Note that this includes the mask_value bin, so the effective number of bins is (num_bins - 1) if mask_value is set.
• mask_value: A value that represents masked inputs, which are mapped to index 0. None means no mask term will be added and the hashing will start at index 0. Defaults to None.
• salt: A single unsigned integer or None. If passed, the hash function used will be SipHash64, with these values used as an additional input (known as a "salt" in cryptography). These should be
non-zero. If None, uses the FarmHash64 hash function. It also supports tuple/list of 2 unsigned integer numbers, see reference paper for details. Defaults to None.
• output_mode: Specification for the output of the layer. Values can be "int", "one_hot", "multi_hot", or "count", configuring the layer as follows:
□ "int": Return the integer bin indices directly.
□ "one_hot": Encodes each individual element in the input into an array the same size as num_bins, containing a 1 at the input's bin index. If the last dimension is size 1, will encode on that
dimension. If the last dimension is not size 1, will append a new dimension for the encoded output.
□ "multi_hot": Encodes each sample in the input into a single array the same size as num_bins, containing a 1 for each bin index index present in the sample. Treats the last dimension as the
sample dimension, if input shape is (..., sample_length), output shape will be (..., num_tokens).
□ "count": As "multi_hot", but the int array contains a count of the number of times the bin index appeared in the sample. Defaults to "int".
• sparse: Boolean. Only applicable to "one_hot", "multi_hot", and "count" output modes. If True, returns a SparseTensor instead of a dense Tensor. Defaults to False.
• **kwargs: Keyword arguments to construct a layer.
Input shape
A single or list of string, int32 or int64 Tensor, SparseTensor or RaggedTensor of shape (batch_size, ...,)
Output shape
An int64 Tensor, SparseTensor or RaggedTensor of shape (batch_size, ...). If any input is RaggedTensor then output is RaggedTensor, otherwise if any input is SparseTensor then output is SparseTensor,
otherwise the output is Tensor.
|
{"url":"https://keras.io/2.15/api/layers/preprocessing_layers/categorical/hashing/","timestamp":"2024-11-11T01:11:09Z","content_type":"text/html","content_length":"26346","record_id":"<urn:uuid:ca2977a7-7d20-40ff-8140-d55d04995aae>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00088.warc.gz"}
|
Philosophy of Mathematics Seminar (Week 4, TT18)
Weakening the principles of classical logic in order to retain desirable properties of intensional notions such as truth has been widely embraced as a response to the intensional paradoxes. Advocates
of classical logic who resist logical revision have argued that our standard for reasoning with intensional notions should not be different from that employed in our best scientific and mathematical
theories. A specific version of this argument, due to Halbach, uses a proof-theoretic analysis of two classical and nonclassical theories of Kripkean truth (known as KF and PKF) to show that when we
give up classical logic, we must in the process give up important non-semantic patterns of reasoning, in particular in mathematics. The nonclassical logician has a natural response to Halbach's
argument: they can bite the bullet and argue that even if one must give up the universality of these patterns of reasoning, one does not thereby lose too much in the way of genuine mathematics.
However, despite first appearances, one does give up substantial mathematics by accepting nonclassical logic in this context. Drawing on recent work in reverse mathematics by Montalbán and Neeman, I
show that an ordinary mathematical theorem concerning indecomposable linear orderings is proof-theoretically reducible to the classical theory of Kripkean truth KF, but not to the weaker nonclassical
theory PKF.
Philosophy of Mathematics Seminar Convenors: Dr James Stud, Dr Daniel Isaacson, and Prof Volker Halbach
|
{"url":"https://www.philosophy.ox.ac.uk/event/philosophy-of-mathematics-seminar-week-4-tt18","timestamp":"2024-11-13T22:45:12Z","content_type":"text/html","content_length":"113603","record_id":"<urn:uuid:c4a26ee7-54a8-46a9-b087-49df17845b20>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00867.warc.gz"}
|
Totals in Charts
2014-06-24 02:15 AM
The total in a chart is not the sum of the individual rows of the chart.
Instead, the total and the subtotals are calculated using the expression – but on a larger subset of the data than for the individual row.
Usually, the two methods result in the same numbers, but sometimes there is a huge difference. One example of this is if you use a non-linear function, e.g. Count(distinct …) as expression. The
example below clearly shows this.
The source data to the left assigns a country to each state, and if you count the number of countries per state using a Count(distinct Country), you will get the chart to the right: Each state
belongs to one country only, and the total number of countries is 2, also if the chart has four rows.
A second example is if you have a many-to-many relationship in the data. In the example below, you have three products, each with a sales amount. But since each product can belong to several product
groups, the sales amounts per product group will not add up: The total will be smaller than the sum of the individual rows, since there is an overlap between the product groups. The summation will be
made in the fact table.
Another way to describe it would be to say that a specific dollar belongs to both product groups, and would be counted twice if you just summed the rows.
In both cases, QlikView will show the correct number, given the data. To sum the rows would be incorrect.
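The product-group overlap can be reproduced in a few lines of plain Python (product names and amounts invented for illustration):

```python
# Each product has one sales amount, but can belong to several
# product groups (a many-to-many relationship).
sales = {"P1": 100, "P2": 200, "P3": 300}
groups = {"GroupA": ["P1", "P2"], "GroupB": ["P2", "P3"]}

# Per-group rows: the shared product P2 is counted in both groups.
per_group = {g: sum(sales[p] for p in ps) for g, ps in groups.items()}
sum_of_rows = sum(per_group.values())

# The total, computed over the distinct underlying facts, is smaller.
true_total = sum(sales[p] for p in {p for ps in groups.values() for p in ps})

print(per_group)    # {'GroupA': 300, 'GroupB': 500}
print(sum_of_rows)  # 800
print(true_total)   # 600
```

Summing the rows double-counts the overlapping product, which is exactly why the total line is evaluated over the larger data set rather than by adding the rows.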
So, how does this affect you as an application developer?
Normally not very much. But it is good to be aware of it, and I would suggest the following:
• When you write your expression, you should have the total line in mind. Usually, the expression will automatically be right also for the individual rows.
• Always use an aggregation function. This will ensure that QlikView is able to calculate the total correctly.
• If you want an average on the total line, you should most likely divide your expression by Count(distinct <Dim>). Then it will work both for the individual rows (where the count is 1) and the
total line. Example:
Sum( Amount ) / Count( distinct Customer )
• For cases where you want to show something completely different in the total line, you should consider the Dimensionality() function, that returns 0, 1, 2, … depending on whether the evaluation
takes place in a total, subtotal or row. Example:
If( Dimensionality() = 0, <Total line expression>, <Individual line expression> )
But what if I want to show the sum of the individual rows? I don’t want the expression to be calculated over a larger data set. What do I do then?
There are two ways to do this. First, you can use an Aggr() function as expression:
Sum( Aggr( <Original expression> , <Dimension> ) )
This will work in all objects. Further, if you have a straight table, you have a setting on the Expressions tab where you can specify the Total mode.
Setting this to Sum of Rows will change the chart behavior to show exactly this: The sum of the rows.
|
{"url":"https://community.qlik.com/t5/Design/Totals-in-Charts/ba-p/1464797","timestamp":"2024-11-04T23:36:50Z","content_type":"text/html","content_length":"161111","record_id":"<urn:uuid:6b7e3f45-60e1-4ca5-be58-ff9ddbe83ded>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00242.warc.gz"}
|
Eugene Demidenko | M-statistics
top of page
A comprehensive resource providing new statistical methodologies and demonstrating how new approaches work for applications
M-statistics introduces a new approach to statistical inference, redesigning the fundamentals of statistics and improving on the classical methods we already use. This book targets exact optimal
statistical inference for a small sample. Two competing approaches are offered, maximum concentration (MC) and mode (MO) statistics, combined under one methodological umbrella, hence the symbolic
equation M = MC + MO. M-statistics defines an estimator as the limit point of the MC or MO exact optimal confidence interval when the confidence level approaches zero, yielding the MC and MO
estimators, respectively. Neither mean nor variance plays a role in M-statistics theory.
Novel statistical methodologies in the form of double-sided unbiased and short confidence intervals and tests apply to major statistical parameters:
• Exact statistical inference for small sample sizes is illustrated with effect size and coefficient of variation, the rate parameter of the Pareto distribution, two-sample statistical inference
for normal variance, and the rate of exponential distributions.
• M statistics is illustrated with discrete, binomial and Poisson distributions. Novel estimators eliminate paradoxes with the classic unbiased estimators when the outcome is zero.
• Exact optimal statistical inference applies to correlation analysis including Pearson correlation, squared correlation coefficient, and coefficient of determination. New MC and MO estimators
along with optimal statistical tests, accompanied by respective power functions, are developed.
• M statistics is extended to the multidimensional parameter and illustrated with the simultaneous statistical inference for the mean and standard deviation, shape parameters of the beta
distribution, the two-sample binomial distribution, and finally, nonlinear regression.
Our new developments are accompanied by respective algorithms and R codes, available at GitHub, and as such readily available for applications.
The M-statistics book is suitable for professionals and students alike. It is highly useful for theoretical statisticians, teachers, researchers, and data science analysts as an alternative to
classical and approximate statistical inference.
advanced statistics
Understand how the Pandemic Changed the Job Market
Professionals at Zippia have conducted market research to understand the evolving job market post-pandemic. Zippia have posted Eugene Demidenko's analysis on the subject here.
Advanced Statistics with Applications in R fills the gap between several excellent theoretical statistics textbooks and many applied statistics books where teaching reduces to using existing
packages. This book looks at what is under the hood. Many statistics issues including the recent crisis with p-value are caused by misunderstanding of statistical concepts due to poor theoretical
background of practitioners and applied statisticians. This book is the product of forty years of experience in teaching probability and statistics and their applications for solving real-life
problems.
There are more than 442 examples in the book: basically every probability or statistics concept is illustrated with an example accompanied by R code. Many examples, such as Who said #? What team
is better? The fall of the Roman empire, James Bond chase problem, Black Friday shopping, Free fall equation: Aristotle or Galilei, and many others are intriguing. These examples cover biostatistics,
finance, physics and engineering, text and image analysis, epidemiology, spatial statistics, sociology, etc.
Advanced Statistics with Applications in R teaches students to use theory for solving real-life problems through computations: there are about 500 R codes and 100 datasets. This data can be freely
downloaded from the button below.
Mixed Models: Theory and Applications with R
Second Edition
This book features unique applications of mixed model methodology, as well as:
• Comprehensive theoretical discussions illustrated by examples and figures
• Problems and extended projects requiring simulations in R intended to reinforce material
• Summaries of major results and general points of discussion at the end of each chapter
• Open problems in mixed modeling methodology, which can be used as the basis for research or PhD dissertations
• Over 300 exercises, end-of-section problems, updated data sets, and R subroutines
ISBN 9781118091579
About the Author
PROFESSOR EUGENE DEMIDENKO works at Dartmouth College in the Department of Biomedical Science, he teaches statistics at Mathematics Department to undergraduate students and to graduate students at
Quantitative Biomedical Sciences at Geisel School of Medicine. He has brought experience in theoretical and applied statistics, such as epidemiology and biostatistics, statistical analysis of images,
tumor regrowth, ill-posed inverse problems in engineering and technology, optimal portfolio allocation, among others. His first book with Wiley Mixed Model: Theory and Applications with R gained much
popularity among researchers and graduate/PhD students. Prof. Demidenko is the author of a controversial paper The P-value You Can't Buy published in 2016 in The American Statistician.
According to a recent database compiled by Stanford University, Dr. Demidenko belongs to top 2% World scientists around the globe across all disciplines.
bottom of page
|
{"url":"https://www.eugened.org","timestamp":"2024-11-11T19:22:58Z","content_type":"text/html","content_length":"585039","record_id":"<urn:uuid:2cded5fe-7fb2-4215-bb77-869f82e82e06>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00850.warc.gz"}
|
Teacher access
Request a demo account. We will help you get started with our digital learning environment.
Student access
Is your university not a partner? Get access to our courses via
Pass Your Math
independent of your university. See pricing and more.
Or visit
if you are taking an OMPT exam.
|
{"url":"https://cloud.sowiso.nl/courses/theory/6/108/1159/en","timestamp":"2024-11-12T09:39:34Z","content_type":"text/html","content_length":"79424","record_id":"<urn:uuid:b4a55458-b146-45d7-a1fa-ecfe158091f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00176.warc.gz"}
|
IEC 60050 - International Electrotechnical Vocabulary
Definition: mean value of the information content of the events in a finite set of mutually exclusive and jointly exhaustive events
$H = \sum_{i=1}^{n} p(x_i) \cdot I(x_i) = \sum_{i=1}^{n} p(x_i) \cdot \log\frac{1}{p(x_i)}$
where $X = \{x_1, \ldots, x_n\}$ is the set of events $x_i$ $(i = 1, \ldots, n)$, $I(x_i)$ are their information contents and $p(x_i)$ the probabilities of their occurrences, subject to $\sum_{i=1}^{n} p(x_i) = 1$.
EXAMPLE Let $\{a, b, c\}$ be a set of three events and let $p(a) = 0{,}5$, $p(b) = 0{,}25$ and $p(c) = 0{,}25$ be the probabilities of their occurrences. The entropy of this set is $H(X) = p(a) \cdot I(a) + p(b) \cdot I(b) + p(c) \cdot I(c) = 1{,}5~\mathrm{Sh}$.
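The example can be checked numerically in plain Python, using the base-2 logarithm so the result comes out in shannons:

```python
from math import log2

# Probabilities of the three mutually exclusive, jointly exhaustive events.
p = {"a": 0.5, "b": 0.25, "c": 0.25}

# Information content I(x) = log2(1 / p(x)); entropy is its mean value.
H = sum(px * log2(1 / px) for px in p.values())
print(H)  # 1.5
```

Here I(a) = 1 Sh and I(b) = I(c) = 2 Sh, so H = 0.5·1 + 0.25·2 + 0.25·2 = 1.5 Sh.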
|
{"url":"https://electropedia.org/iev/iev.nsf/IEVref_xref/en:171-07-15","timestamp":"2024-11-04T20:47:35Z","content_type":"text/html","content_length":"26986","record_id":"<urn:uuid:36974d2f-e804-4a42-8de0-9c3725e68f0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00414.warc.gz"}
|
The covariant Stark effect
This paper examines the Stark effect, as a first order perturbation of manifestly covariant hydrogen-like bound states. These bound states are solutions to a relativistic Schrödinger equation with
invariant evolution parameter, and represent mass eigenstates whose eigenvalues correspond to the well-known energy spectrum of the nonrelativistic theory. In analogy to the nonrelativistic case, the
off-diagonal perturbation leads to a lifting of the degeneracy in the mass spectrum. In the covariant case, not only do the spectral lines split, but they acquire an imaginary part which is linear in
the applied electric field, thus revealing induced bound state decay in first order perturbation theory. This imaginary part results from the coupling of the external field to the non-compact boost
generator. In order to recover the conventional first order Stark splitting, we must include a scalar potential term. This term may be understood as a fifth gauge potential, which compensates for
dependence of gauge transformations on the invariant evolution parameter.
|
{"url":"https://cris.tau.ac.il/en/publications/the-covariant-stark-effect","timestamp":"2024-11-05T09:18:12Z","content_type":"text/html","content_length":"47780","record_id":"<urn:uuid:3235977c-9e7d-49c2-ab8a-aa31d4112dd9>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00118.warc.gz"}
|
Free Weight Converter Online | Ounce to Pound and Pound to Ounce
Weight Converter
Easily convert between ounces and pounds. Convert from 1 to 50 pounds into ounces, or from 1 to 200 ounces , You can convert your desired weight too.
How many ounces in 1 pounds?
How many ounces in 2 pounds?
How many ounces in 3 pounds?
How many ounces in 4 pounds?
How many ounces in 5 pounds?
How many ounces in 6 pounds?
How many ounces in 7 pounds?
How many ounces in 8 pounds?
How many ounces in 9 pounds?
How many ounces in 10 pounds?
How many ounces in 11 pounds?
How many ounces in 12 pounds?
How many ounces in 13 pounds?
How many ounces in 14 pounds?
How many ounces in 15 pounds?
How many ounces in 16 pounds?
How many ounces in 17 pounds?
How many ounces in 18 pounds?
How many ounces in 19 pounds?
How many ounces in 20 pounds?
How many ounces in 21 pounds?
How many ounces in 22 pounds?
How many ounces in 23 pounds?
How many ounces in 24 pounds?
How many ounces in 25 pounds?
How many ounces in 26 pounds?
How many ounces in 27 pounds?
How many ounces in 28 pounds?
How many ounces in 29 pounds?
How many ounces in 30 pounds?
How many ounces in 31 pounds?
How many ounces in 32 pounds?
How many ounces in 33 pounds?
How many ounces in 34 pounds?
How many ounces in 35 pounds?
How many ounces in 36 pounds?
How many ounces in 37 pounds?
How many ounces in 38 pounds?
How many ounces in 39 pounds?
How many ounces in 40 pounds?
How many ounces in 41 pounds?
How many ounces in 42 pounds?
How many ounces in 43 pounds?
How many ounces in 44 pounds?
How many ounces in 45 pounds?
How many ounces in 46 pounds?
How many ounces in 47 pounds?
How many ounces in 48 pounds?
How many ounces in 49 pounds?
How many ounces in 50 pounds?
How many ounces in 55 pounds?
How many ounces in 60 pounds?
How many ounces in 65 pounds?
How many ounces in 70 pounds?
How many ounces in 75 pounds?
How many ounces in 80 pounds?
How many ounces in 85 pounds?
How many ounces in 90 pounds?
How many ounces in 95 pounds?
How many ounces in 100 pounds?
How many ounces in 105 pounds?
How many ounces in 110 pounds?
How many ounces in 115 pounds?
How many ounces in 120 pounds?
How many ounces in 125 pounds?
How many ounces in 130 pounds?
How many ounces in 135 pounds?
How many ounces in 140 pounds?
How many ounces in 145 pounds?
How many ounces in 150 pounds?
How many ounces in 155 pounds?
How many ounces in 160 pounds?
How many ounces in 165 pounds?
How many ounces in 170 pounds?
How many ounces in 175 pounds?
How many ounces in 180 pounds?
How many ounces in 185 pounds?
How many ounces in 190 pounds?
How many ounces in 195 pounds?
How many ounces in 200 pounds?
Weight Converter
Use the tool to get accurate measurements and a user-friendly interface with no complications. Just one click, and you will get the result on the screen. It also helps save time by quickly
converting pounds to ounces and ounces to pounds online.
Accuracy in Measurements
Luletools offers an accurate pound-to-ounce weight conversion. Accuracy matters when making measurements, especially in cooking, nutrition, or student work. Luletools offers
a simple conversion option with no errors, so you get the right result for your desired weight. This is especially useful in food preparation, where minor differences can significantly
change a dish's taste. Using the Luletools weight converter, you can be sure of your measurements.
User-Friendly Interface
Unlike other tools, Luletools has a very easy-to-use interface. The tool is developed for beginners and advanced users so that the users don’t have to make those complicated calculations before
conversion. Just enter the value, and with one click, you can easily convert pounds to ounces and get the result on the screen. No more hassle in calculations. Also, it is suitable for everyone,
including home users, workplaces, and learners. Now, you can always be ready for tough calculations without using formulas.
Multitasking in Solutions
The Luletools weight conversion tool is versatile and can be used in various applications beyond cooking. Whether you’re figuring out ratios for nutrition plans or shipments, or working on a DIY
task, the weight conversion calculator is of great help. This flexibility helps save you a lot of time. Open the Luletools weight conversion tool and enter the value you want to
convert. The answer will be on screen within seconds. Users can use it to calculate a variety of conversions.
Frequently Asked Questions
What is a weight calculator?
Where can I find the Luletools weight calculator?
What if I have a decimal value to convert?
Can I convert ounces to pounds using a weight calculator?
How do I use the Luletools weight converter tool?
What are the uses of Luletools weight converter?
Is the conversion accurate?
Is there a limit to using this tool?
|
{"url":"https://luletools.com/weight-conversions","timestamp":"2024-11-09T09:50:22Z","content_type":"text/html","content_length":"76729","record_id":"<urn:uuid:52bed5ce-5e83-4f47-8ccc-d412ea737d57>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00651.warc.gz"}
|
How To Convert A Tensor To Numpy Array In Tensorflow
To convert a tensor to a NumPy array in TensorFlow, you can use the numpy() method. This method allows you to extract the values from a tensor and convert them into a NumPy array, which can then be
further processed or used in other Python libraries. Here are two possible ways to convert a tensor to a NumPy array in TensorFlow:
Method 1: Using the numpy() method
One straightforward way to convert a tensor to a NumPy array is by using the numpy() method. This method is available for TensorFlow tensors and returns a NumPy array with the same shape and values
as the original tensor. Here’s an example:
import tensorflow as tf
import numpy as np
# Create a TensorFlow tensor
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
# Convert the tensor to a NumPy array
numpy_array = tensor.numpy()
# Print the NumPy array
print(numpy_array)
# Output:
# [[1 2 3]
#  [4 5 6]]
In this example, we create a TensorFlow tensor using the tf.constant() function. Then, we use the numpy() method to convert the tensor to a NumPy array. Finally, we print the NumPy array to verify
the conversion.
Method 2: Using the eval() method
Another way to convert a tensor to a NumPy array is by using the eval() method. This method is available for TensorFlow tensors and allows you to evaluate the tensor in a TensorFlow session and
retrieve its value as a NumPy array. Here’s an example:
import tensorflow as tf
import numpy as np
# Create a TensorFlow tensor
tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
# Start a TensorFlow session
with tf.Session() as sess:
    # Evaluate the tensor and convert it to a NumPy array
    numpy_array = tensor.eval()

# Print the NumPy array
print(numpy_array)
# Output:
# [[1 2 3]
#  [4 5 6]]
In this example, we create a TensorFlow tensor using the tf.constant() function. Then, we start a TensorFlow session using the tf.Session() context manager. Inside the session, we use the eval()
method to evaluate the tensor and convert it to a NumPy array. Finally, we print the NumPy array to verify the conversion.
Best Practices
When converting a tensor to a NumPy array in TensorFlow, keep the following best practices in mind:
1. Make sure to have TensorFlow and NumPy installed in your Python environment. You can install them using pip:
pip install tensorflow numpy
2. Use the numpy() method whenever possible, as it is a more concise and efficient way to convert a tensor to a NumPy array.
3. If you need to perform additional operations on the tensor before conversion, consider using TensorFlow’s built-in functions and operations instead of converting to a NumPy array prematurely. This
can help maintain better performance and compatibility with TensorFlow’s computational graph.
4. Be mindful of the memory usage when working with large tensors. Converting a tensor to a NumPy array creates a copy of the data in memory. If memory is a concern, consider manipulating the tensor
directly using TensorFlow operations or using TensorFlow’s streaming capabilities.
|
{"url":"https://www.squash.io/how-to-convert-a-tensor-to-numpy-array-in-tensorflow/","timestamp":"2024-11-10T01:55:39Z","content_type":"text/html","content_length":"68133","record_id":"<urn:uuid:777a94d3-f115-4b4f-b38e-5a096ee4dda9>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00202.warc.gz"}
|
What is: Joint Exponential Distribution
What is Joint Exponential Distribution?
The Joint Exponential Distribution is a statistical concept that extends the properties of the exponential distribution to multiple random variables. In probability theory and statistics, the
exponential distribution is often used to model the time until an event occurs, such as the time until failure of a mechanical system or the time between arrivals of customers in a queue. When
dealing with two or more random variables, the joint distribution provides a comprehensive framework to understand how these variables interact with one another, particularly in scenarios where they
may exhibit dependence or correlation.
Mathematical Representation
The Joint Exponential Distribution can be mathematically represented for two random variables, X and Y, as follows: if X and Y are independent exponential random variables with parameters λ₁ and λ₂
respectively, their joint probability density function (PDF) is given by the product of their individual PDFs. This can be expressed as:
\[ f_{X,Y}(x,y) = f_X(x) \cdot f_Y(y) = \lambda_1 e^{-\lambda_1 x} \cdot \lambda_2 e^{-\lambda_2 y} \]
for \( x, y \geq 0 \). This formulation highlights the independence of the two variables, which is a crucial aspect when analyzing their joint behavior. However, in cases where the variables are not
independent, the joint distribution must account for the correlation between them, leading to a more complex representation.
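Under independence, the joint survival probability factors into the product of the marginal survival probabilities. A quick Monte Carlo sketch (rates chosen arbitrarily for illustration) can confirm this numerically:

```python
import math
import random

random.seed(0)
lam1, lam2 = 1.0, 0.5
n = 100_000

# Independent exponential samples for X and Y.
xs = [random.expovariate(lam1) for _ in range(n)]
ys = [random.expovariate(lam2) for _ in range(n)]

# P(X > 1, Y > 1) should factor into P(X > 1) * P(Y > 1)
# = e^{-lam1 * 1} * e^{-lam2 * 1} under independence.
est = sum(1 for x, y in zip(xs, ys) if x > 1 and y > 1) / n
theory = math.exp(-lam1) * math.exp(-lam2)
print(round(est, 3), round(theory, 3))
```

The empirical estimate lands close to the theoretical product, as the factorized joint PDF predicts.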
Properties of Joint Exponential Distribution
One of the key properties of the Joint Exponential Distribution is its memoryless property, which states that the future behavior of the distribution is independent of the past. This property is
particularly useful in various applications, such as survival analysis and reliability engineering. Additionally, the joint distribution can be used to derive marginal distributions, which provide
insights into the behavior of individual random variables while considering their joint relationship. The marginal distribution of X, for instance, can be obtained by integrating the joint PDF over
the range of Y.
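The memoryless property can be checked empirically. The sketch below (plain Python with illustrative parameter values, not from the original article) compares P(X > s + t | X > s) against P(X > t); memorylessness predicts both are close to exp(−λt):

```python
import math
import random

random.seed(0)
lam = 0.8
samples = [random.expovariate(lam) for _ in range(200_000)]

s, t = 1.0, 0.5
# Unconditional survival probability P(X > t)
p_t = sum(x > t for x in samples) / len(samples)

# Conditional survival P(X > s + t | X > s): restrict to samples exceeding s
survivors = [x for x in samples if x > s]
p_cond = sum(x > s + t for x in survivors) / len(survivors)

# Both estimates should be close to exp(-lam * t)
print(round(p_t, 3), round(p_cond, 3), round(math.exp(-lam * t), 3))
```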
Applications in Data Science
In the field of data science, the Joint Exponential Distribution is employed in various applications, particularly in modeling systems where events occur continuously and independently. For example,
it can be used in queuing theory to analyze customer service systems, where the arrival times of customers and service times can be modeled as independent exponential variables. Furthermore, it plays
a significant role in survival analysis, where researchers are interested in the time until an event, such as death or failure, occurs for multiple subjects or components.
Joint Exponential Distribution in Machine Learning
Machine learning practitioners often utilize the Joint Exponential Distribution in probabilistic models, particularly in Bayesian inference and graphical models. By incorporating the joint
distribution into their models, data scientists can better understand the relationships between multiple variables and make more informed predictions. For instance, in a Bayesian network, the joint
distribution can help in modeling the dependencies between different features, leading to improved accuracy in classification tasks.
Estimation Techniques
Estimating the parameters of the Joint Exponential Distribution is a crucial step in applying this statistical model to real-world data. Common techniques include Maximum Likelihood Estimation (MLE)
and Bayesian estimation. MLE involves finding the parameter values that maximize the likelihood function, which represents the probability of observing the given data under the model. Bayesian
estimation, on the other hand, incorporates prior beliefs about the parameters and updates these beliefs based on the observed data, providing a more flexible approach to parameter estimation.
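For the exponential distribution the MLE has a closed form: the log-likelihood n log λ − λ Σxᵢ is maximized at λ̂ = n / Σxᵢ, the reciprocal of the sample mean. A quick sketch on simulated data (illustrative values, not from the original text):

```python
import random

random.seed(42)
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(100_000)]

# MLE for the exponential rate: lam_hat = n / sum(x_i) = 1 / sample mean
lam_hat = len(data) / sum(data)
print(round(lam_hat, 2))  # close to the true rate 2.0
```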
Challenges and Limitations
Despite its usefulness, the Joint Exponential Distribution comes with certain challenges and limitations. One significant challenge is the assumption of independence between the random variables. In
many real-world scenarios, this assumption may not hold true, leading to inaccurate models and predictions. Additionally, the joint distribution can become computationally intensive when dealing with
a large number of variables, necessitating the use of advanced statistical techniques and computational tools to manage the complexity.
Software and Tools for Analysis
Various software tools and programming languages, such as R, Python, and MATLAB, provide libraries and functions to work with the Joint Exponential Distribution. In R, for instance, the ‘stats’
package includes functions for generating random samples, calculating probabilities, and estimating parameters for exponential distributions. Python’s SciPy library offers similar functionalities,
allowing data scientists to easily implement and analyze joint distributions in their projects.
Conclusion on Joint Exponential Distribution
The Joint Exponential Distribution serves as a powerful tool in statistics and data analysis, providing insights into the relationships between multiple random variables. Its applications span
various fields, including engineering, finance, and healthcare, making it a vital concept for researchers and practitioners alike. Understanding its properties, estimation techniques, and
applications can significantly enhance the ability to model complex systems and make data-driven decisions.
|
{"url":"https://statisticseasily.com/glossario/what-is-joint-exponential-distribution/","timestamp":"2024-11-02T11:03:13Z","content_type":"text/html","content_length":"139394","record_id":"<urn:uuid:bf4254d3-8737-4e2c-ae7d-2a6148b6fb6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00870.warc.gz"}
|
Number pattern 35 in C - Codeforwin
Number pattern 35 in C
Write a C program to print the given number pattern using loop. How to print the given number pattern using for loop in C programming. Logic to print the given number pattern using loop in C program.
Input N: 5
1
24
135
2468
13579
Required knowledge
Basic C programming, If else, Loop
Logic to print the given number pattern
Logic for this pattern can be a little tricky at first look. The pattern contains N rows (where N is the total number of rows to be printed). There are exactly i columns in each row (where i is the current
row number). Now, when you look at the pattern carefully you will find that it contains rows of odd numbers and rows of even numbers.
Read more – Program to check even number
When the row number is odd, the row contains odd numbers starting from the first odd number; when it is even, it contains even numbers starting from the first even number. For printing the numbers we
will use an extra variable, say k, that will keep track of the next number to be printed.
Step-by-step-descriptive logic:
1. To iterate through rows, run an outer loop from 1 to N.
2. Inside this loop initialize a variable k = 1 if the current row is odd otherwise k = 2.
3. To iterate through columns, run an inner loop from 1 to i (where i is current row number).
Inside this loop print the value of k and increment k = k + 2 to get the next even or odd number.
Program to print the given number pattern
/**
 * C program to print number pattern
 */
#include <stdio.h>

int main()
{
    int i, j, k, N;

    printf("Enter N: ");
    scanf("%d", &N);

    for(i=1; i<=N; i++)
    {
        // Checking even or odd: odd rows start from 1, even rows from 2
        if(i & 1)
            k = 1;
        else
            k = 2;

        // Logic to print numbers
        for(j=1; j<=i; j++, k+=2)
        {
            printf("%d", k);
        }
        printf("\n");
    }

    return 0;
}
Enter N: 5
1
24
135
2468
13579
Happy coding 😉
|
{"url":"https://codeforwin.org/c-programming/number-pattern-35-in-c","timestamp":"2024-11-05T07:19:25Z","content_type":"text/html","content_length":"124285","record_id":"<urn:uuid:7f1fd70f-0470-4b39-978e-fea478ed8852>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00154.warc.gz"}
|
Disc Method | Brilliant Math & Science Wiki
Given a region under a curve \(y=f(x)\) for the interval \(a \leq x \leq b\), revolving this region around the \(x\)-axis gives a three-dimensional figure called a solid of revolution.
To find the volume of this solid of revolution, consider a thin vertical strip with thickness \(\Delta x\) and height \(y=f(x)\) and consider revolving this thin strip around the \(x\)-axis. This
strip generates a thin circular disk with radius \(y=f(x)\) and thickness \( \Delta x\), which has volume
\[ \Delta V = \pi y^2 \Delta x = \pi (f(x))^2 \Delta x.\]
The disk method calculates the volume of the full solid of revolution by summing the volumes of these thin circular disks from the left endpoint \(a\) to the right endpoint \(b\) as the thickness \(
\Delta x \) goes to \(0\) in the limit. This gives the volume of the solid of revolution:
\[ V = \int_a^b dV = \int_a^b \pi \big(f(x)\big)^2 dx.\]
Volume of a cylinder:
A cylinder with height \(h\) and base radius \(r\) can be thought of as the solid of revolution obtained by revolving the line \(y=r\) around the \(x\)-axis. Using the disk method, find the
volume of the cylinder of height \(h\) and base radius \(r\).
Using the disk method, we revolve the line \(y=r\) around the \(x\)-axis from \(x=0\) to \(x=h\). Then \[ \Delta V = \pi y^2 \, \Delta x = \pi r^2 \Delta x\] and the volume of the right circular cylinder is
\[ \begin{aligned} V &= \int_{0}^{h} dV \\ &= \int_{0}^{h} \pi r^2 \, dx \\ &= \left[ \pi r^2 x \right]^{h}_{0} \\ &= \pi r^2 h. \end{aligned} \ _\square \]
Volume of a sphere:
A sphere can be thought of as the solid of revolution obtained by revolving a semicircle around the \(x\)-axis. Using the disk method, find the volume of the sphere of radius \(r\).
We can consider the semicircle to be centered at the origin with radius \(r\), which has equation \(y = \sqrt{r^2 - x^2}\). Then \[ \Delta V = \pi y^2 \, \Delta x = \pi \big(r^2 - x^2\big) \Delta x\] and the
volume of the sphere is
\[ \begin{aligned} V &= \int_{-r}^{r} dV \\ &= \int_{-r}^{r} \pi \big(r^2-x^2\big) \, dx \\ &= \left[ \pi r^2 x - \frac{\pi x^3}{3} \right]^{r}_{-r} \\ &= \pi r^2 \big(r-(-r)\big) - \left( \frac{\pi r^3}{3} - \frac{\pi (-r)^3}{3} \right) \\ &= 2 \pi r^3 - \frac{2 \pi r^3}{3} \\ &= \frac{4\pi r^3}{3}. \end{aligned} \ _\square \]
Volume of a right circular cone:
A right circular cone can be thought of as the solid of revolution obtained by revolving a right triangle around the \(x\)-axis. Using the disk method, find the volume of the right circular cone
of height \(h\) and base radius \(r\).
The hypotenuse of the right triangle we revolve around the \(x\)-axis is given by \(y = \frac{r}{h}x \). Then \[ \Delta V = \pi y^2 \, \Delta x = \pi \left( \frac{r}{h}x \right)^2 \Delta x\] and the volume
of the right circular cone is
\[ \begin{aligned} V &= \int_{0}^{h} dV \\ &= \int_{0}^{h} \pi \frac{r^2}{h^2} x^2 \, dx \\ &= \left[ \pi \frac{r^2}{h^2} \cdot \frac{1}{3} x^3 \right]^{h}_{0} \\ &= \frac{\pi r^2 h}{3}. \end{aligned} \ _\square \]
As shown in the examples above, we have the following relationships between the volumes of the cone and sphere as fractional parts of the volumes of their respective circumscribed cylinders:
\[ \begin{aligned} V_\text{cone} &= \frac{1}{3} V_\text{cylinder} \\ V_\text{sphere} &= \frac{2}{3} V_\text{cylinder}. \end{aligned} \]
When the curve \(\displaystyle{y=\frac{1}{x}}\ \ (1\le x \le \infty)\) is revolved about the \(x\)-axis, a funnel-shaped surface is formed. What is the volume of that revolution?
Using the disk method gives
\[ \Delta V = \pi y^2 \, \Delta x = \pi\cdot \frac{1}{x^2} \, \Delta x,\]
and the volume of the revolution is
\[ \begin{aligned} V &= \int_{1}^{\infty} dV \\ &= \int_{1}^{\infty} \pi\cdot\frac{1}{x^2}\,dx \\ &= \left. -\pi\cdot\frac{1}{x} \right|_{1}^{\infty} \\ &= -\pi(0-1) \\ &= \pi. \end{aligned} \]
It is interesting to note that the volume is finite. \(_\square\)
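The disk method can also be checked numerically. The sketch below (plain Python with a midpoint Riemann sum; the function name is our own) approximates the sphere and cone volumes derived above and compares them with the closed forms:

```python
import math

def disk_volume(f, a, b, n=100_000):
    """Midpoint Riemann sum of pi * f(x)^2 * dx over [a, b] (the disk method)."""
    dx = (b - a) / n
    return sum(math.pi * f(a + (i + 0.5) * dx) ** 2 * dx for i in range(n))

r, h = 3.0, 5.0
sphere = disk_volume(lambda x: math.sqrt(max(r * r - x * x, 0.0)), -r, r)
cone = disk_volume(lambda x: (r / h) * x, 0.0, h)

print(round(sphere, 4), round(4.0 / 3.0 * math.pi * r**3, 4))
print(round(cone, 4), round(math.pi * r**2 * h / 3.0, 4))
```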
Let there be a region \(R:\{(x,y) \ | \ x^{1/4}+y^{4} \leq 1\}\). What is the volume of the solid generated when \(R\) is rotated around the line \(x=0?\) Give your answer to 3 decimal places.
Note: You may use a calculator for the final step of your calculation.
This question was posed by my calculus teacher as the last and higher value exercise on the third Calculus I test.
A bullet is formed by revolving the area bounded by the the curve \(y = \ln(x)\) from \(x = 1\) to \(x = e\) about the \(x\)-axis.
It is then shot straight into a very thick wall (i.e. it does not pierce through the other side at all) making a closed cylindrical hole until it stops moving. Then the bullet is carefully extracted
without affecting the hole at all, leaving an empty hole with a pointy end where the bullet once was.
The length of the entire hole is \(e+1\). If the volume of the hole can be expressed as \[\pi i e,\] where \(i\) is a constant, find the value of \(i\).
\[ 2 \pi r L \] \[ \pi r L \] \[ r L \] \[ 2 r L \]
The above figure consists of a rectangle with a semicircle cut out of one end and added to the other end, where \(L\) is the width of the rectangle, and the curved length of a semicircle is \( \pi r \).
To calculate the area of the shaded figure, Svatejas applies the disc method as follows:
Consider the axis of integration to be the semicircular arc, which has length \( \pi r \). For each horizontal strip, we have an area element (technically a length element) of \(L\). Hence, the area is
\[ \int_{R} L \, dR = \pi r \times L. \]
What is the area of the shaded figure?
Inspiration, see solution comments.
|
{"url":"https://brilliant.org/wiki/disc-method/","timestamp":"2024-11-05T16:49:45Z","content_type":"text/html","content_length":"56842","record_id":"<urn:uuid:ae3ef4ff-6924-41ef-aee2-abdfb9c5a7d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00279.warc.gz"}
|
Pi Day: mathematics; independent school; boarding school
Pi Day is celebrated on March 14th (3/14) around the world and is an annual opportunity for maths enthusiasts to recite the infinite digits of Pi, talk to their friends about maths, and eat pie.
Pi (often represented by the lower-case Greek letter π), one of the most well-known mathematical constants, is the ratio of a circle’s circumference to its diameter. For any circle, the distance
around the edge is a little more than three times the distance across. Typing π into a calculator and pressing ENTER will yield the result 3.141592654, not because this value is exact, but because a
calculator’s display is often limited to 10 digits. Pi is actually an irrational number (a decimal with no end and no repeating pattern) that is most often approximated with the decimal 3.14 or the
fraction 22/7.
This brings up a rather interesting question:
If pi is the number of diameter lengths that fit around a circle, how can it have no end?
Pi has interested people around the world for over 4,000 years. Many mathematicians – from famous ones such as Fibonacci, Newton, Leibniz, and Gauss, to lesser well-known mathematical minds – have
toiled over pi, calculated its digits, and applied it in numerous areas of mathematics. Some spent the better parts of their lives calculating just a few digits. Here is a sampling of the many
milestones in the life of pi.
Early decimal approximations for pi were obtained in a number of different ways. For example, in ancient Babylon, rope stretchers marking the locations of buildings and boundaries estimated pi to be
25/8 = 3.125. The ancient Egyptians determined the ratio to be (16/9)² ≈ 3.16. The earliest calculations of pi were largely based on measurement.
Archimedes, a Greek mathematician, was the first to use an algorithmic approach to calculate pi. He drew a polygon inside a circle and drew a second polygon outside of the circle. Then he
continuously added more and more sides to both polygons, getting closer and closer to the shape of the circle. Having reached 96-sided polygons, he proved that 223/71 < pi < 22/7.
From Archimedes’ time (about 250 B.C.E.) to the early 1600s mathematicians in countries around the world used methods similar to Archimedes’ to estimate pi, with increasingly efficient and accurate
results. In 1630, Austrian astronomer Christoph Grienberger calculated 38 digits of pi using polygons with 10^40 sides, which remains the best calculation of pi using this polygonal method.
The Renaissance saw many developments and work on pi, including the creation of the name pi. Until 1647, it didn’t have a universal name or symbol. English mathematician William Oughtred began
calling it pi in his publication Clavis Mathematicae, but it wasn’t until Leonhard Euler used the symbol in 1737 that it became widely embraced. The reason for adopting this particular Greek letter
is because it is the first letter of the Greek word, perimetros, which loosely translates to “circumference.”
In 1767, Swiss mathematician Johann Heinrich Lambert proved pi is irrational and in 1882 Ferdinand von Lindemann proved pi is transcendental, which means π cannot be a solution to a polynomial
equation with rational coefficients. This finding is significant because, until this point, it was believed that one could construct a square and a circle with equal area, known as “squaring the
circle”. Proving the transcendence of pi showed this is not possible and the phrase “squaring the circle” is now used as a metaphor for trying to do something that is impossible.
With modern technological advances, pi has now been calculated to 31 trillion digits. However, only the first 39 or so are needed to be able to perform all calculations in our observable universe
with virtually no error. Though it is news every time the digit record is broken, we can now use technology to explore other aspects of pi. One example from the Chudnovsky brothers, a pair of
American mathematicians:
We are looking for the appearance of some rules that will distinguish the digits of pi from other numbers. If you see a Russian sentence that extends for a whole page, with hardly a comma, it is
definitely Tolstoy. If someone gave you a million digits from somewhere in pi, could you tell it was from pi? We don't really look for patterns; we look for rules.
Take some time to research and explore this unique number. It has a long and very detailed history that shows the field of mathematics as a living, breathing subject, not as a collection of rules and
formulas.
{"url":"https://www.stdavidscollege.co.uk/news/2022/happy-letter-pi-day","timestamp":"2024-11-11T02:08:00Z","content_type":"text/html","content_length":"98598","record_id":"<urn:uuid:802c11a1-479d-4218-a1c0-d334923702ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00164.warc.gz"}
|
Transformation groups 2017. Conference dedicated to Prof. Ernest B. Vinberg on the occasion of his 80th birthday
(December 14–18, 2017, Independent University of Moscow (Bolshoi Vlassievskii, 11), room 401, Moscow)
|
{"url":"https://m.mathnet.ru/php/conference.phtml?confid=1285&option_lang=eng","timestamp":"2024-11-11T23:03:57Z","content_type":"text/html","content_length":"37347","record_id":"<urn:uuid:f3cc56d1-2ba4-4e24-ab7c-216a2daf6d9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00078.warc.gz"}
|
A solid spherical ball remolded into a hollow spherical ball
A 523.6 cm^3 solid spherical steel ball was melted and remolded into a hollow steel ball so that the hollow diameter is equal to the diameter of the original steel ball. Find the thickness of the
hollow steel ball.
A. 1.3 cm C. 1.2 cm
B. 1.5 cm D. 1.6 cm
Answer Key
Radius of the original steel ball
$V = \frac{4}{3}\pi r^3$
$523.6 = \frac{4}{3}\pi r^3$
$r = 5 ~ \text{cm}$
The amount of material in the hollow steel ball is equal to the amount of material in the original solid steel ball.
Let R = outer radius of the hollow steel ball.
$\frac{4}{3}\pi R^3 - \frac{4}{3}\pi r^3 = \frac{4}{3}\pi r^3$
$\frac{4}{3}\pi R^3 = \frac{8}{3}\pi r^3$
$R^3 = 2r^3$
$R^3 = 2(5^3)$
$R = 6.3 ~ \text{cm}$
Thickness of the hollow steel ball
$t = R - r = 6.3 - 5$
$t = 1.3 ~ \text{cm}$ ← Answer: [ A ]
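The computation above can be verified numerically (a short Python sketch of the same steps, not part of the original solution):

```python
import math

V = 523.6  # volume of the original solid ball, cm^3

# Solve V = (4/3) * pi * r^3 for the original radius r
r = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)

# Equal material: (4/3)pi(R^3 - r^3) = (4/3)pi r^3  ->  R^3 = 2 r^3
R = (2.0 * r**3) ** (1.0 / 3.0)
t = R - r

print(round(r, 2), round(R, 2), round(t, 2))  # 5.0 6.3 1.3
```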
|
{"url":"https://mathalino.com/board-problem/mathematics-surveying-and-transportation-engineering/node-3450","timestamp":"2024-11-10T21:05:48Z","content_type":"text/html","content_length":"47198","record_id":"<urn:uuid:5fb84b59-4d3c-4bda-9293-546ac74306b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00024.warc.gz"}
|
A zero matrix, also known as a null matrix, is a matrix with all of its entries equal to zero in mathematics, particularly linear algebra. It also acts as the additive identity of the additive group
of m x n matrices and is indicated by the sign O or, depending on the context, subscripts corresponding to the matrix dimension.
The organisation of zero elements into rows and columns is known as a zero matrix. A zero matrix is one in which all of the entries are 0. It is represented by the letter ‘O,’ which can be
interpreted as a subscript to reflect the matrix’s dimension.
What is a zero matrix?
A zero matrix is a matrix in which all of the elements are equal to zero. A zero matrix is also known as a null matrix because it has only zeros as its elements. A zero matrix may, in particular,
be a square matrix.
The letter ‘O’ stands for a zero matrix. When an additive identity matrix is added to a matrix of order m x n, the outcome is the same matrix.
O_{m×n} = [ 0 ]
Addition of zero matrix:-
When the non-zero matrix of order m x n is added to a zero matrix of order m x n then the result will be the original matrix.
Let us suppose that A = [aᵢⱼ] is an m × n matrix and O is the zero matrix of order m × n. Then A + O = O + A = A.
When the zero matrix is added to another matrix, the identity of the matrix remains unchanged. As a result, it’s known as the additive identity for matrix addition.
Product of matrix with zero matrix:-
It is possible to obtain a zero matrix by multiplying two non-zero matrices together.
For two real numbers, say x and y, if xy = 0 we can state that either x = 0 or y = 0. Matrices do not behave the same way: the product of two matrices can be the zero matrix even though neither
factor is a zero matrix. For example, with A = [[0, 1], [0, 2]] and B = [[3, 4], [0, 0]], the product AB is the 2 × 2 zero matrix although neither A nor B is zero.
Properties of zero matrix:-
Some of the null matrix’s most important features are listed below.
• A null matrix may be square, but it need not be: the number of rows and columns can differ.
• The addition of a null matrix to any matrix has no effect on the matrix’s properties.
• When a null matrix is multiplied by another null matrix, the result is a null matrix.
• A null matrix’s determinant is equal to zero.
• A square null matrix is a singular matrix.
• The only matrix with a rank of 0 is the zero matrix.
• A – 0 = A
• A – A = 0
• 0A = 0
• If cA = 0 then c = 0 or A = 0
Application of zero matrix :-
Simple solutions to algebraic equations involving matrices are possible with Zero Matrices. The zero matrix, for example, can be defined as an additive group, making it a useful variable in
situations when an unknown matrix must be solved.
Consider the equation X + Y = Z, where X is the unknown. Adding the additive inverse −Y to both sides gives X + (Y + (−Y)) = Z + (−Y), and Y + (−Y) is the zero matrix, so the equation becomes
X + 0 = Z + (−Y). The additive identity property then reduces X + 0 to merely X, yielding X = Z − Y. Algebraic equations are substantially easier to manipulate when the zero matrix is used.
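Several of the properties above can be checked directly, assuming NumPy is available (a sketch, not part of the original article):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
O = np.zeros((2, 2))

# Additive identity: adding the zero matrix leaves A unchanged
assert np.array_equal(A + O, A)
assert np.array_equal(A - A, O)

# Determinant and rank of the zero matrix
print(np.linalg.det(O), np.linalg.matrix_rank(O))  # 0.0 0

# Two non-zero matrices whose product is the zero matrix
B = np.array([[0.0, 1.0], [0.0, 2.0]])
C = np.array([[3.0, 4.0], [0.0, 0.0]])
assert np.array_equal(B @ C, O)
```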
A null matrix is a square matrix in which all of the elements are 0. Because the null matrix’s elements are all zeros, the null matrix is also known as a zero matrix. Any matrix’s additive identity
is the null matrix. A null matrix has the order m x n and can have an uneven number of rows and columns.
The linear transformation that transfers all vectors to the zero vector is also represented by the zero matrix. It is idempotent, which means that when multiplied by itself, the result is the same
as the original. The only matrix with a rank of 0 is the zero matrix.
|
{"url":"https://unacademy.com/content/jee/study-material/mathematics/zero-matrix/","timestamp":"2024-11-13T09:04:53Z","content_type":"text/html","content_length":"642077","record_id":"<urn:uuid:dcb32238-c46a-431b-ae61-57d1ccebb577>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00418.warc.gz"}
|
Standard Deviation Measure
A Standard Deviation Measure is a point estimation function that quantifies the statistical dispersion of a random variable around its mean value (in the form of a standard deviation value).
• (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/standard_deviation Retrieved:2015-5-3.
□ In statistics, the standard deviation (SD) (represented by the Greek letter sigma, σ) is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A
standard deviation close to 0 indicates that the data points tend to be very close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the
data points are spread out over a wider range of values.
The standard deviation of a random variable, statistical population, data set, or probability distribution is the square root of its variance. It is algebraically simpler, though in practice
less robust, than the average absolute deviation. A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data. Note, however, that
for measurements with percentage as the unit, the standard deviation will have percentage points as the unit. There are also other measures of deviation from the norm, including mean absolute
deviation, which provide different mathematical properties from standard deviation.
In addition to expressing the variability of a population, the standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in
polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. The reported margin of error is typically about
twice the standard deviation — the half-width of a 95 percent confidence interval. In science, researchers commonly report the standard deviation of experimental data, and only effects that
fall much farther than two standard deviations away from what would have been expected are considered statistically significant — normal random error or variation in the measurements is in
this way distinguished from causal variation. The standard deviation is also important in finance, where the standard deviation on the rate of return on an investment is a measure of the
volatility of the investment.
When only a sample of data from a population is available, the term standard deviation of the sample or sample standard deviation can refer to either the above-mentioned quantity as applied
to those data or to a modified quantity that is a better estimate of the population standard deviation (the standard deviation of the entire population).
• (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Standard_deviation
□ In probability theory and statistics, the standard deviation of a statistical population, a data set, or a probability distribution is the square root of its variance. Standard deviation is a
widely used measure of the variability or dispersion, being algebraically more tractable though practically less robust than the expected deviation or average absolute deviation.
It shows how much variation there is from the "average" (mean). A low standard deviation indicates that the data points tend to be very close to the mean, whereas high standard deviation
indicates that the data are spread out over a large range of values.
□ Let X be a random variable with mean value μ: \( \operatorname{E}[X] = \mu \).
□ Here \(E\) denotes the average or expected value of X. Then the standard deviation of X is the quantity \( \sigma = \sqrt{\operatorname{E}\big[(X - \mu)^2\big]} \).
□ That is, the standard deviation σ is the square root of the average value of (X – μ)^2.
□ In the case where X takes random values from a finite data set x_1, x_2, ..., x_N, with each value having the same probability, the standard deviation is \( \sigma = \sqrt{\frac{(x_1-\mu)^2 +
(x_2-\mu)^2 + \cdots + (x_N - \mu)^2}{N}}, \)
□ or, using summation notation, \( \sigma = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \mu)^2}. \)
□ The standard deviation of a (univariate) probability distribution is the same as that of a random variable having that distribution. Not all random variables have a standard deviation, since
these expected values need not exist. For example, the standard deviation of a random variable which follows a Cauchy distribution is undefined because its E(X) is undefined.
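As a minimal sketch of the finite-data formulas above (plain Python; the data set is illustrative), comparing the population form with the sample form that divides by N − 1:

```python
import math

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mu = sum(data) / len(data)

# Population standard deviation: square root of the mean squared deviation
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))

# Sample standard deviation uses N - 1 (Bessel's correction)
s = math.sqrt(sum((x - mu) ** 2 for x in data) / (len(data) - 1))

print(mu, sigma, round(s, 4))  # 5.0 2.0 2.1381
```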
|
{"url":"https://www.gabormelli.com/RKB/standard_deviation","timestamp":"2024-11-09T20:30:07Z","content_type":"text/html","content_length":"47508","record_id":"<urn:uuid:4aa2d527-b5c6-4e55-b8f6-bc506ead009c>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00652.warc.gz"}
|
For the component described in the problem indicated, determine... | Filo
Question asked by Filo student
For the component described in the problem indicated, determine ( ) the principal mass moments of inertia at the origin, the principal axes of inertia at the origin. Sketch the body and show the
orientation of the principal axes of inertia relative to the and axes.Prob. 9.168 Prob. 9.168
Updated Mar 15, 2024
Topic All topics
Subject Physics
Class Class 11
|
{"url":"https://askfilo.com/user-question-answers-physics/for-the-component-described-in-the-problem-indicated-37383639353638","timestamp":"2024-11-13T01:24:59Z","content_type":"text/html","content_length":"124685","record_id":"<urn:uuid:15c11377-f320-4650-989a-e512851d5971>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00602.warc.gz"}
|
Degree Course in
Academic Year 2017/2018
- 1° Year
Teaching Staff Credit Value:
Taught classes:
105 hours
36 hours
Term / Semester:
1° and 2°
Learning Objectives
• MODULO 1
The student will learn the principal concepts and techniques of Mathematical Analysis.
• MODULO 2
The student will acquire the main concepts of Mathematical Analysis and will be guided to link them to the concepts learned in Algebra and General Topology, or to the concepts that he will study in
Physics. He will learn the main techniques of Mathematical Analysis. In particular, he will study the differential calculus for functions of one variable and its important applications, the theory
of integration of a single-variable function, the solving methods of ordinary differential equations, and the theory of sequences and series of functions.
In particular, the course aims to:
Knowledge and understanding: the student will familiarize himself with the differential calculus of functions of one real variable and will know many of its important applications. He will
learn the integration methods and the theory of Riemann integration linked to Peano's theory of measure. He will be able to move on to the concepts of sequences and series of functions. He
will learn the solving methods of differential equations that he will find in other disciplines. The theory of differential equations will be postponed to the second year. Some studies will be
entrusted to the most willing students who, either alone or in groups, will be able to present them in short seminars.
Ability to apply knowledge and understanding: the student will not only learn the individual concepts but will succeed in linking them, and will be led, in particular, to reflect on the structural
properties (e.g. topological) that are the basis of the various topics studied. He can also exercise the ability to use the acquired knowledge in other settings: for example, the student will
be invited to independently prove results similar to those studied, and to carry out numerous exercises applying the theories studied. This will be done through exercises, both manipulative
and demonstrative, that will be proposed for individual study.
Making judgements: students are invited to study some contents individually and to improve their own knowledge by also comparing different books. They can also check the solutions of the problems
they have studied with the other students during tutoring hours in order to find the right solutions.
Communication skills: by listening to the lessons and reading the recommended books, the student will continue to improve in the use of mathematical language, which will become more and more
suitable for expressing sophisticated mathematical concepts correctly and elegantly.
Learning skills: students will be guided to acquire a method that will allow them to approach a new topic, recognizing immediately which prerequisites are needed. They will also develop the
skills of computing with and manipulating the mathematical objects studied.
Detailed Course Content
• MODULO 1
1. Real and complex numbers
2. Sequences and series
3. Continuous functions
• MODULO 2
Differential calculus for functions of one variable
Integral calculus.
Improper integration.
Sequences and series of Functions
Methods for solving differential equations
Textbook Information
• MODULO 1
1. G. Emmanuele, Analisi Matematica 1, Pitagora (nuova edizione)
2. C.D. Pagani, S. Salsa, Analisi Matematica 1, Zanichelli
3. J.P. Cecconi, G. Stampacchia, Analisi Matematica vol. 1, Liguori
4. G. De Marco, Analisi uno, Zanichelli
• MODULO 2
1) G. Emmanuele Analisi Matematica I Pitagora
2) G. Emmanuele Analisi Matematica II Foxwell Davies
Other books that the student is invited to use
3) Pagani Salsa Analisi Matematica 1, Zanichelli
4) Cecconi Stampacchia, Analisi Matematica , Liguori
5) DeMarco Analisi uno, Zanichelli
|
{"url":"https://dmi.unict.it/courses/l-35/course-units/?cod=6830","timestamp":"2024-11-15T00:19:08Z","content_type":"text/html","content_length":"25055","record_id":"<urn:uuid:567f0897-47e8-4aa3-a339-1b09e4a0b706>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00805.warc.gz"}
|
The square of the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides. Plane Geometry - Page 247 by William Betz, Harrison Emmett Webb - 1912 - 332 pages - Full view
About this book
William Chauvenet - Geometry - 1871 - 380 pages
...proportional between the diameter and the segment adjacent to that chord. PROPOSITION XIV.— THEOREM. 48. The square of the hypotenuse of a right triangle is equal to the sum of the squares of
the other two sides. Let ABC be right angled at C; then, AB² = AC² + BC². For, by the preceding...
Philosophy - 1871 - 396 pages
...bodies or in the infinite world of conceivable atoms ; and so, also, the theorem that the square upon the hypotenuse of a right triangle is equal to the sum of the squares upon its other two
sides, is necessary in its truth, and universal in its application,...
William Chauvenet - Mathematics - 1872 - 382 pages
...proportional between the diameter and the segment adjacent to that chord. PROPOSITION XIV.— THEOREM. 48. The square of the hypotenuse of a right triangle is equal to the sum of the squares of
the other two sides. Let ABC be right angled at C; then, AB² = AC² + BC². For, by the...
Samuel Mecutchen, George Mornton Sayre - Arithmetic - 1877 - 200 pages
...joining opposite corners; what is the area of the field? Note. — It is established by Geometry that "The square of the hypotenuse of a right triangle is equal to the sum of the squares of the
other two sides." Hence the following : — To find the hypotenuse of a right triangle....
Alfred Hix Welsh - Geometry - 1883 - 326 pages
...difference of the squares of two lines is 81, and one of the lines is 12; required the other. THEOREM XL The square of the hypotenuse of a right triangle is equal to the sum of the squares of
the other two sides. Let ABC be a right triangle, whose hypotenuse is AB; then will...
William Chauvenet - Geometry - 1888 - 826 pages
...proportional between the diameter and the segment adjacent to that chord. PROPOSITION XIV.— THEOREM. 48. The square of the hypotenuse of a right triangle is equal to the sum of the squares of
the other two sides. Let ABC be right angled at C; then, AB² = AC² + BC². For,...
Charles Austin Hobbs - Arithmetic - 1889 - 370 pages
...opposite sides of a triangle, when the sides are respectively 12cm, 15cm, and 20cm. RIGHT TRIANGLES. ito. The square of the hypotenuse of a right triangle is equal to the sum of the squares of
the other two sides. This principle is illustrated in the annexed diagram. To find the...
James Wallace MacDonald - Geometry - 1889 - 80 pages
...SCHOLIUM. Compare (a + b) (a — b) = a² — b². Proposition XI. A Theorem. 246. The square described on the hypotenuse of a right triangle is equal to the sum of the squares of the other two sides.
COROLLARY. The square described on either side forming the right...
James Wallace MacDonald - Geometry - 1894 - 76 pages
...SCHOLIUM. Compare (a + b) (a — b) = a² — b². Proposition XI. A Theorem. 246. The square described on the hypotenuse of a right triangle is equal to the sum of the squares of the other two
sides. COROLLARY. The square described on either side forming the right...
Edward Albert Bowser - Geometry - 1890 - 420 pages
...projection of AB upon the line XY Proposition 25. Theorem. 327. The square of the number which measures the hypotenuse of a right triangle is equal to the sum of the squares of the numbers
which measure the other two sides. Hyp. Let ABC be a rt. △, with rt. ∠...
|
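The theorem quoted in all of these excerpts is easy to check numerically; here is a minimal sketch (illustrative only, not part of any of the books above) for the classic 3-4-5 right triangle:

```python
from math import hypot, sqrt

# "The square of the hypotenuse ... equals the sum of the squares of the other two sides."
a, b = 3.0, 4.0
c = sqrt(a * a + b * b)   # hypotenuse from the two legs
assert c == 5.0           # 9 + 16 = 25, and sqrt(25) is exact in floating point
assert hypot(a, b) == c   # the standard library computes the same quantity
```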
{"url":"https://books.google.com.jm/books?id=Bu5HAAAAIAAJ&qtid=f87e7830&output=html_text&source=gbs_quotes_r&cad=5","timestamp":"2024-11-02T13:44:16Z","content_type":"text/html","content_length":"30661","record_id":"<urn:uuid:76832652-4704-4a44-ad42-aba9ce0ac0c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00851.warc.gz"}
|
Machined Learnings
America has experienced increasing income inequality for the past few decades, and there is lively debate in the econoblogosphere about the causes.
One post by Karl Smith
caught my attention:
My longer thesis is that the rising return to unskilled labor is a function of industrialization and that industrialization is unique in this. The wage rate on unskilled labor never benefited
before and its not immediately clear that it will ever benefit again.
This is because rents always accrue to the scarce factors of production. Industrialization meant that the only thing we were short on were “control systems” everything else in the production
process was effectively cheap.
However, any mentally healthy human being is a decent control system. So, this meant huge returns to being a human.
If this theory is correct, it indicates anybody working in Artificial Intelligence and related fields is contributing to income inequality. Doh!
Karl goes on to say
You need there to be a shortage of something that human beings have a comparative advantage at simply by being human beings.
Mechanical Turk shows that people still have the ability to trade their inherent excellent perceptual capabilities. Identifying obscenity and tagging images for $2.00 an hour may not sound like your
idea of the good life, but
those who subsist on landfills in Nicaragua
would presumably consider it an improvement. It would be great if it were feasible to connect the world's poorest to Mechanical Turk to improve their welfare.
Any charity with such ambitions needs to hurry, however. Within a decade or two we will have cracked all the problems that are commonly encountered on Mechanical Turk today, closing this window of
development opportunity.
At my new job the first major problem I'm dealing with is characterized by a prevalence of positive labeled data, no negative labeled data, and abundant unlabeled data; also known as p-u learning.
Despite the "stinky" moniker, it is possible to make progress under these conditions. Because this is a common predicament there is extensive treatment in the literature: researchers have advocated
a variety of different approaches and associated statistical assumptions, and it's been tricky for me to distill best practices. Fortunately I noticed a gem of a result due to Zhang and Lee which is
proving useful.
The setup is extremely natural: assume features $x$ and (binary) labels $y$ are distributed jointly via $D = D_x \times D_{y|x} = D_y \times D_{x|y}$; assume you have access to samples from the
distribution $D_{x|1}$ of features $x$ given a positive label $y = 1$, i.e., positive labeled examples; and assume that you have to access to samples from the unconditional distribution of features
$D_x$, i.e., unlabeled examples. Note that you do not have access to samples from the distribution $D_{x|0}$, i.e., you do not have any negative labeled examples.
It turns out if AUC is the objective function you can optimize directly on the positive and unlabeled data. This is demonstrated by relating AUC on the p-u dataset, \[
\mathop{\mathrm{PUAUC}} (\Phi) = \mathbb{E}_{(x_+, x_-) \sim D_{x|1} \times D_x}\left[ 1_{\Phi (x_+) > \Phi (x_-)} + \frac{1}{2} 1_{\Phi (x_+) = \Phi (x_-)} \right],
\] to the standard AUC computed using the (inaccessible) negative labeled examples, \[
\mathop{\mathrm{AUC}} (\Phi) = \mathbb{E}_{(x_+, x_-) \sim D_{x|1} \times D_{x|0}}\left[ 1_{\Phi (x_+) > \Phi (x_-)} + \frac{1}{2} 1_{\Phi (x_+) = \Phi (x_-)} \right].
\] In particular, \[
\begin{aligned}
&\mathop{\mathrm{AUC}} (\Phi) \\
&= \mathbb{E}_{(x_+, x_-) \sim D_{x|1} \times D_{x|0}}\left[ 1_{\Phi (x_+) > \Phi (x_-)} + \frac{1}{2} 1_{\Phi (x_+) = \Phi (x_-)} \right] \\
&= \frac{\mathbb{E}_{(x_+,(x_-,y)) \sim D_{x|1} \times D}\left[ 1_{y=0} \left( 1_{\Phi (x_+) > \Phi (x_-)} + \frac{1}{2} 1_{\Phi (x_+) = \Phi (x_-)} \right) \right]}{\mathbb{E}_{(x,y) \sim D}\left[ 1_{y=0} \right]} \\
&= \frac{\mathop{\mathrm{PUAUC}} (\Phi) - \mathbb{E}_{(x_+,(x_-,y)) \sim D_{x|1} \times D}\left[ 1_{y=1} \left( 1_{\Phi (x_+) > \Phi (x_-)} + \frac{1}{2} 1_{\Phi (x_+) = \Phi (x_-)} \right) \right]}{\mathbb{E}_{(x,y) \sim D}\left[ 1_{y=0} \right]} \\
&= \frac{\mathop{\mathrm{PUAUC}} (\Phi) - \mathbb{E}_{(x, y) \sim D} \left[ 1_{y = 1} \right] \mathbb{E}_{(x_+,x_-) \sim D_{x|1} \times D_{x|1}}\left[ 1_{\Phi (x_+) > \Phi (x_-)} + \frac{1}{2} 1_{\Phi (x_+) = \Phi (x_-)} \right]}{\mathbb{E}_{(x,y) \sim D}\left[ 1_{y=0} \right]} \\
&= \frac{\mathop{\mathrm{PUAUC}} (\Phi) - \frac{1}{2} \mathbb{E}_{(x, y) \sim D} \left[ 1_{y = 1} \right]}{\mathbb{E}_{(x,y) \sim D}\left[ 1_{y=0} \right]} \\
&= \frac{\mathop{\mathrm{PUAUC}} (\Phi) - \frac{1}{2}}{\mathbb{E}_{(x,y) \sim D}\left[ 1_{y=0} \right]} + \frac{1}{2},
\end{aligned}
\] which they write in the paper as \[
\mathop{\mathrm{AUC}} (\Phi) - \frac{1}{2} \propto \mathop{\mathrm{PUAUC}} (\Phi) - \frac{1}{2}.
\] This results in the following extremely simple procedure: treat unlabeled data as negative data and optimize for AUC.
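As a sanity check (a minimal synthetic sketch of mine, not the author's experiment), the empirical version of this identity holds exactly when the unlabeled pool is the union of the positive and negative examples:

```python
import random

def auc(pos_scores, neg_scores):
    """Empirical AUC: fraction of (pos, neg) pairs ranked correctly, with ties counting 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

random.seed(0)
pos = [random.gauss(1.0, 1.0) for _ in range(50)]    # positive labeled examples
neg = [random.gauss(0.0, 1.0) for _ in range(150)]   # hidden negatives
unlabeled = pos + neg                                # what we actually observe

true_auc = auc(pos, neg)        # needs the inaccessible negative labels
puauc = auc(pos, unlabeled)     # treat unlabeled data as negative

p0 = len(neg) / len(unlabeled)  # plays the role of E[1_{y=0}]
recovered = (puauc - 0.5) / p0 + 0.5
assert abs(recovered - true_auc) < 1e-9
```

In practice $\mathbb{E}[1_{y=0}]$ is unknown, but since it is a positive constant, ranking models by PUAUC ranks them by AUC.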
|
{"url":"http://www.machinedlearnings.com/2012/03/","timestamp":"2024-11-09T04:24:30Z","content_type":"application/xhtml+xml","content_length":"83933","record_id":"<urn:uuid:9375853d-259e-4807-8848-f94260ff0ab0>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00336.warc.gz"}
|
Master Discrete Math 2020: Complete Basic-to-advanced Course - Free Online Courses
Master Discrete Math 2020: Complete Basic-to-advanced Course
Udemy Online Course Free Coupon Code
Master Discrete Math 2020: Complete Basic-to-advanced Course
Learn Discrete Mathematics In This Course: 300+ Lectures/Quizzes And 30 Assignments With 500+ Questions & Solutions
What you’ll learn
• Analyze and interpret the truth value of statements by identifying logical connectives, quantification and the truth value of each atomic component
• Distinguish between various set theory notations and apply set theory concepts to construct new sets from old ones
• Interpret functions from the perspective of set theory and differentiate between injective, surjective and bijective functions
• Construct new relations, including equivalence relations and partial orderings
• Apply the additive and multiplicative principles to count disorganized sets effectively and efficiently
• Synthesize counting techniques developed from counting bit strings, lattice paths and binomial coefficients
• Formulate counting techniques to approach complex counting problems using both permutations and combinations
• Prove certain formulas are true using special combinatorial proofs and complex counting techniques involving stars and bars
• Connect between complex counting problems and counting functions with certain properties
• Develop recurrence relations and closed formulas for various sequences
• Explain various relationships and properties involving arithmetic and geometric sequences
• Solve many recurrence relations using polynomial fitting
• Utilize the characteristic polynomial to solve challenging recurrence relations
• Master mathematical induction and strong induction to prove sophisticated statements involving natural numbers by working through dozens of examples
• Use truth tables and Boolean Algebra to determine the truth value of complex molecular statements
• Apply various proving techniques, including direct proofs, proof by contrapositive and proof by contradiction to prove various mathematical statements
• Analyze various graphs using new definitions from graph theory
• Discover many various properties and algorithms involving trees in graph theory
• Determine various properties of planar graphs using Euler’s Formula
• Categorize different types of graphs based on various coloring schemes
• Create various properties of Euler paths and circuits and Hamiltonian paths and cycles
• Apply concepts from graph theory, including properties of bipartite graphs and matching problems
• Use generating functions to easily solve extremely sophisticated recurrence relations
• Develop a deep understanding of number theory which involve patterns in the natural numbers
• You should be comfortable with high school algebra
• Be ready to learn an insane amount of awesome stuff
• Prepare to succeed in any college level discrete math course
• Brace yourself for tons of content
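As a small illustration of the binomial-coefficient material listed above (this example is mine, not taken from the course), Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k) gives a direct recursive computation:

```python
from math import comb

def pascal(n: int, k: int) -> int:
    """Binomial coefficient via Pascal's rule: C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return pascal(n - 1, k - 1) + pascal(n - 1, k)

# e.g. the number of 8-bit strings containing exactly three 1s
assert pascal(8, 3) == comb(8, 3) == 56
```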
MASTER DISCRETE MATH 2020 IS SET UP TO MAKE DISCRETE MATH EASY:
This 461-lesson course includes video and text explanations of everything from Discrete Math, and it includes 150 quizzes (with solutions!) after each lecture to check your understanding and an
additional 30 workbooks with 500+ extra practice problems (also with solutions to every problem!), to help you test your understanding along the way.
This is the most comprehensive, yet straight-forward, course for Discrete Mathematics on Udemy! Whether you have never been great at mathematics, or you want to learn about the advanced features of
Discrete Math, this course is for you! In this course we will teach you Discrete Mathematics.
Master Discrete Math 2020 is organized into the following 24 sections:
• Mathematical Statements
• Set Theory
• Functions And Function Notation
• Relations
• Additive And Multiplicative Principles
• Binomial Coefficients
• Combinations And Permutations
• Combinatorial Proofs
• Advanced Counting Using The Principle Of Inclusion And Exclusion
• Describing Sequences
• Arithmetic And Geometric Sequences
• Polynomial Fitting
• Solving Recurrence Relations
• Mathematical Induction
• Propositional Logic
• Proofs And Proving Techniques
• Graph Theory Definitions
• Trees
• Planar Graphs
• Coloring Graphs
• Euler Paths And Circuits
• Matching In Bipartite Graphs
• Generating Functions
• Number Theory
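To give a taste of the "Solving Recurrence Relations" section above, here is a hedged sketch (illustrative only, not course material) of the characteristic-polynomial technique applied to a_n = 3a_{n-1} - 2a_{n-2}:

```python
def recurrence(n: int, a0: int = 2, a1: int = 3) -> int:
    """Iterate a_n = 3*a_{n-1} - 2*a_{n-2} with a_0 = 2, a_1 = 3."""
    a, b = a0, a1
    for _ in range(n):
        a, b = b, 3 * b - 2 * a
    return a

# The characteristic polynomial x^2 - 3x + 2 = (x - 2)(x - 1) has roots 2 and 1,
# so a_n = c1 * 2^n + c2 * 1^n; the initial conditions give c1 = c2 = 1.
assert all(recurrence(n) == 2**n + 1 for n in range(20))
```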
AND HERE’S WHAT YOU GET INSIDE OF EVERY SECTION:
Videos: Watch engaging content involving interactive whiteboard lectures as I solve problems for every single math issue you'll encounter in discrete math. We start from the beginning… I explain the
problem setup and why I set it up that way, the steps I take and why I take them, how to work through the yucky, fuzzy middle parts, and how to simplify the answer when you get it.
Notes: The notes section of each lesson is where you find the most important things to remember. It's like Cliff Notes for books, but for Discrete Math. Everything you need to know to pass your class
and nothing you don't.
Quizzes: When you think you've got a good grasp on a topic within a lecture, test your understanding with a quiz. If you pass, great! If not, you can review the videos and notes again or ask for help
in the Q&A section.
Workbooks: Want even more practice? When you’ve finished the section, you can review everything you’ve learned by working through the bonus workbooks. These workbooks include 500+ extra practice
problems (all with detailed solutions and explanations for how to get to those solutions), so they’re a great way to solidify what you just learned in that section.
YOU’LL ALSO GET:
• Lifetime access to a free online Discrete Math textbook
• Lifetime access to Master Discrete Math 2020
• Friendly support in the Q&A section
• Udemy Certificate of Completion available for download
So what are you waiting for? Learn Discrete Math in a way that will advance your career and increase your knowledge, all in a fun and practical way!
Will this course give you core discrete math skills?
Yes it will. There is a range of exciting opportunities for students who take Discrete Math. All of them require a solid understanding of Discrete Math, and that's what you will learn in this course.
Why should you take this course?
Discrete Mathematics is the branch of mathematics dealing with objects that can assume only distinct, separated values. Discrete means individual, separate, distinguishable, implying discontinuous or
not continuous; integers are discrete in this sense even though they are countable in the sense that you can use them to count. The term Discrete Mathematics is therefore used in contrast with
Continuous Mathematics, which is the branch of mathematics dealing with objects that can vary smoothly (and which includes, for example, calculus). Whereas discrete objects can often be characterized
by integers, continuous objects require real numbers.
Almost all middle or junior high schools and high schools across the country closely follow a standard mathematics curriculum with a focus on Continuous Mathematics. The typical sequence includes:
Pre-Algebra -> Algebra 1 -> Geometry -> Algebra 2/Trigonometry -> Precalculus -> Calculus -> Multivariable Calculus/Differential Equations
Discrete mathematics has not yet been considered a separate strand in middle and high school mathematics curricula. Discrete mathematics has never been included in middle and high school high-stakes
standardized tests in the USA. The two major standardized college entrance tests: the SAT and ACT, do not cover discrete mathematics topics.
Discrete mathematics grew out of the mathematical sciences response to the need for a better understanding of the combinatorial bases of the mathematics used in the real world. It has become
increasingly emphasized in the current educational climate due to following reasons:
Many problems in middle and high school math competitions focus on discrete math
Approximately 30-40% of questions in premier national middle and high school mathematics competitions, such as the AMC (American Mathematics Competitions), focus on discrete mathematics. More than
half of the problems in the high-level math contests, such as the AIME (American Invitational Mathematics Examination), are associated with discrete mathematics. Students not having enough knowledge
and skills in discrete mathematics can't do well on these competitions. Our AMC prep course curriculum always includes at least one-third of the studies in discrete mathematics, such as number theory,
combinatorics, and graph theory, due to the significance of these topics in the AMC contests.
Discrete Mathematics is the backbone of Computer Science
Discrete mathematics has become popular in recent decades because of its applications to computer science. Discrete mathematics is the mathematical language of computer science. Concepts and
notations from discrete mathematics are useful in studying and describing objects and problems in all branches of computer science, such as computer algorithms, programming languages, cryptography,
automated theorem proving, and software development. Conversely, computer implementations are tremendously significant in applying ideas from discrete mathematics to real-world applications, such as
in operations research.
The set of objects studied in discrete mathematics can be finite or infinite. In real-world applications, the set of objects of interest are mainly finite, the study of which is often called finite
mathematics. In some mathematics curricula, the term finite mathematics refers to courses that cover discrete mathematical concepts for business, while discrete mathematics courses emphasize discrete
mathematical concepts for computer science majors.
Discrete math plays the significant role in big data analytics.
The Big Data era poses a critically difficult challenge and striking development opportunities: how to efficiently turn massively large data into valuable information and meaningful knowledge.
Discrete mathematics produces a significant collection of powerful methods, including mathematical tools for understanding and managing very high-dimensional data, inference systems for drawing sound
conclusions from large and noisy data sets, and algorithms for scaling computations up to very large sizes. Discrete mathematics is the mathematical language of data science, and as such, its
importance has increased dramatically in recent decades.
IN SUMMARY, discrete mathematics is an exciting and appropriate vehicle for working toward and achieving the goal of educating informed citizens who are better able to function in our increasingly
technological society; have better reasoning power and problem-solving skills; are aware of the importance of mathematics in our society; and are prepared for future careers which will require new
and more sophisticated analytical and technical tools. It is an excellent tool for improving reasoning and problem-solving abilities.
Starting from the 6th grade, students should put some effort into studying fundamental discrete math, especially combinatorics, graph theory, discrete geometry, number theory, and discrete probability.
Students, even those possessing very little knowledge and skills in elementary arithmetic and algebra, can join our competitive mathematics classes to begin learning and studying discrete mathematics.
Does the course get updated?
It's no secret that the discrete math curriculum is advancing at a rapid rate. New, more complex content and topics are changing Discrete Math courses across the world every day, meaning it's crucial to
stay on top of the latest knowledge.
A lot of other courses on Udemy get released once, and never get updated. Learning from an outdated course and/or an outdated version of Discrete Math can be counterproductive and even worse – it
could teach you the wrong way to do things.
There’s no risk either!
This course comes with a full 30-day money-back guarantee. Meaning if you are not completely satisfied with the course or your progress, simply let Kody know and he will refund you 100%, every last
penny, no questions asked.
You either end up with Discrete Math skills, go on to succeed in college-level discrete math courses and potentially make an awesome career for yourself, or you try the course and simply get all your
money back if you don't like it.
You literally can't lose. Ready to get started?
Enroll now using the Add to Cart button on the right, and get started on your way to becoming a master of Discrete Mathematics. Or, take this course for a free spin using the preview feature, so you
can be 100% certain this course is for you.
See you on the inside (hurry, your Discrete Math class is waiting!)
Author(s): Kody Amour
You must be logged in to post a comment.
|
{"url":"https://coupon.technovedant.com/master-discrete-math-2020-complete-basic-to-advanced-course/","timestamp":"2024-11-09T06:31:29Z","content_type":"text/html","content_length":"78303","record_id":"<urn:uuid:508820be-0e03-48bf-9511-3a5ae008a276>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00624.warc.gz"}
|
trainLearner: Train an R learner.
Mainly for internal use. Trains a wrapped learner on a given training set. You have to implement this method if you want to add another learner to this package.
trainLearner(.learner, .task, .subset, .weights = NULL, ...)
Value
(any). Model of the underlying learner.
Arguments
.learner: Wrapped learner.
.task: Task to train learner on.
.subset: Subset of cases for training set, index the task with this. You probably want to use getTaskData for this purpose.
.weights: Weights for each observation.
...: Additional (hyper)parameters, which need to be passed to the underlying train function.
Your implementation must adhere to the following: The model must be fitted on the subset of .task given by .subset. All parameters in ... must be passed to the underlying training function.
|
{"url":"https://www.rdocumentation.org/packages/mlr/versions/2.19.1/topics/trainLearner","timestamp":"2024-11-12T20:26:27Z","content_type":"text/html","content_length":"65194","record_id":"<urn:uuid:58e31f4f-9cb3-4f24-ab29-6c4d6bbfadc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00504.warc.gz"}
|
Mike ●
December 14, 2015
●1 comment
The mathematics of number theory and elliptic curves can take a lifetime to learn because they are very deep subjects. As engineers we don't have time to earn PhDs in math along with all the
things we have to learn just to make communications systems work. However, a little learning can go a long way toward helping make our communications systems secure - we don't need to know everything.
The following articles are broken down into two realms, number theory and elliptic...
A digital signature is used to prove a message is connected to a specific sender. The sender cannot deny they sent that message once signed, and no one can modify the message and maintain the
signature. The message itself is not necessarily secret. Certificates of authenticity, digital cash, and software distribution use digital signatures so recipients can verify they are getting what
they paid for.
Since messages can be of any length and mathematical algorithms always use fixed...
Elliptic Curve Cryptography is used to create a Public Key system that allows two people (or computers) to exchange public data so that both sides know a secret that no one else can find in a
reasonable time. The simplest method uses a fixed public key for each person. Once cracked, every message ever sent with that key is open. More advanced key exchange systems have "perfect forward
secrecy" which means that even if one message key is cracked, no other message will...
Mike ●
November 23, 2015
●2 comments
One of the important steps of computing point addition over elliptic curves is a division of two polynomials.
Mike ●
November 20, 2015
●7 comments
Error correction codes and cryptographic computations are most easily performed working with GF(2^n)
Mike ●
November 16, 2015
●6 comments
Secure online communications require encryption. One standard is AES (Advanced Encryption Standard) from NIST. But for this to work, both sides need the same key for encryption and decryption. This
is called Private Key encryption.
Mike ●
November 3, 2015
●2 comments
Elliptic Curve Cryptography is used as a public key infrastructure to secure credit cards, phones and communications links. All these devices use either FPGA's or embedded microprocessors to compute
the algorithms that make the mathematics work. While the math is not hard, it can be confusing the first time you see it. This blog is an introduction to the operations of squaring and computing an
inverse over a finite field which are used in computing Elliptic Curve arithmetic. ...
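The teaser above mentions squaring and computing an inverse over a finite field. As a flavor of that arithmetic, here is a sketch over a prime field (the post itself works over GF(2^n), where polynomial arithmetic replaces integer arithmetic; the Curve25519 prime below is chosen purely for illustration):

```python
# The Curve25519 prime, used here only as an illustrative modulus.
p = 2**255 - 19
a = 123456789

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p), so a^(p-2) is the inverse of a.
inv = pow(a, p - 2, p)
assert (a * inv) % p == 1

# Squaring over the field is just modular multiplication.
sq = (a * a) % p
assert sq == pow(a, 2, p)
```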
Mike ●
October 22, 2015
●6 comments
Everything in the digital world is encoded. ASCII and Unicode are combinations of bits which have specific meanings to us. If we try to interpret a compiled program as Unicode, the result is a lot
of garbage (and beeps!) To reduce errors in transmissions over radio links we use Error Correction Codes so that even when bits are lost we can recover the ASCII or Unicode original. To prevent
anyone from understanding a transmission we can encrypt the raw data...
I got a bad feeling yesterday when I had to include reference information about a 16-bit CRC in a serial protocol document I was writing. And I knew it wasn’t going to end well.
The last time I looked into CRC algorithms was about five years ago. And the time before that… sometime back in 2004 or 2005? It seems like it comes up periodically, like the seventeen-year locust or
sunspots or El Niño,...
Fabien Le Mentec ●
July 19, 2014
I am improving the domotics framework that I described in a previous article: //www.embeddedrelated.com/showarticle/605.php
I want to support wireless wall outlets, allowing me to switch devices power from a remote location over HTTP.
To do so, I could design my own wireless wall outlets and use a hardware similar to the previous one, based on the NRF905 chipset. The problem is that such a product would not be certified, and that
would be an issue regarding the home insurance,...
A wireless button that uses the M5 STAMP PICO and Mongoose to send a Telegram message when pressed. The code is written in C
Ido Gendel ●
January 4, 2024
While the Internet is choke-full of explanations of basic data communication protocols, very little is said about the higher levels of packing, formatting, and exchanging information in a useful and
practical way. This less-charted land is still fraught with strange problems, whose solutions may be found in strange places – in this example, a very short, 60 years old Science Fiction story.
Sergio R Caprile ●
June 22, 2024
In the first part of this blog, we introduced this little framework to integrate MicroPython and Cesanta's Mongoose; where Mongoose runs when called by MicroPython and is able to run Python functions
as callbacks for the events you decide in your event handler. Now we add MQTT to the equation, so we can subscribe to topics and publish messages right from MicroPython.
The security of elliptic curve cryptography is determined by the elliptic curve discrete log problem. This article explains what that means. A comparison with real number logarithm and modular
arithmetic gives context for why it is called a log problem.
Elliptic curve mathematics over finite fields helps solve the problem of exchanging secret keys for encrypted messages as well as proving a specific person signed a particular document. This article
goes over simple algorithms for key exchange and digital signature using elliptic curve mathematics. These methods are the essence of elliptic curve cryptography (ECC) used in applications such as
SSH, TLS and HTTPS.
Mohammed Billoo ●
January 29, 2024
In this blog post, I show how to enable BLE support in a Zephyr application. First, I show the necessary configuration options in Kconfig. Then, I show how to use the Zephyr functions and macros to
create a custom service and characteristic for a contrived application.
Mike ●
November 3, 2015
●2 comments
Elliptic Curve Cryptography is used as a public key infrastructure to secure credit cards, phones and communications links. All these devices use either FPGA's or embedded microprocessors to compute
the algorithms that make the mathematics work. While the math is not hard, it can be confusing the first time you see it. This blog is an introduction to the operations of squaring and computing an
inverse over a finite field which are used in computing Elliptic Curve arithmetic. ...
Sergio R Caprile ●
March 31, 2024
This is more a framework than an actual application; with it you can integrate MicroPython and Cesanta's Mongoose.
Mongoose runs when called by MicroPython and is able to run Python functions as callbacks for the events you decide in your event handler. The code is completely written in C, except for the example
Python callback functions, of course. To try it, you can just build this example on a Linux machine, and, with just a small tweak, you can also run it on any ESP32 board.
A wireless button that uses the M5 STAMP PICO and Mongoose to send a Telegram message when pressed. The code is written in C
The STM32 B-CAMS-OMV camera module offers an accessible way to get started with embedded vision. Coupled with the STM32H747I-DISCO discovery kit and the FP-AI-VISION1 function pack, it's possible to
be up and running in minutes.
This video describes the camera connection interface to the discovery kit and the key software functions required to control the camera and process its data. We review the ISP (Image Signal
Processor) interface with examples of image processing...
Ido Gendel ●
January 4, 2024
While the Internet is chock-full of explanations of basic data communication protocols, very little is said about the higher levels of packing, formatting, and exchanging information in a useful and practical way. This less-charted land is still fraught with strange problems, whose solutions may be found in strange places – in this example, a very short, 60-year-old Science Fiction story.
ICLA 2021
The times mentioned below are all in Indian Standard Time (IST).
IST = GMT + 5 hours 30 minutes (e.g., 1400 IST = 0830 GMT).
March 4, 2021 (Thursday)
Invited talk: Agata Ciabattoni (Bio)
14:00 - 15:00 Normative reasoning in Mīmāṃsā: A deontic logic approach(Slides)
Chair: S P Suresh
15:00 - 15:30 BREAK and Meet on Gather
Panel on logic education
Serafina Lapenta (Bio)
15:30 - 17:00 Other Panelists:
- Viviane Durand-Guerrier (Bio):The place of logic in school curricula and advocacy for logic at school(Slides)
- François Schwarzentruber (Bio):Use of tools in teaching logic(Slides)
- John Slaney (Bio):Introductory logic: opportunity and challenge
17:00 - 17:30 BREAK and Meet on Gather
Invited talk: Thomas Schwentick (Bio)
17:30 - 18:30 Dynamic Complexity: Basics and recent directions(Slides)
Chair: Anuj Dawar
18:30 - 20:30 BREAK and Meet on Gather
Invited talk: Adnan Darwiche (Bio)
20:30 - 21:30 Reasoning about what was learned(Slides)
Chair: Wiebe van der Hoek
March 5, 2021 (Friday)
Invited talk: Marta Kwiatkowska (Bio)
14:00 - 15:00 Probabilistic model checking for strategic equilibria-based decision making(Slides)
Chair: Prakash Saivasan
15:00 - 15:30 BREAK and Meet on Gather
Contributed talks
- Isabella McAllister and Patrick Girard. AGM Belief Revision About Logic (Slides)
- Abhisekh Sankaran. Feferman-Vaught decomposition for prefix classes of first order logic (Slides)
15:30 - 17:00 - Sayantan Roy. On the Characterizations of Tarski-type and Lindenbaum-type Logical Structures (Slides)
- Antonio Di Nola, Serafina Lapenta and Giacomo Lenzi. Dualities and logical aspects of Baire functions (Slides)
- Nikolay Bazhenov. On computability-theoretic properties of Heyting algebras (Slides)
- Sreehari K and Kamal Lodaya. Plenitude (Slides)
Chair: Sourav Chakraborty
17:00 - 17:30 BREAK and Meet on Gather
Contributed talks
- Jie Fan, Davide Grossi, Barteld Kooi, Xingchi Su and Rineke Verbrugge. Commonly Knowing Whether (Slides)
17:30 - 18:30 - Hans van Ditmarsch, Didier Galmiche and Marta Gawek. An Epistemic Separation Logic with Action Models (Slides)
- Masanobu Toyooka and Katsuhiko Sano. Analytic Multi-Succedent Sequent Calculus for Combining Intuitionistic and Classical Propositional Logic (Slides)
- Santiago Jockwich Martinez, Sourav Tarafder and Giorgio Venturi. Quotient models for a class of non-classical set theories (Slides)
Chair: Md. Aquil Khan
18:30 - 20:30 BREAK and Meet on Gather
Invited talk: Julia Knight (Bio)
20:30 - 21:30 Describing structures and classes of structures(Slides)
Chair: Amit Kuber
March 6, 2021 (Saturday)
Invited talk: Hans van Ditmarsch (Bio)
14:00 - 15:00 Dynamic epistemic logic for distributed computing - asynchrony and concurrency(Slides)
Chair: Helle Hvid Hansen
15:00 - 15:30 BREAK and Meet on Gather
Panel on logic and experimental studies: A new paradigm or contradiction in terms?
Torben Braüner (Bio)(Slides)
15:30 - 17:00 Other Panelists:
- Nina Gierasimczuk (Bio):Approximate number sense and semantic universals: an experimental simulation study(Slides)
- Paula Quinon (Bio):The core cognition paradigm and foundations of logic, arithmetic, and computation
- Niels Taatgen (Bio):Cognitive architectures and predictive models(Slides)
17:00 - 17:30 BREAK and Meet on Gather
Contributed talks
- Harshit Bisht and Amit Kuber. Aggregating Relational Structures (Slides)
17:30 - 18:30 - Patrick Blackburn, Torben Braüner and Julie Lundbak Kofod. A Note on Hybrid Modal Logic with Propositional Quantifiers (Slides)
- Deepak D'Souza and Raveendra Holla. Equivalence of Pointwise and Continuous Semantics of FO with Linear Constraints (Slides)
- Deepak D'Souza and Raj Mohan Matteplackel. A Clock-Optimal Hierarchical Monitoring Automaton for MITL (Slides)
Chair: Mohua Banerjee
18:30 - 20:30 BREAK and Meet on Gather
Music Session: Instrumental
- Hans van Ditmarsch on Cello
20:30 - 21:30 - François Schwarzentruber on Piano
- Sourav Tarafder on Ghatam
- Amit Kuber on Keyboard
Chair: Sujata Ghosh
March 7, 2021 (Sunday)
Invited talk: Maria Aloni (Bio)
14:00 - 15:00 A logic for pragmatic intrusion (Slides)
Chair: Fenrong Liu
15:00 - 15:30 BREAK and Meet on Gather
Invited talk: Nick Bezhanishvili (Bio)
15:30 - 16:30 Polyhedral modal logic(Slides)
Chair: Soma Dutta
Contributed talks (Poster Session)
- Purbita Jana. L-Topology via Generalised Geometric Logic (Poster)
16:30 - 17:00 - Jieting Luo, Beishui Liao and John-Jules Meyer. Reasoning about the Robustness of Self-organizing Multi-agent Systems (WIP) (Poster)
- Anantha Padmanabha. Relative Expressive Powers of First Order Modal Logic and Term Modal Logic (Poster)
- Bama Srinivasan and Ranjani Parthasarathi. Multiple Task Specification inspired from Mīmāṃsā for Reinforcement Learning Models (Work in Progress)
17:00 - 17:30 BREAK and Meet on Gather
17:30 - 18:30 ALI General Body Meeting
Chair: Sanjiva Prasad
Contributed talks
- Prosenjit Howlader and Mohua Banerjee. Double Boolean Algebras with Operators (Slides)
18:30 - 19:15 - Kaibo Xie and Jialiang Yan. A Logic for Instrumental Desire (Slides)
- Rohit Parikh. Covid-19 and Knowledge Based Computation (Slides)
Chair: Kamal Lodaya
19:15 - 20:30 BREAK and Meet on Gather
Music Session: Vocal
- Soma Dutta
20:30 - 21:30 - Amit Kuber
- Mohua Banerjee
- Abhisekh Sankaran
Chair: R. Ramanujam
Early Software Size Estimation using Weighted Analysis Class Diagram Metrics - Datasets
Published: 9 June 2022| Version 1 | DOI: 10.17632/mnrpcxzk88.1
It includes five different datasets. The first four datasets contain student projects collected from different offerings of two undergraduate-level courses – Object-Oriented Analysis and Design
(OOAD) and Software Engineering (SE) – taught in a renowned private university in Lahore over a period of six years. The fifth dataset contains real-life industry projects collected from a renowned
software house (i.e. member of Pakistan Software Houses Association for IT and ITeS (P@SHA)) in Lahore. Dataset #1 consists of 31 C++ GUI-based desktop applications. Dataset #2 consists of 19 Java
GUI-based desktop applications. Dataset #3 consists of 12 Java web applications. Dataset #4 consists of all 31 Java applications from the two preceding categories combined. Dataset #5 consists of 11 VB.NET GUI-based desktop applications.
Attributes are used as follows:
Project Code – Project ID for identification purposes
NOC – The total number of classes in a class diagram
NOA – The total number of attributes in a class diagram
NOM – The total number of methods/operations in a class diagram
NODep – The total number of dependency relationships in a class diagram
NOAss – The total number of association relationships in a class diagram
NOComp – The total number of composition relationships in a class diagram
NOAgg – The total number of aggregation relationships in a class diagram
NOGen – The total number of generalization relationships in a class diagram
NORR – The total number of realization relationships in a class diagram
NOOM – The total number of one-to-one multiplicity relationships in a class diagram
NOMM – The total number of one-to-many multiplicity relationships in a class diagram
NMMM – The total number of many-to-many multiplicity relationships in a class diagram
OCP – objective class points
EOCP – enhanced objective class points
WEOCP – weighted enhanced objective class points
SLOC – software size measured in source lines of code
National University of Computer and Emerging Sciences - Lahore Campus
Project Management, Object Oriented Software Engineering, Software Design, Empirical Study of Software Engineering, Linear Regression
Equality restrictions in the social sciences and impacts on estimators of covariates
Marco Giesselmann, Wolfgang Jagodzinski
Building: Law Building
Room: Breakout 3 - Law Building, Room 104
Date: 2012-07-10 01:30 PM – 03:00 PM
Last modified: 2012-01-12
It has become a common practice in survey research to represent groups and collectives by means of dummy variables. In comparative survey research, for example, country-specific influences are often
estimated in this way. If a survey includes L countries and we want to find out whether the average life satisfaction in each of the remaining L-1 countries significantly differs from the life
satisfaction in a baseline country C0 we represent each country by a country dummy variable (CDV) and regress them on the life satisfaction. Researchers who are interested in micro-level
relationships or micro/macro interactions often use CDVs for the elimination of composition effects. Another well-known field of application is cohort analysis. Here dummy variables represent
generations, age groups, and/or periods.
While the classical discussion in cohort analysis focused on the aggregate-level effects of dummy variables, developments during the last decades have expanded the approach in two directions. First,
simple macro-level models have been replaced by multi-level, mostly two-level models. Researchers nowadays are not only interested in the variation between groups but also in the variation between
individuals. Second, models have become more complex insofar as ordinal or metric variables are included alongside dichotomous variables. In this research note we want to examine the interplay between
dummy variables and metric variables in two-level models. More precisely we want to find out to what extent the effect of a metric variable on a given dependent variable is affected by the dummy
variables in the model.
The latter question in turn is motivated by noticing that the specification of a full set of dummy variables often leads to identification problems in the presence of covariates. To make estimation of such models feasible, it is common to collapse several macro-units onto one dummy variable. While the mathematics of estimation under such ‘equality restrictions’ is well known, its methodological implications are still somewhat vague. Therefore, we want to explain how estimators in a DV-model with equality restrictions are constructed and have to be interpreted from a technical viewpoint. By providing substantial insight into what an equality restriction actually implies, we also want to reveal why results often differ depending on the collapsing strategy. Additionally, it will be substantively clarified why, in particular, the choice of a minimal set (with only two macro-units collapsed onto one dummy variable) will generate arbitrary results.
Complexity of quantum algorithms makes it difficult to program - CiteBar
Complexity of quantum algorithms makes it difficult to program
Truth rate: 78%
The Quantum Conundrum: Why Programming Complex Algorithms Can Be a Challenge
As we continue to push the boundaries of what is possible in the world of computing, the complexities of quantum algorithms have emerged as a major hurdle for programmers and researchers alike. With
the promise of exponential speedup over classical computers, quantum algorithms hold great potential for solving some of the most challenging problems in fields like cryptography and optimization.
The Nature of Quantum Algorithms
Quantum algorithms operate on the principles of superposition and entanglement, allowing them to process multiple possibilities simultaneously. This leads to a fundamental difference in how they
approach problem-solving compared to classical computers. However, this very same property makes it difficult for programmers to develop and implement these algorithms.
The Challenges of Quantum Programming
There are several reasons why programming complex quantum algorithms is such a challenge:
• Understanding the underlying mathematics: Quantum algorithms rely heavily on advanced mathematical concepts like linear algebra and group theory.
• Dealing with superposition: The concept of superposition makes it difficult to grasp how a qubit can exist in multiple states at once, leading to confusion and errors.
• Managing entanglement: Entangled particles are connected in such a way that the state of one particle is dependent on the state of the other, making it hard to predict and control their behavior.
The Consequences of Complexity
The complexity of quantum algorithms has several consequences for programmers and researchers:
• Steep learning curve: Quantum programming requires a deep understanding of both computer science and physics, making it inaccessible to many.
• Limited resources: Currently, there are few tools and frameworks available to support the development of complex quantum algorithms.
• High error rates: The fragile nature of quantum states makes it difficult to achieve reliable results, leading to high error rates.
The Way Forward
While the challenges of programming complex quantum algorithms are significant, they are not insurmountable. To overcome these hurdles, we need:
• More educational resources: There is a pressing need for more courses, tutorials, and documentation that explain the principles and practice of quantum programming.
• Improved tools and frameworks: Better software support will make it easier for programmers to develop and test complex quantum algorithms.
• Collaboration and community building: Sharing knowledge and expertise through conferences, workshops, and online forums can help to accelerate progress in this field.
Programming complex quantum algorithms is a challenging task that requires a deep understanding of both computer science and physics. However, with the potential rewards of solving some of the
world's most pressing problems, it is an effort worth pursuing. By acknowledging the complexities and working together to overcome them, we can unlock the full potential of quantum computing and
change the course of history.
• Created by: Adriana Gonçalves
• Created at: Aug. 17, 2024, 1:41 a.m.
Mastering the Art of C – How to Print Floats with Precision
Understanding Floats in C
Before delving into the intricacies of printing floats with precision in C, it is essential to understand the basics of floating-point numbers. In the C programming language, floats represent real
numbers with fractional parts. They are typically stored in memory using the IEEE 754 standard.
The IEEE 754 standard defines two commonly used floating-point formats: single precision and double precision. Single precision floats, represented by the float data type, occupy 32 bits of memory,
while double precision floats, represented by the double data type, occupy 64 bits.
Representation of floats in memory
To understand how floats are stored in memory, let’s consider the example of a single precision float. The 32 bits of memory allocated for a float are divided into three components:
• 1 bit for the sign (+/-)
• 8 bits for the exponent
• 23 bits for the fractional part (also known as the significand or mantissa)
The sign bit determines whether the float is positive or negative. The exponent represents the power of two by which the significand is multiplied. The fractional part contains the significant
digits, which determine the precision of the float.
Precision and Rounding in C
While floats can represent a wide range of values, they are limited in terms of precision due to the finite number of bits allocated for the significand. Single precision floats typically have a
precision of about 7 decimal digits, while double precision floats have a precision of about 15 decimal digits.
When performing calculations or printing floating-point numbers, it is important to be aware of the limitations of precision. Rounding errors can occur when the exact decimal representation of a
float cannot be accurately represented using the finite number of bits available.
Rounding modes and their impact on precision
C provides several rounding modes that dictate how rounding is performed when dealing with floating-point numbers. The default rounding mode is known as “round to nearest even,” where the number is
rounded to the nearest representable value. In cases where the number is equidistant between two representable values, it is rounded to the nearest even value.
Other rounding modes include “round towards zero,” “round towards positive infinity,” and “round towards negative infinity.” These rounding modes can be specified using the fesetround() function from
the fenv.h header.
Techniques for Printing Floats with Precision
Printing floats with precision in C can be achieved using the printf function. The printf function provides format specifiers that allow you to control the precision of the output.
Using the printf function
To specify precision when printing floats, you can use the %f, %e, or %g format specifiers. The %f specifier displays the float in decimal notation with a fixed number of decimal places, while the %e
specifier displays the float in scientific notation with a fixed number of decimal places. The %g specifier automatically chooses between the decimal and scientific notation based on the magnitude of
the float.
Here’s an example of using the printf function to print a float with a precision of 2 decimal places:
#include <stdio.h>
int main() { float pi = 3.14159; printf("Pi: %.2f\n", pi); return 0; }
This will output:
Pi: 3.14
Adjusting width and alignment
In addition to precision, the printf function also allows you to adjust the width and alignment of the output. You can specify the minimum field width by writing a number between the % sign and the conversion character (for example, %10.2f requests a minimum width of 10 characters). You can also control the alignment using the - flag for left alignment or the 0 flag for zero-padding.
Here’s an example of adjusting the width and alignment when printing a float:
#include <stdio.h>
int main() { float pi = 3.14159; printf("Pi: %10.2f\n", pi); return 0; }
This will output:
Pi:       3.14
Limiting rounding errors
Rounding errors can accumulate when performing calculations with floats. To minimize this accumulation, it is recommended to use the rounding functions declared in math.h, such as round and trunc, to round or truncate intermediate results as needed.
For example, instead of performing calculations directly on a float variable, you can calculate with higher precision using double precision floats and then round or truncate the final result as
necessary before printing it.
Best Practices for Printing Floats with Precision
When working with floats and printing them with precision in C, it is important to follow some best practices to ensure accurate and expected results.
Avoiding unnecessary conversions
Unnecessary conversions between float and double types can introduce additional rounding errors. Try to perform calculations and printing using the same data type whenever possible to maintain
consistency and accuracy.
Using appropriate data types and variables
Using appropriate data types and variables can help ensure precision when working with floats. If double precision is required for your calculations, use the double data type instead of the float
data type to avoid loss of precision.
Considering the scale of the floating-point value
When determining the desired precision for printing a float, consider the scale of the value. Precision requirements may differ for very small or very large numbers. Adjust the precision accordingly
to ensure the desired level of accuracy.
Handling Edge Cases
When printing floats with precision in C, it is important to consider edge cases, such as extremely large or small numbers, as well as special values like NaN (Not-a-Number) and infinity.
Dealing with extremely large or small numbers
When dealing with extremely large or small numbers, the %e format specifier is commonly used to display the float in scientific notation. This allows for a more compact and readable representation of
the float.
Printing special values like NaN and infinity
In C, special values like NaN and infinity can occur when performing calculations. These special values can be printed using the %f or %e format specifiers, just like any other float. However, it is
essential to handle these special values separately if needed, as they may require additional logic or formatting.
Case Study: Printing Pi with Precision
Let’s explore a case study to demonstrate different techniques for printing the value of Pi with precision in C.
Step-by-step approach to printing the value of Pi
1. Declare a float variable to store the value of Pi.
2. Assign the approximate value of Pi to the variable.
3. Use the printf function with the desired format specifier to print the value of Pi with the desired precision.
Demonstrating different techniques and their results
Here’s an example of printing Pi with different precision using the printf function:
#include <stdio.h>
int main() { float pi = 3.14159; printf("Pi (default): %.6f\n", pi); printf("Pi (scientific notation): %e\n", pi); return 0; }
This will output:
Pi (default): 3.141590
Pi (scientific notation): 3.141590e+00
In conclusion, precision is crucial when printing floats in C. By understanding the basics of floating-point numbers and the limitations of precision, you can avoid common challenges associated with
float printing.
Utilizing the proper techniques, such as adjusting the format specifiers, aligning and adjusting the width, and minimizing rounding errors, enables you to print floats with precision accurately.
Following best practices, considering the scale of the floating-point value, and handling edge cases ensures consistent and expected results.
By mastering the art of printing floats with precision in C, you can enhance the accuracy and reliability of your programs that involve floating-point calculations or outputting floating-point values.
Turn me on my side and I am everything. Cut me in half and I am nothing. What am I?
One of the trickiest riddles you’ll find is the one that reads:
Turn me on my side and I am everything. Cut me in half and I am nothing. What am I?
The key to answering this riddle is a little math knowledge. In particular, you need to know the mathematical symbol for infinity (i.e. everything). If you do, then the answer is quite simple.
Still unable to come up with the solution to this riddle? Here’s the answer for you:
Turn me on my side and I am everything. Cut me in half and I am nothing. What am I?
The number 8.
If you turn the number 8 on its side, you get the mathematical symbol for infinity.
If you cut the number 8 in half horizontally, you are left with a zero.
See also:
Simple Epidemiological Dynamics Explain Phylogenetic Clustering of HIV from Patients with Recent Infection
Phylogenies of highly genetically variable viruses such as HIV-1 are potentially informative of epidemiological dynamics. Several studies have demonstrated the presence of clusters of highly related
HIV-1 sequences, particularly among recently HIV-infected individuals, which have been used to argue for a high transmission rate during acute infection. Using a large set of HIV-1 subtype B pol
sequences collected from men who have sex with men, we demonstrate that virus from recent infections tend to be phylogenetically clustered at a greater rate than virus from patients with chronic
infection (‘excess clustering’) and also tend to cluster with other recent HIV infections rather than chronic, established infections (‘excess co-clustering’), consistent with previous reports. To
determine the role that a higher infectivity during acute infection may play in excess clustering and co-clustering, we developed a simple model of HIV infection that incorporates an early period of
intensified transmission, and explicitly considers the dynamics of phylogenetic clusters alongside the dynamics of acute and chronic infected cases. We explored the potential for clustering
statistics to be used for inference of acute stage transmission rates and found that no single statistic explains very much variance in parameters controlling acute stage transmission rates. We
demonstrate that high transmission rates during the acute stage is not the main cause of excess clustering of virus from patients with early/acute infection compared to chronic infection, which may
simply reflect the shorter time since transmission in acute infection. Higher transmission during acute infection can result in excess co-clustering of sequences, while the extent of clustering
observed is most sensitive to the fraction of infections sampled.
Author Summary
Diversity of viral genetic sequences depends on epidemiological mechanisms and dynamics, however the exact mechanisms responsible for patterns observed in phylogenies of HIV remain poorly understood.
We observe that virus taken from patients with early/acute HIV infection are more likely to be closely related. By developing a mathematical model of HIV transmission, we show how these and other
patterns arise as a simple consequence of intensified transmission during the early/acute stage of HIV infection, however observing these patterns is highly dependent on sampling a significant
fraction of prevalent infections.
Citation: Volz EM, Koopman JS, Ward MJ, Brown AL, Frost SDW (2012) Simple Epidemiological Dynamics Explain Phylogenetic Clustering of HIV from Patients with Recent Infection. PLoS Comput Biol 8(6):
e1002552. https://doi.org/10.1371/journal.pcbi.1002552
Editor: Christophe Fraser, Imperial College London, United Kingdom
Received: September 27, 2011; Accepted: April 24, 2012; Published: June 28, 2012
Copyright: © 2012 Volz et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in
any medium, provided the original author and source are credited.
Funding: This study was funded by the NIH grant no. 1-K01-AI-091440-01 and NIH grant no. R01-AI078752. The funders had no role in study design, data collection and analysis, decision to publish, or
preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Phylogenetic clusters of closely related virus such as HIV arise from the epidemiological dynamics and transmission by infected hosts. If virus is phylogenetically clustered, it is an indication that
the hosts are connected by a short chain of transmissions [1].
If super-infection is rare, and assuming an extreme bottleneck at the point of transmission, each lineage in a phylogenetic tree corresponds to a single infected individual with its own unique viral
population [2], [3]. A transmission event between hosts causes an extreme bottleneck in the population of virus in the new host. For infections between MSM, it is estimated that infection is
initiated by one or several virions [4], [5]. At the time of transmission, the quasispecies of virus within the transmitting host diverges and can thereby generate a new branch in the phylogeny of
consensus viral isolates from infected individuals [6]. Transmissions in the recent past should be reflected by recently diverged lineages, and transmissions from long ago should be reflected by
branches close to the root of the tree [7]. Viruses such as HIV which have a high mutation rate relative to epidemiological spread can generate epidemics such that the correspondence between transmission and
phylogenetic branching is most clear [2].
Given a phylogeny of virus reconstructed from samples, the phylogenetic clusters are a partition of the sample units into disjoint sets as a function of the tree topology. A cluster will consist of
all taxa of the tree that are descended from a given lineage on the interior of the tree. There are many variations of this idea, and there is no general agreement about how to choose interior
lineages for defining clusters. The most common algorithms require strong statistical support for a monophyletic clade among all taxa in a cluster [8]–[14]. These definitions may additionally require
all taxa in a cluster to be connected by short branches with less than a threshold length [11], or similarly require that the genetic sequences corresponding to each taxon be separated by a genetic
distance less than a given threshold [8], [14]. Definitions of clustering based on statistical support for monophyly are very difficult to operationalize in a mathematical model, and in particular,
it is not clear how the statistical significance of internal nodes relates to population dynamics. Consequently, we have devised a conceptually similar definition of clusters that relies on the
estimated time to most recent common ancestor (TMRCA) of a set of taxa [15]. A formal definition is provided below.
The sizes of the groupings that arise from a clustering algorithm have been interpreted as a reflection of the heterogeneity of epidemiological transmission. The distribution of cluster sizes of HIV
is often skewed right, and depending on the definition of clustering used, can have a heavy tail [14], [15]. This is consistent with the prevailing view among modelers of sexually transmitted
infections that there is a skewed and in some cases power-law distribution in the number of risky sexual contacts in the population; however, it is not straightforward to make inferences about sexual
network properties from cluster size distributions [16]. In the case of HIV, the distribution of branch lengths within clusters may also reflect the disproportionate impact of early and acute HIV
infection on forward transmission, which is due to higher viral loads in the early stages of infection, higher transmissibility per act [17], and fluctuating risk behavior [18].
When the taxa of the phylogeny are labeled, such as with the demographic, behavioral or clinical attributes of the individuals from whom the virus was sampled, one can further analyze statistical
properties of clustered taxa. Similar taxa, such as those arising from acute infections, may cluster together (or co-cluster) at greater rates. Patterns of co-clustering might be informative about
the fraction of transmissions that occur at different stages of infection or between different demographic categories. In HIV phylogenies from men who have sex with men (MSM), it has been widely
observed [12], [13], [19] that individuals with early/acute HIV infection are much more likely to appear in a phylogenetic cluster. Moreover, if early-stage individuals are in a cluster, they are
much more likely to be clustered with other early infections. Both Lewis et al. and Brenner et al. [8], [9] have hypothesized that co-clustering of early infection is caused by higher
transmissibility per act during early infection. For example, in phylogenies with time-scaled branch lengths, if a large fraction of clusters have a maximum branch length of six months [8], [15],
this suggests that at least that fraction of transmissions also occur within six months. In this article we demonstrate that the mechanisms that generate co-clustering of early infections are
complex, and involve many attributes of the epidemic in addition to higher transmissibility per act [17]. To summarize, several features of the phylogenetic structure of HIV in MSM have been
independently observed by several investigators:
• Many more early infections are phylogenetically clustered than late infections. For future reference, we will refer to this as excess clustering of early/acute infections.
• If an early infection is clustered, it is more likely to be co-clustered with another early infection than expected by chance alone. For future reference, we will refer to this as excess
co-clustering of early/acute infections.
• The distribution of phylogenetic cluster sizes is skewed to the right and is potentially heavy-tailed.
Below, we illustrate these clustering patterns using 1235 HIV-1 subtype B pol sequences collected between 2004 and 2010 in Detroit, Michigan, USA.
These common clustering features motivate several questions. How informative are clustering patterns about the underlying epidemic? In particular, how does higher transmissibility per act during early
infection shape the phylogeny of virus? To address these questions, we have developed a simple mathematical framework that demonstrates the connection between epidemiological dynamics and the
expected patterns of clustering from a transmission tree and the corresponding phylogeny.
Our modeling work suggests that common features of HIV phylogenies are not coincidences, but universal features of certain viral phylogenies. We expect to see similar patterns for any disease such
that the natural history features an early period of intensified transmission. High transmission rates during early infection may be a consequence of higher transmissibility per act due to high viral
loads, but are also influenced by behavioral factors, such as fluctuating risk behavior [18], concurrency [20], and a lack of awareness of the infection. We do not explicitly model immunological or
behavioral factors, but rather consider a compound parameter that describes the rate of transmission during the early/acute period. We find that while higher transmission rates increase the frequency
of early/acute clustering, virus collected from early/acute patients clusters at a higher rate even when transmission rates are uniform over the infectious period.
Materials and Methods
Ethics statement
This research was reviewed by the Institutional Review Board at the University of Michigan. Data used in this research was originally collected for HIV surveillance purposes. Data were anonymized by
staff at the Michigan Department of Community Health before being provided to investigators. Because this research falls under the original mandate for HIV surveillance, it was not classified as
human subjects research.
Phylogenetic clustering of Michigan HIV-1 sequences
Our analysis consists of an empirical component which establishes clustering patterns for a geographically and temporally delineated set of HIV sequences, and an analytical component which
establishes a possible mechanism that could generate the observed patterns.
We examined the phylogenetic relationships of 1235 HIV-1 subtype B partial-pol sequences originally collected for drug-resistance testing. All sequences were collected in the Detroit metropolitan
statistical area between 2004 and 2010. Sequences were tested for quality and subtype using the LANL quality control tool [21]–[23], and aligned against a subtype-B reference (HXB2). Drug resistance
sites [24] were treated as missing data.
A maximum clade credibility phylogeny was estimated with BEAST 1.6.2 [25]. The phylogeny was estimated using a relaxed molecular clock and an HKY85 model of nucleotide substitution with Gamma rate
variation between sites (4 categories). The MCMC was run for 50 million iterations with sampling every iterations. The first million iterations were discarded. The effective sample size of all
parameters exceeded 50.
The phylogeny was converted into a matrix of pairwise distances between taxa expressed in units of calendar time. The distance between a pair of taxa was the TMRCA estimated by BEAST. Taxa were then
classified into clusters using hierarchical clustering algorithms. A pair of taxa were considered to be clustered if the estimated TMRCA did not exceed a given threshold, and a range of thresholds
was examined, from 0.5% of the maximum distance to the distance corresponding to the point where 90% of taxa are clustered with at least one other taxon.
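The threshold-based clustering step described here amounts to single-linkage clustering on the matrix of pairwise TMRCAs. A minimal sketch using standard tools follows; the function name and toy distance matrix are illustrative, not the code used in the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def tmrca_clusters(tmrca_matrix, threshold):
    """Partition taxa so that two taxa share a cluster whenever they are
    linked by a chain of pairwise TMRCAs at or below the threshold."""
    condensed = squareform(tmrca_matrix, checks=False)
    tree = linkage(condensed, method="single")
    return fcluster(tree, t=threshold, criterion="distance")

# Toy example: taxa 0 and 1 share a recent common ancestor; taxon 2 does not.
D = np.array([[0.0, 1.0, 9.0],
              [1.0, 0.0, 9.0],
              [9.0, 9.0, 0.0]])
labels = tmrca_clusters(D, threshold=2.0)
```

Varying the threshold over a grid reproduces the range of clusterings examined in the text.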
Co-clustering of early/acute infections was investigated using a clinical variable (CD4 count) and a measure of genetic diversity of the virus. Both CD4 and sequence diversity are imprecise
indicators of stage of infection. Nevertheless, with a large population-based sample, even noisy indicators of stage of infection are useful for illustrating phylodynamic patterns.
In most cases, CD4 counts were assessed contemporaneously with samples collected for sequencing. The CD4 cell counts can be informative about disease progression and can be used as a noisy predictor
of the unknown date of infection [26]. Individuals with very high cell counts are unlikely to represent late/chronic infections, and we hypothesize that virus from these patients will be more likely
to be phylogenetically clustered. Clustering of patients with high CD4 was previously observed by Pao et al. [10].
Recent work [27] has also highlighted the potential for sequence diversity to be informative of the date of infection. The frequency of ambiguous sites (FAS) in consensus sequences provides an
approximate measure of sequence diversity in the host. HIV infection is initiated by one or a few founder lineages [4], [5]; initially the diversity of the viral population within the host is low,
but diversity increases steadily over the course of infection [28]. By convention, consensus sequences report ambiguous sites as those where the most frequent nucleotide is read with a frequency less
than 80%. We hypothesize that having few ambiguous sites is an indicator of early/acute infection; sequences with fewer ambiguous sites will be more likely to be in a phylogenetic cluster and to be
clustered with other sequences with few ambiguous sites.
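As a sketch, the FAS can be computed directly from a consensus sequence by counting IUPAC ambiguity codes; the 80% consensus-calling convention is applied upstream when the consensus is generated, so it does not appear here. The function name is ours:

```python
# IUPAC nucleotide ambiguity codes (anything other than A, C, G, T, or a gap).
AMBIGUOUS = set("RYSWKMBDHVN")

def frequency_of_ambiguous_sites(consensus_seq):
    """Fraction of non-gap positions in a consensus sequence that carry an
    IUPAC ambiguity code; a rough proxy for within-host viral diversity."""
    sites = [c for c in consensus_seq.upper() if c not in "-."]
    if not sites:
        return 0.0
    return sum(c in AMBIGUOUS for c in sites) / len(sites)

fas = frequency_of_ambiguous_sites("ACGTRACGTN")  # 2 ambiguous sites out of 10
```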
A simple analysis was conducted to establish the existence of excess clustering and co-clustering in the Michigan sequences. This analysis is not designed to classify our sample into an early/acute
component or to estimate the date of infection for each unit.
To illustrate excess clustering of early/acute infections, we calculated the mean CD4 cell count and FAS for each sample unit in a phylogenetic cluster. Because all clustering thresholds are
arbitrary, we explored a large range of values, up to the point where 90% of the sample was clustered with at least one other unit. The standard error of the estimated mean was calculated assuming
simple random sampling. For small threshold distances, very few taxa are clustered, and the standard error is large, but decreases monotonically as the threshold is increased and more taxa are clustered.
To illustrate excess co-clustering, we classified taxa into three categories of CD4: those with CD4 below 200, representing AIDS cases; those with CD4 above 800; and those with CD4 between 200 and 800. Taxa were also
classified into quartiles by FAS. We then counted the number of pairwise clusterings of taxa within and between each category. These counts were arranged in a matrix. Large counts along the diagonal
(within categories) represent co-clustering by stage of infection. To establish excess co-clustering, we compared the counts to the expectation if clusters were being formed at random, e.g. if two
taxa were selected uniformly at random without replacement. We denote the symmetric matrix of co-clustering counts as , so that represents the number of times that a taxon in category is clustered
with a taxon in category . The sum of counts in the 'th row of will be denoted . Following the methods described in [29], the expected value under random pair formation can then be computed. Below,
we illustrate the difference between the observed and expected counts. We can also calculate the assortativity coefficient [29], which describes the total amount of co-clustering in the matrix. To
construct the co-clustering matrices, we selected the value of the distance threshold which maximized the assortativity coefficient.
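A minimal sketch of the expected co-clustering counts under random pair formation and of the assortativity coefficient, following the formulation in [29]; the function and variable names are ours:

```python
import numpy as np

def expected_random(counts):
    """Expected co-clustering counts if pairs formed at random, preserving
    the marginal totals of the observed symmetric count matrix."""
    a = counts.sum(axis=1)
    return np.outer(a, a) / counts.sum()

def assortativity(counts):
    """Assortativity coefficient summarizing excess co-clustering within
    categories, relative to random pair formation [29]."""
    e = counts / counts.sum()   # normalized mixing matrix
    a = e.sum(axis=1)           # marginal fraction in each category
    return (np.trace(e) - a @ a) / (1.0 - a @ a)

counts = np.array([[10.0, 0.0],
                   [0.0, 10.0]])   # perfectly assortative toy matrix
```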
Mathematical model
Following the approach outlined in [6] and [30], we develop a deterministic coalescent model derived from a compartmental susceptible-infected-recovered (SIR) model. A system of ordinary
differential equations describes the dynamics of the prevalence of early and late HIV infection. Individuals pass from a susceptible state, to an early/acute infection state, to a chronic infection state
followed by removal (treatment or death). , and will denote the numbers susceptible, acute, and chronically infected respectively, and the population size will be denoted . For didactic purposes, we
will suppose that treatment is completely effective at preventing forward transmissions. The HIV model is described by equations (1). In these equations, and are respectively the
frequency-dependent transmission rates for early and chronic infected individuals. The average duration of early and chronic infection are respectively and . Natural mortality occurs at the rate and
immigration into the susceptible state occurs at the rate , which maintains a constant population size . is a term which modulates the way incidence of infection scales with prevalence. For the
results presented below, we choose . This term corrects for observed patterns of decreasing incidence with prevalence; this can occur as a result of population heterogeneities (including sexual
network structure) or as the result of decreasing risk behavior as knowledge of the epidemic spread. Many more relevant details could be included in a model of the HIV epidemic in MSM, however our
purpose is to demonstrate how these simple dynamics lead to observed phylogenetic patterns.
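A sketch of dynamics of this form is below. The compartment and parameter names are ours, and the numerical values are illustrative only; the fitted estimates are those in table 1. Immigration is assumed to balance mortality and removal so that the population size stays constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values only -- the fitted estimates are in table 1 of the paper.
beta1, beta2 = 2.0, 0.2              # transmission rates: early/acute, chronic (per year)
gamma1, gamma2 = 1 / 0.5, 1 / 10.0   # 1/mean duration of each stage (per year)
mu = 1 / 40.0                        # natural mortality (per year)
alpha = 1.0                          # incidence-scaling exponent
N = 10_000.0                         # constant population size

def hiv_ode(t, y):
    S, I1, I2 = y
    incidence = (beta1 * I1 + beta2 * I2) * (S / N) ** alpha
    dS = mu * N + gamma2 * I2 - incidence - mu * S   # immigration keeps N constant
    dI1 = incidence - (gamma1 + mu) * I1             # early/acute infections
    dI2 = gamma1 * I1 - (gamma2 + mu) * I2           # chronic infections
    return [dS, dI1, dI2]

sol = solve_ivp(hiv_ode, (0.0, 35.0), [N - 1.0, 1.0, 0.0], max_step=0.1)
```

Integrating over 35 years, as in the Results, gives prevalence trajectories that plateau after the epidemic peak.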
In [6], a similar HIV model was presented along with a method to fit such models to a sequence of phylogenetic divergence times (the heights of nodes in a time-scaled phylogeny). Where possible, we
will use the parameter estimates from [6]. The parameters are reported in table 1. Together, these parameters imply and that 41% of transmissions occur during the acute stage.
Corresponding to an epidemic model of the form 1, we can define a coalescent process [31], [32] that describes the properties of the transmission tree and by extension the phylogeny of virus. The
taxa descended from a lineage at time in the past form a clade, which we will also call a cluster. The number of taxa in a randomly selected cluster will be a random variable. The cluster size
distribution (CSD) is a function of a threshold TMRCA , and describes the probability of having a size cluster if a lineage (i.e. branch) at time is selected uniformly at random from the set of all
lineages at and the size of the cluster descended from that branch is counted. A schematic of how clusters and the CSD are constructed given a tree and a threshold is shown in figure S5. In [6] we
derived differential equations that describe the moments of the CSD.
Some of the properties of phylogenies that we seek to reproduce with the model developed below are:
• The number of lineages as a function of time (NLFT), also known as the ancestor function.
• The fraction of sampled early/acute and chronic infections which are clustered given a threshold TMRCA.
• Within a given cluster there will be a number of early/acute taxa and a number of chronic taxa. We will calculate the correlation coefficient between these counts across all clusters given a
threshold TMRCA.
• The moments of the distribution of cluster sizes, including the mean, variance, and skew of cluster sizes.
Figure 1 shows a simple genealogy that could be generated by the HIV model. Four events can occur in this genealogy representing coalescence or the changing stage of a lineage. By quantifying the
rate that these events occur using a coalescent model, we can calculate the clustering properties of these genealogies. These methods are described below and in detail in supporting Text S1.
Dark branches with taxa labeled A correspond to stage-1 (early/acute infected hosts). Light branches with taxa labeled C correspond to stage-2 (chronic infections). Event 1 represents the coalescence
of two lineages corresponding to early/acute infection. Event 2 represents coalescence of an early and a late infection. Event 3 represents the stage transition of an early infection to a late
infection. Event 4 represents the transmission by a late infection which is not ancestral to the sample. Top: Includes an unsampled lineage (dashed). Middle: The unsampled lineage has been pruned
from the tree. The point where the lineage is pruned corresponds to event 4. Bottom: The number of lineages as a function of time (NLFT) which correspond to a host with early/acute infection (black)
or chronic infection (grey).
The ancestor function is strictly decreasing in reverse time and converges to one (a single lineage) when the most recent common ancestor of the sample is reached. The initial value of the ancestor
function (when the population is sampled) is equal to the sample size . For the purposes of modeling phylogenetic properties of HIV, we will be interested in phylogenies such that the taxa are
labeled with the state of the sampled individual (e.g. the individual will have early or late infection corresponding to the states in equation 1). In this case, we will have two ancestor functions,
since a lineage may correspond to an infected individual with either early or late infection.
The ancestor functions corresponding to equations 1, derived in Text S1, are given as equations (2). In these equations, is the number of lineages corresponding to early infections and is the
number of lineages corresponding to late infections. These equations provide a deterministic approximation to the NLFT, which is . Each term in these equations accounts for loss or gain of lineages
due to the concurrent processes of transmission (at rates and ) and transition between states (at rates ). This approximation becomes exact in the limit of large sample and population size. Note that
since the model is continuous in both time and state variables, the ancestor functions are not integers in contrast to most coalescent frameworks based on discrete mathematics.
Real epidemics in a finite population will have transmission trees such that the number of lineages at any time is a random variable. The mean-field model presented in equation 1 can be viewed as a
description of the dynamics of a stochastic system in the limit of large population size. In this case, we can adapt the coalescent to make approximate descriptions of the stochastic properties of
the transmission tree in large populations. The ancestor functions will reflect the approximation of the actual (random) number of lineages. Previous work has demonstrated that deterministic
descriptions can be excellent approximations for the number of lineages over time [6], [33]. In the following section, we compare our deterministic coalescent to stochastic simulations, confirming
that it is a good approximation over a wide range of parameters.
Given a clustering threshold TMRCA , the random variable will be the number of stage- taxa descended from a given lineage that is extant at time in the past. As before, will be the number of type
lineages at the time in the past. In our model, infected individuals can be of two types (early/acute and chronic), so there are only two lineage types: corresponds to early/acute and corresponds
to chronic. We will denote the set of all lineages of type at time in the past as . Then we define the and 'th moment of cluster sizes descended from a type lineage as equation (3).
Many summary statistics that are potentially informative about transmission dynamics can be derived from these moments. The moments are difficult to interpret, so in practice we use them to calculate
summary statistics such as variance and skew of the CSD. Below, we examine 30 summary statistics derived from the first three moments and multiple clustering thresholds.
For example, the variance of cluster sizes counting only type taxa descended from type lineages is given by equation (4). The total variance of cluster sizes counting only stage 1 taxa is found
with the weighted average over lineage types, equation (5). A similar set of equations can be developed for the cluster sizes aggregated over taxon types, that is, for . Detailed derivations are
provided in Text S1 for differential equations that describe these moments as functions of the threshold .
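Given a set of cluster assignments and a per-taxon stage indicator, the empirical analogues of these moments, together with the acute/chronic count correlation examined below, can be sketched as follows (names are illustrative):

```python
import numpy as np

def cluster_size_moments(labels, is_acute):
    """Empirical moments of cluster sizes, counting acute and chronic taxa
    separately, from cluster labels and a boolean acute indicator per taxon."""
    labels = np.asarray(labels)
    is_acute = np.asarray(is_acute, dtype=bool)
    ids = np.unique(labels)
    n_acute = np.array([np.sum(is_acute[labels == c]) for c in ids])
    n_chronic = np.array([np.sum(~is_acute[labels == c]) for c in ids])
    return {
        "mean_size": np.mean(n_acute + n_chronic),
        "var_acute": np.var(n_acute),
        # Correlation between acute and chronic counts across clusters,
        # the statistic examined in the Results.
        "corr": np.corrcoef(n_acute, n_chronic)[0, 1],
    }
```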
Event-driven stochastic simulations were conducted to verify the suitability of the deterministic approximations for inference. Simulations implemented a variation on the Gillespie algorithm [34].
Populations consisted of agents, and were simulated for 15 or 30 years starting with one hundred initial infections. At the end of each simulation, a sample of either 20% or 100% of prevalent
infections was taken and used to reconstruct a transmission tree. Five hundred simulations were conducted for each sample fraction and sample time. Corresponding to each simulation, 10 transmission
trees were generated based on a random sampling of using distinct clustering thresholds. The CSDs were then estimated from each tree and the moments of these distributions were compared to the moment
equations (3–5).
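A stripped-down version of such an event-driven simulation, with demography and sampling omitted for brevity (the function and parameter names are ours, not those of the simulation code used in the study):

```python
import random

def simulate_hiv_gillespie(beta1, beta2, gamma1, gamma2, N, t_max, seed=0):
    """Minimal Gillespie simulation of the two-stage infection model.
    Returns the trajectory of (time, S, I1, I2)."""
    rng = random.Random(seed)
    S, I1, I2 = N - 1, 1, 0
    t, traj = 0.0, [(0.0, N - 1, 1, 0)]
    while t < t_max and I1 + I2 > 0:
        r_inf = (beta1 * I1 + beta2 * I2) * S / N   # new infection
        r_prog = gamma1 * I1                        # acute -> chronic transition
        r_rem = gamma2 * I2                         # removal (treatment or death)
        total = r_inf + r_prog + r_rem
        if total == 0.0:
            break
        t += rng.expovariate(total)                 # exponential waiting time
        u = rng.random() * total
        if u < r_inf:
            S, I1 = S - 1, I1 + 1
        elif u < r_inf + r_prog:
            I1, I2 = I1 - 1, I2 + 1
        else:
            I2 -= 1
        traj.append((t, S, I1, I2))
    return traj

traj = simulate_hiv_gillespie(2.0, 0.2, 2.0, 0.1, 1000, 30.0)
```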
We have further conducted an investigation into the potential of various summary statistics of the viral phylogeny for inference of underlying epidemiological parameters. Of particular interest is
the fraction of transmissions that occur during early HIV infection. As indicated above, it is possible that phylogenetic clustering of early infections reflects elevated transmission during early/
acute HIV infection, which we will define as the infectious period from zero to six months. The following simulation experiment was carried out to identify informative statistics:
1. Parameters were sampled from a multivariate uniform distribution. 1800 replicates were sampled.
2. For each set of parameters, the HIV ODE model was integrated. The number of transmissions by early/acute and chronic cases was recorded. The number of stage transitions from acute to chronic was
also recorded.
3. For each record of transmissions and stage transitions, a coalescent tree was simulated using the method described in [35].
4. For each coalescent tree, summary statistics were calculated and recorded. These statistics consisted of the following: The number of lineages as a function of time before the most recent sample;
the correlation between the number of early/acute and chronic infections within clusters at a given threshold TMRCA; the fraction of acute/recent taxa which remain unclustered (not clustered
with any other taxa); the fraction of chronic taxa which remain unclustered; the mean number of taxa clustered with an early/acute infection; the mean number of taxa clustered with a chronic
infection. Each of these statistics was calculated using five threshold TMRCA values uniformly distributed between one year and 25 years before the most recent sample.
The coalescent tree was simulated such that the sample size matched that of the Detroit MSM phylogeny, and the heterochronous sampling of that phylogeny was reproduced in the coalescent tree.
Furthermore, the number of early/acute versus chronic taxa sampled was determined using the BED test for recency of infection for each patient [36], and simulations were also made to match the
numbers of early/acute and chronic taxa sampled. Virus from patients with early/acute infection accounted for 24% of the samples.
Summary statistics were centralized around the mean and rescaled by their standard deviation. The dependent variable of interest is the fraction of transmissions attributable to the acute stage at
the beginning of the epidemic, which may be defined as in equation (6), where is the expected number of transmissions generated during early/acute infection at the beginning of the epidemic, and
is the expected number of transmissions over the entire infectious period. Pearson correlation coefficients were calculated for each statistic and . To give a better indication of which statistics
would be useful for
estimating the ratio of acute to chronic transmission rates, we conducted a partial least-squares (PLS) regression [37], which has been used by other investigators when estimating parameters by
approximate Bayesian computation (ABC) methods [38]. Prediction error was assessed with 10-fold cross validation. We controlled for the sample fraction by including the prevalence of infection at the
time of the most recent sample as a covariate.
The mean CD4 cell count and FAS for clustered taxa are shown in figure 2. Consistent with our hypotheses, patients with higher CD4 counts are more likely to yield phylogenetically clustered virus, and
the mean CD4 count among clustered patients has an inverse relationship with the threshold TMRCA for clustering. Also consistent with our hypothesis, patients who yield virus with lower FAS (less
diverse virus) are more likely to be phylogenetically clustered, and mean FAS has a positive relationship with the threshold TMRCA for clustering. Patients were strongly co-clustered within
quantiles. Maximum assortativity values, which measure the similarity of co-clustered taxa, were 13% for CD4 and 4.5% for FAS. The maximum assortativity also occurs at low threshold TMRCA for FAS and
CD4 (1700 and 1467 days). Very little clustering is observed between the first and last quantiles.
Left: The mean CD4 cell count (top) and frequency of ambiguous sites (bottom) versus the threshold TMRCA used to form clusters. Middle: The assortativity coefficient, a measure of similarity of
co-clustered taxa, versus the threshold TMRCA used to form clusters. Assortativity of CD4 is at top, and frequency of ambiguous sites is at bottom. Right: The size of each matrix element is proportional
to the number of co-clusterings between taxa categorized by CD4 (top, ) or quartile of frequency of ambiguous sites (bottom). The color represents the extent to which the count of co-clusterings exceeds
the expectation if clusters were forming at random. The color scale (far right) shows strong assortativity within quartiles. The vertical red bar represents the threshold which was used to create
clusters and the matrix derived from the set of clusters. This threshold corresponds to the maximum of the assortativity coefficient for the derived matrix.
In general, the deterministic model offers an excellent approximation to the stochastic system. All trajectories pass through or close to the median of simulation predictions. Figure 3 illustrates
the prevalence of early/acute and chronic infections from a typical simulation of the HIV model and the corresponding deterministic approximations. This correspondence occurs despite large
fluctuations in prevalence when the number of infections is small. In [6] it was shown that the correspondence between the stochastic and deterministic systems can be very good even if the epidemic
is started from a single infection and the coalescent is fit to the resulting transmission tree.
The x-axis gives the time since the beginning of the epidemic, or equivalently, the threshold TMRCA used to calculate the number of lineages over time. Green describes the simulated number of late
infections. Blue describes the simulated number of early infections. Dots show the simulated ancestor function for the number of lineages that correspond to late infections. And x's show the
simulated ancestor function for lineages in early infection. Dashed lines show the prediction of the deterministic coalescent. The top row shows results for a sample taken at 15 years following the
initial infections, and the bottom shows results for a sample at 30 years. The right column shows results for a sample fraction of 20%, and the left column for a census of prevalent infections.
In figure 3, late infections outnumber early infections by approximately 20 to 1. As a consequence, the NLFT for late infections is more stable due to larger sample sizes, and the NLFT is noisier
for the sample of early infections. The prevalence of infection plateaus prior to the 15 year sample time, so there is not much difference in the phylogenetic features observed at 15 and 30 year
sampling times.
Many summary statistics calculated from an HIV gene genealogy can be informative about the fraction of transmissions attributable to early/acute infection, (equation 6). Figure 4 shows the value of
four statistics as is varied. The dependence of these summary statistics on the sample fraction is also shown in figure S4. (upper left) is the Pearson correlation coefficient between the number of
early/acute taxa and chronic taxa in a cluster and is most sensitive to . Also shown are the mean cluster size, the number of extant lineages at the threshold TMRCA, and the fraction of taxa in a
phylogenetic cluster. As the fraction of transmissions from the early/acute stage is varied, transmission rates and are adjusted so that remains constant. The smallest value of shown in figure 4
corresponds to the point where , such that there is no excess transmission in the early/acute stage. The most recent sample is assumed to be at 35 years following the initial infection. Epidemic
prevalence after 35 years is approximately constant. The threshold TMRCA was five years before the most recent sample. Sample size and distribution of samples over time was matched to the Detroit MSM
phylogeny. Furthermore, the number of early/acute versus chronic taxa sampled was made to match the Detroit data by use of the BED test [36] for determining recency of infection.
The fraction of taxa which are phylogenetically clustered also varies with (figure 4, upper left). The fraction of early/acute taxa clustered is more sensitive to than the fraction from chronic taxa.
Early/acute taxa are always clustered at a greater rate than chronic taxa, even when corresponding to the minimum value of . This is because virus from early/acute patients was recently transmitted,
making it much more likely that the lineage will coalesce in the recent past regardless of the source of the infection.
Using the mathematical model, we explored many parameters including the threshold TMRCA for clustering, the sample fraction, and the time relative to the beginning of the epidemic at which sampling
occurs. Figures S1, S2, S3 demonstrate that the deterministic model is capable of reproducing many phylogenetic signatures that have been associated with HIV epidemics in MSM. For example, figure S5
shows the fraction of the sample (both early and late infections) which remain unclustered with any other sample unit. When the threshold TMRCA is zero (corresponding to the far right of the time
axis), the entire sample remains unclustered. As the threshold TMRCA increases (moving leftwards on the time axis), more sample units become clustered and the fraction of taxa remaining unclustered decreases.
The time of sampling makes little absolute difference to the qualitative nature of the tree statistics if sampling occurs after the peak epidemic prevalence (around 15 years). However the sample
fraction (the fraction of prevalent infections sampled) has a large effect on all tree statistics. When the sample fraction is large, the fraction remaining unclustered drops much more precipitously
than when it is small as the threshold TMRCA increases. This occurs because each transmission can cause a sample unit to become clustered; a large sample size implies that transmissions will have a
greater probability of resulting in an observable coalescent event (e.g. it results in a larger ratio ).
Early infections become clustered at a much greater rate than late infections. This corresponds to the excess clustering of early/acute infections observed in many phylogenies. By virtue of being
infected in the recent past, an acute infection inevitably has a very recent common ancestor with another infection who transmitted to that individual. Mathematically, this is reflected in
transmission terms of the form which appear in the ancestor function for early, but not late infections.
When the sample fraction is non-negligible, the fraction of the sample in a cluster levels off for intermediate thresholds. Similar phenomena were noted by Lewis et al. [8] and Hughes et al. [14] who
observed that the fraction of the sample in a cluster did not change substantially beyond a small threshold, though these studies probably had high sample fractions. The plateau is due to the
bimodality of coalescence times induced by early infection dynamics. Many coalescence events occur at thresholds close to the sampling time, which corresponds to lineages of early infections
coalescing. A larger group of coalescence times occurs close to the beginning of the epidemic when the effective population size is small. We hypothesize that the amount of excess clustering of early
infections can be informative for estimating the sample fraction when it is not known.
Figure S2 shows the Pearson correlation coefficient for the number of co-clustered early and chronic infections as a function of the clustering threshold. Given that a sample unit is in a cluster,
under certain circumstances, it is much more likely to be clustered with another unit of the same type. This is reflected by large negative correlation coefficients for the number of co-clustered
early and late infections for small threshold TMRCA. But negative correlation between the number of early and late infections is only observed for small sample fractions and small threshold TMRCA.
The region of negative correlation appears very briefly for a 100% sample fraction; the region is much longer for small samples. This implies that if a patient with early infection is clustered, it
is much more likely to be clustered with another early infection than expected by chance alone.
The skewness of the CSD shows a similar trend (figure S3). The skewness is always positive (to the right) and rapidly decreases as the threshold TMRCA is increased reflecting greater probability mass
in the tail of the distribution. Skew is greatest for small threshold TMRCA, when most clusters are of size 1. The distribution remains positively skewed, though it quickly levels off for
intermediate threshold TMRCA. The mathematical model shows that all moments of the CSD are finite and diverge to infinity in the limit of large sample size and threshold TMRCA.
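As a concrete illustration, the skewness statistic used here (the third central moment divided by the standard deviation cubed, per the caption of figure S3) can be computed directly from a list of cluster sizes. This small Python sketch is not from the paper; it simply restates the definition:

```python
def csd_skewness(sizes):
    """Skewness of a cluster size distribution: the third central
    moment divided by the cube of the standard deviation."""
    n = len(sizes)
    mean = sum(sizes) / n
    m2 = sum((s - mean) ** 2 for s in sizes) / n   # variance
    m3 = sum((s - mean) ** 3 for s in sizes) / n   # third central moment
    return m3 / m2 ** 1.5

# A CSD dominated by singletons with one large cluster is right-skewed:
skew = csd_skewness([1, 1, 1, 5])
```

A distribution of mostly size-1 clusters with an occasional large cluster gives a positive (rightward) skew, matching the behavior described above for small threshold TMRCA.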
A practical consequence of having an intermediate to large sample fraction is that chains of acute-stage transmission will account for many of the clusters observed at low thresholds. If a taxon is
clustered with an early infection, then it is more likely that the unit will be clustered with additional early infections since such cases are highly infectious and have likely transmitted in the
recent past. This provides a justification for the theory expounded in Lewis et al. [8] that high clustering of cases with recent MRCA's indicates episodic transmission; chains of transmission by
early infections are interrupted by occasional long intervals until a transmission by late stage infections.
Corroborating figure 4 which shows that many statistics are correlated with , the PLS regression did not single out any particular group of statistics as being informative of early/acute stage
transmission rates. The first component distinguishes between statistics that describe co-clustering (correlation of the number of acute and chronic taxa in a cluster) and statistics that describe
excess clustering (e.g. the fraction of early/acute taxa that are not clustered with any other taxa). Four principal components were required to explain 42% of the variance of the transmission
fraction with additional components only explaining an additional 2%. All statistics were well represented in the model with four components.
We have used coalescent models to characterize the phylogenetic patterns of a virus which produces an early stage of intensified transmission followed by a long period of low infectiousness. These
patterns have been observed in multiple phylogenies of HIV-1 from MSM and IDU, and our model suggests that these should be general features for epidemics which feature early and intense transmission.
These patterns are not necessarily a consequence of complex sexual network structure [14]. Complex transmission dynamics driven by sexual networks are undoubtedly taking place, but detecting the
phylogenetic signature of sexual network structure will require carefully-chosen summary statistics [15]. We have characterized phylogenies using the cluster size distribution (CSD) which is similar
to commonly used clustering methods based on strong support for monophyly but is nevertheless tractable for mathematical modeling in a dynamical systems framework. Moments of the CSD reflect a wide
range of tree topologies, such as the distribution of branch lengths and tree balance, and are potentially informative of a wide range of population genetic processes. For example, a highly unbalanced
tree would produce a very skewed CSD, and a very star-like tree would have a CSD that is insensitive to changes in the clustering threshold.
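The clustering rule behind the CSD can be sketched as follows, assuming a precomputed matrix of pairwise TMRCAs is available; the matrix input and function name are illustrative assumptions, not part of the paper's code:

```python
def cluster_size_distribution(tmrca, threshold):
    """Place taxa i and j in the same cluster whenever their time to most
    recent common ancestor is within `threshold` (single linkage), and
    return the sorted list of cluster sizes."""
    n = len(tmrca)
    parent = list(range(n))            # union-find over taxa

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if tmrca[i][j] <= threshold:
                parent[find(i)] = find(j)   # merge the two clusters

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values())
```

Raising the threshold can only merge clusters, so the CSD shifts toward fewer, larger clusters as the threshold TMRCA increases — the behavior seen in the figures.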
While there has been much discussion of how clustering of acute infections is caused by the intensity of transmission during the acute stage, the amount of excess clustering that will be observed is
also very sensitive to the sample fraction. And even if transmission rates in the early/acute stage are equal to those in the late/chronic stage, we would still observe excess clustering of early/
acute infections provided the sample fraction was large enough. This is a simple consequence of early/acute infections being connected by short branch lengths to the individual who transmitted infection. An
advantage of the coalescent framework used in this investigation is that it is accurate even with large sample fractions [35].
Some of the statistics which are most informative of the underlying epidemiological processes are those based on co-clustering of labeled taxa, such as the correlation between the number of early and
late infections in a cluster. Such statistics tend to be the most responsive to variation of the intensity of transmission during early infection, and are therefore good candidates for future
estimation of the fraction of transmissions that occur during the first few months of infection with HIV. Knowing the frequency of early transmission is essential to prevention efforts, since these
transmissions are the most difficult to prevent. Individuals with early and acute infection are usually not aware of the infection, and are therefore not susceptible to many interventions. Modeling
to evaluate strategies such as seek, test, and treat (STT) [39], [40] and pre-exposure prophylaxis (PrEP) [41] will require good estimates for the frequency of early-stage transmission in diverse
populations, and phylogenetic data promise to refine these estimates.
Future work could focus on finding ways to use statistics derived from the CSD for estimation of epidemiological parameters within an approximate Bayesian framework [38], [42], [43]. Alternatively,
advances [35] in coalescent theory may make it possible to calculate the likelihood of a gene genealogy conditional on a complex demographic history, such as those generated by the HIV model
discussed here. Current techniques are limited in the amount of phylogenetic data that can be used for inference of demographic and epidemiological parameters. Estimation of the intensity of early
stage transmission will likely require co-clustering statistics similar to the moments derived from the CSD. In cases where the simple compartmental models fail to reproduce phylogenetic patterns, a
more complex transmission system model and its corresponding coalescent should be investigated which might involve sexual networks or geographical [44] and risk heterogeneity. We further conclude
that care must be taken in using phylogenetic clusters for epidemiological inference. Mechanisms that generate clustering are often complex and counter-intuitive. We recommend that investigators
shift from individual-based inference using small clusters to model-based inference using population-based surveys of sequence diversity.
Supporting Information
Two simulated epidemics and the deterministic approximations for the fraction of the sample which remains un-clustered as a function of the threshold TMRCA. The fraction un-clustered is shown for
sample units classified as early infections (solid lines) as well as sample units that are late infections (dashed). The x-axis gives the clustering threshold in units of days since the start of the
epidemic. All variables are illustrated for a sample at 30 years following the initial infections and at two possible sample fractions (100% or 20%).
Simulated epidemics and the deterministic approximations for the Pearson correlation coefficient between the number of co-clustered early and late infections. Variables are shown as a function of the
threshold TMRCA in units of days since the beginning of the epidemic. All of these variables are illustrated for a sample at 30 years following the initial infections and at two possible sample
fractions (100% or 20%).
Two simulated epidemics and the deterministic approximations for the skewness of the cluster size distribution (third central moment divided by the standard deviation cubed). Variables are shown as a
function of the threshold TMRCA in units of days since the beginning of the epidemic. All variables are illustrated for a sample at 30 years following the initial infections and at two possible
sample fractions (100% or 20%).
Summary statistics from HIV gene genealogies versus the fraction of infections sampled after 35 years. The threshold TMRCA was five years before the most recent sample. Sampling was homochronous.
Construction of the cluster size distribution (CSD). Given a tree and a threshold time to most recent common ancestor, represented by red, green, and blue lines, the set of taxa at the base of the
tree are classified into disjoint sets or clusters. The distribution of cluster sizes for each threshold is shown at right.
The authors thank Eve Mokotoff and Mary-Grace Brandt and colleagues at the Michigan Department of Community Health for assisting with access to the HIV drug-resistance database.
Author Contributions
Conceived and designed the experiments: EMV SDWF. Performed the experiments: EMV. Analyzed the data: EMV SDWF. Contributed reagents/materials/analysis tools: EMV SDWF. Wrote the paper: EMV JSK MJW
ALB SDWF.
Connected components algorithm
stb_connected_components.h (stbcc) finds the connected components on a large 2D grid, and tracks those connected components as the grid is updated. This is trickier than you might think.
stbcc has many possible uses, but it was inspired in particular by a comment by the programmer of Dwarf Fortress that they kept a separate data structure that could efficiently answer the question of
whether any given point Q on the map was reachable from some other point P, and they used this to prevent degenerate slowness in pathfinding from P to Q (which, if Q is unreachable, typically will
explore the entirety of the map that is reachable from P).
Finding and updating connected components
The traditional algorithm for finding connected components in a graph is to just use depth-first-search; the equivalent on grids is a "flood-fill" algorithm. These techniques work fine, and are O(N)
with N nodes (e.g. in a 1024x1024 grid, they take ~1M units of time). However, they offer no good approach for updating.
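For concreteness, here is a minimal flood-fill labelling pass of the kind described above (a Python sketch; the grid encoding of 1 = open, 0 = blocked is an assumption for illustration):

```python
from collections import deque

def label_components(grid):
    """O(N) flood fill: assign each open cell (value 1) a component id.
    Returns a dict mapping (x, y) -> component label."""
    h, w = len(grid), len(grid[0])
    label, next_id = {}, 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] != 1 or (sx, sy) in label:
                continue
            queue = deque([(sx, sy)])       # BFS from an unlabeled open cell
            label[(sx, sy)] = next_id
            while queue:
                x, y = queue.popleft()
                for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and grid[ny][nx] == 1 and (nx, ny) not in label):
                        label[(nx, ny)] = next_id
                        queue.append((nx, ny))
            next_id += 1
    return label
```

Two cells are connected exactly when they share a label, which is the query stbcc must answer — but, as the text says, this structure gives no cheap way to handle updates.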
The classic update-friendly approach for this kind of problem is to use
Disjoint Set Forests
. The algorithm will work on arbitrary graphs, not just grids. It allows you to build groups of 'connected' objects (sets) efficiently; the initial build is essentially O(N) time (see the Wikipedia
page for more details about its speed). This technique also allows you to dynamically add connections to the existing data structure in essentially O(1) (amortized) time. The implementation of
disjoint set forests is straightforward, using only a dozen or so lines of code, as in this reference solution I coded at the beginning of the stbcc stream (see previous post):
typedef struct { int x, y; } point;  /* point type assumed for this sketch */

point leader[1024][1024];

point dsf_find(int x, int y)
{
    point p, q;
    p = leader[y][x];
    if (p.x == x && p.y == y)
        return p;
    q = dsf_find(p.x, p.y);
    leader[y][x] = q; // path compression
    return q;
}

void dsf_union(int x1, int y1, int x2, int y2)
{
    point p = dsf_find(x1, y1);
    point q = dsf_find(x2, y2);

    if (p.x == q.x && p.y == q.y)
        return;

    leader[p.y][p.x] = q;
}
Unfortunately, there is no simple update-friendly data structure to allow deleting
a connection and still track the connected components.
Understanding the Deletion Problem
Think of a connected-components data structure as storing a label for every node on the graph, indicating which component that node belongs to. (To check if point P can reach point Q, we check if
they have the same label.)
If you imagine a map split into two large components, when you connect them, you naively might have to update half of the nodes on the map, which will obviously not be O(1) time. However, if the
nodes don't directly store a label, but store a pointer to a label, then we might be able to set up a situation where all the nodes in one component point to one label, and all the nodes in the other
component point to another label, and when we join the two components, we only have to update one
label and by doing so we've updated the whole map. (This is essentially what the Disjoint Set Forest algorithm above does, taken to its ultimate limit.)
But when we disconnect a large component on the map into two components, we might make that disconnection in many places; there's no way to guarantee that the existing pointers to labels point to two
different locations in a way that exactly matches the disconnection that we have. Thus there is no straightforward solution to an update that creates a disconnection.
AFAIK most people simply rebuild the entire connected component data structure (i.e. periodically recompute the flood fill).
So the idea of stb_connected_components.h is to compromise; we introduce a moderate number of intermediate, indirected labels, so that we can rewrite as few labels as possible and minimize the number
of nodes which we need to point to different labels entirely. We do this by relying on the graph being a grid and using the geometry of the grid to guide us.
Two-level Hierarchies
There's a classic trick for making certain kinds of O(N^1.5) algorithms when you have an O(N^2) algorithm that it relies on: partition the input data into sqrt(N) pieces each of size sqrt(N), run the
O(N^2) algorithm on the pieces; each one will take O(sqrt(N)^2), or O(N) time, so running on all the pieces will take sqrt(N)*O(N) time, i.e. O(N^1.5) time.
We do something similar for stbcc. We're going to chunk up our M*M grid into smaller pieces. Suppose that size is K. Then we have K*K pieces, each one (M/K)*(M/K) nodes. Suppose we run an O(N)
algorithm within one (M/K)^2 piece and another O(N) algorithm on the K^2 coarse pieces; then we have O(K*K) + O((M/K)*(M/K)), or O(K*K+M*M/(K*K)). If K is 1, this becomes O(M^2), if K is M, this
becomes O(M^2), but if K is sqrt(M), this becomes O(M). It's minimized when K is sqrt(M).
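That trade-off is easy to check numerically; this throwaway Python sketch just evaluates the cost expression from the paragraph above:

```python
def update_cost(M, K):
    """K*K coarse pieces plus one (M/K)*(M/K) subgrid, per the text."""
    return K * K + (M * M) / (K * K)

M = 1024
best_K = min(range(1, M + 1), key=lambda K: update_cost(M, K))
# best_K works out to sqrt(M) = 32, for a cost of 2*M = 2048,
# versus roughly M*M at either extreme (K = 1 or K = M)
```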
Why a two-level hierarchy? Decades of experience has shown that you almost never want quadtree-esque fully-recursive hierarchies for speeding things up; you almost always want two-level hierarchies.
(Maybe, maybe, if it's really big, you want three-level hierarchies. If you wanted to do a 65536x65536 map, maybe that would be better.)
Subgrids in stb_connected_components
So, as described in the previous section, stbcc splits an M*M grid into subgrids that are sqrt(M)*sqrt(M). For each subgrid, we cache the locally-connected components; each within-subgrid connected
component has a single slot storing a label and all the nodes in that component point to the same slot; thus when connectivity changes we only have to update that one slot to update all of the nodes.
There are sqrt(M)*sqrt(M) total subgrids; which means we have (on typical maps) a small multiple of that many locally-connected components; in other words we usually have O(M) pieces--but that's from
an M*M map, so if N=M*M, we're talking about O(N^0.5) components. If we also keep track of how locally-connected components connect across
subgrid boundaries, then we can run a straight Disjoint Set Forest algorithm on those O(N^0.5) components in
O(N^0.5) time.
On a single update, we do three things:
1. Rebuild the locally-connected components for the subgrid containing the updated node
2. Rebuild the adjacency information from that subgrid to its neighbors, and vice versa
3. Compute from scratch the globally-connected components from the locally-connected pieces
For an M*M map, step 1 takes essentially O(M), step 2 takes O(sqrt(M)) time [pretending array deletion is O(1)], and step 3 takes O(M) time for game-like maps but has a much worse case discussed
below. (All these 'essentiallies' are based on pretending that DSF takes O(N) time for N operations, ignoring that (a) the actual cost is technically higher, and (b) the nearly-linear behavior is
only if you implement both path compression and union by rank, and we only implement the former to reduce memory/cache pressure.)
Adjacency lists can be maintained as doubly-linked lists so that deletion time for them is actually O(1), but this requires a lot of extra memory, and in the common case where the arrays are small,
searching the list to delete an item isn't a big deal.
The actual implementation is:
1. Run a disjoint-set-forest on a subgrid
2. Delete old adjacency information for the old subgrid (and use that to delete adjacency info pointing to this subgrid from its neighbors); then check every node on the subgrid edges to see if it
connects across the edge to a neighbor component, and if so add a connection to the adjacency lists for both
3. Run a disjoint-set-forest over all the locally-collected-components in the entire map using the adjacency info to define the connections (this is the slowest step, and has a bad worst case)
Plausible Improvements
Just writing down the above suggested a few possible improvements:
Might a flood-fill for step 1 above be faster?
The adjacency list additions are actually computed twice, to add the entries to each side independently. Could maybe be twice as fast; but I don't think this was ever a significant chunk of the time
in tests.
The old reference disjoint-set-forest added connections in both directions [i.e. called both dsf_union(x,y,x+1,y) and dsf_union(x+1,y,x,y)], and it was found to nearly double the speed to remove this
redundancy. The final global one might want to do something similar, although it is not as straightforward to do it geometrically now, as they're arbitrary graph connections instead of simple grid
connections. However, they ARE still always across a subgrid edge, and we could determine that; but is that faster than just computing dsf_union()?
(This comes up below:) Don't store anything at all for degenerate components, since their index is all we need for reachability tests.
Reducing Memory Usage
The above describes the initial 0.90 release of stb_connected_components. The first update reduced memory usage.
stbcc allocates the worst-case memory usage needed so it never does any dynamic allocations. Because adjacency lists are variable sized, the initial version allocated a max-sized adjacency list for
every locally-connected-component. This required foolish amounts of memory (~80MB for a 1K*1K map).
The max-sized adjacency list occurs when every connection out of a subgrid comes from the same locally-connected component. If this happens, no other locally-connected component in that subgrid can have
adjacency list entries. In other words, the total
number of adjacency list entries is bounded by the same maximum that the initial version used for each list. (This is, by the way, half the "circumference" of the subgrid.)
The problem is that with a fixed-allocation scheme, we need to allocate variably-sized lists, and they might need to be resized. This could require writing a little "memory allocator" out of each
subgrid's adjacency "pool", and then after multiple updates, these could become fragmented and need defragmentation. While this is possible, we use a much simpler strategy: when we create the initial
lists for a subgrid, we divide up the list, and apportion any leftover space equally between all the components. If we try to add to a component's adjacency list and there is no room, we discard all
the information for that subgrid and rebuild. (We're already always rebuilding 1 subgrid on every update; now we update at most 5, and this is rare in practice.)
This reduces worst-case memory to ~8MB for a 1K*1K map (equivalent to about 8 bytes per node, which I consider tolerable).
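The apportioning strategy can be sketched as follows (a hypothetical Python illustration — stbcc itself does this in C with fixed pools):

```python
def apportion_slots(list_lens, capacity):
    """Give each component its current adjacency-list length plus an
    equal share of the leftover capacity in the subgrid's pool."""
    used = sum(list_lens)
    extra = (capacity - used) // len(list_lens)   # leftover split equally
    return [n + extra for n in list_lens]

# e.g. three lists of lengths 2, 3, 1 in a pool of 12 slots:
slots = apportion_slots([2, 3, 1], 12)   # -> [4, 5, 3]
```

If a later update overflows one of these lists, the whole subgrid is discarded and rebuilt, as described above.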
Reducing Global Components
Although I don't believe the cases considered in this section are interesting for game maps, stb_connected_components doesn't claim to be only for game maps, so when I tested and found some
degeneracies I decided it was worth putting the effort into speeding them up. This didn't noticeably speed up the game map case, but it also didn't hurt.
A worst case for the algorithm as described above is a checkerboard; each white square is a degenerate single-node connected component. In this case, an M*M grid has M*M/2 locally-connected
components, and the disjoint-set-forest algorithm run on those components will take O(M*M) time (even though they're all disconnected). This is not theoretical; making such a map did make this step
very slow.
To improve performance, we could simply eliminate degenerate 1-node connected-components from the global step, but we can do better, handling larger components as well. We could classify each
locally-connected component by whether it has an empty adjacency list or not. However, that could change when an update happens immediately next to the subgrid, creating a new adjacent connection even
if the subgrid itself doesn't change.
Instead we classify each locally-connected component with whether it touches an edge of the subgrid or not. Those that touch it need adjacency lists and need to participate in the globally-connected
components step. Those that do not, do not need lists and do not participate. (This also helps the previous section; when splitting up the adjacency-list storage, only components that touch edges are
allocated extra adjacency-list storage.) Remember, if the subgrid ever changes, it will be discarded and fully rebuilt, so if a locally-connected component doesn't touch the edge now, it never will
for the lifetime of this subgrid.
By separating the list of locally-connected components in each subgrid into two lists (those that are on-edge and those that aren't), we avoid traversing the latter entirely, bringing performance
down from O(N) to O(N^0.75) (i.e. from O(M^2) to O(M^1.5)). This is still a bit slow; in practice with M=1024 I saw times from 0.25ms for the O(M)-ish normal-map case to 2.5ms for an O(M^1.5)-ish
worst case, all compared to say 50ms for the O(M^2) full-map disjoint set forest.
The checkerboard case is actually not the worst case for performance, because it actually has no
global adjacencies; we had to traverse every locally-connected component, but didn't do any dsf_union() operations. Some practical cases that were more interesting: invert every other subgrid in a
checkerboard fashion (so there are maximal numbers of adjacencies across the subgrid borders); stripes on alternating rows (so there are large global connected components), and both of those but with
some fraction of the nodes randomized (creating more connected components).
Implausible Improvements
To do significantly better we'd have to formalize the problem differently; as it stands, an M*M grid has M subgrids with worst case O(sqrt(M)) adjacencies, and we have to call dsf_union() for every
adjacency, so the O(M^1.5) worst case is unavoidable with this way of factoring the problem. Of course, speeding up this degenerate worst case is also not necessarily the priority; however,
measurements have shown that the global disjoint-set-forest step is also the slowest step in the update operation, even for 'normal' grids.
One way to speed up the current global disjoint-set-forest step would be to apply a similar 'incremental' algorithm to it, i.e. introduce a third hierarchical level. The easiest way is to imagine
that this is another level of grids above
the current level; the globally-connected components from the current grid could be treated as locally-connected components on an even further-out supergrid. We might split a 1024*1024 supergrid into
8x8 grids that are 128x128, and subdivide each grid into 8x8 sets of 16x16 subgrids. Then an update operation would rebuild one 16x16 subgrid, compute the "globally"-connected components from ~200
locally-connected components (worst case 2048), and then compute the truly-global-connected components from a set of ~100 "globally"-connected components (worst case 16384).
Note that that worst case hasn't improved a lot (the existing worst case for a 1024*1024 grid is 65536), so this might cause an even bigger spread between worst case and average case (which is
perhaps surprising if you thought it would help the worst case more).
But I don't plan to do this, since two-level hierarchies tend to be the sweet spot of complexity & performance.
SAS LAG Function | Steps to Create SAS lag Function | Examples
Updated March 15, 2023
Introduction to SAS LAG Function
The SAS LAG function compares the current value of a variable with values from earlier observations. LAG (equivalently LAG1) returns the value from its previous execution; a second-order lag uses the
LAG2 function, a third-order lag uses LAG3, and so on. First-order lags are often combined with BY-group processing alongside other variables.
Overview of SAS Lag function
The LAG function can be used in several ways to compute lags (and, indirectly, leads) in time series data, where one or more lagged values of a measured variable are commonly needed. Time series and
longitudinal data make this kind of shift-by-n-rows manipulation a surprisingly tricky data-manipulation task. A lag of order one looks back at the previous observation (often stored as lag1_value);
a lag of order two looks back two observations (lag2_value), and so on. For a character variable, LAG returns a value whose length matches that of its argument unless a length has already been
assigned.
How to use SAS lag function?
The functions LAG1, LAG2, ..., LAGn return values from a data queue: LAG1 is the same as LAG, and LAGn stores values in a queue of length n. Each occurrence of LAGn in a program maintains its own
queue, initialized to missing values. On each execution, the value at the front of the queue is removed and returned, and the current argument is stored at the back; hence the first n executions
return missing values, and later executions return the value from n executions earlier. Because storing and returning happen only when the function is actually executed, a conditionally executed
LAGn returns the value from its previous execution, not necessarily the previous observation. If the argument is an array name, a separate queue is maintained in memory for each array element. When
the program is compiled, SAS allocates memory for each queue; a numeric value occupies 8 bytes, so for example LAG100 of a numeric variable needs 100 entries of 8 bytes, or 800 bytes. The practical
limit on queue length depends on the memory available to SAS in the host operating environment. In this way, LAG gives access to earlier rows for comparing current and previous values.
(Note that the SQL-style LAG window function takes a different set of arguments: a scalar expression, an integer offset giving the number of rows to look back, and a default value returned when the
offset reaches before the first row. The offset is an optional argument and defaults to one. A PARTITION BY clause can additionally split a large dataset into logical groups so that, for example,
quarterly figures are lagged within each partition; like the offset, the partition is optional and depends on user and organizational requirements.)
Steps to Create SAS lag Function
1. Navigate to the below URL.
2. Paste the below code to create the sample dataset.
data first;
input a $ b;
datalines;   /* sample values added so the step runs */
x 1
y 2
;
run;

proc print data=first;   /* was data=news, which does not exist */
run;

data second;
set first;
d = lag(b);
e = lag2(b);
f = lag3(b);
run;

proc print data=second;
run;
1. We can use the lag(), lag2(), lag3(), ... functions to compare the current value with values from one or more previous observations.
SAS lag Function Data
The LAG function retrieves lagged values of a variable; the first few lagged values are missing because the queue starts out empty. It is a technique for performing computations across observations,
with LAGn returning the nth-previous value at each execution. Remember that LAGn returns values from previous executions, so using it inside a conditional statement before assigning a new variable
can give surprising results. SAS has no built-in LEAD function for looking forward; leads are usually obtained by sorting the data in descending order and lagging, or by numbering the rows and merging
the table back onto itself with a shifted key. Ordinary arithmetic, DO loops, and other DATA step tools can be combined with LAG to accomplish such tasks. For computations that reference dates and
times, the PROC EXPAND procedure (in SAS/ETS) can also compute leads and lags.
data examples;
input inp1 inp2 $;   /* inp1 numeric, inp2 character */
datalines;
100 siva
101 raman
102 sivaraman
103 jack
104 ceasr
;
proc print data=examples;
run;
data examp2;
set examples;
inp3 = lag(inp1);
inp4 = lag2(inp1);
inp5 = lag3(inp1);
inp6 = lag4(inp1);
inp7 = lag5(inp1);
proc print data=examp2;
run;
Sample Output:
1. In the above example, we first create and set the datasets.
2. Then, using datalines, we assign the inputs with two columns.
3. We SET the example dataset before applying the lag function.
4. We can apply the lag() function for any number n of previous rows.
The lag() function is mainly used to compare the current value with its predecessors, via the lagn() family of functions. Internally, LAG maintains a data queue that is updated in sequence on each call, which is why it should be executed on every observation when processing data by groups.
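As a cross-language sketch (an analogy, not part of the SAS article), the one-row-back behaviour of lag()/lag2() can be mimicked in plain Python:

```python
def lag(values, n=1, default=None):
    """Return a list whose element i is values[i - n], like SAS lag(n):
    the first n positions get the default (SAS would return missing)."""
    return [default] * min(n, len(values)) + values[:max(len(values) - n, 0)]

prices = [100, 101, 102, 103, 104]
prev = lag(prices, 1)       # [None, 100, 101, 102, 103]
two_back = lag(prices, 2)   # [None, None, 100, 101, 102]

# Row-by-row change, comparing each value with its predecessor.
change = [None if p is None else cur - p for cur, p in zip(prices, prev)]
print(prev, two_back, change)
```

Like SAS LAG, the first n positions have no predecessor and come back missing (here `None`).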
Recommended Articles
This has been a guide to the SAS LAG function. Here we discussed the introduction, an overview, how to use the SAS LAG function, and examples with code implementation. You can also go through our other suggested articles to learn more –
What Route is Best for ME to Buy a Vehicle?
Mathematics, Functions, Measurement and Data, Numbers and Operations
Material Type:
Lesson Plan
Middle School
Education Standards
Module Guide / Lesson Plan
What Route is Best for ME to Buy a Vehicle?
This problem-based learning module is designed to link a student’s real-life problem to learning targets in the subjects of math, social studies and language arts. The problem being, what route is
best for me to buy a vehicle? The students will prepare, research and present findings about their own personal finances relating to buying a vehicle. The students will create two equations based on
two purchasing plans they will be comparing. At the conclusion, students will be able to decide which plan is best for them based on research and mathematical practices. Students will present to
their peers, teachers, administrators, and most importantly their parents in an attempt to convince them of their chosen plan.
This blended module includes teacher led instruction, student led rotations, community stakeholder collaboration and technology integration.
1354 -- Placement of Keys
Time Limit: 1000MS Memory Limit: 10000K
Total Submissions: 1795 Accepted: 919
Assume that there are n (3 <= n <= 200) boxes identified by A1, A2, ..., An, and each box Ai is fitted with a lock that is different from all the others. Now place the n keys to the n locks into the n boxes, one key per box. After locking all the boxes, open boxes A1 and A2 and take out the keys inside to unlock the locked boxes. Whenever the keys in hand can open some box, take out the key found there to unlock other locked boxes in turn. If all the boxes can eventually be opened, we call the arrangement of the n keys a good placement. How many different good placements of the n keys are there?
The input file contains several test cases, one integer n per line, and ends with -1.
For every input value, compute the number of different good placements. Each output consists of two lines: the first line shows the input value in the form "N=n:", and the second line gives the number of different good placements of the n keys.
Sample Input
Sample Output
Xi'an 2002
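One way to analyse the problem: model a placement as a permutation π, where box i holds the key to box π(i). The boxes reachable from A1 and A2 are exactly the cycles of π passing through those two boxes, so a placement is good precisely when every cycle contains A1 or A2. Counting such permutations — either one cycle through all n boxes, or two cycles separating A1 and A2 — works out to 2·(n−1)!. This derivation (and the exact "N=n:" output line) is my reading of the statement, not quoted from the judge. Since n can reach 200, big integers are needed:

```python
from math import factorial

def good_placements(n: int) -> int:
    # Every cycle of the key permutation must pass through box 1 or box 2:
    #  - one cycle through all n boxes: (n-1)! arrangements;
    #  - two cycles, one through box 1 and one through box 2, splitting the
    #    remaining n-2 boxes between them: another (n-1)! arrangements.
    return 2 * factorial(n - 1)

def solve(lines):
    """Judge-style driver: one n per line, terminated by -1."""
    out = []
    for line in lines:
        n = int(line)
        if n == -1:
            break
        out.append(f"N={n}:")
        out.append(str(good_placements(n)))
    return out

print("\n".join(solve(["3", "4", "-1"])))
```

A brute-force check for n = 3 confirms the count: of the 6 placements, only the 2 with π(3) = 3 are bad, leaving 4 = 2·2! good ones.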
IMO Class 11 Maths Olympiad Sample Question Paper 2 for the year 2024-25
IMO Class 11 Maths Olympiad Sample Question Paper 2: Free PDF Download
The International Maths Olympiad (IMO) Class 11 Sample Question Paper 2 is provided here in PDF format to help students prepare effectively for the exam. By practicing the IMO Maths Olympiad Sample Question Paper 2 for Class 11, students get well-versed in the exam pattern and can evaluate their current level of preparation, because the paper is designed by experts considering the latest exam pattern and the IMO Class 11 syllabus.
Additionally, IMO Class 11 Maths Olympiad Sample Question Paper 2 includes all the important topics from the exam perspective. Hence, IMO Class 11 Maths Olympiad Sample Question Paper 2 helps in
revision and gives students good practice in solving any question in the Olympiad exam. Students can download IMO Class 11 Maths Olympiad Sample Question Paper 2 pdf through the link below and
practice the questions anytime at their convenience.
FAQs on IMO Maths Olympiad Sample Question Paper 2 For Class 11 2024-25
1. Would there be a negative marking system in the IMO?
The IMO does not have a negative marking system. For wrong answers, the student is awarded zero marks against the question. Therefore, students can continue their exams without the fear of losing
their marks for wrong answers. The exam has MCQs or multiple choice questions, where students will be provided options and they will have to choose the right answer from the option given. Giving
these exams is helpful for students as it prepares them for the competitive exams they must face in the future.
2. How many questions are asked in the IMO question paper?
In the International Mathematics Olympiad (IMO) exam organized by the SOF, for Classes 5 to 12 there will be a total of 50 questions. The first section, Logical Reasoning comprises 15 questions each
of 1 mark. The second section, Mathematical Reasoning /Applied Mathematics comprises 20 questions each of 1 mark. The third section, Everyday Mathematics comprises 10 questions each of 1 mark. The
last section, Achiever’s Section comprises 5 questions each of 3 marks.
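The totals in the answer above can be tallied quickly (the section figures are taken directly from that answer):

```python
# SOF IMO paper sections for Classes 5-12, as described in the answer:
# (number of questions, marks per question)
sections = {
    "Logical Reasoning": (15, 1),
    "Mathematical Reasoning": (20, 1),
    "Everyday Mathematics": (10, 1),
    "Achiever's Section": (5, 3),
}

total_questions = sum(q for q, _ in sections.values())
total_marks = sum(q * m for q, m in sections.values())
print(total_questions, total_marks)  # 50 questions, 60 marks
```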
3. What is an Olympiad exam?
Olympiad is a competitive exam that is held across schools to find exceptionally talented students with potential for success, talent, aptitude, and IQ. Olympiad exams are conducted in schools all
across India for improving the skills of students through a foundational understanding of the subjects. Through Olympiad exams, students can analyze their strengths and weaknesses. Students from
classes 1-12 are eligible to appear for the Olympiad exam. A student can improve their skills in various subjects like computer technology, mathematics, English, Science, etc. In Olympiad exams, A
student gets to analyze their skills based on reasoning and logical abilities.
4. How can I prepare for the Olympiad exam?
Olympiad exams are not easy, although answering the questions only requires a proper understanding of the fundamentals. Various materials to help you succeed in Olympiad exams are available on the internet, including the sample papers made available on Vedantu for free. In addition, you can prepare for the exam by going through the course material provided by the school and can take the help of a tutor. The SOF IMO also has many sample papers and previous years' papers that can be used for practice.
5. What are the IMO eligibility requirements?
Just like any other competitive test, the International Maths Olympiad for Class 10 includes eligibility requirements. Participants who register for the Class 10 IMO must be aware that the test is
divided into two tiers, each with its own set of criteria. Participants should usually be from SOF-affiliated schools, however, they can also enrol as individuals. Level 1 is open to students in
grades 1 through 12. They are eligible for Level 2 of the IMO if they achieve the minimum qualifying score.
6. What is the cut-off score for Class 11 IMO?
Students can find the cut-off score for Class 11 IMO on the official website of SOF. Students can also find the cut-off score for the IMO exam on Vedantu. Students must visit the page regularly to
find the cut-off score and other updates related to the exam. This is the minimum score required to qualify for the next level, and students should aim to score well and appear for the next round.
Having a strong grip on concepts is essential to solving questions in the exam.
7. What is the pattern of questions in the Maths Olympiad exam for Class 11?
In the IMO for Class 11, students have to answer all sorts of questions. Most questions asked in the exam are based on basic concepts covered in the school syllabus. Students must apply their logical
thinking to solve the questions in the IMO exam. The exam helps to enhance the doubts and concepts of maths further. Students can solve the IMO Maths Olympiad Sample Question Paper 2 for Class 11 to
get an idea about the type of questions and how to approach them.
8. Is there any study material to prepare for IMO Class 11?
Students can find the study material on the official website of the SOF to prepare for the Maths Olympiad for Class 11. Students can also solve all questions given in the school syllabus; this is the
most part of preparing for the IMO. After being through the syllabus, students can also solve the Maths Olympiad Sample Question Paper 2 to prepare for the exam. The Maths Olympiad Sample Question
Paper 2 for Class 11 Maths Olympiad is available with solutions that can help students to clearly understand all concepts and make the most of their preparation.
9. Is there any shortcut to scoring a top rank in the International Maths Olympiad exam for Class 11?
There is no shortcut to scoring a top rank in the Olympiad exam. One must practice different questions using IMO Class 11 Maths Olympiad Sample Question Paper 2, previous year question papers, and
mock tests, and prepare a practical and realistic study plan. Students can get all these resources from Vedantu’s official website.
10. What type of questions are asked in the IMO exam of Class 11?
Only multiple-choice questions are asked in the IMO exam. But the difficulty level of the questions varies from one class to another.
Tell whether the given sequence is an arithmetic sequence. 1,-2,3,-4,5,...
• A sequence is said to be arithmetic if the common difference between consecutive terms is constant.
• The general formula for the nth term of any AP is a_n = a + (n - 1)d, where a is the first term and d is the common difference.
The correct answer is: not an arithmetic sequence (the third difference, -4 - 3 = -7, differs from the earlier ones).
• We have been given a sequence.
• We have to find whether the given sequence is an AP or not.
Step 1 of 1:
We have given the sequence 1, -2, 3, -4, 5, ...
The difference of the first two terms is -2 - 1 = -3.
Now, the difference of the next two terms is 3 - (-2) = 5.
Then, the difference of the next two terms is -4 - 3 = -7.
Since the difference is not constant,
the given sequence is not an arithmetic sequence.
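The same constancy check can be automated; a short Python sketch (not part of the original solution):

```python
def is_arithmetic(seq):
    """Return True if all consecutive differences are equal."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return all(d == diffs[0] for d in diffs)

print(is_arithmetic([1, -2, 3, -4, 5]))  # False: differences are -3, 5, -7, 9
print(is_arithmetic([2, 5, 8, 11]))      # True: common difference 3
```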
The Electromagnetic Field of a Magnetic Dipole above a Conducting Half-Space
Short communication
Volume: 7(1). DOI: 10.37532/2320-6756.2019.7(1).169
The Electromagnetic Field of a Magnetic Dipole above a Conducting Half-Space
Received: October 31, 2018; Accepted: November 23, 2018; Published: November 30, 2018
Citation: Adel AS Abo Seliem, Alseroury F. The Electromagnetic Field of a Magnetic Dipole Above a Conducting Half-Space. J Phys Astron. 2019;7(1):169.
We calculate solutions for the electromagnetic field due to a magnetic dipole, and to a finite loop, vertically oriented above a conducting ground, by means of an approximate Green function in the frequency domain. Our solution is expressed as the sum of two partial azimuthal electric fields; the two partial fields are identified as radiative and diffusive. The transient source, in which the exciting current is abruptly switched off, has been considered in detail.
Reflection; Electromagnetic field; Ward
The present paper is concerned primarily with the asymptotic representation of the field of a frequency-domain electromagnetic (FEM) system above a uniform conducting ground and of the field scattered from bodies within the ground; the interpretation of such results requires the development of theoretical models, as was started by Wait JR and Kaufman AA [1,2]. The electromagnetic field due to a current loop is naturally described by a magnetic Hertz vector, while the elementary source is a magnetic dipole. The solution of the present boundary value problem is the field derived from a vertically oriented distribution of magnetization above the horizontal interface between the air and the ground. Stratton considered the electromagnetic theory (section 1 of [3]), and Ward in his study concluded that a loop may be regarded as equivalent to a distribution of magnetization [4-6]. We consider the transient field resulting from an abrupt current switch-off and show that the radiative field is a superposition of pulses issuing from each element of the loop and its image in the ground [7].
The diffusive field is essentially more complex; we restrict our investigation to its structure at and above the ground, far enough away for an asymptotic approximation to apply [8]. Further, since the field is continuously measured using a receiving loop at the same height, we restrict our calculation to the field at that height.
The source of magnetic Hertz vector
In a medium specified by its uniform permeability, permittivity and conductivity, the electric and magnetic field vectors E(x,t) and H(x,t) are given in terms of the magnetic Hertz vector.
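In the standard convention (a hedged reconstruction; the paper's exact normalization may differ), the fields follow from a magnetic Hertz vector $\boldsymbol{\Pi}$ as

```latex
\mathbf{H} \;=\; \nabla \times \nabla \times \boldsymbol{\Pi}, \qquad
\mathbf{E} \;=\; -\,\mu\,\frac{\partial}{\partial t}\,\bigl(\nabla \times \boldsymbol{\Pi}\bigr),
```

with each Cartesian component of $\boldsymbol{\Pi}$ obeying the damped wave equation $\nabla^{2}\Pi - \mu\varepsilon\,\partial_{t}^{2}\Pi - \mu\sigma\,\partial_{t}\Pi = 0$ away from the source.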
In air, we take
With the origin on the surface of the ground, the boundary conditions reduce to the form
on z = 0, where we have taken the vertical coordinate relative to the source height, z - h. By applying the Fourier transform, it is convenient to eliminate the time derivatives.
Substituting from equations (2) and (3), this transforms into the scalar Helmholtz form
The solution of the last equation, subject to the boundary conditions (5), is approximately expressed in terms of the Green function
Where the integral is to cover the whole of space and
Substituting into the same boundary conditions, the transforms of the field vectors are then given by the formula:
We are interested in the particular case, when the source
Where the integration over 'c' is over the plane surface spanning the loop. The transforms of the field vectors can readily be obtained by substitution from (4) into (11) and (12), together with the application of Stokes' theorem to the surface integral
Where the integration is around the loop, in the right-hand sense with respect to the direction k. The Green function for this position can be written in Sommerfeld form:
Corresponding to the values of k² in the air and the ground, respectively, and
We take cylindrical polar coordinates
From the addition formula for the Bessel function, the results obtained are
When we substitute from (15) into (14), we find
The corresponding result for an elementary dipole, whose moment is obtained as a limiting value, can be recovered from this formula by applying the approximations; thus, we get
The result can be obtained directly by substituting: E^p(r,z,s) is purely radiative, vanishing when the source is on the ground, whereas the part corresponding to E^s(r,z,s) is diffusive in character, because of its dependence on the conductivity of the ground. They are given by inverse transforms of the form
In general, such a field cannot be determined in exact form; however, for h = 0, the radiation component
We consider the transient field resulting from an abrupt current switch-off; we show that the radiative field consists of a superposition of Huygens spectral pulses issuing, at the instant of switch-off, from each element of the source and its image in the ground.
{"url":"https://www.tsijournals.com/articles/the-electromagnetic-field-of-a-magnetic-dipole-above-a-conducting-halfspace-13917.html","timestamp":"2024-11-10T05:57:39Z","content_type":"text/html","content_length":"84525","record_id":"<urn:uuid:43d5386f-4c2f-4e7b-bf2a-2a81c9903e85>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00609.warc.gz"}
|
nForum - Discussion Feed (double affine Hecke algebra)
Notice that rational Cherednik algebras defined by Etingof and Ginzburg are not literally DAHAs, but certain degenerations of those, and can be considered a special case of symplectic reflection algebras.
Edit: for this reason I have moved some of the references into the new entry rational Cherednik algebra.
324 research outputs found
We study inelastic two-body relaxation in a spin-polarized ultracold Fermi gas in the presence of a p-wave Feshbach resonance. It is shown that in reduced dimensionalities, especially in the
quasi-one-dimensional case, the enhancement of the inelastic rate constant on approach to the resonance is strongly suppressed compared to three dimensions. This may open promising paths for
obtaining novel many-body states. (Comment: 14 pages, 12 figures)
We discuss fluctuations in a dilute two-dimensional Bose-condensed dipolar gas, which has a roton-maxon character of the excitation spectrum. We calculate the density-density correlation function,
fluctuation corrections to the chemical potential, compressibility, and the normal (superfluid) fraction. It is shown that the presence of the roton strongly enhances fluctuations of the density, and
we establish the validity criterion of the Bogoliubov approach. At T=0 the condensate depletion becomes significant if the roton minimum is sufficiently close to zero. At finite temperatures
exceeding the roton energy, the effect of thermal fluctuations is stronger and it may lead to a large normal fraction of the gas and compressibility. (Comment: 5 pages, 3 figures)
We consider weakly interacting bosons in a 1D quasiperiodic potential (Aubry-Azbel-Harper model) in the regime where all single-particle states are localized. We show that the interparticle
interaction may lead to many-body delocalization, and we obtain the finite-temperature phase diagram. Counterintuitively, in a wide range of parameters the delocalization requires stronger coupling as the temperature increases. This means that the system of bosons can undergo a transition from a fluid to an insulator (glass) state under heating
In this work, we discuss the emergence of $p$-wave superfluids of identical fermions in 2D lattices. The optical lattice potential manifests itself in an interplay between an increase in the density
of states on the Fermi surface and the modification of the fermion-fermion interaction (scattering) amplitude. The density of states is enhanced due to an increase of the effective mass of atoms. In
deep lattices, for short-range interacting atoms, the scattering amplitude is strongly reduced compared to free space due to a small overlap of wavefunctions of fermions sitting in the neighboring
lattice sites, which suppresses the $p$-wave superfluidity. However, we show that for a moderate lattice depth there is still a possibility to create atomic $p$-wave superfluids with sizable
transition temperatures. The situation is drastically different for fermionic polar molecules. Being dressed with a microwave field, they acquire a dipole-dipole attractive tail in the interaction
potential. Then, due to a long-range character of the dipole-dipole interaction, the effect of the suppression of the scattering amplitude in 2D lattices is absent. This leads to the emergence of a
stable topological $p_x+ip_y$ superfluid of identical microwave-dressed polar molecules. (Comment: 14 pages, 4 figures; prepared for proceedings of the IV International Conference on Quantum Technologies (Moscow, July 12-16, 2017); the present paper summarizes the results of our studies arXiv:1601.03026 and arXiv:1701.0852)
We consider a gas of cold fermionic atoms having two spin components with interactions characterized by their s-wave scattering length $a$. At positive scattering length the atoms form weakly bound
bosonic molecules which can be evaporatively cooled to undergo Bose-Einstein condensation, whereas at negative scattering length BCS pairing can take place. It is shown that, by adiabatically tuning
the scattering length $a$ from positive to negative values, one may transform the molecular Bose-Einstein condensate into a highly degenerate atomic Fermi gas, with the ratio of temperature to Fermi
temperature $T/T_F \sim 10^{-2}$. The corresponding critical final value of $k_{F}|a|$ which leads to the BCS transition is found to be about one half, where $k_F$ is the Fermi momentum. (Comment: 4 pages, 1 figure. Phys. Rev. Lett. in press)
We consider a one-dimensional (1D) two-component atomic Fermi gas with contact interaction in the even-wave channel (Yang-Gaudin model) and study the effect of an SU(2) symmetry breaking
near-resonant odd-wave interaction within one of the components. Starting from the microscopic Hamiltonian, we derive an effective field theory for the spin degrees of freedom using the bosonization
technique. It is shown that at a critical value of the odd-wave interaction there is a first-order phase transition from a phase with zero total spin and zero magnetization to the spin-segregated
phase where the magnetization locally differs from zero. (Comment: 18 pages, 3 figures; references added)
We find that the key features of the evolution and collapse of a trapped Bose condensate with negative scattering length are predetermined by the particle flux from the above-condensate cloud to the
condensate and by 3-body recombination of Bose-condensed atoms. The collapse, starting once the number of Bose-condensed atoms reaches the critical value, ceases and turns to expansion when the
density of the collapsing cloud becomes so high that the recombination losses dominate over attractive interparticle interaction. As a result, we obtain a sequence of collapses, each of them followed
by dynamic oscillations of the condensate. In every collapse the 3-body recombination burns only a part of the condensate, and the number of Bose-condensed atoms always remains finite. However, it
can comparatively slowly decrease after the collapse, due to the transfer of the condensate particles to the above-condensate cloud in the course of damping of the condensate oscillations. (Comment: 11 pages, 3 figures)
We study finite size effects for the gap of the quasiparticle excitation spectrum in the weakly interacting regime of the one-dimensional Hubbard model with on-site attraction. Two types of corrections to the result of the thermodynamic limit are obtained. Aside from a power law (conformal) correction due to gapless excitations, which behaves as $1/N_a$, where $N_a$ is the number of lattice sites, we obtain corrections related to the existence of gapped excitations. First of all, there is an exponential correction which in the weakly interacting regime ($|U|\ll t$) behaves as $\sim \exp(-N_a \Delta_{\infty}/4t)$ in the extreme limit of $N_a \Delta_{\infty}/t \gg 1$, where $t$ is the hopping amplitude, $U$ is the on-site energy, and $\Delta_{\infty}$ is the gap in the thermodynamic limit. Second, in a finite size system a spin-flip producing unpaired fermions leads to the appearance of solitons with non-zero momenta, which provides an extra (non-exponential) contribution $\delta$. For moderate but still large values of $N_a\Delta_{\infty}/t$, these corrections significantly increase and may become comparable with the $1/N_a$ conformal correction. Moreover, in the case of weak interactions, where $\Delta_{\infty}\ll t$, the exponential correction exceeds higher order power law corrections in a wide range of parameters, namely for $N_a\lesssim (8t/\Delta_{\infty})\ln(4t/|U|)$, and so does $\delta$ even in a wider range of $N_a$. For sufficiently small numbers of particles, which can be of the order of thousands in the weakly interacting regime, the gap is fully dominated by finite size effects. (Comment: 17 pages, 5 figures)
OpenSolver 2.6 (8 Oct 2014) with Mac OSX support
We are happy to announce that OpenSolver 2.6 has been released.
This is a major release that supports Office 2011 for Mac OS X for the first time! Nearly all of the features of the Windows version are working on Mac; the only notable exception is the non-linear NOMAD solver, which is not supported in the Mac version at this stage. However, all of the other non-linear solvers are available as normal.
This brings us to the other main addition in this release which is the ability to use the COIN-OR non-linear solvers locally. The Bonmin and Couenne solvers were previously only available on NEOS as
cloud-based solvers, but they can now be used on your machine locally (both Windows and Mac), which should offer more convenience. This feature is very experimental, as these solvers rely on us
converting the Excel model into another model format and we currently only support doing this for a subset of the functions in Excel (most common functions are working). These solvers will fail if
your spreadsheet uses functions OpenSolver cannot interpret. If you get a message about an unsupported function, please let us know so we can look at adding it in. These solvers are included in the
‘Advanced’ release, more on that next.
The last main difference in this release is the release format. Instead of offering a single download, we are offering separate downloads for Windows and Mac. This is so Windows users don’t have to
download the solvers for Mac, and vice-versa. We are also introducing ‘Linear’ and ‘Advanced’ downloads. The Linear download only contains the linear CBC solver, whereas the experimental Advanced
download comes with the non-linear solvers Bonmin, Couenne, and NOMAD. Both releases can use the NEOS solvers, and can use Gurobi if installed on the machine. If you aren’t sure, the default download
is the Linear version for your computer (either Windows or Mac as appropriate).
Other changes include:
• NEOS solvers now write AMPL files to disk before sending the model to NEOS. These can be used to run the model locally using AMPL if you have it installed
• Resolve bugs introduced by system-locale settings
• Upgrade NOMAD to v3.6.2
• Bugfixes for non-linear NOMAD solver
• Bugfixes for NEOS solvers
• Various other bugfixes
You can see the releases here. There are a lot of changes in this release, especially since it is the first Mac release, and we are looking for any feedback you have, as well as any problems you
might run into while using it.
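For readers curious what kind of model these solvers handle, here is a rough Python sketch that brute-forces a tiny integer program of the sort OpenSolver would hand to CBC (purely illustrative; OpenSolver itself reads the model from the spreadsheet):

```python
from itertools import product

# Tiny integer program:
#   maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x, y in {0..4}.
best = None
for x, y in product(range(5), repeat=2):
    if x + y <= 4 and x <= 2:          # feasibility check
        value = 3 * x + 2 * y          # objective value
        if best is None or value > best[0]:
            best = (value, x, y)

print(best)  # (10, 2, 2): the optimum is x=2, y=2 with objective 10
```

A real solver like CBC explores the same feasible set far more cleverly (branch and bound), but the inputs — objective, constraints, bounds — are the same.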
16 thoughts on “OpenSolver 2.6 (8 Oct 2014) with Mac OSX support”
1. Greetings,
Thank-you very much for your work creating/maintaining Open Solver. I’ve tried running version 2.6 on a new 64 bit Windows box with Excel 2013 and I receive the following error:
OpenSolver2.6 encountered an error:
Path/File access error (at line 747) (at line 1448) (at line 2810)
Source = OpenSolver, ErrNumber=75
I’ve tried running it from My Documents as well as the AddInns director. My company has changed its security settings for macros, so I’m afraid it might be causing the problem (even though I
enabled them/trusted source). I will download v2.7 from home (can’t do that at work either) and will try to run. Thoughts?
With aprpeciation,
Jason H.
1. Please do try with 2.7 and/or the 2.8 prerelease versions and let us know how you go. Tracing this error indicates we are having trouble creating the script to run the solver inside your Temp
folder. There are changes to how we deal with the Temp folder in version 2.8, so that might help resolve this problem. You should also try rebooting if you haven’t already, this often helps
fix file access-related issues (I doubt it will help here but you never know)
2. Hello,
Is OpenSolver 2.6 able to generate .nl files for Bonmin and Couenne with binary constraints? The solutions I'm getting appear to be ignoring the binary constraints. Manually looking at the .nl files, it seems like binary constraints are not being written in.
I will rebuild my model in a new blank spreadsheet to see if it helps.
Thank you for the progress so far!
1. This looks like our bug; thanks for reporting it (and for reading the .nl file – not an activity for the faint hearted!). We will look into this further. Andrew
1. It is a bit strange because the BONMIN solver using NEOS works fine.
The NOMAD solver is working fine as well.
We have found and fixed the bug; the next release should resolve this. Perhaps in the meantime you can make the variables integer (which works correctly) with upper bounds of 1. Sorry about this bug, but many thanks for reporting it. Andrew. PS: The NEOS code works differently.
1. Thank you Andrew!
OpenSolver is becoming a formidable product with the inclusion of the nonlinear solvers. NL solvers are very useful when verifying the correctness of a linearized NL model.
I also see that the ‘NEOS using COIN-OR CBC’ engine generates .ampl files! (WOW!)
Is there any way to change the AMPL file to run using the other solvers on NEOS? I tried to change the ‘option solver cbc’ to a different solver but NEOS is giving me errors.
Thanks again!
1. Thanks for the feedback; we have some great students who have the skills to add these new features. We don’t have the user interface that would let you change the solver
yourself – but it is clearly something we need to add partly because it is not a big job. Keep an eye out for a new release (but not the next one which is almost set to go).
Thanks again, Andrew
3. Hi again,
OK, carrying on from before, OpenSolver has put an item on the menu bar. I can access the main Model form and try to run the solver. Unfortunately, it comes back with error 1004 at line 3322: “The
index into the specified collection is out of bounds”. Not very meaningful.
The model is currently very simple & stripped-down. The Excel built-in Solver, although not giving a properly optimised solution, nevertheless does not throw any error.
I can only assume that this version of OpenSolver is indeed NOT compatible with Excel 2003.
Please advise.
1. We do not test against Excel 2003, sorry, and have not done so for some time. But, we hoped it still worked in this old version. We’d welcome your help in testing this further… do you know
VBA? Andrew
1. Hi Andrew,
That’s a pity, considering the description I had read about OpenSolver running on Excel 2003.
Anyway, to answer your question, I know a little VBA & if time allows would like to learn some more. If I can help when time allows, let me know how (& with some guidance).
Best regards,
1. Thanks. We will look into your error report, and try to get back to you over the next few weeks. Also, I have updated our front page to make it clear that Excel 2003 support is not
guaranteed! Thanks for taking the time to give us your feedback. Andrew
4. Hi again,
Hold everything!
I’ve just tried loading OpenSolver 2.6 again.
This time, although it popped up the messages I previously told you about, it has now added an OpenSolver item to the menu bar.
I shall try it out & get back to you.
5. I excitedly downloaded this version to use on my tried & tested Excel 2003.
When loading, a little window popped up saying “File conversion in progress” & another one saying “This file was created in a newer version of Microsoft Excel. . . . originally contained features
not recognised by this version of Excel” & then did nothing. No additional menus or add-ins. I tried several times with the same result. Nada.
I conclude that this is not actually compatible with Excel 2003.
A pity!
When did OpenSolver end support for Excel 2003?
6. Hey There!
I was just wondering how many variables and constraints is this program limited to?
1. No limit apart from memory and time.
|
{"url":"https://opensolver.org/opensolver-2-6-8-oct-2014-with-mac-osx-support/","timestamp":"2024-11-10T09:40:28Z","content_type":"text/html","content_length":"68004","record_id":"<urn:uuid:817f33db-9dbf-4a9a-bd56-419981e8a04b>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00306.warc.gz"}
|
1.6.1 Unit 1 Test Review
1.6.1 Unit 1 Test Review
HS Math
Created on September 24, 2024
1.6.1 Unit Test Review
Civilization Game
This game is still locked.
Civilization Game
You need to answer the following questions to find your civilization.
Which statement about the number 1.75 is true?
1.75 is a rational number.
1.75 is an irrational number.
1.75 is a whole number.
Select the largest number.
Complete the factor tree.
Find the least common denominator (LCD).
You were wrong, caveman!
Try again!
You found your civilization.
Civilization Game
You have already passed this game!
This game is still locked.
Hunting Game
You need to answer the following questions to learn how to hunt.
What is the greatest common factor (gcf) of 88 and 132?
What is the least common multiple (lcm) of 88 and 132?
There are two buildings on Main Street. Building #1 has 12 floors, all with equal height, and a total height of 105 feet. Building #2 has 10 floors, with each floor having a height that is 6/5 times the height of each floor in Building #1. Which building is taller?
Both buildings are the same height.
Building #2
Building #1
You were wrong, caveman!
Try again!
You caught a fish!
Civilization Game
You have already passed this game!
Wheel Game
Answer the following questions to learn how to use the wheel.
solve the following:
solve the following:
use the order of operations to solve:
use the order of operations to solve:
You were wrong, caveman!
Try again!
You learned how to use the wheel.
Great job!
|
{"url":"https://view.genially.com/66f2c0feb963df603ced2f90/interactive-content-161-unit-1-test-review","timestamp":"2024-11-01T22:39:59Z","content_type":"text/html","content_length":"36064","record_id":"<urn:uuid:cf2f07c4-f3c5-45ee-8ae1-b536c4bad060>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00721.warc.gz"}
|
Two cylindrical swimming pools are being filled simultaneously with water

Two cylindrical swimming pools are being filled simultaneously with water, at exactly the same rate measured in m³/min. The smaller pool has a radius of 5 m and the height of the water in the smaller pool is increasing at a rate of 0.5 m/min. The larger pool has a radius of 8 m. How fast is the height of the water increasing in the larger pool? Your answer must be a specific numerical value.
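Not part of the problem statement, but the related-rates computation can be sketched in a few lines. Since both pools fill at the same volumetric rate, pi·r1²·(dh1/dt) = pi·r2²·(dh2/dt), so dh2/dt = (r1/r2)²·(dh1/dt):

```python
# Equal volumetric fill rates: pi*r1^2*dh1 = pi*r2^2*dh2
r_small, r_large = 5.0, 8.0      # pool radii in metres
dh_small = 0.5                   # rise rate of the smaller pool, m/min
dh_large = (r_small / r_large) ** 2 * dh_small
print(dh_large)                  # 0.1953125 m/min
```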
|
{"url":"https://tutorbin.com/questions-and-answers/two-cylindrical-swimming-pools-are-being-filled-simultaneously-with-wa","timestamp":"2024-11-12T23:51:06Z","content_type":"text/html","content_length":"64082","record_id":"<urn:uuid:a9a83b26-122c-4e18-8903-d77a677e8d32>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00789.warc.gz"}
|
Electromotive Force Calculator - Calculator Doc
Electromotive Force Calculator
The Electromotive Force (EMF) Calculator is a handy tool used to calculate the electromotive force in electrical circuits. EMF refers to the voltage generated by a source, such as a battery or
generator, when work is done to move a charge through the circuit. It is a fundamental concept in electromagnetism and electrical engineering.
The formula for calculating electromotive force (EMF) is:
Electromotive Force (E) = Work Done (W) / Charge (Q)
• E is the electromotive force measured in volts (V).
• W is the work done measured in joules (J).
• Q is the charge measured in coulombs (C).
How to use
1. Enter the Work Done (W) in joules into the input field.
2. Enter the Charge (Q) in coulombs into the next field.
3. Click the Calculate button to find the Electromotive Force (E) in volts.
Suppose you have a work done of 100 joules and a charge of 20 coulombs. Using the formula:
E = 100 / 20 = 5 volts
This means the electromotive force generated is 5 volts.
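The formula and worked example above can be wrapped in a few lines of code; this is our own sketch (the function name `emf` is not from any library):

```python
# E = W / Q: volts = joules / coulombs
def emf(work_joules, charge_coulombs):
    if charge_coulombs == 0:
        raise ValueError("charge must be non-zero")
    return work_joules / charge_coulombs

print(emf(100, 20))  # 5.0 volts, matching the worked example above
```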
1. What is electromotive force (EMF)? Electromotive force (EMF) is the energy provided by a power source, such as a battery, to move electrical charge through a circuit, measured in volts.
2. What is the difference between EMF and voltage? EMF refers to the potential difference created by a source, while voltage can refer to the potential difference between any two points in a circuit.
3. What is the unit of electromotive force? The unit of electromotive force is the volt (V).
4. How do you calculate electromotive force? You calculate EMF by dividing the work done in moving a charge by the amount of charge, using the formula E = W / Q.
5. What is work done in an electrical circuit? Work done is the energy transferred to move a charge through a circuit, measured in joules (J).
6. What is charge in an electrical circuit? Charge is the quantity of electricity that flows through a conductor, measured in coulombs (C).
7. Can EMF be negative? EMF is typically positive, but a negative value can occur in certain situations, such as when opposing forces are at play in a circuit.
8. What is the significance of EMF in batteries? EMF represents the maximum potential difference a battery can provide when no current is flowing, essentially its voltage when the circuit is open.
9. Is EMF the same as potential difference? EMF is a type of potential difference, specifically the one generated by a power source.
10. Can EMF be zero? EMF can be zero if no work is done in moving the charge, or if the battery or generator is not active.
11. How does EMF relate to energy conversion? EMF is a measure of energy conversion from one form (such as chemical or mechanical energy) to electrical energy.
12. What is the difference between EMF and terminal voltage? Terminal voltage is the actual voltage available at the terminals of a battery when a current is flowing, while EMF is the theoretical
maximum voltage without current flow.
13. What factors affect electromotive force? Factors such as the source material, temperature, and load on the system can affect EMF.
14. Can I use this calculator for both AC and DC circuits? Yes, this calculator applies to both AC and DC circuits, as long as you know the work done and the charge.
15. What is the relationship between EMF and Ohm’s law? Ohm’s law relates current, voltage, and resistance, while EMF is the source of voltage in a circuit. EMF is the total potential difference
before considering internal resistance.
16. What happens to EMF if the charge is very small? If the charge is very small, the EMF will increase, as the same amount of work is done over a smaller amount of charge.
17. What if no work is done in the circuit? If no work is done (W = 0), the EMF will be zero, meaning no force is driving the charge through the circuit.
18. Can EMF vary over time? Yes, EMF can vary, especially in alternating current (AC) circuits, where the source alternates its potential difference.
19. Is EMF important in electrical generators? Yes, EMF is crucial in electrical generators, where mechanical energy is converted into electrical energy, creating voltage.
20. Does temperature affect EMF? Yes, temperature can affect EMF, especially in batteries, where increased temperature can lead to changes in chemical reactions and thus alter the EMF.
The Electromotive Force Calculator provides an easy and quick way to determine the EMF generated in a circuit based on the work done and the charge. EMF is a fundamental concept in electromagnetism
and electrical engineering, helping to understand how energy is transferred and used in circuits. This tool can be used in various applications, from basic physics problems to real-world electrical
system design.
|
{"url":"https://calculatordoc.com/electromotive-force-calculator/","timestamp":"2024-11-06T08:27:58Z","content_type":"text/html","content_length":"86905","record_id":"<urn:uuid:e4e38fd8-2860-489c-bb1f-bd4f4bd0c2e7>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00746.warc.gz"}
|
How Many Feet Are in a Mile? Understanding the Conversion Between Feet and Miles
Common measures of length and distance include the foot and the mile. Converting between these units might be helpful when trying to figure out things like how far a long car trip is or how big an
area of land actually is. In this piece, I’ll show how to convert between feet and miles and discuss the history of the two measurement systems.
Understanding Feet and Miles
It’s crucial to know what feet and miles represent before attempting a conversion. One foot is equal to 12 inches, or 0.3048 metres. It is the standard unit of measurement for
determining things like a person’s stature, the dimensions of a room, or the breadth of a street.
However, a mile is defined as 5,280 feet, which is roughly equivalent to 1.609 kilometres. Long distances, like the length of a marathon or the distance between two cities, are typically measured
using this method.
Converting Feet to Miles
To convert feet to miles, divide the number of feet by 5,280. If you need to know how many miles are in 10,000 feet, for instance, divide 10,000 by 5,280, which gives you approximately 1.894 miles.
It’s important to remember that the mile isn’t always the most convenient unit of measurement, especially when working with shorter distances. When measuring the length of a bookcase or the height of
a structure, for instance, feet or inches would be more appropriate.
Converting Miles to Feet
To convert miles to feet, multiply the number of miles by 5,280. For example, multiplying 3.5 miles by 5,280 gives you 18,480 feet.
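The two rules above can be captured in a pair of small helper functions (a sketch of ours, using the exact factor of 5,280 feet per mile):

```python
FEET_PER_MILE = 5280

def feet_to_miles(feet):
    return feet / FEET_PER_MILE

def miles_to_feet(miles):
    return miles * FEET_PER_MILE

print(round(feet_to_miles(10_000), 3))  # 1.894
print(miles_to_feet(3.5))               # 18480.0
```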
Using Conversion Tables
A conversion table between feet and miles can be useful if you find yourself doing the conversion often. You may rapidly calculate the conversion factor by looking up a common measurement in both
feet and miles, as is done in these tables.
Using Online Conversion Tools
Alternatively, you can use a foot-to-mile conversion tool to do the maths for you. These converters let you enter a measurement in one unit and get the result in the other unit instantly. If you need
to convert between several different units of measurement, this can be a fast and easy method to do it.
Applications of Feet and Miles
Both feet and miles are frequently used in many contexts. If you’re taking a road trip, for instance, you might be curious about how many miles you’ll cover. You may be interested in the square
footage of a home you’re considering purchasing.
Miles and feet are both common in the building and engineering industries. If you want to create a bridge that can span a given distance, you’ll need to know how long the bridge needs to be. For
architectural purposes, knowing the height and width in feet is essential.
Common Conversions
Here are some common conversions between feet and miles:
1 foot = 0.000189394 miles
10 feet = 0.00189394 miles
100 feet = 0.0189394 miles
1,000 feet = 0.189394 miles
10,000 feet = 1.89394 miles
100,000 feet = 18.9394 miles
1 mile = 5,280 feet
2 miles = 10,560 feet
5 miles = 26,400 feet
10 miles = 52,800 feet
50 miles = 264,000 feet
Other Units of Measurement
There are many more units of measurement for length and distance that are used in addition to the more conventional feet and miles. In the metric system, which is used in many nations, centimetres
and metres are both common units of measurement. Yards, a typical American unit of length measurement, and kilometres, a standard European unit of distance measurement, are two examples of
alternative units.
Tips for Converting
Some things to keep in mind when doing a feet-to-miles conversion:
• Check that you have the right conversion factor. Divide the number of feet by 5,280 to convert to miles. Multiply the number of miles by 5,280 to get the equivalent in feet.
• Always keep in mind that feet are far less than miles. The result of converting a distance expressed in feet to miles will be less than the original expression. The outcome of converting miles to
feet will be more than the original value.
• Be careful to round your calculations to an appropriate number of digits. If you convert 10,000 feet to miles, for instance, you’ll get 1.894 miles (rounded to three decimal places).
• When deciding on a unit of measurement, use your best judgement. It would be more reasonable to use inches or centimetres when measuring the length of a pencil rather than miles or feet.
In sum, knowing how to convert between feet and miles is useful knowledge in many contexts. Converting between these units and getting precise length and distance measurements can be done with the
right conversion factor and by remembering these guidelines.
|
{"url":"https://itsreleased.com/how-many-feet-are-in-a-mile-understanding-the-conversion-between-feet-and-miles/","timestamp":"2024-11-13T09:40:05Z","content_type":"text/html","content_length":"82278","record_id":"<urn:uuid:b2b7749f-2519-4761-933e-af299330e1b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00745.warc.gz"}
|
Binary Tree Traversal - CSVeda
Traversal is an essential operation of any data structure. It is the process of reaching out to or visiting every element stored in the data structure at least once. Unlike linear data structures
where traversal can be done in only one way, a binary tree can be traversed in different ways. Three common Binary Tree Traversal methods are
• Preorder Traversal
• Inorder Traversal
• Postorder Traversal
Preorder Traversal
Preorder traversal is the traversal technique where the root node of a tree/subtree is processed before processing its left and right child nodes. In short the Preorder traversal can be termed as
Root, Left, Right. The steps for preorder traversal are
1. Process the current node
2. Perform Preorder Traversal on the left subtree of the current node
3. Perform Preorder Traversal on the right subtree of the current node
Inorder Traversal
Inorder traversal is binary tree traversal where the root node of a tree/subtree is processed after processing its left child node but before processing the right child node. The root node is
processed between the left and right child node processing. In short Inorder traversal can be termed as Left, Root, Right. The steps for inorder traversal are
1. Perform Inorder Traversal on the left subtree of the current node
2. Process the current node
3. Perform Inorder Traversal on the right subtree of the current node
Postorder Traversal
Postorder Binary tree traversal is the traversal where the root node of a tree/subtree is processed after processing its left child node and the right child node. That is the root node is processed
after processing of the left and right child nodes. In short the Postorder traversal can be termed as Left, Right, Root. The steps for postorder traversal are
1. Perform Postorder Traversal on the left subtree of the current node
2. Perform Postorder Traversal on the right subtree of the current node
3. Process the current node
Process is a general term used in a binary tree traversal. A process can be printing the values of the nodes, evaluation of the node value, updating the node value or counting the nodes.
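The three traversal orders can be sketched for a minimal node type; this is an illustrative implementation of ours, with `visit` standing in for whatever "process" means in a given application:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node, visit):           # Root, Left, Right
    if node:
        visit(node.value)
        preorder(node.left, visit)
        preorder(node.right, visit)

def inorder(node, visit):            # Left, Root, Right
    if node:
        inorder(node.left, visit)
        visit(node.value)
        inorder(node.right, visit)

def postorder(node, visit):          # Left, Right, Root
    if node:
        postorder(node.left, visit)
        postorder(node.right, visit)
        visit(node.value)

# small tree: 1 with children 2 (children 4, 5) and 3
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
out = []
inorder(root, out.append)
print(out)  # [4, 2, 5, 1, 3]
```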
|
{"url":"https://csveda.com/binary-tree-traversal/","timestamp":"2024-11-09T00:57:38Z","content_type":"text/html","content_length":"64438","record_id":"<urn:uuid:5b521663-1f08-4664-838d-abcd7d82f657>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00780.warc.gz"}
|
Re: [std-proposals] Integer overflow arithmetic
From: Tiago Freire <tmiguelf_at_[hidden]>
Date: Sat, 17 Feb 2024 15:00:39 +0000
Double-wide remainders are important for safe modular arithmetic. For
example, std::linear_congruential_generator is commonly implemented
using double-wide arithmetic through __int128. However, it would be
desirable to have a general solution for any integer type.
Your div_wide function does not cover this use case because it has the
precondition is_div_wide_defined. A rem_wide function would have a
wider contract. Only division by zero and the edge case of HUGE / -1
are problematic, but the remainder always fits into a 64-bit number.
> FWIW, for std::linear_congruential_engine the integers involved are unsigned and the remainder is a compile-time constant that can't be zero, so neither of those cases is a problem. That doesn't
mean it couldn't be useful in other situations, but isn't needed for std::linear_congruential_engine.
That was my thought; being modular arithmetic, I had assumed it was unsigned, and for unsigned you can extend the domain of usability for the remainder using the following algorithm:
//assume divisor != 0
div_result<uint64_t> temporary = div_wide(0, dividend_hi, divisor);
temporary = div_wide(temporary.remainder, dividend_low, divisor);
return temporary.remainder;
That’s why these are so useful.
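To illustrate the idea (names are ours, not from any proposal), here is the same two-step reduction in Python, where arbitrary-precision integers stand in for the hardware-level div_wide calls:

```python
# Reduce a 128-bit value (hi:lo) modulo a 64-bit divisor d in two
# 64-bit-wide steps: fold the high half into a remainder first,
# then fold in the low half.
def rem_wide(hi, lo, d):
    assert d != 0
    r = hi % d                       # step 1: remainder of the high half
    return (r * 2**64 + lo) % d      # step 2: fold in the low half

# cross-check against direct 128-bit arithmetic
hi, lo, d = 0x1234, 0xDEADBEEF, 1_000_003
assert rem_wide(hi, lo, d) == ((hi << 64) | lo) % d
```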
Received on 2024-02-17 15:00:43
|
{"url":"https://lists.isocpp.org/std-proposals/2024/02/9102.php","timestamp":"2024-11-05T23:32:32Z","content_type":"text/html","content_length":"6864","record_id":"<urn:uuid:411f2490-a4fe-4dbc-ab83-d515b25f69aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00460.warc.gz"}
|
Rainfall/Evaporation Boundary | Technical Reference
Rainfall/Evaporation Boundary
• 15 Aug 2022
• 7 Minutes to read
| Field in Data Entry | Description | Name in Datafile |
| --- | --- | --- |
| Node Label | Node label at boundary | Label 1 |
| IncludeThis Data | Flag signifying if rainfall data (rain), evaporation data (evap) or infiltration (infi) is to be specified | rtype(i) |
| Number of Data Pairs | Number of ensuing depth and time data pairs for nth dataset | ndat(n) |
| Time Datum Adjustment | Optional time-datum adjustment | tlag(n) |
| Units of Time | Optional keyword or value for units of time in the following dataset. Can be any numerical multiplier or one of the following: seconds (the default), minutes, hours, days, weeks, fortnight, lunar (month), months (of 30 days), quarter, years, decades. Alternatively 'date' signifies a date type format | tm(n) |
| Data Extending Method | Policy for extending data if the run finishes after the end of the boundary data. Options are: REPEAT, EXTEND or NOEXTEND | repeat(n) |
| Data Interpolation | SPLINE if a cubic spline is to be fitted to the data, LINEAR to use linear interpolation or BAR to denote a rainfall-type histogram. (If the field is blank then linear interpolation is used; however, for rainfall the overriding default is BAR) | smooth(n) |
| Flow Multiplier | User input multiplier for resultant flow. Default is 1 | qmult(n) |
| Intensity Time Units | Keyword signifying time unit (cf. tm) to determine rainfall intensity time base. Default is hours, i.e. mm/hr. May also be a number denoting the number of seconds in the intensity time | intenstr(n) |
| Data Specification | 'DEPTH' or 'INTENSITY' flag to determine whether data is specified as a depth or intensity (default) | diflag(n) |
| Rainfall/Evaporation/Infiltration | Intensity (mm/unit tm) or depth (mm) of rainfall (or evaporation or infiltration) corresponding to time(n,i). In the case of BAR format data, this corresponds to the depth for the time period UP TO time(n,i) (i.e. time(n,i-1) to time(n,i)) | depth(n,i) |
| Time | Time (in units of tm, default of seconds, or in form of mm:hh for DATE time units) at ith dataset for nth data type | time(n,i) |
| Date | As Time, but date in format dd/mm/yyyy (DATE time units only) | timed |
Theory and Guidance
The rainfall evaporation boundary unit (REBDY) provides a rainfall and/or evaporation boundary inflow into a model network. This is of particular interest where direct rainfall or evaporation forms a
significant proportion of the water volume entering a system. There may be up to three consecutive tables of data, representing rainfall, evaporation/evapotranspiration and/or infiltration. There is
the opportunity to use actual dates or standard time series (as in the QT-type boundaries) with the rainfall data and flow averaging will be done within the REBDY unit so values can be used directly
in mass equations.
The REBDY unit always operates in conjunction with either a lateral inflow unit, or a lateral inflow node of a RIVER or RESERVOIR unit, to create an inflow into one or more units with a magnitude
based on the water surface area of the receiving unit. A REBDY unit must either be connected to exactly one lateral inflow unit or to any number of lateral inflow nodes of a RESERVOIR or RIVER unit.
This can then divide the resultant inflow across the network accordingly. The lateral inflow unit must have its distribution method set to AREA for the REBDY unit to apply correctly.
The REBDY unit can hold combinations of rainfall data, evapo[transpi]ration data and infiltration data. These are input via individual tabs on the REBDY form within the Flood Modeller interface. It
is therefore, for example, possible to store rainfall data with a different time interval from corresponding evaporation data. If both rainfall and evaporation are present then the REBDY unit will
combine these into an effective rainfall figure during the computational phase of the model, i.e. (rainfall - evaporation). Similarly infiltration will be treated as a negative value in the
calculation of an effective inflow.
The corresponding time data for rainfall and evaporation data can be in any of the formats already available for QT-type boundaries, i.e. seconds, minutes, hours, weeks, etc. In addition, any of the
available time units may be specified in a date format. The date format within the data form must be of the form dd/mm/yyyy hh:mm. It is recommended that the user choose the Select ('...') button
within the data grid to bring up the following dialogue which ensures the correct formatting.
The typical method of specifying rainfall and evaporation data is as a depth intensity (Data format = INTENSITY) with units of depth per unit time, which can be considered as flow per unit surface
area. The REBDY is designed by default to expect this form of unit, however, a variety of intensities are available, e.g. mm/15 minute, mm/hr, mm/day, etc. or even a user specified intensity time
interval (in seconds). These may be specified by altering the Intensity Time Units field. An aggregated rainfall depth over the preceding data interval may alternatively be supplied (Data Format=
DEPTH), in which case an intensity is obtained by dividing the depth by the preceding time interval. In order to convert the intensity value to an equivalent inflow over a model time step the REBDY
unit calculates the average value over the time step and this is then multiplied by the receiving surface area in the lateral inflow unit to obtain an inflow.
To explain how the averaging of date format rainfall data will be done; consider a typical rainfall distribution, as shown below. The rainfall time interval is different from the model time step, dt.
As the shaded area shows there are two different rainfall rates occurring over the second model time step. The total depth of rain falling during the time step is equivalent to the shaded area, V.
Dividing this value by the model time step, dt, gives the average rainfall rate per unit time, with units m s⁻¹. Finally, multiplying this value by the surface area receiving the rainfall will produce
an equivalent inflow. This is therefore dependent on the dimensions of the receiving river or reservoir unit.
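The averaging step can be sketched in a few lines. This is our own illustration (not Flood Modeller code), assuming bar-chart data in which each intensity value applies up to its timestamp, and extending the last value past the end of the data:

```python
# Mean intensity over a model time step [t0, t0 + dt] for bar-chart data.
def mean_intensity(times, intensities, t0, dt):
    end = t0 + dt
    total, t = 0.0, t0
    for tb, i in zip(times, intensities):
        if tb <= t:
            continue                       # this bar ends before the step starts
        total += i * (min(tb, end) - t)    # depth contributed by the overlap
        t = min(tb, end)
        if t >= end:
            break
    if t < end:                            # step runs past the data: extend
        total += intensities[-1] * (end - t)
    return total / dt

print(mean_intensity([1.0, 2.0], [10.0, 20.0], t0=0.5, dt=1.0))  # 15.0
```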
The REBDY data can have irregular increments between data points in a series as the unit will always calculate the time between data points. For bar chart format data, as shown below, for any model
time preceding that of the first data item, the first data value is assumed to apply up until this time. NB Bar chart format is forced if data is specified as a DEPTH (as opposed to an INTENSITY). In
addition, a repeating or extended series can be specified, which may be useful for example when specifying an annual or daily evaporation pattern (see Notes).
1. A rainfall, etc. value corresponds to the intensity or depth that occurred during the data interval prior to its corresponding time. If the model start time is before that of the first data item,
this value is assumed to occur up to the latter time.
2. Since the first value in the data series applies up to and including (but not after) its corresponding time, using the REPEAT methodology will effectively repeat the data from the second data
value (i.e. that applies between the first and second specified times). It may therefore be advisable to insert a dummy item at t=0.
3. Using REPEAT will never repeat REBDY data 'backwards' in time - the first data item will always be extended backwards in such cases.
4. Any hydrological boundary (e.g. ReFHBDY, GERRBDY, FEHBDY, etc.) may be treated as an REBDY by selecting the 'Hyetograph' option on the Options tab within its unit form. This will then apply the
generated hyetograph to act as a direct rainfall boundary.
5. If DATES are entered as time units, then the model run times (either the Start Time or Time Zero) must be specified as a Date and Time.
6. If an REBDY unit is attached (via a lateral inflow unit) to a reservoir, there must be at least one regular inflow/outflow node (i.e. non-lateral inflow) also attached to the reservoir. If none
is naturally attached, then a dummy node (e.g. a QTBDY inflow with zero flow, or a weir with an unfeasibly high crest) must be attached.
The following errors may be generated:
| Code | Message | Comments |
| --- | --- | --- |
| E1620 | Version number is not supported in this version of Flood Modeller | The unit was generated using a more recent version of Flood Modeller. Contact your supplier for an upgrade |
| E1629 | Node connecting to receiving unit not recognised as lateral inflow | REBDY must be directly connected to a lateral inflow unit |
| E1640 | Error reading data | General data file read error |
| E1641 | No input data types recognised | Data type must be rainfall (RAIN), evaporation (EVAP) or infiltration (INFI) |
| E1580 | Unrecognised time factor keyword | Time keyword is not one of: second, hour, day, week, month, year or date |
| E1642 | No data in REBDY at line ... | Gap in date file time series rainfall data |
| E1643 | Two data points with same time | Time step is used to calculate intensity; a zero time step will therefore lead to a 'divide by zero' error |
Datafile Format
Line 1 - keyword "REBDY", version number "#REVISION#2" , [comment]
Line 2 - Label 1
Line 3 - rtype(1), ... rtype(ntyp)
The following block is repeated for n = 1 to ntyp, where ntyp is the number of types of specified data (maximum 3)
Line 4 - ndat(n) [tlag(n)] [z(n)] [tm(n)] [repeat(n)] [smooth(n)] [qmult(n)]
Line 5 - Intenstr(n), diflag(n)
Line 6 to 5+ndat(n) - depth(n,i), time(n,i)[, timed(n,i)]
Data File example
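No example file accompanies this page, so the following is a purely hypothetical REBDY block laid out according to the line format above. The node label, values and spacing are illustrative only and should be checked against the official Flood Modeller specification:

```
REBDY #REVISION#2 hypothetical rainfall-only example
LATIN001
RAIN
4 0.0 0.0 HOURS EXTEND BAR 1.0
HOURS INTENSITY
0.0 0.0
2.5 1.0
4.0 2.0
0.5 3.0
```

Here a single rainfall dataset of 4 depth/time pairs is given in hours, with the EXTEND policy, BAR interpolation and a flow multiplier of 1.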
|
{"url":"https://help.floodmodeller.com/docs/rainfall-evaporation-boundary","timestamp":"2024-11-07T01:28:27Z","content_type":"text/html","content_length":"65832","record_id":"<urn:uuid:75df9539-67fc-4da2-9635-48e85672314e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00398.warc.gz"}
|
Data Mining Algorithms - 13 Algorithms Used in Data Mining - DataFlair
Data Mining Algorithms – 13 Algorithms Used in Data Mining
In our last tutorial, we studied Data Mining Techniques. Today, we will learn Data Mining Algorithms.
We will cover all types of Algorithms in Data Mining: Statistical Procedure Based Approach, Machine Learning-Based Approach, Neural Network, Classification Algorithms in Data Mining, ID3 Algorithm,
C4.5 Algorithm, K Nearest Neighbors Algorithm, Naïve Bayes Algorithm, SVM Algorithm, ANN Algorithm, 48 Decision Trees, Support Vector Machines, and SenseClusters.
So, let’s start Data Mining Algorithms.
What are Data Mining Algorithms?
Many data mining algorithms exist; we will discuss them one by one.
• Here are some examples where the data analysis task is classification:
• A bank loan officer wants to analyze the data in order to know which customer is risky or which are safe.
• A marketing manager at a company needs to analyze a customer with a given profile, who will buy a new computer.
Why Are Algorithms Used in Data Mining?
Here are some reasons why algorithms are used in data mining:
• In today's world of "big data", large databases are becoming the norm. Just imagine a database spanning many terabytes.
• Facebook alone crunches 600 terabytes of new data every single day. The primary challenge of big data is how to make sense of it.
• Moreover, sheer volume is not the only problem: big data is also diverse, unstructured and fast-changing. Consider audio and video data, social media posts, 3D data or geospatial data.
This kind of data is not easily categorized or organized.
• To meet this challenge, a range of automatic methods for extracting information has been developed.
Types of Algorithms In Data Mining
Here, 13 Data Mining Algorithms are discussed-
a. Statistical Procedure Based Approach
Two main phases of work on classification can be identified within the statistical community.

The second, “modern” phase concentrates on more flexible classes of models, many of which attempt to provide an estimate of the joint distribution of the features within each class, which can in turn provide a classification rule.

Generally, statistical procedures are characterized by having a precise underlying probability model, which provides a probability of being in each class rather than just a classification. It is also usually assumed that the techniques will be used by statisticians, hence some human involvement is assumed with regard to variable selection, transformation and the overall structuring of the problem.
b. Machine Learning-Based Approach
This approach generally covers automatic computing procedures based on logical or binary operations that learn a task from a series of examples.

Here we focus on decision-tree approaches, in which classification results from a sequence of logical steps. These are capable of representing even the most complex problems. Other techniques, such as genetic algorithms and inductive logic programming (ILP), are currently under active development, and in principle would allow us to deal with more general types of data, including cases in which the number and type of attributes may vary.

This approach aims to generate classifying expressions simple enough to be understood by a human, and they must mimic human reasoning sufficiently to provide insight into the decision process. Like statistical approaches, background knowledge may be used in development, but operation is assumed to proceed without human intervention.
c. Neural Network
The field of neural networks has arisen from diverse sources, ranging from understanding and emulating the human brain to the broader issue of copying human abilities such as speech. Neural networks are used in various fields, such as banking, or in classification programs to categorize data as intrusive or normal.

Generally, neural networks consist of layers of interconnected nodes, each node producing a non-linear function of its input. Input to a node may come from other nodes or directly from the input data, and some nodes are identified with the output of the network.

On this basis, there are different applications for neural networks that involve recognizing patterns and making simple decisions about them.

In airplanes, a neural network can serve as a basic autopilot, with input units reading signals from the various instruments and output units modifying the plane’s controls appropriately to keep it safely on course. Inside a factory, a neural network can be used for quality control.
d. Classification Algorithms in Data Mining
Classification is used to analyze a given data set: it takes each instance of the set and assigns it to a particular class such that the classification error is least. It is used to extract models that define important data classes within the given data set. Classification is a two-step process.

During the first step, the model is created by applying a classification algorithm to a training data set.

In the second step, the extracted model is tested against a predefined test data set to measure the trained model’s performance and accuracy. So classification is the process of assigning a class label to data whose class label is unknown.
e. ID3 Algorithm
This algorithm starts with the original set S as the root node. On every iteration, it steps through every unused attribute of the set and computes the entropy of that attribute, then chooses the attribute with the smallest entropy value.

The set S is then split by the selected attribute to produce subsets of the data.

The algorithm proceeds to recurse on each subset, considering only attributes never selected before. Recursion on a subset may halt in one of these cases:

• Every element in the subset belongs to the same class (+ or -); then the node is turned into a leaf and labeled with the class of the examples.
• There are no more attributes to select, but the examples still do not belong to the same class; then the node is turned into a leaf and labeled with the most common class of the examples in that subset.
• There are no examples in the subset. This happens whenever no example in the parent set matches a specific value of the selected attribute. For example, if there was no example with marks >= 100, then a leaf is created and labeled with the most common class of the examples in the parent set.
The working steps of the algorithm are as follows:
• Calculate the entropy for each attribute using the data set S.
• Split the set S into subsets using the attribute for which entropy is minimum.
• Construct a decision tree node containing that attribute in a dataset.
• Recurse on each member of subsets using remaining attributes.
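The entropy and information-gain computations at the heart of these steps can be sketched in Python. This is an illustrative sketch, not any official ID3 implementation; the helper names (`entropy`, `information_gain`) are our own.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(rows, labels, attribute_index):
    """Entropy reduction obtained by splitting the rows on one attribute.
    ID3 selects the attribute that maximizes this quantity."""
    # Group the label lists by each distinct value of the attribute.
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attribute_index], []).append(label)
    weighted = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
    return entropy(labels) - weighted
```

A pure two-class split (all `+` in one subset, all `-` in the other) yields the maximum gain of 1 bit, while a split that leaves each subset half-and-half yields a gain of 0.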
f. C4.5 Algorithm
C4.5 is one of the most important data mining algorithms. It produces a decision tree and is an extension of the earlier ID3 algorithm. It improves on ID3 by managing both continuous and discrete attributes as well as missing values. The decision trees created by C4.5 are used for classification, and C4.5 is often referred to as a statistical classifier.

C4.5 builds decision trees from a set of training data in the same way as ID3. As a supervised learning algorithm, it requires a set of training examples, each of which can be seen as a pair: an input object and a desired output value (class).

The algorithm analyzes the training set and builds a classifier that must be able to accurately classify both training and test cases. A test example is an input object, and the algorithm must predict an output value.

Consider the sample training data set S = S1, S2, ..., Sn, which is already classified. Each sample Si consists of a feature vector (x1,i, x2,i, ..., xn,i), where xj represents the attributes or features of the sample, together with the class in which Si falls. At each node of the tree, C4.5 selects the attribute of the data that most efficiently splits its set of samples into subsets enriched in one class or the other.

The splitting criterion is the normalized information gain, a non-symmetric measure of difference. The attribute with the highest normalized information gain is chosen to make the decision. The general working steps of the algorithm are as follows:

• If all the samples in the list belong to the same class, it simply creates a leaf node for the decision tree, selecting that class.
• If no feature provides any information gain, C4.5 creates a decision node higher up the tree using the expected value of the class.
• If an instance of a previously-unseen class is encountered, C4.5 again creates a decision node higher up the tree using the expected value.
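The “normalized information gain” is usually called the gain ratio: the plain information gain divided by the entropy of the split itself, which penalizes attributes with many distinct values. A minimal sketch of that normalization step (the function names are ours, not from any C4.5 implementation):

```python
import math
from collections import Counter

def split_info(rows, attribute_index):
    """Entropy of the partition induced by the attribute's values.
    C4.5 uses this to penalize attributes with many distinct values."""
    total = len(rows)
    counts = Counter(row[attribute_index] for row in rows)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def gain_ratio(info_gain, split_information):
    """C4.5's splitting criterion: information gain normalized by split info."""
    return info_gain / split_information if split_information > 0 else 0.0
```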
g. K Nearest Neighbors Algorithm
The nearest neighbor rule determines the classification of an unknown data point on the basis of its closest neighbor, whose class is already known.

T. M. Cover and P. E. Hart proposed the k-nearest neighbor (KNN) rule, in which the nearest neighbors are computed on the basis of a value k that specifies how many nearest neighbors are to be considered. Because it uses more than one closest neighbor to determine the class to which a given data point belongs, it is called KNN. The data samples need to be in memory at runtime, hence it is referred to as a memory-based technique.

T. Bailey and A. K. Jain enhanced KNN with a weighted variant: the training points are assigned weights according to their distances from the sample data point. At the same time, computational complexity and memory requirements remain the primary concerns.

To overcome the memory limitation, the size of the data set is reduced. For this, repeated patterns that contribute no additional information are eliminated from the training data set. To reduce it further, data points that do not influence the result are also eliminated.

The NN training data set can be organized using various structures to improve on the memory limits of KNN; the algorithm can be implemented using a ball tree, a k-d tree or an orthogonal search tree. The tree-structured training data is further divided into nodes, and techniques such as NFL and tunable metrics divide the training data set according to planes. Using these techniques we can improve the speed of the basic KNN algorithm.

Consider that an object is sampled with a set of different attributes, and assume its group can be determined from its attributes; different algorithms can then be used to automate the classification process. In pseudocode, the k-nearest neighbor algorithm can be expressed as:
K ← number of nearest neighbors
For each object X in the test set do
    calculate the distance D(X, Y) between X and every object Y in the training set
    neighborhood ← the K neighbors in the training set closest to X
    X.class ← SelectClass(neighborhood)
End for
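The pseudocode translates almost directly into Python. Here is a minimal sketch using squared Euclidean distance and majority voting; the function name and data layout are our own choices:

```python
from collections import Counter

def knn_classify(training, point, k):
    """Classify `point` by majority vote among its k nearest training examples.
    `training` is a list of (feature_vector, class_label) pairs."""
    def sq_dist(a, b):
        # Squared Euclidean distance (monotone in distance, so ranking is same).
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(training, key=lambda item: sq_dist(item[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```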
h. Naïve Bayes Algorithm
The Naive Bayes classifier technique is based on the Bayesian theorem and is particularly suited to cases where the dimensionality of the input is high.

The Bayesian classifier calculates the most probable output based on the input. It is also possible to add new raw data at runtime and obtain a better probabilistic classifier.

This classifier assumes that the presence of a particular feature of a class is unrelated to the presence of any other feature, given the class variable.

For example, a fruit may be considered an apple if it is red and round. Even if these features depend on each other, a naive Bayes classifier considers each of these properties to contribute independently to the probability that the fruit is an apple.
The algorithm works as follows.

Bayes’ theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x) and P(x|c). The naive Bayes classifier assumes that the effect of the value of a predictor (x) on a given class (c) is independent of the values of the other predictors.

P(c|x) is the posterior probability of the class (target) given the predictor (attribute).

P(c) is the prior probability of the class.

P(x|c) is the likelihood, which is the probability of the predictor given the class.

P(x) is the prior probability of the predictor.
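Putting these four quantities together, the classifier scores each class as P(c) multiplied by the product of the per-feature likelihoods P(x_i|c), then normalizes. A small sketch with made-up fruit probabilities (the numbers below are illustrative, not drawn from any data set):

```python
def naive_bayes_posteriors(priors, likelihoods, features):
    """Posterior P(c|x) per class, assuming feature independence given c.
    `likelihoods[c]` is a list of {value: P(x_i | c)} dicts, one per feature."""
    scores = {}
    for c, prior in priors.items():
        score = prior
        for dist, value in zip(likelihoods[c], features):
            score *= dist.get(value, 0.0)  # unseen value -> zero likelihood
        scores[c] = score
    total = sum(scores.values())
    # Normalize so the scores sum to 1 and are true posteriors.
    return {c: s / total for c, s in scores.items()}
```

With priors of 0.5 each and a red, round observation, the apple class wins easily because both of its per-feature likelihoods are high.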
i. SVM Algorithm
SVMs have attracted a great deal of attention in the last decade and have been applied to various application domains. SVMs are used for learning classification, regression or ranking functions.

SVM is based on statistical learning theory and the structural risk minimization principle. Its aim is to determine the location of the decision boundary, also known as the hyperplane, that produces the optimal separation of classes, thereby creating the largest possible distance between the separating hyperplane and the instances on either side of it. This has been proven to reduce an upper bound on the expected generalization error.

The efficiency of SVM-based classification does not depend on the dimension of the classified entities. Although SVM is among the most robust and accurate classification techniques, there are several problems.

The data analysis in SVM is based on convex quadratic programming, which is expensive: solving quadratic programming problems requires large matrix operations as well as time-consuming numerical computations.

Training time for SVM scales with the number of examples, so researchers continually strive for more efficient training algorithms, resulting in several variant algorithms.

SVM can also be extended to learn non-linear decision functions by first projecting the input data onto a high-dimensional feature space using kernel functions and formulating a linear classification problem in that space. The resulting feature space can be much larger than the size of the dataset and may be impossible to store on ordinary computers.

Investigation of these issues has led to several decomposition-based algorithms. The basic idea of a decomposition method is to split the variables into two parts: a set of free variables, called the working set, which is updated in each iteration, and a set of fixed variables, which stay fixed during that iteration. This procedure is repeated until the termination conditions are met.

SVM was developed for binary classification, and it is not simple to extend it to the multi-class classification problem. The basic idea of applying multi-classification with SVM is to decompose the multi-class problem into several two-class problems that can each be addressed by an SVM.
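The kernel idea can be illustrated with the form of the SVM decision function, sign(Σ_i α_i y_i K(s_i, x) + b). The sketch below assumes the support vectors, multipliers α_i and bias have already been found by a QP solver; the function names and toy values are ours:

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: an implicit mapping to a high-dimensional space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def svm_decision(x, support_vectors, alphas, labels, bias, kernel):
    """Sign of sum_i alpha_i * y_i * K(sv_i, x) + b decides the class."""
    score = sum(a * y * kernel(sv, x)
                for sv, a, y in zip(support_vectors, alphas, labels)) + bias
    return 1 if score >= 0 else -1
```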
J. ANN Algorithm
This is a type of computing architecture inspired by biological neural networks. ANNs are used to approximate functions that can depend on a large number of inputs and are generally unknown.

They are presented as systems of interconnected “neurons” that can compute values from inputs. They are capable of machine learning as well as pattern recognition, thanks to their adaptive nature.

An artificial neural network operates by creating connections between many different processing elements, each corresponding to a single neuron in a biological brain. These neurons may be physically constructed or simulated by a digital computer system.

Each neuron takes many input signals and, based on an internal weighting, produces a single output signal that is sent as input to another neuron.

The neurons are interconnected and organized into different layers. The input layer receives the input and the output layer produces the final output.

In general, one or more hidden layers are sandwiched between the two. This structure makes it difficult to forecast or know the exact flow of data.

Artificial neural networks start out with randomized weights for all their neurons. This means that they need to be trained to solve the particular problem for which they are proposed. A back-propagation ANN is trained by humans to perform specific tasks.

During the training period, we can test whether the ANN’s output is correct by observing a pattern. If it is correct, the neural weightings that produce that output are reinforced; if the output is incorrect, the weightings responsible are diminished.

Implemented on a single computer, a neural network is slower than more traditional solutions, but the ANN’s parallel nature allows it to be built using many processors, giving a great speed advantage at very little development cost.

The parallel architecture allows ANNs to process large amounts of data in very little time and to deal with large continuous streams of information, such as speech recognition or machine sensor data, where ANNs can operate faster than other algorithms.

An artificial neural network is useful in a variety of real-world applications, such as visual pattern recognition and speech recognition, that deal with complex, often incomplete data. Recent text-to-speech programs have also utilized ANNs, and many handwriting analysis programs currently use ANNs.
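The behaviour of a single neuron described above, a weighted sum of inputs passed through a non-linear function, can be sketched as follows (the function name and the choice of a sigmoid activation are ours):

```python
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: internal weighting of the input signals,
    squashed by a sigmoid non-linearity into a single output signal."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))
```

Layering many such neurons, with each layer's outputs feeding the next layer's inputs, gives the input/hidden/output structure described above; back-propagation training adjusts the weights and biases.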
K. J48 Decision Trees
A decision tree is a predictive machine-learning model that decides the target value of a new sample based on various attribute values of the available data. The internal nodes of a decision tree denote the different attributes, the branches between the nodes tell us the possible values that these attributes can take in the observed samples, and the terminal nodes tell us the final value of the dependent variable.

The attribute to be predicted is known as the dependent variable, since its value depends upon the values of all the other attributes. The other attributes, which help in predicting the value of the dependent variable, are the independent variables in the dataset.

The J48 decision tree classifier follows a simple algorithm. To classify a new item, it first needs to create a decision tree based on the attribute values of the available training data.

Whenever it encounters a set of items, it identifies the attribute that discriminates the various instances most clearly. The feature that tells us the most about the data instances, so that we can classify them best, is said to have the highest information gain.

Now, among the possible values of this feature, if there is any value for which there is no ambiguity, that is, for which all the data instances falling within its category have the same value for the target variable, then we terminate that branch and assign to it the target value we have obtained.

For the other cases, we look for another attribute that gives us the highest information gain, and we continue in this way until we get a clear decision about which combination of attributes gives a particular target value.

In the event that we run out of attributes, or cannot get an unambiguous result from the available information, we assign this branch the target value that the majority of the items under the branch own.

Now that we have the decision tree, we follow the order of attribute selection as obtained for the tree. By checking all the respective attributes and their values against those seen in the decision tree model, we can assign or predict the target value of a new instance.
l. Support Vector Machines
Support Vector Machines are supervised learning methods used for classification as well as regression. Their advantage is that they can make use of certain kernels to transform the problem such that linear classification techniques can be applied to non-linear data.

Applying the kernel equations arranges the data instances within the multi-dimensional space in such a way that there is a hyperplane separating data instances of one kind from those of another. The kernel equations may be any functions that transform the non-separable data in one domain into another domain in which the instances become separable. Kernel equations may be linear, quadratic, Gaussian, or anything else that achieves this particular purpose.

Once we manage to divide the data into two distinct categories, our aim is to find the best hyperplane to separate the two types of instances. This hyperplane is important because it decides the target variable value for future predictions. We should choose the hyperplane that maximizes the margin between the support vectors on either side of the plane. Support vectors are the instances that lie closest to the separating plane.

In Support Vector Machines the data to be separated needs to be binary. Even if the data is not binary, these machines handle it as though it is, completing the analysis through a series of binary assessments on the data.
M. SenseClusters (an adaptation of the K-means clustering algorithm)
We have made use of SenseClusters to classify the email messages. SenseClusters is a freely available package of Perl programs developed at the University of Minnesota Duluth, which we use for automatic text and document classification. The advantage of SenseClusters is that it does not need any training data; it makes use of unsupervised learning methods to classify the available data.
In this section we will look at the K-means clustering algorithm used in SenseClusters. Clustering is the process in which we divide the available data instances into a given number of sub-groups. These sub-groups are clusters, and hence the name “clustering”.

Simply put, the K-means algorithm outlines a method to cluster a particular set of instances into K different clusters, where K is a positive integer. Note that the K-means clustering algorithm requires the number of clusters from the user; it cannot identify the number of clusters by itself. SenseClusters, however, has the facility of identifying the number of clusters that the data may comprise.
The K-means clustering algorithm starts by placing K centroids. Each of the available data instances is then assigned to a particular centroid, depending on a metric such as Euclidean distance, Manhattan distance, Minkowski distance, etc.

The position of the centroid is recalculated every time an instance is added to the cluster, and this continues until all the instances are grouped into the final required clusters.

Since recalculating the cluster centroids may alter the cluster membership, the cluster memberships are also verified once the position of a centroid changes. This process continues until there is no further change in the cluster memberships and as little change in the positions of the centroids as possible.

The initial position of the centroids is thus very important, since it affects all the subsequent steps in the K-means clustering algorithm. Hence it is always advisable to keep the cluster centers as far away from each other as possible.

If there are too many clusters, then the clusters resemble each other, lie in the vicinity of each other, and need to be clubbed together. On the other hand, if there are too few clusters, then the clusters are too big and may contain two or more sub-groups of different data instances that ought to be divided.

The K-means clustering algorithm is thus a simple-to-understand method by which we can divide the available data into sub-categories.
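The procedure just described (assign each point to its nearest centroid, recompute centroids, repeat until the memberships stabilize) is Lloyd's algorithm. A minimal sketch for points given as tuples; the function name and the random-sample initialization are our own choices:

```python
import random

def k_means(points, k, iterations=100, seed=0):
    """Lloyd's algorithm: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)        # initial centroids: k data points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        new_centroids = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:       # memberships stable: converged
            break
        centroids = new_centroids
    return centroids, clusters
```

For two well-separated groups of points, the algorithm converges in a couple of iterations to centroids at the group means, regardless of which points are sampled as the starting centroids.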
So, this was all about data mining algorithms; we hope you liked our explanation. We have studied each type of data mining algorithm in turn. If you have any query, feel free to ask in the comment section.
|
Time and work problems for SSC CGL Tier 2
Submitted by Atanu Chaudhuri on Sat, 28/09/2019 - 17:25
Time and work problems for SSC CGL Tier 2 set 26 Solutions
Learn to solve 10 time and work problems for SSC CGL Tier 2 Set 26 in 15 minutes using basic and advanced concepts of solving time and work problems.
Before going through these solutions you should take the test at,
SSC CGL Tier II level Question Set 26 on Time and Work problems 2.
Solution to 10 time and work problems for SSC CGL Tier 2 set 26 - time to solve was 15 mins
Problem 1.
In 16 days A can do 50% of a job. B can do one-fourth of the job in 24 days. In how many days can they do three-fourths of the job while working together?
a. 21
b. 9
c. 18
d. 24
Solution 1: Problem analysis and conceptual solution by work portion done in a day and working together concepts
As number of days to complete a portion of a job is directly proportional to the portion of work done by a worker, by the first statement, A completes the whole job in,
$16\times{2}=32$ days, as $50\text{%}=\frac{1}{2}$
By the same concept, as B completes $\frac{1}{4}$th of the job in 24 days, the whole job is completed by B in,
$24\times{4}=96$ days.
This is the use of the first concept of direct proportionality of work done to number of days the worker worked.
Solution 1: Problem solving second stage: Working together concept of summing up work portion done in a day
When A and B work together, total work portion done by them in a day is given by summing up the work portion done by each of them in a day. Inverting the total work portion done in a day, you will
get the number of days required to complete the work by them while working together.
To get the work portion done in a day for a worker, just invert the number of days required by the worker to complete the work.
Using these concepts, the work portion done in a day by A and B working together is,

$\displaystyle\frac{1}{32}+\displaystyle\frac{1}{96}=\displaystyle\frac{3+1}{96}=\displaystyle\frac{1}{24}$.

This means, the whole work will be completed by the two working together in 24 days, and $\displaystyle\frac{3}{4}$th of the job will be completed in,
$24\times{\displaystyle\frac{3}{4}}=18$ days.
Answer: c: 18 days.
Key concepts used: Work portion done to number of days of work direct proportionality -- Work portion done in a day as inverse of number of days to complete the work -- Working together concept to
get portion of work done in a day by summing up portions of work done by each worker in a day -- Number of days to complete the work as inverse of work portion done in a day.
If you are used to these common concepts of time and work, you can easily solve the problem in mind by being a little careful.
Problem 2.
If each of them had worked alone, B would have taken 10 hours more than what A would have taken to complete a job. Working together, they can complete the job in 12 hours. How many hours B would take
to do 50% of the job?
a. 30
b. 20
c. 10
d. 15
Solution 2: Problem analysis and execution: By Mathematical reasoning and Working together concept
We have to introduce one variable for the work completion time of either A or B, not two; this general principle of mathematical problem solving is supported by common sense. We assume $b$ hours as the time taken by B to complete the work, because the target duration involves B's completion time.
So completion time for A is,
$a=b-10$, which is in terms of $b$.
We would be dealing with a single variable.
As working together A and B complete the job in 12 hours, applying the working together concept of total work portion done by the two in an hour as the sum of work portions done by each individually in an hour,

$\displaystyle\frac{1}{b}+\displaystyle\frac{1}{b-10}=\displaystyle\frac{1}{12}$.

Cross-multiplying and rearranging terms we get the quadratic equation in $b$ as,

$b^2-34b+120=0$.

4 times 30 is 120 as well as 4 plus 30 is 34. So the factors of the quadratic equation are,

$(b-4)(b-30)=0$.

$b=4$ is not possible as $a=b-10$ will then be negative.
So, $b=30$ hours.
To complete 50% of the job then, B will take half of 30, that is, 15 hours.
Answer: d: 15.
Key concepts used: Work portion done in a day as inverse of number of days to complete the work -- Working together concept as work portion done in a day by two workers by summing up their individual
work portion done in a day-- Formation and factorization of quadratic equation.
In this form of time and work problems, it is hard to avoid formation and factorization of a quadratic equation. But usually this is easy.
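As a quick numeric check of this solution (our own verification sketch, not part of the original problem): the condition $\displaystyle\frac{1}{b}+\displaystyle\frac{1}{b-10}=\displaystyle\frac{1}{12}$ cross-multiplies to $b^2-34b+120=0$, which can be solved directly.

```python
import math

def b_completion_time(together, gap):
    """Solve 1/b + 1/(b - gap) = 1/together for b.  Cross-multiplying gives
    b^2 - (2*together + gap)*b + together*gap = 0; keep the root with
    b - gap > 0 so A's completion time stays positive."""
    p = 2 * together + gap
    disc = math.sqrt(p * p - 4 * together * gap)
    roots = [(p - disc) / 2, (p + disc) / 2]
    return next(b for b in roots if b - gap > 0)
```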
Problem 3.
Two workers P and Q are engaged to do a piece of work. Working alone P would take 8 hours more to complete the work than when working together. Working alone Q would take $4\frac{1}{2}$ hours more
than when they work together. The time required to finish the work together is,
a. 5 hours
b. 6 hours
c. 4 hours
d. 8 hours
Solution 3: Problem analysis and solution by work per unit time and working together concept
Though the problem is quite interestingly framed, it is easy to set up the equation for per hour work portion done when P and Q work together as,
$\displaystyle\frac{1}{T}=\displaystyle\frac{1}{T+8}+\displaystyle\frac{1}{T+4.5}$, where $T$ is the working together work completion time you have to find.
The two denominators represent the two work completion times in terms of $T$ for P and Q.
Cross-multiply and simplify to form the desired quadratic equation as,

$T^2+12.5T+36=2T^2+12.5T$,

Or, $T^2=36$,

Or, $T=6$
An unexpected quick result if you have followed the right path.
Cancellation of $12.5T$ on both sides of the equation makes things simpler.
Answer: b: 6 hours.
Key concepts used: Work portion done per unit time -- Working together concept.
Problem 4.
A contractor employed 200 men to complete a certain work in 150 days. If only one-fourth of the work gets completed in 50 days, then how many more men the contractor must employ to complete the whole
work in time?
a. 100
b. 300
c. 200
d. 600
Solution 4 : Problem analysis and execution: Mandays concept
200 men do one-fourth of the work in 50 days. Assuming that the work rate (work portion done by a man in a day) remains the same for all men, a total of $4\times{50}=200$ days would have been required for 200 men to finish the job. Obviously the contractor misjudged the work rate capacity of the men. That's why, to meet the target of 150 days, he would need to employ more men.
The reason for the need of more men being clear, let's get on with our main task of calculating the number of extra men required.
In 50 days, $\displaystyle\frac{1}{4}$th of work is done by 200 men,
So the total work amount in terms of mandays is,
$50\times{200}\times{4}=40000$ mandays.
To complete the remaining three-fourth part of this work, that is, 30000 mandays of work, in the remaining 100 days, the number of men required will simply be,

$\displaystyle\frac{30000}{100}=300$.

The contractor then has to employ $300-200=100$ more men to finish the job in 150 days.
Answer: a: 100.
Key concepts used: Work amount in terms of mandays concept -- Work rate assessment -- Mandays technique to find the number of extra men required.
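The mandays arithmetic above can be verified with a few lines (our own sanity check):

```python
# Observed: 200 men finish 1/4 of the work in 50 days.
men, days_worked, fraction_done = 200, 50, 1 / 4
total_mandays = men * days_worked / fraction_done      # whole job in mandays
remaining_mandays = total_mandays * (1 - fraction_done)
remaining_days = 150 - days_worked                     # days left to deadline
men_needed = remaining_mandays / remaining_days        # men to finish on time
extra_men = men_needed - men
```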
Problem 5.
A, B and C are engaged to do a work for Rs.5290. A and B together are supposed to do $\displaystyle\frac{19}{23}$rd of the work and B and C together $\displaystyle\frac{8}{23}$rd of the work. Then A
should be paid,
a. Rs.4250
b. Rs.3450
c. Rs.2290
d. Rs.1950
Solution 5: Problem analysis and execution: Earning share concept, Worker compensation proportional to work portion done
As B and C together complete $\displaystyle\frac{8}{23}$rd of the work, the rest of the work must be completed by A alone.
So A completes,
$1-\displaystyle\frac{8}{23}=\displaystyle\frac{15}{23}$rd of the work.
Total amount of Rs.5290 is to be paid proportionate to the work amount done. So A will be paid,

Rs.$\displaystyle\frac{15}{23}\times{5290}=$ Rs.$3450$.
The first statement, of the work portion done by A and B together, is there to create a diversion and is not required for getting the answer. But we can satisfy our curiosity by calculating that, with A doing $\displaystyle\frac{15}{23}$rd portion of the work, B would have done,

$\displaystyle\frac{19}{23}-\displaystyle\frac{15}{23}=\displaystyle\frac{4}{23}$rd of the work, and so C's work portion will be,

$\displaystyle\frac{8}{23}-\displaystyle\frac{4}{23}=\displaystyle\frac{4}{23}$rd portion of the whole work.

This is why the total work given by the two statements adds up to $\displaystyle\frac{27}{23}$, the extra $\displaystyle\frac{4}{23}$ coming from B contributing twice.
Answer: b: Rs.3450.
Key concepts used: Earning share concept -- Worker compensation proportional to work portion done.
Problem 6.
Ruchi does $\displaystyle\frac{1}{4}$th of a job in 6 days and Bivas completes the rest of the same job in 12 days. Then working together they complete the job in,
a. $9\frac{3}{5}$ days
b. $9$ days
c. $7\frac{1}{3}$ days
d. $8\frac{1}{8}$ days.
Solution 6 : Problem analysis and solution: Working together concept
The first step is to accurately evaluate the portion of the job completed by each separately in 1 day.
As Ruchi does $\displaystyle\frac{1}{4}$th of the job in 6 days, her work rate in terms of work portion done in a day is,

$\displaystyle\frac{1}{4}\times{\displaystyle\frac{1}{6}}=\displaystyle\frac{1}{24}$.

Bivas completes the rest of the job, that is, $\displaystyle\frac{3}{4}$th of the job in 12 days. So the portion of job he completes in a day is,

$\displaystyle\frac{3}{4}\times{\displaystyle\frac{1}{12}}=\displaystyle\frac{1}{16}$.

Together, the portion of job they complete in 1 day is then,

$\displaystyle\frac{1}{24}+\displaystyle\frac{1}{16}=\displaystyle\frac{2+3}{48}=\displaystyle\frac{5}{48}$.

And the number of days they take to complete the job is the inverse of this portion of total work done in a day by the two, which is,

$\displaystyle\frac{48}{5}=9\frac{3}{5}$ days.
Answer: a: $9\frac{3}{5}$ days.
Key concepts used: Work portion done directly proportional to number of days of work -- Work rate in terms of work portion done in a day is work portion done divided by number of days of work --
Working together per unit time concept -- Number of days of completion of work is inverse of work portion done in a day.
Easy to solve in mind with a little care.
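As a quick check of the arithmetic, the work rates above can be computed with exact fractions (a Python sketch of my own, not part of the original solution):

```python
from fractions import Fraction

# Ruchi does 1/4 of the job in 6 days -> her per-day work rate
ruchi = Fraction(1, 4) / 6            # 1/24 of the job per day
# Bivas does the remaining 3/4 of the job in 12 days
bivas = Fraction(3, 4) / 12           # 1/16 of the job per day

combined = ruchi + bivas              # portion of the job done together per day
days = 1 / combined                   # days to finish the whole job together

print(combined)  # 5/48
print(days)      # 48/5, i.e. 9 3/5 days
```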
Problem 7.
P and Q together can do a job in 6 days and Q and R finish the same job in $\displaystyle\frac{60}{7}$ days. Starting the work alone P worked for 3 days. Then Q and R continued for 6 days to complete the work. What is the difference in days in which R and P can complete the job, each working alone?
a. 15
b. 8
c. 12
d. 10
Solution 7: Problem analysis and solution: Work rate technique and Working together concept
Assume, $p$, $q$ and $r$ to be the portion of work done by P, Q and R respectively in 1 day, each working alone.
By the first statement then,
$6(p+q)=W$, where $W$ is the total work amount.
So, $(p+q)=\displaystyle\frac{1}{6}W$
By the second statement similarly,

$\displaystyle\frac{60}{7}(q+r)=W$,

Or, $(q+r)=\displaystyle\frac{7}{60}W$.
And by the third statement,

$3p+6(q+r)=W$,

Or, $3p=W\left(1-6\times\displaystyle\frac{7}{60}\right)=W\left(1-\displaystyle\frac{7}{10}\right)=\displaystyle\frac{3}{10}W$
So, $10p=W$.
It means P completes the work in 10 days working alone.
Subtracting $(q+r)$ from $(p+q)$, you get,

$p-r=\displaystyle\frac{1}{6}W-\displaystyle\frac{7}{60}W=\displaystyle\frac{3}{60}W=\displaystyle\frac{1}{20}W$,

Or, $20p-20r=W$,

Or, $20r=2W-W=W$, as $20p=2W$.

This means R will complete the work in 20 days working alone, and the desired difference in days is,

$20-10=10$ days.
Answer: d: 10.
Key concepts used: Work rate technique -- Working together concept -- Sequencing of events -- Algebraic simplification techniques.
The solution is speeded up because of bypassing the need of evaluating $q$.
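The same relations can be verified with exact fraction arithmetic (a Python sketch; variable names follow the solution above):

```python
from fractions import Fraction

W = Fraction(1)                # normalize the total work to 1
pq = W / 6                     # p + q: P and Q together finish in 6 days
qr = W * Fraction(7, 60)       # q + r: Q and R together finish in 60/7 days

# Third statement: 3 days of P alone plus 6 days of Q and R finish the work,
# so 3p + 6(q + r) = W
p = (W - 6 * qr) / 3           # 1/10 -> P alone takes 10 days
q = pq - p
r = qr - q                     # 1/20 -> R alone takes 20 days

print(1 / r - 1 / p)           # 10
```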
Problem 8.
A man is twice as fast as a woman who is twice as fast as a boy in doing a piece of work. If one of each of them works together and they finish the work in 7 days, in how many days would a boy finish the work working alone?
a. 7
b. 6
c. 49
d. 42
Solution 8: Problem analysis and solution: Work rate technique and Worker equivalence concept
Assume, $m$, $w$ and $b$ to be the portion of work done in a day by a man, a woman and a boy respectively when working alone. This is use of work rate technique. This approach reduces fraction
calculation and thus speeds up solution.
So by the given efficiency statements, as in a day a man does twice the work portion done by a woman, and a woman does twice the work portion done by a boy,

$m=2w=4b$.
Basically this means 1 man is equivalent to 4 boys and 1 woman is equivalent to 2 boys. This is Worker equivalence concept. Worker efficiency leads to worker equivalence.
So by the working together statement,
$7(m+w+b)=W$ where $W$ is the work amount.
Or, $7(4b+2b+b)=49b=W$,
This means, a boy working alone would complete the work in 49 days.
Answer: c: 49.
Key concepts used: Work rate technique -- Worker equivalence concept -- Working together concept -- Worker efficiency concept.
Problem 9.
While A can do a job working alone in 27 hours, B can do it in 54 hours also working alone. Find the share of C (in Rs.) if A, B and C get paid Rs.4320 for completing the job in 12 hours working together.
a. 1440
b. 960
c. 1280
d. 1920
Solution 9: Problem analysis and solution: Earning share proportional to work portion done and Working together concept
In 12 hours, work portion done by A and B is,

$\displaystyle\frac{12}{27}+\displaystyle\frac{12}{54}=\displaystyle\frac{4}{9}+\displaystyle\frac{2}{9}=\displaystyle\frac{2}{3}$.
So rest $\displaystyle\frac{1}{3}$rd portion of the work is completed by C.
As share of earning is proportional to work portion done, and total work is worth Rs.4320, the earning by C is one-third of Rs.4320,

$\displaystyle\frac{1}{3}\times 4320=\text{Rs.}1440$.
Answer: a: 1440.
Key concepts used: Earning share concept -- Earning to work done proportionality -- Working together concept -- Work portion left concept.
Problem 10.
While A and B together finish a work in 15 days, A and C take 2 more days than B and C working together to finish the same work. If A, B and C complete the work in 8 days, in how many days would C
complete it working alone?
a. $20$ days
b. $40$ days
c. $24$ days
d. $17\frac{1}{7}$ days
Solution 10: Problem analysis and solution: Strategic problem definition, Work rate technique and Working together concept
The strategy of problem definition is to form first the algebraic relations that contain the maximum amount of certain information. Out of the four given statements, the fourth carries the maximum amount of certain information, so we'll first form its corresponding equation as,

$8(a+b+c)=W$.
By work rate technique we have assumed variables $a$, $b$ and $c$ to be the work portion done per day by A, B and C respectively and $W$ as the total work amount.
Next we'll form the equation corresponding to the first statement as it involves no uncertainty,

$15(a+b)=W$.
It is easy to see that $c$ can be evaluated from these two equations by eliminating $(a+b)$.
From the first equation,

$(a+b+c)=\displaystyle\frac{W}{8}$, and from the second equation,

$(a+b)=\displaystyle\frac{W}{15}$.

Subtracting the second result from the first,

$c=\displaystyle\frac{W}{8}-\displaystyle\frac{W}{15}=\displaystyle\frac{15-8}{120}W=\displaystyle\frac{7}{120}W$.

Inverse of this work rate of C is the number of days to complete the work by C working alone. It is then,

$\displaystyle\frac{120}{7}=17\frac{1}{7}$ days.
Answer: d: $17\frac{1}{7}$ days.
Key concepts used: Strategy of problem definition -- Work rate technique -- Working together concept -- Solving in mind.
This is a good example of diversionary tactics in a question. The second and the third statements are not required at all in finding the answer.
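The two-equation elimination takes only a couple of lines with exact fractions (a Python check of my own):

```python
from fractions import Fraction

# a + b + c = W/8 (all three finish in 8 days), a + b = W/15 (A and B in 15 days)
c = Fraction(1, 8) - Fraction(1, 15)  # C's per-day work portion, with W = 1

days_c = 1 / c                         # days for C working alone
print(c)        # 7/120
print(days_c)   # 120/7, i.e. 17 1/7 days
```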
Task for you: What would be the number of days to complete the work for B working alone?
Note: Observe that most, if not all, of the problems can be solved quickly in mind if you use the right concepts and techniques. Problem analysis and clear problem definition play an important role
in such quick solutions.
Useful resources to refer to
Guidelines, Tutorials and Quick methods to solve Work Time problems
7 steps for sure success in SSC CGL Tier 1 and Tier 2 competitive tests
How to solve Arithmetic problems on Work-time, Work-wages and Pipes-cisterns
Basic concepts on Arithmetic problems on Speed-time-distance Train-running Boat-rivers
How to solve a hard CAT level Time and Work problem in a few confident steps 3
How to solve a hard CAT level Time and Work problem in a few confident steps 2
How to solve a hard CAT level Time and Work problem in few confident steps 1
How to solve Work-time problems in simpler steps type 1
How to solve Work-time problem in simpler steps type 2
How to solve a GATE level long Work Time problem analytically in a few steps 1
How to solve difficult Work time problems in simpler steps, type 3
SSC CGL Tier II level Work Time, Work wages and Pipes cisterns Question and solution sets
SSC CGL Tier II level Solution set 26 on Time-work Work-wages 2
SSC CGL Tier II level Question set 26 on Time-work Work-wages 2
SSC CGL Tier II level Solution Set 10 on Time-work Work-wages Pipes-cisterns 1
SSC CGL Tier II level Question Set 10 on Time-work Work-wages Pipes-cisterns 1
SSC CGL level Work time, Work wages and Pipes cisterns Question and solution sets
SSC CGL level Solution Set 72 on Work time problems 7
SSC CGL level Question Set 72 on Work time problems 7
SSC CGL level Solution Set 67 on Time-work Work-wages Pipes-cisterns 6
SSC CGL level Question Set 67 on Time-work Work-wages Pipes-cisterns 6
SSC CGL level Solution Set 66 on Time-Work Work-Wages Pipes-Cisterns 5
SSC CGL level Question Set 66 on Time-Work Work-Wages Pipes-Cisterns 5
SSC CGL level Solution Set 49 on Time and work in simpler steps 4
SSC CGL level Question Set 49 on Time and work in simpler steps 4
SSC CGL level Solution Set 48 on Time and work in simpler steps 3
SSC CGL level Question Set 48 on Time and work in simpler steps 3
SSC CGL level Solution Set 44 on Work-time Pipes-cisterns Speed-time-distance
SSC CGL level Question Set 44 on Work-time Pipes-cisterns Speed-time-distance
SSC CGL level Solution Set 32 on work-time, work-wage, pipes-cisterns
SSC CGL level Question Set 32 on work-time, work-wages, pipes-cisterns
SSC CHSL level Solved question sets on Work time
SSC CHSL Solved question set 2 Work time 2
SSC CHSL Solved question set 1 Work time 1
Bank clerk level Solved question sets on Work time
Bank clerk level solved question set 2 work time 2
Stationary Waves & Phase - A-level Physics
TLDR: Understanding stationary waves and phase in A-level Physics
📍 Article Source
Stationary Waves & Phase - A-level Physics: https://www.youtube.com/watch?v=TAGlpuMYdk4
Introduction to Waves
Waves can be represented as one complete wave or cycle, with the time period denoted by the letter capital T and measured in seconds. The frequency of a wave can be determined by taking the
reciprocal of the time period. A complete wave can also be represented as a circle, with a full circle being 2 pi radians. Different phases of a wave, such as completely in phase, half a wave out of
phase, and quarter wave out of phase, are discussed using radians and degrees. The phase difference between two points on a wave can be calculated using formulas involving time difference and
distance between particles. When two waves meet, they interfere and superimpose, resulting in a standing wave with peaks and troughs adding up. The formation of standing waves on a string, including
the fundamental mode and the second harmonic, is explained in relation to frequency and wavelength.
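The two phase-difference formulas mentioned above can be sketched as code (a minimal illustration of my own; the function names are not from the video):

```python
import math

def phase_from_time(delta_t, period):
    """Phase difference (radians) from a time difference between two waves."""
    return 2 * math.pi * delta_t / period

def phase_from_distance(delta_x, wavelength):
    """Phase difference (radians) between two points a distance apart on a wave."""
    return 2 * math.pi * delta_x / wavelength

# Two points half a wavelength apart are pi radians (180 degrees) out of phase
print(phase_from_distance(0.5, 1.0))  # ~3.14159
```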
Formation of Standing Waves
The standing wave formation on a string with fixed ends and the fundamental mode is explained. It is described that the fundamental mode has half a wavelength going from one end to the other, and the
length of the string is equal to lambda over two. The second harmonic, also known as the first overtone, is discussed, indicating that there is a node in the middle and two anti-nodes. The third
harmonic is also explained with one and a half waves on the length of the string. The generalization for a piece of string with two fixed ends is provided, which states that the length of the string
is equal to n lambda over two, where n can be any whole number. The standing wave formation in a wind instrument is then explored, discussing the closed and open ends and the formation of standing
waves in different scenarios. The fundamental mode frequency is calculated, emphasizing the proportionality of frequency to the square root of tension.
Timestamped Summary
A wave can be represented as a complete cycle with a time period
• The time period is the time taken for one complete cycle
• The frequency is the reciprocal of the time period
• A complete wave is equal to 2pi radians or a full circle
• A half wave is equal to pi radians or half a circle
• A quarter wave is equal to pi/2 radians or a quarter of a circle
• An eighth wave is equal to pi/4 radians or an eighth of a circle
Bits of a wave can be in phase or out of phase depending on their position on the wave.
• Points on the same peak or trough are completely in phase.
• Points on opposite peaks or troughs are half a wave or PI radians out of phase.
• Points a quarter of a wave or PI/2 radians apart are 90 degrees out of phase.
• Phase difference can be calculated using time difference or distance between particles.
When two waves meet, they interfere and superimpose to form a single wave with increased amplitude
• Calculating fraction of wave length and time determines the position of a wave
• Plucking a guitar string sends a wave down the string, which gets reflected at the end and creates interference when two waves meet
Standing wave is a stationary wave created by two identical waves traveling in opposite directions.
• The standing wave has anti-nodes where the amplitude is at a maximum and nodes where the amplitude is zero.
• The two waves cancel each other out at the nodes, resulting in destructive interference and no displacement.
A guitar string of length L has a fundamental mode or the first harmonic that is formed with half a wavelength.
• The standing wave is formed with two waves traveling in opposite directions.
• The length of the string is equal to lambda over two.
• The second harmonic is formed when there is a node in the middle, resulting in two loops.
Playing harmonics on guitar creates different harmonics with varying frequencies.
• The fundamental note is the loudest and other harmonics are created more faintly.
• Harmonics can be created by gently placing a finger on a string in the middle.
• The frequency and wavelength of the sound changes depending on the number of loops on the string.
• Destructive interference happens at nodes and constructive interference happens at antinodes.
Standing waves in wind instruments
• Fundamental standing wave in a wind instrument has a node at the closed end
• The length of a string for a standing wave equals n lambda over 2 where n is any integer
L equals lambda over 2 is the fundamental mode
• The open ends create an anti-node at each end
• The fundamental mode frequency can be calculated using the length, tension, and mass per unit length of the string
Related Questions
What is a time period in the context of waves?
In the context of waves, the time period refers to the time it takes for one complete wave cycle to be completed. It is represented by the letter T and is measured in seconds. The time period can be
found by taking the reciprocal of the frequency, i.e., 1 divided by the frequency.
How can a complete wave be represented using a circle?
A complete wave can be represented using a circle, where a full circle is equivalent to 2 pi radians. Therefore, a full wave is also considered to be 2 pi radians. For instance, a wave that only goes
halfway around the circle is equal to pi radians, and a quarter of a wave is equivalent to PI over two radians.
What does it mean for two points on a wave to be in phase or out of phase?
When two points on a wave are in phase, it means they are on the same part of the wave. If the points are not in phase, they are out of phase. For example, if two points are half a wave out of phase,
they are said to be pi radians or 180 degrees out of phase with each other.
How does interference occur when two waves meet?
When two waves meet, they interfere and superimpose. This results in the peaks and troughs of the waves adding up, leading to constructive interference where the amplitudes increase, and destructive
interference where the amplitudes cancel each other out, resulting in zero displacement at certain points.
What is a standing wave and how is it formed?
A standing wave is formed when two identical waves, with the same frequency and wavelength, traveling in opposite directions meet and interfere. This results in a stationary pattern of nodes (zero
amplitude) and anti-nodes (maximum amplitude), creating the appearance of a wave that is not moving.
How are standing waves and harmonics related in a string instrument?
In a string instrument, standing waves are formed as the result of interference between waves traveling in opposite directions on the string. The fundamental mode, first harmonic, and subsequent
harmonics are created based on the number of nodes and anti-nodes formed on the string, influencing the pitch and frequency of the produced sound.
How is the frequency of the fundamental mode of a standing wave on a string calculated?
The frequency of the fundamental mode of a standing wave on a string can be calculated using the formula: frequency = 1 / (2 * length of the string) * square root of (tension in the string divided by
mass per unit length). The tension in the string is equal to the weight hanging on the end of the string, and the mass per unit length can be measured in kilograms per meter.
"Statisticians can prove almost anything"
Sometimes, when you look for statistical significance, you'll find it even if the effect isn't real -- in other words, a false positive. With a 5% significance level, you'll find that one out of 20 tests of a nonexistent effect comes up positive just by chance.
However, experimenters don't do just one analysis one time. They'll try a bunch of different variables, and a bunch of different datasets. If they try enough things, they have a much better than 5%
chance of coming up with a positive. How much better? Well, there's no real way to tell, since the tests aren't independent (adding one dependent variable to a regression isn't really a whole new
regression). But, intuitively: if, by coincidence, your first experiment winds up at (say) p=0.15, it seems like it should be possible to get it down to 0.05 if you try a few things.
That's exactly what Joseph P. Simmons, Leif D. Nelson, and Uri Simonsohn did in a new academic paper (reported on in today's National Post). They wanted to prove the hypothesis that listening to
children's music makes you older. (Not makes you *feel* older, but actually makes your date of birth earlier.) Obviously, that hypothesis is false.
Still, the authors managed to find statistical significance. It turned out that subjects who were randomly selected to listen to "When I'm Sixty Four" had an average (adjusted) age of 20.1 years, but
those who listened to the children's song "Kalimba" had an adjusted age of 21.5 years. That was significant at p=.04.
How? Well, they gave the subjects three songs to listen to, but only put two in the regression. They asked the subjects 12 questions, but used only one in the regression. And, they kept testing
subjects 10 at a time until they got significance, then stopped.
In other words, they tried a large number of permutations, but only reported the one that led to statistical significance.
One thing I found interesting was that one variable -- father's age -- made the biggest difference, dropping the p-value from .33 to .04. That makes sense, because father's age is very much related
to subject's age. If your father is 40, you're unlikely to be 35. You could actually make a case that father's age *helps* the logic, not hurts it, even though it was arbitrarily selected because it
gave the desired result.
In this case, all the permutations meant that statistical significance was extremely likely. Suppose that, before any regressions, the two groups had about the same age. Then, you start adjusting for
things, one at a time. What you're looking for is a significant difference in that one respect. The chances of that are 5%. But, the things the researchers adjusted for are independent: how much they
would enjoy eating at a diner, their political orientation, which of four Canadian quarterbacks they believed won an award ... and so on. With ten independent thingies, the chance at least one would
be significant is about 0.4.
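That back-of-the-envelope figure is easy to check by simulation (a sketch of my own, not from the article, assuming the ten tests are independent and the effect is truly null):

```python
import random

def at_least_one_significant(n_tests=10, alpha=0.05, trials=100_000, seed=1):
    """Fraction of trials in which at least one of n_tests independent
    null-effect tests comes up significant at level alpha."""
    random.seed(seed)
    hits = 0
    for _ in range(trials):
        # Under the null hypothesis, p-values are uniform on [0, 1]
        if any(random.random() < alpha for _ in range(n_tests)):
            hits += 1
    return hits / trials

print(at_least_one_significant())  # close to 1 - 0.95**10, about 0.40
```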
Add to that the possibility of continuing the experiment until significance was found, and the possibility of combining factors, and you're well over 0.5.
Plus, if the researchers hadn't found significance, they would have kept adjusting the experiment until they did!
The authors make recommendations for how to avoid this problem. They say that researchers should be forced to decide in advance, when to stop collecting data. And they should be forced to list all
variables and all conditions, allowing the referees and the readers to see all the "failed" options.
These are all good things. Another thing that I might add is: you have to repeat the *exact same study* with a second dataset. If the result was the result of manipulation, you'll have only a 5%
chance of having it stand up to an exact replication. This might create more false negatives, but I think it'd be worth it.
One point I'd add is that this study reinforces my point, last post, that the interpretation of the study is just as important as the regression. For one thing, looking at all the "failed" iterations
of the study is necessary to decide how to describe the conclusions. But, mostly, this study shows an extreme example of how you have to use insight to figure out what's going on.
Even if this study wasn't manipulated, the conclusion "listening to children's music makes you older" would be ludicrous. But, the regression doesn't tell you that. Only an intelligent analysis of
the problem tells you that.
In this case, it's obvious, and you don't need much insight. In other cases, it's more subtle.
Finally, let me take exception to the headline of the National Post article: "Statisticians can prove almost anything, a new study finds." Boo!
First of all, the Post makes the same mistake I argued against last post: the statistics don't prove anything: the statistics *plus the argument* make the case. Saying "statistics prove a hypothesis"
is like saying "subtraction proves socialism works" or "the hammer built the birdhouse."
Second, a psychologist who uses statistics should not be described as a statistician, any more than an insurance salesman should be described as an actuary.
Third, any statistician would tell you, in seconds, that if you allow yourself to try multiple attempts, the .05 goes out the window. It's the sciences that have chosen to ignore that fact.
The true moral of the story, I'd argue, is that the traditional academic standard is wrong -- the standard that once you find statistical significance, you're entitled to conclude your effect is real.
Learning Green's Functions: Best Resources & Books
• Thread starter auditor
• Start date
In summary, Super Nadine suggests that people should learn Green's functions by studying an electrodynamics text or by looking at examples in a book. She also suggests that someone learn about
Green's functions through a lecture notes.
Do anyone have a recommendation for a great resource to learn Green's functions from? Preferably a book with a generous amount of examples. I'm thinking something like a solid introduction to applied
partial differential equations or the like. Ideally, there would be a lot of illustrations as well.
I know people speak warmly about "Partial Differential Equations for Scientists and Engineers" by Stanley J. Farlow (Dover), but it seems a little light on the Green's function side. Would it be
better to learn this from an electrodynamics text? My ultimate interest in Green's functions is 1) to get a better understanding of mathematical modeling in general and 2) to better understand its
application in QFT (in terms of propagators).
Anyone have some sound advice on where to look? Thanks!
Thanks jsea-7. I was aware of the book by Roach, but fear it is a little verbose for my needs. Have you had a chance to study it closely?
Thanks Peeter! I've been thinking of getting my hands on that one for a while now - perhaps I finally will. :) However, a quick look at the index doesn't seem too promising regarding Green's
functions, but I might need better glasses... Do you have any particular passage in mind?
chapter 7: Green's Functions.
Yup, books.google.com seems to be the right tool to evaluate Dover classics before purchase. :)
It depends on what aspect you wish to pursue? I learned it from a Vector-Space/Field perspective. The gist being, G(x,x') is a weighted sum of basis vectors. The calculus approach seems a bit arbitrary in comparison.
auditor said:
... My ultimate interest in Green's functions is
1) to get a better understanding of mathematical modeling in general and
2) to better understand its application in QFT (in terms of propagators).
I have an introduction to Green's functions in Russian with very simple examples. I can translate some passages if you like. Maybe it is worth to add this chapter to the PF Library.
Bob_for_short said:
I have an introduction to Green's functions in Russian with very simple examples. I can translate some passages if you like. Maybe it is worth to add this chapter to the PF Library.
That would be great! As far as I understand, there are still some treasures unknown to the west in the scientific literature of the old Soviet union. I think it would be a great asset to the PF Library.
@Super nade: My familiarity with Green's functions is almost non-existing, so I'd like to learn both its computational side, as well as building up some mathematical intuition. From what sources did
you learn the vector-space approach?
I was introduced to Green's functions when I took a class in Electrodynamics (Jackson) and it seemed very arbitrary back then.
It was re-introduced to me in the Group-Theory class I'm taking. I can safely say that I like the approach better. This is probably the best class I have taken in my life. We started off by saying
"something exists" and "we can count" and proceeded to build up everything else from there.
I'd be happy to share my lecture notes (scanned pages/photocopies by post) with everybody here with the proviso that I get some help in typing it out electronically. I'd like an e-copy but my typing
skills are rudimentary at best. :)
1. Logic/What is Physics?
2. Basic Set theory
3. Groups
4. Fields
5. Vector Spaces
6. Operators as Matrices
7. Tensors
8. Orthonormal Functions and Gram Schmidt Orthogonalization
9. Legendre Polynomials
10. 1/r potential expansion
11. Spherical Harmonics
12. Green Functions
13. Brief intro to Complex Analysis.
There is a bit more that I missed out, but boy, am I glad I took this class or what!
@Super nade:
that seems like a really cool class. I'd be happy to type out a couple of pages. Have you checked out LyX? It's
convenient whenever you need to typeset anything with a certain amount of mathematics in it. Basically, it's a WYSIWYG-editor, which generates LaTeX-code. Make sure to check it out at
At the moment, everything is pretty crazy at work, so I can't promise much before xmas. Send me a private message, and I'll try to type out some. :)
I would be willing to type up a few chapters(also with LyX) if you need. I really like the Syllabus.
Does anyone know how to derive the free Phonon Green function for monolayer Graphene or a book where it is derived in details...
Thank you
How does one obtain the propagator for a scalar field whose mass is not a constant,for example space-dependent? Could the Green's function be an option?Any refs.?
FAQ: Learning Green's Functions: Best Resources & Books
What is the concept of Green's functions?
Green's functions are mathematical tools used in the field of partial differential equations to solve boundary value problems. They represent the response of a linear system to a delta function
input, and can be used to find the solution to a differential equation with a specific boundary condition.
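To make this concrete (an illustration of my own, not part of the thread): for a linear differential operator $L$ with homogeneous boundary conditions, the Green's function and the resulting solution satisfy

```latex
L\,G(x, x') = \delta(x - x'),
\qquad
u(x) = \int G(x, x')\, f(x')\,\mathrm{d}x' .
```

For example, for $-u''(x) = f(x)$ on $[0,1]$ with $u(0) = u(1) = 0$, the Green's function is $G(x,x') = x(1-x')$ for $x \le x'$ and $G(x,x') = x'(1-x)$ for $x \ge x'$: it is continuous at $x = x'$, and the unit jump in its slope there produces the delta function.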
Why is learning about Green's functions important?
Green's functions are essential in various fields of science and engineering, such as quantum mechanics, electromagnetics, and acoustics. They provide a powerful method for solving complex partial
differential equations, making them a valuable tool for researchers and engineers.
What are the best resources for learning about Green's functions?
Some of the best resources for learning about Green's functions include textbooks like "Green's Functions and Boundary Value Problems" by Ivar Stakgold and "The Mathematics of Diffusion" by J. Crank,
as well as online resources such as lecture notes and tutorials from universities and research institutions.
Are there any recommended books specifically for beginners in Green's functions?
Yes, some recommended books for beginners include "The Method of Green's Functions" by George Arfken and "Green's Functions with Applications" by Dean G. Duffy. These books provide a comprehensive
introduction to Green's functions and their applications in various fields.
How can I apply my knowledge of Green's functions in my research or work?
Green's functions have a wide range of applications in fields such as physics, engineering, and mathematics. They can be used to solve boundary value problems, model complex systems, and analyze the
behavior of physical systems. By understanding Green's functions, you can apply them in your research or work to solve problems and gain a deeper understanding of the underlying principles.
Example with a TCGA dataset
8 September 2014
This is an R Markdown document, which demonstrates the use of gapmap and dendsort packages to generate a gapped cluster heatmap visualization.
Let’s start by loading the data file from the package, and creating two color palettes.
## Warning: package 'RColorBrewer' was built under R version 4.1.2
Now you have the data matrix loaded, let’s calculate correlation-based distance and perform hierarchical clustering. In this example, we use the correlation-based dissimilarity (Pearson Correlation)
and the complete linkage for hierarchical clustering.
dataTable <- t(sample_tcga)
#calculate the correlation based distance
row_dist <- as.dist(1-cor(t(dataTable), method = "pearson"))
col_dist <- as.dist(1-cor(dataTable, method = "pearson"))
#hierarchical clustering
col_hc <- hclust(col_dist, method = "complete")
row_hc <- hclust(row_dist, method = "complete")
col_d <- as.dendrogram(col_hc)
row_d <- as.dendrogram(row_hc)
Now you are ready to plot the data. First, we will plot a cluster heatmap without any gaps.
gapmap(m = as.matrix(dataTable), d_row = rev(row_d), d_col = col_d, ratio = 0, verbose=FALSE, col=RdBu,
label_size=2, v_ratio= c(0.1,0.8,0.1), h_ratio=c(0.1,0.8,0.1))
This gapmap package was designed to encode the similarity between two nodes by adjusting the position of each leaves. In the traditional representation, all the information about the similarity two
adjacent nodes is in the height of the branch in a dendrogram. By positioning leaves based on the similarity, we introduce gaps in both dendrograms and heat map visualization. In the figure below, we
exponentially map a distance (dissimilarity) of two nodes to a scale of gap size.
gapmap(m = as.matrix(dataTable), d_row = rev(row_d), d_col = col_d, mode = "quantitative", mapping="exponential", col=RdBu,
ratio = 0.3, verbose=FALSE, scale = 0.5, label_size=2, v_ratio= c(0.1,0.8,0.1), h_ratio=c(0.1,0.8,0.1))
Since the background is white, we can use another color scale where the value 0 is encoded in yellow.
gapmap(m = as.matrix(dataTable), d_row = rev(row_d), d_col = col_d, mode = "quantitative", mapping="exponential", col=RdYlBu,
ratio = 0.3, verbose=FALSE, scale = 0.5, label_size=2, v_ratio= c(0.1,0.8,0.1), h_ratio=c(0.1,0.8,0.1))
This package works well with the dendsort package to reorder the structure of dendrograms. For further information on dendsort, please see the paper.
gapmap(m = as.matrix(dataTable), d_row = rev(dendsort(row_d, type = "average")), d_col = dendsort(col_d, type = "average"),
mode = "quantitative", mapping="exponential", ratio = 0.3, verbose=FALSE, scale = 0.5, v_ratio= c(0.1,0.8,0.1),
h_ratio=c(0.1,0.8,0.1), label_size=2, show_legend=TRUE, col=RdBu)
You can also plot a gapped dendrogram. First you need to create a gapdata class object by calling gap_data(). To bring the text labels closer to the dendrogram, we set a negative value for axis.ticks.margin. This value should be adjusted depending on your plot size. If anyone has a better solution for adjusting the position of the axis labels, please let me know.
row_data <- gap_data(d= row_d, mode = "quantitative", mapping="exponential", ratio=0.3, scale= 0.5)
dend <- gap_dendrogram(data = row_data, leaf_labels = TRUE, rotate_label = TRUE)
dend + theme(axis.ticks.length= grid::unit(0,"lines") )+ theme(axis.ticks.margin = grid::unit(-0.8, "lines"))
## Warning: The `axis.ticks.margin` argument of `theme()` is deprecated as of ggplot2
## 2.0.0.
## ℹ Please set `margin` property of `axis.text` instead
## This warning is displayed once every 8 hours.
## Call `lifecycle::last_lifecycle_warnings()` to see where this warning was
## generated.
|
{"url":"https://cran.case.edu/web/packages/gapmap/vignettes/tcga_example.html","timestamp":"2024-11-06T11:54:45Z","content_type":"text/html","content_length":"1049014","record_id":"<urn:uuid:6890e40e-2af3-4773-8970-063f29f35b45>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00336.warc.gz"}
|
Correlation and regression in contingency tables
Organization: Thomas Cool Consultancy & Econometrics
URL: http://thomascool.eu/
Revision date
Nominal data currently lack a correlation coefficient, such as has already been defined for real data. A
measure can be designed using the determinant, with the useful interpretation that the determinant gives
the ratio between volumes. With M a m × n contingency table with m ≥ n, and A = Normalized[M], then A'A
is a square n × n matrix and the suggested measure is r = Sqrt[Det[A'A]]. With M an a × a × ... × a contingency matrix with k dimensions, pairwise correlations can be collected in a k × k matrix R. A matrix of such
pairwise correlations is called an association matrix. If that matrix is also positive semi-definite
(PSD) then it is a proper correlation matrix. The overall correlation then is R = f[R] where f can be
chosen to impose PSD-ness. An option is to use R = Sqrt[1 - Det[R]]. However, for both nominal and
cardinal data the advisable choice is to take the maximal multiple correlation within R. The resulting
measure of nominal correlation measures the distance between a main diagonal and the off-diagonal
elements, and thus is a measure of strong correlation. Cramér's V measure for pairwise correlation can be generalized in this manner too. It measures the distance between all diagonals (including cross-diagonals and subdiagonals) and statistical independence, and thus is a measure of weaker correlation. Finally, when variances are also defined, regression coefficients can be determined from the variance-covariance matrix.
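As an illustrative sketch (not the author's package code), the determinant-based measure r = Sqrt[Det[A'A]] can be computed in plain Python for a two-column table, assuming Normalized[M] means each column of M is scaled to unit Euclidean length:

```python
import math

def nominal_correlation(M):
    # r = sqrt(det(A'A)) for an m x 2 contingency table, where A is M
    # with each column scaled to unit length (assumed normalization).
    m = len(M)
    cols = list(zip(*M))
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    A = [[M[i][j] / norms[j] for j in range(2)] for i in range(m)]
    # A'A has a unit diagonal, so det(A'A) = 1 - (off-diagonal entry)^2.
    g01 = sum(A[i][0] * A[i][1] for i in range(m))
    return math.sqrt(max(1.0 - g01 * g01, 0.0))

print(nominal_correlation([[10, 0], [0, 10]]))  # perfect association: 1.0
print(nominal_correlation([[5, 5], [5, 5]]))    # independence: ~0 (up to float error)
```

A perfectly diagonal table gives orthogonal columns (r = 1), while proportional columns, as under statistical independence, give r = 0, matching the volume-ratio interpretation of the determinant.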
Business and Economics
Mathematics > Probability and Statistics
Wolfram Technology > Application Packages > Additional Applications > Cool Economics
association, correlation, contingency table, volume ratio, determinant, nonparametric methods, nominal data, nominal scale, categorical data, Fisher's exact test, odds ratio, tetrachoric correlation coefficient, phi, Cramér's V, Pearson, contingency coefficient, uncertainty coefficient, Theil's U, eta, meta-analysis, Simpson's paradox, causality, statistical independence, regression
ColignatusCorrelation.zip (157.5 KB) - ZIP archive
|
{"url":"https://library.wolfram.com/infocenter/MathSource/6741/","timestamp":"2024-11-12T18:32:49Z","content_type":"text/html","content_length":"41714","record_id":"<urn:uuid:148c23b6-f282-4b5a-bbe6-64853f31f68a>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00662.warc.gz"}
|
Numerical integration in a function
So what I want is for the integration to wait until after the variables have been substituted, so that it can integrate numerically. (Yes, I need to integrate numerically. This is a simplified form that reproduces the same result.)
I thought there might be a way using a lambda-defined function, but I was unable to find one.
2 Answers
This works for me:
sage: var('d')
sage: f = lambda x,y: numerical_integral(1/d*2*x*y,.01,Infinity)[0]
sage: f(3,1)
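The same deferred-evaluation pattern can be sketched in plain Python, substituting a simple trapezoid-rule helper for Sage's numerical_integral (the integrand and limits below are illustrative, not the question's original example):

```python
def integrate(f, a, b, n=10000):
    # Composite trapezoid rule; evaluation is deferred until called.
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# The lambda delays the numerical integration until x and y have
# concrete values, mirroring f = lambda x, y: numerical_integral(...)[0].
f = lambda x, y: integrate(lambda d: x * y / d**2, 1.0, 10.0)

print(f(3, 1))  # analytically x*y*(1 - 1/10) = 2.7
```

As one of the comments below notes, such a wrapper is purely numeric: it can be evaluated but not differentiated symbolically.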
This worked great!
willmwade (2010-09-01 12:32:51 +0100)
Great! However, you won't be able to take the derivative of this function, as far as I know. Maybe our f(x,y) should make a lambda function when it's not possible to make a symbolic function... but
that sounds hard.
kcrisman (2010-09-01 14:28:06 +0100)
This isn't really an answer, but ...
This is one of many things about the integration interface that needs to be improved. We have two fundamentally different syntaxes for symbolic versus numerical integration. Would you find it
possible to collate all your ask.sagemath.org questions about this into one collection and open a Trac ticket for "Improve numerical integration syntax" or something like that? I don't think it would
be terribly difficult to fix many of these things, but it's much harder because the requests (and possibly tickets - over the last few years) are sort of scattered.
Please have a concrete suggestion for the syntax before opening the ticket. Tickets without specific goals tend to be ignored by the developers, and are generally hard to resolve. The sage-devel list
is the right place for a discussion on how the improved syntax might be.
burcin (2010-09-01 11:46:23 +0100)
Maybe what's also (or alternatively) needed is a ticket for "improve documentation for numerical and symbolic integration", since these questions seem (to me) to get resolved, but not always in
obvious ways. @willmwade: could you identify the things you've learned which are not documented?
niles (2010-09-01 11:46:28 +0100)
The main area I think could use some documentation is the use of the Python inline lambda function. Most if not all of the integration issues I have had have been solved by using it. However, there is little on how to use it with Sage specifically.
willmwade (2010-09-01 12:34:38 +0100)
For example, in the answer to this one, I did not know that f = lambda x: x^2 would allow me to do f(2) with result 4. Nor have any of the Python docs on it been much help. Mind you, I do more in Java :( and PHP :) than the little I have done in Python, but still, some Sage documentation pointing to these would help.
willmwade (2010-09-01 12:36:57 +0100)
The only other item is a mess I keep running into when defining the variable of integration: integral(f(x), x, 0, 1) versus numerical_integral(f(x), 0, 1). numerical_integral errors when the variable of integration is declared. If it did not do that, it would help bring the two into uniformity.
willmwade (2010-09-01 12:39:45 +0100)
|
{"url":"https://ask.sagemath.org/question/7660/numerical-integration-in-a-function/","timestamp":"2024-11-09T15:39:51Z","content_type":"application/xhtml+xml","content_length":"71923","record_id":"<urn:uuid:fb558c59-75e8-4d6d-9015-3b3182d29eb6>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00582.warc.gz"}
|
Number 1015
Number 1015 is an odd four-digits composite number and natural number following 1014 and preceding 1016.
Nominal 1015
Cardinal one thousand fifteen
Ordinal 1,015th
Number of digits 4
Sum of digits 7
Product of digits 0
Number parity Odd
Prime factorization 5 x 7 x 29
Prime factors 5, 7, 29
Number of distinct prime factors ω(n) 3
Total number of prime factors Ω(n) 3
Sum of prime factors 41
Product of prime factors 1015
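The factorization above is easy to reproduce with simple trial division (a minimal sketch, not how this site computes it):

```python
def prime_factors(n):
    # Trial division; returns the prime factors with multiplicity.
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(1015))  # [5, 7, 29]
```

Since 1015 has three prime factors rather than two, it is not a semiprime.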
Is 1015 a prime number? No
Is 1015 a semiprime number? No
Is 1015 a Chen prime number? No
Is 1015 a Mersenne prime number? No
Is 1015 a Catalan number? No
Is 1015 a Fibonacci number? No
Is 1015 a Idoneal number? No
Square of 1015 (n^2) 1030225
Cube of 1015 (n^3) 1045678375
Square root of 1015 31.859064644148
Natural Logarithm (ln) of 1015 6.9226438914759
Decimal Logarithm (log) of 1015 3.0064660422492
Sine of 1015 -0.26246211746178
Cosecant of 1015 -3.810073658137
Cosine of 1015 -0.96494229718542
Secant of 1015 -1.0363313981746
Tangent of 1015 0.27199773315704
Cotangent of 1015 3.6765012281284
Is 1015 an Even Number?
No, the number 1015 is not an even number.
Is 1015 an Odd Number?
Yes, the number 1015 is an odd number.
Total number of all odd numbers from 1 to 1015 is 508
Sum of all the odd numbers from 1 to 1015 is 258064
The sum of all the odd numbers is a perfect square: 258064 = 508²
An odd number is any integer (a whole number) that cannot be divided by 2 evenly. Odd numbers are the opposite of even numbers.
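The count and sum of the odd numbers up to 1015, and the perfect-square identity, can be checked directly:

```python
# Odd numbers from 1 to 1015 inclusive.
odds = list(range(1, 1016, 2))
print(len(odds))   # 508
print(sum(odds))   # 258064
print(508 ** 2)    # 258064 -- the sum of the first n odd numbers is n**2
```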
The spelling of 1015 in words is "one thousand fifteen", meaning that:
1015 is not an aban number (as it contains the letter a)
1015 is not an eban number (as it contains the letter e)
1015 is not an iban number (as it contains the letter i)
1015 is not an oban number (as it contains the letter o)
1015 is not a tban number (as it contains the letter t)
1015 is not an uban number (as it contains the letter u)
Bengali numerals ১০১৫
Eastern Arabic numerals ١٬٠١٥
Hieroglyphs numeralsused in Ancient Egypt 𓆼𓎆𓏾
Khmer numerals ១០១៥
Japanese numerals 千十五
Roman numerals MXV
Thai numerals ๑๐๑๕
Arabic ألف و خمسة عشر
Croatian tisuću petnaest
Czech jedna tisíc patnáct
Danish et tusind femten
Dutch duizend vijftien
Estonian üks tuhat viisteist
Faroese eitt tusin og fímtan
Filipino isáng libó’t labíng-limá
Finnish tuhatviisitoista
French mille quinze
Greek χίλια δεκαπέντε
German eintausendfünfzehn
Hebrew אלף וחמש עשרה
Hindi एक हज़ार पन्द्रह
Hungarian ezertizenöt
Icelandic eitt þúsund og fimmtán
Indonesian seribu lima belas
Italian millequindici
Japanese 千十五
Korean 천십오
Latvian tūkstoš piecpadsmit
Lithuanian tūkstantis penkiolika
Norwegian tusen og femten
Persian یک هزار و پانزده
Polish tysiąc piętnaście
Portuguese mil e quinze
Romanian una mie cincisprezece
Russian одна тысяча пятнадцать
Serbian једна хиљаду петнаест
Slovak jedna tisíc pätnásť
Slovene tisoč petnajst
Spanish mil quince
Swahili elfu moja, kumi na tano
Swedish entusenfemton
Thai หนึ่งพันสิบห้า
Turkish bin on beş
Ukrainian одна тисяча пʼятнадцять
Vietnamese một nghìn không trăm mười lăm
Number 1015 reversed 5101
Unicode Character U+03F7 Ϸ
Unix Timestamp Thu, 01 Jan 1970 00:16:55 +0000
|
{"url":"https://www.numberfacts.one/1015","timestamp":"2024-11-03T12:26:52Z","content_type":"text/html","content_length":"32563","record_id":"<urn:uuid:c5074fa2-9a2b-49b4-a027-b4648080c4bd>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00135.warc.gz"}
|
Advisory Board
The MATRIX Advisory Board consists of the following senior mathematical scientists and industry representatives.
• Prof. Tony Guttmann
(University of Melbourne)
Tony holds a Personal Chair in mathematics at The University of Melbourne. He was the Interim Director of the Australian Mathematical Sciences Institute at the time of its creation, and was
Director of the ARC Centre of Excellence for the Mathematics and Statistics of Complex systems (MASCOS) from its inception in 2002 until 2017.
Research Interests: Tony’s research interests are in equilibrium statistical mechanics in general, and more particularly in discrete models of phase transitions. As these are often equivalent to
a combinatorial problem, he is equally interested in the relevant combinatorics, and the connection between the two. Additionally, he is regularly looking for new ways to count the underlying
graphs efficiently leading to a study of algorithmic complexity.
• Prof. Jan de Gier
Director MATRIX
(University of Melbourne)
Jan is the Director of MATRIX. He is a former Head of the School of Mathematics and Statistics at The University of Melbourne and was the inaugural Chair of the Australia and New Zealand
Association for Mathematical Physics. Jan is Chief Investigator of the ARC Centre of Excellence for Mathematical and
Statistical Frontiers.
Research Interests: Jan’s main research areas are mathematical physics, statistical mechanics, interacting particle systems, solvable lattice models, representation theory and multivariable
polynomials. He also studies applications of stochastic particle systems to real world traffic modelling.
• Prof. Uri Onn
Acting Deputy Director MATRIX
(Australian National University)
Uri is acting Deputy Director of MATRIX and a Professor of Mathematics at the Australian National University.
Research Interests: Representation theory and related zeta functions, asymptotic groups theory, special functions and their role in representation theory, and the Local Langlands correspondence.
• Prof. David Wood
Deputy Director MATRIX
(Monash University)
David is Deputy Director of MATRIX, and Professor in the Discrete Mathematics Research Group of the School of Mathematics at Monash University in Melbourne, Australia.
Research Interests: David's research interests are in discrete mathematics and theoretical computer science, especially structural graph theory, extremal graph theory, geometric graph theory,
graph colouring, and combinatorial geometry.
• Prof. Joe Grotowski
Deputy Director MATRIX
(University of Queensland)
Joe has been Head of School at the University of Queensland since May, 2014, having served as Head of Mathematics from January 2010 until April 2014.
Research Interests: Joe’s research is mainly in geometric and nonlinear analysis, in particular in geometric evolution equations. In recent years he has also become involved in a number of more
applied projects.
• Dr Mark Aarons
(VFMC and Monash University)
Mark joined VFMC in 2018 and is currently Head of Investment Risk and Absolute Returns. His risk role encompasses risk analysis, portfolio construction and defensive overlays. He also heads-up
the Absolute Returns team which covers Hedge Funds, Private Credit and Emerging Market Debt (circa 20% of VFMC’s FUM). Mark is a member of the Investment Leadership Team and investment committee.
He is also an Adjunct Associate Professor in the Centre for Quantitative Finance and Investment Strategies at Monash University.
Prior to working at VFMC, Mark was global Head of FICC Structuring at the National Australia Bank from 2010 to 2017, where he built a leading institutional derivatives structuring and sales
business in both Australia and the UK. Mark also spent four years in London with NAB working in both market risk and in front office on a rates trading desk.
Mark holds Bachelor degrees in Law and Science (both with Honours) from Monash University and a PhD in Mathematics jointly from the Max Planck Institute for Gravitational Physics and the Free
University of Berlin, Germany. Mark is also the co-author of a book on securitisation swaps which was published by Wiley Finance in 2019 and has published several finance papers.
• Prof. Hélène Barcelo
(Simons Laufer Mathematical Sciences Institute)
Hélène is the Deputy Director of the Simons Laufer Mathematical Sciences Institute, a position she has held since July 1, 2008. As Deputy Director, she is in charge of overseeing all scientific
activities at the Institute. She received the Wexler Award (ASU) for distinguished teaching, and 4 doctoral and 8 master students completed their degree under her direction. She has held visiting
positions at numerous universities and research institutes around the world.
Research Interests: Hélène's research interests lie in algebraic combinatorics; more specifically, combinatorial representation theory and homotopy theories in relation to subspace arrangements.
• Prof. Howard Bondell
(University of Melbourne)
Howard is Professor and Head of the School of Mathematics and Statistics at the University of Melbourne, and a Fellow of the American Statistical Association.
Research interests: Howard’s research interest is in statistical data science, with focus on model selection, robust estimation, regularisation, Bayesian methods, and all aspects of modelling and
handling uncertainty in statistical and machine learning approaches.
• Prof. Lilia Ferrario
(Director, Mathematical Sciences Institute, Australian National University)
Lilia is a Professor of Mathematics and Theoretical Astrophysics at the Australian National University in Canberra. She was the Head of the ANU Department of Mathematics in 2012-2014 and then the
Associate Director of Education of the ANU Mathematical Sciences Institute (MSI) until 2020. She is currently the Director of MSI.
Research Interests: Lilia’s work is on compact stars, magnetic accretion and cyclotron shocks on the surface of highly magnetic white dwarfs. She has studied the magnetosphere-accretion stream
interaction in compact stars with detailed self-consistent 3-D computational models of the thermal structure of magnetically confined accretion flows. She has investigated the origin of magnetic
fields in compact stars and has shown that binary interaction and stellar merging can explain the strongest magnetic fields in the universe. She has also performed Galactic archaeology studies to
hunt for the elusive progenitors of type Ia Supernovae that are routinely used as standard candles to determine the expansion history of the universe.
• Prof. Jennifer Flegg
(University of Melbourne) (Chair, WIMSIG)
Jennifer is a Professor in the School of Mathematics and Statistics at the University of Melbourne and is Chair of the Women in Mathematics Special Interest Group (WIMSIG) of the Australian
Mathematical Society. She obtained her PhD from the Queensland University of Technology in 2009. She held a postdoctoral position at the University of Oxford and was a lecturer at Monash
University before joining the University of Melbourne in 2017. Jennifer was awarded the Australian Academy of Science Christopher Heyde Medal In Applied Mathematics in 2020.
Research interests: Jennifer's research focuses on using mathematics and statistics to answer questions in biology and medicine. In particular, she develops mathematical models in areas such as
wound healing, tumour growth and infectious disease epidemiology.
• Prof. Andrew Francis
(UNSW, Sydney)
Andrew is a Professor of Mathematics and Head of the School of Mathematics and Statistics at the University of New South Wales. He obtained his PhD from UNSW in 1999, held a postdoc at the
University of Virginia, and worked at Western Sydney University before re-joining UNSW in 2024. He has held several ARC roles, including being on the College of Experts, and Research Evaluation
Committees for the ERA processes.
Research Interests: Andrew’s research uses ways of thinking from the discrete side of pure mathematics, such as from group theory, graph theory, and combinatorics, to study problems arising in
evolutionary biology. In particular, he has had a focus on phylogenetic trees and networks - used to describe evolutionary relationships - and on processes of evolution in bacteria.
• Prof. Dr. Gerhard Huisken
(Mathematisches Forschungsinstitut Oberwolfach, Germany)
Gerhard is currently a Professor of Mathematics at Tübingen University and Director of the Mathematisches Forschungsinstitut Oberwolfach in Germany. Before that he was a Director at the
Max-Planck-Institute for Gravitational Physics (Albert-Einstein-Institute) in Potsdam.
Research Interest: Gerhard's research interests are in Geometric Analysis with applications to Mathematical Physics, in particular to General Relativity.
• Prof. Tim Marchant
(University of Wollongong) (Director, AMSI)
Tim Marchant is Director of the Australian Mathematical Sciences Institute and a Honorary Professor of Applied Mathematics at University of Wollongong. He was previously President of the
Australian Mathematical Society, and Head of the School of Mathematics and Applied Statistics and Dean of Research at the University of Wollongong.
Research interests: nonlinear optics, nonlinear waves and combustion theory.
• Prof. Yong-Geun Oh
(Director IBS Center for Geometry and Physics, Pohang, Korea)
Yong-Geun is the founding director of the IBS Center for Geometry and Physics established in August 2012 and a professor of the mathematics department in the Pohang University of Science and
Technology (POSTECH). He got his Ph.D. in the University of California, Berkeley in 1988. He was a faculty member in the mathematics department of the University of Wisconsin–Madison for the
period 1991-2014. He was a member of the Institute for Advanced Study in Princeton (1991-1992, 2001-2002, 2012). He was honored with the 2019 Korea Science Award.
He is a member of Korean Mathematical Society and American Mathematical Society. He is a member of the Korean Academy of Science and Technology and is in the inaugural class of AMS Fellows.
Research Interests: Yong-Geun’s fields of study have been on symplectic topology, Hamiltonian mechanics, and mirror symmetry. His research focus lies in the symplectic Floer homology theory and
its applications. He was a ICM speaker in the Geometry Session of ICM-2006 in Madrid.
• Prof. Cheryl Praeger
(University of Western Australia)
Cheryl is Professor of Mathematics at the University of Western Australia. She is also the Foreign Secretary of the Australian Academy of Science (2014-2018). She is a former ARC Federation
Fellow and was the inaugural Director of the UWA Centre for the Mathematics of Symmetry and Computation.
Research Interests: Cheryl’s research has focussed on the theory of group actions and their applications in Algebraic Graph Theory and for Combinatorial Designs; and algorithms for group
computation including questions in statistical group theory and algorithmic complexity.
• Prof. Jessica Purcell
(AustMS/Monash University)
Jessica Purcell is a Professor of Mathematics at Monash University. Jessica is the current President of the Australian Mathematical Society (AustMS) and former Chair of the AustMS Women in
Mathematical Sciences Special Interest Group (WIMSIG). Before arriving at Monash, she held positions at Brigham Young University, Oxford University, and the University of Texas at Austin, and
visiting positions at the Institute for Advanced Study and Melbourne University. Jessica received her PhD in mathematics from Stanford University in 2004.
Research Interests: Jessica's research interests are in low-dimensional topology and geometry, especially 3-manifolds, hyperbolic geometry, and their relation to knot theory.
• Ms Brigitte Smith
(GBS Venture Partners)
Brigitte has twenty years’ experience in venture capital, business strategy and start-up company operations. She has been investing and managing investments for GBS’s $450m of life science
specialised venture capital funds since 1998. Brigitte has a B. Chem Eng (Honours) from the University of Melbourne, and as a Fulbright Scholar completed a MBA (Honours) from the Harvard Business
School and a MALD from the Fletcher School of Law and Diplomacy, both in Boston, USA
• Prof. Peter Taylor
((Director ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), University of Melbourne))
Peter is an Australian Laureate Fellow and Director of ACEMS.
Research Interests: Peter's research interests lie in the fields of stochastic modelling and applied probability, with particular emphasis on applications in telecommunications, biological
modelling, mechanism design, epidemiology, healthcare and disaster management. Recently he has become interested in the interaction of stochastic modelling with optimisation and optimal control
under conditions of uncertainty.
• Prof. Ole Warnaar
(University of Queensland)
Ole is Chair and Professor in Pure Mathematics at the University of Queensland, a Fellow of the Australian Academy of Science and former President of the Australian Mathematical Society (AustMS).
Research Interests: Ole’s research interest include algebraic combinatorics, basic and elliptic hypergeometric series, representation theory, and special functions.
• Prof. Warwick Tucker
(Monash University)
Warwick has been Head of School at Monash University since July, 2020, having previously (2014-2020) served as Head of the Department of Mathematics at Uppsala University (Sweden). Warwick
received his Ph.D. in Mathematics from Uppsala University in 1998. Since then he has had research positions at IMPA (Brazil), Cornell University (USA), University of Bergen (Norway), ENS-Lyon
(France), and Uppsala University. Before joining Monash University, Warwick was part of the Swedish $1bn research initiative WASP, acting as the chair of the national graduate school in Artificial Intelligence.
Research Interests: Warwick’s research is mainly in dynamical systems, chaos theory, computer-assisted proofs, and artificial intelligence.
• Prof. Shmuel Weinberger
(University of Chicago)
Shmuel Weinberger is the Andrew MacLeish Professor of Mathematics and chair of the Department of Mathematics at the University of Chicago. He is Fellow of the AMS and of the AAAS.
Research Interests: geometric topology, differential geometry, geometric group theory, and applications of topology in other disciplines.
|
{"url":"https://www.matrix-inst.org.au/governance/advisory-board/","timestamp":"2024-11-12T06:49:14Z","content_type":"text/html","content_length":"135138","record_id":"<urn:uuid:91895f70-5564-4ade-bf3f-20bb22c6726f>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00494.warc.gz"}
|
How do you sketch the curve y=cos^2x-sin^2x by finding local maximum, minimum, inflection points, asymptotes, and intercepts? | Socratic
How do you sketch the curve #y=cos^2x-sin^2x# by finding local maximum, minimum, inflection points, asymptotes, and intercepts?
1 Answer
Note that we can use the trigonometric identity:
${\cos}^{2} x - {\sin}^{2} x = \cos 2 x$
so we know that the function has a maximum for $x = k \pi$ and a minimum for $x = \frac{\pi}{2} + k \pi$ with $k \in \mathbb{Z}$.
The inflection points are coincident with the intercepts and occur at $x = \frac{\pi}{4} + k \frac{\pi}{2}$. The function is concave down on $\left(- \frac{\pi}{4} + k \pi , \frac{\pi}{4} + k \pi\right)$ and concave up on $\left(\frac{\pi}{4} + k \pi , \frac{3 \pi}{4} + k \pi\right)$, with $k \in \mathbb{Z}$.
Finally, the function is continuous for every $x \in \mathbb{R}$ and has no limit for $x \to \pm \infty$ and no asymptotes.
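These claims can be verified directly from the derivatives:

$y = \cos 2 x , \quad y ' = - 2 \sin 2 x , \quad y ' ' = - 4 \cos 2 x$

Setting $y ' = 0$ gives $x = k \frac{\pi}{2}$: at $x = k \pi$ we have $y ' ' = - 4 < 0$ (maxima), while at $x = \frac{\pi}{2} + k \pi$ we have $y ' ' = 4 > 0$ (minima). Setting $y ' ' = 0$ gives $\cos 2 x = 0$, i.e. $x = \frac{\pi}{4} + k \frac{\pi}{2}$, which are the inflection points.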
graph{cos(2x) [-2.5, 2.5, -1.25, 1.25]}
Impact of this question
2359 views around the world
|
{"url":"https://socratic.org/questions/how-do-you-sketch-the-curve-y-cos-2x-sin-2x-by-finding-local-maximum-minimum-inf#456636","timestamp":"2024-11-03T19:15:12Z","content_type":"text/html","content_length":"33676","record_id":"<urn:uuid:a8cc9ff3-4c36-47e4-aefd-6f778c93e371>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00298.warc.gz"}
|
Governing Equation for Elastic Buckling
Consider a buckled simply-supported column of length L under an external axial compression force F, as shown in the left schematic below. The transverse displacement of the buckled column is
represented by w.
The right schematic shows the forces and moments acting on a cross-section in the buckled column. Moment equilibrium on the lower free body yields a solution for the internal bending moment M,
M = -F*w
Recall the relationship between the moment M and the transverse displacement w for an Euler-Bernoulli beam,
E*I*w'' = M
Eliminating M from the above two equations results in the governing equation for the buckled slender column,
E*I*w'' + F*w = 0
Buckling Solutions
The governing equation is a second order homogeneous ordinary differential equation with constant coefficients and can be solved by the method of characteristic equations. The solution is found to be
w(x) = A*sin(m*x) + B*cos(m*x), where m^2 = F/(E*I)
and where A and B can be determined by the two boundary conditions w(0) = 0 and w(L) = 0.
The coefficient B is always zero, and for most values of m*L the coefficient A is required to be zero. However, for special cases of m*L satisfying sin(m*L) = 0, i.e. m*L = n*π for a positive integer n, A can be nonzero and the column can be buckled. The restriction on m*L is also a restriction on the values for the loading F,
F_n = n^2*π^2*E*I/L^2
These special values are mathematically called eigenvalues. All other values of F lead to trivial solutions (i.e. zero transverse deflection).
The lowest load that causes buckling is called the critical load (n = 1),
F_cr = π^2*E*I/L^2
The above equation is usually called Euler's formula. Although Leonard Euler did publish the governing equation in 1744, J. L. Lagrange is considered the first to show that a non-trivial solution
exists only when n is an integer. Thomas Young then suggested the critical load (n = 1) and pointed out the solution was valid when the column is slender in his 1807 book. The "slender" column idea
was not quantitatively developed until A. Considère performed a series of 32 tests in 1889.
The shape function for the buckled shape w(x) is mathematically called an eigenfunction, and is given by
w(x) = A*sin(n*π*x/L)
Recall that this eigenfunction is strictly valid only for simply-supported columns.
Note: 1. Boundary conditions other than simply-supported will result in different critical loads and mode shapes.
2. The buckling mode shape is valid only for small deflections, where the material is still within its elastic limit.
3. The critical load will cause buckling in slender, long columns. In contrast, failure will occur in short columns when the strength of the material is exceeded. Between the long and short column limits, there is a region where buckling occurs after the stress exceeds the proportional limit but is still below the ultimate strength. These columns are classified as intermediate and their failure is called inelastic buckling.
4. Whether a column is short, intermediate, or long depends on its geometry as well as the stiffness and strength of its material. This concept is addressed in the columns introduction page.
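Euler's formula is straightforward to evaluate numerically. The material and section values below are hypothetical, chosen only for illustration:

```python
import math

def euler_critical_load(E, I, L):
    # Critical buckling load F_cr = pi^2 * E * I / L^2 for a
    # simply-supported column (lowest eigenvalue, n = 1).
    return math.pi**2 * E * I / L**2

# Example: a 3 m column with E = 200 GPa and I = 8.0e-6 m^4
# (hypothetical values), giving F_cr of roughly 1.75 MN.
F_cr = euler_critical_load(200e9, 8.0e-6, 3.0)
print(F_cr)
```

Halving the length quadruples the critical load, which is why slenderness dominates the buckling behavior of long columns.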
|
{"url":"https://www.efunda.com/formulae/solid_mechanics/columns/theory.cfm","timestamp":"2024-11-15T00:49:28Z","content_type":"text/html","content_length":"26762","record_id":"<urn:uuid:a2b84ada-6ccb-46fa-b0b5-4f78714095e8>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00737.warc.gz"}
|
Economics 91
Spring 2021
Course Description
This course provides an introduction to probability and statistics with emphasis on topics that are central to the study of econometrics. We will begin with descriptive statistics, probability
distributions and expected values. We will learn how to make statistical inferences and tests of significance. At the end of the semester we will study classical models of bivariate and multivariate
regressions. This will allow us to test various economic theories about how the world works. The aim is to provide you with sufficient knowledge of statistical and econometric theory to make you an
effective consumer of empirical research in the social sciences.
Though this is not a math class, statistics cannot be discussed in a meaningful way without the use of a lot of graphs and algebra. Thus Math 25 or equivalent is a prerequisite for this class. It may be good to review your knowledge in these areas. You must be able to add, subtract, multiply, and divide, but calculus will not be necessary for this class. The course is designed to prepare students for Economics 125, Econometrics. The statistical software we will use is Microsoft Excel.
Requirements for the course will include class attendance, six problem sets, two midterm exams, and a final exam. The problem sets will serve as excellent practice for the exams. They will be posted
on this web page. Students are encouraged to do additional problems in the textbook. The midterm exams will be on Wednesday 24 February and Wednesday 7 April. The final exam will be given on Tuesday
11 May from 2-5pm PST. The problem sets will count for 30%, each midterm exam for 20% and the final exam for 30% of the final grade. All the problem sets will be posted on Sakai. You may choose to
write an optional essay of 1000 words on how randomness rules your life after reading The Drunkard's Walk: How Randomness Rules Our Lives by Leonard Mlodinow. If you choose to write this essay, it
will count as extra credit, and is due on Wednesday 5 May. This essay can only improve your grade.
A written version of the lectures is available on Sakai. Since all the material will be covered in lecture, the textbook is optional, but recommended as a reference. The optional textbook for the
course is Gerald Keller, Statistics for Management and Economics, 11th edition, South-Western College Pub, 2018. This textbook is available at Huntley Bookstore. We will learn to use the statistics
tables in the back of the book. Leonard Mlodinow, The Drunkard's Walk: How Randomness Rules Our Lives, is optional but recommended reading for the semester.
If any material is ever unclear, or even if everything is perfectly clear, please chat with me about statistics, economics or anything for that matter. If you have a short question, feel free to
email me. For longer and better explanations, please make an appointment to see me on Zoom at your convenience. I can be reached at the following:
Office: Fletcher 216
Office Hours: Monday-Thursday 2:00-3:00pm, and by appointment
Phone: 607-3769
Email lyamane@pitzer.edu
After this pandemic, please join me for lunch every Friday from 12:00-1:00pm in the east wing of McConnell Dining Hall. Look for the table with the Economics Lunch sign. These are just office hours
over lunch.
Introduction Ch 1
Descriptive Statistics Ch 2
Probability Ch 3
Probability Distributions Ch 4
Special Probability Distributions Ch 5
Statistical Inference Ch 6
Confidence Intervals Ch 7
Hypothesis Testing Ch 8
Hypothesis Testing with Two Samples Ch 9
Simple Linear Regression Ch 10
Multiple Regression Ch 11
Interpreting Regression Results Ch 12
Statistics Web Sites
Ken White's Coin Flipping Page
Problem Sets
Please download these pdf documents.
Problem Set #1 Due Wednesday 3 February
Problem Set #2 Due Wednesday 17 February Keno
Midterm #1 Wednesday 24 February
Problem Set #3 Due Wednesday 17 March
Problem Set #4 Due Wednesday 31 March
Midterm #2 Wednesday 7 April
Problem Set #5 Due Wednesday 21 April
Problem Set #6 Due Wednesday 28 April
Final Exam Tuesday 11 May 2pm
Practice Problems
Probability Rules
Probability Distributions
Confidence Intervals and Hypothesis Testing
Lecture Slides
Week 6
Week 11
Week 15
|
{"url":"http://pzacad.pitzer.edu/~lyamane/econ91.htm","timestamp":"2024-11-08T22:07:23Z","content_type":"text/html","content_length":"15497","record_id":"<urn:uuid:1838e387-b4b4-4340-8ed8-34261f766c66>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00537.warc.gz"}
|
Time-Dependent Flow with Convective Heat Transfer through a Curved Square Duct with Large Pressure Gradient
Time-Dependent Flow with Convective Heat Transfer through a Curved Square Duct with Large Pressure Gradient ()
1. Introduction
Fluid flow and heat transfer in curved ducts have been studied for a long time because of their fundamental importance in engineering and industrial applications. Today, the flows in curved
non-circular ducts are of increasing importance in micro-fluidics, where lithographic methods typically produce channels of square or rectangular cross-section. These channels are extensively used in
many engineering applications, such as in turbo-machinery, refrigeration, air conditioning systems, heat exchangers, rocket engine, internal combustion engines and blade-to-blade passages in modern
gas turbines. In a curved duct, centrifugal forces are developed in the flow due to channel curvature causing a counter rotating vortex motion applied on the axial flow through the channel. This
creates characteristics spiraling fluid flow in the curved passage known as secondary flow. At a certain critical flow condition and beyond, additional pairs of counter rotating vortices appear on
the outer concave wall of curved fluid passages which are known as Dean vortices, in recognition of the pioneering work in this field by Dean [1] . After that, many theoretical and experimental
investigations have been done; for instance, the articles by Berger et al. [2] , Nandakumar and Masliyah [3] , and Ito [4] may be referenced.
One of the interesting phenomena of the flow through a curved duct is the bifurcation of the flow because generally there exist many steady solutions due to channel curvature. Studies of the flow
through a curved duct have been made, experimentally or numerically, for various shapes of the cross section by many authors. However, an extensive treatment of the bifurcation structure of the flow
through a curved duct of rectangular cross section was presented by Winters [5] , Daskopoulos and Lenhoff [6] and Mondal [7] .
Unsteady flows by time evolution calculation of curved duct flows was first initiated by Yanase and Nishiyama [8] for a rectangular cross section. In that study they investigated unsteady solutions
for the case where dual solutions exist. The time-dependent behavior of the flow in a curved rectangular duct of large aspect ratio was investigated, in detail, by Yanase et al. [9] numerically. They
performed time-evolution calculations of the unsteady solutions with and without symmetry condition and found that periodic oscillations appear with symmetry condition while aperiodic time variation
without symmetry condition. Wang and Yang [10] [11] performed numerical as well as experimental investigation on fully developed periodic oscillation in a curved square duct. Flow visualization in
the range of Dean numbers from 50 to 500 was carried out in their experiment. Recently, Yanase et al. [12] performed numerical investigation of isothermal and non-isothermal flows through a curved
rectangular duct and addressed the time-dependent behavior of the unsteady solutions. In the succeeding paper, Yanase et al. [13] extended their work for moderate Grashof numbers and studied the
effects of secon-dary flows on convective heat transfer. Recently, Mondal et al. [14] [15] performed numerical prediction of the unsteady solutions by time-evolution calculations for the flow through
a curved square duct and discussed the transitional behavior of the unsteady solutions.
One of the most important applications of curved duct flow is to enhance the thermal exchange between two sidewalls, because it is possible that the secondary flow may convey heat and then increases
heat flux between two sidewalls. Chandratilleke and Nursubyakto [16] presented numerical calculations to describe the secondary flow characteristics in the flow through curved ducts of aspect ratios
ranging from 1 to 8 that were heated on the outer wall, where they studied for small Dean numbers and compared the numerical results with their experimental data. Yanase et al. [13] studied
time-dependent behavior of the unsteady solutions for curved rectangular duct flow and showed that secondary flows enhance heat transfer in the flow. Mondal et al. [17] performed numerical prediction
of the unsteady solutions by time-evolution calculations of the thermal flow through a curved square duct and studied convective heat transfer in the flow. Recently Norouzi et al. [18] [19]
investigated fully developed flow and heat transfer of viscoelastic materials in curved square ducts under constant heat flux. Very recently, Chandratilleke and Narayanaswamy [20] numerically studied
vortex structure-based analysis of laminar flow and thermal characteristics in curved square and rectangular ducts. To the best of the authors' knowledge, however, no substantial work has yet been done on the transitional behavior of the unsteady solutions for thermal flows through a curved square duct under the combined effects of large Grashof number and large Dean number, which has very practical applications in fluids engineering, for example in internal combustion engines and gas turbines. Thus, from the scientific as well as engineering point of view, it is quite
interesting to study the unsteady flow behavior in the presence of strong buoyancy and centrifugal forces. Keeping this issue in mind, in this paper, a comprehensive numerical study is presented for
fully developed two-dimensional (2D) flow of a viscous incompressible fluid through a curved square duct, and the effects of secondary flows on convective heat transfer in the flow are studied.
2. Mathematical Formulations
Consider an incompressible viscous fluid streaming through a curved duct with square cross section whose width or height is 2d. The coordinate system is shown in Figure 1. It is assumed that the
temperature of the outer wall is Figure 1. The variables are non-dimensionalized by using the representative length d and the representative velocity
We introduce the non-dimensional variables defined as
where, u, v and w are the non-dimensional velocity components in the x, y and z directions, respectively; t is the non-dimensional time, P the non-dimensional pressure,
Then the basic equations for
Figure 1. Coordinate system of the curved square duct.
The Dean number Dn, the Grashof number Gr, and the Prandtl number Pr, which appear in Equations (2) to (4) are defined as
The rigid boundary conditions for
and the temperature
. (8)
In the present study, Dn and Gr vary while Pr and
3. Numerical Calculations
3.1. Method of Numerical Calculation
In order to solve the Equations (2) to (4) numerically the spectral method is used. This is the method which is thought to be the best numerical method to solve the Navier-Stokes equations as well as
the energy equation (Gottlieb and Orazag, [21] ). By this method the variables are expanded in a series of functions consisting of the Chebyshev polynomials. That is, the expansion functions
are expanded in terms of
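The expansion formulas themselves are elided in this extract, but the Chebyshev basis underlying such spectral methods can be generated from the standard three-term recurrence T_0(x) = 1, T_1(x) = x, T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x). The short sketch below is an editorial illustration of that basis, not part of the authors' numerical code:

```python
def chebyshev_T(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) on [-1, 1] via the
    three-term recurrence T_0 = 1, T_1 = x, T_{n+1} = 2*x*T_n - T_{n-1}."""
    t_prev, t_curr = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

# A spectral method approximates a field as a truncated sum
# u(x) ~ sum_k a_k * T_k(x); here we just evaluate one basis function.
print(chebyshev_T(3, 0.5))   # T_3(x) = 4x^3 - 3x, so 0.5 - 1.5 = -1.0
```

In the actual computation the flow variables are such truncated sums, and the governing PDEs reduce to equations for the expansion coefficients.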
3.2. Resistance Coefficient
The resistance coefficient
where quantities with an asterisk (*) denote dimensional ones,
4. Results and Discussion
4.1. Time Evolution of the Unsteady Solutions
Time evolution of the resistance coefficient l is performed for Figure 2(a). It is found that the flow is a steady-state solution for
the contours for the stream lines of the secondary flow patterns Figure 2(b), where it is seen that the unsteady flow is an asymmetric single- and two-vortex solution. It is found that as Gr
increases, the two-vortex solution ceases to be a single-vortex solution which covers the whole cross-section of the duct. We also investigated time-dependent solutions for Figure 3, where we find
that the unsteady flow is an asymmetric two-vortex steady-state solution.
Then, we investigated time-dependent solutions of l for Figure 4(a). As seen in Figure 4(a), the time-dependent flow is a periodic solution for Figure 4(b), where periodic flows are clearly observed.
Contours of secondary flow patterns and temperature profiles are shown in Figure 4(c) forFigure 5(a) explicitly shows time evolution of l for Figure 5(b) forFigure 5(b), the periodic oscillation for
Figure 5(c) for Gr = 1000, 1500 and 2000. In Figure 5(c), we see that the steady-state flows for Dn = 1000 and Gr = 1000, 1500 and 2000 are asymmetric two-vortex solutions.
We then performed time evolution of l for Dn = 1500 andFigure 6(a). As seen in Figure 6(a), the time-dependent flow for Figure 6(b) forFigure 7(a). As seen in Figure 7(a), the time-dependent flow for
Figure 7(b) for a clear view. Figure 7(c) shows typical contours of secondary flow patterns and temperature profiles for the steady-state solutions at Figure 7(d) shows those forFigure 8(a). As seen
in Figure 8(a), the time-dependent flow for Figure 9(a) and Figure 9(b) respectively show time-dependent solutions for Figure 9(a) and Figure 9(b), the unsteady flow is a time-periodic for Figure 9
(c) and Figure 9(d) respectively, where we see that the periodic or multi-periodic
flows are asymmetric two-vortex solutions.
Finally, the results of the time-dependent solutions for Figure 10(a) and separately in the successive figures. Figure 10(b) explicitly show time-dependent flow for
and Figure 10(c) in the Figure 10(c),
the time-dependent flow creates multiple orbits, which suggests that the flow is multi-periodic. Typical contours
of secondary flow patterns and temperature profiles are shown in Figure 10(d), and it is found that the flow oscillates between asymmetric two-vortex solutions. Then we explicitly show the result of
the time-dependent flow for Figure 11(a). Then, to be sure whether the flow is periodic, multi-periodic or chaotic, we draw the phase space of the time-dependent flow for Figure 11(b) and see that
the flow is a transitional chaos (Mondal [7] ). Then we draw typical contours of secondary flow patterns and temperature profiles for the transitional chaos at Figure 11(c). Figure 11(c) shows that
the flow is an asymmetric two-vortex solution. Then we perform time-evolution of l for Figure 12(a). As seen in Figure 12(a), the flow oscillates multi-periodically. In order to see the
characteristics of the multi-periodic oscillation, we draw the phase space of the time-dependent flow for Figure 12(b). It is found that the unsteady flow creates an irregular or multiple orbit, which means the flow presented in Figure 12(a) is chaotic rather than multi-periodic. The chaotic behavior is clearly justified by Figure 12(b). Then we draw typical contours of secondary flow patterns and temperature profiles for the chaotic oscillation at Figure 12(c). As seen in Figure 12(c), the chaotic flow oscillates irregularly between the asymmetric two-vortex solutions. The results of
the time-dependent flow for Figure 13(a) and Figure 14(a) respectively. As seen in Figure 13(a) and Figure 14(a), the unsteady flows at Figure 13(b) and Figure 14(b) respectively. It is found that
the two flows have nearly the same type of unsteady flow behavior. Typical contours of secondary flow patterns and temperature profiles for the periodic oscillations at Figure 13(c) and Figure 14(c)
respectively, where we see that the periodic flow oscillates between asymmetric two-vortex solutions. The temperature distribution is well consistent with the secondary vortices, and it becomes so
entangled when the secondary vortices become stronger. In this regard, it should be worth mentioning that irregular oscillation of the non-isothermal and isothermal flows has been observed
experimentally by Wang and Yang [10] for a curved square duct flow and by Ligrani and Niver [22] for flow through a curved rectangular duct of large aspect ratio.
Figure 12. (c) Secondary flow patterns (top) and temperature profiles (bottom) for
4.2. Phase Diagram in the Dn-Gr Plane
Finally, the distribution of the time-dependent solutions, obtained by the time evolution calculations of the curved square duct flows, is shown in Figure 15 in the Dean number versus Grashof number
(Dn-Gr) plane for Figure 15, the unsteady flow is always a steady-state solution for any value of Gr in the range Figure 15, the flow is also periodic/multi-periodic for Dn = 2500 at Gr = 1500 and
2000; for large Dean numbers, e.g. Dn = 3000, on the other hand, the unsteady flow changes in the scenario "periodic → chaotic → periodic" if Gr is increased.
5. Conclusion
A numerical study is presented for the time-dependent solutions of the flow through a curved square duct of constant curvature
Figure 15. Distribution of the time-dependent solutions in the Dean number vs. Grashof number (Dn-Gr) plane for
are also obtained, and it is found that periodic or multi-periodic solutions oscillate between asymmetric two- and four-vortex solutions, while for the chaotic solutions there exist only asymmetric two-vortex solutions. The temperature distribution is consistent with the secondary vortices, and it is found that heat is transferred substantially from the heated wall to the fluid as the secondary flow becomes stronger. The present study also shows that there is a strong interaction between the heating-induced buoyancy force and the centrifugal force in the curved passage which
stimulates fluid mixing and thus results in thermal enhancement in the flow.
*Corresponding author.
|
{"url":"https://scirp.org/journal/paperinformation?paperid=59944","timestamp":"2024-11-12T15:32:42Z","content_type":"application/xhtml+xml","content_length":"145062","record_id":"<urn:uuid:28900990-98c8-44ec-a40b-25b2353ad485>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00358.warc.gz"}
|
Hofstadter's tq system, from Gödel, Escher, Bach
Theorem schema:
• Symbols are t, q and -.
• Axiom-defining scheme: If x consists only of hyphens, then xt-qx is an axiom.
• Theorem production rule: If x, y and z consist only of hyphens, and xtyqz is known to be a theorem, then xty-qzx is also a theorem.
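These rules can be checked mechanically by enumerating theorems. Reading xtyqz as "x times y equals z" (the intended interpretation, under which the axiom schema xt-qx encodes x · 1 = x and makes the production rule consistent with multiplication), a minimal generator might look like:

```python
def tq_theorems(max_hyphens):
    """Enumerate tq-system theorems whose hyphen-strings have at most
    max_hyphens hyphens, representing a hyphen-string by its length."""
    # Axioms: xt-qx for every non-empty hyphen-string x  (x * 1 = x).
    frontier = {(x, 1, x) for x in range(1, max_hyphens + 1)}
    theorems = set()
    while frontier:
        x, y, z = frontier.pop()
        theorems.add((x, y, z))
        # Production rule: xtyqz  ==>  xty-qzx  (x * (y+1) = z + x).
        if y + 1 <= max_hyphens and z + x <= max_hyphens:
            frontier.add((x, y + 1, z + x))
    return {"-" * x + "t" + "-" * y + "q" + "-" * z for x, y, z in theorems}

thms = tq_theorems(6)
print("--t--q----" in thms)   # 2 * 2 = 4, derived from the axiom --t-q--; True
```

Every theorem then encodes a true multiplication fact; in Gödel, Escher, Bach this is the stepping stone to capturing composite numbers, which are exactly the z for which some theorem xtyqz has x and y of at least two hyphens.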
index | 2021-02-10
|
{"url":"https://oatcookies.neocities.org/cs/tq_system","timestamp":"2024-11-04T02:05:48Z","content_type":"text/html","content_length":"2654","record_id":"<urn:uuid:e0f1e41d-a9c4-4ab1-b31e-5f3601fe7da7>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00511.warc.gz"}
|
Learning Task 2: "Bonding Time"

Suppose that you and your friends will have a short bicycle ride around Sampaloc Lake, which is considered the largest among the 7 lakes in San Pablo City. After biking, you and your friends decided to buy an ice cream where you can choose from different flavors such as chocolate, vanilla, cheese, or mango. And for the toppings, you can select from marshmallows, nips, or nuts. How many possible combinations of flavors and toppings do you have?

Guide Questions:
a. How many flavors of ice cream do you have?
b. How many choices of toppings do you have?
c. What are your possible choices of ice cream and its toppings? Show them all in a tree diagram.
d. Using systematic listing, how many different choices of ice cream and toppings are there?
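By the fundamental counting principle the answer is 4 flavors × 3 toppings = 12 combinations. The systematic listing asked for in (d) can also be produced mechanically; this small sketch is an editorial illustration, not part of the worksheet:

```python
from itertools import product

flavors = ["chocolate", "vanilla", "cheese", "mango"]
toppings = ["marshmallows", "nips", "nuts"]

# Systematic listing: every (flavor, topping) pair appears exactly once.
combos = list(product(flavors, toppings))
for flavor, topping in combos:
    print(f"{flavor} with {topping}")
print(len(combos))   # 4 * 3 = 12
```

Each branch of the tree diagram in (c) corresponds to one pair in this listing.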
|
{"url":"http://redmondmathblog.com/algebra/learning-task-2-bonding-time-suppose-that-you-and-your-friends-will-have-a-short-bicycle-ride-around-sampaloc-lake-which-is-considered-as-the-largest-among-guide-questions-learning-task-2-the-7-lakes-in-san-pablo-city-after-biking-you-and-your","timestamp":"2024-11-03T20:26:41Z","content_type":"text/html","content_length":"29077","record_id":"<urn:uuid:a8f167ec-f0f4-4aaa-b1fe-d459f06ac1ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00342.warc.gz"}
|
Ph.D. Preliminary Examination
Syllabus for Mathematical Science Cluster in Applied Mathematics, Ph.D. Preliminary Examination
Please check the current Graduate Handbook for more information. There are two related programs in Applied Mathematics leading to the PhD degree - Mathematical Science and Applications-Oriented
Mathematics. These two programs provide frameworks for the study of mathematics and its interactions with science and engineering. In addition, the members of the applied mathematics group have
interests in certain collateral areas. Current topics in the Mathematical Science program include:
Mathematical Science:
Scattering theory, wave propagation, statistical mechanics, electrodynamics, acoustics, plasmas, magnetohydrodynamics, elasticity, critical phenomena, fluid mechanics, geophysical fluid dynamics,
and mathematical biology.
Prelim #6. Mathematical Science Cluster:
Students may prepare for the written prelim in Applied Mathematics by taking the two of the following three courses:
• Math 574 Applied Optimal Control
• Math 580(590) Introduction to the Mathematics of Fluid Dynamics
• Math 586(590) Computational Finance
Collateral Areas: Scientific supercomputing, control theory, parallel computational control, parallel scheduling, stochastic modeling, queuing theory and computer performance evaluation, numerical
analysis, and symbolic computation.
The department will also work with other departments on combined study programs and joint degrees to meet individual needs and special interests of students. The department's broad spectrum of
activities includes group theory, classical and functional analysis, differential geometry and topology, statistics and probability, and computational science. Joint programs of any of these and
Applied Mathematics may be arranged.
Applied Mathematics Advisor: Students' programs of doctoral study in Applied Mathematics should be made in close consultation with an Applied Mathematics advisor and will depend upon their interests
and research area. Programs should be arranged so that 500-level courses leading to the two written prelims and to the fulfillment of the minor requirement are taken early.
Doctoral Minor Requirement: The doctoral minor requirement should be designed in consultation with an Applied Mathematics advisor and in accordance with the department regulations. The minor
typically consists of a sequence of two 500 level courses either in the department or in an outside department. If the minor courses are in the department, the two courses may be chosen from the list
issued by the Graduate Studies Committee. Typically these courses are required for one of the preliminary examinations in clusters outside of applied mathematics, such as Combinatorics, Algorithms
and Complexity, Computational Science, Analysis, etc. Any other sequence of the department's courses or courses in an outside department must be approved in advance by the Director of Graduate
Studies. A minor in an outside department is recommended for students interested in a specific application area such as plasma physics, fluid dynamics, elasticity, scattering, or neuroscience.
This is one of two cluster examinations required for the Applied Mathematics Option. The examinee is required to answer at least 3 out of 6 questions from material covered in other Mathematical Sciences courses offered in recent terms. Usually 9 or more questions will be offered on the exam. A perfect score consists of answering 5 questions correctly. The questions deal with the mathematical
formulation and solution of problems stated in physical contexts. The basic topics for the examination are discrete and continuum mechanics, electromagnetics, scattering theory, wave propagation,
diffusion theory, applied optimal control theory and computation, computational and mathematical finance, mathematical biology, and problems from other physical sciences and engineering. Some exams
in the file indicate the intent and level.
Topics in Mathematical Models in the Sciences:
• Optimal Control - Optimal control theory, calculus of variations, maximum principle, dynamic programming, feedback control, linear systems with quadratic criteria, singular control, stochastic
differential equations, Gaussian and Poisson noise, stochastic control.
• Fluid mechanics - Navier-Stokes equations for viscous flow, Euler equations for inviscid flow; Prandtl boundary layer.
• Computational Finance - Pricing of derivative instruments such as options, interest rates and other contracts; computation of fair market prices.
Former Topics in Mathematical Science:
• Classical mechanics - generalized coordinates, central force motion, electrostatics, potential theory, energy conservation, nonlinear vibrations.
• Elasticity - static and dynamic problems including linear elastic waves; beam theory; biharmonic equation; Beltrami-Michell equation; Poisson's equation.
• Biology processes - population dynamics, (logistic and age structure equation), interacting populations (predator-prey and competition).
Course References:
• David J. Acheson, Elementary Fluid Dynamics, Oxford Applied Mathematics and Computing Science Series, Oxford University Press, 1990 (Math 590 Friedlander).
• R. F. Stengel, Optimal Control and Estimation, Dover Paperback, 1994 (Math 574).
• D. E. Kirk, Optimal Control Theory: An Introduction, Prentice-Hall, 1970 (Math 574).
• C. C. Lin and L. Segel, Mathematics Applied to Deterministic Problems (Math 580).
• L. Segel, Mathematics Applied to Continuum Mechanics (Math 580).
• P. Wilmott, Howison, Dewynne, The Mathematics of Financial Derivatives: A Student Introduction, Cambridge, 1995 (Math 590/58X?).
General References:
• Courant & Hilbert, Methods of Mathematical Physics, Vol. 2
Web Source: http://www.math.uic.edu/~hanson/prelmspsyl.html
Email Comments or Questions to Professor Hanson
|
{"url":"http://homepages.math.uic.edu/~applmath/prelmspsyl.html","timestamp":"2024-11-04T21:44:38Z","content_type":"text/html","content_length":"7028","record_id":"<urn:uuid:28f75ec5-93c4-4f14-971c-8ee1ab51f617>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00694.warc.gz"}
|
Amortization calculator - Mortgage calculator
Loan Amount / Lånebeløb
Interest Rate (APR) / Rente
Term (months) / Terminer (måneder)
Amortization calculator - Mortgage calculator
Renteberegner - Låneberegner
Use this calculator (input your actual figures in the fields to the left and click "Calculate / Beregn") to calculate your Monthly Payment (Månedlig Afdrag) on a Loan (Lån) with a given Interest (Rente) over a number of Terms (Terminer).
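The underlying computation is the standard annuity formula M = P * r / (1 - (1 + r)^(-n)), where P is the loan amount, r the monthly rate (APR / 12) and n the number of terms. The calculator's own script is not shown on this page, so the following is an illustrative reimplementation of that formula:

```python
def monthly_payment(principal, apr_percent, months):
    """Fixed monthly payment for a fully amortizing loan.

    principal   : loan amount (Lånebeløb)
    apr_percent : nominal annual rate in percent (Rente)
    months      : number of monthly terms (Terminer)
    """
    r = apr_percent / 100.0 / 12.0           # periodic (monthly) rate
    if r == 0:
        return principal / months            # zero-interest edge case
    return principal * r / (1.0 - (1.0 + r) ** -months)

# Example: 100,000 borrowed at 6% APR over 30 years (360 months).
print(round(monthly_payment(100_000, 6.0, 360), 2))   # ~599.55
```

This sketch assumes simple monthly compounding of the nominal APR; real mortgage products with fees or other compounding conventions would need adjustments.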
|
{"url":"http://dooley.dk/Amortization-and-Mortgage-Calculator-Rente-og-Laane-Regnemaskine.htm","timestamp":"2024-11-13T09:17:39Z","content_type":"text/html","content_length":"4772","record_id":"<urn:uuid:8664bbd8-f89b-4125-a37a-871ec6571152>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00635.warc.gz"}
|
numpy ufuncs and COREPY - any info?
21 May 2009, 7:57 a.m.
hi all, has anyone already tried to compare using an ordinary numpy ufunc vs that one from corepy, first of all I mean the project http://socghop.appspot.com/student_project/show/google/gsoc2009/
python/t1240... It would be interesting to know what is speedup for (eg) vec ** 0.5 or (if it's possible - it isn't pure ufunc) numpy.dot(Matrix, vec). Or any another example.
dmitrey wrote:
hi all, has anyone already tried to compare using an ordinary numpy ufunc vs that one from corepy, first of all I mean the project http://socghop.appspot.com/student_project/show/google/gsoc2009/
It would be interesting to know what is speedup for (eg) vec ** 0.5 or (if it's possible - it isn't pure ufunc) numpy.dot(Matrix, vec). Or any another example.
I have no experience with the mentioned CorePy, but recently I was playing around with accelerated ufuncs using Intels Math Kernel Library (MKL). These improvements are now part of the numexpr
package http://code.google.com/p/numexpr/ Some remarks on possible speed improvements on recent Intel x86 processors. 1) basic arithmetic ufuncs (add, sub, mul, ...) in standard numpy are fast (SSE
is used) and speed is limited by memory bandwidth. 2) the speed of many transcendental functions (exp, sin, cos, pow, ...) can be improved by _roughly_ a factor of five (single core) by using the
MKL. Most of the improvements stem from using faster algorithms with a vectorized implementation. Note: the speed improvement depends on a _lot_ of other circumstances. 3) Improving performance by
using multi cores is much more difficult. Only for sufficiently large (>1e5) arrays a significant speedup is possible. Where a speed gain is possible, the MKL uses several cores. Some experimentation
showed that adding a few OpenMP constructs you could get a similar speedup with numpy. 4) numpy.dot uses optimized implementations. Gregor
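[Editor's note] Gregor's points (1) and (2), that basic arithmetic ufuncs are memory-bandwidth-bound while transcendental ufuncs are CPU-bound, can be sanity-checked with a small timing sketch in plain NumPy. This is an illustration added here, not code from the thread, and the observed ratio will vary by machine and NumPy version:

```python
import timeit
import numpy as np

x = np.random.rand(1_000_000)

# A bandwidth-bound elementwise add vs. a CPU-bound transcendental
# ufunc over the same million-element array.
t_add = timeit.timeit(lambda: x + x, number=20)
t_exp = timeit.timeit(lambda: np.exp(x), number=20)

print(f"add: {t_add:.4f}s  exp: {t_exp:.4f}s  ratio: {t_exp / t_add:.1f}x")
```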
On Friday 22 May 2009 11:42:56, Gregor Thalhammer wrote:
dmitrey wrote:
hi all, has anyone already tried to compare using an ordinary numpy ufunc vs that one from corepy, first of all I mean the project http://socghop.appspot.com/student_project/show/google/
gsoc2009/python/t1 24024628235
It would be interesting to know what is speedup for (eg) vec ** 0.5 or (if it's possible - it isn't pure ufunc) numpy.dot(Matrix, vec). Or any another example.
I have no experience with the mentioned CorePy, but recently I was playing around with accelerated ufuncs using Intels Math Kernel Library (MKL). These improvements are now part of the numexpr
package http://code.google.com/p/numexpr/ Some remarks on possible speed improvements on recent Intel x86 processors. 1) basic arithmetic ufuncs (add, sub, mul, ...) in standard numpy are fast
(SSE is used) and speed is limited by memory bandwidth. 2) the speed of many transcendental functions (exp, sin, cos, pow, ...) can be improved by _roughly_ a factor of five (single core) by
using the MKL. Most of the improvements stem from using faster algorithms with a vectorized implementation. Note: the speed improvement depends on a _lot_ of other circumstances. 3) Improving
performance by using multi cores is much more difficult. Only for sufficiently large (>1e5) arrays a significant speedup is possible. Where a speed gain is possible, the MKL uses several cores.
Some experimentation showed that adding a few OpenMP constructs you could get a similar speedup with numpy. 4) numpy.dot uses optimized implementations.
Good points Gregor. However, I wouldn't say that improving performance by using multi cores is *that* difficult, but rather that multi cores can only be used efficiently *whenever* the memory
bandwith is not a limitation. An example of this is the computation of transcendental functions, where, even using vectorized implementations, the computation speed is still CPU-bounded in many
cases. And you have experimented yourself very good speed-ups for these cases with your implementation of numexpr/MKL :) Cheers, -- Francesc Alted
(sending again) Hi, I'm the student doing the project. I have a blog here, which contains some initial performance numbers for a couple test ufuncs I did: http://numcorepy.blogspot.com It's really
too early yet to give definitive results though; GSoC officially starts in two days :) What I'm finding is that the existing ufuncs are already pretty fast; it appears right now that the main
limitation is memory bandwidth. If that's really the case, the performance gains I'll get will be through cache tricks (non-temporal loads/stores), reducing memory accesses and using multiple cores
to get more bandwidth. Another alternative we've talked about, and I (more and more likely) may look into is composing multiple operations together into a single ufunc. Again the main idea being that
memory accesses can be reduced/eliminated.

Andrew

dmitrey wrote:
hi all, has anyone already tried to compare using an ordinary numpy ufunc vs one from corepy, first of all I mean the project http://socghop.appspot.com/student_project/show/google/gsoc2009/ It would be interesting to know what the speedup is for (e.g.) vec ** 0.5 or (if it's possible - it isn't a pure ufunc) numpy.dot(Matrix, vec). Or any other example.
_______________________________________________ Numpy-discussion mailing list Numpy-discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Francesc Alted wrote:
A Friday 22 May 2009 11:42:56 Gregor Thalhammer escrigué:
dmitrey schrieb:
[...]
3) Improving performance by using multi cores is much more difficult. Only for sufficiently large (>1e5) arrays a significant speedup is possible. Where a speed gain is possible, the MKL uses several cores. Some experimentation showed that adding a few OpenMP constructs you could get a similar speedup with numpy.
4) numpy.dot uses optimized implementations.
Good points Gregor. However, I wouldn't say that improving performance by using multi cores is *that* difficult, but rather that multi cores can only be used efficiently *whenever* the memory
bandwith is not a limitation. An example of this is the computation of transcendental functions, where, even using vectorized implementations, the computation speed is still CPU-bounded in many
cases. And you have experimented yourself very good speed-ups for these cases with your implementation of numexpr/MKL :)
Using multiple cores is pretty easy for element-wise ufuncs; no communication needs to occur and the work partitioning is trivial. And actually I've found with some initial testing that multiple cores do still help when you are memory bound. I don't fully understand why yet, though I have some ideas. One reason is multiple memory controllers due to multiple sockets (i.e. Opteron). Another is that each thread is pulling memory from a different bank, utilizing more bandwidth than a single sequential thread could. However if that's the case, we could possibly come up with code for a single thread that achieves (nearly) the same additional throughput..

Andrew
A Friday 22 May 2009 13:59:17 Andrew Friedley escrigué:
Using multiple cores is pretty easy for element-wise ufuncs; no communication needs to occur and the work partitioning is trivial. And actually I've found with some initial testing that multiple
cores does still help when you are memory bound. I don't fully understand why yet, though I have some ideas. One reason is multiple memory controllers due to multiple sockets (ie opteron).
Yeah. I think this must likely be the reason. If, as in your case, you have several independent paths from different processors to your data, then you can achieve speed-ups even if you are having a
memory bound in a one-processor scenario.
Another is that each thread is pulling memory from a different bank, utilizing more bandwidth than a single sequential thread could. However if that's the case, we could possibly come up with
code for a single thread that achieves (nearly) the same additional throughput..
Well, I don't think you can achieve important speed-ups in this case, but experimenting never hurts :) Good luck! -- Francesc Alted
A Friday 22 May 2009 13:52:46 Andrew Friedley escrigué:
(sending again)
I'm the student doing the project. I have a blog here, which contains some initial performance numbers for a couple test ufuncs I did:
It's really too early yet to give definitive results though; GSoC officially starts in two days :) What I'm finding is that the existing ufuncs are already pretty fast; it appears right now that
the main limitation is memory bandwidth. If that's really the case, the performance gains I'll get will be through cache tricks (non-temporal loads/stores), reducing memory accesses and using
multiple cores to get more bandwidth.
Another alternative we've talked about, and I (more and more likely) may look into is composing multiple operations together into a single ufunc. Again the main idea being that memory accesses
can be reduced/eliminated.
IMHO, composing multiple operations together is the most promising venue for leveraging current multicore systems. Another interesting approach is to implement costly operations (from the point of view of CPU resources), namely transcendental functions like sin, cos or tan, but also others like sqrt or pow, in a parallel way. If, besides, you can combine this with vectorized versions of them (by using the widespread SSE2 instruction set, see [1] for an example), then you would be able to achieve really good results for sure (at least Intel did with its VML library ;)

[1] http://gruntthepeon.free.fr/ssemath/

Cheers, -- Francesc Alted
For some reason the list seems to occasionally drop my messages...

Francesc Alted wrote:
A Friday 22 May 2009 13:52:46 Andrew Friedley escrigué:
I'm the student doing the project. I have a blog here, which contains some initial performance numbers for a couple test ufuncs I did:
Another alternative we've talked about, and I (more and more likely) may look into is composing multiple operations together into a single ufunc. Again the main idea being that memory
accesses can be reduced/eliminated.
IMHO, composing multiple operations together is the most promising venue for leveraging current multicore systems.
Agreed -- our concern when considering for the project was to keep the scope reasonable so I can complete it in the GSoC timeframe. If I have time I'll definitely be looking into this over the
summer; if not later.
Another interesting approach is to implement costly operations (from the point of view of CPU resources), namely, transcendental functions like sin, cos or tan, but also others like sqrt or pow)
in a parallel way. If besides, you can combine this with vectorized versions of them (by using the well spread SSE2 instruction set, see [1] for an example), then you would be able to achieve
really good results for sure (at least Intel did with its VML library ;)
[1] http://gruntthepeon.free.fr/ssemath/
I've seen that page before. Using another source [1] I came up with a quick/dirty cos ufunc. Performance is crazy good compared to NumPy (100x); see the latest post on my blog for a little more info.
I'll look at the source myself when I get time again, but is NumPy using a Python-based cos function, a C implementation, or something else? As I wrote in my blog, the performance gain is almost too
good to believe. [1] http://www.devmaster.net/forums/showthread.php?t=5784 Andrew
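For reference, the kind of approximation the devmaster thread describes can be sketched in pure NumPy. This is my own illustration of the speed/accuracy trade-off under discussion, not Andrew's code: the "parabola" sine approximation is trivially vectorizable (and SIMD-friendly in C), but its error is several percent, far from libm accuracy.

```python
import numpy as np

def fast_cos(x):
    """Low-accuracy vectorized cos using the 'parabola' sine
    approximation from the devmaster thread:
        sin(t) ~ (4/pi)*t - (4/pi^2)*t*|t|   for t in [-pi, pi]
    with cos(x) = sin(x + pi/2).  Max error is about 0.056."""
    t = x + np.pi / 2.0
    t = (t + np.pi) % (2.0 * np.pi) - np.pi   # range-reduce to [-pi, pi)
    return (4.0 / np.pi) * t - (4.0 / np.pi**2) * t * np.abs(t)

x = np.linspace(-10.0, 10.0, 10001)
err = np.max(np.abs(fast_cos(x) - np.cos(x)))   # roughly 0.056
```

An error of a few percent is fine for graphics or audio but, as noted later in the thread, generally unacceptable for scientific use; that is exactly the accuracy question raised below about the 100x figure.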
On Mon, May 25, 2009 at 4:59 AM, Andrew Friedley <afriedle@indiana.edu>wrote:
For some reason the list seems to occasionally drop my messages...
Francesc Alted wrote:
A Friday 22 May 2009 13:52:46 Andrew Friedley escrigué:
I'm the student doing the project. I have a blog here, which contains some initial performance numbers for a couple test ufuncs I did:
Another alternative we've talked about, and I (more and more likely) may look into is composing multiple operations together into a single ufunc. Again the main idea being that memory
accesses can be reduced/eliminated.
IMHO, composing multiple operations together is the most promising venue for leveraging current multicore systems.
Agreed -- our concern when considering for the project was to keep the scope reasonable so I can complete it in the GSoC timeframe. If I have time I'll definitely be looking into this over the
summer; if not later.
Another interesting approach is to implement costly operations (from the point of view of CPU resources), namely, transcendental functions like sin, cos or tan, but also others like sqrt or
pow) in a parallel way. If besides, you can combine this with vectorized versions of them (by using the well spread SSE2 instruction set, see [1] for an example), then you would be able to
achieve really good results for sure (at least Intel did with its VML library ;)
[1] http://gruntthepeon.free.fr/ssemath/
I've seen that page before. Using another source [1] I came up with a quick/dirty cos ufunc. Performance is crazy good compared to NumPy (100x); see the latest post on my blog for a little more
info. I'll look at the source myself when I get time again, but is NumPy using a Python-based cos function, a C implementation, or something else? As I wrote in my blog, the performance gain is
almost too good to believe.
Numpy uses the C library version. If long double and float aren't available the double version is used with number conversions, but that shouldn't give a factor of 100x. Something else is going on.
A Monday 25 May 2009 12:59:31 Andrew Friedley escrigué:
For some reason the list seems to occasionally drop my messages...
Francesc Alted wrote:
A Friday 22 May 2009 13:52:46 Andrew Friedley escrigué:
I'm the student doing the project. I have a blog here, which contains some initial performance numbers for a couple test ufuncs I did:
Another alternative we've talked about, and I (more and more likely) may look into is composing multiple operations together into a single ufunc. Again the main idea being that memory
accesses can be reduced/eliminated.
IMHO, composing multiple operations together is the most promising venue for leveraging current multicore systems.
Agreed -- our concern when considering for the project was to keep the scope reasonable so I can complete it in the GSoC timeframe. If I have time I'll definitely be looking into this over the
summer; if not later.
You should know that Numexpr has already started down this path for some time now. The fact that it can already evaluate complex array expressions like 'a+b*cos(c)' without using temporaries (like NumPy does) should allow it to use multiple cores without stressing the memory bus too much. I've been planning to implement such parallelism in Numexpr for some time now, but I'm not there yet.
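To make the temporaries point concrete, here is a small sketch of mine (not Numexpr's implementation): plain NumPy allocates a full-size intermediate for each sub-expression of 'a+b*cos(c)', while the same expression can be fused by hand with `out=` arguments so only one scratch buffer is touched. Numexpr obtains essentially this memory-traffic saving automatically, block by block.

```python
import numpy as np

n = 1_000_000
a, b, c = (np.random.rand(n) for _ in range(3))

# Plain NumPy: cos(c) and b*cos(c) are full-size temporaries,
# so the expression makes several passes over memory.
naive = a + b * np.cos(c)

# Hand-fused version: a single scratch buffer reused via out=,
# so no extra temporaries are allocated.
tmp = np.cos(c)
np.multiply(b, tmp, out=tmp)
np.add(a, tmp, out=tmp)
```

Both paths compute the same values; the fused one simply moves less data through the memory bus, which is the bottleneck identified earlier in the thread.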
I've seen that page before. Using another source [1] I came up with a quick/dirty cos ufunc. Performance is crazy good compared to NumPy (100x); see the latest post on my blog for a little more
info. I'll look at the source myself when I get time again, but is NumPy using a Python-based cos function, a C implementation, or something else? As I wrote in my blog, the performance gain is
almost too good to believe.
[1] http://www.devmaster.net/forums/showthread.php?t=5784
100x? Uh, sounds really impressing... -- Francesc Alted
Charles R Harris wrote:
On Mon, May 25, 2009 at 4:59 AM, Andrew Friedley <afriedle@indiana.edu <mailto:afriedle@indiana.edu>> wrote:
For some reason the list seems to occasionally drop my messages...
Francesc Alted wrote:
[...]

> I've seen that page before. Using another source [1] I came up with a quick/dirty cos ufunc. Performance is crazy good compared to NumPy (100x); see the latest post on my blog for a little more info. I'll look at the source myself when I get time again, but is NumPy using a Python-based cos function, a C implementation, or something else? As I wrote in my blog, the performance gain is almost too good to believe.

Numpy uses the C library version. If long double and float aren't available the double version is used with number conversions, but that shouldn't give a factor of 100x. Something else is going on.
I think something is wrong with the measurement method - on my machine, computing the cos of an array of double takes roughly ~400 cycles/item for arrays with a reasonable size (> 1e3 items). Taking
4 cycles/item for cos would be very impressive :) David
A Tuesday 26 May 2009 03:11:56 David Cournapeau escrigué:
Charles R Harris wrote:
On Mon, May 25, 2009 at 4:59 AM, Andrew Friedley <afriedle@indiana.edu <mailto:afriedle@indiana.edu>> wrote:
For some reason the list seems to occasionally drop my messages...
[...]

> Numpy uses the C library version. If long double and float aren't available the double version is used with number conversions, but that shouldn't give a factor of 100x. Something else is going on.

I think something is wrong with the measurement method - on my machine, computing the cos of an array of double takes roughly ~400 cycles/item for arrays with a reasonable size (> 1e3 items). Taking 4 cycles/item for cos would be very impressive :)
Well, it is Andrew who should demonstrate that his measurement is correct, but in principle, 4 cycles/item *should* be feasible when using 8 cores in parallel. In [1] one can see how Intel achieves (with its VML kernel) a cos() in less than 23 cycles on one single core. Having 8 cores in parallel would allow, in theory, reaching 3 cycles/item.

[1] http://www.intel.com/software/products/mkl/data/vml/functions/_performanceal...

-- Francesc Alted
Francesc Alted wrote:
Well, it is Andrew who should demonstrate that his measurement is correct, but in principle, 4 cycles/item *should* be feasible when using 8 cores in parallel.
But the 100x speed increase is for one core only, unless I misread the table. And I should have mentioned that 400 cycles/item for cos is on a Pentium 4, which has dreadful performance (defective L1). On a much better Core Duo Extreme something, I get 100 cycles/item (on a 64-bit machine, though, and not the same compiler, although I guess the libm version is what matters the most here). And let's not forget that there is the Python wrapping cost: by doing everything in C, I got ~200 cycles/cos on the P4, and ~60 cycles/cos on the Core 2 Duo (for double), using the rdtsc performance counter. All this for 1024 items in the array, so a very optimistic use case (everything in cache 2 if not 1). This shows that the Python wrapping cost is not so high, making the 100x claim a bit doubtful without more details on the way speed was measured.

cheers, David
David Cournapeau wrote:
Francesc Alted wrote:
Well, it is Andrew who should demonstrate that his measurement is correct, but in principle, 4 cycles/item *should* be feasible when using 8 cores in parallel.
But the 100x speed increase is for one core only unless I misread the table. And I should have mentioned that 400 cycles/item for cos is on a pentium 4, which has dreadful performances (defective
L1). On a much better core duo extreme something, I get 100 cycles / item (on a 64 bits machines, though, and not same compiler, although I guess the libm version is what matters the most here).
And let's not forget that there is the python wrapping cost: by doing everything in C, I got ~ 200 cycle/cos on the PIV, and ~60 cycles/cos on the core 2 duo (for double), using the rdtsc
performance counter. All this for 1024 items in the array, so very optimistic usecase (everything in cache 2 if not 1).
This shows that python wrapping cost is not so high, making the 100x claim a bit doubtful without more details on the way to measure speed.
I appreciate all the discussion this is creating. I wish I could work on this more right now; I have a big paper deadline coming up June 1 that I need to focus on. Yes, you're reading the table right. I should have been more clear on what my implementation is doing. It's using SIMD, so performing 4 cosines at a time where a libm cosine is only doing one. Also I don't think libm transcendentals are known for being fast; I'm also likely gaining performance by using a well-optimized but less accurate approximation. In fact a little more inspection shows my accuracy decreases as the input values increase; I will probably need to take a performance hit to fix this.

I went and wrote code to use the libm fcos() routine instead of my cos code. Performance is equivalent to numpy, plus an overhead:

inp sizes      1024    10240   102400   1024000   3072000
numpy        0.7282   9.6278 115.5976  993.5738 3017.3680
lmcos 1      0.7594   9.7579 116.7135 1039.5783 3156.8371
lmcos 2      0.5274   5.7885  61.8052  537.8451 1576.2057
lmcos 4      0.5172   5.1240  40.5018  313.2487  791.9730
corepy 1     0.0142   0.0880   0.9566    9.6162   28.4972
corepy 2     0.0342   0.0754   0.6991    6.1647   15.3545
corepy 4     0.0596   0.0963   0.5671    4.9499   13.8784

The times I show are in milliseconds; the system used is a dual-socket dual-core 2GHz Opteron. I'm testing at the ufunc level, like this:

def benchmark(fn, args):
    avgtime = 0
    fn(*args)
    for i in xrange(7):
        t1 = time.time()
        fn(*args)
        t2 = time.time()
        tm = t2 - t1
        avgtime += tm
    return avgtime / 7

Where fn is a ufunc, i.e. numpy.cos. So I prime the execution once, then do 7 timings and take the average. I always appreciate suggestions on a better way to benchmark things.

Andrew
A Tuesday 26 May 2009 15:14:39 Andrew Friedley escrigué:
David Cournapeau wrote:
Francesc Alted wrote:
Well, it is Andrew who should demonstrate that his measurement is correct, but in principle, 4 cycles/item *should* be feasible when using 8 cores in parallel.
But the 100x speed increase is for one core only unless I misread the table. And I should have mentioned that 400 cycles/item for cos is on a pentium 4, which has dreadful performances
(defective L1). On a much better core duo extreme something, I get 100 cycles / item (on a 64 bits machines, though, and not same compiler, although I guess the libm version is what matters
the most here).
And let's not forget that there is the python wrapping cost: by doing everything in C, I got ~ 200 cycle/cos on the PIV, and ~60 cycles/cos on the core 2 duo (for double), using the rdtsc
performance counter. All this for 1024 items in the array, so very optimistic usecase (everything in cache 2 if not 1).
This shows that python wrapping cost is not so high, making the 100x claim a bit doubtful without more details on the way to measure speed.
I appreciate all the discussion this is creating. I wish I could work on this more right now; I have a big paper deadline coming up June 1 that I need to focus on.
Yes, you're reading the table right. I should have been more clear on what my implementation is doing. It's using SIMD, so performing 4 cosine's at a time where a libm cosine is only doing one.
Also I don't think libm trancendentals are known for being fast; I'm also likely gaining performance by using a well-optimized but less accurate approximation. In fact a little more inspection
shows my accuracy decreases as the input values increase; I will probably need to take a performance hit to fix this.
I went and wrote code to use the libm fcos() routine instead of my cos code. Performance is equivalent to numpy, plus an overhead:
inp sizes      1024    10240   102400   1024000   3072000
numpy        0.7282   9.6278 115.5976  993.5738 3017.3680
lmcos 1      0.7594   9.7579 116.7135 1039.5783 3156.8371
lmcos 2      0.5274   5.7885  61.8052  537.8451 1576.2057
lmcos 4      0.5172   5.1240  40.5018  313.2487  791.9730
corepy 1     0.0142   0.0880   0.9566    9.6162   28.4972
corepy 2     0.0342   0.0754   0.6991    6.1647   15.3545
corepy 4     0.0596   0.0963   0.5671    4.9499   13.8784
The times I show are in milliseconds; the system used is a dual-socket dual-core 2ghz opteron. I'm testing at the ufunc level, like this:
def benchmark(fn, args):
    avgtime = 0
    fn(*args)
    for i in xrange(7):
        t1 = time.time()
        fn(*args)
        t2 = time.time()
        tm = t2 - t1
        avgtime += tm
    return avgtime / 7
Where fn is a ufunc, ie numpy.cos. So I prime the execution once, then do 7 timings and take the average. I always appreciate suggestions on better way to benchmark things.
No, that seems good enough. But maybe you can present results in cycles/item. This is a relatively common unit and has the advantage that it does not depend on the frequency of your cores. --
Francesc Alted
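Converting the milliseconds above into Francesc's suggested unit is a one-liner. The sketch below is mine, and it assumes a fixed 2 GHz clock (the Opteron frequency Andrew quotes); frequency scaling would of course distort it.

```python
def cycles_per_item(time_ms, n_items, cpu_ghz=2.0):
    """Convert a wall-clock timing in milliseconds to CPU cycles per
    array element, assuming a fixed clock frequency."""
    return time_ms * 1e-3 * cpu_ghz * 1e9 / n_items

# numpy's cos over 3,072,000 doubles took ~3017 ms in the table above:
numpy_cpi = cycles_per_item(3017.3680, 3_072_000)   # ~1965 cycles/item
# the corepy version with 4 threads took ~13.9 ms:
corepy_cpi = cycles_per_item(13.8784, 3_072_000)    # ~9 cycles/item
```

The ~1965 cycles/item figure for numpy is the number David calls "very slow" in his reply; in cycles/item the anomaly is immediately visible, which is the point of the unit.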
Francesc Alted wrote:
A Tuesday 26 May 2009 15:14:39 Andrew Friedley escrigué:
David Cournapeau wrote:
Francesc Alted wrote:
Well, it is Andrew who should demonstrate that his measurement is correct, but in principle, 4 cycles/item *should* be feasible when using 8 cores in parallel.
But the 100x speed increase is for one core only unless I misread the table. And I should have mentioned that 400 cycles/item for cos is on a pentium 4, which has dreadful performances
(defective L1). On a much better core duo extreme something, I get 100 cycles / item (on a 64 bits machines, though, and not same compiler, although I guess the libm version is what
matters the most here).
And let's not forget that there is the python wrapping cost: by doing everything in C, I got ~ 200 cycle/cos on the PIV, and ~60 cycles/cos on the core 2 duo (for double), using the rdtsc
performance counter. All this for 1024 items in the array, so very optimistic usecase (everything in cache 2 if not 1).
This shows that python wrapping cost is not so high, making the 100x claim a bit doubtful without more details on the way to measure speed.
I appreciate all the discussion this is creating. I wish I could work on this more right now; I have a big paper deadline coming up June 1 that I need to focus on.
Yes, you're reading the table right. I should have been more clear on what my implementation is doing. It's using SIMD, so performing 4 cosine's at a time where a libm cosine is only doing
one. Also I don't think libm trancendentals are known for being fast; I'm also likely gaining performance by using a well-optimized but less accurate approximation. In fact a little more
inspection shows my accuracy decreases as the input values increase; I will probably need to take a performance hit to fix this.
I went and wrote code to use the libm fcos() routine instead of my cos code. Performance is equivalent to numpy, plus an overhead:
inp sizes      1024    10240   102400   1024000   3072000
numpy        0.7282   9.6278 115.5976  993.5738 3017.3680
lmcos 1      0.7594   9.7579 116.7135 1039.5783 3156.8371
lmcos 2      0.5274   5.7885  61.8052  537.8451 1576.2057
lmcos 4      0.5172   5.1240  40.5018  313.2487  791.9730
corepy 1     0.0142   0.0880   0.9566    9.6162   28.4972
corepy 2     0.0342   0.0754   0.6991    6.1647   15.3545
corepy 4     0.0596   0.0963   0.5671    4.9499   13.8784
The times I show are in milliseconds; the system used is a dual-socket dual-core 2ghz opteron. I'm testing at the ufunc level, like this:
def benchmark(fn, args):
    avgtime = 0
    fn(*args)
    for i in xrange(7):
        t1 = time.time()
        fn(*args)
        t2 = time.time()
        tm = t2 - t1
        avgtime += tm
    return avgtime / 7
Where fn is a ufunc, ie numpy.cos. So I prime the execution once, then do 7 timings and take the average. I always appreciate suggestions on better way to benchmark things.
No, that seems good enough. But maybe you can present results in cycles/item. This is a relatively common unit and has the advantage that it does not depend on the frequency of your cores.
(it seems that I do not receive all emails - I never get the emails from Andrew?)

Concerning the timing: I think generally, you should report the minimum, not the average. The numbers for numpy are strange: 3s to compute 3e6 cos on a 2GHz core duo (~2000 cycles/item) is very slow. In that sense, taking 20 cycles/item for your optimized version is much more believable, though :)

I know the usual libm functions are not super fast, especially if high accuracy is not needed. Music software and games usually get away with approximations which are quite fast (e.g. using cos+sin evaluation at the same time), but those are generally unacceptable for scientific usage. I think it is critical to always check the result of your implementation, because getting something fast but wrong can waste a lot of your time :) One thing which may be hard to do is correct nan/inf handling. I don't know how SIMD extensions handle this.

cheers, David
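David's suggestion of reporting the minimum rather than the average can be sketched as follows. This is my own modernized variant of the benchmark shown earlier in the thread (the original used `time.time` and Python 2's `xrange`):

```python
import time

def benchmark_min(fn, args, reps=7):
    """Time fn(*args), returning the best (minimum) of several runs.

    The minimum filters out interference from other processes and
    the OS, which an average would fold into the result."""
    fn(*args)                        # prime caches / warm up, as before
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        fn(*args)
        elapsed = time.perf_counter() - t0
        best = min(best, elapsed)
    return best

t = benchmark_min(sorted, (list(range(100_000)),))
```

`time.perf_counter` is also a higher-resolution clock than `time.time`, which matters for the sub-millisecond timings in the 1024-element column of the table.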
David Cournapeau wrote:
Francesc Alted wrote:
No, that seems good enough. But maybe you can present results in cycles/item. This is a relatively common unit and has the advantage that it does not depend on the frequency of your cores.
Sure, cycles is fine, but I'll argue that in this case the number still depends on the frequency of the cores, particularly as it relates to the frequency of the memory bus/controllers. A processor with a higher clock rate and higher multiplier may show lower performance when measuring in cycles, because the memory bandwidth has not necessarily increased, only the CPU clock rate. Plus, between say a Xeon and an Opteron you will have different SSE performance characteristics. So really, any single number/unit is not sufficient without also describing the system it was obtained on :)
(it seems that I do not receive all emails - I never get the emails from Andrew ?)
I seem to have issues with my emails just disappearing; sometimes they never appear on the list and I have to re-send them.
Concerning the timing: I think generally, you should report the minimum, not the average. The numbers for numpy are strange: 3s to compute 3e6 cos on a 2Ghz core duo (~2000 cycles/item) is very
slow. In that sense, taking 20 cycles/item for your optimized version is much more believable, though :)
I can do minimum. My motivation for average was to show the common-case performance an application might see. If that application executes the ufunc many times, the performance will tend towards the average.
I know the usual libm functions are not super fast, specially if high accuracy is not needed. Music softwares and games usually go away with approximations which are quite fast (.e.g using
cos+sin evaluation at the same time), but those are generally unacceptable for scientific usage. I think it is critical to always check the result of your implementation, because getting
something fast but wrong can waste a lot of your time :) One thing which may be hard to do is correct nan/inf handling. I don't know how SIMD extensions handle this.
I was waiting for someone to bring this up :) I used an implementation that I'm now thinking is not accurate enough for scientific use. But the question is, what is a concrete measure for determining
whether some cosine (or other function) implementation is accurate enough? I guess we have precedent in the form of libm's implementation/accuracy tradeoffs, but is that precedent correct? Really
answering that question, and coming up with the best possible implementations that meet the requirements, is probably a GSoC project on its own. Andrew
Andrew Friedley wrote:
David Cournapeau wrote:
Francesc Alted wrote:
No, that seems good enough. But maybe you can present results in cycles/item. This is a relatively common unit and has the advantage that it does not depend on the frequency of your cores.
Sure, cycles is fine, but I'll argue that in this case the number still does depend on the frequency of the cores, particularly as it relates to the frequency of the memory bus/controllers. A
processor with a higher clock rate and higher multiplier may show lower performance when measuring in cycles because the memory bandwidth has not necessarily increased, only the CPU clock rate.
Plus between say a xeon and opteron you will have different SSE performance characteristics. So really, any sole number/unit is not sufficient without also describing the system it was obtained
on :)
Yes, that's why people usually add the CPU type with the cycles/operation count :) It makes comparison easier. Sure, the comparison is not accurate because differences in CPU may make a difference. But with cycles/computation, we could see right away that something was strange with the numpy timing, so I think it is a better representation for discussion/comparison.
I can do minimum. My motivation for average was to show a common-case performance an application might see. If that application executes the ufunc many times, the performance will tend towards
the average.
The rationale for minimum is to remove external factors like other tasks taking CPU, etc...
I was waiting for someone to bring this up :) I used an implementation that I'm now thinking is not accurate enough for scientific use. But the question is, what is a concrete measure for
determining whether some cosine (or other function) implementation is accurate enough?
Nan/inf/zero handling should be tested for every function (the exact behavior for standard functions is part of the C standard), and then the particular values depend on the function and implementation. If your implementation has several codepaths, each codepath should be tested. But really, most implementations just test a few more or less random known values. I know the GNU libc has some tests for the math library, for example. For single precision, brute force testing against a reference implementation for every possible input is actually feasible, too :)

David
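A sketch of the accuracy check David describes (my own code, not from the thread): measure a float32 function's error in units in the last place (ulp) against a double-precision reference. Here the inputs are sampled; an exhaustive sweep of all 2**32 float32 values, as he notes, is feasible offline.

```python
import numpy as np

def max_ulp_error(fn, xs32):
    """Max error of a float32 ufunc, in units in the last place (ulp),
    using the same ufunc evaluated in double precision as reference."""
    got = fn(xs32).astype(np.float64)          # float32 result under test
    want = fn(xs32.astype(np.float64))          # double-precision reference
    # size of one ulp at the reference value, in float32
    ulp = np.spacing(np.abs(want).astype(np.float32)).astype(np.float64)
    return float(np.max(np.abs(got - want) / ulp))

xs32 = np.random.uniform(-100.0, 100.0, 1_000_000).astype(np.float32)
err = max_ulp_error(np.cos, xs32)   # a good libm stays within a few ulp
```

Both evaluations use exactly the same float32 input values, so the measurement isolates the implementation error rather than input rounding, which is the trap a naive comparison falls into near the zeros of cos.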
Technical Analysis from A to Z
by Steven B. Achelis
The Volume Rate-of-Change ("ROC") is calculated identically to the Price ROC, except it displays the ROC of the security's volume, rather than of its closing price.
Almost every significant chart formation (e.g., tops, bottoms, breakouts, etc) is accompanied by a sharp increase in volume. The Volume ROC shows the speed at which volume is changing.
Additional information on the interpretation of volume trends can be found in the discussions on Volume and on the Volume Oscillator.
The following chart shows Texas Instruments and its 12-day Volume ROC.
When prices broke out of the triangular pattern, they were accompanied by a sharp increase in volume. The increase in volume confirmed the validity of the price breakout.
The Volume Rate-Of-Change indicator is calculated by dividing the amount that volume has changed over the last n-periods by the volume n-periods ago. The result is the percentage that the volume has
changed in the last n-periods.
If the volume is higher today than n-periods ago, the ROC will be a positive number. If the volume is lower today than n-periods ago, the ROC will be a negative number.
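The calculation described above can be sketched in a few lines (an illustration only; the function and variable names are mine, not MetaStock's):

```python
def volume_roc(volumes, n=12):
    """Volume Rate-of-Change: percent change in volume vs. n periods ago.

    Returns None for the first n periods, where no comparison exists.
    """
    roc = []
    for i, v in enumerate(volumes):
        if i < n or volumes[i - n] == 0:
            roc.append(None)
        else:
            roc.append((v - volumes[i - n]) / volumes[i - n] * 100)
    return roc

# Volume doubled over the window -> +100%; halved -> -50%.
print(volume_roc([100, 100, 200, 50], n=2))  # [None, None, 100.0, -50.0]
```

A positive value marks rising volume relative to n periods ago and a negative value falling volume, matching the interpretation given above.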
This online edition of Technical Analysis from A to Z is reproduced here with permission from the author and publisher.
Tensor Research | M. Alex O. Vasilescu | UCLA | Los Angeles
Causality in a Tensor Framework
Developing causal explanations for correct results or for failures from mathematical equations and data is important in developing a trustworthy artificial intelligence and retaining public trust.
Causal explanations are germane to the “right to an explanation” statute [15], [13], i.e., to data driven decisions, such as those that rely on images.
Computer graphics and computer vision problems, also known as addressing forward and inverse imaging problems, have been cast as causal inference questions [40], [42] consistent with Donald Rubin’s
quantitative definition of causality, where “A causes B” means “the effect of A is B”, a measurable and experimentally repeatable quantity [14], [17]. Computer graphics may be viewed as addressing
analogous questions to forward causal inference that addresses the “what if” question, and estimates the change in effects given a delta change in a causal factor. Computer vision may be viewed as
addressing analogous questions to inverse causal inference that addresses the “why” question [12]. We define inverse causal inference as the estimation of causes given an estimated forward causal
model and a set of constraints on the solution set.
(Vasilescu, Kim, and Zeng, 2020)
Natural images are the composite consequence of multiple constituent factors related to scene structure, illumination conditions, and imaging conditions. Multilinear algebra, the algebra of
higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent
factors or modes.
(Vasilescu and Terzopoulos,2002)
Scene structure is composed of a set of objects that appear to be formed from a recursive hierarchy of perceptual wholes and parts whose properties, such as shape, reflectance, and color, constitute
a hierarchy of intrinsic causal factors of object appearance. Object appearance is the compositional consequence of both an object’s intrinsic causal factors, and extrinsic causal factors with the
latter related to illumination (i.e. the location and types of light sources), imaging (i.e. viewpoint, viewing direction, lens type and other camera characteristics). Intrinsic and extrinsic causal
factors confound each other’s contributions, hindering recognition.
(Vasilescu and Kim, 2019)
Tensor factor analysis is a transparent framework for modeling a hypothesized multi-causal mechanisms of data formation, computing invariant causal representations, and estimating the effects of
interventions [103][100][108][105]. Building upon prior representation learning efforts aimed at disentangling the causal factors of data variation [28][8][92] [72][71], we derive a set of causal
deep neural networks that are a consequence of tensor (multilinear) factor analysis. Tensor (multilinear) factor analysis may be implemented with shallow or deep learning composed of causal capsules
and tensor transformers. The former estimate a set of latent variables that represent the causal factors, and the latter governs their interaction.
(Vasilescu, 2022 - paper, supplemental)
While we can directly observe and measure the gray (or color) values in an image/video, we are often more interested in the information associated with the causal factors that determine the pixel
values in an image, such as the person's identity, the viewing direction, or expression, which may only be inferred, but not directly measured. Given the correct experimental design and problem
setup, the tensor framework is suitable for disentangling the multifactor causal structure of data formation.
The tensor framework was first employed in computer vision, computer graphics, and machine learning to recognize people from their gait (Human Motion Signatures in 2001) and from their facial images
(TensorFaces in 2002). However, this approach may be used to synthesize or recognize any object and object attribute. The development and utility of the tensor framework have been illustrated
primarily in the context of face recognition since the problem statement and facial images lend themselves to an intuitive understanding of the underlying mathematics. Other examples are
TensorTextures (see video below of image-based rendering that demonstrates progressive reduction of illumination effects through strategic dimensionality reduction), and 3D sound.
There are two classes of data tensor modeling techniques that stem from:
1. the linear rank-K tensor decompositions (CANDECOMP / Parafac decomposition) and
2. the multilinear rank-(R1,R2,...,RM) tensor decompositions, (Tucker decomposition).
TensorFaces is a multilinear tensor method that explicitly models and decomposes a facial image in terms of the causal factors of data formation, where each causal factor is represented according to its second-order statistics, by employing the Tucker tensor decomposition. We refer to this approach more generally as Multilinear PCA in order to better differentiate it from our Multilinear ICA. Multilinear (tensor) ICA is a more sophisticated model of cause-and-effect based on higher-order statistics associated with each causal factor. Similarly, one can employ kernel variants (pg. 43) to model cause-and-effect. By comparison, matrix decompositions, such as PCA or ICA, capture the overall statistical information (variance, kurtosis) without any causal differentiation.
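The mode-m product at the heart of these tensor decompositions can be sketched in a few lines of NumPy. This is an illustration of the standard operation (unfold along one mode, multiply by a factor matrix, fold back), not the authors' code, and the names are mine:

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` (shape (J, I_mode)) along axis `mode`."""
    # Unfold: move `mode` to the front and flatten the remaining axes.
    unfolded = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
    result = matrix @ unfolded
    # Fold back: restore the remaining axes, then move the new axis home.
    rest = [s for i, s in enumerate(tensor.shape) if i != mode]
    folded = result.reshape([matrix.shape[0]] + rest)
    return np.moveaxis(folded, 0, mode)

# Tucker-style reconstruction: a small core tensor multiplied by one
# factor matrix per mode yields the full data tensor.
core = np.random.rand(2, 2, 2)
U = [np.random.rand(4, 2), np.random.rand(5, 2), np.random.rand(3, 2)]
D = core
for m, Um in enumerate(U):
    D = mode_n_product(D, Um, m)
print(D.shape)  # (4, 5, 3)
```

In the TensorFaces setting each factor matrix would span one causal mode (people, viewpoints, illuminations), and the core tensor governs their interaction.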
Subspace multilinear learning demonstratively disentangles the causal factors of data formation through strategic dimensionality reduction. For example, in the case of facial images (or
bi-directional textures functions), we suppress illumination effects such as shadows and highlights without blurring the edges associated with the person's identity that are important for recognition
(or edges associated with structural information that are important for texture synthesis. See TensorTextures video below. ).
Next important question: While TensorFaces is a handy moniker for an approach that learns and represents the interaction of various causal factors from a set of training images, with Multilinear (Tensor) ICA and kernel variants as more sophisticated approaches, none of the interaction models prescribe a solution for how one might determine the multiple causal factors of a single unlabeled test image.
Multilinear Projection (FG 2011, ICCV 2007, briefly summarized in the 2005 MICA paper) addresses the question of how one might determine from one or more unlabeled test images all the unknown causal factors of data formation. I.e., how does one solve for multiple unknowns from a single image equation? In the course of addressing this question, several concepts from linear (matrix) algebra were
generalized, such as the mode-m identity tensor (which is also an algebraic operator that reshapes a matrix into a tensor and back again to a matrix), the mode-m pseudo-inverse tensor, the mode-m
product in order to develop the multilinear projection algorithm. (Note: The mode-m pseudo-inverse tensor is not a tensor pseudo-inverse.) Multilinear projection simultaneously projects one or more
unlabeled test images into multiple constituent mode spaces, associated with image formation, in order to infer the mode labels.
• "Causal Deep Learning", M. Alex O. Vasilescu, In the Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR 2022) Montreal, Canada, August 21-25, 2022. Paper (pdf),
Supplemental (pdf).
• "CausalX: Causal eXplanations and Block Multilinear Factor Analysis", M.A.O. Vasilescu, E. Kim, X. S. Zeng In the Proceedings of the 2020 25th International Conference on Pattern Recognition
(ICPR 2020), Milan, Italy, January 2021, 10736-10743, Paper(pdf).
• "Compositional Hierarchical Tensor Factorization: Representing Hierarchical Intrinsic and Extrinsic Causal Factors ”, M.A.O. Vasilescu, E. Kim, In The 25th ACM SIGKDD, Knowledge Discovery and
Data Mining Conference and Workshops: Tensor Methods for Emerging Data Science Challenges (archived slow link), August 04-08, 2019, Anchorage, AK. ACM, New York, NY, USA Paper (pdf)
• "Face Tracking with Multilinear (Tensor) Active Appearance Models", Weiguang Si, Kota Yamaguchi, M. A. O. Vasilescu , June, 2013. Paper (pdf)
• "Multilinear Projection for Face Recognition via Canonical Decomposition ", M.A.O. Vasilescu, In Proc. Face and Gesture Conf. (FG'11), 476-483. Paper (pdf)
• "Multilinear Projection for Face Recognition via Rank-1 Analysis ", M.A.O. Vasilescu, CVPR, IEEE Computer Society and IEEE Biometrics Council Workshop on Biometrics, June 18, 2010.
• "A Multilinear (Tensor) Algebraic Framework for Computer Graphics, Computer Vision, and Machine Learning", M.A.O. Vasilescu, Ph.D. dissertation , University of Toronto, 2009.
• "Multilinear Projection for Appearance-Based Recognition in the Tensor Framework", M.A.O. Vasilescu and D. Terzopoulos, Proc. Eleventh IEEE International Conf. on Computer Vision (ICCV'07), Rio
de Janeiro, Brazil, October, 2007, 1-8.
Paper (1,027 KB - .pdf)
• “Multilinear Independent Components Analysis and Multilinear Projection Operator for Face Recognition”, M.A.O. Vasilescu, D. Terzopoulos, in Workshop on Tensor Decompositions and Applications,
CIRM, Luminy, Marseille, France, August 2005.
• "Multilinear (Tensor) ICA and Dimensionality Reduction", M.A.O. Vasilescu, D. Terzopoulos, Proc. 7th International Conference on Independent Component Analysis and Signal Separation (ICA07),
London, UK, September, 2007. In Lecture Notes in Computer Science, 4666, Springer-Verlag, New York, 2007, 818–826.
• "Multilinear Independent Components Analysis", M. A. O. Vasilescu and D. Terzopoulos, Proc. Computer Vision and Pattern Recognition Conf. (CVPR '05), San Diego, CA, June 2005, vol.1, 547-553.
Paper (1,027 KB - .pdf)
• "Multilinear Independent Component Analysis", M. A. O. Vasilescu and D. Terzopoulos, Learning 2004 Snowbird, UT, April, 2004.
• "Multilinear Subspace Analysis for Image Ensembles,'' M. A. O. Vasilescu, D. Terzopoulos, Proc. Computer Vision and Pattern Recognition Conf. (CVPR '03), Vol.2, Madison, WI, June, 2003, 93-99.
Paper (1,657KB - .pdf)
• "Multilinear Image Analysis for Facial Recognition,'' M. A. O. Vasilescu, D. Terzopoulos, Proceedings of International Conference on Pattern Recognition (ICPR 2002), Vol. 2, Quebec City, Canada,
Aug, 2002, 511-514.
Paper (439KB - .pdf)
• "Multilinear Analysis of Image Ensembles: TensorFaces," M. A. O. Vasilescu, D. Terzopoulos, Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark, May, 2002, in Computer
Vision -- ECCV 2002, Lecture Notes in Computer Science, Vol. 2350, A. Heyden et al. (Eds.), Springer-Verlag, Berlin, 2002, 447-460.
Full Article in PDF (882KB)
TensorTextures: Image-based Rendering
One of the goals of computer graphics is photorealistic rendering, the synthesis of images of virtual scenes visually indistinguishable from those of natural scenes. Unlike traditional model-based
rendering, whose photorealism is limited by model complexity, an emerging and highly active research area known as
image-based rendering eschews complex geometric models in favor of representing scenes by ensembles of example images. These are used to render novel photoreal images of the scene from arbitrary
viewpoints and illuminations, thus decoupling rendering from scene complexity. The challenge is to develop structured representations in high-dimensional image spaces that are rich enough to capture
important information for synthesizing new images, including details such as self-occlusion, self-shadowing, interreflections, and subsurface scattering.
TensorTextures, a new image-based texture mapping technique, is a rich generative model that, from a sparse set of example images, learns the interaction between viewpoint, illumination, and geometry
that determines detailed surface appearance. Mathematically, TensorTextures is a nonlinear model of texture image ensembles that exploits tensor algebra and the M-mode SVD to learn a representation
of the bidirectional texture function (BTF) in which the multiple constituent factors, or modes---viewpoints and illuminations---are disentangled and represented explicitly.
• "TensorTextures: Multilinear Image-Based Rendering", M. A. O. Vasilescu and D. Terzopoulos, Proc. ACM SIGGRAPH 2004 Conference Los Angeles, CA, August, 2004, in Computer Graphics Proceedings,
Annual Conference Series, 2004, 336-342.
Paper (5,104 KB - .pdf)
□ TensorTextures - AVI (54,225 KB)
□ TensorTextures Strategic Dimensionality Reduction - AVI (19,650 KB)
□ TensorTextures Trailer - AVI (17,605 KB)
• "TensorTextures", M. A. O. Vasilescu and D. Terzopoulos, Sketches and Applications SIGGRAPH 2003 San Diego, CA, July, 2003.
Sketch (6MB - .pdf)
Human Motion Signatures, Style Transfer, and Tracking:
Given motion-capture samples of Charlie Chaplin’s walk, is it possible to synthesize other motions (say, ascending or descending stairs) in his distinctive style? More generally, in analogy with
handwritten signatures, do people have characteristic motion signatures that individualize their movements? If so, can these signatures be extracted from example motions? Can they be disentangled
from other causal factors?
We have developed an algorithm that extracts motion signatures and uses them in the animation of graphical characters. The mathematical basis of our algorithm is a statistical numerical technique known as M-mode data tensor analysis. For example, given a corpus of walking, stair ascending, and stair descending motion data collected over a group of subjects, plus a sample walking motion for
a new subject, our algorithm can synthesize never before seen ascending and descending motions in the distinctive style of this new individual.
• "Human Motion Signatures: Analysis, Synthesis, Recognition," M. A. O. Vasilescu Proceedings of International Conference on Pattern Recognition (ICPR 2002), Vol. 3, Quebec City, Canada, Aug, 2002,
Paper (439KB - .pdf)
• "An Algorithm for Extracting Human Motion Signatures", M. A. O. Vasilescu, Computer Vision and Pattern Recognition CVPR 2001 Technical Sketches, Lihue, HI, December, 2001.
• "Human Motion Signatures for Character Animations", M. A. O. Vasilescu, Sketch and Applications SIGGRAPH 2001 Los Angeles, CA, August, 2001.
Sketch (141KB - .pdf)
• "Recognition Action Events from Multiple View Points," Tanveer Sayed-Mahmood, Alex Vasilescu, Saratendu Sethi, in IEEE Workshop on Detection and Recognition of Events in Video, International
Conference on Computer Vision (ICCV 2001), Vancuver , Canada, July 8, 2001, 64-72
Human Motion Signature
Listening in 3D
Head related transfer function (HRTF) characterizes how an individual's anatomy and sound source location impacts an individual's perception of sound. The size, shape and density of the head, the
shape of the ears and ear canal, the distance between the ears, all transform sound by amplifying some frequencies and attenuating others. Learning how sound is perceived is important in:
• pinpointing the location of sound that is vital for safe navigation in traffic,
• achieving a realistic acoustic environment in gaming and home cinema set-ups.
To measure an HRTF, one places a loudspeaker at various locations in space and a microphone at the ear. To recreate an authentic sound experience, slightly differently synthesized sounds are sent to
each ear in accordance with a person's HRTF.
This is not surround sound, which uses multiple speakers to provide 360° sound.
• "A Multilinear (Tensor) Framework for HRTF Analysis and Synthesis", G. Grindlay, M.A.O. Vasilescu, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Honolulu,
Hawaii, April, 2007
Paper (439KB - .pdf)
Adaptive Meshes: Physically Based Modeling
We introduce adaptive mesh models for the nonuniform sampling and reconstruction of visual data. Adaptive meshes are dynamic models assembled from nodal masses connected by adjustable springs. Acting as mobile
sampling sites, the nodes observe interesting properties of the input data, such as intensities, depths, gradients, and curvatures. The springs automatically adjust their stiffnesses based on the
locally sampled information in order to concentrate nodes near rapid variations in the input data. The representational power of an adaptive mesh is enhanced by its ability to optimally distribute
the available degrees of freedom of the reconstructed model in accordance with the local complexity of the data.
We developed open adaptive mesh and closed adaptive shell surfaces based on triangular or rectangular elements. We propose techniques for hierarchically subdividing polygonal elements in adaptive
meshes and shells. We also devise a discontinuity detection and preservation algorithm suitable for the model. Finally, motivated by (nonlinear, continuous dynamics, discrete observation) Kalman
filtering theory, we generalize our model to the dynamic recursive estimation of nonrigidly moving surfaces.
• "Adaptive meshes and shells: Irregular triangulation, discontinuities, and hierarchical subdivision," M. Vasilescu, D. Terzopoulos, in Proc. Computer Vision and Pattern Recognition Conf. (CVPR
'92), Champaign , IL, June, 1992, pages 829 - 832.
Paper (652KB - .pdf)
• "Sampling and Reconstruction with Adaptive Meshes," D. Terzopoulos, M. Vasilescu, in Proc. Computer Vision and Pattern Recognition Conf. (CVPR '91), Lahaina, HI, June, 1991, pages 70 - 75.
Paper (438KB - .pdf)
Large-scale power spectrum and structures from the ENEAR galaxy peculiar velocity catalogue
We estimate the mass density fluctuations power spectrum (PS) on large scales by applying a maximum likelihood technique to the peculiar velocity data of the recently completed redshift-distance
survey of early-type galaxies (hereafter ENEAR). Parametric cold dark matter (CDM)-like models for the PS are assumed, and the best-fitting parameters are determined by maximizing the probability of
the model given the measured peculiar velocities of the galaxies, their distances and estimated errors. The method has been applied to CDM models with and without COBE normalization. The general
results are in agreement with the high-amplitude power spectra found from similar analyses of other independent all-sky catalogue of peculiar velocity data such as MARK III and SFI, in spite of the
differences in the way these samples were selected, the fact that they probe different regions of space and galaxy distances are computed using different distance relations. For example, at k =
0.1 h Mpc^-1 the power spectrum value is P(k)Ω^1.2 = (6.5 ± 3) × 10^3 (h^-1 Mpc)^3 and η_8 ≡ σ_8 Ω^0.6 = 1.1^{+0.2}_{-0.35}; the quoted uncertainties refer to the 3σ error level. We also find that, for ΛCDM and OCDM COBE-normalized models, the best-fitting parameters are confined by a contour approximately defined by Ωh^1.3 = 0.377 ± 0.08 and Ωh^0.88 = 0.517 ± 0.083, respectively. Γ-shape models, free of COBE normalization, result in the weak constraint Γ ≥ 0.17 and in the rather stringent constraint η_8 = 1.0 ± 0.25. All quoted uncertainties refer to the 3σ confidence level (c.l.). The calculated
PS has been used as a prior for Wiener reconstruction of the density field at different resolutions and the three-dimensional velocity field within a volume of radius ≈80h^-1Mpc. All major structures
in the nearby Universe are recovered and are well matched to those predicted from all-sky redshift surveys. The robustness of these features has been tested with constrained realizations (CR).
Analysis of the reconstructed three-dimensional velocity field yields a small bulk-flow amplitude (∼ 160 ± 60kms^-1 at 60h^-1Mpc) and a very small rms value of the tidal field (∼60kms^-1). The
results give further support to the picture that most of the motion of the Local Group arises from mass fluctuations within the volume considered.
Bibliographic note
Copyright 2005 Elsevier Science B.V., Amsterdam. All rights reserved.
Empirical Study of Optimal Capital Structure and the Debt Capacity of BOT Projects
Borliang Chen* and Chen-Hung Tang
Department of Civil Engineering, National United University, Taiwan
Submission: February 01, 2019; Published: February 20, 2019
*Corresponding Author:Borliang Chen, Department of Civil Engineering, National United University, Taiwan
How to cite this article: Borliang C, Chen H T. Empirical Study of Optimal Capital Structure and the Debt Capacity of BOT Projects. Civil Eng Res J. 2019; 7(3): 555713. DOI: 10.19080/
It is a worldwide trend for governmental agencies to encourage private sectors to participate in the development of infrastructure, such as BOT projects. However, BOT projects are inherently risky due to the many uncertainties over the project period. Overoptimism in revenue projections and a lack of risk analysis often lead the concessionaire to misjudge the project's feasibility. This may result in a high probability of default (PD) of BOT projects and cause financial disaster for the concessionaire. In the worst case, it can make the projects unbankable. However, the probability of default and the cost of bankruptcy are rarely discussed in conventional BOT financial models. In general, the capital structure of BOT projects is assumed to be 30% equity and 70% debt, and the debt is repaid in equivalent uniform annual installments over the repayment period. This repayment arrangement does not account for the erratic nature of revenue: it is risky to meet the debt obligation under inflexible repayment terms, and it may lead to a high PD for the project. The objective of this study was to alleviate the risk of project default due to the volatility of revenue. A student dormitory project of National United University in Taiwan is used as an empirical case to demonstrate the analysis. The repayment arrangement proposed in this paper reduces the project's PD from 35% to less than 1%.
Keywords: BOT projects; Financial measures; Risk analysis; Probability of defaults
To mitigate the fiscal burden on government, improve the quality of infrastructure, and increase investment opportunities, it has become a worldwide trend for governmental agencies to encourage private sectors to participate in the development of infrastructure projects, such as BOT projects. However, many BOT projects are long-term investments that are inherently risky. Often, the concessionaires of BOT projects are too optimistic in their revenue projections and lack project risk analysis, both of which lead to overestimating the project value. Conventional financial evaluation models of BOT projects commonly set the capital structure at D/E = 7/3 and repay the debt obligation as an equivalent uniform annual cost. Under this financial arrangement, concessionaires are likely to encounter financial difficulty during the project period from shortages of cash flow. For example, the fixed equivalent uniform annual debt payment may cause a great probability of default when low revenue or high operating and maintenance cost occurs in certain operating periods. Net operating income is the difference between revenue and operating and maintenance cost, and project income is considered the source of project value. Therefore, net operating income is a critical managerial factor.
The future net operating income is uncertain in every year of the operating period, while the debt obligation is a uniform annual cost, so the probability of default fluctuates. A high probability of default would debase the debt value and make it difficult for concessionaires to finance the project. In fact, the replacement cost of project facilities is a considerable expenditure in the operating period. This large expenditure markedly enlarges the operating and maintenance cost of those years compared to others. If facility replacement takes place within the debt repayment period, the large replacement cost diminishes the concessionaire's ability to repay the outstanding debt, and the probability of default increases, or the concessionaire may actually default.
If the project company became bankrupt, debt holders would have to pay the bankruptcy cost before they could take over the company. The bankruptcy cost lowers the debt value because of the existence of the probability of default. The objective of this paper was to evaluate the probability of default under different debt repayment arrangements. With knowledge of the distribution of net operating income, which is defined as a random variable, the probability of default and the debt and equity values can be calculated. The probability of default and the promised debt repayment are determined under three criteria: maximizing project value, maximizing debt capacity, and value at risk with a confidence level of 99%. The model in this study is named the Probability of Defaults Measures Model (PDMM).
The basic theory of the PDMM in this paper is based on the Capital Asset Pricing Model (CAPM) [1-4]. Dais (1995) incorporated the CAPM with capital cost into an evaluation model of project value, in monetary units. A modification of Dais's formulation from the debt holders' perspective is proposed, considering the cost of bankruptcy, the probability of default, and the systematic risks. The probability of bankruptcy is the likelihood that a firm's cash flows will be insufficient to meet its promised debt obligation, either interest or principal.
Because there is a likelihood that the project concessionaire becomes bankrupt and falls into default, there is a cost of financing the debt: the bankruptcy cost, BC. The equation is assumed as follows:

BC = b_f + b_v X

where b_f is the fixed cost, b_v is the coefficient of X, and X is the net operating income of the BOT project.
Suppose that in a specific year of the repayment period, the net operating income is unable to meet the debt obligation; the concessionaire would then fall into default. The probability of default is the probability of the state in which the net operating income is less than the loan payment. The net operating income, X, is assumed to be a normal random variable. Given the period of debt repayment, Rp, the expected value of the net operating income, E[X_i], its standard deviation, σ_X, and the ith year's promised debt repayment, d_i, the probability of default, F(d_i), can be determined as

F(d_i) = P(X_i ≤ d_i) = Φ((d_i − E[X_i]) / σ_X)

where Φ is the standard normal cumulative distribution function.
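Under the normality assumption just stated, the probability of default can be evaluated with the standard normal CDF. A minimal sketch follows; the numbers are illustrative, not taken from the case study:

```python
from math import erf, sqrt

def prob_default(d_i, mean_x, sigma_x):
    """P(X <= d_i) for X ~ Normal(mean_x, sigma_x^2): the chance that
    net operating income falls short of the promised repayment d_i."""
    z = (d_i - mean_x) / sigma_x
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(z / sqrt(2)))

# Illustrative values: promised repayment 80, E[X] = 100, sigma_X = 20.
pd = prob_default(80, 100, 20)
print(round(pd, 4))  # 0.1587  (one standard deviation below the mean)
```

Raising the promised repayment d_i toward E[X] pushes this probability toward 50%, which is why the repayment schedule must track the income distribution year by year.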
Frequently, the project value is regarded as one of the most important criteria in evaluating a project's financial feasibility. The greater the project value, the higher the feasibility of the project; but the project is feasible only if the project value is greater than the project cost. In this study, the project value is the sum of the debt value and the equity value, and both debt and equity depend on the net operating income. In a project evaluation, risk is usually reflected in the cost of debt and equity. Based on the foundational theory of the CAPM, Dais (1995) derived a model considering systemic risk, revenue risk, and default risk. The model provides a scheme to evaluate the value of debt and equity individually.
The project value, V, is calculated by summing the debt, D, and equity, E: V = D + E.
Three criteria were adopted in this study:
A. maximizing the project value, which generates the optimal capital structure;
B. maximizing the debt value, which yields the debt capacity; and
C. value at risk with a 99% confidence level.
Optimal capital structure (OCS) is the proportion of debt and equity that maximizes the project value. OCS depends on the promised repayment of debt. The value of debt increases when the
concessionaire raises the promised debt repayment. On the contrary, the value of equity decreases. As the optimal capital structure was obtained, the probability of default could be determined
simultaneously for the given promised repayment.
Since the net operating income is risky, the expected debt repayment does not increase indefinitely as the promised debt repayment increases. If the promised debt repayment is very near the net operating income, the expected debt repayment decreases because of the increasing probability of default. The present value of the maximum debt value is the debt capacity, one of the debt holders' main concerns. Intuitively, debt holders will not lend more when the promised repayment is greater than this critical value.
Suppose the debt holders accept a default risk with a probability of default equal to 1%; the corresponding promised debt repayment in the ith year would be d_i.
A university dormitory project at National United University (NUU) in Taiwan, a BOT project, is used to illustrate the PDMM as the empirical study of this paper. The input parameters and results are shown below.
The input parameters of the NUU dormitory BOT project are shown in Table 1. These parameters apply only when a loan relationship exists between the concessionaire and the debt holders.
Figure 1 displays the probability of default (PD) calculated under the conventional repayment arrangement model (Conventional Model, D/E = 7/3).
The loan was assumed 70% of total asset in conventional model and the debt principal and interest were repay in equivalent uniform annual cost during 15years of debt repayment periods. The PD is
large especially in the 12th and 15th years that the years of replacement expenditure payout. The average PD of σx=20%. E[Xi] is equals to 29.4%. The high PD suggests the difficulty of debt finance.
The PDs under the two criteria, optimal capital structure and debt capacity, for every repayment period of the PDMM are shown in Figure 2 and Figure 3, respectively. Maximizing the project value helps keep the PD low; even in the case of σx = 30%·E[Xi], the PD remains below 0.8%. When the PD is evaluated at debt capacity, the PDs are large in the early periods and remain steady in the later periods.
Figure 4 illustrates the debt ratio (= D/V) with different standard deviation values under the three criteria. The results show less debt finance with the PDMM: a greater standard deviation leaves less debt finance available. Maximizing the debt value can raise more debt than maximizing the project value, and accepting a 1% PD risk can raise more debt than maximizing the project value but less than maximizing the debt value.
The debt repayment arrangement of the conventional model results in a high probability of default: the concessionaire runs a serious risk of bankruptcy, and the debt holders may likewise be unable to receive their payback. Therefore, the debt repayment schedule should be rearranged to reflect the characteristics of the net operating income in each year. Maximizing the project value leads to safer debt finance with a low probability of default; the disadvantage, however, is that the concessionaire would find it difficult to finance more debt compared to the case of maximizing the debt value. Consequently, deciding whether to raise more debt or to improve the project value is essential for the concessionaire. Whichever criterion the concessionaire selects, maximizing the debt value or maximizing the project value, the probability of default falls dramatically from originally more than 35% to 1%. The BOT project thus becomes very promising and attractive for loan providers.
|
{"url":"https://juniperpublishers.com/cerj/CERJ.MS.ID.555713.php","timestamp":"2024-11-10T22:28:43Z","content_type":"text/html","content_length":"80658","record_id":"<urn:uuid:24a5f02d-31f7-487a-83cc-24b5ad3c3d0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00450.warc.gz"}
|
A thermal imaging camera has been used to view a target at night for military purposes for the past 40 years. In recent times, civilian demands for the thermal camera have been gradually increasing.
Civilian demands are primarily in the security, industry, research and medical fields. The thermal imaging camera measures the radiation of the target, and the measured radiation can be expressed by
the distribution of the radiative temperature [1]. It is often necessary for optical systems to have multiple fields of view. A zoom system can have continuous changes of the field of view by moving
lens groups along an optical axis.
In this paper, paraxial studies and lens modules are used to obtain the optimum initial design of a 10× extended four-group inner-focus zoom system for an LWIR camera. It is integrated with an uncooled IR detector with 384×288 pixels, whose pixel size is 25 µm. The f-number of the optical zoom system was estimated to be F/1.4 from the Airy disk and MRTD (Minimum Resolvable Temperature Difference) analyses [2-5].
Through an optimization process, we have obtained an optical zoom system satisfying the requirements for a 10× LWIR camera. In order to balance the wave front aberrations, we located the diffractive
surfaces at L3 and L6. Since the diffractive lenses are designed to be very weakly powered and the wavelength to zone period ratio is very small across the entire lens, the scalar predictions of
diffraction efficiency are valid to calculate the polychromatic integrated diffraction efficiency. The changes of the MTF(Modulation Transfer Function) due to diffraction efficiency are evaluated
from the polychromatic integrated efficiency [6].
The main requirements of an LWIR zoom system are listed in Table 1. Since the f-number affects the Airy disk diameter as well as the MRTD, an optical system with an uncooled IR detector should have as low an f-number as possible [7]. Basically, the spatial resolution of an IR optical system is determined by the specification of the IR detector, because target acquisition performance is based on the number of line pairs resolvable across a target's critical dimension. Consequently, an IR optical system should be designed to have diffraction-limited performance [3].
The Airy disk diameter is given by 2.44·λ·(F/#). In a diffraction-limited system, a pixel's diagonal size on the detector should equal the Airy disk diameter as follows:

√2·d = 2.44·λ·(F/#)    (1)

where d is a pixel width of the detector, λ is the operating wavelength, and F/# is the f-number of the optical system. For a detector with a pixel width of 25 µm, Eq. (1) gives a target f-number for the optical zoom system of F/1.4 at the central wavelength of 10.2 µm.
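The arithmetic behind that F/1.4 figure is easy to check; this short sketch (not from the paper) plugs the stated pixel pitch and wavelength into the Airy-disk matching condition:

```python
from math import sqrt

# Matching the detector pixel diagonal to the Airy disk diameter,
# sqrt(2) * d = 2.44 * lam * (F/#), and solving for the f-number:
d_um = 25.0      # pixel pitch of the 384x288 uncooled detector
lam_um = 10.2    # central wavelength
f_number = sqrt(2.0) * d_um / (2.44 * lam_um)
print(round(f_number, 2))  # about 1.42, quoted as F/1.4 in the text
```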
The MRTD is one of the criteria denoting optical system performances, which characterizes the thermal sensitivity at a given spatial frequency [8], whereas the MTF measures the attenuation in
modulation depth without regard for thermal property of an object. This MRTD is given by
where A is a proportional constant depending on the detector, D* is the normalized detectivity of which the average value is known as 1.44×10^8 cmHz^1/2W^-1 [9], and MTF(𝜉[t]) is the modulation
transfer function of the optical system at target frequency of 𝜉[t]. For a given set of parameters, the MRTD can be calculated from the Eq. (2) [8]. The relationships between the f-number and MRTD
are summarized in Table 2.
Since the diffraction MTF depends on the f-number of an optical system, a large f-number results in a large MRTD, which reduces the resolvable temperature difference. A small f-number gives a good MRTD; however, a large-aperture system is limited by cost and by the detector. From an exact evaluation of the relationship between f-number and MRTD, the optimum f-number of this system is confirmed to be 1.4. The MRTD at F/1.4 is small enough to give sufficient resolution.
Figure 1 illustrates an extended four-group inner-focus zoom system that has the same total track length at all zoom positions. The zoom system is composed of fixed three groups, the second group for
zooming, and the fourth group for focusing.
The powers of the groups are denoted by k[1], k[2], k[3], k[4], and k[5], respectively. The distances between the adjacent principal planes are represented by z[[iW]] and z[[iN]] (i=1, 2, 3, 4) at
wide and narrow field positions, respectively. The first, the third, and the fifth groups are always fixed. While the second group moves to have a longer focal length, the fourth group should move to
keep the image position stationary. For the zoom system having zoom ratio of 10×, it is desired that the powers of all groups are positive, except the negative second group for zooming.
We can analytically derive the set of zoom equations for an extended four-group inner-focus zoom system with infinite object by using the Gaussian brackets. The initial design of this zoom system can
be formulated as follows [10-12]:
where K is the optical power at a zoom position. Equations (3) ~ (4) can be rearranged as follows:
Therefore, Eqs. (5) and (6) can be expressed in a matrix form at wide and narrow field positions as follows:
where a[ij] is given as a function of a[ij] (K[W], K[N] , k[5], z[[1]][[W]], z[[2]][[W]], z[[3]][[W]], z[[4]][[W]], z[[1]][[N]] , z[[2]][[N]] , z[[3]][[N]] , z[[4]][[N]] , bfd) with K[W] and K[N]
being the optical powers at wide and narrow field positions, respectively. If the Gauss-Jordan elimination method is applied, Eq. (7) is replaced by
where b[ij] is given as a function of a[ij]. In order to derive the unknown quantities of k[1], k[2], k[3], and k[4], four equations are obtained from Eq. (8):
Inserting the conditions of Eqs. (9) ~ (11) into the first row equation of Eq. (8) results in an expression for the unknown parameter k[1]:
where A[i] (i=1,2,3), B[i] (i=1,2,3), A, B, C, D and E are given as a function of b[ij]. Solving Eq. (12) for a given k[5], we can get k[1]. As a result, k[2], k[3], and k[4] are determined from Eqs.
(9) ~ (11).
For the initial design of a 10× extended four-group inner-focus system, we input the proper distances between principal points and the targeted total powers at wide and narrow positions. These
starting values of z[1], z[2], z[3], z[4], and bfd are empirically selected to satisfy the basic requirements by using paraxial studies, as shown in Table 3.
By inputting these starting values of Table 3 into Eqs. (5) and (6), we can obtain proper values for the power of each group from Eqs. (9) ~ (12). Figure 2 illustrates the four solution sets of Eq. (12) for various k[5].
In the solutions of case 1 and case 2, the powers of the second group (k[2]) are positive. Since the second group is the variator, these positive powers are not valid. In the solutions of case 3, the powers of the third group are negative, so they are also not valid for an extended four-group inner-focus zoom system. Also, the powers of the second group are so small that the displacement of this group would have to be large, making the system not compact at a high zoom ratio.
Finally, the solutions of case 4 satisfy the requirements for a power distribution suitable for our zoom system with a zoom ratio of 10×. Among the solutions of case 4, we took the powers k[1], k[2], k[3], k[4], and k[5] that maintain the most stable zooming locus. Table 4 lists the paraxial design data for the powers obtained from the above process.
We have set up a zoom lens system with five thin-lens modules, for which the power of each group and the zoom locus inputs are taken from Table 4 and Table 3. The lens modules do not reflect
higher-order aberrations, so reducing the aperture and the field size of the system is desirable [13, 14]. Thus, we took a zoom system with a half image size of 3 mm and f-number of F/3 at all
positions. The air distances between modules were constrained to be longer than 3.0 mm for the mounting space.
In Table 3 and Table 4 of the previous paraxial study, the thickness of each group and the air spaces are not presented. They can be derived by specifying the design variables of the lens modules,
such as the effective focal length (FL[M]), the front (FF[M]) and the back focal lengths (BF[M]), and the air spaces between the modules. Figure 3 shows the initial zoom design composed of five thick
lens modules obtained from this process. The focal lengths range from 10.0 to 100.0 mm. Table 5 shows the first-order properties and the zoom loci at each position, and Table 6 lists the design data
for each module. In Table 5, d[ij] (j=1,2,3,4) are the air spaces between the modules at the i-th zoom position.
A real lens group is generally composed of several lens elements, which should be equivalent to the lens module given in Table 6. In this paper, an optimization design method is used to design a real
lens group equivalent to the module of each group. We choose an appropriate structure for each real lens group and scale it up or down so that the focal length of each real lens group is the same as
that of the lens module. By using the optimization method of Code-V, we matched the first-order quantities of the lens module to those of a real lens. For the conversion process, the design variables
of the real lens are the radius of each surface, the thickness, and the refractive index. Therefore, the three constraints given in Table 6 can be satisfied by specifying the lens design variables.
After a few iterations, the real lenses of the groups are obtained.
If a zoom system equivalent to the lens module zoom system is to be achieved, the air spaces (d[ji]) between groups should be set according to the zoom loci of Table 5 at each position. This
procedure results in a zoom system equivalent to the lens module zoom system within paraxial optics. From the evaluation of this zoom system, the agreement of the first-order quantities between the
two systems is good.
In the initial design of the zoom system, we reduced the aperture and the field size, so they are too small. If the current specifications for an LWIR zoom camera are to be met, the aperture and the field size should be increased. The f-number is extended to F/1.4 at all positions, and the half image size should be 6.0 mm for an uncooled IR detector. In an extended aperture and field system,
however, the aberrations that were not corrected in the previous design become significant. In order to improve the overall performance of the zoom system, we balance the aberrations of the starting
data. In this process, the first-order layouts are fixed, and the residual aberrations are corrected using aspheric surfaces and diffractive lenses. Finally, a zoom system having good performance was
obtained, and its layout is shown in Fig. 4. Table 7 lists the design data.
This system consists of the six lenses including two diffractive surfaces. The diffractive surfaces are used to balance the wave-front aberrations. They are located in the front surface of L3 and the
back surface of L6. The hybrid lenses such as L3 and L6 have refractive and diffractive properties at the same time. Therefore the hybrid lens differs from a conventional refractive lens in that it
can produce many images due to diffraction orders. These diffraction images serve to lower the contrast of the desired image.
The rotationally symmetric diffractive lens has a surface relief of ring type. The number of rings (N) on the diffractive surface is given by Eq. (13):
where D is the diameter of the diffractive surface and f[0] is the focal length at the design wavelength λ[0]. For a diffractive lens with a short focal length, as shown in Eq. (13), N becomes large. Therefore, we need to budget only a small power to the diffractive lens. Table 8 gives the specification of the designed hybrid lens in the zoom system. Table 9 lists the radius of each zone.
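The paper's own zone-radius expression is not reproduced here; the standard textbook relation for a diffractive lens is r_m = sqrt(2·m·λ0·f0), with the implied ring count N = D²/(8·λ0·f0). The sketch below implements that relation with purely hypothetical numbers (the paper's actual aperture and focal length are not assumed):

```python
from math import sqrt

def zone_radius(m, lam0, f0):
    # Radius of the m-th full-wave zone: r_m = sqrt(2 * m * lam0 * f0)
    return sqrt(2.0 * m * lam0 * f0)

def zone_count(D, lam0, f0):
    # Rings inside aperture D: N = (D/2)**2 / (2*lam0*f0) = D**2 / (8*lam0*f0)
    return int(D * D / (8.0 * lam0 * f0))

# Hypothetical example, all lengths in mm: lam0 = 10.2e-3, f0 = 500, D = 20
print(round(zone_radius(1, 10.2e-3, 500.0), 3))  # first zone radius, ~3.194
print(zone_count(20.0, 10.2e-3, 500.0))          # 9 full zones
```

These relations make the text's point explicit: shrinking f0 inflates N, which is why the diffractive surfaces are budgeted only a small fraction of the total power.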
Figure 5 compares two point spread functions by L3, Fig. 5(a) is obtained from refracted image only and Fig. 5(b) from refracted plus diffracted images. Other diffraction orders, not design order,
generate the background noises such as flares. They result in degrading the contrast at around zero frequency. However, the width of the central peak is much more narrow than that of the refracted
image, which eventually improves the image quality at high frequency.
For a diffractive optical system that uses spectrally broadband illumination, the wavelength dependence of the diffraction efficiency must be considered when evaluating imaging performance. To determine the polychromatic transfer function for a diffractive lens, one first defines a polychromatic integrated efficiency for the wavelength band ranging from λ[min] to λ[max]. Denoting the integrated efficiency at wavelength λ by η[[int]](λ), the polychromatic integrated efficiency is given by [6]
In Table 8, the diffractive components are very weakly powered, at about 4% of the hybrid components' powers. In addition, the wavelength-to-zone-period ratio is very small across the entire lens, as shown in Table 9. In these situations, scalar predictions of diffraction efficiency are known to be valid replacements for the η[[int]](λ) of Eq. (14) [6, 15]. The scalar diffraction theory value for the diffraction efficiency in order m at wavelength λ is given by
where λ[0] is the design wavelength for the diffractive lens. The expression for η[scalar](λ) given by Eq. (15) cannot be integrated explicitly, but an approximate expression can be found by expanding η[scalar](λ) in a power series and integrating term by term. The resulting approximate expression for the first diffraction order, m = 1, is given by
As a result, the polychromatic integrated efficiency of the LWIR zoom system is 93.39%, and background noise reduces the diffraction efficiency by 6.61% for λ[min] = 7.7 µm, λ[max] = 12.8 µm, and λ[0] = 10.2 µm. These losses in diffraction efficiency may reduce the contrast of the image. Figure 6 shows the polychromatic modulation transfer functions calculated by using Eq. (16). The polychromatic MTF is scaled by this efficiency factor and drops at zero frequency. The resulting curves show the behavior predicted by the point spread functions of Fig. 5(b).
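The integrated-efficiency figure can be checked numerically. The sketch below (not from the paper) evaluates the first-order sinc² scalar efficiency of Eq. (15) over the stated band and averages it per Eq. (14), landing within about one percentage point of the quoted 93.39% (the paper itself uses the series approximation of Eq. (16)):

```python
from math import sin, pi

def eta_scalar(lam, lam0=10.2, m=1):
    # Scalar first-order diffraction efficiency: sinc^2(m - lam0/lam), Eq. (15).
    x = m - lam0 / lam
    if x == 0.0:
        return 1.0
    return (sin(pi * x) / (pi * x)) ** 2

def eta_poly(lam_min=7.7, lam_max=12.8, n=2000):
    # Polychromatic integrated efficiency, Eq. (14), by trapezoidal quadrature.
    h = (lam_max - lam_min) / n
    s = 0.5 * (eta_scalar(lam_min) + eta_scalar(lam_max))
    for i in range(1, n):
        s += eta_scalar(lam_min + i * h)
    return s * h / (lam_max - lam_min)

print(round(eta_poly(), 3))  # close to the paper's 93.39 %
```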
We have designed and evaluated the optical zoom system for an LWIR camera with an uncooled IR detector. The fields of view are 51.28° × 38.46° at wide and 5.50° × 4.12° at narrow field positions. The
optimum f-number of the zoom system was determined to be F/1.4 from Airy disk and MRTD analysis.
Through paraxial design and optimization method, we have obtained an extended four-group inner-focus zoom system having a focal length of 10 to 100 mm. Balancing the wave-front aberrations by use of
diffractive surfaces, we improved the performance of the zoom system further. To evaluate the polychromatic transfer function for a weakly powered diffractive lens, the scalar diffraction theory was
effectively used.
As a result, the polychromatic integrated efficiency is 93.39%, and background noise reduces the diffraction efficiency by 6.61%. This loss in diffraction efficiency slightly reduces the contrast of the image at zero frequency and scales down the polychromatic MTF accordingly. However, the designed optical zoom system has sufficient image quality for an LWIR camera, as shown in Fig. 6. Consequently, the zoom system developed in this work performs reasonably as an LWIR zoom camera.
|
{"url":"https://oak.go.kr/central/journallist/journaldetail.do?article_seq=14609","timestamp":"2024-11-04T04:32:05Z","content_type":"text/html","content_length":"190883","record_id":"<urn:uuid:b90ecd3e-d13b-4e54-b619-d2b66a943300>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00438.warc.gz"}
|
Thinking About Immediate Products Of Math U See Algebra Reviews - UK Business Directory
Learning platforms, online communities, math tools, and online courses for higher math learners and enthusiasts. It is a number-filling logic puzzle game that plays like a cross between Sudoku and a crossword puzzle. Offline access: enjoyable math games can be played offline through numerous apps. There are a number of free and paid apps available on both the App Store and Google Play that help kids learn math while having fun.
Mathway is a problem-solving tool that can answer math problems and provide step-by-step solutions. Worksheets can be downloaded and printed for classroom use, or activities can be completed and automatically graded online. There is also a welcome speed game for kids and students.
These math websites for kids comprise numerous grade-adjusted games in which students practice the pre-K-5 numerical concepts they are learning. If you are looking for an interactive learning platform that could improve your child's math performance, then Brighterly is a good alternative.
Players should answer four questions at each level, with sixteen levels in total. Engaging animated learning videos, games, quizzes, and activities encourage kids on their unique learning paths. Mathway is a math problem-solving app that provides step-by-step solutions for various math problems.
Step 2: Each person then hops, skips, and counts at the same time, which is a very good way of helping those multiplication tables stick. This website, mathusee.com, includes many fun math games for kids, as well as other resources like math videos and reading games.
Secrets Of Math U See Geometry Reviews Around The USA
Methods For Math U See Geometry Reviews
Brian Knowles, a sophomore at Leesville, plays games to procrastinate so he does not have to do his work. Knowles believes that using Cool Math Games can be helpful. Shine bright in the math world by learning how to relate activities with A.M. and P.M.
Plans For Math U See Geometry Reviews – For Adults
This math app takes a visual learning approach to math concepts for kindergarten to 5th grade. These math websites for kids help build mathematical skills in children. If a quick answer to a math problem is needed without focusing much on learning to solve it independently, Photomath might be the better option.
With the large number of new areas of mathematics that have appeared since the beginning of the twentieth century and continue to appear, defining mathematics by its object of study becomes an impossible task. The answer to the question of who invented math is, disappointingly, everyone and no one at the same time.
The term "applied mathematics" also describes the professional specialty in which mathematicians work on problems that are typically concrete but sometimes abstract. Prepare your KS4 students for maths GCSEs with revision lessons designed to build confidence and familiarity.
|
{"url":"https://directorybusiness.co.uk/thinking-about-immediate-products-of-math-u-see-algebra-reviews/","timestamp":"2024-11-12T19:32:51Z","content_type":"text/html","content_length":"147793","record_id":"<urn:uuid:3f399141-1c27-43e7-a400-7e42f2a1b57e>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00526.warc.gz"}
|
Translating Chemical Equations Worksheet Answers - Equations Worksheets
Translating Chemical Equations Worksheet Answers
Translating Chemical Equations Worksheet Answers – The aim of Expressions and Equations Worksheets is to help your child learn more efficiently and effectively. These worksheets include interactive exercises as well as problems based on order of operations. With these worksheets, children can grasp both simple and complex concepts in a short amount of time. You can download these free documents in PDF format. They will aid your child's learning and practice of math equations. They are useful for students in 5th through 8th grade.
Download Free Translating Chemical Equations Worksheet Answers
The worksheets listed here are for students in the 5th-8th grades. These two-step word problems are constructed using decimals or fractions. Each worksheet contains ten problems. You can find them at any website or print source. These worksheets are a great way to practice rearranging equations and assist students with understanding equality and inverse operations.
These worksheets are designed for fifth through eighth grade students. They are ideal for students who struggle to calculate percentages. It is possible to select three different kinds of problems. You can choose to solve single-step problems that contain whole numbers or decimal numbers, or to use word-based methods for fractions and decimals. Each page will have 10 equations. The Equations Worksheets are used by students in the 5th through 8th grades.
These worksheets can be used to practice fraction calculations as well as other algebraic concepts. Many of these worksheets allow users to select from three different types of problems. You can select a word-based problem or a numerical one. It is essential to pick the type of problem, as each problem will be different. There are ten problems on each page, and they are excellent for students in the 5th through 8th grades.
These worksheets help students understand the relationship between variables and numbers. The worksheets let students practice solving polynomial equations and learn how to use equations to solve
problems in everyday life. These worksheets are an excellent method to understand equations and formulas. These worksheets can teach you about different types of mathematical problems along with the
different symbols used to express them.
These worksheets are great for students in the early grades. They will teach students how to solve equations and graph them. They are excellent for practicing with polynomial variables, and they also help you learn how to factor and simplify equations. There are plenty of worksheets you can use to help kids learn equations. The most effective way to learn about equations is to do the work yourself.
You will find many worksheets that teach quadratic equations, with equations worksheets at several levels of difficulty. These worksheets are designed to assist you in solving problems of the fourth degree. Once you have completed the required level, you can work on solving different types of equations. You can continue to take on the same kinds of problems; for instance, you could work the same problem as an extended one.
Gallery of Translating Chemical Equations Worksheet Answers
Net Ionic Equation Worksheets Chemistry Learner
MemorableAcademic Types Of Reactions Worksheet Answer Key TypesOf
Download Balancing Equations 03 Chemical Equation Chemistry
|
{"url":"https://www.equationsworksheets.net/translating-chemical-equations-worksheet-answers/","timestamp":"2024-11-04T20:34:21Z","content_type":"text/html","content_length":"62850","record_id":"<urn:uuid:2585b2f0-e883-432e-b01d-eed7fc5829c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00577.warc.gz"}
|
Write a formula that checks whether or not the cell is in a parent row, then performs the following calculation based on this information:
If a task has children, sum the children.
If a task has no children, subtract [Extra Column] from [Status].
Write the formula in the [Numbers] parent row, then drag the formula down to apply it to all the children rows.
Calculation Based on Hierarchy Position.
• Try something like this:
=IF(COUNT(CHILDREN()) = 0, Status@row - [Extra Column]@row, SUM(CHILDREN()))
You should be able to convert this into a column formula instead of having to dragfill.
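The same branching logic can be sketched outside Smartsheet; the Python below uses hypothetical field names (status, extra, children) standing in for the sheet's columns:

```python
def row_value(row):
    # A parent row (one that has children) sums its children's values;
    # a leaf row returns Status minus Extra Column.
    if row["children"]:
        return sum(row_value(c) for c in row["children"])
    return row["status"] - row["extra"]

tree = {"status": 0, "extra": 0, "children": [
    {"status": 10, "extra": 3, "children": []},
    {"status": 8, "extra": 2, "children": []},
]}
print(row_value(tree))  # (10 - 3) + (8 - 2) = 13
```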
|
{"url":"https://community.smartsheet.com/discussion/90861/write-a-formula-that-checks-whether-or-not-the-cell-is-in-a-parent-row-then-performs-the-following","timestamp":"2024-11-06T11:41:50Z","content_type":"text/html","content_length":"390605","record_id":"<urn:uuid:19e18c53-f2df-427d-a9ac-b1a04ccaa85b>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00042.warc.gz"}
|
Miquel's Point
Miquel's Point: What Is It?
A Mathematical Droodle
Copyright © 1996-2018 Alexander Bogomolny
Select a point on each side of a triangle or its extensions. By the Pivot Theorem, the three circles shown in the applet pass through the same point, the Miquel point of the three circles.
When the three selected points are collinear, the circumcircle of the given triangle also passes through the same point. The point is then called the Miquel point of the 4-line, i.e., of the four lines.
Given a 4-line, number the lines 1, 2, 3 and 4. There are four ways to take 3 lines at a time. Each gives us a triangle and its circumcircle. We are to prove that the four circles share a point.
Consider two triangles, say 123 and 124. Their circumcircles intersect at two points of which one does not belong to any of the given lines. Call it P. Consider the simsons of P with respect to the
two triangles. One passes through the feet of the perpendiculars to the lines 1, 2 and 3. The other passes through the feet of perpendiculars to the lines 1, 2 and 4. The two simsons thus share two
points, and therefore coincide.
This means that the feet of the perpendiculars from P to the sides of the triangles 134 and 234 all lie on the same line which then must be the simson of P with respect to triangles 134 and 234. From
here, P lies on the circumcircles of both triangles 134 and 234. The four circumcircles intersect at P!
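The concurrency is easy to check numerically. The sketch below (not from the original page) builds four lines in general position, circumscribes the four triangles they determine, and verifies that all four circumcircles share a point:

```python
from math import hypot, sqrt
from itertools import combinations

def line_intersect(l1, l2):
    # Intersection of two lines, each given by two points.
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1*y2 - y1*x2) * (x3 - x4) - (x1 - x2) * (x3*y4 - y3*x4)) / d
    py = ((x1*y2 - y1*x2) * (y3 - y4) - (y1 - y2) * (x3*y4 - y3*x4)) / d
    return (px, py)

def circumcircle(a, b, c):
    # Circumcenter and circumradius of triangle abc.
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay) + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx) + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy, hypot(ax - ux, ay - uy))

# Four lines in general position (no two parallel, no three concurrent).
lines = [((0, 0), (1, 0)), ((0, 3), (3, 0)), ((0, 1), (2, 5)), ((4, 0), (0, 2))]

circles = []
for i, j, k in combinations(range(4), 3):
    tri = (line_intersect(lines[i], lines[j]),
           line_intersect(lines[i], lines[k]),
           line_intersect(lines[j], lines[k]))
    circles.append(circumcircle(*tri))

# Intersect the first two circumcircles; one of their two meeting points
# must lie on all four circles -- that is the Miquel point.
(x0, y0, r0), (x1, y1, r1) = circles[0], circles[1]
dd = hypot(x1 - x0, y1 - y0)
a = (r0*r0 - r1*r1 + dd*dd) / (2 * dd)
h = sqrt(r0*r0 - a*a)
xm, ym = x0 + a * (x1 - x0) / dd, y0 + a * (y1 - y0) / dd
cands = [(xm + h * (y1 - y0) / dd, ym - h * (x1 - x0) / dd),
         (xm - h * (y1 - y0) / dd, ym + h * (x1 - x0) / dd)]

def on_circle(p, c, tol=1e-7):
    return abs(hypot(p[0] - c[0], p[1] - c[1]) - c[2]) < tol

miquel = [p for p in cands if all(on_circle(p, c) for c in circles)]
print(miquel)  # exactly one common point, as the theorem predicts
```

The other intersection candidate is the shared vertex of the first two triangles, which lies on two of the given lines and is therefore filtered out.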
Now, what if you have a 5-line. There are five ways to pick a 4-line out of five lines. In each case there is a Miquel's point. Do you think they are strewn out randomly? If so, think again.
|
{"url":"https://www.cut-the-knot.org/Curriculum/Geometry/Miquel.shtml","timestamp":"2024-11-03T09:18:57Z","content_type":"text/html","content_length":"16205","record_id":"<urn:uuid:8775c29e-2a4a-4a1c-8051-a9a0dbcecf3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00111.warc.gz"}
|
Population Growth Calculator - Population Growth Rate
This population growth calculator is a valuable tool used by demographers, economists, and policymakers to estimate how a population will change over time.
This calculator takes into account various factors such as birth rates, death rates, and migration patterns to project future population sizes.
The primary purpose of a population growth calculator is to:
1. Forecast future population sizes
2. Analyze demographic trends
3. Plan for resource allocation
4. Predict potential societal challenges
By inputting current population data and growth rates, users can obtain projections for future years, which is crucial for long-term planning in areas such as urban development and healthcare.
Population Growth Calculator
Let’s use the simple exponential growth formula to calculate population changes over time for a hypothetical city.
We’ll assume an initial population of 100,000 and a growth rate of 1.5% per year.
The formula we’ll use is: P(t) = P₀ * (1 + r)^t
• P(t) is the population after time t
• P₀ is the initial population
• r is the annual growth rate (as a decimal)
• t is the number of years
Here’s a table showing the population projections for 5 years:
Year Population
0 100,000
1 101,500
2 103,023
3 104,568
4 106,136
5 107,728
These calculations show how the population would grow over 5 years with a constant 1.5% annual growth rate.
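The table above can be reproduced with a few lines of Python (a sketch, using half-up rounding because the year-2 value of 103,022.5 falls exactly on .5):

```python
def project_population(p0, rate, years):
    # P(t) = P0 * (1 + r)**t, rounded half-up to whole persons.
    return [int(p0 * (1 + rate) ** t + 0.5) for t in range(years + 1)]

projection = project_population(100_000, 0.015, 5)
print(projection)
# matches the table: [100000, 101500, 103023, 104568, 106136, 107728]
# (year 2 sits exactly on .5, so floating-point noise may shift it by 1)
```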
Population Calculation Formula
The basic population calculation formula is relatively straightforward, but it can be expanded to include more complex factors. The simplest form of the formula is:
P(t) = P₀ + (B – D) + (I – E)
• P(t) is the population at time t
• P₀ is the initial population
• B is the number of births
• D is the number of deaths
• I is the number of immigrants
• E is the number of emigrants
For more accurate long-term projections, demographers often use the exponential growth formula:
P(t) = P₀ * e^(rt)
• e is the mathematical constant (approximately 2.71828)
• r is the growth rate
• t is the time period
This formula assumes that the population grows continuously at a constant rate.
Real-world population growth is often more complex, requiring more sophisticated models that account for changing growth rates and other dynamic factors.
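For the same inputs as the earlier table, the continuous model grows slightly faster than once-a-year compounding; a quick comparison:

```python
from math import exp

p0, r, t = 100_000, 0.015, 5
p_discrete = p0 * (1 + r) ** t   # compounded once per year
p_continuous = p0 * exp(r * t)   # compounded continuously
print(round(p_discrete), round(p_continuous))  # 107728 107788
```

The gap (about 60 people after five years here) is small at low rates but widens as the growth rate or horizon increases.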
What is Population Growth?
Population growth refers to the increase in the number of individuals in a population over time. It is a fundamental concept in demography and population studies, reflecting the balance between
births, deaths, and migration within a specific area.
Population growth can be:
• Positive: When the population increases
• Negative: When the population decreases
• Zero: When the population remains stable
Several factors influence population growth:
1. Fertility rates: The average number of children born to women of reproductive age
2. Mortality rates: The number of deaths per 1,000 individuals per year
3. Migration: The movement of people in and out of a specific area
4. Age structure: The distribution of different age groups within a population
5. Socioeconomic factors: Such as education, healthcare, and economic opportunities
Understanding population growth is crucial for addressing global challenges such as resource allocation, environmental sustainability, and economic development.
Population Growth Rate
The population growth rate is a measure of how quickly a population is increasing or decreasing. It is typically expressed as a percentage and represents the change in population size over a specific
period, usually one year.
The formula for calculating the population growth rate is:
r = (P₁ – P₀) / P₀ * 100
• r is the growth rate
• P₁ is the population at the end of the period
• P₀ is the population at the beginning of the period
Population growth rates can vary significantly between different countries and regions. Factors affecting growth rates include:
• Economic development: Generally, more developed countries have lower growth rates
• Cultural norms: Attitudes towards family size and contraception
• Government policies: Such as China’s former one-child policy or incentives for larger families
• Access to healthcare: Better healthcare often leads to lower mortality rates but can also result in lower fertility rates
• Education levels: Higher education levels, especially for women, often correlate with lower fertility rates
Growth Rate Formula
The growth rate formula is used to calculate the rate at which a population is increasing or decreasing over a specific period. The basic formula is:
Growth Rate = (Final Value – Initial Value) / Initial Value * 100
In demographic terms, this can be expressed as:
r = (P₁ – P₀) / P₀ * 100
• r is the growth rate (as a percentage)
• P₁ is the population at the end of the period
• P₀ is the population at the beginning of the period
This formula gives you the total growth rate for the period. To calculate the annual growth rate over multiple years, you would use:
r = (ⁿ√(P₁ / P₀) – 1) * 100
Where n is the number of years between P₀ and P₁.
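Both versions of the formula can be written as small helper functions; the sample figures below reuse the hypothetical city from earlier in this article:

```python
def total_growth_rate(p0, p1):
    """Total percentage growth over the whole period: (P1 - P0) / P0 * 100."""
    return (p1 - p0) / p0 * 100

def annual_growth_rate(p0, p1, years):
    """Average annual rate over `years` years: (nth-root(P1 / P0) - 1) * 100."""
    return ((p1 / p0) ** (1 / years) - 1) * 100

# Growing from 100,000 to 107,728 over 5 years is 7.728% in total,
# which works out to roughly 1.5% per year.
print(total_growth_rate(100_000, 107_728))
print(round(annual_growth_rate(100_000, 107_728, 5), 3))
```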
Formula for the population growth model?
The population growth model has several formulas depending on the type of growth being modeled. The two most common are:
1. Exponential Growth Model: P(t) = P₀ * e^(rt) Where:
□ P(t) is the population at time t
□ P₀ is the initial population
□ e is the mathematical constant (approximately 2.71828)
□ r is the growth rate
□ t is the time period
2. Logistic Growth Model: P(t) = K / (1 + ((K – P₀) / P₀) * e^(-rt)) Where:
□ K is the carrying capacity of the environment
□ Other variables are the same as in the exponential model
The logistic model is more realistic for long-term projections as it accounts for limiting factors in population growth.
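Both models are easy to sketch side by side. The carrying capacity K = 500,000 below is an arbitrary assumed value for illustration:

```python
import math

def exponential(p0, r, t):
    """Unbounded growth: P(t) = P0 * e^(rt)."""
    return p0 * math.exp(r * t)

def logistic(p0, r, t, k):
    """Growth limited by carrying capacity k:
    P(t) = K / (1 + ((K - P0) / P0) * e^(-rt))."""
    return k / (1 + ((k - p0) / p0) * math.exp(-r * t))

K = 500_000  # assumed carrying capacity (illustrative)
print(round(exponential(100_000, 0.015, 50)), round(logistic(100_000, 0.015, 50, K)))
```

At t = 0 the logistic curve starts at P₀, and for large t it levels off at K, whereas the exponential curve grows without bound.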
Formula for calculating the growth factor in a population?
The growth factor is a multiplier that shows how much a population increases (or decreases) over a given time period. The formula for calculating the growth factor is:
Growth Factor = 1 + r
Where r is the growth rate expressed as a decimal.
For example, if the population growth rate is 2.5% (0.025), the growth factor would be:
Growth Factor = 1 + 0.025 = 1.025
This means that for each time period, the population is multiplied by 1.025 to get the new population.
To calculate the population after t time periods, you would use:
P(t) = P₀ * (Growth Factor)^t
This formula is equivalent to the exponential growth model mentioned earlier, just expressed in a different form.
6.6.8. Deriving via
6.6.8. Deriving via¶
This allows deriving a class instance for a type by specifying another type that is already an instance of that class. This only makes sense if the methods have identical runtime representations, in
the sense that coerce (see The Coercible constraint) can convert the existing implementation into the desired implementation. The generated code will be rejected with a type error otherwise.
DerivingVia is indicated by the use of the via deriving strategy. via requires specifying another type (the via type) to coerce through. For example, this code:
{-# LANGUAGE DerivingVia #-}
import Numeric
newtype Hex a = Hex a
instance (Integral a, Show a) => Show (Hex a) where
show (Hex a) = "0x" ++ showHex a ""
newtype Unicode = U Int
deriving Show
via (Hex Int)
-- >>> euroSign
-- 0x20ac
euroSign :: Unicode
euroSign = U 0x20ac
Generates the following instance
instance Show Unicode where
show :: Unicode -> String
show = Data.Coerce.coerce
@(Hex Int -> String)
@(Unicode -> String)
This extension generalizes GeneralizedNewtypeDeriving. To derive Num Unicode with GND (deriving newtype Num) it must reuse the Num Int instance. With DerivingVia, we can explicitly specify the
representation type Int:
newtype Unicode = U Int
deriving Num
via Int
deriving Show
via (Hex Int)
euroSign :: Unicode
euroSign = 0x20ac
Code duplication is common in instance declarations. A familiar pattern is lifting operations over an Applicative functor. Instead of having catch-all instances for f a which overlap with all other
such instances, like so:
instance (Applicative f, Semigroup a) => Semigroup (f a) ..
instance (Applicative f, Monoid a) => Monoid (f a) ..
We can instead create a newtype App (where App f a and f a are represented the same in memory) and use DerivingVia to explicitly enable uses of this pattern:
{-# LANGUAGE DerivingVia, DeriveFunctor, GeneralizedNewtypeDeriving #-}
import Control.Applicative
newtype App f a = App (f a) deriving newtype (Functor, Applicative)
instance (Applicative f, Semigroup a) => Semigroup (App f a) where
(<>) = liftA2 (<>)
instance (Applicative f, Monoid a) => Monoid (App f a) where
mempty = pure mempty
data Pair a = MkPair a a
deriving stock Functor
deriving (Semigroup, Monoid)
via (App Pair a)
instance Applicative Pair where
pure a = MkPair a a
MkPair f g <*> MkPair a b = MkPair (f a) (g b)
Note that the via type does not have to be a newtype. The only restriction is that it is coercible with the original data type. This means there can be arbitrary nesting of newtypes, as in the
following example:
newtype Kleisli m a b = Kleisli (a -> m b)
deriving (Semigroup, Monoid)
via (a -> App m b)
Here we make use of the Monoid ((->) a) instance.
When used in combination with StandaloneDeriving we swap the order for the instance we base our derivation on and the instance we define e.g.:
deriving via (a -> App m b) instance Monoid (Kleisli m a b)
2 to 4 players
One standard 52-card deck.
The object is to be the first player to score 121 points (long game) or 61 points (short game).
For 2 players, each player is dealt 6 cards. For 3 players, each player is dealt 5 cards, and one card is dealt into the kitty. For 4 players, each player is dealt 5 cards. The rest of the cards are
placed in their pile on the table. The cards are cut, and low card is the first dealer. Deal passes on clockwise.
Game play
Cribbage is played in two phases. Before game play begins, you must decide which four cards you wish to keep, and discard your extra cards to the kitty. The kitty cards are given to the dealer, but
they aren't used for the first phase of play (and cannot be looked at until the first phase of play is over).
This is the general scoring table. Below it are some rules that apply to the scoring table.
Combination                                            Points
Fifteen (any combination of cards that adds up to 15)  2
Run (three or more cards, ace is always low)           1 per card
Pair                                                   2
3-of-a-kind                                            6
4-of-a-kind                                            12
Knobs (jack of the suit of the flipped card)           1 (in hand) *, or 2 (flipped) **
Flush (4 or 5 cards in hand; all 5 cards in kitty)     1 per card *

* = this item is scored in the second phase of game play only
** = this item is scored immediately upon flipping
After everyone has discarded to the kitty, then the player to the dealer's left cuts the deck and flips up a card. If this card is a jack, it is the Knobs card, and the player who flipped it
immediately scores his two points. Otherwise, this card is only used during the second round of scoring and does not affect the first round of play.
During the first round of play, players will play cards one at a time, starting with the person to the left of the dealer. Totals of 15, runs, pairs, 3-of-a-kinds, or 4-of-a-kinds that result from
playing are immediately scored to the person who created them. The first set stops when someone has taken the total to 31, or when the current player does not have any cards that wouldn't put the
total over 31. If the total is not all the way to 31, the last player who played cards may continue to play cards until the total goes to 31, or until they also cannot play. If the total gets to 31, the person who played the last card gets two points; otherwise, the person who played the last card gets one point. The total is reset to zero, and a new set of cards is played. Play continues until all players are out of cards.
Two players; number two is the dealer, and number one is sitting across from him. Their cards are:
Player 1 9♦ 10♦ Q♦ K♣
Player 2 5♣ 6♦ 10♠ J♠
Flipped Card J♦
Kitty A♦ 2♣ 3♠ 7♣
Player one flipped a jack, so he immediately scores 2 points.
Player one has the lead, and plays K♣. The total is now 10. Player two plays 5♣. The total is now 15, and player two immediately points himself two points for the 15. Player one plays Q♦. The total
is now 25. Player two plays 6♦. The total is now 31. This set is over, and player two points himself two points for ending the set by reaching 31.
Player two made the last play, so it is player one's turn. One plays 10♦, total is now ten. Two plays 10♠, total is now twenty. Two scores two points for the pair. One plays his last card, 9♦, total
is now twenty nine. Two says "go", meaning that he cannot play without taking the total over 31. One is out of cards, so he also cannot play. This set is over, and player one points himself one point
for ending the set without reaching 31.
Player two plays his last card, J♠, total is now ten. Both players are out of cards, so player two points himself one point for ending the set without reaching 31.
During the second round of play, the players collect their cards back into their hands, and lay them in front of themselves to be scored (and verified by the other players). Players score their hands
(including the flipped card), starting with the player to the left of the dealer. Scoring order is important, because the first player to hit the target score is the winner (regardless of whether
another player would also have hit the target score). Each player is responsible for counting their score, announcing it, and scoring themselves. Scoring continues clockwise, until the dealer, who
scores last. Once the dealer has scored his hand, he then flips over the kitty cards and scores it like an additional bonus hand.
Be careful to count your points right! When playing competitively, mis-scoring has detrimental effects. If a player catches you underscoring yourself, they may claim the extra, uncounted points for themselves. If a player catches you overscoring yourself, you will be forced to correct your score, and they may claim the difference in scores for themselves. In some tournaments, you will be required to count your score aloud: "15 - 2, 15 - 4, run - 7, knobs - 8" to ensure you are scoring correctly. Most people play friendly, though, and allow players to correct their scores.
We will continue this example with the last cards we used, so we can show you how the second round is scored.
Player 1 9♦ 10♦ Q♦ K♣
Player 2 5♣ 6♦ 10♠ J♠
Flipped Card J♦
Kitty A♦ 2♣ 3♠ 7♣
Player one is the first player to the left of the dealer, so he starts scoring first.
Player one has a score of 5 points, all for the 9-10-J-Q-K run. (There is no flush: with the K♣, the hand holds only three diamonds, and the scoring table requires at least 4 cards in hand. The knobs card was already scored when it was flipped.)
Player two has a score of 8 points: 2 points for each of the 15s, and 2 points for the pair of jacks. In the kitty, player two has a score of 5: 3 points for the A-2-3 run, and 2 points for the 15
from J-3-2.
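As an aside for programmers, the fifteens in a hand can be checked by brute force. This sketch (not part of the rules) scores 2 points for every distinct subset of card values summing to 15, with face cards counted as 10 and aces as 1:

```python
from itertools import combinations

def score_fifteens(values):
    """2 points per distinct combination of card values that sums to 15."""
    points = 0
    for n in range(2, len(values) + 1):
        for combo in combinations(values, n):
            if sum(combo) == 15:
                points += 2
    return points

# Player two's hand plus the flipped card: 5♣ 6♦ 10♠ J♠ J♦ -> values 5, 6, 10, 10, 10.
# The 5 pairs with each of the three ten-cards, giving three fifteens (6 points).
print(score_fifteens([5, 6, 10, 10, 10]))
```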
Traditionally, Cribbage is scored with a cribbage board. This is typically a board with holes and pegs that allow you to easily count up to your intended score. While you can score this game on
paper, it isn't very convenient, because of the rapid and repeated scoring nature of the game.
Some people play that the score of flipping Knobs is given to the dealer, rather than the person who flips the card.
Unified Threads (UNC/UNF/UNEF): Dimensions & Formulas (2024)
The Unified Thread Standard (UTS) defines a 60° thread form in Inch dimensions as described in the ASME B1.1 standard. It is the North American equivalent of the ISO metric thread system. The UTS
serves as the leading standard for bolts, nuts, and a wide variety of other threaded fasteners used in the USA and Canada. However, in recent years the metric thread standard is becoming more common.
The standard defines diameter and pitch combinations along with allowances, tolerances, and designations. It has the same 60° profile as the ISO metric screw thread, but the basic dimensions of each
UTS thread were chosen as an inch fraction rather than a millimeter value.
Basic designation syntax:
• Nominal Diameter in Inch Fraction
• Pitch in TPI
• Series: UNC / UNF / UNEF / UN
• Class: 1A, 2A, 3A, 1B, 2B or 3B.
• From 1/4″ and above the diameters are given in inch fractions. For example 1/4″, 3/4″, 1 1/4″.
• Below 1/4″ the diameters are given by a series of numbers from #0 to #12. Each “number” describes an arbitrary diameter as shown in the below table.
• It is also allowed to denote the diameters with a decimal value. For example 0.250 (1/4″), 0.4375 (7/16″), etc.
Standard Diameter numbers below 1/4″

Number           #0      #1      #2      #3      #4      #5      #6      #8      #10     #12
Diameter (inch)  0.06    0.073   0.086   0.099   0.112   0.125   0.138   0.164   0.19    0.216
Diameter (mm)    1.524   1.8542  2.1844  2.5146  2.8448  3.175   3.5052  4.1656  4.8260  5.4864
• By default, the pitch is given in TPI. For example, 1/4-20 means a thread with a pitch of 20 TPI (1/20=0.05″).
• It is also allowed to denote the pitch by distance. For example, 1/4-0.05P means a thread with a pitch of 0.05″ (same as 20 TPI).
• In most cases, the series will be UNC, UNF, or UNEF (Coarse, Fine, or Extra fine pitch).
• ASME B1.1 also defines several constant-pitch series, denoted ##-UN (for example, 12-UN or 8-UN). The pitch remains the same for the whole diameter range in these series; however, they are much less popular.
• The class is defined by a two-character code.
□ The first character is a digit between 1 and 3. 3=Tight fit, 2=Medium fit, and 1=Loose fit.
□ The second character is a letter. A=External thread and B=Internal thread.
□ For a detailed explanation of classes, read the dedicated section below.
Additional Parameters:
• Direction: By default, the thread is right-hand. For a left-hand thread, add the suffix -LH.
• Number of Starts: By default, all threads have a single start. Thus the lead equals the pitch. For a multiple-start thread, the lead is also indicated. For example: 3/4– 0.0625P – 0.1875L UNF
denotes a 3/4″ UNF thread with 3 starts. (0.1875/0.0625=3)
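As an illustration of the syntax, a rough parser for simple designations can be sketched in a few lines of Python. It covers only the common single-start, TPI-style forms shown above; decimal diameters and P/L multi-start designations are deliberately out of scope:

```python
import re

def parse_designation(text):
    """Parse a simple Unified thread designation like '1/4-28 UNF-2A-LH'.
    Illustrative sketch only; it does not cover every legal form."""
    m = re.match(
        r"(?P<dia>#?[\d/]+)-(?P<tpi>\d+)\s+"
        r"(?P<series>UNC|UNF|UNEF|UN)"
        r"(?:-(?P<cls>[123][AB]))?"
        r"(?P<lh>-LH)?$",
        text,
    )
    if not m:
        raise ValueError(f"unrecognised designation: {text}")
    tpi = int(m.group("tpi"))
    return {
        "diameter": m.group("dia"),
        "tpi": tpi,
        "pitch_in": 1 / tpi,                 # pitch in inches = 1 / TPI
        "series": m.group("series"),
        "class": m.group("cls") or "unspecified",
        "direction": "left" if m.group("lh") else "right",
    }

print(parse_designation("1/4-28 UNF-2A-LH"))
```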
Designation Examples of Unified Threads
Designation                 | Series                | Int/Ext   | Nominal Dia. | TPI | Pitch    | Lead     | Class     | Starts | Direction
#8-32 UNC                   | UNC                   | undefined | 0.164"       | 32  | 0.03125" | 0.03125" | undefined | 1      | Right hand
1/4-28 UNF-2A               | UNF                   | External  | 0.25"        | 28  | 0.0357"  | 0.0357"  | 2A        | 1      | Right hand
3/4-20 UNEF-3B              | UNEF                  | Internal  | 0.75"        | 20  | 0.05"    | 0.05"    | 3B        | 1      | Right hand
1-12 UN-1A                  | Constant Pitch 12 TPI | External  | 1"           | 12  | 0.0833"  | 0.0833"  | 1A        | 1      | Right hand
3/4–0.0625P–0.1875L UNF-2A  | UNF                   | External  | 0.75"        | 16  | 0.0625"  | 0.1875"  | 2A        | 3      | Right hand
1/4-28 UNF-2A-LH            | UNF                   | External  | 0.25"        | 28  | 0.0357"  | 0.0357"  | 2A        | 1      | Left hand
Possible Combinations (Diameter/Pitch)
The charts below show all the possible thread combinations that are defined in ASME B1.1
• UNC– Unified Inch Coarse Thread Series
• UNF – Unified Inch Fine Thread Series
• UNEF – Unified Inch Extra Fine Thread Series
• ##-UN – Unified Inch Constant PitchThread Series
Possible Combinations for 1/16″ – 1/4″
Possible Combinations for 5/16″ – 1 1/2″
Possible Combinations for 1 9/16″ – 6″
Basic Thread Dimensions (UNC, UNF & UNEF)
The basic dimensions are the nominal dimensions of a Unified thread profile without allowance and tolerances (the thread class defines those). They are based on the ASME B1.1 standard. The basic dimensions can be used for design; however, for manufacturing and machining you need the allowable range of each dimension, which can be found in the standard or in the Limits and Dimensions section below.
All the basic dimensions are derived from simple formulas based on the thread’s nominal diameter and pitch.
List of symbols used in charts and formulas of Unified Inch Threads
Symbol Explanation
Basic Parameters - Diameters and Pitch
D Major (Basic) diameter of internal thread
D[1] Minor Diameter - Internal Thread
D[2] Pitch Diameter - Internal Thread
D[3] Major diameter, rounded root, internal thread
d Major (Basic) diameter of external thread
d[1] Minor Diameter - External Thread
d[2] Pitch Diameter - External Thread
d[3] Major diameter, rounded root, external thread
n Number of threds per Inch (TPI)
P Pitch (Distance)
L Lead
Height Parameters
H Height of fundamental triangle
h[s] Thread Height - External Thread
h[as] Thread addendum - External Thread
h[n] Thread Height - Internal Thread
h[an] Thread addendum - Internal Thread
Length Parameters
LE Length of Engagment
L[ts] Length - External Thread
L[tn] Length- Internal Thread
Allowance, Deviation, and Tolerances
T[D1] Tolerance - D[1]
T[D2] Tolerance - D[2]
T[d] Tolerance - d
T[d2] Tolerance - d[2]
es Allowance - External Thread
Crest / Root Parameters
F[cs] Crest width - External Thread
F[rs] Root width - External Thread
F[cn] Crest width - Internal Thread
F[rn] Root width - Internal Thread
Formulas for basic dimensions
External Thread
\( \large d=\text{Basic Diameter} \)
\( \large P=\frac{1}{TPI} \)
\( \large H=\frac{\sqrt{3}}{2}\times{P} = 0.866025404\times{P} \)
\( \large h_s=\frac{5}{8}\times{H} \)
\( \large h_{as}=\frac{3}{8}\times{H}\)
\( \large d_2=d-{2}\times{h_{as}} \)
\( \large d_1=d-{2}\times{h_s}\)
\( \large F_{cs}=\frac{P}{8}\)
\( \large F_{rs}=\frac{P}{4}\)
internal thread
\( \large D=\text{Basic Diameter} \)
\( \large P=\frac{1}{TPI} \)
\( \large H=\frac{\sqrt{3}}{2}\times{P} = 0.866025404\times{P} \)
\( \large h_n=\frac{5}{8}\times{H} \)
\( \large h_{an}=\frac{1}{4}\times{H}\)
\( \large D_1=D-{2}\times{h_n}\)
\( \large D_2=D1+{2}\times{h_{an}}\)
\( \large F_{rn}=\frac{P}{8}\)
\( \large F_{cn}=\frac{P}{4}\)
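The external-thread formulas above translate directly into code. A minimal sketch follows (check any results against the standard's tables before using them for manufacturing):

```python
import math

def unified_basic_dims(d_basic, tpi):
    """Basic dimensions of a Unified external thread from the formulas above."""
    P = 1 / tpi
    H = math.sqrt(3) / 2 * P        # height of fundamental triangle
    h_s = 5 / 8 * H                 # thread height
    h_as = 3 / 8 * H                # thread addendum
    return {
        "P": P,
        "H": H,
        "d2": d_basic - 2 * h_as,   # pitch diameter
        "d1": d_basic - 2 * h_s,    # minor diameter
        "F_cs": P / 8,              # crest width
        "F_rs": P / 4,              # root width
    }

dims = unified_basic_dims(0.25, 20)  # 1/4-20 UNC
print(round(dims["d2"], 4), round(dims["d1"], 4))  # prints 0.2175 0.1959
```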
Unified Inch Threads Dimensions Charts
UNC Series Basic Dimensions chart
UNF Series Basic Dimensions chart
UNEF Series Basic Dimensions chart
UN Constant Pitch Series Basic Dimensions chart
(Chart data not reproduced here; all dimensions are in inches.)
limits of Thread Dimensions (UNC, UNF & UNEF)
To manufacture or measure a thread, one has to know the maximum and minimum permissible values of the basic dimensions. These values are calculated according to the thread class (See below). To
understand classes, you first need to understand the terms Allowance (Sometimes referred to as deviation) and Tolerances.
Definition of terms:
• Allowance (Deviation):The minimum permissible distance between the basic and actual profile.
• Tolerance:The width of the tolerance field of a diameter on the actual thread profile. (Pitch, Major & Minor diameters)
• A small allowance means that the assembly of a male and female thread will be harder, but after assembly, there will be less freedom of movement.
• A large allowance means that the assembly of a male and female thread will be easier, but after assembly, there will be more freedom of movement.
• The allowance size does not influence a thread’s production difficulty or price.
• A wide tolerance is easier and cheaper to produce but yields a larger spread between threads.
• A narrow tolerance is challenging to produce and more expensive but yields a smaller spread between threads.
Unified Thread Classes
• The class is defined by a two-character code.
□ The first character is a digit between 1 and 3. 1=Loose fit, 2=Medium fit, and 3=Tight fit.
□ The second character is a letter. A=External thread and B=Internal thread.
• In the Unified Inch thread system, only classes 1A & 2A have an allowance. The allowance for the two classes is the same.
• The other 4 classes have an allowance of Zero.
• All the internal thread profiles (B classes) have no allowance.
• Classes 1A & 1B: These classes are suitable for applications where a liberal tolerance and allowance are required to permit easy assembly. The maximum diameters of Class 1A threads are smaller
than the basic diameter by the allowance amount. The minimum diameters of Class 1B threads are equal to the basic diameters and consequently afford no allowance at maximum material condition.
• Classes 2A & 2B: Classes 2A (external) and 2B (internal) are the default classes used for the commercial production of nuts, bolts, and other threaded fasteners, and they apply unless otherwise specified. They have the same allowance as classes 1A/1B but a narrower tolerance field: the difficulty of assembly is the same, but the spread between different parts in a production batch is smaller.
• Classes 3A & 3B: These classes are suitable for applications where tight fit and accuracy of thread elements are essential. The maximum diameters of Class 3A threads and the minimum diameters of
Class 3B threads are the same (Both have zero allowance). The accuracy is high, but the assembly is difficult.
• Combinations of Classes: The engineer can select any valid class for the male and female threads to achieve his design goals. The valid classes are defined in ASTM B1.1 as follows:
□ Classes 2A, 2B, 3A, and 3B are always valid.
□ Classes 1A & 1B are valid only for UNC threads with a nominal diameter equal to or larger than 1/4″.
Unified Thread Allowance.
The allowance es applies to external threads of classes 1A and 2A. The allowance of both classes is the same and equals 0.3 times the class 2A pitch-diameter tolerance. It is a function of 3 parameters:
• D – Basic (nominal diameter)
• P – Pitch (1/TPI)
• LE – Length of engagement. If not explicitly specified, the length of engagement is assumed to be 5 pitches. (LE=5*P)
• All parameters are in inches
\( \large es = 0.3 \times \left [ 0.0015 \times \sqrt[3]{D} + 0.0015 \times \sqrt{LE} + 0.015 \times \sqrt[3]{P^{2}} \right ] \)
• The result should be rounded to 6 decimal places.
• ASME B1.1 tabulates the allowance for each thread size in a long chart. However, the formula reproduces those values exactly, so there is no need to browse the chart.
• You can use our Advanced Threading Calculator to obtain the allowance according to a thread description.
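As a sketch, the allowance formula above can be implemented directly; LE defaults to 5 pitches when not given, and the result is rounded to 6 decimal places as the text prescribes:

```python
def allowance(d_basic, tpi, le=None):
    """Class 1A/2A allowance es from the formula above (all inputs in inches)."""
    P = 1 / tpi
    LE = 5 * P if le is None else le  # length of engagement defaults to 5 pitches
    es = 0.3 * (0.0015 * d_basic ** (1 / 3)
                + 0.0015 * LE ** 0.5
                + 0.015 * P ** (2 / 3))
    return round(es, 6)

print(allowance(0.25, 20))  # 1/4-20 UNC, classes 1A/2A
```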
Middle School Math: Graphing Virtually - EALA
This case study describes a virtual math support session focused on learning to graph from an equation in slope-intercept form and anxiety mitigation. The practitioner describes working with a
seventh grade student with a diagnosis of dyscalculia. The use of Zoom, a magnetic whiteboard in lieu of a life-sized coordinate plane, google docs as a shared virtual notebook, and video tutorials
to support learning and teaching are described.
Emily is a seventh grade student with a diagnosis of dyscalculia. She can become anxious and easily frustrated when she does not understand a math concept. She is receiving one-on-one mathematics support.
Learning goals
Graph from an equation in slope-intercept form
Transition to distance learning
Face to Face
• Use color-coded equations on a whiteboard to identify the slope and the intercept.
• Create a coordinate plane on the floor and move oneself based on the equation.
• Graph an equation on a coordinate plane.
At a Distance
• Together, watch a video showing the process of identifying the x and y intercepts.
• Practice identification using a virtual whiteboard with color-coded equations.
• Student gives instructor graphing instructions that the teacher follows and draws on the whiteboard.
• Student practices problems independently (with support as necessary) through Google Docs.
Face to Face
• Colored markers
• Whiteboard
• Student’s notebook
• Coordinate plane (life-sized) on the floor made from masking tape
• Equations to graph and a coordinate plane drawn on paper
At a Distance
• Computer with Zoom (with virtual whiteboard)
• Google Docs or OneNote (virtual notebook)
• Prepared equations
• Videos
• Magnetic whiteboard (for teacher)
• 2-3 magnets (for teacher)
Face to Face
• Use a gradual release model (i.e., show the student first, do the second problem with the student, watch the student do the third problem independently) through identification of slope and
y-intercept in an equation, using red for the y-intercept and blue for the slope. Refer to the student’s definitions in her math notebook as needed.
• Model how to use the life-sized coordinate plane before the student uses her own body to show location of the y-intercept as well as “rise over run” as the student moves her body on the graph
based on the coordinates (e.g., if the equation is y=2/3x + 5, the student begins at y=5 and moves her body up two spaces on the y-axis and three spots over on the x-axis. The student puts down a
circle made of paper to mark the “point” on the graph.)
• Use a gradual release model (see above) to guide the student through the graphing process using written equations and a coordinate plane on paper.
At a Distance
• Identify salient ideas in the video and support the student in taking notes in her Google Docs virtual notebook.
• Use a gradual release model (i.e., show the student first, do the second problem with the student, watch the student do the third problem independently) through identification of slope and
y-intercept in an equation, using red for the y-intercept and blue for the slope within the virtual whiteboard feature. Refer to the student’s definitions in her math notebook as needed.
• Model before the student gives guidance to the instructor. Specifically, the teacher holds a magnetic whiteboard with a coordinate plane. The teacher models how to move a magnet to the
y-intercept and then move a second magnet based on the slope and plot a second point.
• Use a gradual release model (see above) to guide the student through the graphing process using equations and a coordinate plane in Google Docs.
What worked well
Emily can become easily frustrated, and our first virtual lesson did not go perfectly (see below). However, in this lesson a number of key features seemed to boost Emily’s independence, perhaps even
more than in face-to-face settings.
First, using a combination of videos and a virtual notebook allowed Emily to independently think about what she may need to refer to after the lesson when working by herself. She took her own notes
and even used the time stamp on the video to document portions of the video she might need to refer back to later.
Second, during the magnetic whiteboard activity, Emily provided me with directions for how to move magnets to the y-intercept and then to a second point on the line. Because Emily was providing
directions and I was moving the magnets, I was able to prompt Emily before she made errors, thus reducing frustration. If Emily provided the incorrect direction, I prompted her to look back at the
equation and provide the direction again. Checking her work before the error was “written down” seemed to reduce frustration overall.
Finally, the components of the lesson allowed Emily to become fully independent with graphing because she was provided multiple models and had multiple opportunities for practice before she attempted
problems independently. Because she was working in Google Docs, Emily felt that she was not alone when solving problems on her own, which appeared to boost her confidence.
I was surprised by
It was challenging at times to switch back and forth from different platforms (Zoom to Google Docs) and within platforms (Zoom whiteboard to video feature), though I think it was more challenging for
me than for Emily!
Next time I’ll try
Though it would be possible to do an active lesson as I had originally planned, it would require Emily and her family to prepare prior to the start of the lesson to avoid wasting time during the
lesson. I might try to do this next time by providing a little guidance for Emily’s family about how to prepare via email, a shared Google Doc or a short phone conference. I have found that the mode
and format of communication need to be consistent and flexible to families’ needs. Families are juggling a lot right now, and so to deliver high-quality support I am finding that I might need to
adjust the content of or approach to that support depending on how families are able to engage with me. This means that things sometimes take longer to get organized and implemented than they used
to, but that’s ok! The quality of the resulting interaction and learning was still good and impactful.
My big picture takeaways
It is possible to provide the same level of scaffolding and practice as in face-to-face lessons. The online environment allows for multiple ways of practicing and interacting, and the environment is
perhaps even better than a face-to-face setting for encouraging students like Emily to become independent. Digital natives like Emily are comfortable using reference documents, and weaving those
supports into a lesson allows for a natural gradual release.
This case study centers on the way that Rachel Currie Rubin translated a math graphing lesson using manipulatives into a virtual format. This resource contains a similar strategy for division,
including instructions for adapting during virtual learning.
|
{"url":"https://educatingalllearners.org/case-studies/middle-school-math-graphing-virtually/","timestamp":"2024-11-05T03:07:21Z","content_type":"text/html","content_length":"148422","record_id":"<urn:uuid:54623780-e2cf-46b8-a148-dc9983c1f293>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00509.warc.gz"}
|
Mean » PASTPAPERMCQs
Q. The sum of all the values divided by the number of values is called the: (asked in PMS (P) General Knowledge 2019)
A. Mode
B. Mean
C. Median
D. Geometric Mean
Answer: Option B (Mean)
Detailed Solution: The mean is equal to the sum of all the values in the data divided by the number of values.
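The definition translates directly into code. A minimal Python sketch (the data values here are made up for the example):

```python
# Mean = sum of all the values / number of values
values = [4, 8, 15, 16, 23, 42]  # hypothetical sample data

mean = sum(values) / len(values)
print(mean)  # 18.0
```

Python's standard library also provides `statistics.mean`, which implements the same definition.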
|
{"url":"https://pastpapermcqs.com/tag/mean/","timestamp":"2024-11-14T04:30:46Z","content_type":"text/html","content_length":"107903","record_id":"<urn:uuid:634e8eab-6c52-44ac-8a40-c4eca936604d>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00683.warc.gz"}
|
A soccer player kicks a ball from the ground to a maximum height of 12
A soccer player kicks a ball from the ground to a maximum height of 12 m. The high point in the trajectory of the ball occurs at a distance of 18 m from the kicker. On the downward path, another
player heads the ball at a height of 2.2 m from the ground.
Write a quadratic function that models the situation.
my ans -
y = a(x + 2.2)² + 12
y = a(0 + 2.2)² + 12
12 = a(2.2)²
a = 12/(2.2)²
a = 2.47..
4 answers
y(0) = 0 and the vertex is at x=18 (halfway to the other root) and the parabola opens downward, so
y = ax(36-x)
the vertex is at (18,12) so
a·18·(36-18) = 12, so 324a = 12
a = 12/18^2 = 12/324 = 1/27
y = (1/27) x(36-x)
Thank you so much, Sir.
That answer is wrong
To derive the quadratic function that models the situation, we can use the general form of a quadratic equation, which is y = ax^2 + bx + c.
Given the information in the problem, we know that the maximum height of the ball is 12 m, which occurs at a distance of 18 m from the kicker. This means that the vertex of the parabolic trajectory
is at the point (18, 12).
To find the value of the quadratic coefficient, a, we can substitute the vertex coordinates into the equation:
12 = a(18)^2 + b(18) + c
Since the ball starts at ground level (y = 0) and reaches a maximum height of 12 m, we also know that the y-intercept is 0. We can substitute this information into the equation as well:
0 = a(0)^2 + b(0) + c
Now we have a system of equations:
1) 12 = a(18)^2 + b(18) + c
2) 0 = c
From equation 2, we can conclude that c = 0.
Substituting this value into equation 1, we have:
12 = a(18)^2 + b(18)
Simplifying further:
12 = 324a + 18b
Now we have a linear equation with two variables (a and b). Since we need a second equation to solve the system, we can use the location of the vertex, which we already know is at x = 18.
Using the vertex form of a parabolic equation, the x-coordinate of the vertex (h) is given by:
h = -b/2a
In this case, the x-coordinate of the vertex (h) is 18. Substituting this value into the equation:
18 = -b/2a
We can rearrange the equation as:
b = -36a
Now we have two equations:
1) 12 = 324a + 18b
2) b = -36a
Substituting equation 2 into equation 1:
12 = 324a + 18(-36a)
12 = 324a - 648a
Combining like terms:
12 = -324a
a = -12/324
a = -1/27 ≈ -0.037
(The negative coefficient is expected, since the parabola opens downward.)
Now that we have the value of a, we can substitute it back into equation 2 to find b:
b = -36(-1/27)
b = 4/3 ≈ 1.33
Finally, substituting the values of a and b into the general quadratic equation form, we have:
y = -(1/27)x^2 + (4/3)x
Therefore, the quadratic function that models the situation is:
y = -(1/27)x^2 + (4/3)x, which is equivalent to y = (1/27)x(36 - x).
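As a sanity check (not part of the original thread), the sketch below builds the downward-opening parabola with roots at x = 0 and x = 36 and vertex (18, 12), then finds where on the downward path the ball reaches the header height of 2.2 m:

```python
import math

# y = a*x*(36 - x): zero at x = 0 and x = 36, so the vertex sits at x = 18.
# The vertex condition y(18) = 12 gives a = 12 / (18 * 18) = 1/27.
a = 12 / (18 * 18)

def height(x):
    return a * x * (36 - x)

# Solve height(x) = 2.2 with the quadratic formula; the larger root
# lies past the vertex, i.e. on the downward path.
qa, qb, qc = -a, 36 * a, -2.2
x_head = (-qb - math.sqrt(qb**2 - 4 * qa * qc)) / (2 * qa)
print(round(x_head, 2))  # ≈ 34.27 m from the kicker
```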
|
{"url":"https://askanewquestion.com/questions/1648146","timestamp":"2024-11-04T10:09:40Z","content_type":"text/html","content_length":"25359","record_id":"<urn:uuid:87ddd699-ad10-4d1b-86a5-e911fff25313>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00334.warc.gz"}
|
Theorems of Moment of Inertia
As the moment of inertia depends on the axis of rotation and also the orientation of the body about that axis, it is different for the same body with different axes of rotation. We have two important
theorems to handle the case of shifting the axis of rotation.
(i) Parallel axis theorem:
Parallel axis theorem states that the moment of inertia of a body about any axis is equal to the sum of its moment of inertia about a parallel axis through its center of mass and the product of the
mass of the body and the square of the perpendicular distance between the two axes.
If IC is the moment of inertia of the body of mass M about an axis passing through the center of mass, then the moment of inertia I about a parallel axis at a distance d from it is given by
I = IC + Md²   (5.46)
Let us consider a rigid body as shown in Figure 5.25. Its moment of inertia about an axis AB passing through the center of mass is IC. DE is another axis parallel to AB at a perpendicular distance d
from AB. The moment of inertia of the body about DE is I. We attempt to get an expression for I in terms of IC. For this, let us consider a point mass m on the body at position x from its center of mass.
The moment of inertia of the point mass about the axis DE is m(x + d)².
The moment of inertia I of the whole body about DE is the summation of the above expression.
I = ∑m(x + d)²
This equation could further be written as,
I = ∑m(x² + d² + 2xd) = ∑mx² + ∑md² + 2d∑mx
Here, ∑mx² is the moment of inertia of the body about the center of mass. Hence, IC = ∑mx²
The term ∑mx = 0 because x can take positive and negative values with respect to the axis AB, so the summation (∑mx) is zero.
Thus, I = IC + ∑md² = IC + (∑m)d²
Here, ∑m is the entire mass M of the object (∑m = M), so
I = IC + Md²
Hence, the parallel axis theorem is proved.
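The theorem can also be illustrated numerically by discretizing a body into point masses. The sketch below (a minimal check; the uniform thin rod and all numeric values are assumptions chosen for the example) compares the directly computed moment of inertia about a shifted axis with IC + Md²:

```python
# Model a uniform rod (mass M, length L) as N equal point masses.
N, M, L = 100_000, 2.0, 1.0
dm = M / N
# positions of the mass elements, measured from the center of mass
xs = [-L / 2 + (i + 0.5) * (L / N) for i in range(N)]

I_C = sum(dm * x**2 for x in xs)            # about the axis through the CM
d = 0.3                                     # distance to the parallel axis
I_shifted = sum(dm * (x + d)**2 for x in xs)

print(I_C)                                  # ≈ M*L**2/12
print(I_shifted - (I_C + M * d**2))         # ≈ 0, as the theorem predicts
```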
(ii) Perpendicular axis theorem:
This perpendicular axis theorem holds good only for plane laminar objects.
The theorem states that the moment of inertia of a plane laminar body about an axis perpendicular to its plane is equal to the sum of moments of inertia about two perpendicular axes lying in the
plane of the body such that all the three axes are mutually perpendicular and have a common point.
Let the X and Y-axes lie in the plane and Z-axis perpendicular to the plane of the laminar object. If the moments of inertia of the body about X and Y-axes are IX and IY respectively and IZ is the
moment of inertia about Z-axis, then the perpendicular axis theorem could be expressed as,
IZ = IX + IY   (5.47)
To prove this theorem, let us consider a plane laminar object of negligible thickness on which lies the origin (O). The X and Y-axes lie on the plane and Z-axis is perpendicular to it as shown in
Figure 5.26. The lamina is considered to be made up of a large number of particles of mass m. Let us choose one such particle at a point P which has coordinates (x, y) at a distance r from O.
The moment of inertia of the particle about the Z-axis is mr².
The summation of the above expression gives the moment of inertia of the entire lamina about the Z-axis as, IZ = ∑mr²
Here, r² = x² + y²
Then, IZ = ∑m(x² + y²)
IZ = ∑mx² + ∑my²
In the above expression, the term ∑mx² is the moment of inertia of the body about the Y-axis and similarly the term ∑my² is the moment of inertia about the X-axis. Thus,
IX = ∑my²
IY = ∑mx²
Substituting in the equation for Iz gives,
IZ = IX + IY
Thus, the perpendicular axis theorem is proved.
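The result is easy to verify numerically for any planar distribution of point masses. A minimal sketch (the masses and coordinates are arbitrary example values):

```python
# (m, x, y) for each particle of a planar lamina lying in the XY plane
particles = [(0.5, 0.2, 0.1), (1.0, -0.3, 0.4), (0.8, 0.6, -0.2)]

I_X = sum(m * y**2 for m, x, y in particles)           # about the X-axis
I_Y = sum(m * x**2 for m, x, y in particles)           # about the Y-axis
I_Z = sum(m * (x**2 + y**2) for m, x, y in particles)  # about the Z-axis

print(abs(I_Z - (I_X + I_Y)))  # 0 up to floating-point rounding
```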
Solved Example Problems for Theorems of Moment of Inertia
Example 5.16
Find the moment of inertia of a disc of mass 3 kg and radius 50 cm about the following axes.
i. axis passing through the center and perpendicular to the plane of the disc,
ii. axis touching the edge and perpendicular to the plane of the disc and
iii. axis passing through the center and lying on the plane of the disc.
The mass, M = 3 kg, radius R = 50 cm = 50 × 10−2 m = 0.5 m
i. The moment of inertia (I) about an axis passing through the center and perpendicular to the plane of the disc is,
I = (1/2)MR² = (1/2) × 3 × (0.5)² = 0.375 kg m²
ii. The moment of inertia (I) about an axis touching the edge and perpendicular to the plane of the disc, by the parallel axis theorem, is,
I = (1/2)MR² + MR² = (3/2)MR² = (3/2) × 3 × (0.5)² = 1.125 kg m²
(iii) The moment of inertia (I) about an axis passing through the center and lying on the plane of the disc is,
I = (1/4)MR² = (1/4) × 3 × (0.5)² = 0.1875 kg m²
About which of the above axes is it easier to rotate the disc?
It is easier to rotate the disc about an axis about which the moment of inertia is the least. Hence, it is case (iii).
Example 5.17
Find the moment of inertia about the geometric center of the given structure made up of one thin rod connecting two similar solid spheres as shown in Figure.
The structure is made up of three objects; one thin rod and two solid spheres.
The mass of the rod, M = 3 kg and the total length of the rod, ℓ = 80 cm = 0.8 m
The mass of the sphere, M = 5 kg and the radius of the sphere, R = 10 cm = 0.1 m
The moment of inertia of the rod about the geometric center of the structure is,
Irod = (1/12)Mℓ² = (1/12) × 3 × (0.8)² = 0.16 kg m²
The moment of inertia of each sphere about the geometric center of the structure is,
Isph = IC + Md² = (2/5)MR² + Md²
Where, d = 40 cm + 10 cm = 50 cm = 0.5 m
Isph = (2/5) × 5 × (0.1)² + 5 × (0.5)² = 0.02 + 1.25 = 1.27 kg m²
As there are one rod and two similar solid spheres, we can write the total moment of inertia (I) of the given geometric structure as,
I = Irod + (2 × Isph) = 0.16 + (2 × 1.27) = 2.7 kg m²
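The arithmetic in Examples 5.16 and 5.17 can be reproduced in a few lines, using the standard formulas I = (1/2)MR² for a disc about its central perpendicular axis, (1/4)MR² about a diameter, (2/5)MR² for a solid sphere about its own center, and (1/12)Mℓ² for a thin rod:

```python
# Example 5.16: disc with M = 3 kg, R = 0.5 m
M, R = 3.0, 0.5
I_center_perp = 0.5 * M * R**2           # (i)   0.375 kg m^2
I_edge_perp = I_center_perp + M * R**2   # (ii)  parallel axis: 1.125 kg m^2
I_diameter = 0.25 * M * R**2             # (iii) 0.1875 kg m^2 (least, so easiest to rotate)

# Example 5.17: rod (3 kg, 0.8 m) joining two solid spheres (5 kg, R = 0.1 m each)
M_rod, length = 3.0, 0.8
M_sph, R_sph, d = 5.0, 0.1, 0.5          # d = 40 cm + 10 cm to each sphere's center
I_rod = M_rod * length**2 / 12
I_sph = (2 / 5) * M_sph * R_sph**2 + M_sph * d**2   # parallel axis for each sphere
I_total = I_rod + 2 * I_sph
print(I_total)  # ≈ 2.7 kg m^2
```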
|
{"url":"https://www.brainkart.com/article/Theorems-of-Moment-of-Inertia_34612/","timestamp":"2024-11-08T15:45:24Z","content_type":"text/html","content_length":"77598","record_id":"<urn:uuid:6ffe14f3-7f0a-4b80-bcf3-3e32898f3a26>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00473.warc.gz"}
|
Class 10 CBSE Maths: Constructions - Bisectors, Perpendiculars, and Angles - SchoolMyKids
This chapter dives into the world of constructions, focusing on using a compass and straightedge to create specific geometrical elements. Here, we’ll explore constructing bisectors (of angles and
line segments), perpendicular lines, and angles of specific measures. This chapter is no longer part of the Class 10 Maths syllabus.
Important Note: No formulas are involved in constructions; they rely on geometric principles and following specific steps with compass and straightedge.
1. Constructing the Bisector of an Angle
A bisector divides an angle into two equal parts.
1. Draw the angle you want to bisect (∠ABC).
2. Place the compass point at the vertex (point B).
3. Set the compass opening to any convenient width.
4. With that width, draw two arcs, one intersecting each ray of the angle (one arc inside each angle sector).
5. Without changing the compass width, place the compass point at one of the intersection points (say, point P on AB) and draw an arc in the interior of the angle.
6. Repeat from the other intersection point (say, point Q on AC) so that the two new arcs intersect at a point R inside the angle.
7. Draw ray BR.
8. Ray BR is the angle bisector of ∠ABC.
• The chosen compass width ensures both arcs (from step 4) cut across both sides of the angle.
• Steps 5 and 6 use the same compass width, so PR = QR; step 4 gives BP = BQ.
• Triangles BPR and BQR are therefore congruent (SSS), so ray BR divides ∠ABC into two congruent angles (∠PBR and ∠RBQ).
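In coordinates, the idea behind the construction (marking equal distances from the vertex along both rays) can be verified numerically. A minimal Python sketch, with hypothetical coordinates for the vertex B and points A, C on the two rays:

```python
import math

B, A, C = (0.0, 0.0), (4.0, 1.0), (1.0, 3.0)  # hypothetical example points

def unit(p, q):
    """Unit vector from p toward q."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

u, v = unit(B, A), unit(B, C)

# Equal compass widths from B pick equal-length steps along both rays;
# the sum of the two unit vectors points along the angle bisector.
bis = (u[0] + v[0], u[1] + v[1])

def angle(w):
    return math.atan2(w[1], w[0])

# The bisector makes equal angles with the two rays:
print(abs((angle(bis) - angle(u)) - (angle(v) - angle(bis))))  # ≈ 0
```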
2. Constructing the Perpendicular Bisector of a Line Segment
The perpendicular bisector of a line segment is a line that passes through the midpoint of the segment and is perpendicular to it.
1. Draw the line segment you want to find the perpendicular bisector for (say, line segment AB).
2. Set the compass width wider than half the length of the line segment (AB).
3. Place the compass point at one endpoint (A) and draw an arc above and an arc below the line segment.
4. Repeat step 3, placing the compass point at endpoint B and ensuring the same compass width is used.
5. The arcs drawn from A and B will intersect at two points (say, points P and Q), one above and one below the line segment.
6. Draw line PQ. Line PQ is the perpendicular bisector of line segment AB.
• The compass width chosen in step 2 ensures the arcs intersect each other on both sides of the line segment.
• Since the compass width is the same in both placements (A and B), the radii of the arcs are equal.
• The two intersection points (P and Q) are each equidistant from A and B (due to equal radii), making PQ the perpendicular bisector, which passes through the midpoint of line segment AB.
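The equidistance justification translates directly into coordinates. A minimal sketch (the endpoints are hypothetical example values):

```python
import math

A, B = (1.0, 2.0), (5.0, 4.0)  # hypothetical endpoints of the segment

mid = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)

# Build a point on the perpendicular bisector: start at the midpoint and
# step along (-dy, dx), which is perpendicular to the direction of AB.
dx, dy = B[0] - A[0], B[1] - A[1]
P = (mid[0] - dy, mid[1] + dx)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

print(dist(P, A) - dist(P, B))                       # 0: equidistant from A and B
print((P[0] - mid[0]) * dx + (P[1] - mid[1]) * dy)   # 0: PM is perpendicular to AB
```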
3. Constructing an Angle of a Specific Measure (Using a Protractor)
While a compass and straightedge can’t directly construct angles of specific measures, you can use a protractor to achieve this.
1. Draw a ray (say, ray BC).
2. Place the center of the protractor at point B (the endpoint of the ray).
3. Align the protractor’s base line with ray BC.
4. Locate the desired angle measure on the protractor’s scale.
5. Mark a point (say, point A) at that measure on the protractor’s scale.
6. Draw ray BA. ∠ABC is then the angle with the desired measure.
Note: This method relies on a physical protractor to measure and transfer the angle measure.
More Advanced Techniques
This section explores some additional constructions that build upon the basic principles introduced earlier.
1. Constructing the Angle Bisector of a Triangle
There are two common methods for this construction:
• Using the Basic Angle-Bisector Construction:
1. Apply the angle-bisector construction from Section 1 at the chosen vertex (say, A of triangle ABC); the bisector meets the opposite side BC at a point D. (Only in an isosceles triangle does the perpendicular bisector of the base also bisect the apex angle.)
• Using the Incircle (requires prior knowledge of inscribed circles):
1. Construct the incircle of the triangle; its center (the incenter) is the point where all three angle bisectors meet.
2. Draw the line from the chosen vertex (say, A) through the incenter.
3. This line is the angle bisector of angle A; extended, it meets the opposite side at point D.
2. Constructing a Line Perpendicular to a Line at a Point
• Method 1 (Using the Bisector of an Angle):
1. Draw the line (say, line AB) and mark the point (point P) where you want the perpendicular line to intersect.
2. Place the compass point at point P and set a convenient width.
3. Draw two arcs intersecting line AB on either side of point P.
4. Without changing the compass width, place the compass point at one of the intersection points (say, point Q) and draw an arc above line AB.
5. Repeat step 4 from the other intersection point (say, point R).
6. The two new arcs will intersect above line AB (point S).
7. Draw line PS. Line PS is perpendicular to line AB at point P.
• Method 2 (Viewing It as an Angle Bisector):
1. Similar to method 1, draw two intersecting arcs from point P on line AB, meeting the line at points Q and R.
2. Since Q, P and R are collinear with QP = PR, the perpendicular at P is exactly the bisector of the straight angle ∠QPR.
3. Apply the angle-bisector construction to ∠QPR; the intersection point of the arcs drawn from Q and R will be point S, and line PS is perpendicular to line AB at point P.
3. Constructing a Parallel Line
• Method 1 (Using Corresponding Angles):
1. Draw the line (say, line AB) and mark a point (point P) not on the line.
2. Draw a ray (say, ray PQ) emanating from point P and intersecting line AB at point Q.
3. Measure the angle that ray QP makes with line AB at point Q, then use your protractor to construct a line through point P that makes an equal angle with ray PQ on the same side as line AB.
4. Because equal corresponding angles imply parallel lines, this newly constructed line will be parallel to line AB.
• Method 2 (Using Transversal Angles):
1. Similar to method 1, draw a line (AB) and a ray (PQ) intersecting it at point Q; ray PQ will act as a transversal.
2. Two lines cut by a transversal are parallel exactly when their corresponding angles (or alternate interior angles) are congruent.
3. Use a protractor to measure the angle the transversal makes with line AB at Q (say, ∠PQB), then copy that measure at point P as an alternate interior angle on the opposite side of the transversal.
4. The line through P making this angle with the transversal is parallel to line AB.
These are just a few examples of more intricate constructions. Remember, these techniques rely on a combination of the basic constructions from the previous section and might involve using a
protractor for specific angle measurements. As you progress in geometry, you’ll encounter even more complex constructions that combine these concepts.
Short Notes and Summary
This chapter explored geometric constructions using a compass and straightedge to create various shapes and lines. We focused on:
• Bisectors: Dividing angles or line segments into two equal parts.
• Perpendicular Lines: Lines intersecting at a 90-degree angle.
• Angles of Specific Measures: Achieved using a protractor (not a pure compass and straightedge construction).
Short Notes:
• Constructions rely on following specific steps and geometric principles.
• No formulas are involved, but understanding basic geometric concepts is crucial.
• Angle bisector: Divide the angle into two congruent angles by constructing arcs with equal radii.
• Perpendicular bisector of a line segment: Two arcs with the same compass width intersect above the line segment, creating a perpendicular line that passes through the midpoint.
• Constructing an angle with a protractor: Align the protractor’s base with a ray, locate the desired measure, and mark a point to create the angle.
Advanced Techniques (Summary)
• Angle Bisector of a Triangle: Achieved by applying the basic angle-bisector construction at a vertex, or by locating the incenter (the common point of all three angle bisectors).
• Perpendicular Line at a Point: Two methods involve constructing bisectors of angles or intersecting arcs.
• Parallel Line: Achieved by copying corresponding angles or alternate interior angles along a transversal (equal measures imply the lines are parallel).
By mastering these constructions, you gain a deeper understanding of geometric relationships and can solve problems that require creating specific lines and angles without relying solely on
measurements. Remember, practice is key to perfecting these techniques. As you progress, you’ll encounter more complex constructions that combine these skills.
|
{"url":"https://www.schoolmykids.com/education/chapter-11-constructions-bisectors-perpendiculars-and-angles","timestamp":"2024-11-08T11:40:19Z","content_type":"text/html","content_length":"191771","record_id":"<urn:uuid:dd2167d8-3411-4d23-88f7-c75c167d8213>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00354.warc.gz"}
|