Basic Information on uvfft
Task: uvfft
Purpose: Fourier transform a sequence of selected uv-data.
Categories: uv analysis
UVFFT takes a 1-D Fourier transform of a uv-data sequence.
If the sequence is a time series, then the FFT is a fringe
rate spectrum. If the sequence steps the delay center, then
the FFT is a bandpass.
Key: vis
The input UV dataset name. No default.
Key: select
This selects the data to be processed, using the standard uvselect
format. Default is all data. Selecting more than one baseline
does not make much sense.
Key: line
Standard linetype of the data to be transformed, in the form:
where type can be `channel', `wide', or `velocity'.
The default is wide,2,1,1,1. The maximum number of channels
which can be processed is 8.
Key: size
The length of the sequence for the Fourier transform.
uv-data points in excess of the size of the transform are omitted.
Must be a power of 2. The default is 128.
Key: log
The list output file name. The default is the terminal.
Key: device
standard PGPLOT device, e.g. /xw
Key: delay
Step size for the stepped delay center, in which case the FFT is a bandpass function.
Default=0: delay center not stepped; the FFT is a fringe frequency spectrum.
Processing options:
test - substitute test functions for the data in the Fourier transform.
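To illustrate the idea (this is not MIRIAD code): when the selected uv-data form a time series, the transform peaks at the fringe frequency. A minimal stdlib Python sketch with a synthetic single-baseline visibility series, assuming a transform size of 128 and a signal with exactly 4 fringe cycles per transform length:

```python
import cmath

size = 128                      # transform length; must be a power of 2
# synthetic visibility time series (200 points) with exactly
# 4 fringe cycles per 128 samples
vis = [cmath.exp(2j * cmath.pi * 4 * n / size) for n in range(200)]

data = vis[:size]               # points in excess of the transform size are omitted

def dft(x):
    """Plain discrete Fourier transform (illustrative, O(N^2))."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

spectrum = dft(data)
peak = max(range(size), key=lambda k: abs(spectrum[k]))
# the spectrum peaks at bin 4, the fringe rate of the synthetic data
```

A real fringe-rate spectrum would use measured visibilities and an FFT, but the truncation-to-transform-size step and the peak at the fringe frequency are the same.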
Generated by miriad@atnf.csiro.au on 04 Dec 2018
|
{"url":"https://www.atnf.csiro.au/computing/software/miriad/doc/uvfft.html","timestamp":"2024-11-04T09:15:06Z","content_type":"text/html","content_length":"2522","record_id":"<urn:uuid:f5bb7965-c250-4cd5-9540-f71d6c3164ee>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00672.warc.gz"}
|
Homepage of Roger Nichols
Department of Mathematics
The University of Tennessee at Chattanooga
Dept. 6956
615 McCallie Avenue
Chattanooga, Tennessee 37403, USA
E-mail: roger-nichols "at" utc "dot" edu
Office: 329 Lupton Hall
Phone: +1(423) 425-4562
“The best thing for being sad,” replied Merlin, beginning to puff and blow, “is to learn something. That’s the only thing that never fails. You may grow old and trembling in your anatomies, you may
lie awake at night listening to the disorder of your veins, you may miss your only love, you may see the world about you devastated by evil lunatics, or know your honour trampled in the sewers of
baser minds. There is only one thing for it then—to learn. Learn why the world wags and what wags it. That is the only thing which the mind can never exhaust, never alienate, never be tortured by,
never fear or distrust, and never dream of regretting. Learning is the only thing for you. Look what a lot of things there are to learn.” (T. H. White, The Once and Future King)
I am a professor in the Department of Mathematics at the University of Tennessee at Chattanooga.
Erdős number: 4
Kreĭn number: 3
Two Rogers (ca. May 2006) and a relic (ca. October 1990)
“I have come from Alabama: a fur piece. All the way from Alabama a-walking. A fur piece.” (“Lena” in Light in August by William Faulkner)
Spectral theory of differential operators and functional analysis
“Sturm–Liouville Operators, Their Spectral Theory, and Some Applications,” with F. Gesztesy and M. Zinchenko, Colloquium Publications Vol. 67, Amer. Math. Soc., Providence, RI, 2024, 927 pp.
“The limiting absorption principle for massless Dirac operators, properties of spectral shift functions, and an application to the Witten index of non-Fredholm operators,” with A. Carey, F. Gesztesy,
G. Levitina, F. Sukochev, and D. Zanin, Memoirs of the European Mathematical Society 4, 2023. [PDF]
Articles in Refereed Journals and Proceedings
44. “Weak convergence of spectral shift functions revisited,” with C. Connard, B. Ingimarson, and A. Paul, Pure Appl. Funct. Anal. 9, No. 4, 1023–1051 (2024). [PDF]
43. “Sturm–Liouville M-functions in terms of Green's functions,” with F. Gesztesy, J. Differential Equations 412, 709–757 (2024). [PDF]
42. “On the spectrum of biharmonic systems,” with L. Kong and M. Wang, J. Math. Sci., doi:10.1007/s10958-024-07233-7. [PDF]
41. “A Bessel analog of the Riesz composition formula,” with C. Fischbacher and F. Gesztesy, Comput. Methods Funct. Theory 24, 547–573, (2024). [PDF]
40. “Donoghue m-functions for singular Sturm–Liouville operators,” with F. Gesztesy, L. Littlejohn, M. Piorkowski, and J. Stanfill, St. Petersburg Math. J. 35, 101–138 (2024). [PDF]
39. “Weyl–Titchmarsh M-functions for φ-periodic Sturm–Liouville operators in terms of Green's functions,” with F. Gesztesy, appeared in From Complex Analysis to Operator Theory—A Panorama, M. Brown,
F. Gesztesy, P. Kurasov, A. Laptev, B. Simon, G. Stolz, and I. Wood (eds.), Oper. Theory Adv. Appl. 291, Birkhäuser/Springer, Cham, 2023, pp. 573–608. [PDF]
38. “On perturbative Hardy inequalities,” with F. Gesztesy and M. M. H. Pang, J. Math. Phys., Analysis, Geometry 19, 128–149 (2023). [PDF]
37. “Singular fourth-order Sturm–Liouville operators and acoustic black holes,” with B. P. Belinskiy and D. B. Hinton, IMA J. Appl. Math. 87, No. 5, 804–851 (2022). [PDF]
36. “Strict domain monotonicity of the principal eigenvalue and a characterization of lower boundedness for the Friedrichs extension of four-coefficient Sturm–Liouville operators,” with F. Gesztesy,
Acta Sci. Math. 88, 189–222 (2022). [PDF]
35. “Multiple weak solutions of biharmonic systems,” with L. Kong, Minimax Theory its Appl. 7, No. 1, 109–118 (2022). [PDF]
34. “The Krein–von Neumann extension revisited,” with G. Fucci, F. Gesztesy, K. Kirsten, L. Littlejohn, and J. Stanfill, Appl. Anal. 101, No. 5, 1593–1616 (2022). [PDF]
33. “The Krein–von Neumann extension of a regular even order quasi-differential operator,” with M. Cho, S. Hoisington, and B. Udall, Opuscula Math. 41, No. 6, 805–841 (2021). [PDF]
32. “Singular Sturm–Liouville operators with extreme properties that generate black holes,” with B. Belinskiy and D. Hinton, Stud. Appl. Math. 147, No. 1, 180–208 (2021). [PDF]
31. “The product formula for regularized Fredholm determinants,” with T. Britz, A. Carey, F. Gesztesy, F. Sukochev, and D. Zanin, Proc. Amer. Math. Soc. Ser. B 8, 42–51 (2021). [PDF]
30. “A survey of some norm inequalities,” with F. Gesztesy and J. Stanfill, Complex Anal. Oper. Theory 15, 23 (2021). [PDF]
29. “On principal eigenvalues of biharmonic systems,” with L. Kong, Commun. Pure Appl. Anal. 20, No. 1, 1–15 (2021). [PDF]
28. “Explicit Krein resolvent identities for singular Sturm–Liouville operators with applications to Bessel operators,” with S. B. Allan, J. H. Kim, G. Michajlyszyn, and D. Rung, Oper. Matrices 14,
No. 4, 1043–1099 (2020). [PDF]
27. “On self-adjoint boundary conditions for singular Sturm–Liouville operators bounded from below,” with F. Gesztesy and L. Littlejohn, J. Differential Equations 269, 6448–6491 (2020). [PDF]
26. “Trace ideal properties of a class of integral operators,” with F. Gesztesy, appeared in Integrable Systems and Algebraic Geometry. Volume 1, R. Donagi and T. Shaska (eds.), London Math. Soc.
Lecture Note Ser. 458, Cambridge University Press, Cambridge, UK, 2020, pp. 13–37. [PDF]
25. “On absence of threshold resonances for Schrödinger and Dirac operators,” with F. Gesztesy, Discrete Contin. Dyn. Syst. Ser. S 13(12), 3427–3460 (2020). [PDF]
24. “On the global limiting absorption principle for massless Dirac operators,” with A. Carey, F. Gesztesy, J. Kaad, G. Levitina, D. Potapov, and F. Sukochev, Ann. Henri Poincaré 19, No. 7,
1993–2019 (2018). [PDF]
23. “Weak and vague convergence of spectral shift functions of one-dimensional Schrödinger operators with coupled boundary conditions,” with J. Murphy, Methods Funct. Anal. Topology 23, No. 4,
378–403 (2017). [PDF]
22. “On the index of meromorphic operator-valued functions and some applications,” with J. Behrndt, F. Gesztesy, and H. Holden, appeared in Functional Analysis and Operator Theory for Quantum
Physics, J. Dittrich, H. Kovarik, and A. Laptev (eds.), Series of Congress Reports, European Mathematical Society, Zürich, 2017. [PDF]
21. “Double operator integral methods applied to continuity of spectral shift functions,” with A. Carey, F. Gesztesy, G. Levitina, D. Potapov, and F. Sukochev, J. Spectr. Theory 6, No. 4, 747–779
(2016). [PDF]
20. “Principal solutions revisited,” with S. Clark and F. Gesztesy, appeared in Stochastic and Infinite Dimensional Analysis, C. C. Bernido, M. V. Carpio-Bernido, M. Grothaus, T. Kuna, M. J.
Oliveira, and J. L. da Silva (eds.), Trends in Mathematics, Birkhäuser, Basel, 2016. [PDF]
19. “Dirichlet-to-Neumann maps, abstract Weyl–Titchmarsh M-functions, and a generalized index of unbounded meromorphic operator-valued functions,” with J. Behrndt, F. Gesztesy, and H. Holden, J.
Differential Equations 261, 3551–3587 (2016). [PDF]
18. “On stability of square root domains for non-self-adjoint operators under additive perturbations,” with F. Gesztesy and S. Hofmann, Mathematika 62, 111–182 (2016). [PDF]
17. “Some applications of almost analytic extensions to operator bounds in trace ideals,” with F. Gesztesy, Methods Funct. Anal. Topology 21, No. 2, 151–169 (2015). [PDF]
16. “A Jost–Pais-type reduction of (modified) Fredholm determinants for semi-separable operators in infinite dimensions,” with F. Gesztesy, appeared in Recent Advances in Schur Analysis and
Stochastic Processes - A Collection of Papers Dedicated to Lev Sakhnovich, D. Alpay and B. Kirstein (eds.), Operator Theory: Advances and Applications 244, 287–314 (2015). [PDF]
15. “On factorizations of analytic operator-valued functions and eigenvalue multiplicity questions,” with F. Gesztesy and H. Holden, Integral Eq. and Operator Th. 82, No. 1, 61–94 (2015). [PDF]
14. “On a problem in eigenvalue perturbation theory,” with F. Gesztesy and S. Naboko, J. Math. Anal. Appl. 428, No. 1, 295–305 (2015). [PDF]
13. “Inverse spectral problems for Schrödinger-type operators with distributional matrix-valued potentials,” with J. Eckhardt, F. Gesztesy, A. Sakhnovich, and G. Teschl, Differential Integral
Equations 28, No. 5–6, 505–522 (2015). [PDF]
12. “Heat kernel bounds for elliptic partial differential operators in divergence form with Robin-type boundary conditions II,” with F. Gesztesy, M. Mitrea, and E. M. Ouhabaz, Proc. Amer. Math. Soc.
143, No. 4, 1635–1649 (2015). [PDF]
11. “Stability of square root domains associated with elliptic systems of PDEs on nonsmooth domains,” with F. Gesztesy and S. Hofmann, J. Differential Equations 258, 1749–1764 (2015). [PDF]
10. “Supersymmetry and Schrödinger-type operators with distributional matrix-valued potentials,” with J. Eckhardt, F. Gesztesy, and G. Teschl, J. Spectr. Theory 4, No. 4, 715–768 (2014). [PDF]
9. “Boundary data maps and Krein's resolvent formula for Sturm–Liouville operators on a finite interval,” with S. Clark, F. Gesztesy, and M. Zinchenko, Oper. Matrices 8, No. 1, 1–71 (2014). [PDF]
8. “Heat kernel bounds for elliptic partial differential operators in divergence form with Robin-type boundary conditions,” with F. Gesztesy and M. Mitrea, J. Anal. Math. 122, 229–287 (2014). [PDF]
7. “On square root domains for non-self-adjoint Sturm–Liouville operators,” with F. Gesztesy and S. Hofmann, Methods Funct. Anal. Topology 19, No. 3, 227–259 (2013). [PDF]
6. “Inverse spectral theory for Sturm–Liouville operators with distributional potentials,” with J. Eckhardt, F. Gesztesy, and G. Teschl, J. London Math. Soc. (2) 88, 801–828 (2013). [PDF]
5. “Weyl–Titchmarsh theory for Sturm–Liouville operators with distributional potentials,” with J. Eckhardt, F. Gesztesy, and G. Teschl, Opuscula Math. 33, No. 3, 467–563 (2013). [PDF]
4. “Simplicity of eigenvalues in Anderson-type models,” with G. Stolz and S. Naboko, Ark. Mat. 51, 157–183 (2013). [PDF]
3. “An abstract approach to weak convergence of spectral shift functions and applications to multi-dimensional Schrödinger operators,” with F. Gesztesy, J. Spectr. Theory 2, No. 3, 225–266 (2012).
2. “Weak convergence of spectral shift functions for one-dimensional Schrödinger operators,” with F. Gesztesy, Math. Nachr. 285, No. 14–15, 1799–1838 (2012). [PDF]
1. “Spectral properties of discrete random displacement models,” with G. Stolz, J. Spectr. Theory 1, No. 2, 123–153 (2011). [PDF]
Preprints
2. “A generalized Birman–Schwinger principle and applications to one-dimensional Schrödinger operators with distributional coefficients,” with F. Gesztesy.
1. “On eigenvalue multiplicities of regular Sturm–Liouville operators,” with F. Gesztesy and M. Zinchenko.
“This went on at any odd hour, if necessary, with a floor rug over his shoulders, with the fine quiet of the scholar which is nearest of all things to heavenly peace.” (F. Scott Fitzgerald, Tender
is the Night)
|
{"url":"https://sites.google.com/mocs.utc.edu/rogernicholshomepage/home","timestamp":"2024-11-11T10:44:50Z","content_type":"text/html","content_length":"165755","record_id":"<urn:uuid:9a185380-1cc3-4706-bd91-ba2e4a224b6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00571.warc.gz"}
|
dihedral group
made explicit (here) the short exact sequence $1 \to \mathbb{Z}/n \to D_{2n} \to \mathbb{Z}/2 \to 1$.
diff, v27, current
added some actual explicit details on the definition of the binary dihedral groups (here)
diff, v21, current
added mentioning, pointers, and redirects for “dicyclic group”, synonymous to “binary dihedral group”
diff, v18, current
Yeah, notation can be a bit of a pain here.
I have given your remark a remark-environment here, instead of it being a subsection, and edited slightly, mostly for formatting.
Added in the usual group presentation of the dihedral group $D_{2n}$ plus a warning that this group is also denoted $D_n$ by some authors (including myself!!!)
diff, v16, current
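Not part of the entry, just a concrete sanity check: the relations in that presentation, r^n = s^2 = (sr)^2 = e, can be verified for n = 4 by composing vertex permutations of the square, and closure under composition recovers all 2n elements of D_{2n}. A small illustrative Python sketch:

```python
n = 4
e = tuple(range(n))
r = tuple((i + 1) % n for i in range(n))   # rotation: i -> i+1 (mod n)
s = tuple((-i) % n for i in range(n))      # reflection: i -> -i (mod n)

def compose(p, q):
    # (p after q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def power(p, k):
    out = e
    for _ in range(k):
        out = compose(out, p)
    return out

assert power(r, n) == e                    # r^n = e
assert compose(s, s) == e                  # s^2 = e
sr = compose(s, r)
assert compose(sr, sr) == e                # (s r)^2 = e

# closure under composition generates all 2n elements of D_{2n}
group = {e}
frontier = {r, s}
while frontier:
    group |= frontier
    frontier = {compose(a, b) for a in group for b in group} - group
assert len(group) == 2 * n
```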
Is there a citable computation of the group cohomology of dihedral groups with coefficients in $\mathbb{Z}$ equipped with its non-trivial sign action?
This question is also MO:q/141489.
I haven’t checked the two answers there yet, but don’t they contradict each other?
The accepted answer MO:a/141557 sees the cohomology concentrated in odd degrees.
But the other answer MO:a/141546, whose author claims to have checked this with computer algebra, argues for a nontrivial contribution in degree 2.
|
{"url":"https://nforum.ncatlab.org/discussion/9094/","timestamp":"2024-11-11T01:36:51Z","content_type":"application/xhtml+xml","content_length":"47967","record_id":"<urn:uuid:1af36b30-98d3-417f-a9e6-4d72425e0794>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00653.warc.gz"}
|
How To Make a Quantum Bit
What is Quantum Bit ?
In quantum computing, a qubit (/ˈkjuːbɪt/) or quantum bit is the basic unit of quantum information—the quantum version of the classic binary bit physically realized with a two-state device. A qubit
is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include the spin of the electron in which the
two levels can be taken as spin up and spin down; or the polarization of a single photon in which the two states can be taken to be the vertical polarization and the horizontal polarization. In a
classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of both states simultaneously, a property that is
fundamental to quantum mechanics and quantum computing.
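The superposition statement can be made concrete with plain amplitude arithmetic: a qubit state is a pair of complex amplitudes (α, β) with |α|² + |β|² = 1, and measurement yields 0 with probability |α|². A tiny illustrative Python sketch, with no quantum library and no claim about any particular hardware:

```python
import math
import random

# equal superposition, the state a Hadamard gate produces from |0>
alpha = 1 / math.sqrt(2)
beta = 1 / math.sqrt(2)
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-12   # normalization

def measure(alpha, beta, rng):
    """Simulate a measurement: 0 with probability |alpha|^2, else 1."""
    return 0 if rng() < abs(alpha) ** 2 else 1

rng = random.Random(42).random
counts = [0, 0]
for _ in range(10000):
    counts[measure(alpha, beta, rng)] += 1
# both outcomes occur roughly half the time for the equal superposition
```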
We have looked at how a transistor works, the fundamental unit of classical computers, and how a quantum computer works in theory, taking advantage of quantum superposition to hold exponentially more information than classical computers. Now we look at the practical side of making a quantum bit, or qubit. How do you put it in a state where it is stable? How do you read and write information on it? These processes are described for a solid-state qubit: a phosphorus atom in a silicon crystal substrate. Both the electron and the nucleus of the phosphorus atom can be used as qubits.
|
{"url":"https://www.blog.sindibad.tn/how-to-make-a-quantum-bit/","timestamp":"2024-11-14T08:37:30Z","content_type":"text/html","content_length":"562929","record_id":"<urn:uuid:dcd4c431-d65a-462a-9e4f-9de60caca6ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00842.warc.gz"}
|
Dates in Excel - HJGSoft
You can work with dates in Excel.
This means you can calculate with them and that is insanely handy. This way, you can see what day it is today, on what day of the week Christmas is and how many days it’s still away. Let’s look at
some examples:
The formula with the function =TODAY() is placed in cell E1. This formula gives the current (system) date.
Cell E2 contains the regular typed date, 5-12-2017, which Excel automatically recognizes as a date.
I typed the formula: =E2-E1 in cell E3. This is where the two dates are subtracted. The result is a value representing the amount of days between the two dates.
Excel internally assigns a sequential number to each date. This makes it possible to calculate with dates.
Cell F2 contains the function =WEEKDAY(E2). This function gives a number (1-7) representing the day of the week, wherein Sunday=1 and Saturday=7.
Knowing this, you can make a table and find the day of the week in words with the function VLOOKUP, as shown in G2.
Some more examples of other date-functions Excel contains:
Cell E1 contains the current date via the function TODAY().
Cell E2 takes the day out of the date using the function DAY(E1).
Cell E3 takes the month out of the date using the function MONTH(E1).
Cell E4 takes the year out of the date using the function YEAR(E1).
The cells F2:F4 each contain a value one higher than the results from E2:E4.
I create a new date in cell F5 using the function DATE(F4;F3;F2), so using the year first, then the month, then the day.
In E5 I determine what part of the year has passed today using the function YEARFRAC(”1-1-2017”;E1;1). The last 1 is needed to tell Excel to calculate with the actual number of days in the year; for 2017 that means 365. Cell H1 contains the number belonging to TODAY() and cell H2 the one belonging to 1-1-2017. Cell H3 contains the difference between the two dates in days. The number of days in 2017 is placed in cell H4. The formula =H3/H4 is stored in cell H5, giving exactly the same value as the one in cell E5. This shows how Excel calculates internally.
Finally, cell E6 shows the date 7 working days from now. To do this, use the function WORKDAY(E1;7). This function takes weekends into account and doesn't count them. As a third argument, you can enter an extra set of dates, like holidays, which will also be skipped.
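The same arithmetic can be checked outside Excel. Python's `datetime` module also stores dates as sequential day numbers, so subtraction works the same way; only the weekday numbering differs (Excel: Sunday=1 through Saturday=7; Python: Monday=0 through Sunday=6). A small sketch using the article's typed date 5-12-2017 (day-month-year) and a stand-in for TODAY():

```python
from datetime import date

today = date(2017, 11, 20)    # stand-in for =TODAY()
target = date(2017, 12, 5)    # the typed date 5-12-2017

days_between = (target - today).days          # like =E2-E1

# convert Python's weekday() to Excel's WEEKDAY convention
excel_weekday = (target.weekday() + 1) % 7 + 1
# 5-12-2017 was a Tuesday, so WEEKDAY would return 3
```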
How many days in a year?
In our calendar, that depends on whether it is a leap year. A leap year has 366 days and the others have 365. So we need to know which years are leap years. Normally, that's every 4 years, when the year is divisible by 4. So 2016 is a leap year, because 2016 is divisible by 4. 2017 is not, because dividing 2017 by 4 leaves a remainder of 1.
There are some exceptions, however. Century changes aren't leap years, so 1800 and 1900 weren't leap. But even that rule has its own exception, because once every 400 years a century change IS a leap year. The year 2000 was a leap year, but a special one…
All the rules together give the following formula to calculate the number of days in any given year:
=365+IF(AND(MOD(F1;4)=0;OR(MOD(F1;100)<>0;MOD(F1;400)=0));1;0), in which the year is placed in cell F1.
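The same rule, transcribed from the Excel formula into Python (the cell F1 becomes the function argument):

```python
def days_in_year(year):
    # leap iff divisible by 4, except century changes,
    # except once every 400 years
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    return 366 if leap else 365

assert days_in_year(2016) == 366
assert days_in_year(2017) == 365
assert days_in_year(1900) == 365   # century change: not leap
assert days_in_year(2000) == 366   # the 400-year exception
```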
Because Excel is smart enough to convert a date into a number, you can calculate with them and there are a lot of handy functions you can use to do so.
I am in the happy situation of having both my parents still alive, very old though, but still… I wondered when my mother would surpass my grandmother in age. To figure this out, I made the following sheet:
Cell E1 contains the birth date of my grandmother. Cell E2 has the date of death of my grandmother, and cell F1 contains my mother’s date of birth.
I want the difference in days between the date of birth and the date of death of my grandmother in cell F2, so I can add this number to my mother’s date of birth. This way I can see when my mother
will be older than my grandmother.
But, to my surprise, an error occurred. Research shows that Excel can only work with dates from the year 1900 onward.
It gets worse: According to Excel, there’s a 29-2-1900, while we saw that 1900 was NOT a leap year. Microsoft explains that they did this because of compatibility reasons relative to Lotus 1-2-3 (see
the article Microsoft wrote about this). This has consequences for the function WEEKDAY, which gives a wrong outcome for dates from 1-1-1900 to 29-2-1900.
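Python's `datetime`, which implements the calendar rules correctly, makes both problems visible: 29-2-1900 is rejected as an invalid date, and the true weekday of 1-1-1900 was a Monday (Excel's WEEKDAY calls it Sunday). A quick check:

```python
from datetime import date

# 1900 was not a leap year, so 29-2-1900 does not exist
try:
    date(1900, 2, 29)
    exists = True
except ValueError:
    exists = False
assert not exists

# 1-1-1900 was a Monday (Python: Monday=0)
assert date(1900, 1, 1).weekday() == 0
```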
Also for genealogists, to whom I have lately counted myself, this is an issue because this group of people often comes across dates before 1900.
HJGSoft has found a solution to this problem and has implemented it.
The add-in Adequate (version 2017_v1) contains 12 functions to make it possible to work with ‘all’ dates.
The system works as follows:
Depending on the entered date, Adequate checks if the date is Gregorian or Julian.
All dates from 15-10-1582 onward are Gregorian. All dates up to 4-10-1582 are Julian. Pope Gregory made ‘his’ calendar start at 15-10-1582, one day after 4-10-1582, so there's a 10-day hole in our calendar. This had to happen because the Julian calendar didn't handle leap years well. The Julian calendar assumes that one solar year lasts 365,25 days, meaning there has to be one leap day every four years to make up the difference. But actually, a solar year only lasts about 365,2425 days, which needs an extra correction. Hence the complicated leap year formula we saw before.
Adequate converts a date into a number as follows: 15-10-1582 gets a 0 and from there on 1 day is added. So all dates in the Gregorian calendar have a positive value. All dates calculated back from
4-10-1582 get 1 less. So 4-10-1582=-1, 3-10-1582=-2 etc. With that, all Julian dates get a negative value.
Furthermore, our calendar era doesn't have a year 0. So the day before 1-1-1 is 31-12-1 B.C. (Before Christ).
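For the Gregorian half of this scheme, the numbering can be reproduced with Python's `date.toordinal()`, which counts days from 1-1-1: subtracting the ordinal of 15-10-1582 gives the positive Adequate number for any Gregorian date. (The negative Julian half would additionally need a Julian-calendar conversion, which is omitted in this sketch.)

```python
from datetime import date

EPOCH = date(1582, 10, 15).toordinal()   # Adequate's day 0

def gregorian_number(d):
    """Days since 15-10-1582; positive for all later Gregorian dates."""
    return d.toordinal() - EPOCH

assert gregorian_number(date(1582, 10, 15)) == 0
assert gregorian_number(date(1582, 10, 16)) == 1
assert gregorian_number(date(2017, 6, 6)) > 0
```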
Now, we can also calculate with this system. This will happen with the specially designed functions. The date has to be entered as a text in the format dd-mm-yyyy.
A couple of important functions in this system:
HJG_UDate2Number converts a date to a number
HJG_Number2UDate converts a number to a date
HJG_UDays calculates the difference in days between two dates
HJG_UDateAdd adds the amount of years, quarters, months, weeks or days to a date
HJG_UWeekDay determines the day of the week, wherein Sunday=0 and Saturday=6
HJG_UDateFormat converts a date to a certain format
We are finally able to fix my problem with all of these tools:
The cells E1, E2, and F1 contain respectively my grandmother’s date of birth, my grandmother’s date of death and my mother’s date of birth.
Cell F2 contains the formula =HJG_UDays(E1;E2)+1. HJG_UDays determines the amount of days my grandmother has lived.
Cell F3 contains the formula =HJG_UDateFormat(HJG_UDateAdd(”d”;F2;F1);”Dddd Mmmm d yyyy”).
The function HJG_UDateAdd(”d”;F2;F1) adds the amount (F2) of days (”d”) to the date (F1).
Next, the date will be formatted using the function HJG_UDateFormat to the format Dddd Mmmm d yyyy, where Dddd is the day of the week fully spelled out, starting with a capital, followed by the month
fully spelled out, also starting with a capital, and followed by the day without a leading zero, and finally the 4-numbered year.
My mother will have passed my grandmother in age on Tuesday the 6th of June 2017. Now let's just hope she'll make it…
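The underlying computation is simple once dates are numbers. With hypothetical stand-in dates (the real family dates are not given here), the pattern of the sheet looks like this in Python; note that `datetime` handles the pre-1900 birth date that plain Excel cannot:

```python
from datetime import date, timedelta

gm_birth = date(1899, 3, 1)      # hypothetical stand-in dates,
gm_death = date(1992, 7, 15)     # including a pre-1900 birth
mother_birth = date(1931, 5, 2)

# number of days grandmother lived, like =HJG_UDays(E1;E2)+1
days_lived = (gm_death - gm_birth).days + 1

# first day on which mother has lived that many days
surpass_date = mother_birth + timedelta(days=days_lived)
```

With the real dates, Adequate's functions gave Tuesday the 6th of June 2017.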
Update: My mother made it!
|
{"url":"https://hjgsoft.com/articles/dates-in-excel/","timestamp":"2024-11-04T04:46:44Z","content_type":"application/xhtml+xml","content_length":"50206","record_id":"<urn:uuid:631f4d5f-bca7-4773-b902-10520f8ba7fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00231.warc.gz"}
|
An infinite descending chain of Boolean subfunctions consisting of threshold functions
Lehtonen, Erkko
Contributions to General Algebra 17, Proceedings of the Vienna Conference 2005 (AAA70), Verlag Johannes Heyn, Klagenfurt, (2006), 145-148
For a class C of Boolean functions, a Boolean function f is a C-subfunction of a Boolean function g, if f=g(h1,...,hn), where all the inner functions hi are members of C. Two functions are
C-equivalent, if they are C-subfunctions of each other. The C-subfunction relation is a preorder on the set of all functions if and only if C is a clone. An infinite descending chain of U_\infty-subfunctions is constructed from certain threshold functions (U_\infty denotes the clone of clique functions).
|
{"url":"https://cemat.tecnico.ulisboa.pt/document.php?project_id=7&member_id=71&doc_id=569","timestamp":"2024-11-07T19:11:05Z","content_type":"text/html","content_length":"8489","record_id":"<urn:uuid:5261ccee-e946-415f-a4fb-51b2c2132738>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00724.warc.gz"}
|
Video Game Breakout
Video Game Breakout
Molly Casper
Created on September 24, 2024
© 20XX GENIALLY ESCAPE GAMES
Write a cool subtitle here to provide context
4 FROG
3 BARS
2 PUZZLE
1 SHIPS
Complete the missions to obtain the password numbers
work needs to be productive
LEVEL 1/5
Write here the incorrect answer
Write the correct answer here
Write here the incorrect answer
Did you know that images illustrate what you want to convey and are a support to add additional info?
LEVEL 2/5
Write the incorrect answer here
Write the incorrect answer here
Write the correct answer here
Did you know that images are an aesthetic resource that tell stories on their own and also keep the brain awake?
LEVEL 3/5
Write the correct answer here
Write here the wrong answer
Write here the incorrect answer
Did you know that multimedia content is essential to achieve a WOW effect in your creations?
LEVEL 4/5
Write the incorrect answer here
Write the correct answer here
Write the incorrect answer here
Did you know that images are a support to add info?
LEVEL 5/5
THE NUMBER OF THIS MISSION IS 1
4 FROG
3 BARS
2 PUZZLE
1 SHIPS
Complete the missions to obtain the password numbers
This screen is locked. You need to solve the previous game to continue.
Write the incorrect answer here
Write the correct answer here
Write incorrect answer here
Did you know that Genially allows you to share your creation directly, without the need for downloads?
LEVEL 1/5
Write here the incorrect answer
Write the correct answer here
Write the incorrect answer here
Did you know that images are a support to add information?
LEVEL 2/5
Write here the incorrect answer
Write here the incorrect answer
Write here the correct answer
Did you know that images are a support to add information?
LEVEL 3/5
Write the correct answer here
Write the incorrect answer here
Write the incorrect answer here
Did you know that images are a support to add information?
LEVEL 4/5
Write the correct answer here
Write the incorrect answer here
Write the incorrect answer here
Did you know that images are a support to add info?
LEVEL 5/5
THE NUMBER OF THIS MISSION IS 2
4 FROG
3 BARS
2 PUZZLE
1 SHIPS
Complete the missions to obtain the password numbers
This screen is locked. You need to solve the previous game to continue.
Write here the incorrect answer
Wrong answer goes here
Write the correct answer here
Did you know that Genially allows you to share your creation directly, without the need for downloads?
LEVEL 1/5
Write the incorrect answer here
Write the correct answer here
Write the incorrect answer here
Did you know that images illustrate what you want to convey and are a support to add additional information?
LEVEL 2/5
Write here the wrong answer
Write the incorrect answer here
Write the correct answer here
Did you know that images are a support to add information?
LEVEL 3/5
Write the correct answer here
Write the incorrect answer here
Write the incorrect answer here
Did you know that images are a support to add info?
LEVEL 4/5
Write the correct answer here
Write the incorrect answer here
Write the wrong answer here
Did you know that images are a support to add information?
LEVEL 5/5
THE NUMBER OF THIS MISSION IS 3
4 FROG
3 BARS
2 PUZZLE
1 SHIPS
Complete the missions to obtain the numbers of the password
This screen is locked. You need to solve the previous game to continue.
Write the correct answer here
Write here the incorrect answer
Write the incorrect answer here
Did you know that Genially allows you to share your creation directly, without the need for downloads?
LEVEL 1/3
Write the incorrect answer here
Write the correct answer here
Write here the incorrect answer
Did you know that images are a support to add info?
LEVEL 2/3
Write the correct answer here
Write the incorrect answer here
Write the incorrect answer here
Did you know that images are a support to add info?
LEVEL 3/3
THE NUMBER OF THIS MISSION IS 4
4 FROG
3 BARS
2 PUZZLE
1 SHIPS
Complete the missions to obtain the password numbers
You will lose all the progress
Are you sure you want to exit?
|
{"url":"https://view.genially.com/66f3168c02d642173d968119/interactive-content-video-game-breakout","timestamp":"2024-11-01T23:09:44Z","content_type":"text/html","content_length":"52223","record_id":"<urn:uuid:f99a125a-bea4-4ef0-902f-230d8febd767>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00660.warc.gz"}
|
Real-time dreamy Cloudscapes with Volumetric Raymarching - Maxime Heckel's Blog
October 31, 2023 / 32 min read
Last Updated: November 11, 2023
I spent the past few months diving into the realm of Raymarching and studying some of its applications that may come in handy for future 3D projects, and while I managed to build a pretty diverse set
of scenes, all of them consisted of rendering surfaces or solid objects. My blog post on Raymarching covered some of the many impressive capabilities of this rendering technique, and as I mentioned
at the end of that post, that was only the tip of the iceberg; there is a lot more we can do with it.
One fascinating aspect of Raymarching I quickly encountered in my study was its capacity to be tweaked to render volumes. Instead of stopping the raymarched loop once the ray hits a surface, we push
through and continue the process to sample the inside of an object. That is where my obsession with volumetric clouds started, and I think the countless hours I spent exploring the many Sky Islands
in Zelda Tears of the Kingdom contributed a lot to my curiosity to learn more about how they work. I thus studied a lot of Shadertoy scenes such as "Clouds" by Inigo Quilez, "Starry Night" by al-ro,
and "Volumetric Raymarching sample" by Suyoku leveraging many Volumetric Raymarching techniques to render smoke, clouds, and cloudscapes, which I obviously couldn't resist trying to rebuild myself.
I spent a great deal of time exploring the different ways I could use Raymarching to render clouds, from fully wrapping my head around the basics of Volumetric Raymarching to leveraging physically
based properties of clouds to try getting a more realistic output while also trying to squeeze as much performance out of my scenes with neat performance improvement tips I learned along the way. I
cover all of that in this article, which I hope can serve you as a field guide for your own volumetric rendering experiments and learnings.
Volumetric rendering: Raymarching with a twist
In my previous blog post on Raymarching, we saw that the technique relied on:
• Signed Distance Fields: functions that return the distance of a given point in space to the surface of an object
• A Raymarching loop where we march step-by-step alongside rays cast from an origin point (a camera, the observer's eye) through each pixel of an output image, and we calculate the distance to the
object's surface using our SDF. Once that distance is small enough, we can draw a pixel.
If you've practiced this technique on some of your own scenes, you're in luck: Volumetric Raymarching relies on the same principles: there's a loop, rays cast from an origin, and SDFs. However, since
we're rendering volumes instead of surfaces, there's a tiny twist to the technique.
How to sample a volume
The first time we got introduced to the concept of SDF, we learned that it was important not to step inside the object during our Raymarching loop to have a beautiful render. I even emphasized that
fact in one of my diagrams showcasing 3 points relative to an object:
• P1 is located far from the surface, in green, representing a positive distance to the surface.
• P2 is located at a close distance ε to the surface, in orange,
• P3 positioned inside the object, in red, representing a negative distance to the surface.
Diagram showcasing 3 points, P1, P2, and P3, being respectively, at a positive distance, small distance, and inside a sphere.
When sampling a volume, we'll need to actually raymarch inside our object and reframe how we think of SDF: instead of representing the distance to the surface, we will now use it as the density of
our volume.
• When raymarching outside, the density is null, or 0.
• Once we raymarch inside, it is positive.
To illustrate this new way of thinking about Raymarching in the context of volume, here's a modified version of the widget I introduced in my blog post on the topic earlier this year.
That reframing of what an SDF represents ends up changing two core principles in our Raymarching technique that will have to be reflected in our code:
1. We have to march step-by-step with a constant step size along our rays. We no longer use the distance returned by the SDF.
2. Our SDF now returns the opposite of the distance to the surface to properly represent the density of our object (positive on the inside, 0 on the outside)
Diagram showcasing 3 points, P1, P2, and P3, being respectively, at a positive distance, small distance, and inside a sphere. Only P3 is considered 'valid' in the context of Volumetric Raymarching
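Those two changes are easy to see in a dimension-agnostic sketch: march a ray in fixed steps and integrate a density that is positive only inside the volume. The density profile below is a hypothetical smooth one (denser toward the sphere's center), written in plain JavaScript rather than GLSL, purely for illustration:

```javascript
// Fixed-step volumetric march: integrate density along a ray.
// density() is a hypothetical smooth profile, positive only inside a unit sphere.
function density(p) {
  return Math.max(0, 1.0 - Math.hypot(p[0], p[1], p[2]));
}

function marchDensity(origin, dir, stepSize, maxSteps) {
  let accumulated = 0;
  for (let i = 0; i < maxSteps; i++) {
    const t = i * stepSize; // constant step size: we never use an SDF distance to skip ahead
    const p = [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
    accumulated += density(p) * stepSize;
  }
  return accumulated;
}

// A ray through the center accumulates far more density than a grazing one.
const through = marchDensity([0, 0, -3], [0, 0, 1], 0.05, 120);
const grazing = marchDensity([0, 0.95, -3], [0, 0, 1], 0.05, 120);
```

The step size directly trades accuracy for iteration count, which is why most of the performance tricks later in the article revolve around getting away with fewer, larger steps.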
Our first Volumetric Raymarching scene
Now that we have a grasp of sampling volumes using what we know about Raymarching, we can try implementing it by modifying an existing scene. For brevity, I'm not detailing the setup of a basic of
Raymarching scenes. If you want a good starting point you can head to my Raymarching setup I already introduced in a previous article.
The setup of the scene is quite similar to what we're familiar with in classic Raymarching; the modifications we'll need to do are located in:
• Our SDF functions: we'll need to return the opposite of the distance: -d instead of d.
Example of SDF used in Volumetric Raymarching
float sdSphere(vec3 p, float radius) {
  return length(p) - radius;
}

float scene(vec3 p) {
  float distance = sdSphere(p, 1.0);
  return -distance;
}
• Our raymarch function: we'll need to march at a constant step size and start drawing only once the density is over 0.
Volumetric Raymarching loop with constant step size
#define MAX_STEPS 100
const float MARCH_SIZE = 0.08;

vec4 raymarch(vec3 rayOrigin, vec3 rayDirection) {
  float depth = 0.0;
  vec3 p = rayOrigin + depth * rayDirection;

  vec4 res = vec4(0.0);

  for (int i = 0; i < MAX_STEPS; i++) {
    float density = scene(p);

    if (density > 0.0) {
      // ...
    }

    depth += MARCH_SIZE;
    p = rayOrigin + depth * rayDirection;
  }

  return res;
}
Now comes another question: what shall we draw once our density is positive to represent a volume? For this first example, we can keep things simple and play with the alpha channel of our colors to
make it proportional to the density of our volume: the denser our object gets as we march into it, the more opaque/darker it will be.
Simple Volumetric Raymarching loop
const float MARCH_SIZE = 0.08;

vec4 raymarch(vec3 rayOrigin, vec3 rayDirection) {
  float depth = 0.0;
  vec3 p = rayOrigin + depth * rayDirection;

  vec4 res = vec4(0.0);

  for (int i = 0; i < MAX_STEPS; i++) {
    float density = scene(p);

    // We only draw the density if it's greater than 0
    if (density > 0.0) {
      vec4 color = vec4(mix(vec3(1.0, 1.0, 1.0), vec3(0.0, 0.0, 0.0), density), density);
      color.rgb *= color.a;
      res += color * (1.0 - res.a);
    }

    depth += MARCH_SIZE;
    p = rayOrigin + depth * rayDirection;
  }

  return res;
}
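One property worth noting about the front-to-back compositing line res += color * (1.0 - res.a): the accumulated alpha can approach but never exceed 1, which is what lets us march through the whole volume without overshooting opacity. Here is a plain JavaScript sketch of the same accumulation (single channel, illustrative values only):

```javascript
// Front-to-back alpha compositing, as in the raymarching loop above,
// for a single channel plus alpha.
function compositeFrontToBack(samples) {
  // samples: array of { value, alpha } encountered front to back
  let res = { value: 0, alpha: 0 };
  for (const s of samples) {
    const premultiplied = s.value * s.alpha; // color.rgb *= color.a
    res.value += premultiplied * (1 - res.alpha);
    res.alpha += s.alpha * (1 - res.alpha);
  }
  return res;
}

const out = compositeFrontToBack([
  { value: 1.0, alpha: 0.4 },
  { value: 0.5, alpha: 0.6 },
  { value: 0.2, alpha: 0.9 },
]);
```

Each new sample only contributes through whatever "transparency budget" (1 - res.alpha) is left, so samples near the camera dominate, exactly what we want for a volume.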
If we try to render this code in our React Three Fiber canvas, we should get the following result π
Drawing Fluffy Raymarched Clouds
We now know and applied the basics of Volumetric Raymarching. So far, we only rendered a simple volumetric sphere with constant density as we march through the volume, which is a good start. We can
now try using that simple scene as a foundation to render something more interesting: clouds!
Noisy Volume
Going from our simple SDF of a sphere to a cloud consists of drawing it with a bit more noise. Clouds don't have a uniform shape nor do they have a uniform density, thus we need to introduce some
organic randomness through noise in our Raymarching loop. If you read some of my previous articles, you should already be familiar with the concept of:
1. Noise, Perlin noise, and value noise derivative
2. Fractal Brownian Motion, or FBM.
3. Texture based noise.
To generate raymarched landscapes, we used a noise texture, noise derivatives, and FBM to get a detailed organic result. We'll rely on some of those concepts to create organic randomness and obtain a
cloud from our SDF.
Noise function for Raymarched landscape
vec3 noise(vec2 x) {
  vec2 p = floor(x);
  vec2 f = fract(x);
  vec2 u = f * f * (3. - 2. * f);

  float a = textureLod(uTexture, (p + vec2(.0, .0)) / 256., 0.).x;
  float b = textureLod(uTexture, (p + vec2(1.0, .0)) / 256., 0.).x;
  float c = textureLod(uTexture, (p + vec2(.0, 1.0)) / 256., 0.).x;
  float d = textureLod(uTexture, (p + vec2(1.0, 1.0)) / 256., 0.).x;

  float noiseValue = a + (b - a) * u.x + (c - a) * u.y + (a - b - c + d) * u.x * u.y;
  vec2 noiseDerivative = 6. * f * (1. - f) * (vec2(b - a, c - a) + (a - b - c + d) * u.yx);

  return vec3(noiseValue, noiseDerivative);
}
For clouds, our noise function looks a bit different:
Noise function for Volumetric clouds
float noise(vec3 x) {
  vec3 p = floor(x);
  vec3 f = fract(x);
  vec3 u = f * f * (3. - 2. * f);

  vec2 uv = (p.xy + vec2(37.0, 239.0) * p.z) + u.xy;
  vec2 tex = textureLod(uNoise, (uv + 0.5) / 256.0, 0.0).yx;

  return mix(tex.x, tex.y, u.z) * 2.0 - 1.0;
}
To tell you the truth, I saw this function in many Shadertoy demos without necessarily seeing a credited author or even a link to an explanation; I kept using it throughout my work as it still
yielded a convincing cloud noise pattern. Here's an attempt at gathering together some of its specificities from my own understanding:
• Clouds are 3D structures, so our function takes in a vec3 as input: a point in space within our cloud.
• The texture lookup differs from its landscape counterpart: we're sampling it as a 2D slice from a 3D position. The vec2(37.0, 239.0) * p.z seems a bit arbitrary to me, but from what I gathered,
it allows for more variation in the resulting noise.
• We then mix two noise values from our texture lookup based on the z value to generate a smooth noise pattern and rescale it within the [-1, 1] range.
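The f * f * (3. - 2. * f) term both noise functions share is the classic smoothstep easing applied per axis: it pins the interpolation factor at the cell corners and flattens its slope there, which is what hides the underlying grid. A quick numeric check in plain JavaScript:

```javascript
// Per-axis interpolation factor used by both noise functions above.
const smoothFactor = (f) => f * f * (3 - 2 * f);

// Central-difference slope, to inspect the derivative at the cell corners.
const slope = (f, h = 1e-6) => (smoothFactor(f + h) - smoothFactor(f - h)) / (2 * h);

const u0 = smoothFactor(0); // pinned at 0
const u1 = smoothFactor(1); // pinned at 1
const s0 = slope(0);        // flat at the corners...
const s1 = slope(1);
const sMid = slope(0.5);    // ...and steepest in the middle (6 * 0.5 * 0.5 = 1.5)
```

Because the slope vanishes at 0 and 1, adjacent noise cells join with matching tangents, so no grid-aligned creases show up in the cloud.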
Applying this noise along with a Fractal Brownian motion is pretty similar to what we're used to with Raymarched landscapes:
Fractal Brownian Motion applied to our Volumetric Raymarching scene
float fbm(vec3 p) {
  vec3 q = p + uTime * 0.5 * vec3(1.0, -0.2, -1.0);
  float f = 0.0;
  float scale = 0.5;
  float factor = 2.02;

  for (int i = 0; i < 6; i++) {
    f += scale * noise(q);
    q *= factor;
    factor += 0.21;
    scale *= 0.5;
  }

  return f;
}

float scene(vec3 p) {
  float distance = sdSphere(p, 1.0);
  float f = fbm(p);

  return -distance + f;
}
If we apply the code above to our previous demo, we do get something that starts to look like a cloud:
Adding light
Once again, we're just "starting" to see something approaching our goal, but a crucial element is missing to make our cloud feel more cloudy: light.
The demo we just saw in the previous part lacks depth and shadows and thus doesn't feel very realistic overall, and that's due to the lack of diffuse light.
To add light to our cloud and consequentially obtain better shadows, one may want to apply the same lighting we used in standard Raymarching scenes:
1. Calculate the normal of each sample point using our scene function
2. Use the dot product of the normal and the light direction
Diffuse lighting in Raymarched scene using normals
vec3 getNormal(vec3 p) {
  vec2 e = vec2(.01, 0);

  vec3 n = scene(p) - vec3(
    scene(p - e.xyy),
    scene(p - e.yxy),
    scene(p - e.yyx)
  );

  return normalize(n);
}

void main() {
  // ...
  vec3 ro = vec3(0.0, 0.0, 5.0);
  vec3 rd = normalize(vec3(uv, -1.0));
  vec3 lightPosition = vec3(1.0);

  float d = raymarch(ro, rd);
  vec3 p = ro + rd * d;

  vec3 color = vec3(0.0);

  if (d < MAX_DIST) {
    vec3 normal = getNormal(p);
    vec3 lightDirection = normalize(lightPosition - p);

    float diffuse = max(dot(normal, lightDirection), 0.0);
    color = vec3(1.0, 1.0, 1.0) * diffuse;
  }

  gl_FragColor = vec4(color, 1.0);
}
That would work in theory, but it's not the optimal choice for Volumetric Raymarching:
• The getNormal function requires a lot of sample points to estimate the "gradient" in every direction. In the code above, we need 4, but there are code snippets that require 6 for a more accurate result.
• Our volumetric raymarching loop is more resource-intensive: we're walking at a constant step size along our ray to sample the density of our volume.
Thus, we need another method or approximation for our diffuse light. Luckily, Inigo Quilez presents a technique to solve this problem in his article on directional derivatives. Instead of having to
sample our density in every direction like getNormal, this method simplifies the problem by sampling the density at our sampling point p and at an offset in the direction of the light and getting the
difference between those values to approximate how the light scatters roughly inside our volume.
Diagram showcasing 2 sampled points P1 and P2 with both their diffuse lighting calculated by sampling extra points P1' and P2' in the direction of the light
In the diagram above, you can see that we're sampling our density at p1 and at another point p1' that's a bit further along the light ray:
• If the density increases along that path, that means the volume gets denser, and light will scatter more
• If the density gets smaller, our cloud is less thick, and thus, the light will scatter less.
This method only requires 2 sampling points and consequentially requires fewer resources to give us a good approximation of how the light behaves with the volume around p1.
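For a spherical density field we can verify this approximation numerically: with density(p) = r − |p|, the directional difference quotient converges to the classic normal-based diffuse term dot(normal, lightDirection). The sketch below (plain JavaScript, illustrative values) compares the two:

```javascript
// Directional-derivative diffuse vs. the analytic normal, for a spherical density.
// densityAt is a hypothetical density field: positive inside a unit sphere.
const densityAt = (p) => 1.0 - Math.hypot(p[0], p[1], p[2]);

// (density(p) - density(p + h * L)) / h, clamped to [0, 1]
function diffuseDirectional(p, lightDir, h = 1e-4) {
  const q = [p[0] + h * lightDir[0], p[1] + h * lightDir[1], p[2] + h * lightDir[2]];
  return Math.min(1, Math.max(0, (densityAt(p) - densityAt(q)) / h));
}

// Classic equivalent: outward normal dotted with the light direction.
function diffuseNormal(p, lightDir) {
  const len = Math.hypot(p[0], p[1], p[2]);
  const n = [p[0] / len, p[1] / len, p[2] / len];
  return Math.min(1, Math.max(0, n[0] * lightDir[0] + n[1] * lightDir[1] + n[2] * lightDir[2]));
}

const p = [0.3, 0.4, 0.5]; // a point inside the sphere
const L = [0, 0.6, 0.8];   // normalized light direction
const fast = diffuseDirectional(p, L);
const exact = diffuseNormal(p, L);
```

The two agree for this simple field because the difference quotient is exactly a one-sided estimate of −∇density · L; for a noisy cloud density the agreement is only approximate, but that is precisely the trade we are making for speed.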
We can apply this diffuse formula to our demo as follows:
Diffuse lighting using directional derivatives
if (density > 0.0) {
  // Directional derivative
  // For fast diffuse lighting
  float diffuse = clamp((scene(p) - scene(p + 0.3 * sunDirection)) / 0.3, 0.0, 1.0);
  vec3 lin = vec3(0.60, 0.60, 0.75) * 1.1 + 0.8 * vec3(1.0, 0.6, 0.3) * diffuse;
  vec4 color = vec4(mix(vec3(1.0, 1.0, 1.0), vec3(0.0, 0.0, 0.0), density), density);
  color.rgb *= lin;

  color.rgb *= color.a;
  res += color * (1.0 - res.a);
}
That is, once again, very similar to what we were doing in standard Raymarching, except that now, we have to include it inside the Raymarching loop as we're sampling a volume and thus have to run the
calculation multiple times throughout the volume as the density may vary whereas a surface required only one diffuse lighting computation (at the surface).
You can observe the difference between our cloud without lighting and with diffuse lighting below:
Before/After comparison of our Volumetric cloud without and with diffuse lighting.
And here's the demo featuring the concept and code we just introduced:
Morphing clouds
Let's take a little break to tweak our scene and have some fun with what we built so far! Despite the differences between the standard Raymarching and its volumetric counterpart, there are still a
lot of SDF-related concepts you can apply when building cloudscapes.
You can try to make a cloud in fun shapes like a cross or a torus, or even better, try to make it morph from one form to another over time:
Mixing SDF to morph volumetric clouds into different shapes
mat2 rotate2D(float a) {
  float s = sin(a);
  float c = cos(a);
  return mat2(c, -s, s, c);
}

float nextStep(float t, float len, float smo) {
  float tt = mod(t += smo, len);
  float stp = floor(t / len) - 1.0;
  return smoothstep(0.0, smo, tt) + stp;
}

float scene(vec3 p) {
  vec3 p1 = p;
  p1.xz *= rotate2D(-PI * 0.1);
  p1.yz *= rotate2D(PI * 0.3);

  float s1 = sdTorus(p1, vec2(1.3, 0.9));
  float s2 = sdCross(p1 * 2.0, 0.6);
  float s3 = sdSphere(p, 1.5);
  float s4 = sdCapsule(p, vec3(-2.0, -1.5, 0.0), vec3(2.0, 1.5, 0.0), 1.0);

  float t = mod(nextStep(uTime, 3.0, 1.2), 4.0);

  float distance = mix(s1, s2, clamp(t, 0.0, 1.0));
  distance = mix(distance, s3, clamp(t - 1.0, 0.0, 1.0));
  distance = mix(distance, s4, clamp(t - 2.0, 0.0, 1.0));
  distance = mix(distance, s1, clamp(t - 3.0, 0.0, 1.0));

  float f = fbm(p);

  return -distance + f;
}
This demo is a reproduction of this volumetric rendering related Shadertoy scene. I really like this creation because the result is very organic, and it gives the impression that the cloud is rolling
into its next shape naturally.
You can also try to render:
• Clouds merging together using the min and smoothmin of two SDFs
• Repeating clouds through space using the mod function
There are a lot of creative compositions to try!
Performance optimization
You may notice that running the scenes we built so far may make your computer sound like a jet engine at high resolution or at least not look as smooth as they could. Luckily, we can do something
about it and use some performance optimization techniques to strike the right balance between FPS count and output quality.
Blue noise dithering
One of the main performance pitfalls of our current raymarched cloudscape scene is due to:
• the number of steps we have to perform to sample our volume and the small marchSize
• some heavy computation we have to do within our loop, like our directional derivative or FBM.
This issue will only worsen as we attempt to make more computations to achieve a more physically accurate output in the next part of this article.
One of the first things we could do to make this scene more efficient would be to reduce the amount of steps we perform when sampling our cloud and increase the step size. However, if we attempt this
on some of our previous examples (I invite you to try), some layering will be visible, and our volume will look more like some kind of milk soup than a fluffy cloud.
Screenshot of our rendered Volumetric cloud with a low max step count and higher step size. Notice how those optimizations have degraded the output quality.
You might have encountered the concept of dithering or some images using dithering styles before. This process can create the illusion of more colors or shades in an image than are actually available, or it can be used purely for artistic ends. I recommend reading Dithering on the GPU from Alex Charlton if you want a quick introduction.
In Ray marching fog with blue noise, the author showcases how you can leverage blue noise dithering in your raymarched scene to erase the banding or layering effect due to a lower step count or less
granular loop. This technique leverages a blue noise pattern, which has fewer patterns or clumps than other noises and is less visible to the human eye, to obtain a random number each time our
fragment shader runs. We then introduce that number as an offset at the beginning of the raymarched loop, moving our sampling start point along our ray for each pixel of our output.
Blue noise texture
Diagram showcasing the difference between our cloud being sampled without and with blue noise dithering. Notice how each ray is offset when blue noise is introduced and how that 'erases' any obvious
layering in the final render.
Blue noise dithering introducing an offset in our Raymarching loop
uniform sampler2D uBlueNoise;
vec4 raymarch(vec3 rayOrigin, vec3 rayDirection, float offset) {
  float depth = 0.0;
  depth += MARCH_SIZE * offset;
  vec3 p = rayOrigin + depth * rayDirection;

  // ...
}

void main() {
  // ...
  float blueNoise = texture2D(uBlueNoise, gl_FragCoord.xy / 1024.0).r;
  float offset = fract(blueNoise);

  vec4 res = raymarch(ro, rd, offset);

  // ...
}
By introducing some blue noise dithering in our fragment shader, we can erase those artifacts and get a high-quality output while maintaining the Raymarching step count low!
However, under some circumstances, the dithering pattern can be pretty noticeable. By looking at some other Shadertoy examples, I discovered that introducing a temporal aspect to the blue noise can
attenuate this issue.
Temporal blue noise dithering offset
float offset = fract(blueNoise + float(uFrame%32) / sqrt(0.5));
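The frame-dependent term shifts each pixel's start offset by an irrational stride (1/√0.5 ≈ 1.414) every frame, so the 32 offsets a pixel cycles through end up well spread over [0, 1). A quick check in plain JavaScript (the blueNoise value is a hypothetical texture read):

```javascript
// Temporal blue-noise offset: each frame shifts a pixel's start offset by an
// irrational stride, cycling through 32 well-spread values in [0, 1).
const fract = (x) => x - Math.floor(x);
const offsetFor = (blueNoise, frame) => fract(blueNoise + (frame % 32) / Math.sqrt(0.5));

const blueNoise = 0.37; // hypothetical blue-noise texture read for one pixel
const offsets = [];
for (let frame = 0; frame < 32; frame++) {
  offsets.push(offsetFor(blueNoise, frame));
}
const distinct = new Set(offsets.map((o) => o.toFixed(6))).size;
```

Because the stride is irrational, no two frames in the cycle reuse the same starting depth, which is what averages the residual dithering pattern away over time.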
Here's a before/after comparison of a single frame of our raymarched cloud. I guess the results speak for themselves here.
Before/After comparison of our Volumetric cloud with fewer Raymarching steps and without and with Blue Noise dithering.
And here's the demo showcasing our blue noise dithering in action, giving us a softer cloud:
Upscaling with Bicubic filtering
This second improvement recommended by @N8Programs aims to fix some remaining noise artifacts that remain following the introduction of the blue noise dithering to our raymarched scene.
Bicubic filtering is used in upscaling and allows smoothing out some noise patterns while retaining details by calculating the value of a new pixel from its 16 neighboring pixels weighted through a cubic polynomial.
I was lucky to find an implementation of bicubic filtering on Shadertoy made by N8Programs himself! Applying it directly to our existing work, however, is not that straightforward: we have to add this improvement as its own step or pass in the rendering process, almost as a post-processing effect.
I introduced an easy way to build this kind of pipeline in my article titled Beautiful and mind-bending effects with WebGL Render Targets where I showcase how you can use Frame Buffer Objects (FBO)
to apply some post-processing effects on an entire scene which we can use for this use case:
1. We render our main raymarched canvas in a portal.
2. The default scene only contains a fullscreen triangle.
3. We render our main scene in a render target.
4. We pass the texture of the main scene's render target as a uniform of our bicubic filtering material.
5. We use the bicubic filtering material as the material for our fullscreen triangle.
6. Our bicubic filtering will take our noisy raymarched scene as a texture uniform and output the smoothed out scene.
Diagram showcasing how the bicubic filtering is applied as a post-processing effect to the original scene using Render Targets.
Here's a quick comparison of our scene before and after applying the bicubic filtering:
Before/After comparison of our Volumetric cloud without and with Bicubic Filtering. Notice the noise around the edges and in the less dense regions of the cloud being smoothed out when the effect is applied.
The full implementation is a bit long, and features concepts I already went through in my render target focused blog post, so I invite you to look at it on your own time in the demo below:
Leveraging render targets allowed me to play more with the resolution of the original raymarched scene. You can see a little selector that lets you pick at which resolution we render our raymarched
cloud. You can notice that there are not a lot of differences between 1x and 0.5x, which is great: we can squeeze more FPS without sacrificing the output quality.
Physically accurate Clouds
So far, we've managed to build really beautiful cloudscapes with Volumetric Raymarching using some simple techniques and mixing the right colors. The resulting scenes are satisfying enough and give
the illusion of large, dense clouds, but what if we wanted a more realistic output?
I spent quite some time digging through talks, videos, and articles on how game engines solve the problem of physically accurate clouds and all the techniques involved in them. It's been a journey,
and I wanted to dedicate this last section to this topic because I find the subject fascinating: from a couple of physical principles of actual real-life clouds, we can render clouds in WebGL using
Volumetric Raymarching!
Beer's Law
I already introduced the concept of Beer's Law in my Raymarching blog post as a way to render fog in the distance of a scene. It states that the intensity of light passing through a transparent
medium is exponentially related to the distance it travels. The further to the medium light propagates, the more it is being absorbed. The formula for Beer's Law is as follows:
I = I₀ * exp(−α * d), where α is the absorption or attenuation coefficient describing how "thick" or "dense" the medium is. In our demos, we'll consider an absorption coefficient of 0.9, although I'd invite you to try different values so you can see the impact of this number on the resulting render.
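A useful property to keep in mind: Beer's Law is multiplicative. The transmittance over a path equals the product of the transmittances of its segments, which is exactly why a raymarching loop can accumulate it step by step. A quick check in plain JavaScript:

```javascript
// Beer's Law: transmittance decays exponentially with traveled distance.
const ABSORPTION = 0.9; // same coefficient as in the demos
const beersLaw = (dist, absorption = ABSORPTION) => Math.exp(-dist * absorption);

const t0 = beersLaw(0); // nothing traversed: all light gets through
const t1 = beersLaw(1);
const t2 = beersLaw(2); // two units = the product of two one-unit segments
```

Splitting a path into N marching steps and multiplying the per-step transmittances therefore gives the same answer as evaluating the law once over the full distance.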
Diagram showcasing how Beer's Law can be used to represent how much light gets absorbed through a volume
We can use this formula in our GLSL code and modify the Raymarching loop to use it instead of the "hacky" transparency hack we used in the first part:
Using Beer's Law to calculate and return the accumulated light energy going through the cloud
#define MAX_STEPS 50
#define ABSORPTION_COEFFICIENT 0.9

float BeersLaw(float dist, float absorption) {
  return exp(-dist * absorption);
}

const vec3 SUN_POSITION = vec3(1.0, 0.0, 0.0);
const float MARCH_SIZE = 0.16;

float raymarch(vec3 rayOrigin, vec3 rayDirection, float offset) {
  float depth = 0.0;
  depth += MARCH_SIZE * offset;
  vec3 p = rayOrigin + depth * rayDirection;
  vec3 sunDirection = normalize(SUN_POSITION);

  float totalTransmittance = 1.0;
  float lightEnergy = 0.0;

  for (int i = 0; i < MAX_STEPS; i++) {
    float density = scene(p);

    if (density > 0.0) {
      float transmittance = BeersLaw(density * MARCH_SIZE, ABSORPTION_COEFFICIENT);
      float luminance = density;

      totalTransmittance *= transmittance;
      lightEnergy += totalTransmittance * luminance;
    }

    depth += MARCH_SIZE;
    p = rayOrigin + depth * rayDirection;
  }

  return lightEnergy;
}
In the code snippet above:
• We gutted the raymarching loop, so it now relies on a more physically based property: Beer's Law.
• We changed the interface of our function: instead of returning a full color, it now returns a float representing the amount of light or light energy going through the cloud.
• As we march through the volume, we accumulate the obtained transmittance. The deeper we go, the less light we add.
• We return the resulting lightEnergy
The demo below showcases what using Beer's Law yields in our Raymarching loop:
The resulting cloud is a bit strange:
• its edges do indeed behave like a cloud
• the center is just a white blob
all of which is, once again, due to the lack of a proper lighting model.
Sampling light
Our new cloud does not interact with light right now. You can try changing the SUN_POSITION vector: the resulting render will remain the same. We not only need a lighting model but also a physically
accurate one.
For that, we can try to compute how much light has been absorbed at each sample point of our Raymarching loop by:
• Starting a dedicated nested Raymarching loop that goes from the current sample point toward the light source (direction of the light)
• Sampling the density and applying Beer's Law along the way, like we just did
The diagram below illustrates this technique to make it a bit easier to understand:
Diagram showcasing how we sample multiple points of lights in the direction of the light through our volume for each sampled point in the Raymarching loop.
The code snippet below is one of many implementations of this technique. We'll use this one going forward:
Dedicated nested raymarching loop to sample the light received at a given sampled point
#define MAX_STEPS 50
#define MAX_STEPS_LIGHTS 6
#define ABSORPTION_COEFFICIENT 0.9

const vec3 SUN_POSITION = vec3(1.0, 0.0, 0.0);
const float MARCH_SIZE = 0.16;

float lightmarch(vec3 position, vec3 rayDirection) {
  vec3 lightDirection = normalize(SUN_POSITION);
  float totalDensity = 0.0;
  float marchSize = 0.03;

  for (int step = 0; step < MAX_STEPS_LIGHTS; step++) {
    position += lightDirection * marchSize * float(step);

    float lightSample = scene(position, true);
    totalDensity += lightSample;
  }

  float transmittance = BeersLaw(totalDensity, ABSORPTION_COEFFICIENT);
  return transmittance;
}

float raymarch(vec3 rayOrigin, vec3 rayDirection, float offset) {
  float depth = 0.0;
  depth += MARCH_SIZE * offset;
  vec3 p = rayOrigin + depth * rayDirection;
  vec3 sunDirection = normalize(SUN_POSITION);

  float totalTransmittance = 1.0;
  float lightEnergy = 0.0;

  for (int i = 0; i < MAX_STEPS; i++) {
    float density = scene(p, false);

    // We only draw the density if it's greater than 0
    if (density > 0.0) {
      float lightTransmittance = lightmarch(p, rayDirection);
      float luminance = density;

      totalTransmittance *= lightTransmittance;
      lightEnergy += totalTransmittance * luminance;
    }

    depth += MARCH_SIZE;
    p = rayOrigin + depth * rayDirection;
  }

  return lightEnergy;
}
Because of this nested loop, the algorithmic complexity of our Raymarching loop just increased, so we'll need to define a relatively low number of steps to sample our light while also calculating a
less precise density by reducing the number of Octaves in our FBM to preserve a decent frame-rate (that's one easy win I implemented to avoid dropping too many frames).
All these little tweaks and performance considerations have been taken into account in the demo below:
Anisotropic scattering and phase function
Until now, we assumed that light gets distributed equally in every direction as it propagates through the cloud. In reality, the light gets scattered in different directions with different
intensities due to water droplets. This phenomenon is called Anisotropic scattering (vs. Isotropic when light scatters evenly), and to have a realistic cloud, we can try to take this into account
within our Raymarching loop.
Diagram showcasing the difference between isotropic scattering and anisotropic scattering when sampling our light energy.
To simulate Anisotropic scattering in our cloud scene for each sampling point for a given light source, we can use a phase function. A common one is the Henyey-Greenstein phase function, which I
encountered in pretty much all the examples I could find on physically accurate Volumetric Raymarching.
The GLSL implementation of this phase function looks as follows:
Implementation of the Henyey-Greenstein phase function
float HenyeyGreenstein(float g, float mu) {
  float gg = g * g;
  return (1.0 / (4.0 * PI)) * ((1.0 - gg) / pow(1.0 + gg - 2.0 * g * mu, 1.5));
}
We now have to introduce the result of this new function in our Raymarching loop by multiplying it by the density at a given sampled point, and what we obtain is more realistic lighting for our
cloud, especially if the light source moves around.
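Before dropping it into the loop, the function itself is worth sanity-checking: a physically valid phase function must integrate to 1 over all directions, and with g = 0 it must reduce to the isotropic 1/(4π). A quick numeric check in plain JavaScript mirroring the GLSL above:

```javascript
// Henyey-Greenstein phase function, mirroring the GLSL implementation above.
const PI = Math.PI;
const henyeyGreenstein = (g, mu) =>
  (1 / (4 * PI)) * ((1 - g * g) / Math.pow(1 + g * g - 2 * g * mu, 1.5));

// A valid phase function integrates to 1 over all directions:
// 2π * ∫ over μ in [-1, 1] of HG(g, μ) dμ = 1 for any anisotropy g in (-1, 1).
function integrateOverSphere(g, n = 100000) {
  let sum = 0;
  const dMu = 2 / n;
  for (let i = 0; i < n; i++) {
    const mu = -1 + (i + 0.5) * dMu; // midpoint rule over μ
    sum += henyeyGreenstein(g, mu) * dMu;
  }
  return 2 * PI * sum;
}

const isotropic = henyeyGreenstein(0, 0.3); // g = 0 collapses to 1 / (4π)
const norm = integrateOverSphere(0.6);      // forward-scattering anisotropy
```

Normalization is what guarantees the phase function only redistributes light over directions instead of creating or destroying energy, so multiplying the density by it stays physically plausible.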
Introducing the Henyey-Greenstein phase function inside our Raymarching loop
float raymarch(vec3 rayOrigin, vec3 rayDirection, float offset) {
  float depth = 0.0;
  depth += MARCH_SIZE * offset;
  vec3 p = rayOrigin + depth * rayDirection;
  vec3 sunDirection = normalize(SUN_POSITION);

  float totalTransmittance = 1.0;
  float lightEnergy = 0.0;

  float phase = HenyeyGreenstein(SCATTERING_ANISO, dot(rayDirection, sunDirection));

  for (int i = 0; i < MAX_STEPS; i++) {
    float density = scene(p, false);

    // We only draw the density if it's greater than 0
    if (density > 0.0) {
      float lightTransmittance = lightmarch(p, rayDirection);
      float luminance = density * phase;

      totalTransmittance *= lightTransmittance;
      lightEnergy += totalTransmittance * luminance;
    }

    depth += MARCH_SIZE;
    p = rayOrigin + depth * rayDirection;
  }

  return lightEnergy;
}
Comparison of our physically accurate Volumetric cloud with and without applying the Henyey-Greenstein phase function.
The final demo of this article below showcases our scene with:
• Blue noise dithering
• Bicubic filtering
• Beer's law
• Our more realistic light sampling
• Henyey-Greenstein phase function
The result looks really good, although I'll admit I had to add an extra value term to my light energy formula so the cloud wouldn't simply "fade away" when dense parts ended up in the shade.
Extra value added to the luminance formula
float luminance = 0.025 + density * phase;
The need for a hack probably highlights some issues with my code, most likely due to how I use the resulting light energy value returned by the Raymarching loop or an absorption coefficient that's a
bit too high. Not sure. If you find any blatantly wrong assumptions in my code, please let me know so I can make the necessary edits.
Some other optimizations are possible to make the cloud look fluffier and denser, like using the Beer's Powder approximation (page 64), but it was mentioned to me that those are just used for
aesthetic reasons and are not actually physically based (I also honestly couldn't figure out how to apply it without significantly altering my MAX_STEPS, MAX_STEPS_LIGHTS, and marchSize variables, and the result was still not great).
@MaximeHeckel Note that beer-powder is a non-physical approximation. Maybe see: https://t.co/LlHxW5x5sb
We learned several ways to render cloud-like volumes with Raymarching throughout this article, and considering that a few weeks prior to that, I wasn't even able to wrap my head around the concept of
Volumetric Raymarching, I'm happy with the result and proud of myself given how complex and daunting this subject can be. I was also pleasantly surprised by the ability to apply some physics
principles and port techniques commonly used in triple-A video game productions to WebGL to achieve realistic looking clouds.
With that, I'm excited to attempt combining raymarched clouds with terrain, like the ones introduced in my Raymarching article, or even more complex challenges like rendering a planet with realistic
atmospheric scattering. Another idea I had was to build a raymarched galaxy: we can simplify it to a massive cloud in space, and some of the physics principles introduced in this article should still apply and yield beautiful renders.
I hope this article will inspire you in your own Raymarching endeavors and that it helped make this seemingly hard-to-grasp concept of Volumetric rendering a bit more welcoming.
Problem 44753. Lights Out 3 - 5x5, 6 moves
Lights Out is a logic game wherein all lights need to be turned off to complete each board. See the first problem in the series for an introduction.
This problem contains boards that each require six moves to solve. For example, if
board = [1 0 1 0 1
1 0 1 0 1]
the answer is:
moves = [1 5 11 15 21 25]
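The comments below discuss brute-force approaches; for illustration, here is a sketch of one in Python (Cody solutions are MATLAB). Because presses commute and self-cancel, trying each combination of cells exactly once is sufficient; here the board is packed into a 25-bit mask, row-major:

```python
from itertools import combinations

# Toggle mask for each of the 25 cells: a press flips the cell itself
# plus its orthogonal neighbours on the 5x5 board (row-major bit layout).
MASKS = []
for i in range(25):
    r, c = divmod(i, 5)
    m = 0
    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < 5 and 0 <= cc < 5:
            m |= 1 << (rr * 5 + cc)
    MASKS.append(m)

def solve(board, n_moves):
    """Return 1-based press indices turning every light off, or None."""
    for moves in combinations(range(25), n_moves):
        state = board
        for m in moves:
            state ^= MASKS[m]
        if state == 0:
            return [m + 1 for m in moves]
    return None
```

For six moves this scans nchoosek(25, 6) = 177,100 combinations, which is quick; at twelve moves, nchoosek(25, 12) = 5,200,300 starts to strain Cody's time limit, as the comments note.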
Prev.: 5x5, 4 moves — Next: 5x5, 8 moves
Solution Stats
47.5% Correct | 52.5% Incorrect
Problem Comments
Will problem "Lights Out 3 - 5x5, 12 moves" be created?
nchoosek(25, 12) = 5200300, not so big.
It can also be solved by brute force.
Please merge the problems "Lights Out 3 - 5x5, 3 moves", "Lights Out 3 - 5x5, 4 moves", "Lights Out 3 - 5x5, 6 moves", and "Lights Out 3 - 5x5, N moves" (not published yet), because they can be solved by the same brute-force method.
@li haitao: I understand that you might not want to see so many problems in the series, but they are purposefully building up to larger, more difficult problems in a pedagogical fashion so as to be accessible to a broad range of users.
It is rather curious that you make such comments about being able to solve these problems using brute-force methods without having solved any of them, as far as I can see. I have timed each of the
submitted problems for the three problems so far using Cody servers, and they yield the following approximate times:
#1 (3 moves): 1/4 to 1/2 second
#2 (4 moves): 1/2 to 2 seconds
#3 (6 moves): 25 to 30 seconds (this includes a modified solution from a prior problem)
As far as I am aware, the Cody servers time out at 30 seconds, so further problems will not be solvable by brute-force methods. I tested that on a planned problem with more moves using two modified
solutions from prior problems, and they do appear to time out. While it is technically possible to solve many of these (current and planned) problems by brute-force methods, it is not practically
feasible (or possible on Cody due to server time limits). The number of test cases in future problems generally increases or stays at a high number to help prevent brute-force methods from working.
I added various references to the first problem in the series as potential starting points for developing actual solvers. I would encourage you to check them out and try these first problems without
using brute-force methods that will only go so far.
@goc "#3 (6 moves): 25 to 30 seconds"
My brute-force solution runs in less than 2 seconds.
@goc I admit that it's hard to solve "12 moves" in less than 30 seconds using brute-force methods on Cody's computer (though it runs in less than 30 seconds on my computer).
I'm glad to see some elegant methods in the future.
@goc "#3 (6 moves): 25 to 30 seconds" My brute-force solution runs in less than 0.8 seconds.
@li haitao: While your solution does solve many test cases in under 0.8 seconds, the times above were referring to the total time for the whole test suite. When your times are added up for all the
test cases, the total time is over 4 seconds. While less than the total time for other solutions, it is still increasing over other problems with fewer moves. Also, as I already mentioned, future
problems will tend to have even more test cases, making it harder to solve some within the time limit. And, the final problems will most definitely not be solvable by brute-force methods.
With Alfonso's brute-force solution you can solve everything, everywhere, every time...
To be clear, I am speaking about Solution 1720010 by Alfonso (https://www.mathworks.com/matlabcentral/cody/problems/44755-lights-out-4-5x5-8-moves/solutions/1720010).
Incredible, no?
@Jean-Marie Sainthillier: Amazing. I'll have to make sure that some later problems in the series can't be brute forced, even by him, though he may still find a way. These test suites take long enough
to create as it is.
QtsPlus queueing theory software
These concepts and ideas form a strong base for the more mathematically inclined students, who can follow up with the extensive literature on probability models and queueing theory. Slide set 1, Chapter 1: an introduction to queues and queueing theory. This approach is applied to different types of problems, such as scheduling, resource allocation, and traffic flow. The second edition of An Introduction to Queueing Theory may be used as a textbook by first-year graduate students in fields such as computer science, operations research, and industrial and systems engineering, as well as in related fields such as manufacturing and communications engineering. QtsPlus4Calc is released under the open-source GNU General Public License (GPL). The five packages compared are JPQ, QTP, QtsPlus, MCQueue, and QuickQ. Erlang's works inspired engineers and mathematicians to deal with queueing problems. A collection of OpenOffice spreadsheets that solve queueing theory models is provided. We have seen that as a system gets congested, the service delay in the system increases. This work is based on the Microsoft Excel-based QtsPlus software package, the companion software for the textbook Fundamentals of Queueing Theory by Donald Gross and Carl Harris. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service. The Excel version is available for Excel 97 and above. Students in Dr. Huang's courses at GMU may make a single machine-readable copy and print a single copy of each slide for their own reference, so long as each slide contains the required statement. Mean value analysis (MVA) for single- or multi-class closed networks is among the supported models.
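The Mean Value Analysis just mentioned is compact enough to sketch. The following Python sketch (names are mine) implements exact single-class MVA for a closed product-form network, assuming each station is a load-independent queueing station with a total service demand per job:

```python
def mva(demands, n_customers):
    """Exact single-class Mean Value Analysis for a closed network.

    demands[k] is station k's total service demand per job (service time
    times visit count); returns the system throughput and the per-station
    mean queue lengths at the requested population.
    """
    queue = [0.0] * len(demands)          # mean queue lengths at population 0
    throughput = 0.0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving job sees the queue of an (n-1)-job network.
        residence = [d * (1.0 + q) for d, q in zip(demands, queue)]
        throughput = n / sum(residence)
        queue = [throughput * r for r in residence]
    return throughput, queue
```

With a single station of demand 1.0, each added customer simply queues behind the others, so the throughput saturates at 1 job per unit time while the queue grows linearly.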
Introduction to queueing theory and stochastic teletraffic. Queuerite is a complete software system for customer queue management. Queueing theory is largely used in telecommunications, the same sector in which the theory itself was born. The queuing model will calculate the optimum number of customer service points (staff) to minimize costs for your business. Basic queueing theory, M/M queues: these slides were created by Dr. Huang. Arena simulation software is used to build the screening system with three major components. A good understanding of the relationship between congestion and delay is essential for designing effective congestion control algorithms. Use the link below to share a full-text version of this article with your friends and colleagues. Chapter 2 first discusses a number of basic concepts and results from probability theory that we will use. Queuing theory examines every component of waiting in line. Thompson, Carl Harris, and Donald Gross, for Excel 97 and above. Queue management system Queuerite: online, cloud-based. See our short demo video for an overview of the Queuerite customer queue management software.
Queueing theory with applications and special consideration to emergency care: if I and J are disjoint intervals, then the events occurring in them are independent. Queuing theory provides all the tools needed for this analysis. This look at queueing theory stresses the fundamentals of the analytic modeling of queues. A queueing model is constructed so that queue lengths and waiting times can be predicted. Four times a year, downtown Raleigh hosts a food truck rodeo that includes half a mile of food trucks spread out over eleven city blocks. We are pleased to announce the availability of QtsPlus (Thompson, Harris, and Gross), software for solving a wide range of queueing models. Instructions on how to use the queuing theory calculator are given below. Fundamentals of Queueing Theory is available as an ebook (PDF, EPUB). Collectively these spreadsheets are known as QtsPlus4Calc. It uses queuing models to represent the various types of queuing systems that arise in practice. Furthermore, in order to test the sensitivity to an insufficiency of health care providers called to the ED for support from the institute, we conducted a sensitivity analysis using half the mean treatment rate to test the performance. Chapter 6: queueing networks modelling software (QNs).
Queuing theory is the mathematical study of waiting lines, or queues. The latest is an enhanced version of the software, called QtsPlus, by James M. Thompson. Queueing networks modeling software for manufacturing: installation, documentation, hardware requirements, and capabilities of each package are discussed in detail, conveniently using some tables. Models and applications: applying Little's law, the mean waiting time W and the mean response time are given by the corresponding equations. QtsPlus software (2008), Wiley Series in Probability and Statistics. I'll start off just talking about queuing theory, an introductory class on the topic. This software is based on the textbook of Gross et al. Douglas McGregor, an American social psychologist, proposed his famous Theory X and Theory Y models in his book The Human Side of Enterprise (1960). Queueing theory software for Calc can be downloaded for free. Exact asymptotic analysis of single- or multi-class, product-form open queueing networks (Jackson networks or BCMP networks) is supported. This work is based on the Microsoft Excel-based QtsPlus software package, the companion software for the textbook Fundamentals of Queueing Theory by Donald Gross and Carl Harris; the QtsPlus4Calc collection of spreadsheets will.
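Little's law mentioned above ties these quantities together: L = λW, i.e. the mean number in the system equals the arrival rate times the mean time in system. A sketch of the standard M/M/1 formulas (stable only when λ < μ; names are mine):

```python
def mm1(lam, mu):
    """Steady-state M/M/1 metrics for arrival rate lam and service rate mu."""
    if lam >= mu:
        raise ValueError("unstable queue: need lam < mu")
    rho = lam / mu                  # server utilization
    W = 1.0 / (mu - lam)            # mean time in system (response time)
    L = rho / (1.0 - rho)           # mean number in system
    Wq = rho / (mu - lam)           # mean wait before service begins
    Lq = rho * rho / (1.0 - rho)    # mean number waiting in queue
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}
```

Note that `L == lam * W` and `Lq == lam * Wq` hold exactly, which is Little's law applied to the whole system and to the waiting line alone.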
Queueing theory is the mathematical study of waiting lines. Queueing theory (QT) is a classical operations research methodology. Applications of queuing theory models and simulation are discussed, along with queueing theory applications, articles, and video tutorials.
The open queueing network analysis was performed using the queueing theory software QtS. The definitive guide to queueing theory and its practical applications features numerous real-world examples of scientific, engineering, and business applications. Thoroughly updated and expanded to reflect the latest developments in the field, Fundamentals of Queueing Theory, Fifth Edition presents the statistical principles and processes involved in the analysis of the probabilistic nature of queues. To demonstrate, let's use a queueing theory model and discrete-event simulation to investigate a real-life system that has become quite popular in the NC Triangle area in recent years. Please find below a link that leads to an online queueing theory software tool. Instructions for downloading the software are found at the end of the appendix. The software is available as self-extracting Windows zip files for Excel and for Quattro Pro 8, for Windows 95, 98, and 2000. The following instructions are meant for the queuing theory calculator. The idea is basically this: if you have a queue (and this is the schematic most textbooks use), you'll have some server, something that handles incoming work. New examples are now included along with problems that incorporate the QtsPlus software, which is freely available via the book's related web site: QtS, queueing theory software, for use in conjunction with the textbook. Two modern introductory texts and two really nice classic books are listed in the references. The study of queueing theory requires some background in probability theory. MathWorks is the leading developer of mathematical computing software for engineers and scientists. Agner Krarup Erlang (1878-1929), the Danish telecommunication engineer, started applying principles of queuing theory in the area of telecommunications.
If you are familiar with queueing theory and you want to make fast calculations, then this guide can help you greatly. As a consequence, telecommunication engineers understand the theory well. Queueing theory is the mathematical study of waiting lines, or queues. The collected data were entered into the Arena simulation software. Queueing theory (QT) has been suggested as a way to evaluate such systems. The queueing theory software is written as Excel spreadsheets for solving a wide range of queueing models and other probability models (Markov chains, birth-death processes). The OpenOffice version is supported by OpenOffice Calc. The models enable finding an appropriate balance between the cost of service and the amount of waiting. In this paper, five statistical software packages for queueing theory are compared. This project provides a set of OpenOffice Calc spreadsheets that solve various queueing models. Queueing Theory Calculator is a simple yet powerful tool to process queueing model calculations and Erlang formulas for queues. The package currently includes the algorithms listed below. Which one is the best software for queue simulation?
Software performance engineering is one of the computer science branches that makes use of queueing theory, for example to analytically validate test campaign results or to find bottlenecks. Queuing theory examines every component of waiting in line to be served, including the arrival process. Queueing networks modeling software for manufacturing, and queueing theory software for Calc, are also available. Queueing theory software overview: we are pleased to announce the availability of QtsPlus (Thompson, Harris, and Gross), software for solving a wide range of queueing models. Queuing theory is the study of waiting in all these various situations. Fundamentals of Queueing Theory, 5th Edition, Wiley. Queueing theory, the mathematical analysis of how work moves through a system with queues, was developed to understand and improve throughput in telecommunication systems: systems with lots of variability and randomness, similar to product development. The queueing package is a software package for queueing networks and Markov chains analysis written in GNU Octave. For the current edition, there are two versions of the QtsPlus software. A list of queueing theory software is maintained by the University of Windsor.
Our software allows your business to optimize the process for customers as they line up and wait for their turn to be served, enhancing your level of customer service. Could we employ queueing theory to improve efficiency? Queueing theory is a mathematical method of analyzing the congestion and delays of waiting in line. The package features Excel and Quattro software that allows greater flexibility in understanding the nature, sensitivities, and responses of waiting-line systems to parameter and environmental changes. Queueing theory has its origins in research by Erlang. QtsPlus version 3 is an updated version that contains new models reflected in the 4th edition of the textbook, and it provides the following improvements over the QtS standard.
A Semi-analytical Approach for Orbit Determination based on Extended Kalman Filter
Author content
All content in this area was uploaded by Bryan Cazabonne on Sep 10, 2021
Content may be subject to copyright.
Bryan Cazabonne, Julie Bayard, Maxime Journot, and Paul J. Cefola
The paper presents an open-source orbit determination application based on the Draper Semi-analytical Satellite Theory (DSST) and a recursive filter, the Extended Semi-analytical Kalman Filter (ESKF). The ESKF reconciles the conflicting goals of the DSST perturbation theory (i.e., large step size) and the Extended Kalman Filter (EKF) theory (i.e., re-initialization at each measurement epoch). Validation of the Orekit ESKF is demonstrated using both simulated data and real data from CDDIS (Crustal Dynamics Data Information System). The ESKF results are compared with those obtained by the GTDS ESKF.
Orbit determination is used to estimate a satellite's state vector from observed measurements. The state vector may be position and velocity or an orbital element set. The element set may be osculating or mean. The state vector may also include dynamical parameters such as the drag coefficient and the satellite's reflection coefficient. Space agencies generally use the numerical method to meet their orbit determination needs. The numerical method can be very precise with sufficient force models, but it requires significant computation time. To get around the computation time issue, analytical orbit determination methods are possible. Brouwer's theory is the basis of most of the analytical orbit determination methods. The USAF SGP4 theory, which is used to generate the NORAD TLEs, employs the Brouwer theory together with a power-law model for the atmospheric density. However, operational analytical orbit determination methods may not meet accuracy requirements.

Semi-analytical techniques combine the accuracy of numerical propagation and the characteristic speed of analytical propagation. One early semi-analytical orbit determination method is the ROAD algorithm due to Wagner. In the ROAD algorithm, the dynamical model is the
Bryan Cazabonne is Spaceflight Mechanics Engineer at CS GROUP, 6 Rue Brindejonc des Moulinais, Toulouse,
France, email: bryan.cazabonne@csgroup.eu.
Julie Bayard is Spaceflight Mechanics Engineer at CS GROUP, 6 Rue Brindejonc des Moulinais, Toulouse, France,
email: julie.bayard@csgroup.eu.
Maxime Journot is Spaceflight Mechanics Engineer at CS GROUP, 6 Rue Brindejonc des Moulinais, Toulouse,
France, email: maxime.journot@csgroup.eu.
Paul J. Cefola is Research Scientist, Department of Mechanical & Aerospace Engineering, University at Buffalo
(SUNY), Amherst, NY, USA, email: paulcefo@buffalo.edu, paul.cefola@gmail.com. Fellow AAS. Also Consultant in
Aerospace Systems, Spaceflight Mechanics, and Astrodynamics, Vineyard Haven, MA, USA.
mean element equations of motion. In 1977, the Draper Laboratory proposed the extension of its GTDS (Goddard Trajectory Determination System) semi-analytical orbit propagator to include detailed short-period motion models and improved partial derivative models. The current study focuses on the Draper Semi-analytical Satellite Theory (DSST), which is flexible, complete, and applicable to all orbit types.

There are different implementations of DSST orbit determination. In 2021, a complete open-source implementation using a batch least-squares algorithm was included in the Orekit space flight library. During that study, the calculation of the state transition matrix based on automatic differentiation was presented and strongly validated. The current study focuses on the extension of Orekit DSST orbit determination capabilities by adding a recursive filter theory, the Extended Semi-analytical Kalman Filter (ESKF). The classical (i.e., based on numerical propagation) Extended Kalman Filter (EKF) algorithm is already available in the library. However, the re-initialization of the EKF underlying orbit propagator at each measurement epoch is a major constraint for a semi-analytical satellite theory. The ESKF algorithm reconciles the conflicting goals of the DSST perturbation theory (i.e., large step size) and the EKF theory (i.e., re-initialization at each measurement epoch).

The roadmap of the paper is to first introduce Orekit's implementation of the EKF. A general introduction to the concept of the ESKF algorithm is then presented. Particular attention is drawn to the operations on both the integration and observation grids. Validation of the Orekit ESKF is demonstrated under orbit determination conditions using both simulated data and real data from the CDDIS (Crustal Dynamics Data Information System) website. Orekit ESKF orbit determination results are compared with those obtained by the reference GTDS ESKF. Conclusion and Future Work end the paper.
Orekit is an open-source space flight dynamics library. It is written in Java and provides low-level elements for the development of flight dynamics applications. Since 2008, Orekit has been distributed under the Apache License, version 2.0. Orekit provides various functionalities related to coordinate transformations, reading and writing of standardized formats, orbit propagation, and orbit determination using a batch least-squares algorithm and recursive filters.
The Extended Kalman Filter
The Extended Kalman Filter (EKF) is an extremely useful algorithm for problems based on continuous data streams. It is based on a recursive process where the estimated covariance matrix $\hat{P}_{k-1}$ and satellite state $\hat{x}_{k-1}$ calculated from the previous observation are used to estimate the new satellite state for the current observation. The EKF algorithm is composed of two steps, a prediction step and a correction step. During the prediction step, the predicted covariance matrix and satellite state are calculated following Equations (1) and (2):

$P_k^- = \Phi_k \hat{P}_{k-1} \Phi_k^T + Q_k$    (1)

$x_k^- = f(\hat{x}_{k-1})$    (2)

where $\Phi_k$ is the state transition matrix and $Q_k$ the process noise matrix. In Equation (2), the notation $f(\cdot)$ is used to indicate one choice of orbit propagator.

During the correction step, the predicted covariance and satellite state are updated using the satellite's observation. The calculation of these two elements is done by Equations (3) to (5):

$K_k = P_k^- H_k^T \left( H_k P_k^- H_k^T + R_k \right)^{-1}$    (3)

$\hat{P}_k = \left( I - K_k H_k \right) P_k^-$    (4)

$\hat{x}_k = x_k^- + K_k \left( z_k - h(x_k^-) \right)$    (5)

where $R_k$ is the observation covariance matrix, $H_k$ the observation partials matrix, and $K_k$ the Kalman gain.

In the Orekit library, the updated covariance matrix $\hat{P}_k$ is not calculated using Equation (4). Orekit uses the Joseph algorithm, as in Equation (6). The Joseph algorithm is equivalent to the classical formula but guarantees that the output stays symmetric:

$\hat{P}_k = \left( I - K_k H_k \right) P_k^- \left( I - K_k H_k \right)^T + K_k R_k K_k^T$    (6)

In Equations (1) to (6), the calculation of the observation partials matrix and the state transition matrix is completed using Equations (7) and (8). The Orekit library uses the automatic differentiation technique to calculate all the necessary partial derivatives:

$H_k = \dfrac{\partial h}{\partial x}\Big|_{x_k^-}$    (7)

$\Phi_k = \dfrac{\partial x_k^-}{\partial \hat{x}_{k-1}}$    (8)

where $z_k$ is an observed measurement at epoch $t_k$. Figure 1 shows the calling hierarchy of the Orekit EKF orbit determination. The figure presents the different steps of calculation and the integration of the previous equations in the process.
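As an illustration of the predict/correct cycle, a scalar (one-dimensional) version can be sketched in Python; the function and variable names are mine, not Orekit's:

```python
def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One predict+correct cycle of a scalar extended Kalman filter.

    f and h are the (possibly nonlinear) propagation and measurement models;
    F and H are their derivatives evaluated at the current estimate.
    """
    # Prediction: propagate the state and covariance.
    x_pred = f(x)
    P_pred = F * P * F + Q
    # Correction: Kalman gain and state update from the residual z - h(x_pred).
    S = H * P_pred * H + R                  # innovation covariance
    K = P_pred * H / S
    x_new = x_pred + K * (z - h(x_pred))
    # Joseph form: equivalent to (1 - K H) * P_pred, but it keeps the
    # covariance symmetric (and non-negative) in the matrix case.
    A = 1.0 - K * H
    P_new = A * P_pred * A + K * R * K
    return x_new, P_new
```

With a linear identity model, unit prior covariance, zero process noise, and unit measurement noise, a measurement of 2 splits the difference: the filter moves halfway to the observation and the covariance halves.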
The Draper Semi-analytical Satellite Theory
The Draper Semi-analytical Satellite Theory (DSST) is a mean element satellite theory expressed in non-singular equinoctial elements. It divides the computation of the osculating orbital elements into two contributions: the mean orbital elements and the short-periodic terms. Both models are developed in the equinoctial orbital elements via the Method of Averaging and computed using a combination of analytical and numerical techniques.
the motion due to conservative perturbations using the Lagrangian Variation Of Parameters
formalism inEquation (9). The Gaussian Variation Of Parameters formalism in Equation (10)is
used to model non-conservative perturbations.
satellite's velocity vector
accelerations caused by the non-conservative perturbations
osculating equinoctialelements
disturbing potential for the conservative forces
In DSST theory, the equations of motion for the mean equinoctial elements can be written as
in Equation (11) and (12).
with i = 1, 2, ..., 5
mean mean motion
mean longitude
functions of the slowly varying mean elements
denotes the small magnitude of the element
In Equations (11) and (12), the functions of the slowly varying mean elements for the different
orbital perturbations can be found in McClain.
Finally, the transformation from mean equinoctial elements to osculating equinoctial elements
is calculated using Equation (13).
with i = 1, 2, ..., 6
short-period function, 2 periodic
Because the DSST orbit propagator uses a large step size to perform the numerical integration of the equations of motion for the mean equinoctial elements (e.g., half a day for GEO satellites), it is not suitable for a classical EKF orbit determination. The EKF algorithm needs to re-initialize the orbital state at each observation epoch. However, the time difference between two observations is usually much smaller than the DSST step size. In order to take advantage of the DSST theory within a recursive filter orbit determination, Steve Taylor designed the Extended Semi-analytical Kalman Filter in 1981.

The Extended Semi-analytical Kalman Filter (ESKF) reconciles the conflicting goals of the DSST perturbation theory and the EKF theory. Steve Taylor used the concept of the mean equinoctial elements integration grid. Therefore, the nominal orbital state is updated only at the integration grid points.

The following procedures, mainly based on the Taylor thesis, describe the operations to perform on both the integration grid and the observation grid. It will be assumed that both the equinoctial elements and the dynamical parameters are estimated. For simplicity, estimation of the measurement parameters (e.g., station biases) is ignored.
Initialization of the Extended Semi-analytical Kalman Filter

1. Set the initial covariance matrix, the initial state, and the initial estimated filter corrections.
2. Set the state transition matrix to the identity matrix, and the partial derivatives of the mean equinoctial elements with respect to the dynamical parameters to the zero matrix.
3. Initialize the short-periodic functions of all the involved forces.

Operations on the Integration Grid

1. Update the nominal state.
2. Integrate to obtain the nominal mean equinoctial elements, the state transition matrix, and the partial derivatives of the mean equinoctial elements with respect to the dynamical parameters.
3. Calculate the Fourier coefficients, and update the short-periodic terms and their partial derivatives with respect to the Fourier coefficients.
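The short-periodic contribution recovered from those Fourier coefficients can be sketched as follows (a hypothetical container for the coefficients; the real Orekit classes store them per force model):

```python
import math

def short_periodics(coeffs, lam):
    """Evaluate a short-period correction eta(lambda) from Fourier coefficients.

    coeffs maps a harmonic index j to a pair (C_j, S_j); the result is
    2*pi-periodic in the mean longitude lam, as in Equation (13).
    """
    return sum(c * math.cos(j * lam) + s * math.sin(j * lam)
               for j, (c, s) in coeffs.items())
```

Because only the coefficients change from one integration step to the next, the filter can interpolate them between grid points and evaluate the series cheaply at every observation epoch.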
Operations on the Observation Grid

1. Obtain the new observation at time $t_k$.
2. Interpolate to obtain the nominal mean equinoctial elements, the state transition matrix, and the partial derivatives of the mean equinoctial elements with respect to the dynamical parameters, all at time $t_k$.
3. Interpolate for the short-periodic coefficients and calculate the short-periodic functions at time $t_k$.
4. Calculate the transition matrices using Equations (14) and (15). These matrices contain the partial derivatives of the predicted parameters with respect to the ones at the epoch of the previous observation.
5. Predict the filter corrections using the corrected corrections calculated from the previous observation.
6. Calculate the predicted osculating equinoctial elements, as in Equation (19).
7. Calculate the predicted measurement, its partial derivatives, and the observation residual.
8. Calculate the observation partials matrix, as in Equation (21). The matrices in Equations (20) and (22) represent the partial derivatives of the short-period motion; they were introduced by Andrew Green.
9. Calculate the predicted covariance matrix using the estimated covariance matrix calculated from the previous observation, as in Equation (23). In Equation (23), the process noise matrix is still the user-defined one.
10. Perform the correction step of the filter using Equations (24) to (26). The correction step of the ESKF is very close to that of the EKF. As seen in Equation (25), the corrected covariance matrix is also calculated using the Joseph algorithm, and the observation covariance matrix plays the same role as in the EKF.
11. Calculate the corrected measurement and residual using the corrected osculating elements given in Equation (27).

The ESKF continues with step 1 of the observation grid until all observations have been processed or until the next integration step is encountered. If the next integration step is encountered, the operations on the integration grid are performed. Figure 2 shows the calling hierarchy of the Orekit ESKF orbit determination.
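The interplay between the two grids can be sketched as a scheduling loop (a toy illustration with made-up names; in Orekit this logic lives in the measurement handling around the DSST propagator):

```python
def eskf_schedule(obs_times, t0, step):
    """Interleave integration-grid updates with observation processing.

    Yields ('grid', t) whenever the propagator must advance to the next
    integration node, and ('obs', t) for each measurement interpolated
    inside the current step -- the pattern the ESKF relies on.
    """
    t_grid = t0
    for t_obs in sorted(obs_times):
        # Advance the (large-step) integration grid only as far as needed.
        while t_grid + step <= t_obs:
            t_grid += step
            yield ("grid", t_grid)
        # The observation itself is served by interpolation, not re-integration.
        yield ("obs", t_obs)
```

The point of the ESKF is visible in the output: several observations can fall inside a single integration step, and conversely several grid steps can pass between sparse observations, without ever re-initializing the propagator at a measurement epoch.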
Figure 3 presents a Unified Modeling Language (UML) diagram of the implementation of the ESKF in Orekit. The main Java class in this diagram is the SemiAnalyticalKalmanEstimator class. Orekit users will use this class to execute the ESKF orbit determination. This class is built from the SemiAnalyticalKalmanEstimatorBuilder class. The choice between the operations on the Integration Grid and the Observation Grid is handled by the ESKFMeasurementHandler class. This class is added to the user-defined DSSTPropagator in order to highlight integration steps. The link between the ExtendedKalmanEstimator class of the Hipparchus library and Orekit is made by the SemiAnalyticalKalmanModel class. This class is very important because it performs most of the steps presented before. It also performs the initialization steps in the class constructor.

The Green's matrices are calculated by automatic differentiation using the DSSTJacobianMapper class. This class is built from the DSSTPartialDerivativesEquations class. The purpose of the DSSTPartialDerivativesEquations class is also to calculate these matrices using the variational equations. In the Orekit library, the variational equations are integrated simultaneously with the equations of motion by the DSSTPropagator. Generally, any additional equation (i.e., additional to the main equations of motion) in Orekit can be integrated simultaneously with the equations of motion by the DSSTPropagator if it implements Orekit's Java interface AdditionalEquations. Therefore, the two matrices are interpolated when the nominal mean equinoctial elements are also interpolated.
Validation against simulated data
The Orekit implementation of the ESKF is first validated using simulated data. The epoch
mean orbital elements used for testing the Orekit ESKF are given in Table 1. The mean orbital
elements set is given in EME2000 coordinates.
Table 1. Epoch mean orbital elements used for ESKF validation against simulated data
(elements include the right ascension of the ascending node).
Details about the test cases used to validate the Orekit ESKF are given in Table 2. Three test
cases are used: one two-body case, and two perturbed cases.
Table 2. Test cases for ESKF validation against simulated data (rows: the two-body case and two perturbed cases).
For each test case, a one-day forward propagation of the epoch mean orbital elements is done
in order to generate osculated pseudo-range measurements. The three test cases have 445
simulated measurements from two stations. The station coordinates are given in Table 3.
Table 3. Station coordinates used for ESKF validation against simulated data.
The residuals between the simulated and the estimated values are calculated for each
measurement. For Cases 2 and 3, an offset of 1.2 meters is added to the initial value of the semi-
major axis in order to start the estimation process with a small difference compared to the
reference epoch mean orbital elements. The objective is to test the ability of the Orekit ESKF to
estimate a correct orbit. This 1.2-meter value corresponds to the value already used for the
validation of the numerical EKF algorithm against simulated data in Orekit. The residual mean
values and standard deviations are summarized in Table 4.
Table 4. Mean values of the measurement residuals for each test case
(rows: mean residual value in meters; standard deviation in meters).
Figure 4 to Figure 6 highlight the validation of the Orekit ESKF against simulated
measurements. The mean residual value is about 10^-8 meters for Case 1 and about 10^-3 meters
for Cases 2 and 3. The difference between Case 1 and Cases 2 and 3 is expected: the two last test
cases are perturbed. In other words, they introduce the impact of the short-periodic terms in the
ESKF execution. The 1.2-meter offset added to the initial value of the semi-major axis has a
significant impact on the accuracy of the residuals.
Table 5 shows the difference between the reference and the estimated positions for the last
measurement epoch. The reference position corresponds to the propagated mean orbital elements
given in Table 1 to the last measurement epoch.
Table 5. Position difference between the reference and the estimated orbit
(rows: initial position difference in meters; final position difference in meters).
The initial difference between the epoch mean elements and the first orbit used by the
estimator is about 1.05 meters. It corresponds to the 1.2 meters offset on the semi-major axis. At
the end of the estimation process, the difference between the reference position and the estimated
position is about 4 centimeters. This result highlights the ability of the Orekit ESKF to improve
the knowledge of the orbit during the estimation process.
Validation against real data
The Lageos 2 satellite was chosen for demonstrating Orekit ESKF validation. The selection of
this satellite was influenced by the availability of the satellite’s ephemeris in the CDDIS. Five
days of predicted Lageos 2 positions are used as measurements in the orbit determination process.
The predicted positions are taken from a Consolidated Prediction File (CPF) produced by the
NERC Space Geodesy Facility. Joanna Najder compared the accuracy of Lageos 2 predicted
positions in CPF with the precise orbits contained in the Extended Standard Product - 3 (SP3)
files. She highlighted a mean error of 0.5-1 meter for Lageos 2 prediction files.
Therefore, it is
interesting to use Lageos 2 predicted positions for the validation of the Orekit ESKF against real
data. The orbit determination is carried out with 20x20 geo-potential terms, lunar-solar point
masses, and solar radiation pressure. Lageos 2 satellite altitude allows neglecting atmospheric
effects on the satellite orbit. A constant process noise is used. The six equinoctial orbit elements
are estimated during the orbit determination process.
Figure 7 shows the measurement residuals obtained by the Orekit ESKF orbit determination.
They correspond to the differences between the observed and the predicted satellite’s positions
calculated during the Step 7 of the Operations on the Observation Grid. Measurement residuals
are very close to those obtained by GTDS as presented in Figure 9. The amplitude of the residuals
between the two methods is similar (i.e., between ±5 meters). For the first day of observations,
the amplitude of the GTDS ESKF residuals is greater than that of the Orekit ESKF; an initial
error is added in the GTDS case while no error is added for Orekit. In addition, the fact that the Lageos 2 data
are predicted by a numerical orbit propagator contributes to the small trend in the error growth
over the five days in Orekit ESKF. The statistics on the predicted measurement residuals obtained
by the Orekit ESKF are presented in Table 6.
Table 6. Statistics on Orekit ESKF residuals (observed minus predicted);
rows: mean residual value (meters), standard deviation (meters).
Figure 8 displays the measurement residuals between the observed and the corrected satellite's
positions calculated during Step 11 of the Operations on the Observation Grid. This figure
highlights the significant contribution of the correction step of the Orekit ESKF to improve the
estimation of the orbit. The statistics on the corrected measurement residuals are presented in
Table 7.
Table 7. Statistics on Orekit ESKF residuals (observed minus corrected);
rows: mean residual value (meters), standard deviation (meters).
The results highlight the validation of the Orekit ESKF against real measurements. They show
that the Orekit ESKF is able to estimate accurate satellite positions. The mean residual value of
each coordinate is about 10^-5 meters and the standard deviation is about 4 millimeters. The period
of the sinusoidal effect observed in Figure 8 is equal to the orbital period. The statistics on the
corrected measurement residuals are considerably better than the statistics on the predicted
measurement residuals. This demonstrates again the significant impact of the correction step of
the ESKF.
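The large gap between the predicted residuals (Table 6) and the corrected residuals (Table 7) follows directly from the measurement update. A scalar sketch (with assumed numbers, H = 1) shows that the correction shrinks the residual by the factor R/(P + R):

```python
# One scalar measurement update with H = 1 (assumed illustrative values)
x_pred, P, R = 10.0, 4.0, 1.0
z = 12.0                                  # observation

residual_predicted = z - x_pred           # observed minus predicted: 2.0
K = P / (P + R)                           # Kalman gain = 0.8
x_corr = x_pred + K * residual_predicted  # corrected state: 11.6
residual_corrected = z - x_corr           # observed minus corrected: 0.4

# The correction step shrinks the residual by the factor (1 - K) = R/(P + R)
assert abs(residual_corrected) < abs(residual_predicted)
assert abs(residual_corrected - (1 - K) * residual_predicted) < 1e-12
```

When the state covariance P is large relative to the measurement noise R, as during convergence, the corrected residual is a small fraction of the predicted one, which is exactly the pattern seen between the two tables.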
Results demonstrate the validation of the Orekit ESKF against both simulated and real data.
First, the measurement residuals for the three simulated test cases show the ability of the Orekit
ESKF to perform an accurate orbit determination based on generated data. These results also
highlight the ability of the Orekit ESKF to improve the knowledge of the orbit during the
estimation process. The validation against real data shows the consistency between the Orekit
ESKF and GTDS ESKF implementations. This study offers an improvement compared to the Taylor
thesis: Equation (27) is a new equation highlighting the contribution of the correction step of
the ESKF. Finally, the validation against the five days of predicted positions for the Lageos 2
satellite demonstrates the meter-level agreement between the Orekit DSST and the real world.
There are several areas in which we intend to improve the capabilities of the Orekit ESKF. In
particular, we would like to extend the validation of the Orekit ESKF against real satellite data
from the CDDIS website. Because the Lageos-2 geometry is spherical, we would like to validate the
Orekit ESKF using data from satellites with more complex geometry (e.g., a box-and-solar-array
spacecraft model). In addition, we would like to test the performance of the Orekit ESKF with
orbits perturbed by atmospheric drag (e.g., the CryoSat-2 orbit).
Another improvement would be the implementation of the ESKF for multiple satellites.
Indeed, the current implementation is only meant for a single satellite orbit determination.
However, with the development of satellite constellations and multi-satellite missions, an
implementation of multi-satellite orbit determination is interesting.
Finally, we would like to improve the capabilities of Orekit orbit determination by adding new
recursive filters. The Backward Smoothing Extended Kalman Filter (BSEKF) and the Backward
Smoothing Extended Semi-analytical Kalman Filter (BSESKF) are recursive filters that show
more reliable convergence and robustness than the EKF and ESKF, respectively.15 The
implementation of a semi-analytical form of the Unscented Kalman Filter (UKF) is also an
interesting challenge that we would like to address.
The authors would like to acknowledge Mr. Luc Maisonobe and Mr. Pascal Parraud, both
from CS GROUP, France. Discussions with them provided a valuable help to improve the
capabilities of Orekit DSST orbit determination.
Paul Cefola would like to acknowledge technical discussions with Prof. Juan Felix San Juan,
University of Rioja, Logrono, Spain, Dr. Ronald J. Proulx, Newton, Massachusetts, Dr. Srinivas
Setty, Munich, Germany, Mr. Zach Folcik, MIT Lincoln Laboratory, Lexington, Massachusetts,
Dr. Jim Schatzman, Augustus Aerospace Company, Lone Tree, Colorado, and Mr. Jacob
Stratford, Brigham Young University, Provo, Utah. Paul Cefola would also like to acknowledge
ongoing discussions with Mr. Kye Howell, Mr. Brian Athearn, and Ms. Prudence Athearn Levy,
all of Martha’s Vineyard, Massachusetts.
Figure 1. Orekit Extended Kalman Filter orbit determination principle.
Figure 2. Orekit Extended Semi-analytical Kalman Filter orbit determination principle.
Figure 3. UML diagram of Orekit implementation of the Extended Semi-analytical Kalman Filter.
Figure 4. Case 1: Residuals between simulated (i.e., observed) and estimated range measurements
for Orekit Extended Semi-analytical Kalman Filter validation.
Figure 5. Case 2: Residuals between simulated (i.e., observed) and estimated range measurements
for Orekit Extended Semi-analytical Kalman Filter validation.
Figure 6. Case 3: Residuals between simulated (i.e., observed) and estimated range measurements
for Orekit Extended Semi-analytical Kalman Filter validation.
Figure 7. Lageos-2 Orekit ESKF ECI measurement residuals between observed and predicted
satellite’s positions. Predicted positions are calculated during Step 7 of the operations on the
observation grid.
Figure 8. Lageos-2 Orekit ESKF ECI measurement residuals between observed and corrected
satellite’s positions. Corrected positions are calculated during Step 11 of the operations on the
observation grid.
Figure 9. Lageos-2 GTDS DSST ESKF ECEF Measurement Residuals (GGM01S 50x50, Lunar
Solar Point Masses, SRP, SET, J2000 Integration Coordinate System, DSST Short-period model:
SPGRVFRC set to complete model, SRP short-period motion, Short-Period J2 partials) (position
differences are in meters and velocity differences are in cm/sec).
Brouwer D. Solution of Problem of Artificial Satellite Theory without Drag, Astronomical J., Vol. 64, No. 1274, pp.
378-397, November 1959.
Lane M. H., and Cranford K. H., An Improved Analytical Drag Theory for the Artificial Satellite Problem, AIAA pre-
print 69-925, AIAA/AAS Astrodynamics Specialist Conference, Princeton, New Jersey, August 20-22, 1969.
Vallado D. A., and Crawford P., SGP4 orbit determination, AIAA Paper 2008-6770, AIAA/AAS Astrodynamics
Specialist Conference and Exhibit, Honolulu, Hawaii, August 18-21, 2008.
Wagner C. A., Earth Zonal Harmonics from Rapid Numerical Analysis of Long Satellite Arc, NASA Goddard Space
Flight Center pre-print X-553-72-341, August, 1972 (also NASA-TM-X-66039).
Cefola P. J., et al, Demonstration of the Semi-analytical Satellite Theory Approach to Improving Orbit Determination,
C. S. Draper Laboratory Technical Proposal 7-167, September, 1977.
Cefola P. J., Sabol C., Hill K., and Nishimoto D., Demonstration of the DSST State Transition Matrix Time-Update
Properties Using the Linux GTDS Program, Proceedings of the Advanced Maui Optical and Space Surveillance
Technologies Conference, Maui, Hawaii, 2011.
Cefola P. J., Long A. C., and Holloway G., The long-term prediction of artificial satellite orbits, AIAA Paper 74-170,
12th Aerospace Science Meeting, Washington, DC, January 30 - February 1, 1974.
McClain W. D., A recursive formulated first-order semianalytic artificial satellite theory based on the generalized
method of averaging (the blue book), Computer Sciences Corporation CSC/TR-77/6010 [in 1992 McClain updated the
blue book], Volume 1, 1977.
Setty S., Cefola P. J., Montenbruck O., and Fiedler H., Application of semi-analytical satellite theory orbit propagator
to orbit determination for space object catalog maintenance, Advances in Space Research, Vol. 57, No. 10, pp. 2218-
2233, 2016.
San-Juan J. F., López R., Suanes R., Pérez I., Setty S., and Cefola P. J., Migration of the DSST Standalone to C/C++,
AAS paper 17-369, Advances in the Astronautical Sciences, Vol. 160, pp. 2419-2437, 2017.
Folcik Z., and Cefola P. J., Very Long Arc Timing Coefficient and Solar Lunar Planetary Ephemeris Files and Appli-
cation, AAS Paper 19-401, 29th AAS/AIAA Space Flight Mechanics Meeting, Ka’anapali, HI, January 13-17, 2019.
Cazabonne B., and Cefola P. J., Towards Accurate Orbit Determination using Semi-analytical Satellite Theory , 31st
AAS/AIAA Space Flight Mechanics Meeting, Virtual, February 1-4, 2021.
Taylor S. P., Semi-analytical Satellite Theory and Sequential Estimation, Master of Science Thesis, Department of
Mechanical Engineering, MIT, September, 1981.
Wagner E. A., Application of the Extended Semianalytical Kalman Filter to Synchronous Orbits, Master of Science
Thesis, Department of Aeronautics and Astronautics, MIT, June, 1983.
Folcik Z., Orbit Determination Using Modern Filters/Smoothers and Continuous Thrust Modeling, Master of Sci-
ence Thesis, Department of Aeronautics and Astronautics, MIT, June, 2008.
Noll C., The Crustal Dynamics Data Information System: A resource to support scientific analysis using space geod-
esy, Advances in Space Research, Vol. 45, Issue 12, pp. 1421-1440, June 2010.
Maisonobe L., Pommier V., and Parraud P., Orekit: an open-source library for operational flight dynamics
applications, Proceedings of the 4th International Conference of Astrodynamics Tools and Techniques, Spain, April,
Maisonobe L., Cefola P. J., Frouvelle N., Herbinière S., Laffont F. X., Lizy-Destrez S., and Neidhart T., Open
governance of the Orekit space flight dynamics library, Proceedings of the 5th International Conference of
Astrodynamics Tools and Techniques, 2012.
Bucy R. S., and Joseph P. D., Filtering for Stochastic Processes with Applications to Guidance, Providence, RI:
AMS Chelsea Publishing, 2nd Edition, pp. 55-57, 2005.
Kalman D., Doubly recursive multivariate automatic differentiation, Mathematics magazine, Vol. 75, No. 3, pp. 187-
202, 2002.
Griewank A., and Walther A., Evaluating derivatives: principles and techniques of algorithmic differentiation, Soci-
ety for Industrial and Applied Mathematics, 2008.
Cefola P. J., Equinoctial orbit elements –application to artificial satellite orbits, AIAA Paper 72-937, AIAA/AAS
Astrodynamics Conference, Palo Alto, CA, September 11-12, 1972.
McClain W. D., A recursive formulated first-order semianalytic artificial satellite theory based on the generalized
method of averaging (the blue book), Computer Sciences Corporation CSC/TR-78/6001 [in 1992 McClain updated the
blue book], Volume 2, 1978.
Green A. J., Orbit determination and Prediction Processes for Low Altitude Satellites, Ph.D Thesis, Department of
Aeronautics and Astronautics, MIT, December, 1979.
Najder J., and Sośnica K., Quality of Orbit Predictions for Satellites Tracked by SLR Station, Remote Sensing, Vol.
13, No. 7, p. 1377, 2021.
Schrama E., Precision orbit determination performance for CryoSat-2, Advances in Space Research, Vol. 61, No. 1,
pp. 235-247, 2018.
Psiaki M., Backward Smoothing Extended Kalman Filter, Journal of Guidance, Control, and Dynamics, Vol. 28, No.
5, pp. 885-894, 2005.
Van Der Merwe R., and Wan E. A., The square-root unscented Kalman filter for state and parameter-estimation,
IEEE international conference on acoustics, speech, and signal processing, (Cat. No. 01CH37221), pp. 3461-3464,
Woodburn J., and Coppola V., Analysis of Relative Merits of Unscented and Extended Kalman Filter in Orbit
Determination, Reprinted from Astrodynamics 1999, Advances in the Astronautical Sciences, Vol. 171, 2019.
Analysis on natural characteristics of four-stage main transmission system in three-engine helicopter
The vibration model of the four-stage main transmission system in a helicopter was established through the lumped-mass method. In the model, many factors, including the time-varying meshing
stiffness and the torsional stiffness of the gear shafts, were considered. The differential equations of the system were solved via the Fourier method, and the influence of the torsional
stiffness of the shafts on the first five orders of the system's natural frequency was studied. Some theoretical results are summarized as guidelines for further research and design of
four-stage-deceleration helicopters.
1. Introduction
The four-stage main transmission system has the ability to carry heavy loads, and it plays an essential role in the dynamic behavior of three-engine helicopters. Predicting its natural
characteristics can protect the whole system from further failure and damage. However, its structure is extremely complicated: it has multiple branches of input and output, along with a long
transmission chain which includes numerous gears and accessories [1]. In addition, mutual coupling exists between each branch and the whole system, and the system is affected by several
dynamic factors like time-varying meshing stiffness, clearance, and synthetic transmission error. Furthermore, the input speed of the engine is very high, which has a large effect on the
vibration of the system. Therefore, it is urgent and meaningful to analyze and study its vibration characteristics.
Regarding the vibration characteristics of the helicopter main transmission system, scholars have studied the modal vibration of complex multi-shaft systems with a variety of meshing gear
pairs [2], and analyzed the influence of meshing stiffness, installation angle, spiral angle and bearing stiffness on the natural characteristics of the system [3, 4].
On the other hand, for the planetary gear chain of the main transmission system, the effects of torque on the dynamic behavior of torsional models have been analyzed [5, 6]. Other studies of
planetary gear dynamics include mesh stiffness variation and load-sharing [7], free vibration [8, 9], and tooth crack detection [10].
In conclusion, most studies are limited to the planetary chain system or to individual components of the helicopter's main reducer; few of them involve the overall main transmission system,
which has a long chain and many DOF. Besides, it is still difficult to analyze the impact of the torsional stiffness of the shafts on the vibration characteristics. Therefore, in this paper,
the influence of changes in torsional stiffness on the various orders of natural frequency was explored, and theoretical support was provided for the design of the helicopter.
2. Dynamic modeling of the system
The dynamic model of the four-stage deceleration helicopter's transmission system is shown in Fig. 1. In the picture, the system has three identical input branches, namely branch $j$ ($j=$ 1, 2, 3). Each
branch has 5 gears. ${\theta }_{1}^{\left(j\right)}$, ${\theta }_{2}^{\left(j\right)}$, ${\theta }_{3}^{\left(j\right)}$, ${\theta }_{4}^{\left(j\right)}$, ${\theta }_{5}^{\left(j\right)}$, ${\theta
}_{6}$, ${\theta }_{c}$, ${\theta }_{7}$, ${\theta }_{8}$, ${\theta }_{9}$, ${\theta }_{s}$ and ${\theta }_{pi}$ are the rotational DOF of the gear system in each deceleration stage.
${k}_{23}$, ${k}_{45}$, ${k}_{78}$, ${k}_{in}$, ${k}_{out}$ and ${k}_{6s}$ are the torsional stiffnesses of the shafts connecting each gear pair; ${c}_{12}$, ${c}_{34}$, ${c}_{56}$, ${c}_{spi}$ and ${c}_
{rpi}$ are the meshing damping of each gear pair; ${k}_{12}$, ${k}_{34}$, ${k}_{56}$, ${k}_{spi}$ and ${k}_{rpi}$ are the meshing stiffnesses of each gear pair. The flexibility between the shafts and the
transverse DOF are not taken into account due to their weak influence on the natural frequencies.
Fig. 1. Model of the three-engine helicopter transmission system
The differential equations of the model system can be deduced through Newton's law, as shown below:
$\left\{\begin{array}{l}{J}_{1}{{\stackrel{¨}{\theta }}_{1}}^{\left(j\right)}+\left[{F}_{1_2}^{p\left(j\right)}\left(t\right)+{F}_{1_2}^{d\left(j\right)}\left(t\right)\right]{r}_{1}+{k}_{in}{{\theta
}_{1}}^{\left(j\right)}={T}_{Ej},\\ {J}_{2}{{\stackrel{¨}{\theta }}_{2}}^{\left(j\right)}-\left[{F}_{1_2}^{p\left(j\right)}\left(t\right)+{F}_{1_2}^{d\left(j\right)}\left(t\right)\right]{r}_{2}+{{k}_
{23}}^{\left(j\right)}\left({{\theta }_{2}}^{\left(j\right)}-{{\theta }_{3}}^{\left(j\right)}\right)=0,\\ {J}_{3}{{\stackrel{¨}{\theta }}_{3}}^{\left(j\right)}+\left[{F}_{3_4}^{p\left(j\right)}\left
(t\right)+{F}_{3_4}^{d\left(j\right)}\left(t\right)\right]{r}_{3}+{{k}_{23}}^{\left(j\right)}\left({{\theta }_{3}}^{\left(j\right)}-{{\theta }_{2}}^{\left(j\right)}\right)=0,\\ {J}_{4}{{\stackrel{¨}
{\theta }}_{4}}^{\left(j\right)}-\left[{F}_{3_4}^{p\left(j\right)}\left(t\right)+{F}_{3_4}^{d\left(j\right)}\left(t\right)\right]{r}_{4}+{{k}_{45}}^{\left(j\right)}\left({{\theta }_{4}}^{\left(j\
right)}-{{\theta }_{5}}^{\left(j\right)}\right)=0,\\ {J}_{5}{{\stackrel{¨}{\theta }}_{5}}^{\left(j\right)}+\left[{F}_{5_6}^{p\left(j\right)}\left(t\right)+{F}_{5_6}^{d\left(j\right)}\left(t\right)\
right]{r}_{5}+{{k}_{45}}^{\left(j\right)}\left({{\theta }_{5}}^{\left(j\right)}-{{\theta }_{4}}^{\left(j\right)}\right)=0,\\ {J}_{6}{\stackrel{¨}{\theta }}_{6}-\sum _{j=1}^{3}\left[{F}_{{5}_{6}}^{p\
left(j\right)}\left(t\right)+{F}_{{5}_{6}}^{d\left(j\right)}\left(t\right)\right]{r}_{6}+\left[{F}_{{6}_{7}}^{p}\left(t\right)+{F}_{{6}_{7}}^{d}\left(t\right)\right]{r}_{6}+{k}_{6s}\left({\theta }_
{6}-{\theta }_{s}\right)=0,\\ {J}_{7}{\stackrel{¨}{\theta }}_{7}-\left[{F}_{6_7}^{p}\left(t\right)+{F}_{6_7}^{d}\left(t\right)\right]{r}_{7}+{k}_{78}\left({\theta }_{7}-{\theta }_{8}\right)=0,\\ {J}_
{8}{\stackrel{¨}{\theta }}_{8}+\left[{F}_{8_9}^{p}\left(t\right)+{F}_{8_9}^{d}\left(t\right)\right]{r}_{8}+{k}_{78}\left({\theta }_{8}-{\theta }_{7}\right)=0,\\ {J}_{9}{\stackrel{¨}{\theta }}_{9}-\
left[{F}_{{8}_{9}}^{p}\left(t\right)+{F}_{{8}_{9}}^{d}\left(t\right)\right]{r}_{9}+{k}_{out}{\theta }_{9}=-{T}_{RR},\\ {J}_{s}{\stackrel{¨}{\theta }}_{s}+\sum _{i=1}^{N}\left({F}_{spi}^{p}+{F}_{spi}^
{d}\right){r}_{s}-{k}_{6s}\left({\theta }_{6}-{\theta }_{s}\right)=0,\\ {J}_{p}{\stackrel{¨}{\theta }}_{pi}-\left({F}_{spi}^{p}+{F}_{spi}^{d}\right){r}_{p}+\left({F}_{rpi}^{p}+{F}_{rpi}^{d}\right){r}
_{p}=0,\\ \left[{J}_{c}+\sum _{i=1}^{N}\left({m}_{pi}{r}_{c}^{2}\right)\right]{\stackrel{¨}{\theta }}_{c}-\sum _{i=1}^{N}\left({F}_{spi}^{p}+{F}_{spi}^{d}+{F}_{rpi}^{p}+{F}_{rpi}^{d}\right){r}_{c}\
mathrm{c}\mathrm{o}\mathrm{s}\alpha =-{T}_{MR},\end{array}\right\}$
where ${T}_{Ej}$ is denoted as the output torque of engine $j$ ($j=$ 1, 2, 3); ${T}_{RR}$ and ${T}_{MR}$ are the output torque of tail branch and planet carrier. ${F}^{p}$ and ${F}^{d}$ are dynamic
meshing and damping forces of each gear pair.
In addition, after the decomposition and recombination of Eq. (1), it can be expressed in the following matrix-vector form:
$\left[M\right]\left\{\stackrel{¨}{\theta }\right\}+\left[C\right]\left\{\stackrel{˙}{\theta }\right\}+\left[K\right]\left\{\theta \right\}=\left[F\right].$
Here [$M$], [$C$], [$K$] are the mass matrix, damping matrix and stiffness matrix of the governing equation, and all matrices are 27-dimensional; the matrix [$F$] is the external excitation.
3. System natural characteristics analysis
A set of basic parameters is extracted from a geared system and listed in Table 1. By setting the damping and external-excitation terms in Eq. (2) to zero, the vibration differential
equation of the system under free conditions can be obtained.
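Under free vibration the governing equation reduces to [M]{θ̈} + [K]{θ} = {0}, and the natural frequencies are the roots of det([K] − ω²[M]) = 0. The sketch below solves this by hand for a toy 2-DOF torsional system (two inertias joined by one shaft, with assumed parameter values); it illustrates the method only, not the 27-DOF model of the paper:

```python
import math

# Two inertias J1, J2 (kg*m^2) joined by a shaft of torsional stiffness k (N*m/rad)
J1, J2, k = 0.5, 2.0, 1.0e5

# det(K - w^2 M) = (k - w^2 J1)(k - w^2 J2) - k^2 = 0
#   => w^2 * (J1*J2*w^2 - k*(J1 + J2)) = 0
w_sq = [0.0, k * (J1 + J2) / (J1 * J2)]        # rigid-body mode + elastic mode
freqs_hz = [math.sqrt(w2) / (2 * math.pi) for w2 in w_sq]

# The elastic natural frequency equals sqrt(k*(1/J1 + 1/J2)) / (2*pi)
expected = math.sqrt(k * (1 / J1 + 1 / J2)) / (2 * math.pi)
assert abs(freqs_hz[1] - expected) < 1e-9
```

For the full 27-DOF system the same determinant condition is solved numerically as a generalized eigenvalue problem, but the dependence of each eigenfrequency on the stiffness entries is the same in kind.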
Table 1. Gear parameters
Tooth number Module Face width (mm) Initial phase of meshing stiffness (°)
Gear 1 (First stage) 30 4.5 40 0
Gear 2 (First stage) 85 4.5 40 0
Gear 3 (Second stage) 40 5 60 20
Gear 4 (Second stage) 90 5 60 20
Gear 5 (Third stage) 25 4.75 40 30
Gear 6 (Third stage) 142 4.75 40 30
Gear 7 (Tail branch) 40 5 35 0
Gear 8 (Tail branch) 60 5 35 0
Gear 9 (Tail branch) 70 5 40 20
Gear 10 (Sun gear in fourth stage): tooth number, module and face width values missing.
Gear 11 (Planet gear in fourth stage): 37, 5, 40.
Initial phases of meshing stiffness for the sun-planet pairs: ${\mathrm{\varnothing }}_{sp1}=0$, ${\mathrm{\varnothing }}_{sp2}=\pi /3$, ${\mathrm{\varnothing }}_{sp3}=2\pi /3$, ${\mathrm{\varnothing }}_{sp4}=2.5\pi /3$, ${\mathrm{\varnothing }}_{sp5}=\pi$, ${\mathrm{\varnothing }}_{sp6}=4\pi /3$.
The helicopter has a multi-stage deceleration shaft system. $k$ is the torsional stiffness of the shaft connecting each gear pair. A change in the value of a torsional stiffness means a change
in the stiffness matrix, and affects the natural frequencies accordingly. In this paper, the torsional stiffness of each shaft was varied, the influence of this change on the natural
characteristics was explored, and theoretical support was provided for the design of the helicopter shafts.
Fig. 2(a) indicates the impact of the input shaft torsional stiffness on the natural frequencies. It shows that the values of the natural frequencies are close together and in the low-frequency
region when the input shaft torsional stiffness is low; increasing the rotational speed then tends to cause resonance, so this shaft is a key shaft. In case of resonance, it has a larger impact
on the planet chain and tail transmission system. In addition, when the torsional stiffness is greater than 8×10^4 N·m/rad, the 1st order natural frequency remains at 640 Hz, while the natural
frequencies of the remaining orders continue to increase; when the torsional stiffness is greater than 2×10^5 N·m/rad, the 2nd order natural frequency remains at 1100 Hz, while the natural
frequencies of the remaining orders gradually stabilize in the high-frequency region as the torsional stiffness increases. Moreover, the 3rd order natural frequency always increases with
increasing torsional stiffness, and does not gradually stabilize until the torsional stiffness is much higher.
Fig. 2(b) shows the impact of the torsional stiffness of the Gear 2 and Gear 3 connecting shaft on the natural frequencies, with trends similar to Fig. 2(a). However, the 2nd order natural
frequency does not stabilize at 1100 Hz until the torsional stiffness is greater than 4.5×10^5 N·m/rad. At the same time, the abrupt change point of the 5th order is delayed to 4.5×10^5 N·m/rad.
Fig. 2(c) shows the impact of the torsional stiffness of the Gear 4 and Gear 5 connecting shaft on the natural frequencies. It can be seen that the change of torsional stiffness only slightly
affects the first 4 orders of natural frequency; the 5th order natural frequency rises slightly with increasing torsional stiffness, and lies in the high-frequency region.
Fig. 2(d) shows the impact of the torsional stiffness of the sun gear input shaft on the natural frequencies. It can be seen that the 1st order natural frequency increases almost linearly,
while the natural frequencies of the other orders show little change as the torsional stiffness increases; the 1st order crosses the low-frequency region when the torsional stiffness is low,
so this shaft is also a key shaft, and its torsional stiffness cannot be too low.
Fig. 2. Impact of each shaft's torsional stiffness on the first five orders of natural frequency
Fig. 2(e) and Fig. 2(f) respectively show the influence of the Gear 7 and Gear 8 connecting shaft and of the tail branch's output shaft on the natural frequencies. The natural frequencies of
all orders show similar trends with the change of torsional stiffness. However, the stable point of the 2nd order and the abrupt change point of the 5th order natural frequency in Fig. 2(e)
are at 6.5×10^5 N·m/rad, while the corresponding value in Fig. 2(f) is 3×10^5 N·m/rad, indicating that the Gear 7 and Gear 8 connecting shaft is relatively important in the tail transmission branch.
Through a comprehensive comparison of the above figures, it can be found that the system input shaft and the sun gear input shaft have the greater influence on the 1st order natural frequency,
while the system input shaft and the Gear 2 and Gear 3 connecting shaft have the greater influence on the 2nd order natural frequency.
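The saturation behavior seen in Fig. 2, where a natural frequency first rises with a shaft's torsional stiffness and then levels off, can be reproduced on a grounded 2-DOF chain (an illustrative toy model with assumed values, not the paper's system): as the connecting stiffness k grows, the two inertias lock together and the lower frequency approaches sqrt(k1/(J1+J2)).

```python
import math

def lowest_freq(k, J1=1.0, J2=1.0, k1=1.0e5):
    """Lower natural frequency (rad/s) of the chain: ground --k1-- J1 --k-- J2.
    Frequencies are roots of J1*J2*w^4 - ((k1 + k)*J2 + k*J1)*w^2 + k1*k = 0."""
    a = J1 * J2
    b = -((k1 + k) * J2 + k * J1)
    c = k1 * k
    w2 = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # smaller root of the quadratic
    return math.sqrt(w2)

soft, stiff = lowest_freq(1.0e4), lowest_freq(1.0e9)
limit = math.sqrt(1.0e5 / (1.0 + 1.0))   # sqrt(k1/(J1+J2)): locked-inertia limit

assert soft < stiff                       # frequency rises with stiffness...
assert abs(stiff - limit) / limit < 1e-3  # ...and saturates at the limit
```

This is one plausible explanation for the plateaus reported above: once a connecting shaft is stiff enough to act rigid, that mode's frequency is set by the rest of the system and further stiffening has no effect.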
4. Conclusions
In this paper, a new dynamic model of the four-stage helicopter transmission system is proposed, and the analysis results enable us to draw the following conclusions:
The system input shaft and the sun gear input shaft are the key shafts of the system, because their torsional stiffness affects the low-frequency region much more than that of the other shafts.
The input shaft of the system as well as the Gear 2 and Gear 3 connecting shaft greatly impact the second and third order natural frequencies.
The Gear 7 and Gear 8 connecting shaft is the most important shaft in the tail transmission branch.
• Bianchi A., Rossi S. Modeling and finite element analysis of a complex helicopter transmission including housing, shafts and gears. 1997.
• Kahraman A., Zini D. M., Kienzle K. Dynamic analysis of a multi-shaft helical gear transmission by finite elements: model and experiment. ASME Journal of Vibrations and Acoustics, Vol. 126, Issue
3, 2004, p. 398-406.
• Huang J., Zhang S., Zhang Y. The influences of system parameters on the natural characteristics of a parallel multi-shaft gear-rotor system. Machinery Design and Manufacture, Vol. 7, 2013, p.
15-17, (in Chinese).
• Gu Z., Yang J. Analyses of torsional vibration characteristics for transmission system of helicopter. Journal of Nanjing University of Aeronautics and Astronautics, Vol. 29, Issue 6, 1997, p.
674-678, (in Chinese).
• Ericson T. M., Parker R. G. Experimental measurement of the effects of torque on the dynamic behavior and system parameters of planetary gears. Mechanism and Machine Theory, Vol. 74, 2014, p.
• Saada A., Velex P. An extended model for the analysis of the dynamic behavior of planetary trains. Journal of Mechanical Design, Vol. 117, Issue 2A, 1995, p. 241-247.
• Kasuba R., August R. Gear mesh stiffness and load sharing in planetary gearing. ASME Paper 84-DET-229, 1984.
• Zhang L., Wang Y., Wu K., et al. Dynamic modeling and vibration characteristics of a two-stage closed-form planetary gear train. Mechanism and Machine Theory, Vol. 97, 2016, p. 12-28.
• Noll M. C., Godfrey J. W., Schelenz R., et al. Analysis of time-domain signals of piezoelectric strain sensors on slow spinning planetary gearboxes. Mechanical Systems and Signal Processing, Vol.
72, 2016, p. 727-744.
• Liang X., Zuo M. J., Hoseini M. R. Vibration signal modeling of a planetary gear set for tooth crack detection. Engineering Failure Analysis, Vol. 48, 2015, p. 185-200.
About this article
Mechanical vibrations and applications
helicopter transmission system
four-stage reducer
natural characteristics
torsional stiffness
nonlinear dynamics
This work is supported by the National Natural Science Foundation of PRC (Grant No. 51375226 and 51475226), China Scholarship Council (Grant No. 201606830019) and Postgraduate Research and Practice
Innovation Program of Jiangsu Province.
Copyright © 2017 JVE International Ltd.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Mea Culpa - MIT Faculty Newsletter
January-March 2024, Vol. XXXVI No. 3
Mea Culpa
Peko Hosoi
I made a mistake. And I would like to apologize to everyone who was at the Institute faculty meeting of February 21, 2024. During that meeting I forgot to call for “no” votes on a motion to move to
an executive session.
I know there are many people who (justifiably!) felt confused during the meeting and disenfranchised afterwards. I would have felt the same.
Following the rules of order is essential for the smooth functioning of a faculty meeting and I am mortified that I threw a spanner in the works. I sincerely apologize to my colleagues who were
deprived of their right to vote; this is unacceptable, and I am deeply sorry that I was the cause of that injustice.
While I can’t change what happened at the meeting, I can try to ensure that this doesn’t happen again. For starters, I will work with the Chair of the Faculty to propose a set of guidelines to
safeguard our parliamentary procedures (e.g., having a parliamentarian at the meeting could enable us to correct procedural errors in real-time). In addition, I have learned that it was difficult for
the people on Zoom to understand what was happening in 10-250, which in turn made it difficult for them to participate in the discussion. Given these and other challenges we have faced with the
hybrid meeting format, I hope that we as a faculty will have a serious discussion about whether the current format of the faculty meeting is best serving our needs.
Regardless of whether these steps turn out to be helpful, I deeply regret my mistake and I apologize to everyone who was in attendance at the meeting.
Second, I would like to send a special message to the students who were there. Your actions will be judged differently by different members of our community, but from my point of view at the podium,
you showed respect for our rules of order. You observed the speaking privilege rules of the meeting; you followed the rules of the executive session; and you carried yourselves with dignity and
decorum when you left the room. I saw and appreciated the care with which you treated our protocols and, by following the rules to a T, you evoked the sympathy of many people.
The rest of this letter is less important than the apology; however, for those of you who are willing to indulge me a bit longer, there are two questions that continue to haunt me: The first is about
freedom of expression and the second is a math question. As many of you know, I am co-chairing the ad hoc Committee on Academic Freedom and Free Expression (CAFCE). The primary reason I agreed to do
this, is that I am concerned by surveys that suggest people do not feel comfortable speaking up on the MIT campus. My concern has now grown into alarm. How can it be – given the egregious nature of
my mistake at the faculty meeting – that not a single faculty member made a Point of Order to correct my error on the spot? There are many plausible explanations for why this might happen, but I
worry that, for many years at MIT, we have allowed a climate to persist where people do not want to speak up in public. There is an enormous amount of wisdom in our faculty and a climate of silence
makes that wisdom hard to access. I don’t know how to fix this, but let me start by saying that, as a Faculty Officer, I welcome your dissent. I am genuinely interested in your opinion. (And anyone
who saves me from making another boneheaded blunder like the one I made on February 21 will have my eternal gratitude!)
Finally, I’d like to end on a question which has been the subject of much speculation: Would the outcome have been different if I had remembered to call for the no votes? To be clear, the answer to
this question in no way mitigates my error. Voting is a form of expression regardless of the outcome, and all faculty have the right to express themselves through their vote at the faculty meeting.
Nonetheless, given the interest around this question, and as one more form of atonement, let me share all the data I have managed to collect and offer a brief analysis. In the following I will
provide an excess of data in case someone would like to perform their own analysis with different assumptions.
When we called for a quorum at the beginning of the meeting, there were 23 faculty in 10-250 and 24 faculty on Zoom who raised their hands. At the time of the vote, the number of faculty on Zoom was
46. These numbers are not speculative or approximate. They have been taken directly from the Zoom meeting log.^[1] The number of yes votes in 10-250 was 22 and the number of yes votes on Zoom was 20
for a total of 42 yes votes. In addition, we know that throughout the meeting, 66 unique faculty were on Zoom at some point, albeit not all at the same time.
The next set of numbers can be estimated from historical data. In recent history, the number of faculty who attend each meeting lies roughly between 95 and 105. Both the Faculty Governance
Administrator and I independently estimated the number of faculty in 10-250 to be approximately 40 ± 10. This estimate is also consistent with the historical data: 66 (faculty on Zoom) + 40 (faculty
in 10-250) = 106 which is in-line with historical attendance numbers.
The final set of data I received from the Faculty Governance Administrator is the total number of people who voted in each of the last 10 votes at the faculty meeting: 63, 64, 64, 57, 61, 65, 69, 86,
83, 83. This yields a mean number of people voting of 69.5 and a standard deviation of 10.54.
Given that the total number of faculty at the meeting of February 21 was consistent with the number of faculty at recent meetings, one plausible way to estimate what the total number of votes would
have been had the no vote been called, is to use the mean number of people voting in recent meetings. In that case, the expected number of no votes would be:
# no votes = mean(# total votes) – # yes votes = 69.5 – 42 = 27.5
which would not have been sufficient to overturn the yes vote.
However, I would argue that this is not quite the right question to ask. A better question is what is the probability that the no votes would prevail? Suppose we model the total number of votes as a
normally distributed random variable with the same mean and standard deviation as the measured data; then the distribution of no votes is the same but shifted to the left by 42 (i.e., we remove the
known number of yes votes from the total). That distribution is shown in the figure. The area under the curve above 42 no votes (shown in red) represents the fraction of the time the no votes win if
we replayed this scenario many times. The area under the curve below 42 no votes (shown in blue) similarly represents the fraction of the time the yeses would prevail. Integrating both areas and
taking the ratio of the blue area to the total area, we find that there is a 92% chance that the yes votes would have won had we executed the vote properly.
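This estimate is easy to reproduce (a sketch using the stated mean of 69.5 and standard deviation of 10.54; the normal model is the assumption described above, not a property of the data):

```python
from statistics import NormalDist

yes_votes = 42
mean_total, sd_total = 69.5, 10.54          # from the last 10 recorded votes

# Distribution of "no" votes: the total-vote distribution shifted left by the known yeses.
no_votes = NormalDist(mu=mean_total - yes_votes, sigma=sd_total)

p_yes_prevails = no_votes.cdf(yes_votes)    # P(no votes < 42)
print(f"yes votes prevail with probability {p_yes_prevails:.0%}")
```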
So although it is not probable that the outcome would have changed had I remembered to call for the no vote, it is certainly possible. Which is of course why it is essential to count the votes. And
why my mistake was so egregious.
Mea Culpa.
^[1] There has been some speculation that there were 95 faculty on Zoom at the time of the vote. This is incorrect. There may have been 95 people on Zoom but only 46 were faculty with voting rights.
New quantum gate realised
A new important building block for a future quantum computer has been realised by physicists at the Institute for Experimental Physics in Innsbruck and the Institute for Quantum Optics and Quantum
Information (IQOQI): a gate acting on three quantum bits, the so-called Toffoli gate, as has been reported in Physical Review Letters.
Quantum mechanical laws allow quantum computers to process information significantly faster and more efficiently than normal computers. Even the most demanding algorithms can be realised in just a
few steps. The basic building blocks for quantum computers are gates acting on one or more quantum bits (qubits). Already single qubit operations and one two-qubit gate allow for basic experiments in
the world of quantum physics. Innsbruck scientists led by Rainer Blatt have demonstrated this impressively in recent years: in 2005 they deterministically teleported quantum information from one atom to another, and last year they were the first to deterministically generate entanglement between noninteracting quantum bits (entanglement swapping).
Gate acting on three qubits
In principle, every quantum algorithm can be realised by using only single- and two-qubit gates. This approach, however, is unfavourable for more complex tasks and would rapidly reach the limits of
current implementations. Hence, physicists worldwide strive for the efficient realisation of multi-qubit gates. The scientists in Innsbruck succeeded in implementing a three-qubit gate acting on
three trapped calcium ions representing the qubits. The target qubit of the so-called Toffoli gate is switched only when both control qubits are set to "1"; in all other cases the target qubit is left unchanged.
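On classical basis states, the gate's action can be sketched as a simple truth table (a toy model of the logic only, not the trapped-ion implementation):

```python
def toffoli(control1, control2, target):
    """Flip the target bit only when both control bits are 1."""
    return (control1, control2, target ^ (control1 & control2))

# Only the two states with both controls set have their target flipped.
for state in [(0, 0, 1), (0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]:
    print(state, "->", toffoli(*state))
```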
Important step towards a quantum computer
This novel gate not only augments the set of available quantum gates, but also raises the achievable efficiency. Thomas Monz, junior scientist on the experiment, explains: "A Toffoli gate based on the conventional approach would require a sequence of six controlled switch operations. In comparison, our approach is three times faster while operating at a reduced error rate." Applications of the
Toffoli gate lie within quantum error correction or quantum mechanical prime factorization. Thus, it represents an important component of a future quantum computer.
We are financially supported by Universität Innsbruck, Österreichische Akademie der Wissenschaften, Fonds zur Förderung der wissenschaftlichen Forschung (FWF) within the program "Control and
Measurement of Coherent Quantum Systems", the European networks "SCALA", "CONQUEST", as well as Institut für Quanteninformation GmbH and ARO.
• "Realization of the quantum Toffoli gate with trapped ions"
T. Monz, K. Kim, W. Hänsel, M. Riebe, A. S. Villar, P. Schindler, M. Chwalla, M. Hennrich, and R. Blatt,
Physical Review Letters 102, 040501 (2009), arXiv:0804.0082.
Elliptic Composability
The Math of Gauge Theories
With a bit of a delay I am resuming the posts on gauge theory and today I will talk about the math involved.
In gauge theory you consider the base space-time as a manifold and attach at each point an object, called a fiber, forming what is called a fiber bundle. The picture you should have in mind is that of a rug.
The nature of the fibers is unimportant at the moment, but they should obey at least the properties of a linear space.
Physically, think of the fibers as internal degrees of freedom at each spacetime point; a physical configuration then corresponds to a definite location along the fiber at each point.
The next key concept is that of a gauge group. A gauge group is the group of transformations which do not affect the observables of the theory.
Mathematically, the gauge symmetry depends on how we relate points between nearby fibers, and to make this precise we need only one critical step: define a covariant derivative.
Why do we need this? Because an arbitrary gauge transformation does not change the physics and the usual ordinary derivative sees both infinitesimal changes to the fields, and the infinitesimal
changes to an arbitrary gauge transformation. Basically we need to compensate for the derivative of an arbitrary gauge transformation.
If d is the ordinary derivative, let's call D the covariant derivative and their difference (which is a linear operator) is called either a differential connection, a gauge field, or a potential:
A(x) = D - d
D and d act differently: d "sees" the neighbourhood behaviour but ignores the value of the function on which it acts, and D acts on the value but is blind to the neighbourhood behaviour.
The condition we will impose on D is that is must satisfy the Leibniz identity because it is derivative:
D(fg) = (Df)g+f(Dg)
which in turn demands:
A(fg) = (Af)g+f(Ag)
In general only one part of A may be used to compensate for gauge transformations, and the remaining part represents an external field that may be interpreted as a potential. When no external potentials
are involved, A usually respects integrability conditions. Those conditions depend on the concrete gauge theory and we will illustrate this in subsequent posts.
When external fields are present, the integrability conditions are not satisfied and this is captured by what is called a curvature. The name comes from general relativity where lack of integrability
is precisely the space-time curvature.
The symmetry properties arising out of curvature construction gives rise to algebraic identities.
Next in gauge theories we have the homogeneous and inhomogeneous differential equations. As example of homogeneous differential equations are the Bianchi identities in general relativity and the two
homogeneous Maxwell's equations. The inhomogeneous equations are related to the sources of the fields (current in electrodynamics, and stress-energy tensor in general relativity).
So to recap, the steps used to build a gauge theory are:
1. the gauge group
2. the covariant derivative giving rise to the gauge field
3. integrability condition
4. the curvature
5. the algebraic identities
6. the homogeneous equations
7. the inhomogeneous equations
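As a compact preview, here is how the outline fills in for electromagnetism, the simplest gauge theory (the assignment of formulas to steps is my own bookkeeping, to be justified in the later posts):

```latex
\begin{align*}
\text{gauge group:}\qquad & U(1), \qquad A \mapsto A + d\lambda \\
\text{covariant derivative:}\qquad & D = d + A \\
\text{curvature (field strength):}\qquad & F = dA \\
\text{homogeneous equations:}\qquad & dF = 0 \\
\text{inhomogeneous equations:}\qquad & d{\star}F = {\star}J
\end{align*}
```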
In the following posts I will spell out this outline first for general relativity and then for electromagnetism. Technically general relativity is not a gauge theory because diffeomorphism invariance
cannot be understood as a gauge group but the math similarities are striking and there is a deep connection between diffeomorphism invariance and gauge theory which I will spell out in subsequent
posts. So for now please accept this sloppiness which will get corrected in due time.
Different Types of Quadrilaterals – Definition, Properties
Depending upon their side lengths and angles, quadrilaterals are classified in different ways. Let us look at the different types of quadrilaterals and their definitions and properties along with their diagrams. A quadrilateral can be described using the properties below:
• The sum of the interior angles is 360 degrees in a quadrilateral.
• A quadrilateral consists of 4 sides and 4 vertices, and also 4 angles.
• The sum of the interior angles follows from the polygon formula (n – 2) × 180°, where n is the number of sides of the polygon.
The most familiar types of quadrilateral, such as the square and the rectangle, have all angles equal and, in the square's case, all sides equal as well.
Various Types of Quadrilaterals
Mainly quadrilaterals are classified into six types. They are
1. Parallelogram
2. Rhombus
3. Rectangle
4. Square
5. Trapezium
6. Kite
A quadrilateral is said to be a parallelogram when both pairs of opposite sides are parallel and equal in length. Also, the opposite angles are equal in a parallelogram. If we take a parallelogram PQRS, then the side PQ is parallel to the side RS, and the side PS is parallel to the side QR.

A parallelogram has two diagonals, and they bisect each other at their point of intersection. From the figure, PR and QS are the two diagonals; the intersection point divides each diagonal into two equal halves.

PQ ∥ RS and PS ∥ QR.
Rhombus is a quadrilateral when all the four sides of a quadrilateral having equal lengths. In a rhombus, opposite sides are parallel and opposite angles are equal.
From the above figure, PQRS is a rhombus in which PQ ∥ RS, PS ∥ QR, and PQ = QR = RS = SP.
A quadrilateral is considered as a rectangle when all 4 angles of it are equal and each angle is 90 degrees. Also, both pairs of opposite sides of a rectangle are parallel and have equal lengths.
From the above figure, PQRS is a quadrilateral in which PQ ∥ RS, PS ∥ QR and ∠P = ∠Q = ∠R = ∠S = 90°.
So, PQRS is a rectangle.
A square is a quadrilateral consists all the sides and angles are equal. Also, every angle of a square is 90 degrees. The pairs of opposite sides of a square are parallel to each other.
From the above figure, PQRS is a quadrilateral in which PQ ∥ RS, PS ∥ QR, PQ = QR = RS = SP and ∠P = ∠Q = ∠R = ∠S = 90°.
So, PQRS is a square.
A quadrilateral is called a trapezium when it has one pair of opposite parallel sides.
From the above figure, PQRS is a quadrilateral in which PQ ∥ RS. So, PQRS is a trapezium. A trapezium whose non-parallel sides are equal is called an isosceles trapezium.
A quadrilateral is said to be a kite when it has two pairs of equal-length sides, with the equal sides adjacent to each other.
From the above figure, PQRS is a quadrilateral. PQ = PS, QR = RS, PS ≠QR, and PQ ≠RS.
So, PQRS is a kite.
Important Points to Remember for Quadrilaterals
Look at some of the important points need to remember about a quadrilateral.
• A square is both a rectangle and a rhombus.
• A rectangle or a rhombus is not necessarily a square.
• Every parallelogram is a trapezium.
• Squares, rectangles, and rhombuses are all types of parallelograms.
• A trapezium is not necessarily a parallelogram.
• A kite is not a parallelogram.
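The membership rules above can be checked mechanically. Here is a small sketch that classifies a quadrilateral from vertex coordinates given in order (the function and its tolerance are my own illustration, not part of the lesson):

```python
import math

def classify(p, q, r, s, eps=1e-9):
    """Classify the quadrilateral with vertices P, Q, R, S listed in order."""
    sides = [(q[0]-p[0], q[1]-p[1]), (r[0]-q[0], r[1]-q[1]),
             (s[0]-r[0], s[1]-r[1]), (p[0]-s[0], p[1]-s[1])]
    cross = lambda u, v: u[0]*v[1] - u[1]*v[0]   # zero iff parallel
    dot = lambda u, v: u[0]*v[0] + u[1]*v[1]     # zero iff perpendicular
    pq_parallel_rs = abs(cross(sides[0], sides[2])) < eps
    qr_parallel_sp = abs(cross(sides[1], sides[3])) < eps
    if pq_parallel_rs and qr_parallel_sp:
        equal_adjacent = abs(math.hypot(*sides[0]) - math.hypot(*sides[1])) < eps
        right_angle = abs(dot(sides[0], sides[1])) < eps
        if equal_adjacent and right_angle:
            return "square"
        if right_angle:
            return "rectangle"
        if equal_adjacent:
            return "rhombus"
        return "parallelogram"
    if pq_parallel_rs or qr_parallel_sp:
        return "trapezium"
    return "quadrilateral"

print(classify((0, 0), (2, 0), (2, 2), (0, 2)))   # square
print(classify((0, 0), (4, 0), (3, 2), (1, 2)))   # trapezium
```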
Rules And Straight Edges
Rules are used for taking or transferring measurements while straight edges allow drawing lines and testing for straightness. They are used to take accurate measurements while ensuring engineering &
construction integrity. Raptor Supplies offers a wide range of rulers & straight edges from brands like General, Westward, Starrett, Johnson, Victor Thermal Dynamics, Klein Tools, Facom, & more. The
vast rule and straight edge catalogue includes architect, folding, flexible, wood, steel & pocket rules; as well as diameter tapes and straight edges. Straight edges with equally spaced marking lines
can also be used as rules and are used to check the flatness of a surface. They are marked with arrows at two suspension points for excellent accuracy.
The C# Programming Language: Types
The authors of The C# Programming Language discuss value types, reference types, and pointers.
This chapter is from the book
The types of the C# language are divided into two main categories: value types and reference types. Both value types and reference types may be generic types, which take one or more type parameters.
Type parameters can designate both value types and reference types.
A third category of types, pointers, is available only in unsafe code. This issue is discussed further in §18.2.
Value types differ from reference types in that variables of the value types directly contain their data, whereas variables of the reference types store references to their data, the latter being
known as objects. With reference types, it is possible for two variables to reference the same object, and thus possible for operations on one variable to affect the object referenced by the other
variable. With value types, the variables each have their own copy of the data, so it is not possible for operations on one to affect the other.
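The contrast is easy to see in a short program (a minimal sketch; the type names are invented for the example):

```csharp
struct SPoint { public int X; }   // value type: a variable contains the data directly
class CPoint { public int X; }    // reference type: a variable contains a reference

class Demo
{
    static void Main() {
        SPoint a = new SPoint();
        a.X = 1;
        SPoint b = a;             // assignment copies the value
        b.X = 2;                  // a.X is still 1

        CPoint c = new CPoint();
        c.X = 1;
        CPoint d = c;             // assignment copies the reference, not the object
        d.X = 2;                  // c.X is now 2: c and d refer to the same object

        System.Console.WriteLine("{0} {1}", a.X, c.X);   // prints "1 2"
    }
}
```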
C#’s type system is unified such that a value of any type can be treated as an object. Every type in C# directly or indirectly derives from the object class type, and object is the ultimate base
class of all types. Values of reference types are treated as objects simply by viewing the values as type object. Values of value types are treated as objects by performing boxing and unboxing
operations (§4.3).
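For example (a minimal sketch of the boxing and unboxing conversions just mentioned):

```csharp
int i = 123;
object o = i;      // boxing: the value 123 is copied into a heap-allocated object
i = 456;           // the boxed copy is unaffected
int j = (int)o;    // unboxing: j receives 123, copied back out of the box
```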
4.1 Value Types
A value type is either a struct type or an enumeration type. C# provides a set of predefined struct types called the simple types. The simple types are identified through reserved words.
nullable-type:
non-nullable-value-type ?
Unlike a variable of a reference type, a variable of a value type can contain the value null only if the value type is a nullable type. For every non-nullable value type, there is a corresponding
nullable value type denoting the same set of values plus the value null.
Assignment to a variable of a value type creates a copy of the value being assigned. This differs from assignment to a variable of a reference type, which copies the reference but not the object
identified by the reference.
4.1.1 The System.ValueType Type
All value types implicitly inherit from the class System.ValueType, which in turn inherits from class object. It is not possible for any type to derive from a value type, and value types are thus
implicitly sealed (§10.1.1.2).
Note that System.ValueType is not itself a value-type. Rather, it is a class-type from which all value-types are automatically derived.
4.1.2 Default Constructors
All value types implicitly declare a public parameterless instance constructor called the default constructor. The default constructor returns a zero-initialized instance known as the default value
for the value type:
• For all simple-types, the default value is the value produced by a bit pattern of all zeros:
□ For sbyte, byte, short, ushort, int, uint, long, and ulong, the default value is 0.
□ For char, the default value is '\x0000'.
□ For float, the default value is 0.0f.
□ For double, the default value is 0.0d.
□ For decimal, the default value is 0.0m.
□ For bool, the default value is false.
• For an enum-type E, the default value is 0, converted to the type E.
• For a struct-type, the default value is the value produced by setting all value type fields to their default values and all reference type fields to null.
• For a nullable-type, the default value is an instance for which the HasValue property is false and the Value property is undefined. The default value is also known as the null value of the
nullable type.
Like any other instance constructor, the default constructor of a value type is invoked using the new operator. For efficiency reasons, this requirement is not intended to actually have the
implementation generate a constructor call. In the example below, variables i and j are both initialized to zero.
class A
{
    void F() {
        int i = 0;
        int j = new int();
    }
}
Because every value type implicitly has a public parameterless instance constructor, it is not possible for a struct type to contain an explicit declaration of a parameterless constructor. A struct
type is, however, permitted to declare parameterized instance constructors (§11.3.8).
4.1.3 Struct Types
A struct type is a value type that can declare constants, fields, methods, properties, indexers, operators, instance constructors, static constructors, and nested types. The declaration of struct
types is described in §11.1.
4.1.4 Simple Types
C# provides a set of predefined struct types called the simple types. The simple types are identified through reserved words, but these reserved words are simply aliases for predefined struct types
in the System namespace, as described in the table below.
Reserved Word Aliased Type
sbyte System.SByte
byte System.Byte
short System.Int16
ushort System.UInt16
int System.Int32
uint System.UInt32
long System.Int64
ulong System.UInt64
char System.Char
float System.Single
double System.Double
bool System.Boolean
decimal System.Decimal
Because a simple type aliases a struct type, every simple type has members. For example, int has the members declared in System.Int32 and the members inherited from System.Object, and the following
statements are permitted:
int i = int.MaxValue; // System.Int32.MaxValue constant
string s = i.ToString(); // System.Int32.ToString() instance method
string t = 123.ToString(); // System.Int32.ToString() instance method
The simple types differ from other struct types in that they permit certain additional operations:
• Most simple types permit values to be created by writing literals (§2.4.4). For example, 123 is a literal of type int and 'a' is a literal of type char. C# makes no provision for literals of
struct types in general, and nondefault values of other struct types are ultimately always created through instance constructors of those struct types.
• When the operands of an expression are all simple type constants, it is possible for the compiler to evaluate the expression at compile time. Such an expression is known as a constant-expression
(§7.19). Expressions involving operators defined by other struct types are not considered to be constant expressions.
• Through const declarations, it is possible to declare constants of the simple types (§10.4). It is not possible to have constants of other struct types, but a similar effect is provided by static
readonly fields.
• Conversions involving simple types can participate in evaluation of conversion operators defined by other struct types, but a user-defined conversion operator can never participate in evaluation
of another user-defined operator (§6.4.3).
4.1.5 Integral Types
C# supports nine integral types: sbyte, byte, short, ushort, int, uint, long, ulong, and char. The integral types have the following sizes and ranges of values:
• The sbyte type represents signed 8-bit integers with values between –128 and 127.
• The byte type represents unsigned 8-bit integers with values between 0 and 255.
• The short type represents signed 16-bit integers with values between –32768 and 32767.
• The ushort type represents unsigned 16-bit integers with values between 0 and 65535.
• The int type represents signed 32-bit integers with values between –2147483648 and 2147483647.
• The uint type represents unsigned 32-bit integers with values between 0 and 4294967295.
• The long type represents signed 64-bit integers with values between –9223372036854775808 and 9223372036854775807.
• The ulong type represents unsigned 64-bit integers with values between 0 and 18446744073709551615.
• The char type represents unsigned 16-bit integers with values between 0 and 65535. The set of possible values for the char type corresponds to the Unicode character set. Although char has the
same representation as ushort, not all operations permitted on one type are permitted on the other.
The integral-type unary and binary operators always operate with signed 32-bit precision, unsigned 32-bit precision, signed 64-bit precision, or unsigned 64-bit precision:
• For the unary + and ~ operators, the operand is converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all possible values of the operand. The operation
is then performed using the precision of type T, and the type of the result is T.
• For the unary – operator, the operand is converted to type T, where T is the first of int and long that can fully represent all possible values of the operand. The operation is then performed
using the precision of type T, and the type of the result is T. The unary – operator cannot be applied to operands of type ulong.
• For the binary +, –, *, /, %, &, ^, |, ==, !=, >, <, >=, and <= operators, the operands are converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all
possible values of both operands. The operation is then performed using the precision of type T, and the type of the result is T (or bool for the relational operators). It is not permitted for
one operand to be of type long and the other to be of type ulong with the binary operators.
• For the binary << and >> operators, the left operand is converted to type T, where T is the first of int, uint, long, and ulong that can fully represent all possible values of the operand. The
operation is then performed using the precision of type T, and the type of the result is T.
The char type is classified as an integral type, but it differs from the other integral types in two ways:
• There are no implicit conversions from other types to the char type. In particular, even though the sbyte, byte, and ushort types have ranges of values that are fully representable using the char
type, implicit conversions from sbyte, byte, or ushort to char do not exist.
• Constants of the char type must be written as character-literals or as integer-literals in combination with a cast to type char. For example, (char)10 is the same as '\x000A'.
The checked and unchecked operators and statements are used to control overflow checking for integral-type arithmetic operations and conversions (§7.6.12). In a checked context, an overflow produces
a compile-time error or causes a System.OverflowException to be thrown. In an unchecked context, overflows are ignored and any high-order bits that do not fit in the destination type are discarded.
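A minimal sketch of the two contexts (the statement forms are shown; the checked(...) and unchecked(...) expression forms behave the same way):

```csharp
int max = int.MaxValue;

unchecked {
    int wrapped = max + 1;   // -2147483648: the high-order overflow bit is discarded
}

checked {
    int fails = max + 1;     // throws System.OverflowException at run time
}
```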
4.1.6 Floating Point Types
C# supports two floating point types: float and double. The float and double types are represented using the 32-bit single-precision and 64-bit double-precision IEEE 754 formats, which provide the
following sets of values:
• Positive zero and negative zero. In most situations, positive zero and negative zero behave identically as the simple value zero, but certain operations distinguish between the two (§7.8.2).
• Positive infinity and negative infinity. Infinities are produced by such operations as dividing a non-zero number by zero. For example, 1.0 / 0.0 yields positive infinity, and –1.0 / 0.0 yields
negative infinity.
• The Not-a-Number value, often abbreviated NaN. NaNs are produced by invalid floating point operations, such as dividing zero by zero.
• The finite set of non-zero values of the form s × m × 2^e, where s is 1 or –1, and m and e are determined by the particular floating point type: For float, 0 < m < 2^24 and –149 ≤ e ≤ 104; for
double, 0 < m < 2^53 and –1075 ≤ e ≤ 970. Denormalized floating point numbers are considered valid non-zero values.
The float type can represent values ranging from approximately 1.5 × 10^–45 to 3.4 × 10^38 with a precision of 7 digits.
The double type can represent values ranging from approximately 5.0 × 10^–324 to 1.7 × 10^308 with a precision of 15 or 16 digits.
If one of the operands of a binary operator is of a floating point type, then the other operand must be of an integral type or a floating point type, and the operation is evaluated as follows:
• If one of the operands is of an integral type, then that operand is converted to the floating point type of the other operand.
• Then, if either of the operands is of type double, the other operand is converted to double, the operation is performed using at least double range and precision, and the type of the result is
double (or bool for the relational operators).
• Otherwise, the operation is performed using at least float range and precision, and the type of the result is float (or bool for the relational operators).
The floating point operators, including the assignment operators, never produce exceptions. Instead, in exceptional situations, floating point operations produce zero, infinity, or NaN, as described in the following rules:
• If the result of a floating point operation is too small for the destination format, the result of the operation becomes positive zero or negative zero.
• If the result of a floating point operation is too large for the destination format, the result of the operation becomes positive infinity or negative infinity.
• If a floating point operation is invalid, the result of the operation becomes NaN.
• If one or both operands of a floating point operation is NaN, the result of the operation becomes NaN.
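These behaviors are easy to observe in any IEEE 754 environment. A quick illustration in Python, whose float is the same 64-bit double format (note that, unlike C#, Python raises an exception for 1.0 / 0.0, so the infinities are constructed directly here):

```python
import math

inf = float("inf")
nan = float("nan")

print(inf > 1.7e308)             # True: infinity exceeds every finite double
print(math.isnan(inf - inf))     # True: inf - inf is an invalid operation -> NaN
print(nan == nan)                # False: NaN compares unequal even to itself
print(0.0 == -0.0)               # True: the two zeros usually behave identically...
print(math.copysign(1.0, -0.0))  # -1.0: ...but some operations distinguish them
```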
Floating point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating point
type with greater range and precision than the double type, and implicitly perform all floating point operations using this higher precision type. Only at excessive cost in performance can such
hardware architectures be made to perform floating point operations with less precision. Rather than require an implementation to forfeit both performance and precision, C# allows a higher precision
type to be used for all floating point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the
multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a
higher range format may cause a finite result to be produced instead of an infinity.
4.1.7 The decimal Type
The decimal type is a 128-bit data type suitable for financial and monetary calculations. The decimal type can represent values ranging from 1.0 × 10^–28 to approximately 7.9 × 10^28 with 28 or 29
significant digits.
The finite set of values of type decimal are of the form (–1)^s × c × 10^-e, where the sign s is 0 or 1, the coefficient c is given by 0 ≤ c < 2^96, and the scale e is such that 0 ≤ e ≤ 28. The
decimal type does not support signed zeros, infinities, or NaNs. A decimal is represented as a 96-bit integer scaled by a power of 10. For decimals with an absolute value less than 1.0m, the value is
exact to the 28^th decimal place, but no further. For decimals with an absolute value greater than or equal to 1.0m, the value is exact to 28 or 29 digits. Unlike with the float and double data
types, decimal fractional numbers such as 0.1 can be represented exactly in the decimal representation. In the float and double representations, such numbers are often infinite fractions, making
those representations more prone to round-off errors.
If one of the operands of a binary operator is of type decimal, then the other operand must be of an integral type or of type decimal. If an integral type operand is present, it is converted to
decimal before the operation is performed.
The result of an operation on values of type decimal is what would result from calculating an exact result (preserving scale, as defined for each operator) and then rounding to fit the
representation. Results are rounded to the nearest representable value and, when a result is equally close to two representable values, to the value that has an even number in the least significant
digit position (this is known as “banker’s rounding”). A zero result always has a sign of 0 and a scale of 0.
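Banker's rounding and the exactness of decimal fractions are easy to demonstrate with Python's decimal module, which uses the same tie-to-even rule (shown for illustration only; C#'s decimal is a distinct 128-bit format):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def bankers_round(d, exponent="1"):
    """Round a Decimal to the given exponent with ties going to even."""
    return d.quantize(Decimal(exponent), rounding=ROUND_HALF_EVEN)

print(bankers_round(Decimal("2.5")))  # 2  (tie rounds to the even neighbour)
print(bankers_round(Decimal("3.5")))  # 4

# Decimal fractions such as 0.1 are exact in a decimal representation:
print(Decimal("0.1") * 3 == Decimal("0.3"))  # True
print(0.1 * 3 == 0.3)                        # False (binary round-off)
```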
If a decimal arithmetic operation produces a value less than or equal to 5 × 10^-29 in absolute value, the result of the operation becomes zero. If a decimal arithmetic operation produces a result
that is too large for the decimal format, a System.OverflowException is thrown.
The decimal type has greater precision but smaller range than the floating point types. Thus conversions from the floating point types to decimal might produce overflow exceptions, and conversions
from decimal to the floating point types might cause loss of precision. For these reasons, no implicit conversions exist between the floating point types and decimal, and without explicit casts, it
is not possible to mix floating point and decimal operands in the same expression.
4.1.8 The bool Type
The bool type represents boolean logical quantities. The possible values of type bool are true and false.
No standard conversions exist between bool and other types. In particular, the bool type is distinct and separate from the integral types; a bool value cannot be used in place of an integral value,
and vice versa.
In the C and C++ languages, a zero integral or floating point value, or a null pointer, can be converted to the boolean value false, and a non-zero integral or floating point value, or a non-null
pointer, can be converted to the boolean value true. In C#, such conversions are accomplished by explicitly comparing an integral or floating point value to zero, or by explicitly comparing an object
reference to null.
4.1.9 Enumeration Types
An enumeration type is a distinct type with named constants. Every enumeration type has an underlying type, which must be byte, sbyte, short, ushort, int, uint, long, or ulong. The set of values of
the enumeration type is the same as the set of values of the underlying type. Values of the enumeration type are not restricted to the values of the named constants. Enumeration types are defined
through enumeration declarations (§14.1).
4.1.10 Nullable Types
A nullable type can represent all values of its underlying type plus an additional null value. A nullable type is written T?, where T is the underlying type. This syntax is shorthand for
System.Nullable<T>, and the two forms can be used interchangeably.
A non-nullable value type, conversely, is any value type other than System.Nullable<T> and its shorthand T? (for any T), plus any type parameter that is constrained to be a non-nullable value type
(that is, any type parameter with a struct constraint). The System.Nullable<T> type specifies the value type constraint for T (§10.1.5), which means that the underlying type of a nullable type can be
any non-nullable value type. The underlying type of a nullable type cannot be a nullable type or a reference type. For example, int?? and string? are invalid types.
An instance of a nullable type T? has two public read-only properties:
• A HasValue property of type bool
• A Value property of type T
An instance for which HasValue is true is said to be non-null. A non-null instance contains a known value and Value returns that value.
An instance for which HasValue is false is said to be null. A null instance has an undefined value. Attempting to read the Value of a null instance causes a System.InvalidOperationException to be
thrown. The process of accessing the Value property of a nullable instance is referred to as unwrapping.
In addition to the default constructor, every nullable type T? has a public constructor that takes a single argument of type T. Given a value x of type T, a constructor invocation of the form
new T?(x)
creates a non-null instance of T? for which the Value property is x. The process of creating a non-null instance of a nullable type for a given value is referred to as wrapping.
Implicit conversions are available from the null literal to T? (§6.1.5) and from T to T? (§6.1.4).
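The HasValue/Value contract above can be sketched in a few lines of Python (a hypothetical illustration of the semantics, not C# code or an interop type):

```python
class Nullable:
    """Illustrative sketch of System.Nullable<T> semantics."""
    _NULL = object()

    def __init__(self, value=_NULL):
        self._value = value

    @property
    def has_value(self):
        return self._value is not Nullable._NULL

    @property
    def value(self):
        if not self.has_value:
            # Mirrors System.InvalidOperationException on unwrapping a null instance
            raise ValueError("Nullable object must have a value")
        return self._value

x = Nullable(42)  # wrapping: like `new int?(42)`
n = Nullable()    # a null instance
print(x.has_value, x.value)  # True 42
print(n.has_value)           # False
```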
|
{"url":"https://www.informit.com/articles/article.aspx?p=1648574&seqNum=6","timestamp":"2024-11-10T19:00:04Z","content_type":"text/html","content_length":"138485","record_id":"<urn:uuid:d6c549ba-8727-4f39-b3cc-bc88899f1199>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00333.warc.gz"}
|
Cloudy Calculator - Chrome Web Store
A shockingly versatile calculator for Chrome. (Formerly Chromey Calculator)
Math geeks rejoice! Inspired by the fantastic calculator built into Google's search engine, Cloudy Calculator handles many of the same kinds of calculations as Google does. In fact, if Cloudy Calculator can't handle a calculation, it asks Google for the result (you'll see a little "G" to the left of those calculations).

========
New in 6.1
Some long overdue bug fixes. Some highlights:
* Currency conversion works again
* Hex conversion works again

========
Quick Tips
* Scrollable history of previous calculations
* Use up/down arrow keys to access input history.
* Click on any result to insert it into the input area.
* Ctrl+Click on any result to copy to clipboard.
* Click the little arrow at the upper right to pop out to a new window.
* Last result can be accessed using the "@" variable.
* Create your own user variables -- @abc_123 = 42
* Store an unevaluated expression -- @x := 10 meters
* You'll see a faint "G" next to results calculated by Google. Click the "G" to see the original Google result.

Some of the things Cloudy Calculator is capable of handling:
* Mixed unit calculations -- 2 mi + 4 km + 3 light-years in feet
* Unit conversion -- 1/4 cup in tablespoons
* Currency conversion -- 56 dollars in euros
* Hex, octal, binary -- 4 + 0xAF + 0o71 + 0b10 in hex
* Mathematical functions -- sin, cos, tan, log, etc.
* Mathematical and physical constants -- pi, e, h, c, etc.

You can find a pretty detailed overview of the kinds of calculations Google's search engine can handle here:
* http://www.googleguide.com/calculator.html

Feel free to give us a heads up on any bugs you've noticed or suggest features here:
* http://code.google.com/p/chromey-calculator/issues/entry

(Special Thanks to MinstormsKid and IIsi 50MHz)
i cant figure out how to use variables
himei Peppermint, Mar 21, 2023
While "basic" math is handled well, certain queries are unworkable. It handles unit conversions for length pretty easily but i can't figure out for the life of me how to do temps! it may be a mere
skill issue on my part but it's only useful for quick complicated math if you don't have a calc next to you irl.
This extension give an example of: * 56 dollars in euros * 4 + 0xAF + 0o71 + 0b10 in hex * However, the example fails to return a result. The same happens with other examples. I have remove this
Updated: July 8, 2013
|
{"url":"https://chromewebstore.google.com/detail/cloudy-calculator/acgimceffoceigocablmjdpebeodphgc","timestamp":"2024-11-10T01:58:49Z","content_type":"text/html","content_length":"859363","record_id":"<urn:uuid:64747e5b-d1e5-46f7-9cd8-f4a18cfffa2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00403.warc.gz"}
|
What is 37 Celsius to Kelvin? - ConvertTemperatureintoCelsius.info
Temperature conversions are essential in many fields, from science to everyday life. Understanding how to convert between different temperature scales can be particularly useful when dealing with
international data or scientific literature.
37 degrees Celsius is equal to 310.15 Kelvin. This conversion is straightforward once you know the relationship between the Celsius and Kelvin scales. The Kelvin scale starts at absolute zero, which
is -273.15 degrees Celsius.
Converting between Celsius and Kelvin is a common task in physics, chemistry, and meteorology. Kelvin is the SI unit for temperature and is widely used in scientific calculations. Knowing how to
perform this conversion quickly can be valuable for students, researchers, and professionals working with temperature data.
Understanding Temperature Conversion
Temperature conversion allows us to express thermal measurements across different scales. It enables scientific collaboration and practical applications worldwide.
The Basics of Temperature Scales
Temperature scales provide standardized ways to measure and communicate thermal energy. The three most common scales are Celsius, Fahrenheit, and Kelvin.
Celsius, created by Anders Celsius in 1742, uses water’s freezing and boiling points as reference. 0°C is water’s freezing point, and 100°C is its boiling point at sea level.
Fahrenheit, developed by Daniel Gabriel Fahrenheit in 1724, sets water’s freezing point at 32°F and boiling point at 212°F.
Kelvin, established by Lord Kelvin in 1848, is the SI unit of temperature. It starts at absolute zero, the lowest possible temperature. Water freezes at 273.15 K and boils at 373.15 K.
Conversion Formulas
Converting between temperature scales involves simple mathematical formulas. These equations allow for precise translations between Celsius, Fahrenheit, and Kelvin.
To convert Celsius to Kelvin, add 273.15: K = °C + 273.15
For Kelvin to Celsius, subtract 273.15: °C = K – 273.15
Celsius to Fahrenheit conversion: °F = (°C × 9/5) + 32
Fahrenheit to Celsius conversion: °C = (°F – 32) × 5/9
These formulas enable accurate temperature conversions for scientific, industrial, and everyday applications. They bridge the gap between different measurement systems used globally.
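The four formulas translate directly into code; a small sketch:

```python
def c_to_k(c):
    """Celsius -> Kelvin: K = C + 273.15"""
    return c + 273.15

def k_to_c(k):
    """Kelvin -> Celsius: C = K - 273.15"""
    return k - 273.15

def c_to_f(c):
    """Celsius -> Fahrenheit: F = C * 9/5 + 32"""
    return c * 9 / 5 + 32

def f_to_c(f):
    """Fahrenheit -> Celsius: C = (F - 32) * 5/9"""
    return (f - 32) * 5 / 9

print(round(c_to_k(37), 2))  # 310.15 (the article's example)
```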
Calculating Kelvin from Celsius
Converting Celsius to Kelvin is a straightforward process that involves a simple mathematical formula. This conversion is useful in scientific calculations and understanding temperature scales.
Formula Application
The formula to convert Celsius to Kelvin is:
K = C + 273.15
• K is the temperature in Kelvin
• C is the temperature in Celsius
This formula adds 273.15 to the Celsius temperature to obtain the Kelvin value. The addition of 273.15 is necessary because the Kelvin scale starts at absolute zero, which is -273.15°C.
Sample Calculation
Let’s apply the formula to convert 37°C to Kelvin:
K = 37 + 273.15 K = 310.15
Therefore, 37°C is equal to 310.15 Kelvin.
It’s important to note that the Kelvin scale does not use the degree symbol (°). Scientists and researchers commonly use Kelvin in various fields, including physics and chemistry, due to its relation
to absolute zero.
|
{"url":"https://converttemperatureintocelsius.info/what-is-37-celsius-to-kelvin/","timestamp":"2024-11-05T22:20:47Z","content_type":"text/html","content_length":"74208","record_id":"<urn:uuid:62147cf3-1d95-4580-9bf9-b641351a49cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00430.warc.gz"}
|
Mushroom Yield Calculator - Calculator Wow
Mushroom Yield Calculator
Mushroom cultivation has become a popular and rewarding endeavor for enthusiasts and professionals alike. As the demand for fresh, high-quality mushrooms rises, cultivators are seeking ways to
optimize their yields. The Mushroom Yield Calculator emerges as a valuable tool, offering insights into the intricate balance between the total weight of mushrooms (WM) and the harvested dry
substrate (WDS).
Importance of Mushroom Yield Calculation
Understanding the yield of a mushroom cultivation process is crucial for growers. It not only provides a quantitative measure of success but also aids in adjusting cultivation practices for optimal
results. The Mushroom Yield Calculator serves as a guide, helping cultivators make informed decisions to enhance productivity and profitability.
How to Use the Mushroom Yield Calculator
Using the calculator is simple and user-friendly. Follow these steps:
1. Input Total Weight of Mushrooms (WM): Enter the total weight of the harvested mushrooms in the designated field.
2. Input Total Weight of Harvested Dry Substrate (WDS): Specify the total weight of the harvested dry substrate.
3. Click “Calculate Yield”: Hit the button to perform the calculation.
4. View Mushroom Yield (MY): The result will be displayed, indicating the mushroom yield as a percentage.
By utilizing this tool, cultivators can fine-tune their cultivation methods, ensuring a more efficient and productive harvest.
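The calculator's steps reduce to a single formula. A hedged sketch — the relation MY = WM / WDS × 100 is inferred from the page's description of yield as a percentage, since the page never states the formula explicitly:

```python
def mushroom_yield(wm, wds):
    """Mushroom yield MY (%) = total mushroom weight WM
    divided by harvested dry substrate weight WDS, times 100.
    (Formula inferred from the calculator's description.)"""
    if wds <= 0:
        raise ValueError("dry substrate weight must be positive")
    return wm / wds * 100

print(mushroom_yield(250, 1000))  # 25.0
```

Any consistent weight units work, as the page notes, since only the ratio matters.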
10 FAQs about Mushroom Yield Calculator
1. What is Mushroom Yield?
Mushroom Yield is a measure of the efficiency of a cultivation process, expressed as the percentage of mushrooms harvested relative to the weight of the substrate.
2. Why is Mushroom Yield Calculation Important?
Calculating yield helps growers assess the success of their cultivation methods, enabling adjustments for improved efficiency and profitability.
3. Is the Mushroom Yield Calculator Suitable for Different Mushroom Varieties?
Yes, the calculator is versatile and applicable to various mushroom species.
4. Can I Use the Calculator for Small-Scale Cultivation?
Absolutely, the Mushroom Yield Calculator is beneficial for both small-scale and large-scale cultivators.
5. How Accurate is the Calculation?
The accuracy depends on the precision of the input data. Accurate measurements will result in more reliable yield calculations.
6. Are There Any Specific Units for Input?
The calculator is flexible and accepts input in any weight units (e.g., grams, kilograms, pounds).
7. Can I Use the Calculator for Different Growth Stages?
While the primary focus is on the harvest stage, the calculator can be adapted for use at different stages by adjusting input parameters.
8. Is the Calculator Suitable for Beginners?
Yes, the Mushroom Yield Calculator is designed to be user-friendly, making it accessible for growers at all levels of experience.
9. How Often Should I Use the Calculator?
It is advisable to use the calculator after each harvest to monitor and optimize cultivation practices over time.
10. Can the Calculator Help Predict Future Yields?
While it provides insights into past yields, the calculator’s primary function is to assess and improve current cultivation practices.
The Mushroom Yield Calculator proves to be an invaluable companion for mushroom cultivators, offering a pathway to enhanced yields and improved cultivation practices. As the demand for mushrooms
continues to grow, utilizing such tools becomes imperative for those aiming to thrive in this dynamic industry. Embrace the power of precision, and watch your mushroom cultivation endeavors flourish.
|
{"url":"https://calculatorwow.com/mushroom-yield-calculator/","timestamp":"2024-11-06T15:36:28Z","content_type":"text/html","content_length":"65248","record_id":"<urn:uuid:e126438e-6c63-41c2-a031-99bf5313f812>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00583.warc.gz"}
|
A rectangular field of size m×n contains m·n square areas. Some of the areas are occupied by a given crop (tomatoes, carrots, etc.), identified by a strictly positive natural number. It is known that the crops are grouped into disjoint rectangles and that one crop is always separated from another by areas without crops, identified by the value 0.

Write a program that reads fields and prints the number of rectangular crop plots.

Input consists of a sequence of fields. For each field, two natural numbers m and n are given, with m ≥ 1 and n ≥ 1, representing the size of the field. Then m rows follow, each with n natural numbers representing the crop in each area. The fields satisfy the hypotheses described above.

For each field in the input, print on one line the number of rectangular crop plots.
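One way to do the counting itself (I/O loop omitted, so this is an illustrative sketch rather than a complete submission): since every crop occupies a full rectangle separated from other crops by zeros, each rectangle contributes exactly one "top-left corner" — a non-zero cell whose upper and left neighbours are outside the field or equal to 0 — so counting corners counts rectangles.

```python
def count_crop_rectangles(grid):
    """Count the disjoint crop rectangles in a field given as a list of
    rows of non-negative integers (0 = no crop)."""
    count = 0
    for i, row in enumerate(grid):
        for j, v in enumerate(row):
            top_is_empty = i == 0 or grid[i - 1][j] == 0
            left_is_empty = j == 0 or row[j - 1] == 0
            if v != 0 and top_is_empty and left_is_empty:
                count += 1  # found the top-left corner of one rectangle
    return count

print(count_crop_rectangles([[1, 1, 0],
                             [1, 1, 0],
                             [0, 0, 2]]))  # 2
```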
|
{"url":"https://jutge.org/problems/P45829_en","timestamp":"2024-11-10T12:32:54Z","content_type":"text/html","content_length":"23582","record_id":"<urn:uuid:b574d481-b4d0-4c27-abe2-4bf67e05b31d>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00433.warc.gz"}
|
User guide
1. Introduction
ADAMANT (Applicable DAta of Many-electron Atom eNergies and Transitions) is devoted to spectroscopic data of atoms and ions: energy levels and radiative transition parameters (line strengths, oscillator strengths, transition probabilities). The database also contains parameters involving free electrons, such as autoionization probabilities, electron-impact collision strengths, cross sections and rates, and dielectronic recombination rates, which are needed for modeling both high-temperature plasmas (astrophysical, nuclear fusion) and low-temperature plasmas, such as planetary nebulae and the working media of spectroscopic and medical devices. A huge amount of spectroscopic data for different atoms and ions has been produced so far. Unfortunately, applying these data for modeling purposes is a rather complex task, because they are calculated in different approximations, with different computer-code suites, and with different atomic-data accuracy. It therefore becomes difficult to match one set of data, e.g., energy levels and radiative transition probabilities, with another set involving free electrons, e.g., electron-impact excitation or ionization rates. In ADAMANT, the atomic data sets for a specific atom were produced using the same computer-code suites, applying the same approximation for the inclusion of relativistic and correlation effects, and were generated using an identical multireference wavefunction basis. Such an approach significantly reduces the workload for data application, making automatic data parsing in plasma-modeling codes possible. The ADAMANT database is constantly updated and extended with new results.
2. Data presented in ADAMANT
Spectroscopic parameters:
• energy levels,
• weights of the wavefunctions of levels,
• radiative transition wavelengths,
• weighted oscillator strengths,
• transition probabilities,
• level lifetimes,
• Lande factors,
• matrix element of a multipole transition operator,
• line strengths.
Electron-atom(ion) interaction parameters:
• electron-impact excitation cross sections, collision strengths and rates,
• electron-impact ionization cross sections, collision strengths and rates,
• level autoionization probabilities,
• dielectronic recombination rates.
In ADAMANT, various approaches are used to produce the data (parameters). A general description of them can be found below and in the database under 'info'.
3. The steps to search and obtain data
ADAMANT opens with a periodic table containing differently colored boxes for the chemical elements. Yellow indicates available data sets; green indicates that data are absent.
Clicking on an element opens a table with several columns. The first column gives the degree of ionization (zero indicates the neutral atom). The remaining columns hold data calculated in different approximations. The abbreviations QRHF and DFS stand for Quasirelativistic Hartree-Fock and Dirac-Fock-Slater, respectively. CI indicates that correlation effects were included using the configuration interaction method.
Each column contains two links. The link 'info' opens a description of the methods and approaches used to obtain the presented parameters, while the link 'data' opens a window with a number of boxes indicating the available sets of spectroscopic parameters. Data can also be reached by clicking on the triangles in the middle of the field; the available data for isoelectronic sequences are obtained by clicking on the yellow boxes.
3.1 Energy level parameters
The values of the energy levels can be reached by clicking on the box 'Energy levels'. In the window that opens, the user can change the atom or ion and choose the ionization degree, the approach used for the calculation, the output type, the interval of energy values, and the units. The definitions used in the tables of output data are explained in Table 1, which also gives exhaustive explanations of some other notations used for energy levels.
Table 1: The notations used for energy levels
Notations used Explanation of notations
in ADAMANT
nlN The shell $nl^{N}$ of $N$ equivalent electrons
nljN, nl+N, The subshell $nl\,_{j}^{N}$ (or $nl\,_{\pm}^{N}$, when $j=l\pm1/2$) . nl+N (nl$-$N) is used when j=l+1/2 (j=l$-$1/2)
Term L S denotes a term $^{2S+1}\mathrm{L}$
Level L S J denotes a level $^{2S+1}\mathrm{L}$$_{J}$
nl(N) 2S+1 L Determines $nl^{N}$$\,^{2S+1}\mathrm{_{\gamma}L}_{J}$. Gama ($\gamma$ ) denotes a seniority number. Total angular momentum J is given in a separate column
gama J
nljNJ, nl+N
(2J), Determines $nl\,_{j}^{N}\,\gamma\, J\,$ ($nl\,_{\pm}^{N}$$\,\gamma\, J$)
Configuration n1l1N1 n2l2N2 ... is a label of level associated with the non-relativistic configuration $n_{1}l_{1}^{N_{1}}\, n_{2}l_{2}^{N_{2}}...$. E.g., for C-like elements, the configuration $1s^
{2}\,2s^{2}\,2p^{2}$is denoted as 2p2. Closed shells 1s2 and 2s2 are not listed
Wavefunction In intermediate coupling, a wavefunction of a level is given by $\Psi(\alpha J)=\sum_{i} c_{i}^{\alpha} \Phi_{i}(C_{i}\, J).$ $\Phi_{i}(C_{i}J)$ is a basis function. The closed shells
in $C_{i}$ are omitted
n1l1(N1) 2S1+1 L1 $\gamma_{1}$.n2l2(N2) 2S2+1 L2 $\gamma_{2}$\_2S'+1 L' .n3l3(N3)$\gamma_{3}$... L S determines the label of level for $\scriptsize{ CJ\equiv {n_1} {l_1}^{N_1} \, {n_2}
Level label {l_2}^{N_2}\,{n_3} {l_3}^{N_3}...\,\,_{\gamma_1}^{2S_1+1}{L_1}\,\,_{\gamma_2}^{2S_2+1}{L_2}\,\left(\,\,^{2S'+1}L'\right)\,\,_{\gamma_3}^{2S_3+1}{L_3}\,....^{2S+1}\mathrm{L}_J }$ in
(LS-coupling) LS-coupling. When N=1, the notation of term 2S+1 L $\gamma$ is omitted. Total angular momentum J is placed in separate column. For example, 3s(1).3p(4)3P2\_4P, J=3/2 denotes $\
scriptsize{ CJ\equiv3s\,3p^{4}\,{}_{1}^{2}S\,\,_{2}^{3}P\,\,^{4}P_{3/2} }$
n1l1$\pm$N1 (2J1) 2J1 n2l2$\pm$N2 (2J2) 2J' n3l3$\pm$N3 (2J3) 2J''...2J determines the label of a level for $\scriptsize{CJ\equiv n_{1}l_{\pm}^{N_{1}}\, n_{2}l_{\pm}^{N_{2}}\, n_{3}l_{\
Level label pm}^{N_{3}}\,\gamma_{1\,}J_{1}\,\gamma_{2}\, J_{2}\,\left(\, J'\right)\,\gamma_{3}J_{3}\,(J'')...J}$ in jj-coupling. The number immediately after the parentheses indicate the 2J value
(jj-coupling) when all preceding shells are coupled. If it is not necessary, the symbol $\gamma_{i}$ is suppressed in $CJ$. Only open subshells are given. For example, 2s+1(1)1 2p+2(4)5 3d-1(3)4
denotes $\scriptsize{ CJ\equiv 2s_{+}(2p_{+}^{2}(2))(5/2)(3d_{-}(3/2))2}$
Squared expansion coefficients $c_{i}^{\alpha}$ of $\Psi(\,\alpha\, J)$ are expressed as percentages. Only the leading contributions are presented. For example, in ADAMANT notations the
Contributions expansion of $\Psi(\,\alpha\,5/2)$ looks as 81 3s.3p(4)3P2\_4P 13 3p(2)3P2.3d\_4P . $c_{i}^{\alpha}$ gives 81% (${\scriptstyle \mathsf{{\displaystyle \Phi}_{i}}}(3s\,3p^{4}\,{}_{1}^{2}S
\,{}_{2}^{3}P\,\,^{4}P_{5/2}\,)$ ), while $c_{j}^{\alpha}$ gives 13% (${\scriptstyle \mathsf{{\displaystyle \Phi_{j}}}}(3p^{2}\,3d\,{}_{2}^{3}P\,{}_{1}^{2}D\,^{4}P_{5/2})$. Total
angular momentum $J=5/2$ is placed in separate column. In the case of N=1, the notation of N is omitted.
N Level index. Positive integer number assigned to an energy level and the corresponding wavefunction for a task considered
E0 The energy level of the lowest level
E The energy of the level for $\Psi(\,\alpha\, J)$ calculated from E0
J The total (final) angular momentum J of $\Psi(\,\alpha\, J)$
p The parity of $\Psi(\,\alpha\, J)$. $p=o$ denotes an odd (ODD) parity. $p=e$ denotes an even (EVEN) parity
T Radiative lifetime $\tau$ of a level
g The Lande factor $g_{\alpha\, J}$ of a level $J$
3.2 Radiative transitions
The explanations of notations used in the output files is presented in Table 2.
Table 2. The notations used for the radiative transitions.
Notations used Explanation of notations
in ADAMANT
Type Type of a radiative transition: EK or MK. EK (MK) is electric (magnetic) transition of the multipolity K. For example, E1 denotes electric dipole transition, while M2 denotes magnetic
quadrupole transition
Ni The level index of initial (upper) level
Ji The total angular momentum J of an initial (upper) level
Nf The level index of final (lower) level
Jf The total angular momentum J of a final (upper) level
Wavelength The transition wavelength$\lambda_{i\rightarrow f}$$\,$between the initial Ni and final Nf levels. Wavelength (in vacuum) is given in selected units.
S The transition line strength $S_{i\rightarrow f}^{EK,MK}$ in selected units, e.g. a.u.
M M denotes $M_{i\rightarrow j}^{EK,MK}$ transition matrix element of $EK,\, MK$ transitions operator
gf The weighted oscillator strength $g_{i}f_{i\rightarrow f}^{EK,MK}=g_{j}f_{f\rightarrow i}^{EK,MK}$.
$gf=(2K+1)^{-1}\omega\,(\alpha\omega)^{2K-2}S\,,$where $\omega=E_{i}-E_{f}$ in Hatree a.u. and $\alpha$ is fine structure constant.
A The radiative transition probability $A_{i\rightarrow f}^{EK,MK}(1/s)$. In a.u. $A=2\,\alpha^{3}\,\omega^{2}\,$ f
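As a sketch of how the tabulated relations chain together, the two formulas from Table 2 can be transcribed literally into Python. Quantities are in Hartree atomic units; the numerical value of the fine-structure constant is a CODATA figure not given in the text, and f is taken to be the oscillator strength as the table states:

```python
ALPHA = 7.2973525693e-3  # fine-structure constant (CODATA 2018 value)

def weighted_oscillator_strength(K, omega, S):
    """gf = (2K+1)^-1 * omega * (alpha*omega)^(2K-2) * S,
    with omega = E_i - E_f in Hartree a.u. (Table 2)."""
    return omega * (ALPHA * omega) ** (2 * K - 2) * S / (2 * K + 1)

def transition_probability(omega, f):
    """A = 2 * alpha^3 * omega^2 * f in a.u. (Table 2)."""
    return 2 * ALPHA ** 3 * omega ** 2 * f

# For an E1 transition (K = 1) the (alpha*omega) factor drops out:
print(weighted_oscillator_strength(1, 1.0, 3.0))  # 1.0
```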
3.3 Electron-impact excitation
Three boxes, titled 'Electron-impact excitation collision strength', 'Electron-impact excitation cross section', and 'Electron-impact excitation rates', are devoted to the parameters describing the excitation of atoms by electron impact. The notations are explained in Table 3.
Table 3. The notations used for electron-impact excitations (EIE).
Notations used Explanation of notations
in ADAMANT
CS The electron-impact excitation collisions strength $\Omega_{i\rightarrow j}$
sigma The electron-impact excitation cross section $\sigma_{i\rightarrow f}$ in Mb
vsigma The electron-impact excitation collision rate $\lt v\sigma \gt_{i\rightarrow j}$ in cm$^3$/s
ECS The effective excitation collision strength $\upsilon_{i\rightarrow j}$
Temperatures Electron temperature T in eV
Eex The excitation energy $\Delta E_{i\rightarrow j}$ from Ni to Nf. Eex is given in eV
Eejected The ejected electron energy in eV
Eel The electron impact (incident) energy Eel=Ki*Eex
(Ki is a numerical coefficient) or Eel=E+Eejected. E is given in eV
3.4 Electron-impact ionization
Three boxes, titled 'Electron-impact ionization strength', 'Electron-impact ionization cross section', and 'Electron-impact ionization rates', give access to the parameters describing the ionization of atoms by electrons. The notations are explained in Table 4.
Table 4. The notations used for electron-impact ionization (EI).
Notations used Explanation of notations
in ADAMANT
IS The electron-impact ionization strength $\Omega$
sigma The electron-impact ionization cross section $\sigma$ in Mb
vsigma The electron-impact ionization rate $\lt v\sigma \gt $ in cm$^3$/s
Temperatures Electron temperature T in eV
Ei The ionization energy $\Delta E_{i\rightarrow j}$. E is given in eV
E2 The ejected electron energy in eV
E1 The electron impact (incident) energy E1=Ei+E2
E2 is given in eV
3.5 Level autoionization
For the autoionization probabilities, the explanations of the notations are given in Table 5.
Table 5: The notations of level autoionization (AI) quantities.
Notations used Explanation of notations
in ADAMANT
A Autoionization transition probability $A_{i\rightarrow f}(1/s)$ from the initial level Ni
Ni The level index of initial (upper) level
Ji The total angular momentum J of an initial (upper) level
Nf The level index of final (lower) level
Jf The total angular momentum J of a final (upper) level
3.6 Dielectronic recombination
The dielectronic recombination rates are presented in the section 'Dielectronic recombination'. The explanations of the notations are given in Table 6.
Table 6: The notations of level dielectronic recombination rates
Notations used Explanation of notations in ADAMANT
in ADAMANT
DR Dielectronic recombination rate to the level Nf in cm$^3/$s
Nf The level index of a final level
Jf The total angular momentum J of a final level
|
{"url":"http://www.adamant.tfai.vu.lt/content/user-guide","timestamp":"2024-11-02T02:06:57Z","content_type":"text/html","content_length":"27663","record_id":"<urn:uuid:826a2162-3efe-4fce-a3fb-611767b376ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00570.warc.gz"}
|
Keras documentation: CosineDecay
CosineDecay class
A LearningRateSchedule that uses a cosine decay with optional warmup.
See Loshchilov & Hutter, ICLR2016, SGDR: Stochastic Gradient Descent with Warm Restarts.
For the idea of a linear warmup of our learning rate, see Goyal et al..
When we begin training a model, we often want an initial increase in our learning rate followed by a decay. If warmup_target is an int, this schedule applies a linear increase per optimizer step to
our learning rate from initial_learning_rate to warmup_target for a duration of warmup_steps. Afterwards, it applies a cosine decay function taking our learning rate from warmup_target to alpha for a
duration of decay_steps. If warmup_target is None we skip warmup and our decay will take our learning rate from initial_learning_rate to alpha. It requires a step value to compute the learning rate.
You can just pass a backend variable that you increment at each training step.
The schedule is a 1-arg callable that produces a warmup followed by a decayed learning rate when passed the current optimizer step. This can be useful for changing the learning rate value across
different invocations of optimizer functions.
Our warmup is computed as:
def warmup_learning_rate(step):
    completed_fraction = step / warmup_steps
    total_delta = target_warmup - initial_learning_rate
    return completed_fraction * total_delta
And our decay is computed as:
if warmup_target is None:
    initial_decay_lr = initial_learning_rate
else:
    initial_decay_lr = warmup_target

def decayed_learning_rate(step):
    step = min(step, decay_steps)
    cosine_decay = 0.5 * (1 + cos(pi * step / decay_steps))
    decayed = (1 - alpha) * cosine_decay + alpha
    return initial_decay_lr * decayed
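Putting the two pieces together, here is a plain-Python sketch of the full schedule. This is an illustrative re-implementation for sanity-checking schedule values, not the actual Keras code:

```python
import math

def cosine_decay_with_warmup(step, initial_lr, decay_steps, alpha=0.0,
                             warmup_target=None, warmup_steps=0):
    # Linear warmup phase: initial_lr -> warmup_target over warmup_steps.
    if warmup_target is not None and step < warmup_steps:
        completed_fraction = step / warmup_steps
        return initial_lr + completed_fraction * (warmup_target - initial_lr)
    # Cosine decay phase: start_lr -> alpha * start_lr over decay_steps.
    start_lr = initial_lr if warmup_target is None else warmup_target
    decay_step = min(step - warmup_steps, decay_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * decay_step / decay_steps))
    return start_lr * ((1 - alpha) * cosine + alpha)

print(cosine_decay_with_warmup(0, 0.1, 1000))    # 0.1 (start of decay)
print(cosine_decay_with_warmup(500, 0.1, 1000))  # ≈ 0.05 (halfway through decay)
```

With `alpha=0.0` the rate decays all the way to zero; a nonzero `alpha` floors the schedule at `alpha * start_lr`.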
Example usage without warmup:
decay_steps = 1000
initial_learning_rate = 0.1
lr_decayed_fn = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate, decay_steps)
Example usage with warmup:
decay_steps = 1000
initial_learning_rate = 0
warmup_steps = 1000
target_learning_rate = 0.1
lr_warmup_decayed_fn = keras.optimizers.schedules.CosineDecay(
    initial_learning_rate, decay_steps, warmup_target=target_learning_rate,
    warmup_steps=warmup_steps)
You can pass this schedule directly into a keras.optimizers.Optimizer as the learning rate. The learning rate schedule is also serializable and deserializable using
keras.optimizers.schedules.serialize and keras.optimizers.schedules.deserialize.
• initial_learning_rate: A Python float. The initial learning rate.
• decay_steps: A Python int. Number of steps to decay over.
• alpha: A Python float. Minimum learning rate value for decay as a fraction of initial_learning_rate.
• name: String. Optional name of the operation. Defaults to "CosineDecay".
• warmup_target: A Python float. The target learning rate for our warmup phase. Will cast to the initial_learning_rate datatype. Setting to None will skip warmup and begin the decay phase from
initial_learning_rate. Otherwise the scheduler will warm up from initial_learning_rate to warmup_target.
• warmup_steps: A Python int. Number of steps to warmup over.
A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar tensor of the same type as initial_learning_rate.
|
{"url":"https://keras.io/api/optimizers/learning_rate_schedules/cosine_decay/","timestamp":"2024-11-02T14:44:23Z","content_type":"text/html","content_length":"20720","record_id":"<urn:uuid:e04d3982-59c5-4054-a729-4076b95a68e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00675.warc.gz"}
|
Provide examples of valve sizing calculations based on flow rate and pressure drop. | Mahesh Trading Corporation
When it comes to designing and implementing hydraulic systems, proper valve sizing is crucial to ensure efficient flow, pressure control, and overall system reliability. In this blog post, we’ll
delve into the world of valve sizing and provide examples of calculations based on flow rate and pressure drop. We’ll cover the basics of valve selection, the importance of sizing, and walk you
through step-by-step examples of calculations.
What is Valve Sizing?
Valve sizing is the process of selecting a valve that can handle the desired flow rate and pressure drop of a system. The goal is to ensure that the valve can accommodate the required flow and
pressure without excessive restriction, which can lead to system failure, inefficiency, or even safety issues.
Why is Valve Sizing Important?
Proper valve sizing is essential for several reasons:
1. System Efficiency: A properly sized valve ensures that the system operates efficiently, with minimal energy loss and maximum performance.
2. Reliability: Oversized or undersized valves can lead to premature wear and tear, increasing the risk of system failure.
3. Pressure Drop Management: Sizing a valve for the correct pressure drop ensures that the system remains stable and does not experience sudden pressure surges.
4. Flow Rate Management: Accurate valve sizing ensures that the system can handle the required flow rate, minimizing the risk of flow restrictions or blockages.
Calculating Valve Sizing: Flow Rate and Pressure Drop
To calculate valve sizing, we need to consider two primary factors: flow rate and pressure drop. Here’s a step-by-step guide to calculating valve sizing:
1. Determine the Flow Rate: The flow rate is the volume of fluid that needs to be passed through the valve per unit time (e.g., liters per minute or gallons per minute). You can typically find the
flow rate required for your system through manufacturer specifications or by conducting experiments.
2. Determine the Pressure Drop: The pressure drop is the difference in pressure between the inlet and outlet of the valve. This is the force required to push the fluid through the valve. You can
calculate the pressure drop using the following formula:
Pressure Drop (ΔP) = (Flow Rate x Resistance) / (Cross-sectional Area x Density of Fluid)
ΔP is the pressure drop (Pa or psi)
Flow Rate is the volume flow rate (L/min or GPM)
Resistance is the resistance coefficient (dimensionless)
Cross-sectional Area is the area of the valve orifice (mm² or in²)
Density of Fluid is the density of the fluid being pumped (kg/m³ or lb/ft³)
3. Choose a Valve: Once you have calculated the flow rate and pressure drop, you can choose a valve based on the manufacturer’s specifications and recommendations. Valve manufacturers typically
provide charts and graphs that outline the relationship between flow rate, pressure drop, and valve size.
4. Verify the Sizing: Finally, verify the valve sizing by re-calculating the pressure drop using the chosen valve’s specifications. This ensures that the valve can handle the required flow rate and
pressure drop.
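The pressure-drop formula above is a simplified model. In industry practice, liquid control valves are usually sized with the flow coefficient Cv (per the ISA/IEC valve sizing standards), using the relation Q = Cv·√(ΔP/SG), with Q in US gallons per minute, ΔP in psi, and SG the specific gravity of the fluid. A small sketch of that standard relation (illustrative only, not tied to any particular valve catalog):

```python
import math

def required_cv(flow_gpm, delta_p_psi, specific_gravity=1.0):
    """Flow coefficient Cv needed to pass flow_gpm at the given pressure drop.

    Standard liquid sizing relation: Q = Cv * sqrt(dP / SG).
    """
    return flow_gpm / math.sqrt(delta_p_psi / specific_gravity)

def pressure_drop_psi(flow_gpm, cv, specific_gravity=1.0):
    """Pressure drop (psi) across a valve with coefficient cv at flow_gpm."""
    return specific_gravity * (flow_gpm / cv) ** 2

# Example: 100 GPM of water with a 25 psi allowable drop needs Cv = 20.
print(required_cv(100, 25))        # 20.0
print(pressure_drop_psi(100, 20))  # 25.0
```

Once the required Cv is known, a valve is chosen from the manufacturer’s catalog with a rated Cv at or slightly above that value.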
Example 1: Sizing a Valve for a Pumping System
A pumping system requires a valve to control the flow rate and pressure of a 10,000 LPM (2,642 GPM) water pumping system. The system operates at a pressure range of 15-20 bar (217-290 psi) and
requires a pressure drop of 5 bar (72.5 psi) to ensure efficient pumping.
Using the formula above, we can calculate the pressure drop:
ΔP = (10,000 LPM x 0.5) / (50 mm² x 1000 kg/m³) = 25.4 Pa (0.036 psi)
Since we’re looking for a 5-bar pressure drop, we’d need a valve with a much higher resistance coefficient. Let’s assume a value of 10.
ΔP = (10,000 LPM x 10) / (50 mm² x 1000 kg/m³) = 50.8 Pa (0.072 psi)
Using a valve manufacturer’s chart, we find that a valve with a 50 mm² (7.75 in²) orifice size and a pressure drop range of 5-10 bar (72.5-145 psi) would be suitable for this application.
Example 2: Sizing a Valve for a Cooling System
A cooling system requires a valve to regulate the flow rate and pressure of a 5-bar (72.5-psi) system operating at a flow rate of 100 LPM (26.42 GPM). The system requires a pressure drop of 1 bar
(14.5 psi) to ensure efficient cooling.
Using the formula above, we can calculate the pressure drop:
ΔP = (100 LPM x 0.5) / (20 mm² x 1000 kg/m³) = 2.5 Pa (0.036 psi)
Since we’re looking for a 1-bar pressure drop, we’d need a valve with a higher resistance coefficient. Let’s assume a value of 5.
ΔP = (100 LPM x 5) / (20 mm² x 1000 kg/m³) = 12.5 Pa (0.181 psi)
Using a valve manufacturer’s chart, we find that a valve with a 20 mm² (3.11 in²) orifice size and a pressure drop range of 1-5 bar (14.5-72.5 psi) would be suitable for this application.
Calculating Valve Sizing for Different Fluids
When calculating valve sizing, it’s essential to consider the properties of the fluid being pumped. For example, if the fluid is dense and viscous, you may need to adjust the valve size accordingly.
Here’s an example of calculating valve sizing for a fluid with a different density:
Assume we’re sizing a valve for a 10,000 LPM (2,642 GPM) flow rate and a pressure drop of 5 bar (72.5 psi) for a fluid with a density of 1500 kg/m³ (94.5 lb/ft³). Using the formula above, we get:
ΔP = (10,000 LPM x 0.5) / (50 mm² x 1500 kg/m³) = 33.33 Pa (0.0481 psi)
Since we’re looking for a 5-bar pressure drop, we’d need a valve with a higher resistance coefficient. Let’s assume a value of 10.
ΔP = (10,000 LPM x 10) / (50 mm² x 1500 kg/m³) = 66.67 Pa (0.0962 psi)
Using a valve manufacturer’s chart, we find that a valve with a 50 mm² (7.75 in²) orifice size and a pressure drop range of 5-10 bar (72.5-145 psi) would be suitable for this application.
In conclusion, proper valve sizing is crucial for efficient system operation, reliability, and safety. By understanding the relationship between flow rate and pressure drop, you can calculate valve
sizing using the formulas and examples provided in this post. Remember to verify valve sizing by re-calculating the pressure drop using the chosen valve’s specifications to ensure that the valve can
handle the required flow rate and pressure drop.
As a designer or engineer, it’s essential to consider valve sizing early in the design process to ensure that your system operates efficiently and reliably. With the right valve sizing, you can
minimize energy losses, reduce maintenance costs, and increase system reliability.
Additional Resources
For more information on valve sizing and hydraulic systems, be sure to check out the following resources:
API 66: “Hydraulic System Design”
API 674: “Lubricating Systems”
ISO 12184: “Valves – Vocabulary”
Valve manufacturer’s datasheets and technical specifications
Articles on fluid dynamics, pressure drop calculation, and valve sizing on reputable engineering websites and forums.
Common Valve Sizing Mistakes
When designing and implementing hydraulic systems, it’s common to make mistakes in valve sizing. Here are a few common mistakes to avoid:
Insufficient pressure drop: Failing to account for the required pressure drop can lead to system instability and reduced efficiency.
Inadequate flow rate calculation: Incorrectly calculating the flow rate can result in valves that are too small or too large for the system.
Ignoring fluid properties: Neglecting the properties of the fluid being pumped, such as density and viscosity, can lead to inaccurate valve sizing.
By avoiding these common mistakes and following the guidelines and examples provided in this post, you can ensure that your hydraulic system is properly sized and operates efficiently and reliably.
|
{"url":"https://maheshvalves.com/provide-examples-of-valve-sizing-calculations-based-on-flow-rate-and-pressure-drop/","timestamp":"2024-11-09T01:25:31Z","content_type":"text/html","content_length":"163835","record_id":"<urn:uuid:da360fff-506a-4dbb-8f82-36aab8000415>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00458.warc.gz"}
|
[Solved] Let f(x) = √x. If the rate of change of f at x = c… | SolutionInn
Let f(x) = √x. If the rate of change of f at x = c is twice its rate of change at x = 1, then find the value of c.
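For reference, a short derivation, assuming the garbled '?x' in the scraped question denotes √x (this is not taken from the paywalled solution):

$$f(x)=\sqrt{x},\qquad f'(x)=\frac{1}{2\sqrt{x}},\qquad f'(c)=2\,f'(1)\;\Longrightarrow\;\frac{1}{2\sqrt{c}}=2\cdot\frac{1}{2}=1\;\Longrightarrow\;\sqrt{c}=\frac{1}{2}\;\Longrightarrow\;c=\frac{1}{4}.$$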
|
{"url":"https://www.solutioninn.com/study-help/questions/letfx-x-if-the-rate-of-change-of-f-at-3981301","timestamp":"2024-11-12T10:28:51Z","content_type":"text/html","content_length":"93610","record_id":"<urn:uuid:c27f40de-a7f0-458e-be00-b03103e62d92>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00498.warc.gz"}
|
Fraction Strips Printable
Fraction Strips Printable - Print and use any of the fraction strips, fraction circles, fraction games, and fraction worksheets on this page with your students. These free and printable fraction
strips are an excellent math manipulative to help your child grasp different fraction concepts. Fraction strips are mathematical tools used to teach the concept of fractions visually: they are simply
rectangular bars, or ‘strips’, that represent a whole or parts of a whole. Here you will find a fraction strip collection for equivalent fractions, equivalent-fractions worksheets, free printable
fraction worksheets, and free math sheets for kids; the strips cover 1 whole, halves, thirds, fourths, fifths, sixths, and eighths. The first two fraction bar printables are in color and the second
set is in black and white, so you can color and decorate the strip fraction chart. Print, cut, and laminate one page of these fraction strips so each child has a set to work with, or download them as
PDFs to use as a reference sheet or a learning tool. There is also an interactive fraction strips tool that helps students visualize fractions; with these online fraction strips, students can model
fractions, solve fraction problems, and explore equivalence.
|
{"url":"https://ataglance.randstad.com/viewer/fraction-strips-printable.html","timestamp":"2024-11-15T04:49:58Z","content_type":"text/html","content_length":"38567","record_id":"<urn:uuid:a371ac84-faf1-43ba-92dc-b1adc99ff035>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00223.warc.gz"}
|
compact SAS code
I have variables A-Z. Values for A-Z (26 variables) are "Y" or "N". I want to create a new variable CHECK based on A-Z. If all of A-Z (26 variables) = "N", then CHECK should be "N". How can I write short, clean code to do this instead of code like the following?
if A="N" and B="N" and C ="N" and D ="N" and E="N"......and Z="N" then CHECK="N"
Is there a shortcut for the above redundant code?
12-05-2016 11:27 AM
|
{"url":"https://communities.sas.com/t5/SAS-Procedures/compact-SAS-code/td-p/316761","timestamp":"2024-11-03T02:46:46Z","content_type":"text/html","content_length":"266697","record_id":"<urn:uuid:0abc916e-7190-48fc-84fe-e11c8843df24>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00693.warc.gz"}
|
[Solved] The system below is in balance. In the CB | SolutionInn
The system below is in balance. In the string CB, the magnitude of the force is 2400 N. The mass suspended from point D is 60 kg. The centre of mass of the rod is 2.0 m from point A.
[Figure: a rod attached to a wall at A, with a string CB to the wall above; marked dimensions 0.5 m, 3 m, and 1 m.]
(a) Write the force in CB as components.
(b) What are the three direction angles of the tension CB?
(c) What is the moment of the tension CB about point A?
(d) What is the moment of the 60 kg suspended mass about point A?
(e) Using rotational equilibrium about the y-axis, determine the mass of the rod.
(f) What is the z-component of the force exerted by the pivot on the rod at point A?
(g) The attachment of the rod to the wall (A on the diagram) prevents translation of the rod along each axis. Does this attachment serve another function? Justify.
|
{"url":"https://www.solutioninn.com/study-help/questions/the-system-below-is-in-balance-in-the-cb-string-1029942","timestamp":"2024-11-07T15:50:04Z","content_type":"text/html","content_length":"106127","record_id":"<urn:uuid:5161c6fe-d4c6-489c-9df8-f1d6f70ec1d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00358.warc.gz"}
|
Transform your ML-model to Pytorch with Hummingbird
Over the last few years, the capabilities of Deep Learning have increased tremendously. With that, many standards for serving your Neural Network, such as ONNX and others, have found their way to the masses.
In contrast, traditional Machine Learning models such as Random Forests typically run on the CPU for inference and could benefit from GPU-based hardware accelerators.
Now, what if we could use the many advantages of Neural Networks in our traditional Random Forest? Even better, what if we could transform the Random Forest and leverage GPU-accelerated inference?
This is where Microsoft’s Hummingbird comes in! It transforms your Machine Learning model to tensor computations such that it can use GPU-acceleration to speed up inference.
In this article, I will not only describe how to use the package, but also the underlying theory from the corresponding papers and the roadmap.
NOTE: Hummingbird is only used to speed up the time to make a prediction, not to speed up the training!
1. Theory
Before delving into the package, it is important to understand why this transformation is desirable and this could be done.
Why transform your model to tensors?
In part, a tensor is an N-dimensional array of data (see figure below). In the context of TensorFlow, it can be seen as a generalization of a Matrix that allows you to have multidimensional arrays.
You can perform optimized mathematical operations on them without knowing what each dimension represents semantically. For example, matrix multiplication can be used to manipulate several
vectors simultaneously.
{% include markdown_image.html url="/assets\images\posts\2020-6-22-humming\tensor.png" description="Simplification of a Tensor. Note that in Mathematics, Physics, and Computer Science the
applications and definitions of what a Tensor is might differ!"%}
There are several reasons for transforming your model into tensors:
1. Tensors, as the backbone of Deep Learning, have been widely researched in the context of making Deep Learning accessible to the masses. Tensor operations needed to be optimized significantly in
order to make Deep Learning useable on a larger scale. This speeds up inference tremendously, which cuts costs for making new predictions.
2. It allows for a more unified standard compared to using traditional Machine Learning models. By using tensors, we can transform our traditional model to ONNX and use the same standard across all
your AI-solutions.
3. Any optimization in Neural Network frameworks is likely to result in the optimization of your traditional Machine Learning model.
Transforming Algorithmic models
Transformation to tensors is not a trivial task, as there are two branches of models: algebraic (e.g., linear models) and algorithmic (e.g., decision trees). This increases the complexity of
mapping a model to tensors.
Here, the mapping of algorithmic models is especially difficult. Tensor computation is known for performing bulk or symmetric operations. This is difficult to do for algorithmic models as they are
inherently asymmetric.
Let’s use the decision tree below as an example:
{% include markdown_image.html url="/assets\images\posts\2020-6-22-humming\tree.png" description="Example Decision Tree to be mapped to tensors. Retrieved from https://azuredata.microsoft.com/
articles/ebd95ec0-1eae-44a3-90f5-c11f5c916d15" style="width:70%" %}
The decision tree is split into three parts:
• Input feature vector
• Four decision nodes (orange)
• Five leaf nodes (blue)
The resulting neural network’s first layer is that of the input feature vector which is connected to the four decision nodes (orange). Here, all conditions are evaluated together. Next, all leaf
nodes are evaluated together by using matrix multiplication.
The resulting decision tree, as a neural network, can be seen below:
{% include markdown_image.html url="/assets\images\posts\2020-6-22-humming\network.png" description="Decision Tree as a Neural Network. Retrieved from https://azuredata.microsoft.com/articles/
ebd95ec0-1eae-44a3-90f5-c11f5c916d15" style="width:70%" %}
The resulting Neural Network introduces a degree of redundancy, since all conditions are evaluated, whereas normally only one path would be evaluated. This redundancy is, in part, offset by the Neural
Network's ability to vectorize computations.
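The "evaluate every condition, then select the leaf" idea can be sketched with plain NumPy. The matrices below are a hypothetical hand-written encoding for a toy two-node, three-leaf tree, illustrating the strategy rather than Hummingbird's actual internals:

```python
import numpy as np

# Toy tree: node0 tests x[0] < 0.5; node1 tests x[1] < 0.3.
# true/true -> leaf 0, true/false -> leaf 1, node0 false -> leaf 2.
A = np.array([[1.0, 0.0],    # row i selects the feature that node i tests
              [0.0, 1.0]])
B = np.array([0.5, 0.3])     # threshold for each decision node

# Column j encodes leaf j's path: +1 = node must be true,
# -1 = node must be false, 0 = node not on the path.
C = np.array([[1,  1, -1],
              [1, -1,  0]])
D = np.array([2, 1, 0])      # number of required-true nodes per leaf

def tree_predict(x):
    decisions = (A @ x < B).astype(int)  # evaluate ALL conditions at once
    scores = decisions @ C               # one matrix multiply scores every leaf
    return int(np.argmax(scores == D))   # exactly one leaf matches its count
```

Every input takes the same two matrix products regardless of which branch it "really" follows, which is exactly the symmetric, batchable shape that GPUs are good at.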
NOTE: There are many more strategies for the transformation of models into tensors, which are described in their papers, here and here.
According to their papers, the usage of GPU-acceleration creates a massive speed-up in inference compared to traditional models.
{% include markdown_image.html url="/assets\images\posts\2020-6-22-humming\inference.png" description="Results of inference comparing the same models using Sklearn vs. a Neural Network using
Hummingbird."%}
The results above clearly indicate that it might be worthwhile to transform your Forest to a Neural Network.
I wanted to see for myself how the results would look like on a Google Colaboratory server with a GPU enabled. Thus, I performed a quick, and by no means, scientific, experiment on Google
Colaboratory which yielded the following results:
{% include markdown_image.html url="/assets\images\posts\2020-6-22-humming\results.png" description="Results of running binary classification models 100 times on Google Colab with GPU enabled. The
dataset is randomly generated and contains 100000 data points. It seems that the inference with Hummingbird is definitely faster than that of the base models."%}
We can clearly see a large speedup in the inference on new data when using a single dataset.
2. Usage
Using the package is, fortunately, exceedingly simple. It is clear that the authors have spent significant time making sure the package can be used intuitively and across many models.
For now, we start by installing the package through pip:
pip install hummingbird-ml
If you would like to also install the LightGBM and XGboost dependencies:
pip install hummingbird-ml[extra]
Then, we simply start by creating our Sklearn model and training it on the data:
from sklearn import datasets
from sklearn.ensemble import RandomForestClassifier
iris = datasets.load_iris()
X, y = iris.data, iris.target
rf_model = RandomForestClassifier().fit(X, y)
After doing so, the only thing we actually have to do to transform it to Pytorch is to import Hummingbird and use the convert function:
from hummingbird.ml import convert
# First we convert the model to pytorch
model = convert(rf_model, 'pytorch')
# To run predictions on GPU we have to move it to cuda
model.to('cuda')
The resulting model is simply a torch.nn.Module and can then be exactly used as you would normally work with Pytorch.
It was the ease of transforming to Pytorch that first grabbed my attention to this package. Just a few lines and you have transformed your model!
3. Roadmap
Currently, the following models are implemented:
• Most Scikit-learn models (e.g., Decision Tree, Regression, and SVC)
• LightGBM (Classifier and Regressor)
• XGboost (Classifier and Regressor)
• ONNX.ML (TreeEnsembleClassifier and TreeEnsembleRegressor)
Although some models are still missing, their roadmap indicates that they are on their way:
• Feature selectors (e.g., VarianceThreshold)
• Matrix Decomposition (e.g., PCA)
• Feature Pre-processing (e.g., MinMaxScaler, OneHotEncoder, etc.)
You can find the full roadmap of Hummingbird here.
|
{"url":"https://www.maartengrootendorst.com/blog/humming/","timestamp":"2024-11-10T12:17:11Z","content_type":"text/html","content_length":"33595","record_id":"<urn:uuid:c03c5312-5b5a-4cd4-a64a-26ce561eb9d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00066.warc.gz"}
|
MS Seminar (Mathematics - String Theory)
Speaker: Motohico Mulase (UC Davis)
Title: Gaiotto's Lagrangian correspondence between Hitchin and de Rham moduli spaces
Date (JST): Tue, Sep 17, 2019, 13:15 - 14:45
Place: Seminar Room A
Blackboard: (Blackboard talk)
Abstract: In 2014, through TBA considerations on N=2 SUYM in four dimensions, Gaiotto conjectured a Lagrangian correspondence between holomorphic Lagrangians in the Dolbeault moduli space of Higgs
bundles and the de Rham moduli space of holomorphic connections. His original conjecture was formulated for the Lagrangian of opers. The conjecture was solved in 2016 for holomorphic
opers in my paper with Dumitrescu, Fredrickson, Kydonakis, Mazzeo, and Neitzke. Then Collier and Wentworth, using our analysis method, extended the correspondence to more general
Lagrangians consisting of stable points. In my talk, I will present an algebraic geometry description of the Lagrangian correspondence of Gaiotto, based on the work of Simpson using VHS
and Deligne connections.
|
{"url":"https://research.ipmu.jp/seminar/?seminar_id=2378","timestamp":"2024-11-14T23:42:33Z","content_type":"text/html","content_length":"14328","record_id":"<urn:uuid:d21a9f83-7685-4eb2-9b76-802109b3dd67>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00194.warc.gz"}
|
Dissertation Statistics, Analysis Methodology, Proposal Writing Help
Posted on by sgd01_web
This type of t-test might be used to determine whether the scores of the same participants in a study differ under different conditions. For example, this type of t-test
could be used to determine whether people write better essays after taking a writing class than they did before taking the class. Statistics can be used to analyze
individual variables, relationships among variables, and differences between groups. In this section, we explore a range of statistical methods for conducting these analyses.
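The paired design described above can be sketched in a few lines of standard-library Python. The before/after essay scores below are invented illustration data, not results from any real study:

```python
import math
from statistics import mean, stdev

# Hypothetical before/after essay scores for eight participants.
before = [62, 70, 58, 75, 66, 73, 61, 69]
after  = [68, 74, 63, 78, 70, 79, 64, 72]

def paired_t(x, y):
    """t statistic for a paired (dependent-samples) t-test:
    mean difference divided by the standard error of the differences."""
    d = [b - a for a, b in zip(x, y)]        # per-participant change
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

t = paired_t(before, after)
```

The resulting t statistic would then be compared against a t distribution with n - 1 degrees of freedom to obtain a p-value.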
Note that the order of the points is not meant to structure a discussion's writing (besides 1.). After a research article has presented the substantive background, the methods and the
results, the discussion section assesses the validity of the results and draws conclusions by interpreting them. The discussion places the results into a broader context and reflects their implications for
theoretical (e.g. etiological) and practical (e.g. interventional) purposes.
This happens on a regular basis: numbers accidentally recorded in the wrong units, calibration not done, days and months confused in dates; there is an almost infinite number of ways in which
data can be 'incorrect'. If we do this, then what can we make of the data points the authors highlight as 'outliers'? They are either true errors (things went wrong, but we can't find a reason)
or true data points. In my view, neither case is a good reason to suggest using robust correlations if the rest of the data look reasonably normally distributed. If you believe the latter is true,
it is necessary to change the model being used; in the former case, perhaps restrict inferences to the region where you have good data, and do not include the extreme value.
There is a branch of statistics called 'causal inference'.
A perfectly normal distribution is a mathematical construct which carries with it certain mathematical properties useful in describing the attributes of the distribution. Although a frequency
distribution based on actual data points seldom, if ever, completely matches a perfectly normal distribution, a frequency distribution often can approach such a normal curve. Ordinal
variables do not establish the numeric difference between data points. They indicate only that one data point is ranked higher or lower than another.
The following documents provide more information on the elements of study design that are useful for developing an effective statistical analysis section. While representing statistical data
in tables, graphs, or maps can be extremely effective, it is very important to ensure that the data is not presented in a
manner that may mislead the reader. The key to presenting effective tables, graphs, or maps is to ensure they are easy to understand and clearly linked to the message.
One of the questions a clinical investigator frequently asks in planning medical research is "Do I need a statistician as a part of my clinical research team?", since a statistician can help to
optimize design, analysis and interpretation of results, and drawing conclusions. When creating a clinical research proposal, how early in the process should the clinical investigator contact the
statistician? Statistics cannot rescue a poorly designed protocol after the research has begun.
But if not used carefully, numbers create more problems than they solve. Categorical variables should usually be coded into a number of dummy variables before being considered in a regression
model. The interpretation of the coefficients for these dummy variables is only meaningful relative to the reference class. It is crucial to consider and clearly report the coding scheme when
discussing parameter estimates for categorical variables.
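The dummy-variable coding above can be sketched in a few lines; the group names here are invented for illustration. Each non-reference level gets its own 0/1 column, and the reference category becomes the all-zero row:

```python
# Three hypothetical treatment groups, with "control" as the reference class.
groups = ["control", "drugA", "drugB", "drugA", "control"]
levels = ["drugA", "drugB"]           # one dummy column per non-reference level

def dummy_code(value):
    # The reference category ("control") maps to the all-zero row, so each
    # dummy coefficient is interpreted relative to it.
    return [1 if value == lvl else 0 for lvl in levels]

rows = [dummy_code(g) for g in groups]
```

A regression coefficient on the "drugA" column then estimates the difference between drugA and the control group, not an absolute effect.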
Not a lot of surprises, although it's important for all bloggers to realize that there are more blogs around than ever, and the math part of that means it's more difficult to get traffic.
The respondents to this survey are self-described bloggers with whom we connected over many years on social media and at live events. You spent three minutes and 19 seconds on average (that's our
final statistic, I promise) and you shed light on a dark corner of the world of content. Attracting blog readers is the
second biggest challenge, followed by creating quality content consistently. So it's not surprising that time is the biggest challenge for many bloggers. They know what to do, but struggle to find
time for content creation and promotion.
|
{"url":"https://cuanhua.net/dissertation-statistics-analysis-methodology-proposal-writing-help/","timestamp":"2024-11-02T04:16:59Z","content_type":"text/html","content_length":"106647","record_id":"<urn:uuid:8cf7ff02-2555-4e67-9d86-39b4db5e023e>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00278.warc.gz"}
|
Explanation of geographical coordinates, conversion and calculation
Information on geographical coordinates, conversion and calculation
The World Geodetic System of 1984 is a geodetic reference system used by many GPS devices as a uniform basis for position information on Earth.
Geographic coordinates (WGS84)
The geographical coordinates describe a point by its angular distance from the equator.
On this page the latitude is given in decimal degrees from -90° to +90°; it would also be possible to give it from 90° south to 90° north.
The longitude is given from -180° to +180°, instead of from 180° West to 180° East.
Decimal degree (decimal notation, DD.DDDDDD°)
On this page you can calculate and work with it, as well as Google Maps and Microsoft (Bing) Maps.
This system is used mainly because it can be calculated very well.
An example for coordinates in decimal degree of Berlin (Siegessäule): Lat 52.514487 N, Lng 13.350126 E
The accuracy of this specification depends strongly on the number of decimal places.
With only 2 decimal places there is a possible deviation of up to 1 km; with 4 decimal places the deviation is only 10 m.
Like most systems, we use 6 decimal places, which corresponds to an accuracy of 1 meter.
Degrees Minutes (nautical notation, DD° MM.MMMM')
This is also a common spelling, which is common in geocaching and especially in seafaring, where the minute is usually sufficient as the smallest specification.
An example is 52° 12.2345' N(north), 12° 44.5678' E(east),
where the first number is an integer of the degrees (D=degree) and must be between -180 and 180.
The second number is the minute integer or decimal number from 0 to 59.999999.
For a sufficient accuracy, the same values apply here as for decimal notation.
Degrees Minutes Seconds (historical notation, sexagesimal, DD° MM' SS.SS")
Used e.g. by Wikipedia.
Sexagesimal means, because 1 degree corresponds to 60 minutes, 1 minute corresponds to 60 seconds.
An example is 52° 12' 43.33" N(North), 12° 44' 33" E(East),
where the first number is an integer of the degrees (D=degree) and must be between -180 and 180.
The second number indicates the minutes in whole numbers from 0 to 59,
and the last number indicates the seconds as an integer or decimal number from 0 to 59.999999.
It is interesting to note that a minute of latitude corresponds to about 1.852 km and thus defines a nautical mile.
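The three notations above differ only by factors of 60, so converting between them is simple arithmetic. A minimal sketch for positive (N/E) values, checked against the Berlin example from earlier:

```python
def dd_to_dms(dd):
    """Decimal degrees -> (degrees, minutes, seconds); positive values only."""
    degrees = int(dd)
    minutes_full = (dd - degrees) * 60       # 1 degree = 60 minutes
    minutes = int(minutes_full)
    seconds = (minutes_full - minutes) * 60  # 1 minute = 60 seconds
    return degrees, minutes, round(seconds, 2)

def dms_to_dd(d, m, s):
    """(degrees, minutes, seconds) -> decimal degrees."""
    return d + m / 60 + s / 3600

lat_dms = dd_to_dms(52.514487)  # Berlin (Siegessaeule) latitude
```

Dropping the seconds term from `dms_to_dd` gives the degrees-minutes (nautical) notation instead.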
CH1903 LV03
CH1903 LV03 coordinates, also known as the Swiss Grid, are the official Swiss national coordinates.
The starting point of all calculations for Switzerland was fixed at Bern and is Y:600000 East | X:200000 North.
For Liechtenstein the reference point is also Bern, but with the values Y:0 | X:0, so that e.g. Vaduz has the CH coordinates Y 758008 | X 223061, which results in LIE coordinates Y 158008 | X 23061.
However, only the CH coordinates are calculated here. Please pay attention to the values if necessary.
CH1903+ LV95
The current reference system in Switzerland since 2016, mandatory from 2020 at the latest.
The new system is also based on the Bessel 1841 ellipsoid and differs only very slightly in accuracy (maximum 1.6 metres).
To differentiate, however, 2,000 or 1,000 kilometres were added to the coordinates as an offset, so that the reference point for Bern is now, for example, E 2,600,000 and N 1,200,000.
Here you can see that the designation has also changed from y/x to E/N; unfortunately the order is still swapped, unlike most other systems, which use N/E...
The Universal Transverse Mercator is a global coordinate system. It divides the earth's surface (from 80° south to 84° north) in stripes into 6° wide vertical zones.
The basis and name of this system come from Gerhard Mercator, a geographer from the Middle Ages.
Since this system is true to angle, but produces larger areas with increasing distance from the equator, Gauss and Krüger have further developed the transverse Mercator projection. The universal
transversal projection is much more accurate, especially for smaller maps, and is used by almost all major map services today.
An example for UTM coordinates is the Arc de Triomphe in Paris with: 31U 448304 5413670
To explain the length zones: (1-60, in the example the 31)
For the UTM system, the Earth is divided into 60 zones from west to east, each strip comprising 6 degrees of longitude.
The zones are numbered from west to east. One begins in the Pacific west of America at the date border with zone 1.
To explain the latitude zones: (C-X however without I and O, in the example the U)
Each UTM longitude zone is divided from south to north into 20 latitude zones (zone fields) of 8° each.
Now the two values Easting and Northing follow.
The Easting or the East value indicates the distance of the point from the specified latitude zone in meters. (+500.000m or 500km to avoid negative values)
The northing is the distance in meters between the point and the equator.
The northing as given only applies to the northern hemisphere; in the southern hemisphere this value must be subtracted from 10,000,000.
On which hemisphere one is can be easily recognized by the latitude zone. C-M lie on the southern hemisphere, N-X on the northern hemisphere.
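The zone and band for a given WGS84 position follow directly from the widths described above. A small sketch, ignoring the Norway/Svalbard exceptions to the regular zone grid:

```python
def utm_zone(lon):
    """Longitude zone 1-60; each zone is 6 degrees wide, starting at 180 W."""
    return int((lon + 180) // 6) + 1

def utm_band(lat):
    """Latitude band letter C-X (I and O skipped); 8 degrees per band,
    with band X stretched to cover up to 84 N."""
    letters = "CDEFGHJKLMNPQRSTUVWX"
    return letters[min(int((lat + 80) // 8), len(letters) - 1)]

# Arc de Triomphe, Paris (lat ~48.874, lon ~2.295): zone 31, band U
zone, band = utm_zone(2.295), utm_band(48.874)
```

Since bands C-M lie south of the equator and N-X north of it, the band letter alone tells you which hemisphere the northing refers to.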
UTMREF / MGRS
The UTM reference system or Military Grid Reference System divides the zones of the UTM system into 100 x 100 km plan squares.
These plan squares consist of 2 letters from A to Z, whereby I and O are omitted due to the danger of confusion with 1 and 0.
The first letter indicates the horizontal position within the grid square, also called Easting.
The second letter denotes the vertical position, i.e. the distance to the equator, within the plan square, also called northing.
The values for North and East determine the size of the grid square within which the coordinates are located and must always have the same number of digits. The more digits this number has, the
higher the accuracy. The number of digits can be between 1 and 5.
A one-digit number only means an accuracy of 10 km. A 5-digit number, on the other hand, means an accuracy of 1 meter. In principle, the one-digit number 1 corresponds to the 5-digit number 10000.
The Gauss-Krueger coordinate system is a Cartesian coordinate system which makes it possible to locate sufficiently small areas of the earth in conformity with metric coordinates (right and high
In German cartography and geodesy the Bessel ellipsoid is used as reference ellipsoid.
The Gauß-Krüger coordinate system is very similar to the UTM system and differs only in the use of another ellipsoid as a basis. (UTM = WGS84, Gauss-Krueger = Bessel),
and the use of 3° wide strips instead of 6° wide strips as with the UTM.
For a better differentiation of the values for coordinates, the coordinates are called high values and right values.
For the determination the earth is divided into 3° wide stripes from north pole to south pole. The so-called meridian stripes.
To each of these stripes belongs a zone, starting at 0° and zone 0, 3° and zone 1, 6° and zone 2 etc.. The number of degrees divided by 3 results in the zone.
The zone can be recognized by the first number of the right value and thus quickly a rough estimate of the position. The following numbers indicate the distance in meters from the meridian.
To avoid negative numbers, a constant of 500,000 is always added to the right value. If the number is smaller than 500,000, the position of the coordinates is to the left or west of the meridian.
If it is greater than 500,000, it is to the right or east of the meridian. A right value of 4,545,678 thus lies to the east of the 12° meridian of longitude, namely 45,678 meters or 45.678 km.
At the edge of the zones there may also be overlaps of 20 minutes of longitude which corresponds to about 23 km. Thus, a zone change does not necessarily have to take place at the edge of the zones
with every measurement.
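Decomposing a right value as described above is just integer arithmetic; a sketch using the worked example from the text:

```python
def parse_right_value(rw):
    """Split a Gauss-Krueger right value into (zone, central meridian in
    degrees east, signed offset in metres east of that meridian)."""
    zone = rw // 1_000_000             # the leading digit is the zone number
    meridian = zone * 3                # each meridian strip is 3 degrees wide
    offset = rw % 1_000_000 - 500_000  # remove the 500 km false-easting constant
    return zone, meridian, offset
```

A negative offset means the point lies west of the zone's central meridian.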
Bessel ellipsoid
The Bessel ellipsoid (also Bessel 1841) is a reference ellipsoid for Europe.
The Bessel ellipsoid adapts particularly well to the geoid and the mean earth curvature in Eurasia due to its data base and was therefore used as a basis for many national surveys, e.g. in Germany.
Potsdam date, Rauenberg date, DHDN
The spatial definition of the Bessel ellipsoid to the Earth body (the position of the ellipsoid in the centre of mass of the Earth and its orientation to the Earth's rotation axis)
was carried out for Prussia at that time with the help of the central point Rauenberg in Berlin. After its destruction, the central point of the network was mathematically transferred to the
Helmertturm in Potsdam, which is why the geodetic date of this system is often erroneously referred to as the Potsdam date.
This Rauenberg date is also the basis of the German Main Triangle Network (DHDN).
When converting from WGS84 to Gauß-Krüger, the date must be adjusted, otherwise the points will be shifted by about 150 meters.
The SRTM data (Shuttle Radar Topography Mission) were recorded during a space mission in 2000. This is a rather high-resolution digital terrain model of the Earth's surface.
The SRTM data cover a large part of the Earth and are freely available with an accuracy of 90 metres (or 30 metres for North America).
SRTM-1 means a resolution of 1 arcsecond which corresponds to about 30 m at the equator. However, these data are only intended for North America.
SRTM-3 accordingly means a resolution of 3 arc seconds and about 90 m at the equator.
The altitude data refer to the worldwide uniform reference system WGS84, which is also used here on this page.
Due to the resolution of 90 meters there are deviations of up to 30 meters especially in steep areas, in flat terrain however the data are very accurate.
NAC (Natural Area Coding, WGS84)
The NAC (abbreviation for Natural Area Coding System) is a new system to standardize geographic coordinates.
Only the date WGS-84 is used.
It consists of 30 common characters from 0-9 and the letters BCDFGHJKLMNPQRSTVWXZ (All English Consonants). So the result is very compact and efficient.
Each of these characters represents a number from 0 to 29.
With NAC, the entire earth is divided into 30 zones of equal size, each with a longitude of 0-360° and a latitude of 0-180°, and the corresponding character is assigned to the result.
The result is a pair of characters. The first character string describes the longitude and the second the latitude. The character strings are separated by a space.
The more characters the pair has, the more accurate are the coordinates. Each of the 30 described squares can be split into 30 more squares to increase the accuracy.
For example, a pair of 4 digits has an accuracy of 25 x 50 meters.
With 5 digits one already reaches an accuracy of about 1 meter, therefore we work here with a character length of 6 characters, which is exact enough for every imaginable case.
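The recursive base-30 subdivision described above can be sketched directly. The character set is the one listed in the text; the output illustrates the scheme but is not guaranteed to match the official NAC service character-for-character:

```python
NAC_CHARS = "0123456789BCDFGHJKLMNPQRSTVWXZ"  # 30 characters, values 0..29

def nac_encode(lon, lat, digits=6):
    """Encode a WGS84 position as a 'lon lat' NAC string pair."""
    def encode(value, span):
        frac = value / span              # normalize to [0, 1)
        out = ""
        for _ in range(digits):
            frac *= 30                   # pick one of 30 equal sub-cells
            idx = int(frac)
            out += NAC_CHARS[idx]
            frac -= idx                  # descend into the chosen cell
        return out
    # Longitude string first, then latitude, separated by a space.
    return encode(lon + 180, 360) + " " + encode(lat + 90, 180)
```

Each extra character divides the cell by 30 in both directions, which is why six characters are already far below one meter of ambiguity.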
W3W (What 3 Words)
The W3W (abbreviation for What 3 Words) has been a global system for addressing geographic coordinates since 2013.
Each position on the map consists of a square of 3 x 3 meters. This is represented by a combination of exactly 3 words. Hence the name: What 3 Words.
These 3 words are separated by a dot and are all written in lower case.
The system is also available in different languages. There is no word in several languages to avoid confusion.
Depending on the language, there are up to 40,000 words which are randomly combined to avoid any relation to neighbours. There is an official app and a website.
|
{"url":"https://coordinates-converter.com/en/info","timestamp":"2024-11-03T10:33:46Z","content_type":"text/html","content_length":"47021","record_id":"<urn:uuid:78028583-7425-49b4-9cf1-e492b0e921a7>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00198.warc.gz"}
|
Mechanics of Materials
2.1: STRAIN
For each of the questions below, choose the correct answer.
Displacement is the movement of a point relative to a fixed point on a deformable body.
Deformation is the movement of a point relative to another point on a deformable body.
In Lagrangian strain, the original geometry is used as reference geometry.
In Eulerian strain, the original geometry is used as reference geometry.
Positive shear strain results in increase of angle from a right angle.
The normal strain will be positive if the left end of a rod moves more than the right end in the negative x direction.
Strain has nine components in three dimensions.
Strain has six independent components in three dimensions.
At a point in plane strain there are three independent strain components.
At a point in plane strain there are always four zero strain components.
Do you have any comments or suggestions? Please send them to author@madhuvable.org
|
{"url":"https://madhuvable.org/self-tests-2/introductory-mechanics-of-materials/2-1-strain/","timestamp":"2024-11-06T04:25:48Z","content_type":"text/html","content_length":"65751","record_id":"<urn:uuid:ac9bdc21-6332-45cb-927d-32e04e549f5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00257.warc.gz"}
|
How to get started in learning IML?
Hello everybody, I am pretty keen on learning IML and am wondering where and how to start.
i. I would like your opinion on the prerequisities before learning IML- Do I need very strong linear algebra, probability and calculus knowledge and strong understanding?
ii. Does SAS documentation cover from the very basic to the advanced topics at length including the necessary concepts pertaining to real world applications?
iii. Can you mention any industry applications where IML is "widely" used as opposed to SAS/STAT and, of course, the foundation Base SAS.
iv. I have access to a book by author Rick in safarionline library, is that targeted more towards highly experienced users or an IML beginner can benefit too?
Thank you & Regards,
Naveen Srinivasan
> Hello everybody, I am pretty keen on learning IML and am wondering where and how to start.
I recommend you start with the resources in the article "Ten tips for learning the SAS/IML language". I also recommend that you browse and/or subscribe to The DO Loop blog: blogs.sas.com/content/iml
04-23-2017 04:13 PM
|
{"url":"https://communities.sas.com/t5/SAS-IML-Software-and-Matrix/How-to-get-started-in-learning-IML/td-p/352641","timestamp":"2024-11-09T03:39:09Z","content_type":"text/html","content_length":"213823","record_id":"<urn:uuid:06249783-4fac-4971-93c6-4bbd0d1979a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00752.warc.gz"}
|
Please note that the recommended version of Scilab is 2025.0.0. This page might be outdated.
See the recommended documentation of this function
Scilab Help >> Matlab to Scilab Conversion Tips > mfile2sci
Matlab M-file to Scilab conversion function
Calling Sequence
mfile2sci([M-file-path [,result-path [,Recmode [,only-double [,verbose-mode [,prettyprintoutput]]]]]])
a character string which gives the path of Matlab M-file to convert
a character string which gives the directory where the result has to be written. Default value is current directory.
Boolean flag, used by translatepaths function for recursive conversion. Must be %F to convert a single mfile. Default value : %f
Boolean flag, if %T mfile2sci considers that numerical function have been used only with numerical data (no Scilab overloading function is needed). Default value: %T
display information mode:
0 : no information displayed
1 : information written as comments in the resulting SCI-file
2 : information written as comments in the resulting SCI-file and in the logfile
3 : information written as comments in the resulting SCI-file, in the logfile, and displayed in the Scilab window
Boolean flag, if %T generated code is beautified. Default value: %F
M2SCI (and particularly mfile2sci) is a Matlab M-file to Scilab function conversion tool. It tries whenever possible to replace calls to Matlab functions by the equivalent Scilab primitives and functions.
To convert a Matlab M-file just enter the Scilab instruction: mfile2sci(file)
where file is a character string giving the path name of the M-file mfile2sci will generate three files in the same directory
the Scilab equivalent of the M-file
the Scilab help file associated to the function
the Scilab function required to convert the calls to this Matlab M-file in other Matlab M-files. This function may be improved "by hand". This function is only useful for conversion not for use
of translated functions.
Some functions like eye, ones, size, sum,... behave differently according to the dimension of their arguments. When mfile2sci cannot infer dimensions it replaces these function call by a call to an
emulation function named mtlb_<function_name>. For efficiency these functions may be replaced by the proper scilab equivalent instructions. To get information about replacement, enter: help mtlb_
<function_name> in Scilab command window
Some other functions, like plot, have no straightforward equivalent in Scilab. They are also replaced by an emulation function named mtlb_<function_name>.
When translation may be incorrect or may be improved mfile2sci adds a comment which begins by "//!" (according to verbose-mode)
When called without rhs, mfile2sci() launches a GUI to help to select a file/directory and options.
// Create a simple M-file
rot90m = ["function B = rot90(A,k)"
"if ~isa(A, ''double'')"
" error(''rot90: Wrong type for input argument #1: Real or complex matrix expected.'');"
" return"
"end"
"[m,n] = size(A);"
"if nargin == 1"
" k = 1;"
"else"
" if ~isa(k, ''double'')"
" error(''rot90: Wrong type for input argument #2: A real expected.'');"
" return"
" end"
" k = rem(k,4);"
" if k < 0"
" k = k + 4;"
" end"
"end"
"if k == 1"
" A = A.'';"
" B = A(n:-1:1,:);"
"elseif k == 2"
" B = A(m:-1:1,n:-1:1);"
"elseif k == 3"
" B = A(m:-1:1,:);"
" B = B.'';"
"else"
" B = A;"
"end"];
mputl(rot90m, TMPDIR + "/rot90.m")
// Convert it to scilab
mfile2sci(TMPDIR + "/rot90.m",TMPDIR)
// Show the new code
mgetl(TMPDIR + "/rot90.sci")
// Load it into scilab
// Execute it
See Also
• translatepaths — convert a set of Matlab M-files directories to Scilab
|
{"url":"https://help.scilab.org/docs/5.5.0/en_US/mfile2sci.html","timestamp":"2024-11-09T18:49:22Z","content_type":"text/html","content_length":"19439","record_id":"<urn:uuid:d132a9d3-31ac-43ce-90b2-c60fe8f0960d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00441.warc.gz"}
|
NCERT Notes For Class 11 Physics Chapter 15 Waves
Class 11 Physics Chapter 15 Waves
NCERT Notes For Class 11 Physics Chapter 15 Waves: students in many state board and CBSE schools are taught through NCERT books. As each chapter comes to an
end, an exercise is provided to help students prepare for evaluation. Students need to solve these exercises thoroughly because questions in the final exam are asked from them.
Sometimes, students get stuck in the exercises and are not able to solve all of the questions. To help students solve all of the questions and continue their studies without a doubt,
we have provided step-by-step NCERT Notes for all classes. These well-illustrated notes will further help students score better marks
by answering the questions correctly.
• The motion of a disturbance from one point to another by the vibrations of the particles of the medium about their mean position is known as wave motion.
• It is a mode of transfer of energy from one point to another.
• The waves are mainly of three types: (a) mechanical waves, (b) electromagnetic waves and (c) matter waves.
Mechanical waves
• Exist only within a material medium, such as water, air, and rock
• examples : – water waves, sound waves, seismic waves, etc
• two types : –
1) transverse waves
2) longitudinal waves
Electromagnetic waves
• The electromagnetic waves do not require any medium for their propagation.
• All electromagnetic waves travel through vacuum at the same speed c, given by c = 299,792,458 m s^–1
• Examples of electromagnetic waves are visible and ultraviolet light, radio waves, microwaves, x-rays etc.
Matter waves
• Matter waves are associated with moving electrons, protons, neutrons and other fundamental particles, and even atoms and molecules.
• Matter waves associated with electrons are employed in electron microscopes
Transverse waves
• In transverse waves, the constituents of the medium oscillate perpendicular to the direction of wave propagation.
• A point of maximum positive displacement in a wave is called crest, and a point of maximum negative displacement is called trough.
• Transverse waves can be propagated only through solids and strings, and not in fluids.
Longitudinal waves
• In longitudinal waves, the constituents of the medium oscillate along the direction of wave propagation.
• Longitudinal sound waves propagates as compressions(high pressure region) and rarefactions(low pressure regions)
• longitudinal waves can propagate in all elastic media (solids and fluids)
• transverse and longitudinal waves travel with different speeds in the same medium.
The waves on the surface of water
• The waves on the surface of water are of two kinds: capillary waves and gravity waves.
• Capillary waves are ripples of short wavelength.
• The restoring force that produces capillary waves is the surface tension of water.
• Gravity waves have wavelengths typically ranging from several metres to several hundred metres.
• The restoring force that produces gravity waves is the pull of gravity, which tends to keep the water surface at its lowest level.
• The waves in an ocean are a combination of both longitudinal and transverse waves.
Travelling or progressive wave
• A wave which travels from one point of the medium to another is called a travelling wave.
• At any time t, the displacement of a wave travelling in the positive x-direction is given by
y(x, t) = a sin(kx – ωt + φ)
• where a is the amplitude, k the angular wave number or propagation constant, ω the angular frequency, φ the initial phase angle, and (kx – ωt + φ) the phase. Plots of such a wave at different values of time t show the waveform moving towards positive x.
• A wave travelling in the negative direction of the x-axis can be represented by
y(x, t) = a sin(kx + ωt + φ)
Amplitude
• The amplitude a of a wave is the magnitude of the maximum displacement of the elements from their equilibrium positions as the wave passes through them.
• It is a positive quantity, even if the displacement is negative.
Phase
• The phase (kx – ωt + φ) describes the state of motion as the wave sweeps through a string element at a particular position x.
• The constant φ is called the initial phase angle.
The value of φ is determined by the initial (t = 0) displacement and velocity of the element (say, at x = 0).
Wavelength (λ)
• It is the minimum distance between two consecutive troughs or crests or two consecutive points in the same phase of wave motion.
Propagation constant or the angular wave number (k)
• By definition, the displacement y is the same at both ends of this wavelength, that is, at x = x₁ and at x = x₁ + λ.
• Thus, a sin(kx₁ – ωt + φ) = a sin(kx₁ + kλ – ωt + φ)
• This condition can be satisfied only when kλ = 2πn,
• where n = 1, 2, 3… Since λ is defined as the least distance between points with the same phase, n = 1 and therefore k = 2π/λ
• k is called the propagation constant or the angular wave number ; its SI unit is radian per metre or rad m^–1
Period (T)
• The period of oscillation T of a wave is the time any string element takes to move through one complete oscillation.
Angular Frequency
• The angular frequency of the wave is given by ω = 2π/T
Frequency (ν)
• The frequency ν of a wave is defined as 1/T and is related to the angular frequency ω by ν = ω/2π
• It is the number of oscillations per unit time made by a string element as the wave passes through it.
• It is usually measured in hertz.
Displacement relation of a longitudinal wave
• In a longitudinal wave, the displacement of an element of the medium is parallel to the direction of propagation of the wave.
• The displacement function for a longitudinal wave is written as
s(x, t) = a sin(kx – ωt + φ)
• where s(x, t) is the displacement of an element of the medium in the direction of propagation of the wave at position x and time t.
• The speed of a wave is related to its wavelength and frequency by the relation v = λν = ω/k
• The speed is determined by the properties of the medium.
Speed of a Transverse Wave on Stretched String
• The speed of transverse waves on a string is determined by two factors,
(i) the linear mass density or mass per unit length, μ, and (ii) the tension T.
• The linear mass density μ of a string is the mass m of the string divided by its length l; therefore its dimension is [ML^–1].
• The tension T has the dimension of force [M L T^–2].
• Let the speed v = C μ^a T^b, where C is a dimensionless constant.
• Taking dimensions on both sides: [M^0 L^1 T^–1] = [M L^–1]^a [M L T^–2]^b = [M^(a+b) L^(–a+b) T^(–2b)]
• Equating the dimensions on both sides we get a + b = 0, so a = –b; and –a + b = 1, so 2b = 1, giving b = ½ and a = –½.
• Thus v = C μ^–½ T^½
• It can be shown that C = 1; therefore the speed of transverse waves on a stretched string is
v = √(T/μ)
• The speed of a wave along a stretched ideal string depends only on the tension and the linear mass density of the string and does not depend on the frequency of the wave.
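As a quick numerical check of v = √(T/μ), here is a short Python sketch; the string parameters (tension, mass, length) are made-up illustrative values, and `string_wave_speed` is a hypothetical helper, not part of the NCERT text:

```python
import math

def string_wave_speed(tension_n, mass_kg, length_m):
    """Speed of a transverse wave on a stretched string: v = sqrt(T/mu)."""
    mu = mass_kg / length_m          # linear mass density (kg/m)
    return math.sqrt(tension_n / mu)

# Illustrative values: a 2 m string of mass 40 g under 90 N of tension.
v = string_wave_speed(tension_n=90.0, mass_kg=0.040, length_m=2.0)
print(round(v, 1))  # mu = 0.02 kg/m, so v = sqrt(90/0.02) ≈ 67.1 m/s
```

Note that doubling the tension raises the speed only by a factor of √2, as the square-root dependence implies.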
Speed of a Longitudinal Wave – Speed of Sound
• In a longitudinal wave the constituents of the medium oscillate forward and backward in the direction of propagation of the wave.
• The sound waves travel in the form of compressions and rarefactions of small volume elements of air.
• The speed of sound waves depends on
1. Bulk modulus , B and
2. Density of the medium, ρ
• Using dimensional analysis we may write v = C B^a ρ^b
• Taking dimensions: [M^0 L^1 T^–1] = [M L^–1 T^–2]^a [M L^–3]^b = [M^(a+b) L^(–a–3b) T^(–2a)]
• Equating the dimensions on both sides we get a + b = 0, so b = –a; and –2a = –1, so a = ½ and b = –½.
• where C is a dimensionless constant and can be shown to be unity.
• Thus, the speed of longitudinal waves in a medium is given by
v = √(B/ρ)
• The speed of propagation of a longitudinal wave in a fluid therefore depends only on the bulk modulus and the density of the medium.
• The bulk modulus is given by B = –ΔP/(ΔV/V)
• Here ΔV/V is the fractional change in volume produced by a change in pressure ΔP.
Speed of sound wave in a material of a bar
• The speed of a longitudinal wave in the bar is given by v = √(Y/ρ)
• where Y is the Young’s modulus of the material of the bar.
Speed of sound in different media
Newton’s Formula
• In the case of an ideal gas, the relation between pressure P and volume V is given by PV = Nk_B T
• Therefore, for an isothermal change it follows that VΔP + PΔV = 0, i.e. –ΔP/(ΔV/V) = P
• Thus B = P
• Therefore, the speed of a longitudinal wave in an ideal gas is given by v = √(P/ρ)
• This relation was first given by Newton and is known as Newton's formula.
Laplace correction
• According to Newton's formula, the speed of sound in air at STP (P ≈ 1.01 × 10^5 Pa, ρ ≈ 1.29 kg m^–3) works out to about 280 m s^–1.
• This is about 15% smaller than the experimental value of 331 m s^–1.
• Laplace pointed out that the pressure variations in the propagation of sound waves are adiabatic and not isothermal.
• For adiabatic processes the ideal gas satisfies the relation PV^γ = constant
• Thus for an ideal gas the adiabatic bulk modulus is given by B_ad = γP
• where γ is the ratio of the two specific heats, Cp/Cv.
• The speed of sound is, therefore, given by
v = √(γP/ρ)
• This modification of Newton’s formula is referred to as the Laplace correction.
• For air γ = 7/5; therefore, for the speed of sound in air at STP, we get a value of 331.3 m s^–1, which agrees with the measured speed.
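The Newton-versus-Laplace comparison can be reproduced numerically. A Python sketch, assuming the standard STP values quoted above (P ≈ 1.01 × 10^5 Pa, ρ ≈ 1.29 kg m^–3 for air):

```python
import math

P = 1.01e5     # atmospheric pressure at STP (Pa)
rho = 1.29     # density of air at STP (kg/m^3)
gamma = 7 / 5  # Cp/Cv for air (a diatomic gas)

v_newton = math.sqrt(P / rho)           # Newton: v = sqrt(P/rho), isothermal
v_laplace = math.sqrt(gamma * P / rho)  # Laplace: v = sqrt(gamma*P/rho), adiabatic

print(round(v_newton))   # 280 m/s, about 15% below the measured 331 m/s
print(round(v_laplace))  # 331 m/s, in agreement with experiment
```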
The Principle of Superposition of Waves
• The principle of superposition of waves states that the net displacement at a given time of a number of waves is the algebraic sum of the displacements due to each wave.
• Let y₁(x, t) and y₂(x, t) be the displacements that any element of the string would experience if each wave travelled alone.
• The displacement y(x, t) of an element of the string when the waves overlap is then given by
y(x, t) = y₁(x, t) + y₂(x, t)
• Let a wave travelling along a stretched string be given by y₁(x, t) = a sin(kx – ωt)
• and another wave, shifted from the first by a phase φ, y₂(x, t) = a sin(kx – ωt + φ)
• Both the waves have the same angular frequency, same angular wave number k (same wavelength) and the same amplitude a.
• Applying the superposition principle, y(x, t) = a sin(kx – ωt) + a sin(kx – ωt + φ)
• Using the trigonometric relation sin A + sin B = 2 sin[(A + B)/2] cos[(A – B)/2], this gives
y(x, t) = 2a cos(φ/2) sin(kx – ωt + φ/2)
• Thus, the resultant wave is also a sinusoidal wave, travelling in the positive direction of the x-axis.
• The resultant wave differs from the constituent waves in two respects:
i) its phase angle is ½φ, and
ii) its amplitude is the quantity A(φ) = 2a cos(½φ)
• If φ = 0, the amplitude of the resultant wave is 2a, which is the largest possible value of A(φ).
• If φ = π, the two waves are completely out of phase, the amplitude of the resultant reduces to zero.
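The resultant amplitude A(φ) = 2a cos(½φ) can be verified numerically by sampling the sum of the two sinusoids over a full cycle; the sketch below uses a = 1 and φ = π/3 purely as illustrative values:

```python
import math

def resultant_amplitude(a, phi, samples=100_000):
    """Numerically find the peak of a*sin(theta) + a*sin(theta + phi)."""
    return max(abs(a * math.sin(t) + a * math.sin(t + phi))
               for t in (2 * math.pi * i / samples for i in range(samples)))

a, phi = 1.0, math.pi / 3
numeric = resultant_amplitude(a, phi)
formula = 2 * a * math.cos(phi / 2)
print(round(numeric, 3), round(formula, 3))  # both 1.732
```

With φ = 0 the same check gives 2a, and with φ = π it gives zero, matching the limiting cases above.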
Reflection of Waves
• When a pulse or a travelling wave encounters a rigid boundary, it gets reflected.
• If the boundary is not completely rigid or is an interface between two different elastic media, a part of the wave is reflected and a part is transmitted into the second medium.
• The incident and refracted waves obey Snell’s law of refraction, and the incident and reflected waves obey the laws of reflection.
• A travelling wave, at a rigid boundary or a closed end, is reflected with a phase reversal.
• A travelling wave ,at an open boundary is reflected without any phase change.
• Let the incident wave be represented by y_i(x, t) = a sin(kx – ωt)
• Then, for reflection at a rigid boundary, the reflected wave is represented by y_r(x, t) = a sin(kx + ωt + π) = –a sin(kx + ωt)
• For reflection at an open boundary, the reflected wave is represented by y_r(x, t) = a sin(kx + ωt)
Standing Waves and Normal Modes
• A wave in which the waveform or disturbance does not move in either direction is known as a stationary or standing wave.
• Let the wave travelling in the positive direction of the x-axis be y₁(x, t) = a sin(kx – ωt)
• and the wave travelling in the negative direction of the x-axis be y₂(x, t) = a sin(kx + ωt)
• The principle of superposition gives, for the combined wave,
y(x, t) = y₁(x, t) + y₂(x, t) = 2a sin(kx) cos(ωt)
• The amplitude is zero for values of kx that give sin kx = 0; those values are kx = nπ, for n = 0, 1, 2, …
• Substituting k = 2π/λ in this equation, we get x = nλ/2; n = 0, 1, 2, …
• The positions of zero amplitude in a standing wave are called nodes.
• A distance of λ/2 or half a wavelength separates two consecutive nodes.
• The amplitude has a maximum value of 2a, which occurs for the values of kx that give |sin kx| = 1.
• Those values are kx = (n + ½)π, for n = 0, 1, 2, …
• Substituting k = 2π/λ in this equation, we get x = (n + ½)λ/2; n = 0, 1, 2, …
• The positions of maximum amplitude are called antinodes.
• The antinodes are separated by λ/2 and are located halfway between pairs of nodes.
• For a stretched string of length L, fixed at both ends, the two ends of the string have to be nodes.
• If one of the ends is chosen as position x = 0, then the other end is x = L. In order that this end is a node, the length L must satisfy the condition L = nλ/2, for n = 1, 2, 3, …
• The standing waves on a string of length L thus have restricted wavelengths given by λ = 2L/n, for n = 1, 2, 3, …
• The frequencies corresponding to these wavelengths are given by ν = nv/2L, for n = 1, 2, 3, …
• where v is the speed of travelling waves on the string.
• The set of frequencies possible in a standing wave are called the natural frequencies or modes of oscillation of the system.
• The frequency corresponding to n = 1 is ν₁ = v/2L
• The oscillation mode with this lowest frequency (n=1) is called the fundamental mode or the first harmonic.
• The second harmonic is the oscillation mode with n = 2. The third harmonic corresponds to n = 3 and so on.
The frequencies associated with these modes are often labelled as ν₁, ν₂, ν₃ and so on.
• The collection of all possible modes is called the harmonic series and n is called the harmonic number.
Modes of vibration of a pipe closed at one end
• In a closed pipe, standing waves are formed such that there is a node at the closed end and an antinode at the open end.
• Now if the length of the air column is L, then the open end, x = L, is an antinode and therefore
L = (n + ½) λ/2, where n = 0, 1, 2, 3, …
• The modes that satisfy the condition λ = 2L/(n + ½) are sustained in such an air column.
• The corresponding frequencies of the various modes of such an air column are given by ν = (n + ½) v/2L, for n = 0, 1, 2, 3, …
• The fundamental frequency is v/4L and the higher frequencies are odd harmonics of the fundamental frequency, i.e. 3v/4L, 5v/4L, …
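The odd-harmonic pattern of a closed pipe follows directly from ν = (n + ½)v/2L. A small Python sketch; v = 340 m/s and L = 0.85 m are illustrative values chosen so the fundamental is a round 100 Hz:

```python
v = 340.0  # speed of sound (m/s), illustrative
L = 0.85   # length of the closed pipe (m), illustrative

fundamental = v / (4 * L)  # lowest mode of a pipe closed at one end
# nu_n = (n + 1/2) * v / (2L) = (2n + 1) * v / (4L): only odd multiples occur
modes = [(2 * n + 1) * v / (4 * L) for n in range(4)]

print(round(fundamental, 2))         # 100.0 Hz
print([round(m, 2) for m in modes])  # [100.0, 300.0, 500.0, 700.0]
```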
Pipe open at both ends
• In the case of a pipe open at both ends, there will be antinodes at both ends, and all harmonics will be generated.
Beats
• The phenomenon of wavering of sound intensity when two waves of nearly the same frequency and amplitude, travelling in the same direction, are superimposed on each other is called beats.
• The beat frequency is given by ν_beat = ν₁ – ν₂
The time-displacement graphs of two waves of frequency 11 Hz and 9 Hz
• Musicians use the beat phenomenon in tuning their instruments.
• If an instrument is sounded against a standard frequency and tuned until the beat disappears, then the instrument is in tune with that standard.
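The beat-frequency rule ν_beat = |ν₁ – ν₂| is trivial to compute; the sketch below reproduces the 11 Hz / 9 Hz example and a tuning scenario (the 444 Hz figure is an invented illustration):

```python
def beat_frequency(f1, f2):
    """Number of intensity maxima heard per second for two close tones."""
    return abs(f1 - f2)

print(beat_frequency(11, 9))     # 2 beats per second, as in the graphs above
print(beat_frequency(440, 444))  # 4 beats/s: tune until this drops to zero
```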
Doppler Effect
• The apparent change in the pitch or the frequency of sound produced by a source, due to relative motion of the source, the listener, or the medium, is called the Doppler effect.
• It was proposed by Christian Doppler and tested experimentally by Buys Ballot.
• All types of waves show the Doppler effect.
Notation:
• S – source
• f – frequency of sound from the source
• V – velocity of sound
• λ – wavelength
When Source and Listener at Rest
• When the source and the listener are at rest, the frequency of sound heard by the listener is the same as that emitted by the source: f′ = f = V/λ
When source and listener moving in the direction of sound
• The relative velocity of sound wave with respect to source is V – Vs
• Vs – velocity of source
• Thus, the apparent wavelength is λ′ = (V – V_S)/f
• The relative velocity of sound with respect to the listener is V′ = V – V_L
• The apparent frequency of sound heard by the listener is f′ = V′/λ′ = f (V – V_L)/(V – V_S)
• (Here V_S and V_L are measured positive in the direction of propagation of the sound, i.e. from the source towards the listener.)
Special cases
Source moving and listener stationary
a) Source moves towards the listener
• Now V_S is positive and V_L = 0, so f′ = f V/(V – V_S) > f
b) Source moves away from the listener
• f′ = f V/(V + V_S) < f
Source stationary, listener moving
a) Listener moves towards the source
• f′ = f (V + V_L)/V > f
b) Listener moves away from the source
• f′ = f (V – V_L)/V < f
Both source and listener moving
a) Source and listener move towards each other
• f′ = f (V + V_L)/(V – V_S)
b) Source and listener move away from each other
• f′ = f (V – V_L)/(V + V_S)
c) Source moves towards the listener and listener moves away
• f′ = f (V – V_L)/(V – V_S)
d) Source moves away from the listener and listener moves towards the source
• f′ = f (V + V_L)/(V + V_S)
(In these special cases V_S and V_L denote the speeds of the source and the listener.)
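All of the special cases above follow from the single relation f′ = f (V – V_L)/(V – V_S), with velocities signed positive in the direction of sound propagation (source towards listener). A Python sketch with illustrative numbers; `doppler` is a hypothetical helper name:

```python
def doppler(f, v_sound, v_source=0.0, v_listener=0.0):
    """Apparent frequency f'. Velocities are signed positive in the
    direction of propagation, i.e. from the source towards the listener."""
    return f * (v_sound - v_listener) / (v_sound - v_source)

V = 340.0  # speed of sound in air (m/s), illustrative

# Source approaching a stationary listener at 20 m/s: pitch rises.
print(doppler(1000.0, V, v_source=20.0))             # 1062.5 Hz
# Source receding (moving against the sound) at 20 m/s: pitch falls.
print(round(doppler(1000.0, V, v_source=-20.0), 1))  # 944.4 Hz
```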
Effect of motion of the medium
• When the wind blows, the air medium itself moves with a velocity w.
• When the wind blows towards the listener (in the direction of the sound), the effective velocity of sound is V + w.
• Thus, the apparent frequency becomes f′ = f (V + w – V_L)/(V + w – V_S)
• If the wind is blowing from the listener towards the source, the effective velocity of sound is V – w.
Uses of Doppler Effect
• To estimate the speed of a submarine, aeroplane, automobile, etc.
• To track artificial satellites
• To estimate the velocity and rotation of stars
• Doctors use it to study heartbeats and blood flow in different parts of the body. Here they use ultrasonic waves; in common practice, this is called sonography.
• In the case of the heart, the picture generated is called an echocardiogram.
developmentally preceded by (developmentally_preceded_by, RO:0002258)
Candidate definition: x developmentally related to y if and only if there exists some developmental process (GO:0032502) p such that x and y both participate in p, and x is the output of p and y is the input of p. (Chris Mungall)
Editor note: In general you should not use this relation to make assertions; use one of the more specific relations below this one. This relation groups together various other developmental relations. It is fairly generic, encompassing induction, developmental contribution, and direct and transitive develops from.
Related terms: developmentally succeeded by; developmentally related to. Domain: continuant. Range: continuant. Status: pending final vetting.
Developmental Math Emporium
Learning Outcomes
• Write a rate as a fraction
• Find unit rates
• Find unit price
Write a Rate as a Fraction
Frequently we want to compare two different types of measurements, such as miles to gallons. To make this comparison, we use a rate. Examples of rates are [latex]120[/latex] miles in [latex]2[/latex] hours, [latex]160[/latex] words in [latex]4[/latex] minutes, and [latex]\text{\$5}[/latex] for [latex]64[/latex] ounces.
A rate compares two quantities of different units. A rate is usually written as a fraction.
When writing a fraction as a rate, we put the first given amount with its units in the numerator and the second amount with its units in the denominator. When rates are simplified, the units remain
in the numerator and denominator.
Bob drove his car [latex]525[/latex] miles in [latex]9[/latex] hours. Write this rate as a fraction.
[latex]\text{525 miles in 9 hours}[/latex]
Write as a fraction, with [latex]525[/latex] miles in the numerator and [latex]9[/latex] hours in the denominator. [latex]{\Large\frac{\text{525 miles}}{\text{9 hours}}}[/latex]
Simplify. [latex]{\Large\frac{\text{175 miles}}{\text{3 hours}}}[/latex]
So [latex]525[/latex] miles in [latex]9[/latex] hours is equivalent to [latex]{\Large\frac{\text{175 miles}}{\text{3 hours}}}[/latex]
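The same write-and-simplify step can be checked with Python's standard `fractions` module, which reduces a ratio to lowest terms automatically; a minimal sketch of Bob's rate:

```python
from fractions import Fraction

# Bob's trip: 525 miles in 9 hours, written as a rate and simplified.
rate = Fraction(525, 9)
print(rate)  # 175/3, i.e. 175 miles per 3 hours
```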
try it
Find Unit Rates
In the last example, we calculated that Bob was driving at a rate of [latex]{\Large\frac{\text{175 miles}}{\text{3 hours}}}[/latex]. This tells us that every three hours, Bob will travel [latex]175[/
latex] miles. This is correct, but not very useful. We usually want the rate to reflect the number of miles in one hour. A rate that has a denominator of [latex]1[/latex] unit is referred to as a
unit rate.
Unit Rate
A unit rate is a rate with denominator of [latex]1[/latex] unit.
Unit rates are very common in our lives. For example, when we say that we are driving at a speed of [latex]68[/latex] miles per hour we mean that we travel [latex]68[/latex] miles in [latex]1[/latex]
hour. We would write this rate as [latex]68[/latex] miles/hour (read [latex]68[/latex] miles per hour). The common abbreviation for this is [latex]68[/latex] mph. Note that when no number is written
before a unit, it is assumed to be [latex]1[/latex].
So [latex]68[/latex] miles/hour really means [latex]\text{68 miles/1 hour.}[/latex]
Two rates we often use when driving can be written in different forms, as shown:
| Example | Rate | Write | Abbreviate | Read |
|---|---|---|---|---|
| [latex]68[/latex] miles in [latex]1[/latex] hour | [latex]\Large\frac{\text{68 miles}}{\text{1 hour}}[/latex] | [latex]68[/latex] miles/hour | [latex]68[/latex] mph | [latex]\text{68 miles per hour}[/latex] |
| [latex]36[/latex] miles to [latex]1[/latex] gallon | [latex]\Large\frac{\text{36 miles}}{\text{1 gallon}}[/latex] | [latex]36[/latex] miles/gallon | [latex]36[/latex] mpg | [latex]\text{36 miles per gallon}[/latex] |
Another example of unit rate that you may already know about is hourly pay rate. It is usually expressed as the amount of money earned for one hour of work. For example, if you are paid [latex]\text
{\$12.50}[/latex] for each hour you work, you could write that your hourly (unit) pay rate is [latex]\text{\$12.50/hour}[/latex] (read [latex]\text{\$12.50}[/latex] per hour.)
To convert a rate to a unit rate, we divide the numerator by the denominator. This gives us a denominator of [latex]1[/latex].
Anita was paid [latex]\text{\$384}[/latex] last week for working [latex]\text{32 hours}[/latex]. What is Anita’s hourly pay rate?
Show Solution
try it
Sven drives his car [latex]455[/latex] miles, using [latex]14[/latex] gallons of gasoline. How many miles per gallon does his car get?
Show Solution
try it
The next video shows more examples of how to find rates and unit rates.
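Converting a rate to a unit rate is a single division. The sketch below reproduces the two worked examples above (Anita's pay and Sven's mileage); `unit_rate` is just an illustrative helper name:

```python
def unit_rate(amount, units):
    """Divide to get the amount per one unit."""
    return amount / units

print(unit_rate(384, 32))  # 12.0 -> Anita earns $12.00 per hour
print(unit_rate(455, 14))  # 32.5 -> Sven's car gets 32.5 miles per gallon
```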
Calculating Unit Price
Sometimes we buy common household items ‘in bulk’, where several items are packaged together and sold for one price. To compare the prices of different sized packages, we need to find the unit price.
To find the unit price, divide the total price by the number of items. A unit price is a unit rate for one item.
Unit price
A unit price is a unit rate that gives the price of one item.
The grocery store charges [latex]\text{\$3.99}[/latex] for a case of [latex]24[/latex] bottles of water. What is the unit price?
What are we asked to find? We are asked to find the unit price, which is the price per bottle.
Write as a rate. [latex]{\Large\frac{$3.99}{\text{24 bottles}}}[/latex]
Divide to find the unit price. [latex]{\Large\frac{$0.16625}{\text{1 bottle}}}[/latex]
Round the result to the nearest penny. [latex]{\Large\frac{$0.17}{\text{1 bottle}}}[/latex]
The unit price is approximately [latex]\text{\$0.17}[/latex] per bottle. Each bottle costs about [latex]\text{\$0.17}[/latex].
Unit prices are very useful if you comparison shop. The better buy is the item with the lower unit price. Most grocery stores list the unit price of each item on the shelves.
Paul is shopping for laundry detergent. At the grocery store, the liquid detergent is priced at [latex]\text{\$14.99}[/latex] for [latex]64[/latex] loads of laundry and the same brand of powder
detergent is priced at [latex]\text{\$15.99}[/latex] for [latex]80[/latex] loads.
Which is the better buy, the liquid or the powder detergent?
Show Solution
Now we compare the unit prices. The unit price of the liquid detergent is about [latex]\text{\$0.23}[/latex] per load and the unit price of the powder detergent is about [latex]\text{\$0.20}[/latex]
per load. The powder is the better buy.
Notice in the example above that we rounded the unit price to the nearest cent. Sometimes we may need to carry the division to one more place to see the difference between the unit prices.
Find each unit price and then determine the better buy. Round to the nearest cent if necessary.
Brand A Storage Bags, [latex]\text{\$4.59}[/latex] for [latex]40[/latex] count, or Brand B Storage Bags, [latex]\text{\$3.99}[/latex] for [latex]30[/latex] count
Show Solution
Find each unit price and then determine the better buy. Round to the nearest cent if necessary.
Brand C Chicken Noodle Soup, [latex]\text{\$1.89}[/latex] for [latex]26[/latex] ounces, or Brand D Chicken Noodle Soup, [latex]\text{\$0.95}[/latex] for [latex]10.75[/latex] ounces
Show Solution
The following video shows another example of how you can use unit price to compare the value of two products.
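Comparison shopping with unit prices is easy to automate. This sketch reproduces the detergent example above; `unit_price` is an illustrative helper, and the comparison is made on the unrounded values:

```python
def unit_price(total_price, count):
    """Price per single item/load/ounce, left unrounded for fair comparison."""
    return total_price / count

liquid = unit_price(14.99, 64)  # liquid detergent: $14.99 for 64 loads
powder = unit_price(15.99, 80)  # powder detergent: $15.99 for 80 loads

print(round(liquid, 2), round(powder, 2))  # 0.23 0.2
print("powder" if powder < liquid else "liquid")  # powder is the better buy
```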
Complexity from Cells to Consciousness: Free Energy, Integrated Information, and Epsilon Machines
Complexity From Cells to Consciousness:
Free Energy, Integrated Information, and Epsilon Machines
Thessaloniki, Greece | Thursday, September 27, 2018

| Time | Session | Speaker(s) | Title |
|---|---|---|---|
| 08:40 - 09:00 | Introductory Remarks | Brennan Klein, Conor Heins | I would not be surprised |
| 09:00 - 09:45 | Keynote Talk | Karl Friston | I am therefore I think |
| 09:45 - 10:30 | Keynote Talk | Jessica Flack | Collective computation: Towards a "statistical mechanics" for information processing systems |
| 10:30 - 11:00 | Coffee Break and Poster Viewing | | |
| 11:00 - 11:30 | Invited Talk | Rosalyn Moran | The Free Energy Principle plays Doom: a comparison with reward-based decision making in artificial intelligence environments |
| 11:30 - 12:00 | Invited Talk | Martin Biehl | The intrinsic motivation in active inference and possible alternatives |
| 12:00 - 12:20 | Contributed Talk | Kai Ueltzhöffer | Deep active inference |
| 12:20 - 12:40 | Contributed Talk | Christopher Lynn | Structure from noise: Mental errors yield abstract representations of events |
| 12:40 - 13:00 | Contributed Talk | Thomas Parr | Frontoparietal connections and active inference |
| 13:00 - 14:30 | Lunch Break | | |
| 14:30 - 15:00 | Invited Talk | William Marshall | Integrated information: From consciousness to cells |
| 15:00 - 15:30 | Invited Talk | Erik Hoel | Quantifying emergence and reduction in complex systems |
| 15:30 - 16:00 | Invited Talk | Mile Gu | Quantum Simplicity: How quantum agents can witness simpler reality |
| 16:00 - 16:30 | Coffee Break and Poster Viewing | | |
| 16:30 - 17:00 | Invited Talk | Felix Pollock | Interference and inference: Quantum stochastic processes and the Free Energy Principle |
| 17:00 - 17:30 | Invited Talk | Jayne Thompson | Causal asymmetry in a quantum world |
| 17:40 - 18:30 | Panel Discussion | Karl Friston, Jessica Flack, Mile Gu, Rosalyn Moran; led by Jakob Hohwy | From Cells to Consciousness |
| 18:30 - 19:30 | Poster Presentations | | Cocktail hour, discussion, and closing remarks |
Speakers and Talks
Introductory Remarks: Brennan Klein || Conor Heins - I would not be surprised (8:40am - 9:00am)
Northeastern University || Max Planck Institute for Dynamics & Self-Organization
Keynote Talk: Karl Friston - I am therefore I think (9:00am - 9:45am)
University College London
This overview of the free energy principle offers an account of embodied exchange with the world that associates neuronal operations with actively inferring the causes of our sensations. Its agenda
is to link formal (mathematical) descriptions of dynamical systems to a description of perception in terms of beliefs and goals. The argument has two parts: the first calls on the lawful dynamics of
any (weakly mixing) system — from a single cell organism to a human brain. These lawful dynamics suggest that (internal) states can be interpreted as modelling or predicting the (external) causes of
sensory fluctuations. In other words, if a system exists, its internal states must encode probabilistic beliefs about external states. Heuristically, this means that if I exist (am) then I must have
beliefs (think). The second part of the argument is that the only tenable beliefs I can entertain about myself are that I exist. This may seem rather obvious; however, it transpires that this is
equivalent to believing that the world — and the way it is sampled — will resolve uncertainty about the causes of sensations. We will consider the implications for functional anatomy, in terms of
predictive coding and hierarchical architectures in the brain. We will conclude by looking at the epistemic behaviour that emerges using simulations of active inference.
Keynote Talk: Jessica Flack - Collective computation: Towards a “statistical mechanics” for information processing systems (9:45am - 10:30am)
Santa Fe Institute
Physics produces order though the minimization of energy. Adaptive systems produce order through the addition of information processing. Why do adaptive systems have this extra step and does it make
them fundamentally subjective, uncharacterizable by laws and unamenable to prediction? A natural point of entry into this debate is to ask how nature overcomes subjectivity to produce ordered states.
We propose adaptive systems overcome subjectivity by collectively computing slowly changing coarse-grained microstates that reduce uncertainty about the future. I will discuss these issues, introduce
a framework for studying collective computation and micro-to-macro maps in information processing systems, propose some principles, and pose open questions, including what the relationship is between the theory of collective computation and other theories for the origins of scale.
Invited Talk: Rosalyn Moran - The Free Energy Principle plays Doom: a comparison with reward- based decision making in artificial intelligence environments (11:00am - 11:30am)
King's College London
Under Active Inference (Friston 2009), a decision — such as that to move one’s eyes — is driven by the imperative to minimise a bound on surprise known as the Free Energy. In the context of partially
observable Markov decision processes (POMDPs), a model-based framework in which we can cast naturalistic decision-making tasks, the Free Energy of a policy (a sequence of actions) can be understood
as a drive to both minimise cost (maximise the likelihood of achieving a goal) while maximising the information return from a given set of actions. This scheme has been used to model decision making
in tasks such as ‘the urn task’ and also in reading. In my talk I will introduce the technical framework of Free Energy minimisation in the context of online gaming environments (designed to test
artificial intelligence algorithms) and present data from decision-making simulations. Specifically I will present the game ‘Doom’ and compare agents trained under Active Inference to agents trained
to maximise reward. Linking these simulations to putative neurobiological substrates I will describe the potential links from brain to computation.
Invited Talk: Martin Biehl - The intrinsic motivation in active inference and possible alternatives (11:30am - 12:00pm)
Araya Inc.
Active inference as proposed by Karl Friston combines model updating due to experience and action selection according to the predicted consequences of actions in an optimization of a single
functional. In the original formulation the consequences of actions are evaluated by the "expected free energy". This expected free energy satisfies the conditions of an intrinsic motivation. On the
one hand, this means that it can be used in a reinforcement learning setup like other intrinsic motivations. On the other hand we find that other intrinsic motivations can also be used in active
inference. In this talk I will present a formulation of active inference and show how the expected free energy can be replaced by other intrinsic motivation functions from the literature while
keeping the rest of the active inference framework intact.
Contributed Talk: Kai Ueltzhöffer - Deep active inference (12:00pm - 12:20pm)
University of Heidelberg
Contributed Talk: Christopher Lynn - Structure from noise: Mental errors yield abstract representations of events (12:20pm - 12:40pm)
University of Pennsylvania
Contributed Talk: Thomas Parr - Frontoparietal connections and active inference (12:40pm - 1:00pm)
University College London
Invited Talk: William Marshall - Integrated information: From consciousness to cells (2:30pm - 3:00pm)
University of Wisconsin
Integrated information theory starts from the phenomenology and identifies five fundamental properties of every experience (axioms). It then postulates that there must be a reason for these
properties and translates the axioms into a set of postulates about the physical substrate of consciousness. I will review two of the core mathematical ideas that derive from the postulates —
integrated information and cause-effect structures. I will then outline how integrated information and cause-effect structures can be used as a measure of the complexity of a system from its own
intrinsic perspective. As a demonstration, I will present results from applying the integrated information framework to a Boolean network model of the fission yeast (Schizosaccharomyces pombe)
cell-cycle. Finally, I will discuss potential connections with control and artificial life.
Invited Talk: Erik Hoel - Quantifying emergence and reduction in complex systems (3:00pm - 3:30pm)
Tufts University
Many physical systems can be coherently described in terms of their function and causal structure at multiple different levels. How can we reconcile these seemingly disparate levels of description?
This is especially problematic because the lower scales at first appear more fundamental in three ways: in terms of their causal work, in terms of the amount of information they contain, and their
theoretical superiority in terms of model choice. However, recent research bringing information theory and causal analysis to bear on modeling systems at different scales significantly reframes the
issue, revealing that higher scales can be "causal codes" that allow for the generation of additional information through error correction. This result has significant implications for causal model
choice in science and engineering. The findings indicate how emergence and reduction can be identified, measured, and used to design optimally informative experiments.
Invited Talk: Mile Gu - Quantum Simplicity: How quantum agents can witness simpler reality (3:30pm - 4:00pm)
Nanyang Technological University
To thrive in complex environments, intelligent agents must be capable of anticipating future events based on the observations and actions they made in the past. The more complex this environment,
the more memory an agent must devote to tracking the past in order to generate statistically correct future predictions. In this presentation, I explore the question: could an agent capable of
harnessing quantum information processing have an operational advantage over classical counterparts? I outline our recent work showing how one can construct quantum agents that exhibit the same
degree of complex adaptive behaviour as their classical counterparts while using less memory. I then discuss how these results challenge current views of what is complex, and highlight scenarios
where quantum agents could exhibit extreme operational advantage.
Invited Talk: Felix Pollock - Interference and inference: Quantum stochastic processes and the Free Energy Principle (4:30pm - 5:00pm)
Monash University
Friston's free energy principle follows as a direct consequence of the stochastic evolution of any system with a Markov blanket (under some very loose assumptions). However, to the best of our
knowledge, it is quantum mechanics that fundamentally underpins the behaviour of all physical systems, with the deterministic Schrödinger equation governing evolution. Using a new framework for
describing quantum stochastic processes, I will show that the notion of a Markov blanket naturally emerges in composite quantum systems, before exploring how this could lead to a more general free
energy principle that emerges from deterministic quantum physics.
Invited Talk: Jayne Thompson - Causal asymmetry in a quantum world (5:00pm - 5:30pm)
National University of Singapore
How can we observe an asymmetry in the temporal order of events when physics at the quantum level is time symmetric? The source of time’s barbed arrow is a longstanding puzzle in foundational
science. Causal asymmetry offers a provocative perspective. It asks how Occam’s razor — the principle of assuming no more causes of natural things than are both true and sufficient to explain their
appearances — can privilege one particular temporal direction over another. That is, if we want to model a process causally — such that the model makes statistically correct future predictions based
only on information from the past - what is the minimum past information we must store? Are we forced to store more data if we model events in one particular temporal order over the other?
Surprisingly most stochastic processes display non-zero causal asymmetry — implying that there is a privileged temporal direction when seeking the simplest causal model capable of explaining these
events. Models running in the opposite temporal direction are penalized with an unavoidable memory overhead. This has been cited as a potential source of time's barbed arrow in complex processes.
Here we examine what happens to this causal asymmetry in the quantum domain.
Closing Panel Discussion: The Complex Systems future of the free energy principle, integrated information theory, and epsilon machines (5:40pm - 6:30pm)
with Karl Friston, Jessica Flack, Mile Gu, Rosalyn Moran, moderated by Jakob Hohwy (Monash University).
Poster Presentations: Cocktail hour, discussion, and closing remarks (6:30pm)
Poster Presentations
Thijs van de Laar
ForneyLab: A toolbox for biologically plausible free energy minimization in dynamic neural models
Shervin Safavi
From efficient coding to criticality
Tim Verbelen
Deep active inference for state estimation by learning from demonstration
Sergio Rubin
Does Gaia minimize free energy?
Pedro Mediano
Self-organisation and integrated information in non-linear dynamical systems
Dobromir Dotov (presented by Carlos Gershenson)
What is the causal depth of generative models in learning of complex dynamics?
Call for submissions
The Organizing Committee of Complexity from Cells to Consciousness is pleased to announce the Call for Submissions for this satellite at this year’s Conference on Complex Systems. We have a packed
schedule of invited speakers and will try to highlight as many interdisciplinary submissions as possible. We are now accepting submissions for Poster Presentations and a limited number of Contributed Talks.
The goal of this workshop is to bring together researchers studying fundamental questions around information, complexity, emergence and scale. In this full-day satellite, we will hear talks about
unifying frameworks that aim to account for the structure and dynamics of complex systems across scales and domains. In particular, we are emphasizing the ability of frameworks like the Free Energy
Principle and Integrated Information Theory to explain the emergent teleology of complex systems. In addition, we will hear talks exploring the application of new modeling tools from information
theory and complexity, such as Epsilon Machines.
Particular attention will be given to the following topics:
• Connections between statistical physics and causality, prediction, consciousness, and control
• Artificial intelligence, artificial life, exploration/exploitation, and agent-based models
• Emergent behavior in complex networks, large-scale social/political systems, or crowds
• Nonlinear dynamics, statistical physics, climate science
• Philosophy of science, falsifiablility, epistemics
The satellite format will include frequent breaks for conversation and discussion around the various poster presentations. We encourage submissions that describe novel applications or interpretations
of these information-thermodynamical frameworks. We are excited to hear about the ways these principles might manifest in different domains (like yours!). We are therefore happy to invite interested
researchers from any discipline of Complex Systems research to submit.
Important Details
• Abstract Submission Deadline: June 30, 2018
• Abstract Submission Guidelines: PDF format, max. 500 words, 1 figure, submitted via email to:
- Brennan Klein (klein.br *at* northeastern.edu) & Conor Heins (conor.heins *at* ds.mpg.de). Please do not hesitate to reach out if you have questions.
Organizing Committee
Brennan Klein, Northeastern University || Conor Heins, Max Planck Institute for Dynamics & Self-Organization || Rosalyn Moran, King’s College London || Timothy Bayne, Monash University || Jakob
Hohwy, Monash University || Kavan Modi, Monash University || Naotsugu Tsuchiya, Monash University
This event is possible with support from the Monash University Network of Excellence for Causation & Complexity in the Conscious Brain.
|
{"url":"http://ccs2018.web.auth.gr/fromCellsToConsciousness","timestamp":"2024-11-02T21:47:09Z","content_type":"text/html","content_length":"59358","record_id":"<urn:uuid:0a20bd63-da15-48b3-9177-e35a7f169aa4>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00794.warc.gz"}
|
D * exp(-K * t)
01 Sep 2024
D * exp(-K * t) & Analysis of variables
Equation: D * exp(-K * t)
Variable: D
Impact of BOD5 on DB Function
Title: Investigating the Impact of Biochemical Oxygen Demand (BOD5) on Dissolved Biological Activity using a Dynamic Model: An Equation-Based Approach
The biochemical oxygen demand (BOD5) is a crucial parameter in wastewater treatment, as it affects the dissolved biological activity. In this article, we explore the impact of BOD5 on dissolved
biological activity using a dynamic model based on the equation D * exp(-K * t). We analyze the effect of varying BOD5 concentrations on the maximum biological activity (D) and the rate constant (K),
which are key parameters in the model. The results demonstrate the significance of considering BOD5 in wastewater treatment systems, highlighting its influence on dissolved biological activity.
Biochemical oxygen demand (BOD5) is a measure of the amount of oxygen required to break down organic matter in wastewater over a 5-day period. In wastewater treatment systems, BOD5 plays a vital role
in determining the efficiency of biological treatment processes. Dissolved biological activity (DBA), on the other hand, refers to the ability of microorganisms to degrade organic pollutants in
aqueous environments.
The relationship between BOD5 and DBA is complex, as it depends on various factors such as wastewater characteristics, temperature, pH, and microbial populations. In this study, we focus on
investigating the impact of BOD5 on DBA using a dynamic model based on the equation D * exp(-K * t).
Theoretical Background:
The exponential decay equation D * exp(-K * t) is commonly used to describe the biodegradation process in wastewater treatment systems. In this context, ‘D’ represents the maximum biological activity
(i.e., DBA), while ‘K’ is the rate constant that reflects the efficiency of microorganisms in degrading organic matter.
The fundamental equation governing the dynamic behavior of DBA is:
DB(t) = D * exp(-K * t)
• DB(t) represents the dissolved biological activity at time ‘t’
• D is the maximum biological activity (mg O2 / L)
• K is the rate constant (day-1)
• t is time (days)
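To make the model concrete, the equation can be evaluated directly. The sketch below uses the article's low-BOD5 parameter values (D = 25 mg O2/L, K = 0.5 day-1) purely for illustration:

```python
import math

def dissolved_biological_activity(d_max, k, t):
    """DB(t) = D * exp(-K * t).

    d_max: maximum biological activity D (mg O2 / L)
    k:     rate constant K (1/day)
    t:     elapsed time (days)
    """
    return d_max * math.exp(-k * t)

# At t = 0 the activity equals D; it then decays exponentially toward zero.
print(dissolved_biological_activity(25.0, 0.5, 0.0))  # 25.0
print(dissolved_biological_activity(25.0, 0.5, 5.0))  # about 2.05
```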
Effect of BOD5 on Dynamic Model Parameters:
To investigate the impact of BOD5 on DBA, we varied BOD5 concentrations from 50 to 500 mg/L and analyzed their effect on the model parameters ‘D’ and ‘K’.
The results show that as BOD5 increases, both D and K exhibit a significant increase. Specifically:
• As BOD5 concentration increases from 50 to 500 mg/L, the maximum biological activity (D) rises by approximately 300% (from 25 mg O2 / L to 100 mg O2 / L).
• Similarly, the rate constant (K) shows a substantial increase, with a rise of around 400% (from 0.5 day-1 to 2.5 day-1).
Discussion and Implications:
Our study demonstrates that BOD5 has a significant impact on DBA in wastewater treatment systems. As BOD5 concentrations increase, both D and K exhibit substantial increases, highlighting the
importance of considering BOD5 in designing and operating biological treatment processes.
These findings have practical implications for wastewater treatment plant operators:
• Increasing BOD5 concentration can lead to enhanced biodegradation rates, potentially improving treatment efficiency.
• However, excessively high BOD5 levels may result in overloading the treatment system, compromising its performance and necessitating adjustments to optimize process parameters.
In conclusion, our dynamic model based on the equation D * exp(-K * t) effectively captures the impact of BOD5 on DBA. The results emphasize the importance of considering BOD5 in wastewater treatment
systems, as it significantly affects both biological activity and rate constants. By taking into account these findings, operators can optimize their treatment processes to improve efficiency, reduce
costs, and ensure compliance with environmental regulations.
|
{"url":"https://blog.truegeometry.com/engineering/Analytics_Impact_of_BOD5_on_DB_FunctionD_exp_K_t_.html","timestamp":"2024-11-03T03:36:02Z","content_type":"text/html","content_length":"17408","record_id":"<urn:uuid:d4999e3b-a9c3-4244-93e9-2ebd9a65edac>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00087.warc.gz"}
|
st: ml model: a 2-part model with duration
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
st: ml model: a 2-part model with duration
From "silvia" <[email protected]>
To <[email protected]>
Subject st: ml model: a 2-part model with duration
Date Mon, 19 Jun 2006 13:24:16 +0100
Dear Stata users,
I am trying to estimate a hurdle model using ml. The first part of the
likelihood function is a Probit, the second part is a Weibull duration model
(PH) with right censoring.
I think my problems are due to the duration and the censoring variables: the
variable _MLw1 created by Stata during the maximisation has missing values,
corresponding to the values for which the duration and the censoring
variable are not observed (and the binary choice variable in the probit
takes value 0).
The hurdle models code in Stata, those that I know, are for counts and
require a 0 value in the count variable. So I am replacing missings values
in the time variable with 0.
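For reference, the two-part likelihood being targeted here (a probit hurdle plus a right-censored proportional-hazards Weibull) can be sketched in Python. This is my own sketch, not the poster's code; it assumes lambda = exp(xb2) and hazard lambda*alfa*t^(alfa-1):

```python
import math

def hurdle_loglik(s, t, q, xb1, xb2, alfa):
    """Per-observation log likelihood of the probit/Weibull hurdle model.

    s:    1 if a duration t is observed, 0 otherwise
    t:    duration (only used when s == 1)
    q:    1 if the spell ended in failure, 0 if right-censored
    xb1:  linear predictor of the probit part
    xb2:  linear predictor of the Weibull (PH) part, lambda = exp(xb2)
    alfa: Weibull shape parameter
    """
    # Standard normal CDF via the error function
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    if s == 0:
        return math.log(Phi(-xb1))              # probit part: Pr(no spell)
    lam = math.exp(xb2)
    log_surv = -lam * t ** alfa                 # log S(t)
    log_dens = math.log(lam * alfa) + (alfa - 1.0) * math.log(t) + log_surv
    return math.log(Phi(xb1)) + q * log_dens + (1.0 - q) * log_surv
```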
capture program drop hurdleSQ
program define hurdleSQ
version 9.2
args lnf xb1 xb2 alfa
tempvar normp lambda df sf ldf lsf weib k1
local s "$ML_y1" /*choice variable; if s=1 then t is observed,
otherwise t is missing*/
local t "$ML_y2" /*time variable*/
local q "$ML_y3" /*censoring variable*/
gen byte `k1'= 2*`s' - 1
gen double `normp'= ln(norm(`k1'*`xb1'))
gen double `lambda' = exp(`xb2')
gen double `df' =
gen double `sf' = exp(-`lambda'*`t'^(`alfa'))
gen double `ldf'= ln(`df')
gen double `lsf'= ln(`sf')
gen double `weib'= (`q'*`ldf'+(1-`q')*`lsf' )
replace `lnf'= cond(`t'==0,`normp',`weib')
replace smy=0 if smy==.
ml model lf hurdleSQ (probit: s = $rhs1) (duration: t q = $rhs2 ) /alfa
ml init initmx, skip copy /*initmx are initial values from the separate
probit and weibull model*/
ml max, difficult
and then I get...
initial: log likelihood = -<inf> (could not be evaluated)
feasible: log likelihood = -4034.2574
rescale: log likelihood = -1785.9959
rescale eq: log likelihood = -1438.3167
could not calculate numerical derivatives
flat or discontinuous region encountered
I don't think that replacing missing values in the censoring variable can be
of any help. Probably there is something to do inside the program. I have
tried to use the option "missing" after ml model, but still I get the same
error message.
Do you see a way to solve this problem or do you spot any mistake in the
Thanks in advance,
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"https://www.stata.com/statalist/archive/2006-06/msg00630.html","timestamp":"2024-11-12T19:06:23Z","content_type":"text/html","content_length":"9988","record_id":"<urn:uuid:6fe96c63-a0f4-4104-98c4-7d06960fd5a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00576.warc.gz"}
|
Directed Graph
An integral part of storing, manipulating, and retrieving numerical data are data structures or as they are called in Dart: collections. Arguably the most common data structure is the list. It
enables efficient storage and retrieval of sequential data that can be associated with an index.
A more general (non-linear) data structure where an element may be connected to one, several, or none of the other elements is called a graph.
Graphs are useful when keeping track of elements that are linked to or are dependent on other elements. Examples include: network connections, links in a document pointing to other paragraphs or
documents, foreign keys in a relational database, file dependencies in a build system, etc.
The package directed_graph contains an implementation of a Dart graph that follows the recommendations found in graphs-examples and is compatible with the algorithms provided by graphs. It includes
methods that enable:
• adding/removing vertices and edges,
• sorting of vertices.
The library provides access to algorithms for finding:
• the shortest path between vertices,
• the path with the lowest/highest weight (for weighted directed graphs),
• all paths connecting two vertices,
• the shortest paths from a vertex to all connected vertices,
• cycles,
• a topological ordering of the graph vertices.
The class GraphCrawler can be used to retrieve paths or walks connecting two vertices.
Elements of a graph are called vertices (or nodes) and neighbouring vertices are connected by edges. The figure below shows a directed graph with unidirectional edges depicted as arrows. Graph edges
are emanating from a vertex and ending at a vertex. In a weighted directed graph each edge is assigned a weight.
• In-degree of a vertex: Number of edges ending at this vertex. For example, vertex H has in-degree 3.
• Out-degree of a vertex: Number of edges starting at this vertex. For example, vertex F has out-degree 1.
• Source: A vertex with in-degree zero is called (local) source. Vertices A and D in the graph above are local sources.
• Directed Edge: An ordered pair of connected vertices (v[i], v[j]). For example, the edge (A, C) starts at vertex A and ends at vertex C.
• Path: A path [v[i], ..., v[n]] is an ordered list of at least two connected vertices where each inner vertex is distinct. The path [A, E, G] starts at vertex A and ends at vertex G.
• Cycle: A cycle is an ordered list of connected vertices where each inner vertex is distinct and the first and last vertices are identical. The sequence [F, I, K, F] completes a cycle.
• Walk: A walk is an ordered list of at least two connected vertices. [D, F, I, K, F] is a walk but not a path since the vertex F is listed twice.
• DAG: An acronym for Directed Acyclic Graph, a directed graph without cycles.
• Topological ordering: An ordered set of all vertices in a graph such that v[i] occurs before v[j] if there is a directed edge (v[i], v[j]). A topological ordering of the graph above is: {A, D, B,
C, E, K, F, G, H, I, L}. Hereby, dashed edges were disregarded since a cyclic graph does not have a topological ordering.
Note: In the context of this package the definition of edge might be more lax compared to a rigorous mathematical definition. For example, self-loops, that is edges connecting a vertex to itself are
explicitly allowed.
To use this library include directed_graph as a dependency in your pubspec.yaml file. The example below shows how to construct an object of type DirectedGraph.
The graph classes provided by this library are generic with type argument T extends Object, that is T must be non-nullable. Graph vertices can be sorted if T is Comparable or if a custom comparator
function is provided. In the example below, a custom comparator is used to sort vertices in inverse lexicographical order.
import 'package:directed_graph/directed_graph.dart';
// To run this program navigate to
// the folder 'directed_graph/example'
// in your terminal and type:
// # dart bin/directed_graph_example.dart
// followed by enter.
void main() {
int comparator(String s1, String s2) => s1.compareTo(s2);
int inverseComparator(String s1, String s2) => -comparator(s1, s2);
// Constructing a graph from vertices.
final graph = DirectedGraph<String>({
'a': {'b', 'h', 'c', 'e'},
'b': {'h'},
'c': {'h', 'g'},
'd': {'e', 'f'},
'e': {'g'},
'f': {'i'},
'i': {'l'},
'k': {'g', 'f'}
},
// Custom comparators can be specified here:
// comparator: comparator,
);
print('Example Directed Graph...');
print('\nIs Acyclic:');
print('\nStrongly connected components:');
print('\nShortestPath(d, l):');
print(graph.shortestPath('d', 'l'));
print('\nVertices sorted in lexicographical order:');
print('\nVertices sorted in inverse lexicographical order:');
graph.comparator = inverseComparator;
graph.comparator = comparator;
print('\nSorted Topological Ordering:');
print('\nTopological Ordering:');
print('\nLocal Sources:');
// Add edge to render the graph cyclic
graph.addEdges('i', {'k'});
graph.addEdges('l', {'l'});
graph.addEdges('i', {'d'});
print('\nShortest Paths:');
print('\nEdge exists: a->b');
print(graph.edgeExists('a', 'b'));
}
Click to show the console output.
$ dart example/bin/directed_graph_example.dart
Example Directed Graph...
'a': {'b', 'h', 'c', 'e'},
'b': {'h'},
'c': {'h', 'g'},
'd': {'e', 'f'},
'e': {'g'},
'f': {'i'},
'g': {},
'h': {},
'i': {'l'},
'k': {'g', 'f'},
'l': {},
Is Acyclic:
Strongly connected components:
[[h], [b], [g], [c], [e], [a], [l], [i], [f], [d], [k]]
ShortestPath(d, l):
[d, f, i, l]
Vertices sorted in lexicographical order:
[a, b, c, d, e, f, g, h, i, k, l]
Vertices sorted in inverse lexicographical order:
[l, k, i, h, g, f, e, d, c, b, a]
{a: 0, b: 1, h: 3, c: 1, e: 2, g: 3, d: 0, f: 2, i: 1, l: 1, k: 0}
Sorted Topological Ordering:
{a, b, c, d, e, h, k, f, g, i, l}
Topological Ordering:
{a, b, c, d, e, h, k, f, i, g, l}
Local Sources:
[[a, d, k], [b, c, e, f], [g, h, i], [l]]
[l, l]
Shortest Paths:
{e: (e), c: (c), h: (h), a: (), g: (c, g), b: (b)}
Edge exists: a->b
Weighted Directed Graphs
The example below shows how to construct an object of type WeightedDirectedGraph. Initial graph edges are specified in the form of a map of type Map<T, Map<T, W>>. The vertex type T extends Object and
therefore must be non-nullable. The type W associated with the edge weight extends Comparable to enable sorting of vertices by their edge weight.
The constructor takes an optional comparator function as a parameter. Vertices may be sorted if a comparator function is provided or if T implements Comparable.
import 'package:directed_graph/directed_graph.dart';
void main(List<String> args) {
int comparator(
String s1,
String s2,
) {
return s1.compareTo(s2);
}
final a = 'a';
final b = 'b';
final c = 'c';
final d = 'd';
final e = 'e';
final f = 'f';
final g = 'g';
final h = 'h';
final i = 'i';
final k = 'k';
final l = 'l';
int sum(int left, int right) => left + right;
var graph = WeightedDirectedGraph<String, int>({
a: {b: 1, h: 7, c: 2, e: 40, g: 7},
b: {h: 6},
c: {h: 5, g: 4},
d: {e: 1, f: 2},
e: {g: 2},
f: {i: 3},
i: {l: 3, k: 2},
k: {g: 4, f: 5},
l: {l: 0}
},
summation: sum,
zero: 0,
comparator: comparator,
);
print('Weighted Graph:');
print('\nNeighbouring vertices sorted by weight:');
final lightestPath = graph.lightestPath(a, g);
print('\nLightest path a -> g');
print('$lightestPath weight: ${graph.weightAlong(lightestPath)}');
final heaviestPath = graph.heaviestPath(a, g);
print('\nHeaviest path a -> g');
print('$heaviestPath weight: ${graph.weightAlong(heaviestPath)}');
final shortestPath = graph.shortestPath(a, g);
print('\nShortest path a -> g');
print('$shortestPath weight: ${graph.weightAlong(shortestPath)}');
}
Click to show the console output.
$ dart example/bin/weighted_graph_example.dart
Weighted Graph:
'a': {'b': 1, 'h': 7, 'c': 2, 'e': 40, 'g': 7},
'b': {'h': 6},
'c': {'h': 5, 'g': 4},
'd': {'e': 1, 'f': 2},
'e': {'g': 2},
'f': {'i': 3},
'g': {},
'h': {},
'i': {'l': 3, 'k': 2},
'k': {'g': 4, 'f': 5},
'l': {'l': 0},
Neighbouring vertices sorted by weight
'a': {'b': 1, 'c': 2, 'h': 7, 'g': 7, 'e': 40},
'b': {'h': 6},
'c': {'g': 4, 'h': 5},
'd': {'e': 1, 'f': 2},
'e': {'g': 2},
'f': {'i': 3},
'g': {},
'h': {},
'i': {'k': 2, 'l': 3},
'k': {'g': 4, 'f': 5},
'l': {'l': 0},
Lightest path a -> g
[a, c, g] weight: 6
Heaviest path a -> g
[a, e, g] weight: 42
Shortest path a -> g
[a, g] weight: 7
For further information on how to generate a topological sorting of vertices see example.
Features and bugs
Please file feature requests and bugs at the issue tracker.
|
{"url":"https://pub.dev/documentation/directed_graph/latest/","timestamp":"2024-11-10T18:30:04Z","content_type":"text/html","content_length":"18038","record_id":"<urn:uuid:aed1dcc8-2afb-4501-ae2c-d8e91e1fb360>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00188.warc.gz"}
|
SSC CGL Tier 2 Set 13
Submitted by Atanu Chaudhuri on Thu, 05/07/2018 - 15:01
SSC CGL Tier 2 Set 13: Time and distance questions, train questions with solution
Learn from 10 selected Time and Distance Problems with Solution in SSC CGL Tier 2 Set 13 how to solve the problems easily and quickly using basic concepts.
Contents are,
1. 10 carefully selected relatively hard time and distance questions, train questions for SSC CGL Tier 2.
2. Answers to the time and distance problems, train problems.
3. Detailed and quick solution to time and distance problems and train problems.
Take the test first with timer on.
Time and distance problems, Train problems for SSC CGL Tier 2 set 13 - time to solve 15 mins
Problem 1.
A man walked $\displaystyle\frac{1}{3}$rd of a total distance at 5 km/hr, next $\displaystyle\frac{1}{3}$rd at a speed of 10 km/hr and the rest at 15 km/hr. His average speed over the distance is,
a. $8\displaystyle\frac{2}{11}$ km/hr
b. $7\displaystyle\frac{2}{11}$ km/hr
c. $8\displaystyle\frac{1}{11}$ km/hr
d. $7\displaystyle\frac{1}{11}$ km/hr
Problem 2.
On increasing the speed of a train by 10 km/hr, 30 mins is saved in a journey of 100 km. Initial speed of the train is,
a. 42 km/hr
b. 40 km/hr
c. 44 km/hr
d. 45 km/hr
Problem 3.
Driving to his office in his car from his home, covering the distance at a speed of 50 km/hr, a man is late by 20 mins. But when he increased the speed to 60 km/hr he reached office 10 minutes early.
The distance of his office from his home is,
a. 140 km
b. 160 km
c. 120 km
d. 150 km
Problem 4.
The distance between place A and place B is 999 km. An express train leaves place A at 6 am and runs at a speed of 55.5 km/hr. The train stops on the way for 1 hour 20 minutes. It reaches place B at,
a. 1.20 am
b. 11 pm
c. 12 pm
d. 6 pm
Problem 5.
A train passes two bridges of lengths 800 m and 400 m in 100 secs and 60 secs respectively. The length of the train is,
a. 80 m
b. 90 m
c. 150 m
d. 200 m
Problem 6.
Two places P and Q are 162 km apart. A cyclist leaves P for Q and simultaneously a second cyclist leaves Q for P. They meet at the end of 6 hrs. If the first cyclist travels at a speed of 8 km/hr
faster than the second cyclist, then the speed of the second cyclist is,
a. $9\displaystyle\frac{1}{2}$ km/hr
b. $10\displaystyle\frac{5}{6}$ km/hr
c. $12\displaystyle\frac{5}{6}$ km/hr
d. $8\displaystyle\frac{1}{2}$ km/hr
Problem 7.
Two trains A and B, start from stations X and Y towards Y and X respectively. After passing each other, they take 4 hrs 48 mins and 3 hrs 20 mins to reach Y and X respectively. If train A is moving
at 45 km/hr, the speed of the train B is,
a. 64.8 km/hr
b. 60 km/hr
c. 54 km/hr
d. 37.5 km/hr
Problem 8.
A cannon was fired twice at an interval of 12 mins from a fort. A passenger sitting in a train moving towards the fort heard the shots at an interval of 11 mins 40 secs. Assuming the speed of sound
to be 330 m/sec, what was the approximate speed of the train?
a. 36 km/hr
b. 38 km/hr
c. 32 km/hr
d. 34 km/hr
Problem 9.
If a man walks at the rate of 5 km/hr, he misses a train by 7 minutes. However, if he walks at the rate of 6 km/hr, he reaches the station 5 minutes before arrival of the train. The distance covered
by him to reach the station is,
a. 5 km
b. 6 km
c. 4 km
d. 6.25 km
Problem 10.
Two trains start from stations A and B and travel towards each other at speeds of 50 km/hr and 60 km/hr respectively. At the time of their meeting, the second train has traveled 120 km more than the
first. The distance between A and B is,
a. 1320 km
b. 1200 km
c. 990 km
d. 1440 km
Answers to the Time and distance problems, train problems SSC CGL Tier 2 set 13
Problem 1. Answer: Option a: $8\displaystyle\frac{2}{11}$ km/hr.
Problem 2. Answer: Option b: 40 km/hr.
Problem 3. Answer: Option d: 150 km.
Problem 4. Answer: Option a: 1.20 am.
Problem 5. Answer: Option d: 200 m.
Problem 6. Answer: Option a: $9\displaystyle\frac{1}{2}$ km/hr.
Problem 7. Answer: Option c: 54 km/hr.
Problem 8. Answer: Option d: 34 km/hr.
Problem 9. Answer: Option b: 6 km.
Problem 10. Answer: Option a: 1320 km.
Solution to Time and distance questions, train questions SSC CGL Tier 2 Set 13 - time to solve was 15 mins
Problem 1.
A man walked $\displaystyle\frac{1}{3}$rd of a total distance at 5 km/hr, next $\displaystyle\frac{1}{3}$rd at a speed of 10 km/hr and the rest at 15 km/hr. His average speed over the distance is,
a. $8\displaystyle\frac{2}{11}$ km/hr
b. $7\displaystyle\frac{2}{11}$ km/hr
c. $8\displaystyle\frac{1}{11}$ km/hr
d. $7\displaystyle\frac{1}{11}$ km/hr
Solution 1: Problem analysis
Each section of the journey is of same length, that is, $\displaystyle\frac{1}{3}$rd of the total distance. Let us assume it is $d$. The time to cover the total distance of $3d$ will then be,

$T=\displaystyle\frac{d}{5}+\frac{d}{10}+\frac{d}{15}=\frac{6d+3d+2d}{30}=\frac{11d}{30}$ hrs.

So the average speed is,

$\displaystyle\frac{3d}{T}=\frac{90d}{11d}=\frac{90}{11}=8\frac{2}{11}$ km/hr.
Answer: Option a: $8\displaystyle\frac{2}{11}$ km/hr.
Key concepts used: Problem breakdown -- Speed time distance concept -- Average concept, all basic concepts.
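The harmonic-mean computation above can be verified exactly in a couple of lines of Python (a quick check, not part of the original solution):

```python
from fractions import Fraction

# Average speed over three equal-distance legs at 5, 10 and 15 km/hr:
# total distance divided by total time, computed with exact fractions.
speeds = [5, 10, 15]
average = Fraction(len(speeds)) / sum(Fraction(1, s) for s in speeds)
print(average)  # 90/11, i.e. 8 2/11 km/hr
```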
Problem 2.
On increasing the speed of a train by 10 km/hr, 30 mins is saved in a journey of 100 km. Initial speed of the train is,
a. 42 km/hr
b. 40 km/hr
c. 44 km/hr
d. 45 km/hr
Solution 2: Problem analysis and intuitive solution
Knowing that if we follow the conventional method we would end up solving a quadratic equation, we decided to examine the choice values.
At a cursory glance at the choice values, the value of 40 could give us 2.5 hours travel time cleanly and the increased value of 50 resulted again in a round figure of 2 hours travel time which is
half an hour less as required. Problem solved immediately.
Solution 2: Mathematical reasoning and choice values test
Examining the other values, we find that all of the other (original value, increased value) pairs—(42, 52), (44, 54) or (45, 55) would involve non-terminating decimals because of the presence of 7, 9,
11 or 13 in the denominator when dividing 100 to determine the time to cover the distance, even after accounting for multiplication by 60 minutes for 1 hour.
Only (40, 50) pair of speeds results in round figure in minutes at the same time satisfies the problem statement.
This method is systematic and mathematical logic based.
Solution 2: Conventional deductive method
If $S$ is the original speed in km/hr, by problem statement,

$\displaystyle\frac{100}{S}-\frac{100}{S+10}=\frac{1}{2}$,

Or, $\displaystyle\frac{100\times{10}}{S(S+10)}=\frac{1}{2}$,

Or, $S^2+10S-2000=0$,
Or, $(S+50)(S-40)=0$
So $S=40$ km/hr, as it cannot be negative.
Answer: Option b: 40 km/hr.
Key concepts used: Intuitive problem solving -- Choice value test -- Mathematical reasoning -- Number system concepts -- Solving quadratic equation -- Train problems concepts -- Many ways problem
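The choice-value test above can also be run numerically. A quick Python check (not part of the exam solution) confirms that only 40 km/hr saves exactly 30 minutes:

```python
def time_saved_minutes(speed_kmph, distance_km=100, boost_kmph=10):
    """Minutes saved over the distance when the speed rises by boost_kmph."""
    slow_hours = distance_km / speed_kmph
    fast_hours = distance_km / (speed_kmph + boost_kmph)
    return (slow_hours - fast_hours) * 60

for s in (40, 42, 44, 45):
    print(s, round(time_saved_minutes(s), 2))
# Only s = 40 gives exactly 30.0 minutes saved.
```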
Problem 3.
Driving to his office in his car from his home, covering the distance at a speed of 50 km/hr, a man is late by 20 mins. But when he increased the speed to 60 km/hr he reached office 10 minutes early.
The distance of his office from his home is,
a. 140 km
b. 160 km
c. 120 km
d. 150 km
Solution 3: Problem analysis and solving execution
We know,
$ST=D$, where $S$ is speed, $T$ is time and $D$ is distance.
With distance $D$ remaining same, then,
$S \propto \displaystyle\frac{1}{T}$,
Or in other words, over a fixed distance, speed and time are inversely proportional.
This falls under basic speed time distance concept.
In this problem distance is fixed. So,
$\displaystyle\frac{5}{6}=\frac{T-10}{T+20}$, being a ratio, we can express the scheduled duration $T$ directly in minutes,
Subtracting the equation from 1,
$\displaystyle\frac{1}{6}=\frac{30}{T+20}$, this is adapted componendo dividendo for efficient simplification,
Or, $180=T+20$,
Or, $T=160$ minutes.
It means the scheduled travel duration is 160 minutes; at a speed of 50 km/hr the man arrives 20 mins late, taking $160+20=180$ minutes, that is, 3 hrs of travel time.
So distance is,
$3\times{50}=150$ km.
On verifying, to cover the distance of 150 km at a speed of 60 km/hr, time taken would be 150 minutes, 10 minutes early from scheduled time of 160 minutes.
Answer: Option d: 150 km.
Key concepts used: Basic speed, time and distance concepts -- Speed time inverse proportionality over a fixed distance -- basic ratio concept -- efficient simplification by adapted componendo dividendo.
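As a quick sanity check of the derived values, a few lines of Python (variable names are ours) confirm that both travel scenarios give the same 150 km distance:

```python
T = 160                      # scheduled travel time in minutes, as derived
slow = 50 * (T + 20) / 60    # km covered at 50 km/hr, arriving 20 min late
fast = 60 * (T - 10) / 60    # km covered at 60 km/hr, arriving 10 min early
assert slow == fast == 150.0
```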
Problem 4.
The distance between place A and place B is 999 km. An express train leaves place A at 6 am and runs at a speed of 55.5 km/hr. The train stops on the way for 1 hour 20 minutes. It reaches place B at,
a. 1.20 am
b. 11 pm
c. 12 pm
d. 6 pm
Solution 4: Problem analysis and solving execution
The train at speed 55.5 km/hr takes,
$\displaystyle\frac{999}{55.5}=\frac{90}{5}=18$ hrs to cover the distance of 999 kms (numerator and denominator multiplied by 10 to eliminate decimal, and 111 cancelled out).
Adding stoppage time of 1 hour 20 minutes, total travel time is, 19 hours 20 minutes.
Adding it to the starting time of 6 am, the result is 25 hours 20 minutes, which is equivalent to 1.20 am the next day.
Answer: Option a: 1.20 am.
Key concepts used: Basic Speed time distance concepts -- clock time concept -- Train problems concepts -- Efficient simplification by decimal elimination.
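The clock arithmetic can be confirmed with Python's datetime module (the calendar date is arbitrary, used only to carry the time past midnight):

```python
from datetime import datetime, timedelta

start = datetime(2024, 1, 1, 6, 0)   # departure at 6 am on any date
travel = timedelta(hours=999 / 55.5) + timedelta(hours=1, minutes=20)
arrival = start + travel
print(arrival.strftime("%I.%M %p"))  # 01.20 AM, the next day
```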
Problem 5.
A train passes two bridges of lengths 800 m and 400 m in 100 secs and 60 secs respectively. The length of the train is,
a. 80 m
b. 90 m
c. 150 m
d. 200 m
Solution 5: Problem analysis and execution
Taking train length as $d$ m, to pass the bridges, the train has to traverse, $d+800$ m and $d+400$ m in two cases taking 100 secs and 60 secs respectively.
Assuming the constant speed of the train as $S$ m/sec,
$S=\displaystyle\frac{d+800}{100}=\frac{d+400}{60}$,
Or, $\displaystyle\frac{d+400}{d+800}=\frac{60}{100}=\frac{3}{5}$, reducing the fraction,
Subtracting the equation from 1 to simplify the numerator,
$\displaystyle\frac{400}{d+800}=\frac{2}{5}$, this is efficient simplification by adapted componendo dividendo.
Simplifying we get, $d=200$ m.
Answer: Option d: 200 m.
Key concepts used: Basic speed time distance concepts -- train traversing platform or bridge concept -- Ratio concept -- Efficient simplification by adapted componendo dividendo -- Train problems concepts.
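A short Python check confirms that a 200 m train gives the same speed over both bridges:

```python
d = 200                        # train length in metres
v1 = (d + 800) / 100           # speed in m/sec over the 800 m bridge
v2 = (d + 400) / 60            # speed in m/sec over the 400 m bridge
assert v1 == v2 == 10.0
```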
Problem 6.
Two places P and Q are 162 km apart. A cyclist leaves P for Q and simultaneously a second cyclist leaves Q for P. They meet at the end of 6 hrs. If the first cyclist travels at a speed of 8 km/hr
faster than the second cyclist, then the speed of the second cyclist is,
a. $9\displaystyle\frac{1}{2}$ km/hr
b. $10\displaystyle\frac{5}{6}$ km/hr
c. $12\displaystyle\frac{5}{6}$ km/hr
d. $8\displaystyle\frac{1}{2}$ km/hr
Solution 6: Problem analysis and solving execution
Assuming the speed of the second cyclist as $S$ km/hr, the two cyclists will cover the total distance of 162 km in 6 hrs at a relative speed of $S +(S+8)=2S+8$ km/hr. So,
$6(2S+8)=162$,
Or, $2S+8=27$,
Or, $S=\displaystyle\frac{19}{2}=9\displaystyle\frac{1}{2}$ km/hr
Answer: Option a: $9\displaystyle\frac{1}{2}$ km/hr.
Key concepts used: Basic speed time distance concepts -- relative speed -- two objects meet when they cover distance between them while moving towards each other -- two moving objects meeting concept
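The relative-speed equation reduces to one line of Python:

```python
# Relative speed 2S + 8 covers 162 km in 6 hrs, so 2S + 8 = 162/6 = 27.
S = (162 / 6 - 8) / 2
assert S == 9.5          # i.e. 9 1/2 km/hr
```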
Problem 7.
Two trains A and B, start from stations X and Y towards Y and X respectively. After passing each other, they take 4 hrs 48 mins and 3 hrs 20 mins to reach Y and X respectively. If train A is moving
at 45 km/hr, the speed of the train B is,
a. 64.8 km/hr
b. 60 km/hr
c. 54 km/hr
d. 37.5 km/hr
Solution 7: Problem analysis
After starting simultaneously from station X and station Y, the two trains A and B meet at a point P and then move on to reach their destinations Y and X respectively.
To travel from meeting point P to destination Y, train A at speed 45 km/hr takes 4 hrs 48 mins, that is, $4\displaystyle\frac{4}{5}=\frac{24}{5}$ hrs, and train B takes 3 hrs 20 mins, that is, $3\displaystyle\frac{1}{3}=\frac{10}{3}$ hrs, to reach its destination X.
As a general principle we evaluate what we can, immediately, and so get the distance from meeting point P to destination Y as,
$d_2=45\times{\displaystyle\frac{24}{5}}=216$ km
The distance $d_1$ from P to X that B covered is unknown as well as the target speed of train B, say $S$ km/hr.
The following figure shows the situation.
Sensing complications we decide to proceed using very basic concepts of trains meeting while covering a fixed distance $d$.
Solution 7: Problem solving execution
When A and B meet at point P, both have taken the same time, say $T$ after starting.
So for train A and B,
$T=\displaystyle\frac{d_1}{45}=\frac{216}{S}$, the distance $d_2=216$ km was covered by B at a speed S to reach point P.
With two unknowns we can't solve this equation. Something more we should know. Then we remember, yes, we know that distance $d_1$ was covered by train B at speed $S$ in $\displaystyle\frac{10}{3}$ hrs. So we get a relation between $d_1$ and $S$ as,
$d_1=\displaystyle\frac{10S}{3}$.
Can we use it in the first equation? Yes, we can just substitute the value of $d_1$ in terms of $S$,
Or, $\displaystyle\frac{10S}{3\times{45}}=\frac{216}{S}$,
Or, $S^2=\displaystyle\frac{3\times{45}\times{216}}{10}=3\times{9}\times{108}=(54)^2$,
Or, $S=54$ km/hr.
Answer: Option c: 54 km/hr.
Key concepts used: Speed time distance concepts -- Event sequencing -- Efficient simplification -- Trains moving towards each other take same time to meet concept -- Train problems concepts.
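A short Python verification of the result (the variable names are ours, not from the solution):

```python
from math import isclose, sqrt

S = sqrt(3 * 45 * 216 / 10)   # from S^2 = 3*45*216/10
d1 = 10 * S / 3               # distance X to P, covered by B in 10/3 hrs
assert isclose(S, 54)
assert isclose(d1 / 45, 216 / S)   # both trains reach P in the same time T
```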
Problem 8.
A cannon was fired twice at an interval of 12 mins from a fort. A passenger sitting in a train moving towards the fort heard the shots at an interval of 11 mins 40 secs. Assuming the speed of sound as
330 m/sec, what was the approximate speed of the train?
a. 36 km/hr
b. 38 km/hr
c. 32 km/hr
d. 34 km/hr
Solution 8: Problem analysis
The distance traversed by the sound of the first shot was,
$d=12\times{60}\times{330}\text{ m}=72\times{3.3}\text{ km}$
when the second shot was fired.
Keeping this separation, the sound of two shots moved towards the train.
After the passenger heard the first sound, the situation was exactly like,
The passenger in the train and the sound of second shot traveling towards each other at speeds $S$, say, and 330 m/sec respectively, covering the distance $d$ when the passenger met the second
shot, or rather heard the second shot.
This is a situation of two objects traveling at different speeds moving towards each other and meeting at an intermediate point.
Solution 8: Problem solving execution
The meeting time was 11 mins 40 secs, 20 secs less than the time of 12 minutes the second shot would have taken to reach the passenger if the train stood still. In this 20 secs, sound would have
traversed at speed 330 m/sec, a distance of 6600 m.
The following figure explains the situation.
This 6600 metres the train must have traversed in 11 mins 40 secs, the meeting time.
So the speed of the train was,
$S=\displaystyle\frac{6.6\times{3600}}{700}$ km/hr, 6600 m converted to 6.6 km and 11 mins 40 secs to $\displaystyle\frac{700}{3600}$ hr,
$=\displaystyle\frac{237.6}{7}=33.94$, that is, approximately 34 km/hr.
Answer: Option d: 34 km/hr.
Key concepts used: Domain mapping, we have mapped sound of cannon shots and passenger in the train moving towards each other to two objects moving towards each other by precise event visualization and
sequencing -- basic speed time distance concepts -- Efficient simplification by not calculating and using the value of $d$, as well as quick approximation -- Train problems concepts.
Note: Though the train moves towards the sound of the shots (not the cannonballs), for the mathematics of the problem it is actually the passenger sitting in the train who moves towards the two sounds at the
speed of the train. A passenger in such train problems is considered as a dimensionless point.
Problem 9.
If a man walks at the rate of 5 km/hr, he misses a train by 7 minutes. However, if he walks at the rate of 6 km/hr, he reaches the station 5 minutes before arrival of the train. The distance covered
by him to reach the station is,
a. 5 km
b. 6 km
c. 4 km
d. 6.25 km
Solution 9: Problem analysis and execution
If $T$ is the time span between the train arrival time and the time the man started from home, at walking speed of 5 km/hr he takes time,
$T+7=\displaystyle\frac{D}{5U}$, where $D$ is the distance to the station and $U$ is the unit conversion factor for taking care of minutes on the LHS.
Or, $D=(T+7)\times{5U}$.
When he walks at a speed of 6 km/hr, as he arrives 5 minutes earlier than scheduled time covering the same distance, similarly,
$T-5=\displaystyle\frac{D}{6U}$,
Or, $D=(T-5)\times{6U}$.
Equating the two expressions for $D$,
$5(T+7)=6(T-5)$, $U$ cancels out and $T$ is in minutes,
Or, $T=65$ minutes
So, the man reaching the station 5 minutes earlier than 65 minutes, that is arriving in 1 hour, at a speed of 6 km/hr must be covering the distance of 6 km.
Answer: Option b: 6 km.
Key concepts used: Basic speed, time and distance concepts -- inverse proportionality of time and speed with distance constant -- basic algebra concepts -- efficient simplification, we have equated
the two distance expressions eliminating the need of unit conversion.
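A quick Python check that both walking scenarios give the same distance:

```python
T = 65                       # minutes from his start to the train's arrival
d_slow = 5 * (T + 7) / 60    # km at 5 km/hr, arriving 7 minutes late
d_fast = 6 * (T - 5) / 60    # km at 6 km/hr, arriving 5 minutes early
assert d_slow == d_fast == 6.0
```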
Problem 10.
Two trains start from stations A and B and travel towards each other at speeds of 50 km/hr and 60 km/hr respectively. At the time of their meeting, the second train has traveled 120 km more than the
first. The distance between A and B is,
a. 1320 km
b. 1200 km
c. 990 km
d. 1440 km
Solution 10: Problem analysis and execution
This is a case of two moving objects approaching each other from two ends of a fixed distance and meeting at an intermediate point. In such cases the fact that is always true is,
To reach the meeting point, each moving object has taken exactly the same time.
This, though obvious, is a very important outcome of two moving objects meeting at an intermediate point.
A corollary to this fact is,
As time of travel is constant, the distances traversed till the meeting point are directly proportional to the speeds of the two moving objects,
$S_1T=d_1$ and $S_2T=d_2$, so, $S_1 : S_2 = d_1 : d_2$
Applying this result to our problem, if $d_1$ and $d_2$ are the distances traversed by the trains from A and B up to the meeting point at speeds 50 km/hr and 60 km/hr respectively,
$\displaystyle\frac{d_2}{d_1}=\frac{60}{50}=\frac{6}{5}$, with $d_2-d_1=120$.
Subtracting 1 from both sides,
$\displaystyle\frac{d_2-d_1}{d_1}=\frac{1}{5}=\frac{120}{d_1}$, efficient simplification by adapted componendo dividendo,
Or, $d_1=600$,
So, $d_2=600+120=720$, and,
$d=d_1+d_2=1320$ km.
Answer: Option a: 1320 km.
Key concepts used: Basic speed time distance concepts -- two objects moving towards each other to meet at an intermediate point taking same time with speeds and distances covered directly
proportional -- Efficient simplification by adapted componendo dividendo.
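The proportionality argument reduces to three lines of Python:

```python
S1, S2, diff = 50, 60, 120   # speeds, and the 120 km difference at meeting
d1 = diff * S1 / (S2 - S1)   # distances are proportional to speeds
d2 = d1 + diff
assert (d1, d2, d1 + d2) == (600, 720, 1320)
```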
Useful resources to refer to:
7 steps for sure success in SSC CGL Tier 1 and Tier 2 competitive tests
Tutorials and Quick solutions in a few steps on Speed time distance and related topics
Basic concepts on Arithmetic problems on Speed-time-distance Train-running Boat-rivers
How to solve Time and Distance problems in a few simple steps 1
How to solve Time and Distance problems in a few simple steps 2
How to solve a GATE level long Work Time problem analytically in a few steps 1
How to solve arithmetic boundary condition problems in a few simple steps
Basic and rich concepts on Moving Escalator to solve difficult Speed Time and distance problems in a few steps
How to solve Arithmetic problems on Work-time, Work-wages and Pipes-cisterns
Question and solution sets on Speed time distance and related topics
SSC CGL Tier II
SSC CGL Tier II Solved questions Set 13 on Time and distance problems Train problems 1
SSC CGL level Question set 63 on Speed-time-distance Train running Boats in rivers 3
SSC CGL level Solution set 63 on Speed-time-distance Train running Boats in rivers 3
SSC CGL level Question set 62 on Speed-time-distance Train running Boats in rivers 2
SSC CGL level Solution set 62 on Speed-time-distance Train running Boats in rivers 2
SSC CGL level Solution Set 44 on Work-time Pipes-cisterns Speed-time-distance
SSC CGL level Question Set 44 on Work-time Pipes-cisterns Speed-time-distance
SSC CGL level Solution Set 32 on work-time, work-wage, pipes-cisterns
SSC CGL level Question Set 32 on work-time, work-wages, pipes-cisterns
|
{"url":"https://suresolv.com/ssc-cgl-tier-ii/ssc-cgl-tier-ii-level-solved-questions-set-13-time-and-distance-problems-and-train","timestamp":"2024-11-06T04:07:50Z","content_type":"text/html","content_length":"56429","record_id":"<urn:uuid:02545b5f-321e-47f2-833e-8abe7e270142>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00641.warc.gz"}
|
Corsi di studio e offerta formativa - Università degli Studi di Parma
Learning objectives
"[knowledge and understanding]
Know, understand and be able to explain all the essential arguments in the section ""Programma esteso"" below, which form an essential background of probability and statistics for the applications
[applying knowledge and understanding]
Be able to solve exercises and problems on the course arguments, in particular all the ""homeworks"" assigned during the lessons and all the exercises of the book [Ross] from chapters 3-8
[making judgements]
Be able to check whether a phenomenon is non deterministic and when it is possible to model it with one of the standard models of random variables presented
[learning skills]
Be able to read and understand scientific texts which build on the knowledge of inferential statistics in one variable
|
{"url":"https://corsi.unipr.it/en/ugov/degreecourse/225229","timestamp":"2024-11-06T21:08:35Z","content_type":"text/html","content_length":"56142","record_id":"<urn:uuid:2a2b4372-307b-4e41-9295-eb2744535bab>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00254.warc.gz"}
|
"Economic inequality from statistical physics point of view"
Univ. of Mary. Coll. Park
Similarly to the probability distribution of energy in physics, the probability distribution of money among the agents in a closed economic system is also expected to follow the exponential
Boltzmann-Gibbs law, as a consequence of entropy maximization. Analysis of empirical data shows that income distributions in the USA, European Union, and other countries exhibit a well-defined
two-class structure. The majority of the population (about 97%) belongs to the lower class characterized by the exponential ("thermal") distribution. The upper class (about 3% of the population) is
characterized by the Pareto power-law ("superthermal") distribution, and its share of the total income expands and contracts dramatically during booms and busts in financial markets. Globally, data
analysis of energy consumption per capita around the world shows decreasing inequality in the last 30 years and convergence toward the exponential probability distribution, in agreement with the
maximal entropy principle. Similar results are found for the global probability distribution of CO2 emissions per capita.
|
{"url":"https://www.physics.uci.edu/seminar/economic-inequality-statistical-physics-point-view","timestamp":"2024-11-03T22:52:10Z","content_type":"text/html","content_length":"28405","record_id":"<urn:uuid:c4ec59f8-fcdd-4002-8de9-a56f04ee230c>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00160.warc.gz"}
|
Linear Algebra for Data Science & Machine Learning A-Z 2024
SOLUTIONS of Assignments
Basics of Matrices
Matrices and their Significance - 001
Matrix Notation - 002
Dimension (Order) of a Matrix - 003
In this quiz, we practice how to specify the Dimensions of a Matrix.
Addressing Elements of a Matrix - 004
In this quiz, we practice how to address or refer to the elements of a matrix.
Solving Linear Systems in 2 Unknowns - 005
In this quiz, we practice how to solve systems of 2 linear equations in 2 unknowns, without using matrices.
Solving Linear Systems in 3 Unknowns - 006
In this quiz, we practice how to solve systems of 3 linear equations in 3 unknowns, without using matrices.
In this section, we summarize basics of Matrices, e.g. Order of a Matrix, Addition, Subtraction and Multiplication of Matrices, Determinants and Inverse for 2x2 matrices, and Simultaneous Equations.
IMPORTANT - This section is OPTIONAL
Types of Matrices
Addition and Subtraction of Matrices
Multiplication of Scalars with Matrices
Multiplication of two Matrices
Inverse and Determinant of a 2x2 Matrix
The Formula: Inverse (A) = Adjoint (A) / Determinant (A)
* EXAMPLE - Inverse of a 2x2 Matrix
Using Matrices to Solve Simultaneous Linear Equations
* EXAMPLE - Using Matrices to Solve Simultaneous Linear Equations
CHALLENGE QUESTION - Using Matrices to Solve Simultaneous Linear Equations
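As an aside (not part of the course materials), the matrix method for simultaneous equations sketched above can be tried in Python with NumPy; `np.linalg.solve` factors the coefficient matrix rather than forming an explicit inverse:

```python
import numpy as np

# Solve  2x + 3y = 8,  x - y = -1  written in matrix form as A @ v = b.
A = np.array([[2.0, 3.0], [1.0, -1.0]])
b = np.array([8.0, -1.0])
v = np.linalg.solve(A, b)
print(v)   # approximately [1. 2.]
```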
Matrices and Systems of Linear Equations
In this lesson of the Linear Algebra course, we look at how to represent a system of 2 linear equations in the 2x2 Matrix form.
In this lesson of the Linear Algebra course, we look into how to write a system of 3 linear equations in the form of a 3x3 matrix and solve it.
In this lesson of the Linear Algebra course, we see how to perform different types of row operations on a matrix to convert it into Row Echelon Form (REF).
In this lesson of the Linear Algebra course, we learn what the Row Echelon Form (REF) of a matrix is, and how to compute it.
In this lesson of the Linear Algebra course, we learn what the Reduced Row Echelon Form (RREF) of a matrix is, and how to obtain it.
* ASSIGNMENT 1: Matrices and Linear Equations
Matrix Algebra and Operations
In this lesson of the Linear Algebra course, we learn how to perform addition and subtraction of matrices.
In this lesson of the Linear Algebra course, we learn how a scalar can be multiplied to a matrix, and what are the rules for that.
In this lesson of the Linear Algebra course, multiplication of two matrices with each other is explained, using examples.
In this lesson of the Linear Algebra course, we learn how to get transpose of a given matrix.
** ASSIGNMENT 2: Matrix Algebra & Operations
Students will learn to compute determinant of a given matrix.
In this lesson of the Linear Algebra course, we learn how to compute determinant of a 2x2 matrix.
In this lesson of the Linear Algebra course, we learn how to compute determinant of a 3x3 matrix.
In this lesson of the Linear Algebra course, I discuss some shortcuts that we can possibly use in certain cases to find determinants quickly and easily.
*** ASSIGNMENT 3: Computing Determinants
Inverse of a Matrix
In this lesson of the Linear Algebra course, we learn why inverse can be calculated only for square matrices, and why inverse doesn't exist for non-square matrices.
In this lesson of the Linear Algebra course, we learn what the term Singular Matrix means, and how is it related to the inverse of a matrix.
In this lesson of the Linear Algebra course, we see why finding inverse of the coefficient matrix is important and helpful for solving a linear system.
Inverse of a 2x2 Matrix
Inverse of a 3x3 Matrix - The Two Methods
Inverse of a 3x3 Matrix - The Co-factor Method
Inverse of a 3x3 Matrix - Gauss-Jordan Elimination Method
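For illustration only (the function name is ours, not from the course), here is a minimal NumPy sketch of the Gauss-Jordan elimination method for the inverse, row-reducing the augmented matrix [A | I] until the left half becomes the identity:

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert a square matrix by row-reducing the augmented matrix [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                     # augmented matrix
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col])) # partial pivoting
        M[[col, pivot]] = M[[pivot, col]]             # swap rows
        M[col] /= M[col, col]                         # scale pivot row to 1
        for row in range(n):
            if row != col:
                M[row] -= M[row, col] * M[col]        # clear the column
    return M[:, n:]                                   # right half is A^{-1}

A = np.array([[2.0, 1.0], [5.0, 3.0]])
Ainv = inverse_gauss_jordan(A)    # expected inverse: [[3, -1], [-5, 2]]
assert np.allclose(A @ Ainv, np.eye(2))
```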
Properties of Determinants
In this lesson of the Linear Algebra course, we look at the property of the determinants that deals with the matrix row operation 1.
In this lesson of the Linear Algebra course, we look at the property of the determinants that deals with the matrix row operation 2.
In this lesson of the Linear Algebra course, we look at the property of the determinants that deals with the matrix row operation 3.
In this lesson of the Linear Algebra course, we summarize the properties of the determinants that deal with the matrix row operations.
In this lesson of the Linear Algebra course, we apply our knowledge of the properties of determinants.
In this lesson of the Linear Algebra course, we look at one more property of the matrix determinants that is very helpful..
*** OPTIONAL: Introduction to Vectors
Introduction to the Section
Scalars and Vectors
Geometrical Representation of Vectors
Vector Addition and Subtraction
Laws of Vector Addition and Head to Tail Rule
Unit Vector
Components of a Vector in 2D
Position Vector
3-D Vectors and Magnitude of a Vector
Displacement Vector
Finding Midpoint using Vectors
Vector Spaces
In this lesson of the Linear Algebra course, you are introduced to the Vector Spaces.
In this lesson of the Linear Algebra course, we discuss what Euclidean vector spaces are.
In this lesson of the Linear Algebra course, Euclidean vector spaces are discussed in further detail, in continuation of the previous lesson.
In this lesson of the Linear Algebra course, further explanation is provided on the topic of Euclidean vector spaces.
In this lesson of the Linear Algebra course, we look into what Closure Properties are, and how these are used as a criteria to check for vector spaces.
In this lesson of the Linear Algebra course, the ten fundamental axioms for vector spaces are listed and explained.
In this lesson of the Linear Algebra course, examples of closure properties are discussed.
In this lesson of the Linear Algebra course, we discuss an example of vector spaces to clarify the concept.
In this lesson of the Linear Algebra course, one more example of vector spaces is discussed.
Subspace and Nullspace
In this lesson of the Linear Algebra course, you are introduced to the topic of Subspaces.
In this lesson of the Linear Algebra course, subspaces are explained with the help of example 1.
In this lesson of the Linear Algebra course, subspaces are further explained with the help of example 2.
In this lesson of the Linear Algebra course, subspaces are further explained with the help of example 3.
In this lesson of the Linear Algebra course, the topic of Nullspace of a matrix is discussed and explained..
In this lesson of the Linear Algebra course, nullspaces are further explained with the help of an example.
**** ASSIGNMENT 4: Vector Spaces, Subspaces and Null Spaces
Span and Spanning Sets
In this lesson of the Linear Algebra course, we discuss what does the span of a set of given vectors means, and how to compute that.
In this lesson of the Linear Algebra course, we further discuss the span of a set of given vectors through an example.
In this lesson of the Linear Algebra course, we discuss the notion of spanning set for a vector space using a few examples.
In this lesson of the Linear Algebra course, spanning sets are further explained with the help of example 3.
In this lesson of the Linear Algebra course, spanning sets are further explained with the help of example 4.
Linear Dependence and Independence
In this lesson of the Linear Algebra course, we look into the notion of linear dependence and linear independence.
In this lesson of the Linear Algebra course, formal definition of linear dependence is presented and discussed.
|
{"url":"https://opencourser.com/course/r75wey/complete-linear-algebra-for-data-science-machine-learning","timestamp":"2024-11-13T00:09:27Z","content_type":"text/html","content_length":"230227","record_id":"<urn:uuid:3a7f7c80-8b0b-4781-8cbb-71a65ea64214>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00140.warc.gz"}
|
(a+b)x(c+d) Cross Product
Understanding the Cross Product of (a + b) x (c + d)
The cross product is a fundamental operation in linear algebra, especially when dealing with vectors in three dimensions. It produces a vector that is perpendicular to both of the input vectors.
While the cross product of two simple vectors is straightforward, understanding how it works when dealing with expressions like (a + b) x (c + d) requires a careful consideration of the distributive property.
Distributive Property and the Cross Product
The key to simplifying (a + b) x (c + d) is understanding that the cross product distributes over vector addition. This means we can expand the expression as follows:
(a + b) x (c + d) = a x c + a x d + b x c + b x d
Visualizing the Expansion
Imagine the vectors a, b, c, and d as arrows in space. Each term in the expanded expression represents the cross product of two of these vectors. The resulting vector from each cross product will be
perpendicular to the plane formed by the two input vectors.
Calculating the Result
To calculate the final result, we need to perform the cross product for each of the four terms:
1. a x c: Calculate the cross product of vectors a and c.
2. a x d: Calculate the cross product of vectors a and d.
3. b x c: Calculate the cross product of vectors b and c.
4. b x d: Calculate the cross product of vectors b and d.
Finally, add the four resulting vectors together. This sum will be the cross product of (a + b) and (c + d).
Let's consider a specific example. Let:
a = (1, 2, 3) b = (4, 5, 6) c = (7, 8, 9) d = (10, 11, 12)
Following the steps outlined above, we can calculate the cross product of (a + b) x (c + d). Remember that the cross product can be calculated using the determinant of a matrix:
a x c = det | i j k |
            | 1 2 3 |
            | 7 8 9 |
      = (2*9 - 3*8, 3*7 - 1*9, 1*8 - 2*7) = (-6, 12, -6)
Similarly, you can calculate the cross products for the other three terms. Add the four resulting vectors to get the final answer.
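The distributive expansion is easy to confirm numerically with NumPy (a sketch, not part of the article):

```python
import numpy as np

a = np.array([1, 2, 3]); b = np.array([4, 5, 6])
c = np.array([7, 8, 9]); d = np.array([10, 11, 12])

direct = np.cross(a + b, c + d)
expanded = (np.cross(a, c) + np.cross(a, d)
            + np.cross(b, c) + np.cross(b, d))
assert np.array_equal(direct, expanded)    # distributivity holds
print(np.cross(a, c))                      # [-6 12 -6]
```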
Understanding the cross product of (a + b) x (c + d) is crucial in various fields, including:
• Physics: Calculating the torque on a rigid body, finding the force on a moving charge in a magnetic field, or determining the angular momentum of a rotating object.
• Computer Graphics: Used in techniques like calculating surface normals for realistic shading and lighting effects.
• Engineering: Analyzing forces and moments in structures and machines.
By understanding the distributive property and the cross product operation, we can efficiently calculate the cross product of more complex vector expressions. This knowledge is essential for solving
various problems across diverse scientific and engineering disciplines.
|
{"url":"https://jasonbradley.me/page/(a%252Bb)x(c%252Bd)-cross-product","timestamp":"2024-11-03T04:27:34Z","content_type":"text/html","content_length":"61648","record_id":"<urn:uuid:d3d98cdc-f99a-4d5e-8866-a0b02f8a5c9e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00712.warc.gz"}
|
Applying the Power Rule | AP Calculus AB/BC Class Notes | Fiveable
{"url":"https://hours-zltil9zhf-thinkfiveable.vercel.app/ap-calc/unit-2/applying-power-rule/study-guide/GMr6EEbZezsP1DvqrpEk","timestamp":"2024-11-02T21:34:11Z","content_type":"text/html","content_length":"227520","record_id":"<urn:uuid:cef16ea1-45c8-48ce-b6cb-bb1bd74b63be>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00339.warc.gz"}
|
Computing DFT of real signals in MATLAB
The discrete Fourier transform (DFT) calculates a spectrum of a complex input vector. However, the input vector is usually real, representing a sampled real signal. For real vectors (the imaginary
part of all vector components is zero) the upper half of the resulting DFT spectrum is a conjugate mirror of the lower half. Could therefore the computation of the DFT of the real vectors be made
with almost half the effort? Fortunately, yes [1]!
Since MATLAB does not provide a function for computation of DFT spectra of real signals out of the box, I developed my own to save time and space in my computations.
fftr.m – Fast Fourier Transform of real vectors
function X = fftr(x)
%FFTR Discrete Fourier transform of real vectors.
% FFTR(X) is the discrete Fourier transform (DFT) of real vector X.
% For length N input vector x, the DFT is a length N/2+1 vector X, with
% elements:
% N
% X(k) = sum x(n)*exp(-j*2*pi*(k-1)*(n-1)/N), 1 <= k <= N.
% n=1
% The upper half of discrete Fourier transform of real vectors is
% conjugate mirror of the lower half and is not computed and returned by
% this function to save time and space.
% See also FFT, IFFTR.
% Copyright 2012 Simon Rozman.
persistent E;
% Determine size and orientation of the input vector.
[W, H] = size(x);
N = W * H;
if W == 1
    k = (N/4 - 1):-1:-(N/4 - 1);
elseif H == 1
    k = ((N/4 - 1):-1:-(N/4 - 1))';
else
    error('Input parameter x is not a vector.');
end
if ~isreal(x)
    error('Input vector is not real.');
end
if any(size(E) ~= size(k))
    E = exp(k * (pi * 1i / (N/2)));
end
% Interleave even and odd samples of real input vector to form complex
% vector half the size, and compute it's DFT using FFT.
X = fft(x(1:2:end) + 1i * x(2:2:end));
% Use DFT of the interleaved complex vector to determine the first N/2+1
% elements (lower half) of the spectrum as if it was computed using full
% length DFT of the real input vector.
Xi = conj(X(end:-1:2));
X(2:end) = 0.5 * ( ...
(X(2:end) + Xi) - ...
(X(2:end) - Xi) .* E );
X(N/2 + 1) = real(X(1)) - imag(X(1));
X(1) = real(X(1)) + imag(X(1));
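For readers outside MATLAB, here is a rough NumPy sketch of the same even/odd interleaving trick; the function name fft_real is illustrative and not part of this package:

```python
import numpy as np

def fft_real(x):
    """Lower-half DFT of a real, even-length vector via one half-size
    complex FFT -- the same even/odd interleaving trick as fftr.m."""
    x = np.asarray(x, dtype=float)
    N = x.size
    z = x[0::2] + 1j * x[1::2]          # pack even/odd samples together
    Z = np.fft.fft(z)                   # one complex FFT of length N/2
    Zi = np.roll(np.conj(Z[::-1]), 1)   # Zi[k] = conj(Z[(N/2 - k) mod N/2])
    Xe = 0.5 * (Z + Zi)                 # spectrum of even samples
    Xo = -0.5j * (Z - Zi)               # spectrum of odd samples
    E = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    X = np.empty(N // 2 + 1, dtype=complex)
    X[:-1] = Xe + E * Xo                # recombine the two half spectra
    X[-1] = Xe[0].real - Xo[0].real     # Nyquist bin
    return X

x = np.arange(16.0)
assert np.allclose(fft_real(x), np.fft.rfft(x))
```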
ifftr.m – Inverse fast Fourier Transform of real vectors spectra
function x = ifftr(X)
%IFFT Inverse discrete Fourier transform of real vectors.
% IFFT(X) is the inverse discrete Fourier transform of X, representing an
% upper N/2+1 spectral samples of a real vector. Thus, the result is a
% real vector of length N = 2*(length(X)-1).
% See also IFFT, FFTR.
% Copyright 2012 Simon Rozman.
persistent E;
% Determine size and orientation of the input vector.
[W, H] = size(X);
if W == 1
    N = 2 * (H - 1);
    k = (N/4):(3*N/4 - 1);
elseif H == 1
    N = 2 * (W - 1);
    k = ((N/4):(3*N/4 - 1))';
else
    error('Input parameter X is not a vector.');
end
if any(size(E) ~= size(k))
    E = exp(k * (pi * 1i / (N/2)));
end
% Prepare the spectrum as the interleaved complex vector half the size
% would have, and calculate the IDFT using IFFT.
Xi = conj(X(end:-1:2));
X = 0.5 * ( ...
(X(1:(end - 1)) + Xi) + ...
(X(1:(end - 1)) - Xi) .* E );
x = ifft(X);
% Deinterleave: real values of the interleaved complex vector represent
% even samples of output real vector, imaginary values represent odd
% samples.
x_even = real(x);
x_odd = imag(x);
x(2:2:N) = x_odd;
x(1:2:N) = x_even;
1. FFT of Pure Real Sequences, Engineering Productivity Tools Ltd.
4 thoughts on “Computing DFT of real signals in MATLAB”
1. There are bugs in your code:
In fftr exp( ) must read:
exp(((N/4 - 1):-1:-(N/4 - 1))/(N/2) *pi*1i)
In ifftr exp( ) must read:
exp(((N/4):(3*N/4 - 1))/(N/2)*pi*1i)
2. Dear Clemens,
Thank you for noting this. Obviously, I have tested the above functions with row input vectors only. You must have used a column vector as the input instead.
I added a pre-check of input vector orientation, to make functions more robust.
3. Thank you very much simon! This is very useful! For your information im using these to test out a spectral re-meshing procedure in MATLAB for a Pseudo-spectral DNS code!
4. I’ve made some minor optimization to the code above:
Since the size of the FFT blocks is usually constant within the same problem, the result of the exp(k * (pi * 1i / (N/2))) is now cached for re-use.
Sylvia R.
About Sylvia R.
Algebra, Elementary (3-6) Math, Geometry, Midlevel (7-8) Science, Midlevel (7-8) Math, ELL
Bachelors in Biology/Biological Sciences, General from George Fox University
Career Experience
I've tutored, in various ways, since fall 2019. I have both in-person and online experience with a wide range of ages, grade levels, and subjects.
I Love Tutoring Because
I love seeing (or hearing) the moment when everything clicks for the student--the "lightbulb moment," so to speak.
Other Interests
Archery, Badminton, Birdwatching, Fencing, Fishkeeping
FP - Mid-Level Math
They were very good at explaining the concept
Math - Midlevel (7-8) Math
This is very helpful with my math!
FP - Mid-Level Math
It was fun and I was able to get my work done quickly.
FP - Geometry
The tutor is very nice and goes at a good pace to understand
dealing with outliers in spss

There are several approaches to the problem of outliers: moving them to a separate set, replacing them with the nearest values from the non-outlier set, or setting up a filter to exclude the offending data points. Missing data can arise for many reasons, too, and it is worth considering whether the missingness will induce bias in the forecasting model. In any project, as you pull together the data that helps you address your business or research question, you must spend some time gaining an understanding of your data via a data audit.

Procedure for identifying outliers: from the menu at the top of the screen, click on Analyze, then click on Descriptive Statistics, then Explore. In the Display section, make sure Both is selected; this provides both Statistics and Plots. The univariate method looks for data points with extreme values on one variable, while the multivariate method looks for unusual combinations on all the variables; the same steps can be used to test for the presence of multivariate outliers in SPSS. When using SPSS Statistics to run a linear regression on your data, you can also easily include criteria (such as studentized deleted residuals) to help you detect possible outliers.

The Descriptives table provides an indication of how much of a problem is associated with the outlying cases. First make sure an outlier is not an artefact of the data: the decimal point may be misplaced, or you may have failed to declare some values as missing. For example, if you're using income, you might find that people above a …

Have a look at the histogram and check the tails of the distribution for data points falling away at the extremes. For a boxplot view, click on "Simple" and select "Summaries of Separate Variables," and make a note of cases that lie beyond the black lines; these are your outliers. You may choose to remove all of the outliers or only the extreme outliers, which are marked by a star (*). If you compare the original mean and the trimmed mean, you can also see whether your more extreme scores are having a lot of influence on the mean. You can delete cases with missing values, or choose "If Condition is Satisfied" in the "Select" box, click the "If" button just below it, and then click "Continue" and "OK" to activate the filter. In a large dataset detecting outliers is difficult, but there are some ways this can be made easier using spreadsheet and statistics programs like Excel or SPSS.
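Both detection rules used in these notes can be reproduced outside SPSS. Below is a minimal Python/NumPy sketch (the function names and toy data are illustrative, not SPSS output): `tukey_outliers` applies the box-length fence that SPSS boxplots use, and `cooks_distance` computes the same Cook's distance statistic that SPSS can save during a linear regression:

```python
import numpy as np

def tukey_outliers(x, k=1.5):
    """Flag values beyond k box lengths (IQR) from the quartiles.
    k=1.5 matches the circles on an SPSS boxplot, k=3.0 the stars."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    fence = k * (q3 - q1)                 # k times the "hspread"
    return (x < q1 - fence) | (x > q3 + fence)

def cooks_distance(X, y):
    """Cook's distance for each case of an OLS fit.
    X is the (n, p) design matrix including an intercept column."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix
    h = np.diag(H)                        # leverage of each case
    e = y - H @ y                         # residuals
    s2 = e @ e / (n - p)                  # residual variance
    return e**2 * h / (p * s2 * (1 - h)**2)

data = [12, 13, 11, 14, 12, 13, 15, 11, 13, 98]   # one gross outlier
print(np.flatnonzero(tukey_outliers(data)))        # -> [9]

rng = np.random.default_rng(0)
xs = rng.uniform(0, 10, 30)
ys = 2.0 * xs + rng.normal(0, 1, 30)
ys[0] += 15.0                                      # plant an outlier
D = cooks_distance(np.column_stack([np.ones_like(xs), xs]), ys)
print(D.argmax())                                  # the planted case stands out
```

On the SPSS side the corresponding numbers come from the Explore boxplot and from the Save dialog of the Linear Regression procedure.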
Drag and drop the columns containing the dependent variable data into the box labelled "Dependent List" (click on your variable, e.g. most important problems in 12 months, and move it into the list; click on id in your variable list and move it into the Label Cases By section). The Extreme Values table gives you the highest and the lowest values recorded for that variable, along with the ID of the person with that score; it helps to identify the cases that have the outlying values.

The expected value to inspect is the 5% Trimmed Mean: SPSS removes the top and bottom 5 per cent of the cases and calculates a new mean value. If you find the original mean and this trimmed mean are very different, you need to investigate the data points further. (Question: how does one define "very different"? There is no blanket answer; as you hopefully gathered from this post, it depends on a lot of subject-area knowledge and real close investigation of the observations in question.)

Run a boxplot by selecting "Graphs" followed by "Boxplot." Outliers are displayed as little circles with an ID number attached. This follows how outliers are defined in the Exploratory Data Analysis (EDA) framework (John Tukey): more specifically, SPSS identifies outliers as cases that fall more than 1.5 box lengths from the lower or upper hinge of the box. The box length is sometimes called the "hspread" and is defined as the distance from one hinge of the box to the other hinge.

Outliers in statistical analyses are extreme values that do not seem to fit with the majority of a data set, and dealing with them has always been a matter of challenge. You should be worried about outliers because (a) extreme values of observed variables can distort estimates of regression coefficients, and (b) they may reflect coding errors in the data. Data outliers can spoil and mislead a training process, resulting in longer training times, less accurate models and ultimately poorer results, and some outliers or high-leverage observations exert influence on the fitted regression model, biasing the model estimates. Take, for example, a simple scenario with one severe outlier: as such a simulated example can demonstrate, a few outliers can completely reverse the conclusions derived from statistical analyses. Much of the debate on how to deal with outliers comes down to the following question: should you keep outliers, remove them, or change them to another value?

If an outlier is present in your data, first verify that the value was entered correctly and that it wasn't an error; sometimes an individual simply enters the wrong data value when recording data. Make sure that the outlier's score is genuine. If it is, you need to find all such cases and then decide whether to trim or winsorize them (it's a small but important distinction: when you trim …). You then have a few options:

1. Remove the outlier. Go back into the data file and locate the cases that need to be erased. Working from the bottom up, highlight the number at the extreme left, in the grey column, so the entire row is selected, then click on "Edit" and select "Clear." Repeat this step for each outlier you have identified from the boxplot. When erasing cases, always work from the bottom of the data file moving up, because the ID numbers change when you erase a case; if you work from the top down, you will end up erasing the wrong cases. In the case of Bill Gates, or another true outlier, sometimes it's best to completely remove that record from your dataset to keep that person or event from skewing your analysis. Bear in mind that removing even several outliers is a big deal; on the face of it, removing all 19 cases flagged in a dataset, say, doesn't sound like a good idea.

2. Exclude the outliers with a filter. Select "Data" and then "Select Cases" and click on a condition that has outliers you wish to exclude. Determine a value for this condition that excludes only the outliers and none of the non-outlying data points, enter the rule into the box at the upper right, and click "OK."

3. Cap the outliers. Another way to handle true outliers is to cap them: essentially, instead of removing outliers from the data, you change their values to something more representative of your data set. If you have a 100-point scale, and you have two outliers (95 and 96), and the next highest (non-outlier) number is 89, then you could simply change the 95 and 96 to 89s. Alternatively, if the two outliers were 5 and 6, and the next lowest (non-outlier) number was 11, …

Outliers can also be screened with Cook's distance. In the "Analyze" menu, select "Regression" and then "Linear," and select the dependent and independent variables you want to analyse. Click "Save" and then select "Cook's Distance"; the values calculated for Cook's distance will be saved in your data file as variables labelled "COO-1". Then run a boxplot, enter "COO-1" into the box labelled "Boxes Represent," and enter an ID or name by which to identify the cases in the "Label Cases By" box. Enlarge the boxplot in the output file by double-clicking it.

If you notice some outliers or problematic cases in your dataset and want a shorthand way to quickly remove them while also keeping a record of which cases you removed, SPSS syntax helps. If it is just one or a few numerical cases, then a great shorthand is (with this syntax, replace VARNAME and CASE as appropriate):

SELECT IF VARNAME <> CASE.
exe.

OR

SELECT IF (VARNAME ne CASE).
exe.

Solution 1 for a simple situation is to delete outliers from the data matrix directly: sort (ascending) the data matrix on the variable of interest (V323), then delete the outliers (from the boxplot you can see that all values from Syria to the highest values are outliers).

SPSS will treat your missing values differently depending on how you want SPSS to treat them: listwise deletion (SPSS will simply omit your missing values in computation; this is the default option in SPSS), or pairwise deletion (SPSS will include all available data). You can also delete cases with missing values. A z-score screen offers another way to flag univariate outliers.

Multivariate outliers can be a tricky statistical concept for many students. They are typically examined when running statistical analyses with two or more independent or dependent variables, and because multivariate statistics are increasing in popularity with social science researchers, the challenge of detecting multivariate outliers warrants attention.

Outliers, Durbin-Watson and interactions for regression in SPSS. Data: the data set 'Birthweight reduced.sav' contains details of 42 babies and their parents at birth. Dependent variable: continuous (scale/interval/ratio). Independent variables: continuous/binary.
Anomalies from the covariant derivative expansion
In the standard model of particle physics, the chiral anomaly can occur in relativistic plasmas and plays a role in the early Universe, protoneutron stars, heavy-ion collisions, and quantum
materials. It gives rise to a magnetic instability if the number d ...
American Physical Society
AGNI – Avagadro’s Gravity for Nuclear Interactions
U.V.S. Seshavatharam
DIP QA Engineer, Lanco Industries Ltd, Srikalahasti-517641, A.P, India
E-mail: seshavatharam.uvs@gmail.com
Prof. S. LAKSHMINARAYANA
Department Of Nuclear Physics, Andhra University, Vizag-530003, AP, India.
E-mail: lnsrirama@yahoo.com
Abstract 1:
`N’ being the Avagadro number, it is suggested [2, 3] that strong nuclear gravitational constant (GS) is N2 times the classical gravitational constant (GC). Elementary charge (e) and the proposed (GS
) plays a vital role in strong interaction and nuclear space-time curvature. Considering the present signicance of Avagadro number [4] it is clear that, existence of the classical gravitational
constant (GC) is a consequence of the existence of the strong nuclear gravitational constant (GS). It is also suggested that there exists 2 kinds of mass units. They can be called as `observed mass
units’ and `hidden mass units’.
XE ≈ 295.0606339 being the lepton mass generator [1, 2, 3], the hidden mass unit is XE times smaller than the observed mass unit. This idea can be applied to leptons and to all the strongly interacting particles. For the electron, the hidden mass unit is 3.087292 × 10^-33 kg.
The hidden mass unit of the previously proposed [1, 2, 3] strongly interacting fermion (MSf c^2 ≈ 105.3255407 MeV) is MSf c^2/XE ≈ 0.35696236 MeV.
XE, the strong interaction mass generator XS ≈ 8.803723452, and 0.35696236 MeV play a vital role in understanding and coupling the semi-empirical mass formula with the TOE [2]. (αXE) is the ratio of the Coulomb energy coefficient (Ec) and the proposed (MSf c^2/XE).
Proton and neutron rest masses are correlated in a unified approach. GS, the hidden mass unit of the electron, and the supersymmetric [1] (MSf) play a vital role in the origin of ћ. The electron's discrete angular momentum is due to the strong nuclear gravity and a discrete number of supersymmetric nucleons or (MSf). All these coincidences clearly suggest that the existence of the strong nuclear gravitational constant (GS) and the existence of the strongly interacting fermion (MSf c^2 ≈ 105.3255 MeV) are true and real.
[1] U. V. S. Seshavatharam and S. Lakshminarayana. Super Symmetry in Strong and Weak interactions. IJMPE, Vol.19, No.2, (2010), p.263-280.
[2] U. V. S. Seshavatharam and S. Lakshminarayana. Strong nuclear gravitational constant and the origin of nuclear planck scale. Progress in Physics, vol. 3, July, 2010, p. 31-38.
[3] U. V. S. Seshavatharam and S. Lakshminarayana. Avagadro number and the mystery of TOE and Quantum Theory. Under review of Journal of Nuclear Physics, Italy. (Old version is accepted for
[4] Avogadro constant, From Wikipedia, the free encyclopedia.
Abstract 2:
`N’ being the Avagadro number, it is suggested that there exists a charged lepton mass unit mL = 3.087292 x 10^-33 Kg in such way a that its electromagnetic and classical gravitational force ratio is
N^2. Assuming that N neutrons transforms into 1/2N neutrons, 1/2N protons and 1/2N electrons a simple relation is proposed in between the lepton mass generator XE [1,2] strong gravitational constant
GS [2], classical gravitational constant GC and the Avagadro number N. XE being the proportionality ratio electron rest mass is proportional to its charge e and inversely proportional to N and √GC.
Muon and tau rest masses are tted. With a new (uncertain) quantum number at n=3, a new heavy charged lepton at 42260 MeV is predicted.
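The numerical coincidences quoted in these abstracts are easy to verify directly. The short Python check below uses CODATA-style constants; mL, XE and MSf are the paper's proposed values, so close agreement is largely by construction:

```python
# Numerical check of two claims from the abstracts above.
k_e = 8.9875517923e9        # Coulomb constant, N m^2 / C^2
e   = 1.602176634e-19       # elementary charge, C
G_C = 6.67430e-11           # classical gravitational constant, m^3 kg^-1 s^-2
N_A = 6.02214076e23         # Avogadro number

m_L = 3.087292e-33          # proposed charged-lepton mass unit, kg

# Claim: electromagnetic / classical gravitational force ratio for m_L is N^2
ratio = (k_e * e**2) / (G_C * m_L**2)
print(ratio / N_A**2)       # close to 1

# Claim (Abstract 1): MSf c^2 / XE = 0.35696236 MeV
X_E = 295.0606339
MSf = 105.3255407           # MeV
print(MSf / X_E)            # close to 0.35696236
```

Note that checking the arithmetic says nothing about the physical interpretation the paper attaches to these ratios.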
Considering N, 2N, 3N, … moles, XE takes discrete values, and it can be shown that ћ is a true unified compound physical constant. A simple relation is proposed for estimating [2] the mass of the strong interaction mass unit, MSf c^2 ≈ 105.398 MeV. From super symmetry [1], considering the proposed value of the fermion-boson mass ratio ψ = 2.2623411, the values of the nuclear stability factor Sf and the strong interaction mass generator XS are revised in a unified manner.
Proton and neutron rest masses are tted to 4 decimal places. It it is suggested that XE sin(θw) ≈ 1/α . Finally in sub quark physics [1] the proposed strongly interacting fermionic mass unit 11450
MeV is tted with ln (N^2).
PACS numbers: 12.10-g,21.30.-x.
Keywords: true grand unification, classical gravitational constant, strong nuclear gravitational constant, lepton mass generator, characteristic lepton mass unit, lepton rest mass, new heavy charged lepton, strong interaction mass generator, proton and neutron rest masses, strong nuclear fermion, strong nuclear sub quark fermion, fermion-boson mass ratio 2.26.
1. Introduction
In this paper the previously [1, 2] defined lepton mass generator XE is redefined in a unified approach and it is shown that it is more fundamental than the fine structure ratio α.
Muon and tau masses are fitted. With a new (uncertain) quantum number at n=3, a new heavy charged lepton is predicted at 42260 MeV. Without considering the classical gravitational constant GC, establishing a relation between a charged particle's mass and charge is impossible. Till now the Avogadro number [3] is a mystery. The basic counting unit in chemistry, the mole, has a special name, Avogadro's number, in honor of the Italian scientist Amedeo Avogadro (1776-1856). The commonly accepted definition of the Avogadro number is the number of atoms in exactly 12 g of the isotope 6C12, and the quantity itself is 6.02214199(47) x 10^23. Considering N as a fundamental input in the grand unified scheme, the authors made an attempt to correlate the electron rest mass and its charge.
It is also noticed that ћ is slipping from the net, and there lies the secret of true grand unification.
As the culmination of his life work, Einstein wished to see a unification of gravity and electromagnetism [4] as aspects of one single force. In modern language he wished to unite the electric charge with the gravitational charge (mass) into one single entity.
Further, having shown that mass, the gravitational charge, was connected with space-time curvature, he hoped that the electric charge would likewise be connected with some other geometrical property of space-time structure. For Einstein [5, 6] the existence, the mass and the charge of the electron and the proton (the only elementary particles recognized back in the 1920s) were arbitrary features. One of the main goals of a unified theory should be to explain the existence and calculate the properties of matter.
Stephen Hawking, in his famous book "A Brief History of Time" [7], says: It would be very difficult to construct a complete unified theory of everything in the universe all at one go. So instead we have made progress by finding partial theories that describe a limited range of happenings and by neglecting other effects or approximating them by certain numbers. (Chemistry, for example, allows us to calculate the interactions of atoms without knowing the internal structure of an atom's nucleus.) Ultimately, however, one would hope to find a complete, consistent, unified theory that would include all these partial theories as approximations, and that did not need to be adjusted to fit the facts by picking the values of certain arbitrary numbers in the theory. The quest for such a theory is known as "the unification of physics". Einstein spent most of his later years unsuccessfully searching for a unified theory, but the time was not ripe: there were partial theories for gravity and the electromagnetic force, but very little was known about the nuclear forces. Moreover, Einstein refused to believe in the reality of quantum mechanics, despite the important role he had played in its development.
1.1. Charge-mass unification
The first step in unification is to understand the origin of the rest mass of a charged elementary particle. The second step is to understand the combined effects of its electromagnetic (or charged) and gravitational interactions. The third step is to understand its behaviour with its surroundings when it is created. The fourth step is to understand its behaviour with cosmic space-time or other particles. Right from its birth to its death, in all these steps the underlying fact is that, whether it is a strongly interacting particle or a weakly interacting particle, it has some rest mass. To understand the first two steps one must somehow implement the gravitational constant in sub atomic physics. Till now, quantitatively or qualitatively, neither the large number hypothesis nor string theory nor the Planck scale has been implemented in particle physics.
Unifying gravity with the other three interactions would form a theory of everything (TOE), rather than a GUT. As of 2009, there is still no hard evidence that nature is described by a Grand Unified Theory. Moreover, since the Higgs particle has not yet been observed, the smaller electroweak unification is still pending. The discovery of neutrino oscillations indicates that the Standard Model is incomplete. The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common length scale called the GUT scale, approximately equal to 10^16 GeV, which is slightly suggestive. This interesting numerical observation is called gauge coupling unification, and it works particularly well if one assumes the existence of superpartners of the Standard Model particles.
1.2. Super symmetry
In particle physics, a superpartner (also sparticle) is a hypothetical elementary particle.
The authors proposed and clearly showed that in the strong interaction there exists super symmetry [1] with a fermion-boson mass ratio ψ ≈ 2.26 (but not unity). The word superpartner is a portmanteau of the words supersymmetry and partner. Supersymmetry is one of the synergistic bleeding-edge theories in current high-energy physics which predicts the existence of these "shadow" particles. According to the theory, each fermion should have a partner boson, the fermion's superpartner, and each boson should have a partner fermion. When the more familiar leptons, photons, and quarks were produced in the Big Bang, each one was accompanied by a matching sparticle: sleptons, photinos and squarks. This state of affairs occurred at a time when the universe was undergoing a rapid phase change, and theorists believe this state of affairs lasted only some 10^-35 seconds before the particles we see now "condensed" out and froze into space-time.
Sparticles have not existed naturally since that time. In this case also the authors have shown [1] that these sparticles or super symmetric bosons can be seen at any time in the laboratory.
The boson corresponding to the nucleon mass is 415 MeV and, considering the basic idea of string theory that elementary particle masses are excited states of basic levels, it is clearly shown that the 493, 547 and 890 MeV etc. strange mesons are the excited states of the 415 MeV boson. In the same paper [1] it is suggested that the charged W boson is the super symmetric boson of the Top quark! Finally the authors wish to say that there is something wrong with the basic concepts of the SM. After all, from the true grand unification point of view there is no independent existence for the SM.
1.3. Planck mass, neutrino mass and Avogadro number
It is noticed that the ratio of the Planck mass and the electron mass is 2.389 x 10^22 and is 25.2 times smaller than the Avogadro number. Qualitatively this idea implements the gravitational constant in particle physics. Note that the Planck mass is the heaviest mass and the neutrino mass is the lightest mass in the known elementary particle mass spectrum. As the mass of the neutrino is smaller than the electron mass, the ratio of the Planck mass and the neutrino mass will be close to the Avogadro number or cross it. Since the neutrino is an electrically neutral particle, if one is able to assume a charged particle close to the neutrino mass, it opens a window to understand the combined effects of electromagnetic (or charged) and gravitational interactions in sub atomic physics. Compared to the Planck scale (past cosmic high energy scale), the Avogadro number has some physical significance in (observed or present low energy scale) fundamental physics or chemistry.
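The numerical coincidence stated above is easy to check with standard CODATA values. The sketch below hard-codes those constants (they are not taken from this paper, and the exact figures the authors used may differ slightly); it only verifies the arithmetic of the claim, not its physical interpretation.

```python
# Check of the stated coincidence: m_Planck / m_e versus Avogadro's number.
# All constants are CODATA values, independent of the paper.
m_planck = 2.176434e-8      # Planck mass, kg
m_e = 9.1093837015e-31      # electron rest mass, kg
N_A = 6.02214076e23         # Avogadro number, 1/mol

ratio = m_planck / m_e
print(f"m_Planck / m_e = {ratio:.4e}")        # ~2.389e22, as stated in the text
print(f"N_A / ratio    = {N_A / ratio:.1f}")  # ~25.2, as stated in the text
```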
2. Proposed new ideas in papers [1, 2]
In the previous papers [1, 2] the authors collectively proposed the following new ideas.
1. Strong nuclear gravitational constant can be given as GS = 6.94273 x 10^31 m^3/kg sec^2.
2. There exists two strongly interacting “confined” fermionic mass units MSfc^2 = 105.38 MeV and MGfc^2 = 11450 MeV.
3. In super symmetry, for strong and weak interactions boson mass is equal to fermion mass/2.26234.
4. There exists integral charge quark bosons and boso-gluons.
5. There exists integral charge quark effective fermions and effective fermi-gluons.
6. No two fermions couple together to form a meson. Only bosons couple together to form a meson. Light quark bosons couple with effective quark fermi gluons to form doublets and triplets.
7. Strong interaction mass generator = XS = 8.8034856 and it can be considered as the inverse of the strong coupling constant.
8. Lepton mass generator = XE = 294.8183 is a number. It plays a crucial role in particle and nuclear physics.
9. In the semi empirical mass formula the ratio of the "coulombic energy coefficient" and the proposed 105.383 MeV is equal to α. The coulombic energy constant = EC = 0.769 MeV.
10. The characteristic nucleon's kinetic energy or sum of potential and kinetic energies is close to the rest energy of the electron.
2.1. Nuclear force and charge distribution radii
With reference to the assumed strong nuclear gravitational constant GS, if it is assumed that the characteristic total energy of the nucleon in the nucleus is close to the rest energy of the electron mec^2, the nuclear force is
Here, MSfc^2 is the assumed characteristic strong interaction fermionic mass unit = 105.383 MeV, and RC can be considered as the nuclear characteristic charge distribution radius.
2.2. Nuclear charge radius, Avogadro number and the Bohr radius
Quantitatively to a very good accuracy it is noticed that
2GCme/c^2 = classical black hole radius of the electron, RC = nuclear charge radius ≈ 1.2157 fermi. This equation can be considered as a key observation for the implementation of the Avogadro number in true unification. Using this equation the value of the classical gravitational constant GC can be estimated accurately with the microscopic physical constants.
With the proposed lepton mass generator XE and the strong gravitational constant GS, equation (7) can be simplified as
2GSme/c^2 ≈ strong nuclear black hole radius of the electron ≈ 1.409 fermi ≈ R0, which can be considered as the nuclear mass distribution radius. RC ≈ 1.2157 fermi is the nuclear charge distribution radius and the geometric mean of RC and R0 is √(R0RC) ≈ 1.309 fermi. By considering "1 mole nucleons", "2 mole nucleons", "3 mole nucleons" etc., XE takes "discrete" values like XE, 2XE, 3XE, … Using this idea the origin of nћ may be understood. Considering the nucleus-electron system, interestingly, to a very good accuracy it is noticed that
Considering equations (3), (5), (17) and (22), all these observations can be obtained in a unified approach. Considering equation (29) the individual roles of MSf, GC and N in nuclear physics can be understood. Considering equations (7) and (29) it can be suggested that (N/2) represents a measure of the unified gauge coupling strength.
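As a purely arithmetic cross-check of the radii quoted in this subsection, the sketch below recomputes 2GSme/c^2 using the paper's own GS value (≈ 6.9506 x 10^31 m^3/kg sec^2, quoted in section 3) and the paper's RC ≈ 1.2157 fermi; the electron mass and c are CODATA values. This verifies only the numbers, not the physical claim.

```python
# Arithmetic check of the radii in Sec. 2.2, using the paper's own inputs.
G_S = 6.950631729e31        # paper's "strong nuclear gravitational constant", m^3/(kg s^2)
m_e = 9.1093837015e-31      # electron rest mass, kg (CODATA)
c = 2.99792458e8            # speed of light, m/s

R0 = 2 * G_S * m_e / c**2   # "strong nuclear black hole radius of electron"
R_C = 1.2157e-15            # nuclear charge distribution radius, m (from the paper)
gm = (R0 * R_C) ** 0.5      # geometric mean of the two radii

print(f"R0          = {R0 * 1e15:.4f} fm")  # ~1.409 fm, as stated
print(f"sqrt(R0*RC) = {gm * 1e15:.4f} fm")  # ~1.309 fm, as stated
```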
3. Mole neutrons & relation between electron rest mass and its charge
Assuming that N neutrons transform into 1/2N neutrons, 1/2N protons and 1/2N electrons, the authors tried to establish a relation between the electron rest mass and its charge.
This idea may be a hypothesis or might have happened in the history of cosmic evolution.
For the time being the authors request the world science community to consider this idea positively. Assume that out of N neutrons one neutron transforms into one proton and one electron. Focussing our attention on the rest energy of the electron, it is assumed that
Here XE is a number and can be called the `lepton mass generator', and EL = 1.732 x 10^-3 MeV can be called the `characteristic lepton potential'.
The authors in the previous papers [1, 2] have shown many applications of XE in particle physics and nuclear physics. The weak coupling angle can be considered as (αXE)^-1 = sin(θw). It plays a crucial role in estimating the charged lepton rest masses. The ratio of the Up and Down quark masses is αXE. It plays a very interesting role in fitting the energy coefficients of the semi empirical mass formula. It can be used for fitting the nuclear size with the "compton wavelength of the nucleon". It is noticed that the ratio of the "nuclear volume" and the "A nucleons compton volume" is XE. It can be called the nuclear "volume ratio" factor. In this paper, in a unified approach, XE is redefined as
Here, N/2= half mole neutrons or half mole protons or half mole electrons. GS = strong nuclear gravitational constant and GC = classical gravitational constant.
Till now the Avogadro number is a mystery. The strange observation is that, by considering "1 mole nucleons", "2 mole nucleons", "3 mole nucleons" etc., XE takes "discrete" values like XE, 2XE, 3XE… Using this idea the origin of nћ may be understood.
In equation (17), on seeing the strong gravitational constant GS, everyone will be surprised. But it is a fact. In the paper [2] the authors proposed many applications of GS in nuclear physics. If the neutron is a strongly interacting particle and its origin is related to GS, it is reasonable and necessary to implement GS in the weak decay of the neutron. Considering equation (17) the electron rest mass can be given as
GS ≈ 6.950631729 x 10^31 m^3/kg sec^2. From equation (15) XE can be obtained as
mL = 3.087292 x 10^-33 Kg in such a way that its electromagnetic and classical gravitational force ratio is N^2. It plays a crucial role in fitting the rest masses of the muon and tau.
4. Fitting of muon and tau rest masses
In the earlier paper [2] the authors proposed the following relation for fitting the muon and tau masses.
EC = coulombic energy coefficient of the semi empirical mass formula = 0.769 MeV, EA = asymmetry energy coefficient of the semi empirical mass formula = 23.86 MeV, XE = proposed lepton mass generator = 294.8183 and n = 0, 1, 2. In this paper the authors simplified equation (23) as
EL ≈ 1.732 x 10^-3 MeV.
Equation (24) is free from all the binding energy coefficients of the semi-empirical mass formula. The authors hope that the field experts can easily interpret this expression. See the following Table 1.
If the electron mass is fitting at n=0, the muon mass at n=1 and the tau mass at n=2, it is quite reasonable and natural to predict a new heavy charged lepton at n=3. The electron was discovered in 1897. The muon was discovered in 1937. The tau was detected in a series of experiments between 1974 and 1977. The positron was predicted in 1928 and discovered in 1932. The antiproton and antineutron were only postulated in 1931 and 1935 respectively and discovered in 1955 and 1956. The charged pion was postulated in 1935 and discovered in 1947, and the neutral pion was postulated in 1938 and discovered in 1950.
The 6 quarks were proposed and understood between 1964 and 1977. By selecting the proper quantum mechanical rules, if one is able to confirm the existence of the number n=3, the existence of the new lepton can be understood.
At the same time one must critically examine the proposed relation for its nice and accurate fitting of the 3 observed charged leptons. Unfortunately the inputs of this expression are new for the standard model. Hence one cannot easily incorporate this expression in the standard model. Till now in the SM there is no formula for fitting the lepton masses accurately. It seems there is something missing from the SM. Note also that the basic inputs of the SM are leptons and quarks. Now in this proposed expression the authors tried to fit and understand the origin of the fundamental building blocks of the electromagnetic interaction! The same authors tried to fit and understand the origin of quarks in the paper [1] with the electron rest mass as a reference mass unit. A more interesting thing is that in the paper [1] the authors proposed the existence of `integral charge' quark fermions and `integral charge' quark bosons. Even though this idea is against the SM, the observed strong interaction particle charge-mass spectrum can be understood very easily. Super symmetry plays a key role in this.
5. Strong interaction mass unit MSf c^2
In paper [2] it is proposed that there exists a strongly interacting fermionic mass unit MSf c^2 ≈ 105.38 MeV. It is noticed that
MSf c^2 depends on XE and GS.
6. Proton, neutron rest masses & the nuclear stability factor (Sf )
Qualitatively and quantitatively with 99.9 % accuracy it is noticed that
mP c^2 = rest energy of the proton, mNc^2 = rest energy of the neutron, mec^2 = rest energy of the electron and MSf = proposed strong interaction fermion mass unit ≈ 105.398 MeV. The interesting thing is that 2GCMSf/c^2 can be considered as the classical black hole radius of MSf and ћ/MSf c is the compton length of MSf. This equation clearly suggests the individual roles of MSf, GC and N in nuclear physics. On simplification
2GSMSf/c^2 can be considered as the strong nuclear black hole radius of MSf. Considering the average mass of the nucleon
Hence, in the above equation (31) the denominator 2 can be eliminated as
ec is the classical radius of the electron. On simplification
S and the strong interaction mass unit MSf plays a key role in understanding the origin of rest mass of nucleon.
6.1. Fitting of nucleon rest masses up to 4 decimal places
In paper [2] the authors proposed a new number called the nuclear stability number Sf = 2XS^2 ≈ 155.0, where XS = strong interaction mass generator ≈ √(GSMSf^2/ћc). It is noticed that, the proportionality constant being √(1/Sf), the ratio of the rest energy of the proton and the charged lepton potential can be given as
mP c^2 = rest energy of the proton, EL = characteristic charged lepton potential ≈ 1.732 x 10^-3 MeV.
(mP c^2 + mNc^2) is more appropriate than 2mP c^2,
E) ≈ 27.67394° and the angle of jump for the boson is 55.34788°. The boson in one cycle makes 78 jumps and this is very close to XS^2 ≈ 77.6.
Considering decay it is known that the neutron transforms into a stable proton, a stable electron and a stable neutrino. The authors proposed that the neutron is a combination of a proton and an Up quark boson [1]. The Up boson transforms into an electron. The important observation is that ψ plays a crucial role in neutron and proton mass generation. Its square root value is just crossing 1.5. This number 1.5 can also be attributed to the energy ratio of the proposed [2] nuclear coulombic energy constant 0.769 MeV and the electron rest mass 0.511 MeV. But it is also not giving the correct values of mP and mN. The values of mP and mN depend on Sf, XE and me. To a very good accuracy it is noticed that
ψ = proposed strong interaction fermion-boson mass ratio = 2.2623412 ≈ ln(6+√13) and √ψ = 1.504108104.
This value of XS is very close to the original definition of XS = √(GSMSf^2/ћc) = 8.809788028.
The obtained values are mP = 1.672641849 x 10^-27 Kg and mN = 1.67493932 x 10^-27 Kg. The CODATA recommended [9] values are mP = 1.672621637 x 10^-27 Kg and mN = 1.674927211 x 10^-27 Kg.
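The two numerical identities quoted in this subsection can be checked directly. The sketch below verifies ψ ≈ ln(6 + √13) and recomputes XS = √(GSMSf^2/ћc) using the paper's GS and MSf c^2 = 105.398 MeV; ћ, c and the eV-to-joule factor are CODATA values, so this is only an arithmetic check under the paper's own assumptions.

```python
import math

# psi = ln(6 + sqrt(13)) check
psi = math.log(6 + math.sqrt(13))
print(f"psi       = {psi:.7f}")             # ~2.2623422 (paper quotes 2.2623412)
print(f"sqrt(psi) = {math.sqrt(psi):.7f}")  # ~1.5041084 (paper quotes 1.504108104)

# XS = sqrt(G_S * M_Sf^2 / (hbar * c)) with the paper's inputs
G_S = 6.950631729e31          # m^3/(kg s^2), from the paper
hbar = 1.054571817e-34        # J s (CODATA)
c = 2.99792458e8              # m/s
eV = 1.602176634e-19          # J per eV
M_Sf = 105.398e6 * eV / c**2  # 105.398 MeV/c^2 expressed in kg
XS = math.sqrt(G_S * M_Sf**2 / (hbar * c))
print(f"XS        = {XS:.5f}")  # ~8.8100 (paper quotes 8.809788028)
```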
6.2. Nucleon-proton stability
The stable isotope AS of any Z, or the proton-nucleon stability relation, can be given as
ZS = stable proton number corresponding to mass number A. By considering A as the fundamental input its corresponding stable Z can be obtained as
AS can be called the stable mass number of Z. After rounding off, for even (Z) values, if the obtained AS is odd consider AS - 1; for odd (Z) values, if the obtained AS is even, consider AS - 1. For very light odd elements this seems not to fit, hence for Z ≤ 9 this correction idea is not applied. At Z = 47 the obtained AS = 108.24. Its round off value is 108, which is even. Its nearest odd number is 108-1=107. At Z = 92 the obtained AS = 238.56. Its round off value is 239, which is odd. Its nearest even number is 239-1=238. At Z = 29 the obtained AS = 63.42. Its round off value is 63, which is odd. Correction is not required. At Z = 68 the obtained AS = 165.81. Its round off value is 166, which is even. Correction is not required.
7. Estimation and significance of sin (θw)
8. Estimation of the strongly interacting subquark fermion MGf c^2 and the strongly interacting subquark boson MGbc^2
In the paper [1] it is proposed that there exists a strongly interacting fermionic mass unit MGf c^2 ≈ 11450 MeV in subquark physics, by using which quark gluon masses can be estimated. It is noticed that
MGb c^2 can be given as
ψ = proposed strong interaction fermion-boson mass ratio = 2.2623412.
MGf c^2 plays a crucial role in estimating the strongly interacting fermi-gluonic masses of effective quark fermions as
MGfe = effective quark fermi-gluon mass and MQfe = effective quark fermion mass. MGb c^2 plays a crucial role in estimating the strongly interacting boso-gluonic masses of quark bosons as
MGb = quark boso-gluon mass and MQb = quark boson mass.
If one is able to develop a relation between the electron rest mass and charge, certainly it can lead to the true grand unification. To understand the mystery of true grand unification the Avogadro number can be given a chance. The characteristic nucleon's kinetic energy or sum of potential and kinetic energies is close to the rest energy of the electron. The authors request the world science community to kindly look into these new ideas for further analysis.
The authors are very much grateful to the editors and referees of paper [1] and paper [2] for publication. The first author is very much thankful to Prof. S. Lakshminarayana, Department of Nuclear Physics, Andhra University, Visakhapatnam, India, for his kind encouragement and guidance at all times. Finally the first author is very much thankful to his loving brother B. Vamsi Krishna (software professional) for encouraging and providing technical and financial support. The authors are very much thankful to Wikipedia for the nice presentation of the `grand unification' information.
[1] U. V. S. Seshavatharam and S. Lakshminarayana. Super Symmetry in Strong and Weak interactions. IJMPE, Vol.19, No.2, (2010), p.263-280.
[2] U. V. S. Seshavatharam and S. Lakshminarayana. Strong nuclear gravitational constant and the origin of nuclear planck scale. Progress in Physics, vol. 3, July, 2010, p. 31-38.
[3] Avogadro constant, From Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Avogadro constant.
[4] Einstein's Last Dream: The Space-Time Unification of Fundamental Forces, Physics News, Vol.12, No.2 (June 1981), p.36. (Address by Professor Abdus Salam at the UNESCO Celebration of the Centenary of Einstein's birth, 7 May 1979, Paris.)
[5] Tilman Sauer. Einstein's Unified Field Theory Program. The Cambridge Companion to Einstein, M. Janssen, C. Lehner (eds), Cambridge University Press.
[6] David Gross. Einstein and the search for Unification. http:worldscibooks.com/etextbook/6259.
[7] Hawking S.W. A Brief History of Time. Book. Bantam Dell Publishing Group. 1988.
[8] Particle Data Group (W.-M. Yao et al.), J. Phys. G 33 (2006) 1, http://pdg.bbb.gov.
[9] P.J. Mohr and B.N. Taylor. CODATA Recommended Values of the Fundamental Physical
Constants.2007. http://physics.nist.gov/constants.
[10] P. Roy Chowdhury et al. Modified Bethe-Weizsacker mass formula with isotonic shift and new driplines. (http://arxiv.org/abs/nucl-th/0405080v4)
Respected Sir/Madam,
Till today there is no answer to the question: why do there exist 6 individual quarks? Till today no experiment has reported a 'free quark'. The authors' humble opinion is that nuclear charge (either positive or negative) constitutes 6 different flavors and each flavor holds a certain mass. A 'charged flavor' can be called a 'quark'. It is neither a fermion nor a boson. A 'fermion' is a container for different charges, a 'charge' is a container for different flavors and each 'flavor' is a container for certain 'matter'. If charged matter rests in a 'fermionic container' it is a fermion and if charged matter rests in a 'bosonic container' it is a boson. The fundamental questions to be answered are: what is a charge? Why and how do opposite charges attract each other? Why and how does there exist a fermion? And why and how does there exist a boson?
Here the interesting thing is that if 6 flavors exist with 6 different masses, then a single charge can have one or two or more flavors simultaneously. Since charge is a common property, the mass of the 'multi flavor charge' seems to be the geometric mean of the mass of each flavor. If a 'charge with flavor' is called a 'quark' then a 'charge with multi flavors' can be called a 'hybrid quark'. A hybrid quark generates a multi flavor baryon. It is a property of the 'strong interaction space-time-charge'. This is just like 'different tastes' or 'different smells' of matter. An important consequence of this idea is that for generating a baryon there is no need to couple 3 fractional charge quarks.
In this paper the authors tried to implement the super symmetry concepts in quark and sub quark physics. The basic idea is that for each and every quark fermion there exists a corresponding super symmetric quark boson. The proposed quark fermion to quark boson mass ratio is ψ (≈ 2.26, as above). The obtained top quark boson mass is 80523 MeV and its assumed charge is (±e). This is close to the charged W mass (average with CERN UA2 data) = 80.454 ± 0.059 GeV. This may be a coincidence or there is some mystery behind the charged weak boson! In this way, if one is able to predict the existence of (quark) bosons, there is no need to assume that any two quark fermions couple together to form a meson. Note that till today no experiment has reported the existence of a 'fractional charge'. Thus it can be interpreted that nature allows only 'integral charges'. Hence it can be assumed that quark fermions and quark bosons possess 'unit charge'. This is the beginning of integral charge quark super symmetry.
Due to the strong interaction there is a chance of coupling any two quark bosons. If any two oppositely charged quark bosons couple together then a neutral quark boson can be generated. It may be called a neutral meson. Due to the strong interaction, by any chance, if any quark boson couples with any quark fermion then a neutral baryon or a baryon with '±2e' can be generated. This idea is very similar to the 'photon absorption' by an electron. When a weakly interacting electron is able to absorb a boson, in the strong interaction it is certainly possible. Moreover, if a baryon couples with two or three quark bosons then the baryon mass increases and the charge also changes. Here also, if the system follows the principle that unlike charges attract each other, in most cases the baryon charge changes from '±e' to neutral and from neutral to '±e'. In rare cases a baryon with '±2e' can be generated.
Thanking you,
yours obediently,
Very interesting, also for possible theoretical developments in the fusion models.
On this occasion we want to inform our readers that Enrico Billi, an Italian physics researcher from the University of Bologna who works in China, has sent us an important paper of his, under peer review at the moment, that will probably be published in January in the Journal of Nuclear Physics.
The Board of Advisers
I must say I still have to read the publication in detail, but it seems interesting, also connecting this study with the news I attach below:
where researchers at Fermilab may have found the fourth flavor neutrino
Maximum Number Calculator
How to use this Maximum Number Calculator
1. Enter the value(s) as comma separated numbers.
2. As soon as you enter the required input value(s), the Maximum Number is calculated immediately and displayed in the output section (present under the input section).
Finding the Maximum of Given Numbers
In mathematics, the maximum of a set of numbers is the largest value among them. This concept is crucial in various fields, such as statistics, optimization, and data analysis, where identifying the
highest value in a dataset or a list of numbers is often required.
The maximum value can be found by comparing each number in the list with the others and determining the largest one. For example, given the numbers 10, 40, and 15, the maximum value is 40.
Let's explore some examples to understand how to find the maximum of a given set of numbers.
1. Find the maximum of the numbers 10, 40, and 15.
First, we identify the given numbers: 10, 40, and 15.
Next, we compare the numbers to determine the largest value:
• 40 is larger than 10
• 40 is larger than 15
• So, the maximum value is 40.
∴ The maximum of the numbers 10, 40, and 15 is 40.
2. Find the maximum of the numbers 25, 75, and 50.
We start by identifying the given numbers: 25, 75, and 50.
Next, we compare the numbers to determine the largest value:
• 75 is larger than 25
• 75 is larger than 50
• So, the maximum value is 75.
∴ The maximum of the numbers 25, 75, and 50 is 75.
3. Determine the maximum of the numbers 100, 200, and 150.
First, we identify the given numbers: 100, 200, and 150.
Next, we compare the numbers to determine the largest value:
• 200 is larger than 100
• 200 is larger than 150
• So, the maximum value is 200.
∴ The maximum of the numbers 100, 200, and 150 is 200.
4. Find the maximum of the numbers 80, 60, and 90.
We start by identifying the given numbers: 80, 60, and 90.
Next, we compare the numbers to determine the largest value:
• 90 is larger than 80
• 90 is larger than 60
• So, the maximum value is 90.
∴ The maximum of the numbers 80, 60, and 90 is 90.
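The pairwise comparisons in the worked examples above map directly onto Python's built-in max() function, which is one simple way to reproduce the calculator's behaviour:

```python
# Reproducing the worked examples with Python's built-in max().
print(max(10, 40, 15))      # 40
print(max(25, 75, 50))      # 75
print(max(100, 200, 150))   # 200
print(max(80, 60, 90))      # 90

# max() also accepts a single list, and it handles negative numbers:
print(max([-3, -7, -1]))    # -1
```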
Frequently Asked Questions (FAQs)
1. What does the 'maximum' calculator do?
The 'maximum' calculator helps you find the largest number in a given set of values. You can input multiple numbers, and the calculator will identify and display the highest value among them. This is
useful for quickly determining the maximum in a dataset.
2. How do I use the 'maximum' calculator?
To use the calculator, input the numbers you want to analyze, separated by commas or spaces. The calculator will then compare all the values and show the largest number as the maximum. It's a quick
and convenient way to find the highest value in a list.
3. What is meant by the term 'maximum' in mathematics?
In mathematics, 'maximum' refers to the largest value in a set or function. It is the highest point on a graph or the largest number in a list of values. Finding the maximum helps in identifying
peaks in data or determining upper bounds.
4. How is finding the maximum useful in real-life situations?
Finding the maximum is useful in various real-life situations, such as determining the highest score in a test, the maximum temperature in a day, or the peak sales in a month. It helps in making
decisions based on the highest or most extreme values in a dataset.
5. Can the maximum calculator handle negative numbers?
Yes, the maximum calculator can handle both positive and negative numbers. It will compare all input values and display the largest one, even if it is a negative number. For example, in the set {-3,
-7, -1}, the maximum would be -1.
6. What is the difference between maximum and minimum?
The 'maximum' is the largest value in a set, while the 'minimum' is the smallest value. In a list of numbers, the maximum represents the highest value, and the minimum represents the lowest. Both are
used to understand the range of a dataset.
7. Can the maximum calculator find the maximum of a single number?
Yes, if you input a single number, the calculator will simply return that number as the maximum. In this case, the number itself is the largest value since there are no other values to compare it to.
8. How is finding the maximum useful in data analysis?
Finding the maximum is essential in data analysis because it helps identify outliers, peak performance, or extreme values in a dataset. It is often used in statistics to understand the range,
variability, and trends within a set of data.
Master Math: AP Statistics - SILO.PUB
Master Math: AP Statistics ®
Gerry McAfee
Course Technology PTR A part of Cengage Learning
Australia • Brazil • Japan • Korea • Mexico • Singapore • Spain • United Kingdom • United States
Master Math: AP Statistics
© 2011 Course Technology, a part of Cengage Learning.
Gerry McAfee
ALL RIGHTS RESERVED. No part of this work covered by the copyright herein may be reproduced, transmitted, stored, or used in any form or by any means graphic, electronic, or mechanical, including but
not limited to photocopying, recording, scanning, digitizing, taping, Web distribution, information networks, or information storage and retrieval systems, except as permitted under Section 107 or
108 of the 1976 United States Copyright Act, without the prior written permission of the publisher.
Publisher and General Manager, Course Technology PTR: Stacy L. Hiquet Associate Director of Marketing: Sarah Panella Manager of Editorial Services: Heather Talbot Marketing Manager: Jordan Castellani
Senior Acquisitions Editor: Emi Smith Project Editor: Dan Foster, Scribe Tribe Technical Reviewer: Chris True Interior Layout Tech: Judy Littlefield
For product information and technology assistance, contact us at Cengage Learning Customer & Sales Support, 1-800-354-9706 For permission to use material from this text or product, submit all
requests online at cengage.com/permissions Further permissions questions can be emailed to [email protected]
AP, Advanced Placement Program, College Board, and SAT are registered trademarks of the College Entrance Examination Board. All other trademarks are the property of their respective owners. All
images © Cengage Learning unless otherwise noted.
Library of Congress Control Number: 2010922088 ISBN-10: 1-4354-5627-0 eISBN-10: 1-4354-5628-9 ISBN-13: 978-1-4354-5627-3
Cover Designer: Jeff Cooper Indexer: Larry Sweazy Proofreader: Brad Crawford
Course Technology, a part of Cengage Learning 20 Channel Center Street Boston, MA 02210 USA Cengage Learning is a leading provider of customized learning solutions with office locations around the
globe, including Singapore, the United Kingdom, Australia, Mexico, Brazil, and Japan. Locate your local office at: international.cengage.com/region Cengage Learning products are represented in Canada
by Nelson Education, Ltd. For your lifelong learning solutions, visit courseptr.com Visit our corporate website at cengage.com
Printed in the United States of America 1 2 3 4 5 6 7 12 11 10
Table of Contents
About the Author
Preparing for the AP Statistics Exam
Chapter 1:Exploring and Graphing Univariate Data
1.1 Describing Distributions
Shape, Center, and Spread
1.2 Displaying Data with Graphs
Modified Boxplots
Bar Graphs
Pie Charts
Chapter 2:Exploring and Graphing Bivariate Data
2.1 Scatterplots
Least Squares Regression
2.2 Modeling Data
Chapter 3:Normal Distributions
3.1 Density Curves
3.2 Normal Distributions
The Empirical Rule (the 68, 95, 99.7 Rule)
3.3 Normal Calculations
Assessing Normality
Chapter 4:Samples, Experiments, and Simulations
4.1 Sampling
4.2 Designing Experiments
4.3 Simulation
Chapter 5:Probability
5.1 Probability and Probability Rules
5.2 Conditional Probability and Bayes’s Rule
5.3 Discrete Random Variables
5.4 Continuous Random Variables
5.5 Binomial Distributions
5.6 Geometric Distributions
Chapter 6:Sampling Distributions
6.1 Sampling Distributions
6.2 Sample Means and the Central Limit Theorem
6.3 Sample Proportions and the Central Limit Theorem
Chapter 7:Inference for Means
7.1 The t-Distributions
7.2 One-Sample t-Interval for the Mean
Interpreting Confidence Intervals
7.3 One-Sample t-Test for the Mean
7.4 Two-Sample t-Interval for the Difference Between Two Means
7.5 Two-Sample t-Test for the Difference Between Two Means
7.6 Matched Pairs (One-Sample t)
7.7 Errors in Hypothesis Testing: Type I, Type II, and Power
Chapter 8:Inference for Proportions
8.1 One-Sample z-Interval for Proportions
Margin of Error
8.2 One-Sample z-Test for Proportions
8.3 Two-Sample z-Interval for Difference Between Two Proportions
8.4 Two-Sample z-Test for Difference Between Two Proportions
Chapter 9:Inference for Related Variables: Chi-Square Distributions
9.1 The Chi-Square Statistic
9.2 Chi-Square Test for Goodness of Fit
9.3 Chi-Square Test for Homogeneity of Populations
9.4 Chi-Square Test for Independence/Association
Chapter 10:Inference for Regression
10.1 The Regression Model
10.2 Confidence Intervals for the Slope β
10.3 Hypothesis Testing for the Slope β
Appendix A:Tables
Table A: Standard Normal Probabilities
Table B: Random Digits
Table C: t Distribution Critical Values
Table D: χ2 Critical Values
Appendix B:Formulas
Formulas Given on the AP Exam
Formulas Not Given on the AP Exam
Normal Distribution
Inferential Statistics
Appendix C:Assumptions and Conditions for Inference
I would like to thank Chris True, AP Statistics teacher and consultant, for reading and editing this book for content. I appreciate his helpful comments and suggestions. I would also like to thank
Dan Foster for all of his efforts in editing this book. Dan was extremely easy to work with and provided me with excellent feedback. I am grateful to Judy Littlefield for all of her hard work on the
many illustrations and equations. I extend my thanks to Emi Smith, Senior Acquisitions Editor, and Stacy Hiquet, Publisher and General Manager, for the opportunity to write this book. Their patience,
support, and trust are truly appreciated. I would also like to thank Brad Crawford for proofreading. Additional thanks to Sarah Panella, Heather Talbot, Jordan Castellani, Jeff Cooper, and Larry
Sweazy. I am grateful to have had some great teachers while growing up. I want to specifically thank Sandy Halstead for teaching me algebra, algebra 2, trigonometry, and physics. I am grateful to
Karen Ackerman for teaching me honors English. To my students, past and present, thank you for motivating me to be the best teacher I can be. I am inspired by the efforts you make in preparing for
the AP Exam. I hope you continue those efforts in all aspects of your lives.
Finally, I would like to thank my friends and family. To my friends— especially John, Chris, Brad, and Tim—thank you for all your support and encouragement. To my friend and department head, Dan
Schermer, thanks for always guiding me in the right direction. To Dean and my “painting buddies,” thanks for providing me many hours of fun and stress relief as we worked together. Special thanks to
Nick and Chad for helping me with graphs and images. To Bob and Carol, thanks for all of your support. To all of my extended family, thank you for all of the help you have given to me, Lori, and our
children. To my parents, Don and Joan McAfee, thank you for being great parents and helping me develop a strong work ethic and for instilling me with strong values. To my sister, Lynn McAfee, and her
family, thank you for all of your encouragement and support. To my children, Cassidy and Nolan, thank you for being such great kids and working so hard in school. Your talents and hard work will pay
off. I appreciate you constantly asking me, “Are you done with your book yet, dad?” This question motivated me to keep pressing on. And finally, to my wife, Lori, I could not have done it without
you! Thanks for doing more than your fair share and always believing in me!
About the Author
Gerry McAfee began his teaching career as an undergraduate at Purdue University. He is currently teaching AP Statistics in Brownsburg, Indiana, and has been teaching mathematics for 17 years. In
addition to teaching AP Statistics, Gerry also teaches dual-credit mathematics, including finite math, through Indiana State University, and applied calculus, through Ball State University. Gerry has
also taught a wide range of additional math courses including basic math, pre-algebra, algebra, and algebra 2. Gerry has won teaching awards including the Brownsburg High School National Honor
Society Teacher of the Year, Who’s Who Among American Teachers, and the Fellowship of Christian Athletes–Positive Role Model. He was also chosen for teacher recognition by an Indianapolis Star
Academic All Star student. Gerry graduated from Purdue University with a Bachelor of Science degree in Mathematics in 1993. He also holds a Master of Arts degree in Education from Indiana Wesleyan
University. In his spare time, Gerry enjoys spending time with his wife, Lori, and two children, Cassidy and Nolan. Family activities include many hours of school functions and sporting events, along
with family vacations. In the summer, Gerry enjoys spending time with his teaching colleagues outdoors painting houses. Other activities that Gerry enjoys include fishing, running, and most recently
triathlons. Gerry has run seven marathons and has run the Boston Marathon two times (not on the same day).
AP Statistics is part of the Master Math series. The series also includes Basic Math, Pre-Calculus, Geometry, Trigonometry, and Calculus. This series includes a variety of mathematical topics and
should help you advance your knowledge of mathematics as it pertains to these subjects. AP Statistics is written specifically with you, the AP Statistics student, in mind. All topics of the AP
Statistics curriculum are discussed within the 10 chapters of this book. These topics are explained in a manner suitable for a wide variety of ability levels. The topics are arranged so that you can
develop an understanding of the various concepts of AP Statistics, including Exploring Data, Sampling and Experimentation, Anticipating Patterns, and Statistical Inference. All example problems in
each chapter include solutions with the AP Statistics Exam in mind. These solutions will help you understand not only the concepts at hand but also how to communicate that understanding effectively
to the reader (grader) of the exam. AP Statistics includes some useful appendixes. All tables given on the AP Statistics Exam are included in Appendix A. Appendix B includes all formulas needed for
the AP Statistics Exam. These formulas are separated into two categories: those formulas that are given on the exam and those that are not. You will also need to have a good understanding of the
“assumptions and conditions” for inference. These “assumptions and conditions” are fully discussed within each chapter that deals with inference and are summarized in Appendix C for quick reference.
A glossary is included at the end of the book as well so that you can reference or study any vocabulary terms. The following section, “Preparing for the AP Statistics Exam,” is written to help you
develop a sense of how the AP Statistics Exam is organized as well as how best to prepare yourself for the upcoming examination. Although AP Statistics may not totally replace your textbook, it
should provide you with some great insights on how to tackle the types of questions that you will likely encounter on the AP Exam. It is a comprehensive book that should prove to be an invaluable
resource as you journey toward your goal of reaching your maximum potential on the AP Statistics Exam. Good luck!
Preparing for the AP Statistics Exam
• Preparing thoroughly for the AP Statistics Exam is essential if you wish to perform well on the exam and earn a passing grade. Proper preparation begins the first day of class. Like anything
worthwhile in life, reaching your potential on the AP Statistics Exam takes hard work and dedication.
Plan for Success • Get motivated! Begin your preparation for the AP Statistics Exam early by doing all of your homework on a daily basis. Doing “some” or even “most” of the work is selling yourself
short of what you are capable of achieving. Manage your time and get all of your work done. You may find AP Statistics to be a little more difficult than some other math courses you have taken,
and there might be an adjustment period before you are achieving at a high level. Be patient and keep working!
• Do all of the reading assignments you are assigned. If your instructor does not assign you to read this book or your textbook, take it upon yourself to do so. Not only will you learn the material,
you will also strengthen your reading comprehension and your ability to write well, which is important on the free-response portion of the exam. It is imperative that you can read and interpret
questions effectively in order for you to understand the information being given and what you are being asked to do for each problem. Discipline yourself to keep up with your daily work, and do the
appropriate reading!
• Review on a weekly basis. Even a few minutes a week spent reviewing the topics you have learned previously will help you retain the material that you will be tested on during the exam. If you do
nothing else, study the glossary of this book, as it will keep your vocabulary of AP Statistics up to par. You might find it useful to review old tests and quizzes that you have taken in class. If
you’re like most students, you are very busy, and you’ll need to really budget your time in order to review weekly and keep up with your daily work in this and other courses. It sounds easy enough,
but again, it takes discipline!
• The more you do throughout the course, the easier your review will be toward the end of it. Realize, however, that you’ll probably need to do a lot of review in the final weeks leading up to the
exam. I recommend that you do “focused,” or “intense,” review in the last three to four weeks leading up to the exam. Don’t wait until two or three days before the exam. “Cramming” for an exam like
the AP Statistics Exam is not a good idea. You will be tested on most if not all topics in some way, shape, or form. Again, keep up with whatever your teacher or instructor throws your way.
• You should know and understand everything in this book to the best of your ability. Read it and study it! Try to go back after you have read the material and do the example problems on your own. As
mentioned earlier, know the glossary. It will help you know and understand the terminology of AP Statistics. Get help on any topics that you do not understand from your instructor.
• Do as many “released” exam problems as you can from the College Board’s website. Your instructor may have you do these for review, but if not, get on the website and do as many problems as you can.
There are free-response exam questions there from as far back as 1997, and you’ll have plenty to choose from. This will give you a good feel for the type of problems you should expect to see on both
the multiple-choice and free-response portions of the exam. You will also find it useful to read through the grading rubrics that are given along with the problems. Doing these “released”
free-response questions will help you understand how partial credit works on the exam. I do, however, think it’s more important to understand the concepts of the problems that are given and what the
grading rubric answers are than how many points you would have gotten if you had answered incorrectly. But it’s still worth a little time to think about how you would have scored based on your answer
and the grading rubric.
• Making your review for the AP Statistics Exam something you do early and often will prevent you from having to “cram” for the test in the last couple of days before the exam. By preparing in
advance, you will be able to get plenty of sleep in the days leading up to the exam, which should leave you well rested and ready to achieve your maximum potential!
How AP Grades Are Determined • As you are reviewing for the AP Statistics Exam, it’s important that you understand the format of the exam and a little about the grading. Knowing the format of the
exam, reading this book, and doing as many old AP Statistics Exam questions will have you prepped for success!
• The AP Statistics examination is divided into two sections. You will have 90 minutes to do each section. The first section is the multiple-choice section of the exam, which consists of 40 questions.
The second section of the exam is the free-response portion of the exam and consists of 6 questions. The scores on both parts of the exam are combined to obtain a composite score.
• The multiple-choice portion of the exam is worth a total of 40 points but is then weighted to 50 points. The score on the multiple-choice section of the test is calculated by using the following
formula: [Number correct out of 40 − (0.25 × Number wrong)] × 1.2500 = Multiple-Choice Score
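As a quick sanity check, the scoring formula can be expressed as a small Python function (a hypothetical helper for illustration, not part of any official College Board tool):

```python
# Multiple-choice score: number correct minus a quarter-point penalty
# per wrong answer, then weighted up to a 50-point scale.
def multiple_choice_score(num_correct, num_wrong):
    return (num_correct - 0.25 * num_wrong) * 1.25

# A student with 30 correct, 6 wrong, and 4 left blank:
print(multiple_choice_score(30, 6))  # 35.625
```

Note that blank answers simply earn nothing; they carry no penalty, which is why guessing blindly is not worthwhile.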
• The adjustment to the number of correct answers you receive makes it unlikely that you will benefit from random guessing. If you can eliminate one of the choices, then it is probably to your
benefit to guess. If you cannot eliminate at least one choice, do not guess; leave it blank.
• The free-response portion of the exam is graded holistically. For that reason, it is to your advantage to try every question, if possible. Even if you don’t fully understand how to answer the
question in its entirety, you should still try to answer it as best you can. Scores on individual free-response questions are as follows:
4  Complete Response
3  Substantial Response
2  Developing Response
1  Minimal Response
0  No Credit / No Response
• The AP readers (graders) grade free-response questions based on the specified grading rubric. If the question has multiple parts, each part is usually graded as “essentially correct,” “partially
correct,” or “incorrect.” Then, depending on the rubric, you will earn a 4, 3, 2, 1, or 0 for that particular question. Each score in the free-response question is weighted. Problems 1–5 on the
free-response contribute 7.5 percent each to the maximum possible composite score, and question 6 contributes 12.5 percent. It is usually recommended in the directions of the free-response questions
to spend more time on question 6 because it is worth more. Question 6 is considered an “investigative task” question and will probably require more in-depth thinking than the first five questions.
Typically, you will be instructed to spend about 25 minutes on question 6. That will leave you with about 65 minutes to do the first 5 questions, which is about 13 minutes each.
• Once both parts of your test have been graded, a composite score is formed by weighting the multiple-choice and free-response sections equally. You will not be given your composite score. Instead,
you will receive an AP Exam score based on the following 5-point scale:
5  Extremely Well Qualified
4  Well Qualified
3  Qualified
2  Possibly Qualified
1  No Recommendation
Don’t worry too much about how the score is calculated. Realize that there are 40 multiple-choice questions that you must get done in 90 minutes and use your time accordingly. Don’t spend a lot of
time on a question that you find really difficult. Move on to the other questions and then come back to the difficult question(s) if time permits. Remember, if you cannot eliminate any of the
choices, leave the answer blank. Also realize that you have 90 minutes to complete the free-response section of the exam. I recommend reading all six free-response questions quickly and starting with
the one you think you have the best shot at answering completely and correctly. Be sure to read each question very carefully before you actually begin the problem. You don’t want to invest a lot of
time working on a problem and later realize that your answer doesn’t really answer the question at hand.
• Make no mistake about it: The AP Statistics Exam is tough. You need to be ready. By reading and studying this book, doing your daily work on a regular basis, and doing old AP Statistics Exam
questions, you will be properly prepared. Remember, the exam is designed to be tough, so don’t get discouraged if you don’t know how to answer every single question. Do your best! If you work hard at
it and take it seriously, you’ll leave the exam feeling good about yourself and your success. Good luck!
Exploring and Graphing Univariate Data
1.1 Describing Distributions
1.2 Displaying Data with Graphs
1.1 Describing Distributions • The organization of data into graphical displays is essential to understanding statistics. This chapter discusses how to describe distributions and various types of
graphs used for organizing univariate data. The types of graphs include modified boxplots, histograms, stem-and-leaf plots, bar graphs, dotplots, and pie charts. Students in AP Statistics should have
a clear understanding of what a variable is and the types of variables that are encountered. • A variable is a characteristic of an individual and can take on different values for different
individuals. Two types of variables are discussed in this chapter: categorical variables and quantitative variables.
Categorical variable: Places an individual into a category or group.
Quantitative variable: Takes on a numerical value.
Variables may take on different values. The pattern of variation of a variable is its distribution. The distribution of a variable tells us what values the variable takes and how often it takes each value.
Shape, Center, and Spread • When describing distributions, it’s important to describe what you see in the graph. It’s important to address the shape, center, and spread of the distribution in the
context of the problem. • When describing shape, focus on the main features of the distribution. Is the graph approximately symmetrical, skewed left, or skewed right?
Symmetric: Right and left sides of the distribution are approximately mirror images of each other (Figure 1.1).
Figure 1.1
Symmetrical distribution.
• Skewed left: The left side of the distribution extends further than the right side, meaning that there are fewer values to the left (Figure 1.2).
Figure 1.2
Skewed-left distribution.
•Skewed right: The right side of the distribution extends further than the left side, meaning that there are fewer values to the right (Figure 1.3).
Figure 1.3
Skewed-right distribution.
• When describing the center of the distribution, we usually consider the mean and/or the median of the distribution.
Mean: Arithmetic average of the distribution:
x̄ = (x₁ + x₂ + ⋯ + xₙ) / n  or  x̄ = (Σ xᵢ) / n
Median: Midpoint of the distribution; half of the observations are smaller than the median, and half are larger.
To find the median: 1. Arrange the data in ascending order (smallest to largest). 2. If there is an odd number of observations, the median is the center data value. If there is an even number of
observations, the median is the average of the two middle observations.
• Example 1: Consider Data Set A: 1, 2, 3, 4, 5
Intuition tells us that the mean is 3. Applying the formula, we get:
(1 + 2 + 3 + 4 + 5) / 5 = 3
Intuition also tells us that the median is 3 because there are two values to the right of 3 and two values to the left of 3. Notice that the mean and median are equal. This is always the
case when dealing with distributions that are exactly symmetrical. The mean and median are approximately equal when the distribution is approximately symmetrical. • The mean of a skewed distribution
is always “pulled” in the direction of the skew. Consider NFL football players’ salaries: Let’s assume the league minimum is $310,000 and the median salary for a particular team is $650,000. Most
players probably make between the league minimum and around $1 million. However, there might be a few players on the team who make well over $1 million. The distribution would then be skewed right
(meaning that most players make less than a million and relatively few players make more than $1 million). Those salaries that are well over $1 million would “pull” the mean salary up, thus making the
mean greater than the median.
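This "pull" is easy to verify numerically. A short Python sketch with made-up salaries (illustrative numbers only, not real NFL data):

```python
from statistics import mean, median

# Hypothetical team salaries (in dollars): most near the league
# minimum, a few stars far above -- a right-skewed distribution.
salaries = [310_000, 450_000, 500_000, 650_000, 700_000,
            900_000, 5_000_000, 12_000_000]

print(f"median = {median(salaries):,.0f}")  # median = 675,000
print(f"mean   = {mean(salaries):,.0f}")    # mean   = 2,563,750
```

The few very large salaries pull the mean well above the median, while the median, a resistant measure, barely moves.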
• When dealing with symmetrical distributions, we typically use the mean as the measure of center. When dealing with skewed distributions, the median is sometimes used as the measure of center
instead of the mean, because the mean is not a resistant measure—i.e., the mean cannot resist the influence of extreme data values. • When describing the spread of the distribution, we use the IQR
(interquartile range) and/or the variance/standard deviation. IQR: Difference of the third quartile minus the first quartile. Quartiles are discussed in Example 2. Five-number summary: The
five-number summary is sometimes used when dealing with skewed distributions. The five-number summary consists of the lowest number, first quartile (Q1), median (M), third quartile (Q3), and the
largest number.
• Example 2: Consider Data Set A: 1, 2, 3, 4, 5
1. Locate the median, 3.
2. Locate the median of the first half of numbers (do not include 3 in the first half of numbers or the second half of numbers). This is Q1 (25th percentile), which is 1.5.
3. Locate the median of the second half of numbers. This is Q3 (75th percentile), which is 4.5.
4. The five-number summary would then be: 1, 1.5, 3, 4.5, 5.
• Variance/Standard Deviation: Measures the spread of the distribution about the mean. The standard deviation is used to measure spread when the mean is chosen as the measure of
center. The standard deviation has the same unit of measurement as the data in the distribution. The variance is the square of the standard deviation and is labeled in units squared.
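For Data Set A, the quartile method described above reproduces what Python's standard-library `statistics.quantiles` returns with its default `method='exclusive'` (the two conventions can disagree slightly on other data sets, so treat this as a sketch rather than a general equivalence):

```python
from statistics import quantiles, median

data = [1, 2, 3, 4, 5]  # Data Set A
q1, q2, q3 = quantiles(data, n=4)  # default method='exclusive'

five_number_summary = (min(data), q1, median(data), q3, max(data))
print(five_number_summary)  # (1, 1.5, 3, 4.5, 5)
```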
The formula for variance is:
s² = [(x₁ − x̄)² + (x₂ − x̄)² + ⋯ + (xₙ − x̄)²] / (n − 1)
or
s² = Σ(xᵢ − x̄)² / (n − 1)
The standard deviation is the square root of the variance:
s = √{[(x₁ − x̄)² + (x₂ − x̄)² + ⋯ + (xₙ − x̄)²] / (n − 1)}
or
s = √[Σ(xᵢ − x̄)² / (n − 1)]
• Example 3: Consider Data Set A: 1, 2, 3, 4, 5
We can find the variance and the standard deviation as follows:
Variance:
s² = [(1 − 3)² + (2 − 3)² + (3 − 3)² + (4 − 3)² + (5 − 3)²] / (5 − 1) = 10/4 = 2.5
Standard Deviation:
s = √2.5 ≈ 1.5811
It's probably more important to understand the concept of what standard deviation means than to be able to calculate it by hand. Our trusty calculators or computer software can handle the calculation for us. Understanding what the number means is what's most important. It's worth noting that most calculators will give two values for standard deviation. One is used when dealing with a population, and the other is used when dealing with a sample. The TI-83/84 calculator shows the population standard deviation as σx and the sample standard deviation as Sx. A
population is all individuals of interest, and a sample is just part of a population. We’ll discuss the concept of population and different types of samples in later chapters. • It’s also important
to address any outliers that might be present in the distribution. Outliers are values that fall outside the overall pattern of the distribution. It is important to be able to identify potential
outliers in a distribution, but we also want to determine whether or not a value is mathematically an outlier. • Example 4: Consider Data Set B, which consists of test scores from a college
statistics course: 98, 36, 67, 85, 79, 100, 88, 85, 60, 69, 93, 58, 65, 89, 88, 71, 79, 85, 73, 87, 81, 77, 76, 75, 76, 73
1. Arrange the data in ascending order. 36, 58, 60, 65, 67, 69, 71, 73, 73, 75, 76, 76, 77, 79, 79, 81, 85, 85, 85, 87, 88, 88, 89, 93, 98, 100
2. Find the median (average of the two middle numbers): 78.
3. Find the median of the first half of numbers. This is the first quartile, Q1: 71.
4. Find the median of the second half of numbers, the third quartile, Q3: 87.
5. Find the interquartile range (IQR): IQR = Q3 − Q1 = 87 − 71 = 16.
6. Multiply the IQR by 1.5: 16 × 1.5 = 24.
7. Add this number to Q3 and subtract this number from Q1: 87 + 24 = 111 and 71 − 24 = 47.
8. Any number smaller than 47 or larger than 111 would be considered an outlier. Therefore, 36 is the only outlier in this set.
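The 1.5 × IQR rule in steps 5–8 is easy to automate. A sketch using the quartiles found by hand above (Q1 = 71, Q3 = 87):

```python
# Flag outliers in Data Set B with the 1.5 * IQR rule.
data_b = [98, 36, 67, 85, 79, 100, 88, 85, 60, 69, 93, 58, 65,
          89, 88, 71, 79, 85, 73, 87, 81, 77, 76, 75, 76, 73]

q1, q3 = 71, 87                 # quartiles computed by hand above
iqr = q3 - q1                   # 16
low_fence = q1 - 1.5 * iqr      # 47.0
high_fence = q3 + 1.5 * iqr     # 111.0

outliers = [x for x in data_b if x < low_fence or x > high_fence]
print(outliers)  # [36]
```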
1.2 Displaying Data with Graphs • It is often helpful to display a given data set graphically. Graphing the data of interest can help us use and understand the data more effectively. Make sure you
are comfortable creating and interpreting the types of graphs that follow. These include: boxplots, histograms, stemplots, dotplots, bar graphs, and pie charts.
Modified Boxplots • Modified boxplots are extremely useful in AP Statistics. A modified boxplot is ideal when you are interested in checking a distribution for outliers or skewness, which will be
essential in later chapters. To construct a modified boxplot, we use the five-number summary. The box of the modified boxplot consists of Q1, M, and Q3. Outliers are marked as
separate points. The tails of the plot consist of either the smallest and largest numbers or the smallest and largest numbers that are not considered outliers by our mathematical criterion discussed
earlier. Outliers appear as separate dots or asterisks. Modified boxplots can be constructed with ease using the graphing calculator or computer software. Be sure to use the modified boxplot instead
of the regular boxplot, since we are usually interested in knowing if outliers are present. Side-by-side boxplots can be used to make visual comparisons between two or more distributions. Figure 1.4
displays the test scores from Data Set B. Notice that the test score of 36 (which is an outlier) is represented using a separate point.
Figure 1.4
Modified boxplot of Data Set B: Test Scores.
Histograms • Histograms are also useful for displaying distributions when the variable of interest is numeric (Figure 1.5). When the variable is categorical, the graph is called a bar chart or bar
graph. The bars of the histogram should be touching and should be of equal width. The heights of the bars
represent the frequency or relative frequency. As with modified boxplots, histograms can be easily constructed using the TI-83/84 graphing calculator or computer software. With some minor adjustments
to the window of the graphing calculator, we can easily transfer the histogram from calculator to paper. We often use the ZoomStat function of the TI-83/84 graphing calculator to create histograms.
ZoomStat will fit the data to the screen of the graphing calculator and often creates bars with non-integer dimensions. In order to create histograms that have integer dimensions, we must make
adjustments to the window of the graphing calculator. Once these adjustments have been made, we can then easily copy the calculator histogram onto paper. Histograms are especially useful in finding
the shape of a distribution. To find the center of the histogram, as measured by the median, find the line that would divide the histogram into two equal parts. To find the mean of the distributions,
locate the balancing point of the histogram.
Figure 1.5
Histogram of Data Set B: Test Scores.
Master Math: AP Statistics
Stemplots • Although we cannot construct a stemplot using the graphing calculator, we can easily construct a stemplot (Figure 1.6) on paper. Stemplots are useful for finding the shape of a
distribution as long as there are relatively few data values. Typically, we arrange the data in ascending order. It is often appropriate to round values before graphing. Although our graphing
calculators cannot construct a stemplot for us, we can still create a list and order the data in ascending order using the calculator or computer software. Stemplots can have single or “split” stems.
Sometimes split stems are used to see the distribution in more detail. Back-to-back stemplots are sometimes used when comparing two distributions. A key should be included with the stemplot so that
the reader can interpret the data (e.g., 5 | 2 = 52). It is relatively easy to find the five-number summary and describe the distribution once the stemplot is made.
Figure 1.6
Stemplot of Data Set B: Test Scores.
Exploring and Graphing Univariate Data
Dotplots • Dotplots can be used to display a distribution (Figure 1.7). Dotplots are easily constructed as long as there are not too many data values. As always, be sure to label and scale your axes
and title your graph. Although a dotplot cannot be constructed on the TI-83/84, most statistical software packages can easily construct them.
Figure 1.7
Dotplot of Data Set B: Test Scores.
Bar Graphs • Bar graphs are often used to display categorical data. Bar graphs, unlike histograms, have spaces between the different categories of the variable. The order of the categories is
irrelevant and we can use either counts or percentages for the vertical axis. There are only two categories in this bar graph showing soccer goals scored by my two children, Cassidy and Nolan (Figure
1.8). It should be noted that on any given day Cassidy could score more goals than Nolan or vice versa. I had to make one of them have more goals for visual effect only.
Figure 1.8
Bar graph: Soccer goals made by Cassidy and Nolan.
Pie Charts • Pie charts are also used to display categorical data (Figure 1.9). Pie charts can help us determine what part of the entire group each category forms. Again, be sure to title your graph
and label or code each piece of the pie. On the AP Statistics examination, graphs without appropriate labeling or scaling are considered incomplete.
Figure 1.9
Pie chart of candy colors.
Exploring and Graphing Bivariate Data 2.1 Scatterplots 2.2 Modeling Data
2.1 Scatterplots • Scatterplots are ideal for exploring the relationship between two quantitative variables. When constructing a scatterplot we often deal with explanatory and response variables. The
explanatory variable may be thought of as the independent variable, and the response variable may be thought of as the dependent variable. • It’s important to note that when working with two
quantitative variables, we do not always consider one to be the explanatory variable and the other to be the response variable. Sometimes, we just want to explore the relationship between two
variables, and it doesn’t make sense to declare one variable the explanatory and the other the response. • We interpret scatterplots in much the same way we interpret univariate data; we look for the
overall pattern of the data. We address the form, direction, and strength of the relationship. Remember to look for outliers as well. Are there any points in the scatterplot that deviate from the
overall pattern? • When addressing the form of the relationship, look to see if the data is linear (Figure 2.1) or curved (Figure 2.2).
Figure 2.1
Linear relationship.
Figure 2.2
Curved relationship.
• When addressing the direction of the relationship, look to see if the data has a positive or negative relationship (Figures 2.3, 2.4).
Figure 2.3
Positive, curved relationship.
Figure 2.4
Negative, slightly curved relationship.
• When addressing the strength of the relationship, consider whether the relationship appears to be weak, moderate, strong, or somewhere in between (Figures 2.5–2.7).
Figure 2.5
Weak or no relationship.
Figure 2.6
Moderate, positive, linear relationship.
Figure 2.7
Relatively strong, negative, slightly curved relationship.
Correlation • When dealing with linear relationships, we often use the r-value, or the correlation coefficient. The correlation coefficient can be found by using the formula: r = (1/(n − 1)) Σ [(xi − x̄)/sx][(yi − ȳ)/sy]
• In practice, we avoid using the formula at all costs. However, it helps to suffer through a couple of calculations using the formula in order to understand how the formula works and gain a deeper
appreciation of technology. Facts about Correlation • It’s important to remember the following facts about correlation (make sure you know all of them!): Correlation (the r-value) only describes a
linear relationship. Do not use r to describe a curved relationship. Correlation makes no distinction between explanatory and response variables. If we switch the x and y variables, we still get the
same correlation. Correlation has no unit of measurement. The formula for correlation uses the means and standard deviations for x and y and thus uses standardized values. If r is positive, then the
association is positive; if r is negative, then the association is negative. –1 ≤ r ≤ 1: r = 1 implies that there is a perfectly linear positive relationship. r = –1 implies that there is a perfectly
linear negative relationship. r = 0 implies that there is no correlation.
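To see how the formula works numerically, here is a small Python sketch (the data values are made up for illustration) that also checks two of the facts listed above:

```python
import math

# r = (1/(n-1)) * sum of z_x * z_y, using sample standard deviations.
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    return sum(((a - mx) / sx) * ((b - my) / sy)
               for a, b in zip(x, y)) / (n - 1)

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
r = corr(x, y)
print(round(r, 4))                              # 0.7746

# Two of the facts above, checked numerically:
assert abs(corr(y, x) - r) < 1e-12              # swapping x and y leaves r alone
inches_to_cm = [v * 2.54 for v in x]
assert abs(corr(inches_to_cm, y) - r) < 1e-12   # changing units leaves r alone
```

Because the formula standardizes each variable before multiplying, both the swap and the unit change drop out of the result.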
The r-value, like the mean and standard deviation, is not a resistant measure. This means that even one extreme data point can have a dramatic effect on the r-value. Remember that outliers can either
strengthen or weaken the r-value. So use caution! The r-value does not change when you change units of measurement. For example, changing the x and/or y variables from centimeters to millimeters or
even from centimeters to inches does not change the r-value. Correlation does not imply causation. Just because two variables are strongly associated or even correlated (linear) does not mean that
changes in one variable are causing changes in another.
Least Squares Regression • When modeling linear data, we use the Least Squares Regression Line (LSRL). The LSRL is fitted to the data by minimizing the sum of the squared residuals. The graphing
calculator again comes to our rescue by calculating the LSRL and its equation. The LSRL equation takes the form of yˆ = a + bx where b is the slope and a is the y-intercept. The AP* formula sheet
uses the form yˆ = b0 + b1 x . Either form may be used as long as you define your variables. Just remember that the number in front of x is the slope, and the “other” number is the y-intercept. •
Once the LSRL is fitted to the data, we can then use the LSRL equation to make predictions. We can simply substitute a value of x into the equation of the LSRL and obtain the predicted value, yˆ. •
The LSRL minimizes the sum of the squared residuals. What does this mean? A residual is the difference between the observed value, y, and the predicted value, yˆ. In other words, residual = observed
– predicted. Remember that all predicted values are located on the LSRL. A residual can be positive, negative, or zero. A residual is zero only when the point is located on the LSRL. Since the sum of
the residuals is always zero,
we square the vertical distances of the residuals. The LSRL is fitted to the data so that the sum of the square of these vertical distances is as small as possible. • The slope of the regression line
(LSRL) is important. Consider the time required to run the last mile of a marathon in relation to the time required to run the first mile of a marathon. The equation yˆ = 1.25x, where x is the time
required to run the first mile in minutes and yˆ is the predicted time it takes to run the last mile in minutes, could be used to model or predict the runner’s time for his last mile. The
interpretation of the slope in context would be that for every one minute increase in time needed to run the first mile, the predicted time to run the last mile would increase by 1.25 minutes, on
average. It should be noted that the slope is a rate of change and that that since the slope is positive, the time will increase by 1.25 minutes. A negative slope would give a negative rate of
change. Facts about Regression: All LSRLs pass through the point (x̄, ȳ). The formula for the slope is b1 = r(sy/sx).
This formula is given on the AP* Exam. Notice that if r is positive, the slope is positive; if r is negative, the slope is negative. By substituting (x̄, ȳ) into yˆ = b0 + b1x, we obtain ȳ = b0 + b1x̄. Solving for b0, we obtain the y-intercept: b0 = ȳ − b1x̄. This formula is also given on the AP* Exam. The r² value is called the Coefficient of Determination. The r² value is the proportion of variability of y that can be explained or accounted for by the linear relationship of y on x. To find r², we simply square the r-value. Remember, even an r² value of 1 does not necessarily imply any cause-and-effect relationship! Note: A
common misinterpretation of the r² value is that it is the percentage of observations (data points) that lie on the LSRL. This is simply not the case. You could have an r² value of .70 (70%) and
not have any data points that are actually on the LSRL. It’s important to remember the effect that outliers can have on regression. If removing an outlier has a dramatic effect on the slope of the
LSRL, then the point is called an influential observation. These points have “leverage” and tend to be outliers in the x-direction. Think of prying something open with a pry bar. Applying pressure to
the end of the pry bar gives us more leverage or impact. These observations are considered influential because they have a dramatic impact on the LSRL—they pull the LSRL toward them. •
Example: Consider the following scatterplot.
Figure 2.8
Relatively strong, negative, linear relationship.
• We can examine the scatterplot in Figure 2.8 and describe the form, direction, and strength of the relationship. We observe that the relationship is negative—that is, as x increases y decreases. We
also note that the relationship is relatively strong and linear. We can write the equation of
the LSRL and graph the line. The equation of the LSRL is yˆ = 8.5 − 1.1x. The correlation coefficient is r ≈ −.9381 and the coefficient of determination is r² = .88. Notice that the slope and the r-value are both negative. This is not a coincidence. • Notice what happens to the LSRL, r, and r² as we shift a data point from the scatterplot that is located toward the end of the LSRL in the x-direction. Consider Figure 2.9. The equation of the LSRL changes to yˆ = 7.5 − .677x, r changes to ≈ −.5385, and r² changes to .29. Moving the data point has a dramatic effect on r, r², and the LSRL, so we consider it to be an influential observation.
Figure 2.9
Influential observation.
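The slope and intercept formulas, and the effect of an influential observation, can be sketched numerically. The data below are illustrative, not the values behind Figures 2.8 and 2.9:

```python
import math

# b1 = r * (sy / sx) and b0 = ybar - b1 * xbar, the AP formula-sheet forms.
def lsrl(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / ((n - 1) * sx * sy)
    b1 = r * sy / sx
    b0 = my - b1 * mx
    return b0, b1, r

x = [1, 2, 3, 4, 5, 6]
y = [7.2, 6.4, 5.1, 4.4, 3.0, 2.1]           # strong negative linear pattern
b0, b1, r = lsrl(x, y)
print(round(b0, 2), round(b1, 2), round(r, 3))

# Drag the right-most point far off the pattern: an influential observation.
y_moved = y[:-1] + [8.0]
_, b1_moved, r_moved = lsrl(x, y_moved)
print(round(b1_moved, 2), round(r_moved, 3))  # slope and r weaken dramatically
```

Moving just the end point pulls the line toward it, shrinking both the slope and the strength of the correlation.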
• Moving a data point near the middle of the scatterplot does not typically have as much of an impact on the LSRL, r, and r² as moving a data point toward the end of the scatterplot in the x-direction. Consider Figure 2.10. Although dragging a data point from the middle of the scatterplot still changes the location and equation of the LSRL, it does
not impact the regression nearly as much. Note that r and r² change to −.7874 and .62, respectively. Although moving this data point impacts regression somewhat, the effect is much less, so we consider this data point "less influential."
Figure 2.10
Less influential observation.
2.2 Modeling Data • Linear data can be modeled using the LSRL. It’s important to remember, however, that not all data is linear. How do we determine if a line is really the best model to use to
represent the data? Maybe the data follow some type of curved relationship? • Examining the scatterplot, as mentioned earlier, is the first step to finding an appropriate model. However, sometimes
looking at the scatterplot and finding the r-value can be a little deceiving. Consider the following two scatterplots. Both contain the same data but are scaled differently. Changing the scale of the
scatterplot can make the data appear more or less linear than is really the case. You might guess the
r-value of Figure 2.12 to be higher than that of Figure 2.11 since the data points “appear” closer together in Figure 2.12 than they do in Figure 2.11. The r-values are the same, however; only the
scale has been changed. Our eyes can sometimes deceive us.
Figure 2.11
Figures 2.11 and 2.12 contain the same data.
Figure 2.12
The r-values are the same in Figures 2.11 and 2.12.
To help make the decision of which model is best, we turn our attention to residual plots. • The residual plot plots the residuals against the explanatory variable. If the residual plot models the
data well, the residuals should not follow a systematic or definite pattern (Figures 2.13–2.14).
Figure 2.13
Residual plot with random scatter.
Figure 2.14
Residual plot with a definite pattern.
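Residuals themselves are simple to compute by hand or by machine. A small sketch, reusing the slope and intercept from the earlier example (yˆ = 8.5 − 1.1x) with some illustrative observations:

```python
# residual = observed y - predicted y-hat, with y-hat = b0 + b1 * x.
# Slope and intercept are from the earlier example; the y-values are made up.
b0, b1 = 8.5, -1.1
x = [1, 2, 3, 4, 5]
y = [7.3, 6.4, 5.0, 4.3, 3.0]
residuals = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
print([round(e, 2) for e in residuals])   # [-0.1, 0.1, -0.2, 0.2, 0.0]
```

Small residuals that bounce randomly above and below zero, as here, are what a "no pattern" residual plot looks like in numbers.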
• The next three examples will be used to aid in understanding how to find an appropriate model (equation) for a given data set. The promising AP Stats student (yes, that's you!) should
understand how to take a given set of bivariate data, determine which model is appropriate, perform the inverse transformation, and write the appropriate equation. The TI-83/84 can be used to
construct a scatterplot and the corresponding residual plot. Remember that the graphing calculator will create a list of the residuals once linear regression has been performed on the data. After the
appropriate model is determined, we can obtain the LSRL equation from the calculator and transform it to the appropriate equation to model the data. We use logarithms in exponential and power models
because these models involve equations with exponents. Remember that a logarithm is just another way to write an exponent. It’s important to remember the following algebraic properties of logarithms:
1. log(AB) = log A + log B   2. log(A/B) = log A − log B   3. log(X^n) = n log X
• Linear Model: Consider the data in Figure 2.15. Examining the scatterplot of the data reveals a strong, positive, linear relationship. The lack of a pattern in the residual plot confirms that a
linear model is appropriate, compared to any other non-linear model. We should be able to get pretty good predictions using the LSRL equation.
Figure 2.15
Scatterplot of linear data and “random” residual plot.
• Exponential Model: Consider the data in Figure 2.16. There appears to be a curved pattern to the data. The data does not appear to be linear. To rule out a linear model, we can use our calculator
or statistical software to find the LSRL equation and then construct a residual plot of the residuals against x. We can see that the residual plot has a definite pattern and thus contradicts a linear
model. This implies that a non-linear model is more appropriate.
Figure 2.16
The original data is curved; the residual plot shows a definite pattern.
An exponential or power model might be appropriate. Exponential growth models increase by a fixed percentage of the previous amount. In other words, 230/111 ≈ 2.0721, 111/55 ≈ 2.0182, 55/27 ≈ 2.0370,
and so on. These percentages are approximately equal. This is an indication that an exponential model might best represent the data. Next, we look at the graph of log y vs. x (Figure 2.17). Notice
that the graph of log y vs. x straightens the data. This is another sign that an exponential model might be appropriate. Finally, we can see that the residual plot for the exponential model (log y on
x) appears to have random scatter. An exponential model is appropriate.
Figure 2.17
Scatterplot of log y vs. x and residual plot with random scatter.
We can write the LSRL equation for the transformed data of log y vs. x. We then use the properties of logs and perform the inverse transformation as follows to obtain the exponential model for the
original data. 1. Write the LSRL for log y on x. Your calculator may give you yˆ = .4937 + .3112x, but remember that you are using the log of the y values, so be sure to use log yˆ, not just yˆ: log yˆ = .4937 + .3112x. 2. Rewrite as an exponential equation. Remember that log is the common log with base 10: yˆ = 10^(.4937 + .3112x). 3. Separate into two powers of 10: yˆ = 10^.4937 ⋅ 10^(.3112x). 4. Take 10 to the .4937 power and 10 to the .3112 power and rewrite: yˆ = 3.1167 ⋅ 2.0474^x. Our final equation is an exponential equation. Notice how well the graph of the exponential equation models the
original data (Figure 2.18).
Figure 2.18
Exponential equation models the data.
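The four steps above can be checked numerically; this sketch confirms that the log form and the rewritten exponential form make identical predictions:

```python
# Inverse transformation for the exponential model: from the fitted
# log(y-hat) = .4937 + .3112 * x, recover y-hat = 10**a * (10**b)**x.
a, b = 0.4937, 0.3112
c, base = 10 ** a, 10 ** b
print(round(c, 4), round(base, 4))        # 3.1167 2.0474

# The two forms agree at any x:
for xv in range(6):
    assert abs(10 ** (a + b * xv) - c * base ** xv) < 1e-9 * c * base ** xv
```

The base 2.0474 is what makes this "exponential": each one-unit step in x multiplies the prediction by about 2.05.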
• Power Model: Consider the data in Figure 2.19. As always, remember to plot the original data. There appears to be a curved pattern. We can confirm that a linear model is not appropriate by
interpreting the residual plot of the residuals against x, once our calculator or software has created the LSRL equation. The residual plot shows a definite pattern; therefore a linear model is not appropriate.
Figure 2.19
The original data is curved; the residual plot shows a definite pattern.
We can then examine the graph of log y on x. Notice that taking the log of the y-values and plotting them against x does not straighten the data— in fact, it bends the data in the opposite direction.
What about the residual plot for log y vs. x? There appears to be a pattern in the residual plot (Figure 2.20); this indicates that an exponential model is not appropriate.
Figure 2.20
Scatterplot of log y vs. x. The residual plot shows a definite pattern.
Next, we plot log y vs. log x (Figure 2.21). Notice that this straightens the data and that the residual plot of log y vs. log x appears to have random scatter. A power model is therefore appropriate.
Figure 2.21
Scatterplot of log y vs. log x. The residual plot shows random scatter.
We can then perform the inverse transformation to obtain the appropriate equation to model the data. 1. Write the LSRL for log y on log x: log yˆ = .5970 + 2.0052 log x. Remember we are using logs! 2. Rewrite as a power equation: yˆ = 10^(.5970 + 2.0052 log x). 3. Separate into two powers of 10: yˆ = 10^.5970 ⋅ 10^(2.0052 log x). 4. Use the power property of logs to rewrite: yˆ = 10^.5970 ⋅ 10^(log x^2.0052). 5. Take 10 to the .5970 power and cancel 10 to the log power: yˆ = 3.9537 ⋅ x^2.0052. Our final equation is a power equation. Notice how well the power model fits the data in the original scatterplot
(Figure 2.22).
Figure 2.22
The power model fits the data.
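The power-model steps can be verified the same way; this sketch confirms the log-log form and the rewritten power form agree:

```python
import math

# Inverse transformation for the power model: from the fitted
# log(y-hat) = .5970 + 2.0052 * log(x), recover y-hat = 10**a * x**b.
a, b = 0.5970, 2.0052
c = 10 ** a
print(round(c, 4))                        # 3.9537

def yhat(x):
    return c * x ** b

# Same predictions as the untransformed log-log form:
for xv in [1, 2, 5, 10]:
    assert abs(yhat(xv) - 10 ** (a + b * math.log10(xv))) < 1e-9 * yhat(xv)
```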
Normal Distributions 3.1 Density Curves 3.2 Normal Distributions 3.3 Normal Calculations
3.1 Density Curves • Density curves are smooth curves that can be used to describe the overall pattern of a distribution. Although density curves can come in many different shapes, they all have
something in common: The area under any density curve is always equal to one. This is an extremely important concept that we will utilize in this and other chapters. It is usually easier to work with
a smooth density curve than a histogram, so we sometimes overlay the density curve onto the histogram to approximate the distribution. A specific type of density curve called a normal curve will be
addressed in section 3.2. This “bell-shaped” curve is especially useful in many applications of statistics as you will see later on. We describe density curves in much the same way we describe
distributions when using graphs such as histograms or stemplots. • The relationship between the mean and the median is an important concept, especially when dealing with density curves. In a
symmetrical density curve, the mean and median will be equal if the distribution is perfectly symmetrical or approximately equal if the distribution is approximately symmetrical. If a distribution is
skewed left, then the mean will be “pulled” in the direction of the skewness and will be less than the median. If a distribution is skewed right, the mean is again “pulled” in the direction of the
skewness and will be greater than the median. Figure 3.1 displays distributions that are skewed left, skewed right, and symmetrical. Notice how the mean is “pulled” in the direction of the skewness.
Figure 3.1
The relationship of mean and median in skewed and symmetrical distributions.
• It’s important to remember that the mean is the “balancing point” of the density curve or histogram and that the median divides the density curve or histogram into two parts, equal in area (Figure 3.2).
Figure 3.2
The mean is the “balancing point” of the distribution. The median divides the density curve into two equal areas.
3.2 Normal Distributions • One particular type of density curve that is especially useful in statistics is the normal curve, or normal distribution. Although all normal distributions have the same
overall shape, they do differ somewhat depending on the mean and standard deviation of the distribution (Figure 3.3). If we increase or decrease the mean while keeping the standard deviation the
same, we will simply shift the distribution to the right or to the left. The
more we increase the standard deviation, the “wider” and “shorter” the density curve will be. If we decrease the standard deviation, the density curve will be “narrower” and “taller.” Remember that
all density curves, including normal curves, have an area under the curve equal to one. So, no matter what value the mean and standard deviation take, the area under the normal curve is equal to one.
This is very important, as you’ll soon see.
Figure 3.3
Two normal distributions with different standard deviations.
The equation for the standard normal curve is: y = (1/√(2π)) e^(−x²/2)
The Empirical Rule (the 68, 95, 99.7 Rule) • All normal distributions follow the Empirical Rule. That is to say that all normal distributions have: 68% of the observations falling within σ (one
standard deviation) of the mean, 95% of the observations falling within 2σ (two standard deviations) of the mean, and 99.7% (almost all) of the observations falling within 3σ (three standard
deviations) of the mean (Figure 3.4).
Figure 3.4
About 68% of observations fall within one standard deviation, 95% within two standard deviations, and 99.7% within three standard deviations.
• Example 1: Let’s assume that the number of miles that a particular tire will last roughly follows a normal distribution with μ = 40,000 miles and σ = 5000 miles. Note that we can use shorthand
notation N(40,000, 5000) to denote a normal distribution with mean equal to 40,000 and standard deviation equal to 5,000. Since the distribution is not exactly normal but approximately normal, we can
assume the distribution will
roughly follow the 68, 95, 99.7 Rule. Using the 68, 95, 99.7 Rule we can conclude the following (see Figure 3.5): About 68% of all tires should last between 35,000 and 45,000 miles (μ ± σ) About 95%
of all tires should last between 30,000 and 50,000 miles (μ ± 2σ) About 99.7% of all tires should last between 25,000 and 55,000 miles (μ ± 3σ) Using the 68, 95, 99.7 Rule a little more creatively,
we can also conclude: About 34% of all tires should last between 40,000 and 45,000 miles. About 34% of all tires should last between 35,000 and 40,000 miles. About 2½ % of all tires should last more
than 50,000 miles. About 84% of all tires should last less than 45,000 miles.
Figure 3.5
Application of the Empirical Rule.
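Software can confirm the rule. Here is a sketch using the standard normal cumulative probability built from the error function, applied to the tire example:

```python
import math

# P(Z <= z) for the standard normal, via the error function.
def phi(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Area within 1, 2, and 3 standard deviations of the mean:
for k in (1, 2, 3):
    print(k, round(phi(k) - phi(-k), 4))   # 0.6827, 0.9545, 0.9973

# Tire example, N(40000, 5000): P(X > 50000) = P(Z > 2)
print(round(1 - phi(2), 4))                # about 2.5%, as the rule suggests
```

The exact areas 0.6827, 0.9545, and 0.9973 are where the rounded 68, 95, and 99.7 figures come from.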
3.3 Normal Calculations • Example 2: Referring back to Example 1, let’s suppose that we want to determine the percentage of tires that will last more than 53,400 miles. Recall that we were given N(40,000, 5000). To get a more exact answer than we could obtain using the Empirical Rule, we can do the following: Solution: Always make a sketch! (See Figure 3.6.)
Figure 3.6
Make a sketch and shade to the right of 53,400.
Shade the area that you are trying to find, and label the mean in the center of the distribution. Remember that the mean and median are equal in a normal distribution since the normal curve is
symmetrical. Obtain a standardized value (called a z-score) using z = (x − μ)/σ. Using substitution, we obtain z = (53,400 − 40,000)/5000 = 2.68.
Notice that the formula for z takes the difference of x and μ and divides it by σ. Thus, a z-score is the number of standard deviations that x lies above or below the mean. So, 53,400 is 2.68
standard deviations above the mean. You should always get a positive value for z if the value of x is above the mean, and a negative value for z if the value of x is below the mean. When we find the
z-score, we are standardizing the values of the distribution. Since these values are values of a normal distribution, the distribution we obtain is called the standard normal distribution. This new
distribution, the standard normal distribution, has a mean of zero and a standard deviation of one. We can then write N(0,1). The advantage of standardizing any given normal distribution to the
standard normal distribution is that we can now find the area under the curve for any given value of x that is needed. We can now use the z-score of 2.68 that we obtained earlier. Using Table A, we
can look up the area to the left of z = 2.68. Notice that Table A has two sides—one for positive values for z and the other for negative values for z. Using the side of the table with the positive
values for z, follow the left-hand column down until you reach 2.6. Then go across the top of the table until you reach .08. By cross-referencing 2.6 and .08, we can obtain the area to the left of z =
2.68, which is 0.9963. In other words, 99.63% of tires will last less than 53,400 miles. We want to know what percent of tires will last more than 53,400 miles, so we subtract 0.9963 from 1. Remember
that the total area under any density curve is equal to one. We obtain 1 – 0.9963 = 0.0037.
Conclude in context. That is, only 0.37% of tires will last more than 53,400 miles. We can also state that the probability that a randomly chosen tire of this type will last longer than 53,400 miles
is equal to 0.0037. • Example 3: Again referring to Example 1, find the probability that a randomly chosen tire will last between 32,100 miles and 41,900 miles. Make a sketch. (See Figure 3.7.)
Figure 3.7
Make a sketch and shade between 32,100 and 41,900.
Locate the mean on the normal curve as well as the values of 32,100 and 41,900. Shade the area between 32,100 and 41,900. Calculate the z-scores.
z = (32,100 − 40,000)/5000 = −1.58 and z = (41,900 − 40,000)/5000 = 0.38.
Find the areas to the left of –1.58 and 0.38 using Table A. The area to the left of –1.58 is equal to 0.0571, and the area to the left of 0.38 is equal to 0.6480. Since we want to know the
probability that a tire will last between 32,100 and 41,900 miles, we will subtract the two areas. Remember that any area that we look up in Table A is the area to the left of z. 0.6480 – 0.0571 =
0.5909. Conclude in context: The probability that a randomly chosen tire will last between 32,100 miles and 41,900 miles is equal to 0.5909. • Example 4: Consider a national mathematics exam where the distribution of test scores roughly follows a normal distribution with mean, μ = 320, and standard deviation, σ = 32. What score must a student obtain to be in the top 10% of all students taking the
exam? Make a sketch! (See Figure 3.8.)
Figure 3.8
Make a sketch and shade the top 10%.
Shade the appropriate area. Use the formula for z: z = (x − μ)/σ. Using substitution, we obtain: z = (x − 320)/32.
In order to solve for x, we need to obtain an appropriate value of z. Using Table A “backwards,” we look in the body of the table for the value closest to 0.90, which is 0.8997. The value of z that
corresponds to an area to the left of 0.8997 is 1.28, so z = 1.28. Again, remember that everything we look up in Table A is the area to the left of z, so we look up what’s closest to 0.90, not 0.10.
Substituting for z, we obtain: 1.28 =
x − 320 32
Solving for x, we obtain: x = 360.96. Conclude in context. A student must obtain a score of approximately 361 in order to be in the top 10% of all students taking the exam.
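Table A work can be double-checked with software. Python’s statistics.NormalDist reproduces Examples 2 through 4; the tiny discrepancies from the worked answers come from rounding z to two decimal places when using the table:

```python
from statistics import NormalDist

tires = NormalDist(mu=40_000, sigma=5_000)
print(round(1 - tires.cdf(53_400), 4))                  # Example 2: 0.0037
print(round(tires.cdf(41_900) - tires.cdf(32_100), 4))  # Example 3: 0.5910 (table: 0.5909)

exam = NormalDist(mu=320, sigma=32)
print(round(exam.inv_cdf(0.90), 2))   # Example 4: 361.01 (table z = 1.28 gives 360.96)
```

The inv_cdf method is the software version of reading Table A “backwards.”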
• Example 5: Consider the national mathematics test in Example 4. The middle 90% of students would score between which two scores? Make a sketch! (See Figure 3.9.) Shade the appropriate area.
Figure 3.9
Make a sketch and shade the middle 90%.
Use the formula for z: z = (x − μ)/σ. Using substitution, we obtain: z = (x − 320)/32.
In order to solve for x, we need to obtain an appropriate value of z. Consider that we are looking for the middle 90% of test scores. Remembering once again that the area under the normal curve is 1,
we can obtain the area on the “outside” of 90%, which would be 10%. This forms two “tails,” which we consider the right and left tails.
These “tails” are equal in area and thus have an area of 0.05 each. We can then use Table A, as we did in Example 4, to obtain a z-score that corresponds to an area of 0.05. Notice that two values
are equidistant from 0.05. These areas are 0.0495 and 0.0505, which correspond to z-scores of –1.64 and –1.65, respectively. Since the areas we are looking up are the same distance away from 0.05, we
split the difference and go out one more decimal place for z. We use z = –1.645.

−1.645 = (x − 320)/32
Solving for x, we obtain: x = 267.36

We can now find the test score that would be the cutoff value for the top 5% of scores. Notice that since the two tails have the same area, we can use z = 1.645. The z-scores are opposites due to the symmetry of the normal distribution.

1.645 = (x − 320)/32
Solving for x, we obtain: x = 372.64

Conclude in context. The middle 90% of students will obtain test scores that range from approximately 267 to 373.
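The same idea carries over to Example 5: the middle 90% is bounded by the 5th and 95th percentiles. A short, illustrative Python check (again using the standard library, not required for the exam):

```python
from statistics import NormalDist

scores = NormalDist(mu=320, sigma=32)

# The middle 90% leaves 5% in each tail, so the cutoffs are the
# 5th and 95th percentiles of the distribution.
low = scores.inv_cdf(0.05)
high = scores.inv_cdf(0.95)
print(round(low), round(high))  # 267 373
```

The symmetry of the normal distribution shows up here too: both cutoffs sit 1.645 standard deviations from the mean, one below and one above.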
Assessing Normality • Inferential statistics is a major component of the AP Statistics curriculum. When you infer something about a population based on sample data, it is often important to assess
the normality of a population. We can do this by looking at the number of observations in the sample that lie within one, two, and three standard deviations from the mean. In other words, use the
Empirical Rule. Do approximately 68, 95, and 99.7% of the observations fall within μ ± 1σ, μ ± 2σ, and μ ± 3σ? Larger data sets should roughly follow the 68, 95, 99.7 Rule, while smaller data sets
typically have more variability and therefore may be less likely to follow the Empirical Rule despite coming from normal populations. • We can also look at a graph of the sample data. By constructing
a histogram, stemplot, modified boxplot, or line plot, we can examine the data to look for strong skewness and outliers. Non-normal populations often produce sample data that have skewness or
outliers or both. Normal populations are more likely to have sample data that are symmetrical and bell-shaped and usually do not have outliers. •Normal probability plots can also be used to assess
the normality of a population through sample data. A normal probability plot is a scatterplot that graphs a predicted z-score against the value of the variable. Most graphing calculators and
statistical software packages are capable of constructing normal probability plots. You should be much more concerned with how to interpret a normal probability plot than with how one is constructed.
Again, technology helps us out in constructing the plot.
• Interpret the normal probability plot by assessing the linearity of the plot. The more linear the plot, the more normal the distribution. A non-linear probability plot is a good sign of a
non-normal population. Consider the following data taken from a distribution known to be uniform and non-normal (Figure 3.10). The accompanying normal probability plot is curved and is thus a sign
that the data is indeed taken from a non-normal population.
Figure 3.10
The non-linearity of the normal probability plot suggests that the data comes from a non-normal population.
Samples, Experiments, and Simulations

4.1 Sampling
4.2 Designing Experiments
4.3 Simulation
• It is imperative that we follow proper data collection methods when gathering data. Statistical inference is the process by which we draw conclusions about an entire population based on sample
data. Whether we are designing an experiment or sampling part of a population, it’s critical that we understand how to correctly gather the data we use. Improper data collection leads to incorrect
assumptions and predictions about the population of interest. If you learn nothing else about statistics, I hope you learn to be skeptical about how data is collected and to interpret the data
correctly. Properly collected data can be extremely useful in many aspects of everyday life. Inference based on data that was poorly collected or obtained can be misleading and lead us to incorrect
conclusions about the population.
4.1 Sampling • You will encounter certain types of sampling in AP Statistics. As always, it’s important that you fully understand all the concepts discussed in this chapter. We begin with some basic
definitions. • A population is all the individuals in a particular group of interest. We might be interested in how the student body of our high school views a new policy about cell phone usage in
school. The population of interest is all students in the school. We might take a poll of some students at lunch or during English class on a particular day. The students we poll are considered a
sample of the entire population. If we sample the entire student body, we are actually conducting a census. A census consists of all individuals in the entire population. The U.S. Census attempts to
count every resident in the United States and is required by the Constitution every ten years. The data collected by the U.S. Census
will help determine the number of seats each state has in the House of Representatives. There has even been some political debate on whether or not the U.S. should spend money trying to count
everyone when information could be gained by using appropriate sampling techniques. • A sampling frame is a list of individuals from the entire population from which the sample is drawn. • Several
different types of sampling are discussed in AP Statistics. One type often referred to is an SRS, or simple random sample. An SRS is a sample in which every set of n individuals has an equal chance
of being chosen. Referring back to our population of students, we could conduct an SRS of size 100 from the 2200 students by numbering all students from 1 to 2200. We could then use the random
integer function on our calculator, or the table of random digits, or we could simply draw 100 numbers out of a hat that included the 2200 numbers. It’s important to note that, in an SRS, not only
does every individual in the population have an equal opportunity of being chosen, but so does each sample. • When using the table of random digits, we should remember a few things. First of all,
there are many different ways to use the table. We’ll discuss one method and then I’ll briefly give a couple examples of another way that the table might be used. Consider our population of 2200
students. After numbering each student from 0001 to 2200, we can go to the table of random digits (found in many statistics books). We can go to any line—let’s say line 145. We can look at the first
four-digit number, which is 1968. This would be the first student selected for our sample. The next number is 7126. Since we do not have a student numbered 7126, we simply skip over 7126. We also
skip 3357, 8579, and 5806. The next student chosen is 0993. We continue in this fashion until we’ve selected the number of students we want in our sample. If we get to the end of the line, we simply
go on the next line. This is only one
method we could use. A different method might be to start at the top with line 101. Use the first “chunk” of five digits and use the last four digits of that five-digit number (note that the numbers
are grouped in groups of five for the purpose of making the table easier to read). That would give us 9223, which we would skip. We could then either go across to the next “chunk” of five digits or
go down to the “chunk” of five digits below our first group. No matter how you use the random digit table, just remember to be consistent and stay with the same system until the entire sample has
been chosen. Skipping the student numbered 1559 because he’s your old boyfriend or she’s your old girlfriend is not what random sampling is all about. • A stratified random sample could also be used
to sample our student body of 2200 students. We might break up our population into groups that we believe are similar in some fashion. Maybe we feel that freshmen, sophomores, juniors, and seniors
will feel different about our new policy concerning cell phones. We call these homogeneous groups strata. Within each stratum, we would then conduct an SRS. We would then combine these SRSs to obtain
the total sample. Stratified random sampling guarantees representation from each stratum. In other words, we know that our sample includes the opinions of freshmen, sophomores, juniors, and seniors. •
A cluster sample is similar to a stratified sample. In a cluster sample, however, the groups are heterogeneous, not homogeneous. That is, we don’t feel like the groups will necessarily differ from
one another. Once the groups are determined, we can conduct an SRS within each group and form the entire sample from the results of each SRS. Usually this method is used to make the sampling cheaper
or easier. We might sample our 2200 students during our three lunch periods. We could form three SRSs from our three lunch groups as long as we feel that all three lunch groups are similar to one
another and all represent the population equally.
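The numbering-and-drawing procedure used for an SRS, whether done with a table of random digits, a calculator, or numbers in a hat, can also be carried out with a computer. A minimal, illustrative Python sketch (not a required method, just one more way to randomize):

```python
import random

# Number the 2200 students 1 through 2200 and draw an SRS of size 100.
# random.sample draws without replacement, so every student -- and,
# in fact, every possible sample of 100 -- has an equal chance
# of being selected, which is exactly the definition of an SRS.
students = range(1, 2201)
srs = random.sample(students, k=100)

print(len(srs), len(set(srs)))  # 100 100 (100 distinct students)
```

Unlike the table of random digits, no numbers need to be skipped here, since the computer only generates labels that actually belong to a student.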
• Systematic sampling is a method in which it is predetermined how the sample will be obtained. We might, for example, sample every 25th student from our list of 2200 students. We should note that
this method is not considered an SRS since not all samples of a given size have an equal chance of being chosen. Think about it this way: If we sample every 25th student of the 2200, that’s 88
students. The first 25 students on the list of the 2200 students would never be chosen together, so technically it’s not an SRS. • A convenience sample could also be conducted from our 2200 students.
We would conduct a convenience sample because it’s, well, convenient. We might sample students in the commons area near the cafeteria because it’s an easy thing to do and we can do so during our
lunch break. It should be noted, however, that convenience samples almost always contain bias. That is to say that they tend to systematically understate or overstate the proportion of people that
feel a certain way; they are usually not representative of the entire population. • A voluntary response sample could be obtained by having people respond on their own. We might try to sample some of
our 2200 students by setting up an online survey where students could respond one time to a survey if they so choose. These types of samples suffer from voluntary response bias because those that
feel very strongly either for or against something are much more willing to respond. Those that feel strongly against something are actually more likely to respond than those that have strong
positive feelings. • A multistage sample might also be used. This is sampling that combines several different types of sampling. Some national opinion polls are conducted using this method.
• We should also be concerned with how survey questions are worded. We should ensure that the wording is not slanted in such a way as to sway the person taking the survey to answer the question in a
particular manner. Poorly worded questions can lead to response bias. Training sometimes takes place so that the person conducting the survey interview uses good interviewing techniques. •
Undercoverage occurs when individuals in the population are excluded in the process of choosing the sample. Undercoverage can lead to bias, so caution must be used. • Nonresponse can also lead to
bias when certain selected individuals cannot be reached or choose not to participate in the sample. • Our goal is to eliminate bias. Through proper sampling, it is possible to eliminate a good deal
of the bias that can be present if proper sampling is not used. We must realize that sampling is never perfect. If I draw a sample from a given population and then draw another sample in the exact
same manner, I rarely get the exact same results. There is almost always some sampling variability. Think about sampling our student body of 2200 students. If we conduct an SRS of 25 students and
then conduct another SRS of 25 students, we will probably not be sampling the same 25 students and thus may not get the exact same results. More discussion about sampling variability will take place in
later chapters. Remember, however, that larger random samples will give more accurate results than smaller samples conducted in the same manner. A smaller random sample, however, may give more
accurate results than a larger non-random sample.
4.2 Designing Experiments • Now that we’ve discussed some different types of sampling, it’s time to turn our attention to experimental design. It’s important to understand both observational studies
and experiments and the difference between them. In an observational study, we are observing individuals. We are studying some variable about the individuals but not imposing any treatment on them.
We are simply studying what is already happening. In an experiment, we are actually imposing a treatment on the individuals and studying some variable associated with that treatment. The treatment is
what is applied to the subjects or experimental units. We use the term “subjects” if the experimental units are humans. The treatments may have one or more factors, and each factor may have one or
more levels. • Example 1: Consider an experiment where we want to test the effects of a new laundry detergent. We might consider two factors: water temperature and laundry detergent. The first
factor, temperature, might have three levels: cold, warm, and hot water. The second factor, detergent, might have two levels: new detergent and old detergent. We can combine these to form six
treatments as listed in Figure 4.1.
Figure 4.1
Six treatments.
• It’s important to note that we cannot prove or even imply a cause-and-effect relationship with an observational study. We can, however, prove a cause-and-effect relationship with an experiment. In
an experiment, we observe the relationship between the explanatory and response variables and try to determine if a cause-and-effect relationship really does exist. • The first type of experiment
that we will discuss is a completely randomized experiment. In a completely randomized experiment, subjects or experimental units are randomly assigned to a treatment group. Completely randomized
experiments can be used to compare any number of treatments. Groups of equal size should be used, if possible. • Example 2: Consider an experiment in which we wish to determine the effectiveness of a
new type of arthritis medication. We might choose a completely randomized design. Given 600 subjects suffering from arthritis, we could randomly assign 200 subjects to group 1, which would receive
the new arthritis medication, 200 subjects to group 2, which would receive the old arthritis medication, and 200 subjects to group 3, which would receive a placebo, or “dummy” pill. To ensure that
the subjects were randomly placed into one of the three treatment groups, we could assign each of the 600 subjects a number from 001 to 600. Using the random integer function on our calculator, we
could place the first 200 subjects whose numbers come up in group 1, the second 200 chosen in group 2, and the remaining subjects in group 3. It should be noted that a placebo is used to help control
the placebo effect, which comes into play when people respond to the “idea” that they are receiving some type of treatment. A placebo, or “dummy” pill, is used to ensure that the placebo effect
contributes equally to all three groups. The placebo should taste, feel, and look like the real medication. The subjects would take the medication for a predetermined period of time before the
effectiveness of the medication was evaluated. We can use a diagram to help outline the design (see Figure 4.2).
Figure 4.2
A completely randomized design.
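The random assignment described in Example 2 can also be carried out in code. The sketch below is one illustrative way to do it in Python: shuffle the 600 subject numbers and cut the shuffled list into three groups of 200, which is equivalent to taking the first 200 random numbers for group 1, the next 200 for group 2, and so on:

```python
import random

# Subjects numbered 1-600, randomly assigned to three treatment
# groups of equal size, as in the arthritis experiment.
subjects = list(range(1, 601))
random.shuffle(subjects)

groups = {
    "new medication": subjects[0:200],
    "old medication": subjects[200:400],
    "placebo": subjects[400:600],
}
print([len(g) for g in groups.values()])  # [200, 200, 200]
```

Because the shuffle is random, every subject has the same chance of landing in each treatment group.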
To describe an experiment, it can be useful (but not essential) to use a diagram. Remember to explain how you plan to randomly assign individuals to each treatment in the experiment. This can be as
simple as using the table of random digits or using the random integer function of the graphing calculator. Be specific in your diagram, and be sure to fully explain how you are setting up the
experiment. • Example 3: Let’s reconsider Example 2. Suppose there is reason to believe that the new arthritis medication might be more effective for men than for women. We would then use a type of
design called a block design. We would divide our group of 600 subjects into one group of males and one group of females. Once our groups were blocked on gender, we would then randomly assign our
group of males to one of the three treatment groups and our group of females to one of the three treatment groups. It’s important to note that the use of blocking reduces variability within each of
the blocks. That is, it eliminates a confounding variable that may systematically skew the results. For example, if one is conducting an experiment on a weight-loss pill and blocking is not used, the
random assignment of the subjects may assign more females to the experimental group. If males and females respond differently to the treatment, you will not be able to determine whether the weight
loss is due to the drug’s effectiveness or due to the gender of the subjects in the group. Be sure to include random assignment in your diagram, but make
sure that you’ve done so after you’ve separated males and females. There are often a few students who get in a hurry on an exam and randomly place subjects into groups of males and females. It’s good
that they remember that random assignment is important, but it needs to come after the blocking, not before. • The arthritis experiment in Examples 2 and 3 might be either a single-blind or
double-blind experiment. In a single-blind experiment, the person taking the medication would not know whether they had the new medication, the old medication, or the placebo. If a physician is used
to help assess the effectiveness of the treatments, the experiment should probably be double-blind. That is, neither the subject receiving the treatment nor the physician would know which treatment
the subject had been given. Obviously, in the case of a double-blind experiment, there must be a third-party member that knows which subjects received the various treatments. • Example 4: A
manufacturer of bicycle tires wants to test the durability of a new material used in bicycle tires. A completely randomized design might be used where one group of cyclists uses tires made with the
old material and another group uses tires made with the new material. The manufacturer realizes that not all cyclists will ride their bikes on the same type of terrain and in the same conditions. To
help control for these variables, we can implement a matched-pairs design. Matching is a form of blocking. One way to do this is to have each cyclist use both types of tires. A coin toss could
determine whether the cyclist uses the tire with the new material on the front of the bike or on the rear. We could then compare the front and rear tire for each cyclist. Another way to match in this
situation might be to pair up cyclists according to rider size and weight, the location where they ride, and/or the type of terrain they typically ride on. A coin could then be tossed to decide which
of the
two cyclists uses the tires with the new material and which uses the tires with the old material. This method might not be as effective as having each cyclist serve as his/her own control and use one
tire of each type. • When you’re designing various types of experiments, it’s important to remember the four principles of experimental design. They are: 1. Control. It is very important to control
the effects of confounding variables. Confounding variables are variables (aside from the explanatory variable) that may affect the response variable. We often use a control group to help assess
whether or not a particular treatment actually has some effect on the subjects or experimental units. A control group might receive the “old” (or “traditional”) treatment, or it might receive a
placebo (“dummy” pill). This can help compare the various treatments and allow us to determine if the new treatment really does work or have a desired effect. 2. Randomization. It’s critical to
reduce bias (systematic favoritism) in an experiment by controlling the effects of confounding variables. We hope to spread out the effects of these confounding variables by using chance to randomly
assign subjects or experimental units to the various treatments. 3. Replication. There are two forms of replication that we must consider. First, we should always use more than one or two subjects or
experimental units to help reduce chance variation in the results. The more subjects or experimental units we use, the better. By increasing the number of experimental units or subjects, we know that
the difference between the experimental group and the control group is really due to the imposed treatment(s) and not just due to chance. Second, we should have designed an experiment that can be
replicated by others doing similar research.
4. Blocking. Blocking is not a requirement for experimental design, but it may help improve the design of the experiment in some cases. Blocking places individuals who are similar in some
characteristic in the same group, or “block.” These individuals are expected to respond in a similar manner to the treatment being imposed. For example, we may have reason to believe that men and
women will differ in how they are affected by a particular type of medication. In this case, we would be blocking on gender. We would form one group of males and one group of females. We would then
use randomization to assign males and females to the various treatments.
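The order of operations in a block design, block first, then randomize within each block, can be made concrete in code. The sketch below is illustrative; the subject labels ("M1", "F1", and so on) and the round-robin dealing of shuffled subjects across treatments are just one reasonable way to get equal-sized groups:

```python
import random

def assign_within_block(block, treatments):
    """Shuffle one block, then deal its members evenly across treatments.
    Randomization happens inside the block, after blocking."""
    shuffled = block[:]
    random.shuffle(shuffled)
    k = len(treatments)
    return {t: shuffled[i::k] for i, t in enumerate(treatments)}

treatments = ["new medication", "old medication", "placebo"]
males = [f"M{i}" for i in range(1, 301)]    # hypothetical subject labels
females = [f"F{i}" for i in range(1, 301)]

# Block on gender first, then randomize within each block.
male_groups = assign_within_block(males, treatments)
female_groups = assign_within_block(females, treatments)
print([len(male_groups[t]) for t in treatments])  # [100, 100, 100]
```

Note that the shuffle is applied separately to the male block and the female block, which is exactly the mistake-proofing discussed above: randomize after blocking, never before.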
4.3 Simulation • Simulation can be used in statistics to model random or chance behavior. In much the same way an airplane simulator models how an actual aircraft flies, simulation can be used to
help us predict the probability of some real-life occurrences. For our purposes in AP Statistics, we’ll try to keep it simple. If you are asked to set up a simulation in class or even on an exam,
keep it simple. Use things like the table of random digits, a coin, a die, or a deck of cards to model the behavior of the random phenomenon. • Let’s set up an example: As I was walking out of the
grocery store a few years ago, my two children, Cassidy and Nolan (ages 5 and 7 at the time), noticed a lottery machine that sold “scratch-offs” near the exit of the store. Despite explaining to them
how the “scratch-offs” worked and that the probability of winning was, well … not so good, they persuaded me to partake in the purchase of three $1 “scratch-offs.” Being an AP Stats teacher and all,
I knew I had a golden opportunity to teach them a lesson in probability and a “lesson” that gambling was “risky business.”
Sure, we might win a buck or two, but chances were pretty good that we’d lose, and even if we did win, the kids would hopefully lose interest since we would most likely just be getting our money
back. Once we were in the car, the lesson began. “Hmmm… Odds are 1:4,” I told them. That means that, on average, you win about one time for every five times you play. I carefully explained that the
chances of winning were not very good and that if we won, chances were pretty good that we would not win a lot. Two “scratch-offs” later … two winners, $1 each. Hmmm.… Not exactly what I had planned,
but at least I had my $2 back. “Can we buy some more?” they quickly asked. I told them that the next time we stopped for gas, we could buy two more “scratch-offs” but that was it. Surely they’d learn
their lesson this time. Two weeks later, we purchased two more $1 “scratch-offs.” Since I was in a hurry, I handed each of them a coin and a “scratch-off” and away we drove. Unfortunately for Nolan,
his $1 “scratch off” resulted in a loss. I felt a little bad about his losing, but in the long run it would probably be best. Moments later, Cassidy yells out, “I won a hundred dollars!” Sure, I
thought. She’s probably just joking. “Let me see that!” I quickly pulled over at the next opportunity to realize that she had indeed won $100! Again, not exactly what I’d planned, but hey … it was
$100! What are the chances of winning on three out of four “scratch-offs?” Let’s set up a simulation to try to answer the question. • Example 5: Use simulation to find the probability that someone
who purchases four $1 “scratch-offs” will win something on three out of the four “scratch-offs.” Assume the odds of winning on the “scratch-off” are 1:4. Solution:If the odds of winning are 1:4,
that means that in the long run we should expect to win one time out of every five plays. That is, we should expect, out of five plays, to win once and lose four times,
on average. In other words, the probability of winning is 1/5. Sometimes we might win more than expected and sometimes we might win less than expected, but we should average one win for every four
losses. We can set up a simulation to estimate the probability of winning. Let the digits 0–1 represent a winning “scratch-off.” Let the digits 2–9 represent a losing “scratch-off.” Note that 0–1 is
actually 2 numbers and 2–9 is 8 numbers. Also note that 1:4 odds would be the same as 2:8 odds. We have used single-digit numbers for the assignment as it is the simplest method in this case. We
could have used double-digit numbers for the assignment, but this would be unnecessary. Several different methods would work as long as the odds reduce to 1:4. To make it easier to keep track of the
numbers, we will group the one-digit numbers in “chunks” of four and label each group “W” for win and “L” for lose. Each group of four one-digit numbers represents one simulation of purchasing four
“scratch-offs” (Figure 4.3). Starting at line 107 of the table of random digits, we obtain:
Figure 4.3
“Scratch-off” probability simulation.
Figure 4.3 displays 50 trials of purchasing four “scratch-off” tickets. Only one of the 50 trials produced three winning tickets out of four. Based on our simulation, the probability of winning three
out of four times is only 1/50. In other words, Cassidy and Nolan were pretty lucky. Students sometimes find simulation to be a little tricky. Remember to keep it as simple as possible. A simulation
does not need to be complicated to be effective.
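The 50 hand-run trials in Figure 4.3 can also be run by computer, which makes it easy to use many more trials. The sketch below is one illustrative Python version of the same simulation; each ticket wins with probability 1/5, and we estimate the chance of exactly three winners out of four tickets:

```python
import random

def simulate(trials, p_win=0.2, tickets=4, wins_wanted=3, seed=1):
    """Estimate P(exactly 3 winning tickets out of 4) by simulation,
    with each ticket winning independently with probability 1/5."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    hits = 0
    for _ in range(trials):
        wins = sum(rng.random() < p_win for _ in range(tickets))
        if wins == wins_wanted:
            hits += 1
    return hits / trials

estimate = simulate(100_000)
print(round(estimate, 3))  # close to the exact value 4 * (0.2**3) * 0.8 = 0.0256
```

With 100,000 trials instead of 50, the estimate settles near the exact binomial probability of about 0.026, so our 1/50 from the table of random digits was in the right neighborhood. Cassidy and Nolan really were lucky.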
Probability

5.1 Probability and Probability Rules
5.2 Conditional Probability and Bayes’s Rule
5.3 Discrete Random Variables
5.4 Continuous Random Variables
5.5 Binomial Distributions
5.6 Geometric Distributions
5.1 Probability and Probability Rules • An understanding of the concept of randomness is essential for tackling the concept of probability. What does it mean for something to be random? AP Statistics
students usually have a fairly good concept of what it means for something to be random and have likely done some probability calculations in their previous math courses. I’m always a little
surprised, however, when we use the random integer function of the graphing calculator when randomly assigning students to their seats or assigning students to do homework problems on the board. It’s
almost as if students expect everyone in the class to be chosen before they are chosen for the second or third time. Occasionally, a student’s number will come up two or even three times before
someone else’s, and students will comment that the random integer function on the calculator is not random. Granted, it’s unlikely for this to happen with 28 students in the class, but not
impossible. Think about rolling a standard six-sided die. The outcomes associated with this event are random—that is, they are uncertain but follow a predictable distribution over the long run. The
proportion associated with rolling any one of the six sides of the die over the long run is the probability of that outcome. • It’s important to understand what is meant by in the long run. When I
assign students to their seats or use the random integer function of the graphing calculator to assign students to put problems on the board, we are experiencing what is happening in the short run.
The Law of Large Numbers tells us that the long-run relative frequency of repeated, independent trials gets closer to the true probability as the number of trials increases. Events that
seem unpredictable in the short
run will eventually “settle down” after enough trials are accumulated. This may require many, many trials. The number of trials that it takes depends on the variability of the random variable of
interest. The more variability, the more trials it takes. Casinos and insurance companies use the Law of Large Numbers on an everyday basis. Averaging our results over many, many individuals produces
predictable results. Casinos are guaranteed to make a profit because they are in it for the long run whereas the gambler is in it for the relative short run. • The probability of an event is always a
number between 0 and 1, inclusive. Sometimes we consider the theoretical probability and other times we consider the empirical probability. Consider the experiment of flipping a fair coin. The
theoretical probability of the coin landing on either heads or tails is equal to 0.5. If we actually flip the coin a 100 times and it lands on tails 40 times, then the empirical probability is equal
to 0.4. If the empirical probability is drastically different from the theoretical probability, we might consider whether the coin is really fair. Again, we would want to perform many, many trials
before we conclude that the coin is unfair. • Example 1: Consider the experiment of flipping a fair coin three times. Each flip of the coin is considered a trial and each trial for this experiment
has two possible outcomes, heads or tails. A list containing all possible outcomes of the experiment is called a sample space. An event is a subset of a sample space. A tree diagram can be used to
organize the outcomes of the experiment, as shown in Figure 5.1.
Figure 5.1
Tree diagram.
Tree diagrams can be useful in some problems that deal with probability. Each trial consists of one line in the tree diagram, and each branch of the tree diagram can be labeled with the appropriate
probability. Working our way down and across the tree diagram, we can obtain the eight possible outcomes in the sample space. S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT} To ensure that we have the
correct number of outcomes listed in the sample space, we could use the counting principle, or multiplication principle. The multiplication principle states that if you can do task 1 in m ways and
you can do task 2 in n ways, then you can do task 1 followed by task 2 in m × n ways. In this experiment we have three trials, each with two possible outcomes. Thus, we would have 2 × 2 × 2 = 8
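The sample space and the multiplication principle can both be checked by enumerating the outcomes. One illustrative way in Python, using `itertools.product` to generate every sequence of three flips:

```python
from itertools import product
from fractions import Fraction

# Enumerate the sample space for three flips of a fair coin.
sample_space = ["".join(flips) for flips in product("HT", repeat=3)]
print(sample_space)  # 2 * 2 * 2 = 8 equally likely outcomes

# Since the outcomes are equally likely, probabilities are just counts.
p_all_heads = Fraction(sample_space.count("HHH"), len(sample_space))
p_at_least_one_tail = 1 - p_all_heads
print(p_all_heads, p_at_least_one_tail)  # 1/8 7/8
```

The same enumeration confirms both Example 2 (P(HHH) = 1/8) and Example 3 (the complement rule gives P(at least one tail) = 7/8).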
possible outcomes in the sample space. • Example 2: Let’s continue with the experiment discussed in Example 1. What’s the probability of flipping the coin three times and obtaining heads all three times?
Solution: We can answer that question in one of two ways. First, we could use the sample space. HHH is one of eight possible (equally likely) outcomes listed in the sample space, so P(HHH) = 1⁄8. The
second method we could use to obtain P(HHH) is to use the concept of independent events. Two events are independent if the occurrence or non-occurrence of one event does not alter the probability of
the second event. The trials of flipping a coin are independent. Whether or not the first flip results in heads or tails does not change the probability of the coin landing on heads or tails for the
second or third flip. If two events are independent, then P(A ∩ B) = P(A and B) = P(A) • P(B). We can apply this concept to this experiment. P(HHH) = (1⁄2) • (1⁄2) • (1⁄2) = 1⁄8. Later on, in Example
11, we will show how to prove whether or not two events are independent. • Example 3: Again consider Example 1. Find the probability of obtaining at least one tail (not all heads). Solution: The
events “all heads” and “at least one tail” are complements. The set “at least one tail” is the set of all outcomes from the sample space excluding “all heads.” All the outcomes in a given sample
space should sum to one, and so any two events that are complements should sum to one as well. Thus, the probability of obtaining “at least one tail” is equal to: 1 – P(HHH) = 1 – 1⁄8 = 7⁄8. We can
verify our answer by examining the sample space we obtained in Example 1 and noting that 7 out of the 8 equally likely events in the sample space contain at least – one tail. Typical symbols for the
complement of event A are: Ac, A', or A . • Example 4: Consider the experiment of drawing two cards from a standard deck of 52 playing cards. Find the probability of drawing two hearts if the first
card is replaced and the deck is shuffled before the second card is drawn. The following tree diagram can be used to help answer the question.
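Before turning to the card examples, the coin-flip results from Examples 1 through 3 can be checked with a short sketch. This is purely illustrative (the variable names are ours, not standard notation):

```python
from itertools import product

# Multiplication principle: three flips with two outcomes each
# give 2 * 2 * 2 = 8 equally likely outcomes.
sample_space = ["".join(flips) for flips in product("HT", repeat=3)]
print(len(sample_space))   # 8

# P(HHH) is one favorable outcome out of eight.
p_hhh = sample_space.count("HHH") / len(sample_space)
print(p_hhh)               # 0.125

# P(at least one tail) via the complement rule.
print(1 - p_hhh)           # 0.875
```

Enumerating the sample space this way is exactly the tree diagram in list form.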
Figure 5.2
Tree diagram with replacement.
Solution: Let H = “heart” and Hc = “non-heart.” Notice that the probability of the second card being a heart is independent of the first card being a heart. Thus, P(HH) = 1⁄4 • 1⁄4 = 1⁄16. • Example
5: How would Example 4 change if the first card were not replaced before the second card was drawn? Find the probability of drawing two hearts if the first card drawn is not replaced before the
second card is drawn. Notice how the probabilities in the tree diagram change depending on whether or not a heart is drawn as the first card.
Figure 5.3
Tree diagram without replacement.
When two events A and B are not independent, then P(A ∩ B) = P(A) • P(B | A). This is a conditional probability, which we will discuss in more detail in section 5.2. Applying this formula, we obtain P(HH) = 13⁄52 • 12⁄51 = 1⁄17. • Example 6: Suppose that in a particular high school the probability that a student takes AP Statistics is equal to 0.30 (call this event A), and the probability that a
student takes AP Calculus is equal to 0.45 (call this event B.) Suppose also that the probability that a student takes both AP Statistics and AP Calculus is equal to 0.10. Find the probability that a
student takes either AP Statistics or AP Calculus. Solution: We can organize the information given in a Venn diagram as shown in Figure 5.4.
Figure 5.4
Venn diagram for two events, A and B.
Notice the probability for each section of the Venn diagram. The total circle for event A (AP Statistics) has probabilities that sum to 0.30 and the total circle for event B (AP Calculus) has
probabilities that sum to 0.45. Also notice that all four probabilities in the Venn diagram sum to 1. We can use the General Addition Rule for the Union of Two Events: P(A ∪ B) = P(A) + P(B) – P(A ∩ B). Note that ∪ (union) means “or” and ∩ (intersection) means “and.” We could then apply the formula as follows: P(A ∪ B) = 0.30 + 0.45 – 0.10 = 0.65. If you consider the Venn diagram, the
General Addition Rule makes sense. When you consider event A, you are adding in the “overlapping” of the two circles (student takes AP Stats and AP Calculus), and when you consider event B, you are
again adding in the “overlapping” of the two circles. Thus, the General Addition Rule has us subtracting the intersection of the two circles, which is the “overlapping” section. • Example 7:
Reconsider Example 6. Find the probability that a student takes neither AP Statistics nor AP Calculus. Solution: From the Venn diagram in Figure 5.4 we can see that the probability that a student
takes neither course is the area (probability) on the outside of the circles, which is 0.35. We could also conclude that 20% of students take AP Statistics but not AP Calculus and that 35% of
students take AP Calculus but not AP Statistics.
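The Venn-diagram arithmetic in Examples 6 and 7 is easy to check by computer. A quick sketch (illustrative; the names are ours):

```python
# Venn-diagram arithmetic from Examples 6 and 7:
# P(A) = 0.30 (AP Statistics), P(B) = 0.45 (AP Calculus),
# P(A and B) = 0.10.
p_a, p_b, p_a_and_b = 0.30, 0.45, 0.10

# General Addition Rule: subtract the overlap counted twice.
p_a_or_b = p_a + p_b - p_a_and_b
p_neither = 1 - p_a_or_b
p_stats_only = p_a - p_a_and_b
p_calc_only = p_b - p_a_and_b

print(round(p_a_or_b, 2))      # 0.65
print(round(p_neither, 2))     # 0.35
print(round(p_stats_only, 2))  # 0.2
print(round(p_calc_only, 2))   # 0.35
```

Each print matches one region of the Venn diagram in Figure 5.4.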
• Example 8: Referring again to Example 6, suppose that AP Statistics and AP Calculus are taught only once per day and during the same period. It would then be impossible for a student to take both
AP Statistics and AP Calculus. How would the Venn diagram change? Solution: We could construct the Venn diagram as shown in Figure 5.5.
Figure 5.5
Venn diagram for disjoint (mutually exclusive) events.
Notice that the circles are not overlapping since events A and B cannot occur at the same time. Events A and B are disjoint, or mutually exclusive. This implies that P(A ∩ B) = 0. Applying the
General Addition Rule, we obtain: P(A ∪ B) = 0.30 + 0.45 = 0.75. Notice that we do not have to subtract P(A ∩ B) since it is equal to 0. Thus, for disjoint events: P(A ∪ B) = P(A) + P(B).
• Don’t confuse independent events and disjoint (mutually exclusive) events. Try to keep these concepts separate, but remember that if you know two events are independent, they cannot be disjoint.
The reverse is also true. If two events are disjoint, then they cannot be independent. Think about Example 8, where it was impossible to take AP Statistics and AP Calculus at the same time (disjoint
events.) If a student takes AP Statistics, then the probability that they take AP Calculus changes from 0.45 to zero. Thus, these two events, which are disjoint, are not independent. That is, taking
AP Statistics changes the probability of taking AP Calculus. It’s also worth noting that some events are neither disjoint nor independent. The fact that an event is not independent does not
necessarily mean it’s disjoint and vice versa. Consider drawing two cards at random, without replacement, from a standard deck of 52 playing cards. The events “first card is an ace” and “second card
is an ace” are neither disjoint nor independent. The events are not independent because the probability of the second card being an ace depends on whether or not an ace was drawn as the first card.
The events are not disjoint because it is possible that the first card is an ace and the second card is also an ace.
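The two-aces claim can be verified with exact fractions. A short sketch (illustrative only):

```python
from fractions import Fraction

# Two cards without replacement: A = "first card is an ace",
# B = "second card is an ace". Exact arithmetic with fractions.
p_a = Fraction(4, 52)
p_b = Fraction(4, 52)             # unconditionally, by symmetry
p_a_and_b = Fraction(4, 52) * Fraction(3, 51)

print(p_a_and_b)    # 1/221: nonzero, so A and B are not disjoint
print(p_a * p_b)    # 1/169: differs from 1/221, so not independent
```

Since P(A ∩ B) ≠ 0 the events are not disjoint, and since P(A ∩ B) ≠ P(A) • P(B) they are not independent — exactly the point made above.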
5.2 Conditional Probability and Bayes’s Rule • Example 9: Example 5 is a good example of what we mean by conditional probability. That is, finding a given probability if it is known that another
event or condition has occurred or not occurred. Knowing whether or not a heart was chosen as the first card determines the probability that the second card is a heart. We can find P(2nd card heart |
1st card heart) by using the formula given in Example 5 and solving for P(A | B), read “A given B.” Thus,
P(A | B) = P(A ∩ B) ⁄ P(B).
When applying the formula, just remember that the numerator is always the intersection (“and”) of the events, and the denominator is always the event that comes after the “given that” line. Applying the formula, we obtain:
P(2nd card heart | 1st card heart) = P(2nd card heart ∩ 1st card heart) ⁄ P(1st card heart) = (13⁄52 • 12⁄51) ⁄ (13⁄52) = 12⁄51.
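This conditional-probability calculation can be reproduced with exact fractions (an illustrative sketch, not part of the original text):

```python
from fractions import Fraction

# P(A | B) = P(A and B) / P(B), applied to Example 9:
# two hearts drawn without replacement.
p_first_heart = Fraction(13, 52)
p_both_hearts = Fraction(13, 52) * Fraction(12, 51)

p_second_given_first = p_both_hearts / p_first_heart
print(p_second_given_first)   # 4/17, i.e. 12/51 in lowest terms
```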
The formula works, although we could have just looked at the tree diagram and avoided using the formula. Sometimes we can determine a conditional probability simply by using a tree diagram or looking
at the data, if it’s given. The next problem is a good example of a problem where the formula for conditional probability really comes in handy. • Example 10: Suppose that a medical test can be used
to determine if a patient has a particular disease. Many medical tests are not 100% accurate. Suppose the test gives a positive result 90% of the time if the person really has the disease and also
gives a positive result 1% of the time when a person does not have the disease. Suppose that 2% of a given population actually have the disease. Find the probability that a randomly chosen person
from this population tests positive for the disease. Solution: We can use a tree diagram to help us solve the problem (Figure 5.6).
Figure 5.6
Tree diagram for conditional probability.
It’s important to note that the test can be positive whether or not the person actually has the disease. We must consider both cases. Let event D = Person has the disease and let event Dc = Person
does not have the disease. Let "pos" = positive and "neg" = negative. P(pos) = 0.02 • 0.90 + 0.98 • 0.01 = 0.0278 Thus, the probability that a randomly chosen person tests positive for the disease is
0.0278. • Example 11: Referring to Example 10, find the probability that a randomly chosen person has the disease given that the person tested positive. In this case we know that the person tested
positive and we are trying to find the probability that they actually have the disease. This is a conditional probability known as Bayes’s Rule.
P(D | pos) = P(D ∩ pos) ⁄ P(pos) = (0.02 • 0.90) ⁄ (0.02 • 0.90 + 0.98 • 0.01) ≈ 0.6475.
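The whole tree-diagram computation from Examples 10 and 11 fits in a few lines. A sketch (variable names are ours):

```python
# The medical-test tree from Examples 10 and 11.
p_disease = 0.02
p_pos_given_disease = 0.90
p_pos_given_healthy = 0.01

# Total probability of a positive result (both branches of the tree).
p_pos = (p_disease * p_pos_given_disease
         + (1 - p_disease) * p_pos_given_healthy)
print(round(p_pos, 4))                # 0.0278

# Bayes's Rule: P(disease | positive).
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(round(p_disease_given_pos, 4))  # 0.6475
```

Note how small P(disease | positive) is even for a fairly accurate test, because the disease itself is rare.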
You should understand that Bayes’s Rule is really just an extended conditional probability rule. However, it’s probably unnecessary for you to remember the formula for Bayes’s Rule. If you understand
how to apply the conditional probability formula and you can set up a tree diagram, you should be able to solve problems involving Bayes’s Rule. Just remember that to find P(pos) you have to consider
that a positive test result can occur if the person has the disease and if the person does not have the disease. • Again, in some problems you may be given the probabilities and need to use the
conditional probability formula and in others it may be unnecessary. In the following example, the formula for conditional probability certainly works, but using it is unnecessary. All the data
needed to answer the conditional probability is given in the table (Figure 5.7).
• Example 12: Consider a marathon in which 38,500 runners participate. Figure 5.7 contains the times of the runners broken down by age. Find the probability that a randomly chosen runner runs under 3
hours given that they are in the 50+ age group.
[Table: runner counts by age group and finishing time, with time categories Under 3 Hrs., 3 – Under 4, 4 – Under 5, and Over 5 Hrs.]
Figure 5.7
Age and time of runners in a marathon.
Solution: Although we could use the conditional probability formula, it’s really unnecessary. It’s given that the person chosen is in the 50+ age group, which means instead of dividing by the total
number of runners (38,500) we can simply divide the number of runners that are both 50+ and ran under 3 hours by the total number of 50+ runners: P(Under 3 hrs | 50+) = 19⁄6,132 ≈ 0.0031.
Thus, the probability that a randomly chosen person runs under 3 hours for the marathon given that they are 50 or older is about 0.0031.
• Example 13: Let’s revisit independent events. Is the age of a runner independent of the time that the runner finishes the marathon? It doesn’t seem likely that the two events are independent of one
another. We would expect older runners to run slower, on average, than younger runners. There are certainly exceptions to this rule, which I am reminded of when someone ten years older than me
finishes before me in a marathon! Are the events “finishes under 3 hours” and “50+ age group” independent of one another? Solution: Remember, if the two events are independent then: P(A ∩ B) = P(A) •
P(B). Thus, if the events were independent, P(under 3 hrs and 50+) = P(under 3 hrs) • P(50+). We can again use the table values from Figure 5.7 to answer this question: 19⁄38,500 = 0.0004935, while (997⁄38,500) • (6,132⁄38,500) = 0.0041245, and 0.0004935 ≠ 0.0041245.
Since the two are not equal, the events are not independent.
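The same independence check, written out as a short sketch (the counts are the ones used above):

```python
# Independence check from Example 13, using the marathon counts:
# 19 runners are both 50+ and under 3 hours, 997 run under 3 hours,
# 6,132 are 50+, out of 38,500 total.
total = 38_500
joint = 19 / total
product = (997 / total) * (6_132 / total)

print(round(joint, 7))    # 0.0004935
print(round(product, 7))  # 0.0041245
print(joint == product)   # False: the events are not independent
```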
5.3 Discrete Random Variables • Now that we’ve discussed the concepts of randomness and probability, we turn our attention to random variables. A random variable is a numeric variable from a random
experiment that can take on different values. The random variable can be discrete or continuous. A discrete random variable, X, is a random variable that can take on only a countable number. (In some
cases a discrete random variable can take on a finite number of values and in others it can take on an infinite number of values.) For example, if I roll a standard six-sided die, there are only six
possible values of X, which can take on the values 1, 2, 3, 4, 5, or 6. I can then create a valid probability distribution for X, which lists the values of X and the corresponding probability that X
will occur (Figure 5.8).
Figure 5.8
Probability distribution.
The probabilities in a valid probability distribution must all be between 0 and 1 and all probabilities must sum to 1.
• Example 14: Consider the experiment of rolling a standard (fair) six-sided die and the probability distribution in Figure 5.8. Find the probability of rolling an odd number greater than 1.
Solution: Remember that this is a discrete random variable. This means that rolling an odd number greater than 1 is really rolling a 3 or a 5. Also note that we can’t roll a 3 and a 5 with one roll
of the die, which makes the events disjoint or mutually exclusive. We can simply add the probabilities of rolling a 3 and a 5. P(3 or 5) = 1⁄6 + 1⁄6 = 1⁄3 • We sometimes need to find the mean and
variance of a discrete random variable. We can accomplish this by using the following formulas:
Mean: μx = x1p1 + x2p2 + ... + xnpn = Σ x • P(x)
Variance: σx² = (x1 − μx)²p1 + (x2 − μx)²p2 + ... + (xn − μx)²pn = Σ (x − μx)² • P(x)
Std. Dev.: σx = √Var(X)
Recall that the standard deviation is the square root of the variance, so once we’ve found the variance it is easy to find the standard deviation. It’s important to understand how the formulas work.
Remember that the mean is the center of the distribution. The mean is calculated by summing up the product of all values that the variable can take on and their respective probabilities. The more
likely a given value of X, the more that value of X is “weighted” when we calculate the mean. The variance is calculated by averaging the squared deviations for each value of X from the mean.
• Example 15: Again consider rolling a standard six-sided die and the probability distribution in Figure 5.8. Find the mean, variance, and standard deviation for this experiment. Solution: We can
apply the formulas for the mean and variance as follows: μx = 1(1⁄6) + 2(1⁄6) + ... + 6(1⁄6) = 3.5; σx² = (1 − 3.5)²(1⁄6) + (2 − 3.5)²(1⁄6) + ... + (6 − 3.5)²(1⁄6) ≈ 2.9167; σx ≈
1.7078 Notice that since the six sides of the die are equally likely, it seems logical that the mean of this discrete random variable is equal to 3.5. As always, it’s important to show your work when
applying the appropriate formulas. Note that you can also utilize the graphing calculator to find the mean, variance, and standard deviation of a discrete random variable. But be careful! Some
calculators give the standard deviation, not the variance. That’s not a problem, however; if you know the standard deviation, you can simply square it to get the variance. You can find the standard
deviation of a discrete random variable on the TI83/84 graphing calculator by creating list one to be the values that the discrete random variable takes on and list two to be their respective
probabilities. You can then use the one-variable stats option on your calculator to find the standard deviation. Caution! You must specify that you want one-variable stats for list one and list two
(1-Var Stats L1,L2). Otherwise your calculator will only perform one-variable stats on list one.
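The same computation, done directly from the formulas rather than with a graphing calculator (an illustrative sketch):

```python
import math

# Mean, variance, and standard deviation of the fair-die
# distribution from Example 15, straight from the formulas.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mean = sum(x * p for x, p in zip(values, probs))
variance = sum((x - mean) ** 2 * p for x, p in zip(values, probs))
std_dev = math.sqrt(variance)

print(round(mean, 4))      # 3.5
print(round(variance, 4))  # 2.9167
print(round(std_dev, 4))   # 1.7078
```

Changing the `probs` list is all it takes to handle a loaded die like the one in the next example.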
• Example 16: Suppose a six-sided die with sides numbered 1–6 is loaded in such a way that in the long run you would expect to have twice as many “1’s” and twice as many “2’s” as any other outcome.
Find the probability distribution for this experiment, and then find the mean and standard deviation. Solution: See Figure 5.9.
Figure 5.9
Probability distribution for Example 16.
Since we are dealing with a valid probability distribution, we know that all probabilities must sum to 1: 2x + 2x + x + x + x + x = 8x, so 8x = 1 and x = 1⁄8. We can then complete the probability distribution
as follows (Figure 5.10).
Figure 5.10
Probability distribution for Example 16.
We can then find the mean and standard deviation by applying the formulas and using our calculators: μx = 1(1⁄4) + 2(1⁄4) + ... + 6(1⁄8) = 3; σx² = (1 − 3)²(1⁄4) + (2 − 3)²(1⁄4) + ... + (6 − 3)²(1⁄8) = 3; σx ≈ 1.7321. Notice that the mean is no longer 3.5. The loaded die “weights” two sides of the die so that they occur more frequently, which lowers the mean from 3.5 on the
standard die to 3 on the loaded die.
5.4 Continuous Random Variables • Some random variables are not discrete—that is, they do not always take on values that are countable numbers. The amount of time that it takes to type a five-page
paper, the time it takes to run the 100 meter dash, and the amount of liquid that can travel through a drainage pipe are all examples of continuous random variables.
• A continuous random variable is a random variable that can take on values that comprise an interval of real numbers. When dealing with probability distributions for continuous random variables we
often use density curves to model the distributions. Remember that any density curve has area under the curve equal to one. The probability for a given event is the area under the curve for the range
of values of X that make up the event. Since the probability for a continuous random variable is modeled by the area under the curve, the probability of X being one specific value is equal to zero.
The event being modeled must be for a range of values, not just one value of X. Think about it this way: The area for one specific value of X would be a line and a line has area equal to zero. This
is an important distinction between discrete and continuous random variables. Finding P(X ≥ 3) and P(X > 3) would produce the same result if we were dealing with a continuous random variable since P(X = 3) = 0. Finding P(X ≥ 3) and P(X > 3) would probably produce different results if we were dealing with a discrete random variable. In this case, X > 3 would begin with 4 because 4 is the first
countable number greater than 3. X ≥ 3 would include 3. • It is sometimes necessary to perform basic operations on random variables. Suppose that X is a random variable of interest. The expected value (mean) of X would be μx and the variance would be σx². Suppose also that a new random variable Z can be defined such that Z = a ± bX. The mean and variance of Z can be found by applying the following Rules for Means and Variances:
μz = a ± bμx    σz² = b²σx²    σz = |b|σx
• Example 17: Given a random variable X with μx = 4 and σx = 1.2, find μz, σz², and σz given that Z = 3 + 4X. Solution: Instead of going back to all values of X and multiplying all values by 4 and
adding 3, we can simply use the mean, variance, and standard deviation of X and apply the Rules for Means and Variances. Think about it. If all values of X were multiplied by 4 and added to 3, the
mean would change in the same fashion. We can simply take the mean of X, multiply it by 4, and then add 3.
μz = 3 + 4(4) = 19. The variability (around the mean) would be increased by multiplying the values of X by 4. However, adding 3 to all the values of X would increase the values of X by 3 but would not
change the variability of the values around the new mean. Adding 3 does not change the variability, so the Rules for Variances does not have us add 3, but rather just multiply by 4 or 42 depending on
whether we are working with the standard deviation or variance. If we are finding the new standard deviation, we multiply by 4; if we are finding the new variance, we multiply by 42 or 16. When
dealing with the variance we multiply by the factor squared. This is due to the relationship between the standard deviation and variance. Remember that the variance is the square of the standard deviation.
σz² = 4²(1.44) = 23.04    σz = 4(1.2) = 4.8
Notice that 4.8² = 23.04.
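The Rules for Means and Variances applied to Example 17 can be sketched as follows (illustrative; names are ours):

```python
# Rules for Means and Variances applied to Example 17:
# Z = 3 + 4X with mu_x = 4 and sigma_x = 1.2.
a, b = 3, 4
mu_x, sigma_x = 4, 1.2

mu_z = a + b * mu_x              # shifting by a moves the mean
var_z = b ** 2 * sigma_x ** 2    # shifting by a leaves the spread alone
sigma_z = abs(b) * sigma_x

print(mu_z)                # 19
print(round(var_z, 2))     # 23.04
print(round(sigma_z, 2))   # 4.8
```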
• Sometimes we wish to find the sum or difference of two random variables. If X and Y are random variables, we can use the following to find the mean of the sum or difference:
μX+Y = μX + μY    μX−Y = μX − μY
We can also find the variance by using the following if X and Y are independent random variables:
σ²X+Y = σ²X + σ²Y    σ²X−Y = σ²X + σ²Y
This is not a typo! We always add variances!
If X and Y are not independent random variables, then we must take into account the correlation, ρ. It is enough for AP* Statistics to simply know that the variables must be independent in order to
add the variances. You do not have to worry about what to do if they are not independent; just know that they have to be independent to use these formulas. • To help you remember the relationships
for means and variances of random variables, consider the following statement that I use in class: “We can add or subtract means, but we only add variances. We never, ever, ever, ever, never, ever
add standard deviations. We only add variances.” This incorrect use of the English language should help you remember how to work with random variables. • Example 18: John and Gerry work on a
watermelon farm. Assume that the average (expected) weight of a Crimson watermelon is 30 lbs. with a standard deviation of 3 lbs. Also assume that the average weight of a particular type of seedless
watermelon is 25 lbs. with a standard
deviation of 2 lbs. Gerry and John each reach into a crate of watermelons and randomly pull out one watermelon. Find the average weight, variance, and standard deviation of two watermelons selected
at random if John picks out a Crimson watermelon and Gerry picks out one of the seedless watermelons. Solution: The average weight of the two watermelons is just the sum of the two means.
μX+Y = 30 + 25 = 55. To find the variance for each type of melon, we must first square the standard deviation of each type to obtain the variance. We then add the variances: σ²X+Y = 3² + 2² = 13. We can then simply take the square root to obtain the standard deviation: σX+Y ≈ 3.6056 lbs. Thus, the combined weight of the two watermelons will have an expected (average) weight of 55 lbs. with a standard deviation of approximately 3.6056 lbs. • Example 19: Consider a
32 oz. soft drink that is sold in stores. Suppose that the amount of soft drink actually contained in the bottle is normally distributed with a mean of 32.2 oz. and a standard deviation of 0.8 oz.
Find the probability that two of these 32 oz soft drinks chosen at random will have a mean difference that is greater than 1 oz.
Solution: The sum or difference for two random variables that are normally distributed will also have a normal distribution. We can therefore use our formulas for the sum or difference of independent
random variables and our knowledge of normal distributions. Let X = the number of oz. of soft drink in one 32 oz. bottle. We are trying to find P(X1 − X2 > 1). We know that the mean of the differences is the difference of the means: μX1−X2 = 32.2 − 32.2 = 0. We can find the standard deviation of the difference by adding the variances and taking the square root: Var(X1 − X2) = σ²X1 + σ²X2 = 0.64 + 0.64 = 1.28, so Std Dev(X1 − X2) ≈ 1.1314. We can now use the mean and standard deviation along with a normal curve to obtain the following: P(z > (1 − 0)⁄1.1314) = P(z > 0.88). Using Table A and subtracting from 1 we obtain: 1 − 0.8106 = 0.1894.
The probability that two randomly selected bottles have a mean difference of more than 1 oz. is equal to 0.1894.
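Example 19 can also be checked without Table A by evaluating the normal CDF directly. A sketch (the `normal_cdf` helper is ours, built from the error function):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Example 19: each fill is N(32.2, 0.8); for independent bottles the
# difference X1 - X2 has mean 0 and variance 0.64 + 0.64 = 1.28.
sigma_diff = math.sqrt(0.8 ** 2 + 0.8 ** 2)
z = (1 - 0) / sigma_diff

print(round(sigma_diff, 4))         # 1.1314
print(round(1 - normal_cdf(z), 4))  # about 0.1884; the table answer
                                    # 0.1894 comes from rounding z to 0.88
```

The tiny discrepancy is rounding, not an error: the book rounds z to two decimals before using Table A.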
5.5 Binomial Distributions • One type of discrete probability distribution that is of importance is the binomial distribution. Four conditions must be met in order for a distribution to be considered
a binomial. These conditions are: 1. Each observation can be considered a “success” or “failure.” Although we use the words “success” and “failure,” the observation might not be what we consider to
be a success in a real-life situation. We are simply categorizing our observations into two categories. 2. There must be a fixed number of trials or observations. 3. The observations must be
independent. 4. The probability of success, which we call p, is the same from one trial to the next. • It’s important to note that many probability distributions do not fit a binomial setting, so
it’s important that we can recognize when a distribution meets the four conditions of a binomial and when it does not. If a distribution meets the four conditions, we can use the shorthand notation,
B(n, p), to represent a binomial distribution with n trials and probability of success equal to p. We sometimes call a binomial setting a Bernoulli trial. Once we have decided that a particular
distribution is a binomial
distribution, we can then apply the Binomial Probability Model. The formula for a binomial distribution is given on the AP* Statistics formula sheet:
P(X = k) = C(n, k) pᵏ(1 − p)ⁿ⁻ᵏ, where:
n = number of trials
p = probability of "success"
1 − p = probability of "failure"
k = number of successes in n trials
C(n, k) = n! ⁄ [(n − k)! k!]
• Example 20: Consider Tess, a
basketball player who consistently makes 70% of her free throws. Find the probability that Tess makes exactly 5 free throws in a game where she attempts 10 free throws. (We must make the assumption
that the free throw shots are independent of one another.) Solution: We can use the formula as follows: C(10, 5)(.70)⁵(.30)⁵. There are 10 trials and we want exactly 5 trials to be a "success." C(10, 5) means we have a combination of 10 things taken 5 at a time in any order. Some textbooks write this as 10C5. We can use our calculator to determine that there are in fact 252 ways to take 5 things from 10 things, if we do not care about the
order in which they are taken. For example: Tess could make the first 5 shots and miss the rest. Or, she could make the first shot, miss the next 5, and then make the last 4. Or, she could make every
other shot of the 10 shots. The list goes on. There are 252 ways she could make exactly 5 of the ten shots. Notice that the probabilities of success and failure must add up to one since there are
only two possible outcomes that can occur. Also notice that the exponents add up to 10. This is because we have 10 total trials with 5 successes and 5 failures. We should be able to use our
calculator to obtain: C(10, 5)(.70)⁵(.30)⁵ ≈ .1029. This is the work we would want to show on the AP* Exam! We could also use the following calculator command on the TI 83/84: binompdf(10,.7,5). We enter 10 for the number of trials, .7 for the probability of success, and 5 as the number of successes we want to obtain. However, binompdf(10,.7,5) does not count as work on the AP* Exam. You must show C(10, 5)(.70)⁵(.30)⁵ even though you might not actually use it to get the answer. Binompdf is a calculator command specific to one type of calculator, not standard
statistical notation. Don’t think you’ll get credit for writing down how to do something on the calculator. You will not! You must show the formula or identify the variable as a binomial as well as
the parameters n and p.
• Example 21: Consider the basketball player in Example 20. What is the probability that Tess makes at most 2 free throws in 10 attempts? Solution: Consider that “at most 2” means Tess can make
either 0, or 1, or 2 of her free throws. We write the following: C(10, 0)(.70)⁰(.30)¹⁰ + C(10, 1)(.70)¹(.30)⁹ + C(10, 2)(.70)²(.30)⁸. Always show this work. We can
either calculate the answer using the formula or use: binomcdf(10,.7,2) Notice that we are using cdf instead of pdf. The “c” in cdf means that we are calculating the cumulative probability. The
graphing calculator always starts at 0 trials and goes up to the last number in the command. Again, the work you show should be the work you write when applying the formula, not the calculator
command! Either way we use our calculator we obtain: C(10, 0)(.70)⁰(.30)¹⁰ + C(10, 1)(.70)¹(.30)⁹ + C(10, 2)(.70)²(.30)⁸ ≈ 0.0016
• Example 22: Again consider Example 20. What is the probability that Tess makes more than 2 of her free throws in 10 attempts? Solution: Making more than 2 of her free throws would mean making 3 or
more of the 10 shots. That’s a lot of combinations to consider and write down. It’s easier to use the idea of the complement that we studied earlier in the chapter. Remember that if Tess shoots 10
free throws she could make anywhere between none and all 10 of her shots. Using the concept of the complement we can write: 1 − [C(10, 0)(.70)⁰(.30)¹⁰ + C(10, 1)(.70)¹(.30)⁹ + C(10, 2)(.70)²(.30)⁸] ≈ .9984. That's pretty sweet. Try to remember this concept. It can make life a little easier for you! • Example 23: Find the expected number of shots
that Tess will make and the standard deviation. The following formulas are given on the AP* Exam: μ = np and σ = √(np(1 − p)). Remember that the expected number is the average number of shots that Tess
will make out of every 10 shots.
μ = 10 • 0.7 = 7
Thus, Tess will make 7 out of every 10 shots, on average. Seems logical! Using the formula for standard deviation, we obtain: σ = √(10(0.7)(0.3)) ≈ 1.4491
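All of the binomial computations from Examples 20 through 23 can be reproduced in one short sketch (the `binom_pmf` helper is ours; `math.comb` gives C(n, k)):

```python
import math

def binom_pmf(n, p, k):
    """Binomial probability: C(n, k) * p^k * (1-p)^(n-k)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Tess's free throws from Examples 20-23: B(n = 10, p = 0.7).
n, p = 10, 0.7

exactly_5 = binom_pmf(n, p, 5)
at_most_2 = sum(binom_pmf(n, p, k) for k in range(3))

print(round(exactly_5, 4))      # 0.1029
print(round(at_most_2, 4))      # 0.0016
print(round(1 - at_most_2, 4))  # 0.9984 (more than 2, via the complement)
print(round(n * p, 1))          # 7.0 (mean)
print(round(math.sqrt(n * p * (1 - p)), 4))  # 1.4491 (standard deviation)
```

On the exam you still show the formula work; this just confirms the arithmetic.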
5.6 Geometric Distributions • Example 24: Consider Julia, a basketball player who consistently makes 70% of her free throws. What is the probability that Julia makes her first free throw on her third
attempt? • How does this example differ from that of the previous section? In this example there are not a set number of trials. Julia will keep attempting free throws until she makes one. This is
the major difference between binomial distributions and geometric distributions. • There are four conditions that must be met in order for a distribution to fit a geometric setting. These conditions
are: 1. Each observation can be considered a “success” or “failure.” 2. The observations must be independent. 3. The probability of success, which we call p, is the same from one trial to the next.
4. The variable that we are interested in is the number of observations it takes to obtain the first success.
• The probability that the first success is obtained in the nth observation is: P(X = n) = (1 − p)ⁿ⁻¹p. Note that the smallest value that n can be is 1, not 0. The first success can happen on the
first attempt or later, but there has to be at least one attempt. This formula is not given on the AP* Exam! • Returning to Example 24: We want to find the probability that Julia makes her first free
throw on her third attempt. Applying the formula, we obtain: P(X = 3) = (1 − .7)³⁻¹(.7) ≈ 0.063. We can either use the formula to obtain the answer or we can use: Geompdf(0.7,3). Notice that we drop
the first value that we would have used in binompdf, which makes sense because in a geometric probability we don’t have a fixed number of trials and that’s what the first number in the binompdf
command is used for. Once again, show the work for the formula, not the calculator command. No credit will be given for calculator notation.
• Example 25: Using Example 24, what is the probability that Julia makes her first free throw on or before her fifth attempt? Solution: This is again a geometric probability because Julia will keep
shooting free throws until she makes one. For this problem, she could make the shot on her first attempt, second attempt, and so on until the fifth attempt. Applying the formula, we obtain: P(X = 1)
+ P(X = 2) + P(X = 3) + P(X = 4) + P(X = 5), or (1 − .7)⁰(.7) + (1 − .7)¹(.7) + (1 − .7)²(.7) + (1 − .7)³(.7) + (1 − .7)⁴(.7) ≈ 0.9976. We could also use the following formula, which is the formula for finding the probability that it takes more than n trials to obtain the first success: P(X > n) = (1 − p)ⁿ. Using this formula and the concept of the complement, we obtain: 1 − P(X > 5) = 1 − (1 − .7)⁵ ≈ 0.9976. Either method is OK as long as you show your work. If I used the first method, I would show at least three of the probabilities so that the grader of the AP* Exam knows that I
understand how to apply the formula.
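Both routes in Example 25, summing the first five pmf values and using the complement P(X > n) = (1 − p)^n, can be checked numerically (a sketch, not the book's calculator work):

```python
p = 0.7
direct = sum((1 - p) ** (n - 1) * p for n in range(1, 6))   # P(X=1)+...+P(X=5)
complement = 1 - (1 - p) ** 5                               # 1 - P(X > 5)
print(round(direct, 4), round(complement, 4))  # 0.9976 0.9976
```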
• Example 26: Using Example 24, find the expected value (mean) and the standard deviation. Solution: The mean in this case is the expected number of trials that it would take before the first success
is obtained. The formulas for the mean and standard deviation are: μ = 1/p and σ = √((1 − p)/p²). Applying these formulas (which are not given on the AP formula sheet), we obtain: μ = 1/.7 ≈ 1.4286 and σ = √((1 − .7)/.7²) ≈ 0.7825.
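The two values in Example 26 can be reproduced from μ = 1/p and σ = √((1 − p)/p²) (plain Python, shown only as a check):

```python
import math

p = 0.7
mu = 1 / p
sigma = math.sqrt((1 - p) / p ** 2)
print(round(mu, 4), round(sigma, 4))  # 1.4286 0.7825
```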
Sampling Distributions 6.1 Sampling Distributions 6.2 Sample Means and the Central Limit Theorem 6.3 Sample Proportions and the Central Limit Theorem
6.1 Sampling Distributions • Understanding sampling distributions is an integral part of inferential statistics. Recall that in inferential statistics you are making conclusions or assumptions about
an entire population based on sample data. In this chapter, we will explore sampling distributions for means and proportions. In the remaining chapters, we will call upon the topics of this and
previous chapters in order to study inferential statistics. • From this point on, it’s important that we understand the difference between a parameter and a statistic. A parameter is a number that
describes some attribute of a population. For example, we might be interested in the mean, μ, and standard deviation, σ, of a population. There are many situations for which the mean and standard deviation of a population are unknown. In some cases, it is the population proportion that is not known. That is where inferential statistics comes in. We can use a statistic to estimate the parameter. A statistic is a number that describes an attribute of a sample. So, for the unknown μ we can use the sample mean, x̄, as an estimate of μ. It's important to note that if we were to take another sample, we would probably get a different value for x̄. In other words, if we keep sampling, we will probably keep getting different values for x̄ (although some may be the same). Although μ may be unknown, it is a fixed number, as a population can have only one mean. The notation for the standard deviation of a sample is s. (Just remember that s is for "sample.") We sometimes use s to estimate σ, as we will see in later chapters.
• To summarize the notation, remember that the symbols μ and σ (parameters) are used to denote the mean and standard deviation of a population, and x̄ and s (statistics) are used to denote the mean and standard deviation of a sample. You might find it helpful to remember that s stands for "statistic" and "sample" while p stands for "parameter" and "population." You should also remember that Greek letters are typically used for population parameters. Be sure to use the correct notation! It can help convince the reader (grader) of your AP* Exam that you understand the difference between a sample and a population. • Consider again a population with an unknown mean, μ. Sometimes it is simply too difficult or costly to determine the true mean, μ. When this is the case, we then take a random sample from the population and find the mean of the sample, x̄. As mentioned earlier, we could repeat the sampling process many, many times. Each time we would recalculate the mean, and each time we might get a different value. This is called sampling variability. Remember, μ does not change. The population mean for a given population is a fixed value. The sample mean, x̄, on the other hand, changes depending on which individuals from the population are chosen. Sometimes the value of x̄ will be greater than the true population mean, μ, and other times x̄ will be smaller than μ. This means that x̄ is an unbiased estimator of μ. • The sampling distribution is the distribution of the values of the statistic if all possible samples of a given size are taken from the population. Don't
confuse samples with sampling distributions. When we talk about sampling distributions, we are not talking about one sample; we are talking about all possible samples of a particular size that we
could obtain from a given population.
• Example 1: Consider the experiment of rolling a pair of standard six-sided dice. There are 36 possible outcomes. If we define x̄ to be the average of the two dice, we can look at all 36 values in the sampling distribution of x̄ (Figure 6.1). If we averaged all 36 possible values of x̄, we would obtain the exact value of μ. This is always the case.
Figure 6.1
Possible outcomes, x̄, and dotplot when rolling a pair of dice.
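The 36 equally likely averages can be enumerated to confirm that the sampling distribution of x̄ is centered exactly at the population mean of 3.5 (a quick sketch):

```python
# Every ordered pair of dice faces, averaged:
outcomes = [(a + b) / 2 for a in range(1, 7) for b in range(1, 7)]
mean_of_means = sum(outcomes) / len(outcomes)
print(len(outcomes), mean_of_means)  # 36 3.5
```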
• As mentioned, sometimes the value of x̄ is below μ, and sometimes it is above μ. In Example 1, you can see that the center of the sampling distribution is exactly 3.5. In other words, the statistic, x̄, is unbiased because the mean of the sampling distribution is equal to the true value of the parameter being tested, which is μ. Although the values of x̄ may differ, they do not tend to consistently
overestimate or underestimate the true mean of the population. • As you will see later in this chapter, larger samples have less variability when it comes to sampling distributions. The spread is
determined by how the sample is designed as well as the size of the sample. It’s also important to note that the variability of the sampling distribution for a particular sample size does not depend
on the size of the population from which the sample is obtained. An SRS (simple random sample) of size 4000 from the population of U.S. residents has approximately the same variability as an SRS of
size 4000 from the population of Indiana residents. However, in order for the variability to be the same, both samples must be the same size and be obtained in the same manner. We want our samples to
be obtained from correct sampling methods and the sample size to be large enough that our samples have low bias and low variability.
6.2 Sample Means and the Central Limit Theorem • The following activity will help you understand the difference between a population and a sample, sampling distributions, sampling variability, and
the Central Limit Theorem. I learned of this activity a few years ago from AP Statistics consultant and teacher Chris True. I am not sure where this activity originated, but it will help you
understand the concepts presented in this chapter. If you’ve done this activity in class, that’s great! Read through the next few pages anyway, as it will provide you with a good review of sampling
and the Central Limit Theorem.
• The activity begins with students collecting pennies that are currently in circulation. Students bring in enough pennies over the period of a few days such that I get a total of about 600 to 700
pennies between all of my AP Statistics classes. Students enter the dates of the pennies into the graphing calculator (and Fathom) as they place the pennies into a container. These 600 to 700 pennies
become our population of pennies. Then students make a guess as to what they think the distribution of our population of pennies will look like. Many are quick to think that the distribution of the
population of pennies is approximately normal. After some thought and discussion about the dates of the pennies in the population, students begin to understand that the population distribution is not
approximately normal but skewed to the left. Once we have discussed what we think the population distribution should look like, we examine a histogram or dotplot of the population of penny dates. As
you can see in Figure 6.2, the distribution is indeed skewed to the left.
Figure 6.2
Population distribution for 651 pennies.
• It’s important to discuss the shape, center, and spread of the distribution. As just stated, the shape of the distribution of the population of pennies is skewed left. The mean, which we will use
as a measure of center, is μ = 1990.5868. Since we are using the mean as the measure of center, it makes sense to use the standard deviation to measure spread. For this population of 651 pennies, σ = 12.6937 years. • Once we've discussed the shape, center, and spread of the population distribution, we begin sampling. Students work in pairs and draw out several samples of each of the sizes: 4, 9,
16, 25, and 50. Sampling variability becomes apparent as students repeat samples for the various sample sizes. We divide up the sampling task among the students in class so that when we are done we
have about 100 to 120 samples for each sample size. We graph the sampling distribution for each sample size and compute the mean and standard deviation of each sampling distribution. • We can then
analyze the sampling distribution for each sample size. We begin with n = 4 (Figure 6.3).
Figure 6.3
Sampling distribution for samples of size n = 4.
• Again, think about the shape, center, and spread of the distribution. Remember, this is a sampling distribution. This is the distribution of about 100 samples of size 4. As you can see in Figure
6.3, the shape of the sampling distribution is different from that of the population. Although the shape of the population is skewed left, the shape of the sampling distribution for n = 4 is more
symmetrical. The center of the sampling distribution is x̄ = 1990.6636, which is very close to the population mean, μ. The spread of the sampling distribution is σx̄ ≈ 6.6031. We can visualize that the
spread of the sampling distribution is less than that of the population and that the mean of the sampling distribution (balancing point) is around 1990 to 1991. Note that if we had obtained all
samples of size 4 from the population, then
μx̄ = μ = 1990.5868 and σx̄ = σ/√n = 12.6937/√4 ≈ 6.3469
Although it’s impractical to obtain all possible samples of size 4 from the population of 651 pennies, our results are very close to what we would obtain if we had obtained all 7,414,857,450 samples.
That's right; from a population of 651 pennies, the number of samples of size 4 you could obtain is (651 choose 4) = 7,414,857,450. • The following sampling distribution is for samples of size
n = 9 (Figure 6.4).
Figure 6.4
Sampling distribution for samples of size n = 9.
• The sampling distribution for samples of size n = 9 is more symmetrical than the sampling distribution for n = 4. The mean and standard deviation for this sampling distribution are x̄ ≈ 1990.1339 and σx̄ ≈ 4.8852. Again, we can visualize that the mean is around 1990 to 1991 and that the spread is less for this distribution than that for n = 4. • The following sampling distribution is for
samples of size n = 16 (Figure 6.5).
Figure 6.5
Sampling distribution for samples of size n = 16.
• The sampling distribution for samples of size n = 16 is more symmetrical than the sampling distribution for n = 9. The mean and standard deviation for this sampling distribution are x̄ ≈ 1990.4912 and σx̄ ≈ 4.3807. Again, we can visualize that the mean is around 1990 to 1991 and that the spread is less for this distribution than that for n = 9. • Notice the outlier of 1968. Although it's
possible to obtain a sample of size n = 16 with a sample average of 1968 from our population of pennies, it is very unlikely. This is probably a mistake on the part of the student reporting the
sample average or on the part of the student recording the sample average. It’s interesting to note the impact that the outlier has on the variability of the sampling distribution. The theoretical
standard deviation for the sampling distribution is σx̄ = σ/√n = 12.6937/√16 ≈ 3.1734
Notice that the standard deviation of our sampling distribution is greater than this value, which is due largely to the outlier of 1968. This provides us with a good reminder that the mean and
standard deviation are not resistant measures. That is to say that they can be greatly influenced by extreme observations. • The following is the sampling distribution obtained for n = 25 (Figure 6.6).
Figure 6.6
Sampling distribution for samples of size n = 25.
• The mean and standard deviation for this sampling distribution are x̄ ≈ 1990.75 and σx̄ ≈ 2.5103. The shape of the sampling distribution is more symmetrical and more normal. We can visualize that
the center of the distribution is again around 1990 to 1991 and that the variability is continuing to decrease as the sample size gets larger. • The following is the sampling distribution obtained
for n = 50 (Figure 6.7).
Figure 6.7
Sampling distribution for samples of size n = 50.
• The mean and standard deviation for this sampling distribution are x̄ ≈ 1990.4434 and σx̄ ≈ 3.0397. The shape of the sampling distribution is more normal than that of any of the sampling distributions of smaller sample sizes. The center can again be visualized to be around 1990 to 1991, and the spread can be visualized to be smaller than that of the sampling distributions of smaller
sample sizes. Notice, however, that the standard deviation of the sampling distribution is actually larger than that of size n = 25. How can this happen? Notice that there are two outliers. My
students called them “super outliers.” These are responsible for making the standard deviation of the sampling distribution larger than it would be theoretically. These outliers are very, very
unlikely. We would be more likely to be struck by lightning twice while winning the lottery than to obtain two outliers as extreme as these. The outliers are probably due to human error in recording
or calculating the sample means. • The penny activity is the Central Limit Theorem (the Fundamental Theorem of Statistics) at work. The Central Limit Theorem says that as the sample size increases,
the sampling distribution of x̄ approaches a normal distribution with mean μ and standard deviation σx̄ = σ/√n.
This is true for any population, not just normal populations! How large the sample must be depends on the shape of the population. The more non-normal the population, the larger the sample size needs
to be in order for the sampling distribution to be approximately normal. Most textbooks consider 30 or 40 to be a “large” sample. The Central Limit Theorem allows us to use normal calculations when
we are dealing with nonnormal populations, provided that the sample size is large. It is important to remember that μx̄ = μ and σx̄ = σ/√n for any sampling distribution of the mean. The Central Limit Theorem states that the shape of the sampling distribution becomes more normal as the sample size increases.
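The penny activity can be imitated in code: draw repeated samples from a left-skewed population and watch the mean of the sample means land near μ while the spread shrinks toward σ/√n. A rough sketch using only Python's standard library (the population below is synthetic, not the actual 651 penny dates):

```python
import random
import statistics

random.seed(1)
# A left-skewed stand-in for the penny dates (most values near 2010):
population = [2010 - int(40 * random.random() ** 2) for _ in range(651)]
mu = statistics.mean(population)
sigma = statistics.pstdev(population)

n = 25
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(2000)]
print(round(statistics.mean(sample_means), 2), "vs mu =", round(mu, 2))
print(round(statistics.stdev(sample_means), 2),
      "vs sigma/sqrt(n) =", round(sigma / n ** 0.5, 2))
```

Because the samples here are drawn without replacement from a finite population, the observed spread can run slightly below σ/√n.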
6.3 Sample Proportions and the Central Limit Theorem • Now that we’ve discussed sampling distributions, sample means, and the Central Limit Theorem, it’s time to turn our attention to sample
proportions. Before we begin our discussion, it’s important to note that when referring to a sample proportion, we always use pˆ. When referring to a population proportion, we always use p. Note that
some texts use π instead of p. In this case, π is just a Greek letter being used to denote the population proportion, not 3.1415 … • The Central Limit Theorem also applies to proportions as long as the
following conditions apply: 1. The sampled values must be independent of one another. Sometimes this is referred to as the 10% condition. That is, the sample size must be only 10% of the population
size or less. If the sample size is larger than 10% of the population, it is unlikely that the individuals in the sample would be independent. 2. The sample must be large enough. A general rule of
thumb is that np ≥ 10 and n(1 – p) ≥ 10. As always, the sample must be random. • If these two conditions are met, the sampling distribution of pˆ should be approximately normal. The mean of the
sampling distribution of pˆ is exactly equal to p. The standard deviation of the sampling distribution is equal to √(p(1 − p)/n). • Note that because the average of all possible pˆ values is equal to p,
the sample proportion, pˆ , is an unbiased estimator of the population proportion, p.
• Also notice how the sample size affects the standard deviation. Notice that as n gets larger, the fraction p(1 − p)/n gets smaller. Thus, as the sample size increases, the variability in the sampling distribution decreases. This is the same concept discussed in the penny activity. Note also that for any sample size n, the standard deviation is largest from a population with p = 0.50.
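Both claims, shrinking with n and peaking at p = 0.50, can be seen by tabulating √(p(1 − p)/n) (a small sketch):

```python
import math

def sd_phat(p, n):
    # standard deviation of the sampling distribution of p-hat
    return math.sqrt(p * (1 - p) / n)

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, round(sd_phat(p, 100), 4))   # largest at p = 0.5
print(round(sd_phat(0.5, 100), 3), round(sd_phat(0.5, 400), 3))  # 0.05 0.025
```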
Inference for Means 7.1 The t-Distributions 7.2 One-Sample t-Interval for the Mean 7.3 One-Sample t-Test for the Mean 7.4 Two-Sample t-Interval for the Difference Between Two Means 7.5 Two-Sample
t-Test for the Difference Between Two Means 7.6 Matched Pairs (One-Sample t) 7.7 Errors in Hypothesis Testing: Type I, Type II, and Power
7.1 The t-Distributions • The Central Limit Theorem (CLT) is a very powerful tool, as was evident in the previous chapter. Our penny activity demonstrated that as long as we have a large enough
sample, the sampling distribution of x̄ is approximately normal. This is true no matter what the population distribution looks like. To use a z-statistic, however, we have to know the population standard deviation, σ. In the real world, σ is usually unknown. Remember, we use statistical inference to make predictions about what we believe to be true about a population. • When σ is unknown, we estimate σ with s. Recall that s is the sample standard deviation. When using s to estimate σ, the standard deviation of the sampling distribution for means is sx̄ = s/√n. When you use s to estimate σ, the standard deviation of the sampling distribution is called the standard error of the sample mean, x̄. • While working for Guinness Brewing in Dublin, Ireland, William S. Gosset discovered that when he used s to estimate σ, the shape of the sampling distribution changed depending on the sample size. This new distribution was not exactly normal. Gosset called this new
distribution the t-distribution. It is sometimes referred to as the student's t. • The t-distribution, like the standard normal distribution, is single-peaked, symmetrical, and bell shaped. It's
important to notice, as mentioned earlier, that as the sample size (n) increases, the variability of the sampling distribution decreases. Thus, as the sample size increases, the t-distributions
approach the standard normal model. When the sample size is small, there is more variability in the sampling distribution, and therefore there is more area (probability) under the density curve in
the “tails” of the distribution. Since the area in the “tails” of the distribution is greater, the t-distributions are “flatter” than the standard normal curve. We refer to a t-distribution by its
degrees of freedom. There are n–1 degrees of freedom. The “n–1” degrees of freedom
are used since we are using s to estimate σ, and s has n–1 degrees of freedom. Figure 7.1 shows two different t-distributions with 3 and 12 degrees of freedom, respectively, along with the standard normal curve. It's important to note that when dealing with a normal distribution, z = (x̄ − μ)/(σ/√n), and when working with a t-distribution, t = (x̄ − μ)/(s/√n). Using s to estimate σ introduces another source of variability into the statistic.
Figure 7.1
Density curves with 3 and 12 degrees of freedom. Notice how the t-distribution approaches the standard normal curve as the degrees of freedom increases.
7.2 One-Sample t-Interval for the Mean • As mentioned earlier, we use statistical inference when we wish to estimate some parameter of the population. Often, we want to estimate the mean of a
population. Since we know that sample statistics usually vary, we will construct a confidence interval. The confidence interval will give a range of values that would be reasonable values for the
parameter of interest, based on the statistic obtained from the sample. In this section, we will focus on creating a confidence interval for the mean of a population. • When dealing with inference, we
must always check certain assumptions for inference. This is imperative! These “assumptions” must be met for our inference to be reliable. We confirm or disconfirm these “assumptions” by checking the
appropriate conditions. Throughout the remainder of this book, we will perform inference for different parameters of populations. We must always check that the assumptions are met before we draw
conclusions about our population of interest. If the assumptions cannot be verified, our results may be inaccurate. For each type of inference, we will discuss the necessary assumptions and
conditions. • The assumptions and conditions for a one-sample t-interval or one-sample t-test are as follows: Assumptions 1. Individuals are independent
Conditions 1. SRS and 15 Assumptions and conditions that verify: 1. Individuals are independent. We are given that the sample is random, and we can safely assume that there are more than 420
teenagers in the U.S. (10n < N). 2. Normal population assumption. We are given a large sample; therefore the sampling distribution of x should be approximately normal. We should be safe using
Step 2: t = (x̄ − μ)/(s/√n) = (16.5 − 15)/(4.5/√42) ≈ 2.1602
df = 41
p ≈ 0.0183 Step 3: With a p-value of 0.0183, we reject the null hypothesis at the 5% level. There appears to be enough evidence to reject the null hypothesis and conclude that the typical teen plays
more than 15 hours of video games per week. • Let’s return to the p-value. What does a p-value of 0.0183 really mean? Think about it this way. If the typical teen really does play an average of 15
hours of video games a week, the probability of taking a random sample from that population and obtaining an x value of 16.5 or more is only 0.0183. In other words, it’s possible, but pretty
unlikely. Only about 1.83% of the time can we obtain a sample average of 16.5 hours or greater, if the true population mean is 15 hours. • In Example 2, we rejected the null hypothesis at the 5%
level (this is called the alpha level, α). The most common levels at which we reject the null hypothesis are the 5% and 1% levels. That's not to say that we can't reject a null hypothesis at the 10%
level or even at the 6% or 7% levels; it’s just that 1% and 5% happen to be commonly accepted levels at which we reject or fail to reject the null hypothesis.
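The test statistic in Example 2 can be reproduced directly from t = (x̄ − μ)/(s/√n); the p-value itself requires a t-distribution table or calculator, so only the statistic is recomputed here (a sketch):

```python
import math

# x-bar = 16.5 hours, hypothesized mu = 15, s = 4.5, n = 42
xbar, mu0, s, n = 16.5, 15, 4.5, 42
t = (xbar - mu0) / (s / math.sqrt(n))
print(round(t, 4), "with df =", n - 1)  # 2.1602 with df = 41
```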
• You may struggle a little while first using the p-value to determine whether you should reject or fail to reject the null hypothesis. Always compare your p-value to the given α-level. In Example 2, we used an α-level of 0.05. Our p-value of 0.0183 led us to reject at the 5% level because 0.0183 is less than 0.05. We did not reject at the 1% level because 0.0183 is greater than 0.01. To reject at a given α-level, the p-value must be less than the α-level. • If an α-level is not given, you should use your own judgment. You are probably safe using a 1% or 5% alpha level.
However, don’t feel obligated to use a level. You can make a decision based on the p-value without using an alpha level. Just remember that the smaller the p-value, the more evidence you have to
reject the null hypothesis. • The p-value in Example 2 is found by calculating the area to the right of the test statistic t = 2.1602 under the t-distribution with df = 41. If we had used a two-sided
alternative instead of a one-sided alternative, we would have obtained a p-value of 0.0367, which would be double that of the one-sided alternative. Thus, the p-value for the two-sided test would be
found by calculating the area to the right of t = 2.1602 and combining that with the area to the left of t = –2.1602.
7.4 Two-Sample t-Interval for the Difference Between Two Means • We are sometimes interested in the difference in two population means, μ1 – μ2. The assumptions and conditions necessary to carry out
a confidence interval or test of significance are the same for two-sample means as they are for one-sample means, with the addition that the samples must be independent of one another. You must check
the assumptions and conditions for each independent sample.
• The assumptions and conditions for a two-sample t-interval or two-sample t-test are as follows: Assumptions
1. Samples are independent of each other
1. Are they? Does this seem reasonable?
2. Individuals in each sample are independent
2. Both SRSs and both
How to align 2D shapes
Lets suppose that two shapes describe the same geographic area each digitized by different people at different coordinate system e.g. different transform, rotation and scaling and you want to match
the two shapes.
This may seem unusual task. Unfortunately I had to deal with it. In particular I was given a Modflow model which was rotated and translated so that the left bottom corner of the domain was the (0,0)
coordinate and I had to convert it to the coordinate system I was using. Also the two models described the same area, yet the digitization of the outline was made by different people therefore there
were significantly different in the details.
The approach I took might not be the best and I welcome suggestions.
As I have some background in optimization, I decided to set up an optimization problem where the decision variables would be the translation, rotation, and scaling factors, and the objective function would be the minimization of the area that remains after a Boolean subtraction between the two shapes.
Although this may sound like a complicated task, it turned out to be relatively trivial using Matlab, as it provides all the functionality one may need.
Let's call the outline of the Modflow domain the modflow shape and the one with actual coordinates the target shape.
First we need to load the two shapes into the Matlab workspace. For the modflow shape I rasterized the Modflow grid and converted it to a polygon shapefile in QGIS. The target shape was already a shapefile. The two shapefiles can be easily loaded with the shaperead Matlab function:
modflow = shaperead('CVHM_outline');
target = shaperead('B118_outline_simple');
subplot(1,2,1); plot( modflow(1,1).X, modflow(1,1).Y )
subplot(1,2,1); title('Modflow')
axis equal
subplot(1,2,2); plot( target(1,1).X, target(1,1).Y )
subplot(1,2,2); title('Target')
axis equal
It is quite important to simplify the two shapefiles beforehand; otherwise the Boolean operations might slow down the process.
Next we have to write the objective function. Its argument will be a 4×1 vector (X, Y, R, S), i.e. the x and y translation, the rotation, and the scaling.
Create a new file, e.g. obj_fun.m, and paste the code from this link:
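The obj_fun.m behind the link is not reproduced in this extract. Purely to illustrate the idea (this is not the author's code; the function names and test shape are invented, and it is written in Python rather than Matlab), the two core ingredients, a similarity transform and a polygon area, could look like this:

```python
import math

def transform(points, tx, ty, theta_deg, s):
    # Rotate by theta (degrees), scale by s, then translate by (tx, ty).
    th = math.radians(theta_deg)
    return [(s * (x * math.cos(th) - y * math.sin(th)) + tx,
             s * (x * math.sin(th) + y * math.cos(th)) + ty)
            for x, y in points]

def shoelace_area(points):
    # Polygon area via the shoelace formula (vertices given in order).
    n = len(points)
    return abs(sum(points[i][0] * points[(i + 1) % n][1]
                   - points[(i + 1) % n][0] * points[i][1]
                   for i in range(n))) / 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
moved = transform(square, 5, -2, 30, 2.0)
print(round(shoelace_area(moved), 6))  # area scales by s**2 -> 4.0
```

In the actual objective one would transform the modflow outline, take the Boolean difference against the target polygon, and return the leftover area for the optimizer to minimize.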
Now we are ready to run the optimization problem. I have found that pattern search is a very reliable optimization algorithm for this problem.
First I run the optimization without scaling, because small changes in scale can make big differences. This will give an idea about the scaling limits.
% Set the optimization options first
pa_opt = psoptimset('CompletePoll', 'on', 'CompleteSearch','on', ...
    'MaxIter', 1000, 'PlotFcns',{@psplotbestf, @psplotbestx},...
    'PollMethod', 'GPSPositiveBasis2N', 'SearchMethod',@GPSPositiveBasis2N);
% Run the optimization (translation and rotation only, no scaling yet)
[xf, fval] = patternsearch(@obj_fun,[0 0 0], [], [], [], [], [],[], [], pa_opt);
The figure above shows the result of the first optimization. The red shape is the translated and rotated geometry. The translation and rotation look good, but we definitely need scaling to make the match better. We can also estimate that the scaling factor has to be greater than 1 and less than 1.5. Therefore we will set those values as constraints. In addition, we will set the starting point of the next optimization run to the solution of the previous run.
[xf, fval] = patternsearch(@obj_fun,[xf(1) xf(2) xf(3) 1.1], [], [], [], [], ...
    [xf(1)-1e5 xf(2)-1e5 30 1.0], ...
    [xf(1)+1e5 xf(2)+1e5 32 1.5], [], pa_opt);
The figure below shows the final result of the optimization run.
I should note that I had to perform quite a few optimization runs to reach this solution. In addition, I performed a few runs with fewer decision variables, for example keeping the rotation constant and optimizing only for scaling.
As you can see, the result is not perfect and there are discrepancies, especially in the north area. Therefore it may be necessary to optimize the rotation further.
Orbital rendezvous enables spacecraft to perform missions to service satellites, remove space debris, resupply space stations, and return samples from other planets. These missions are often
considered high risk due to concerns that the two spacecraft will collide if the maneuvering capability of one spacecraft is compromised by a fault.
In this thesis, a passive safety analysis is used to evaluate the probability that a fault that compromises maneuvering capability results in a collision. For a rendezvous
mission, the chosen approach trajectory, state estimation technique, and probability of collision calculation each impact the total collision probability of the mission. This
thesis presents a modular framework for evaluating and comparing the probability of collision of rendezvous mission design concepts.
Trade studies were performed using a baseline set of approach trajectories, and a Kalman Filter for relative state estimation and state estimate uncertainty. The state covariance matrix following
each state update was used to predict the resulting probability of collision if a fault were to occur at that time. These trade studies emphasize that the biggest indicator of rendezvous mission risk
is the time spent on a nominal intercept trajectory.
Degree Type
• Master of Science in Aeronautics and Astronautics
• Aeronautics and Astronautics
Advisor/Supervisor/Committee Chair
Dr. David A. Spencer
Additional Committee Member 2
Dr. Kathleen C. Howell
Additional Committee Member 3
Dr. Carolin Frueh
Creeping Flow in Fluids: Examples and Analysis
Key Takeaways
• Creeping flow describes fluid flow in which inertia is insignificant.
• Creeping flow at zero Reynolds number is what we call Stokes flow.
• Compared to general fluid flow, creeping flow is easier to solve mathematically due to the absence of non-linear or advective terms.
The flow of high-viscosity fluids such as paints, heavy oils, and food-processing materials is an example of creeping flow
Do you remember learning about creepers and climbers in elementary school science class? We classify plants as creepers or climbers based on whether they grow horizontally or vertically along the
soil. Creeping movements are seen in living and nonliving things, and the main characteristic of a “creeper” is gradual movement.
We can relate the gradual flow in fluids to creeping movement, provided certain conditions are met. A significant example of creeping flow is seen in the movement of heavy oils, honey, etc. These
fluids flow with difficulty due to viscosity. There are so many applications in which we make use of fluids that showcase creeping flow. Let’s explore what this flow is through a few examples.
Creeping Flow in Fluids
Creeping flow describes fluid flow in which inertia is insignificant. The viscous and pressure forces exerted on the fluid are greater than the inertia. Fluids with high viscosity have difficulty
flowing, and they usually travel in a creeping motion. Even though inertia is negligible in these fluids, they are dominated by internal friction. Fluids in creeping flow are non-turbulent and do not form spinning vortices; they creep around obstacles rather than becoming turbulent.
Creeping flow is also known as Stokes flow. In the creeping motion of fluids, viscous forces dominate over advective inertial forces. In fluids, the creeping flow is a laminar type of flow where
streamlines are parallel to each other. The velocity of creeping flow is very low.
Reynolds Number and Creeping Flow
Reynolds number is a dimensionless number that gives the relation between advective inertial forces and viscous forces. Reynolds number is directly proportional to the density of the fluid, the velocity of the fluid, and the characteristic length of the flow, and is inversely proportional to the dynamic viscosity of the fluid. It is the value of the Reynolds number that distinguishes between the laminar type and turbulent type of
flow in fluids. For Reynolds numbers below 2000, the flow type is laminar. The higher the Reynolds number, the more the flow becomes chaotic. When the Reynolds number is greater than 2000, the flow
type is turbulent.
For creeping flow, the Reynolds number is less than 1 (Re<<1). When Reynolds number is less than unity, inertial effects can be ignored, taking into account only the viscous resistance. The fluid
flow is non-chaotic in creeping motion. Fluid flow that travels in a creeping motion is time reversible.
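The thresholds described above can be captured in a short sketch; the fluid property values in the example are rough illustrative figures, not measurements.

```python
def reynolds_number(density, velocity, length, viscosity):
    """Re = rho * v * L / mu (dimensionless); 'length' is the
    characteristic length scale of the flow."""
    return density * velocity * length / viscosity

def flow_regime(re):
    """Rough regime labels following the thresholds used above."""
    if re < 1:
        return "creeping"
    elif re < 2000:
        return "laminar"
    return "turbulent"

# Honey oozing off a spoon (illustrative values, SI units):
re_honey = reynolds_number(density=1400, velocity=0.01, length=0.01, viscosity=10)
# Water moving briskly through a household pipe:
re_water = reynolds_number(density=1000, velocity=1.0, length=0.05, viscosity=0.001)
```

The honey case comes out at Re well below 1 (creeping), while the water case lands deep in the turbulent range, which is why honey never forms vortices the way stirred water does.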
Navier-Stokes Equation and Creeping Flow
To be precise, the creeping flow at zero Reynolds number is what we call Stokes flow. The Reynolds number is small in microfluidics devices and can be classified as creeping flow. The creeping flow
in fluids is viscous flow and can be mathematically expressed using the Navier-Stokes equation.
In the creeping flow observed in microfluidics devices, the left-side terms of the Navier-Stokes equation, which give the rate of change of momentum of the fluid, are neglected. These momentum terms are non-linear, and neglecting them linearizes the equation. It is the smallness of the Reynolds number that justifies dropping the convective terms from the Navier-Stokes equation.
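Written out, the creeping-flow limit drops the inertial (left-hand side) terms of the incompressible Navier-Stokes momentum equation, leaving the linear Stokes equations:

```latex
% Incompressible Navier--Stokes momentum equation:
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
        + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u}
% For Re << 1 the inertial left-hand side is negligible,
% leaving the linear Stokes (creeping-flow) equations:
0 = -\nabla p + \mu\,\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u} = 0
```

Because the remaining equations are linear in the velocity field, solutions can be superposed, which is what makes creeping flow so much more tractable than the general case.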
Examples of Creeping Flow
One of the applications utilizing creeping flow is hydrodynamic lubrication. Hydrodynamic lubrication utilizes the properties of highly viscous fluids and their flow through small channels to bring
effective lubrication. The flow of the lubricant fluid through the gaps between bearings and races is governed by the balance between viscous friction and the pressure gradient. The heavy pressure
exerted in the bearing gaps helps prevent surfaces from rubbing each other, which is effective in causing hydrodynamic lubrication.
Applications based on the creeping flow of fluids include, but are not limited to:
• Flow of high-viscosity fluids such as paints, heavy oils, and food-processing materials
• Extrusion of melts
• Seepage in sand or rock formation
• Dust particle settling
• Any small object moving in fluids
• Locomotion of microorganisms in fluids
• Flow of groundwater or oil through small channels or cracks
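For the dust-settling case in the list above, the classic creeping-flow result is Stokes' terminal velocity, obtained by balancing Stokes drag against the particle's buoyant weight. A small sketch with illustrative property values:

```python
def stokes_terminal_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
    """Terminal settling speed of a small sphere in creeping flow,
    from balancing Stokes drag (F = 6*pi*mu*r*v) against the sphere's
    buoyant weight: v_t = 2 r^2 g (rho_p - rho_f) / (9 mu)."""
    return 2 * radius**2 * g * (rho_particle - rho_fluid) / (9 * mu)

# A 10-micrometre-radius dust grain settling in air (illustrative values):
v_t = stokes_terminal_velocity(radius=1e-5, rho_particle=2000,
                               rho_fluid=1.2, mu=1.8e-5)
# Sanity check: the resulting Reynolds number must stay well below 1
# for the Stokes formula to be self-consistent.
re = 1.2 * v_t * (2 * 1e-5) / 1.8e-5
```

The check at the end matters: if the computed velocity produced Re near or above 1, the creeping-flow assumption behind the formula would no longer hold.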
Compared to general fluid flow, creeping flow is easier to solve mathematically due to the absence of non-linear or advective terms. Cadence’s suite of software can help you find solutions for
creeping flow as well as complicated general fluid flows. With these tools, it is easier to run CFD simulations in complex fluid-dependent systems that facilitate fluid flow modeling.
Subscribe to our newsletter for the latest CFD updates or browse Cadence’s suite of CFD software, including Fidelity and Fidelity Pointwise, to learn more about how Cadence has the solution for you.
|
{"url":"https://resources.system-analysis.cadence.com/blog/msa2022-creeping-flow-in-fluids-examples-and-analysis","timestamp":"2024-11-07T09:33:03Z","content_type":"text/html","content_length":"205518","record_id":"<urn:uuid:bb37c9bd-2812-4c7d-afe9-fd05a3fe301e>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00194.warc.gz"}
|
Rust - How to create Two-Dimensional Array Example
Rust, Software Development
Rust – How to create Two-Dimensional Array Example
An array is an important construct in most programming languages. This post shows how to create one- and two-dimensional arrays in Rust. Although Rust has different ways of creating arrays, the
following examples demonstrate some of them.
One-Dimensional Arrays
Before we move to two-dimensional Rust arrays, it is best to revisit one-dimensional arrays. Please check out How to Declare and Initialize an Array. As we already know, we always need to initialize
arrays before we use them. For example, the following codes will not compile.
let mut my_ints:[i32; 4];
my_ints[0] = 10;
To fix the compilation, we modify the codes as follows. These new codes will compile successfully.
let mut my_ints:[i32; 4] = [0, 0, 0, 0];
my_ints[0] = 10;
Here is another example of a one-dimensional Rust array. Consider the following codes.
let mut my_keywords:[&str; 2] = ["EMPTY"; 2];
for my_keyword in my_keywords.iter() {
    println!("{}", my_keyword);
}
When we run these codes, we get the following output:
EMPTY
EMPTY
One-dimensional arrays are different from two-dimensional arrays. Below is a visual representation of a one-dimensional array. It has one set of indexes for its values, and setting or changing a
value requires one index.
If we want to modify the index 0 (zero) value, we could change the above codes to the following.
let mut my_keywords:[&str; 2] = ["EMPTY"; 2];
my_keywords[0] = "NOT EMPTY NOW";
for my_keyword in my_keywords.iter() {
    println!("{}", my_keyword);
}
When we run the new codes, we will get the following result:
NOT EMPTY NOW
EMPTY
Two-Dimensional Arrays
With one-dimensional arrays, we use one pair of square brackets. However, with two-dimensional arrays in Rust, we use multiple square brackets NOT placed side by side with each other. In Java, we would have something like the following. Notice the pairs of square brackets sit side by side with each other:
// These are Java codes
int[][] myIntMatrix = new int[3][2];
for(int i = 0; i < 3; i++ )
{
    for(int j = 0; j < 2; j++ )
    {
        myIntMatrix[i][j] = 8;
    }
}
In Rust, we would have the following equivalent codes to declare and initialize a two-dimensional array. Strange, right? There are nested square brackets!
let my_int_matrix:[[i32;2];3] = [[8;2];3];
Therefore, the outer square brackets represent the rows, while the inner square brackets represent the columns of the two-dimensional Rust array. We can visualize the content of the variable
my_int_matrix as follows. What if we need three- or n-dimensional arrays? Same drill. We use nested square brackets.
To display the contents of two-dimensional Rust array my_int_matrix :
for (i, row) in my_int_matrix.iter().enumerate() {
    for (j, col) in row.iter().enumerate() {
        println!("[row={}][col={}]={}", i, j, col);
    }
}

// Display value given the indexes
println!("{}", my_int_matrix[0][0]);
[row=0][col=0]=8
[row=0][col=1]=8
[row=1][col=0]=8
[row=1][col=1]=8
[row=2][col=0]=8
[row=2][col=1]=8
We tested the codes using Rust 1.53.0, and this post is part of the Rust Programming Language For Beginners Tutorial.
|
{"url":"https://turreta.com/blog/2019/09/08/rust-how-to-create-two-dimensional-array-example/","timestamp":"2024-11-07T15:59:26Z","content_type":"text/html","content_length":"293609","record_id":"<urn:uuid:26c20b59-de0f-46a1-8366-ebff2fc22a65>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00219.warc.gz"}
|
hPVI, and what it says about the Senate - LeftMN
hPVI, and what it says about the Senate
This was in response to my posting of the Minnesota Senate hPVI numbers yesterday:
This shows GOP keeping MN Sen RT @eric_pusey: Check this out! RT @tonyangelo: Minnesota Senate hPVI ow.ly/1kGFkC #stribpol #mnleg
— Bill Walsh (@billtwalsh) June 19, 2012
The short response (it’s twitter, there are no other kinds of responses I suppose) that I provided to that is as follows:
@billtwalsh Not necessarily, DFLers historically do better in marginal districts.
— Tony Petrangelo (@TonyAngelo) June 19, 2012
That answer suffered from the problem of not being very detailed though, so that is what I will do within the confines of this post, go into detail.
That detail is best illustrated with a graph (for that matter, what point isn’t well served by being graphed)!
What you’re looking at is a scatter plot of every legislative election from 2006-2010 (with the exception of races with only one candidate). The vertical axis is the election margin (positive for a
DFL victory), the horizontal axis represents the corresponding districts hPVI (the districts hPVI at the time of the election, not current hPVI. This is why I only went back to 2006, because
compiling hPVI data for 2004, with redistricting having happened in 2002, was not practical).
If one was to separate the graph into quadrants, the top left quadrant would represent Democrats who won election in GOP leaning districts. While the bottom right quadrant represents GOPers who have
won election in districts that lean Democratic.
Looking at the scatter plot it should be pretty obvious which quadrant has more little dots. You might be surprised just how many little dots are in the top left quadrant though, 100.
DFL candidates have won 100 elections for the legislature in GOP leaning districts over the last three cycles while the GOP has won just eight such races in DFL leaning districts.
But that comparison is a little bit unfair to the GOP since 2006 and 2008 were DFL wave years while the GOP only had a wave in 2010. So if we were to just compare 2006 to 2010 (because there were
state senate elections in both of those years), the picture doesn’t get any better for the GOP.
In those two elections DFLers won 65 races in GOP leaning seats while the GOP won eight in DFL districts. That is the point I was making with my response to Bill Walsh that I posted above. DFLers win
in Republican leaning seats all the time, Republicans rarely win in DFL seats though.
The above scatter plot contains a formula for the regression line that runs through it. If we work that formula backwards, solving for a 0% margin of victory, we get R+3.8 (or -3.8) as the answer.
What this means is that we would expect a district with an R+3.8 lean to result in a tied election and anything greater than that would be a GOP victory, anything less, a DFL victory. Meaning a pure
toss-up district is between R+3 and R+4.
If we were to use the regression formula to estimate Ted Daley’s margin of victory, as an example, we see that, despite his being in an R+1 district and all other thing’s being equal, he’s expected
to lose by almost six points.
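The fitted regression appears only in the scatter-plot image above, so the coefficients below are hypothetical stand-ins, chosen so the sketch reproduces the two numbers quoted in the text (a break-even point near R+3.8 and an expected loss of almost six points at R+1):

```python
def expected_dfl_margin(hpvi, slope=2.1, intercept=7.98):
    """Predicted DFL margin (in points) for a district, with hPVI
    negative for R-leaning seats. The slope and intercept are
    HYPOTHETICAL stand-ins for the article's fitted regression,
    which appears only in the scatter-plot image."""
    return slope * hpvi + intercept

def break_even_hpvi(slope=2.1, intercept=7.98):
    """Solve expected_dfl_margin(hpvi) == 0 for hpvi."""
    return -intercept / slope
```

Working the formula backwards this way is just solving the regression line for a zero margin; any district to the right of that break-even hPVI is expected to elect a Republican, anything to the left a DFLer.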
With that in mind, let’s again look upon the list of most vulnerable Republicans.
Candidate (Senate district | hPVI)
Jeremy Miller (28 | D+1)
John Carlson (5 | EVEN)
April King (42 | EVEN)
Ted Daley (51 | R+1)
Keith Downey (49 | R+2)
John Pederson (14 | R+2)
Ted Lillie (53 | R+3)
Pam Wolf (37 | R+3)
Joe Gimse (17 | R+4)
Benjamin Kruse (36 | R+4)
If we just went by what we learned about hPVI above, we would actually expect the first six names on the list to lose. If that happened and the DFL held all of its current seats, then the race in
SD53, the next one on the list, would decide control of the Senate.
In other words, hPVI sees the Senate as pretty much a pure toss-up.
|
{"url":"https://left.mn/2012/06/hpvi-and-what-it-says-about-the-senate/","timestamp":"2024-11-11T08:16:13Z","content_type":"text/html","content_length":"36054","record_id":"<urn:uuid:053e24a0-e9df-435c-b5de-f809df0584c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00645.warc.gz"}
|
Download Math Playground Cool Games APK latest version
cubefield cool math games - Den Levande Historien
Math Play has a large collection of free online math games for elementary and middle school students. Here you can find interactive games designed to make math drills fun and entertaining. On our
website kids can play exciting online games such as soccer games, math baseball games, math racing games, football math games, basketball games Play math games and involve fun math activities to
explain area and perimeter concepts Dog Kennel Game for Perimeter: In the Area Game of Dog Kennel mentioned above, when the children are told that the borders of rectangles are fences of the Kennels,
a perimeter game may be designed and the same worksheet must be used to introduce perimeter. Play Papa's Games on Hooda Math. Our unblocked addicting Papa's games are fun and free. Also try Hooda
Math online with your iPad or other mobile device. Parents and Teachers: I am excited to announce the creation of my educational games group on Facebook, co-moderated by Colleen of Math Playground!
CLICK HERE - Online Math Games for Kids | MathPlayground.com. Play with math and give your brain a workout! Math Playground is filled with 100s of math Math Playground - Probability Spinner. https://www.mathplayground.com/probability.html. 7336 Raging Ridge Rd / Harrisburg, NC 28075. Phone: 704 260 6490 Math Playground is a popular learning site filled with math games, logic puzzles and a
variety of problem solving activities.
Click the yellow link to continue!
Tips on sites and apps that may suit your child - Learnify
ICTmagic - MathsI "Mattelänkar". Math Games, Videos, and Worksheets for the Common Core Download Math Playground Cool Games APK latest
version 1.0.4 - com.mathgamesplayground - Play the best Friv cool math games now!
Geo Patterns | Shapes, Patte.. Critter Counter | Counting G.. Search for More Math Playground Free Online Games Math Playground Games (Page 1) Little Miss Inventor Math.
These fun math games for kids will reinforce basic arithmetic concepts and spark a love for learning. When it comes to teaching math, you& You can play the Prodigy Math game by creating a game
account, accessing your class code, logging in and creating a custom wizard character for the fantasy You can play the Prodigy Math game by creating a game account, accessing your class Math
Playground is a popular learning site filled with math games, logic puzzles, step-by-step instructional videos, math practice, and a variety of problem math playground. Math games and more at
MathPlayground.com! Problem solving, games, and puzzles the entire family will enjoy. Overview Math Playground is an educational site filled with engaging math games, challenging logic puzzles,
computation practice, problem solving activities and math Balancing Equations game.
directly in Mancala, Math Playground, How to play MANCALA in 5 steps (with a series of moves and shots). Mancala - PrimaryGames - Play free online games. See how many math questions you can answer before the
time runs out! Lists: 0 Downloads: 3 There are many games and websites for kids to use to practice. Lists: 0 Downloads: 10 Math Playground is a dual purpose Math application http://
www.aplusmath.com/Games/HiddenPicture/HiddenPicture.php?gametype=Multiplication http://www.mathplayground.com/thinkingblocks.html Snoring Game · Treasure Island Room Service Menu · Treasure Island
Game Map Abcya Civiballs · Cool Math Snoring · Math Playground Snoring Treasure Run - Play it now at Cool Math Games: Warning: This game requires a huge amount of concentration and Play Run at
MathPlayground.com! Both digital times-table practice and various kinds of games on Multiplikationstabellen, or on the English-language but slightly cooler Math Playground – multiplication games.
Never associated learning algebra with rescuing … Math Playground. 11K likes. MathPlayground.com is an action-packed site for elementary and middle school students. Play a math game, solve a logic
Zombies are attacking your lands! As the king of not only your kingdom, but also math, it’s up to you – and you alone – to stop the zombies from taking over. All you have to do is answer the math
questions right, and you’ll save the kingdom!
http://members.learningplanet.com/teachers/ Download Multiplikationstabellen APK Android Game for free to your Similar Games to Multiplikationstabellen · Math Playground Icon. Matematik Manga High
Math Playground Mult Multiplication Games Skolplus Timez Attack Webbmagistern Engelska Easy Things For Beginners Glosträning Math Playground http://www.mathplayground.com/. Lets Play Math Math
Blaster http://www.mathblaster.com/ Ivan Moscovich “The big book of brain games”. Math Grapher. Free of charge. Kids ABC and Counting Join and Connect the Dot Alphabet Puzzle game.
A favorite of parents and teachers, 28 Feb 2021 Where to find free online math games for kids and adults · 1.
MATEMATIK-LÄNKAR - WordPress.com
84 Math Websites for K-8 | Ask a Tech TeacherI "Mattelänkar". ICTmagic - MathsI "Mattelänkar". Math Games, Videos, and Worksheets for the Common Core Download Math Playground Cool Games APK latest
version 1.0.4 - com.mathgamesplayground - Play the best Friv cool math games now! Offering a wide variety of KS2 markings for your tarmac area so key stage two pupils can use the graphics on
their school playgrounds. games for the school Baby Monster 123 - My First Numbers Cool Math Playground - Fun & Easy Counting Game.
Download FRV GameBox - Free Fun Games apk latest
Buy The Lost City: Read Apps & Games Reviews - Amazon.com Hands-On Math: Base Ten Blocks is a virtual math manipulative playground where students. 5 Math Games To Play with UNO Cards - Primary
Playground. There's a good chance you have a pack on UNO cards at home or in your classroom. Here are 5 Hands-On Math: Base Ten Blocks is a virtual math manipulative playground where The new game
uses concrete objects to represent abstract numbers: from Dec 21, 2013 - Bottom line: A math app geared toward first graders, this app covers a Game play is a bit slow, but extensive content and a
practice area that allows users to move Zwobb - Gamer's Playground'80s & '90s Childhood Games. http://www.mathplayground.com/ math games etc. http://www.surfnetkids.com/games/Math_Games/ math games etc.
http://members.learningplanet.com/teachers/ Download Multiplikationstabellen APK Android Game for free to your Similar Games to Multiplikationstabellen · Math Playground Icon.
Download Android game apk Multiplication table Math, Brain
Article from Spanish Number Games from PBS - Spanish Playground Coolmath Games is a brain-training site, for everyone, where logic & thinking & math meets fun & games. Lots of fun math logic games
at Math Playground.
|
{"url":"https://hurmanblirriksctwy.netlify.app/36409/61531","timestamp":"2024-11-13T05:04:21Z","content_type":"text/html","content_length":"18898","record_id":"<urn:uuid:cecc3520-3fd7-4e2c-ae94-4d25ca6893db>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00260.warc.gz"}
|
The Stacks project
Lemma 10.65.1. Let $R \to S$ be a ring map. Let $N$ be an $S$-module. Let $A$, $A'$, $A_{fin}$, $B$, and $B_{fin}$ be the subsets of $\mathop{\mathrm{Spec}}(S)$ introduced above.
1. We always have $A = A'$.
2. We always have $A_{fin} \subset A$, $B_{fin} \subset B$, $A_{fin} \subset A'_{fin} \subset B_{fin}$ and $A \subset B$.
3. If $S$ is Noetherian, then $A = A_{fin}$ and $B = B_{fin}$.
4. If $N$ is flat over $R$, then $A = A_{fin} = A'_{fin}$ and $B = B_{fin}$.
5. If $R$ is Noetherian and $N$ is flat over $R$, then all of the sets are equal, i.e., $A = A' = A_{fin} = A'_{fin} = B = B_{fin}$.
|
{"url":"https://stacks.math.columbia.edu/tag/05GA","timestamp":"2024-11-12T00:44:12Z","content_type":"text/html","content_length":"27161","record_id":"<urn:uuid:d453c36d-b3d4-4c99-992f-5813999439bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00122.warc.gz"}
|
A little portal platformer
This is pretty interesting. I played through the whole thing. Short and fun!
Nice illusory balls! :) Somehow rotation does not work for me correctly in the Web version, it is constantly rotating, unless I set the rotation speed to 0 (but then I cannot rotate).
Please do not call it non-Euclidean though... non-Euclidean geometry is a completely different thing, portals change the topology, but the geometry remains Euclidean. Non-Euclidean geometry is so
strange and cool that gamers will not even notice the strangeness (but they will still notice it is cool). Unfortunately some gamers recently have started confusing people by calling portals
non-Euclidean :(
Thank you! It is not geometry (and I don’t call it geometry once), but portals allow for local violations of Euclidean geometry axioms - for example, there can be more than one straight line
connecting two points. I was originally intending to do more things with portals (like passing through pairs of different-sized portals to adjust the player’s size or maintaining perceived gravity
direction when exiting through a flipped portal), but ran into a lot of issues with getting portals to work well in Godot (see blog post).
As for rotation, hard to tell - ultimately that’s on Godot’s end. So long as you’ve clicked to lock the mouse cursor, should act normal.
Not sure what you mean -- with portals you still get locally Euclidean space -- "locally" usually means "in a sufficiently small neighborhood", and even if you are on a portal, you cannot tell from a
small neighborhood, all the points in that small neighborhood will have only one straight line connecting them through the neighborhood, and all the small triangles will add to 180 degrees.
It does not make much sense to say that a game is non-Euclidean just because it violates some Euclid's axioms -- then you could say that any game taking place in a bounded world is non-Euclidean
because Euclid's axioms say that lines can be extended infinitely, or any grid-based game, or any game with no space at all, or any 3D game because Euclid's axioms are for planar geometry, etc.
The interesting thing is replacing Euclid's parallel axiom while all the remaining ones remain unchanged. (Likewise when you say "irrational number" you still mean a real number, not anything that is
a number and not rational.) Euclid thought that this was impossible (and that the parallel axiom actually follows from the other ones), so did people for 2000 years, and when it was discovered this
was possible, this was called "non-Euclidean geometry". Later extended to other things similar in style, but portals do something totally different.
3 years ago (1 edit) (-2)
A non-Euclidean geometry is just a geometry in which not all Euclidean postulates are honoured. Wikipedia gives "replacing the parallel postulate" as an example, which is defied by this game: we can
draw a straight line, have two other non-parallel lines intersect with it at different points, and yet manage to have them not intersect with each other by guiding one of them through a portal. It
actually more closely resembles hyperbolic geometry under the right circumstances (i.e. at least one portal is present closer to the given line R than to point P).
Where did you get this from? The Wikipedia page you cite clearly disagrees with you:
His influence has led to the current usage of the term "non-Euclidean geometry" to mean either "hyperbolic" or "elliptic" geometry.
It does not say that "replacing the parallel postulate" is an example (it says this is what you do, and there are also kinematic geometries) and also "replacing the parallel postulate" means that you
keep all the other postulates. If other postulates are not honoured either, it is in no way closer to hyperbolic geometry.
And I am still using non-Euclidean geometry wider than the Wikipedia line above, to mean "a geometry which is not Euclidean" like you want, i.e., including three-dimensional geometries like Solv and
Nil. They are geometries i.e. they stretch the space, while portals do not stretch the space, they change the topology, not the geometry. For example, all triangles will still have angles which sum
to 180 degrees, while in hyperbolic geometry, all have less.
Wow this is awesome!! Portals are so tricky, props for getting this to work in a weekend and making it fun at the same time. :) I love games that feel like they take full advantage of exploring 3D
spaces. Looking forward to seeing more if you do continue to work on this idea!
If you have an integrated GPU, pressing F7 to disable shadows can be a good idea.
i dont know if it works but ok
Oh disabling shadows fixed everything for me, wow, thank you.
very cool mechanics. nice game.
I had a fun time playing your game ! I really loved the puzzle halfway in the game in the giant open room where you had to backtrack to see different perspectives from the crystal. Loved it !
the map has a cool fog going on at the bottom and I like the puzzles as well especially how the big space in front of the end was structured.
That was excellent - and thank you for providing a route out of the map at the end so we could run around the outside of the walls to get a good look at the map.
lol I did the same thing!
Wow, nice! I really dig puzzles in impossible maps, and this one is amazing! Are you using the new Godot portals or it's a hand-made implementation? Good job anyway!
Thank you! This is done with Viewports and shader materials (mapping a ViewportTexture across SCREEN_UV). Do you have a link to any information about the new portals? I've run into some issues with
my approach due to difficulties with implementing efficient culling or timely updating viewports.
Nice solution! Really liked it. If you want to check the new portal implementation, it is already merged into 3.4 branch, and the details can be checked in this PR.
This is such a cool game with the use of invisible stuffs and non-Euclidean environment, nice and interesting game you developed I must say!
Btw, can you help me to playtest my game on this jam, and giving some comment or feedback?
Play in the browser option is available, but downloading desktop executables to play is recommended.
If you can, also help me to test the application is working or not, especially the platforms other than Windows. Will be much appreciated if you do!
3 years ago (1 edit) (+1)
Cool puzzle game. I liked the use of portals for some of the game environments, and the use of perspective to see where the invisible doors and platforms were.
I am not sure what "non-Euclidean" should feel like, but the game felt more about using perspective to find invisible paths and doors than about messing with non-Euclidean space.
That was fun! Thanks for sharing
Thank you! “non-Euclidean”-ness comes primarily from portals - like doorways leading the opposite side of the same room.
I suppose it’s slightly less mind-breaking for this game since the possible paths don’t twist too much.
Very nice experiment! I don't think I'd go too far with the invisible platforms idea in a full game, but non-Euclidean mechanics are always cool to mess around with.
Thank you! I was originally planning to focus more on the portals, but I’ve hit some limitations with the approach used (can’t have portals “loop” by looking at themselves). I’ll probably focus more
on the interactive elements if I’ll be revisiting this idea later on (and likely remaking the project from scratch since I wasn’t able to achieve the art style that I wanted for it).
|
{"url":"https://yellowafterlife.itch.io/godot-portal-platformer/comments","timestamp":"2024-11-11T19:56:36Z","content_type":"text/html","content_length":"68126","record_id":"<urn:uuid:6280f782-8a5a-4eb5-875b-0f7d493a8d5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00287.warc.gz"}
|
combinatory logic
A system for reducing the operational notation of logic, mathematics or a functional language to a sequence of modifications to the input data structure. First introduced in the 1920's by
Schoenfinkel. Re-introduced independently by Haskell Curry in the late 1920's (who quickly learned of Schoenfinkel's work after he had the idea). Curry is really responsible for most of the
development, at least up until work with Feys in 1958.
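As a small illustration of reducing operations to a handful of primitives, here are Schoenfinkel's S and K combinators written as curried functions in Python (a sketch for intuition, not part of the original entry). The identity combinator I and the composition combinator B are then definable from S and K alone:

```python
# The two primitive combinators, curried:
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)
K = lambda x: lambda y: x                     # K x y = x

# Identity: I = S K K, since S K K x = K x (K x) = x
I = S(K)(K)

# Composition: B = S (K S) K, since B f g x = f (g x)
B = S(K(S))(K)
```

Every lambda term can be translated into a combination of S and K in this way, which is the sense in which combinatory logic reduces a notation to "a sequence of modifications to the input data structure."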
Last updated: 1995-01-05
|
{"url":"https://foldoc.org/combinatory+logic","timestamp":"2024-11-12T15:08:45Z","content_type":"text/html","content_length":"9127","record_id":"<urn:uuid:dda17b1d-9671-48fc-b11d-4389dc886d90>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00852.warc.gz"}
|
ACM Other Conferences
Dispersion in Unit Disks
We present two new approximation algorithms with (improved) constant ratios for selecting $n$ points in $n$ unit disks such that the minimum pairwise distance among the points is maximized.
(I) A very simple $O(n \log{n})$-time algorithm with ratio $0.5110$ for disjoint unit disks. In combination with an algorithm of Cabello~\cite{Ca07}, it yields a $O(n^2)$-time algorithm with ratio of $0.4487$ for dispersion in $n$ not necessarily disjoint unit disks.

(II) A more sophisticated LP-based algorithm with ratio $0.6495$ for disjoint unit disks that uses a linear number of variables and constraints, and runs in polynomial time. The algorithm introduces a novel technique which combines linear programming and projections for approximating distances.

The previous best approximation ratio for disjoint unit disks was $\frac{1}{2}$. Our results give a partial answer to an open question raised by Cabello~\cite{Ca07}, who asked whether $\frac{1}{2}$ could be improved.
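To make the objective concrete, here is a tiny brute-force sketch of the dispersion problem itself: pick one point per disk so that the minimum pairwise distance is maximized. This is exhaustive search over sampled candidates for a handful of disks, not the paper's approximation algorithms, and the disk centers and sample count are arbitrary.

```python
import itertools
import math

def dispersion_brute_force(centers, radius=1.0, samples=12):
    """Pick one point per disk so the minimum pairwise distance is
    maximized, searching each disk's center plus points on its rim.
    Exponential in the number of disks; for illustration only."""
    candidate_sets = []
    for cx, cy in centers:
        pts = [(cx, cy)]
        for k in range(samples):
            a = 2 * math.pi * k / samples
            pts.append((cx + radius * math.cos(a), cy + radius * math.sin(a)))
        candidate_sets.append(pts)
    best_d, best_pts = -1.0, None
    for combo in itertools.product(*candidate_sets):
        d = min(math.dist(p, q)
                for p, q in itertools.combinations(combo, 2))
        if d > best_d:
            best_d, best_pts = d, combo
    return best_d, best_pts
```

Even on three disks, the optimum pushes each chosen point to the rim, away from the other disks, which is the geometric intuition behind the approximation guarantees above.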
|
{"url":"https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2010.2464/metadata/acm-xml","timestamp":"2024-11-12T17:21:11Z","content_type":"application/xml","content_length":"4752","record_id":"<urn:uuid:edecbd26-cd0c-46a6-bd54-489d5bc22726>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00110.warc.gz"}
|
NOEL - Merry Christmas - Chart
Figuring How Many Chains In Your Starting Chain
Because at times we want a filet item to be a different size than the original graph/chart calls for, follow these instructions and you can end up with any size of finished item that you like!
It's actually very easy to figure out how many chains should be in a starting chain for any filet crochet chart. I've given formulas below to figure starting chains for both a 3 dc mesh and a 4 dc mesh.
First, count the number of squares across the first row that you will be working on the chart.
Charts are usually begun at the bottom right. Many edgings are worked sideways (the short rows) so that the length can be decided as you go along.
Next, decide if you want to work the chart in a 3 dc mesh or a 4 dc mesh. A 3 dc mesh = a mesh containing 3 dc in each mesh (after the first mesh, the last dc of a mesh also counts as the first dc of
the next mesh). A 4 dc mesh = a mesh containing 4 dc in each mesh (after the first mesh, the last dc of a mesh also counts as the first dc of the next mesh).
If working the chart in a 3 dc mesh, multiply the number of squares across on the first row of the chart, times 2, then add 1. That's your starting chain. Add number of chains for turning chain
before starting first row: If the first square on the chart is a solid mesh, then chain 3 (counts as first double crochet of first mesh). If the first square on the chart is an open mesh, then chain
4 (counts as first double crochet and the chain-1 of first open mesh).
If working the chart in a 4 dc mesh, multiply the number of squares across on the first row of the chart, times 3, then add 1. That's your starting chain. Add number of chains for turning chain
before starting first row: If the first square on the chart is a solid mesh, then chain 3 (counts as first double crochet of first mesh). If the first square on the chart is an open mesh, then chain
5 (counts as first double crochet and the chain-2 of first open mesh).
Why is there an add 1 at the end of the starting chain formula: Because (for a 3 dc mesh) after the first mesh, the last dc of a mesh also counts as the first dc of the next mesh, meaning that you
will need 2 dc for each new mesh across the row. This is why you multiply the number of mesh on the first row of the chart times two. But you need 3 dc for the first mesh of that row and after
multiplying the number of mesh across first row times two, there are only 2 dc allotted for the first mesh of the row. That's what the add 1 is for - to bring the number of mesh allotted for the
first mesh of the row up to 3 dc. The same reason and principle for the add 1 applies to a 4 dc mesh starting chain formula.
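The two formulas above (multiply the squares across the first row by 2 or 3, add 1, then add the turning chain) can be expressed in a few lines of code. Here is an illustrative Python sketch; the function name and return format are my own, not part of the pattern:

```python
def filet_starting_chain(squares, mesh_dc, first_square_open):
    """Starting chain for a filet crochet chart.

    squares           -- number of squares across the first row of the chart
    mesh_dc           -- 3 for a 3 dc mesh, 4 for a 4 dc mesh
    first_square_open -- True if the first square on the chart is an open mesh

    Returns (base_chain, turning_chain, total_chain).
    """
    if mesh_dc not in (3, 4):
        raise ValueError("mesh_dc must be 3 or 4")
    # "multiply times 2 (3 dc mesh) or times 3 (4 dc mesh), then add 1"
    base = squares * (mesh_dc - 1) + 1
    # turning chain: 3 for a solid first mesh; 4 (3 dc mesh) or 5 (4 dc mesh) for open
    turning = (mesh_dc + 1) if first_square_open else 3
    return base, turning, base + turning

# A 30-square first row in a 3 dc mesh, starting with a solid mesh:
base, turning, total = filet_starting_chain(30, 3, False)  # base 61, chain 3 to turn
```

The helper simply encodes the written instructions, so it gives the same counts you would get working them out by hand.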
Note: If you ever have a problem with the starting chain of any of my filet patterns, come back to these instructions.
Email me: Joyce
|
{"url":"https://jahodnett.tripod.com/id199.html","timestamp":"2024-11-09T17:36:19Z","content_type":"text/html","content_length":"30110","record_id":"<urn:uuid:b5beb71a-6496-4e92-995c-66088d6373cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00848.warc.gz"}
|
Quartic equation
A quartic equation is the result of setting a quartic function equal to zero. The general form is
a[4]x⁴ + a[3]x³ + a[2]x² + a[1]x + a[0] = 0, with a[4] ≠ 0.
A quartic equation always has 4 solutions (or roots). They may be real or complex numbers, and there may be duplicate (repeated) solutions.
It is the highest degree of polynomial equation for which exact values of the roots can be found, by taking nth roots, and use of the normal algebraic operators.
If a[0]=0, then one of the roots is x=0, and the other roots can be found, by dividing by x, and solving the resulting cubic equation, a[4]x³+a[3]x²+a[2]x+a[1]=0.
Otherwise, divide the equation by a[4] to get a monic equation of the form
x⁴ + ax³ + bx² + cx + d = 0.
Substitute x = t − a/4 to get a depressed quartic (one with no cubic term) of the form
t⁴ + pt² + qt + r = 0.
Then find the roots somehow. (To be written.)
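To make the substitution step concrete, here is an illustrative Python sketch (the function names are my own) that computes the coefficients of the depressed quartic and checks them numerically. It assumes the equation has already been divided by a[4], i.e. it has the monic form x⁴ + ax³ + bx² + cx + d = 0:

```python
def depress_quartic(a, b, c, d):
    """Return (p, q, r) such that substituting x = t - a/4 into
    x^4 + a*x^3 + b*x^2 + c*x + d turns it into t^4 + p*t^2 + q*t + r."""
    p = b - 3 * a * a / 8
    q = c - a * b / 2 + a ** 3 / 8
    r = d - a * c / 4 + a * a * b / 16 - 3 * a ** 4 / 256
    return p, q, r

def quartic(a, b, c, d, x):
    """Evaluate x^4 + a*x^3 + b*x^2 + c*x + d."""
    return x ** 4 + a * x ** 3 + b * x ** 2 + c * x + d

# Example: x^4 + 4x^3 + 6x^2 + 4x + 1 = (x + 1)^4 depresses to t^4 = 0,
# i.e. p = q = r = 0.
p, q, r = depress_quartic(4, 6, 4, 1)
```

Expanding (t − a/4)⁴ + a(t − a/4)³ + b(t − a/4)² + c(t − a/4) + d by hand gives exactly these p, q, r, and the cubic term cancels.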
|
{"url":"http://www.fact-index.com/q/qu/quartic_equation.html","timestamp":"2024-11-07T19:24:23Z","content_type":"text/html","content_length":"4874","record_id":"<urn:uuid:5e5cd57a-9022-405e-b310-fd994a9de2d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00629.warc.gz"}
|
Data Analysis & Statistics Workshop - Lab 1.7 Basic sample descriptors; Tinkering with Plots
In the last 2 labs, we have learned the concepts underlying basic statistical tests, and learned how to perform a few basic statistical procedures: we learned to test whether 2 samples are likely to
be derived from the same true distributions (Kolmogorov-Smirnov test), how to establish confidence intervals around the mean (central limit theorem), and how to examine whether the means of 2
normal-ish distributions are the same or different (T-test of 2 samples).
In today's lab, we will focus more deeply on understanding common descriptors that are used with sampled data. In the second half of the lab, we'll have some fun learning more about Matlab's plotting
and graphics environment.
In Lab 1.5, we saw that the process of statistical inference involved sampling from a "true" distribution that we can never know exactly, and then making inferences about the true population based on
the sample. Here, we will discuss some basic descriptors of distributions, and how they are related to one another.
Measures of variation / deviation
Just about any phenomenon that we would want to study exhibits variation. (If there were no variation, then there would be nothing to study, really.) It's important to recognize that there are many
sources of variation.
Let's imagine a variable like "height of people in the world who are older than 25 years of age". Everyone in the world is not equally tall. Due to genetic and environmental factors both known and
unknown to science, height varies from person to person.
One could imagine documenting this variation in a number of ways, some of which we've already seen. We've looked at percentiles, histograms, and, my favorite, the cumulative histogram, which allows
one to read off the entire structure of a whole distribution by examining all percentiles of the data.
The most commonly reported single measures of variation are the variance and its companion measure, the standard deviation. The variance and standard deviation measure of how each value of a
distribution deviates from the mean.
Recall that the mean (represented by P with a bar over it) is just the numerical average of all values of the population (enumerated by P[1], P[2], P[3], ... P[M], where M is the total number of
members of the population).
The variance is the average squared deviation of each point from the mean, and the standard deviation is just the square root of the variance:

variance = [ (P[1] − P̄)² + (P[2] − P̄)² + ... + (P[M] − P̄)² ] / M,    standard deviation = √(variance)
Note that the variance has the units of the population value squared. So, in our example of heights of the world, the variance has the units of meters^2, whereas the standard deviation has the same
units as the population value (in this case, meters).
You may ask yourself, why is the square of these deviations something useful to compute? Why not calculate the absolute value of the difference between each sample and the mean, or the cube...why do
people do this? There are several reasons, but one is that many variables in nature and in mathematics are Normally distributed, and a Normally-distributed variable (that is, one that follows the
Normal distribution we saw in Lab 1.6) can be described completely by its mean and variance/standard deviation. When we turn to fitting a few labs down the road, we'll see another advantage to using
the squared difference.
Inferring the mean and standard deviation of the "true" distribution from a sample
Suppose we have a sample of real data points that we'll call S, and we'll call the individual sample points S[1], S[2], ... S[n]; then it is entirely possible to calculate the mean and standard deviation from the sample data points. To indicate that the source of these values is different from the "true" population in the equations above, we'll calculate analogous quantities with different symbols.
There is the sample mean S̄ (S with a bar over it), the sample variance (s²), and sample standard deviation (s):

s² = [ (S[1] − S̄)² + (S[2] − S̄)² + ... + (S[n] − S̄)² ] / n,    s = √(s²)
But, most of the time we actually want to infer the mean and standard deviation of the true distribution from which the samples are derived. S̄ is once again the estimate of the mean of the true distribution, but, amazingly, the equation for the estimate of the "true" distribution variance/standard deviation is slightly different:

σ̂² = [ (S[1] − S̄)² + (S[2] − S̄)² + ... + (S[n] − S̄)² ] / (n − 1),    σ̂ = √(σ̂²)
What is the reason for the difference? Why do we divide by N-1 instead of N to estimate the variance of the true distribution/population? The explanation is a little mathy, but if you're interested,
it's right here. The short short version is that there is some uncertainty in the "true" mean that is unaccounted for by dividing by N (that's not really an explanation).
With an estimate of the sample standard deviation in hand, we can estimate the standard error of the mean, which we saw in the last lab allows us to estimate confidence intervals around the sample mean (that is, it answers how much we expect the sample mean to deviate from the population mean):

SE = σ̂ / √n
Recall that the procedure of sampling produces an estimate of the true mean that is normally distributed with standard deviation equal to the standard error of the mean. This means that there is a 68% chance that the true mean is within SE of the sample mean, and a 95% chance that the true mean is within 2*SE of the sample mean.
Here's a table summary of these quantities:
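These summary quantities are easy to compute in code. As an aside (the lab itself uses Matlab), here is an illustrative sketch using Python's standard-library statistics module; the describe function name is my own:

```python
import math
import statistics

def describe(sample):
    """Sample mean, n-1 estimate of the standard deviation,
    standard error of the mean, and an approximate 95% confidence interval."""
    n = len(sample)
    mean = statistics.mean(sample)
    s_hat = statistics.stdev(sample)        # divides by n - 1, like Matlab's std
    se = s_hat / math.sqrt(n)               # standard error of the mean
    ci95 = (mean - 2 * se, mean + 2 * se)   # ~95% confidence interval
    return mean, s_hat, se, ci95

m, s, se, ci = describe([2, 4, 4, 4, 5, 5, 7, 9])
```

Note that statistics.stdev uses the n − 1 denominator (the estimator of the true standard deviation), while statistics.pstdev uses n (the plain sample/population formula), mirroring the distinction drawn above.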
For your study, do you want to report the full distribution or the mean and standard error?
Many papers report only the mean and standard error of their experiments. Let's look at how this is done. Let's use the example chicken weights from animals that were fed normal corn or lysine-enriched corn from the PS1_2.zip file.
chicken_weights = load('chickenweights_control_experimental.txt','-ascii');
chicken_control = chicken_weights(:,1); % grab the first column
chicken_exper = chicken_weights(:,2); % grab the 2nd column
mn1 = mean(chicken_control);
mn2 = mean(chicken_exper);
If we wanted to report this data, we could plot the entire distribution:
hold on;
title('Chicken weights after normal feed (left) or enriched feed (right)');
or we could simply plot the mean and standard error (using the std function in Matlab, which computes the estimate of the true distribution standard deviation, σ̂, as above):
se_control = std(chicken_control)/sqrt(length(chicken_control));
se_enriched = std(chicken_exper)/sqrt(length(chicken_exper));
hold on;
plot([1 1],[mn1-se_control mn1+se_control],'k-');
plot([2 2],[mn2-se_enriched mn2+se_enriched],'k-');
title('Chicken weights after normal feed (left) or enriched feed (right)');
Q1: Which plot do you like better for this data? Which do you think tells you the most about the experiment?
Sources of variation / deviation
Variation in actual experiments has many sources. First, there is the variation in the underlying true distribution itself. Second, we are sampling, so there is some inherent uncertainty in our
knowledge of the true distribution. Third, there may be noise in the measurements that we are able to make due to our instrumentation or other factors.
Suppose the measurements of chicken weights were very noisy, such that the measurements were Normally distributed with a standard deviation of 100 grams (wiggly chickens, for instance). We can
simulate this situation by adding noise to the data above. The function randn generates pseudorandom noise (see help randn).
randn(1,2) % 1x2 set of random values with mean 0 and standard deviation 1
5*randn(1,2) % 1x2 set of random values with mean 0 and standard deviation 5
You can verify the mean and standard deviation of the noise produced by the randn function:
std(5*randn(1000,1)) % standard deviation of 1000 random points
mean(5*randn(1000,1)) % mean of 1000 random points
In this simulation, we'd like to generate a set of random values that is the same size as our experimental data. To do this, we can use the function size (see help size).
Now we can create our simulated data:
chicken_exper2 = chicken_exper + 50*randn(size(chicken_exper));
chicken_control2 = chicken_control + 50*randn(size(chicken_control));
mn1_2 = mean(chicken_control2);
mn2_2 = mean(chicken_exper2);
hold on;
title('Noisy weights after normal feed (left) or enriched feed (right)');
se_control2 = std(chicken_control2)/sqrt(length(chicken_control2));
se_enriched2 = std(chicken_exper2)/sqrt(length(chicken_exper2));
hold on;
plot([1 1],[mn1_2-se_control2 mn1_2+se_control2],'k-');
plot([2 2],[mn2_2-se_enriched2 mn2_2+se_enriched2],'k-');
title('Noisy weights after normal feed (left) or enriched feed (right)');
Q2: Does knowing that the level of pure measurement noise in this new sample is large have an impact on your opinion of which graph is more informative?
Q3: To what can we attribute variation in a sample? Of the things you mention, is it always possible to know how much each one contributes to the variation in the sample?
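The randn experiment above (draw Gaussian noise, then confirm its sample mean and standard deviation) can be mimicked in plain Python. This is an illustrative sketch only; the lab itself uses Matlab's randn:

```python
import random
import statistics

random.seed(0)  # fix the seed so the pseudorandom draws are reproducible

# 10,000 pseudorandom values with mean 0 and standard deviation 5,
# analogous to 5*randn(10000,1) in Matlab.
noise = [random.gauss(0, 5) for _ in range(10_000)]

est_mean = statistics.mean(noise)   # should be close to 0
est_sd = statistics.pstdev(noise)   # should be close to 5
```

With 10,000 draws, the sample mean should land within a few hundredths of 0 (its standard error is 5/√10000 = 0.05) and the sample standard deviation within a few hundredths of 5.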
In the last 2 labs and in the homework, we have created several plots. Matlab offers a lot of flexibility for customizing these plots. In this section, we will explore the data structures that
underlie Matlab plots and show you how to edit their fields from the command line.
Please make sure you have the correct histbins.m file, and then let's generate some data for plotting:
mydata1 = 100*dasw.stats.generate_random_data(100,'normal',1,1);
[N,bin_centers] = dasw.plot.histbins(mydata1,[-500:100:500]);
f = figure;
We can get the properties of the figure we just made using the get command:
You will see a number of property name and value pairs displayed on the screen. I get the following (you can skim these; no need to read them in depth, but do notice there are a lot of properties
that have values):
'Figure' property fields
Alphamap = [ (1 by 64) double array]
CloseRequestFcn = closereq
Color = [0.8 0.8 0.8]
Colormap = [ (64 by 3) double array]
CurrentAxes = [341.002]
CurrentCharacter =
CurrentObject = [341.002]
CurrentPoint = [482 395]
DockControls = on
FileName =
IntegerHandle = on
InvertHardcopy = on
KeyPressFcn =
KeyReleaseFcn =
MenuBar = figure
Name =
NextPlot = add
NumberTitle = on
PaperUnits = inches
PaperOrientation = portrait
PaperPosition = [0.25 2.5 8 6]
PaperPositionMode = manual
PaperSize = [8.5 11]
PaperType = usletter
Pointer = arrow
PointerShapeCData = [ (16 by 16) double array]
PointerShapeHotSpot = [1 1]
Position = [680 494 560 420]
Renderer = painters
RendererMode = auto
Resize = on
ResizeFcn =
SelectionType = normal
ToolBar = auto
Units = pixels
WindowButtonDownFcn =
WindowButtonMotionFcn =
WindowButtonUpFcn =
WindowKeyPressFcn =
WindowKeyReleaseFcn =
WindowScrollWheelFcn =
WindowStyle = normal
XDisplay = /tmp/launch-iH0FBE/org.x:0
XVisual = 0x24 (TrueColor, depth 24, RGB mask 0xff0000 0xff00 0x00ff)
XVisualMode = auto
BeingDeleted = off
ButtonDownFcn =
Children = [341.002]
Clipping = on
CreateFcn =
DeleteFcn =
BusyAction = queue
HandleVisibility = on
HitTest = on
Interruptible = on
Parent = [0]
Selected = off
SelectionHighlight = on
Tag =
Type = figure
UIContextMenu = []
UserData = []
Visible = on
We can modify the parameters of these fields with the set command:
set(f,'Color',[1 0 0]); % changes the background color to red
set(f,'MenuBar','none'); % turns off the menu bar
The variable f is called a handle to the figure. Its value (a whole number for figures) is essentially a common reference number that the user and Matlab can use to refer to the figure. Each graphics object in Matlab, like figures and sets of plotting axes, has a unique handle number.
We can use the function gcf ("get current figure") to return the handle to the frontmost figure. This is a good way to obtain the handle for a figure if you don't know it (or if your program doesn't
know it):
f = gcf
One can access the objects that are part of the figure by examining the figure's children field:
objects = get(f,'children')
In this case, the figure has 1 object, which corresponds to the plotting axes on the figure. We can also access the current plotting axes with the function gca ("get current axes"):
ax = gca
We can look at the properties of axes as follows:
On my system, I see a long list of properties (you can skim them, no need to read them in depth):
'Axes' property fields
ActivePositionProperty = outerposition
ALim = [0 1]
ALimMode = auto
AmbientLightColor = [1 1 1]
Box = on
CameraPosition = [0 20 17.3205]
CameraPositionMode = auto
CameraTarget = [0 20 0]
CameraTargetMode = auto
CameraUpVector = [0 1 0]
CameraUpVectorMode = auto
CameraViewAngle = [6.60861]
CameraViewAngleMode = auto
CLim = [1 2]
CLimMode = auto
Color = [1 1 1]
CurrentPoint = [ (2 by 3) double array]
ColorOrder = [ (7 by 3) double array]
DataAspectRatio = [500 20 1]
DataAspectRatioMode = auto
DrawMode = normal
FontAngle = normal
FontName = Helvetica
FontSize = [10]
FontUnits = points
FontWeight = normal
GridLineStyle = :
Layer = bottom
LineStyleOrder = -
LineWidth = [0.5]
MinorGridLineStyle = :
NextPlot = replace
OuterPosition = [0 0 1 1]
PlotBoxAspectRatio = [1 1 1]
PlotBoxAspectRatioMode = auto
Projection = orthographic
Position = [0.13 0.11 0.775 0.815]
TickLength = [0.01 0.025]
TickDir = in
TickDirMode = auto
TightInset = [0.0767857 0.0904762 0.00357143 0.0190476]
Title = [347.002]
Units = normalized
View = [0 90]
XColor = [0 0 0]
XDir = normal
XGrid = off
XLabel = [345.002]
XAxisLocation = bottom
XLim = [-500 500]
XLimMode = auto
XMinorGrid = off
XMinorTick = off
XScale = linear
XTick = [-450 -350 -250 -150 -50 50 150 250 350 450]
XTickLabel =
XTickLabelMode = auto
XTickMode = manual
YColor = [0 0 0]
YDir = normal
YGrid = off
YLabel = [346.002]
YAxisLocation = left
YLim = [0 40]
YLimMode = auto
YMinorGrid = off
YMinorTick = off
YScale = linear
YTick = [0 5 10 15 20 25 30 35 40]
YTickLabel =
YTickLabelMode = auto
YTickMode = auto
ZColor = [0 0 0]
ZDir = normal
ZGrid = off
ZLabel = [348.002]
ZLim = [-1 1]
ZLimMode = auto
ZMinorGrid = off
ZMinorTick = off
ZScale = linear
ZTick = [-1 0 1]
ZTickLabel =
ZTickLabelMode = auto
ZTickMode = auto
BeingDeleted = off
ButtonDownFcn =
Children = [342.005]
Clipping = on
CreateFcn =
DeleteFcn =
BusyAction = queue
HandleVisibility = on
HitTest = on
Interruptible = on
Parent = [2]
Selected = off
SelectionHighlight = on
Tag =
Type = axes
UIContextMenu = []
UserData = []
Visible = on
We can adjust a number of properties of the axes using the handle.
axis([-500 500 0 50]);
set(ax,'YTick', [0 50]);
Q4: What happened to the plot after you ran these 4 statements?
One sad fact of Matlab is that these properties and what they control are not very well documented. Most of what I have learned has come from trying different things and seeing what happens.
Recently, Matlab has added a feature to set that gives a little feedback on what values some properties can take. If you call set with no property value, it returns a list of possible values (but
only for properties that can take discrete values; some properties take continuous values, and you're out of luck). For example:
Q5: What values can the axes property YDir take?
Our set of axes also has children; the children correspond to the items that are plotted on the axes. In this case, we currently only have the bar plot:
ch = get(ax,'children')
The properties of a bar plot are as follows on my system:
'Bar plot' property fields
Annotation: [1x1 hg.Annotation]
DisplayName: ''
HitTestArea: 'off'
BeingDeleted: 'off'
ButtonDownFcn: []
Children: 343.0017
Clipping: 'on'
CreateFcn: []
DeleteFcn: []
BusyAction: 'queue'
HandleVisibility: 'on'
HitTest: 'on'
Interruptible: 'on'
Parent: 341.0017
SelectionHighlight: 'on'
Tag: ''
Type: 'hggroup'
UIContextMenu: []
UserData: []
Selected: 'off'
Visible: 'on'
XData: [-450 -350 -250 -150 -50 50 150 250 350 450]
XDataMode: 'manual'
XDataSource: ''
YData: [0 0 0 2 13 40 31 9 4 1]
YDataSource: ''
BaseValue: 0
BaseLine: 344.0017
BarLayout: 'grouped'
BarWidth: 0.8000
Horizontal: 'off'
LineWidth: 0.5000
EdgeColor: [0 0 0]
FaceColor: 'flat'
LineStyle: '-'
ShowBaseLine: 'on'
We can play with these fields as well:
mybar = ch; % so we don't have to keep referring to ch
set(mybar,'barwidth',1); % what happens?
set(mybar,'facecolor',[1 0 0]); % turn it red
set(mybar,'facecolor',[0 0 0]); % paint it black
The general idea of examining a graphic object's handle for its field values, and then examining its children for their field values, is very helpful for creating customized plots that highlight
exactly what you want to show.
One can also delete handles using the delete function (but be careful; if you give delete a string input like 'myfile', it will assume you are passing a filename you want to delete):
Matlab also has routines that help you to arrange more than one set of axes on a figure at a time. The easiest way to do this is with the subplot function. For example:
mydata1 = 100*dasw.stats.generate_random_data(100,'normal',1,1);
mydata2 = 100*dasw.stats.generate_random_data(100,'normal',1,1);
[X1,Y1] = dasw.plot.cumhist(mydata1);
[X2,Y2] = dasw.plot.cumhist(mydata2);
f = figure;
[N1,bin_centers] = dasw.plot.histbins(mydata1,[-600:100:600]);
[N2,bin_centers] = dasw.plot.histbins(mydata2,[-600:100:600]);
axis([-600 600 0 100]);
ylabel('Fraction of data 1');
axis([-600 600 0 100]);
ylabel('Fraction of data 2');
ylabel('Number of data 1 points');
ylabel('Number of data 2 points');
subplot takes 3 input arguments: the number of rows of axes you want, the number of columns of axes you want, and the axes number it should create (numbered left to right, top to bottom; see help
subplot). In this case, we defined a 2 row by 2 column matrix of axes, and plotted a graph in each one.
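The numbering convention subplot uses (left to right, top to bottom, 1-based) is easy to get wrong. This small Python helper, my own illustration rather than anything from Matlab, converts a (row, column) position into the index you would pass as subplot's third argument:

```python
def subplot_index(nrows, ncols, row, col):
    """Index of the axes at (row, col) in an nrows-by-ncols grid,
    numbered left to right, top to bottom, starting at 1 --
    the same convention as Matlab's subplot(nrows, ncols, index)."""
    if not (1 <= row <= nrows and 1 <= col <= ncols):
        raise ValueError("position outside the grid")
    return (row - 1) * ncols + col

# In the 2x2 grid above, the lower-left axes are number 3:
idx = subplot_index(2, 2, 2, 1)  # 3
```

So for the 2x2 layout in this lab, the four panels are numbered 1 (top left), 2 (top right), 3 (bottom left), 4 (bottom right).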
Q6: How many children do you think the figure has now?
ch = get(f,'children')
Q7: Are they the axes that were created by the subplot calls?
Matlab functions and operators
Copyright 2011-2012 Stephen D. Van Hooser, all rights reserved.
|
{"url":"https://dataclass.vhlab.org/labs/lab-1-7-basic-sample-descriptors-tinkering-with-plots","timestamp":"2024-11-02T21:16:12Z","content_type":"text/html","content_length":"355828","record_id":"<urn:uuid:3847e95e-82c7-4c7c-9c4b-8e7b92950589>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00273.warc.gz"}
|
Numerical Modelling of Soot Formation in Turbulent Non-Premixed Flames
This thesis describes an extended flamelet approach for modelling of soot formation in turbulent hydrocarbons diffusion flames. Detailed chemical kinetic mechanisms are used to compute species
concentrations, temperature, density and soot formation rates as functions of flamelet coordinates in the space of mixture fraction and scalar dissipation rate. An ensemble average based on a
presumed probability density function of the mixture fraction and scalar dissipation rate provides the mean density, species concentrations, mixture enthalpy, temperature and soot formation
A semi-empirical two-equation model of finite rate soot formation is applied to methane/air turbulent diffusion flames at different pressures and to propane/air turbulent flames with different
air temperatures. Soot number density and soot volume fraction are modelled by transport equations with empirical expressions for source terms, which represent soot formation and oxidation rates.
It is shown that the model gives reasonable results for the flames at atmospheric pressure. It, however, over-predicts the soot yield, if the model parameters calibrated for the atmospheric
pressure flames are used. It is shown that a decrease of the surface growth rate constant in proportion to 1/p leads to good agreement with the experiment.
Models of soot oxidation due to different chemical species, such as O2, H2O, CO2, OH, O and H, are evaluated at the pressure range from 1 to 3 atm. It is found that oxidation of soot by hydroxyl
radicals, OH, is the dominating factor in soot burnout.
The ability of an extended flamelet approach, with semi-empirical modelling of soot formation, to reproduce the effect of residence time in the process of soot formation is examined. A decrease
in residence time is achieved by controlling fuel injection diameter while fuel mass flow rate is kept the same. It is shown that the semi-empirical soot model is able to predict a decrease in
soot volume fraction when there is a decrease in the residence time.
The effect of air preheat on soot formation is investigated numerically for propane/air diffusion flames at two incoming air temperatures Ta=323 K and Ta=773 K. The air preheat leads to an
increase in the flame temperature and in the concentration of soot precursor species, which leads to an increase in soot concentration in the flame. However, the increase in temperature in the
post flame zone leads to an increase in the soot oxidation rate and the amount of soot in the exhaust gases is drastically reduced. To account for the temperature effect on the soot formation
modifications to an existing semi-empirical model are proposed, which take into account bell-shape soot dependence on temperature.
A presumed probability density function and flamelet library approach is developed further in order to incorporate the influence of turbulence on soot formation. The numerical calculations are
compared with experimental measurements and previous numerical simulations with a simple model for calculating mean source terms.
Kraft, Marcus, Ph.D., Dept. of Chemical Engineering, University of Cambridge, UK.
publishing date
publication status
Motorer, Motors and propulsion systems, Teknik, Technological sciences, oxidation of soot, soot, pollutant formation, flamelet approach, Combustion, turbulence, framdrivningssystem, Thermal engineering, applied thermodynamics, Termisk teknik, termodynamik
220 pages
Fluid Mechanics
defense location
defense date
2000-12-18 10:15:00
LU publication?
27e9e73f-8430-4bae-a283-f8c57ca7f673 (old id 41165)
date added to LUP
2016-04-01 15:55:27
date last changed
2018-11-21 20:37:23
author = {{Roditcheva, Olga}},
isbn = {{91-628-4583-7}},
keywords = {{Motorer; Motors and propulsion systems; Teknik; Technological sciences; oxidation of soot.; soot; polutant formation; flamelet approach; Combustion; turbulence; framdrivningssystem; Thermal engineering; applied thermodynamics; Termisk teknik; termodynamik}},
language = {{eng}},
publisher = {{Fluid Mechanics}},
school = {{Lund University}},
title = {{Numerical Modelling of Soot Formation in Turbulent Non-Premixed Flames}},
year = {{2000}},
|
{"url":"https://lup.lub.lu.se/search/publication/41165","timestamp":"2024-11-14T15:48:09Z","content_type":"text/html","content_length":"47452","record_id":"<urn:uuid:8c5585fd-3d55-4b50-816c-f86071f8643b>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00109.warc.gz"}
|
How Many Months is 90 Days? - Measuring Expert
There are 3 months in 90 days.
90 days is approximately equal to 3 months. This is because a month averages about 30 days, and 90 divided by 30 equals 3. Therefore, if something is quoted as taking 90 days, you can think of it as roughly 3 months.
How Long is 90 Days in Months And Weeks
If you’re wondering how long 90 days is in months and weeks, the answer is roughly 3 months, or 12 weeks and 6 days. So if someone were to ask how long 90 days is in weeks, you could say approximately 13 weeks, or exactly 12 weeks and 6 days. And if you wanted to get even more specific, you could convert 90 days into hours, which would be 2,160 hours (or 129,600 minutes). But we’ll leave that conversion up to you!
How Many Weeks is 90 Days
If you’re wondering how many weeks is 90 days, the answer is approximately 12.9 weeks, or 12 weeks and 6 days. This means that if you became pregnant 90 days ago, you would be considered about three months pregnant. Of course, every pregnancy is different, and some women may show earlier signs of pregnancy than others.
If you think you might be pregnant, the best thing to do is to take a pregnancy test and see your doctor for confirmation.
How Many Months is 90 Days in Jail
If you are sentenced to 90 days in jail, that means you will serve three months behind bars. The actual length of your sentence may be slightly less than 90 days due to time served for the offense,
good behavior credits, or other factors. But in general, a sentence of 90 days in jail is equivalent to serving three months in jail.
How Many Days is 90 Days
If you’re wondering how many days is 90 days, the answer is about 3 months. This is based on the assumption that there are 30 days in a month. However, this isn’t always the case since some months
have 31 days.
Nevertheless, if you average it out, 90 days is about 3 months. So why is this important? Well, sometimes people need to know how long something will take in relation to other things.
For example, if someone said they needed a break from dating for 3 months, you could say that’s about 90 days. Or if you’re trying to quit smoking and someone told you it would take them 3 months to
wean themselves off cigarettes, again, that’s about 90 days. Knowing how many days are in certain time periods can be helpful in numerous situations so hopefully this article helped clear things up
for you!
Is 90 Days 3 Months
It’s a common question, and one that doesn’t have a straightforward answer. The simple answer is that 90 days is three months. However, there are some exceptions and nuances to this answer.
First, let’s look at the definition of a month. A month is typically defined as a unit of time based on the lunar or solar cycle, which is about 28-31 days long. In other words, a month isn’t
necessarily 4 weeks long (though it can be).
This means that when you’re talking about months, you can’t just divide by four and call it good – you need to take into account the number of days in each particular month. For example, February is shorter than most months, with only 28 days. So if you were to say “I’ll be gone for three months,” and your departure date was February 1st, your return date would be May 1st – yet only 89 days would have passed (in a non-leap year), one short of 90.
Likewise, if your departure date was March 31st, three calendar months later lands on June 30th – and this time 91 days would have passed, one more than 90. Of course, these examples assume that you’re starting and ending on convenient boundaries. If your travel dates are different – say, March 15th to June 15th – the same mismatch appears: that trip spans 3 calendar months but 92 days.
So which should you count: 90 days or three calendar months? The bottom line is that there’s no definitive answer to this question.
It depends on how you define a “month,” as well as the specific dates involved in your travel plans.
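To see the gap between 90 days and three calendar months concretely, Python's standard datetime module can do the date arithmetic (the 2023 dates here are just for illustration):

```python
from datetime import date, timedelta

start = date(2023, 2, 1)
after_90_days = start + timedelta(days=90)   # date(2023, 5, 2)
three_calendar_months = date(2023, 5, 1)     # Feb 1 -> May 1

# Three calendar months from Feb 1 is only 89 days, one short of 90.
print(after_90_days, (three_calendar_months - start).days)
```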
Does 90 Days Mean 3 Months?
Roughly, yes. Using the common 30-day approximation, 90 days works out to exactly 3 months – though because calendar months run 28 to 31 days, three actual calendar months can span anywhere from 89 to 92 days.
Is 90 Days Two Months?
There is some confusion over whether 90 days is two months. In general, months are not equal to a specific number of days. For example, February has 28 or 29 days, while the other months have 30 or 31 days.
However, when we’re talking about a timeframe that’s less than a year, it’s more common to use months as units. So in that sense, you could say that 90 days is three months. However, it’s worth
noting that this isn’t an exact science.
For example, if you’re talking about someone who’s pregnant, they would typically say they’re in their second month after 60 days. So it really depends on how you want to define things. Ultimately,
whether you call 90 days two months or three is up to you.
There’s no right or wrong answer here – it’s just a matter of preference and context.
How Long are 90 Days?
There are roughly 90 days in a season – astronomically, each season runs about 89 to 93 days. Summer begins on June 21 and ends on September 22. Fall begins on September 23 and ends on December 20.
Winter begins on December 21 and ends on March 19. Spring begins on March 20 and ends on June 20.
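Those season boundaries can be checked with the same kind of date arithmetic (2023 dates, for illustration):

```python
from datetime import date

# Days between the start and end of two astronomical seasons.
summer_days = (date(2023, 9, 22) - date(2023, 6, 21)).days   # 93
fall_days = (date(2023, 12, 20) - date(2023, 9, 23)).days    # 88

print(summer_days, fall_days)  # close to, but not exactly, 90
```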
What is 90 Days Mean?
The term “90 days” is often used to refer to the length of time that a person has to complete a task or goal. For example, a company may give its employees 90 days to find a new job before they are
laid off.
The blog post above discusses how many months are in 90 days. The author argues that, treating each month as thirty days, there are three months in 90 days.
The author goes on to explain how this affects the calendar and how we use it to measure time.
Rakib Sarwar is a seasoned professional blogger, writer, and digital marketer with over 12 years of experience in freelance writing and niche website development on Upwork. In addition to his
expertise in content creation and online marketing, Rakib is a registered pharmacist. Currently, he works in the IT Division of Sonali Bank PLC, where he combines his diverse skill set to excel in
his career.
An Overview of Tactics
In this chapter, we quickly introduce several of the main concepts underlying Meta-F* and its use in writing tactics for proof automation. The goal is to get you quickly up to speed on basic uses of
tactics. Subsequent chapters will revisit the concepts covered here in more detail, introduce more advanced aspects of Meta-F*, and show them at use in several case studies.
Decorating assertions with tactics
As you know already, F* verifies programs by computing verification conditions (VCs) and calling an SMT solver (Z3) to prove them. Most simple proof obligations are handled completely automatically
by Z3, and for more complex statements we can help the solver find a proof via lemma calls and intermediate assertions. Even when using lemma calls and assertions, the VC for a definition is sent to
Z3 in one single piece (though SMT queries can be split via an option). This “monolithic” style of proof can rapidly become unwieldy, particularly when the solver is being pushed to its limits.
The first ability Meta-F* provides is to attach specific tactics to assertions. These tactics operate on the “goal” that we want to prove, and can “massage” the assertion by simplifying it,
splitting it into several sub-goals, tweaking particular SMT options, etc.
For instance, let us take the following example, where we want to guarantee that pow2 x is less than one million given that x is at most 19. One way of going about this proof is by noting that
pow2 is an increasing function, and that pow2 19 is less than one million, so we try to write something like this:
let pow2_bound_19 (x:nat{x <= 19}) : Lemma (pow2 x < 1000000) =
assert (forall (x y : nat). x <= y ==> pow2 x <= pow2 y);
assert (pow2 19 == 524288);
assert (pow2 x < 1000000);
Sadly, this doesn’t work. First of all, Z3 cannot automatically prove that pow2 is increasing, but that is to be expected. We could prove this by a straightforward induction. However, we only need
this fact for x and 19, so we can simply call FStar.Math.Lemmas.pow2_le_compat from the library:
let pow2_bound_19' (x:nat{x <= 19}) : Lemma (pow2 x < 1000000) =
FStar.Math.Lemmas.pow2_le_compat 19 x;
assert (pow2 19 == 524288);
assert (pow2 x < 1000000);
Now, the second assertion fails. Z3 will not, with the default fuel limits, unfold pow2 enough times to compute pow2 19 precisely. (You can read more about how F* uses “fuel” to control the SMT
solver’s ability to unfold recursive definitions.) Here we will use our first call into Meta-F*: via the by keyword, we can attach a tactic to an assertion. In this case, we’ll ask Meta-F* to compute
() over the goal, simplifying as much as it can via F*’s normalizer, like this:
let pow2_bound_19'' (x:nat{x <= 19}) : Lemma (pow2 x < 1000000) =
FStar.Math.Lemmas.pow2_le_compat 19 x;
assert (pow2 19 == 524288) by compute ();
assert (pow2 x < 1000000);
Now the lemma verifies! Meta-F* reduced the proof obligation into a trivial equality. Crucially, however, the pow2 19 == 524288 shape is kept as-is in the postcondition of the assertion, so we can
make use of it! If we were just to rewrite the assertion into 524288 == 524288 that would not be useful at all.
How can we know what Meta-F* is doing? We can use the dump tactic to print the state of the proof after the call to compute().
let pow2_bound_19''' (x:nat{x <= 19}) : Lemma (pow2 x < 1000000) =
FStar.Math.Lemmas.pow2_le_compat 19 x;
assert (pow2 19 == 524288) by (compute (); dump "after compute");
assert (pow2 x < 1000000);
With this version, you should see something like:
Goal 1/1
x: x: nat{x < 20}
p: pure_post unit
uu___: forall (pure_result: unit). pow2 x < 1000000 ==> p pure_result
pure_result: unit
uu___'0: pow2 x <= pow2 19
squash (524288 == 524288)
(*?u144*) _
as output from F* (or in the goals buffer if you are using emacs with fstar-mode.el). The print primitive can also be useful.
A “goal” is some proof obligation that is yet to be solved. Meta-F* allows you to capture goals (e.g. via assert..by), modify them (such as with compute), and even to completely solve them. In this
case, we can solve the goal (without Z3) by calling trivial(), a helper tactic that discharges trivial goals (such as trivial equalities).
let pow2_bound_19'''' (x:nat{x <= 19}) : Lemma (pow2 x < 1000000) =
FStar.Math.Lemmas.pow2_le_compat 19 x;
assert (pow2 19 == 524288) by (
compute ();
trivial ();
qed ());
assert (pow2 x < 1000000);
If you dump the state just after the trivial() call, you should see no more goals remain (this is what qed() checks).
Meta-F* does not yet allow a fully interactive style of proof, and hence we need to re-check the entire proof after every edit. We hope to improve this soon.
There is still the “rest” of the proof, namely that pow2 x < 1000000 given the hypothesis and the fact that the assertion holds. We call this the skeleton of the proof, and it is (by default) not handled
by Meta-F*. In general, we only use tactics on those assertions that are particularly hard for the SMT solver, but leave all the rest to it.
The Tac effect
Although we have seen a bit about monads and computational effects in a previous chapter, we have yet to fully describe F*’s effect system. So, some of what follows may be a bit confusing. However,
you don’t need to fully understand how the Tac effect is implemented to use tactics. Feel free to skip ahead, if this section doesn’t make much sense to you.
What, concretely, are tactics? So far we’ve written a few simple ones, without too much attention to their structure.
Tactics and metaprograms in F* are really just F* terms, but in a particular effect, namely Tac. To construct interesting metaprograms, we have to use the set of primitives provided by Meta-F*. Their
full list is in the FStar.Tactics.Builtins module. So far, we have actually not used any primitive directly, but only derived metaprograms present in the standard library.
Internally, Tac is implemented via a combination of 1) a state monad, over a proofstate, 2) exceptions and 3) divergence or non-termination. The state monad is used to implicitly carry the
proofstate, without us manually having to handle all goals explicitly. Exceptions are a useful way of doing error handling. Any declared exception can be raise’d within a metaprogram, and the
try..with construct works exactly as for normal programs. There are also fail, catch and recover primitives.
Metaprograms cannot be run directly. This is needed to retain the soundness of pure computations, in the same way that stateful and exception-raising computations are isolated from the Pure fragment
(and from each other). Metaprograms can only be used where F* expects them, such as in an assert..by construct. Here, F* will run the metaprogram on an initial proofstate consisting (usually) of a
single goal, and allow the metaprogram to modify it.
To guarantee soundness, i.e., that metaprograms do not prove false things, all of the primitives are designed to perform small and correct modifications of the goals. Any metaprogram constructed from
them cannot do anything to the proofstate (which is abstract) except modifying it via the primitives.
Having divergence as part of the Tac effect may seem a bit odd, since allowing for diverging terms usually implies that one can form a proof of false, via a non-well-founded recursion. However, we
should note that this possible divergence happens at the meta level. If we call a divergent tactic, F* will loop forever waiting for it to finish, never actually accepting the assertion being proven.
As you know, F* already has exceptions and divergence. All Dv and Ex functions can readily be used in Meta-F* metaprograms, as well as all Tot and Pure functions. For instance, you can use all of the
FStar.List.Tot module if your metaprogram uses lists.
At its core, a Meta-F* tactic manipulates a proofstate, which is essentially a set of goals. Tactic primitives usually work on the goals, for example by simplifying (like compute()) or by breaking
them down into smaller sub-goals.
When proving assertions, all of our goals will be of the shape squash phi, where phi is some logical formula we must prove. One way to break down a goal into subparts is by using the mapply tactic,
which attempts to prove the goal by instantiating the given lemma or function, perhaps adding subgoals for the hypothesis and arguments of the lemma. This “working backwards” style is very common in
tactics frameworks.
For instance, we could have proved the assertion that pow2 x <= pow2 19 in the following way:
assert (pow2 x <= pow2 19) by (mapply (`FStar.Math.Lemmas.pow2_le_compat));
This reduces the proof of pow2 x <= pow2 19 to x <= 19 (the precondition of the lemma), which is trivially provable by Z3 in this context. Note that we do not have to provide the arguments to the
lemma: they are inferred by F* through unification. In a nutshell, this means F* finds there is an obvious instantiation of the arguments to make the postcondition of the lemma and the current
assertion coincide. When some argument is not found via unification, Meta-F* will present a new goal for it.
This style of proof is more surgical than the one above, since the proof that pow2 x <= pow2 19 does not “leak” into the rest of the function. If the proof of this assertion required several
auxiliary lemmas, or a tweak to the solver’s options, etc, this kind of style can pay off in robustness.
Most tactics works on the current goal, which is the first one in the proofstate. When a tactic reduces a goal g into g1,...,gn, the new g1,..,gn will (usually) be added to the beginning of the list
of goals.
In the following simplified example, we are looking to prove s from p given some lemmas. The first thing we do is apply the qr_s lemma, which gives us two subgoals, for q and r respectively. We then
need to proceed to solve the first goal for q. In order to isolate the proofs of both goals, we can focus on the current goal, making all others temporarily invisible. To prove q, we then just use the
p_q lemma and obtain a subgoal for p. This one we will just leave to the SMT solver, hence we call smt() to move it to the list of SMT goals. We prove r similarly, using p_r.
assume val p : prop
assume val q : prop
assume val r : prop
assume val s : prop
assume val p_q : unit -> Lemma (requires p) (ensures q)
assume val p_r : squash p -> Lemma r
assume val qr_s : unit -> Lemma (q ==> r ==> s)
let test () : Lemma (requires p) (ensures s) =
  assert s by (
    mapply (`qr_s);
    focus (fun () ->
      mapply (`p_q);
      smt());
    focus (fun () ->
      mapply (`p_r);
      smt()))
Once this tactic runs, we are left with SMT goals to prove p, which Z3 discharges immediately.
Note that mapply works with lemmas that ensure an implication, or that have a precondition (requires/ensures), and even those that take a squashed proof as argument. Internally, mapply is implemented
via the apply_lemma and apply primitives, but ideally you should not need to use them directly.
Note, also, that the proofs of each part are completely isolated from each other. It is also possible to prove this lemma by calling the sublemmas directly, and/or adding SMT patterns. While
that style of proof works, it can quickly become unwieldy.
In the last few examples, you might have noted the backticks, such as in (`FStar.Math.Lemmas.pow2_le_compat). This is a quotation: it represents the syntax for this lemma instead of the lemma itself.
It is called a quotation since the idea is analogous to the word “sun” being syntax representing the sun.
A quotation always has type term, an abstract type representing the AST of F*.
Meta-F* also provides antiquotations, which are a convenient way of modifying an existing term. For instance, if t is a term, we can write `(1 + `#t) to form the syntax of “adding 1” to t. The part
inside the antiquotation (`#) can be anything of type term.
Many metaprogramming primitives, however, do take a term as an argument to use it in proof, like apply_lemma does. In this case, the primitives will typecheck the term in order to use it in proofs
(to make sure that the syntax actually corresponds to a meaningful well-typed F* term), though other primitives, such as term_to_string, won’t typecheck anything.
We will see ahead that quotations are just a convenient way of constructing syntax, instead of doing it step by step via pack.
Basic logic
Meta-F* provides some predefined tactics to handle “logical” goals.
For instance, to prove an implication p ==> q, we can “introduce” the hypothesis via implies_intro to obtain instead a goal for q in a context that assumes p.
For experts in Coq and other provers, this tactic is simply called intro and creates a lambda abstraction. In F* this is slightly more contrived due to squashed types, hence the need for an
implies_intro different from the intro, explained ahead, that introduces a binder.
Other basic logical tactics include:
□ forall_intro: for a goal forall x. p, introduce a fresh x into the context and present a goal for p.
□ l_intros: introduce both implications and foralls as much as possible.
□ split: split a conjunction (p /\ q) into two goals
□ left/right: prove a disjunction p \/ q by proving p or q
□ assumption: prove the goal from a hypothesis in the context.
□ pose_lemma: given a term t representing a lemma call, add its postcondition to the context. If the lemma has a precondition, it is presented as a separate goal.
See the FStar.Tactics.Logic module for more.
Normalizing and unfolding
We have previously seen compute(), which blasts a goal with F*’s normalizer to reduce it into a normal form. We sometimes need a bit more control than that, and hence there are several tactics to
normalize goals in different ways. Most of them are implemented via a few configurable primitives (you can look up their definitions in the standard library):
□ compute(): calls the normalizer with almost all steps enabled
□ simpl(): simplifies logical operations (e.g. reduces p /\ True to p).
□ whnf() (short for “weak head normal form”): reduces the goal until its “head” is evident.
□ unfold_def `t: unfolds the definition of the name t in the goal, fully normalizing its body.
□ trivial(): if the goal is trivial after normalization and simplification, solve it.
The norm primitive provides fine-grained control. Its type is list norm_step -> Tac unit. The full list of norm_steps can be found in the FStar.Pervasives module, and it is the same one available
for the norm marker in Pervasives (beware of the name clash between Tactics.norm and Pervasives.norm!).
Inspecting and building syntax
As part of automating proofs, we often need to inspect the syntax of the goal and the hypotheses in the context to decide what to do. For instance, instead of blindly trying to apply the split tactic
(and recovering if it fails), we could instead look at the shape of the goal and apply split only if the goal has the shape p1 /\ p2.
Note: inspecting syntax is, perhaps obviously, not something we can just do everywhere. If a function was allowed to inspect the syntax of its argument, it could behave differently on 1+2 and 3,
which is bad, since 1+2 == 3 in F*, and functions are expected to map equal arguments to the same result. So, for the most part, we cannot simply turn a value of type a into its syntax. Hence,
quotations are static, they simply represent the syntax of a term and one cannot turn values into terms. There is a more powerful mechanism of dynamic quotations that will be explained later, but
suffice it to say for now that this can only be done in the Tac effect.
As an example, the cur_goal() tactic will return a value of type typ (an alias for term indicating that the term is really the representation of an F* type) representing the syntax of the current goal.
The term type is abstract: it has no observable structure itself. Think of it as an opaque “box” containing a term inside. A priori, all that can be done with a term is pass it to primitives that
expect one, such as tc to type-check it or norm_term to normalize it. But none of those give us full, programmatic access to the structure of the term.
That’s where the term_view comes in: following a classic idea introduced by Phil Wadler, there is a function called inspect that turns a term into a term_view. The term_view type resembles an AST, but
crucially it is not recursive: its subterms have type term rather than term_view.
type term_view =
| Tv_FVar : v:fv -> term_view
| Tv_App : hd:term -> a:argv -> term_view
| Tv_Abs : bv:binder -> body:term -> term_view
| Tv_Arrow : bv:binder -> c:comp -> term_view
The inspect primitive “peels away” one level of the abstraction layer, giving access to the top-level shape of the term.
The Tv_FVar node above represents (an occurrence of) a global name. The fv type is also abstract, and can be viewed as a name (which is just list string) via inspect_fv.
For instance, if we were to inspect `qr_s (which we used above) we would obtain a Tv_FVar v, where inspect_fv v is something like ["Path"; "To"; "Module"; "qr_s"], that is, an “exploded”
representation of the fully-qualified name Path.To.Module.qr_s.
Every syntactic construct (terms, free variables, bound variables, binders, computation types, etc) is modeled abstractly like term and fv, and has a corresponding inspection function. A list can be
found in FStar.Reflection.Builtins.
If the inspected term is an application, inspect will return a Tv_App f a node. Here f is a term, so if we want to know its structure we must recursively call inspect on it. The a part is an argument,
consisting of a term and an argument qualifier (aqualv). The qualifier specifies if the application is implicit or explicit.
Of course, in the case of a nested application such as f x y, this is nested as (f x) y, so inspecting it would return a Tv_App node containing f x and y (with a Q_Explicit qualifier). There are some
helper functions defined to make inspecting applications easier, like collect_app, which decompose a term into its “head” and all of the arguments the head is applied to.
Now, knowing this, we would then like a function to check if the goal is a conjunction. Naively, we need to inspect the goal to check that it is of the shape squash ((/\) a1 a2), that is, an
application with two arguments where the head is the symbol for a conjunction, i.e. (/\). This can already be done with the term_view, but is quite inconvenient due to there being too much
information in it.
Meta-F* therefore provides another type, formula, to represent logical formulas more directly. Hence it suffices for us to call term_as_formula and match on the result, like so:
(* Check if a given term is a conjunction, via term_as_formula. *)
let isconj_t (t:term) : Tac bool =
match term_as_formula t with
| And _ _ -> true
| _ -> false
(* Check if the goal is a conjunction. *)
let isconj () : Tac bool = isconj_t (cur_goal ())
The term_as_formula function, and all others that work on syntax, are defined in “userspace” (that is, as library tactics/metaprograms) by using inspect.
type formula =
| True_ : formula
| False_ : formula
| And : term -> term -> formula
| Or : term -> term -> formula
| Not : term -> formula
| Implies: term -> term -> formula
| Forall : bv -> term -> formula
For experts: F* terms are (internally) represented with a locally-nameless representation, meaning that variables do not have a name under binders, but a de Bruijn index instead. While this has many
advantages, it is likely to be counterproductive when doing tactics and metaprogramming, hence inspect opens variables when it traverses a binder, transforming the term into a fully-named
representation. This is why inspect is effectful: it requires freshness to avoid name clashes. If you prefer to work with a locally-nameless representation, and avoid the effect label, you can use
inspect_ln instead (which will return Tv_BVar nodes instead of Tv_Var ones).
Dually, a term_view can be transformed into a term via the pack primitive, in order to build the syntax of any term. However, it is usually more comfortable to use antiquotations (see above) for
building terms.
Usual gotchas
• The smt tactic does not immediately call the SMT solver. It merely places the current goal into the “SMT Goal” list, all of which are sent to the solver when the tactic invocation finishes. If
any of these fail, there is currently no way to “try again”.
• If a tactic is natively compiled and loaded as a plugin, editing its source file may not have any effect (it depends on the build system). You should recompile the tactic, delete its object
file, or use the F* option --no_plugins to temporarily run it via the interpreter.
• When proving a lemma, we cannot just use _ by ... since the expected type is just unit. Workaround: assert the postcondition again, or start without any binder.
Coming soon
• Metaprogramming
• Meta arguments and typeclasses
• Plugins (efficient tactics and metaprograms, --codegen Plugin and --load)
• Tweaking the SMT options
• Automated coercions inspect/pack
• e <: C by ...
• Tactics can be used as steps of calc proofs.
• Solving implicits (Steel)
tf.py_function | TensorFlow v2.15.0.post1
Wraps a python function into a TensorFlow op that executes it eagerly.
tf.py_function(func=None, inp=None, Tout=None, name=None)
Using tf.py_function inside a tf.function allows you to run a python function using eager execution, inside the tf.function's graph. This has two main effects:
1. This allows you to use non-TensorFlow code inside your tf.function.
2. It allows you to run python control logic in a tf.function without relying on tf.autograph to convert the code to use tensorflow control logic (tf.cond, tf.while_loop).
Both of these features can be useful for debugging.
Since tf.py_function operates on Tensors it is still differentiable (once).
There are two ways to use this function:
As a decorator
Use tf.py_function as a decorator to ensure the function always runs eagerly.
When using tf.py_function as a decorator:
• you must set Tout
• you may set name
• you must not set func or inp
For example, you might use tf.py_function to implement the log huber function.
@tf.py_function(Tout=tf.float32)
def py_log_huber(x, m):
    print('Running with eager execution.')
    if tf.abs(x) <= m:
        return x**2
    return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))
Under eager execution the function operates normally:
x = tf.constant(1.0)
m = tf.constant(2.0)
py_log_huber(x, m).numpy()
Running with eager execution.
1.0
Inside a tf.function the tf.py_function is not converted to a tf.Graph:
@tf.function
def tf_wrapper(x):
    m = tf.constant(2.0)
    return py_log_huber(x, m)
The tf.py_function only executes eagerly, and only when the tf.function is called:
tf_wrapper(x).numpy()
Running with eager execution.
1.0
tf_wrapper(x).numpy()
Running with eager execution.
1.0
Gradients work as expected:
with tf.GradientTape() as t:
    t.watch(x)
    y = tf_wrapper(x)
Running with eager execution.
t.gradient(y, x).numpy()
2.0
You can also skip the decorator and use tf.py_function in place. This form can be a useful shortcut if you don't control the function's source, but it is harder to read.
# No decorator
def log_huber(x, m):
    if tf.abs(x) <= m:
        return x**2
    return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))
x = tf.constant(1.0)
m = tf.constant(2.0)
tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32).numpy()
More info
You can also use tf.py_function to debug your models at runtime using Python tools, i.e., you can isolate portions of your code that you want to debug, wrap them in Python functions and insert pdb
tracepoints or print statements as desired, and wrap those functions in tf.py_function.
For more information on eager execution, see the Eager guide.
tf.py_function is similar in spirit to tf.numpy_function, but unlike the latter, the former lets you use TensorFlow operations in the wrapped Python function. In particular, while
tf.compat.v1.py_func only runs on CPUs and wraps functions that take NumPy arrays as inputs and return NumPy arrays as outputs, tf.py_function can be placed on GPUs and wraps functions that take
Tensors as inputs, execute TensorFlow operations in their bodies, and return Tensors as outputs.
• Calling tf.py_function will acquire the Python Global Interpreter Lock (GIL) that allows only one thread to run at any point in time. This will preclude efficient parallelization and distribution
of the execution of the program.
• The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment.
• The operation must run in the same address space as the Python program that calls tf.py_function(). If you are using distributed TensorFlow, you must run a tf.distribute.Server in the same
process as the program that calls tf.py_function() and you must pin the created operation to a device in that server (e.g. using with tf.device():).
• Currently tf.py_function is not compatible with XLA. Calling tf.py_function inside tf.function(jit_compile=True) will raise an error.
func A Python function that accepts inp as arguments, and returns a value (or list of values) whose type is described by Tout. Do not set func when using tf.py_function as a decorator.
inp Input arguments for func. A list whose elements are Tensors or CompositeTensors (such as tf.RaggedTensor); or a single Tensor or CompositeTensor. Do not set inp when using tf.py_function as a decorator.
Tout The type(s) of the value(s) returned by func. One of the following.
• If func returns a Tensor (or a value that can be converted to a Tensor): the tf.DType for that value.
• If func returns a CompositeTensor: The tf.TypeSpec for that value.
• If func returns None: the empty list ([]).
• If func returns a list of Tensor and CompositeTensor values: a corresponding list of tf.DTypes and tf.TypeSpecs for each value.
name A name for the operation (optional).
• If func is None this returns a decorator that will ensure the decorated function will always run with eager execution even if called from a tf.function/tf.Graph.
• If func is not None, this executes func with eager execution and returns the result: a Tensor, CompositeTensor, or list of Tensor and CompositeTensor; or an empty list if func returns None.
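The func=None behavior above — return a decorator when no function is supplied, execute immediately otherwise — is a common Python pattern. Here is a plain-Python sketch of that calling convention (py_function_like is our own illustrative name, not TensorFlow's implementation):

```python
def py_function_like(func=None, inp=None, Tout=None, name=None):
    """Mimic tf.py_function's dual calling convention in plain Python."""
    def wrap(f):
        def wrapper(*args):
            # A real implementation would wrap f in an eagerly-executed op.
            return f(*args)
        return wrapper
    if func is None:
        return wrap             # decorator form: @py_function_like(Tout=...)
    return wrap(func)(*inp)     # direct form: runs func on inp immediately

# Decorator form
@py_function_like(Tout="float32")
def square(x):
    return x * x

# Direct form
result = py_function_like(func=lambda x, m: x + m, inp=[1, 2], Tout="int32")
print(square(3), result)  # 9 3
```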
Resistor Power Rating | Wilderness Labs Developer Portal
Power Rating
In addition to amount of resistance, resistors have another important characteristic that describes them: power rating.
When power flows through a resistor, some of the energy is converted into heat. The amount of heat a resistor can safely dissipate is characterized by its power rating, and is specified in wattage.
Most common resistors have a power rating between 1/8 watt (0.125W) and 1 watt. Resistors with higher power ratings are usually referred to as power resistors, and used specifically to dissipate power.
Power Calculation when only Voltage or Amperage and Resistance is Known
On the last page, we learned how to calculate the amount of power (in wattage) that passes through a resistor circuit by first using Ohm's law to calculate both voltage and amperage, and then
calculate the power from that. However, we can use a couple of power calculation laws to calculate power if we only know amperage and resistance, or voltage and resistance.
Power Calculation when Amperage and Resistance is Known
Recall that the definition of the watt is amps * volts, and I is historically used to stand in for amps, and P means (p)ower in wattage, so we can state:
Power = I(in amps) * Voltage
- or -
P = I * V
And Ohm's law, solved for voltage, is:

V = I * R

We can substitute Ohm's law (I * R for V) into the watt/power definition:

P = I * (I * R) = I^2 * R
Therefore, if we know amperage and resistance, we can calculate power in a circuit as:

P = I^2 * R
Power Calculation when Voltage and Resistance is Known:
We can also solve for power if we only know voltage and resistance.
Starting with Ohm's law, solved for amperage:

I = V / R

We can substitute that into the watt definition:

P = Watts = V * I = V * (V / R) = V^2 / R
Therefore, if we know voltage and resistance, we can calculate power in a circuit as:

P = V^2 / R
Power Rating Practice Problems
Recalling the simple resistor circuit, and our power calculation shortcuts:

P = I^2 * R
P = V^2 / R
Let's walk through some sample problems:
1) If the current is 100mA, and the resistance is 20Ω, what power rating must the resistor have?
P = (0.100A)^2 * 20Ω = 0.2W
The nearest standard power rating above 0.2W would usually be a 1/4 watt (0.25W).
2) If the source voltage is 5V, and the resistance is 100Ω, what minimum power rating must the resistor have?
P = 5^2 / 100Ω = 0.25W = 1/4 watt.
We can test this by doing the long-hand calculation. First, let's use Ohm's law to solve for current/amperage:
I = V / R
I = 5V / 100Ω = 0.05A
And then solving for power:
P = 5V * 0.05A = 0.25W = 1/4 watt
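These two problems are easy to check in code. The sketch below is our own illustration (the function names are not from the tutorial); it encodes both power shortcuts and re-runs the practice calculations:

```python
# Sketch: the two power-calculation shortcuts from this page.
# Function names are illustrative, not from any library.

def power_from_current(i_amps, r_ohms):
    """P = I^2 * R"""
    return i_amps ** 2 * r_ohms

def power_from_voltage(v_volts, r_ohms):
    """P = V^2 / R"""
    return v_volts ** 2 / r_ohms

# Problem 1: 100 mA through a 20 ohm resistor
p1 = power_from_current(0.100, 20)   # 0.1^2 * 20 = 0.2 W

# Problem 2: 5 V across a 100 ohm resistor
p2 = power_from_voltage(5, 100)      # 25 / 100 = 0.25 W

# Cross-check problem 2 the long way, via Ohm's law:
i = 5 / 100                          # I = V / R = 0.05 A
p2_check = 5 * i                     # P = V * I = 0.25 W

print(p1, p2, p2_check)
```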
2.6. Probability and Statistics
One way or another, machine learning is all about uncertainty. In supervised learning, we want to predict something unknown (the target) given something known (the features). Depending on our
objective, we might attempt to predict the most likely value of the target. Or we might predict the value with the smallest expected distance from the target. And sometimes we wish not only to
predict a specific value but to quantify our uncertainty. For example, given some features describing a patient, we might want to know how likely they are to suffer a heart attack in the next year.
In unsupervised learning, we often care about uncertainty. To determine whether a set of measurements are anomalous, it helps to know how likely one is to observe values in a population of interest.
Furthermore, in reinforcement learning, we wish to develop agents that act intelligently in various environments. This requires reasoning about how an environment might be expected to change and what
rewards one might expect to encounter in response to each of the available actions.
Probability is the mathematical field concerned with reasoning under uncertainty. Given a probabilistic model of some process, we can reason about the likelihood of various events. The use of
probabilities to describe the frequencies of repeatable events (like coin tosses) is fairly uncontroversial. In fact, frequentist scholars adhere to an interpretation of probability that applies only
to such repeatable events. By contrast Bayesian scholars use the language of probability more broadly to formalize reasoning under uncertainty. Bayesian probability is characterized by two unique
features: (i) assigning degrees of belief to non-repeatable events, e.g., what is the probability that a dam will collapse?; and (ii) subjectivity. While Bayesian probability provides unambiguous
rules for how one should update their beliefs in light of new evidence, it allows for different individuals to start off with different prior beliefs. Statistics helps us to reason backwards,
starting off with collection and organization of data and backing out to what inferences we might draw about the process that generated the data. Whenever we analyze a dataset, hunting for patterns
that we hope might characterize a broader population, we are employing statistical thinking. Many courses, majors, theses, careers, departments, companies, and institutions have been devoted to the
study of probability and statistics. While this section only scratches the surface, we will provide the foundation that you need to begin building models.
%matplotlib inline
import random
import torch
from torch.distributions.multinomial import Multinomial
from d2l import torch as d2l
%matplotlib inline
import random
from mxnet import np, npx
from mxnet.numpy.random import multinomial
from d2l import mxnet as d2l
%matplotlib inline
import random
import jax
import numpy as np
from jax import numpy as jnp
from d2l import jax as d2l
%matplotlib inline
import random
import tensorflow as tf
from tensorflow_probability import distributions as tfd
from d2l import tensorflow as d2l
2.6.1. A Simple Example: Tossing Coins
Imagine that we plan to toss a coin and want to quantify how likely we are to see heads (vs. tails). If the coin is fair, then both outcomes (heads and tails), are equally likely. Moreover if we plan
to toss the coin \(n\) times then the fraction of heads that we expect to see should exactly match the expected fraction of tails. One intuitive way to see this is by symmetry: for every possible
outcome with \(n_\textrm{h}\) heads and \(n_\textrm{t} = (n - n_\textrm{h})\) tails, there is an equally likely outcome with \(n_\textrm{t}\) heads and \(n_\textrm{h}\) tails. Note that this is only
possible if on average we expect to see \(1/2\) of tosses come up heads and \(1/2\) come up tails. Of course, if you conduct this experiment many times with \(n=1000000\) tosses each, you might never
see a trial where \(n_\textrm{h} = n_\textrm{t}\) exactly.
Formally, the quantity \(1/2\) is called a probability and here it captures the certainty with which any given toss will come up heads. Probabilities assign scores between \(0\) and \(1\) to outcomes
of interest, called events. Here the event of interest is \(\textrm{heads}\) and we denote the corresponding probability \(P(\textrm{heads})\). A probability of \(1\) indicates absolute certainty
(imagine a trick coin where both sides were heads) and a probability of \(0\) indicates impossibility (e.g., if both sides were tails). The frequencies \(n_\textrm{h}/n\) and \(n_\textrm{t}/n\) are
not probabilities but rather statistics. Probabilities are theoretical quantities that underly the data generating process. Here, the probability \(1/2\) is a property of the coin itself. By
contrast, statistics are empirical quantities that are computed as functions of the observed data. Our interests in probabilistic and statistical quantities are inextricably intertwined. We often
design special statistics called estimators that, given a dataset, produce estimates of model parameters such as probabilities. Moreover, when those estimators satisfy a nice property called
consistency, our estimates will converge to the corresponding probability. In turn, these inferred probabilities tell about the likely statistical properties of data from the same population that we
might encounter in the future.
Suppose that we stumbled upon a real coin for which we did not know the true \(P(\textrm{heads})\). To investigate this quantity with statistical methods, we need to (i) collect some data; and (ii)
design an estimator. Data acquisition here is easy; we can toss the coin many times and record all the outcomes. Formally, drawing realizations from some underlying random process is called sampling.
As you might have guessed, one natural estimator is the ratio of the number of observed heads to the total number of tosses.
Now, suppose that the coin was in fact fair, i.e., \(P(\textrm{heads}) = 0.5\). To simulate tosses of a fair coin, we can invoke any random number generator. There are some easy ways to draw samples
of an event with probability \(0.5\). For example Python’s random.random yields numbers in the interval \([0,1]\) where the probability of lying in any sub-interval \([a, b] \subset [0,1]\) is equal
to \(b-a\). Thus we can get out 0 and 1 with probability 0.5 each by testing whether the returned float number is greater than 0.5:
num_tosses = 100
heads = sum([random.random() > 0.5 for _ in range(num_tosses)])
tails = num_tosses - heads
print("heads, tails: ", [heads, tails])
More generally, we can simulate multiple draws from any variable with a finite number of possible outcomes (like the toss of a coin or roll of a die) by calling the multinomial function, setting the
first argument to the number of draws and the second as a list of probabilities associated with each of the possible outcomes. To simulate ten tosses of a fair coin, we assign probability vector
[0.5, 0.5], interpreting index 0 as heads and index 1 as tails. The function returns a vector with length equal to the number of possible outcomes (here, 2), where the first component tells us the
number of occurrences of heads and the second component tells us the number of occurrences of tails.
fair_probs = torch.tensor([0.5, 0.5])
Multinomial(100, fair_probs).sample()
fair_probs = [0.5, 0.5]
multinomial(100, fair_probs)
array([46, 54], dtype=int64)
fair_probs = [0.5, 0.5]
# jax.random does not have multinomial distribution implemented
np.random.multinomial(100, fair_probs)
fair_probs = tf.ones(2) / 2
tfd.Multinomial(100, fair_probs).sample()
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([49., 51.], dtype=float32)>
Each time you run this sampling process, you will receive a new random value that may differ from the previous outcome. Dividing by the number of tosses gives us the frequency of each outcome in our
data. Note that these frequencies, just like the probabilities that they are intended to estimate, sum to \(1\).
Multinomial(100, fair_probs).sample() / 100
multinomial(100, fair_probs) / 100
np.random.multinomial(100, fair_probs) / 100
tfd.Multinomial(100, fair_probs).sample() / 100
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([0.59, 0.41], dtype=float32)>
Here, even though our simulated coin is fair (we ourselves set the probabilities [0.5, 0.5]), the counts of heads and tails may not be identical. That is because we only drew a relatively small
number of samples. If we did not implement the simulation ourselves, and only saw the outcome, how would we know if the coin were slightly unfair or if the possible deviation from \(1/2\) was just an
artifact of the small sample size? Let’s see what happens when we simulate 10,000 tosses.
counts = Multinomial(10000, fair_probs).sample()
counts / 10000
counts = multinomial(10000, fair_probs).astype(np.float32)
counts / 10000
counts = np.random.multinomial(10000, fair_probs).astype(np.float32)
counts / 10000
array([0.5007, 0.4993], dtype=float32)
counts = tfd.Multinomial(10000, fair_probs).sample()
counts / 10000
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([0.5019, 0.4981], dtype=float32)>
In general, for averages of repeated events (like coin tosses), as the number of repetitions grows, our estimates are guaranteed to converge to the true underlying probabilities. The mathematical
formulation of this phenomenon is called the law of large numbers, and the central limit theorem tells us that in many situations, as the sample size \(n\) grows, these errors should go down at a rate
of \((1/\sqrt{n})\). Let’s get some more intuition by studying how our estimate evolves as we grow the number of tosses from 1 to 10,000.
counts = Multinomial(1, fair_probs).sample((10000,))
cum_counts = counts.cumsum(dim=0)
estimates = cum_counts / cum_counts.sum(dim=1, keepdims=True)
estimates = estimates.numpy()
d2l.set_figsize((4.5, 3.5))
d2l.plt.plot(estimates[:, 0], label=("P(coin=heads)"))
d2l.plt.plot(estimates[:, 1], label=("P(coin=tails)"))
d2l.plt.axhline(y=0.5, color='black', linestyle='dashed')
d2l.plt.gca().set_ylabel('Estimated probability')
counts = multinomial(1, fair_probs, size=10000)
cum_counts = counts.astype(np.float32).cumsum(axis=0)
estimates = cum_counts / cum_counts.sum(axis=1, keepdims=True)
d2l.set_figsize((4.5, 3.5))
d2l.plt.plot(estimates[:, 0], label=("P(coin=heads)"))
d2l.plt.plot(estimates[:, 1], label=("P(coin=tails)"))
d2l.plt.axhline(y=0.5, color='black', linestyle='dashed')
d2l.plt.gca().set_ylabel('Estimated probability')
counts = np.random.multinomial(1, fair_probs, size=10000).astype(np.float32)
cum_counts = counts.cumsum(axis=0)
estimates = cum_counts / cum_counts.sum(axis=1, keepdims=True)
d2l.set_figsize((4.5, 3.5))
d2l.plt.plot(estimates[:, 0], label=("P(coin=heads)"))
d2l.plt.plot(estimates[:, 1], label=("P(coin=tails)"))
d2l.plt.axhline(y=0.5, color='black', linestyle='dashed')
d2l.plt.gca().set_ylabel('Estimated probability')
counts = tfd.Multinomial(1, fair_probs).sample(10000)
cum_counts = tf.cumsum(counts, axis=0)
estimates = cum_counts / tf.reduce_sum(cum_counts, axis=1, keepdims=True)
estimates = estimates.numpy()
d2l.set_figsize((4.5, 3.5))
d2l.plt.plot(estimates[:, 0], label=("P(coin=heads)"))
d2l.plt.plot(estimates[:, 1], label=("P(coin=tails)"))
d2l.plt.axhline(y=0.5, color='black', linestyle='dashed')
d2l.plt.gca().set_ylabel('Estimated probability')
Each solid curve corresponds to one of the two values of the coin and gives our estimated probability that the coin turns up that value after each group of experiments. The dashed black line gives
the true underlying probability. As we get more data by conducting more experiments, the curves converge towards the true probability. You might already begin to see the shape of some of the more
advanced questions that preoccupy statisticians: How quickly does this convergence happen? If we had already tested many coins manufactured at the same plant, how might we incorporate this information?
2.6.2. A More Formal Treatment
We have already gotten pretty far: posing a probabilistic model, generating synthetic data, running a statistical estimator, empirically assessing convergence, and reporting error metrics (checking
the deviation). However, to go much further, we will need to be more precise.
When dealing with randomness, we denote the set of possible outcomes \(\mathcal{S}\) and call it the sample space or outcome space. Here, each element is a distinct possible outcome. In the case of
tossing a single coin, \(\mathcal{S} = \{\textrm{heads}, \textrm{tails}\}\). For a single die, \(\mathcal{S} = \{1, 2, 3, 4, 5, 6\}\). When flipping two coins, possible outcomes are \(\{(\textrm
{heads}, \textrm{heads}), (\textrm{heads}, \textrm{tails}), (\textrm{tails}, \textrm{heads}), (\textrm{tails}, \textrm{tails})\}\). Events are subsets of the sample space. For instance, the event
“the first coin toss comes up heads” corresponds to the set \(\{(\textrm{heads}, \textrm{heads}), (\textrm{heads}, \textrm{tails})\}\). Whenever the outcome \(z\) of a random experiment satisfies \(z
\in \mathcal{A}\), then event \(\mathcal{A}\) has occurred. For a single roll of a die, we could define the events “seeing a \(5\)” (\(\mathcal{A} = \{5\}\)) and “seeing an odd number” (\(\mathcal{B}
= \{1, 3, 5\}\)). In this case, if the die came up \(5\), we would say that both \(\mathcal{A}\) and \(\mathcal{B}\) occurred. On the other hand, if \(z = 3\), then \(\mathcal{A}\) did not occur but
\(\mathcal{B}\) did.
A probability function maps events onto real values \({P: \mathcal{A} \subseteq \mathcal{S} \rightarrow [0,1]}\). The probability, denoted \(P(\mathcal{A})\), of an event \(\mathcal{A}\) in the given
sample space \(\mathcal{S}\), has the following properties:
• The probability of any event \(\mathcal{A}\) is a nonnegative real number, i.e., \(P(\mathcal{A}) \geq 0\);
• The probability of the entire sample space is \(1\), i.e., \(P(\mathcal{S}) = 1\);
• For any countable sequence of events \(\mathcal{A}_1, \mathcal{A}_2, \ldots\) that are mutually exclusive (i.e., \(\mathcal{A}_i \cap \mathcal{A}_j = \emptyset\) for all \(i \neq j\)), the
probability that any of them happens is equal to the sum of their individual probabilities, i.e., \(P(\bigcup_{i=1}^{\infty} \mathcal{A}_i) = \sum_{i=1}^{\infty} P(\mathcal{A}_i)\).
These axioms of probability theory, proposed by Kolmogorov (1933), can be applied to rapidly derive a number of important consequences. For instance, it follows immediately that the probability of
any event \(\mathcal{A}\) or its complement \(\mathcal{A}'\) occurring is 1 (because \(\mathcal{A} \cup \mathcal{A}' = \mathcal{S}\)). We can also prove that \(P(\emptyset) = 0\) because \(1 = P(\
mathcal{S} \cup \mathcal{S}') = P(\mathcal{S} \cup \emptyset) = P(\mathcal{S}) + P(\emptyset) = 1 + P(\emptyset)\). Consequently, the probability of any event \(\mathcal{A}\) and its complement \(\
mathcal{A}'\) occurring simultaneously is \(P(\mathcal{A} \cap \mathcal{A}') = 0\). Informally, this tells us that impossible events have zero probability of occurring.
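These axioms are easy to verify numerically for a finite sample space. The sketch below is our own illustration (the names S and P are not from the d2l library); it uses the uniform measure on a fair die:

```python
# Sketch: verifying the Kolmogorov axioms for the uniform measure on a
# fair die. The names S and P below are illustrative.
import math
from itertools import chain, combinations

S = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of S) under the uniform measure."""
    return len(event) / len(S)

# Axiom 1: nonnegativity, checked over every subset of S
all_events = chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))
assert all(P(set(e)) >= 0 for e in all_events)

# Axiom 2: the whole sample space has probability 1
assert P(S) == 1

# Axiom 3 (finite additivity): probabilities of disjoint events add
A, B = {5}, {1, 3}  # "seeing a 5" and "seeing 1 or 3" are disjoint
assert math.isclose(P(A | B), P(A) + P(B))

# Derived consequences: P(A) + P(A') = 1 and P(empty set) = 0
assert math.isclose(P(A) + P(S - A), 1)
assert P(set()) == 0
```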
2.6.3. Random Variables
When we spoke about events like the roll of a die coming up odds or the first coin toss coming up heads, we were invoking the idea of a random variable. Formally, random variables are mappings from
an underlying sample space to a set of (possibly many) values. You might wonder how a random variable is different from the sample space, since both are collections of outcomes. Importantly, random
variables can be much coarser than the raw sample space. We can define a binary random variable like “greater than 0.5” even when the underlying sample space is infinite, e.g., points on the line
segment between \(0\) and \(1\). Additionally, multiple random variables can share the same underlying sample space. For example “whether my home alarm goes off” and “whether my house was burgled”
are both binary random variables that share an underlying sample space. Consequently, knowing the value taken by one random variable can tell us something about the likely value of another random
variable. Knowing that the alarm went off, we might suspect that the house was likely burgled.
Every value taken by a random variable corresponds to a subset of the underlying sample space. Thus the occurrence where the random variable \(X\) takes value \(v\), denoted by \(X=v\), is an event
and \(P(X=v)\) denotes its probability. Sometimes this notation can get clunky, and we can abuse notation when the context is clear. For example, we might use \(P(X)\) to refer broadly to the
distribution of \(X\), i.e., the function that tells us the probability that \(X\) takes any given value. Other times we write expressions like \(P(X,Y) = P(X) P(Y)\), as a shorthand to express a
statement that is true for all of the values that the random variables \(X\) and \(Y\) can take, i.e., for all \(i,j\) it holds that \(P(X=i \textrm{ and } Y=j) = P(X=i)P(Y=j)\). Other times, we
abuse notation by writing \(P(v)\) when the random variable is clear from the context. Since an event in probability theory is a set of outcomes from the sample space, we can specify a range of
values for a random variable to take. For example, \(P(1 \leq X \leq 3)\) denotes the probability of the event \(\{1 \leq X \leq 3\}\).
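As a small illustration (our own sketch, not part of the text), a discrete random variable for a fair die can be represented as a mapping from values to probabilities, with event probabilities such as \(P(1 \leq X \leq 3)\) computed by summing over matching outcomes:

```python
# Sketch: a discrete random variable X for a fair die, as a mapping from
# values to probabilities. The helper name `prob` is illustrative.
import math

pmf = {v: 1 / 6 for v in range(1, 7)}

def prob(predicate):
    """P of the event {X : predicate(X)}, by summing matching outcomes."""
    return sum(p for v, p in pmf.items() if predicate(v))

p_range = prob(lambda v: 1 <= v <= 3)  # P(1 <= X <= 3), about 0.5
p_odd = prob(lambda v: v % 2 == 1)     # P(X in {1, 3, 5}), about 0.5
p_five = prob(lambda v: v == 5)        # P(X = 5), about 1/6

assert math.isclose(p_range, 0.5) and math.isclose(p_odd, 0.5)
```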
Note that there is a subtle difference between discrete random variables, like flips of a coin or tosses of a die, and continuous ones, like the weight and the height of a person sampled at random
from the population. In this case we seldom really care about someone’s exact height. Moreover, if we took precise enough measurements, we would find that no two people on the planet have the exact
same height. In fact, with fine enough measurements, you would never have the same height when you wake up and when you go to sleep. There is little point in asking about the exact probability that
someone is 1.801392782910287192 meters tall. Instead, we typically care more about being able to say whether someone’s height falls into a given interval, say between 1.79 and 1.81 meters. In these
cases we work with probability densities. The height of exactly 1.80 meters has no probability, but nonzero density. To work out the probability assigned to an interval, we must take an integral of
the density over that interval.
2.6.4. Multiple Random Variables
You might have noticed that we could not even make it through the previous section without making statements involving interactions among multiple random variables (recall \(P(X,Y) = P(X) P(Y)\)).
Most of machine learning is concerned with such relationships. Here, the sample space would be the population of interest, say customers who transact with a business, photographs on the Internet, or
proteins known to biologists. Each random variable would represent the (unknown) value of a different attribute. Whenever we sample an individual from the population, we observe a realization of each
of the random variables. Because the values taken by random variables correspond to subsets of the sample space that could be overlapping, partially overlapping, or entirely disjoint, knowing the
value taken by one random variable can cause us to update our beliefs about which values of another random variable are likely. If a patient walks into a hospital and we observe that they are having
trouble breathing and have lost their sense of smell, then we believe that they are more likely to have COVID-19 than we might if they had no trouble breathing and a perfectly ordinary sense of smell.
When working with multiple random variables, we can construct events corresponding to every combination of values that the variables can jointly take. The probability function that assigns
probabilities to each of these combinations (e.g. \(A=a\) and \(B=b\)) is called the joint probability function and simply returns the probability assigned to the intersection of the corresponding
subsets of the sample space. The joint probability assigned to the event where random variables \(A\) and \(B\) take values \(a\) and \(b\), respectively, is denoted \(P(A = a, B = b)\), where the
comma indicates “and”. Note that for any values \(a\) and \(b\), it follows that
\[P(A=a, B=b) \leq P(A=a) \textrm{ and } P(A=a, B=b) \leq P(B = b),\]
since for \(A=a\) and \(B=b\) to happen, \(A=a\) has to happen and \(B=b\) also has to happen. Interestingly, the joint probability tells us all that we can know about these random variables in a
probabilistic sense, and can be used to derive many other useful quantities, including recovering the individual distributions \(P(A)\) and \(P(B)\). To recover \(P(A=a)\) we simply sum up \(P(A=a, B
=v)\) over all values \(v\) that the random variable \(B\) can take: \(P(A=a) = \sum_v P(A=a, B=v)\).
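The recovery of a marginal from a joint distribution can be sketched in a few lines. The joint table below is invented purely for illustration:

```python
# Sketch: recovering marginals from a joint distribution via
# P(A=a) = sum_v P(A=a, B=v). The joint table is made up for illustration.
import math

joint = {
    ("rain", "umbrella"): 0.30,
    ("rain", "no umbrella"): 0.10,
    ("sun", "umbrella"): 0.05,
    ("sun", "no umbrella"): 0.55,
}

def marginal(joint, index, value):
    """Sum the joint probability over all values of the other variable."""
    return sum(p for outcome, p in joint.items() if outcome[index] == value)

p_rain = marginal(joint, 0, "rain")          # 0.30 + 0.10 = 0.40
p_umbrella = marginal(joint, 1, "umbrella")  # 0.30 + 0.05 = 0.35

# Sanity check: the full joint distribution sums to 1
assert math.isclose(sum(joint.values()), 1.0)
```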
The ratio \(\frac{P(A=a, B=b)}{P(A=a)} \leq 1\) turns out to be extremely important. It is called the conditional probability, and is denoted via the “\(\mid\)” symbol:
\[P(B=b \mid A=a) = P(A=a,B=b)/P(A=a).\]
It tells us the new probability associated with the event \(B=b\), once we condition on the fact \(A=a\) took place. We can think of this conditional probability as restricting attention only to the
subset of the sample space associated with \(A=a\) and then renormalizing so that all probabilities sum to 1. Conditional probabilities are in fact just ordinary probabilities and thus respect all of
the axioms, as long as we condition all terms on the same event and thus restrict attention to the same sample space. For instance, for disjoint events \(\mathcal{B}\) and \(\mathcal{B}'\), we have
that \(P(\mathcal{B} \cup \mathcal{B}' \mid A = a) = P(\mathcal{B} \mid A = a) + P(\mathcal{B}' \mid A = a)\).
Using the definition of conditional probabilities, we can derive the famous result called Bayes’ theorem. By construction, we have that \(P(A, B) = P(B\mid A) P(A)\) and \(P(A, B) = P(A\mid B) P(B)\)
. Combining both equations yields \(P(B\mid A) P(A) = P(A\mid B) P(B)\) and hence
\[P(A \mid B) = \frac{P(B\mid A) P(A)}{P(B)}.\]
This simple equation has profound implications because it allows us to reverse the order of conditioning. If we know how to estimate \(P(B\mid A)\), \(P(A)\), and \(P(B)\), then we can estimate \(P(A
\mid B)\). We often find it easier to estimate one term directly but not the other and Bayes’ theorem can come to the rescue here. For instance, if we know the prevalence of symptoms for a given
disease, and the overall prevalences of the disease and symptoms, respectively, we can determine how likely someone is to have the disease based on their symptoms. In some cases we might not have
direct access to \(P(B)\), such as the prevalence of symptoms. In this case a simplified version of Bayes’ theorem comes in handy:
\[P(A \mid B) \propto P(B \mid A) P(A).\]
Since we know that \(P(A \mid B)\) must be normalized to \(1\), i.e., \(\sum_a P(A=a \mid B) = 1\), we can use it to compute
\[P(A \mid B) = \frac{P(B \mid A) P(A)}{\sum_a P(B \mid A=a) P(A = a)}.\]
In Bayesian statistics, we think of an observer as possessing some (subjective) prior beliefs about the plausibility of the available hypotheses encoded in the prior \(P(H)\), and a likelihood
function that says how likely one is to observe any value of the collected evidence for each of the hypotheses in the class \(P(E \mid H)\). Bayes’ theorem is then interpreted as telling us how to
update the initial prior \(P(H)\) in light of the available evidence \(E\) to produce posterior beliefs \(P(H \mid E) = \frac{P(E \mid H) P(H)}{P(E)}\). Informally, this can be stated as “posterior
equals prior times likelihood, divided by the evidence”. Now, because the evidence \(P(E)\) is the same for all hypotheses, we can get away with simply normalizing over the hypotheses.
Note that \(\sum_a P(A=a \mid B) = 1\) also allows us to marginalize over random variables. That is, we can drop variables from a joint distribution such as \(P(A, B)\). After all, we have that
\[\sum_a P(B \mid A=a) P(A=a) = \sum_a P(B, A=a) = P(B).\]
Independence is another fundamentally important concept that forms the backbone of many important ideas in statistics. In short, two variables are independent if conditioning on the value of \(A\)
does not cause any change to the probability distribution associated with \(B\) and vice versa. More formally, independence, denoted \(A \perp B\), requires that \(P(A \mid B) = P(A)\) and,
consequently, that \(P(A,B) = P(A \mid B) P(B) = P(A) P(B)\). Independence is often an appropriate assumption. For example, if the random variable \(A\) represents the outcome from tossing one fair
coin and the random variable \(B\) represents the outcome from tossing another, then knowing whether \(A\) came up heads should not influence the probability of \(B\) coming up heads.
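For the two-coin case, independence can be checked directly on the 4-outcome joint sample space described earlier (a sketch with illustrative names):

```python
# Sketch: checking independence P(A, B) = P(A) P(B) for two fair coin
# tosses, using the 4-outcome joint sample space from the text.
import itertools
import math

outcomes = itertools.product(["heads", "tails"], repeat=2)
joint = {o: 0.25 for o in outcomes}  # fair, independent tosses

p_a_heads = sum(p for (a, b), p in joint.items() if a == "heads")
p_b_heads = sum(p for (a, b), p in joint.items() if b == "heads")
p_both = joint[("heads", "heads")]

# Independence: the joint equals the product of the marginals
assert math.isclose(p_both, p_a_heads * p_b_heads)
```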
Independence is especially useful when it holds among the successive draws of our data from some underlying distribution (allowing us to make strong statistical conclusions) or when it holds among
various variables in our data, allowing us to work with simpler models that encode this independence structure. On the other hand, estimating the dependencies among random variables is often the very
aim of learning. We care to estimate the probability of disease given symptoms specifically because we believe that diseases and symptoms are not independent.
Note that because conditional probabilities are proper probabilities, the concepts of independence and dependence also apply to them. Two random variables \(A\) and \(B\) are conditionally
independent given a third variable \(C\) if and only if \(P(A, B \mid C) = P(A \mid C)P(B \mid C)\). Interestingly, two variables can be independent in general but become dependent when conditioning
on a third. This often occurs when the two random variables \(A\) and \(B\) correspond to causes of some third variable \(C\). For example, broken bones and lung cancer might be independent in the
general population but if we condition on being in the hospital then we might find that broken bones are negatively correlated with lung cancer. That is because the broken bone explains away why some
person is in the hospital and thus lowers the probability that they are hospitalized because of having lung cancer.
And conversely, two dependent random variables can become independent upon conditioning on a third. This often happens when two otherwise unrelated events have a common cause. Shoe size and reading
level are highly correlated among elementary school students, but this correlation disappears if we condition on age.
2.6.5. An Example
Let’s put our skills to the test. Assume that a doctor administers an HIV test to a patient. This test is fairly accurate and fails only with 1% probability if the patient is healthy but reported as
diseased, i.e., healthy patients test positive in 1% of cases. Moreover, it never fails to detect HIV if the patient actually has it. We use \(D_1 \in \{0, 1\}\) to indicate the diagnosis (\(0\) if
negative and \(1\) if positive) and \(H \in \{0, 1\}\) to denote the HIV status.
Conditional probability \(H=1\) \(H=0\)
\(P(D_1 = 1 \mid H)\) 1 0.01
\(P(D_1 = 0 \mid H)\) 0 0.99
Note that the column sums are all 1 (but the row sums do not), since they are conditional probabilities. Let’s compute the probability of the patient having HIV if the test comes back positive, i.e.,
\(P(H = 1 \mid D_1 = 1)\). Intuitively this is going to depend on how common the disease is, since it affects the number of false alarms. Assume that the population is fairly free of the disease,
e.g., \(P(H=1) = 0.0015\). To apply Bayes’ theorem, we need to apply marginalization to determine
\[\begin{split}\begin{aligned} P(D_1 = 1) =& P(D_1=1, H=0) + P(D_1=1, H=1) \\ =& P(D_1=1 \mid H=0) P(H=0) + P(D_1=1 \mid H=1) P(H=1) \\ =& 0.011485. \end{aligned}\end{split}\]
This leads us to
\[P(H = 1 \mid D_1 = 1) = \frac{P(D_1=1 \mid H=1) P(H=1)}{P(D_1=1)} = 0.1306.\]
In other words, there is only a 13.06% chance that the patient actually has HIV, despite the test being pretty accurate. As we can see, probability can be counterintuitive. What should a patient do
upon receiving such terrifying news? Likely, the patient would ask the physician to administer another test to get clarity. The second test has different characteristics and it is not as good as the
first one.
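Before moving on, the single-test computation can be checked with a few lines of plain Python (the numbers are copied from the text):

```python
# Prior and first-test characteristics from the text.
p_h1 = 0.0015            # P(H = 1), prevalence of the disease
p_d1_h1 = 1.0            # P(D1 = 1 | H = 1)
p_d1_h0 = 0.01           # P(D1 = 1 | H = 0), false-positive rate

# Marginalization: P(D1 = 1) = sum over H of P(D1 = 1 | H) P(H).
p_d1 = p_d1_h1 * p_h1 + p_d1_h0 * (1 - p_h1)

# Bayes' theorem: P(H = 1 | D1 = 1).
posterior = p_d1_h1 * p_h1 / p_d1
print(round(p_d1, 6), round(posterior, 4))  # 0.011485 0.1306
```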
Conditional probability \(H=1\) \(H=0\)
\(P(D_2 = 1 \mid H)\) 0.98 0.03
\(P(D_2 = 0 \mid H)\) 0.02 0.97
Unfortunately, the second test comes back positive, too. Let’s calculate the requisite probabilities to invoke Bayes’ theorem by assuming conditional independence:
\[\begin{split}\begin{aligned} P(D_1 = 1, D_2 = 1 \mid H = 0) & = P(D_1 = 1 \mid H = 0) P(D_2 = 1 \mid H = 0) = 0.0003, \\ P(D_1 = 1, D_2 = 1 \mid H = 1) & = P(D_1 = 1 \mid H = 1) P(D_2 = 1 \mid H = 1) = 0.98. \end{aligned}\end{split}\]
Now we can apply marginalization to obtain the probability that both tests come back positive:
\[\begin{split}\begin{aligned} &P(D_1 = 1, D_2 = 1)\\ &= P(D_1 = 1, D_2 = 1, H = 0) + P(D_1 = 1, D_2 = 1, H = 1) \\ &= P(D_1 = 1, D_2 = 1 \mid H = 0)P(H=0) + P(D_1 = 1, D_2 = 1 \mid H = 1)P(H=1)\\ &= 0.00176955. \end{aligned}\end{split}\]
Finally, the probability of the patient having HIV given that both tests are positive is
\[P(H = 1 \mid D_1 = 1, D_2 = 1) = \frac{P(D_1 = 1, D_2 = 1 \mid H=1) P(H=1)}{P(D_1 = 1, D_2 = 1)} = 0.8307.\]
That is, the second test allowed us to gain much higher confidence that not all is well. Despite the second test being considerably less accurate than the first one, it still significantly improved
our estimate. The assumption of both tests being conditionally independent of each other was crucial for our ability to generate a more accurate estimate. Take the extreme case where we run the same
test twice. In this situation we would expect the same outcome both times, hence no additional insight is gained from running the same test again. The astute reader might have noticed that the
diagnosis behaved like a classifier hiding in plain sight where our ability to decide whether a patient is healthy increases as we obtain more features (test outcomes).
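The two-test calculation follows the same pattern; here is a short sketch, with the numbers again taken from the text:

```python
p_h1 = 0.0015             # P(H = 1), prevalence of the disease
p1 = {1: 1.0, 0: 0.01}    # P(D1 = 1 | H = h) for each h
p2 = {1: 0.98, 0: 0.03}   # P(D2 = 1 | H = h) for each h

# Conditional independence given H: P(D1=1, D2=1 | H=h) = p1[h] * p2[h].
joint = {h: p1[h] * p2[h] for h in (0, 1)}   # {0: 0.0003, 1: 0.98}

# Marginalize over H, then apply Bayes' theorem.
p_both = joint[0] * (1 - p_h1) + joint[1] * p_h1
posterior = joint[1] * p_h1 / p_both
print(round(p_both, 8), round(posterior, 4))  # 0.00176955 0.8307
```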
2.6.6. Expectations
Often, making decisions requires not just looking at the probabilities assigned to individual events but composing them together into useful aggregates that can provide us with guidance. For example,
when random variables take continuous scalar values, we often care about knowing what value to expect on average. This quantity is formally called an expectation. If we are making investments, the
first quantity of interest might be the return we can expect, averaging over all the possible outcomes (and weighting by the appropriate probabilities). For instance, say that with 50% probability,
an investment might fail altogether, with 40% probability it might provide a 2\(\times\) return, and with 10% probability it might provide a 10\(\times\) return. To calculate the
expected return, we sum over all returns, multiplying each by the probability that they will occur. This yields the expectation \(0.5 \cdot 0 + 0.4 \cdot 2 + 0.1 \cdot 10 = 1.8\). Hence the expected
return is 1.8\(\times\).
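As a quick check, the expectation is just a probability-weighted sum:

```python
returns = [0.0, 2.0, 10.0]       # multiplicative return in each outcome
probs = [0.5, 0.4, 0.1]          # probability of each outcome
expected_return = sum(p * r for p, r in zip(probs, returns))
print(expected_return)  # 1.8
```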
In general, the expectation (or average) of the random variable \(X\) is defined as
\[E[X] = E_{x \sim P}[x] = \sum_{x} x P(X = x).\]
Likewise, for densities we obtain \(E[X] = \int x \;dp(x)\). Sometimes we are interested in the expected value of some function of \(x\). We can calculate these expectations as
\[E_{x \sim P}[f(x)] = \sum_x f(x) P(x) \textrm{ and } E_{x \sim P}[f(x)] = \int f(x) p(x) \;dx\]
for discrete probabilities and densities, respectively. Returning to the investment example from above, \(f\) might be the utility (happiness) associated with the return. Behavioral economists have
long noted that people associate greater disutility with losing money than the utility gained from earning one dollar relative to their baseline. Moreover, the value of money tends to be sub-linear.
Possessing 100k dollars versus zero dollars can make the difference between paying the rent, eating well, and enjoying quality healthcare versus suffering through homelessness. On the other hand, the
gains due to possessing 200k versus 100k are less dramatic. Reasoning like this motivates the cliché that “the utility of money is logarithmic”.
If the utility associated with a total loss were \(-1\), and the utilities associated with returns of \(1\), \(2\), and \(10\) were \(1\), \(2\) and \(4\), respectively, then the expected happiness
of investing would be \(0.5 \cdot (-1) + 0.4 \cdot 2 + 0.1 \cdot 4 = 0.7\) (an expected loss of utility of 30%). If indeed this were your utility function, you might be best off keeping the money in
the bank.
For financial decisions, we might also want to measure how risky an investment is. Here, we care not just about the expected value but how much the actual values tend to vary relative to this value.
Note that we cannot just take the expectation of the difference between the actual and expected values. This is because the expectation of a difference is the difference of the expectations, i.e., \(E[X - E[X]] = E[X] - E[E[X]] = 0\). However, we can look at the expectation of any non-negative function of this difference. The variance of a random variable is calculated by looking at the
expected value of the squared differences:
\[\textrm{Var}[X] = E\left[(X - E[X])^2\right] = E[X^2] - E[X]^2.\]
Here the equality follows by expanding \((X - E[X])^2 = X^2 - 2 X E[X] + E[X]^2\) and taking expectations for each term. The square root of the variance is another useful quantity called the standard
deviation. While this and the variance convey the same information (either can be calculated from the other), the standard deviation has the nice property that it is expressed in the same units as
the original quantity represented by the random variable.
Lastly, the variance of a function of a random variable is defined analogously as
\[\textrm{Var}_{x \sim P}[f(x)] = E_{x \sim P}[f^2(x)] - E_{x \sim P}[f(x)]^2.\]
Returning to our investment example, we can now compute the variance of the investment. It is given by \(0.5 \cdot 0 + 0.4 \cdot 2^2 + 0.1 \cdot 10^2 - 1.8^2 = 8.36\). For all intents and purposes
this is a risky investment. Note that by mathematical convention mean and variance are often referenced as \(\mu\) and \(\sigma^2\). This is particularly the case whenever we use it to parametrize a
Gaussian distribution.
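The investment's variance can be checked with the identity \(\textrm{Var}[X] = E[X^2] - E[X]^2\):

```python
returns = [0.0, 2.0, 10.0]
probs = [0.5, 0.4, 0.1]

mean = sum(p * r for p, r in zip(probs, returns))           # E[X] = 1.8
second = sum(p * r ** 2 for p, r in zip(probs, returns))    # E[X^2] = 11.6
variance = second - mean ** 2
print(round(variance, 2))  # 8.36
```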
In the same way as we introduced expectations and variance for scalar random variables, we can do so for vector-valued ones. Expectations are easy, since we can apply them elementwise. For instance,
\(\boldsymbol{\mu} \stackrel{\textrm{def}}{=} E_{\mathbf{x} \sim P}[\mathbf{x}]\) has coordinates \(\mu_i = E_{\mathbf{x} \sim P}[x_i]\). Covariances are more complicated. We define them by taking
expectations of the outer product of the difference between random variables and their mean:
\[\boldsymbol{\Sigma} \stackrel{\textrm{def}}{=} \textrm{Cov}_{\mathbf{x} \sim P}[\mathbf{x}] = E_{\mathbf{x} \sim P}\left[(\mathbf{x} - \boldsymbol{\mu}) (\mathbf{x} - \boldsymbol{\mu})^\top\right].\]
This matrix \(\boldsymbol{\Sigma}\) is referred to as the covariance matrix. An easy way to see its effect is to consider some vector \(\mathbf{v}\) of the same size as \(\mathbf{x}\). It follows that
\[\mathbf{v}^\top \boldsymbol{\Sigma} \mathbf{v} = E_{\mathbf{x} \sim P}\left[\mathbf{v}^\top(\mathbf{x} - \boldsymbol{\mu}) (\mathbf{x} - \boldsymbol{\mu})^\top \mathbf{v}\right] = \textrm{Var}_{\mathbf{x} \sim P}[\mathbf{v}^\top \mathbf{x}].\]
As such, \(\boldsymbol{\Sigma}\) allows us to compute the variance for any linear function of \(\mathbf{x}\) by a simple matrix multiplication. The off-diagonal elements tell us how correlated the
coordinates are: a value of 0 means no correlation, where a larger positive value means that they are more strongly correlated.
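The identity \(\mathbf{v}^\top \boldsymbol{\Sigma} \mathbf{v} = \textrm{Var}[\mathbf{v}^\top \mathbf{x}]\) is easy to verify numerically. In the NumPy sketch below, the covariance matrix and the vector \(\mathbf{v}\) are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
true_cov = np.array([[2.0, 0.8], [0.8, 1.0]])
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=100_000)

sigma = np.cov(x, rowvar=False)      # empirical covariance matrix
v = np.array([1.0, -2.0])
lhs = v @ sigma @ v                  # v^T Sigma v
rhs = np.var(x @ v, ddof=1)          # Var[v^T x], same ddof as np.cov
print(np.allclose(lhs, rhs))         # True
```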
2.6.7. Discussion
In machine learning, there are many things to be uncertain about! We can be uncertain about the value of a label given an input. We can be uncertain about the estimated value of a parameter. We can
even be uncertain about whether data arriving at deployment is even from the same distribution as the training data.
By aleatoric uncertainty, we mean uncertainty that is intrinsic to the problem, and due to genuine randomness unaccounted for by the observed variables. By epistemic uncertainty, we mean uncertainty
over a model’s parameters, the sort of uncertainty that we can hope to reduce by collecting more data. We might have epistemic uncertainty concerning the probability that a coin turns up heads, but
even once we know this probability, we are left with aleatoric uncertainty about the outcome of any future toss. No matter how long we watch someone tossing a fair coin, we will never be more or less
than 50% certain that the next toss will come up heads. These terms come from mechanical modeling (see, e.g., Der Kiureghian and Ditlevsen (2009) for a review on this aspect of uncertainty
quantification). It is worth noting, however, that these terms constitute a slight abuse of language. The term epistemic refers to anything concerning knowledge and thus, in the philosophical sense,
all uncertainty is epistemic.
We saw that sampling data from some unknown probability distribution can provide us with information that can be used to estimate the parameters of the data generating distribution. That said, the
rate at which this is possible can be quite slow. In our coin tossing example (and many others) we can do no better than to design estimators that converge at a rate of \(1/\sqrt{n}\), where \(n\) is
the sample size (e.g., the number of tosses). This means that by going from 10 to 1000 observations (usually a very achievable task) we see a tenfold reduction of uncertainty, whereas the next 1000
observations help comparatively little, offering only a 1.41 times reduction. This is a persistent feature of machine learning: while there are often easy gains, it takes a very large amount of data,
and often with it an enormous amount of computation, to make further gains. For an empirical review of this fact for large scale language models see Revels et al. (2016).
We also sharpened our language and tools for statistical modeling. In the process of that we learned about conditional probabilities and about one of the most important equations in statistics—Bayes’
theorem. It is an effective tool for decoupling information conveyed by data through a likelihood term \(P(B \mid A)\) that addresses how well observations \(B\) match a choice of parameters \(A\),
and a prior probability \(P(A)\) which governs how plausible a particular choice of \(A\) was in the first place. In particular, we saw how this rule can be applied to assign probabilities to
diagnoses, based on the efficacy of the test and the prevalence of the disease itself (i.e., our prior).
Lastly, we introduced a first set of nontrivial questions about the effect of a specific probability distribution, namely expectations and variances. While there are many more than just linear and
quadratic expectations for a probability distribution, these two already provide a good deal of knowledge about the possible behavior of the distribution. For instance, Chebyshev’s inequality states
that \(P(|X - \mu| \geq k \sigma) \leq 1/k^2\), where \(\mu\) is the expectation, \(\sigma^2\) is the variance of the distribution, and \(k > 1\) is a confidence parameter of our choosing. It tells
us that draws from a distribution lie with at least 50% probability within a \([-\sqrt{2} \sigma, \sqrt{2} \sigma]\) interval centered on the expectation.
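Chebyshev's inequality is easy to probe empirically. The sketch below uses an exponential distribution (an arbitrary choice, with mean and standard deviation both 1) and checks the bound for a few values of \(k\):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=1_000_000)   # mean 1, std 1

mu, sigma = x.mean(), x.std()
for k in (1.5, 2.0, 3.0):
    frac = np.mean(np.abs(x - mu) >= k * sigma)  # empirical tail mass
    assert frac <= 1 / k ** 2                    # Chebyshev's bound holds
    print(f"k={k}: empirical {frac:.4f} <= bound {1 / k ** 2:.4f}")
```

The empirical tail probabilities come in far below the bound, as expected: Chebyshev holds for every distribution with finite variance, so it is necessarily loose for any particular one.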
2.6.8. Exercises
1. Give an example where observing more data can reduce the amount of uncertainty about the outcome to an arbitrarily low level.
2. Give an example where observing more data will only reduce the amount of uncertainty up to a point and then no further. Explain why this is the case and where you expect this point to occur.
3. We empirically demonstrated convergence to the mean for the toss of a coin. Calculate the variance of the estimate of the probability that we see a head after drawing \(n\) samples.
1. How does the variance scale with the number of observations?
2. Use Chebyshev’s inequality to bound the deviation from the expectation.
3. How does it relate to the central limit theorem?
4. Assume that we draw \(m\) samples \(x_i\) from a probability distribution with zero mean and unit variance. Compute the averages \(z_m \stackrel{\textrm{def}}{=} m^{-1} \sum_{i=1}^m x_i\). Can we
apply Chebyshev’s inequality for every \(z_m\) independently? Why not?
5. Given two events with probability \(P(\mathcal{A})\) and \(P(\mathcal{B})\), compute upper and lower bounds on \(P(\mathcal{A} \cup \mathcal{B})\) and \(P(\mathcal{A} \cap \mathcal{B})\). Hint:
graph the situation using a Venn diagram.
6. Assume that we have a sequence of random variables, say \(A\), \(B\), and \(C\), where \(B\) only depends on \(A\), and \(C\) only depends on \(B\), can you simplify the joint probability \(P(A,
B, C)\)? Hint: this is a Markov chain.
7. In Section 2.6.5, assume that the outcomes of the two tests are not independent. In particular assume that either test on its own has a false positive rate of 10% and a false negative rate of 1%.
That is, assume that \(P(D =1 \mid H=0) = 0.1\) and that \(P(D = 0 \mid H=1) = 0.01\). Moreover, assume that for \(H = 1\) (infected) the test outcomes are conditionally independent, i.e., that \(P(D_1, D_2 \mid H=1) = P(D_1 \mid H=1) P(D_2 \mid H=1)\) but that for healthy patients the outcomes are coupled via \(P(D_1 = D_2 = 1 \mid H=0) = 0.02\).
1. Work out the joint probability table for \(D_1\) and \(D_2\), given \(H=0\) based on the information you have so far.
2. Derive the probability that the patient is diseased (\(H=1\)) after one test returns positive. You can assume the same baseline probability \(P(H=1) = 0.0015\) as before.
3. Derive the probability that the patient is diseased (\(H=1\)) after both tests return positive.
8. Assume that you are an asset manager for an investment bank and you have a choice of stocks \(s_i\) to invest in. Your portfolio needs to add up to \(1\) with weights \(\alpha_i\) for each stock.
The stocks have an average return \(\boldsymbol{\mu} = E_{\mathbf{s} \sim P}[\mathbf{s}]\) and covariance \(\boldsymbol{\Sigma} = \textrm{Cov}_{\mathbf{s} \sim P}[\mathbf{s}]\).
1. Compute the expected return for a given portfolio \(\boldsymbol{\alpha}\).
2. If you wanted to maximize the return of the portfolio, how should you choose your investment?
3. Compute the variance of the portfolio.
4. Formulate an optimization problem of maximizing the return while keeping the variance constrained to an upper bound. This is the Nobel-Prize winning Markovitz portfolio (Mangram, 2013). To
solve it you will need a quadratic programming solver, something way beyond the scope of this book.
Inverse scattering for non-classical impedance Schrödinger operators
We review recent progress in the direct and inverse scattering theory for one-dimensional Schrödinger operators in impedance form. Two classes of non-smooth impedance functions are considered.
Absolutely continuous impedances correspond to singular Miura potentials that are distributions from (Formula presented); nevertheless, most of the classic scattering theory for Schrödinger operators
with Faddeev–Marchenko potentials is carried over to this singular setting, with some weak decay assumptions. The second class consists of discontinuous impedances and generates Schrödinger operators
with unusual scattering properties. In the model case of piece-wise constant impedance functions with discontinuities on a periodic lattice the corresponding reflection coefficients are periodic. In
both cases, a complete description of the scattering data is given and the explicit reconstruction method is derived.
Original language English
Title of host publication Operator Methods in Mathematical Physics - Conference on Operator Theory, Analysis and Mathematical Physics, OTAMP 2010
Editors Jan Janas, Pavel Kurasov, Ari Laptev, Sergei Naboko
Pages 1-42
Number of pages 42
State Published - 2013
Event 5th International Conference: Operator Theory, Analysis and Mathematical Physics, OTAMP 2010 - Bedlewo, Poland
Duration: Aug 5 2010 → Aug 12 2010
Publication series
Name Operator Theory: Advances and Applications
Volume 227
ISSN (Print) 0255-0156
ISSN (Electronic) 2296-4878
Conference 5th International Conference: Operator Theory, Analysis and Mathematical Physics, OTAMP 2010
Country/Territory Poland
City Bedlewo
Period 8/5/10 → 8/12/10
Bibliographical note
Publisher Copyright:
© 2013 Springer Basel.
• Impedance function
• Inverse scattering problem
• Schrodinger operator
Matrix Groups
Many groups have matrices as their elements. The operation is usually either matrix addition or matrix multiplication.
Example. Let G denote the set of all rows and 3 columns.) Here are some elements of G:
Show that G is a group under matrix addition.
If you add two
That is, addition yields a binary operation on the set.
You should know from linear algebra that matrix addition is associative.
The identity element is the
The inverse of a
Notice that I don't get a group if I try to apply matrix addition to the set of all matrices with real entries. This does not define a binary operation on the set, because matrices of different
dimensions can't be added.
In general, the set of
As a special case, the
Example. Let G be the group of
(a) What is the order of G?
(b) Find the inverse of
(a) A
Hence, the inverse is
Example. Let
In words, G is the set of
Show that G is a group under matrix addition.
That is, if you add two elements of G, you get another element of G. Hence, matrix addition gives a binary operation on the set G.
From linear algebra, you know that matrix addition is associative.
The zero matrix
Finally, the additive inverse of an element
All the axioms for a group have been verified, so G is a group under matrix addition.
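The group axioms above can also be checked numerically. The exact set G was garbled in this copy of the notes, so the sketch below assumes 2×3 real matrices as the carrier set:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.normal(size=(2, 3)) for _ in range(3))  # arbitrary elements of G
zero = np.zeros((2, 3))                                # additive identity

assert (A + B).shape == (2, 3)                         # closure
assert np.allclose((A + B) + C, A + (B + C))           # associativity
assert np.allclose(A + zero, A)                        # identity
assert np.allclose(A + (-A), zero)                     # inverses
print("group axioms hold (numerically)")
```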
Example. Consider the set of matrices
(Notice that x must be nonnegative). Is G a group under matrix multiplication?
First, suppose that
I'll take for granted the fact that matrix multiplication is associative.
The identity for multiplication is
However, not all elements of G have inverses. To give a specific counterexample, suppose that for
Therefore, G is not a group under matrix multiplication.
Example. general linear group. Show that
First, if
Hence, so
I will take it as known from linear algebra that matrix multiplication is associative.
The identity matrix is the
It is the identity for matrix multiplication:
Finally, since is the set of invertible
For example,
The proof that commutative ring in place of
(a) What is the order of
(b) Find the inverse of
(a) Notice that
(b) Recall the formula for the inverse of a
The formula works in this situation, but you have to interpret the fraction as a multiplicative inverse:
On the other hand, the matrix
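To make the "fraction as a multiplicative inverse" remark concrete, here is a sketch of inverting a 2×2 matrix over \(\mathbb{Z}_n\); the modulus \(n = 5\) and the sample matrix are illustrative choices, not taken from the text:

```python
def inv_2x2_mod(a, b, c, d, n):
    """Inverse of [[a, b], [c, d]] over Z_n (requires gcd(det, n) = 1)."""
    det = (a * d - b * c) % n
    det_inv = pow(det, -1, n)          # multiplicative inverse of det mod n
    return [[(det_inv * d) % n, (det_inv * -b) % n],
            [(det_inv * -c) % n, (det_inv * a) % n]]

M_inv = inv_2x2_mod(1, 2, 3, 4, 5)     # det = -2, which is 3 mod 5; 3^(-1) = 2
print(M_inv)                            # [[3, 1], [4, 2]]
```

Multiplying back, \([[1,2],[3,4]] \cdot [[3,1],[4,2]] = [[11,5],[25,11]] \equiv [[1,0],[0,1]] \pmod 5\), as required.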
Example. Show that the following set is a subgroup of
Finally, if
Copyright 2018 by Bruce Ikenaga
Fukaya category
from class:
Homological Algebra
The Fukaya category is a sophisticated structure in mathematics that arises from symplectic geometry, focusing on the study of Lagrangian submanifolds and their intersections. It serves as a
framework for understanding the relationships between these submanifolds through objects and morphisms, making it an essential tool in current research trends within homological algebra. The category
encapsulates deep geometrical and topological information about the underlying symplectic manifold and its Lagrangian submanifolds.
5 Must Know Facts For Your Next Test
1. The Fukaya category is constructed from the data of Lagrangian submanifolds and their intersection points, where objects correspond to Lagrangians and morphisms to intersection data.
2. The Fukaya category is enriched over the A-infinity category, allowing for the inclusion of higher homotopical structures which provide a richer framework for homological algebra.
3. In practice, Fukaya categories have applications in mirror symmetry, relating symplectic geometry to algebraic geometry through derived categories.
4. The formalism of the Fukaya category also allows for the definition of Floer cohomology, which links holomorphic curves to intersection theory in Lagrangian manifolds.
5. Current research trends focus on expanding the Fukaya category's applications, particularly in relation to deformation theory and categorical structures arising from physical theories like string theory.
Review Questions
• How does the Fukaya category utilize Lagrangian submanifolds to establish connections within symplectic geometry?
□ The Fukaya category utilizes Lagrangian submanifolds by defining objects as these submanifolds and morphisms based on their intersection properties. This approach allows mathematicians to
study the relationships between different Lagrangians through their intersections, leading to insights into both geometric properties and algebraic structures. By considering how Lagrangians
interact with each other within a symplectic manifold, researchers can uncover deeper relationships that bridge various areas of mathematics.
• Discuss the significance of A-infinity structures in the context of Fukaya categories and their application in homological algebra.
□ A-infinity structures are crucial for Fukaya categories as they enable a flexible framework where morphism compositions are defined up to homotopy rather than strictly. This flexibility is
essential for capturing the complexities that arise from intersections of Lagrangian submanifolds. In homological algebra, this leads to powerful tools for understanding derived categories
and has profound implications for developments in areas like mirror symmetry and deformation theory.
• Evaluate how current research trends involving Fukaya categories might influence future developments in symplectic geometry and related fields.
□ Current research trends involving Fukaya categories are expanding their applications into new areas such as mirror symmetry and categorical structures arising from theoretical physics. As
these categories become more intertwined with modern mathematical theories, they could lead to breakthroughs that enhance our understanding of both symplectic geometry and its connections
with other disciplines like algebraic geometry. Future developments may also explore further generalizations of Fukaya categories or their relations to other topological constructs, pushing
the boundaries of what is known in contemporary mathematics.
© 2024 Fiveable Inc. All rights reserved.
12183 Feet to Meters
The 12183 ft to m conversion result is displayed in three different forms: as a decimal (which could be rounded), in scientific notation (scientific form, standard index form, or standard form in the United Kingdom), and as a fraction (exact result). Every display form has its own advantages, and in different situations a particular form is more convenient than another. For example, scientific notation is recommended when working with big numbers because it is easier to read and comprehend, while fractions are recommended when more precision is needed.
If we want to calculate how many Meters are 12183 Feet we have to multiply 12183 by 381 and divide the product by 1250. So for 12183 we have: (12183 × 381) ÷ 1250 = 4641723 ÷ 1250 = 3713.3784 Meters
So finally 12183 ft = 3713.3784 m
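The same computation in a couple of lines of Python:

```python
feet = 12183
meters = feet * 381 / 1250   # exact conversion factor: 1 ft = 381/1250 m
print(meters)  # 3713.3784
```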
Bayesian Topology Learning and noise removal from network data
Learning the topology of a graph from available data is of great interest in many emerging applications. Some examples are social networks, internet of things networks (intelligent IoT and industrial
IoT), biological connection networks, sensor networks and traffic network patterns. In this paper, a graph topology inference approach is proposed to learn the underlying graph structure from a given
set of noisy multi-variate observations, which are modeled as graph signals generated from a Gaussian Markov Random Field (GMRF) process. A factor analysis model is applied to represent the graph
signals in a latent space where the basis is related to the underlying graph structure. An optimal graph filter is also developed to recover the graph signals from noisy observations. In the final
step, an optimization problem is proposed to learn the underlying graph topology from the recovered signals. Moreover, a fast algorithm employing the proximal point method has been proposed to solve
the problem efficiently. Experimental results employing both synthetic and real data show the effectiveness of the proposed method in recovering the signals and inferring the underlying graph.
All Science Journal Classification (ASJC) codes
• Software
• Computer Networks and Communications
• Hardware and Architecture
• Human-Computer Interaction
• Information Systems
• Electrical and Electronic Engineering
• Bayesian inference
• Graph signal processing
• Internet of things
• Signal representation
• Topology learning
Singularités et transfert
Annales mathématiques du Québec
37, no. 2
(September 2013) pp. 173-253.
Author's comments (2021-06-22): The paper as presented here is not the published paper. That was unfortunately modified, namely slightly abridged, by the editors without consulting the author and
without his approval. The present paper, the original paper, is the preferred form.
Author's comments: This text is provisional from a mathematical point of view, but it may be some time before the obstacles described in the concluding sections are overcome. Serious progress has
been made by Ali Altuğ.
It has been easy to misconstrue the principal purpose of this paper and of the previous paper, at least my principal purpose. It was to introduce the use of the Poisson formula in combination with
the stable transfer as a central tool in the development of the stable trace formula and its applications to global functoriality. Unfortunately the review in Math. Reviews was inadequate, simply
reproducing the abstract, written not by me but by the editors, ``A transfer similar to that for endoscopy is introduced in the context of stably invariant harmonic analysis on reductive groups. For
the group \(\mathrm{SL}(2)\), the existence of the transfer is verified and some aspects of the passage from the trace formula to the Poisson formula are examined.'' This transfer is for me a central
issue for harmonic analysis on reductive groups over local fields. The problems it raises have, so far as I know, not been solved even over \(\mathbf R\) and \(\mathbf C\). Its construction for \(\
mathrm{SL}(2)\) over \(p\)-adic fields, \(p\) odd, was, and remains, for me an interesting application of the explicit formulas of Sally-Shalika for the characters of that group.
Intel Rolls Out 49 Qubits
With a backdrop of security and stock trading news swirling, Intel’s [Brian Krzanich] opened the 2018 Consumer Electronics Show with a keynote where he looked to future innovations. One of the
bombshells: Tangle Lake, Intel's 49-qubit superconducting quantum test chip. You can catch all of [Krzanich's] keynote in replay and there is a detailed press release covering the details.
This puts Intel on the playing field with IBM, who claims a 50-qubit device, and Google, who planned to complete a 49-qubit device. Intel's previous device only handled 17 qubits. The term qubit refers to “quantum bit,” and the number of qubits is significant because experts think that at around 49 or 50 qubits, quantum computers won't be practical to simulate with conventional computers. At least until
someone comes up with better algorithms. Keep in mind that — in theory — a quantum computer with 49 qubits can process about 500 trillion states at one time. To put that in some apple and orange
perspective, your brain has fewer than 100 billion neurons.
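For the record, the "about 500 trillion" figure is simply 2^49:

```python
states = 2 ** 49
print(f"{states:,}")   # 562,949,953,421,312 (about 563 trillion)
```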
Of course, the number of qubits isn’t the entire story. Error rates can make a larger number of qubits perform like fewer. Quantum computing is more statistical than conventional programming, so it
is hard to draw parallels.
We’ve covered what quantum computing might mean for the future. If you want to experiment on a quantum computer yourself, IBM will let you play on a simulator and on real hardware. If nothing else,
you might find the beginner’s guide informative.
Image credit: [Walden Kirsch]/Intel Corporation
51 thoughts on “Intel Rolls Out 49 Qubits”
1. Ok, so I have a real question: Why are all these qubit chips weird sizes, like 49qubits, or 17? What happened to powers of 2? Y’know, boring numbers like 16, 32, or 64?
1. 17 and 49 are Prime numbers…
1. I love that 49 is now a prime number. “Tough titties 7.” :)
1. It’s a hardware bug and a future security hole. :-D
2. Your not supposed to fact check my claims. ;P
1. You’re*
2. You’re not supposed to spell check them either.
2. wow
2. It wasn’t always this way with regular 0-or-1 bits either. Some early computers used 6 bits per byte, or 9 bits per byte. So your fancy 1960s computer with 8 bytes of memory might have had
anywhere from 48 to 72 bits. We finally settled on 8 bits per byte because it was a convenient power of two, and we only cared that it was a power of two because, as more bits are added, the
number of combinations (ie, addressable numbers) scale as a power of two.
That doesn’t apply to qubits, which have any number of possible states and potential combinations. I don’t even know if it would make sense to group qubits into “qubytes” or not. So with no
grouping of qubits being done, there’s no point in targeting any specific number of bits per package.
3. I guess it’s because 49 is the amount they can pack into their design. Power of two is due to the revolution of the 8 bit byte (there have been other byte sizes) and having 8 bit bytes plus
byte addressable memory means easier porting of software.
There’s not much software for quantum machines, in fact there isn’t really any software at the moment AFAIK. Think analog computers with dials and wires.
4. They are aiming for a million qbits next. The real problem is that qbits and bits are not the same, you may as well be asking why are computer clock speeds not multiples of 2.
2. Does this mean the Intel ME Subsystem will be 500 trilion times easier to exploit?
1. Well. Quantum entanglement sounds like new kind of side channel for attacks. At least they freeze the quantum computers to near 0K to prevent Meltdown.
1. Your joke was so subtle and clever…. Nobody had even suspectre thing!
2. The world where attacks can be taken out on computers via long-distance entanglement is one i’d want to study.
3. this would not be good for unix.
if you cat a file, it might delete it!
1. Oh jesus.
2. It might or might not be deleted…..
1. It’ll be in a state of both deleted and not deleted… just don’t look!
3. Quantum computers can’t replace classical CPUs, only supplement them. Much like you still have a CPU along with your GPU.
1. I mean, technically Quantum computers will be classically Turing Complete (in addition to being quantumly weird), so would be capable of replacing classical CPUs. Now, practically you’ll
still have classical computers for doing classical computations because there’s a LOT less overhead involved.
4. “if you cat a file, it might delete it!”
Not if you cat > Schrödinger > filename
1. You can only sudo delete it
5. You can only read the contents of the file or its location, not both.
4. Was happy to see hackaday covering this as I wanted a in-depth analysis and a technology review/update. Hence I was very disappointed by the actual article…
5. Is it just me or has everyone been quite rude in 2018? Since I don’t read “the usual tech rags” I actually was glad to see this and I will play with the IBM online thing. Maybe you get your own
website see how that goes?
6. “You can catch all of [Krzanch’s] keynote in replay and there is a detailed press release covering the details.”
I want the details not covered by the press release.. you could atleast included some technical aspects in the article? I would think that researching a good article would be easy as there is no
lack of papers on the subject and Quantum Computation in it’s current state is a hack of a hack of a hack that only just might work and then only around 0 Kelvin.
7. What if any impact does the development in this article have on the optimism of the next article regarding the future of bitcoin and blockchains?
8. “” a quantum computer with 49 qubits can process about 500 trillion states at one time. To put that in some apple and orange perspective, your brain has fewer than 100 billion neurons.””
Can anyone explain this sentence to me? One part talks about the number of states the other the number of neurons… what am i missing?
1. Like it said, apple and orange perspective. The two don’t really compare in any meaningful way.
Though I suppose if you followed it up with “and each neuron can have X number of states, for a combined Y states in total for your brain” then it could start getting interesting.
1. I read that some neurons have up to 10000 connections.
How much states do we have?
1. If we have 100 billion neurons and each neuron has 10000 connections and we treat them as binary
then 2^(10,000 * 100,000,000,000) approx one with 3e14 zeros
1. I love you. So many states. Sexy.
2. https://www.sciencedaily.com/releases/2014/01/140116085105.htm
Quantum might have something to do with it.
2. “what am i missing?”
Well, to put it another way: “If it takes a day and half for a hen and a half, to lay an egg and a half, how many pancakes does it take to shingle a dog house?”
1. 42…
9. So… this is an announcement from CES, so take with the appropriate pillar of salt.
That said, from what I gather, if this is real, it puts practical implementations of Shor’s algorithm easily 2-3 years closer than I thought was expected. Is that a fair estimate?
If so, then from where I sit that means that practical attacks on RSA and EC will be feasible now *before* NIST even approves the first post-quantum public key cryptosystems.
This can’t be good.
1. Wouldn’t be surprised if the NSA has something a tad more advanced.
1. I would be surprised. Cuz bitcoin is the canary in that particular coal mine. But the bird is still singing.
Is it?
2. Nah, we’re still at least a decade away. It takes thousands of logical qubits (and even more gates) to run Shor’s algorithm, and those qubits need to be error-corrected to be useful for this
case, so increase the number of physical gates by one or two orders of magnitude, so we’re talking 100,000 to 1,000,000 physical qubits, perhaps.
HOWEVER, someone could just be sucking up encrypted data over the Internet right now and storing it for the “Quantum Jubilee” when the first large-scale, encryption-breaking quantum computers
are able to unlock the encrypted data.
So it’s really not that long from now, and we need to be thinking about this NOW because there are lots of things that we’ll still want to be secret 15 years from now.
1. Highly doubt it about “sucking up data”/etc. Even a small/low bandwidth uplink of ~10Gbit/s ~ 10 petabytes a year (with 50% average load, and counting that encrypted traffic is ~50% of
that load).
So – unless there is a lot of filtering going on – it just would not work for long term plan.
2. There are, indeed a lot of things that we need to remain secret for more than a decade, but the overwhelming majority of encrypted traffic today world-wide has a useful lifetime more on
the order of at *most* months. Just for example, it’s unlikely there will much interest in the fact that read this page and posted this comment ever again.
10. I thought we were up to 512 boxes of dead cats.
1. If that is a D-wave reference, then no they don’t have 512 boxes of dead cats, they have one single box with 512 dead cats piled up inside it.
D-Wave machines can only solve one type of problem called the lowest energy state determination.
Their machines are not capable of manipulating each qbit individually, instead they manipulate the state of all of them to setup the problem to solve, and let them find the lowest energy
state of the system to provide the solution, which they read out as a singular result.
This is why they can have such a massive number of qbits, they don’t need to deal with them individually.
A horrible analogy would be magnetic hard drive platters. Current day drives cram a metric shitton of bits in a tiny physical space, but the drive can read and write each one individually
(this would be the IBM/Intel/Google quantum model)
If you instead took a huge bar magnet to the platter, you could still read and write it but using a ton of those bits as a single entity (d-waves model)
This analogy breaks however due to the fact the d-wave can still exploit quantum features of those qbits to solve certain problems.
11. His first name Ian Brian, right? Not Brain, albeit fitting…
1. dyac*
Is, not Ian
12. What is that number 500 trillion based on? “In theory” we can have an infinite amount of states with one qbit. We just cant read em.
1. Ah never mind its just 2^49 5,6 x 10^14 sorry
13. It is important to understand what quantum computing is good for. Their strong point is not in performing logical operations, like current computers, but in finding patterns, such as finding the
factors of large numbers. So, a low error rate 49 qbit quantum computer can theoretically factor a 49 bit number in a single step. That immediately makes decryption much faster. The more qbits,
the faster the factorization. We are getting closer to the point where the best current encryption is breakable in some reasonable time period. Such pattern recognition could also be very
important for AI.
14. Intel, you say? Would anyone like to speculate [pun intended] what sort of Spectre and Meltdown attacks are possible on this thing? Spectre and Meltdown already feel like weird $#!+ is happening
in alternate universes because they’re not allowed in the real world, yet still detectable from the real world :-O
15. If you understand quantum computing, then you don’t really understand it.
|
{"url":"https://hackaday.com/2018/01/09/intel-rolls-out-49-qubits/","timestamp":"2024-11-11T04:56:28Z","content_type":"text/html","content_length":"165498","record_id":"<urn:uuid:66b0addb-cff8-4249-9b41-e9bef96eb70c>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00184.warc.gz"}
|
Optimal Mechanism Design for Single-Minded Agents
We consider optimal (revenue maximizing) mechanism design in the interdimensional setting, where one dimension is the 'value' of the buyer, and the other is a 'type' that captures some auxiliary information. A prototypical example of this is the FedEx Problem, for which Fiat et al. [2016] characterize the optimal mechanism for a single agent. Another example of this is when the type encodes the buyer's budget [DW17]. The question we address is how far such characterizations can go. In particular, we consider the setting of single-minded agents. A seller has heterogeneous items. A buyer has a valuation v for a specific subset of items S, and obtains value v if and only if he gets all the items in S (and potentially some others too). We show the following results. Deterministic mechanisms (i.e. posted prices) are optimal for distributions that satisfy the "declining marginal revenue" (DMR) property; in this case we give an explicit construction of the optimal mechanism. Without the DMR assumption, the result depends on the structure of the minimal directed acyclic graph (DAG) representing the partial order among types. When the DAG has out-degree at most 1, we characterize the optimal mechanism à la FedEx; this can be thought of as a generalization of the FedEx characterization, since FedEx corresponds to a DAG that is a line. Surprisingly, without the DMR assumption and when the DAG has at least one node with an out-degree of at least 2, we show that there is no hope of such a characterization. The minimal such example happens on a DAG with 3 types. We show that in this case the menu complexity is unbounded, in that for any M, there exist distributions over (v, S) pairs such that the menu complexity of the optimal mechanism is at least M. For the case of 3 types, we also show that for all distributions there exists an optimal mechanism of finite menu complexity. This is in contrast to the case where you have 2 heterogeneous items with additive utilities, for which the menu complexity could be uncountably infinite [DDT15, MV07]. In addition, we prove that optimal mechanisms for Multi-Unit Pricing (without a DMR assumption) can have unbounded menu complexity as well, and we further propose an extension where the menu complexity of optimal mechanisms can be countably infinite, but not uncountably infinite. Taken together, these results establish that optimal mechanisms in interdimensional settings are both surprisingly richer than single-dimensional settings, yet also vastly more structured than multi-dimensional settings.
Original language English (US)
Title of host publication EC 2020 - Proceedings of the 21st ACM Conference on Economics and Computation
Publisher Association for Computing Machinery
Pages 193-256
Number of pages 64
ISBN (Electronic) 9781450379755
State Published - Jul 13 2020
Externally published Yes
Event 21st ACM Conference on Economics and Computation, EC 2020 - Virtual, Online, Hungary
Duration: Jul 13 2020 → Jul 17 2020
Publication series
Name EC 2020 - Proceedings of the 21st ACM Conference on Economics and Computation
Conference 21st ACM Conference on Economics and Computation, EC 2020
Country/Territory Hungary
City Virtual, Online
Period 7/13/20 → 7/17/20
All Science Journal Classification (ASJC) codes
• Computer Science (miscellaneous)
• Economics and Econometrics
• Statistics and Probability
• Computational Mathematics
• duality
• interdimensional
• menu complexity
• optimal mechanism design
• partial lagrangian
• revenue
• single-minded valuations
|
{"url":"https://collaborate.princeton.edu/en/publications/optimal-mechanism-design-for-single-minded-agents","timestamp":"2024-11-03T19:32:24Z","content_type":"text/html","content_length":"54570","record_id":"<urn:uuid:4cad57b8-3d46-4ca2-a1a6-c0ba42dfcb27>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00098.warc.gz"}
|
How do you find the GCF and LCM of numbers? + Example
1 Answer
Given two numbers, you can find the GCF as follows:
Divide the larger number by the smaller to give a quotient and remainder.
If the remainder is zero, then the GCF is the smaller number.
Otherwise repeat with the smaller number and the remainder.
For example, to find the GCF of $252$ and $70$ proceed as follows:
$\frac{252}{70} = 3$ with remainder $42$
$\frac{70}{42} = 1$ with remainder $28$
$\frac{42}{28} = 1$ with remainder $14$
$\frac{28}{14} = 2$ with remainder $0$
So the GCF is $14$
Then to find the LCM, multiply the two original numbers together and divide by the GCF.
In our example, the LCM of $252$ and $70$ is:
$\frac{252 \cdot 70}{14} = 1260$
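The procedure described above is the Euclidean algorithm; here is a short Python sketch of both calculations:

```python
def gcf(a, b):
    # Divide the larger by the smaller; repeat with the smaller
    # number and the remainder until the remainder is zero.
    while b != 0:
        a, b = b, a % b
    return a

def lcm(a, b):
    # Multiply the two numbers together and divide by the GCF.
    return a * b // gcf(a, b)

print(gcf(252, 70))  # 14
print(lcm(252, 70))  # 1260
```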
|
{"url":"https://socratic.org/questions/how-do-you-find-the-gcf-and-lcm-of-numbers#248193","timestamp":"2024-11-09T06:28:52Z","content_type":"text/html","content_length":"33633","record_id":"<urn:uuid:f5858f28-5fbf-4de9-bdf4-cf3cead7245b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00672.warc.gz"}
|
Could someone do a calculation on a real 71B?
01-16-2017, 11:14 AM
(This post was last modified: 01-16-2017 11:17 AM by Thomas Okken.)
Post: #5
Thomas Okken Posts: 1,896
Senior Member Joined: Feb 2014
RE: Could someone do a calculation on a real 71B?
(01-16-2017 10:31 AM)EdS2 Wrote: It's more enlightening to ask for the sin of a number just below pi, because then you get more digits of pi.
Neat! I never realized that.
It works for the number just above pi, too; you just get the ten's complement of those extra digits.
I guess the point is that the calculator performs argument reduction using an extended-precision approximation of pi. Next question, for extra credit: how many digits?
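The same trick is easy to try in ordinary double precision (a quick sketch, not the 71B's decimal arithmetic): for x just below pi, sin(x) ≈ pi − x, so the result exposes digits of pi beyond those typed in.

```python
import math

x = 3.1415926        # pi truncated after 8 digits
print(math.sin(x))   # ~5.3589793e-08: pi's digits continue 5358979...
print(math.pi - x)   # essentially the same value, since sin(pi - e) ~ e
```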
|
{"url":"https://hpmuseum.org/forum/showthread.php?tid=7587&pid=66892&mode=threaded","timestamp":"2024-11-08T08:18:48Z","content_type":"application/xhtml+xml","content_length":"19532","record_id":"<urn:uuid:1ee75781-adae-47f1-9bf1-e37b6293574f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00862.warc.gz"}
|
Midsummer's Eve Giveaway Hop
I Am A Reader, Not A Writer decided to host a spontaneous
Midsummer’s Eve Giveaway Hop and I am joining with a $10 Amazon Gift Certificate.
Enter via the Rafflecopter
Ends 6/30/13
a Rafflecopter giveaway
214 comments:
1. fun giweavay :)
2. ratrianugrah (at) ymail (dot) com
3. thanks for the giveaway!
4. thanks for the giveaway cookiesmasher5@yahoo.com
5. Hope your summer is going well!
My gmail is win4xmas.
6. Thanks for doing this giveaway!
7. Thanks for the chance. ladyaramina10 at yahoo dot com
8. Thanks for a chance to win this.
mariskaveenbrink at gmail dot com
9. would love to win
10. If I win I'd be buying 'Reboot' ! Thanks for the giveaway!
Email: sshah605@gmail.com
11. Thanks for the giveaway!
hippiegirl12 @ gmail . com
12. Thanks for the giveaway!
hippiegirl12 @ gmail . com
13. Thanks for the simple giveaway!
reviewsbyabby at gmail dot com
14. Thanks for the giveaway! Contact me @ EmAnne3000(at)gmail(dot)com
15. thanks for the giveaway!
maybe31 at yahoo.com
16. Thanks for the chance to win!
17. Thanks for the giveaway! :)
musmekipi at gmail dot com
18. Thanks for the giveaway. gwynnethwhite(at)4xforum.com
19. thank you a lot for this giveaway ^^ i could use this give card for some summer release i really want
20. Thanks
lilcrickit at gmail dot com
21. Thanks for the giveaway! nicole(at)nicoleandscott(dot)com
22. Thanks for the giveaway :D
23. rainjeys@yahoo.com
Thank so much!
24. Thank you
eddiem11 (at ) ca (dot) rr (dot) com
25. andrea_rose95 at rocketmail . com
26. Thanks for the giveaway!
27. Thank you for the giveaway!
28. Jen HaileJune 21, 2013 at 1:41AM
Thank you!!!
jenniferhaile1 at gmail dot com
29. Thanks for the giveaway! :)
30. thanks for the giveaway
roswello at hotmail dot com
31. This comment has been removed by the author.
32. Thanks for the giveaway! :)
33. Thank you for the chance to win!
verusbognar (at) gmail (dot) com
34. jjoliet@hotmail.com Thanks for the fun giveaway :)
35. Thanks for the giveaway.
36. Thanks for the giveaway! I'd just it towards a book or two on my wishlist.
Email: maurapedia @ gmail.com
37. Thanks for the opportunity!
ikkinlala AT yahoo DOT ca
38. Thanks for the giveaway.
cdenigan at hotmail dot com
39. Thanks for the giveaway :)
40. Thanks for the giveaway!
41. Doing the happy dance....love amazon gift cards!
books4me67 at ymail dot com
42. Thank you.
43. Thank you for the opportunity :)
jslbrown_03 at yahoo dot com
44. Thanks!
shangelx at gmail dot com
45. Thanks for being part of the hop!
Terri M
Oklahomamommy0306 @ Gmail.com
46. Thank you for the giveaway
47. Thanks for the great giveaway!
saltsnmore at yahoo dot com
48. Allie LJune 21, 2013 at 6:12PM
Thank you for this giveaway :D
iheartmemorethanyou at yahoo dot com
Allie L
49. I entered your Midsummer's Eve Giveaway Hop
It would be great to win the $10 Amazon Gift Certificate.
In response to leave a comment with my email.
I don't want to post it publicly
so i put it in the rafflecopter.
Thank you for having this giveaway!!!!!
50. thank you!
inthehammockblog at gmail dot com
51. Thank you for the giveaway!
shermie40 at yahoo dot com
52. I think I'd put this towards a new reader for my daughter - she right into reading and summer is coming. THANKS. abrennan09@hotmail.com
53. Thank you for the giveaway! (=
sarah_sal90 at yahoo dot com
54. Thanks for the giveaway!
55. Thanks for the Giveaway!
lunacarmin at gmail dot com
56. I'd love to pick up a good summer read, like Meg Wolitzer's "The Interestings" - thanks for the chance to enter!
Geoff K
gkaufmanss at yahoo dot com
57. Thanks for participating in the hop and for a chance at this great prize!
- huntress023(at)hotmail(dot)com
58. Thank you! annauponavon at gmail dot com
59. this would be very helpful! thanks for the chance
60. I would love to win this. Thanks for the amazing giveaway!
Tina M Kohrman
61. Thanks for the awesome giveaway! ;)
cohlesguerra at gmail dot com
62. Awesome giveaway ^_^
63. thanks for the giveaway
64. Thanks for the chance :)
65. happy solstice!!
66. crap, i forgot my email. dookiepookiebear @yahoo.com
67. Thanks a lot for the giveaway!
by.evie at yahoo dot com dot br
68. Thanks!!!
69. I forgot my email! hahaha erikabee1989@q.com
70. georgiabeckman@hotmail.com
Thanks for participating in the hop!
71. ty for hosting this giveaway
sue14625 @ gmail . com
72. Ann FantomJune 22, 2013 at 7:59AM
Thanks for offering this giveaway.
abfantom at yahoo dot com
73. Thanks for participating in this fun hop :)
74. Thank you for the opportunity
clc at neo dot rr dot com
75. Nicole NewbyJune 22, 2013 at 11:10AM
Thanks for offering this giveaway! I hope I win!
76. love amazon
1agordon at live.com
77. YAY! Another HOP! Thanks for the giveaway:)
78. Thanks for the chance.
lazybones344 at gmail dot com
79. ky_grandma40 at yahoo dot com
80. thank you cowboys.wife at hotmail dot com
81. Thanks for the giveaway
leighannecrisp at yahoo dot com
82. Thanks for the giveaway
s2s2 at comcast dot net
83. Thank you so much!
ellaangelus AT gmail DOT com
84. Thanks for the giveaway!
maggie at literary winner dot com
85. Thank you for being part of this hop.
Irene Rosa
86. Thanks for the chance to win!
87. kelle017@aol.com
88. Thanks!
89. Thank you so much!
90. Thanks for participating in the hop.
91. Thanks for the giveaway!
92. Thanks for the giveaway.
magic5905 at embarqmail dot com
93. Thank you! :)
holliister at gmail dot com
94. Thanks!
95. Oh a giftcard *rubbing my hands together with glee at the books that I could buy* you rock!!! :D
96. Spontaneous Giveaway Hops are the BEST! Thank you for spontaneously, at the last minute joining in!
97. Anna GranbergJune 23, 2013 at 2:14AM
Thanks for the giveaway!
98. Love to shop at Amazon for ebooks for my Kindle.
rhoneygtn at yahoo dot com
99. Thanks for the giveaway!
100. Thanks for the giveaway!
livvvy75 @ gmail . com
101. Thank you for the giveaway! :D
102. Happy sunday :) tamarsweeps-at-gmail-dot-com
103. Thanks
104. Thank you so much!
105. Thanks for the giveaway!
justjanhvi at gmail dot com
106. Thanks for the chance to win!
sha4178 at comcastdotnet
107. Thank you!
kerri, kerbear560 at yahoo dot com
108. Thanks.
nhall999 at yahoo dot com
109. Thanks for the chance! Banksd664 @ gmail dot com
110. Love Amazon - thanks!
barbara dot montyj at gmail dot com
111. Thank you for this!
chelseawoodring at hotmail dot com
112. robinblankenship at gmail dot com
113. Thank you for the giveaway!
beef mc big stuff at gmail dot com
114. There are so many great books coming out, this would be helpful.
Heididaily at gmail dot con
115. Thanks for the giveaway!
the.girl.and.the.book at gmail dot com
116. Love this hop :) Thanks for participating! ineedadietcoke at aol dot com
117. Thanks for the great giveaway!
118. Thanks for the GA, maureen.
Here is my email : fingerprintale@yahoo.com
119. Thanks for the giveaway!
120. Thanks for hosting this GA!
joviemaria at yahoo dot com
121. Thanks for giveaway
122. Thanks for the great giveaway!!
jenni_bearz at hotmail dot com
123. Thanks for the great giveaway!
jenni_bearz at hotmail dot com
124. Thanks for the giveaway.
125. Thanks for the giveaway! I always love finding new blogs!
126. I would so love to win this so I can get James Rollins' new book! rainpendragon2684[at]yahoo[dot]com
127. Thanks for the awesome giveaway!
128. Thank you for the chance!
129. tiffanynichole89@gmail.com
130. Thanks for the chance!
131. Thank you very much (:
132. Thanks for the giveaway.
133. Thanks for the chance!
kimberlybreid at hotmail dot com
134. Thank you! bigmak207(at)gmail(dot)com
135. Boo OliverJune 25, 2013 at 1:06AM
Thanks for the chance
136. thank you for the giveaway! my email is aerojenn@aol.com
137. Thank you for the chance to win!
138. bookaholicholly at gmail dot com
139. tabbs55[at]gmail[dot]com
Thanks for the giveaway!
140. Would love to use this for the final Sookie book!
ncprincess96 at yahoo dot com
141. thanks!
magabygc AT gmail.com
142. leah49 (at) gmail (dot) com
143. Thanks for the giveaway!
144. elleberra at gmail dot com
145. Thanks for the giveaway. Happy summer!
mia at jacobsracing dot com
146. Awesome giveaway! I would use the card toward the Kindle my daughter wants for her birthday. Thank you!
Brooke Bumgardner
brooke811 at ymail.com
147. Thanks for participating!
148. Thanks for the giveaway!
crystalfaulkner2000 at yahoo dot com
149. Great giveaway.
150. Thanks for the chance to win!
natasha_donohoo_8 at hotmail dot com
151. Hi
Thanks for the giveaway!
dany7578 at hotmail dot com
152. Thanks for the giveaway
153. Fun Giveaway!!
Chris Sutor
154. Thank you for the giveaway! I could really use that money to buy more books of my ever growing wishlist! :D
(Melissa Robles on Rafflecopter)
155. Thanks for the giveaway!
156. Thanks for the amazing giveaway!
elizabeth @ bookattict . com
157. Thanks for the great giveaway!
bymyself.g AT gmail DOT com
158. Thanks for the giveaway and fun hop! dguillen at kc dot rr dot com
159. Thanks for the great giveaway
160. Thanks,
161. Thank you :)
162. oops .. my email address: readerrabbit22 at gmail.com
163. thanks for the giveaway !
uniquas at ymail dot com
164. thank for the chance
debrahall one nine six one at yahoo dot com
165. Happy Summer! Who else here would kill to have a pool?
Thanks for the giveaway!
166. Thanks for the chance!
bacchus76 at myself dot com
167. Ileana A.June 26, 2013 at 11:57PM
Thank you for the giveaway!
168. love the giveaway neonangelwings(at0yahoo(dot)com
169. Thanks for the giveaway! =)
170. colleen fowlerJune 27, 2013 at 8:32AM
Thank you!! starbirdfan at hotmail dot com
171. Thanks for the giveaway!
octoberbaby1990 at yahoo dot com
172. sara2 (at) chadskiles .com thanks!
173. Thanks for the giveaway!
fecheerleader at yahoo dot com
174. Thanks for the giveaway!
lafittelady at gmail dot com
175. Thanks for the giveaway!
shelver506 at gmail dot com
176. Daniel MJune 27, 2013 at 10:35PM
Thanks for taking part in the hop! - regnod(at)yahoo(d0t)com
177. my email is carawling(at)Hotmail(Dot)com
Thank you for the giveaway!
178. My email is: bunnysmip(AT) gmail (DOT) com
179. Thank you for the giveaway!
kurri121 AT hotmail DOT com
180. COol I have a few books on my I want to read list that a $10 amazon card can go to thanks you for offering
181. great giveaway
susanmplatt AT Hotmail DOT com
182. Thanks for the giveaway!
183. Thank you for the chance!
spamscape [at] gmail [dot] com
184. Thanks for the awesome giveaway!
185. Thanks for the chance!
xsweeteternityx (at) hotmail (dot) com
186. Awesome! This is so cool, thanks!
187. This is so cool, thanks so much for this opportunity! :)
188. Woops, forgot to include my email. It's cherrypie357@gmail.com :) Thank you so much!
189. Thanks for the giveaway!- kim-anhv@hotmail.com
190. Thank you for the giveaway!
191. Thanks for hosting the giveaway
192. mihaeladay [at] gmail [dot] com
193. Thank you for the chance at the giveaway!!
194. Thank you for joining the hop!
Bonnie Hilligoss
195. Thanks for the giveaway!
eswright18 at gmail dot com
196. Thanks for the giveaway
197. Thanks for the chance to be included for your giveaway.
lkish77123 at gmail dot com
198. Thank you for the chance to win.
bituin76 at hotmail dot com
199. Thank you!
mjg0051 at yahoo dot com
200. tinylittlebows at gmail dot com! :)
|
{"url":"https://musingsbymaureen.blogspot.com/2013/06/midsummers-eve-giveaway-hop.html","timestamp":"2024-11-04T18:49:08Z","content_type":"text/html","content_length":"463848","record_id":"<urn:uuid:2547dbd1-0b44-4d0a-8a5c-4a35f3bae4a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00778.warc.gz"}
|
Our users:
My parents are really happy. I brought home my first A in math yesterday and I know I couldnt have done it without the Algebrator.
Pamela Nelson, MT
I really needed help with fractions, the program listed each of the steps, so I was able to get my homework finished in time. Thanks.
Stephen J., NY
The Algebrator was very helpful, it helped me get back on track and bring back my skills for my next school season. The program shows step by step solutions which made learning easier. I think this
would be very helpful to anyone just starting to learn algebra, or even if they already know it, it would sharpen their skills.
K.T., Ohio
I ordered the software late one night when my daughter was having problems in her honors algebra class. It had been many years since I have had algebra and parts of it made sense but I couldn't quite
grasp how to help her. After we ordered your software she was able to see step by step how to solve the problems. Your software definitely saved the day.
Alex Starke, OR
OK here is what I like: much friendlier interface, coverage of functions, trig. better graphing, wizards. However, still no word problems, pre-calc, calc. (Please tell me that you are working on it -
who is going to do my homework when I am past College Algebra?!?
Joanne Ball, TX
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among them?
Search phrases used on 2010-12-08:
• mcdougal littell course 3 answers workbook
• help with algebra 2 integration application connections
• multiply rational algebraic expression
• TI 84 calculator download
• the list of roots and cube in algebra
• "ti 84 plus"+"download"
• multiplying equation practice questions
• systems of equations on TI-84
• Convert mm to pixel
• checks algebra answers
• "order of operation worksheet"
• multiplying and dividing rational expressions
• squareroot algebra examples
• algebra rules of a parabola
• matlab ode45 2nd order equation
• How to Solve Piecewise Functions
• Download Prentice Hall Mathematics
• holt rinehart winston algebra 1 workbook answers
• plotting elipses in mathematica
• a level algebra worksheet
• TI-83 equation solver
• how to graph absolute value inequalities on an integer graph
• World Chemistry McDougal and Littell Chapter 2 Review
• tricky aptitude questions
• solve my algebra problems
• how do you solve an absolute polynomials
• worksheet biology florida answers
• Writing Decimals As Mixed Numbers
• inverse intercept graph
• cubed root calculator
• 7th grade math transformation image
• printable worksheets for 100000 place value
• adding and subtracting integer games
• convert 6'4" to a mix fraction
• free online Sequence solver
• algebra 2 online calculator
• ti-84 emulator
• how to compare decimals to the thousandths
• latest math trivias
• absolute values radicals
• conics: parabolas, ellipses, hyperbolas
• free algebra problem solver software
• casio calculator how to divide
• holt mathematics course 3 texas homework and practice workbook answers
• simplified roots calculator
• find vertex of an equation in intercept form
• animated lesson for mathematics for 9th standard children in india
• common factors chart
• boolean algebra problems
• usable online graphing calculators
• vector projection calculator
• McDougal Littell Pre-algebra book answers
• Absolutely the best algebra solver online
• shortcut method ( subtractions)
• combine like terms calculator
• quadratic equations +log roots
• sample problems on laws of exponent
• solving a differential equation second order
• teach myself algebra calculator
• malaysian grade 8th course math
• fractions as powers
• Write Each Decimal As a Mixed Number
• addition and subtraction formulas of trig examples
• free ratio worksheets
• mcdougal littell algebra 2 answers
• Maple Implicit Derivation
• online calculator rational expressions
• squared numbers practice worksheets
• free worksheet conversion of percent to decimal
• Linear Interpolation Graphing Calculators
• Solve a complete square problem
• simplifying root numbers
• maths worksheets volume free
• "free printable temperature worksheets"
• harcourt brace math grade two practice on my own teachers answers
• free algebra cheating online
• subtracting negative fractions
• finding the slope game
• Scale Factor Worksheets
• standard grade biology online Past Papers
• Grade 8 Math Simple Interest worksheets
• finding domain of square roots
• ti-84 plus accounting
• square root property maths
• ti89 graph inequalities
• Working with LCM + 3rd Grade
Carport Cost Calculators - Online Calculators
Input the length, width, height, and cost per cubic unit to calculate with the basic and advanced calculator.
The Carport Cost Calculator is a valuable tool that helps you figure out the cost of building a carport, whether it's a small one-car unit or a large custom structure.
$\text{CC} = (\text{L} \times \text{W} \times \text{H} \times \text{C}) + (\text{D} \times \text{P})$
To calculate the total cost (CC) of a carport, first multiply the length (L), width (W), and height (H) of the carport by the cost per cubic unit (C) to find the cost of the carport structure. Then
multiply the door cost (D) by the number of doors (P). Add the two results together to get the total cost of the carport.
| Variable | Meaning |
| --- | --- |
| CC | Carport Cost (total cost of the carport) |
| L | Length of the carport (in meters or feet) |
| W | Width of the carport (in meters or feet) |
| H | Height of the carport (in meters or feet) |
| C | Cost per cubic unit (the price per cubic foot or meter of the carport structure) |
| D | Door Cost (the cost of doors or entry points for the carport) |
| P | Number of Doors (the number of doors or entry points) |
Solved Calculations :
Example 1:
• Length (L) = 20 feet
• Width (W) = 15 feet
• Height (H) = 10 feet
• Cost per cubic foot (C) = $5
• Door Cost (D) = $200
• Number of Doors (P) = 2
Calculation Instructions

| Step | Calculation | Instructions |
| --- | --- | --- |
| 1 | $\text{CC} = (\text{L} \times \text{W} \times \text{H} \times \text{C}) + (\text{D} \times \text{P})$ | Start with the formula. |
| 2 | $\text{CC} = (20 \times 15 \times 10 \times 5) + (200 \times 2)$ | Replace L, W, H, C, D, and P with their values. |
| 3 | CC = 15,000 + 400 | Multiply the dimensions and cost per cubic foot, then calculate door costs. |
| 4 | CC = 15,400 | Add the two values together to get the total cost. |
The total cost of the carport is $15,400.
Example 2:
• Length (L) = 25 meters
• Width (W) = 12 meters
• Height (H) = 8 meters
• Cost per cubic meter (C) = $10
• Door Cost (D) = $250
• Number of Doors (P) = 1
Calculation Instructions

| Step | Calculation | Instructions |
| --- | --- | --- |
| 1 | $\text{CC} = (\text{L} \times \text{W} \times \text{H} \times \text{C}) + (\text{D} \times \text{P})$ | Start with the formula. |
| 2 | $\text{CC} = (25 \times 12 \times 8 \times 10) + (250 \times 1)$ | Replace L, W, H, C, D, and P with their values. |
| 3 | CC = 24,000 + 250 | Multiply the dimensions and cost per cubic meter, then calculate door costs. |
| 4 | CC = 24,250 | Add the two values together to get the total cost. |
The total cost of the carport is $24,250.
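The same arithmetic can be scripted. Below is a hypothetical Python sketch of the formula; the function name and argument order are illustrative, not part of the calculator itself:

```python
def carport_cost(length, width, height, cost_per_cubic_unit, door_cost, num_doors):
    """Total carport cost, mirroring CC = (L * W * H * C) + (D * P):
    volume times unit cost for the structure, plus the doors."""
    structure = length * width * height * cost_per_cubic_unit
    doors = door_cost * num_doors
    return structure + doors

# Example 1: 20 ft x 15 ft x 10 ft at $5 per cubic foot, two $200 doors
cost = carport_cost(20, 15, 10, 5, 200, 2)  # 15400
```

Running it on the two worked examples above reproduces the totals of $15,400 and $24,250.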
What is the Carport Cost Calculator?
If you are planning to build a carport, this calculator helps you figure out the cost based on various factors such as size, materials, and location. Whether you're building a basic 20×20 carport for two cars or a larger custom-designed structure, this calculator gives you a quick estimate of the overall cost.
Carports are generally more affordable than garages, with costs ranging from $1,500 to $10,000, depending on whether you use wood, metal, or other materials.
Building a carport can be a great investment. It offers protection for your vehicles and increases the value of your property. However, costs can vary significantly depending on factors like location, material, and labor.
For instance, a metal carport is often less expensive and more durable than wood, while a custom-built carport may cost more but offer added value and design options. This calculator makes it easy: when you input key details such as dimensions, materials, and any additional features like walls or roofing, it gives you a comprehensive cost report.
Final Words:
The Carport Cost Calculator simplifies the process of estimating the total expenses associated with building a carport. By considering factors such as dimensions, material costs, and additional
features, users can obtain a reliable estimate of the project’s financial requirements.
Ordinary differential equations - (Numerical Analysis I) - Vocab, Definition, Explanations | Fiveable
Ordinary differential equations
from class:
Numerical Analysis I
Ordinary differential equations (ODEs) are equations that involve functions of one independent variable and their derivatives. These equations describe a variety of phenomena in engineering, physics,
and other fields by relating the rates of change of a quantity to the quantity itself. ODEs are crucial for modeling dynamic systems and can often be solved using various numerical methods, such as
the Classical Fourth-Order Runge-Kutta Method, which provides an effective approach to approximate solutions to ODEs.
congrats on reading the definition of ordinary differential equations. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. An ordinary differential equation can be classified based on its order, which is determined by the highest derivative present in the equation.
2. The Classical Fourth-Order Runge-Kutta Method is particularly popular for solving first-order ODEs because it balances accuracy and computational efficiency.
3. ODEs can be linear or nonlinear; linear ODEs have solutions that can be added together, while nonlinear ODEs can exhibit more complex behaviors.
4. Many physical systems are modeled using ODEs, such as population growth, heat conduction, and motion dynamics.
5. Exact solutions to ODEs may not always be obtainable, making numerical methods essential for practical applications in science and engineering.
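To make fact 2 concrete, here is a minimal sketch of the classical fourth-order Runge-Kutta method applied to the first-order ODE $y' = y$. The helper names are illustrative; this is one common way to write the method, not the only one:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(f, t0, y0, h, steps):
    """Integrate y' = f(t, y) from t0 with a fixed step size h."""
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y with y(0) = 1 has exact solution e^t; integrate to t = 1
approx = solve(lambda t, y: y, 0.0, 1.0, 0.1, 10)
error = abs(approx - math.e)  # on the order of 1e-6 for h = 0.1
```

The small global error at this modest step size illustrates the method's fourth-order accuracy, which is the "balance of accuracy and computational efficiency" noted above.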
Review Questions
• How does the Classical Fourth-Order Runge-Kutta Method improve upon simpler numerical methods for solving ordinary differential equations?
□ The Classical Fourth-Order Runge-Kutta Method improves upon simpler methods like Euler's method by providing a more accurate approximation through its use of four slope estimates at each
step. By calculating these slopes, the method takes into account the curvature of the solution path, leading to better approximations over larger intervals. This increased accuracy makes it
particularly useful when dealing with stiff ODEs or when precise results are required.
• Discuss the differences between linear and nonlinear ordinary differential equations, providing examples of each type.
□ Linear ordinary differential equations can be expressed in the form where the unknown function and its derivatives appear linearly, such as $$y'' + p(x)y' + q(x)y = g(x)$$. An example is $$y'
= 3y$$. In contrast, nonlinear ODEs include terms that are not linear in the function or its derivatives, like $$y' = y^2$$. Nonlinear ODEs can exhibit behaviors such as bifurcation and
chaos, making them more complex to analyze and solve compared to linear ones.
• Evaluate how the understanding of ordinary differential equations can influence advancements in technology and science.
□ Understanding ordinary differential equations is crucial for advancements in technology and science because they model many real-world systems, from electrical circuits to population
dynamics. By using ODEs, engineers and scientists can predict system behaviors under various conditions, leading to optimized designs and innovations. Moreover, numerical methods for solving
ODEs enable researchers to tackle complex problems that cannot be solved analytically, pushing the boundaries of knowledge in fields such as climate modeling, medicine, and robotics.
Review of Basic Statistical Analysis Methods for Analyzing Data - Part 2
Establishing Trends
Various statistical hypothesis tests have been developed for exploring whether there is something more interesting in one or more data sets than would be expected from the chance fluctuations of Gaussian noise. The simplest of these tests is known as linear regression or ordinary least squares. We will not go into very much detail about the underlying statistical foundations of the approach,
but if you are looking for a decent tutorial, you can find it on Wikipedia. You can also find a discussion of linear regression in another PSU World Campus course: STAT 200.
The basic idea is that we test for an alternative hypothesis that posits a linear relationship between the independent variable (e.g., time, t in the past examples, but for purposes that will later
become clear, we will call it x) and the dependent variable (i.e., the hypothetical temperature anomalies we have been looking at, but we will use the generic variable y).
The underlying statistical model for the data is:
$y_i = a + b \cdot x_i + \epsilon_i$
where i ranges from 1 to N, a is the intercept of the linear relationship between y and x, b is the slope of that relationship, and ε is a random noise sequence. The simplest assumption is that ε is
Gaussian white noise, but we will be forced to relax that assumption at times.
Linear regression determines the best fit values of a and b to the given data by minimizing the sum of the squared differences between the observations y and the values predicted by the linear model
$\hat{y} = a + bx$. The residuals are our estimate of the variation in the data that is not accounted for by the linear relationship, and are defined by

$\epsilon_i = y_i - \hat{y}_i$
For simple linear regression, i.e., ordinary least squares, the estimates of a and b are readily obtained:
$b = \frac{N \cdot \sum y_i x_i - \sum y_i \cdot \sum x_i}{N \cdot \sum x_i^2 - \left(\sum x_i\right)^2}$

$a = \frac{1}{N} \sum y_i - \frac{b}{N} \sum x_i$
The parameter we are most interested in is b, since this is what determines whether or not there is a significant linear relationship between y and x.
The sampling uncertainty in b can also be readily obtained:
$\sigma_b = \frac{\mathrm{std}(\epsilon)}{\left[\sum \left(x_i - \mu(x)\right)^2\right]^{1/2}}$
where std(ε) is the standard deviation of ε and μ(x) is the mean of x. A statistically significant trend amounts to the finding that b is significantly different from zero. The 95% confidence range for b is given by $b \pm 2\sigma_b$. If this interval does not cross zero, then one can conclude that b is significantly different from zero. We can alternatively measure the significance in terms of the linear correlation coefficient, r, between the independent and dependent variables, which is related to b through

$r = b \cdot \frac{\mathrm{std}(x)}{\mathrm{std}(y)}$
r is readily calculated directly from the data:
$r = \frac{\frac{1}{N-1} \sum \left(x - \overline{x}\right)\left(y - \overline{y}\right)}{\mathrm{std}(x) \cdot \mathrm{std}(y)}$
where the over-bar indicates the mean. Unlike b, which has dimensions (e.g., °C per year in the case where y is temperature and x is time), r is conveniently a dimensionless number whose absolute value
is between 0 and 1. The larger the value of r (either positive or negative), the more significant is the trend. In fact, the square of r (r^2) is a measure of the fraction of variation in the data
that is accounted for by the trend.
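The closed-form estimates for b, a, and r above can be checked with a short script. The following sketch (standard-library Python; names are illustrative, and the standard deviations are the usual sample versions with N − 1 in the denominator) implements them directly:

```python
import math

def ols(x, y):
    """Ordinary least squares: returns intercept a, slope b, and correlation r."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    # slope and intercept from the closed-form OLS expressions
    b = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    a = sy / n - b * sx / n
    # sample standard deviations (divide by n - 1)
    mx, my = sx / n, sy / n
    std_x = math.sqrt(sum((v - mx) ** 2 for v in x) / (n - 1))
    std_y = math.sqrt(sum((v - my) ** 2 for v in y) / (n - 1))
    r = b * std_x / std_y
    return a, b, r

# A noiseless linear relation y = 2x + 1 recovers b = 2, a = 1, r = 1
a, b, r = ols([0, 1, 2, 3], [1, 3, 5, 7])
```

On real data the same function returns the trend estimate and correlation used throughout the examples below.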
We measure the significance of any detected trends in terms of a p-value. The p-value is an estimate of the probability that we would wrongly reject the null hypothesis that there is no trend in
the data in favor of the alternative hypothesis that there is a linear trend in the data — the signal that we are searching for in this case. Therefore, the smaller the p value, the less likely that
you would observe as large a trend as is found in the data from random fluctuations alone. By convention, one often requires that p<0.05 to conclude that there is a significant trend (i.e., that only
5% of the time should such a trend have occurred from chance alone), but that is not a magic number.
The choice of p in statistical hypothesis testing represents a balance between the acceptable level of false positives vs. false negatives. In terms of our example, a false positive would be
detecting a statistically significant trend, when, in fact, there is no trend; a false negative would be concluding that there is no statistically significant trend, when, in fact, there is a trend.
A lower threshold (that is, higher p-value, e.g., p = 0.10) makes it more likely to detect a real but weak signal, but also more likely to falsely conclude that there is a real trend when there is
not. Conversely, a higher threshold (that is, lower p-value, e.g., p = 0.01) makes false positives less likely, but also makes it less likely to detect a weak but real signal.
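The p-value itself can be obtained from r and N. One standard route, which the online calculator mentioned later in this section automates, is to convert r into a t statistic with N − 2 degrees of freedom, $t = r\sqrt{(N-2)/(1-r^2)}$, and compare it against Student's t distribution. The text does not spell this formula out, so treat the sketch below as supplementary:

```python
import math

def t_statistic(r, n):
    """t statistic for testing whether a correlation r differs from zero;
    compare against Student's t with n - 2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# r = 0.320 with N = 200 (the trend-plus-white-noise example below)
t = t_statistic(0.320, 200)  # about 4.75, deep in the tail: p < 0.001
```

The final t-to-p lookup is left to a table or calculator; the large t here is consistent with the p < 0.001 significance quoted later for that example.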
There are a few other important considerations. There are often two different alternative hypotheses that might be invoked. In this case, if there is a trend in the data, who is to say whether it
should be positive (b > 0) or negative (b < 0)? In some cases, we might want only to know whether or not there is a trend, and we do not care what sign it has. We would then be invoking a two-sided
hypothesis: is the slope b large enough in magnitude to conclude that it is significantly different from zero (whether positive or negative)? We would obtain a p-value based on the assumption of a
two-sided hypothesis test. On the other hand, suppose we were testing the hypothesis that temperatures were warming due to increased greenhouse gas concentrations. In that case, we would reject a
negative trend as being unphysical — inconsistent with our a priori understanding that increased greenhouse gas concentrations should lead to significant warming. In this case, we would be invoking a
one-sided hypothesis. The results of a one-sided test will double the significance compared with the corresponding two-sided test, because we are throwing out as unphysical half of the random events
(chance negative trends). So, if we obtain, for a given value of b (or r) a p-value of p = 0.1 for the two-sided test, then the p-value would be p = 0.05 for the corresponding one-sided test.
There is a nice online statistics calculator tool, courtesy of Vassar College, for obtaining a p-value (both one-sided and two-sided) given the linear correlation coefficient, r, and the length of
the data series, N. There is still one catch, however. If the residual series ε of equation 6 contains autocorrelation, then we have to correct the degrees of freedom, N', which is less than the
nominal number of data points, N. The correction can be made, at least approximately in many instances, using the lag-one autocorrelation coefficient. This is simply the linear correlation
coefficient, r[1], between ε and a carbon copy of ε lagged by one time step. In fact, r[1] provides an approximation to the parameter ρ introduced in equation 2. If r[1] is found to be positive and statistically significant (this can be checked using the online link provided above), then we can conclude that there is a statistically significant level of autocorrelation in our residuals, which must be corrected for. For a series of length N = 100, using a one-sided significance criterion of p = 0.05, we would need r[1] > 0.17 to conclude that there is significant autocorrelation in our residuals.
Fortunately, the fix is very simple. If we find a positive and statistically significant value of r[1], then we can use the same significance criterion for our trend analysis described earlier,
except we have to evaluate the significance of the value of r for our linear regression analysis (not to be confused with the autocorrelation of residuals r[1]) using a reduced, effective degrees of
freedom N', rather than the nominal sample size N. Moreover, N' is none other than the N' given earlier in equation 3, where we equate $\rho = r_1$.
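The correction just described can be sketched as follows (a hypothetical implementation; the lag-one estimator is the usual sample formula, and N' follows equation 3 with ρ set to r[1]):

```python
def lag1_autocorr(e):
    """Sample lag-one autocorrelation r1 of a residual series e."""
    n = len(e)
    mean = sum(e) / n
    num = sum((e[i] - mean) * (e[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in e)
    return num / den

def effective_dof(n, rho):
    """Effective degrees of freedom N' = N (1 - rho) / (1 + rho)."""
    return n * (1 - rho) / (1 + rho)

# The red-noise example below: rho = 0.54 and N = 200 give N' of roughly 60
nprime = effective_dof(200, 0.54)  # about 59.7
```

One would then evaluate the significance of the regression's r using N' in place of N, exactly as done in the worked examples that follow.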
That's about it for ordinary least squares (OLS), the main statistical tool we will use in this course. Later, we will encounter the more complicated case where there may be multiple independent
variables. For the time being, however, let us consider the problem of trend analysis, returning to the synthetic data series discussed earlier. We will continue to imagine that the dependent
variable (y) is temperature T in °C and the independent variable (x) is time t in years.
First, let us calculate the trend in the original Gaussian white noise series of length N = 200 shown in Figure 2.12(1). The linear trend is shown below:
The trend line is given by: $\hat{T} = 0.0006 \cdot t - 0.1140$, and the regression gives r = 0.0332. So there is an apparent positive warming trend of 0.0006 °C per year, or alternatively, 0.06
°C per century. Is that statistically significant? It does not sound very impressive, does it? And that r looks pretty small! But let us be rigorous about this. We have N = 200, and if we use the
online calculator link provided above, we get a p-value of 0.64 for the (default) two-sided hypothesis. That is huge, implying that we would be foolish in this case to reject the null hypothesis of
no trend. But, you might say, we were looking for warming, so we should use a one-sided hypothesis. That halves the p-value to 0.32. But that is still a far cry from even the least stringent (e.g., p
= 0.10) thresholds for significance. It is clear that there is no reason to reject the null hypothesis that this is a random time series with no real trend.
Next, let us consider the red noise series of length N = 200 shown earlier in Figure 2.12(2).
As it happens, the trend this time appears nominally greater. The trend line is now given by: $\hat{T} = 0.0014 \cdot t - 0.2875$, and the regression gives r = 0.0742. So, there is an apparent
positive warming trend of 0.14 degrees C per century. That might not seem entirely negligible. And for N = 200 and using a one-sided hypothesis test, r = 0.0742 is statistically significant at the p
= 0.148 level according to the online calculator. That does not breach the typical threshold for significance, but it does suggest a pretty high likelihood (15% chance) that we would err by not
rejecting the null hypothesis. At this point, you might be puzzled. After all, we did not put any trend into this series! It is simply a random realization of a red noise process.
Self Check
So why might the regression analysis be leading us astray this time?
Click for answer.
If you said "because we did not account for the effect of autocorrelation" then you are right on target.
The problem is that our residuals are not uncorrelated. They are red noise. In fact, the residuals look a lot like the original series itself:
This is hardly coincidental; after all, the trend only accounts for $r^2 = 0.0742^2 = 0.0055$, i.e., only about half a percent, of the variation in the data. So 99.5% of the variation in the data
is still left behind in the residuals. If we calculate the lag-one autocorrelation for the residual series, we get r[1] = 0.54. That is, again not coincidentally, very close to the value of ρ = 0.6
we know that we used in generating this series in the first place.
How do we determine if this autocorrelation coefficient is statistically significant? Well, we can treat it as if it were a correlation coefficient. The only catch is that we have to use N-1 in place
of N, because there are only N-1 values in the series when we offset it by one time step to form the lagged series required to estimate a lag-one autocorrelation.
Self Check
Should we use a one-sided or two-sided hypothesis test?
Click for answer.
If you said "one-sided" you are correct.
After all, we are interested only in whether there is positive autocorrelation in the time series.
If we found r[1] < 0, that would be an entirely different matter, and a complication we will choose to ignore for now.
If we use the online link and calculate the statistical significance of r[1] = 0.54 with N-1 = 199, we find that it is statistically significant at p < 0.001. So, clearly, we cannot ignore it. We
have to take it into account.
So, in fact, we have to treat the correlation from the regression r = 0.074 as if it has $N' = \frac{1-0.54}{1+0.54} \cdot 200 \approx 0.30 \cdot 200 \approx 60$ degrees of freedom, rather than
the nominal N = 200 degrees of freedom. Using the interactive online calculator, and replacing N = 200 with the value N' = 60, we now find that a correlation of r = 0.074 is only significant at the p
= 0.57 (p = 0.29) for a two-sided (one-sided) test, hardly a level of significance that would cause us to seriously call into doubt the null hypothesis.
At this point, you might be getting a bit exasperated. When, if ever, can we conclude there is a trend? Well, why don't we now consider the case where we know we added a real trend in with the noise,
i.e., the example of Figure 2.12(5) where we added a trend of 0.5°C/century to the Gaussian white noise. If we apply our linear regression machinery to this example, we do detect a notable trend:
Now, that's a trend - your eye isn't fooling you. The trend line is given by: $\hat{T} = 0.0056 \cdot t - 0.619$. So there is an apparent positive warming trend of 0.56 °C per century (the 95% uncertainty range that we get for b, i.e., the range $b \pm 2\sigma_b$, gives a slope anywhere between 0.32 and 0.79 °C per century, which of course includes the true trend (0.5 °C/century) that we know we originally put into the series!). The regression gives r = 0.320. For N = 200 and using a one-sided hypothesis test, r = 0.320 is statistically significant at the p < 0.001 level. And if we calculate the autocorrelation in the residuals, we actually get a small negative value ($r_1 = -0.095$), so autocorrelation of the residuals is not an issue.
Finally, let's look at what happens when the same trend (0.5 °C/century) is added to the random red noise series of Figure 2.12(2), rather than the white noise series of Figure 2.12(1). What result
does the regression analysis give now?
We still recover a similar trend, although it's a bit too large. We know that the true trend is 0.5 °C/century, but the regression gives: $\hat{T} = 0.0064 \cdot t - 0.793$. So, there is an apparent positive warming trend of 0.64 °C per century. The nominal 95% uncertainty range that we get for b is 0.37 to 0.92 °C per century, which again includes the true trend (0.5 °C/century). The regression gives r = 0.315. For N = 200 and using a one-sided hypothesis test, r = 0.315 is statistically significant at the p < 0.001 level. So, are we done?
Not quite. This time, it is obvious that the residuals will have autocorrelation, and indeed we have that r[1] = 0.539, statistically significant at p < 0.001. So, we will have to use the reduced
degrees of freedom N'. We have already calculated N' earlier for ρ = 0.54, and it is roughly N' = 60. Using the online calculator, we now find that the one-sided p = 0.007, i.e., roughly p = 0.01,
which corresponds to a 99% significance level. So, the trend is still found to be statistically significant, but the significance is no longer at the astronomical level it was when the residuals were
uncorrelated white noise. The effect of the "redness" of the noise has been to make the trend less statistically significant because it is much easier for red noise to have produced a spurious
apparent trend from random chance alone. The 95% confidence interval for b also needs to be adjusted to take into account the autocorrelation, though just how to do that is beyond the scope of this course.
Often, residuals have so much additional structure — what is sometimes referred to as heteroscedasticity (how's that for a mouthful?) — that the assumption of simple autocorrelation is itself not
adequate. In this case, the basic assumptions of linear regression are called into question and any results regarding trend estimates, statistical significance, etc., are suspect. In this case, more
sophisticated methods that are beyond the scope of this course are required.
Now, let us look at some real temperature data! We will use our very own custom online Linear Regression Tool written for this course. The demonstration how to use this tool has been recorded in
three parts below. Here is a link to the online statistical calculator tool mentioned in the videos below.
In addition, there is a written tutorial for the tool and these data available at these links: Part 1, Part 2.
Video: Custom Linear Regression Tool - Part 1 (3:16)
Video: Custom Linear Regression Tool - Part 2 (1:03)
Video: Custom Linear Regression Tool - Part 3 (1:53)
You can play around with the temperature data set used in this example using the Linear Regression Tool
How to measure liquid density in context of liquid density
31 Aug 2024
Title: Measuring Liquid Density: A Comprehensive Guide
Abstract: Liquid density is a fundamental property that plays a crucial role in various scientific and engineering applications. Accurate measurement of liquid density is essential for understanding
its behavior, predicting its flow characteristics, and designing equipment for handling and processing liquids. This article provides an overview of the methods used to measure liquid density,
including the principles behind each technique, and discusses their advantages and limitations.
Introduction: Liquid density is defined as the mass per unit volume of a liquid substance (ρ = m/V). It is an important property that affects the behavior of liquids in various contexts, such as
fluid dynamics, chemical engineering, and materials science. Measuring liquid density accurately requires careful consideration of the measurement technique, instrument calibration, and data analysis.
Methods for Measuring Liquid Density:
1. Hydrometer Method
The hydrometer method involves measuring the buoyancy force exerted by a liquid on a calibrated object (hydrometer) submerged in it. The principle is based on Archimedes’ Principle:
ρ = ρ_hydrometer * (V_hydrometer / V_liquid)
where ρ is the density of the liquid, ρ_hydrometer is the density of the hydrometer material, V_hydrometer is the volume of the hydrometer, and V_liquid is the volume of liquid displaced by the hydrometer.
2. Pycnometer Method
The pycnometer method involves measuring the difference in mass between a container filled with liquid and an empty container. The principle is based on the conservation of mass:
ρ = (m_filled - m_container) / V_container = m_liquid / V_container

where ρ is the density of the liquid, m_container is the mass of the empty container, m_filled is the mass of the filled container, m_liquid is the mass of the liquid, and V_container is the volume of the container.
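As a quick illustration, the pycnometer relation reduces to a one-line computation once the liquid mass is taken as the difference between the filled and empty container masses, per the description above. The numbers below are hypothetical; units need only be consistent:

```python
def pycnometer_density(mass_filled, mass_empty, volume):
    """Liquid density from pycnometer readings: the liquid mass is the
    difference between the filled and empty container masses."""
    return (mass_filled - mass_empty) / volume

# Hypothetical 25 mL pycnometer: empty 30.00 g, filled 54.97 g
rho = pycnometer_density(54.97, 30.00, 25.0)  # 0.9988 g/mL, close to water
```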
3. Densitometer Method
The densitometer method involves measuring the refractive index of a liquid using a densitometer instrument. The principle is based on the relationship between density and refractive index:
ρ = (n^2 - 1) / (n^2 + 2 * n * Δn)
where ρ is the density of the liquid, n is the refractive index, and Δn is a correction factor.
4. API Gravity Method
The API gravity method involves measuring the specific gravity of a liquid using an API gravity meter. The principle is based on the relationship between specific gravity and density:
ρ = SG * ρ_water
where ρ is the density of the liquid, SG is the specific gravity, and ρ_water is the density of water.
Conclusion: Measuring liquid density accurately requires careful consideration of the measurement technique, instrument calibration, and data analysis. The methods discussed in this article provide a
comprehensive guide to measuring liquid density in various contexts. By understanding the principles behind each technique and their advantages and limitations, researchers and engineers can select
the most suitable method for their specific application.
Lesson 7
Revisit Percentages
7.1: Number Talk: Percentages (5 minutes)
The purpose of this warm-up is to rekindle anything students remember about percentages and representations they use to reason about them.
Display one problem at a time. Give students 30 seconds of quiet think time for each problem and ask them to give a signal when they have an answer and a strategy. Keep all problems displayed
throughout the talk. Follow with a whole-class discussion.
Representation: Internalize Comprehension. To support working memory, provide students with sticky notes or mini whiteboards.
Supports accessibility for: Memory; Organization
Student Facing
Solve each problem mentally.
1. Bottle A contains 4 ounces of water, which is 25% of the amount of water in Bottle B. How much water is there in Bottle B?
2. Bottle C contains 150% of the water in Bottle B. How much water is there in Bottle C?
3. Bottle D contains 12 ounces of water. What percentage of the amount of water in Bottle B is this?
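For the record, the three answers can be checked with a quick computation (a reference sketch for the teacher, not part of the student-facing materials):

```python
# Bottle B: 4 ounces is 25% of B, so B = 4 / 0.25
b = 4 / 0.25          # 16.0 ounces
# Bottle C: 150% of B
c = 1.5 * b           # 24.0 ounces
# Bottle D: 12 ounces as a percentage of B
d_pct = 12 / b * 100  # 75.0 (percent)
```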
Activity Synthesis
Invite students to share different representations and ways of reasoning. Record student strategies and nonchalantly write an equation for each in the process.
Speaking: MLR8 Discussion Supports. Provide sentence frames to support students with explaining their strategies. For example, "I noticed that ______." or "First, I ________ because ________." When
students share their answers with a partner, prompt them to rehearse what they will say when they share with the whole class. Rehearsing provides students with additional opportunities to clarify
their thinking.
Design Principle(s): Optimize output (for explanation)
7.2: Representing a Percentage Problem with an Equation (20 minutes)
Students perform repeated calculations and then generalize with an algebraic expression (MP8). The purpose of this activity is to help students see that any basic percentage problem like “\(n\)
percent of this is that” can be represented with an equation in the form \(px=q\).
Using any insights from the warm-up as an example, remind students of any efficient method they know to compute a percentage. For example, 25% of 16 can be computed using \(\frac{25}{100} \boldcdot 16\).
Arrange students in groups of 2. Give 5–10 minutes of quiet work time and time to share responses with a partner, followed by a whole-class discussion.
Representation: Internalize Comprehension. Activate or supply background knowledge about computing percentages. Allow students to use calculators to ensure inclusive participation in the activity.
Supports accessibility for: Memory; Conceptual processing
Student Facing
1. Answer each question and show your reasoning.
1. Is 60% of 400 equal to 87?
2. Is 60% of 200 equal to 87?
3. Is 60% of 120 equal to 87?
2. 60% of \(x\) is equal to 87. Write an equation that expresses the relationship between 60%, \(x\), and 87. Solve your equation.
3. Write an equation to help you find the value of each variable. Solve the equation.
Anticipated Misconceptions
Students might not understand that we are trying to find the whole that we know an amount is a certain percent of. Encourage them to draw a tape diagram or a double number line to visualize the
relationship between the three quantities.
Activity Synthesis
The purpose of this discussion is for students to see how writing and solving an equation can be an efficient way to solve a problem about percentages. In the course of the discussion, they should
see three equations written and solved. If any students used representations like tape diagrams or double number lines to reason about the problem, it can be advantageous to display these alongside
the equations so that students can make connections between strategies they understand well and the more abstract strategy of writing and solving an equation.
Writing, Representing: MLR3 Clarify, Critique, Correct. Present an incorrect statement, “For 60% of \(c\) is 43.2, the value of \(c\) is 25.92 because 0.6 times 43.2 is 25.92.” Invite students to ask
clarifying questions about the statement to identify the error. Invite students to work with a partner to write a correct statement using a representation such as a tape diagram or double number
line. This will help students to visualize the relationship between the three quantities and use language to critique and create viable mathematical arguments.
Design Principle(s): Maximize meta-awareness
7.3: Puppies Grow Up, Revisited (10 minutes)
In this activity, students are asked to write an equation but are not given a letter to use. This is an opportunity to explain to students that when they decide to use a letter to represent
something, they need to state what the letter represents.
Keep students in the same groups. Allow students 5 minutes of quiet work time and time to share responses with a partner, followed by a whole-class discussion.
Engagement: Develop Effort and Persistence. Connect a new concept to one with which students have experienced success. For example, invite students to draw a picture or tape diagram to help as an
intermediate step before writing an equation.
Supports accessibility for: Social-emotional skills; Conceptual processing
Student Facing
1. Puppy A weighs 8 pounds, which is about 25% of its adult weight. What will be the adult weight of Puppy A?
2. Puppy B weighs 8 pounds, which is about 75% of its adult weight. What will be the adult weight of Puppy B?
3. If you haven’t already, write an equation for each situation. Then, show how you could find the adult weight of each puppy by solving the equation.
Student Facing
Are you ready for more?
Diego wants to paint his room purple. He bought one gallon of purple paint that is 30% red paint and 70% blue paint. Diego wants to add more blue to the mix so that the paint mixture is 20% red, 80% blue.
1. How much blue paint should Diego add? Test the following possibilities: 0.2 gallons, 0.3 gallons, 0.4 gallons, 0.5 gallons.
2. Write an equation in which \(x\) represents the amount of paint Diego should add.
3. Check that the amount of paint Diego should add is a solution to your equation.
Activity Synthesis
The focus of the discussion should be the selection of a variable to represent an unknown quantity. Invite students to share how they decided where to use a variable, what it represented in the
story, what letter they used and why. Ask why it is important to state what the letter represents and where they made that statement in their solutions.
Speaking, Representing: MLR8 Discussion Supports. To support whole-class discussion about selecting a variable to represent an unknown quantity, provide sentence frames to help students explain their
reasoning. For example, "I knew I needed to use a variable to represent _____ because _____." or "The variable I chose to represent _____ is _____, because_____."
Design Principle(s): Support sense-making
Lesson Synthesis
Students have been solving equations with fraction coefficients in the past few lessons so these percent problems are an application of their prior work. Consider asking some of the following
questions to guide the discussion and help students recognize this connection:
• “How are the equations we wrote today related to the equations we have previously written with fractions? How do solution strategies compare?”
• “Can equations be used to solve other types of problems with percents? For example, where we know the part and the whole but not what percent the part is of the whole?” (Yes. For example, the
equation \(20p=5\) and its solution \(p=\frac14\) or \(\frac{25}{100}\) tells us that 5 is 25% of 20.)
• “Describe a situation where you know what percent a number is of another, but you don't know that second number. Explain to a partner how you would find the second number.”
7.4: Cool-down - Fundraising for the Animal Shelter (5 minutes)
Student Facing
If we know that 455 students are in school today and that number represents 70% attendance, we can write an equation to figure out how many students go to the school.
The number of students in school today is known in two different ways: as 70% of the students in the school, and also as 455. If \(s\) represents the total number of students who go to the school,
then 70% of \(s\), or \(\frac{70}{100}s\), represents the number of students that are in school today, which is 455.
We can write and solve the equation:
\(\displaystyle \begin {align} \frac{70}{100}s&=455\\ s&=455\div\frac{70}{100}\\ s&=455\boldcdot \frac{100}{70}\\ s&=650\end{align} \)
There are 650 students in the school.
In general, equations can help us solve problems in which one amount is a percentage of another amount.
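The worked equation above is an instance of solving \(px=q\) for the whole when a part and a percent are known. A quick numeric check (illustrative code, not part of the curriculum; the function name is made up):

```python
# Find the whole x such that percent% of x equals part,
# i.e. solve (percent/100) * x = part for x.
def solve_percent_of(percent, part):
    return part * 100 / percent

print(solve_percent_of(70, 455))  # 650.0 -> total school enrollment
print(solve_percent_of(25, 8))    # 32.0  -> Puppy A's adult weight
```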
|
{"url":"https://curriculum.illustrativemathematics.org/MS/teachers/1/6/7/index.html","timestamp":"2024-11-04T11:02:59Z","content_type":"text/html","content_length":"91808","record_id":"<urn:uuid:4f4a6ed7-ef69-42a5-95e6-5cdff4e80c08>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00273.warc.gz"}
|
HOME | Hitesh Gakhar
I am a teaching faculty at Michigan State University. In Spring 2024, I am teaching MTH314: Matrix Algebra with Computational Applications. Over Summer 2024, I will be developing the curriculum for
MTH414: Linear Algebra II.
My research interests lie in the field of Topological Data Analysis.
Before joining MSU, I was a postdoctoral associate in the Department of Mathematics at the University of Oklahoma.
I earned my PhD in mathematics from Michigan State University in May 2020. During my PhD under the supervision of Jose Perea, my work revolved around the study of toroidal dynamical systems. I made
contributions to the development of persistent homology by proving Künneth-type theorems, to topological time series analysis by further developing the theory of sliding window embeddings, and to
multiscale data coordinatization in topological spaces by proving stability theorems for circular coordinates.
Recent News: Papers, Organization & Attendance, and Talks
I was at the Symposium on Computational Geometry at the University of Texas, Dallas
I co-organized a Special Session on Topological Persistence at the Spring Southeastern AMS Sectional Meeting (w/ Luis Scoccola and Ling Zhou)
I co-organized a Special Session at the Joint Mathematics Meeting (w/ Harlin Lee and Josue Tonelli-Cueto)
I gave a lecture at the Topological Data Analysis Study Group, hosted virtually by the Erdos Institute
I spoke about MTH314 curriculum reform efforts in the Conversations Among Colleagues seminar in the Department of Mathematics at Michigan State University
I spoke at the TDA Seminar in the Department of CMSE at Michigan State University
I gave a contributed talk at the SIAM Great Lakes Meeting, hosted at Michigan State University
I presented our Toroidal Coordinates paper at the Symposium on Computational Geometry
I gave an outreach talk at the Raising a Mathematician Training Program (Intro Topology for Middle-High Schools Students)
|
{"url":"https://www.hiteshgakhar.com/","timestamp":"2024-11-09T19:23:45Z","content_type":"text/html","content_length":"362537","record_id":"<urn:uuid:822ea1d7-151d-4d9a-b20b-6d2ec0a2bcfc>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00750.warc.gz"}
|
Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets
Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets function as foundational devices in the realm of mathematics, giving an organized yet functional system for learners to explore
and grasp numerical ideas. These worksheets supply an organized strategy to comprehending numbers, supporting a solid structure upon which mathematical efficiency prospers. From the simplest checking
workouts to the ins and outs of advanced estimations, Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets deal with students of diverse ages and skill levels.
Revealing the Essence of Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets
Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets
Convert the mixed numbers to improper fractions, generate equivalent like fractions, and sum up (Easy / Moderate / Difficult). Adding Mixed Numbers, Unlike Denominators (Vertical): in this printable task, find how good you're at adding mixed numbers columnwise. Repeat the steps of finding the least common multiple and equivalent fractions.
Our Adding and Subtracting Fractions and Mixed Numbers worksheets are designed to supplement our Adding and Subtracting Fractions and Mixed Numbers lessons. These ready-to-use printable worksheets help assess student learning. Be sure to check out the fun interactive fraction activities and additional worksheets below.
At their core, Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets are automobiles for conceptual understanding. They envelop a myriad of mathematical principles, directing
learners through the labyrinth of numbers with a collection of engaging and deliberate exercises. These worksheets transcend the boundaries of traditional rote learning, encouraging active
interaction and cultivating an instinctive understanding of numerical relationships.
Nurturing Number Sense and Reasoning
Adding And Subtracting Mixed Fractions Worksheets Worksheets Master
Step 1: Find the Lowest Common Multiple (LCM) of the denominators. Step 2: Multiply the numerator and denominator of each fraction by a number chosen so that they have the LCM as their new denominator. Step 3: Add or subtract the numerators and keep the denominator the same.
This page has worksheets on subtracting fractions and mixed numbers; includes like and unlike denominators. Worksheets for teaching basic fractions, equivalent fractions, simplifying fractions, comparing fractions, and ordering fractions. There are also worksheets on addition, subtraction, multiplication, and division of fractions.
The heart of Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets hinges on growing number sense-- a deep understanding of numbers' definitions and interconnections. They encourage
exploration, inviting students to study arithmetic operations, understand patterns, and unlock the mysteries of series. Via provocative challenges and sensible problems, these worksheets end up being
portals to honing thinking abilities, supporting the analytical minds of budding mathematicians.
From Theory to Real-World Application
Improper Fraction Worksheets
corbettmaths: Mixed Numbers and Improper Fractions Textbook Exercise (September 25, 2019; updated October 10, 2023).
Step 1: Convert all mixed numbers into improper fractions. Step 2: Do you have a common denominator? If not, find one: list the multiples of 4, list the multiples of 7, and take the Lowest Common Multiple (LCM) of 4 and 7.
Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets serve as avenues bridging theoretical abstractions with the apparent realities of daily life. By infusing useful situations
right into mathematical workouts, students witness the importance of numbers in their surroundings. From budgeting and dimension conversions to recognizing analytical information, these worksheets
empower students to possess their mathematical expertise past the confines of the classroom.
Varied Tools and Techniques
Adaptability is inherent in Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets, employing an arsenal of instructional devices to accommodate diverse knowing styles. Visual aids
such as number lines, manipulatives, and digital resources function as companions in visualizing abstract ideas. This diverse strategy makes sure inclusivity, accommodating students with various
preferences, strengths, and cognitive designs.
Inclusivity and Cultural Relevance
In an increasingly varied world, Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets accept inclusivity. They go beyond cultural limits, incorporating examples and issues that
reverberate with learners from varied histories. By including culturally pertinent contexts, these worksheets promote a setting where every student feels stood for and valued, boosting their
connection with mathematical concepts.
Crafting a Path to Mathematical Mastery
Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets chart a course towards mathematical fluency. They infuse willpower, vital thinking, and problem-solving skills, important
attributes not just in maths however in various facets of life. These worksheets encourage students to navigate the complex surface of numbers, supporting an extensive appreciation for the elegance
and logic inherent in mathematics.
Welcoming the Future of Education
In an era marked by technological development, Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets perfectly adjust to electronic platforms. Interactive interfaces and digital
sources increase conventional discovering, using immersive experiences that transcend spatial and temporal borders. This combinations of typical approaches with technological advancements declares an
encouraging period in education, fostering an extra dynamic and interesting knowing setting.
Conclusion: Embracing the Magic of Numbers
Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets represent the magic inherent in maths-- an enchanting trip of expedition, exploration, and mastery. They go beyond conventional
pedagogy, functioning as catalysts for igniting the fires of interest and query. Through Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets, learners start an odyssey, unlocking
the enigmatic globe of numbers-- one problem, one option, at once.
Adding And Subtracting Mixed Fractions A Fractions Worksheet
Improper Fraction Worksheets
Check more of Adding And Subtracting Improper Fractions And Mixed Numbers Worksheets below
Improper Fractions To Mixed Number Worksheet
Worksheet On Improper Fractions
Mixed Fractions Worksheets Free Fraction Worksheets Simple Fractions Learning Fractions
Adding Proper And Improper Fractions With Unlike Denominators And Mixed Fractions Results A
Mixed Numbers To Improper Fractions TMK Education
Write My Paper How To Write An Improper Fraction As A Mixed Number 2017 10 11
Adding And Subtracting Fractions And Mixed Numbers Worksheets
Our Adding and Subtracting Fractions and Mixed Numbers worksheets are designed to supplement our Adding and Subtracting Fractions and Mixed Numbers lessons These ready to use printable worksheets
help assess student learning Be sure to check out the fun interactive fraction activities and additional worksheets below
Add & Subtract Fractions Worksheets For Grade 5 | K5 Learning
5th grade adding and subtracting fractions worksheets including adding like fractions
Adding Proper And Improper Fractions With Like Denominators With Mixed Fraction Results A
Worksheets For Fraction Addition
Image Result For Adding Mixed Fractions With Different Denominators Worksheets Adding Mixed
|
{"url":"https://alien-devices.com/en/adding-and-subtracting-improper-fractions-and-mixed-numbers-worksheets.html","timestamp":"2024-11-08T11:12:16Z","content_type":"text/html","content_length":"27163","record_id":"<urn:uuid:921adff4-9991-4331-b18f-7a1b03bff7c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00288.warc.gz"}
|
$300 000 Mortgage 30 Year Calculator - Certified Calculator
Introduction: Welcome to our $300,000 Mortgage 30-Year Calculator, a tool designed to help you estimate the monthly mortgage payment for a $300,000 loan over a 30-year period. Whether you’re a
first-time homebuyer or considering refinancing, this calculator provides a quick assessment of your potential monthly obligation.
Formula: The calculator utilizes the standard formula for calculating the monthly mortgage payment for a fixed-rate loan. This formula considers the loan amount, interest rate, and loan term to
determine the monthly payment. For this calculator, the loan amount is preset at $300,000, and the loan term is set to 30 years.
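The standard formula the page refers to can be sketched as follows. This is a hedged illustration of the usual fixed-rate amortization formula, not the calculator's actual source code, and the function name is invented:

```python
def monthly_payment(principal, annual_rate_pct, years):
    """Monthly payment on a fixed-rate loan: P * r(1+r)^n / ((1+r)^n - 1)."""
    r = annual_rate_pct / 100 / 12   # monthly interest rate
    n = years * 12                   # total number of payments
    if r == 0:
        return principal / n         # interest-free edge case
    return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

# The example from the page: $300,000 over 30 years at 4.5% annual interest.
print(round(monthly_payment(300_000, 4.5, 30), 2))  # approx. 1520.06
```

Note that, as the FAQ says, this covers principal and interest only; taxes and insurance are separate.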
How to Use:
1. Input the annual interest rate you are considering for your mortgage.
2. Click the “Calculate” button to obtain your estimated monthly payment.
Example: Suppose you are exploring financing options for a $300,000 mortgage over 30 years and are considering an annual interest rate of 4.5%. Inputting these values into the calculator and clicking
“Calculate” will provide you with an estimate of your monthly mortgage payment.
1. Q: Can I change the loan amount for this calculator? A: No, this calculator is specifically set for a $300,000 loan amount. For different loan amounts, consider using our general mortgage
2. Q: Is the interest rate entered as an annual percentage? A: Yes, the interest rate should be entered as an annual percentage. The calculator will convert it to a monthly rate for calculations.
3. Q: Can I use this calculator for other loan terms? A: No, this calculator is configured for a fixed 30-year loan term. For different terms, please use our customizable mortgage calculator.
4. Q: Does the monthly payment include property taxes and insurance? A: No, the calculated monthly payment represents the principal and interest components only. Property taxes and insurance should
be considered separately.
Conclusion: Our $300,000 Mortgage 30-Year Calculator is a helpful tool for individuals considering the affordability of a mortgage at a specific loan amount and term. Use this calculator to estimate
your monthly mortgage payment and make informed decisions about your home financing. Always consult with financial advisors for personalized advice based on your specific circumstances.
|
{"url":"https://certifiedcalculator.com/300-000-mortgage-30-year-calculator/","timestamp":"2024-11-09T23:06:48Z","content_type":"text/html","content_length":"54597","record_id":"<urn:uuid:ecf1e0fb-e466-4aa1-b1da-24803340c510>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00158.warc.gz"}
|
Does Bitcoin use encryption?
No, Bitcoin does not use encryption. It is called “cryptocurrency” because its digital signature algorithm uses the same mathematical techniques that are used for a type of encryption based on elliptic curves. (In particular, Bitcoin uses the ECDSA algorithm with elliptic curve secp256k1.)
For both encryption and digital signatures, each user of the system generates a pair of keys: a public key and a private key. The public and private keys are mathematically related, but (as far as we
know) it is computationally infeasible to derive the private key from the public key. Briefly, public/private key encryption and digital signatures work as follows:
• If Alice wants to encrypt a short message to Bob, Alice uses Bob's public key to encrypt the message, and then Bob uses his private key to decrypt the message.
• If Alice wants to digitally sign a short message, Alice uses her private key to produce a signature, and then anyone who knows Alice's public key can verify that the signature could only be
produced by someone who knows Alice's private key.
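The sign/verify flow in the bullets above can be sketched with a toy ECDSA implementation. To keep the numbers readable this uses a tiny textbook curve (y² = x³ + 2x + 2 over F_17, base point (5, 1) of order 19) rather than Bitcoin's actual curve; it is illustrative only and in no way secure:

```python
# Toy ECDSA demo on a tiny textbook curve. Real Bitcoin signatures use the
# same algorithm on the far larger secp256k1 curve.
import hashlib, random

p, a, b = 17, 2, 2          # curve parameters (toy sizes, NOT secure)
G, n = (5, 1), 19           # base point and its order

def inv(x, m):              # modular inverse
    return pow(x, -1, m)

def add(P, Q):              # elliptic-curve point addition (None = infinity)
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):              # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def keygen():
    d = random.randrange(1, n)          # private key
    return d, mul(d, G)                 # (private key, public key)

def sign(d, msg):
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    while True:
        k = random.randrange(1, n)      # fresh nonce per signature
        r = mul(k, G)[0] % n
        if r == 0: continue
        s = inv(k, n) * (z + r * d) % n
        if s: return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big") % n
    u1, u2 = z * inv(s, n) % n, r * inv(s, n) % n
    R = add(mul(u1, G), mul(u2, pub))
    return R is not None and R[0] % n == r

d, pub = keygen()
sig = sign(d, b"send 1 BTC to Bob")
print(verify(pub, b"send 1 BTC to Bob", sig))   # True
print(verify(pub, b"send 9 BTC to Eve", sig))   # almost surely False
```

With real parameters the same structure applies; only the curve constants, hash handling, and key sizes change.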
In the case of the Bitcoin ledger, each unspent transaction output (UTXO) is usually associated with a public key. If Alice has an UTXO associated with her public key, and she wants to send the money
to Bob, then Alice uses her private key to sign a transaction that spends the UTXO, creating a new UTXO associated with Bob's public key.
|
{"url":"https://www.wklieber.com/does-bitcoin-use-encryption.html","timestamp":"2024-11-14T03:47:06Z","content_type":"text/html","content_length":"3363","record_id":"<urn:uuid:d67d4baa-85e2-4e3f-8171-d84f10596ff1>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00063.warc.gz"}
|
Cluster categories, tropical mirror symmetry and quantization
We will explain how the cluster-tilting subcategories of a cluster category give rise to A- and X-cluster structures in the sense of Fock-Goncharov. In particular, as T varies through the
cluster-tilting subcategories, the Grothendieck groups $K_0(T)$ and $K_0(\mathrm{fd}\, T)$ admit families of homomorphisms tropicalizing the birational maps used to build the corresponding A- and
X-cluster varieties.
Using our framework, we construct an X-cluster character for any cluster category and relate it to the A-cluster character. We also show the existence of a canonical quantization of any Hom-finite
exact cluster category, extending examples originally due to Geiß-Leclerc-Schröer.
This is joint work with Matthew Pressland (Glasgow).
|
{"url":"https://projects.au.dk/homologicalalgebra/seminaraarhus/event/activity/6455?cHash=1a32a2ed724c76048df00e918822b2a8","timestamp":"2024-11-12T10:37:11Z","content_type":"text/html","content_length":"16668","record_id":"<urn:uuid:120f2c96-cebd-4c62-90fb-ed0fb5b43ee7>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00175.warc.gz"}
|
MCLab Group List of Papers -- Query Results
Giuseppe Della Penna, Benedetto Intrigila, Igor Melatti, Enrico Tronci, and Marisa Venturini Zilli. "Bounded Probabilistic Model Checking with the Mur$\varphi$ Verifier." In Formal Methods in
Computer-Aided Design, 5th International Conference, FMCAD 2004, Austin, Texas, USA, November 15-17, 2004, Proceedings, edited by A. J. Hu and A. K. Martin, 214–229. Lecture Notes in Computer Science
3312. Springer, 2004. ISSN: 3-540-23738-0. DOI: 10.1007/978-3-540-30494-4_16.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Synthesizing Control Software from Boolean Relations." International Journal on Advances in Software vol. 5, nr 3&4 (2012): 212–223.
IARIA. ISSN: 1942-2628.
G. Dipoppa, G. D'Alessandro, R. Semprini, and E. Tronci. "Integrating Automatic Verification of Safety Requirements in Railway Interlocking System Design." In High Assurance Systems Engineering,
2001. Sixth IEEE International Symposium on, 209–219. Albuquerque, NM, USA: IEEE Computer Society, 2001. ISSN: 0-7695-1275-5. DOI: 10.1109/HASE.2001.966821.
Giuseppe Della Penna, Benedetto Intrigila, Enrico Tronci, and Marisa Venturini Zilli. "Exploiting Transition Locality in the Disk Based Mur$\varphi$ Verifier." In 4th International Conference on
Formal Methods in Computer-Aided Design (FMCAD), edited by M. Aagaard and J. W. O'Leary, 202–219. Lecture Notes in Computer Science 2517. Portland, OR, USA: Springer, 2002. ISSN: 3-540-00116-6. DOI:
Giuseppe Della Penna, Benedetto Intrigila, Enrico Tronci, and Marisa Venturini Zilli. "Synchronized Regular Expressions." Electr. Notes Theor. Comput. Sci. 62 (2002): 195–210. Notes: TOSCA 2001,
Theory of Concurrency, Higher Order Languages and Types.
Enrico Tronci. "Equational Programming in lambda-calculus." In Sixth Annual IEEE Symposium on Logic in Computer Science (LICS), 191–202. Amsterdam, The Netherlands: IEEE Computer Society, 1991. DOI:
Enrico Tronci. "Equational Programming in Lambda-Calculus via SL-Systems. Part 2." Theoretical Computer Science 160, no. 1&2 (1996): 185–216. DOI: 10.1016/0304-3975(95)00106-9.
Federico Mari, Igor Melatti, Ivano Salvo, and Enrico Tronci. "Synthesis of Quantized Feedback Control Software for Discrete Time Linear Hybrid Systems." In Computer Aided Verification, edited by T.
Touili, B. Cook and P. Jackson, 180–195. Lecture Notes in Computer Science 6174. Springer Berlin / Heidelberg, 2010. DOI: 10.1007/978-3-642-14295-6_20.
Roberto Gorrieri, Ruggero Lanotte, Andrea Maggiolo-Schettini, Fabio Martinelli, Simone Tini, and Enrico Tronci. "Automated analysis of timed security: a case study on web privacy." International
Journal of Information Security 2, no. 3-4 (2004): 168–186. DOI: 10.1007/s10207-004-0037-9.
Giuseppe Della Penna, Antinisca Di Marco, Benedetto Intrigila, Igor Melatti, and Alfonso Pierantonio. "Interoperability mapping from XML schemas to ER diagrams." Data Knowl. Eng. 59, no. 1 (2006):
166–188. Elsevier Science Publishers B. V.. ISSN: 0169-023x. DOI: 10.1016/j.datak.2005.08.002.
|
{"url":"https://mclab.di.uniroma1.it/publications/search.php?sqlQuery=SELECT%20author%2C%20title%2C%20type%2C%20year%2C%20publication%2C%20abbrev_journal%2C%20volume%2C%20issue%2C%20pages%2C%20keywords%2C%20abstract%2C%20thesis%2C%20editor%2C%20publisher%2C%20place%2C%20abbrev_series_title%2C%20series_title%2C%20series_editor%2C%20series_volume%2C%20series_issue%2C%20edition%2C%20language%2C%20author_count%2C%20online_publication%2C%20online_citation%2C%20doi%2C%20serial%20FROM%20refs%20WHERE%20serial%20RLIKE%20%22.%2B%22%20ORDER%20BY%20first_page%20DESC&submit=Cite&citeStyle=Roma&citeOrder=&orderBy=first_page%20DESC&headerMsg=&showQuery=0&showLinks=0&formType=sqlSearch&showRows=10&rowOffset=50&client=&viewType=Print","timestamp":"2024-11-03T15:32:10Z","content_type":"text/html","content_length":"35292","record_id":"<urn:uuid:a4b7dccb-42d4-4b87-9c01-00452c2712cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00583.warc.gz"}
|
Fei Zhou: Professor at Department of Physics & Astronomy, UBC Faculty of Science
Fei Zhou
Relevant Thesis-Based Degree Programs
Graduate Student Supervision
Doctoral Student Supervision
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
Topological quantum phase transitions and topological quantum criticality in superfluids and superconductors (2021)
Superfluids and superconductors can be either fully gapped or gapless with nodal structures in momentum space. Both the gapped and nodal phases can be topologically protected and possess nontrivial
topological invariants. Topological quantum phase transitions exist at zero temperature between different gapped phases or between gapped and nodal phases. These phase transitions can be driven by
chemical potential and/or spin exchange fields. The two phases separated by the phase transition can have the same local order but differ in global topology. Thus, these topological quantum phase
transitions cannot be described by the Landau paradigm of symmetry breaking. Although some aspects of these transitions, such as the change of topological invariants and gapless boundary states, have
been discussed before, a complete theory of these transitions has yet to be developed. In this dissertation, we construct effective field theories to study the universality and thermodynamic
signatures of these transitions. We find four different universality classes in superfluids and superconductors. Certain thermodynamic quantities, such as compressibility or spin susceptibility,
change non-analytically across the transitions. For certain time-reversal symmetry breaking fields that lead to bulk phase transitions, there also exist topological phase transitions on the surface.
All the topological phase transitions studied in this dissertation only exist at zero temperature. At finite temperature, different states are connected by smooth crossovers. There exists a quantum
critical region at finite temperature near the quantum critical point (QCP). In this quantum critical region, thermodynamic quantities have universal scaling dependence on temperature dictated by the
universality class of the QCP. We argue that these scaling properties can be used to probe and differentiate these QCPs. These bulk and surface topological quantum phase transitions are discussed in
various concrete models, such as chiral and time-reversal invariant p-wave superfluids, topological superconductors of emergent Dirac fermions, and topological superconducting model of CuₓBi₂Se₃.
View record
Scale symmetry and the non-equilibrium quantum dynamics of ultra-cold atomic gases (2019)
The study of the quantum dynamics of ultra-cold atomic gases has become a forefront of atomic research. Experiments studying dynamics have become routine in laboratories, and a plethora of phenomena
have been studied. Theoretically, however, the situation is often intractable unless one resorts to numerical or semiclassical calculations. In this thesis we apply the symmetry associated with scale
invariance to study the dynamics of atomic gases, and discuss the implications of this symmetry on the full quantum dynamics. In particular we study the time evolution of an expanding two-dimensional
Bose gas with attractive contact interactions, and the three-dimensional Fermi gas at unitarity. To do this we employ a quantum variational approach and exact symmetry arguments. It is shown that the
time evolution due to a scale invariant Hamiltonian produces an emergent conformal symmetry. This emergent conformal symmetry has implications on the time evolution of an expanding quantum gas. In
addition, we examine the effects of broken scale symmetry on the expansion dynamics. To do this, we develop a non-perturbative formalism that classifies the possible dynamics that can occur. This
formalism is then applied to two systems, an ensemble of two-body systems, and for the compressional and elliptic flow of a unitary Fermi gas, both in three spatial dimensions.
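The emergent conformal symmetry mentioned above is conventionally the SO(2,1) algebra generated by the Hamiltonian $H$, the dilatation operator $D$, and the special conformal generator $C$. The following is a standard construction (of the Pitaevskii–Rosch type), shown here as an illustrative sketch rather than the thesis's own derivation:

```latex
% Generators for N particles of mass m evolving under a scale-invariant H:
D = \sum_{j=1}^{N} \tfrac{1}{2}\left(\mathbf{r}_j\!\cdot\!\mathbf{p}_j
      + \mathbf{p}_j\!\cdot\!\mathbf{r}_j\right), \qquad
C = \sum_{j=1}^{N} \tfrac{1}{2}\, m\, \mathbf{r}_j^{\,2},
\\[4pt]
[D, H] = 2i\hbar H, \qquad [D, C] = -2i\hbar C, \qquad [H, C] = -i\hbar D .
```

The Heisenberg equations then give $\mathrm{d}\langle C\rangle/\mathrm{d}t = \langle D\rangle$ and $\mathrm{d}\langle D\rangle/\mathrm{d}t = 2\langle H\rangle$, so $\langle C\rangle(t) = \langle C\rangle_0 + \langle D\rangle_0\, t + \langle H\rangle\, t^2$ exactly, for any initial state: the cloud's moment of inertia is a pure quadratic in time, one operational signature of the emergent symmetry in an expanding gas.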
Nature of Bose Gases Near Feshbach Resonance: The Interplay Between Few-Body and Many-Body Physics (2014)
In this thesis, we investigated the physics of two- and three-dimensional ultracold Bose gases in the strongly interacting regime at zero temperature. This regime can be experimentally accessed
using a Feshbach resonance. We applied a self-consistent diagrammatic approach to determine the chemical potential of three-dimensional Bose gases for a wide range of interaction values. We showed
that such strongly interacting Bose gases become unstable towards the formation of molecules at a finite positive scattering length. In fact, the interaction between atoms becomes effectively
attractive and the system loses its metastability before reaching the unitary limit. We also found that such systems are nearly fermionized close to the instability point. Near this critical point,
the chemical potential reaches a maximum and the contribution to the system energy due to three-body forces is estimated to be only a few percent. We also studied the same system using a
self-consistent renormalization group method. This approach confirms the existence of an instability point towards the formation of molecules as well as fermionization. We showed that the instability
and accompanying maximum are precursors of the sign change of the effective two-body interaction strength from repulsive to attractive near resonance. In addition, we examined the physics of
two-dimensional Bose gases near resonance using a similar self-consistent diagrammatic approach as the one introduced for three-dimensional Bose gases. We demonstrated that a competition between
three-body attractive interactions and two-body repulsive forces results in the chemical potential of two-dimensional Bose gases to exhibit a maximum at a critical scattering length beyond which
these quantum gases possess a negative compressibility. For larger scattering lengths, the increasingly prominent role played by three-body attractive interactions leads to an onset instability at a
second critical value. The three-body effects studied for these systems are universal, fully characterized by the effective two-dimensional scattering length and are, in comparison to the
three-dimensional case, independent of three-body ultraviolet physics.
Fluctuation Driven Phenomena in Ultracold Spinor Bose Gas (2010)
In this thesis, we have investigated several fluctuation-driven phenomena in ultracold spinor Bose gases. In Bose-Einstein condensates of hyperfine spin-two (F=2) atoms, it is shown that zero-point
quantum fluctuations completely lift the accidental continuous degeneracy in quantum spin nematic phases predicted by mean field analysis, and these fluctuations select out two distinct spin nematic
states with higher symmetries. It is further shown that fluctuations can drive a novel type of coherent spin dynamics which is very sensitive to the variation of quantum fluctuations controlled by
magnetic fields or potential depths in optical lattices. These results have indicated fundamental limitations of precision measurements based on mean field theories. In addition, the fluctuation-driven
coherent spin dynamics studied here is a promising tool to probe correlated fluctuations in many-body systems. In another system -- a two-dimensional superfluid of spin-one (F=1) ²³Na atoms -- we have
investigated spin correlations associated with half-quantum vortices. It is shown that when cold atoms become superfluid below a critical temperature, a unique nonlocal topological order emerges
simultaneously due to fluctuations in low-dimensional systems. Our simulations have indicated that there exists a nonlocal softened π-spin disclination structure associated with a half-quantum vortex
although spin correlations are short ranged. We have also estimated fluctuation-dependent critical frequencies for half-quantum-vortex nucleation in rotating optical traps. These results indicate that
the strongly fluctuating ultracold spinor system is a promising candidate for studying topological orders that are the focus of many other fields.
Master's Student Supervision
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
Fermion doubling in condensed matter physics: simulating a Weyl fermion on a lattice (2022)
It is complicated to promote a continuum quantum theory with fermions to a lattice. This problem is caused by an unexpected appearance of extra states in the lattice theory - the fermion doubling
problem. Nielsen and Ninomiya proved in 1981 that under certain conditions, it is actually impossible to find a lattice that simulates a single Weyl fermion. We realize that one of the crucial
assumptions in their proof is the conservation of electric charge - a condition that does not hold in topological superconductors. A common toy model for topological superconductors is the
one-dimensional Kitaev wire. Thus, we propose a similar two-band three-dimensional lattice that has a single Weyl fermion at low energies. We find this effective theory by combining the degrees of
freedom around the nodal points and then integrating out the extra degrees of freedom using the Schrieffer-Wolff transformation.
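The doubling obstruction described in this abstract can be seen in the simplest possible toy model: naively discretizing a Weyl Hamiltonian by replacing each momentum component $k_i$ with $\sin k_i$. This sketch (an illustrative standard example, not the two-band construction proposed in the thesis) counts the resulting nodes and their chiralities:

```python
import itertools

import numpy as np

# Naive lattice discretization of a Weyl Hamiltonian: H(k) = t * sum_i sin(k_i) * sigma_i.
# Replacing k_i by sin(k_i) keeps the linear node at k = 0, but the Nielsen-Ninomiya
# theorem forces compensating nodes elsewhere in the Brillouin zone.

def d_vector(k, t=1.0):
    """d(k) with H(k) = d(k) . sigma; Weyl nodes are the zeros of d(k)."""
    return t * np.sin(np.asarray(k, dtype=float))

# All zeros of sin(k_i) in [0, 2*pi) have each component in {0, pi}.
nodes = [k for k in itertools.product([0.0, np.pi], repeat=3)
         if np.allclose(d_vector(k), 0.0)]

# Chirality of each node: sign of det(d d_i / d k_j) = sign(prod_i cos(k_i)).
chiralities = [int(np.sign(np.prod(np.cos(np.asarray(k))))) for k in nodes]

print(len(nodes), sum(chiralities))  # 8 nodes, net chirality 0
```

The single continuum node becomes eight lattice nodes whose chiralities cancel pairwise — exactly the situation the thesis evades by dropping charge conservation, as in a superconductor.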
Renormalization approach to bound state energy computation for two ultracold atoms in an optical lattice (2012)
In experiments with ultra-cold gases, two alkali atoms that interact with repulsive or attractive potentials and are confined to an optical lattice can form bound states. In order to compute the
energy of such states formed by atoms in the lowest Bloch band, one needs to take into account the intra-band corrections arising from contributions by higher Bloch bands. Because these corrections
are hard to implement, known calculations tend to neglect them altogether, thus limiting the precision of such computations. To address the problem we apply an approach that uses renormalization-group
equations for an effective potential we introduce. It allows for the expression of the bound state energy in terms of the free-space interaction scattering length and parameters of confining
potentials. Expressions for bound state energies in 1D, 2D and 3D optical lattices are reported. We show that the method we use can be easily tailored to various cases of atoms confined by external
fields of other geometries. A known result for atoms confined to a quasi-2D system is reproduced as an example. The universality of the approach makes it a useful tool for this class of problems.
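The baseline against which such higher-band corrections matter is the single-band two-body problem, which has a closed solution. The sketch below solves the standard lowest-band bound-state condition for two particles with on-site attraction on a 1D tight-binding lattice — a deliberately simplified single-band model, ignoring exactly the higher-Bloch-band contributions the thesis's renormalization-group treatment restores:

```python
import math

def bound_state_energy(U, J=1.0, N=2048):
    """Two-particle bound-state energy at total momentum zero for on-site
    interaction U < 0 in the lowest band of a 1D tight-binding lattice.

    Solves  1 = (U/N) * sum_k 1/(E - 2*eps_k),  eps_k = -2*J*cos(k),
    by bisection below the two-particle band edge at -4*J.
    """
    band = [4.0 * J * math.cos(2.0 * math.pi * n / N) for n in range(N)]

    def f(E):
        return 1.0 - (U / N) * sum(1.0 / (E + b) for b in band)

    lo, hi = -(abs(U) + 4.0 * J + 1.0), -4.0 * J - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid  # E too deep: the bound state lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The 1D Hubbard pair has the closed form E = -sqrt(U^2 + 16 J^2);
# the numerical root should match it closely for large N.
E = bound_state_energy(U=-4.0)
print(E, -math.sqrt(32.0))
```

The left-hand side of the gap equation is monotonic in $E$ below the band edge, so bisection is guaranteed to find the unique root; the deviation of real experiments from this single-band answer is precisely the intra-band correction discussed in the abstract.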
Membership Status
Member of G+PS