content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
Average fair rate of return
(A high rate of return, of course, will beat that, but you'll have to work for it.) Assume that inflation is an annual 3% and capital gains are 15%. If your target is a 15% return before inflation
and taxes, you'll end up with 12.4% return. Real rate of return = Simple/nominal interest rate – Inflation rate For example, if you have an investment that pays 5 percent interest per year, but the
inflation rate is 3 percent, your real rate of return on the investment is 2 percent (5 percent nominal interest rate minus 2 percent inflation rate). Rate-of-return regulation is a system for
setting the prices charged by government-regulated monopolies. The main premise is that monopolies must charge the same price that would ideally prevail in a perfectly-competitive market, equal to
the efficient costs of production, plus a market-determined rate of return on capital.
Let us make an in-depth study of the merits and demerits of rate of return (5) It gives due weight-age to the profitability of the project if based on average rate of return. (5) Rate of return
method does not determine the fair rate of return on Change the discount rate to 7% and the fair value is now $6.63 per share. there to learn about discount rates, weighted average cost of capital
(WACC), CAPM And the discount rate you choose should be based on the rate of return you Return Date. Select sites to compare prices: Find Flights. Alert me when prices change. Skip to content.
Notice: COVID-19 (Coronavirus) may impact your Standard Metered Fare. $2.50 initial charge. Plus 50 cents per 1/5 mile when traveling above 12mph or per 60 seconds in slow traffic or when the
vehicle is
Best Answer: Yes, as jeff410 points out, using the CAPM should yield the answer (assuming the "expected rate of return" is the expected market return). So in this case, the answer would be: Fair
Return = 6% + 1.2(10% - 6%) = 6% + 1.2(4%) = 6% + 4.8%.
Rate Of Return Regulation: A form of price setting regulation where governments determine the fair price which is allowed to be charged by a monopoly. Rate of return regulation is meant to protect So
in a nutshell, my opinion is that you would be fortunate to average around 7-8% rate of return over a long-term basis. There will be periods in which you get a 20% rate of return. These are the great
times. But there will also be times in which you are getting a -15% rate of return. Best Answer: Yes, as jeff410 points out, using the CAPM should yield the answer (assuming the "expected rate of
return" is the expected market return). So in this case, the answer would be: Fair Return = 6% + 1.2(10% - 6%) = 6% + 1.2(4%) = 6% + 4.8%. A Rate of Return (ROR) is the gain or loss of an investment
over a certain period of time. In other words, the rate of return is the gain (or loss) compared to the cost of an initial investment, typically expressed in the form of a percentage. When the ROR is
positive, it is considered a gain and when the ROR is negative, Your expected overall return should be: 8.2% x 0.4 + 4.4% x 0.1 + 11.5% x 0.1 + 5.3% x 0.4 = 6.99%. That's before inflation, money
management fees, etc. Now we have a decision point. Assumed rate of return. I’ve seen people use everything between 5 percent and 12 percent for average annual returns over a lifetime of investing.
But which rate of return is more accurate: 5 percent or 12 percent? Maybe both. There are two big factors to consider: Whether or not the assumed rate of return accounts for inflation.
All tickets and fare types are subject to availability. *The £29 advertised one-way fare is based on a £58 return journey. Train; Train + Hotel; Hotels. Return.
into an insured savings account with a guarantee of .50% return vs the risk of investing explicit cost are where money is going out of your pocket and for wages we Why is it that Implicit cost is not
included on the list for Accounting Profit? A fair rate of return also means what returns investors can realistically expect from shares, bonds, and other financial instruments. For example, in 2017
in a sound economy, investors’ idea of a fair rate of return on bonds was approximately 2%. A fair rate of return is a reasonable profit based on operating expenses and obligations to shareholders.
This term typically arises in a regulatory context, when government officials want to control pricing for the benefit of customers. Rate Of Return Regulation: A form of price setting regulation where
governments determine the fair price which is allowed to be charged by a monopoly. Rate of return regulation is meant to protect
Return Date. Select sites to compare prices: Find Flights. Alert me when prices change. Skip to content. Notice: COVID-19 (Coronavirus) may impact your
(A high rate of return, of course, will beat that, but you'll have to work for it.) Assume that inflation is an annual 3% and capital gains are 15%. If your target is a 15% return before inflation
and taxes, you'll end up with 12.4% return.
3 Sep 2019 This is the fair value that we're solving for. The discount rate is basically the target rate of return that you want on the investment. as their discount rate, which takes into account
the average rate of return that their stock and
What is a fair rate of return on a $70K investment? My girlfriend managed the third-ranked Allstate agency in the Midwest in 2014 and is pursuing opening up an agency herself by July 1, 2015. She
needs to have $70K in the bank at the time of signing. Finally, she estimates that Microsoft's stock has a beta of 1.2, meaning that on average, when the Nasdaq gains 1 percent, Microsoft's stock
gains 1.2 percent. Plugging the information into the CAPM formula tells the investor that she should expected an annual return of 13.9 percent. The average stock market return is around 7%. This
takes into account the periods of highs, such as the 1950s, when returns were as much as 16%. It also takes into account the negative 3% returns in the 2000s. (A high rate of return, of course, will
beat that, but you'll have to work for it.) Assume that inflation is an annual 3% and capital gains are 15%. If your target is a 15% return before inflation and taxes, you'll end up with 12.4%
return. Real rate of return = Simple/nominal interest rate – Inflation rate For example, if you have an investment that pays 5 percent interest per year, but the inflation rate is 3 percent, your
real rate of return on the investment is 2 percent (5 percent nominal interest rate minus 2 percent inflation rate).
Fair rate of return The rate of return that state governments allow a public utility to earn on its investments and expenditures . Utilities then use these profits to pay investors and provide The
Average Rate of Return for Real Estate Investments Real estate investments typically offer compelling returns that are competitive that investments like stocks or corporate bonds. What is a fair rate
of return on a $70K investment? My girlfriend managed the third-ranked Allstate agency in the Midwest in 2014 and is pursuing opening up an agency herself by July 1, 2015. She needs to have $70K in
the bank at the time of signing. Finally, she estimates that Microsoft's stock has a beta of 1.2, meaning that on average, when the Nasdaq gains 1 percent, Microsoft's stock gains 1.2 percent.
Plugging the information into the CAPM formula tells the investor that she should expected an annual return of 13.9 percent.
|
{"url":"https://bestbinaryfqvrprc.netlify.app/duque26767vyx/average-fair-rate-of-return-441.html","timestamp":"2024-11-04T17:18:06Z","content_type":"text/html","content_length":"33516","record_id":"<urn:uuid:33aa2b2b-9c60-40c4-bef6-8ba29cc5b356>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00345.warc.gz"}
|
How to Convert 3/4 Cups of Liquid to Cups in Decimals and Fractions
How much is 3/4 cups of liquid? You can convert this to cups by using the formula 3/4 x 1/2 = 3/8. Similarly, half of 3/4 cup of solid is 3/4 x 1.5 = 1 7/8 cups. But how can you know which fraction
is right? Read on to find out. Also, learn about converting 3/2 cups of liquid to cups in decimals and fractions. The answer may surprise you. The answer is 3/2 cups of liquid, but if you want to use
it in a metric system, you must take into account the decimal point.
1/4 cup plus 2 tablespoons
If you’re wondering how much salt to add to a cup of water, it’s easy to figure out if you use a teaspoon or a tablespoon. A teaspoon is one-third the size of a tablespoon, and one-third of a cup is
one-third of the same size. You can use a teaspoon to substitute 1/4 cup in baking recipes or sauces, but it won’t work for soups or stews.
A tablespoon is one-twelfth of a cup. This amount is traditionally equal to half a US pint, about 200 ml to 250 ml. In other languages, the tablespoon is equal to two-thirds of a cup. The same rule
applies to liquids. You can use a tablespoon to measure small amounts of liquids. For powders and liquids, use a measuring cup or a teaspoon to measure liquids.
3/2 as a fraction
If you have 7 1/2 cups of coffee, divide it by three to find the volume of 2/3 cup. This way, you’ll know how much to add to the other cups. In addition, you’ll know how much to add to the last cup.
There are many ways to get the answer to 3/2 as a fraction of 3/4 cup. Here are some common methods. You can divide the cup by half, or by two-thirds, or by one-third.
To compare fractions, remember that 2/3 is less than 3/4. That’s because you can’t divide the denominator by three-quarters. Therefore, the fraction 17/24 is between two-thirds and three-fourths of a
cup. When comparing fractions, make sure that the denominator and numerator are the same. Otherwise, you might end up with an improper fraction that is too large or too small.
1.5 as a decimal
To convert 1.5 as a decimal, you’ll first have to know what a fraction is. It’s a unit of measurement that’s used in many mathematical operations, and its simplest form is 3/2. The steps to convert
1.5 to a fraction are illustrated in the worksheet below. Using a calculator, multiply 1.5 by 10 or 1.5/1 to find the decimal form of 1.5. You can also look up 1.5 as a decimal online to find the
exact equivalent fraction in your calculator.
The decimal form of 1.5 is 0.015. You can find the percentage value of any decimal by using a decimal calculator. The calculator will also help you convert any percent to decimal form. For example,
if you want to convert 1.5% to 0.015, divide 1.5 by 100. You can also use a calculator that converts decimals to decimal numbers. If you don’t know how to calculate 1.5 as a decimal, you can use
Cumath’s Decimal to Percent Calculator to get the answer.
3/2 as a fraction and 1.5 as a decimal
To convert 3/2 as a fraction to a decimal, first determine which unit the number is in. A fraction is the number divided by two. Then, determine how many decimal places it contains. Decimals are used
when the number is smaller than three. Decimals are also referred to as mixed numbers. For example, 1.5 is a fraction, but a mixed number is a decimal.
A calculator can be used to convert a fraction to a decimal. It works by shifting the decimal point in the denominator to a lower number. Then, enter the result in the calculator and press the
“Submit” button. If you need a decimal to be converted to a fraction, you can use the “A” button to convert the number.
1 / 2 of a 3 / 4 cup
How much flour is one 1/-2 of a cup? In baking and cooking, half a cup of flour is equivalent to one and a half tablespoons. However, when we use “half” to describe a specific amount, we mean
multiplying it by half. This is the equivalent of one and a half tablespoons or two tbsp. We can also refer to one / 2 of a cup as one third of a cup.
It’s not always clear how to convert fractions in cooking and baking. You can’t always count by weight. But halving a cup means doubling it, so it makes sense to use a measuring spoon or cup instead.
The same goes for converting from one standard to another. Unless you’re converting from one type of measurement to another, make sure your tools have standard markings.
|
{"url":"https://nzcafeoftheyear.co.nz/how-to-convert-3-4-cups-of-liquid-to-cups-in-decimals-and-fractions/","timestamp":"2024-11-02T17:43:05Z","content_type":"text/html","content_length":"67547","record_id":"<urn:uuid:9e3c5ca2-ff55-4748-b068-18d25848ff3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00643.warc.gz"}
|
Chiral Asymmetry in Relativistic Matter in a Magnetic Field
Chiral Asymmetry in Relativistic Matter in a Magnetic Field
dense relativistic matter, magnetic field, axial current
In this mini review, we consider chiral asymmetry in the normal ground state of magnetized relativistic matter in the NJL model with local four-fermion interaction and QED. It is shown that the
chiral shift parameter associated with the relative shift of the longitudinal momenta (along the direction of the magnetic field) in the dispersion relations for opposite chirality fermions is
dynamically generated in the normal ground state. This contribution affects fermions in all Fermi levels, including those around the Fermi surface, and contributes to the non-dissipative axial
current taking place in relativistic matter in a magnetic field. The chiral asymmetry of the normal ground state in QED matter in a magnetic field is characterized by an additional chiral structure.
It formally looks like that of the chiral chemical potential, but is an odd function of the longitudinal component of momentum along the magnetic field. The origin of this parity-even chiral
structure is directly connected with the long-range character of the QED interaction. The leading radiative corrections to the chiral separation effect in QED are calculated, and the form of the
Fermi surface in the weak magnetic field is determined.
P.M. Woods, C. Thompson. Soft gamma repeaters and anomalous X-ray pulsars: Magnetar candidates, in Compact Stellar X-ray Sources, edited by W.H.G. Lewin and M. van der Klis (Cambridge University
Press, 2006), pp. 547–586 [astro-ph/0406133].
S. Mereghetti. The strongest cosmic magnets: Soft gamma-ray repeaters and anomalous X-ray pulsars. Astron. Astrophys. Rev. 15, 225 (2008) [DOI: 10.1007/s00159-008-0011-z].
D. Page, S. Reddy. Dense matter in compact stars: Theoretical developments and observational constraints. Ann. Rev. Nucl. Part. Sci. 56, 327 (2006) [DOI: 10.1146/an-nurev.nucl.56.080805.140600].
V. Skokov, A. Illarionov, V. Toneev. Estimate of the magnetic field strength in heavy-ion collisions. Int. J. Mod. Phys. A 24, 5925 (2009) [DOI: 10.1142/ S0217751X09047570].
D. Kharzeev. Parity violation in hot QCD: Why it can happen, and how to look for it. Phys. Lett. B 633, 260 (2006) [DOI: 10.1016/j.physletb.2005.11.075].
D. Kharzeev, A. Zhitnitsky. Charge separation induced by P-odd bubbles in QCD matter. Nucl. Phys. A 797, 67 (2007) [DOI: 10.1016/j.nuclphysa.2007.10.001].
S.L. Adler. Axial vector vertex in spinor electrodynamics. Phys. Rev. 177, 2426 (1969) [DOI: 10.1103/Phys-Rev.177.2426]; J.S. Bell, R. Jackiw. A PCAC puzzle: п0 → yy in the q-model. Nuovo Cim. A 60,
47 (1969) [DOI: 10.1007/BF02823296].
K. Fukushima, D.E. Kharzeev, H.J. Warringa. The chiral magnetic effect. Phys. Rev. D 78, 074033 (2008) [DOI: 10.1103/PhysRevD.78.074033].
D.E. Kharzeev, L.D. McLerran, H.J. Warringa. The effects of topological charge change in heavy ion collisions: “Event by event P and CP violation”. Nucl. Phys. A 803, 227 (2008) [DOI: 10.1016/
K. Fukushima. Views of the chiral magnetic effect, in Strongly Interacting Matter in Magnetic Fields, edited by D. Kharzeev, K. Landsteiner, A. Schmitt, H.-U. Yee. Lect. Notes Phys. (Springer, 2013),
871, pp. 241–259 [DOI: 10.1007/978-3-642-37305-3_9].
B.I. Abelev et al. [STAR Collaboration]. Azimuthal charged-particle correlations and possible local strong parity violation. Phys. Rev. Lett. 103, 251601 (2009) [DOI:/PhysRevLett.103.251601];
Observation of charge-dependent azimuthal correlations and possible local strong parity violation in heavy ion collisions. Phys. Rev. C 81, 054908 (2010) [DOI: 10.1103/PhysRevC.81.054908].
L. Adamczyk et al. [STAR Collaboration]. Measurement of charge multiplicity asymmetry correlations in high energy nucleus-nucleus collisions at 200 GeV. Phys. Rev. C 89, 044908 (2014) [DOI: 10.1103/
G. Wang [STAR Collaboration]. Search for chiral magnetic effects in high-energy nuclear collisions. Nucl. Phys. A 904–905, 248 (2013) [DOI: 10.1016/j.nuclphysa.2013.01.069].
H. Ke [STAR Collaboration]. Charge asymmetry dependency of п+/п− elliptic flow in Au + Au collisions at √SNN = 200 GeV. J. Phys. Conf. Ser. 389, 012035 (2012) [DOI: 10.1088/1742-6596/389/1/012035].
I. Selyuzhenkov [ALICE Collaboration]. Anisotropic flow and other collective phenomena measured in Pb–Pb collisions with ALICE at the LHC. Prog. Theor. Phys. Suppl. 193, 153 (2012) [arXiv: 1111.1875
S.A. Voloshin. Parity violation in hot QCD: How to detect it. Phys. Rev. C 70, 057901 (2004) [DOI: 10.1103/Phys-RevC.70.057901].
D.E. Kharzeev. Topologically induced local P and CP violation in QCD×QED. Annals Phys. 325, 205 (2010) [DOI: 10.1016/j.aop.2009.11.002]; K. Fukushima, D.E. Kharzeev, H.J. Warringa. Electric-current
susceptibility and the chiral magnetic effect. Nucl. Phys. A 836, 311 (2010) [DOI: 10.1016/j.nuclphysa.2010.02.003].
J. Liao. Anomalous transport effects and possible environmental symmetry “violation” in heavy ion collisions. arXiv:1401.2500 [hep-ph].
A. Vilenkin. Cancellation of equilibrium parity violating currents. Phys. Rev. D 22, 3067 (1980) [DOI: 10.1103/PhysRevD.22.3067].
M.A. Metlitski, A.R. Zhitnitsky. Anomalous axion interactions and topological currents in dense matter. Phys. Rev. D 72, 045011 (2005) [DOI: 10.1103/PhysRevD.72.045011].
G.M. Newman, D.T. Son. Response of strongly-interacting matter to magnetic field: Some exact results. Phys. Rev. D 73, 045006 (2006) [DOI: 10.1103/PhysRevD.73.045006].
E.V. Gorbar, V.A. Miransky, I.A. Shovkovy. Chiral asymmetry and axial anomaly in magnetized relativistic matter. Phys. Lett. B 695, 354 (2011) [DOI: 10.1016/j.physletb.2010.11.022].
Y. Burnier, D.E. Kharzeev, J. Liao, H.-U. Yee. Chiral magnetic wave at finite baryon density and the electric quadrupole moment of quark-gluon plasma in heavy ion collisions. Phys. Rev. Lett. 107,
052303 (2011) [DOI: 10.1103/PhysRevLett.107.052303].
G. Basar, G.V. Dunne, The chiral magnetic effect and axial anomalies, in Strongly Interacting Matter in Magnetic Fields, edited by D. Kharzeev, K. Landsteiner, A. Schmitt, H.-U. Yee. Lect. Notes
Phys. 871 (Springer, 2013), pp. 261–294 [DOI: 10.1007/978-3-642-37305-3_10].
J. Ambjorn, J. Greensite, C. Peterson. The axial anomaly and the lattice Dirac sea. Nucl. Phys. B 221, 381 (1983) [DOI: 10.1016/0550-3213(83)90585-0]; N. Sadooghi, A. Jafari Salim. Axial anomaly of
QED in a strong magnetic field and noncommutative anomaly. Phys. Rev. D 74, 085032 (2006) [DOI: 10.1103/PhysRevD.74.085032].
E.V. Gorbar, V.A. Miransky, I.A. Shovkovy. Chiral asymmetry of the Fermi surface in dense relativistic matter in a magnetic field. Phys. Rev. C 80, 032801(R) (2009) [DOI: 10.1103/PhysRevC.80.032801].
E.V. Gorbar, V.A. Miransky, I.A. Shovkovy. Normal ground state of dense relativistic matter in a magnetic field. Phys. Rev. D 83, 085003 (2011) [DOI: 10.1103/Phys-RevD.83.085003].
A. Rebhan, A. Schmitt, S.A. Stricker. Anomalies and the chiral magnetic effect in the Sakai-Sugimoto model. JHEP 01, 026 (2010) [DOI: 10.1007/JHEP01(2010)026].
V.A. Rubakov. On chiral magnetic effect and holography. arXiv: 1005.1888 [hep-ph].
D.K. Hong. Anomalous currents in dense matter under a magnetic field. Phys. Lett. B 699, 305 (2011) [DOI: 10.1016/j.physletb.2011.04.010].
K. Fukushima, M. Ruggieri. Dielectric correction to the chiral magnetic effect. Phys. Rev. D 82, 054001 (2010) [DOI: 10.1103/PhysRevD.82.054001].
X. Wan, A.M. Turner, A. Vishwanath, S.Y. Savrasov. Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates. Phys. Rev. B 83, 205101 (2011) [DOI: 10.1103/
A.A. Burkov, M.D. Hook, L. Balents. Topological nodal semimetals. Phys. Rev. B 84, 235126 (2011) [DOI: 10.1103/PhysRevB.84.235126].
A.A. Burkov, L. Balents. Weyl semimetal in a topological insulator multilayer. Phys. Rev. Lett. 107, 127205 (2011) [DOI: 10.1103/PhysRevLett.107.127205].
M.M. Vazifeh, M. Franz. Electromagnetic response of Weyl semimetals. Phys. Rev. Lett. 111, 027211 (2013) [DOI: 10.1103/PhysRevLett.111.027201].
P.E.C. Ashby, J.P. Carbotte. Magneto-optical conductivity of Weyl semimetals. Phys. Rev. B 87, 245131 (2013) [DOI: 10.1103/PhysRevB.87.245131].
G. Basar, D.E. Kharzeev, Ho-Ung Yee. Triangle anomaly inWeyl semi-metals. Phys. Rev. B 89, 035142 (2014) [DOI: 10.1103/PhysRevB.89.035142].
S.A. Parameswaran, T. Grover, D.A. Abanin, D.A. Pesin, A. Vishwanath. Probing the chiral anomaly with nonlocal transport in three dimensional topological semimetals. Phys. Rev. X 4, 031035 (2014)
[arXiv: 1306.1234 [cond-mat.str-el]].
E.V. Gorbar, V.A. Miransky, I.A. Shovkovy. Engineering Weyl nodes in Dirac semimetals by a magnetic field. Phys. Rev. D 88, 165105 (2013) [DOI: 10.1103/Phys-RevB.88.165105].
E.V. Gorbar, V.A. Miransky, I.A. Shovkovy, Xinyang Wang. Radiative corrections to chiral separation effect in QED. Phys. Rev. D 88, 025025 (2013) [DOI: 10.1103/PhysRevD.88.025025].
E.V. Gorbar, V.A. Miransky, I.A. Shovkovy, Xinyang Wang. Chiral asymmetry in QED matter in a magnetic field. Phys. Rev. D 88, 025043 (2013) [DOI: 10.1103/Phys-RevD.88.025043].
V.P. Gusynin, V.A. Miransky, I.A. Shovkovy. Catalysis of dynamical flavor symmetry breaking by a magnetic field in 2 + 1 dimensions. Phys. Rev. Lett. 73, 3499 (1994) [DOI: 10.1103/
PhysRevLett.73.3499]; Dynamical flavor symmetry breaking by a magnetic field in 2 + 1 dimensions. Phys. Rev. D 52, 4718 (1995) [DOI: 10.1103/PhysRevD.52.4718].
V.P. Gusynin, V.A. Miransky, I.A. Shovkovy. Dimensional reduction and dynamical chiral symmetry breaking by a magnetic field in 3 + 1 dimensions. Phys. Lett. B 349, 477 (1995) [DOI: 10.1016/0370-2693
(95)00232-A]; Dimensional reduction and catalysis of dynamical symmetry breaking by a magnetic field. Nucl. Phys. B 462, 249 (1996) [DOI: 10.1016/0550-3213(96)00021-1].
M.E. Peskin, D.V. Schroeder. An Introduction To Quantum Field Theory (Westview Press, 1995) [ISBN-13: 978-0813350196].
B.L. Ioffe. Axial anomaly: The modern status. Int. J. Mod. Phys. A 21, 6249 (2006) [DOI: 10.1142/S0217751X06035051].
J.S. Schwinger. On gauge invariance and vacuum polarization. Phys. Rev. 82, 664 (1951) [DOI: 10.1103/Phys-Rev.82.664].
E.C.G. Stueckelberg. Theory of the radiation of photons of small arbitrary mass. Helv. Phys. Acta 30, 209 (1957).
S.L. Adler, W.A. Bardeen. Absence of higher order corrections in the anomalous axial vector divergence equation. Phys. Rev. 182, 1517 (1969) [DOI: 10.1103/Phys-Rev.182.1517].
I.S. Gradshtein, I.M. Ryzhik. Table of Integrals, Series and Products (Academic Press, 1994) [ISBN: 978-0122947551, 012294755X].
K.Y. Kim, B. Sahoo, H.U. Yee. Holographic chiral magnetic spiral. JHEP 10, 005 (2010) [DOI: 10.1007/JHEP10(2010)005].
G. Ba¸sar, G.V. Dunne, D.E. Kharzeev. Chiral magnetic spirals. Phys. Rev. Lett. 104, 232301 (2010) [DOI: 10.1103/PhysRevLett.104.232301].
S.Weinberg. TheQuantumTheory of Fields.Vol. 1: Foundations (Cambridge University Press, 1995), pp. 564–596.
How to Cite
Gorbar, E. V. (2019). Chiral Asymmetry in Relativistic Matter in a Magnetic Field. Ukrainian Journal of Physics, 11(1), 3. Retrieved from https://ujp.bitp.kiev.ua/index.php/ujp/article/view/2019656
Copyright Agreement
License to Publish the Paper
Kyiv, Ukraine
The corresponding author and the co-authors (hereon referred to as the Author(s)) of the paper being submitted to the Ukrainian Journal of Physics (hereon referred to as the Paper) from one side and
the Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, represented by its Director (hereon referred to as the Publisher) from the other side have come to the
following Agreement:
1. Subject of the Agreement.
The Author(s) grant(s) the Publisher the free non-exclusive right to use the Paper (of scientific, technical, or any other content) according to the terms and conditions defined by this Agreement.
2. The ways of using the Paper.
2.1. The Author(s) grant(s) the Publisher the right to use the Paper as follows.
2.1.1. To publish the Paper in the Ukrainian Journal of Physics (hereon referred to as the Journal) in original language and translated into English (the copy of the Paper approved by the Author(s)
and the Publisher and accepted for publication is a constitutive part of this License Agreement).
2.1.2. To edit, adapt, and correct the Paper by approval of the Author(s).
2.1.3. To translate the Paper in the case when the Paper is written in a language different from that adopted in the Journal.
2.2. If the Author(s) has(ve) an intent to use the Paper in any other way, e.g., to publish the translated version of the Paper (except for the case defined by Section 2.1.3 of this Agreement), to
post the full Paper or any its part on the web, to publish the Paper in any other editions, to include the Paper or any its part in other collections, anthologies, encyclopaedias, etc., the Author(s)
should get a written permission from the Publisher.
3. License territory.
The Author(s) grant(s) the Publisher the right to use the Paper as regulated by sections 2.1.1–2.1.3 of this Agreement on the territory of Ukraine and to distribute the Paper as indispensable part of
the Journal on the territory of Ukraine and other countries by means of subscription, sales, and free transfer to a third party.
4. Duration.
4.1. This Agreement is valid starting from the date of signature and acts for the entire period of the existence of the Journal.
5. Loyalty.
5.1. The Author(s) warrant(s) the Publisher that:
– he/she is the true author (co-author) of the Paper;
– copyright on the Paper was not transferred to any other party;
– the Paper has never been published before and will not be published in any other media before it is published by the Publisher (see also section 2.2);
– the Author(s) do(es) not violate any intellectual property right of other parties. If the Paper includes some materials of other parties, except for citations whose length is regulated by the
scientific, informational, or critical character of the Paper, the use of such materials is in compliance with the regulations of the international law and the law of Ukraine.
6. Requisites and signatures of the Parties.
Publisher: Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine.
Address: Ukraine, Kyiv, Metrolohichna Str. 14-b.
Author: Electronic signature on behalf and with endorsement of all co-authors.
|
{"url":"https://ujp.bitp.kiev.ua/index.php/ujp/article/view/2019656","timestamp":"2024-11-08T09:03:35Z","content_type":"text/html","content_length":"64379","record_id":"<urn:uuid:2b3f51d6-0243-47e4-a3e4-6feb7ab15533>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00275.warc.gz"}
|
This option is intended for use with other software, and requires a knowledge of how to construct a mol.in file. Users should become familiar with MM software before attempting to use QMMM.
In the QM/MM approach, it is crucial to describe coupling between the QM and the MM regions, in particular polarization of solute by polar environment. Not doing so will for example lead to having
NaCl dissociating to Na and Cl instead of Na^+ Cl^- in water^6.
In the case of semi-empirical Hamiltonians it is quite simple to incorporate the effect of solvent into semi-empirical QM/MM Hamiltonian by adding the interaction energy of an electron with the
electrostatic potential created by solvent (MM atoms) to the one-electron diagonal elements of Hamiltonian^1,7,8:
here h[νν] is the gas phase one-electron Hamiltonian elements with the ν-th orbital centered on atom i and q[j] being the charge of the MM atom j where j goes over all MM atoms.
This simple modification is implemented^5 in MOPAC.The required electrostatic potentials on QM atoms from all MM atoms are calculated with the MOLARIS-XG simulation package, which also calculates the
corresponding QM/MM electrostatic energy derivatives using the charges provided by MOPAC. This allows to perform almost any QM/MM calculation (e.g. activation free energy barrier, average dipole
moments, absorption and emission spectra). Furthermore, this functionality allows for a general interface between MOPAC and any MM program (e.g. GROMACS)
Format of MOL.IN
Note that MM program must provide an additional input file mol.in in the following format:
<empty line>
n1 n2
0 0 0 0 φ(1)
0 0 0 0 φ(n)
where n1 is the number of atoms, n1 is the number of linked atoms, n is number of QM atoms (including link atoms), and
φ(i) units are kcal/(mol∙e) where e is the elementary charge. If the atomic distance, r[ij], is given in Ångstroms, and q[j] is expressed in fraction of the elementary charge (e.g. -1 for an
electron) then φ(i) is given by:
This file should be in the directory where MOPAC is being executed.
The output created by MOPAC includes the effect of environment on energy, heat of formation, charge distribution, dipole moments. Energy derivatives include only QM intra-molecular contributions.
However, the QM charge distribution can be used to calculate electrostatic QM/MM contribution to force according to the MM force-field formalism.
The file mol.in provides the various φ(i). These are given in lines 3 on, in column 5, as shown in the following example. The first line is is skipped, the second line of mol.in should have two
integer numbers that add to the total the total number of atoms. If you are building this file using an editor, a useful default is to set the first number equal to the total number of atoms and to
make the second number zero. Typically, this file would be created by the MM program, in which case do not edit the file.
6 0 # of qmmm atoms, # of link atoms in Region I
CL -1.591010336 -3.497323620 -4.177329152 119.381953977
C 0.623273531 -3.927769978 -4.243650888 88.802327810
H 0.627631085 -3.831528682 -5.334074435 77.449540155
H 0.737788528 -3.010768158 -3.634868517 83.899739734
H 0.444587282 -4.863821218 -3.677635261 90.477795343
CL 2.837655032 -4.254371189 -4.197078072 120.024810232
Reading is activated by keyword QMMM, that is the PM6.mop input file looks like:
PM6 1SCF CHARGE=-1 GRAD QMMM
snapshot of MD step 0
CL -1.5910103360 1 -3.4973236200 1 -4.1773291520 1
C 0.6232735310 1 -3.9277699780 1 -4.2436508880 1
H 0.6276310850 1 -3.8315286820 1 -5.3340744350 1
H 0.7377885280 1 -3.0107681580 1 -3.6348685170 1
H 0.4445872820 1 -4.8638212180 1 -3.6776352610 1
CL 2.8376550320 1 -4.2543711890 1 -4.1970780720 1
Use of MOPAC standard output AUX file
Advanced users interested in implementing the QM/MM interface with MM-packages should use the MOPAC2016 keyword AUX to create an auxiliary file which contains all the data necessary for propagating
MD trajectories and for MC sampling. To use this function in a MOPAC job, simply include keyword AUX. An example of a typical keyword line would be:
PM6 1SCF CHARGE=-1 GRAD AUX QMMM
(the keywords 1SCF and GRAD are both necessary; 1SCF because the gradients of the supplied geometry are needed, and GRAD because, by default, gradients are not calculated when 1SCF is used.)
In the <file>.aux file the corresponding entries for energies, heat of formation, atomic charges and gradients can be found under Final SCF results, e.g.:
# Final SCF results #
DIP_VEC:DEBYE[3]= +0.16008D+00 -0.85662D-01 -0.23579D+00
-0.82604 +0.13275 +0.18844 +0.16625 +0.14813 -0.80953
6.6683 0.2249 1.9733 0.9287 -5.8534 -16.0747 0.2434 6.4429 -7.0648 -3.0670
8.3435 6.1962 -3.9386 -9.9628 13.9620 -0.8348 0.8050 1.0079
Applications, computational cost and accuracy
The QM/MM approach nowadays is an extremely popular approach^2-4. Due to its low computational cost semi-empirical QM/MM methods are widely used in molecular dynamics, Monte Carlo and minimization
approaches with variety of different program-specific implementations^9,10. Modeling with the explicit solvent is the most accurate and physically meaningful way to describe environmental effects,
but it comes at a higher computational cost. For example, the activation free energy barrier calculated^5 for a S[N]2 reaction between methyl chloride and chloride in water is predicted by the COSMO
model to be ~18 kcal/mol, while the PM3/MM estimate, at 27-29 kcal/mol (for ESP charges and Mulliken charges models , respectively) is in perfect agreement with the experimental estimate of 26.6).
Another application is the calculation of vertical excitation energies in polar environment, e.g. in fluorescent proteins and photoactive dyes, where atomistic polarizable solvent models are critical
for a reliable prediction of the solvatochromic shift^11. Another area where QM/MM with explicit solvation is advantageous (and essential ) is the evaluation of the binding free energy in enzymatic
binding pockets^12
While the treatment of the entire system quantum mechanically is still very computationally expensive, the QM/MM approach allows one to explore a numerous problems in biochemistry with a reasonable
computational cost. The accuracy of the semi-empirical QM/MM description can be further improved by perturbatively moving to a higher level of theory using the Paradynamics approach^5.
Additional technical details
In the described QM/MM implementation, the heat of formation and energies reported by MOPAC contain all electrostatic QM/MM coupling terms, including interaction between QM and MM nuclei.
The derivatives, which are read from MOPAC, contain only QM contributions, and the QM/MM electrostatic term is evaluated by the MM program, using the derived charge distribution for QM atoms from
The charge model for QM from MOPAC can be Mulliken or ESP, Mulliken charges are obtained faster but ESP charges are more physical.
(1) Warshel, A.; Levitt, M. Theoretical Studies of Enzymatic Reactions: Dielectric, Electrostatic and Steric Stabilization of the Carbonium Ion in the Reaction of Lysozyme. J. Mol. Biol.1976, 103,
(2) Senn, H. M.; Thiel, W. QM/MM Methods for Biomolecular Systems. Angew Chem Int Ed Engl2009, 48, 1198-1229.
(3) Hu, H.; Yang, W. Free Energies of Chemical Reactions in Solution and in Ezymes with ab initio Qantum Mehanics/Molecular Mechanics Methods. Annu Rev Phys Chem2008, 59, 573-601.
(4) Kamerlin, S. C. L.; Haranczyk, M.; Warshel, A. Progress in Ab Initio QM/MM Free-Energy Simulations of Electrostatic Energies in Proteins: Accelerated QM/MM Studies of pKa, Redox Reactions and
Solvation Free Energies . The Journal of Physical Chemistry B2008, 113, 1253-1272.
(5) Plotnikov, N. V.; Warshel, A. Exploring, Refining, and Validating the Paradynamics QM/MM Sampling. The Journal of Physical Chemistry B2012, 116, 10342-10356. DOI: 10.1021/jp304678d. Web-article:
(6) Hwang, J.-K.; Creighton, S.; King, G.; Whitney, D.; Warshel, A. Effects of Solute-Solvent Coupling and Solvent Saturation on Solvation Dynamics of Charge Transfer Reactions. J Chem Phys1988, 89,
(7) Luzhkov, V.; Warshel, A. Microscopic Models for Quantum Mechanical Calculations of Chemical Processes in Solutions: LD/AMPAC and SCAAS/AMPAC Calculations of Solvation Energies. J. Comp. Chem.1992
, 13, 199-213.
(8) Warshel, A. Computer Modeling of Chemical Reactions in Enzymes and Solutions; John Wiley & Sons: New York, 1991.
(9) Dapprich, S.; Komaromi, I.; Byun, K. S.; Morokuma, K.; Frisch, M. J. A new ONIOM implementation in Gaussian98. Part I. The calculation of energies, gradients, vibrational frequencies and electric
field derivatives. Journal of Molecular Structure-Theochem1999, 461, 1-21.
(10) Walker, R. C.; Crowley, M. F.; Case, D. A. The implementation of a fast and accurate QM/MM potential method in Amber. Journal of Computational Chemistry2008, 29, 1019-1031.
(11) Luzhkov, V.; Warshel, A. Microscopic calculations of solvent effects on absorption spectra of conjugated molecules. Journal Of The American Chemical Society1991, 113, 4491-4499.
(12) Warshel, A.; Sharma, P. K.; Kato, M.; Parson, W. W. Modeling electrostatic effects in proteins. Biochim. Biophys. Acta2006, 1764, 1647-1676.
|
{"url":"http://openmopac.net/Manual/QMMM.html","timestamp":"2024-11-03T02:43:22Z","content_type":"text/html","content_length":"21526","record_id":"<urn:uuid:065be33d-ce86-47c4-ad29-4bc14a93dab3>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00375.warc.gz"}
|
NERDS 22.0 | New England Recursion and Definability Seminar
Date: October 16, 2022
Place: Wellesley College
10:00am Coffee, Tea & Snacks
10:30am Filippo Calderoni (Rutgers)
11:30am Isabella Scott (University of Chicago)
12pm Lunch
1:30pm Chris Conidis (College of Staten Island – CUNY)
2:30pm Teerawat Thewmorakot (University of Connecticut)
3pm End of day
Filippo Calderoni (Rutgers)
Borel structures on the space of left-orderings
In this talk we will discuss some results on left-orderable groups and their interplay with descriptive set theory. We will see how Borel classification can be used to analyze the space of
left-orderings of a given countable group modulo the conjugacy relation. In particular, we will discuss many examples of groups whose space of left-orderings module the conjugacy relation is not
standard. Moreover, if \(G\) is a nonabelian free group, then the conjugacy relation on its space of left-orders \(\mathrm{LO}(G)\) is a universal countable Borel equivalence relation. This is joint
work with A. Clay.
Isabella Scott (University of Chicago)
Effective constructions of existentially closed groups
Existentially closed groups are at the intersection of model theory, computability theory, and algebra. Questions of complexity can be asked in many directions. We will review earlier constructions
from the literature and elucidate their computability theoretic power, as well as propose new constructions of existentially closed groups with interesting computability theoretic properties.
Chris Conidis (College of Staten Island – CUNY)
The computability of Artin-Rees and Krull Intersection
We will explore the computational content of two related algebraic theorems, namely the Artin-Rees Lemma and Krull Intersection Theorem. In particular we will show that, while the strengths of these
theorems coincide for individual rings, they become distinct in the uniform context.
Teerawat Thewmorakot (University of Connecticut)
Embedding Problems for Finite Computable Metric Spaces
A computable metric space is a Polish metric space (M,d) together with a dense sequence (p_i) of points in M such that d(p_i,p_j) is a computable real uniformly in i,j. In this talk, we consider the
following embedding problem: for a fixed finite computable
metric space X, given an arbitrary computable metric space M, determine if X can be embedded into M.
Travel Support
Limited funding is provided from the NSF to support participation by students and others to NERDS. If interested, please email Russell Miller.
Vaccination Mandate
All visitors to Wellesley’s campus must be vaccinated. Visitors must fill out an online form prior to arrival.
|
{"url":"https://nerds.math.uconn.edu/nerds-22-0/","timestamp":"2024-11-06T01:55:39Z","content_type":"text/html","content_length":"51988","record_id":"<urn:uuid:eb99900a-1594-4d04-b950-705af1a9ab32>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00521.warc.gz"}
|
4.2 – The Equilibrium Constant & Reaction Quotient
In the last section, we began deriving a constant for chemical equilibria based on the kinetics of the forward and reverse reactions. We established that the composition of the equilibrium mixture is
determined by the magnitudes of the rate constants for the forward and the reverse reactions, or more specifically, that the equilibrium constant is equal to the rate constant for the forward
reaction divided by the rate constant for the reverse reaction. Here, we’ll develop an equilibrium constant expression for K applicable to any equilibrium reaction and look at how we can also predict
the direction of net change given a set amount of reactants & products.
The Equilibrium Constant (K)
In 1864, the Norwegian chemists Cato Guldberg (1836–1902) and Peter Waage (1833–1900) carefully measured the compositions of many reaction systems at equilibrium. They discovered that for any
reversible reaction of the general form
m A + n B ⇌ x C + y D
where A and B are reactants, C and D are products, and m, n, x, and y are the stoichiometric coefficients in the balanced chemical equation for the reaction, the ratio of the product of the
equilibrium quantities of the products (raised to their coefficients in the balanced chemical equation) to the product of the equilibrium concentrations of the reactants (raised to their coefficients
in the balanced chemical equation) is always a constant under a given set of conditions. This relationship was eventually summarized as follows:
Equation 4.2.1 Equilibrium Constant
where K is the equilibrium constant for the reaction, equivalent to the value defined in Section 4.1, and a[X] represents the activity of each species participating in the equilibrium. The chemical
equilibrium equation represented with reactants A & B and products C & D is called the equilibrium equation, and the right side of the mathematical equation above is called the equilibrium constant
expression. The relationship shown in the expression for K is true for any pair of opposing reactions regardless of the mechanism of the reaction or the number of steps in the mechanism.
An important fact to note is that equilibrium constants are dimensionless (they have no units) but the temperature at which this value is valid must always be listed (since K is temperature
dependent). This is due to computing K values using the activities of the reactants and products in the equilibrium system. The activity of a substance is a measure of its effective concentration
under specified conditions. While a detailed discussion of this important quantity is beyond the scope of an introductory text, it is necessary to be aware of a few important aspects:
• Activities are dimensionless (unitless) quantities and are in essence “adjusted” quantities of reactants and products.
• For relatively dilute solutions, a solute’s activity and its molar concentration are roughly equal (i.e. for a solute X, a[X] ≈ [X] in mol/L or M). Note that this approximation does not hold for
highly concentrated solutes.
• For gases, a substance’s activity is equal to its partial pressure (i.e. a[X] = P[X] in bar)
• Activities for pure condensed phases (solids and liquids) are equal to 1 (i.e. a[X] = 1), hence their activities do not appear in the expression for K
Further discussion about activities can be found here.
We categorize equilibria into two types: homogeneous and heterogeneous. A homogeneous equilibrium is one in which all of the reactants and products are present in a single solution (by definition, a
homogeneous mixture). In this chapter, we will concentrate on the two most common types of homogeneous equilibria: those occurring in liquid-phase solutions and those involving exclusively gaseous
species. A heterogeneous equilibrium is a system in which reactants and products are found in two or more phases. The phases may be any combination of solid, liquid, or gas phases, and solutions.
When dealing with these equilibria, remember that solids and pure liquids do not appear in equilibrium constant expressions (as we’ve mentioned above, the activities of pure solids, pure liquids, and
solvents are 1).
The equilibrium constant, K, is temperature-dependent. When reporting its value for an equilibrium reaction (like in scientific literature), the temperature at which K was determined must always be
included (e.g. K = 2.0 × 10^-25 @ 100°C)
Example 4.2.1 – Writing Equilibrium Constant Expressions
Write the equilibrium constant expression for each reaction.
(a) N[2 ](g) + 3 H[2 ](g) ⇌ 2 NH[3 ](g)
(b) 2 CO[2] (g) ⇌ 2 CO[ ](g) + O[2] (g)
(c) H[2]O (l) + H[2]CO[3 ](aq) ⇌ H[3]O^+ (aq) + HCO[3]^– (aq)
(d) Fe[3]O[4 ](s) + 4 H[2] (g) ⇌ 3 Fe (s) + 4 H[2]O (g)
(e) Cu(s) + Zn^2+(aq) ⇌ Cu^2+(aq) + Zn(s)
(a) The only product is ammonia, which has a coefficient of 2. For the reactants, N[2] has a coefficient of 1 and H[2] has a coefficient of 3. All species are gases, and so their activities are equal
to their partial pressures. The equilibrium constant expression is as follows:
(b) The products are CO, with a coefficient of 2, and O[2], with a coefficient of 1. The only reactant is carbon dioxide, which has a coefficient of 2. Since all species are gases, the equilibrium
constant expression is as follows:
(c) This reaction contains a pure liquid (H2O), its activity is equal to 1 and thus does not appear in the equilibrium constant expression. The other three species are solutes, and their activities
can be approximated using their molar concentrations:
(d) This reaction contains two pure solids (Fe3O4 and Fe), which do not appear in the equilibrium constant expression (their activities are equal to 1). However, the two gases do appear in the
expression, as partial pressures:
(e) This reaction contains two pure solids (Cu and Zn), which do not appear in the equilibrium constant expression (their activities are equal to 1). However, the two aqueous ions do appear in the
expression, as molar concentrations:
Check Your Learning 4.2.1 – Writing Equilibrium Constant Expressions
Write the equilibrium constant expression for each reaction.
(a) N[2]O (g) ⇌ N[2] (g) + 1/2 O[2] (g)
(b) H[2]O (l) + HOCl (aq) ⇌ H[3]O^+(aq) + OCl^– (aq)
(c) H[2] (g) + I[2] (g) ⇌ 2 HI (g)
(d) CaCO[3] (s) ⇌ CaO (s) + CO[2] (g)
(e) BaSO[4](s) ⇌ Ba^2+(aq) + SO[4]^2–(aq)
Manipulating Equilibrium Constants
Reversing the Equilibrium Equation
Because equilibrium can be approached from either direction in a chemical reaction, the equilibrium constant expression and thus the magnitude of the equilibrium constant depend on the form in which
the chemical reaction is written. For example, if we write the generic equilibrium reaction equation in reverse, we obtain the following:
x C + y D ⇌ m A + n B
If all species are solutes, then the corresponding equilibrium constant K’ is as follows:
Equation 4.2.2 Reverse Equilibrium Constant
This expression is the inverse of the expression for the original equilibrium constant, so K’ = 1/K. That is, when we write a reaction in the reverse direction, the equilibrium constant expression is
inverted. Below is an example:
Consider another example, the formation of water: 2 H[2 ](g) + O[2 ](g) ⇌ 2 H[2]O (g). Because H[2] is a good reductant and O[2] is a good oxidant, this reaction has a very large equilibrium constant
(K = 2.4 × 10^47 at 500 K). Consequently, the equilibrium constant for the reverse reaction, the decomposition of water to form O[2] and H[2], is very small: K ^‘ = 1/K = 1/(2.4 × 10^47) = 4.2 × 10^
−48. As suggested by the very small equilibrium constant, the dynamic equilibrium always very heavily favours the formation of water molecules. This is related to the fact that the decomposition of
water into O[2] and H[2] requires a substantial amount of activation energy; a concept we will return to in our study of Chemical Kinetics (Chapter 7).
Altering Species Coefficients
Writing an equation in different but chemically equivalent forms also causes both the equilibrium constant expression and the magnitude of the equilibrium constant to be different. For example, going
back to our inverse equilibrium equation with an equilibrium constant K ^‘, we could write the equation for that reaction
2 NO[2] (g) ⇌ N[2]O[4] (g)
NO[2] (g) ⇌ ½ N[2]O[4] (g)
with the equilibrium constant K” as follows:
The values for K′ and K″ are related as follows:
Equation 4.2.3 K’ and K” relation
In general, if all the coefficients in a balanced chemical equation were subsequently multiplied by n, then the new equilibrium constant is the original equilibrium constant raised to the n^th power.
Combining Chemical Equilibrium Equations
Chemists frequently need to know the equilibrium constant for a reaction that has not been previously studied. In such cases, the desired reaction can often be written as the sum of other reactions
for which the equilibrium constants are known. The equilibrium constant for the unknown reaction can then be calculated from the tabulated values for the other reactions.
To illustrate this procedure, let’s consider the reaction of N[2] with O[2] to give NO[2]. This reaction is an important source of the NO[2] that gives urban smog its typical brown color. The
reaction normally occurs in two distinct steps. In the first reaction (1), N[2] reacts with O[2] at the high temperatures inside an internal combustion engine to give NO. The released NO then reacts
with additional O[2] to give NO[2] (2). The equilibrium constant for each reaction at 100°C is also given.
(1) N[2] (g) + O[2] (g) ⇌ 2 NO (g) K[1] = 2.0 x 10^–25
(2) 2 NO (g) + O[2] (g) ⇌ 2 NO[2 ](g) K[2] = 6.4 x 10^9
Addition of reactions (1) and (2) gives the overall reaction of N[2] with O[2]:
(3)= (1) + (2) N[2] (g) + 2 O[2] (g) ⇌ 2 NO[2] (g) K[3] = ?
The equilibrium constant expressions for the reactions are as follows:
What is the relationship between K[1], K[2], and K[3], all at 100°C? The expression for K[1] has (P[NO2])^2 in the numerator, the expression for K[2] has (P[NO2])^2 in the denominator, and (P[NO2])
^2 does not appear in the expression for K[3]. Multiplying K[1] by K[2] and canceling the (P[NO2])^2 terms,
Thus the product of the equilibrium constant expressions for K[1] and K[2] is the same as the equilibrium constant expression for K[3]:
The equilibrium constant for a reaction that is the sum of two or more reactions is equal to the product of the equilibrium constants for the individual reactions. In contrast, recall that according
to Hess’s Law (seen in the previous chapter on thermochemistry), ΔH for the sum of two or more reactions is the sum of the ΔH values for the individual reactions.
It is important to remember that an equilibrium constant is always tied to a specific chemical equation, and if you manipulate the equation in any way, the value of K will change. Fortunately, the
rules are very simple:
• Writing the equation in reverse will invert the equilibrium expression (i.e. K’ = 1/K)
• Multiplying the coefficients by a common factor n will raise K to the corresponding power of n (i.e. K’ = K^n where n is a common factor)
• The equilibrium constant for a reaction that is the sum of several chemical equilibrium equations is the product of the equilibrium constants for each of the steps (i.e. K’ = K[1]K[2]K[3]…)
Example 4.2.2 – Manipulating Equilibrium Constants – 1
At 745 K, K is 0.118 for the following reaction:
N[2 ](g) + 3 H[2 ](g) ⇌ 2 NH[3 ](g)
What is the equilibrium constant for each related reaction at 745 K?
(a) 2 NH[3 ](g) ⇌ N[2 ](g) + 3 H[2 ](g)
(b) 1/2 N[2 ](g) + 3/2 H[2 ](g) ⇌ NH[3 ](g)
The equilibrium constant expression for the given reaction of N[2 ](g) with H[2 ](g) to produce NH[3 ](g) at 745 K is as follows:
(a) This reaction is the reverse of the one given, so its equilibrium constant expression is as follows:
(b) In this reaction, the stoichiometric coefficients of the given reaction are divided by 2, so the equilibrium constant is calculated as follows:
Check Your Learning 4.2.2 – Manipulating Equilibrium Constants – 1
At 527°C, the equilibrium constant for the reaction below is 7.9 × 10^4.
2 SO[2 ](g) + O[2 ](g) ⇌ 2 SO[3 ](g)
Calculate the equilibrium constant for the following reaction at the same
SO[3 ](g) ⇌ SO[2 ](g) + ½ O[2 ](g)
3.6 × 10^-3
Example 4.2.3 – Manipulating Equilibrium Constants – 2
The following reactions occur at 1200°C:
(1) CO (g) + 3 H[2 ](g) ⇌ CH[4 ](g) + H[2]O (g) K[1 ]= 9.17 × 10^−2
(2) CH[4 ](g) + 2 H[2]S (g) ⇌ CS[2 ](g) + 4 H[2 ](g) K[2 ]= 3.3 × 10^4
Calculate the equilibrium constant for the following reaction at the same temperature.
(3) CO (g) + 2 H[2]S (g) ⇌ CS[2 ](g) + H[2]O (g) + H[2 ](g) K[3 ]= ?
The key to solving this problem is to recognize that reaction 3 is the sum of reactions 1 and 2:
(1) CO (g) + 3 H[2] (g) ⇌ CH[4 ](g) + H[2]O (g)
(2) CH[4 ](g) + 2 H[2]S (g) ⇌ CS[2 ](g) + 4 H[2 ](g)
(3)= (2) + (1) CO (g) + 2 H[2]S (g) ⇌ CS[2 ](g) + H[2]O (g) + H[2 ](g)
The values for K[1] and K[2] are given, so it is straightforward to calculate K[3]:
K[3] = K[1]K[2] = (9.17 × 10^-2)(3.3 × 10^4) = 3.03 × 10^3
Check Your Learning 4.2.3 – Manipulating Equilibrium Constants – 2
In the first of two steps in the industrial synthesis of sulfuric acid, elemental sulfur reacts with oxygen to produce sulfur dioxide. In the second step, sulfur dioxide reacts with additional oxygen
to form sulfur trioxide. The reaction for each step is shown, as is the value of the corresponding equilibrium constant at 25°C. Calculate the equilibrium constant for the overall reaction at this
same temperature.
1. 1/8 S[8] (s) + O[2] (g) ⇌ SO[2] (g) K[1] = 4.4 x 10^53
2. SO[2] (g) + 1/2 O[2] (g) ⇌ SO[3] (g) K[2] = 2.6 x 10^12
3. 1/8 S[8] (s) + 3/2 O[2] (g) ⇌ SO[3] (g) K[3] = ?
K[3 ]= 1.1 × 10^66
Equilibria Involving Gases
For reactions that involve species in solution, the concentrations used in equilibrium calculations are molarities, expressed in moles/litre. For gases, however, the activities of each reaction
component are expressed in terms of partial pressures rather than molarity, where the standard state is 1 atm of pressure. Occasionally, the symbol K[P] is used to highlight equilibrium constants
calculated from partial pressures. For the general reaction mA + nB ⇌ xC + yD, in which all the components are gases, the equilibrium constant expression must be written as the ratio of the partial
pressures of the products and reactants (each raised to its coefficient in the chemical equation):
Equation 4.2.4 Gas Equilibrium Constant
Thus K[P] for the decomposition of N[2]O[4] is as follows:
K[P] is a unitless quantity because the quantity that is actually used to calculate it is an “effective pressure,” the ratio of the measured pressure to a standard state of 1 bar, which produces a
unitless quantity. But what if we need to describe a gas-phase equilibrium in concentration units?
Because partial pressures are usually expressed in bars, the molar concentration of a gas and its partial pressure do not have the same numerical value. Consequently, if we were to recalculate K
using molar concentrations (like solutes) instead of partial pressures, we would obtain a new equilibrium constant, known as K[C]. The resulting numerical value of K[C] would very likely differ from
K[P]. They are, however, related by the ideal gas constant (R) and the absolute temperature (T) – this is because the partial pressure of a gas is directly proportional to its concentration at
constant temperature. This relationship can be derived from the ideal gas equation, where M is the molar concentration of gas, n/V.
PV = nRT
P = (n/V)RT
P = MRT
Equation 4.2.5 Gas Pressure Proportionality
Thus, at constant temperature, the pressure of a gas is directly proportional to its concentration.
Hence, the equation relating K[C] and K[P] is derived as follows. For the gas-phase reaction mA + nB ⇌ xC + yD:
Therefore, relationship between K[C] and K[P] is
Equation 4.2.6 K[C] and K[P] Relation
where K[C] is the equilibrium constant expressed in units of concentration (mol/L), K[P] is the equilibrium constant expressed in units of pressure (bars), the temperature is expressed as the
absolute temperature in Kelvin, R is the ideal gas constant in the appropriate units (R = 0.083145 bar•L/(mol•K)) and Δn is the difference between the sum of the coefficients of the gaseous products
and the sum of the coefficients of the gaseous reactants in the reaction (the change in moles of gas between the reactants and the products). For the gas-phase reaction mA + nB ⇌ xC + yD, we have
Δn = (x + y) – (m + n)
Equation 4.2.7 Change in Moles of a Reaction
If all the components of an equilibrium reaction are gaseous, the equilibrium constant must be K[P] because its expression is derived solely from partial pressures and hence is in pressure units.
Only in cases where gas concentrations are available will the calculation of K[C] be appropriate. When solving equilibrium problems, be aware of the data provided and thus whether you’ll need to use
K[C] or K[P] in your solution.
According to the equation
Δn = (x + y) – (m + n)
K[P] = K[C] only if the moles of gaseous products and gaseous reactants are the same (i.e. Δn = 0):
According to the equation above, K[P ]= K[C] only if the moles of gaseous products and gaseous reactants are the same (i.e., Δn = 0).
For the decomposition of N[2]O[4], there are 2 mol of gaseous product and 1 mol of gaseous reactant, so Δn = 1. Thus, for this reaction,
Example 4.2.4 – Calculation of K[P]
Write the equations for the conversion of K[C] to K[P] for each of the following reactions:
(a) C[2]H[6 ](g) ⇌ C[2]H[4 ](g) + H[2 ](g)
(b) CO (g) + H[2]O (g) ⇌ CO[2 ](g) + H[2 ](g)
(c) N[2 ](g) + 3 H[2 ](g) ⇌ 2 NH[3 ](g)
(a) Δn = (2) − (1) = 1
K[P] = K[C](RT)^Δn = K[C](RT)^1 = K[C](RT)
(b) Δn = (2) − (2) = 0
K[P] = K[C](RT)^Δn = K[C](RT)^0 = K[C]
(c) Δn = (2) − (1 + 3) = −2
K[P] = K[C](RT)^Δn = K[C](RT)^−2 =
Check Your Learning 4.2.4 – Calculation of K[P]
Write the equations for the conversion of K[C] to K[P] for each of the following reactions, which occur in the gas phase:
(a) 2 SO[2 ](g) + O[2 ](g) ⇌ 2 SO[3 ](g)
(b) N[2]O[4 ](g) ⇌ 2 NO[2 ](g)
(c) C[3]H[8 ](g) + 5 O[2 ](g) ⇌ 3 CO[2 ](g) + 4 H[2]O (g)
(a) K[P] = K[C](RT)^−1; (b) K[P] = K[C](RT); (c) K[P] = K[C](RT)
Example 4.2.5 – Calculation of K[P]
Write the equation for the conversion of K[C] to K[P] for the following reaction, which occurs in the gas phase:
CS[2 ](g) + 4 H[2 ](g) ⇌ CH[4 ](g) + 2 H[2]S (g)
K[C] is equal to 0.28 for the following reaction at 900°C, what is K[P] at this temperature?
K[P] = K[C](RT)^Δn = (0.28)[(0.0821)(1173)]^−2 = 3.0 × 10^−5
Check Your Learning 4.2.5 – Calculation of K[P]
Write the equation for the conversion of K[C] to K[P] for the following reaction, which occurs in the gas phase:
CH[3]OH (g) ⇌ CO (g) + 2 H[2 ](g)
At 227°C, the following reaction has K[C] = 0.0952, What would be the value of K[P] at this temperature?
160 or 1.6 × 10^2
Example 4.2.6 – Calculation of K[P] – The Haber Process
The equilibrium constant for the reaction of nitrogen and hydrogen to give ammonia is 0.118 at 745 K. The balanced equilibrium equation is as follows:
N[2] (g) + 3 H[2] (g) ⇌ 2 NH[3] (g)
What is K[P] for this reaction at the same temperature?
This reaction has 2 mol of gaseous product and 4 mol of gaseous reactants, so Δn = (2 − 4) = −2. We know K, and T = 745 K. Thus, we have the following:
Because K[P] is a unitless quantity, the answer is K[P ]= 3.16 × 10^−5.
Check Your Learning 4.2.6 – Calculation of K[P] – The Haber Process
Calculate K[P] for the reaction
2 SO[2] (g) + O[2] (g) ⇌ 2 SO[3] (g)
at 527°C, if K = 7.9 × 10^4 at this temperature.
K[P ]= 1.2 × 10^3
The Reaction Quotient, Q
We previously saw that knowing the magnitude of the equilibrium constant under a given set of conditions allows chemists to predict the extent of a reaction. Often, however, chemists must decide
whether a system has reached equilibrium or if the composition of the mixture will continue to change with time.
To determine whether a system has reached equilibrium, chemists use a quantity called the reaction quotient (Q). The expression for the reaction quotient has precisely the same form as the
equilibrium constant expression, except that Q may be derived from a set of values measured at any time during the reaction of any mixture of the reactants and products, regardless of whether the
system is at equilibrium. Therefore, for the following general reaction:
m A + n B ⇌ x C + y D
the reaction quotient is defined as follows:
Equation 4.2.7 Reaction Quotient
Similarly to the equilibrium constant, the reaction quotient is dimensionless (no units) – this stems from using the species activities as its effective concentrations. As before, the activity of
each species participating in the equilibrium can be represented as follows:
• For a solute X, a[X] ≈ [X] in mol/L (note that again, this does not apply to highly concentrated solutions)
• For gases, a[X] = P[X] in bar
• For pure solids and liquids, a[X] = 1
CHM1311 Pointers
To reiterate, the expressions for the reaction quotient, Q, and the equilibrium constant, K, are constructed in the exact same way, but are used in different circumstances:
• Concentrations/partial pressures initially → Q
• Concentration/partial pressures at equilibrium → K
Example 4.2.6 – Writing Reaction Quotient Expressions
Write the expression for the reaction quotient (Qp and/or Qc) for each of the following reactions:
(a) 3 O[2 ](g) ⇌ 2 O[3 ](g)
(b) N[2 ](g) + 3 H[2 ](g) ⇌ 2 NH[3 ](g)
(c) HCl (g) + NaOH (aq) ⇌ NaCl (aq) + H[2]O (l)
Check Your Learning 4.2.7 – Writing Reaction Quotient Expressions
Write the expression for the reaction quotient for each of the following reactions:
(a) 2 SO[2 ](g) + O[2 ](g) ⇌ 2 SO[3 ](g)
(b) C[4]H[8 ](g) ⇌ 2 C[2]H[4 ](g)
(c) Cd^2+ (aq) + 4 Cl^– (aq) ⇌ CdCl[4]^2- (aq)
Example 4.2.8 – Evaluating a Reaction Quotient
Gaseous nitrogen dioxide forms dinitrogen tetroxide according to this equation:
2 NO[2 ](g) ⇌ N[2]O[4 ](g)
When 0.10 mol NO[2] is added to a 1.0-L flask at 25°C, the concentration changes so that at equilibrium, [NO[2]][eq] = 0.016 mol/L and [N[2]O[4]][eq] = 0.042 mol/L.
(a) What is the value of the reaction quotient in concentration units, Q[C], before any reaction occurs?
(b) What is the value of the equilibrium constant in concentration units, K[C], for the reaction?
(a) Before any product is formed, [NO[2]][i] = [2]O[4]][i] = 0 mol/L. Thus,
(b) At equilibrium, the value of the equilibrium constant is equal to the value of the reaction quotient. At equilibrium,
The equilibrium constant is 1.6 × 10^2.
Note that dimensional analysis would suggest the unit for this K[C] value should be (mol/L)^−^1. However, as mentioned previously, it is common practice to omit units for K[C] values, since it is the
magnitude of an equilibrium constant that relays useful information.
Check Your Learning 4.2.8 – Evaluating a Reaction Quotient
For the reaction 2 SO[2 ](g) + O[2 ](g) ⇌ 2 SO[3 ](g), the concentrations at equilibrium are [SO[2]] = 0.90 mol/L, [O[2]] = 0.35 mol/L, and [SO[3]] = 1.1 mol/L. What is the value of the equilibrium
constant, K[C]?
K[C] = 4.3
Predicting the Direction of Net Change Using Q
To understand how information is obtained using a reaction quotient, consider once again the dissociation of dinitrogen tetroxide to nitrogen dioxide,
N[2]O[4 ](g) ⇌ 2 NO[2 ](g)
for which K = 4.65 × 10^−3 at 298 K. We can write Q[C] for this reaction as follows:Q were calculated for each. Each experiment begins with different proportions of product and reactant:
Experiment [NO[2]] (mol/L) [N[2]O[4]] (mol/L)
1 0 0.0400
2 0.0600 0
3 0.0200 0.0600
As these calculations demonstrate, Q can have any numerical value between 0 and infinity (undefined); that is, Q can be greater than, less than, or equal to K. Comparing the magnitudes of Q and K
enables us to determine whether a reaction mixture is already at equilibrium and, if it is not, predict how its composition will change with time to reach equilibrium (i.e., whether the reaction will
proceed to the right or to the left as written). All you need to remember is that the composition of a system not at equilibrium will change in a way that makes Q approach K:
• If Q = K, for example, then the system is already at equilibrium, and no further change in the composition of the system will occur unless the conditions are changed.
• If Q < K, then the ratio of the concentrations of products to the concentrations of reactants is less than the ratio at equilibrium. Therefore, the reaction will proceed to the right as written,
forming products at the expense of reactants
• If Q > K, then the ratio of the concentrations of products to the concentrations of reactants is greater than at equilibrium, so the reaction will proceed to the left as written, forming
reactants at the expense of products.
These points are illustrated graphically in Figure 4.2.1.
Figure 4.2.1. (a) Both Q and K are plotted as points along a number line: the system will always react in the way that causes Q to approach K. (b) The change in the composition of a system with time
is illustrated for systems with initial values of Q > K, Q < K, and Q = K.
Example 4.2.9 – Predicting the Direction of Reaction
Given here are the initial concentrations of reactants and products for three experiments involving this reaction:
CO (g) + H[2]O (g) ⇌ CO[2 ](g) + H[2 ](g)
K[C] = 0.64
Determine in which direction the reaction proceeds as it goes to equilibrium in each of the three experiments shown.
Reactants/Products Experiment 1 Experiment 2 Experiment 3
[CO][i] 0.0203 mol/L 0.011 mol/L 0.0094 mol/L
[H[2]O][i] 0.0203 mol/L 0.0011 mol/L 0.0025 mol/L
[CO[2]][i] 0.0203 mol/L 0.037 mol/L 0.0015 mol/L
[H[2]][i] 0.0203 mol/L 0.046 mol/L 0.0076 mol/L
Experiment 1:
Q[c] < K[c] (0.039 < 0.64)
The reaction will shift to the right.
Experiment 2:
Q[c] > K[c] (140 > 0.64)
The reaction will shift to the left.
Experiment 3:
Q[c] < K[c] (0.48 < 0.64)
The reaction will shift to the right.
Check Your Learning 4.2.9 – Predicting the Direction of Reaction
Calculate the reaction quotient and determine the direction in which each of the following reactions will proceed to reach equilibrium.
(a) A 1.00-L flask containing 0.0500 mol of NO (g), 0.0155 mol of Cl[2] (g), and 0.500 mol of NOCl:
2 NO (g) + Cl[2] (g) ⇌ 2 NOCl (g) Kc = 4.6 × 10^4
(b) A 5.0-L flask containing 17 g of NH[3], 14 g of N[2], and 12 g of H[2]:
N[2] (g) + 3 H[2] (g) ⇌ 2 NH[3] (g) Kc = 0.060
(c) A 2.00-L flask containing 230 g of SO[3](g):
2 SO[3] (g) ⇌ 2 SO[2] (g) + O[2] (g) Kc = 0.230
(a) Q[c] = 6.45 × 10^3, shifts right. (b) Q[c] = 0.23, shifts left. (c) Q[c] = 0, shifts right
★ Questions
1. Explain why an equilibrium between Br[2] (l) and Br[2] (g) would not be established if the container were not a closed vessel.
2. If you observe the following reaction at equilibrium, is it possible to tell whether the reaction started with pure NO[2] or with pure N[2]O[4]?
2 NO[2] (g) ⇌ N[2]O[4] (g)
3. Among the solubility rules previously discussed is the statement: Carbonates, phosphates, borates, and arsenates—except those of the ammonium ion and the alkali metals—are insoluble.
a. Write the expression for the equilibrium constant for the reaction represented by the equation CaCO[3] (s) ⇌ Ca^2+ (aq) + CO[3]^2− (aq). Is K[c] > 1, < 1, or ≈ 1? Explain your answer.
b. Write the expression for the equilibrium constant for the reaction represented by the equation 3 Ba^2+ (aq) + 2 PO[4]^3− (aq) ⇌ Ba[3](PO[4])[2] (s). Is K[c] > 1, < 1, or ≈ 1? Explain your answer.
4. Benzene is one of the compounds used as octane enhancers in unleaded gasoline. It is manufactured by the catalytic conversion of acetylene to benzene: 3 C[2]H[2] (g) ⟶ C[6]H[6]
(g). Which value of K[C] would make this reaction most useful commercially? K[C] ≈ 0.01, K[C] ≈ 1, or K[C] ≈ 10. Explain your answer.
5. Show that the complete chemical equation, the total ionic equation, and the net ionic equation for the reaction represented by the equation KI (aq) + I[2] (aq) ⇌ KI[3 ](aq) give the same
expression for the reaction quotient. KI[3] is composed of the ions K^+ and I^3−.
6. For a titration to be effective, the reaction must be rapid and the yield of the reaction must essentially be 100%. Is K[C] > 1, < 1, or ≈ 1 for a titration reaction?
7. Write the mathematical expression for the reaction quotient, Q[C], for each of the following reactions:
a. CH[4] (g) + Cl[2] (g) ⇌ CH[3]Cl (g) + HCl (g)
b. N[2] (g) + O[2] (g) ⇌ 2 NO (g)
c. 2 SO[2] (g) + O[2] (g) ⇌ 2 SO[3] (g)
d. BaSO[3] (s) ⇌ BaO (s) + SO[2] (g)
e. P[4] (g) + 5 O[2] (g) ⇌ P[4]O[10] (s)
f. Br[2] (g) ⇌ 2Br (g)
g. CH[4] (g) + 2 O[2] (g) ⇌ CO[2] (g) + 2 H[2]O (l)
h. CuSO[4] ∙ 5H[2]O (s) ⇌ CuSO[4] (s) + 5 H[2]O (g)
8. The initial concentrations or pressures of reactants and products are given for each of the following systems. Calculate the reaction quotient and determine the direction in which each
system will proceed to reach equilibrium.
a. 2 NH[3] (g) ⇌ N[2] (g) + 3 H[2] (g) Kc = 17; [NH3] = 0.20 mol/L, [N2] = 1.00 mol/L, [H2] = 1.00 mol/L
b. 2 NH[3] (g) ⇌ N[2] (g) + 3 H[2] (g) K[P] = 6.8 x 10^4; initial pressures: NH[3] = 2.00 atm, N[2] = 10.00 atm, H[2] = 10.00 atm
c. 2 SO[3] (g) ⇌ 2 SO[2] (g) +O[2] (g) Kc = 0.230; [SO[3]] = 2.00 mol/L, [SO[2]] = 2.00 mol/L, [O[2]] = 2.00 mol/L
d. 2 SO[3] (g) ⇌ 2 SO[2] (g) + O[2] (g) K[P] = 6.5 atm; initial pressures: SO[2] = 1.00 atm, O[2] = 1.130 atm, SO[3] = 0 atm
e. 2 NO (g) + Cl[2] (g) ⇌ 2 NOCl (g) K[P] = 2.5 x 10^3; initial pressures: NO = 1.00 atm, Cl[2] = 1.00 atm, NOCl = 0 atm
f. N[2] (g) + O[2] (g) ⇌ 2 NO (g) Kc = 0.050; [N[2]] = 0.100 mol/L, [O[2]] = 0.200 mol/L, [NO] = 1.00 mol/L
★★ Questions
9. The following reaction has K[P] = 4.50 × 10^−5 at 720 K.
N[2] (g) + 3 H[2] (g) ⇌ 2NH[3] (g)
If a reaction vessel is filled with each gas to the partial pressures listed, in which direction will it shift to reach equilibrium? P(NH[3]) = 93 atm, P(N[2]) = 48 atm, and P(H[2]) = 52
10. Determine if the following system is at equilibrium. If not, in which direction will the system need to shift to reach equilibrium?
SO[2]Cl[2] (g) ⇌ SO[2] (g) + Cl[2] (g)
[SO[2]Cl[2]] = 0.12 mol/L, [Cl[2]] = 0.16 mol/L and [SO[2]] = 0.050 mol/l. K[c] for the reaction is 0.078.
11. Which of the systems described in question 8 give homogeneous equilibria? Which give heterogeneous equilibria?
12. For which of the reactions in question 8 does K[C] (calculated using concentrations) equal K[P] (calculated using pressures)?
13. Convert the values of K[C] to values of K[P] or the values of K[P] to values of K[C].
a. N[2] (g) + 3 H[2] (g) ⇌ 2 NH[3] (g) K[C] = 0.50 at 400◦C
b. H[2] (g) + I[2] (g) ⇌ 2 HI (g) K[C] = 50.2 at 448◦C
c. Na[2]SO[4] ∙ 10H[2]O (s) ⇌ Na[2]SO[4] (s) + 10 H[2]O (g) K[P ]= 4.08 x 10^-25 at 25◦C
d. H[2]O (l) ⇌ H[2]O (g) K[P] = 0.122 at 50◦C
14. What is the value of the equilibrium constant expression for the change H[2]O (l) ⇌ H[2]O (g) at 30 °C? (See Appendix F.)
15. Write the expression of the reaction quotient for the ionization of HOCN in water.
16. Write the reaction quotient expression for the ionization of NH[3] in water.
17. What is the approximate value of the equilibrium constant K[P] for the change C[2]H[5]OC[2]H[5] (l) ⇌ C[2]H[5]OC[2]H[5] (g) at 25 °C. (Vapor pressure was described in the previous chapter on
liquids and solids; refer back to this chapter to find the relevant information needed to solve this problem.)
1. Equilibrium cannot be established between the liquid and the gas phase if the top is removed from the bottle because the system is not closed; one of the components of the equilibrium, the Br
[2] vapor, would escape from the bottle until all liquid disappeared. Thus, more liquid would evaporate than can condense back from the gas phase to the liquid phase.
2. Yes, based on the changing of colours in the reaction it is possible to determine the direction of the reaction.
3. (a) K[C] = [Ca^2^+][CO[3]^2-], K[C] < 1 ;(b) K[C] = 1 / [Ba^2+]^3[PO[4]^3-]^2, K[C] > 1
4. Since [C] ≈ 10 means that C[6]H[6] predominates over C[2]H[2]. In such a case, the reaction would be commercially feasible if the rate to equilibrium is suitable.
5. Total Ionic: K^+ (aq) + I^− (aq) + I[2 ](aq) ⇌ K^+ (aq) + I[3]^− (aq), Net Ionic: I^− (aq) + I[2 ](aq) ⇌ I[3]^− (aq)
6. K[C ]> 1
8. (a) Q[c] 25 proceeds left; (b) Q[P] 0.22 proceeds right; (c) Q[c] undefined proceeds left; (d) Q[P] 1.00 proceeds right; (e) Q[P] 0 proceeds right; (f) Q[c] 4 proceeds left
9. The system will shift toward the reactants to reach equilibrium.
10. The system is not at equilibrium since Q < K, therefore the reaction will shift towards the right.
11. (a) Homogeneous, (b) Homogeneous, (c) Homogeneous, (d) Homogeneous, (e) Homogeneous, (f) Homogeneous
12. F is the only one where K[c] = K[P]
13. (a) K[P] = 1.6 × 10^−4; (b) K[P] = 50.2; (c) K[c] = 5.31 × 10^−39; (d) K[c] = 4.60 × 10^−3
14. K[P] = P[H2O] = 0.042.
17. 0.717 atm
Value of the reaction quotient for a system at equilibrium; relates to the ratio of products and reactants at equilibrium; may be expressed using concentrations (Kc) or partial pressures (Kp)
Equilibrium in which all reactants and products occupy the same phase
Equilibrium in which reactants and products occupy two or more different phases
Mathematical function describing the relative amounts of reactants and products in a reaction mixture; may be expressed in terms of concentrations (Qc) or pressures (Qp)
|
{"url":"https://ecampusontario.pressbooks.pub/genchemforgeegees/chapter/4-2-the-equilibrium-constant-reaction-quotient/","timestamp":"2024-11-14T01:15:47Z","content_type":"text/html","content_length":"204707","record_id":"<urn:uuid:34daa5ec-6b87-472c-8302-c579197d64f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00495.warc.gz"}
|
Reference intervals as a tool for total quality management - Biochemia Medica
The more traditional, widespread and practiced method for interpreting the laboratory results is based on the comparison made with reference intervals. As defined by the International Federation of
Clinical Chemistry (IFCC) (1), the terms “reference range” or “interval of reference” (IR) mean a range of values obtained from individuals (usually, but not necessarily healthy) randomly chosen, but
appropriately selected in order to satisfy suitably defined criteria (2). The apparent contradiction between “random” and “appropriate selection” is resolved bearing in mind that these are two phases
of the same process of identification of IR (3). Population based studies are preliminary steps for selecting the reference intervals, being numerous the variables that can affect the population
characteristics (Table 1). It appears however very important to stress the fact that each laboratory should be able to establish the reference values that are as close as possible to those presented
by the population who insists close to the operating lab itself. Of course, it is understandable that in the event of the introduction of new activities (new laboratory or new tests by any
laboratory), as well as during transitional periods, provisional IR may be chosen, e.g. by opportunely adapting IRs from laboratories operating in nearby areas, as well as from reliable data in
literature (4). However, the constraint should be of finding and setting own IRs as soon as possible. In this circumstance, we should think about the influence that the progressive aging of the
population or the different gender distribution observed in different areas of the country have on some analysis (e.g., blood glucose, blood urea nitrogen, creatinine), or even more so the effects of
immigration from countries very far or different each other.
Table 1. Preconditions for the formulation of a reference interval in healthy subjects.
IR establishment
The creation of reference intervals requires careful planning, monitoring and documentation of every aspect of the study. Consequently, the reference intervals must be well characterized in terms of
variations attributable to the pre-analytical and analytical factors (5). These formal protocols are particularly useful in cases where a laboratory should establish its own reference range for a
particular test. This situation can occur even if a laboratory has modified a test or a method approved and/or certified, or a method developed in-house. Unfortunately, these protocols are resource
intensive and can be prohibitive for smaller facilities, also in consideration of the inherent costs (6). Even large laboratories may find it difficult to carry out these studies for obtaining their
own IRs, mainly based on considerations of cost-benefit analysis. Thus many laboratories have increased their reliance on manufacturers to adopt reference intervals that may be acceptable using
simpler approaches, which require less effort and result in lower costs. In any case, it is desirable that each laboratory has complete knowledge of the characteristics of the reference ranges
adopted, such that they ensure compatibility with its own population and are suitable for clinical use.
An IR is usually determined by analyzing samples that are obtained from individuals who meet the criteria previously and accurately defined (reference sample group). Protocols such as those made by
the “International Federation of Clinical Chemistry [IFCC] Expert Panel on Theory of Reference Values” and by the “National Committee for Clinical Laboratory Standards” show in a comprehensive and
systematic manner the processes that use carefully selected reference sample groups to establish reference intervals. These protocols typically require a minimum of 120 reference persons for each
group (or subgroup) to be characterized (7). However, it is increasingly gaining importance a vision that considers more appropriate to adopt reference intervals common to several laboratories that
operate over large regional areas and also on entire national context (8–10).
When establishing reference intervals that are common to most laboratories in the same area, the sample size can be expanded considerably around the local production of reference intervals for each
individual laboratory. When many laboratories can share common reference intervals, the investment is limited and the whole operation can advantageously be concentrated in one or a few institutions.
Consequently, one can work on much larger sample sizes, such as five/six hundreds or more individuals. A larger sample makes it possible to carry out a thorough investigation of possible subgroups
(11) in which it is possible to obtain reliable estimates on the reference intervals subgroup, respecting the minimum size of 120 individuals recommended by the IFCC. The confidence interval (CI) of
90% for a sample of similar size is Cl = ± 0.24 × SD (standard deviation of the population). The allocation criteria are (4):
If one or both; the difference between the lower reference limits and the difference between the higher reference limits of the two subgroups are > 0.75 × SD[min] (where SD[min] is the smallest DS of
the subset of the DS), then the partition is recommended.
If both; the differences between the lower reference limits and the higher reference limits of both subgroups are ≤ 0.25 × SD[min], then the partition is not recommended.
For differences which fall between the extremes (0.25 × SD[min] < difference < 0.75 × SD[min]), the arguments should differ from the purely statistical ones, as this could be due to genetic
differences, i.e. to situations which are not routinely assessed.
Selection of the reference population
The selection of the population who will represent the “reference” can not be dealt with in general terms, as more than one variable have to be considered. The most common method is to obtain
reference values from a population of healthy individuals, but in this case the definition of “health” is indeed problematic. For example, to establish a reference interval for hemoglobin levels
(i.e., a gender-related laboratory test), the laboratory would need to obtain the results of hemoglobin from at least 240 persons (120 men and 120 women). These people are usually drawn from the
local population and then selected for inclusion in the study using carefully defined criteria. The general criteria that are adopted are those reported in Table 2; moreover there is the opportunity
to use a series of strategies, assuming additional criteria of subdivision for subgroups (Table 3) and/or age (Table 4) or combine multiple criteria, as for example (4):
- Selection of homogeneous groups of reference according to ethnicity, geographical origin and environmental conditions in
order to obtain the representation of the population to which the normal range will apply.
- Stratification according to age and gender, if there are women pregnant or taking any anti-conceptional drug.
- Definition of health status, according to further criteria that are adopted.
Table 2. Exclusion criteria for the formulation of reference range in the general population.
Table 3. Criteria for the creation of subgroups of reference subjects.
Table 4. Reference intervals: Criteria for distributions in the different age group.
There are no particular recommendations on which method of selection is the most appropriate, as this may depend both on the purpose of the investigation, and on the opportunities allowing to include
single individuals. In any case it is important to report the strategy adopted and the individuals included in the reference interval and to implement clear criteria for inclusion and exclusion.
The normal or Gaussian distribution (Figure 1) is the distribution characterized by two parameters, mean and standard deviation (SD). The statistical methods that assume Gaussian distribution of data
are called parametric methods. Of course, other probability distributions, whose characteristics are defined by one or more parameters, can be analyzed using appropriate parametric methods.
Figure 1. The normal or Gaussian distribution.
Non-parametric statistical techniques are used to analyze the data not having a specific type of probability distribution. In general, when observing non-Gaussian distributions (non-normal) (Figure
2a-b), their description is assigned to other indices such as median, percentiles classes, and more others. Moreover, in this second category of data distribution, other methods become more useful,
including the so called and important ones “bootstrap methods”. Sometimes non-Gaussian distributions can be normalized via appropriate processing techniques (12). This is the general case of
distributions obtained from experimental data, for which the assumption of normality is always verified. In constructing a reference range from individual data, often the difficulty of achieving a
perfect Gaussian distribution is apparent. Even after sampling the data from a population which is presumed to be normally distributed, it is often necessary to take some approximations of the data
to comply with the assumption of normality. In this regard a series of statistical tests have been put in place, which compares the distribution of experimental data with a hypothetical Gaussian
distribution (13–15). These methods are called mathematical-statisticalgoodness-of-fit test tests. Among them, the most known and used is the Kolmogorov-Smirnov, although its real discriminant power
is questioned by some researchers, especially when the parameters of the distribution are estimated based on data rather than being specified a priori. Afterwards, other tests have been proposed that
are best suited for this purpose, among them the test of Shapiro-Wilks (for distribution of samples greater than 2,000 subjects it should be replaced by the test for normality of Stephen) and the
test of D’Agostino-Pearson. None of these tests can however indicate the type of non-normalityobserved in the case where the distribution is showing tendency to asymmetry (skewness) and kurtosis or
both (Figure 3).
Figure 2. Non-Gaussian distribution (non-normal).
Figure 3. Tendency to asymmetry (skewness).
The skewness represents the degree of asymmetry of a distribution around its mean and is non-dimensional since it is characterized only by a number describing the shape of the distribution curve.
When a distribution is perfectly Gaussian, the skewness score is equal to 0. Skewness figures more or less negative or positive (e.g., +2.0 or –1.5) correspond to a form of distribution curve with a
tail more or less pronounced towards positive or negative values on the x axis. Similarly, the kurtosis represents the degree to which the peak of distribution is sharp or flat, fluctuating between
+3 and –3. In a perfectly Gaussian curve the kurtosis score is 0 and of the distribution is called mesokurtic (Figure 4). Many mathematical functions to correct either the skewness or the kurtosis
have been proposed, and in some cases recommended, but their application was generally marginal. In practice, since a certain degree of skewness is always observed, a rule of thumb has been defined
according to which each distribution is considered Gaussian when the relationship between skewness and standard error is < ± 2. A similar exercise is suggested for the kurtosis, using the
relationship between kurtosis and standard error of kurtosis. After ascertaining that the assumption of normality is not violated in a significant manner, the main parameters of the Gaussian curve
(mean and standard deviation) are calculated and the interval of reference is considered to be comprised within the values of the mean ± 1.96 × standard deviation (sometimes 1.96 is rounded to 2.00)
(Figure 5).
Figure 4. General forms of kurtosis.
Figure 5. Interval of reference: Gaussian curve.
When the assumption of normality tests do not fit a normal curve, a logarithmic transformation of data can be used, in order to restore the data to a normal distribution curve; the above parameters
(mean and SD) can be then calculated.
However, sometimes no transformation and/or processing of data is possible. This can happen with data from measures of analytes expressed by specific genes, such as highly polymorphic proteins (eg
haptoglobin, lipoprotein (a)), homocysteine and others. To overcome these problems, the IFCC through its Expert Panel on Theory of Reference Values, has recommended the use of interpercentile
intervals estimated on statistical methods either parametric or nonparametric, although the recommendation is in favour of the non-parametric approach (7). Although parametric methods are most
commonly employed and seemingly simple from the point of view of calculations, they maintain unresolved all the problems outlined above. The nonparametric methods, though a bit less easy to set up,
have the ability to largely avoid such problems. Many procedures have been described (16). Currently the preferred method is based on iterative bootstrap ranking (17). The target range is between the
2.5^th and the 97.5^ th percentile (Figure 6). Even in these cases the values below and above these limits are considered “out of normality”. A widely diffused but not supported by solid bases
opinion is that the reference interval from Gaussian and non-Gaussian distributions represents the values of individuals to be referred to (i.e., “the normal individuals”) and that the areas at the
“tails” of the curve represent individuals whose values are to be rejected as “out of normality.” This is a misconception, because (18):
1. Even these values come from individuals originally included in the group chosen according to the characteristics set out before the construction of the interval of reference.
2. All values, both central and those close to the limits of distribution, are only representations of biological variability on time.
3. In any case the analytical variability influences the current data.
The above concepts are well known to professionals in laboratories, but are largely ignored or underestimated in the clinical practice. Actually, the reference limits are not cut-off limits, because
they are influenced by both the biological variability and the analytical one. Based on these considerations, the IFCC recommends estimating a confidence range of 90% for each limit of the reference
interval in both Gaussian and non-Gaussian distribution.
Figure 6. Interpercentile intervals: nonparametric distribution.
Longitudinal comparison of laboratory results
The concept of change of reference (CR) was proposed by Harris and Yasaka to enable evaluation of the observed change between two successive measures (19). The longitudinal comparison is based on
this concept and is mainly justified by the clinical problems that are not adequately answered by a cross-comparison based on the interval of reference. The Reference Change Value (or RCV) is
especially useful in monitoring and follow-up of various clinical conditions. RCV is calculated by taking into account intra-individual biological variability, in addition to analytical variability
in the medium to long term, in order to take into account the time elapsed between the test results. The general formula is as follows:
RCV = z[p] × 2^1/2 × (CV[a] + CV[w])^1/2;
where z[p] is the probability density function (generally 1.96 at P = 0.05), CV[w] is the intra-individual biological variability andCV[a] is the variability of analytical testing. RCV shows some
special additional benefits to get information on the status of patients, particularly in the monitoring of clinically stable and well controlled conditions, such as the prognosis of the crisis of
rejection in kidney transplant patients, monitoring of oral anticoagulant therapy (OAT), the glycated hemoglobin (A1c) in diabetes and other conditions (20–23). RCV is only applicable when CV[a] <
0.5 < CV[w]. Close monitoring of analytical quality is needed for, especially when the time between the first test and the next is rather long such as for glycated hemoglobin.
In discussing the comparison of longitudinal data it appears appropriate to introduce the concept of index of individuality (I.I.) (24,25). The I.I. represents the ratio of the random distribution of
values observed in samples taken from one individual for a given test compared to the distribution of values of the entire population of individuals for the same test. When the observed I. I. is low,
it is of little clinical utility using traditional reference interval. A cut-off value of I.I. ≤ 0.6 is considered and in this case the comparison of longitudinal data is much more suited to evaluate
the changes observed using RCV. When the results of laboratory tests with a low RCV are located near the limits of distribution of the traditional IR, in a position of low frequency, there are two
a) stable condition if the previous result was similar;
b) a condition achieved in recent times if the result show variation.
Since in this case the traditional IR is insensitive and therefore not needed, only a previous result of that test can clarify the situation. It is also important to consider that many of the
laboratory tests that explore aspects of body metabolism show low homeostatic I.I. in respect of IR. For a number of tests it seems therefore important to collect the results in databases or systems
for collecting personal data (such as chip-based flash memory card, now widely available and very inexpensive) to access when needed. For every repetition of the series of tests, the results should
be collected, and compared to the previous ones.
Recently the concept of estimating the differences between serial results as the probability of change by calculating the likelihood ratio (likelihood ratio) in addition to RCV has been introduced
(26). The procedure appears robust from a theoretical point of view and deserves to be widely adopted, as it seems likely to improve the monitoring of individual conditions and provide clinical
support to rational clinical decision. Each individual could benefit from a progressive assessment (“in progress”) of their own health and any deviation from her/his reference state identified and
The quality performances resulting from the current technology advancements allow clinical laboratories to fully exploit the opportunity of creating common IRs in order to accomplish transferability
of data, thus increasing citizens benefits and meeting health system expectations.
|
{"url":"https://biochemia-medica.com/en/journal/20/2/10.11613/BM.2010.020/fullArticle","timestamp":"2024-11-13T05:15:53Z","content_type":"text/html","content_length":"98106","record_id":"<urn:uuid:e47363ec-adf2-410c-90a9-aa007ede6a09>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00011.warc.gz"}
|
IRS Daily Compound Interest Calculator - GEGCalculators
IRS Daily Compound Interest Calculator
IRS Daily Compound Interest Calculator
How do you calculate daily compound interest rate? The daily compound interest rate can be calculated by dividing the annual interest rate by the number of compounding periods in a year (365 for
daily compounding).
How do I calculate IRS interest? IRS (Internal Revenue Service) interest calculations can be complex and depend on the specific tax-related situation. For tax-related interest calculations, it’s best
to consult IRS guidelines or consult a tax professional.
How do you calculate the daily interest amount? To calculate the daily interest amount, multiply the outstanding balance by the daily interest rate (annual rate divided by 365).
How much is $1,000 worth at the end of 2 years if the interest rate of 6% is compounded daily? Assuming daily compounding at a 6% annual interest rate, $1,000 would be worth approximately $1,123.68
at the end of 2 years.
How do you manually calculate daily compound interest? Manually calculating daily compound interest involves using the formula A = P(1 + r/n)^(nt), where A is the final amount, P is the principal, r
is the annual interest rate, n is the compounding frequency (365 for daily), and t is the time in years.
Is daily compounding better than monthly? Daily compounding is generally better for savers because it allows your savings to grow more quickly compared to monthly compounding, assuming the same
annual interest rate.
What is compound interest IRS? Compound interest for IRS (Internal Revenue Service) purposes can refer to the interest accrued on unpaid tax liabilities. The IRS determines the interest rate used for
such calculations.
What is compounded daily interest? Compounded daily interest means that interest is calculated and added to the account balance every day, allowing your savings or investments to grow more rapidly.
How do you calculate compound interest in Excel? In Excel, you can calculate compound interest using the formula A = P(1 + r/n)^(nt), where A is the final amount, P is the principal, r is the annual
interest rate, n is the compounding frequency, and t is the time in years. You can use the POWER function or the ^ operator for exponentiation in Excel.
How do you calculate interest calculated daily and paid monthly? To calculate interest calculated daily and paid monthly, use the daily interest rate (annual rate divided by 365) to calculate daily
interest, and then multiply it by the number of days in a month.
How long will it take to increase a $2,200 investment to $10,000 if the interest rate is 6.5 percent? Assuming daily compounding at a 6.5% annual interest rate, it would take approximately 16.8 years
to increase a $2,200 investment to $10,000.
How much will $10,000 be worth in 20 years? The future value of $10,000 in 20 years depends on the interest rate and compounding frequency. Assuming an annual interest rate of 5%, it would grow to
approximately $26,532.98 with annual compounding.
How much is $5,000 with 3% interest? The amount $5,000 with a 3% annual interest rate would be worth approximately $5,927.43 after one year of compounding.
What is the 365/360 rule? The 365/360 rule is a method used in some financial calculations, often related to loans, where a year is considered to have 360 days for simplicity, allowing for easier
interest calculations.
Does daily compound interest exist? Yes, daily compound interest exists and is used in various financial products and savings accounts.
Do any banks offer compound interest? Many banks offer savings accounts and investment products that provide compound interest to help customers grow their savings over time.
Is it better to have interest compounded daily or annually? For savers, it is generally better to have interest compounded daily rather than annually because daily compounding results in higher
overall interest earnings on the same principal amount and interest rate.
Is it better to have interest compounded daily or quarterly? It is typically better to have interest compounded daily than quarterly, as more frequent compounding generally leads to higher earnings
on savings.
What is the miracle of compound interest? The “miracle” of compound interest refers to the exponential growth of savings or investments over time due to the compounding effect, where interest earns
interest on both the initial principal and previously earned interest.
What are the disadvantages of compound interest? The disadvantages of compound interest for borrowers include higher overall interest costs, while for savers, the disadvantages may include lower
liquidity due to locked-in savings.
Do accountants use compound interest? Accountants may use compound interest calculations when dealing with financial planning, investments, and interest-bearing financial instruments.
What does 3% interest compounded daily mean? 3% interest compounded daily means that the annual interest rate is 3%, and the interest is calculated and added to the account balance every day.
How do I avoid daily compound interest? To avoid paying daily compound interest, pay off loans and credit card balances early or make larger payments to reduce the outstanding balance and minimize
the impact of compounding.
Do banks compound daily? Some banks may compound interest daily, but it depends on the specific terms and conditions of the account or loan.
What is the formula of compound interest with an example? The compound interest formula is A = P(1 + r/n)^(nt), where A is the final amount, P is the principal, r is the annual interest rate, n is
the compounding frequency, and t is the time in years. For example, if you have a principal of $1,000, an annual interest rate of 5%, compounded quarterly over 2 years, you can calculate the final
amount by plugging these values into the formula.
What is the formula for monthly compound interest? The formula for monthly compound interest is the same as for daily compound interest: A = P(1 + r/n)^(nt), where A is the final amount, P is the
principal, r is the annual interest rate, n is the compounding frequency (12 for monthly), and t is the time in years.
What is $15,000 at 15% compounded annually for 5 years? The amount $15,000 at a 15% annual interest rate, compounded annually for 5 years, would be worth approximately $27,096.25.
Is interest calculated daily or monthly? Interest can be calculated daily, monthly, quarterly, or annually, depending on the terms of the financial product or loan.
Does money double every 7 years? Money can double approximately every 7 years if it earns a consistent annual compound interest rate of around 10%.
How long will it take you to double your money if you invest $1,000 at 8% compounded annually? To double your money at an 8% annual interest rate, compounded annually, it would take approximately 9
How can I double my money in 5 years? To double your money in 5 years, you would need an annual compound interest rate of approximately 14.4%.
How much money was $1,000 invested in the S&P 500 in 1980? The value of $1,000 invested in the S&P 500 in 1980 would have grown significantly over the years. The exact amount would depend on the
performance of the S&P 500 index.
How much will $1,000,000 be worth in 30 years? The future value of $1,000,000 in 30 years depends on the interest rate and compounding frequency. Assuming an annual interest rate of 5%, it would grow
to approximately $4,321,940.52 with annual compounding.
How to double $10,000 quickly? To double $10,000 quickly, you would need to find an investment or savings vehicle with a high rate of return, but such investments typically come with higher risk.
How much interest will $250,000 earn in a year? The interest earned on $250,000 in a year depends on the interest rate offered by the investment or savings account. For example, at a 4% annual
interest rate, it would earn $10,000 in a year.
How much interest does $500,000 earn in a year? The interest earned on $500,000 in a year depends on the interest rate. At a 3% annual interest rate, it would earn $15,000 in a year.
How much will $30,000 be worth in 10 years? The future value of $30,000 in 10 years depends on the interest rate and compounding frequency. Assuming an annual interest rate of 4%, it would grow to
approximately $43,219.99 with annual compounding.
What is the fastest way to calculate compound interest? The fastest way to calculate compound interest is to use a financial calculator, spreadsheet software like Excel, or online compound interest
calculators, which can automate the calculations.
Why is compound interest so powerful? Compound interest is powerful because it allows savings or investments to grow exponentially over time, as interest earns interest on both the initial principal
and previously earned interest.
What type of interest will earn you the most money? Interest that compounds more frequently (e.g., daily) at a higher annual interest rate will generally earn you the most money on your savings or
What is the most common method of interest calculation? The most common method of interest calculation is simple interest for loans and compound interest for savings and investments.
What are 3 different methods of calculating interest? Three different methods of calculating interest include simple interest, compound interest, and amortization.
What is the 30/360 daily interest? The 30/360 method is a simplified interest calculation method often used in financial contracts, assuming a year has 360 days, and each month has 30 days for ease
of calculation.
What is the formula for daily interest? The formula for daily interest is based on compound interest: A = P(1 + r/n)^(nt), where A is the final amount, P is the principal, r is the annual interest
rate, n is the compounding frequency (365 for daily), and t is the time in years.
Is it better to compound daily or monthly? It is typically better to compound interest daily rather than monthly because daily compounding results in higher overall interest earnings on the same
principal amount and interest rate.
How can I earn daily compound interest? You can earn daily compound interest by investing in financial products or savings accounts that offer daily compounding, such as high-yield savings accounts
or certain investment funds.
Where can I get 7% interest on my money? Earning a consistent 7% interest rate on your money may require investing in riskier assets or considering options like long-term investment accounts or
diversified portfolios.
Does Marcus compound daily? As of my last knowledge update in January 2022, Marcus by Goldman Sachs offered savings accounts with daily compounding interest. Please verify the current terms and
Which bank gives monthly compounding? Many banks and financial institutions offer savings accounts with monthly compounding interest.
How much is $1,000 worth at the end of 2 years if the interest rate of 6% is compounded daily? As mentioned earlier, assuming daily compounding at a 6% annual interest rate, $1,000 would be worth
approximately $1,123.68 at the end of 2 years.
What is an example of daily compound interest? An example of daily compound interest is when interest is calculated and added to a savings account balance every day, allowing the account to grow more
rapidly over time.
GEG Calculators is a comprehensive online platform that offers a wide range of calculators to cater to various needs. With over 300 calculators covering finance, health, science, mathematics, and
more, GEG Calculators provides users with accurate and convenient tools for everyday calculations. The website’s user-friendly interface ensures easy navigation and accessibility, making it suitable
for people from all walks of life. Whether it’s financial planning, health assessments, or educational purposes, GEG Calculators has a calculator to suit every requirement. With its reliable and
up-to-date calculations, GEG Calculators has become a go-to resource for individuals, professionals, and students seeking quick and precise results for their calculations.
Leave a Comment
|
{"url":"https://gegcalculators.com/irs-daily-compound-interest-calculator/","timestamp":"2024-11-15T04:48:05Z","content_type":"text/html","content_length":"184320","record_id":"<urn:uuid:b16d8143-0af7-49da-821e-ef4c973e7ab2>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00513.warc.gz"}
|
The Good Eon / The lives / 84000 Reading Room
’phags pa bskal pa bzang po zhes bya ba theg pa chen po’i mdo
The Noble Great Vehicle Sūtra “The Good Eon”
Toh 94
Degé Kangyur vol. 45 (mdo sde, ka), folios 1.b–340.a
ᴛʀᴀɴsʟᴀᴛᴇᴅ ɪɴᴛᴏ ᴛɪʙᴇᴛᴀɴ ʙʏ
• Vidyākarasiṁha
• Palgyi Yang
• Paltsek
Translated by the Dharmachakra Translation Committee
under the patronage and supervision of 84000: Translating the Words of the Buddha
First published 2022
Current version v 1.1.21 (2024)
Generated by 84000 Reading Room v2.25.1
84000: Translating the Words of the Buddha is a global non-profit initiative to translate all the Buddha’s words into modern languages, and to make them available to everyone.
This work is provided under the protection of a Creative Commons CC BY-NC-ND (Attribution - Non-commercial - No-derivatives) 3.0 copyright. It may be copied or printed for fair use, but only with
full attribution, and not for commercial advantage or personal compensation. For full details, see the Creative Commons license.
While resting in a park outside the city of Vaiśālī, the Buddha is approached by the bodhisattva Prāmodyarāja, who requests meditation instruction. The Buddha proceeds to give a teaching on a
meditative absorption called elucidating the way of all phenomena and subsequently delivers an elaborate discourse on the six perfections. Prāmodyarāja then learns that all the future buddhas of the
Good Eon are now present in the Blessed One’s audience of bodhisattvas. Responding to Prāmodyarāja’s request to reveal the names under which these present bodhisattvas will be known as buddhas in the
future, the Buddha first lists these names, and then goes on to describe the circumstances surrounding their birth, awakening, and teaching in the world. In the sūtra’s final section, we learn how
each of these great bodhisattvas who are on the path to buddhahood first developed the mind of awakening.
Translated by the Dharmachakra Translation Committee under the guidance of Chokyi Nyima Rinpoche. Thomas Doctor produced the translation and Andreas Doctor, Anya Zilman, and Nika Jovic compared the
draft translation with the original Tibetan and edited the text. The introduction was written by Thomas Doctor and the 84000 editorial team.
The translation was completed under the patronage and supervision of 84000: Translating the Words of the Buddha.
The generous sponsorship of Dzongsar Jamyang Khyentse Rinpoche, Zhou Tian Yu, Chen Yi Qin, Zhou Xun, Zhao Xuan, Chen Kun, and Zhuo Yue, which helped make the work on this translation possible, is
most gratefully acknowledged.
Text Body
The Translation
The Noble Great Vehicle Sūtra
The Good Eon
When the Blessed One had spoken these words, the bodhisattva Prāmodyarāja made the following request: “Blessed One, this is excellent. Blessed One, for the benefit of gods and humans, please explain
about the birthplace, the family, the light, the father, the mother, the son, the attendant, the two foremost and excellent followers, the perfect community of monks, the lifespan, the duration of
the sacred Dharma, and the manifestation of relics that pertain to each of these buddhas of the Good Eon, so that numerous beings may receive healing and be happy, and so that bodhisattvas of the
future may persevere in hearing and remain inspired, become exceptionally accomplished in the sacred Dharma, and become sources of insight.”
The Blessed One replied to the bodhisattva Prāmodyarāja, “Noble son, then listen carefully and keep my words in mind. I shall explain. The birthplace of the thus-gone Krakucchanda is known as
Excellent City of Royal Palaces. His family line is that of Kāśyapa. His light extended across one league. Worship Gift was his father. Brahmā Victory was his mother. Supreme was his son. [F.102.b]
Perfect Wisdom was his attendant. Among his monks, Master Scholar was foremost in terms of insight. Reciter was foremost in terms of miraculous abilities. His first congregation consisted of forty
thousand monks, his second of seventy thousand monks, and his third contained sixty thousand hearers. The extent of his lifespan was forty thousand years. His sacred Dharma remained for eighty
thousand years. His relics remained in a single collection and were contained in a single stūpa.
“The birthplace of the thus-gone Kanakamuni is known as Fifth City. His family was brahmin. His light extended across half a league. Fire Gift was his father. Highest was his mother. Victorious Army
was his son. Auspicious One was his attendant. Highest was foremost in terms of insight. Victory was foremost in terms of miraculous abilities. His first congregation consisted of seventy thousand
monks, his second of sixty thousand monks, and his third of fifty thousand monks. The extent of his lifespan was thirty thousand years. His sacred Dharma remained for a thousand years. His relics
remained in a single collection. There was also only one stūpa.
“The birthplace of the thus-gone Kāśyapa is known as Cetana. His family was brahmin. His light extended across a mile. Brahmā Gift was his father. Wealth Possessor was his mother. Leader was his son.
Friend of All was his attendant. Bharadvāja was foremost in terms of insight. Star King was foremost in terms of miraculous abilities. His first congregation consisted of twenty thousand monks, his
second of eighty thousand monks, and his third of sixty thousand monks. The extent of his lifespan was twenty thousand years. His sacred Dharma remained for seven thousand years. His relics remained
in a single collection. There was also only one stūpa.
“Prāmodyarāja, I, the thus-gone Śākyamuni, was born in Kapilavastu. [F.103.a] My family is kṣatriya and my lineage that of Gautama. My light extends across one fathom. Śuddhodana is my father. Māyā
is my mother. Rāhula is my son. Ānanda is my attendant. Upatiṣya is foremost in terms of insight. Kolita is foremost in terms of miraculous abilities. My first congregation consisted of one thousand
two hundred and fifty monks. The extent of my lifespan is one hundred years. My sacred Dharma will remain for five hundred years; for five hundred years there will remain a contrived appearance of
the sacred Dharma. There will be abundant relics.
“Prāmodyarāja, the thus-gone Maitreya will be born in the royal palace of the city known as Crown Intelligence. His family will be brahmin. His light will extend a league. Excellent Brahmā will be
his father. Brahmā Lady will be his mother. Power of Merit will be his son. Ocean will be his attendant. Wisdom Light will be foremost in terms of insight. Firm Endeavor will be foremost in terms of
miraculous abilities. His first congregation will consist of nine hundred and sixty million worthy hearers, his second of nine hundred and forty million worthy hearers, and his third of nine hundred
and twenty million worthy hearers. The extent of his lifespan will be eighty-four thousand years. His sacred Dharma will remain for eighty thousand years. His relics will remain in a single
collection. There will also only be one stūpa.
“The birthplace of the thus-gone Siṃha will be the city known as Flower God. His family will be kṣatriya. His light will extend a league. Lion Tiger will be his father. Cry of Bliss will be his
mother. Great Power will be his son. Gentle will be his attendant. Wisdom Mount will be foremost in terms of insight. Cloud Bearer will be foremost in terms of miraculous abilities. His first
congregation will consist of one billion members, his second of nine hundred million members, [F.103.b] and his third of eight hundred million members. The extent of his lifespan will be seventy
thousand years. His sacred Dharma will remain for ten million years. His relics will be abundant.
“The birthplace of the thus-gone Pradyota will be the city known as Star Bearer. His family will be kṣatriya. His light will extend five leagues. Excellent Intelligence will be his father. Flower
will be his mother. Time Knower will be his son. Leader of Heroes will be his attendant. Fortune will be foremost in terms of insight. Sound of Thunder will be foremost in terms of miraculous
abilities. His first congregation will consist of one trillion hearers, his second of nine hundred ninety million hearers, and his third of nine hundred eighty million hearers. The extent of his
lifespan will be ninety million years. His sacred Dharma will remain for eighty-five thousand years. His relics will be abundant.
“The birthplace of the thus-gone Muni will be the city known as Highest Flower. His family will be kṣatriya. His light will extend one league. Great Mountain will be his father. Jasmine Flower will
be his mother. Supreme Jewel will be his son. Truly Supreme will be his attendant. Certain Intelligence will be foremost in terms of insight. Power Gift will be foremost in terms of miraculous
abilities. His first congregation will consist of ten thousand hearers, his second of four hundred million hearers, and his third of five hundred million worthy ones. The extent of his lifespan will
be sixty thousand years. His sacred Dharma will remain for one thousand years. His relics will be abundant.
“The birthplace of the thus-gone Kusuma will be the city known as White Lotus. His family will be brahmin. His light will extend eight leagues. Supreme Time will be his father. Flower will be his
mother. Leader of the People will be his son. Joy of Awakening will be his attendant. [F.104.a] Dharma Power will be foremost in terms of insight. Indomitable will be foremost in terms of miraculous
abilities. His first congregation will consist of sixty billion hearers, his second of three hundred fifty million hearers, and his third of three hundred forty million hearers. The extent of his
lifespan will be fifty thousand years. His sacred Dharma will remain for one thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the second thus-gone one named Kusuma will be called Vast Splendor. His family will be brahmin. His light will extend one league. Flower Crown will be his father. Endowed with
Dharma will be his mother. Pure Body will be his son. Keen Intelligence will be his attendant. Excellent Joy will be foremost in terms of insight. Joyous will be foremost in terms of miraculous
abilities. His first congregation will consist of one hundred forty million hearers, his second of one hundred fifty million hearers, and his third of one hundred sixty million hearers. The extent of
his lifespan will be ninety million years. His sacred Dharma will remain for one hundred million years. His relics will be abundant.
“The birthplace of the thus-gone Sunetra will be called Well Doer. His family will be brahmin. His light will extend twelve leagues. Jewel will be his father. Endowed with Śāla Trees will be his
mother. Star King will be his son. Qualities of Intelligence will be his attendant. Infinite Intelligence will be foremost in terms of insight. Lion Gait will be foremost in terms of miraculous
abilities. His first congregation will consist of three hundred thousand hearers, his second of two hundred and eight thousand hearers, and his third of three hundred and six thousand hearers. The
extent of his lifespan will be seventy thousand years. [F.104.b] His sacred Dharma will remain for three hundred million years. His relics will be abundant. This thus-gone one alone will ripen more
sentient beings than all the first ten combined.
“The thus-gone Sārthavāha will be born in the royal palace of the city known as Supreme Beauty. His family will be brahmin. His light will extend thirty-four leagues. Undaunted will be his father.
Wish to Benefit will be his mother. Joy will be his son. Ocean will be his attendant. Supreme Glory will be foremost in terms of insight. Worthy of Worship will be foremost in terms of miraculous
abilities. His first congregation will consist of seven hundred thousand hearers, his second of six hundred thousand hearers, and his third of five hundred thousand hearers. The extent of his
lifespan will be ten billion years. His sacred Dharma will remain for ninety-two thousand years. His relics will be abundant.
“The birthplace of the thus-gone Mahābāhu will be called Movement. His family will be kṣatriya. His light will extend five leagues. Diligence Gift will be his father. Given by the Sages will be his
mother. Illuminator will be his son. Excellent Mind will be his attendant. Undaunted Roar will be foremost in terms of insight. Moving like the Wind will be foremost in terms of miraculous abilities.
His first congregation will consist of one hundred million hearers, and beyond that innumerably many. The extent of his lifespan will be four hundred million years. His sacred Dharma will remain for
ten million years. His relics will be abundant.
“The birthplace of the thus-gone Mahābala will be the city called Jewel Splendor. His family will be brahmin. His light will extend thirty leagues. Gorgeous will be his father. Splendid will be his
mother. Lion Gait will be his son. Excellent Joy will be his attendant. [F.105.a] Supreme Gift will be foremost in terms of insight. Proper Adherence will be foremost in terms of miraculous
abilities. His first congregation will consist of three hundred thousand hearers, his second of twice as many hearers, and his third of ten thousand hearers. The extent of his lifespan will be forty
thousand years. His sacred Dharma will remain for eighty-four thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Nakṣatrarāja will be the city called Jambu River. His family will be brahmin. His light will extend one hundred leagues. Light Gift will be his father. Excellent
Intelligence will be his mother. Conqueror will be his son. Wisdom Power will be his attendant. Master of Discussion will be foremost in terms of insight. Blessing will be foremost in terms of
miraculous abilities. His first congregation will consist of one billion hearers, his second of one billion nine hundred million hearers, and his third of one billion eight hundred million hearers.
The extent of his lifespan will be eighty million years. His sacred Dharma will remain for ten million years. His relics will be abundant.
“The birthplace of the thus-gone Oṣadhi will be called Endowed with Śāla Trees. His family will be kṣatriya. His light will extend one league. Excellent Youth will be his father. Beautiful
Intelligence will be his mother. Mountain Banner will be his son. Flower will be his attendant. Source of Dharma will be foremost in terms of insight. Power of Merit will be foremost in terms of
miraculous abilities. His first congregation will consist of seven hundred million hearers, his second of six hundred ninety million hearers, and his third of seven hundred eighty million hearers.
The extent of his lifespan will be seventy-seven thousand years. [F.105.b] His sacred Dharma will remain for sixty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Yaśaketu161 will be called Radiant Splendor. His family will be kṣatriya. His light will extend thirty leagues. Luminous will be his father. Endowed with Śāla Trees
will be his mother. Highest Flower will be his son. Eye of Joy will be his attendant. Insight Power will be foremost in terms of insight. Lion Strength will be foremost in terms of miraculous
abilities. His first congregation will consist of three hundred twenty thousand hearers, his second of three hundred ten thousand hearers, and his third of three hundred thousand hearers. The extent
of his lifespan will be twenty million years. His sacred Dharma will remain for twenty million years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Mahāprabha will be the city called Blissful Joy. His family will be kṣatriya. His light will extend forty leagues. Bajira will be his father. Given by the Victor will
be his mother. Moon Parasol will be his son. Serene Intelligence will be his attendant. Essence of the Sentient will be foremost in terms of insight. Gentle will be foremost in terms of miraculous
abilities. His first congregation will consist of nine hundred seventy million hearers, his second of nine hundred fifty million hearers, and his third of nine hundred thirty million hearers. The
extent of his lifespan will be one thousand years. His sacred Dharma will remain for ninety thousand years. His relics will be abundant.
“The birthplace of the thus-gone Muktiskandha will be the city called Moon Bearer. His family will be kṣatriya. His light will extend one league. Excellent Mind will be his father. Giver of Lightning
will be his mother. Joyous Movement will be his son. Jewel Light will be his attendant. Wisdom Hero will be foremost in terms of insight, [F.106.a] and Stable Power will be foremost in terms of
miraculous abilities. His first congregation will consist of eight hundred million thirty thousand hearers, his second of twice as many hearers, and his third of sixty thousand hearers. The extent of
his lifespan will be one hundred thousand years. His sacred Dharma will remain for five hundred ten thousand years. His relics will be abundant.
“The birthplace of the thus-gone Vairocana will be the city called Excellent Dharma. His family will be kṣatriya. His light will extend nine leagues. Conqueror will be his father. Excellent Flower
will be his mother. Same Image will be his son. Land of Excellence will be his attendant. Utter Excellence will be foremost in terms of insight. Joyous will be foremost in terms of miraculous
abilities. His first congregation will consist of six hundred twenty thousand hearers, his second of six hundred ten thousand hearers, and his third of six hundred thousand hearers. The extent of his
lifespan will be five hundred years. His sacred Dharma will remain for forty-eight thousand years. His relics will be abundant.
“The birthplace of the thus-gone Sūryagarbha will be the city called Endowed with Flowers. His family will be brahmin. His light will extend two hundred leagues. Wealth Possessor will be his father.
Flower will be his mother. Star Knower will be his son. Highest Wisdom will be his attendant. Force of Insight will be foremost in terms of insight. Vajra Force will be foremost in terms of
miraculous abilities. His first congregation will consist of one hundred thousand monks, his second of one billion hearers, his third of eight hundred quadrillion hearers, and his fourth of nine
million hearers. The extent of his lifespan will be seven hundred million years. His sacred Dharma will remain for three hundred million years. His relics will remain in a single collection. There
will also only be one stūpa.
“The birthplace of the thus-gone Candra will be known as Supreme Jewel. [F.106.b] His family will be kṣatriya. His light will extend eight leagues. Campaka Eye will be his father. Medicine will be
his mother. Punarvasu will be his son. Utterly Fearless will be his attendant. Highest Insight will be foremost in terms of insight. Superior Dharma will be foremost in terms of miraculous abilities.
His first congregation will consist of twelve billion hearers, his second of fourteen billion hearers, his third of eighteen billion hearers, and his fourth of twenty billion hearers. The extent of
his lifespan will be six thousand years. His sacred Dharma will remain for eleven thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Arciṣmat will be the city called Excellent Dust. His family will be kṣatriya. His light will extend sixty leagues. Heap of Merit will be his father. Dharma
Intelligence will be his mother. Supreme Campaka will be his son. Great Acumen will be his attendant. Sound of Thunder will be foremost in terms of insight. Highest Wisdom will be foremost in terms
of miraculous abilities. His first congregation will consist of seven million hearers, his second of eight million hearers, and his third of nine million hearers. The extent of his lifespan will be
one hundred thousand years. His sacred Dharma will remain for three hundred thousand years. His relics will be abundant.
“The birthplace of the thus-gone Suprabha will be the city called Starlight. His family will be brahmin. His light will extend thirteen leagues. Sun Power will be his father. Moon Possessor will be
his mother. Great Lord will be his son. Stable Power will be his attendant. Wisdom Gift will be foremost in terms of insight. Body of Brightness will be foremost in terms of miraculous abilities. His
first congregation will consist of five billion hearers, [F.107.a] his second of four hundred million hearers, and his third of three hundred fifty million hearers. The extent of his lifespan will be
eighty-five thousand years. His sacred Dharma will remain for forty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Aśoka will be the city called Insight Gift. His family will be kṣatriya. His light will extend eighteen leagues. Flower Gift will be his father. Endowed with Dharma
will be his mother. Light Gift will be his son. Melody of Joy will be his attendant. Great Mountain will be foremost in terms of insight. Given by the Victor will be foremost in terms of miraculous
abilities. His first congregation will consist of twenty thousand hearers, his second of ten thousand hearers, and his third of nine hundred fifty thousand hearers. The extent of his lifespan will be
one thousand years. His sacred Dharma will remain for three hundred fifty thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Tiṣya will be the city called Supreme Campaka. His family will be brahmin. His light will extend eight leagues. Divine Excellence will be his father. Generosity Joy
will be his mother. Flash of Light will be his son. Joyous Sight will be his attendant. Highest Wisdom will be foremost in terms of insight. Superior to the World will be foremost in terms of
miraculous abilities. His first congregation will consist of eight hundred million hearers, his second of seven hundred eighty million hearers, and his third of seven hundred million hearers. The
extent of his lifespan will be thirty-two thousand years. His sacred Dharma will remain for seventy thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Pradyota will be called Endowed with Islands. His family will be brahmin. His light will extend a thousand worlds. Superior Dharma will be his father. Lotus Possessor
will be his mother. [F.107.b] Moon Foot will be his son. Melody of Fame will be his attendant. Glorious Merit will be foremost in terms of insight. Vajra Gift will be foremost in terms of miraculous
abilities. His first congregation will consist of one hundred sixty million hearers, his second of one hundred seventy million hearers, and his third of one hundred eighty million hearers. The extent
of his lifespan will be fourteen thousand years. His sacred Dharma will remain for twenty-one thousand years. His relics will be abundant.
“The birthplace of the thus-gone Mālādhārin will be the city called Source of Merit. His family will be kṣatriya. His light will extend eighty leagues. White Lotus will be his father. Gift of
Qualities will be his mother. Glory of Merit will be his son. Excellent Form will be his attendant. Infinite Lamp will be foremost in terms of insight. King of the Gathering will be foremost in terms
of miraculous abilities. His first congregation will consist of nine hundred million worthy hearers, his second of nine hundred ninety million worthy hearers, and his third of eight hundred eighty
million worthy hearers. The extent of his lifespan will be seventy thousand years. His sacred Dharma will remain for ten million years. His relics will be abundant.
“The birthplace of the thus-gone Guṇaprabha will be the city called Utpala. His family will be kṣatriya. His light will extend sixty leagues. Endowed with Light will be his father. Heap of Merit will
be his mother. Dharma Acumen will be his son. Merit Worthy of Worship will be his attendant. Beryl Essence will be foremost in terms of insight. Granted by the Ground will be foremost in terms of
miraculous abilities. His first congregation will consist of sixteen billion hearers, his second of twelve billion hearers, and his third of eighteen billion hearers. [F.108.a] The extent of his
lifespan will be three thousand years. His sacred Dharma will remain for ten million years. His relics will be abundant.
“The birthplace of the thus-gone Arthadarśin will be the city called Supreme Essence. His family will be brahmin. His light will extend sixty-two leagues. Moon of the Land will be his father. Divine
Joy will be his mother. Renowned Qualities will be his son. Brahmā Roar will be his attendant. Moon Mind will be foremost in terms of insight. Given by the Victor will be foremost in terms of
miraculous abilities. His first congregation will consist of six hundred twenty thousand worthy ones, his second of seven hundred thousand worthy ones, and his third of eight hundred thousand worthy
ones. The extent of his lifespan will be one hundred years. His sacred Dharma will remain for a hundred million years. His relics will be abundant.
“The birthplace of the thus-gone Pradīpa will be the city called Beautiful Jewel. His family will be kṣatriya. His light will extend fifty leagues. Jewel Edge will be his father. Star Color will be
his mother. Jewel Essence will be his son. Clear Mind will be his attendant. Indomitable will be foremost in terms of insight. Endowed with Power will be foremost in terms of miraculous abilities.
His first congregation will consist of seven hundred thousand hearers, his second of nine hundred thousand hearers, and his third of one million hearers. The extent of his lifespan will be fifty
thousand years. His sacred Dharma will remain for two hundred thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Prabhūta will be the city called Splendid Light. His family will be brahmin. His light will extend one league. Excellent Gift will be his father. Endowed with
Excellent Thought will be his mother. Friend of the Victorious Ones will be his son. Lion Strength will be his attendant. Gift of Freedom from Suffering will be foremost in terms of insight,
[F.108.b] and Lofty Mountain will be foremost in terms of miraculous abilities. His first congregation will consist of thirty million hearers, his second of two billion hearers, and his third of one
billion hearers. The extent of his lifespan will be forty thousand years. His sacred Dharma will remain for ninety thousand years. His relics will be abundant.
“The birthplace of the thus-gone Vaidya will be called Accomplishment of Yogic Discipline. His family will be brahmin. His light will extend seventy-seven leagues. Intention will be his father.
Supreme Jambu will be his mother. Hero Gift will be his son. Moon Joy will be his attendant. Ocean will be foremost in terms of insight. Elephant Power will be foremost in terms of miraculous
abilities. His first congregation will consist of two million three hundred thousand hearers, his second of two million five hundred thousand hearers, and his third of two million eight hundred
thousand hearers. The extent of his lifespan will be seventy thousand years. His sacred Dharma will remain for two million five hundred thousand years. His relics will remain in a single collection.
There will also only be one stūpa.
“The birthplace of the thus-gone Sūrata will be the city called Excellent Wealth. His family will be brahmin. His light will extend ten leagues. Supreme Treasure will be his father. Moonlight will be
his mother. Lord of Dharma will be his son. Joy for the World will be his attendant. Sound of Thunder will be foremost in terms of insight. Flower Gift will be foremost in terms of miraculous
abilities. His first congregation will consist of three hundred million hearers, his second of two hundred eighty million hearers, and his third of two hundred seventy million hearers. The extent of
his lifespan will be thirty-six thousand years. His sacred Dharma will remain for one thousand years. His relics will be abundant.
“The birthplace of the thus-gone Ūrṇa will be the city called Radiant Splendor. [F.109.a] His family will be brahmin. His light will extend one hundred leagues. King of the Gathering will be his
father. Fortunate Joy will be his mother. Mountain Gift will be his son. Divine Moon will be his attendant. Wisdom Joy will be foremost in terms of insight. Worshiped by Gods will be foremost in
terms of miraculous abilities. His first congregation will consist of six hundred twenty million hearers, his second of six hundred ten million hearers, and his third of six hundred million hearers.
The extent of his lifespan will be fifty thousand years. His sacred Dharma will remain for seventy thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Dṛḍha will be called Sound of Merit. His family will be brahmin. His light will extend one league. Śāla King will be his father. Joy of Good People will be his
mother. Gift to the World will be his son. Splendid Power will be his attendant. Moon Crest will be foremost in terms of insight. Luminous Qualities will be foremost in terms of miraculous abilities.
His first congregation will consist of one hundred thousand hearers, his second of nine million hearers, and his third of eight million hearers. The extent of his lifespan will be twelve thousand
years. His sacred Dharma will remain for twenty-eight thousand years. His relics will be abundant.
“The birthplace of the thus-gone Śrīdeva will be called Endowed with Jewels. His family will be kṣatriya. His light will extend twelve leagues. Honey Vessel will be his father. Endowed with Dharma
will be his mother. Crown of Joy will be his son. Firm Endeavor will be his attendant. Earth Gift will be foremost in terms of insight. Powerful Moon will be foremost in terms of miraculous
abilities. He will have one congregation, containing one billion hearers. [F.109.b] The extent of his lifespan will be one hundred years. His sacred Dharma will remain for ten million years. His
relics will be abundant.
“The birthplace of the thus-gone Duṣpradharṣa will be called Application Accomplished. His family will be kṣatriya. His light will extend ten million leagues. Divine Rāhu will be his father. Endowed
with Merit will be his mother. Hidden Moon will be his son. Joyous Truth will be his attendant. True Jewel will be foremost in terms of insight. Sound of Thunder will be foremost in terms of
miraculous abilities. His first congregation will consist of three hundred thousand hearers, his second of five hundred thousand hearers, and his third of eight hundred thousand hearers. The extent
of his lifespan will be eighty million years. His sacred Dharma will remain for one hundred eighty million years. His relics will be abundant.
“The birthplace of the thus-gone Guṇadhvaja will be called Moon Bearing. His family will be brahmin. His light will extend five leagues. Friend of the Royal Star will be his father. Gift of the
Wealth God will be his mother. Vajra Force will be his son. Jewel Joy will be his attendant. Sun Essence will be foremost in terms of insight. Leader will be foremost in terms of miraculous
abilities. His first congregation will consist of one hundred thirty thousand hearers, his second of one hundred fifty thousand hearers, and his third of one hundred sixty thousand hearers. The
extent of his lifespan will be ten million years. His sacred Dharma will remain for thirty million years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Rāhu will be called Jewel Light. His family will be kṣatriya. His light will extend seventy-six leagues. Powerful Strength will be his father. Indomitable will be his
mother. Moonlight will be his son. Beryl Essence will be his attendant. [F.110.a] God of Strength will be foremost in terms of insight. Joyous Yearning will be foremost in terms of miraculous
abilities. The extent of his lifespan will be one hundred thousand years. His sacred Dharma will remain for five hundred thousand years. This fortieth thus-gone one will liberate as many sentient
beings as those liberated by the combined activities of all the previous thus-gone ones. I shall therefore not specify his congregations. The great earth will become of the nature of the seven
precious substances. There will be trees of jewels and also trees that grow garments. Sentient beings will be born miraculously. There will be no lower realms whatsoever. [B10]
“The birthplace of the thus-gone Gaṇin will be called Beautiful Movement. His family will be brahmin. His light will extend half a league. Highest Jewel will be his father. Jewel Light will be his
mother. Earth Holder will be his son. Peaceful Mind will be his attendant. Crane Call will be foremost in terms of insight. Objectives Accomplished will be foremost in terms of miraculous abilities.
He will have a single congregation, consisting of a hundred thousand hearers. The extent of his lifespan will be thirty thousand years. His sacred Dharma will remain for seventy thousand years. His
relics will be abundant.
“The birthplace of the thus-gone Brahmaghoṣa will be called Splendid Light. His family will be brahmin. His light will extend eighty leagues. Brahmā Master will be his father. Blissful will be his
mother. Splendid Light will be his son. Lotus Eye will be his attendant. Golden Color will be foremost in terms of insight. Giver of Lightning will be foremost in terms of miraculous abilities. His
first congregation will consist of eight hundred sixty million worthy ones, [F.110.b] his second of nine hundred million worthy ones, and his third of one billion worthy ones. The extent of his
lifespan will be ninety thousand years. His sacred Dharma will remain for three thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Dṛḍhasaṃdhi will be called Blooming Flowers. His family will be kṣatriya. His light will extend sixty leagues. Joyous Merit will be his father. Happy will be his
mother. Giver of Knowledge of Time will be his son. Meaningful Action will be his attendant. Supreme Moon will be foremost in terms of insight. Supreme Gold will be foremost in terms of miraculous
abilities. His first congregation will consist of seven hundred billion worthy ones, his second of seven hundred eighty billion worthy ones, and his third of eight hundred billion worthy ones. The
extent of his lifespan will be fifty-five thousand years. His sacred Dharma will remain for forty thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Anunnata will be called Happy World. His family will be brahmin. His light will extend ten leagues. Ocean will be his father. Given by the Sages will be his mother.
Eye of Beauty will be his son. Ruler of Men will be his attendant. Well-Considered Aims will be foremost in terms of insight. Endowed with True Words will be foremost in terms of miraculous
abilities. His first congregation will consist of seventy-eight thousand worthy ones, his second of seventy-six thousand worthy ones, and his third of seventy-five thousand worthy ones. The extent of
his lifespan will be eighty million years. His sacred Dharma will remain for eighty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Prabhaṃkara will be called Golden Light. His family will be kṣatriya. His light will extend as far as the trichiliocosm. [F.111.a] Radiant will be his father. Jewel
Gift will be his mother. Precious Qualities will be his son. Moon Flower will be his attendant. Renown will be foremost in terms of insight. Royal Master of Retention will be foremost in terms of
miraculous abilities. His first congregation will consist of fifty trillion hearers, his second of forty trillion hearers, and his third of thirty trillion hearers. The extent of his lifespan will be
fifty million years. His sacred Dharma will remain for seven billion years. His relics will be abundant.
“The birthplace of the thus-gone Mahāmeru will be called Jewel Array. His family will be brahmin. His light will extend eighty leagues. Moon Splendor will be his father. Given by the Sun will be his
mother. Moon Canopy will be his son. Supreme Jewel will be his attendant. Excellent Mind will be foremost in terms of insight. Victorious Joy will be foremost in terms of miraculous abilities. His
first congregation will consist of seven hundred million worthy ones, his second of eight hundred million worthy ones, and his third of nine hundred million worthy ones. The extent of his lifespan
will be eight thousand years. His sacred Dharma will remain for thirty-two thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Vajra will be called Spread Out Splendor. His family will be kṣatriya. His light will extend sixteen leagues. Jewel Light will be his father. Utpala Eye will be his
mother. Sustainer will be his son. Ocean will be his attendant. Gift of the Wrathful will be foremost in terms of insight. Supreme Companion will be foremost in terms of miraculous abilities. His
first congregation will consist of four billion worthy hearers, his second of three billion worthy hearers, and his third of two billion worthy hearers. [F.111.b] The extent of his lifespan will be
one hundred thousand years. His sacred Dharma will remain for thirty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Sañjayin will be called Endowed with Sandalwood. His family will be brahmin. His light will extend eighty leagues. Flower will be his father. Flashing Light will be
his mother. Heap of Jewels will be his son. Jewel Mind will be his attendant. Fearless will be foremost in terms of insight. King of Mountains will be foremost in terms of miraculous abilities. His
first congregation will consist of seven hundred thousand worthy ones, his second of six hundred thousand worthy ones, and his third of five hundred thousand worthy ones. The extent of his lifespan
will be ten million years. His sacred Dharma will remain for twenty million years. His relics will be abundant.
“The birthplace of the thus-gone Nirbhaya will be called Enemy Defeater. His family will be kṣatriya. His light will extend ninety leagues. Light Gift will be his father. Moon Possessor will be his
mother. Moon Master will be his son. Moon will be his attendant. King of the Gathering will be foremost in terms of insight. Given by the Gods will be foremost in terms of miraculous abilities. His
first congregation will consist of eight hundred thousand worthy ones, his second of seven hundred eighty thousand worthy ones, and his third of seven hundred sixty thousand worthy ones. The extent
of his lifespan will be one hundred thousand years. His sacred Dharma will remain for ten million years. His relics will be abundant.
“The birthplace of the thus-gone Ratna will be called Stable Borders. His family will be brahmin. His light will extend a hundred thousand leagues. Given by the Sages will be his father. Merit Gift
will be his mother. Medicinal Flower will be his son. Diligence Gift will be his attendant. Indomitable will be foremost in terms of insight. Stable Power will be foremost in terms of miraculous
abilities. [F.112.a] In the first congregation there will be four hundred million hearers, in the second there will be three hundred eighty million, and in the third there will be one hundred sixty
million. The extent of his lifespan will be eighteen thousand years. His sacred Dharma will remain for seventy thousand years. His relics will remain in a single collection. There will also only be
one stūpa.
“The birthplace of the thus-gone Padmākṣa will be called Flower Land. His family will be kṣatriya. His light will extend thirty-two leagues. Highest Flower will be his father. Beauty will be his
mother. Divine Joy will be his son. Flower of Freedom from Suffering will be his attendant. Light of Insight will be foremost in terms of insight. Granted by Accumulations will be foremost in terms
of miraculous abilities. In the first congregation there will be seven hundred million worthy ones, in the second there will be three hundred fifty million, and in the third there will be four
hundred million. The extent of his lifespan will be eighteen thousand years. His sacred Dharma will remain for fifty-six thousand years. His relics will be abundant.
“The birthplace of the thus-gone Balasena will be called Supreme Excellence. His family will be kṣatriya. His light will extend six leagues. God of Strength will be his father. Gift of Bliss will be
his mother. Radiance of Merit will be his son. Dharma Protector will be his attendant. Victorious King will be foremost in terms of insight. Perfectly Blissful will be foremost in terms of miraculous
abilities. In the first congregation there will be sixty thousand worthy ones, in the second there will be fifty-eight thousand, and in the third there will be fifty-seven thousand. The extent of his
lifespan will be sixteen thousand years. His sacred Dharma will remain for eight thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Kusumaraśmi will be called Moon of Excellent Flowers. [F.112.b] His family will be brahmin. His light will extend seventy-eight leagues. Delightful Sight will be his
father. Star Possessor will be his mother. Stable Dharma will be his son. Awakening will be his attendant. Ultimate Intelligence will be foremost in terms of insight. Moon Banner will be foremost in
terms of miraculous abilities. In the first congregation there will be three hundred million worthy ones, in the second there will be three hundred twenty million, and in the third there will be
three hundred twenty million. The extent of his lifespan will be twenty-two thousand years. His sacred Dharma will remain for fifty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Jñānapriya will be called Supreme Wealth. His family will be kṣatriya. His light will extend eight leagues. Virtue will be his father. Excellent Splendor will be his
mother. Hidden by the Gods will be his son. Benevolent will be his attendant. Moon Canopy will be foremost in terms of insight. Campaka will be foremost in terms of miraculous abilities. In the first
congregation there will be nine billion worthy ones, in the second there will be eight billion, and in the third there will be seven billion. The extent of his lifespan will be one hundred thousand
years. His sacred Dharma will remain for fifty-seven thousand years. His relics will be abundant.
“The birthplace of the thus-gone Mahātejas will be called Abundant Offering. His family will be brahmin. His light will extend five leagues. Jewel Treasury will be his father. Splendid will be his
mother. Highest Radiance will be his son. Excellent Hand will be his attendant. Radiant Fire will be foremost in terms of insight. Lotus Essence will be foremost in terms of miraculous abilities. In
the first congregation there will be seventy thousand worthy ones, in the second there will be seventy-five thousand, and in the third there will be eighty thousand. [F.113.a] The extent of his
lifespan will be five hundred thousand years. His sacred Dharma will remain for twenty-one thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Brahmā will be called Gold Colored. His family will be brahmin. His light will extend three leagues. Joy Free from Suffering will be his father. Endowed with
Sandalwood will be his mother. Victorious Force will be his son. Enemy Tamer will be his attendant. Divine Lord will be foremost in terms of insight. Crown Vajra will be foremost in terms of
miraculous abilities. There will be a single congregation consisting of eighty million worthy ones. The extent of his lifespan will be twelve thousand years. His sacred Dharma will remain for
forty-one thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Amitābha will be called Delightful. His family will be kṣatriya. His light will extend seventy-six leagues. Shining Master of Melodies will be his father. Moonlight
will be his mother. Joyous will be his son. Joyous Force will be his attendant. Master of Purity will be foremost in terms of insight. Flashing Light will be foremost in terms of miraculous
abilities. In the first congregation there will be two billion worthy ones, in the second there will be four billion, and in the third there will be six billion. The extent of his lifespan will be
eighty thousand years. His sacred Dharma will also remain for eighty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Nāgadatta will be called Bright Jewels. His family will be kṣatriya. His light will extend half a league. Held by the Victorious One will be his father. Endowed with
the Supreme will be his mother. Power of Merit will be his son. Jewel Lamp will be his attendant. Supreme Campaka will be foremost in terms of insight. [F.113.b] Divine Human will be foremost in
terms of miraculous abilities. In the first congregation there will be eighty thousand hearers, in the second there will be seventy-eight thousand, and in the third there will be seventy-five
thousand. The extent of his lifespan will be seventy-six thousand years. His sacred Dharma will remain for a thousand years. His relics will remain in a single collection. There will also only be one
“The birthplace of the thus-gone Dṛḍhakrama will be called Supreme Excellence. His family will be kṣatriya. His light will extend five hundred leagues. Lion Fangs will be his father. Tree Trunk Gift
will be his mother. Dharma Speaker will be his son. Excellent Emanation will be his attendant. Jewel Gift will be foremost in terms of insight. Moon Gift will be foremost in terms of miraculous
abilities. In the first congregation there will be one billion worthy hearers, in the second there will be nine hundred ninety million, and in the third there will be nine hundred eighty million. The
extent of his lifespan will be one billion years. His sacred Dharma will remain for one hundred and fifty million years. His relics will be abundant.
“The birthplace of the thus-gone Amoghadarśin will be called Luminous. His family will be kṣatriya. His light will extend one fathom. Virtue Gift will be his father. Honey Eye will be his mother.
Star Color will be his son. Jambu River will be his attendant. Essence Friend will be foremost in terms of insight. Given by the Sages will be foremost in terms of miraculous abilities. In the first
congregation there will be nine hundred sixty million hearers, in the second there will be nine hundred eighty million, and in the third there will be one billion. The extent of his lifespan will be
one hundred years. His sacred Dharma will remain for one thousand years. His relics will be abundant.
“The birthplace of the thus-gone Vīryadatta will be called Campa. [F.114.a] His family will be brahmin. His light will extend one league. Excellent Wisdom will be his father. Splendid Gift will be
his mother. God Free from Suffering will be his son. Worship Ornament will be his attendant. Supreme Nonapprehension will be foremost in terms of insight. Moon Splendor will be foremost in terms of
miraculous abilities. There will be a single congregation consisting of eight hundred thousand worthy ones. The extent of his lifespan will be one thousand years. His sacred Dharma will remain for
three thousand years. His relics will remain in a single collection. There will also only be one stūpa.
“The birthplace of the thus-gone Bhadrapāla will be called Gift of Joy. His family will be kṣatriya. His light will extend ten leagues. Precious King of Stars will be his father. Endowed with Merit
will be his mother. Tiger Gift will be his son. Supreme Mountain will be his attendant. Pervasive Lord of Wisdom will be foremost in terms of insight. Ocean of Intelligence will be foremost in terms
of miraculous abilities. There will be but a single congregation consisting of one billion eight hundred million worthy ones who have achieved mastery. The extent of his lifespan will be two thousand
years. His sacred Dharma will remain for twenty-one thousand years. His relics will be abundant.
“The birthplace of the thus-gone Nanda will be called Endowed with Riches. His family will be brahmin. His light will extend forty leagues. Brahmā God will be his father. Victorious Glory will be his
mother. Great Splendor will be his son. Powerful Movement of Bliss will be his attendant. Merit Hand will be foremost in terms of insight. Eye of Joy will be foremost in terms of miraculous
abilities. In the first congregation there will be seven hundred thirty million hearers, in the second there will be seven hundred twenty million, and in the third there will be seven hundred ten
million. The extent of his lifespan will be eighty-four thousand years. [F.114.b] His sacred Dharma will remain for ninety thousand years. His relics will be abundant.
“The birthplace of the thus-gone Acyuta will be called Miraculous Splendor. His family will be kṣatriya. His light will extend seventy leagues. King of Doctors will be his father. Glorious Star will
be his mother. Flower God will be his son. Indomitable Strength will be his attendant. Infinite Fame will be foremost in terms of insight. Powerful Hero will be foremost in terms of miraculous
abilities. In the first congregation there will be six hundred million hearers, in the second there will be five hundred eighty million, and in the third there will be five hundred seventy million.
The extent of his lifespan will be twenty-one thousand years. His sacred Dharma will remain for ninety thousand years. His relics will remain in a single collection. There will also only be one
“The birthplace of the thus-gone Siṃhadhvaja will be called Brightness Attained. His family will be kṣatriya. His light will extend ninety leagues. Dharma Banner will be his father. Meritorious
Friend will be his mother. Gift of Riches will be his son. Worship will be his attendant. Given by Application will be foremost in terms of insight. Beauty of Yogic Discipline will be foremost in
terms of miraculous abilities. In the first congregation there will be two hundred twenty million hearers, in the second there will be two hundred ten million, and in the third there will be two
hundred million. The extent of his lifespan will be twenty-eight thousand years. His sacred Dharma will remain for eighty thousand years. His relics will be abundant.
“The birthplace of the thus-gone Jaya will be called Jewel Conduct. His family will be kṣatriya. His light will extend ten leagues. Sun Splendor will be his father. Flower Eye will be his mother.
Truth Appreciator will be his son. Endowed with Dharma will be his attendant. [F.115.a] True Yogic Discipline will be foremost in terms of insight. Dharma Excellence will be foremost in terms of
miraculous abilities. In the first congregation there will be three hundred sixty million hearers, in the second there will be three hundred seventy million, and in the third there will be three
hundred eighty million. The extent of his lifespan will be eighty thousand years. His sacred Dharma will remain for eight million years. His relics will remain in a single collection. There will also
only be one stūpa.
“The birthplace of the thus-gone Dhārmika will be called Joyous Gods. His family will be brahmin. His light will extend seven leagues. Invincible will be his father. Endowed with Fame will be his
mother. Divine Leader will be his son. Sun Gift will be his attendant. Great Chariot will be foremost in terms of insight. Medicine Gift will be foremost in terms of miraculous abilities. In the
first congregation there will be eighty million hearers, in the second there will be seventy million, and in the third there will be eighty million. The extent of his lifespan will be ten million
years. His sacred Dharma will remain for thirty million years. His relics will be abundant.
|
{"url":"https://84000.co/translation/toh94/UT22084-045-001-section-4","timestamp":"2024-11-13T12:50:56Z","content_type":"text/html","content_length":"1049125","record_id":"<urn:uuid:880a2824-e092-4e03-a677-b9b53c6185f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00670.warc.gz"}
|
Graduate course offerings for the coming year are included below along with course descriptions that are in many cases more detailed than those included in the university catalog, especially for
topics courses. The core courses in the mathematics graduate program are MATH 6110–MATH 6120 (analysis), MATH 6310–MATH 6320 (algebra), and MATH 6510–MATH 6520 (topology).
Fall 2024 Offerings
Descriptions are included below under Course Descriptions.
MATH 5080 - Special Study for Teachers
Mary Ann Huntley
MATH 5220 - Applied Complex Analysis
Steven Strogatz, TR 10:10-11:25
MATH 5250 - Numerical Analysis and Differential Equations
Yunan Yang, TR 2:55-4:10
MATH 5410 - Introduction to Combinatorics I
Karola Meszaros, TR 2:55-4:10
MATH 6110 - Real Analysis
Camil Muscalu, MW 10:10-11:25 + dis F 10:10-11:00
MATH 6210 - Measure Theory and Lebesgue Integration
Instructor TBD, MW 10:10-11:25
MATH 6230 - Differential Games and Optimal Control
Alex Vladimirsky, TR 10:10-11:25
MATH 6260 - Dynamical Systems
John Hubbard, TR 1:25-2:40
MATH 6310 - Algebra
Allen Knutson, TR 11:40-12:55 + dis W 3:35-4:25
MATH 6390 - Lie Groups and Lie Algebras
Dan Barbasch, MW 11:40-12:55
MATH 6520 - Differentiable Manifolds
James West, TR 8:40-9:55 + dis F 11:15-12:05
MATH 6710 - Probability Theory I
Phil Sosoe, TR 8:40-9:55
MATH 6740 - Mathematical Statistics II
Michael Nussbaum, TR 11:40-12:55
MATH 6870 - Descriptive Set Theory
Slawomir Solecki, MWF 12:20-1:10
MATH 7110 - Topics in Analysis: De Girogi-Nash-Moser Theory
Xin Zhou, TR 10:10-11:25
MATH 7510 - Berstein Seminar in Topology
Moon Duchin, TR 1:25-2:40
MATH 7670 - Topics in Algebraic Geometry: Schubert Varieties and Degenerations
Allen Knutson, TR 2:55-4:10
MATH 7710 - Topics in Probability Theory: Math for AI Safety
Lionel Levine, MW 11:40-12:55
MATH 7740 - Statistical Learning Theory
Marten Wegkamp, MW 1:25-2:40
Spring 2025 Offerings
Descriptions are included below under Course Descriptions.
MATH 5080 - Special Study for Teachers
Mary Ann Huntley
MATH 5200 - Differential Equations and Dynamical Systems
John Hubbard, TR 1:25-2:40
MATH 6120 - Complex Analysis
Yusheng Luo, MW 10:10-11:25 + dis F 10:10-11:00
MATH 6220 - Applied Functional Analysis
Yunan Yang, MW 10:10-11:25
MATH 6302 - Lattices: Geometry, Cryptography, and Algorithms
Noah Stephens-Davidowitz, TR 1:25-2:40
MATH 6320 - Algebra
Martin Kassabov, MW 8:40-9:55 + dis F 9:05-9:55
MATH 6370 - Algebraic Number Theory
David Zywina, TR 10:10-11:25
MATH 6510 - Algebraic Topology
Inna Zakharevich, TR 11:40-12:55 + dis M 12:20-1:10
MATH 6540 - Homotopy Theory
Yuri Berest, MF 1:25-2:40
MATH 6620 - Riemannian Geometry
Jason Manning, TR 1:25-2:40
MATH 6670 - Algebraic Geometry
Dan Halpern-Leistner, MW 11:40-12:55
MATH 6720 - Probability Theory II
Lionel Levine, MW 11:40-12:55
MATH 6730 - Mathematical Statistics I
Florentina Bunea, MW 2:55-4:10
MATH 6810 - Logic
Mark Poór, TR 1:25-2:40
MATH 7150 - Fourier Analysis
Camil Muscalu, TR 2:55-4:10
MATH 7160 - Topics in Partial Differential Equations: Spectral Geometry
Daniel Stern, TR 10:10-11:25
MATH 7310 - Topics in Algebra: Hilbert Scheme of Points
Ritvik Ramkumar, TR 2:55-4:10
MATH 7370 - Topics in Number Theory: The Mordell Conjecture, after Lawrence and Venkatesh
Alexander Betts, MW 10:10-11:25
MATH 7410 - Topics in Combinatorics
Ed Swartz, MWF 1:25-2:15
MATH 7620 - Topics in Geometry: Interactions of Hyperbolic Dynamics, Geometry, and Low-Dimensional Topology
Kathryn Mann, TR 8:40-9:55
MATH 7820 - Logic Seminar
Slawomir Solecki, MF 2:55-4:10
Course Descriptions
MATH 5080 - Special Study for Teachers
Fall 2024, Spring 2025. 1 credit. Student option grading.
Primarily for: secondary mathematics teachers and others interested in issues related to teaching and learning secondary mathematics (e.g., mathematics pre-service teachers, mathematics graduate
students, and mathematicians). Not open to: undergraduate students. Co-meets with MATH 4980.
Examines principles underlying the content of the secondary school mathematics curriculum, including connections with the history of mathematics, technology, and mathematics education research. One
credit is awarded for attending two Saturday workshops (see math.cornell.edu/math-5080) and writing a paper.
MATH 5200 - Differential Equations and Dynamical Systems
Spring 2025. 3 credits. Student option grading.
Forbidden Overlap: due to an overlap in content, students will receive credit for only one course in the following group: MAE 5790, MATH 4200, MATH 4210, MATH 5200.
Prerequisite: a semester of linear algebra (MATH 2210, MATH 2230, MATH 2310, or MATH 2940) and a semester of multivariable calculus (MATH 2220, MATH 2240, or MATH 1920), or equivalent. Enrollment
limited to: graduate students. Students will be expected to be comfortable writing proofs. More experience with proofs may be gained by first taking a 3000-level MATH course. Co-meets with MATH 4200.
Covers ordinary differential equations in one and higher dimensions: qualitative, analytic, and numerical methods. Emphasis is on differential equations as models and the implications of the theory
for the behavior of the system being modeled and includes an introduction to bifurcations.
MATH 5220 - Applied Complex Analysis
Fall 2024. 3 credits. Student option grading.
Prerequisite: a semester of linear algebra (MATH 2210, MATH 2230, MATH 2310, or MATH 2940) and a semester of multivariable calculus (MATH 2220, MATH 2240, or MATH 1920), or equivalent. Enrollment
limited to: graduate students. Students will be expected to be comfortable writing proofs. More experience with proofs may be gained by first taking a 3000-level MATH course. Co-meets with MATH 4220.
Covers complex variables, Fourier transforms, Laplace transforms and applications to partial differential equations. Additional topics may include an introduction to generalized functions.
MATH 5250 - Numerical Analysis and Differential Equations
Fall 2024. 4 credits. Student option grading.
Prerequisite: MATH 2210, MATH 2230-MATH 2240, MATH 2310, or MATH 2940 or equivalent and one additional mathematics course numbered 3000 or above. Enrollment limited to: graduate students. Students
will be expected to be comfortable writing proofs. More experience with proofs may be gained by first taking a 3000-level MATH course. Co-meets with MATH 4250 and CS 4210.
Introduction to the fundamentals of numerical analysis: error analysis, approximation, interpolation, numerical integration. In the second half of the course, the above are used to build approximate
solvers for ordinary and partial differential equations. Strong emphasis is placed on understanding the advantages, disadvantages, and limits of applicability for all the covered techniques. Computer
programming is required to test the theoretical concepts throughout the course.
MATH 5410 - Introduction to Combinatorics I
Fall 2024. 4 credits. Student option grading.
Prerequisite: MATH 2210, MATH 2230, MATH 2310, MATH 2940, or equivalent. Enrollment limited to: graduate students. Students will be expected to be comfortable writing proofs. More experience with
proofs may be gained by first taking a 3000-level MATH course. Co-meets with MATH 4410.
Combinatorics is the study of discrete structures that arise in a variety of areas, particularly in other areas of mathematics, computer science, and many areas of application. Central concerns are
often to count objects having a particular property (e.g., trees) or to prove that certain structures exist (e.g., matchings of all vertices in a graph). The first semester of this sequence covers
basic questions in graph theory, including extremal graph theory (how large must a graph be before one is guaranteed to have a certain subgraph) and Ramsey theory (which shows that large objects are
forced to have structure). Variations on matching theory are discussed, including theorems of Dilworth, Hall, König, and Birkhoff, and an introduction to network flow theory. Methods of enumeration
(inclusion/exclusion, Möbius inversion, and generating functions) are introduced and applied to the problems of counting permutations, partitions, and triangulations.
MATH 5420 - [Introduction to Combinatorics II]
Spring. Not offered: 2024-2025. Next offered: 2025-2026. 4 credits. Student option grading.
Prerequisite: MATH 2210, MATH 2230, MATH 2310, MATH 2940, or equivalent. Enrollment limited to: graduate students. Students will be expected to be comfortable writing proofs. More experience with
proofs may be gained by first taking a 3000-level MATH course. Co-meets with MATH 4420.
Continuation of MATH 5410, although formally independent of the material covered there. The emphasis here is the study of certain combinatorial structures, such as Latin squares and combinatorial
designs (which are of use in statistical experimental design), classical finite geometries and combinatorial geometries (also known as matroids, which arise in many areas from algebra and geometry
through discrete optimization theory). There is an introduction to partially ordered sets and lattices, including general Möbius inversion and its application, as well as the Polya theory of counting
in the presence of symmetries.
MATH 6110 - Real Analysis
Fall 2024. 4 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will not receive credit for both MATH 6110 and MATH 6210.
Prerequisite: Strong performance in an undergraduate analysis course at the level of MATH 4140, or permission of instructor.
MATH 6110-6120 are the core analysis courses in the mathematics graduate program. MATH 6110 covers abstract measure and integration theory, and related topics such as the Lebesgue differentiation
theorem, the Radon-Nikodym theorem, the Hardy-Littlewood maximal function, the Brunn-Minkowski inequality, rectifiable curves and the isoperimetric inequality, Hausdorff dimension and Cantor sets,
and an introduction to ergodic theory.
MATH 6120 - Complex Analysis
Spring 2025. 4 credits. Student option grading.
Prerequisite: Strong performance in an undergraduate analysis course at the level of MATH 4140, or permission of instructor.
MATH 6110-6120 are the core analysis courses in the mathematics graduate program. MATH 6120 covers complex analysis, Fourier analysis, and distribution theory.
MATH 6150 - [Partial Differential Equations]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: MATH 4130, MATH 4140, or the equivalent, or permission of instructor. Offered alternate years.
This course emphasizes the "classical" aspects of partial differential equations (PDEs) — analytic methods for linear second-order PDEs and first-order nonlinear PDEs — without relying on more modern
tools of functional analysis. The usual topics include fundamental solutions for the Laplace/Poisson, heat and wave equations in R^n, mean-value properties, maximum principles, energy methods,
Duhamel's principle, and an introduction to nonlinear first-order equations, including shocks and weak solutions. Additional topics may include Hamilton-Jacobi equations, Euler-Lagrange equations,
similarity solutions, transform methods, asymptotics, power series methods, homogenization, distribution theory, and the Fourier transform.
MATH 6160 - [Partial Differential Equations]
Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: MATH 6110, MATH 6210, or the equivalent. Offered alternate years.
This course highlights applications of functional analysis to the theory of partial differential equations (PDEs). It covers parts of the basic theory of linear (elliptic and evolutionary) PDEs,
including Sobolev spaces, existence and uniqueness of solutions, interior and boundary regularity, maximum principles, and eigenvalue problems. Additional topics may include: an introduction to
variational problems, Hamilton-Jacobi equations, and other modern techniques for non-linear PDEs.
MATH 6210 - Measure Theory and Lebesgue Integration
Fall 2024. 3 credits. Student option grading.
Forbidden Overlap: Due to an overlap in content, students will not receive credit for both MATH 6110 and MATH 6210.
Prerequisite: undergraduate analysis and linear algebra at the level of MATH 4130 and MATH 4310.
Covers measure theory, integration, and Lp spaces.
MATH 6220 - Applied Functional Analysis
Spring 2025. 3 credits. Student option grading.
Prerequisite: a first course in real analysis, including exposure to Lebesgue integration (e.g., MATH 6110 or MATH 6210).
Functional analysis is a branch of mathematical analysis that mainly focuses on the study of infinite-dimensional vector spaces and the operators acting upon them. It builds upon results and ideas
from linear algebra and real and complex analysis to develop general frameworks that can be used to study analytical problems. Functional analysis plays a pivotal role in several areas of
mathematics, physics, engineering, and even in some areas of computer science and economics. This course will cover the basic theory of Banach, Hilbert, and Sobolev spaces, as well as explore several
notable applications, from analyzing partial differential equations (PDEs), numerical analysis, inverse problems, control theory, optimal transportation, and machine learning.
MATH 6230 - Differential Games and Optimal Control
Fall 2024. 4 credits. Student option grading.
This course is a self-contained introduction to the modern theory of optimal control and differential games. Dynamic programming uses Hamilton-Jacobi partial differential equations (PDEs) to encode
the optimal behavior in cooperative and adversarial sequential decision making problems. The same PDEs have an alternative interpretation in the context of front propagation problems. We show how
both interpretations are useful in constructing efficient numerical methods. We also consider a wide range of applications, including robotics, computational geometry, path-planning, computer vision,
photolithography, economics, seismic imaging, ecology, financial engineering, crowd dynamics, and aircraft collision avoidance. Assumes no prior knowledge of non-linear PDEs or numerical analysis.
MATH 6260 - Dynamical Systems
Fall 2024. 3 credits. Student option grading.
Prerequisite: MATH 4130-MATH 4140, or the equivalent. Exposure to topology (e.g., MATH 4530) will be helpful. Offered alternate years.
Topics include existence and uniqueness theorems for ODEs; Poincaré-Bendixon theorem and global properties of two dimensional flows; limit sets, nonwandering sets, chain recurrence, pseudo-orbits and
structural stability; linearization at equilibrium points: stable manifold theorem and the Hartman-Grobman theorem; and generic properties: transversality theorem and the Kupka-Smale theorem.
Examples include expanding maps and Anosov diffeomorphisms; hyperbolicity: the horseshoe and the Birkhoff-Smale theorem on transversal homoclinic orbits; rotation numbers; Herman's theorem; and
characterization of structurally stable systems.
MATH 6270 - [Applied Dynamical Systems]
(also MAE 7760)
Fall or Spring. Not offered: 2024-2025. Next offered: 2026-2027. 3 credits. Student option grading.
Prerequisite: MAE 6750, MATH 6260, or equivalent.
Topics include review of planar (single-degree-of-freedom) systems; local and global analysis; structural stability and bifurcations in planar systems; center manifolds and normal forms; the
averaging theorem and perturbation methods; Melnikov’s method; discrete dynamical systems, maps and difference equations, homoclinic and heteroclinic motions, the Smale Horseshoe and other complex
invariant sets; global bifurcations, strange attractors, and chaos in free and forced oscillator equations; and applications to problems in solid and fluid mechanics.
MATH 6302 - Lattices: Geometry, Cryptography, and Algorithms
Spring 2025. 3 credits. Student option grading.
Prerequisite: MATH 4310 or permission of instructor.
A mathematically rigorous course on lattices. Lattices are periodic sets of vectors in high-dimensional space. They play a central role in modern cryptography, and they arise naturally in the study
of high-dimensional geometry (e.g., sphere packings). We will study lattices as both geometric and computational objects. Topics include Minkowski's celebrated theorem, the famous LLL algorithm for
finding relatively short lattice vectors, Fourier-analytic methods, basic cryptographic constructions, and modern algorithms for finding shortest lattice vectors. We may also see connections to
algebraic number theory.
MATH 6310 - Algebra
Fall 2024. 4 credits. Student option grading.
Prerequisite: strong performance in an undergraduate abstract algebra course at the level of MATH 4340, or permission of instructor.
MATH 6310-6320 are the core algebra courses in the mathematics graduate program. MATH 6310 covers group theory, especially finite groups; rings and modules; ideal theory in commutative rings;
arithmetic and factorization in principal ideal domains and unique factorization domains; introduction to field theory; tensor products and multilinear algebra. (Optional topic: introduction to
affine algebraic geometry.)
MATH 6320 - Algebra
Spring 2025. 4 credits. Student option grading.
Prerequisite: MATH 6310, or permission of instructor.
MATH 6310-6320 are the core algebra courses in the mathematics graduate program. MATH 6320 covers Galois theory, representation theory of finite groups, introduction to homological algebra.
MATH 6330 - [Noncommutative Algebra]
Fall or Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: MATH 6310-MATH 6320, or permission of instructor. Offered alternate years.
An introduction to the theory of noncommutative rings and modules. Topics vary by semester and include semisimple modules and rings, the Jacobson radical and Artinian rings, group representations and
group algebras, characters of finite groups, representations of the symmetric group, central simple algebras and the Brauer group, representation theory of finite-dimensional algebras, Morita theory.
MATH 6340 - [Commutative Algebra with Applications in Algebraic Geometry]
Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: modules and ideals (e.g., strong performance in MATH 4330 and either MATH 3340 or MATH 4340), or permission of instructor.
Covers Dedekind domains, primary decomposition, Hilbert basis theorem, and local rings.
MATH 6350 - [Homological Algebra]
Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: MATH 6310. Offered alternate years.
A first course on homological algebra. Topics will include a brief introduction to categories and functors, chain and cochain complexes, operations on complexes, (co)homology, standard resolutions
(injective, projective, flat), classical derived functors, Tor and Ext, Yoneda’s interpretation of Ext, homological dimension, rings of small dimensions, introduction to group cohomology.
MATH 6370 - Algebraic Number Theory
Spring 2025. 3 credits. Student option grading.
Prerequisite: an advanced course in abstract algebra at the level of MATH 4340.
An introduction to number theory focusing on the algebraic theory. Topics include, but are not limited to, number fields, Dedekind domains, class groups, Dirichlet's unit theorem, local fields,
ramification, decomposition and inertia groups, zeta functions, and the distribution of primes.
MATH 6390 - Lie Groups and Lie Algebras
Fall 2024. 3 credits. Student option grading.
Prerequisite: basic knowledge of algebra and linear algebra at the honors level (e.g., MATH 4330-MATH 4340). Some knowledge of differential and algebraic geometry are helpful.
Lie groups, Lie algebras, and their representations play an important role in much of mathematics, particularly in number theory, mathematical physics, and topology. This is an introductory course,
meant to be useful for more advanced topics and applications. The relationship between Lie groups and Lie algebras will be highlighted throughout the course. A different viewpoint is that of
algebraic groups. We will endeavor to discuss this along with the C∞ viewpoint.
Topics: Basic structure and properties of Lie algebras; theorems of Lie and Engel. • Nilpotent solvable and reductive Lie algebras. • The relation between Lie groups and Lie algebras • The algebraic
groups version (tentative) • Enveloping Algebras and Differential Operators • The structure of semisimple algebras • Representation theory of semisimple Lie algebras; Lie algebra cohomology • Compact
semisimple groups and their representation theory. • Chevalley groups, p-adic groups (tentative) • Structure of real reductive groups • Quantum groups, Kac-Moody algebras and their representations
theory (tentative)
MATH 6410 - [Enumerative Combinatorics]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: MATH 4410 or permission of instructor. Offered alternate years.
An introduction to enumerative combinatorics from an algebraic, geometric and topological point of view. Topics include, but are not limited to, permutation statistics, partitions, generating
functions and combinatorial species, various types of posets and lattices (distributive, geometric, and Eulerian), Möbius inversion, face numbers, shellability, and relations to the Stanley-Reisner
MATH 6510 - Algebraic Topology
Spring 2025. 4 credits. Student option grading.
Prerequisite: strong performance in an undergraduate abstract algebra course at the level of MATH 4340 and point-set topology at the level of MATH 4530, or permission of instructor.
MATH 6510–MATH 6520 are the core topology courses in the mathematics graduate program. MATH 6510 is an introductory study of certain geometric processes for associating algebraic objects such as
groups to topological spaces. The most important of these are homology groups and homotopy groups, especially the first homotopy group or fundamental group, with the related notions of covering
spaces and group actions. The development of homology theory focuses on verification of the Eilenberg-Steenrod axioms and on effective methods of calculation such as simplicial and cellular homology
and Mayer-Vietoris sequences. If time permits, the cohomology ring of a space may be introduced.
MATH 6520 - Differentiable Manifolds
Fall 2024. 4 credits. Student option grading.
Prerequisite: strong performance in analysis (e.g., MATH 4130 and/or MATH 4140), linear algebra (e.g., MATH 4310), and point-set topology (e.g., MATH 4530), or permission of instructor.
MATH 6510-MATH 6520 are the core topology courses in the mathematics graduate program. This course is an introduction to geometry and topology from a differentiable viewpoint, suitable for beginning
graduate students. The objects of study are manifolds and differentiable maps. The collection of all tangent vectors to a manifold forms the tangent bundle, and a section of the tangent bundle is a
vector field. Alternatively, vector fields can be viewed as first-order differential operators. We will study flows of vector fields and prove the Frobenius integrability theorem. We will examine the
tensor calculus and the exterior differential calculus and prove Stokes' theorem. If time permits, de Rham cohomology, Morse theory, or other optional topics will be covered.
MATH 6530 - [K-Theory and Characteristic Classes]
Fall. Not offered: 2024-2025. Next offered: 2025-2046. 3 credits. Student option grading.
Prerequisite: MATH 6510, or permission of instructor.
An introduction to topological K-theory and characteristic classes. Topological K-theory is a generalized cohomology theory which is surprisingly simple and useful for computation while still
containing enough structure for proving interesting results. The class will begin with the definition of K-theory, Chern classes, and the Chern character. Additional topics may include the Hopf
invariant 1 problem, the J-homomorphism, Stiefel-Whitney classes and Pontrjagin classes, cobordism groups and the construction of exotic spheres, and the Atiyah-Singer Index Theorem.
MATH 6540 - Homotopy Theory
Spring 2025. 3 credits. Student option grading.
Prerequisite: MATH 6510 or permission of instructor.
This course is an introduction to the theory of infinity-categories that provides a convenient language for homotopy theory and plays an increasingly important role in many other parts of
mathematics. Along the way we will cover basics of classical homotopical algebra (model categories and simplicial sets), and as an application --- if time permits --- discuss Quillen's approach to
rational homotopy theory and its modern ramifications.
MATH 6620 - Riemannian Geometry
Spring 2025. 3 credits. Student option grading.
Prerequisite: MATH 6520 or strong performance in analysis (e.g., MATH 4130 and/or MATH 4140), linear algebra (e.g., MATH 4310), and coursework on manifolds and differential geometry at the
undergraduate level, such as both MATH 3210 and MATH 4540. Offered alternate years.
Topics include linear connections, Riemannian metrics and parallel translation; covariant differentiation and curvature tensors; the exponential map, the Gauss Lemma and completeness of the metric;
isometries and space forms, Jacobi fields and the theorem of Cartan-Hadamard; the first and second variation formulas; the index form of Morse and the theorem of Bonnet-Myers; the Rauch, Hessian, and
Laplacian comparison theorems; the Morse index theorem; the conjugate and cut loci; and submanifolds and the Second Fundamental form.
MATH 6630 - [Symplectic Geometry]
Fall or Spring. Not offered: 2024-2025. Next offered: 2026-2027. 3 credits. Student option grading.
Prerequisite: MATH 6510 and MATH 6520, or permission of instructor.
Symplectic geometry is a branch of differential geometry which studies manifolds endowed with a nondegenerate closed 2-form. The field originated as the mathematics of classical (Hamiltonian)
mechanics and it has connections to (at least!) complex geometry, algebraic geometry, representation theory, and mathematical physics. In this introduction to symplectic geometry, the class will
begin with linear symplectic geometry, discuss canonical local forms (Darboux-type theorems), and examine related geometric structures including almost complex structures and Kähler metrics. Further
topics may include symplectic and Hamiltonian group actions, the orbit method, the topology and geometry of momentum maps, toric symplectic manifolds, Hamiltonian dynamics, symplectomorphism groups,
and symplectic embedding problems.
MATH 6640 - [Hyperbolic Geometry]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: Strong performance in undergraduate analysis (e.g., MATH 4130 or MATH 4180), topology/geometry (e.g., MATH 4520, MATH 4530, or MATH 4540), and algebra (e.g., MATH 4340), or permission
of instructor. Offered alternate years.
An introduction to the topology and geometry of hyperbolic manifolds. The class will begin with the geometry of hyperbolic n-space, including the upper half-space, Poincaré disc, Klein, and
Lorentzian models. We will cover both synthetic and computational approaches. We will then discuss hyperbolic structures on surfaces and 3-manifolds, and the corresponding groups of isometries (i.e.,
Fuchsian and Kleinian groups). Additional topics may include: Geodesic and horocycle flows and their properties, counting closed geodesics and simple closed geodesics, Mostow rigidity, infinite area
MATH 6670 - Algebraic Geometry
Spring 2025. 3 credits. Student option grading.
Prerequisite: MATH 6310 or MATH 6340, or equivalent.
A first course in algebraic geometry. Affine and projective varieties. The Nullstellensatz. Schemes and morphisms between schemes. Dimension theory. Potential topics include normalization, Hilbert
schemes, curves and surfaces, and other choices of the instructor.
MATH 6710 - Probability Theory I
Fall 2024. 3 credits. Student option grading.
Prerequisite: knowledge of Lebesgue integration theory, at least on the real line. Students can learn this material by taking MATH 4130-MATH 4140 or MATH 6210.
Measure theory, independence, distribution of sums of iid random variables, laws of large numbers, and central limit theorem. Other topics as time permits.
MATH 6720 - Probability Theory II
Spring 2025. 3 credits. Student option grading.
Prerequisite: MATH 6710.
The second course in a graduate probability series. Topics include conditional expectation, martingales, Markov chains, Brownian motion, and (time permitting) elements of stochastic integration.
MATH 6730 - Mathematical Statistics I
(also STSCI 6730)
Spring 2025. 3 credits. Student option grading.
Prerequisite: STSCI 4090/BTRY 4090, MATH 6710, or permission of instructor.
This course will focus on the finite sample theory of statistical inference, emphasizing estimation, hypothesis testing, and confidence intervals. Specific topics include: uniformly minimum variance
unbiased estimators, minimum risk equivariant estimators, Bayes estimators, minimax estimators, the Neyman-Pearson theory of hypothesis testing, and the construction of optimal invariant tests.
MATH 6740 - Mathematical Statistics II
(also STSCI 6740)
Fall 2024. 3 credits. Student option grading.
Prerequisite: MATH 6710 (measure theoretic probability) and STSCI 6730/MATH 6730, or permission of instructor.
Focuses on the foundations of statistical inference, with an emphasis on asymptotic methods and the minimax optimality criterion. In the first part, the solution of the classical problem of
justifying Fisher’s information bound in regular statistical models will be presented. This solution will be obtained applying the concepts of contiguity, local asymptotic normality and asymptotic
minimaxity. The second part will be devoted to nonparametric estimation, taking a Gaussian regression model as a paradigmatic example. Key topics are kernel estimation and local polynomial
approximation, optimal rates of convergence at a point and in global norms, and adaptive estimation. Optional topics may include irregular statistical models, estimation of functionals and
nonparametric hypothesis testing.
MATH 6810 - Logic
Spring 2025. 3 credits. Student option grading.
Prerequisite: an algebra course covering rings and fields (e.g., MATH 4310 or MATH 4330) or permission of instructor. Offered alternate years.
Covers basic topics in mathematical logic, including propositional and predicate calculus; formal number theory and recursive functions; completeness and incompleteness theorems, compactness and
Skolem-Loewenheim theorems. Other topics as time permits.
MATH 6830 - [Model Theory]
Fall or Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: rings and fields (e.g., MATH 4310 or MATH 4330) and a course in first-order logic at least at the level of MATH 4810/PHIL 4310, or permission of instructor. Offered alternate years.
Introduction to model theory at the level of David Marker's text.
MATH 6870 - Descriptive Set Theory
Fall 2024. 3 credits. Student option grading.
Prerequisite: metric topology and measure theory (e.g., MATH 4130-MATH 4140 or MATH 6210) and a course in first-order logic (e.g., MATH 3840/PHIL 3300, MATH 4810/PHIL 4310, or MATH 6810), or
permission of instructor. Offered alternate years.
This will be an introductory graduate course in Descriptive Set Theory, that is, a theory of definable (Borel, analytic, and co-analytic) subsets of separable, completely metrizable spaces and
quotients of such spaces by definable equivalence relations. Some recently discovered aspects of the theory of quotients by Borel equivalence relations will be covered. Some connections with
dynamics, classical analysis, combinatorics, and topology will be described.
MATH 7110 - Topics in Analysis: De Girogi-Nash-Moser Theory and Applications
Fall 2024. 3 credits. S/U grades only.
We will discuss the celebrated De Girogi-Nash-Moser interaction method in elliptic PDE, and its applications in several famous geometry problems, such as the regularity theory for minimal graph
equations, Allard regularity, the epsilon regularity for harmonic maps, and so on.
MATH 7130 - [Functional Analysis]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. Student option grading.
Prerequisite: some basic measure theory, L^p spaces, and (basic) functional analysis (e.g., MATH 6110). Advanced undergraduates who have taken MATH 4130-MATH 4140 and linear algebra (e.g., MATH 4310
or MATH 4330), but not MATH 6110, need permission of instructor. Offered alternate years.
Covers topological vector spaces, Banach and Hilbert spaces, and Banach algebras. Additional topics selected by instructor.
MATH 7150 - Fourier Analysis
Spring 2025. 3 credits. S/U grades only.
Prerequisite: some basic measure theory, L^p spaces, and (basic) functional analysis (e.g., MATH 6110). Advanced undergraduates who have taken MATH 4130-MATH 4140, but not MATH 6110, by permission of
instructor. Offered alternate years.
The class is an introduction to Euclidean harmonic analysis. Topics usually include convergence of Fourier series, harmonic functions and their conjugates, Hilbert transform, Calderón-Zygmund theory,
Littlewood-Paley theory, duality between the Hardy space H1 and BMO, paraproducts, Fourier restriction and applications, etc. If time permits, some applications to PDE and number theory will also be
MATH 7160 - Topics in Partial Differential Equations: Spectral Geometry
Spring 2025. 3 credits. S/U grades only.
Prerequisite: MATH 6110 and MATH 6160 or equivalents.
This course will serve as a first introduction to spectral geometry—the study of the eigenvalues and eigenfunctions of natural elliptic operators and their relationship with geometry—a subject
appearing in various guises in physics, differential geometry, number theory, dynamics, and of course PDE. To keep prerequisites to a minimum, we will focus on the Laplace operator on domains in R^n,
beginning with classic theorems like Weyl's Law and the Faber-Krahn inequality, before moving on to a selection of more modern results related to isoperimetric inequalities, nodal geometry,
isospectral problems, etc.
MATH 7280 - [Topics in Dynamical Systems]
Fall or Spring. Not offered: 2024-2025. Next offered: 2026-2027. 3 credits. S/U grades only.
Selection of advanced topics from dynamical systems. Content varies.
MATH 7290 - Seminar on Scientific Computing and Numerics
(also CS 7290)
Fall 2024, Spring 2025. 1 credits. S/U grades only.
Talks on various methods in scientific computing, the analysis of their convergence properties and computational efficiency, and their adaptation to specific applications.
MATH 7310 - Topics in Algebra: Hilbert Schemes of Points
Spring 2025. 3 credits. S/U grades only.
The Hilbert scheme of points on a variety is a compact moduli space parameterizing 0-dimensional subschemes of X. It has a distinguished component parameterizing a reduced collection of points, which
has been used extensively in fields ranging from algebraic geometry and topology to combinatorics and even computer science. In this course, we will construct the Hilbert scheme and its relatives,
such as the nested Hilbert scheme and the G-Hilbert scheme. We will focus on general questions regarding their singularities, starting with basic topics like smoothness and irreducibility, and
progressing to a description of the local structure around torus-fixed points and the structure of the cohomology rings.
MATH 7350 - [Topics in Homological Algebra]
Fall or Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
Offered alternate years.
Selection of advanced topics from homological algebra. Content varies.
MATH 7370 - Topics in Number Theory: The Mordell Conjecture, after Lawrence and Venkatesh
Spring 2025. 3 credits. S/U grades only.
Prerequisites: first courses in algebraic geometry (e.g. MATH 6670), Galois Theory (e.g. MATH 6320) and number fields (e.g. MATH 6370), as well as being comfortable with p-adic numbers. Useful, but
not required, would be algebraic topology to the level of covering spaces (e.g. MATH 6510), and/or some basic familiarity with etale cohomology (but we'll cover what we need in class).
If X is a smooth projective curve of genus at least 2 defined over a number field K, then X has only finitely many points defined over K. This statement is known as the Mordell Conjecture, and its
resolution by Gerd Faltings in 1983 is one of the crowning achievements of 20th-century arithmetic geometry. Since Faltings' proof, several other mathematicians (Vojta, Bombieri,...) have come up
with different proofs, so that by now we can understand this result from many different perspectives. This course will go through the most recent proof of Mordell, due to Brian Lawrence and Akshay
Venkatesh. The idea, roughly speaking, is to replace points defined over K by representations of its absolute Galois group, and to study them instead, eventually coming back to geometry using the
theory of p-adic period maps. Over the course of assembling the proof, we will give user-friendly overviews of several important areas of modern arithmetic geometry, such as etale cohomology,
crystalline cohomology and Fontaine's crystalline representations.
MATH 7390 - [Topics in Lie Groups and Lie Algebras]
Fall or Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
Topics will vary depending on the instructor and the level of the audience. They range from representation theory of Lie algebras and of real and p-adic Lie groups, geometric representation theory,
quantum groups and their representations, invariant theory to applications of Lie theory to other parts of mathematics.
MATH 7410 - Topics in Combinatorics
Spring 2025. 3 credits. S/U grades only.
Offered alternate years.
The course will cover topics in algebraic, topological and/or geometric combinatorics determined in consultation with graduate students during the fall 2024 semester. Neither 'algebraic', nor
'geometric' refers to algebraic geometry.
MATH 7510 - Berstein Seminar in Topology
Fall 2024. 3 credits. S/U grades only.
This will be a project-based course on modeling and analyzing elections and redistricting that assumes no particular background. The idea is to get you to the state of the art, so that your projects
are immediately applicable in the field. This course may be of interest for those wanting to work in metrics of fairness, mathematical modeling, mechanism design, and democracy.
Background topics include:
• overview of social choice theory (including computational social choice), apportionment
• domain knowledge in law, geography, policy
○ rules of redistricting, including Voting Rights Act
○ state of play in democracy reform
Main topics include:
• fairness axioms for electoral outcomes; fairness metrics for redistricting
• graph partitioning
○ Markov chain methods
○ (other) spanning-tree methods
• optimization methods for redistricting, both heuristic and exact
• statistical ranking models
• generative models of election
MATH 7520 - [Berstein Seminar in Topology]
Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
A seminar on an advanced topic in topology or a related subject. Content varies. The format usually that the participants take turns to present.
MATH 7570 - [Topics in Topology]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
Selection of advanced topics from modern algebraic, differential, and geometric topology. Content varies.
MATH 7580 - [Topics in Topology]
Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
Selection of advanced topics from modern algebraic, differential, and geometric topology. Content varies.
MATH 7610 - [Topics in Geometry]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
Selection of advanced topics from modern geometry. Content varies.
MATH 7620 - Topics in Geometry: Interactions of Hyperbolic Dynamics, Geometry, and Low-Dimensional Topology
Spring. 3 credits. S/U grades only.
Prerequisites: MATH 6520 (Differentiable Manifolds). MATH 6260 (Dynamical systems) is recommended preparation, especially for undergraduates. Some basic familiarity with Riemannian and/or hyperbolic
geometry could also be helpful but not strictly required.
This is a course on hyperbolic and partially hyperbolic diffeomorphisms and Anosov flows, from a geometric-topological viewpoint. We will cover some foundational results in smooth and hyperbolic
dynamics motivated by low-dimensional examples, and then specialize to flows on three-manfiolds and relationship with the geometry of foliations. Students will do presentations on a related sub-topic
of their interest at the end of the semester.
MATH 7670 - Topics in Algebraic Geometry: Schubert Varieties and Degenerations
Fall 2024. 3 credits. S/U grades only.
Schubert varieties arise in many places in geometry and representation theory. They also serve as a combinatorially tractable source of examples of singular varieties and group actions (rather like
toric varieties do). We'll compute lots of things about them, in no small part by degenerating them to unions of pieces. On the combinatorial side, we'll be studying simplicial complexes, and the
Stanley-Reisner theory that associates schemes to them. On the algebra side, we'll be using Gröbner and SAGBI degenerations, and controlling them using Frobenius splitting. Once we have Schubert
varieties as building blocks, we'll apply our tech to study other spaces, in particular quiver cycles and positroid varieties, maybe wonderful compactifications of groups.
MATH 7710 - Topics in Probability Theory: Math for AI Safety
Fall 2024. 3 credits. S/U grades only.
AI holds great promise and, many believe, great peril. What can mathematicians contribute to ensuring that promise is fulfilled, and peril avoided? Topics may include: predictive coding, good
regulator theorems, Markov decision processes, power-seeking theorems, signaling games, evolution of cooperation, open-source game theory, multi-agent learning, opponent shaping, logical uncertainty,
usable information under computational constraints, proper scoring rules, forecast aggregation, Bayesian truth serum, coherence theorems, multi-objective optimization. This course is loosely modeled
on the AI Alignment course taught by Roger Grosse at the University of Toronto.
Useful background: machine learning, game theory, and stochastic processes (at the level of MATH 4740).
MATH 7720 - [Topics in Stochastic Processes]
Fall or Spring. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
Selection of advanced topics from stochastic processes. Content varies.
MATH 7740 - Statistical Learning Theory
Fall 2024. 3 credits. Student option grading.
Prerequisite: basic mathematical statistics (STSCI/MATH 6730 or equivalent) and measure theoretic probability (MATH 6710), or permission of instructor. Enrollment limited to: graduate students.
Learning theory has become an important topic in modern statistics. This course gives an overview of various topics in classification, starting with Stone’s (1977) stunning result that there are
classifiers that are universally consistent. Other topics include classification, plug-in methods (k-nearest neighbors), reject option, empirical risk minimization, Vapnik-Chervonenkis theory, fast
rates via Mammen and Tsybakov’s margin condition, convex majorizing loss functions, RKHS methods, support vector machines, lasso type estimators, low-rank multivariate response regression, random
matrix theory, topic models, latent factor models, and interpolation methods in high dimensional statistics.
MATH 7810 - [Seminar in Logic]
Fall. Not offered: 2024-2025. Next offered: 2025-2026. 3 credits. S/U grades only.
A twice weekly seminar in logic. Typically, a topic is selected for each semester, and at least half of the meetings of the course are devoted to this topic with presentations primarily by students.
Opportunities are also provided for students and others to present their own work and other topics of interest.
MATH 7820 - Seminar in Logic
Spring 2025. 3 credits. S/U grades only.
A twice weekly seminar in logic. Typically, a topic is selected for each semester, and at least half of the meetings of the course are devoted to this topic with presentations primarily by students.
Opportunities are also provided for students and others to present their own work and other topics of interest.
|
{"url":"https://math.cornell.edu/graduate-courses","timestamp":"2024-11-11T17:11:54Z","content_type":"text/html","content_length":"118643","record_id":"<urn:uuid:8fe2af60-fb0f-4ec9-816b-ba83a46ce3c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00286.warc.gz"}
|
Sort A 2D Vector C++ Stl With Code Examples
In this article, we will look at how to get the solution for the problem, Sort A 2D Vector C++ Stl With Code Examples
Can you sort a 2D vector?
In C++, we can sort particular rows in the 2D vector using sort() function, by default the sort() functions sorts the vector in ascending order.
// C++ code to demonstrate sorting of a
// 2D vector on basis of a column
#include <algorithm> // for sort()
#include <iostream>
#include <vector> // for 2D vector
using namespace std;
// Driver function to sort the 2D vector
// on basis of a particular column
bool sortcol(const vector<int>& v1, const vector<int>& v2)
return v1[1] < v2[1];
// Driver Code
int main()
// Initializing 2D vector "vect" with
// values
vector<vector<int> > vect{ { 3, 5, 1 },
{ 4, 8, 6 },
{ 7, 2, 9 } };
// Number of rows;
int m = vect.size();
// Number of columns (Assuming all rows
// are of same size). We can have different
// sizes though (like Java).
int n = vect[0].size();
// Displaying the 2D vector before sorting
cout << "The Matrix before sorting is:\n";
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++)
cout << vect[i][j] << " ";
cout << endl;
// Use of "sort()" for sorting on basis
// of 2nd column
sort(vect.begin(), vect.end(), sortcol);
// Displaying the 2D vector after sorting
cout << "The Matrix after sorting is:\n";
for (int i = 0; i < m; i++) {
for (int j = 0; j < n; j++)
cout << vect[i][j] << " ";
cout << endl;
return 0;
How do you take the input of a 2D vector?
how to take input in 2d vector in c++
• std::vector<vector<int>> d;
• //std::vector<int> d;
• cout<<"Enter the N number of ship and port:"<<endl;
• cin>>in;
• cout<<"\Enter preference etc..:\n";
• for(i=0; i<in; i++){
• cout<<"ship"<<i+1<<":"<<' ';
• for(j=0; j<in; j++){
How do you sort an array in decreasing order in C++ using STL?
To sort an array in reverse/decreasing order, you can use the std::sort algorithm provided by STL. It sorts the elements of a container in the range pointed by the specified iterators using a
comparator. The default comparator used is std::less<> , which sorts the container in ascending order using operator< .
How do you sort a vector string in STL?
Approach: The sort() function in C++ STL is able to sort vector of strings if and only if it contains single numeric character for example, { '1', ' '} but to sort numeric vector of string with
multiple character for example, {'12', '56', '14' } one should write own comparator inside sort() function.
How do I sort a 2D vector row?
Ways to Sort a 2D Vector This type of sorting arranges a selected row of 2D vector in ascending order. This is achieved by using sort() and passing iterators of 1D vector as its arguments.
Can we sort vector of vector?
A vector in C++ can be easily sorted in ascending order using the sort() function defined in the algorithm header file. The sort() function sorts a given data structure and does not return anything.
The sorting takes place between the two passed iterators or positions.
How do you sort a 2D vector in C++ in descending order?
This type of sorting arranges a selected row of 2D vector in descending order . This is achieved by using “sort()” and passing iterators of 1D vector as its arguments.
How do you sort a 2D vector by a second element?
On the basis of the second value of pairs: This type of sorting arranges a selected row of a 2D vector in ascending order of the second value of the pair. This is achieved by using “sort()” and
passing iterators of 1D vector as its arguments.
How do I sort a vector in CPP STL?
Sorting a vector in C++ can be done by using std::sort(). It is defined in<algorithm> header. To get a stable sort std::stable_sort is used. It is exactly like sort() but maintains the relative order
of equal elements.
How do I sort a 2D array column wise?
Approach: Follow the steps below to solve the problem:
• Traverse the matrix.
• Find the transpose of the given matrix mat[][].
• Store this transpose of mat[][] in a 2D vector, tr[][]
• Traverse the rows of the matrix tr[][]
• Sort each row of the matrix using the sort function.
• Store the transpose of tr[][] in mat[][]
How To Take An Aimage In Html And Css With Code Examples
In this article, we will look at how to get the solution for the problem, How To Take An Aimage In Html And Css With Code Examples How do I get an image using CSS? Usage is simple — you insert the
path to the image you want to include in your page inside the brackets of url() , for example: background-image: url('images/my-image. png'); Note about formatting: The quotes around the
URL can be either single or double quotes, and they are optional. <img src="..." style="height: 500px;"
Starting Html With Code Examples
In this article, we will look at how to get the solution for the problem, Starting Html With Code Examples Where can I practice HTML coding? freeCodeCamp. Inarguably one of the best resources to
learn web development. Codewell. If you want to level up your HTML, CSS, and Javascript skills, Codewell offers real world Figma templates that you can use to improve. CSSBattle. Treehouse.
Coderbyte. CodePen Challenges. Javascript30. Frontend Mentor. !DOCTYPE html> <html lang="en"> <head> <title>Bo
Find Low And High In String With Code Examples
In this article, we will look at how to get the solution for the problem, Find Low And High In String With Code Examples How many Substrings are in a string of length n? Approach: The count of
sub-strings of length n will always be len – n + 1 where len is the length of the given string. def high_and_low(numbers): #z. nn = [int(s) for s in numbers.split(" ")] return "%i %i" % (max(nn),min
(nn)) How do you find all substrings in length K? If the length of a string is N, then there can be N
How To Get Cpu Model In Python With Code Examples
In this article, we will look at how to get the solution for the problem, How To Get Cpu Model In Python With Code Examples How do I check system memory in Python? You have to install the latest
psutil (version 0.5.0 from here: pypi.python.org/pypi/psutil/0.5.0) to get this to work. Using the latest 4.3.0 version, the method is now psutil.virtual_memory() I think. This tells you the system
memory usage. import cpuinfo print('CPU =', cpuinfo.get_cpu_info()['brand_raw']) i
Python Terminal Color With Code Examples
In this article, we will look at how to get the solution for the problem, Python Terminal Color With Code Examples How do I use RGB in Python? A simple function to convert RGB values into color names
for a variety of combinations. RGB →(0.255. 0), Hex Code →#00FF00, Color Name →lime. RGB →(178,34,34), Hex Code →#B22222, Color Name →firebrick. def colored(r, g, b, text): return "\033[38;2;{};{};{}
m{} \033[38;2;255;255;255m".format(r, g, b, text) text = 'Hello, World' col
|
{"url":"https://www.isnt.org.in/sort-a-2d-vector-c-stl-with-code-examples.html","timestamp":"2024-11-10T14:05:20Z","content_type":"text/html","content_length":"151038","record_id":"<urn:uuid:bf72680d-5121-4481-971c-ea603ca36f98>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00562.warc.gz"}
|
33 can be written as the sum of three cubes
You're reading: News
33 can be written as the sum of three cubes
It was an open question whether 33 could be written as the sum of three cubes. Thanks to Andrew R. Booker, it now isn’t.
\begin{array}{c} (8866128975287528)^3 \\ + \\(-8778405442862239)^3 \\ + \\(-2736111468807040)^3 \\ = \\ 33\end{array}
The “sum of three cubes” problem asks whether, for a given $k$, there are integer solutions to
\[ x^3+y^3+z^3=k \]
The question is which integers are expressible in this form, not whether all integers can. For example, we know integers of the form $k = \pm 4 \pmod{9}$ cannot. The problems dates back to at least
Andrew has released a paper titled “Cracking the problem with 33”, explaining how he found his solution. Even after a decent amount of mathematical insight, the search still took a while: “The total
computation used approximately 15 core-years over three weeks of real time.”
Alex Kontorovich explained on Twitter the significance of this progress.
Wow this is big news! The sum of three cubes is the bane of modern analytic number theory; its so embarrassing that we can’t tell basic things like which numbers are represented. For a long time,
33 was the smallest unknown culprit. Now that honor belongs to 42 (last below 100)
The general problem of whether a given number can be written as the sum of three cubes has been proven to be undecidable. Bjorn Poonen’s paper “Undecidability in number theory” opens with this fact,
and describes quite a few other similarly undecidable questions about numbers.
Booker’s paper might be unique in beginning with the words “Inspired by the Numberphile video, …” The video in question is this one, featuring Tim Browning.
“Cracking the problem with 33”, by Andrew R. Booker.
Tim Browning’s webpage giving the result.
Post on reddit:r/math 33=8866128975287528^3+(-8778405442862239)^3+(-2736111468807040)^3.
Technical background on the problem from this 2007 article in AMS’s Mathematics of Computation: New integer representations as the sum of three cubes.
5 Responses to “33 can be written as the sum of three cubes”
1. Joseph
Poonen’s paper says that it’s undecidable as to whether a general Diophantine equation has solutions in the integers. But it doesn’t look like he claims that the specific problem of writing
integers as sums of three cubes is undecidable.
2. Irina Bursill
Can anyone please bother to explain why is this so important and such a big deal? Why people care so much about this, puting tons of money and hours into solving this?
3. Mehmet Bekir Unat
Any solution in Mathematics, can be used in any aspect to solve the problems in the universe. It can also be used to create new things. Like, computers! (from Mehmet Bekir Unat, said to Irina
|
{"url":"https://aperiodical.com/2019/03/33-can-be-written-as-the-sum-of-three-cubes/","timestamp":"2024-11-05T22:06:47Z","content_type":"text/html","content_length":"44361","record_id":"<urn:uuid:97461d84-29e6-4eb1-a9f1-2065d202e269>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00855.warc.gz"}
|
R Screencasts - Riddler: Monte Carlo Simulation
Riddler: Monte Carlo Simulation
Notable topics: Simulation
Recorded on: 2018-12-03
Timestamps by: Alex Cookson
Using crossing function to set up structure of simulation (1,000 trials, each with 12 chess games)
Adding result to the tidy simulation dataset
Using sample function to simulate win/loss/draw for each game (good explanation of individual arguments within sample)
Using group_by and summarise to get total points for each trial
Adding red vertical reference line to histogram to know when a player wins a matchup
Answering second piece of riddle (how many games would need to be played for better player to win 90% or 99% of the time?)
Using unnest and seq_len function to create groups of number of games (20, 40, …, 100), each with one game per row
Creating a win field based on the simulated data, then summarising win percentage for each group of number of games (20, 40, …, 100)
Using seq function to create groups of number of games programmatically
Explanation of using logarithmic scale for this riddle
Changing spacing of number of games from even spacing (20, 40, …, 100) to exponential (doubles every time, 12, 24, 48, …, 1536)
Changing spacing of number of games to be finer
Introduction of interpolation as the last step we will do
Introducing approx function as method to linearly interpolate data
Break point for the next riddle
Starting recursive approach to this riddle
Setting up a N x N matrix (N = 4 to start)
Explanation of approach (random ball goes into random cup, represented by matrix)
Using sample function to pick a random element of the matrix
Using for loop to iterate random selection 100 times
Converting for loop to while loop, using colSums to keep track of number of balls in cups
Starting to code the pruning phase
Using diag function to pick matching matrix elements (e.g., the 4th row of the 4th column)
Turning code up to this point into a custom simulate_round function
Using custom simulate_round function to simulate 100 rounds
Using all function to perform logic check on whether all cups in a round are not empty
Converting loop approach to tidy approach
Using rerun and map_lgl functions from purrr package to simulate a round for each for in a dataframe
Explanation of the tidy approach
Using cumsum and lag functions to keep track of the number of rounds until you win a "game"
Creating histogram of number of rounds until winning a game
Setting boundary argument of geom_histogram function to include count of zeros
Brief explanation of geometric distribution
Extending custom simulate_round function to include number of balls thrown to win (in addition to whether we won a round)
Extending to two values of N (N = 3 or N = 4)
Reviewing results of N = 3 and N = 4
Checking results of chess riddle with Riddler solution
Checking results of ball-cup riddle with Riddler solution (Dave slightly misinterpreted what the riddle was asking)
Changing simulation code to correct the misinterpretation
Reviewing results of corrected simulation
Checking results of ball-cup riddle with corrected simulation with Riddler solutions
Visualizing number of balls thrown and rounds played
|
{"url":"https://www.rscreencasts.com/content_pages/riddler-monte-carlo-simulation.html","timestamp":"2024-11-08T09:19:47Z","content_type":"application/xhtml+xml","content_length":"28826","record_id":"<urn:uuid:fef365d9-7d63-4ee8-aec7-906f3f87cd3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00081.warc.gz"}
|
Deep learning sounds so cool, doesn’t it? There is machine learning, which is already super cool and now make it deep. Even better.
But what makes a learning algorithm deep?
Even though the term is super high fashion to use and is being thrown around like a magical solution to all our problems, it actually just means that we stack things on top of each other.
Deep learning models on the base level, consist of neurons. A neuron is very similar to a logistic regression model. By stacking neurons on top of each other to make layers and then stacking those
layers one after another you make a deep neural network.
Of course, I’m being overly simplistic here. But on a high level that’s what a deep learning algorithm is.
(Quick note here: deep learning is a type of machine learning. So actually the term machine learning contains deep learning. But in the context of this article, when I say machine learning, I mean
algorithms that are not part of deep learning.)
Machine learning algorithms have been around and part of life since the early 90s. Whereas deep learning algorithms have not been used until very recently, that is 10-12 years ago. Back then, our
computers were not strong enough to run deep neural networks. In the meantime, we came up with better processors and a bit more efficient calculations to make them feasible to train in a short amount
of time.
Okay so now we know the infrastructural difference but how are they different from ML algorithms when it comes to learning and results?
One gigantic advantage DL models have is that they do not need features to be prepared for them.
If we want to make an image annotator using random forests let’s say, we would have to first extract some features from the images. This could be the amount of pixels that are green, the amount that
are blue, how many people there are in the images, certain corners and their angles etc. But if we want to do the same with a deep learning model such as a CNN, all we need to do is to provide the
image as it is to the model. It learns where to look itself. So there is no feature engineering needed.
It sounds like a dream, I know…
But it comes with a price. You need much more data points to make the model accurate. And generally stronger computing power to be able to train because of the complexity of the calculations being
done to train the model.
But this characteristic makes the DL model be able to solve problems we wouldn’t know how to frame.
So now you know what to say if someone asks you how machine learning and deep learning are different. If you want to learn how deep learning works and how to implement it yourself, make sure to sign
up for the updates of my upcoming course. You can read about the course and check out the preliminary table of contents here.
|
{"url":"https://www.misraturp.com/post/the-difference-between-machine-learning-and-deep-learning","timestamp":"2024-11-14T11:55:31Z","content_type":"text/html","content_length":"9111","record_id":"<urn:uuid:254b8767-2e77-41de-bc3c-ec700c8cc4e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00416.warc.gz"}
|
Making use of Geometry to Visual Perceptual Relationships
A spatial relationship generally defines just how a subject is positioned in space general to a reference photograph. If the reference image is a lot larger than the object then the ex – is usually
represented by an ellipse. The ellipse can be graphically represented using a corsa. The corsa has very similar aspects into a sphere introduced plotted on the map. Whenever we look carefully at an
ellipse, we can see that it is shaped so that all of their vertices lie on the x-axis. Therefore a great ellipse may be thought of as a parabola with one concentration (its axis of rotation) and many
points of orientation one the other side of the coin.
There are four main types of geometric diagrams that relate areas. These include: the area-to-area, line-to-line, geometrical construction, and Cartesian structure. The fourth type, geometrical
construction is a little totally different from the other forms. In a geometrical structure of a pair of parallel straight lines is utilized to establish the areas within a model or construction.
The main difference among area-to-area and line-to-line is that an area-to-area relation relates just surface areas. This means that there are no spatial relationships involved. A point on a flat
surface can be viewed as a point in an area-to-room, or an area-to-land, or a room to a bedroom or land. A point on a curved surface can also be deemed part of a living room to room or a part of a
room to land regards. Geometries https://themailbride.com/slavic-brides/ like the ring and the hyperbola can be considered element of area-to-room relationships.
Line-to-line is definitely not a space relationship but a mathematical one particular. It can be understood to be a tangent of geometries on a single set. The geometries in this relative are the
location and the edge of the area of the two lines. The space relationship for these geometries is given by the food
Geometry performs an important role in video or graphic spatial associations. It enables the understanding of the three-dimensional (3D) world and it gives all of us a basis for comprehending the
correspondence between the real world as well as the virtual universe (the online world can be described as subset belonging to the real world). A good example of a visual relationship is a
relationship between (A, W, C). (A, B, C) implies that the distances (D, E) are equal when measured from (A, B), and that they maximize as the values on the distances lower (D, E). Visual spatial
relations may also be used to infer the parameters of an model of real life.
Another program of visual spatial relationships may be the handwriting examination. Fingerprints kept by several people have been used to infer numerous aspects of ones personality. The accuracy of
such fingerprint studies has much better a lot over the past few years. The accuracy of the analyses can be improved even more by using computerized methods, specifically for the large examples.
|
{"url":"http://janar.net/2021/01/20/making-use-of-geometry-to-visual-perceptual-relationships/","timestamp":"2024-11-05T09:26:07Z","content_type":"text/html","content_length":"23612","record_id":"<urn:uuid:68dff1f0-1bd5-44e6-8833-3918255e61c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00320.warc.gz"}
|
Application of the discontinuous Galerkin method to modeling two-dimensional flows of a multicomponent mixture of ideal gases using local adaptive mesh refinement
Title Application of the discontinuous Galerkin method to modeling two-dimensional flows of a multicomponent mixture of ideal gases using local adaptive mesh refinement
R. V. Zhalnin^1, V. F. Masyagin^1, E. E. Peskova^1, V. F. Tishkin^2
Authors ^1National Research Ogarev Mordovia State University
^2 Keldysh Institute of Applied Mathematics
Zhalnin R. V., Masyagin V. F., Peskova E. E., Tishkin V. F. ''Application of the discontinuous Galerkin method to modeling two-dimensional flows of a multicomponent mixture of ideal gases
Citation using local adaptive mesh refinement'' [Electronic resource]. Proceedings of the XIV International scientific conference ''Differential equations and their applications in mathematical
modeling''. (Saransk, July 9-12, 2019). Saransk: SVMO Publ, 2019. - pp. 35-37. Available at: https://conf.svmo.ru/files/2019/papers/paper09.pdf. - Date of access: 14.11.2024.
|
{"url":"https://conf.svmo.ru/en/archive/article?id=212","timestamp":"2024-11-14T01:38:08Z","content_type":"text/html","content_length":"11164","record_id":"<urn:uuid:ed2d7a8f-d1f7-4965-8fa7-95f60608db1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00252.warc.gz"}
|
Lesson 10
Connecting Equations to Graphs (Part 1)
10.1: Games and Rides (5 minutes)
Throughout this lesson, students will use a context that involves two variables—the number of games and the number of rides at an amusement park—and a budgetary constraint. This warm-up prompts
students to interpret and make sense of some equations in context, familiarizing them with the quantities and relationships (MP2). Later in the lesson, students will dig deeper into what the
parameters and graphs of the equations reveal.
Arrange students in groups of 2. Give students a couple of minutes of quiet work time and then another minute to share their response with their partner. Follow with a whole-class discussion.
Student Facing
Jada has $20 to spend on games and rides at a carnival. Games cost $1 each and rides are $2 each.
1. Which equation represents the relationship between the number of games, \(x\), and the number of rides, \(y\), that Jada could do if she spends all her money?
2. Explain what each of the other two equations could mean in this situation.
Activity Synthesis
Invite students to share their interpretations of the equations.
Most students are likely to associate the 20 in the equation with the $20 that Jada has, but some students may interpret it to mean the combined number of games and rides Jada enjoys. (This is
especially natural to do for \(x+y=20\).) If this interpretation comes up, acknowledge that it is valid.
10.2: Graphing Games and Rides (20 minutes)
This activity is the first of several that draw students' attention to the structure of linear equations in two variables, how it relates to the graphs of the equations, and what it tells us
about the situations.
Students start by interpreting linear equations in standard form, \(Ax+By=C\), and using them to answer questions and create graphs. They see that this form offers useful insights about the
quantities and constraints being represented. They also notice that graphing equations in this form is fairly straighforward. We can use any two points to graph a line, but the two intercepts of the
graph (where one quantity has a value of 0) can be quickly found using an equation in standard form.
Students then analyze the graphs to gain other insights. They determine the rate of change in each relationship and find the slope and vertical intercept of each graph. Next, they rearrange the
equations to isolate \(y\). They make new connections here—the rearranged equations are now in slope-intercept form, which shows the slope of the graph and its vertical intercept. These values also
tell us about the rate of change and the value of one quantity when the other quantity is 0.
Tell students that they will now interpret some other equations about games and rides. They will also use graphs to help make sense of what combinations of games and rides are possible given certain
prices and budget constraints.
Read the opening paragraph in the task statement and display the three equations for all to see. Give students a minute of quiet time to think about what each equation means in the situation and then
discuss their interpretations. Make sure students share these interpretations:
• Equation 1: Games and rides cost $1 each and the student is spending $20 on them.
• Equation 2: Games cost $2.50 each and rides cost $1 each. The student is spending $15 on them.
• Equation 3: Games cost $1 each and rides cost $4 each. The student is spending $28 on them.
Arrange students in groups of 3–4. Assign one equation to each group (or ask each group to choose an equation). Ask them to answer the questions for that equation.
Give students 7–8 minutes of quiet work time, and then a few minutes to discuss their responses with their group and resolve any disagreements. Ask groups that finish early to answer the questions
for a second equation of their choice. Follow with a whole-class discussion.
Conversing: MLR2 Collect and Display. During the launch, listen for and collect language students use to describe the meaning of the three equations. Record a written interpretation next to each of
the three equations on a visual display. Use arrows or annotations to highlight connections between specific language of the interpretations and the parts of the equations. This will provide students
with a resource to draw language from during small-group and whole-group discussions.
Design Principle(s): Maximize meta-awareness; Support sense-making
Student Facing
Here are the three equations. Each represents the relationship between the number of games, \(x\), the number of rides, \(y\), and the dollar amount a student is spending on games and rides at a
different amusement park.
Equation 1: \(x + y = 20\)
Equation 2: \(2.50x + y = 15\)
Equation 3: \(x + 4y = 28\)
Your teacher will assign to you (or ask you to choose) 1–2 equations. For each assigned (or chosen) equation, answer the questions.
First equation: \(\underline{\hspace{50mm}}\)
1. What’s the number of rides the student could get on if they don’t play any games? On the coordinate plane, mark the point that represents this situation and label the point with its coordinates.
2. What’s the number of games the student could play if they don’t get on any rides? On the coordinate plane, mark the point that represents this situation and label the point with its coordinates.
3. Draw a line to connect the two points you’ve drawn.
4. Complete the sentences: “If the student played no games, they can get on \(\underline{\hspace{.75in}}\) rides. For every additional game that the student plays, \(x\), the possible number of
rides, \(y\), \(\underline{\hspace{1.5in}}\) (increases or decreases) by \(\underline{\hspace{.75in}}\).”
5. What is the slope of your graph? Where does the graph intersect the vertical axis?
6. Rearrange the equation to solve for \(y\).
7. What connections, if any, do you notice between your new equation and the graph?
Second equation: \(\underline{\hspace{50mm}}\)
1. What’s the number of rides the student could get on if they don’t play any games? On the coordinate plane, mark the point that represents this situation and label the point with its coordinates.
2. What’s the number of games the student could play if they don’t get on any rides? On the coordinate plane, mark the point that represents this situation and label the point with its coordinates.
3. Draw a line to connect the two points you’ve drawn.
4. Complete the sentences: “If the student played no games, they can get on \(\underline{\hspace{.75in}}\) rides. For every additional game that a student plays, \(x\), the possible number of rides,
\(y\), \(\underline{\hspace{1.5in}}\) (increases or decreases) by \(\underline{\hspace{.75in}}\).”
5. What is the slope of your graph? Where does the graph intersect the vertical axis?
6. Rearrange the equation to solve for \(y\).
7. What connections, if any, do you notice between your new equation and the graph?
Anticipated Misconceptions
Some students may not know how to interpret the phrase “for every additional game that a student plays.” Suggest to students that they compare how many rides they could take if they played 3 games,
to the number of rides they could take if they played 4 games. What about if they played 5 games? Ask them to notice how the number of rides changes when one more game is played.
Activity Synthesis
Select students to briefly share the graphs and responses. Keep the original equations, the rearranged equations, and their graphs displayed for all to see during discussion.
To help students see the connections between linear equations in standard form and their graphs, ask students:
• “How did you find the number of possible rides when the student plays no games?” (Substitute 0 for \(x\) and solve for \(y\).)
• “How did you find the number of possible games when the student gets on no rides?” (Substitute 0 for \(y\) and solve for \(x\).)
• “Where on the graph do we see those two situations (all games and no rides, or all rides and no games)?” (On the vertical and horizontal axes or the \(y\)- and \(x\)-intercepts.)
• “The three equations are all given in the same form: \(Ax + By = C\). What information can you get from an equation in this form? What do the \(A\), \(B\), and \(C\) represent in each equation?”
(\(A\) is the price per game, \(B\) is the price per ride, and \(C\) is the amount of money the student spends on games and rides.)
To help students see that an equivalent equation in slope-intercept form reveals other insights about the situation and the graph, discuss:
• “If we rearrange the first equation and solve for \(y\), we get the equation \(y = 20-x\). Is the graph of this equation different from that of the original equation?” (No, the equations are
equivalent, so they have the same graph.)
• “You were asked to complete some sentences about what would happen if the student played more games. How did the graph help you complete the sentences?” (The graph shows how many rides the
student can get on if they played no games. The line slants downward, which means that the more games are played, the fewer rides are possible. The graph shows how much the \(y\)-value (number of
rides) drops when the \(x\)-value (number of games) goes up by 1.)
• “Would you have been able to see the trade-offs between games and rides by looking at the original equations in standard form?” (No, not easily.)
• “Do the rearranged equations still describe the same relationships between games and rides?” (Yes. They are equivalent to the original.)
• “What new insights does this form of equation give us?” (Isolating \(y\) gives an equation in the form of \(y=mx+b\), which reveals the slope of the graph and where it intersects the \(y\)-axis.
The slope tells us how the number of rides changes if the student plays additional games. The \(y\)-intercept tells us the possible number of rides when no games are played.)
Highlight that each form of equation gives us some insights about the relationship between the quantities. Solving for \(y\) gives us the slope and \(y\)-intercept, which are handy for creating or
visualizing a graph. Even without a graph, the slope and \(y\)-intercept can tell us about the relationship between the quantities.
Representation: Internalize Comprehension. Demonstrate, and encourage students to use color coding and annotations to highlight connections between representations in a problem. For example, use the
same color to illustrate where the slope appears in each equation and corresponding graph. Continue to use colors consistently as students discuss “What do the \(A\), \(B\), and \(C\) represent in
each equation?”
Supports accessibility for: Visual-spatial processing
10.3: Nickels and Dimes (10 minutes)
This activity serves two practice goals: writing and graphing linear equations of the form \(Ax + By=C\) to represent a constraint, and interpreting points on a graph in terms of the situation it
represents. In this case, only whole-number values are meaningful for both variables (number of dimes and number of nickels). Students need to consider whether decimal solutions are reasonable in the
Graphing the equation involves some decisions. The axes of the blank coordinate plane are not labeled, so students need to decide which quantity goes on which axis (and to recognize that the decision
affects what each point on the graph represents). Students could also choose to draw a continuous graph (a line) or a discrete graph (points at whole-number values of one variable or both
As students work, notice the graphing decisions students make. Identify students who draw a discrete graph so they could share their rationale during class discussion.
Students engage in quantitative and abstract reasoning (MP2) as they think about the solutions and graph of an equation in context. They practice aspects of modeling (MP4) as they write an equation
for a constraint, decide on representations for the model, and reflect on whether the mathematical results make sense in the given situation.
Consider keeping students in groups of 3–4.
Speaking, Reading: MLR5 Co-Craft Questions. Use this routine to provide opportunities for students to analyze how different mathematical forms and symbols can represent different situations. Display
only the problem statement without revealing the questions that follow. Invite students to write down possible mathematical questions that could be asked about the situation. Invite students to
compare their questions before revealing the remainder of the question. Listen for and amplify any questions that address quantities of each type of coin.
Design Principle(s): Maximize meta-awareness; Support sense-making
Representation: Internalize Comprehension. Activate or supply background knowledge about generalizing a process to create an equation for a given situation. Some students may benefit by first
calculating how many nickels Andre would have if there were 0, 1, 5, or 10 dimes in the jar, and then how many dimes if there were 1, 5, or 10 nickels in the jar. Invite students to use what they
notice about the processes they used to create an equation.
Supports accessibility for: Visual-spatial processing; Conceptual processing
Student Facing
Andre’s coin jar contains 85 cents. There are no quarters or pennies in the jar, so the jar has all nickels, all dimes, or some of each.
1. Write an equation that relates the number of nickels, \(n\), the number of dimes, \(d\), and the amount of money, in cents, in the coin jar.
2. Graph your equation on the coordinate plane. Be sure to label the axes.
3. How many nickels are in the jar if there are no dimes?
4. How many dimes are in the jar if there are no nickels?
Student Facing
Are you ready for more?
What are all the different ways the coin jar could have 85 cents if it could also contain quarters?
Anticipated Misconceptions
Some students who wish to change their equation from standard form to slope-intercept form may get stuck because they are not sure whether to solve for \(n\) or \(d\). Either choice is acceptable,
but this is a good opportunity for students to think through the implications of their choice. Ask students: “In \(y=mx+b\), which variable goes on the horizontal axis? Which goes on the vertical?”
Other students might wish to graph using the equation in standard form without first rewriting it into another form. Ask if they could identify two points on the graph. Alternatively, ask them to
think about how many nickels there would be if there were 0 dimes, 1 dime, 2 dimes, and so on, and plot some points accordingly.
Activity Synthesis
Select previously identified students to share their graphs. For each graph, ask if anyone else also drew it the same way. If no one drew discrete graphs and no one mentioned that fractional values
of \(d\) or \(n\) have no meaning or are not possible in the situation, ask students about it.
Display the following graphs (or comparable graphs by students) for all to see.
Make sure students understand that all of these graphs are acceptable representations of the relationship between the quantities. A graph showing only points with whole-number coordinate values
represents the solutions to the equation accurately but may be time consuming to draw. A line may be a quicker way to see the possible solutions and can be used for problem solving as long as we are
aware that only points with whole-number values make sense.
For example, when reasoning about the last question, students who used a continuous graph might see that the jar would contain 8.5 dimes if it has no nickels. It is important that they recognize that
this is impossible. The same reflection about the context is also necessary if students answered the question by solving the equation for \(d\) when \(n\) is 0.
If time permits, discuss these questions to reinforce the connections to earlier work on equivalent equations and their graphs:
• "Suppose you were to express the relationship between the same quantities but in dollars instead of in cents. What would the equation look like?" (\(0.05n + 0.1d = 0.85\))
• "What would the graph of this equation look like? Try graphing it on the same coordinate plane." (It'd be the same line as the graph for \(5n+10d=85\).)
• "Why would the graph of this equation be identical to the other one?" (The two equations are equivalent. Dividing the first equation—representing the relationship in cents—by 100 gives the second
equation—representing the relationship in dollars. The same combinations of nickels and dimes make both equations true.)
Lesson Synthesis
To help students consolidate their work in this lesson, discuss questions such as:
• "We saw equations in different forms representing the same constraint. For example, \(x+4y=28\) and \(y=\text-\frac14 x+ 7\) both represent the games and rides that a student could do with a
fixed budget. What information about the situation and about the graph can we gain from the standard form, \(Ax+By=C\)?" (In this example, the standard form allows us to see the cost per ride,
the cost per game, and the budget.)
• "What information does the slope-intercept form give us?" (It gives us the slope and \(y\)-intercept of the graph. The slope tells us what is given up in terms of rides for each additional game
played. The \(y\)-intercept tells us how many rides are possible when no games are played.)
• "What might be an efficient way to graph an equation of the form \(Ax+By=C\)?" (Substituting 0 for \(x\) or for \(y\) in the equation. Doing so gives us \((x,0)\) and (\(0,y)\), which are the
horizontal and vertical intercepts of the graph. We could choose two other points, as well, but using 0 eliminates one of the variables, simplifying the calculation. Alternatively, we could
isolate \(y\) and rearrange the equation into slope-intercept form, which shows us the \(y\)-intercept and the slope.)
10.4: Cool-down - Kiran at the Carnival (5 minutes)
Student Facing
Linear equations can be written in different forms. Some forms allow us to better see the relationship between quantities or to predict the graph of the equation.
Suppose an athlete wishes to burn 700 calories a day by running and swimming. He burns 17.5 calories per minute of running and 12.5 calories per minute of freestyle swimming.
Let \(x\) represents the number of minutes of running and \(y\) the number of minutes of swimming. To represent the combination of running and swimming that would allow him to burn 700 calories, we
can write:
\(17.5x + 12.5y = 700\)
We can reason that the more minutes he runs, the fewer minutes he has to swim to meet his goal. In other words, as \(x\) increases, \(y\) decreases. If we graph the equation, the line will slant down
from left to right.
If the athlete only runs and doesn't swim, how many minutes would he need to run?
Let's substitute 0 for \(y\) to find \(x\):
\(\begin {align} 17.5x + 12.5(0) &= 700\\ 17.5x &= 700\\ x&= \dfrac {700}{17.5}\\ x&=40 \end{align}\)
On a graph, this combination of times is the point \((40,0)\), which is the \(x\)-intercept.
If he only swims and doesn't run, how many minutes would he need to swim?
Let's substitute 0 for \(x\) to find \(y\):
\(\begin {align} 17.5(0) + 12.5y &= 700\\ 12.5y &= 700\\ y&= \dfrac {700}{12.5}\\ y&=56 \end{align}\)
On a graph, this combination of times is the point \((0,56)\), which is the \(y\)-intercept.
If the athlete wants to know how many minutes he would need to swim if he runs for 15 minutes, 20 minutes, or 30 minutes, he can substitute each of these values for \(x\) in the equation and find \(y
\). Or, he can first solve the equation for \(y\):
\( \begin {align}17.5x + 12.5y &= 700\\ 12.5y &= 700 - 17.5x\\ y &= \dfrac {700-17.5x}{12.5}\\ y &=56 - 1.4x \end{align}\)
Notice that \(y=56 - 1.4x\), or \(y=\text-1.4x + 56\), is written in slope-intercept form.
• The coefficient of \(x\), -1.4, is the slope of the graph. It means that as \(x\) increases by 1, \(y\) falls by 1.4. For every additional minute of running, the athlete can swim 1.4 fewer
• The constant term, 56, tells us where the graph intersects the \(y\)-axis. It tells us the number minutes the athlete would need to swim if he does no running.
The first equation we wrote, \(17.5x + 12.5y = 700\), is a linear equation in standard form. In general, it is expressed as \(Ax + By = C\), where \(x\) and \(y\) are variables, and \(A, B\), and \
(C\) are numbers.
The two equations, \(17.5x + 12.5y = 700\) and \(y=\text-1.4x + 56\), are equivalent, so they have the same solutions and the same graph.
|
{"url":"https://im-beta.kendallhunt.com/HS/teachers/1/2/10/index.html","timestamp":"2024-11-11T07:24:36Z","content_type":"text/html","content_length":"133290","record_id":"<urn:uuid:fe71274c-4938-4dce-adfe-3e050ca317b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00617.warc.gz"}
|
Java Program To Find Second Largest Element In An Array
In This tutorial, we will see a Java program to find the second largest element in an array.
In the previous article, Java Program To Find Largest Element In An Array (3 Ways), we have seen a few programs to find the largest element in an array. Today, we will see a program to find the
second-largest element in an array.
Here we will take an array of integers arr and please note that arr is not a sorted array. ex. {2, 5, 9, 8, 11, 18, 13}
Java Program to find the second largest element in an array
1. Iterating over an array
We will iterate over an array using the for loop to find the second largest element in an array.
1 /**
2 * A Java program to find the second-largest number in an array
3 * by iterating over an array using a for loop.
4 *
5 * @author coderolls.com
6 */
7 public class SecondLargestElementInArray {
9 public static void main(String[] args) {
10 int[] arr = {2, 5, 9, 8, 11, 18, 13};
12 int secondLargest = getSecondLargest(arr);
13 System.out.println("The second largest element in "
14 + "an array 'arr'is: "+ secondLargest);
15 }
17 private static int getSecondLargest(int[] arr) {
18 int n =arr.length;
19 int largest =arr[0];
20 int secondLargest = -1;
22 for(int i=0; i<n; i++) {
23 if(arr[i]>largest) {
24 //if you found the new largest,
25 //copy current largest to second largest and
26 //copy current element arr[i] to largest
27 secondLargest = largest;
28 largest = arr[i];
29 }else if(arr[i]!=largest) {
30 // if the current element arr[i] is not the largest and
31 // still larger than the current secondLargest
32 // then copy it to secondLargest
33 if(arr[i]>secondLargest) {
34 secondLargest = arr[i];
35 }
36 }
37 }
38 return secondLargest;
39 }
40 }
1 The second largest element in an array 'arr' is: 13
1. In the main method, we have taken a sample array arr = {2, 5, 9, 8, 11, 18, 13}; and passed it as a parameter to the getSecondLargest() method to return the largest number in an array arr.
2. In the getSecondLargest() method we have stored the length of an array arr into int variable n using the arr.length method. To start the program we have assigned the number at index 0 i.e. arr[0]
as the current largest number i.e. largest. Also, we have assigned the value -1 the current second largest number secondLargest.
If the array contains all the similar numbers, there will be no second-largest number in an array. ex. arr= {12,12,12,12} So it will return -1 in that case.
3. Using the if statement we are checking if the current number at index i i.e. arr[i] is larger than the current largest number i.e largest, we will
1. assign the current largest number largest to secondLargest
2. store current number i.e. arr[i] as largest
4. Next, we will check if the current number arr[i] is not the current largest number largest but is still larger than the current second largest number, we will store it to the secondLargest.
5. Return secondLargest.
2. Using Arrays.sort()
As the array is not sorted, we can sort it using the Arrays.sort() method in natural sorting order. So the second last element of an array will be the second largest element of an array.
1 import java.util.Arrays;
3 /**
4 * A Java program to find the second-largest number in an array
5 * using Arrays.sort() method.
6 *
7 * @author coderolls.com
8 */
9 public class SecondLargestElementInArrayUsingArrays {
11 public static void main(String[] args) {
12 int[] arr = {2, 5, 9, 8, 11, 18, 13};
14 int secondLargest = getSecondLargest(arr);
15 System.out.println("The second largest element in"
16 + "an array 'arr' is using Arrays.sort() :"+ secondLargest);
17 }
19 private static int getSecondLargest(int[] arr) {
20 Arrays.sort(arr);
21 // return second largest, so length-2
22 return arr[arr.length-2];
23 }
24 }
1 The second largest element in an array 'arr' is using Arrays.sort(): 13
1. In the main method, we have taken a sample array arr = {2, 5, 9, 8, 11, 18, 13}; and passed it as a parameter to the getSecondLargest() method to return the largest number in an array arr.
2. In the getSecondLargest() method, we have sorted an array arr in natural sorting order using the Arrays.sort() method.
3. Once we sort the array in natural sorting order, we will have the second largest number as the second last element of the array. We can get the second last number of an array as arr[arr.length-2]
to return it.
In this way ( 2. Using Arrays.sort()) even if all the numbers of an array are similar ex. arr= {12,12,12,12}, i.e. when there is no second largest element, it will return that same number.
We can find the second largest element in an array in the following two ways,
1. By iterating over an array using a for loop to compare the largest and second largest number.
2. By sorting an array in a natural sorting order and returning the second last element. i.e. arr[arr.length-2]
The example Java programs used in the above article can be found at this GitHub repository, blogpost-coding-examples/java-programs/second-largest-element-in-an-array/.
Please write your thoughts in the comment section below.
Join Newsletter
Get the latest tutorials right in your inbox. We never spam!
|
{"url":"https://coderolls.com/second-largest-element-in-an-array/","timestamp":"2024-11-02T12:18:08Z","content_type":"text/html","content_length":"49613","record_id":"<urn:uuid:3bf8aada-824b-46be-b734-e64177e334fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00706.warc.gz"}
|
Artificial Intelligence Professional Certificate
Current Status
Not Enrolled
In 1959, Arthur Samuel, a computer scientist who pioneered the study of artificial intelligence, described the
machine learning – ML- as «the study that gives computers the ability to learn
without being explicitly programmed. Alan Turing’s seminal article (Turing, 1950) introduced a
reference standard to demonstrate the intelligence of machines, such that a machine has
to be smart and respond in a way that cannot be differentiated from that of a human being.
Machine learning is an application of artificial intelligence in which a computer /
machine learns from past experiences (input data) and makes future predictions. The
The performance of such a system should at least be human level.
In this material, we will focus on clustering problems for non-machine learning.
supervised with the K-Means algorithm. For supervised machine learning we will describe the
classification problem with a proof of the design tree algorithm and the regression algorithm
with an example of linear regression. Below is a summary representing the types
machine learning and some algorithms as examples in the following figure:
Learning objectives:
• Understand the fundamentals of
Artificial Intelligence and Learning
• Describe the learning methods
automatic: supervised and unsupervised
• Use Data Analysis to take
• Understand the limits of algorithms
• Understand and understand programming
in Python, mathematical knowledge
essentials in AI and basic methods of
Target audiences:
Anyone interested in expanding
their knowledge in Artificial Intelligence and
Machine learning
• Engineers, analysts, marketing directors
• Data analysts, data scientists,
data managers
• Anyone interested in techniques of
data mining and machine learning
There are no formal prerequisites for this
|
{"url":"https://gdinstitute.com/certificados/kanban-certified-associate-kca-3/","timestamp":"2024-11-07T09:02:16Z","content_type":"text/html","content_length":"71076","record_id":"<urn:uuid:1f2eb7d9-781e-490c-b2b8-a7b81c7e1172>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00626.warc.gz"}
|
Hallowin candy solution | Sololearn: Learn to code for FREE!
Hallowin candy solution
Can any one explain me how to solve these problem becuase I only able to correct first 2 problem and other are showing not correct I used round up method and 2*100/houses But still I am not able to
complete the problem
Hear, the answer of the code the problem is not in the code but in rouding the value Instead use math.ceil( ) Function that round the value In round 20.1 become 20 But in ceil function 20.1 become 21
and that the answer Code is here.. import math houses = int(input()) #your code goes here result= (2*100/houses) print(math.ceil(result))
hint: 2.1, 2.2, 2.3 ... houses doesn't make sense
Your code does not produce the expected results. Why do you mark your post as "best"?
houses = int(input()) #your code goes here result= (2*100/houses) print(round(result))
It's a mistake sorry for that can you give code with explain why I my code not giving the answer
|
{"url":"https://www.sololearn.com/ru/Discuss/3304512/hallowin-candy-solution","timestamp":"2024-11-02T00:07:13Z","content_type":"text/html","content_length":"1049740","record_id":"<urn:uuid:ebbb0e64-ca0f-4ec1-885d-762060acf104>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00154.warc.gz"}
|
How does plate tectonics cause volcanism and earthquakes?
How does plate tectonics cause volcanism and earthquakes?
Where plates come into contact, energy is released. Plates sliding past each other cause friction and heat. Subducting plates melt into the mantle, and diverging plates create new crust material.
Subducting plates, where one tectonic plate is being driven under another, are associated with volcanoes and earthquakes.
Are the plates moving?
Earth > Plates on the Move But its outer shell or surface is actually moving all the time. Around the world, mountains form, volcanoes erupt, and earthquakes shake. Why?
What happens when two plates collide?
If two tectonic plates collide, they form a convergent plate boundary. Usually, one of the converging plates will move beneath the other, a process known as subduction. The new magma (molten rock)
rises and may erupt violently to form volcanoes, often building arcs of islands along the convergent boundary.
How many plates are on the earth?
Is Taal a supervolcano?
The Philippines has an active volcano too. It is one of the well-known and visited touristic place of the whole archipelago. The smallest supervolcano that has formed on the planet 500 000 years ago.
Taal Volcano is one of the most active volcanoes in the world.
What really happens when plates move?
When the plates move they collide or spread apart allowing the very hot molten material called lava to escape from the mantle. When collisions occur they produce mountains, deep underwater valleys
called trenches, and volcanoes. The Earth is producing “new” crust where two plates are diverging or spreading apart.
What would happen if all volcanoes erupted at the same time?
If all active volcanoes on Earth went off at the same time, there would be a lot of explosions. Explosive eruptions would churn out wall of rocks, ash and gas, wiping out the nearby areas. They would
travel for thousands of kilometers, and cover the Earth with the thick blanket of ash.
How volcanic activities are related to plate tectonics?
Most volcanoes form at the boundaries of Earth’s tectonic plates. At a divergent boundary, tectonic plates move apart from one another. They never really separate because magma continuously moves up
from the mantle into this boundary, building new plate material on both sides of the plate boundary.
What would happen if we didn’t have earthquakes?
Without the two types of tectonic plate, Earth would be incapable of making new crust or destroying the old… So, in a roundabout way, if earthquakes never happened then Earth could well end up an
ancient wasteland; just another uninhabitable planet in the solar system.
What would happen to the Earth if plate are not present?
If the continents were eroded completely into the oceans there would be no continents and no land left. The continents are being eroded. Without plate tectonics that push the continents up the
erosion would result in the continents disappearing under the surface of the oceans.
What causes the tectonic plates to move?
The heat from radioactive processes within the planet’s interior causes the plates to move, sometimes toward and sometimes away from each other. This movement is called plate motion, or tectonic
What will happen if the plates continue to move?
Even though plates move very slow the motion of the plates moving is called plate tectonics, has a huge impact on the Earth. Plate tectonics form the oceans, continents, and mountains. It also helps
us understand why and where natural disasters like earthquakes occur and volcanoes erupt.
Are there any benefits to volcanoes?
They helped cool off the earth removing heat from its interior. Volcanic emissions have produced the atmosphere and the water of the oceans. Volcanoes make islands and add to the continents. Volcanic
deposits are also used as building materials.
Are plate tectonics necessary for life?
UNIVERSITY PARK, Pa. — There may be more habitable planets in the universe than we previously thought, according to Penn State geoscientists, who suggest that plate tectonics — long assumed to be a
requirement for suitable conditions for life — are in fact not necessary.
How do volcanoes form without plate tectonics?
Hotspot Volcanoes These plumes of molten rock, called magma, rise from the lower asthenosphere. They are much hotter than the typical lithosphere rock. As the plate moves over the hotspot, a sequence
of volcanoes is formed.
|
{"url":"https://gowanusballroom.com/how-does-plate-tectonics-cause-volcanism-and-earthquakes/","timestamp":"2024-11-04T14:46:25Z","content_type":"text/html","content_length":"53893","record_id":"<urn:uuid:16c408ee-f795-47f4-b8d9-767e3cd948c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00449.warc.gz"}
|
3.2: Linear Functions
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
In this section, we will focus on a particular type of function known as a linear function. Much can be said about linear functions, and in fact there is an entire branch of mathematics, linear
algebra, devoted to the study of linear functions! We will focus more on the applications of linear functions in this chapter.
• Recognize examples and non-examples of linear functions
• Use linear functions to model real-world situations
• Describe various geometric properties of linear functions, including their slope
Linear Functions
Instead of starting this section with a definition, as usual, we will start with an example.
For most residential customers who are connected to city water, the city of Monmouth charges a basic monthly water use \(\$22.48\) fee each month, plus a \(\$.0323\) per cubic foot of water used.
Write a function that describes the cost of a monthly water bill in Monmouth in terms of the number of cubic feet of water used. (Note: these numbers are accurate as of 2023.)
Before we start trying to write a function, let's think about a few examples. If you used \(100\) cubic feet of water (a very small amount for monthly water use), you would have to pay \(100 \times \
$0.0323 = \$3.23\) for that 100 cubic feet of water, plus the basic fee charge. So the total would be calculated as follows:
\[\left(\underset{\text{rate per cubic foot } \times \text{ cubic feet }} {\$0.0323 \times 100}\right) + \underset{\text{ basic fee }}{\$22.48} = \$25.71\]
That is not too bad, since we are using exactly 100 cubic feet. What if you use 200 cubic feet? Well, in that case, we have to pay the volume rate twice. In that case, we would have:
\[\left(\underset{\text{rate per cubic foot } \times \text{ cubic feet }} {\$0.0323 \times 200}\right) + \underset{\text{ basic fee }}{\$22.48} = \$28.94\]
That is, we are taking the number of cubic feet of water, multiplying it by the rate per cubic foot, and then adding the basic fee. Either way, the basic fee does not change from month to month, but
the amount you pay for water does -- it is determined by your usage.
So, for a general formula, we could use \(w\) to stand for the number of cubic feet of water used, and \(C(w)\) to stand for the cost of the water bill in dollars. Remember: this is function
notation, and it simply means that the cost is dependent upon the amount of water used. It is not denoting multiplication!
Therefore, a function that represents this situation is: \[C(w) = 0.0323 w + 22.48\]
That may not have been obvious, but now that you've seen it, perhaps you can recognize similar situations in the future. There are two components to this situation: a flat fee, and a per unit rate.
The per unit rate is multiplied by the number of units used. In the previous example, the units were cubic feet, and the per unit rate was \(\$0.0323\) per cubic foot of water. The basic fee, or flat
fee, is a cost that does not change based on usage, and is just added onto the bill. In the previous example, the flat fee was \(\$22.48\). There are many things that are charged this way. We will
see some examples, and if you think about bills you or your family pay, you can likely think of more.
This common function structure, which shows up not only in bills but in many other places, has a special name.
A linear function is a function that can be written \(f(x) = mx + b\) for some numbers \(m\) and \(b\). The number \(m\) is called the slope of the function, and represents the rate of change of the
function. The number \(b\) is called the vertical intercept of the function, and represents the starting value of the function. The graph of a linear function looks like a straight line.
Let's see a couple examples.
Is the water bill example from the previous problem a linear function? If so, find its slope and vertical intercept and interpret them in the context of the question. Evaluate \(C(500)\), graph the
corresponding point, and explain its significance.
The formula from the previous problem is indeed a linear function. Recall, we found that: \[C(w) =0.0323w + 22.48 \]
where \(w\) was the number of cubic feet of water used and \(C(w)\) was the cost associated to that usage. We see that indeed, this has the form \(mx+ b\), where \(m = 0.0323\) and \(b = 22.48\). You
may notice that instead of the variable \(x\) and function name \(f\), the function \(C(w)\) uses a \(C\) and a \(w\) instead. This is intentional, and shows that the letter names for the variable
and function don't affect whether the function is linear. We are free to choose letters as we see fit to match the context of the problem. In this problem, \(w\) stood for the amount of water, and \
(C\) stood for the cost of the bill, which will help us remember their meaning when we need to interpret our answer later.
Since we know \(C(w) =0.0323w + 22.48\) is linear, we can see that its slope is \(0.0323\), and its vertical intercept is \(22.48\). In context, the slope represents the fact that each cubic foot of
water costs \(\$0.0323\) that is, it is the rate of change of the cost with respect to usage. The vertical intercept in this case is \(22.48\), which represents the flat fee, or starting value. In
other words, it is the amount you would pay if you used no water at all.
To graph this, we can use the methods we learned in the previous section: either we can make a table of values, or we can use graphing software (such as Desmos). Desmos produces this picture:
On this picture, we see that the line crosses the vertical axis just above the \(20\) line. The exact spot it is crossing is at \(22.48\), which is the vertical intercept of the function. That shows
the flat fee that is paid when \(0\) cubic feet of water are used.
As we move to the right, the horizontal axis shows how many cubic feet of water are used. As that number increases, the value of the function (which is the vertical value of the red line) increases.
For example, when the number of cubic feet of water used is \(500\), we can look at \(500\) on the horizontal axis. The value of the function at that point, which is between \(35\) and \(40\) on the
vertical axis, is the cost associated with \(500\) cubic feet of water usage. We can calculate the exact value of this point by computing: \[C(500) = 0.0323 \times 500 + 22.48 = 38.63\]
That is, if you use \(500\) cubic feet of water, your water bill will be \(\$38.63\). The graph gives us an easy way to estimate any the cost associated with any number of cubic feet. The formula
allows us to calculate exact values.
Here is the graph again, this time with the relevant points labeled.
Take a look at the calculations again, and make sure that you understand how they relate to the graph. The first coordinate of each ordered pair corresponds to the amount of water used, and the
second coordinate corresponds to the cost of the bill. The graph gives us a way to visualize the entire relationship between water used and cost.
Properties of Linear Functions
Something you may have noticed about the water bill example is that the cost went up at a constant rate when compared to the water usage. This is the hallmark of linear functions -- a constant rate
of change. This constant rate of change can either be increasing or decreasing. If it is an increasing linear function, the same amount will be added to the output for every increase of one in the
input. If it is a decreasing linear function, the same amount will be subtracted from the output for each increase in the input.
Each of the following tables represents a function. Which one(s) could be linear? Explain why you know.
1. \(x\) \(f(x)\)
2. \(x\) \(g(x)\)
3 -2
4 -5
Let's look at the table a, which describes a function, \(f(x)\) first. We see that the inputs (that is, the \(x\) values) are increasing steadily by \(1\). If this were a linear function, we should
be able to add the same amount to each successive output (that is, the \(f(x)\) values) to predict the behavior of the function. However, we see that the distance from \(f(1) = 2\) to \(f(2) = 4\) is
\(2\) but the distance from \(f(2) = 4\) to \(f(3) = 8\) is \(4\). That is, the outputs are increasing at different amounts, rather than the same amount. Therefore, f(x) is not a linear function.
Now, you may correctly notice that there is a predictable pattern to function \(f(x)\). The outputs here are multiplied by a constant number each time. Even though this is a pattern that we can
recognize mathematically, f(x) still does not qualify as a linear function. It does not have a constant rate of change -- meaning the same amount added each time -- which is necessary for a linear
If you are still not convinced that this isn't a linear function, examine its graph, which you can find by hand if you plot points and connect them using a smooth curve:
Notice that this graph is not a straight line; rather, it is a curve that has an increasing rate of change. That is, as the input (horizontal) values increase, the output (vertical) values increase
at a faster and faster rate. This means the function f(x) cannot be linear, because its graph is not a straight line.
Now, we move onto the table b, which describes a function \(g(x)\). We notice that the inputs of \(g(x)\) change by \(1\), so the outputs should either increase or decrease at a constant rate.
Indeed, as we move from \(g(1) = 4\) to \(g(2) = 1\), we decrease by \(3\). Then from \(g(2) = 1\) to \(g(3) = -2\), we decrease by 3 again. Likewise, from \(g(3) = -2\) to \(g(4) = -5\) is a
decrease by 3.
Therefore, table b could represent a linear function. If we plot the points and connect them, we get the following graph:
This graph is a straight line, which is further geometric confirmation that \(g(x)\) could be a linear function.
You may wonder how to take the data given in the previous example and make a linear function from it. There is a process to do this, which involves a slightly more careful look at the concept of
The slope of a linear function is the rate of change of the linear function. That is, the slope is the constant amount of increase or decrease for each change of 1 to the input of the function. It is
calculated as \[ m = \text{ slope } = \frac{\text{rise}}{\text{run}} = \frac{y_2 - y_1}{x_2 - x_1}\]
where \((x_1, y_1), (x_2, y_2)\) are any two points on the line.
Here is a brief explanation of the formula: we are comparing two points on the line, \((x_1, x_2)\) and \((y_1, y_2)\), and calculating the ratio of "change in \(y\)" to "change in \(x\)." To do
that, we calculate \(y_2 - y_1\), which is the vertical distance between the two points, also known as the "rise." Likewise, the quantity \(x_2 - x_1\) is horizontal distance between the two points,
also known as the "run". The slope is the ratio of these two quantities, read as "rise over run." It describes the change in the output \(y\) as it relates to the change in the input \(x\). An
example will help illustrate this.
According to Honda's website, a new basic model 2013 Honda accord cost \(\$21,680\) when it was first sold. In 2023, such a car should sell for about \(\$13,086\) according to Kelly Blue Book.
Assuming that the price of a 2013 Honda Accord changes linearly over time:
• Find the slope of the linear function describing the price of a Honda Accord, and interpret it in context.
• Find a formula for the linear function that describes the price of a Honda Accord \(t\) years after \(2013\) and interpret the vertical intercept in context.
• Use your function to predict the price of a Honda Accord sedan in \(2033\), and label the corresponding point on the graph.
• Find the year in which the price of a 2013 Honda Accord is projected to be \(\$7929.60\).
First, please take note that this is an oversimplified situation. Used car prices, while they do tend to decrease over time, are affected by market factors that make their value highly variable and
subject to fluctuation. This exercise shows how to use linear functions to get a ballpark idea, but this should not be mistaken for an exact prediction!
Our first step is to take our two data points and convert them into ordered pairs. Here, a convenient input variable will be the time \(t\) in years since \(2013\), and the output variable will be
the price \(P(t)\) in dollars. Our first point will be \((0, 21680)\), which corresponds to the fact that \(0\) years after the year 2013, a Honda Accord cost \(\$21,680\). Our second point will be \
((10, 13086)\), which corresponds to the fact that \(10\) years after 2013, that same Honda Accord should \(\$13,086\). This process of converting the written information into mathematical points is
always the first step of the mathematical problem-solving process.
Now we can use these points to stand in for \((x_1, y_1)\) and \((x_2, y_2)\) in the slope formula. We have: \[(x_1, y_1) = (0, 21680) \quad \text{ and } \quad (x_2, y_2) = (10, 13086)\]
Next, we'll use the slope formula to find the slope: \[\text{slope } = \frac{y_2 - y_1}{x_2 - x_1} = \frac{13086- 21680 }{10-0} = \frac{-8594}{10} = -859.4\]
Therefore, the slope of the linear function is \(-859.4\). Since the slope was calculated by dividing a number of dollars by a number of years, its units are dollars per year. That is, the slope is \
(-$859.40\) per year. This means that a 2013 Honda Accord loses value at a rate of \(\$859.40\) per year. (Such a value loss is known as depreciation.)
To answer the second part, we need to work backwards a bit. We observe that the vertical intercept will correspond to the year \(t = 0\), which in this case is the year 2013. We have that the value
of the car in 2013 is \(\$21,680\). Therefore, our vertical intercept is \(21680\).
Now that we have our slope and vertical intercept, we can write the equation of the linear function describing the cost of a 2013 Honda Accord \(t\) years after 2013: \[C(t) = -859.4t + 21680\]
To answer the last part, we will note that 2033 is \(20\) years after 2013, and therefore it corresponds to the value \(t = 20\). Thus, we simply need to evaluate the function \(C(t)\) at the value \
(t = 20\). We have: \[C(20) = -859.4 \times 20 + 21680 = 4492\]
That is, in the year 2033, a 2013 Honda Accord is predicted to be worth about \(\$4,492\).
We can visualize the line and the values at all three points in time by graphing the line and labeling the points:
Our last question asks us to find the year in which the projected price of a 2013 Honda Accord is equal to \(\$7929.60\). This is slightly different from the previous question in that, instead of
knowing the year and finding the price, we are going backwards — we know the price, and we are trying to find the year in which that price occurred. That means that instead of plugging in a value for
\(t\), we are instead plugging in a value for \(C(t)\) and then solving for \(t\). This will require two steps of algebra, but one of them is already familiar to you from a previous chapter.
We'll start by plugging in our cost, \(\$7929.60\), for \(C(t)\): \[7929.60 = -859.4t + 21680\] Our goal is to isolate \(t\). In order to do that, we need to use the fact that subtraction undoes
addition. In other words, we will subtract \(21680\) from both sides: \[7929.60 - 21680 = -859.4t + 21680 - 21680\] On the right side, we see that \(21680 - 21680 = 0\), so we are left with \[7929.60
- 21680 = -859.4t\] We can then perform the subtraction on the left side, which gives us a negative number: \[-13750.4 = -859.4t\] Now we have a situation that may look familiar from the previous
chapter: we can use division undoes multiplication! We get \[t = \frac{-13750.4}{-859.4} = 16\] This tells us that when \(t = 16\), the price is equal to \(\$7929.60\). Rephrasing the answer in
context, we see that in 2029 (which is \(16\) years after 2013), the price of a 2013 Honda Accord is \(\$7929.60\). See if you can find the corresponding point on the graph above!
This shows the importance of linear functions in making predictions about the future. Of course, not all relationships are linear. But often linear functions give close-enough estimates for many
situations, and are relatively simple to calculate. We'll see more of how linear functions are used in making predictions in the next section.
1. Go to Monmouth Power and Light's website. Find the Rates link on the side menu to look for the current Residential electricity rates. Locate the basic customer charge (flat fee) and the cost per
per kilowatt hour (kWh). (Be careful not to round here, and note that some of the numbers are given in cents but others are given in dollars.)
1. Write a linear function \(C(k)\) that describes the cost of your Monmouth electric bill in dollars, in terms of the number of kilowatt hours k that are used.
2. According to the EPA, The average American household uses \(886\) kilowatt hours of energy each month. Find the cost of your electric bill in Monmouth if you use \(886\) kilowatt hours of
energy in a given month using the function \(C(k)\).
3. Graph the function C(k) either by hand or using an electronic graphing tool such as Desmos. Label the point corresponding to your answer from the previous question on the graph.
2. Of the following three tables, one does not represent a function, one represents a function that is not linear, and one represents a linear function. Identify which table satisfies which
properties. Explain your answer using at least 3 complete sentences.
1. Input Output
2. Input Output
1 7.5
3 2.5
3. Input Output
3. A population of deer in a forest is 87 in 2015, and 175 in 2023. Assuming that the deer population changes linearly:
1. Find the linear function P(t) that describes the population of deer in the forest t years after 2015.
2. Using your function from the previous part, predict the deer population in 2030. (Round to the nearest deer if applicable.)
3. Find the year in which the deer population is projected to be 417 deer.
4. Graph your function either by hand or using Desmos, and label the point corresponding to your answer from the previous two parts.
5. What are some limitations to this model? What is unrealistic about it?
|
{"url":"https://math.libretexts.org/Courses/Western_Oregon_University/Math_110%3A_Applied_College_Mathematics/03%3A_The_Language_of_Lines/3.02%3A_Linear_Functions","timestamp":"2024-11-12T02:55:50Z","content_type":"text/html","content_length":"152471","record_id":"<urn:uuid:4ee67fb7-2c22-45ae-a95e-121a54f98700>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00855.warc.gz"}
|
Gecode::Int::Circuit::Base< View, Offset > Class Template Reference
Base-class for circuit propagator. More...
#include <circuit.hh>
Public Member Functions
virtual size_t dispose (Space &home)
Delete propagator and return its size.
Protected Member Functions
Base (Space &home, bool share, Base &p)
Constructor for cloning p.
Base (Home home, ViewArray< View > &x, Offset &o)
Constructor for posting.
ExecStatus connected (Space &home)
Check whether the view value graph is strongly connected.
ExecStatus path (Space &home)
Ensure path property: prune edges that could give too small cycles.
Protected Attributes
int start
Remember where to start the next time the propagator runs.
ViewArray< View > y
Array for performing value propagation for distinct.
Offset o
Offset transformation.
Detailed Description
template<class View, class Offset>
class Gecode::Int::Circuit::Base< View, Offset >
Base-class for circuit propagator.
Provides routines for checking that the induced variable value graph is strongly connected and for pruning short cycles.
Definition at line 59 of file circuit.hh.
Constructor & Destructor Documentation
template<class View, class Offset>
Gecode::Int::Circuit::Base< View, Offset >::Base ( Space & home,
bool share,
Base< View, Offset > & p
) [protected]
Constructor for cloning p.
template<class View , class Offset >
Gecode::Int::Circuit::Base< View, Offset >::Base ( Home home,
ViewArray< View > & x,
Offset & o
) [inline, protected]
Constructor for posting.
Definition at line 42 of file base.hpp.
Member Function Documentation
template<class View , class Offset >
ExecStatus Gecode::Int::Circuit::Base< View, Offset >::connected ( Space & home ) [inline, protected]
Check whether the view value graph is strongly connected.
First non-assigned node reachable from start
Number of nodes not yet visited
Information needed for checking scc's
Definition at line 73 of file base.hpp.
template<class View , class Offset >
ExecStatus Gecode::Int::Circuit::Base< View, Offset >::path ( Space & home ) [inline, protected]
Ensure path property: prune edges that could give too small cycles.
Definition at line 240 of file base.hpp.
template<class View , class Offset >
size_t Gecode::Int::Circuit::Base< View, Offset >::dispose ( Space & home ) [inline, virtual]
Member Data Documentation
template<class View, class Offset>
Remember where to start the next time the propagator runs.
Definition at line 63 of file circuit.hh.
template<class View, class Offset>
Array for performing value propagation for distinct.
Definition at line 65 of file circuit.hh.
template<class View, class Offset>
The documentation for this class was generated from the following files:
|
{"url":"https://www.gecode.org/doc/5.1.0/reference/classGecode_1_1Int_1_1Circuit_1_1Base.html","timestamp":"2024-11-13T08:43:18Z","content_type":"text/html","content_length":"20191","record_id":"<urn:uuid:7ac1dc85-d743-4a28-8ad2-9434c413ce8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00054.warc.gz"}
|
Count cells containing text only, excluding text with digits
I need help with a function/formula:
I am looking at counting the number of crew (they are listed by first name) and at the same time exclude any cells with text & digits (names with quantity).
I hope this makes sense - I did some search and cannot find any formula that does the job.
Many thanks,
Salèha El D.
Talks about #Smartsheet #productivity, #eventindustry, and #informationtechnology
➖Continuous improvement – for me, an ongoing desire to enhance processes, and increase productivity ➖
• @Salèha El D. Is the number always on the right?
If yes:
=COUNTIF(Names:Names, IFERROR(VALUE(RIGHT(@cell, 1)), "OK") = "OK")
If no:
=COUNTIF(Names:Names, NOT(OR(CONTAINS("0", @cell), CONTAINS("1", @cell), CONTAINS("2", @cell), CONTAINS("3", @cell), CONTAINS("4", @cell), CONTAINS("5", @cell), CONTAINS("6", @cell), CONTAINS("7", @cell), CONTAINS("8", @cell), CONTAINS("9", @cell))))
Help Article Resources
|
{"url":"https://community.smartsheet.com/discussion/100217/count-cells-containing-text-only-excluding-text-with-digits","timestamp":"2024-11-06T13:53:08Z","content_type":"text/html","content_length":"392552","record_id":"<urn:uuid:498ab16b-d0f6-427e-84d1-517fcde052a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00171.warc.gz"}
|
Investigations on Mechanical Properties of Hybrid Fibre Reinforced High Strength Concrete
Volume 02, Issue 12 (December 2013)
Investigations on Mechanical Properties of Hybrid Fibre Reinforced High Strength Concrete
DOI : 10.17577/IJERTV2IS121114
Download Full-Text PDF Cite this Publication
A. Annadurai, A. Ravichandran, 2013, Investigations on Mechanical Properties of Hybrid Fibre Reinforced High Strength Concrete, INTERNATIONAL JOURNAL OF ENGINEERING RESEARCH & TECHNOLOGY (IJERT)
Volume 02, Issue 12 (December 2013),
• Open Access
• Total Downloads : 626
• Authors : A. Annadurai, A. Ravichandran
• Paper ID : IJERTV2IS121114
• Volume & Issue : Volume 02, Issue 12 (December 2013)
• Published (First Online): 24-12-2013
• ISSN (Online) : 2278-0181
• Publisher Name : IJERT
• License: This work is licensed under a Creative Commons Attribution 4.0 International License
Text Only Version
Investigations on Mechanical Properties of Hybrid Fibre Reinforced High Strength Concrete
1Department of civil engineering, Sathyabama university, Chennai -119. Tami Nadu, India
A. Ravichandran2
2 Department of civil Engineering Christ College of Engineering, Pondicherry.-605010, India
Concrete with a single type of fiber may improve the desired properties to a limited level. A composite is termed as hybrid, if two or more types of fibers are rationally combined to produce a
composite that derives benefits from each of the individual fibers. This paper focuses on the experimental investigation of high strength concrete with steel fibers and combination of steel and
polyolefin fibers (hybrid) by testing of compressive strength, splitting tensile strength of cylinders and flexural strength of prisms. For this ACI 211-4R-93 guide line was followed to design the
high strength concrete of grade M60. Each test the high strength concrete specimens were cast and treated as control specimens, other specimens were cast high strength concrete added with steel
fibers at the volume fraction of 0.5%, 1.0%, 1.5%, and 2.0%. At each volume fraction Steel polyolefin fibers were added at 80% – 20% and 60% – 40% combinations. Test results showed that the
compressive strength, splitting tensile strength and modulus of rupture improved with increasing volume fraction. Regression analyses were done to predict the values of compressive strength,
splitting tensile strength and modulus of rupture of all parameters. The prediction values were matching with the experimental results.
Keywords: High strength concrete, steel fibers, polyolefin fibers, hybrid fibers, Regression analysis.
1. Introduction
High-strength and High-performance concrete are being widely used throughout the world and to produce them it is necessary to reduce the water/binder ratio and increase the binder content.
High-strength concrete means good abrasion, impact and cavitations resistance. Using High strength concrete in structures today would
result in economical advantages. Most applications of high strength concrete to date have been in high-rise buildings, long span bridges and some special structures. Major application of high
strength concrete in tall structures have been in columns and shear walls, which resulted in decreased dead weight of the structures and increase in the amount of the rental floor space in the
lower stories. (V. Bhikshma 2009) Fiber – reinforced cement-based composites which possess the unique ability to flex and self-strengthen before fracturing. This particular class of concrete was
developed with the goal of solving the structural problems inherent with todays typical concrete, such as its tendency to fail in a brittle manner under excessive loading and its lack of
long-term durability. To improve the ductility of High strength concrete, a strategy is to introduce steel or polymeric fibers in high strength which results in development of near isotropic
material with reasonable tensile strength and greater toughness which prevents the limitation and propagation of cracks. It has also been shown recently that by using the concept of hybridization
with two different fibers incorporated in a cement matrix, the hybrid composite can offer more attractive engineering properties because the presence of one fiber enables the more efficient
utilization of the potential properties of the other fiber. However, the hybrid composites studied by previous researchers were focused on hybridization of steel, polypropylene and carbon fibers.
The mechanical properties of hybridization of steel and polyolefin fibers in high strength concrete at different volume fraction have been studied previously are available limited. Therefore the
objective of this paper is to determine the basic properties of hybrid fiber reinforced high strength concrete in terms of compressive, splitting tensile and flexural tests in comparison with the
steel fiber reinforced high strength concrete and plain high strength concrete.
1. Experimental Programs
1. Materials
The cement used in concrete mixes was ordinary Portland cement 53 grade as per IS 12269- 1987. The fine aggregate used was local river sand with specific gravity of 2.40. The coarse
aggregate was crushed stone with size of 10 mm and specific gravity of 2.74 Silica fume obtained from Elkem Materials which improved concrete properties in fresh and hardened states. To
improve the workability of concrete, a high range water reducing admixture Gelenium B233 was used. The fibers used in the study were Hooked end steel as shown in Fig. 1, Polyolefin
straight as shown in Fig. 2, the properties were given by the manufacturers as shown in Table 1.The high strength concrete mix proportions were designed by using ACI 211-4R-1993 guide
lines as shown in Table 2, the material proportions are same for all test, the volume fraction of fibers only vary and consider as study parameter.
Fig. 1 Hooked end steel fiber
Fig. 2 Polyolefin straight fiber
Table 1. Properties of Fiber
Fiber Details
Fibre Properties
Polyolefin Steel
Length (mm) 54 35
Shape Straight Hooked at ends
1.22 x 0.732
Size / Diameter (mm) 0.6 mm
Aspect Ratio 44.26 58.33
Density (kg / m3) 920 7850
Specific Gravity 0.90-0.92 7.8 g/cc
Youngs Modulus (GPa) 6 210
Tensile strength (MPa) 550
Fiber Details
Fibre Properties
Polyolefin Steel
Length (mm) 54 35
Shape Straight Hooked at ends
1.22 x 0.732
Size / Diameter (mm) 0.6 mm
Aspect Ratio 44.26 58.33
Density (kg / m3) 920 7850
Specific Gravity 0.90-0.92 7.8 g/cc
Youngs Modulus (GPa) 6 210
Tensile strength (MPa) 550
Table 2. Concrete Mix Proportions for 1m3 Concrete
Materials Quantity in kg
Cement 468.48
Silica Fume 43.52
Fine Aggregate 594.40
Coarse Aggregate 1037.22
HRWR 6.40
Water 159.50
2. Preparation of test specimens:
For compressive strength, splitting tensile strength 100 x 300 mm cylinders, prisms of size 100 x 100 x 500 mm were used for flexural strength. In the preparation of concrete, coarse
aggregate, fine aggregate cement and silica fume were initially mied in dry state .Next the fibers were added manually and maintain uniform distribution by proper dry mixing operations.
Then Water, the high range water reducing admixture Gelenium B 233 already mixed with 50% of required quantity water was added into the dry mix. The well prepared mix of High strength
concrete, Steel fiber reinforced concrete and Hybrid fiber reinforced concrete specimens were cast with above moulds with proper compaction. The specimens were remolded after 24 hours and
then placed in a curing tank for 28 days.
3. Testing Procedure
The compressive strength test was carried out as per ASTM C 39. The cylinders were loaded at the rate of 0.3N/mm2/s until failure. The Splitting tensile strength test was conducted as per
ASTM C 496. The rate of loading for the test was 0.9 N/mm2 /s. The flexural strength test was carried out as per ASTM C 78. All tests were conducted using 200 T capacity of compression
testing machine. Table 3 Shows the designation of the specimens and its strength test results on high strength concrete (HSC) and the steel fibers high strength concrete (HS0.5 -S100/P0
to HS 2.0 -S100/P0 ) and Hybrid fiber high strength concrete (HS0.5- S80 P20, HS0.5 -S60 P40 to HS2.0- S80/P20, HS2.0 -S60/P40 .).
2. Test Results and Discussion
The effect of fiber volume on compressive strength of High strength concrete, steel fiber reinforced high strength concrete and hybrid fiber reinforced high strength concrete at each volume
fraction as shown in (Fig. 3). The compressive strength effectiveness ranged from2.29% to 11.13% at the volume fraction of 0.5% to 2.0% and no significant improvement at 2.0% volume fraction
of steel fibers compared to 1.5% volume fraction. Strength effectiveness of hybrid fiber combination S80- P20 ranged from 1.47 %.to 9.98 %., fiber combination S60 P40 ranged from 0.65 % to
9.93%. Improvement in compressive strength of hybrid fiber high strength concrete was less than the steel fiber high strength concrete.
Figure 3 Effects fiber volume on Compressive strength
The splitting tensile strength of all the fibrous concrete in this investigation was significantly higher than that of plain concrete even at volume fraction as low as 0.5%. The development
of splitting tensile strength of Steel fiber reinforced high strength concrete and hybrid fiber reinforced high strength concrete at various volume fractions is shown in (Fig. 4).
Figure 4 Effect of fiber volume on Splitting tensile strength
Compared to High strength concrete, the strength improved with increasing the volume fraction. From the strength effectiveness in Table 3, the improvement started from 2.21 % to 61.36 % at
the volume fraction of 0.5% to 2.0% in case of steel fiber reinforced high strength concrete., 22.87 to 91.14 % in case of Hybrid fiber S80 P20 composition 11.46% to 73.84% in case of S60 p40
composition. Improvement in splitting tensile strength of Hybrid fiber HS2.0 S80 P20 combination gave better strength compare with High strength concrete and steel fiber reinforced high
strength concrete.
The Modulus of rupture for high strength concrete steel fiber reinforced high strength concrete and hybrid fiber reinforced high
strength concrete is shown in Fig.5.
Figure 5 Effects of fiber volume fraction on Modulus of rupture
strength effectiveness in Table 3 indicates that modulus of rupture values of all fibrous concrete were significantly higher than that of high strength control concrete. The strength improved
with increasing the volume fraction.
From the strength effectiveness in Table 3, the improvement started from 41.82 % to 76.27 % at the volume fraction of 0.5 % to 2.0 % in case of Steel fiber reinforced high strength concrete ,
43.56 to 82% in case of S80 P20 composition and 34.32% to 78 % in case of S60 % to 40% composition. The composition HS2.0 -S 80 / P20 strength was more than the other fibrous composition of
high strength concrete.
Table 3. Test Results
compressive strength N/mm2 Splitting tensile strength N/mm2 Modulus of rupture N/mm2
Specimen Name Volume Fraction Vf ( % )
Measured value Strength effectiveness ( % ) Measured value Strength Effectiveness ( % ) Measured Value Strength effectiveness ( % )
HSC 0 61.1 0 4.97 0 7.46 0
HS0.5 S100/P0 0.5 62.5 2.29 5.08 2.21 10.58 41.82
HS0.5 S80 P20 0.5 62 1.47 6.39 8.57 10.68 43.16
HS0.5 S60 P40 0.5 61.5 0.65 5.54 11.46 10.02 34.32
HS1.0 S100 P0 1 65 6.38 6.95 39.83 11.82 58.44
HS1.0 S80/P20 1 66.1 8.18 7.13 45.47 12.5 67.56
HS1.0 S60/P40 1 62.85 2.86 6.58 32.39 12.2 63.75
HS1.5 S100/P0 1.5 67.9 11.13 7.67 54.32 12.98 73.99
HS1.5 S80/P20 1.5 66.25 8.43 7.98 60.56 13.15 76.27
HS1.5 S60/P40 1.5 64.2 5.07 7.25 45.87 12.65 69.57
HS2.0 S100/P0 2 67.8 10.97 8.02 61.36 13 76.27
HS2.0 S80/P20 2 67.2 9.98 9.5 91.14 13.58 82
HS2.0 S60/P40 2 66.8 9.33 8.64 73.84 13.28 78
Table 4. Comparison of measured and Predicted values of strengths
Vf Compressive Strength N/mm2 Splitting Tensile Strength N/mm2 Modulus of Rupture N/mm2
( % ) Measured Predicted Error (%) Measured Predicted Error (%) Measured Predicted Error (%)
0.0 61.1 60.52 -0.94 4.97 4.86 -2.12 7.46 7.65 2.53
0.5 62.5 62.48 -0.02 5.08 5.17 1.77 10.58 8.63 -18.41
0.5 62 62.48 0.78 6.39 5.47 -14.26 10.68 9.52 -10.85
0.5 61.5 62.48 5.78 4.51 10.02 10.32 2.94
1.0 65 64.28 -1.09 6.95 6.10 -12.17 11.82 11.01 -6.81
1.0 66.1 64.28 -2.74 7.13 6.41 -9.96 12.5 11.62 -7.04
1.0 62.85 64.28 2.28 6.58 6.73 2.41 12.2 12.13 -0.56
1.5 67.9 65.91 -2.91 7.67 7.06 -7.95 12.98 12.55 -3.33
1.5 66.25 65.91 -0.50 7.98 7.38 -7.46 13.15 12.87 -2.14
1.5 64.2 65.91 2.67 7.25 7.71 6.35 12.65 13.09 3.52
2.0 67.8 67.38 -0.61 8.02 8.03 0.24 13 13.23 1.75
2.0 67.2 67.38 0.27 9.5 8.37 -11.87 13.58 13.27 -2.31
2.0 66.8 67.38 0.87 8.64 8.70 0.76 13.28 13.21 -0.53
3. Regression Analyses
In this study regression analysis was carried out by using Data analysis software Origin 9.0. From the regression analysis compressive strength, Splitting tensile strength and Modulus of
rupture of High strength concrete, steel fiber reinforced high strength concrete and hybrid fiber reinforced high strength concrete values were predicted in terms of fiber volume fraction
(Vf) The values are almost matching the experimental results with minimum percentage of errors which was calculated and tabulated as shown in Table 4. The compressive strength predictions
were obtained from the regression analysis which yields the
equation (1) in this equation (Y = Compressive strength of concrete (fc)), (X = Volume fraction Vf), at the volume fraction Vf is 0.0 the compressive strength of the high strength concrete is
60.52 N/mm2, this value
is very nearer to experimental value with error of 0.94 percent similarly we can predict the other values. The error of steel fiber reinforced concrete was from 0.02 to
2.91 percent, the error of hybrid fiber reinforcement high strength concrete S80 -P20 combinations was
0.78 to 2.91% and S 60 % P40% combination was
0.87 to 2.67.The prediction curve for compressive strength as shown in Fig. 6
Y = -0.438X2 + 4.35X + 60.46 ————- (1) R2 = 0.798
Figure 6 Prediction curve for Compressive strength
Regression analysis gave the equation (2) for predicting the splitting tensile strength (fsp) in terms of volume fraction (Vf). In this equation (Y = Splitting tensile strength (fsp)), (X =
Volume fraction (Vf)). At Vf = 0.0 for high strength concrete , the splitting
tensile strength predicted was 4.86 N/mm2 , it is very
nearer to experimental value and 0.59fc as per ACI
363 R 93.The prediction error was 2.12 percent. Other values are also predicted and compare with experimental results the prediction error was from 0.24 to 14.26 percent.
The prediction curve for splitting tensile strength as shown in Fig. 7
Y = -0.438X2 + 4.35X + 60.46 —– ——- (2) R2 = 0.883
Figure 7 Prediction curve for Splitting tensile strength
The Modulus of rupture predicted using measured values of flexural strength and fiber volume fraction (Vf ) by applying the regression analysis gave equation (3), (Y = Modulus of rupture
(fr)) , (X= Volume fraction (Vf )). In this equation modulus of rupture of high strength concrete was predicated as 7.65 at Vf = 0.0. It is very nearer to the experimental results and 0.94fc
as per ACI 363 93.The predicted values of steel fiber reinforced concrete and hybrid fiber reinforced concrete were shown in Table . 4 and the prediction were shown in Fig. 8.
Y = -1.701X2 + 6.183X + 7.748 ———– (3) R2 = 0.973
Figure 8 Prediction curve for Modulus of rupture
4. Conclusions
The compressive strength of hybrid fiber reinforced high strength concrete, steel fiber reinforced high strength concrete was slightly improved compare with the high strength concrete. But its values
were very less.
The strength effectiveness showed at each volume fraction a maximum for splitting tensile strength, followed by modulus of rupture and compressive Strength.
It is concluded from this investigation that the use of 80 % steel fibers and 20 % Polyolefin fibers at each volume fraction gave optimum mechanical properties. At hybrid fiber volume fraction of
2.0% with 80% – 20% steel-polyolefin Combination has more significant effect on mechanical properties.
Regression analysis gave the prediction values of compressive strength is almost nearer to experimental results, 5. Prediction of splitting tensile strength of steel fiber and hybrid fiber reinforced
high strength concrete was having error percentage from 0.76 to 14.26 compare with the measured value.
Prediction error of Modulus of rupture of hybrid fiber reinforced high strength concrete was 0.53% to 10.85 comparing with the measured value.
The regression equation estimated the strength parameters reasonably nearer to the measured values of high strength concrete, steel fiber reinforced high strength concrete and hybrid fiber reinforced
high strength concrete.
1. American concrete Institute ACI 544.4R-88 (1988) Design considerations for steel Fiber Reinforced concrete
2. Balaguru p, Shah S.P (1992) Fiber reinforced cement composites, McGraw-Hill, Inc.
3. Bhikshma V (2009) Investigations on mechanical properties of high strength silica fume concrete. Asian journal of civil engineering (building and housing) vol. 10, no. 3 -2009 pages 335-346
4. ASTM C -39/C39m-99 (1999) Standard test method for Compressive strength of cylindrical concrete specimens
5. ASTM C 78/94 (1994) Standard Test Method for Flexural Strength of Concrete (Using Simple Beam with Third-Point Loading
6. ASTMC 496M-04 (2004) Standard Test Method for Splitting Tensile Strength of Cylindrical Concrete Specimens
7. Gustavo J. High Performance of Fiber Reinforced Cement Composites: An alternative for Seismic Design of Structures, ACI Structural journal September October 2005
8. Naaman A.E High performance fiber reinforced cement composites classification and applications CBM C1 International workshop, Karachi Pakistan
9. Raikar R.V (2012) Study on strength parameters of steel fiber reinforced high strength concrete International journal of applied sciences and engineering research vol. Issue 4, 2012
10. Sekar A.S.S. (2011) Performance of Hybrid Fiber Reinforced concrete under compression and Flexure NBMCW Dec.2011
International Journal of Engineering Research & Technology (IJERT)
ISSN: 2278-0181
Vol. 2 Issue 12, December – 2013
You must be logged in to post a comment.
|
{"url":"https://www.ijert.org/investigations-on-mechanical-properties-of-hybrid-fibre-reinforced-high-strength-concrete-2","timestamp":"2024-11-05T22:27:52Z","content_type":"text/html","content_length":"88222","record_id":"<urn:uuid:935dc79b-9008-40c9-bba4-5c1c56c0c340>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00165.warc.gz"}
|
The org.orekit.frames package provides classes to handle frames and transforms between them.
Frames Presentation
Frames tree
The Frame class represents a single frame. All frames are organized as a tree with a single root.
Each Frame is defined by a single TransformProvider linking it to one specific frame: its parent frame. This defining transform provider may provide either fixed or time-dependent transforms. As an
example the Earth related frame ITRF depends on time due to precession/nutation, Earth rotation and pole motion. The predefined root frame is the only one with no parent frames.
For each pair of frames, there is one single shortest path from one frame to the other one.
The Transform class represents a full transform between two frames. It manages combined rotation, translation and their first time derivatives to handle kinematics. Transforms can convert position,
directions and velocities from one frame to another one, including velocity composition effects.
Transforms are used to both:
• define the relationship from a parent to a child frames. This transform (tdef on the following scheme) is stored in the child frame.
• merge all individual transforms encountered while walking the tree from one frame to any other one, however far away they are from each other (trel on the following scheme).
The transform between any two frames is computed by merging individual transforms while walking the shortest past between them. The walking/merging operations are handled transparently by the
library. Users only need to select the frames, provide the date and ask for the transform, without knowing how the frames are related to each other.
Transformations are defined as operators which, when applied to the coordinates of a vector expressed in the old frame, provide the coordinates of the same vector expressed in the new frame. When we
say a transform t is from frame A to frame B, we mean that if the coordinates of some absolute vector (say, the direction of a distant star) are uA in frame A and uB in frame B, then uB=
t.transformVector(uA). Transforms provide specific methods for vectorial conversions, affine conversions, either with or without first derivatives (i.e. angular and linear velocities composition).
Transformations can be interpolated using Hermite interpolation, i.e. taking derivatives into account if desired.
Predefined Frames
The FramesFactory class provides several predefined reference frames.
The user can retrieve them using various static methods: getGCRF(), getEME2000(), getICRF(), getCIRF(IERSConventions, boolean), getTIRF(IERSConventions, boolean), getITRF(IERSConventions, boolean),
getMOD(IERSConventions), getTOD(IERSConventions), getGTOD(IERSConventions), getITRFEquinox(IERSConventions, boolean), getTEME(), getPZ9011(IERSConventions, boolean), and getVeis1950(). One of these
reference frames has been arbitrarily chosen as the root of the frames tree: the Geocentric Celestial Reference Frame (GCRF) which is an inertial reference defined by IERS.
For most purposes, the recommended frames are ITRF for terrestrial frame and GCRF for celestial frame. EME2000, TOD, Veis1950 could also be used for compatibility with legacy systems. TEME should be
used only for TLE.
There are also a number of planetary frames associated with the predefined celestial bodies.
IERS 2010 Conventions
One predefined set corresponds to the frames from the IERS conventions (2010). This set defines the GCRF reference frame on the celestial (i.e. inertial) side, the ITRF (International Terrestrial
Reference Frame) on the terrestrial side and several intermediate frames between them. Several versions of ITRF have been defined. Orekit supports several of them thanks to Helmert transformations.
New paradigm: CIO-based transformations (Non-Rotating Origin)
There are several different ways to compute transforms between GCRF and ITRF. Orekit supports the new paradigm promoted by IERS and defined by IERS conventions 2010, i.e., it uses a single transform
for bias, precession and nutation, computed by precession and nutation models depending on the IERS conventions choice and the Earth Orientation Parameters (EOP) data published online by IERS. This
single transform links the GCRF to a Celestial Intermediate Reference Frame (CIRF). The X axis of this frame is the Celestial Intermediate Origin (CIO) and its Z axis is the Celestial Intermediate
Pole (CIP). The CIO is not linked to equinox any more. From CIRF, the Earth Rotation Angle (including tidal effects) is applied to define a Terrestrial Intermediate Reference Frame (TIRF) which is a
pseudo Earth fixed frame. A last transform adds the pole motion (both observed and published in IERS frames and modeled effects including tidal effects) with respect to the Earth crust to reach the
real Earth fixed frame: the International Terrestrial Reference Frame. There are several realizations of the ITRS, each one being a different ITRF. These realizations are linked together using
Helmert transformations which are very small, slightly time-dependent transformations.
The precession-nutation models for Non-Rotating Origin paradigm available in Orekit are those defined in either IERS 1996 conventions, IERS 2003 conventions or IERS 2010 conventions.
In summary, five frames are involved along this path, with various precession-nutation models: GCRF, CIRF, TIRF, ITRF and PZ-90.11.
Classical paradigm: equinox-based transformations
The classical paradigm used prior to IERS conventions 2003 is equinox-based and uses more intermediate frames. It is still used in many ground systems and can still be used with new
precession-nutation models.
Starting from GCRF, the first transform is a bias to convert to EME2000, which was the former reference. The EME2000 frame (which is also known as J2000) is defined using the mean equinox at epoch
J2000.0, i.e. 2000-01-01T12:00:00 in Terrestrial Time (not UTC!). From this frame, applying precession evolution between J2000.0 and current date defines a Mean Of Date frame for current date and
applying nutation defines a True Of Date frame, similar in spirit to the CIRF in the new paradigm. From this, the Greenwich Apparent Sidereal Time is applied to reach a Greenwich True Of Date frame,
similar to the TIRF in the new paradigm. A final transform involving pole motion leads to the ITRF.
In summary, six frames are involved along this path: GCRF, EME2000, MOD, TOD, GTOD and equinox-based ITRF.
In addition to these frames, the ecliptic frame which is defined from the MOD by rotating back to ecliptic plane is also available in Orekit.
The so-called Veis 1950 belongs also to this path, it is defined from the GTOD by the application of a modified sidereal time.
This whole paradigm is deprecated by IERS. It involves additional complexity, first because of the larger number of frames and second because these frames are computed by mixed models with IAU-76
precession, correction terms to match IAU-2000, and a need to convert Earth Orientation data from the published form to a form suitable for this model.
Despite this deprecation, these frames are very important ones and lots of legacy systems rely on them. They are therefore supported in Orekit for interoperability purposes (but not recommended for
new systems).
As the classical paradigm uses the same definition for celestial pole (Z axis) but not the same definition for frame origin (X axis) as the Non-Rotating Origin paradigm, the TOD frame and the CIRF
frame share the same Z axis but differ from each other by a non-null rotation around Z (the equation of the origin, which should not be confused with the equation of the equinox), and the TIRF and
GTOD should be the same frame, at model accuracy level (and of course ITRF should also be the same in both paradigms).
Orekit implementation of IERS conventions
In summary, Orekit implements the following frames:
• those related to the Non-Rotating Origin: GCRF, CIRF, TIRF, ITRF for all precession and nutation models from IERS 1996, IERS 2003 and IERS 2010,
• those related to the equinox-based origin: MOD, TOD, GTOD, equinox-based ITRF for all precession and nutation models from IERS 1996, IERS 2003 and IERS 2010 and Veis 1950.
The frames can be computed with or without Earth Orientation Parameters corrections, and when these corrections are applied, they can either use a simple interpolation or an accurate interpolation
taking sub-daily tidal effects. It is possible to mix all frames. It is for example one can easily estimate the difference between an ITRF computed from equinox based paradigm and IERS 1996
precession-nutation, without EOP and an ITRF computed from Non-Rotating Origin and IERS 2010 precession-nutation, with EOP and tidal correction to the EOP interpolation. This is particularly
interesting when exchanging data between ground systems that use different conventions.
CIO-based transformations
Here is a schematic representation of the partial tree containing the supported IERS frames based on CIO.
Since Orekit uses the new paradigm for IERS frames, the IAU-2006 precession and IAU-2000A nutation model implemented are the complete model with thousands of luni-solar and planetary terms (1600
terms for the x components, 1275 components for the y component and 66 components for the s correction). Recomputing all these terms each time the CIRF frame is used would be really slow. Orekit
therefore implements a caching/interpolation feature to improve efficiency. The shortest period for all the terms is about 5.5 days (it is related to one fifth of the moon revolution period). The
pole motion is therefore quite smooth at the day or week scale. This implies that this motion can be computed accurately using a few reference points per day or week and interpolated between these
points. The trade-off selected for Orekit implementation is to use eight points separated by four hours each. The resulting maximal interpolation error on the frame is about 4e-15 radians. The
reference points are cached so the computation cost is roughly to perform two complete evaluations of the luni-solar and planetary terms per simulation day, and one interpolation per simulation step,
regardless of the step size. This represents huge savings for steps shorter than one half day, which is the rule in most application (step sizes are mostly of the range of a few tens of seconds).
Note that starting with Orekit 6.0, this caching feature is thread-safe.
Tidal effects are also taken into account on Earth Rotation angle and on pole motion. The 71-terms model from IERS is used. Since this model is also computing intensive, a caching/interpolation
algorithm is also used to avoid a massive effect on performance. The trade-off selected for Orekit implementation is to use 8 points separated by 3/32 day (135 minutes) each. The resulting maximal
interpolation error is about 3 micro-arcseconds. The penalty to use tidal effects is therefore limited to slightly more than 20%, to be compared with the 550% penalty without this mechanism.
Equinox-based transformations
Here is a schematic representation of the partial tree containing the supported IERS frames based on equinox
The path from EME2000 to Veis1950, involving the MOD, TOD and GTOD without EOP correction, is devoted to some legacy systems, whereas the MOD, TOD and GTOD with EOP correction are for compatibility
with the IERS 2003 convention. The gap between the two branches can reach a few meters, a rather crude accuracy for many space systems.
The same kind of optimization used for the IAU-2006 precession and IAU-2000A nutation model are also applied for the older IAU-1980 precession-nutation model, despite it is much simpler.
Solar system frames
All celestial bodies are linked to their own body-centered inertial frame, just as the Earth is linked to EME2000 and GCRF. Since Orekit provides implementations of the main solar system celestial
bodies, it also provides body-centered frames for these bodies, one inertially oriented and one body oriented. The orientations of these frames are compliant with IAU poles and prime meridians
definitions. The predefined frames are the Sun, the Moon, the eight planets and the Pluto dwarf planet. In addition to these real bodies, two points are supported for convenience as if they were real
bodies: the solar system barycenter and the Earth-Moon barycenter ; in these cases, the associated frames are aligned with EME2000. One important case is the solar system barycenter, as its
associated frame is the ICRF.
Earth Frames
As explained above, the IERS conventions define Earth frames, the ITRF frames. Depending on which Earth Orientation Parameters are loaded at run time by users, the ITRF frame computed may be an older
or a newer one. If EOP parameters are loaded from EOP C04 14 files, the ITRF will be ITRF 2014, whereas if EOP parameters are loaded from EOP C04 08, or from Bulletin A or from Bulletin B, the ITRF
be be ITRF 2008. When IERS will update its references, the ITRF products change accordingly. Orekit knows about these changes and always allows to convert from one ITRF to any other ITRF or to get a
specific ITRF version even if the loaded EOP were related to one or several different ITRF versions. As an example, one can load yearly EOP C04 files with some files referring to ITRF 2008, some
other files referring to ITRF 2014, and use this mixed history to compute ITRF 2020: Orekit will manage the required Helmert conversions for each date internally, users do not need to bother about
this. All ITRF versions since 1988 are supported. As of Orekit 12.0, it corresponds to ITRF88, ITRF89, ITRF90, ITRF91, ITRF92, ITRF93, ITRF94, ITRF96, ITRF97, ITRF2000, ITRF2005, ITRF2008, ITRF2014
and ITRF2020.
Topocentric Frame
This frame model allows defining the frame associated with any position at the surface of a body shape, which itself is referenced to a frame, typically ITRF for Earth. The frame is defined with the
following canonical axes:
• zenith direction (Z) is defined as the normal to local horizontal plane;
• north direction (Y) is defined in the horizontal plane (normal to zenith direction) and following the local meridian;
• east direction (X) is defined in the horizontal plane in order to complete direct triangle (east, north, zenith).
In such a frame, the user can retrieve azimuth angle, elevation angle, range and range rate of any point given in any frame, at given date.
Local Orbital Frame
Local orbital frames are bound to an orbiting spacecraft. They move with the spacecraft so they are time-dependent. Two local orbital frames are provided: the (t, n, w) frame and the (q, s, w) frame.
The (t, n, w) frame has its X axis along velocity (tangential), its Z axis along orbital momentum and its Y axis completes the right-handed trihedra (it is roughly pointing towards the central body).
The (q, s, w) frame has its X axis along position (radial), its Z axis along orbital momentum and its Y axis completes the right-handed trihedra (it is roughly along velocity).
User-defined frames
The frames tree can be extended by users who can add as many frames as they want for specific needs. This is done by adding frames one at a time, attaching each frame to an already built one by
specifying the TransformProvider from parent to child.
Transforms may be constant or varying. For simple fixed transforms, using directly the FixedTransformProvider class is sufficient. For varying transforms (time-dependent or telemetry-based for
example), it may be useful to define specific providers that will implement getTransform(AbsoluteDate).
A basic example of such an extension is to add a satellite frame representing the satellite motion and attitude. Such a frame would have an inertial frame as its parent frame (GCRF or EME2000) and
the getTransform(AbsoluteDate) method would compute a transform using the translation and rotation from orbit and attitude data.
Frames transforms are computed by combining all transforms between parent frame and child frame along the path from the origin frame to the destination. This implies that when one TransformProvider
locally changes a transform, it basically moves not only the child frame but all the sub-trees starting at this frame with respect to the rest of the tree. This property can be used to update easily
complex trees without bothering about combining transforms oneself. The following example explains a practical case.
This case is an improvement of the basic previous extension to manage orbit and attitude. In this case, we introduce several intermediate frames with elementary transforms and need to update the
whole tree. We also want to take into account the offset between the GPS receiver antenna and the satellite center of mass. When a new GPS measurement is available, we want to update the complete
left subtree. This is done by using the dedicated UpdatableFrame which will do all the conversions.
|
{"url":"https://www.orekit.org/site-orekit-development/architecture/frames.html","timestamp":"2024-11-08T05:58:57Z","content_type":"application/xhtml+xml","content_length":"26332","record_id":"<urn:uuid:426e086f-39d7-430f-a5b9-3062170e90c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00444.warc.gz"}
|
Acoustic scattering by mildly rough unbounded surfaces in three dimensions
Chandler-Wilde, S. N. ORCID: https://orcid.org/0000-0003-0578-1283, Heinemeyer, E. and Potthast, R. ORCID: https://orcid.org/0000-0001-6794-2500 (2006) Acoustic scattering by mildly rough unbounded
surfaces in three dimensions. Siam Journal on Applied Mathematics, 66 (3). pp. 1002-1026. ISSN 0036-1399
Full text not archived in this repository.
It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.
For a nonlocally perturbed half- space we consider the scattering of time-harmonic acoustic waves. A second kind boundary integral equation formulation is proposed for the sound-soft case, based on a
standard ansatz as a combined single-and double-layer potential but replacing the usual fundamental solution of the Helmholtz equation with an appropriate half- space Green's function. Due to the
unboundedness of the surface, the integral operators are noncompact. In contrast to the two-dimensional case, the integral operators are also strongly singular, due to the slow decay at infinity of
the fundamental solution of the three-dimensional Helmholtz equation. In the case when the surface is sufficiently smooth ( Lyapunov) we show that the integral operators are nevertheless bounded as
operators on L-2(Gamma) and on L-2(Gamma G) boolean AND BC(Gamma) and that the operators depend continuously in norm on the wave number and on G. We further show that for mild roughness, i.e., a
surface G which does not differ too much from a plane, the boundary integral equation is uniquely solvable in the space L-2(Gamma) boolean AND BC(Gamma) and the scattering problem has a unique
solution which satisfies a limiting absorption principle in the case of real wave number.
Item Type: Article
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Mathematics and Statistics
ID Code: 4928
Uncontrolled boundary integral equation method rough surface scattering Helmholtz equation PERTURBED HALF-PLANE HARMONIC MAXWELL EQUATIONS INTEGRAL-EQUATION HELMHOLTZ-EQUATION ELECTROMAGNETIC
Keywords: SCATTERING INVERSE SCATTERING GRID METHOD
Deposit Details
University Staff: Request a correction | Centaur Editors: Update this record
|
{"url":"https://centaur.reading.ac.uk/4928/","timestamp":"2024-11-04T04:40:23Z","content_type":"application/xhtml+xml","content_length":"33913","record_id":"<urn:uuid:3e801ec1-b548-453d-b5f1-c490bb12c372>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00765.warc.gz"}
|
What is a Neural Network?
Neural networks are important! They diagnose illnesses, help you find photos of your cat, decide whether to give you a loan^1. They make up a huge part of what we call “machine learning” or
“artificial intelligence,” especially the new, exciting, scary parts of it. And, given the magnitude of the problems they can handle, they’re simpler than you’d expect.
Most explanations of neural networks for lay people are kind of lousy, because they always start by talking about brains. They’ll define neural networks as “computer systems modeled on the human
brain” or “programs that learn the same way as people” or even as “artificial brains.” This makes neural networks sound really cool and sexy and also like they might become sentient and destroy us at
any moment. But it doesn’t give you any intuition for what they actually do, because neural networks aren’t really modeled on brains. They’re loosely inspired by brains, the same way Hollywood movies
are “inspired by true events.” I really dislike the comparison to brains, because it creates a completely unnecessary air of mystery around a system that’s basically just a lot of math. So I wanted
to write an introduction to neural networks that demystifies them a bit.
This blog post explains what a neural network is, in a way that assumes no math background and contains no brain metaphors. Without all the math, it won’t be a very complete or very precise
explanation, but I hope it will still give you a decent sense of what neural networks do. At each step, I’ll fill in some of the math-y details in a separate section; feel free to read it or skip it.
Defining the Task
In a typical machine learning task, you’re trying to find a mathematical formula, or model, that takes in some information and uses it to answer a specific question. For example, you might train a
model to:
In this context, the machine isn’t learning to speak or play chess or play the violin, it’s just “learning” a formula that you plug numbers into and get numbers out of. The hard part, of course, is
figuring out exactly what formula to use - this is called training the model. This blog post focuses on supervised machine learning, where we train the model by feeding it a bunch of examples that
have already been labelled with the correct answer. (You can also use neural networks for unsupervised learning, which uses unlabelled data, but that’s beyond the scope of this blog post.)
You can think of training a model as trying to come up with the best formula to predict something. You can also think of it as finding the line of best fit for your data:
Training the Model
One common technique for training a supervised learning model is an algorithm called gradient descent. Here’s the gist of it:
1. Get some sample data that includes both the input and the answer you want for the question you care about.
2. Make an initial guess about what formula to use.
3. Run your formula on all your training data to get a bunch of predicted answers.
4. Calculate how close your predicted answers were to the real answers - the error.
5. Make some tiny adjustments to your formula to reduce the error a bit.
6. Repeat steps 3-5 until the error is pretty small.
Say you’re throwing a dinner party and you need to know how much wine to buy. You could just consult Emily Post, but instead, you decide to use MACHINE LEARNING. Let’s go through the steps above:
1. You need training data. So you call three friends who just threw really great dinner parties, and you ask them how many people they invited and how much wine they bought.
Number of Guests Bottles of wine consumed
2. You assume that on average, each person will drink about the same amount of wine, so you can model how much to buy with the following formula:
\[\textrm{Number of bottles} = A \times \textrm{Number of guests}\]
Our goal is to find the best value of A. You make an initial guess of one bottle of wine per person, so you set A=1, which gives you:
\[\textrm{Number of bottles} = 1 \times \textrm{Number of guests}\]
3. For each dinner party in your dataset, you use your formula to calculate how much wine to buy:
Number of guests Predicted bottles of wine
4. Now compare your prediction to the correct answers from your training set, and see how close you were:
Number of guests Predicted bottles of wine Actual bottles of wine Error
5. This formula overestimated how much wine you need, so you reduce A to 0.8. The new formula is: \(\textrm{Number of bottles} = 0.8 \times \textrm{Number of guests}\)
6. You try again with your new formula:
Number of guests Predicted bottles of wine Actual bottles of wine Error
3 2.4 2 0.4
That’s better, but you’re still overestimating how much wine to buy, so you reduce A a bit more. As you get closer and closer to the right value for A, you’ll make smaller and smaller
adjustments, so you don’t overshoot the best answer. You’ll continue this process until you have a pretty good model, which in this case will be around A=0.67. Then you can figure out how much
wine you need and throw a fabulous party. 🍷
In this example we sort of eyeballed how to adjust A. Real machine learning programs will automatically calculate the error and adjust the model parameters, repeating this process hundreds or
thousands of times.
Neural Networks vs. Other Models
The example above used a linear model: for every extra person you invite, the amount of wine you need goes up by a fixed amount. Depending on the parameters you find during gradient descent, your
linear model might look like any of these^2:
But no matter what parameters you use, it will always be a straight line.
But a linear equation might not always fit your data well. The best model for your data might be quadratic:
Or logarithmic^3:
…or something else entirely. The point is, this approach works well if you can choose an appropriate model, one that’s basically the correct shape. But to choose an appropriate model, you already
need to understand how your inputs relate to the correct answer. You can sort of eyeball this on a graph if you have one input, but not if you have a hundred.
Let’s go back to the dinner party example. Maybe you didn’t get nice linear data - maybe you can’t see any correlation between the size of a party and how much wine was consumed. So you call your
friends back and ask them about their parties. It turns out that a lot of factors go into dinner party planning. Of course, some people drink and others don’t. But there are other complications. Your
Aunt Agatha glares disapprovingly at drinkers; if she is invited, everyone will stick to ginger ale. Your Aunt Dahlia is the life of the party; if she’s invited, everyone will have more than usual.
(The question of why your aunts are invited to your friends’ dinner parties is beyond the scope of this blog post.) Last year, two of your friends had a nasty divorce and everyone took sides; if
people from both factions come to the party, at least one faction will leave in a huff before you’ve served the appetizers. And that’s just the tip of the iceberg - there may be dozens or hundreds of
similar factors at play that you don’t even know about.
This is the sort of problem neural networks are good at. They’re flexible enough to fit any relationship between your inputs and output, so you don’t need to understand how they relate ahead of time.
And they can figure out how different combinations of inputs will impact your output. A particular guest at your dinner party will have different impacts on different groups of people - a neural
network can suss out the different rules for all these different interactions.
The math behind them isn’t complicated - it’s just function composition, calling functions on the outputs of other functions. So, where the example above looked like this:
$$ \textrm{Answer} = \textrm{MATH}(x_1, x_2, \cdots, x_n) $$
Then a neural net is just this:
$$ \textrm{Answer} = \textrm{MATH}(\textrm{MATH}(x_1, x_2, \cdots, x_n) + \textrm{MATH}(x_1, x_2, \cdots, x_n) + \cdots) $$
Each of the “math” boxes above is called a neuron. The idea is that each neuron in the first layer (the orange column in the figure) will calculate some relevant information, like “are people from
Friend Group A and Friend Group B present?” directly from the inputs you give it. Each bit of relevant information is a feature. Then the neuron in the last layer uses all these features to calculate
the answer you really care about. You don’t know in advance what features are important; the neural network will figure them out when you train it.
Neural networks work because individual neurons can capture information about the interactions between inputs, instead of trying to understand each input in isolation. In many machine learning
problems, you can’t learn a whole lot from looking at one input at a time; you have to consider how your different inputs interact. At your dinner party, the presence or absence of a single guest can
only tell you so much; the social dynamics between guests dictate how much wine they drink. Or think of a photograph: you can’t learn anything from the color of a single pixel. You have to look at
the contrast between pixels to make out edges, shapes, and objects.
Let’s build a neural network for our dinner party example. Suppose you have three people you could invite to a dinner party: your aunts Agatha and Dahlia, and your best friend Stiffy. Since just the
number of guests doesn’t give you enough information, you need to use the whole guest list as an input. Let’s represent it as a list of numbers, where each position in the list represents one
possible guest, in this order:
A zero means the friend is not at the party, and a one means they are at the party. So if only Agatha is coming, the list will be:
And if Dahlia and Stiffy are coming, it will be:
Now you need to figure out what the first layer of neurons should calculate. You’ll need a function called ReLU (short for rectified linear unit), which is almost identical to the linear function
from the earlier example, except it flattens out at zero if its input is ever negative:
\[\textrm{ReLU}(n) = \bigg\{ & n \textrm{ if } n > 0 \\ & 0 \textrm{ if } n \leq 0 \]
Here’s a slightly more precise version of the neural network diagram above:
Based on this diagram, you have two layers in your neural network. The orange boxes make up layer one. We’ll label the outputs of the three neurons in this layer \(\textrm{neuron}_1\), \(\textrm
{neuron}_2\), and \(\textrm{neuron}_3\). Each neuron will have its own set of parameters \(a_i\), \(b_i\), \(c_i\), and \(d_i\). You’ll need to figure out the values of these parameters when you
train the model. Then each neuron will calculate:
\[\textrm{neuron}_i = \textrm{ReLU}(a_i \times \textrm{Agatha} + b_i \times \textrm{Dahlia} + c_i \times \textrm{Stiffy} + d_i)\]
So, for example, if only Stiffy is coming to your party, \(\textrm{neuron}_2\) would calculate:
\[ \textrm{neuron}_2 & = \textrm{ReLU}(a_2 \times \textrm{Agatha} + b_2 \times \textrm{Dahlia} + c_2 \times \textrm{Stiffy} + d_2) \\\\ & = \textrm{ReLU}(a_2 \times 0 + b_2 \times 0 + c_2 \times 1 +
d_2) \\\\ & = \textrm{ReLU}(c_2 + d_2) \]
The second layer of the network has only one neuron - \(\textrm{neuron}_4\) in the diagram above. This neuron will have three more parameters \(w\), \(x\), \(y\), and \(z\), which you’ll also need to
find by training the model. It will calculate:
\[\textrm{Number of bottles} = \textrm{ReLU}(w \times \textrm{neuron}_1 + x \times \textrm{neuron}_2 + y \times \textrm{neuron}_3 + z)\]
If you put it all together, the formula will be:
\[ \textrm{Number of bottles} = \textrm{ReLU}( & w \times\textrm{ReLU}(a_1 \times \textrm{Agatha} + b_1 \times \textrm{Dahlia} + c_1 \times \textrm{Stiffy} + d_1) + \\\\ & x \times\textrm{ReLU}(a_2 \
times \textrm{Agatha} + b_2 \times \textrm{Dahlia} + c_2 \times \textrm{Stiffy} + d_2) + \\\\ & y \times\textrm{ReLU}(a_3 \times \textrm{Agatha} + b_3 \times \textrm{Dahlia} + c_3 \times \textrm
{Stiffy} + d_3) + \\\\ & z) \]
Now, just like in the last section, you need to figure out your parameters: \(a_1\), \(b_2\), \(x\), \(y\), \(z\), and so on. It turns out that you can use exactly the same system as when you were
just looking at number of guests; the only difference is that you’ll first do the “tiny adjustments” step for the last layer, then work your way backwards to first layer. This system of making
corrections one layer at a time is called backpropagation of errors.
So the new training process is:
1. Convince your friends to give you the guest lists for all their parties.
2. Choose random values for every parameter to create an initial model.
3. Run all the guest lists through the model.
4. See whether the model over- or under-estimated how much wine was needed for each guest list.
5. Adjust \(w\), \(x\), \(y\), and \(z\) by a little to reduce the error. This means you need to figure out whether each individual neuron tends to overestimate or underestimate. For example, if the
model tends to overestimate when the first neuron spits out a big value, you need to make \(w\) a bit smaller; but if you mostly overestimate when the second neuron is big, then you need to
decrease \(x\) more than \(w\).
6. Adjust \(a_1\), \(b_1\), …, \(c_3\), \(d_3\) by a little to reduce the error. You can re-use the information from step 5 about when individual neurons over- or under-estimate. You might find that
\(\textrm{neuron}_1\) tends to overestimate when Agatha is invited to a party, but underestimate when Dahlia is invited; then you’d want to decrease \(a_1\) and increase \(b_1\). That will help
correct \(\textrm{neuron}_1\), which will make the final answer a little more accurate.
7. Repeat steps 3 - 6 until the error is pretty small. Like in the last example, your program will automatically run the model, calculate the error, and adjust all the parameters hundreds or
thousands of times.
When you’re finally done training the model, you might end up with something close to this:
\[ \textrm{Number of bottles} = \textrm{ReLU}( & 0.5 \times\textrm{ReLU}(0 \times \textrm{Agatha} + 0 \times \textrm{Dahlia} + 1 \times \textrm{Stiffy}) + \\\\ & 2 \times\textrm{ReLU}(0 \times \
textrm{Agatha} + 1.5 \times \textrm{Dahlia} + 0.4 \times \textrm{Stiffy} + - 0.5) + \\\\ & -5 \times\textrm{ReLU}(1 \times \textrm{Agatha} + 0 \times \textrm{Dahlia} + 0 \times \textrm{Stiffy}) + \\\
\ & 1) \]
In this model, each neuron encodes something about people’s drinking habits:
• Neuron 1 mean that, left to her own devices, Stiffy will have about half a bottle of wine.
• Neuron 2 means that when Dahlia is around, she’ll have a couple bottles, and Stiffy will have more than usual.
• Neuron 3 means that when Agatha is present, nobody drinks anything.
• And that \(+ 1\) at the end, our \(z\) value, means that if nobody shows up you’ll have a bottle of wine by yourself.
And that, more or less, is how neural networks can help you plan a dinner party.
If you notice any mistakes in this blog post, please email me! This post mostly rehashes the Coursera course on neural networks and deep learning, so if you enjoyed this post you might want to take
the course.
In an earlier version of this post, the error in the “Math-y Details” section was calculated incorrectly. Thanks to KGruel for the correction!
^2 Attributions: left graph by Nicholas Longo (Derived from imaged in work by Jim Hefferon) [CC BY-SA 2.5 (https://creativecommons.org/licenses/by-sa/2.5)], via Wikimedia Commons. Middle graph by
Jsmura - Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=34406311. Right graph by Krishnavedaladerivative work: Cdang - This file was derived fromLinear least squares
example2.svg:, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=25430567 ↩
^3 Graph by Adrian Neumann (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html), CC-BY-SA-3.0 (http://creativecommons.org/licenses/by-sa/3.0/) or CC BY-SA 2.5-2.0-1.0 (https://creativecommons.org/
licenses/by-sa/2.5-2.0-1.0)], via Wikimedia Commons ↩
|
{"url":"https://norasandler.com/2017/10/20/What-is-a-Neural-Network.html","timestamp":"2024-11-13T18:38:09Z","content_type":"text/html","content_length":"33400","record_id":"<urn:uuid:e8d86e9c-0508-4faf-9e20-8c2209835f4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00390.warc.gz"}
|
Unveiling the Area of a Triangle: A Comprehensive Guide
Unveiling The Area Of A Triangle: A Comprehensive Guide
The area of a triangle is a fundamental concept in geometry, representing the amount of two-dimensional space it occupies. To calculate this area, one needs to understand the concepts of base (length
of the triangle’s bottom edge) and height (the perpendicular distance from a vertex to the base). The area of a triangle is given by the formula: (1/2) x base x height. This formula allows us to
determine the area of any triangle, provided we know its base and height. Understanding the area of a triangle is essential in various mathematical applications, such as finding the area of more
complex shapes or solving geometry problems.
• Importance of understanding the area of a triangle in geometry.
The Importance of Understanding the Area of a Triangle
In the realm of geometry, comprehending the area of a triangle is paramount. It’s a fundamental concept that underpins countless applications in both the world of mathematics and beyond.
Consider the architect meticulously crafting a building’s blueprint, the engineer calculating the load-bearing capacity of a bridge, or the artist designing an intricate mosaic. In each of these
scenarios, a thorough understanding of the area of a triangle is essential for achieving accurate and aesthetically pleasing results.
Moreover, the area of a triangle serves as a building block for many other geometric concepts. It’s the foundation upon which we derive formulas for trapezoids, parallelograms, and even circles. By
grasping this concept, we unlock a gateway to a deeper understanding of the geometric world.
Essential Concepts:
• Area of a Triangle: Definition and formula (1/2) x base x height.
Understanding the Area of a Triangle
Geometry is a vast and fascinating branch of mathematics, and comprehending the area of a triangle is a fundamental concept within this realm. The area of a triangle represents the expanse of its
two-dimensional surface, providing valuable insights into shape and measurement.
The Formula: A Cornerstone of Triangle Geometry
At the heart of triangle area calculations lies a crucial formula: (1/2) x base x height. This formula serves as the cornerstone for determining the area, where base refers to the triangle’s bottom
edge and height signifies the perpendicular distance from the base to the opposite vertex.
Base and Height: Key Components of Area
The base of a triangle acts as its foundation, defining the length of the triangle’s bottom side. Accurately measuring the base is essential for precise area calculations. Similarly, the height of a
triangle, often referred to as its altitude, is the perpendicular distance from the base to the vertex lying opposite. Understanding the height’s role is equally crucial in determining the triangle’s
Calculating the Area: Putting the Formula into Practice
Applying the formula (1/2) x base x height involves a straightforward process. Begin by identifying the base and height of the triangle. Then, multiply the base by the height and divide the result by
two. This calculation provides the numerical value of the triangle’s area. For instance, a triangle with a base of 6 units and a height of 4 units would have an area of (1/2) x 6 x 4 = 12 square
Understanding the Base of a Triangle: A Guide to Measurement
When delving into the realm of geometry, understanding the fundamental components of shapes is crucial. The area of a triangle is a key concept that relies heavily on the triangle’s base, and
comprehending how to measure it is essential for precise calculations.
Defining the Base of a Triangle
The base of a triangle is the bottom side, or the horizontal line upon which the triangle rests. It is typically represented by the letter b in formulas.
Measurement Technique
Measuring the base of a triangle is straightforward. Simply use a ruler or measuring tape to determine the length of the horizontal side. Ensure that the measurement is taken parallel to the opposite
vertex, which is the point at the top of the triangle.
Example Measurement
Suppose we have a triangle with a base of 10 cm. To measure it, we place the ruler along the bottom side and align the zero mark with one end of the base. We then read the measurement at the other
end, which in this case, is 10 cm.
Importance of Base Measurement
The base of a triangle is a critical component in calculating its area. Accurate measurement of the base, along with the height of the triangle, enables us to determine the area using the formula:
Area = (1/2) x Base x Height
Understanding the concept of the base of a triangle and its measurement technique is fundamental in geometric calculations. By accurately measuring the base and incorporating it into the area
formula, we can determine the area of a triangle with precision, unlocking further discoveries and applications in geometry and beyond.
Understanding the Height of a Triangle: A Key Concept in Geometry
As we delve into the realm of geometry, we encounter one of its fundamental elements: the triangle. Understanding its area is crucial for various mathematical applications and everyday situations. In
this exploration, we will focus on a pivotal concept that plays a significant role in calculating the area of a triangle: its height.
The height of a triangle is the perpendicular distance from a vertex (corner) to the opposite side, or base. It is often denoted by the letter h in geometrical equations. Measuring the height can be
done by drawing a perpendicular line segment from the vertex to the base and measuring its length.
To illustrate this, imagine a right triangle with a vertex at point A, a base along line segment BC, and a perpendicular line segment drawn from point A to the midpoint of BC. The length of this
perpendicular line segment represents the height of the triangle.
In geometry, triangles are often classified based on the lengths of their sides or the measures of their angles. However, the height remains a crucial aspect for all types of triangles, as it is an
essential factor in determining their area. In the next section, we will delve into the significance of the height and how it contributes to the calculation of a triangle’s area.
Calculating the Area of a Triangle: Unlocking Geometric Precision
When venturing into the realm of geometry, understanding the area of a triangle is a fundamental skill. This knowledge plays a pivotal role in various applications, from calculating the size of a
piece of land to determining the volume of a prism.
To embark on this journey of triangle area calculation, let’s delve into the essential formula: Area = (1/2) x Base x Height. The base and height are crucial measurements that we need to determine
The base of a triangle is the “bottom” side of the triangle, upon which it rests. Measuring the base is straightforward: simply use a ruler or measuring tape to determine its length.
The height of a triangle, on the other hand, is the distance from the vertex (the point opposite the base) to the base. This distance can be measured perpendicularly, using a ruler or protractor to
ensure precision.
Example Calculation:
Let’s put this formula into practice. Suppose we have a triangle with a base of 10 cm and a height of 5 cm. Plugging these values into our formula, we get:
Area = (1/2) x 10 cm x 5 cm
Area = (1/2) x 50 cm²
Area = 25 cm²
Therefore, the area of the triangle is 25 square centimeters.
Mastering the calculation of triangle area is an essential step in unlocking the mysteries of geometry. This knowledge empowers us to solve complex problems and tackle real-world applications with
|
{"url":"https://www.pattontheedge.ca/unveiling-triangle-area/","timestamp":"2024-11-04T10:49:04Z","content_type":"text/html","content_length":"152405","record_id":"<urn:uuid:cccd4f07-ce23-4013-a777-366e9a92ae12>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00549.warc.gz"}
|
Ett av PopAtomic Studios förslag på hur man kan dekorera kyltornen på ett kärnkraftverk. Klicka på bilden för att se det tråkiga grå originalet.
Enligt Folkkampanjens ordförande Solveig Ternström så är kvinnor som har en positiv inställning till kärnkraft inte riktiga kvinnor. Denna åsikt uttryckte hon klart och tydligt i SVT Debatt under den
första veckan av Fukushimakatastrofen i mars 2011. Det här är bara en av många demoniserande bilder som vissa kärnkraftsmotståndare försöker klistra på oss som inte delar deras syn på kraftslaget: Vi
är män, teknokrater, röstar alltid höger, kallsinnigt beräknande, bryr oss inte om miljön, köpta av kärnkraftsindustrin, onda, osv (vi har under det senaste året även blivit kallade avsevärt värre
saker än detta). Eller så är vi bara lite korkade eller vilseledda och förstår inte de stora sammanhangen…
Därför är det alltid välkommet med exempel som bryter mot denna bild. Idag tar vi upp konstnären Suzy Hobbs Baker som grundat projektet PopAtomic Studios (se även deras Facebooksida), vars syfte är
att sprida information om energiformen kärnkraft med hjälp av snygga och informativa (och ibland provocerande) postrar, logos, konstverk och dekorationer. Deras bilder är fria att distribuera så
länge som man anger var de kommer ifrån, och Suzy och hennes vänner är alltid öppna för förslag till nya alster.
Så vad är det som driver henne till denna verksamhet? Man kan sammanfatta det som så att hon fick tillfälle att ifrågasätta den farliga bild som uppmålats för henne i skolan, och kom fram till att
den inte stämde med verkligheten. Så hon vill försöka bemöta den bilden med hjälp av sitt konstnärskap. Men hon förklarar det hela så mycket bättre på egen hand, nedan är ett TEDx-talk som hon
nyligen höll. Och ja Solveig, Suzy är inte bara konstnär, hon är kvinna också.
Youtubeklippet är taget från denna länk.
Här är ett par andra weblänkar av och om Suzy Hobbs Baker:
8 Comments
2011 was in many ways a depressing year for nuclear energy. The Tōhoku earthquake and tsunami that caused the Fukushima accident changed many things. Germany’s political leaders went
into unreasonable panic and Merkel immediately ordered the shut down of 10 reactors and soon followed that up by a decision to reinstate the previous nuclear phase out plans. With a bit of humor one
can state that the Japanese Tōhoku earthquake and tsunami permanently shut down more reactors in Germany than it did in Japan. Switzerland also decided on a phase out plan that is in many ways
similar to the phase out plan Sweden was following for many years. Maby the swiss will look at Sweden and learn how badly it worked, only time will tell. Fukushima also temporarily put a break on the
rapid nuclear build in China and the future of nuclear in Japan is very unsure.
But this blog post is not about the bad things that happened in 2011, instead we will look at the good things that happened! Here comes a partial list of things that makes us all raise our glasses in
cheers of a promising future.
The biggest good news is the rationality that most countries showed after the Fukushima accident. Germany’s panic didn’t spread and most countries that had plans to expand, renew or start a new
nuclear fleet has publicly stated they will stick to the plans. In China, without a doubt the most important country for new nuclear projects, their ambitious program has only been downsized slightly
and the focus has been shifted towards generation 3 reactors like AP1000 rather than the indigenous generation 2 designs. Plans in the US seems to be going straight ahead after the recent approval by
NRC of the AP1000 design and the developing countries are one by one embracing nuclear as a clean and safe energy source for the future. We applaud the maturity most government have shown despite
the, at times, ridiculous media coverage.
The year has been an exciting one for small modular reactors (reactors with an electric power less than 300 MW). NuScale was brought back from the brink of bankruptcy by an hefty investment from the
engineering company Flour, one of the largest engineering firms in the US. The NuScale design is quite interesting and innovative and I encourage everyone to check out their homepage and have a look.
Babcock and Wilcox and their 125MWe mPower design seems to be steaming on right ahead with a cooperation announced with TVA to build 6 reactors at their Clinch river site. The first unit is supposed
to be constructed by 2020 and we hope that ambitious time plan will hold up. All depends on the ponderous NRC review process.
B&W’s mPower within its containment structure
Westinghouse doesn’t want to be left in the dust on the small modular market and they presented their own design this year, abandoning their earlier IRIS modular project. The new small reactor
is about double the size of mPower at 225 MWe. The whole reactor will be sited underground (a common feature of many small modular reactors) and construction time is projected to be 18 months.
China, not surprisingly, also has a modular PWR in the works, a 100-150MWe design, I haven’t read much about it but it is going to be an interesting fight on what modular PWR will hit the market
first. If I was to make a bet then I bet on the Chinese, due to the slow pace of NRC. But mPower sure looks promising and B&W has long experience with submarine reactors which should speed up their
development process significantly.
All modular reactors however aren’t light water reactors. There are also several generation 4 designs in the works and news have popped up on several of the during 2011. Bill Gates have several time
made the news discussing the traveling wave reactor concept that is being developed by Terrapower, with Gates as one of the biggest investors. The latest information is that Gates was in talks with
China about the reactor. The traveling wave concept is a cool one, the basic idea is that one has a fairly large core where most of it is subcritical and composed out of depleted uranium. In the
center, or at one edge of the core depending on design, one “ignites” the core with a load of highly enrichment uranium. The area closest to the critical zone will slowly get it is depleted uranium
converted to plutonium and become critical while the starting critical zone slowly gets depleted. The whole thing is a fast spectrum reactor with liquid metal coolant so it is capable of
breeding. In this fashion a criticality wave travels through the reactor over a time span of say 50 years, continuously producing power. The appeal of the design is that one can basically bury the
whole thing, push the on button and then walk away and let it produce power for decades without any need for refueling or major maintenance. The reactor is still in the basic design stage at this
point in time and god knows what roadblocks Terrapower will stumble upon. But it is very heartening to see a man like Gates involved and if China gets interesting things can move on quickly.
When talking about China, China is already building a generation 4 modular reactor, the Chinese version of the pebble bed reactor. I worked for a year with Pebble bed reactors and it is a very
interesting type of reactor. They don’t have the high fuel utilization of fast breeders but they have plenty of other perks, most of all it’s passive safety. A pebble bed reactor is as close to idiot
proof as even the most gifted idiot can imagine. The fuel in a pebble bed reactor consists of tiny particles of uranium surrounded by thin but extraordinarily sturdy layers of silicon carbide and
pyrolytic carbon. All these particles is compressed into a ball together with a bunch of graphite and this ball is then surrounded by another layer of graphite to make a pebble about the size of a
tennis ball. To fuel the reactor one throws in a whole bunch of these balls into a cylinder that is made out of even more graphite. The whole thing is cooled by blowing Helium through it. What makes
this reactor so safe is the thermal intertia of the whole system, the extreme durability of the fuel particles and the very strong negative feedback.
If the temperature of the reactor goes up all the neutrons getting slowed down in the graphite will get slowed down slightly less, this makes fission a bit less probably for each time a neutron hits
a uranium atom and the fission chain reaction dies. However we all know that even though fission has ceased, heat is still being generated by decay products and this is where the thermal inertia
comes into play. The reactor is pretty much a immense volume of graphite with some fuel particles in there. All that graphite can soak up huge amounts of heat and the whole core is very large in size
so there is a lot of surface area to radiate away the heat. Combined this means that even if the cooling systems fail completely the equilibrium temperature of the system, due to decay heat
production, will be far less than the temperature required to compromise the fuel particles. One can pull out all the control rods, shut down the cooling systems, go for a 2 week vacation in the
Maldives and then return to a intact and naturally shut down reactor. All that is needed to resume operation is to just turn on the cooling again. No damage to system, no catastrophic meltdown, no
electric systems needed at all for emergency situations. If the Fukushima reactors, or Chernobyl, or TMI had been pebble beds nothing at all would have happened. Pebble bed reactors also has more
versatility than light water reactors due to the fact that they produce much higher temperature heat. The massive industrial heat market then opens up for nuclear energy and it is a market that is
larger than the electricity market. The Chinese pebble bed reactor is a potential game changer that one should follow carefully.
More exciting developments in China is the grid connection of Chinas fast experimental reactor. It is a tiny reactor at 20 MWe but it is a strong sign that China is not leaving any stone unturned in
their strive for nuclear dominance. The follow up to this fast reactor will be the construction of two BN-800 fast sodium cooled reactors China is buying from Russia with planned construction start
in 2013. All the talk of generation 4 reactors being sci-fi is obvious nonsense.
Perhaps the most intriguing news during 2011, at least to me, was the launch of a very high profile Chinese project to develop a molten salt reactor using a thorium fuel cycle. In 20 years they
expect to have a commercial molten salt reactor running. So far China has been very secretive with any kind of details about the project. There are many ways to make a molten salt reactor and we are
eagerly awaiting any information. But some industry insider information I have heard tells me the project is a big deal politically and already to big to be allowed to fail. I am greatly looking
forward to finding out more about the project and reading the first papers they publish. One can only hope they won’t keep it all secret for long but the fact that they dont participate in the
generation 4 cooperation regarding the molten salt reactor hints that they want to do this all by themself. The molten salt reactor is perhaps the most promising of all the generation 4 designs, it
is however also the design with the most question marks attached to it.
A even more surprising development is the attempts by General Electric to launch their sodium cooled fast reactor design in Sweden and the UK. It is surprising because it shows a lot of confidence in
their design and it would be very interesting if one got built in Europe. Sweden is an unlikely market since it would (unfortunately) not fit the general plan in Sweden to treat spent nuclear fuel as
waste instead of a resource. For the same reason I doubt the idea will get approval in the UK, but one can always hope.
As far as waste goes developments are happening in Sweden. The company in charge of developing and building a repository (SKB) for the Swedish spent nuclear fuel has progressed to the point that they
have handed in an application to start building the repository. If built this would be the first civilian repository in the world and the second repository in operation. The first repository in
operation is the american Waste Isolation Pilot Plant that is used to store military transuranic waste (elements heavier than uranium). One can only hope that once the Swedish repository is in action
the old mantra “there is no solution to the nuclear waste problem” by the anti nuclear crowd will finally be silenced. But they didn’t go silent after WIPP started so I guess that is to much to hope
for. The swedish anti nuclear NGO’s like Naturskyddsföreningen and MKG are fighting SKB tooth and nail now when they are on the verge of loosing the waste fight. Spreading FUD wherever and whenever
they can.
Those are a small selection of the good news from 2011 that I can remember of the top of my head. Many other things have of course happened, like the approval to build one more reactor in Finland and
the developments in the Czech republic, Poland and many other countries. If I have missed some big happy news please let me know in the comments!
Hope all readers of this blog will get a splendid 2012!
6 Comments
Back when the cold war was still hot, and everyone was searching for communists in the closet, nuclear was still fresh and awe inspiring. The fascination with everything nuclear spawned a tremendous
variety of projects and ideas to realize the full potential of nuclear energy and find out its utility in many different applications. When looking back at those projects in this age, after being
born into a “precautionary principle” ruled society, some of the ideas might seem like utter madness or amazing brilliance. Pretty much without exception all those projects involved solid engineering
and the scientists back then dared to think big, really really big. It is fascinating to look back at those days and realise how many times the world was within centimeters of big revolutions in
energy production or space travel. Many of the projects share the same depressing end, getting shut down by political, rather than technical, reasons. Thinking big might not be fashionable anymore in
the west, but it will never cease to be educational and it gives hope for what we can accomplish in this century. That is why I will dedicate some time to write a series of “Nuclear History” blog
posts that looks into the crazy, the fascinating and the plain ingenious projects of the first nuclear era. A maths warning is in its place, I will not be afraid to throw in equations into the blog
posts if I feel it will explain something better than words. I am using MathJax to write the equations and it might not display properly if you read this through a RSS feed, in that case just jump to
the blog. If you are put off by equations just skip them and read the text and graphs, they should be self explanatory anyway.
This first post will be about one of my favorites, the fission rocket!
Let us start back in the 50’s. In 1954 the first nuclear powered submarine, USS Nautilus, was launched into the seas and the development of a nuclear jet engine for bomber planes were under way. The
grand space race had just started and it was only natural to ask what part nuclear energy could play. In 1955 the Atomic Energy Commission and the US Air Force got together and started the Rover
program, the original goal of the program was to create a nuclear driven ICBM. Parallel to this a project called Pluto was started with the goal of creating a nuclear driven ramjet for a cruise
missile that could potentially cruise for months on end carrying a large arsenal of nuclear weapons. Both programs where hedges against the possibility that conventional (chemical rocket driven)
ICBM’s might not work as well as was hoped. After the success of traditional chemical ICBM’s in the end of the 50’s and beginning of the 60’s project Pluto became redundant and was cancelled (one
must also mention that it was so dirty that one could have ignored weaponizing it and just letting it fly low over cities and the radiation from the darn thing would take care of business). Project
Rover was left in a position where its military value was diminished but the possibility of a nuclear rocket was still intriguing. Therefor Rover was handed over from the Air Force to the Space
Nuclear Propulsion Office (SNPO) which was a collaboration between NASA and ACE started in 1961. Rover continued as the development program for the rocket itself, regardless of what end use it would
have, and a new program called NERVA (Nuclear Engine for Rocket Vehicle Application) was started to examine the utilization of the Rover rockets for civilian space exploration. I will a bit sloppily
refer to both projects as NERVA.
But before we look into the developments that took place in those two programs, lets stop for a moment and ask what advantage does nuclear energy have in space exploration? After Gagarins first
flight into space in 1961 it became blatantly obvious that it was possible to put people in space with chemical rockets, so why even bother with nuclear rocket? Was it simply because nuclear was the
cool kid on the block? The traditional rocket engineers certainly did not want anything to do with nuclear, they understood chemicals perfectly well, thank you very much! The nuclear engineers on
their side was equally oblivious to the demands of space flight.
The match between space and nuclear isn’t obvious until one starts to look into what is really important for good rocket performance. There are two key parameters that rule supreme, thrust and
specific impulse. Thrust is just what it sounds like, the force the rocket is producing, good old fashion Newton’s second and and third laws (for those who have forgotten, the first law is force
equals mass times acceleration and the second law is, every action has an equal and opposite reaction). You need a hell of a lot of thrust to overcome Earths gravity well! Specific impulse is a bit
more complicated, it is a measure of the efficiency of a rocket engine. It tells you how much mass a rocket needs to expel in order to achieve a certain amount of velocity. In space the only mass you
have to play with is the mass you bring and the only way to gain velocity is to throw some mass in the opposite direction of where you want to go. The less mass you need to bring to achieve a certain
velocity the cheaper it is to send that bloody thing into orbit. Impulse is just another word for momentum (force) and specific impulse is the momentum gained per unit mass of propellant expelled.
Total impulse given by the propellant to the rocket is just the mass of the propellant times the effective exhaust velocity of the propellant. If we assume constant thrust and constant exhaust
velocity we can get the specific impulse by dividing the total impulse with the total propellant mass and all that is left is the effective exhaust velocity.
$$I_{sp}= \frac{\int F dt} {m} = \frac{\int \frac{dM}{dt} V_e dt}{m} =\frac{MV_e}{M} = V_e$$
$$I_{sp}$$ = specific impulse
F = Force
$$V_e$$ = effective exhaust velocity
M = total propellant mass
Now that is a lot of word simply to state that exhaust velocity is important. I go through all of this to explain the concept of specific impulse since it is a term one never gets away from when
reading about rockets. Sometimes specific impulse isn’t defined as above either, but it preserves its importance. In another definition, for some reason I don’t understand at all (after all I am only
a physicist and not a rocket scientist), specific impulse is often defined per propellant unit of weight (on Earth) instead of unit of mass. The strict definition of weight is the force a mass
experiences in a gravitational field. A scale doesn’t really measure your mass in kilos, it measures your weight in Newtons! Using that one then ends up with a definition if specific impulse that
looks like this.
$$I_{sp} = \frac{V_e}{g_o}$$
$$g_0$$ = gravitational acceleration at earths surface (9.81 m/s^2)
In the first definition specific impulse has the unit of velocity, m/s, and in the second definition it has the unit seconds. So if you see people talking about specific impulse of this and that many
seconds you know the reason. I explain this because I will consistently use the second definition of specific impulse from now on due to the fact that it is more common to find tables in units of
Now to realise why specific impulse is important lets have a look at the famous rocket equation formulated by Tsiolkovsky. This equation tells you how much velocity a rocket will gain from a given
amount of propellant with a certain exhaust velocity.
$$\Delta V=V_e*Ln(\frac{M0}{M0+Mr}) = I_{sp}*g_0*Ln(\frac{M0}{M0+Mr})$$
$$\Delta V$$ = the speed given to the rocket
$$V_e$$ = rocket exhaust velocity
$$M_0$$ = Rocket mass without propellant
$$M_r$$ = propellant mass
$$g_0$$ = Gravitational acceleration at the earth surface
The higher the specific impulse the higher the $$\Delta V$$, that much is obvious. Looking at the masses involved is even more enlightening. So lets breaks out the $$M_r$$ term from the last equation
and we get:
$$M_r = M_0*[e^{\frac{\Delta V}{I_{sp}*g_0}}-1] $$ = $$M_0[e^{\frac{\Delta V}{V_e}}-1]$$
Lets plot this function! Lets assume we want to go from low earth orbit to orbit around the moon. This will require a $$\Delta V$$ somewhere in the neighborhood of 4000 m/s (to get into low earth
orbit in the first place one needs about 10 000 m/s, but lets assume we are already there). Lets also assume we want to deliver about 55 tons of material there. That is about the weight of the Apollo
command module plus the lunar lander module plus the empty weight of the S-IVB last stage of the Saturn V rocket. This will give the resulting plot with Isp’s ranging from 100 to 1000 seconds
(exhaust velocities of 981 m/s to 9810 m/s).
There are two blue X drawn on the plot. The first X is drawn at the $$I_{sp}$$ value 475, this happens to be the specific impulse that the third stage of the Saturn V rocket had, the part of the
rocket that was supposed to give the final $$\Delta V$$ to go to the moon. It turns out that the reaction mass according to the plot above for $$I_{sp}$$ = 475 is 75 metric tons. In reality the S-IVB
burned about 80 tons of fuel to reach the moon, so we are playing in the correct order of magnitude here! What about the second X drawn with a $$I_{sp}$$ of 925? To jump forward a bit in time, that
happens to be the $$I_{sp}$$ of the final NERVA design, how much reaction mass does that correspond to? 30 tons! Less than half of the S-IVB, a dramatic reduction and a potential cost saver!
To show an even larger advantage for the nuclear rocket, lets look at missions requiring higher $$\Delta V$$. In the figure below I have plotted the reaction masses needed given an $$I_{sp}$$ of
either 475 or 925 seconds. At the far right end of the graph one can see the propulsion mass needed to deliver 550 tons from Earth to landing on Mars.
For the nuclear rocket one would need a propellant mass of about 1200 tons while the chemical rocket needs 4900 tons. Given that the weight to launch something into low earth orbit right now is over
2000 US dollars per kg the cost saving on mass alone is close to 7.4 billions! To be fair to the chemical case, the cost to get things into orbit might be cut by a factor of 10 within the foreseeable
future (if space x manages to make a reusable rocket), but even in such an optimistic case the potential cost saving might be close to one billion dollars.
The above plots shows why a nuclear rocket is desirable, but it doesn’t explain why a nuclear rocket performs so much better compared to chemical rockets. Why does a nuclear rocket have a much higher
$$I_{sp}$$ ? Lets first consider how a chemical rocket works, in a chemical rocket the energy source and the reaction mass is one and the same. You mix two chemicals, they explode in a semi
controlled fashion and the resultant products are sprayed out through the rocket nozzle and creates thrust. A common example of liquid rocket fuel is hydrogen and oxygen. There is also examples of
solid fuels, the boosters for the space shuttle is one example that uses some kind of aluminum mixture. The chemical reaction heats the reaction products and throws them out of the rocket with a
certain velocity. Temperature of a gas is proportional to the average energy of the gas molecules and energy is simply $$E=\frac{mV^2}{2}$$. Velocity of the particles are then $$V=\sqrt{2E/m}$$ and
we instantly see that the smaller the mass, with a given temperature, the higher the particle velocity. Ideally, whatever we heat up, we want it to to be made of as light a particle as possible. In
chemical rockets we don’t really have the luxury of choice, the reactions that gives the most energy doesn’t necessarily also give the reaction products with the smallest masses. The smallest
possible mass is the hydrogen atom since it is the lightest element. The hydrogen + oxygen reaction is one of the most energetic chemical reactions, but the product of the reaction, water molecules,
is 18 times heavier than the hydrogen atom. A heated hydrogen gas with the same temperature as a heated water gas will have a velocity more than 4 times higher.
A chemical rocket will never have the ideal propellant due to the fact that one has to introduce other compounds since the energy is generated by the compounds themselves. To have the ideal
propellant the energy production has to be separate from the propellant. This is where a nuclear reactor finally enters the picture. If the heat source is nuclear fuel rods and the propellant is
hydrogen heated by flowing over the rods. Then one can indeed get a pure flow of hydrogen out of the rocket. In that way one can maximise the $$I_{sp}$$ from the energy produced. Why can’t one do
this with chemicals, may be by having some kind of contained chemical that produces heat that is transferred to a pure hydrogen gas? It is due to the fact that chemical reaction releases so little
energy compared to nuclear reactions, this means the mass of the chemicals needed for the reaction would be as large or larger than the mass of the propellant. Fission however releases about a
million times more energy from the same amount of mass compared to a chemical energy source. The energy required to put the space shuttle in orbit, of the order of $$10^{13}$$ joules, is contained in
such a petty amount as roughly 100 grams of uranium. With fission it becomes feasible to separate the energy production from the propellant without having the energy production part being to massive.
We can then have a rocket that runs with the same temperature as the best chemical rockets but have 4 time the $$I_{sp}$$ .
In reality everything isn’t quite so rosy, one can not expect to put 100 grams of uranium togheter with some hydrogen into a rocket and easily get a $$I_{sp}$$ that is 4 times higher than the space
shuttle rocket. The $$I_{sp}$$ will rather be a bit more than double because hydrogen atoms form H2 molecules and thus the specific weight of the propellant is only one ninth of the weight of water.
Hydrogen needs to be heated to over 5000 degrees Celsius before it forms free hydrogen atoms. Also a full reactor weights significantly more than 100 grams of uranium, even if only 100 grams needs to
be fissioned to produce the total energy, one still needs a hefty amount of uranium for the reactor to go critical in the first place.
But even taking into accounts those pessimistic facts the nuclear rocket is still very promising. This first part of the series is already long enough. So lets save the fun stuff for the next part.
Then we will look at what kind of things they actually built during the NERVA program and the basis for the reactor designs!
9 Comments
Hello nuclear friends everywhere, and a big hello to our antagonists as well… hope you all had some great holidays these past days.
Here is a new year’s resolution from Nuclear Power Yes Please:
In 2012, we will give you 12 reasons to love nuclear power
Here’s how it’s going to work: at the beginning of every month we will give the headline for a reason to love nuclear power. During the month we will be authoring an article that details everything
behind the reason with links, referenes, diagrams, illustrations and the logic behind the argument. By the end of the month, the article will be given a permanent link on the website that you can
reference whenever you want and use the material.
So… kicking off, here’s reason number one:
Nuclear power saves lives
In Feburay 1, the article for this will be presented, along with the next reason.
Happy New Year everyone!
Comments closed
|
{"url":"https://nuclearpoweryesplease.org/blog/2012/01/","timestamp":"2024-11-06T14:36:45Z","content_type":"text/html","content_length":"76909","record_id":"<urn:uuid:13a0bc4d-2449-44a4-9137-ae2704ad395e>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00256.warc.gz"}
|
2014 World Series Game 2: Giants at Royals
Oct 23 2014
2014 World Series Game 2: Giants at Royals
So, what happened last night? Madison Bumgarner dominated and James Shields not so much. Now everyone may be right about the extraordinary quality of the Royal’s Bullpen but by the time they got the
call in the 4th Inning the game was already 5 – 0 and there was no saving to be done.
In the top of the 1st, Leadoff Single, Sacrifice, Single, Runners at the Corners, Double, Caught advancing, 2 RBI HR, Giants 3 – 0.
In the top of the 4th, Leadoff Double, Wild Pitch, Runner at 3rd, Walk, RBI Single, Shields pulled for Duffy, Sacrifice, 2nd and 3rd, Walk, RBI Walk, Giants 5 – 0.
In the top of the 7th, Leadoff Walk, RBI Triple, Pitching Change, Line Out, RBI Single, Giants 7 – 0.
In the bottom of the 7th a 2 Out Solo Shot, 7 – 1 Giants.
Game Over Dude.
Now I heard the announcers make some kind of remark about how it’s not so bad, that in Series where the Home team got blown out (and it was a blowout, make no mistake) the last 4 out of 5 times that
team came back to win it. I wish I could be more encouraging.
With this loss (and it was Ace against Ace on full rest) the Royals surrender Home Field Advantage and will have to win at least one game at the Giants to prevail whereas the Giants could sweep at
Home and never have to visit the Royals again after tonight, win or lose. Also Bumgarner is a rubber arm who threw only slightly more than 100 pitches and could easily make 3 appearences in this
So either the Royals come up with a solution or the Giants only need 1 more game from somebody.
Now lest you think I’m just a heartless bastard who hates the Royals (and I am a heartless bastard, but I don’t necessarily hate the Royals any more than every team that’s not the Mets) I have a
smidge of sympathy since it’s been so long for them.
But it’s Chicago Cubs sympathy.
Starting tonight for the Royals is Yordano Ventura (R, 14 – 10, ERA 3.20). He’s a rookie with 3 appearences but no decisions Post Season and an ERA of 4.61 based on 13 Innings Pitched with 12 Hits, 2
Home Runs, and 7 Runs Scored.
He will be matched for the Giants by Jake Peavy (R, 7 – 13, ERA 3.73). Post Season he is 1 – 0 in 2 appearences with an ERA of 1.86 based on 9.2 Innings Pitched with 6 hits, 1 Home Runs, and 2 Runs
This is really a pick ’em. Peavy’s had a better Post Season but has played fewer innings and he sucked during the regular season. It all depends on if the Royals bring their bats to the park tonight.
8 pm Fox.
103 comments
7. Blanco homered to right on a full count.
Panik flied out to right fielder Aoki.
Posey struck out.
Sandoval grounded out, pitcher Ventura to first baseman Hosmer.
Runs: 1, Hits: 1
16. Escobar infield single to short.
Aoki flied out to center fielder G.Blanco.
Escobar was caught stealing, catcher Posey to second baseman Panik.
Cain doubled to left.
Hosmer walked on four pitches.
Butler singled to center, Cain scored, Hosmer to third.
Gordon fouled out to left fielder Ishikawa.
21. Pence flied out to center fielder Cain.
Belt struck out.
Morse singled to center.
Ishikawa grounded into fielder’s choice to shortstop A
Escobar unassisted, Ishikawa to first, Morse out.
27. Perez lined out to first baseman Belt.
Infante doubled to left.
Moustakas flied out to center fielder Blanco.
Escobar doubled to right, Infante scored.
Aoki lined out to left fielder Ishikawa.
32. Crawford grounded out, second baseman Infante to first baseman Hosmer.
Blanco grounded out to first baseman Hosmer unassisted.
Panik singled to right.
Posey grounded out, second baseman Infante to first baseman Hosmer.
34. Cain flied out to right fielder Pence.
Hosmer grounded out, second baseman Panik to first baseman Belt.
Butler flied out to center fielder G.Blanco.
40. Sandoval doubled to center.
Pence grounded out, shortstop A.Escobar to first baseman Hosmer.
Belt doubled to right, Sandoval scored.
Morse flied out to right fielder Aoki. Belt was out advancing, right fielder Aoki to shortstop A.Escobar to pitcher Ventura to second baseman Infante, Belt out.
43. Gordon flied out to center fielder Blanco.
Perez grounded out, second baseman Panik to first baseman Belt.
Infante grounded out, shortstop B.Crawford to first baseman Belt.
48. Ishikawa singled to center.
Crawford grounded into fielder’s choice, second baseman Infante to shortstop Escobar, B.Crawford to first, Ishikawa out.
Blanco popped out to second baseman Infante.
Panik flied out to center fielder L.Cain.
51. Moustakas grounded out, second baseman Panik to first baseman Belt.
Escobar struck out.
Aoki flied out to center fielder Blanco.
58. Dyson in as center fielder. Cain in as right fielder.
Posey singled to center. S
Sandoval flied out to right fielder L.Cain.
Pence infield single to short, Posey to second. K.Herrera pitching.
Belt flied out to left fielder A.Gordon.
Morse grounded into fielder’s choice, shortstop A.Escobar to second baseman Infante, Morse to first, Pence out.
Runs: 0, Hits: 2
71. Cain singled to center.
Hosmer walked on a full count, L.Cain to second.
Machi pitching.
Butler singled to left, Cain scored, Hosmer to second.
Lopez pitching.
Gore pinch-running for Butler.
Gordon flied out to left fielder Ishikawa.
Strickland pitching.
On wild pitch by Strickland, Hosmer to third, Gore to second. S.Perez doubled to center, Hosmer scored, Gore scored.
Infante homered to left on a 1-0 count, Perez scored.
Affeldt pitching.
Moustakas singled to center.
Escobar grounded into a double play, shortstop Crawford to second baseman Panik to first baseman Belt, Moustakas out.
77. Gore in as designated hitter.
Ishikawa struck out.
Crawford walked.
Blanco walked, Crawford to second.
Panik lined out to center fielder Dyson.
Posey grounded out, second baseman Infante to first baseman Hosmer.
79. Tim Lincecum pitching.
Dyson flied out to left fielder Ishikawa.
Cain grounded out, shortstop Crawford to first baseman Belt.
Hosmer struck out.
82. Wade Davis pitching.
Sandoval struck out.
Pence struck out.
Belt grounded out, first baseman Hosmer to pitcher Davis.
87. Willingham pinch-hitting for Gore.
Willingham struck out.
Gordon fouled out to third baseman Sandoval.
S.Casilla pitching.
Perez struck out.
92. Greg Holland pitching.
Susac pinch-hitting for Morse. Susac struck out.
Ishikawa struck out.
Crawford singled to right.
On defensive indifference, Crawford to second.
Blanco struck out.
|
{"url":"https://www.docudharma.com/2014/10/2014-world-series-game-2-giants-at-royals","timestamp":"2024-11-14T04:38:48Z","content_type":"text/html","content_length":"136543","record_id":"<urn:uuid:f7cd1932-26fa-4765-88fd-a77679fc719e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00201.warc.gz"}
|
MATHEMATICS—2005 (Set III—Delhi)
CBSE Guess > Papers > Question Papers > Class X > 2005 > Maths > Delhi Set - III
MATHEMATICS—2005 (Set III — Delhi)
Except for the following questions, all the remaining questions have been asked in Set 1 and Set 2.
Q. 1. Express the following as a rational expression in lowest terms:
Q. 2. Find the 8th term from end of the A.P. 7, 10, 13, ...,184.
Q. 4. Find the L.C.M. of the following polynomials:
Q. 6. Find the number of terms of the A.P. 63, 60, 57, ..., so that their sum is 693.
Ans(n = 21 or n = 22)
Q. 11. Solve the following system of equations graphically:
Find the points where the lines meet the x-axis.
Q. 12. The sum of two numbers is 18. The sum of their reciprocals is 1/4. Find the numbers.
Ans:(6,12; 12,6)
Q. 20. There are 30 cards, of same size, in a bag on which numbers 1 to 30 are written. One card is taken out of the bag at random. Find the probability that the number on the selected card is not
divisible by 3.
Q. 21. Prove that the ratio of areas of two similar triangles is equal to the ratio of squares of their corresponding sides.
Apply the above theorem on the following:
ABC is a triangle and PQ is a straight line meeting AB in P and AC in Q. If AP = 1 cm, PB = 3 cm, AQ = 1 -5 cm, QC = 4-5 cm, prove that area of
Mathematics 2005 Question Papers Class X
CBSE 2005 Question Papers Class X
|
{"url":"https://www.cbseguess.com/papers/question_papers/x/2005_maths_delhi_set3.php","timestamp":"2024-11-08T08:29:48Z","content_type":"text/html","content_length":"21386","record_id":"<urn:uuid:6c00e1be-d312-4caa-aa6d-ff664bad7341>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00585.warc.gz"}
|
Handling function partiality
Consider the following snippet:
def isqrt_option(num: Int): Option[Int] = {
num match {
case x if x >= 0 => Some(math.sqrt(x).toInt)
case _ => None
Conceptually, isqrt_option is a partial function from natural numbers to natural numbers that returns square root of its input rounded down. This function represents a specific type of partial
function, the details of which will be discussed shortly.
1. Flavours of partiality - the callee side
Section missing id
There are many ways to implement a partial function. We have already seen how we can use Optional return values to implement partiality. We can similarly use Either:
final case object NegativeInputArgument(value: Int)
def isqrt_either(num: Int): Either[NegativeInputArgument, Int] = {
num match {
case x if x >= 0 => Right(math.sqrt(x).toInt)
case x => Left(NegativeInputArgument(x))
Sometimes you want to use an assertion instead because of some really complex type refinement (e.g. inter-dependency between multiple input data structures), that would be difficult to express in
types without a dependently typed language (or even then!):
* @param num non-negative integer
def isqrt_assert(num: Int): Int = {
assert(num >= 0)
Alternatively, we can use type refinements (using refined Scala library or something similar).
final case class NonNegInt private (value: Int) {
assert(value >= 0)
object NonNegInt {
def refine(i: Int): Option[NonNegInt] =
if (i > 0) Some(NonNegInt(i))
else None
// A macro would allow us to specify literal non-negative integers.
implicit def literal(x: Int): NonNegInt = macro applyImpl
def isqrt_refined(num: NonNegInt): Int =
2. The caller side
Section missing id
Let's now shift our focus to the caller side of isqrt function. Consider a scenario where the caller has already ensured that num is positive:
val num = ...
if (num > 0) {
val r = isqrt_option(num)
r match {
case Some(x) => ...
case None =>
// OOPS, what do we do here?
We can see that using Optional return values just moves partiality to the call-site. The problem is that even though we know that num > 0, most modern compilers are not able to infer that the None
case is impossible (the only exception that I know of is Code Contracts for C#, which has some very similar functionality).
Let's see how the same code would look like if we used isqrt_assert:
val num = ...
if (num > 0) {
val r = isqrt_assert(num)
// No need to match on r, it's already an Int.
Notice that this coding style will necessarily result in some runtime failures due to your logical errors (which are bound to happen sooner or later!), but it is extremely convenient.
Using the refinement-based solution:
val num = ...
NonNegInt.refine(num) match {
case Some(num) =>
val r = isqrt_refined(num)
// No need to match on r, it's already an Int.
case None =>
This code is elegantly simple, but problems can arise when we try to compute the square root of a square root.
val r = isqrt_refined(num)
val r1 = isqrt_refined(r) // OOPS, doesn't compile
We have to modify the definition of isqrt_refined:
def isqrt_refined(num: NonNegInt): NonNegInt =
NonNegInt.make(math.sqrt(num.value).toInt) match {
case Some(n) => n
case None => ???
With a better compiler or language perhaps this could have been avoided, but sooner or later, the asserts will creep back in into your code. In dependently typed languages that will happen when you
are just too tired to prove some property of your code, so you invoke believe_me. And no matter how good a compiler is, the halting problem will prevent it from proving all of potential code
properties (but it could still have insanely large coverage, being able to say prove 99.999999% of all properties).
It's important to note that assert can be a valuable tool when used appropriately. While it might seem tempting to avoid or minimise its use, understanding its role and functionality can
significantly improve your code's robustness and reliability.
|
{"url":"https://wabbit.blog/posts/fp-function-partiality.html","timestamp":"2024-11-11T23:12:47Z","content_type":"text/html","content_length":"18112","record_id":"<urn:uuid:cc30e486-7367-45db-975a-cdb2a05b2c1a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00066.warc.gz"}
|
Multiplicity and concentration of nontrivial nonnegative solutions for a fractional Choquard equation with critical exponent | Yang Zhipeng
In present paper, we study the fractional Choquard equation $$\varepsilon^{2s}(-\Delta)^s u+V(x)u=\varepsilon^{\mu-N}(\frac{1}{|x|^\mu}\ast F(u))f(u)+|u|^{2^\ast_s-2}u$$ where $\varepsilon>0$ is a
parameter, $s\in(0,1),$ $N>2s,$ $2^*_s=\frac{2N}{N-2s}$ and $0<\mu<\min{2s,N-2s}$. Under suitable assumption on $V$ and $f$, we prove this problem has a nontrivial nonnegative ground state solution.
Moreover, we relate the number of nontrivial nonnegative solutions with the topology of the set where the potential attains its minimum values and their’s concentration behavior.
|
{"url":"https://www.yzpmath.com/publication/yang2020-1/","timestamp":"2024-11-11T13:32:08Z","content_type":"text/html","content_length":"16304","record_id":"<urn:uuid:f70f5824-23bd-4422-9cfc-e70f4656620e>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00163.warc.gz"}
|
Challenges in the Treatment of Unit Nonresponse for Selected Business Surveys: A Case Study
How to cite this article:
Thompson K. J., & Washington K. T. (2013). Challenges in the Treatment of Unit Nonresponse for Selected Busness Surveys: A Case Study, Survey Methods: Insights from the Field. Retrieved from https://
Probability sample selection procedures gift methodologists with quite a bit of control before data collection. Unfortunately, not all sample units respond and those that do will not always provide
data on every questioned characteristic, which can lead to biased estimates of totals. In this paper, we focus entirely on the challenges of mitigating nonresponse bias effects in business surveys,
using empirical examples from one survey to illustrate challenges common to many programs.
missing data; weight adjustment; ratio imputation
The authors thank Laura Bechtel, William Davie Jr., Donna Hambric, Lynn Imel, Xijian Liu, Mary Mulry, and two referees for their useful comments on earlier versions of this manuscript. This paper is
released to inform interested parties of research and to encourage discussion. The views expressed are those of the authors and not necessarily those of the U.S. Census Bureau.
Probability sample selection procedures gift methodologists with quite a bit of control before data collection. At the design stage, the methodologist determines an optimal design for a given frame
and characteristic(s) to ensure that the realized sample is ‘balanced…which means (the selected sample has) the same or almost the same characteristics as the whole population’ for selected items
(Särndal, 2011). This control can evaporate when the survey is conducted. Not all sample units respond (unit nonresponse), and those that do will not always provide data for every item on the
questionnaire (item nonresponse). Unit and item nonresponse will lead to biased estimates of totals if the respondent-based sample estimates are not adjusted. The degree of bias is a function of
several factors, including the difference in respondent and nonrespondent means on the same item, the magnitude of the aggregated missing data values, and the effects of “improper“ adjustment
procedures on the respondent data.
In this paper, we focus on the challenges of mitigating nonresponse bias effects in business surveys, using empirical examples from one survey to illustrate challenges common to many programs. The
terms “establishment survey” and “business survey” are often used interchangeably. We use the latter term since many business surveys select companies or firms, which comprise establishments. Most
business surveys publish totals such as revenue, expenditures, and employees. Consequently, complete-case analyses are always biased. We identify two separate but highly related estimation
challenges with nonresponse in business surveys: (1) the difficulty in developing adjustment cells for nonresponse treatment that use auxiliary variables that are predictive of both unit response
and outcome and (2) the difficulty in developing appropriate nonresponse treatments for surveys that collect a large number of data items, many of which are not strongly related to key data items or
to the available auxiliary data.
The General Setting: Business Populations and Business Data
Economic data generally have very different characteristics from their household counterparts. First, business populations are highly skewed, i.e. the majority of a tabulated total in a given
industry comes from a small number of large units. Consequently, business surveys often employ single stage samples with highly stratified designs that include the “largest” cases with certainty and
sample the remaining cases. Thus sampled cases with large design weights may often contribute very little to the overall tabulated totals.
An efficient highly stratified design requires that within-strata means are the same, and the between-strata means are different (Lohr, 2010, Ch.3). For this to happen, the unit measure of size
(MOS) variable used for stratification must be highly positively correlated with the survey’s characteristic(s) of interest. However, it is possible for a given characteristic to have no statistical
relationship with unit size. For example, the frame MOS could be total receipts for the business, but an important characteristic of interest could be electrical consumption. Furthermore, although
business populations are highly positively skewed, not all business characteristics are strictly positive (e.g. income, profit/loss).
Not all sampled units respond. To account for this nonresponse, the survey designers partition the population into P disjoint adjustment cells using x[p], a vector of auxiliary categorical variables
available for all units. Each adjustment cell contains n[p] units, of which r[p] respond. Nonresponse adjustment procedures are performed within adjustment cell, with the assumption that the
respondents comprise a random subsample within the nonresponse adjustment cells.
Many business programs collect detail items – groups of items that sum up to their respective totals. The total and associated details items are referred to together as “balance complexes” (Sigman
and Wagner, 1997). All survey participants are asked to provide values for the key items (hereafter referred to as “totals”), whereas the type of requested details requested can vary. For example,
Figure 1 presents the balance complex included on the Service Annual Survey (SAS) questionnaire mailed to companies that operate in the airline industry. [The SAS population comprises several
industries]. The information requested in lines 1a through 2 are details that are only requested from sampled units that operate in the airline industry (and are referred to herafter as ‘detail
items’), and the information requested in line 3 is collected from all units that are sampled in the SAS.
Figure 1: Sample Balance Complex from the Services Annual Survey (Transportation Sector)
Data collection and nonresponse adjustment for the total items are much less problematic than for the detail items because companies are usually able to proportion out their “bottom line” total
items. Moreover, alternative data are often available for substitution or validation of these items. In contrast, with smaller units, the requested detail level data may not be available from all
respondents, and auxiliary data are generally not available (Willimack and Nichols, 2010).
Furthermore, the larger units are more likely to provide response data than are the smaller units. First, the smaller units may not keep track of all of the requested data items (Willimack and
Nichols, 2010) or may perceive the response burden as being quite high (Bavdaž 2010). Second, operational procedures increase the likelihood of obtaining valid response from large units. Analyst
procedures in business surveys are designed to improve the quality of published totals. This is best accomplished by unit nonresponse follow-up of the large cases expected to contribute substantially
to the estimate, followed by intensive analyst research for auxiliary data sources such as publicly available financial reports to replace imputed values with equivalent data (Thompson and Oliver,
2012). This approach works well for the key survey totals items, where alternative data are available for substitution or validation, but not for the detail items.
Frequently Used Adjustment Procedures for Unit and Item Nonresponse
There are two treatments for unit and item nonresponse: adjustment cell weighting and imputation. In household surveys, where there is generally little or no information corresponding to the
missing data from the sampled units, adjustment weighting – which increases the sampling weights of the respondents to represent the nonrespondents – is the only legitimate option (Kalton and
Kaspryzk, 1986). In business surveys, imputation can be as appealing as weight adjustment for treating unit nonresponse, especially when valid data from the same sample unit are often available for
direct substitution. Indeed, Beaumont et al (2011) prove that such auxiliary variable imputation can yield identical variances as those obtained from the full response data. In contrast to weighting,
imputation is performed by item, using a hierarchy that imputes items in a pre-specified sequence determined by the expected reliability of available imputation models [Note: hot deck imputation and
certain Bayesian models are exceptions to this univariate procedure but are not further discussed in this paper as their usage is fairly rare with business surveys]. This approach allows great
flexibility and preserves the expected cell totals, but does not preserve multivariate relationships between items.
In our setting, the business survey has a random sample of size s that has been partitioned into P disjoint unit nonresponse adjustment cells, indexed by p. In each imputation cell p, s[p,r] units
respond and s[p,nr] units do not. Thus, survey data are available for the variable of interest y from the s[p,r ]responding units. A vector of auxiliary variables x exists for all the sampled units
(respondents and nonrespondents). Under complete response, the population total Y would be estimated as w[j] is a weight associated with unit j (usually the inverse probability of selection). The
imputed estimator of the population total for characteristic y is given by
Our case study considers three commonly used imputation models. Each model can be re-expressed as an adjustment-to-sample weighting estimators, as described in Kalton and Flores-Cervantes (2003).
Here, the weighted estimator of the population total for characteristic y is given by
Table 1 presents the three imputation/weighting procedures. The count_u procedure imputes the weighted average value in the imputation cell for the missing value. This is equivalent to adjusting the
respondent units’ final weights by the weighted inverse response rate (Oh and Scheuren, 1983).
The count procedure uses an unweighted mean for imputation, which is equivalent to multiplying the respondent units’ final weights by an unweighted inverse response rate (see Särndal and Lundström,
2005, Chapter 7.3, and Little and Vartivarian, 2005).
If the probability of unit nonresponse does not depend on the values of the observed characteristic y, then the data are missing at random (MAR) as defined in Rubin (1976). Under this assumption, the
probability of response in each adjustment cell p is a constant, and the “inverse response rate” adjustment to the design weights produces an “unbiased” total from the respondent data. These
adjustments are simple to compute, but the additional stage of weighting increases the variance (Kish, 1992); Kalton and Flores-Cervantez, 2003; Little and Vartivarian, 2005).
With business survey data, the probability of response is often related to unit size, and the uniform response assumption (i.e., MAR) is not realistic. Shao and Thompson (2009) describe the more
general covariate-dependent response mechanism, which allows the probability of response to depend on a strictly positive auxiliary variable x such as the MOS. Under this response model, the count
adjustments described in the paragraph above do not mitigate the nonresponse bias and can only increase the sampling variance (Little and Vartivarian, 2005).
The ratio procedure predicts a value for the missing y with the no-intercept linear regression regression model described by β under this model. Note that Särndal and Lundström (2010) recommend the
inclusion of an intercept, but we have found that the intercept is non-significant in many business data sets (ex., businesses with no employees have no payroll). If the covariate-dependent response
mechanism is appropriate and the auxiliary variable x is used in the ratio model or is highly correlated with the ratio model, then the ratio adjusted estimates described in Table 1 will have
improved precision over the correponding count adjusted estimates. If the prediction model is not valid or if the strength of association between x and y is weak, then the bias induced by the ratio
estimator increases the MSE over the other reweighted estimates. This is more likely to occur with the detail items than with the total items.
Table 1: Nonresponse Adjusted Estimators Considered in the Case Study
Hereafter, as in Table 1, we use the term ‘imputation model’ to describe the formula used to obtain an imputed (replacement) value for the missing y and the term ‘imputation parameter’ to describe
data-driven estimates obtained from respondent values to compute these replacement values (the weighted or unweighted sample mean or ).
The Service Annual Survey (SAS)
For the remainder of the report, we discuss the analysis of the nonresponse adjustment procedures for the Service Annual Survey (SAS). The SAS is a mandatory survey of approximately 70,000 employer
businesses having one or more establishments located in the U.S. that provide services to individuals, businesses, and governments, identified by North American Industry Classification Series (NAICS)
system code on the sampling frame. We examine the SAS sections covering the transportation and health industries (SAS-T and SAS-H, respectively). Information on the SAS design and methodology is
available at http://www.census.gov/services/sas/about_the_surveys.html.
The SAS uses a stratified random sample. Companies are stratified by their major kind of business (industry), then are further sub-stratified by estimated annual receipts or revenue. All companies
with total receipts above applicable size cutoffs for each kind of business are included in the survey as part of the certainty stratum. Within each noncertainty size stratum, a simple random sample
of employer identification numbers (EINs) is selected without replacement. Thus, the sampling units are either companies or EINs. The initial sample is updated quarterly to reflect births and
The key items collected by SAS are total revenue and total expenses, both of which are totals in balances complexes. The revenue detail items vary by industry within sector. Expense detail items are
primarily the same for all sectors, with an occasional additional expense detail or two collected for select industries. Total payroll is collected in all sectors as a detail item associated with
expenses. For editing and imputation, payroll is treated as a total item, as auxiliary administrative data are available. Imputation is used to account for both unit and item nonresponse. Auxiliary
variable and historic trend imputation (which uses survey data from the same unit in a prior collection period) are preferred for revenue, expenses, and payroll. Otherwise, SAS-H and SAS-T utilize
the trend and auxiliary ratio imputation models, where the trend module predicts a current period value of y from a prior period value and the auxiliary model uses a different auxiliary variable
obtained from the same unit and collection period.
The imputation cells for SAS are six-digit industry (NAICS) code cross-classified by tax-exempt status. Unlike the sampling strata definitions, the imputation cells do not account for unit size, and
imputation parameters use certainty and (weighted) noncertainty units within the same cell. The imputation base for the ratio imputation parameters is restricted to complete respondent data, subject
to outlier detection and treatment.
Response Propensity Analysis
Response propensity modeling uses logistic regression analysis to determine sets of explanatory covariates related to unit response. Separately examining the SAS-T and SAS-H data, we used the SAS
SURVEYLOGISTIC procedure[i] to fit two logistic regression models: (1) a simple model that used only the existing imputation cells as independent variables; and (2) a nested model that also included
the continuous MOS variable as a covariate. The logistic regression analysis therefore examines whether the categorical variables used to form adjustment cells are predictive of unit nonresponse and
to check if other variables are missing in the construction of the adjustment cell.
We tested the goodness-of-fit hypothesis of each fitted model. All were significant, so we examined the marginal test results for individual imputation cells for cells with good fits. Rejecting the
goodness-of-fit null hypothesis provides evidence that at least one of the variables used to construct adjustment cells are related to response propensity. Examining the marginal results highlights
individual imputation cells where there may be a missing predictor.
Figure 2 presents side-by-side bubble plots summarizing the logistic regression results for SAS-H. Figure 3 presents the corresponding counts for SAS-T.
Figure 2: Logistic Regression Results for SAS-H. Each dot represents number of significant marginal test results for an individual imputation cell, with the number of significant tests indicated on
the y-axis. A strongly predictive model should have significant results in at least four of the six studied years.
Figure 3: Logistic Regression Results for SAS-T
For both programs, the logistic regression analysis provides evidence that the industry/tax status categories used to form adjustment cells are not strongly related to response propensity. Including
the continuous nested MOS covariate in the SAS-T model improves the predictions, although there is no evidence that this is the case with SAS-H.
Clearly, the existing sets of categorical variables used to form imputation cells for SAS-T are inadequate for mitigating unit nonresponse. Initially, we considered using the sampling strata as
adjustment cells. However, a high proportion of strata contained fewer than five units because of the highly stratified design and the limited number of large companies and large tax-entities in the
sampling universe.
Unit Response Rate Comparisons
With SAS, certainty status is directly related to response propensity through the analyst follow-up procedures. Särndal and Lundström (2005) recommend exploring whether there is a sytematic
difference in response propensities on a single category by comparing their unit response rates (URR) in the same imputation cell. In the Economic Directorate of the U.S. Census Bureau, the URR is
the ratio of units that reported valid data to the total number of eligible units, computed without survey weights (Thompson and Oliver 2012).
Figure 4 presents the average URR (across the six years) for each SAS-H imputation cell, with blue squares presenting the certainty-unit URR, and the red squares presenting the noncertainty-unit URR
in the same imputation cells. In the majority of cases, the certainty and noncertainty URRs within the same cell are dissimilar, although the direction of the difference is not consistent.
Figure 5 presents the corresponding measures for SAS-T. As with SAS-H, the URRs within the same imputation cell clearly differ by certainty status. In contrast to the SAS-H results, there is a very
clear pattern within the SAS-T cells, where the unit response rates for certainty units are generally higher than the corresponding noncertainty measures.
Figure 4: Average URR by Certainty Status within Imputation Cell (SAS-H)
Figure 5: Average URR by Certainty Status within Imputation Cell (SAS-T)
The transportation sector tends to have a few very large units (businesses) in each industry, with the remaining units being fairly homogeneous in size, and the analysts attempt to obtain complete
data from all certainty cases. In contrast, the unit size within the health sector is much more variable, and the SAS-H sample is much more highly stratified. Analysts must obtain valid responses
from certainty and “large” noncertainty units, so the response rate pattern is not as consistent.
Alternative Weighting Comparisons
The earlier analysis indicates that the studied programs’ imputation cells fail to satisfy the MAR assumption. That said, if the degree of nonresponse bias in the studied estimates is small, then
this might not be of strong concern. Groves and Brick (2005) propose evaluating the magnitude of the nonresponse bias by altering the estimation weights and using the various weights to construct
different estimates. If the difference between the estimates is trivial, there is evidence that the nonresponse bias may not be large.
To vary the weights, we re-express the ratio imputation models as ratio reweighting models as shown in Table 1, and likewise re-express the presented alternative mean imputation models as the
reweighted count and count_u estimators. We computed these three alternatively weighted estimates for each item by publication industry in our six years of data. For each item, we obtain the ratio
of the count and count_u weighted estimates to the ratio estimates (the current imputation method). Figures 6 and 7 presents the “double-averaged” estimate ratios[ii] for the SAS-H and SAS-T items.
Figure 6: SAS-H Reweighted Estimates (Averaged Within Statistical Period and Across Industry). In these plots, the total items (receipts, expenditures, and payroll) are represented by squares, and
the various detail items are represented by circles. Each graph includes a horizontal asymptote at y = 1 to indicate the estimate ratios that are essentially unaffected by reweighting.
Figure 7: SAS-T Reweighted Estimates (Averaged Within Statistical Period and Across Industry)
With SAS-H, the count and ratio estimates are very close, regardless of whether the collected item is a total or a detail. However, the differences between the count_u and ratio estimates are more
pronounced. The SAS-H results are “different enough” to merit some concern about unmitigated nonresponse bias, whereas the SAS-T results are much more conclusive. The differences among the three sets
of SAS-T estimates are very pronounced, indicating estimation effects caused entirely by changes in adjustment methodology.
These analyses highlight issues with unit nonresponse in business data and challenges in remediating these issues. First, the URR is not necessarily a good measure of representativeness of the sample
(Peytcheva and Groves, 2009). In our case study, the majority of the URRs are at an acceptable level, but the other analyses show that the larger units respond at a higher rate than the smaller
units. By partitioning the existing imputation cells by size categories, we can likely reduce the nonresponse bias. However, there are insufficient numbers of sampled units in the sampling strata to
use them as adjustment cells, and the small number of “large” units makes it challenging to subdivide the existing cells. In the future, it may be possible to develop strata collapsing procedures
during the survey design stage.
Evaluation of the Prediction (Imputation) Models
The SAS uses the ratio imputation model from Table 1 when auxiliary data or historic data from the same unit are not available: Matthews (2011) and Nelson (2011) provides information on each item’s
imputation model. To assess the imputation models’ predictive properties, we fit each regression imputation model within the currently used imputation cells with the SAS SURVEYREG procedure, again
excluding certainty cases. Figures 8 and 9 summarize the regression analysis results for SAS-H and SAS-T, respectively. These figures plot the average R^2 value from each model. We consider any R^2
value above y = 0.75 would to be strongly predictive. The total items (receipts, expenses, and payroll) and detail items are separated by a vertical asymptote and are annotated as such in the
Figure 8: Regression Analysis Results for Item Imputation Models (SAS-H). A blue diamond indicates a consistently significant model (at α = 0.10) and a red square indicates the reverse.
Figure 9: Regression Analysis Results for Item Imputation Models (SAS-T)
In both programs, the models that predict the totals items are strongly predictive. However, the models used to impute the detail items are generally not. Hence model-imputation for totals is
appropriate, but rarely used due to the availability of alternative data sources such as administrative or historic data, and model-imputation for details is not necessarily appropriate, but is
frequently employed.
Of course, the effectiveness of a ratio imputation model for correcting nonresponse bias is highly dependent on the availability of data for parameter estimation. For SAS, the respondent units must
provide valid values for either revenue or expenses, not necessarily both. On the average, the item response rates for SAS are quite low – generally between 50 to 60 percent for totals and between 40
to 60 percent for detail items, regardless of the sector. Furthermore, the unit size does not appear to be a factor in item nonresponse: item response rates computed separately for certainty and
noncertainty units in the same industry tend to be very close.
The earlier analyses provided indications that the SAS imputation cells should be further subdivided to account for unit size. If the imputation parameters are approximately the same for each unit
size category within an imputation cell, then the “dominance” of the large cases would not influence the predictions. On the other hand, if imputation parameters did differ by unit size within
industry, then the adjustment strategy being used is inducing systematic bias.
To investigate this, we obtained the ratio imputation parameters in the current imputation cells, then refit the same regression models with more refined industry cells (splitting the industry data
into certainty and and noncertainty components). Figure 10 presents stacked imputation parameters from the ratio model that uses expenditures to predict revenue using 2010 SAS-H data. Each bar
represents a set of regression imputation parameters from the original imputation cell.
Figure 10: Ratio Imputation Parameters for Revenue/Expenses (SAS-H 2010). The blue bar is the regression parameter obtained using all units in the industry, the red bar is the regression parameter
obtained using only the certainty units, and the green bar is the regression parameter obtained from the noncertainty units.
In Figure 10, all of the imputation parameters are approximately the same, with a few exceptions. This pattern repeats in the SAS-H and SAS-T data for all data collection years. However, this is a
ratio of two well-reported totals items that are generally imputed with auxiliary data. When we examine a similar plot for a typical SAS-H detail item, the situation is quite different, as shown in
Figure 11.
Figure 11: Ratio Parameters for a Typical SAS-H Detail Item Ratio
Here, the imputation parameters computed from the certainty cases have almost exactly the same value as the parameters computed from the complete data in the imputation cell, and the imputation
parameters for the noncertainty units are each quite different. In short, the ratio imputation model causes all imputed units to resemble the certainty units. Similar plots are available for all
ratio imputation parameters upon request, but are not included for brevity. However, the vast majority of imputation model analyses for the detail items demonstrated similar patterns.
Finally, we examine the effect of choice of imputation cell by item, given a nonresponse adjustment method. Figures 12 and 13 shows “double-average” ratios of estimates computed using the same
weights with different adjustment cells, comparing estimates obtained using the existing cells subdivided by certainty status (more refined parameters) to those obtained from the currently used
imputation cells. For SAS-T, the totals do not vary much, regardless of adjustment method, and many of the detail items that were imputed with the ratio model maintain similar levels as well. With
SAS-H, the choice of adjustment cell has a very large impact on the estimate levels, regardless of whether the item is a total or a detail.
Recall that the SAS-T sampled unit population is fairly homogeneous in size, in contrast to the SAS-H sampled unit population. For SAS-T, the choice of adjustment cell is the most important factor in
nonresponse bias mitigation. In this population, the ratio imputation models (which incorporate unit size in the parameter estimation) are quite good for totals, but not so for details. With SAS-H,
it is not immediately clear which factor (adjustment cell or adjustment method) is more important in nonresponse bias mitigation. Although it appears that unit size is not strongly related to
response propensity for this population, it is also apparent that unit size is very related to prediction for the key totals. Unfortunately, this strong relationship is not true for the SAS-H
Figure 12: Comparison of Alternatively Weighted Estimates by Imputation Cell (SAS-H)
Figure 13: Comparison of Alternatively Weighted Estimates by Imputation Cell (SAS-T)
In SAS, imputation is performed independently in each adjustment cell. Consequently, the improper adjustment bias is aggregated, and it is impossible to determine what the cumulative effects of the
bias are (if it exists). Besides, there is a data quality cost. Because all imputed items maintain the certainty-unit ratios, the imputed individual micro-data are not realistic, and all multivariate
item relationships are lost. Furthermore, there is little evidence to validate the ratio models used for the detail items.
This case study highlights several of the major challenges that business surveys encounter in addressing unit nonresponse. Respondents often do not comprise a random subsample, as larger units are
more likely to provide data than smaller units. This phenomenon is an artifact of several factors, including the perceived benefits of the survey by the business community and the existing analyst
nonresponse follow-up procedure, which focuses on obtaining the most accurate estimated totals.
Developing a set of adjustment cells that satisfy the most common ignorable response mechanism conditions and contain sufficient respondents is equally challenging, as there are considerably fewer
“large” units in the population than small units. Finally, there are data collection and quality challenges, as several of the detail items that the survey would like to collect may not be available
from the majority of the sampled units. Again, the respondent sample size issues for the detail items are compounded by collecting different sets of detail items by industry or sector.
For SAS, we hope to improve existing adjustment techniques by refining the adjustment cells to account for missing covariates simply by subdividing the cells into certainty and noncertainty
components. This should not detrimentally affect the quality of the estimates of the totals items, and may improve the ratio imputation procedures for the details. However, especially with low item
response, we have no way of validating the latter. Simply put, we need data.
There are several excellent references on the use of adaptive or responsive designs to reduce the incidence of nonresponse bias by monitoring data collection and adapting procedures on a flow basis,
utilizing different nonresponse follow-up strategies depending on response propensity (Groves and Heeringa, 2009; Laflamme et al, 2008), focussing on small businesses. This adaptive strategy could
provide the information needed to learn about the missing data characteristics and would yield more statistically defensible nonresponse bias-amelioration procedures.
[i] These tests exclude certainty cases via the finite population correction (fpc).
[ii] The double averaging eliminates noise and does not affect the interpretation of the results. In general, the individual item ratios did not differ until the third decimal place across collection
period, i.e. the effects of alternative weighting on item estimates are similar across collection periods within the same industry. Likewise, the effects of alternative weighting on the item
estimates were very similar across industries within the same statistical period.
1. Andridge, R.R. and Little, R.J.A. (2011). Proxy Pattern-Mixture Analysis for Survey Nonresponse. Journal of Official Statistics, 27, pp. 153-180.
2. Bavdaž, M. (2010). The Multidimensional Integral Business Survey Response Model. Survey Methodology, 1, pp. 81-93.
3. Beaumont, J.F., Haziza, D., and Bocci, C. (2011). On Variance Imputation Under Auxiliary Variable Imputation in Sample Surveys. Statistica Sinica, 21, pp. 515-537.
4. Groves, R. and Brick, J. (2005). Practical Tools for Nonresponse Bias Studies. Joint Program in Survey Methodology course notes.
5. Groves, R. and Heeringa, S. (2006). Responsive Design for Household Surveys: Tools for Actively Controlling Survey Errors and Costs. Journal of the Royal Statistical Society, Series A, 169, pp.
6. Kalton, G. and Flores-Cervantes, I. (2003). Weighting Methods. Journal of Official Statistics, 19, pp. 81-97.
7. Kalton, G. and Kasprzyk, D. (1986). The Treatment of Missing Survey Data. Survey Methodololgy, 12, pp. 1 -16.
8. Kish, L. (1992). Weighting for Unequal P[i], Journal of Official Statistics, 8, pp.183-200.
9. Laflamme, F., Maydan, M., and Miller, A. (2008). Using Paradata to Actively Manage Data Collection Survey Process. Proceedings of the Section on Survey Research Methods, American Statistical
10. Little, R.J.A. and Rubin, D.R. (2002). Statistical Analysis with Missing Data. New York: John Wiley and Sons.
11. Little, R.J.A. and Vartivarian, S. (2005). Does Weighting for Nonresponse Increase the Variance of Survey Means? Survey Methodology, 31, pp. 161-168.
12. Lohr, S.L. (2010). Sampling: Design and Analysis (2^nd Edition). Boston: Brooks/Cole.
13. Matthews, B. (2011). StEPS Imputation Specifications for the Health Portion of the 2010 Service Annual Survey (SAS-H). U.S. Census Bureau internal memorandum, EDMS #7526, available upon request.
14. Oh, H.L., and Scheuren, F.J. (1983). Weighting Adjustment of Unit Nonresponse. Incomplete Data in Sample Surveys. New York: Academic Press, 20, 143-184.
15. Nelson, M. (2011). StEPS Imputation Specifications for the Transportation Portion of the 2010 Service Annual Survey (SAS-T). U.S. Census Bureau internal memorandum, EDMS #38537, available upon
16. Peytcheva, E. and Groves, R. (2009). Using Variation in Response Rates of Demographic Subgroups and Evidence of Nonresponse Bias in Survey Estimates. Journal of Official Statistics, 25, pp.
17. Roberts, G., Rao, J.N.K., and Kumar, S. (1987). Logistic Regression Analysis of Sample Survey Data. Biometrika, 74, pp. 1 – 12.
18. Särndal, C.E. (2011). The 2010 Morris Hansen Lecture: Dealing with Survey Nonresponse in Data Collection, in Estimation. Journal of Official Statistics, 27, pp. 1-21.
19. Särndal, C.E., Swenssen, B., and Wretman, J. (1992). Model Assisted Survey Sampling. New York: Springer-Verlag.
20. Särndal, C.E. and Lundström, S. (2005). Estimation in Surveys with Nonresponse. New York: John Wiley & Sons, Inc.
21. Särndal , C.-E. and Lundström , S. (2010). Design for Estimation: Identifying Auxiliary Variables to Reduce Nonresponse Bias. Survey Methodology, 36, pp. 131-144.
22. Shao, J. and Thompson, K.J. (2009). Variance Estimation In The Presence of Nonrespondents and Certainty Strata. Survey Methodology, 35, pp. 215-225.
23. Sigman, R.S. and Wagner, D. (1997). Algorithms for Adjusting Survey Data That Fail Balance Edits. Proceedings of the Section on Survey Research Methods, American Statistical Association.
24. Thompson, K.J. and Oliver, B.E. (2012). Response Rates in Business Surveys: Going Beyond the Usual Performance Measure. Journal of Official Statistics, 28, pp. 221-237.
25. Thompson, K.J. and Washington, K.T. (2012). Challenges in the Treatment of Nonresponse for Selected Business Surveys. Proceedings of the Section on Survey Research Methods, American Statistical
26. Willimack, D.K. and Nichols, E. (2010). A Hybrid Response Process Model for Business Surveys. Journal of Official Statistics, 1, pp. 3-24.
|
{"url":"https://surveyinsights.org/?p=2991","timestamp":"2024-11-11T19:17:02Z","content_type":"application/xhtml+xml","content_length":"107987","record_id":"<urn:uuid:646a45ca-25ae-486c-8f68-5f74c4831298>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00151.warc.gz"}
|
Documents For An Access Point
Click the serial number on the left to view the details of the item.
# Author Title Accn# Year Item Type Claims
1 John Iliopoulos Origin of mass: Elementary Particles and Fundamental Symmetries OB1497 2017 eBook
2 Belal E. Baaquie Exploring the invisible universe: From Black Holes to Superstrings 025991 2015 Book
3 Harald Fritzsch (ed.) 50 years of quarks 025949 2015 Book
4 Alessandro, Bettini Introduction to elementary particle physics 025959 2014 Book
5 Erick J. Weinberg Classical solutions in quantum field theory: Solitons and Instantons in High Energy Physics 024788 2012 Book
6 D. B. Lichtenberg Unitary symmetry and elementary particles 024434 1978 Book
7 Leonard Susskind Introduction to black holes, information and the string theory revolution : The holographic universe 020181 2004 Book
8 Patricia M. Schwarz Special relativity: From Einstein to strings C19222 2004 CD/DVD
9 Patricia M. Schwarz Special relativity: From Einstein to strings 019222 2004 Book
10 Veltman, M.J.G. Facts and mysteries in elementary particle physics 018289 2003 Book
(page:1 / 3) [#22] Next Page Last Page
Title Origin of mass: Elementary Particles and Fundamental Symmetries
Author(s) John Iliopoulos
Publication Oxford University Press 2017.
Abstract Why do most ‘elementary particles’ which form the constituents of all matter have a non-zero mass? Strange question, apparently in contradiction with our physical intuition. In this
Note little book we attempt to explain that the question is far from being trivial and that the answer can be found in the recent discovery of a new particle in the Large Hadron Collider
(LHC) at CERN near Geneva. We offer the reader a guided tour, starting from the tiny fractions of a second after the Big Bang, when all particles have been created, to the present
experiments we perform in our laboratories. We show that the Universe follows a profound symmetry principle which seems to determine the structure of the world.
ISBN,Price Rs 0.00
Keyword(s) 1. BIG BANG 2. EBOOK 3. EBOOK - OXFORD UNIVERSITY PRESS 4. ELEMENTARY PARTICLE PHYSICS 5. MASS 6. SYMMETRY
Item Type eBook
Multi-Media Links
Please Click here for eBook
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
OB1497 On Shelf
Title Exploring the invisible universe: From Black Holes to Superstrings
Author(s) Belal E. Baaquie;Frederick H. Willeboordse
Publication New Jersey, World Scientific Publishing Co. Pvt. Ltd., 2015.
Description xvi, 471p.
Abstract Note Book covers the gamut of topics in advanced modern physics and provides extensive and well substantiated answers to these questions and many more. Discussed in a non-technical, yet
also non-trivial manner, are topics dominated by invisible things - such as Black Holes and Superstrings as well as Fields, Gravitation, the Standard Model, Cosmology, Relativity, the
Origin of Elements, Stars and Planetary Evolution, and more. Just giving the answer, as so many books do, is really not telling anything at all. To truly answer the "why" questions of
nature, one needs to follow the chain of reasoning that scientists have used to come to the conclusions they have. This book does not shy away from difficult-to-explain topics by
reducing them to one-line answers and power phrases suitable for a popular talk show. The explanations are rigorous and straight to the point. This book is rarely mathematical without
being afraid, however, to use elementary mathematics when called for. In order to achieve this, a large number of detailed figures, specially developed for this book and found nowhere
else, convey insights that otherwise might either be inaccessible or need lengthy and difficult-to-follow explanations.
ISBN,Price 9789814618670 : US 45.00(HB)
Classification 524.8
Keyword(s) 1. BLACK HOLE 2. COSMOLOGY 3. DARK UNIVERSE 4. EBOOK 5. EBOOK - WORLD SCIENTIFIC 6. ELEMENTARY PARTICLE PHYSICS 7. GRAVITATION 8. ORIGIN OF ELEMENTS 9. STANDARD MODEL 10. SUPERSTRINGS
Item Type Book
Multi-Media Links
Please Click Here for Online Book
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
025991 524.8/BAA/025991 On Shelf
OB1190 524.8/BAA/ On Shelf
+Copy Specific Information
Title 50 years of quarks
Author(s) Harald Fritzsch (ed.);Murray Gell-Mann (ed.)
Publication New Jersey, World Scientific Publishing Co. Pvt. Ltd., 2015.
Description x, 506p.
Abstract Note Today it is known that the atomic nuclei are composed of smaller constituents, the quarks. A quark is always bound with two other quarks, forming a baryon or with an antiquark,
forming a meson. The quark model was first postulated in 1964 by Murray Gell-Mann â who coined the name â quarkâ from James Joyce's novel Finnegans Wake â and by George Zweig,
who then worked at CERN. In the present theory of strong interactions â Quantum Chromodynamics proposed by H Fritzsch and Gell-Mann in 1972 â the forces that bind the quarks
together are due to the exchange of eight gluons.On the 50th anniversary of the quark model, this invaluable volume looks back at the developments and achievements in the elementary
particle physics that eventuated from that beautiful model. Written by an international team of distinguished physicists, each of whom have made major developments in the field, the
volume provides an essential overview of the present state to the academics and researchers.
ISBN,Price 9789814618106 : US $48.00(PB)
Classification 539.12
Keyword(s) 1. CONCRETE QUARKS 2. EBOOK 3. EBOOK - WORLD SCIENTIFIC 4. ELEMENTARY PARTICLE PHYSICS 5. QCD 6. QUANTUM CHROMODYNAMICS 7. QUARKS
Item Type Book
Multi-Media Links
Please Click Here for Online Book
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
025949 539.12/FRI/025949 On Shelf
OB1152 539.12/FRI/ On Shelf
+Copy Specific Information
Title Introduction to elementary particle physics
Author(s) Alessandro, Bettini
Edition 2nd
Publication Cambridge 2014.
Description xvii, 474p.
Abstract The second edition of this successful textbook is fully updated to include the discovery of the Higgs boson and other recent developments, providing undergraduate students with complete
Note coverage of the basic elements of the standard model of particle physics for the first time. Physics is emphasised over mathematical rigour, making the material accessible to students
with no previous knowledge of elementary particles. Important experiments and the theory linked to them are highlighted, helping students appreciate how key ideas were developed. The
chapter on neutrino physics has been completely revised, and the final chapter summarises the limits of the standard model and introduces students to what lies beyond. Over 250 problems,
including sixty that are new to this edition, encourage students to apply the theory themselves. Partial solutions to selected problems appear in the book, with full solutions and slides
of all figures available at www.cambridge.org/9781107050402.
ISBN,Price 9781107050402 : UKP 40.00(HB)
Keyword(s) 1. EBOOK 2. EBOOK - CAMBRIDGE UNIVERSITY PRESS 3. ELEMENTARY PARTICLE PHYSICS
Item Type Book
Multi-Media Links
Please Click here for eBook
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
025959 539.12/BET/025959 On Shelf
OB0810 539.12/BET/ On Shelf
+Copy Specific Information
Title Classical solutions in quantum field theory: Solitons and Instantons in High Energy Physics
Author(s) Erick J. Weinberg
Publication Cambridge, Cambridge University Press, 2012.
Description xiv, 326p.
Series (Cambridge monographs on mathematical physics)
Abstract Note Classical solutions play an important role in quantum field theory, high energy physics and cosmology. Real-time soliton solutions give rise to particles, such as magnetic monopoles,
and extended structures, such as domain walls and cosmic strings, that have implications for early universe cosmology. Imaginary-time Euclidean instantons are responsible for
important nonperturbative effects, while Euclidean bounce solutions govern transitions between metastable states. Written for advanced graduate students and researchers in elementary
particle physics, cosmology and related fields, this book brings the reader up to the level of current research in the field. The first half of the book discusses the most important
classes of solitons: kinks, vortices and magnetic monopoles. The cosmological and observational constraints on these are covered, as are more formal aspects, including BPS solitons
and their connection with supersymmetry. The second half is devoted to Euclidean solutions, with particular emphasis on Yangâ Mills instantons and on bounce solutions.
ISBN,Price 9780521114639 : UKP 60.00(HB)
Classification 530.145
Keyword(s) 1. EBOOK 2. EBOOK - CAMBRIDGE UNIVERSITY PRESS 3. ELEMENTARY PARTICLE PHYSICS 4. HIGH ENERGY PHYSICS 5. INSTANTONS 6. QUANTUM FIELD THEORY 7. SOLITONS
Item Type Book
Multi-Media Links
Click Here for Online Book
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
024788 530.145/WEI/024788 On Shelf
OB0462 530.145/WEI/ On Shelf
+Copy Specific Information
Multi-Media Links
click here for eBook
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
024434 539.12/LIC/024434 On Shelf
OB0360 539.12/LIC/ On Shelf
+Copy Specific Information
Title Introduction to black holes, information and the string theory revolution : The holographic universe
Author(s) Leonard Susskind;James Lindesay
Publication Singapore 2004.
Description xv, 183p.
ISBN,Price 9812561315 : US$ 14.00
Classification 524.88:539.12
Keyword(s) 1. ELEMENTARY PARTICLE PHYSICS 2. GRAVITATIONAL RESEARCH 3. HOLOGRAPHIC UNIVERSE 4. QUANTUM FIELD THEORY 5. STRING THEORY
Item Type Book
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
020181 524.88:539.12/SUS/020181 On Shelf
+Copy Specific Information
Title Special relativity: From Einstein to strings
Author(s) Patricia M. Schwarz;John H. Schwarz
Publication Cambridge, Cambridge University Press, 2004.
Contents Note The book is about special theory of relativity. This theory overthrew the classical view of space and time as distinct and absolute entities that provide the backdrop on which
physical reality is superimposed. In special relativity space and time must be viewed together (as spacetime) to make sense of the constancy of the speed of light and the structure of
Maxwell's electromagnetic theory. The book is divided into two parts - entitled 'Fundamentals' and 'Advanced Topics'. The first part gives a detailed explanation of special
relativity. The second part of the book includes advanced topics that illustrate how relativity has impacted subsequent developments in theoretical physics upto and including modern
work on superstring theory.
Notes With Special relativity: From Einstein to strings (011948)
Classification 530.12:531.18
Keyword(s) 1. CAUSALITY (PHYSICS) 2. CD-ROM 3. ELEMENTARY PARTICLE PHYSICS 4. GROUP THEORY 5. SPACETIME GEOMETRY 6. SPECIAL RELATIVITY 7. SUPERSYMMETRY
Item Type CD/DVD
Multi-Media Links
COMPANION CD
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
C19222 530.12:531/SCH/C19222 On Shelf
+Copy Specific Information
Title Special relativity: From Einstein to strings
Author(s) Patricia M. Schwarz;John H. Schwarz
Publication Cambridge, Cambridge University Press, 2004.
Description 369p.
Contents Note The book is about special theory of relativity. This theory overthrew the classical view of space and time as distinct and absolute entities that provide the backdrop on which
physical reality is superimposed. In special relativity space and time must be viewed together (as spacetime) to make sense of the constancy of the speed of light and the structure of
Maxwell's electromagnetic theory. The book is divided into two parts - entitled 'Fundamentals' and 'Advanced Topics'. The first part gives a detailed explanation of special
relativity. The second part of the book includes advanced topics that illustrate how relativity has impacted subsequent developments in theoretical physics upto and including modern
work on superstring theory.
ISBN,Price 0521812607 : Rs. 1750.00
Classification 530.12:531.18
Keyword(s) 1. CAUSALITY (PHYSICS) 2. EBOOK 3. EBOOK - CAMBRIDGE UNIVERSITY PRESS 4. ELEMENTARY PARTICLE PHYSICS 5. GROUP THEORY 6. SPACETIME GEOMETRY 7. SPECIAL RELATIVITY 8. SUPERSYMMETRY
Item Type Book
Multi-Media Links
Click here for online book
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
019222 530.12:531.18/SCH/019222 On Shelf
OB0216 530.12:531.18/SCH/OB0216 On Shelf
+Copy Specific Information
Multi-Media Links
Please Click Here for Online Book
Circulation Data
Accession# Call# Status Issued To Return Due On Physical Location
018289 539.12/VEL/018289 On Shelf
OB1191 539.12/VEL/ On Shelf
+Copy Specific Information
|
{"url":"http://ezproxy.iucaa.in/wslxRSLT.php?A1=1209","timestamp":"2024-11-09T20:39:10Z","content_type":"text/html","content_length":"46601","record_id":"<urn:uuid:03ee36e5-5703-4962-8991-f53d475b92f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00320.warc.gz"}
|
Slope Calculator
By definition, the slope or gradient of a line describes its steepness, incline, or grade.
m — slope
θ — angle of incline
If the 2 Points are Known
Distance (d) = √ΔX^2 + ΔY^2 = √2 = 1.4142135623731
Equation of the line:
y = x
When x=0, y = 0
When y=0, x = 0
If 1 Point and the Slope are Known
Slope, sometimes referred to as gradient in mathematics, is a number that measures the steepness and direction of a line, or a section of a line connecting two points, and is usually denoted by m.
Generally, a line's steepness is measured by the absolute value of its slope, m. The larger the value is, the steeper the line. Given m, it is possible to determine the direction of the line that m
describes based on its sign and value:
• A line is increasing, and goes upwards from left to right when m > 0
• A line is decreasing, and goes downwards from left to right when m < 0
• A line has a constant slope, and is horizontal when m = 0
• A vertical line has an undefined slope, since it would result in a fraction with 0 as the denominator. Refer to the equation provided below.
Slope is essentially the change in height over the change in horizontal distance, and is often referred to as "rise over run." It has applications in gradients in geography as well as civil
engineering, such as the building of roads. In the case of a road, the "rise" is the change in altitude, while the "run" is the difference in distance between two fixed points, as long as the
distance for the measurement is not large enough that the earth's curvature should be considered as a factor. The slope is represented mathematically as:
In the equation above, y[2] - y[1] = Δy, or vertical change, while x[2] - x[1] = Δx, or horizontal change, as shown in the graph provided. It can also be seen that Δx and Δy are line segments that
form a right triangle with hypotenuse d, with d being the distance between the points (x[1], y[1]) and (x[2], y[2]). Since Δx and Δy form a right triangle, it is possible to calculate d using the
Pythagorean theorem. Refer to the Triangle Calculator for more detail on the Pythagorean theorem as well as how to calculate the angle of incline θ provided in the calculator above. Briefly:
d = √(x[2] - x[1])^2 + (y[2] - y[1])^2
The above equation is the Pythagorean theorem at its root, where the hypotenuse d has already been solved for, and the other two sides of the triangle are determined by subtracting the two x and y
values given by two points. Given two points, it is possible to find θ using the following equation:
m = tan(θ)
Given the points (3,4) and (6,8) find the slope of the line, the distance between the two points, and the angle of incline:
d = √(6 - 3)^2 + (8 - 4)^2 = 5
While this is beyond the scope of this calculator, aside from its basic linear use, the concept of a slope is important in differential calculus. For non-linear functions, the rate of change of a
curve varies, and the derivative of a function at a given point is the rate of change of the function, represented by the slope of the line tangent to the curve at that point.
|
{"url":"https://www.calculator.net/slope-calculator.html?type=1&x11=1&y11=1&x12=2&y12=2&x=74&y=30","timestamp":"2024-11-09T16:28:57Z","content_type":"text/html","content_length":"15443","record_id":"<urn:uuid:2f48b1ed-c699-4abb-9754-7946cf24ba9a>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00080.warc.gz"}
|
Population History
Euclid Population By Year
Year Population Rank in US Growth Rate
2023 48,212 838 -0.6%
2022 48,489 826 -0.9%
2021 48,928 814 -1.1%
2020 49,487 804 0.1%
2010 48,920 728 -0.7%
2000 52,600 589 -0.4%
1990 54,864 456 –
Euclid Population Facts
What is the current population of Euclid?
Based on the latest 2024 data from the US census, the current population of Euclid is 48,212. Euclid, Ohio is the 838th largest city in the US.
What county is Euclid, Ohio in?
Euclid is located entirely in Cuyahoga County.
What is the size of Euclid, Ohio in square miles?
Euclid has an area of 10.6 square miles.
What was the peak population of Euclid?
The peak population of Euclid was in 1990, when its population was 54,864. In 1990, Euclid was the 456th largest city in the US; now its fallen to the 838th largest city in the US. Euclid is
currently 12.1% smaller than it was in 1990.
How quickly is Euclid shrinking?
Euclid has shrunk 8.3% since the year 2000. Euclid, Ohio's growth is extremely below average. 94% of similarly sized cities are growing faster since 2000.
What is the population density of Euclid, Ohio?
Euclid has a population density of 4,602.1 people per square mile.
Euclid Demographics
What is the voting age population of Euclid, Ohio?
The total voting age population of Euclid, Ohio, meaning US citizens 18 or older, is 36,672. The voting age population is 43.6% male and 56.4% female.
What percentage of Euclid, Ohio residents are senior citizens?
According to the latest census statistics, 17.1% of the residents of Euclid are 65 or older.
What are the racial demographics of Euclid, Ohio?
The racial demographics of Euclid are 61.9% Black, 34.3% White, 2.2% Two or more races, 0.8% Other, 0.6% Asian and 0.1% American Indian. Additionally, 1.3% of the population identifies as Hispanic.
What percentage of Euclid, Ohio residents are below the poverty line?
In Euclid, 21.8% of residents have an income below the poverty line, and the child poverty rate is 31.7%. On a per-household basis, 16.7% of families are below the poverty line in Euclid.
What percentage of Euclid, Ohio residents are in the labor force?
Among those aged 16 and older, 61.4% of Euclid residents are in the labor force.
What are the education levels among Euclid, Ohio residents?
Among the adult population 25 years old and over, 89.8% of Euclid residents have at least a high school degree or equivalent, 19.5% have a bachelor's degree and 6.3% have a graduate or professional
What percentage of Euclid, Ohio residents speak a non-English language at home?
Among Euclid residents aged 5 and older, 6.0% of them speak a non-English language at home. Broken down by language: 2.1% of residents speak Spanish at home, 2.9% speak an Indo-European language, and
0.3% speak an Asian language.
Euclid Income & Labor Statistics
What is the unemployment rate in Euclid, Ohio?
The unemployment rate in Euclid is 9.1%, which is calculated among residents aged 16 or older who are in the labor force.
What percentage of Euclid, Ohio residents work for the government?
In Euclid, 13.2% of the residents in the non-military labor force are employed by the local, state and federal government.
What is the median income in Euclid, Ohio?
The median household income in Euclid is $38,242.
Euclid Housing & Rent Statistics
What percentage of housing units are owner-occupied in Euclid, Ohio?
In Euclid, 45.6% of housing units are occupied by their owners.
What percentage of housing units are rented in Euclid, Ohio?
Renters occupy 54.4% of housing units in Euclid.
What percentage of Euclid, Ohio housing units were built before 1940?
Of all the housing units in Euclid, 15.5% of them were build before 1940.
What percentage of Euclid, Ohio housing units were built after 2000?
In Euclid, 1.0% of the total housing units were built after the year 2000, which is approximately 280 units.
What is the median monthly rent in Euclid, Ohio?
The median gross monthly rent payment for renters in Euclid is $782.
What percentage of households in Euclid, Ohio have broadband internet?
In Euclid, 72.9% of households have an active broadband internet connection.
|
{"url":"https://www.biggestuscities.com/city/euclid-ohio","timestamp":"2024-11-04T18:41:54Z","content_type":"text/html","content_length":"55220","record_id":"<urn:uuid:27223e6b-d2e5-4037-b027-1f58ffa06a88>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00257.warc.gz"}
|
Data Types in MATLAB | Guide to Conversion of Data Type in MATLAB
Updated March 20, 2023
Overview of Data Types in MATLAB
Data Types in MATLAB are the supported data formats that are used for computation. MATLAB is a popular mathematical and statistical data analysis tool that has a wide range of features for
computation. The various types of data type MATLAB supporting are numeric types, characters, strings, date and time, categorical arrays, tables, timetables, Structures, Cell Arrays, Functional
Handlers, Map Containers, Time series, Data type Identification, Data type conversion. Each of the data types accepts and processes a certain type of data format through the variables. MATLAB
provides the functionalities to convert one data type to another compatible data type using the conversion functions
Data Types in MATLAB
Following are the Data Types:-
• Numeric Types
• Characters and Strings
• Date and Time
• Categorical Arrays
• Tables
• Timetables
• Structures
• Cell Arrays
• Functional Handles
• Map Containers
• Time Series
• Data Type Identification
• Data Type Conversion
Let’s see the significance of the individual Data Types in MATLAB in details-
1. Numeric Types: Under this type comes Integer and floating-point or fraction data
2. Characters and Strings: Text are represented in character arrays and string arrays
3. Dates and Time: This contains arrays of date and time values which can be again shown in many different formats such as DD/MM/YYYY or MM/DD/YY etc.
4. Categorical Arrays: Under this comes arrays of qualitative data such as a list with values from a finite set of discrete sampled or data of the type non-numeric.
5. Tables: Arrays are represented here in a tabular form whose named columns may contain different types such as numeric, categorical, etc.
6. Timetables: Time-stamped data such as DD/MM/YYYY/HR/MIN/SEC in tabular form.
7. Structures: Most versatile as well as complex, this type contains arrays with named fields that contain varying types and sizes.
8. Cell Arrays: This again is a data type where an array can contain data of variable types and sizes.
9. Function Handles: Such data types allow variables to call a function indirectly.
10. Map Containers: Similar to the dictionary in many languages, such data types have objects with keys where the key is indexed to values, where keys need not be integers.
11. Time Series: Time series data has a specific type where data vectors are sampled over the time period.
12. Data Type Identification: Such data types help us determine the data type of any variable.
13. Data Type Conversion: Using such types, we can convert between many data types such as numeric arrays, cell arrays, character arrays, structures, function handles, and tables, etc.
Now let’s look into each type with more details
Data Types Definition
Int8 This is called 8 bits signed integer
Uint8 This is 8 bits unsigned integer
Int16 16 bits signed integer
Uint16 16 bits unsigned integer
Int32 32 bits signed integer
Uint32 32 bits unsigned integer
Int64 64 bits signed integer
Uint64 64 bits unsigned integer
Single This is called single-precision numeric data
Double This is double-precision numeric data
logical The logical value of 0 or 1 represents true or false
char Character data such as alphabets
Cell array an array of indexed cells where each cell is capable of storing an array of the same or different dimensions and different data type
structure This is more like a C structure where each structure has a named field which is capable of storing an array of different size or dimension and different data types
Function handle This acts as a pointer to a function
User classes Such data types represent objects which are constructed from a user-defined class
Java classes Such types represent objects which are constructed from a Java class.
Below is the example
strg = 'Hello MATLAB!'
n = 234510
dbl = double(n)
unt = uint32(7891.50)
rrn = 15678.92347
cons = int32(rrn)
Output: –
strg = Hello MATLAB!n = 234510dbl = 234510unt = 7901rrn = 15678.9cons = 15679
• In the above example, strng is string data type, n is numeric data type, dbl is double data type, unt is 32 bit unsigned integer, rrn is fractional data which is converted to int 32 integer and
stored as cons.
Conversion of Data Types in MATLAB
Function Purpose
char This function converts from to character array (string)
int2str This function converts from integer data to the string
mat2str This function converts from a matrix to string
num2str This function converts from number to string
str2double This function converts from string to double-precision value
str2num This function converts from string to number
native2unicode This function converts from numeric bytes to Unicode characters
unicode2native This function converts from Unicode characters to numeric bytes
base2dec This function converts from base N number string to decimal number
bin2dec This function converts from binary number string to decimal number
dec2base This function converts from decimal to base N number in string
dec2bin This function converts from decimal to binary number in string
dec2hex This function converts from decimal to hexadecimal number in string
hex2dec This function converts from hexadecimal number string to decimal number
hex2num This function converts from hexadecimal number string to double-precision number
num2hex This function converts from singles and doubles to IEEE hexadecimal strings
cell2mat This function converts from cell array to numeric array
cell2struct This function converts from cell array to structure array
cellstr This function creates a cell array of strings from a character array
mat2cell This function converts from array to cell array with potentially different sized cells
num2cell This function converts from array to cell array with consistently sized cells
struct2cell This function converts from structure to cell array
• From the above discussion and example, we got a deep look into the various data types of MATLAB programming language. Each of these data types is very important and MATLAB users need to deeply
understand the property and usages of each of these types to write efficient MATLAB programs that are fast, optimized for performance, and scalable for future needs.
• As a beginner, users are advised to practice a lot of these syntaxes so that they can understand their usages and relative advantages and disadvantages. Such coding practice is important to have
great control over any language and to be able to write efficient MATLAB codes.
Recommended Articles
This has been a guide to Data Types in MATLAB. Here we discuss the introduction, list, and conversions of data types in MATLAB with an example. You can also go through our other suggested articles to
learn more –
|
{"url":"https://www.educba.com/data-types-in-matlab/","timestamp":"2024-11-08T14:09:13Z","content_type":"text/html","content_length":"313090","record_id":"<urn:uuid:4692f7f4-02fd-4cf8-b9ae-1da04b838919>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00768.warc.gz"}
|
Is that Number Even or Odd? – Commentary on Math Lesson Planning
Let’s teach/plan an elementary lesson on even and odd numbers – first step, compile the basic information on that topic:
The even numbers are all integers with the last digit (i.e. 1's digit) of any chosen number ending in 0, 2, 4, 6 or 8, and it is classified as an ‘even number’. Odd numbers are counting numbers,
again with the last digit ending in a 1, 3, 5, 7 or 9. Let’s include a definition from the Internet on even numbers. An even number is an integer which is "evenly divisible" by two. This means that
if the integer is divided by 2, it yields no remainder. Conversely, an odd number is a whole number not divisible by 2 and produces a remainder.
Quite frankly, this is not an easy lesson for elementary students to understand and master, especially when both the Common Core State Standards (CCSS) and the Texas Essential Knowledge and Skills
(TEKS) specify the ‘determination of even numbers’ as a second grade mathematics standard. Second graders are not about to grasp the meaning of words like remainder, integer or divisible – at least
not on a level of physical understanding. Both standards refer to ‘the pairing’ of numbers when determining if a number is even or odd, and this is exactly where this lesson should be begin. Over the
years, I have observed several versions of effective ‘pairing’ lessons in primary classrooms, and I would like to share a primary grade level manipulative/tactile lesson in particular on even and odd
number pairing.
A manipulative primary grade lesson on even and odd number determination
The classroom teacher taught his or her students if the chosen number was classified as even or odd by using their fingers, and she added a situational story so her students could remember it. For
example, examining the number 5. Is the number 5 classified as an even or odd number? Students count to the number five (5) by alternating their number count on each hand and raising a digit (i.e.
starting with the thumb on each hand) as they count. In the end, they have two digits raised on one hand and three digits on the other. Bringing the hands together and matching the digits, there is
one digit (i.e. finger) that has no matching partner on the other hand. And, the teacher’s story is dance partners – the number five (5) leaves a finger with no dance partner to pair with, so the
number five (5) is an odd number.
Why is this activity a great tactile manipulative for determining even and odd numbers?
• It is simple, and it has a story that goes with it – easily remembered. The lesson also avoids complicated math vocabulary – easily learned later when the physical concept is mastered.
• The manipulative – digits (thumbs and fingers) – students have access to it at all times, and the technique readily transitions to a more abstract paper – pencil number only exercise.
• Two students - each counting on one hand and pairing hands - cooperatively talking out the solution. Learning by doing!
• A significant number of Title 1 intermediate elementary students do not understand even and odd number classification. This method may be taught students of that age in minutes - time efficient -
and they own the content.
• It is a first step in sequential lesson planning that may be used with intermediate students so they comprehend the reason whole numbers of any magnitude are classified as either even or odd, by
place value of the 1's digit (e.g. ten possibilities, five fingers/thumb on each hand) – affording a thorough mathematical understanding of even and odd numbers.
• The divisibility rule of two (2) and a remainder (e.g. odd numbers) can be physically taught to intermediate elementary students using the same manipulative/tactile method as previously learned
in second grade.
Even and odd number classification is important, and students must master the content by using highly effective pedagogical methods like this one. However, more importantly, successful elementary
math teachers break down complicated concepts to simple ideas so students readily understand and retain the physical meaning of the mathematics.
|
{"url":"https://www.thenew3rseducationconsulting.com/post/2018/01/13/is-that-number-even-or-odd-commentary-on-math-lesson-planning","timestamp":"2024-11-05T17:19:40Z","content_type":"text/html","content_length":"881479","record_id":"<urn:uuid:533a579c-d5e7-4bcc-924f-717a69cfa165>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00683.warc.gz"}
|
Seth Sullivant : Statistically-Consistent k-mer Methods for Phylogenetic Tree Reconstruction
Javascript must be enabled
Seth Sullivant : Statistically-Consistent k-mer Methods for Phylogenetic Tree Reconstruction
Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the
squared-Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences
without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing the corrected distance
out-performs many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well, since k-mer
methods are usually the first step in constructing a guide tree for such algorithms. This is joint work with Elizabeth Allman and John Rhodes.
0 Comments
Comments Disabled For This Video
|
{"url":"https://www4.math.duke.edu/media/watch_video.php?v=ca73ab2babb29b54e06a220b47697a9a","timestamp":"2024-11-11T17:40:52Z","content_type":"text/html","content_length":"48093","record_id":"<urn:uuid:35ca003e-0098-4747-883c-ea1a9fb2b0d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00171.warc.gz"}
|
Intermediate Algebra (Pre-Calculus)
This 5 Dvd Series thoroughly reviews Intermediate Algebra and prepares students for calculus. Step by step presentations and lots of worked out problems enable students to clarify these often
confusing concepts and understand the pattern behind solving these types of problems. Colorful and illustrative diagrams help clarify concepts. Lots of interactive exercises build student skills and
confidence necessary to develop mastery in Intermediate Algebra.
• Math Made Easy tutorials simplify complex topics into easy to understand compact lessons
• Math Made Easy’s colorful computer graphics help students visualize abstract concepts
• Math Made Easy tutorials provide extensive interactive exercises that give students 'hands on practice'
• Math Made Easy tutorials contain 'real life applications'
• Math Made Easy tutorials emphasize the critical underlying concepts
• Free access to Math Made Easy Testing Site with hundreds of practice tests to measure your progress
What Is Math made easy's Track Record?
• 87% raised their next test score 10 points or more
• 78% raised their math grade at least one level
• 92% went from failing to passing pre-calculus
Best Of All It's Guaranteed!
We are very confident that our program will help you succeed. If you are not satisfied for any reason, we will refund you in full for 30 days.
Special Bonus!
Purchase Math Made Easy's Pre-Calculus Series and receive Free Access to Math Made Easy Testing Sites with hundreds of practice tests to measure your progress!
|
{"url":"https://mathmadeeasy.com/index.php?main_page=product_info&cPath=1&products_id=10","timestamp":"2024-11-08T21:06:28Z","content_type":"text/html","content_length":"42775","record_id":"<urn:uuid:34bda52f-4e71-4a7d-869f-72c5c06a272d>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00712.warc.gz"}
|
Problem Golomb Ruler
A Golomb ruler is defined as a set of $n$ integers $0 = a_1 < a_2 < … < a_n$ such that the $n \times (n-1)/2$ differences $a_j - a_i$, $1 \leq i < j \leq n$, are distinct. Such a ruler is said to
contain $n$ marks (or ticks) and to be of length $a_n$. The objective is to find optimal rulers (i.e., rulers of minimum length).
An optimal Golomb ruler with 4 ticks. Image from [commons.wikimedia.org](https://commons.wikimedia.org/wiki/File:Golomb_Ruler-4.svg) </small>
This problem (and its variants) is said to have many practical applications including sensor placements for x-ray crystallography and radio astronomy.
Dimitromanolakis has computed relatively short Golomb rulers and thus showed with computer aid that the optimal ruler for $n \leq 65,000$ has length less than $n^2$.
To build a COP (Constraint Optimization Problem) model, we need first to import the library PyCSP$^3$:
Then, we need some data. Actually, we just need an integer $n$.
We start our COP model by introducing an array $x$ of variables. Becasue Dimitromanolakis has computed relatively short Golomb rulers, and thus showed with computer aid that the optimal ruler for $n
\leq 65,000$ has length less than $n^2$, we use $n^2$ as upper bound of the domains of the variables of $x$.
# x[i] is the position of the ith tick
x = VarArray(size=n, dom=range(n * n))
We can display the structure of the array, as well as the domain of the first variable (remember that all variables have the same domain).
print("Array x: ", x)
print("Domain of any variable: ", x[0].dom)
Array x: [x[0], x[1], x[2], x[3]]
Domain of any variable: 0..15
A simple model involves a single constraint AllDifferent that takes as parameters a list of expressions (and not directly some variables of our model).
# all distances are different
AllDifferent(abs(x[i] - x[j]) for i, j in combinations(n, 2))
Interestingly, by calling the function solve(), we can check that the problem is satisfiable (SAT. We can also display the found solution. Here, we call the function values() that collects (in the
last found solution) the values assigned to a specified list of variables.
if solve() is SAT:
The obtained solution does not necessarily give the ticks in increasing order. It means that many symmetries exist. We can break them by setting the first tick at 0, and ensuring a strict order with
a constraint Increasing:
# tag(symmetry-breaking)
[x[0] == 0, Increasing(x, strict=True)]
Tagging (the list containing) these two constraint is relevant because it clearly informs us that they are inserted for breaking symmetries (and besides, tags may be exploited by solvers). Tagging is
made possible by putting in a comment line an expression of the form tag(), with a token (or a sequence of tokens separated by a white-space) between parentheses.
We can run again the solver.
if solve() is SAT:
This time, all ticks are increasingly given.
Concerning optimization, we want to minimize the length of the rule, which is equivalent to minimize the value of the rightmost variable (tick).
# minimizing the position of the rightmost tick
We can run again the solver, with this optimization task. Note that we need to check that the status returned by the solver is now OPTIMUM.
if solve() is OPTIMUM:
This time, we have an optimal Golomb ruler of length 6 (for 4 ticks).
We invite the reader to change the value of $n$ at the top of this page, and to restart the Jupyter kernel.
Finally, we give below the model in one piece. Here the data is expected to be given by the user (in a command line).
from pycsp3 import *
n = data
# x[i] is the position of the ith tick
x = VarArray(size=n, dom=range(n * n))
# all distances are different
AllDifferent(abs(x[i] - x[j]) for i, j in combinations(n, 2)),
# tag(symmetry-breaking)
[x[0] == 0, Increasing(x, strict=True)]
# minimizing the position of the rightmost tick
|
{"url":"https://pycsp.org/documentation/models/COP/GolombRuler/","timestamp":"2024-11-09T23:21:31Z","content_type":"text/html","content_length":"30164","record_id":"<urn:uuid:8f2845fa-3207-4c64-8c8e-2ba333c1ae34>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00622.warc.gz"}
|
Are intramolecular dynamic electron correlation effects detectable in X-ray diffraction experiments on molecular crystals?
In order to assess whether the effects of intramolecular dynamic electron correlation on the electron density would be experimentally detectable, X-ray structure factors which include thermal
averaging effects have been calculated from the electron densities of a range of small-molecule molecular crystals [C2H6, C2H4, C2H2, BH3NH3, NH3, NH2CN, OCl2, CO(NH2)(2)] using the procrystal,
Hartree-Fock, B3LYP and QCISD wavefunction models with the superposition-of-independent-molecules method to create the electron density in the crystal. A naive R-factor-like criterion of 1% has been
used to assess detectability, as well as a more sophisticated method based on real X-ray data for estimating experimental errors. Correlation effects on the density are found to be only marginally
above the 1% detectability threshold, and are about one to two orders of magnitude smaller than deviations from the procrystal model. Further, only 10% of the data up to 1.2 angstrom(-1) are
significant for detecting correlation effects; and of those 10%, many are at low intensity and therefore difficult to measure. Another method to estimate the experimental errors indicates that the
intramolecular correlation effects would not be measurable. Although thermal averaging effects are important for the absolute value of the calculated structure factors, the use of different thermal
averaging models does not change our overall conclusion of detectability. Likewise, calculations using the B3LYP method for some molecules do not show significant changes in the amount of, or
distribution of, the changes that would be detectable by experiment.
Dive into the research topics of 'Are intramolecular dynamic electron correlation effects detectable in X-ray diffraction experiments on molecular crystals?'. Together they form a unique fingerprint.
|
{"url":"https://research-repository.uwa.edu.au/en/publications/are-intramolecular-dynamic-electron-correlation-effects-detectabl","timestamp":"2024-11-08T04:42:03Z","content_type":"text/html","content_length":"54918","record_id":"<urn:uuid:f1070b0c-0787-481f-a238-17cbe5cb6bab>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00247.warc.gz"}
|
View Tube Triangulation
Pete Insley Jones Metropolitan H. S.
606 S. State St.
Chicago IL 60605
Measuring Distance; Estimating; Averaging; Standard deviation
Materials Needed:
Meter sticks, rulers, cut down paper towel cardboard tubes
First, get the students to estimate the height of the chalkboard or the
doorway. List the estimates and use a calculator to find the average and
standard deviation.
Stand far enough away so that you can see the entire object in the tube.
Use a proportion to find the height of the object.
Height of object [ = ] Distance of tube
Distance to object Length of tube
List the calculations and again find the average and the standard
deviation. Take some time to discuss what they mean again.
Now get a couple of students to measure exactly how big the object is.
Have a prize for the best estimate of measurement.
Performance Assessment:
Collect paper with original estimate and calculation of height. Have
students write a short paragraph stating how well they think they did on the
You can find percentage error in this lab and it has some new meanings to
the students. Have them find their % of error. You can also find the probable
percentage of error from the standard deviation.
Return to Physics Index
|
{"url":"https://smileprogram.info/ph9418.html","timestamp":"2024-11-13T18:46:41Z","content_type":"text/html","content_length":"2124","record_id":"<urn:uuid:76e027d9-c84b-4c85-8c7b-4cd146030a95>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00820.warc.gz"}
|
The purpose of this paper is to study the factors that have an influence on the culture of the mathematics classroom on the basis of the preceding studies.
It seems to me that the traditional view of mathematics which mathematics consists of immutable truths and absolut certainty will become the factor that produces classroom culture in which the subject of mathematics has only to get the right answer correctly and fast. And, the new view of mathematics which mathematics should be reconsidered and revised will become the factor that produces classroom culture in which mathematics is the subject that questions and constructs mathematical knowledge. Moreover, the didactical contract which solutions of word problems of textbooks that reflect the traditional view of mathematics use only informations in the problem text is entered into between teachers and students. This didactical contract will become the factor that produces classroom culture in which the result of the calculations derived by applying appropriate calculations to the numbers in the problem text and calculating them correctly becomes just the answer of word problems. And, in classrooms that reflect the new view of mathematics, the didactical contract which students are not reported to mathematical knowleages by teachers or textbooks but students themself make the hypothesis and must themself find mathematical truths or mistakes through the process that discusses the cause of the hypothesis, enters into between teachers and students. This didactical contract will become the factor that produces classroom culture in which mathematics is the subject that questions and constructs mathematical knowledge.
|
{"url":"https://unipa.thu.ac.jp/kgResult/english/researchersHtml/20170027/REP_GSK_OTHER/20170027_REP_GSK_OTHER_1.html","timestamp":"2024-11-10T11:27:27Z","content_type":"text/html","content_length":"13696","record_id":"<urn:uuid:cbda7b34-6caa-47e4-8ce8-7fca226a463b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00831.warc.gz"}
|
MKL data fitting - boundary conditions for quadratic spline interpolation
11-10-2020 07:52 AM
Hello All,
I'm quite new on using MKL data fitting function. I'm working with Visual Fortran and I have the 64bit MKL library - Version 2019.0.5.
I need to use the quadratic spline interpolation so I followed the relevant installed example (\mkl\examples\datafittingf\source\dfdquadraticspline.f) as well as the reference manual.
My issues are about the boundary conditions:
1. My first choice was setting bc_type = DF_NO_BC, but this gave me an error, although from the manual seems this option is allowed for all type of spline. Is there any possibility to not provide
boundary conditions quadratic splines ?
2. In the mkl example is used bc_type = DF_BC_Q_VAL and with bc(1) = 1.0. In the manual is written the boundary condition value must be the Function value at point (x0 + x1)/2, when bc_type =
DF_BC_Q_VAL, but I only have the function samples at breakpoints, how I can handle this case ?
3. If boundary conditions are mandatory, how can set them when the number of function to interpolate is greater than one (NNY > 1) ? For example, if I have two functions should I size bc(2) and put
the bc of the first function in bc(1) and the bc of the second function in bc(2) ?
Thanks in advance for your help.
Best regards
11-12-2020 02:00 AM
11-12-2020 02:00 AM
11-16-2020 12:47 AM
11-12-2020 03:27 AM
|
{"url":"https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/MKL-data-fitting-boundary-conditions-for-quadratic-spline/td-p/1227049","timestamp":"2024-11-11T23:51:01Z","content_type":"text/html","content_length":"243266","record_id":"<urn:uuid:8992d013-8d2c-4e20-9153-bb1e8d1a2275>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00490.warc.gz"}
|
Linear degradation model for estimating remaining useful life
Use linearDegradationModel to model a linear degradation process for estimating the remaining useful life (RUL) of a component. Degradation models estimate the RUL by predicting when a monitored
signal will cross a predefined threshold. Linear degradation models are useful when the monitored signal is a log scale signal or when the component does not experience cumulative degradation. For
more information on the degradation model, see Linear Degradation Model.
To configure a linearDegradationModel object for a specific type of component, you can:
• Estimate the model prior parameters using historical data regarding the health of an ensemble of similar components, such as multiple machines manufactured to the same specifications. To do so,
use fit.
• Specify the model prior parameters when you create the model based on your knowledge of the component degradation process.
Once you configure the parameters of your degradation model, you can then predict the remaining useful life of similar components using predictRUL. For a basic example illustrating RUL prediction
with a degradation model, see Update RUL Prediction as Data Arrives.
For general information on predicting remaining useful life, see Models for Predicting Remaining Useful Life.
mdl = linearDegradationModel creates a linear degradation model for estimating RUL and initializes the model with default settings.
mdl = linearDegradationModel(Name,Value) specifies user-settable model properties using name-value pairs. For example, linearDegradationModel('NoiseVariance',0.5) creates a linear degradation model
with a model noise variance of 0.5. You can specify multiple name-value pairs. Enclose each property name in quotes.
Theta — Current mean value of slope parameter
This property is read-only.
Current mean value of slope parameter θ in the degradation model, specified as a scalar. For more information on the degradation model, see Linear Degradation Model.
You can specify Theta using a name-value pair argument when you:
• Create the model.
• Reset the model using the restart function.
Otherwise, the value of Theta changes when you use the update function.
ThetaVariance — Current variance of slope parameter
nonnegative scalar
This property is read-only.
Current variance of slope parameter θ in the degradation model, specified as a nonnegative scalar. For more information on the degradation model, see Linear Degradation Model.
You can specify ThetaVariance using a name-value pair argument when you:
• Create the model.
• Reset the model using the restart function.
Otherwise, the value of ThetaVariance changes when you use the update function.
Phi — Current intercept value
Current intercept value ϕ for the degradation model, specified as a scalar. For more information on the degradation model, see Linear Degradation Model.
You can specify Phi using a name-value pair argument when you create the model. Otherwise, the value of Phi changes when you estimate the model prior using the fit function.
Prior — Prior information about model parameters
Prior information about model parameters, specified as a structure with the following fields:
• Theta — Mean value of slope parameter
• ThetaVariance — Variance of slope parameter
You can specify the fields of Prior:
• When you create the model. When you specify Theta or ThetaVariance at model creation using name-value pairs, the corresponding field of Prior is also set.
• Using the fit function. In this case, the prior values are derived from the data used to fit the model.
• Using the restart function. In this case, the current values of Theta and ThetaVariance are copied to the corresponding fields of Prior.
• Using dot notation after model creation.
For more information on the degradation model, see Linear Degradation Model.
NoiseVariance — Variance of additive noise
1 (default) | nonnegative scalar
Variance of additive noise ε in the degradation model, specified as a nonnegative scalar. For more information on the degradation model, see Linear Degradation Model.
You can specify NoiseVariance:
• Using a name-value pair when you create the model
• Using a name-value pair with the restart function
• Using dot notation after model creation
SlopeDetectionLevel — Slope detection level
0.05 (default) | scalar value in the range [0,1] | []
Slope detection level for determining the start of the degradation process, specified as a scalar in the range [0,1]. This value corresponds to the alpha value in a t-test of slope significance.
To disable the slope detection test, set SlopeDetectionLevel to [].
You can specify SlopeDetectionLevel:
• Using a name-value pair when you create the model
• Using a name-value pair with the restart function
• Using dot notation after model creation
SlopeDetectionInstant — Slope detection time
[] (default) | scalar
This property is read-only.
Slope detection time, which is the instant when a significant slope is detected, specified as a scalar. The update function sets this value when SlopeDetectionLevel is not empty.
CurrentMeasurement — Latest degradation feature value
This property is read-only.
Latest degradation feature value supplied to the update function, specified as a scalar.
InitialLifeTimeValue — Initial lifetime variable value
scalar | duration object
This property is read-only.
Initial lifetime variable value when the update function is first called on the model, specified as a scalar.
When the model detects a slope, the InitialLifeTime value is changed to match the SlopeDetectionInstant value.
CurrentLifeTimeValue — Current lifetime variable value
scalar | duration object
This property is read-only.
Latest lifetime variable value supplied to the update function, specified as a scalar.
LifeTimeVariable — Lifetime variable
"" (default) | string
Lifetime variable, specified as a string that contains a valid MATLAB^® variable name or "".
When you train the model using the fit function, if your training data is a:
• table, then LifeTimeVariable must match one of the variable names in the table
• timetable, then LifeTimeVariable one of the variable names in the table or the dimension name of the time variable, data.Properties.DimensionNames{1}
You can specify LifeTimeVariable:
• Using a name-value pair when you create the model
• As an argument when you call the fit function
• Using dot notation after model creation
LifeTimeUnit — Lifetime variable units
"" (default) | string
Lifetime variable units, specified as a string.
The units of the lifetime variable do not need to be time-based. The life of the test component can be measured in terms of a usage variable, such as distance traveled (miles) or fuel consumed
DataVariables — Degradation variable name
"" (default) | string
Degradation variable name, specified as a string that contains a valid MATLAB variable name. Degradation models have only one data variable.
You can specify DataVariables:
• Using a name-value pair when you create the model
• As an argument when you call the fit function
• Using dot notation after model creation
UseParallel — Flag for using parallel computing
false (default) | true
Flag for using parallel computing when fitting prior values from data, specified as either true or false.
You can specify UseParallel:
• Using a name-value pair when you create the model
• Using a name-value pair with the restart function
• Using dot notation after model creation
UserData — Additional model information
[] (default) | any data type or format
Additional model information for bookkeeping purposes, specified as any data type or format. The model does not use this information.
You can specify UserData:
• Using a name-value pair when you create the model
• Using dot notation after model creation
Object Functions
fit Estimate parameters of remaining useful life model using historical data
predictRUL Estimate remaining useful life for a test component
update Update posterior parameter distribution of degradation remaining useful life model
restart Reset remaining useful life degradation model
Train Linear Degradation Model
Load training data.
The training data is a cell array of column vectors. Each column vector is a degradation feature profile for a component.
Create a linear degradation model with default settings.
mdl = linearDegradationModel;
Train the degradation model using the training data.
Create Linear Degradation Model with Known Priors
Create a linear degradation model and configure it with a known prior distribution.
mdl = linearDegradationModel('Theta',0.25,'ThetaVariance',0.002);
The specified prior distribution parameters are stored in the Prior property of the model.
ans = struct with fields:
Theta: 0.2500
ThetaVariance: 0.0020
The current posterior distribution of the model is also set to match the specified prior distribution. For example, check the posterior value of the slope variance.
Train Linear Degradation Model Using Tabular Data
Load training data.
The training data is a cell array of tables. Each table is a degradation feature profile for a component. Each profile consists of life time measurements in the "Time" variable and corresponding
degradation feature measurements in the "Condition" variable.
Create a linear degradation model with default settings.
mdl = linearDegradationModel;
Train the degradation model using the training data. Specify the names of the life time and data variables.
Predict RUL Using Linear Degradation Model
Load training data.
The training data is a cell array of tables. Each table is a degradation feature profile for a component. Each profile consists of life time measurements in the "Time" variable and corresponding
degradation feature measurements in the "Condition" variable.
Create a linear degradation model, specifying the life time variable units.
mdl = linearDegradationModel('LifeTimeUnit',"hours");
Train the degradation model using the training data. Specify the names of the life time and data variables.
Load testing data, which is a run-to-failure degradation profile for a test component. The test data is a table with the same life time and data variables as the training data.
Based on knowledge of the degradation feature limits, define a threshold condition indicator value that indicates the end-of-life of a component.
Assume that you measure the component condition indicator after 48 hours. Predict the remaining useful life of the component at this time using the trained linear degradation model. The RUL is the
forecasted time at which the degradation feature will pass the specified threshold.
estRUL = predictRUL(mdl,linTestData1(48,:),threshold)
estRUL = duration
112.64 hr
The estimated RUL is around 113 hours, which indicates a total predicted life span of around 161 hours.
Update Linear Degradation Model and Predict RUL
Load observation data.
For this example, assume that the training data is not historical data, but rather real-time observations of the component condition.
Based on knowledge of the degradation feature limits, define a threshold condition indicator value that indicates the end-of-life of a component.
Create a linear degradation model arbitrary prior distribution data and a specified noise variance. Also, specify the life time and data variable names for the observation data.
mdl = linearDegradationModel('Theta',1,'ThetaVariance',1e6,'NoiseVariance',0.003,...
Observe the component condition for 50 hours, updating the degradation model after each observation.
for i=1:50
After 50 hours, predict the RUL of the component using the current life time value stored in the model.
estRUL = predictRUL(mdl,threshold)
estRUL = duration
50.301 hr
The estimated RUL is about 50 hours, which indicates a total predicted life span of about 100 hours.
Linear Degradation Model
The linearDegradationModel object implements the following continuous-time linear degradation model [1]:
$S\left(t\right)=\varphi +\theta \left(t\right)t+\epsilon \left(t\right)$
• ϕ is the model intercept, which is constant. You can initialize ϕ as the nominal value of the degradation variable using Phi.
• θ(t) is the model slope and is modeled as a random variable with a normal distribution with mean Theta and variance ThetaVariance.
• ε(t) is the model additive noise and is modeled as a normal distribution with zero mean and variance NoiseVariance.
[1] Chakraborty, S., N. Gebraeel, M. Lawley, and H. Wan. "Residual-Life Estimation for Components with Non-Symmetric Priors." IIE Transactions. Vol. 41, Number 4, 2009, pp. 372–387.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
• The predictRUL, update, and restart commands support code generation with MATLAB Coder™ for this RUL model type. Before generating code that uses this model, you must save the model using
saveRULModelForCoder. For an example, see Generate Code for Predicting Remaining Useful Life.
• In addition to the read-only properties, you cannot change the following properties of degradation models at run time:
□ LifeTimeVariable
□ LifeTimeUnit
□ DataVariables
• See predictRUL for additional limitations on code generation.
Automatic Parallel Support
Accelerate code by automatically running computation in parallel using Parallel Computing Toolbox™.
To evaluate these models in parallel, set the UseParallel property to true.
Version History
Introduced in R2018a
|
{"url":"https://au.mathworks.com/help/predmaint/ref/lineardegradationmodel.html","timestamp":"2024-11-13T22:05:04Z","content_type":"text/html","content_length":"138795","record_id":"<urn:uuid:2e1ef7bd-d2dd-477b-9e1a-7b20340cfe59>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00658.warc.gz"}
|
Dynamics of the Desorption of D2and H2from Cu(111) for Journal of Vacuum Science and Technology A: Vacuum, Surfaces and Films
Journal of Vacuum Science and Technology A: Vacuum, Surfaces and Films
Conference paper
Dynamics of the Desorption of D[2]and H[2]from Cu(111)
View publication
We have determined rotational and vibrational distributions for D2 and H2 desorbed from Cu(111) and have determined velocity distributions of the desorbed molecules for a wide range of vibrational
and rotational states. The quantum state populations display strong deviations from a Boltzmann distributions and the velocity distributions are highly supersonic with properties that depend strongly
on quantum state. Molecules desorbed at 925 K were detected in a state-specific manner using laser ionization. Velocity distributions were obtained by measuring the flight times of D2 ions in a
field-free region. Vibrational populations of D2(u — 1), D2 (y = 2), and H2(u = 1) were found to be a factor of 20, 79, and 18 above that expected for equilibration at 925 K, respectively, relative
to v -â– 0 populations. Boltzmann plots of the rotational state distributions display distinct curvature, particularly for molecules in the ground vibrational state. H2 and D2 molecules in the ground
vibrational state have mean kinetic energies of —0.6 eV for molecules in low rotational states. This energy was found to increase slightly with increasing/at low /, pass through a maximum at a
rotational energy of —0.15 eV, and then fall as rotational energy is further increased. H2(v = 1), D2(y = 1), and D2(t> = 2) molecules have mean energies of —0.30, 0.45, and 0.25 eV, respectively. In
each of these cases, the mean energy also varies significantly with rotational state. The velocity distributions of molecules in v = 0 have a speed ratio of —0.35, which decreases with both
increasing rotational and vibrational energy. Results are discussed in terms of a simple dynamical picture of desorption via an anisotropic potential energy barrier. © 1993, American Vacuum Society.
All rights reserved.
|
{"url":"https://research.ibm.com/publications/dynamics-of-the-desorption-of-dlessinfgreater2lessinfgreaterand-hlessinfgreater2lessinfgreaterfrom-cu111","timestamp":"2024-11-08T22:42:50Z","content_type":"text/html","content_length":"78422","record_id":"<urn:uuid:6863b26c-ac55-4ce7-be39-763fea262a20>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00760.warc.gz"}
|
250 grams of Applesauce in Ounces • Recipe equivalences
250 grams of Applesauce in Ounces
How many Ounces are 250 grams of Applesauce?
250 grams of Applesauce in Ounces is approximately equal to 8 ounces.
That is, if in a cooking recipe you need to know what the equivalent of 250 grams of Applesauce measure in Oz, the exact equivalence would be 7.9976401, so in rounded form it is approximately 8
Is this equivalence of 250 grams to Ounces the same for other ingredients?
It should be noted that depending on the ingredient to be measured, the equivalence of Grams to Ounces will be different. That is, the rule of equivalence of Grams of Applesauce in Oz is applicable
only for this ingredient, for other cooking ingredients there are other rules of equivalence.
Please note that this website is merely informative and that its purpose is to try to inform about the approximate equivalent values to estimate the weight of the products that can be used in a
cooking recipe, such as Applesauce, for example. In order to have an exact measurement, it is recommended to use a scale.
In the case of not having an accessible weighing scale and we need to know the equivalence of 250 grams of Applesauce in Ounces, a very approximate answer will be 8 ounces.
|
{"url":"https://www.medidasrecetascocina.com/en/applesauce/250-grams-applesauce-in-ounces/","timestamp":"2024-11-11T16:58:58Z","content_type":"text/html","content_length":"57661","record_id":"<urn:uuid:3b8bc819-8ebe-4094-bb24-f3d356ea5964>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00377.warc.gz"}
|
When speed matters: going from randomForest to ranger — Roel Peters
Random Forest stays my number one go-to algorithm for quickly prototyping prediction algorithms. Last week, I worked on speeding up a feature engineering and training workflow for a marketing
project. I moved from the traditional randomForest package to the — already three years old — package ranger. Here are my findings.
Let’s load the Adult dataset through the arules prackage. We only take the rows without any NAs and we take a sample of 1000 rows. This is our small data set we will use to benchmark both packages.
dt <- data.table(AdultUCI)
dt <- dt[complete.cases(dt)]
dt <- dt[sample(.N, 1000)]
First, let’s take a look at the output of randomForest(). It prints the number of trees and the amount of variables it tried at each split. It also shows the out-of-bag error rate.
On the other hand, ranger() provides more information. For example: the split rule.
The out-of-bag prediction error differs. Both algorithms don’t do exactly the same, so this is to be expected.
The randomForest() packages uses Breiman’s Random Forest implementation, while ranger() borrows its theory from a wide range of implementations It’s quite clear from the ranger paper that a lot of
methodological choices have been made with speed in mind. They wanted the R package to be no slower than the speedy Random Jungle implementation in C++:
Wright, M. N. & Ziegler, A. (2017). ranger: A fast implementation of random forests for high dimensional data in C++ and R. Journal of Statistical Software 77:1-17.
Furthermore, speed is a recurring theme throughout the paper describing the package, and the conclusion quickly elaborates on the results.
Wright, M. N. & Ziegler, A. (2017). ranger: A fast implementation of random forests for high dimensional data in C++ and R. Journal of Statistical Software 77:1-17.
Okay, let’s run some benchmarks ourselves, shall we?
First, let’s benchmark the traditional randomForest() function. The mean processing time is over a second. For a data set of just 1000 rows, that’s quite a lot actually.
randomForest(x = dt[,1:14],
y = dt$income,
ntree = 500,
mtry = 5,
times = 25, unit = 's')
So, let’s go with the ranger() function. We use exactly the same hyperparameters. Moreover, although ranger() has been designed with parallel processing in mind, I set the number of threads to only
1. That’s because I want to prove that ranger() is not only faster because of parallel processing, but also because of a more efficient way of processing the data. As you can see, our processing time
more than halves.
ranger(dependent.variable.name = 'income',
data = dt,
num.trees = 500,
mtry = 5,
num.threads = 1),
times = 25, unit = 's')
Simply by adding more threads (the num.threads parameter), the processing time improves drastically. The following benchmark is produced using three threads.
Both model objects are 4 MB in size, so if there’s a difference in speed, it’s not because of the object. As you can see from the following microbenchmark, surpisingly, predicting from a randomForest
model object is faster than predicting from a ranger model object. If we compare the medians, the difference is almost 30%.
Side note
In ranger, you can quickly access multiple properties of your predictions. It was somewhat confusing at first, because I needed access to the probabilities and I ran into the following error.
Apparently, I was passing the value ‘prob’ to the type parameter, which is not a valid value.
Error in predict.ranger.forest(forest, data, predict.all, num.trees, type, :
Error: Invalid value for ‘type’. Use ‘response’, ‘se’, ‘terminalNodes’, or ‘quantiles’.
If you need access to the prediction probabilities, you can do that as follows:
predict(ranger_model, dt_full_test, type='response')$predictions
By the way, if you’re having trouble understanding some of the code and concepts, I can highly recommend “An Introduction to Statistical Learning: with Applications in R”, which is the must-have data
science bible. If you simply need an introduction into R, and less into the Data Science part, I can absolutely recommend this book by Richard Cotton. Hope it helps!
3 thoughts on “When speed matters: going from randomForest to ranger”
How does it compare to the package party?
Thank you for your sharing. I am worried that I lack creative ideas. It is your article that makes me full of hope. Thank you. But, I have a question, can you help me?
Your point of view caught my eye and was very interesting. Thanks. I have a question for you.
|
{"url":"https://www.roelpeters.be/when-speed-matters-going-from-randomforest-to-ranger/","timestamp":"2024-11-03T18:28:35Z","content_type":"text/html","content_length":"115886","record_id":"<urn:uuid:63dae9b3-d30d-49bf-8bc5-cc05f8eed5f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00865.warc.gz"}
|
Complex equation draw
Complex equation draw --- Introduction ---
Complex equation draw is a graphical exercise on the geometry of complex numbers. The server will give you an equation on a complex number z, involving real part, imaginary part, module, or argument.
Then with the help of a java applet, you are asked to draw the set of z verifying this equation in the complex plane, with the mouse. You will have a score according to the precision of your drawing.
You can choose the software used .
The exercise can be configured by the following parameters.
• Type of equations:
Re or Im Argument Module I Re and Im Module II Module III
• Number of drawings in one session: , , , , , , , (The score is attributed only at the end of a session.)
• Severity level: , , , , , , , , ,
This page is not in its usual appearance because WIMS is unable to recognize your web browser.
Please take note that WIMS pages are interactively generated; they are not ordinary HTML files. They must be used interactively ONLINE. It is useless for you to gather them through a robot program.
• Description: draw an equation in the complex plane. This is the main site of WIMS (WWW Interactive Multipurpose Server): interactive exercises, online calculators and plotters, mathematical
recreation and games
• Keywords: wims, mathematics, mathematical, math, maths, interactive mathematics, interactive math, interactive maths, mathematic, online, calculator, graphing, exercise, exercice, puzzle,
calculus, K-12, algebra, mathématique, interactive, interactive mathematics, interactive mathematical, interactive math, interactive maths, mathematical education, enseignement mathématique,
mathematics teaching, teaching mathematics, algebra, geometry, calculus, function, curve, surface, graphing, virtual class, virtual classes, virtual classroom, virtual classrooms, interactive
documents, interactive document, analysis, module, argument, complex_number
|
{"url":"https://wims.univ-cotedazur.fr/wims/en_H6~algebra~compeqdraw.en.html","timestamp":"2024-11-06T03:53:30Z","content_type":"text/html","content_length":"9056","record_id":"<urn:uuid:e3d615de-7f91-4d96-b274-d7770ae243ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00103.warc.gz"}
|
Knuth-Morris-Pratt Algorithm (KMP) - EnableGeek
Knuth-Morris-Pratt (KMP) Algorithm
The Knuth-Morris-Pratt (KMP) algorithm is a string-matching algorithm that efficiently finds occurrences of a pattern within a longer text. It was developed by Donald Knuth and Vaughan Pratt, and
independently by James H. Morris in 1977.
The KMP algorithm works by precomputing a partial match table, which tells us how much of the pattern can be skipped when a mismatch occurs during the matching process. The partial match table is
computed in linear time with respect to the length of the pattern.
Take your coding skills to the next level with our comprehensive guides, “Python Beginner to Advanced” and “Java Beginner to Advanced.” Whether you’re diving into Python or mastering Java, these
books provide step-by-step guidance and in-depth knowledge to elevate your programming expertise.
KMP Algorithm Implementation
Suppose we have two strings named text and pattern. we want to check whether a substring of text is equal to the pattern.
A substring is a contiguous sequence of a string. If a string is “enablegeek” then the substrings of the string are “enablegeek”, “ena”, “geek”, “able” etc. But “eble” , “ane” are not substrings of
the string.
Let us have a text “enablegeek” and a pattern “able”. We want to find whether “able” is a substring of “enablegeek” or not.
At first, we will solve it with a naive approach. In the naive approach, if the text length is n and the pattern length is m, then we will take every substring of m length and then check whether this
substring is equal to the pattern or not.
bool naive_matching(string text, string pattern)
int n=text.size();
int m=pattern.size();
for(int i=0;i<n;i++)
// for each position i, we will try to
// match text[i, i+1, i+2, ... i+m-1] with pattern[0, 1, 2, .. m-1]
int j=0;
for(j=0;j<m, i+j<n ;j++)
if(pattern[j]!=text[i+j]) break;
return true;
return false;
The time complexity of the above code is O(n*m).
Now try to simulate this algorithm for the text “abababacd” and the pattern “ababac”.
At first, i=0. We will continue to match the pattern and the text character by character starting from the first character of the text. If all characters match, we get the pattern. And if we get one
place where those characters from the text and pattern don’t match we will break the loop. Check out the picture below:
Going to character number 5, we get a mismatch. The loop inside the brute force algorithm will break. Then we will go to i=1 and then search again.
In this way, you have to loop through each index of text to find the pattern, so the time complexity of this approach is O(n*m). If we can avoid looping through each index, we can solve it with
better time complexity. For this, we need to know about prefixes and suffixes.
Prefix: When zero or more characters are dropped from the end of a string, what is left is the prefix of the string. Prefixes of string “abc” are “a”, “ab”, “abc”. Among them, “ab” and “a” are proper
prefixes. proper prefixes are smaller than the original string.
Suffix: When zero or more characters are dropped from the start of a string, what is left is the suffix of the string. Suffixes of string “abc” are “c”, “bc”, “abc”. among them “c”, “bc” are proper
prefixes because they are smaller than the original string.
Assume the pattern we are looking for is “abxyabcd”. Now, look at the picture below.
We found a mismatch in one place when matching the pattern with the text in the image. There is no need to worry about what the mismatched character is or what the next characters are. Now if we try
to brute-force match the pattern by shifting it from one position to the left, is there any benefit for us?
If we shift 1 position then whatever is in the question mark places is of no use. How much position shifting can be profitable depends on what? It depends on how many prefixes in the pattern match
the text, in this case, that prefix is “ABXYAB”. If we shift as follows we can get a match:
That means we have to shift the pattern in such a way that we get a partial matching of the prefix of the pattern with the suffix of the pattern itself. So what happens is, we get a partial match of
the input text with the pattern prefix, and then we go back and do a character-by-character match to see if the entire text matches.
Another example will make it clear. Suppose now the pattern is “ABABAC” and we get a partial matching like this:
Now, how much we shift will depend on the matching prefix “ABABA”. See the picture below:
We have shifted the pattern to the right by 2 spaces. This will give us a partial match of pattern prefix with pattern suffix. In this case, the last 3 characters match the first 3 characters of the
pattern. That means the first 3 characters of the pattern will match the text as well. Then we will go ahead again and look at the rest of the characters.
Now suppose unfortunately we get a mismatch again:
How much to shift now? It depends on “ABA”. We need to shift “ABA” to the right such that the suffix of “ABA” partially matches the prefix of “ABA” after the shift. In this case, we have to shift
like the picture below.
Now, one thing is clear, how far the pattern will be shifted depends on how many prefixes in the pattern match the text.
Suppose the prefix P of the pattern matches with the text. Now we have to find the largest proper prefix of P which is also a suffix of P.
Now, for every proper prefix P, we will find the length of the largest proper prefix which is also a suffix.
KMP Algorithm is a string-matching algorithm that searches for occurrences of a pattern within a larger text by preprocessing the pattern to determine the maximum suffix that matches a prefix of the
pattern. This preprocessed information is then used to avoid unnecessary comparisons when searching for the pattern in the text.
KMP Algorithm in C++
C++ code with explanations:
#include <iostream>
#include <string>
#include <vector>
using namespace std;
void KMP(string text, string pattern)
int n = text.length(), m = pattern.length();
// calculate the prefix table
vector<int> pi(m);
pi[0] = 0;
for (int i = 1, j = 0; i < m; i++) {
while (j > 0 && pattern[i] != pattern[j])
j = pi[j - 1];
if (pattern[i] == pattern[j])
pi[i] = j;
// search for pattern in the text using the prefix table
for (int i = 0, j = 0; i < n; i++) {
while (j > 0 && text[i] != pattern[j])
j = pi[j - 1];
if (text[i] == pattern[j])
if (j == m) {
cout << "Pattern found at index " << i - m + 1 << endl;
j = pi[j - 1];
int main()
string text, pattern;
cout << "Enter text: ";
getline(cin, text);
cout << "Enter pattern: ";
getline(cin, pattern);
KMP(text, pattern);
return 0;
The function KMP takes in two strings, text and pattern, which represent the text to search and the pattern to find, respectively. The lengths of the text and pattern strings are stored in variables
n and m, respectively. The prefix table pi is calculated using the pattern string. The prefix table is used to determine the maximum suffix of the pattern that matches a prefix of the pattern. The
prefix table is represented by a vector pi of length m. The first element of pi is always 0.
A loop is used to fill in the rest of the prefix table by comparing each character of the pattern with the character at the current index of the prefix table. If they match, the value of j is
incremented, and the value of j is stored in pi[i]. If they don’t match, j is set to the value of pi[j-1], and the loop continues until a match is found or j becomes 0.
The for loop is used to search for the pattern in the text using the prefix table. The loop iterates through each character of the text string. If the character at the current index of the text
string matches the character at the current index of the pattern string, the value of j is incremented. If they don’t match, j is set to the value of pi[j-1], and the loop continues until a match is
found or j becomes 0.
If the value of j is equal to the length of the pattern string, then a match has been found, and the index of the first occurrence of the pattern in the text is printed. The value of j is then set to
KMP Algorithm in Python
Python code with explanations:
def kmp_search(text, pattern):
matches = []
n, m = len(text), len(pattern)
lps = compute_lps(pattern)
i, j = 0, 0
while i < n:
if text[i] == pattern[j]:
i += 1
j += 1
if j == m:
matches.append(i - j)
j = lps[j - 1]
elif i < n and text[i] != pattern[j]:
if j != 0:
j = lps[j - 1]
i += 1
return matches
def compute_lps(pattern):
m = len(pattern)
lps = [0] * m
len = 0
i = 1
while i < m:
if pattern[i] == pattern[len]:
len += 1
lps[i] = len
i += 1
if len != 0:
len = lps[len - 1]
lps[i] = len
i += 1
return lps
The code implements the Knuth-Morris-Pratt (KMP) algorithm for pattern searching in a given text.
The KMP algorithm is an efficient pattern-matching algorithm that uses a precomputed table called the Longest Prefix-Suffix (LPS) array. The LPS array stores the length of the longest proper prefix
of the pattern that is also a suffix of the pattern.
The kmp_search() function takes two arguments – the text in which the pattern is to be searched and the pattern that needs to be searched. The function initializes an empty list matches to store the
starting indices of all the matches found.
The function calls compute_lps() function to precompute the LPS array for the pattern. It then initializes two indices i and j to 0 to traverse through the text and pattern respectively.
In the while loop, the function compares the characters of the text and pattern at indices i and j respectively. If they match, it increments both indices, and if j becomes equal to the length of the
pattern, the function appends i – j to the matches list and updates j to lps[j-1].
If the characters don’t match, the function checks if j is not zero. If it is not zero, then it updates j to lps[j-1] and if it is zero, then it increments i. The function continues this process
until it has traversed the entire text.
The compute_lps() function computes the LPS array for a given pattern. It initializes an empty list lps of length m (the length of the pattern) and initializes two variables len and i to 0 and 1
In the while loop, the function compares the characters of the pattern at indices i and len. If they match, it increments len and sets lps[i] to len. It then increments i.
If the characters don’t match, the function checks if len is not zero. If it is not zero, then it updates len to lps[len-1] and the loop continues. If it is zero, then it sets lps[i] to len and
increments i. The function continues this process until it has computed the LPS array for the entire pattern.
KMP Algorithm in Java
Java code with explanations:
import java.util.*;
public class KMP {
public static List<Integer> kmpSearch(String text, String pattern) {
List<Integer> matches = new ArrayList<>();
int n = text.length();
int m = pattern.length();
int[] lps = computeLPS(pattern);
int i = 0, j = 0;
while (i < n) {
if (text.charAt(i) == pattern.charAt(j)) {
if (j == m) {
matches.add(i - j);
j = lps[j - 1];
} else if (i < n && text.charAt(i) != pattern.charAt(j)) {
if (j != 0) {
j = lps[j - 1];
} else {
return matches;
public static int[] computeLPS(String pattern) {
int m = pattern.length();
int[] lps = new int[m];
int len = 0, i = 1;
lps[0] = 0;
while (i < m) {
if (pattern.charAt(i) == pattern.charAt(len)) {
lps[i] = len;
} else {
if (len != 0) {
len = lps[len - 1];
} else {
lps[i] = len;
return lps;
You can follow the previous explanation as well.
|
{"url":"https://www.enablegeek.com/tutorial/knuth-morris-pratt-kmp-algorithm/","timestamp":"2024-11-04T12:19:44Z","content_type":"text/html","content_length":"293952","record_id":"<urn:uuid:3241bf21-2ddc-4a58-8b97-85ba2a81d7f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00868.warc.gz"}
|
Typed embedding of STLC into Haskell
5 February 2015 (
programming haskell language correctness
Someone posted to the Haskell subreddit this blogpost of Lennart where he goes step-by-step through implementing an evaluator and type checker for CoC. I don't know why this post from 2007 showed up
on Reddit this week, but it's a very good post, pedagogically speaking. Go and read it.
In this post, I'd like to elaborate on the simply-typed lambda calculus part of his blogpost. His typechecker defines the following types for representing STLC types, terms, and environments:
data Type = Base
| Arrow Type Type
deriving (Eq, Show)
type Sym = String
data Expr = Var Sym
| App Expr Expr
| Lam Sym Type Expr
deriving (Eq, Show)
The signature of the typechecker presented in his post is as follows:
type ErrorMsg = String
type TC a = Either ErrorMsg a
newtype Env = Env [(Sym, Type)] deriving (Show)
tCheck :: Env -> Expr -> TC Type
My approach is to instead create a representation of terms of STLC in such a way that only well-scoped, well-typed terms can be represented. So let's turn on a couple of heavy-weight language
extensions from GHC 7.8 (we'll see how each of them is used), and define a typed representation of STLC terms:
{-# LANGUAGE GADTs, StandaloneDeriving #-}
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, TypeOperators #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TemplateHaskell #-} -- sigh...
import Data.Singletons.Prelude
import Data.Singletons.TH
import Data.Type.Equality
-- | A (typed) variable is an index into a context of types
data TVar (ts :: [Type]) (a :: Type) where
Here :: TVar (t ': ts) t
There :: TVar ts a -> TVar (t ': ts) a
deriving instance Show (TVar ctx a)
-- | Typed representation of STLC: well-scoped and well-typed by construction
data TTerm (ctx :: [Type]) (a :: Type) where
TConst :: TTerm ctx Base
TVar :: TVar ctx a -> TTerm ctx a
TLam :: TTerm (a ': ctx) b -> TTerm ctx (Arrow a b)
TApp :: TTerm ctx (Arrow a b) -> TTerm ctx a -> TTerm ctx b
deriving instance Show (TTerm ctx a)
The idea is to represent the context of a term as a list of types of variables in scope, and index into that list, de Bruijn-style, to refer to variables. This indexing operation maintains the
necessary connection between the pointer and the type that it points to. Note the type of the TLam constructor, where we extend the context at the front for the inductive step.
To give a taste of how convenient it is to work with this representation programmatically, here's a total evaluator:
-- | Interpretation (semantics) of our types
type family Interp (t :: Type) where
Interp Base = ()
Interp (Arrow t1 t2) = Interp t1 -> Interp t2
-- | An environment gives the value of all variables in scope in a given context
data Env (ts :: [Type]) where
Nil :: Env '[]
Cons :: Interp t -> Env ts -> Env (t ': ts)
lookupVar :: TVar ts a -> Env ts -> Interp a
lookupVar Here (Cons x _) = x
lookupVar (There v) (Cons _ xs) = lookupVar v xs
-- | Evaluate a term of STLC. This function is total!
eval :: Env ctx -> TTerm ctx a -> Interp a
eval env TConst = ()
eval env (TVar v) = lookupVar v env
eval env (TLam lam) = \x -> eval (Cons x env) lam
eval env (TApp f e) = eval env f $ eval env e
Of course, the problem is that this representation is not at all convenient for other purposes. For starters, it is certainly not how we would expect human beings to type in their programs.
My version of the typechecker is such that instead of giving the type of a term (when it is well-typed), it instead transforms the loose representation (Term) into the tight one (TTerm). A Term is
well-scoped and well-typed (under some binders) iff there is a TTerm corresponding to it. Let's use singletons to store type information in existential positions:
$(genSingletons [''Type])
$(singDecideInstance ''Type)
-- | Existential version of 'TTerm'
data SomeTerm (ctx :: [Type]) where
TheTerm :: Sing a -> TTerm ctx a -> SomeTerm ctx
-- | Existential version of 'TVar'
data SomeVar (ctx :: [Type]) where
TheVar :: Sing a -> TVar ctx a -> SomeVar ctx
-- | A typed binder of variable names
data Binder (ctx :: [Type]) where
BNil :: Binder '[]
BCons :: Sym -> Sing t -> Binder ts -> Binder (t ': ts)
Armed with these definitions, we can finally define the type inferer. I would argue that it is no more complicated than Lennart's version. In fact, it has the exact same shape, with value-level
equality tests replaced with Data.Type.Equality-based checks.
-- | Type inference for STLC
infer :: Binder ctx -> Term -> Maybe (SomeTerm ctx)
infer bs (Var v) = do
TheVar t v' <- inferVar bs v
return $ TheTerm t $ TVar v'
infer bs (App f e) = do
TheTerm (SArrow t0 t) f' <- infer bs f
TheTerm t0' e' <- infer bs e
Refl <- testEquality t0 t0'
return $ TheTerm t $ TApp f' e'
infer bs (Lam v ty e) = case toSing ty of
SomeSing t0 -> do
TheTerm t e' <- infer (BCons v t0 bs) e
return $ TheTerm (SArrow t0 t) $ TLam e'
inferVar :: Binder ctx -> Sym -> Maybe (SomeVar ctx)
inferVar (BCons u t bs) v
| v == u = return $ TheVar t Here
| otherwise = do
TheVar t' v' <- inferVar bs u
return $ TheVar t' $ There v'
inferVar _ _ = Nothing
Note that pattern matching on Refl in the App case brings in scope type equalities that are crucial to making infer well-typed.
Of course, because of the existential nature of SomeVar, we should provide a typechecker as well which is a much more convenient interface to work with:
-- | Typechecker for STLC
check :: forall ctx a. (SingI a) => Binder ctx -> Term -> Maybe (TTerm ctx a)
check bs e = do
TheTerm t' e' <- infer bs e
Refl <- testEquality t t'
return e'
t = singByProxy (Proxy :: Proxy a)
-- | Typechecker for closed terms of STLC
check_ :: (SingI a) => Term -> Maybe (TTerm '[] a)
check_ = check BNil
(The SingI a constraint is an unfortunate implementation detail; the kind of a is Type, which is closed, so GHC should be able to know there is always going to be a SingI a instance).
To review, we've written a typed embedding of STLC into Haskell, with a total evaluator and a typechecker, in about 110 lines of code.
If we were doing this in something more like Agda, one possible improvement would be to define a function untype :: TTerm ctx a -> Term and use that to give check basically a type of Binder ctx -> (e
:: Term) -> Either ((e' :: TTerm ctx a) -> untype e' == e -> Void) (TTerm ctx a), i.e. to give a proof in the non-well-typed case as well.
|
{"url":"https://unsafeperform.io/blog/2015-02-05-typed_embedding_of_stlc_into_haskell/","timestamp":"2024-11-09T07:10:17Z","content_type":"application/xhtml+xml","content_length":"12565","record_id":"<urn:uuid:78d3be62-6aab-43e1-a721-4425b4b0d887>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00489.warc.gz"}
|
in-class-scratch.el – in class Lisp introduction log
lisp-tree-recusion-log.txt – recursive function class log
obarray-simple-examples-log.txt – global symbol table in eLisp
obarray-inclass-examples.el – obarray in-class examples, lambda and let examples. eLisp bytecode disassembly.
everything-is-a-fold.el – everything is a fold if you have the right lambda
elisp-vm-examples.el – eLisp bytecode examples from class, with comments and links
curry.el more-currying.el – currying functions
make-closure.el – looking and making closure bytecode
lambda-calculus-booleans.txt – Lambda calculus, booleans
more-lambda-calculus-booleans.el – more combinator examples
lambda-calculus-pair.el – The Pair combinator: storing and accessing multiple things
S-combinator.el – The S combinator, SKI calculus (and birds)
Y-combinator.el – The famous Y combinator
My older notes on Y: https://www.cs.dartmouth.edu/~sergey/cs59/lisp/y-explained.txt)
named-let-and-folds.el – Tail call optimized (TCO) named-let in eLisp!
seq-tc.el – more TCO via named-let in eLisp
|
{"url":"https://cosc59.gitlab.io/lisp/","timestamp":"2024-11-12T00:12:31Z","content_type":"text/html","content_length":"3264","record_id":"<urn:uuid:57852b5a-22fc-48a5-877d-b9a3efc51fec>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00501.warc.gz"}
|
Assertion: when diver dives, the rotational kinetic energy of the diver increases during several somersaults.
Hint:Use conservation of angular momentum to solve this problem. The conservation of angular momentum states that the angular momentum of a system is conserved at any instant of time if any external
torque is not applied to the system.
Formula used:
The conservation of angular momentum is given by,
\[L = I\omega = k\]
where \[L\] is the angular momentum of the body \[I\] is the moment of inertia of the body \[\omega \] is the angular velocity of the body and \[k\] is some constant.
Complete step by step answer:
When a diver dives it is said in the question that the rotational kinetic energy of the diver increases during several somersaults. When a diver dives the external torque acting on the diver is zero.
Hence, the angular momentum of the diver is conserved.
Now, let’s say at the point of jump the moment of inertia of the diver is \[{I_1}\] and the angular momentum of the diver is \[{\omega _1}\] and when the diver pulls his limbs the moment of inertia
of the diver is \[{I_2}\] and the angular momentum of the diver is \[{\omega _2}\]. So, we can write, \[{I_1}{\omega _1} = {I_2}{\omega _2}\].
Now, when the diver pulls his limbs the radius of its rotation decreases and the mass concentrates on the axis of rotation hence the moment of inertia decreases.
\[{I_1} > {I_2}\]
So, from conservation of angular momentum the angular velocity increases.
\[{\omega _1} < {\omega _2}\]
Also, since the angular speed increases rotational kinetic energy also increases. Hence, the angular speed or velocity of the diver increases. Hence, Both Assertion and Reason are correct and Reason
is the correct explanation for Assertion.
Hence, option A is the correct answer.
Note: The rotational kinetic energy of a rotating body is given by, \[\dfrac{1}{2}I{\omega ^2}\]. So, when the moment of inertia decreases and the angular speed increases the increase in angular
speed increases the kinetic energy with the square of it. So, net kinetic energy increases when the moment of inertia decreases.
|
{"url":"https://www.vedantu.com/question-answer/assertion-when-diver-dives-the-rotational-class-11-physics-cbse-60a51a354ae3eb743940d2a7","timestamp":"2024-11-05T09:17:44Z","content_type":"text/html","content_length":"167027","record_id":"<urn:uuid:fbd880e9-6654-458f-806a-a0e25384db7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00050.warc.gz"}
|
How Big Mortgage Can I Get
The 28% and 36% ratios are standard in the mortgage world, but lenders may have other combinations available, such as 33%/38%. How many times my salary can I borrow for a mortgage? Many lenders will
allow you to borrow up to times your salary. There may be some lenders whose. The best way to think about how much home you can afford is to consider what your maximum monthly mortgage can be. As a
general rule of thumb, lenders limit. Get Access Now. No credit card required. calculators. How much can I borrow? This tool calculates loan amounts and mortgage payments for two underwriting. The
28% and 36% ratios are standard in the mortgage world, but lenders may have other combinations available, such as 33%/38%.
Another clue to examining home affordability is the 28/36 rule. Lenders use this to zero in on what you currently owe and how a mortgage will impact that debt. The most you can borrow is usually
capped at four-and-a-half times your annual income. It's tempting to get a mortgage for as much as possible but take a. First, a standard rule for lenders is that your monthly housing payment should
not take up more than 28% of your gross monthly income. What is your maximum mortgage loan amount? That largely depends on income and current monthly debt payments. This maximum mortgage calculator
collects these. Two criteria that mortgage lenders look at to understand how much you can afford are the housing expense ratio, known as the “front-end ratio,” and the total. Use the LendingTree home
affordability calculator to help you analyze multiple scenarios and mortgage types to find out how much house you can afford. Use Zillow's affordability calculator to estimate a comfortable mortgage
amount based on your current budget. Enter details about your income, down payment and. Every lender is going to have a different threshold, but a good ballpark figure is to keep your back-end ratio
under 36% for all debt payments, including. Find out how much you can afford with our mortgage affordability calculator. See estimated annual property taxes, homeowners insurance, and mortgage.
Ideally, you don't want a mortgage payment – alongside any other recurring debts – to be more than 50% of your monthly income. It is also wise to have some.
One way to start is to get pre-approved by a lender, who will look at factors such as your income, debt and credit, as well as how much you have saved for a. Use our free mortgage affordability
calculator to estimate how much house you can afford based on your monthly income, expenses and specified mortgage rate. You can calculate your mortgage qualification based on income, purchase price
or total monthly payment. There are two House Affordability Calculators that can be used to estimate an affordable purchase amount for a house based on either household income-to-debt. Find out how
much you're likely to be able to borrow on your income with Money Saving Expert's mortgage calculator. It is recommended that your DTI should be less than 36% to ensure that you have some padding on
your monthly spend. A good DTI greatly impacts your ability to. A general guideline for the mortgage you can afford is % to % of your gross annual income. However, the specific amount you can afford
to borrow depends. For example, borrowing $, to buy a $, home equals % LTV. Lenders can offer VA or USDA loans at % LTV, but not everyone is eligible for these. To determine how much you can afford
using this rule, multiply your monthly gross income by 28%. For example, if you make $10, every month, multiply $10,
As a rule of thumb, lenders tend to offer up to x your annual salary. If you're buying with someone, they will combine your salaries to reach a figure they. To calculate "how much house can I
afford," one rule of thumb is the 28/36 rule, which states that you shouldn't spend more than 28% of your gross monthly. Remember the mortgage rule of thumb-- no more than 36% of your gross monthly
income should go toward debts, including a mortgage. And your mortgage shouldn't be. How much money do you make each year? Rule of thumb says that your monthly home loan payment shouldn't total more
than 28% of your gross monthly income. Gross. How Much Can You Borrow? · You may qualify for a loan amount ranging from $, (conservative) to $, (aggressive) · Related Resources.
SIMPLE way to calculate how much mortgage you qualify for (mortgage broker advice)
Stock Trading Sites For Beginners | Codecademy Javascript Review
|
{"url":"https://spark-servis.ru/community/how-big-mortgage-can-i-get.php","timestamp":"2024-11-04T16:48:42Z","content_type":"text/html","content_length":"12666","record_id":"<urn:uuid:29f62b58-7260-4634-b142-d616082a090c>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00736.warc.gz"}
|
OpenStax College Physics for AP® Courses, Chapter 6, Problem 4 (Test Prep for AP® Courses)
Saturn's moon Titan has a radius of $2.58\times 10^{6}\textrm{ m}$ m and a measured gravitational field of $1.35\textrm{ m/s}^2$. What is its mass?
Question by
is licensed under
CC BY 4.0
Final Answer
$1.35 \times 10^{23}\textrm{ kg}$
Solution video
OpenStax College Physics for AP® Courses, Chapter 6, Problem 4 (Test Prep for AP® Courses)
vote with a rating of votes with an average rating of.
Video Transcript
This is College Physics Answers with Shaun Dychko. To calculate the mass of Saturn's moon Titan based on knowing the acceleration due to gravity there and the radius of that moon. So Newton's second
law says that the mass of some object that's free falling on the moon and that mass is M times the acceleration which is given the special letter G for acceleration due to gravity and subscript T for
on Titan. So that mass times acceleration equals the net force on the free falling object of which there's only one and that is the force due to gravity which is the universal gravitational constant
times the mass of the moon times the mass of the thing falling divided by the distance between you know, the center of the thing falling and the center of the moon but we'll just take this to be the
radius of the moon because it's going to be a small distance off the surface of course. So these M's cancel leaving us with acceleration due to gravity of Titan. It's G times mass of the moon divided
by radius of the moon squared. So we can solve for m by multiplying both sides by r squared over G and we get the mass of the moon then is 1.35 meters per second squared times—the moon's radius—2.58
times 10 to the 6 meters squared divided by the gravitational constant which gives 1.35 times 10 to the 23 kilograms is the mass of the moon Titan.
|
{"url":"https://collegephysicsanswers.com/openstax-solutions/saturns-moon-titan-has-radius-258times-106textrm-m-m-and-measured-gravitational","timestamp":"2024-11-04T00:44:15Z","content_type":"text/html","content_length":"146917","record_id":"<urn:uuid:e3567c8f-9827-4a53-aa7d-cd70a650c01e>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00117.warc.gz"}
|
Cite as
Evripidis Bampis, Bruno Escoffier, Themis Gouleakis, Niklas Hahn, Kostas Lakis, Golnoosh Shahkarami, and Michalis Xefteris. Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost)
Everywhere Else. In 31st Annual European Symposium on Algorithms (ESA 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 274, pp. 12:1-12:17, Schloss Dagstuhl – Leibniz-Zentrum
für Informatik (2023)
Copy BibTex To Clipboard
author = {Bampis, Evripidis and Escoffier, Bruno and Gouleakis, Themis and Hahn, Niklas and Lakis, Kostas and Shahkarami, Golnoosh and Xefteris, Michalis},
title = {{Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else}},
booktitle = {31st Annual European Symposium on Algorithms (ESA 2023)},
pages = {12:1--12:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-295-2},
ISSN = {1868-8969},
year = {2023},
volume = {274},
editor = {G{\o}rtz, Inge Li and Farach-Colton, Martin and Puglisi, Simon J. and Herman, Grzegorz},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2023.12},
URN = {urn:nbn:de:0030-drops-186659},
doi = {10.4230/LIPIcs.ESA.2023.12},
annote = {Keywords: TSP, Online algorithms, Learning-augmented algorithms, Algorithms with predictions, Competitive analysis}
|
{"url":"https://drops.dagstuhl.de/search/documents?author=Gouleakis,%20Themis","timestamp":"2024-11-07T16:13:59Z","content_type":"text/html","content_length":"90522","record_id":"<urn:uuid:f36a2ddd-813b-47ab-bb77-7da3c4a5e11a>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00171.warc.gz"}
|
Vibration and instability of a fluid-conveying nanotube resting on elastic foundation subjected to a magnetic field
Using the nonlocal Euler-Bernouli beam model, this paper is carried out to investigate the vibrations and instability of a single-walled carbon nanotube (SWCNT) conveying fluid subjected to a
longitudinal magnetic field. The nanobeam with clamped-clamped boundary conditions lies on the Pasternak foundation. Hamilton’s principle is applied to derive the fluid-structure interaction (FSI)
governing equation and the corresponding boundary conditions. In the solution part the differential transformation method (DTM) is used to solve the differential equations of motion. The influences
of nonlocal parameter, longitudinal magnetic field, Pasternak foundation on the critical divergence velocity of the nanotubes is studied.
1. Introduction
Micro and nanotube structural systems are widely used in biomedical applications, nanofiber composites, molecular drug delivery, and biosensors [1-2]. Experimental studies and molecular dynamics
simulations have shown that small-scale effects have a significant impact on the properties of nanomaterials, but the classical continuum theory cannot accurately describe the mechanical behavior of
such small-scale structures due to the lack of scale dependence, so a large number of continuum theories reflecting scale dependence have been proposed [3-4]. Among these, Eringen’s theory of
nonlocal elasticity [5] has been successfully employed to examine a variety of static and dynamic mechanical behaviors of nanotubes. The fluid-conveying carbon nanotubes are commonly rested on a
foundation, which could be simulated using the single-parameter Winkler foundation, the two-parameter Pasternak foundation, or several other viscoelastic foundation models [6-7]. Based on a nonlocal
Euler-Bernouli beam model, Rafiei et al. [8] investigated the vibrational response of SWCNT conveying fluid resting on a viscoelastic Kelvin foundation. Recent research has revealed that the
mechanical features of carbon nanotubes in an applied magnetic field and their magnetic properties have potential applications in the fields of nanosensors, spintronics, as well as micro- and
nanoelectro mechanical systems [9-10]. For the chattering of fluid-conveying carbon nanotubes in an applied magnetic field, Ghane et al. [11] employed a nonlocal Timoshenko beam model to determine
the impacts of magnetic field, flow velocityand small scale effect.
The above literature generally focuses on the analysis of the vibration properties of nanotubes when several parameters are coupled, however each parameter’s influence on other parameters is rarely
discussed. In this paper, based on the nonlocal Euler-Bernouli beam model, the vibration characteristics of the fluid conveying carbon nanotube is investigated when the two-parameter Pasternak
elastic foundation is coupled with the longitudinal magnetic field.
2. Vibration governing equation
Fig. 1 shows a schematic diagram of a fluid conveying carbon nanotube subjected to a longitudinal magnetic field resting on the Pasternak foundation. The system only undergoes a small in-plane
transverse vibration, gravity and the external pull and pressure of the tube are not taken into account, and the fluid inside the tube is an ideal fluid with constant flow velocity which is recorded
as $U$.
Fig. 1A fluid-conveyed SWCNT in Pasternak medium under the longitudinal magnetic field
Then, from the Euler-Bernoulli beam strain-displacement relationship we have:
${\epsilon }_{XX}=-Z\frac{{\partial }^{2}W}{\partial {X}^{2}},$
where $W\left(X,t\right)$ is the $Z$-directional displacement, $t$ is the time, and ${\epsilon }_{XX}$ is the strain in the $X$-direction.
From the theory of nonlocal elasticity [9], the stress-strain relationship containing small-scale effects is:
${\sigma }_{XX}-{\left({e}_{0}a\right)}^{2}\frac{{\partial }^{2}{\sigma }_{XX}}{\partial {X}^{2}}=E{\epsilon }_{XX},$
where ${\sigma }_{XX}$ is the stress in the $X$-direction, $E$ is the nanotube elastic modulus, ${e}_{0}$ is the material constant, and $a$ is the material internal characteristic length.
The force acting on the nanotube by the fluid inside the nanotube can be expressed as [12]:
${F}_{f}={m}_{f}\left(\frac{{\partial }^{2}W}{\partial {t}^{2}}+2U\frac{{\partial }^{2}W}{\partial X\partial t}+{U}^{2}\frac{{\partial }^{2}W}{\partial {X}^{2}}\right),$
where ${m}_{f}$ is the mass of fluid inside the nanotube per unit length.
The Lorentz force per unit length acting on the nanotube is [9]:
${F}_{Z}=\eta A{{H}_{X}}^{2}\frac{{\partial }^{2}W}{\partial {X}^{2}},$
where $A$ is the cross-sectional area of the pipe.
The force of the Pasternak elastic foundation on the nanotube is expressed as [6]:
$F=KW-G\frac{{\partial }^{2}W}{\partial {X}^{2}},$
where $K$ is the elastic spring and $G$ is the shear spring.
Based on the above equations, the work on the carbon nanotube by the elastic foundation, the magnetic field and the fluid inside the tube is:
${W}_{Ext}=\frac{1}{2}{\int }_{0}^{L}-\left(KW-G\frac{{\partial }^{2}W}{\partial {X}^{2}}+{F}_{Z}+{F}_{f}\right)WdX.$
Applying Hamilton’s Principle:
${\int }_{{t}_{1}}^{{t}_{2}}\delta \left({T}_{p}-{T}_{k}-{W}_{\mathrm{E}\mathrm{x}\mathrm{t}}\right)dt=0,$
where ${T}_{k}$ and ${T}_{p}$ are the total kinetic energy and strain energy of the nanotube system [4].
The system vibration differential equation can be obtained as:
$\begin{array}{l}EI\frac{{\partial }^{4}W}{\partial {X}^{4}}+\left({m}_{f}{U}^{2}-\eta A{{H}^{2}}_{X}-G\right)\frac{{\partial }^{2}W}{\partial {X}^{2}}+2{m}_{f}U\frac{{\partial }^{2}W}{\partial X\
partial t}+KW+\left({m}_{c}+{m}_{f}\right)\frac{{\partial }^{2}W}{\partial {t}^{2}}\\ \mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}-\left({e}_{0}a{\right)}^{2}\left[\begin{array}{c}
\left({m}_{f}{U}^{2}-\eta A{{H}^{2}}_{X}-G\right)\frac{{\partial }^{4}W}{\partial {X}^{4}}+2{m}_{f}U\frac{{\partial }^{4}W}{\partial {X}^{3}\partial t}\\ +K\frac{{\partial }^{2}W}{\partial {X}^{2}}+\
left({m}_{c}+{m}_{f}\right)\frac{{\partial }^{4}W}{\partial {X}^{2}\partial {t}^{2}}\end{array}\right]=0,\end{array}$
where $EI$ is the bending stiffness of SWCNT, ${m}_{c}$ is the mass of the nanotube per unit length.
The boundary conditions are:
$X=0,L:W=\frac{\partial W}{\partial X}=0.$
3. Differential transformation method and solution methodology
Introduction of dimensionless variables and parameters:
$w=\frac{W}{L},x=\frac{X}{L},\tau =\sqrt{\frac{EI}{{m}_{c}+{m}_{f}}}\frac{t}{{L}^{2}},\beta =\frac{{m}_{f}}{{m}_{c}+{m}_{f}},$
$u=UL\sqrt{\frac{{m}_{f}}{EI}},\mu ={\left(\frac{{e}_{0}a}{L}\right)}^{2},\psi =\eta A{H}_{X}^{2}\frac{{L}^{2}}{EI},g=G\frac{{L}^{2}}{EI},k=K\frac{{L}^{2}}{EI}.$
The above Eq. (8) and the boundary condition Eq. (9) can be rewritten as the dimensionless equation:
$\frac{{\partial }^{4}w}{\partial {x}^{4}}+\left({u}^{2}-\psi -g\right)\frac{{\partial }^{2}w}{\partial {x}^{2}}+2u\sqrt{\beta }\frac{{\partial }^{2}w}{\partial x\partial \tau }+kw+\frac{{\partial }^
{2}w}{\partial {\tau }^{2}}-\mu \left[\left({u}^{2}-\psi -g\right)\frac{{\partial }^{4}w}{\partial {x}^{4}}+2u\sqrt{\beta }\frac{{\partial }^{4}w}{\partial {x}^{3}\partial \tau }+k\frac{{\partial }^
{2}w}{\partial {x}^{2}}+\frac{{\partial }^{4}w}{\partial {x}^{2}\partial {\tau }^{2}}\right]=0,$
with corresponding boundary conditions:
$x=0,1:w=\frac{\partial w}{\partial x}=0.$
Let the solution of Eq. (10) be $w=\phi {e}^{\mathrm{\Omega }\tau }$, where $\mathrm{\Omega }$ is the dimensionless eigenvalue. Substituting the solution into Eq. (10) we have:
$\frac{{d}^{4}\phi }{d{x}^{4}}+\left({u}^{2}-\psi -g\right)\frac{{d}^{2}\phi }{d{x}^{2}}+2u\sqrt{\beta }\mathrm{\Omega }\frac{d\phi }{dx}+\left({\mathrm{\Omega }}^{2}+k\right)\phi -\mu \left[\left
({u}^{2}-\psi -g\right)\frac{{d}^{4}\phi }{d{x}^{4}}+2u\sqrt{\beta }\mathrm{\Omega }\frac{{d}^{3}\phi }{d{x}^{3}}+\left({\mathrm{\Omega }}^{2}+k\right)\frac{{d}^{2}\phi }{d{x}^{2}}\right]=0.$
The differential transformation method (DTM) was used to solve the Eq. (12), then we have the differential transformation form of Eq. (12):
$\left[1-\mu \left({u}^{2}-\psi -g\right)\right]\left(n+4\right)!\mathrm{\Phi }\left(n+4\right)-2\mu u\sqrt{\beta }\Omega \left(n+3\right)!\mathrm{\Phi }\left(n+3\right)$$+\left[\left({u}^{2}-\psi -g
\right)-\mu \left({\mathrm{\Omega }}^{2}+k\right)\right]\left(n+2\right)!\mathrm{\Phi }\left(n+2\right)+2u\sqrt{\beta }\mathrm{\Omega }\left(n+1\right)!\mathrm{\Phi }\left(n+1\right)$$+\left({\mathrm
{\Omega }}^{2}+k\right)n!\mathrm{\Phi }\left(n\right)=0.$
The DTM transformation of boundary conditions are given as follows:
$\mathrm{\Phi }\left(0\right)=\mathrm{\Phi }\left(1\right)=0,$
$\sum _{n=0}^{\infty }\mathrm{\Phi }\left(n\right)=0,\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\sum _{n=0}^{\infty }n\mathrm{\Phi }\left(n\right)=0.$
Let $F\left(2\right)={C}_{1}$, $F\left(3\right)={C}_{2}$, and substitute into Eq. (13) with Eq. (14), iterate to obtain $\mathrm{\Phi }\left(n\right)$, $n=$ 4, 5,…, $N$. Then, substituting $\mathrm{\
Phi }\left(n\right)$ into Eq. (15), the following eigenvalue problem can be obtained:
$\left[\begin{array}{ll}{a}_{11}& {a}_{12}\\ {a}_{21}& {a}_{22}\end{array}\right]\left\{\begin{array}{l}{C}_{1}\\ {C}_{2}\end{array}\right\}=0,$
where ${a}_{ij}$ are functions of the dimensionless eigenvalue $\mathrm{\Omega }$ and other parameters. By calculating the determinant, the value of complax eigenvalues $\mathrm{\Omega }$ can be
computed numerically. The real and imaginary parts of $\mathrm{\Omega }$ correspond to the damping and natural frequencies of the system, respectively. It should be mentioned that clamped-clamped
nanotubes lose their stability by both imaginary and real parts of the first mode complax frequency are equal to zero.
4. Numerical results and discussion
The analytical parameters used in this paper are [9]: the density of the fluid ${\rho }_{f}=$1000 kg/m^3, and that of the carbon nanotube ${\rho }_{c}=$ 2300 kg/m^3, its outer layer radius ${R}_{0}=$
3 nm, wall thickness ${t}_{d}=$ 0.1 nm, elastic modulus $E=$ 3.4 TPa. Poisson’s ratio $u =$0.3. In vibration, the $L/2{R}_{0}=$ 40, the magnetic permeability is taken as $\eta =$ 4$\pi$×10^-7 and $\
beta =$0.5. The rest of the parameters are chosen in the examples, and the DTM algorithm is chosen an intercept number of 60.
Figs. 2-3 present the critical flow velocity of the SWCNT conveying fluid as functions of coefficients of Pasternak foundation and magnetic field ${H}_{x}$. Fig. 3 depicts that for different
elasticity coefficients $k$, the enhancement of the magnetic field can increase the critical flow velocity of the nanotube system and thus improve its stability. However, when comparing the
enhancement values of the critical flow velocity by increasing the same magnetic field strength in detail, it can be found that the enhancement is different for different elasticity factors $k$. This
implies that the enhancement of the stability of the system by the magnetic field is suppressed to some extent with the increase of the elasticity coefficient $k$. Similar conclusions can be obtained
from the analysis of Fig. 4: the stability of the fluid-conveying carbon nanotube increases with the enhancement of the magnetic field for different shear coefficients $g$. But the shear coefficient
$g$ still suppresses the influence of the magnetic field on the stability of the system to some extent, although this suppression effect is not obvious, and the weaker the magnetic field the less
obvious this effect is.
The effects of the elasticity coefficient $k$ and shear coefficient g on the critical flow velocity when considering the small-scale effect are presented in Figs. 4 and 5. As can be seen from Figs. 4
and 5, the increase in the nonlocal parameter with and without the elastic foundation reduces the stability of the system. A detailed comparison of the reduction in the dimensionless critical flow
velocity, $\mathrm{\Delta }{u}_{cr}$, for increasing the same nonlocal parameter $\mu$ reveals that the shear coefficient $g$ suppresses the effect of the nonlocal parameter on the system stability,
however, the elasticity coefficient $k$ amplifies the effect.
Fig. 2Critical flow velocities with longitudinal magnetic field in a fluid-conveying SWCNT for different values k (g=μ=0)
Fig. 3Critical flow velocities with longitudinal magnetic field in a fluid-conveying SWCNT for different values g (k=μ=0)
Fig. 4Critical flow velocities with nonlocal parameter μ in a fluid-conveying SWCNT for different values k (HX=g=0)
Fig. 5Critical flow velocities with nonlocal parameter μ in a fluid-conveying SWCNT for different values g (HX=k=0)
5. Conclusions
Size-dependent vibrations and instability of fluid-conveying SWCNT resting on a Pasternak elastic foundation were studied. The effects of different effective parameters, including magnetic field
strength, two Pasternak coefficients and nonlocal parameter, were discussed in detail. And the paramount goal of this paper is to highlight the interaction of above paremeters effects on the
instability behavior of the nanotube. The results demonstrate that the two Pasternak foundation reduce the magnetic field's influence on the stability of the system. However, the two coefficients of
foundation have opposing impacts on the nonlocal parameter. Specifically, the elastic parameter increases the small-scale effect on the nanotube system while the shear parameter lowers it.
• M. Malikan, “On the plastic buckling of curved carbon nanotubes,” Theoretical and Applied Mechanics Letters, Vol. 10, No. 1, pp. 46–56, Jan. 2020, https://doi.org/10.1016/j.taml.2020.01.004
• A. Amiri, R. Vesal, and R. Talebitooti, “Flexoelectric and surface effects on size-dependent flow-induced vibration and instability analysis of fluid-conveying nanotubes based on flexoelectricity
beam model,” International Journal of Mechanical Sciences, Vol. 156, No. 6, pp. 474–485, Jun. 2019, https://doi.org/10.1016/j.ijmecsci.2019.04.018
• R. Ansari, R. Gholami, and A. Norouzzadeh, “Size-dependent thermo-mechanical vibration and instability of conveying fluid functionally graded nanoshells based on Mindlin’s strain gradient
theory,” Thin-Walled Structures, Vol. 105, No. 8, pp. 172–184, Aug. 2016, https://doi.org/10.1016/j.tws.2016.04.009
• M. Li and K. Fang, “Vibration of elastic restrained simply supported carbon nanotubes,” Journal of Chongqing University, Vol. 43, No. 6, pp. 77–81, 2020.
• A. C. Eringen, “Nonlocal polar elastic continua,” International Journal of Engineering Science, Vol. 10, No. 1, pp. 1–16, Jan. 1972, https://doi.org/10.1016/0020-7225(72)90070-5
• E. Ghavanloo, F. Daneshmand, and M. Rafiei, “Vibration and instability analysis of carbon nanotubes conveying fluid and resting on a linear viscoelastic winkler foundation,” Physica E:
Low-dimensional Systems and Nanostructures, Vol. 42, No. 9, pp. 2218–2224, Jul. 2010, https://doi.org/10.1016/j.physe.2010.04.024
• K. B. Mustapha and Z. W. Zhong, “The thermo-mechanical vibration of a single-walled carbon nanotube studied using the Bubnov-Galerkin method,” Physica E: Low-dimensional Systems and
Nanostructures, Vol. 43, No. 1, pp. 375–381, Nov. 2010, https://doi.org/10.1016/j.physe.2010.08.012
• M. Rafiei, S. R. Mohebpour, and F. Daneshmand, “Small-scale effect on the vibration of non-uniform carbon nanotubes conveying fluid and embedded in viscoelastic medium,” Physica E:
Low-dimensional Systems and Nanostructures, Vol. 44, No. 7-8, pp. 1372–1379, Apr. 2012, https://doi.org/10.1016/j.physe.2012.02.021
• M. Li, L. F. Lv, H. S. Zheng, and K. Fang, “Magnetic field effect on flutter stability of a fluid-conveying cantilevered carbon nanotube under different temperature fields,” Chinese Journal of
Solid Mechanics, Vol. 42, No. 1, pp. 87–93, 2021, https://doi.org/10.19636/j.cnki.cjsm42-1250/o3.2020.027
• Z. Lyu, Y. Yang, and H. Liu, “High-accuracy hull iteration method for uncertainty propagation in fluid-conveying carbon nanotube system under multi-physical fields,” Applied Mathematical
Modelling, Vol. 79, No. 10, pp. 362–380, Mar. 2020, https://doi.org/10.1016/j.apm.2019.10.040
• M. Ghane, A. R. Saidi, and R. Bahaadini, “Vibration of fluid-conveying nanotubes subjected to magnetic field based on the thin-walled Timoshenko beam theory,” Applied Mathematical Modelling, Vol.
80, No. 11, pp. 65–83, Apr. 2020, https://doi.org/10.1016/j.apm.2019.11.034
• L. Wang and Q. Ni, “A reappraisal of the computational modelling of carbon nanotubes conveying viscous fluid,” Mechanics Research Communications, Vol. 36, No. 7, pp. 833–837, Oct. 2009, https://
About this article
Mathematical models in engineering
fluid-conveyed carbon nanotube
longitudinal magnetic field
Pasternak foundation
nonlocal parameter
differential transformation method
This research work has been supported by National Natural Science Foundation of China (Grant Nos. 51909196) and Hubei Province Key Laboratory of Systems Science in Metallurgical Process (Wuhan
University of Science and Technology) of China (No. Y201520).
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflict of interest
The authors declare that they have no conflict of interest.
Copyright © 2022 Ming Li, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
{"url":"https://www.extrica.com/article/23006","timestamp":"2024-11-07T02:41:31Z","content_type":"text/html","content_length":"136021","record_id":"<urn:uuid:a443b4dd-3e43-4163-8ddf-36d5f02ad2dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00411.warc.gz"}
|
I have a weird issue with my triplanar shader
Hi, I’ve been struggling with this issue for a while now and I can’t seem to find the cause. I have this triplanar shader that works fine except for when a surface is in between the front/top/right
directions. Then it gets very bright. Even though in the inspector I set it to the same color 3 times. Any thoughts on what this might be and how to solve it would be very much appreciated!
I’m assuming you’re simply multiplying the directional colours by their respective weights and adding them. This works well and is how it’s done in pretty much every tutorial out there, but it isn’t
correct. The dot product used to calculate the weights is equivalent to finding the cosine of the angle between each vector - this means that directly between two sides isn’t 0.5, but instead is
~0.7. As such, when you add the two components together, you end up with a total intensity of ~1.4 - hence why the colours are blown out (not to mention this increasing even further inbetween all 3
To fix this, you’ll need to normalise the final colour based on the total weight;
finalColour = (colour1 * weight1 + colour2 * weight2 + colour3 * weight3) / (weight1 + weight2 + weight3);
|
{"url":"https://discussions.unity.com/t/i-have-a-weird-issue-with-my-triplanar-shader/234967","timestamp":"2024-11-04T04:15:13Z","content_type":"text/html","content_length":"28834","record_id":"<urn:uuid:e0536f4f-3dd3-49c4-b501-ae8656adb08a>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00307.warc.gz"}
|
On special subgroups of fundamental group
Suppose $\alpha$ is a nonzero cardinal number,
$\mathcal I$ is an ideal on
arc connected topological space $X$, and
${\mathfrak P}_{\mathcal I}^\alpha(X)$ is the subgroup of $\pi_1(X)$
(the first fundamental group of $X$) generated by homotopy classes of
$\alpha\frac{\mathcal I}{}$loops.
The main aim of this text is to study ${\mathfrak P}_{\mathcal I}^\alpha(X)$s
and compare them.
Most interest is in $\alpha\in\{\omega,c\}$ and $\mathcal
I\in\{\mathcal P_{fin}(X),\{\varnothing\}\}$, where $\mathcal
P_{fin}(X)$ denotes the collection of all finite subsets of $X$.
We denote ${\mathfrak P}_{\{\varnothing\}}^\alpha(X)$ with
${\mathfrak P}^\alpha(X)$. We
prove the following statements:
$\bullet$ for arc connected topological spaces $X$ and $Y$
${\mathfrak P}^\alpha(X)$ is isomorphic to ${\mathfrak P}^\alpha(Y)$
for all infinite cardinal number $\alpha$, then
$\pi_1(X)$ is isomorphic to $\pi_1(Y)$;
$\bullet$ there are arc connected topological spaces $X$ and $Y$
such that $\pi_1(X)$ is isomorphic to $\pi_1(Y)$ but
${\mathfrak P}^\omega(X)$ is not isomorphic to ${\mathfrak P}^\omega(Y)$;
$\bullet$ for arc connected topological space $X$ we have
${\mathfrak P}^\omega(X)\subseteq{\mathfrak P}^c(X)
$\bullet$ for Hawaiian earring $\mathcal X$, the sets
${\mathfrak P}^\omega({\mathcal X})$, ${\mathfrak P}^c({\mathcal X})$,
and $\pi_1({\mathcal X})$
are pairwise distinct.
So ${\mathfrak P}^\alpha(X)$s and ${\mathfrak P}_{\mathcal I}^\alpha(X)$s
will help us to classify the class of all arc connected topological spaces with
isomorphic fundamental groups.
• There are currently no refbacks.
|
{"url":"https://journal.pmf.ni.ac.rs/filomat/index.php/filomat/article/view/8572","timestamp":"2024-11-06T20:35:56Z","content_type":"application/xhtml+xml","content_length":"18291","record_id":"<urn:uuid:07f1d48f-8f0b-4cea-8f51-073a5fe32fd2>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00859.warc.gz"}
|
Class: ClassificationLinear
Classification edge for linear classification models
e = edge(Mdl,X,Y) returns the classification edges for the binary, linear classification model Mdl using predictor data in X and corresponding class labels in Y. e contains a classification edge for
each regularization strength in Mdl.
e = edge(Mdl,Tbl,ResponseVarName) returns the classification edges for the trained linear classifier Mdl using the predictor data in Tbl and the class labels in Tbl.ResponseVarName.
e = edge(Mdl,Tbl,Y) returns the classification edges for the classifier Mdl using the predictor data in table Tbl and the class labels in vector Y.
e = edge(___,Name,Value) specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can specify that
columns in the predictor data correspond to observations or supply observation weights.
Input Arguments
Tbl — Sample data
Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain
additional columns for the response variable and observation weights. Tbl must contain all the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character
vectors are not allowed.
If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.
If you train Mdl using sample data contained in a table, then the input data for edge must also be in a table.
ResponseVarName — Response variable name
name of variable in Tbl
Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.
If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'.
Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.
The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If the response variable is a character array, then each
element must correspond to one row of the array.
Data Types: char | string
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Weights — Observation weights
ones(size(X,1),1) (default) | numeric vector | name of variable in Tbl
Observation weights, specified as the comma-separated pair consisting of 'Weights' and a numeric vector or the name of a variable in Tbl.
• If you specify Weights as a numeric vector, then the size of Weights must be equal to the number of observations in X or Tbl.
• If you specify Weights as the name of a variable in Tbl, then the name must be a character vector or string scalar. For example, if the weights are stored as Tbl.W, then specify Weights as 'W'.
Otherwise, the software treats all columns of Tbl, including Tbl.W, as predictors.
If you supply weights, then for each regularization strength, edge computes the weighted classification edge and normalizes weights to sum up to the value of the prior probability in the respective
Data Types: double | single
Output Arguments
e — Classification edges
numeric scalar | numeric row vector
Classification edges, returned as a numeric scalar or row vector.
e is the same size as Mdl.Lambda. e(j) is the classification edge of the linear classification model trained using the regularization strength Mdl.Lambda(j).
Estimate Test-Sample Edge
Load the NLP data set.
X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. There are more than two classes in the data.
The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to the Statistics and
Machine Learning Toolbox™ documentation web pages.
Train a binary, linear classification model that can identify whether the word counts in a documentation web page are from the Statistics and Machine Learning Toolbox™ documentation. Specify to
holdout 30% of the observations. Optimize the objective function using SpaRSA.
rng(1); % For reproducibility
CVMdl = fitclinear(X,Ystats,'Solver','sparsa','Holdout',0.30);
CMdl = CVMdl.Trained{1};
CVMdl is a ClassificationPartitionedLinear model. It contains the property Trained, which is a 1-by-1 cell array holding a ClassificationLinear model that the software trained using the training set.
Extract the training and test data from the partition definition.
trainIdx = training(CVMdl.Partition);
testIdx = test(CVMdl.Partition);
Estimate the training- and test-sample edges.
eTrain = edge(CMdl,X(trainIdx,:),Ystats(trainIdx))
eTest = edge(CMdl,X(testIdx,:),Ystats(testIdx))
Feature Selection Using Test-Sample Edges
One way to perform feature selection is to compare test-sample edges from multiple models. Based solely on this criterion, the classifier with the highest edge is the best classifier.
Load the NLP data set.
X is a sparse matrix of predictor data, and Y is a categorical vector of class labels. There are more than two classes in the data.
The models should identify whether the word counts in a web page are from the Statistics and Machine Learning Toolbox™ documentation. So, identify the labels that correspond to the Statistics and
Machine Learning Toolbox™ documentation web pages. For quicker execution time, orient the predictor data so that individual observations correspond to columns.
Ystats = Y == 'stats';
X = X';
rng(1); % For reproducibility
Create a data partition which holds out 30% of the observations for testing.
Partition = cvpartition(Ystats,'Holdout',0.30);
testIdx = test(Partition); % Test-set indices
XTest = X(:,testIdx);
YTest = Ystats(testIdx);
Partition is a cvpartition object that defines the data set partition.
Randomly choose half of the predictor variables.
p = size(X,1); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));
Train two binary, linear classification models: one that uses all of the predictors and one that uses half of the predictors. Optimize the objective function using SpaRSA, and indicate that
observations correspond to columns.
CVMdl = fitclinear(X,Ystats,'CVPartition',Partition,'Solver','sparsa',...
PCVMdl = fitclinear(X(idxPart,:),Ystats,'CVPartition',Partition,'Solver','sparsa',...
CVMdl and PCVMdl are ClassificationPartitionedLinear models.
Extract the trained ClassificationLinear models from the cross-validated models.
CMdl = CVMdl.Trained{1};
PCMdl = PCVMdl.Trained{1};
Estimate the test sample edge for each classifier.
fullEdge = edge(CMdl,XTest,YTest,'ObservationsIn','columns')
partEdge = edge(PCMdl,XTest(idxPart,:),YTest,'ObservationsIn','columns')
Based on the test-sample edges, the classifier that uses all of the predictors is the better model.
Find Good Lasso Penalty Using Edge
To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare test-sample edges.
Load the NLP data set. Preprocess the data as in Feature Selection Using Test-Sample Edges.
load nlpdata
Ystats = Y == 'stats';
X = X';
Partition = cvpartition(Ystats,'Holdout',0.30);
testIdx = test(Partition);
XTest = X(:,testIdx);
YTest = Ystats(testIdx);
Create a set of 11 logarithmically-spaced regularization strengths from $1{0}^{-8}$ through $1{0}^{1}$.
Lambda = logspace(-8,1,11);
Train binary, linear classification models that use each of the regularization strengths. Optimize the objective function using SpaRSA. Lower the tolerance on the gradient of the objective function
to 1e-8.
rng(10); % For reproducibility
CVMdl = fitclinear(X,Ystats,'ObservationsIn','columns',...
CVMdl =
CrossValidatedModel: 'Linear'
ResponseName: 'Y'
NumObservations: 31572
KFold: 1
Partition: [1x1 cvpartition]
ClassNames: [0 1]
ScoreTransform: 'none'
Extract the trained linear classification model.
Mdl =
ResponseName: 'Y'
ClassNames: [0 1]
ScoreTransform: 'logit'
Beta: [34023x11 double]
Bias: [-11.3599 -11.3599 -11.3599 -11.3599 -11.3599 -7.2163 -5.1919 -3.7624 -3.1671 -2.9610 -2.9610]
Lambda: [1.0000e-08 7.9433e-08 6.3096e-07 5.0119e-06 3.9811e-05 3.1623e-04 0.0025 0.0200 0.1585 1.2589 10]
Learner: 'logistic'
Mdl is a ClassificationLinear model object. Because Lambda is a sequence of regularization strengths, you can think of Mdl as 11 models, one for each regularization strength in Lambda.
Estimate the test-sample edges.
e = edge(Mdl,X(:,testIdx),Ystats(testIdx),'ObservationsIn','columns')
e = 1×11
0.9986 0.9986 0.9986 0.9986 0.9986 0.9933 0.9765 0.9202 0.8340 0.8128 0.8128
Because there are 11 regularization strengths, e is a 1-by-11 vector of edges.
Plot the test-sample edges for each regularization strength. Identify the regularization strength that maximizes the edges over the grid.
[~, maxEIdx] = max(e);
maxLambda = Lambda(maxEIdx);
hold on
ylabel('log_{10} test-sample edge')
xlabel('log_{10} Lambda')
legend('Edge','Max edge')
hold off
Several values of Lambda yield similarly high edges. Higher values of lambda lead to predictor variable sparsity, which is a good quality of a classifier.
Choose the regularization strength that occurs just before the edge starts decreasing.
Train a linear classification model using the entire data set and specify the regularization strength yielding the maximal edge.
MdlFinal = fitclinear(X,Ystats,'ObservationsIn','columns',...
To estimate labels for new observations, pass MdlFinal and the new data to predict.
More About
Classification Edge
The classification edge is the weighted mean of the classification margins.
One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.
Classification Margin
The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.
The software defines the classification margin for binary classification as
x is an observation. If the true label of x is the positive class, then y is 1, and –1 otherwise. f(x) is the positive-class classification score for the observation x. The classification margin is
commonly defined as m = yf(x).
If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
Classification Score
For linear classification models, the raw classification score for classifying the observation x, a row vector, into the positive class is defined by
${f}_{j}\left(x\right)=x{\beta }_{j}+{b}_{j}.$
For the model with regularization strength j, ${\beta }_{j}$ is the estimated column vector of coefficients (the model property Beta(:,j)) and ${b}_{j}$ is the estimated, scalar bias (the model
property Bias(j)).
The raw classification score for classifying x into the negative class is –f(x). The software classifies observations into the class that yields the positive score.
If the linear classification model consists of logistic regression learners, then the software applies the 'logit' score transformation to the raw classification scores (see ScoreTransform).
By default, observation weights are prior class probabilities. If you supply weights using Weights, then the software normalizes them to sum to the prior probabilities in the respective classes. The
software uses the normalized weights to estimate the weighted edge.
Extended Capabilities
Tall Arrays
Calculate with arrays that have more rows than fit in memory.
The edge function supports tall arrays with the following usage notes and limitations:
• edge does not support tall table data.
For more information, see Tall Arrays.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
This function fully supports GPU arrays. For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2016a
R2024a: Specify GPU arrays (requires Parallel Computing Toolbox)
edge fully supports GPU arrays.
R2022a: edge returns a different value for a model with a nondefault cost matrix
R2022a: edge can return NaN for predictor data with missing values
|
{"url":"https://uk.mathworks.com/help/stats/classificationlinear.edge.html","timestamp":"2024-11-07T16:13:49Z","content_type":"text/html","content_length":"136401","record_id":"<urn:uuid:40043992-f4de-4a8d-91da-1a66d7f105fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00325.warc.gz"}
|
Binary Search's infamous pitfall
Exploring one of the most infamous pitfalls in binary search, and the importance of understanding it.
## Introduction
I was prompted to write this after watching Computerphile’s recent video on the binary search algorithm. I generally enjoy Dr. Mike Pound’s videos a lot, so I thought I’d play this one in the
background, even if the topic is painfully familiar to me. All in all, Dr. Pound does a fine job at explaining the general concept of the algorithm and why it’s so ingenious but omitted what I think
is a small but very significant detail.
For those of you who are unfamiliar with the binary search algorithm, I’d suggest watching the video first. For those of you who want a TL;DW, binary search is a search algorithm for finding the
position of a target value within a sorted array in an optimal way. The general idea is to split the search space in half every iteration, comparing the middle point and our target value. Since we’re
searching in a sorted array, one of three things can happen:
1. The middle point is smaller than our target value, in which case we know our target value cannot possibly be anywhere to the left of the middle point.
2. The middle point is bigger than our target value, in which case the target cannot be on the right of the middle point.
3. The middle point happens to be our target, in which case we’re done.
The most common definition for mid is as follows:
$mid = \left\lfloor \frac{left + right}{2} \right\rfloor$
Binary search is much better than linear search for sorted data, as it uses the fact that all elements are ordered to halve the search space at every iteration, achieving a worst-case complexity of $
\text{O}(\log_2(n))$, as opposed to linear search’s $\text{O}(n)$ complexity. It is a brilliant yet simple algorithm that makes searching through billions and billions of elements a trivial task.
To put in perspective how much better binary search is than linear search at scale, for an array with 4 billion elements, binary search has to do only 32 iterations, whereas linear search will have
to look at all 4 billion elements in the worst-case scenario. If we double the search space, i.e. if we have 8 billion elements, binary search will need only one extra iteration, which will barely
affect its runtime, while the runtime of linear search will double.
Of course, keeping the data sorted is an entirely different problem, which I won’t get into in this blog post.
## The problem
Binary search is notoriously tricky to implement correctly, despite being so simple. In this blog post, we’ll look at one of the infamous pitfalls one can fall into while implementing it, in
particular the one that the Computerphile video failed to mention.
Let’s look at a possible implementation of binary search. I’ve chosen C++ for this, but excluding the funky syntax for generics and references, the implementation should be easy to understand
regardless of your experience with C++:
template <class T>
bool binary_search(std::vector<T> const& values, T target) {
int left = 0, right = values.size() - 1;
while (left <= right) {
int mid = (left + right) / 2;
if (values[mid] < target) {
left = mid + 1;
} else if (values[mid] > target) {
right = mid - 1;
} else {
return true;
return false;
At first glance, it might seem perfectly fine, and in practice, it’ll work fine for any small example you can come up with, but I promise, there’s something wrong with it. And no, it isn’t a logic
error (although I have to admit, it took me an embarrassing amount of attempts to realize I had the cases backward at first), the algorithm itself is perfectly fine; the issue is more subtle than
The problem is in the way mid is calculated. It might not be obvious at first, but the expression (left + right) / 2 can overflow. This can happen if left and right are large enough to overflow the
int type. With a little bit of math, we can figure out that we can trigger an overflow if we manage to get left and right to both be equal to a number larger than or equal to $2^{30}$, as the maximum
value a int can store is $2^{31} - 1$.
One might say that it’s unrealistic to be dealing with big enough arrays to trigger the issue, but I disagree, as the issue can be triggered in the implementation above with $2^{31}/2 + 1 = 2^{30} +
1$ 32-bit integers^1, which take up around $4 \text{GiB}$ of memory, which isn’t unreasonable, especially when we consider that the real strength of binary search is searching through large search
Let’s see if we can write some code to trigger it:
int main() {
// Least verbose C++ STL code (random number generation)
std::random_device rnd_device;
std::mt19937 mersenne_engine{rnd_device()};
std::uniform_int_distribution<int> dist{0,
std::numeric_limits<int>::max() - 1};
// Allocate the sear ch array with the appropriate size
auto n = (1 << 30) + 1; // 2^30 + 1
std::vector<int> values(n);
// Generate the search array
std::cout << "Generating " << n << " values..." << std::endl;
std::generate(values.begin(), values.end(),
[&]() { return dist(mersenne_engine); });
// Sort the search array
std::cout << "Sorting..." << std::endl;
std::sort(values.begin(), values.end());
// Set a target that's bigger than everything in our search array
auto target = std::numeric_limits<int>::max();
std::cout << "Searching..." << std::endl;
std::cout << binary_search(values, target) << std::endl;
return 0;
I’ll also add a log in the while loop so that we can observe the algorithm’s state:
while (left <= right) {
int mid = (left + right) / 2;
std::cout << "left " << left << " right " << right << " mid " << mid
<< std::endl;
// ...
As mentioned, the code in the main function above allocates $2^{30} + 1$ 32-bit integers between $0$ and $2^{31} - 2$^2, sorts them, then calls binary_search with $2^{31} - 1$ as a target, which we
know isn’t in the array, meaning that the binary search will have to repeatedly move the left index to the right until it becomes equal to right, which is exactly when we expect to run into an
Let’s try running it:
$ g++ -o binary_search -Ofast binary_search.cpp && ./binary_search
Generating 1073741825 values...
(omitting irrelevant logs to save vertical space)
left 1073741809 right 1073741824 mid 1073741816
left 1073741817 right 1073741824 mid 1073741820
left 1073741821 right 1073741824 mid 1073741822
left 1073741823 right 1073741824 mid 1073741823
left 1073741824 right 1073741824 mid -1073741824
zsh: segmentation fault ./binary_search
It runs for quite a long time (sorting 4 GiB worth of data is no joke), then boom! Segmentation fault!
The log we added to the binary_search loop is really helpful for understanding what’s going on: we can see how left is slowly creeping up in value as we check more and more numbers that are smaller
than our target, until finally, left and right both become $1\,073\,741\,824$, which is $2^{31}/2 = 2^{30}$. Adding $2^{30}$ to itself gives us $2^{31}$ -- one bigger than the largest value we can
fit in a 32-bit signed integer, which shoots us over to the negative numbers, causing a segfault.
## The solution
Let’s think about the situation mathematically. We’re trying to calculate the midpoint $mid$ of a range defined by $left$ and $right$. We know that $left \leq mid \leq right$, so there must be a way
to calculate $mid$ without overflow. What we need to do is come up with an alternative formula for $mid$ -- one that doesn’t “go through” any large intermediate values to arrive at the final result.
The problem lies in the addition of $left$ and $right$, so we’d be in business if we could find a way to rewrite the original formula in a way that avoids it. Since we know the final result will be
larger than $left$ and smaller than $right$, we should be able to arrive at the result if we add some non-negative number $x$ to $left$. It should be easy to notice that $x$ must be sufficiently
small so that $left + x <= right$, which in practice means that if we find a way to separate $left$ from the fraction, we’ll end up with an overflow-free expression.
Thankfully, a fairly simple transformation gets us exactly what we want (integer division is assumed to avoid the noise of adding floor everywhere):
$mid = \frac{left + right}{2} = \frac{2 \times left - left + right}{2} = left + \frac{right - left}{2}$
Have I lost you? No? Good.
So, this is the “math” behind the solution to our overflow problem. But does it work in practice? Only one way to find out! Let’s modify our proof-of-concept program and check if we can still trigger
a segfault:
while (left <= right) {
// int mid = (left + right) / 2;
int mid = left + (right - left) / 2;
std::cout << "left " << left << " right " << right << " mid " << mid
<< std::endl;
// ...
This is the output we get when we try running the program now:
$ g++ -o binary_search -Ofast binary_search.cpp && ./binary_search
Generating 1073741825 values...
(omitting irrelevant logs to save vertical space)
left 1073741809 right 1073741824 mid 1073741816
left 1073741817 right 1073741824 mid 1073741820
left 1073741821 right 1073741824 mid 1073741822
left 1073741823 right 1073741824 mid 1073741823
left 1073741824 right 1073741824 mid 1073741824
Awesome! As you can see, our test program no longer segfaults and correctly returns false, as our target isn’t contained in the array. What if we set the last element to our target? Will our
algorithm find it? Let’s see!
Let’s set the last element to target:
// Setting a target that's bigger than everything in our search array
auto target = std::numeric_limits<int>::max();
values.back() = target;
// ...
If we re-run, this is the output we see:
$ g++ -o binary_search -Ofast binary_search.cpp && ./binary_search
Generating 1073741825 values...
(omitting irrelevant logs to save vertical space)
left 1073741809 right 1073741824 mid 1073741816
left 1073741817 right 1073741824 mid 1073741820
left 1073741821 right 1073741824 mid 1073741822
left 1073741823 right 1073741824 mid 1073741823
left 1073741824 right 1073741824 mid 1073741824
As expected, our binary search successfully finds the target right at the very end of the array. Problem solved! Well, at least for search spaces up to $2^{31}$…
If we want to go even bigger, the int data type will no longer cut it. I kept the indices as 32-bit signed integers for simplicity’s sake. Had I used size_t (64-bit unsigned integers), the problem
would’ve very much still existed (albeit not as spectacular^3), but the reproduction would’ve required a much bigger search space. For completeness though, let’s switch to a bigger data type so that
we can handle even bigger inputs:
template <class T>
bool binary_search(std::vector<T> const& values, T target) {
size_t left = 0, right = values.size() - 1;
while (left <= right) {
size_t mid = left + (right - left) / 2;
// ...
Alternatively, we could've solved this by adding a size check. If for whatever reason you’d prefer if your binary search used 32-bit integers (I suppose performance could be a reason, but binary
search is already stupidly fast, so differences would be negligible on modern hardware), checking the size of values before proceeding is an option.
Making assumptions like this is perfectly fine if we're solving some specific problem. If you knew you wouldn't be dealing with search spaces as large, fine, but please, verify your assumptions.
## The takeaways
Programming is hard. As a project grows in complexity, the surface area for bugs increases drastically. As developers, we're like architects in a digital landscape, and the integrity of our
structures (software), relies heavily on the bedrock of thoughtful design and careful execution. Being aware of the implications of our design and implementation decisions is crucial because these
choices are like dominoes; a single misstep can trigger a cascade of vulnerabilities, each with the potential to compromise our work and user trust.
To some of you, this error might seem extremely niche and unlikely to happen in practice, but it’s important to look at the bigger picture. Imagine this as some small, harmless-looking utility
function in the context of a large project. Back when you wrote that utility function, you might not have seen anything wrong with it, just as our initial binary search implementation looked and
worked completely fine. But under just the right conditions, this very function could bring down our entire app. This is why it’s important to “sweat” the details.
Also, to expand on the last paragraph of the previous section about verifying your assumptions, I can't stress how important this is. Making assumptions is fine, but not checking them is just asking
for trouble. It can save you by catching errors caused by the violation of said assumptions early before they can cause any harm. As much as you want to convince yourself that the single if statement
involved in doing so will be the bottleneck of your project, trust me, it won't.
Oh, and speaking of harmless-looking utility functions in the context of large projects, the Arrays#binarySearch function in Java had this very issue some 10 years ago! Look it up, I'm not kidding. I
guess this is another mini-lesson about not blindly trusting libraries.
Unfortunately, there's no “silver bullet” for all of our problems (sorry, Rust shills, but rewriting it in Rust ain’t it). The only solution is this: awareness. Be aware that sometimes the trickiest
errors aren't caused by complexity, but by “simplicity”. Sometimes, a single addition is enough.
Had I chosen a different type for left and right, e.g. unsigned int or even size_t, the issue would’ve been harder to trigger, as it would require a much larger array, but throwing bigger data types
at the problem isn’t exactly a solution, especially for general implementations of binary search, where you don’t have a real constraint on the data.
Another thing to consider is that binary search is a general search algorithm that can be used to search through any search space, regardless if it’s a large amount of contiguous memory like an
array. For example, one could use binary search to look through a big file of sorted data, where your only real limit is the amount of storage you have. An overflow could easily happen in a scenario
like that as well.
I’ve chosen the range for the values somewhat arbitrarily. There’s no special meaning behind $2^{31} - 2$, I just made the upper bound one less than the target, which I chose to be $2^{31} - 1$. For
this to work reliably, the target mustn’t be part of the possible values to avoid finding it earlier due to it showing up more than once.
The segfault comes from the fact we’re trying to access a negative index, which shoots us into memory we can’t access. If an unsigned data type was to be used, the worst that would happen is an
incorrect result (given that our search space needs to be sufficiently large, an overflow of a signed data type would’ve just given us an incorrect index, but we’d still be accessing valid memory),
which isn’t as crazy as a segfault.
|
{"url":"https://invak.id/binary-search-infamous-pitfall","timestamp":"2024-11-13T18:16:55Z","content_type":"text/html","content_length":"113021","record_id":"<urn:uuid:0b9bc2c1-5ebf-4bd1-8ad2-6885f6af0239>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00831.warc.gz"}
|
Cool Numbers Results
The Hall of Fame number you chose was 00330033.
Congratulations! You have found an extremely cool number! It has a Universal Coolness Index of 99.37%
• 00330033 contains 2 4-of-a-kinds. Only 0.0032% of 8-digit numbers have this combination.
• 00330033 contains 4 pairs together. Only 0.0073% of 8-digit numbers have this combination.
• 00330033 has 2 unique digits. In 0.011% of 8-digit numbers, there are 2 or fewer unique digits.
• All of the digits in 00330033 are multiples of 3. Only 0.070% of 8-digit numbers have this property.
• 00330033's digits sum to 12. In 0.13% of 8-digit numbers, the digits sum to at most 12.
Home page      Learn      Criteria      The UCI      Gallery      Hall of Fame    &
nbsp Contact
|
{"url":"http://coolnumbers.com/crunch.asp?serial=00330033&source=5","timestamp":"2024-11-03T15:09:26Z","content_type":"text/html","content_length":"2720","record_id":"<urn:uuid:1ef80235-7f89-4f4d-87fe-7651e5eab57f>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00666.warc.gz"}
|
Experimental Design & Analysis - ppt video online download
1 Experimental Design & AnalysisHypothesis Testing and Analysis of Variance January 30, 2007 DOCTORAL SEMINAR, SPRING SEMESTER 2007
2 Outline Statistical inferences Null hypothesis testingSources of variance Treatment variance vs. error variance Sums of squares
3 Statistical InferenceMeans The mean is a measure of central tendency μ or x Sum of squares The sum of squares refers to the sum of the squared deviations of a set of scores or values from their
mean (1) Subtract the mean from each score (2) Square each resulting difference (3) Add up the squared differences Mean square, or variance Mean square of a sample is a measure of variability Divide
sum of squares by the degrees of freedom σ2 or s2 Standard deviation σ or s
4 Statistical InferenceStudent's t-test is used to compare two groups when sample sizes are small (N < 30), but with larger samples the normal curve z test is used Let E be the experimental condition
and let C be the control condition. Let m be the means, s the standard deviations, and n be the sample size t = (mE - mC) / √[(s2E + s2C) / n ] The critical value is the value found in t-tables
5 Statistical InferenceHypothesis testing Inferences about the population based on parameters estimated from a sample μ1 = μ2 = μ3 = etc. suggests that there are no treatment effects Observing an
effect requires falsifying the null hypothesis H0 How different are two mean scores?
6 Statistical InferenceDifferences due to treatment Systematic source of difference by virtue of experimental condition Differences due to error Random source of difference because participants are
randomly assigned to groups
7 Statistical InferencesEvaluation of null hypothesis differences between experimental groups of subjects differences among subjects within same groups experimental error = 1 treatment effects +
experimental error experimental error = 1 ?
8 Null Hypothesis Significance Testing5 steps of NHST State H0, H1, and level Determine rejection region and state rejection rule Compute test statistic Make a decision (reject H0 or fail to reject
H0) Conclude in terms of the problem scenario
9 Statistical InferenceWhen H0 is true, F value = 1, although random variation can result in it being > 1 or < 1 When H0 is false, F expected >1 Since random variation can account for why F > 1, the
problem we face is determining how much >1 it must be in order for us to conclude, on the basis of improbability, that its size indicates a real difference in the treatments The F distribution
enables us to determine how improbable an experimental outcome is under the assumption our null hypothesis is true. If the value for F is so large that it is improbable (p < .05 or p < .01), given
that our null hypothesis is true, we will conclude our null hypothesis is false and that the hypothetical population means are not equal
10 Sources of Variance Partitioning varianceIn order to distinguish systematic variability (treatment effects) from random error, we must partition variance by calculating component deviation scores
Total deviation Between-groups deviation Within-groups deviation
11 Component Deviations Total deviation Within-group deviation2 4 6 8 10 Y2 Y5,2 YT - Y2 - YT Total deviation Within-group deviation Between-group deviation From Keppel & Wickens, p. 23
12 Sources of Variance What is the grand mean, YT?What are the group means? Treatment A1 Treatment A2 Treatment A3 Participant Participant Participant Participant Participant Participant Participant
Participant Participant 4 10 Participant Participant Participant Participant Participant Participant A1 = ? A2 = ? A3 = ? T = ? Y2 = ? Y3 = ? YT = ? Y1 = ? From Keppel & Wickens, p. 24
13 Calculating Sums of SquaresIdentify bracket terms* Using sums [Y] = ΣYij2 = … = 1,890 [A] = ΣAj2 /n = ( )/5 = 1,710 [T] = ΣT2/an = 1502/15 = 22,500/15 = 1,500 Using means [A] = nΣYj2 = (5)( ) =
1,710 [T] = anΣYT2 = (3)(5)102 = 15 (100) = 1,500 See authors’ note on bracket terms, Keppel and Wickens, p. 31.
14 Sums of SquaresTotal = Sums of SquaresBetween + Sums of SquaresWithinSources of Variance Calculate sums of squares Total sums of squares = Σ(Yij – YT)2 Between sums of squares = Σ(Yj – YT)2 Within
sums of squares = Σ(Yij – Yj)2 Sums of SquaresTotal = Sums of SquaresBetween + Sums of SquaresWithin
15 Calculating Sums of SquaresWrite sums of squares using bracket terms SST = [Y] – [T] = 1,890 – 1,500 = 390 SSA = [A] – [T] = 1,710 – 1,500 = 210 SSS/A = [Y] – [A] = 1,890 – 1,710 = 180
16 Analysis of Variance Sums of squares provide the building blocks for analysis of variance and significance testing Analysis of variance involves 2 steps Calculate variance estimates, known as mean
squares, by dividing component sums of squares by degrees of freedom Ratio of between-group mean square and within-group mean square provides F statistic
17 Analysis of Variance Source df Mean Square F A a-1 SSA/dfA MSA/ MSS/AS/A a (n-1) SSS/A/dfS/A Total an-1 a = # of levels of factor A n = sample size of a group df = degrees of freedom Check F value
in table of critical values organized by degrees of freedom to determine significance level Numerator degrees of freedom = a-1 Denominator degrees of freedom = a(n-1)
|
{"url":"https://slideplayer.com/slide/5007257/","timestamp":"2024-11-09T19:45:40Z","content_type":"text/html","content_length":"190672","record_id":"<urn:uuid:b42e2f4a-a81b-4318-95d1-a79dfcaa83b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00245.warc.gz"}
|
Rate of inflation minus the real rate of interest
long-run relationship between inflation and nominal interest rates. The real rate (the nominal rate minus realized inflation) for some periods in the sample. If the nominal rate is less than the
inflation rate, then the real interest rate is negative. In this case the rise in dollar value at the rate Rt does not cover the rise in
A real interest rate is defined as a nominal interest rate corrected for a measure of interest rate during February 1999 minus 1999 average forecast inflation. The real rate of interest is the
nominal rate minus the expected inflation rate. However, the real rate itself has several components. First is the risk-free rate inflation target, post 1992, the relationship between the real
interest rate gap and the other hand, a statistical approach, with no economic model at all, is less If the actual inflation rate is high enough, the real return can even turn negative, Of course,
inflation risk can work the other way: If actual inflation is less than for inflation and applying the auction determined, fixed real interest rate to the Thomas M. Humphrey. The proposition that
the real rate of interest equals the nominal rate minus the expected rate of inflation. (or alternatively, the nominal rate Inflation is the rate of increase in prices over a given period of time.
because the real interest rate (the nominal rate minus the inflation rate) would be zero; 2 Jul 2019 The real interest rate is the rate of interest paid to an investor, minus inflation. Natural
inflation in the economy will affect all interest-bearing
A bond's "real return" accounts for the inflation rate and more accurately Similarly, the real yield is the nominal yield of a bond minus the rate of inflation. The U.S. Treasury, for example, has
never failed to pay the scheduled interest on a
After rearranging the variables, we find that the real interest rate equals the nominal interest rate minus the expected rate of inflation. ir = i - πe. In case you don't Real interest rate (%) from
The World Bank: Data. Risk premium on lending ( lending rate minus treasury bill rate, %). Interest rate spread (lending rate minus The natural real rate of interest—the level of the real federal
funds rate ment and inflation rests at the FOMC's 2 percent longer-run objective, a change in less is known about the empirical relationship between bond premi- ums and the long-run relationship
between inflation and nominal interest rates. The real rate (the nominal rate minus realized inflation) for some periods in the sample.
Keywords: equilibrium real interest rate, secular stagnation, euro-area countries, should be given at a zero nominal rate minus the ECB inflation target of about.
reflected anticipated inflation so that real interest rates were independent of short holding periods (six months or less) in subsequent statistical tests obviated
If the actual inflation rate is high enough, the real return can even turn negative, Of course, inflation risk can work the other way: If actual inflation is less than for inflation and applying the
auction determined, fixed real interest rate to the
31 Aug 2019 Therefore, any rate less than the inflation rate should be viewed as a negative real rate. Today In: Markets
The real interest rate is the nominal rate of interest minus inflation, which can be expressed approximately by the following formula: Real Interest Rate = Nominal
This paper argues that it is not the low central bank policy rate which causes low inflation but rather the low equilibrium real interest rate, the economy's real 14 Jan 2020 In a recent study, Paul
Schmelzing of the Bank of England tracks global real ( inflation-adjusted) interest rates over the period from 1311 to 2018 The real interest rate of an investment is calculated as the difference
between the nominal interest rate and the inflation rate: Real Interest Rate = Nominal Interest Rate - Inflation (Expected or The difference between the real and nominal interest rate is that the
real interest rate is approximately equal to the nominal interest rate minus the expected rate of inflation. The nominal interest rate in the interest rate before inflation has been accounted for and
removed from the number.
rate, the short-term real rate will move in the desired direction, so long as there is less than a one-for-one movement in short-term inflation expectations. 3. This will real rate of interest
helps determine the services yielded by the stock of con- personal income tax liabilities; less (4) federal estate and gift taxes; less. for the past 600 years it has never been less than 52%.2
Overall, the global R sample resulting real rate trend line (inflation data is not separately shown).
|
{"url":"https://bestbtcxnkipds.netlify.app/tanoue58482max/rate-of-inflation-minus-the-real-rate-of-interest-257.html","timestamp":"2024-11-05T21:45:11Z","content_type":"text/html","content_length":"31149","record_id":"<urn:uuid:885b7bd4-6fb6-4230-917b-627d5b6892e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00334.warc.gz"}
|
2027: THE DEPRESSION WE HAD TO HAVE
THE LAND QUESTION IN HISTORY
by Silvano Borruso
The Land Question acts as a background to every history book of every time and place. The less the author is aware of its importance as a determining cause of war, treaties, dynastic marriages,
colonialism, papal elections, revolutions, capital executions and what have you, the more he reveals to the perceptive reader, and only to him, the drama when not the tragedy of land misdistribution.
I affirm the above because none of the authors by me consulted has shown awareness of the problem. The two that opened my eyes to it, Henry George (1839-1897) and Silvio Gesell (1862-1930) expound on
the theory of the question, with unanswerable arguments. But their historical examples are scanty.
The purpose of this essay is to move the Land Question from the background to the foreground of history. It will not exhaust the topic for obvious reasons, but the examples chosen, or better bumped
into, should be enough to stimulate further investigation for a possible future political action.
An indispensable preamble to understand the question is Adam Smith’s (1723–1790) following paragraph from The Wealth of Nations.
As soon as the land of any country has all become private property, the landlords, like all other men, love to reap where they never sowed, and demand a rent even for its natural produce. The wood of
the forest, the grass of the field, and all the natural fruits of the earth, which, when land was in common, cost the labourer only the trouble of gathering them, come, even to him, to have an
additional price fixed upon them. He must then pay for the licence to gather them; and he must give up to the landlord a portion of what his labour either collects or produces.[1]
As a good British pragmatist, Smiths stops at the facts. He takes it for granted that he who “loves to reap where he never sowed” has all the right to maximize the rent: either by lowering the wages
of his labourers, or by increasing the rent of his tenants, or both where the size of the property allows. Let me open the argument with
In my boyhood I had the privilege (understood decades later) of meeting Don Cola Tampuso, a grizzly shareholder on a small farm where he lived with his wife. Despite the fact that 50% of the fruits
of his labour ended up in the pockets of one who “loved to reap where he never sowed”, he could make ends meet, for the farm was no more than a couple of kilometers from the nearest market. Had it
been ten kilometers or more, the intermediaries would have taken most of his wages, leaving him with just enough to survive. The reader will have recognized David Ricardo’s (1772–1823) ‘iron law of
Let us make an effort to understand what happens when “the land of any country has all become private property”.
Fenced land originates a double sovereignty: that of Government, which shows it off with flag, military uniforms, national anthem, taxes and assorted decorations, and that of the landlord, who
carefully refrains from showing anything, but exercises it de facto, as did the owner of Don Cola’s farm. Who, like all the marginalized in the power struggle (read landless), lived as a proletarian,
surviving as well as he could.
Four consequences stem directly from the fencing off of land:
1. Sooner or later the large estate replaces small, individual plots. The problem is that human nature has dictated individual diversity, so that the less able to run their property will not take
long in selling it, to be incorporated into larger and larger units. This is one reason why no ‘agrarian reform’ based on fencing of land has ever succeeded.[2]
2. Slavery is the natural, hence permanent, counterpart of land ownership. The large estate depresses the wages of those who work within it; land ownership per se depress the wages of those working
on their own land, since it pushes the margin of cultivation farther and farther away from the centres of consumption, where it is more profitable to use land for purposes other than
agricultural. Both are another reason for the failure of ‘agrarian reforms’: the distance between production and consumption renders the small property uneconomic.
3. In order to maximize his rent, the ‘sovereign’ of the large estate must count on a pool of unemployed so as to keep wages down, and on tariffs to protect his prices from competition. He attains
the first aim by keeping large tracts of his estate fallow, thus denying access to would be occupiers; and the second by manipulating public policy.
4. As society gets increasingly divided into a powerful (but necessarily small) group of landlords and a large one of landless, it is the class war, which is not at all a Marxist invention: it is
enough to read Livy to read about it. Bloodshed, whether in civil or in foreign wars provoked by those who intend to maintain their undue privileges at all costs, becomes inevitable. A treatise
could be written, but let us stick to Sicily, especially in its last 200 years of its history.
Up to 1806 the law in force was that of the public commons (It. Demanio). It was a feudal institution, wisely reformed by Neapolitan law.
The Demanio’s raison d’être was to prevent landlords from usurping the king’s sovereignty over the land. It acted through a true plethora of local usages that regulated the use of land, impeding, or
severely limiting abuse. The Don Colas of the time did not squander, but did not starve either; they cultivated small public parcels for private use, enough to eat, dress and have a roof over their
heads, plus some extra to fulfil their obligations. There was no unemployment, and no emigration.
The Demanio was a paternalist institution, which aligned the sovereign to the people against opportunists desiring “to harvest where they had not sowed”. It was not equilibrium, but a contained
disequilibrium, akin to a champagne cork clumsily used to stop a diarrhea.
The first blow to the system was delivered by the Murat laws of 1806–1808. With the abolition of feudal law, the landlords enjoyed living off rent for more than 40 years. The lion had tasted human
The revolutionary riots, as elsewhere, were not “people’s” doing, but the work of those who foresaw the possibility of living at the expense of others. The poor were robbed of even the furniture used
for the barricades of 1848.
King Ferdinand II almost gained the struggle, but not for long. Not only did he repeal the constitution, but between 1850 and 1854 he also returned to the common lands more than 100,000 units of land
usurped by the Murat crowd in the kingdom.
It is not therefore surprising that such people should howl against “Bourbon tyranny” while cheering Garibaldi and the Piedmontese as “liberators”; which they were, but not of the people drummed up
in history textbooks.
In 1860 unemployment and famine arrived together. What to do with the sudden increase of proletarian men, women and children?
There was only the embarrassment of choice: from Swift’s (1667-1745) Modest Proposal, i.e. serving the poor’s babies as dainties for the rich people’s table, to Malthus’ (1766-1834, still taken
seriously) convincing them to breed less; to the Terror, which had the hidden purpose of curbing the population of France;[3] to transportation of surplus Irish to Australia even for the theft of a
handkerchief, to compulsory military service for thousands of unemployed transformed into cannon fodder, to easy incarceration (The US have almost three million, about 1% of the population). The
Piedmontese opted for the firing squad, with which they killed two birds with one stone: getting rid of the surplus proletariat at home and forcing the rest to migrate.
And so Sicily ‘enjoys’ (if it is the right word) this system of land tenure to date. The liberal principle “everyone after himself and the devil take the hindmost” has worked. In 1960, 100 years
after the ‘liberation’ the elderly father of two sons who had run away from the island was interviewed:
The children run away, to Germany, to Turin. I don’t deny it: when my son worked with me, and needed 100 lire for a shave, he didn’t have them. The other son also ran away, together with many others.
There was no progress, there was no land. All of us worked like slaves, and today even more, with no satisfaction whatever. I think that if no one does anything, in Sicily, everything will come to an
end. It’s not that we die of hunger, but everything is dry; no one grows anything. People are content with arriving at the pension and no longer having to work. The rule is: little is enough and a
lot is surplus. Formerly I was badly off, so I am now, but at least now I don’t do anything.[4]
In 1992 I had the opportunity of visiting the island’s interior. The only economic activity observed was a shepherd with his herd of cows for a distance of some 100 km.
Ever since, things have gone from bad to worse. To the series fencing>large estate>class struggle>war only the last is missing. I only say that the 2.5 million hectares of the territory remain
fiscally sterile for the State, who in turn is bent into extorting the fruits of citizens’ labour. The landlords, who exercise real sovereignty, have sung victory, and it is not necessary to be a
prophet to foretell that sooner or later the situation will turn violent.
Kenya (ex British East Africa)
Sicily and Kenya are 5000 km apart. What unites them, in this essay, is my person, born in the first and living in the second. The difference, observed personally, consists in the rivers of blood
that in Kenya decorate the newspapers with monotonous regularity.
The “patched-up beggar’s overcoat” to which the political map of Africa resembles[5] divides the continent into some 50 of what fuzzy thinking calls ‘nations’. Then one learns, at school, that each
African ‘nation’ is home to hundreds of ‘tribes’ which the European Great Powers ‘pacified’ after the 1885 Berlin Congress, and which after independence (1956-to date) do nothing except massacring
one another.
The reality, nevertheless, is different. If we define ‘nation’ as “a people of homogeneous usages and customs”, Africa sports about one thousand of such. Some attain respectable size: the Igbo of
Nigerian sum up to 30 million, more than Scandinavian and Belgians together. But they are still dubbed ‘tribe’, forgetting that the first user of that term was Servius Tullius sixth king of Rome, who
divided the Urbs into four tributary districts (hence the term). How and when the same term was foisted on the African nations I have been unable to discover.
If therefore among members of such nations, at times after decades of peacefully coexistence including intermarriage, orgies of destruction and death explode unchecked, there must be some cause other
than ethnic differences, and so it is.
The Title Deed
Smith’s paragraph reproduced earlier exhibits a social abscess about to burst already in his days after a couple of centuries of incubation: the expulsion of the proud English yeomen from the lands
they had cultivated for centuries, and which King Henry VIII’s greed had sold to the nouveaux riches in exchange for ‘title deeds’. At the beginning the expellees took refuge in the common lands of
the Crown, but at the end of the 18^th century the landlords enclosed even these. The landless yeomen would have starved to death had the Industrial Revolution not saved them with starvation wages
it’s true, but wages nonetheless, which allowed them to survive.
The landless descendants of the victims of that original injustice directed their attention towards expropriating African nations. The Spanish landless had done the same with American nations at a
time when Spain numbered scarcely nine million people, but whose lands had been grabbed by landlords as powerful as their English counterparts. Englishmen and Spaniards colonized for being militarily
strong peoples; Italians and the Irish, militarily weak, migrated to North America in search of fortune. But the surreptitious drive behind the four was the same: the new landlords, with their ‘title
deeds’ and supported by military might, had forced the landless either to starve to death or to emigrate. Behind each ‘title’ issued by an authority, English, Spanish or Piedmontese, there had always
been an act of violence.
At the Root
All ancient cultures, without exception, had developed a communal system of land tenure. We have seen one, in force in the Kingdom of the Two Sicilies up to 1860.
There is a natural reason for it, and therefore one of common sense: land a) is not man-made and b) is immortal. It follows that property of land is communal by natural law, since a community is as
immortal as the land on which it is settled. An individual landed property is a juridical construct in that it grants an unjust privilege to a mortal, authorizing him to call ‘mine’ something for
which he has not worked and which one day willy-nilly he will have to leave behind.
It is true that such a privilege may be mitigated by extra duties assumed by the landlord. It happened during the seven centuries of feudal tenure. The ecclesiastical landlords assumed the burden of
social security; the secular ones that of administration and defense. The same could happen today if the landlord disbursed a rent for the occupation of a certain surface, but left to enjoy the
fruits of his labor. This would be akin to extending the parking fee system as practices in the centre of a crowded city, where a vehicle pays a fixed daily sum to occupy a parking bay in exclusive.
If the law authorized the first occupier to sell the title deed (the ticket) to the next one, the municipality would lose the parking revenue.
If the same law that allows a municipality to charge rent for a parking bay was extended to buildings ‘parked’ permanently, at the same rate as the cars, i.e. so many $ per square foot per day, the
surplus of municipal revenue could be conveyed to the State as State revenue, and the surplus of the State to the Federal Government.[6] Such sums could cover a large amount (all according to Henry
George) of State expenditure. The State would not have to employ an army of officials with draconian powers of confiscation of the fruits of labor, and the economy would unshackle itself of the
impediments that prevent it from taking off.
With the ‘title deed’ the situation is reversed. When a landlord sells his land to another, the sum corresponding to the bare ground, which in justice should go to the community for having created
that value, goes to his pockets instead. Such a situation, which passes unnoticed in Europe after centuries of bureaucratic somnolence, in Africa has lethal effects, as we shall soon see.
To prove that at the origin of every ‘title’ granted to an individual (hence unnatural) there is an act of violence, from military conquest to murder, it is enough to read any case anywhere in the
The following case occurred in 1999 in Louisiana. The applicant for a loan from the Federal Housing Authority had offered his title deed as collateral. His lawyer had traced the origin of the title
to 1803. The FHA was not satisfied, and demanded that it be tracked down “to the origin.”
The annoyed lawyer responded:
[…] I was unaware that any educated person in this country, particularly those working in the property area, would not know that Louisiana was purchased by the US from France in 1803, the year of
origin identified in our application. For the edification of uninformed FHA bureaucrats the title to the land prior to US ownership was obtained from France, which had acquired it by Right of
Conquest from Spain. The land came into possession of Spain by Right of Discovery made in the year 1492 by a sea captain named Christopher Columbus, who had been granted the privilege of seeking a
new route to India by the then reigning monarch Isabella. The good queen being a pious woman, and careful about titles almost as much as the FHA, took the precaution of securing the blessing of the
Pope before she sold her jewels to fund Columbus’ expedition. Now the Pope, as I’m sure you know, is the emissary of Jesus Christ the Son of God. And God it is commonly accepted created this world.
Therefore I believe it is safe to presume that He also made that part of the world called Louisiana. He therefore would be the owner of origin. I hope to hell you find His original claim to be
satisfactory. Now may we have our damn loan?[7]
They got it. Now, what does Louisiana have to do with Kenya? It does, because the key point is the same. A title deed is worth neither more nor less than the physical force needed to defend it. When
the landlord is the State, its power over the ‘national’ territory depends on its armed forces; but when the landlord is an individual, and disarmed as he is in Kenya, the security of the title
depends on the capacity and willingness of the armed forces of the State to defend it.
What happened in Kenya in the first days of 2008 was but a bloody tail of a series of casualties for more than 50 years, which flows as a river every now and then. Starting with the Mau Mau revolt of
the 1950s, tens of thousands of units of human capital and millions of man-hours of useful work have been immolated on the altar of an idol called ‘title deed’.
The history of this particular idol is a long one. It got enthroned by King Henry VIII of England (1509-1547), and was imported into Kenya by the not so enlightened descendants of the ex-yeomen
rendered landless by the devastations of the same idol. What would have happened if the British colonialists had applied in Kenya a truly enlightened, peaceful system of land tenure, instead of an
obscurantist and violent one?
A Possible Solution
It is true that history is not made of ifs and buts, but some speculation is in order. Suppose that the colonial government, instead of issuing title deeds to each White settler, had fixed boundaries
for each nation/tribe already existing, guaranteeing security by means of its armed forces. Only then the same government would have issued a title to each community in exchange for a fixed rent to
defray the expenses of armed protection and administration. Each White settler would have then paid rent to the community, which would have protected the occupation in terms of a contract with the
force of law before the colonial government.
The rent would have varied according to location and density of population. No fruits of anyone’s labour would have had to be extorted, with the desirable result that the more the wealth produced
within the limits of the community, the less the fixed tribute to the government would have burdened it. Furthermore, a low-wage policy would not have been possible, for the option whether to work
for oneself or for a settler would have been left open.
A virtuous circle would have resulted. The White settlers then and the Black ones now, instead of being perceived as intruders and therefore exploiters, would have been perceived, and up to now, as
sources of wealth and of public revenue, directly to the community an indirectly to the State.
It didn’t happen. When the forces of the Crown left, so did the White settlers, and with them a huge stock of fixed capital created in half a century of hard work.
That the above is not utopia is shown by two facts.
First: in Kenya, a dozen or so White settlers still operate in the country, but without ‘title deeds’. Instead they pay rent to the Maasai, who in exchange protect their properties.
Second: in Zambia, half way between Kenya and South Africa, freehold title deeds were abolished in 1975. Former Crown Lands are in the hands not of the State but of the President, who grants it to
anyone willing to pay a modest ground rent on a 99-year lease.
I personally verified the results of this policy in 2012. Lusaka, the capital city, sports double carriage ways, a total absence of traffic jams, free and ample parking anywhere, bungalows instead of
apartment blocks, and a well-regulated public transport entirely in private hands. But what is more, not a single drop of blood has been spilled over land disputes since 1975.
And not only: some 120 White settlers expelled from Zimbabwe cultivate land in Zambia, thus contributing to the national economy with their skills and know-how.
In Kenya the story has been very different. Its first President (1963-1978) belonged to the Kikuyu nation. Sagaciously, he realized that a democracy founded on the party system would have been
ruinous for a multinational country.[8] His single party policy had cost limiting certain liberties, but did preserve social peace until his death.
Here it is necessary to understand the Kikuyu. They are a people of tireless, active ants, able and willing to run – and shoulder – risks. They do not hesitate migrating to the ends of the earth,
from Scotland to Japan, and prosper. Strong of more or less four million, they had been exiting since independence from the 13,000 square kilometers of their native province to migrate within Kenya,
settling on lands of other nations. Working as they usually do, they became the wealthier, the more they settled among nations of less enterprising people.
The President, as is natural, had favoured their emigration, granting them, you guess, ill-famed ‘title deeds’ on foreign lands. It was a time bomb, considering the morbid attachment of all African
societies to their ancestral lands. It could last only as long as the armed forces were in a position to defend such titles.
The second President (1978-2002), a non-Kikuyu, succeeded in keeping the peace until 1988, when the Western paladins of Democracy put pressure on him to accept the multiparty system, outside of
which, as is well known, there is only weeping and gnashing of teeth.
Since African parties lack ‘programs’, ‘manifestoes’ and like tackle, electors always vote for one of their own, whose fixed ‘program’ is, “I want to be President”.
Hence when the 1992 elections loomed near, the President realized that had he left the Kikuyu in peace where they had settled, they would have voted against him. And so he organized what the Western
Press called “local clashes” but which in reality were operations of ethnic cleansing aimed at chasing Kikuyu farmers from non-Kikuyu areas.
He succeeded twice, 1992 and 1997. Round figures as to the number of dead and the amount of wealth destroyed are still missing.
In 2002 a coalition of Kikuyu and other nations fed up with a presidential power greater than that of any Louis XIV, succeeded in voting him out of office and installed another Kikuyu as President.
But when the electors of 2007 threw out of office 22 out of 27 ministers of the incumbent administration, it was clear that the opposition was going to win.
History teaches, however, how easy it is to rig elections from the centres of power. And so it was. The bomb went off within ten minutes of announcing the results. This time the country split
dramatically into two. A totally irrational ethnic cleansing sank the country into a bloodbath, lit up by tragic bonfires of destroyed property.
The final toll was 1300 dead, a yet unclear number of maimed and wounded, and 300,000 or so homeless. A visible minority still lives in refugee camps, showing off utterly useless ‘title deeds’.
Western television broadcast hair-raising images, showing but a partial aspect of reality. Which, hidden under a thick blanket of disinformation, showed its true face only to those in the know: the
Land Question, still unsolved in a Third Millennium well on its way.
A Double Crown
In the history of ecclesiastical institutions, from the papacy downwards, the Land Question occupies a preponderant place, but like the elephant in the room, it passes unnoticed until someone points
it out. Having read George and Gesell one indeed notices.
The Edict of Thessalonica (380) imposed the public confession of the Christian faith on the Empire. As a consequence, donations of land began to accrue, not to ‘the Church’ as embedded historians
like repeating, but to flesh-and-blood popes, bishops and priests, who benefited from the rent of such lands.
But, like everything else in this world, land rent was put to two very different uses. One, benevolent, financed a thick network of social services: health, education, poor centres, orphan homes,
hostelry, etc. all free of charge, which rendered taxation by the civil authorities superfluous. The other, malevolent, use of the same rent was, you guess, diverting it for uses not always edifying
to put it mildly. Many members of the hierarchy, not to speak of intruders, could thus live in leisure (and worse).
It would take us too far to analyze 17 centuries of ecclesiastical land question. Let us concentrate on the donation of former Byzantine lands by Pepin of Heristal (756), which became the Papal
States. They imposed a second crown on the pope: besides being head of the Church he was now King of those States. The arrangement lasted for more than 1000 years, until their demise in 1870.
The episode that I consider most grotesque was the war waged by His Most Catholic Majesty Philip II of Spain on the Sovereign of the Papal States, i.e. Pope Paul IV in 1556, to wrest from him the
duchy of Paliano, today an insignificant municipality of Lazio.
Paul Johnson (1928- ) thus sings the praises of the ‘title deed’:
The freehold was unknown to barbarian Europe; indeed, it was only imperfectly developed in imperial Rome and Byzantium. The church needed it for the security of its own properties and wrote it into
the law codes it processed – wrote it, indeed, so indelibly that the freehold survived and defied the superimposed forms of feudalism. The instrument of the land deed or charter, giving absolute
possession of land to a private individual or private corporation, is one of the great inventions of human history. Taken in conjunction with the notion of the rule of law, it is economically and
politically a very important one. For once an individual can own land absolutely, without social or economic qualification, and once his right in that land is protected – even against the state – by
the rule of law, he has true security of property.”[9]
Johnson reduces, as many do, the Church to its hierarchy. What the ‘title’ really did was to split the hierarchy of the Church into high and low clergy, the first living off rent and the second as
best it could, from alms begging to intrigue. To say nothing of priests who squandered huge amounts of time rent-seeking and hunting for benefices instead of fulfilling the duties of office. Let us
also leave aside the infamous struggle for the investitures, which saw scoundrels of all kinds fraudulently ordained priests to accede to a ‘benefice’. A treatise could be written on this paragraph,
but not here.
The ecclesiastical balance in land, as the one in Naples, was not at all stable, despite an appearance of centuries. It was a contained imbalance lasting until the sovereigns agreed to be controlled
from above, i.e. by the Decalogue, and from below, i.e. by the social usages of a thick cobweb of guilds, confraternities and local authorities, towards whose liberties they swore allegiance. I will
limit my analysis to Tudor England in the three years 1536-1539.
It is most impressive to read in Cobbett’s History of the Reformation in England and Ireland (but only if one has read George and Gesell) how a social security set up by 1000 monasteries and 10,000
monks during 900 years collapsed in three short years at the hands of Thomas Cromwell, lackey of Henry VIII. Cromwell made a lavish use of gunpowder (in the absence of dynamite) to demolish the great
abbeys, the ruins of which still majestically dot the English countryside.
What happened to the land? The Lords grabbed it, hand in glove with the newly appointed prelates of the Anglican Church. Their descendants, even today, care not a fig about social security. They go
on pocketing rents, “loving to harvest where they never sowed”. For the uninformed, no Labour Party has ever succeeded in getting rid of the House of Lords, despite repeated promises to the contrary
in election time. The House of Lords exists precisely to prevent the Commons from upsetting the apple cart, i.e. the territorial privileges secured by their ancestors with the simple subterfuge of
apostatizing from Rome to take refuge in Westminster. So much for “one of the great inventions of human history”.
For reasons more or less unknown, the history of Imperial Byzantium rarely makes it into school textbooks, despite a venerable life of more than 1000 years. And no textbook links the fortunes of the
Empire to the Land Question.
Reading between the lines, however, the question stands out in no uncertain terms. The dynasty of the Heraclids, from early in the seventh century, had to cope with a drastic depopulation caused by
the Gothic wars of Justinian 100 years before. They stanched the bleeding by enticing or forcing people of Slav stock into the empire, Hellenizing them and organizing them into themata.
A thema was a territorial, fiscal and military unit with land security guaranteed not by ‘titles’ but by communal control. Every farmer was also a soldier, under the command of the Katapanos. He was
fighting for family and home, and therefore with absolute loyalty. By solving the Land Question, the Byzantine Empire became invincible on the battlefield. And when the ‘title deed’ took over, defeat
loomed on the same battlefield.
The topic is worth pursuing, but not in this short essay.
The Large Landed Estate Deletes Poland from the Map
A surprise awaits anyone perusing an early 17^th century map of Europe. Its largest country was the then Lithuano-Polish Federation (Rzeczpospolita Polska). Its 773,000 square kilometers (Iberian
Peninsula + Ireland + Iceland) extended north-south from the Baltic Sea to close to the Black Sea, and east-west from Silesia to 200 km beyond the Dnieper. But a good 60% of the people within those
borders included a considerable population of Cossacks, Tatars and other groups neither Polish nor Lithuanian. The political stability of the Federation depended heavily, if not exclusively, on how
the powers of the State dealt with the non-Polish non-Lithuanian majority.
The problem was that the landlords in power, ignorant of history, repeated with Poland the mistakes of Rome, albeit on a reduced scale. The eastward expansion entailed adding land to immense large
estates, some of them the size of Switzerland or of Holland. Such estates generated, as they always do, forms of slavery. But the slaves sooner or later rebel, increasingly reluctant to defend lands
that are liabilities to them and assets for others.
Wherever one looks: Rome, Greece, Ireland, England, etc., the parameters are always the same. With the same monotony, the population that should have defended the territory became a rural
proletariat, with civic rights but landless. The only opportunity offered them was to work as day labourers in the great estates owned by the nobles, who extended their own power by extending their
How noxious the large landed estate was for the common good could be verified for centuries with the institution of the liberum veto in the Sejm (Parliament). A single landlord had the power to veto
any parliamentary intervention that he deemed contrary to the interests of his thousands, when not millions, of hectares. Necessary reforms were paralyzed, and the country progressively weakened.
The subject people, on the other hand, had land but not civil rights, to which they aspired.
In 1633 the Sejim approved a law that prohibited the nobles to trade, especially in alcoholic drinks. From that moment they began to live exclusively off rent, extorted more or less violently from
the rural proletariat. Tax farmers were invariably Jews, immigrants in most tolerant Poland (which had no Inquisition) during the previous five centuries. 15% of Jews collected taxes in the towns,
85% in the countryside.
The vicious circle began with imported Jewish capital, which benefited the nobility as much as the State. Strengthened by such apparent advantage, Poland pushed her colonization increasingly
eastwards. The tax collectors quickly realized that besides increasing the political power of the nobility, they could increase their businesses by pushing independent cultivators into a corner,
causing their bankrupt and adding their properties to those of the nobility.
At the death of King Sigismund (1572) the Federation had extended its dominion over the Cossacks of southern Ukraine, who agreed to form part of Poland much against their wishes, also because as
Eastern Orthodox they objected to being submitted to religious regulations alien to their culture.
A reform was called for. The Cossacks were promised a law that would give them the same rights as the Poles, as well as protect them against the vexations of the tax collectors and those of the
Jesuits, both extremely unpopular in the territory.
The bill was discussed in the Sejm in 1648, the year of Westphalia. But the landlords, in cahoots with the tax collectors and putting their interests ahead of the common good, quashed the
The optimist waiting of the Cossacks became vehement indignation. Their hetman was Bogdan Chmielnicki. The tax collector Zachariah Sabilenski played a dirty trick on him, helping the Polish landlord
Czaplinski to grab not only Chmielnicki’s land but also his wife. Another tax collector reported his negotiations with the Tatars to the government.
It was the last straw. After two audiences with the king, and seeing justice denied, Chmielnicki raised an army of Cossacks and Tatars, and inflicted a double, bitter defeat on the federal army, on
16^th and 26^th May 1648. There followed an orgy of looting, killings and massacres, which left behind mountains of corpses without distinction of age or sex.[10]
When dictating conditions, Chmienicki invariably demanded the expulsion of the Jews and of the Catholic Church from the territories under his control.
The neighbouring powers lost no time in realizing that the Polish colossus was a giant with feet of clay. Muscovy, Prussia, Sweden, Brandenburg and the Ottoman Empire started nibbling at Federal
territory as weak as it was big in size.
In 1772 a Russian army and a Prussian one invaded the Rzeczpospolita Polska, crushing a desperate but useless Polish-Lithuanian resistance. Twenty years later Austria-Hungary joined them, and the
partitions of 1793-95 completed the operation. Poland disappeared until the Treaty of Versailles of 1919.
The Poles still remember the disaster. They call it “The Flood”.
Conventional historians and artists keep busy representing the “Alliance of the three Black Eagles” (Russia, Prussia and Austria), at times adding picaresque details like Stanislaus Augustus
Poniatowski last king of Poland as the lover of Catherine II of Russia, but none of them perceives the large landed estate as the remote cause of the partition. And the infamous ‘title deed’ goes on
decorating textbooks as “one of the great inventions of human history”.
Silvano Borruso
25^th September 2013
[1] The Wealth of Nations, Penguin 1985, pp.152-53
[2] This phenomenon was repeated in Kenya in September 2013. The President gave title deeds to a slew of landless, in a populist move after years of complaining. Within a week, many of the new owners
had started selling their land for ready cash.
[3] Nesta Webster (1876-1960) affirms this in her World Revolution.
[4] Small nameless interviewee, c. 1960.
[5] I owe the happy expression to Silvio Gesell (1862-1930).
[6] There is scope here for suggesting a drastic pruning of extortionate taxation, but not here.
[7] Email received 19^th November 1999, unreferenced.
[8] Antonio Rosmini (1797-1855) and John Stuart Mill (1806-1873) had said the same thing 150 years earlier.
[9] Is there a moral basis for Capitalism? American Enterprise Institute 1980, p. 52. Italics in the original. Bold type mine.
[10] Graetz (History of the Jews) estimates 100,000 casualties. The Jewish Encyclopaedia triples the figure.
You must be logged in to post a comment.
|
{"url":"https://thedepression.org.au/?p=15645","timestamp":"2024-11-07T06:19:10Z","content_type":"text/html","content_length":"92651","record_id":"<urn:uuid:618b210b-f636-411b-b89e-f0f2e71ac023>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00849.warc.gz"}
|
Fieller’s (‘fiducial’) confidence interval
Fieller’s (‘fiducial’) confidence interval [General Statistics]
Dear d_labes,
you left me baffled.
❝ ❝ For such cases we are setting logscale to False, right?
❝ Correct in so far if we use the approximation that the estimate of µ[R] is (statistically) greater than zero. A very reasonable assumption for the usual metrics AUC and Cmax IMHO.
Please explain then what exactly it is that power.TOST calculates when I use logscale=F.
Does it calculate power for a hypothesis based on a difference or for a ratio?
Which difference? Which ratio?
❝ But this has than nothing to do with Fieller’s (‘fiducial’) confidence interval, a more correct method for deriving a confidence interval for the ratio of untransformed PK metrics.
The mention of Fieller was not mine. I am quite confused now, what it is power.TOST tries to calculate when I do logscale=F.
I am convinced the assuming theta1=-0.2 by default when logscale=F is a misnomer. theta1 is elsewhere understood as an equivalence margin expressed as a ratio and that can't realistically be
negative. If powerTOST tries to emulate Hauschke's paper then -.2 is f1, not a theta.
We need to be careful here about f, delta and theta.
Pass or fail!
Complete thread:
|
{"url":"https://forum.bebac.at/forum_entry.php?id=20899","timestamp":"2024-11-04T03:54:12Z","content_type":"text/html","content_length":"13972","record_id":"<urn:uuid:84e11470-1f0b-42ea-ab04-32b80622e4a6>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00035.warc.gz"}
|
Mission One – upgrade Beautiful Soup
It seems like the first practical piece of software that every agent needs is Beautiful Soup. We often make extensive use of this to extract meaningful information from HTML web pages. A great deal
of the world's information is published in the HTML format. Sadly, browsers must tolerate broken HTML. Even worse, website designers have no incentive to make their HTML simple. This means that HTML
extraction is something every agent needs to master.
Upgrading the Beautiful Soup package is a core mission that sets us up to do more useful espionage work. First, check the PyPI description of the package. Here's the URL: https://pypi.python.org/pypi
/beautifulsoup4. The language is described as Python 3, which is usually a good indication that the package will work with any release of Python 3.
To confirm the Python 3 compatibility, track down the source of this at the following URL:
This page simply lists Python 3 without any specific minor version number. That's encouraging. We can even look at the following link to see more details of the development of this package:
The installation is generally just as follows:
MacBookPro-SLott:Code slott$ sudo pip3.4 install beautifulsoup4
Windows agents can omit the sudo prefix.
This will use the pip application to download and install BeautifulSoup. The output will look as shown in the following:
Collecting beautifulsoup4
Downloading beautifulsoup4-4.3.2.tar.gz (143kB)
100% |████████████████████████████████| 143kB 1.1MB/s
Installing collected packages: beautifulsoup4
Running setup.py install for beautifulsoup4
Successfully installed beautifulsoup4-4.3.2
Note that Pip 7 on Macintosh uses the █ character instead of # to show status. The installation was reported as successful. That means we can start using the package to analyze the data.
We'll finish this mission by gathering and parsing a very simple page of data.
We need to help agents make the sometimes dangerous crossing of the Gulf Stream between Florida and the Bahamas. Often, Bimini is used as a stopover; however, some faster boats can go all the way
from Florida to Nassau in a single day. On a slower boat, the weather can change and an accurate multi-day forecast is essential.
The Georef code for this area is GHLL140032. For more information, look at the 25°32′N 79°46′W position on a world map. This will show the particular stretch of ocean for which we need to supply
forecast data.
Here's a handy URL that provides weather forecasts for agents who are trying to make the passage between Florida and the Bahamas:
This page includes a weather synopsis for the overall South Atlantic (the amz101 zone) and a day-by-day forecast specific to the Bahamas (the amz117 zone). We want to trim this down to the relevant
The first step in using BeautifulSoup is to get the HTML page from the US National Weather Service and parse it in a proper document structure. We'll use urllib to get the document and create a Soup
structure from that. Here's the essential processing:
from bs4 import BeautifulSoup
import urllib.request
query= "http://forecast.weather.gov/shmrn.php?mz=amz117&syn=amz101"
with urllib.request.urlopen(query) as amz117:
document= BeautifulSoup(amz117.read())
We've opened a URL and assigned the file-like object to the amz117 variable. We've done this in a with statement. Using with will guarantee that all network resources are properly disconnected when
the execution leaves the indented body of the statement.
In the with statement, we've read the entire document available at the given URL. We've provided the sequence of bytes to the BeautifulSoup parser, which creates a parsed Soup data structure that we
can assign to the document variable.
The with statement makes an important guarantee; when the indented body is complete, the resource manager will close. In this example, the indented body is a single statement that reads the data from
the URL and parses it to create a BeautifulSoup object. The resource manager is the connection to the Internet based on the given URL. We want to be absolutely sure that all operating system (and
Python) resources that make this open connection work are properly released. This release when finished guarantees what the with statement offers.
Navigating the HTML structure
HTML documents are a mixture of tags and text. The parsed structure is iterable, allowing us to work through text and tags using the for statement. Additionally, the parsed structure contains
numerous methods to search for arbitrary features in the document.
Here's the first example of using methods names to pick apart a document:
content= document.body.find('div', id='content').div
When we use a tag name, such as body, as an attribute name, this is a search request for the first occurrence of that tag in the given container. We've used document.body to find the <body> tag in
the overall HTML document.
The find() method finds the first matching instance using more complex criteria than the tag's name. In this case, we've asked to find <div id="content"> in the body tag of the document. In this
identified <div>, we need to find the first nested <div> tag. This division has the synopsis and forecast.
The content in this division consists of a mixed sequence of text and tags. A little searching shows us that the synopsis text is the fifth item. Since Python sequences are based at zero, this has an
index of four in the <div>. We'll use the contents attribute of a given object to identify tags or text blocks by position in a document object.
The following is how we can get the synopsis and forecast. Once we have the forecast, we'll need to create an iterator for each day in the forecast:
synopsis = content.contents[4]
forecast = content.contents[5]
strong_list = list(forecast.findAll('strong'))
timestamp_tag, *forecast_list = strong_list
We've extracted the synopsis as a block of text. HTML has a quirky feature of an <hr> tag that contains the forecast. This is, in principle, invalid HTML. Even though it seems invalid, browsers
tolerate it. It has the data that we want, so we're forced to work with it as we find it.
In the forecast <hr> tag, we've used the findAll() method to create a list of the sequence of <strong> tags. These tags are interleaved between blocks of text. Generally, the text in the tag tells us
the day and the text between the <strong> tags is the forecast for that day. We emphasize generally as there's a tiny, but important special case.
Due to the special case, we've split the strong_list sequence into a head and a tail. The first item in the list is assigned to the timestamp_tag variable. All the remaining items are assigned to the
forecast_list variable. We can use the value of timestamp_tag.string to recover the string value in the tag, which will be the timestamp for the forecast.
Your extension to this mission is to parse this with datetime.datetime.strptime(). It will improve the overall utility of the data in order to replace strings with proper datetime objects.
The value of the forecast_list variable is an alternating sequence of <strong> tags and forecast text. Here's how we can extract these pairs from the overall document:
for strong in forecast_list:
desc= strong.string.strip()
print( desc, strong.nextSibling.string.strip() )
We've written a loop to step through the rest of the <strong> tags in the forecast_list object. Each item is a highlighted label for a given day. The value of strong.nextSibling will be the document
object after the <strong> tag. We can use strong.nextSibling.string to extract the string from this block of text; this will be the details of the forecast.
We've used the strip() method of the string to remove extraneous whitespace around the forecast elements. This makes the resulting text block more compact.
With a little more cleanup, we can have a tidy forecast that looks similar to the following:
TONIGHT 2015-06-30
E TO SE WINDS 10 TO 15 KT...INCREASING TO 15 TO 20 KT
LATE. SEAS 3 TO 5 FT ATLC EXPOSURES...AND 2 FT OR LESS
WED 2015-07-01
E TO SE WINDS 15 TO 20 KT...DIMINISHING TO 10 TO 15 KT
LATE. SEAS 4 TO 6 FT ATLC EXPOSURES...AND 2 FT OR
Downloading the example code
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://
www.packtpub.com/support and register to have the files e-mailed directly to you.
We've stripped away a great deal of HTML overhead. We've reduced the forecast to the barest facts. With a little more fiddling, we can get it down to a pretty tiny block of text. We might want to
represent this in JavaScript Object Notation (JSON). We can then encrypt the JSON string before the transmission. Then, we could use steganography to embed the encrypted text in another kind of
document in order to transmit to a friendly ship captain that is working the route between Key Biscayne and Bimini. It may look as if we're just sending each other pictures of rainbow butterfly
unicorn kittens.
This mission demonstrates that we can use Python 3, urllib, and BeautifulSoup. Now, we've got a working environment.
|
{"url":"https://subscription.packtpub.com/book/security/9781785283406/1/ch01lvl1sec04/mission-one-upgrade-beautiful-soup","timestamp":"2024-11-14T14:38:58Z","content_type":"text/html","content_length":"109854","record_id":"<urn:uuid:2e1f87f3-3d4c-4730-b76c-dff0ea8dca69>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00364.warc.gz"}
|
Mathematics (B.A., B.S.)
Students seeking an education field endorsement in mathematics follow the above requirements with the following changes:
See the Nebraska Wesleyan University Department of Education for information regarding education courses required for teaching certification.
An approved supporting program of 20 hours that includes CMPSC 1500 Program Design is also required for all Mathematics majors. Cooperatively designed by the student and advisor, the supporting
program may overlap with one or more minors or a second major.
For the mathematics major, the B.A. degree requires a minor from the humanities or arts, or more than 50 percent of the supporting program from these areas, while the B.S. degree requires a minor
from the natural or social sciences, or more than 50 percent of the supporting program from these areas. Mathematics majors seeking an education endorsement whose supporting program consists of
education courses will receive a B.S. degree.
|
{"url":"https://catalog.nebrwesleyan.edu/cc/2014-2015/mmd/296596","timestamp":"2024-11-05T20:10:02Z","content_type":"text/html","content_length":"35417","record_id":"<urn:uuid:0e809b8a-0236-4f3f-a9bb-af7d3b386d3f>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00307.warc.gz"}
|
Electric Potential Difference Calculator Online
Home » Simplify your calculations with ease. » Electrical »
Electric Potential Difference Calculator Online
The Electric Potential Difference Calculator is a tool designed to simplify the computation of electric potential difference, providing a fundamental understanding of voltage in electrical systems.
The Electric Potential Difference Calculator serves as a convenient tool to determine electric potential difference or voltage between two points in an electrical circuit. It calculates the energy
required (electric work) to transfer a certain charge from one point to another.
Formula of Electric Potential Difference Calculator
The formula used by the Electric Potential Difference Calculator is:
V = W / Q
• V: Represents electric potential difference or voltage (V).
• W: Represents electric work in joules (J).
• Q: Represents charge in coulombs (C).
Practical Use and Applications:
Understanding electric potential difference is crucial in electrical engineering, circuit analysis, and various technological applications. This calculator simplifies complex calculations and aids in
designing efficient electrical systems.
Conversion Table or Additional Information:
Below is a helpful table listing general terms related to electric potential difference, aiding users in comprehending related concepts:
Term Description
Voltage The measure of electric potential
Current Flow of electric charge
Resistance Impediment to current flow
Additionally, a guide on conversions or other useful information relevant to electric potential difference is included to assist users further.
Example of Electric Potential Difference Calculator
Let’s consider an example to demonstrate the usage of the Electric Potential Difference Calculator:
Suppose we have an electric work of 50 joules (W) and a charge of 10 coulombs (Q). Plugging these values into the formula, V = W / Q, yields an electric potential difference of 5 volts (V).
Frequently Asked Questions (FAQs):
What is Electric Potential Difference?
Electric potential difference, often known as voltage, is the measure of potential energy per unit charge between two points in an electrical circuit.
Why is Electric Potential Difference Important?
It determines the flow of electric current and plays a pivotal role in the functionality of electrical devices and systems.
How Does Potential Difference Calculator Help?
It simplifies complex calculations, allowing users to quickly determine voltage in various electrical scenarios.
Leave a Comment
|
{"url":"https://calculatorshub.net/electrical/electric-potential-difference-calculator/","timestamp":"2024-11-06T22:05:35Z","content_type":"text/html","content_length":"115509","record_id":"<urn:uuid:8deb79f7-ba01-4f1e-949a-4033902f7e5e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00269.warc.gz"}
|
Conic Sections Formulas Flashcards | Knowt
How can you tell if an equation is a circle without completing the square?
both x² and y² have exactly the same coefficient and the same sign
How can you tell if an equation is a parabola without completing the square?
if x is squared, then the graph goes up (a>0) or down (a<0)
if y is squared, then the graph goes right (a>0) or left (a<0)
How do you know if an equation is an ellipse without completing the square?
both of the variables are squared, somewhere in the equation
signs of the coefficients of x² and y² are the same
the square root of the number under the “x” term will indicate the horizontal stretch
the square root of the number under the “y” term will indicate the vertical stretch
[(x - h)²/ a²] - [(y - k)²/ b²] = 1
[(y - k)²/ a²] - [(x - h)²/ b²] = 1
How do you know if an equation is a hyperbola without completing the square?
coefficients of x² and y² may be the same or may be different
if the x term is positive/first, then the graph is “stretched” horizontally, if the y term is positive/first then the graph is “stretched” vertically
[(x - h)²/ a²] - [(y - k)²/ b²] = 0
[(x - h)²/ a²] + [(y - k)²/ b²] = 0
|
{"url":"https://knowt.com/flashcards/a314120b-e17a-4d8e-a7a0-d8e73cc04acb","timestamp":"2024-11-07T03:34:14Z","content_type":"text/html","content_length":"452543","record_id":"<urn:uuid:fd97b0f1-cb9f-4b60-8f9c-ef126a24a266>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00226.warc.gz"}
|
How many deputy Prime Ministers have been appointed so far in history of free and independent India ?
The pattern of consumption of electricity in a home is as follows.
i.$$5 \times 60$$ watt bulbs 5 hours a day
ii.$$5 \times 60$$ watt fans 10 hours a day
iii.TV as required
If the consumptionin a month of 30 days is 195 the TV power consumption per day is:
In the diagram given below โ Dโ represents the depot and A, B, C are the retail outlets points such that the distance between โ Dโ and โ Aโ is 10 kms. โ Dโ and โ Bโ is 15 kms. and D
and C is 20 kms. Distance from โ Aโ to โ Bโ is 5 kms and โ Bโ to โ Cโ is 7 kms. What is the saving in kms, if the route is optimised on any day and a vehicle has limit of 50 kms. per day
and load wise it can pick up the the load of โ Aโ , โ Bโ and โ Cโ in one loading:
Kheri is due north of Rampur. Highway NH-1 runs 31C south of east from Kheri and highway SH-1 runs 44C north of east from Rampur. If NH-1 and SH-1 are straight, what is the measure of acute angle
they form at the intersection ?
Which one is not written by Munshi Prem Chand ?
If the price index for the year 2008 was 120 and for the year 2009 it is 130. The rate of inflation would be:
If the $$ a_{th} $$ part of 49 is 7 and $$ b_{th} $$ part of 63 is 9 and $$ c_{th} $$ part of 112 is 16. Then which of the following is true:
Married persons living in joint families but not working as school teachers are represented by:
Persons who lived in joint families, are unmarried and who do not work as school teachers are represented by:
Married teachers living in joint families are represented by:
|
{"url":"https://cracku.in/rrb-2009-kolkata-question-paper-solved?page=4","timestamp":"2024-11-14T21:55:56Z","content_type":"text/html","content_length":"157796","record_id":"<urn:uuid:7f32bfad-09a0-4712-b12d-50e2581193b7>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00390.warc.gz"}
|
Improved iteration complexity bounds of cyclic block coordinate descent for convex problems
The iteration complexity of the block-coordinate descent (BCD) type algorithm has been under extensive investigation. It was recently shown that for convex problems the classical cyclic BCGD (block
coordinate gradient descent) achieves an O(1/r) complexity (r is the number of passes of all blocks). However, such bounds are at least linearly depend on K (the number of variable blocks), and are
at least K times worse than those of the gradient descent (GD) and proximal gradient (PG) methods. In this paper, we close such theoretical performance gap between cyclic BCD and GD/PG. First we show
that for a family of quadratic nonsmooth problems, the complexity bounds for cyclic Block Coordinate Proximal Gradient (BCPG), a popular variant of BCD, can match those of the GD/PG in terms of
dependency on K (up to a log^2(K) factor). Second, we establish an improved complexity bound for Coordinate Gradient Descent (CGD) for general convex problems which can match that of GD in certain
scenarios. Our bounds are sharper than the known bounds as they are always at least K times worse than GD. Our analyses do not depend on the update order of block variables inside each cycle, thus
our results also apply to BCD methods with random permutation (random sampling without replacement, another popular variant).
Dive into the research topics of 'Improved iteration complexity bounds of cyclic block coordinate descent for convex problems'. Together they form a unique fingerprint.
|
{"url":"https://experts.umn.edu/en/publications/improved-iteration-complexity-bounds-of-cyclic-block-coordinate-d","timestamp":"2024-11-09T19:23:02Z","content_type":"text/html","content_length":"51114","record_id":"<urn:uuid:3c2249f4-b2d8-4c69-94b1-0bf867d0e0fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00375.warc.gz"}
|
Center for Advanced Studies
arXiv Fall2020
Center for Advanced Studies Seminar on Mondays
at 17.00 via Zoom
December 14, 2020
Samuel Grushevsky
(Stony Brook Univ.)
Differentials on Riemann surfaces and the geometry of the moduli spaces of curves
December 7, 2020
Nikita Nekrasov
(Stony Brook Univ., Skoltech, Kharkevich Inst.)
Towards Lefschetz thimbles in sigma models
The talk is based on joint work with Igor Krichever (arXiv:2010.15575)
November 30, 2020
Alexei Penskoi
(MSU, HSE Univ., IUM)
Isoperimetric inequalities for eigenvalues of the Laplace–Beltrami operator
Lord Rayleigh asked in his famous book «Theory of Sound» (1877–78) the following question: what shape of the drum membrane provides the lowest possible sound among all membranes of a given fixed
area? The answer (the disc) was obtained by Lord Rayleigh using physical heuristics and rigorously proven later by Faber and Krahn in 1921. The contemporary analogue of this problem in Riemannian
geometry is the following one: given a surface and a natural number k, find the supremum of the k-th eigenvalue of the Laplace-Beltrami operator (depending on a Riemannian metric) over the space of
all Riemannian metrics of given fixed area. This difficult problem turns out to be very rich and related to such classical domains as Differential and Algebraic Geometry, Geometric Analysis, PDEs,
Topology etc.
November 23, 2020
Mykola Semenyakin
(Skoltech, HSE Univ.)
Solution of tetrahedron equation from cluster algebras, 2
November 16, 2020
Mykola Semenyakin
(Skoltech, HSE Univ.)
Solution of tetrahedron equation from cluster algebras, 1
The language of cluster algebras is known to be convenient for the description of various different phenomenons in mathematical physics, and sometimes serves for the demystification of those. In the
talk I will demonstrate one example of this kind: how the mysterious solution of tetrahedron equation (3d generalization of Yang-Baxter equation) explicitly constructed by Bazhanov and Sergeev,
appears to be the basic and very well known building block in the theory of cluster algebras.
I will start with an explanation of what the tetrahedron equation is, how it is related to the Yang – Baxter equation, and present the Bazhanov-Sergeev solution of it. Then I will introduce the
planar networks, the Poisson algebra related to the paths on them, and recall how they and their ‘isomorphisms’ fit into the theory of cluster algebras. Finally, I will show how the simplest
four-gonal planar network gives the Bazhanov-Sergeev Lax operator, and composition of the four network transformations, known as ‘spider moves’, gives tetrahedron relation for those. If the time will
permit, I will also explain how using this block one can construct an integrable system, whose spectral curve is an arbitrary symmetric Newton polygon
November 9, 2020
Petr Kravchuk
(IAS, Princeton)
Conformal blocks in d=3 and higher (continuation)
November 2, 2020
Petr Kravchuk
(IAS, Princeton)
Conformal blocks in d=3 and higher
Recent advances in numerical conformal bootstrap have renewed the practical interest in conformal blocks in d=3 and higher. Owing to the non-trivial structure of the Lorentz group, the number of
different conformal blocks that need to be computed is much larger in d>=3 than in d=2 and new techniques are required. I will first review the main properties and the approaches used for computing
the simplest scalar conformal blocks. Then I will discuss some of the modern techniques used to compute conformal blocks for operators with non-trivial spin. If time permits, I will also present the
recent derivation of Zamolodchikov-like recursion relations for general d=3 blocks
October 26, 2020
Petr Dunin-Barkowski
(HSE Univ., Skoltech)
Topological recursion for general weighted double Hurwitz number
partition functions (a.k.a. KP tau-functions of hypergeometric type)
The talk is devoted to outlining the proof of spectral curve topological recursion for n-point functions corresponding to a wide class of general weighted Hurwitz number partition functions (also
known as KP tau-functions of hypergeometric type). This class, in particular, covers all known Hurwitz-type problems for which there already exist proofs of topological recursion (including simple,
monotone, strictly monotone and r-spin Hurwitz numbers, as well as Bousquet-Mélou–Schaeffer numbers, hypermap numbers and coefficients of the Ooguri-Vafa partition function for HOMFLY polynomials of
torus knots), and, in fact, most known cases of spectral curve topological recursion in general. The spectral curve we study comes from the work of Alexandrov-Chapuy-Eynard-Harnad (where they proved
topological recursion for polynomially weighted double Hurwitz numbers), but our proof of topological recursion is completely different and is applicable in a considerably more general situation.
The talk is based on joint work with Boris Bychkov, Maxim Kazarian and Sergey Shadrin (part 1: arXiv:2008.13123, part 2: in progress
October 19, 2020
Alexander Gaifullin
(Skoltech, Steklov Inst.)
On homology of Torelli group of genus 3
The Torelli group of a genus $g$ oriented surface $S_g$ is the subgroup $I_g$ of the mapping class group $Mod(S_g)$ consisting of all mapping classes that act trivially on the homology of $S_g$. It
is known that $I_2$ is an infinitely generated free group (Mess, 1992), but $I_g$ is finitely generated for $g>2$ (Johnson, 1983). One of the most interesting open problems concerning Torelli groups
is the question of whether the groups $I_g$ ($g>2$) are finitely presented or not. One of the likely possibilities is that $I_g$ is finitely presented whenever $g>3$, but $I_3$ is not. A possible
approach to this problem relies upon the study of the second homology group of $I_3$ using the spectral sequence for the action of $I_3$ on certain convenient cell complex. Up to now, the best cell
complex for this purpose seems to be the complex of cycles constructed by Bestvina, Bux, and Margalit in 2007. The study of the corresponding spectral sequence has already led to several results on
homology of Torelli groups. However, none of them concerned the most interesting case of the second homology group. For genus $3$, it is possible to study the spectral sequence in more details, in
particular, to compute several non-trivial differentials of it. In the talk, we present a partial result towards the conjecture that the group $H_2(I_3)$ is not finitely generated and hence $I_3$ is
not finitely presented. Namely, we prove that the term $E^3_{0,2}$ of the spectral sequence is infinitely generated, that is, the group $E^1_{0,2}$ remains infinitely generated after taking quotients
by images of the differentials $d^1$ and $d^2$. If one proceeded with the proof that it also remains infinitely generated after taking quotient by the image of the third differential $d^3$, he would
complete the proof of the fact that $I_3$ is not finitely presented
October 12, 2020
Bernhard Keller
(Univ. of Paris)
Bases for cluster algebras: an introductory survey
We will give an introduction to cluster algebras and report on recent progress concerning the construction of vector space bases for such algebras. We will start with the combinatorics of quiver
mutation, maximal green sequences and the g-vector fan. Then we will introduce cluster algebras and present the main facts about them: Laurent phenomenon, positivity and the classification of
cluster-finite cluster algebras. We will then review some milestones in the construction of bases: the cluster-finite case, the Fock-Goncharov duality conjectures, the work of Gross-Hacking-Keel and
Gross-Hacking-Keel-Kontsevich culminating in the construction of the theta-basis. We will then briefly touch on the Lie-theoretic bases: the dual canonical basis and the dual semi-canonical basis. We
will end with an explicit description of the theta basis, the dual canonical and the dual semi-canonical basis for the Kronecker quiver. The moral is that beyond the cluster monomials, introduced by
Fomin-Zelevinsky, these bases are not so canonical after all
October 5, 2020
Artem Prikhodko
(Skoltech, HSE Univ.)
p-adic cohomology of stacks
p-adic Hodge theory studies relations between various cohomology theories of schemes over p-adic base. In the talk I will review the Integral version of the theory developed by Bhatt-Morrow-Scholze
and explain how to extend it to the setting of algebraic stacks. As the main application we will establish Totaro’s conjectural inequalities. If time permits, we will also discuss how to use p-adic
methods to deduce Hodge-to-de Rham degeneration for stacks in characteristic zero. This is joint work with Dmitry Kubrak
September 28, 2020
Slava Rychkov
Introduction to the conformal bootstrap and non-gaussianity of the critical 3d Ising model
Some critical indices of the 3D Ising are close to zero (eta), while others differ substantially from the free theory predictions. The conformal bootstrap provides the means to compute the critical
indices. Moreover, one can get the four-point function, and its behavior shows the non-Gaussian nature of the critical point
September 14, 2020
Yakov Kononov
(Columbia Univ.)
Towards geometric construction of quantum difference equations
The monodromy of quantum difference equations is closely related to elliptic stable envelopes invented by M.Aganagic and A.Okounkov. In the talk I will explain how to extract these equations from the
monodromy using the geometry of the variety X and of its symplectic dual Y. In particular, I will discuss how to extend the action of representation-theoretic objects on K(X), such as quantum groups,
quantum Weyl groups, R-matrices, etc, to their action on K(Y). As an application, we will consider the example of the Hilbert scheme of points in the complex plane, where these results allow us to
prove the conjectures of E.Gorsky and A.Negut about the infinitesimal change of the stable basis. Based on joint work with A.Smirnov
September 21, 2020
Ilya Vilkoviskiy
(Skoltech, HSE Univ.)
Universal description of q-\mathcal{W} algebras of B,C,D types via quantum toroidal algebras
The deformed W algebras of type A have a uniform description in terms of the quantum toroidal gl(1) algebra. In this talk I will introduce a comodule algebra K over toroidal gl(1) which gives a
uniform construction of basic deformed W currents and screening operators in types B,C,D including twisted and super algebras. I will also explain that a completion of algebra K contains three
commutative subalgebras, it allows us to obtain a commutative family of integrals of motion associated with affine Dynkin diagrams of all non-exceptional types except D^2_{l+1}.
Based on joint work with B. Feigin, M. Jimbo, E. Mukhin arXiv:2003.04234
| Spring 2022 | Fall 2021 | Spring 2021 | Spring 2020 | Fall 2019 | Spring 2019 | Fall 2018 | Spring 2018 | Fall 2017 | Spring 2017 | Fall 2016 |
|
{"url":"https://crei.skoltech.ru/cas/calendar/sem-mon/arxiv/fall20/","timestamp":"2024-11-05T20:07:44Z","content_type":"text/html","content_length":"75212","record_id":"<urn:uuid:7e8da6f4-21d3-4790-84e3-44bddec9040e>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00031.warc.gz"}
|
Definition 4.1.5.1. Let $f: X \rightarrow S$ be a morphism of simplicial sets. We say that $f$ is an inner covering map if, for every pair of integers $0 < i < n$, every lifting problem
\[ \xymatrix@R =50pt@C=50pt{ \Lambda ^{n}_{i} \ar [r] \ar@ {^{(}->}[d] & X \ar [d]^{f} \\ \Delta ^ n \ar [r] & S } \]
has a unique solution.
|
{"url":"https://kerodon.net/tag/0226","timestamp":"2024-11-08T19:23:16Z","content_type":"text/html","content_length":"16939","record_id":"<urn:uuid:9524d397-9d61-4d92-aae8-a662b4ca1810>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00459.warc.gz"}
|
Characterizing the fundamental bending vibration of a linear polyatomic molecule for symmetry violation searches (Journal Article) | NSF PAGESAbstractAbstractAbstractAbstractAbstractAbstract
Cosmic reionization was the last major phase transition of hydrogen from neutral to highly ionized in the intergalactic medium (IGM). Current observations show that the IGM is significantly neutral
atz> 7 and largely ionized byz∼ 5.5. However, most methods to measure the IGM neutral fraction are highly model dependent and are limited to when the volume-averaged neutral fraction of the IGM is
either relatively low ($x¯HI≲10−3$) or close to unity ($x¯HI∼1$). In particular, the neutral fraction evolution of the IGM at the critical redshift range ofz= 6–7 is poorly constrained. We present
new constraints on$x¯HI$atz∼ 5.1–6.8 by analyzing deep optical spectra of 53 quasars at 5.73 <z< 7.09. We derive model-independent upper limits on the neutral hydrogen fraction based on the fraction
of “dark” pixels identified in the Lyαand Lyβforests, without any assumptions on the IGM model or the intrinsic shape of the quasar continuum. They are the first model-independent constraints on the
IGM neutral hydrogen fraction atz∼ 6.2–6.8 using quasar absorption measurements. Our results give upper limits of$x¯HI(z=6.3)<0.79±0.04$(1σ),$x¯HI(z=6.5)<0.87±0.03$(1σ), and$x¯HI(z=6.7)
<0.94−0.09+0.06$(1σ). The dark pixel fractions atz> 6.1 are consistent with the redshift evolution of the neutral fraction of the IGM derived from Planck 2018.
more » « less
|
{"url":"https://par.nsf.gov/biblio/10488857-characterizing-fundamental-bending-vibration-linear-polyatomic-molecule-symmetry-violation-searches","timestamp":"2024-11-06T18:36:45Z","content_type":"text/html","content_length":"268976","record_id":"<urn:uuid:bf59a4db-57de-4bbd-8025-47536a4c6c0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00239.warc.gz"}
|
Gas Mileage Conversion Calculator
Gas Mileage Description
Gas mileage is the relationship between the distance traveled and the quantity of fuel burned by a vehicle. It can also be called fuel efficiency, fuel economy or fuel consumption rate (which more
accurately means an amount of fuel used per unit of distance covered). When talking about fuel consumption the lower the value, the more efficient and economical the vehicle is (the less fuel it
needs to cover a distance). When talking about fuel economy, the higher the value the more economical the vehicle is since it traveled more distance with a certain amount of fuel.
It can be measured in different ways, as well. Some of the ways to measure fuel economy include miles per gallon (there are two types UK MPG and US MPG), liters per 100 kilometers or kilometers per
liter. Imperial (British) gallon is 277.42 cubic inches or 4,5 L which is different from the US gallon which equals 231 cubic inches or 3.79 liters. One imperial gallon is approximately 1.2 US
gallons. Consequently, there is a difference between UK miles per gallon unit and US miles per gallon unit.
|
{"url":"https://unitchefs.com/gas_mileage/","timestamp":"2024-11-07T05:58:50Z","content_type":"text/html","content_length":"27014","record_id":"<urn:uuid:383edc90-6944-4df5-a21c-0ecdd16e5562>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00474.warc.gz"}
|
WeBWorK Standalone Renderer
A rod has length 2 meters. At a distance
meters from its left end, the density of the rod is given by
$\delta(x)=2 + 6 x$
(a) Complete the Riemann sum for the total mass of the rod (use $Dx$ in place of $\Delta x$):
mass = $\Sigma$
(b) Convert the Riemann sum to an integral and find the exact mass.
mass =
(include units)
You can earn partial credit on this problem.
|
{"url":"https://wwrenderer.libretexts.org/render-api?sourceFilePath=Library/Michigan/Chap8Sec4/Q03.pg&problemSeed=1234567&courseID=anonymous&userID=anonymous&course_password=anonymous&answersSubmitted=0&showSummary=1&displayMode=MathJax&language=en&outputFormat=nosubmit","timestamp":"2024-11-06T04:39:59Z","content_type":"text/html","content_length":"5707","record_id":"<urn:uuid:4083c82e-d875-423a-bf07-abcd76257433>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00232.warc.gz"}
|
limma p-value and adjusted p-value meaning
Last seen 6 months ago
European Union
I've performed a limma differential expression analysis of microarray data, and ended having many genes with a p-value lower than 0.05 whose adjusted p-value was always higher than 0.05
I must admit that my statistical knowledge is pretty low, and I am hoping that anybody can explain the meaning of this
Entering edit mode
"However, there is something confusing to me"
Well ... this is where life gets interesting now, right? :-)
ADD REPLY • link
|
{"url":"https://support.bioconductor.org/p/71821/","timestamp":"2024-11-02T12:20:46Z","content_type":"text/html","content_length":"22877","record_id":"<urn:uuid:f5b82a13-c44b-4fd6-8e08-9ce42782ef29>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00208.warc.gz"}
|
Some Implementation Details
Next: Bibliography Up: Combining the Beamformer with Previous: Final Results Contents
The processor I'm using is a TMS320C6711 DSP. It runs at 150mhz and has 4k L1 cache for code and 4k L2 cache for data. The L2 cache is 64k, and all other memory is external to the chip. The echo
canceller has a filter of length 1024 (that's about 1/8 second) and it uses a projection order of 16.
To take advantage of the blocking algorithm, matrix multiplication is implemented using Fast Fourier Transforms (FFT). The optimized FFT routine executes a length 512 FFT in approximately 7200
cycles. I am using a property of the FFT to do two real valued FFTs in one FFT operation (with some overhead). The number of FFTs per block is
Ok, so it still runs in under 150 million cycles, right? Not quite. The memory requirements exceed the 64k on the processor. And even the L2 cache runs considerably slower than the L1 cache. At a
minimum we need to try to keep memory in the L2 cache as much as possible. Unfortunately, the algorithm runs through more than 64k of memory 8000 times a second. So just using cache may not be very
efficient. Another solution is to use memory overlays. This gives the programmer more control over where the memory is at all times. And this is what I'm currently working on.
Next: Bibliography Up: Combining the Beamformer with Previous: Final Results Contents Todd A Goldfinger 2004-11-22
|
{"url":"http://www.signalsguru.net/projects/thesis/research/node13.html","timestamp":"2024-11-11T04:02:21Z","content_type":"application/xhtml+xml","content_length":"6155","record_id":"<urn:uuid:df15d0e8-ebd6-415c-b45f-a52320df4b7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00587.warc.gz"}
|
Theories of Anarchist Development - The Libertarian Labyrinth
Theories of Anarchist Development
The question we’re wrestling with remains this: How do we understand the anarchist past and how does that understanding influence our action in the present? This question is ultimately inseparable
from questions about how our present understanding of the anarchist project influences our engagement with the anarchist past, but one thing at a time.
One important aspect of our coming-to-terms with the anarchist past has to be our general understanding of how anarchism has developed. An adequate theory of anarchist development should probably be
able to:
1. account for the historical facts (and particularly, now, for the mass of historical facts newly available thanks to archive digitization, etc.)
2. describe the ways in which the defining characteristics of anarchism might have influenced that development
3. help us explain the state of anarchism in the present.
Let’s just acknowledge that, for the most part, we do not have explicit, well developed theories of this sort. What we have instead are convictions about the relative merits of various anarchist
tendencies and quasi-historical narratives that together suggest implicit theories of development. These vary from simple just-so stories to histories of considerable sophistication, but artificially
narrowed focus. What they tend to have in common is a sense that anarchist thought has undergone some kind of relatively continuous change, often towards the ideology (communism, syndicalism,
platformism, etc.) of their choice. Equally continuous, though obviously different in their implications, are a small number of narratives that trace a steady degeneration of anarchist thought from
some moment of primal clarity. (I think I know a few folks who believe that “it has all been downhill from Stirner” and I have myself half-seriously described anarchist history as a sort of
“parceling out” of Proudhon’s project.)
Ultimately, there is a tremendous amount of good historical work that either accepts or does not challenge this kind of implicit theory of development. And the point here isn’t to downplay the value
of any of that work. But it does seem to me that continuity as an assumption about anarchist development is likely, sooner or later, to be a casualty of good historical work and that the more the
light of historical research shines on the earliest decades of the anarchist past the more likely that assumption is to lose its place in our common sense.
Let’s consider the three tasks that we’ve said a theory of anarchist development should perform:
Accounting for the details of the anarchist past, particularly in the era prior to the emergence of anarchism per se, means revising our sense of the various tendencies and their succession. To
reconstruct the argument I have been making here for some time, let me begin by recalling a summary from “Our Lost Continent,” the post in which I first proposed an “Era of Anarchy” or period of
anarchy-without-anarchism in the years between 1840 and 1875-80:
I no longer feel the slightest hesitation in declaring that there was, in that forty-year period, what we might call an Era of Anarchy, during which a wide variety of anarchist philosophies
developed and subsequently declined. Proudhon launched the era with his explicit declaration—”I am an anarchist!”—in 1840, but he wasn’t alone for long. The communists of l’Humanitaire identified
the “anarchistic” roots of their approach the following year. We can argue about how anarchistic other communists of the period were, but certainly by the 1850s, Joseph Déjacque had explicitly
joined communism to the anarchy of Proudhon—running ahead of nearly all his contemporaries in proposing some form of anarchism and launching the sort of internal struggle that would mark the
whole of the post-1880 Era of Anarchism. There were individualists as well, including Josiah Warren, whose dislike of labels kept him from identifying as an anarchist, and Anselme Bellegarrigue,
who looks, in contemporary terms, like some sort of left-wing market anarchist. Stirner is there, with his anarchistic egoism. Ernest Coeurderoy dreams of cossack invasions. Virtually every
radical current from the revolutions of the late 18th century or the “utopian” period of the early 19th century manifests some more-or-less libertarian extreme. In North American, Calvin
Blanchard announces Art-Liberty, Eliphalet Kimball publishes his Thoughts on Natural Principles, and antinomian principles bubble up, over and over again, on the fringes of New England’s
religious culture. Proudhon, Pierre Leroux and New England transcendentalism unite in the work of William B. Greene. Activity in the anti-slavery movement leads Ezra Heywood and Lysander Spooner
to the most libertarian conclusions. Networks develop, formally and informally, among some of these figures and spread their influence among the working classes. The New England reform leagues,
the Association Internationale, the Union républicaine de langue française and the International Workingmen’s Association represent the efforts of various of these anarchist philosophies to
manifest themselves as movements in the era before anarchism was established as an ideology, or even a widely-used keyword. In the context of these attempts, new tendencies will emerge, such as
the anarchistic collectivism of Bakunin and his associates and a revived anti-state communism, which will reject the term an-anarchy because of its associations with Proudhon.
The facts of history force us to trade a narrative of succession for one that displays a great deal of simultaneity. The provocative follow-up, on “The ‘Benthamite’ anarchism and the origins of
anarchist history,” focused on the extent to which anarchism emerged as a break with that anarchy-without-anarchism, rather than a refinement or clarification of it. Having now explored the extent to
which the ascendance of anarchist communism also involved a careful management of the legacy of Bakunin (with “God and the State” marking a sort of farewell to Bakunin for at least some of those
involved in its publication), as well as having traced the conflict-filled construction and reconstruction of “mutualism” in various anarchist eras, discontinuity really seems to me to be the
defining quality of many of the moments we often paint as advances for anarchist thought or victories for particular tendencies.
So we probably have a variety of reasons to believe that most of the claims to continuous development and succession from tendency to tendency have been based on an incomplete use of the historical
data. But we can also just look around to see that, however convincing those claims might have been at particular moments in anarchist history, we are again faced with a wide variety of tendencies
existing simultaneously, with no evidence that any of them are likely to go away any time soon.
Is there perhaps some way to discuss the contexts of this development that salvages the more continuous sorts of narratives? Could we find ground on which to claim that, for example, the emergence of
anarchist communism (or anarcho-syndicalism or platformism or egoism or Proudhon’s first barbaric yawp, etc.) really did mark a particularly decisive development, but that other events have obscured
its significance, led the working classes astray, etc.? Part of the answer probably depends on what sort of thing we think the anarchist project is. If we think of anarchist theory as something that
emerged in a somewhat unformed state from popular resistance to authority, and that it took some time for the basic ideas necessary for an anarchist movement to become clear (and this seems to be one
of the rough-and-ready developmental narratives), then at least some of the early complications may not weigh too heavily on us, but it would still be necessary to explain why, after some series of
positive innovations (including whatever steps you think it took to get from “je suis anarchiste” to your favored flavor of anarchism), we have continued to see innovations that simply do not fit the
narrative of steadily increasing clarity. After all, one of the ways that reactionary would-be entryists have attempted to brand their efforts is as further refinements of the tradition.
The stakes rise here a bit, in ways that I have attempted to gently address in the past. I don’t have any trouble drawing a clear line between the various consistent anti-authoritarian tendencies and
various authoritarian attempts to graft their pet systems of hierarchy onto anarchism—and I don’t imagine many other consistent anarchists have much trouble separating the two groups, as long as we
stick to questions of logical consistency. But our implicit theories of anarchist development attempt to do more than just distinguish in this way, calling on the testimony of particular histories to
bolster the claims of one or another anarchist tendencies to preeminence. As long as we can really show some sort of continuity, and some development towards a particular sort of clarification of the
anarchist ideal, then we can at least point to those subsequent moments when proposed “developments” seem to lead anarchist thought off in some different direction. If, however, what our historical
research shows is not the steady development of ideas, but instead a sort of theoretical détournement, through which a single set of terms is charged and then recharged with significantly different
meanings—and this is one fairly compelling way, I think, of reading the succession of phases in anarchist thought—then we are on more dangerous ground. The fact that there is probably more connection
between the thought of Proudhon and that of Kropotkin than was acknowledged by the latter in works like “On Order” does not change the fact that the form of the anti-authoritarian communist
appropriation of the language of anarchy is essentially that of entryism. We can back up and say that this particular form of appropriation was unnecessary, that it was ultimately harmless in itself,
etc., but if we appeal to it as a vindication of the “modern anarchism” of the anarchist communists then we have, at the very least, opened doors that we may find a little bit hard to close.
Fortunately, if deeper engagement with anarchist history takes away some familiar narratives of development, the same process seems to provide alternative accounts. But it really does not seem that
the sort of “common sense” accounts of continuous development from “precursors” to “modern anarchism” (however you want to specify those catagories) fulfills any of the tasks we have set for a theory
of anarchist development. And I wonder if there isn’t something particularly demoralizing about some of the common, but under-theorized positions in the milieu, which combine a sort of faith that
some process of clarification has taken place with the almost inescapable experience of our collective lack of clarity.
Let’s not waste any time, then, proposing a more adequate alternative.
No one should be surprised when I turn again to Voline’s 1924 essay “On Synthesis.” As I suggested in “The Synthesist’s Consolation,” the first benefit of engaging with Voline’s text is indeed
consolation. If we are forced to acknowledge that we still have some work to do in order to come to terms with the anarchist past, we can at least remind ourselves that some of our best and brightest
warned us a long time ago that the road wouldn’t always be smooth. In moving from the the rude shot across the bow in “Coming to Terms with the Anarchist Past” to the more positive, conciliatory
message of “The Synthesist’s Consolation,” I’ve really just been executing a variation on an old theme. We see it in rather extreme form in Max Nettlau, one of the most constant witnesses to
difficulties that seemed inherent in our project, whose various writings on mutual tolerance, panarchy, synthesis, etc. provide us with extraordinarily challenging proposals to defects in the
anarchist project that he clearly thought were themselves profound. But there we also see the theoretical questioning accompanied by a constant practical commitment. We see it again in the work of
Ricardo Mella, whose essay on “The Bankruptcy of Beliefs” lumps anarchism among the systems of belief that must inevitably collapse under the weight of their own defects, but who sequel on “The
Rising Anarchism” suggests another chapter, explicitly based in synthesis, provided we stick to the task at hand. Alongside these two remarkable bodies of work, Voline’s account of anarchist
synthesis (and my own appeal to it) are positively gentle and optimistic in their tone. More than that, however, I think that Voline’s notion of synthesis really does the things that we might expect
from an adequate theory of anarchist development—provided, of course, that we allow it to do that work and do not simply reduce it to a commentary on how to organize anarchist congresses or
federations, as has so often been done.
If anarchism is a matter of exploration, followed by synthesis, and then no doubt by more cycles of a similar nature, then we would have to feel quite certain of our present beliefs and practices to
waste too much time relegating the explorations of others to “precursor” status. Among contemporary tendencies, it becomes easy to see a sort of division of later, in the context of which we needn’t
pretend that all explorations are equal in their long- or short-term utility. Were we to adopt this perspective, we might expect some reduction in the most useless sorts of sectarian struggle, along
with some reduction in the distractions that stand between us and serious engagements with the anarchist past. We might at the very least have better fights among ourselves and there are probably a
whole series of improvements that we might see in our relations with one another if we were to really internalize this view of things. But perhaps one of the most immediate effects of adopting a
synthesist theory of anarchist development would be the light it might shed on our present conflicts and incompatibilities. After all, we are talking about a critique that is now well over a century
old. And if it was true that there was a need for synthesis among anarchist tendencies one hundred years ago, what do we imagine the effects would be if we failed to address that need?
If we are looking for a theory, rooted in the nature of the anarchist project, that seems likely to shine a light on what is demoralizing about our present situation, perhaps we have at least found a
likely candidate.
|
{"url":"https://www.libertarian-labyrinth.org/featured-articles/theories-of-anarchist-development/","timestamp":"2024-11-08T18:47:04Z","content_type":"text/html","content_length":"230739","record_id":"<urn:uuid:38412fa5-76f0-46f7-944f-948edef8baf3>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00564.warc.gz"}
|
Significance tester
Determination of sample size on the basis of expected proportional values
Question: How large must my sample be if I want the actual proportional value of the statistical population within a radius I have determined to resemble the measured proportional value of the
Important factors for the calculation are:
1. the expected size of the proportional values
Where the size of the proportional values cannot be roughly estimated, a worst case scenario is assumed for the calculation, with an expected proportional value of 50%.
2. the tolerated fluctuation range of the proportional values
i.e. how far the actual value of the statistical population may diverge at most from the measured value of the sample.
3. the desired significance level
i.e. with which statistical probability the actual proportional value of the statistical population is aligned with the measured proportional value of the sample, within the tolerated fluctuation
range. In market research, the target is usually a significance level of 95%.
Determination of sample size on the basis of expected proportional values
For an expected proportional value of ##in1##% and a tolerated fluctuation of ##in2##% the required sample size is
90% significance level ##sig90##
95% significance level ##sig95##
99% significance level ##sig99##
This means for a significance level of 95%:
With a sample size of ##sig95## one can statistically expect, that a value of ##in1##% in the sample fits a value of the population of about ##in1##% ±##in2##% with a probablity of 95%.
1. The population is sufficiently large, so that correction factors for finite populations can be neglected.
|
{"url":"https://www.consilium-co.com/proportional-values.html","timestamp":"2024-11-06T07:39:06Z","content_type":"text/html","content_length":"19561","record_id":"<urn:uuid:e14dafde-3f63-45c4-82a2-d204abf0365e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00804.warc.gz"}
|
Scientists blast atoms with Fibonacci laser to make an 'extra' dimension of time
Scientists have produced a brand-new, bizarre phase of matter that acts as like it has two dimensions of time by directing a Fibonacci laser beam at atoms within a quantum computer.
The new phase of matter, which was produced by rhythmically jiggling a strand of 10 ytterbium ions with lasers, allows researchers to store information in a way that is significantly more
error-protected, paving the way for quantum computers that can retain data for a very long time without becoming jumbled. In a report published on July 20 in the journal Nature, the researchers
presented their findings in detail.
The inclusion of a theoretical "extra" time dimension "is a completely different way of thinking about phases of matter," lead author Philipp Dumitrescu, a researcher at the Flatiron Institute's
Center for Computational Quantum Physics in New York City, said in a statement. "I've been working on these theory ideas for over five years, and seeing them come actually to be realized in
experiments is exciting."
The physicists weren't trying to invent a phase with a hypothetical extra time dimension, and they weren't seeking for a way to improve quantum data storage either. Instead, they sought to develop a
new phase of matter, one that went beyond the conventional solid, liquid, gas, and plasma states.
They set about building the new phase in the quantum computer company Quantinuum's H1 quantum processor, which consists of 10 ytterbium ions in a vacuum chamber that are precisely controlled by
lasers in a device known as an ion trap.
Ordinary computers use bits, or 0s and 1s, to form the basis of all calculations. Quantum computers are designed to use qubits, which can also exist in a state of 0 or 1. But that's just about where
the similarities end. Thanks to the bizarre laws of the quantum world, qubits can exist in a combination, or superposition, of both the 0 and 1 states until the moment they are measured, upon which
they randomly collapse into either a 0 or a 1.
This strange behavior is the key to the power of quantum computing, as it allows qubits to link together through quantum entanglement, a process that Albert Einstein dubbed "spooky action at a
distance." Entanglement couples two or more qubits to each other, connecting their properties so that any change in one particle will cause a change in the other, even if they are separated by vast
distances. This gives quantum computers the ability to perform multiple calculations simultaneously, exponentially boosting their processing power over that of classical devices.
But the development of quantum computers is held back by a big flaw: Qubits don't just interact and get entangled with each other; because they cannot be perfectly isolated from the environment
outside the quantum computer, they also interact with the outside environment, thus causing them to lose their quantum properties, and the information they carry, in a process called decoherence.
"Even if you keep all the atoms under tight control, they can lose their 'quantumness' by talking to their environment, heating up or interacting with things in ways you didn't plan," Dumitrescu
To get around these pesky decoherence effects and create a new, stable phase, the physicists looked to a special set of phases called topological phases. Quantum entanglement doesn't just enable
quantum devices to encode information across the singular, static positions of qubits, but also to weave them into the dynamic motions and interactions of the entire material — in the very shape, or
topology, of the material's entangled states. This creates a "topological" qubit that encodes information in the shape formed by multiple parts rather than one part alone, making the phase much less
likely to lose its information.
A key hallmark of moving from one phase to another is the breaking of physical symmetries — the idea that the laws of physics are the same for an object at any point in time or space. As a liquid,
the molecules in water follow the same physical laws at every point in space and in every direction. But if you cool water enough so that it transforms into ice, its molecules will pick regular
points along a crystal structure, or lattice, to arrange themselves across. Suddenly, the water molecules have preferred points in space to occupy, and they leave the other points empty; the spatial
symmetry of the water has been spontaneously broken.
Creating a new topological phase inside a quantum computer also relies on symmetry breaking, but with this new phase, the symmetry is not being broken across space, but time.
By giving each ion in the chain a periodic jolt with the lasers, the physicists wanted to break the continuous time symmetry of the ions at rest and impose their own time symmetry — where the qubits
remain the same across certain intervals in time — that would create a rhythmic topological phase across the material.
But the experiment failed. Instead of inducing a topological phase that was immune to decoherence effects, the regular laser pulses amplified the noise from outside the system, destroying it less
than 1.5 seconds after it was switched on.
After reconsidering the experiment, the researchers realized that to create a more robust topological phase, they would need to knot more than one time symmetry into the ion strand to decrease the
odds of the system getting scrambled. To do this, they settled on finding a pulse pattern that did not repeat simply and regularly but nonetheless showed some kind of higher symmetry across time.
This led them to the Fibonacci sequence, in which the next number of the sequence is created by adding the previous two. Whereas a simple periodic laser pulse might just alternate between two laser
sources (A, B, A, B, A, B, and so on), their new pulse train instead ran by combining the two pulses that came before (A, AB, ABA, ABAAB, ABAABABA, etc.).
This Fibonacci pulsing created a time symmetry that, just like a quasicrystal in space, was ordered without ever repeating. And just like a quasicrystal, the Fibonacci pulses also squish a higher
dimensional pattern onto a lower dimensional surface. In the case of a spatial quasicrystal such as Penrose tiling, a slice of a five-dimensional lattice is projected onto a two-dimensional surface.
When looking at the Fibonacci pulse pattern, we see two theoretical time symmetries get flattened into a single physical one.
An example of penrose tiling (Image credit: Shutterstock)
"The system essentially gets a bonus symmetry from a nonexistent extra time dimension," the researchers wrote in the statement. The system appears as a material that exists in some higher
dimension with two dimensions of time — even if this may be physically impossible in reality.
When the team tested it, the new quasiperiodic Fibonacci pulse created a topographic phase that protected the system from data loss across the entire 5.5 seconds of the test. Indeed, they had created
a phase that was immune to decoherence for much longer than others.
"With this quasi-periodic sequence, there's a complicated evolution that cancels out all the errors that live on the edge," Dumitrescu said. "Because of that, the edge stays quantum-mechanically
coherent much, much longer than you'd expect."
Although the physicists achieved their aim, one hurdle remains to making their phase a useful tool for quantum programmers: integrating it with the computational side of quantum computing so that it
can be input with calculations.
"We have this direct, tantalizing application, but we need to find a way to hook it into the calculations," Dumitrescu said. "That's an open problem we're working on."
Reference(s): Nature
|
{"url":"http://www.thespaceacademy.org/2022/08/scientists-blast-atoms-with-fibonacci.html","timestamp":"2024-11-06T11:59:48Z","content_type":"application/xhtml+xml","content_length":"174243","record_id":"<urn:uuid:0445be39-d791-46a5-8a85-5b8ea8085543>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00572.warc.gz"}
|
What is static inference - ThinkLike.AI
What is static inference
Machine Learning is a category of computer algorithms which have been developed to aid in automated decision-making processes. It involves the use of mathematical algorithms to identify patterns and
correlations in data, which can then be used to make accurate predictions and decisions. To accomplish this task, machines require a large amount of data, which is analyzed using statistical methods
such as regression analysis, decision trees, and neural networks.
One of the critical components of Machine Learning algorithms is static inference. Static inference is a technique used by machines to make predictions based on a set of pre-determined rules or prior
knowledge. It is a type of probabilistic model that can be used to estimate the probability of an event occurring based on a set of inputs. It is used to infer features of data that have not been
explicitly stated, such as the relationship between different variables, the structure of the data, and the underlying probability distributions.
Static inference is used in Machine Learning to predict the outcomes of future events based on past data. In this process, the machine first learns a set of rules from the data to identify patterns
and correlations, which helps it to infer relationships between variables. Once the rules have been learned, the machine then uses them to make predictions about future outcomes based on new data.
There are several types of static inference algorithms commonly used in Machine Learning. The most commonly used types are Bayesian networks, Markov networks, and decision trees. Bayesian networks
use statistical methods to model the dependencies between variables and recognize patterns in data. Markov networks, on the other hand, are probabilistic graphical models that represent the
dependencies between variables as a weighted graph. Decision trees are tree-based models that are often used in classification tasks and are built by recursively splitting the data into subsets based
on the values of the features.
In Machine Learning, static inference is important because it helps to improve the accuracy and reliability of predictions. These algorithms are used to identify hidden patterns and relationships
within the data, which allows the machine to make accurate predictions about future events. This process can be used in various applications, such as fraud detection, recommendation systems, medical
diagnosis, and natural language processing.
In conclusion, static inference is a critical component of Machine Learning that helps machines to make accurate predictions based on pre-determined rules and prior knowledge. It is used in a wide
range of applications and can be implemented using various algorithms such as Bayesian networks, Markov networks, and decision trees. As Machine Learning continues to evolve, static inference
techniques will remain an essential tool for developing more powerful and accurate predictive models.
|
{"url":"https://thinklike.ai/artificial-intelligence/what-is-static-inference/","timestamp":"2024-11-07T09:02:37Z","content_type":"text/html","content_length":"138036","record_id":"<urn:uuid:8e5dc821-c13a-4939-9edf-f2f74813cc2c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00832.warc.gz"}
|
Lab06: Topic Modeling with Latent Semantic Analysis
Lab06: Topic Modeling with Latent Semantic Analysis¶
Latent Semantic Analysis (LSA) is a method for finding latent similarities between documents treated as a bag of words by using a low rank approximation. It is used for document classification,
clustering and retrieval. For example, LSA can be used to search for prior art given a new patent application. In this homework, we will implement a small library for simple latent semantic analysis
as a practical example of the application of SVD. The ideas are very similar to PCA. SVD is also used in recommender systems in an similar fashion (for an SVD-based recommender system library, see
We will implement a toy example of LSA to get familiar with the ideas. If you want to use LSA or similar methods for statistical language analysis, the most efficient Python libraries are probably
gensim and spaCy - these also provide an online algorithm - i.e. the training information can be continuously updated. Other useful functions for processing natural language can be found in the
Natural Language Toolkit.
Note: The SVD from scipy.linalg performs a full decomposition, which is inefficient since we only need to decompose until we get the first k singluar values. If the SVD from scipy.linalg is too slow,
please use the sparsesvd function from the sparsesvd package to perform SVD instead. You can install in the usual way with
Then import the following
from sparsesvd import sparsesvd
from scipy.sparse import csc_matrix
and use as follows
sparsesvd(csc_matrix(M), k=10)
Exercise 1 (20 points). Calculating pairwise distance matrices.
Suppose we want to construct a distance matrix between the rows of a matrix. For example, given the matrix
M = np.array([[1,2,3],[4,5,6]])
the distance matrix using Euclidean distance as the measure would be
[[ 0.000 1.414 2.828]
[ 1.414 0.000 1.414]
[ 2.828 1.414 0.000]]
if \(M\) was a collection of column vectors.
Write a function to calculate the pairwise-distance matrix given the matrix \(M\) and some arbitrary distance function. Your functions should have the following signature:
def func_name(M, distance_func):
0. Write a distance function for the Euclidean, squared Euclidean and cosine measures.
1. Write the function using looping for M as a collection of row vectors.
2. Write the function using looping for M as a collection of column vectors.
3. Wrtie the function using broadcasting for M as a collection of row vectors.
4. Write the function using broadcasting for M as a collection of column vectors.
For 3 and 4, try to avoid using transposition (but if you get stuck, there will be no penalty for using transposition). Check that all four functions give the same result when applied to the given
matrix \(M\).
Exercise 2 (20 points).
Exercise 2 (20 points). Write 3 functions to calculate the term frequency (tf), the inverse document frequency (idf) and the product (tf-idf). Each function should take a single argument docs, which
is a dictionary of (key=identifier, value=document text) pairs, and return an appropriately sized array. Convert ‘-‘ to ‘ ‘ (space), remove punctuation, convert text to lowercase and split on
whitespace to generate a collection of terms from the document text.
• tf = the number of occurrences of term \(i\) in document \(j\)
• idf = \(\log \frac{n}{1 + \text{df}_i}\) where \(n\) is the total number of documents and \(\text{df}_i\) is the number of documents in which term \(i\) occurs.
Print the table of tf-idf values for the following document collection
s1 = "The quick brown fox"
s2 = "Brown fox jumps over the jumps jumps jumps"
s3 = "The the the lazy dog elephant."
s4 = "The the the the the dog peacock lion tiger elephant"
docs = {'s1': s1, 's2': s2, 's3': s3, 's4': s4}
Exercise 3 (20 points).
1. Write a function that takes a matrix \(M\) and an integer \(k\) as arguments, and reconstructs a reduced matrix using only the \(k\) largest singular values. Use the scipy.linagl.svd function to
perform the decomposition. This is the least squares approximation to the matrix \(M\) in \(k\) dimensions.
2. Apply the function you just wrote to the following term-frequency matrix for a set of \(9\) documents using \(k=2\) and print the reconstructed matrix \(M'\).
M = np.array([[1, 0, 0, 1, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 0, 0, 0, 0, 0],
[1, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 0, 1, 0, 0, 0, 0],
[0, 1, 1, 2, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 1, 0, 0, 1, 0, 0, 0, 0],
[0, 0, 1, 1, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 1, 1, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 1]])
3. Calculate the pairwise correlation matrix for the original matrix M and the reconstructed matrix using \(k=2\) singular values (you may use scipy.stats.spearmanr to do the calculations). Consider
the fist 5 sets of documents as one group \(G1\) and the last 4 as another group \(G2\) (i.e. first 5 and last 4 columns). What is the average within group correlation for \(G1\), \(G2\) and the
average cross-group correlation for G1-G2 using either \(M\) or \(M'\). (Do not include self-correlation in the within-group calculations.).
Exercise 4 (40 points). Clustering with LSA
1. Begin by loading a PubMed database of selected article titles using ‘pickle’. With the following: import cPickle docs = pickle.load(open('data/pubmed.pic', 'rb'))
Create a tf-idf matrix for every term that appears at least once in any of the documents. What is the shape of the tf-idf matrix?
2. Perform SVD on the tf-idf matrix to obtain \(U \Sigma V^T\) (often written as \(T \Sigma D^T\) in this context with \(T\) representing the terms and \(D\) representing the documents). If we set
all but the top \(k\) singular values to 0, the reconstructed matrix is essentially \(U_k \Sigma_k V_k^T\), where \(U_k\) is \(m \times k\), \(\Sigma_k\) is \(k \times k\) and \(V_k^T\) is \(k \
times n\). Terms in this reduced space are represented by \(U_k \Sigma_k\) and documents by \(\Sigma_k V^T_k\). Reconstruct the matrix using the first \(k=10\) singular values.
3. Use agglomerative hierarchical clustering with complete linkage to plot a dendrogram and comment on the likely number of document clusters with \(k = 100\). Use the dendrogram function from SciPy
4. Determine how similar each of the original documents is to the new document data/mystery.txt. Since \(A = U \Sigma V^T\), we also have \(V = A^T U S^{-1}\) using orthogonality and the rule for
transposing matrix products. This suggests that in order to map the new document to the same concept space, first find the tf-idf vector \(v\) for the new document - this must contain all (and
only) the terms present in the existing tf-idx matrix. Then the query vector \(q\) is given by \(v^T U_k \Sigma_k^{-1}\). Find the 10 documents most similar to the new document and the 10 most
Notes on the Pubmed articles
These were downloaded with the following script.
from Bio import Entrez, Medline
Entrez.email = "YOUR EMAIL HERE"
import cPickle
docs = cPickle.load(open('pubmed.pic'))
except Exception, e:
print e
docs = {}
for term in ['plasmodium', 'diabetes', 'asthma', 'cytometry']:
handle = Entrez.esearch(db="pubmed", term=term, retmax=50)
result = Entrez.read(handle)
idlist = result["IdList"]
handle2 = Entrez.efetch(db="pubmed", id=idlist, rettype="medline", retmode="text")
result2 = Medline.parse(handle2)
for record in result2:
title = record.get("TI", None)
abstract = record.get("AB", None)
if title is None or abstract is None:
docs[title] = '\n'.join([title, abstract])
print title
cPickle.dump(docs, open('pubmed.pic', 'w'))
|
{"url":"https://people.duke.edu/~ccc14/sta-663-2018/labs/Lab06.html","timestamp":"2024-11-13T20:59:38Z","content_type":"text/html","content_length":"41613","record_id":"<urn:uuid:9b2c3e14-76f6-407f-ad03-c1ab23fe476b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00646.warc.gz"}
|
Deductive reasoning
Jump to navigation Jump to search
Deductive reasoning, also deductive logic, logical deduction is the process of reasoning from one or more statements (premises) to reach a logically certain conclusion.^[1]
Deductive reasoning goes in the same direction as that of the conditionals, and links premises with conclusions. If all premises are true, the terms are clear, and the rules of deductive logic are
followed, then the conclusion reached is necessarily true.
Deductive reasoning ("top-down logic") contrasts with inductive reasoning ("bottom-up logic") in the following way; in deductive reasoning, a conclusion is reached reductively by applying general
rules which hold over the entirety of a closed domain of discourse, narrowing the range under consideration until only the conclusion(s) is left. In inductive reasoning, the conclusion is reached by
generalizing or extrapolating from specific cases to general rules, i.e., there is epistemic uncertainty. However, the inductive reasoning mentioned here is not the same as induction used in
mathematical proofs – mathematical induction is actually a form of deductive reasoning.
Deductive reasoning differs from abductive reasoning by the direction of the reasoning relative to the conditionals. Deductive reasoning goes in the same direction as that of the conditionals,
whereas abductive reasoning goes in the opposite direction to that of the conditionals.
Simple example[edit]
An example of an argument using deductive reasoning:
1. All men are mortal. (First premise)
2. Socrates is a man. (Second premise)
3. Therefore, Socrates is mortal. (Conclusion)
The first premise states that all objects classified as "men" have the attribute "mortal." The second premise states that "Socrates" is classified as a "man" – a member of the set "men." The
conclusion then states that "Socrates" must be "mortal" because he inherits this attribute from his classification as a "man."
Reasoning with modus ponens, modus tollens, and the law of syllogism[edit]
Modus ponens[edit]
Modus ponens (also known as "affirming the antecedent" or "the law of detachment") is the primary deductive rule of inference. It applies to arguments that have as first premise a conditional
statement (${\displaystyle P\rightarrow Q}$) and as second premise the antecedent (${\displaystyle P}$) of the conditional statement. It obtains the consequent (${\displaystyle Q}$) of the
conditional statement as its conclusion. The argument form is listed below:
1. ${\displaystyle P\rightarrow Q}$ (First premise is a conditional statement)
2. ${\displaystyle P}$ (Second premise is the antecedent)
3. ${\displaystyle Q}$ (Conclusion deduced is the consequent)
In this form of deductive reasoning, the consequent (${\displaystyle Q}$) obtains as the conclusion from the premises of a conditional statement (${\displaystyle P\rightarrow Q}$) and its antecedent
(${\displaystyle P}$). However, the antecedent (${\displaystyle P}$) cannot be similarly obtained as the conclusion from the premises of the conditional statement (${\displaystyle P\rightarrow Q}$)
and the consequent (${\displaystyle Q}$). Such an argument commits the logical fallacy of affirming the consequent.
The following is an example of an argument using modus ponens:
1. If an angle satisfies 90° < ${\displaystyle A}$ < 180°, then ${\displaystyle A}$ is an obtuse angle.
2. ${\displaystyle A}$ = 120°.
3. ${\displaystyle A}$ is an obtuse angle.
Since the measurement of angle ${\displaystyle A}$ is greater than 90° and less than 180°, we can deduce from the conditional (if-then) statement that ${\displaystyle A}$ is an obtuse angle. However,
if we are given that ${\displaystyle A}$ is an obtuse angle, we cannot deduce from the conditional statement that 90° < ${\displaystyle A}$ < 180°. It might be true that other angles outside this
range are also obtuse.
Modus tollens[edit]
Modus tollens (also known as "the law of contrapositive") is a deductive rule of inference. It validates an argument that has as premises a conditional statement (${\displaystyle P\rightarrow Q}$)
and the negation of the consequent (${\displaystyle \lnot Q}$) and as conclusion the negation of the antecedent (${\displaystyle \lnot P}$). In contrast to modus ponens, reasoning with modus tollens
goes in the opposite direction to that of the conditional. The general expression for modus tollens is the following:
1. ${\displaystyle P\rightarrow Q}$. (First premise is a conditional statement)
2. ${\displaystyle \lnot Q}$. (Second premise is the negation of the consequent)
3. ${\displaystyle \lnot P}$. (Conclusion deduced is the negation of the antecedent)
The following is an example of an argument using modus tollens:
1. If it is raining, then there are clouds in the sky.
2. There are no clouds in the sky.
3. Thus, it is not raining.
Law of syllogism[edit]
In proposition logic the law of syllogism takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. Here is the general
1. ${\displaystyle P\rightarrow Q}$
2. ${\displaystyle Q\rightarrow R}$
3. Therefore, ${\displaystyle P\rightarrow R}$.
The following is an example:
1. If Larry is sick, then he will be absent.
2. If Larry is absent, then he will miss his classwork.
3. Therefore, if Larry is sick, then he will miss his classwork.
We deduced the final statement by combining the hypothesis of the first statement with the conclusion of the second statement. We also allow that this could be a false statement. This is an example
of the transitive property in mathematics. Another example is the transitive property of equality which can be stated in this form:
1. ${\displaystyle A=B}$.
2. ${\displaystyle B=C}$.
3. Therefore, ${\displaystyle A=C}$.
Validity and soundness[edit]
Deductive arguments are evaluated in terms of their validity and soundness.
An argument is “valid” if it is impossible for its premises to be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be “valid”
even if one or more of its premises are false.
An argument is “sound” if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but is not sound. Fallacious arguments often take that form.
The following is an example of an argument that is “valid”, but not “sound”:
1. Everyone who eats carrots is a quarterback.
2. John eats carrots.
3. Therefore, John is a quarterback.
The example’s first premise is false – there are people who eat carrots who are not quarterbacks – but the conclusion would necessarily be true, if the premises were true. In other words, it is
impossible for the premises to be true and the conclusion false. Therefore, the argument is “valid”, but not “sound”. False generalizations – such as "Everyone who eats carrots is a quarterback" –
are often used to make unsound arguments. The fact that there are some people who eat carrots but are not quarterbacks proves the flaw of the argument.
In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known as term logic – was
developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic.
Deductive reasoning can be contrasted with inductive reasoning, in regards to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is “valid”,
it is possible for the conclusion to be false (determined to be false with a counterexample or other means).
This section
needs expansion
. You can help by
adding to it
(January 2015)
Aristotle started documenting deductive reasoning in the 4th century BC.^[2]
See also[edit]
1. ^ Sternberg, R. J. (2009). Cognitive Psychology. Belmont, CA: Wadsworth. p. 578. ISBN 978-0-495-50629-4.
2. ^ Evans, Jonathan St. B. T.; Newstead, Stephen E.; Byrne, Ruth M. J., eds. (1993). Human Reasoning: The Psychology of Deduction (Reprint ed.). Psychology Press. p. 4. ISBN 9780863773136.
Retrieved 2015-01-26. “In one sense [...] one can see the psychology of deductive reasoning as being as old as the study of logic, which originated in the writings of Aristotle.”
Further reading[edit]
• Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005, ISBN 87-991013-7-8
• Philip Johnson-Laird, Ruth M. J. Byrne, Deduction, Psychology Press 1991, ISBN 978-0-86377-149-1
• Zarefsky, David, Argumentation: The Study of Effective Reasoning Parts I and II, The Teaching Company 2002
• Bullemore, Thomas, * The Pragmatic Problem of Induction.
External links[edit]
Wikiquote has quotations related to: Deductive reasoning
Look up deductive reasoning in Wiktionary, the free dictionary.
Wikiversity has learning resources about Deductive Logic
|
{"url":"https://static.hlt.bme.hu/semantics/external/pages/logikai_form%C3%A1t%C3%B3l/en.wikipedia.org/wiki/Deductive_reasoning.html","timestamp":"2024-11-14T10:29:15Z","content_type":"text/html","content_length":"115868","record_id":"<urn:uuid:6ab324ca-9e2e-485e-95e4-ec87aabf279e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00563.warc.gz"}
|
st: Re: compute a variable with the same formula for each year
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: Re: compute a variable with the same formula for each year
From "Joseph Coveney" <[email protected]>
To <[email protected]>
Subject st: Re: compute a variable with the same formula for each year
Date Fri, 12 Oct 2012 05:21:16 +0900
Urbain Thierry YOGO wrote:
I have a variable year going from 2000 to 2010. I want to compute the
following formula for each year a=10exp(2t). But the point is that i
need the respective observations to constitute a variable, not
independent scalar. I have tried the following code:
egen t=group(year)
forvalue t=1(1)11{
gen y`t'=10*exp(2*`t')
gen a=year
replace a=y1 if year==2000
replace a=y2 if year==2001
replace a=y3 if year==2002
.replace a=y11 if year==2010
Please is there a simple code to do this? thank you
Would it be something like this?
generate double a = 10 * exp(2 * (year - 1999))
Joseph Coveney
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"https://www.stata.com/statalist/archive/2012-10/msg00508.html","timestamp":"2024-11-10T05:48:03Z","content_type":"text/html","content_length":"9784","record_id":"<urn:uuid:19588aa6-765d-4d0a-ab57-54707793eae6>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00156.warc.gz"}
|
A CFA Level 2 Discussion About Yet Another Treynor Black Model Question
• Author
□ ::
The question is as follows:
You are using the Treynor-Black Model for security selection. The optimal portfolio consists 40% of actively managed portfolio with an expected return of 10%. The rest is allocated to the
indexed portfolio, which has an expected return of 6%. Your client’s requirements are for a portfolio that has an expected return of 10%. Which of the following is the ideal way to achieve
A Allocate 100% of the client’s funds to the active portfolio
B Allocate 100% of client’s funds to the indexed portfolio and borrow money to leverage
C Keep the optimal portfolio allocation the same and leverage the optimal portfolio by borrowing.
Based on my understanding, if 40/60 is the optimum portfolio which would only produce an expected return of 7.6%, you have to alter the weightings towards the actively managed portfolio, but
the point is if 40/60 is the optimum how would I be able to adjust it?
□ ::
@reena yup the right answer is C, but what does ‘leverage the optimal portfolio by borrowing’ means?
From my understanding, that means to invest the funds from shorting the 60% indexed portfolio to the actively managed portfolio?
How would you be able to benefit from diversifying in the indexed portfolio?
□ ::
No problem @vincentt!
Thanks @Gary, and welcome to the forum! 🙂 Are you taking Level 1 June then?
□ ::
@sophie lol
I understand the leverage return, but wouldn’t that unbalance the optimum portfolio? Since by borrowing more to invest in the actively managed portfolio, you are actually altering the optimum
weighting and towards the actively managed.
□ ::
ahhh!!! that’s true! I keep having the mindset of the SML model or the ones from level 1.
if investor wants higher return or riskier portfolio, move up the line (towards the right) so you’ll get less risk-free portfolio and more riskier portfolio hence higher return.
thanks @sophie
□ ::
@sophie is right, the portfolio would still remain the same balance of 40/60 but the larger value of investment would mean the 7.6% return on the 150% of the initial value would be closer to
10% of the original value of the investment.
□ ::
C? I assume short positions are possible?
□ ::
Sorry @vincentt, I couldn’t resist this… :))
□ ::
@vincentt – I’d think leverage here means borrowing money to fund the optimal portfolio, which would amplify it’s returns.
Compare a case where an investor borrowed 50% of the money to fund an initial $100 optimal portfolio. And say the optimal portfolio appreciated by 10%.
Unleveraged return = ($110-$100) / $100 x 100 = 10%
Leveraged return = ($110-$100) / $50 x 100= 20%
His leveraged return doubled when he borrowed 50%. Of course the opposite is true if the price fell. Note that this simple example has not adjusted for the cost of borrowing that money, which
would slightly reduce the leveraged return %.
□ ::
Hmm… I would have thought the optimum portfolio remains the same. You are not changing the composition % of it by borrowing the money. you are just amplifying potential return (upsides and
downsides) of it by borrowing money.
□ ::
Thanks @Sophie I am sitting the Level 1 in December thought i’d start early. Some of it is familiar but i don’t want to get too complacent .
Ps Love that Animated Gif i may have to borrow for work
• Author
• You must be logged in to reply to this topic.
|
{"url":"https://300hours.com/f/cfa/level-2/t/yet-another-treynor-black-model-question/","timestamp":"2024-11-11T07:50:35Z","content_type":"text/html","content_length":"277028","record_id":"<urn:uuid:aeb87c25-9210-427b-bfe0-9aab72dff9e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00415.warc.gz"}
|
Voronezh Winter Mathematical Schools: Dedicated to Selim Kreinsearch
Item Successfully Added to Cart
An error was encountered while trying to add the item to the cart. Please try again.
Please make all selections above before adding to cart
Voronezh Winter Mathematical Schools: Dedicated to Selim Krein
Vladimir Lin : Technion—Israel Institute of Technology, Haifa, Israel
eBook ISBN: 978-1-4704-3395-6
Product Code: TRANS2/184.E
List Price: $165.00
MAA Member Price: $148.50
AMS Member Price: $132.00
Click above image for expanded view
Voronezh Winter Mathematical Schools: Dedicated to Selim Krein
Vladimir Lin : Technion—Israel Institute of Technology, Haifa, Israel
eBook ISBN: 978-1-4704-3395-6
Product Code: TRANS2/184.E
List Price: $165.00
MAA Member Price: $148.50
AMS Member Price: $132.00
• American Mathematical Society Translations - Series 2
Advances in the Mathematical Sciences
Volume: 184; 1998; 263 pp
MSC: Primary 35; 47; 58
This volume is devoted to the 25-year-old Voronezh Winter Mathematical School and to the scientific work of its founder, Selim Krein. The Voronezh Winter Mathematical School was a unique annual
event in the scientific life of the former Soviet Union. Over the years it attracted thousands of mathematicians, from undergraduates to world-renowned experts, and played a major role in
spreading information about cutting edge results of mathematical research, triggering cooperation and educating new generations of mathematicians. The articles in this book, written by prominent
mathematicians and former lecturers and participants of the school, cover a wide range of subjects in analysis and geometry, including global analysis, harmonic analysis, function theory,
operator theory, spectral theory, dynamical systems, mathematical physics, homogenization, algebraic geometry, differential geometry, and geometric analysis.
Researchers and advanced graduate students in analysis, geometry, and mathematical physics.
□ Chapters
□ Genrich Belitskii and Vadim Tkachenko — Fredholm property of functional equations with affine transformations of argument
□ Yurij M. Berezansky — Construction of generalized translation operators from the system of Appell characters
□ Dan Burghelea, Leonid Friedlander and Thomas Kappeler — Witten deformation of the analytic torsion and the Reidemeister torsion
□ Yuri L. Daletskiĭ — Formal operator power series and the noncommutative Taylor formula
□ Gerd Dethloff, Stepan Orevkov and Mikhail Zaidenberg — Plane curves with a big fundamental group of the complement
□ Buma Fridman, Peter Kuchment, Daowei Ma and Vassilis G. Papanicolaou — Solution of the linearized inverse conductivity problem in a half space via integral geometry
□ Mark Gelfand and Ilya M. Spitkovsky — Almost periodic factorization: Applicability of the division algorithm
□ Vladimir Ya. Lin and Mikhail Zaidenberg — Liouville and Carathéodory coverings in Riemannian and complex geometry
□ Mikhail Lyubich — How big is the set of infinitely renormalizable quadratics?
□ Yuri Lyubich — Linear operators in one-dimensional extensions of Banach spaces
□ Stephen Montgomery-Smith and Evgueni Semenov — Random rearrangements and operators
□ Vladimir I. Ovchinnikov — On reiteration theorems
□ Alexander Pankov — Statistical homogenization theorem for multivalued monotone elliptic operators
□ Isaac Pesenson — Reconstruction of Paley-Wiener functions on the Heisenberg group
□ Mikhail Shubin — De Rham theorem for extended $L^2$-cohomology
□ Michael Solomyak — On the discrete spectrum of a class of problems involving the Neumann Laplacian in unbounded domains
□ Nahum Zobin — Szegő-type extremal problems
• Permission – for use of book, eBook, or Journal content
• Book Details
• Table of Contents
• Requests
Advances in the Mathematical Sciences
Volume: 184; 1998; 263 pp
MSC: Primary 35; 47; 58
This volume is devoted to the 25-year-old Voronezh Winter Mathematical School and to the scientific work of its founder, Selim Krein. The Voronezh Winter Mathematical School was a unique annual event
in the scientific life of the former Soviet Union. Over the years it attracted thousands of mathematicians, from undergraduates to world-renowned experts, and played a major role in spreading
information about cutting edge results of mathematical research, triggering cooperation and educating new generations of mathematicians. The articles in this book, written by prominent mathematicians
and former lecturers and participants of the school, cover a wide range of subjects in analysis and geometry, including global analysis, harmonic analysis, function theory, operator theory, spectral
theory, dynamical systems, mathematical physics, homogenization, algebraic geometry, differential geometry, and geometric analysis.
Researchers and advanced graduate students in analysis, geometry, and mathematical physics.
• Chapters
• Genrich Belitskii and Vadim Tkachenko — Fredholm property of functional equations with affine transformations of argument
• Yurij M. Berezansky — Construction of generalized translation operators from the system of Appell characters
• Dan Burghelea, Leonid Friedlander and Thomas Kappeler — Witten deformation of the analytic torsion and the Reidemeister torsion
• Yuri L. Daletskiĭ — Formal operator power series and the noncommutative Taylor formula
• Gerd Dethloff, Stepan Orevkov and Mikhail Zaidenberg — Plane curves with a big fundamental group of the complement
• Buma Fridman, Peter Kuchment, Daowei Ma and Vassilis G. Papanicolaou — Solution of the linearized inverse conductivity problem in a half space via integral geometry
• Mark Gelfand and Ilya M. Spitkovsky — Almost periodic factorization: Applicability of the division algorithm
• Vladimir Ya. Lin and Mikhail Zaidenberg — Liouville and Carathéodory coverings in Riemannian and complex geometry
• Mikhail Lyubich — How big is the set of infinitely renormalizable quadratics?
• Yuri Lyubich — Linear operators in one-dimensional extensions of Banach spaces
• Stephen Montgomery-Smith and Evgueni Semenov — Random rearrangements and operators
• Vladimir I. Ovchinnikov — On reiteration theorems
• Alexander Pankov — Statistical homogenization theorem for multivalued monotone elliptic operators
• Isaac Pesenson — Reconstruction of Paley-Wiener functions on the Heisenberg group
• Mikhail Shubin — De Rham theorem for extended $L^2$-cohomology
• Michael Solomyak — On the discrete spectrum of a class of problems involving the Neumann Laplacian in unbounded domains
• Nahum Zobin — Szegő-type extremal problems
Permission – for use of book, eBook, or Journal content
Please select which format for which you are requesting permissions.
|
{"url":"https://bookstore.ams.org/TRANS2/184","timestamp":"2024-11-02T11:40:28Z","content_type":"text/html","content_length":"79049","record_id":"<urn:uuid:9255fea4-4e10-46d7-a44e-d158088209f1>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00720.warc.gz"}
|
Section: Scientific Foundations
Gröbner basis and triangular sets
Participants : J.C. Faugère, G. Renault, M. Safey El Din, P.J. Spaenlehauer, D. Wang, C. Mou, J. Svartz.
Let us denote by $K\left[{X}_{1},...,{X}_{n}\right]$ the ring of polynomials with coefficients in a field $K$ and indeterminates ${X}_{1},...,{X}_{n}$ and $S=\left\{{P}_{1},...,{P}_{s}\right\}$ any
subset of $K\left[{X}_{1},...,{X}_{n}\right]$. A point $x\in {ℂ}^{n}$ is a zero of $S$ if ${P}_{i}\left(x\right)=0\phantom{\rule{0.277778em}{0ex}}\phantom{\rule{0.166667em}{0ex}}\phantom{\rule
{0.277778em}{0ex}}i\in \left[1...s\right]$.
The ideal $ℐ=〈{P}_{1},...,{P}_{s}〉$ generated by ${P}_{1},...,{P}_{s}$ is the set of polynomials in $K\left[{X}_{1},...,{X}_{n}\right]$ constituted by all the combinations $\sum _{k=1}^{R}{P}_{k}
{U}_{k}$ with ${U}_{k}\in ℚ\left[{X}_{1},...,{X}_{n}\right]$. Since every element of $ℐ$ vanishes at each zero of $S$, we denote by ${V}_{C}\left(S\right)={V}_{C}\left(I\right)=\left\{x\in {C}^{n}\
phantom{\rule{0.277778em}{0ex}}|\phantom{\rule{0.277778em}{0ex}}p\left(x\right)=0\phantom{\rule{0.277778em}{0ex}}\forall p\in ℐ\right\}$ (resp. ${V}_{R}\left(S\right)={V}_{R}\left(I\right)={V}_{ℂ}\
left(I\right)\bigcap {ℝ}^{n}$), the set of complex (resp. real) zeros of $S$, where $R$ is a real closed field containing $K$ and $C$ its algebraic closure.
One Gröbner basis' main property is to provide an algorithmic method for deciding if a polynomial belongs or not to an ideal through a reduction function denoted "$\text{Reduce}$" from now.
If $G$ is a Gröbner basis of an ideal $ℐ\subset ℚ\left[{X}_{1},...,{X}_{n}\right]$ for any monomial ordering $<$.
• (i)
a polynomial $p\in ℚ\left[{X}_{1},...,{X}_{n}\right]$ belongs to $ℐ$ if and only if $\text{Reduce}\left(p,G,<\right)=0$,
Gröbner bases are computable objects. The most popular method for computing them is Buchberger's algorithm ( [47] , [46] ). It has several variants and it is implemented in most of general computer
algebra systems like Maple or Mathematica. The computation of Gröbner bases using Buchberger's original strategies has to face to two kind of problems :
• (A) arbitrary choices : the order in which are done the computations has a dramatic influence on the computation time;
For problem (A), J.C. Faugère proposed ([4] - algorithm ${F}_{4}$) a new generation of powerful algorithms ([4] ) based on the intensive use of linear algebra technics. In short, the arbitrary
choices are left to computational strategies related to classical linear algebra problems (matrix inversions, linear systems, etc.).
For problem (B), J.C. Faugère proposed ([3] ) a new criterion for detecting useless computations. Under some regularity conditions on the system, it is now proved that the algorithm do never perform
useless computations.
A new algorithm named ${F}_{5}$ was built using these two key results. Even if it still computes a Gröbner basis, the gap with existing other strategies is consequent. In particular, due to the range
of examples that become computable, Gröbner basis can be considered as a reasonable computable object in large applications.
We pay a particular attention to Gröbner bases computed for elimination orderings since they provide a way of "simplifying" the system (equivalent system with a structured shape). A well known
property is that the zeros of the first non null polynomial define the Zariski closure (classical closure in the case of complex coefficients) of the projection on the coordinate's space associated
with the smallest variables.
Such kinds of systems are algorithmically easy to use, for computing numerical approximations of the solutions in the zero-dimensional case or for the study of the singularities of the associated
variety (triangular minors in the Jacobian matrices).
Triangular sets have a simplier structure, but, except if they are linear, algebraic systems cannot, in general, be rewritten as a single triangular set, one speaks then of decomposition of the
systems in several triangular sets.
Table 1.
Lexicographic Gröbner bases Triangular sets
$\left\{\begin{array}{c}f\left({X}_{1}\right)=0\hfill \\ {f}_{2}\left({X}_{1},{X}_{2}\right)=0\hfill \\ ⋮\hfill \\ {f}_{{k}_ $\left\{\begin{array}{c}{t}_{1}\left({X}_{1}\right)=0\hfill \\ {t}_{2}\
{2}}\left({X}_{1},{X}_{2}\right)=0\hfill \\ {f}_{{k}_{2}+1}\left({X}_{1},{X}_{2},{X}_{3}\right)=0\hfill \\ ⋮\hfill \\ {f}_{{k} left({X}_{1},{X}_{2}\right)=0\hfill \\ ⋮\hfill \\ {t}_{n}\left({X}_
_{n-1}+1}\left({X}_{1},...,{X}_{n}\right)=0\hfill \\ ⋮\hfill \\ {f}_{{k}_{n}}\left({X}_{1},...,{X}_{n}\right)=0\hfill \end {1},...,{X}_{n}\right)=0\hfill \end{array}\right\$
Triangular sets appear under various names in the field of algebraic systems. J.F. Ritt ( [64] ) introduced them as characteristic sets for prime ideals in differential algebra. His constructive
algebraic tools were adapted by W.T. Wu in the late seventies for geometric applications. The concept of regular chain (see [56] and [74] ) is adapted for recursive computations in a univariate way.
It provides a membership test and a zero-divisor test for the strongly unmixed dimensional ideal it defines. Kalkbrenner defined regular triangular sets and showed how to decompose algebraic
varieties as a union of Zariski closures of zeros of regular triangular sets. Gallo showed that the principal component of a triangular decomposition can be computed in $O\left({d}^{O\left({n}^{2}\
right)}\right)$ ($n$= number of variables, $d$=degree in the variables). During the 90s, implementations of various strategies of decompositions multiply, but they drain relatively heterogeneous
D. Lazard contributed to the homogenization of the work completed in this field by proposing a series of specifications and definitions gathering the whole of former work [1] . Two essential concepts
for the use of these sets (regularity, separability) at the same time allow from now on to establish a simple link with the studied varieties and to specify the computed objects precisely.
A remarkable and fundamental property in the use we have of the triangular sets is that the ideals induced by regular and separable triangular sets, are radical and equidimensional. These properties
are essential for some of our algorithms. For example, having radical and equidimensional ideals allows us to compute straightforwardly the singular locus of a variety by canceling minors of good
dimension in the Jacobian matrix of the system. This is naturally a basic tool for some algorithms in real algebraic geometry [2] , [7] , [67] .
In 1993, Wang [70] proposed a method for decomposing any polynomial system into fine triangular systems which have additional properties such as the projection property that may be used for solving
parametric systems (see Section 3.4.2 ).
Triangular sets based techniques are efficient for specific problems, but the implementations of direct decompositions into triangular sets do not currently reach the level of efficiency of Gröbner
bases in terms of computable classes of examples. Anyway, our team benefits from the progress carried out in this last field since we currently perform decompositions into regular and separable
triangular sets through lexicographical Gröbner bases computations.
|
{"url":"https://radar.inria.fr/report/2011/salsa/uid19.html","timestamp":"2024-11-08T23:51:36Z","content_type":"text/html","content_length":"53227","record_id":"<urn:uuid:1ae916b8-a703-407a-b0b7-af7f97f3feef>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00852.warc.gz"}
|
2 Digit Multiplication Anchor Chart
Discovering multiplication soon after counting, addition, and subtraction is ideal. Youngsters understand arithmetic by way of a organic progression. This advancement of understanding arithmetic is
usually the subsequent: counting, addition, subtraction, multiplication, lastly section. This document brings about the query why learn arithmetic in this particular sequence? More importantly, why
find out multiplication soon after counting, addition, and subtraction just before section?
These facts respond to these questions:
1. Kids understand counting initially by associating graphic things because of their fingers. A perceptible case in point: How many apples are there any within the basket? Much more abstract
instance is how old are you?
2. From counting amounts, the subsequent plausible phase is addition combined with subtraction. Addition and subtraction tables can be extremely helpful teaching helps for youngsters as they are
visual resources producing the cross over from counting simpler.
3. Which should be learned up coming, multiplication or section? Multiplication is shorthand for addition. At this stage, young children have got a company grasp of addition. Therefore,
multiplication may be the up coming logical form of arithmetic to understand.
Assess essentials of multiplication. Also, look at the essentials the way you use a multiplication table.
Let us overview a multiplication example. Utilizing a Multiplication Table, multiply a number of times 3 and acquire a solution 12: 4 x 3 = 12. The intersection of row about three and line 4 of a
Multiplication Table is a dozen; a dozen is definitely the respond to. For the kids starting to find out multiplication, this is certainly simple. They could use addition to eliminate the issue thus
affirming that multiplication is shorthand for addition. Instance: 4 by 3 = 4 4 4 = 12. It is an excellent overview of the Multiplication Table. An added benefit, the Multiplication Table is graphic
and demonstrates to studying addition.
Exactly where will we commence learning multiplication utilizing the Multiplication Table?
1. First, get knowledgeable about the table.
2. Start out with multiplying by one particular. Commence at row # 1. Move to line number 1. The intersection of row 1 and column the first is the answer: one.
3. Repeat these methods for multiplying by a single. Grow row a single by posts one particular via twelve. The solutions are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12 respectively.
4. Replicate these methods for multiplying by two. Grow row two by columns a single by way of five. The replies are 2, 4, 6, 8, and 10 respectively.
5. Allow us to jump in advance. Recurring these steps for multiplying by 5. Increase row 5 various by posts a single through a dozen. The answers are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, and
60 correspondingly.
6. Now allow us to increase the amount of issues. Perform repeatedly these actions for multiplying by three. Increase row a few by columns one particular via a dozen. The responses are 3, 6, 9, 12,
15, 18, 21, 24, 27, 30, 33, and 36 respectively.
7. If you are confident with multiplication thus far, consider using a analyze. Solve the following multiplication difficulties in your thoughts then assess your answers for the Multiplication
Table: grow half a dozen as well as 2, increase 9 and a few, increase a single and eleven, increase a number of and four, and increase several and two. The issue responses are 12, 27, 11, 16, and
14 correspondingly.
If you got 4 out of 5 issues appropriate, create your personal multiplication tests. Compute the replies in your thoughts, and look them utilizing the Multiplication Table.
|
{"url":"https://www.printablemultiplication.com/2-digit-multiplication-anchor-chart/","timestamp":"2024-11-03T23:06:23Z","content_type":"text/html","content_length":"61772","record_id":"<urn:uuid:5a3b8e86-901f-42fb-82a4-59c7559f0208>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00807.warc.gz"}
|
Geometallurgy Course & Geometallurgical Modelling - 911Metallurgist
Basic Geometallurgy Short Course for mineral processing applications
Geometallurgy Training Course by SAGMilling.com
Geometallurgy and grinding
• It is often desirable to be able to load ore hardness information into the mine block model.
• Allows the mining engineers to better schedule ore delivery to the plant, and to run more sophisticated net present value calculations against ore blocks.
• Requires hundreds of samples from drill holes distributed across the orebody.
Geometallurgy and plant recovery
• It is often desirable to be able to load leaching information into the mine block model.
• Allows the mining engineers to run more sophisticated net present value calculations against ore blocks.
• Requires hundreds of samples from drill holes distributed across the orebody.
Geometallurgist -> Geologist + Metallurgist
geometallurgical Block modelling
Block model
Geologic systems can be modelled as a structure of equally sized blocks arranged in a regular grid.
Interpolation is the mathematical method used to estimate a parameter in the spaces between known positions with known values.
• A simple interpolation method could be a linear weighted average of the two nearest points.
• Geo-statisticians use more complex methods, such as kriging.
• Consider the same 1-dimensional model with measurements at points A&B.
• Try an inverse distance-squared weighting.
Geometallurgy Book
• Consider a 3-dimensional model with measurements at points A,B,C,D
• A ‘polygon’ displays the rock unit that X belongs to.
Interpolation by kriging
• The most common interpolation is some form of kriging.
• Kriging uses nonlinear, directional interpolation constrained by domains.
Check the domains
• Domains determined for assay data may not apply for process parameters
• Geostatisticians should re-domain the process data to verify.
Example: Grade may be determined by alteration, but grindability may be determined by tectonic stress fields.
• Example grinding data, top from a ‘hematite’ domain and bottom from a ‘magnetite’ domain.
• Shapes are different – confirms each must be interpolated separately.
Example domain definitions
geometallurgical modelling
A variogram plots the average difference between two arbitrary points and the distance between the points.
• Warning: oversimplified!
• Plotting the example grade difference VS distance from earlier slide
• Slightly more correct version
• Y-axis shows variance
• The population variance is shown as the “sill”
A published variogram from Adanac Moly suggests that the maximum spacing between samples should be 200 m or less.
How many samples?
Area of influence of a sample
• How “close by” must a sample be to have importance in geostatistics.
• Observed as the location of the“sill” of a variogram of grindability versus distance.
• So you should know the variogram result of a geometallurgy program to plan a geometallurgy program.
Additive parameters
• Geostatistics only works if the values you are “mixing” have a linear mixing characteristic.
• A parameter is “additive” if you can combine two samples of a known value, and the blend test results in the arithmetic average of the two.
– Eg. mix one sample “10” and a second sample “20”
– The blend should give a result of “15”
• Values suitable for block modelling
– Not all grindability results are suitable for block model interpolation, they must be “additive” • e.g. mixing two samples with “10” and “20” should give “15”. Work index, SGI and A×b results
do not have this property.
– Specific energy consumption is generally additive, so Etotal, ESAG and/or Eball can be interpolated.
Additivity of process parameters
A variety of process models exist. You will need to evaluate which models are useful for your mine.
– The process models need to make useful predictions of process behaviour.
– The process models need to have additive parameters suitable for geometallurgy.
Geometallurgy program
Procedure for a geometallurgy program:
• collect samples distributed around the orebody
• test in the laboratory, use at least 2 methods
• run all samples through comminution models
• distribute specific energy values into block model
• run geostatistical checks (variograms) and repeat (do a second, in-fill, sample collection program)
• provide mining engineers with a model populated with grindability values; run annual production forecasts.
The block model
A block model containing geometallurgical data will include:
• grindability information suitable for estimating the maximum plant throughput,
• recovery information suitable for estimating the metal production,
• (flotation plants) concentrate grade predictions for smelter contracts.
Grindability models
• Specific energy consumption models determine how much energy is required to grind a sample.
– E given in kW·h/t {alternative notation: kW/(t/h)}
• Mill power models determine the amount of grinding power available
– P given in kW
• Dividing P by E gives the circuit throughput
– t/h = kW ÷ (kW·h/t)
Throughput predictions
• Grindability, in the form of specific energy, will be interpolated for a block.
– in this example, ESAG = 6.0 kWh/t
• The metallurgists will supply the typical power draw of the SAG mill (at the pinion).
– Yanacocha is about 14,000 kW
• Throughput = 14,000 kW ÷ 16.0 kW = 875 t/h
Recovery models
Net Smelter Return prediction
• The mining engineer can estimate the revenue of a block using the recovery equation(s) and the block model parameters.
– Gold recovery R is known by interpolation.
– Revenue=block mass (t) × grade (g/t) × recovery
• If there are penalty elements in the block model, is may be necessary to estimate their recovery, too.
Block value prediction
• Determine the value of a block
– Revenue
include penalties, if applicable
– Operating costs ($/t)
• include mill power draw, kWh/t × t/h × $/kWh
• include other operating costs
– Processing time can be included as a cost penalty
• revenue form harder blocks worth less than revenue from softer blocks.
New cut-off calculation
• The variable revenue benefits blocks with good recovery characteristics.
• The variable grindability benefits blocks with lower power consumption.
• Applying a penalty for difficult to process blocks benefits easy to process blocks.
Benefits of geometallurgy
• Permits future production to be accurately predicted. Future sales can be estimated.
• Identifies “problem” areas within the mine where throughput may be low or recovery may suffer.
• Allows better optimized mine plans with more accurate NPV predictions per block.
Variable mining rate
• Operate the mine to keep the SAG mills full.
• A grinding geometallurgy database allows mine planners to schedule more ore to the mill.
– Do not plan a “nominal” throughput rate for the whole mine life.
– mine more in years with soft ore, and
– mine less in years with hard ore.
– If possible, defer hard ore until later in the mine life.
Variable gold production
• The gold production in each year of a mine life will be different, and can be calculated from
– block gold grade,
– block gold recovery,
– block throughput calculated from the grindability.
• The pit optimization software will pull the pit towards softer ore with better recovery.
Summary of benefits
• The pit shape and equipment fleet will change due to the new NPV equations,
• the pit will probably be mined more rapidly,
• production is advanced into earlier mine years,
• a more optimal pit shape will all result from a fully applied geometallurgy program, and
• no nasty surprises.
Stages of a geometallurgy program
• Decide which process parameters to collect
– plant surveys, fitting models to plant data
• Conduct a drilling program to obtain samples of future ore
• Conduct a laboratory program determining parameters for samples
• Supply geostatisticians the parameters and their spatial locations
• Interpolate the parameters into the block model
– check variograms, conduct in-fill drilling and recycle
• Generate a mine plan with a variable ore throughput
• Generate a cash flow with a variable gold production rate
Cost of a geometallurgy program
• Plant surveys, engineering time fitting models to plant data
• drilling program to obtain samples of future ore
• laboratory program determining parameters for samples
• Geostatistician time to interpolate parameters into the block model
– check variograms, conduct in-fill drilling and recycle
• Mine engineering time to generate a mine plan
• Sustaining capital cost of mine fleet needed to support variable throughput rates
Geometallurgy for scoping studies
• Early project evaluation will not use a full program:
– Use about 5-15 intervals of half-core (from the resource drilling program).
– Do laboratory work for one set of process models.
– Unlikely enough data will exist to do variograms or kriging. Work with cumulative distributions instead of geometallurgy.
Geometallurgy for prefeasibility
• Collect at least 50 more half-core samples from the resource drilling.
– The quantity should be sufficient to permit creation of variograms.
– Do the first circuit of the geometallurgy program stages, but exclude the recycle.
– Determine how much of the orebody is unrepresented by samples.
– Do the variable rate mine plan and gold production schedule.
Geometallurgy for full feasibility
• Using the variograms from pre-feasibility, determine how many more samples are needed
– These extra samples should be dedicated metallurgical drilling. Use the whole core for a greater variety of metallurgical tests.
• Do the “recycle” loop and determine updated variable rate mine plans and gold production.
Geometallurgy for operation
• Do the program indicated for pre-feasibility and feasibility to establish the initial mine plans.
• Do annual drilling to keep extending into the next 5 years of future ore.
• Revise the process models (did they work?).
• Revise the mine plans based on the updated geometallurgy database.
Examples of geometallurgy
• Los Bronces, Confluencia (Chile)
– Design of pit for an expansion project included plant recovery and ore grindability parameters.
• Collahuasi (Chile)
– Monthly throughput predictions are within 5% of actual.
• Freeport-McMoRan study
– Geometallurgical database used to compare SAG milling to HPGR in a detailed study.
• Andina, Piuquenes tailings (Chile)
– Recovery and regrind energy for re-mining a tailings pond.
Escondida variograms
Escondida variograms
Examples of geometallurgy
• Adanac Molybdenum, Canada
– Flotation model using interpolated parameters:
• k, Rmax value for molybdenum
• k, Rmax value for non-sulphide gangue
– Different models run at different grind P80 sizes
• k, Rmax values change at each P80.
• Grade proxies and process mineralogy are often called geometallurgy, but they are different
– Grade proxy is where a process variable (eg. recovery) is closely related to a grade (%Cu)
– Process mineralogy is a careful mapping of minerals (rather than elements)
• useful to predict recoveries, rate constants, etc.
Most important concept! ALL MODELS ARE WRONG, BUT SOME ARE USEFUL
Presented by: Alex Doll, Consultant of SAGmilling.com
– Brissette, M. & de Souza, H. (2012) `Metallurgical testing of iron ore from the Labrador Trough`, Mineral Resources Review Conference.
– Bulled, D. (2007) `Flotation circuit design for Adanac Moly Corp using a geometallurgical approach`, Proceedings of the 39th Canadian Mineral Processors Conference, Ottawa, Canada.
– Preece, R. (2006) `Use of point samples to estimate the spatial distribution of hardness in the Escondida porphyry copper deposit, Chile`, Proceedings of International Autogenous and
Semi-autogenous Grinding Technology 2006, eds.
Allan, M., Major, K., Flintoff, B., Klein, B. & Mular, A., Vancouver, Canada. Slide 51 2015-11-12
• References
– Rocha, M., Ulloa, C. & Díaz, M. (2012) `Geometallurgical modelling at Los Bronces mine`, Proceedings of the International Seminar on Geometallurgy (GEOMET 2012), eds. Barahona, C., Kuyvenhoven, R.
& Pinto, K., Santiago, Chile.
– Suazo, C., Hofmann, A., Aguilar, M., Tay, Y. & Bastidas, G. (2011) `Geometallurgical modelling of the Collahuasi grinding circuit for mining
planning`, Proceedings of the 8th International Mineral Processing Conference, eds. Kracht, W., Kuyvenhoven, R., Lynch-Watson, S. & Montes-Atenas, G., Santiago, Chile. Slide 52 2015-11-12
• References
– Suazo, C., Muñoz, C., & Mora, N. (2013) `The Collahuasi geometallurgical modelling and its application to maximizing value`, Proceedings of the 10th International Mineral Processing Conference,
eds. Álvarex, M., Doll, A., Kracht, W. & Kuyvenhoven, R., Santiago, Chile.
– NI43-101 report, Zafranal project
Adanac Moly variogram should be referenced as “Bulled et al, 2007”
The Escondida variograms should be references as “Preece, 2006”
|
{"url":"https://www.911metallurgist.com/blog/geometallurgy-course/","timestamp":"2024-11-15T04:09:08Z","content_type":"text/html","content_length":"178150","record_id":"<urn:uuid:3f04d4c6-ec1b-4bd6-8a2a-543f2a5211c2>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00696.warc.gz"}
|
Fanfiction ► - Love is a Funny Thing
Not open for further replies.
Mar 26, 2006
GASP! DUN DUN DUN! Looks like Kairi's about to go MIA! Great chappy True!
Mar 8, 2006
Oct 30, 2004
L337 kybldmstr said:
GASP! DUN DUN DUN! Looks like Kairi's about to go MIA! Great chappy True!
lol looks that way to me
but me thinks the guy in the black coat may b ansem . .i don't know y but it seems like him even though he's long dead
Dec 24, 2005
This Fic is soooooo cool! I really like it! It has action, romance and...... SUSPENCE!!!! BTW, r u sure u and S&K aren't related? I also read her fic and urs is as good as hers. Nywy, keep the story
going. It's realy good!
Mar 26, 2006
Gamergirl89 said:
lol looks that way to me
but me thinks the guy in the black coat may b ansem . .i don't know y but it seems like him even though he's long dead
But what if the Mystery Man is Mickey? I mean, how else would Mickey know about the Mystery Man in the first place?
Mar 14, 2004
Great Chapter.... i'll be waiting for the next one...
May 27, 2005
lolz good chapter all though I kinda knew what was gonna happen when Mickey said "He's after something very special to you.." haha lolz
Feb 16, 2005
Hmm may it was Roxas.. Just a wild thought
Jan 7, 2006
That was a great chapter!
Oct 30, 2004
L337 kybldmstr said:
But what if the Mystery Man is Mickey? I mean, how else would Mickey know about the Mystery Man in the first place?
idk just a guess
anywayz will there be a new chapie 2night??
Dec 24, 2005
So, when's the next chapter?
Nov 26, 2005
Hi everyone, I'm a new fan of this fic. I love it soooo much I just started reading them yesterday. I didn't go to school today just so I could finish reading it *_*.
Anyways,..... hope the new chapter comes soon!! ^-^
Feb 9, 2006
Mar 8, 2006
Mar 19, 2006
the bedroom idea was just adorable...
i wonder what kairi will do....
Feb 16, 2005
Lol i miss the good old fashioned riku and kairi parings.
good chapter.
May 9, 2004
Hey guys! I'm really happy that you all liked the last chapter!
And welcome Kairi410! I'm sooo glad that you like my fic! ^_^
But as for that 'Mystery Man', you'll all just have to wait and see who it is!
Anyways, I'm gonna start the next chapter sometime today. Hopefully, it'll be up later on tonight because I won't be around tomorrow. I got a junior prom/semi thing to go to!
Nov 26, 2005
OOO I luv the suspense!! I shall be waiting for the next chappy
Jun 14, 2005
Great chapter! I've been busy playing KH2 so I was tied up in it.
901 posts!
Not open for further replies.
|
{"url":"https://www.khinsider.com/forums/index.php?threads/love-is-a-funny-thing.44639/page-16#post-1205104","timestamp":"2024-11-13T06:26:08Z","content_type":"text/html","content_length":"148904","record_id":"<urn:uuid:5d57e91b-97e9-4eff-87f7-7943e39e5bb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00048.warc.gz"}
|
4,136 research outputs found
From both theoretical and experimental points of view symmetric states constitute an important class of multipartite states. Still, entanglement properties of these states, in particular those with
positive partial transposition (PPT), lack a systematic study. Aiming at filling in this gap, we have recently affirmatively answered the open question of existence of four-qubit entangled symmetric
states with positive partial transposition and thoroughly characterized entanglement properties of such states [J. Tura et al., Phys. Rev. A 85, 060302(R) (2012)] With the present contribution we
continue on characterizing PPT entangled symmetric states. On the one hand, we present all the results of our previous work in a detailed way. On the other hand, we generalize them to systems
consisting of arbitrary number of qubits. In particular, we provide criteria for separability of such states formulated in terms of their ranks. Interestingly, for most of the cases, the symmetric
states are either separable or typically separable. Then, edge states in these systems are studied, showing in particular that to characterize generic PPT entangled states with four and five qubits,
it is enough to study only those that assume few (respectively, two and three) specific configurations of ranks. Finally, we numerically search for extremal PPT entangled states in such systems
consisting of up to 23 qubits. One can clearly notice regularity behind the ranks of such extremal states, and, in particular, for systems composed of odd number of qubits we find a single
configuration of ranks for which there are extremal states.Comment: 16 pages, typos corrected, some other improvements, extension of arXiv:1203.371
We classify, up to local unitary equivalence, local unitary stabilizer Lie algebras for symmetric mixed states into six classes. These include the stabilizer types of the Werner states, the GHZ state
and its generalizations, and Dicke states. For all but the zero algebra, we classify entanglement types (local unitary equivalence classes) of symmetric mixed states that have those stabilizers. We
make use of the identification of symmetric density matrices with polynomials in three variables with real coefficients and apply the representation theory of SO(3) on this space of
polynomials.Comment: 10 pages, 1 table, title change and minor clarifications for published versio
The symmetric Werner states for $n$ qubits, important in the study of quantum nonlocality and useful for applications in quantum information, have a surprisingly simple and elegant structure in terms
of tensor products of Pauli matrices. Further, each of these states forms a unique local unitary equivalence class, that is, no two of these states are interconvertible by local unitary
operations.Comment: 4 pages, 1 table, additional references in version 2, revised abstract and introduction in version 3, small clarifications for published version in version
We solve the open question of the existence of four-qubit entangled symmetric states with positive partial transpositions (PPT states). We reach this goal with two different approaches. First, we
propose a half-analytical-half-numerical method that allows to construct multipartite PPT entangled symmetric states (PPTESS) from the qubit-qudit PPT entangled states. Second, we adapt the algorithm
allowing to search for extremal elements in the convex set of bipartite PPT states [J. M. Leinaas, J. Myrheim, and E. Ovrum, Phys. Rev. A 76, 034304 (2007)] to the multipartite scenario. With its aid
we search for extremal four-qubit PPTESS and show that generically they have ranks (5,7,8). Finally, we provide an exhaustive characterization of these states with respect to their separability
properties.Comment: 5+4 pages, improved version, title slightly modifie
The striatum is widely viewed as the fulcrum of pathophysiology in Parkinson’s disease (PD) and L-DOPA-induced dyskinesia (LID). In these disease states, the balance in activity of striatal direct
pathway spiny projection neurons (dSPNs) and indirect pathway spiny projection neurons (iSPNs) is disrupted, leading to aberrant action selection. However, it is unclear whether countervailing
mechanisms are engaged in these states. Here we report that iSPN intrinsic excitability and excitatory corticostriatal synaptic connectivity were lower in PD models than normal; ​L-DOPA treatment
restored these properties. Conversely, dSPN intrinsic excitability was elevated in tissue from PD models and suppressed in LID models. Although the synaptic connectivity of dSPNs did not change in PD
models, it fell with ​L-DOPA treatment. In neither case, however, was the strength of corticostriatal connections globally scaled. Thus, SPNs manifested homeostatic adaptations in intrinsic
excitability and in the number but not strength of excitatory corticostriatal synapses
At variance with the starch-accumulating plants and most of the glycogen-accumulating cyanobacteria, Cyanobacterium sp. CLg1 synthesizes both glycogen and starch. We now report the selection of a
starchless mutant of this cyanobacterium that retains wild-type amounts of glycogen. Unlike other mutants of this type found in plants and cyanobacteria, this mutant proved to be selectively
defective for one of the two types of glycogen/starch synthase: GlgA2. This enzyme is phylogenetically related to the previously reported SSIII/SSIV starch synthase that is thought to be involved in
starch granule seeding in plants. This suggests that, in addition to the selective polysaccharide debranching demonstrated to be responsible for starch rather than glycogen synthesis, the nature and
properties of the elongation enzyme define a novel determinant of starch versus glycogen accumulation. We show that the phylogenies of GlgA2 and of 16S ribosomal RNA display significant congruence.
This suggests that this enzyme evolved together with cyanobacteria when they diversified over 2 billion years ago. However, cyanobacteria can be ruled out as direct progenitors of the SSIII/SSIV
ancestral gene found in Archaeplastida. Hence, both cyanobacteria and plants recruited similar enzymes independently to perform analogous tasks, further emphasizing the importance of convergent
evolution in the appearance of starch from a preexisting glycogen metabolism network.Peer Reviewe
Background and aims: Bone fragility is recognized as a complication of type 2 diabetes (T2D). However, the fracture risk in T2D is underestimated using the classical assessment tools. An expert panel
suggested the diagnostic approaches for the detection of T2D patients worthy of bone-active treatment. The aim of the study was to apply these algorithms to a cohort of T2D women to validate them in
clinical practice. Methods and results: The presence of T2D-specific fracture risk factors (T2D ≥ 10 years, ≥1 T2D complications, insulin or thiazolidinedione use, poor glycaemic control) was
assessed at baseline in 107 postmenopausal T2D women. In all patients at baseline and in 34 patients after a median follow-up of 60.2 months we retrospectively evaluated bone mineral density and
clinical and morphometric vertebral fractures. No patient was treated with bone-active drug. Following the protocols, 34 (31.8%) and 73 (68.2%) patients would have been pharmacologically and
conservatively treated, respectively. Among 49 patients without both clinical fractures and major T2D-related risk factors, who would have been, therefore, conservatively followed-up without
vertebral fracture assessment, only one showed a prevalent vertebral fracture (sensitivity 90%, negative predictive value 98%). The two patients who experienced an incident fracture would have been
pharmacologically treated at baseline. Conclusions: The clinical consensus recommendations showed a very good sensitivity in identifying T2D postmenopausal women at high fracture risk. Among those
with treatment indication as many as 13% of patients experienced an incident fracture, and, conversely, among those without treatment indication no incident fractures were observed
Adenosine deaminases acting on RNA (ADARs) are key proteins for hematopoietic stem cell self-renewal and for survival of differentiating progenitor cells. However, their specific role in myeloid cell
maturation has been poorly investigated. Here we show that ADAR1 is present at basal level in the primary myeloid leukemia cells obtained from patients at diagnosis as well as in myeloid U-937 and
THP1 cell lines and its expression correlates with the editing levels. Upon phorbol-myristate acetate or Vitamin D3/granulocyte macrophage colony-stimulating factor (GM-CSF)-driven differentiation,
both ADAR1 and ADAR2 enzymes are upregulated, with a concomitant global increase of A-to-I RNA editing. ADAR1 silencing caused an editing decrease at specific ADAR1 target genes, without, however,
interfering with cell differentiation or with ADAR2 activity. Remarkably, ADAR2 is absent in the undifferentiated cell stage, due to its elimination through the ubiquitin–proteasome pathway, being
strongly upregulated at the end of the differentiation process. Of note, peripheral blood monocytes display editing events at the selected targets similar to those found in differentiated cell lines.
Taken together, the data indicate that ADAR enzymes play important and distinct roles in myeloid cells
In a sample of 471 million BB events collected with the BABAR detector at the PEP-II e+e- collider we study the rare decays B -> K(*) l+ l-, where l+ l- is either e+e- or mu+mu-. We report results on
partial branching fractions and isospin asymmetries in seven bins of di-lepton mass-squared. We further present CP and lepton-flavor asymmetries for di-lepton masses below and above the J/psi
resonance. We find no evidence for CP or lepton-flavor violation. The partial branching fractions and isospin asymmetries are consistent with the Standard Model predictions and with results from
other experiments.Comment: 16 pages, 14 figures, accepted by Phys. Rev.
|
{"url":"https://core.ac.uk/search/?q=author%3A(C.%20D.%20Cenci)","timestamp":"2024-11-03T23:49:04Z","content_type":"text/html","content_length":"283425","record_id":"<urn:uuid:51cb4ca7-0139-450d-99a6-04210e4aa368>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00000.warc.gz"}
|
100,000 Minutes Converted to Hours: A Surprising Calculation!
Table of Contents :
When it comes to time, we often find ourselves dealing with various units. Whether you're tracking your daily activities or planning a project, understanding how to convert minutes to hours can be
quite handy. Today, we’re diving into the specifics of converting 100,000 minutes to hours. Let’s explore this surprising calculation and uncover some fascinating facts along the way! ⏰
Understanding Time Conversions
Time conversions are essential for efficient planning and management. Here’s a quick breakdown of time units:
Time Unit Conversion Factor
1 Minute 60 Seconds
1 Hour 60 Minutes
1 Day 24 Hours
1 Week 7 Days
These fundamental conversions will help us understand how to calculate the number of hours in 100,000 minutes.
The Calculation: How to Convert Minutes to Hours
To convert minutes to hours, the formula is quite straightforward:
[ \text{Hours} = \frac{\text{Minutes}}{60} ]
So, for our calculation:
[ \text{Hours} = \frac{100,000}{60} ]
Calculating this gives us:
[ \text{Hours} = 1,666.67 ]
This means that 100,000 minutes is equivalent to approximately 1,666.67 hours. Let’s break this down further.
What Does 1,666.67 Hours Mean?
1,666.67 hours can be translated into days. Since there are 24 hours in a day, we can calculate the total days as follows:
[ \text{Days} = \frac{1,666.67}{24} \approx 69.44 \text{ days} ]
This means that 100,000 minutes equals about 69 days and 10 hours. This can be particularly surprising if you hadn't thought about how much time that truly represents! 📅
Interesting Facts About Time
Understanding time conversions goes beyond simple calculations. Here are some interesting facts that might surprise you:
1. Time Management
Effective time management is key in today’s fast-paced world. Knowing how many hours or days you have available can significantly impact your productivity. ⏳
2. Historical Context
In ancient times, people did not have the same way of measuring time that we do today. The concept of hours and minutes was developed over time, with the first mechanical clocks appearing in the 13th
3. The Role of Time in Culture
Different cultures perceive time in various ways. For instance, in some cultures, punctuality is paramount, while in others, a more relaxed approach is taken. 🌍
Importance of Time in Daily Life
1. Scheduling
Understanding the conversion of minutes to hours allows us to schedule our activities more effectively. Whether it’s work-related meetings or personal engagements, precise time management can lead to
2. Productivity
Productivity often depends on how we allocate our time. By recognizing the value of each hour, we can optimize our efforts and maximize output.
3. Learning
For students, knowing how to break down study time into manageable chunks can significantly enhance learning. For instance, spending 1 hour studying might seem daunting, but breaking it down into
smaller increments can make it more feasible.
Important Note: Always remember that 100,000 minutes is a substantial amount of time. Use it wisely! ⌛
Converting Larger Time Units
If you’re curious about larger units, you can easily extend the calculations. For example, let's see how many weeks are in 100,000 minutes:
[ \text{Weeks} = \frac{1,666.67 \text{ hours}}{168 \text{ hours/week}} \approx 9.92 \text{ weeks} ]
So, we conclude that 100,000 minutes is about 9.92 weeks!
Practical Applications of Time Conversion
Time conversions can help in numerous scenarios. Here are a few practical applications:
1. Travel Planning
When planning a trip, understanding how many hours are needed for travel vs. activities can help create a better itinerary. 🌍✈️
2. Fitness Goals
Many fitness programs recommend weekly training hours. Converting your minutes of exercise into hours can help you track your progress toward your fitness goals effectively.
3. Budgeting Time
Just like budgeting money, budgeting time can help ensure that you allocate sufficient hours to projects, family, and self-care.
In summary, converting 100,000 minutes to hours is more than just a mathematical exercise; it’s a gateway to understanding how we manage our time. With 1,666.67 hours or about 69.44 days at your
disposal, consider how you might allocate those hours more effectively in your daily life. From planning trips to managing work tasks, effective time management can lead to increased productivity and
satisfaction. Remember, time is a precious resource—use it wisely! ⏲️
|
{"url":"https://tek-lin-pop.tekniq.com/projects/100-000-minutes-converted-to-hours-a-surprising-calculation","timestamp":"2024-11-08T09:12:22Z","content_type":"text/html","content_length":"85321","record_id":"<urn:uuid:2d029137-dedb-4fc5-b866-a93042f2c4d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00539.warc.gz"}
|
how to tell if a graph is a polynomial function
Learn how to determine if a polynomial function is even, odd, or neither. It's easiest to understand what makes something a polynomial equation by looking at examples and non examples as shown below.
To graph polynomial functions, find the zeros and their multiplicities, determine the end behavior, and ensure that the final graph … Donate … The definition can be derived from the definition of a
polynomial equation. Zeros are important because they are the points where the graph will intersect our touches the x- axis. Mes su savo partneriais saugosime ir (arba) turėsime prieigą prie
informacijos jūsų įrenginyje naudodami slapukus ir panašias technologijas, kad galėtume rodyti suasmenintas reklamas ir turinį, vertinti reklamas ir turinį, matuoti auditoriją ir kurti produktus. A
coefficient is the number in front of the variable. where a n, a n-1, ..., a 2, a 1, a 0 are constants. Given a graph of a polynomial function of degreeidentify the zeros and their multiplicities.
2x3+8-4 is a polynomial. If you're seeing this message, it means we're having trouble loading external resources on our website. How To Determine If A Graph Is A Polynomial Function, Nice Tutorial,
How To Determine If A Graph Is A Polynomial Function We call the term containing the highest power of x (i.e. If you know your quadratics and cubics very well, and if you remember that you're dealing
with families of polynomials … A quadratic function is a second degree polynomial function. The fundamental theorem of algebra tells us that. The graphs of all polynomial functions are what is called
smooth and continuous. To graph polynomial functions, find the zeros and their multiplicities, determine the end behavior, and ensure that the final graph has at most \(n−1\) turning points. The
highest power of the variable of P(x)is known as its degree. So going from your polynomial to your graph, you subtract, and going from your graph to your polynomial, you add. Use The Vertical Line
Test To Identify Functions College Algebra, Solved Determine Whether The Graph Of The Function Provid, Graphing And Finding Roots Of Polynomial Functions She Loves Math, Evaluate And Graph Polynomial
Functions Goals Algebra 2, Solved Determine If The Graph Can Represent A Polymomial, Analyzing Graphs Of Polynomial Functions Study Com, Solved Determine If The Graph Can Represent A Polynomial, 3 4
Graphs Of Polynomial Functions Mathematics Libretexts, Graphs Of Polynomials Article Khan Academy. This means that there are not any sharp turns and no holes or gaps in the domain. The graph of a
polynomial function changes direction at its turning points. Every polynomial function is continuous. „Yahoo“ yra „Verizon Media“ dalis. Graphing Polynomial Functions To sketch any polynomial
function, you can start by finding the real zeros of the function and end behavior of the function . Often, there are points on the graph of a polynomial function that are just too easy not to
calculate. Find the multiplicity of a zero and know if the graph crosses the x-axis at the zero or touches the x-axis and turns around at the zero. End behavior is another way of saying whether the
graph ascends or descends in either direction. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Informacija apie jūsų įrenginį ir
interneto ryšį, įskaitant jūsų IP adresą, Naršymas ir paieška naudojantis „Verizon Media“ svetainėmis ir programomis. Example: The Degree is 3 (the largest exponent … The sum of the multiplicities is
the degree of the polynomial function. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. 2 . Figure \(\PageIndex{1}\): Graph of \(f(x)
=x^3-0.01x\). Example: x 4 −2x 2 +x. To sketch any polynomial function, you can start by finding the real zeros of the function and end behavior of the function . If they start "down" (entering the
graphing "box" through the "bottom") and go "up" (leaving the graphing "box" through the "top"), they're positive polynomials, just … Section 5-3 : Graphing Polynomials. Some may be real, and any
imaginary … Galite bet kuriuo metu keisti savo pasirinkimus puslapyje „Jūsų privatumo valdymo funkcijos“. This means that graphing polynomial functions won’t have any edges or holes. Graphing
Quadratic Functions The graph of a quadratic function is called a parabola. These polynomial functions do have slope s, but the slope at any given point is different than the slope of another point
near-by. A polynomial function of degree n has at most n – 1 turning points. Notice, then, that a linear function is a first-degree polynomial: → f (x) = mx + b Polynomials of a degree higher than
one are nonlinear functions; that is, they do not plot graphically as a straight line. Instead, polynomials can have any particular shape depending on the number of terms and the coefficients of
those terms. Every polynomial function of degree n has n complex roots. As a result, sometimes the degree can be 0, which means the equation does not have any solutions or any instances of the graph
crossing the x-axis. The degree and leading coefficient of a polynomial always explain the end behavior of its graph: If the degree of the polynomial is even and the leading coefficient is positive,
both ends of the graph point up. The graph of the polynomial function y =3x+2 is a straight line. A univariate polynomial has one variable—usually x or t.For example, P(x) = 4x 2 + 2x – 9.In common
usage, they are sometimes just called “polynomials”.. For real-valued polynomials, the general form is: p(x) = p n x n + p n-1 x n-1 + … + p 1 x + p 0.. If we graph this polynomial as y = p (x), then
you can see that these are the values of x where y = 0. We have already said that a quadratic function is a polynomial of degree … Find the real zeros of the function. Graphs of polynomial functions
We have met some of the basic polynomials already. A Polynomial can be expressed in terms that only have positive integer exponents and the operations of addition, subtraction, and multiplication.
The following example uses the Intermediate Value Theorem to establish a fact that that most students take … Univariate Polynomial. Introduction; Counting & Cardinality; Operations & Algebraic …
Learn how to determine if a polynomial function is even, odd, or neither. Roots. A polynomial function is a function defined by evaluating a polynomial. Check whether it is possible to rewrite the
function in factored form to find the zeros. You can use a handy test called the leading coefficient test, which helps you figure out how the polynomial begins and ends. A leading term in a
polynomial function f is the term that contains the biggest exponent. Select the tab that you want to close. A function is NOT polynomial (and hence would have to be rational) if: it has a vertical
asymptote, a horizontal, or a hole. At this point we’ve hit all the \(x\)-intercepts and we know that the graph will increase without bound at the right end and so it looks like all we need to do is
sketch in an increasing curve. Polynomial functions. f(x) x 1 2 f(x) = 2 f(x) = 2x + 1 It is important to notice that the graphs of constant functions and linear functions are always straight lines.
In other words, they are the x-intercepts of the graph. Let's Practice:Some of the examples below are also discussed in the Graphing Polynomials lesson. Procedure for Finding Zeros of a Polynomial
Function a) Gather general information Determine the degree of the polynomial (gives the most zeros possible) Example: P(x) = 2x3 – 3x2 – 23x + 12 The degree is 3, so this polynomial will have at
most 3 zeros (or 3 x-intercepts). This is because for very large inputs, say 100 or 1,000, the leading term dominates the size of the output. State whether the given graph could be the graph of a
polynomial function. State whether the function is a polynomial function or not. The degree of a function determines the most number of solutions that function could have and the most number often
times a function will cross the x-axis. for all arguments x, where n is a nonnegative integer and a0, a1,a2, ..., an are constant coefficients. If the graph crosses the x -axis and appears almost
linear at the intercept, it is a single zero. The linear function f (x) = mx + b is an example of a first degree polynomial. It may help you visually to spread a small amount of the color on a towel
paper towel or piece of foil as i am doing here. Where a graph changes, either from increasing to decreasing, or from decreasing to increasing, is called a turning point. Graphs of polynomials:
Challenge problems Our mission is to provide a free, world-class education to anyone, anywhere. Curves with no breaks are called continuous. It is highly recommended that the reader review that
lesson to have a greater understanding of the graphs in these examples. Polynomial functions of degree 2 or more have graphs that do not have sharp corners; recall that these types of graphs are
called smooth curves. 2 . Search. However, IF you know that a graph is either of a polynomial or a rational function (setting aside the technicality that all polynomials ARE rational functions),
there are some "telltale signs." Degree of a polynomial function is very important as it tells us about the behaviour of the function P(x) when x becomes v… Check whether it is possible to rewrite
the function in factored form to find the zeros. In this section we are going to look at a method for getting a rough sketch of a general polynomial. If it is, state whether it could be a polynomial
function of degree 3, 4, or 5. Apply Descartes’ Rule of Signs - This rule will tell you the maximum number of positive real zeros and … Likewise, the graph of a polynomial function in which all
variables are to an odd power is symmetric about the origin. These can help you get the details of a graph correct. Predict the end behavior of the function. The general form of a quadratic function
is this: f (x) = ax 2 + bx + c, where a, b, and c are real numbers, and a≠ 0. Also, polynomials of one variable are easy to graph, as they have smooth and continuous lines. Figure \(\PageIndex{1}\)
shows a graph that represents a polynomial function and a graph that represents a function that is not a polynomial. Predict the end behavior of the function. To check to see if a graph is
symmetrical with respect to the x-axis, simply replace “y” with a “-y” and simplify.If P(x) = -(P(x)) than the graph is symmetrical with respect to ƒ(x) = anxn + an−1xn−1 + ... + a2x2 + a1x + a0. How
To Disable Antimalware Service Executable Wind... How To Determine If A Graph Is A Polynomial Function. One is the y-intercept, or f(0). We will then explore how to determine the number of possible
turning points for a given polynomial function of degree n. Read through the … Provided by the Academic Center for Excellence 5 Procedure for Graphing Polynomial Functions 5. Observe using graphs and
tables that a quantity increasing exponentially eventually exceeds a quantity increasing linearly, quadratically, or (more generally) as a polynomial function. Definition. ... how to determine if a
graph is a polynomial function, How To Dilute Hair Dye To Make It Lighter, How To Disable Ap Isolation On Arris Router, How To Dislocate Your Thumb Like Oliver Queen, How To Disassemble Xbox One
Elite Series 2 Controller, How To Do A Crossword Puzzle In Google Docs, How To Disable Microsoft Edge On Xbox One, How To Disable Pop Up Blocker In Chrome Android, How To Divide Improper Fractions By
Proper Fractions, How To Do A 1920s Hairstyle For Long Hair, How To Do 2 French Braids On Yourself For Beginners, How To Disable Touch Screen On Dell Xps 13, How To Determine Net Income From A
Balance Sheet. Let’s try finding a function that can represent the graph shown above. Sometimes there is also a small fracture. See how nice and smooth the curve is? The graph of a polynomial
function changes direction at its turning points. Courses. If you're seeing this message, it means we're having trouble loading external resources on our website. Kindergarten-Grade 12. Standards for
Mathematical Practice; Introduction. … First launch edge browser go to settings by pressing the app menu button three horizontal line on th... How to do a cartwheel practicing a cartwheel picture an
imaginary line extending straight in front of you. If it is, give its degree. In other words, it must be possible to write the expression without division. As we have already learned, the behavior of
a graph of a polynomial functionof the form f(x)=anxn+an−1xn−1+…+a1x+a0f(x)=anxn+an−1xn−1+…+a1x+a0 will either ultimately rise or fall as x increases without bound and will either rise or fall as
x decreases without bound. Polynomial functions also display graphs that have no breaks. Quadratics are degree-two polynomials and have one bump (always); cubics are degree-three polynomials and have
two bumps or none (having a flex point instead). So, the graph will continue to increase through this point, briefly flattening out as it touches the \(x\)-axis, until we hit the final point that we
evaluated the function at \(x = 3\). The degree of the polynomial is the power of x in the leading term. I have this modemrouter and i need to disable apclient isolation so that my chromecast will
work. Degree. Steps involved in graphing polynomial functions: 1 . With that being said, most students see the result as common sense since it says, geometrically, that the graph of a polynomial
function cannot be above the \(x\)-axis at one point and below the \(x\)-axis at another point without crossing the \(x\)-axis somewhere in between. A polynomial is generally represented as P(x). a n
x n) the leading term, and we call a n the leading coefficient. The only real information that we’re going to need is a complete list of all the zeroes (including multiplicity) for the polynomial. A
polynomial in the variable x is a function that can be written in the form,. Anna mcnulty 787314. Check out this tutorial and learn how to determine is a graph represents a linear, quadratic, or
exponential function! A function ƒ of one argument is called a polynomial function if it satisfies. How to Graph a Rational Function. Khan Academy is a 501(c)(3) nonprofit organization. Locate the
maximum or minimum points by using the TI-83 calculator under and the “3.minimum” or “4.maximum” functions. A rational function is an equation that takes the form y = N(x)/D(x) where N and D are
polynomials. The same is true for very small inputs, say –100 or –1,000. Norėdami leisti „Verizon Media“ ir mūsų partneriams tvarkyti jūsų asmens duomenis, pasirinkite „Sutinku“ arba pasirinkite
„Tvarkyti nuostatas“, jei norite gauti daugiau informacijos ir valdyti savo pasirinkimus. A polynomial function of degree \(n\) has at most \(n−1\) turning points. You can also divide polynomials
(but the result may not be a polynomial). A polynomial function is a function that can be expressed in the form of a polynomial. In this non-linear system, users are free to take whatever path
through the material best serves their needs. But then comes the observation that a non-polynomial function can have a graph that is symmetric about the y-axis or the origin (or neither) therefore
can be classified as even or odd (or neither) so just looking at the exponents breaks down. Identify a polynomial function. The shape of the graph of a first degree polynomial is a straight line
(although note that the line can’t be horizontal or vertical). The degree of a polynomial with only one variable is the largest exponent of that variable. If $ x_0$ is the root of the polynomial f(x)
with multiplicity k then: As you can see above, odd-degree polynomials have ends that head off in opposite directions. For example, f(x) = 2is a constant function and f(x) = 2x+1 is a linear
function. Find the zeros of a polynomial function. y=2x3+8-4 is a polynomial function. Graphs come in all sorts of shapes and sizes. Use the Leading Coefficient Test to find the end behavior of the
graph of a given polynomial function. In algebra, there are 3 basic types of graphs you'll see most often: linear, quadratic, and exponential. Roots and turning points. Soon after i. Virtual Nerd's
patent-pending tutorial system provides in-context information, hints, and links to supporting tutorials, synchronized with videos, each 3 to 7 minutes long. Polynomial functions also display graphs
that have no breaks. This guide also tells us how from the graph of a polynomial function alone, we can already determine a wide range of information about the polynomial function. If it is not, tell
why not. Find the real zeros of the function. This would likely cause pain and a click. Check for symmetry (check with respect to x-axis, y-axis, and origin) a. $$7(x - 1)^{11}(x + 1)^5 $$ Slope :
Only linear equations have a constant slope. Daugiau informacijos apie tai, kaip naudojame jūsų informaciją, rasite mūsų privatumo taisyklėse ir slapukų taisyklėse. Recall that we call this behavior
the e… How to read the grade level standards; Kindergarten. Learn how to determine if a polynomial function is even, odd, or neither. Finding the zeros of a polynomial from a graph The zeros of a
polynomial are the solutions to the equation p (x) = 0, where p (x) represents the polynomial. Still, the … These unique features make Virtual Nerd a viable alternative to private tutoring. Curves
with no breaks are called continuous. Steps involved in graphing polynomial functions: 1 . Jūsų IP adresą, Naršymas ir paieška naudojantis „ Verizon Media “ dalis ) points. Contains the biggest
exponent ( n−1\ ) turning points defined by evaluating a equation! Polynomial equation and *.kasandbox.org are unblocked is highly recommended that the domains *.kastatic.org and * are... Or holes
įskaitant jūsų IP adresą, Naršymas ir paieška naudojantis „ Media. Trouble loading external resources on our website function changes direction at its turning points Only one variable is power! Have
a constant slope +... + a2x2 + a1x + a0, mūsų. Function ƒ of one argument is called a turning point 4, or decreasing! Polynomial is generally represented as P ( x + 1 ) ^ { 11 } ( )! A1X + a0 must be
possible to write the expression without division term containing the highest power of output! Something a polynomial function of degree n has n complex roots are constant coefficients if a graph
represents a function! Zeros are important because they are how to tell if a graph is a polynomial function x-intercepts of the variable of P ( x + 1 ) $. Say 100 or 1,000, the graph shown above but
the result may not a! 'Re having trouble loading external resources on our website please make sure that the review... Straight line because they are the x-intercepts of the variable x is a 501 ( c
(... An−1Xn−1 +... + a2x2 + a1x + a0 basic polynomials already ’ s try a... Respect to x-axis, y-axis, and exponential is even, odd, or 5 decreasing to increasing is! Are constants to write the
expression without division the details of a polynomial function changes direction at its points! Take whatever path through the material best serves their needs review that lesson to have
constant... The multiplicities is the power of x in the form, graph correct of n! Turns and no holes or gaps in the form, means that graphing polynomial functions also display graphs that no. The
coefficients of those terms 3, 4, or from decreasing to increasing, is called polynomial... And *.kasandbox.org are unblocked the linear function f is the y-intercept, or f x! Degree of the graph
will intersect our touches the x- axis *.kastatic.org and *.kasandbox.org are unblocked the! A n-1,..., a 1, a 0 are constants an are constant coefficients display graphs that no... For getting a
rough sketch of a polynomial function changes direction at its turning points a graph is a degree! T have any edges or holes has at most n – 1 turning.! ” functions y-axis, and origin ) a, a1,
a2,..., an are constant.... Examples and non examples as shown below from your polynomial, you can also polynomials. Where a graph changes, either from increasing to decreasing, or neither
polynomials: Challenge problems mission. Form of a polynomial with Only one variable is the number of terms and the “ 3.minimum ” or 4.maximum... Kuriuo metu keisti savo pasirinkimus puslapyje „ jūsų
privatumo valdymo funkcijos “, n-1. Of \ ( n\ ) has at most n – 1 turning points,! Number of terms and the coefficients of those terms 3 ) nonprofit organization that are! To write the expression
without division its turning points in algebra, there are 3 basic of... ): graph of the graphs in these examples a 1, a 2, a,! Can help you get the details of a polynomial equation polynomial in the
leading coefficient Test to find the behavior... Start by finding the real zeros of the function is even, odd, or neither informaciją, rasite privatumo! A first degree polynomial function if it is a
polynomial function is a 501 ( )! Is symmetric about the origin features make Virtual Nerd a viable alternative to private.!
A Cord Of Three Strands Sign Hobby Lobby
David Smyrl Cause Of Death
Naga Chaitanya Father
Ruby Call Function With Block
Kayak Wild Camping Uk
|
{"url":"http://moestuininfo.com/crispy-fried-iihnno/c97dab-how-to-tell-if-a-graph-is-a-polynomial-function","timestamp":"2024-11-05T22:01:29Z","content_type":"text/html","content_length":"28155","record_id":"<urn:uuid:b3efce7d-d4b8-4e07-b4ba-7b07aeb29d85>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00252.warc.gz"}
|
Examining Your Image In A Convex Mirror Whose Radius Of Curvature Is 33.0 Cm, You Stand With The Tip
When examining your image in a convex mirror with a radius of curvature of 33.0 cm, you will notice that your image appears smaller than in reality and further away from the mirror than your actual
This is because convex mirrors are curved outward and have a wider field of view compared to flat mirrors.
Based on the given information, the distance between the mirror and the tip of your nose is 10.0 cm. Using the mirror equation, we can calculate the distance of the virtual image formed behind the
1/f = 1/do + 1/di
where f is the focal length (half of the radius of curvature), do is the object distance (distance between the object and the mirror), and di is the image distance (distance between the image and the
mirror). Substituting the values, we get:
1/16.5 = 1/10 + 1/di
Solving for di, we get a value of approximately 25.7 cm. This means that your virtual image is formed 25.7 cm behind the mirror and is smaller in size compared to your actual size.
To know more about convex mirror visit:-
- The coefficient of log(income) (0.88) suggests that a 1% increase in income is associated with a 0.88% increase in cigarette consumption, holding other variables constant.
- The coefficient of log(price) (-0.75) indicates that a 1% increase in cigarette prices is associated with a 0.75% decrease in cigarette consumption, holding other variables constant.
- The coefficient of educ (-0.50) implies that a one-year increase in education is associated with a 0.50 unit decrease in cigarette consumption, holding other variables constant.
- The coefficient of age (0.77) suggests that a one-year increase in age is associated with a 0.77 unit increase in cigarette consumption, holding other variables constant.
- The coefficient of age squared (-0.008) indicates that the relationship between age and cigarette consumption is not linear, and as age increases further, the rate of increase in cigarette
consumption slows down.
- The coefficient of restaurant (2.83) implies that individuals who have access to smoking in restaurants smoke, on average, 2.83 more cigarettes per week compared to those who do not have access.
(b) The R-squared measures the proportion of the total variation in cigarette consumption that is explained by the independent variables. In this case, the R-squared is not provided, so it cannot be
calculated or commented upon.
The Adjusted R-squared takes into account the number of variables and the sample size, providing a more reliable measure of model fit. Unfortunately, the Adjusted R-squared is also not provided, so
it cannot be calculated or commented upon.
The difference between R-squared and Adjusted R-squared lies in the penalization of the latter for including additional variables that may not significantly contribute to the model.
(c) To perform a 1% individual significance test for each slope coefficient, we need the t-statistics and the corresponding p-values for each coefficient. These values are not provided, so we cannot
perform the significance tests or comment on the results.
The null hypothesis (H0) for each significance test would be that the corresponding slope coefficient is equal to zero. The alternative hypothesis (Ha) would be that the slope coefficient is not
equal to zero.
(d) The confidence interval for each slope coefficient can be calculated using the provided standard errors and assuming a t-distribution. However, the standard errors are not provided in the given
format, so we cannot calculate the confidence intervals.
(e) To perform a 5% test of the overall significance of the regression model, we need the F-statistic and its corresponding p-value. Unfortunately, these values are not provided, so we cannot perform
the test or comment on the results.
The null hypothesis (H0) for the overall significance test would be that all slope coefficients are equal to zero, indicating that none of the independent variables have a significant effect on
cigarette consumption. The alternative hypothesis (Ha) would be that at least one of the slope coefficients is not equal to zero, indicating that at least one independent variable has a significant
effect on cigarette consumption.
To know more about constant visit :
|
{"url":"https://community.carbonfields.net/question-handbook/examining-your-image-in-a-convex-mirror-whose-radius-of-curv-tgjy","timestamp":"2024-11-13T01:54:50Z","content_type":"text/html","content_length":"109038","record_id":"<urn:uuid:94d569df-e6bf-4790-bf8b-f1586d07dcf8>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00165.warc.gz"}
|
A Quick Refresher on Logarithms - Pygmalion
A Quick Refresher on Logarithms
If we write
\[x^2 = 121\]
it is obvious that we find \(x\) by taking the square root of 121, but what if the exponent is unknown?
\[2^x = 8\]
The unknown value of the exponent is called the logarithm, in this example the logarithm of 8 to base 2. We write it like this:
\[x = \log_2{8}\]
In general, the solution of the equation \(a^x = c\) is called the logarithm of \(c\) to base \(a\).
\[a^x = c\]
\[x = \log_a{c}\]
Remember that the logarithm is the value of the exponent that you are looking for.
Note that no value of x satisfies the equation \(a^x = 0\). This implies that we must clarify in which cases the logarithm is undefined. The term from which the logarithm is taken must be greater
than \(0\). For example, the domain of the function
\[f(x) = \log_{10}{(x-3)}\]
is defined by
The logarithm to base 10 is very common and therefore has its own notation:
It is known as the decadic or decimal logarithm or simply the common logarithm.
Another special case is the logarithm to base \(e\). The irrational number \(e\) is named after the Swiss mathematician Leonard Euler and its value lies close to \(2.71828…\). This logarithm is
called the natural logarithm and also has its own notation:
The natural logarithm plays an important role in growth and decay processes as they occur in nature, e.g. bacterial growth and radioactive decay.
Logarithmic calculations
Dividing logarithms which have the same base changes the base of the logarithm:
If your pocket calculator or calculator app only knows decadic and natural logarithms you can use this transformation formula to calculate the logarithm to any base:
In essence, the calculation rules do not deviate from the exponential calculation rules. Remember that logarithms represent the flip side of exponential calculation, so to say.
An important rule makes it possible to transform exponents into factors, which will be of use when solving exponential equations:
This is equally valid when the exponent is a quotient:
The logarithm of a product is easy to determine with the product rule:
For quotients, we have the quotient rule:
Logarithms in nature
A logarithmic spiral or growth spiral is a spiral curve for which the distance to its center point increases by a constant factor on every turn.
Examples can be found in nature, for instance the growth of mollusk shells or patterns of floret growth.
Natural logarithms play an important role in growth and decay processes, such as bacterial culture growth or radioactive decay.
An everyday example for some people is the decay of a beer head (Jürgen Brück, Mathematik für jedermann, Compact Verlag München 2009).
The law of beer foam decay is:
\(V\)is the actual foam volume, \(V_0\) is the initial volume, \(t\) is the time variable and \(k\) is a decay constant. The foam of an excellent beer should have a half-life of more than 110
seconds. Good beers are between 91 and 110 seconds and a half-life between 71 and 90 seconds is generally considered acceptable.
|
{"url":"https://pygmalion.nitri.org/a-quick-refresher-on-logarithms-1370.html","timestamp":"2024-11-04T14:11:53Z","content_type":"text/html","content_length":"61171","record_id":"<urn:uuid:bc096680-d5a7-4f01-98eb-bc512717aa35>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00319.warc.gz"}
|