872: Fairy Tales
Explain xkcd: It's 'cause you're dumb.
Title text: Goldilocks' discovery of Newton's method for approximation required surprisingly few changes.
"Eigenvector" is a word adopted into English from German, like "kindergarten" or "blitzkrieg". It is a mathematical concept and has nothing to do with the fairy tale Cinderella, confusing Cueball.
In the story of Cinderella, Cinderella goes to a ball in disguise, dances with a prince and then leaves early and quickly, leaving a glass slipper behind. The prince then uses the shoe to find Cinderella. Megan says that in the version she learned, the prince used an eigenvector and its corresponding eigenvalue to match the shoe to its owner. This is a somewhat logical mathematical connection to make, as eigenvectors and eigenvalues are important properties of a matrix: an eigenvector of a matrix is a vector whose direction the matrix leaves unchanged, and the eigenvalue is the factor by which it is stretched.
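In plainer terms, an eigenvector is a direction that a matrix only stretches, never turns. A minimal NumPy sketch (the matrix here is an arbitrary illustration, not anything from the comic):

```python
import numpy as np

# A small symmetric matrix, chosen arbitrarily for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column v of `eigenvectors` satisfies A @ v = lam * v for its
# matching eigenvalue lam: A only rescales v, it does not rotate it.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

For this matrix the eigenvalues work out to 1 and 3, with eigenvectors along the diagonals (1, -1) and (1, 1).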
Megan explains that her mother, a math professor, would talk about her work, math, as she fell asleep in the middle of reading bedtime stories. The middle panel refers to the fable of the Ant and the Grasshopper, with the addition of what is likely a reference to the Poincaré conjecture, a (now-misnamed) theorem in mathematics stating that every closed 3-manifold in which every loop can be contracted to a point is homeomorphic to the 3-sphere. Megan also mentions two other story changes: "Inductive White and the (N-1) Dwarfs" combines Snow White and the Seven Dwarfs with the principle of mathematical induction, and "the LIM x->∞ (x) little pigs" combines the Three Little Pigs with mathematical limits.
In the title text, Newton's method is an iterative technique for finding successively better approximations to the zeroes (or roots) of a real-valued function. In Goldilocks, the protagonist samples successively better bowls of porridge and better-sized chairs in a house where three bears live. In the Mom's version of the fairy tale, Goldilocks would instead find successively better approximations to a zero, which is why the story required "surprisingly few changes".
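Newton's method refines a guess x by repeatedly subtracting f(x)/f'(x). A minimal sketch in Python (the function and starting guess are arbitrary illustrations):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Return an approximate root of f, starting from the guess x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step                 # each iteration is a "better approximation"
        if abs(step) < tol:
            break
    return x

# Approximate sqrt(2) as the positive root of f(x) = x**2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Like Goldilocks moving from bowl to bowl, each iterate is closer to "just right" than the last.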
[Megan sits in an armchair, reading a book.]
Megan: Are there eigenvectors in Cinderella?
Cueball: ... no?
Megan: The prince didn't use them to match the shoe to its owner?
Cueball: What are you talking about?
Megan: Dammit.
[Megan is in bed, mom is sitting on the edge of the bed reading.]
Megan: My mom is one of those people who falls asleep while reading, but keeps talking. She's a math professor, so she'd start rambling about her work.
Mom: But while the ant gathered food ...
Mom: ... zzzz ...
Mom: ... the grasshopper contracted to a point on a manifold that was NOT a 3-sphere ...
Megan: I'm still not sure which versions are real.
Cueball: You didn't notice the drastic subject changes?
Megan: Well, sometimes her versions were better. We loved Inductive White and the (N-1) Dwarfs.
Megan: I guess the LIM x->∞ (x) little pigs did get a bit weird toward the end...
What about the grasshopper one?
There is an Aesop fable about an Ant and a Grasshopper. Maybe the connection is that "contracting to a point etc." is a frivolous activity (like playing fiddle & dancing)? - 38.113.0.254 01:07, 6 December 2012 (UTC)
Can someone make the Eigenvector explanation a little more "plain language" for those of us who are mathematically challenged? <--feeling dumb... 108.28.72.186 05:45, 4 August 2013 (UTC)
Thanks for your comment. I marked this as incomplete and started an explanation for non-math people. But consider this: xkcd is "A webcomic of romance, sarcasm, math, and language." Nevertheless, I am trying to work on this comic right now. --Dgbrt (talk) 20:11, 4 August 2013 (UTC)
I find it amusing that the Poincaré conjecture is still called a conjecture. Wikipedia starts with the amusing statement "the Poincaré conjecture ... is a theorem." I couldn't find it, but I'd guess that there's probably a lovely discussion on that topic on the talk page. 22:30, 19 August 2013 (UTC)
Working Papers
Santa Fe Institute
Contact information of Santa Fe Institute:
Postal: 1399 Hyde Park Road, Santa Fe, New Mexico 87501
Web page: http://www.santafe.edu/sfi/publications/working-papers.html
More information through EDIRC
For corrections or technical questions regarding this series, please contact (Thomas Krichel)
Series handle: repec:wop:safiwp
st: Re: egen rowmin rowmax
From: Kit Baum <baum@bc.edu>
To: statalist@hsphsun2.harvard.edu
Subject: st: Re: egen rowmin rowmax
Date: Sun, 5 Nov 2006 07:05:07 -0500
This should not be necessary. From the egen help:

rowtotal(varlist) may not be combined with by. It creates the (row) sum of the variables in varlist, treating missing as 0.

rowmin(varlist) may not be combined with by. It gives the minimum value in varlist for each observation (row). If all values in varlist are missing for an observation, newvar is set to missing.
The egen rowtotal() function will properly handle missings, and it properly handles varlists, including the form firstvar-lastvar. (Remember that this is the data set order of your variables, not their alphabetical order.) Thus

webuse auto
egen tot = rowtotal(price-headroom)

will produce a variable with 74 observations, even though rep78 (which has missing values) is in the varlist. You refer to an egen function total(), which produces something quite different: a constant that is the sum of an expression. An expression of, say, price-headroom is exactly that: subtract headroom from price and sum it up. If missing values make the expression missing, then the sum will be missing. Perfectly logical: you get what you ask for. But you probably meant to use rowtotal().
Furthermore, the rowmin() function will properly pick out the minimum value, ignoring missings unless all are missing. So it should return zero for a row containing all (or some) zeroes, and a value greater than zero otherwise, provided none of your rows contain all missing values.
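The three behaviors Kit contrasts (row sum treating missing as 0, an expression sum that propagates missing, and a row minimum that ignores missing) can be mimicked outside Stata. A minimal NumPy sketch of the same missing-value semantics (an analogue, not Stata code):

```python
import numpy as np

# Two rows: one partly missing, one entirely missing (NaN plays the
# role of Stata's missing value).
data = np.array([[1.0, 2.0, np.nan],
                 [np.nan, np.nan, np.nan]])

# rowtotal()-style: treat missing as 0; an all-missing row totals to 0.
row_tot = np.nansum(data, axis=1)

# Expression-style sum (like var1 + var2 + var3): any missing value
# makes the whole result missing.
row_sum = data.sum(axis=1)

# rowmin()-style: ignore missings; only an all-missing row stays missing.
row_min = np.fmin.reduce(data, axis=1)
```

Here row_tot is [3.0, 0.0], row_sum is [nan, nan], and row_min is [1.0, nan], matching the egen behaviors described above.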
Kit Baum, Boston College Economics
An Introduction to Modern Econometrics Using Stata:
On Nov 5, 2006, at 2:33 AM, statalist-digest wrote:
As it turns out, egen [var] = rowsum([varlist]) is equivalent to
egen [var] = total([varlist]).
My solution turned out to be to replace all of the zeros in the seq*
cells with missings, then execute
egen byte minmath = rowmin(seq*) /*without the condition!*/
and add the following line of code:
replace minmath = 0 if minmath==.
Marc Artzrouni's Home Page
Last update: 13-June-2013
Marc Artzrouni's Professional Home Page
Department of Mathematics - UMR CNRS 5142
University of Pau (BP 1155)
64013 Pau Cedex
tel: + 33 - (0)5 59 40 75 50
HIGHLIGHTS OF ACHIEVEMENTS AND INTERESTS

Research (with references to articles further down)

I. American period (1981-1993) (Drexel University (Pennsylvania); Census Bureau, Washington, DC; University of North Carolina at Chapel Hill; Clemson University (South Carolina); Loyola University (New Orleans)). Research in mathematical demography (c1-c6), demo-economic modeling (e1-e2), matrix analysis (b1-b2), and mathematical epidemiology (HIV/AIDS) (d1-d2).

II. French period (1993-present) (University of Pau). Move to France to work with a medical entomologist on the modeling of sleeping sickness (a10, b3-b7, b9). More work on HIV/AIDS modeling (d9, b8), matrix analysis (a9, a11), and classical analysis (a13).

This research has resulted in 50 publications in international journals of mathematics, biomathematics, biology, epidemiology, demography, and economics/history (see list below).

III. Current/recent activities and research

i. Economics: With Fabio Tramonta, University of Pavia: "The debt trap: a two compartment train wreck". With Patrice Cassagnard, University of Pau: "The discrete Nerlove-Arrow model: Explicit solutions".

ii. Population biology: With Niels Teichert, La Réunion: "A multistate cyclical Leslie matrix model for Sicyopterus lagocephalus in Réunion".

iii. Mathematics in medicine: With Vasily Leonenko, Russian Academy of Sciences, Omsk: "A continuous-time Markov chain model for the spread of HIV among drug users: Application to an urban area of Siberia".

iv. Conference: Organization of an international conference on The Role and Impact of Mathematics in Medicine, held in Paris on 10-12 June 2010.

Recent publications

M. Artzrouni and E. Deuchert (2012) Consistent partnership formation: Application to a sexually transmitted disease model. Mathematical Biosciences, 235: 182-188.

M. Artzrouni, C.B. Begg, R. Chabiniok, et al. (2011) The first international workshop on the role and impact of mathematics in medicine: A collective account. Am J Transl Res 2011;3(5):492-497.

M. Artzrouni and E. Deuchert (2010) Do men and women have the same average number of lifetime partners? Mathematical Population Studies, Vol 17, no 4: 242-256.

M. Artzrouni (2009) The Mathematics of Ponzi Schemes. Mathematical Social Sciences, Vol 58, no 2: 190-201.

M. Artzrouni (2009) Transmission probabilities and reproduction numbers for sexually transmitted infections with variable infectivity: Application to the spread of HIV between low- and high-activity populations. Mathematical Population Studies, Vol 16, no 4: 266-287.

Editorial activities

I am the Founding Editor in 1988 (and editor until 2002) of Mathematical Population Studies, an international journal of mathematical demography published by Taylor and Francis.

Teaching

Since 1981 I have taught a broad range of applied math courses (calculus, applied statistics, modeling, operations research, matrix analysis) in four American and French universities.

I have practically not touched a piece of chalk in the last 10 years of my teaching. I use a video projector to present live, interactive lectures prepared with MathCad, a mathematical software package designed for engineers but very well suited to teaching.

The problem with too much convenient technology is that students become passive in the learning process. This happens, for example, if one provides complete lecture notes, either as hard copies or on the Internet. In 2004 I therefore started experimenting with an approach that combines the benefits of new technologies with those that come from writing notes in longhand: I provide students in advance with lecture notes "full of holes" to be filled in by hand during the lecture. This avoids the time-consuming task of writing down routine material, while having to write by hand focuses the mind and helps students concentrate on the important points. Student feedback suggests that they like this compromise.

This approach is experimental and I would like to hear from anyone with thoughts on the best ways to use new instructional technologies in the delivery of lectures. Click here to access more info and sample MathCad files. (MathCad needed on your computer.)
Place and Date of Birth: Greenwich, Connecticut, USA; January 17, 1954.
Marital Status: Married, two children.
Nationalities: US and French.
Languages: Native English and French (courses taught and papers written in both languages).
University of Pau (France):
“Habilitation” in Applied Mathematics ("Mathematical Tools in Population Dynamics: Application to Demography, Biology, and Economics"). December 1992. (Post Ph.D. degree based on the presentation of
a body of research; required in France to become full professor and to direct Ph.D. theses).
University of Paris V (Sorbonne):
Doctorate in Applied Mathematics ("Iterative Processes in Population Dynamics: Application to Easterlin's theory"). April 1981.
University of Grenoble (France):
Master's Degree in Applied Mathematics (Statistics, Applied Algebra and Operations Research). June 1978.
Bachelor's Degree in Mathematics. June 1976.
September 1993 - present: Professor, Department of Mathematics, University of Pau, France.
Aug. 1988 - Aug. 1993: Associate Professor, Department of Mathematics and Computer Science, Loyola University, New Orleans, Louisiana, USA. Chairman of Department during the academic year 1992-1993.
Aug. 1986- Aug. 1988: Visiting Assistant Professor; Department of Mathematical Sciences, Clemson University, Clemson, South Carolina.
June 1984 - June 1986: Post-doc; Department of Biostatistics; School of Public Health; University of North Carolina at Chapel Hill.
Nov. 1982 - Nov. 1983: Mathematical Statistician in the Statistical Research Division of the Census Bureau (Washington, DC). Research on the census undercount.
Sept. 1981 -Sept. 1982: Postdoctoral Fellow; Department of Mathematical Sciences, Drexel University (Philadelphia, PA).
Oct. 1979 - Dec. 1979: Intern in the Office of Statistics at UNESCO (United Nations Educational, Scientific, and Cultural Organization, Paris, France). Work on mathematical models for the comparison
of urban and rural enrollment rates.
Sept. 1978 - Oct. 1978: Summer intern with the Economic Commission for Europe at the United Nations, Geneva, Switzerland. Research on econometric models for the comparison of the gross domestic
product of different countries.
Aug. 1977 - Sept. 1977: Summer intern in the Division of Statistics of the World Health Organization (WHO), Geneva. Research on a mathematical model (multiple logistic function) to estimate the risk
of myocardial infarctions.
a. Mathematics
[a13] M. Artzrouni (2006) A new family of periodic functions as explicit roots of a class of polynomial equations. Australian Journal of Mathematical Analysis and Applications, Vol 3, Issue 2, 1-16; http://ajmaa.org
[a12] M. Artzrouni (2003) The Local Coefficient of Ergodicity of a Nonnegative Matrix. SIAM Journal of Matrix Analysis and Applications, Vol 25, No 2, 507-516.
[a11] M. Artzrouni and O. Gavart. (2000) Non-linear matrix iterative processes and generalized coefficients of ergodicity. SIAM Journal of Matrix Analysis and Applications. Vol 2, No 4, 1343-1353.
[a10] M. Artzrouni, and JP. Gouteux (1999) A Model for the Spread of Sleeping Sickness, in Applied Mathematical Modeling: A Multidisciplinary Approach, D. R. Shier and K. T. Wallenius (Eds.), CRC
Press, Boca Raton, FL, 1999. 71-92 .
[a9] M. Artzrouni (1996) On the dynamics of the linear process Y(k)=A(k)Y(k-1) with irreducible matrices A(k). SIAM Journal of Matrix Analysis and Applications, Vol 17, No 4: 822-833.
[a8] M. Artzrouni and X. Li (1995) A note on the coefficient of ergodicity of a column allowable matrix. Linear Algebra and its Applications, 214: 93-101.
[a7] M. Artzrouni (1991) On the growth of infinite products of slowly varying primitive matrices. Linear Algebra and its Applications, 145: 33-57.
[a6] M. Artzrouni and J. Reneke (1990) Stochastic differential equations in mathematical demography: A review. Applied Mathematics and Computation, 38,1: 7-21.
[a5] M. Artzrouni (1987) On the local stability of nonautonomous difference equations in R^n, Journal of Mathematical Analysis and Applications , 122, 2: 519-537.
[a4] M. Artzrouni (1987) Conditions for asymptotically exponential solutions of linear difference equations with variable coefficients, Journal of Mathematical Analysis and Applications, 121, 1:
[a3] M. Artzrouni (1986) On the convergence of infinite products of matrices. Linear Algebra and its Applications, 74: 11-21.
[a2] M. Artzrouni (1983) A theorem on products of matrices, Linear Algebra and its Applications, 49: 153-159.
[a1] M. Artzrouni (1981) Les processus itératifs en dynamique des populations et la théorie d'Easterlin. Mathématiques et Sciences Humaines, 76: 33-46.
b. Mathematical biology
[b12] M. Artzrouni and E. Deuchert (2012) Consistent partnership formation: Application to a sexually transmitted disease model. Mathematical Biosciences, 235: 182-188.
[b11] M. Artzrouni and E. Deuchert (2010) Do men and women have the same average number of lifetime partners? Mathematical Population Studies, Vol 17, no 4: 242-256.
[b10] M. Artzrouni (2009) Transmission probabilities and reproduction numbers for sexually transmitted infections with variable infectivity: Application to the spread of HIV between low- and high-activity populations. Mathematical Population Studies, Vol 16, no 4: 266-287.
[b9] M. Artzrouni and J.P. Gouteux (2006) A parity-structured matrix model for tsetse populations. Mathematical Biosciences, Vol 204, No 2; 214-231. doi:10.1016/j.mbs.2006.08.022
[b8] M. Artzrouni (2004) Back-calculation and projections of the HIV/AIDS epidemic among homosexual/bisexual men in three European countries: evaluation of past projections and updates allowing for
treatment effects. European Journal of Epidemiology 19(2); 171-179.
[b7] M. Artzrouni and JP. Gouteux. (2001) Population dynamics of sleeping sickness: A microsimulation. Simulation and Gaming, 32, 2, pp. 215-227.
[b6] M. Artzrouni and JP. Gouteux. (2001) A model of Gambian sleeping sickness with open vector populations. IMA J. Math. Appl. Medicine and Biology. 18, pp. 99-117.
[b5] JP Gouteux, M. Artzrouni, and M. Jarry (2000) Une épidémie mise en équations. La Recherche, No 335; 34-38.
[b4] K. Chalvet-Monfray, M. Artzrouni, JP Gouteux, et al. (1998) A two-patch model of Gambian sleeping sickness: Application to vector control strategies in a village and plantations. Acta
Biotheoretica, 46: 207-222.
[b3] M. Artzrouni and JP Gouteux (1996) A Compartmental Model of Sleeping Sickness in Central Africa. Journal of Biological Systems, Vol 4, No : 459-477.
[b2] M. Artzrouni (1992) A modeled time-varying density function for the incubation period of AIDS. Journal of Mathematical Biology. 31:73-99.
[b1] M. Artzrouni (1990) On transient effects in the HIV/AIDS epidemic. Journal of Mathematical Biology, 28: 271-291.
c. Mathematical demography
[c7] M. Artzrouni (2005) Mathematical Demography, Encyclopaedia of Social Sciences, Vol 2. Elsevier Inc.
[c6] M. Artzrouni (1986) Une nouvelle famille de courbes de croissance: application à la transition démographique. Population, 3: 497-509.
[c5] M. Artzrouni (1986) The rate of convergence of a generalized stable population. Journal of Mathematical Biology, 24: 405-422.
[c4] M. Artzrouni (1986) On the dynamics of a population subject to slowly changing vital rates, Mathematical Biosciences, 80: 265-290.
[c3] M. Artzrouni (1985) Generalized stable population theory. Journal of Mathematical Biology, 21: 363-381.
[c2] M. Artzrouni and R. Easterlin (1982) Birth history, age structure, and post -World War II fertility in ten developed countries: an exploratory empirical analysis. Genus, Vol. 38, 3-4: 81-99.
[c1] H. Le Bras and M. Artzrouni (1980) Interférence, indifférence, indépendance. Population, 6: 1123-1144.
d. Biology, medicine, epidemiology
[d11] M. Artzrouni, C.B. Begg, R. Chabiniok, et al. (2011) The first international workshop on the role and impact of mathematics in medicine: A collective account. Am J Transl Res 2011;3(5):492-497.
[d10] M. Artzrouni and JP. Gouteux (2003) Estimating tsetse population parameters: Application of a mathematical model with density-dependence. Medical and Veterinary Entomology, 17: 272-279.
[d9] UNAIDS Reference Group on Estimates, Modelling and Projections (2002) Improved methods and assumptions for estimation of the HIV/AIDS epidemic and its impact, AIDS, 16:W1-W14.
[d8] JP. Gouteux, M. Artzrouni and M. Jarry. (2001) A model with density-dependant immigration to estimate tsetse fly population by trapping, Bulletin of Entomological Research 91: 177-183.
[d7] M. Artzrouni, and JP. Gouteux (2000) Persistance et résurgence de la maladie du sommeil à Trypanosoma brucei gambiense dans les foyers historiques : approche biomathématique d’une énigme
épidémiologique. Comptes Rendus de l’Académie des Sciences. (Sciences de la vie). 323, pp. 351-364.
[d6] M. Artzrouni and JP. Gouteux (1999) Un modèle de transmission de la maladie du sommeil avec population vectorielle ouverte, Annales de la Société Entomologique de France, (N.S.) 35 (suppl.) :
[d5] O. Gavart and M. Artzrouni (1998) Estimation des taux de mortalité M et F pour l'anchois: Présentation générale et premiers résultats. Biométrie et Halieutique, Société Française de Biométrie,
No 15.
[d4] JP Gouteux and M. Artzrouni (1996) Faut-il ou non un contrôle des vecteurs dans la lutte contre la maladie du sommeil? Une approche biomathématique du problème. Bulletin de la Société de
Pathologie Exotique , Vol 89: 299-305.
[d3] M. Artzrouni and JP. Gouteux (1996) Control Strategies for Sleeping Sickness in Central Africa: A Model-based Approach, Tropical Medicine and International Health, Vol 1, No 6: 753-764.
[d2] M. Artzrouni (1990) Projections of the HIV/AIDS epidemic for homosexual/bisexual men in France, the Federal Republic of Germany, and the United Kingdom. European Journal of Epidemiology, Vol 6,
#2: 124-135.
[d1] R.F. Wykoff, C.W. Heath, S.L. Hollis, S.T. Leonard, C.B. Quiller, J.L. Jones, M. Artzrouni, R.L. Parker (1988) The use of contact tracing to identify human immunodeficiency virus infection in a
rural community. Journal of the American Medical Association, 259, 24: 3563-3566.
e. Economics, history
[e6] M. Artzrouni (2009) The Mathematics of Ponzi Schemes, Mathematical Social Sciences, Vol 58, no 2: 190-201.
[e5] J. Komlos et M. Artzrouni (2003) Un modèle démo-économique de la Révolution Industrielle, Economies et Sociétés, Série "Histoire économique quantitative", AF, no 30, 10/2003. 1807-1821.
[e4] M. Artzrouni and J. Komlos (1996) The formation of the European state system: A spatial predatory model. Historical Methods, Vol 29, No 3: 126-134.
[e3] J. Komlos and M. Artzrouni (1995) Ein Simulationsmodell der Industriellen Revolution.Vierteljahrschrift für Sozial- und Wirtschaftsgeschichte, 81, 3:324-338.
[e2] J. Komlos and M. Artzrouni (1990) Mathematical investigations of the escape from the Malthusian trap. Mathematical Population Studies, 2 (4): 269-287.
[e1] M. Artzrouni and J. Komlos (1985) Population growth through history and the escape from the Malthusian trap: a homeostatic simulation model. Genus, 41, 3-4: 21-39.
f. Miscellaneous (epistemology)
[f1] M. Jarry and M. Artzrouni (2008) Un hommage à Jean-Paul Gouteux: la randonnée d'un biologiste au pays des mathématiques, in "Modèles, Simulations, Systèmes", J.J. Kupiec, G. Lecointre, M. Silberstein and F. Varenne (Eds.); Editions Syllepse, Paris. No 3, 271-282.
PUBLISHED PROCEEDINGS
M. Artzrouni (2003) Chaotic dynamical systems, deceptive computers, and new instructional technologies, Monografias del Seminario Matematico Garcia de Galdeano, 27: 89:95. Jaca, Sept 2001.
J. Komlos and M. Artzrouni (1992) Etude mathématique de la sortie de trappe malthusienne. INED. Congrès et Colloques No 11.
J. Komlos and M. Artzrouni (1989) Mathematical investigations of the escape from the Malthusian trap. In published proceedings of the workshop on Reconstitution and Dynamics of Past Populations,
organized by the National Institute for Demographic Studies, Paris, France. June 2-4 1989.
J. Komlos and M. Artzrouni (1986) From Malthus to Boserup: a homeostatic simulation model of population growth through history, in "Modeling and Simulation", Proceedings of the Seventeenth Annual
Pittsburgh Conference, Edited by R. Hanham, W.G. Vogt, and M. H. Mickle, Volume 17, Part 1, 269-273.
M. Artzrouni (2007) Crossing paths in 2D Random Walks, arXiv:0712.1477v1
M. Artzrouni and V. Kamla (2007) Does heterosexual transmission drive the HIV/AIDS epidemic in Sub-Saharan Africa (or elsewhere)? arXiv:0707.0600v1
M. Artzrouni and J.P. Gouteux (2005) A parity-structured Leslie matrix model for tsetse populations. Université de Pau. Laboratoire de Mathématiques Appliquées, No 2005/33.
M. Artzrouni (2004b) A new family of periodic functions as explicit roots of a class of polynomial equations. Université de Pau. Laboratoire de Mathématiques Appliquées, No 2004/27.
M. Artzrouni (with B. Zaba) (2004a) A microsimulation/probabilistic model to assess and correct HIV-induced biases in the estimation of child mortality when using birth history reports. I. Technical
report and preliminary sensitivity analyses. Université de Pau. Laboratoire de Mathématiques Appliquées, No 2004/03.
M. Artzrouni (2002) A migration model for the spread of coins through the Euro-zone. Université de Pau. Laboratoire de Mathématiques Appliquées, No 2002/21.
M. Artzrouni (2002) Projections of the HIV/AIDS epidemic for homosexual/bisexual men in three European countries: An evaluation 14 years later and updated projections. Université de Pau. Laboratoire
de Mathématiques Appliquées, No 2002/21.
M. Artzrouni (2001) The UNAIDS/WHO projection model of HIV: asymptotic analysis and threshold conditions. Université de Pau. Laboratoire de Mathématiques Appliquées, No 2001/19.
M. Artzrouni (2001) The local coefficient of ergodicity of a nonnegative matrix. Université de Pau. Laboratoire de Mathématiques Appliquées, No 2001/49.
M. Artzrouni and JP Gouteux (2000) A microsimulation model for the population dynamics of human sleeping sickness Université de Pau. ERS 2055, No 2000/02.
M. Artzrouni and JP Gouteux (1998) A Model of Gambian Sleeping Sickness with Open Vector Populations, Université de Pau. UPRES A 5033, 98/04.
M. Artzrouni and JP Gouteux (1997) A Model of Sleeping Sickness : Open Vector Populations and Rates of Extinction. Université de Pau. Laboratoire de Mathématiques Appliquées, UPRES A 5033, 97/15.
M. Artzrouni and JP Gouteux (1996) A Compartmental Model of Sleeping Sickness in Central Africa. Université de Pau. Laboratoire de Mathématiques Appliquées, URA 1204, 96/01.
M. Artzrouni, and G. Heilig (1989) Projections of the HIV/AIDS epidemic for homosexual/bisexual men and intravenous drug users in five European countries. WP-89-68. International Institute for
Applied Systems Analysis (IIASA), Vienna. Paper prepared during my stay as a visitor at IIASA in June 1989.
M. Artzrouni, and G. Heilig. (1988) HIV and AIDS Surveillance in Europe. Working Paper WP-88-120. International Institute for Applied Systems Analysis (IIASA), Vienna. Paper prepared during my stay
as a visitor at IIASA in June 1988.
M. Artzrouni (1988) Long-term projections of the spread of HIV/AIDS in New York State. Report commissioned by the New York State Department of Health, Albany.
A deterministic/stochastic model of HIV transmission in high activity groups: Application to the client/sex worker populations of Sub-Saharan Africa. Talk given at the London School of Hygiene and
Tropical Medicine. May 12, 2008.
Le rôle des probabilités des transmission dans la propagation du VIH/SIDA chez les prostituées/clients d'Afrique Sub-Saharienne: une confrontation des approches déterministe et stochastique. Talk
given at Centre d'analyse et de mathématique sociales, Paris (CNRS-EHESS, UMR 8557), 18 March 2008.
CONFERENCES (organized, attended, etc)
Are average male and female numbers of lifetime sexual partners equal? (With E. Deuchert). CNRS/ANR conference on Sustainable Development: Demographic, Energy and Inter-generational Aspects;
University of Strasbourg, November 28-29, 2008.
Time-varying linear processes: application to the population dynamics of tsetse flies, 14th ILAS Conference, Shanghai, 16-20 July, 2007.
An individual-based model for the spread of heterosexual HIV; Thirteen years of African trypanosomiasis modeling: A tribute to JP Gouteux (1948-2006). Talks given as Invited Speaker at the EPIMATH
workshop on "Mathematical and Computer Modeling of Infectious Diseases". Brazzaville, Congo Brazzaville, 5-10 March 2007.
A density-dependent model of the population dynamics of tse-tse flies: application to trapping experiments. Invited speaker at the international Biomathematics conference organised by the African Network for Development-Related Mathematics, Brazzaville, Congo Brazzaville, 10-15 December 2004.
A density-dependent model of the population dynamics of tse-tse flies: application to trapping experiments. Invited speaker at the Fifth Annual Meeting of the African Network for Development-Related Mathematics, Dakar, 2-8 August 2004. Also, workshop to train local public health officials in the use of the UNAIDS software for national HIV/AIDS projections.
A linear migration model for the diffusion of euro coins. Invited speaker, Eurodif2002, Madrid, 28-30 April 2003, Polytechnic Institute of Madrid.
Chaotic dynamical systems, deceptive computers, and new instructional technologies. Jaca, Spain, biannual Pau-Jaca conference in applied mathematics and statistics, 18 Sept. 2001.
The local coefficient of ergodicity of a nonnegative matrix, 8th Annual ILAS Conference, Barcelona, 19-22 July 1999.
A two-sex Demographic Model of the Heterosexual Spread of HIV. "Measurement of risk and modelling the spread of AIDS", IUSSP Workshop, Copenhagen, Denmark, 1-4 June 1998.
A Non-linear Demographic Model of a Heterosexually Transmitted Disease with Vertical Transmission: Application to HIV in Africa. Conference on "Non-linear Models in Demography", organized by the Max-Planck Institute for Demographic Research, Rostock, Germany, 26-28 May 1998.
On Inhomogeneous Products of Leslie Matrices: Application to the Case of Slowly Varying Perron Vectors. (Poster). Annual meeting of the Population Association of America, Washington, DC, USA, 27-29 March 1997.
Of Flies and Men: A compartmental model of sleeping sickness in Central Africa. (with J.P. Gouteux). Presented at Fourth International Conference on Mathematical Population Dynamics, Houston, Texas,
May 23-27, 1995.
Population Association of America (PAA): "Discussant - Organizer" for session on Models of population dynamics. Annual meeting of PAA, Miami, May 5-7, 1994.
"A modeled time-varying density function for the incubation period of AIDS", presented at the Third International Conference on Population Dynamics, Pau, France, June 1-5, 1992.
Discussant at the session on "Mathematical Demography" at the Annual Meeting of the Population Association of America. Denver, CO; April 29-May 1, 1992.
Organizer, Chair, and Discussant of the session "Socio-economic models of population growth: from extinction to bifurcation" at the Annual Meeting of the Population Association of America. Toronto,
Canada; May 3-5, 1990.
"Long-term projections of the HIV/AIDS epidemic for homosexual/bisexual men and intravenous drug users in five European countries", presented at the Annual Meeting of the Population Association of
America. Toronto, Canada; May 3-5, 1990.
"Projections of the HIV/AIDS epidemic for homosexual/bisexual men and intravenous drug users in five European countries" (with G. Heilig) presented at the IIASA/INED workshop on Modelling the Spread
of HIV/AIDS and its Demographic and Social Consequences, Budapest, Hungary, November 23-24, 1989.
"A two-state infective-age structured model for the spread of AIDS in the USA" (with R. Wykoff) presented in poster form at IV International Conference on AIDS, Stockholm, Sweden, June 12-16, 1988.
Abstract #4695 page 235 of program.
"Mathematical investigations of the escape from the Malthusian trap", (with J. Komlos) presented at the workshop on Reconstitution and Dynamics of Past Populations, organized by the National
Institute for Demographic Studies, Paris, France. June 2-4, 1989.
"A two-state age-structured model for the spread of AIDS in the United States", presented at the 12th Annual SEAS-SIAM meeting, The University of Tennessee Space Institute, Tullahoma, Tennessee,
March 11-12, 1988.
"Empirical explorations of closed-form approximations for the dynamics of a population with slowly changing vital rates", presented at the Annual Meeting of the Population Association of America.
Chicago, IL, April 30 - May 2, 1987.
"From Malthus to Boserup: a homeostatic simulation model of population growth through history", presented by co-author John Komlos at the Modeling and Simulation Conference, University of Pittsburgh,
Pittsburgh, PA, April 25-26, 1986.
"Generalized equilibria in multistate demographic systems: a graph-theoretic approach", presented at the Annual Meeting of the Population Association of America. San Francisco, CA, April 3-5, 1986.
"On the dynamics of a generalized stable population: application to the United States" and "Population growth through history and the escape from the Malthusian trap: a homeostatic simulation model",
presented at the General Conference of the International Union for the Scientific Study of Population. Florence, Italy, June 5-12, 1985.
"Conditions for asymptotically exponential solutions of linear difference equations with variable coefficients", presented at the Second SIAM Conference on Applied Linear Algebra. Raleigh, NC, April
29-May 2, 1985.
"Generalized stable population theory", presented at the Annual Meeting of the Population Association of America, Minneapolis, MN, May 3-5, 1984.
Nonhomogeneous Matrix Products, by D.J. Hartfiel. SIAM Review, Vol 4, No 1, p. 133-134 (2003).
Multiregional Demography: Principles, Methods and Extensions, by Andrei Rogers. Mathematical Population Studies, 6; 4: 331-333 (1997).
La dynamique des populations: populations stables, semi-stables, et quasi-stables, by Jean Bourgeois-Pichat. Population Studies, Vol. 50, No. 2, 280-281 (1996).
Formal Demography, by David P. Smith. Mathematical Population Studies, 3; 4: 305-306 (1992).
Matrix Population Models, by Hal Caswell. Mathematical Population Studies, 2; 2:167-168 (1990).
Deterministic Aspects of Mathematical Demography by John Impagliazzo. Mathematical Population Studies, 1;1:127-130 (1988).
During Fall 1988 I worked with the New York State Department of Health (in Albany) on modeling the future course of AIDS in the state. I spent a day in Albany (September 20) during which I made a presentation for the state health authorities on my projections of AIDS in the state. I wrote a user-friendly computer program (HIVAIDS90) to forecast the number of AIDS cases. This model is described in M. Artzrouni (1990).
Help!?!? 7. Graph the relation and its inverse. Use open circles to graph the points of the inverse. x | –7, –6, –5, 8 y |–6, 7, –2, 6
I think if you re-label the y and x you get the inverse. It looks like all the plots show the correct position for the (x,y) solid dots; I would plot (y,x) with open circles and see which plot that matches.
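The coordinate swap the helper describes can be written out in a few lines; a minimal Python sketch (not part of the thread) using the (x, y) pairs from the question's table:

```python
# Relation from the question: x-values paired with y-values in order.
relation = [(-7, -6), (-6, 7), (-5, -2), (8, 6)]

# The inverse relation simply swaps each (x, y) pair to (y, x);
# those swapped points are the ones to plot with open circles.
inverse = [(y, x) for (x, y) in relation]
print(inverse)  # [(-6, -7), (7, -6), (-2, -5), (6, 8)]
```

Swapping the inverse pairs back recovers the original relation, which is a quick way to check the plot.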
Ok. Thanks! :)
what did you get?
I'm still working on it. :)
It looks like it would be C. But I'm not sure.
did you get this answer?
@swiftskier96 Was it C ?
I don't remember. What unit and lesson is it from?
[FOM] The empirical foundations of deductive logic and the axiomatic method (& applications to, e.g., ODEs & stochastic processes)
Richard Haney rfhaney at yahoo.com
Wed Aug 31 18:24:42 EDT 2005
I am generally rather amazed and somewhat puzzled by the degree to which mathematicians typically have confidence in (deductive) logic to obtain new mathematical "knowledge". This perplexing state of affairs seems especially poignant, for example, in the area of the mathematical theory of stochastic processes.
Apparently, ancient Greek mathematicians, and the world in general, gained confidence in deductive logic and the axiomatic method when it was discovered empirically that conclusions deduced from empirically true hypotheses (typically of Euclidean geometry) invariably turned out themselves to be empirically true within the accuracy of empirical methods and obvious interpretations available at that time. As a result, a great economy of effort in empirical testing of ideas was achieved. This "empirical validity" of deductive logic might be regarded as a law of nature much as Newton's law of gravity or any other empirical law of nature. (Philosophically, such "laws" might be regarded as complicated, subjective acts of "human-pattern-invention-and-matching" to nature and might not be essentially a part of "external" nature itself. This view gets into complicated questions as to "what is objective reality?", and is highly related to cultural psychology, cultural conditioning, and the
psychological mechanisms of "perception", but such questions are a side issue here.) As with Newton's law of gravity, this empirical validity of deduction might be expected to fail as to accuracy and/or precision under certain extreme (or perhaps not so extreme) conditions.
I am wondering whether there has been any specific scientific or philosophical study, especially in modern times, of such empirical questions (as such) concerning deductive logic and the axiomatic method.
The axiom of choice seems justifiable as an addition to set-theoretic axioms if the resulting mathematics becomes more "manageable" and useful for purposes of empirical modeling and related analysis. Otherwise, it seems no more relevant scientifically (and epistemologically) than such questions as "How many angels can dance on the head of a pin?". In such a view, the addition of the axiom of choice might be regarded in the same "conceptual-manageability-and-usefulness" framework as the addition of negative numbers, irrational numbers, imaginary numbers, and so on, to the modern conceptual framework of mathematics. (Incidentally, I suspect that "conceptual-manageability-and-usefulness" is what most mathematicians have in mind by the use of the word "elegant".)
The axiom of choice is apparently useful for deducing the (formalistically nominal) "existence" of solutions of certain ordinary differential equations where more "constructive" methods do not seem to be available, but I am unsure whether such formalistically nominal existence makes the resulting mathematics more manageable or useful conceptually (or otherwise). However, analysis of solutions to ordinary differential equations has proven to be extremely useful to empirically-based science -- for example in the computation and analysis of planetary and satellite orbits and spacecraft trajectories.
But the mathematical theories of stochastic processes seem to be much more "far-fetched" in terms of empirically-based science than are the mathematical theories of ordinary differential equations. The uses of functional analysis, for example, seem to be extremely elaborate, devious and intricate in applications to the mathematical theories of stochastic processes. And empirical questions seem to be much more complicated due to the generally less obvious empirical testability of such theories in practice.
So I would also like to know to what extent theoreticians and practitioners have empirically verified the "conceptual-manageability-and-usefulness" of such mathematical theories of stochastic processes in actual applications. Specifically, what empirical basis is there for confidence in such highly intricate mathematical theories of stochastic processes in actual applications?
This second question as to applications to the mathematical theories of stochastic processes seems to be more immediately practical. But the first question as to studies in the empirical foundations of deductive logic and the axiomatic method seems to be much more far-reaching in scope.
Richard Haney
Another countability question
November 17th 2008, 03:16 PM #1
Hello everyone! Once again there are no answers in the book, so this is just another check to make sure. Thank you very much in advance.
Question: "Is the set of all algebraic numbers countable? Justify your response."
I will attempt to prove that the algebraic numbers are countable. Once again, for my sake, let $\mathbb{A}$ be the set of all algebraic numbers.
Answer: Let $\zeta$ be an algebraic number. Then by definition there exists a polynomial $p(x)=a_0+a_1x+\cdots+a_nx^n$ with $a_0,a_1,\cdots,a_n\in\mathbb{Z}$, such that $p\left(\zeta\right)=0$.
Therefore let us define the number $\zeta$ by the n-tuple $\left(a_0,a_1,\cdots,a_n\right)=t_1$. So now that we have shown that each algebraic number may be expressed as an n-tuple, we may state
that $\mathbb{A}\equiv\left(t_1,t_2,\cdots,t_m\right)$ where $t_m\equiv\left(a_0,a_1,\cdots,a_n\right)$. So now, since we have shown that the set of all algebraic numbers may be expressed as a set
of n-tuples with each element of the n-tuple being an element of the integers, a countable set, we may conclude that the algebraic numbers are countable $\quad\blacksquare$
I know it's relatively simple, but I just want to be sure I am doing this right.
Looks good to me. Now you can conclude that there are uncountably many transcendental numbers!
I'm assuming the reason being that if we assumed that the transcendentals ( $\mathbb{T}$) were countable, we would have that $\mathbb{A}\cup\mathbb{T}=\mathbb{R}$, and since the union of two countably infinite sets is countable, we arrive at a contradiction since the reals are uncountable?
I want to get off topic and show you something that I find really funny. For some time mathematicians did not know if transcendental numbers existed. Only in 1844 did Liouville show, by construction, a transcendental number. Then $\pi,e$ were proven to be transcendental. But look how easy it is if you are familiar with set-theoretic concepts of cardinality! Not only do transcendental numbers have to exist, there have got to be way more of them than algebraic numbers!
That just shows that if you happen to know advanced mathematical concepts you can sometimes murder seemingly simple problems in a few lines.
Haha, that reminds me of this one kid who thought that because we know more math than, say, Newton, we are smarter (I think I thought so too, in a weird way). That is obviously not true, but hey... we can always dream.
Your proof is unclear and kind of wrong! You can't correspond an algebraic number to an n-tuple of integers; an n-tuple of integers may correspond to n algebraic numbers. For any
$n \in \mathbb{N}$ and $(a_0,a_1, \cdots , a_n) \in \mathbb{Z} \times \mathbb{Z} \times \cdots \times \mathbb{Z},$ define $A_n(a_0,a_1, \cdots , a_n)=\{z \in \mathbb{C}: \ a_0 + a_1z + \cdots + a_nz^n = 0 \}.$ Let $A$ be the set of all algebraic numbers. Then obviously:
$A=\bigcup_{n=1}^{\infty} \bigcup_{(a_0,a_1, \cdots, a_n) \in \mathbb{Z} \times \mathbb{Z} \times \cdots \times \mathbb{Z}} A_n(a_0,a_1, \cdots, a_n).$ Now each $A_n(a_0,a_1, \cdots, a_n)$ is finite (it has at most n distinct elements) and $\mathbb{Z} \times \mathbb{Z} \times \cdots \times \mathbb{Z}$ is countable. Finally, a countable union of countable sets is countable. Thus $A$ is countable.
I am in no way refuting what you say; I am sure that you are correct. But if the coefficients of each equation uniquely determine the number, why can't we represent them as sets of their coefficients?
EDIT: Unless of course they don't uniquely determine them; but then why can't we have multiple n-tuples that are equivalent?
The coefficients of an equation do not represent only one algebraic number! For example, (2,-3,1) represents 1 and 2 because both are roots of $z^2 - 3z + 2 = 0.$ Also, the representation is obviously not unique!
Yeah, I realized that right after I posted it. OK, well then what about this: if we define a set $E$ as the set of all finite integer sequences, that would be a countable set by the argument that it is a set of n-tuples with integer elements; then obviously the algebraic numbers, as I suggested they be represented above, would be a subset of $E$ and would be countable. Or do I need to hit the books again?
that's roughly the idea! and a word of wisdom: lol ... it's dangerous for a young mind to be happy with just some rough ideas!
Lambert thought that the construction of $\pi$ was impossible. A theorem by Lambert in 1768 says: for each rational $x$ ( $x \neq 0$) the value $\tan x$ is irrational. Since $\tan(\pi/4) = 1$ is rational, he deduced that $\pi/4$, and hence $\pi = 4 \arctan 1$, must be irrational. By a similar argument, using $e^{x} = (1+\tanh(x/2))/(1-\tanh(x/2))$, one can show irrationality of $e^x$ for rational $x \neq 0$.
The computable reals are enumerable but not effectively enumerable (that is, there is no computable function which is one-to-one from N onto the computable reals).
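The countable-union argument in this thread can even be made concrete: group integer polynomials by a "height" (degree plus the sum of the absolute coefficients) so that each group is finite, and collect roots group by group. The Python sketch below is only an illustration (not from the thread; the function names are mine): to stay in exact arithmetic it collects only the rational roots, via the rational root theorem, but the same height-based enumeration runs through every integer polynomial.

```python
from fractions import Fraction
from itertools import product

def rational_roots(coeffs):
    """Rational roots of a0 + a1*x + ... + an*x^n (integer coefficients),
    via the rational root theorem: candidates p/q with p | a0 and q | an."""
    if not coeffs or coeffs[-1] == 0:
        return set()  # require a genuine leading coefficient
    a0, an = coeffs[0], coeffs[-1]
    if a0 == 0:
        # x = 0 is a root; factor one x out and recurse on the rest.
        return {Fraction(0)} | rational_roots(coeffs[1:])
    roots = set()
    ps = [d for d in range(1, abs(a0) + 1) if a0 % d == 0]
    qs = [d for d in range(1, abs(an) + 1) if an % d == 0]
    for p in ps:
        for q in qs:
            for cand in (Fraction(p, q), Fraction(-p, q)):
                if sum(c * cand ** i for i, c in enumerate(coeffs)) == 0:
                    roots.add(cand)
    return roots

def algebraic_up_to_height(h_max):
    """Collect roots of every integer polynomial whose height
    (degree + sum of |coefficients|) is at most h_max. Each height
    class is finite, so growing h_max enumerates a countable union."""
    found = set()
    for h in range(2, h_max + 1):
        for n in range(1, h):            # polynomial degree
            budget = h - n               # sum of |a_i| for this class
            for coeffs in product(range(-budget, budget + 1), repeat=n + 1):
                if sum(abs(c) for c in coeffs) == budget:
                    found |= rational_roots(coeffs)
    return found
```

For instance, `rational_roots((2, -3, 1))` recovers the roots {1, 2} of $z^2 - 3z + 2$ from the earlier post, and the sets `algebraic_up_to_height(h)` grow monotonically with h while each stage stays finite, which is exactly the structure of the countability proof.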
Proof methods: Proof by mathematical induction | MathBlog
Written by Kristian on 3 August 2011
Topics: Math
It has been a while since I last posted something about proof methods, but let's dig that up again and take a look at a fourth method. The first three were direct proof, proof by contradiction, and proof by contrapositive. Proof by induction is of a somewhat different nature.
I have been reading quite a few blog posts recently and they all seem to be witty and clever, so I really wanted to add a joke right here on induction. But I honestly couldn’t find one I think was
funny enough. So if you could just laugh or smirk for a few seconds before reading on, my day is saved.
Back to the induction part. Induction usually has its strength on statements of the type "For all integers k greater than b, P(k) is true". Some such statements could be proven with the methods already covered, but for others that would mean proving an infinity of cases.
The analogy I see everywhere, and which I find quite fitting, is to compare induction to dominoes (and I don't mean the pizza thing) which are lined up. As soon as you knock the first one over, it knocks all the remaining ones over, one by one. Induction works in much the same way.
We need to prove two things, and for explanatory purposes I will explain them in reverse order:
1. Induction step: We assume that P(k) is true, and then we need to show that P(k+1) is true as well. That is the same as saying that if we knock an arbitrary domino over, then the next one will fall as well.
2. Base case: We need to prove the base case P(b), or in other words we need to show that we can knock over the first domino.
Once we have shown these two things, then if b=1 we have shown P(1), which by the induction step implies that P(2) is true, which by the induction step implies P(3)... all the way to infinity.
Induction Proof Example
Let us start out with proving something Gauss figured out very early in his childhood.
Proposition: For all $N \geq 1$ the following is true:

$$\sum_{n=1}^{N} n = \frac{N(N+1)}{2}$$

I know this could be proven as a direct proof, but it is rather easy and therefore well suited for an example. The question is rather easy to understand, so I won't do a whole lot to explain the statement.
Base case: The base case in this situation is N = 1, so we need to show that the statement holds there.
The sum of n from 1 to 1 is... 1, so the left hand side is rather easy. The right hand side is easy to calculate using the basic arithmetic taught in 1st-4th grade, so we can reduce the statement to

$$1 = \frac{1(1+1)}{2}$$

which is true.
Induction step: We assume that the statement holds for N = k, such that

$$\sum_{n=1}^{k} n = \frac{k(k+1)}{2}$$

and need to show that the statement is also true for N = k + 1, i.e. that

$$\sum_{n=1}^{k+1} n = \frac{(k+1)(k+2)}{2}$$

Let us do this part as a direct proof:

$$\sum_{n=1}^{k+1} n = (k+1) + \sum_{n=1}^{k} n = (k+1) + \frac{k(k+1)}{2} = \frac{2(k+1) + k(k+1)}{2} = \frac{(k+1)(k+2)}{2}$$

The first step just pulls k+1 out of the summation; the remaining sum is by assumption equal to $\frac{k(k+1)}{2}$, which proves the proposition.
Thereby we have proven that it holds for the base case of n=1 and, by induction, that it holds for all n.
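A formula like this is also easy to sanity-check in code before writing up the proof. Here is a quick Python sketch (my addition, with an illustrative helper name) checking the base case, the induction step, and the closed form directly:

```python
def gauss_sum(n):
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# Base case: P(1).
assert gauss_sum(1) == 1

# Induction step, checked numerically: if the formula holds at k,
# then adding (k + 1) must reproduce the formula at k + 1.
for k in range(1, 1000):
    assert gauss_sum(k) + (k + 1) == gauss_sum(k + 1)

# Direct comparison against the actual summation.
assert all(sum(range(1, n + 1)) == gauss_sum(n) for n in range(1, 200))
print("all checks passed")
```

Of course a finite check is not a proof, which is exactly why induction is needed, but it catches algebra slips early.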
Wrapping Up
I have kept this blog post a tad short with only one example. I have a few more topics regarding induction that I want to blog about, but that will be in a later blog post.
I have found a few good sources for reading about proof by induction. As always, the Book of Proof is a good choice, and also this 21-page PDF file which in my opinion gives a great coverage of the topic.
The metaphor of dominoes also gave rise to the chosen post photo, which was kindly shared under the Creative Commons license by Malkav. My crop of the photo is of course shared under the same license.
1. I think this is the coolest proof method that we have learned about so far.
Two steps and you solve infinitely many cases. And that’s a whole lot of cases…
2. It is very obvious that we have proved an infinity of cases in the induction case, but that is true for many of the other statements as well. This just lends itself to another type of problem.
The theorem we proved in the example can be proven by direct methods as well for all integers, so we would end up covering the same amount of cases.
That being said, yes, I like it as well, and I always hear my math professor stating that we have to prove it for k+1 in his German accent.
3. 1. http://translate.google.com/
2. Choose German
3. Write “k + 1″
4. Click Listen
5. Laugh!
4. Always nice to have a clear and concise refresher of weak induction. Great post.
Converting decibels to PSI
How can I convert decibels to pressure units, such as PSI, or vice versa? I've seen formulas such as dB = 10*log (P/Pref) or 20*log(V/Vref). Is this correct? How do I know what the reference P is?
Thanks for your help!
If P is power, the 10*log form is correct. However, if P is pressure, dB = 20*log(P/Pref) and Pref is 20 µPa (micropascals). The assumption is that P is an rms pressure value; any static (dc) pressure is ignored. A 10 dB increase is always 10X in power or energy, and power is proportional to the square of voltage or pressure. That is where the factors of 10 or 20 come from.
Note that 20 Pa is 120 dB, painfully loud, while static atmospheric pressure is 101.325 kPa.
http://www.eie.fceia.unr.edu.ar/~acusti ... undlev.htm
For electrical signals, you have to find a reference to what the reference level is, but 1 mW of power (frequently into 600 ohms), or 1 V are common ones.
Pa to dB
I am doing a science fair project and am wondering how to convert sound pressure (Pa) into decibels (dB). Can anyone give an answer please?
Re: Converting decibels to PSI
Use this equation:
dB = 20*log( a/b )
a = the measured sound level in Pascals
b = 2*10^-5
to test; use a = 2 Pascals and you should get 100 dB
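The equation above drops straight into code; a small Python version (the function name is just illustrative, with the same reference pressure b = 2*10^-5 Pa):

```python
import math

P_REF = 2e-5  # reference sound pressure in pascals (20 micropascals)

def pa_to_db(p_rms):
    """Sound pressure level in dB from an rms pressure in pascals."""
    return 20 * math.log10(p_rms / P_REF)

# 2 Pa should come out at 100 dB (up to floating-point rounding),
# matching the test value suggested above.
print(pa_to_db(2.0))
```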
Re: Converting decibels to PSI
Nice response, however, the original question asked how to convert dB to PSI. That can be done as follows:
20 Pascals = 120 dB (as define above)
1 psi = 6894.757 Pascals
120 dB / 20 Pa = 6 dB / Pascal
6 dB / Pa * 6894.757 Pa /1 psi = 6 * 6894.757 * (dB / 1 Psi) = 41,368.542 dB / 1 psi
Note: This ratio should allow you to see that it takes a lot of dBs (sound pressure) to equal 1 psi (i.e. pressure evenly distributed over 1 square inch). That works out to be about 1 dB per
124.881645 square microns.
So, if dB is known & you want to convert to psi, then it can easily be computed as follows:
"120 dB = how many psi?"
120 dB * ( 1 psi / 41,368.542 dB = 120 / 41,368.542 = approximately 0.0029 psi
Re: Converting decibels to PSI
The conversion supplied by "teacher" above is only good for exactly 120 dB because of the logarithmic scale... The calculations using the logarithmic equation must be performed in SI units and then converted.
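As that last reply points out, a fixed dB-per-pascal ratio only works at one level; the conversion has to go through the logarithm. A sketch of the correct round trip in Python (illustrative names, assuming the 20 µPa reference and the 1 psi = 6894.757 Pa figure used earlier in the thread):

```python
P_REF_PA = 2e-5        # 20 micropascal reference pressure
PA_PER_PSI = 6894.757  # pascals per psi

def db_to_pa(db):
    """Invert dB = 20*log10(p / P_REF): p = P_REF * 10**(dB / 20)."""
    return P_REF_PA * 10 ** (db / 20)

def db_to_psi(db):
    return db_to_pa(db) / PA_PER_PSI

# 120 dB corresponds to 20 Pa, about 0.0029 psi -- the same figure the
# earlier post reached, but this form is valid at every sound level.
print(db_to_psi(120))
```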
Statistics 4 Variance
- Always report a measure of central tendency (mean) and a measure of variability (SD) to describe a set of scores.
- Variation (SS) = sum of squares
- Variance (s^2 or S^2) = SS/N (deviation scores squared, summed, divided by N)
- Standard deviation (s or SD) = square root of variance
- Measures of variability provide a quantitative measure of the degree to which scores in a distribution are spread out or clustered together on the scale.
- If variance = 0, then all scores are the same; NO variability exists.
- If variance is large, then scores were very far apart.
- If variance is small, then scores were very close together.
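These definitions translate directly into code; a minimal population (divide-by-N) version:

```python
def describe(scores):
    """Return mean, variance (SS/N), and standard deviation of a score list."""
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)  # variation: sum of squares
    variance = ss / n
    sd = variance ** 0.5
    return mean, variance, sd

print(describe([2, 4, 4, 4, 5, 5, 7, 9]))  # (5.0, 4.0, 2.0)
print(describe([5, 5, 5]))                 # variance 0: no variability
```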
Harwood Heights Science Tutor
Find a Harwood Heights Science Tutor
...I especially enjoy breaking down mathematical concepts for students who feel weak in this area and helping them develop a practical approach to problem solving. I believe in strengthening
foundational skills first and then building upon those toward mastering more difficult material. I have als...
38 Subjects: including ACT Science, Spanish, reading, statistics
...I took the revised version of the GRE the first day it was offered and I scored a 170 on the Quant and a 168 on Verbal. I've helped design GRE apps for Android and IOS operating systems, and I
revised the content of online GRE study tools to reflect the new structure of the exam. In the past 5 years, I've written proprietary guides on SAT strategy for local companies.
24 Subjects: including physics, ACT Science, calculus, geometry
...I have a graduate degree in Communication. I have an undergrad degree in English, and experience working with middle school students as well as college students. In the summer, I teach debate
and persuasive techniques to gifted middle school students.
32 Subjects: including biology, English, writing, reading
...Topics include: functions and graphing (linear, quadratic, logarithmic, exponential), complex numbers, systems of equations and inequalities, and relations. This can also include beginning
trigonometry and probability and statistics. Geometry is unlike many other Math courses in that it is a spatial/visual class and deals minimally with variables and equations.
11 Subjects: including ACT Science, calculus, geometry, algebra 1
...I specialize in English Composition/ESL(English as a Second Language), as well as all Business courses. I was a tutor with the Everest University Igniter Ambassador Tutoring Program (2012)
where I received a Special Recognition Award for outstanding contributions to Everest University Online's I...
12 Subjects: including anatomy, microbiology, biology, English
csTriangleMeshLOD Class Reference
A static class which performs the calculation of the best order to do the collapsing of a triangle mesh. More...
#include <csgeom/trimeshlod.h>
Static Public Member Functions
static void CalculateLOD (csTriangleMesh *mesh, csTriangleVerticesCost *verts, int *translate, int *emerge_from, csTriangleLODAlgo *lodalgo)
For the given mesh and a set of vertices calculate the best order in which to perform LOD reduction.
static csTriangle * CalculateLOD (csTriangleMesh *mesh, csTriangleVerticesCost *verts, float max_cost, int &num_triangles, csTriangleLODAlgo *lodalgo)
Calculate a simplified set of triangles so that all vertices with cost lower then the maximum cost are removed.
static csTriangle * CalculateLODFast (csTriangleMesh *mesh, csTriangleVerticesCost *verts, float max_cost, int &num_triangles, csTriangleLODAlgo *lodalgo)
This is a faster version of CalculateLOD() which doesn't recalculate the cost of a vertex after edge collapse.
Detailed Description
A static class which performs the calculation of the best order to do the collapsing of a triangle mesh.
Definition at line 180 of file trimeshlod.h.
Member Function Documentation
static void csTriangleMeshLOD::CalculateLOD (csTriangleMesh *mesh, csTriangleVerticesCost *verts, int *translate, int *emerge_from, csTriangleLODAlgo *lodalgo) [static]
For the given mesh and a set of vertices calculate the best order in which to perform LOD reduction.
This fills two arrays (which should have the same size as the number of vertices in 'verts'). 'translate' contains a mapping from the old order of vertices to the new one. The new ordering of
vertices is done in a way so that the first vertex is the one which is always present in the model and with increasing detail; vertices are added in ascending vertex order. 'emerge_from' indicates
(for a given index in the new order) from which each vertex arises (or seen the other way around: to what this vertex had collapsed).
mesh the source triangle mesh.
verts the vertex costs.
translate contains a mapping from the old order of vertices to the new one.
emerge_from indicates from which each vertex arises.
lodalgo is the lod algorithm.
Note: The given 'mesh' and 'verts' objects are no longer valid after calling this function. Don't expect any useful information here.
static csTriangle* csTriangleMeshLOD::CalculateLOD (csTriangleMesh *mesh, csTriangleVerticesCost *verts, float max_cost, int &num_triangles, csTriangleLODAlgo *lodalgo) [static]
Calculate a simplified set of triangles so that all vertices with cost lower then the maximum cost are removed.
The resulting simplified triangle mesh is returned (and number of triangles is set to num_triangles). You must delete[] the returned list of triangles if you don't want to use it anymore.
mesh the source triangle mesh.
verts the vertex costs.
max_cost the cost which sets the limit for the simplification.
num_triangles receives the number of triangles in the simplified set.
lodalgo is the Lod algorithm.
Note: The given 'mesh' and 'verts' objects are no longer valid after calling this function. Don't expect any useful information here.
static csTriangle* csTriangleMeshLOD::CalculateLODFast (csTriangleMesh *mesh, csTriangleVerticesCost *verts, float max_cost, int &num_triangles, csTriangleLODAlgo *lodalgo) [static]
This is a faster version of CalculateLOD() which doesn't recalculate the cost of a vertex after edge collapse.
It is less accurate in cases where the cost of a vertex can change after edge collapse, but it calculates a LOT faster. You must delete[] the returned list of triangles if you don't want to use it anymore.
mesh the source triangle mesh.
verts the vertex costs.
max_cost the cost which sets the limit for the simplification.
num_triangles receives the number of triangles in the simplified set.
lodalgo is the lod algorithm.
Note: The given 'mesh' and 'verts' objects are no longer valid after calling this function. Don't expect any useful information here.
The documentation for this class was generated from the following file:
Generated for Crystal Space 1.4.1
Having Trouble Setting up a System...
August 31st 2009, 02:15 PM #1
Aug 2009
Having Trouble Setting up a System...
...of DEs in regards to a written problem we've been given. Here it is, followed by a tentative 'stab' at what is meant to be in the equation.
"Suppose that the time rate of change (ROC) of a price M(t) of a product, minus inflation I(t) is proportional to the difference between Supply S(t) at time t and some equilbrium supply t.
(If $S>S_0$ the supply is too large and cost 'will' decrease. If $S<S_0$ supply is too low and price 'will' increase)
Also assume that the ROC in supply is proportional to the difference between the price P and some equilibrium $P_0$
(If $P>P_0$ the price is too high and supply will increase. If $P<P_0$ the price is too low and supply will decrease)
We've also got that F(t) = sinwt
Now, this is what I have scribbled down in an attempt to get some working out, but I highly doubt that it is anywhere near correct, as I'm having trouble moving the knowledge of DEs from just
straight questions into drawing stuff out of problems.
$dy/dt(M(t) - I(t))(S(t) - S_0) = 0$
I(t) = sinwt
As I've said, I'm far from confident about it
Have you read your post again after posting it? You designate the price with variable M and with variable P. Then F(t)=sin(wt) seems to be inflation I(t), or not? So this becomes difficult to
follow and moreover what is the exact question?
I assume you mean by the first part:
$\frac{dM}{dt}-I(t)=-K\cdot \left(S(t)-S_0 \right)$
K is a proportionality constant. And by the second part:
$\frac{dS}{dt}=L\cdot \left(M(t)-P_0 \right)$
with L another proportionality constant. And finally:
$I(t)=\sin(\omega t)$
If this is correct you can substitute the second and last part in the first one and obtain a second order DE. Please check this and come back to shed some light on these dark postingshadows :-)
Gah. I did mean that I(t) = sin(wt), rather than F(t). And grud knows why I've switched between P and M for price.
All the rest is correctly interpreted. Very sorry for the confusion.
Last edited by mr fantastic; September 3rd 2009 at 05:04 AM.
No problem I'm happy everything is sorted out with the question. OK, so can you write down the second order DE and solve it? It should look familiar....
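A quick numerical sanity check of the interpreted system is easy to run. Every constant below is an illustrative assumption, not a value from the thread (w = 1 would put the forcing at resonance with the natural frequency sqrt(KL), so w = 2 is used here):

```python
import math

# Assumed constants: dM/dt = sin(w t) - K (S - S0), dS/dt = L (M - P0)
K, L, S0, P0, w = 1.0, 1.0, 1.0, 1.0, 2.0

def f(t, y):
    M, S = y
    return (math.sin(w * t) - K * (S - S0), L * (M - P0))

def rk4(f, y, t, h, steps):
    """Classical 4th-order Runge-Kutta integrator for a small system."""
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = f(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = f(t + h, tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                  for a, p, q, r, s in zip(y, k1, k2, k3, k4))
        t += h
    return y

M, S = rk4(f, (P0, S0), 0.0, 0.01, 2000)  # integrate out to t = 20
print(M, S)  # price and supply oscillate around their equilibria
```

Eliminating M does give the expected forced second-order equation, S'' + KL(S - S0) = sin(wt), whose solutions stay bounded away from resonance.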
September 2nd 2009, 11:24 AM #2
Junior Member
Mar 2008
September 3rd 2009, 02:09 AM #3
Aug 2009
September 6th 2009, 08:46 AM #4
Junior Member
Mar 2008
answers to the "related possible questions"
Hi, Mark:
The related possible questions in the answer section are really good, but there are no answers provided, and some of those related questions are equally hard and tricky. Since there is a high chance
that the interviewer is going to ask those related questions, will you provide answers to them?
Re: answers to the "related possible questions"
well it would triple the length of the book plus I think it's good for the reader to have some questions that he can't look up the answers to...
You can ask here about any question you are stuck on.
Re: answers to the "related possible questions"
OK, here is the unanswered question I got stuck with:
p290, bottom of the page: "6 pirates discover a treasure chest filled with 10000 gold coins..."
can you at least give a hint?
Re: answers to the "related possible questions"
it's a similar kind of backwards induction. i.e. what happens with one and then two...
It's pretty easy to find an answer with google for this one.
Re: answers to the "related possible questions"
I know it is backward induction, but I got stuck and can't move forward because I don't see my mistake:
if there is only one pirate, he gets all the money
if there are two, then they split 500/500, because otherwise they will never agree to anything ("the majority should be strict", meaning we need 2 votes to accept or kill the proposal)
if there are three, then the one making a proposal can leave himself 499 and let the other one (randomly chosen) have 501. In this case the randomly chosen one will vote in favor of the proposal,
because if he kills it, he'll get only 500
if there are four, then the one making a proposal needs two more votes. He should give some money to the first two pirates, but how much should he give? Both of them randomly get 0 or 501... should he give
them 1 and 1, or should I calculate the average payoff and give 1 coin above it?
I got stuck... if you have 5 minutes, help!
Re: answers to the "related possible questions"
if there are two, you would have to give everything to the other pirate since if he votes against, there is no majority and you die.
with three you can give one to the second pirate and keep the rest since this better for him than dying.
with four you can give two to the guy who gets one when there are three, and one to the one who gets zero and they will vote in favour so you have a strict majority.
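This backward induction mechanizes cleanly. The sketch below follows the convention just described: a voter accepts only if offered strictly more than his payoff in the smaller subgame, capped at the whole treasure (which reproduces the two-pirate "give everything" case):

```python
TOTAL = 10000

def split(n):
    """Payoffs (proposer first) for n pirates dividing TOTAL coins."""
    if n == 1:
        return [TOTAL]
    cont = split(n - 1)        # payoffs if the proposer is thrown overboard
    extra_votes = n // 2       # strict majority, minus the proposer's own vote
    cheapest = sorted(range(n - 1), key=lambda i: cont[i])[:extra_votes]
    offer = [0] * (n - 1)
    for i in cheapest:
        offer[i] = min(cont[i] + 1, TOTAL)  # one coin more than the subgame pays
    return [TOTAL - sum(offer)] + offer

for k in range(1, 7):
    print(k, split(k))
```

For the six-pirate version from the book, the proposer keeps 9996 and buys the three cheapest votes for 1, 2 and 1 coins.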
Anyway, there is loads of discussion of this on the web:
Re: answers to the "related possible questions"
Oh! I see my mistake now!
You need a majority to accept the proposal, but you don't need one to kill the proposal!
Thank you! The rest of the question is easy.
I know that there is a big discussion on the web, but a forum is much more convenient and gives you time and opportunity to figure out where you were wrong, and this is much better for the brain than just
reading a solution from the web
Mckinney Algebra Tutor
Find a Mckinney Algebra Tutor
I have been a middle school and elementary school teacher for the last four years and love every second of it. I received my college degree from the University of North Texas in Finance and
Marketing. I have had different jobs in the private sector since I was about 15 years old.
23 Subjects: including algebra 1, chemistry, geometry, biology
...I get in front of people to speak all the time to promote my business. I have taught nutrition to many college students as part of the weight training class at Collin County Community College.
I am a Certified Clinical Exercise Specialist and Health Fitness Specialist through the ACSM and am confident in teaching nutrition to others.
16 Subjects: including algebra 1, writing, grammar, Microsoft Excel
...My goal is to to simplify the amount of 'stuff' students must remember and use mechanically and to enhance understanding, so that the math can be applied to new situations (Chemistry, Physics
are heavily tied to math). I have 2 daughters 8th grade and a senior so I see math as a parent and a teac...
7 Subjects: including algebra 1, algebra 2, geometry, precalculus
...Learning chemistry can be very difficult at times. However, I will do my best to help you achieve success in your studies.I have taught organic chemistry at the college level. My background is
in synthetic organic chemistry and organometallic chemistry.
10 Subjects: including algebra 2, algebra 1, calculus, chemistry
...I live in the Richardson area and I will meet you at your home or a local library. Sign up today and begin increasing your confidence and test scores! Thank you and I look forward to working
with you!I teach an SAT prep class at a private school, and students improve their math score by an average of 100+ points after taking the class.
11 Subjects: including algebra 1, algebra 2, precalculus, SAT math
Nearby Cities With algebra Tutor
Allen, TX algebra Tutors
Carrollton, TX algebra Tutors
Denton, TX algebra Tutors
Fairview, TX algebra Tutors
Frisco, TX algebra Tutors
Garland, TX algebra Tutors
Irving, TX algebra Tutors
Lewisville, TX algebra Tutors
Lucas, TX algebra Tutors
Melissa algebra Tutors
Parker, TX algebra Tutors
Plano, TX algebra Tutors
Richardson algebra Tutors
The Colony algebra Tutors
Wylie algebra Tutors
Solve This Math Problem, Win a Million Bucks | TIME.com
Want to make a quick million? All you have to do is figure out a little math problem that goes like this: A^x + B^y = C^z. Simple algebra, right?
Oh how deceptively innocuous a few elementary variables can seem. You’re actually looking at something inspired by one of the great mysteries of mathematics, known as Fermat’s Last Theorem and named
after the 17th century French lawyer and mathematician Pierre de Fermat. Fermat came up with his own theorem back in 1637, scribbling it in the margins of his copy of the Greek text Arithmetica by
Diophantus and surmising that — put your math caps on and buckle up — if n were an integer greater than 2, then the equation X^n + Y^n = Z^n has no positive integral solutions. The note was
discovered after Fermat’s death, and it took over 350 years and untold failed attempts by others for someone to prove the theorem. In 1995, British mathematician Andrew Wiles, who’d been fascinated
with the theorem since he was a child, finally got the job done, having puzzled over it in secret for roughly six years.
That’s where Texas billionaire D. Andrew Beal comes in. In 1993, he posited a closely related number theory problem, since dubbed Beal’s Conjecture (the first A-B-C equation above), which says a
solution is possible only when A, B and C share a common prime factor, given exponents x, y and z greater than 2. Beal’s been trying to prove his theorem ever since, reports ABC News, offering
cash rewards in steadily increasing amounts — $5,000 in 1997, $100,000 in 2000 – to anyone with the knack to get the job done.
The prize total in 2013: $1 million, which is either a sign of Beal’s magnanimity or his skepticism that it’s actually possible. (Since Beal is worth a reported $8 billion, there’s little need to
worry about whether he’ll pay the winner.)
It’s apparently not just about the money for Beal, either: In a statement, he said “I’d like to inspire young people to pursue math and science. Increasing the prize is a good way to draw attention
to mathematics generally … I hope many more young people will find themselves drawn into the wonderful world of mathematics.”
[Update: The challenge is sponsored by the American Mathematical Society; the rules, including where to submit proposed solutions by email or snail mail, are available here.]
134 comments
The answer is 3
So this is saying A to the X plus B to the y equals C to the z. A, B, C, x, y, and z are all positive integers and and the latter three are all greater than 2. Am I missing something here? There's
an endless number of answers. Here's one, and it's a simple one: Say A, B, and C are all 2. Then x = 3, y = 3 and z = 4. That's one example. Now the reason why A, B, and C must have a common
prime factor is because A to the x and B to the y must ADD into C to the z.
Here's a link to the free e-book.
Just finished my book How To Solve the Beal and Other Mathematical Conjectures. It's free on Kindle for the next 3 days. It can be solved via the Pythagorean theorem because it is the
Pythagorean theorem. Say n=4 so A^4 +B^4=C^4 or a^2A^2+b^2B^2=c^2C^2. a,b,and c become dimensionless factors and are applied toward enlarging the AREAS of A^2, B^2, and C^2 respectively. This is
the Fermat conjecture. Now the Beal is practically the same thing, works the same way except the exponents x and y have to be expressed in terms of z the exponent for the hypotenuse, C..
The answer is 1. Reason: anything with this high of stacks wouldn't have a very complicated answer.
The answer is:
A= a^(2/x)
B= b^(2/y)
C= c^(2/z)
making the equation:
[a^(2/x)^x] + [b^(2/y)^y] = [c^(2/z)^z]
4(x)+4(y)=8(z). 4(3)+4(3)=8(3). 12+12=24.
The answer is 0
Has the proof of the pythagorem theorem ever been discovred? That should help explain this. Good luck for those that like mathematical proofs.
Pythagorym theorem will explain it. Good luck
this problem is easy, seeing as a positive integer is above 0 and they all share a common prime factor; a prime number's factors are the number itself and 1, and so the most common factor is 1. I can see the
million now.
the answer is D EMAIL ME FOR MY ADDRESS TO SEND THE CHECK
THIS CONTEST BETTER BE REAL OR WE WILL MEET IN COURT :-) JOKING
It's too easy to show that the Beal conjecture is wrong. Thanks.
What exactly is the equation used for?
You don't even have the theorem correct in the first sentence. Come on Matt. A^x + B^y = C^z is not the same as X^n + Y^n = Z^n. Totally different.
For a million bucks, I'll "prove" it using by far the most popular technique ever invented by humanity: "It must be so, because God said so."
What do you mean, that doesn't count?
I'm unpleasant, but the assertion "The equation states that if A^x + B^y = C^z, where A, B, C, x, y, z are positive integers, with x, y, z greater than 2, then A, B, and C must have a common prime factor." (FROM http://sociedad.elpais.com/sociedad/2013/06/07/actualidad/1370616631_328551.html ) is completely erroneous! Giovanni Imbalzano. SEE also: http://www.lulu.com/shop/giovanni-imbalzano/fermat-%C3%A0-la-page/paperback/product-18811968.html OR http://www.lulu.com/shop/giovanni-imbalzano/extension-of-the-fermats-numbers/paperback/product-6207910.html OR http://www.lulu.com/shop/giovanni-imbalzano/solution-of-the-fermats-last-theorem/paperback/product-4589261.html G. Imbalzano
India - The Birthplace of Mathematics and Astronomy
Brilliant Hindu/ Indians discovered pretty much everything in Mathematics starting from Geometry, Algebra, Numbers 0 to 9, Trigonometry, Calculus, Arithmetic, Place value system, Decimal system and
the list goes on and on.
The Fractal mindset of Brilliant Hindu Mathematicians can never be replicated by western scientists with all their models and brain theories.
Last I heard, so called Newtons unsolvable 350 yr old Math puzzle was solved by a Brilliant Hindu/Indian kid.
Mock modular forms which came from the fractal mindset of another Brilliant Hindu/Indian mathematician Srinivasa Ramanujam 100 yrs ago was proved to be correct last year/
We in the west, fail to acknowledge the source of our Knowledge and wealth since 18th century (since colonialism of India)
India was the richest country and largest economy in the world followed by China, with a GDP of 25 to 30% until the time of British invasion in the 18th century, according to a 20 yrs research done
by Angus Maddison for OECD countries.
India was the number 1 manufacturer and exporter of textiles and Iron and steel and both these technologies were stolen by the British and made as twin engines of its so called Industrial revolution
Thank you
Checking the first 281,600 equations yields 25 solutions which by inspection all have common primes:
2^3 + 2^3 = 2^4     2^4 + 2^4 = 2^5     2^5 + 2^5 = 2^6     2^6 + 2^6 = 2^7     2^7 + 2^7 = 2^8
2^8 + 2^8 = 2^9     2^9 + 2^9 = 2^10    2^5 + 2^5 = 4^3     2^7 + 2^7 = 4^4     2^9 + 2^9 = 4^5
2^8 + 2^8 = 8^3     2^6 + 4^3 = 2^7     2^8 + 4^4 = 2^9     2^8 + 4^4 = 8^3     2^9 + 8^3 = 2^10
2^9 + 8^3 = 4^5     3^3 + 6^3 = 3^5     4^3 + 4^3 = 2^7     4^4 + 4^4 = 2^9     4^4 + 4^4 = 8^3
4^7 + 4^7 = 8^5     4^10 + 4^10 = 8^7   8^3 + 8^3 = 2^10    8^3 + 8^3 = 4^5     8^5 + 8^5 = 4^8
Further the number of candidate solutions per equation rolls off logarithmically.
So I choose to agree with Beal's conjecture, and speculate that its complexity of proof is at least Fermat, which took 358 years. That works out to about $1.40 an hour assuming 2000 hours of effort
per year.
More importantly Beal's conjecture is operationally true for the space of numbers of most common concern.
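The kind of exhaustive check described in this comment takes only a few lines of Python (the ranges and the helper name are mine). Every solution found in this small range shares a common factor, consistent with the conjecture:

```python
from itertools import product
from math import gcd

def beal_search(max_base=12, max_exp=6):
    """Find (a, x, b, y, c, z) with a^x + b^y = c^z and all exponents >= 3."""
    powers = {}  # value -> list of (base, exp) pairs, for O(1) lookup of c^z
    for c, z in product(range(1, max_base + 1), range(3, max_exp + 1)):
        powers.setdefault(c ** z, []).append((c, z))
    pairs = list(product(range(1, max_base + 1), range(3, max_exp + 1)))
    hits = []
    for (a, x), (b, y) in product(pairs, repeat=2):
        for c, z in powers.get(a ** x + b ** y, []):
            hits.append((a, x, b, y, c, z))
    return hits

sols = beal_search()
print(len(sols), all(gcd(gcd(a, b), c) > 1 for a, x, b, y, c, z in sols))
```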
2 ^ 8 + 2 ^ 8 = 2 ^ 9
2 ^ 9 + 2 ^ 9 = 2 ^ 10
2^10 + 2 ^10 = 2 ^ 11
the problem is not correctly stated, poorly worded, written by a guy who knows nothing about mathematics
You don't SOLVE theorems...you prove (or disprove) them. A proof requires the use of mathematical principles to demonstrate that the theorem must be true (or more accurately, that it cannot NOT be
true.) That's what you need to do to get the $1 million. Of course, in order to disprove it, you only need to come up with one example where it fails. I'm not sure if he'd pay you for that
though. :)
Will Hunting will be on it....after he's done seeing about a girl
As a consolation prize for you math/professor/hidden genius-types that are not going to win this $1,000,000 prize (and even more so if you love college football), you might want to Google “Shoelace
Shootout”. Game theory finds a new application in real life, and the short read will be like a breath of fresh air compared to banging your head against the wall trying to prove Beal’s Conjecture.
Your claim to fame will be you got a glimpse of the future, two years before it happens.
Max, you wanna give it a shot?
This has already been proven by Andrew Wiles
Yes you are missing something. In the Beal conjecture A,B,C,x,y,z all have to be integers and all have to be different . In the Fermat conjecture A,B,C all have to be different integers but x,y,z
all must be the same integer.
if the equation were: Ax+By=Cz, then the solution would be:
a^n*a^(-n+2) + b^n*b^(-n+2) = c^n*c^(-n+2)
@LeticiaCastro Long, long ago. It's easily demonstrable.
Beal's theorem says that the exponents must be different integers. So Pythagorean theorem doesn't work.
@pablo19590 No. The Beal conjecture is right.
@RichardSRussell Trolling the religious much? Mocking people with beliefs different than your own shows how stupid you are. I'd call you ignorant, but I don't want to lie.
@JohnRock I was born in India. And I have proved the Beal conjecture.
It is true : India has some problems. But who does not have a problem? On individual basis, group basis, community basis, country basis, religion basis or whatever basis. Who is perfect?
The issue here is the Beal conjecture. And it is correct.
@JohnRock Sure, discovery is great, but ADVANCEMENT is even better. India "was this" and "was that." What is it today? The headlines I recall from the New York Times is that India is still an
oppressive regime that limits free speech, and has no rape laws. Until government & social structure improves, I don't see India contributing as much as its Western rivals in advancements in math/
science/technology, solely due to lack of funding and current state of technology. In some parts of the country, one would have to scour through junkyards, looking for rubber tubes and glass bulbs
just to have certain surgeries performed on them (they have a about a year or two to search, since that's how long the waiting list for the surgery would be).
@JohnRock How about some facts from the dark side on your pro-Indian rant? They still have one of the most oppressive caste systems in the world, and many of them look down on marrying outside
Indian blood. India isn't the Mecca of civilization you make it out to be.
@vwvan Did you check all 281,600 equations? I just finished my book "The Beal Conjecture - Detailed Analysis, Complete Proof, Concrete Examples" . The first edition is available for purchase. I prove
that it works in all cases. Soon my proof will be accepted and there will be no more the Beal conjecture. It will be known as the Beal theorem. I hope Mr. Beal will be happy about it.
For more info, I can be reached at 416-725-0909.
@rekbhn All these three solutions fall in one class. They all have 2 as a common factor. My book provides the proof.
@PramodAcharya No. I disagree with you.
The problem is correctly stated. I have proved the conjecture in my book. "The Beal Conjecture - Detailed Analysis, Complete Proof, Concrete Examples" . For more info, I can be reached at
sounds like it might have been written by my high school calculus teacher then hahh
@ElizabethMcBride-Lilleg I have proved that it is correct. There cannot be a counterexample. It will always work. The details are in my book titled, "The Beal Conjecture - Detailed Analysis,
Complete Proof, Concrete Examples" .
You are right about mathematical principles. Indeed,. I have used mathematical principles to prove that it will work in all cases.
How do you like them apples!
@TimeTunnel Who knows they may not wait for two years.
I believe my proof is convincing enough that it is the best way to prove it. Hence, they may not wait for any better proof or a dispute to that proof.
@cspanb2 Andrew Wiles proved the Fermat's Last Theorem. Isn't it?
I have proved the Beal conjecture.
I see the article mentioned that. 1782^12 + 1841^12 = 1922^12
"Mocking people with beliefs different than your own shows how stupid you are."
hi pot, my name is kettle. hypocritical much?
@cspanb2 I am not sure this is true: 1782^12 is even number, 1841^12 is odd number, sum of odd and even should be odd
@KaushikDas @cspanb2 You are right. When you add an even number to an odd number, the result will be an odd number.
And that is one class of solutions I have examined thoroughly, in my recent book.
@KaushikDas @cspanb2
No, it's a Fermat near miss. On a 9-digit calculator it will appear to be correct (seemingly disproving the theorem), but the two sides differ after the 9th digit.
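The parity argument in this exchange is easy to confirm with exact integer arithmetic, which also shows just how close the near miss is:

```python
lhs = 1782 ** 12 + 1841 ** 12  # even^12 + odd^12, so the sum is odd
rhs = 1922 ** 12               # even

print(lhs == rhs)            # False: Fermat survives
print(abs(lhs - rhs) / rhs)  # relative error tiny enough to fool a calculator
```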
An "existence contra partition of unity" statement for integer matrices?
While reading a blog post on partitions of unity at the Secret Blogging Seminar, the following question came to my mind.
Let $n$ be a positive integer and let $B_1$ and $B_2$ be $n \times n$ matrices with integer entries. Is it true that exactly one of the following two statements is true?
1. There is a vector $v \in \mathbb{Q}^n \backslash \mathbb{Z}^n$ such that both $B_1v$ and $B_2v$ are in $\mathbb{Z}^n$.
2. There are matrices $A_1$ and $A_2$ with integer entries such that $A_1B_1+A_2B_2=I$.
Here, $I$ denotes the $n \times n$ identity matrix. The case $n=1$ is Bézout's identity.
1 Answer
I believe that what you say is true. I'll sketch an argument.
Let f:Z^n ---> Z^2n be the map of free Z-modules given by the matrices B[1], B[2] put in column (i.e. the direct sum of the morphisms given by B[1] and B[2]). Now we rephrase conditions
(1) and (2) in a slightly more abstract way:
• (1) fails to hold if, and only if, there exists p:Z^2n ---> Z^n such that, together with f, it fits into a short exact sequence
0 ---> Z^n ---> Z^2n ---> Z^n ---> 0 (*)
Indeed, the failure of (1) means that any v in Q^n such that f(v) in Z^2n must be integral (i.e. v in Z^n). In particular, this implies that f is injective. Moreover, take w in Z^2n
representing a nonzero torsion element in the cokernel of f. As w represents a torsion element, Nw belongs to the image of f for some big enough positive integer N, so there is v in Z^n
such that f(v) = Nw. But now f(1/N v) = w, and this means, by the failure of (1), that 1/N v is integral, so w is in the image of f and the cokernel of f has no torsion. As a finitely
generated torsion-free Z-module is free, we get an exact sequence like (*) above. This argument can easily be reversed, to show the equivalence between the existence of this exact
sequence and the failure of (1).
• (2) holds if, and only if there exists a morphism of Z-modules r:Z^2n ----> Z^n such that rf = id.
Let r be represented by a matrix (A[1], A[2]). Then rf has matrix A[1]B[1] + A[2]B[2], and rf = id if, and only if, (2) holds.
Now, the proof of what you asked for is easy. (1) fails if, and only if, we can form the exact sequence (*), but such an exact sequence always splits because Z^n is projective, so we can form such an exact sequence if, and only if, there exists a splitting r: Z^2n ----> Z^n, which is exactly condition (2).
Very good solution. Thank you. By the way, the statement is also true (and much simpler) if we replace Q/Z by some field k, in the following sense. It is an exercise in linear algebra to show that if $B_1$ and $B_2$ are $n \times n$ matrices with entries in k, then either they have a common eigenvector with eigenvalue 0 or there exist $n \times n$ matrices $A_1$ and $A_2$ with entries in k such that $A_1B_1+A_2B_2=I$. – Philipp Lampe Oct 17 '09 at 19:35
differentiation f(x)=xe^(5x), f'(x)=(1/5)
March 13th 2007, 06:42 PM
differentiation f(x)=xe^(5x), f'(x)=(1/5)
ok so I got 5xe^(5x) + e^(5x).
now what do I do with the (1/5)?
and is my differentiation correct?
March 13th 2007, 06:47 PM
your differentiation is correct. as to what to do with the (1/5), I think you just equate it to the derivative and try to solve for x (which I think will be hard). what were the full directions?
March 13th 2007, 06:51 PM
ok, so maybe it's just me, but I tried equating and solving for x; can't do it, as I thought. so I'm guessing that's not what the question asked
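For what it's worth, setting the derivative equal to 1/5 gives e^(5x)(1 + 5x) = 1/5, which has no closed-form solution but is easy to solve numerically. A bisection sketch (my own addition, assuming equating was the intent; there is a single real root, since the left side is below 1/5 for x ≤ -0.4 and increases through 1/5 before x = 0):

```python
import math

def f_prime(x):
    # derivative of x*e^(5x): e^(5x) + 5x*e^(5x) = e^(5x)*(1 + 5x)
    return math.exp(5 * x) * (1 + 5 * x)

def bisect(g, lo, hi, tol=1e-12):
    # assumes g(lo) and g(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(lambda x: f_prime(x) - 0.2, -0.4, 0.0)
print(root)  # the root is near x = -0.125
```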
March 13th 2007, 06:59 PM
thank you.
i'm confused also.
but i don't know if he means to do the integral of it.
that was all the directions.
like should I do the integral of 5e^(5x)x + e^(5x) dx ?
March 13th 2007, 07:05 PM
you're telling me the question was phrased just like that?
"differentiation f(x)=xe^(5x), f'(x)=(1/5)"
Quadratic Formula Blogs
As a student, I found that I remembered information a lot more easily when it was in a song. I learned the 'quadratic formula song' in one of my math classes and have not forgotten the formula since. Several of my students have also found this song helpful (and catchy!), so I thought I'd share:

The 'Quadratic Formula Song' (sung to the tune of 'Pop Goes the Weasel'):

The quadratic formula is
negative b plus or minus
the square root of b squared minus four a c
all over 2a!

(Warning: this will get stuck in your head!)
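The formula the song encodes translates directly into code; a small illustrative sketch (my own addition, covering the real-root case):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0, assuming real roots (b^2 - 4*a*c >= 0)."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    # "negative b plus or minus the square root of b squared minus 4ac, all over 2a"
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(quadratic_roots(1, -5, 6))  # (3.0, 2.0), the roots of x^2 - 5x + 6
```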
Quantum communication complexity
Quantum communication complexity tries to quantify the communication reduction possible by using quantum effects during a distributed computation.
At least three quantum generalizations of CC have been proposed; for a survey see the suggested text by G. Brassard.
The first one is the qubit-communication model, where the parties can use quantum communication instead of classical communication, for example by exchanging photons through an optical fiber.
In a second model the communication is still performed with classical bits, but the parties are allowed to manipulate an unlimited supply of quantum entangled states as part of their protocols. By
doing measurements on their entangled states, the parties can save on classical communication during a distributed computation.
The third model involves access to previously shared entanglement in addition to qubit communication, and is the least explored of the three quantum models.
Two Equations Governing Light's Behavior: Part One
λν = c
Return to Electrons in Atoms menu
Go to Part Two of Light Equations
Worksheet - Calculate frequency when given wavelength
Worksheet - Calculate wavelength when given frequency
There are two equations concerning light that are usually taught in high school. Typically, both are taught without any derivation as to why they are the way they are. That is what I will do in the discussion below.
Equation Number One: λν = c
Brief historical note: I am not sure who wrote this equation (or its equivalent) first. The wave theory of light has its origins in the late 1600's and was developed mathematically starting in the
early 1800's. It was James Clerk Maxwell, in the 1860's, who first predicted that light was an electromagnetic wave and computed (rather than measured) its speed. By the way, the proof that light's
speed was finite was published in 1676 and the first reliable measurements of the speed of light, ones that were very close to the modern value, took place in the late 1850's.
Each symbol in the equation is discussed below. Also, right before the example problems, there is a mention of the two main types of problems teachers will ask using the equation. I encourage you to
take a close look at that section.
1) λ is the Greek letter lambda and it stands for the wavelength of light. Wavelength is defined as the distance between two successive crests of a wave. When studying light, the most common units
used for wavelength are: meter, centimeter, nanometer, and Ångström. Even though the official unit used by SI is the meter, you will see explanations and problems which use the other three. Less often, you will see other units used; the picometer is the most common among the less-often used wavelength units. The Ångström is a non-SI unit commonly included in discussions of SI units because of its wide usage.
Keep in mind these definitions:
one centimeter equals 10¯^2 meter
one nanometer equals 10¯^9 meter
one Ångström equals 10¯^8 centimeter
The symbol for the Ångström is Å.
Most certainly, you will need to move easily from one unit to the other. For example, notice that 1 Å = 10¯^10 meter. This means that 10 Å = 1 nm. So, if you are given an Ångström value for
wavelength and a nanometer value is required, divide the Ångström value by 10. If you can't make easy transitions between various metric units, you'd better go back to study and practice that area
some more.
2) ν is the Greek letter nu. It is NOT the letter v, it is the Greek letter nu!!! It stands for the frequency of the light wave. Frequency is defined as the number of wave cycles passing a fixed reference point in one second. When studying light, the unit for frequency is called the Hertz (its symbol is Hz). One Hertz means one complete cycle passes the fixed point each second, so a million Hz means a million cycles pass the fixed point each second.
There is an important point to make about the unit on Hz. It is NOT commonly written as cycles per second (or cycles/sec), but only as sec¯^1 (more correctly, it should be written as s¯^1; you need
to know both ways). The "cycles" part is deleted, although you may see an occasional problem which uses it.
A brief mention of cycle: imagine a wave, frozen in time and space, where a wave crest is exactly lined up with our fixed reference point. Now, allow the wave to move until the following crest is exactly lined up with the reference point, then freeze the wave in place. This is one cycle of the wave and, if all that took place in one second, then the frequency of the wave is 1 Hz.
In any event, the only scientifically useful part of the unit is the denominator and so "per second" (remember, usually as s¯^1) is what is used. The numerator "cycles" is not needed and so its
presence is simply understood and, if writing a fraction is necessary, a one would be used, as in 1/sec.
3) c is the symbol for the speed of light, the speed at which all electromagnetic radiation moves when in a perfect vacuum. (Light travels slower when passing through objects such as water, but it
never travels faster than when in a perfect vacuum.)
Both ways shown below are used to write the value. You need to be aware of both:
3.00 x 10^8 m/s
3.00 x 10^10 cm/s
The actual value is just slightly less, but the above values are the ones generally used in introductory classes. (Sometimes you'll see 2.9979 rather than 3.00.) Be careful to match the exponent with the unit. Meters are longer than centimeters, so fewer of them are needed above to express the same speed.
Since there are two variables (λ and ν), we can have two types of calculations: (a) given wavelength, calculate frequency and (b) given frequency, calculate wavelength. By the way, these are the two types of problems teachers generally ask on the test.
Note that we can easily rearrange the main equation to fit these two types of problems:
(a) given the wavelength, calculate the frequency; use this equation: ν = c / λ
(b) given the frequency, calculate the wavelength; use this equation: λ = c / ν
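The two rearrangements translate directly into code; a small sketch in SI units (the function names are mine, not from this tutorial):

```python
C_M_PER_S = 3.00e8  # speed of light in m/s

def frequency_from_wavelength(lambda_m):
    """nu = c / lambda; wavelength in meters, frequency in s^-1 (Hz)."""
    return C_M_PER_S / lambda_m

def wavelength_from_frequency(nu_hz):
    """lambda = c / nu; frequency in s^-1 (Hz), wavelength in meters."""
    return C_M_PER_S / nu_hz

# Problem #3 below: 555 nm gives ~5.40 x 10^14 s^-1
print(frequency_from_wavelength(555e-9))
# Problem #4 below: 4.95 x 10^14 s^-1 gives ~606 nm
print(wavelength_from_frequency(4.95e14) * 1e9)
```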
Problem #1: What is the frequency of red light having a wavelength of 7000 Å?
The solution below depends on converting Å into cm. This means you must remember that the conversion is 1 Å = 10¯^8 cm. The solution:
(7000 x 10¯^8 cm) (x) = 3.00 x 10^10 cm/sec
Notice how I did not bother to convert 7000 x 10¯^8 into scientific notation. If I had done so, the value would have been 7.000 x 10¯^5.
Note also that I effectively consider 7000 to be 4 significant figures. I choose to do this because I know wavelength measurements are very accurate and that 6, 7, or even 8 sig figs are possible. At
an introductory level, you will not know this, so that is why I am telling you here. Also, the value for the speed of light is known to nine significant figures, as in 299,792,458 m s¯^1. However,
3.00 is good enough for introductory work.
The answer is 4.29 x 10^14 s¯^1
Problem #2: What is the frequency of violet light having a wavelength of 4000 Å?
The solution below depends on converting Å into cm. This means you must remember that the conversion is 1 Å = 10¯^8 cm. The solution:
(4000 x 10¯^8 cm) (x) = 3.00 x 10^10 cm/sec
Notice how I did not bother to convert 4000 x 10¯^8 into scientific notation. If I had done so, the value would have been 4.000 x 10¯^5. Note also that I effectively consider 4000 to be 4 significant figures.
The correct answer is 7.50 x 10^14 s¯^1
Be aware that the range of 4000 to 7000 Å is taken to be the range of visible light. Notice how the frequencies stay within more-or-less the middle area of 10^14, ranging from 4.29 to 7.50, but
always being 10^14. If you are faced with this calculation and you know the wavelength is a visible one (say 555 nm), then you know the exponent on the frequency MUST be 10^14. If it isn't, then YOU
(not the teacher) have made a mistake.
Problem #3: What is the frequency of EMR having a wavelength of 555 nm? (EMR is an abbreviation for electromagnetic radiation.)
First, let us convert nm into meters. Since one meter contains 10^9 nm, we have the following conversion:
555 nm x (1 m / 10^9 nm)
555 x 10¯^9 m = 5.55 x 10¯^7 m
Inserting into λν = c, gives:
(5.55 x 10¯^7 m) (x) = 3.00 x 10^8 m s¯^1
x = 5.40 x 10^14 s¯^1
Problem #4: What is the wavelength (in nm) of EMR with a frequency of 4.95 x 10^14 s¯^1?
Substitute into λν = c, as follows:
(x) (4.95 x 10^14 s¯^1) = 3.00 x 10^8 m s¯^1
x = 6.06 x 10¯^7 m
Now, we convert meters to nanometers:
6.06 x 10¯^7 m x (10^9 nm / 1 m) = 606 nm
Problem #5: What is the wavelength (in both cm and Å) of light with a frequency of 6.75 x 10^14 Hz?
The fact that cm is asked for in the problem allows us to use the cm/s value for the speed of light:
(x) (6.75 x 10^14 s¯^1) = 3.00 x 10^10 cm s¯^1
x = 4.44 x 10¯^5 cm
Next, we convert to Å:
(4.44 x 10¯^5 cm) x (1 Å / 10¯^8 cm) = 4440 Å
An interesting little light trivia: light travels about one foot every nanosecond. You might try and work out the proper calculation before checking the answer.
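One way to check the trivia (a quick back-of-the-envelope computation — spoiler, so try it yourself first):

```python
C = 3.00e8          # speed of light in m/s
FOOT = 0.3048       # meters per foot
distance_per_ns = C * 1e-9     # meters traveled in one nanosecond: 0.3 m
print(distance_per_ns / FOOT)  # about 0.98 feet, i.e. roughly one foot per nanosecond
```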
Are there any Zero-Knowledge Proofs?
I speculated before about zero-knowledge proofs and existentials. What I had in mind was encoding knowledge hiding via types. I suspect this result would be more interesting than I had imagined. Here
is my understanding:
Zero-knowledge proofs are interactive proofs, and are therefore in the class IP. This class is the same as PSPACE, which is not yet known to be distinct from P (though it certainly contains P). So,
it's possible that P = IP = PSPACE, and ZK proofs can't hide anything the verifier couldn't calculate herself. In other words, we may one day discover that most of our ZK proof protocols are useless.
(This is not the whole story, as there are lots of variations on interactive proving.)
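For concreteness, here is a toy simulation of the textbook interactive ZK protocol for graph isomorphism, one of the classic examples of a zero-knowledge proof. The commitment step is omitted (the "commitment" is just sent in the clear), so this only illustrates the round structure, not a secure implementation; all names are mine:

```python
import random

def apply_perm(edges, perm):
    # edges: set of frozenset({u, v}); perm: dict mapping vertex -> vertex
    return frozenset(frozenset({perm[u], perm[v]}) for u, v in map(tuple, edges))

def zk_graph_iso_round(g0, g1, sigma, vertices, rng):
    """One round: the prover knows sigma with apply_perm(g0, sigma) == g1
    and convinces the verifier without revealing sigma itself."""
    # Prover: pick a random re-labelling pi and announce H = pi(g1).
    pi_vals = list(vertices)
    rng.shuffle(pi_vals)
    pi = dict(zip(vertices, pi_vals))
    h = apply_perm(g1, pi)
    # Verifier: random challenge bit b.
    b = rng.randint(0, 1)
    # Prover: reveal a map taking g_b to H (pi for g1, pi composed with sigma for g0).
    reveal = pi if b == 1 else {v: pi[sigma[v]] for v in vertices}
    # Verifier: check the revealed map really sends g_b to H.
    return apply_perm(g1 if b == 1 else g0, reveal) == h

vertices = [0, 1, 2, 3]
sigma = {0: 2, 1: 0, 2: 3, 3: 1}
g0 = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}
g1 = apply_perm(g0, sigma)
rng = random.Random(42)
# An honest prover passes every round; a cheater would fail about half of them.
assert all(zk_graph_iso_round(g0, g1, sigma, vertices, rng) for _ in range(50))
```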
So, if there were a correspondence between these ZK proofs and existential types, it would either settle the P = PSPACE problem or discover a problem in type theory that is equivalent to it.
Each of these seems quite unlikely to me.
There is at least one person, however, who is doing research about the relationship between types and cryptography.
1 comment:
Chung-chieh Shan said...
More research between types and cryptography: "Logical Relations for Encryption", "A Bisimulation for Dynamic Sealing" (Eijiro Sumii and Benjamin C. Pierce)
Tips and Tricks on - Syllogism
Tips and Tricks on Syllogism
Read Today's Rajnikanth Formula
Check out your Daily Challenge
Tip #1
Solve the questions through a Venn Diagram. Always make sure common areas are shaded to give you a correct answer.
Tip #2
Shortcut rules (if Venn Diagrams are confusing you) between Statement 1 and Statement 2 in that order
All + All = All
All + No = No
All + Some = No Conclusion
Some + All = Some
Some + Some = No Conclusion
Some + No = Some Not
No + No = No Conclusion
No + All = Some not reversed
No + Some = Some not reversed
Tip #3
You can cancel out common terms in the two statements given, then apply the syllogism rules to the remaining terms and solve. E.g. Some dogs are goats; All goats are cows. Cancel out "goats", which leaves us with "Some dogs are ... All ... are cows". The important words remaining are SOME and ALL, in that order. SOME + ALL = SOME, hence the conclusion is "Some dogs are cows".
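The shortcut rules in Tip #2 are just a lookup table, which you can encode directly (a small illustrative sketch of my own):

```python
# Tip #2's shortcut table as a lookup: (statement 1, statement 2) -> conclusion
RULES = {
    ("All",  "All"):  "All",
    ("All",  "No"):   "No",
    ("All",  "Some"): "No Conclusion",
    ("Some", "All"):  "Some",
    ("Some", "Some"): "No Conclusion",
    ("Some", "No"):   "Some Not",
    ("No",   "No"):   "No Conclusion",
    ("No",   "All"):  "Some not reversed",
    ("No",   "Some"): "Some not reversed",
}

# Tip #3's example: "Some dogs are goats" + "All goats are cows"
print(RULES[("Some", "All")])  # prints "Some", so: Some dogs are cows
```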
Tip #4
Interchange between reading the question as well as the conclusion before arriving at the answers. Always evaluate each and every conclusion to find out how many conclusions are possible.
Tip #5
Avoid relying on common knowledge, as syllogism questions usually contain statements that are untrue in the real world
Tip #6
Remember some implications
All => Some, e.g. All A are B also implies Some A are B (being a subset) and Some B are A
Some <=> Some, e.g. Some A are B also implies Some B are A
No <=> No, e.g. No A are B also implies No B are A
Aromas Math Tutors
...I'm a part-time mom with a bachelor's degree in biology. I excelled in all subjects, particularly math and science, receiving exceptional test scores in AP calculus and AP biology in high
school. While attending UC Davis and UC Santa Cruz, I took many math and science courses, including calculu...
37 Subjects: including calculus, ACT Math, English, reading
...Yes, I have used high school algebra in my research. I have a B.A. in mathematics plus 90 graduate units and a B.A. in chemistry. I have several years experience teaching algebra 2 at the high
school level and have also had the pleasure of teaching the equivalent course at the college level.
19 Subjects: including geometry, computer science, linear algebra, computer engineering
...All I ask is for you to trust me. I will lift the veil of uncertainty and give the student power over his/her struggles. I will give the student the confidence needed to understand.
7 Subjects: including algebra 1, geometry, prealgebra, elementary math
...I have a M.S. degree in Condensed Matter Physics from Iowa State University with more than 5 years of teaching experience in physics. I was a recipient of the Graduate College Teaching
Excellence Award for my teaching experience at Iowa State University as recommended by the department with the ...
14 Subjects: including geometry, Microsoft Word, prealgebra, trigonometry
...I identify with the vital mission of private schools and believe highly in them. I have been a part-time university level instructor for many years. I am more than happy to customize lessons
that will fit your needs and requirements for any subject or Test Preparation exercise.
75 Subjects: including ACT Math, English, Spanish, reading
Math Forum Discussions
Topic: pdegrad gives incorrect values at edges
Replies: 2 Last Post: Apr 29, 2013 6:42 PM
Re: pdegrad gives incorrect values at edges
Posted: Apr 29, 2013 6:42 PM
"Michael Thomas" <mthomas@mathworks.com> wrote in message <a24620$kmn$1@news.mathworks.com>...
> The errors you are encountering are common errors inherent to most finite
> element method implementations. Refining your mesh will reduce the
> spreading of the contour lines in your grad(u) plot, and improve the
> accuracy of grad(u) linearly with the mesh size. The finer the mesh, the
> closer you will get to 0 at x=0. The PDE toolbox uses linearly interpolated
> basis functions which results in a second order accurate method. Each
> grad(u) you apply to the final solution results in roughly a loss of
> accuracy of one order, so when you recalculate div(grad(u)), you will see
> order unity errors. The results I get are really better than I would have
> expected (a little better than unity, but not much). The 2.5 you see at the
> edge is from interpolating the results a couple of times to get back the
> second derivative. The solution is as expected. The errors introduced by
> pdegrad are also expected as they are inherent to the FEM implementation.
> What you are seeing is errors introduced by post-processing the solution.
> The bad news is, you are going to need a clever way to determine
> div(grad(u)), or restructure/define your problem, if you have any hope of
> finding an accurate solution since it is needed for your coefficients.
> One suggestion is to use higher order interpolants, but that would mean
> writing your own FEM implementation. Most methods for accurately
> differentiating solutions to Poisson type problems are pretty complicated
> mathematically (using Green's second identity and such), making the
> implementation that much more difficult. Most of these methods also have
> pretty significant problems near the boundaries. I would work on redefining
> the problem to try to eliminate the need for second derivatives in your
> coefficients.
> Good luck.
> Mike
> "Dana Edgell" <edgell@far-tech.com> wrote in message
> news:eea9590.-1@WebX.raydaftYaTP...
> > I want to solve a nonlinear system problem using the PDE toolbox where
> some of the coefficients c depend on grad(u) and grad(grad(u)). Before
> attempting this I was examining the output of the PDE function pdegrad for a
> very simple case and found some disturbing results.
> >
> >
> > For
> > - a simple box geometry x= 0 to 1, y=0 to pi/2
> > - boundary conditions: right side dirichlet u=0; all other sides neumann
> grad(u)=0
> > - elliptic equation grad(grad(u))=5 i.e. c=1, f=5, a=0, d=0
> >
> >
> > pdetool seems to give the correct solution BUT when plotting grad(u) which
> should be equal to 5x there is an obvious problem
> >
> >
> > - at the x=0 and x=1 boundaries. The grad(u) at x=0 should be zero but it
> is not.
> > - A contour plot of grad(u) should be all evenly spaced lines but at both
> the x=0 and x=1 boundaries, the contour lines are noticeably farther apart.
> > - grad(grad(u)) calculated via pdegrad should be 5 throughout the space,
> however, at x=0 and x=1 grad(grad(u)) = 2.5, exactly half of the correct
> value.
> >
> >
> > The above are basically all the same problem, the function pdegrad (and
> the grad(u) calculated by the pdetool plotting presumably using the same
> function) give an incorrect answer at x=0 and x=1. This is particularly
> disturbing as grad(u)=0 at x=0 is a boundary condition.
> > However, given that grad(u) (or ux) seems to be correct in the bulk of the
> solution space I assume that the problem is being solved correctly using the
> correct boundary conditions.
> >
> >
> > Does anyone have a solution to my problem? How can I use coefficients that
> depend on grad(u) (and hopefully grad(grad(u)) if pdegrad gives incorrect
> answers at the edges?
> >
> >
> > Thanks
> > Dana Edgell
I understand that this post was created a long time ago, but I am facing a similar problem with pdegrad. I am solving a coupled fluid flow problem and therefore share the gradient at the boundary (which is evaluated using pdegrad) at each iteration. Though each fluid flow domain is solved with 2nd-order accuracy, the coupled solution is only 1st-order accurate because of pdegrad. Have you found any solution to the problem?
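The boundary behavior discussed in this thread is easy to reproduce in one dimension: the gradient of a piecewise-linear interpolant is second-order accurate at element midpoints, but using an element value as the nodal gradient at a boundary is only first-order accurate. A small sketch (my own illustration in Python, not MATLAB/pdegrad; the helper name is mine):

```python
def boundary_grad_error(n):
    """Interpolate the exact solution u = 2.5*(x^2 - 1) of u'' = 5 with
    u'(0) = 0 and u(1) = 0 on n linear elements, then report the error of
    the first element's gradient when it is used as the nodal gradient at x = 0."""
    h = 1.0 / n
    u = [2.5 * (i * h) ** 2 - 2.5 for i in range(n + 1)]
    g0 = (u[1] - u[0]) / h   # piecewise-constant gradient on the first element
    return abs(g0 - 0.0)     # exact u'(0) is 0 (the Neumann condition)

# The error halves as the mesh is refined: first-order convergence at the boundary.
print(boundary_grad_error(10), boundary_grad_error(20))  # ~0.25 and ~0.125
```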
Archives of the Caml mailing list > Message from Thomas Fischbacher
Date: -- (:)
From: Thomas Fischbacher <Thomas.Fischbacher@P...>
Subject: Ocaml in String Theory
Ladies and Gentlemen,
a happy new year to all readers of this list.
For a non-mainstream language like Ocaml, it is evidently of great
importance to have good answers to the question about its practical
relevance. It seems as if we now have another nice application to add to
the list: in today's official arxiv.org listing, the following
preprint paper (by myself and two colleagues from the Albert Einstein
Institute) appeared:
Physically speaking, one of the hot topics in string theory today is the
conjectured equivalence of certain quantum field theories which neither at
the classical level nor as quantum theories have an intrinsic length scale
on the one side - so-called conformal field theories - and string theory
on a space-time which approaches constant negative curvature at infinity
(i.e. anti deSitter space) - see e.g.
What we did was to provide further strong evidence that the physical key
property of integrability holds, however another important property known
as BMN scaling (sorry, I cannot go into details) is violated in
quite non-obvious ways.
Computationally, what we had to do in order to achieve this result was to
develop a fast algorithm which furthermore can be implemented close to the
machine level that allows us to sum literally billions of contributions
from different planar feynman graphs with four loops in them. Planarity is
the key property here that makes this calculation feasible - if we
included the non-planar graphs as well, we would have had to deal with an
estimated number of contributions of about half a quadrillion.
At the much simpler three loop level, our approach is faster than
previous ones (using the FORM program which was built explicitly for fast
quantum field theoretic calculations) by about a factor of 100. This comes
in part from our improved algorithm that singles out planar graphs (and
hence scales much better than algorithms which do not), in part from
doing term transformations not in an interpreted fashion, but
directly at machine code level (via compiled ocaml), furthermore from
carefully ensuring not to do unnecessary work when simplifying terms,
from evil hacks (such as abusing the FPU to do exact(!) fraction
arithmetic for fractions of the form <small numerator>/<small power of 2>),
and - quite important - from certain algorithmic tricks from the
functional programmer's toolbox such as continuation coding and lazy
evaluation. In other words, it would not at all have been possible
in anything else but a fast compiled functional language. Nevertheless,
we still had to burn 88 000+ CPU-hours on 2 GHz Athlon (and Opteron)
hardware to do the largest piece of the calculation and we are very
grateful towards our numerical colleagues for providing us with
appropriate resources - this is true symbolic supercomputing.
While we (the authors) are not yet sure about this, we think to have
strong reason to believe that this may be the (presumably by orders of
magnitude) largest symbolic algebra calculation performed so far -
counting the number of term transformations. (Excluding cypher breaking
and prime search attempts, as the underlying questions hardly can be
regarded as of symbolic nature.) We know that there have been quite large
four-loop QCD calculations before involving something like 50 000
individual graphs that furthermore had to deal with some transformations
on integrals (which we do not have, due to a certain kind of reduction we
perform in our model) and hence are somewhat more difficult to calculate
than our graphs - but certainly not by a factor of 100 000. If anyone
knows better and can tell us about an even larger symbolic calculation, we
would be glad to hear about it.
While our paper is essentially for physicists, it features a
self-contained appendix explaining the algorithmic and implementation
aspects of our work that should be readable especially for people with a
computer science background. Furthermore, one can download (details in the
paper) our source. Unfortunately, in order to actually build the program,
one needs a somewhat large development environment, as some of the ocaml
source and data files are machine generated by perl and CMU Common Lisp.
Admittedly, the code could be cleaner, but one should keep in mind here
a few external factors (i.e. pressure to publish new physical
results) which are different for computer scientists and physicists.
Well, it's not as bad as quite a lot of code in physics, and I
think I can show it around without having to pull a brown paper bag over
my head, but the style is certainly not one I'd like to see in textbooks.
Given the time (which I at present do not have), I'd like to clean it up
a bit more.
regards, tf@cip.physik.uni-muenchen.de (o_
Thomas Fischbacher - http://www.cip.physik.uni-muenchen.de/~tf //\
(lambda (n) ((lambda (p q r) (p p q r)) (lambda (g x y) V_/_
(if (= x 0) y (g g (- x 1) (* x y)))) n 1)) (Debian GNU)
6.43 in words
Quantum Frontiers
“What’s the hardest problem you’ve ever solved?”
Kids focus right in. Driven by a ruthless curiosity, they ask questions from which adults often shy away. Which is great, if you think you know the answer to everything a 7-year-old can possibly ask you.
Two Wednesdays ago, I was invited to participate in three Q&A sessions that quickly turned into Reddit-style AMA (ask-me-anything) sessions over Skype with four 5th grade classes and one 2nd grade
class of students at Medina Elementary in Medina, Washington. When asked by the organizers what I would like the sessions to focus on, I initially thought of introducing students to the mod I helped
design for Minecraft, called QCraft, which brings concepts like quantum entanglement and quantum superposition into the world of Minecraft. But then I changed my mind. I told the organizers that I
would talk about anything the kids wanted to know more about. It dawned on me that maybe not all 5th graders are as excited about quantum physics as I am. Yet.
The students took the bait. They peppered me with questions for over two hours — everything from “What is a quantum physicist and how do you become one?” to “What is it like to work with a fashion designer (about my collaboration with Project Runway’s Alicia Hardesty on Project X Squared)?” and of course, “Why did you steal the cannon?” (Learn more about the infamous Cannon Heist - yes kids, there is an ongoing war between the two schools and Caltech took the last (hot) shot just days ago.)
Then they dug a little deeper: “If we have a quantum computer that knows the answer to everything, why do we need to go to school?” This question was a little tricky, so I framed the answer like
this: I compared the computer to a sidekick, and the kids—the future scientists, artists and engineers —to superheroes. Sidekicks always look up to the superheroes for guidance and leadership. And
then I got this question from a young girl: “If we are superheroes, what should we do with all this power?” I thought about it for a second and though my initial inclination was to go with: “You
should make Angry Birds 3D!”, I went with this instead: “People often say, “Study hard so that one day you can cure cancer, figure out the theory of everything and save the world!” But I would rather
see you all do things to understand the world. Sometimes you think you are saving the world when it does not need saving—it is just misunderstood. Find ways to understand one another and move to look
for the value in others. Because there is always value in others, often hiding from us behind powerful emotions.” The kids listened in silence and, in that moment, I felt profoundly connected with
them and their teachers.
I wasn’t expecting any more “deep” questions, until another young girl raised her hand and asked: “Can I be a quantum physicist, or is it only for the boys?” The ferocity of my answer caught me by
surprise: “Of course you can! You can do anything you set your mind to and anyone who tells you otherwise, be it your teachers, your friends or even your parents, they are just wrong! In fact, you
have the potential to leave all the boys in the class behind!” The applause and laughter from all the girls sounded even louder among the thunderous silence from the boys. Which is when I realized my
mistake and added: “You boys can be superheroes too! Just make sure not to underestimate the girls. For your own sake.”
Why did I feel so strongly about this issue of women in science? Caltech has a notoriously bad reputation when it comes to the representation of women among our faculty and postdocs (graduate
students too?) in areas such as Physics and Mathematics. IQIM has over a dozen male faculty members in its roster and only one woman: Prof. Nai-Chang Yeh. Anyone who meets Prof. Yeh quickly realizes
that she is an intellectual powerhouse with boundless energy split among her research, her many students and requests for talks, conference organization and mentoring. Which is why, invariably, every
one of the faculty members at IQIM feels really strongly about finding a balance and creating a more inclusive environment for women in science. This is a complex issue that requires a lot of
introspection and creative ideas from all sides over the long term, but in the meantime, I just really wanted to tell the girls that I was counting on them to help with understanding our world, as
much as I was counting on the boys. Quantum mechanics? They got it. Abstract math? No problem.*
It was of course inevitable that they would want to know why we created the Minecraft mod, a collaborative work between Google, MinecraftEDU and IQIM – after all, when I asked them if they had played
Minecraft before, all hands shot up. Both IQIM and Google think it is important to educate younger generations about quantum computers and the complex ideas behind quantum physics; and more
importantly, to meet kids where they play, in this case, inside the Minecraft game. I explained to the kids that the game was a place where they could experiment with concepts from quantum mechanics
and that we were developing other resources to make sure they had a place to go to if they wanted to know more (see our animations with Jorge Cham at http://phdcomics.com/quantum).
As for the hardest problem I have ever solved? I described it in my first blog post here, An Intellectual Tornado. The kids sat listening in some sort of trance as I described the nearly perilous
journey through the lands of “agony” and “self doubt” and into the valley of “grace”, the place one reaches when they learn to walk next to their worst fears, as understanding replaces fear and
respect for a far superior opponent teaches true humility and instills in you a sense of adventure. By that time, I thought I was in the clear – as far as fielding difficult questions from 10
year-olds goes – but one little devil decided to ask me this simple question: “Can you explain in 2 minutes what quantum physics is?” Sure! You see kids, emptiness, what we call the quantum vacuum,
underlies the emergence of spacetime through the build-up of correlations between disjoint degrees of freedom, we like to call entangled subsystems. The uniqueness of the Schmidt decomposition over
generic quantum states, coupled with concentration of measure estimates over unequal bipartite decompositions gives rise to Schrodinger’s evolution and the concept of unitarity – which itself only
emerges in the thermodynamic limit. In the remaining minute, let’s discuss the different interpretations of the following postulates of quantum mechanics: Let’s start with measurements…
Reaching out to elementary school kids is just one way we can make science come alive, and many of us here at IQIM look forward to sharing with kids of any age our love for adventuring far and wide
to understand the world around us. In case you are an expert in anything, or just passionate about something, I highly recommend engaging the next generation through visits to classrooms and Skype
sessions across state lines. Because, sometimes, you get something like this from their teacher:
Hello Dr. Michalakis,
My class was lucky enough to be able to participate in one of the Skype chats you did with Medina Elementary this morning. My students returned to the classroom with so many questions,
wonderings, concerns, and ideas that we could spend the remainder of the year discussing them all.
Your ability to thoughtfully answer EVERY single question posed to you was amazing. I was so impressed and inspired by your responses that I am tempted to actually spend the remainder of the year
discussing quantum mechanics. :)
I particularly appreciated your point that our efforts should focus on trying to “understand the world” rather than “save” the world. I work each day to try and inspire curiosity and wonder in my
students. You accomplished more towards my goal in about 40 minutes than I probably have all year. For that I am grateful.
All the best,
* Several of my female classmates at MIT (where I did my undergraduate degree in Math with Computer Science) had a clarity of thought and a sense of perseverance that Seal Team Six would be envious
of. So I would go to them for help with my hardest homework.
Tsar Nikita and His Scientists
Once upon a time, a Russian tsar named Nikita had forty daughters:
Every one from top to toe
Was a captivating creature,
Perfect—but for one lost feature.
So wrote Alexander Pushkin, the 19th-century Shakespeare who revolutionized Russian literature. In a rhyme, Pushkin imagined forty princesses born without “that bit” “[b]etween their legs.” A
courier scours the countryside for a witch who can help. By summoning the devil in the woods, she conjures what the princesses lack into a casket. The tsar parcels out the casket’s contents, and
everyone rejoices.
“[N]onsense,” Pushkin calls the tale in its penultimate line. A “joke.”
The joke has, nearly two centuries later, become reality. Researchers have grown vaginas in a lab and implanted them into teenage girls. Thanks to a genetic defect, the girls suffered from
Mayer-Rokitansky-Küster-Hauser (MRKH) syndrome: Their vaginas and uteruses had failed to grow to maturity or at all. A team at Wake Forest and in Mexico City took samples of the girls’ cells, grew
more cells, and combined their harvest with vagina-shaped scaffolds. Early in the 2000s, surgeons implanted the artificial organs into the girls. The patients, the researchers reported in the journal
The Lancet last week, function normally.
I don’t usually write about reproductive machinery. But the implants’ resonance with “Tsar Nikita” floored me. Scientists have implanted much of Pushkin’s plot into labs. The sexually deficient
girls, the craftsperson, the replacement organs—all appear in “Tsar Nikita” as in The Lancet. In poetry as in science fiction, we read the future.
Though threads of Pushkin’s plot survive, society’s view of the specialist has progressed. “Deep [in] the dark woods” lives Pushkin’s witch. Upon summoning the devil, she locks her cure in a casket.
Today’s vagina-implanters star in headlines. The Wall Street Journal highlighted the implants in its front section. Unless the patients’ health degrades, the researchers will likely list last week’s
paper high on their CVs and websites.
Much as Dr. Atlántida Raya-Rivera, the paper’s lead author, differs from Pushkin’s witch, the visage of Pushkin’s magic wears the nose and eyebrows of science. When tsars or millennials need medical
help, they seek knowledge-keepers: specialists, a fringe of society. Before summoning the devil, the witch “[l]ocked her door . . . Three days passed.” I hide away to calculate and study (though days
alone might render me more like the protagonist in another Russian story, Chekhov’s “The Bet”). Just as the witch “stocked up coal,” some students stockpile Red Bull before hitting the library. Some
habits, like the archetype of the wise woman, refuse to die.
From a Russian rhyme, the bones of “Tsar Nikita” have evolved into cutting-edge science. Pushkin and the implants highlight how attitudes toward knowledge have changed, offering a lens onto science
in culture and onto science culture. No wonder readers call Pushkin “timeless.”
But what would he have rhymed with “Mayer-Rokitansky-Küster-Hauser”?
“Tsar Nikita” has many nuances—messages about censorship, for example—that I didn’t discuss. To the intrigued, I recommend The Queen of Spades: And selected works, translated by Anthony Briggs and
published by Pushkin Press.
Defending against high-frequency attacks
It was the summer of 2008. I was 22 years old, and it was my second week working in the crude oil and natural gas options pit at the New York Mercantile Exchange (NYMEX.) My head was throbbing after
two consecutive weeks of disorientation. It was like being born into a new world, but without the neuroplasticity of a young human. And then the crowd erupted. “Yeeeehawwww. YeEEEeeHaaaWWWWW. Go get
‘em cowboy.”
It seemed that everyone on the sprawling trading floor had started playing Wild Wild West and I had no idea why. After at least thirty seconds, the hollers started to move across the trading floor.
They moved away 100 meters or so and then doubled back towards me, trailing one oblivious young trader. After a few meters, he finally got it, and I’m sure he learned a life lesson. Don’t be the biggest jerk in a room filled with
traders, and especially, never wear triple-popped pastel-colored Lacoste shirts. This young aspiring trader had been “spurred.”
In other words, someone had made paper spurs out of trading receipts and taped them to his shoes. Go get ‘em cowboy.
I was one academic quarter away from finishing a master’s degree in statistics at Stanford University and I had accepted a full time job working in the algorithmic trading group at DRW Trading. I was
doing a summer internship before finishing my degree, and after three months of working in the algorithmic trading group in Chicago, I had volunteered to work at the NYMEX. Most ‘algo’ traders didn’t
want this job, because it was far-removed from our mental mathematical monasteries, but I knew I would learn a tremendous amount, so I jumped at the opportunity. And by learn, I mean, get ripped
calves and triceps, because my job was to stand in place for seven straight hours updating our mathematical models on a bulky tablet PC as trades occurred.
I have no vested interests in the world of high-frequency trading (HFT). I’m currently a PhD student in the quantum information group at Caltech and I have no intentions of returning to finance. I
found the work enjoyable, but not as thrilling as thinking about the beginning of the universe (what else is?). However, I do feel that the current discussion about HFT is lop-sided and I’m hoping
that I can broaden the perspective by telling a few short stories.
What are the main attacks against HFT? Three of them include the evilness of: front-running markets, making money out of nothing, and instability. It’s easy to point to extreme examples of
algorithmic traders abusing markets, and they regularly do, but my argument is that HFT has simply computerized age-old tactics. In this process, these tactics have become more benign and markets
more stable.
Front-running markets: large oil producing nations, such as Mexico, often want to hedge their exposure to changing market prices. They do this by purchasing options. This allows them to lock in a
minimum sale price, for a fee of a few dollars per barrel. During my time at the NYMEX, I distinctly remember a broker shouting into the pit: “what’s the price on DEC9 puts.” A trader doesn’t want to
give away whether they want to buy or sell, because if the other traders know, then they can artificially move the price. In this particular case, this broker was known to sometimes implement parts
of Mexico’s oil hedge. The other traders in the pit suspected this was a trade for Mexico because of his anxious tone, some recent geopolitical news, and the expiration date of these options.
Some confident traders took a risk and faded the market. They ended up making between $1 million and $2 million from these trades, relative to what the fair price was at that moment. I mention relative to
the fair price, because Mexico ultimately received the better end of this trade. The price of oil dropped in 2009, and Mexico executed its options enabling it to sell its oil at a higher than market
price. Mexico spent $1.5 billion to hedge its oil exposure in 2009.
This was an example of humans anticipating the direction of a trade and capturing millions of dollars in profit as a result. It really is profit as long as the traders can redistribute their exposure
at the ‘fair’ market price before markets move too far. The analogous strategy in HFT is called “front-running the market” which was highlighted in the New York Times’ recent article “the wolf
hunters of Wall Street.” The HFT version involves analyzing the prices on dozens of exchanges simultaneously, and once an order is published in the order book of one exchange, then using this demand
to adjust its orders on the other exchanges. This needs to be done within a few microseconds in order to be successful. This is the computerized version of anticipating demand and fading prices
accordingly. These tactics, as I described them, sit in a grey area, but small variations on them quickly become illegal.
Making money from nothing: arbitrage opportunities have existed for as long as humans have been trading. I’m sure an ancient trader received quite the rush when he realized for the first time that he
could buy gold in one marketplace and then sell it in another, for a profit. This is only worth the trader’s efforts if he makes a profit after all expenses have been taken into consideration. One of
the simplest examples in modern terms is called triangle arbitrage, and it usually involves three pairs of currencies. Currency pairs are ratios, such as USD/AUD, which tells you how many Australian
dollars you receive for one US dollar. Imagine that there is a moment in time when the product of ratios $\frac{USD}{AUD}\frac{AUD}{CAD}\frac{CAD}{USD}$ is 1.01. Then, a trader can take her USD, buy
AUD, then use her AUD to buy CAD, and then use her CAD to buy USD. As long as the underlying prices didn’t change while she carried out these three trades, she would capture one cent of profit per dollar traded.
After a few trades like this, the prices will equilibrate and the ratio will be restored to one. This is an example of “making money out of nothing.” Clever people have been trading on arbitrage
since ancient times and it is a fundamental source of liquidity. It guarantees that the price you pay in Sydney is the same as the price you pay in New York. It also means that if you’re willing to
overpay by a penny per share, then you’re guaranteed a computer will find this opportunity and your order will be filled immediately. The main difference now is that once a computer has been
programmed to look for a certain type of arbitrage, then the human mind can no longer compete. This is one of the original arenas where the term “high-frequency” was used. Whoever has the fastest
machines, is the one who will capture the profit.
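The triangle-arbitrage arithmetic above is simple enough to sketch in a few lines of Python. The rates below are invented for illustration (chosen so the product of the three ratios is 1.01, as in the example), not market data:

```python
# Toy triangle arbitrage check: convert USD -> AUD -> CAD -> USD and see
# whether we end up with more USD than we started with. Rates are invented.

def triangle_arbitrage_profit(usd_aud, aud_cad, cad_usd, starting_usd=1.0):
    """Profit in USD from one round trip USD -> AUD -> CAD -> USD."""
    aud = starting_usd * usd_aud   # buy AUD with USD
    cad = aud * aud_cad            # buy CAD with AUD
    usd = cad * cad_usd            # buy USD back with CAD
    return usd - starting_usd

# Rates chosen so the product of the three ratios is 1.01, as in the text.
profit = triangle_arbitrage_profit(1.50, 0.90, 1.01 / (1.50 * 0.90))
print(round(profit, 4))  # 0.01 -- one cent of profit per dollar traded
```

When the product of the ratios is exactly 1, the profit is zero; a real desk would also have to subtract fees and account for the prices moving mid-trade, which is exactly why the fastest machines win.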
Instability: I believe that the arguments against HFT of this type have the most credibility. The concern here is that exceptional leverage creates opportunity for catastrophe. Imaginations ran wild
after the Flash Crash of 2010, and even if imaginations outstripped reality, we learned much about the potential instabilities of HFT. A few questions were posed, and we are still debating the
answers. What happens if market makers stop trading in unison? What happens if a programming error leads to billions of dollars in mistaken trades? Do feedback loops between algo strategies lead to
artificial prices? These are reasonable questions, which are grounded in examples, and future regulation coupled with monitoring should add stability where it’s feasible.
The culture in wealth driven industries today is appalling. However, it’s no worse in HFT than in finance more broadly and many other industries. It’s important that we dissociate our disgust in a
broad culture of greed from debates about the merit of HFT. Black boxes are easy targets for blame because they don’t defend themselves. But that doesn’t mean they aren’t useful when implemented correctly.
Are we better off with HFT? I’d argue a resounding yes. The primary function of markets is to allocate capital efficiently. Three of the strongest measures of the efficacy of markets lie in “bid-ask”
spreads, volume and volatility. If spreads are low and volume is high, then participants are essentially guaranteed access to capital at as close to the “fair price” as possible. There is a huge
academic literature on how HFT has impacted spreads and volume but the majority of it indicates that spreads have lowered and volume has increased. However, as alluded to above, all of these points
are subtle–but in my opinion, it’s clear that HFT has increased the efficiency of markets (it turns out that computers can sometimes be helpful.) Estimates of HFT’s impact on volatility haven’t been
nearly as favorable but I’d also argue these studies are more debatable. Basically, correlation is not causation, and it just so happens that our rapidly developing world is probably more volatile
than the pre-HFT world of past millennia.
We could regulate away HFT, but we wouldn’t be able to get rid of the underlying problems people point to unless we got rid of markets altogether. As with any new industry, there are aspects of HFT
that should be better monitored and regulated, but we should have level-heads and diverse data points as we continue this discussion. As with most important problems, I believe the ultimate solution
here lies in educating the public. Or in other words, this is my plug for Python classes for all children!!
I promise that I’ll repent by writing something that involves actual quantum things within the next two weeks!
IQIM Presents …”my father”
Following the IQIM teaser, which was made with the intent of offering a wider perspective on the scientist, highlighting the normalcy behind the perception of brilliance and celebrating the common human struggles to achieve greatness, we decided to do individual vignettes of some of the characters you saw in the video.
We start with Debaleena Nandi, a grad student in Prof Jim Eisenstein’s lab, whose journey from Jadavpur University in West Bengal, India to the graduate school and research facility at the Indian
Institute of Science, Bangalore, to Caltech has seen many obstacles. We focus on the essentials of an environment needed to manifest the quest for “the truth,” as Debaleena says. We start with her
days as a child when her double-shift working father sat by her through the days and nights that she pursued her homework.
She highlights what she feels is the only way to grow: working on what is lacking, developing that missing tool in your skill set, that asset that others might have by birth but that you need to acquire through hard work.
Debaleena’s motto: to realize and face your shortcomings is the only way to achievement.
As we build Debaleena up, we also build up the identity of Caltech through its breathtaking architecture that oscillates from Spanish to Goth to modern. Both Debaleena and Caltech are revealed
slowly, bit by bit.
This series is about dissecting high achievers, seeing the day-to-day steps, the bit by bit that adds up to the often overwhelming, impressive presence of Caltech’s science. We attempt to break it down into smaller vignettes that help us appreciate the amount of discipline, intent and passion that goes into making cutting-edge researchers.
Presenting the emotional alongside the rational is something this series aspires to achieve. It honors and celebrates human limitations surrounding limitless boundaries, discoveries and achievements.
Stay tuned for more vignettes in the IQIM Presents “My _______” Series.
But for now, here is the video. Watch, like and share!
(C) Parveen Shah Production 2014
Inflation on the back of an envelope
Last Monday was an exciting day!
After following the BICEP2 announcement via Twitter, I had to board a transcontinental flight, so I had 5 uninterrupted hours to think about what it all meant. Without Internet access or references,
and having not thought seriously about inflation for decades, I wanted to reconstruct a few scraps of knowledge needed to interpret the implications of r ~ 0.2.
I did what any physicist would have done … I derived the basic equations without worrying about niceties such as factors of 3 or $2 \pi$. None of what I derived was at all original — the theory has
been known for 30 years — but I’ve decided to turn my in-flight notes into a blog post. Experts may cringe at the crude approximations and overlooked conceptual nuances, not to mention the missing
references. But some mathematically literate readers who are curious about the implications of the BICEP2 findings may find these notes helpful. I should emphasize that I am not an expert on this
stuff (anymore), and if there are serious errors I hope better informed readers will point them out.
By tradition, careless estimates like these are called “back-of-the-envelope” calculations. There have been times when I have made notes on the back of an envelope, or a napkin or place mat. But in
this case I had the presence of mind to bring a notepad with me.
According to inflation theory, a nearly homogeneous scalar field called the inflaton (denoted by $\phi$) filled the very early universe. The value of $\phi$ varied with time, as determined by a
potential function $V(\phi)$. The inflaton rolled slowly for a while, while the dark energy stored in $V(\phi)$ caused the universe to expand exponentially. This rapid cosmic inflation lasted long
enough that previously existing inhomogeneities in our currently visible universe were nearly smoothed out. What inhomogeneities remained arose from quantum fluctuations in the inflaton and the
spacetime geometry occurring during the inflationary period.
Gradually, the rolling inflaton picked up speed. When its kinetic energy became comparable to its potential energy, inflation ended, and the universe “reheated” — the energy previously stored in the
potential $V(\phi)$ was converted to hot radiation, instigating a “hot big bang”. As the universe continued to expand, the radiation cooled. Eventually, the energy density in the universe came to be
dominated by cold matter, and the relic fluctuations of the inflaton became perturbations in the matter density. Regions that were more dense than average grew even more dense due to their
gravitational pull, eventually collapsing into the galaxies and clusters of galaxies that fill the universe today. Relic fluctuations in the geometry became gravitational waves, which BICEP2 seems to
have detected.
Both the density perturbations and the gravitational waves have been detected via their influence on the inhomogeneities in the cosmic microwave background. The 2.726 K photons left over from the big
bang have a nearly uniform temperature as we scan across the sky, but there are small deviations from perfect uniformity that have been precisely measured. We won’t worry about the details of how the
size of the perturbations is inferred from the data. Our goal is to achieve a crude understanding of how the density perturbations and gravitational waves are related, which is what the BICEP2
results are telling us about. We also won’t worry about the details of the shape of the potential function $V(\phi)$, though it’s very interesting that we might learn a lot about that from the data.
Exponential expansion
Einstein’s field equations tell us how the rate at which the universe expands during inflation is related to energy density stored in the scalar field potential. If a(t) is the “scale factor” which
describes how lengths grow with time, then roughly
$\left(\frac{\dot a}{a}\right)^2 \sim \frac{V}{m_P^2}$.
Here $\dot a$ means the time derivative of the scale factor, and $m_P = 1/\sqrt{8 \pi G} \approx 2.4 \times 10^{18}$ GeV is the Planck scale associated with quantum gravity. (G is Newton’s
gravitational constant.) I’ve left out a factor of 3 on purpose, and I used the symbol ~ rather than = to emphasize that we are just trying to get a feel for the order of magnitude of things. I’m
using units in which Planck’s constant $\hbar$ and the speed of light c are set to one, so mass, energy, and inverse length (or inverse time) all have the same dimensions. 1 GeV means one billion
electron volts, about the mass of a proton.
(To persuade yourself that this is at least roughly the right equation, you should note that a similar equation applies to an expanding spherical ball of radius a(t) with uniform mass density V. But
in the case of the ball, the mass density would decrease as the ball expands. The universe is different — it can expand without diluting its mass density, so the rate of expansion $\dot a / a$ does
not slow down as the expansion proceeds.)
During inflation, the scalar field $\phi$ and therefore the potential energy $V(\phi)$ were changing slowly; it’s a good approximation to assume $V$ is constant. Then the solution is
$a(t) \sim a(0) e^{Ht},$
where $H$, the Hubble constant during inflation, is
$H \sim \frac{\sqrt{V}}{m_P}.$
To explain the smoothness of the observed universe, we require at least 50 “e-foldings” of inflation before the universe reheated — that is, inflation should have lasted for a time of at least $50 H^{-1}$.
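To put rough numbers on this (my own illustration, in the same spirit of dropping every O(1) factor): taking the energy scale $V^{1/4} \sim 2 \times 10^{16}$ GeV quoted later in the post, we can estimate $H$ and the minimum duration of inflation:

```python
# Rough numbers for H and the minimum inflation duration, in the same
# back-of-the-envelope spirit (all O(1) factors dropped). The value of
# V^(1/4) is the one quoted later in the post.

m_P = 2.4e18                  # reduced Planck mass, GeV
V = (2e16) ** 4               # inflaton potential energy density, GeV^4

H = V ** 0.5 / m_P            # H ~ sqrt(V) / m_P, in GeV
t_min = 50 / H                # 50 e-foldings, in GeV^-1

GEV_INV_TO_SEC = 6.58e-25     # hbar in GeV * s, to convert GeV^-1 to seconds
print(f"H ~ {H:.1e} GeV")                          # ~ 1.7e+14 GeV
print(f"t_min ~ {t_min * GEV_INV_TO_SEC:.1e} s")   # ~ 2.0e-37 s
```

Even the minimum of 50 e-foldings fits inside a fantastically short time; it is the exponential factor $e^{50} \approx 5 \times 10^{21}$ that does the smoothing.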
Slow rolling
During inflation the inflaton $\phi$ rolls slowly, so slowly that friction dominates inertia — this friction results from the cosmic expansion. The speed of rolling $\dot \phi$ is determined by
$H \dot \phi \sim -V'(\phi).$
Here $V'(\phi)$ is the slope of the potential, so the right-hand side is the force exerted by the potential, which matches the frictional force on the left-hand side. The coefficient of $\dot \phi$
has to be $H$ on dimensional grounds. (Here I have blown another factor of 3, but let’s not worry about that.)
Density perturbations
The trickiest thing we need to understand is how inflation produced the density perturbations which later seeded the formation of galaxies. There are several steps to the argument.
Quantum fluctuations of the inflaton
As the universe inflates, the inflaton field is subject to quantum fluctuations, where the size of the fluctuation depends on its wavelength. Due to inflation, the wavelength increases rapidly, like
$e^{Ht}$, and once the wavelength gets large compared to $H^{-1}$, there isn’t enough time for the fluctuation to wiggle — it gets “frozen in.” Much later, long after the reheating of the universe,
the oscillation period of the wave becomes comparable to the age of the universe, and then it can wiggle again. (We say that the fluctuations “cross the horizon” at that stage.) Observations of the
anisotropy of the microwave background have determined how big the fluctuations are at the time of horizon crossing. What does inflation theory say about that?
Well, first of all, how big are the fluctuations when they leave the horizon during inflation? Then the wavelength is $H^{-1}$ and the universe is expanding at the rate $H$, so $H$ is the only thing
the magnitude of the fluctuations could depend on. Since the field $\phi$ has the same dimensions as $H$, we conclude that fluctuations have magnitude
$\delta \phi \sim H.$
From inflaton fluctuations to density perturbations
Reheating occurs abruptly when the inflaton field reaches a particular value. Because of the quantum fluctuations, some horizon volumes have larger than average values of $\phi$ and some have smaller
than average values; hence different regions reheat at slightly different times. The energy density in regions that reheat earlier starts to be reduced by expansion (“red shifted”) earlier, so these
regions have a smaller than average energy density. Likewise, regions that reheat later start to red shift later, and wind up having larger than average density.
When we compare different regions of comparable size, we can find the typical (root-mean-square) fluctuations $\delta t$ in the reheating time, knowing the fluctuations in $\phi$ and the rolling
speed $\dot \phi$:
$\delta t \sim \frac{\delta \phi}{\dot \phi} \sim \frac{H}{\dot\phi}.$
Small fractional fluctuations in the scale factor $a$ right after reheating produce comparable small fractional fluctuations in the energy density $\rho$. The expansion rate right after reheating
roughly matches the expansion rate $H$ right before reheating, and so we find that the characteristic size of the density perturbations is
$\delta_S\equiv\left(\frac{\delta \rho}{\rho}\right)_{hor} \sim \frac{\delta a}{a} \sim \frac{\dot a}{a} \delta t\sim \frac{H^2}{\dot \phi}.$
The subscript hor serves to remind us that this is the size of density perturbations as they cross the horizon, before they get a chance to grow due to gravitational instabilities. We have found our
first important conclusion: The density perturbations have a size determined by the Hubble constant $H$ and the rolling speed $\dot \phi$ of the inflaton, up to a factor of order one which we have
not tried to keep track of. Insofar as the Hubble constant and rolling speed change slowly during inflation, these density perturbations have a strength which is nearly independent of the length
scale of the perturbation. From here on we will denote this dimensionless scale of the fluctuations by $\delta_S$, where the subscript $S$ stands for “scalar”.
Perturbations in terms of the potential
Putting together $\dot \phi \sim -V' / H$ and $H^2 \sim V/{m_P}^2$ with our expression for $\delta_S$, we find
$\delta_S^2 \sim \frac{H^4}{\dot\phi^2}\sim \frac{H^6}{V'^2} \sim \frac{1}{{m_P}^6}\frac{V^3}{V'^2}.$
The observed density perturbations are telling us something interesting about the scalar field potential during inflation.
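As a quick sanity check (mine, not the post's), the three estimates really do combine as claimed. Plugging arbitrary positive values for $V$, $V'$ and $m_P$ into the intermediate relations, with signs and O(1) factors dropped throughout:

```python
# Numeric spot check that delta_S^2 ~ H^4 / phi_dot^2 reduces to
# V^3 / (m_P^6 V'^2) once H^2 ~ V/m_P^2 and H*phi_dot ~ V' are inserted.
# Values are arbitrary; signs and O(1) factors are dropped, as in the post.

V, Vp, m_P = 3.7, 0.52, 2.4   # Vp plays the role of V'(phi)

H = V ** 0.5 / m_P            # from H^2 ~ V / m_P^2
phi_dot = Vp / H              # from H * phi_dot ~ V'
delta_S_sq = H ** 4 / phi_dot ** 2

assert abs(delta_S_sq - V ** 3 / (m_P ** 6 * Vp ** 2)) < 1e-9
print("identity checks out")
```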
Gravitational waves and the meaning of r
The gravitational field as well as the inflaton field is subject to quantum fluctuations during inflation. We call these tensor fluctuations to distinguish them from the scalar fluctuations in the
energy density. The tensor fluctuations have an effect on the microwave anisotropy which can be distinguished in principle from the scalar fluctuations. We’ll just take that for granted here, without
worrying about the details of how it’s done.
While a scalar field fluctuation with wavelength $\lambda$ and strength $\delta \phi$ carries energy density $\sim \delta\phi^2 / \lambda^2$, a fluctuation of the dimensionless gravitation field $h$
with wavelength $\lambda$ and strength $\delta h$ carries energy density $\sim m_P^2 \delta h^2 / \lambda^2$. Applying the same dimensional analysis we used to estimate $\delta \phi$ at horizon
crossing to the rescaled field $h/m_P$, we estimate the strength $\delta_T$ of the tensor fluctuations as
$\delta_T^2 \sim \frac{H^2}{m_P^2}\sim \frac{V}{m_P^4}.$
From observations of the CMB anisotropy we know that $\delta_S\sim 10^{-5}$, and now BICEP2 claims that the ratio
$r = \frac{\delta_T^2}{\delta_S^2}$
is about $r\sim 0.2$ at an angular scale on the sky of about one degree. The conclusion (being a little more careful about the O(1) factors this time) is
$V^{1/4} \sim 2 \times 10^{16}~GeV \left(\frac{r}{0.2}\right)^{1/4}.$
This is our second important conclusion: The energy density during inflation defines a mass scale, which turns out to be $2 \times 10^{16}~GeV$ for the observed value of $r$. This is a very
interesting finding because this mass scale is not so far below the Planck scale, where quantum gravity kicks in, and is in fact pretty close to theoretical estimates of the unification scale in
supersymmetric grand unified theories. If this mass scale were a factor of 2 smaller, then $r$ would be smaller by a factor of 16, and hence much harder to detect.
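That factor-of-16 sensitivity is just the statement that $r \propto \delta_T^2 \propto V = (V^{1/4})^4$. A quick numerical sketch (Python; the $2\times 10^{16}$ GeV normalization is the one quoted above):

```python
def mass_scale_GeV(r):
    """V^(1/4) ~ 2e16 GeV * (r/0.2)^(1/4), from delta_T^2 ~ V/m_P^4."""
    return 2e16 * (r / 0.2) ** 0.25

# Halving the mass scale V^(1/4) divides V, and hence r, by 2**4 = 16.
r_for_half_scale = 0.2 / 16
print(mass_scale_GeV(0.2))               # 2e16 GeV at the BICEP2 value
print(mass_scale_GeV(r_for_half_scale))  # 1e16 GeV: half the scale, r = 0.0125
```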
Rolling, rolling, rolling, …
Using $\delta_S^2 \sim H^4/\dot\phi^2$, we can express $r$ as
$r = \frac{\delta_T^2}{\delta_S^2}\sim \frac{\dot\phi^2}{m_P^2 H^2}.$
It is convenient to measure time in units of the number $N = H t$ of e-foldings of inflation, in terms of which we find
$\frac{1}{m_P^2} \left(\frac{d\phi}{dN}\right)^2\sim r.$
Now, we know that for inflation to explain the smoothness of the universe we need $N$ larger than 50, and if we assume that the inflaton rolls at a roughly constant rate during $N$ e-foldings, we
conclude that, while rolling, the change in the inflaton field is
$\frac{\Delta \phi}{m_P} \sim N \sqrt{r}.$
This is our third important conclusion — the inflaton field had to roll a long, long way during inflation — it changed by much more than the Planck scale! Putting in the O(1) factors we have left
out reduces the required amount of rolling by about a factor of 3, but we still conclude that the rolling was super-Planckian if $r\sim 0.2$. That’s curious, because when the scalar field strength is
super-Planckian, we expect the kind of effective field theory we have been implicitly using to be a poor approximation because quantum gravity corrections are large. One possible way out is that the
inflaton might have rolled round and round in a circle instead of in a straight line, so the field strength stayed sub-Planckian even though the distance traveled was super-Planckian.
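Plugging numbers into $\Delta\phi/m_P \sim N\sqrt{r}$ makes the super-Planckian conclusion concrete (a sketch; the factor-of-3 reduction from the O(1) factors is applied by hand, as in the text):

```python
import math

N = 50     # e-foldings needed for inflation to smooth the universe
r = 0.2    # BICEP2 tensor-to-scalar ratio

delta_phi = N * math.sqrt(r)   # in units of m_P
print(delta_phi)               # ~22 m_P
print(delta_phi / 3)           # ~7 m_P with O(1) factors: still super-Planckian
```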
Spectral tilt
As the inflaton rolls, the potential energy, and hence also the Hubble constant $H$, change during inflation. That means that both the scalar and tensor fluctuations have a strength which is not
quite independent of length scale. We can parametrize the scale dependence in terms of how the fluctuations change per e-folding of inflation, which is equivalent to the change per logarithmic length
scale and is called the “spectral tilt.”
To keep things simple, let’s suppose that the rate of rolling is constant during inflation, at least over the length scales for which we have data. Using $\delta_S^2 \sim H^4/\dot\phi^2$, and
assuming $\dot\phi$ is constant, we estimate the scalar spectral tilt as
$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim - \frac{4 \dot H}{H^2}.$
Using $\delta_T^2 \sim H^2/m_P^2$, we conclude that the tensor spectral tilt is half as big.
From $H^2 \sim V/m_P^2$, we find
$\dot H \sim \frac{1}{2} \dot \phi \frac{V'}{V} H,$
and using $\dot \phi \sim -V'/H$ we find
$-\frac{1}{\delta_S^2}\frac{d\delta_S^2}{d N} \sim \frac{V'^2}{H^2V}\sim m_P^2\left(\frac{V'}{V}\right)^2\sim \left(\frac{V}{m_P^4}\right)\left(\frac{m_P^6 V'^2}{V^3}\right)\sim \delta_T^2 \delta_S^{-2}\sim r.$
Putting in the numbers more carefully we find a scalar spectral tilt of $r/4$ and a tensor spectral tilt of $r/8$.
This is our last important conclusion: A relatively large value of $r$ means a significant spectral tilt. In fact, even before the BICEP2 results, the CMB anisotropy data already supported a scalar
spectral tilt of about .04, which suggested something like $r \sim .16$. The BICEP2 detection of the tensor fluctuations (if correct) has confirmed that suspicion.
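In the crude approximation used here (scalar tilt $\approx r/4$), the pre-BICEP2 inference runs backwards in one line; a sketch:

```python
def r_from_scalar_tilt(tilt):
    """Crude estimate r ~ 4 * (scalar tilt); holds in the epsilon = eta approximation."""
    return 4.0 * tilt

scalar_tilt = 0.04            # pre-BICEP2 CMB measurement quoted in the text
r = r_from_scalar_tilt(scalar_tilt)
print(r)                      # ~0.16, close to the claimed BICEP2 value
print(r / 8)                  # ~0.02: the predicted tensor tilt r/8
```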
Summing up
If you have stuck with me this far, and you haven’t seen this stuff before, I hope you’re impressed. Of course, everything I’ve described can be done much more carefully. I’ve tried to convey,
though, that the emerging story seems to hold together pretty well. Compared to last week, we have stronger evidence now that inflation occurred, that the mass scale of inflation is high, and that
the scalar and tensor fluctuations produced during inflation have been detected. One prediction is that the tensor fluctuations, like the scalar ones, should have a notable spectral tilt, though a
lot more data will be needed to pin that down.
I apologize to the experts again, for the sloppiness of these arguments. I hope that I have at least faithfully conveyed some of the spirit of inflation theory in a way that seems somewhat accessible
to the uninitiated. And I’m sorry there are no references, but I wasn’t sure which ones to include (and I was too lazy to track them down).
It should also be clear that much can be done to sharpen the confrontation between theory and experiment. A whole lot of fun lies ahead.
Added notes (3/25/2014):
Okay, here’s a good reference, a useful review article by Baumann. (I found out about it on Twitter!)
From Baumann’s lectures I learned a convenient notation. The rolling of the inflaton can be characterized by two “potential slow-roll parameters” defined by
$\epsilon = \frac{m_P^2}{2}\left(\frac{V'}{V}\right)^2,\quad \eta = m_P^2\left(\frac{V''}{V}\right).$
Both parameters are small during slow rolling, but the relationship between them depends on the shape of the potential. My crude approximation ($\epsilon = \eta$) would hold for a quadratic potential.
We can express the spectral tilt (as I defined it) in terms of these parameters, finding $2\epsilon$ for the tensor tilt, and $6 \epsilon - 2\eta$ for the scalar tilt. To derive these formulas it
suffices to know that $\delta_S^2$ is proportional to $V^3/V'^2$, and that $\delta_T^2$ is proportional to $H^2$; we also use
$3H\dot \phi = -V', \quad 3H^2 = V/m_P^2,$
keeping factors of 3 that I left out before. (As a homework exercise, check these formulas for the tensor and scalar tilt.)
It is also easy to see that $r$ is proportional to $\epsilon$; it turns out that $r = 16 \epsilon$. To get that factor of 16 we need more detailed information about the relative size of the tensor
and scalar fluctuations than I explained in the post; I can’t think of a handwaving way to derive it.
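For a concrete potential the slow-roll parameters are easy to evaluate; the sketch below (Python, $m_P = 1$, finite-difference derivatives) checks that a quadratic potential gives $\epsilon = \eta$, and evaluates $r = 16\epsilon$ together with the tensor tilt $2\epsilon$ and scalar tilt $6\epsilon - 2\eta$. The specific potential and field value are illustrative assumptions.

```python
def slow_roll(V, phi, h=1e-3):
    """Potential slow-roll parameters epsilon, eta (m_P = 1), via central differences."""
    Vp  = (V(phi + h) - V(phi - h)) / (2 * h)            # V'
    Vpp = (V(phi + h) - 2 * V(phi) + V(phi - h)) / h**2  # V''
    eps = 0.5 * (Vp / V(phi)) ** 2
    eta = Vpp / V(phi)
    return eps, eta

V = lambda phi: 0.5 * phi**2      # quadratic potential; the mass scale cancels
eps, eta = slow_roll(V, 15.0)     # assumed field value ~15 m_P

print(eps, eta)                   # both ~2/phi^2 ~ 0.0089: eps = eta, as claimed
print("r           =", 16 * eps)  # ~0.14
print("tensor tilt =", 2 * eps)
print("scalar tilt =", 6 * eps - 2 * eta)  # = 4*eps for this potential, i.e. r/4
```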
We see, though, that the conclusion that the tensor tilt is $r/8$ does not depend on the details of the potential, while the relation between the scalar tilt and $r$ does depend on the details.
Nevertheless, it seems fair to claim (as I did) that, already before we knew the BICEP2 results, the measured nonzero scalar spectral tilt indicated a reasonably large value of $r$.
Once again, we’re lucky. On the one hand, it’s good to have a robust prediction (for the tensor tilt). On the other hand, it’s good to have a handle (the scalar tilt) for distinguishing among
different inflationary models.
One last point is worth mentioning. We have set Planck’s constant $\hbar$ equal to one so far, but it is easy to put the powers of $\hbar$ back in using dimensional analysis (we’ll continue to assume
the speed of light c is one). Since Newton’s constant $G$ has the dimensions of length/energy, and the potential $V$ has the dimensions of energy/volume, while $\hbar$ has the dimensions of energy
times length, we see that
$\delta_T^2 \sim \hbar G^2V.$
Thus the production of gravitational waves during inflation is a quantum effect, which would disappear in the limit $\hbar \to 0$. Likewise, the scalar fluctuation strength $\delta_S^2$ is also $O(\hbar)$, and hence also a quantum effect.
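Written out, the dimension counting behind $\delta_T^2 \sim \hbar G^2 V$ is (using the units just stated, with $E$ for energy and $L$ for length):
$[\hbar G^2 V] = (E\,L)\left(\frac{L}{E}\right)^2\left(\frac{E}{L^3}\right) = \frac{E^2 L^3}{E^2 L^3} = 1,$
so the combination is dimensionless, as $\delta_T^2$ must be; since $G^2 V$ alone carries dimensions $1/(E\,L)$, exactly one power of $\hbar$ is forced.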
Therefore the detection of primordial gravitational waves by BICEP2, if correct, confirms that gravity is quantized just like the other fundamental forces. That shouldn’t be a surprise, but it’s nice
to know.
My 10 biggest thrills
Like many physicists, I have been reflecting a lot the past few days about the BICEP2 results, trying to put them in context. Other bloggers have been telling you all about it (here, here, and here,
for example); what can I possibly add?
The hoopla this week reminds me of other times I have been really excited about scientific advances. And I recall some wise advice I received from Sean Carroll: blog readers like lists. So here are
(in chronological order)…
My 10 biggest thrills (in science)
This is a very personal list — your results may vary. I’m not saying these are necessarily the most important discoveries of my lifetime (there are conspicuous omissions), just that, as best I can
recall, these are the developments that really started my heart pounding at the time.
1) The J/Psi from below (1974)
I was a senior at Princeton during the November Revolution. I was too young to appreciate fully what it was all about — having just learned about the Weinberg-Salam model, I thought at first that the
Z boson had been discovered. But by stalking the third floor of Jadwin I picked up the buzz. No, it was charm! The discovery of a very narrow charmonium resonance meant we were on the right track in
two ways — charm itself confirmed ideas about the electroweak gauge theory, and the narrowness of the resonance fit in with the then recent idea of asymptotic freedom. Theory triumphant!
2) A magnetic monopole in Palo Alto (1982)
By 1982 I had been thinking about the magnetic monopoles in grand unified theories for a few years. We thought we understood why no monopoles seem to be around. Sure, monopoles would be copiously
produced in the very early universe, but then cosmic inflation would blow them away, diluting their density to a hopelessly undetectable value. Then somebody saw one …. a magnetic monopole obediently
passed through Blas Cabrera’s loop of superconducting wire, producing a sudden jump in the persistent current. On Valentine’s Day!
According to then current theory, the monopole mass was expected to be about 10^16 GeV (10 million billion times heavier than a proton). Had Nature really been so kind as to bless us with this spectacular message from a staggeringly high energy scale? It seemed too good to be true.
It was. Blas never detected another monopole. As far as I know he never understood what glitch had caused the aberrant signal in his device.
3) “They’re green!” High-temperature superconductivity (1987)
High-temperature superconductors were discovered in 1986 by Bednorz and Mueller, but I did not pay much attention until Paul Chu found one in early 1987 with a critical temperature of 77 K. Then for
a while the critical temperature seemed to be creeping higher and higher on an almost daily basis, eventually topping 130K …. one wondered whether it might go up, up, up forever.
It didn’t. Today 138K still seems to be the record.
My most vivid memory is that David Politzer stormed into my office one day with a big grin. “They’re green!” he squealed. David did not mean that high-temperature superconductors would be good for
the environment. He was passing on information he had just learned from Phil Anderson, who happened to be visiting Caltech: Chu’s samples were copper oxides.
4) “Now I have mine” Supernova 1987A (1987)
What was most remarkable and satisfying about the 1987 supernova in the nearby Large Magellanic Cloud was that the neutrinos released in a ten second burst during the stellar core collapse were
detected here on earth, by gigantic water Cerenkov detectors that had been built to test grand unified theories by looking for proton decay! Not a truly fundamental discovery, but very cool.
Soon after it happened some of us were loafing in the Lauritsen seminar room, relishing the good luck that had made the detection possible. Then Feynman piped up: “Tycho Brahe had his supernova,
Kepler had his, … and now I have mine!” We were all silent for a few seconds, and then everyone burst out laughing, with Feynman laughing the hardest. It was funny because Feynman was making fun of
his own gargantuan ego. Feynman knew a good gag, and I heard him use this line at a few other opportune times thereafter.
5) Science by press conference: Cold fusion (1989)
The New York Times was my source for the news that two chemists claimed to have produced nuclear fusion in heavy water using an electrochemical cell on a tabletop. I was interested enough to consult
that day with our local nuclear experts Charlie Barnes, Bob McKeown, and Steve Koonin, none of whom believed it. Still, could it be true?
I decided to spend a quiet day in my office, trying to imagine ways to induce nuclear fusion by stuffing deuterium into a palladium electrode. I came up empty.
My interest dimmed when I heard that they had done a “control” experiment using ordinary water, had observed the same excess heat as with heavy water, and remained just as convinced as before that
they were observing fusion. Later, Caltech chemist Nate Lewis gave a clear and convincing talk to the campus community debunking the original experiment.
6) “The face of God” COBE (1992)
I’m often too skeptical. When I first heard in the early 1980s about proposals to detect the anisotropy in the cosmic microwave background, I doubted it would be possible. The signal is so small! It
will be blurred by reionization of the universe! What about the galaxy! What about the dust! Blah, blah, blah, …
The COBE DMR instrument showed it could be done, at least at large angular scales, and set the stage for the spectacular advances in observational cosmology we’ve witnessed over the past 20 years.
George Smoot infamously declared that he had glimpsed “the face of God.” Overly dramatic, perhaps, but he was excited! And so was I.
7) “83 SNU” Gallex solar neutrinos (1992)
Until 1992 the only neutrinos from the sun ever detected were the relatively high energy neutrinos produced by nuclear reactions involving boron and beryllium — these account for just a tiny fraction
of all neutrinos emitted. Fewer than expected were seen, a puzzle that could be resolved if neutrinos have mass and oscillate to another flavor before reaching earth. But it made me uncomfortable
that the evidence for solar neutrino oscillations was based on the boron-beryllium side show, and might conceivably be explained just by tweaking the astrophysics of the sun’s core.
The Gallex experiment was the first to detect the lower energy pp neutrinos, the predominant type coming from the sun. The results seemed to confirm that we really did understand the sun and that
solar neutrinos really oscillate. (More compelling evidence, from SNO, came later.) I stayed up late the night I heard about the Gallex result, and gave a talk the next day to our particle theory
group explaining its significance. The talk title was “83 SNU” — that was the initially reported neutrino flux in Solar Neutrino Units, later revised downward somewhat.
8) Awestruck: Shor’s algorithm (1994)
I’ve written before about how Peter Shor’s discovery of an efficient quantum algorithm for factoring numbers changed my life. This came at a pivotal time for me, as the SSC had been cancelled six
months earlier, and I was growing pessimistic about the future of particle physics. I realized that observational cosmology would have a bright future, but I sensed that theoretical cosmology would
be dominated by data analysis, where I would have little comparative advantage. So I became a quantum informationist, and have not regretted it.
9) The Higgs boson at last (2012)
The discovery of the Higgs boson was exciting because we had been waiting soooo long for it to happen. Unable to stream the live feed of the announcement, I followed developments via Twitter. That
was the first time I appreciated the potential value of Twitter for scientific communication, and soon after I started to tweet.
10) A lucky universe: BICEP2 (2014)
Many past experiences prepared me to appreciate the BICEP2 announcement this past Monday.
I first came to admire Alan Guth‘s distinctive clarity of thought in the fall of 1973 when he was the instructor for my classical mechanics course at Princeton (one of the best classes I ever took).
I got to know him better in the summer of 1979 when I was a graduate student, and Alan invited me to visit Cornell because we were both interested in magnetic monopole production in the very early
universe. Months later Alan realized that cosmic inflation could explain the isotropy and flatness of the universe, as well as the dearth of magnetic monopoles. I recall his first seminar at Harvard
explaining his discovery. Steve Weinberg had to leave before the seminar was over, and Alan called as Steve walked out, “I was hoping to hear your reaction.” Steve replied, “My reaction is applause.”
We all felt that way.
I was at a wonderful workshop in Cambridge during the summer of 1982, where Alan and others made great progress in understanding the origin of primordial density perturbations produced from quantum
fluctuations during inflation (Bardeen, Steinhardt, Turner, Starobinsky, and Hawking were also working on that problem, and they all reached a consensus by the end of the three-week workshop …
meanwhile I was thinking about the cosmological implications of axions).
I also met Andrei Linde at that same workshop, my first encounter with his mischievous grin and deadpan wit. (There was a delegation of Russians, who split their time between Xeroxing papers and
watching the World Cup on TV.) When Andrei visited Caltech in 1987, I took him to Disneyland, and he had even more fun than my two-year-old daughter.
During my first year at Caltech in 1984, Mark Wise and Larry Abbott told me about their calculations of the gravitational waves produced during inflation, which they used to derive a bound on the
characteristic energy scale driving inflation, a few times 10^16 GeV. We mused about whether the signal might turn out to be detectable someday. Would Nature really be so kind as to place that mass
scale below the Abbott-Wise bound, yet high enough (above 10^16 GeV) to be detectable? It seemed unlikely.
Last week I caught up with the rumors about the BICEP2 results by scanning my Twitter feed on my iPad, while still lying in bed during the early morning. I immediately leapt up and stumbled around
the house in the dark, mumbling to myself over and over again, “Holy Shit! … Holy Shit! …” The dog cast a curious glance my way, then went back to sleep.
Like millions of others, I was frustrated Monday morning, trying to follow the live feed of the discovery announcement broadcast from the hopelessly overtaxed Center for Astrophysics website. I was
able to join in the moment, though, by following on Twitter, and I indulged in a few breathless tweets of my own.
Many of Andrew Lange's friends have been thinking a lot about him these past few days. Andrew had been the leader of the BICEP team (current senior team members John Kovac and Chao-Lin Kuo were Caltech
postdocs under Andrew in the mid-2000s). One day in September 2007 he sent me an unexpected email, with the subject heading “the bard of cosmology.” Having discovered on the Internet a poem I had
written to introduce a seminar by Craig Hogan, Andrew wrote:
just came across this – I must have been out of town for the event.
l love it.
it will be posted prominently in our lab today (with "LISA" replaced by "BICEP"), and remain our rallying cry till we detect the B-mode.
have you set it to music yet?
I lifted a couplet from that poem for one of my tweets (while rumors were swirling prior to the official announcement):
We’ll finally know how the cosmos behaves
If we can detect gravitational waves.
Assuming the BICEP2 measurement r ~ 0.2 is really a detection of primordial gravitational waves, we have learned that the characteristic mass scale during inflation is an astonishingly high 2 X 10^16
GeV. Were it a factor of 2 smaller, the signal would have been far too small to detect in current experiments. This time, Nature really is on our side, eagerly revealing secrets about physics at a
scale far, far beyond what we will ever explore using particle accelerators. We feel lucky.
We physicists can never quite believe that the equations we scrawl on a notepad actually have something to do with the real universe. You would think we’d be used to that by now, but we’re not — when
it happens we’re amazed. In my case, never more so than this time.
The BICEP2 paper, a historic document (if the result holds up), ends just the way it should:
“We dedicate this paper to the memory of Andrew Lange, whom we sorely miss.”
Summer of Science: Caltech InnoWorks 2013
The following post is a collaboration between visiting undergraduates Evan Giarta from Stanford University and Joy Hui from Harvard University. As mentors for the 2013 Caltech InnoWorks Academy, Evan
and Joy agreed to share their experience with the audience of this blog.
All throughout modern history, science and mathematics have been the foundation for engineering new, world-advancing technologies. From the wheel and sailboat to the automobile and jumbo jet, the
fields of science, technology, engineering and math (STEM) have helped our world and its people move faster, farther, and forward. Now, more than ever, products that were unimaginable a century ago
are used every day in households and businesses all over the earth, products made possible by generations of scientists, mathematicians, engineers and technologists.
There is, however, some troublesome news regarding the state of our nation’s math and science education. For the past few years, education and news reports have ranked the United States behind other
developed countries in science and mathematics. In fact, the proportion of students that score in the most advanced levels of math and science in the United States is significantly lower than that of
several other nations. This reality brings to light a stark concern: If proficiency in science and math is necessary for the engineering of novel technologies and breakthrough discoveries, but is
something the next generation will lack, what will become of the production industry, our economy, and global security? While the answer to this question might range from complete and utter chaos to
little or no effect, it seems reasonable to try and avoid finding out. Rather, we–the community of collective scientists, technologists, engineers, mathematicians–should seek to solve this problem:
What can we do to restore the integrity and substance of the educational system, especially in science and math?
Although there is no easy answer to this question, there are ongoing efforts to address this important issue, one of which is the InnoWorks Academy. Founded in 2003 by a group of Duke
undergraduates, the United InnoWorks Academy has developed summer programs that encourage underprivileged middle school students to explore the fields of science, technology, engineering, math, and
medicine, (STEM^2) free of charge. The academy is sponsored by a variety of businesses and organizations, including GlaxoSmithKline, Cisco, Project Performance Corporation, Do Something, St. Andrew’s
School, University of Pennsylvania, University of California, Los Angeles, and many others. InnoWorks Academy was a winner of the 2007 BRICK Awards and received both the MIT Global Community Choice
Award and the MIT IDEAS Challenge Grand Prize in 2011. In ten short years, the United InnoWorks Academy has successfully conducted over 50 summer programs for more than 2,000 students through the
contributions of over 1,000 volunteers. Currently, InnoWorks has chapters at about a dozen universities and is scheduled to add three more chapters this coming summer.
In early August of 2013, the Caltech InnoWorks chapter hosted its second annual summer camp, and Joy and I (Evan) were privileged to be invited as program mentors. As people on the inside, we got a
first hand look at how the organization prepares and runs the week-long event and what the students themselves experience in the hands-on, interactive and collaborative one-of-a-kind opportunity that
is InnoWorks.
Since Joy and I kept a diary of each day's activities, we thought you may like to see what our middle-schoolers experienced during that week. Here is a play-by-play of the first few days from Joy's diary:
Monday, August 5, 2013 was the first day of our camp! Before I go on to the cool things we did that day, I'd like to introduce my team. The self-titled YOLOSWAG consisted of Michael,
Chase, Evan, Phaelan and myself. Michael was the oldest, and a little shy at first, but he definitely started talking when he got comfortable. Chase was respectful and polite, and we hit it off
immediately. Evan loved science and had lots of questions and knew a TON. Phaelan was the only other girl on the team, but she was very nice and friendly to the other students, and eager to help with
anything she could. We all seem different, right? But wait, here’s the best part: we were all die-hard Percy Jackson (property of Rick Riordan) fans! We were definitely the best team.
Anyway, back to the actual stuff we did. One of the first things we saw was a cloud demo, essentially the creation of a cloud in a fish tank, with lots of dry ice and water. The demonstrator, Rob Usiskin, stuck a lot of dry ice in the (empty) fish tank and poured some water in; the water turned into a fog, which turns out to be exactly what a cloud is! Add a bubble
maker, a question of whether bubbles will float or sink on the cloud, and a room full of InnoWorks campers (about 40 of them), and you will get an hour of general excitement. See the pictures for
yourself! (The bubbles float, by the way, even when the cloud is invisible!)
Following the demonstration, we made Soap Boats. We were apparently supposed to cut index cards into the shape of boats, and dab a little bit of soap on the bottom of the boat, and set the boat in
the water. These “Soap Boats” were supposed to be propelled forward by the soap’s ability to decrease the surface tension of the water it touched. Ours, appropriately named “Titanic,” however, simply
sat in the water until the water soaked through, and the Titanic sank for the second time in history. Many other teams’ boats fared about the same, but we certainly had a blast designing and naming
our Soap Boat!
The last activity of the day was a secret message decoded with bio-luminescence. Each team was given a vial of dried up ostracods, which are sea creatures found glowing in the darkness of the deep
sea. Then, each team crushed the ostracods and mixed the resulting powder with water to catalyze the bio-luminescence. Every mentor had written a secret message on a slip of paper, folded it, and
handed it to their teams to decipher in the dark (Mine said: YOLOSWAG for the win). The convenient cleaning closet provided said darkness–Spiros, our faculty mentor, suggested that it might also
provide passage to Narnia. I don't think we lost any of our students that day, so no Narnia-traveling was done, at least not by students. Nevertheless, it was a fun-filled and action-packed day, and a great start
to an eventful week.
Tuesday was full of fun, hands-on activities. After a short lesson on the effects of air resistance and gravity on free-falling objects, we demonstrated the concept with a thin sheet of paper and a
large textbook. As expected, when placed side by side and dropped, air resistance caused the sheet of paper to hit the ground much later than the textbook. But when the sheet of paper was placed
directly above the textbook, to the shock of students, both items fell at the exact same rate. Though the students were thorough in their knowledge of physical laws, they had yet to connect that conceptual understanding to real life. As a result, when asked to explain what happened and why, the best answer one could muster was simply, "SCIENCE!".
Following an exercise consisting of blow dryers and floating ping pong balls, the kids received a brief tutorial on how tornadoes are formed by air moving through high and low pressure regions and
gusts of vertically rising winds. Thanks to the forces it produces and those acting upon it, a tornado can then more or less sustain itself. To explore this concept further, students constructed water tornado machines by taping two soda bottles together at their openings. Laughter and wetness ensued. Some groups added small trinkets in their tornado machine to observe the water
tornado’s effect on “debris”. One team in particular inserted duct tape sharks, and aptly renamed themselves as Sharknado.
In the afternoon, the campers were presented a lesson on how sailboats and aircraft get around. Contrary to what most people intuit, the fastest way to sail a boat is not to run straight downwind, but to set the sail at a heading which produces a net force, which can be explained by Bernoulli's Principle. It states that faster moving fluid has less pressure than
slower moving fluid, therefore producing a force from the slower moving side to the faster moving side. Though in sailing this force is initially barely noticeable, over time it creates a large
impulse to move the craft at considerable speeds. The same principle can also explain the way a plane takes flight.
With the importance of good design in mind, students were tasked with prototyping the fastest water-bottle boat. Given a solar panel, electric motor, various propellers, empty bottle, tape, and other
construction essentials, kids started with basic designs, then diversified in order to gain an edge against other teams. Some teams boasted two-bottle designs, and others used one, each type having
trade-offs in speed and stability. A few implemented style upgrades with graphics and colors, and fewer still leveraged performance modifications with ballast and crude control systems. But with a tough
deadline to meet, not all boats met their intended specifications. Nonetheless, the races commenced and each team’s innovation was tested in the torrents of Caltech’s Beckman Fountain. Some failed,
but those that survived were rewarded accordingly.
By the end of the second day, much of the campers' initial shyness had been replaced with conversation and budding new friendships. Lunch hour and break times allowed time for kids and mentors alike
to hang out and enjoy themselves in California’s summer sun in-between discovering the applications of science and math to engineering, medicine, and technology. These moments of discovery, no matter
how rare, are the reasons why we do what we do as we continue our research, studies, and work to improve the world we live in.
Kids.Net.Au - Encyclopedia > Talk:Uncertainty Principle
Are you quite sure it's
? I've always seen the Uncertainty Principle written as Δ
, which is equal to
. --
I'm not sure, and neither are the experts:
I just checked Encyclopedia Britannica, and they give both 2π and 4π in different articles. A case for
m:Making fun of Britannica
I think we should leave it at 4π -- at least we are on the safe side. --AxelBoldt
I think it's 4π, though it's a while since I did any QM. Doesn't it come from:
(ΔA)(ΔB) >= (1/2) |[A,B]|
[x^,p^] = i h_bar ,
(Δx^)(Δp^) >= h_bar / 2 .
-- DrBob
Ok, I will then happily make fun of Britannica now... --AxelBoldt
I seem to remember that DrBob is right. The thing is that when you use it to estimate a value, sometimes you say
(Δx^)(Δp^) ~ h_bar
which is compatible with the previous. --AN
Sadly, the source is a bit old, but in Feynman's lectures, Feynman defines the measurements Δx and Δp to be the width of a gaussian distribution, which may be the source of the confusion. He does,
however, state precisely:
ΔxΔp ≥ h/2π
It was originally published 1963-1965, so I guess it may have changed since then.
Ah, that must be it. They never clearly say what they mean by "uncertainty". Feynman takes the "width", which is probably twice the standard deviation. We take the standard deviation itself, that's
why we get half his number. So we are fine, but EB is still screwed :-) --AxelBoldt
Not quite Axel. Check the math. If Δx' and Δp' are the standard deviations, equal to half the width, then:
(2Δx')(2Δp') ≥ h/2π
=> Δx'Δp' ≥ h/8π
That might be the source of the confusion if the competitors were:
ΔxΔp ≥ h_bar
ΔxΔp ≥ h_bar/4
I'm not familiar enough with statistical data analysis to say much more, however. Does the width of a gaussian divided by √2 mean anything? --BlackGriffen
Yup, √2 times standard deviation is the width of the range where 50% of the values will be. But it only works for a gaussian distribution, and there is no reason to assume that all
observables are normally distributed, in fact they're most definitely not, so our use of the standard deviation is much cleaner. --AxelBoldt
So, the width over √2 corresponds to the 50% of observations. Doesn't that mean that the formula:
ΔxΔp ≥ h_bar/2
is the incorrect one? the correct ones being:
ΔxΔp ≥ h_bar
σ[x]σ[p] ≥ h_bar/4
lower case sigma is the standard character for a standard deviation, right?--BlackGriffen
In stats, they use sigma, but it seems that physicists use Δx both for standard deviation and for the 50% range, and call both "uncertainty". If we used Δx for the 50% range and σ[x] for the
standard deviation, then the correct formulas would be
ΔxΔp ≥ h_bar
σ[x]σ[p] ≥ h_bar/2
But the first of those is really pointless since it makes the unjustified assumption that the variables are normally distributed. --AxelBoldt
If there are two differing definitions of Δx and Δp we should note this, and that the uncertainty principle takes different forms depending on what definition is chosen. -- SJK
I removed the reference to the energy--time uncertainty principle, since it is not really correct. While energy surely is an operator in quantum mechanics, time is not, it's a parameter. One derives
uncertainty inequalities from commutators of operators (i.e. if the commutator between two operators is not zero then there is an uncertainty relationship between the standard deviations). Since time
is not an operator, it commutes with everything. I would suggest that the primary author of this article read the relevant sections of L. Ballentine's _Quantum_Mechanics_A_Modern_Development_ for a
very clear discussion of the HUP.
While I'm ranting, I would suggest dropping the stuff at the end about Einstein. First off, he never denied the empirical validity of the HUP, and second even if he did his opinion on the
structure of QM is best left to another page.
It's actually messier than that. In standard non-relativistic QM, position is an operator and time isn't. If you extend this to relativistic QM, things get very messy, since there isn't even a
quantity that stands up and says that "I'm the position operator".
Granted, I was thinking of the non-relativistic theory. Perhaps a paragraph about the relativistic theory is in order? --matthew
--- I'll need to read Ballentine to see what he says, but this seems wrong. The variable t does not commute with the Hamiltonian operator, and mathematically, a wave function of finite time does not
have a defined energy and the mathematical relationship between energy and time appears to be the same as momentum and position. -- Chenyu
The more I think about it, the more I think that Ballentine is wrong if he is asserting that there is no energy-time uncertainty relationship
That's not what I (or Ballentine) said. I merely recommended Ballentine's book as a very clear discussion of the HUP. --Matthew Nobes
or that time commutes with energy.
Huh? Time commutes with *EVERYTHING* in non-relativistic QM. It's a parameter, there is no time operator. --Matthew
I've found these links on the web
http://www.aip.org/history/heisenberg/p08a.htm http://press.web.cern.ch/pdg/cpep/unc_vir
Granted, these are popular pages, but one presumes that AIP and CERN had someone proofread the pages. There is also the discussion in
which I think we should fold into the article -- Chenyu
Umm the link to Dr. Baez's page reinforces what I said. Notice the derived relationship is between the total energy and the time derivative of some operator. That's why I deleted the energy--time
reference, because this type of UP requires more careful thought --Matthew
I don't think that time does commute with energy in NR QM. To calculate the time expectation value of wave function phi, you use the expression <phi|t|phi>. To calculate the energy function you use
an operation which has a time derivative in it. t and d/dt do not commute. It's formally exactly the same relationship as x and d/dx. Yes you can make a distinction between t the parameter and t the
operator, but you can do exactly the same thing between x the operator and x the parameter.
Okay, I think you realize the error below, but just in case let me reiterate, there is no time operator in non relativistic QM. Such a thing *does*not*exist* in the theory. Dr. Baez's webpage,
which you cite, gives a plausible way of constructing something that might look like a time operator, but it won't be time itself. And things get worse in the relativistic theory, since there is
no good position operator either. Your expression above shows this as well, say you set
<t> = <phi|t|phi>.
Now t is not an operator, so this is
<t> = t <phi|phi> = t (since <phi|phi> = 1).
This is *not* the same thing as <x> = <phi|X|phi> since there is no particular reason to assume that |phi> is a position eigenstate. -- Matthew
I really don't see any reason why t should be treated differently in NR QM than position. -- Chenyu
Because time is not an operator. X and P~d/dx are operators. -- Matthew
Never mind - I think I just saw it.
On second thought I don't see it --- Chenyu
AARRGRRHHHHH!!!!! My mind is frying.....
Anyway, I don't have any objection to the article as it stands. I misunderstood the original comment to say that there wasn't an Energy-Time relationship rather than to say that its derivation is
tricky. Maybe we need another article just for the energy-time relationship. --- Chenyu
I suppose `one copy of THE system ... and another, identical one' is meant to refer to two real life systems with the same specs. But the formulation sounds much like ensembles of systems from
conventional statistics, i.e. just thought experiments. I feel the statement would be stronger if the fact that the systems are real is stated more explicitly. That is, if I'm correct about that.
If you mean
Disturbance plays no part since the principle even applies if position is measured in one copy of the system and momentum is measured in another, identical one.
then I agree that the statement needs to be clarified. Check out the no cloning theorem. -- CYD
Recently, an addition was made to the article claiming that the "Consciousness interpretation" of QM has been proven wrong. I seem to remember an experiment where the two slits of a double slit
experiment are equipped with detectors. If the detectors record which slit an electron took, and somebody looks at the results, then no interference pattern occurs. But if the detectors record the
information, and subsequently the information is destroyed before anyone has a chance to see it, then the interference pattern does occur. How are these results being interpreted nowadays?
As far as I know, what you described should not happen: in both circumstances there should be an interference pattern. Can you provide a link? We need an article quantum decoherence. I will get
around to writing that one of these days, if no one else does it. -- CYD
Huh? This whole thing confuses me. Here's what you might be thinking of: if you put detectors at the slits, and they function at 100%, then there will be no interference. It doesn't matter whether
somebody reads the output of the detectors or not. If the detectors worked then the interference is destroyed. There is an interesting effect if the detectors are not 100% efficient. Then the
interference pattern gets "washed out". There is a brief attempt at an explanation on my home page see
for some details. Also I think what is needed is one long complete article on quantum mechanics, not a great mass of micro articles on various features. Actually even better would be two long
articles, one for laymen and one more advanced. --Matthew Nobes
I think the current approach is fine. We already have what you suggest, in the articles quantum mechanics and mathematical formulation of quantum mechanics. These lay down the framework of
the theory. However, it is necessary to have "micro articles" such as this, simply because the secondary requirements and implications of quantum mechanics are so numerous. The uncertainty
principle is not a postulate of the theory.
That's true, it's a theorem about operators. This is my point though, for a layman's type article the HUP requires a couple of sentences, there is no need for a micro article. From a mathematical
standpoint it is an easy to prove theorem, again a couple of lines in a long article. However, since I don't have the time to write any long articles right now I'll quit critiquing the approach
others take, and stick to physics issues. -- Matthew
In my understanding, position and momentum don't have to be noncommuting observables; the fact that they are is a result of our choice of Hamiltonian and state space, which is motivated by
experimental results (phenomenology).
That's not how I'd put it. Momentum is the generator of translations, as such it will not commute with position. This is true even in classical mechanics where x and p~d/dx have a non-zero
Poisson bracket. -- M.
The Pauli exclusion principle is another example of a principle often associated with, but not strictly required by, quantum mechanics. --CYD
That's not really true. The PEP is a theorem in the relativistic formulation. You cannot construct a consistent theory of fermions without it (at least in three space + one time dimension) look
up the "Spin-Statistics theorem". *Historically* it was a phenomenological principle, but from a logical perspective it is contained in the theory. -- M.
I put in the various statements about the consciousness interpretations
of QM. As far as I am aware of, few if any physicists have ever believed that consciousness has a special role in wave function collapse. The problem is that many popularizations of QM have made it
seem that there is a connection, and the fact that a lot of people seem to be influenced by that is why I keep emphasizing the point that you can demonstrate that it isn't. I'm not aware of the
experiment that you referred to.
I searched around and couldn't find it either anymore. The closest I found was an experiment involving polarization and a double slit, which might be interpreted as "the interference pattern
reappears if the polarization information is destroyed by another polarization filter", but that hardly qualifies as an introduction of consciousness into QM. --AxelBoldt
In the sound wave example, it is possible to calculate the exact frequency of a sound wave at a given time. The mathematics for this calculation are exact, notwithstanding the measurement error
introduced by the uncertainty principle.
Given a sound signal, you cannot talk about the exact frequencies contained in the signal at a specified point in time. In order to do Fourier analysis, you always need to look at the function over a
whole interval; a single point is not enough. And if you make the interval smaller, the uncertainty in the frequencies increases proportionally. It's very much analogous to the uncertainty principle,
and in fact, the same theorem underlies both effects. --AxelBoldt
That's not true. It is not the same theorem. Standard QM uncertainty relations are derived from a theorem about operators on a Hilbert space. These are not things that occur in classical wave
mechanics. This idea about frequency and intervals is what underscores the time-energy relations, which is precisely why I cautioned against talking about them in the same language as standard QM
relations. -- Matthew
There is a theorem relating the "uncertainty" in a function and the uncertainty in its Fourier transform. In standard wave mechanics, the Fourier transform is a Hilbert space automorphism
which translates between the position observable x and the momentum observable -i d/dx. So the Fourier theorem can actually be used to prove the space-momentum uncertainty relation.
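For reference, the theorem being invoked here can be written out explicitly (conventions for the Fourier transform vary; this sketch uses the unitary convention in the wavenumber k):

$$\sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\bigl|\langle [A,B] \rangle\bigr|, \qquad [x,p] = i\hbar \;\Rightarrow\; \sigma_x\,\sigma_p \;\ge\; \frac{\hbar}{2}$$

$$\hat f(k) = \frac{1}{\sqrt{2\pi}}\int f(x)\,e^{-ikx}\,dx \;\Rightarrow\; \sigma_x\,\sigma_k \;\ge\; \frac{1}{2}$$

Substituting p = ℏk in the second inequality recovers the first, which is the sense in which the same theorem underlies both effects.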
I'd like to put this sentence back in the main article. It's a clear layman-accessible explanation of what the uncertainty principle is all about.
Beiträge zur Algebra und Geometrie / Contributions to Algebra and Geometry, Vol. 42, No. 2, pp. 307-311 (2001)
Simple Polygons with an Infinite Sequence of Deflations
Thomas Fevens, Antonio Hernandez, Antonio Mesa, Patrick Morin, Michael Soss, Godfried Toussaint
School of Computer Science, McGill University, 3480 University Street, Montreal, Quebec Canada H3A 2A7 e-mail: godfried@cs.mcgill.ca
Abstract: Given a simple polygon in the plane, a deflation is defined as the inverse of a flip in the Erdos-Nagy sense. In 1993 Bernd Wegner conjectured that every simple polygon admits only a finite
number of deflations. In this note we describe a counter-example to this conjecture by exhibiting a family of polygons on which deflations go on forever.
© 2001 ELibM for the EMIS Electronic Edition
Algorithms for Closed Under Rational Behavior (CURB) Sets
M. Benisch, G. B. Davis, T. Sandholm
We provide a series of algorithms demonstrating that solutions according to the fundamental game-theoretic solution concept of closed under rational behavior (CURB) sets in two-player, normal-form
games can be computed in polynomial time (we also discuss extensions to n-player games). First, we describe an algorithm that identifies all of a player's best responses conditioned on the belief
that the other player will play from within a given subset of its strategy space. This algorithm serves as a subroutine in a series of polynomial-time algorithms for finding all minimal CURB sets,
one minimal CURB set, and the smallest minimal CURB set in a game. We then show that the complexity of finding a Nash equilibrium can be exponential only in the size of a game's smallest CURB set.
Related to this, we show that the smallest CURB set can be an arbitrarily small portion of the game, but it can also be arbitrarily larger than the supports of its only enclosed Nash equilibrium. We
test our algorithms empirically and find that most commonly studied academic games tend to have either very large or very small minimal CURB sets.
nick has a blog!
I found the following question today that piqued my interest:
Given an array of items where each item has a name and a weight (integer value), select a random item from the array based on the weight.
What this means is that items in the array should be randomly chosen such that the larger the weight, the more likely the item will be picked.
To approach this problem, I will create an array where each element gives us the probability that the ith item would be chosen. To build this array, I will first find the total amount of weight amongst
all items and divide each item’s weight by this total value. Once I have this array, I will be able to pick a random number between 0.0 (inclusive) and 1.0 (exclusive) to determine which item to select.
My language of choice for this example will be Python. Let’s say I am given the data as an ordered list of Tuples where the first element is the name of the item and the second element is the weight
of that item.
items = [('Item1', 4), ('Item2', 3), ('Item3', 9), ('Item4', 7)]
With this array of items, I will first grab the weights and store them in an array such that the ith element in this array corresponds to the ith element in the items array.
weights = map(lambda item: item[1], items) # [4, 3, 9, 7]
Once I have this array of weights, it’s trivial to find the total weight for all items.
total = float(sum(weights)) # 23.0
Note that I’m casting the sum to a float. I will need this total as a float to keep precision for when I divide each weight by this value. If we left the total as an integer, the divide operation for each weight would return 0, since in Python 2 an integer divided by an integer is an integer.
probabilities = map(lambda weight: weight / total, weights) # [0.1739, 0.1304, 0.3913, 0.3043] - Values shortened for brevity
Since these are probabilities, these values added together should equal 1.0. Visually, think of each entry as a slice of a pie. The higher the probability value, the larger the piece of pie is. We now need to determine which slice of pie we get, if we randomly select a piece.
To do this, we can use the Python function random.random(), which will return a value between 0.0 and 1.0. To pick the correct value, I will iterate through this list of normalized weights and add
the current item to a total. If the random value is less than the total, then we have our random item.
random_val = random.random()
total_weight = 0
for i, weight in enumerate(probabilities):
total_weight += weight
if random_val < total_weight:
return items[i][0]
Let’s step through our example above to get a better understanding of how this all works. Let’s say our random value is 0.6195. During the first iteration, total_weight is set to 0.1739. Our
random value is greater than this value, so we continue onto the next iteration. During the second iteration total_weight is set to 0.3043. Again, the random value is greater than this value, so we
continue onto the 3rd iteration where we set total_weight to 0.6956. Finally, the random value is less than total_weight, so the third item is our random value.
Overall this algorithm runs in O(n) time. Since the cumulative totals are non-decreasing by construction, we could also precompute them once in O(n) and answer each subsequent pick with a binary search in O(lgn) time, which pays off when drawing many samples from the same list. For a single pick, the O(n) scan will suffice for now.
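Putting the pieces together, here is one self-contained sketch of the whole procedure (weighted_choice is my own name for it; the final return guards against floating-point round-off leaving the cumulative total just under 1.0):

```python
import random
from collections import Counter

def weighted_choice(items):
    """Pick a random item name from a list of (name, weight) tuples.

    Items with larger weights are proportionally more likely to be chosen.
    """
    total = float(sum(weight for _, weight in items))
    random_val = random.random()
    cumulative = 0.0
    for name, weight in items:
        cumulative += weight / total
        if random_val < cumulative:
            return name
    # Floating-point round-off can leave cumulative just below 1.0;
    # in that case the last item is the right answer.
    return items[-1][0]

items = [('Item1', 4), ('Item2', 3), ('Item3', 9), ('Item4', 7)]
counts = Counter(weighted_choice(items) for _ in range(10000))
print(counts.most_common())  # 'Item3' should dominate, with weight 9/23
```

Running it many times, as above, is a quick way to eyeball that the observed frequencies track the weights.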
Interview Question #1
I was going through some old bookmarks on my computer and found a site of programming teaser questions that were asked of candidates applying at Google. Being a sucker for these sorts of things I
checked out the first question:
There is an array A[N] of N numbers. You have to compose an array Output[N] such that Output[i] will be equal to multiplication of all the elements of A[N] except A[i]. For example Output[0] will
be multiplication of A[1] to A[N-1] and Output[1] will be multiplication of A[0] and from A[2] to A[N-1].
Solve it without division operator and in O(n).
I thought about this question for a while and I have to admit that it absolutely stumped me! In an attempt to learn something from this question, we’re going to look at the answer and then examine why
it works.
For this example, let’s say we’re given the array [1, 2, 3, 4, 5]. The solution to this problem would then be [120, 60, 40, 30, 24].
The solution is as follows:
1. Initialize a variable called product, to 1.
2. Starting at the first element of the array and stopping before the last, iterate forwards. Multiply the current value of the given array into the running product, and store that product in the next element of the results array.
3. Re-initialize the product variable to 1.
4. Starting at the last element and stopping before the first, iterate through the elements going backwards. Multiply the current value of the given array into the running product.
5. Multiply the previous element of the results array by the current value of the product.
Before we look at any code, let’s work this out.
The first step is trivial, so using the sample given above, let’s examine how we would get to the end result. Let’s check out what the results array looks like during the second step.
1. results = [1, 1, 1, 1, 1] // results[1] = data[0] × product (1)
2. results = [1, 1, 2, 1, 1] // results[2] = data[1] × product (1)
3. results = [1, 1, 2, 6, 1] // results[3] = data[2] × product (2)
4. results = [1, 1, 2, 6, 24] // results[4] = data[3] × product (6)
Now, let’s take a look at the value of the product and the results array during steps 4 and 5.
1. results = [1, 1, 2, 30, 24] product = 5 (results[3] = 6 × 5)
2. results = [1, 1, 40, 30, 24] product = 20 (results[2] = 2 × 20)
3. results = [1, 60, 40, 30, 24] product = 60 (results[1] = 1 × 60)
4. results = [120, 60, 40, 30, 24] product = 120 (results[0] = 1 × 120)
Bam! We end up with our answer! Why does this work? Well, let’s examine the factors of the answer:
results[0] = 5 × 4 × 3 × 2
results[1] = 5 × 4 × 3 × 1
results[2] = 5 × 4 × 2 × 1
results[3] = 5 × 3 × 2 × 1
results[4] = 4 × 3 × 2 × 1
Let’s see how we accumulate those factors in step 2
1. results = [1, 1, 1, 1, 1]
2. results = [1, 1, 2 × 1, 1, 1]
3. results = [1, 1, 2 × 1, 3 × 2 × 1 , 1]
4. results = [1, 1, 2 × 1, 3 × 2 × 1, 4 × 3 × 2 × 1]
Steps 4 and 5
1. results = [1, 1, 2 × 1, 5 × 3 × 2 × 1, 4 × 3 × 2 × 1] product = 5
2. results = [1, 1, 5 × 4 × 2 × 1, 5 × 3 × 2 × 1, 4 × 3 × 2 × 1] product = 5 × 4
3. results = [1, 5 × 4 × 3 × 1, 5 × 4 × 2 × 1, 5 × 3 × 2 × 1, 4 × 3 × 2 × 1] product = 5 × 4 × 3
4. results = [5 × 4 × 3 × 2 × 1, 5 × 4 × 3 × 1, 5 × 4 × 2 × 1, 5 × 3 × 2 × 1, 4 × 3 × 2 × 1] product = 5 × 4 × 3 × 2
In step 2, we’re distributing the factors that appear before each element in the given array. In steps 4 and 5, the product distributes the factors that appear after each element as we go back.
Therefore, each element in the results array contains all the factors that appear before and after the same element in the given array; that is, every factor except the element itself.
Anyway, here’s the code for it:
int[] data = { 1, 2, 3, 4, 5 };
int n = data.length;
int[] results = new int[n];
// Step 1
for (int i = 0; i < n; i++) results[i] = 1;
// Step 2
for (int i = 0, product = 1; i < n - 1; i++) {
    product *= data[i];
    results[i + 1] = product;
}
// Steps 4 and 5
for (int i = n - 1, product = 1; i > 0; i--) {
    product *= data[i];
    results[i - 1] *= product;
}
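For comparison, here is the same two-pass idea in Python (my own transcription of the Java above, handy for quick experimentation):

```python
def product_except_self(data):
    """Return a list where entry i is the product of every element except data[i].

    Runs in O(n) with no division: a forward pass accumulates the product of
    everything before each index, and a backward pass folds in everything after.
    """
    n = len(data)
    results = [1] * n

    # Step 2: prefix products.
    product = 1
    for i in range(n - 1):
        product *= data[i]
        results[i + 1] = product

    # Steps 4 and 5: suffix products.
    product = 1
    for i in range(n - 1, 0, -1):
        product *= data[i]
        results[i - 1] *= product

    return results

print(product_except_self([1, 2, 3, 4, 5]))  # → [120, 60, 40, 30, 24]
```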
Java String Pool
What will the following lines of code print?
1 String a = "abc";
2 String b = "abc";
3 System.out.println(a == b);
4 String c = new String("abc");
5 String d = new String("abc");
6 System.out.println(c == d);
In order to test for equality between two objects, you must use the equals method. Otherwise, if you test the equality between two variables that point to objects, you’re comparing the values they store; in this case, the addresses that reference the objects. As a result, both println statements should print false, right? Why does the first println statement (line 3) print true?
It turns out that variables a and b both point to the same object. This is because both variables reference string literal objects. A string literal is instead stored in a special area of the Java
PermGen heap space called the String Pool. The String Pool acts as a cache for string literals; If you create a string literal that already exists in the String Pool, Java is smart enough to use the
existing reference.
On line 1, when I set the variable a to the literal “abc”, it will store that value in the String Pool. When I do the same again for variable b, on line 2, it will find the same value in the String
Pool and use the reference to the existing object. Not only does this save memory, but the values of the variables a and b are both equal since they reference the same object.
On the other hand, if we create a String using the new operator (line 4), we will first find the “abc” reference that exists in the String Pool. This reference will be passed into the String
constructor to instantiate a completely new object that now lives in the Java heap space. After we create two instances of the String object using the new operator (lines 4 and 5), we have two
completely separate instances of the String object. Therefore the variables c and d reference two completely separate objects.
As you can see, it almost never makes sense to use the new String(String) constructor. If for whatever reason you must use this constructor, you may use the String.intern() method, which places the string in the String Pool if it isn’t there already and returns the reference to the object that lives in the String Pool.
Given what you have just learned, what will the following lines of code print?
System.out.println(c.intern() == d.intern());
System.out.println("a" + "bc" == "ab" + "c");
System.out.println("abcd" == "abc" + "d");
Gawker Password Database Cracking
If you haven’t heard already, Gawker Media has been hacked! Long story short: The Gnosis crew has taken credit for breaking into Gawker Media’s servers and downloaded a database of roughly 1.3
million user records. One of the big stories here was that Gawker stored user passwords using the block cipher DES (Data Encryption Standard). This encryption scheme only encrypts the first 8
characters of a password before storing it in the database. This means ‘password1sSecureBec@useIt’sLong_4nd_uses_5pecial_characters’ will only be stored as DES(‘password’).
My senior project in college involved using the distributed computing framework Hadoop to crack passwords, so naturally this piqued my interest. Unfortunately I don’t have access to a cluster of
computers, so instead I wanted to take the opportunity to learn something new (Cuda comes to mind). Before I jump into playing with Cuda, I first wanted to get a feel for what I was doing by writing
some Python code to get me started.
First thing I had to do was filter out the records in the database that were not valid or didn’t have an encrypted password. Of the 1,248,120 records there were only 748,508 that I considered valid.
Each password was encrypted using a salt, so I couldn’t simply create a rainbow table of encrypted passwords and do a lookup. Instead I had to crack each password individually. I decided to crack the
passwords using a dictionary-based attack. I used a list of the 500 most commonly used passwords as my initial dictionary (the most popular passwords being first). I figured this would be my best bet
for quickly shortening the list of passwords that I would need to crack. The gist of how this works is the following:
1. Read in a word from the list.
2. Iterate through each account and compute the encrypted value using the user’s given salt.
3. If the computed value is the same as the encrypted password, then we have the original password.
4. If the password was cracked, remove it from the list so we don’t have to worry about it in future computations.
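The loop above can be sketched in a few lines of Python. Note that I'm substituting salted SHA-256 for the crypt/DES scheme the real database used (while mimicking DES's 8-character truncation), so this illustrates the control flow only; the function and variable names are mine:

```python
import hashlib

def hash_password(password, salt):
    # Stand-in for crypt(3)/DES: hash the salt plus only the first 8
    # characters, mimicking DES's truncation of long passwords.
    return hashlib.sha256((salt + password[:8]).encode()).hexdigest()

def dictionary_attack(accounts, wordlist):
    """accounts maps username -> (salt, hashed password).

    Returns a dict of username -> recovered plaintext password.
    """
    remaining = dict(accounts)
    cracked = {}
    for word in wordlist:                    # step 1: next dictionary word
        for user in list(remaining):         # step 2: try it against each account
            salt, hashed = remaining[user]
            if hash_password(word, salt) == hashed:  # step 3: match found
                cracked[user] = word
                del remaining[user]          # step 4: stop re-checking this account
        if not remaining:
            break
    return cracked

accounts = {
    'alice': ('ab', hash_password('password', 'ab')),
    'bob': ('cd', hash_password('trustno1', 'cd')),
}
print(dictionary_attack(accounts, ['123456', 'password', 'trustno1']))
```

Shrinking the remaining set after every hit is what makes the popular-words-first ordering pay off: the most common passwords eliminate the most accounts early.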
As of this writing I’m at the 39th word and I’ve managed to shorten the list by 16,512 users, or by about 2.2%. This doesn’t seem to be going fast enough, so perhaps I’ll look more into doing this with
something faster such as C or Cuda. The point of this exercise was really to get myself familiar with the process of cracking these passwords. I was able to quickly write some code in Python to test
my ideas, and I quickly found out what ideas worked and what didn’t.
Starting out with Nmap
Nmap is one of those programs that I can’t live without. Whether I’m using it to learn more about my own network or somebody else’s, it really gives me a better picture of what other devices are on the network without having to actually physically see them.
Scanning your network
One of the basic ways to discover devices is to use a ping sweep with Nmap. This sends out a ping to a single device or a range of devices. It used to be that hosts that dropped ICMP echo requests
(ping) would show up as being down in Nmap. It seems that newer versions of Nmap are able to detect that a host is up even if it doesn’t respond to a ping. (Try pinging microsoft.com and see what response you get.)
Pinging a single device
nmap -sP 192.168.0.1
Nmap scan report for 192.168.0.1
Host is up (0.055s latency).
Pinging a range of IP addresses
nmap -sP 192.168.0.1-5
Nmap scan report for 192.168.0.1
Host is up (0.0046s latency).
Nmap scan report for 192.168.0.3
Host is up (0.016s latency).
Nmap scan report for 192.168.0.5
Host is up (0.000098s latency).
Pinging the entire network
nmap -sP 192.168.0.0/24
Nmap scan report for 192.168.0.1
Host is up (0.055s latency).
Nmap scan report for 192.168.0.3
Host is up (0.014s latency).
Nmap scan report for 192.168.0.5
Host is up (0.00029s latency).
Nmap scan report for 192.168.0.100
Host is up (0.0090s latency).
The last command will send a ping out to any device that has their IP address starting with 192.168.0.x. (The /24 is short-hand for the subnet mask of 255.255.255.0.)
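As a quick sanity check of what the /24 covers, Python's standard ipaddress module (not part of nmap, just an illustration) can expand the notation:

```python
import ipaddress

# 192.168.0.0/24: the first 24 bits are the network portion, leaving
# 8 host bits -- the same thing as a 255.255.255.0 subnet mask.
network = ipaddress.ip_network('192.168.0.0/24')
print(network.netmask)             # 255.255.255.0
print(network.num_addresses)       # 256 addresses in total
print(len(list(network.hosts())))  # 254 usable hosts (network and broadcast excluded)
```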
Device Enumeration
Once we have a list of devices on our network we may want to know some information about them. Simply typing the command nmap along with the IP address or name of the device will return what ports
are open on the device and a guess at what service is running on each open port. For example, when I run nmap on my own machine I get the following:
nmap 192.168.0.5
PORT STATE SERVICE
22/tcp open ssh
88/tcp open kerberos-sec
139/tcp open netbios-ssn
445/tcp open microsoft-ds
515/tcp open printer
631/tcp open ipp
3689/tcp open rendezvous
What if I want some more information, like what version of ssh I’m running? Use the -sV flag.
nmap -sV 192.168.0.5
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 5.2 (protocol 2.0)
88/tcp open kerberos-sec Mac OS X kerberos-sec
139/tcp open netbios-ssn Samba smbd 3.X (workgroup: WORKGROUP)
445/tcp open netbios-ssn Samba smbd 3.X (workgroup: WORKGROUP)
515/tcp open printer
631/tcp open ipp CUPS 1.4
3689/tcp open daap Apple iTunes DAAP 9.0.3
Service Info: OS: Mac OS X
Also notice we get some extra information, such as the OS I’m running and the workgroup my Samba shares are on.
Now let’s take what we learned and put them together. How would I get information on all devices on my network?
nmap -sV 192.168.0.0/24
The output of this is too long for here, but I think you get the idea; it’ll display all the service information (along with service versions) for all hosts on the network.
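If you want to drive these scans from your own scripts, one hedged sketch (the helper names here are mine, not from the post) is to shell out to the nmap binary with Python's subprocess module:

```python
import subprocess

def nmap_argv(target, flags=("-sV",)):
    """Build the argument vector for an nmap run (hypothetical helper)."""
    return ["nmap", *flags, target]

def scan(target, flags=("-sV",)):
    # Assumes the nmap binary is installed and on the PATH.
    proc = subprocess.run(nmap_argv(target, flags),
                          capture_output=True, text=True)
    return proc.stdout

# scan("192.168.0.0/24") would run the same command shown above
print(nmap_argv("192.168.0.0/24"))
```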
That’s the basics of using nmap. With the ideas presented here, you should be able to find out most information about the devices on the network you are currently connected to.
Most listened to music of the year
According to last.fm, this is the final count of the bands I listened to the most this year.
1. Nine Inch Nails (528 Plays) – Nine Inch Nails is and has been one of my favorite bands for about 13 years now. NIN has always been the go-to band for when I don’t feel like listening to
anything else. This year I finally got around to actually listening to Year Zero. I was not enthusiastic about this CD when it first came out, but after seeing them live at the Santa Barbara Bowl
this year, I finally gave it a listen and love it! I actually found that it made for good running music! On top of that, I rediscovered what has to be one of my favorite albums: The
Fragile. This CD is up there with Dark Side of the Moon in terms of replayability.
2. NOFX (431 Plays) - Another one of my go-to bands when I don’t feel like listening to anything else. NOFX has been another one of my favorite bands for about 11 years now and I’m still not tired of
them. This year they released Coaster, which has been in regular rotation in my library. This summer in particular I also re-listened to a lot of their albums on my runs.
3. The Appleseed Cast (329 Plays) - The Appleseed Cast have been one of my favorite bands of recent times. This year they released Sagarmatha, which I absolutely fell in love with. The first track
“As The Little Things Go” has to be my favorite track of the year. I will hopefully get to see them live for the first time in 2010.
4. Placebo (291 Plays) - Again, another one of my favorite bands, one I started listening to around 9 years ago. This year they released “Battle for the Sun” and did it without their original drummer Steven
Hewitt. Despite his absence, Placebo released this slightly poppier sounding album while still staying true to their original sound. I have yet to see this band live; this needs to change!
5. Muse (203 Plays) - This year Muse released “The Resistance”, but I’ve also been rediscovering their CDs such as “Origin of Symmetry” and “Absolution”. I also recently watched the movie “Southland
Tales” (which I won’t go into here), but it featured the song “Blackout” off of Absolution, which I had really never listened to. Holy crap, what a great track!
6. Asobi Seksu (181 Plays) - A Japanese/American indie/pop band that I admit is a guilty pleasure. This year I’ve mostly been listening to their self-titled album and “Citrus.” They also
came out with the CD “Hush” this year, but it hasn’t gotten as much playtime from me as the other two albums.
7. Pixies (163 Plays) - I had my Pixies phase around six years ago. I figured now was the time to come back and revisit my old favorites. Not much to say here really.
8. Steve Aoki (148 Plays) - Introduced to me randomly by a friend in SLO in August, Steve Aoki has been a running favorite this year. Steve Aoki kind of reminds me of Girl Talk which I probably
listened to A LOT last year.
9. The Protomen (142 Plays) - The concept of a rock opera based on the Mega Man video game series just cannot fail. I was a big fan of the first act, but when act II came out, I listened to both
acts back to back. Both CDs have a unique feel to them; the first act is more inspired by the videogame series (with a hint of Ennio Morricone) while the second act follows up with a sound
inspired by the 80s. Someone needs to turn these albums into an anime!
10. Pink Floyd (140 Plays) - Pink Floyd is not one of those bands where you pick and choose their greatest hits. Instead, when you listen to Pink Floyd, you listen to an entire album at a time.
In particular I remember listening to The Wall, Animals, and Dark Side of the Moon this year.
Find the Largest Prime Factor
I love programming competitions. In the past two years I’ve participated in both my school’s programming competition and the ACM Southern California regional competition. Recently I’ve been turned
onto http://projecteuler.net, which is a competition-like site that gives you a series of math questions you need to solve by writing a program. I saw this as a good opportunity to brush up on
Python, so today I’ve been trying my hand at a few of these problems.
Question #3 asks you to find the largest prime factor of a very large number. My first instinct was to start at Sqrt(n) and work my way back until I found the first factor. The problem here is that
the first factor you find may not be the first PRIME factor. Instead, my approach to this problem was to find all factors of the number between 2 and Sqrt(n) and put them in a list.
Of all the numbers in the list, one of them is the largest prime factor, but which one? To find out, I made an assumption that turned out to be right: The composite factors have at least one prime
factor that is already in the list. If I can show that a given number has no factors in the list, then it is a prime factor. Further, if I have a sorted list and start at the end, I can show that
the number is the largest prime factor.
Anyway, here’s the code I used:
import math

n = 600851475143  # Find the largest prime factor of this
div = [x for x in range(2, int(math.sqrt(n))) if n % x == 0]  # Get all factors
i, j = 0, len(div) - 1
while j > i:  # Determine which one of the factors is the largest prime
    # This number is composite, go to the next largest factor
    if div[j] % div[i] == 0: i, j = 0, j - 1
    # See if the next number at the beginning of the list divides the current number
    else: i += 1
print div[j]  # Print out the largest prime factor
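A different sketch of the same problem (not the post's approach): instead of collecting every factor first, repeatedly divide out the smallest factor found, and whatever remains at the end is the largest prime factor.

```python
def largest_prime_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            n //= f      # strip one copy of the factor f
        else:
            f += 1
    return n             # what's left is the largest prime factor

print(largest_prime_factor(600851475143))  # 6857
```

This takes O(sqrt(n)) divisions and avoids the second pass over the factor list.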
Fix for CodeIgniter White Screen on Bluehost
I spent this past weekend writing an application using the CodeIgniter framework for my programming languages class. Sunday night I finally got it working and it was time to move it to production on
Bluehost. After I created the database and made some configuration changes, I checked it out and got nothing. It was a white page of nothing. I checked my logs and found nothing. Normally
this wouldn’t be such an issue, but when you have nothing in your logs, you don’t have much to work with. I looked at the source of the blank page and found this: “<!-- SHTML Wrapper - 500
Server Error -->.” Still not a lot to work with.
Today I spent some time investigating the issue and found that a lot of people were having it, but I didn’t find any fixes. Long story short, the fix was to enable FastCGI PHP5 on Bluehost.
When I did that, an error finally appeared on the screen! Apparently I had misspelled my database name >_<.
Anyway, if you’re having the same issue, let me know if this fix worked for you.
PHP Frameworks
I’ve been coding in PHP off and on for years now. Every project I’ve done thus far has been by hand. Doing the architecture of your site by hand is the easiest way to introduce bugs and end
up with sloppy code. For the first time I’ve started a project where I’m using a framework in PHP. My framework? CodeIgniter. I like its simplicity, its MVC design, and how it’s not like Rails. I
thought Rails was a bit too complicated and involved hacking the internal code way too much to get anything done, but that’s a whole other rant.
Anyway, the project I’m working on? Just a Twitter clone. Why? For the sake of learning, that’s all. It’s actually for a project in school. The idea is to implement this idea in something that
I’m already familiar with (PHP) and then implement it again in a language I’m not familiar with at all. After I’m done with the first part I want to implement it in Python using the Django
framework. Python has always been one of those languages I’ve been meaning to learn. In fact it seems I try to learn it every summer, but I just don’t seem to have the dedication to sit indoors
learning it.
Regardless, I hope to use this knowledge in my professional life. The last PHP project I worked on was stressful just because of some of the design decisions I made (doing a half-assed MVC
implementation…I didn’t do the best job of dividing up the view and controller). However, I’ve learned from those mistakes, and I might be taking up a new project at work where I’ll probably be using
this framework.
Using SSL with Gmail and Gmail Notifier
With the recent announcement at Defcon, Gmail users will soon be the target of session hijacking. The reason for this is that Gmail by default does not encrypt any traffic (except logins). This
allows anyone on the local network to sniff for session IDs passed between Gmail and the user when you check your email. With this session ID, a hijacker can authenticate themselves as you
without needing your username and password.
This has always been an issue for non-encrypted traffic, but it was announced at Defcon that a tool has been released that automates this hack. This was reason enough for Google to release an option
to turn on SSL. The problem is that you still have to turn it on manually.
To turn on SSL go to “Settings” and at the bottom you’ll see an option called “Browser Connection.” Choose the option “Always use https.” Yay now we’re protected from session hijacking!
The problem I noticed next was that Gmail Notifier stopped working! After doing some investigating, I found that you have to do some hex editing on gnotify.exe to get it to use SSL.
Before we do anything, close Gmail Notifier and make a backup copy of gnotify.exe just in case anything happens. By default you can find this executable in C:\Program Files\Google\Gmail Notifier. For hex editing
I used an old favorite hex editor called “Hex Workshop” for Windows. After you download/install it, open up gnotify.exe in Hex Workshop. On the left you’ll see a bunch of hexadecimal characters and
on the right you’ll see the ASCII equivalent.
To make the replacement, we first need to find the area we want to modify. Hit CTRL-F and do a search for the string we want to modify: “http://”. Under the “Type” drop-down
choose “Text String.” When you find “http://mail.google.com/mail/”, go ahead and add an “s” after “http.” You’ll see that whenever you type, it will overwrite whatever was in that field before. Go
ahead and type out the replaced characters until you end up with “https://mail.google.com/mail/”.
Go ahead and save the modified executable and open it back up. If it fails, you can always use the backup you made! Otherwise, you should now have access to Gmail over an encrypted connection!
I’m sure I’ll be writing a part II to this when I get home and my Gmail Notifier isn’t working there either.
What is the formula mass of FeO3?
can you show me how to do it with all your work make sure it is the correct unit labels please! thank you
What's a formula mass?
55.847 (mass of Fe) + 3(15.999) (mass of 3 oxygens) = 103.844 g
wait thats the formula of it?
that is how you find the formula mass.
im confused now
there is one iron, Fe, and 3 oxygens, O. So you add the mass of Fe, 55.847, to 3*15.999, the mass of the oxygens. The mass of one oxygen is 15.999, but since there are 3 of them you have to multiply 3 by
15.999 and add that to 55.847 to get the total mass, which is 103.844 g.
To find the mass of the formula, you look at the periodic table: Fe = ______ g, O = 3(______) g. You multiply by three because there are 3 oxygens. You then proceed to add everything to get the mass.
Do it steps @Tttaammmaannnaa
so the mass is 103.844 right?
Since there is one Iron, Fe. the mass is 55.487. The mass of 3 oxygen's is 47.997. Add 55.487 and 47.997 and you get 103.844
the mass of Fe is 55.847. sorry
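As an aside (not part of the thread), the arithmetic is easy to script, using the atomic masses quoted above:

```python
# Formula mass of FeO3: one Fe plus three O (masses as quoted in the thread)
fe, o = 55.847, 15.999
formula_mass = fe + 3 * o
print(round(formula_mass, 3))  # 103.844 (g/mol)
```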
so the answer is 103.844
ok thank you so much that helped a lot thank you again!
West Medford Geometry Tutor
...I am also an expert in time management and study skills, which are an essential part of scholastic success. I am patient, enthusiastic about learning, and will work very hard with you to
achieve your academic goals. Joanna
I have three years' experience tutoring high school students in biology.
10 Subjects: including geometry, chemistry, biology, algebra 1
...I have been teaching middle school math for the past 7 years and love my job. I love to also help students and tutor on the side. I want all students to feel confident in math.
6 Subjects: including geometry, algebra 1, elementary math, study skills
I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have
tutored a wide range of students - from middle school to college level.
14 Subjects: including geometry, statistics, algebra 1, algebra 2
...I am licensed to teach mathematics in Massachusetts to grades 8-12. Under this licensure I am qualified to teach many high-school level mathematics topics including geometry. The Massachusetts
state teaching license for mathematics covers Pre-algebra.
9 Subjects: including geometry, algebra 1, algebra 2, SAT math
My name is Derek H. and I recently graduated from Cornell University's College of Engineering with a degree in Information Science, Systems, and Technology. I have a strong background in Math,
Science, and Computer Science. I currently work as software developer at IBM.
17 Subjects: including geometry, statistics, algebra 1, economics
Java Programming.. If I want to use the If statement to say : If (x > 4 and <9){} What operator should I use? or what method?
and is "&&"
so it would be If(x>4 && X<9){}
yeah but that operator is not working...
&& is the logical "and" operator to compare two boolean values, should be working
u try it..
try putting it in parentheses so if ((x>4)&&(x<9))
if(( x > 4) && (X < 9 )) { //Do something here. } Hey! That's exactly I was going to post lol
There you go, that's exactly what I was looking for. I was trying something like that but without some parentheses, if (x >4)&(<9), but I forgot that there's only one condition so everything must
be within a set of parentheses. Thanks man!
Just so you know, the single ampersand is a legal operator. It's used for bitwise and.
if(x>4 && x<9 ) { // your stuff }
There's also the conditional (ternary) operator, used for single assignments. a = (b > c) ? b : c; So if b is greater than c, a takes the value of b, otherwise it takes the value of c. More
formally, this is read as: value = (condition) ? (true value) : (false value); The condition can be any valid conditional statement, as long as it evaluates to a boolean before the ? operator.
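For comparison (a cross-language aside, not from the thread), the same two ideas, a range test and a conditional expression, look like this in Python:

```python
# Java's `x > 4 && x < 9` -- Python uses `and`, or a chained comparison:
x = 6
in_range = 4 < x < 9          # equivalent to (x > 4) and (x < 9)

# Java's ternary `a = (b > c) ? b : c;` -- Python's conditional expression:
b, c = 10, 7
a = b if b > c else c

print(in_range, a)            # True 10
```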
The density of x_1^n+x_2^n where x_i are Gaussian
We define a process $\chi_k^n=\sum_{i=1}^k x_i^n$ where the $x_i$ are i.i.d. Gaussian processes. I am trying to find the distribution of $\chi_k^n$. If $k=1$ then we get $f(x^n=y)=\frac1n y^{\frac{1-n}{n}}\exp(-y^{2/n}/2)$, and then I try to do the convolution, but then I have to say something about the integral $$C_{n}\int_{-\infty}^{\infty}\frac1n (l^2-h^2)^{\frac{1-n}{n}}\exp(-(l+h)^{2/n}/2)\exp(-(l+h)^{2/n}/2)\exp(-(l-h)^{2/n}/2)dh$$ What can I do for $n>2$?
Can I say something about the density — a formula, or an upper bound?
pr.probability st.statistics
1 This question might work better on stats.stackexchange.com – Robby McKilliam Aug 9 '10 at 10:57
When you say "process" do you just mean "random variable"? – Yemon Choi Aug 10 '10 at 7:26
1 Answer
Firstly, you forgot to multiply the density $f(x^n=y)$ by $1/\sqrt{2\pi}$. If you obtain the density of the random variable $X_{1,2}=X_1^n+X_2^n$ by the convolution method, the problem is essentially solved, because $X_1^n+X_2^n+X_3^n=X^n_{1,2}+X_3^n=X^n_{1,2,3}$, and since you have the density of $X^n_{1,2}$ and the density of $X^n_{3}$, you can compute their convolution, i.e. the density of $X^n_{1,2,3}$. If the calculation by convolution is too difficult (as I suspect), you can use the characteristic function. Compute the characteristic function of the variable $X_1^n$, denoted $\psi_{X_1^n}(t)$. As the two variables $X_1^n$ and $X_2^n$ are i.i.d., $\psi_{X_1^n+X_2^n}(t)=\psi_{X_1^n}(t)\cdot \psi_{X_2^n}(t)=(\psi_{X_1^n}(t))^2$, and so on: for the variable $X_1^n+X_2^n+\ldots+X_k^n$ we have $\psi_{X_1^n+X_2^n+\ldots +X_k^n}(t)=(\psi_{X_1^n}(t))^k$. It remains to compute $\psi_{X_1^n}(t)$.
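A quick numerical sanity check of the multiplicativity claim (my own sketch, not part of the answer): estimate the empirical characteristic function of $X_1^n+X_2^n$ and compare it with the square of the one-variable characteristic function, here for $n=3$ and $t=0.5$:

```python
import cmath
import random

random.seed(1)
n, t, N = 3, 0.5, 100_000

# i.i.d. samples of (X_1^n, X_2^n) with X_i standard normal
pairs = [(random.gauss(0, 1) ** n, random.gauss(0, 1) ** n) for _ in range(N)]

def ecf(samples):
    """Empirical characteristic function E[exp(i*t*Z)] at the fixed t."""
    return sum(cmath.exp(1j * t * z) for z in samples) / len(samples)

phi_single = ecf([a for a, _ in pairs])      # estimate of psi_{X^n}(t)
phi_sum    = ecf([a + b for a, b in pairs])  # estimate of psi_{X_1^n+X_2^n}(t)

# by independence these should agree up to Monte Carlo error (~1/sqrt(N))
err = abs(phi_sum - phi_single ** 2)
print(err)
```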
Edexcel C3 past paper question
January 2nd 2010, 06:38 AM #1
I'm using past papers to help with revision but found this question really difficult. Hope someone can help
Equation of curve: y=(2x-1)tan2x, x is between 0 and pi/4 (including 0)
The curve has a minimum at the point P. The x-coordinate of P is k.
a) Show that k satisfies the equation $4k - \sin(4k) - 2 = 0$
I wasn't sure where to start but tried substituting k into the equation of the graph which hasn't helped :s
I'm using past papers to help with revision but found this question really difficult. Hope someone can help
Equation of curve: y=(2x-1)tan2x, x is between 0 and pi/4 (including 0)
The curve has a minimum at the point P. The x-coordinate of P is k.
a) Show that k satisfies the equation $4k - \sin(4k) - 2 = 0$
I wasn't sure where to start but tried substituting k into the equation of the graph which hasn't helped :s
used a calculator and determined $k \approx 0.276515...$
this value of $k$ does not satisfy the equation $4k - \sin(4k) - 2 = 0$
I'm using past papers to help with revision but found this question really difficult. Hope someone can help
Equation of curve: y=(2x-1)tan2x, x is between 0 and pi/4 (including 0)
The curve has a minimum at the point P. The x-coordinate of P is k.
a) Show that k satisfies the equation $4k - \sin(4k) - 2 = 0$
I wasn't sure where to start but tried substituting k into the equation of the graph which hasn't helped :s
Differentiate the function using the product rule:
$y'= 2\tan(2x)+ (2x-1) \cdot \dfrac{2}{\cos^2(2x)}$
Set y ' = 0 and start to solve for x:
$2\tan(2x)+ (2x-1) \cdot \dfrac{2}{\cos^2(2x)} = 0~\implies~2\sin(2x) \cdot \cos(2x) + 4x - 2=0~\implies~2 \cdot \frac12 \cdot \sin(4x) + 4x -2=0$
which yields:
$4x + \sin(4x) - 2 = 0$
Could it be that there is a typo in your text?
I don't know a method to solve the equation algebraically. If you use an iterative method (for instance Newton's method) you'll get
$x = k \approx 0.276515039...$ which is the value skeeter posted yesterday.
I used as initial value
$x_0 = 0.5$
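The iteration is easy to reproduce; a minimal Newton sketch (my own, not from the thread) on $f(x)=4x+\sin(4x)-2$, the equation with the "+" sign that the derivation above produces:

```python
import math

f  = lambda x: 4 * x + math.sin(4 * x) - 2   # stationary-point equation
df = lambda x: 4 + 4 * math.cos(4 * x)       # f'(x)

x = 0.5                      # initial value, as in the post
for _ in range(50):
    step = f(x) / df(x)
    x -= step
    if abs(step) < 1e-12:
        break

print(x)  # ≈ 0.276515, matching the value quoted above
```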
Is there a "knot theory" for graphs?
I think knot theory has been studied for quite a while (like a century or so), so I'm just wondering whether there is a "knot theory" for graphs, i.e. the study of (topological properties of)
embeddings of graphs into R^3 or S^3.
If yes, can anyone show me any reference?
If the answer is basically no, then why? Is it just too hard, uninteresting, or can it be essentially reduced to the study of knots (and links)?
knot-theory graph-theory
check out: ams.org/mathscinet-getitem?mr=1781912 – Ian Agol Oct 30 '11 at 2:26
You might be interested in the last part of my answer to this question : mathoverflow.net/questions/39650/… – Andy Putman Oct 30 '11 at 2:40
7 Answers
The theory of knotted trees is obviously trivial. So given a knotted graph $\Gamma$, take a maximal tree in it and you can bring it to a standard form, say to be embedded as a planar object
inside a tiny disk that is disjoint from the rest of the knotted graph; which is just the finitely many arcs that make the complement of the tree. But now you can draw $\Gamma$ in the plane
so that "everything interesting" (namely, the complement of the tree) is outside of a small disk. Do inversion, and you have a fixed tree outside the disk and a tangle inside it. (Some
details depend on whether your vertices are rigid or not, or "thickened" or not, but the conclusion is always more or less the same).
This correspondence between knotted graphs and tangles is not canonical - it depends on the (combinatorial) choice of a maximal tree, and modifying that choice modifies the resulting tangle
(in simple ways that will not be stated here).
So topologically speaking, "knotted graphs" are not interesting. They are merely tangles, along with a bit of further combinatorial data (mostly the tree). If you totally understand the
theory of tangles (modulo some simple to state actions, which also depend on what rigidity assumptions are made for the vertices), you'd totally understand knotted graphs.
Yet there's lot's of beautiful information in the interaction between the combinatorics of the graph and the topology of the tangle. For example, see my recent paper with Zsuzsanna Dancso,
arXiv:1103.1896, in which we study the relationship between knotted trivalent graphs and Drinfel'd associators.
Yes, there are many such results. Conway-Gordon and Sachs in the 80s proved that any map $K_6 \to R^3$ contains two disjoint linked triangles. Robertson-Seymour-Thomas found the
family of minors that characterizes this property. Lovasz-Schrijver proved that this is equivalent to having Colin de Verdière invariant at most 4, and that the projection on the null space of the
Colin de Verdière matrix is a linkless embedding (in case the null space is of dimension four or less; I forget if this is a theorem or a conjecture?)
There are many papers saying things like: for your favorite link invariant there is a number $n$ such that for any embedding $K_n \to R^3$ one can find a link with a nontrivial value of your favorite
invariant. I don't remember the references now; maybe google "Ramsey theory for links" or something like that. ($K_n$ is the complete graph on $n$ vertices.)
From a more geometrical point of view, here are two things you can do:
One is to look at metric properties. For this look up Kolmogorov-Barzdin and the recent paper by Gromov and Guth. Actually expanders were discovered for this reason.
The alternative is to think about the linear structure: namely, you can ask whether there are affine subspaces of the ambient space intersecting many of the edges for any embedding. In a
recent joint paper with Boris Bukh we called these "space crossings", because if the affine flat that intersects your edges is of dimension 0, this is precisely a crossing. We investigated the
"space crossing numbers" of graphs in $R^3$, but our techniques generalize to graphs in $R^d$. The first result in this direction was due to Zivaljevic, who proved that $K_{6,6} \to R^3$ has
nonzero space crossing number. Our main result is an analogue of the classical crossing number inequality, which almost implies it.
1 I'm leaving this as a comment rather than an answer because it's really the same as what Alfredo already said, but for more of what he mentions in his first paragraph, at a nontechnical
level, see en.wikipedia.org/wiki/Linkless_embedding – David Eppstein Oct 30 '11 at 2:55
"in all the previous results is very important that you are dealing with codimension two". The Conway-Gordon/Sachs result has NOTHING to do with codimension two: any map of the
$n$-skeleton of the $(2n+3)$-simplex in $\Bbb R^{2n+1}$ contains a pair of disjoint linked boundaries of the $(n+1)$-simplex (Lovasz-Schrijver, ams.org/journals/proc/1998-126-05/
S0002-9939-98-04244-0 and Taniyama, pjm.berkeley.edu/pjm/2000/194-2/p14.xhtml; a third proof is in Example 4.7 in arxiv.org/abs/math/0612082 and a fourth in Example 4.9 in arxiv.org/abs/
1103.5457v2). – Sergey Melikhov Oct 30 '11 at 18:30
"... as a famous result of Zeeman says". The fact that every graph unknots in $\Bbb R^n$ for $n>3$ is trivial (use general position) and has nothing to do with Zeeman. Zeeman's result is
about piecewise-linear unknotting of spheres in codimension $\ge 3$. But spheres easily link, and connected manifolds easily knot in high codimensions. In fact, "your favorite" link
invariant used in "Ramsey link" theory (and I've seen papers dealing with the Sato-Levine invariant and Milnor's triple invariant) probably has a higher-dimensional extension (certainly
in those two cases). – Sergey Melikhov Oct 30 '11 at 19:09
(con't) In more detail, higher-dim extensions of Milnor's triple invariant detect a Brunnian "Borromean rings" link of three $S^{2k−1}$'s in $\Bbb R^{3k}$, and a higher-dim counterpart of
the Sato-Levine invariant (not the original higher-dim Sato-Levine invariant) detects a "Whitehead link" of two $S^{2k−1}$'s in $\Bbb R^{3k}$, $k\ne 3,7$, which has zero linking number. –
Sergey Melikhov Oct 30 '11 at 19:20
Finally, the Robertson-Seymour-Thomas result about minors is likely to have an analogue for linkless embeddings of $n$-dimensional simplicial complexes in $\Bbb R^{2n+1}$, $n\ne 2$ (see
arxiv.org/abs/1103.5457v2), but I'd skeptical about lower codimension, especially codimension two ($K^n$ in $\Bbb R^{n+2}$ for $n>1$) In fact, I haven't seen any results whatsoever on
"codimension two Ramsey theory" ($K^n$ in $\Bbb R^{n+2}$) except for the classical case ($n=1$). – Sergey Melikhov Oct 30 '11 at 19:27
Yes, there's plenty of work on this. First of all, you have to define the notion of equivalence that you are interested in. Usually people only care about the graph up to handle-slide
(turning the subject into the subject of knotted handlebodies), so you can assume the graph is tri-valent. But you could go further to study graphs up to isotopy and there's work on that
too. Much of the technology to study knots translates to studying knotted graphs. Some references:
http://ldtopology.wordpress.com/2009/10/29/which-knotted-objects-are-worthy-of-study/
The last two references are rather nice as they show that much the same way hyperbolic geometry "dominates" traditional knot theory, it plays a similar role in the study of knotted
trivalent graphs. In this case orbifolds play a more prominent role.
The first katlas link looks puzzling. Is there any paper about this Reidemeister torsion of graph complement? I remember finding Viro's Alexander polynomial/Conway function of trivalent
graphs (arxiv.org/abs/math/0204290, mi.mathnet.ru/eng/aa74) to be quite enlightening (e.g. in trying to understand the ordinary multivariable Alexander polynomial). But is it related to
the Reidemeister torsion? For links, the relation is made very clear in arxiv.org/abs/math/9806035, but this doesn't work for graphs, does it? – Sergey Melikhov Oct 31 '11 at 20:22
whether there is a "knot theory" for graphs...
or can it be essentially reduced to the study of knots (and links)?
As Dror Bar-Natan points out in his interesting answer, it can, "if you totally understand the theory of tangles". If you don't, but you're very generous as to what amounts to a reduction,
then it "almost can" (up to about one integer invariant) by a theorem of Robertson, Seymour and Thomas: two knotless, linkless embeddings $f,g$ of a graph $G$ in $\Bbb R^3$ are equivalent (by
an isotopy of $\Bbb R^3$) if and only if the restrictions of $f$ and $g$ to every subgraph of $G$ homeomorphic to $K_5$ or $K_{3,3}$ are equivalent. Here "knotless" means that every cycle (a
subgraph homeomorphic to $S^1$) in $G$ is unknotted, and "linkless" means that every two disjoint subgraphs are separated by an embedded $2$-sphere. To be precise, Robertson, Seymour and
Thomas had a slightly different formulation (with "panelled" in place of "knotless and linkless") and the above version is proved in http://arxiv.org/abs/math/0612082.
What is the "about one integer invariant"? As Ryan Budney points out in his interesting answer, it helps to study graphs up to a weaker equivalence relation than ambient equivalence or
non-ambient isotopy (which, incidentally, already kills all local knots). Taniyama (Topol. Appl. 65 (1995), 205-228) has shown that two embeddings of a graph $G$ in $\Bbb R^3$ are
"homologous" (=cobound an embedded $G\times I$+(handles) in $\Bbb R^3\times I$, where each handle is a torus attached by a tube to a $2$-cell, (edge)$\times I$) if and only if they have the
same Wu invariant (this integer invariant is really just the $1$-parameter version of the van Kampen obstruction). On the other hand, Shinjo and Taniyama (Topol. Appl. 134 (2003), 53-67) have
shown that the vanishing of the Wu invariant of a graph is determined by the vanishing of its restriction to subgraphs homeomorphic to $K_5$, $K_{3,3}$ and $S^1\sqcup S^1$.
Another interesting relation on embedded graphs is link homotopy, i.e. arbitrary self-intersections of connected components are allowed, but distinct components may not intersect. The link
homotopy classification of embeddings in $\Bbb R^3$ of a disjoint union of two $S^1$'s and a wedge of $S^1$ is already pretty nontrivial.
In principle, there is an algorithm to tell if two graphs in $\mathbb{R}^3$ are isotopic, using Waldhausen's method of recognizing Haken 3-manifolds. The complement of a graph (obtained by
removing an open regular neighborhood) has a natural pared manifold structure (also keeping track of meridians and longitudes on closed loop components). The pared manifold just means you
have a collection of annuli in the boundary, and these annuli come from the regular neighborhoods of the edges of the graph. Waldhausen's theorem may be extended to determine the
homeomorphism problem for pared manifolds - although it is not explicitly stated in this form, his method makes use of a more general concept of manifolds with boundary pattern, of which
pared manifolds are a special case. It's not hard to see that two graphs are isotopic if and only if their corresponding pared manifolds are equivalent. However, this algorithm has not been
fully implemented by computer.

One practical method is to use the program Orb. This allows you to input a graph using a mouse, similar to Snappea/SnapPy. If the graph complement is hyperbolic (in an appropriate sense,
where the pared locus corresponds to rank one cusps, and the complementary regions corresponding to vertices of the graph are totally geodesic), then Orb will allow you to tell if two graph
complements are isotopic (if it doesn't crash!). There is a relative JSJ decomposition, which allows one to break up a pared manifold into hyperbolic and Seifert pieces (such as the graph
generalization of connect sum), but this has not been implemented as far as I know.
The theory of (un)knotted graphs also contributes to knot theory. For example, the theory of tunnel number one knots can be thought of as the theory of embedded theta graphs with a
distinguished edge (the tunnel). The operation of band summing two knots (or more generally any rational tangle replacement) can be studied by examining an eyeglasses graph with the
separating edge the core of the band. There is a relatively nice interplay between such graphs and other 3-manifold theories like thin position and sutured manifold theory.
Just as Ryan Budney pointed out, instead of ambient isotopy one may consider a weaker equivalence relation on spatial graphs, namely the one generated by isotopy and IH-moves (also known as
Whitehead moves). With this definition of equivalence, two knotted graphs are equivalent if and only if they admit isotopic regular neighbourhoods. This equivalence relation has already
been considered, for example, by Kinoshita in 1958, and it was named ''neighbourhood equivalence'' for obvious reasons.
Of course, the study of graphs up to neighbourhood equivalence reduces to the study of knotted handlebodies. There exist several invariants of knotted handlebodies. Among them, I have
recently become interested in the quandle coloring invariants defined by Ishii in his paper
Moves and invariants for knotted handlebodies Algebraic & Geometric Topology 8 (2008) 1403–1418
In a joint paper with R. Benedetti
"Levels of knotting of spatial handlebodies" http://arxiv.org/abs/1101.2151
we have exploited (among other things, like the Alexander invariants of the complement) these quandle coloring invariants in order to distinguish different levels of knotting for
handlebodies.
Just as in the case of knot theory, a good invariant for a knotted handlebody is its complement. However, while Gordon-Luecke's Theorem ensures that a knot is determined by its complement,
there exist inequivalent handlebodies whose complements are homeomorphic (this is one of the reasons why I would compare the theory of knotted handlebodies of genus g with the theory of
links with g components, rather than with knot theory). On the other hand, Kent and Souto exhibited here
a spatial handlebody whose complement admits a unique embedding in the 3-sphere up to isotopy. Also observe that, due to Fox's reimbedding Theorem, every compact submanifold of the 3-sphere
admits a reimbedding as the complement of a finite union of handlebodies in the 3-sphere itself. Therefore, a complete understanding of knotted handlebodies should provide an understanding
of spatial domains in general.
Regression, ANOVA, and the General Linear Model: A Statistics Primer e-book downloads
Regression, ANOVA, and the General Linear Model: A Statistics Primer
Peter W. (Wright) Vik
Download Regression, ANOVA, and the General Linear Model: A Statistics Primer
Peter Vik’s Regression, ANOVA, and the General Linear Model. SAGE: Regression, ANOVA, and the General Linear Model: A. Univariate GLM is the general linear model now often. . Regression, ANOVA, and
the General Linear Model: A Statistics Primer [Peter W. Opposite Results in Ordinal Logistic Regression—Solving a Statistical. (Wright) Vik] on Amazon.com. including multiple regression models,
one-way ANOVA,. A Primer on Linear Models – John F. A SAGE Publications book: Regression, ANOVA, and the General Linear Model: A Statistics PrimerPeter Vik. Amazon.com: General Linear Models:
Univariate GLM, Anova/Ancova. Regression, ANOVA, and the General Linear Model: A Statistics. *FREE* super saver shipping on qualifying offers. Regression, ANOVA, and the General Linear Model: A
Statistics. Linear Statistical Models. and How ANOVA and Linear Regression Really are the Same. The General Linear Model,. A Primer on Linear Models. the text first provides examples of the general
linear model, including multiple regression models, one-way ANOVA,. General Linear Models: Univariate GLM, Anova. Regression, ANOVA, and the General Linear Model: A Statistics Primer: Amazon.it:
Peter Vik: Books in Other Languages A Primer on Linear Models (Chapman & Hall/CRC Texts in Statistical. This book is very easy to follow and has a lot of
YELLOW BOOK PRINTS HC download
Axis ebook
I Won’t Give Up e-book
Suzanne at the Math Forum
Yesterday as Max and I were planning a workshop we’ll be facilitating in early August a recurring thought came to me — after spending hours of time on problem solving teachers sometimes comment to
us, how will I ever find time to do this with my students?
As I ponder this issue, I wonder if at the heart of it is that
• the teachers realize that the amount of time we spend on one problem is worth the time?
• there is no apparent transfer from a condensed one-day workshop to a full-year class?
• teachers’ learning experiences are different from their students’ learning experiences?
Worth the Time
From formative assessment during the workshop and evaluations at the conclusion of our workshops, teachers indicate that the time spent is worth it for them.
While teachers would like to transfer the ideas, this seems hard to achieve. So many school “routines” get in the way.
Learning Experiences
Are teachers more likely to be in control of their own learning? Are classrooms/schools ready at this point to have students be in control of their own learning?
How can teachers find time for rich problem-solving experiences for their students?
Idea: If students are encouraged to take charge of their own learning, our job is not to “lead” them through problem solving but instead to create environments that encourage them to embrace the
process. Try the “At the End of the Period: Take 5 Minutes” approach.
• Using just 5 minutes at the end of a class period is manageable.
• Starting and stopping reinforces problem solving as a process.
• Perseverance is also reinforced.
Do you see any disadvantages?
Does it make sense that taking this approach could reinforce the idea of problem solving as a process and that it’s not something to rush to finish just to be over and done? How might this idea fit
within your classroom routine?
1. Andrew Stadel says:
Hi Suzanne,
I really need to do a better job of reminding myself and students that not all problems need (or should) be wrapped up in one class period. I’ll have to try the “Take 5 minutes” approach this
year. I feel that the first initial instance where I know students will run out of time, I'll give myself a little bit more than five minutes so we can discuss the value of problem-solving as a process.
I’m making a list of 5 daily reminders that I will greet myself with each day as I enter the classroom. This one has been added to the list:
Starting and stopping reinforces problem solving as a process.
2. Suzanne Alejandre says:
Hi Andrew,
I’ll look forward to hearing (or maybe viewing?) stories of how this goes for you. When I was writing the post it occurred to me that the classroom videos that I’ve made and the ones that the
Math Forum will soon be including on our “companion website” for the Powerful Problem Solving book don’t show the idea of “starting and stopping.” Because we (Math Forum staff) don’t have our own
classrooms and come in to a school to mentor/model, we usually use the full class period to give a teacher ideas.
This coming school year I’m going to try to capture on tape the “Take 5 minute” approach. I’m not sure if you or other EnCoMPASS folks are in situations to tape and publish video (just using an
iPad or SmartPhone and then YouTube and/or Vimeo) but if that seems like a possibility, it would be great to offer that “look” to the community to go along with my paper handout of how to try it.
I think it’s more powerful when it’s not the guest (me) coming in but the actual teacher illustrating how their classroom/their students react.
Another idea that I just thought of that I think I'll blog about right now is working on multiple problems — start a new one before bringing closure to the previous one.
3. [...] closure to a problem during a class period rather than using a Take 5 Minutes approach and let time elapse between engagement with a [...]
4. Kristin says:
This is a great idea, because I do feel the need to reconvene the class on a problem to share solution pathways before they leave for the day. “Take 5 Minutes” is great because I think it would
keep most students “in suspense” and thinking about the problem even when they leave class.
Not a disadvantage, more classroom management, but I could see certain students coming in on the second day with a solution because they worked at home or with parents, so I would have to have a
plan for how those students then approach the rest of the process.
I will definitely be trying this out to see how it works in the flow of my school day.
5. Suzanne Alejandre says:
I have to admit that with my 7th graders (average reading level 4th grade) I didn’t experience having them come the next day having worked out the solution! :) … but … if you present just the
Scenario to start with, chances are less that they can come in with a solution since you’ve not decided on a question (yet).
Some students “might” make up their own question through wondering … but … they could also be challenged by someone else’s wondering. If your students are at this level of mathematical
engagement, the trick will be to find some problems that could lead to a range of questions and then just present the scenario.
It will be so great to hear from you about how this goes when you try it in your classroom.
Thanks so much!
6. Lisa Henry says:
I, too, am interested in the “take 5 minutes” approach. I am one of those who is struggling with how do you fit problem solving into the classroom when we (seemingly) have so much to cover over
the course of the year. I don’t know if my students would take it home and work it out with a parent, but I can definitely see that by introducing by using the scenario form that students would
have some time to think about what they notice and wonder. For my students, personally, I don’t think they would necessarily rush to solve it all on their own. However, if they did, they could
type it up and as their teacher, I could continue to push them to look at different aspects if they are ahead of other students.
I really like Andrew’s idea of 5 daily reminders and I have already been thinking about doing something similar for myself. Between attending EnCoMPASS and Twitter Math Camp, there are several
things that I want to keep fresh in my memory as this school year proceeds. I think I may "borrow" Andrew's statement to add to it.
Newark, NJ Algebra Tutor
Find a Newark, NJ Algebra Tutor
...As an educator, I have spent the last 6 years working with special needs children. I have helped many students on the Autism spectrum. My interest in Autism and other developmental problems in
children has taken me back to school to study communication disorders, specifically language disorders...
39 Subjects: including algebra 1, algebra 2, reading, English
I am currently a high school math teacher in a large comprehensive Queens High School. I have been teaching high school mathematics for 11 years. Courses I have taught include Integrated
Algebra, the new Common Core Algebra 1, Geometry, Algebra 2 and Trigonometry, and Financial Algebra.
20 Subjects: including algebra 1, algebra 2, geometry, finance
Hey Everyone. My name is William and I've been tutoring the SAT for the past 5 years. As a young guy myself, I work very well with high school students and aim to go above and beyond by giving
advice on scholarship and college applications.
9 Subjects: including algebra 1, algebra 2, geometry, SAT math
...My approach to mathematics tutoring is creative and problem-oriented. I focus on proofs, derivations and puzzles, and the natural progression from one math problem to another. My
problem-solving skills were honed while training for the 40th International Mathematical Olympiad in Bucharest, Romania, at which I won a Bronze Medal.
9 Subjects: including algebra 2, algebra 1, calculus, geometry
...I have extensive experience tutoring students ranging from 6th grade to graduate level, and a strong foundation in math, biology, chemistry, and physics at the undergrad level. I understand
that every student is different and that strategies must be adjusted accordingly. I engage my students in...
24 Subjects: including algebra 1, algebra 2, chemistry, biology
Related Newark, NJ Tutors
Newark, NJ Accounting Tutors
Newark, NJ ACT Tutors
Newark, NJ Algebra Tutors
Newark, NJ Algebra 2 Tutors
Newark, NJ Calculus Tutors
Newark, NJ Geometry Tutors
Newark, NJ Math Tutors
Newark, NJ Prealgebra Tutors
Newark, NJ Precalculus Tutors
Newark, NJ SAT Tutors
Newark, NJ SAT Math Tutors
Newark, NJ Science Tutors
Newark, NJ Statistics Tutors
Newark, NJ Trigonometry Tutors
Nearby Cities With algebra Tutor
Bayonne algebra Tutors
Bloomfield, NJ algebra Tutors
East Newark, NJ algebra Tutors
East Orange algebra Tutors
Elizabeth, NJ algebra Tutors
Harrison, NJ algebra Tutors
Hillside, NJ algebra Tutors
Irvington, NJ algebra Tutors
Jersey City algebra Tutors
Kearny, NJ algebra Tutors
Orange, NJ algebra Tutors
South Kearny, NJ algebra Tutors
Staten Island algebra Tutors
Union Center, NJ algebra Tutors
Union, NJ algebra Tutors
Calculus AB problems
November 11th 2005, 08:45 PM
Calculus AB problems
Can you help me out with these problems.
Find the slope of the line tangent to the curve y=sin^5x at the point where x=3
Show that slope of every line tangent to the curve y= 1/(1-2x)^3 is positive.
Find the tangent and normal lines to the ellipse x^2 -xy + y^2 = 7 at the point (-1, 2)
November 12th 2005, 11:45 AM
Originally Posted by tallboi562
Can you help me out with these problems.
Find the slope of the line tangent to the curve y=sin^5x at the point where x=3
Show that slope of every line tangent to the curve y= 1/(1-2x)^3 is positive.
Find the tangent and normal lines to the ellipse x^2 -xy + y^2 = 7 at the point (-1, 2)
For #1, how does one find the slope of a tangent line?
For #2, again, find the general slope of the tangent line and watch for it's trends.
For #3, implicit differentiation will need to be used here. Start off and tell me what you get.
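Not part of the original thread, but as a sketch of where those hints lead (function names are mine, and x = 3 is assumed to be in radians):

```python
import math

# Problem 1: chain rule on y = sin(x)**5 gives dy/dx = 5*sin(x)**4 * cos(x).
def slope_sin5(x):
    return 5 * math.sin(x) ** 4 * math.cos(x)

# Problem 2: y = (1 - 2x)**-3 gives dy/dx = 6 / (1 - 2x)**4, which is
# positive wherever it is defined, as the problem asks you to show.

# Problem 3: implicit differentiation of x^2 - x*y + y^2 = 7:
#   2x - y - x*y' + 2y*y' = 0  =>  y' = (y - 2x) / (2y - x).
def slope_ellipse(x, y):
    return (y - 2 * x) / (2 * y - x)

m = slope_ellipse(-1, 2)  # tangent slope 4/5; the normal line's slope is -1/m
```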
How the Allies Used Math to Figure out Nazi Germany's Tank Production
article from 2006:
The German tanks were numbered as follows: 1, 2, 3 ... N, where N was the desired total number of tanks produced. Imagine that they had captured five tanks, with serial numbers 20, 31, 43, 78 and
92. They now had a sample of five, with a maximum serial number of 92. Call the sample size S and the maximum serial number M. After some experimentation with other series, the statisticians
reckoned that a good estimator of the number of tanks would probably be provided by the simple equation (M-1)(S+1)/S. In the example given, this translates to (92-1)(5+1)/5, which is equal to
109.2. Therefore the estimate of tanks produced at that time would be 109
By using this formula, statisticians reportedly estimated that the Germans produced 246 tanks per month between June 1940 and September 1942. At that time, standard intelligence estimates had
believed the number was far, far higher, at around 1,400. After the war, the allies captured German production records, showing that the true number of tanks produced in those three years was 245
per month, almost exactly what the statisticians had calculated, and less than one fifth of what standard intelligence had thought likely.
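The estimator in the quoted passage is simple enough to check in code. A sketch (function and variable names are mine, not from the article):

```python
# Serial-number estimator quoted above: N ≈ (M - 1) * (S + 1) / S,
# where M is the largest captured serial number and S the sample size.
def estimate_total(serials):
    s = len(serials)
    m = max(serials)
    return (m - 1) * (s + 1) / s

# The article's example: five captured tanks.
print(estimate_total([20, 31, 43, 78, 92]))  # 109.2, rounded down to 109
```

For comparison, the form of the estimator more often seen in textbooks, M(1 + 1/S) - 1, gives 109.4 on the same sample; both round to essentially the same answer.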
Now I Know
Photo of Tiger II tank by Flickr user, used under Creative Commons license

Newest 5 Comments
According to http://www.wired.com/autopia/2010/10/how-the-allies-used-math-against-german-tanks/
the tanks were 256 against 245 in this article. I really wonder the exact figures. I really wonder is this technique going 2 work? I also agree with Jill may be it is just coincidence.
Do you have a link to the Guardian article?
"More of a triumph for logic rather than statistics."
No conflict here; statistics is just mathematical logic in work clothes.
Edward says:"That equation says: If we are capturing X percentage of the tanks we see...".
No, I don't think so. We don't know how many tanks are produced, so we don't know what percentage of them we have captured.
Math is usually a tankless task.
CFI Forums | Quantum Physics
The following excerpt was taken from here
[u]EPR paradox[/u]
In quantum mechanics, the EPR paradox is a thought experiment which challenged long-held ideas about the relation between the observed values of physical quantities and the values that can be
accounted for by a physical theory. "EPR" stands for Einstein, Podolsky, and Rosen, who introduced the thought experiment in a 1935 paper to argue that quantum mechanics is not a complete physical
theory. [/quote]
My question is, at what point exactly was the deterministic, causal model for reality first threatened? And was it threatened by an actual empirical observation? Who made the observation? What did
they observe? And how?
And my second question is in regards to this:
The EPR paradox is sometimes referred to as the EPRB paradox for [b]David Bohm[/b], who converted the original thought experiment into something closer to being experimentally testable.
David Bohm’s first book, Quantum Theory published in 1951, was well-received by Einstein, among others. However, Bohm became dissatisfied with the orthodox approach to quantum theory, which he had
written about in that book, and began to develop his own approach (the Bohm interpretation): he devised a non-local hidden variable deterministic theory whose predictions agree perfectly
with the nondeterministic quantum theory. [/quote]
Ok, so why isn’t Bohm’s theory taken seriously?
The rest of my questions are more mundane and reflect my inexperience in this subject (but a person has to start somewhere)
During the last century, quantum theory has proved to be a successful theory, which describes the physical reality of the mesoscopic and microscopic world. Up to now, no method is known which
contradicts [b]the predictions made by quantum theory.[/b] [/quote]
Can anyone tell me what these predictions were and how they were made?
Quantum mechanics was developed with the aim to describe atoms and to explain the observed spectral lines in a measurement apparatus. [/quote]
What are spectral lines? And why do they exist in [i]a measurement apparatus?[/i]
During the development of quantum mechanics the fact that quantum theory allows for an accurate description of reality is obvious from many physical experiments, and has probably never been seriously
On the other hand, for the interpretation of quantum mechanics, things could not be more different. Since the theory of quantum mechanics has been formulated, the following question arises:
How can we interpret the mathematical formulation of quantum mechanics?
This question leads to a discussion, in which people with different philosophical backgrounds give different answers. [b]Quantum theory and quantum mechanics do not account for single
measurement outcomes in a deterministic way.[/b] [/quote]
What is a measurement outcome? When I open my fridge and look inside, the contents in the fridge are as they are displayed to me. Is that an example of a measurement outcome?
One accepted interpretation of quantum mechanics is the Copenhagen interpretation. The Copenhagen manifesto argued that a measurement causes [b]an instantaneous collapse of the wave
function which describes the quantum system[/b] [/quote]
Ok, so I’ve heard this term ‘wave function’ kicking around quite a bit. Can anyone give me an explanation of it that makes more sense than the one given here?
[quote]The system after the collapse is random - pure chaos. [/quote]
Why? Have they observed this collapse with the naked eye?
Encryption is less secure than we thought
Information theory—the discipline that gave us digital communication and data compression—also put cryptography on a secure mathematical foundation. Since 1948, when the paper that created
information theory first appeared, most information-theoretic analyses of secure schemes have depended on a common assumption.
Unfortunately, as a group of researchers at the Massachusetts Institute of Technology (MIT) and the National University of Ireland (NUI) at Maynooth demonstrated in a recently presented paper, that
assumption is false. In a follow-up paper, the same team shows that, as a consequence, the wireless card readers used in many keyless-entry systems may not be as secure as previously thought.
In information theory, the concept of information is intimately entwined with that of entropy. Two digital files might contain the same amount of information, but if one is shorter, it has more
entropy. If a compression algorithm worked perfectly, the compressed file would have the maximum possible entropy. That means that it would have the same number of 0s and 1s, and the way in which
they were distributed would be totally unpredictable. In information-theoretic parlance, it would be perfectly uniform.
Traditionally, information-theoretic analyses of secure schemes have assumed that the source files are perfectly uniform. In practice, they rarely are, but they’re close enough that it appeared that
the standard mathematical analyses still held.
“We thought we’d establish that the basic premise that everyone was using was fair and reasonable,” says Ken Duffy, one of the researchers at NUI. “And it turns out that it’s not.” On both papers,
Duffy is joined by his student Mark Christiansen; Muriel Médard, a professor of electrical engineering at MIT; and her student Flávio du Pin Calmon.
The problem, Médard explains, is that information-theoretic analyses of secure systems have generally used the wrong notion of entropy. They relied on so-called Shannon entropy, named after the
founder of information theory, Claude Shannon, who taught at MIT from 1956 to 1978.
Shannon entropy is based on the average probability that a given string of bits will occur in a particular type of digital file. In a general-purpose communications system, that’s the right type of
entropy to use, because the characteristics of the data traffic will quickly converge to the statistical averages. Although Shannon’s seminal 1948 paper dealt with cryptography, it was primarily
concerned with communication, and it used the same measure of entropy in both discussions.
But in cryptography, the real concern isn’t with the average case but with the worst case. A codebreaker needs only one reliable correlation between the encrypted and unencrypted versions of a file
in order to begin to deduce further correlations. In the years since Shannon’s paper, information theorists have developed other notions of entropy, some of which give greater weight to improbable
outcomes. Those, it turns out, offer a more accurate picture of the problem of codebreaking.
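The article does not name the alternative entropy measures the researchers used; one standard worst-case measure is min-entropy, which depends only on the single most probable outcome. A small sketch (my own illustration, not code from the papers) shows how a slight deviation from uniformity barely moves Shannon entropy but moves min-entropy much more:

```python
import math

def shannon_entropy(p):
    # Average-case measure: -sum p_i * log2(p_i).
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    # Worst-case measure: determined by the single most likely outcome.
    return -math.log2(max(p))

uniform = [0.25] * 4
skewed = [0.40, 0.30, 0.20, 0.10]  # a slight deviation from uniform

print(shannon_entropy(uniform), min_entropy(uniform))  # 2.0 2.0
print(shannon_entropy(skewed), min_entropy(skewed))    # ~1.85 vs ~1.32 bits
```

The point mirrors the article's: by the average-case measure the skewed source looks almost as unpredictable as the uniform one, while the worst-case measure reveals a much larger loss.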
When Médard, Duffy and their students used these alternate measures of entropy, they found that slight deviations from perfect uniformity in source files, which seemed trivial in the light of Shannon
entropy, suddenly loomed much larger. The upshot is that a computer turned loose to simply guess correlations between the encrypted and unencrypted versions of a file would make headway much faster
than previously expected.
“It’s still exponentially hard, but it’s exponentially easier than we thought,” Duffy says. One implication is that an attacker who simply relied on the frequencies with which letters occur in
English words could probably guess a user-selected password much more quickly than was previously thought. “Attackers often use graphics processors to distribute the problem,” Duffy says. “You’d be
surprised at how quickly you can guess stuff.”
In their Asilomar paper, the researchers apply the same type of mathematical analysis in a slightly different way. They consider the case in which an attacker is, from a distance, able to make a
“noisy” measurement of the password stored on a credit card with an embedded chip or a key card used in a keyless-entry system.
“Noise” is the engineer’s term for anything that degrades an electromagnetic signal—such as physical obstructions, out-of-phase reflections or other electromagnetic interference. Noise comes in lots
of different varieties: The familiar white noise of sleep aids is one, but so is pink noise, black noise and more exotic-sounding types of noise, such as power-law noise or Poisson noise.
In this case, rather than prior knowledge about the statistical frequency of the symbols used in a password, the attacker has prior knowledge about the probable noise characteristics of the
environment: Phase noise with one set of parameters is more probable than phase noise with another set of parameters, which in turn is more probable than Brownian noise, and so on. Armed with these
statistics, an attacker could infer the password stored on the card much more rapidly than was previously thought.
Water Pipe Sizing Calculation Example – Part 1: Overview
This water piping system design calculation example will use the Darcy-Weisbach equation to calculate the pressure loss in a pipe, and then use that value to check the appropriateness of the assumed
pipe size.
Problem Definition
Refer to the schematic above (Fig. 1) and observe the following inputs:
Total pipe length=7 meters
Pressure head=5 meters
Number of ball valves=2
Number of Tee bend=1
Also assume that the required flow rate at the pipe end is 0.5 liter/sec.
Find out the appropriate pipe size
The following stepwise procedure is used to find the appropriate pipe size in this water pipe sizing calculation example:
Step-1: Assume the pipe diameter as 20 mm.
Step-2: Determine the Effective Pipe Length
Step-3: Determine the Permissible Pressure Loss
Step-4: Determine Hydraulic Diameter of the pipe
Step-5: Determine the Relative Roughness of the pipe
Step-6: Determine the Reynolds Number
Step-7: Determine the Moody friction factor using Moody diagram
Step-8: Determine the Actual Pressure Drop in the Pipe using Darcy-Weisbach equation and check if it is below the permissible pressure loss as calculated in step-3
The next part (Part 2) of this pipe sizing calculation series will discuss calculating the effective pipe length.
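The eight steps above can be sketched in code. Note the hedges: the water properties, the pipe roughness, and the use of the explicit Swamee-Jain fit in place of reading the Moody diagram (step 7) are assumptions for illustration, not values from the article, and the equivalent lengths of the valves and tee (steps 2 and 3) are left out since later parts cover them:

```python
import math

# --- Inputs from the problem definition ---
Q = 0.5e-3        # required flow rate, m^3/s (0.5 litre/s)
L = 7.0           # total pipe length, m (straight length only here)
head = 5.0        # available pressure head, m
D = 0.020         # Step 1: assumed pipe diameter, m

# --- Assumed values (not given in the article) ---
nu = 1.0e-6       # kinematic viscosity of water at ~20 C, m^2/s
eps = 1.5e-6      # absolute roughness, m (smooth plastic pipe)
g = 9.81          # gravitational acceleration, m/s^2

# Step 4: for a full circular pipe the hydraulic diameter is just D
A = math.pi * D**2 / 4.0
v = Q / A                         # mean velocity, m/s

# Step 5: relative roughness
rel_rough = eps / D

# Step 6: Reynolds number
Re = v * D / nu

# Step 7: friction factor -- Swamee-Jain fit to the Moody diagram
f = 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re**0.9) ** 2

# Step 8: Darcy-Weisbach head loss, compared with the available head
h_f = f * (L / D) * v**2 / (2.0 * g)
print(f"v = {v:.2f} m/s, Re = {Re:.0f}, f = {f:.4f}, h_f = {h_f:.2f} m")
print("pipe size OK" if h_f < head else "increase pipe size")
```

With these assumed values the head loss for a 20 mm pipe comes out around 1 m, well under the 5 m available, so the assumed size passes this simplified check.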
thanks for the article. it was really helpful.
Space-frequency Quantization for Wavelet Image Coding
Results 1 - 10 of 113
, 1997
"... We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the
space-frequency characterization of the wavelet image representation. Specifically, we model image wavelet coefficients ..."
Cited by 152 (11 self)
We introduce a new image compression paradigm that combines compression efficiency with speed, and is based on an independent "infinite" mixture model which accurately captures the space-frequency
characterization of the wavelet image representation. Specifically, we model image wavelet coefficients as being drawn from an independent Generalized Gaussian distribution field, of fixed unknown
shape for each subband, having zero mean and unknown slowly spatiallyvarying variances. Based on this model, we develop a powerful "on the fly" Estimation-Quantization (EQ) framework that consists
of: (i) first finding the Maximum-Likelihood estimate of the individual spatially-varying coefficient field variances based on causal and quantized spatial neighborhood contexts; and (ii) then
applying an off-line Rate-Distortion (R-D) optimized quantization /entropy coding strategy, implemented as a fast lookup table, that is optimally matched to the derived variance estimates. A
distinctive feature of o...
, 2000
"... In this paper, we propose a low bit-rate embedded video coding scheme that utilizes a threedimensional (3D) extension of the set partitioning in hierarchical trees (SPIHT) algorithm which has
proved so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled w ..."
Cited by 115 (18 self)
In this paper, we propose a low bit-rate embedded video coding scheme that utilizes a threedimensional (3D) extension of the set partitioning in hierarchical trees (SPIHT) algorithm which has proved
so successful in still image coding. Three-dimensional spatio-temporal orientation trees coupled with powerful SPIHT sorting and refinement renders 3D SPIHT video coder so efficient that it provides
comparable performance to H.263 objectively and subjectively when operated at the bit-rates of 30 to 60 kilobits per second with minimal system complexity. Extension to color-embedded video coding is
accomplished without explicit bit allocation, and can be used for any color plane representation. In addition to being rate scalable, the proposed video coder allows multiresolutional scalability in
encoding and decoding in both time and space from one bit-stream. This added functionality along with many desirable attributes, such as full embeddedness for progressive transmission, precise ...
, 2005
"... Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we
introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algori ..."
Cited by 84 (21 self)
Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we
introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intra- and inter-signal correlation
structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint
recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a
parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models,
the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take
effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is
immediately applicable to a range of problems in sensor networks and arrays.
- in Proc. of Computational Imaging IV at SPIE Electronic Imaging , 2006
"... Compressive Sensing is an emerging field based on the revelation that a small number of linear projections of a compressible signal contain enough information for reconstruction and processing.
It has many promising implications and enables the design of new kinds of Compressive Imaging systems and ..."
Cited by 69 (6 self)
Compressive Sensing is an emerging field based on the revelation that a small number of linear projections of a compressible signal contain enough information for reconstruction and processing. It
has many promising implications and enables the design of new kinds of Compressive Imaging systems and cameras. In this paper, we develop a new camera architecture that employs a digital micromirror
array to perform optical calculations of linear projections of an image onto pseudorandom binary patterns. Its hallmarks include the ability to obtain an image with a single detection element while
sampling the image fewer times than the number of pixels. Other attractive properties include its universality, robustness, scalability, progressivity, and computational asymmetry. The most
intriguing feature of the system is that, since it relies on a single photon detector, it can be adapted to image at wavelengths that are currently impossible with conventional CCD and CMOS imagers.
, 1998
"... We consider the problem of coding images for transmission over error-prone channels. The impairments we target are transient channel shutdowns, as would occur in a packet network when a packet
is lost, or in a wireless system during a deep fade: when data is delivered it is assumed to be error-free, ..."
Cited by 64 (7 self)
We consider the problem of coding images for transmission over error-prone channels. The impairments we target are transient channel shutdowns, as would occur in a packet network when a packet is
lost, or in a wireless system during a deep fade: when data is delivered it is assumed to be error-free, but some of the data may never reach the receiver. The proposed algorithms are based on a
combination of multiple description scalar quantizers with techniques successfully applied to the construction of some of the most efficient subband coders. A given image is encoded into multiple
independent packets of roughly equal length. When packets are lost, the quality of the approximation computed at the receiver depends only on the number of packets received, but does not depend on
exactly which packets are actually received. When compared with previously reported results on the performance of robust image coders based on multiple descriptions, on standard test images, our
coders attain s...
, 2001
"... The JPEG committee has recently released its new image coding standard, JPEG 2000, which will serve as a supplement for the original JPEG standard introduced in 1992. Rather than incrementally
improving on the original standard, JPEG 2000 implements an entirely new way of compressing images based o ..."
Cited by 63 (0 self)
The JPEG committee has recently released its new image coding standard, JPEG 2000, which will serve as a supplement for the original JPEG standard introduced in 1992. Rather than incrementally
improving on the original standard, JPEG 2000 implements an entirely new way of compressing images based on the wavelet transform, in contrast to the discrete cosine transform (DCT) used in the
original JPEG standard. The significant change in coding methods between the two standards leads one to ask: What prompted the JPEG committee to adopt such a dramatic change? The answer to this
question comes from considering the state of image coding at the time the original JPEG standard was being formed. At that time wavelet analysis and wavelet coding were still
- IEEE Trans. Image Processing , 1998
"... We extend our previous work on space-frequency quantization (SFQ) [1] for image coding from wavelet transforms to the more general wavelet packet transforms [2]. The resulting wavelet packet
coder offers a universal transform coding framework within the constraints of filter bank structures by allo ..."
Cited by 60 (5 self)
We extend our previous work on space-frequency quantization (SFQ) [1] for image coding from wavelet transforms to the more general wavelet packet transforms [2]. The resulting wavelet packet coder
offers a universal transform coding framework within the constraints of filter bank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input
image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes,
our new coder gives excellent coding performance. Recently, wavelet transforms have attracted considerable attention, especially with applications to image coding, due to their ability
to provide attractive space-frequency resolution tradeoffs for natural images [3, 4]. In addition to conventional scalar (or vector) quantization strategies that are common in subband coding [5], the
- IEEE Transactions on Image Processing , 2000
"... Abstract—Wavelets are ill-suited to represent oscillatory patterns: rapid variations of intensity can only be described by the small scale wavelet coefficients, which are often quantized to
zero, even at high bit rates. Our goal in this paper is to provide a fast numerical implementation of the best ..."
Cited by 46 (18 self)
Abstract—Wavelets are ill-suited to represent oscillatory patterns: rapid variations of intensity can only be described by the small scale wavelet coefficients, which are often quantized to zero,
even at high bit rates. Our goal in this paper is to provide a fast numerical implementation of the best wavelet packet algorithm [1] in order to demonstrate that an advantage can be gained by
constructing a basis adapted to a target image. Emphasis in this paper has been placed on developing algorithms that are computationally efficient. We developed a new fast two-dimensional (2-D)
convolution-decimation algorithm with factorized nonseparable 2-D filters. The algorithm is four times faster than a standard convolution-decimation. An extensive evaluation of the algorithm was
performed on a large class of textured images. Because of its ability to reproduce textures so well, the wavelet packet coder significantly outperforms one of the best wavelet coders [2] on images
such as Barbara and fingerprints, both visually and in term of PSNR. Index Terms—Adaptive transform, best basis, image compression, ladder structure, wavelet packet. I.
- IEEE Trans. Image Processing , 1997
"... Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work
well? These are the central issues we address. We introduce a new waveletbased framework for analyzing block ..."
Cited by 42 (2 self)
Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well?
These are the central issues we address. We introduce a new waveletbased framework for analyzing block-based fractal compression schemes. Within this framework we are able to draw upon insights from
the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders of the form introduced by Jacquin[1] are a Haar wavelet
subtree quantization scheme. We examine a generalization of this scheme to smooth wavelets with additional vanishing moments. The performance of our generalized coder is comparable to the best
results in the literature for a Jacquin-style coding scheme. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and leads us to develop an
unconditionally convergen...
, 1999
"... This paper presents a novel image coding scheme using M-channel linear phase perfect reconstruction filterbanks (LPPRFB's) in the embedded zerotree wavelet (EZW) framework introduced by Shapiro
[1]. The innovation here is to replace the EZW's dyadic wavelet transform by M-channel uniformband maxi ..."
Cited by 41 (20 self)
This paper presents a novel image coding scheme using M-channel linear phase perfect reconstruction filterbanks (LPPRFB's) in the embedded zerotree wavelet (EZW) framework introduced by Shapiro [1].
The innovation here is to replace the EZW's dyadic wavelet transform by M-channel uniformband maximally decimated LPPRFB's, which offer finer frequency spectrum partitioning and higher energy
compaction. The transform stage can now be implemented as a block transform which supports a parallel processing mode and facilitates region-of-interest coding/decoding. For hardware implementation,
the transform boasts efficient lattice structures, which employ a minimal number of delay elements and are robust under the quantization of lattice coefficients. The resulting compression algorithm
also retains all attractive properties of the EZW coder and its variations such as progressive image transmission, embedded quantization, exact bit rate control, and idempotency. Despite its
simplicity, our new coder outperforms some of the best image coders published recently in the literature [1]--[4], for almost all test images (especially natural, hard-to-code ones) at almost all bit rates.
Physics Forums - View Single Post - Infinite primes proof???
Someone told me Euler proved that there are infinitely many prime numbers by proving that the sum of their reciprocals is infinite.
I have one concern. How can you prove the infinitude of primes by this method without assuming the set to be infinite in the first place?
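One standard way to resolve this concern (a sketch, not the thread's actual answer): Euler's argument never assumes the primes form an infinite set; it only compares finite truncations. For any bound N, every prime factor of every n ≤ N is itself at most N, so expanding each geometric series in the product recovers every term 1/n with n ≤ N:

\[
\prod_{p \le N}\left(1-\frac{1}{p}\right)^{-1}
= \prod_{p \le N}\left(1+\frac{1}{p}+\frac{1}{p^{2}}+\cdots\right)
\;\ge\; \sum_{n=1}^{N}\frac{1}{n} \longrightarrow \infty
\quad (N \to \infty).
\]

If there were only finitely many primes, the left-hand product would be the same fixed finite number for all large N, contradicting the divergence of the harmonic series on the right. Hence there are infinitely many primes, and taking logarithms of the product then yields \(\sum_{p} 1/p = \infty\).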
Why the number 1729 shows up in so many Futurama episodes
It seems like such an ordinary number – so why does it show up so frequently?
1729 TaxiCab number via MathWorld.
In the episode "Xmas Story," Bender receives a card designating him "Son #1729;" but the number shows up in other places, as well. The registration number on the hull of the starship Nimbus, for
example, is BP-1729. In "The Farnsworth Parabox," which involves the members of Planet Express slipping in and out of alternate universes, one of the universes visited is Universe 1729:
But so then what's the significance behind this seemingly insignificant number? (Fun fact: There's actually no such thing as an uninteresting natural number.) The answer can be traced to a
conversation, now famous among numberphiles, that occurred between mathematicians G.H. Hardy and Srinivisa Ramanujan in 1918.
In a BBC News article that explores Hardy and Ramanujan's unlikely friendship and its ties to Futurama's frequent references to 1729, science writer Simon Singh recounts a time when Hardy visited
Ramanujan at the nursing home where he lay ill. "I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen,"
Hardy later recalled. Ramanujan is said to have countered: "No, it is a very interesting number. It is the smallest number expressible as the sum of two cubes in two different ways."
Ramanujan's point can be expressed mathematically as follows:
1729 = 1³ + 12³ = 9³ + 10³
1729 has since become known as the Hardy-Ramanujan number. The story behind it is also why the smallest numbers that can be expressed as the sum of two cubes in one or more distinct ways are called
"taxicab numbers" – hence the number on the cab pictured here, which makes an appearance in Bender's Big Score (above, the literal taxicab number 87539319 is the smallest number that can be written
as the sum of two cubes in three different ways, viz. 87,539,319 = 167³ + 436³ = 228³ + 423³ = 255³ + 414³).
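Ramanujan's observation is easy to check by brute force. A short sketch (the helper names here are our own, purely illustrative) that finds all two-cube representations of a number and searches for the smallest number with at least k of them:

```python
from itertools import count

def cube_pairs(n):
    """All pairs (a, b) with a <= b and a^3 + b^3 == n."""
    pairs = []
    a = 1
    while 2 * a**3 <= n:
        b = round((n - a**3) ** (1 / 3))
        # check neighbours of the float cube root to absorb rounding error
        for c in (b - 1, b, b + 1):
            if c >= a and a**3 + c**3 == n:
                pairs.append((a, c))
        a += 1
    return pairs

def taxicab(k):
    """Smallest number expressible as a sum of two positive cubes in >= k ways.
    (Fine for k = 2; far too slow for Ta(3) = 87,539,319.)"""
    for n in count(1):
        if len(cube_pairs(n)) >= k:
            return n
```

Running `cube_pairs(1729)` recovers exactly Ramanujan's two representations, (1, 12) and (9, 10), and `taxicab(2)` confirms 1729 is the smallest such number.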
Science jokes, mathematical theorems and other nerdy allusions are, of course, warp and woof of Futurama's style, due in no small part to its team of eminently geeky writers, including J. Stewart
Burns, who holds a master's degree in maths from UC Berkeley; Bill Odenkirk, a PhD from U. Chicago in chemistry; Jeff Westbrook, who holds a PhD in computer science from Princeton; Ken Keeler, who
earned his PhD in maths at Harvard; and of course head writer David X. Cohen, who majored in applied maths at Harvard and went on to earn a master's in computer science at UC Berkeley.
Above: A screenshot from The Prisoner of Benda, featuring a novel theorem that Keeler devised and solved for the express purpose of explaining a plot twist in the episode.
Keeler, for his part, makes a direct reference to the Hardy-Ramanujan number in an interview with GotFuturama.
"Well, sure," he replied, when asked whether all his years of education had been worth it. "For example, Bender's serial number is 1729, a historically significant integer to mathematicians
everywhere; that 'joke' alone is worth six years of grad school, I'd say." In another interview with mathematician Sarah Greenwald, Keeler explains in greater detail:
We needed a number for plot reasons, and David Cohen asked if I could think of an interesting one, and the Hardy-Ramanujan sum-of-two-cubes story leapt to mind. Afterwards David [X. Cohen] sort
of went to town with the idea whenever we needed a serial number.
In the same Greenwald interview, Keeler expands on the idea that pursuing a terminal degree in mathematics can make you a better writer:
Mathematical training makes you good at following logical structure, and I think that's a plus in any field. In writing, it helps you to see the steps you need to lay out to make a story hang
together. I also think studying hard math problems teaches you how to just disconnect from your current approach sometimes and force yourself to come up with a radically new one. This is kind of
an effective method of coming up with jokes when you're stuck; I guess maybe it helps you achieve the element of surprise. And the way a mathematician will exhaustively explore the consequences
of an assumption or a result is kind of similar to the way multipart jokes are occasionally built up; you've started down a road and you want to see how far it'll take you.
Sources + Additional Links
• For more background on the unlikely friendship between Hardy and Ramanujan (and the recurring memorialization of the duo in Futurama), see this piece by science writer Simon Singh, published
yesterday at BBC News.
• For more on the mathematics of Futurama, check out "Futurama Math: Mathematics in the Year 3000." Created by Appalachian State University mathematician Dr. Sarah Greenwald, the website is
entirely devoted to unveiling and unpacking the show's MANY mathematical allusions.
• Greenwald's Futurama math interviews with Ken Keeler, Jeff Westbrook and David X. Cohen.
• The mathematical backgrounds of Futurama's writers, including links to some writers' academic dissertations.
Genetic Probability (Binomial Expansion)
1. The problem statement, all variables and given/known data
I'm stuck on a problem with two variables in it. The question wants to know the probability of getting 2 boys and 2 girls, with one of the boys being albino. They say one parent is albino and
the other is a heterozygous carrier, so the chance of a child being albino is 50%.
So by binomial expansion, the coefficients are 1 4 6 4 1. I know the two-girls-and-two-boys case is the middle term, 6(p^2)(q^2), with p and q being the probabilities of a boy or a girl, 0.50
each. But how do I incorporate the albino probability in this? Multiply each child by 0.50?
2. Relevant equations
3. The attempt at a solution
I can easily find the probability of getting two girls and two boys; I would simply use the 6(p^2)(q^2) with p and q being 0.50 respectively. I just don't know how to factor in the albino part.
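One way the sex term and the albino term can be combined (this is an assumption about the intended reading, since the question is ambiguous: here it is taken as exactly two boys and two girls with exactly one of the boys albino, girls unconstrained), cross-checked by enumerating all 256 equally likely (sex, albino) outcomes:

```python
from itertools import product
from math import comb

P_BOY = 0.5      # sex of each child
P_ALBINO = 0.5   # albino x heterozygous-carrier cross: half the children albino

# Closed form under the stated reading of the problem
p_sexes = comb(4, 2) * P_BOY**2 * (1 - P_BOY)**2            # 6 * (1/2)^4
p_one_albino_boy = comb(2, 1) * P_ALBINO * (1 - P_ALBINO)   # 2 * (1/2)^2
p_closed = p_sexes * p_one_albino_boy

# Brute-force check: enumerate every (sex, albino) outcome for 4 children
hits = 0
for kids in product(product("BG", "AN"), repeat=4):
    boys = [k for k in kids if k[0] == "B"]
    albino_boys = [k for k in boys if k[1] == "A"]
    if len(boys) == 2 and len(albino_boys) == 1:
        hits += 1
p_brute = hits / 4**4

print(p_closed, p_brute)
```

Under these assumptions both routes give 0.1875, i.e. the 6(p²)(q²) = 0.375 sex term multiplied by a 0.5 "exactly one albino boy" term. If instead the problem wants all other children non-albino, the albino factor changes accordingly.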
The classifiers implemented in MOA are the following:
• Bayesian classifiers
□ Naive Bayes
□ Naive Bayes Multinomial
• Decision trees classifiers
□ Decision Stump
□ Hoeffding Tree
□ Hoeffding Option Tree
□ Hoeffding Adaptive Tree
• Meta classifiers
□ Bagging
□ Boosting
□ Bagging using ADWIN
□ Bagging using Adaptive-Size Hoeffding Trees
□ Perceptron Stacking of Restricted Hoeffding Trees
□ Leveraging Bagging
• Function classifiers
□ Perceptron
□ SGD: Stochastic Gradient Descent
□ SPegasos
• Drift classifiers
Classifiers for static streams
Majority Class
Always predicts the class that has been observed most frequently in the training data.
• -r : Seed for random behaviour of the classifier
Naive Bayes
Performs classic Bayesian prediction while making the naive assumption that all inputs are independent.
Naive Bayes is a classifier algorithm known for its simplicity and low computational cost. Given n different classes, the trained Naive Bayes classifier predicts for every unlabelled instance I the
class C to which it belongs with high accuracy.
• -r : Seed for random behaviour of the classifier
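As a sketch of the incremental flavour of this learner (an illustrative toy in Python, not MOA's Java implementation, and it handles only discrete attributes), a count-based Naive Bayes can be updated one instance at a time and predict with Laplace-smoothed log-probabilities:

```python
from collections import defaultdict
import math

class StreamingNaiveBayes:
    """Incremental Naive Bayes over discrete attributes (illustrative only)."""

    def __init__(self):
        self.class_counts = defaultdict(int)   # class -> count
        self.attr_counts = defaultdict(int)    # (class, attr index, value) -> count
        self.n = 0

    def train(self, x, y):
        # One pass per instance: just increment counts, O(#attributes)
        self.n += 1
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            self.attr_counts[(y, i, v)] += 1

    def predict(self, x):
        best, best_score = None, -math.inf
        for c, cc in self.class_counts.items():
            # log P(c) + sum_i log P(x_i | c), with add-one smoothing
            score = math.log(cc / self.n)
            for i, v in enumerate(x):
                score += math.log((self.attr_counts[(c, i, v)] + 1) / (cc + 2))
            if score > best_score:
                best, best_score = c, score
        return best
```

Training touches only counters, which is what makes the Naive Bayes family attractive for streams: constant memory per attribute-value pair and constant time per instance.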
Decision Stump
Decision trees of one level.
• -g : The number of instances to observe between model changes
• -b : Only allow binary splits
• -c : Split criterion to use. Example : InfoGainSplitCriterion
• -r : Seed for random behaviour of the classifier
Hoeffding Tree
Decision tree for streaming data.
A Hoeffding tree is an incremental, anytime decision tree induction algorithm that is capable of learning from massive data streams, assuming that the distribution generating examples does not change
over time. Hoeffding trees exploit the fact that a small sample can often be enough to choose an optimal splitting attribute. This idea is supported mathematically by the Hoeffding bound, which
quantifies the number of observations (in our case, examples) needed to estimate some statistics within a prescribed precision (in our case, the goodness of an attribute).
A theoretically appealing feature of Hoeffding Trees not shared by other incremental decision tree learners is that it has sound guarantees of performance. Using the Hoeffding bound one can show that
its output is asymptotically nearly identical to that of a non-incremental learner using infinitely many examples. See for details:
G. Hulten, L. Spencer, and P. Domingos. Mining time-changing data streams. In KDD’01, pages 97–106, San Francisco, CA, 2001. ACM Press.
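The bound itself is simple to state: after n observations of a random variable with range R, the true mean lies, with probability 1 − δ, within ε = sqrt(R² ln(1/δ) / (2n)) of the observed mean. A minimal sketch of the resulting split rule (the gain values and parameters below are illustrative, not MOA defaults):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon such that, with probability 1 - delta, the true mean of a
    variable with the given range is within epsilon of the n-sample mean."""
    return math.sqrt(value_range**2 * math.log(1.0 / delta) / (2.0 * n))

# Split-decision sketch: split when the observed advantage of the best
# attribute over the runner-up exceeds epsilon.
num_classes = 2
R = math.log2(num_classes)          # range of information gain
eps = hoeffding_bound(R, delta=1e-7, n=2000)
best_gain, second_gain = 0.25, 0.09  # illustrative observed gains
should_split = (best_gain - second_gain) > eps
print(eps, should_split)
```

Because ε shrinks like 1/sqrt(n), the leaf simply keeps accumulating examples until the gap between the two best attributes is statistically safe, which is what gives the algorithm its performance guarantee.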
• -m : Maximum memory consumed by the tree
• -n : Numeric estimator to use :
□ Gaussian approximation evaluating 10 splitpoints
□ Gaussian approximation evaluating 100 splitpoints
□ Greenwald-Khanna quantile summary with 10 tuples
□ Greenwald-Khanna quantile summary with 100 tuples
□ Greenwald-Khanna quantile summary with 1000 tuples
□ VFML method with 10 bins
□ VFML method with 100 bins
□ VFML method with 1000 bins
□ Exhaustive binary tree
• -e : How many instances between memory consumption checks
• -g : The number of instances a leaf should observe between split attempts
• -s : Split criterion to use. Example : InfoGainSplitCriterion
• -c : The allowable error in split decision, values closer to 0 will take longer to decide
• -t : Threshold below which a split will be forced to break ties
• -b : Only allow binary splits
• -z : Stop growing as soon as memory limit is hit
• -r : Disable poor attributes
• -p : Disable pre-pruning
• -l : Leaf classifier to use at the leaves: Majority class, Naive Bayes, Naive Bayes Adaptive. By default: Naive Bayes Adaptive.
In old versions of MOA, a HoeffdingTreeNB was a HoeffdingTree with Naive Bayes classification at leaves, and a HoeffdingTreeNBAdaptive was a HoeffdingTree with adaptive Naive Bayes classification at
leaves. In the current version of MOA, there is an option to select wich classification perform at leaves: Majority class, Naive Bayes, Naive Bayes Adaptive. By default, the option selected is Naive
Bayes Adaptive, since it is the classifier that gives better results. This adaptive Naive Bayes prediction method monitors the error rate of majority class and Naive Bayes decisions in every leaf,
and chooses to employ Naive Bayes decisions only where they have been more accurate in past cases.
To run experiments using the old default version of HoeffdingTree, with a majority class learner at leaves, use “HoeffdingTree -l MC”.
Hoeffding Option Tree
Decision option tree for streaming data.
Hoeffding Option Trees are regular Hoeffding trees containing additional option nodes that allow several tests to be applied, leading to multiple Hoeffding trees as separate paths. They consist of a
single structure that efficiently represents multiple trees. A particular example can travel down multiple paths of the tree, contributing, in different ways, to different options.
See for details:
B. Pfahringer, G. Holmes, and R. Kirkby. New options for hoeffding
trees. In AI, pages 90–99, 2007.
• -o : Maximum number of option paths per node
• -m : Maximum memory consumed by the tree
• -n : Numeric estimator to use :
□ Gaussian approximation evaluating 10 splitpoints
□ Gaussian approximation evaluating 100 splitpoints
□ Greenwald-Khanna quantile summary with 10 tuples
□ Greenwald-Khanna quantile summary with 100 tuples
□ Greenwald-Khanna quantile summary with 1000 tuples
□ VFML method with 10 bins
□ VFML method with 100 bins
□ VFML method with 1000 bins
□ Exhaustive binary tree
• -e : How many instances between memory consumption checks
• -g : The number of instances a leaf should observe between split attempts
• -s : Split criterion to use. Example : InfoGainSplitCriterion
• -c : The allowable error in split decision, values closer to 0 will take longer to decide
• -w : The allowable error in secondary split decisions, values closer to 0 will take longer to decide
• -t : Threshold below which a split will be forced to break ties
• -b : Only allow binary splits
• -z : Memory strategy to use
• -r : Disable poor attributes
• -p : Disable pre-pruning
• -d : File to append option table to.
• -l : Leaf classifier to use at the leaves: Majority class, Naive Bayes, Naive Bayes Adaptive. By default: Naive Bayes Adaptive.
In old versions of MOA, a HoeffdingOptionTreeNB was a HoeffdingTree with Naive Bayes classification at leaves, and a HoeffdingOptionTreeNBAdaptive was a HoeffdingOptionTree with adaptive Naive Bayes
classification at leaves. In the current version of MOA, there is an option to select wich classification perform at leaves: Majority class, Naive Bayes, Naive Bayes Adaptive. By default, the option
selected is Naive Bayes Adaptive, since it is the classifier that gives better results. This adaptive Naive Bayes prediction method monitors the error rate of majority class and Naive Bayes decisions
in every leaf, and chooses to employ Naive Bayes decisions only where they have been more accurate in past cases.
To run experiments using the old default version of HoeffdingOptionTree, with a majority class learner at leaves, use “HoeffdingOptionTree -l MC”.
Adaptive decision option tree for streaming data with adaptive Naive Bayes classification at leaves.
An Adaptive Hoeffding Option Tree is a Hoeffding Option Tree with the following improvement: each leaf stores an estimation of the current error. It uses an EWMA estimator with α = .2. The weight of
each node in the voting process is proportional to the square of the inverse of the error.
AdaHoeffdingOptionTree -o 50
• Same parameters as HoeffdingOptionTree
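The error estimation and vote weighting just described can be sketched like this (a minimal illustration assuming 0/1 loss per example and a neutral 0.5 starting estimate; not the actual MOA implementation):

```python
class NodeErrorMonitor:
    """EWMA estimate of a node's current error (alpha = 0.2); the
    node's voting weight is proportional to 1 / error^2."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.error = 0.5  # neutral starting estimate (an assumption)

    def update(self, mistake):
        # mistake: 1 if the node misclassified this example, else 0
        self.error = (1 - self.alpha) * self.error + self.alpha * mistake

    def weight(self, eps=1e-6):
        # square of the inverse of the error, guarded near zero
        return 1.0 / max(self.error, eps) ** 2
```

A node that keeps classifying correctly sees its error decay toward 0 and its vote grow accordingly.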
This adaptive Hoeffding Tree uses ADWIN to monitor performance of branches on the tree and to replace them with new branches when their accuracy decreases if the new branches are more accurate. For
more information, see:
Albert Bifet, Ricard Gavaldà. Adaptive Learning from Evolving Data Streams. In IDA 2009.
Incremental on-line bagging of Oza and Russell.
Oza and Russell developed online versions of bagging and boosting for Data Streams. They show how the process of sampling bootstrap replicates from training data can be simulated in a data stream
context. They observe that the probability that any individual example will be chosen for a replicate tends to a Poisson(1) distribution.
[OR] N. Oza and S. Russell. Online bagging and boosting. In Artificial Intelligence and Statistics 2001, pages 105–112. Morgan Kaufmann, 2001.
• -l : Classifier to train
• -s : The number of models in the bag
Incremental on-line boosting of Oza and Russell.
See details in:
[OR] N. Oza and S. Russell. Online bagging and boosting. In Artificial Intelligence and Statistics 2001, pages 105–112. Morgan Kaufmann, 2001.
For the boosting method, Oza and Russell note that the weighting procedure of AdaBoost actually divides the total example weight into two halves – half of the weight is assigned to the correctly
classified examples, and the other half goes to the misclassified examples. They use the Poisson distribution for deciding the random probability that an example is used for training, only this time
the parameter changes according to the boosting weight of the example as it is passed through each model in sequence.
• -l : Classifier to train
• -s : The number of models to boost
• -p : Boost with weights only; no poisson
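The half-and-half weighting can be sketched as follows (a simplified reconstruction of the update from the description above; the variable names are invented and details of Oza and Russell's pseudocode are omitted):

```python
def boost_weight_step(lam, correct, lam_sc, lam_sw):
    """Update the Poisson parameter lam for one base model.
    lam_sc / lam_sw accumulate the weight of correctly / wrongly
    classified examples, so each side ends up holding half of the
    total weight, mirroring AdaBoost's reweighting."""
    if correct:
        lam_sc += lam
        lam *= (lam_sc + lam_sw) / (2.0 * lam_sc)
    else:
        lam_sw += lam
        lam *= (lam_sc + lam_sw) / (2.0 * lam_sw)
    return lam, lam_sc, lam_sw
```

An example a model keeps getting right carries less weight into the next model in the sequence; a misclassified example carries more.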
Online Coordinate Boosting.
Pelossof et al. presented Online Coordinate Boosting, a new online boosting algorithm for adapting the weights of a boosted classifier, which yields a closer approximation to Freund and Schapire’s
AdaBoost algorithm. The weight update procedure is derived by minimizing AdaBoost’s loss when viewed in an incremental form. This boosting method may be reduced to a form similar to Oza and Russell’s algorithm.
See details in:
[PJ] Raphael Pelossof, Michael Jones, Ilia Vovsha, and Cynthia Rudin. Online coordinate boosting. 2008.
OCBoost -l HoeffdingTreeNBAdaptive -e 0.5
• -l : Classifier to train
• -s : The number of models to boost
• -e : Smoothing parameter
Classifiers for evolving streams
Bagging using trees of different size.
The Adaptive-Size Hoeffding Tree (ASHT) is derived from the Hoeffding Tree algorithm with the following differences:
• it has a maximum number of split nodes, or size
• after one node splits, if the number of split nodes of the ASHT tree is higher than the maximum value, then it deletes some nodes to reduce its size
The intuition behind this method is as follows: smaller trees adapt more quickly to changes, and larger trees do better during periods with no or little change, simply because they were built on more
data. Trees limited to size s will be reset about twice as often as trees with a size limit of 2s. This creates a set of different reset-speeds for an ensemble of such trees, and therefore a subset
of trees that are a good approximation for the current rate of change. It is important to note that resets will happen all the time, even for stationary datasets, but this behaviour should not have a
negative impact on the ensemble’s predictive performance. When the tree size exceeds the maximum size value, there are two different delete options:
• delete the oldest node, the root, and all of its children except the one where the split has been made. After that, the root of the child not deleted becomes the new root
• delete all the nodes of the tree, i.e., restart from a new root.
The maximum allowed size for the n-th ASHT tree is twice the maximum allowed size for the (n − 1)-th tree. Moreover, each tree has a weight proportional to the inverse of the square of its error, and
it monitors its error with an exponential weighted moving average (EWMA) with α = .01. The size of the first tree is 2.
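The size schedule and vote weighting can be written out directly (a small illustrative helper, not MOA code):

```python
def asht_max_sizes(n_trees, first_size=2):
    """Maximum split-node count per tree: the n-th tree may grow
    twice as large as the (n-1)-th, starting from size 2."""
    return [first_size * 2 ** i for i in range(n_trees)]

def asht_vote_weights(ewma_errors):
    """Each tree votes with weight proportional to the inverse of
    the square of its EWMA-monitored error."""
    return [1.0 / e ** 2 for e in ewma_errors]
```

Small trees at the front of the schedule reset often and track recent change; large trees at the back exploit stable periods.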
This new method attempts to improve bagging performance by increasing tree diversity. It has been observed that boosting tends to produce a more diverse set of classifiers than bagging,
and this has been cited as a factor in increased performance.
See more details in:
[BHPKG] Albert Bifet, Geoff Holmes, Bernhard Pfahringer, Richard Kirkby, and Ricard Gavaldà. New ensemble methods for evolving data streams. In 15th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, 2009.
The learner must be ASHoeffdingTree, a Hoeffding Tree with a maximum size value.
OzaBagASHT -l ASHoeffdingTree -s 10 -u -r
• Same parameters as OzaBag
• -f : the size of first classifier in the bag.
• -u : Enable weight classifiers
• -r : Reset trees when size is higher than the max
Bagging using ADWIN.
ADWIN is a change detector and estimator that solves in a well-specified way the problem of tracking the average of a stream of bits or real-valued numbers. ADWIN keeps a variable-length window of
recently seen items, with the property that the window has the maximal length statistically consistent with the hypothesis “there has been no change in the average value inside the window”.
More precisely, an older fragment of the window is dropped if and only if there is enough evidence that its average value differs from that of the rest of the window. This has two consequences: one,
that change is reliably declared whenever the window shrinks; and two, that at any time the average over the existing window can be reliably taken as an estimation of the current average in the stream
(barring a very small or very recent change that is still not statistically visible). A formal and quantitative statement of these two points (a theorem) appears in
[BG07c] Albert Bifet and Ricard Gavaldà. Learning from time-changing data with adaptive windowing. In SIAM International Conference on Data Mining, 2007.
ADWIN is parameter- and assumption-free in the sense that it automatically detects and adapts to the current rate of change. Its only parameter is a confidence bound δ, indicating how confident we want
to be in the algorithm’s output, a parameter inherent to all algorithms dealing with random processes. Also important, ADWIN does not maintain the window explicitly, but compresses it using a variant of the
exponential histogram technique. This means that it keeps a window of length W using only O(log W) memory and O(log W) processing time per item.
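The window-cutting idea can be sketched as below. This is only a didactic simplification: it stores the window explicitly (the real algorithm uses the exponential histogram for O(log W) memory) and uses a plain Hoeffding-style threshold rather than the exact bound proved in [BG07c].

```python
import math

def adwin_step(window, x, delta=0.002):
    """Append x, then repeatedly drop the oldest fragment while some
    split of the window shows a mean difference larger than the
    threshold eps, i.e. while there is evidence of a change in the
    average value inside the window."""
    window.append(x)
    cut = True
    while cut and len(window) > 1:
        cut = False
        n, total = len(window), sum(window)
        head = 0.0
        for i in range(1, n):                # split: window[:i] | window[i:]
            head += window[i - 1]
            n0, n1 = i, n - i
            mu0, mu1 = head / n0, (total - head) / n1
            m = 1.0 / (1.0 / n0 + 1.0 / n1)  # harmonic-mean sample size
            eps = math.sqrt(math.log(4.0 / delta) / (2.0 * m))
            if abs(mu0 - mu1) > eps:
                del window[:i]               # drop the older fragment
                cut = True
                break
    return window
```

Feeding a long run of 0s followed by 1s makes the sketch discard the old regime once the new one has statistical weight.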
ADWIN Bagging is the online bagging method of Oza and Russell with the addition of the ADWIN algorithm as a change detector and as an estimator for the weights of the boosting method. When a change is
detected, the worst classifier of the ensemble of classifiers is removed and a new classifier is added to the ensemble.
See details in:
[BHPKG] Albert Bifet, Geoff Holmes, Bernhard Pfahringer, Richard Kirkby, and Ricard Gavaldà. New ensemble methods for evolving data streams. In 15th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, 2009.
OzaBagAdwin -l HoeffdingTreeNBAdaptive -s 10
• -l : Classifier to train
• -s : The number of models in the bag
Leveraging Bagging for evolving data streams using ADWIN and Leveraging Bagging MC using Random Output Codes ( -o option). These methods leverage the performance of bagging, with two randomization
improvements: increasing resampling and using output detection codes. For more information see
Albert Bifet, Geoffrey Holmes, Bernhard Pfahringer. Leveraging Bagging for Evolving Data Streams. In Machine Learning and Knowledge Discovery in Databases, European Conference, ECML PKDD, 2010.
• -l : Classifier to train.
• -s : The number of models in the bagging
• -w : The number to use to compute the weight of new instances.
• -a : Delta of Adwin change detection
• -o : Use Output Codes to use binary classifiers
• -m : Leveraging Bagging to use:
□ Leveraging Bagging ME using weight 1 if misclassified, otherwise error/(1-error)
□ Leveraging Bagging Half using resampling without replacement half of the instances
□ Leveraging Bagging WT without taking out all instances.
□ Leveraging Subagging using resampling without replacement.
Single perceptron classifier. Performs classic perceptron multiclass learning incrementally.
• -r : Learning ratio of the classifier
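A minimal sketch of the incremental multiclass update (illustrative only; MOA's implementation details, such as its learning-ratio handling, are not reproduced here):

```python
def perceptron_predict(w, x):
    """w maps each class label to a weight vector; predict the
    class with the highest dot-product score."""
    return max(w, key=lambda c: sum(wj * xj for wj, xj in zip(w[c], x)))

def perceptron_update(w, x, y_true, lr=1.0):
    """Classic multiclass perceptron step: on a mistake, move the true
    class's weights toward x and the predicted class's away from it."""
    y_pred = perceptron_predict(w, x)
    if y_pred != y_true:
        for j, xj in enumerate(x):
            w[y_true][j] += lr * xj
            w[y_pred][j] -= lr * xj
```

Because only the mispredicted examples trigger an update, the model is cheap to run on a stream.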
Implements stochastic gradient descent for learning various linear models: binary class SVM, binary class logistic regression and linear regression.
Implements the stochastic variant of the Pegasos (Primal Estimated sub-GrAdient SOlver for SVM) method of Shalev-Shwartz et al. (2007). For more information, see:
S. Shalev-Shwartz, Y. Singer, N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In 4th International Conference on Machine Learning, 807-814, 2007.
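The core Pegasos update can be sketched as follows (the stochastic variant for the binary hinge loss, without the optional projection step; a simplification of Shalev-Shwartz et al.'s pseudocode):

```python
def pegasos_step(w, x, y, t, lam):
    """One subgradient step at time t: learning rate eta = 1/(lam*t);
    always shrink w (the regularizer), and add eta*y*x when the
    example violates the margin (hinge loss active). y is -1 or +1."""
    eta = 1.0 / (lam * t)
    margin = y * sum(wj * xj for wj, xj in zip(w, x))
    w = [(1.0 - eta * lam) * wj for wj in w]
    if margin < 1.0:
        w = [wj + eta * y * xj for wj, xj in zip(w, x)]
    return w
```

Swapping the hinge subgradient for a logistic or squared-error gradient gives the logistic-regression and linear-regression variants mentioned above.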
Class for handling concept drift datasets with a wrapper on a classifier.
The drift detection method (DDM) proposed by Gama et al. controls the number of errors produced by the learning model during prediction. It compares the statistics of two windows: the first one
contains all the data, and the second one contains only the data from the beginning until the number of errors increases. Their method doesn’t store these windows in memory. It keeps only statistics
and a window of recent errors. They consider that the number of errors in a sample of examples is modeled by a binomial distribution. A significant increase in the error of the algorithm suggests
that the class distribution is changing and, hence, the actual decision model is supposed to be inappropriate. They check for a warning level and a drift level. Beyond these levels, change of context
is considered.
The number of errors in a sample of n examples is modeled by a binomial distribution. For each point i in the sequence that is being sampled, the error rate is the probability of misclassifying, p_i, with standard deviation given by s_i = sqrt(p_i (1 − p_i) / i). A significant increase in the error of the algorithm suggests that the class distribution is changing and, hence, the actual decision model is supposed to be inappropriate. Thus, they store the values of p_i and s_i when p_i + s_i reaches its minimum value during the process (obtaining p_min and s_min), and check when the following conditions trigger:
• p_i + s_i ≥ p_min + 2 · s_min for the warning level. Beyond this level, the examples are stored in anticipation of a possible change of context.
• p_i + s_i ≥ p_min + 3 · s_min for the drift level. Beyond this level, the model induced by the learning method is reset and a new model is learnt using the examples stored since the warning level.
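The detection logic can be sketched as follows (an illustrative reconstruction; like common implementations, it waits for a short warm-up period of 30 examples before testing the levels):

```python
import math

class DDMSketch:
    """Tracks the running error rate p_i and its standard deviation
    s_i = sqrt(p_i*(1 - p_i)/i), remembers the minimum of p_i + s_i,
    and signals the warning and drift levels described above."""

    def __init__(self, warmup=30):
        self.i = 0
        self.errors = 0
        self.warmup = warmup
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, mistake):
        self.i += 1
        self.errors += mistake
        p = self.errors / self.i
        s = math.sqrt(p * (1.0 - p) / self.i)
        if self.i < self.warmup:
            return "in-control"
        if p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s
        if p + s >= self.p_min + 3.0 * self.s_min:
            return "drift"      # reset the model
        if p + s >= self.p_min + 2.0 * self.s_min:
            return "warning"    # start storing examples
        return "in-control"
```

A stable error rate keeps the detector in control; a sustained jump in mistakes pushes p_i + s_i past the stored minimum and fires the levels.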
Baena-García et al. proposed a new method EDDM in order to improve DDM. It is based on the estimated distribution of the distances between classification errors. The window resize procedure is
governed by the same heuristics.
See more details in:
[GMCR] J. Gama, P. Medas, G. Castillo, and P. Rodrigues. Learning with drift detection. In SBIA Brazilian Symposium on Artificial Intelligence, pages 286–295, 2004.
[BDF] Manuel Baena-García, José del Campo-Avila, Raul Fidalgo, Albert Bifet, Ricard Gavaldà, and Rafael Morales-Bueno. Early drift detection method. In Fourth International Workshop on Knowledge
Discovery from Data Streams, 2006.
SingleClassifierDrift -d EDDM -l HoeffdingTreeNBAdaptive
• -l : Classifier to train
• -d : Drift detection method to use: DDM or EDDM | {"url":"http://moa.cms.waikato.ac.nz/details/classification/classifiers-2/","timestamp":"2014-04-21T00:31:53Z","content_type":null,"content_length":"39723","record_id":"<urn:uuid:54c96b14-21bb-4623-a4f7-19d49f490f5e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
May 6th 2011, 10:37 AM #1
null sequences.
Prove or disprove:
If $(x_ny_n)$ is a null sequence then one or both of $(x_n), \, (y_n)$ is(are) null.
My attempt:
if $(x_n)\, and \, (y_n)$ both are null then certainly $(x_ny_n)$ is null, no problem with that.
I assumed that $(x_n)$ is not null, $(y_n)$ is not null and, as given, $(x_ny_n)$ is null. I tried to derive a contradiction since I have a feeling that the statement in the question is true.
since its assumed that $(x_n)$ is not null so:
there exists $\epsilon_1>0$ such that for all $N$ there exists $n \geq N$ such that $|x_n|>\epsilon_1$
there exists $\epsilon_2>0$ such that for all $N$ there exists $n \geq N$ such that $|y_n|>\epsilon_2$
and since $(x_ny_n)$ is null:
there exists $N$ such that $n \geq N \Rightarrow |x_ny_n|<\epsilon_1 \epsilon_2$
now since I can't say that $|x_n|>\epsilon_1 \, and \, |y_n|>\epsilon_2$ occur at the same $n$, I can't find a contradiction.
someone help.
probably $(x_ny_n) \rightarrow XY \neq 0$ and probably null sequences converge to 0 and hence a contradiction. But convergence has not yet been discussed in the book I am reading (A First Course in Real Analysis, by Sterling K. Berberian) so I can only guess.
OK go back to the OP.
Let $\varepsilon = \frac{{\varepsilon _1 \varepsilon _2 }}{2}$.
Make $|x_ny_n|<\varepsilon$ then $\varepsilon _1 \varepsilon _2\le |x_n||y_n|=|x_ny_n|<\varepsilon$.
That is a contradiction.
the problem is that I can't (or am not yet able to) say that there exists an $n$ such that $|x_n||y_n|> \epsilon_1 \epsilon_2$. I can only say that $|x_{n_1}||y_{n_2}|>\epsilon_1 \epsilon_2$ and that's why I can't contradict anything.
I think that I misread the OP.
Consider this example.
$x_n = \left\{ {\begin{array}{rl} {1,} & {\text{n is even}} \\ {\frac{1}{n},} & {\text{ n is odd}} \\ \end{array} } \right.$ and $y_n = \left\{ {\begin{array}{rl} {1,} & {\text{n is odd}} \\ {\frac{1}{n},} & {\text{ n is even}} \\ \end{array} } \right.$
What is $x_ny_n~?$
so it can happen that if we have $(x_n)$ not null and $(y_n)$ not null, we still have $(x_ny_n)$ null!
Thank you for this.
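Plato's counterexample is easy to check numerically (a quick illustration, not part of the original thread):

```python
def x(n):
    # not null: x_n equals 1 on every even index
    return 1.0 if n % 2 == 0 else 1.0 / n

def y(n):
    # not null: y_n equals 1 on every odd index
    return 1.0 if n % 2 == 1 else 1.0 / n

# yet the product is null: x_n * y_n = 1/n for every n
products = [x(n) * y(n) for n in range(1, 1001)]
```

On every index exactly one factor equals 1 and the other equals 1/n, so the product tends to 0 while neither factor does.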
May 6th 2011, 09:58 PM #9 | {"url":"http://mathhelpforum.com/differential-geometry/179726-null-sequences.html","timestamp":"2014-04-17T02:57:12Z","content_type":null,"content_length":"68677","record_id":"<urn:uuid:ae1318af-f4c7-41bf-b0b7-eca7336b333b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: April 2004 [00042]
Re: Re: Infrequent Mathematica User
• To: mathgroup at smc.vnet.net
• Subject: [mg47299] Re: [mg47244] Re: Infrequent Mathematica User
• From: Paul Abbott <paul at physics.uwa.edu.au>
• Date: Fri, 2 Apr 2004 03:31:50 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
On 2/4/04, Andrzej Kozlowski wrote:
>Actually one can use Paul's argument to prove
>the following stronger statement:
>Sum[(Subscript[x,i]/(1 + Sum[Subscript[x,j]^2, {j, i}]))^2, {i, n}] < 1
>for every positive integer n.
>It is easy to see that this implies the
>inequality in the original problem (use
>Schwarz's inequality).
MathWorld only gives the integral form at
The required form of the inequality is at
>Moreover, the proof is easier since the inductive step is now trivial.
>In addition, the inequality leads to some
>intriguing observations and also to what looks
>like a bug in Limit (?)
Actually, I think the problem is with Series. I've submitted a bug report.
>The inequality implies that the sums, considered
>as functions on the real line, are bounded and
>attain their maxima. So it is natural to
>consider the functions f[n] (obtained by setting
>all the Subscript[x,i] = Subscript[x,j)]
>f[n_][x_] := NSum[(x/(i*x^2 + 1))^2, {i, 1, n}]
>It is interesting to look at:
>plots = Table[
> Plot[f[n][x], {x, -1, 1}, DisplayFunction -> Identity], {n, 1, 10}];
>Show[plots, DisplayFunction -> $DisplayFunction]
>The f[n] of course also bounded by 1 and so in the limit we have the function:
>f[x_] = Sum[(x/(i*x^2 + 1))^2, {i, 1, Infinity}]
>PolyGamma[1, (x^2 + 1)/x^2]/x^2
>which also ought to be bounded by 1.
>Plotting the graph of this, e.g.
>Plot[f[x], {x, -0.1, 0.1}]
>shows a maximum value 1 at 0 (where the function
>is not defined), however Mathematica seems to
>give the wrong limit:
Series also gives an incorrect result (I think Limit is using this). | {"url":"http://forums.wolfram.com/mathgroup/archive/2004/Apr/msg00042.html","timestamp":"2014-04-20T08:45:38Z","content_type":null,"content_length":"36120","record_id":"<urn:uuid:95733f86-f163-46dc-8420-9d3ac8f265ff>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiattribute Models
Each of the four general classes of models described above—classical, generalizability, item response, and latent class—can be extended to incorporate more than one attribute of the student. Doing so
allows for connections to a richer substantive theory and educationally more complex interpretations. In multidimensional IRM, observations are hypothesized to correspond to multiple constructs
(Reckase, 1972; Sympson, 1978). For instance, performance on mathematics word problems might be attributable to proficiency in both mathematics and reading. In the IEY example above, the progress of
students on four progress variables in the domain of science was mapped and monitored (see Box 4–2, above). Note that in this example, one might have analyzed the results separately for each of the
progress variables and obtained four independent IRM estimations of the student and item parameters, sometimes referred to as a consecutive approach (Adams, Wilson, and Wang, 1997).
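As a concrete illustration of the idea (a sketch only; the logistic form, the notation, and the numbers below are assumptions, not taken from the text), a two-dimensional compensatory item response model makes the probability of success depend on both proficiencies at once:

```python
import math

def p_success(theta, a, b):
    """Probability of answering an item correctly under a
    two-dimensional compensatory IRM with a logistic link.
    theta: (math proficiency, reading proficiency)
    a:     the item's discrimination loading on each dimension
    b:     the item's difficulty."""
    z = a[0] * theta[0] + a[1] * theta[1] - b
    return 1.0 / (1.0 + math.exp(-z))
```

For a word problem loading on both dimensions, raising either proficiency raises the success probability, which is what lets one dimension borrow strength from another when data are sparse.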
There are both measurement and educational reasons for using a multidimensional model. In measurement terms, if one is interested, for example, in finding the correlation among the latent constructs,
a multidimensional model allows one to make an unbiased estimate of this correlation, whereas the consecutive approach produces smaller correlations than it should. Educationally dense longitudinal
data such as those needed for the IEY maps can be difficult to obtain and manage: individual students may miss out on specific tasks, and teachers may not use tasks or entire activities in their
instruction. In such a situation, multidimensional models can be used to bolster sparse results by using information from one dimension to estimate performance on another. This is a valuable use and
one on which the BEAR assessment system designers decided to capitalize. This profile allows differential performance and interpretation on each of the single dimensions of IEY Science, at both the
individual and group levels. A diagram illustrating a two-dimensional IRM is shown in Figure 4–9. The curved line indicates that the two dimensions may be correlated. Note that for clarity, extra
facets have not been included in this diagram, but that can be routinely done. Multidimensional factor analysis can be represented by the same diagram. Among true-score models, multivariate G-theory
allows multiple attributes. Latent class models may also be extended to include multiple attributes, both ordered and unordered. Figures analogous to Figure 4–9 could easily be generated to depict
these extended models.
The measurement models considered thus far have all been models of status, that is, methods for taking single snapshots of student achievement in | {"url":"http://www.nap.edu/openbook.php?record_id=10019&page=128","timestamp":"2014-04-25T09:01:11Z","content_type":null,"content_length":"37106","record_id":"<urn:uuid:56c646f5-e850-4c4a-99a0-24aabcfe1534>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Burlington, NJ ACT Tutor
Find a Burlington, NJ ACT Tutor
...I have successfully passed the GRE's (to get into graduate school) as well as the Praxis II content knowledge test for mathematics. Therefore, I am qualified to tutor students in SAT Math. I
have a bachelor's in mathematics from Rutgers University.
16 Subjects: including ACT Math, English, calculus, physics
...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just
before the "big" exam, for students who pull "all-nighters". One quick note about my cancellation policy...
14 Subjects: including ACT Math, physics, ASVAB, calculus
...I have tutored student all the way from 5th grade through high school. I don't believe tutoring is just for students who are having difficulty in a subject. Undoubtedly, if a student is having
a tough time keeping pace in a classroom, the one on one attention a tutoring session can provide is invaluable.
10 Subjects: including ACT Math, geometry, algebra 1, algebra 2
...I have strong expertise in pre-algebra, pre-calculus, algebra I, algebra II, ACT math, SAT math, and biology. I performed especially well on the math portions of the ACT and SAT, scoring a 35/
36 on the ACT and a 2210/2400 on the SAT. Also, I have been classically trained in piano since the age of 4 and have been playing competitive tennis at the national level since the age of 10.
12 Subjects: including ACT Math, biology, piano, algebra 1
...I have been teaching science and math for over four years in a one-on-one level, as well as at the collegiate level at Rutgers University. I am patient and cater to my students needs. I use
diagrams and a systematic approach towards solving problems in my teaching, which allows my students to grasp the deeper concepts, rather than just solve the problem.
39 Subjects: including ACT Math, chemistry, physics, writing
Related Burlington, NJ Tutors
Burlington, NJ Accounting Tutors
Burlington, NJ ACT Tutors
Burlington, NJ Algebra Tutors
Burlington, NJ Algebra 2 Tutors
Burlington, NJ Calculus Tutors
Burlington, NJ Geometry Tutors
Burlington, NJ Math Tutors
Burlington, NJ Prealgebra Tutors
Burlington, NJ Precalculus Tutors
Burlington, NJ SAT Tutors
Burlington, NJ SAT Math Tutors
Burlington, NJ Science Tutors
Burlington, NJ Statistics Tutors
Burlington, NJ Trigonometry Tutors
Nearby Cities With ACT Tutor
Beverly, NJ ACT Tutors
Bristol, PA ACT Tutors
Burlington City, NJ ACT Tutors
Burlington Township, NJ ACT Tutors
Croydon, PA ACT Tutors
Delanco Township, NJ ACT Tutors
Florence, NJ ACT Tutors
Hainesport ACT Tutors
Hainesport Township, NJ ACT Tutors
Hulmeville, PA ACT Tutors
Rancocas ACT Tutors
Roebling ACT Tutors
Tullytown, PA ACT Tutors
Westampton Township, NJ ACT Tutors
Willingboro ACT Tutors | {"url":"http://www.purplemath.com/burlington_nj_act_tutors.php","timestamp":"2014-04-20T09:18:02Z","content_type":null,"content_length":"24144","record_id":"<urn:uuid:e9341114-8920-47d2-ac2e-7d6595f7ed64>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00646-ip-10-147-4-33.ec2.internal.warc.gz"} |
Suitland Algebra 2 Tutor
...There are numerous topic areas that must be worked. In a meeting with a new student in Algebra 2, I talk with the student in an attempt to identify the ones for which he/she may need
assistance. To be sure, there are links between the topic areas.
13 Subjects: including algebra 2, chemistry, calculus, physics
...I truly believe that math can be fun and easy if it's broken down for you in a way that you can comprehend it. I believe that there is a way to learn math for everyone and I look forward to
finding out which way works best for you. Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals.
22 Subjects: including algebra 2, calculus, geometry, GRE
...I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012. I had more than 3 years' intense training in programming, especially in C and Java,
both of which have been widely used in my daily job. I also have the tutored C and Java courses when I'm an undergraduate.
27 Subjects: including algebra 2, chemistry, calculus, physics
...As for athletics, I am skilled in the following areas: Swimming: I was a competitive swimmer for three years and enjoy teaching stroke mechanics and customizing drills to help you progress
through swimming. Golf: I was on my high school varsity team for three years. I cover swing mechanics (I...
13 Subjects: including algebra 2, calculus, writing, GRE
...I have bachelor's and master's in environmental engineering - testimony to the fact that learning math, biology, and other challenging subjects can actually be easy and fun when equipped with
the right techniques and tools!In my prior experiences as a tutor, work management skills have been a cri...
17 Subjects: including algebra 2, reading, writing, biology | {"url":"http://www.purplemath.com/Suitland_algebra_2_tutors.php","timestamp":"2014-04-21T04:56:55Z","content_type":null,"content_length":"24139","record_id":"<urn:uuid:2bab3650-7ece-4da7-850b-e4e9092dd767>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Need some help
Prove or disprove For all sets A and B: If B ⊆ A^c, then A n B = ∅
Suppose not. Then there would be an $x\in B \cap A$. But this means x is in A and x is in B. But since x is in B, and $B \subset A^c$, then x is in the complement of A. But then we have $x\in A^c$
and $x \in A$, which is clearly a contradiction as $A \cap A^c = \emptyset$ | {"url":"http://mathhelpforum.com/discrete-math/85175-need-some-help.html","timestamp":"2014-04-19T00:21:50Z","content_type":null,"content_length":"27587","record_id":"<urn:uuid:37af4014-29a5-45c6-80c0-97883f7f51bc>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00187-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Why is Mata much slower than MATLAB at matrix inversion?
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.
Re: st: Why is Mata much slower than MATLAB at matrix inversion?
From David Hoaglin <dchoaglin@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Why is Mata much slower than MATLAB at matrix inversion?
Date Tue, 24 Jul 2012 09:10:46 -0400
For large matrices, differences in the detailed programming of matrix
multiplication may have a substantial effect on speed.
And for obtaining factorizations and inverses of matrices, it is
difficult to make comparisons among software platforms without knowing
the details of the algorithms and their implementation. The empirical
evidence presented in this discussion is useful, but it would be much
easier to interpret in the context of the underlying programs (which
may be low-level and well-understood only by experts). As users, we
take the various matrix operations for granted, but one could not have
done that in the early 1970s. We have the numerical analysis
community to thank for a substantial research effort, and the late
Gene H. Golub deserves special recognition.
David Hoaglin
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2012-07/msg00877.html","timestamp":"2014-04-17T04:09:07Z","content_type":null,"content_length":"10953","record_id":"<urn:uuid:169c264e-1cca-452f-9e50-1ad1a64d95c5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
verify: sin(x+y)-sin(x-y)=2cosxsiny
• one year ago
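Expanding with the angle-sum formulas, sin(x+y) = sin x cos y + cos x sin y and sin(x-y) = sin x cos y - cos x sin y, so subtracting cancels the sin x cos y terms and leaves 2 cos x sin y. A numeric spot-check (not part of the original thread):

```python
import math

def lhs(x, y):
    return math.sin(x + y) - math.sin(x - y)

def rhs(x, y):
    return 2.0 * math.cos(x) * math.sin(y)
```

Evaluating both sides on a grid of angles shows they agree to floating-point precision.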
{"url":"http://openstudy.com/updates/4fb96a06e4b05565342e655f","timestamp":"2014-04-16T19:44:46Z","content_type":null,"content_length":"77345","record_id":"<urn:uuid:af55db7d-5d11-4820-88aa-91361ed67df1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Rules and Formulas - Supplied or Not?
Replies: 2 Last Post: May 29, 1995 11:54 PM
Re: Rules and Formulas - Supplied or Not?
Posted: May 29, 1995 1:32 PM
Rex Boggs writes,
>Should we supply on the exam paper such things as
>sin^2 + cos^2 = 1, (ab)^n = a^n b^n, log(ab) = log a + log b, the chain
>rule for differential calculus, etc,, and concentrate on whether students
>can apply the correct rule correctly. Or are these part of the body of
>mathematical knowledge that a good student should have at their fingertips?
>In Queensland, we have external exams for mature age learners. The Maths in
>Society exam has the formulas supplied, the Maths B and Maths C exams
>(roughly equivalent to Algebra II, and Calculus and Analytical Geometry)
>don't. So even within our system there appears to be differing attitudes.
I am not a teacher at this level, but I have had a number of semi-technical
jobs and hobbies where algebraic manipulation was necessary. If I did not
know the formulas you give, I would be in sad shape. Or rather, I would,
through practice, quickly learn them. I would hope that the course they
took gave them enough practice so you wouldn't have to give them formulas
like this. If they don't know that (ab)^n = a^n b^n or log(ab) = log a +
log b, then how can you say they really know exponents or logarithms?
In the case of sin^2 + cos^2 = 1, this one is pretty basic to doing any
kind of trig calculations, but for trig identities in general, there are so
many of them and they are so interrelated that no one can expect to know
them all. Maybe give them the standard identities, but problems that depend
not on those, but identities that can be derived from them.
In the case of the chain rule, and in general, the answer depends on what
you want them to know when the course is over. If the Maths in Society
course -- like many similarly-named courses in the USA -- is intended for
people who are going no further with math, giving them the formulas may be
appropriate. If you are training scientists and engineers, they obviously
need to know the chain rule after their first calculus course; they will
need it soon and often.
Amos Newcombe | Voice (914) 339-0582 | The world's biggest fool
CyberMath | Fax (914) 331-2697 | can say the sun is shining,
Manor Lake #2 | Email: amos@cerf.net, | but that doesn't make it
Kingston NY 12401 | 76324.3313@compuserve.com | dark out. -- Robert Pirsig
Einstein equations
From Encyclopedia of Mathematics
of the gravitational field
Fundamental equations in the general theory of relativity. They connect the metric tensor $g_{ik}$ of the space-time continuum, which describes the gravitational field, and the physical characteristics of different forms of matter, described by means of the energy-momentum tensor $T_{ik}$:

$$R_{ik} - \frac{1}{2} g_{ik} R = \frac{8\pi G}{c^4} T_{ik}.$$

Here $R_{ik}$ is the Ricci tensor, which can be expressed in terms of the metric tensor $g_{ik}$ and its derivatives, $R = g^{ik} R_{ik}$ is the scalar curvature, $G$ is the gravitational constant, and $c$ is the speed of light.
[1] L.D. Landau, E.M. Lifshitz, "The classical theory of fields" , Addison-Wesley (1962) (Translated from Russian)
[a1] S. Weinberg, "Gravitation and cosmology" , Wiley (1972) pp. Chapt. 7
[a2] R.M. Wald, "General relativity" , Univ. Chicago Press (1984) pp. Chapt. 4
How to Cite This Entry:
Einstein equations. D.D. Sokolov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Einstein_equations&oldid=14498
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
college physics
Posted by lala on Thursday, February 24, 2011 at 6:45pm.
Two cars of equal mass are traveling as shown in the figure below just before undergoing a collision. Before the collision, one of the cars has a speed of 19 m/s along +x, and the other has a speed
of 31 m/s along +y. The cars lock bumpers and then slide away together after the collision. What are the magnitude and direction of their final velocity?
• college physics - drwls, Thursday, February 24, 2011 at 8:00pm
Momentum is conserved (always).
Final Vx = 9.5 m/s (for equal masses)
Final Vy = 15.5 m/s (for equal masses)
Use the Pythagorean theorem for magnitude.
Direction = arctan 31/19 = 58.5 degrees from +x axis, towards +y
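The numbers in this answer can be checked with a short script. This is just a sketch of the conserved-momentum calculation; the masses are set to 1 kg since equal masses cancel out of the final velocity.

```python
import math

# Perfectly inelastic collision of two equal-mass cars (momentum is conserved).
# Car 1: 19 m/s along +x; car 2: 31 m/s along +y; they lock bumpers.
m = 1.0                            # equal masses; the actual value cancels
vx = (m * 19 + m * 0) / (2 * m)    # final x-velocity = 9.5 m/s
vy = (m * 0 + m * 31) / (2 * m)    # final y-velocity = 15.5 m/s

speed = math.hypot(vx, vy)                 # magnitude via the Pythagorean theorem
angle = math.degrees(math.atan2(vy, vx))   # direction measured from the +x axis

print(round(vx, 2), round(vy, 2))   # 9.5 15.5
print(round(speed, 2))              # 18.18 (m/s)
print(round(angle, 1))              # 58.5 (degrees from +x, towards +y)
```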
• college physics (simple electronics) - Mike, Thursday, February 24, 2011 at 10:22pm
The electronic flash attachment for a camera contains a capacitor for storing the energy used to produce the flash. In one such unit, the potential difference between the plates of a 775 µF
capacitor is 330 V.
(a) Determine the energy that is used to produce the flash in this unit.
(b) Assuming that the flash lasts for 5.0*10^-3 s, find the effective power or "wattage" of the flash.
So far:
0.000775 farads = coulombs / 330 V
Now here I'm not sure what to do next.
But I push on:
But, that answer is kicked back as wrong, and I'm not quite sure how to go about part A.
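The standard capacitor-energy relation E = ½CV² and the definition of average power P = E/t give both parts. A quick check, using the values from the problem statement (this is a sketch of the standard-physics calculation, not part of the original thread):

```python
C = 775e-6    # capacitance in farads
V = 330.0     # potential difference in volts
t = 5.0e-3    # flash duration in seconds

# (a) Energy stored in a capacitor: E = (1/2) * C * V^2
E = 0.5 * C * V**2     # ≈ 42.2 J

# (b) Effective power ("wattage") = energy delivered / flash duration
P = E / t              # ≈ 8.4e3 W

print(round(E, 1))     # 42.2
print(round(P))        # 8440
```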
• college physics - Mike, Thursday, February 24, 2011 at 10:22pm
oops. sorry. messed that up didn't i?
The folded normal distribution is a probability distribution related to the normal distribution. Given a normally distributed random variable X with mean μ and variance σ^2, the random variable Y = |X| has a folded normal distribution. Such a case may be encountered if only the magnitude of some variable is recorded, but not its sign. The distribution is called "folded" because probability mass to the left of x = 0 is folded over by taking the absolute value.
The cumulative distribution function (CDF) is given by
$$F_Y(y; \mu, \sigma) = \int_0^y \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x+\mu)^2}{2\sigma^2}\right) dx + \int_0^y \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) dx.$$

Using the change of variables z = (x − μ)/σ, the CDF can be written as

$$F_Y(y; \mu, \sigma) = \int_{-\mu/\sigma}^{(y-\mu)/\sigma} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2}\left(z + \frac{2\mu}{\sigma}\right)^2\right) dz + \int_{-\mu/\sigma}^{(y-\mu)/\sigma} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{z^2}{2}\right) dz.$$
The expectation is then given by
$$E(y) = \sigma \sqrt{2/\pi}\, \exp(-\mu^2/2\sigma^2) + \mu\left[1 - 2\Phi(-\mu/\sigma)\right],$$
where Φ(•) denotes the cumulative distribution function of a standard normal distribution.
The variance is given by
$$\operatorname{Var}(y) = \mu^2 + \sigma^2 - \left\{ \sigma \sqrt{2/\pi}\, \exp(-\mu^2/2\sigma^2) + \mu\left[1 - 2\Phi(-\mu/\sigma)\right] \right\}^2.$$
Both the mean, μ, and the variance, σ^2, of X can be seen to act as location and scale parameters of the new distribution.
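The closed-form mean and variance can be checked numerically. The sketch below implements the two formulas (using `math.erf` to evaluate Φ) and compares the mean against a Monte Carlo estimate from simulated |X| samples:

```python
import math, random

def folded_normal_mean(mu, sigma):
    """Closed-form mean of |X| for X ~ N(mu, sigma^2), per the formula above."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu**2 / (2.0 * sigma**2))
            + mu * (1.0 - 2.0 * Phi(-mu / sigma)))

def folded_normal_var(mu, sigma):
    """Var(|X|) = mu^2 + sigma^2 - E(|X|)^2."""
    return mu**2 + sigma**2 - folded_normal_mean(mu, sigma)**2

# Monte Carlo check against simulated |X|.
random.seed(0)
mu, sigma = 1.0, 2.0
samples = [abs(random.gauss(mu, sigma)) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)

print(round(folded_normal_mean(mu, sigma), 3))  # 1.791
print(round(sample_mean, 2))                    # close to the value above
```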
• Leone FC, Nottingham RB, Nelson LS (1961). "The Folded Normal Distribution". Technometrics 3 (4): 543–550.
• Johnson NL (1962). "The folded normal distribution: accuracy of the estimation by maximum likelihood". Technometrics 4 (2): 249–256.
• Nelson LS (1980). "The Folded Normal Distribution". J Qual Technol 12 (4): 236–238.
• Elandt RC (1961). "The folded normal distribution: two methods of estimating parameters from moments". Technometrics 3 (4): 551–562.
• Lin PC (2005). "Application of the generalized folded-normal distribution to the process capability measures". Int J Adv Manuf Technol 26: 825–830.
Digital Signal Processing
Principal lecturer: Dr Markus Kuhn
Taken by: Part II
Past exam questions
Information for supervisors (contact lecturer for access permission)
No. of lectures: 12
Suggested hours of supervisions: 3
Prerequisite courses: Probability, Mathematical Methods for Computer Science
The last lecture of Unix Tools (MATLAB introduction) is a prerequisite for the practical exercises. Some of the material covered in Floating-Point Computation will also help in this course.
This course teaches the basic signal-processing principles necessary to understand many modern high-tech systems, with digital-communications examples. Students will gain practical experience from
numerical experiments in MATLAB-based programming assignments.
• Signals and systems. Discrete sequences and systems, their types and properties. Linear time-invariant systems, convolution.
• Phasors. Eigenfunctions of linear time-invariant systems. Review of complex arithmetic. Some examples from electronics, optics and acoustics.
• Fourier transform. Phasors as orthogonal base functions. Forms of the Fourier transform. Convolution theorem, Dirac’s delta function, impulse combs in the time and frequency domain.
• Discrete sequences and spectra. Periodic sampling of continuous signals, periodic signals, aliasing, sampling and reconstruction of low-pass and band-pass signals, spectral inversion.
• Digital modulation. IQ representation of band-pass signals, in particular AM, FM, PSK, and QAM signals.
• Discrete Fourier transform. Continuous versus discrete Fourier transform, symmetry, linearity, review of the FFT, real-valued FFT.
• Spectral estimation. Leakage and scalloping phenomena, windowing, zero padding.
• Finite and infinite impulse-response filters. Properties of filters, implementation forms, window-based FIR design, use of frequency-inversion to obtain high-pass filters, use of modulation to
obtain band-pass filters, FFT-based convolution, polynomial representation, z-transform, zeros and poles, use of analog IIR design techniques (Butterworth, Chebyshev I/II, elliptic filters).
• Random sequences and noise. Random variables, stationary processes, autocorrelation, crosscorrelation, deterministic crosscorrelation sequences, filtered random sequences, white noise,
exponential averaging.
• Correlation coding. Random vectors, dependence versus correlation, covariance, decorrelation, matrix diagonalization, eigen decomposition, Karhunen-Loève transform, principal component analysis.
Relation to orthogonal transform coding using fixed basis vectors, such as DCT.
• Lossy versus lossless compression. What information is discarded by human senses and can be eliminated by encoders? Perceptual scales, masking, spatial resolution, colour coordinates, some
demonstration experiments.
• Quantization, image coding standards. A/mu-law coding, delta coding, JPEG, MPEG audio compression.
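Several of the topics above (the DFT, the convolution theorem, FFT-based convolution) can be illustrated in a few lines. The sketch below is not from the course material; it uses a naive O(n²) DFT rather than an FFT, but demonstrates the same identity: circular convolution in the time domain equals pointwise multiplication in the frequency domain.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT (carries the 1/N normalization)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolution(x, h):
    """Direct circular convolution: y[n] = sum_m x[m] h[(n-m) mod N]."""
    N = len(x)
    return [sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
h = [1.0, 0.0, 0.5, 0.0]

direct = circular_convolution(x, h)
# Convolution theorem: transform, multiply pointwise, transform back.
via_fft = [c.real for c in idft([X * H for X, H in zip(dft(x), dft(h))])]

print([round(v, 6) for v in direct])    # [2.5, 4.0, 3.5, 5.0]
print([round(v, 6) for v in via_fft])   # matches the direct computation
```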
By the end of the course students should be able to
• apply basic properties of time-invariant linear systems;
• understand sampling, aliasing, convolution, filtering, the pitfalls of spectral estimation;
• explain the above in time and frequency domain representations;
• use filter-design software;
• visualize and discuss digital filters in the z-domain;
• use the FFT for convolution, deconvolution, filtering;
• implement, apply and evaluate simple DSP applications in MATLAB;
• apply transforms that reduce correlation between several signal sources;
• understand the basic principles of several widely-used modulation and image-coding techniques.
* Lyons, R.G. (2010). Understanding digital signal processing. Prentice Hall (3rd ed.).
Oppenheim, A.V. & Schafer, R.W. (2007). Discrete-time digital signal processing. Prentice Hall (3rd ed.).
Stein, J. (2000). Digital signal processing - a computer science perspective. Wiley.
Salomon, D. (2002). A guide to data compression methods. Springer.
A little question
August 1st 2013, 05:43 PM
Suppose f is a function such that: f(x)=0 if x is irrational and 0<x<1; f(x)=1/q if 0<x<1 is rational and in reduced form p/q, i.e. p and q have no common factor. How do I show that for any
number a with 0<a<1, f approaches 0 at a?
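This is (a restriction of) Thomae's function. One standard ε–δ argument, sketched here for reference (not necessarily the approach intended in the course), rests on the fact that only finitely many rationals in (0,1) have small denominator:

```latex
% Claim: for every $a \in (0,1)$, $\lim_{x \to a} f(x) = 0$.
Fix $a \in (0,1)$ and $\varepsilon > 0$. If $f(x) \ge \varepsilon$, then $x = p/q$
in lowest terms with $f(x) = 1/q \ge \varepsilon$, i.e. $q \le 1/\varepsilon$.
Hence only finitely many points $x_1, \dots, x_m \in (0,1) \setminus \{a\}$
satisfy $f(x) \ge \varepsilon$. Set
\[
  \delta = \min_{1 \le k \le m} |a - x_k| > 0
\]
(if no such points exist, any $\delta > 0$ works). Then $0 < |x - a| < \delta$
implies $f(x) < \varepsilon$, which is exactly the statement that $f$
approaches $0$ at $a$.
```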
August 1st 2013, 06:02 PM
Re: A little question
Please do not double post. See the original here.
Computing generators and relations for matrix algebras
Abstract (Summary)
We describe algorithms for computing a presentation for a matrix algebra over a finite field, and for computing the basic algebra associated to such a matrix algebra. We give correctness proofs of our algorithms, and implementations of them in the Magma computer algebra system. We use these implementations to compute several basic algebras.

Index words: Matrix Algebras, Finite Dimensional Algebras, Basic Algebras, Generators and Relations, Morita Theory, Modular Representation Theory.

Computing Generators and Relations for Matrix Algebras, by Graham Y. Matthews. A Dissertation Submitted to the Graduate Faculty of The University of Georgia in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy. Athens, Georgia, 2004. Major Professor: Jon Carlson. Committee: Brian Boe, Leonard Chastkofsky, Elham Izadi, Robert Varley.

Preface

Modular representation theory is the study of the realizations of an algebra A over a field K of characteristic p, as a subalgebra of the endomorphisms of some K-vector space V. In computational modular representation theory we usually take K to be finite, and both V and A to be finite dimensional over K, so that endomorphisms of V can be represented as n × n matrices over K, where n is the K-dimension of V. In the computational setting A is usually implicitly defined via a sequence Γ = {γ1, . . . , γt} of n × n matrices over K, so A is the subalgebra of Mn(K) generated by Γ.

Two natural questions arise. First, can we compute a presentation for A in terms of generators and relations, and if so, can this be done in a somewhat canonical way? The more canonical the presentation, the more useful it becomes in answering related questions, such as whether two algebras are isomorphic. Second, can we compute the basic algebra associated to A? The basic algebra, B, is a usually much smaller dimensional algebra, with the property that A and B are Morita equivalent, i.e., the module category for A and the module category for B are categorically equivalent, and hence representations of A and of B are 'essentially the same'.

Previous approaches to these questions have mainly focussed on the second problem, and have assumed that A is the group algebra of some finite group G. The techniques employed center on finding idempotents e ∈ KG, usually via special subgroups of G, such that the condensed algebra eKGe is Morita equivalent to the group algebra KG. There is a large body of work [16] on how to both construct and recognize such idempotents. The essential problem with this approach is that there is no guarantee that if g1, . . . , gn generate KG, then eg1e, . . . , egne will generate eKGe.

This dissertation attempts to answer both problems via a somewhat different approach. We still attempt to find idempotents in A, and then condense A with respect to these idempotents, but we do not assume that A is the group algebra of some finite group. Rather we work directly with A as a subalgebra of a matrix algebra. We also treat the two problems as being intimately related – our solution to the second problem yields a natural solution to the first.

We start by proving that A can be 'split' as the direct sum of a subalgebra A′ isomorphic (as an algebra) to A modulo its Jacobson radical J(A), and the two-sided ideal J(A). Next we show how to find generators and relations for A′. These generators are constructed in a canonical way, with two generators and a family of four relations per simple A-module. One of the two generators is actually in the basic algebra B for A, and the other becomes zero when we condense to form B. We then show how to construct a generating set for J(A) as a two-sided ideal. While this generating set is not quite canonical, it has the interesting property that it is wholly contained in B. Hence the set not only generates J(A), but also J(B). Our careful construction of generators for A′ and J(A) via elements of A that are either in B, or condense to zero within B, allows us to both construct B as a matrix algebra, and to compute a presentation for B via generators and relations.

We conclude by computing the basic algebra associated to several algebras, including the group algebra of the Mathieu group M11 in characteristic 2. We provide algorithms for all our constructions, along with proofs of their correctness. Appendix A contains implementations of our algorithms in the Magma computer algebra system.
Bibliographical Information:
School: The University of Georgia
School Location: USA - Georgia
Source Type: Master's Thesis
Date of Publication:
Fisher's iris data consists of measurements on the sepal length, sepal width, petal length, and petal width for 150 iris specimens. There are 50 specimens from each of three species. Load the data
and see how the sepal measurements differ between species. You can use the two columns containing sepal measurements.
load fisheriris
gscatter(meas(:,1), meas(:,2), species,'rgb','osd');
xlabel('Sepal length');
ylabel('Sepal width');
N = size(meas,1);
Suppose you measure a sepal and petal from an iris, and you need to determine its species on the basis of those measurements. One approach to solving this problem is known as discriminant analysis.
Linear and Quadratic Discriminant Analysis
The classify function can perform classification using different types of discriminant analysis. First classify the data using the default linear discriminant analysis (LDA).
ldaClass = classify(meas(:,1:2),meas(:,1:2),species);
The observations with known class labels are usually called the training data. Now compute the resubstitution error, which is the misclassification error (the proportion of misclassified
observations) on the training set.
bad = ~strcmp(ldaClass,species);
ldaResubErr = sum(bad) / N
ldaResubErr =
You can also compute the confusion matrix on the training set. A confusion matrix contains information about known class labels and predicted class labels. Generally speaking, the (i,j) element in
the confusion matrix is the number of samples whose known class label is class i and whose predicted class is j. The diagonal elements represent correctly classified observations.
[ldaResubCM,grpOrder] = confusionmat(species,ldaClass)
ldaResubCM =
grpOrder =
Of the 150 training observations, 20% or 30 observations are misclassified by the linear discriminant function. You can see which ones they are by drawing X through the misclassified points.
hold on;
plot(meas(bad,1), meas(bad,2), 'kx');
hold off;
The function has separated the plane into regions divided by lines, and assigned different regions to different species. One way to visualize these regions is to create a grid of (x,y) values and
apply the classification function to that grid.
[x,y] = meshgrid(4:.1:8,2:.1:4.5);
x = x(:);
y = y(:);
j = classify([x y],meas(:,1:2),species);
For some data sets, the regions for the various classes are not well separated by lines. When that is the case, linear discriminant analysis is not appropriate. Instead, you can try quadratic
discriminant analysis (QDA) for our data.
Compute the resubstitution error for quadratic discriminant analysis.
qdaClass = classify(meas(:,1:2),meas(:,1:2),species,'quadratic');
bad = ~strcmp(qdaClass,species);
qdaResubErr = sum(bad) / N
qdaResubErr =
You have computed the resubstitution error. Usually people are more interested in the test error (also referred to as generalization error), which is the expected prediction error on an independent
set. In fact, the resubstitution error will likely under-estimate the test error.
In this case you don't have another labeled data set, but you can simulate one by doing cross-validation. A stratified 10-fold cross-validation is a popular choice for estimating the test error on
classification algorithms. It randomly divides the training set into 10 disjoint subsets. Each subset has roughly equal size and roughly the same class proportions as in the training set. Remove one
subset, train the classification model using the other nine subsets, and use the trained model to classify the removed subset. You could repeat this by removing each of the ten subsets one at a time.
Because cross-validation randomly divides data, its outcome depends on the initial random seed. To reproduce the exact results in this example, execute the following command:
First use cvpartition to generate 10 disjoint stratified subsets.
cp = cvpartition(species,'k',10)
cp =
K-fold cross validation partition
N: 150
NumTestSets: 10
TrainSize: 135 135 135 135 135 135 135 135 135 135
TestSize: 15 15 15 15 15 15 15 15 15 15
The crossval function can estimate the misclassification error for both LDA and QDA using the given data partition cp.
Estimate the true test error for LDA using 10-fold stratified cross-validation.
ldaClassFun= @(xtrain,ytrain,xtest)(classify(xtest,xtrain,ytrain));
ldaCVErr = crossval('mcr',meas(:,1:2),species,'predfun', ...
ldaCVErr =
The LDA cross-validation error has the same value as the LDA resubstitution error on this data.
Estimate the true test error for QDA using 10-fold stratified cross-validation.
qdaClassFun = @(xtrain,ytrain,xtest)(classify(xtest,xtrain,ytrain,...
qdaCVErr = crossval('mcr',meas(:,1:2),species,'predfun',...
qdaCVErr =
QDA has a slightly larger cross-validation error than LDA. It shows that a simpler model may get comparable, or better performance than a more complicated model.
The classify function has two other types, 'diagLinear' and 'diagQuadratic'. They are similar to 'linear' and 'quadratic', but with diagonal covariance matrix estimates. These diagonal choices
are specific examples of a Naive Bayes classifier, because they assume the variables are conditionally independent given the class label. Naive Bayes classifiers are among the most popular
classifiers. While the assumption of class-conditional independence between variables is not true in general, Naive Bayes classifiers have been found to work well in practice on many data sets.
The NaiveBayes class can be used to create a more general type of Naive Bayes classifier.
First model each variable in each class using a Gaussian distribution. You can compute the resubstitution error and the cross-validation error.
nbGau= NaiveBayes.fit(meas(:,1:2), species);
nbGauClass= nbGau.predict(meas(:,1:2));
bad = ~strcmp(nbGauClass,species);
nbGauResubErr = sum(bad) / N
nbGauClassFun = @(xtrain,ytrain,xtest)...
(predict(NaiveBayes.fit(xtrain,ytrain), xtest));
nbGauCVErr = crossval('mcr',meas(:,1:2),species,...
'predfun', nbGauClassFun,'partition',cp)
nbGauResubErr =
nbGauCVErr =
So far you have assumed the variables from each class have a multivariate normal distribution. Often that is a reasonable assumption, but sometimes you may not be willing to make that assumption or
you may see clearly that it is not valid. Now try to model each variable in each class using a kernel density estimation, which is a more flexible nonparametric technique.
nbKD= NaiveBayes.fit(meas(:,1:2), species,'dist','kernel');
nbKDClass= nbKD.predict(meas(:,1:2));
bad = ~strcmp(nbKDClass,species);
nbKDResubErr = sum(bad) / N
nbKDClassFun = @(xtrain,ytrain,xtest)...
nbKDCVErr = crossval('mcr',meas(:,1:2),species,...
'predfun', nbKDClassFun,'partition',cp)
nbKDResubErr =
nbKDCVErr =
For this data set, the Naive Bayes classifier with kernel density estimation gets smaller resubstitution error and cross-validation error than the Naive Bayes classifier with a Gaussian distribution.
Another classification algorithm is based on a decision tree. A decision tree is a set of simple rules, such as "if the sepal length is less than 5.45, classify the specimen as setosa." Decision
trees are also nonparametric because they do not require any assumptions about the distribution of the variables in each class.
The classregtree class creates a decision tree. Create a decision tree for the iris data and see how well it classifies the irises into species.
t = classregtree(meas(:,1:2), species,'names',{'SL' 'SW' });
It's interesting to see how the decision tree method divides the plane. Use the same technique as above to visualize the regions assigned to each species.
[grpname,node] = t.eval([x y]);
Another way to visualize the decision tree is to draw a diagram of the decision rule and class assignments.
This cluttered-looking tree uses a series of rules of the form "SL < 5.45" to classify each specimen into one of 19 terminal nodes. To determine the species assignment for an observation, start at
the top node and apply the rule. If the point satisfies the rule you take the left path, and if not you take the right path. Ultimately you reach a terminal node that assigns the observation to one
of the three species.
Compute the resubstitution error and the cross-validation error for decision tree.
dtclass = t.eval(meas(:,1:2));
bad = ~strcmp(dtclass,species);
dtResubErr = sum(bad) / N
dtClassFun = @(xtrain,ytrain,xtest)(eval(classregtree(xtrain,ytrain),xtest));
dtCVErr = crossval('mcr',meas(:,1:2),species, ...
'predfun', dtClassFun,'partition',cp)
dtResubErr =
dtCVErr =
For the decision tree algorithm, the cross-validation error estimate is significantly larger than the resubstitution error. This shows that the generated tree overfits the training set. In other
words, this is a tree that classifies the original training set well, but the structure of the tree is sensitive to this particular training set so that its performance on new data is likely to
degrade. It is often possible to find a simpler tree that performs better than a more complex tree on new data.
Try pruning the tree. First compute the resubstitution error for various of subsets of the original tree. Then compute the cross-validation error for these sub-trees. A graph shows that the
resubstitution error is overly optimistic. It always decreases as the tree size grows, but beyond a certain point, increasing the tree size increases the cross-validation error rate.
resubcost = test(t,'resub');
[cost,secost,ntermnodes,bestlevel] = test(t,'cross',meas(:,1:2),species);
plot(ntermnodes,cost,'b-', ntermnodes,resubcost,'r--')
xlabel('Number of terminal nodes');
ylabel('Cost (misclassification error)')
Which tree should you choose? A simple rule would be to choose the tree with the smallest cross-validation error. While this may be satisfactory, you might prefer to use a simpler tree if it is
roughly as good as a more complex tree. For this example, take the simplest tree that is within one standard error of the minimum. That's the default rule used by the classregtree/test method.
You can show this on the graph by computing a cutoff value that is equal to the minimum cost plus one standard error. The "best" level computed by the classregtree/test method is the smallest tree
under this cutoff. (Note that bestlevel=0 corresponds to the unpruned tree, so you have to add 1 to use it as an index into the vector outputs from classregtree/test.)
[mincost,minloc] = min(cost);
cutoff = mincost + secost(minloc);
hold on
plot([0 20], [cutoff cutoff], 'k:')
plot(ntermnodes(bestlevel+1), cost(bestlevel+1), 'mo')
legend('Cross-validation','Resubstitution','Min + 1 std. err.','Best choice')
hold off
Finally, you can look at the pruned tree and compute the estimated misclassification error for it.
pt = prune(t,bestlevel);
ans =
This example shows how to perform classification in MATLAB® using Statistics Toolbox functions.
This example is not meant to be an ideal analysis of the Fisher iris data. In fact, using the petal measurements instead of, or in addition to, the sepal measurements may lead to better
classification. Also, this example is not meant to compare the strengths and weaknesses of different classification algorithms. You may find it instructive to perform the analysis on other data sets
and compare different algorithms. There are also Statistics Toolbox functions that implement other classification algorithms. For instance, you can use TreeBagger to perform bootstrap aggregation for
an ensemble of decision trees, as described in the example Classifying Radar Returns for Ionosphere Data.
Inversion density: Have you seen this concept?
Let n > 1 be an integer. Let A be an array, indexed from 1 to n, of n values A(i) coming from the finite set {0,1}. (More generally, the values can come from any totally ordered set, but I only need
a two element set for now.)
Let us count the number of inversions in A, that is the number of pairs (i,j) with i < j and A(i) > A(j). Using [ ] for Iverson notation, this is
I = sum{ 1 <= i < j <= n } [A(i) > A(j)]
I call the quantity I/(n^2) the inversion density of A. As an exercise you can show the inversion density falls in the closed interval [0, 1/4] .
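The definition and the [0, 1/4] bound can be made concrete with a direct computation. This is a sketch (the pair-counting loop is O(n²), which is fine for small n):

```python
from itertools import product

def inversion_density(A):
    """I / n^2, where I = #{(i, j) : i < j and A[i] > A[j]}."""
    n = len(A)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n) if A[i] > A[j])
    return inversions / n**2

print(inversion_density([0, 1, 0, 1]))   # 0.0625 (one inversion, n = 4)
print(inversion_density([1, 1, 0, 0]))   # 0.25   (the extremal pattern)
print(inversion_density([0, 0, 1, 1]))   # 0.0    (sorted, no inversions)

# Brute-force check of the [0, 1/4] bound over all 0-1 arrays of length 6:
# the maximum is attained by a run of ones followed by a run of zeros.
densities = [inversion_density(a) for a in product([0, 1], repeat=6)]
print(max(densities))                    # 0.25
```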
Question: Is this (inversion density) or a closely related concept present in the literature? If so, please tell me where.
I am still wading through the sorting literature, where number of inversions in an array are considered, but I have yet to see anything regarding a density. I have not yet found a successful online
search; if a good search term is proffered I will try it as a substitute for a good reference.
Motivation: I did some research on Combsort and was looking for worst case complexity results. After finding some bad (good) cases, I saw that the same ideas had been more fully developed in the two
papers listed below. In particular Poonen has a proposition which can be phrased in terms of inversion density as: there is an absolute constant c so that, for any length n 0-1 array A and for any
integer j with 1 < j <= n, there is a contiguous length j subarray B (so B(k) = A(l+k-1) for some l and 1 <= k <= j ) such that the inversion density of B is at least c times the inversion density of
A .
Poonen shows c >= 1/256. I can tighten his argument to show c >= 1/32, and suspect c = 1/2 . I also suspect the proposition holds for arrays with values from any totally ordered set.
References (Please add to this!):
Mark Allen Weiss and Robert Sedgewick, "Tight Lower Bounds for Shellsort", Journal of Algorithms Volume 11, Issue 2, June 1990, Pages 242-251
B. Poonen, “The worst case in Shellsort and related algorithms,” J. of Algorithms 15, 1993, 101-124.
Gerhard "Ask Me About System Design" Paseman, 2010.07.10
reference-request computer-science
See page 68 of Property testing and parameter testing for permutations by Hoppen et al. siam.org/proceedings/soda/2010/SODA10_007_hoppenc.pdf – SandeepJ Jul 11 '10 at 18:07
SandeepJ, thanks. I will see if the bibliography holds some clues as well. Gerhard "Ask Me About System Design" Paseman, 2010.07.11 – Gerhard Paseman Jul 12 '10 at 4:39
Will, click on Users and type in "Gerhard Paseman", no double quotes. I think my total is close to 900. Part of that is due to community-wiki answers. I'll see if I can ask some really good non CW
questions. Gerhard "An Account for Every Occasion" Paseman, 2010.07.11 – Gerhard Paseman Jul 12 '10 at 4:41
I did that, something with 200-300 points. My impression is that unregistered accounts are not name-searchable. So you may have several fragments around. The upside for MO of having them combine
all your accounts into one is when somebody says "I remember Gerhard Paseman had this fascinating question/answer/comment a year ago, what was that, there was something about a unicorn and it
turned out the good guy and the bad guy were twins." Then they search under your name and have a chance of finding it. – Will Jagy Jul 12 '10 at 19:02
Suitland Algebra 2 Tutor
...There are numerous topic areas that must be worked through. In a meeting with a new student in Algebra 2, I talk with the student in an attempt to identify the ones for which he/she may need
assistance. To be sure, there are links between the topic areas.
13 Subjects: including algebra 2, chemistry, calculus, physics
...I truly believe that math can be fun and easy if it's broken down for you in a way that you can comprehend it. I believe that there is a way to learn math for everyone and I look forward to
finding out which way works best for you. Even if you just need a little reminder of math you used to know, I'm happy to help you remember the fundamentals.
22 Subjects: including algebra 2, calculus, geometry, GRE
...I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012. I had more than 3 years' intense training in programming, especially in C and Java,
both of which have been widely used in my daily job. I also tutored C and Java courses when I was an undergraduate.
27 Subjects: including algebra 2, chemistry, calculus, physics
...As for athletics, I am skilled in the following areas: Swimming: I was a competitive swimmer for three years and enjoy teaching stroke mechanics and customizing drills to help you progress
through swimming. Golf: I was on my high school varsity team for three years. I cover swing mechanics (I...
13 Subjects: including algebra 2, calculus, writing, GRE
...I have bachelor's and master's in environmental engineering - testimony to the fact that learning math, biology, and other challenging subjects can actually be easy and fun when equipped with
the right techniques and tools!In my prior experiences as a tutor, work management skills have been a cri...
17 Subjects: including algebra 2, reading, writing, biology
Fredericksburg, VA
Fairfax Station, VA 22039
Tutor for Math, Reading, Science and Computers
...By doing that, the student learns how to learn. By learning to learn, the investment the student made for the Algebra 1 class that is giving them such a hard time today can help them in their
organic chemistry class 4 years from now in college. I have been using...
Offering 10+ subjects including algebra 1, algebra 2 and calculus
Definition of Complex Shapes | Chegg.com
Most shapes occurring in the physical world are complex. Complex shapes combine parts or all of simple shapes. These complex shapes include polygons and other shapes that may include parts of
circles, squares, triangles, ellipses, and rectangles.
To determine the area of a complex shape, use the formulas for simple shapes when possible and either combine or subtract them depending on the composition of the complex shape. Use Heron's formula
to calculate the area of a triangle when the lengths of all three sides are known; use Brahmagupta's formula to calculate the area of a cyclic quadrilateral (one whose vertices all lie on a circle) given
the lengths of its four sides.
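Both formulas are short enough to sketch in a few lines of Python (an illustration, not from the source; note that Brahmagupta's formula applies only to cyclic quadrilaterals):

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2                      # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def brahmagupta_area(a, b, c, d):
    """Area of a *cyclic* quadrilateral from its four side lengths."""
    s = (a + b + c + d) / 2                  # semi-perimeter
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

print(heron_area(3, 4, 5))          # 3-4-5 right triangle -> 6.0
print(brahmagupta_area(1, 1, 1, 1)) # unit square (cyclic)  -> 1.0
```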
Quotient of two functions
January 8th 2008, 03:22 PM #1
Determine the zeroes, and the domain. Write the equation for each asymptote. Then graph the function and estimate the range.
g(x)= $x/x^2-4$ h(x)= $x^2-4/x$
Thank you
Are the functions $g(x) = \frac{x}{{x^2 - 4}}\,\& \,h(x) = \frac{{x^2 - 4}}{x}$?
Please learn some advanced LaTeX.
... Or at least parenthesis.
yes Plato is right about the equations....sorry about that
First factor the numerator and denominator and see if anything cancels out.
$g(x) = \frac{x}{(x + 2)(x - 2)}$
No cancellations.
So to find vertical asymptotes find out where the denominator is equal to 0. This gives x = 2 and x = -2 as vertical asymptotes.
This also gives the domain as all real numbers except x = 2, and -2.
As far as the zeros are concerned, solve
$g(x) = \frac{x}{(x + 2)(x - 2)} = 0$
This has a solution of x = 0, so there is your zero.
Is there a horizontal asymptote? For that we need to see what the behavior of g(x) is for very large x. I think it is easy to see that as x goes to either plus or minus infinity that g(x) goes to
0. So there is a horizontal asymptote at y = 0.
We do not have a slant asymptote because the degree of the numerator is not one more than the degree of the denominator.
I think that about covers it. I'll leave you to graph it yourself.
some remarks about the function h:
1. $h(x) = \frac{x^2-4}{x} = x-\frac4x = \frac1{g(x)}~,~x\ \in \ \mathbb{R} \setminus \{0\}$
Therefore: The zeros of g indicates the vertical asymptotes of h.
The vertical asymptotes of g pass through the zeros of h.
2. h has a slanted asymptote y = x and a vertical asymptote at x = 0
3. h has 2 zeros: x = -2, x = 2
4. The graph of h is drawn in red, the asymptotes in brown.
The blue graph with its green asymptotes is the graph of g.
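As a quick numeric sanity check of the answers above (a sketch of mine, not part of the thread), one can probe g(x) = x/(x^2 - 4) near its zero and its asymptotes:

```python
# g(x) = x / (x^2 - 4): zero at x = 0, vertical asymptotes at x = +/-2,
# horizontal asymptote y = 0.
g = lambda x: x / (x ** 2 - 4)

assert g(0.0) == 0.0              # the single zero
assert abs(g(2.000001)) > 1e5     # |g| blows up approaching x = 2
assert abs(g(-2.000001)) > 1e5    # ... and approaching x = -2
assert abs(g(1e9)) < 1e-8         # g(x) -> 0 for large |x|
print("checks pass")
```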
Measuring Focal Length
Why measure Focal Length?
Unless you're testing and calibrating lenses, I'm not sure why you'd want to know the exact focal length of a lens other than simply to satisfy your curiosity. It's just something that some people
want to know. If you pay for a 400mm lens, it's nice to know you have a 400mm lens, not a 370mm lens I guess. In fact most lens makers will tell you that the focal length marked on a lens is +/- 5%.
That means your 400mm could be as short as 380mm and still be "within spec". Normally telephoto lenses err on the short side. It's MUCH more likely that a lens will be shorter than marked than longer.
Definition of "Focal Length"
Camera lenses are complex critters. In the case of a single element equi-convex thin lens, it's easy to measure focal length. You focus a point at infinity, and the distance from the center of the
lens to the focal point is the focal length.
However life is not so simple with a camera lens. You can still focus an object at infinity OK, but what distance do you measure? From the focus to the back of the lens, from the focus to the front
of the lens or from the focus to the middle of the lens? The answer is no to all three questions. You actually measure the distance from the focus to something called the rear (or secondary) nodal
point of the lens. The strict definition is:
Assuming that the lens is surrounded by air or vacuum (refractive index 1.0), the focal length is the distance from the secondary principal point (which in this case is also the secondary nodal
point) to the rear focal point of a lens.
Where is the "rear nodal point"? Well it could be anywhere. It could be somewhere inside the lens, it could be out in front of the first element of the lens (for telephoto lenses) or it could be
somewhere between the last element of the lens and the focus (for wideangle retrofocus lenses). This makes life complex.
If a lens actually had the focal length that was marked on it, the rear nodal point would be one focal length in front of the film (or sensor) plane when the lens was focused at infinity. Of course,
if the lens had the focal length marked on it, you wouldn't need to measure it! The point of doing the measurement is to see what the true focal length is!
So to measure the focal length you either have to determine where the rear nodal point is, or you have to use a method of measurement which doesn't require you to know where it is.
There are a number of methods of finding the nodal points of a lens, but none are simple. I won't discuss them here. Instead I'll describe a couple of methods of measuring focal length.
The first method I'll call "the hard way" since it means setting up a small optical bench and making a number of linear measurements. It's the method I'd use to see what the true focal length of a
close focusing zoom is. Close focusing telephoto zooms with internal focus often get that close focus by reducing the focal length. So when you have your 300mm zoom focused down to 12", it's probably
only really acting as a 100mm lens. Does it matter? Well, if it does to you, this is how to measure it.
The second method I'll call "the easy way". It involves taking one photograph, followed by some fairly simple image measurements and calculations. It's the method I'd use to measure the focal length
of telephoto lenses focused at infinity.
The Hard Way
At "A" is the target being imaged and at "B" is a screen on which the image will be focused. "A" and "B" must be greater than 4 focal lengths apart.
There are two positions for the lens which will focus an image on the screen. In the first position (upper image), a magnified image of the target will be formed. In the second position (lower
image), a reduced image of the target will be formed.
The procedure for focal length determination is as follows. Move the lens to a position where a magnified image of the target is focused on the screen. Measure "h1" (the target size) and "h2" (the
image size). Also measure "d1", the distance from the target "A" to some point on the lens. It could be the front of the lens or the back of the lens. It doesn't matter.
Now move the lens towards the screen "B" until a second (reduced) image is formed in sharp focus. Measure the distance "d2" from the target "A" to the same point on the lens you used in the first
measurement.
Now calculate the magnification in the first step, which is simply (h2/h1) = "m". Then calculate the distance the lens was moved, which is simply (d2-d1) = "d"
The focal length of the lens is then given by:
focal length = d / (m - (1/m))
Example: Lets say the magnification was 6x and the distance the lens had to be moved was 345mm. The focal length of the lens is thus 345/((6-(1/6)) = 345/5.833 = 59.14mm.
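The formula and the worked example are trivial to put into code. This is my own sketch, not the author's; the function name is invented for illustration:

```python
def focal_length_two_positions(magnification, displacement_mm):
    """Focal length (mm) from the two-conjugate-position method: f = d / (m - 1/m),
    where m is the magnification at the first position and d is how far the lens
    moved between the two focus positions."""
    m = magnification
    return displacement_mm / (m - 1 / m)

# The worked example: 6x magnification, 345 mm of lens travel.
print(round(focal_length_two_positions(6, 345), 2))   # 59.14 mm
```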
While this seems (and is) pretty simple in principle, in practice it's not trivial to setup and to make measurements with high accuracy. If you want 1% accuracy on focal length, you need at least 1%
accuracy when measuring magnification and the distance moved. Measuring magnification to a 1% accuracy is pretty difficult.
Does it work in practice?
To test this method I made a very rough calculation of the focal length of a Canon EF28-105/3.5-4.5 USM lens set to 105mm and focused at (1) infinity and (2) 1m. This is an internally focusing lens
and so would be expected to change focal length when close focused.
(1) With the lens set to infinity focus I found a magnification of 3x and a distance of 27cm between two focus conditions with the target ("A") and screen ("B") about 55cm apart. This gives a focal
length of 101mm, pretty close to the specified 105mm and not bad considering the precision with which I measured magnification and distance.
(2) With the lens set to 1m focus I found a magnification of 5x and a distance of 36cm between two focus conditions. This gives a focal length of 75mm, a reasonable number for a 100mm internal focus
lens close focused.
So yes, the method seems to work. However measuring magnification with high precision is tricky, so getting accurate numbers for focal length isn't particularly easy.
The Easy Way
The easy way doesn't require you to make any manual measurements at all, which is why it's easy - and accurate. However it's only good for the lens set to infinity focus.
The focal length of a non wideangle lens is closely approximated by:
Focal_Length = (distance/angle) * (180/pi) [1]
where angle = the angle between two distant points
and distance = distance between those two points in the focal plane
More accurately the formula is:
Angle = 2 * arctan (distance/(2*focal_length)) or, after rearranging:
Focal_length = distance/(2*tan(angle/2)) [2]
The approximation error in focal length of the simpler formula [1] is about 1% at 35mm, 0.1% at 100mm and .0037% (less than 0.2mm) at 500mm
Fortunately nature has given us the perfect target. The stars. They are pinpoints of light at an infinite distance and astronomers have measured their positions to an amazingly high degree of
accuracy. So if we focus on a pair of stars of known angular separation and measure how far apart their images are on the film (or digital sensor), we know the focal length of the lens!
Right now (winter in the northern hemisphere) Orion is a very prominent constellation and it makes a great calibration target. The three stars of Orion's belt (delta, epsilon and zeta Orionis) are
nicely spaced for calibration of lenses with focal lengths from 100mm to 600mm and are bright enough to be easily seen and recognized.
The angular separation of the stars is easy to calculate, but a little tedious, so I've done it for you! The basic procedure is to get the coordinates of the stars from a star catalog. The Yale
Bright Start Catalog (BS) is available for download from http://vizier.u-strasbg.fr/viz-bin/ftp-index?/ftp/cats/v/50 and lists all naked eye visible stars. It's not easy reading so be warned that
finding a star's coordinates takes some effort.
The angular separation between two stars is given by the expression:
Cos (A) = sin(d1)*sin(d2) + cos (d1)*cos(d2)*cos (ra1-ra2)
where A is the angular separation between star 1 and star 2 (degrees)
d1 is the declination of star 1 (degrees)
d2 is the declination of star 2 (degrees)
ra1 is the right ascension of star 1 (degrees)
ra2 is the right ascension of star 2 (degrees)
There are a few complications due to the fact that the stars move. The BS catalog lists positions for 1900 and 2000. Use the ones for 2000 of course. There are also corrections for annual proper
motion, but in the case of the belt stars of Orion they're all moving in very nearly the same direction at very nearly the same speed, so they maintain their relative positions over long periods of
time and corrections in relative position for proper motion are very small.
My calculations show the following spacings:
Delta to Epsilon = 1.38583 degrees
Epsilon to Zeta = 1.35606 degrees
Delta to Zeta = 2.73601 degrees
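The separation formula above can be checked with a few lines of Python (my own sketch, not from the article). The J2000 coordinates below are approximate values I've supplied for illustration, converted to degrees:

```python
import math

def angular_separation_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation (degrees) between two sky positions,
    with RA and declination given in degrees."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    cos_a = (math.sin(dec1) * math.sin(dec2)
             + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(cos_a))

# Approximate J2000 coordinates (degrees) for two of Orion's belt stars:
delta_ori   = (83.0017, -0.2992)   # Mintaka (delta Orionis)
epsilon_ori = (84.0533, -1.2019)   # Alnilam (epsilon Orionis)

sep = angular_separation_deg(*delta_ori, *epsilon_ori)
print(sep)   # ~1.386 degrees, matching the value quoted above
```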
So, now we have that all we need to do is take a photograph of those stars. Two things to bear in mind here. First, if there is any distortion in the image, they results will be affected. Fortunately
telephoto lenses don't usually have a lot of distortion. However if they do, distortion is a function of the cube of the distance from the center of the frame, so if we keep the stars away from the
edge, distortion should be negligible. Second, the earth rotates (so the stars appear to move across the sky). If you want to freeze that motion with a 500mm lens, you'll need an exposure of 1/10s or
less. Fortunately these stars are bright enough that an exposure of 1/10s at f5.6 at ISO 400 or 800 is enough.
As a brief aside here, the exposure required for a given star at a given ISO setting doesn't depend on the f-stop of the lens. This may seem odd, but it's true. Exposure depends on f-stop only for
extended objects. A star is essentially an infinitely small point source and the (diffraction limited) size of the image is essentially the same at the same f-stop no matter what focal length lens
you use. So you get the same sized image with a 500mm lens at f4 as you do with a 50mm lens at f4. For extended objects the 500mm lens would give you an image 10x as large. What exposure does depend
on is the physical size of the aperture, which is given by (focal length/f-stop), so for a 500mm lens at f4 it's 500/4 = 125mm. For a 50mm lens at f4 it's 50/4 = 12.5mm. Since the amount of light
collected is proportional to the area of the aperture, you'll need 100x longer exposure with the 50mm f4 lens than you need with the 500mm f4 lens to record the same star at the same brightness. With
the 50mm lens at f2, you'd only need to expose for 25 times as long as the 500mm lens at f4.
Absolute, precise, focus isn't needed since the star image will be a circle and you can just pick the center of the circle as the reference point from which measurements are made. However the better
the focus the less exposure will be needed. Just focus manually or set the lens to "infinity". A sample image is shown below, taken with a Canon EOS 20D DSLR using a Canon EF 300/4L lens.
If you magnify the star images in your image editor, you'll see something like the image below:
If you move the cursor to the center of the star image in most image editors, somewhere on the screen the coordinates will be displayed. For example in Irfan View they are displayed at the top right
of the screen:
So in this case the center of the star image is located at pixel 1838 horizontally and pixel 1388 vertically.
Now all you have to do is calculate the actual distance between the stars on the sensor. This is pretty simple. Say we shoot an image using a Canon EOS 20D DSLR and a Canon EF500/4.5L and the image
coordinates of Zeta Orionis are (x1, y1) pixels and the coordinates of Epsilon Orionis are (x2, y2) pixels. The separation of those two coordinates is given by the Pythagorean theorem:
(distance between stars)^2 = (x1-x2)^2 + (y1-y2)^2
So let's say the image of Epsilon Orionis is centered at 969, 1371 and the image of Delta Orionis is centered at 2849, 1251. The distance between them (let's call it "S") is then just:
S = sqrt( (969-2849)^2 + (1371-1251)^2 ) = 1883.8 pixels
So what's a "pixel" in terms of length? Well it's the size of the sensor divided by the number of pixels across it. For the Canon EOS 20D DSLR it's 22.5mm and 3504 pixels, meaning a pixel
corresponds to 6.4212 microns (a micron is 1/1000 mm). So 1883.8 pixels is 12.0963mm.
Now we go back to the equation: Focal Length = (length)/(angle)* (180)/pi and substitute the values.
Focal length = (12.0963)/(1.38583) * (180)/pi = 500.107mm
Done! The focal length of the Canon EF500/4.5L turns out to be 500.1mm. It's best to do the test several times with multiple pairs of stars and average the results if you want the most accurate
value. Doing this I came up with 500.15mm
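The whole pixel-to-focal-length calculation can be collected into a short Python sketch (mine, not the author's code; the helper name is invented). It uses the small-angle form f = d/angle and the numbers from the 20D example:

```python
import math

def focal_length_from_stars(p1, p2, pixel_pitch_mm, separation_deg):
    """Focal length (mm) from two star image centroids (pixel coordinates),
    the sensor pixel pitch in mm, and the stars' known angular separation."""
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    distance_mm = math.hypot(dx, dy) * pixel_pitch_mm   # separation on the sensor
    return distance_mm / math.radians(separation_deg)   # f = d / angle

# Worked example: 20D pixel pitch ~6.4212 um, Delta-Epsilon separation 1.38583 deg
f = focal_length_from_stars((969, 1371), (2849, 1251), 6.4212e-3, 1.38583)
print(round(f, 1))   # 500.1 mm
```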
I did this with the Canon EF 300/4L and obtained focal lengths of 295.94, 295.59 and 297.76mm, which averages out to 295.76mm. For the Canon EF 28-135/3.5-5.6 IS set to 135mm the calculated focal
lengths from two frames were 132.36mm and 132.42mm, which gives an average of 132.39mm.
Once you've calibrated a few lenses like this, you can use them as "transfer standards" for other lenses. If you shoot with the Canon EF 300/4L (focal length 295.76mm) and then shoot the same distant
scene with another lens, you can compare the scale of the two images in PhotoShop (or your favorite alternative image editor). Let's say you have to reduce the size of the image shot with the Canon
EF 300/4L by 5% to get an exact overlap with the image shot with the 2nd lens. That means that the second lens must have a focal length 5% less than that of the 300/4L, which would make it 281mm.
So now you know!
© Copyright Bob Atkins All Rights Reserved
Muddiest Points on Chapter 5
MP 5.1 Why is dU = T dS - p dV always true?
This is a relation between state variables. As such it is not path dependent, only depends on the initial and final states, and thus must hold no matter how we transition from initial state to final
state. What is not always true, and what holds only for reversible processes, are the relations dQ = T dS and dW = p dV. One example of this is the free expansion, where dQ = dW = 0, but where the quantities T dS and p dV (and the integrals
of these quantities) are not zero.
MP 5.2 What makes dQ_rev different than dQ?
The term dQ_rev denotes the heat exchange during a reversible process. We use the notation dQ to denote heat exchange during any process, not necessarily reversible. The distinction between the two is
important for the reason given above in Section 5.3.
MP 5.3 What happens when all the energy in the universe is uniformly spread, i.e., entropy is at a maximum?
I quote from The Refrigerator and the Universe, by Goldstein and Goldstein:
The entropy of the universe is not yet at its maximum possible value and it seems to be increasing all the time. Looking forward to the future, Kelvin and Clausius foresaw a time when the maximum
possible entropy would be reached and the universe would be at equilibrium forever afterward; at this point, a state called the ``heat death'' of the universe, nothing would happen forever after.
The book also gives comments on the inevitability of this fate.
MP 5.4 Why do you rewrite the entropy change in terms of pv^gamma?
We have discussed the representation of thermodynamic changes in p-v coordinates a number of times and it is familiar, as is the idea of the ``pv^gamma'' process. I want to relate this to the more general
expression involving the entropy change, Equation (5.5), to show (i) when the simple form applied and (ii) how valid an approximation it was. Using the entropy change, we now have a quantitative metric
for doing just that.
MP 5.5 What is the difference between isentropic and adiabatic?
Isentropic means no change in entropy (dS = 0). An adiabatic process is a process with no heat transfer (dQ = 0). We defined for reversible processes dQ = T dS. So generally an adiabatic process is not necessarily
isentropic -- only if the process is reversible and adiabatic can we call it isentropic. For example, a real compressor can be assumed adiabatic but is operating with losses. Due to the losses the
compression is irreversible. Thus the compression is not isentropic.
MP 5.6 In the single reservoir example, why can the entropy decrease?
When we looked at the single reservoir, our ``system'' was the reservoir itself. The example I did in class had heat leaving the reservoir, so that Q was negative. Thus the entropy change of the
reservoir is also negative. The second law, however, guarantees that there is a positive change in entropy somewhere else in the surroundings that will be as large, or larger, than this decrease.
MP 5.7 Why does the entropy of a heat reservoir change if the temperature stays the same?
A heat reservoir is an idealization (like an ideal gas, a rigid body, an inviscid fluid, a discrete element mass-spring-damper system). The basic idea is that the heat capacity of the heat reservoir
is large enough so that the transfer of heat in whatever problem we address does not appreciably alter the temperature of the reservoir. In grappling with approximations such as this it is useful to
think about extreme cases. Therefore, suppose the thermal reservoir is the atmosphere. The mass of the atmosphere is roughly 10^19 kg (give or take an order of magnitude). Let us calculate the temperature
rise due to the heat dumped into the atmosphere by a jet engine during a transcontinental flight. A large gas turbine engine might produce on the order of 100 MW of heat, so that the rise in
atmospheric temperature, Delta T, for the heat Q transferred during a 6 hour flight is given by
Delta T = Q / (m_atmosphere * c_p).
Substituting for the atmospheric mass and the specific heat gives a value for temperature change of roughly 10^-10 K. To a very good approximation, we can say that the temperature of this heat reservoir is
constant and we can evaluate the entropy change of the reservoir as Delta S = Q / T.
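That back-of-the-envelope estimate is easy to reproduce. The sketch below is mine, not from the notes, and uses assumed round numbers: an atmospheric mass of 5e18 kg and an air specific heat of 1000 J/(kg K):

```python
# Order-of-magnitude estimate of atmospheric temperature rise from one flight.
# All numbers are assumed round values, not taken from the notes.
Q = 100e6 * 6 * 3600   # 100 MW of heat over a 6 hour flight, in J
m_atm = 5e18           # mass of the atmosphere, kg (order of magnitude)
c_p = 1000.0           # specific heat of air, J/(kg K)

dT = Q / (m_atm * c_p)
print(dT)              # ~4e-10 K: utterly negligible, as claimed
```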
MP 5.8 How can the heat transfer from or to a heat reservoir be reversible?
We made the assumption that the heat reservoir is very large, and therefore it is a constant temperature heat source or sink. Since the temperature is uniform there is no heat transfer across a
finite temperature difference and this heat exchange is reversible. We discussed this in the second example, ``Heat transfer between two heat reservoirs,'' in Section 5.5.
MP 5.9 How can Delta S be less than zero in any process? Doesn't entropy always increase?
The second law says that the total entropy (system plus surroundings) always increases. (See Section 5.1). This means that either the system or the surroundings can have its entropy decrease if there
is heat transfer between the two, although the sum of all entropy changes must be positive. For an isolated system, with no heat transfer to the surroundings, the entropy must always increase.
MP 5.10 If Delta S = Q/T for a reservoir, could you add Q to any size reservoir and still get the same Delta S?
Yes, as long as the system you were adding heat to fulfilled the conditions for being a reservoir.
MP 5.11 What is the difference between the isothermal expansion of a piston and the (forbidden) production of work using a single reservoir?
The difference is contained in the word sole in the Kelvin-Planck statement of the second law given in Section 5.1 of the notes.
For the isothermal expansion the changes are:
1. The reservoir loses heat Q.
2. The system does work W (equal in magnitude to Q).
3. The system changes its volume and pressure.
4. The system changes its entropy (the entropy increases by Q/T).
For the ``forbidden'' process,
1. The reservoir loses heat Q.
2. The system does work W (= Q) and that's all the changes that there are. I leave it to you to calculate the total entropy changes (system plus surroundings) that occur in the two processes.
MP 5.12 For the ``work from a single heat reservoir'' example, how do we know there is no Delta S?
Our system was the heat reservoir itself. In the example we had heat leaving the reservoir, thus Q was negative and the entropy change of the reservoir was also negative. Using the second law, it is
guaranteed that somewhere else in the surroundings a positive entropy change will occur that is as large or larger than the decrease of the entropy of the reservoir.
MP 5.13 How does a cycle produce zero Delta S? I thought that the whole thing about cycles was an entropy that the designers try to minimize.
The change in entropy during a cycle is zero because we are considering a complete cycle (returning to initial state) and entropy is a function of state (holds for ideal and real cycles!).
The entropy you are referring to is entropy that is generated in the components of a nonideal cycle. For example in a real jet engine we have a non-ideal compressor, a non-ideal combustor and also a
non-ideal turbine. All these components operate with some loss and generate entropy -- this is the entropy that the designers try to minimize. Although the change in entropy during a non-ideal cycle
is zero, the total entropy change (cycle and heat reservoirs!) is positive. Basically the entropy generated due to irreversibilities in the engine is additional heat rejected to the environment (to the lower
heat reservoir). We will discuss this in detail in Section 6.1.
MP 5.14 On the example of free expansion versus isothermal expansion, how do we know that the pressure and volume ratios are the same? We know for each that Delta U = 0 and Delta T = 0.
During the free expansion no work is done and no heat is transferred (insulated system). Thus the internal energy stays constant and so does the temperature. This means that p V = constant holds also for the free
expansion and that the pressure and volume ratios are the same when comparing free expansion to reversible isothermal expansion.
MP 5.15 Where did Delta S = n R ln(V_2/V_1) come from?
We were using the 1st and 2nd law combined (Gibbs) and in the example discussed there was no change in internal energy (dU = 0). If we then integrate T dS = p dV using the ideal gas law p = n R T / V
(with n being the number of moles of gas in volume V and R being the universal gas constant) we obtain Delta S = n R ln(V_2/V_1).
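The result of that integration is easy to evaluate numerically. The sketch below is an illustration of mine, not from the notes:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def entropy_change_ideal_gas(n_mol, v1, v2):
    """Delta S = n R ln(V2/V1) for an ideal gas process with dU = 0."""
    return n_mol * R * math.log(v2 / v1)

# One mole doubling its volume: Delta S = R ln 2. The change is positive
# even for a free expansion, where dQ = 0 -- the process is irreversible.
dS = entropy_change_ideal_gas(1.0, 1.0, 2.0)
print(round(dS, 3))   # 5.763 J/K
```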
C ∗ -algebras arising from group actions on the boundary of a triangle building
Results 1 - 10 of 17
, 2000
"... Building on recent work of Robertson and Steger, we associate a C*-algebra to a combinatorial object which may be thought of as a higher rank graph. This C*-algebra is shown to be isomorphic
to that of the associated path groupoid. Various results in this paper give sufficient conditions on the ..."
Cited by 58 (11 self)
Building on recent work of Robertson and Steger, we associate a C*-algebra to a combinatorial object which may be thought of as a higher rank graph. This C*-algebra is shown to be isomorphic to
that of the associated path groupoid. Various results in this paper give sufficient conditions on the higher rank graph for the associated C*-algebra to be: simple, purely infinite and AF. Results
concerning the structure of crossed products by certain natural actions of discrete groups are obtained; a technique for constructing rank 2 graphs from “commuting” rank 1 graphs is given.
- Math. Proc. Royal Irish Acad
Cited by 24 (9 self)
Abstract. We begin the study of a new class of operator algebras that arise from higher rank graphs. Every higher rank graph generates a Fock space Hilbert space and creation operators which are
partial isometries acting on the space. We call the weak operator topology closed algebra generated by these operators a higher rank semigroupoid algebra. A number of examples are discussed in
detail, including the single vertex case and higher rank cycle graphs. In particular the cycle graph algebras are identified as matricial multivariable function algebras. We obtain reflexivity for a
wide class of graphs and characterize semisimplicity in terms of the underlying graph. In [22] Kumjian and Pask introduced k-graphs as an abstraction of the combinatorial structure underlying the
higher rank graph C ∗-algebras of Robertson and Steger [31, 32]. A k-graph generalizes the set of finite paths of a countable directed graph when viewed as a partly defined multiplicative semigroup
with vertices considered as degenerate paths. The C ∗-algebras associated with k-graphs include k-fold tensor products of graph C ∗-algebras, and much more [2, 21, 26, 27, 30]. On the other hand, as
a generalization of the nonselfadjoint free semigroup
, 2000
Cited by 14 (2 self)
Building on recent work of Robertson and Steger, we associate a C*-algebra to a combinatorial object which may be thought of as a higher rank graph. This C*-algebra is shown to be isomorphic to that of the associated path groupoid. Various results in this paper give sufficient conditions on the higher rank graph for the associated C*-algebra to be: simple, purely infinite and AF. Results concerning the structure of crossed products by certain natural actions of discrete groups are obtained; a technique for constructing rank 2 graphs from "commuting" rank 1 graphs is given.
- Canad. J. Math
Cited by 13 (2 self)
Abstract. Let Γ be a torsion free lattice in G = PGL(3, F) where F is a nonarchimedean local field. Then Γ acts freely on the affine Bruhat-Tits building B of G and there is an induced action on the
boundary Ω of B. The crossed product C ∗-algebra A(Γ) = C(Ω) ⋊Γ depends only on Γ and is classified by its K-theory. This article shows how to compute the K-theory of A(Γ) and of the larger class of
rank two Cuntz-Krieger algebras. 1.
Cited by 12 (3 self)
Abstract. To a higher rank directed graph (Λ, d), in the sense of Kumjian and Pask [16], one can associate natural noncommutative analytic Toeplitz algebras, both weakly closed and norm closed. We
introduce methods for the classification of these algebras in the case of single vertex graphs. 1.
, 2006
Cited by 11 (4 self)
We provide inverse semigroup and groupoid models for the Toeplitz and Cuntz-Krieger algebras of finitely aligned higher-rank graphs. Using these models, we prove a uniqueness theorem for the
Cuntz-Krieger algebra.
, 2002
Cited by 8 (1 self)
Abstract. Given a row-finite k-graph Λ with no sources we investigate the K-theory of the higher rank graph C ∗-algebra, C ∗ (Λ). When k = 2 we are able to give explicit formulae to calculate the
K-groups of C ∗ (Λ). The K-groups of C ∗ (Λ) for k > 2 can be calculated under certain circumstances. We state that for all k, the torsion-free rank of K0(C ∗ (Λ)) and K1(C ∗ (Λ)) are equal when C ∗
(Λ) is unital, and we determine the position of the class of the unit of C ∗ (Λ) in K0(C ∗ (Λ)). 1.
- Proceedings of the SFB Workshop on C∗-algebras (Münster, March 8–12, 1999), 182–202, Springer-Verlag, 2000, MR 2001j:46082
Cited by 7 (2 self)
Abstract. Let Γ be a group of type rotating automorphisms of an affine building B of type Ã2. If Γ acts freely on the vertices of B with finitely many orbits, and if Ω is the (maximal) boundary of B,
then C(Ω) ⋊ Γ is a p.i.s.u.n. C ∗-algebra. This algebra has a structure theory analogous to that of a simple Cuntz-Krieger algebra and is the motivation for a theory of higher rank Cuntz-Krieger
algebras, which has been developed by T. Steger and G. Robertson. The K-theory of these algebras can be computed explicitly in the rank two case. For the rank two examples of the form C(Ω) ⋊ Γ which
arise from boundary actions on Ã2 buildings, the two K-groups coincide.
- MR1961175 (2004f:46068), Zbl pre01925883
Cited by 5 (2 self)
Abstract. We consider the higher-rank graphs introduced by Kumjian and Pask as models for higher-rank Cuntz-Krieger algebras. We describe a variant of the Cuntz-Krieger relations which applies to
graphs with sources, and describe a local convexity condition which characterises the higher-rank graphs that admit a nontrivial Cuntz-Krieger family. We then prove versions of the uniqueness
theorems and classifications of ideals for the C ∗-algebras generated by Cuntz-Krieger families. 1.
, 2003
Cited by 4 (2 self)
Abstract. k-graphs are higher-rank analogues of directed graphs which were first developed to provide combinatorial models for operator algebras of Cuntz-Krieger type. Here we develop a theory of the
fundamental groupoid of a k-graph, and relate it to the fundamental groupoid of an associated graph called the 1-skeleton. We also explore the failure, in general, of k-graphs to faithfully embed
into their fundamental groupoids. 1.
Wolfram Demonstrations Project
Current-Voltage Characteristics of a Memristor
A memristor (short for memory resistor) is the newly discovered (fourth) fundamental circuit element, the first three being the resistor, capacitor, and inductor. It was realized in 2008, some 37 years after its existence was postulated in 1971. This Demonstration shows the I-V curves for a memristor, modelled as two regions of different resistivities in series, and their dependence on the period T of the applied AC voltage and the initial fraction of the low-resistivity region. The unusual I-V curves show hysteresis that increases with T and leads to a large region of negative differential conductance.
Starting from the origin, the blue and red curves trace the I-V characteristic over successive portions of the driving period. The black dots mark I-V values at temporally equidistant points over the interval shown.
Snapshot 1: typical hysteresis loop; the fact that I = 0 iff V = 0 implies that a memristor is a purely dissipative element
Snapshot 2: typical I-V at low frequencies (high T) shows large regions of negative differential conductance; this behavior is also induced by increasing the initial fraction of the low-resistivity region
Snapshot 3: the hysteresis is suppressed at high frequencies because the fraction of the low-resistivity region does not change significantly over the period for small T
Analytical results underlying this Demonstration, as well as references to the original experimental (2008) and theoretical (1971) papers, can be found at the Demonstration's source page.
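The two-region series model described above can be sketched numerically. The following is a minimal, illustrative simulation of a linear dopant-drift memristor; the function name, parameter values, and the mobility-like constant k are assumptions for the sketch, not the Demonstration's actual parameters:

```python
import math

def simulate_memristor(period=1.0, x0=0.1, v0=1.0, r_on=100.0,
                       r_off=16e3, k=200.0, steps=20000):
    """Euler integration of a linear dopant-drift memristor driven by a
    sinusoidal voltage. x is the fraction of the device occupied by the
    low-resistivity region; the two regions act as resistors in series."""
    dt = period / steps
    x = x0
    vs, cs = [], []
    for n in range(steps):
        v = v0 * math.sin(2 * math.pi * n * dt / period)
        r = r_on * x + r_off * (1.0 - x)  # series combination of the two regions
        i = v / r
        # State drifts with the charge that has flowed; clamp to the device.
        x = min(max(x + k * r_on * i * dt, 0.0), 1.0)
        vs.append(v)
        cs.append(i)
    return vs, cs
```

Plotting the current against the voltage yields the pinched hysteresis loop: the current at a given voltage differs between the rising and falling halves of the cycle, and the loop always passes through the origin.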
Calculus 1
Use this tool to understand how the research skills contained in the course outcomes, or other indicators, align with developmental stages of information literacy and critical thinking. See how PCC
Librarians can help support the integration of research concepts and skills at levels appropriate to the course outcomes. This guide was prepared by the PCC librarians for Calculus 1 (MTH 251)
Research support framework
The following shows where Calculus 1 fits into the Research Support Framework developed by PCC Librarians.
Information literacy developmental stage:
• Are the qualities of the results what I need?
• Can these information sources help support my thesis?
Cognitive domains and information literacy outcomes:
• Directed searching in discipline-specific contexts
• Connecting sources and integrating them
Library support of CCOGs
These are the course outcomes and other indicators which require library support.
Intended course outcomes related to information literacy:
• Work with derivatives and limits in various situations and use correct mathematical terminology, notation, and symbolic processes in order to engage in work, study, and conversation on topics
involving derivatives and limits with colleagues in the field of mathematics, science or engineering.
• Enjoy a life enriched by exposure to Calculus.
Course integrated research support
These are the ways that the librarians can support information literacy achievement for the students in this course.
Corresponding information literacy outcomes:
1. Gather credible information sources
2. Use search strategies familiar to mathematicians
3. Locate historical development of principles of calculus as well as current, scholarly conversations
4. Locate solutions of calculus problems
Information literacy instructional objectives:
1. Define general indicators of credibility
2. Compare and evaluate open web and sources from databases for factualness and validity
3. Identify and use controlled vocabularies in searching
4. Use specialized search engines, library databases and monograph (book) browsing to locate sources that mathematicians would use
Bridging competencies:
• Define a research topic in a hierarchical fashion, broad to specific
• Differentiate between databases for relevance to topic
• Differentiate key words from subject headings
• Use reference tools for building background knowledge
Recommended tools and guides:
Library Assignment Ideas:
Browse book stacks at 510.92 for biographies of mathematicians
Search online catalog: calculus history, or, calculus biography, for e-books or books to borrow
Given lists of famous mathematicians, look up their names in Science Direct database for references to the influence of their work
On the Threshold
Where the Hard Problems Are
The new algorithm weaves together threads from at least three disciplines: mathematics, computer science and physics. The theme that binds them all together is the presence of sudden transitions from
one kind of behavior to another.
The mathematical thread begins in the 1960s with the study of random graphs, initiated by Paul Erdos and Alfred Rényi. In this context a graph is not a chart or plot but a more abstract mathematical
structure—a collection of vertices and edges, generally drawn as a network of dots (the vertices) and connecting lines (the edges). To draw a random graph, start by sprinkling n vertices on the page,
then consider all possible pairings of the vertices, choosing randomly with probability p whether or not to draw an edge connecting each pair. When p is near 0, edges are rare, and the graph consists
of many small, disconnected pieces, or components. As p increases, the graph comes to be dominated by a single "giant" component, which includes most of the vertices. The existence of this giant
component is hardly a surprise, but the manner in which it develops is not obvious. The component does not evolve gradually as p increases but emerges suddenly when a certain threshold is crossed.
The threshold is defined by the mean degree of the graph: the giant component emerges once np, the expected number of edge endpoints per vertex, exceeds 1.
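The sudden emergence of the giant component is easy to observe in simulation. A minimal sketch (function name and sizes are illustrative, not from the article): sample a sparse random graph with a given mean degree and measure its largest connected component with a union-find.

```python
import random
from collections import Counter

def largest_component_fraction(n, c, seed=0):
    """Sample a sparse random graph on n vertices with mean degree c
    (about c*n/2 edges) and return |largest component| / n."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for _ in range(int(c * n / 2)):        # expected edge count of G(n, p=c/n)
        a, b = rng.randrange(n), rng.randrange(n)
        parent[find(a)] = find(b)          # union the two endpoints
    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n
```

Below the threshold (mean degree 0.5) the largest component remains a vanishing fraction of the vertices; above it (mean degree 2) a single component absorbs most of the graph.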
In computer science, a similar threshold phenomenon came to widespread attention in the early 1990s. In this case the threshold governs the likelihood that certain computational problems have a solution. One of these problems comes straight out of graph theory: It is the k-coloring problem, which asks you to paint each vertex of a graph with one of k colors, under the rule that two vertices joined by an edge may not have the same color. Finding an acceptable coloring gets harder as edges are added; below a critical edge density almost all graphs are k-colorable, and above this threshold almost none are. Moreover, the threshold affects not only the existence of solutions but also the difficulty of finding them. The computational effort needed to decide whether a graph is k-colorable has a dramatic peak near the critical value — hence the phrase "where the really hard problems are."
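The k-coloring decision problem is easy to state in code; a minimal backtracking sketch (an illustrative, worst-case exponential-time procedure — exactly the kind of search whose cost blows up near the threshold, and not any specific algorithm from the article):

```python
def k_colorable(n, edges, k):
    """Decide k-colorability of a graph by simple backtracking.
    n: number of vertices (0..n-1); edges: list of (u, v) pairs."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n  # -1 means "not yet colored"

    def assign(v):
        if v == n:
            return True
        for c in range(k):
            # c is legal if no already-colored neighbor uses it
            if all(color[w] != c for w in adj[v]):
                color[v] = c
                if assign(v + 1):
                    return True
        color[v] = -1  # backtrack
        return False

    return assign(0)
```

For example, the complete graph on four vertices needs four colors, while a five-cycle (an odd cycle) needs exactly three.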
Physicists also know something about threshold phenomena; they call them phase transitions. But are the changes of state observed in random graphs and in constraint-satisfaction problems truly
analogous to physical events such as the freezing of water and the onset of magnetization in iron? Or is the resemblance a mere coincidence? For a time there was controversy over this issue, but it's
now clear that the threshold phenomena in graphs and other mathematical structures are genuine phase transitions. The tools and techniques of statistical physics are ideally suited to them. In
particular, the k-coloring problem can be mapped directly onto a model of a magnetic system in solid-state physics. The survey-propagation algorithm draws on ideas developed originally to describe
such physical models.
Phase Transformations In Metals And Alloys
PHASE TRANSFORMATIONS IN METALS AND ALLOYS 329 too rigidly. It is to be emphasized that a phase transition can involve a structural change, a compositional change, or both.
Phase Transformations in Metals and Alloys SECOND EDITION D.A. Porter Rautaruukki Oy Research Centre Raahe Finland K.E. Easterling Formerly School of Engineering
Solidification of Metals The solidification of metals and their alloys is an important industrial process. Not only do structural alloys start with the
1 Phase Transformations in Titanium Alloys: An Assessment of Our Current Understanding Jim Williams Monash University Melbourne, Australia 13 February 2013
Title: Phase Transformations in Metals and Alloys, Third Edition (Revised Reprint) Subject: Phase Transformations in Metals and Alloys, Third Edition (Revised Reprint)
Phase Transformations in Metals Isothermal transformation diagrams Continuous cooling diagram Mechanical behavior of iron-carbon alloys ... • Developing the microstructure of either single-phase or
two-phase alloys typically involves a phase transformation.
Course Syllabus EMA 6136: Diffusion, Kinetics, and Transport Phenomena Spring 2014 1. Catalog Description: Credits: 3 Physical basis, equation, and theories of diffusion, tracer, chemical,
multicomponent, and
Phase Transformations in Metals and Alloys, Third Edition [Revised Reprint] pdf - Kenneth E. Easterling
Mechanisms of Diffusional Phase Transformations in Metals and Alloys, Hubert I. Aaronson, Taylor & Francis Group, 2010, 1420062999, 9781420062991, 667 pages.
Phase Transformations in Metals and Alloys THIRD EDITION DAVID A. PORTER, KENNETH E. EASTERLING, and MOHAMED Y. SHERIF, CRC Press, Taylor & Francis Group
5 Solid-State Phase Transformations: Solid-state phase transformations play an important role in the development of the structure and properties of metals and alloys. Polymorphic transformations.
Northeastern University Kinetics of Phase Transformation 3 Phase transformations in metals/alloys occur by nucleation and growth.
Phase Transformations in Metals 10.1 Introduction We may use time and temperature dependencies to modify some phase diagrams to develop phase ... which is fundamental to the development of
microstructure in steel alloys. Upon cooling, ...
phase transformations are affected by magnetic fields, and ... of materials. It is also expected that new properties may be developed in materials by applying mag-netic fields during phase
transformations. Iron-based alloys ... paper on the enhancement of phenomena in metals by
• Phase transformations in Fe-C alloys • Isothermal Transformation Diagrams • Mechanical Behavior • Tempered Martensite ... Chapter 10, Phase Transformations in Metals University of Tennessee, Dept. of
Materials Science and Engineering 24 Summary • Alloy steel • Athermal transformation • Bainite
Experiment C: Phase Transformations and Age Hardening in Non Ferrous Alloys Introduction: This lab is divided into two parts: ... Pure metals are single-phase. Alloys can be single-phase but more
often than not consist of more than one phase.
Phase Transformations in Metals and Alloys SECOND EDITION D.A. Porter Rautaruukki Oy Research Centre Raahe Finland K.E. Easterling Formerly School of Engineering
Solid Phase Transformations Description: Special topic volume with invited papers only. This special-topic book, devoted to “Solid Phase Transformations”, ... Properties and Phase Transformations of
Metals and Alloys K. Masuda-Jindo, ...
Phase transformations in Ag70.5Cu26.5Ti3 filler alloy during brazing processes Sviatoslav HIRNYJ 1*, J. Ernesto INDACOCHEA 2 ... Alloys , ASM International, Metals Park, OH, 1987. [22] J.K.
Kivilahti, F.J.J. van Loo, Mater. Sci. Forum
Phase Transformations 1: Amorphous Metallic Alloys. Amorphous metallic alloys or metallic glasses have emerged as a new class of engineering materials after vitrification of metallic alloys by using
Chapter 10: Phase Transformations in Metals (10.1-2, 10.5-9) ... • Phase transformations involve alteration of the microstructure – Diffusion dependent transformations – Diffusionless transformations
1. ... Fe-C alloys Create PDF files ...
Phase Transformations in the Heat Treated . and Untreated Zn-Al Alloys . ... Microstructure changes and phase transformations of Zn-Al based alloys have been systematically studied, using XRD, ...
Primary metals were used to melt of Zn-(4, 8, 12, 22, 27) % Al
Undercooling and phase transformations in Cu-based alloys Stefano Curiotto Relatore Coordinatore ... and cobalt which has the highest Curie temperature of all transition metals. ... rial depend on
the phase transformations, in particular, as regards Cu-based alloys, ...
PHASE TRANSFORMATIONS IN ALLOYS H. Bakker, G. F. Zhou and H. Yang Van ... metals, where a typical ... Phase Transformations in Compounds with the B82 Structure and Related Orthorhombic Structure
Accelerated Diffusion and Phase Transformations in CoCu Alloys Driven by the Severe Plastic Deformation Boris B. Straumal1,2,3,4, ... and Defect Interactions in Metals, eds by J. I. Takamura, M.
Doyama, M. Kiritani, (The University of Tokyo Press, Tokyo, 1982) p. 554.
Solidification and solid state transformations of Allvac ... Metals & Materials Society), 2005. be addressed in this study and the results must be preliminary in nature. ... Figure 10. Formation of G
Phase in Alloys 718 and 718Plus™.
Ferrous Metals and Alloys 2-5 Phase Equilibrium Diagram for Iron and Iron Carbide Phase is a form of material having characteristics ... Phase Transformations The following phase transformations
occur with iron-carbon alloys:
The Theory of Transformations in Metals and Alloys ... metals and alloys, and corresponding marked variations in physical and chemical properties. ... Part II Growth from the vapour phase.
Solidification and melting. Polymorphic Changes.
MSE 2090: Introduction to Materials Science Chapter 10, Phase Transformations 6 Homogeneous nucleation solid liquid Is the transition from undercooled liquid to a solid
Isothermal Phase Transformation of U-Zr-Nb Alloys for Advanced Nuclear Fuels ... For the phase transformations in isothermal conditions, ... J.W. The Theory of Transformations in Metals and Alloys,
Pergamon, 2002. [7] Vandermeer, R.A.
Phase Transformations and Microstructural Evolution of Mo-Bearing Stainless Steels ... Although high Mo nickel-base filler metals can be used to help mitigate this problem, microsegre- ... where
ferrite was still a stable phase. Because alloys of the F mode still contained some of the austenite
Phase Transformations in Metals and Alloys 3rd Ed., by Porter & Easterling; 2. Physical Metallurgy 4th Ed. ... Review of diffusional phase transformations: Nucleation in solids, thermally activated
growth, solute transport in precipitation, ...
... Phase Transformations and Microstructure Evolution (for Slow Cooling) ... micrographs of pure metals after solidification, showing equiaxed grain ... Cooling through a Two Phase Region Alloys
show a freezing range, ...
substrates: phase transformations, wetting, ... chemical aggressivity of rare metals is known. Based on ... vation of the eutectic and monotectic phase transformations in the Gd–Ti alloys as well as
of the high-temperature
Phase transformations Transmission electron microscopy EDS 1. ... was melted from pure metals (>99.9%) in an arc furnace with ... involved in two-phase TiAl-based alloys—I. Lamellar structure
formation. Acta Mater 1996;44:343–52.
METALLOGRAPHIC INVESTIGATIONS OF PHASE TRANSFORMATIONS DURING SOLID-HDDR PROCESS IN FERROMAGNETIC ALLOYS BASED ON Dd2Fe14B ... mixture of rare-earth metals (Dd)-rich phase grain boundary (Fig. 1а).
... ferromagnetic alloys (R is mixture of Nd, Pr, Ce, La, Dy and others).
Phase Transformation in Metals Phase transformation goes through two stages ... other transformations is not possible with out reheating to form austenite ... phase is coarse and this leads to less
phase boundary area. Of all steel alloys, ones containing spheroidite are the softest and the ...
Phase Transformations in Metals and Alloys, D. A. Porter and K. E. Easterling, Chapman & Hall, 1993. Topics covered: 1) Solution Thermodynamics 7) Nucleation and Growth 2) Phase Diagrams 8)
Solidification 3) Diffusion ...
*Phase Transformations in Metals and Alloys 3rd Ed., by Porter & Easterling 2 ... Solidification of pure metals, solid solution alloys and eutectics. ...
CHAPTER 10- PHASE TRANSFORMATIONS IN METALS PROBLEM SOLUTIONS 10.6 For this problem, we are given, for the recrystallization of aluminum, two values of y and two values of the corresponding times,
and are asked to determine the fraction recrystallized after a total time of
Phase Transformations in -Type Ti-Mo Alloys E. Sukedai, M. Shimoda*, H. Nishizawa* and Y. Nako* Faculty of Engineering, Okayama University of Science, Okayama 700-0005, Japan ... J. Japan Inst.
Metals 50 (1986) 893–899. 13) H.FujiiandH.Suzuki: ...
The phase transformations during continuous cooling of Ti6Al7Nb alloy from the two-phase α+β range 1. ... Titanium alloys in the 1990’s, The Mineral, Metals and Materials Society, Warrendale, 1993,
3-14. [5] J. Marciniak, Metallic biomaterials-directions and
All phase transformations occur to lower the total energy of the system. ... 2.5 J.W. Christian, The Theory of Transformations in Metals and Alloys, Pergamon Press, 1965 5342_ch02_6111.indd 38 3/2/12
12:23:01 PM. Chapter 2: ...
including phase transformations and the theory of strength. N. A. ... Physics, Ural Branch, RAS. He analyses properties of ordered alloys and intermetallics. L. A. Rodionova is a Senior Researcher at
the Institute of Metal ... Nonferrous Metals Processing Plant.
Porter, D.A., and Easterling, K.E., Phase Transformations in Metals and Alloys , 2nd edition, Chapman & Hall, 1992. Title Aluminium Alloys-system Author: Administrator Created Date:
tory, Metals and Ceramics Division, Oak Ridge National Laboratory, Oak Ridge, ... applicable to a wide range of alloys in which sigma phase is found. ... step in most, if not all, sigma phase
Materials Transactions, Vol. 43, No. 10 (2002) pp. 2593 to 2599 c 2002 The Japan Institute of Metals Phase Transformations in Al87Ni7Ce6 and Al87Ni7Nd6 Amorphous Alloys
Phase Transformations in Metals and Alloys Third Edition, CRC Press (2009). 2. V Raghavan Solid state phase transformations First edition, Prentice Hall of India Pvt. Ltd., 1992. 2.1 Additional Texts
1. Robert E Reed-Hill Physical Metallurgy Principles
Faculty of Metals Engineering and Industrial Computer Science, AGH University of Science and Technology, Al. Mickiewicza 30, ... Keywords: Metallic alloys; Kinetic phase transformations of
undercooled austenite; CCT diagrams; Model alloys
Phase Transformations in Metals and Alloys, D.A. Porter and K.E. Easterling, Chapman and Hall, London, 1992. References: Several published papers will be discussed during the quarter. These will be
available in the library or through interlibrary loan.
ISRN Geometry
Volume 2012 (2012), Article ID 970682, 18 pages
Research Article
Curvature Properties and η-Einstein (k, μ)-Contact Metric Manifolds
Department of Mathematics, Bangalore University, Central College Campus, Bangalore 560 001, India
Received 16 September 2012; Accepted 2 October 2012
Academic Editors: G. Martin, C. Qu, and A. Viña
Copyright © 2012 H. G. Nagaraja and C. R. Premalatha. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We study curvature properties in (k, μ)-contact metric manifolds. We give the characterization of η-Einstein (k, μ)-contact metric manifolds with associated scalars.
1. Introduction
The class of (k, μ)-contact manifolds [1] is of interest as it contains both the classes of Sasakian and non-Sasakian cases. The contact metric manifolds for which the characteristic vector field ξ belongs to the (k, μ)-nullity distribution are called (k, μ)-contact metric manifolds. Boeckx [2] gave a classification of (k, μ)-contact metric manifolds. Sharma [3], Papantoniou [4], and many others have made an investigation of (k, μ)-contact metric manifolds. A special class of (k, μ)-contact metric manifolds called N(k)-contact metric manifolds was studied by authors [5, 6] and others. In this paper we study (k, μ)-contact metric manifolds by considering different curvature tensors on them (Table 1). We characterize η-Einstein (k, μ)-contact metric manifolds with associated scalars by considering symmetry, φ-symmetry, semisymmetry, φ-recurrent, and flat conditions on (k, μ)-contact metric manifolds. The paper is organized as follows: In Section 2, we give some definitions and basic results. In Section 3, we consider conharmonically symmetric, conharmonically semisymmetric, ξ-conharmonically flat, φ-conharmonically flat, and φ-recurrent (k, μ)-contact metric manifolds and we prove that such manifolds are η-Einstein or η-parallel or cosymplectic depending on the conditions. In Section 4, we prove that a ξ-conformally flat (k, μ)-contact metric manifold reduces to an N(k)-contact metric manifold if and only if it is an η-Einstein manifold. Further we prove that conformally Ricci-symmetric and φ-conformally flat (k, μ)-contact metric manifolds are η-Einstein. In Section 5, we prove that pseudoprojectively symmetric and pseudoprojectively Ricci-symmetric (k, μ)-contact metric manifolds are η-Einstein. In Section 6 we consider Ricci-semisymmetric (k, μ)-contact metric manifolds and prove that such manifolds are η-Einstein. In all the cases where a (k, μ)-contact metric manifold is an η-Einstein manifold, we obtain the associated scalars in terms of k and μ.
2. Preliminaries
A (2n+1)-dimensional C^∞-differentiable manifold M is said to admit an almost contact metric structure (φ, ξ, η, g) if it satisfies the following relations [7, 8]:

φ² = −I + η ⊗ ξ,  η(ξ) = 1,  φξ = 0,  η ∘ φ = 0,
g(φX, φY) = g(X, Y) − η(X)η(Y),  g(X, ξ) = η(X),

where φ is a tensor field of type (1,1), ξ is a vector field, η is a 1-form, and g is a Riemannian metric on M. A manifold equipped with an almost contact metric structure is called an almost contact metric manifold. An almost contact metric manifold is called a contact metric manifold if it satisfies g(X, φY) = dη(X, Y) for all vector fields X, Y.
The (1,1) tensor field h defined by 2h = £_ξ φ, where £ denotes the Lie differentiation, is a symmetric operator and satisfies hφ = −φh, hξ = 0, and tr h = 0. Further we have ∇_X ξ = −φX − φhX, where ∇ denotes the Riemannian connection of g.
The (k, μ)-nullity distribution of a contact metric manifold M is a distribution [1]

N(k, μ): p ↦ N_p(k, μ) = {Z ∈ T_p M : R(X, Y)Z = k[g(Y, Z)X − g(X, Z)Y] + μ[g(Y, Z)hX − g(X, Z)hY]}

for any vector fields X and Y on M.
Definition 2.1. A contact metric manifold is said to be (i) Einstein if S(X, Y) = λ g(X, Y), where λ is a constant and S is the Ricci tensor, (ii) η-Einstein if S(X, Y) = a g(X, Y) + b η(X)η(Y), where a and b are smooth functions.
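In the η-Einstein case, the two smooth functions are pinned down by contraction; a short derivation, using the standard identity S(ξ, ξ) = 2nk on a (2n+1)-dimensional (k, μ)-contact metric manifold (assumed from the literature), sketches how the associated scalars are expressed through the scalar curvature r:

```latex
% Contracting S = a g + b \eta \otimes \eta on a (2n+1)-dimensional manifold:
\begin{align*}
  S(X,Y) &= a\,g(X,Y) + b\,\eta(X)\eta(Y), \\
  r &= (2n+1)\,a + b && \text{(trace over an orthonormal frame)}, \\
  S(\xi,\xi) &= a + b = 2nk && \text{(put } X = Y = \xi\text{)}, \\
  \text{hence}\quad a &= \frac{r - 2nk}{2n}, \qquad b = \frac{2nk(2n+1) - r}{2n}.
\end{align*}
```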
A contact metric manifold with ξ belonging to the (k, μ)-nullity distribution is called a (k, μ)-contact metric manifold. In a (k, μ)-contact metric manifold, we have

R(X, Y)ξ = k[η(Y)X − η(X)Y] + μ[η(Y)hX − η(X)hY].

If k = 1 and μ = 0, then the manifold becomes Sasakian [1], and if μ = 0, then the notion of (k, μ)-nullity distribution reduces to the k-nullity distribution [9]. If k = μ = 0, then the (k, μ)-contact metric manifold is locally isometric to the product E^{n+1} × S^n(4). In a (2n+1)-dimensional (k, μ)-contact metric manifold, we have the following [1]:

h² = (k − 1)φ²,
S(X, ξ) = 2nk η(X),
S(X, Y) = [2(n − 1) − nμ] g(X, Y) + [2(n − 1) + μ] g(hX, Y) + [2(1 − n) + n(2k + μ)] η(X)η(Y),
r = 2n(2n − 2 + k − nμ),

where Q is the Ricci operator and r is the scalar curvature of M.
Throughout this paper denotes (2n+1)-dimensional -contact metric manifold.
3. Conharmonic Curvature Tensor in N(k)-Contact Metric Manifolds
The conharmonic curvature tensor K in M is given by [10]. An N(k)-contact metric manifold is said to be (1) conharmonically symmetric if ∇K = 0, (2) conharmonically semisymmetric if R·K = 0.
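In its standard form for a (2n+1)-dimensional Riemannian manifold, the conharmonic curvature tensor reads:

```latex
K(X,Y)Z = R(X,Y)Z - \frac{1}{2n-1}\bigl[S(Y,Z)X - S(X,Z)Y + g(Y,Z)QX - g(X,Z)QY\bigr],
```

where R is the curvature tensor, S the Ricci tensor, and Q the Ricci operator.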
3.1. Conharmonically Symmetric N(k)-Contact Metric Manifolds
Differentiating (3.1) covariantly, we obtain (3.4). If M is conharmonically symmetric, then, from (3.4), we obtain (3.5). Differentiating (2.6) covariantly and using (2.4), we obtain (3.6). Differentiating (2.10) covariantly and using (2.11) and (2.4), we have (3.7). From (3.7), we obtain (3.9). Substituting in (3.5) and using (3.6), (3.7), and (3.9), we obtain (3.10). Contracting (3.10) and using (2.1), we obtain (3.12). From (3.12), we get either (3.13) or (3.14). Evaluating (3.13) with the help of (2.1), and then (3.14), we obtain (3.15). Since the relevant factor is nonzero, from (3.15) it follows that the equivalence stated next holds.
In view of (2.4), the above equation gives that M reduces to a cosymplectic manifold. Thus we have that M is cosymplectic if and only if this condition holds.
Further, from (2.10) and Definition 2.1, we have that M is η-Einstein if and only if the same condition holds. Thus we have the following.
Theorem 3.1. In a conharmonically symmetric N(k)-contact metric manifold M, the following statements are equivalent: (1) M is cosymplectic; (2) M is η-Einstein; (3) the condition above holds.
3.2. Conharmonically Semisymmetric N(k)-Contact Metric Manifolds
Suppose that M is conharmonically semisymmetric. Then from (3.3), we have (3.17). Using (2.5), (2.9), (2.6), (2.10), and (2.11) in (3.17), we get (3.18). Taking suitable arguments and using (2.9), (2.6), and (2.11), we obtain (3.19). That is, M is an η-Einstein manifold.
Conversely, suppose that relation (3.19) holds in M. Then we have (3.20). Using (3.19) in (3.20), we see that M is conharmonically semisymmetric. Thus we have the following.
Theorem 3.2. An N(k)-contact metric manifold is conharmonically semisymmetric if and only if it is η-Einstein with the associated scalars obtained above.
3.3. φ-Conharmonically Flat N(k)-Contact Metric Manifolds
Suppose M is φ-conharmonically flat, that is, g(K(φX, φY)φZ, φW) = 0 for all vector fields X, Y, Z, W. Then from (3.1), we obtain (3.21). Let {e_1, …, e_{2n}, ξ} be a local orthonormal basis of the tangent space at each point of M. Then in M, the relations (3.22) hold. Contracting (3.21) over the basis and summing up from 1 to 2n, we have (3.24). Using (2.13) and (3.22) in (3.24), we obtain (3.25). Replacing the arguments by their φ-images in (3.25) and using (2.1), we have (3.26). Contracting (3.26) over the basis and summing from 1 to 2n, we obtain the scalar curvature.
Substituting this in (3.26) and taking the covariant derivative, we obtain that the Ricci tensor is η-parallel.
Further, substituting in (2.12), we obtain the scalar curvature. Thus, from the above discussion, we can state the following.
Theorem 3.3. In a (2n+1)-dimensional φ-conharmonically flat N(k)-contact metric manifold, the Ricci tensor is η-parallel and the scalar curvature takes the value obtained above.
3.4. ξ-Conharmonically Flat N(k)-Contact Metric Manifolds
Suppose M is ξ-conharmonically flat, that is, K(X, Y)ξ = 0 for all vector fields X, Y.
Then from (3.1), we obtain (3.29). Using (2.9), (2.6), and (2.11) in (3.29), we obtain (3.30). Taking suitable arguments in (3.30) and using (2.1), we obtain (3.31). Contracting (3.31), we obtain (3.32). Replacing the argument in (3.32) and using (2.7) and (2.10), we obtain a further relation. The above equation together with (3.32) yields the η-Einstein form of the Ricci tensor. Hence M reduces to an η-Einstein manifold.
Thus we have the following.
Theorem 3.4. A (2n+1)-dimensional ξ-conharmonically flat N(k)-contact metric manifold is an η-Einstein manifold.
3.5. φ-Recurrent N(k)-Contact Metric Manifolds
A (2n+1)-dimensional N(k)-contact metric manifold is said to be φ-recurrent if and only if there exists a nonzero 1-form A such that φ²((∇_W K)(X, Y)Z) = A(W) K(X, Y)Z for arbitrary vector fields. Differentiating (3.1) covariantly, we obtain (3.37). Suppose M is φ-recurrent. Then from (3.37), we have (3.38). Contracting (3.38), we obtain a scalar relation. Since A is a nonzero 1-form, the curvature factor must vanish.
Using (3.1), the above equation yields (3.40). Using (2.6) and (2.9) in (3.40), we obtain (3.41). Taking suitable arguments in (3.41), we get (3.42). Replacing the argument in (3.42), we obtain (3.43). Replacing the argument in (2.10) and comparing the resulting equation with (3.43), we obtain (3.44).
Using (3.44) in (3.42), we get the η-Einstein relation, with the associated scalars given there.
That is, M is an η-Einstein manifold.
Thus we have the following.
Theorem 3.5. A φ-recurrent N(k)-contact metric manifold is an η-Einstein manifold.
4. Conformal Curvature Tensor in N(k)-Contact Metric Manifolds
The conformal curvature tensor C in M is defined by [11].
Definition 4.1. An N(k)-contact metric manifold is (1) ξ-conformally flat if C(X, Y)ξ = 0, (2) conformally Ricci-symmetric if C·S = 0, (3) φ-conformally flat if g(C(φX, φY)φZ, φW) = 0 for all X, Y, Z, and W.
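In its standard (Weyl) form for a (2n+1)-dimensional manifold, the conformal curvature tensor reads:

```latex
C(X,Y)Z = R(X,Y)Z - \frac{1}{2n-1}\bigl[S(Y,Z)X - S(X,Z)Y + g(Y,Z)QX - g(X,Z)QY\bigr]
+ \frac{r}{2n(2n-1)}\bigl[g(Y,Z)X - g(X,Z)Y\bigr].
```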
4.1. ξ-Conformally Flat N(k)-Contact Metric Manifolds
Suppose that the N(k)-contact metric manifold M is ξ-conformally flat. Then from (4.1), we obtain (4.2). Using (2.1), (2.6), and (2.11) in (4.2), we obtain (4.3). Putting suitable arguments in (4.3) and using (2.1) and (2.10), we obtain (4.4).
Contraction of the above yields (4.5). From (4.5), we conclude that M is η-Einstein, with the associated scalars given there, if and only if it is K-contact.
Thus we have the following.
Theorem 4.2. A ξ-conformally flat N(k)-contact metric manifold reduces to a K-contact metric manifold if and only if it is an η-Einstein manifold.
4.2. Conformally Ricci-Symmetric N(k)-Contact Metric Manifolds
If C·S = 0, then we have (4.7). Taking suitable arguments in (4.7) and using (4.1), (2.6), and (2.9) to (2.12), we obtain (4.8). Evaluating (4.8) further yields a constraint on the parameters, and the degenerate case can be excluded.
Thus, (4.8) reduces to the η-Einstein form, with the associated scalars given there; that is, M reduces to an η-Einstein manifold.
Thus we have the following.
Theorem 4.3. A conformally Ricci-symmetric N(k)-contact metric manifold is an η-Einstein manifold.
4.3. φ-Conformally Flat N(k)-Contact Metric Manifolds
Suppose M is φ-conformally flat, that is, g(C(φX, φY)φZ, φW) = 0 for all vector fields X, Y, Z, and W. Then from (4.1), we obtain (4.12). Let {e_1, …, e_{2n}, ξ} be a local orthonormal basis of the tangent space at each point of M.
Contracting (4.12) over the basis and summing up from 1 to 2n, we obtain (4.13). Using (3.22) in (4.13), we obtain (4.14). Replacing the arguments by their φ-images in (4.14) and using (2.1), we have (4.15), with the associated scalars given there. From the relation (4.15), we conclude that M is an η-Einstein manifold.
Hence we can state the following.
Theorem 4.4. A φ-conformally flat N(k)-contact metric manifold is an η-Einstein manifold with the associated scalars given above.
5. Pseudoprojective Curvature Tensor in N(k)-Contact Metric Manifolds
In M, the pseudoprojective curvature tensor is given by [11], where a and b are nonzero constants, R is the curvature tensor, S is the Ricci tensor, and r is the scalar curvature.
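In its standard form (due to Prasad) for a (2n+1)-dimensional manifold, the pseudoprojective curvature tensor reads:

```latex
\bar{P}(X,Y)Z = a\,R(X,Y)Z + b\,\bigl[S(Y,Z)X - S(X,Z)Y\bigr]
- \frac{r}{2n+1}\left(\frac{a}{2n} + b\right)\bigl[g(Y,Z)X - g(X,Z)Y\bigr],
```

with constants a, b ≠ 0.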
5.1. Pseudoprojectively Symmetric N(k)-Contact Metric Manifolds
Suppose that the pseudoprojective curvature tensor of M is parallel. Then we have (5.2). Taking suitable arguments in (5.2) and using (5.1), (2.5), (2.10), and (2.11), we have (5.3). Contracting the above, we obtain (5.4). Replacing the argument in (5.4) and using (2.7) and (2.10), we obtain a relation with the scalars given there. Substituting back in (5.4), we obtain (5.7), with the associated scalars given there. From relation (5.7), we conclude that M is an η-Einstein manifold.
Hence we can state the following.
Theorem 5.1. A pseudoprojectively symmetric N(k)-contact metric manifold is an η-Einstein manifold.
5.2. Pseudoprojectively Ricci-Symmetric N(k)-Contact Metric Manifolds
If the pseudoprojective curvature tensor annihilates the Ricci tensor, then we have (5.9). Taking suitable arguments in (5.9) and using (5.1), (2.1), (2.5), (2.10), and (2.11), we obtain (5.10), with the scalars given there. Replacing the argument in (5.10) and using (2.7) and (2.10), we obtain a further relation. Now substituting back in (5.10), we obtain (5.13), with the associated scalars given there. From (5.13), we have that M is an η-Einstein manifold.
Hence we can state the following.
Theorem 5.2. An N(k)-contact metric manifold is an η-Einstein manifold if it is pseudoprojectively Ricci-symmetric.
6. Ricci-Semisymmetric N(k)-Contact Metric Manifolds
If a (2n+1)-dimensional N(k)-contact metric manifold is Ricci semisymmetric, then R·S = 0; that is, (6.1) holds. Taking suitable arguments in (6.1) and using (2.5), (2.7), and (2.11), we obtain (6.2), with the scalars given there. Replacing the argument in (6.2) and using (2.7) and (2.10), we obtain a further relation. Then (6.2) reduces to (6.5), with the associated scalars given there. From relation (6.5), we conclude that the manifold is an η-Einstein manifold.
Hence we can state the following.
Theorem 6.1. A Ricci-semisymmetric N(k)-contact metric manifold is an η-Einstein manifold.
1. D. E. Blair, T. Koufogiorgos, and B. J. Papantoniou, “Contact metric manifolds satisfying a nullity condition,” Israel Journal of Mathematics, vol. 91, no. 1–3, pp. 189–214, 1995.
2. E. Boeckx, “A full classification of contact metric (k,μ)-spaces,” Illinois Journal of Mathematics, vol. 44, no. 1, pp. 212–219, 2000.
3. R. Sharma, “Certain results on K-contact and (k,μ)-contact manifolds,” Journal of Geometry, vol. 89, no. 1-2, pp. 138–147, 2008.
4. B. J. Papantoniou, “Contact manifolds, harmonic curvature tensor and (k,μ)-nullity distribution,” Commentationes Mathematicae Universitatis Carolinae, vol. 34, no. 2, pp. 323–334.
5. U. C. De and A. K. Gazi, “On φ-recurrent N(k)-contact metric manifolds,” Mathematical Journal of Okayama University, vol. 50, pp. 101–112, 2008.
6. H. G. Nagaraja, “On N(k)-mixed quasi Einstein manifolds,” European Journal of Pure and Applied Mathematics, vol. 3, no. 1, pp. 16–25, 2010.
7. D. E. Blair, Contact Manifolds in Riemannian Geometry, vol. 509 of Lecture Notes in Mathematics, Springer, Berlin, Germany, 1976.
8. D. E. Blair, Riemannian Geometry of Contact and Symplectic Manifolds, vol. 203 of Progress in Mathematics, Birkhäuser, Boston, Mass, USA, 2002.
9. S. Tanno, “Ricci curvatures of contact Riemannian manifolds,” The Tohoku Mathematical Journal, vol. 40, no. 3, pp. 441–448, 1988.
10. U. C. De and A. A. Shaikh, Differential Geometry of Manifolds, Alpha Science International Ltd, Oxford, UK, 2007.
11. M. M. Tripathi and P. Gupta, “On τ-curvature tensor in k-contact and Sasakian manifolds,” International Electronic Journal of Geometry, vol. 4, no. 1, pp. 32–47, 2011.
coordinates given basis
March 7th 2010, 04:19 PM
coordinates given basis
I am stuck on this problem. I tried multiplying p(x) by B but the answer I got is incorrect; any help is appreciated.
The set B = {-1 + 4x^2, -3 + 4x + 12x^2, -7 + 8x + 36x^2} is a basis for P_2. Find the coordinates of p(x) = -28 + 36x + 144x^2 relative to this basis.
March 7th 2010, 05:28 PM
I am stuck on this problem. I tried multiplying p(x) by B but the answer I got is incorrect; any help is appreciated.
The set B = {-1 + 4x^2, -3 + 4x + 12x^2, -7 + 8x + 36x^2} is a basis for P_2. Find the coordinates of p(x) = -28 + 36x + 144x^2 relative to this basis.
You are looking for $c_1,c_2,c_3$ such that
$c_1(-1+4x^2)+c_2(-3+4x+12x^2)+c_3(-7+8x+36x^2)=-28+36x+144x^2.$
Expanding this out gives
$(-c_1-3c_2-7c_3)\cdot 1+(4c_2+8c_3)\cdot x+(4c_1+12c_2+36c_3)\cdot x^2=-28+36x+144x^2$
Now just solve the system of equations for the needed coefficients.
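As a quick sketch of that last step, the coefficient-matching system can be solved numerically (NumPy used here purely for illustration):

```python
import numpy as np

# Coefficient-matching system read off the expanded equation:
# rows are the coefficients of 1, x, and x^2.
A = np.array([
    [-1.0, -3.0, -7.0],   # constant terms
    [ 0.0,  4.0,  8.0],   # x terms
    [ 4.0, 12.0, 36.0],   # x^2 terms
])
b = np.array([-28.0, 36.0, 144.0])

c = np.linalg.solve(A, b)
print(c)  # coordinates (c1, c2, c3) of p(x) relative to the basis B
```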
March 8th 2010, 11:18 AM
I'm confused: do you mean solve for c1, c2, and c3? Or for 1, x, and x^2? Also, when solving for one coefficient, do I set the others to 0?
Mattapan Algebra Tutor
Find a Mattapan Algebra Tutor
...I have a good command of the material, even well beyond the regents themselves. I've programmed extensively in Mathematica. I used it to analyze my thesis Data for my Ph.D. in physics.
47 Subjects: including algebra 1, algebra 2, reading, chemistry
...I've tutored nearly all the students I've worked with for many years, and I've also frequently tutored their brothers and sisters - also for many years. I enjoy helping my students to
understand and realize that they can not only do the work - they can do it well and they can understand what they're doing. My references will gladly provide details about their own experiences.
11 Subjects: including algebra 1, algebra 2, geometry, precalculus
...No short cuts required - if it makes sense, the student will learn it. I have a math degree from MIT and taught math at Rutgers University for 10 years. I don't just know the material; I know
the student as well. I performed well in my physics courses as an MIT student.
24 Subjects: including algebra 2, chemistry, algebra 1, physics
...I am always surprised by the lack of geographical knowledge that students possess these days, and was not surprised to learn that many Americans cannot locate New York City on a map. I don't
have this problem, however, and if you need a geography tutor, I am your man. As an English minor in college, I have both a personal and academic background with respect to English.
21 Subjects: including algebra 1, English, Spanish, Italian
After working for a number of years as a development chemist and a software engineer in the medical instrumentation field, and serving as a youth soccer coach for a dozen years, I made a
transition to the education field. I am certified to teach middle school and high school math, and high school c...
33 Subjects: including algebra 2, algebra 1, chemistry, calculus | {"url":"http://www.purplemath.com/mattapan_algebra_tutors.php","timestamp":"2014-04-17T11:02:21Z","content_type":null,"content_length":"23883","record_id":"<urn:uuid:ba65c76f-42e4-4eae-944e-89f11f0c0bca>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00125-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exact Solution for the Time-Dependent Temperature Field in Dry Grinding: Application to Segmental Wheels
Mathematical Problems in Engineering
Volume 2011 (2011), Article ID 927876, 28 pages
Research Article
Exact Solution for the Time-Dependent Temperature Field in Dry Grinding: Application to Segmental Wheels
^1Cátedra Energesis de Tecnología Interdisciplinar, Universidad Católica de Valencia, 46002 Valencia, Spain
^2Departamento de Matemáticas, Universidad de Pinar del Río, 20200 Pinar del Río, Cuba
^3Instituto Universitario de Matemática Pura y Aplicada, Universidad Politécnica de Valencia, 46022 Valencia, Spain
Received 21 February 2011; Accepted 1 April 2011
Academic Editor: Ezzat G. Bakhoum
Copyright © 2011 J. L. González-Santander et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
We present a closed analytical solution for the time evolution of the temperature field in dry grinding for any time-dependent friction profile between the grinding wheel and the workpiece. We base our solution on the framework of the Samara-Valencia model (Skuratov et al., 2007), solving the integral equation posed for the case of dry grinding. We apply our solution to segmental wheels that
our solution in the framework of the Samara-Valencia model Skuratov et al., 2007, solving the integral equation posed for the case of dry grinding. We apply our solution to segmental wheels that
produce an intermittent friction over the workpiece surface. For the same grinding parameters, we plot the temperature fields of up- and downgrinding, showing that they are quite different from each
1. Introduction
A major technological challenge in the grinding of metallic plates [1–5] is how to avoid thermal damage to the workpiece. The grinding process transforms large amounts of mechanical energy into heat,
which primarily affects the contact area between the workpiece and the wheel. It is therefore of great industrial importance to determine the temperature distribution within the workpiece, and its
maximum, in order to avoid thermal damage.
Despite the fact that there have been studies of the temperature field solving the heat equation numerically [6, 7], an analytical approach is of great interest [8] for two reasons. Firstly, explicit
expressions for the dependence of the temperature field with respect to the grinding parameters can be obtained. Secondly, the rapid presentation of results allows the industry to monitor the
grinding process on line.
This paper is organized as follows. Section 2 presents the Samara-Valencia model [9]. This model is used in Section 3 to derive a closed analytical solution for the evolution of the temperature field
in dry grinding, for any time-dependent friction profile between the grinding wheel and the workpiece. Section 4 applies the result obtained in the previous section to intermittent grinding, for both
up- and downgrindings. Section 5 analyzes some important variables in continuous grinding, such as the location of the maximum temperature and the relaxation time, which can be applied to
intermittent grinding. We compare also the stationary regime of continuous grinding with the quasistationary regime of intermittent grinding. In Section 6, we present some numerical results,
comparing continuous and intermittent grinding. Our conclusions are summarized in Section 7.
2. Samara-Valencia Model
The Samara-Valencia model setup is depicted in Figure 1. The workpiece moves at a constant speed and is assumed to be infinite along x and y, and semi-infinite along z. The plane z = 0 is the surface being ground. The contact area between the wheel and the workpiece is an infinitely long strip of fixed width located parallel to the y axis on the plane z = 0. Both the wheel and the workpiece are assumed to be rigid. Although the equations below allow for the case of wet grinding, we will consider in this paper the case of dry grinding. The Samara-Valencia model [9] solves the convection heat equation (2.1) subject to the initial condition (2.2) and the boundary condition (2.3). The first term of (2.3) models the application of coolant over the workpiece surface through the heat transfer coefficient. The second term represents the heat flux entering the workpiece. This heat flux is generated on the surface by friction between the wheel and the workpiece. The solution of the Samara-Valencia model (2.1)–(2.3) may be presented as the sum of two terms, (2.4). Notice that the first term, (2.5), contains the friction function, and the second term, (2.6), contains the temperature field on the surface and the heat transfer coefficient.
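As a sketch, in a conventional notation (α is the thermal diffusivity, k₀ the thermal conductivity, v_d the workpiece speed, h the heat transfer coefficient, and q the friction heat flux; these symbols are our labels, and the signs follow the usual convention for a half-space heated at z = 0), the model reads:

```latex
\frac{\partial T}{\partial t} = \alpha\left(\frac{\partial^{2} T}{\partial x^{2}} + \frac{\partial^{2} T}{\partial z^{2}}\right) - v_{d}\,\frac{\partial T}{\partial x},
\qquad T(0, x, z) = 0,
\qquad k_{0}\,\frac{\partial T}{\partial z}\bigg|_{z=0} = h\,T(t, x, 0) - q(t, x).
```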
3. Theorem for Dry Grinding
3.1. Dry Grinding
When no coolant is applied to the workpiece, we can consider the workpiece to be isolated from the environment. According to Newton's cooling law, this means that there is no heat flux from the workpiece to the environment; thus the heat transfer coefficient is zero. In this case of dry grinding, the expression given in (2.6) becomes (3.2). In order to tackle the integral equation given in (3.2), let us define the integral operators (3.3)-(3.5). Then, taking into account (3.3), we may rewrite (2.4) as (3.6).
3.2. The Operator
Let us calculate the operator over the frictional term of the temperature field. According to (2.5), may be expressed as Therefore, substituting (3.7) in (3.5), and reordering the integrals by
Fubini’s theorem, we obtain Expanding the exponent of the integrand given in (3.8), we arrive at The last integral given in (3.9) can be calculated [10, Equation 3.323.2], so that, Once again,
expanding the exponent of the last integrand given in (3.10) and simplifying, we arrive at Let us define We can calculate (3.12) performing the substitution, , and introducing the Heaviside function
, so that, Substituting (3.13) in (3.11) and simplifying, we get
3.3. The Operator
Substituting the expression obtained in (3.14) into (3.4) and reordering the integrals, we have Let us define Since , the integral given in (3.16) can be expressed in the following way: In order to
calculate (3.17), we can perform the following substitutions: , and , leading to Therefore, substituting (3.18) in (3.15) and changing the integration order, we arrive at Remembering the expression
for given in (2.5), we conclude
3.4. Resolution by Successive Approximations
According to (3.2), in order to evaluate the right-hand side, we have to know the temperature field on the surface. At zeroth-order approximation, we can consider that the temperature field is given by the term involving friction only, that is, (3.21), according to (2.5). In order to get the first-order approximation, we can substitute the zeroth order (3.21) in (3.3). Thus, the temperature field at first order follows or, according to (3.22), its equivalent form. In general, the nth approximation is (3.25), where the initial value is given by (3.21). Applying now (3.20) to (3.22), we can rewrite the first-order approximation as (3.26).
In order to evaluate the second order, we can substitute (3.26) in the recurrence equation (3.25). Taking into account that the integral operator is linear, we obtain (3.27), where we have applied (3.20) once again. Repeating the same steps, we get (3.28) at third order. Looking at the coefficients appearing in the first orders, (3.26), (3.27), and (3.28), we may establish the following conjecture for the nth order, (3.29), which can be proved by induction. The temperature field will be the infinite-order approximation; thus, taking the limit of (3.29) results in (3.31). Applying (3.20), we may check that (3.31) is a solution of the integral equation given in (3.6). Taking into account (2.5), we conclude that the time evolution of the temperature field may be expressed as follows.
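The successive-approximation scheme of this subsection is a Picard iteration for a fixed-point equation. As a toy finite-dimensional analogue (an illustration of the convergence mechanism only, not the paper's integral operator), consider u = f + B u with a contractive linear map B:

```python
import numpy as np

# Toy analogue of the recurrence u_{n+1} = f + B[u_n] from this subsection.
# B is a contractive linear map (spectral norm < 1), so Picard iterates
# converge geometrically to the unique fixed point of u = f + B u.
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
B = 0.4 * M / np.linalg.norm(M, 2)     # rescale: spectral norm exactly 0.4
f = rng.standard_normal(n)              # analogue of the friction-only term

u = np.zeros(n)                         # zeroth-order approximation
for _ in range(100):
    u = f + B @ u                       # successive approximations

u_exact = np.linalg.solve(np.eye(n) - B, f)   # direct solve of (I - B) u = f
print(np.max(np.abs(u - u_exact)))            # tiny: geometric convergence
```

Because the spectral norm of B is below one, each iteration shrinks the error by a constant factor, which is the mechanism behind the geometric coefficients observed in (3.26)-(3.28).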
3.5. Uniqueness of the Solution
3.5.1. Bound Limit for ℶ[0]
To prove the uniqueness of the solution of the integral equation (3.6), let us calculate first the value of the operator over a constant. According to (3.5), we have Performing the substitution: , (
3.33) results in Applying (3.35) to (3.4), we have Performing the substitution: , we have Therefore,
Let us consider now a function whose maximum value taking is , that is, Applying to (3.39) and taking into account that is a linear operator, Thus, according to (3.38), Note that, if we apply to (
3.39) and we take into account (3.37), we have So, in general, for all ,
3.5.2. Resolution of the Uniqueness
If and are solutions of (3.6), we have Subtracting (3.45) from (3.44) and taking into account that is a linear operator, Taking in (3.46), Recursive substitution of (3.47) yields If we take in (3.43)
as a function , according to (3.48), we have that, for all , where is the maximum value of . Taking the limit in (3.50), so that, Note that in (3.48) we can exchange labels and , Thus, taking now the
function we obtain that that is, From (3.52) and (3.56), we conclude that both solutions on the surface are equal, Applying to (3.57), and substituting (3.58) in (3.44), we have that Comparing (3.45)
with (3.59), we finally obtain Therefore, the solution given in (3.33) is the only solution of (3.6).
4. Intermittent Grinding
Equation (3.31) is a generalization of the result presented in [11], since now the transient regime is considered and any type of time-dependent friction profile is allowed. In what follows, we will apply (3.31) to calculate the time-dependent temperature field produced by the intermittent grinding of a segmental wheel (Figure 2).
4.1. Intermittence Function
Let us model the friction due to a toothed wheel, which can contact the workpiece only within a fixed zone; we will therefore call this zone the contact zone. Figures 3 and 4 show the friction zone highlighted in red, within its limits, for two different times. The wheel has a spatial period equal to the distance between teeth plus the tooth width. The wheel teeth move over the surface at a speed given by the angular velocity times the wheel radius. When more than two teeth touch the contact zone simultaneously, the friction zone is split, as Figure 5 shows. For a given instant, the incoming heat flux enters the workpiece through the friction zone of each wheel tooth in contact. Notice that several teeth may lie within the contact zone at once, and that the friction limits are time dependent. If the incoming heat flux is constant at every point where friction occurs, we may write the friction function as a sum of Heaviside-function pulses, one per tooth in contact. In order to know the friction limits of the wheel teeth that enter the contact zone, let us define the spatial period. According to Figure 5, the tooth edges are initially distributed over one period, where we have defined a boolean variable in order to encode the rotation sense of the wheel: one value for downgrinding and the other for upgrinding, as Figure 6 shows. If we want a periodic repetition of the friction limits over the period, we may define a periodic wrapping function; imposing the matching conditions, and taking into account the tooth width, the two limit functions follow.
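A minimal sketch of such an intermittence function (the symbols delta, ell, a, L, and v_s are hypothetical labels for the contact-zone width, tooth width, tooth gap, spatial period, and tooth speed; the paper's exact wrapped limit functions are not reproduced):

```python
import numpy as np

# Hypothetical segmental-wheel parameters (illustration only).
delta = 0.01     # contact-zone width [m]
ell   = 0.004    # tooth width [m]
a     = 0.006    # gap between consecutive teeth [m]
L     = a + ell  # spatial period of the tooth pattern [m]
v_s   = 2.0      # tooth speed over the surface [m/s]

def friction_on(x, t, s=+1):
    """1 where a tooth currently rubs the surface, 0 elsewhere.

    s = +1 or -1 selects the sliding direction (down-/upgrinding).
    A point x inside [0, delta] is under a tooth when its position in
    the co-moving tooth pattern falls inside a tooth of width ell.
    """
    x = np.asarray(x, dtype=float)
    inside = (x >= 0.0) & (x <= delta)          # contact zone
    phase = np.mod(x - s * v_s * t, L)          # position within one period
    return (inside & (phase < ell)).astype(float)

# Snapshot of the pulsed heat-flux footprint at t = 0; the duty cycle
# inside the contact zone is roughly ell / L.
x = np.linspace(-0.002, 0.012, 1401)
q = friction_on(x, t=0.0)
```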
4.2. Temperature Field
Substituting (4.2) in (3.33), we obtain (4.9). Let us now evaluate the spatial integral in (4.9). Performing a substitution, and taking into account the properties of the error function, we get (4.12), where an auxiliary function has been defined. Substituting (4.12) in (4.9), we obtain the expression (4.14) for the temperature field.
5. Continuous Grinding
5.1. Stationary Regime
In order to calculate the temperature field for the case of continuous friction, we can take in (4.6) and (4.7) the constant values of the contact-zone limits. Therefore, we can redefine (4.12) accordingly, obtaining, according to (4.13), the corresponding temperature field. The stationary regime is reached when the temperature field does not vary in time. In the case of continuous grinding, the time derivative is (5.4). Taking the limit of (5.4), and knowing that erf(u) → 1 as u → ∞, we can check that the stationary regime is reached when t → ∞, so that the stationary temperature field follows.
5.2. Quasistationary Regime
Notice that intermittent grinding never reaches a stationary regime, since the heat source produced by friction is pulsed. This is not the case for continuous grinding, where the stationary regime is reached asymptotically for t → ∞. Therefore, for continuous grinding, we may define a relaxation time that gives an idea of how rapidly the stationary regime is reached in practice. It turns out that this relaxation time, defined for the continuous case, is a good temporal reference for plotting the temperature field in the case of intermittent grinding. Even though intermittent grinding never reaches a stationary regime, we may define a quasistationary regime in which the temperature field is periodically stable. Since the friction limits are periodic functions, (4.6)-(4.7), according to (4.14) we may define the quasistationary regime by the periodicity of the temperature field. According to Figure 5, the temporal period of the friction function at a fixed point is (5.9). However, the global consideration of the plot indicates a different temporal period. In view of (3.31), we may conclude that the temperature field possesses the same global and point periods as the friction function.
5.3. Maximum Temperature
Since the error function erf is an increasing function for all arguments, we have (5.11). Therefore, the temperature at a given point of the workpiece is a monotonically increasing function of time, (5.12). Equation (5.12) means that the maximum temperature must be reached in the stationary state, t → ∞. Moreover, arguing as in (5.11), we obtain (5.14). Equation (5.14) indicates that the maximum temperature must be located on the surface, z = 0. From (5.12) and (5.14), we conclude that the maximum temperature must be reached on the surface in the stationary regime, (5.15). This result agrees with [11].
5.3.1. Location of the Maximum Temperature
Denoting the stationary temperature field in the case of continuous friction accordingly, according to [12] we have (5.17), where K_0 is the modified Bessel function of the second kind and order zero [13, Section 9.6], the remaining variables are spatial dimensionless coordinates, and a characteristic temperature has been introduced. According to what we have seen in (5.15), the maximum temperature is reached on the surface in the stationary regime. Thus, we have to analyze the maximum of the function given in (5.17) taking the surface value, that is, (5.19). In order to determine the location of the maximum on the surface, let us first calculate the points where (5.19) has a null derivative (extrema points). Such a point satisfies (5.21). When the speed is positive, the workpiece moves as indicated in Figure 1, so from now on we will consider this case. Since the incoming heat flux into the workpiece is a positive magnitude, and since K_0 is positive for positive arguments [13, Section 9.6], the integrand of (5.21) is also positive, thus (5.23) holds.
Location of the Extrema
Assume first that the extremum lies outside the friction zone on one side. Then (5.19) results in (5.24). We may rewrite (5.24) in a form involving K_0. Since K_0 is positive for positive arguments [13, Section 9.6], the required equality cannot hold there. Therefore, we conclude (5.26).
Assume now that the extremum lies outside the friction zone on the other side, so that (5.21) becomes (5.27). Performing a change of variables, (5.27) takes an equivalent form. Due to the integral representation [13, Equation 9.6.24], whose integrand is positive, (5.27) is not satisfied there either, and we conclude (5.30).
Finally, assume that the point lies within the friction zone. Then, according to (5.22), the relevant function takes values of opposite sign at the endpoints of the interval. Since it is a continuous function, Bolzano's theorem guarantees a zero, and hence an extremum, within the friction zone, (5.31).
Uniqueness of the Extremum and Identification as Maximum
Since K_0 is a positive and monotonically decreasing function for positive arguments [13, Section 9.6], the function involved is monotonically decreasing, (5.33).
Therefore, according to (5.33), the extremum is unique, (5.35). On the one hand, according to (5.26), (5.30), and (5.35), the surface temperature has a unique extremum, and this one always occurs within the friction interval. On the other hand, from (5.19) we can see that (5.36) holds. Since the surface temperature is a positive (5.23), continuous, and differentiable function which satisfies (5.36), the only possibility is that the extremum corresponds to a global maximum. Therefore, in order to get the location of the maximum temperature on the surface, it suffices to compute a root of (5.21) within the friction interval, that is, taking (5.31). There is an equivalent, but more elaborate, proof in [14].
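As a numerical illustration of this maximum search (a sketch only: a classical Jaeger-type moving band source on an adiabatic half-plane stands in for (5.17), and the parameters q, kcond, alpha, v, delta are hypothetical; the sign convention here puts the heated trail at increasing x):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0e  # exponentially scaled K_0, stable for large args

# Hypothetical grinding parameters (illustration only).
q     = 1.0e7    # friction heat flux into the workpiece [W/m^2]
kcond = 7.0      # thermal conductivity [W/(m K)]
alpha = 3.0e-6   # thermal diffusivity [m^2/s]
v     = 0.5      # relative sliding speed [m/s]
delta = 0.01     # friction band occupies 0 <= s <= delta on the surface [m]

def T_surface(x, z=1e-7):
    """Stationary temperature rise of a moving band source (K_0 kernel).

    z is kept slightly below the surface to avoid the logarithmic
    singularity of K_0 at zero argument.
    """
    c = v / (2.0 * alpha)
    def integrand(s):
        X = c * (x - s)                  # advective exponent
        R = c * np.hypot(x - s, z)       # radial argument of K_0
        # exp(X) * K_0(R) computed stably as exp(X - R) * k0e(R); X - R <= 0
        return np.exp(X - R) * k0e(R)
    pts = [x] if 0.0 < x < delta else None
    val, _ = quad(integrand, 0.0, delta, points=pts, limit=200)
    return q / (np.pi * kcond) * val

# Grid search for the surface maximum, in the spirit of the root search
# prescribed in this subsection.
xs = np.linspace(-0.5 * delta, 1.5 * delta, 401)
Ts = np.array([T_surface(x) for x in xs])
x_max = xs[np.argmax(Ts)]
# With this convention the maximum sits inside the band, near its exit edge.
```

At high Peclet number the surface temperature rises monotonically across the band, so the grid search localizes the maximum at the trailing edge of the friction zone, consistent with the uniqueness argument above.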
5.4. Relaxation Time
In order to estimate how rapid the transient regime is, according to (5.4), we will be close to the stationary regime when, for a certain time, (5.38) is satisfied. Notice that (5.38) depends on the workpiece point chosen for its evaluation. We can define the relaxation time as the time that satisfies (5.38) at the point of maximum temperature. According to (5.15), that point must be on the surface in the stationary state; thus we arrive at (5.39). Equation (5.39) can be solved numerically. In order to solve it approximately, we can expand the following function up to first order near the stationary regime [13]: (5.40). Therefore, (5.41). Substituting (5.41) in (5.39), we have the approximated equation (5.42). Using the Lambert W function [15], we can derive the relaxation time from (5.42), arriving at (5.43). Notice that in (5.43) the relaxation time is independent of the location of the maximum on the surface; thus it can be computed much more rapidly.
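The Lambert W step can be sketched on a generic equation of the same transcendental family (the actual (5.42) is not recoverable here, so eps = A t^(-1/2) e^(-B t), with hypothetical constants A and B, serves as a stand-in):

```python
import numpy as np
from scipy.special import lambertw

# Illustrative transcendental equation: eps = A * t**-0.5 * exp(-B*t).
# Squaring gives t * exp(2*B*t) = (A/eps)**2, hence with u = 2*B*t:
#   u * exp(u) = 2*B*(A/eps)**2  =>  u = W(2*B*(A/eps)**2).
A, B, eps = 50.0, 4.0, 1e-3

u = lambertw(2.0 * B * (A / eps) ** 2).real  # principal branch, real argument
t_relax = u / (2.0 * B)

# The closed form satisfies the original equation (both sides positive,
# so squaring introduced no spurious root for t > 0).
residual = A * t_relax ** -0.5 * np.exp(-B * t_relax) - eps
```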
6. Numerical Analysis
For the plots presented in this section, we have taken fixed values of the grinding parameters and considered a VT20 titanium alloy workpiece, whose thermal properties are taken from [16]. Following the procedure described in Section 5.3, we obtain the maximum temperature in continuous grinding and its location on the workpiece surface, (6.1). In order to evaluate the relaxation time, according to (5.38), we have taken a very small parameter (in K/s). Taking into account (6.1), we may solve (5.39) numerically and compute the approximation given in (5.43), obtaining (6.2). Notice that the two results given in (6.2) coincide in order of magnitude.
For the case of intermittent grinding, we have taken in (4.1), (4.3), (4.4), and (4.7) fixed wheel parameters and a fixed wheel velocity over the workpiece surface. According to these data, the point period and the global period of the quasistationary regime follow. Figures 7, 8, and 9 show the time evolution of the workpiece surface temperature for continuous grinding and for intermittent up- and downgrinding, respectively. As can be seen, the temporal evolutions of up- and downgrinding are quite different from each other, but in both cases the continuous profile is a limiting boundary. Figure 10 compares the temperature time evolution at a fixed point for continuous grinding and for intermittent up- and downgrinding. We may highlight that the relaxation time obtained for the continuous case is a good estimation of the transient regime in the intermittent case. We may notice also how upgrinding nearly saturates the maximum temperature of the continuous case, whereas this does not occur in downgrinding. We may evaluate numerically the maximum temperature for both intermittent up- and downgrinding.
Figure 11 shows the time evolution of the quasistationary regime on the surface for a friction period . For , the temperature oscillates as a wave. This is because the heat flux pulses produced at the contact zone propagate along the surface just ground. Figure 12 shows the time evolution of the temperature in for . On the one hand, we may check that the quasistationary regime has a period , as noted in (5.9). On the other hand, we may notice that the quasistationary regime is reached when the temperature in the continuous case is saturated. Therefore, the relaxation time defined for the continuous case is a good measure of the transient regime when a quasistationary regime occurs in intermittent grinding. In Figures 14 and 15, we have plotted the temperature fields at for up- and downgrinding, respectively. We may observe that the two temperature fields are quite different from each other. Figure 13 shows the temperature field for the continuous case. Comparing the temperature field in the continuous case with the intermittent one (up- or downgrinding), we may observe that intermittent friction distorts the temperature field, producing thermal waves inside the workpiece.
7. Conclusions
We have derived a closed analytical solution for the time evolution of the temperature field in dry grinding for any time-dependent friction function. Our result is based on the Samara-Valencia model [9], explicitly solving the evolution of the temperature field for the case of dry grinding. We find this solution by solving a recurrence equation through successive approximations, and we have proved that the solution is unique. An analytical solution of this type has the advantage of being straightforwardly computable, so that graphs can be plotted very rapidly. Moreover, the dependence of the temperature field on the grinding parameters can be studied, which is quite useful for the engineering optimization of the grinding process.
We have applied our solution to continuous grinding and to intermittent up- and downgrinding. We have verified numerically that the time evolutions of up- and downgrinding are quite different from each other. In continuous grinding, we have proved that the maximum temperature occurs in the stationary regime within the friction zone on the surface. In order to graph the evolution of the temperature field, we have obtained a useful approximation for the characteristic time of the transient regime. Comparing the plots of continuous and intermittent grinding for the same workpiece and grinding parameters, we conclude that the behavior of the intermittent case is more complicated in detail, but in general the magnitude of the temperature field is lower. This is understandable because, in intermittent grinding, the amount of energy per unit time entering the workpiece due to friction is less than in the continuous case. Therefore, the temperature plot for continuous grinding acts as a bound for the intermittent case.
Also, we have verified numerically that the relaxation time obtained for continuous grinding is a good estimate of the characteristic time of the transient regime in the intermittent case. Finally, we have obtained an expression for the quasistationary regime in intermittent grinding, in which the temperature field oscillates periodically.
Acknowledgments
The authors wish to acknowledge the financial support received from Generalitat Valenciana under Grant GVA 3012/2009 and from Universidad Politécnica de Valencia under Grant PAID-06-09.
1. S. Malkin, Grinding Technology: Theory and Applications of Machining with Abrasives, Ellis Horwood Ltd. and John Wiley and Sons, 1989.
2. C. Guo and S. Malkin, “Analysis of energy partition in grinding,” Journal of Engineering for Industry, vol. 117, pp. 55–61, 1995.
3. S. Malkin and R. B. Anderson, “Thermal aspects of grinding: 1—energy partition,” Journal of Engineering for Industry, vol. 96, no. 4, pp. 1177–1183, 1974.
4. A. S. Lavine and B. F. von Turkovich, “Thermal aspects of grinding: the effect of heat generation at the shear planes,” CIRP Annals, vol. 40, no. 1, pp. 343–345, 1991.
5. A. S. Lavine, “An exact solution for surface temperature in down grinding,” International Journal of Heat and Mass Transfer, vol. 43, no. 24, pp. 4447–4456, 2000.
6. M. Mahdi and L. Zhang, “The finite element thermal analysis of grinding processes by ADINA,” Computers & Structures, vol. 56, no. 2-3, pp. 313–320, 1995, Proceedings of the 10th ADINA Conference of Nonlinear Finite Element Analysis and ADINA.
7. A. G. Mamalis, D. E. Manolakos, A. Markopoulos, J. Kundrák, and K. Gyáni, “Thermal modelling of surface grinding using implicit finite element techniques,” International Journal of Advanced Manufacturing Technology, vol. 21, no. 12, pp. 929–934, 2003.
8. K. T. Andrews, M. Shillor, and S. Wright, “A model for heat transfer in grinding,” Nonlinear Analysis, vol. 35, no. 2, pp. 233–246, 1999.
9. D. L. Skuratov, Yu. L. Ratis, I. A. Selezneva, J. Pérez, P. Fernández de Córdoba, and J. F. Urchueguía, “Mathematical modelling and analytical solution for workpiece temperature in grinding,” Applied Mathematical Modelling, vol. 31, no. 6, pp. 1039–1047, 2007.
10. I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, Academic Press, New York, NY, USA, 7th edition, 2007.
11. J. L. González-Santander, J. Pérez, P. Fernández de Córdoba, and J. M. Isidro, “An analysis of the temperature field of the workpiece in dry continuous grinding,” Journal of Engineering Mathematics, vol. 67, no. 3, pp. 165–174, 2010.
12. J. C. Jaeger, “Moving sources of heat and the temperature at sliding contacts,” The Royal Society of New South Wales, vol. 76, pp. 204–224, 1942.
13. M. Abramowitz and I. Stegun, Handbook of Mathematical Functions, NBS Applied Mathematics Series 55, NBS, Washington, DC, USA, 1972.
14. J. L. González-Santander, Modelización matemática de la transmisión de calor en el proceso del rectificado industrial plano [Mathematical modelling of heat transfer in the industrial surface grinding process], Ph.D. thesis, Universidad Politécnica de Valencia, Valencia, Spain, 2009, http://hdl.handle.net/10251/4769.
15. R. M. Corless, D. J. Jeffrey, and D. E. Knuth, “A sequence of series for the Lambert W function,” in Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, pp. 197–204, ACM Press, Maui, Hawaii, USA, July 1997.
16. S. G. Glasunov and V. N. Moiseev, Constructional Titanium Alloys, Metallurgy, Moscow, Russia, 1974.
Accountability in DCPS: Details from teacher's IMPACT report
By Valerie Strauss
My guest is Aaron Pallas, professor of sociology and education at Teachers College, Columbia University. He wrote a piece last week about the IMPACT teacher evaluation in D.C. public schools, and
this is a follow-up.
Pallas writes the Sociological Eye on Education blog for The Hechinger Report, a nonprofit, non-partisan education-news outlet affiliated with the Hechinger Institute on Education and the Media.
Pallas has also taught at Johns Hopkins University, Michigan State University, and Northwestern University, and served as a statistician at the National Center for Education Statistics in the U.S.
Department of Education.
By Aaron Pallas
My post last week on the recent firing of 241 teachers in the D.C. public schools elicited some strong reactions.
I had argued that school districts such as those in Washington D.C. and New York City, which are using “value-added” measures for high-stakes personnel decisions (such as deciding which teachers to grant tenure, lay off or fire), have an obligation to make the technical features of these measures available for public scrutiny.
“Value-added” measures attempt to isolate how much individual teachers are contributing to a student’s current achievement from other relevant factors, such as a student’s poverty status or her
achievement in the preceding year. The goal is to determine whether students are learning more or less from a particular teacher than statistical models would predict they’d learn from a typical
teacher, and then to base teacher evaluations in part on the results.
In many states and school districts, value-added measures are limited to teachers in grades four through eight, because all states currently test students in grades three through eight in reading and
math, and two years of data are needed to estimate a teacher’s influence on student achievement.
I also noted that the procedure described in the DCPS IMPACT Guidebook for calculating a teacher’s value-added score, which involves subtracting students’ scores on the DC CAS from 2009 from their
scores in 2010, was seriously flawed, because the scores for one grade are on a different scale from the scores for the adjacent grade.
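To make the scale problem concrete, here is a toy illustration with invented numbers: subtracting grade-4 scale scores from grade-5 scale scores mixes two different yardsticks, whereas standardizing within each grade-year at least puts the two tests on a comparable footing.

```python
import statistics

# invented scale scores: the grade-4 and grade-5 tests are scaled differently,
# so the raw differences below are offsets between scales, not learning growth
grade4_2009 = [430, 440, 450, 460]
grade5_2010 = [530, 545, 540, 565]

raw_growth = [b - a for a, b in zip(grade4_2009, grade5_2010)]  # mixes two scales

def zscores(xs):
    """Standardize within one grade-year: each student's position among peers."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - m) / s for x in xs]

# change in relative standing, comparable across the two differently scaled tests
rel_growth = [b - a for a, b in zip(zscores(grade4_2009), zscores(grade5_2010))]
```

The raw "growth" figures are dominated by the roughly 100-point offset between the two scales; the standardized version removes that offset, which is one reason the scaling question matters before any subtraction is meaningful.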
I concluded that if the district were following the description it gave in the Guidebook, it had botched the calculation of value-added scores for teachers – and that these flawed calculations may
have been used to justify firing 26 teachers and placing hundreds more at risk of being terminated next year.
Frederick M. Hess of the American Enterprise Institute, in a blog post entitled “Professor Pallas’s Inept, Irresponsible Attack on DCPS,” raised two major objections to my piece. The first was that
my analysis rested on a document I found on the DCPS website rather than on phone calls or emails to DCPS. Second, Hess said I misrepresented the complex procedures that DCPS and its contractor,
Mathematica Policy Research, used to calculate its value-added measures.
Hess and I both agree that “the simple subtraction exercise” I described in my first post wouldn’t result in accurate value-added scores. And that’s precisely my point: the procedures DCPS described
in the Guidebook are seriously flawed.
But whereas Hess is willing to take on faith the validity of whatever DCPS and Mathematica actually did – simply because “there’s a growing industry that specializes in doing precisely this” – I am
more skeptical, because the description provided in the DCPS IMPACT Guidebook is so off-base.
Below, I present evidence from a current teacher’s IMPACT report to show that what DCPS provided to teachers two weeks ago states that a simple subtraction exercise is used to calculate value-added scores.
I worked from the Guidebook because that is what is publicly available. It doesn’t take a leap of logic or faith to expect that the Guidebook should accurately describe how IMPACT works.
It is the only document that was made available to DCPS teachers to explain the new system by which they were to be evaluated – and possibly fired – in the most recent school year. I was also careful
to qualify my statements – noting that “I cannot be sure that this is what happened” – for the simple reason that there’s no technical report.
Such a report would detail the methodology used to make the calculations, allowing outside experts to confirm or dispute IMPACT’s validity. It would also help teachers better understand their scores.
A colleague of mine who sought a copy of the technical report, Justin Snider of The Hechinger Report, managed to reach the DCPS Chief of Data and Accountability, Erin McGoldrick.
McGoldrick explained that the technical report for the IMPACT system is currently being finalized by Mathematica. “DCPS is considering releasing the document publicly once it is finalized,” she said.
Meanwhile, Mathematica has now posted a brief description on its website of the value-added procedures it developed for DCPS.
Stanford economist Eric Hanushek, who serves as a technical advisor to IMPACT, told me that IMPACT uses a fairly standard regression model, which predicts current-year achievement by taking into
account prior achievement and a variety of student characteristics like age, gender and socioeconomic status.
This is reassuring, although I still have many questions about the details of the model, such as its reliance on a teacher’s performance solely in the 2009-2010 school year rather than multiple
years. Hanushek also told me that he’s never looked at what DCPS reports about the value-added measures.
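For readers unfamiliar with the approach, here is a bare-bones sketch of the kind of regression Hanushek describes. This is not DCPS's actual model (whose specification is unpublished); the data, the single covariate, and the residual-as-value-added step are all illustrative assumptions.

```python
# Sketch of a value-added regression: predict 2010 scores from 2009 scores plus
# a student characteristic, then treat residuals (actual - predicted) as the
# contribution unexplained by the model -- attributed, rightly or not, to the teacher.

def ols(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    k, n = len(X[0]), len(X)
    A = [[sum(X[i][p] * X[i][q] for i in range(n)) for q in range(k)]
         for p in range(k)]                                # A = X'X
    b = [sum(X[i][p] * y[i] for i in range(n)) for p in range(k)]  # b = X'y
    for col in range(k):                                   # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                                       # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# toy rows: [intercept, 2009 scale score, poverty flag] -> 2010 scale score
X = [[1, 440, 0], [1, 455, 1], [1, 430, 1], [1, 470, 0], [1, 448, 0]]
y = [545, 550, 530, 575, 552]
beta = ols(X, y)
residuals = [y[i] - sum(b * x for b, x in zip(beta, X[i])) for i in range(len(X))]
```

A teacher's score would then be an average of her students' residuals, and with only a classroom's worth of students that average is noisy.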
I’m happy to hear that DCPS seems not to have botched the calculation of the value-added scores by subtracting last year’s score from this year’s score in the calculation of actual or predicted
student performance, even if this is what DCPS is telling teachers it’s doing.
And I still contend that teachers whose careers were placed in jeopardy by the results should have been notified of the methodology in advance, not after the fact. Also, I reserve the right to be
critical of the procedures if and when they are made public. The weight of expert scholarly opinion is that many technical and practical issues must still be worked out before value-added measures
can be fairly used in high-stakes personnel decisions.
So why has DCPS misrepresented the value-added methodology to teachers and the public?
I didn’t invent the procedures described in the Guidebook – it’s an official DCPS publication. Is it really plausible that these procedures were only intended to illustrate the logic of the
value-added approach to a lay audience? Other districts, such as New York City, seem able to represent the actual methods used. Do DCPS officials believe that teachers don’t need to understand the
system that’s being used to evaluate their performance, or, worse, that they aren’t capable of understanding it?
Here’s an example of how the misrepresentation in D.C. continues.
A teacher in IMPACT Group 1 – the group for whom 50 percent of the IMPACT score is based on Individual Value-Added (IVA) – generously provided me a copy of her actual IMPACT report for 2009-10.
(Details have been changed to protect the teacher’s identity.) This teacher received an overall impact score that was in the “Effective” range. Pages 4-8 of her report make clear that the calculation
of the value-added score is based on a measure of “growth” that involves subtracting a student’s score from 2009 from his or her score in 2010.
For example, in the section entitled “Your IVA Score for Reading,” Step 1 reads as follows: “We calculated the average reading growth of your students over the past school year. On the spring 2009 DC
CAS (before they entered your class), your students’ average scale score was 443.8. On the spring 2010 DC CAS (after being in your class), your students’ average score was 547.2. Therefore, on
average, your students grew 103.4 DC CAS points this past school year.”
Step 2 is: “We calculated the average reading growth of students like yours over the past school year. On the spring 2009 DC CAS, the average scale score of students like your 2009-10 students was
443.8. On the spring 2010 DC CAS, the average scale score of students like your 2009-10 students was 539.8. Therefore, on average, students like yours grew 96.0 DC CAS points.”
Finally, in Step 3: “We compared the average reading growth of your own students with the average reading growth of students like yours. Recall that your own students grew 103.4 DC CAS points and
students like yours grew 96.0 DC CAS points. Thus, your students grew 7.4 DC CAS points more than similar students. Your raw value-added score, then, is +7.4.”
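Stripped of prose, the three steps quoted above are exactly the subtraction exercise the Guidebook describes:

```python
# numbers taken from the report excerpt quoted above
your_2009, your_2010 = 443.8, 547.2          # your students' average scale scores
peer_2009, peer_2010 = 443.8, 539.8          # "students like yours"

your_growth = your_2010 - your_2009          # Step 1: 103.4 points
peer_growth = peer_2010 - peer_2009          # Step 2: 96.0 points
raw_value_added = your_growth - peer_growth  # Step 3: +7.4
print(round(raw_value_added, 1))
```

Whatever adjustments the contractor applies behind the scenes, this is the arithmetic the report itself presents to teachers, and it subtracts scores that sit on different grade-level scales.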
In its reporting to teachers, then, DCPS appears to be saying one thing – which is clearly incorrect – but doing another. The big question, then, is why?
McGoldrick of DCPS provided this explanation: “The full technical description of the value-added model designed for us by Mathematica Policy Research includes sophisticated statistical techniques that require specific expertise to understand fully. We wanted to make the model as accessible as possible to teachers so they could understand how they were being evaluated. Therefore, we focused our materials on conveying the core concepts of value-added to teachers, leaving the full description to our technical advisors who have specific expertise in value-added.”
Okay, but as we learned from the financial meltdown, there are serious risks in relying on technologies so complicated that hardly anyone – and perhaps no one setting policy for DCPS – truly
understands them. Involving outside experts can diffuse responsibility so that no individual is solely in charge of decisions with real consequences.
Follow my blog all day, every day by bookmarking washingtonpost.com/answersheet. And for admissions advice, college news and links to campus papers, please check out our Higher Education page at
washingtonpost.com/higher-ed Bookmark it!
By Valerie Strauss | August 5, 2010; 6:00 AM ET
Categories: D.C. Schools, Guest Bloggers, Research, Standardized Tests, Teachers | Tags: IMPACT and d.c., d.c. teacher evaluations, d.c. teachers, dc schools and teachers, how to evaluate teachers,
value added evaluations
We who are evaluated by the methods used for IMPACT simply call this the shell game. Given how IMPACT worked at my own school it is hard to believe that this is actually expected to identify and keep
effective teachers at a school. As it has been used at my school, IMPACT has proved to be a subjective tool with mumbo jumbo math thrown in to try to make it look objective. In the end, due to the way math is used for the scoring from tests and the fact that Group 1 teachers' test scores account for 55% of their score (not the 50% that is constantly reported in your paper), very few teachers
will really receive incentive bonuses. This works out great for a cash strapped school system that could use the foundation money elsewhere - for instance, for the highly paid consultants who
designed IMPACT and who will be designing the tests to be used to evaluate pre-K and Kindergarten students next year. This is a sham and your paper's refusal to investigate fully the paid consultant
scandal that is Rhee's tenure is shameful.
Posted by: adcteacher1 | August 5, 2010 7:53 AM | Report abuse
Prof. Pallas may have inadvertently painted a picture where -- and this is by deduction -- only he, apart from Mathematica, understands the methodology released so far. He has provided no evidence
that no DCPS decisionmakers have good and sufficient understanding of it--unless he hangs his hat on one phone interview. Mathematica is reputable, as Pallas knows from his own business as a
consultant. But, ya know, as an outside body with offices close to an Ivy League school and with some well paid people it must be beaten up and found to be part of the larger Gates-Walton-Duncan,
yada, yada plot to crush American Public Education, and specifically bust the DC teachers union, while permanently disadvantaging youth in the District. That's a mandatory charge that must be brought
by we professional victims in DC. This plot is essential to understanding us and our vast progress in public education until recently.
Posted by: axolotl | August 5, 2010 8:08 AM | Report abuse
DC teachers need a good lawyer.
Posted by: Linda/RetiredTeacher | August 5, 2010 8:58 AM | Report abuse
IMHO, it is time for axolotl to either cite clear data to support remarks made on this site or be ignored.
Posted by: lacy41 | August 5, 2010 9:09 AM | Report abuse
-- This has now devolved into a "Just Trust Me" situation. We can't explain it but our hired gun experts assure us that "all is well".
-- Trust is something that is earned by performance and accountability. Currently there is no way to verify either the actual calculations made or the validity of the methodology. I, for one, need
this verification to earn my trust. (There are others that don't seem to need this level of close inspection to trust the current administration.)
-- It's still NOT a "vertical scale" so DC CAS scaled scores cannot be compared from one grade to the next. (Yes, the numeric values are 300-399, 400-499, etc. but the actual test questions are not
designed to measure learning progress across grades but rather are designed to measure proficiency on grade specific criterion-based standards.) See the McGraw-Hill technical papers at OSSE.
-- Since the Technical Support Document has not been published, no one at DCPS knows how the actual calculations are done. This implies that DCPS must have spent a fortune to have the calculations
done for them at Mathematica. And that they just trusted them to do it "right".
-- Current research suggests that such growth measures are fraught with very high error volatility (that regression analysis cannot smooth out satisfactorily). See: http://voices.washingtonpost.com/answer-sheet/teachers/study-error-rates-high-when-st.html#more Perhaps the Technical Document speaks to these very real issues.
-- What is the definition of "students just like yours'? Does it take into account the number of students that have been or should have been retained? Does it take into account the number of divorces
or separations that occurred during the school year? Does it take into account loss of school time due to injury or illness? Does it take into account discipline issues this year versus last year
versus the "students just like yours"? Does it take into account students participating in extended hours or Saturday classes last year versus this year (ie: programs cut from the budget this year
versus last year)?
-- Ms. McGoldrick's educational background doesn't seem to be too strong on "statistical student growth models". From the DCPS website: "Ms. McGoldrick earned a bachelor’s degree in the classics from
the University of Notre Dame and a master’s degree in public policy from UCLA’s School of Public Policy and Social Research." http://dcps.dc.gov/DCPS/About+DCPS/Who+We+Are/Leadership+Team/
-- If it's just "a fairly standard regression model", then it should not be very hard to explain. It's harder to prove that it really works by citing applicable current research and studies. In particular, DC CAS has been used by DCPS since 2006, so now we have 5 years of consistent data. How has this model from Mathematica worked over those 4 years?
-- To me, the IMPACT growth model still feels like an experiment in the early stages but is already being used!
Posted by: interested8 | August 5, 2010 11:38 AM | Report abuse
You will also notice that Mathematica's description of this is filled with hedges. From their website:
"For example, as with any statistical model, there is uncertainty in the estimates produced; therefore, two teachers with similar value-added estimates are said to be “statistically indistinguishable” from one another. We quantified the precision with which the measures are estimated by reporting the upper and lower bounds of a confidence interval of performance for each teacher."
Really. DCPS teachers never saw any confidence intervals, just a single number. I would be very interested to know how wide those confidence intervals are. Given the tiny sample sizes for Elem. school teachers, I wouldn't be surprised if it were 15 or so scale points, which casts real doubt on teacher scores.
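The commenter's guess is easy to sanity-check with the textbook formula for a 95% confidence interval on a mean (an assumed form; Mathematica's actual interval construction is unpublished). With an invented student-level growth SD of 40 scale points:

```python
import math

sd = 40.0  # assumed SD of individual students' score growth, in scale points
# 95% interval half-width for a mean of n students: 1.96 * SD / sqrt(n)
half_widths = {n: 1.96 * sd / math.sqrt(n) for n in (25, 50, 100)}
for n, hw in half_widths.items():
    print(f"n={n:3d} students -> 95% CI of +/- {hw:.1f} scale points")
```

Under these assumptions, a single class of 25 students gives an interval of about +/-15.7 points, in line with the commenter's guess; even pooling several classes leaves a wide band relative to a raw value-added score of a few points.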
"In addition, because value-added estimates measure not only the effectiveness of the teacher but also the combined effect of all factors that affect student achievement in the classroom, some
caution should be applied when comparing teachers across schools. Finally, if student assignment to teachers was based on unobservable factors—for example, pairing difficult-to-teach students with
teachers who have succeeded with similar students in the past—a value-added model might unfairly penalize these teachers because it cannot statistically account for factors that cannot be measured."
So teachers might be unfairly punished for being good at their jobs and being given more challenging students? I can't think of anyone who has ever brought that up EXCEPT FOR EVERY TEACHER ON THESE BOARDS.
Given these flaws, are we sure this system is even close to working? I asked DCPS for the whole model, and was told that I wasn't entitled to see it, despite the fact that I am being evaluated on it.
I'm afraid that Rhee burned trust a long time ago, so telling me to trust that the system is fair isn't going to fly with me or a lot of teachers. (Note: I graded highly effective, so it isn't sour
grapes here...)
Posted by: Wyrm1 | August 5, 2010 12:33 PM | Report abuse
"-- This has now devolved into a "Just Trust Me" situation. We can't explain it but our hired gun experts assure us that "all is well"."
Trust Michelle Rhee who lied on her resume about her Baltimore Miracle (TM).
Trust Jason Kamras, who appears to have made false claims about his success at Sousa Middle School.
Posted by: phillipmarlowe | August 5, 2010 12:54 PM | Report abuse
"McGoldrick explained that the technical report for the IMPACT system is currently being finalized by Mathematica. “DCPS is considering releasing the document publicly once it is finalized,” she said."
Umm...WHAT? They're "considering" releasing it *once it's finalized*? 200+ people have lost their jobs, another 700 or so are on watch and may lose their jobs next year, and it's not even finalized?
They're only just now *considering* if they'll let people know how they're being evaluated???
And what about the IES study--conducted by two Mathematica researchers-- that found that current value-added models have error rates upwards of 25% (when using 3 years of data), and 35% when using
just one?
How can they just implement a system that has runaway problems like this, refuse to inform people about it, and call it reform?
Teachers dig into our own pockets to pay for basic supplies. Yet there's always money for central office staff and pricey Data consultants. And more tests.
These poor children.
Posted by: TeacherSabrina | August 5, 2010 1:25 PM | Report abuse
So Pallas is saying that the difference between being a Level 3 teacher (effective) and a Level 2 teacher (minimally effective) is about 5-10 points? And that is acquired how? Oh, I know… They used
the cosmological constant! Great, a teacher’s job rests upon a fudge factor…
The attempt to control for so many social descriptors with numerical values inserted into a "formula" takes this further and further from any kind of classroom reality, and frankly, credibility. It
does belong in outer space.
And speaking of ineptitude and irresponsibility, where does the American Enterprise Institute, a neocon think tank, fall on that continuum? Of course Hess would "take on faith" whatever bogus math was done. It is his job to conclude that we must trust the "growing industry" dedicated to such nonsense. If there's a "business" in charge, then all is well, and if it's a BIG business, then so much the better.
"The goal is to determine whether students are learning more or less from a particular teacher than statistical models would predict they’d learn from a typical teacher, and then to base teacher
evaluations in part on the results."
And who or what is a "typical teacher"? Where is the model for that? Looks like we’re back to the fudge factor… and it’s obvious that the people factoring it are NOT as smart as Einstein, or Mr.
Posted by: Incidentally | August 5, 2010 2:42 PM | Report abuse
Professor Pallas – did you know that this is not the first time that Rhee and Company has been dodgy about the numbers?
As I’ve said before, it took considerable badgering on my part to get the Post to stop repeating Rhee’s line that Shaw middle school’s DC-CAS scores stayed about the same, when in fact they decreased.
But the Post finally did it,* and in future articles, the Post acknowledged that the principal’s efforts in his first year of a turnaround had resulted in lower scores. PBS aired the same false
information about Shaw in the summer of 2009, but they were much quicker to correct their error,* no badgering needed. In both cases there were official stats to back up the claims. Thank you for
your inquiry into IMPACT and please stay on the trail.
* references in the order mentioned. Please check them and see for yourself
For detailed info on numerous DCPS statistical claims, see: http://gfbrandenburg.wordpress.com/
Posted by: efavorite | August 5, 2010 3:05 PM | Report abuse
value-added model designed for us by Mathematica Policy Research includes sophisticated statistical techniques that require specific expertise to understand fully.
Time for Americans to understand what is a mathematical model.
Mathematical models were used to convince investors that the bundles of the worthless mortgages that were being offered for sale, would be great investments.
These models of Mathematica Policy Research are as worthless as the models selling bundles of worthless mortgages.
Statistically it is totally meaningless and impossible to derive anything from the limited number of classes per grade.
There are, being generous, only 140 classes of 25 students per grade:
44,000 students in D.C. public schools, divided by 13 grades, divided by 25 students per class.
The 140 classes are too small a sample to account for the random composition of classes with 25 students of different skill levels.
Imagine 4 different levels of skills, 1-4.
With 4 levels and 25 students there are 4 raised to the power of 25, about 1.1 x 10^15, possible random classroom compositions. Two students in a class with 4 levels give 4 raised to the power of 2, or 16, possible compositions.
The sample of 140 classes is far too small, and there is very little probability of finding two classrooms with the same class composition if class composition is random with 4 levels and 25 students.
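The counts in this comment can be checked directly, assuming each student is independently assigned one of 4 skill levels (ordered assignments, so the formula is levels raised to the number of students):

```python
levels = 4
two_students = levels ** 2   # 2 students, 4 levels each: 16 compositions
full_class = levels ** 25    # 25 students: 4**25, about 1.1e15 compositions
classes_per_grade = 140      # the comment's estimate of observed classes

print(two_students, full_class)
print(full_class // classes_per_grade)  # possible compositions per observed class
```

Even treating unordered mixes as equivalent, the count (a multiset coefficient, C(28,3) = 3276) still dwarfs 140 classes, so the qualitative point about sample size stands.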
Okay, suppose we attempt to make the compositions of all the classrooms the same.
But then you lose the randomness that statistics requires, and the 4-level scheme is no longer valid, since you are not looking at random class compositions, for which a very large random sample would be appropriate.
Let us say that level 1 is failure. Level 1 cannot simply be used to equalize the composition of classes. On a test where less than 200 is failure, there are large differences between a student that failed with a score of 50 and a student that failed with a score of 190.
Since you are attempting to equalize the composition of classes, you have to equalize the students' average previous scores. But with only 25 students this cannot be done in a random fashion.
Does a student with a high score of 450 and a student with a very low score of 50 equate to two students who each have a score of 250?
Time to recognize that test scores in D.C. cannot be used to evaluate teachers.
The students are too different in skills in this school system to have compositions of classes where a small sample can be used.
It has been seen that you can create mathematical models to sell worthless mortgages. Time to recognize that when you hear "includes sophisticated statistical techniques that require specific expertise to understand fully," it probably means that the model has very little validity with respect to reality, or even to mathematics.
Posted by: bsallamack | August 5, 2010 4:01 PM | Report abuse
Such a report would detail the methodology used to make the calculations, allowing outside experts to confirm or dispute IMPACT’s validity. It would also help teachers better understand their scores.
It is time for Mathematica Policy Research to release full details of the model for using test scores in evaluating teachers.
It is also time for Mathematica Policy Research and the DCPS to release models and methods for equalizing the class composition of students based upon test results.
Once these are released and reviewed by mathematicians I believe that it will be found that the model is very flawed mathematically.
Posted by: bsallamack | August 5, 2010 4:43 PM | Report abuse
If the education of children and the livelihoods of teachers weren't on the line, this would be high comedy...
"We calculated the average reading growth of your students over the past school year." - NO, you calculated the average growth in OBSERVED SCORES (which have a margin of error relative to what's
called a "true" score), and those scores come from a test that has a debatable correlation to a subset of the larger set of skills involved in reading.
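The observed-versus-true-score distinction can be illustrated with a toy simulation. The 30-point standard error of measurement and the 20-point true growth below are invented numbers, not properties of any real test:

```python
import random
import statistics

random.seed(2)
SEM = 30  # hypothetical standard error of measurement, in scale points

# Suppose every one of 25 students truly gained exactly 20 points.
true_start = [random.gauss(250, 60) for _ in range(25)]
TRUE_GROWTH = 20

# Each observed score = true score + measurement error, on both occasions,
# so observed "growth" carries error from two tests at once.
obs_growth = [
    (t + TRUE_GROWTH + random.gauss(0, SEM)) - (t + random.gauss(0, SEM))
    for t in true_start
]
print(f"true growth for every student: {TRUE_GROWTH}")
print(f"observed average 'growth':     {statistics.mean(obs_growth):.1f}")
print(f"observed spread across pupils: {statistics.pstdev(obs_growth):.1f}")
```

Even when every student's true growth is identical, the observed student-level growth figures scatter widely, which is why "average reading growth of your students" is really "average change in noisy observed scores."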
“We calculated the average reading growth of students like yours over the past school year." - NO, you calculated the average change in SCORES among some students who MIGHT resemble other students in
a small number of ways for which you have data, and you ASSUMED that these were relevant factors. Furthermore, you ASSUMED that where you had NO information about the students, that the information
you lacked would have no effect in distinguishing one group of students from another. Apparently, you did not consider how long students had been in the class or in the school system; whether or not
the student had access to books and computers at home; whether or not the student participated in any tutoring or mentorship activities; whether or not the student lived with two parents, one parent,
or another arrangement; how much school the student missed this year; whether or not the student's attendance record this year was comparable to prior years; whether or not the student or a family
member experienced serious illness, loss of employment, loss of a home.
By evaluating teachers comparatively, you ASSUME that any changes in the school or curriculum affected all teachers equally, and that all factors in the school, like tutoring, administration, and the
impact of other teachers, will affect all teachers equally. By comparing teachers to the group based on such simple data and attributing differences to the teachers themselves, you ASSUME that the
teachers have equal access to and benefits from training and support. By making high-stakes decisions based on scores from year to year, you IGNORE or DON'T CARE about significant variables from year
to year in the teacher's work or life, such as changes in teaching assignments; extended absence or illness; bereavement.
Good luck, D.C. teachers. I feel for you.
Posted by: DavidBCohen | August 5, 2010 7:37 PM | Report abuse
Mathematica Policy Research announces new method to evaluate teachers based on test results.
Those at Mathematica Policy Research are celebrating with champagne.
They are eagerly awaiting the Federal government providing the first batch of clone children that have been raised in government facilities to prove the effectiveness of evaluating teachers by
test results.
Meanwhile, others at Mathematica Policy Research, who have been working on a Federal contract, will shortly be ready to show the Federal government their model that predicts that 10 years or more of
massive unemployment is an indication of a healthy economy.
Posted by: bsallamack | August 5, 2010 8:14 PM | Report abuse
2010 DC-CAS individual scores are up
Really Great job by Hardy MS 8th graders:
They went up above their 7th grade scores and last years 8th grade
29.41% Advanced in reading (83.33% prof and Adv)
18.45% Advanced Math (89.32% Prof and Adv)
Posted by: edlharris | August 5, 2010 11:25 PM | Report abuse
Unfortunately, with Race to the Top, all teachers in this country will be evaluated in this way.
Posted by: tutucker | August 5, 2010 11:41 PM | Report abuse
If anyone has read the Bridging Differences blog, Richard Allington posted this. . .
"I'm in agreement with Diane on this one. RttT is but another "grab the money and go" scam. John Papay, Harvard University, has a paper that will appear in the American Educational Research Journal
soon on pay for performance that is a must read. He finds that 40% of the top teachers using scores from one reading test were among the lowest performing teachers using a different reading test. And
vice-versa. It's not just that we have bad tests; as I have been telling my students all along, we have bad tests that provide bad advice. We also have no tests that measure the real goal of
reading instruction, creating kids (then adults) who read for information and entertainment. I'd love a system that rewarded our best teachers financially but we don't have any tests currently that
I'd trust in that system. And if my experiences are valid, then most teachers are not in teaching for the money but for everything else that goes along with teaching."
If you are not a teacher, please know that Dr. Richard Allington is a highly respected literacy researcher. So not only is IMPACT bad, but so are the reading tests we're giving our students.
Posted by: tutucker | August 5, 2010 11:45 PM | Report abuse
When I graduated from college, I spent a year subbing. The big thing in school at that time was behavior modification.
In a first grade class, I was so busy running around trying to figure out who got a star for what that I didn't teach at all that day. Fortunately, the kids knew the system and were able to tell me
when they got stars for their behavior etc. Unfortunately, most of their learning was all focused on this reward program.
In the same building I subbed for a 5th grade teacher. After panicking when I didn't see a behavior plan, I asked a student what his teacher did when a student misbehaved. He told me, "She tells us
to stop."
Can you believe that worked? :-)
Just let us teach in ways that kids can learn.
I'm so tired of data. Data rich, information poor. yuckkkkkkkkkkkkkk!!!
Posted by: tutucker | August 5, 2010 11:51 PM | Report abuse
(Bsallamack, you're starting to rub off on me. :-) )
I'm tutoring a student this summer and have been front loading him with the concepts he'll be working on for next year.
As we started working on energy, I got a little caught up in the learning and started thinking out loud.
"So electricity is one form of energy and a source of energy is the sun, but it also says that kinetic and potential energy are forms of energy. But I think kinetic and potential energy are present
in all forms of energy.
I've always wondered how solar energy works. Let's look it up on Google images. It talks about DC and AC power. I have no idea what that means, but I'll ask my hubby later as that's his job."
I looked over at my tutoree and smiled. But then I realized this is what learning is.
During the school year, I try to instruct in this way. Kids research and learn about content area through investigation, questions, connections etc. I remember studying about the nervous system this
past year and a student stating that she was like the associative neuron because she passed messages between two different groups during recess.
I also work hard at getting kids to think at a different level in regards to all other academic areas too.
I expect I could be scored effective by just having kids fill in the blanks, but I'm not interested in that. But the above learning that I attempt to teach can also be captured through portfolio
assessments etc. Performance based assessments. Even an outside person can come in and talk to the kids about the critical learning happening in their academic areas.
I want kids to be curious, and to pursue their learning. I have also noticed that the skills come much easier when kids are active learners and not passively working on skill and drill in isolation.
And yet, we are continuing on the road that narrows the instruction and thus dumbs down our children by holding teachers accountable to a fill in the blank or multiple choice test.
Posted by: tutucker | August 6, 2010 12:09 AM | Report abuse
I want kids to be curious, and to pursue their learning. I have also noticed that the skills come much easier when kids are active learners and not passively working on skill and drill in isolation.
And yet, we are continuing on the road that narrows the instruction and thus dumbs down our children by holding teachers accountable to a fill in the blank or multiple choice test.
Posted by: tutucker
You are right. Really, multiple choice tests are simply a measure, but the politicians have confused this totally.
On a reading comprehension test it is impossible for a teacher to drill children or prep them for the test. The test measures reading ability, and usually this ability comes solely from reading. The
only way you could drill students is if each student works by themselves on a computer with a program where children read on their own and then answer questions to show that they have
understood the material. You cannot do this with a teacher in the classroom.
In reality this is the same with math. The teacher teaches the concepts to get them started, and a computer program drills each student individually. Drilling 2+2 by a teacher over and over is
senseless when some students know it and some do not.
The politicians and even educators do not understand that a teacher can use repetition only to a certain point. The idea is that the child sees the pattern from the repetition.
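The kind of per-student computer drilling described above can be sketched roughly like this. The mastery rule (retire a fact after two consecutive correct answers) and the simulated student are invented for illustration:

```python
def drill(student_knows, facts, passes_needed=2, max_rounds=50):
    """Quiz a (simulated) student; a fact is retired once it has been
    answered correctly `passes_needed` times in a row, so known facts
    stop being repeated while unknown ones keep coming back."""
    streak = {f: 0 for f in facts}
    rounds = 0
    while any(s < passes_needed for s in streak.values()) and rounds < max_rounds:
        rounds += 1
        for f in facts:
            if streak[f] >= passes_needed:
                continue                  # mastered: no more repetition
            streak[f] = streak[f] + 1 if student_knows(f) else 0
    return rounds

# Hypothetical addition facts up to 5 + 5.
facts = [(a, b) for a in range(1, 6) for b in range(1, 6)]
knows_everything = lambda f: True
print(f"rounds for a student who knows it all: {drill(knows_everything, facts)}")  # → 2
```

The point mirrors the comment: a student who already knows a fact sees it only long enough to prove mastery, while a teacher drilling the whole room repeats everything for everyone.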
DC and AC. Direct current is from a battery, where the electrons travel in one direction. Alternating current is from an electrical generator with a magnet moving around coils of wire. The electrons
alternate movement in one direction and then the reverse direction because of the rotation of the magnetic field. Originally, when transmission of electricity in this country started, there had to be a
decision on how to transmit energy. Alternating current was selected as the method for transmission of current. (Even with the magnet-and-coil generator you could convert this into direct current for
transmission, but this is more complex.)
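The "alternate movement in one direction and then the reverse" can be made concrete numerically. This sketch assumes US-style 60 Hz mains with a 170 V peak (illustrative figures):

```python
import math

# 60 Hz AC: the voltage, and hence the push on the charge carriers,
# reverses direction 120 times per second; DC from a battery does not.
F, V_PEAK = 60.0, 170.0
v = lambda t: V_PEAK * math.sin(2 * math.pi * F * t)

# Sample one full cycle (period 1/60 s) and compute the RMS voltage,
# which is what the familiar "120 V" household figure refers to.
n = 100
samples = [v(i / (F * n)) for i in range(n)]
rms = math.sqrt(sum(x * x for x in samples) / n)
print(f"RMS over one cycle: {rms:.1f} V")   # ≈ 170 / sqrt(2) ≈ 120.2
```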
Never did like kinetic energy versus potential energy. Since everything is composed of atoms that are constantly in motion, it seems like everything is kinetic energy. We seem to build constructs that
are really chains on thought.
Posted by: bsallamack | August 6, 2010 1:25 AM | Report abuse
2010 DC-CAS individual scores are up
Really Great job by Hardy MS 8th graders:
They went up above their 7th grade scores and last years 8th grade
29.41% Advanced in reading (83.33% prof and Adv)
18.45% Advanced Math (89.32% Prof and Adv)
Posted by: edlharris
I would not be too impressed with this, since the national tests of D.C. indicate that the DC-CAS is too easy.
Actually this year many D.C. teachers are getting bonuses simply because of an easy test.
Teachers should not be fired because of test results, and they should not be given bonuses because of them either. The test results really are based upon the composition of students in the class.
Time to revert to the idea that a student getting an X on a test only means the student got an X on the test. It does not tell you anything about the teacher, and it certainly does not tell you whether
the test was too easy.
Posted by: bsallamack | August 6, 2010 1:33 AM | Report abuse
Good luck, D.C. teachers. I feel for you.
Posted by: DavidBCohen
Yes you are right about the secondary issues.
But the real issue is that you cannot use tests to evaluate teachers unless each class has the same composition of students based upon the tests of previous grades.
Example of composing classes in grade A with 25 students to have same composition.
10 students failed the previous grade's test
8 students scored basic on the previous grade's test
5 students scored proficient on the previous grade's test
2 students scored advanced on the previous grade's test
Even this presents a problem.
Student a failed with a 50 on the previous grade test.
Student b failed with a 190 on the previous grade test.
Having student a in your class is not equivalent to student b in a different class.
And of course you are not even considering:
Student a failed the two previous grade tests.
Student b only failed the previous grade's test and passed the test in the grade prior to the previous grade.
Having student a in your class is not equivalent to student b in a different class.
It is impossible to use test results to fairly evaluate teachers.
Posted by: bsallamack | August 6, 2010 2:02 AM | Report abuse
I'm starting to like you. :-) Thanks for the info on energy. I plan to investigate it more. Your response alone has increased my interest.
On your point in regards to reading comprehension. I had a student who had incredibly high levels of higher level thinking around his reading. He increased the level of thinking in our classroom in
his responses to our class reading. He could talk about author's intent, evaluate the characters' actions etc. But he stunk at multiple choice tests. He was not strong in the skill aspect of reading,
and his thinking was so diverse.
I would argue that a multiple choice test can't tell us much in regards to a child's reading. I can listen to him read, talk to him about his reading etc., and if I know what I'm doing, I can
evaluate him and move him to another place. I can do much more for him than a multiple choice test can.
Posted by: tutucker | August 6, 2010 10:51 AM | Report abuse
I would argue that a multiple choice test can't tell us much in regards to a child's reading.
Posted by: tutucker
I saw a problem in the math tests, as there are trick questions which do not belong on a math test. Math is being tested, not the ability to see tricks. Trick questions on a math test are not valid.
You may be right about reading tests, as I have not really looked at the reading tests in a while, and they may have changed them in ways that make them invalid.
Reading tests have been around for over 50 years and if valid are effective in providing information. The score is information.
As a teacher you should be able to make the determination whether the reading test is fair or not.
Maybe the child has made an offset mistake in filling in the answer blanks. If you have the test you can check this, where you see correct answers that have been offset by one position.
Give the child a test where the child circles the answers on the page instead of filling in blanks.
"I would argue that a multiple choice test can't tell us much in regards to a child's reading."
I disagree with the above. Correctly structured multiple choice tests are the only fair method of evaluating all students.
You have to make the determination whether the specific test has been correctly structured.
Posted by: bsallamack | August 6, 2010 12:44 PM | Report abuse
Posted by: tutucker
Other possibilities for your student.
Reading too fast because it is a test and making mistakes. A fast runner can lose in a race because of poor pacing.
The child may be displaying memory skills and not reading skills in the classroom, simply repeating from memory. This skill would not help on a test. Being able to memorize symbols does not imply
reading ability, and it really is not very different from someone who can memorize complex formulas but has no understanding of how to apply them to solve a given problem.
Posted by: bsallamack | August 6, 2010 3:15 PM | Report abuse
Bsallamack and tutucker, I like you both. Let's talk about energy some, and I'll come back to education standards from that.
The idea that the phenomenon of energy is one "thing" which can be recognised in many forms depends on being able to put a number on it - "quantify" it - when it is approached in different
manifestations. In our current International System of scientific measurement, as long as we use the seven base units to start our calculations, we can measure kinetic (and potential!) energy from
any source, in any manifestation, in joules. Kinetic energy is 1/2 the mass of a moving object x its velocity squared, in joules. The energy released by a nuclear explosion is the mass destroyed x
the speed of light squared, also in joules. A volt is one joule of energy per coulomb of charge. The energy in a photon of light is Planck's constant times the frequency of the light. In
joules. Energy is work, which equals force x distance, in joules. Heat is also energy, the statistical average of the kinetic energy of individual molecules and atoms: the energy needed to raise the
temperature of a gram of water by one degree Celsius is 4.184 joules. We express the potential energy stored in chicken fat, gasoline, glucose and ethanol in the form of kilojoules per mole. When we
burn them or metabolise them, they release that much heat, or do that much work.
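Those formulas can be checked numerically. The particular inputs below (a 0.145 kg baseball at 40 m/s, one gram of mass, a green photon, one gram of water) are illustrative choices, not figures from the comment:

```python
# Everything lands in joules once the inputs are in SI base units.
C = 299_792_458.0          # speed of light, m/s
H = 6.62607015e-34         # Planck's constant, J*s
C_WATER = 4.184            # specific heat of water, J per gram per deg C

kinetic = 0.5 * 0.145 * 40.0**2      # (1/2) m v^2: a thrown baseball
nuclear = 0.001 * C**2               # E = m c^2: one gram of mass destroyed
photon = H * 5.45e14                 # E = h f: a photon of green light
heat = C_WATER * 1.0 * 1.0           # warm one gram of water by 1 deg C

for name, joules in [("baseball KE", kinetic), ("1 g of mass", nuclear),
                     ("green photon", photon), ("1 g water, +1 C", heat)]:
    print(f"{name:>16}: {joules:.3e} J")
```

One unit spans twenty orders of magnitude here, which is exactly the "one thing, many manifestations" point.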
None of that is in the current Massachusetts framework for high school chemistry! And if it was, it would be broken down into bullets, and we would be pressured and "professionally developed" to
follow test preparation teaching methods to turn it into disconnected gibberish. I know: I do still teach it, with all my strength, in a low income public school where all our resources are being
stolen by these lying, cheating data-driven profiteers.
Posted by: mport84 | August 7, 2010 9:25 AM | Report abuse
None of that is in the current Massachusetts framework for high school chemistry! And if it was, it would be broken down into bullets, and we would be pressured and "professionally developed" to
follow test preparation teaching methods to turn it into disconnected gibberish. I know: I do still teach it, with all my strength, in a low income public school where all our resources are being
stolen by these lying, cheating data-driven profiteers.
Posted by: mport84
Yes, the measurements are there for describing potential and kinetic energy, but how do these correlate to the force field of a magnet or the motion of atoms? The energy of a lump of coal could be
measured by burning it, in the same way that the energy of a piece of wood could be measured. Yet there will be differences, and simply stating "this is potential energy" hides these differences.
Is electricity that is moving in transmission lines only potential energy until the electricity is used by a device?
"Moving" can be changed to "poised to move" if one is speaking of AC.
Agree with you that the standardized testing is mostly a waste of money.
I can see having basic standardized tests for reading and arithmetic but question the money spent for science, writing, and various other subjects for standardized tests.
Instead of spending money on these standardized tests it would be better to spend the money for more equipment in science labs for experiments, and computers in writing classes.
Let teachers grade students in these classes and use the expense of standardized tests for other purposes.
My physics class in my last year of high school was an elective, and I dropped it quickly when I saw that it would be doing pound-foot measurements and that I could get out of school at 2:15 instead of
I could see teaching basic logic in the public schools with a standardized test, but the politicians would never allow this, since the voters would later recognize the false premises of the
politicians. It is very interesting that everyone wants students to think, yet no one is willing to allow logic into the public schools, with its basic laws of thinking logically. Think for
yourself, but we will not teach you the basic rules of logic.
Posted by: bsallamack | August 8, 2010 3:35 PM | Report abuse
The comments to this entry are closed.
Golf, IL Math Tutor
Find a Golf, IL Math Tutor
...I continue teaching my English tutorial students and private clarinet students of all ages. I am very enthusiastic about helping students better learn to read, write and think, and my own
passion for literature continues to grow. I enjoy the many successes both my English and instrumental stude...
15 Subjects: including ACT Math, reading, grammar, writing
...As an undergraduate, I worked with my school, Towson University, tutoring peers in various entry-level to advanced biology and chemistry topics, including multiple sessions tutoring other
students in my same genetics class due to a lack of tutors who had already completed the course. I graduated...
26 Subjects: including trigonometry, ACT Math, discrete math, ASVAB
...I have taught Mandarin Chinese in public school for three years. I am a native speaker. My ISBE teaching certification includes Mandarin Chinese, science endorsements, and self-contained classroom.
28 Subjects: including SAT math, Chinese, GRE, algebra 1
...I have seven years of high school coaching experience at all levels, in addition to being an all-conference high school basketball player. I serve as the college counselor at our high school,
so I have had the opportunity to visit many colleges as well as to stay current with trends in higher ed...
20 Subjects: including prealgebra, algebra 1, ACT Math, Spanish
...I taught at the collegiate level for 8 years at Notre Dame, the Illinois Institute of Technology, and, most recently, at the University of Chicago, where I completed a four-year fellowship in
2012. At U. of C., I taught humanities courses in philosophy and literature to first-year students and a...
21 Subjects: including discrete math, differential equations, linear algebra, algebra 1
Winnetka, IL Math Tutors
Automatic Theorem Proving with Renamable and Semantic Resolution
Results 1 - 10 of 27
- Artificial Intelligence , 2000
Cited by 52 (8 self)
In this paper we provide algorithms for reasoning with partitions of related logical axioms in propositional and first-order logic (FOL). We also provide a greedy algorithm that automatically
decomposes a set of logical axioms into partitions. Our motivation is two-fold. First, we are concerned with how to reason e#ectively with multiple knowledge bases that have overlap in content.
Second, we are concerned with improving the e#ciency of reasoning over a set of logical axioms by partitioning the set with respect to some detectable structure, and reasoning over individual
partitions. Many of the reasoning procedures we present are based on the idea of passing messages between partitions. We present algorithms for reasoning using forward message-passing and using
backward message-passing with partitions of logical axioms. Associated with each partition is a reasoning procedure. We characterize a class of reasoning procedures that ensures completeness and
soundness of our message-passing ...
- In Proceedings IJCAI-93 , 1993
Cited by 32 (2 self)
SCOTT (Semantically Constrained Otter) is a resolution-based automatic theorem prover for first order logic. It is based on the high performance prover OTTER by W. McCune and also incorporates a
model generator. This finds finite models which SCOTT is able to use in a variety of ways to direct its proof search. Clauses generated by the prover are in turn used as axioms of theories to be
modelled. Thus prover and model generator inform each other dynamically. This paper describes the algorithm and some sample results. SCOTT (Semantically Constrained Otter) is a resolution based
automatic theorem prover for first order logic. So much is hardly revolutionary. What is new in SCOTT is the way in which it blends traditional theorem proving methods, best seen as purely syntactic,
with techniques for semantic investigation more usually associated with constraint satisfaction problems. Thus it bridges two aspects of the science of reasoning. It was made by marrying an existing
- In Proceedings of CADE-12 , 1994
Cited by 28 (14 self)
A previous work on Herbrand model construction is extended in two ways. The first extension increases the capabilities of the method, by extending one of its key rules. The second, more important
one, defines a new method for simultaneous search of refutations and models for set of equational clauses. The essential properties of the new method are given. The main theoretical result of the
paper is the characterization of conditions assuring that models can be built. Both methods (for equational and non equational clauses) have been implemented as an extension of OTTER. Several running
examples are given, in particular a new automatic solution of the ternary algebra problem first solved by Winker. The examples emphasize the unified approach to model building allowed by the ideas
underlying our method and the usefulness of using constrained clauses. Several problems open by the present work are the main lines of future work. 1 Introduction It is trivial to say that the use of
models o...
, 2003
Cited by 26 (4 self)
Query answering over commonsense knowledge bases typically employs a first-order logic theorem prover. While first-order inference is intractable in general, provers can often be hand-tuned to answer
queries with reasonable performance in practice.
, 1994
Cited by 24 (3 self)
We propose a method for combining the clause linking theorem proving method with theorem proving methods based on orderings. This may be useful for incorporating term-rewriting based approaches into
clause linking. In this way, some of the propositional inefficiencies of ordering-based approaches may be overcome, while at the same time incorporating the advantages of ordering methods into clause
linking. The combination also provides a natural way to combine resolution on non-ground clauses, with the clause linking method, which is essentially a ground method. We describe the method, prove
completeness, and show that the enumeration part of clause linking with semantics can be reduced to polynomial time in certain cases. We analyze the complexity of the proposed method, and also give
some plausibility arguments concerning its expected performance. 1 Introduction There are at least two basic approaches to the study of automated deduction. One approach concentrates on solving...
, 1994
Cited by 22 (3 self)
We analyze the search efficiency of a number of common refutational theorem proving strategies for first-order logic. Search efficiency is concerned with the total number of proofs and partial proofs
generated, rather than with the sizes of the proofs. We show that most common strategies produce search spaces of exponential size even on simple sets of clauses, or else are not sensitive to the
goal. However, clause linking, which uses a reduction to propositional calculus, has behavior that is more favorable in some respects, a property that it shares with methods that cache subgoals. A
strategy which is of interest for term-rewriting based theorem proving is the A-ordering strategy, and we discuss it in some detail. We show some advantages of A-ordering over other strategies, which
may help to explain its efficiency in practice. We also point out some of its combinatorial inefficiencies, especially in relation to goal-sensitivity and irrelevant clauses. In addition, SLD-reso...
, 1991
Cited by 19 (0 self)
A general theory of deduction systems is presented. The theory is illustrated with deduction systems based on the resolution calculus, in particular with clause graphs. This theory distinguishes
four constituents of a deduction system:
• the logic, which establishes a notion of semantic entailment;
• the calculus, whose rules of inference provide the syntactic counterpart of entailment;
• the logical state transition system, which determines the representation of formulae or sets of formulae together with their interrelationships, and also may allow additional operations reducing
the search space;
• the control, which comprises the criteria used to choose the most promising from among all applicable inference steps.
Much of the standard material on resolution is presented in this framework. For the last two levels many alternatives are discussed. Appropriately adjusted notions of soundness, completeness,
confluence, and Noetherianness are introduced in order to
- In Computer Science Logic (9th Int. Workshop CSL'95), 1996
"... . Few year ago we have developed an Automated Deduction approach to model building. The method, called RAMC 1 looks simultaneously for inconsistencies and models for a given formula. The
capabilities of RAMC have been extended both for model building and for unsatisfiability detection by including ..."
Cited by 18 (4 self)
A few years ago we developed an Automated Deduction approach to model building. The method, called RAMC, looks simultaneously for inconsistencies and models for a given formula. The capabilities of RAMC have been extended both for model building and for unsatisfiability detection by including in it the use of semantic strategies. In the present work we go further in this direction and define more general and powerful semantic rules. These rules are an extension of Slagle's semantic resolution. The robustness of our approach is evidenced by proving that the method is also a decision procedure for a wide range of classes decidable by semantic resolution, and in particular by hyperresolution. Moreover, the method builds models for satisfiable formulae in these classes, in particular for satisfiable formulae that do not have any finite model. 1 Introduction Model building and model checking are extremely important topics in Logic and Computer Science. A few years ago we have develop...
- In Proceeding of IJCAI'95 , 1995
"... An extension of semantic resolution is proposed. It is also an extension of the set of support as it can be considered as a particular case of semantic resolution. It is proved sound and
refutationally complete. The extension is based on our former method for model building. The approach uses constr ..."
Cited by 16 (9 self)
An extension of semantic resolution is proposed. It is also an extension of the set of support, as that can be considered a particular case of semantic resolution. It is proved sound and refutationally complete. The extension is based on our former method for model building. The approach uses constrained clauses (or c-clauses), i.e. pairs [[clause : constraint]]. Two important new features are introduced with respect to semantic resolution. Firstly, the method builds its own (finite or infinite) models to guide the search, or to stop it if the initial set of clauses is satisfiable. Secondly, instead of evaluating a clause in an interpretation, it imposes conditions (coded in its rules) to force a c-clause not to be evaluated to true in the interpretation it builds. The extension is limited in this paper to binary resolution, but generalizing it to n-ary resolution should be straightforward. The prover implementing our method is an extension of OTTER and compares advantageously with it ...
One sub=lower class...
07-14-2004 #1
One sub=lower class...
I am seriously thinking of running one SoloX 18" off of two Eclipse DA7232's. I want to compete Meca M5 but don't know if they have anything in Arizona. I could pull off street C in DB Drag. If
you know of any events in Arizona, help me out.
Eclipse CD5000
PPI... DEQ-230
PPI... PC450 4 channel
Eclipse 3640 4 channel
FI 12"BTL
Eclipse DA7232 (2000 watts RMS)<------
Kicker ND 25 Tweets
Kicker RS6 6.5" comps
Kicker R5c 5.25" midrange
Re: One sub=lower class...
I am seriously thinking of running one SoloX 18" off of two Eclipse DA7232's. I want to compete Meca M5 but don't know if they have anything in Arizona. I could pull off street C in DB Drag. If
you know of any events in Arizona, help me out.
Check out USACi at www.caraudioevents.com
There's always tons of comps in Arizona, and if not Arizona, Vegas, which I don't think is too far from there. If you want to travel, there is a 2x weekend in Northern California on the Sept. 25/26th weekend for USACi; there also may be a show Friday night, but it's still up for debate.
Just check out that site, and it should set u up. dB Drag isn't too popular in Arizona.
Re: One sub=lower class...
Cottonwood, AZ---7/24
Glendale, AZ---7/25
I guess if u really want to go u can go in 2 weeks lol.
Also if u can, there's a Triple point event in Vegas this weekend. $45 to enter, which for a triple isn't bad. You should check it out if u like to drive.
Re: One sub=lower class...
I am seriously thinking of running one SoloX 18" off of two Eclipse DA7232's. I want to compete Meca M5 but don't know if they have anything in Arizona. I could pull off street C in DB Drag. If
you know of any events in Arizona, help me out.
I'd personally send more power to the SoloX; 4 kW might get a decent score, but more power couldn't hurt.
Eclipse CD7000
MTX TA7804
Eclipse SC8365
MTX TA92001
9 Elemental Designs 13Ov.2's (15.5 cu ft ported box)
O/1g All around, 2 Yellow tops, big 3.
Re: One sub=lower class...
I'd personally send more power to the SoloX; 4 kW might get a decent score, but more power couldn't hurt.
That is 2000 watts each channel. Equals up to 8000 watts.
Re: One sub=lower class...
god, 8000 watts on a solox 18"... mmmmmmmmm
Re: One sub=lower class...
That is 2000 watts each channel. Equals up to 8000 watts.
4000 watts max.....1000 per channel rms=2000 watts per amp x 2 amps = 4000 watts
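The disputed arithmetic can be laid out explicitly. The per-channel figure below is the rating assumed in this thread, and treating "max" as roughly double the RMS figure is a common marketing convention, not a verified Eclipse spec:

```python
rms_per_channel = 1000   # watts RMS per channel (rating assumed in the thread)
channels = 2             # each DA7232 is a two-channel amp
amps = 2

rms_total = rms_per_channel * channels * amps
print(rms_total)         # 4000 watts RMS with both amps

# "Max"/peak ratings are typically quoted at roughly twice RMS,
# which is where the 8000-watt figure comes from:
print(rms_total * 2)     # 8000 watts "max"
```

The 4000 W RMS number is the one that matters for continuous power; the 8000 W figure is a peak rating.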
Re: One sub=lower class...
Even with "only" 4kW on that sub, you are going to blow the coil if you abuse that sub like most people I know do.....
Lots of thin wire gives great BL (40+) but not so hot (or actually very hot) for power handling...
Yota made funny!
Last edited by Bumpin' Yota; 07-15-2004 at 11:05 PM.
1 volt = [1(kg)(meter^2)] / [(second^3)(ampere)]
1 watt = 1 joule / second
1 watt = (1 Newton)(meter) / second
1 watt = [1 kg/(second^2)] (meter) / second
simplifying we find:
1 watt = [1(kg)(meter)] / (second ^3)
P = (I)(V)
1 watt = (1 volt)(1ampere)
1 watt = ( [1(kg)(meter^2)] / [(second^3)(ampere)] )(1 ampere)
1 watt = [1(kg)(meter^2)] / (second^3)
And that is WHY Power is in the SI units of Watts. enjoy!
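The signature's unit bookkeeping can also be checked mechanically; here is a small sketch that represents each unit by its exponents over the SI base units (kg, m, s, A):

```python
# Dimensions as exponent tuples over the SI base units (kg, m, s, A).
VOLT   = (1, 2, -3, -1)   # 1 volt = kg * m^2 / (s^3 * A)
AMPERE = (0, 0, 0, 1)
WATT   = (1, 2, -3, 0)    # 1 watt = kg * m^2 / s^3 = 1 joule / second

def multiply(a, b):
    """Multiplying physical quantities adds their unit exponents."""
    return tuple(x + y for x, y in zip(a, b))

# P = I * V: a volt times an ampere should come out as a watt.
print(multiply(VOLT, AMPERE) == WATT)   # True
```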
Re: One sub=lower class...
It's 2000 max "per channel", two channels, dual mono. I know it's the max rating, but that is what I plan on running it at.
Re: One sub=lower class...
It's 2000 max "per channel", two channels, dual mono. I know it's the max rating, but that is what I plan on running it at.
4 kW RMS is the power rating you'll have if you use 2 amps, or 8 kW peak. Eclipse uses off terminology at times....
Re: One sub=lower class...
It's 2000 max "per channel", two channels, dual mono. I know it's the max rating, but that is what I plan on running it at.
Do you plan on plugging in to a nuclear reactor to pull off the MAX rating? Max means nothing. :thumbsup
Re: One sub=lower class...
Do you plan on plugging in to a nuclear reactor to pull off the MAX rating? Max means nothing. :thumbsup
No, instead I'm going to plug each amp into 3000 potatoes(series). That should give me the power I need.
Eclipse CD5000
PPI... DEQ-230
PPI... PC450 4 channel
Eclipse 3640 4 channel
FI 12"BTL
Eclipse DA7232 (2000 watts RMS)<------
Kicker ND 25 Tweets
Kicker RS6 6.5" comps
Kicker R5c 5.25" midrange
Re: One sub=lower class...
nuclear reactor vs. potatoes....... not much of a difference if you ask me. good thinking
Re: One sub=lower class...
No, instead I'm going to plug each amp into 3000 potatoes(series). That should give me the power I need.
Re: One sub=lower class...
and maybe the potatoes will bake and u can feed everyone at the show.
Limits along lines with common ends in Hyperbolic Geometry
D. Ruoff and J. Shilleto
Drusbergstr. 17, 8810 Horgen, Switzerland; 3440 Yosemite Ave, El Cerrito, CA, USA 94530
Abstract: Let $P$ be a point outside a pair $a, b$ of boundary parallel lines in the hyperbolic plane, and consider the lengths of the segments that are collinear with $P$ and join $a$ with $b$. The
limit length of these segments is determined in a new, elementary way, and the variation of lengths is found using coordinates developed from Hilbert's Arithmetic of Ends.
Classification (MSC2000): 51M09, 51M10
Electronic version published on: 24 Jun 2010. This page was last modified: 8 Sep 2010.
© 2010 Heldermann Verlag
© 2010 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
A Million Times The Speed Of Light
The reportedly faster-than-light neutrinos at OPERA may be a systematic error, but if these and the data of other neutrino experiments are correct, they hint at a phenomenon that propagates at very many times, perhaps millions of times, the speed of light.
Some, like Cohen and Glashow (http://arxiv.org/abs/1109.6562), only look at the average velocity and then explain to us why the neutrinos could not possibly travel that fast. That is a little silly, because we have known for many years, namely from supernova data, that neutrinos do not travel that fast over long distances. Sure, Glashow is Glashow, so to some this must be a “beautiful refutation of the OPERA result”, even if it ignores that precisely the OPERA results indicate the neutrinos may not have had a constant velocity at all. As I will point out below, C&G should have thought
deeper about the 10 nanoseconds of uncertainty in the data and not be hung up on the 60 nanoseconds early arrival time. (UPDATE: the newest data from OPERA further strengthen the discussion here.
They are afflicted by a 25 ns "jitter" which is clearly separated from the 10 ns statistical error, which is explained in the new article "OPERA Confirms Faster Than Light Neutrinos And Indicates
Ultra Superluminal Small Initial Jumps".)
We already know why the neutrinos could go faster and what new experiments this suggests, why it does not imply time travel or violate causality, and why it is somewhat expected for neutrinos. Now let us focus on what kind of superluminal velocity is indicated.
There are three reasons for expecting extremely high superluminal velocities over short distances. This can be argued looking at three aspects, namely
1) the totality of all neutrino experiments,
2) the expectation from modern emergent relativity, and
3) the small 10 nano second statistical deviation in the OPERA data.
1) Totality of all Neutrino Experiments
The MINOS experiment a few years back already found evidence that neutrinos might move faster than the speed of light c, namely at 1.000051 (+/- 0.000029) c. Supernova 1987A in the Large Magellanic
Cloud 168 thousand light-years away indicated at most a tiny increase over the speed of light. 23 neutrinos were seen over 13 seconds arriving 3 hours earlier than the light. In fact, this time
difference is mostly due to the neutrinos carrying most of the nova’s energy (in a type II supernova) through the outer layers of the star while much visible light emerges only after the shock wave
from the stellar core collapse reaches the surface of the star. OPERA is reported to indicate a velocity of only one part in 100000 above the speed of light.
Looking at all these experiments, the superluminal excess goes down as the total distance over which the neutrinos have traveled goes up. This indicates that they just traveled a short distance x faster than light, after which they slowed down and traveled further with a velocity just under the speed of light. The longer they travel afterward, the less the initial short distance x of superluminal propagation is noticeable as an increase of the average velocity v. The average v equals the large total distance D divided by the total travel time, so it seems as if there is only a small increase over light speed.
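To make the dilution argument concrete, here is a quick sketch. The x = 18 m and V = 10 c figures are illustrative assumptions in the spirit of this article, not measured values: a neutrino covering the first x meters at V and the rest at c has an average speed barely above c over the OPERA baseline, and indistinguishably above c over the supernova distance.

```python
C = 299_792_458.0   # speed of light, m/s

def avg_speed_over_c(D, x=18.0, V=10 * 299_792_458.0):
    """Average speed in units of c if the first x metres are covered
    at V and the remaining D - x metres at c (x and V illustrative)."""
    t = x / V + (D - x) / C
    return D / (C * t)

print(f"{avg_speed_over_c(730e3):.7f}")   # OPERA baseline ~730 km: 1.0000222
print(f"{avg_speed_over_c(1.6e21):.7f}")  # SN 1987A ~168,000 ly:  1.0000000
```

The same 18 meter head start reads as a few parts in 100000 over 730 km, but is completely washed out over 168,000 light-years.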
2) Expectation from Emergent Relativity
I have discussed so-called emergent relativity at great length [see links above and the arXiv paper http://lanl.arxiv.org/abs/0912.3069]. Einstein relativity has been confirmed to emerge naturally in several condensed-state systems (graphene, superfluid helium, crystal dislocations). Relativity is an unsurprising symmetry in condensed states of matter. Particle physics (standard model, Higgs condensate, string theory) and gravity (Einstein-ether) look very much as if they are emergent from an underlying, more fundamental condensate. Now you may hold the opinion that an Einstein-ether is complete nonsense, but even if such is ‘merely a similarity in the mathematical description’, you already agree with everything claimed here!
The limit velocity inside a condensate is the internally valid “light velocity c*”. If you look at the limit velocity in superfluid helium for example, it is the Landau limit, which was first estimated to be 58 meters per second (the last measurement I looked at gives 46 m/s for ⁴He II). Above this velocity, superfluidity breaks down and heat is dissipated, meaning that sound is generated. Sound travels with a velocity V* much faster than the Landau limit, namely several hundred meters per second or more, depending on pressure. Thus, a high V* = 10 c* is to be expected.
If we look at the limit velocity of fluid helium droplets outside of a superfluid helium bath, it is of course our light velocity c. This means that for this system, the limit velocity inside of it
is about c* = 50 meters per second, while velocities outside can go up to V* = 299792458 meters per second, a factor of 10000000 higher!
Thus, if this (namely condensed-state-physics emergent gravity) is any indication at all, and if our universe is describable as a condensed state, you should expect superluminal phenomena with V = 10 c.
If for example our universe has some sort of effective outside like extra dimensions (as string theory indeed claims), you should not be entirely surprised if superluminal phenomena with amazing
velocities V = 10000000 c are possible! By the way: Such could be involved in the Cosmic ray paradox where protons appear with energies far above the Greisen-Zatsepin-Kuzmin Limit.
3) The 10 Nanosecond Uncertainty in the OPERA Data
The third indication that the phenomenon behind the OPERA result has many times the speed of light (but only for about 20 meters around the neutrino creation) comes straight from the data.
Assuming, as is standard, that neutrinos usually travel at just under the speed of light c, and having T denote the 60 nanoseconds early arrival measured at OPERA, the initial distance over which
superluminal propagation with velocity V could have occurred is simply
x = c * T / [ 1 - (c/V) ]
At high superluminal velocity V above 10 c, the approximation x = c * T suffices.
V = 10 c results in x = 20 meters; V = 10000000 c gives x = 18 meters. Note that the two meters of difference here is close to the uncertainty in the data, which is Del T = 10 nanoseconds and thus also corresponds to about three meters. So, depending on the detailed assumptions about the mechanisms perhaps involved, neutrinos could splash around with a wide variety of velocities, around 1000 c say, some maybe 10000 c, some only 100 c, which is obviously a huge difference, and x would still be, surprise surprise, the same 18 meters!
This is different at low superluminal velocities: V = 1.2 c gives x = 108 meters, while V = 1.1 c gives already almost 200 meters, almost double the distance. Any smaller V leads to rapidly larger
results for x. In other words, if you assume any distribution of velocities around a small average V, the standard distribution around x should be very large, namely hundreds of meters, kilometers,
... .
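Plugging the quoted velocities into x = c * T / [ 1 - (c/V) ] with T = 60 ns reproduces the figures above:

```python
C = 299_792_458.0   # speed of light, m/s
T = 60e-9           # 60 ns early arrival at OPERA

def initial_distance(v_over_c, t=T):
    """x = c*t / (1 - c/V): the distance of superluminal propagation at
    V = v_over_c * c needed to explain an early arrival time t."""
    return C * t / (1.0 - 1.0 / v_over_c)

# Prints x = 197.9, 107.9, 20.0 and 18.0 m for the four velocities:
for v in (1.1, 1.2, 10.0, 1e7):
    print(f"V = {v:>10g} c  ->  x = {initial_distance(v):5.1f} m")
```

At high V the distances pile up near c*T, about 18 meters, while for V close to c they grow and spread rapidly; that spread is exactly what the 10 ns error bar constrains.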
However, the error in the data is only 10 nanoseconds. At an assumed small average V = 1.2 c for example, if the uncertainty were only due to statistical noise, 10 ns will translate into a standard
deviation of merely Del x = 18 meters. Do not get confused by the coincidence of having the same value of 18 m; focus instead on that these 18 meters of uncertainty Del x are much smaller than the
difference between 108 meters and 200 meters! The crux is that adding even a small variation of V would spread out the data much more than observed.
At the small superluminal velocities that Cohen and Glashow for example assume, a ridiculously small variation around V is implied. So, basically they "proved" that the OPERA result is a systematic
error afflicting a sub-luminal speed by assuming that it is a systematic error afflicting a sub-luminal speed. If you do not assume what you want to prove right from the start, if you take it as the
statistical error of a superluminal velocity like the OPERA team's data analysis tells us, the result is radically different.
Thus, depending again on many other assumptions about the details of what is actually going on, the relatively small statistical error in the data hints at a very high velocity V around or far above 10 c over a small distance x, consistent with the previous two considerations. This is all perhaps more clearly explained by taking the 25 ns "jitter" of the new OPERA data into account.
A Wonderful Post, Sasha.
I would like to point out that this is not the first time a CERN physicist has argued for V > 10 C, in this case an effect subsequently measured:http://arxiv.org/abs/0706.1661
Jimbo (not verified) | 10/03/11 | 12:12 PM
QM entanglement correlations (see my articles on the EPR paradox, where there is seemingly instantaneous interaction but nothing goes faster than light) are very different from information carrying
signals (neutrinos carry at least the information that an experiment has indeed taken place) going superluminal.
Sascha Vongehr | 10/04/11 | 01:13 AM
Did you check equation 4. in the Glashow paper? Your scheme increases 'del' by almost 10-to-the-power-6 to make the speed 10c, and reduces 'x' by almost 10-to-the-power-of-5 (from almost 1000km to
10m). The energy decay they mention, E_T, falls faster following 'del' than following the distance 'x' or 'L.' In other words the energy decay should be worse for this case?
That said, I don't think any theory paper can refute measured data; only identifying the error or a failure to replicate can be considered refutation.
Ajoy K Thamattoor (not verified) | 10/03/11 | 13:08 PM
The equations in that paper have certain assumptions, for example the neutrinos do not leave the usual standard model (SM) background into extra dimensions and pretty much behave according to the SM
although superluminal interaction (if it at all exists) must be expected to be physics beyond the SM. Given their assumptions, equation 4 shows that the neutrinos would have lost almost all their
energy before reaching Gran Sasso, which is their main argument. That is all fine, given the right assumptions.
Sascha Vongehr | 10/04/11 | 01:30 AM
Unfortunately, there is a real problem with the OPERA and other similar data. I would have liked it not to be the case, but I already pointed out this problem on September 28, the day before Cohen and Glashow, in the introduction of this paper:
Comments on the recent result of the “Measurement of the neutrino velocity with the OPERA detector in the CNGS beam”, http://arxiv.org/abs/1109.6308
and I further developed it on September 29 here:
Astrophysical consequences of the OPERA superluminal neutrino, http://arxiv.org/abs/1109.6630
The problem is that producing a superluminal neutrino costs extra energy, and this energy can be provided only by the mass term of the incoming particle. This mass term decreases like the inverse of
momentum. Therefore, to produce neutrinos above some energy, the critical speed anomaly must propagate to the pion and the kaon, and from them to the proton. This is disastrous.
There is also the spontaneous decay of the neutrino into an electron-positron pair used by Cohen and Glashow, which I had also underlined the day before their article. But it is not even the worst.
Neutrino experiments are very difficult, and there have already been wrong announcements in the past about oscillations and masses.
Luis Gonzalez-Mestres
Luis Gonzalez-Mes... | 10/03/11 | 14:27 PM
Has anyone considered the issue of what happens to neutrinos, or doesn't happen to them, as they travel through matter? I remember that there is an illusion of faster-than-light effects when radio signals (like GPS) travel through the ionosphere and interact with all those free electrons. As neutrinos travel and oscillate, do they experience gravity or matter the same in all phases of their oscillation?
Anonymous (not verified) | 10/04/11 | 20:10 PM
This has been considered here, for instance :
Apparent superluminal neutrino propagation caused by nonlinear coherent interactions in matter
(Submitted on 4 Oct 2011)
However, the authors do not seem to address the question of the spontaneous decay of such neutrinos by emitting e+e- pairs.
Luis Gonzalez-Mestres
Luis Gonzalez-Mes... | 10/05/11 | 05:35 AM
Groton, MA Science Tutor
Find a Groton, MA Science Tutor
...This is a new start, for both of us! There is hard work involved, but now, you know, you will be in a position to understand your subject matter better, whether it is Mathematics, Science, or Engineering (for college students), and get results! The practice sessions I shall plan for you will be straight to the point and will yield the best possible results, gradually and steadily.
6 Subjects: including physics, algebra 1, electrical engineering, prealgebra
...These proofs are often difficult for students to build and make geometry their least favorite branch of math. Precalculus is really just a way to get you acquainted with the main ideas of
calculus: functions, rates of change, and accumulation. Moreover, precalculus attempts to show you that eac...
27 Subjects: including biology, physics, reading, English
...When students start to learn with more ease, it instills their yearning for learning and helps them become lifelong learners. Knowledge is always empowering, and difficulties in learning are just challenges to be met individually. I have an Associate of Business Science Degree from the University of New Hampshire, Durham, NH.
42 Subjects: including physical science, sociology, English, nutrition
I am a retired Psychologist. I've had extensive experience writing reports, listening to people for greater understanding, and verifying information. My graduate school work was completed about 20 years ago, or less, so I retain knowledge of anatomy and physiology, statistics, and psychological theories.
4 Subjects: including psychology, reading, elementary (k-6th), study skills
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including biostatistics, ACT Science, reading, chemistry
Covington, GA Prealgebra Tutor
Find a Covington, GA Prealgebra Tutor
...I am currently working on my doctorate focusing on meeting the needs of individual students. I provide conceptual knowledge through hands-on activities, and I also utilize technology to help
students. My classroom is exciting, and I have never had a student leave me at the end of the year still hating math.
10 Subjects: including prealgebra, geometry, algebra 1, grammar
...I also increased the comprehension skills of the preschoolers by 50%. As a Chemistry Workshop Leader in the University of West Georgia Chemistry Department, I led a group of 12 students to improve their understanding of chemistry. As the Vice President of the National Society of Collegiate Scholars I als...
9 Subjects: including prealgebra, chemistry, calculus, physics
...In high school, Abigail earned the National Merit scholarship through her exemplary SAT scores. She then went on to graduate from Georgia Tech with a degree in Applied Mathematics and a minor in Economics. She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship.
22 Subjects: including prealgebra, reading, physics, calculus
...I am a certified elementary school teacher with 5 years' experience. For three years I taught first grade. Two years I spent teaching remedial math skills to 3rd, 4th, and 5th grades as a
Title I teacher, so I'm an excellent elementary math tutor!
10 Subjects: including prealgebra, reading, geometry, algebra 1
...I push students to not only get questions correct, but to also build their confidence in mathematics. I don't feel that there is a tutoring style that fits every student. I work with whatever
method I, the student, and the parent feel is best for the student.
10 Subjects: including prealgebra, calculus, geometry, algebra 1
Very quick math question
10-24-2005 #1
Very quick math question
Bah, after re-reading my post I guess it's not a very quick question.
So I just took a midterm, and one of the questions was as follows:
lim (1-x^2)
x->+inf -------
I found two ways to go about doing this.
1) divide all by x:
lim (1-x^2)/x
x->+inf -------
lim (1/x-x)
x->+inf -------
When you plug in +inf, you get:
Evaluating to -inf.
I checked this over at the end of the test though, and decided I didn't like this solution, as by the definition of a limit: the left side limit must equal the right side limit...and how do you
evaluate the left and right side of infinity?
So, I erased that answer and proceeded to write answer #2:
lim (1-x^2)
x->+inf -------
As x->+inf, 1/x->0.
lim (1-(1/x)^2)
1/x->0 ----------
So, for lim 1/x->0+, we have:
Which is -inf.
Also, for lim 1/x->0-, we have:
Which is +inf.
lim 1/x->0- does not equal lim 1/x->0+,
Therefore lim 1/x->0 DNE.
So, yah, a bunch of my friends all said -inf is right, but I really don't like the idea of putting that down, as saying it is -inf implies the limit exists and approaches -inf, which isn't necessarily true, as inf isn't defined as a number.... Aurgh.
Last edited by jverkoey; 10-24-2005 at 07:14 PM.
lim (1-x^2)
x->+∞ -------
Since you are talking about x as it gets really large (becomes unbounded), you can keep just the dominant terms in x, so you basically get:
lim -x^2
x->+∞ -------
which can then reduced to
lim -x
then you plug in ∞ and you get -∞. At this point all you are saying is that as x becomes unbounded in the positive direction the results diverge in the negative direction. Since it doesn't
converge to a number the limit does not exist.
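The denominators in the posts above did not survive the forum formatting, so as a numerical sanity check, here is the same argument with an assumed denominator of x + 1 (hypothetical; any denominator that grows like x behaves the same way):

```python
# Denominator x + 1 is an assumed stand-in for the one lost in formatting.
def f(x):
    return (1 - x**2) / (x + 1)

# As x -> +infinity the values run off toward -infinity:
for x in (1e3, 1e6, 1e9):
    print(f(x))

# The 1/x -> 0 substitution: 0+ corresponds to x -> +infinity (f large
# negative), 0- to x -> -infinity (f large positive), so the two
# one-sided limits of the substituted expression disagree.
t = 1e-9
print(f(1 / t) < 0, f(1 / -t) > 0)   # True True
```

Note that x -> +inf only ever probes one "side", which is why the two-sided substitution trick in the first post needs care.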
Thanks Thantos.
Now I must search for that inf ascii value...
start -> run -> charmap -> advanced view -> search for "infi" -> search -> select -> copy -> alt+tab -> paste
The infinity sign isn't ASCII; it'd be Unicode, like delta and all that.
Thantos is correct. But also,
In your reasoning above jver, you said a couple of times that inf/inf = something. This isn't true; inf/inf is an indeterminate form. You need to apply L'Hôpital's rule to evaluate it.
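As a sketch of the suggested L'Hôpital step, again with a hypothetical x + 1 standing in for the lost denominator: for a ±inf/inf form, the limit of the quotient equals the limit of the quotient of the derivatives.

```python
# Hypothetical denominator x + 1 (the original was lost in formatting).
def dnum(x): return -2 * x   # d/dx (1 - x^2)
def dden(x): return 1        # d/dx (x + 1)

# L'Hopital: lim (1 - x^2)/(x + 1) = lim (-2x)/1, which is unbounded below:
for x in (1e2, 1e4, 1e6):
    print(dnum(x) / dden(x))   # -200.0, -20000.0, -2000000.0
```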
Thantos is correct. But also,
In your reasoning above jver, you said a couple of times that inf/inf = something. This isn't true; inf/inf is an indeterminate form. You need to apply L'Hôpital's rule to evaluate it.
Ahh, yes, stupid mistake on my part. So I guess the first method is correct and just using the fact that it approaches infinity implies that it doesn't exist.
I checked this over at the end of the test though, and decided I didn't like this solution, as by the definition of a limit: the left side limit must equal the right side limit...and how do you
evaluate the left and right side of infinity?
You don't. lim x->infinity is different notation and has a mildly different meaning than lim x->c, where c is some real number.
Ahh, yes, stupid mistake on my part. So I guess the first method is correct and just using the fact that it approaches infinity implies that it doesn't exist.
I wouldn't say infinity doesn't exist...I would say it's unreachable.....
Actually infinity is a comparative term.....
"Service of the poor and destitutes is the service of the God"
10-26-2005 #9 | {"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/71362-very-quick-math-question.html","timestamp":"2014-04-19T18:06:20Z","content_type":null,"content_length":"76281","record_id":"<urn:uuid:64302bb4-a1f5-43d1-8d30-85364ac5ba7c>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Panorama City Algebra Tutor
Find a Panorama City Algebra Tutor
...I graduated with a grade point average of 3.4 (3.84 in psychology). I was also on the Dean's List for academic achievement during the Fall 2007, Spring 2010, Fall 2011, Spring 2012, and Fall
2012 semesters. I previously worked as a tutor at Academic Advantage for 2 years. I worked one-on-one ...
12 Subjects: including algebra 1, algebra 2, chemistry, biology
...I can help you. My services are meant for those who are highly committed to learning and desire a tutor with a high degree of both subject knowledge and versatility in tutoring multiple
subjects. Here's why: I have 5 years of consistent teaching/tutoring experience.
26 Subjects: including algebra 2, algebra 1, reading, chemistry
...I spend approximately 5-7 hours a day coding in MATLAB. I have taken a class on numerical methods at Caltech that was done half in mathematica, half in Matlab. I am currently working on a
physics research project studying the structure of a new type of material, a quasicrystal, and the code I am writing for the project is also in mathematica.
26 Subjects: including algebra 1, algebra 2, calculus, physics
...I have worked with many students who were applying to college and graduate schools, helping them to complete their applications. In particular, I have helped them polish their essays and
personal statements and (for graduates) their thesis applications and dissertation statements. ADD is not a disease or a defect, and people with ADD are not disabled.
63 Subjects: including algebra 1, algebra 2, chemistry, English
...I am well aware of the required standards for each grade and subject, and how they build upon one another. I am comfortable meeting the needs of all learners as I have 5 years of experience
teaching gifted students and 3 years as a resource specialist. I have taught Open Court, which is phonics based, in grades 1 and 2 for a total of 4 years.
9 Subjects: including algebra 1, reading, SAT math, elementary (k-6th) | {"url":"http://www.purplemath.com/Panorama_City_Algebra_tutors.php","timestamp":"2014-04-19T07:01:10Z","content_type":null,"content_length":"24178","record_id":"<urn:uuid:e348d0fb-e2c4-4a63-b371-b040121b7f77>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
anytime tutor for algebra
Search phrases used:
• algebra powers formula a+b
• descrete mathematic free download
• excel fractions chart calculator
• Learn Algebra Online
• calculate lineal feet paper
• trivias in algebra
• factoring algebra
• definitions of algebra concepts
• Learn Algebra Online
• manipulating algebra help
• Solve the formula for the specified variable.
• Lesson Plan For Combining Like Terms
• calculator square root os x
• how to connect the t1-83 plus calculator to your computer
• simple equations worksheet
• linear algebra anton pdf
• free download aptitute questions
• free cost accouting class
• Maths powepoints
• free cost accouting class
• college algebra poems
• 8-bit binary calculator
• wwwcom
• permutation combination sums
• learn graph equations free
• sample math trivia questions
• Learn Algebra Online
• using a graphing calculator to calculate domain and range of a parabola
• permutation combination sums
• practice test orleans hanna algebra prognosis
• diff methods of getting the least common multiple
• year 8 math lessons online
• basic algebra poems
• Learn Algebra Online
• factoring on a calculator
• math trivias]
• Learn Algebra Online
• math trivias]
• Learn Algebra Online
• math trivias]
• investigatory maths
• decimals yr 7 common test
• quadratic equations completing the squae
• K8 linear equations worksheets
• permutation combination sums
• algebra fraction equation calculator
• slope formula
• free printable math revision sheets
• what is the square root of 648
• Learn Algebra Online
• examples of math trivias with answers
• 11th class accountancy tutorial
• solving mathematical equations in matlab
• quadratic formula in real life
• examples of math trivia algebra
• ALGEBRA 2 comparison
• Learn Algebra Online
• free cost accouting class
• free aptitude test downloads
• maths test paper online
• Elementary Math Trivia
• Aptitude Solved Question and Answers
• C programming tutorial on algorithms and flowcharts
• diff methods of getting the least common multiple
• help with signed number
• chemistry for dummies download -torrent
• year 8 math lessons online
• permutation combination sums
• quadrilaterals+worksheet+grade 9
• examples of math trivia algebra
• c Aptitude questions
• word problems on multiplying and dividing 2 integers
• method of substitution with decimal numbers.
• permutation combination sums
• method of substitution with decimal numbers.
• inverse proportion hyperbola
• examples of math trivia algebra
• GRE MATH FREE DOWNLOADS
• proportions worksheet
• free cost accouting class
• steps how to solve story problems
• practice with multiplying and dividing fractions
• factorize maths calculator
• the importance of balancing chemical equations
• Learn Algebra Online
• checking fractional algebraic equation
• coupled differential equation in matalb
• simple algebra worksheets
• sample math trivia questions
• square root method
• samples 0f mathematics course syllabus
• Learn Algebra Online
• Learn Algebra Online
• how to solve (x*x*y/x-y) +5 in matlab
• the importance of balancing chemical equations
• step by step algebra solve
• divisor calculator
• boolean algebra solver
• method of substitution with decimal numbers.
• software weget
• samples 0f mathematics syllabus
• subtraction solving problem worksheet
• samples 0f mathematics syllabus
• Square Roots and its properties with examples
• square root for decimal numbers
• permutation combination sums
• expanding formulas for ti-83
• yr 11 biology revision sheets online
• free e book for download on cost accounting
• permutation combination sums
• Learn Algebra Online
• solving for +elipse
• free cost accouting class
• diff methods of getting the least common multiple
• maths work sheets that you don't need to pay for or download
• all about math(math trivia)
• examples of math trivia algebra
• Free Download of basic chemistry, physics and mathematics.ppt
• how to use the fundamental operations on algebraic expressions-real numbers
• solving multiple equation in excel solver
• integration using simple algebraic substitution
• factoring of algebra expressions
• statistics with t-83 calculator | {"url":"http://www.softmath.com/algebra_stats/paul-a-foerster-book-answers-a.html","timestamp":"2014-04-19T04:26:40Z","content_type":null,"content_length":"25791","record_id":"<urn:uuid:5d29b158-6106-4af4-8eb1-0c8fa852fde1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Overview Package Class Use Tree Deprecated Index Help
PREV PACKAGE NEXT PACKAGE FRAMES NO FRAMES
Package org.apache.hadoop.examples.dancing
This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop.
│ Interface Summary │
│ DancingLinks.SolutionAcceptor<ColumnName> │ Applications should implement this to receive the solutions to their problems. │
│ Pentomino.ColumnName │ This interface just is a marker for what types I expect to get back as column names. │
│ Sudoku.ColumnName │ This interface is a marker class for the columns created for the Sudoku solver. │
│ Class Summary │
│ DancingLinks<ColumnName> │ A generic solver for tile laying problems using Knuth's dancing link algorithm. │
│ DistributedPentomino │ Launch a distributed pentomino solver. │
│ DistributedPentomino.PentMap │ Each map takes a line, which represents a prefix move and finds all of the solutions that start with that prefix. │
│ OneSidedPentomino │ Of the "normal" 12 pentominos, 6 of them have distinct shapes when flipped. │
│ Pentomino │ │
│ Pentomino.Piece │ Maintain information about a puzzle piece. │
│ Sudoku │ This class uses the dancing links algorithm from Knuth to solve sudoku puzzles. │
Package org.apache.hadoop.examples.dancing Description
This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. It is a generic model for problems, such as tile placement, where all of the valid choices
can be represented as a large sparse boolean array where the goal is to pick a subset of the rows to end up with exactly 1 true value in each column.
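The search just described — pick a column, try each row that covers it, remove the covered columns and clashing rows, recurse, and restore everything on backtrack — can be sketched without the linked-list machinery. Here is a minimal set-based Algorithm X in Python (an illustration of the idea only, not the Hadoop DancingLinks implementation; the row/column instance is a standard small six-row example):

```python
def exact_cover(columns, rows):
    """Yield subsets of rows that cover every column exactly once.

    columns: {column: set of row names covering it}
    rows:    {row name: list of columns it covers}
    """
    solution = []

    def cover(r):
        # Remove every column that r satisfies, plus every row that
        # clashes with r (i.e. also covers one of those columns).
        removed = []
        for c in rows[r]:
            for r2 in columns[c]:
                for c2 in rows[r2]:
                    if c2 != c:
                        columns[c2].discard(r2)
            removed.append(columns.pop(c))
        return removed

    def uncover(r, removed):
        # Exact inverse of cover(); this undo step is what the doubly
        # linked "dancing links" structure makes O(1) per node.
        for c in reversed(rows[r]):
            columns[c] = removed.pop()
            for r2 in columns[c]:
                for c2 in rows[r2]:
                    if c2 != c:
                        columns[c2].add(r2)

    def search():
        if not columns:                 # every column covered exactly once
            yield sorted(solution)
            return
        c = min(columns, key=lambda k: len(columns[k]))  # fewest choices first
        for r in list(columns[c]):
            solution.append(r)
            removed = cover(r)
            yield from search()
            uncover(r, removed)
            solution.pop()

    yield from search()

# Small example: the unique cover of columns 1..7 is {B, D, F}.
rows = {'A': [1, 4, 7], 'B': [1, 4], 'C': [4, 5, 7],
        'D': [3, 5, 6], 'E': [2, 3, 6, 7], 'F': [2, 7]}
columns = {c: {r for r in rows if c in rows[r]} for c in range(1, 8)}
print(list(exact_cover(columns, rows)))  # [['B', 'D', 'F']]
```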
The package includes two example applications: a pentomino solver and a sudoku solver.
The pentomino includes both a "normal" pentomino set and a one-sided set where the tiles that are different when flipped are duplicated. The pentomino solver has a Hadoop driver application to launch
it on a cluster. In Knuth's paper on dancing links, he describes trying and failing to solve the one-sided pentomino in a 9x10 board. With the advances of computers and a cluster, it takes a small
(12 node) hadoop cluster 9 hours to find all of the solutions that Knuth estimated would have taken him months.
The sudoku solver is so fast, I didn't bother making a distributed version. (All of the puzzles that I've tried, including a 42x42, have taken around a second to solve.) On the command line, give the
solver a list of puzzle files to solve. Puzzle files have a line per row, with columns separated by spaces. The squares either have numbers or '?' to mean unknown.
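For instance, a 9x9 puzzle file might look like this (a made-up grid, shown only to illustrate the format — one line per row, '?' for an unknown square):

```
5 3 ? ? 7 ? ? ? ?
6 ? ? 1 9 5 ? ? ?
? 9 8 ? ? ? ? 6 ?
8 ? ? ? 6 ? ? ? 3
4 ? ? 8 ? 3 ? ? 1
7 ? ? ? 2 ? ? ? 6
? 6 ? ? ? ? 2 8 ?
? ? ? 4 1 9 ? ? 5
? ? ? ? 8 ? ? 7 9
```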
Both applications have been added to the examples jar, so they can be run as:
bin/hadoop jar hadoop-examples-*.jar pentomino pent-outdir
bin/hadoop jar hadoop-examples-*.jar sudoku puzzle.txt
I (Owen) implemented the original version of the distributed pentomino solver for a Yahoo Hack day, where Yahoos get to work on a project of their own choosing for a day to make something cool. The
following afternoon, everyone gets to show off their hacks and gets a free t-shirt. I had a lot of fun doing it.
Copyright © 2009 The Apache Software Foundation | {"url":"http://hadoop.apache.org/docs/r1.0.4/api/org/apache/hadoop/examples/dancing/package-summary.html","timestamp":"2014-04-18T07:27:29Z","content_type":null,"content_length":"12896","record_id":"<urn:uuid:b248e0f6-643a-4a32-b1e2-4fc007aaa025>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00615-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is Cosmology Solved?
3.2. Expansion Rate and Time
Since we are considering what the Friedmann-Lemaître model does and does not predict we should note that the model allows solutions without a Big Bang, that trace back through a bounce to contraction from arbitrarily low density. This requires z_max ~ |Ω_R| / Ω_m. The bounce case is seldom mentioned, and I suspect rightly so, for apart from the bizarre initial conditions the redshift z_max required for light element production requires quite unacceptable density parameters. If this assessment is valid we are left with Friedmann-Lemaître solutions that trace back to infinite density, which is bizarre enough but maybe can be finessed by inflation and resolved by better gravity physics.
A Friedmann-Lemaître model that expands from exceedingly high density predicts that stellar evolution ages and radioactive decay ages are less than the cosmological expansion time t_0. Numerical examples are
The Hubble Space Telescope Key Project (Freedman et al. 1998; Madore et al. 1998) reports
The systematic error includes length scale calibrations common to most measurements of H_0. A recent survey of evolution ages of the oldest globular cluster stars yields 11.5 ± 1.3 Gyr (Chaboyer et al. 1998). We have to add the time for expansion from very high redshift to the onset of star formation; a commonly used nominal value is 1 Gyr. If the universe is 14 Gyr old this would put the onset of star formation at z ~ 5 in the Einstein-de Sitter model, z ~ 6 if Ω_m = 0.25 and Ω_Λ = 0.75. Since star forming galaxies are observed in abundance at z ~ 3 (Pettini et al. 1998 and references therein) this is conservative. These numbers give
where the standard deviations have been added in quadrature.
The result agrees with the low density models in equation (10). The Einstein-de Sitter case is off by 1.6 standard deviations, not a serious discrepancy. It could be worse: Pont et al. (1998) put the minimum stellar evolution age at 13 Gyr. With 1 Gyr for star formation this would make the Einstein-de Sitter model more than 2.6 standard deviations off. Keeton & Kochanek (1997) puts the Hubble parameter at h = 0.51 ± 0.14. At t_0 = 14 Gyr this says H_0 t_0 = 0.73 ± 0.20, nearly centered on the Einstein-de Sitter value. An elegant argument based on the globular cluster distance to the Coma Cluster of galaxies leads to a similar conclusion (Baum 1998). Most estimates of H_0 are larger, however, and the correction to t_0 for the time to abundant star formation is conservative, so in line 1b of Table 2 I give the Einstein-de Sitter model a modest demerit for its expansion time.
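The dimensionless product H_0 t_0 quoted above is a direct unit conversion from h and t_0, since the Hubble time is 1/H_0 = 9.78 h^-1 Gyr. A quick check of the Keeton & Kochanek figure:

```python
# 1/H0 = 9.78/h Gyr for H0 = 100h km/s/Mpc, so H0*t0 = h * t0 / 9.78.
HUBBLE_TIME_GYR = 9.78  # 1/H0 in Gyr at h = 1

def dimensionless_age(h, t0_gyr):
    return h * t0_gyr / HUBBLE_TIME_GYR

# h = 0.51 (Keeton & Kochanek 1997), t0 = 14 Gyr:
print(round(dimensionless_age(0.51, 14.0), 2))  # 0.73, as quoted in the text
# For comparison, the Einstein-de Sitter model predicts H0*t0 = 2/3 ~ 0.67.
```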
The low density cases pass the time-scale constraint at the accuracy of the present measurements. Since a satisfactory and, it is to be hoped, feasible measurement would distinguish between the Ω_m ~ 0.25 open and flat cases I lower their grades from this test to
Fords Prealgebra Tutors
...Understanding equations and numbers in general ensures a steady base of operation for anyone aspiring to more than a cashier. Associates degree in Biology. Also my main area of interest.
13 Subjects: including prealgebra, reading, chemistry, geometry
...I can also help you prepare for NY state Regents, PSAT, SAT, ACT, SSAT, ISEE, GRE, SAT subject tests in Math (both IC and IIC), GED, GRE, GMAT, LSAT, Praxis tests. I have experience tutoring
students with autism, ADD, ADHD, and other learning problems. In preparation for any of the above math-b...
55 Subjects: including prealgebra, English, calculus, reading
...I have also worked with middle and high school students. Over the years, I have gained experience working with students who have a wide variety of learning styles. For something to "click" it must be presented in a way that makes sense to you based on what you already understand and how you process information.
10 Subjects: including prealgebra, calculus, geometry, statistics
...I have taught at the local college. I have tutored at least 10 students over the last 5 years in geometry and have had very good results on the geometry regents. I have been teaching algebra
and advanced algebra and trigonometry over the last 10 years.
20 Subjects: including prealgebra, geometry, algebra 1, GRE
...Without good reading and analytical skills, it is hard to score well on the ACT science test. I am a certified math teacher (NJ grades 7-12) with 7 years of full-time classroom teaching
experience, including the supervision of student teachers. I have worked as a tutor since 1998, helping stude...
23 Subjects: including prealgebra, English, calculus, geometry | {"url":"http://www.algebrahelp.com/Fords_prealgebra_tutors.jsp","timestamp":"2014-04-20T08:29:14Z","content_type":null,"content_length":"24681","record_id":"<urn:uuid:57c1f451-70ea-46f8-a13a-1d779cb9a921>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dighton, MA Math Tutor
Find a Dighton, MA Math Tutor
...I found some of the greatest satisfaction in my jobs came from teaching and assisting my staff, so I am now excited to be focusing my career towards this type of pursuit. Over the last three
years I have been either tutoring or working in classroom settings. In 2010 I earned my Massachusetts Physics Teaching License.
12 Subjects: including algebra 1, algebra 2, calculus, chemistry
...My goal is to help encourage students, and prove that they can excel in a subject, and believe that building up one's confidence is the key to success.I earned an A in Genetics at Brown
University. I also work in a genetics research laboratory at Brown. I worked closely with my peers while taking Genetics, and tutored many of my friends in the subject.
39 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I've been successfully tutoring students for more than 8 years. I teach a wide variety of subjects, and will work with anyone from grades 5 through Adult. I specialize in middle and high
school Math (Algebra 2 is my favorite), and standardized test prep: PSAT, SAT, GED, ACT, etc.
41 Subjects: including algebra 1, algebra 2, American history, biology
...I presently teach in a public school. I teach direct study skills to five students every day in the classroom I teach. These skills and strategies are either derived from IEP's, district
policies and the common core standards.
33 Subjects: including algebra 1, geometry, prealgebra, reading
I have a great deal of experience in the engineering field. I offer tutoring services in Mathematics, Mechanical Engineering, Microsoft Applications and Business. My tutoring approach can be best
described as mentoring where I focus on building ability to learn rather than focussing on learning one concept at a time.
21 Subjects: including algebra 1, saxophone, marketing, elementary math
Related Dighton, MA Tutors
Dighton, MA Accounting Tutors
Dighton, MA ACT Tutors
Dighton, MA Algebra Tutors
Dighton, MA Algebra 2 Tutors
Dighton, MA Calculus Tutors
Dighton, MA Geometry Tutors
Dighton, MA Math Tutors
Dighton, MA Prealgebra Tutors
Dighton, MA Precalculus Tutors
Dighton, MA SAT Tutors
Dighton, MA SAT Math Tutors
Dighton, MA Science Tutors
Dighton, MA Statistics Tutors
Dighton, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/dighton_ma_math_tutors.php","timestamp":"2014-04-19T05:28:47Z","content_type":null,"content_length":"23840","record_id":"<urn:uuid:ca24ee48-06ad-412f-b5c0-b9fa26e1b33b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00285-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Derivative via mathematica
cai wrote:
> Hi,
> I just used mathematica for a couple of days. I am trying to compute
> the derivative under mathematica. Because the function is complicated,
> I like to break it down.
> f[t_] = m/(1 + Exp[1/t] + b)
> Here m and b are functions of t.
> If I directly use command D after inserting the m and b terms, a very
> complicated equation is generated, which I do not want.
> What I want is if I define the values of m' and b', rewrite the f
> m' = p
> b' = q // well, I don't know how to define it, this is the idea
> f[t_, m[t], b[t]] = m/(1 + Exp[1/t] + b)
> then use the command D[f[t,m[t],b[t]],t] and hopefully get an equation which is
> the function of t, p and q. How to do that?
> A related question, I tried to use non-defined function In[19]:= m[t_]
> Out[19]= m[t_]
> In[20]:= D[m[t],t]
> Out[20]= m'[t]
> and expected D[f[t,m[t],b[t]],t] contains m'[t]. Is it possible?
> Basically, it is a chain derivative question, I just want it to stop
> earlier.
> Could somebody help?
> Thanks.
> ccai1@ohiou.edu
$Line = 0;
Wan, here are some suggestions and comments. They do not address all of
your points but I hope that they may be of help - please mail me if you
need more help.
Mathematica, of course, uses the standard results
D[a f[t] + b g[t], t]
a f'[t] + b g'[t]
D[f[t] g[t], t]
g[t] f'[t] + f[t] g'[t]
D[f[g[t]], t]
f'[g[t]] g'[t]
And we can put in values for f and g , f' and g' retrospectively. This
will be demonstrated in the treatment of the example below
f1[t_] = m/(1+Exp[1/t] +b)
m/(1 + b + E^(1/t))
Mathematica assumes that m and b are independent of t.
D[f1[t], t]
(E^(1/t) m)/((1 + b + E^(1/t))^2 t^2)
Which is not what you want.
We can avoid this by using
f2[t_] = m[t]/(1+Exp[1/t] +b[t])
m[t]/(1 + E^(1/t) + b[t])
Now we get,
d2 = D[f2[t], t]
-((m[t] (-(E^(1/t)/t^2) + b'[t]))/(1 + E^(1/t) + b[t])^2) + m'[t]/(1 + E^(1/t) + b[t])
If we define the functions m and b
m[t_] := Sin[t]; b[t_]:= Exp[k t]
we get
Cos[t]/(1 + E^(1/t) + E^(k t)) - ((E^(k t) k - E^(1/t)/t^2) Sin[t])/(1 + E^(1/t) + E^(k t))^2
It is possible to define the derivatives of these functions, but we cannot consistently define a function and its derivative independently, so we clear the definitions above (there are also programming reasons for this)
m'[t_] = Cos[t^2]; b'[t_]:= t Exp[k t];
These derivative definitions are used
Cos[t^2]/(1 + E^(1/t) + b[t]) - ((-(E^(1/t)/t^2) + E^(k t) t) m[t])/(1 + E^(1/t) + b[t])^2
but Mathematica has left m[t]; it has not integrated m'[t] to find it.
Defining derivatives can be useful but perhaps not so much here.
Incidentally the derivatives are stored under Derivative, not m and b.
?Derivative
f' represents the derivative of a function f of one argument.
Derivative[n1, n2, ... ][f] is the general form,
representing a function obtained from f by differentiating
n1 times with respect to the first argument, n2 times with
respect to the second argument, and so on.
Derivative[1][m][t_] = Cos[t^2]
Derivative[1][b][t_] := t*Exp[k*t]
This, and the notation m'[t], b'[t], leads us to the important contrast
between functions and formulas:
g' is the first derivative of the function g; g'[t] is its value at t -
it is
the same as D[g[t],t], the derivative of the formula g[t] with respect
to the variable t.
The FullForm of g' is Derivative[1][g].
Once you get into pure functions you may wish to use a different approach which reduces the chance of interference between definitions.
But first, I need to clear the definitions of the derivatives.
f2[t]/.{ m -> Function[t, Sin[t]],
b -> Function[t, Exp[k t]]}
Sin[t]/(1 + E^(1/t) + E^(k t))
d2/.{ m ->Sin,
b -> Function[t, Exp[k t]]}
Cos[t]/(1 + E^(1/t) + E^(k t)) - ((E^(k t) k - E^(1/t)/t^2) Sin[t])/(1 + E^(1/t) + E^(k t))^2
or, more briefly,
d2/.{ m -> Sin,b -> (Exp[k #]&)}
Cos[t]/(1 + E^(1/t) + E^(k t)) - ((E^(k t) k - E^(1/t)/t^2) Sin[t])/(1 + E^(1/t) + E^(k t))^2
A variant of your idea for f can be implemented by
f[m_,b_]= f2[t]
m[t]/(1 + E^(1/t) + b[t])
D[f[ Cos, Exp[k #]&],t]
-(((E^(k t) k - E^(1/t)/t^2) Cos[t])/(1 + E^(1/t) + E^(k t))^2) - Sin[t]/(1 + E^(1/t) + E^(k t))
You might like to look up the following in the Help Browser: D, and Dt (total derivative).
Please notice that these functions also deal with several variables.
--
Allan Hayes
Training and Consulting
Leicester, UK
voice: +44 (0)116 271 4198
fax: +44 (0)116 271 8642 | {"url":"http://forums.wolfram.com/mathgroup/archive/1998/Jan/msg00309.html","timestamp":"2014-04-16T10:23:48Z","content_type":null,"content_length":"41027","record_id":"<urn:uuid:db450f72-caa3-4907-93f7-b89c6532a5d9>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |