Multiples of 8 plus 1: Fair enough - I'm an engineer... I know just enough math to be dangerously wrong sometimes. You might enjoy this one if you haven't seen it already.

Multiples of 8 plus 1: Ah, too bad - did you publish any of these, by chance?

Carpeting a Donut: The cool thing in the 2nd solution is that it uses intuition about the question: the answer is the same regardless of r1's value, and will remain the same even as it diminishes to 0. It's an interesting way to look at problems in general... of course, you can't always assume that what you think you see or don't see in the question is intentional.

Multiples of 8 plus 1: Very interesting proof. Are there any other patterns like this that exist among squares (or nth powers in general)?

3 Coins: Good call applying Bayes' Theorem that way. My first thought was to use the HH information just to eliminate the all-tails coin. I think your logic is valid.

12 Identical Balls: I've heard the variant of this where there's one ball that's heavier, but that's muuuch easier.
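For readers landing here, the pattern behind the "Multiples of 8 plus 1" puzzle is quick to check: every odd square is a multiple of 8 plus 1, since (2k+1)² = 4k(k+1) + 1 and k(k+1) is always even. A one-loop verification:

```python
# Every odd square is 1 mod 8: (2k+1)^2 = 4k(k+1) + 1, and k(k+1) is even.
for m in range(1, 200, 2):
    assert m * m % 8 == 1
print("all odd squares up to 199^2 are 1 mod 8")
```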
{"url":"http://www.mindcipher.com/users/31","timestamp":"2014-04-16T07:13:33Z","content_type":null,"content_length":"14293","record_id":"<urn:uuid:977bebde-651f-4225-bbdc-dbea076281c8>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
calculating log using ti-84 plus

Author Message

alijxdnel
Posted: Wednesday 11th of Feb 18:09
Hi gals and guys, I require some guidance to work out this "calculating log using ti-84 plus" problem, which I'm unable to do on my own. My math assignment is due and I need guidance to work on graphing equations, monomials and converting fractions. I'm also thinking of hiring a math tutor, but they are very costly. So I would be really appreciative if you can extend some help in solving the problem.
Back to top

espinxh
From: Norway
Posted: Thursday 12th of Feb 08:56
Oh boy! You seem to be one of the best students in your class. Well, use Algebra Buster to solve those equations. The software will give you a detailed step-by-step solution. You can read the explanation and understand the problems. Hopefully your "calculating log using ti-84 plus" class will be the best one.
Back to top

Mibxrus
Posted: Thursday 12th of Feb 15:20
Hello Friend, Algebra Buster helped me with my learning sessions last month. I got Algebra Buster from http://www.algebra-online.com/algebra-faq.htm. Go ahead, check that and keep us updated about your opinion. I have even recommended Algebra Buster to a number of my friends at school.
Back to top

jenx
Posted: Saturday 14th of Feb 10:47
Please do not take me negatively. I am searching for no shortcut. It's just that I don't seem to get enough time to try solving problems again and again. I will use the software as a reference only. Where can I find it?
Back to top

Matdhejs
Posted: Saturday 14th of Feb 20:48
Here it is: http://www.algebra-online.com/why-algebra-buster.htm. Just a few clicks and math won't be a difficulty anymore. All the best and have fun with the program!
Back to top

Gools
From: UK
Posted: Sunday 15th of Feb 08:25
I would advise using Algebra Buster. It not only assists you with your math problems, but also provides all the necessary steps in detail so that you can improve your understanding of the subject.
Back to top
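For what it's worth, the underlying math is just the change-of-base identity log_b(x) = ln(x)/ln(b): on a TI-84 that's LN(x)/LN(b), or equivalently LOG(x)/LOG(b) (newer OS versions also have a logBASE( template, though that's worth double-checking on your OS). A tiny Python sketch of the same computation (the function name is mine):

```python
import math

def log_base(x, b):
    """Change of base: log_b(x) = ln(x) / ln(b).

    On a TI-84 this is LN(x)/LN(b) or LOG(x)/LOG(b);
    either pair of keys gives the same ratio.
    """
    return math.log(x) / math.log(b)

print(log_base(8, 2))     # ≈ 3, since 2^3 = 8
print(log_base(100, 10))  # ≈ 2
```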
{"url":"http://www.algebra-online.com/algebra-homework-soft/multiplying-fractions/calculating-log-using-ti-84.html","timestamp":"2014-04-17T01:09:40Z","content_type":null,"content_length":"29646","record_id":"<urn:uuid:d1c61056-4c9a-4b10-96dc-9f831d56878c>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
A General Model for Performance Investigations of Priority Based Multiprocessor System
A.K. Ramani, P.K. Chande, P.C. Sharma
IEEE Transactions on Computers, vol. 41, no. 6, pp. 747-754, June 1992, doi:10.1109/12.144626

A general discrete time semi-Markov model is developed to investigate the effects of task priorities on the system performance of a multiprocessor system with a crossbar interconnection network. The number of priority levels associated with the tasks in the system, the connection times of different priority level requests, the interrequest time, the number of processing elements, and the number of shared resources are the parameters involved in estimating the performance of the system.
The bandwidth, queue length at a memory, waiting time for requests at different priority levels, and processor utilization are the performance measures quantified from the analysis. The results reveal the advantage received by the tasks at higher priority levels and the starvation experienced by the lower priority tasks. This information should be useful in real-time task scheduling, load balancing, and performance optimization. The results obtained are validated with simulation.

Index Terms: discrete time semi-Markov model; task priorities; system performance; multiprocessor system; crossbar interconnection network; performance measures; task scheduling; load balancing; performance optimization; Markov processes; multiprocessing systems; multiprocessor interconnection networks; performance evaluation.
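The priority effects described in the abstract (advantage at high priority, starvation at low priority) are easy to reproduce qualitatively in a toy discrete-time simulation. This sketch is purely illustrative: the parameters, the single shared resource, and the strict non-preemptive priority rule are my assumptions, not the paper's semi-Markov analysis:

```python
import random
from collections import deque

random.seed(0)

# Two priority levels contend for one shared resource; each cycle the
# resource, when free, serves the oldest request at the highest
# non-empty priority level.  Parameters are illustrative only.
ARRIVAL_P = {0: 0.3, 1: 0.3}   # per-cycle request probability; level 0 = high
SERVICE = 2                     # connection time, in cycles
queues = {0: deque(), 1: deque()}
waits = {0: [], 1: []}
busy_until = 0

for t in range(100_000):
    for level in (0, 1):
        if random.random() < ARRIVAL_P[level]:
            queues[level].append(t)     # record the arrival cycle
    if t >= busy_until:
        for level in (0, 1):            # strict priority: level 0 served first
            if queues[level]:
                waits[level].append(t - queues[level].popleft())
                busy_until = t + SERVICE
                break

avg_wait = {lvl: sum(w) / len(w) for lvl, w in waits.items()}
print(avg_wait)   # high-priority waits stay short; low-priority waits blow up
```

Because the total offered load exceeds the resource's capacity, the low-priority queue grows without bound while the high-priority class stays stable, which is the starvation effect the abstract describes.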
{"url":"http://www.computer.org/csdl/trans/tc/1992/06/t0747-abs.html","timestamp":"2014-04-18T08:48:40Z","content_type":null,"content_length":"57960","record_id":"<urn:uuid:2eda4813-3722-42ef-a6c1-e5cfd768dfe3>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Interpolation FP1 Formula

Re: Linear Interpolation FP1 Formula
Repeat step 2.
Last edited by bobbym (2013-02-25 23:18:28)
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.

Re: Linear Interpolation FP1 Formula
Doing it by hand, I mean.

Re: Linear Interpolation FP1 Formula
I do not know of a hand method. I think I remember one, though, but do not remember where I saw it.

Re: Linear Interpolation FP1 Formula
Do you remember what it was?

Re: Linear Interpolation FP1 Formula
Also, why does this method solve Pell equations?

Re: Linear Interpolation FP1 Formula
For that we would have to ask Mr. Fermat and Mr. Legendre.

Re: Linear Interpolation FP1 Formula
For which question?

Re: Linear Interpolation FP1 Formula
How taking the cf and convergents solves a Fermat-Pell equation.

Re: Linear Interpolation FP1 Formula
It's an unsolved problem?

Re: Linear Interpolation FP1 Formula
I do not think so. This is all covered in number theory books that do a little computation.

Re: Linear Interpolation FP1 Formula
I will see if I can find out about it somewhere, then... Nothing from adriana yet, 16:41. Usually on Mondays she contacts me while she is at school.

Re: Linear Interpolation FP1 Formula
The method I showed is very important because it finds a simple continued fraction. This simple continued fraction's convergents are the best rational approximations of the constant. Other cf's do not have that property. Look here:
Last edited by bobbym (2013-02-26 04:01:36)

Re: Linear Interpolation FP1 Formula
I have skimmed that page for days, but I've still not been able to find a method for finding a simple continued fraction that differs from yours (I believe they use your method to find the continued fraction for pi). However, I did notice that on the Wiki page for sqrt(3), they said this: "The square root of 3 can be expressed by generalised continued fractions such as [2; -4, -4, -4, ...] which is identical to [1; 1, 2, 1, 2, 1, 2, ...] evaluated at every second term." The trouble is, I don't understand what they mean by that... this seems to be the important bit, because they can get the simple CF from a generalised one. Do you know what they mean by 'evaluated at every second term'?

Re: Linear Interpolation FP1 Formula
I think that is a misprint. Generalized ones are generated using other methods, some of them tricky, unlike the floor method.
Re: Linear Interpolation FP1 Formula
But are the two continued fractions I mentioned identical? They do both converge to root 3, but I am unsure how to show that they're actually the same fraction...

Re: Linear Interpolation FP1 Formula
I think that is a misprint. The convergents of the two are different. There is no 30/17 in the second group of convergents.

Re: Linear Interpolation FP1 Formula
Hmm... so, do you remember the other way to get the simple continued fraction? Every page I go to seems to use your method, but it is difficult to do by hand...

Re: Linear Interpolation FP1 Formula
First things first. does not converge to the √3

Re: Linear Interpolation FP1 Formula
Yes it does... I evaluated it after a few terms and I got root 3 with an error of only 0.0000274%.

Re: Linear Interpolation FP1 Formula
I am getting convergence to something close to 1.76393202250021.

Re: Linear Interpolation FP1 Formula
And, you are using the CF? I am getting root 3...

Re: Linear Interpolation FP1 Formula
Yes that cf is correct but is that
Re: Linear Interpolation FP1 Formula
Yes, they are just distributing the negative through the denominator, I think.

Re: Linear Interpolation FP1 Formula
I do not think that is correct. {2, -4, -4, -4, -4, -4, ...} is which is not converging to the √3
Last edited by bobbym (2013-02-26 04:59:28)

Re: Linear Interpolation FP1 Formula
Hmm, I agree, that one does not converge...
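For what it's worth, the disputed Wikipedia claim does check out numerically if the negative partial quotients are read as subtractions, i.e. [2; -4, -4, ...] meaning 2 - 1/(4 - 1/(4 - ...)). A quick sketch (function names are mine):

```python
import math

def cf_value(coeffs):
    """Evaluate a simple continued fraction [a0; a1, a2, ...] from the back."""
    value = coeffs[-1]
    for a in reversed(coeffs[:-1]):
        value = a + 1 / value
    return value

def negative_cf_value(coeffs):
    """Evaluate b0 - 1/(b1 - 1/(b2 - ...)), one reading of [b0; -b1, -b2, ...]."""
    value = coeffs[-1]
    for b in reversed(coeffs[:-1]):
        value = b - 1 / value
    return value

simple = cf_value([1] + [1, 2] * 10)          # [1; 1, 2, 1, 2, ...]
negative = negative_cf_value([2] + [4] * 10)  # 2 - 1/(4 - 1/(4 - ...))
print(simple, negative, math.sqrt(3))         # all three agree to 6+ digits
```

Under that reading, the truncations of the subtractive form are 2, 7/4, 26/15, 97/56, ..., which are exactly every second convergent of [1; 1, 2, 1, 2, ...] (whose convergents run 1, 2, 5/3, 7/4, 19/11, 26/15, 71/41, 97/56, ...); presumably that is what "evaluated at every second term" means.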
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=255167","timestamp":"2014-04-20T06:00:35Z","content_type":null,"content_length":"37934","record_id":"<urn:uuid:9c0a0e87-8345-45f7-91d5-9bca53f0b24e>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Which interpolation?
Date: Dec 7, 2012 1:13 PM
Author: Cristiano
Subject: Which interpolation?

From a very slow simulation I got y = f(x):

 x      y
 5      0.8048174
 6      0.8194384
 44     0.9706268
 47     0.9724846
 48765  0.9999756
 53765  0.9999776

For every x, I stop the simulation when the confidence interval for y is less than 2.5*10^-6 (with 99% confidence). I can't calculate all the x's (because the simulation is very slow), so I need to interpolate; for example, I don't have y(45) or y(46). Using Levenberg-Marquardt least-squares fitting, the best equation I found gives an error that is too high (about 10^-4 for small x's). Then I thought to use a cubic spline, but I notice some "fluctuations" on the tails. Should I use LM or a spline?
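For the immediate gap at x = 45, 46, even plain piecewise-linear interpolation between the bracketing points is safe, since it cannot oscillate the way an unconstrained cubic spline can on such unevenly spaced, nearly flat data. A minimal stdlib-only sketch:

```python
from bisect import bisect_right

xs = [5, 6, 44, 47, 48765, 53765]
ys = [0.8048174, 0.8194384, 0.9706268, 0.9724846, 0.9999756, 0.9999776]

def lerp(x):
    """Piecewise-linear interpolation on the tabulated (xs, ys) points."""
    i = bisect_right(xs, x) - 1            # index of the left bracketing point
    i = max(0, min(i, len(xs) - 2))        # clamp to the tabulated range
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

print(lerp(45), lerp(46))   # both land between y(44) and y(47)
```

If a smooth curve is wanted, a monotone scheme such as PCHIP (`scipy.interpolate.PchipInterpolator`) is a common way to avoid the tail fluctuations of an ordinary cubic spline, since it preserves the monotonicity of the data.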
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7933822","timestamp":"2014-04-17T05:08:44Z","content_type":null,"content_length":"1695","record_id":"<urn:uuid:a3ba3e5a-6f19-457d-b3b1-0352c33187bc>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00652-ip-10-147-4-33.ec2.internal.warc.gz"}
Get homework help at HomeworkMarket.com
Submitted on Tue, 2012-01-17 20:35; due date not specified; answered 1 time

Initially, a yoga ball is partially blown up and its surface area is measured. Afterwards, it is filled completely and its surface area is found to be 1.60 times as big as it was recorded earlier. By what factor has its volume changed? By what percentage has the radius changed?

Answer (Tue, 2012-01-17 21:48):
Let the initial radius be r and the final radius be R. Since surface area scales as the square of the radius, 1.60 = (R/r)², so R = r·sqrt(1.60) ≈ 1.26491·r. Then
final volume = (4/3)(π)(R³) = (4/3)(π)(1.60^(3/2)·r³) = 2.02386·(4/3)(π)(r³) = 2.02386·(initial volume).
The volume has increased by the factor 1.60^(3/2) ≈ 2.02; the radius has increased by about 26.5%.
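The scaling argument (surface area ∝ r², volume ∝ r³) takes two lines to verify numerically:

```python
import math

area_factor = 1.60
radius_factor = math.sqrt(area_factor)   # A ∝ r^2, so r scales by sqrt(1.60)
volume_factor = area_factor ** 1.5       # V ∝ r^3 = (r^2)^(3/2)

print(radius_factor)   # ≈ 1.2649, i.e. the radius grew by about 26.5 %
print(volume_factor)   # ≈ 2.0239, i.e. the volume roughly doubled
```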
{"url":"http://www.homeworkmarket.com/content/initially-yoga-ball-partially-blown-and-its-surface-area-measured-afterwards-it-filled-compl","timestamp":"2014-04-16T08:29:58Z","content_type":null,"content_length":"49917","record_id":"<urn:uuid:96fd9e5f-61b0-4cb1-a4a6-2cb748bb4e87>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Innersync Studio//campusuite25//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
DTSTAMP:20140418T131317Z
SUMMARY:Math & CS Seminar
DESCRIPTION:Mathematics and Computer Science professor Dr. David Berry will discuss his experiences teaching MATH 115 at Xavier. He says: MATH 115\, Topics in Applied Mathematics\, has always been a joy to teach. For the length of each classroom meeting\, I am totally lost in sharing one of the most pleasurable of all subjects: Mathematics. Join me as I ramble through some of the topics that have given me such joy. See a new algorithm that saved me from utter despair as I developed the online version of this course. All in all\, it has been a walk in the park for me.
LOCATION:G27 Smith Hall
UID:1-29-35551@campusuite.com
DTSTART:20130220T173000Z
DTEND:20130220T183000Z
END:VEVENT
END:VCALENDAR
{"url":"http://www.xavier.edu/campusuite25/public/calendar/feed_me.cfm?format=icalendar&id=35551","timestamp":"2014-04-18T13:13:17Z","content_type":null,"content_length":"2229","record_id":"<urn:uuid:f691f325-84c8-474b-a088-c4ddf5b6f19c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Nuclear Space problem

I need to show that if X is compact, then C(X) is nuclear. Also, is the condition that X is metrisable necessary? I am at present attending a conference, "Recent Advances in Operator Theory". This problem was given by Adam Skalski (Warsaw) at the conference.

oa.operator-algebras fa.functional-analysis

1 Dear Koushik, What do you mean by $C(X)$? Regards, – Emerton Jan 4 '13 at 13:53
2 The only nuclear Banach spaces are the finite-dimensional ones. So I repeat the question: what is $C(X)$? en.wikipedia.org/wiki/Nuclear_space – Gerald Edgar Jan 4 '13 at 14:36
3 He is talking about en.wikipedia.org/wiki/Nuclear_C*-algebra – Tomek Kania Jan 4 '13 at 14:45
4 Dear Koushik - seems odd to be communicating this way, since I'm also at the conference - but the proof can be gleaned using the CPA definition of nuclearity. Use the fact that C(X) has a partition of unity (look at Adam's notes to get an idea exactly how). If you want to find me I'm wearing a purple t-shirt with a man riding a dinosaur saying "To the disco!" :) – Ollie Margetts Jan 5 '13 at 4:48
2 I have talked with Ollie and solved it. – Koushik Jan 6 '13 at 2:51
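For later readers: the partition-of-unity argument hinted at in the comments can be written out as follows. This is a standard textbook sketch of the completely positive approximation property, not taken from Skalski's notes:

```latex
% Fix f_1,\dots,f_m \in C(X) and \epsilon > 0.  By compactness choose a
% finite open cover U_1,\dots,U_n of X on which each f_j oscillates by
% less than \epsilon, a partition of unity (\varphi_i) subordinate to
% the cover, and points x_i \in U_i.  Define unital completely positive maps
\[
\psi \colon C(X) \to \mathbb{C}^n, \qquad
\psi(f) = \bigl(f(x_1), \dots, f(x_n)\bigr),
\]
\[
\varphi \colon \mathbb{C}^n \to C(X), \qquad
\varphi(\lambda_1, \dots, \lambda_n) = \sum_{i=1}^n \lambda_i \varphi_i .
\]
% Then for each j, since \varphi_i vanishes off U_i,
\[
\bigl\| (\varphi \circ \psi)(f_j) - f_j \bigr\|_\infty
  \le \max_{1 \le i \le n} \; \sup_{x \in U_i} \bigl| f_j(x_i) - f_j(x) \bigr|
  < \epsilon ,
\]
% so the identity of C(X) is a point-norm limit of u.c.p. maps factoring
% through the finite-dimensional algebras \mathbb{C}^n; this is the
% completely positive approximation property, i.e. nuclearity.
```

Nothing in the sketch uses metrisability: compact Hausdorff suffices (indeed every abelian C*-algebra is nuclear), which answers the second part of the question.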
{"url":"http://mathoverflow.net/questions/118054/nuclear-space-problem","timestamp":"2014-04-16T22:21:52Z","content_type":null,"content_length":"50867","record_id":"<urn:uuid:9c3c9373-7a45-4d01-8460-436e6da421e6>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Figure 4: Calibration curves for MCYST-LR in three different matrices (MeOH, Stigeoclonium sp. extract, and salmon hydrolyzate) and the computed limit of detection (LOD) for a protonated MCYST-LR molecular ion mass of 995. From (a) to (c): the calibration curve of MCYST-LR in MeOH, the calibration curve of MCYST-LR in Stigeoclonium sp. extract, and the calibration curve of MCYST-LR in salmon hydrolyzate. The blue stars are the maximal intensities evaluated by DataAnalysis, the green triangles are the maximal intensities evaluated by EMP (Expertomica metabolite profiling), and the red line is the computed LOD in a given matrix. The calibration curve trend continues below the LOD value. Units for the graph axes are as follows: μg/mL for the x-axis and counts for the y-axis.
{"url":"http://www.hindawi.com/journals/bmri/2013/414631/fig4/","timestamp":"2014-04-17T17:01:05Z","content_type":null,"content_length":"3889","record_id":"<urn:uuid:514f4b5f-1b7a-4405-9aad-d383247728d6>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
The Binomial Distribution

\(\newcommand{\P}{\mathbb{P}}\) \(\newcommand{\E}{\mathbb{E}}\) \(\newcommand{\R}{\mathbb{R}}\) \(\newcommand{\N}{\mathbb{N}}\) \(\newcommand{\bs}{\boldsymbol}\) \(\newcommand{\var}{\text{var}}\) \(\newcommand{\skew}{\text{skew}}\) \(\newcommand{\kurt}{\text{kurt}}\)

Basic Theory

Suppose that our random experiment is to perform a sequence of Bernoulli trials:
\[ \bs{X} = (X_1, X_2, \ldots) \]
Recall that \(\bs{X}\) is a sequence of independent indicator random variables with common probability of success \(p \in [0, 1]\), the basic parameter of the process. In statistical terms, the first \(n\) trials \((X_1, X_2, \ldots, X_n)\) form a random sample of size \(n\) from the Bernoulli distribution. In this section we will study the random variable that gives the number of successes in the first \(n\) trials and the random variable that gives the proportion of successes in the first \(n\) trials. The underlying distribution, the binomial distribution, is one of the most important in probability theory.

The number of successes in the first \(n\) trials is the random variable
\[ Y_n = \sum_{i=1}^n X_i \]
Thus \(\bs{Y} = (Y_0, Y_1, \ldots)\) is the partial sum process associated with the Bernoulli trials sequence \(\bs{X}\).

The Density Function

The probability density function of \(Y_n\) is given by
\[ \P(Y_n = y) = \binom{n}{y} p^y (1 - p)^{n-y}, \quad y \in \{0, 1, \ldots, n\} \]
Recall that if \((x_1, x_2, \ldots, x_n) \in \{0, 1\}^n\) with \(\sum_{i=1}^n x_i = y\) (that is, a bit string of length \(n\) with exactly \(y\) 1's), then by independence,
\[ \P[(X_1, X_2, \ldots, X_n) = (x_1, x_2, \ldots, x_n)] = p^y (1 - p)^{n - y} \]
Moreover, the number of bit strings of length \(n\) with exactly \(y\) 1's is the binomial coefficient \(\binom{n}{y}\). The distribution with this probability density function is known as the binomial distribution with parameters \(n\) and \(p\).
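The density is easy to tabulate numerically; a quick check that it sums to 1 (the binomial theorem) and a peek at where it peaks, with arbitrary parameters:

```python
from math import comb

def binom_pmf(n, p, y):
    """P(Y_n = y) = C(n, y) * p**y * (1-p)**(n-y)."""
    return comb(n, y) * p**y * (1 - p)**(n - y)

n, p = 10, 0.3
pmf = [binom_pmf(n, p, y) for y in range(n + 1)]
print(sum(pmf))                                  # ≈ 1, by the binomial theorem
print(max(range(n + 1), key=lambda y: pmf[y]))   # mode: floor((n+1)p) = 3
```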
In the binomial coin experiment, vary \(n\) and \(p\) with the scrollbars, and note the shape and location of the probability density function. For selected values of the parameters, run the simulation 1000 times and note the apparent convergence of the relative frequency function to the probability density function.

The binomial theorem provides an additional check that the binomial probability density function really is a probability density function.

The binomial distribution is unimodal:
1. \(\P(Y_n = k) \gt \P(Y_n = k - 1)\) if and only if \(k \lt (n + 1) p\).
2. \(\P(Y_n = k) = \P(Y_n = k - 1)\) if and only if \(k = (n + 1) p\) is an integer between 1 and \(n\).
Thus, the density function at first increases and then decreases, reaching its maximum value at \(\lfloor (n + 1) p \rfloor\). This integer is a mode of the distribution. In the case that \(m = (n + 1) p\) is an integer between 1 and \(n\), there are two consecutive modes, at \(m - 1\) and \(m\).

If \(U\) is a random variable having the binomial distribution with parameters \(n\) and \(p\), then \(n - U\) has the binomial distribution with parameters \(n\) and \(1 - p\). A simple probabilistic proof is based on the interpretation of \(U\) in terms of Bernoulli trials: if \(U\) is the number of successes in \(n\) Bernoulli trials, then \(n - U\) is the number of failures. A simple analytic proof can be constructed using the probability density function of \(U\): \(\P(n - U = k) = \P(U = n - k)\) for \(k \in \{0, 1, \ldots, n\}\).

The mean, variance and other moments of the binomial distribution can be computed in several different ways. Again let \(Y_n = \sum_{i=1}^n X_i\) where \(\bs{X} = (X_1, X_2, \ldots)\) is a sequence of Bernoulli trials with success parameter \(p\).

\(\E(Y_n) = n \, p\). This result follows immediately from the additive property of expected value and the representation of \(Y_n\) as a sum. Recall that \(\E(X_i) = p\) for each \(i\).
The result can also be proven from the probability density function and the definition of expected value. For this proof, the identity \(y \binom{n}{y} = n \binom{n - 1}{y - 1}\) is useful. The expected value of \(Y_n\) also makes intuitive sense, since \(p\) should be approximately the proportion of successes in a large number of trials. We will discuss the point further in the subsection on the proportion of successes.

\(\var(Y_n) = n \, p \, (1 - p)\). This result follows from a basic property of variance: the variance of a sum of independent variables is the sum of the variances. Recall that \(\var(X_i) = p \, (1 - p)\) for each \(i\). The result can also be proven from the definition of variance and the probability density function of \(Y_n\), although this proof is much more complicated.

The graph of \(\var(Y_n)\) as a function of \(p \in [0, 1]\) is a parabola opening downward. In particular, the maximum value of the variance is \(n / 4\) when \(p = 1 / 2\), and the minimum value is 0 when \(p = 0\) or \(p = 1\).

In the binomial coin experiment, vary \(n\) and \(p\) with the scrollbars and note the location and size of the mean/standard deviation bar. For selected values of the parameters, run the simulation 1000 times and note the apparent convergence of the sample mean and standard deviation to the distribution mean and standard deviation.

The probability generating function of \(Y_n\) is \( P_n(t) := \E \left(t^{Y_n} \right) = [(1 - p) + p \, t]^n, \quad t \in \R \). Again, there are several ways to prove this. The simplest is to use the fact that the probability generating function of a sum of independent variables is the product of the probability generating functions of the terms. Recall that \( P(t) := \E(t^{X_i}) = (1 - p) + p \, t\) for each \(i\), so \( P_n(t) = P^n(t) \). A direct proof, using the probability density function, requires the binomial theorem.

The probability generating function provides another way to compute the mean and variance.
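All three facts (the mean, the variance, and the generating function) can be checked numerically straight from the density; \(n\) and \(p\) below are arbitrary:

```python
from math import comb

n, p = 12, 0.4
pmf = {y: comb(n, y) * p**y * (1 - p)**(n - y) for y in range(n + 1)}

mean = sum(y * q for y, q in pmf.items())
var = sum((y - mean) ** 2 * q for y, q in pmf.items())

def pgf(t):
    """P_n(t) = E(t^{Y_n}) computed directly from the density."""
    return sum(t**y * q for y, q in pmf.items())

assert abs(mean - n * p) < 1e-9               # E(Y_n) = n p
assert abs(var - n * p * (1 - p)) < 1e-9      # var(Y_n) = n p (1 - p)
assert abs(pgf(0.7) - ((1 - p) + p * 0.7) ** n) < 1e-9
print(mean, var)
```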
Recall that \( P_n^{(k)}(1) = \E\left[Y_n^{(k)}\right] \). The following is a recursive equation for the moments of the binomial distribution:
\[ \E (Y_n^k) = n \, p \, \E \left[ (Y_{n-1} + 1)^{k-1} \right], \quad n, \; k \in \N_+ \]
To prove it, use the identity \(y \binom{n}{y} = n \binom{n - 1}{y - 1}\). The recursion gives yet one more derivation of the mean and variance of the binomial distribution.

The Proportion of Successes

The proportion of successes in the first \(n\) trials is the random variable
\[ M_n = \frac{Y_n}{n} = \frac{1}{n} \sum_{i=1}^n X_i \]
In statistical terms, \(M_n\) is the sample mean of the random sample \((X_1, X_2, \ldots, X_n)\). The proportion of successes \(M_n\) is typically used to estimate the probability of success \(p\) when this probability is unknown. It is basic to the very notion of probability that if the number of trials is large, then \(M_n\) should be close to \(p\). The mathematical formulation of this idea is a special case of the law of large numbers.

It is easy to express the probability density function of the proportion of successes \(M_n\) in terms of the probability density function of the number of successes \(Y_n\). First, note that \(M_n\) takes the values \(k / n\) where \(k \in \{0, 1, \ldots, n\}\). The probability density function of \(M_n\) is given by
\[ \P \left( M_n = \frac{k}{n} \right) = \binom{n}{k} p^k (1 - p)^{n-k}, \quad k \in \{0, 1, \ldots, n\} \]

In the binomial coin experiment, select the proportion of heads. Vary \(n\) and \(p\) with the scroll bars and note the shape of the probability density function. For selected values of the parameters, run the experiment 1000 times and watch the apparent convergence of the relative frequency function to the probability density function.

\(\E(M_n) = p\). In statistical terms, this means that \(M_n\) is an unbiased estimator of \(p\).

\(\var(M_n) = \frac{p (1 - p)}{n}\).
In particular, \(\var(M_n) \le \frac{1}{4 n}\) for any \(p \in [0, 1]\). Note that \(\var(M_n) \to 0\) as \(n \to \infty\) and the convergence is uniform in \(p \in [0, 1]\). Thus, the estimate improves as \(n\) increases; in statistical terms, this property is known as consistency.

For every \(\epsilon \gt 0\), \(\P(|M_n - p| \ge \epsilon) \to 0\) as \(n \to \infty\) and the convergence is uniform in \(p \in [0, 1]\). This follows from the last exercise and Chebyshev's inequality. The result in the last exercise is a special case of the weak law of large numbers and means that \(M_n \to p\) as \(n \to \infty\) in probability. The strong law of large numbers states that the convergence actually holds with probability 1. See Estimation in the Bernoulli Model in the chapter on Set Estimation for a different approach to the problem of estimating \(p\).

In the binomial coin experiment, select the proportion of heads. Vary \(n\) and \(p\) and note the size and location of the mean/standard deviation bar. For selected values of the parameters, run the experiment 1000 times and note the apparent convergence of the empirical moments to the distribution moments.

Sums of Independent Binomial Variables

Several important properties of the random process \(\bs{Y} = (Y_1, Y_2, \ldots)\) stem from the fact that it is a partial sum process corresponding to the sequence \(\bs{X} = (X_1, X_2, \ldots)\) of independent, identically distributed indicator variables.

\(\bs{Y}\) has stationary, independent increments:
1. If \(m\) and \(n\) are positive integers with \(m \le n\) then \(Y_n - Y_m\) has the same distribution as \(Y_{n-m}\), namely binomial with parameters \(n - m\) and \(p\).
2. If \(n_1 \le n_2 \le n_3 \le \cdots\) then \((Y_{n_1}, Y_{n_2} - Y_{n_1}, Y_{n_3} - Y_{n_2}, \ldots)\) is a sequence of independent variables.

Actually, any partial sum process corresponding to an independent, identically distributed sequence will have stationary, independent increments.
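The weak law argument above combines the variance bound with Chebyshev's inequality. The exact tail probabilities can be compared with the Chebyshev bound \(p(1-p)/(n \epsilon^2)\) numerically (a Python sketch, not part of the original text; \(p = 3/10\) and \(\epsilon = 1/20\) are arbitrary choices):

```python
from fractions import Fraction
from math import comb

def tail_prob(n, p, eps):
    # exact P(|M_n - p| >= eps), computed with rational arithmetic
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(Fraction(k, n) - p) >= eps)

p, eps = Fraction(3, 10), Fraction(1, 20)
ns = (50, 200, 800)
tails = [tail_prob(n, p, eps) for n in ns]
chebyshev = [p * (1 - p) / (n * eps**2) for n in ns]
```

The tails decrease toward 0 as \(n\) grows and never exceed the corresponding Chebyshev bounds, although those bounds are quite loose.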
Suppose that \(U\) and \(V\) are independent random variables, and that \(U\) has the binomial distribution with parameters \(m\) and \(p\), and \(V\) has the binomial distribution with parameters \(n\) and \(p\). Then \(U + V\) has the binomial distribution with parameters \(m + n\) and \(p\).

There are several ways to prove this. The simplest is to use the representation in terms of Bernoulli trials. In the standard notation above, we can identify \(U\) with \(Y_m\) and \(V\) with \(Y_{m+n} - Y_m\). Another simple proof uses probability generating functions. Recall again that the probability generating function of \(U + V\) is the product of the probability generating functions of \(U\) and \(V\). Finally, there is a direct proof using probability density functions. Recall that the PDF of \(U + V\) is the convolution of the PDFs of \(U\) and \(V\).

The joint probability density functions of the process \(\bs{Y}\) are given as follows: if \(n_1 \lt n_2 \lt \cdots \lt n_k\) and \(y_1 \le y_2 \le \cdots \le y_k\) then \[ \P(Y_{n_1} = y_1, Y_{n_2} = y_2, \ldots, Y_{n_k} = y_k) = \binom{n_1}{y_1} \binom{n_2 - n_1}{y_2 - y_1} \cdots \binom{n_k - n_{k-1}}{y_k - y_{k-1}} p^{y_k} (1 - p)^{n_k - y_k} \] where as always, we use the standard conventions for binomial coefficients: \(\binom{a}{b} = 0\) if \(b \lt 0\) or \(b \gt a\).

Connection to the Hypergeometric Distribution

Suppose that \(m, \; n, \; k \in \N_+\) with \(m \le n\) and \(k \le n\). Then \[ \P(Y_m = j \mid Y_n = k) = \frac{\binom{m}{j} \binom{n - m}{k - j}}{\binom{n}{k}}, \quad j \in \{0, 1, \ldots, k\} \] Interestingly, the probability distribution in the last exercise is independent of \(p\). It is known as the hypergeometric distribution with parameters \(n\), \(m\), and \(k\), and governs the number of type 1 objects in a sample of size \(k\) chosen at random and without replacement from a population of \(n\) objects of which \(m\) are type 1 (and the remainder type 0).
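The fact that this conditional distribution does not depend on \(p\) is easy to check numerically, using the joint density of \((Y_m, Y_n - Y_m)\) from the independent increments property (a Python sketch, not part of the original text; the values of \(m, n, k, j\) are arbitrary choices):

```python
from fractions import Fraction
from math import comb

def conditional(m, n, k, j, p):
    # P(Y_m = j | Y_n = k) via P(Y_m = j) P(Y_{n-m} = k - j) / P(Y_n = k)
    num = (comb(m, j) * p**j * (1 - p)**(m - j)
           * comb(n - m, k - j) * p**(k - j) * (1 - p)**((n - m) - (k - j)))
    return num / (comb(n, k) * p**k * (1 - p)**(n - k))

m, n, k, j = 4, 10, 6, 3
hyper = Fraction(comb(m, j) * comb(n - m, k - j), comb(n, k))
vals = [conditional(m, n, k, j, p)
        for p in (Fraction(1, 10), Fraction(1, 2), Fraction(9, 10))]
```

All three values of `vals` coincide with the hypergeometric probability, confirming that \(p\) cancels out.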
Try to interpret this result probabilistically.

The Normal Approximation

Open the binomial timeline experiment. For selected values of \(p \in (0, 1)\), start with \(n = 1\) and successively increase \(n\) by 1. For each value of \(n\), note the shape of the probability density function of the number of successes and the proportion of successes. With \(n = 100\), run the experiment 1000 times and note the apparent convergence of the empirical density function to the probability density function for the number of successes and the proportion of successes.

The characteristic bell shape that you should observe in the previous exercise is an example of the central limit theorem, because the binomial variable can be written as a sum of \(n\) independent, identically distributed random variables (the indicator variables). The standard score of \(Y_n\) is the same as the standard score of \(M_n\): \[ \frac{Y_n - n \, p}{\sqrt{n \, p \, (1 - p)}} = \frac{M_n - p}{\sqrt{p \, (1 - p) / n}} \] The distribution of the standard score given in the previous exercise converges to the standard normal distribution as \(n \to \infty\). This version of the central limit theorem is known as the de Moivre-Laplace theorem, named after Abraham de Moivre and Pierre-Simon Laplace.

From a practical point of view, this result means that, for large \(n\), the distribution of \(Y_n\) is approximately normal, with mean \(n \, p\) and standard deviation \(\sqrt{n \, p \, (1 - p)}\), and the distribution of \(M_n\) is approximately normal, with mean \(p\) and standard deviation \(\sqrt{p \, (1 - p) / n}\). Just how large \(n\) needs to be for the normal approximation to work well depends on the value of \(p\). The rule of thumb is that we need \(n \, p \ge 5\) and \(n \, (1 - p) \ge 5\). Finally, when using the normal approximation, we should remember to use the continuity correction, since the binomial is a discrete distribution.
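The following illustrates the approximation and the effect of the continuity correction (a Python sketch, not part of the original text; \(n = 100\) and \(p = 0.3\) satisfy the rule of thumb, and the probability \(\P(Y_{100} \le 25)\) is an arbitrary choice):

```python
from math import comb, erf, sqrt

def binom_cdf(n, p, x):
    # exact P(Y_n <= x)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def norm_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p = 100, 0.3                      # n p = 30 >= 5 and n (1 - p) = 70 >= 5
mu, sigma = n * p, sqrt(n * p * (1 - p))

exact = binom_cdf(n, p, 25)                          # P(Y_100 <= 25)
with_correction = norm_cdf((25 + 0.5 - mu) / sigma)  # continuity correction
without_correction = norm_cdf((25 - mu) / sigma)
```

With the correction, the normal value is much closer to the exact binomial probability than without it.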
Other Connections

The binomial distribution is a general exponential distribution. Suppose that \(Y\) has the binomial distribution with parameters \(n\) and \(p\), where \(n\) is fixed and \(p \in (0, 1)\). This distribution is a one-parameter exponential family with natural parameter \(\ln \left( \frac{p}{1 - p} \right)\) and natural statistic \(Y\). Note that the natural parameter is the logarithm of the odds ratio corresponding to \(p\). This function is sometimes called the logit function.

Examples and Applications

A student takes a multiple choice test with 20 questions, each with 5 choices (only one of which is correct). Suppose that the student blindly guesses. Let \(X\) denote the number of questions that the student answers correctly. Find each of the following:
1. The probability density function of \(X\).
2. The mean of \(X\).
3. The variance of \(X\).
4. The probability that the student answers at least 12 questions correctly (the score that she needs to pass).

1. \(\P(X = x) = \binom{20}{x} \left(\frac{1}{5}\right)^x \left(\frac{4}{5}\right)^{20-x}, \quad x \in \{0, 1, \ldots, 20\}\)
2. \(\E(X) = 4\)
3. \(\var(X) = \frac{16}{5}\)
4. \(\P(X \ge 12) \approx 0.000102\). She has no hope of passing.

A certain type of missile has failure probability 0.02. Let \(Y\) denote the number of failures in 50 tests. Find each of the following:
1. The probability density function of \(Y\).
2. The mean of \(Y\).
3. The variance of \(Y\).
4. The probability of at least 47 successful tests.

1. \(\P(Y = y) = \binom{50}{y} \left(\frac{1}{50}\right)^y \left(\frac{49}{50}\right)^{50-y}, \quad y \in \{0, 1, \ldots, 50\}\)
2. \(\E(Y) = 1\)
3. \(\var(Y) = \frac{49}{50}\)
4. \(\P(Y \le 3) \approx 0.9822\)

Suppose that in a certain district, 40% of the registered voters prefer candidate \(A\). A random sample of 50 registered voters is selected. Let \(Z\) denote the number in the sample who prefer \(A\). Find each of the following:
1. The probability density function of \(Z\).
2. The mean of \(Z\).
3. The variance of \(Z\).
4. The probability that \(Z\) is less than 19.
5. The normal approximation to the probability in (d).

1. \(\P(Z = z) = \binom{50}{z} \left(\frac{2}{5}\right)^z \left(\frac{3}{5}\right)^{50-z}, \quad z \in \{0, 1, \ldots, 50\}\)
2. \(\E(Z) = 20\)
3. \(\var(Z) = 12\)
4. \(\P(Z \lt 19) = 0.3356\)
5. \(\P(Z \lt 19) \approx 0.3330\)

Coins and Dice

Recall that a standard die is a six-sided die. A fair die is one in which the faces are equally likely. An ace-six flat die is a standard die in which faces 1 and 6 have probability \(\frac{1}{4}\) each, and faces 2, 3, 4, and 5 have probability \(\frac{1}{8}\) each.

A standard, fair die is tossed 10 times. Let \(N\) denote the number of aces. Find each of the following:
1. The probability density function of \(N\).
2. The mean of \(N\).
3. The variance of \(N\).

1. \(\P(N = k) = \binom{10}{k} \left(\frac{1}{6}\right)^k \left(\frac{5}{6}\right)^{10-k}, \quad k \in \{0, 1, \ldots, 10\}\)
2. \(\E(N) = \frac{5}{3}\)
3. \(\var(N) = \frac{25}{18}\)

A coin is tossed 100 times and results in 30 heads. Find the probability density function of the number of heads in the first 20 tosses. Let \(Y_n\) denote the number of heads in the first \(n\) tosses. \[ \P(Y_{20} = y \mid Y_{100} = 30) = \frac{\binom{20}{y} \binom{80}{30 - y}}{\binom{100}{30}}, \quad y \in \{0, 1, \ldots, 20\} \]

An ace-six flat die is rolled 1000 times. Let \(Z\) denote the number of times that a score of 1 or 2 occurred. Find each of the following:
1. The probability density function of \(Z\).
2. The mean of \(Z\).
3. The variance of \(Z\).
4. The probability that \(Z\) is at least 400.
5. The normal approximation of the probability in (d).

1. \(\P(Z = z) = \binom{1000}{z} \left(\frac{3}{8}\right)^z \left(\frac{5}{8}\right)^{1000-z}, \quad z \in \{0, 1, \ldots, 1000\}\)
2. \(\E(Z) = 375\)
3. \(\var(Z) = 1875 / 8\)
4. \(\P(Z \ge 400) \approx 0.0552\)
5.
\(\P(Z \ge 400) \approx 0.0550\)

In the binomial coin experiment, select the proportion of heads. Set \(n = 10\) and \(p = 0.4\). Run the experiment 100 times. Over all 100 runs, compute the square root of the average of the squares of the errors, when \(M\) is used to estimate \(p\). This number is a measure of the quality of the estimate.

In the binomial coin experiment, select the number of heads \(Y\), and set \(p = 0.5\) and \(n = 15\). Run the experiment 1000 times and compute the following:
1. \(\P(5 \le Y \le 10)\)
2. The relative frequency of the event \(\{5 \le Y \le 10\}\)
3. The normal approximation to \(\P(5 \le Y \le 10)\)

1. \(\P(5 \le Y \le 10) = 0.8815\)
3. \(\P(5 \le Y \le 10) \approx 0.878\)

In the binomial coin experiment, select the proportion of heads \(M\) and set \(n = 30\), \(p = 0.6\). Run the experiment 1000 times and compute each of the following:
1. \(\P(0.5 \le M \le 0.7)\)
2. The relative frequency of the event \(\{0.5 \le M \le 0.7\}\)
3. The normal approximation to \(\P(0.5 \le M \le 0.7)\)

1. \(\P(0.5 \le M \le 0.7) = 0.8089\)
3. \(\P(0.5 \le M \le 0.7) \approx 0.808\)

Famous Problems

In 1693, Samuel Pepys asked Isaac Newton whether it is more likely to get at least one ace in 6 rolls of a die or at least two aces in 12 rolls of a die. This problem is known as Pepys' problem; naturally, Pepys had fair dice in mind.

Guess the answer to Pepys' problem based on empirical data. With fair dice and \(n = 6\), run the simulation of the dice experiment 500 times and compute the relative frequency of at least one ace. Now with \(n = 12\), run the simulation 500 times and compute the relative frequency of at least two aces. Compare the results.

Solve Pepys' problem using the binomial distribution. Let \(Y_n\) denote the number of aces in \(n\) rolls of a fair die.
1. \(\P(Y_6 \ge 1) = 0.6651\)
2.
\(\P(Y_{12} \ge 2) = 0.6187\)

Which is more likely: at least one ace with 4 throws of a fair die or at least one double ace in 24 throws of two fair dice? This is known as de Méré's problem, named after the Chevalier de Méré. Let \(Y_n\) denote the number of aces in \(n\) rolls of a fair die, and let \(Z_n\) denote the number of double aces in \(n\) rolls of a pair of fair dice.
1. \(\P(Y_4 \ge 1) = 0.5177\)
2. \(\P(Z_{24} \ge 1) = 0.4914\)

Data Analysis Exercises

In the cicada data, compute the proportion of males in the entire sample, and the proportion of males of each species in the sample.
1. \(m = 0.433\)
2. \(m_0 = 0.636\)
3. \(m_1 = 0.259\)
4. \(m_2 = 0.5\)

In the M&M data, pool the bags to create a large sample of M&Ms. Now compute the sample proportion of red M&Ms. \(m_{\text{red}} = 0.168\)

The Galton Board

The Galton board is a triangular array of pegs. The rows are numbered by the natural numbers \(\N = \{0, 1, \ldots\}\) from top downward. Row \(n\) has \(n + 1\) pegs numbered from left to right by the integers \(\{0, 1, \ldots, n\}\). Thus a peg can be uniquely identified by the ordered pair \((n, k)\) where \(n\) is the row number and \(k\) is the peg number in that row. The Galton board is named after Francis Galton.

Now suppose that a ball is dropped from above the top peg \((0, 0)\). Each time the ball hits a peg, it bounces to the right with probability \(p\) and to the left with probability \(1 - p\), independently from bounce to bounce. The number of the peg that the ball hits in row \(n\) has the binomial distribution with parameters \(n\) and \(p\).

In the Galton board experiment, select random variable \(Y\) (the number of moves right). Vary the parameters \(n\) and \(p\) and note the shape and location of the probability density function and the mean/standard deviation bar. For selected values of the parameters, click single step several times and watch the ball fall through the pegs.
Then run the experiment 1000 times and watch the path of the ball. Note the apparent convergence of the relative frequency function and empirical moments to the probability density function and distribution moments, respectively.

Reliability

Recall the discussion of structural reliability given in the last section on Bernoulli trials. In particular, we have a system of \(n\) similar components that function independently, each with reliability \(p\). Suppose now that the system as a whole functions properly if and only if at least \(k\) of the \(n\) components are good. Such a system is called, appropriately enough, a \(k\) out of \(n\) system. Note that the series and parallel systems considered in the previous section are \(n\) out of \(n\) and 1 out of \(n\) systems, respectively.

Consider the \(k\) out of \(n\) system.
1. The state of the system is \(\bs{1}(Y_n \ge k)\) where \(Y_n\) is the number of working components.
2. The reliability function is \(r_{n,k}(p) = \sum_{i=k}^n \binom{n}{i} p^i (1 - p)^{n-i}\).

In the binomial coin experiment, set \(n = 10\) and \(p = 0.9\) and run the simulation 1000 times. Compute the empirical reliability and compare with the true reliability in each of the following cases:
1. 10 out of 10 (series) system.
2. 1 out of 10 (parallel) system.
3. 4 out of 10 system.

Consider a system with \(n = 4\) components. Sketch the graphs of \(r_{4,1}\), \(r_{4,2}\), \(r_{4,3}\), and \(r_{4,4}\) on the same set of axes.

An \(n\) out of \(2 \, n - 1\) system is a majority rules system.
1. Compute the reliability of a 2 out of 3 system.
2. Compute the reliability of a 3 out of 5 system.
3. For what values of \(p\) is a 3 out of 5 system more reliable than a 2 out of 3 system?
4. Sketch the graphs of \(r_{3,2}\) and \(r_{5,3}\) on the same set of axes.

1. \(r_{3,2}(p) = 3 p^2 - 2 p^3\)
2. \(r_{5,3}(p) = 10 p^3 - 15 p^4 + 6 \, p^5\)
3.
3 out of 5 is better for \(p \ge \frac{1}{2}\).

In the binomial coin experiment, compute the empirical reliability, based on 100 runs, in each of the following cases. Compare your results to the true probabilities.
1. A 2 out of 3 system with \(p = 0.3\)
2. A 3 out of 5 system with \(p = 0.3\)
3. A 2 out of 3 system with \(p = 0.8\)
4. A 3 out of 5 system with \(p = 0.8\)

For the majority rules system, \(r_{2 n - 1, n} \left( \frac{1}{2} \right) = \frac{1}{2}\).

Reliable Communications

Consider the transmission of bits (0s and 1s) through a noisy channel. Specifically, suppose that when bit \(i \in \{0, 1\}\) is transmitted, bit \(i\) is received with probability \(p_i \in (0, 1)\) and the complementary bit \(1 - i\) is received with probability \(1 - p_i\). Given the bits transmitted, bits are received correctly or incorrectly independently of one another. Suppose now that, to increase reliability, a given bit \(I\) is repeated \(n \in \N_+\) times in the transmission. A priori, we believe that \(\P(I = 1) = \alpha \in (0, 1)\) and \(\P(I = 0) = 1 - \alpha\).

Let \(X\) denote the number of 1s received when bit \(I\) is transmitted \(n\) times. Find each of the following:
1. The conditional distribution of \(X\) given \(I = i \in \{0, 1\}\)
2. The probability density function of \(X\)
3. \(\E(X)\)
4. \(\var(X)\)

1. Given \(I = 1\), \(X\) has the binomial distribution with parameters \(n\) and \(p_1\). Given \(I = 0\), \(X\) has the binomial distribution with parameters \(n\) and \(1 - p_0\).
2. \(\P(X = k) = \binom{n}{k} [\alpha p_1^k (1 - p_1)^{n-k} + (1 - \alpha) (1 - p_0)^k p_0^{n-k}]\) for \(k \in \{0, 1, \ldots, n\}\).
3. \(\E(X) = n [\alpha p_1 + (1 - \alpha) (1 - p_0)]\)
4.
\(\var(X) = \alpha [n p_1 (1 - p_1) + n^2 p_1^2] + (1 - \alpha)[n p_0 (1 - p_0) + n^2 (1 - p_0)^2] - n^2 [\alpha p_1 + (1 - \alpha)(1 - p_0)]^2\)

Simplify the results in the last exercise in the symmetric case where \(p_1 = p_0 =: p\) (so that the bits are equally reliable) and with \(\alpha = \frac{1}{2}\) (so that we have no prior information).
1. Given \(I = 1\), \(X\) has the binomial distribution with parameters \(n\) and \(p\). Given \(I = 0\), \(X\) has the binomial distribution with parameters \(n\) and \(1 - p\).
2. \(\P(X = k) = \frac{1}{2}\binom{n}{k} [p^k (1 - p)^{n-k} + (1 - p)^k p^{n-k}]\) for \(k \in \{0, 1, \ldots, n\}\).
3. \(\E(X) = \frac{1}{2}n\)
4. \(\var(X) = n p (1 - p) + \frac{1}{2}n^2[p^2 + (1 - p)^2] - \frac{1}{4} n^2\)

Our interest, of course, is predicting the bit transmitted given the bits received. Find the posterior probability that \(I = 1\) given \(X = k \in \{0, 1, \ldots, n\}\). Answer: \[\P(I = 1 \mid X = k) = \frac{\alpha p_1^k (1 - p_1)^{n-k}}{\alpha p_1^k (1 - p_1)^{n-k} + (1 - \alpha) (1 - p_0)^k p_0^{n-k}}\]

Presumably, our decision rule would be to conclude that 1 was transmitted if the posterior probability in the previous exercise is greater than \(\frac{1}{2}\) and to conclude that 0 was transmitted if this probability is less than \(\frac{1}{2}\). If the probability equals \(\frac{1}{2}\), we have no basis to prefer one bit over the other.

Give the decision rule in the symmetric case where \(p_1 = p_0 =: p\), so that the bits are equally reliable. Assume that \(p \gt \frac{1}{2}\), so that we at least have a better than even chance of receiving the bit transmitted. Given \(X = k\), we conclude that bit 1 was transmitted if \[k \gt \frac{n}{2} - \frac{1}{2} \frac{\ln(\alpha) - \ln(1 - \alpha)}{\ln(p) - \ln(1 - p)}\] and we conclude that bit 0 was transmitted if the reverse inequality holds.
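The threshold above can be checked against the posterior probability directly (a Python sketch, not part of the original text; the values \(n = 9\), \(p = 0.8\), \(\alpha = 0.3\) are arbitrary choices satisfying \(p \gt 1/2\)):

```python
from math import log

def posterior_one(n, k, p, alpha):
    # P(I = 1 | X = k) in the symmetric case p_1 = p_0 = p
    num = alpha * p**k * (1 - p)**(n - k)
    return num / (num + (1 - alpha) * (1 - p)**k * p**(n - k))

n, p, alpha = 9, 0.8, 0.3
threshold = n / 2 - 0.5 * (log(alpha) - log(1 - alpha)) / (log(p) - log(1 - p))

# the posterior exceeds 1/2 exactly when k exceeds the threshold
agree = all((posterior_one(n, k, p, alpha) > 0.5) == (k > threshold)
            for k in range(n + 1))
```

For every \(k\), comparing the posterior to \(1/2\) gives the same decision as comparing \(k\) to the threshold.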
Not surprisingly, in the symmetric case with no prior information, so that \(\alpha = \frac{1}{2}\), we conclude that bit \(i\) was transmitted if a majority of bits received are \(i\).

Bernstein Polynomials

The Weierstrass Approximation Theorem, named after Karl Weierstrass, states that any real-valued function that is continuous on a closed, bounded interval can be uniformly approximated on that interval, to any degree of accuracy, with a polynomial. The theorem is important, since polynomials are simple and basic functions, and a bit surprising, since continuous functions can be quite irregular. In 1911, Sergei Bernstein gave an explicit construction of polynomials that uniformly approximate a given continuous function, using Bernoulli trials. Bernstein's result is a beautiful example of the probabilistic method, the use of probability theory to obtain results in other areas of mathematics that are seemingly unrelated to probability.

Suppose that \(f\) is a real-valued function that is continuous on the interval \([0, 1]\). The Bernstein polynomial of degree \(n\) for \(f\) is defined by \[ b_n(p) = \E_p[f(M_n)], \quad p \in [0, 1] \] where \(M_n\) is the proportion of successes in the first \(n\) Bernoulli trials with success parameter \(p\), as defined earlier. Note that we are emphasizing the dependence on \(p\) in the expected value operator. The next exercise gives a more explicit representation, and shows that the Bernstein polynomial is, in fact, a polynomial.

The Bernstein polynomial of degree \(n\) can be written as follows: \[ b_n(p) = \sum_{k=0}^n f \left(\frac{k}{n} \right) \binom{n}{k} p^k (1 - p)^{n-k}, \quad p \in [0, 1] \] This follows from the change of variables theorem for expected value.

The Bernstein polynomials satisfy the following properties:
1. \(b_n(0) = f(0)\) and \(b_n(1) = f(1)\)
2. \(b_1(p) = f(0) + [f(1) - f(0)] \, p\) for \(p \in [0, 1]\).
3.
\(b_2(p) = f(0) + 2 \, \left[ f \left( \frac{1}{2} \right) - f(0) \right] \, p + \left[ f(1) - 2 \, f \left( \frac{1}{2} \right) + f(0) \right] \, p^2\) for \(p \in [0, 1]\)

From part (a), the graph of \(b_n\) passes through the endpoints \((0, f(0))\) and \((1, f(1))\). From part (b), the graph of \(b_1\) is a line connecting the endpoints. From (c), the graph of \(b_2\) is a parabola passing through the endpoints and the point \(\left( \frac{1}{2}, \frac{1}{4} \, f(0) + \frac{1}{2} \, f\left(\frac{1}{2}\right) + \frac{1}{4} \, f(1) \right)\).

Bernstein's theorem: \(b_n \to f\) as \(n \to \infty\) uniformly on \([0, 1]\).

Since \(f\) is continuous on the closed, bounded interval \([0, 1]\), it is bounded on this interval. Thus, there exists a constant \(C\) such that \(|f(p)| \le C\) for all \(p \in [0, 1]\). Also, \(f\) is uniformly continuous on \([0, 1]\). Thus, for any \(\epsilon \gt 0\) there exists \(\delta \gt 0\) such that if \(p \in [0, 1]\), \(q \in [0, 1]\), and \(|p - q| \lt \delta\) then \(|f(p) - f(q)| \lt \epsilon\). From basic properties of expected value, \[ |b_n(p) - f(p)| \le \E_p[|f(M_n) - f(p)|, \; |M_n - p| \lt \delta] + \E_p[|f(M_n) - f(p)|, \; |M_n - p| \ge \delta] \] Hence \(|b_n(p) - f(p)| \le \epsilon + 2 \, C \, \P_p(|M_n - p| \ge \delta)\) for any \(p \in [0, 1]\). But by Exercise 19, \(\P_p(|M_n - p| \ge \delta) \to 0\) as \(n \to \infty\) uniformly in \(p \in [0, 1]\).

Compute the Bernstein polynomials of orders 1, 2, and 3 for the function \(f\) defined by \(f(x) = \cos(\pi x), \quad x \in [0, 1]\). Graph \(f\) and the three polynomials on the same set of axes.
1. \(b_1(p) = 1 - 2 \, p\)
2. \(b_2(p) = 1 - 2 \, p\)
3. \(b_3(p) = 1 - \frac{3}{2} \, p - \frac{3}{2} \, p^2 + p^3\)

Use a computer algebra system to compute the Bernstein polynomials of orders 10, 20, and 30 for the function \(f\) defined below. Use the CAS to graph the function and the three polynomials on the same axes.
\[ f(x) = \begin{cases} 0, & x = 0 \\ x \, \sin\left(\frac{\pi}{x}\right), & x \in (0, 1] \end{cases} \]
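Both Bernstein exercises above can be checked numerically from the sum representation \(b_n(p) = \sum_{k=0}^n f(k/n) \binom{n}{k} p^k (1-p)^{n-k}\) (a Python sketch, not part of the original text; the evaluation grid and the values of \(n\) are arbitrary choices):

```python
from math import comb, cos, sin, pi

def bernstein(f, n, p):
    return sum(f(k / n) * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

# Check the stated b_3 for g(x) = cos(pi x)
g = lambda x: cos(pi * x)
stated_b3 = lambda p: 1 - 1.5 * p - 1.5 * p**2 + p**3
b3_err = max(abs(bernstein(g, 3, p) - stated_b3(p))
             for p in (0.0, 0.2, 0.5, 0.8, 1.0))

# Uniform approximation of the highly oscillatory function defined above
def f(x):
    return 0.0 if x == 0 else x * sin(pi / x)

grid = [i / 200 for i in range(201)]
max_err = {n: max(abs(bernstein(f, n, p) - f(p)) for p in grid)
           for n in (10, 160)}
```

The maximum error over the grid shrinks as \(n\) grows, consistent with Bernstein's theorem, though convergence is slow for such a rough function.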
Optics/Fermat's Principle

From Wikibooks, open books for an open world

The Principle

Fermat's Principle, also known as the "Principle of Least Time", states that light travels along the path by which it can reach the destination in the least time. It is a fundamental law of optics from which the other laws of geometrical optics can be derived.

Derivation for the Law of Reflection

The derivation of the Law of Reflection using Fermat's Principle is straightforward; it requires only elementary calculus and trigonometry. The generalization of the Law of Reflection is Snell's Law, which is derived below using the same principle. Here the medium that light travels through doesn't change, so in order to minimize the travel time between two points we should minimize the path length.

1. The total path length of the light is given by $L = d_1 + d_2\,$
2. Using the Pythagorean theorem from Euclidean geometry we see that $d_1=\sqrt{x^2 + a^2}\,$ and $d_2=\sqrt{(l-x)^2 + b^2}\,$
3. When we substitute both values of $d_1$ and $d_2$ above, we get $L=\sqrt{x^2 + a^2} + \sqrt{(l-x)^2 + b^2}$
4. In order to minimize the path traveled by light, we take the first derivative of $L$ with respect to $x$ and set it equal to zero: $\frac{dL}{dx}=\frac{x}{\sqrt{x^2 + a^2}} + \frac{-(l-x)}{\sqrt{(l-x)^2 + b^2}}=0$
5. Rearranging gives $\frac{x}{\sqrt{x^2 + a^2}} = \frac{(l-x)}{\sqrt{(l-x)^2 + b^2}}$
6. We can now see that the left side is $\sin\theta_i$ and the right side is $\sin\theta_r$, which means $\sin\theta_i=\sin\theta_r \!\$
7. Taking the inverse sine of both sides, we see that the angle of incidence equals the angle of reflection: $\theta_i=\theta_r \!\$

Derivation for Snell's Law

The derivation of Snell's Law using Fermat's Principle is likewise straightforward, using only elementary calculus and trigonometry. Snell's Law is the generalization of the above in that it does not require the medium to be the same everywhere.
To express the speed of light in different media, refractive indices $n_1$ and $n_2$ are used: $v_1=\frac{c}{n_1}\,$ and $v_2=\frac{c}{n_2}\,$ Here $c$ is the speed of light in a vacuum, and $n_1,n_2 \ge 1 \,$ because all materials slow down light as it travels through them.

1. The time for the trip equals the distance traveled divided by the speed: $T = \frac{d_1}{v_1} + \frac{d_2}{v_2}$
2. Using the Pythagorean theorem from Euclidean geometry we see that $\frac{d_1}{v_1}=\frac{\sqrt{x^2 + a^2}}{v_1}\,$ and $\frac{d_2}{v_2}=\frac{\sqrt{b^2 + (l-x)^2}}{v_2}\,$
3. Substituting this result into equation (1) we get $T=\frac{\sqrt{x^2 + a^2}}{v_1} + \frac{\sqrt{b^2 + (l-x)^2}}{v_2}$
4. To minimize the transit time, we take the derivative with respect to the variable $x$ and set it equal to zero: $\frac{dT}{dx}=\frac{x}{v_1\sqrt{x^2 + a^2}} + \frac{-(l-x)}{v_2\sqrt{(l-x)^2 + b^2}}=0$
5. After careful examination of the above equation, we see that it is $\frac{dT}{dx}=\frac{\sin\theta_1}{v_1} - \frac{\sin\theta_2}{v_2}=0$
6. This leads to $\frac{\sin\theta_1}{v_1} = \frac{\sin\theta_2}{v_2}$
7. Substituting $\frac{n_1}{c}$ for $\frac{1}{v_1}$ and $\frac{n_2}{c}$ for $\frac{1}{v_2}$ we get $\frac{n_1\sin\theta_1}{c} = \frac{n_2\sin\theta_2}{c}$
8. Multiplying through by $c$ gives us our result: $n_1\sin\theta_1=n_2\sin\theta_2 \!\$
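The stationarity condition can also be checked numerically: minimize $T(x)$ by a simple search and verify that $n_1\sin\theta_1 = n_2\sin\theta_2$ at the minimizer (a Python sketch, not part of the wikibook; the geometry $a = 1$, $b = 2$, $l = 3$ and the indices $n_1 = 1$, $n_2 = 1.5$ are arbitrary choices):

```python
from math import sqrt

a, b, l = 1.0, 2.0, 3.0        # source height, receiver depth, horizontal offset
n1, n2 = 1.0, 1.5              # e.g. air to glass
v1, v2 = 1.0 / n1, 1.0 / n2    # speeds in units where c = 1

def T(x):
    # travel time when the ray crosses the interface at horizontal position x
    return sqrt(x * x + a * a) / v1 + sqrt((l - x) ** 2 + b * b) / v2

# T is convex, so ternary search on [0, l] finds the minimizer
lo, hi = 0.0, l
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if T(m1) < T(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin1 = x / sqrt(x * x + a * a)               # sine of the incidence angle
sin2 = (l - x) / sqrt((l - x) ** 2 + b * b)  # sine of the refraction angle
```

At the time-minimizing crossing point, the two products agree, and since $n_2 > n_1$ the ray bends toward the normal in the denser medium.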
11-XX Number theory
11Rxx Algebraic number theory: global fields {For complex multiplication, see 11G15}
11R04 Algebraic numbers; rings of algebraic integers
11R06 PV-numbers and generalizations; other special algebraic numbers
11R09 Polynomials (irreducibility, etc.)
11R11 Quadratic extensions
11R16 Cubic and quartic extensions
11R18 Cyclotomic extensions
11R20 Other abelian and metabelian extensions
11R21 Other number fields
11R23 Iwasawa theory
11R27 Units and factorization
11R29 Class numbers, class groups, discriminants
11R32 Galois theory
11R33 Integral representations related to algebraic numbers; Galois module structure of rings of integers [See also 20C10]
11R34 Galois cohomology [See also 12Gxx, 16H05, 19A31]
11R37 Class field theory
11R39 Langlands-Weil conjectures, nonabelian class field theory [See also 11Fxx, 22E55]
11R42 Zeta functions and $L$-functions of number fields [See also 11M41, 19F27]
11R44 Distribution of prime ideals [See also 11N05]
11R45 Density theorems
11R47 Other analytic theory [See also 11Nxx]
11R52 Quaternion and other division algebras: arithmetic, zeta functions
11R54 Other algebras and orders, and their zeta and $L$-functions [See also 11S45, 16H05, 16Kxx]
11R56 Adèle rings and groups
11R58 Arithmetic theory of algebraic function fields [See also 14-XX]
11R60 Cyclotomic function fields (class groups, Bernoulli objects, etc.)
11R65 Class groups and Picard groups of orders
11R70 $K$-theory of global fields [See also 19Fxx]
11R80 Totally real and totally positive fields [See also 12J15]
11R99 None of the above, but in this section
MathGroup Archive: March 1995

Re: demodulation question

• To: mathgroup at christensen.cybernetics.net
• Subject: [mg526] Re: [mg514] demodulation question
• From: David Harrison <davidh>
• Date: Thu, 9 Mar 1995 10:18:12 -0600 (CST)

Yves Verbandt wrote:
> I have some measured curves which following the model have the following
> form :
> s(x) = f(x)*cos(k*x)
> where k is a constant and f(x) is a slowly varying function. Because there
> is considerable noise on the measurements the cosine is not very regular.
> Does anyone have an idea how to extract f(x)? I tried the NonlinearFit
> package and this was not very successful!

There are a number of related problems here. First, any nonlinear fitter, including the Marquardt algorithm used by NonlinearFit, can easily find some local minimum in the chi-squared and miss the true minimum. Thus, one must almost always provide good initial guesses of parameter values to NonlinearFit. The other related problem is that when there is noise, two spectra taken from an identical sample with the same instrument can end up fitting to different values of the parameters, even when the initial values given to the fitter are the same. This is motivation for various people bringing out much heavier artillery, such as neural networks, to try to deal with nonlinear fitting. These efforts are sometimes very successful, but also very compute intensive.

Here are a few things that might be tried in the context of the particular question under consideration.

1. Try smoothing the data to reduce the noise. One simple way to do that is to compute local averages with something like:

<< Statistics`DescriptiveStatistics`
smoothed = Mean /@ Partition[data, some_number, 1]

where 'some_number' is a value substantially less than the number of points in 'data'. Note that 'smoothed' contains fewer points than 'data', and loses information at the edges.
There also can be systematic effects in a simple-minded procedure like this, particularly around the edges of the data and at the extrema of the signal.

2. Sneak up on the values you want. It isn't totally clear what the parameters are that are being fit to here:

s(x) = f(x)*cos(k*x)

but presumably k is one of them and another one or more are in the function f. Then fix the parameters in f and find a value for k. Then fix the value of k and fit to one or more parameters in f. Iterate until you are close enough. Always plot the residuals between your fit and the data to look for systematic effects.

3. If 'k' is not one of the parameters to be fit, then try:

s(x)/cos(k*x) = f(x)

In fact, if you are sneaking up on the values as in #2 above, this can sometimes be useful.

Hopefully some of this will be helpful.

Dr. David Harrison    | "Music is a hidden practice of the
Visiting Scholar      | soul, that does not know it is
Wolfram Research Inc. | doing mathematics". -- Leibniz
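For readers without Mathematica, the local-averaging idea from point 1 can be sketched in ordinary Python (an illustrative addition, not part of the original message; the window length and sample data are arbitrary):

```python
def smooth(data, window):
    # analogue of: Mean /@ Partition[data, window, 1]
    return [sum(data[i:i + window]) / window
            for i in range(len(data) - window + 1)]

noisy = [0, 3, 1, 4, 2, 5, 3, 6]
smoothed = smooth(noisy, 3)
```

As the message warns, the smoothed list is shorter than the input and loses information at the edges.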
Mouse Cursor Position in OpenGL coordinates [Archive] - OpenGL Discussion and Help Forums

11-30-2011, 12:00 PM

Is there an easy way to get the current mouse position in OpenGL coordinates without using GLUT? I'm using an ortho projection and everything is drawn at -1, so the Z coordinate doesn't really matter to me, but if I could somehow read the mouse position in OpenGL coordinates, that would be super helpful in moving forward.
Nick Chepurniy, Ph.D.
e-mail: nickc@sharcnet.ca
HPC Consultant - SHARCNET http://www.sharcnet.ca
Compute Canada http://www.computecanada.org

We start by defining what interpreters and compilers are and give examples of both. Then two simple problems are done in OCTAVE (interpreter) and two compiled languages (C and Fortran) to illustrate how these tools are used. A more realistic problem (matrix inversion) for different matrix sizes is done in OCTAVE and compared to LAPACK. Based on this problem, conclusions are drawn about the use of interpreters and compilers.

In this online tutorial we start by describing briefly how compilers and interpreters work. The most important issue is: which should you use? That depends on what kind of code you are writing.

Fortran, C/C++, Pascal and ALGOL are compiled languages. A compiler analyses all the statements in a program and links the code (using libraries) to produce an executable. To run the program you submit the executable, which produces some output. To make changes in the program you must repeat these steps.

OCTAVE, MATLAB, MAPLE, R and APL are interpreted languages. An interpreter translates one high-level language command into low-level machine instructions and executes that command at once. The interpreter takes one command at a time; however, the user can place several commands into a script file and execute the commands in that file one at a time.

Simple Comparison

First let's illustrate how OCTAVE (an interpreter) works. The second example is done in OCTAVE (interpreter) and in both C and Fortran (compilers) to illustrate the two approaches. Finally, the third example (in the next segment), a more realistic one, illustrates and answers: which should you use (a compiler or an interpreter)?

OCTAVE and MATLAB are very similar. They have a lot of similar functions, but there are some that are different.
Here is an example in OCTAVE:

[nickc@nar316:~] octave
GNU Octave, version 3.2.2
octave:1> x=3.0;
octave:2> xsq=x*x
xsq = 9
octave:3> quit

The first line invokes the interpreter (OCTAVE), which responds on the next line with the name of the interpreter and version. Then the interpreter prints the prompt with the line number (octave:1>). The user types a command (assigning the value of 3.0 to variable x) followed by a semicolon (;) and then presses CR (Carriage Return). The semicolon at the end of the command instructs the interpreter not to print the answer (i.e. the value of x in this case is not printed on the screen). The next command computes the variable xsq as the product of x by x. Since the command does not end with a semicolon, the answer is printed on the next line. To terminate the interactive session the user types the command: quit

Here is a second example using OCTAVE:

[nickc@nar316:~] octave
GNU Octave, version 3.2.2
octave:1> sum=0;
octave:2> N=1000
N = 1000
octave:3> for i=1:N
> sum=sum+i;
> endfor
octave:4> sum
sum = 500500
octave:5> my_sum=N*(N+1)/2
my_sum = 500500
octave:6> quit

In the above example, the user sets sum=0 and N=1000 in the first 2 commands. The third command is a for-loop starting with the clause:

for i=1:N

The interpreter will read instructions until an endfor clause is detected. In the body of the for loop we are adding the index variable i and not printing the answer. On line 4 we print the answer (the sum of integers from 1 to N). Line 5 verifies the answer with the analytical formula:

my_sum = N*(N+1)/2

When the algorithms become more complex, rather than typing each command at the OCTAVE prompt you can place all the commands into a script file. An OCTAVE script file has the *.m extension (e.g. my_commands.m). To execute the commands in the script file, type the name of the file without the extension (e.g. my_commands) into the OCTAVE prompt.
Let's place the commands of the second example into a file called ex2.m as shown next:

# OCTAVE script file ex2.m
# Commands for example 2
sum=0;
N=1000
for i=1:N
  sum=sum+i;
endfor
sum
my_sum=N*(N+1)/2

To execute the commands in the OCTAVE script file ex2.m do as follows:

[nickc@nar316:] octave
GNU Octave, version 3.2.2
octave:1> ex2
N = 1000
sum = 500500
my_sum = 500500
octave:2> quit

or you can run it in batch mode by submitting it as a job using:

#!/bin/bash
# script file: sub_job
if [ $# -ne 1 ]; then
   echo "Must have 1 argument specifying the OCTAVE file"
   echo "For example: ./sub_job ex2.m "
   exit 1
fi
OCTAVE_FILE=$1
sqsub -t -r 15m -o OUTPUT_${OCTAVE_FILE%%.*}_%J octave ${OCTAVE_FILE}

For completeness here is the file sumN.c which solves the problem in the second example using the C compiler, pathcc:

#include <stdio.h>
/*
   This program prints the sum of integers from 1 to N
*/
#define N 10000

int main(int argc, char **argv)
{
   int i, sum, sum_formula;

   sum = 0;
   for (i=1 ; i <= N; i++)
      sum = sum + i ;
   printf("For N = %d the sum = %d\n",N,sum);
   sum_formula = N*(N+1)/2;
   printf("and using formula sum = %d\n",sum_formula);
   return 0;
}

and the equivalent Fortran file sumN.f90:

! file sumN.f90
program sumN
   integer, parameter :: N = 10000
   integer :: i, sum, sum_formula

   sum = 0
   do i=1,N
      sum = sum + i
   end do
   write(6,1001) N,sum
1001 format("For N = ",i8," the sum = ",i10)
   sum_formula = N*(N+1)/2
   write(6,1002) sum_formula
1002 format("and using formula sum = ",i10)
end program sumN

To compile and execute the C program you would do these commands:

[nickc@nar316:/work/nickc/INTERPRETERS_vs_COMPILERS] pathcc sumN.c
[nickc@nar316:/work/nickc/INTERPRETERS_vs_COMPILERS] ./a.out
For N = 10000 the sum = 50005000
and using formula sum = 50005000

and for the Fortran program you would do:

[nickc@nar316:/work/nickc/INTERPRETERS_vs_COMPILERS] pathf90 sumN.f90
[nickc@nar316:/work/nickc/INTERPRETERS_vs_COMPILERS] ./a.out
For N = 10000 the sum = 50005000
and using formula sum = 50005000

Matrix Inversion (Example 3)

The third example in this online tutorial defines a STRIDWAD matrix, stored in the array A, and a vector X. Once the matrix A and vector X are defined, the matrix-vector product, A*X, is computed and stored in the vector RHS. Then, using A and RHS, the linear system of equations is solved and the solution is saved in the vector XX, which should be identical to X.

In preparation for our third example, let's introduce OCTAVE functions. You can define an OCTAVE function by typing it into a *.m file which starts with the word "function". Here are the definitions for two functions:

# file: INIT_MY_MATRIX.m
function A = INIT_MY_MATRIX (N)
  if (nargin != 1)
    usage ("A = INIT_MY_MATRIX (N)");
  endif
  NH = N/2;
  NH1 = NH + 1;
  # Diagonal elements
  MD = 3*eye(N);
  # Upper diagonal (reconstructed to match the N=10 printout below)
  UD = -diag(ones(N-1,1),1);
  # Lower diagonal (reconstructed)
  LD = -diag(ones(N-1,1),-1);
  # ANTI-DIAGONAL (reconstructed)
  AD = fliplr(eye(N));
  A = UD + MD + LD + AD ;
endfunction

# file: INIT_MY_VECTOR.m
function X = INIT_MY_VECTOR(N)
  if (nargin != 1)
    usage ("X = INIT_MY_VECTOR (N)");
  endif
  X = find(ones(N,1));
endfunction

Note that the name of the file (e.g. INIT_MY_VECTOR.m) without the *.m extension is identical to the function name inside the file.
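For readers following along without Octave, here is a rough pure-Python sketch of the same experiment. The function names mirror the Octave ones; the anti-diagonal construction is an assumption reconstructed from the N=10 printout shown below, and the tiny Gaussian-elimination solver merely stands in for Octave's backslash (use LAPACK or NumPy for anything serious):

```python
def init_my_matrix(n):
    """Assumed analogue of INIT_MY_MATRIX: 3 on the diagonal, -1 on the
    sub/super-diagonals, +1 on the anti-diagonal (n even, as in the text)."""
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        a[i][i] += 3.0
        if i + 1 < n:
            a[i][i + 1] += -1.0
            a[i + 1][i] += -1.0
        a[i][n - 1 - i] += 1.0          # anti-diagonal entry
    return a

def solve(a, b):
    """Gaussian elimination with partial pivoting, standing in for A \\ RHS."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):      # back substitution
        x[r] = (m[r][n] - sum(m[r][c] * x[c]
                              for c in range(r + 1, n))) / m[r][r]
    return x

n = 10
A = init_my_matrix(n)
X = list(range(1, n + 1))               # INIT_MY_VECTOR gives 1..N
RHS = [sum(A[i][j] * X[j] for j in range(n)) for i in range(n)]
XX = solve(A, RHS)                      # should recover X
```

The workflow is the same as in the Octave script: build A and X, form RHS = A*X, then solve for XX and check that it matches X.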
Once these functions are defined (by writing them into the *.m files), you can use them either interactively or by submitting a batch job from the same subdirectory where these functions appear. Here is a script file for a short version of example 3:

# file: short_ex3.m
# OCTAVE commands for example 3
N = 10
t0 = clock ();
A = INIT_MY_MATRIX(N)
X = INIT_MY_VECTOR(N)
RHS = A*X
# Now use A and RHS to find new XX
XX = A \ RHS
elapsed_time = etime (clock (), t0)
[total, user, system] = cputime ();

To run it interactively you type:

[nickc@nar316:] octave
GNU Octave, version 3.2.2
octave:1> short_ex3
N = 10
A =

   3  -1   0   0   0   0   0   0   0   1
  -1   3  -1   0   0   0   0   0   1   0
   0  -1   3  -1   0   0   0   1   0   0
   0   0  -1   3  -1   0   1   0   0   0
   0   0   0  -1   3   0   0   0   0   0
   0   0   0   0   0   3  -1   0   0   0
   0   0   0   1   0  -1   3  -1   0   0
   0   0   1   0   0   0  -1   3  -1   0
   0   1   0   0   0   0   0  -1   3  -1
   1   0   0   0   0   0   0   0  -1   3

X =
RHS =
XX =
elapsed_time = 0.0048676
total = 0.24096
user = 0.15298
system = 0.087986
octave:2> quit

For a larger system of equations (N=12000) we use the script file long_ex3.m:

# file: long_ex3.m
# Commands for example 3
N = 12000
t0 = clock ();
A = INIT_MY_MATRIX(N);
X = INIT_MY_VECTOR(N);
RHS = A*X;
# Now use A and RHS to find new XX
XX = A \ RHS
elapsed_time = etime (clock (), t0)
[total, user, system] = cputime ();

and submit the job in batch mode using the script sub_job (described earlier) with argument long_ex3.m as follows:

[nickc@nar316:] ./sub_job long_ex3.m
submitted as jobid 1524992

The output file, OUTPUT_long_ex3_1524992, for the job was:

GNU Octave, version 3.2.2
N = 12000
XX =
elapsed_time = 263.65
total = 263.69
user = 253.32
system = 10.373

Subject: Job 1524992: <srun octave long_ex3.m> Done
Job <srun octave long_ex3.m> was submitted from host <nar316> by user <nickc>.
# LSBATCH: User input
srun octave long_ex3.m
Successfully completed.

Resource usage summary:

CPU time : 264.45 sec.
Max Memory : 2528 MB
Max Swap : 3478 MB

Comparison of execution times

The following table compares OCTAVE and LAPACK execution times for the matrix inversion problem for different orders of the matrix (N):

    N (order      OCTAVE          LAPACK
    of matrix)    CPU time [sec]  CPU time [sec]
    10000           169.12          344.93
    13000           314.98          594.70
    15000           448.33          853.51
    16000           541.83         1000.37
    17000           N/A            1183.27
    20000           N/A            1813.09
    30000           N/A            3599.94
    32000           N/A             N/A

Note: N/A means the problem cannot be run because there is not enough memory.

The following conclusions can be drawn from the OCTAVE program in Example 3 and the results presented in this online tutorial:

(1) Developing a model with an interpreter is much easier because interpreters have many more commands and the development is carried out interactively. This results in a much faster development cycle.

(2) OCTAVE and other interpreters can run as fast as compiled code for smaller models. As seen from the table in the previous segment, OCTAVE in all cases runs in about half the time of the LAPACK execution time.

(3) Compiled code, however, can run much larger models.

(4) Since MPI can be used with compiled codes, this extends the size of the models to an even higher dimension. In this case LAPACK programs can be extended to ScaLAPACK.

Answer to which should you use: For smaller models an interpreter would be more appropriate. However, if the model exceeds the memory then you have no choice but to switch to a compiler. For the development of new algorithms it is easier to do all the preliminary testing using an interpreter. If the memory requirements are met then there is no need to switch to a compiler. But for much larger models the compiler with MPI is more adequate.

Using OCTAVE we were able to invert matrices up to N=16,000. For higher dimensions LAPACK was able to go up to N=30,000, and for even higher models ScaLAPACK went up to N=250,000 (using 256 processors).

Last update: 12 Sept 2013
Section: POSIX Programmer's Manual (P)
Updated: 2003

NAME

ldiv, lldiv - compute quotient and remainder of a long division

SYNOPSIS

#include <stdlib.h>

ldiv_t ldiv(long numer, long denom);
lldiv_t lldiv(long long numer, long long denom);

DESCRIPTION

These functions shall compute the quotient and remainder of the division of the numerator numer by the denominator denom. If the division is inexact, the resulting quotient is the long integer (for the ldiv() function) or long long integer (for the lldiv() function) of lesser magnitude that is the nearest to the algebraic quotient. If the result cannot be represented, the behavior is undefined; otherwise, quot * denom + rem shall equal numer.

RETURN VALUE

The ldiv() function shall return a structure of type ldiv_t, comprising both the quotient and the remainder. The structure shall include the following members, in any order:

long quot; /* Quotient */
long rem;  /* Remainder */

The lldiv() function shall return a structure of type lldiv_t, comprising both the quotient and the remainder. The structure shall include the following members, in any order:

long long quot; /* Quotient */
long long rem;  /* Remainder */

ERRORS

No errors are defined.

The following sections are informative.

SEE ALSO

div(), the Base Definitions volume of IEEE Std 1003.1-2001, <stdlib.h>

COPYRIGHT

Portions of this text are reprinted and reproduced in electronic form from IEEE Std 1003.1, 2003 Edition, Standard for Information Technology -- Portable Operating System Interface (POSIX), The Open Group Base Specifications Issue 6, Copyright (C) 2001-2003 by the Institute of Electrical and Electronics Engineers, Inc and The Open Group. In the event of any discrepancy between this version and the original IEEE and The Open Group Standard, the original IEEE and The Open Group Standard is the referee document. The original Standard can be obtained online at http://www.opengroup.org/unix/online.html .

This document was created by man2html, using the manual pages.
Time: 21:49:29 GMT, April 16, 2011
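The manual page above has no EXAMPLES section. The stated invariant (quot * denom + rem == numer, with the quotient of lesser magnitude nearest the algebraic quotient, i.e. truncated toward zero) can be illustrated with a small sketch. Python is used here purely as a calculator: its // operator floors rather than truncates, so the sketch adjusts by hand. This mimics ldiv()'s arithmetic; it is not the libc call itself:

```python
def c_ldiv(numer, denom):
    """Emulate C's ldiv()/lldiv() rounding: quotient truncated toward
    zero, remainder chosen so that quot * denom + rem == numer."""
    q = numer // denom                  # Python floors...
    if q < 0 and q * denom != numer:    # ...so bump inexact negative results
        q += 1                          # up to the truncated-toward-zero value
    return q, numer - q * denom
```

For instance, c_ldiv(7, -2) gives quot -3 and rem 1, and indeed (-3) * (-2) + 1 == 7, matching the invariant in the DESCRIPTION.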
Posts by Total # Posts: 57

read each clue below. Then find the vocabulary word on the right that matches the clue. Draw a line from the clue to the word.

Carr Middle School Math
601 divide by 57. I need to show all the work for this long division problem.

This is about variables. The problem is d/2.5 - 3.4 = 4.6. What are the steps in finding out what the d is?

finding the x value
Find the value of x: 19 x 27

Pre Algebra
Mike spent $40 at the music store. He bought a cd for $20 and some cassettes for $4 each. How many cassettes did mike buy?

A model rocket is built to a scale of 3 cm = 5 m. If the model is 9.4 cm tall, how tall is the actual rocket?

Classify the following elements as alkali, alkaline-earth or transition metals based on their positions in the periodic table: A:) iron, Fe. B:) potassium, K. C:) strontium, Sr. D:) platinum, Pt

Geometry B
A solid has 10 faces: 4 triangles, 1 square, 4 hexagons, and 1 octagon. How many vertices does the solid have?

If there are 8 boys and 9 girls in Mr. Layden's classroom, what is the ratio of boys to girls?

How do the routines of life help give people their identities?

Recognize or treat (someone or something) as different. Perceive or point out a difference. P.S. There is this magical creation called the dictionary :)

A 521 N box is sitting on a 33.5° incline. Find the force of gravity parallel (Fp) to the surface of the incline, and the force of gravity perpendicular to the surface of the incline (FN). a. If the coefficient of static friction is 0.227, find the Ff. b. Will the box sli...

Two ice fishermen stand face to face on the ice. They put their hands together, and push away from each other. The first man, who has a mass of 75 kg, accelerates backwards at 3.1 m/s. What is the acceleration of the other man, who weighs 637 N?

A stone is dropped from a height of 5 meters. How long will the stone be in the air?

social studies
Does church qualify as a state?

social studies
Why do you think Cuba does not qualify as a state?
social studies
Which of the following islands or group of islands does NOT qualify as a state? a. Hawaii b. Japan c. Cuba d. Australia

how do you solve for 4p-13p-p=-150

What grammatical structure is the italicized portion of the sentence? By mistake I opened a package addressed to my sister. adverb clause / infinitive phrase / past participial phrase / elliptical clause / adjective clause

Y/3+11=y/2-3 Please help me solve this and show your work thanks

what is the function if x is 3,0,7,5 and the y being 16,10,24,20?

Language Arts
true or false: the product of 6 and 48 is 12 less than 300. how do i figure this out

If two is company and three is a crowd, what are five and four?

But i don't know if that theme fits the story. I think the theme is when u treat others with respect they will treat others with it.

What is the theme and some examples of the theme in the story "Thank You Ma'am" by Langston Hughes?

in classifying a sentence u underline the _______ parts once and the ________ parts twice?

can someone please help my daughter

how do types of nuclear radiation differ in electric charge?

Physical science
i dont know what do you think i am a genious or something god

visual arts
i am getting a good education, i asked 1 simple question, i dont need attitude from the person who is supposed to be helping

visual arts
we don't use a textbook in my class, this is a random question on the review, which we have not gone over.

visual arts
what are the different types of shapes and forms? ASAP EXAM TOMORROW

2 thirds times 12

social studies
I need help with my abc book on california HELP ME!

How do you build a pasta car?

if i weigh 100 lbs on earth how much would i weigh on halley's comet?

(-4w+5)/(-3) = -7

social studies
what are the causes and effects of the Opium War?

We can't take our text book home. Do you think you could explain this to me? Use rounding to decide if the answer is reasonable. Then find the answer to see if you were right. 57-39=81

the skinfold test assesses?
6th grade physical education
the minute sit up test assesses what?

physical education
the 1 mile run tests what part of the body?

7th grade Health
I checked the site and from what I read I now think the answer is (B). is this correct?

4th grade math
NAME THE POLYGON AND WRITE HOW MANY SIDES IT HAS.

Pasta cars
i did this project last year and took second. this year i want to get first

How did the advances in steel production and oil refining affect US industry

History
i take specialty art and i need to know this one thing for this report: how do u define beauty?

social studies
what is the responsibility of the state government

social studies
Out of Hinduism and Buddhism, which is larger?

social studies
Which empire finally unifies the Indus River Valley and what we know today as the country of India?

How do you find the height of a triangle on a coordinate grid? Thanks so much!

what is the process called when a plant can alternate between sexual and asexual reproduction
CGTalk - Xpresso: Dot Product Node, Output Error

10-28-2006, 04:21 PM

I am trying to calculate angles between objects with the Dot Product Node in Xpresso. This all works fine until one of the objects is at 0,0,0, which gives a calculation error. I know how to calculate the angles the hard way with math nodes etc., but this also gives me the same error when one of the objects is at 0,0,0. Could someone explain to me why this is and if there's a way around it…

Kind Regards,
Help with this series please

March 18th 2011, 03:13 PM

Hi, can someone help me figure this out:

$\displaystyle \sum_{n=1}^{\infty}\frac{1+7^n}{8^n}$

I know it's convergent, but I'm trying to figure out how to get it into the form $\sum a\,r^n$. If you notice, when n = 1, the first term is always 1. Then it becomes (1+7^2)/8^2, all the way till n. I figured that r should be 1/8 since each preceding term differs by a factor of 1/8. Can't figure out a though. Any ideas?

March 18th 2011, 03:18 PM

> Hi, can someone help me figure this out: I know it's convergent, but I'm trying to figure out how to get it into the form $\sum a\,r^n$. [...] Can't figure out a though. Any ideas?

$\displaystyle \frac{1+7^n}{8^n} < 2\left(\frac{7}{8}\right)^n$, and the rightmost series converges (why?), so the comparison test yields that the original series also converges.

March 18th 2011, 03:31 PM

If I split the series into two parts, $(1/8)^n + (7/8)^n$, we can see they both converge since r < 1 for both series. But how can I figure out what they converge to?

March 18th 2011, 03:31 PM

> I know it's convergent,

Yes, it certainly is. So are

$\displaystyle \sum\limits_{k=1}^{\infty}\frac{1}{8^k}\;\&\;\sum\limits_{k=1}^{\infty}\frac{7^k}{8^k}$

They are rearrangeable, being absolutely convergent.

March 18th 2011, 05:46 PM

The sum of an infinite geometric series with first element $a_1$ and constant quotient $q$, $|q|<1$, is $\displaystyle \frac{a_1}{1-q}$
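Putting the replies together: the series splits into two geometric series, each with ratio of absolute value less than 1, so the exact value follows from the formula $a_1/(1-q)$ quoted in the last post. A worked sketch (not part of the original thread):

```latex
\sum_{n=1}^{\infty}\frac{1+7^{n}}{8^{n}}
  = \sum_{n=1}^{\infty}\left(\frac{1}{8}\right)^{n}
  + \sum_{n=1}^{\infty}\left(\frac{7}{8}\right)^{n}
  = \frac{1/8}{1-1/8}+\frac{7/8}{1-7/8}
  = \frac{1}{7}+7
  = \frac{50}{7}.
```

So the series cannot be rewritten as a single geometric series $\sum a\,r^{n}$ (which is why no single value of $a$ works), but its sum is still exactly $50/7$.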
Data Encryption Standard

The Data Encryption Standard (DES) is a cipher (a method for encrypting information) selected by the National Bureau of Standards as an official Federal Information Processing Standard (FIPS) for the United States in 1976, and which has subsequently enjoyed widespread use internationally. The standard was initially controversial, with classified design elements, a relatively short key length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently came under intense academic scrutiny, which motivated the modern understanding of block ciphers and their cryptanalysis.

DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit key size being too small; in January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes (see chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are unfeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the Advanced Encryption Standard (AES).

In some documentation, a distinction is made between DES as a standard and DES the algorithm, which is referred to as the DEA (the Data Encryption Algorithm). When spoken, "DES" is either spelled out as an abbreviation or pronounced as a single-syllable acronym.

History of DES

The origins of DES go back to the early 1970s. In 1972, after concluding a study on the US government's computer security needs, the US standards body NBS (National Bureau of Standards) — now named NIST (National Institute of Standards and Technology) — identified a need for a government-wide standard for encrypting unclassified, sensitive information. Accordingly, on 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions, however, turned out to be suitable.
A second request was issued on 27 August 1974. This time, IBM submitted a candidate which was deemed acceptable — a cipher developed during the period 1973–1974 based on an earlier algorithm, Horst Feistel's Lucifer cipher. The team at IBM involved in cipher design and analysis included Feistel, Walter Tuchman, Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas, Roy Adler, Edna Grossman, Bill Notz, Lynn Smith, and Bryant Tuckerman.

NSA's involvement in the design

On 17 March 1975, the proposed DES was published in the Federal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was some criticism from various parties, including from public-key cryptography pioneers Martin Hellman and Whitfield Diffie, citing a shortened key length and the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they — but no-one else — could easily read encrypted messages. Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different."

The United States Senate Select Committee on Intelligence reviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote: "In the development of DES, NSA convinced IBM that a reduced key size was sufficient; indirectly assisted in the development of the S-box structures; and certified that the final DES algorithm was, to the best of their knowledge, free from any statistical or mathematical weakness." However, it also found that "NSA did not tamper with the design of the algorithm in any way. IBM invented and designed the algorithm, made all pertinent decisions regarding it, and concurred that the agreed upon key size was more than adequate for all commercial applications for which the DES was intended."
Another member of the DES team, Walter Tuchman, is quoted as saying, "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"

Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique back in the 1970s. This was indeed the case — in 1994, Don Coppersmith published the original design criteria for the S-boxes.

According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret. Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it." Shamir himself commented, "I would say that, contrary to what some people believe, there is no evidence of tampering with the DES so that the basic design was weakened."

The other criticism — that the key length was too short — was supported by the fact that the reason given by the NSA for reducing the key length from 64 bits to 56 (that the other 8 bits could serve as parity bits) seemed somewhat specious. It was widely believed that NSA's decision was motivated by the possibility that they would be able to brute-force attack a 56-bit key several years before the rest of the world would.
The algorithm as a standard

Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by AES, the Advanced Encryption Standard, following a public competition (see AES process). On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information.

Another theoretical attack, linear cryptanalysis, was published in 1994, but it was a brute force attack in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods of cryptanalysis are discussed in more detail later in the article.

The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES:

"The DES can be said to have 'jump started' the nonmilitary study and development of encryption algorithms. In the 1970s there were very few cryptographers, except for those in military or intelligence organizations, and little academic study of cryptography. There are now many active academic cryptologists, mathematics departments with strong programs in cryptography, and commercial information security companies and consultants. A generation of cryptanalysts has cut its teeth analyzing (that is trying to 'crack') the DES algorithm. In the words of cryptographer Bruce Schneier [9], 'DES did more to galvanize the field of cryptanalysis than anything else. Now there was an algorithm to study.'"
An astonishing share of the open literature in cryptography in the 1970s and 1980s dealt with the DES, and the DES is the standard against which every symmetric key algorithm since has been compared.

Chronology

15 May 1973: NBS publishes a first request for a standard encryption algorithm
27 August 1974: NBS publishes a second request for encryption algorithms
17 March 1975: DES is published in the Federal Register for comment
August 1976: First workshop on DES
September 1976: Second workshop, discussing mathematical foundation of DES
November 1976: DES is approved as a standard
15 January 1977: DES is published as a FIPS standard, FIPS PUB 46
1983: DES is reaffirmed for the first time
1986: Videocipher II, a TV satellite scrambling system based upon DES, begins use by HBO
22 January 1988: DES is reaffirmed for the second time as FIPS 46-1, superseding FIPS PUB 46
July 1990: Biham and Shamir rediscover differential cryptanalysis, and apply it to a 15-round DES-like cryptosystem
1992: Biham and Shamir report the first theoretical attack with less complexity than brute force: differential cryptanalysis. However, it requires an unrealistic 2^47 chosen plaintexts
30 December 1993: DES is reaffirmed for the third time as FIPS 46-2
1994: The first experimental cryptanalysis of DES is performed using linear cryptanalysis (Matsui, 1994)
June 1997: The DESCHALL Project breaks a message encrypted with DES for the first time in public
July 1998: The EFF's DES cracker (Deep Crack) breaks a DES key in 56 hours
January 1999: Together, Deep Crack and distributed.net break a DES key in 22 hours and 15 minutes
25 October 1999: DES is reaffirmed for the fourth time as FIPS 46-3, which specifies the preferred use of Triple DES, with single DES permitted only in legacy systems
26 November 2001: The Advanced Encryption Standard is published in FIPS 197
26 May 2002: The AES standard becomes effective
26 July 2004: The withdrawal of FIPS 46-3 (and a couple of related standards) is proposed in the Federal Register
19 May 2005: NIST withdraws FIPS 46-3 (see Federal Register vol 70, number 96)
15 March 2007: The FPGA-based parallel machine COPACOBANA of the Universities of Bochum and Kiel, Germany, breaks DES in 6.4 days at $10,000 hardware cost

Replacement algorithms

Concerns about security and the relatively slow operation of DES in software motivated researchers to propose a variety of alternative block cipher designs, which started to appear in the late 1980s and early 1990s: examples include RC5, Blowfish, IDEA, NewDES, SAFER, CAST5 and FEAL. Most of these designs kept the 64-bit block size of DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In the Soviet Union, the GOST 28147-89 algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used in Russia later.

DES itself can be adapted and reused in a more secure scheme. Many former DES users now use Triple DES (TDES), which was described and analysed by one of DES's patentees (see FIPS Pub 46-3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative is DES-X, which increases the key size by XORing extra key material before and after DES. GDES was a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis.

In 2001, after an international competition, NIST selected a new cipher, the Advanced Encryption Standard (AES), as a replacement. The algorithm which was selected as the AES was submitted by its designers under the name Rijndael. Other finalists in the NIST AES competition included RC6, Serpent, MARS and Twofish.
For brevity, the following description omits the exact transformations and permutations which specify the algorithm; for reference, the details can be found in DES supplementary material. DES is the archetypal block cipher — an algorithm that takes a fixed-length string of bits and transforms it through a series of complicated operations into another bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter discarded. Hence the effective key length is 56 bits, and it is usually quoted as such. Like other block ciphers, DES by itself is not a secure means of encryption but must instead be used in a mode of operation. FIPS-81 specifies several modes for use with DES. Further comments on the usage of DES are contained in FIPS-74.

Overall structure

The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termed rounds. There is also an initial and final permutation, termed IP and FP, which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have almost no cryptographic significance, but were apparently included in order to facilitate loading blocks in and out of mid-1970s hardware, as well as to make DES run slower in software. Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes — the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical.
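The claim that reversing the subkey order turns decryption into the same procedure as encryption is easy to see in a toy sketch. The Feistel network below is not DES: it uses 16-bit halves and a made-up round function F, but the structural point carries over, and in particular the round function never needs to be inverted.

```java
// Toy Feistel network over 32-bit blocks (two 16-bit halves). The round
// function F here is arbitrary mixing, NOT DES's F -- the point is only that
// running the same round code with the subkey order reversed undoes the cipher.
public class ToyFeistel {
    public static int F(int half, int subkey) {     // placeholder round function
        int x = (half ^ subkey) * 0x9E37;           // arbitrary, need not be invertible
        return (x ^ (x >>> 7)) & 0xFFFF;
    }

    public static int rounds(int block, int[] keys) {
        int L = (block >>> 16) & 0xFFFF, R = block & 0xFFFF;
        for (int k : keys) {
            int t = R;
            R = L ^ F(R, k);                        // mix one half with the subkey
            L = t;                                  // then swap the halves
        }
        return (R << 16) | L;                       // final swap is undone on output
    }

    public static int[] reverse(int[] keys) {
        int[] r = new int[keys.length];
        for (int i = 0; i < keys.length; i++) r[i] = keys[keys.length - 1 - i];
        return r;
    }

    public static void main(String[] args) {
        int[] keys = {0x1F3A, 0x77C2, 0x0B5D, 0x9A10}; // arbitrary subkeys
        int plain = 0x0BADF00D;
        int cipher = rounds(plain, keys);
        int back = rounds(cipher, reverse(keys));   // same code, reversed subkeys
        System.out.println(back == plain);          // → true
    }
}
```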
This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms. The red ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are not swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes.

The Feistel (F) function

The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages:

1. Expansion — the 32-bit half-block is expanded to 48 bits using the expansion permutation, denoted E in the diagram, by duplicating some of the bits.
2. Key mixing — the result is combined with a subkey using an XOR operation. Sixteen 48-bit subkeys — one for each round — are derived from the main key using the key schedule (described below).
3. Substitution — after mixing in the subkey, the block is divided into eight 6-bit pieces before processing by the S-boxes, or substitution boxes. Each of the eight S-boxes replaces its six input bits with four output bits according to a non-linear transformation, provided in the form of a lookup table. The S-boxes provide the core of the security of DES — without them, the cipher would be linear, and trivially breakable.
4. Permutation — finally, the 32 outputs from the S-boxes are rearranged according to a fixed permutation, the P-box.

The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion, provides so-called "confusion and diffusion" respectively, a concept identified by Claude Shannon in the 1940s as a necessary condition for a secure yet practical cipher.

Key schedule

Figure 3 illustrates the key schedule for encryption — the algorithm which generates the subkeys.
Initially, 56 bits of the key are selected from the initial 64 by Permuted Choice 1 (PC-1) — the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by Permuted Choice 2 (PC-2) — 24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys. The key schedule for decryption is similar — the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. Security and cryptanalysis Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute force attack, require an unrealistic amount of chosen plaintext to carry out, and are not a concern in practice. Brute force attack For any cipher, the most basic method of attack is brute force — trying every possible key in turn. The length of the key determines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacement algorithm. 
It is known that the NSA encouraged, if not persuaded, IBM to reduce the key size from 128 to 64 bits, and from there to 56 bits; this is often taken as an indication that the NSA thought it would be able to break keys of this length even in the mid-1970s. In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day. By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s. In 1997, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by the DESCHALL Project, led by Rocke Verser, Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by the Electronic Frontier Foundation (EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (see EFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' search; at about the same time at least one attorney from the US Justice Department was announcing that DES was unbreakable. 
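For scale, the brute-force numbers above are easy to reproduce. In the sketch below the search rate is merely implied by the 56-bit keyspace and the roughly two-day EFF search quoted in the text; it is a derived estimate, not a published figure.

```java
// Back-of-the-envelope numbers for the brute-force discussion above. The
// implied rate is derived only from the 56-bit key size and the "a little
// more than 2 days" EFF search time -- it is not a published figure.
public class KeyspaceMath {
    public static long keyspace() { return 1L << 56; }

    public static void main(String[] args) {
        long keys = keyspace();                            // 72,057,594,037,927,936
        double searchSeconds = 2.0 * 24 * 3600;            // ≈ two days
        double impliedRate = (keys / 2.0) / searchSeconds; // half the keyspace on average
        System.out.printf("%,d keys; implied average rate ~%.1e keys/s%n", keys, impliedRate);
    }
}
```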
The only other confirmed DES cracker was the COPACOBANA machine (an abbreviation of Cost-Optimized Parallel Code Breaker) built more recently by teams of the Universities of Bochum and Kiel, both in Germany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of these FPGAs of type XILINX Spartan3-1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well. The figure shows a full-sized COPACOBANA. One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000. The cost decrease by roughly a factor of 25 over the EFF machine is an impressive example of the continuous improvement of digital hardware. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Interestingly, Moore's law predicts an improvement of about 32, since about 8 years have passed between the design of the two machines, which allows for about five doublings of computer power (or five successive halvings of the cost of the same computation).

Attacks faster than brute force

There are three attacks known that can break the full sixteen rounds of DES with less complexity than a brute-force search: differential cryptanalysis (DC), linear cryptanalysis (LC), and Davies' attack. However, the attacks are theoretical and are unfeasible to mount in practice; these types of attack are sometimes termed certificational weaknesses.

• Differential cryptanalysis was rediscovered in the late 1980s by Eli Biham and Adi Shamir; it was known earlier to both IBM and the NSA and kept secret. To break the full 16 rounds, differential cryptanalysis requires 2^47 chosen plaintexts. DES was designed to be resistant to DC.
• Linear cryptanalysis was discovered by Mitsuru Matsui, and needs 2^43 known plaintexts (Matsui, 1993); the method was implemented (Matsui, 1994), and was the first experimental cryptanalysis of DES to be reported. There is no evidence that DES was tailored to be resistant to this type of attack. A generalisation of LC — multiple linear cryptanalysis — was suggested in 1994 (Kaliski and Robshaw), and was further refined by Biryukov et al. (2004); their analysis suggests that multiple linear approximations could be used to reduce the data requirements of the attack by at least a factor of 4 (i.e. 2^41 instead of 2^43). A similar reduction in data complexity can be obtained in a chosen-plaintext variant of linear cryptanalysis (Knudsen and Mathiassen, 2000). Junod (2001) performed several experiments to determine the actual time complexity of linear cryptanalysis, and reported that it was somewhat faster than predicted, requiring time equivalent to 2^39–2^41 DES evaluations.

• Improved Davies' attack: while linear and differential cryptanalysis are general techniques and can be applied to a number of schemes, Davies' attack is a specialised technique for DES, first suggested by Donald Davies in the eighties, and improved by Biham and Biryukov (1997). The most powerful form of the attack requires 2^50 known plaintexts, has a computational complexity of 2^50, and has a 51% success rate.

There have also been attacks proposed against reduced-round versions of the cipher, i.e. versions of DES with fewer than sixteen rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains. Differential-linear cryptanalysis was proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack. An enhanced version of the attack can break 9-round DES with 2^15.8 known plaintexts and has a 2^29.2 time complexity (Biham et al., 2002).
Minor cryptanalytic properties

DES exhibits the complementation property, namely that

$E_K(P) = C \Leftrightarrow E_{\overline{K}}(\overline{P}) = \overline{C}$

where $\overline{x}$ denotes the bitwise complement of $x$, $E_K$ denotes encryption with key $K$, and $P$ and $C$ denote plaintext and ciphertext blocks respectively. The complementation property means that the work for a brute force attack could be reduced by a factor of 2 (or a single bit) under a chosen-plaintext assumption. DES also has four so-called weak keys. Encryption (E) and decryption (D) under a weak key have the same effect (see involution):

$E_K(E_K(P)) = P$ or equivalently, $E_K = D_K.$

There are also six pairs of semi-weak keys. Encryption with one of the pair of semiweak keys, $K_1$, operates identically to decryption with the other, $K_2$:

$E_{K_1}(E_{K_2}(P)) = P$ or equivalently, $E_{K_2} = D_{K_1}.$

It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage. DES has also been proved not to be a group, or more precisely, the set $\{E_K\}$ (for all possible keys $K$) under functional composition is not a group, nor "close" to being a group (Campbell and Wiener, 1992). This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such as Triple DES would not increase the security. It is known that the maximum cryptographic security of DES is limited to about 64 bits, even when independently choosing all round subkeys instead of deriving them from a key, which would otherwise permit a security of 768 bits.
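The complementation property is easy to check empirically with the JDK's built-in DES implementation (ECB mode, no padding). The key and plaintext below are arbitrary test values, not standard vectors:

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

// Empirical check of the complementation property: ~E_K(P) == E_~K(~P),
// where ~ is bitwise complement. Uses a single raw DES block (no padding).
public class ComplementDemo {
    public static byte[] complement(byte[] in) {
        byte[] out = new byte[in.length];
        for (int i = 0; i < in.length; i++) out[i] = (byte) ~in[i];
        return out;
    }

    public static byte[] desEncrypt(byte[] key, byte[] block) throws Exception {
        Cipher c = Cipher.getInstance("DES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "DES"));
        return c.doFinal(block);
    }

    public static void main(String[] args) throws Exception {
        byte[] key   = {0x13, 0x34, 0x57, 0x79, (byte) 0x9B, (byte) 0xBC, (byte) 0xDF, (byte) 0xF1};
        byte[] plain = {0x01, 0x23, 0x45, 0x67, (byte) 0x89, (byte) 0xAB, (byte) 0xCD, (byte) 0xEF};
        byte[] left  = complement(desEncrypt(key, plain));             // ~E_K(P)
        byte[] right = desEncrypt(complement(key), complement(plain)); // E_~K(~P)
        System.out.println(Arrays.equals(left, right));                // → true
    }
}
```

Note that complementing all 64 key bits also complements the 56 bits the algorithm actually uses, which is why the property survives the parity bits.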
References

• Ehrsam et al., Product Block Cipher System for Data Security, filed February 24, 1975
• Eli Biham, Adi Shamir (1991). "Differential Cryptanalysis of DES-like Cryptosystems". Journal of Cryptology 4 (1): 3–72. (preprint)
• Eli Biham, Adi Shamir, Differential Cryptanalysis of the Data Encryption Standard, Springer Verlag, 1993. ISBN 0-387-97930-1, ISBN 3-540-97930-1.
• Eli Biham, Alex Biryukov: An Improvement of Davies' Attack on DES. Journal of Cryptology 10 (3): 195–206 (1997)
• Eli Biham, Orr Dunkelman, Nathan Keller: Enhancing Differential-Linear Cryptanalysis. ASIACRYPT 2002: pp. 254–266
• Eli Biham: A Fast New DES Implementation in Software
• Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, Electronic Frontier Foundation
• A. Biryukov, C. De Cannière, M. Quisquater (2004). "On Multiple Linear Approximations". Lecture Notes in Computer Science 3152: 1–22. (preprint)
• Keith W. Campbell, Michael J. Wiener: DES is not a Group. CRYPTO 1992: pp. 512–520
• Don Coppersmith (1994). "The Data Encryption Standard (DES) and its strength against attacks". IBM Journal of Research and Development 38 (3): 243–250.
• Whitfield Diffie, Martin Hellman, "Exhaustive Cryptanalysis of the NBS Data Encryption Standard". IEEE Computer 10 (6), June 1977, pp. 74–84
• John Gilmore, "Cracking DES: Secrets of Encryption Research, Wiretap Politics and Chip Design", 1998, O'Reilly, ISBN 1-56592-520-3.
• Pascal Junod, "On the Complexity of Matsui's Attack". Selected Areas in Cryptography, 2001, pp. 199–211.
• Burton S. Kaliski Jr., Matthew J. B. Robshaw: Linear Cryptanalysis Using Multiple Approximations. CRYPTO 1994: pp. 26–39
• Lars R. Knudsen, John Erik Mathiassen: A Chosen-Plaintext Linear Attack on DES. Fast Software Encryption - FSE 2000: pp. 262–272
• Susan K. Langford, Martin E. Hellman: Differential-Linear Cryptanalysis. CRYPTO 1994: pp. 17–25
• Steven Levy, Crypto: How the Code Rebels Beat the Government Saving Privacy in the Digital Age, 2001, ISBN 0-14-024432-8.
• Mitsuru Matsui (1994). "Linear Cryptanalysis Method for DES Cipher". Lecture Notes in Computer Science 765: 386–397. (preprint)
• Mitsuru Matsui (1994). "The First Experimental Cryptanalysis of the Data Encryption Standard". Lecture Notes in Computer Science 839: 1–11.
• National Bureau of Standards, Data Encryption Standard, FIPS-Pub. 46. National Bureau of Standards, U.S. Department of Commerce, Washington D.C., January 1977.
Exceptional Code

This is a short tutorial on Binary Search Trees covering the basic operations that are often done on such trees. A Binary Search Tree (BST) is a tree data structure that has the following property: for any node x in the tree, the left subtree rooted at x only contains nodes with values less than or equal to the value stored at x; and the right subtree rooted at x only contains nodes with values greater than the value at x. Also, the left subtree and the right subtree of x must be binary trees themselves.

Figure: A binary search tree of size 9 and depth 3, with root 8 and leaves 1, 4, 7 and 13 (image source: Wikipedia)

BSTs have sub-linear (logarithmic) average case complexity for element insertion and searching. Once a BST is built, sorting it can be done in linear time: all we need is to traverse the tree in inorder.

There are a number of operations that can be done on a BST. We will describe those here in some detail with Java code examples. First let's see how we can represent a tree node in Java:

class TreeNode
{
    int data;
    TreeNode parent;
    TreeNode left;
    TreeNode right;

    public TreeNode(int data)
    {
        this.data = data;
        parent = left = right = null;
    }

    public void setData(int data) { this.data = data; }
    public void setLeft(TreeNode node) { left = node; }
    public void setRight(TreeNode node) { right = node; }
    public void setParent(TreeNode node) { parent = node; }
    public int getData() { return data; }
    public TreeNode getLeft() { return left; }
    public TreeNode getRight() { return right; }
    public TreeNode getParent() { return parent; }
}

Inserting a node into a BST

Inserting a node into a BST is pretty straightforward. We start checking nodes from the root node. At each node, if its value is greater than the value of the node to be inserted then we move to the left child, otherwise we move to the right child. We repeat this until there is no more node to be traversed. The last node visited will be the parent node of the node to be inserted.
Now if the node to be inserted has a smaller value than this parent node then we set it as the left child, otherwise we set it as the right child.

Java code:

public static void insert(TreeNode node)
{
    // root is a global variable; assume it is initialized to null
    TreeNode x = root;
    TreeNode y = null; // y trails one step behind x and ends up as node's parent
    while (x != null)
    {
        y = x;
        if (x.getData() > node.getData())
            x = x.getLeft();
        else
            x = x.getRight();
    }
    node.setParent(y);
    if (y == null)
        root = node; // the tree was empty
    else if (y.getData() > node.getData())
        y.setLeft(node);
    else
        y.setRight(node);
}

Deleting a node from a BST

When deleting a node from a BST, there can be 3 different scenarios:

i) The node does not have any children: This is kind of a trivial case. When there are no children all we need to do is remove the node from its parent.

ii) The node has only one child: This is also a relatively simple case. Since the node has only one child, we simply replace the node with this lone child without violating the BST sorted order.

iii) The node has two children: Now, this is a bit tricky :) Since there are two children we have to be careful not to break the sorted order of the BST. To keep the sorted order intact, we need to replace the node to be deleted with its successor so that the order is maintained in the absence of the node. It turns out the successor can have at most one child. So if the successor has a child we will have to replace the successor with that child after the successor has replaced the node to be deleted.
Java code:

public static void delete(TreeNode node)
{
    // Case 1: node does not have a child, just unlink it from its parent
    if (node.getLeft() == null && node.getRight() == null)
    {
        if (node.getParent() == null)
            root = null;
        else if (node.getParent().getLeft() == node)
            node.getParent().setLeft(null);
        else
            node.getParent().setRight(null);
    }
    // Case 2: node has only one child, splice the child with node's parent
    else if (node.getLeft() == null || node.getRight() == null)
    {
        TreeNode x = node.getLeft() == null ? node.getRight() : node.getLeft();
        x.setParent(node.getParent());
        if (node.getParent() == null)
            root = x;
        else if (node.getParent().getLeft() == node)
            node.getParent().setLeft(x);
        else
            node.getParent().setRight(x);
    }
    // Case 3: node has both children
    else
    {
        TreeNode x = findSuccessor(node); // x will have at most one child
        // Instead of moving nodes around we can just copy the successor's
        // data over to the node to be deleted
        node.setData(x.getData());
        // Now delete the successor and attach its child (if any) to its parent
        TreeNode nodeChild = x.getLeft() == null ? x.getRight() : x.getLeft();
        if (nodeChild != null)
            nodeChild.setParent(x.getParent());
        if (x.getParent().getLeft() == x)
            x.getParent().setLeft(nodeChild);
        else
            x.getParent().setRight(nodeChild);
    }
}

Finding the minimum value in the tree

In a BST, all nodes to the left of a node contain smaller (or equal) values than the value stored in the node itself. This gives us a clue about how to tackle the problem of finding the minimum value stored in the BST. If you think about it, the node with the minimum value in a BST is actually the leftmost node in the tree. If it wasn't, then there would be a node to the right of some node containing a smaller value than the node itself, thus violating the BST contract. Here is the Java code for finding the minimum value node in a BST:

public static TreeNode findMinimum(TreeNode root)
{
    if (root == null)
        return null;
    if (root.getLeft() != null)
        return findMinimum(root.getLeft());
    return root;
}

Finding the maximum value in the tree

Similar to the logic used in finding the minimum value, the maximum value node will be the rightmost node in the tree.
The Java code for finding the maximum value node in a BST:

public static TreeNode findMaximum(TreeNode root)
{
    if (root == null)
        return null;
    if (root.getRight() != null)
        return findMaximum(root.getRight());
    return root;
}

One useful property of the BST is that if it is traversed in inorder we get a sorted list of the values stored in the tree nodes. There are two operations that are often done in relation to maintaining this sorted order of a BST: finding the successor node and the predecessor node of a given node.

Finding the successor node of a given node

The successor node is the node with the next bigger value in the tree. Since all values to the left are smaller than the current node, the successor has to be located in the right subtree. And since it is the next bigger value of the current node, it has to be the smallest of all values in the right subtree. So, essentially we are looking for the minimum value in the right subtree of the current node. In the case that there is no right child of the node, we need to look up the tree and figure out the successor. The successor is the first ancestor whose left subtree has this node as its largest value. In other words: the first ancestor of this node whose left child is also an ancestor of this node. The intuition is: as we traverse left up the tree we traverse smaller values, so the first node on the right is the next larger number. Here is the Java code:

public static TreeNode findSuccessor(TreeNode node)
{
    if (node == null)
        return null;
    if (node.getRight() != null)
        return findMinimum(node.getRight());
    TreeNode y = node.getParent();
    TreeNode x = node;
    while (y != null && x == y.getRight())
    {
        x = y;
        y = y.getParent();
    }
    return y;
}

Finding the predecessor node of a given node

The predecessor of a given node is the node containing the next smaller value. Finding the predecessor follows rules symmetric to those we used for finding the successor.
Java code for finding the predecessor:

public static TreeNode findPredecessor(TreeNode node)
{
    if (node == null)
        return null;
    if (node.getLeft() != null)
        return findMaximum(node.getLeft());
    TreeNode y = node.getParent();
    TreeNode x = node;
    while (y != null && x == y.getLeft())
    {
        x = y;
        y = y.getParent();
    }
    return y;
}

Determining whether a tree is a BST or not

Sometimes we already have a binary tree and need to determine whether it is a BST or not. This is an interesting problem and it can really be solved with a simple recursive solution. The BST property, that every node in the right subtree has to be larger than the current node and every node in the left subtree has to be smaller than (or equal to) the current node, is the key to figuring out whether a tree is a BST or not. On first thought it might look like we can simply traverse the tree and at every node check whether the node contains a value larger than the value at the left child and smaller than the value at the right child, and if this condition holds for all the nodes in the tree then we have a BST. This is the so-called greedy approach, making a decision based only on local properties. But this approach clearly won't work for a tree like the following:

        20
       /  \
      10   30
          /  \
         5    40
So the condition we need to check at each node is that: a) if the node is the left child of its parent, then it must be smaller (or equal) than the parent and it must pass down the value from its parent to its right subtree to make sure none of the nodes in that subtree is greater the parent, and similarly b) if the node is the right child of its parent, then it must be larger than the parent and it must pass down the value from its parent to its left subtree to make sure none of the nodes in that subtree is greater the parent. A simple but elegant recursive solution in Java can explain this further: public static boolean isBST(TreeNode node, int leftData, int rightData) if (node == null) return true; if (node.getData() > leftData || node.getData() <= rightData) return false; return (isBST(node.left, node.getData(), rightData) && isBST(node.right, leftData, node.getData())); The initial call to this function can be something like this: if (isBST(root, Integer.MAX_VALUE, Integer.MIN_VALUE)) System.out.println("This is a BST."); System.out.println("This is NOT a BST!"); Essentially we keep creating a valid range (starting from [ MIN_VALUE, MAX_VALUE]) and keep shrinking it down foe each node as we go down recursively. So, here is my short and sweet primer on Binary Search Trees. Hope you found it useful. Let me know if there is a bug or a possible improvement in runtime or space.
Theory of the backpropagation neural network

Results 1 - 10 of 67

, 1993 "... Several researchers characterized the activation function under which multilayer feedforward networks can act as universal approximators. We show that most of all the characterizations that were reported thus far in the literature are special cases of the following general result: A standard multila ..."

Cited by 117 (2 self)

Several researchers characterized the activation function under which multilayer feedforward networks can act as universal approximators. We show that most of all the characterizations that were reported thus far in the literature are special cases of the following general result: A standard multilayer feedforward network with a locally bounded piecewise continuous activation function can approximate any continuous function to any degree of accuracy if and only if the network's activation function is not a polynomial. We also emphasize the important role of the threshold, asserting that without it the last theorem does not hold.

- IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992 "... Supervised Learning in Multi-Layered Neural Networks (MLNs) has been recently proposed through the well-known Backpropagation algorithm. This is a gradient method which can get stuck in local minima, as simple examples can show. In this paper, some conditions on the network architecture and the lear ..."

Cited by 72 (17 self)

Supervised Learning in Multi-Layered Neural Networks (MLNs) has been recently proposed through the well-known Backpropagation algorithm. This is a gradient method which can get stuck in local minima, as simple examples can show. In this paper, some conditions on the network architecture and the learning environment are proposed which ensure the convergence of the Backpropagation algorithm. It is proven in particular that the convergence holds if the classes are linearly-separable.
In this case, the experience gained in several experiments shows that MLNs exceed perceptrons in generalization to new examples. Index Terms - Multi-Layered Networks, learning environment, Backpropagation, pattern recognition, linearly-separable classes. I. Introduction. Supervised learning in Multi-Layered Networks can be accomplished thanks to Backpropagation (BP) ([19, 25, 31]). Its application to several different subjects [25], and, particularly, to pattern recognition ([3, 6, 8, 20, 27, 29]), has ...

, 1994 "... We studied and compared two types of connectionist learning methods for model-free regression problems in this paper. One is the popular back-propagation learning (BPL) well known in the artificial neural networks literature; the other is the projection pursuit learning (PPL) emerged in recent years ..."

Cited by 65 (1 self)

We studied and compared two types of connectionist learning methods for model-free regression problems in this paper. One is the popular back-propagation learning (BPL), well known in the artificial neural networks literature; the other is the projection pursuit learning (PPL) that emerged in recent years in the statistical estimation literature. Both the BPL and the PPL are based on projections of the data in directions determined from interconnection weights. However, unlike the use of fixed nonlinear activations (usually sigmoidal) for the hidden neurons in BPL, the PPL systematically approximates the unknown nonlinear activations. Moreover, the BPL estimates all the weights simultaneously at each iteration, while the PPL estimates the weights cyclically (neuron-by-neuron and layer-by-layer) at each iteration. Although the BPL and the PPL have comparable training speed when based on a Gauss-Newton optimization algorithm, the PPL proves more parsimonious in that the PPL requires a fewer hi ...

, 1992 "...
We show that, for feedforward nets with a single hidden layer, a single output node, and a "transfer function" tanh, the net is uniquely determined by its input-output map, up to an obvious finite group of symmetries (permutations of the hidden nodes, and changing the sign of all the weights associ ..."

Cited by 51 (2 self)

We show that, for feedforward nets with a single hidden layer, a single output node, and a "transfer function" tanh, the net is uniquely determined by its input-output map, up to an obvious finite group of symmetries (permutations of the hidden nodes, and changing the sign of all the weights associated to a particular hidden node), provided that the net is irreducible, i.e. that there does not exist an inner node that makes a zero contribution to the output, and there is no pair of hidden nodes that could be collapsed to a single node without altering the input-output map. Rutgers Center for Systems and Control, May 1991; revised October 1991. Research supported in part by the Air Force Office of Scientific Research (AFOSR-91-0343). The author thanks Eduardo Sontag for suggesting the problem and for his helpful comments and ideas, and an anonymous referee for suggesting how to improve the exposition at several points. Requests for reprints should be sent to Héctor J. Sussmann,

- ARTIFICIAL NEURAL NETWORKS, 1992 "... A very important theoretical result giving impetus to increasing interest in neural networks is that a multilayer feedforward network can approximate any function to arbitrary precision, or as a classifier it can form arbitrarily complex class boundaries [2]. In difficult practical classification p ..."

Cited by 47 (6 self)

A very important theoretical result giving impetus to increasing interest in neural networks is that a multilayer feedforward network can approximate any function to arbitrary precision, or as a classifier it can form arbitrarily complex class boundaries [2].
In difficult practical classification problems, like in pattern recognition and machine vision, the class boundaries will inevitably be very complex due to variations and distortions in the input images. To reduce the amount of training data needed, the number of independent weights in the classifier must be reduced [1]. The trade-off is between the capability of the classifier and the amount of training data. In machine vision problems it is often possible to acquire large amounts of training data as long as manual classification of the objects is not required. Thus unsupervised methods can be used in the preprocessing stage without large extra cost. The essential requirement for the preprocessor is that the (unknown) class boundaries should be simpler than in the original data, while any two separable classes should remain separable. Since the class boundaries are not known, the best preprocessing can do is to follow the distributions of the data samples, or in other words, clustering.

Journal of Computing and Information Science in Engineering, 2003, Cited by 45 (13 self): This document contains the draft version of the following paper: A. Cardone, S.K. Gupta, and M. Karnik. A survey of shape similarity assessment algorithms for product design and manufacturing applications. ASME Journal of ...

ORSA Journal of Computing, 1993:
Cited by 39 (3 self): We have studied neural networks as models for time series forecasting, and our research compares the Box-Jenkins method against the neural network method for long and short term memory series. Our work was inspired by previously published works that yielded inconsistent results about comparative performance. We have since experimented with 16 time series of differing complexity using neural networks. The performance of the neural networks is compared with that of the Box-Jenkins method. Our experiments indicate that for time series with long memory, both methods produced comparable results. However, for series with short memory, neural networks outperformed the Box-Jenkins model. Because neural networks can be easily built for multiple-step-ahead forecasting, they present a better long term forecast model than the Box-Jenkins method. We discussed the representation ability, the model building process and the applicability of the neural net approach. Neural networks appear to provide a ...

ACTA NUMERICA, 1999, Cited by 39 (3 self): In this survey we discuss various approximation-theoretic problems that arise in the multilayer feedforward perceptron (MLP) model in neural networks. Mathematically it is one of the simpler models. Nonetheless the mathematics of this model is not well understood, and many of these problems are approximation-theoretic in character. Most of the research we will discuss is of very recent vintage. We will report on what has been done and on various unanswered questions. We will not be presenting practical (algorithmic) methods.
We will, however, be exploring the capabilities and limitations of this model. In the first ...

1992, Cited by 31 (14 self): This paper shows that the weights of continuous-time feedback neural networks are uniquely identifiable from input/output measurements. Under very weak genericity assumptions, the following is true: Assume given two nets, whose neurons all have the same nonlinear activation function σ; if the two nets have equal behaviors as "black boxes" then necessarily they must have the same number of neurons and (except at most for sign reversals at each node) the same weights. Moreover, even if the activations are not a priori known to coincide, they are shown to be also essentially determined from the external measurements. Key words: Neural networks, identification from input/output data, control systems. 1 Introduction. Many recent papers have explored the computational and dynamical properties of systems of interconnected "neurons." For instance, Hopfield ([7]), Cowan ([4]), and Grossberg and his school (see e.g. [3]), have all studied devices that can be modelled by sets of nonlinear dif...

in Artificial Neural Networks with Applications in Speech and Vision, 1993:
Cited by 23 (8 self): Introduction. In most applications dealing with learning and pattern recognition, neural nets are employed as models whose parameters, or "weights," must be fit to training data. Gradient descent and other algorithms are used in order to minimize an error functional, which penalizes mismatches between the desired outputs and those that a candidate net (with a fixed architecture and varying weights) produces. There are many numerical issues that arise naturally when using such a design approach, in particular: (i) the possibility of local minima which are not globally optimal, and (ii) the possibility of multiple global minimizers. The first question was dealt with by many different authors (see for instance [5, 13, 14]) and will not be reviewed here. Regarding point (ii), observe that there are obvious transformations that leave the behavior of a network invariant, such as interchanges of all incoming and outgoing weights between two neurons, that is, the relabeling of ...
Three quarter circle problem...
February 16th 2013, 04:20 PM
My first post! I have a question about the perimeter of three quarters of a circle. It said the perimeter is 572. How do you separate the straight edges? What is the solution? Thanks for all your help!
February 17th 2013, 10:29 PM
Re: Three quarter circle problem...
1. Draw a sketch (or use a pizza and eat one quarter of it :D )
2. The perimeter of three quarters of a circle consists of 2 radii and the corresponding arc, which is 3/4 of the perimeter of the full circle. Thus you have:
$2r + \frac34 \cdot 2\pi \cdot r = 572$
Solve for r.
3. You should come out with $r \approx 85.22$
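The same computation takes only a couple of lines of Python; the numbers just reproduce the solution above:

```python
import math

# Perimeter of three quarters of a circle: two radii plus 3/4 of the circumference.
# 2r + (3/4)(2*pi*r) = 572  =>  r = 572 / (2 + 1.5*pi)
P = 572
r = P / (2 + 1.5 * math.pi)
print(round(r, 2))  # -> 85.22
```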
MathGroup Archive: May 2010 [00286]
[Date Index] [Thread Index] [Author Index]
Re: Latex, Mathematica, and journals
• To: mathgroup at smc.vnet.net
• Subject: [mg109781] Re: Latex, Mathematica, and journals
• From: Murray Eisenberg <murray at math.umass.edu>
• Date: Mon, 17 May 2010 07:11:35 -0400 (EDT)
There is a sort of WYSIWYG interface for LaTeX: the cross-platform LyX. It's really a whole new document processing system that provides a front end to LaTeX as the underlying typesetting engine (and allows direct entry of LaTeX mark-up). Which means you'd have to learn a whole lot of new ways of doing things. Win some, lose some.
By its very nature, LaTeX cannot be a true WYSIWYG system. Its design requires that you enter mark-up code that then is processed by the TeX engine itself to produce typeset output. And that engine has to digest much, or all, of the input in order to figure out where to break lines, how much extra space to slip in between words or characters, how much extra space to insert between lines to fill out the page, and where to break pages.
Another impediment to WYSIWYG is that many if not most LaTeX users rely upon loading packages, from the very common amsmath to specialized packages for changing layouts and formats of headers/footers, allowing multiple columns within parts of the document, etc.; such packages are typically controlled by one or even many separate options you specify when you load the package. (There's a whole large book devoted just to such packages, "The LaTeX Companion".) So how could a LaTeX front end know about all such packages? And how could it implement their use through WYSIWYG methods?
Two things towards WYSIWYG for LaTeX are possible: (1) a front end that helps you create the necessary mark-up; and (2) a really fast viewer that typesets as you proceed (but necessarily must change the output as more of the document is created) and from which you can readily do reverse-search from the typeset view back to the .tex source.
There are a couple of front ends that make writing and processing LaTeX much easier for somebody who's not using it all the time. They combine palette-driven input of math structures and symbols along with menu-driven structuring of the document, yet introduce no new paradigms (such as the ones for LyX). The cross-platform Texmaker is one such. For Windows, there's TeXnicCenter (part of the proTeXt bundle built upon the MiKTeX distribution), and WinEdt (which can interface to MiKTeX, TeX Live, and Y & Y TeX). For Mac OS X there's TeXShop (with the MacTeX bundle built upon the TeX Live distribution). Viewing speed after typesetting varies with these front ends. All can do reverse-search.
For Windows, too, there's the proprietary BaKoMa system, which provides essentially synchronized viewing of source as you typeset and even allows you to type text directly into the viewer window. Aside from LyX, this is probably the closest you can come today to WYSIWYG for LaTeX.
On the Mac, for sheer speed of essentially simultaneous typesetting and viewing, nothing can touch the proprietary Blue Sky "Textures". However, as far as I can recall, Textures included no editor with the kind of palettes and structuring menus common to the front ends mentioned above. Moreover, there seems to have been no further development of Textures in the last few years, and I don't know whether it works with current OS X or is kept up to date with current TeX.
On 5/16/2010 5:56 AM, S. B. Gray wrote:
> Can anyone tell me why there is no WYSIWYG interface for Latex?
> Any time I want to publish a paper I have to relearn it again, since I
> publish rarely.
> I would gladly use MS Word if the math journals would accept it.
> And is there any movement to accepting Mathematica output, properly formatted?
> Steve Gray
Murray Eisenberg, murray at math.umass.edu
Mathematics & Statistics Dept.
Lederle Graduate Research Tower
University of Massachusetts
710 North Pleasant Street
Amherst, MA 01003-9305
phone 413 549-1020 (H), 413 545-2859 (W), fax 413 545-1801
Variable selection via nonconcave penalized likelihood and its oracle properties. (English) Zbl 1073.62547
Summary: Variable selection is fundamental to high-dimensional statistical modeling, including nonparametric regression. Many approaches in use are stepwise selection procedures, which can be computationally expensive and ignore stochastic errors in the variable selection process. In this article, penalized likelihood approaches are proposed to handle these kinds of problems. The proposed methods select variables and estimate coefficients simultaneously. Hence they enable us to construct confidence intervals for estimated parameters. The proposed approaches are distinguished from others in that the penalty functions are symmetric, nonconcave on $\left(0,\infty \right)$, and have singularities at the origin to produce sparse solutions. Furthermore, the penalty functions should be bounded by a constant to reduce bias and satisfy certain conditions to yield continuous solutions. A new algorithm is proposed for optimizing penalized likelihood functions. The proposed ideas are widely applicable. They are readily applied to a variety of parametric models such as generalized linear models and robust regression models. They can also be applied easily to nonparametric modeling by using wavelets and splines. Rates of convergence of the proposed penalized likelihood estimators are established. Furthermore, with proper choice of regularization parameters, we show that the proposed estimators perform as well as the oracle procedure in variable selection; namely, they work as well as if the correct submodel were known. Our simulation shows that the newly proposed methods compare favorably with other variable selection techniques. Furthermore, the standard error formulas are tested to be accurate enough for practical applications.
62J12 Generalized linear models 62G08 Nonparametric regression 62F12 Asymptotic properties of parametric estimators 62G20 Nonparametric asymptotic efficiency
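As an illustration of a penalty with the properties described above (singularity at the origin to produce sparsity, bounded so that large coefficients incur little extra bias), here is a sketch of the SCAD penalty commonly associated with this work, written via its derivative. The tuning constants `lam` and `a = 3.7` are illustrative assumptions, not values taken from the abstract:

```python
import numpy as np

def scad_derivative(theta, lam=1.0, a=3.7):
    """Derivative p'_lam(|theta|) of the SCAD penalty: constant (lasso-like)
    near the origin, tapering linearly to zero beyond a*lam."""
    t = np.abs(np.asarray(theta, dtype=float))
    out = np.zeros_like(t)
    out[t <= lam] = lam
    mid = (t > lam) & (t < a * lam)
    out[mid] = (a * lam - t[mid]) / (a - 1)
    return out

# Small |theta|: penalized like a lasso; |theta| >= a*lam: no added bias.
print(scad_derivative([0.5, 2.0, 5.0]))
```

Because the derivative vanishes for large arguments, large estimated coefficients are left nearly unbiased, which is what makes the oracle behavior possible.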
Pacific, WA Find a Pacific, WA Precalculus Tutor ...It is my goal to make math an interesting, if not enjoyable subject. I am detail oriented and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and enjoyable experience. I have worked as a laboratory chemist and as an instructor at Tacoma Community College for several years. 12 Subjects: including precalculus, chemistry, geometry, ASVAB ...I have been successfully working as a tutor for the past 5 years with a variety of children in grade school with good feedback. I found my experience as a research scientist fascinated the kids which created an environment which they wanted to be in. I have worked throughout the digital revolut... 45 Subjects: including precalculus, chemistry, physics, calculus ...I keep math real and practical, and built a rapport with my students. I have even become quite versed at reading and writing upside down so that I save the time of having to turn the paper around while working with my students. I view any tutoring appointment as a contract to which I am obligated, and ask the same from my clients. 8 Subjects: including precalculus, calculus, geometry, algebra 1 ...I would love to be in a position to help others come to appreciate the mysteries and wonders science and math reveal about our world. My past teaching experience includes four years instructing beginning and intermediate college astronomy laboratories, as well as individual student tutoring for ... 5 Subjects: including precalculus, physics, geometry, algebra 1 ...Throughout my college career I had a special focus in mathematics. Outside of school, current events, video games, and the financial markets catches most of my attention--with the exception of my 5 month old dog, Misha. Tutoring: I currently tutor 9 students on a regular basis in a variety of s... 
17 Subjects: including precalculus, chemistry, calculus, physics
Reasoning and Sense Making Task Library
Focus in High School Mathematics: Reasoning and Sense Making, NCTM’s Principles and Standards for School Mathematics, and the Common Core State Standards; each item addresses:
• Task Design: what the task is asking students to do (see Task Purpose, Task Overview, Focus on Reasoning and Sense Making, Focus on Mathematical Content, Materials and Technology, Assessment and the Student Activity sheet)
• Teaching Design: how teachers might facilitate reasoning and sense making (see Use in the Classroom)
• Student Engagement: what students might actually do in the classroom (see Focus on Student Thinking)
Do you have a reasoning and sense making task that you would like to develop for the Task Library? If so, click this invitation for information regarding submissions.
• Horseshoes in Flight: Students analyze the structure of algebraic expressions and a graph to determine what information each expression readily contributes about the flight of a horseshoe. This task is particularly relevant to students who are studying (or have studied) various quadratic expressions (or functions). The task also illustrates a step in the mathematical modeling process that involves interpreting mathematical results in a real-world context.
• Taking a Spin: Although students are often asked to find the angles of rotational symmetry for given regular polygons, in this task they are asked to find the regular polygons for a given angle of rotational symmetry, a reversal that yields some surprising results. This task would be most appropriate with students who have at least some experience in exploring rotational symmetry.
• Tidal Waves: Students analyze a problem faced by the captain of a shipping vessel. Students may use a range of functions to model the situation and reflect on their usefulness. Because trigonometric functions can be useful, this task would be particularly appropriate for students who have had an introduction to graphing sine and cosine functions.
• Eruptions: Old Faithful: Students analyze data and make predictions. They will create a variety of graphical displays to discover trends in the data, then use those graphs to support their predictions. This task is appropriate for students familiar with line graphs and other graphical displays of univariate data sets.
• Fuel for Thought: Students use mathematical reasoning to determine appropriate numerical measures and patterns in fuel consumption in order to inform consumer choice for vehicle purchasing. The task promotes a sophisticated use of number sense including careful attention to units. It is accessible to beginning high school students.
• Bank Shot: Students compare their own reasoning strategies and those of their classmates, focusing on the strategies’ usefulness in determining how to make certain bank shots in billiards. This task is intended to involve multiple geometric perspectives and would be appropriate for students with an understanding of similar triangles, rigid motions (especially reflections), and equations for lines, and is designed to strengthen students’ understanding of these concepts.
• As the Crow Flies: The distance formula is often presented as a “rule” for students to memorize. This task is designed to help students develop an understanding of the meaning of the formula. It would be appropriate as an introduction (or review) of the distance formula for students who are familiar with the Pythagorean theorem and coordinate systems.
• Over the Hill: Students determine locations on a hillside for a cell phone tower erected to provide a signal to people on the other side of the hill. They identify necessary information, represent the problem with a scale model, and answer questions in context. This task is appropriate for students who have had experience in determining equations of linear functions through two points and in solving systems of linear equations.
• Cash or Gas: State lotteries in Florida and other states give winners a choice between cash and another prize, such as free gas for life. In this task, students will evaluate two prize options by discussing and making reasonable assumptions to simplify a complex decision. This task is appropriate for students who can extrapolate quantities over time and are able to make conversions among different units of measure.
Related Resources
• Discussion of Reasoning and Sense Making Tasks
• Applets for Use with Reasoning and Sense Making Tasks
PLS HELP BEEN STUCK FOR HOURS ON THIS. Starting from 1.5 miles away, a car drives toward a speed checkpoint and then passes it. The car travels at a constant rate of 53 miles per hour. The distance of the car from the checkpoint is given by d = |1.5 – 53t|. At what times is the car 0.1 miles from the checkpoint? Calculate your answer in seconds. (1 point)
95.1s and 108.7s
10.2s and 101.9s
108.7s and 10.2s
95.1s and 10.2s
• one year ago
AHhhh i have been too): i did steps and i still dont get it. Im from connexus
WRONG ALL OF YOU
ok check this out bruh. this is what i first did. I subtracted 1.5 from both sides of this equation. 1.5-53t=0.1 and i got 1.4 then i divided 1.4/5.3 and i got 0.026415(etc) NEED HELP
:P ANSWER THE TOPIC QUESTION MY BRAINS ABOUT TO BLOW
LMAO I AGREE IVE BEEN SITTIN ON MY retriceFOR 4 HOURS MAN
I have t=0.026 and t=0.030 Does anyone else agree?
d = |1.5 – 53t|
Case 1: 0.1 = -(1.5 - 53t), so 0.1 = 53t - 1.5, 1.5 + 0.1 = 53t, t = 1.6/53 h ≈ 108.68 secs
Case 2: 0.1 = 1.5 - 53t, so 53t = 1.5 - 0.1 = 1.4, t = 1.4/53 h ≈ 95.09 secs
fellow i got the same on these two equations i just did
Good on you
so 0.026 and 0.030 is correct? hero
I don't know where you get that from. It wants it in seconds
In hours, yes, that is correct
wait nvm. gotcha. thanks!;)
I don't know why it took you four hours
@Girly17, I help you with a question and you block me and call me a weirdo? Thanks
Yeah, double checked I got it. You can solve this problem in 2 ways. 1st way by pure math: abs(anything) = something means anything = +something and/or anything = -something, and solve for t. Also you can think of this as a physics problem with zero acceleration, deduce your position function and find the times at the desired distances.
I solved it the "easiest" way
calm down calm down you were rude and said post it like everyone else. well look here yo i woulda done that but i couldnt find how to do that so i seen send a message thing and it was easier for
took me 4 hours cuz im awesome like that
I wasn't being rude. I was trying to explain to you that it is inconvenient to help you in the messages.
and how was i suppose to know?!?!?!?!?!?!?!?!
You didn't know, but I was explaining to you. That's all I can try to do is communicate with you and explain to you. All you have to do is not overreact.
i am not overreacting. you're just giving me bs. but wtvr goodnight and thanks
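The arithmetic behind the accepted answer can be sketched directly: |1.5 − 53t| = 0.1 splits into two linear equations in hours, then converts to seconds.

```python
# |1.5 - 53t| = 0.1  ->  53t = 1.5 - 0.1  or  53t = 1.5 + 0.1   (t in hours)
t_approach = (1.5 - 0.1) / 53 * 3600   # still 0.1 mi before the checkpoint
t_past     = (1.5 + 0.1) / 53 * 3600   # 0.1 mi after passing it
print(round(t_approach, 1), round(t_past, 1))  # -> 95.1 108.7
```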
USBR Water Measurement Manual - Chapter 2 - Basic Concepts Related to Flowing Water and Measurement, Section 10. Energy Balance Flow Relationships
10. Energy Balance Flow Relationships
Hydraulic problems concerning fluid flow are generally handled by accounting in terms of energy per pound of flowing water. Energy measured in this form has units of feet of water. The total amount of energy is that caused by motion, or velocity head, V^2/2g, which has units of feet, plus the potential energy head, Z, in feet, caused by elevation referenced to an arbitrary datum selected as reference zero elevation, plus the pressure energy head, h, in feet. The head, h, is depth of flow for the open channel flow case and p/γ, the pressure divided by the unit weight of water, for the pipe flow case.
Figure 2-3a -- Energy balance in pipe flow.
Figure 2-3b -- Energy balance in open channel flow.
Figure 2-3c -- Specific energy balance.
Figures 2-3a and 2-3b show the total energy head, H1; for example, at point 1, in a pipe and an open channel, which can be written as:
H1 = Z1 + h1 + V1^2/2g
At another downstream location, point 2:
H2 = Z2 + h2 + V2^2/2g
Energy has been lost because of friction between points 1 and 2, so the downstream point 2 has less energy than point 1. The energy balance is retained by adding a head loss, hf(1-2). The total energy balance is written as:
Z1 + h1 + V1^2/2g = Z2 + h2 + V2^2/2g + hf(1-2)
The upper sloping line drawn between the total head elevations is the energy gradeline, egl. The next lower sloping solid line for both the pipe and open channel cases shown on figure 2-3 is the hydraulic grade line, hgl, which is also the water surface for open channel flow, or the height to which water would rise in piezometer taps for pipe flow.
A special energy form is commonly used in hydraulics in which the channel invert is selected as the reference Z elevation (figure 2-3c). Thus, Z drops out, and energy is the sum of depth, h, and velocity head only. Energy above the invert expressed this way is called specific energy, E.
This simplified form of energy equation is written as:
E = h + V^2/2g    (2-21)
Equations 2-21 and 2-11 lead to several interesting conclusions. In a fairly short pipe that has little or insignificant friction loss, total energy at one point is essentially equal to the total energy at another point. If the size of the pipeline decreases from the first point to the second, the velocity of flow must increase from the first point to the second. This increase occurs because with steady flow, the quantity of flow passing any point in the completely filled pipeline remains the same. From the continuity equation (equation 2-11), when the flow area decreases, the flow velocity must increase.
The second interesting point is that when the velocity increases in the smaller section of the pipeline, the pressure head, h, decreases. At first, this decrease may seem strange, but equation 2-21 shows that when V^2/2g increases, h must decrease proportionately because the total energy from one point to another in the system remains constant, neglecting friction loss. The fact that the pressure does decrease when the velocity in a given system increases is the basis for tube-type flow measuring devices.
In open channel flow where the flow accelerates, more of its supply of energy becomes velocity head, and depth must decrease. On the other hand, when the flow slows down, the depth must increase. An example of accelerating flow with corresponding decreasing depth is found at the approach to weirs. The drop in the water surface is called drawdown. Another example occurs at the entrance to inverted siphons or conduits where the flow accelerates as it passes from the canal, through a contracting transition, and into the siphon barrel. An example of decelerating flow with a rising water surface is found at the outlet of an inverted siphon, where the water loses velocity as it expands in a transition back into canal flow.
Flumes are excellent examples of measuring devices that take advantage of the fact that changes in depth occur with changes in velocity. When water enters a flume, it accelerates in a converging section. The acceleration of the flow causes the water surface to drop a significant amount. This change in depth is directly related to the rate of flow.
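The pipe-contraction argument above can be put to numbers. In this sketch the discharge and diameters are illustrative assumptions, not values from the manual: continuity forces the velocity up through the smaller section, and a constant total head then forces the pressure head down by exactly the gain in velocity head.

```python
import math

g = 32.2           # gravitational acceleration, ft/s^2
Q = 10.0           # discharge, ft^3/s (illustrative)
d1, d2 = 2.0, 1.0  # pipe diameters, ft (illustrative)

A1 = math.pi * d1**2 / 4
A2 = math.pi * d2**2 / 4
V1, V2 = Q / A1, Q / A2                    # continuity: Q = A1*V1 = A2*V2
hv1, hv2 = V1**2 / (2 * g), V2**2 / (2 * g)  # velocity heads, ft

# Equal invert elevations, negligible friction: h1 + V1^2/2g = h2 + V2^2/2g,
# so the pressure head drops by the increase in velocity head.
drop_in_pressure_head = hv2 - hv1
```

Halving the diameter quarters the area, so the velocity quadruples and the velocity head grows sixteenfold, which is why even modest contractions produce a measurable pressure drop in tube-type meters.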
April 10th 2009, 08:01 PM #1
Show that $A$ and $A^{T}$ have the same eigenvalues. What, if anything, can we say about the associated eigenvectors of $A$ and $A^{T}$?

April 11th 2009, 12:02 AM #2
$\det (A - \lambda I) = \det (A - \lambda I)^T = \det (A^T - \lambda I)$, so the two matrices have the same characteristic polynomial and hence the same eigenvalues.

The eigenvectors of $A^T$ are generally different from those of $A$. However, there is a relationship between the two: if $v_i$ is an eigenvector of $A$ corresponding to the eigenvalue $\lambda_i$ and $w_j$ is an eigenvector of $A^T$ corresponding to the eigenvalue $\lambda_j$, then $v_i^T w_j = 0$ whenever $\lambda_i \neq \lambda_j$. Indeed:

$A v_i = \lambda_i v_i \Rightarrow v_i^T A^T = \lambda_i v_i^T \Rightarrow v_i^T A^T w_j = \lambda_i v_i^T w_j$ .... (1)

$A^T w_j = \lambda_j w_j \Rightarrow v_i^T A^T w_j = \lambda_j v_i^T w_j$ .... (2)

(1) - (2): $0 = (\lambda_i - \lambda_j) v_i^T w_j$ and the result is easily seen.
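The shared-eigenvalue claim can be checked numerically without any linear-algebra library, since two matrices with the same characteristic polynomial have the same eigenvalues. The sketch below uses the Faddeev-LeVerrier recursion on an arbitrary 3x3 example matrix (the matrix is just an illustration, not from the thread).

```python
def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def char_poly(A):
    """Faddeev-LeVerrier: return [c_{n-1}, ..., c_0] where
    det(lambda*I - A) = lambda^n + c_{n-1} lambda^(n-1) + ... + c_0."""
    n = len(A)
    M = [[0.0] * n for _ in range(n)]
    coeffs = []
    c = 1.0
    for k in range(1, n + 1):
        M = mat_mul(A, M)            # M_k = A*M_{k-1} + c*I
        for i in range(n):
            M[i][i] += c
        AM = mat_mul(A, M)
        c = -sum(AM[i][i] for i in range(n)) / k   # c_{n-k} = -tr(A*M_k)/k
        coeffs.append(c)
    return coeffs

A = [[2.0, 1.0, 0.0],
     [3.0, -1.0, 4.0],
     [0.0, 5.0, 1.0]]

pA = char_poly(A)
pAt = char_poly(transpose(A))
# Same characteristic polynomial => same eigenvalues, as claimed.
assert all(abs(a - b) < 1e-9 for a, b in zip(pA, pAt))
```

For this matrix both polynomials come out as lambda^3 - 2 lambda^2 - 24 lambda + 45, reflecting that the trace, the sum of principal 2x2 minors, and the determinant are all transpose-invariant.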
[FOM] 238:Pi01 Independence/Large Large Cardinals/Correction Harvey Friedman friedman at math.ohio-state.edu Tue Dec 7 22:31:23 EST 2004 This corrects my first version of Pi01 independence from n-huge cardinals in posting #236. The strength has now shot up to lots of nontrivial elementary embeddings of ranks into themselves. These developments in no way, shape, or form obsolete BRT. Let N be the set of all nonnegative integers. For p in N, let [p] = {0,...,p}. For any set V, let V^k be the set of all k-tuples from V, and V^<k be the set of all tuples from V of nonzero length < k. Let T:[p]^k into N, and E containedin N^k. We define the upper image of T on E by T<[E] = {T(x): x in E and T(x) > max(x)}. We write PL([p]^k,E) for the set of all piecewise linear transformations T:[p]^k into N over E. These are the T:[p]^k into N defined by finitely many cases, where each case is given by a finite set of linear inequalities, and T is given by an affine expression with coefficients in each case, and where all coefficients used in the inequalities and affine expressions lie in E. We use cross section notation T_x, where x is a vector of any nonzero finite length. Note that dom(T_x) depends on the length of x. If x is too long, then obviously dom(T_x) = emptyset. We take min(emptyset) = 0. For x in N^s, we define x! = (x_1!,...,x_s!). Let R1,...,Rt,S1,...,St be multivariate relations on N, where the arity of each Ri,Si are the same. We say that f nontrivially embeds (N,R1,...,Rt) into (N,S1,...,St) if and only if i) f is a partial function from V into V that is not an identity function; ii) for all x1,...,xn in dom(f), Ri(x1,...,xn) iff Si(f(x1),...,f(xn)), where the arity of Ri is n. PROPOSITION 1. For all T in PL([p]^3k,[k]) there exists A containedin [p]^3 such that every A_i!, i! in [(8k)!!,p], is a nontrivial embedding of ([i!],T,A_00,T<[A^k]) into ([i!],T,A_00,A_00') whose domain includes all min(T_x![A^<k]) <= i!. Proposition 1 is obviously explicitly Pi01. 
THEOREM 2. Proposition 1 is provably equivalent, over EFA, to the consistency of ZFC + {there exists an n-Mahlo cardinal lambda such that there are lambda many kappa < lambda with a nontrivial elementary embedding from V(kappa) into V(kappa)}_n. I use www.math.ohio-state.edu/~friedman/ for downloadable manuscripts. This is the 238th in a series of self contained numbered postings to FOM covering a wide range of topics in f.o.m. The list of previous numbered postings #1-149 can be found at http://www.cs.nyu.edu/pipermail/fom/2003-May/006563.html in the FOM archives, 5/8/03 8:46AM. Previous ones counting from #150 are: 150:Finite obstruction/statistics 8:55AM 6/1/02 151:Finite forms by bounding 4:35AM 6/5/02 152:sin 10:35PM 6/8/02 153:Large cardinals as general algebra 1:21PM 6/17/02 154:Orderings on theories 5:28AM 6/25/02 155:A way out 8/13/02 6:56PM 156:Societies 8/13/02 6:56PM 157:Finite Societies 8/13/02 6:56PM 158:Sentential Reflection 3/31/03 12:17AM 159.Elemental Sentential Reflection 3/31/03 12:17AM 160.Similar Subclasses 3/31/03 12:17AM 161:Restrictions and Extensions 3/31/03 12:18AM 162:Two Quantifier Blocks 3/31/03 12:28PM 163:Ouch! 
4/20/03 3:08AM 164:Foundations with (almost) no axioms 4/22/03 5:31PM 165:Incompleteness Reformulated 4/29/03 1:42PM 166:Clean Godel Incompleteness 5/6/03 11:06AM 167:Incompleteness Reformulated/More 5/6/03 11:57AM 168:Incompleteness Reformulated/Again 5/8/03 12:30PM 169:New PA Independence 5:11PM 8:35PM 170:New Borel Independence 5/18/03 11:53PM 171:Coordinate Free Borel Statements 5/22/03 2:27PM 172:Ordered Fields/Countable DST/PD/Large Cardinals 5/34/03 1:55AM 173:Borel/DST/PD 5/25/03 2:11AM 174:Directly Honest Second Incompleteness 6/3/03 1:39PM 175:Maximal Principle/Hilbert's Program 6/8/03 11:59PM 176:Count Arithmetic 6/10/03 8:54AM 177:Strict Reverse Mathematics 1 6/10/03 8:27PM 178:Diophantine Shift Sequences 6/14/03 6:34PM 179:Polynomial Shift Sequences/Correction 6/15/03 2:24PM 180:Provable Functions of PA 6/16/03 12:42AM 181:Strict Reverse Mathematics 2:06/19/03 2:06AM 182:Ideas in Proof Checking 1 6/21/03 10:50PM 183:Ideas in Proof Checking 2 6/22/03 5:48PM 184:Ideas in Proof Checking 3 6/23/03 5:58PM 185:Ideas in Proof Checking 4 6/25/03 3:25AM 186:Grand Unification 1 7/2/03 10:39AM 187:Grand Unification 2 - saving human lives 7/2/03 10:39AM 188:Applications of Hilbert's 10-th 7/6/03 4:43AM 189:Some Model theoretic Pi-0-1 statements 9/25/03 11:04AM 190:Diagrammatic BRT 10/6/03 8:36PM 191:Boolean Roots 10/7/03 11:03 AM 192:Order Invariant Statement 10/27/03 10:05AM 193:Piecewise Linear Statement 11/2/03 4:42PM 194:PL Statement/clarification 11/2/03 8:10PM 195:The axiom of choice 11/3/03 1:11PM 196:Quantifier complexity in set theory 11/6/03 3:18AM 197:PL and primes 11/12/03 7:46AM 198:Strong Thematic Propositions 12/18/03 10:54AM 199:Radical Polynomial Behavior Theorems 200:Advances in Sentential Reflection 12/22/03 11:17PM 201:Algebraic Treatment of First Order Notions 1/11/04 11:26PM 202:Proof(?) of Church's Thesis 1/12/04 2:41PM 203:Proof(?) 
of Church's Thesis - Restatement 1/13/04 12:23AM 204:Finite Extrapolation 1/18/04 8:18AM 205:First Order Extremal Clauses 1/18/04 2:25PM 206:On foundations of special relativistic kinematics 1 1/21/04 5:50PM 207:On foundations of special relativistic kinematics 2 1/26/04 12:18AM 208:On foundations of special relativistic kinematics 3 1/26/04 12:19AAM 209:Faithful Representation in Set Theory with Atoms 1/31/04 7:18AM 210:Coding in Reverse Mathematics 1 2/2/04 12:47AM 211:Coding in Reverse Mathematics 2 2/4/04 10:52AM 212:On foundations of special relativistic kinematics 4 2/7/04 6:28PM 213:On foundations of special relativistic kinematics 5 2/8/04 9:33PM 214:On foundations of special relativistic kinematics 6 2/14/04 9:43AM 215:Special Relativity Corrections 2/24/04 8:13PM 216:New Pi01 statements 6/6/04 6:33PM 217:New new Pi01 statements 6/13/04 9:59PM 218:Unexpected Pi01 statements 6/13/04 9:40PM 219:Typos in Unexpected Pi01 statements 6/15/04 1:38AM 220:Brand New Corrected Pi01 Statements 9/18/04 4:32AM 221:Pi01 Statements/getting it right 10/7/04 5:56PM 222:Statements/getting it right again 10/9/04 1:32AM 223:Better Pi01 Independence 11/2/04 11:15AM 224:Prettier Pi01 Independence 11/7/04 8:11PM 225:Better Pi01 Independence 11/9/04 10:47AM 226:Nicer Pi01 Independence 11/10/04 10:43AM 227:Progress in Pi01 Independence 11/11/04 11:22PM 228:Further Progress in Pi01 Independence 11/12/04 2:49AM 229:More Progress in Pi01 Independence 11/13/04 10:41PM 230:Piecewise Linear Pi01 Independence 11/14/04 9:38PM 231:More Piecewise Linear Pi01 Independence 11/15/04 11:18PM 232:More Piecewise Linear Pi01 Independence/correction 11/16/04 8:57AM 233:Neatening Piecewise Linear Pi01 Independence 11/17/04 12:22AM 234:Affine Pi01 Independence 11/20/04 9:54PM 235:Neatening Affine Pi01 Independence 11/28/04 6:08PM 236:Pi01 Independence/Huge Cardinals 12/2/04 3:49PM 237:More Neatening Pi01 Affine Independence 12/6/04 12:56AM Harvey Friedman More information about the FOM mailing list
About the U of T Mathematics Network

The Network is sponsored by the Mathematical Sciences Departments at the University of Toronto, including (among others) the Departments of Computer Science, Mathematics, and Statistics. It is designed to encourage high school students to actively participate in doing mathematics, providing cooperative, competitive, interesting, and interactive projects, as well as more traditional problems and other quality resource material. The Network is also intended to promote communication and mathematical discussion between high schools (in both curricular and extra-curricular settings) and the university, and to establish a connection from the high-school mathematical experience to mathematics at the university level and beyond.

Through this Network we aim to provide:

• Opportunities for students to actively participate in doing mathematics:
□ a variety of events to which students can come;
□ interactive projects and activities, featuring mathematical challenges as well as interdisciplinary topics that integrate mathematics into other areas;
□ problems that stimulate mathematical thinking and growth: some traditional, others interactive (in which students with internet access can explore the features of a problem in whatever manner they choose, encouraging intentional learning);
□ puzzles and other forms of recreational mathematics to inspire student interest.

• Mathematical resources:
□ answers and explanations of questions that often arise as one begins to think about mathematics at a more advanced level, along with expositions of a variety of topics;
□ resource material for classroom or individual use.

• A forum for communication:
□ opportunities for questions and discussions about mathematics at the high school level, university level, and beyond, and the use of mathematics in other fields;
□ personal contact between university and students through a variety of events and activities, such as the annual Mathematical Sciences day and the in-depth SOAR summer camp begun last year.

• A link from the high-school mathematical experience to mathematics at the university level and beyond:
□ student involvement in more advanced mathematical projects such as the SOAR summer camp;
□ presentations of current mathematical research activities and descriptions of the use of mathematics in various fields of study here at the University.

For more information on Network activities and events, please contact:

University of Toronto Mathematics Network Coordinator
Department of Mathematics
University of Toronto
Toronto, Ontario, Canada M5S 3G3
Email: mathnet@math.toronto.edu

This page last updated: September 27, 1999
Original Web Site Creator / Mathematical Content Developer: Philip Spencer
Current Network Coordinator and Contact Person: Any Wilk - mathnet@math.toronto.edu
HowStuffWorks: What Are Numbers?

Mathematics boils down to pattern recognition. We identify patterns in the world around us and use them to navigate its challenges. To do all this, however, we need numbers -- or at least the information that our numbers represent.

What are numbers? As we'll explore more later, that's a deceptively deep question, but you already know the simple answer. A number is a word and a symbol representing a count. Let's say you walk outside your home and you see two angry dogs. Even if you didn't know the word "two" or know what the corresponding numeral looks like, your brain would have a good grasp of how a two-dog encounter compares with a three-, one- or zero-dog situation.

We owe that innate comprehension to our brain (specifically, the inferior parietal lobe), which naturally extracts numbers from the surrounding environment in much the same way it identifies colors [source: Dehaene]. We call this number sense, and our brains come fully equipped with it from birth.

Studies show that while infants have no grasp of human number systems, they can still identify changes in quantity. Neuroimaging research has even discovered that infants possess the ability to engage in logarithmic counting, or counting based on integral increases in physical quantity. While a baby won't see the difference between five teddy bears and six teddy bears in a lineup, he or she will notice a difference between five and 10 [source: Miller].

Number sense plays a vital role in the way animals navigate their environments -- environments where objects are numerous and frequently mobile. However, an animal's numerical sense becomes more imprecise with increasingly larger numbers. Humans, for instance, are systematically slower to compute 4 + 5 than 2 + 3 [source: Dehaene].

At some point in our ancient past, prehistoric humans began to develop a means of augmenting their number sense. They started counting on their fingers and toes.
This is why so many numerical systems depend on groups of five, 10 or 20. Base-10 or decimal systems stem from the use of both hands, while base-20 or vigesimal systems are based on the use of fingers and toes. So ancient humans learned to externalize their number sense and, in doing so, they arguably created humanity's most important scientific achievement: mathematics.
Find past value of money? Isn't it actually just like finding future value of money?

January 5th 2011, 07:35 AM
Say we were to find the past value of $1 at an interest rate of 5% for the 10 years before today. If we were to use the present value formula:

$PV = \frac{FV}{(1+i)^n}$

$PV = \frac{1}{(1+0.05)^{-10}}$

$PV = \frac{1}{(1.05)^{-10}}$

$PV = (1.05)^{10}$

$PV = 1.62889462677744140625$

It seems like the past value of $1 is kind of like the future value of $1. (Thinking) Does this make sense? To me it kind of does, since a dollar in hand today is worth less than a dollar we had 10 years ago. Would someone here comment on these findings?

January 5th 2011, 08:09 AM
[quoting the above] n=10, not -10.

January 5th 2011, 05:38 PM
My attempt was to find the past value of money, thus I used -10 for the 10th year in the past from today. If I were to use 10, that would give us the value of money for the 10th year in the future. If I am attempting it wrong, may you suggest how we would go about finding the PAST value of money?

January 5th 2011, 06:01 PM
Thanks for the reply. Help me out with this:
a) What was $1 worth on Jan 5, 2001 if the interest rate is 5%?
b) What will be the value of $1 on Jan 5, 2021 if the interest rate is 5%?
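Both readings in the thread fit a single formula if the exponent carries the sign of the time shift: value = amount * (1 + i)^years, with negative years reaching into the past. A minimal sketch (illustrative numbers, annual compounding assumed):

```python
def value_at(amount, rate, years):
    """Value of `amount` shifted `years` forward in time (negative = backward)
    at a constant annually compounded `rate`."""
    return amount * (1 + rate) ** years

# (a) what $1 today was worth 10 years ago at 5%:
past = value_at(1.0, 0.05, -10)     # about 0.6139
# (b) what $1 today will be worth in 10 years at 5%:
future = value_at(1.0, 0.05, 10)    # about 1.6289

assert past < 1.0 < future
# Discounting back and compounding forward are reciprocal operations:
assert abs(past * future - 1.0) < 1e-12
```

The 1.6289 the original poster computed is the forward factor: it answers "what would $1 placed 10 years ago be worth today," while 0.6139 answers "what was today's dollar worth back then."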
Re: st: -mfx- after poisson
From: vwiggins@stata.com (Vince Wiggins, StataCorp)
To: statalist@hsphsun2.harvard.edu
Date: Mon, 01 May 2006 11:45:23 -0500

Scott Cunningham <scunning@gmail.com> is using -mfx- to compute the marginal effects from a poisson regression with many (over 150) indicator variables. He notes that -mfx- is taking a long, long time to compute the marginal effects.

Richard Williams <Richard.A.Williams.5@ND.edu> suggested that Scott consult an FAQ on -mfx- speed. That FAQ makes several recommendations, in particular using the -varlist()- option to restrict the calculation of marginal effects to the variables of interest.

The primary reason why Scott's problem takes so long is the number of indicator variables. Here's why. -mfx- takes a brute-force approach to computing marginal effects and computes numerical derivatives for whatever is requested, rather than hand-coding analytic derivatives for only a few functions. This gives it incredible flexibility. For example it lets you compute the marginal effect of a bivariate probit model w.r.t. the joint probability of two successes, the marginal probability of a success in either outcome, or any of the other 10 statistics that are predicted for -biprobit- models. -mfx- can even compute marginal effects for user-written estimators, so long as those estimators supply -predict-ions for the statistics of interest.

The price for this flexibility is performance. Usually that price is not too high, but in the case of many indicator variables it can be. Why are indicators different from continuous variables? That is a longish story, but at its core is the fact that -mfx- reduces its computational burden substantially by using the chain rule. Most estimators combine just a few nonlinear terms and those terms themselves are just linear combinations (which in Stata we call equations).
The Poisson model that Scott is estimating has just a single equation that linearly combines the coefficients B and the covariates x -- xB. The expected number of events for poisson is just a nonlinear function of the single term xB -- f(xB). Using the chain rule we get

    mfx_x = d(f(xB))/d(x) = [d(f(xB))/d(xB)] * [d(xB)/d(x)] = [d(f(xB))/d(xB)] * B_x

So, we can numerically compute d(f(xB))/d(xB) once, and it can be easily applied to each x_i by multiplying by the associated B_i. In Scott's case this means that we do some hard work once and then apply it to the 180 or so coefficients. These are just the first derivatives that are themselves the marginal effects.

To compute standard errors of the marginal effects, we must compute second derivatives and cross-derivatives among all of the coefficients. If we have K covariates, there are K(K+1)/2 second and cross derivatives. Luckily, a similar chain rule can be applied. In Scott's case, this means that we do not have to numerically compute 180(180+1)/2 = 16,290 second and cross derivatives. Just a few will do and the chain rule will give us the rest.

But what does all this have to do with indicator variables? By default, -mfx- computes the effect of a discrete change in an indicator -- the effect of the indicator going from 0 to 1. If the indicator is male vs. female, we are comparing the difference between males and females. This is usually what we want, but not always. We might be interested in the instantaneous increase in the proportion of females in a group, and in that case we could specify the -nodiscrete- option to obtain the instantaneous marginal effect of the indicator -- the effect that we showed above and what -mfx- computes for continuous variables.

The discrete marginal effect is different; it is

    dmfx_xi = f(xB | x_i = 1) - f(xB | x_i = 0)

Because xB is now evaluated at two completely different points, we cannot use the chain rule.
All of the derivatives, second derivatives, and cross-derivatives for the discrete change must be computed separately. In Scott's case this is about 16,000 derivatives.

If you want instantaneous derivatives, you are in luck. Specify the -nodiscrete- option. For a problem similar to Scott's this takes about 15 seconds on my 3.2 GHz Pentium. Otherwise, ask only for the marginal effects you want, especially if those effects are for indicator variables and you want the discrete effect.

-- Vince
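The chain-rule shortcut described in the post can be sketched in a few lines of plain Python (this is illustrative, not Stata code): one numerical derivative of f at xB serves every covariate, and it matches the brute-force per-covariate perturbation that -mfx- avoids. The Poisson mean is taken to be f(xB) = exp(xB); the coefficients and covariate values below are made up.

```python
import math

B = [0.4, -1.2, 0.05, 2.0]          # hypothetical coefficients
x = [1.0, 0.3, 10.0, 0.5]           # hypothetical evaluation point
xb = sum(b * xi for b, xi in zip(B, x))

def f(t):
    return math.exp(t)              # Poisson conditional mean

# Chain rule: one numerical derivative of f at xB serves every covariate.
h = 1e-6
df_dxb = (f(xb + h) - f(xb - h)) / (2 * h)
mfx_chain = [df_dxb * b for b in B]

# Brute force: perturb each covariate separately (one derivative per covariate).
def mfx_brute(i):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    fp = f(sum(b * xi for b, xi in zip(B, xp)))
    fm = f(sum(b * xi for b, xi in zip(B, xm)))
    return (fp - fm) / (2 * h)

for i in range(len(B)):
    assert abs(mfx_chain[i] - mfx_brute(i)) < 1e-4
```

With K covariates the chain-rule route needs a constant number of evaluations of f, while the brute-force route needs O(K) for the effects themselves and O(K^2) for the second and cross derivatives behind the standard errors, which is the 16,290-derivative count in the post.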
First derivative, relative extrema questions

July 16th 2008, 08:03 PM #1
1.) Find the relative maxima and relative minima, if any, of the function

2.) Average cost: the average cost in dollars incurred by Records each week in pressing x compact discs is given by C(x) = -0.0001x + 2 + 2000/x (0 < x < 6000). Show that C(x) is always decreasing over the interval (0 < x < 6000).

3.) Almost half of companies let other firms manage some of their Web operations, a practice called Web hosting. Managed services (monitoring a customer's technology services) is the fastest growing part of Web hosting. Managed services sales are expected to grow in accordance with the function f(t) = 0.469t^2 + 0.758t + 0.44 (0 < t < 6), where f(t) is measured in billions of dollars and t is measured in years, with t = 0 corresponding to 1999.
a) Find the interval where f is increasing and the interval where f is decreasing.
b) What does your result tell you about sales in managed services from 1999 through 2005?

Mr F says, on 1.): Solve f'(x) = 0 and test the nature of the solutions.
On 2.): Show that C'(x) < 0 over the given interval.
On 3.a): Solve f'(t) > 0 and f'(t) < 0 respectively.
On 3.b): Interpret your solutions to (a).

>< i still dont know how to do it

Lemontea, simply find the first derivative and find the zeros of the first derivative. The points at which the derivative is zero are candidates for extrema. You need to use a sign line to check whether the function is increasing or decreasing on each side.
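For problem 2, the hint can be checked directly: differentiating C(x) = -0.0001x + 2 + 2000/x gives C'(x) = -0.0001 - 2000/x^2, which is negative for every x in (0, 6000), so C is decreasing on the whole interval. A small numerical sketch:

```python
def C(x):
    """Average cost per disc when pressing x discs per week."""
    return -0.0001 * x + 2 + 2000 / x

def C_prime(x):
    """Derivative of C: both terms are negative for x > 0."""
    return -0.0001 - 2000 / x ** 2

xs = [1, 10, 100, 1000, 5999]
assert all(C_prime(x) < 0 for x in xs)

# Decreasing: each sampled value exceeds the next.
vals = [C(x) for x in xs]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

The samples are only a spot check; the real argument is that -0.0001 and -2000/x^2 are each negative on the interval, so their sum is too.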
A Theorem in Intersection theory

Fulton's book on intersection theory (p. 223, Theorem 12.3) asserts the following result: for r pure-dimensional schemes in P^n whose codimensions add to at most n, the product of their degrees is at least as great as the sum of the degrees of the irreducible components of their intersection.

Under what conditions can we say that the product of the degrees of the schemes is equal to the sum of the degrees of the irreducible components of their intersection?

intersection-theory ag.algebraic-geometry

Comment: Should probably add the [Fulton] page number. – Allen Knutson Jan 20 '11 at 18:49

Answer (accepted):

Call the schemes to be intersected $(X_i)$, where $X_i$ has pure codimension $r_i$ in ${\mathbb P}^n$. Let $R = \sum r_i$. (Edited so as not to restrict to $R = n$ unnecessarily.)

Definitely, every component of the intersection has codimension at most $R$. If the codimensions are all exactly $R$, and the schemes being intersected are Cohen-Macaulay, then the product of the degrees = the degree of the intersection ( = the sum of the degrees of its primary components).

Non-example: let $X$ be the projective completion of a random plane through the origin in $A^4$, and $Y$ the projective completion of the union of two other random planes through the origin (so, not Cohen-Macaulay). Then $X \cap Y$ is a triple point, not a double point as one might hope ($\deg X = 1$, $\deg Y = 2$). The basic issue is that if we think about intersecting $Y$ first with a $3$-plane $X' \supset X$, we get a union of two lines plus an embedded point we should throw away before we go all the way down to $X$. Then the intersection with $X$ picks up a point for each line in $Y \cap X'$, which is good, but also the embedded point, which is a failure of codimensions adding up. In this non-example $r_1 = r_2 = 2$, $n = R = 4$.

Comments:

Allen, I don't understand your non-example. Are you saying that $Y$ as a cycle is not the sum of its components? Or are you saying that the intersection of two random planes through a point is $1.5$ points? Finally, are you saying that the degree of the triple point is strictly less than the product of the degrees (=$2$)? I am totally confused. – Sándor Kovács Jan 20 '11 at 8:44

Thanks Allen, but I am a little confused, you say: "If the co-dimensions are all exactly n..." By this do you mean that the dimension of the intersection is zero? I am sorry if I have not given this enough thought. – Sagar Kolte Jan 20 '11 at 17:38

Sorry, Sagar: fixed. @Sándor: the intersection of $X$ and $Y$ should be $1 \cdot 2$ points, not $3$. Note that $3 \geq 1 \cdot 2$ is in agreement with the statement from [Fulton]. – Allen Knutson Jan 20 '11 at 18:48

This is indeed a little confusing, I think because of language. The example Allen gives appears to be a counterexample to the statement from Fulton, since the degree (=multiplicity) of the (irreducible) scheme Z is greater than the product of the degrees of the intersecting schemes. However, I believe the statement the OP refers to is the more elementary fact that the underlying variety Z has degree at most equal to the product of the degrees of the intersecting varieties X and Y, a simple application of the generalized Bezout theorem. – Dave Anderson Jan 21 '11 at 1:04

...In this case, a closely related question is: When is the cycle representing $X\cdot Y$ equivalent to the scheme-theoretic intersection $X\cap Y$? (And here's where the Cohen-Macaulay hypothesis does its work.) – Dave Anderson Jan 21 '11 at 1:09
The amazing librarian
Issue 47 June 2008

[The formulas on this page were images and did not survive extraction. The passage spelled out a matrix product entry by entry: each entry of a row of the product is the sum, over that row, of the row's entries multiplied by the corresponding entries of the other factor, with the second row of the product built the same way from the second row of the matrix.]
Hilbert Curve Coffee Table

Fractal Furniture

One day I got a bee in my bonnet to make some fractal furniture with copper pipe. The Hilbert Curve was the obvious first choice. At the time, I'd only made one other piece of furniture out of copper pipe, and was showing it off at the Fat Man & Circuit Girl show. They asked me what I was going to do next. "A Hilbert Curve coffee table," I blurted, and suddenly I was committed, though I had no idea if I could actually make it work.

So I got busy making sketches, trying to figure out if this was even possible. I didn't know much about Hilbert curves, but of course Wikipedia does. It still took me some time to get a good grasp of what's going on and how the different fractal levels relate to each other.

At PDX Dorkbot I started cutting pipe and attempting to lay out & put together the first Hilbert curve section. I had hoped to get finer detail, but the half inch elbow fittings gave me a hard constraint on the size. I decided to make it a 2x3 layout, with different fractal levels in a checkerboard pattern.

Then I got to work fitting the pieces together, fluxing & soldering the level 3 sections, then partially assembling it to get the level 2 sections right. Once the top was figured out, it was time to build a two-level jig to make a shelf (a Moore curve, also space-filling like a Hilbert curve).

Of course there are more details about working with copper pipe, doing the assembly, and putting it in a show at ON Gallery.

And it turns out that teeny little orange spiders just love copper pipe!
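For anyone sketching their own layout before committing to pipe, the Hilbert curve's cell coordinates are easy to generate in software. This is the standard index-to-(x, y) bit-manipulation routine (as described on the Wikipedia page the post mentions), not the author's actual layout tool:

```python
def d2xy(n, d):
    """Map index d in [0, n*n) along the Hilbert curve to (x, y).
    n must be a power of two; the curve visits the n*n grid cells once each."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate/reflect the sub-quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx                 # move into the right quadrant
        y += s * ry
        t //= 4
        s *= 2
    return x, y

pts = [d2xy(8, d) for d in range(64)]        # a level-3 curve, like the table-top sections
assert len(set(pts)) == 64                   # space-filling: every cell visited exactly once
assert all(abs(x1 - x2) + abs(y1 - y2) == 1  # consecutive cells share an edge
           for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
```

Printing the points (or plotting them) shows how a level-n section nests four rotated copies of level n-1, which is exactly the "how the fractal levels relate" question the sketches had to answer.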
Totowa Prealgebra Tutor

Find a Totowa Prealgebra Tutor

...Using these methods allows the students to learn in a way which makes it fun for them, while at the same time learning a language. Throughout my experiences of learning the language, I found that I was better able to learn the language when a variety of methods and materials was used, instead of ...
6 Subjects: including prealgebra, Spanish, ESL/ESOL, grammar

...I find that the most detrimental thing to a mathematics student is lacking a core foundation. In math, everything you learn builds on top of what you learned in previous years, and without that strong foundation, students can fall behind. When teachers explain something in class they assume that the students have a certain knowledge about math based on what they learned in previous years.
21 Subjects: including prealgebra, calculus, statistics, geometry

...I had received a job offer last summer to work as an ESL teacher at a public school in Seoul, South Korea through a private teaching English abroad company called Ivy in Asia. However, I turned down the offer due to personal/family reasons. I currently participate in language exchange sessions at local public libraries in the North NJ area.
44 Subjects: including prealgebra, reading, Spanish, French

...I have done childcare and tutoring for kids with special needs in my hometown since high school, and I have also been a camp counselor for kids with special needs for the past two years. My experience with kids with special needs is not only academic, but also social. My most recent counselor position was specifically dedicated to building social skills.
39 Subjects: including prealgebra, reading, English, algebra 1

...As a graduate with a BS in Mathematics my goal is to help students reach their full potential in math and science. I interact easily with people of diverse backgrounds, cultures and age levels.
I analyze, assess and make recommendations based on individual talents and ability, and implement appropriate curriculum plans for daily activities.
13 Subjects: including prealgebra, French, calculus, geometry
From Wiki.GIS.com

Longitude (pronounced /ˈlɒndʒɨtjuːd/ or /ˈlɒŋɡɨtjuːd/),^[1] identified by the Greek letter lambda (λ), is the geographic coordinate most commonly used in cartography and global navigation for east-west measurement. It is the angular distance measured east or west and usually expressed in degrees (or hours), minutes, and seconds, from the prime meridian, defined to be at the Royal Observatory, Greenwich, in England, to the meridian passing through another position on the earth's surface. A location's position along a meridian is given by its latitude. This is the angular distance of that place north or south of the equator, measured as an angle whose vertex is at the center of the earth.

History

Mariners and explorers for most of history struggled to determine precise longitude. Latitude was calculated by observing with quadrant or astrolabe the inclination of the sun or of charted stars, but longitude presented no such manifest means of study. Amerigo Vespucci was perhaps the first to proffer a solution, after devoting a great deal of time and energy studying the problem during his sojourns in the New World:

As to longitude, I declare that I found so much difficulty in determining it that I was put to great pains to ascertain the east-west distance I had covered. The final result of my labors was that I found nothing better to do than to watch for and take observations at night of the conjunction of one planet with another, and especially of the conjunction of the moon with the other planets, because the moon is swifter in her course than any other planet. I compared my observations with an almanac. After I had made experiments many nights, one night, the twenty-third of August, 1499, there was a conjunction of the moon with Mars, which according to the almanac was to occur at midnight or a half hour before.
I found that...at midnight Mars's position was three and a half degrees to the east.^[2]

By comparing the relative positions of the moon and Mars with their anticipated positions, Vespucci was able to crudely deduce his longitude. But this method had several limitations: First, it required the occurrence of a specific astronomical event (in this case, Mars passing through the same right ascension as the moon), and the observer needed to anticipate this event via an astronomical almanac. One needed also to know the precise time, which was difficult to ascertain in foreign lands. Finally, it required a stable viewing platform, rendering the technique useless on the rolling deck of a ship at sea.

Unlike latitude, which has the equator as a natural starting position, there is no natural starting position for longitude. Therefore, a reference meridian had to be chosen. It was a popular practice to use a nation's capital as the starting point, but other significant locations were also used. While British cartographers had long used the Greenwich meridian in London, other references were used elsewhere, including: El Hierro, Rome, Copenhagen, Jerusalem, Saint Petersburg, Pisa, Paris, Philadelphia, and Washington. In 1884, the International Meridian Conference adopted the Greenwich meridian as the universal prime meridian or zero point of longitude.

Noting and calculating longitude

Longitude is given as an angular measurement ranging from 0° at the prime meridian to +180° eastward and −180° westward. The Greek letter λ (lambda),^[3]^[4] is used to denote the location of a place on Earth east or west of the prime meridian. Each degree of longitude is sub-divided into 60 minutes, each of which is divided into 60 seconds. A longitude is thus specified in sexagesimal notation as 23° 27′ 30" E. For higher precision, the seconds are specified with a decimal fraction.
An alternative representation uses degrees and minutes, where parts of a minute are expressed in decimal notation with a fraction, thus: 23° 27.500′ E. Degrees may also be expressed as a decimal fraction: 23.45833° E. For calculations, the angular measure may be converted to radians, so longitude may also be expressed in this manner as a signed fraction of π (pi), or an unsigned fraction of 2π.

For calculations, the West/East suffix is replaced by a negative sign in the western hemisphere. Confusingly, the convention of negative for East is also sometimes seen. The preferred convention—that East be positive—is consistent with a right-handed Cartesian coordinate system with the North Pole up. A specific longitude may then be combined with a specific latitude (usually positive in the northern hemisphere) to give a precise position on the Earth's surface.

Longitude at a point may be determined by calculating the time difference between that at its location and Coordinated Universal Time (UTC). Since there are 24 hours in a day and 360 degrees in a circle, the sun moves across the sky at a rate of 15 degrees per hour (360°/24 hours = 15° per hour). So if the time zone a person is in is three hours ahead of UTC then that person is near 45° longitude (3 hours × 15° per hour = 45°). The word near was used because the point might not be at the center of the time zone; also the time zones are defined politically, so their centers and boundaries often do not lie on meridians at multiples of 15°. In order to perform this calculation, however, a person needs to have a chronometer (watch) set to UTC and needs to determine local time by solar observation or astronomical observation. The details are more complex than described here: see the articles on Universal Time and on the equation of time for more details.

Plate movement and longitude

The surface layer of the Earth, the lithosphere, is broken up into several tectonic plates.
Each plate moves in a different direction, at speeds of about 50 to 100 mm per year.^[5] As a result, for example, the longitudinal difference between a point on the equator in Uganda (on the African Plate) and a point on the equator in Ecuador (on the South American Plate) is increasing by about 0.0014 arcseconds per year. If a global reference frame such as WGS84 is used, the longitude of a place on the surface will change from year to year. To minimize this change, when dealing exclusively with points on a single plate, a different reference frame can be used, whose coordinates are fixed to a particular plate, such as NAD83 for North America or ETRS89 for Europe.

Elliptic parameters

Because most planets (including Earth) are closer to ellipsoids of revolution, or spheroids, rather than to spheres, both the radius and the length of arc varies with latitude. This variation requires the introduction of elliptic parameters based on an ellipse's angular eccentricity, $o\!\varepsilon\,\!$ (which equals $\scriptstyle{\arccos(\frac{b}{a})}\,\!$, where $a\;\!$ and $b\;\!$ are the equatorial and polar radii; $\scriptstyle{\sin^2(o\!\varepsilon)}\;\!$ is the first eccentricity squared, ${e^2}\;\!$; and $\scriptstyle{2\sin^2(\frac{o\!\varepsilon}{2})}\;\!$ or $\scriptstyle {1-\cos(o\!\varepsilon)}\;\!$ is the flattening, ${f}\;\!$). Utilized in creating the integrands for curvature is the inverse of the principal elliptic integrand, $E'\;\!$:

\begin{align}M(\phi)&=a\cdot\cos^2(o\!\varepsilon)\,n'^3(\phi)=\frac{(ab)^2}{\Big((a\cos(\phi))^2+(b\sin(\phi))^2\Big)^{3/2}}\\
N(\phi)&=a\cdot n'(\phi)=\frac{a^2}{\sqrt{(a\cos(\phi))^2+(b\sin(\phi))^2}}\end{align}

Degree length

The length of an arcdegree of north-south latitude difference, $\scriptstyle{\Delta\phi}\;\!$, is about 60 nautical miles, 111 kilometres or 69 statute miles at any latitude.
The length of an arcdegree of east-west longitude difference, $\scriptstyle{\cos(\phi)\Delta\lambda}\;\!$, is about the same at the equator as the north-south, reducing to zero at the poles.

In the case of a spheroid, a meridian and its anti-meridian form an ellipse, from which an exact expression for the length of an arcdegree of latitude is:

$\Delta^1_{\mathrm{lat}}=\frac{\pi}{180^\circ}M(\phi)$

This radius of arc (or "arcradius") is in the plane of a meridian, and is known as the meridional radius of curvature, $M\;\!$.^[6]^[7]

Similarly, an exact expression for the length of an arcdegree of longitude is:

$\Delta^1_{\mathrm{long}}=\frac{\pi}{180^\circ}\cos(\phi)\,N(\phi)$

The arcradius contained here is in the plane of the prime vertical, the east-west plane perpendicular (or "normal") to both the plane of the meridian and the plane tangent to the surface of the ellipsoid, and is known as the normal radius of curvature, $N\;\!$.^[6]^[7] Along the equator (east-west), $N\;\!$ equals the equatorial radius. The radius of curvature at a right angle to the equator (north-south), $M\;\!$, is 43 km shorter, hence the length of an arcdegree of latitude at the equator is about 1 km less than the length of an arcdegree of longitude at the equator. The radii of curvature are equal at the poles where they are about 64 km greater than the north-south equatorial radius of curvature because the polar radius is 21 km less than the equatorial radius. The shorter polar radii indicate that the northern and southern hemispheres are flatter, making their radii of curvature longer. This flattening also 'pinches' the north-south equatorial radius of curvature, making it 43 km less than the equatorial radius. Both radii of curvature are perpendicular to the plane tangent to the surface of the ellipsoid at all latitudes, directed toward a point on the polar axis in the opposite hemisphere (except at the equator where both point toward Earth's center). The east-west radius of curvature reaches the axis, whereas the north-south radius of curvature is shorter at all latitudes except the poles.
The WGS84 ellipsoid, used by all GPS devices, uses an equatorial radius of 6378137.0 m and an inverse flattening, (1/f), of 298.257223563, hence its polar radius is 6356752.3142 m and its first eccentricity squared is 0.00669437999014.^[8] The more recent but little used IERS 2003 ellipsoid provides equatorial and polar radii of 6378136.6 and 6356751.9 m, respectively, and an inverse flattening of 298.25642.^[9] Lengths of degrees on the WGS84 and IERS 2003 ellipsoids are the same when rounded to six significant digits. An appropriate calculator for any latitude is provided by the U.S. government's National Geospatial-Intelligence Agency (NGA).^[10]

Latitude   N-S radius of curvature M   Surface distance per 1° of latitude   E-W radius of curvature N   Surface distance per 1° of longitude
0°         6335.44 km                  110.574 km                            6378.14 km                  111.320 km
15°        6339.70 km                  110.649 km                            6379.57 km                  107.551 km
30°        6351.38 km                  110.852 km                            6383.48 km                  96.486 km
45°        6367.38 km                  111.132 km                            6388.84 km                  78.847 km
60°        6383.45 km                  111.412 km                            6394.21 km                  55.800 km
75°        6395.26 km                  111.618 km                            6398.15 km                  28.902 km
90°        6399.59 km                  111.694 km                            6399.59 km                  0.000 km

Ecliptic latitude and longitude

Ecliptic latitude and longitude are defined for the planets, stars, and other celestial bodies in a similar way to that in which the terrestrial counterparts are defined. The pole is the normal to the ecliptic nearest to the celestial north pole. Ecliptic latitude is measured from 0° to 90° north (+) or south (−) of the ecliptic. Ecliptic longitude is measured from 0° to 360° eastward (the direction that the Sun appears to move relative to the stars) along the ecliptic from the vernal equinox. The equinox at a specific date and time is a fixed equinox, such as that in the J2000 reference frame. However, the equinox moves because it is the intersection of two planes, both of which move.
The ecliptic is relatively stationary, wobbling within a 4° diameter circle relative to the fixed stars over millions of years under the gravitational influence of the other planets. The greatest movement is a relatively rapid gyration of Earth's equatorial plane whose pole traces a 47° diameter circle caused by the Moon. This causes the equinox to precess westward along the ecliptic about 50" per year. This moving equinox is called the equinox of date. Ecliptic longitude relative to a moving equinox is used whenever the positions of the Sun, Moon, planets, or stars at dates other than that of a fixed equinox is important, as in calendars, astrology, or celestial mechanics. The 'error' of the Julian or Gregorian calendar is always relative to a moving equinox. The years, months, and days of the Chinese calendar all depend on the ecliptic longitudes of date of the Sun and Moon. The 30° zodiacal segments used in astrology are also relative to a moving equinox. Celestial mechanics (here restricted to the motion of solar system bodies) uses both a fixed and moving equinox. Sometimes in the study of Milankovitch cycles, the invariable plane of the solar system is substituted for the moving ecliptic. Longitude may be denominated from 0 to $2\pi$ radians in either case.

Longitude on bodies other than Earth

Planetary co-ordinate systems are defined relative to their mean axis of rotation and various definitions of longitude depending on the body. The longitude systems of most of those bodies with observable rigid surfaces have been defined by references to a surface feature such as a crater. The north pole is that pole of rotation that lies on the north side of the invariable plane of the solar system (near the ecliptic). The location of the prime meridian as well as the position of body's north pole on the celestial sphere may vary with time due to precession of the axis of rotation of the planet (or satellite).
If the position angle of the body's prime meridian increases with time, the body has a direct (or prograde) rotation; otherwise the rotation is said to be retrograde. In the absence of other information, the axis of rotation is assumed to be normal to the mean orbital plane; Mercury and most of the satellites are in this category. For many of the satellites, it is assumed that the rotation rate is equal to the mean orbital period. In the case of the giant planets, since their surface features are constantly changing and moving at various rates, the rotation of their magnetic fields is used as a reference instead. In the case of the Sun, even this criterion fails (because its magnetosphere is very complex and does not really rotate in a steady fashion), and an agreed-upon value for the rotation of its equator is used instead. For planetographic longitude, west longitudes (i.e., longitudes measured positively to the west) are used when the rotation is prograde, and east longitudes (i.e., longitudes measured positively to the east) when the rotation is retrograde. In simpler terms, imagine a distant, non-orbiting observer viewing a planet as it rotates. Also suppose that this observer is within the plane of the planet's equator. A point on the equator that passes directly in front of this observer later in time has a higher planetographic longitude than a point that did so earlier in time. However, planetocentric longitude is always measured positively to the east, regardless of which way the planet rotates. East is defined as the counter-clockwise direction around the planet, as seen from above its north pole, and the north pole is whichever pole more closely aligns with the Earth's north pole. Longitudes traditionally have been written using "E" or "W" instead of "+" or "−" to indicate this polarity. 
For example, the following all mean the same thing: −91°, 91°W, +269°, 269°E.

The reference surfaces for some planets (such as Earth and Mars) are ellipsoids of revolution for which the equatorial radius is larger than the polar radius; in other words, they are oblate spheroids. Smaller bodies (Io, Mimas, etc.) tend to be better approximated by triaxial ellipsoids; however, triaxial ellipsoids would render many computations more complicated, especially those related to map projections. Many projections would lose their elegant and popular properties. For this reason spherical reference surfaces are frequently used in mapping programs.

The modern standard for maps of Mars (since about 2002) is to use planetocentric coordinates. The meridian of Mars is located at Airy-0 crater.^[11] Tidally-locked bodies have a natural reference longitude passing through the point nearest to their parent body.^[12] However, libration due to non-circular orbits or axial tilts causes this point to move around any fixed point on the celestial body like an analemma.

See also
• American Practical Navigator
• Great-circle distance
• Lunar distance (navigation)
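The degree-length table above can be reproduced directly from the WGS84 constants quoted in the text. A small Python sketch (the function names are mine; the formulas are the standard closed forms for the meridional and prime-vertical radii of curvature on an ellipsoid):

```python
import math

# WGS84 constants quoted in the article
A = 6378137.0                 # equatorial radius, metres
F = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)            # first eccentricity squared (~0.00669438)

def meridional_radius(lat_deg):
    """M(phi): north-south radius of curvature, in metres."""
    s = math.sin(math.radians(lat_deg))
    return A * (1.0 - E2) / (1.0 - E2 * s * s) ** 1.5

def normal_radius(lat_deg):
    """N(phi): east-west (prime-vertical) radius of curvature, in metres."""
    s = math.sin(math.radians(lat_deg))
    return A / math.sqrt(1.0 - E2 * s * s)

def degree_lengths(lat_deg):
    """Surface distance, in km, per 1 degree of latitude and of longitude."""
    rad_per_deg = math.pi / 180.0
    dlat = meridional_radius(lat_deg) * rad_per_deg / 1000.0
    dlon = normal_radius(lat_deg) * math.cos(math.radians(lat_deg)) * rad_per_deg / 1000.0
    return dlat, dlon
```

For example, degree_lengths(45.0) comes out to roughly (111.132, 78.847), matching the 45° row of the table.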
equate to sth

English definition of "equate to sth"

equate to sth — phrasal verb with equate /ɪˈkweɪt/ verb [T]

› to be the same in amount, number, or size: The price of such goods in those days equates to about $50 a kilo at current prices.
Linpack Benchmark -- Java Version

This applet runs the Linpack Benchmark in Java. The Linpack Benchmark is a numerically intensive test that has been used for years to measure the floating point performance of computers. To run the benchmark after the graph loads hit "Press to run ...".

Jack Dongarra, Reed Wade, and Paul McMahan (please direct inquiries to dongarra@cs.utk.edu)

A number of people have been confused by the results of this benchmark. This test is more a reflection of the state of the Java systems than of the floating point performance of the underlying processors. Some Java systems do line by line interpretation and others perform ``just in time'' (jit) compilation. As you might guess, the jit systems perform better, perhaps by an order of magnitude.

The problem solved is a dense 500x500 system of linear equations with one right hand side, Ax=b. The matrix is generated randomly and the right hand side is constructed so the solution has all components equal to one. The method of solution is based on Gaussian elimination with partial pivoting.

Mflop/s: Millions of floating point operations per second. A floating point operation here is a floating point addition or a floating point multiplication with 64 bit operands. For this problem there are 2/3 n^3 + n^2 floating point operations.

Time: The time in seconds to solve the problem, Ax=b.

Norm Res: A check is made to show that the computed solution is correct. The test is based on || Ax - b || / ( || A || || x || eps) where eps is described below. The Norm Res should be about O(1) in size. If this quantity is much larger than 1, the solution is probably incorrect.

Precision: The relative machine precision, usually the smallest positive number eps such that fl( 1.0 - eps ) < 1.0, where fl denotes the computed value.

Special thanks to Jonathan Hardwick (jch@cs.cmu.edu) who made a number of valuable optimizations to the Linpack Java Benchmark.
These changes brought a 10-20% speed increase and are described here. Also don't miss his more general Java Optimization page. As of 5 April 96 we are running the optimized code and will maintain separate timings lists until a significant number of timings have been done under the new code. Also, see Ivan Phillips' CaffeineMark benchmark. As of 30 June 2000 the problem size has been increased to 500x500. This was done because the timing resolution was too low to get accurate Mflop ratings for the 100x100 problem on very fast machines.

A plain-text version of the java-linpack benchmark (tar gzip or zip file). It has all the graphical components removed so that you can run it from the command line (it doesn't run as an applet). Here's an example of the output:

mycomputer> java Linpack
Mflops/s: 0.351
Time: 1.96 secs
Norm Res: 1.67
Precision: 2.22045e-16

This is a list of the current timings that have been submitted using the 500x500 problem set. This is a list of timings that have been submitted using the 100x100 problem set (list size > 200kbytes). Here are the list of timings from the first version of the Linpack Java Benchmark. (Before optimizations.)
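The kernel being timed (Gaussian elimination with partial pivoting on a random matrix whose right-hand side is built so the exact solution is all ones, counted at 2/3 n^3 + n^2 flops) can be sketched as follows. This is an illustrative Python rendering of the method described above, not the actual Java source; the function names, the fixed seed, and the unscaled residual check are my own choices:

```python
import random
import time

def matgen(n, seed=1234):
    """Random matrix A and right-hand side b chosen so the solution is all ones."""
    rng = random.Random(seed)
    a = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]
    b = [sum(row) for row in a]              # b = A * ones
    return a, b

def solve(a, b):
    """Gaussian elimination with partial pivoting; overwrites a and b, returns x."""
    n = len(a)
    for k in range(n - 1):
        # partial pivoting: bring the largest |a[i][k]| onto the diagonal
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        if p != k:
            a[k], a[p] = a[p], a[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            row_i, row_k = a[i], a[k]
            for j in range(k + 1, n):
                row_i[j] -= m * row_k[j]
            b[i] -= m * b[k]
    x = [0.0] * n                            # back substitution
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / a[i][i]
    return x

def benchmark(n=100):
    a, b = matgen(n)
    a_copy = [row[:] for row in a]           # keep A for the residual check
    t0 = time.perf_counter()
    x = solve(a, b)
    elapsed = time.perf_counter() - t0
    flops = 2.0 / 3.0 * n ** 3 + n ** 2      # the operation count quoted above
    mflops = flops / elapsed / 1e6 if elapsed > 0 else float("inf")
    # max-norm residual of A x - b (unscaled, unlike the benchmark's Norm Res)
    resid = max(abs(sum(a_copy[i][j] * x[j] for j in range(n)) - sum(a_copy[i]))
                for i in range(n))
    return mflops, elapsed, resid, x
```

Running benchmark(500) in pure Python is of course far slower than the Java applet; the point is only to make the algorithm and the flop count concrete.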
Physics Forums - View Single Post - Show that [A,B^{n}]=nB^{n-1}[A,B]

I'm having trouble figuring out the following commutator relation problem:

Suppose A and B commute with their commutator, i.e., [tex][B,[A,B]]=[A,[A,B]]=0[/tex]. Show that [tex][A,B^{n}]=nB^{n-1}[A,B][/tex].

I have [tex][A,B^{n}] = AB^{n} - B^{n}A[/tex] and also [tex][A,B^{n}] = AB^{n} - B^{n}A = ABB^{n-1} - BB^{n-1}A[/tex]

I don't know where to go from here. I'm not positive the above relation is correct either.

Do you know the relation [A,BC] = B[A,C] + [A,B]C? It's easy to prove; just expand out. Now use it with [itex] C= B^{n-1} [/itex], that is, use [itex] [A,B^n] = B[A,B^{n-1}] + [A,B] B^{n-1} [/itex]. Now repeat this again on the first term, using [itex] C= B^{n-2} [/itex]. You will get a recursion formula that will give you the proof easily.
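For what it's worth, here is how the recursion closes once you use the assumption [itex][B,[A,B]]=0[/itex] (so [itex][A,B][/itex] commutes with [itex]B[/itex]). This is just the induction spelled out, with the base case [itex]n=1[/itex] trivial:

[tex]
[A,B^{n}] = B\,[A,B^{n-1}] + [A,B]\,B^{n-1}
= B\,(n-1)B^{n-2}[A,B] + B^{n-1}[A,B]
= (n-1)B^{n-1}[A,B] + B^{n-1}[A,B]
= n\,B^{n-1}[A,B]
[/tex]

where the second equality uses the induction hypothesis [itex][A,B^{n-1}]=(n-1)B^{n-2}[A,B][/itex] and moves [itex][A,B][/itex] past [itex]B^{n-1}[/itex].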
Undergraduate Catalog 2009-2011 STAT 104 Elementary Statistics 3 Prereq.: MATH 101 (C- or higher) or placement exam. Intuitive treatment of some fundamental concepts involved in collecting, presenting, and analyzing data. Topics include frequency distributions, graphical presentations, measures of relative position, measures of variability, probability, probability distributions (binomial and normal), sampling theory, regression, and correlation. No credit given to students with credit for STAT 108, 200, 215, 314 or 315. Skill Area II STAT 200 Business Statistics 3 Prereq.: MATH 101 (C- or higher) or placement exam. Application of statistical methods used for a description of analysis of business problems. The development of analytic skills is enhanced by use of one of the widely available statistical packages and a graphing calculator. Topics include frequency distributions, graphical presentations, measures of relative position, measures of central tendency and variability, probability distributions including binomial and normal, confidence intervals, and hypothesis testing. No credit given to students with credit for STAT 104, 108, 215, 314, or 315. Skill Area II STAT 201 Business Statistics II 3 Prereq.: STAT 200 or equivalent (C- or higher). Application of statistical methods used for a description and analysis of business problems. The development of analytical skills is enhanced by use of one of the widely available statistical packages. Topics include continuation of hypothesis testing, multiple regression and correlation analysis, residual analysis, variable selection techniques, analysis of variance and design of experiments, goodness of fit, and tests of independence. No credit given to students with credit for STAT 216, 416 or 453. STAT 215 Statistics for Behavioral Sciences I 3 Prereq.: MATH 101 (C- or higher) or placement exam. Introductory treatment of research statistics used in behavioral sciences. 
Quantitative descriptive statistics, including frequency distributions, measures of central tendency and variability, correlation, and regression. A treatment of probability distributions including binomial and normal. Introduction to the idea of hypothesis testing. No credit given to students with credit for STAT 104, 108, 200, 314 or 315. Skill Area II STAT 216 Statistics for Behavioral Sciences II 3 Prereq.: STAT 215 or permission of instructor. Continuation of STAT 215. Survey of statistical tests and methods of research used in behavioral sciences, including parametric and nonparametric methods. No credit given to students with credit for STAT 201, 416 or 453. Spring. Skill Area II STAT 314 Introductory Statistics for Secondary Teachers 3 Prereq.: MATH 218 and 221. Techniques in probability and statistics necessary for secondary school teaching. Topics include sampling, probability, probability distributions, simulation, statistical inference, and the design and execution of a statistical study. Computers and graphing calculators will be used. No credit given to those with credit for STAT 201, 216 or 453. Graphing calculator required. Fall. STAT 315 Mathematical Statistics I 3 Prereq.: MATH 221; and MATH 218 or permission of department chair. Theory and applications in statistical analysis. Combinations, permutations, probability, distributions of discrete and continuous random variables, expectation, and common distributions (including normal). Fall. STAT 416 Mathematical Statistics II 3 Prereq.: STAT 315. Continuation of theory and applications of statistical inference. Elements of sampling, point and interval estimation of population parameters, tests of hypotheses, and the study of multivariate distributions. [GR] STAT 425 Loss and Frequency Distributions and Credibility Theory 3 Prereq.: STAT 416 (may be taken concurrently). Topics chosen from credibility theory, loss distributions, simulation, and time series. Spring. 
[GR] STAT 453 Applied Statistical Inference 3 Prereq.: Graduate standing with at least one course in statistics or STAT 315 or permission of instructor. Statistical techniques used to make inferences in experiments in social, physical, and biological sciences, and in education and psychology. Topics included are populations and samples, tests of significance concerning means, variances and proportions, and analysis of variance. No credit given to students with credit for STAT 201 or 216. Spring, Summer. [GR] STAT 455 Experimental Design 3 Prereq.: STAT 201 or 216 or 416 or permission of instructor. Introduction to experimental designs in statistics. Topics include completely randomized blocks, Latin square, and factorial experiments. Fall. (O) [GR] STAT 456 Fundamentals of SAS 3 Prereq.: CS 151 and STAT 201 or 216 or equivalent. Introduction to statistical software. Topics may include creation and manipulation of SAS data sets; and SAS implementation of the following statistical analyses: basic descriptive statistics, hypothesis tests, multiple regression, generalized linear models, discriminant analysis, cluster analysis, factor analysis, logistic analysis and model evaluation. This course is cross-listed with MKT 444. No credit given to students with credit for MKT 444. Spring. (E) [GR] STAT 465 Nonparametric Statistics 3 Prereq.: STAT 201 or 216 or 416 or permission of instructor. General survey of nonparametric or distribution-free test procedures and estimation techniques. Topics include one-sample, paired-sample, two-sample, and k-sample problems as well as regression, correlation, and contingency tables. Comparisons with the standard parametric procedures will be made, and efficiency and applicability discussed. Fall. (E) [GR] STAT 476 Topics in Statistics 3 Prereq.: Permission of instructor.
Topics depending on interest and qualifications of the students will be chosen from sampling theory, decision theory, probability theory, Bayesian statistics, hypothesis testing, time series or advanced topics in other areas. May be repeated under different topics to a maximum of 6 credits. Spring. (O) [GR]
On the Correctness of Parallel Bisection in Floating Point James W. Demmel, Inderjit Dhillon and Huan Ren EECS Department University of California, Berkeley Technical Report No. UCB/CSD-94-805 March 1994 Bisection is an easily parallelizable method for finding the eigenvalues of real symmetric tridiagonal matrices, or more generally symmetric acyclic matrices. It requires a function Count( x) which counts the number of eigenvalues less than x. In exact arithmetic Count( x) is an increasing function of x, but this is not necessarily the case with roundoff. Our first result is that as long as the floating point arithmetic is monotonic, the computed function Count( x) implemented appropriately will also be monotonic; this extends an unpublished 1966 result of Kahan to the larger class of symmetric acyclic matrices. Second, we analyze the impact of nonmonotonicity of Count( x) on the serial and parallel implementations of bisection. We present simple and natural implementations which can fail because of nonmonotonicity; this includes the routine bisect in EISPACK. We also show how to implement bisection correctly despite nonmonotonicity; this is important because the fastest known parallel implementation of Count( x) is nonmonotonic even if the floating point is not. BibTeX citation: Author = {Demmel, James W. and Dhillon, Inderjit and Ren, Huan}, Title = {On the Correctness of Parallel Bisection in Floating Point}, Institution = {EECS Department, University of California, Berkeley}, Year = {1994}, Month = {Mar}, URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/1994/5599.html}, Number = {UCB/CSD-94-805}, Abstract = {Bisection is an easily parallelizable method for finding the eigenvalues of real symmetric tridiagonal matrices, or more generally symmetric acyclic matrices. It requires a function Count(<i>x</i>) which counts the number of eigenvalues less than <i>x</i>. 
In exact arithmetic Count(<i>x</i>) is an increasing function of <i>x</i>, but this is not necessarily the case with roundoff. Our first result is that as long as the floating point arithmetic is monotonic, the computed function Count(<i>x</i>) implemented appropriately will also be monotonic; this extends an unpublished 1966 result of Kahan to the larger class of symmetric acyclic matrices. Second, we analyze the impact of nonmonotonicity of Count(<i>x</i>) on the serial and parallel implementations of bisection. We present simple and natural implementations which can fail because of nonmonotonicity; this includes the routine bisect in EISPACK. We also show how to implement bisection correctly despite nonmonotonicity; this is important because the fastest known parallel implementation of Count(<i>x</i>) is nonmonotonic even if the floating point is not.} EndNote citation: %0 Report %A Demmel, James W. %A Dhillon, Inderjit %A Ren, Huan %T On the Correctness of Parallel Bisection in Floating Point %I EECS Department, University of California, Berkeley %D 1994 %@ UCB/CSD-94-805 %U http://www.eecs.berkeley.edu/Pubs/TechRpts/1994/5599.html %F Demmel:CSD-94-805
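The Count(x) function the abstract refers to can be realized for a real symmetric tridiagonal matrix by counting the negative pivots of the LDL^T factorization of T − xI (the Sturm count); bisection then pins down any individual eigenvalue. The sketch below is a minimal plain-Python illustration of that idea — it is not the paper's implementation and has none of the floating-point safeguards the paper is actually about:

```python
def count_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are less than x (Sturm count):
    count the negative pivots of the LDL^T factorization of T - x*I."""
    count = 0
    q = d[0] - x
    if q < 0.0:
        count += 1
    for i in range(1, len(d)):
        # naive recurrence; a robust code guards against q == 0 here
        q = d[i] - x - e[i - 1] ** 2 / q
        if q < 0.0:
            count += 1
    return count


def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """Bisection: shrink [lo, hi] until it brackets the k-th smallest
    eigenvalue to within tol.  Correctness relies on Count being (at
    least effectively) monotonic -- exactly the paper's subject."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For d = [2, 2, 2], e = [1, 1] the eigenvalues are 2 − √2, 2, 2 + √2, so `count_below(..., 2.5)` returns 2 and `kth_eigenvalue(..., k=1, lo=0, hi=1)` converges to 2 − √2.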
Structural and Thermophysical Properties of Cadmium Oxide ISRN Thermodynamics Volume 2012 (2012), Article ID 798140, 4 pages Research Article Structural and Thermophysical Properties of Cadmium Oxide High Pressure Research Laboratory, Department of Physics, Barkatullah University, Bhopal 462026, India Received 19 January 2012; Accepted 19 February 2012 Academic Editors: H. Hirao and Z. Slanina Copyright © 2012 Purvee Bhardwaj. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We have studied the structural and thermophysical properties of cadmium oxide (CdO), using the Three-Body Potential (TBP) model. Phase transition pressures are associated with a sudden collapse in volume. The phase transition pressures and related volume collapses obtained from this model show a generally good agreement with available experimental others data. The thermophysical properties like molecular force constant, Debye temperature, and so forth, of CdO are also reported. 1. Introduction The group of IIB-VIA oxides have presented a great deal of interest because of their applications in various technologies [1]. The semiconducting compounds of this group crystallize mostly in the zincblende (B3), wurtzite (B4), or both structures. Cadmium oxide (CdO) is one of the binary oxides having important electronic, structural, and optical properties. Cadmium oxide occurs naturally as the rare mineral monteponite. CdO is a semiconductor with a band gap of 2.16eV at room temperature. It normally crystallizes in a cubic sodium chloride (NaCl) rock-salt structure, with octahedral cation and anion centers. However under pressure it shows a first-order structural phase transition from NaCl (B1) to CsCl (B2) structure [2]. 
First-principles calculations of the crystal structure, phase transition, and elastic properties of cadmium oxide (CdO) have been carried out with the plane-wave pseudopotential density functional theory method by Peng et al. [3]. Liu et al. studied the B1 to B2 phase transition pressure, at about 90.6 GPa for CdO [4]. Guerrero-Moreno et al. [5] studied the ground-state properties of CdO in the B1 and B2 structures using first-principles methods.

We have applied the Three-Body Potential (TBP) model to the present compound to study the high-pressure phase transition and other properties. The need to include three-body interaction forces for improved results was emphasized by many workers [6–8]. Earlier calculations for B1-B2 transitions were based mainly on two-body potentials, and a likely reason for the resulting disagreements is the failure of the two-body potential model: being pairwise, it cannot explain the observed Cauchy violations (C12 ≠ C44). It was remarked that results could be improved by including the effect of the nonrigidity of ions in the model. The present Three-Body Potential (TBP) model consists of the long-range Coulomb energy, three-body interactions corresponding to the nearest-neighbour separation, the van der Waals (vdW) interaction, and the energy due to overlap repulsion, represented by a Hafemeister and Flygare (HF) [9] type potential extended up to the second-neighbour ions. The purpose of this work is to investigate the structural and thermophysical properties of CdO.

2. Potential Model and Method

Application of pressure directly results in compression, leading to increased charge transfer (the three-body interaction effect [10]) due to the deformation of the overlapping electron shells of adjacent ions (the nonrigidity of ions) in solids.
These effects have been incorporated in the Gibbs free energy G = U + PV − TS as a function of pressure and of the three-body interactions (TBI) [10], which are the most dominant among the many-body interactions. Here U is the internal energy of the system, equivalent to the lattice energy at temperatures near zero, and S is the entropy. At temperature T = 0 K and pressure P, the Gibbs free energies for the rock-salt (B1, real) and CsCl (B2, hypothetical) structures are given by equations (1) and (2), in terms of the unit cell volumes of the B1 and B2 phases, respectively. The first terms in (1) and (2) are the lattice energies for the B1 and B2 structures; they are expressed in (3) and (4) in terms of the Madelung constants for the NaCl and CsCl structures, respectively. C (C′) and D (D′) are the overall van der Waals coefficients of the B1 (B2) phases, and the Pauling coefficients weight the overlap repulsion terms. Ze is the ionic charge; the remaining parameters are the hardness (range) parameters, the nearest-neighbour separations for the NaCl (CsCl) structure, and the three-body force parameter. These lattice energies consist of the long-range Coulomb energy (first term), three-body interactions corresponding to the nearest-neighbour separation (second term), the van der Waals (vdW) interaction (third term), and the energy due to overlap repulsion, represented by a Hafemeister and Flygare (HF) type potential extended up to the second-neighbour ions (remaining terms).

3. Results and Discussion

The Gibbs free energies contain three model parameters: the hardness, range, and three-body force parameters. The values of these parameters have been evaluated using the first- and second-order space derivatives of the cohesive energy, following the method adopted earlier [11]. Using these model parameters and the minimization technique, the phase transition pressure of CdO has been computed. The input data of the crystal and the calculated model parameters are listed in Table 1. We have followed the technique of minimization of the Gibbs free energies of the real and hypothetical phases.
We have minimized the Gibbs free energies given by (3) and (4) at different pressures in order to obtain the interionic separations corresponding to the B1 and B2 phases associated with the minimum energies. The difference in Gibbs free energy between the two phases, ΔG = G_B1 − G_B2, plays an important role in the stability of the structures: the phase transition occurs when ΔG approaches zero, and the phase transition pressure is the pressure at which this happens. At this pressure the compound undergoes a transition associated with a sudden collapse in volume, the signature of a first-order phase transition. Figure 1 shows our computed phase transition pressure for the NaCl-type to CsCl-type transition in CdO, at 90 GPa; the transition pressure is indicated by an arrow in Figure 1. The calculated values of the phase transition pressure are listed in Table 2 and compared with experimental and other theoretical results. It is interesting to note from Table 2 and Figure 1 that the phase transition pressure obtained from our model is in general in close agreement with the experimental data [4] and matches equally well with other theoretical results [5]. The compression curves are plotted in Figure 2, and the values of the volume collapses are given in Table 2. Experimental and theoretical values of the volume collapses are not available for the present compound. It is clear that during the phase transition from NaCl to CsCl, the volume discontinuity in the pressure-volume phase diagram identifies the occurrence of a first-order phase transition, showing the same trend as the other theoretical approach. In addition, to examine the behaviour of the interionic distances with pressure for the present oxide, we present the variation of the nearest-neighbor (nn) and next-nearest-neighbor (nnn) distances for both the B1 and B2 phases with pressure in Figure 3. The interionic distances of the present oxide decrease with increasing pressure.
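The criterion that the Gibbs-energy difference between the two phases vanishes at the transition can be illustrated with a deliberately toy model — the linear G(P) = U + PV below, with made-up numbers, is a stand-in for the paper's minimized equations (3) and (4), not a reproduction of them:

```python
# Toy illustration of the Delta-G = 0 criterion: two phases with
# hypothetical internal energies U and cell volumes V (arbitrary units).
# The "B2" phase has the higher energy but the smaller volume, so
# sufficient pressure eventually favours it.
U1, V1 = -10.0, 2.0   # "B1" phase
U2, V2 = -8.5, 1.5    # "B2" phase


def dG(P):
    # Delta G = G_B2 - G_B1 at T = 0 (no entropy term)
    return (U2 + P * V2) - (U1 + P * V1)


def transition_pressure(lo=0.0, hi=100.0, tol=1e-10):
    # bisection on Delta G(P) = 0; dG is positive (B1 stable) at lo
    # and negative (B2 stable) at hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dG(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Here ΔG(P) = 1.5 − 0.5P, so the root — and hence the toy transition — sits at P = 3 (arbitrary units); in the paper the same root-finding idea is applied to the full minimized Gibbs energies of the B1 and B2 phases.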
In Figure 3 the open circles represent the nearest-neighbor (nn) distances and the solid circles the next-nearest-neighbor (nnn) distances for CdO. To further extend the applicability of our model, we have calculated the molecular force constant, infrared absorption frequency, Debye temperature, and Grüneisen parameter, which are directly derived from the cohesive energy. The compressibility is well known to be given in terms of the molecular force constant, whose short-range nearest-neighbour part is given by the last three terms in (3) and (4). This force constant leads to the infrared absorption frequency once the reduced mass of the crystal is known. The thermal expansion coefficient can be calculated with the knowledge of the specific heat; the expressions have been given in our earlier paper [12]. We have calculated the thermophysical properties of CdO; these properties provide interesting information about the substance. The Debye characteristic temperature reflects the structural stability, the strength of the bonds between the constituent elements, the presence of structural defects, and the density. The calculated thermophysical properties are listed in Table 3. Due to the lack of experimental data, most of them could not be compared; however, we have compared the value of the Debye temperature with the theoretical results provided by Peng et al. [3], and our result shows the same trend as reported by others. To the best of our knowledge, the thermal properties of the present compound have not yet been measured or calculated; hence our results can serve as a prediction for future investigations. In view of the overall results, it may be concluded that the Three-Body Potential (TBP) model is generally in good agreement with the available experimental and theoretical values.
Finally, it may be concluded that the present model has successfully predicted the compression curves and phase diagrams giving the phase transition pressures, associated volume collapses, and elastic properties correctly for cadmium oxide.
Your students know how to find the area of common figures like rectangles, parallelograms, and triangles. Now they will extend their knowledge to finding the area of a circle. Spend some time helping them understand the number pi as the ratio of the circumference of a circle to its diameter. This will help them feel more comfortable with the formula for the circumference of a circle. It will also help them relate pi to the area of a circle. Materials: 5 circular objects of different sizes, such as jar lids, for every two students, string, rulers, and blank paper for all students Preparation: Pass out a set of jar lids to student pairs. Also pass out string, rulers, and blank paper for each student to use. Have students create a table on their paper similar to the one described and illustrated below. Prerequisite Skills: Students should be able to measure distances. Draw a picture of a circle on the board or overhead projector and review the definitions for circle, diameter, and radius. Introduce the concept of the circumference as perimeter. • Ask: Does anyone know what this figure is? (circle) Draw a diameter in the circle. Does anyone know what we call this line that passes through the center of a circle? (diameter) What do we call the line segment from the center to a point on the circle? (radius) Does anyone remember what we call the distance around the circle? (Circumference, if they don't know this, tell them.) • Say: Today we are going to find a way to calculate the circumference if we know the diameter or radius. • Have students get into groups of two. Illustrate for them how to find circumference using a piece of string and a ruler. • Say: To find the circumference of this jar lid, we can wrap a string around it like this. (Demonstrate this process.) The length of that string is the circumference. Next, we take the string and lay it alongside a ruler to find its length. 
To find the diameter of the lid, we measure across the circle so the string passes through the center. I'd like you to do this for the five jar lids you have at your desks. Then put this information in a table like the one on the board (overhead). Finally, calculate the ratio of the circumference to the diameter and put that in your table as │Circumference (C) │Diameter (d) │Ratio C/d│ │7.6 cm │2.4 cm │3.16 │ • Say: After you have collected all the information, examine it and write down anything you discover about the relationship between circumference and diameter. Students should discover that the ratio is a little greater than three for each circle. The measurements will not be exact, but they should be close enough to arrive at a value to 3.1. If some students come up with a ratio not close to three, they should recheck their measurements and their division with their partner. • Ask: What relationships did you discover about the data you collected? Some students will say that as the circumference increased, so did the diameter. Elicit from students that the ratios are all about the same, something close to three. • Say: That's right. The ratios should all be about the same. In fact, mathematicians have been able to prove that they all should equal a number called pi. Pi is approximately equal to 3.14. This leads us to a formula for the circumference, which is C = • Ask: Who can tell me how to find the circumference of a circle if its diameter is 8 m? Students should suggest substituting 8 for d in the formula C = 3.14 x d. Have a volunteer do this on the board for the class to see. Emphasize the importance of labeling the answer. • Then have students do the problems below.
Towards higher dimensional analogues of the torsor of Drinfeld's associators

Seminar Room 1, Newton Institute

The purpose of this talk is to give a description of the set of homotopy classes of formality quasi-isomorphisms for a Sullivan model of the Little n-discs operads, where we consider any n>1. The Sullivan model of a topological operad combines a commutative dg-algebra structure, reflecting the rational homotopy of the spaces underlying the operad, and a cooperad structure, reflecting the composition structures of the operad. I will explain the definition of an obstruction spectral sequence for the formality of these Sullivan models of operads. I will give a description of the obstruction spectral sequence associated to the little discs operads, and I will explain the connection with the definition of the Drinfeld associators.
Here's the question you clicked on:

which is the biggest fraction 2 3/4, 2 3/5, 2 1/2, 2 5/8, or 2 7/10

Best Response: Those are all mixed numbers with a whole part of 2, so that can be ignored. There are several ways to compare fractions. One is to give them all common denominators. An equivalent way is to turn them all into decimal fractions. Another way to compare fractions is to cross-multiply and compare the products.

Best Response: 2 3/4
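The "turn them all into decimal fractions" approach is easy to mechanize; a quick check with Python's exact Fraction type confirms the answer:

```python
from fractions import Fraction

# the mixed numbers from the question, as exact fractions
choices = {
    "2 3/4":  2 + Fraction(3, 4),
    "2 3/5":  2 + Fraction(3, 5),
    "2 1/2":  2 + Fraction(1, 2),
    "2 5/8":  2 + Fraction(5, 8),
    "2 7/10": 2 + Fraction(7, 10),
}

biggest = max(choices, key=choices.get)
print(biggest, float(choices[biggest]))   # 2 3/4 2.75
```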
Physics Forums - View Single Post - Mass in Relativity

But my question remains unanswered! As in post #8: "By that I suppose that mass is a scalar quantity and hence invariant. But the relativistic mass is just due to relative motion between two observers. But then how is mass defined in SR, QM, and GR? Is the definition of mass taken to be the same in all these fields? Please explain. Why is the principle of equivalence applied in GR but not in SR and QM?"

At least please answer my questions.
Here's the question you clicked on: Main dish

\[(1-x^2) y'-x^2y = (1+x)\sqrt{1-x^2}\]
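In standard form this linear first-order ODE reads y' − x²/(1−x²)·y = (1+x)/√(1−x²). One route (my own working, not taken from the thread): the integrating factor is μ = exp(−∫ x²/(1−x²) dx) = eˣ·√((1−x)/(1+x)), and a short calculation gives (μy)' = eˣ, hence the candidate general solution y = √((1+x)/(1−x))·(1 + C·e⁻ˣ) on (−1, 1). The sketch below checks that candidate numerically by substituting it back into the equation:

```python
import math

C = 2.0  # arbitrary integration constant for the check


def y(x):
    # candidate general solution y = sqrt((1+x)/(1-x)) * (1 + C*exp(-x))
    return math.sqrt((1 + x) / (1 - x)) * (1 + C * math.exp(-x))


def residual(x, h=1e-6):
    # plug y back into (1 - x^2) y' - x^2 y = (1 + x) sqrt(1 - x^2)
    yp = (y(x + h) - y(x - h)) / (2 * h)          # central difference
    return (1 - x**2) * yp - x**2 * y(x) - (1 + x) * math.sqrt(1 - x**2)


for x in (-0.5, 0.0, 0.3, 0.8):
    print(f"x={x:+.1f}  residual={residual(x):.2e}")
```

The residuals come out at the level of the finite-difference noise, consistent with zero.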
VARIANCE IN CRAPS--ODDS BET OR NO?

November 20th, 2009 at 10:01:04 AM permalink

In a previous thread, one poster suggested that after 10,000 or so rolls, your results would be "durn close" whether you take the odds bet or not. I disagree, and here is my attempt to explain why. I'd like to modify the premise slightly from 10,000 rolls to 3,000 Pass Line bets resolved. This is a bit more than the number of expected Pass Line bets resolved over 10,000 rolls, and it is easier to solve.

I found in a probability textbook that for n independent trials with probability of success p and probability of failure q, the variance is n*p*q. Furthermore, I found that you can sum independent variances. The formula assumes a payout of 1 for success and 0 for failure. However, that is not the case in craps, so we need to define another variable "x" as the difference between the payout for winning and losing (e.g., if the point is 8, a win is worth +7 and a loss -6, so x=13). When computing the variance we have to square x and multiply it by n*p*q.

For Pass Line bets only (no odds), n=3000, p~0.507071, q~0.492929, x=2 => Var=2,999.4 and standard deviation~54.8.

For the case of making Pass Line bets and taking 3x4x5x odds, the true calculation should be the sum, over every possible outcome, of the product of the square of the difference between that outcome and the mean and the probability of that outcome. It's hard to write in words and even harder to calculate. So I will instead approximate the variance by summing the variances of the possible outcomes of one trial and using their expected number of occurrences in 3000 trials as the value for n.
For come out winners and losers, n=1000, p=8/12, q=4/12, x=2 => Var~888.89
For points 6 and 8, n~833.33, p=5/11, q=6/11, x=13 => Var~34,917.36
For points 5 and 9, n~666.67, p=4/10, q=6/10, x=12 => Var~23,040.00
For points 4 and 10, n=500, p=3/9, q=6/9, x=11 => Var~13,444.44

Summing the variances, we get Var~72,290.69, so the standard deviation for 3000 trials, taking odds, is ~268.9. I would suggest that even after 3,000 trials 268.9 is significantly higher than 54.8, but I suppose that is a matter of interpretation. If anyone knows a shortcut for calculating the true variance for the case of taking odds, or has any corrections to my math, please share.

The ratio of people to cake is too big.

November 20th, 2009 at 10:41:27 AM permalink

I just realized I left out a major piece of the puzzle that will help compare the variances over different numbers of trials--Expected Value. The EV for either betting strategy is the same and is dependent on the number of trials. Specifically, EV ~ -0.01414*n.

EV (n=3,000) ~ -42.42
STDEV (Pass Line only, n=3,000) ~ 54.77
STDEV (Pass Line + odds, n=3,000) ~ 268.87

EV (n=30,000) ~ -424.24
STDEV (Pass Line only, n=30,000) ~ 173.19
STDEV (Pass Line + odds, n=30,000) ~ 850.24

EV (n=3,000,000) ~ -42,424.24
STDEV (Pass Line only, n=3,000,000) ~ 1,731.88
STDEV (Pass Line + odds, n=3,000,000) ~ 8,502.39

For 3,000 trials, breaking even is within 1 standard deviation for both strategies. For 30,000 trials, breaking even is still within 1 standard deviation for the odds bettor, but not for the Pass Line only strategist. For 3,000,000 trials, breaking even is several standard deviations away from EV, so that even the luckiest player will almost certainly lose over this number of trials.

The ratio of people to cake is too big.
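The component sums above are easy to re-evaluate; this sketch (plain Python, mine rather than the poster's) recomputes Var ≈ Σ n·p·q·x² for the 3x4x5x case. The spreads x assume a $1 flat bet: a made point wins +7 in every case, against losses of −6, −5, −4 on the 6/8, 5/9, and 4/10 points respectively.

```python
import math

# (expected count in 3000 trials, win prob, loss prob, win-loss spread)
components = [
    (1000,     8 / 12, 4 / 12,  2),  # come-out naturals vs craps (flat bet only)
    (2500 / 3, 5 / 11, 6 / 11, 13),  # points 6 & 8: +7 vs -6 with 5x odds
    (2000 / 3, 4 / 10, 6 / 10, 12),  # points 5 & 9: +7 vs -5 with 4x odds
    (500,      3 / 9,  6 / 9,  11),  # points 4 & 10: +7 vs -4 with 3x odds
]

variance = sum(n * p * q * x**2 for n, p, q, x in components)
stdev = math.sqrt(variance)
print(f"Var ~ {variance:,.2f}, stdev ~ {stdev:.1f}")   # Var ~ 72,290.69, stdev ~ 268.9
```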
November 21st, 2009 at 3:54:09 AM permalink

odiousgambit: Again, my shortcomings in math are quite regrettable, but one thing has become clear: 10,000 rolls [come-out rolls?] can mean only 3,000 trials where free odds are involved. Now what you have said makes more sense to me. At something like 30 come-out rolls per hour, there must be quite a few craps players who would see 10,000 rolls in a reasonable span or certainly in a lifetime.

"Baccarat is a game whereby the croupier gathers in money with a flexible sculling oar, then rakes it home. If I could have borrowed his oar I would have stayed." Mark Twain

December 8th, 2009 at 4:21:37 PM permalink

In my previous posts to this thread, I had the right idea, but the wrong math. The table below shows my calculations with (hopefully) the right math. A trial is defined as a completed Pass Line bet. Expected Value is the same for all cases and is solely dependent on the number of trials. The first Standard Deviation and Win % columns assume a $1 bet on the pass line. The second Standard Deviation and Win % columns assume a $1 bet on the pass line plus 3x4x5x odds. The third Standard Deviation and Win % columns assume a $1 bet on the pass line plus 100x odds. The Win % is based on the normal curve and is the probability that after the given number of trials, the bettor will be ahead or break even overall. Obviously the Win % should be the same after 1 trial for all betting systems, but the error is due to using the normal curve as an approximation of the possible outcomes. This approximation becomes more valid as the number of trials increases.
                             Pass Line Only        3x4x5x Odds           100x Odds
Trials       EV                 STDEV     Win %      STDEV     Win %       STDEV     Win %
1            $(0.01)            $0.71     49.2%      $1.01     49.4%      $20.68     50.0%
10           $(0.14)            $2.24     47.5%      $3.20     48.2%      $65.39     49.9%
100          $(1.41)            $7.07     42.1%     $10.12     44.4%     $206.77     49.7%
1,000        $(14.14)          $22.36     26.4%     $32.00     32.9%     $653.86     49.1%
10,000       $(141.41)         $70.71      2.3%    $101.20      8.1%   $2,067.69     47.3%
100,000      $(1,414.14)      $223.61      0.0%    $320.03      0.0%   $6,538.62     41.4%
1,000,000    $(14,141.41)     $707.11      0.0%  $1,012.02      0.0%  $20,676.93     24.7%
10,000,000   $(141,414.14)  $2,236.07      0.0%  $3,200.30      0.0%  $65,386.18      1.5%

Note that the 100x odds bettor has nearly a 1 in 4 chance of being up even after 1 million trials. That IMO is the power of the free odds bet. If anyone has any corrections, please let me know.

The ratio of people to cake is too big.
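The Win % column above leans on a normal approximation; a direct Monte Carlo can at least test the qualitative claim that taking odds raises the chance of finishing a block of trials at break-even or better. This sketch is my own (outcome model, seed, and session counts are my choices, and its absolute percentages need not match the chart), simulating 1,000-trial sessions for a $1 flat bettor versus a $1-plus-3x4x5x-odds bettor:

```python
import random

random.seed(1)

# Per-trial net outcomes for a $1 pass-line bet, flat vs 3x4x5x odds.
FLAT = ([+1, -1], [244 / 495, 251 / 495])
ODDS = ([+1, -1, +7, -6, -5, -4],
        [8 / 36, 4 / 36,
         10 / 36 * 5 / 11 + 8 / 36 * 4 / 10 + 6 / 36 * 3 / 9,  # any point made
         10 / 36 * 6 / 11, 8 / 36 * 6 / 10, 6 / 36 * 6 / 9])   # seven-out by point


def frac_sessions_ahead(outcomes, probs, trials=1000, sessions=1000):
    # fraction of simulated sessions that finish at break-even or better
    ahead = 0
    for _ in range(sessions):
        total = sum(random.choices(outcomes, probs, k=trials))
        if total >= 0:
            ahead += 1
    return ahead / sessions


flat = frac_sessions_ahead(*FLAT)
odds = frac_sessions_ahead(*ODDS)
print(f"P(ahead after 1000 trials): flat {flat:.3f}, 3x4x5x odds {odds:.3f}")
```

Both strategies have the same expected loss per trial, but the odds bettor's fatter spread of outcomes makes a break-even-or-better session noticeably more likely — the same direction the thread argues.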
"Baccarat is a game whereby the croupier gathers in money with a flexible sculling oar, then rakes it home. If I could have borrowed his oar I would have stayed." Mark Twain May 16th, 2011 at 3:02:29 AM permalink a little bump, not giving up on this Member since: Nov 9, 2009 Threads: 220 "Baccarat is a game whereby the croupier gathers in money with a flexible sculling oar, then rakes it home. If I could have borrowed his oar I would have stayed." Mark Twain Posts: 4309 May 16th, 2011 at 1:55:30 PM permalink I guess maybe I way off on what you are trying to get at but.......I think were you are going wrong is using the same bank roll for both pass line bets and odds bets. If you separate vert1276 the two you will see if the one person bets 1 unit on the pass line and and another places 1 unit on the pass line with 2X's odd over 3000 rolls of the dice came out statistically right they would lose the same amount of money. Member since: Apr 25, 2011 If they had the same bankroll and they were both placing the same amount on every roll. Player one makes a 100 unit P/L bet and player two was playing a 20 unit P/L bet with 4 times Threads: 69 odds(80 units) they both have 100 units in play but player two will lose less money. because only 20% of his bet is subject to negative expectations. will 100% of player ones bet is Posts: 444 subject to negative exceptions. But if they both made the same P/L bet. And one placed odds and one didn't in the end they would both lose the same amount. June 18th, 2011 at 9:39:37 AM permalink sorry, lost track of this for a while I'll repeat that it is very unusual to find anyone with figures on variance/standard deviation in Craps with free odds , at least I don't run across them Member since: Nov 9, 2009 Quote: vert1276 Threads: 220 Posts: 4309 And one placed odds and one didn't in the end they would both lose the same amount. 
the premise of the original post is that this "in the end" business takes many more trials than the 10,000 claimed in another thread. I am the person who said 10,000 yields "durn close". If the claim is true, 10,000 not enough, then 3000 rolls or even 3000 come-out rolls is not enough. Or 10,000. I have pretty much confirmed this for myself now, using Wincraps. "Baccarat is a game whereby the croupier gathers in money with a flexible sculling oar, then rakes it home. If I could have borrowed his oar I would have stayed." Mark Twain June 18th, 2011 at 11:34:43 AM permalink Bottom line: Our best chance of staying competitive is just betting the ole boring pass-odds bet. Mulitiple place betting or for that matter multiple come-odds betting schemes are dwm doomed as the Ole Ugly is just too common, I have found that out the hard way. Think of it this way, much easier to recover from losing one bet compared to 3-6 bets. Member since: Various betting schemes for the odds bet and I do like increasing the odds bet as inside box numbers are rolled after the point is established. Personally I like starting with $5 pass Aug 9, 2010 and $20 odds, then increasing the odds bet one unit as every inside number is rolled and stopping at $50 odds, then start anew on every pass-point sequence. Using a $600 day session Threads: 28 bankroll and have had good net results over many sessions thusfar. Posts: 191 I tend to stray away as do like the action of place betting, but place betting has overall hurt my bankroll. The pass odds only is much better and it has taken awhile for it to sink in, must be a slow learner... June 18th, 2011 at 12:49:25 PM permalink That is basically it. Which is the reason the skimpy costumes, free booze and stick's spiel exist. Combined, they are designed to have an effect on your betting, not just your merriment. 
Someone who starts out conservatively with pass line and 2x odds is often someone who winds up making the higher house-edge bets after a bit of booze and excitement has its I hate the effect of a seven-out if I've been doing come bets as well as my initial pass line bet, but I'm told that a come bet and a passline bet are virtually identical and it makes FleaStiff no difference if I do a PassLine Bet at each of two tables or do a PassLine bet and then a ComeBet at the same table. Member since: I know about the dice having no memory. We all know that. However, I do always feel that a "seven-out" is "due". It is why I often try to do PassLine, ComeBet, ComeBet, DontCome, Oct 19, 2009 DontCome. Its a way of saying to myself: Okay, you've got some chances if the shooter is lucky and some if he ain't. And if things work out right, you can see those Come Bets rolled, Threads: 161 then if the seven does appear, it won't hurt you too bad to lose that Passline bet. Posts: 8261 We all know to have as much money as possible on the Odds portion and as little as required on the Flat portion. We don't always do it, but we know the theory and don't really dispute it in any way. On a choppy table (they all seem to be choppy) we can do well if we pretty much wind up even. Sure somebody will walk up and make a high house edge center bet and walk away with his winnings, but often he just walks away. We who play a tight Basic Strategy game and resist temptations to listen to the stickman's patter tend to do the best at the tables despite the headline making stories such as that four hour roll at Borgata by a newbie.
Salem, NH Science Tutor
Find a Salem, NH Science Tutor
...I taught eight years of high school Physics. I taught over forty college courses in Physics, Mathematics and Electrical Engineering. I have a CAGS degree in Educational Leadership (it is like a Master's degree in School Administration). I enjoy teaching high/middle school and college students, regardless of level.
6 Subjects: including physics, algebra 1, electrical engineering, prealgebra
...Many colleges and universities require the SAT test. The PSAT is an excellent way to identify how well prepared you are for the SAT, while also allowing you to qualify for a National Merit Scholarship. I was a National Merit Scholar, and I appreciate how valuable that accomplishment was for opening college doors.
55 Subjects: including ACT Science, philosophy, geology, English
I'm a doctoral candidate in Environmental Biology with pedagogy training and experience developing middle school and high school science curriculum. I have also taught middle-school, high-school, and college-level (Introductory Biology and Evolutionary Ecology) courses. I'm passionate about biolog...
7 Subjects: including biochemistry, ACT Science, biology, physical science
...Even though I'm new to tutoring, I'm prepared to help students learn the material needed and give my best effort in the process. Upon selecting me as a tutor, I would teach physics, chemistry, and/or mathematics (algebra, calculus). Aside from my studies, I was a water polo player and play guita...
6 Subjects: including chemistry, physics, geometry, algebra 1
...I am a native Chinese speaker. Chinese Language was always one of my strong subjects in high school while I was in China. I got high scores on the college entrance exam for this subject.
5 Subjects: including physics, Chinese, algebra 2, precalculus
Journal of the Optical Society of America
Over a range of high temporal and low spatial frequencies, counterphase flickering gratings evoke the so-called frequency-doubling illusion, in which the apparent brightness of the grating varies at twice its real spatial frequency. The form of the nonlinearity that causes this second-harmonic distortion of the visual response was determined by a cancellation technique. The harmonic distortion can be measured as a function of amplitude (or contrast) by adding to the flickering grating a real, nonflickering, double-frequency component with the amplitude and phase required to cancel the illusory second harmonic. Harmonic distortion curves obtained in this way imply that the nonlinearity is of the form |s|^p, where s is the stimulus pattern (without its dc component) and p is close to 0.6. If p = 1, or if the absolute value is not taken, this expression predicts distortion curves that differ significantly from the experimental results. Hence neither rectification nor compression alone is sufficient to account for the second-harmonic distortion; both are required. © 1981 Optical Society of America
D. H. Kelly, "Nonlinear visual responses to flickering sinusoidal gratings," J. Opt. Soc. Am. 71, 1051-1055 (1981)
1. C. A. Burbeck and D. H. Kelly, "Retinal mechanisms inferred from measurements of threshold sensitivity versus suprathreshold orthogonal mask contrast," in Proceedings of Topical Meeting on Recent Advances in Vision (Optical Society of America, Washington, D.C., 1980), paper ThB4. 2. B. G. Cleland, W. R. Levick, and K. J. Sanderson, "Properties of sustained and transient ganglion cells in the cat retina," J. Physiol. (London) 228, 649–680 (1973); see also C. Enroth-Cugell and J. G. Robson, "The contrast sensitivity of retinal ganglion cells," J. Physiol. (London) 187, 517–552 (1966). 3. W. Richards and T. B.
Felton, "Spatial frequency doubling: retinal or central?" Vision Res. 13, 2129–2137 (1973). 4. C. W. Tyler, "Observations on spatial frequency doubling," Perception 3, 81–86 (1974). 5. V. Virsu and P. Laurinen, "Long-lasting afterimages caused by neural adaptation," Vision Res. 17, 853–860 (1977). 6. K. I. Naka and W. A. H. Rushton, "S-potentials from luminosity units in the retina of fish (cyprinidae)," J. Physiol. (London) 185, 587–599 (1966). 7. D. H. Kelly and R. E. Savoie, "Theory of flicker and transient responses. III. An essential nonlinearity," J. Opt. Soc. Am. 68, 1481–1490 (1978). 8. C. Rashbass, "The visibility of transient changes of luminance," J. Physiol. (London) 210, 165–186 (1970). 9. See, for example, I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series and Products (Academic, New York, 1965). This integral is evaluated on p. 372 (Formula 3.631-9) in terms of the beta function B, where B(x, y) = [Γ(x)Γ(y)]/Γ(x + y). 10. Regardless of the form of the nonlinearity, its odd part can create only odd harmonics and therefore can have no effect on the second harmonic. Conversely, the even part of the nonlinearity can create only even harmonics and therefore can have no effect on the fundamental. Thus, even if these two frequency components are responses of the same nonlinear transducer, they represent separate, independent, additive aspects of its behavior. This is true for any transducer function that can be expanded in a Taylor series, and it does not depend on the phase of the stimulus. It follows from the fact that odd powers of a sinusoidal input (sin^(2n−1) θ and cos^(2n−1) θ) can always be expressed as a (finite) sum of odd harmonic terms, sin(2n − 2k − 1)θ and cos(2n − 2k − 1)θ, while even powers (sin^(2n) θ and cos^(2n) θ) can be expressed as a sum of even harmonic terms, in the latter case always of the form cos 2(n − k)θ, where k ranges from 0 to n − 1. See Ref. 9, pp. 25, 26. 11. D. H. Kelly, "Visual nonlinearity measurement," J. Opt.
Soc. Am. 71, 368A (1981).
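As a quick illustration of the abstract's central claim, that the measured distortion requires both rectification and compression, one can numerically compare the harmonic content of |s|^p with that of a sign-preserving (compression-only) version of the same transducer. This is an illustrative sketch, not a reproduction of the paper's cancellation measurements; the sampling grid and exponent p = 0.6 are assumptions taken from the abstract.

```python
import numpy as np

# Apply the candidate nonlinearity |s|^p (p ~ 0.6) to a sinusoid and read
# off its harmonics; compare with sign-preserving compression sgn(s)|s|^p.
p = 0.6
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
s = np.sin(theta)

full = np.abs(s) ** p                     # rectification + compression
odd_only = np.sign(s) * np.abs(s) ** p    # compression alone (odd in s)

def harmonic_amplitude(y, k):
    """Amplitude of the k-th harmonic over one period of y."""
    c = np.fft.rfft(y) / len(y)
    return 2.0 * np.abs(c[k])

# |s|^p is even in s, so it yields only even harmonics (a substantial
# second harmonic and no fundamental); the sign-preserving version is odd
# in s and yields no second harmonic at all.
print(harmonic_amplitude(full, 2), harmonic_amplitude(odd_only, 2))
```

The vanishing second harmonic of the sign-preserving curve mirrors footnote 10 above: the odd part of any transducer cannot produce even harmonics, so rectification (taking the absolute value) is essential to the frequency-doubled response.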
"The Quantum Universe" and "How to teach quantum physics to your dog"
I've written previously about how it can be difficult to move from the basics of a subject to the more complex aspects. Quantum mechanics is a good case in point. There are many, many books that introduce the subject to a non-expert reader, and just as many undergraduate and graduate textbooks that try to turn those inspired by the popular science into experts. It's a big jump between the two. Brian Cox and Jeff Forshaw's book, "The Quantum Universe: Everything That Can Happen Does Happen", is possibly an attempt to bridge the gap between the two. It covers many of the familiar aspects of the popular science book, but tries to discuss these in terms of Feynman path integrals (without too much of the calculation) and attempts to give, in my view at least, a slightly different spin on the 'weirdness' of quantum mechanics. The main thrust of the book has been caught up in a little bit of internet controversy (here and here, for example), in which Cox described the change in energy of an electron in a lump of diamond as inextricably linked to the energy of every other electron in the Universe, and that due to the Pauli Exclusion Principle, in which no two fermions can exist in the same quantum state, if the energy of one changes, then the energy of every other electron changes in response. Everything is connected to everything else. This goes to show you have to be very careful about how you present complex science in public, and the more famous you get the more careful you have to be. Cox seems to have oversimplified in the TV lecture, and even if correct it does seem a bit of a sales gimmick in this case, as any changes are simply too small to be measured.
In addition there does seem to be a little bit of confusion with this idea when discussing degeneracy pressure in neutron stars in the final, fairly technical chapter. While the book is interesting I did find it rather hard work, due to the 'clocks' analogy that is used to describe quantum interference effects and which runs throughout the book. I didn't find it a very intuitive analogy, and coupled with the somewhat wordy explanations and lack of diagrams (and diagrams on pages away from the explanations dealing with them), I'm not sure I'd recommend the book to anyone other than those with a desperate need to keep up to date with Brian Cox's popular science output. The book should be applauded for the introduction of topics such as semiconductors and the attempt to do a real-world calculation at the end, although I'm not really sure who this is intended for. I may be wrong, but I don't think Brian Cox does much in the way of undergraduate teaching. Someone who does is Chad Orzel, from Union College, who has written a much clearer explanation of quantum physics with his "How to teach quantum physics to your dog". In this book much of the same material is covered in a much clearer and more concise fashion. It doesn't have as much technical info as "The Quantum Universe", but covers the material well, from the basic ideas of quantum uncertainty all the way to quantum teleportation. The hook here is that the discussion is framed as a sort of Socratic dialogue between Orzel and his dog Emmy. Initially I thought this would just annoy me, but I got used to it quickly, and it really helps the explanations to have an interjector asking sensible questions about the validity of what has just been said (always good when dealing with quantum physics!). In many ways this looks like a smart undergraduate asking the questions (although with fewer squirrels), and I suspect that this clarity of thought has come from extensive interactions with inquiring young minds.
I really enjoyed the book, and the only weak explanation, where the technicalities prove a bit too much, was in the teleportation chapter. There are lots and lots of books on quantum theory in the popular marketplace, but if I were looking to invest in one of the newer ones, then I'd plump for Orzel's book and Emmy's bunnies. It's a nice refresher with good explanations, and is well suited for senior school pupils. Cox and Forshaw's book is, I feel, trying to be a Feynman's QED for a modern age, but comes up short with more muddled explanations. It is to be commended for trying to take on some very challenging ideas though.
Patent application title: OPTICAL WAVEGUIDES HAVING FLATTENED HIGH ORDER MODES
A deterministic methodology is provided for designing optical fibers that support field-flattened, ring-like higher order modes. The effective and group indices of its modes can be tuned by adjusting the widths of the guide's field-flattened layers or the average index of certain groups of layers. The approach outlined here provides a path to designing fibers that simultaneously have large mode areas and large separations between the propagation constants of their modes.
A waveguide for guiding a field-flattening preferred mode at a preferred mode effective index, comprising: a plurality of field-flattening regions, wherein each field-flattening region of said plurality of field-flattening regions comprises a field-flattening region inner boundary, a field-flattening region outer boundary and a field-flattening region refractive index, wherein said field-flattening region refractive index does not vary substantially within said each field-flattening region of said plurality of field-flattening regions, and said field-flattening region refractive index is substantially equal to a preferred mode effective index, to thus induce the field associated with the field-flattening preferred mode to not vary substantially with position within said each field-flattening region; one or more stitching regions, wherein a stitching region of said one or more stitching regions is located between neighboring said each field-flattening region, wherein each said stitching region comprises a stitching region inner boundary, a stitching region outer boundary, and a stitching region refractive index structure, wherein said stitching region refractive index structure comprises means for inducing the field of said preferred mode to vary substantially with position within said stitching region and means for
inducing the gradient of the field of said field-flattening preferred mode to be zero or nearly zero at said stitching region inner boundary and at said stitching region outer boundary; one or more terminating regions, wherein each terminating region of said one or more terminating regions comprises a terminating region inner boundary, a terminating region outer boundary, and a terminating region refractive index structure, wherein said terminating region inner boundary is in contact with one said field-flattening region outer boundary; and a cladding region comprising a cladding region inner boundary and a substantially homogeneous cladding refractive index, wherein said cladding region surrounds all said plurality of field-flattening regions, surrounds all said one or more stitching regions, and surrounds all said one or more terminating regions, wherein said cladding region inner boundary is in contact with said terminating region outer boundary of all of said one or more terminating regions, wherein each said terminating region refractive index structure of said one or more terminating regions comprises means for inducing the field of said field-flattening preferred mode to transition from the field at said terminating region inner boundary to a decaying field in said cladding region. 
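The field-flattening condition in claim 1 follows from the scalar wave equation E'' = k0²(n_eff² − n²)E: in a layer whose index equals the preferred mode's effective index the curvature vanishes, so a field that enters the layer with zero gradient stays flat across it. Below is a minimal 1-D numerical sketch of that condition; all parameters (wavelength, indices, layer extent) are hypothetical and not taken from the patent.

```python
import numpy as np

# 1-D scalar sketch: E'' = k0^2 * (n_eff^2 - n(x)^2) * E.
# When n(x) == n_eff the right-hand side is zero, so a flat entry
# (unit field, zero gradient) propagates unchanged across the layer.
k0 = 2.0 * np.pi / 1.0        # wavenumber for a unit wavelength (arbitrary units)
n_eff = 1.45                  # assumed preferred-mode effective index
n_layer = 1.45                # field-flattening layer: index matched to n_eff

x = np.linspace(0.0, 5.0, 5001)
dx = x[1] - x[0]
E, dE = 1.0, 0.0              # flat entry conditions
Es = []
for xi in x:
    Es.append(E)
    d2E = k0**2 * (n_eff**2 - n_layer**2) * E   # exactly zero when matched
    dE += d2E * dx
    E += dE * dx
Es = np.asarray(Es)
print(Es.max() - Es.min())    # ~0: the field stays flat

# For contrast: a layer with index below n_eff gives positive curvature,
# so the same entry conditions produce a field that grows away from flat.
E2, dE2 = 1.0, 0.0
for xi in x:
    d2E2 = k0**2 * (n_eff**2 - 1.44**2) * E2
    dE2 += d2E2 * dx
    E2 += dE2 * dx
print(E2)                     # grows well above 1
```

The contrast case is why the claim requires the field-flattening region index to be "substantially equal" to the preferred mode effective index: any mismatch introduces curvature, and the field can no longer remain flat across the region.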
The waveguide of claim 1, wherein each stitching region of said one or more stitching regions comprises one or more stitching region layers, wherein each said stitching region layer of said one or more stitching region layers comprises a stitching region layer inner boundary, a stitching region layer outer boundary, and a substantially homogeneous stitching layer refractive index and wherein said each terminating region comprises one or more terminating region layers, wherein each terminating region layer of said one or more terminating region layers comprises a terminating region layer inner boundary, a terminating region layer outer boundary, and a substantially homogeneous terminating region refractive index. The waveguide of claim 1, wherein each stitching region of said one or more stitching regions comprises one or more stitching region layers, wherein each said stitching region layer of said one or more stitching region layers comprises a stitching region layer inner boundary and a stitching region layer outer boundary and wherein said each terminating region comprises one or more terminating region layers, wherein each terminating region layer of said terminating region layers comprises a terminating region layer inner boundary and a terminating region layer outer boundary, wherein within said each stitching region layer and within said each terminating region layer the refractive index may vary with position. 
The waveguide of claim 2, wherein one or more of said one or more stitching regions comprises a single stitching layer having a said substantially homogeneous stitching layer refractive index that is greater than said preferred mode effective index, and the position of said stitching layer inner boundary, the position of said stitching layer outer boundary, and said substantially homogeneous stitching layer refractive index together are configured to induce the field of said preferred mode to change polarity one or more times within said single stitching layer. The waveguide of claim 2, wherein one or more of said one or more stitching regions comprises at least two stitching region layers, wherein each stitching region layer of said at least two stitching region layers comprises a said substantially homogeneous stitching layer refractive index that is greater than said preferred mode effective index and wherein the position of said stitching layer inner boundary, the position of said stitching layer outer boundary, and said substantially homogenous stitching layer refractive index of each of said at least two stitching region layers together are configured to induce the field of said preferred mode to change polarity one or more times within said at least two stitching region layers, and together are further configured to induce the magnitude of the field at said stitching region inner boundary, and the magnitude of the field at said stitching region outer boundary, to be substantially equal. 
The waveguide of claim 2, wherein one or more of said one or more stitching regions comprises at least two stitching region layers, wherein each stitching region layer of said at least two stitching region layers comprises said substantially homogeneous stitching layer refractive index that is greater than said preferred mode effective index and wherein the position of said stitching layer inner boundary, the position of said stitching layer outer boundary, and said substantially homogenous stitching layer refractive index of said each stitching layer together are configured to induce the magnitude of the field at said stitching region inner boundary, and the magnitude of the field at said stitching region outer boundary, to differ by a ratio greater than 4 or to differ by a ratio less than 0.7. The waveguide of claim 6, further comprising a gain medium in one or more field-flattening regions having a larger field-flattening region than other field-flattening regions within said waveguide.
The waveguide of claim 2, wherein one or more of said one or more stitching regions is comprised of three or more layers, two or more of said three or more layers having a said substantially homogeneous refractive index that is greater than said preferred mode effective index and one or more of said three or more layers having a said substantially homogeneous refractive index less than said preferred mode effective index, and wherein the position of said inner boundary, the position of said outer boundary, and said substantially homogenous index of each of said three or more layers together comprise means for inducing the field of said preferred mode to change polarity within at least one of said one or more layers of said three or more layers having a said substantially homogeneous refractive index less than said preferred mode effective index. The waveguide of claim 1, wherein the area-weighted average refractive index of one or more of said one or more stitching regions is less than the average of said preferred mode effective index and the maximum refractive index of all layers and regions comprising said waveguide, and is greater than the average of said preferred mode effective index and the minimum refractive index of all layers and regions comprising said waveguide. The waveguide of claim 2, wherein one or more of said one or more terminating regions is comprised of one or more layers and wherein the position of said inner boundary, the position of said outer boundary, and said substantially homogenous index of each of said one or more layers together are configured to induce the field of said preferred mode to not change polarity within said one or more terminating regions. 
The waveguide of claim 2, wherein one or more of said one or more terminating regions is comprised of one or more layers and wherein the position of said inner boundary, the position of said outer boundary and said substantially homogenous index of each of said one or more layers together are configured to induce the field of said preferred mode to change polarity one or more times within said one or more terminating region. The waveguide of claim 2, wherein one or more of said one or more terminating regions comprised of two or more layers and wherein the position of said inner boundary, the position of said outer boundary and said substantially homogenous index of each of said two or more layers together are configured to induce the field of said preferred mode to be substantially zero at the interface between one or more pairs of adjacent layers in said one or more terminating regions comprised of two or more layers. The waveguide of claim 2, wherein one or more of said one or more terminating regions is comprised of three or more layers, two or more of said three or more layers having a said substantially homogeneous refractive index that is greater than said preferred mode effective index and one or more of said three or more layers having a said substantially homogeneous refractive index that is less than said preferred mode effective index and wherein the position of said inner boundary, the position of said outer boundary, and said substantially homogenous index of each of said one or more layers together are configured to induce the field of said preferred mode to change polarity within at least one of said one or more layers of said three or more layers having a said substantially homogeneous refractive index less than said preferred mode effective index. 
The waveguide of claim 1, wherein the area-weighted average refractive index of one or more of said one or more terminating regions is less than the average of said preferred mode effective index and the maximum refractive index of all layers and regions comprising said waveguide, and is greater than the average of said preferred mode effective index and the minimum refractive index of all layers and regions comprising said waveguide. The waveguide of claim 2 or 3, wherein the cross-section of said waveguide is substantially circular, and the cross-sections of said plurality of field-flattening regions are substantially circular or circular annular, and the cross-section of each of said layers of each of said stitching regions is substantially circular or circular annular, and wherein said one or more terminating region comprises a single terminating region, wherein the cross-section of said one or more terminating region is substantially circular annular, and the cross-section of each layer comprising said one or more terminating region is substantially circular annular, wherein the centers of each circular or circular annular field-flattening region, of each circular or circular annular stitching region layer, and of each circular annular terminating region layer are substantially coincident and wherein said inner boundary of regions having circular annular cross-section is the inner circle of the circular annular region, said outer boundary of regions having circular annular cross-section is the outer circle of the circular annular region, said inner boundary of regions having a circular cross-section is a circle having radius of zero length, and said outer boundary of regions having a circular cross-section is the outer circle of the circular region. 
The waveguide of claim 2 or 3, wherein the cross-section of said waveguide is substantially rectangular, and the cross-sections of said plurality of field-flattening regions, of each said one or more stitching region layers, and of each said one or more terminating region layers are substantially rectangular, and a side of each said field-flattening region, a side of each said stitching region layer, and a side of each said terminating region layer are substantially parallel to each other, wherein said inner boundary of each said substantially rectangular layer is one of the longer sides of said substantially rectangular layer, and said outer boundary of each said substantially rectangular layer is the side opposite the side chosen as the inner boundary of said substantially rectangular layer and wherein said one of said one or more stitching regions or one of said one or more terminating regions substantially bound each rectangular field-flattening region on at least two sides of said rectangular field-flattening region. 
The waveguide of claim 2 or 3, wherein the cross-section of said waveguide is substantially elliptical, and the cross-sections of said plurality of field-flattening regions are substantially elliptical or elliptical annular, and the cross-section of each of said layers of each of said stitching regions is substantially elliptical or elliptical annular, wherein said one or more terminating region comprises a single terminating region, wherein the cross-section of said one terminating region is substantially elliptical annular, and the cross-section of each layer comprising said one terminating region is substantially elliptical annular, wherein the centers of each elliptical or elliptical annular field-flattening region, of each elliptical or elliptical annular stitching region layer, and of each elliptical annular terminating region layer are substantially coincident, wherein the axes of said elliptical or elliptical annulus regions or said elliptical or elliptical annulus layers are substantially parallel and wherein said inner boundary of regions having elliptical annular cross-section is the inner ellipse of the elliptical annular region, said outer boundary of regions having elliptical annular cross-section is the outer ellipse of the elliptical annular region, said inner boundary of regions having elliptical cross-section is an ellipse having a cross-sectional area of zero, and said outer boundary of regions having elliptical cross-section is the outer ellipse of the elliptical region. 
The waveguide of claim 2 or 3, wherein the cross-section of said waveguide is substantially hexagonal, and the cross-sections of said plurality of field-flattening regions are substantially hexagonal or hexagonal annular, and the cross-section of each of said layers of each of said stitching regions is substantially hexagonal or hexagonal annular, wherein said one or more terminating region comprises a single terminating region, wherein the cross-section of said one terminating region is substantially hexagonal annular, and the cross-section of each layer comprising said one terminating region is substantially hexagonal annular, wherein the centers of each hexagonal or hexagonal annular field-flattening region, of each hexagonal or hexagonal annular stitching region layer, and of each hexagonal annular terminating region layer are substantially coincident, wherein the axes of said hexagonal or hexagonal annulus regions or said hexagonal or hexagonal annulus layers are substantially parallel and wherein said inner boundary of regions having hexagonal annular cross-section is the inner hexagon of the hexagonal annular region, said outer boundary of regions having hexagonal annular cross-section is the outer hexagon of the hexagonal annular region, said inner boundary of regions having hexagonal cross-section is a hexagon having a cross-sectional area of zero, and said outer boundary of regions having hexagonal cross-section is the outer hexagon of the hexagonal region. 
A method for fabricating the waveguide of claim 1, comprising: depositing glass on the inside of a tube or the outside of a mandrel to produce said plurality of field-flattening regions, said one or more stitching regions, said one or more terminating regions and said cladding region, wherein the step of depositing glass utilizes chemical vapor deposition; varying the composition of said glass at intervals during said chemical vapor deposition to form said field-flattening region refractive index structure, said stitching region refractive index structure, said terminating region refractive index structure and said cladding refractive index; consolidating said glass into a preform; and drawing said preform to a reduced cross-section.

A method for fabricating the waveguide of claim 1, comprising: sheathing annular glass pieces to produce said plurality of field-flattening regions, said one or more stitching regions, said one or more terminating regions and said cladding region; varying the sizes, shapes, and refractive indices of said annular glass pieces to form said field-flattening region refractive index structure, said stitching region refractive index structure, said terminating region refractive index structure and said cladding refractive index; consolidating said annular glass pieces into a preform; and drawing said preform to a reduced cross-section.
A method for fabricating the waveguide of claim 1, comprising: arranging rectangular glass pieces side-by-side to produce said plurality of field-flattening regions, said one or more stitching regions, said one or more terminating regions and said cladding region; arranging sizes, refractive indices, and placement of said rectangular glass pieces to form said field-flattening region refractive index structure, said stitching region refractive index structure, said terminating region refractive index structure and said cladding refractive index; consolidating the set of said rectangular glass pieces into a preform; and drawing said preform to a reduced cross-section.

A method for fabricating the waveguide of claim 1, comprising: arranging glass rods and glass capillaries into an array to produce said plurality of field-flattening regions, said one or more stitching regions, said one or more terminating regions and said cladding region; arranging the sizes, shapes, refractive indices and placement of said glass rods and said glass capillaries within said array to produce said field-flattening region refractive index structure, said stitching region refractive index structure, said terminating region refractive index structure and said cladding refractive index; consolidating the set of said glass rods and said glass capillaries into a preform; and drawing said preform to a reduced cross-section.

The method of claims 21-24, wherein the step of consolidating is carried out with a furnace or a torch and wherein the step of drawing is carried out with a furnace and a pulling apparatus.
A waveguide, comprising: a plurality of field-flattening regions, wherein each field-flattening region of said plurality of field-flattening regions comprises a field-flattening region refractive index that does not vary substantially within said each field-flattening region; one or more stitching regions, wherein a stitching region of said one or more stitching regions is located between neighboring said each field-flattening region, wherein each said stitching region comprises a stitching region refractive index structure configured to induce the field of a mode to vary substantially with position within said stitching region; one or more terminating regions, wherein each terminating region of said one or more terminating regions comprises a terminating region refractive index structure, wherein said each terminating region is in contact with one said field-flattening region; and a cladding region comprising a substantially homogeneous cladding refractive index, wherein said cladding region is in contact with a terminating region of said one or more terminating regions.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

The present invention relates to waveguides that propagate light at multiple discrete speeds--equivalently, in multiple discrete transverse modes--and that transport telecommunications signals, generate or amplify light, transport electromagnetic power, or are used for decorative or display purposes.

2. Description of Related Art

Optical fiber waveguides that transport telecommunications signals are typically designed and manufactured to allow light to propagate at just one speed, to ensure that a signal arrives at its destination in a single, brief instant.
Waveguides that generate or amplify light, such as those doped with rare-earth ions, are also typically designed and manufactured to allow light to propagate at just one speed, in this case to ensure that the pattern of radiation emitted by the waveguides may be focused to the tightest possible spot. Such a radiation source is said to be "diffraction limited."

Waveguides that transport telecommunications signals or that generate or amplify light may also be designed and manufactured to allow light to propagate at multiple discrete speeds (in multiple discrete transverse radiation patterns, or "modes"). Such waveguides are sometimes more economical to manufacture or to interconnect, and the benefits of the single-speed fibers may be retained by preferentially attenuating light that has propagated at undesired speeds or by selectively exciting light that propagates at one preselected speed.

An advantage of the selective-excitation approach is that light that propagates in a high-order mode--a mode that forms many well-defined rings or spots in a plane transverse to the propagation direction of the light--travels at an effective index that differs more significantly, when compared to the differences that naturally arise in conventional waveguides, from the effective indices of its neighboring modes. This inherent advantage simplifies the task of selectively exciting and de-exciting a desired mode, but unfortunately a large fraction of the power guided by the high-order circularly-symmetric modes of conventional waveguides tends to be located near the central axis of the waveguide, and this hot-spot may reduce the threshold for undesired nonlinear propagation artifacts and waveguide damage.

Waveguides that allow light to propagate at only one speed most often distribute their guided power in a shape that is Gaussian, or nearly Gaussian, in the plane transverse to the propagation direction of light.
Waveguides may also be designed so that their guided power is flat, or nearly flat, in the transverse plane. Since the peak power density of a flattened-mode waveguide is lower than that of a Gaussian-mode waveguide, the flattened-mode waveguide has a higher (and thus more desirable) threshold for nonlinear propagation artifacts and waveguide damage.

SUMMARY OF THE INVENTION

[0009] The present invention relates to dielectric, semiconductor, or metallic waveguides that propagate light at multiple discrete speeds. The structure of the waveguide is tailored so that the transverse profile of light propagating at one of those speeds is flattened, or largely flattened. The transverse profile of a desired propagation mode is flattened by adding layers or groups of layers at selected intervals, in order to stitch together flat or substantially flat portions of the mode to make a larger flattened mode. The layers or groups of layers induce the field or its slope to change significantly, and may additionally change the sign of the field one or multiple times. An additional layer group or groups bind the flattened mode to a surrounding cladding.

By applying this invention, the field of the stitched high-order mode can be made more robust to nonlinear propagation artifacts, and can be made to propagate at a speed that differs significantly from the speeds of its neighboring modes (when compared to the differences that naturally arise in conventional waveguides). These attributes make the higher-order mode easier to cleanly excite than a mode of the same size in a conventional waveguide. Other benefits are that the stitched high-order-mode waveguide can be designed to pack the power it guides very efficiently, and can be designed to avoid problematic hot spots in the guided power. The spatial extent of the flattened sub-portions of the mode may also be independently varied to reduce nonlinear propagation artifacts or to create unique or aesthetically pleasing patterns.
The present invention has applications in many areas. Examples include uses in (i) optical fiber waveguides for high energy or high power lasers or amplifiers, (ii) laser defense applications, (iii) short pulse laser sources and amplifiers, (iv) seed sources and amplification systems for the National Ignition Facility (NIF) laser system at Lawrence Livermore National Laboratory, (v) transport fiber and fiber laser sources for telecommunication applications, (vi) fibers propagating modes having unique or attractive shapes for decorative or display purposes, (vii) optical power distribution and power distribution networks and (viii) various materials processing and machining applications including metal, dielectric or plastic cutting, brazing and soldering, and deep penetration metal welding.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] FIG. 1 illustrates the refractive index profile of a notional waveguide, showing flattening layers (iii, v, vii), stitching groups (iv, vi), and termination groups (ii, viii), surrounded by a cladding (i, ix).

FIGS. 2A-C illustrate, for a slab-like geometry, three examples of half-wave stitching groups.

FIGS. 3A-C illustrate, for a slab-like geometry, three examples of full-wave stitching groups.

FIGS. 4A-C illustrate, for a slab-like geometry, three examples of termination layers.

FIGS. 5A-E illustrate, for a slab-like geometry, several examples of waveguides that propagate flattened higher-order modes, including the designs, the field of the flattened high-order mode, and the size-spacing products of each guide's modes.

FIG. 6A shows, for a cylindrically-symmetric geometry, half-wave stitching accomplished with a single layer.

FIG. 6B shows, for a cylindrically-symmetric geometry, the addition of a second layer to make the magnitude of the field to the right of the group the same as the magnitude to its left.

FIG. 6C illustrates, for a cylindrically-symmetric geometry, an evanescent half-wave stitching group, a term that here refers to groups having at least one layer in which the field is the sum of exponentially growing and decaying functions.

FIGS. 7A-C illustrate, for a cylindrically-symmetric geometry, three full-wave stitching groups, that is, three groups that cause the field's polarity to change sign an even number of times.

FIGS. 8A-C illustrate, for a cylindrically-symmetric geometry, three fractional-wave stitching groups, that is, three groups that return the field's slope to zero without allowing the field's polarity to change sign.

FIGS. 9A-C illustrate, for a cylindrically-symmetric geometry, three termination groups applied to three flattened waveguides.

FIGS. 10A-C show line-outs of the scaled index and field for three cylindrically-symmetric designs.

FIGS. 11A-C show field (not irradiance) distributions for the LP03 and LP13 modes of the three example designs--two flattened-mode fibers and a step-index fiber.

FIGS. 12A-C compare the size-spacing products (essentially the radiance), Θ, defined by Eq. (59) in Appendix IV, for the modes of the three designs.

FIGS. 13A-C show the size-spacing products for the effective indices of the modes of the three designs, as a function of the azimuthal order l.

FIG. 14 illustrates the cross-section of a waveguide that supports a mode that is flattened in one direction.

FIGS. 15A and 15B illustrate the refractive index profiles along lines x-x' and y-y', respectively, of FIG. 14.

FIGS. 16A and 16B illustrate the field distribution of the waveguide's flattened mode.

FIG. 17 illustrates the cross-section of a waveguide that supports a mode that is flattened in two directions.

FIGS. 18A and 18B illustrate the refractive index profiles along lines x-x' and y-y', respectively, of FIG. 17.

FIGS. 19A and 19B illustrate the field distribution of the waveguide's flattened mode.
DETAILED DESCRIPTION OF THE INVENTION

[0034] The present invention reduces the intensity of light propagating in the core of a preselected high-order propagation mode of a waveguide by distributing it more evenly across the guide's cross-section via careful design of the refractive index profile. The resulting high-order mode is more robust to perturbation than is the fundamental mode of an equivalent conventional or flattened waveguide, and does not suffer the potentially problematic hot spots of conventional high-order-mode fibers. The waveguides described here are presumed to be made of glass or of a material that allows light to propagate a suitable distance with a suitably low loss to meet the needs of the intended application.

FIG. 1 illustrates the refractive index profile of a notional waveguide, showing flattening layers, stitching groups, and termination groups. In general, the waveguide structure is chosen so that, over selected portions of its cross-section, the local refractive index is equal to or nearly equal to the effective refractive index of the propagating mode; this condition allows the electric or magnetic field of the propagating mode in those regions to be constant or nearly constant with position. The structure is broken at selected intervals by "stitching layers"--layers or series of layers that together act to change the sign of the field or cause the field or its slope to change to a selected level. The layered structure at the boundary of the waveguide is additionally chosen to match the well-known boundary conditions of the fields in the cladding, "terminating" the mode, as described below. In general, the thickness (spatial extent) of the stitching layer or layers can be reduced by increasing the refractive index contrast (the index differentials) of the layer or layers that comprise the stitches. The index contrast can be varied by altering the concentrations of well-known index-adjusting dopants in silica glass.
Larger index differences can be obtained by other well-known techniques, such as using semiconductor materials, phosphide-based glasses, or by incorporating holes into the glass structure.

Slab-Like Waveguides

Consider an essentially one-dimensional, slab-like waveguide, that is, one whose cross-section is nominally rectangular and whose long dimension is much larger than its narrow dimension. The wave equation that governs the field, ψ, of the modes in such a guide is given by:

\[ \left\{ \frac{\partial^2}{\partial x^2} + \left(\frac{2\pi}{\lambda}\right)^2 \left[ n^2(x) - n_{eff}^2 \right] \right\} \psi(x) = 0 \]

where ψ represents the field of a guided mode, n(x) is the index at position x, n_eff is the effective index of the mode, and λ is the vacuum wavelength of the guided light. In the discussion that follows, we assume the index profile consists of discrete, step-like layers. Define the dimensionless and scaled variables:

\[ v_x = \frac{2\pi}{\lambda} \, x \, NA_{flat} \]

\[ \eta(v_x) = \left[ n^2(v_x) - n_{clad}^2 \right] / NA_{flat}^2 \]

and:

\[ \eta_{eff} = \left( n_{eff}^2 - n_{clad}^2 \right) / NA_{flat}^2 \]

where:

\[ NA_{flat} = \sqrt{ n_{flat}^2 - n_{clad}^2 } \]

and where n_clad is the refractive index of a cladding that surrounds the waveguide and n_flat is the refractive index of the layer or layers in which the field will ultimately be flattened. In these terms, the scaled wave equation becomes:

\[ \left\{ \frac{\partial^2}{\partial v_x^2} + \eta(v_x) - \eta_{eff} \right\} \psi(v_x) = 0 \]

Field-Flattened Layers

Consider a layer whose refractive index is equal to the effective index of a guided mode, that is, a layer having η = η_eff. For such a layer, the previous equation has the solution ψ = A + B v_x, where A and B are constants determined by the boundary conditions on that layer. For weakly-guided modes, those conditions are that the field and its derivative with respect to x are continuous across boundaries; note that by the definition of v_x, the field is thus also continuous with respect to v_x.
That derivative is:

\[ \frac{\partial \psi}{\partial v_x} = B \]

The previous two equations apply at any position within the layer, as well as at the layer's boundaries. The equations can be inverted to express A and B in terms of the field and its derivative at v_x1:

\[ \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} 1 & -v_{x1} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \psi_1 \\ (\partial\psi/\partial v_x)_1 \end{bmatrix} \]

Since A and B do not change within a layer, we may write a similar expression at v_x2:

\[ \begin{bmatrix} A \\ B \end{bmatrix} = \begin{bmatrix} 1 & -v_{x2} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \psi_2 \\ (\partial\psi/\partial v_x)_2 \end{bmatrix} \]

Equating these expressions for A and B yields a relationship between the field and its derivative at one position and those at another:

\[ \begin{bmatrix} \psi_2 \\ (\partial\psi/\partial v_x)_2 \end{bmatrix} = \begin{bmatrix} 1 & \Delta v_x \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \psi_1 \\ (\partial\psi/\partial v_x)_1 \end{bmatrix} \quad (\text{for } \eta = \eta_{eff}) \]

where Δv_x = v_x2 - v_x1. Note that if the field's slope is zero on either side of an η = η_eff layer (equivalently, an n = n_eff layer), it stays zero within the layer. Thus, a field-flattened layer is any layer whose index is equal to the effective index of the guide's preferred mode and that is surrounded by appropriate layer groups--the stitching or termination groups described below.

Stitching Groups

[0042] A stitching group is a layer or group of layers in which the field's slope is zero at its leftmost and rightmost interfaces, and wherein the field varies substantially between those interfaces. In most examples herein, the field changes polarity (sign) one or more times within the stitching group. Consider layers in which the local index is greater than the preferred mode's effective index, that is, layers where η > η_eff. For those layers, the solution to the one-dimensional wave equation is a linear combination of sine and cosine functions.
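As a quick numerical check of the flat-layer rule above, the sketch below (illustrative only; the function names are not from the patent) applies the unit-shear transfer matrix to a flat input field:

```python
# Sketch: in a layer with eta == eta_eff the field is psi = A + B*v_x, so
# crossing a layer of scaled width dv maps (field, slope) by [[1, dv], [0, 1]].

def flat_layer_matrix(dv):
    """Transfer matrix of a field-flattening layer of scaled width dv."""
    return [[1.0, dv],
            [0.0, 1.0]]

def propagate(matrix, field, slope):
    """Apply a 2x2 transfer matrix to the (field, slope) column vector."""
    return (matrix[0][0]*field + matrix[0][1]*slope,
            matrix[1][0]*field + matrix[1][1]*slope)

# A flat field (slope zero) stays flat and unchanged across the layer:
field_out, slope_out = propagate(flat_layer_matrix(0.5), 1.0, 0.0)
```

As the text states, a zero slope entering an η = η_eff layer remains zero throughout it.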
Following an analysis similar to the one outlined for the η = η_eff case, the field and its derivative may be expressed by the following matrix equation:

\[ \begin{bmatrix} \psi_2 \\ \psi'_2 \end{bmatrix} = \begin{bmatrix} \cos\!\left(\Delta v_x \sqrt{\eta-\eta_{eff}}\right) & \dfrac{1}{\sqrt{\eta-\eta_{eff}}} \sin\!\left(\Delta v_x \sqrt{\eta-\eta_{eff}}\right) \\ -\sqrt{\eta-\eta_{eff}} \, \sin\!\left(\Delta v_x \sqrt{\eta-\eta_{eff}}\right) & \cos\!\left(\Delta v_x \sqrt{\eta-\eta_{eff}}\right) \end{bmatrix} \begin{bmatrix} \psi_1 \\ \psi'_1 \end{bmatrix} \quad (\text{for } \eta > \eta_{eff}) \]

where ψ' denotes ∂ψ/∂v_x. Note that if a layer's index and thickness obey:

\[ \Delta v_x \sqrt{\eta - \eta_{eff}} = (2m+1)\pi \]

where m represents zero or a positive integer, then after an interval Δv_x the field and its derivative both change signs but retain their magnitudes. Further, if the field's slope is zero on one side of a layer, that is, if the field is flat there, then it is also flat on the other side. Thus, the above is the condition for a single-layer stitching group wherein the field changes sign from one side of the group to the other. FIG. 2A illustrates such a layer.

Note also that if a layer's index and thickness obey:

\[ \Delta v_x \sqrt{\eta - \eta_{eff}} = 2m\pi \]

where m represents a positive integer, then after an interval Δv_x the field and its derivative retain their signs and magnitudes. Further, if the field's slope is zero on one side of a layer, that is, if the field is flat there, then it is also flat on the other side. Thus, the above is the condition for a single-layer stitching group wherein the field returns to the same sign from one side of the group to the other. FIG. 3A illustrates such a layer.

A similar analysis can be applied to layers whose index is less than a mode's effective index, to find:

\[ \begin{bmatrix} \psi_2 \\ \psi'_2 \end{bmatrix} = \begin{bmatrix} \cosh\!\left(\Delta v_x \sqrt{\eta_{eff}-\eta}\right) & \dfrac{1}{\sqrt{\eta_{eff}-\eta}} \sinh\!\left(\Delta v_x \sqrt{\eta_{eff}-\eta}\right) \\ \sqrt{\eta_{eff}-\eta} \, \sinh\!\left(\Delta v_x \sqrt{\eta_{eff}-\eta}\right) & \cosh\!\left(\Delta v_x \sqrt{\eta_{eff}-\eta}\right) \end{bmatrix} \begin{bmatrix} \psi_1 \\ \psi'_1 \end{bmatrix} \quad (\text{for } \eta < \eta_{eff}) \]

where sinh and cosh designate the hyperbolic sine and cosine functions.
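The sinusoidal and hyperbolic transfer matrices above can be checked numerically; the following is an illustrative sketch (function names are assumptions, not from the patent):

```python
import math

def layer_matrix(eta, eta_eff, dv):
    """Transfer matrix mapping (psi, dpsi/dv_x) across one uniform layer of
    scaled index eta and scaled width dv: sinusoidal for eta > eta_eff,
    hyperbolic for eta < eta_eff."""
    if eta > eta_eff:
        k = math.sqrt(eta - eta_eff)
        return [[math.cos(k*dv),     math.sin(k*dv)/k],
                [-k*math.sin(k*dv),  math.cos(k*dv)]]
    g = math.sqrt(eta_eff - eta)
    return [[math.cosh(g*dv),    math.sinh(g*dv)/g],
            [g*math.sinh(g*dv),  math.cosh(g*dv)]]

# Half-wave check: a layer obeying dv*sqrt(eta - eta_eff) = pi should flip
# the sign of a flat unit field while keeping its slope (numerically) zero.
eta, eta_eff = 10.0, 1.0
dv = math.pi / math.sqrt(eta - eta_eff)
m = layer_matrix(eta, eta_eff, dv)
field_out = m[0][0]*1.0 + m[0][1]*0.0   # input field 1, input slope 0
slope_out = m[1][0]*1.0 + m[1][1]*0.0
```

The same matrices are the building blocks for the multi-layer stitching and termination examples that follow.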
As an example, consider a three-layer stitching group, one in which the leftmost and rightmost layers have refractive indices greater than a mode's effective index, and the central layer has a refractive index less than the mode's effective index. Further, let the mode of interest, or preferred mode, be flattened in the layers that abut either side of the group; thus the field's slope is zero on both sides of the three-layer group, and since the mode is field-flattened, η_eff = 1 by definition. There are six unknowns: the index and thickness of each of the three layers. For now, assume the indices are known, leaving just the three thicknesses as unknowns. Let the leftmost and rightmost layers have equal thicknesses and indices; these are not necessary conditions, but in some situations may prove desirable--for example, they may simplify fabrication, create advantageous properties for the preferred mode, or ameliorate problems associated with one or more undesired modes. Finally, let the width-averaged scaled index of the three layers be equal to the scaled index of the preferred mode, that is:

\[ \langle \eta \rangle = \frac{\sum_i \eta_i \, \Delta v_{xi}}{\sum_i \Delta v_{xi}} = \eta_{eff} = 1 \]

where Δv_xi refers to the scaled width of the i-th layer and the summation is over all layers in the group--for this example, three layers. Note that this constraint on <η> is not necessary, but in some situations may prove desirable. The constraints imposed for this example leave only one free variable; without loss of generality, let this be the thickness of the leftmost layer. Assume that this group of layers is placed between field-flattening layers, making the field's slope zero on both sides; further assume that the group is intended to return the field to its original magnitude but change the field's sign, or polarity.
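The constraints just described--equal outer layers, <η> = 1, and a sign flip with restored magnitude--can be solved numerically for the layer thicknesses. The sketch below assumes the values used in the text's example (η₁ = η₃ = +10, η₂ = -10, η_eff = 1) and bisects on α₁ = Δv₁√(η₁-1), the phase thickness of the outer layers:

```python
import math

def sin_matrix(k, dv):
    """Sinusoidal-layer transfer matrix (local index above eta_eff)."""
    return [[math.cos(k*dv),     math.sin(k*dv)/k],
            [-k*math.sin(k*dv),  math.cos(k*dv)]]

def cosh_matrix(g, dv):
    """Hyperbolic-layer transfer matrix (local index below eta_eff)."""
    return [[math.cosh(g*dv),    math.sinh(g*dv)/g],
            [g*math.sinh(g*dv),  math.cosh(g*dv)]]

def matmul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

eta1, eta2, eta_eff = 10.0, -10.0, 1.0
k = math.sqrt(eta1 - eta_eff)           # outer layers (= 3)
g = math.sqrt(eta_eff - eta2)           # center layer (= sqrt(11))

def out_slope(alpha1):
    """Slope of the field exiting the group, for a flat unit field entering."""
    alpha2 = 2.0*alpha1*math.sqrt((eta1 - 1.0)/(1.0 - eta2))  # <eta> = 1
    m = matmul(sin_matrix(k, alpha1/k),
               matmul(cosh_matrix(g, alpha2/g), sin_matrix(k, alpha1/k)))
    return m[1][0]                      # input vector is (1, 0)

# Bisect for the thickness that makes the output slope zero:
lo, hi = 0.5, 1.5
for _ in range(60):
    mid = 0.5*(lo + hi)
    if out_slope(lo)*out_slope(mid) <= 0:
        hi = mid
    else:
        lo = mid
alpha1 = 0.5*(lo + hi)
alpha2 = 2.0*alpha1*math.sqrt((eta1 - 1.0)/(1.0 - eta2))
dv1, dv2 = alpha1/k, alpha2/g           # scaled layer thicknesses

m_total = matmul(sin_matrix(k, dv1),
                 matmul(cosh_matrix(g, dv2), sin_matrix(k, dv1)))
field_out = m_total[0][0]               # flat unit field in -> sign flipped out
```

This reproduces the text's values α₁ ≈ 0.996, α₂ ≈ 1.801, Δv₁ ≈ 0.106π, and Δv₂ ≈ 0.173π.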
Mathematically:

\[ \begin{bmatrix} \psi_{out} \\ 0 \end{bmatrix} = \begin{bmatrix} \cos\alpha_3 & \dfrac{\Delta v_3}{\alpha_3}\sin\alpha_3 \\ -\dfrac{\alpha_3}{\Delta v_3}\sin\alpha_3 & \cos\alpha_3 \end{bmatrix} \begin{bmatrix} \cosh\alpha_2 & \dfrac{\Delta v_2}{\alpha_2}\sinh\alpha_2 \\ \dfrac{\alpha_2}{\Delta v_2}\sinh\alpha_2 & \cosh\alpha_2 \end{bmatrix} \begin{bmatrix} \cos\alpha_1 & \dfrac{\Delta v_1}{\alpha_1}\sin\alpha_1 \\ -\dfrac{\alpha_1}{\Delta v_1}\sin\alpha_1 & \cos\alpha_1 \end{bmatrix} \begin{bmatrix} \psi_{in} \\ 0 \end{bmatrix} \]

where α_i = Δv_i √|η_i - η_eff| and ψ_in and ψ_out are the fields on either side of the group. The aforementioned constraints imply that ψ_out = -ψ_in and α_3 = α_1. In addition, the constraint on the width-averaged index implies:

\[ \alpha_2 = 2\alpha_1 \sqrt{ \frac{\eta_1 - 1}{1 - \eta_2} } \]

Now assign indices. The net width of the stitching group tends to be smaller when the index contrast is made larger, so for this example set the scaled index of the leftmost and rightmost layers to η_1 = η_3 = +10 and the index of the center layer to η_2 = -10; these are reasonable values for doped silica assuming NA_flat is roughly 0.05. Solving the above matrix equation results in α_1 = 0.996 and α_2 = 1.801; taking into account the assigned values of the scaled indices, we find Δv_1 = Δv_3 = 0.106π and Δv_2 = 0.173π.

FIGS. 2A-C illustrate several half-wave stitching layers for one-dimensional waveguides. FIGS. 3A-C illustrate several full-wave stitching groups for one-dimensional waveguides, determined in a manner similar to those listed above.

Termination Groups

[0052] A termination group is a layer, or group of layers, that transitions the field and the field's slope at the boundary of the flattening layer or stitching group nearest the cladding to the field and slope required within the cladding. For a bound mode in a one-dimensional waveguide, the field in the cladding must follow the form:

\[ \psi = A \exp\!\left(-v_x \sqrt{\eta_{eff} - \eta_{clad}}\right) = A \exp\!\left(-v_x \sqrt{\eta_{eff}}\right) \]

where A is a constant and the final form of the above equation follows from the definition of η, which places the cladding at η_clad = 0.
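The cladding decay just described fixes the slope-to-field ratio that a termination layer must produce at the cladding interface, namely -√η_eff. A sketch (with the assumed single-layer values η = 10 and η_eff = 1) finds the matching layer thickness by bisection:

```python
import math

# Sketch: a flat field (eta_eff = 1) enters a termination layer of scaled
# index eta = 10; bisect on the layer's scaled thickness dv until the
# slope/field ratio at the cladding interface equals -sqrt(eta_eff), the
# decay ratio required by the cladding (at eta = 0).
eta, eta_eff = 10.0, 1.0
k = math.sqrt(eta - eta_eff)            # = 3

def ratio(dv):
    """(dpsi/dv_x)/psi at the cladding side, for a flat unit field input."""
    field = math.cos(k*dv)              # top row of the sinusoidal matrix
    slope = -k*math.sin(k*dv)           # bottom row, acting on (1, 0)
    return slope/field

target = -math.sqrt(eta_eff)            # = -1
lo, hi = 0.0, 0.5                       # ratio(lo) = 0 > target > ratio(hi)
for _ in range(60):
    mid = 0.5*(lo + hi)
    if ratio(mid) > target:             # layer still too thin
        lo = mid
    else:
        hi = mid
dv = 0.5*(lo + hi)
```

This reproduces a tangent argument of about 0.322 and Δv_x ≈ 0.034π, matching the single-layer example in the text.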
At the cladding interface, and throughout the cladding, the ratio of the field's slope to the field must thus be:

\[ \frac{\partial\psi/\partial v_x}{\psi} = -\sqrt{\eta_{eff}} \]

For a given design and a given mode, this ratio can also be calculated at the cladding interface through the matrices described above, or through other wave propagation methods. In general, the value of the ratio at the final interface of the final flattening layer or stitching group does not match the ratio required in the cladding; the termination group transitions the fields so the ratio becomes matched. This procedure is analogous to impedance matching in electrical circuits.

Consider a one-dimensional waveguide consisting of a single field-flattening layer, a single termination layer having an index greater than the effective index of the flattened mode to be guided, and a cladding. Represent the field at the boundary of the field-flattening layer by the symbol ψ_0, and note that since the field is flat, its slope there is zero (∂ψ/∂v_x = 0). The field and slope at the cladding interface are then:

\[ \begin{bmatrix} \psi_{clad} \\ \psi'_{clad} \end{bmatrix} = \begin{bmatrix} \cos\!\left(\Delta v_x\sqrt{\eta-\eta_{eff}}\right) & \dfrac{1}{\sqrt{\eta-\eta_{eff}}}\sin\!\left(\Delta v_x\sqrt{\eta-\eta_{eff}}\right) \\ -\sqrt{\eta-\eta_{eff}}\,\sin\!\left(\Delta v_x\sqrt{\eta-\eta_{eff}}\right) & \cos\!\left(\Delta v_x\sqrt{\eta-\eta_{eff}}\right) \end{bmatrix} \begin{bmatrix} \psi_0 \\ 0 \end{bmatrix} \]

For the flattened mode, η_eff = 1 by definition, and termination reduces to picking the index and thickness of the single termination layer of this example such that:

\[ \sqrt{\eta - 1}\, \tan\!\left(\Delta v_x \sqrt{\eta - 1}\right) = 1 \]

If we choose η = 10, then the argument of the tangent function is 0.322, making Δv_x = 0.034π. FIG. 4A illustrates this termination layer. FIG. 4B is a two-layer termination group. FIG. 4C illustrates a termination layer in which the field crosses zero.

Example Waveguides

[0055] FIGS. 5A-E give examples of one-dimensional waveguides; these waveguides are designed by interspersing field-flattening layers with stitching groups, then adding a termination group to bind the mode to the cladding. The designs in FIGS. 5A-E are symmetrical about the origin and thus only half of each is shown; note, however, that symmetry is not a necessary condition. The top row shows the scaled refractive index profiles and corresponding field of the flattened modes. The second row lists the designs--scaled indices and scaled thicknesses, in tabular form--of the layers that comprise the guides. The bottom row shows the size-spacing products, θ_m, defined below, of the modes of the waveguides. The widths of the field-flattening layers and the designs and number of the stitching groups vary from example to example. The termination group is the same for all waveguides, though alternate termination groups may be applied instead.

Though the analysis presumes an idealized waveguide that is purely one-dimensional, real waveguides have a two-dimensional cross-section. The idealized analysis is approximately correct, and can be refined with commercial waveguide analysis software. Though the narrow dimensions of the waveguides illustrated in FIG. 5 are not assigned, the following quantity, θ_m, a size-spacing product (one value per mode m), provides a means of comparing waveguides:

\[ \theta_m \sim w_{flat,scaled}\, \sqrt{\eta_{eff,m}} \sim w_{flat}\, \sqrt{n_{eff,m}^2 - n_{clad}^2} \]

where the symbol `~` is read here as `is proportional to,` and the quantity w_flat,scaled is a measure of the effective width--the longer dimension of a substantially rectangular guide--of the guide's flattened mode, and is defined as:

\[ w_{flat,scaled} = \frac{\left(\int \psi^2\, dv_x\right)^2}{\int \psi^4\, dv_x} \]

Waveguides having larger separations in θ_m are often preferred, as this implies, for a given size of the flattened mode, larger spacings between the effective indices of the waveguides' modes; or, for given effective index spacings, a larger flattened mode. The bottom rows of FIGS. 5A-E illustrate the distribution of θ_m values for the allowed modes of those waveguides.
In the bottom row, the darker lines correspond to modes that are symmetric about the origin (x = 0), the gray lines correspond to modes that are anti-symmetric, and the dotted lines designate the waveguide's flattened mode. FIG. 5A corresponds to a conventional flattened-mode fiber, similar to those shown in the literature. Note that the spacing between the flattened mode and its nearest neighboring mode is relatively small. The spacing for the example of FIG. 5B is only slightly larger; note, however, that the waveguide of FIG. 5B may have more pronounced advantages when considering other attributes. Compared to the modal spacings in FIG. 5A, those in FIG. 5C are significantly larger, as are those in FIG. 5D, with the latter also having fewer allowed modes; in some applications, fewer modes is advantageous since extraneous modes, if inadvertently excited, can be problematic. FIG. 5E shows the largest separation between the flattened mode and the cladding (θ = 0), which eliminates cross-coupling to modes having orders higher than (modes having θ less than) the preferred flattened mode.

The examples of FIG. 5 illustrate that by varying the thicknesses of a waveguide's field-flattening layers or the structure of the stitching groups, the effective index of the preferred mode, and the effective indices of other allowed modes, may be independently and preferentially altered. Though not illustrated, varying the structure of the termination group causes similar effects. These same variations also affect many other properties of the guide, such as the modes' group indices and chromatic dispersion, and their overlap with embedded gain media.

Cylindrically-Symmetric Waveguides

Most nonlinear propagation artifacts in glass waveguides can be reduced by spreading the power the waveguides carry over a large area. Many telecommunications and laser applications, however, require the power to be confined to a single transverse spatial mode.
Unfortunately, as a mode's area increases, its effective index approaches those of its neighboring modes, making it susceptible to power cross-coupling and potentially degrading the mode's spatial or temporal fidelity. Optical fibers that propagate power in a high-order mode [1, 2] offer a path to simultaneously increasing the effective area [3] of a mode and the spacing between the desired mode's propagation constant and those of its neighbors. Unfortunately, the high-order modes of a step index fiber can have hotspots--regions in their transverse profiles where the local irradiance significantly exceeds the average value--which may make them more susceptible to damage or nonlinear artifacts than modes whose power is relatively uniformly distributed, such as the fundamental. Optical fibers having a flattened fundamental [4-8] are also attractive, as they spread the propagating power very uniformly, and in an amplifier fiber allow for uniform and efficient extraction of energy from the gain medium. Like all waveguides, though, they are bound by a mode size-spacing tradeoff, and we show below that in this regard they are only moderately better than more economically-manufactured conventional guides. We present here a design methodology that combines the benefits of the two waveguides described above, enabling the construction of a flattened high-order mode. Specifically, we provide design rules for creating structures that support flattened mode segments, that interconnect these segments, and bind (terminate) the resulting mode to the cladding. In the step-like structures of the following designs, the field's continuity is enforced between steps by matching the field and its radial derivative across the interfaces. The modes of the guides are analyzed by the transfer matrices of Appendix II and by a separate two-dimensional mode solver that finds the eigenmodes of the scalar Helmholtz equation. 
The mathematics and physics that describe fields in general cylindrically-symmetric, stratified media have been considered by others [9-11] and are treated in the Appendices. Appendix I presents Bessel solutions to the equation governing axially-symmetric waveguides such as a conventional telecom fiber; its results can be used to determine the refractive indices and thicknesses of the layers that comprise the flattened, stitching, and termination groups defined below. Appendix II presents transfer matrices that can also be used to determine layer indices and thicknesses, and to determine the properties of all bound modes of the fiber. Appendix III presents closed-form solutions to the mode normalization integral. Appendix IV defines several mode size-spacing products and shows that for a given waveguide these products are fixed, a consequence of the radiance theorem. The designs of the stitching and terminating groups may be accomplished by the mathematics in the Appendices, or through trial and error with commercial mode-solving software, or a combination of the two.

Scaled Quantities

[0067] A characteristic numerical aperture of the fiber, NA_flat, is defined as:

\[ NA_{flat} = \sqrt{ n_{flat}^2 - n_{clad}^2 } \quad (1) \]

where n_clad is the refractive index of the cladding and n_flat is the index of the layer or layers over which the field is to be flattened. The scaled radial coordinate, v, is defined as:

\[ v = \frac{2\pi}{\lambda}\, r\, NA_{flat} \quad (2) \]

where λ is the wavelength of the guided light and r is the radial coordinate. The scaled refractive index profile, η(v), is defined as:

\[ \eta(v) = \left[ n^2(v) - n_{clad}^2 \right] / NA_{flat}^2 \quad (3) \]

For the flattened waveguides described here, n_flat is usually chosen to be the minimum refractive index that can be well controlled. For silica fibers, the flattened layer might be lightly doped with an index-raising dopant such as germanium, or doped with a rare-earth along with index-raising and -lowering dopants.
Alternatively, n_flat might be pure silica and the cladding might be lightly doped with an index-depressing agent such as fluorine; in this case, the dopant only needs to extend to the penetration depth of the desired mode. A layer group's area-averaged index, <eta>, is defined as:

<eta> = sum_i eta_i A_i / sum_i A_i   (summed over the layers of the group)   (4)

where eta_i and A_i represent the scaled index and cross-sectional area of the i-th layer of the group. In the layer groups defined below, we sometimes constrain this value; <eta> sometimes tunes the number of allowed modes or the guide's intermodal spacings. Several of the examples that follow list a mode's scaled effective area and illustrate its scaled field. The scaled area is defined such that the physical area, A_eff, is given by Eq. (57):

A_eff = [lambda / (2*pi NA_flat)]^2 A_eff_scaled   (5)

The scaled field is defined such that the physical field, psi, is given from Eq. (50):

psi = (2*pi/lambda) NA_flat sqrt(P_0) psi_scaled   (6)

where P_0 is the power carried by the mode. In the following examples, eta is assumed to range between +/-10, which is achievable for germanium- and fluorine-doped silica provided NA_flat is on the order of 0.06. In silica, other dopants might extend this range moderately; in phosphate glasses or holey structures, various dopants or air holes can extend this range significantly. Moreover, in holey fibers NA_flat might be controlled to a much smaller value, which would proportionally extend the range of eta. A larger range of indices is generally advantageous, as it reduces the portion of the guide devoted to the stitching and terminating groups described below. Flattened Layers [0073] A flattened layer is one in which the field does not vary with radius; that is, one where:

psi' = d(psi)/dr   (7)

is zero. Eq. (29) and Eq. (32) of Appendix I show that for this to occur the layer's index must be equal to the guided mode's effective index (n_eff = n_flat) and the azimuthal order, l, must be equal to zero.
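As a quick numerical illustration of the scaled quantities of Eqs. (1)-(4), the sketch below computes NA_flat, the scaled index, and a group average. The indices and layer areas are illustrative assumptions, not values from the designs in this text.

```python
import math

# Numerical sketch of the scaled quantities in Eqs. (1)-(4). All inputs
# below are assumed values for illustration only.

def na_flat(n_flat, n_clad):
    """Characteristic numerical aperture, Eq. (1)."""
    return math.sqrt(n_flat**2 - n_clad**2)

def scaled_radius(r, wavelength, na):
    """Scaled radial coordinate v, Eq. (2); r and wavelength in the same units."""
    return 2.0 * math.pi / wavelength * r * na

def scaled_index(n, n_flat, n_clad):
    """Scaled refractive index eta, Eq. (3)."""
    return (n**2 - n_clad**2) / (n_flat**2 - n_clad**2)

def group_average(etas, areas):
    """Area-averaged scaled index of a layer group, Eq. (4)."""
    return sum(e * a for e, a in zip(etas, areas)) / sum(areas)

n_clad, n_flat = 1.4440, 1.4452          # assumed silica-like indices
na = na_flat(n_flat, n_clad)             # ~0.059, the order quoted in the text
eta_flat = scaled_index(n_flat, n_flat, n_clad)   # exactly 1 by construction
v = scaled_radius(10.0, 1.0, na)         # a 10 um radius at 1 um wavelength
avg = group_average([10.0, -10.0, 10.0], [1.0, 1.2, 0.9])
print(round(na, 4), eta_flat, round(v, 2), round(avg, 3))
```

Note that eta_flat is exactly 1 regardless of the chosen indices, which is why the flattened layers carry eta = 1 throughout the examples below.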
Furthermore, it is necessary that a flattened layer be joined to appropriate stitching or termination groups, as defined below. Stitching Groups [0074] A stitching group is a layer or group of layers in which the field's slope is zero at both endpoints (to match that of the adjacent flattened regions) and is predominantly nonzero between those points, usually crossing zero one or more times. This can be accomplished in different ways to produce a variety of mode shapes; several examples are presented here. FIGS. 6A-C, FIGS. 7A-C, and FIGS. 8A-C illustrate stitching groups that might form a portion of a guide that supports a flattened mode. In the figures, eta_eff is 1 (from Eq. (3), since the mode's effective index equals n_flat), the minimum and maximum values of eta are assumed to fall between +/-10, and the left edge of each group starts at v_0 = 0.5*pi, an arbitrarily chosen value. The thicknesses of the layers that comprise the groups were determined numerically from Bessel solutions to the wave equation, as outlined in Appendix I. Half Wave Stitching [0076] FIGS. 6A-C illustrate three half-wave stitching groups, that is, three groups that cause the field's polarity to change sign an odd number of times. FIG. 6A shows half-wave stitching accomplished with a single layer. The field changes by a factor of -0.78, as determined by its Bessel solution's behavior. Simulations show that for a single layer, as the left side of the group is placed at higher values of v_0, the ratio of the magnitudes of the fields approaches unity and:

lim_{v_0 -> infinity} delta_v sqrt(eta - 1) = m*pi   (8)

where delta_v is the scaled thickness of the layer, eta is the layer's scaled index, the numeral one arises from the assumption that the layer is surrounded by field-flattened layers having eta = 1, and m is an odd integer. This can be shown to be the condition for single-layer, half-wave stitching in a one-dimensional slab waveguide (in slab guides, the condition is independent of v_0), a reassuring result. In FIG. 6B, a second layer is added to make the magnitude of the field to the right of the group the same as the magnitude to its left. We mention without illustration that if the sequence of the layers in FIG. 6B is reversed--that is, if the higher-index layer is placed to the right of the lower-index layer--the field on the group's right can be made an even smaller fraction of the field on its left, when compared to the single-layer example of FIG. 6A. FIG. 6C illustrates an evanescent half-wave stitching group, a term that here refers to groups having at least one layer in which the field is the sum of exponentially growing and decaying functions. The thicknesses of the layers that comprise the group are adjusted to also make <eta> = 1 for the group (see Eq. (4)) and to make psi = -1 and psi' = 0 on the group's right edge. Full Wave Stitching [0080] FIGS. 7A-C illustrate three full-wave stitching groups, that is, three groups that cause the field's polarity to change sign an even number of times. FIG. 7A shows full-wave stitching accomplished with a single layer. The field changes by a factor of 0.66 due to its Bessel solution's behavior. As v_0 is increased, an equation similar to Eq. (8) holds, but whose right-hand side is an even multiple of pi. FIG. 7B illustrates a two-layer full-wave group that returns the field's magnitude and polarity to their original values. The thickness of the group's first layer is chosen to make the field zero at the right boundary of the first layer. The thickness and index of the second layer are determined numerically to make psi = 1 and psi' = 0 on the group's right edge. FIG. 7C illustrates a five-layer evanescent full-wave stitching group. The thicknesses of the first two layers and a portion of the thickness of the third layer are chosen so that the slope is returned to zero and the field is changed by a factor of -0.707 (psi^2 drops by a factor of two) within the third layer; we also require that, for the group, <eta> = 1 (see Eq. (4)).
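In the slab limit of Eq. (8), the single-layer stitching thickness has a closed form, with m odd for half-wave groups and even for full-wave groups. A minimal sketch, assuming the eta = 10 extreme quoted earlier:

```python
import math

# Slab-limit stitching thickness from Eq. (8): delta_v * sqrt(eta - 1) = m*pi,
# with m odd for half-wave stitching and even for full-wave stitching.
# eta = 10 is the upper extreme assumed in the text's examples.

def slab_stitch_thickness(eta, m):
    """Scaled thickness delta_v of a single stitching layer, slab limit."""
    if eta <= 1.0:
        raise ValueError("a single oscillatory stitching layer needs eta > 1")
    return m * math.pi / math.sqrt(eta - 1.0)

dv_half = slab_stitch_thickness(10.0, 1)    # one sign change: pi/3
dv_full = slab_stitch_thickness(10.0, 2)    # two sign changes: 2*pi/3
print(round(dv_half / math.pi, 4), round(dv_full / math.pi, 4))  # 0.3333 0.6667
```

For comparison, the three-layer half-wave groups of Table 1 (layers ii-iv) total roughly 0.36*pi, close to this single-layer slab estimate of pi/3, about 0.33*pi.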
The thicknesses of the second portion of the third layer and of the remaining two layers are determined in the same fashion, but now with the constraint that psi = 1 and psi' = 0 on the group's right edge. Fractional Wave Stitching [0084] FIGS. 8A-C illustrate three fractional-wave stitching groups, that is, three groups that return the field's slope to zero without allowing the field's polarity to change sign. FIG. 8A illustrates a central stitching layer. The central index is lower than the cladding's, and the field consequently grows exponentially with position; the field on-axis is not zero, here it is 2% of the field at the layer's edge, and hence the group is not classified as a half-wave group. Simulations show that layers such as this can efficiently disrupt the properties of a guide's non-flattened mode or can mitigate losses in a lossy glass such as a stress-applying region, though their disadvantage is that they carry very little power. Note that the central index of FIG. 8A could be made higher than the cladding's index, resulting in a field similar to that in FIG. 6A or FIG. 7A. FIG. 8B illustrates a three-layer stitching group in which the field dips but does not pass through zero. Simulations suggest that such a group may be difficult to manufacture, since its behavior varies relatively strongly with its layers' thicknesses. FIG. 8C illustrates a three-layer stitching group in which the field's magnitude rises within the group. The resulting hotspot may be advantageous for applications where field effects are to be enhanced, but problematic for many high-power laser applications. Like the example of FIG. 8B, simulations suggest that such a group may be difficult to manufacture. Termination Groups [0088] A termination group is a layer or group of layers placed between one region of a guide, here most often a region in which the slope of the desired mode's field is zero, and the guide's cladding.
The indices and thicknesses of the layers that comprise the group are chosen to force the cladding's exponentially-growing term to zero, and thus to bind the mode to the guide. Termination is analogous to impedance matching. The examples of this and the following section give the flattened mode's scaled effective area and illustrate its scaled field, quantities defined by Eq. (5) and Eq. (6). For example and comparison, consider a step-index fiber that supports the LP01 mode and is at the cusp of supporting the LP02 mode, that is, v = 1.23*pi. It can be shown that its fundamental mode has a scaled effective area of 37.5; therefore, if the guide's design operates at lambda = 1 um and its core has a numerical aperture of 0.06, its effective area will be 260 um^2. It can be further shown that this mode has a scaled peak field of 0.219 = 1/sqrt(20.8). If the fiber carries 1 kW of power its peak field will be 2.61 W^1/2/um and its peak irradiance will be (2.61 W^1/2/um)^2 = 6.8 W/um^2. Note that the peak irradiance is 1.8 times higher than the simple ratio of the power to the effective area (because 37.5/20.8 = 1.8). For flattened modes this ratio is closer to unity; for the examples here it is typically 1.15. FIGS. 9A-C illustrate three termination groups applied to three flattened waveguides. In the figures, eta_eff is 1 (from Eq. (3), since the mode's effective index equals n_flat) and the minimum and maximum values of eta are limited to +/-10. The thickness of the flattened layer is chosen so that each guide is on the cusp of allowing one axially-symmetric mode beyond the flattened mode. The thicknesses of the layers that comprise the groups were determined numerically from Bessel solutions to the wave equation, applying the constraints listed for each example. FIG. 9A illustrates a single-layer termination group. Note that the field extends relatively far into the cladding; at the cladding interface the field is 93% of its value in the flattened region, and 21% of the mode's power is guided in the cladding.
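The scaled-to-physical conversions in the worked example above can be reproduced directly; every input below is a number quoted in the text (scaled area 37.5, scaled peak field 1/sqrt(20.8), lambda = 1 um, NA = 0.06, 1 kW):

```python
import math

# Worked example from the text: a step-index fiber whose fundamental mode
# has scaled effective area 37.5 and scaled peak field 1/sqrt(20.8).
wavelength = 1.0                 # um
na = 0.06
p0 = 1000.0                      # W
a_scaled = 37.5
psi_peak_scaled = 1.0 / math.sqrt(20.8)

# Physical effective area, Eq. (5): A_eff = (lambda / (2 pi NA))^2 * A_scaled
a_eff = (wavelength / (2.0 * math.pi * na))**2 * a_scaled

# Physical peak field, Eq. (6): psi = (2 pi / lambda) NA sqrt(P0) psi_scaled
psi_peak = 2.0 * math.pi / wavelength * na * math.sqrt(p0) * psi_peak_scaled
irr_peak = psi_peak**2           # W/um^2

print(round(a_eff), round(psi_peak, 2), round(irr_peak, 1))  # 264 2.61 6.8
# "Hotspot factor": peak irradiance over P0/A_eff equals 37.5/20.8 = 1.8
print(round(irr_peak / (p0 / a_eff), 2))
```

The computed area, about 264 um^2, matches the text's rounded 260 um^2, and the final ratio reproduces the factor of 1.8 exactly, since it reduces to the product of the scaled area and the squared scaled peak field.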
Since the effective index of the guide's flattened mode is predetermined (because n_eff = n_flat), the mode's decay constant in the cladding is fixed; consequently the field in the cladding can only be reduced by reducing the field at the cladding interface--the purpose of the additional layers in FIG. 9B and FIG. 9C. FIG. 9B illustrates a two-layer termination group, similar to those described in [8]. In this group, the group-averaged scaled index, Eq. (4), serves as an additional constraint; simulations show that it strongly affects the field at the cladding interface. In the example, the layers' thicknesses are varied to make the field at the cladding boundary 50% of the field in the flattened layer (this occurs with the group's average index, Eq. (4), set to <eta> = 0.7), and to match the field's slope at the cladding interface. Roughly 7% of the mode's power is guided in the cladding. FIG. 9C illustrates a three-layer termination group. The field is set to zero at the interface between the first and second layers, the local minimum in the second layer is 50% of the field in the flattened layer, and the group-averaged index, Eq. (4), is set to <eta> = 0.7. The field at the cladding interface is -3% of the field in the flattened region, and only 0.04% of the mode's power is guided in the cladding, though now a significant power fraction is guided by the termination group.

TABLE 1. Parameters for two three-ringed flattened-mode designs (A and B) and a step-index design (C). All quantities are dimensionless.

            Design A         Design B         Design C
  layer    dv/pi    eta     dv/pi    eta     dv/pi    eta
  i        0.900      1     0.470      1     3.240      1
  ii       0.128     10     0.133     10
  iii      0.124    -10     0.137    -10
  iv       0.107     10     0.099     10
  v        0.289      1     0.470      1
  vi       0.123     10     0.120     10
  vii      0.125    -10     0.138    -10
  viii     0.110     10     0.106     10
  ix       0.202      1     0.470      1
  x        0.076     10     0.076     10
  xi       0.064    -10     0.064    -10

Termination groups of the type shown in FIG. 9C enhance the mode's confinement but also allow at least one additional axially-symmetric mode, plus the asymmetric modes that may accompany it. Relative to the desired mode, the additional modes can have very different propagation constants, very different transverse power distributions, or both; thus they may not readily couple to the desired mode and may not be problematic. Example Waveguides [0096] Waveguides that propagate a flattened high-order mode are created by interleaving flattening layers with stitching groups, typically starting from the inside of the guide and working outward, then binding the mode to the cladding with a termination group. Table 1 lists designs for three waveguides; A and B both support a three-ringed flattened mode, and C supports several higher-order modes. A and B each have three flattened layers (i, v and ix), two three-layer half-wave stitching groups similar to those illustrated in FIG. 6C (ii-iv and vi-viii), and a two-layer termination group similar to the one in FIG. 9B (x-xi). Surrounding these layers is the cladding, having eta = 0. In Design A the flattened layers have equal cross-sectional areas, both stitching groups have <eta> = 3.0, and the termination group has <eta> = 0.7. In Design B the flattened layers have equal widths, both stitching groups have <eta> = 2.4, and the termination group has <eta> = 0.7. We compare the flattened LP03 modes of Designs A and B to the LP03 mode of a few-mode step-index design, Design C. Design C is similar to the high-order-mode fibers reported by others [2], but has a smaller v-number to make its mode count similar to those of A and B. FIGS. 10A-C show line-outs of the scaled index (dark lines) and field (grey lines) for the three designs; a), b), and c) correspond to Designs A, B, and C. All quantities are dimensionless. For Design A, the scaled area is 140 and the scaled peak field is 1/sqrt(122); for Design B the values are 150 and 1/sqrt(134); and for Design C the values are 140 and 1/sqrt(30.8).
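The layer stacks of Table 1 are easy to manipulate programmatically. The sketch below assembles Design A, sums the thicknesses to get the guide's outer scaled radius, and converts it to physical units for an assumed operating point (lambda = 1 um and NA_flat = 0.06 are assumptions, not part of the table):

```python
import math

# Design A from Table 1: (delta_v / pi, eta) for layers i-xi.
design_a = [(0.900, 1), (0.128, 10), (0.124, -10), (0.107, 10),
            (0.289, 1), (0.123, 10), (0.125, -10), (0.110, 10),
            (0.202, 1), (0.076, 10), (0.064, -10)]

# Cumulative outer boundary of each layer, in units of pi.
edges, v = [], 0.0
for dv, _eta in design_a:
    v += dv
    edges.append(round(v, 3))

v_outer = edges[-1]                         # outer edge of layer xi, x pi
wavelength, na = 1.0, 0.06                  # assumed um and NA_flat
r_outer = v_outer * math.pi * wavelength / (2.0 * math.pi * na)
print(v_outer, round(r_outer, 1))           # 2.248 18.7  (i.e. ~18.7 um radius)
```

At this assumed operating point the whole layered structure, cladding excluded, spans a radius of roughly 19 um, comparable in scale to the step-index example discussed earlier.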
The large disparity between the two measures of mode size for C--140 for its effective area vs. 30.8 for the reciprocal of its peak irradiance, a ratio of 4.5--is due to its central hotspot. FIGS. 11A-C show field (not irradiance) distributions for the LP03 and LP13 modes of the three example designs--two flattened-mode fibers and a step-index fiber. The colors blue and red designate positive and negative polarities of the field, and the depth of the color designates its relative amplitude. All figures are scaled as the one on the left, and all quantities are dimensionless. These figures show the transverse field distributions of the LP03 and LP13 modes of the three designs; when bent, the LP03's will morph toward their respective LP13's. Note that the power is more compactly packed in the flattened modes than in the step-index mode. Note, too, that the inner rings of the LP13 modes of the flattened designs have essentially the same diameter as the inner rings of their corresponding LP03 modes. The inner ring of the LP13 mode for the step-index design, though, has a substantially larger diameter than that of its corresponding LP03 mode. This suggests the latter's mode will experience a larger shift in its centroid when that fiber is bent. The design of the high-order-mode fiber in [2] has a central spike in its index profile, perhaps to keep its mode centered. FIGS. 12A-C compare the size-spacing products (essentially the radiance), Theta_eff, defined by Eq. (59) in Appendix IV, for the modes of the three designs. The size-spacing products are an invariant of a design. Larger values are often preferable, since they imply that larger-sized modes may be fabricated while keeping the intermodal spacing constant, and thus keeping the likelihood of intermodal coupling constant. Keep in mind that the effective-area term in the Theta_eff equation is the same for all of a design's modes; for each design, it is chosen to be the area of the design's LP03 mode. The plots of FIGS. 12A-C show, as a function of the azimuthal order l, the size-spacing products for the effective indices of the modes of the three designs (Theta_eff is defined in Eq. (59)). The red circles designate the LP03 mode, which for A and B is the flattened mode. For all of a design's modes, the value of A_eff used to calculate its size-spacing products is the area of that design's LP03 mode. The legend adjacent to FIG. 12C applies to all figures, and all quantities are dimensionless. For Designs A and B, the spacings between the Theta_eff's for the three highest-order symmetric modes--the LP02, the LP03 (flattened mode) and the LP04 (on the cusp of existence)--have been made equal by choosing an appropriate thickness for the flattened layers and by choosing an appropriate value of <eta> (Eq. (4)) for each design's stitching groups. For A and B, the size-spacing differential for the axially-symmetric modes is 2.5 times larger than it is for Design C, and three times larger than for the designs in FIG. 4. This implies that for the same manufacturing tolerances, the three-ringed flattened design can have 2.5 times the area of C, or three times the area of the designs in FIGS. 9A-C. Note that the effective-index spectra of A and B are strongly affected by the relative widths of the flattened layers; a relatively large spacing has been created between the LP03 and LP12 modes of B (red arrow in FIG. 12B). The plots of FIGS. 13A-C show, as a function of the azimuthal order l, the size-spacing products for the group indices of the modes of the three designs (Theta_group is defined in Eq. (62)). The red circles designate the LP03 mode, which for A and B is the flattened mode. For all of a design's modes, the value of A_eff used to calculate its size-spacing products is the area of that design's LP03 mode. The legend adjacent to FIG. 13C applies to all figures, and all quantities are dimensionless. FIGS. 13A-C compare the size-spacing products, Theta_group, defined by Eq. (62) in Appendix IV, for the modes of the three designs.
The size-spacing products are an invariant of a design. Larger values are likely preferable, since they imply that larger-sized modes may be fabricated while keeping the intermodal spacing constant, and thus keeping the likelihood of intermodal coupling constant. Keep in mind that the effective-area term in the Theta_group equation is the same for all of a design's modes; for each design, it is chosen to be the area of the design's LP03 mode. Note that the group-index spacings of the two flattened designs, A and B, are significantly larger than those of the step-index design, C; the larger spacings may help reduce linear and nonlinear modal coupling in pulsed laser applications. Simulations show that the group-delay spectra of A and B are strongly affected by the relative widths of the flattened layers. Note that a local maximum has been created for the LP03 mode of B (red arrow in FIG. 13B), and that in Designs A and C the LP03 mode is the slowest axially-symmetric mode, while in B it is the fastest of all modes. Compared to the design of conventional fibers, the design approach presented here is atypical--it begins with the desired mode's shape and then constructs a waveguide that allows it. Flattening layers are interleaved with stitching groups, and a termination group binds the flattened mode to the guide; the latter is analogous to impedance matching. For axially-symmetric waveguides, the thicknesses or indices of the layers that comprise the stitching groups must be changed when the group's radial placement is changed; the examples presented here should be considered starting points for user-specific designs. The high-order flattened modes allow two size-spacing invariants--one relating to the phase-index spacing, one relating to the group-index spacing--to be tailored.
In particular, we have shown that the effective-index (phase-index) spacing of the guide's axially-symmetric modes can be increased substantially, and that this spacing grows in proportion to the number of rings added to the mode. Note that the flattened modes do not suffer potentially problematic hotspots, they inherently pack the propagated power into a compact cross-section, and they may reduce a mode's susceptibility to artifacts such as nonlinear self-focusing. In an amplifier, they allow power to be extracted uniformly and efficiently across the mode's cross-section. Furthermore, in amplifier applications the stitching and termination groups would likely not be doped with rare-earth ions, allowing for better control of their indices; since the field of the flattened mode is near zero in those regions, this avoids leaving regions of unsaturated gain that might contribute to noise or to amplification of undesired modes. Here we have qualitatively considered the bending properties of the flattened high-order modes by inspecting the transverse structure of the neighboring mode that they would couple to, and find that the flattened modes will stay well-centered. Comparisons to the high-order modes of a step-index fiber are complicated by the fact that the effective area, as conventionally defined, does not account for hotspots in a mode's peak irradiance. We have used the effective-area metric here, but suggest that in some applications it may give an overly optimistic representation of the performance of high-order step-index modes. Despite applying this possibly lenient metric, the high-order mode of the step-index example fiber is less attractive than the flattened modes in terms of intermodal spacing, peak irradiance, and the compactness of its mode. While increasing the v-number of the step-index design would improve the intermodal spacing, it would also increase its mode count, accentuate its central hotspot, and further reduce its mode's packing density.
In principle, flattened high-order modes could be manufactured with conventional telecom techniques such as modified chemical vapor deposition and outside vapor deposition, but the tighter manufacturing tolerances allowed by holey-fiber construction techniques may prove preferable. Rectangular Waveguides [0116] Solutions for the one-dimensional, slab-like flattened-mode waveguides described above provide designs, or starting points for designs, of rectangular waveguides that support a flattened high-order mode. FIG. 14 illustrates the cross-section of a waveguide that supports a mode that is flattened in one direction. FIG. 15 illustrates the refractive index profiles along lines x-x' and y-y' of FIG. 14. Table 2 lists parameters for those profiles; in the table, dv refers to the normalized thickness of the layer. FIG. 16 illustrates the field distribution of that waveguide's flattened mode. The profiles were determined by applying the design rules for the one-dimensional slab-like waveguide. The scaled effective index of the flattened mode is 0.6, below the value of 1.0 in the field-flattening layers; the difference stems from the fact that the mode is only flattened in one direction.

TABLE 2. Layer parameters along lines x-x' and y-y' of FIG. 14. All quantities are dimensionless.

  x-x' profile:
  layer   region type   dv/pi     eta
  i       terminating   0.0996    -10
  ii      terminating   0.0815     10
  iii     flattening    1.4510      1
  iv      stitching     0.0974     10
  v       stitching     0.2381    -10
  vi      stitching     0.0974     10
  vii     flattening    1.4510      1
  viii    stitching     0.0974     10
  ix      stitching     0.2381    -10
  x       stitching     0.0974     10
  xi      flattening    1.4510      1
  xii     terminating   0.0815     10
  xiii    terminating   0.0996    -10

  y-y' profile:
  layer   region type   dv/pi     eta
  i       --            0.9739      1

FIG. 17 illustrates the cross-section of a waveguide that supports a mode that is flattened in two directions. FIG. 18 illustrates the refractive index profiles along lines x-x' and y-y' of FIG. 17. Table 3 lists parameters for those profiles; in the table, dv refers to the normalized thickness of the layer. FIG. 19 illustrates the field distribution of that waveguide's flattened mode. The profiles were determined by applying the design rules for the one-dimensional slab-like waveguide, then refining the design via computer modeling to further flatten the mode. The scaled effective index of the flattened mode is 1.003, substantially equal to the scaled index of 1.0 in the field-flattening layers. Elliptical and Hexagonal Waveguides In some embodiments of the invention, the cross-section of the waveguide is substantially elliptical, the cross-sections of the field-flattening regions are substantially elliptical or elliptical annular, and the cross-section of each of the layers of the stitching regions is substantially elliptical or elliptical annular. The one or more terminating regions comprise a single terminating region having a cross-section that is substantially elliptical annular. The cross-section of each layer of the terminating region is substantially elliptical annular, and the centers of each elliptical or elliptical-annular field-flattening region, of each elliptical or elliptical-annular stitching-region layer, and of each elliptical-annular terminating-region layer are substantially coincident. The axes of the elliptical or elliptical-annular regions and layers are substantially parallel. The inner boundary of regions having elliptical-annular cross-section is the inner ellipse of the elliptical-annular region, and the outer boundary of such regions is the outer ellipse of the elliptical-annular region. The inner boundary of regions having elliptical cross-section is an ellipse having a cross-sectional area of zero, and the outer boundary of such regions is the outer ellipse of the elliptical region.
In some embodiments of the invention, the cross-section of the waveguide is substantially hexagonal, the cross-sections of the plurality of field-flattening regions are substantially hexagonal or hexagonal annular, and the cross-section of each of the layers of each of the stitching regions is substantially hexagonal or hexagonal annular. The one or more terminating regions comprise a single terminating region. The cross-section of the one terminating region is substantially hexagonal annular, and the cross-section of each layer comprising the one terminating region is substantially hexagonal annular. The centers of each hexagonal or hexagonal-annular field-flattening region, of each hexagonal or hexagonal-annular stitching-region layer, and of each hexagonal-annular terminating-region layer are substantially coincident, and the axes of the hexagonal or hexagonal-annular regions and layers are substantially parallel. The inner boundary of regions having hexagonal-annular cross-section is the inner hexagon of the hexagonal-annular region, and the outer boundary of such regions is the outer hexagon of the hexagonal-annular region. The inner boundary of regions having hexagonal cross-section is a hexagon having a cross-sectional area of zero, and the outer boundary of such regions is the outer hexagon of the hexagonal region. Fabrication [0121] An embodiment for fabricating the waveguide of the present invention includes depositing glass on the inside of a tube or on the outside of a mandrel to produce the plurality of field-flattening regions, the one or more stitching regions, the one or more terminating regions, and the cladding region, where the step of depositing glass utilizes chemical vapor deposition.
The composition of the glass is varied at intervals during the chemical vapor deposition to form the field-flattening region refractive index structure, the stitching region refractive index structure, the terminating region refractive index structure, and the cladding refractive index. The glass is consolidated into a preform, and the preform is drawn to a reduced cross-section. Another embodiment for fabricating the waveguide of the present invention includes sheathing annular glass pieces to produce the plurality of field-flattening regions, the one or more stitching regions, the one or more terminating regions, and the cladding region. The sizes, shapes, and refractive indices of the annular glass pieces are varied to form the field-flattening region refractive index structure, the stitching region refractive index structure, the terminating region refractive index structure, and the cladding refractive index. The annular glass pieces are consolidated into a preform, which is drawn to a reduced cross-section. Another embodiment for fabricating the waveguide of the present invention includes arranging rectangular glass pieces side-by-side to produce the plurality of field-flattening regions, the one or more stitching regions, the one or more terminating regions, and the cladding region. The sizes, refractive indices, and placement of the rectangular glass pieces are arranged to form the field-flattening region refractive index structure, the stitching region refractive index structure, the terminating region refractive index structure, and the cladding refractive index.
The set of rectangular glass pieces is consolidated into a preform, which is drawn to a reduced cross-section. Another embodiment for fabricating the waveguide of the present invention includes arranging glass rods and glass capillaries into an array to produce the plurality of field-flattening regions, the one or more stitching regions, the one or more terminating regions, and the cladding region. The sizes, shapes, refractive indices, and placement of the glass rods and the glass capillaries are arranged within the array to produce the field-flattening region refractive index structure, the stitching region refractive index structure, the terminating region refractive index structure, and the cladding refractive index. The set of glass rods and glass capillaries is consolidated into a preform, which is drawn to a reduced cross-section. In some embodiments, the step of consolidating is carried out with a furnace or a torch, and the step of drawing is carried out with a furnace and a pulling apparatus. APPENDIX I Bessel Solutions [0126] Consider the equation that governs the radially-varying portion of the field in an axially-symmetric waveguide such as a conventional telecom optical fiber [9]:

{ d^2/dr^2 + (1/r) d/dr - l^2/r^2 + (2*pi/lambda)^2 [n^2(r) - n_eff^2] } psi(r) = 0   (9)

where psi represents the field of a guided mode, l is the azimuthal order, n(r) is the index at radial coordinate r, n_eff is the effective index (propagation constant) of the mode, and lambda is the vacuum wavelength of the guided light. In the discussion that follows we assume that the radial index profile varies in discrete steps, or layers.
Define the dimensionless, scaled variables:

v = (2*pi/lambda) r sqrt(n_flat^2 - n_clad^2)   (10)

eta = [n^2(v) - n_clad^2] / [n_flat^2 - n_clad^2]   (11)

and

eta_eff = [n_eff^2 - n_clad^2] / [n_flat^2 - n_clad^2]   (12)

where n_flat is the refractive index of the layer or layers in which the field will ultimately be flattened (in the method prescribed in this paper, n_flat is chosen before the waveguide is designed). In these terms the wave equation becomes:

{ d^2/dv^2 + (1/v) d/dv - l^2/v^2 + eta(v) - eta_eff } psi(v) = 0   (13)

For weak waveguides, the field and its radial derivative are continuous across the step-like boundaries between layers. Since the radial derivative is continuous, so is the quantity:

zeta = r d(psi)/dr = v d(psi)/dv   (14)

To determine the field distribution of the modes of a complex waveguide, we track psi and zeta; we begin by determining analytic solutions for the field in layers whose index is greater than, less than, and equal to the propagation constant. Each analytic solution has two unknown constants, which can be determined from the boundary conditions. Begin by considering layers that are neither the inner-most layer, here referred to as the "core," nor the outer-most layer, referred to as the "cladding."
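Tracking (psi, zeta) as Eq. (14) suggests turns Eq. (13) into a first-order system, psi' = zeta/v and zeta' = (l^2/v - (eta - eta_eff) v) psi, which can be integrated directly. The following stdlib-only sketch (the layer bounds and indices are arbitrary assumptions) confirms that a layer with eta = eta_eff and l = 0 leaves a flat field unchanged, while a higher-index layer makes the field oscillate:

```python
import math

# RK4 integration of the scaled wave equation, Eq. (13), through one
# uniform layer, tracking y = (psi, zeta) with zeta = v dpsi/dv, Eq. (14).

def propagate(psi0, zeta0, v1, v2, eta, eta_eff, l=0, steps=4000):
    h = (v2 - v1) / steps

    def f(v, y):
        p, z = y
        return (z / v, (l * l / v - (eta - eta_eff) * v) * p)

    v, y = v1, (psi0, zeta0)
    for _ in range(steps):
        k1 = f(v, y)
        k2 = f(v + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = f(v + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = f(v + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        v += h
    return y

# A flattened layer (eta = eta_eff = 1, l = 0) leaves the field unchanged:
print(propagate(1.0, 0.0, 0.5 * math.pi, math.pi, 1.0, 1.0))   # (1.0, 0.0)
# A stitching-like layer (eta = 10) makes the field oscillate:
print(propagate(1.0, 0.0, 0.5 * math.pi, math.pi, 10.0, 1.0))
```

Integrating forward and then backward returns the starting values, reflecting the continuity of psi and zeta across the layer.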
The cladding is presumed to extend to infinity. In layers where eta > eta_eff (n > n_eff), the solution to the wave equation is:

psi = A J_l(x) + B Y_l(x)   (15)

where J_l and Y_l are oscillatory Bessel functions, A and B are unknown constants, and:

x = v sqrt(|eta - eta_eff|)   (16)

If psi_1 and zeta_1 are known at some position v_1, such as at one of the layer's boundaries, then A and B can be expressed as:

A = (pi/2) [x_1 Y_l'(x_1) psi_1 - Y_l(x_1) zeta_1]   (17)

B = (pi/2) [-x_1 J_l'(x_1) psi_1 + J_l(x_1) zeta_1]   (18)

A and B were determined with the help of the following Bessel identity [12]:

J_l(x) Y_l'(x) - J_l'(x) Y_l(x) = 2/(pi*x)   (19)

Note that the derivatives of the Bessel functions can be calculated exactly from the identities:

J_l'(x) = (l/x) J_l(x) - J_{l+1}(x)   (20)

Y_l'(x) = (l/x) Y_l(x) - Y_{l+1}(x)   (21)

In layers where eta < eta_eff (n < n_eff) the solution to the wave equation is:

psi = A I_l(x) + B K_l(x)   (22)

where I_l and K_l are exponentially growing and decaying modified Bessel functions and A and B are unknown constants. If psi_1 and zeta_1 are known at some position v_1, such as at one of the layer's boundaries, then A and B can be expressed as:

A = -x_1 K_l'(x_1) psi_1 + K_l(x_1) zeta_1   (23)

B = x_1 I_l'(x_1) psi_1 - I_l(x_1) zeta_1   (24)

In determining A and B we used the Bessel identity:

I_l(x) K_l'(x) - I_l'(x) K_l(x) = -1/x   (25)

Note that the derivatives of the modified Bessel functions can be calculated exactly from the identities:

I_l'(x) = (l/x) I_l(x) + I_{l+1}(x)   (26)

K_l'(x) = (l/x) K_l(x) - K_{l+1}(x)   (27)

In layers where eta = eta_eff (n = n_eff) the wave equation reduces to:

{ d^2/dv^2 + (1/v) d/dv - l^2/v^2 } psi(v) = 0   (28)

For l != 0 the solution is:

psi = A v^l + B v^-l   (n = n_eff, l != 0)   (29)

and the constants A and B become:

A = v_1^-l (l psi_1 + zeta_1) / (2l)   (30)

B = v_1^l (l psi_1 - zeta_1) / (2l)   (31)

For l = 0 the solution is:

psi = A + B ln(v)   (n = n_eff, l = 0)   (32)

and the constants A and B become:

A = psi_1 - zeta_1 ln(v_1)   (33)

B = zeta_1   (34)

Note that in Eq. (32), the field can be made independent of position by forcing the constant B to zero (from Eq. (34), this is equivalent to making the field's slope zero); thus a necessary condition for a flattened field is that n = n_eff. Comparing Eq. (29) and Eq. (32) we see that the field can only be flattened if, in addition to n = n_eff, the azimuthal order, l, is also zero. Now consider the inner-most layer, the core, and the outer-most layer, the cladding.
In these only a single Bessel solution is allowed. In the core the allowed solutions are:

$$\psi = A\, J_l(x) \qquad (n > n_{eff}) \qquad (35)$$

$$\psi = A\, I_l(x) \qquad (n < n_{eff}) \qquad (36)$$

$$\psi = A\, v^{l} \qquad (n = n_{eff},\; l \neq 0) \qquad (37)$$

$$\psi = A \qquad (n = n_{eff},\; l = 0) \qquad (38)$$

and in the cladding the allowed solution is

$$\psi = B\, K_l(x) \qquad (39)$$

APPENDIX II

Transfer Matrices

[0143] The solutions for the constants A and B can be substituted into the original expressions for ψ and the corresponding expressions for ζ to obtain transfer matrices, M, that relate ψ and ζ at position v_2 to their known values at position v_1:

$$\begin{bmatrix} \psi_2 \\ \zeta_2 \end{bmatrix} = M \begin{bmatrix} \psi_1 \\ \zeta_1 \end{bmatrix} \qquad (40)$$

In all cases, the matrices can be written in the form:

$$M = m^{-1}(x_2)\, m(x_1) \qquad (41)$$

where x_1 is the quantity x, defined by Eq. (16), evaluated at position v_1 and index η (the index between v_1 and v_2), and x_2 is x evaluated at v_2 and the same index η. The determinant of each transfer matrix M is unity, but the matrices are not orthogonal. Their inverses are found by exchanging their diagonal elements and changing the signs of their off-diagonal elements. In layers where η > η_eff:

$$m(x) = \frac{\pi}{2}\begin{bmatrix} x Y_l'(x) & -Y_l(x) \\ -x J_l'(x) & J_l(x) \end{bmatrix} \qquad (42)$$

In layers where η < η_eff:

$$m(x) = \begin{bmatrix} x I_l'(x) & -I_l(x) \\ -x K_l'(x) & K_l(x) \end{bmatrix} \qquad (43)$$

In layers where η = η_eff and l ≠ 0 (x vanishes here, so m is written as a function of position v):

$$m(v) = \frac{1}{2}\begin{bmatrix} v^{-l} & v^{-l}/l \\ v^{l} & -v^{l}/l \end{bmatrix} \qquad (44)$$

In layers where η = η_eff and l = 0:

$$m(v) = \begin{bmatrix} 1 & -\ln(v) \\ 0 & 1 \end{bmatrix} \qquad (45)$$

The transfer matrix solution to the wave equation for a step-like fiber then becomes:

$$\begin{bmatrix} 1 \\ \Omega_{core} \end{bmatrix} = (const)\, M \begin{bmatrix} 1 \\ \Omega_{clad} \end{bmatrix} \qquad (46)$$

where the quantity Ω is defined as:

$$\Omega = \zeta/\psi \qquad (47)$$

Ω_clad is (from Eq. (39)):

$$\Omega_{clad} = \left. \frac{x\, K_l'(x)}{K_l(x)} \right|_{x = x_{clad}} \qquad (48)$$

where x_clad is the term x, as defined by Eq. (16), evaluated at the cladding boundary v_clad and index η = 0. Note that the Bessel derivatives can be calculated from Eq. (27). Ω_core is similarly calculated from Eq. (35), Eq. (36), Eq. (37), or Eq. (38) at the core's boundary. The matrix M is the product of the matrices that represent the layers between the core and cladding; it takes advantage of the fact that ψ and ζ are continuous across layer boundaries.
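As a small self-contained numerical illustration of this bookkeeping (Python; the field constants and layer boundaries below are arbitrary, and M = m⁻¹(v₂)m(v₁) is one consistent reading of Eqs. (40)-(45)), the l = 0 index-matched layer matrix of Eq. (45) propagates ψ and ζ exactly along the logarithmic solution ψ = A + B ln v:

```python
import math

def m_log(v):
    # Eq. (45): layer matrix for an index-matched layer (eta = eta_eff, l = 0)
    return [[1.0, -math.log(v)],
            [0.0, 1.0]]

def inv2(m):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(m, x):
    return [m[0][0] * x[0] + m[0][1] * x[1],
            m[1][0] * x[0] + m[1][1] * x[1]]

# exact field in this layer: psi = A + B ln v, so zeta = v dpsi/dv = B
A, B = 0.7, -0.3
v1, v2 = 1.5, 4.0
psi1, zeta1 = A + B * math.log(v1), B

M = matmul(inv2(m_log(v2)), m_log(v1))   # transfer matrix across the layer
psi2, zeta2 = matvec(M, [psi1, zeta1])

print(psi2 - (A + B * math.log(v2)))   # ~0: matches the exact solution
print(zeta2 - B)                       # ~0: zeta is constant in this layer
```

The composed matrix reduces to [[1, ln(v₂/v₁)], [0, 1]], so ψ₂ = ψ₁ + ζ₁ ln(v₂/v₁) while ζ is carried through unchanged.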
For a given waveguide, the propagation constant η_eff is determined iteratively--that is, by varying its value until the transfer matrix solution is satisfied. In the above, (const) refers to a multiplicative constant related to the total power carried by a mode, as discussed in the following Appendix.

APPENDIX III

Mode Normalization

[0153] This appendix gives closed-form solutions for the mode normalization integral, and defines scaled fields. Mode normalization involves choosing the (const) term of Eq. (46) to make the power carried by a mode equal to some preselected value, P_0:

$$2\pi\, (const)^2 \int_0^{\infty} \psi^2\, r\, dr = P_0 \qquad (49)$$

Define ψ_scaled such that:

$$\psi^2 = \left(\frac{2\pi}{\lambda}\right)^2 \left(n_{flat}^2 - n_{clad}^2\right) P_0\, \psi_{scaled}^2 \qquad (50)$$

Then normalization reduces to setting:

$$2\pi\, (const)^2 \int_0^{\infty} \psi_{scaled}^2\, v\, dv = 1 \qquad (51)$$

The integration is typically performed numerically, though with the expressions that follow, which we believe are novel, it can be calculated analytically. The solutions were obtained by integrating the above expression by parts twice and taking advantage of the fact that the bound modes' fields satisfy the original wave equation, Eq. (13).

For η ≠ η_eff (n ≠ n_eff):

$$2\pi \int \psi^2\, v\, dv = \pi\, \frac{\zeta^2 - l^2 \psi^2}{\eta - \eta_{eff}} + \pi v^2 \psi^2 \qquad (52)$$

For η = η_eff and l = 0:

$$2\pi \int \psi^2\, v\, dv = \pi\, \frac{v^2}{2}\left( \zeta^2 - 2\psi\zeta \right) + \pi v^2 \psi^2 \qquad (53)$$

For η = η_eff and l = 1:

$$2\pi \int \psi^2\, v\, dv = \pi \left(\frac{v}{2}\right)^2 \left[ \frac{1}{2}(\zeta + \psi)^2 + 2(\zeta - \psi)^2 \ln(v) - 2\left(\zeta^2 + \psi^2\right) \right] + \pi v^2 \psi^2 \qquad (54)$$

And finally, for η = η_eff and l ≥ 2:

$$2\pi \int \psi^2\, v\, dv = \pi \left(\frac{v}{2l}\right)^2 \left[ \frac{1}{l+1}(\zeta + l\psi)^2 - \frac{1}{l-1}(\zeta - l\psi)^2 - 2\left(\zeta^2 + l^2\psi^2\right) \right] + \pi v^2 \psi^2 \qquad (55)$$

These are the indefinite solutions to the integrals; the contribution from an individual layer is found by evaluating the appropriate solution (depending on the layer's index relative to the propagation constant) at the layer's boundaries, and subtracting one from the other.
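As a quick sanity check on these closed forms (illustrative Python; the field constants A, B and the layer boundaries are arbitrary), the l = 0, η = η_eff expression of Eq. (53) can be compared against direct numerical quadrature over one layer:

```python
import math

# psi = A + B ln v in an index-matched layer (eta = eta_eff, l = 0);
# A, B, v1, v2 are arbitrary illustrative values
A, B = 1.0, 0.5
v1, v2 = 1.0, 3.0

def psi(v):  return A + B * math.log(v)
def zeta(v): return B                    # zeta = v * dpsi/dv

def closed_form(v):
    # Eq. (53): indefinite form of 2*pi * integral of psi^2 * v dv
    return (math.pi * v**2 / 2 * (zeta(v)**2 - 2 * psi(v) * zeta(v))
            + math.pi * v**2 * psi(v)**2)

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

numeric = 2 * math.pi * simpson(lambda v: psi(v)**2 * v, v1, v2)
analytic = closed_form(v2) - closed_form(v1)
print(numeric, analytic)   # should agree to high accuracy
```

Evaluating the indefinite solution at the layer's two boundaries and subtracting, exactly as the text prescribes, reproduces the numerically integrated power contribution.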
The full integral (from zero to infinity) is found by summing the individual contributions. Note that, for any waveguide design, the right-most terms of the piece-wise integrals contribute the following series to the full integral:

$$\pi\left[v^2\psi^2\right]_{0}^{v_1} + \pi\left[v^2\psi^2\right]_{v_1}^{v_2} + \dots + \pi\left[v^2\psi^2\right]_{v_{N}}^{\infty} \qquad (56)$$

However, since v and ψ are continuous across interfaces, this telescopes to π[v²ψ²] evaluated between zero and infinity, which is zero for all bound modes. Thus, while the right-most terms contribute to the piece-wise integrals, they do not contribute to the full integral. The closed-form solutions can also be used to quickly calculate the group index of a mode via Eq. (60).

APPENDIX IV

Size-Spacing Products

This appendix defines several mode size-spacing products and shows that for a given waveguide design, these are fixed. It refers to scaled terms defined in Appendix I. Once the scaled index profile (Eq. (11)) is specified, the scaled propagation constants, Eq. (12), and the shapes of the allowed modes are completely determined, as implied by the form of the scaled wave equation, Eq. (13). To relate scaled quantities to those that can be measured in a laboratory, begin by noting that the effective mode area can be written:

$$A_{eff} = \frac{2\pi\left(\int \psi^2\, r\, dr\right)^2}{\int \psi^4\, r\, dr} = \frac{(\lambda/2\pi)^2}{n_{flat}^2 - n_{clad}^2}\, A_{eff}^{scaled} \qquad (57)$$

where the scaled effective area is defined as

$$A_{eff}^{scaled} = \frac{2\pi\left(\int \psi^2\, v\, dv\right)^2}{\int \psi^4\, v\, dv} \qquad (58)$$

For each allowed mode of a design, the propagation constant and scaled area are fixed, and thus their product, represented here by the symbol Θ_eff, is also fixed:

$$\Theta_{eff} = \eta_{eff}\, A_{eff}^{scaled} = \frac{A_{eff}}{\lambda^2}\left(n_{eff}^2 - n_{clad}^2\right) \qquad (59)$$

The right-most term is found through substitution; note that though it was derived from scaling arguments, it consists only of quantities that can be directly measured, and that since Θ_eff is fixed, if a mode's size is increased, its effective index necessarily approaches the cladding index.
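A small numerical illustration of this trend (Python; the values of Θ_eff, λ, and n_clad below are hypothetical, chosen only to show the behavior): inverting the right-most form of Eq. (59) for n_eff shows the effective index sliding toward the cladding index as the effective area grows.

```python
import math

n_clad = 1.444        # hypothetical cladding index
theta_eff = 10.0      # hypothetical fixed size-spacing product for one mode
lam = 1.06e-6         # wavelength in meters

def n_eff(A_eff):
    # invert Eq. (59): n_eff = sqrt(n_clad^2 + theta_eff * lambda^2 / A_eff)
    return math.sqrt(n_clad**2 + theta_eff * lam**2 / A_eff)

for area_um2 in (100.0, 1000.0, 10000.0):   # effective areas in um^2
    A = area_um2 * 1e-12                    # convert to m^2
    print(area_um2, n_eff(A) - n_clad)      # the gap shrinks as the area grows
```

Because Θ_eff is fixed for a given mode, a larger A_eff necessarily means a smaller n_eff² − n_clad², which is exactly the crowding of effective indices the text describes.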
Since this holds for all modes, it follows that as a desired mode's size is increased, the effective indices of all modes necessarily approach each other. The effective index is the phase index of the mode. When evaluating pulse propagation effects, the group index, n_g, is also important. Using an integral form of the group index [13] it can be shown that:

$$\frac{n_{eff}\, n_g - n_{clad}^2}{n_{flat}^2 - n_{clad}^2} = \frac{\int \eta\, \psi^2\, v\, dv}{\int \psi^2\, v\, dv} \qquad (60)$$

and following arguments similar to those that led to Eq. (59), it can be shown that the following quantity is also fixed for each mode of a waveguide:

$$\Theta_{eff,g} = \frac{A_{eff}}{\lambda^2}\left(n_{eff}\, n_g - n_{clad}^2\right) \qquad (61)$$

where n_eff n_g is the product of a mode's phase and group indices. Like Θ_eff, this is a strict invariant of a design (within the strictures of the weak-guiding approximation), but unfortunately the separations between the Θ_eff,g's are not obvious indicators of the separations between the group indices. The following term is more transparent:

$$\Theta_{g} = \frac{A_{eff}}{\lambda^2}\left(n_g^2 - n_{clad}^2\right) = 2\Theta_{eff,g} - \Theta_{eff} + \frac{\lambda^2}{n_{eff}^2\, A_{eff}}\left(\Theta_{eff,g} - \Theta_{eff}\right)^2 \qquad (62)$$

where the right-hand side has been found by substitution. Since Θ_g depends on A_eff it is not a true invariant of the guide. However, if the mode's area is sufficiently large the term containing A_eff can be neglected, as is usually justified for guides designed for high-power laser applications, so that Θ_g may be considered, to a good approximation, invariant.

REFERENCES

[0166] 1. J. Fini and S. Ramachandran, "Natural bend-distortion immunity of higher-order-mode large-mode-area fibers," Opt. Lett. 32, 748-750 (2007).

2. S. Ramachandran, J. M. Fini, M. Mermelstein, J. W. Nicholson, S. Ghalmi, and M. F. Yan, "Ultra-large effective-area, higher-order mode fibers: a new strategy for high-power lasers," Laser Photonics Rev. 2, 429 (2008).

3. R. H. Stolen and C. Lin, "Self-phase-modulation in silica optical fibers," Phys. Rev. A 17, 1448 (1978).

4. A. K. Ghatak, I. C. Goyal, R.
Jindal, "Design of waveguide refractive index profile to obtain flat modal field," Proc. SPIE 3666, 40 (1998).

5. J. Dawson, R. Beach, I. Jovanovic, B. Wattellier, Z. Liao, S. Payne, and C. Barty, "Large flattened-mode optical fiber for reduction of nonlinear effects in optical fiber lasers," Proc. SPIE 5335, 132 (2004).

6. W. Torruellas, Y. Chen, B. McIntosh, J. Farroni, K. Tankala, S. Webster, D. Hagan, M. J. Soileau, M. Messerly, and J. Dawson, "High peak power Ytterbium doped fiber amplifiers," Proc. SPIE 6102, 61020-1-61020-7 (2006).

7. B. Ward, C. Robin, and M. Culpepper, "Photonic crystal fiber designs for power scaling of single-polarization amplifiers," Proc. SPIE 6453, 645307 (2007).

8. C. Zhao, Z. Tang, Y. Ye, L. Shen, and D. Fan, "Design guidelines and characteristics for a kind of four-layer large flattened mode fibers," Optik--International Journal for Light and Electron Optics 119, 749-754 (2008).

9. A. Yariv, Optical Electronics, 3rd Edition (Holt, Rinehart and Winston, 1985).

10. P. Yeh, Optical Waves in Layered Media (Wiley-Interscience, Hoboken, N.J., 1998).

11. P. Yeh, A. Yariv, and C. S. Hong, "Electromagnetic propagation in periodic stratified media. I. General theory," J. Opt. Soc. Am. 67, 423-438 (1977).

12. M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (Dover, 1972).

13. A. W. Snyder and J. D. Love, Optical Waveguide Theory, p. 644 (Chapman and Hall Ltd, 1983).

The above references, 1-13, are incorporated herein by reference. The foregoing description of the invention has been presented for purposes of illustration and description and is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching.
The embodiments disclosed were meant only to explain the principles of the invention and its practical application to thereby enable others skilled in the art to best use the invention in various embodiments and with various modifications suited to the particular use contemplated. The scope of the invention is to be defined by the following claims.
Weatherford, TX Geometry Tutor Find a Weatherford, TX Geometry Tutor ...I have found that students at all levels often fear examinations. That fear can be grounded in knowledge problems. either in subject matter or testing fear itself. I have in an in depth knowledge of testing techniques which is usually helpful to students. 40 Subjects: including geometry, chemistry, physics, GED ...I have tutored privately for over ten years and have been employed by a college to deliver tutorials and laboratory demonstrations in civil, mechanical and electrical engineering and computer science courses. I have also worked as a high school teacher for six years and have taught mathematics, physics, science and computer science. My teacher registration is current. 56 Subjects: including geometry, chemistry, calculus, physics ...I currently teach high school but have taught elementary and junior high in the past. I especially like tutoring small groups or one-on-one. When I taught 4th grade, I told my students to do their best and together we would get them to pass the TAKS test. 7 Subjects: including geometry, calculus, algebra 1, algebra 2 ...I also have experience tutoring students in advanced math classes. I graduated with honors in Mechanical and Aerospace Engineering. I have four years of industry experience. 21 Subjects: including geometry, English, chemistry, calculus ...I can meet anywhere (typically a local library, Starbucks, or in the comfort of your home). About me: I am a senior at Texas Wesleyan, completing my bachelor's degree in Mathematics and Secondary Education. I have been tutoring for the last year, with an emphasis in Algebra, Geometry, and Pre-C... 
13 Subjects: including geometry, calculus, linear algebra, logic
help with solving for x August 20th 2013, 10:49 PM #1 Aug 2013 help with solving for x the problem im having trouble with is: solve 1 + cos(2x) = 2sin^2(2x) for x, where 0 ≤ x < 2π i set everything equal to 0 and worked it down to (cos(2x) + 1)(2cos(2x) - 1) = 0 and solved each of those individually getting x = π/2 and x = π/6 but I'm not even sure if everything is correct to that point or how to finish the problem off from there if I was going on the right direction. any help would be much appreciated. thanks Re: help with solving for x Hey mikeythemanic. After you verify by plugging in, see if you have other values that lie in the [0,2pi) interval. Remember that cos(x) = cos(2*pi-x) and sin(x) = -sin(2*pi-x). Re: help with solving for x You are missing the values of 5pi/6 and 3pi /2. What really helps with this problem is to set 2x = z. The only identity you will need to know is that sin^2 x + cos^2 x = 1. Use that to get rid of the sin in the equation. When you plug 2x back in and you have something like 2x = pi, remember to do this 2x = pi + 2pi*n (n being any integer). Remember that 2x provides double the solutions as x, so 2x = pi + 2pi*n becomes x = pi/2 +pi*n, the range is [0,2pi) so will have pi/2 and 3pi/2. Last edited by ShadowKnight8702; August 20th 2013 at 11:23 PM. Re: help with solving for x i ended up with cosx = 2sinxcosx and divide by cosx, then again by 2 and i get sinx = 1/2 so back to the original problem, would i then just say x = arcsin(1/2) or would it be pi/6? and 1 last question, how am i so retarded? i feel like it's my gift. Re: help with solving for x ok now im really confused since you edited your post, was i doing it the right way before? my head is going to explode soon i think. Re: help with solving for x Let me walk you through the problem. From the original equation, I set 2x = z. 
1 + cos z - 2sin^2 z = 0
1 + cos z - 2(1 - cos^2 z) = 0   (remember sin^2 z + cos^2 z = 1)
2cos^2 z + cos z - 1 = 0
(2cos z - 1)(cos z + 1) = 0

Starting with 2cos z - 1 = 0. (For problems with a coefficient next to x, set the roots up as if there were no restraints on the values of x.)
cos z = 1/2
z = pi/3 + 2pi*n or z = 5pi/3 + 2pi*n (n being any integer, since this statement represents all possible values without any range)
2x = pi/3 + 2pi*n or 2x = 5pi/3 + 2pi*n
x = pi/6 + pi*n or x = 5pi/6 + pi*n
x = pi/6 or 7pi/6, or x = 5pi/6

Now with cos z + 1 = 0:
cos z = -1
z = pi + 2pi*n
2x = pi + 2pi*n
x = pi/2 + pi*n
x = pi/2 or 3pi/2

In all, our values are pi/6, 7pi/6, 5pi/6, pi/2, 3pi/2. Understand why? I left some values out of my previous post.

Re: help with solving for x
i think im starting to understand it better, so my final answer would be those 5 answers because the range given for x was between 0 and 2pi? getting this many answers as an answer is a bit new to me, gonna take some getting used to i guess. thanks for walking me through it, helps a lot.

Re: help with solving for x
My apologies, I forgot to include 11pi/6 in the final answer. I forgot about it in the equation x = 5pi/6 + pi*n. Both 5pi/6 and 11pi/6 are possible values.

Re: help with solving for x
no worries, im certainly not good enough at this to catch many mistakes. lol thanks again.
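For anyone who wants to double-check numerically, here is a quick Python sanity check (just an illustration) that the six values found in this thread for [0, 2pi) really satisfy the original equation:

```python
import math

def f(x):
    # LHS minus RHS of the original equation 1 + cos(2x) = 2 sin^2(2x)
    return 1 + math.cos(2 * x) - 2 * math.sin(2 * x) ** 2

solutions = [math.pi / 6, math.pi / 2, 5 * math.pi / 6,
             7 * math.pi / 6, 3 * math.pi / 2, 11 * math.pi / 6]

for x in solutions:
    print(round(f(x), 12))     # each is ~0, so each is a root

print(abs(f(math.pi / 4)) > 0.5)   # True: a non-solution, for contrast
```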
cube root cube root how do you find this? i tried this: fyards = (area ^ (1/3)); //cube root but i came up with this: 63 C:\Dev-Cpp\program2.cpp invalid operands of types `float' and `int' to binary `operator^' The power function is in the library <cmath> And it goes like this: variable = pow(x,y); //Where x is what you want and y is the power double temp = pow(8,(1.0/3.0) ); //Will get you 2 Alternative, if area is guaranteed to be greater than zero, would be; fyards = exp(log(area)/3.0); Rashakil Fol Originally Posted by grumpy Alternative, if area is guaranteed to be greater than zero, would be; fyards = exp(log(area)/3.0); This is actually what the pow function does. Mathematically it is equivalent. As to it being what the pow() function actually does: that's implementation dependent. I suggest pow() will also do a few other things, otherwise it wouldn't work if the first argument is negative. ^ is the XOR operator by the way... Originally Posted by arjunajay ^ is the XOR operator by the way... But the way the question was asked he was using x^y as a notation for "x raised to the power of y".
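The point about pow() and negative bases is easy to demonstrate; here is the same idea sketched in Python rather than C++ (the helper name cbrt is just for illustration):

```python
import math

def cbrt(x):
    # a fractional exponent is only safe for a nonnegative base,
    # so peel the sign off first and put it back afterwards
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

print(cbrt(8.0))     # ~2.0
print(cbrt(-27.0))   # ~-3.0

try:
    math.pow(-27.0, 1.0 / 3.0)
except ValueError:
    print("math.pow rejects a negative base with a fractional exponent")
```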
Find a Statistics Tutor ...I am comfortable with any of these topics: Solving Systems of Linear Equations; Vectors, Geometry of R^n, and Solution Sets; Linear Independence and Linear Transformation; Matrix Operations and Matrix Inverses; LU Factorization; Subspaces, Bases, Dimension, Rank; Determinants; Vector Spaces; Eige... 15 Subjects: including statistics, chemistry, calculus, physics ...It is from these individual sessions that I find the students learn the most. My tutoring style is strongly influenced by the positive experience that I gained from these personalized instruction with students. Each student has a different way of learning a subject. 16 Subjects: including statistics, physics, calculus, geometry ...My graduate work was completed at the University of Maryland College Park, where I specialized in international development and quantitative analysis. I currently work as a professional economist. Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to... 16 Subjects: including statistics, calculus, geometry, algebra 1 ...I completed AP Calculus BC through my Junior year of high school, and have been tutoring Algebra 1 and 2 and geometry recently. Please feel free to contact with questions. I scored very high on my math portions of the SAT and ACT (750 out of 800 SAT, 35 out of 36 ACT). I have helped several other students with study tips and practice problems for these exams as well. 16 Subjects: including statistics, geometry, algebra 1, algebra 2 ...I know geology very well and have published professional papers in the subject. I am a doctoral candidate in Environmental Science at George Mason University. I also tutor in mathematics, statistics, Microsoft Office products, and computer science.I have used Excel in various aspects of science and business. 10 Subjects: including statistics, algebra 1, algebra 2, Microsoft Excel
Pigeonhole theorem problem May 27th 2013, 02:27 AM Pigeonhole theorem problem Problem #1: Suppose N objects are distributed between B boxes in such a way that no box is empty. How many objects must we choose in order to be sure that we have chosen the entire contents of at least one My strategy was to find out the maximum number of contents one box can have, say M. Then if I choose M objects I can satisfy the given task. But I'm stuck on finding this maximum number. I know that by using the pigeonhole principle, I can say at least one box has at most or at least some number of contents. How can I say all boxes have at most some number of contents? Problem #2: There are twelve signs of the Western Zodiac. Suppose there are 145 people in a room. Show that there must be 13 people who share the same sign of the Western Zodiac. I'm confused about the wording. Is the problem different from "show that there must be at least 13 people who share a zodiac"? If so, I need hints to find how to narrow down from saying "at least 13 people share one zodiac" to "exactly 13 people share one zodiac". May 27th 2013, 03:26 AM Re: Pigeonhole theorem problem in #2 you are splitting 145 objects between 12 groups. Consider the case when the objects are most spread out May 27th 2013, 03:26 AM Re: Pigeonhole theorem problem Problem #1: Suppose N objects are distributed between B boxes in such a way that no box is empty. How many objects must we choose in order to be sure that we have chosen the entire contents of at least one Problem #2: There are twelve signs of the Western Zodiac. Suppose there are 145 people in a room. Show that there must be 13 people who share the same sign of the Western Zodiac. I have absolutely no idea what #1 means. For #2, if there are 12 boxes and 144 balls then if you put all of the balls into the boxes is it possible that each box contains at most 12 balls? May 27th 2013, 03:39 AM Re: Pigeonhole theorem problem Thank you, I realized where I was stuck on #2. 
I thought there had to be some zodiac-subset that had contained precisely 13 people and no more. But if 15 people had the same zodiac, it would still be true that 13 out of these 15 share the same zodiac, right? May 27th 2013, 06:46 AM Re: Pigeonhole theorem problem I believe your method for #1 is correct, when they say choose they mean remove. If no box can be empty then the maximum number in 1 box is N-B+1 May 27th 2013, 02:08 PM Re: Pigeonhole theorem problem Thanks Shakarri. So after I put one item in each box so that no box is empty, I have N-B objects left. If I put all of that in one box, that box will have the max number of contents possible. Since this box has N-B+1 objects in it, choosing N-B+1, I'm guaranteed to exhaust at least one box?
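If you want to convince yourself by brute force, here's a small Python check (illustrative; N = 8, B = 3 picked arbitrarily) that choosing N - B + 1 objects always empties some box, while N - B picks can fail:

```python
from itertools import combinations

def compositions(n, b):
    # all distributions of n objects into b boxes with no box empty
    if b == 1:
        yield (n,)
        return
    for first in range(1, n - b + 2):
        for rest in compositions(n - first, b - 1):
            yield (first,) + rest

N, B = 8, 3
k = N - B + 1   # claimed guarantee

for boxes in compositions(N, B):
    # label each object with the index of its box
    labels = [i for i, size in enumerate(boxes) for _ in range(size)]
    for chosen in combinations(range(N), k):
        counts = [0] * B
        for obj in chosen:
            counts[labels[obj]] += 1
        # some box must have been fully chosen
        assert any(counts[i] == boxes[i] for i in range(B))
    # but N - B picks can fail: leave the first object of every box unchosen
    first_of_box = [sum(boxes[:i]) for i in range(B)]
    spared = set(first_of_box)
    chosen = [o for o in range(N) if o not in spared]
    counts = [0] * B
    for obj in chosen:
        counts[labels[obj]] += 1
    assert len(chosen) == N - B
    assert all(counts[i] < boxes[i] for i in range(B))

print("every choice of", k, "objects from", N, "must exhaust one of the", B, "boxes")
```

The second half of the loop is the "at least one object left in each box" argument in code: a selection of size N - B can always avoid emptying any box, so N - B + 1 is exactly the number needed.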
The Purplemath Forums

sine or cosine

frankie10 wrote: how do you know when to use sine law or cosine.

Re: sine or cosine

If you have two angles and a side opposite one of them, or two sides and an angle opposite one of them, then use the Law of Sines, because you have the information that fits.

If you have two sides and the included angle, or all three sides and need an angle, use the Law of Cosines, because you have the information that fits.
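A quick worked illustration of that rule of thumb (Python; the triangle's numbers are made up):

```python
import math

# two sides and the included angle -> Law of Cosines gives the third side
a, b, C = 5.0, 7.0, math.radians(60.0)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))
print(c)   # sqrt(25 + 49 - 35) = sqrt(39), about 6.245

# two angles and a side opposite one of them -> Law of Sines gives another side
A, B, side_a = math.radians(40.0), math.radians(75.0), 10.0
side_b = side_a * math.sin(B) / math.sin(A)
print(side_b)   # the side opposite the larger angle comes out longer
```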
can someone help me with geometry crap?

what's the question here?

can you take a picture of your screen because its asking me to log into ur book...send a picture n ill help

i cant. thats ok. thanks for ur help anyways! i fanned u anyways! :D

just go to your screen and press Ctrl and PrtSc on your keyboard....then paste the picture on a document and attach it :)

oh ok.

# 15 please!

you might need to zoom in. sorry.

we know that \(x+(x+y)=180\) and \((x+y)=2y\) ......look at the graph now just solve :)
\[x+x+y=180\]
\[2x+y=180\]
the second equation \(x+y=2y\) gives \(x=y\), so plug that into \(2x+y=180\):
\[2x+x=180\]
\[3x=180\]
\[x=60\]
and since \(y=x\),
\[y=60\]
we get \(x=60\) and \(y=60\) so im not sure if the picture is just not to scale but check the answer in the back of the book and let me know :)

i meant to say to look at the picture not the graph, sorry

thats ok. Thanks lots! it makes sense now :D

is the answer correct though??

did you check the answer??

no i didnt. hold on.
Microcontroller board question ($50 robot tutorial)

I'm ready to build the controller board for the $50 robot. Unfortunately, when I got to the parts shop they didn't have the exact resistances listed in the parts list. The list calls for 340 ohm and 1.62K ohm; both were unavailable, so I got 330 ohm and 1.6K ohm, both with a +-5% tolerance. So here is my question: do I need exactly those resistance values, or can I use resistances close to the listed values? Thanx everyone!
Calculus Distraught Part 2 Post reply Calculus Distraught Part 2 What is the slope of the tangent line to the graph of y = x^3 at the point (2,8) ? Consider the parabola given by the equation y = x^2 + 4x - 5. At which point on the graph of this parabola is the slope of the tangent line equal to 10? The line y = 2x - 9 is tangent to the parabola y = x^2 + ax + b at the point (4,-1). What are the values of a and b? -3,-6 respectively. circle with equation x^2 + y^2 = 25. Which is the equation of the tangent line to the circle at the point (3,4)? 3x + 4y = 25 I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Re: Calculus Distraught Part 2 First one is correct! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Calculus Distraught Part 2 I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Real Member Re: Calculus Distraught Part 2 The second one is correct. The third one isn't. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Calculus Distraught Part 2 Eureka, and why? I have discovered a truly marvellous signature, which this margin is too narrow to contain. 
-Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Re: Calculus Distraught Part 2 a=-3 and b=-6 I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Re: Calculus Distraught Part 2 Fourth one is correct! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Calculus Distraught Part 2 Eureka again. More discoveries then Archimedes. I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Re: Calculus Distraught Part 2 He was a great mathematician. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Real Member Re: Calculus Distraught Part 2 I moght be wrong, but I am getting -6 and 7 in #3. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Calculus Distraught Part 2 You are correct! In mathematics, you don't understand things. You just get used to them. 
I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Calculus Distraught Part 2 Oh, my apologies. dy/dx = 2x + a so... I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes Young man, in mathematics you don't understand things. You just get used to them. - Neumann Post reply
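For completeness, the working that the last reply begins ("dy/dx = 2x + a so...") can be finished as follows, matching the answer a = -6, b = 7 confirmed in the thread:

```latex
\frac{dy}{dx} = 2x + a,\qquad 2\cdot 4 + a = 2 \;\Rightarrow\; a = -6,\qquad
-1 = 4^2 + 4(-6) + b \;\Rightarrow\; b = 7.
```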
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=235459","timestamp":"2014-04-17T12:36:01Z","content_type":null,"content_length":"22536","record_id":"<urn:uuid:e2542d99-9072-407c-8d0e-19d3e7e8b630>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/webbville725/answered","timestamp":"2014-04-20T11:00:37Z","content_type":null,"content_length":"76770","record_id":"<urn:uuid:5003ff95-c372-49f8-84c4-bc1d453d6cac>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Prove Triangles Congruent - SSS, SAS, ASA, AAS Rules (with worked solutions and videos) How to Prove Triangles Congruent - SSS, SAS, ASA, AAS Rules In this lesson, we will learn • the SSS, SAS, ASA and AAS rules • how to use two-column proofs to prove triangles congruent Related Topics: More Geometry Lessons Congruent Triangles Congruent triangles are triangles that have the same size and shape. This means that the corresponding sides are equal and the corresponding angles are equal. We can tell whether two triangles are congruent without testing all the sides and all the angles of the two triangles. In this lesson, we will consider the four rules to prove triangle congruence. They are called the SSS rule, SAS rule, ASA rule and AAS rule. In another lesson, we will consider a proof used for right triangles called the Hypotenuse Leg rule. As long as one of the rules is true, it is sufficient to prove that the two triangles are congruent. Side-Side-Side (SSS) Rule Side-Side-Side is a rule used to prove whether a given set of triangles are congruent. The SSS rule states that If three sides of one triangle are equal to three sides of another triangle, then the triangles are congruent. In the diagrams below, if AB = RP, BC = PQ and CA = QR, then triangle ABC is congruent to triangle RPQ. Side-Angle-Side (SAS) Rule Side-Angle-Side is a rule used to prove whether a given set of triangles are congruent. The SAS rule states that If two sides and the included angle of one triangle are equal to two sides and included angle of another triangle, then the triangles are congruent. An included angle is an angle formed by two given sides. Included Angle Non-included angle For the two triangles below, if AC = PQ, BC = PR and angle C = angle P , then using the SAS rule, triangle ABC is congruent to triangle QRP Angle-Side-Angle (ASA) Rule Angle-side-angle is a rule used to prove whether a given set of triangles are congruent. 
The ASA rule states that If two angles and the included side of one triangle are equal to two angles and the included side of another triangle, then the triangles are congruent. Angle-Angle-Side (AAS) Rule Angle-angle-side is a rule used to prove whether a given set of triangles are congruent. The AAS rule states that If two angles and a non-included side of one triangle are equal to two angles and a non-included side of another triangle, then the triangles are congruent. In the diagrams below, if AC = QP, angle A = angle Q, and angle B = angle R, then triangle ABC is congruent to triangle QRP. The following video will explain three ways to prove triangles congruent - A lesson on SAS, ASA and SSS Using Two Column Proofs to Prove Triangles Congruent Triangle Congruence by SSS - How to Prove Triangles Congruent Side Side Side Postulate If three sides of one triangle are congruent to three sides of another triangle, then the two triangles are congruent. Triangle Congruence by SAS - How to Prove Triangles Congruent SAS Postulate If two sides and the included angle of one triangle are congruent to two sides and the included angle of another triangle, then the two triangles are congruent. Prove Triangle Congruence with ASA Postulate Angle Side Angle Postulate If two angles and the included side of one triangle are congruent to two angles and the included side of another triangle, then the two triangles are congruent. Triangle Congruence by AAS Postulate Angle Angle Side Postulate If two angles and a nonincluded side of one triangle are congruent to two angles and a nonincluded side of another triangle, then the two triangles are congruent. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
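As a small illustrative aside (not part of the original lesson), the SSS rule amounts to checking that the three side lengths of two triangles match as multisets, which is easy to express in code:

```python
# SSS check: two triangles are congruent by SSS when their three side
# lengths agree (order does not matter, so compare the sorted lists).

def congruent_by_sss(sides_a, sides_b, tol=1e-9):
    """Return True when the sorted side lengths agree within tol."""
    return all(abs(p - q) <= tol
               for p, q in zip(sorted(sides_a), sorted(sides_b)))

# Triangle ABC with AB=3, BC=4, CA=5 vs. triangle RPQ with RP=3, PQ=4, QR=5:
print(congruent_by_sss([3, 4, 5], [5, 3, 4]))  # True
```

Note that a check like this only covers SSS; the SAS, ASA and AAS rules would also need angle information.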
{"url":"http://www.onlinemathlearning.com/prove-triangles-congruent.html","timestamp":"2014-04-18T21:12:21Z","content_type":null,"content_length":"44030","record_id":"<urn:uuid:5a770bc0-fd44-42c9-bc13-cd7e5cae2c45>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://dictionary.reference.com/science/list/q/quasi+discriminating/14","timestamp":"2014-04-21T06:15:36Z","content_type":null,"content_length":"43095","record_id":"<urn:uuid:72e23d49-bdb2-4a3b-bfc2-329334ccd488>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian nonparametrics and variance regression-: mixtures of Dirichlet processes and the slice sampler Seminar Room 1, Newton Institute Mixtures of Dirichlet Processes (MDP) have been widely used as a method of overcoming the discreteness of the Dirichlet Process (DP). The two approaches taken to sample from the Dirichlet measure are: the marginal approach (Escobar and West 1995) where the measure is integrated out within the Gibbs sampler via a clever use of the Polya Urn construction of the DP and the conditional approach (see Ishwaran and Zarepour 2000,2002) which makes use of the infinite sum construction of the DP (see Sethuraman 1994). The ways around this infinite sum construction are either approximations or truncations (Ishwaran and Zarepour 2000) or using the retrospective sampler (see Papaspiliopoulos and Roberts 2005). The retrospective sampler deals with the infinite sum directly, via use of reversible jump steps. We introduce a simpler sampler, which instead of using reversible jumps, introduces an auxiliary variable and incorporates the slice sampler within the construction of the posteriors for the Gibbs sampler (see P. Damien, J. Wakefield, S.G.Walker 1999). The new algorithm works with the infinite sum construction of the DP from the very beginning and by introducing auxiliary variables the Gibbs sampler updating is done within finite sets.
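As a rough, illustrative sketch (not from the talk itself) of the infinite-sum construction of the DP due to Sethuraman that the abstract refers to, here is a truncated stick-breaking draw of the weights; the slice sampler discussed above avoids a fixed truncation like this one by introducing an auxiliary variable so that only finitely many atoms are needed per Gibbs iteration:

```python
import random

def stick_breaking_weights(alpha, num_sticks, rng):
    """Sethuraman's construction: v_k ~ Beta(1, alpha),
    w_k = v_k * prod_{j<k} (1 - v_j), truncated at num_sticks terms."""
    weights, remaining = [], 1.0
    for _ in range(num_sticks):
        v = rng.betavariate(1.0, alpha)   # Beta(1, alpha) stick proportion
        weights.append(remaining * v)
        remaining *= 1.0 - v              # mass left for later sticks
    return weights

rng = random.Random(0)
w = stick_breaking_weights(alpha=1.0, num_sticks=1000, rng=rng)
# With enough sticks, the truncated weights cover essentially all of the unit mass.
```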
{"url":"http://www.newton.ac.uk/programmes/BNR/seminars/2007080716301.html","timestamp":"2014-04-19T06:59:55Z","content_type":null,"content_length":"5236","record_id":"<urn:uuid:956fe46e-dc51-4260-bc9a-75e1f7755e94>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
Torque What's Torque What is torque and tension? Torque is a twisting or turning force. Tension is the act or process of stretching something tight. Torque is used for creating tension. Basic Torque Calculations Therefore, a longer lever will require less hand force to produce the same amount of torque. T1 = F1 x L1 = 10 lbf x 2 ft = 20 ft·lbf T2 = F2 x L2 = 20 lbf x 1 ft = 20 ft·lbf T1 = T2 Weight and Mass Mass will not change anywhere on the earth, even under zero gravity conditions, while weight is the force caused by an acceleration acting on the body. In a zero-gravity zone there is no feeling of weight. The gravitational acceleration differs depending on your latitude on the earth. The weight of an object depends on its mass and the strength of the gravitational pull. Examples of Force, Mass, and Length Units • Force Unit: [N] Newton, SI Unit (International Standard) □ One Newton [N] (equivalent to about 0.1 [kgf]) is the force caused by accelerating a mass of 1 kg at 1 m/s2 □ [kgf] (kilogram force): the previous Japanese standard unit prior to adoption of the SI International Standard • Mass Unit: [kg] kilogram • Length Unit: [m] meter Torque Units: SI, Metric and American Because torque is a product of length and force, the units used to describe torque reference both a force and a length. There are three common torque units: SI (International Standard) based on Newton meters, Metric based on kilogram force centimeters, and American/English based on inch pounds. • SI Unit: [N.m] Newton meter □ 1000 [mNm] = 100 [cNm] = 1 [Nm] = 0.001 [kNm] • Metric Unit: [kgf.cm] kilogram force centimeter □ 1000 [gf.cm] = 1 [kgf.cm] = 0.01 [kgf.m] • Imperial/American/English Unit: [lbf.in] inch pounds □ 16 [ozf.in] = 1 [lbf.in] = 0.0833 [lbf.ft] Why do we tighten bolts and screws? Fastener tightening is done in order to stop objects from moving--to fix them. The following are the major objectives of fastener tightening: 1. For fixing and joining objects 2.
For transmitting driving force and braking force 3. For sealing drain bolts, gas, and liquid The fixing force is referred to as axial tension or tightening force and the objective of screw tightening is to apply an appropriate amount of axial tension. Although axial tension is really what needs to be controlled and measured, it is very difficult to do so, therefore torque is used as a substitute characteristic for administering and controlling tightening operations.
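As a quick sketch (plain arithmetic, not Tohnichi software), the lever-arm example and the unit relationships above can be written as:

```python
def torque(force, lever_length):
    """Torque = force x lever length; units follow the inputs."""
    return force * lever_length

# The two levers from the example both give 20 ft*lbf:
t1 = torque(10, 2)   # 10 lbf on a 2 ft lever
t2 = torque(20, 1)   # 20 lbf on a 1 ft lever
assert t1 == t2 == 20

# Conversions between the three unit systems
# (standard factors: 1 kgf = 9.80665 N, 1 lbf = 4.44822 N, 1 in = 0.0254 m):
def nm_to_kgf_cm(nm):
    return nm / 9.80665 * 100.0      # 1 Nm is about 10.2 kgf*cm

def nm_to_lbf_in(nm):
    return nm / (4.44822 * 0.0254)   # 1 Nm is about 8.85 lbf*in
```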
{"url":"http://www.tohnichi.com/torque.asp","timestamp":"2014-04-19T14:54:40Z","content_type":null,"content_length":"27287","record_id":"<urn:uuid:b8fe85a2-9178-4194-bb64-4b33c2dbdc71>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
SnoWest News: Set My Ponies Free This past week a startling truth was presented that caused the entire foundation of my snowmobiling knowledge to become discombobulated. There are no tiny ponies powering my snowmobile; i.e. horsepower does not mean actual horses. That’s right, horsepower doesn’t literally mean the power of horses. It’s a mathematical formulation that consists of “work over time equals force times distance over time, which equals 180 lbf times 2.4 times 2π times 12 feet over one minute, which equals 32,572 ft·lbf/minute” … or something sort of scientific/mathematical like that. Okay, so maybe the original formula for horsepower was based on how much power is required to equal the pulling power of a horse. But at no time did these great minds figure out how to actually put a tiny horse inside a combustion engine to turn the power wheel. (They did figure out how to put hamsters in a cage to turn a wheel … but then the hamsters’ union got involved and required so many breaks per hour plus benefits and made this advancement in technology cost prohibitive.) So now, rather than little tiny horses causing our snowmobile to climb tall mountains, we actually have math geeks powering our sleds. This mere knowledge in itself has caused me to second-think the next time I look at a death-defying vertical climb. What would happen if I’m nearing the peak of the mountain only to have one of my math geeks lose his inhaler? This is definitely a formula for disaster. This has also caused me significant concern when I ponder the fact that those engineers who are designing snowmobile engine technology actually rely on this horsepower formulation to determine the power output of any given engine. Now it makes some sense why snowmobile manufacturers for years have tried to keep horsepower ratings out of print.
They don’t want their customers cracking open a cylinder only to find that all their tiny ponies have escaped … when in reality those ponies were never actually there to begin with. So now, the next time you hear someone boasting about having a 300-horsepower turbo, you can simply smile and pat them on the head and say “that’s nice.” Likely, if they still believe this horsepower myth, they also believe in Santa Claus and the tooth fairy.
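For what it is worth, the mangled formula quoted above really is James Watt's mill-horse arithmetic, and it checks out:

```python
import math

# 180 lbf of pull, 2.4 turns per minute around a 12 ft radius mill wheel:
power = 180 * 2.4 * (2 * math.pi * 12)   # ft*lbf per minute
# comes out near 32,572 ft*lbf/min, which Watt rounded up to the
# standard definition of 1 hp = 33,000 ft*lbf/min
```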
{"url":"http://www.snowest.com/snowmobile-news/print.cfm?id=3662","timestamp":"2014-04-19T13:31:50Z","content_type":null,"content_length":"5393","record_id":"<urn:uuid:98340053-9d73-4038-b4ed-d2cef218eecf>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
19 search hits Hadron production from a hadronizing quark gluon plasma (1997) Christian Spieles Horst Stöcker Walter Greiner Measured hadron yields from relativistic nuclear collisions can be equally well understood in two physically distinct models, namely a static thermal hadronic source versus a time-dependent, non-equilibrium hadronization off a quark gluon plasma droplet. Due to the time-dependent particle evaporation off the hadronic surface in the latter approach, the hadron ratios change (by factors of ~5) in time. The overall particle yields then reflect time averages over the actual thermodynamic properties of the system at a certain stage of evolution. Microscopic calculation of secondary Drell-Yan production in heavy ion collisions (1997) Christian Spieles Lars Gerland Nils Hammon Marcus Bleicher Steffen A. Bass Horst Stöcker Walter Greiner Carlos Lourenco Ramona Vogt A study of secondary Drell-Yan production in nuclear collisions is presented for SPS energies. In addition to the lepton pairs produced in the initial collisions of the projectile and target nucleons, we consider the potentially high dilepton yield from hard valence antiquarks in produced mesons and antibaryons. We calculate the secondary Drell-Yan contributions taking the collision spectrum of hadrons from the microscopic model UrQMD. The contributions from meson-baryon interactions, small in hadron-nucleus interactions, are found to be substantial in nucleus-nucleus collisions at low dilepton masses. Preresonance collisions of partons may further increase the yields. Hypermatter in chiral field theory (1997) Panajotis Papazoglou Detlef Zschiesche Stefan Schramm Horst Stöcker Walter Greiner A generalized Lagrangian for the description of hadronic matter based on the linear SU(3)L × SU(3)R model is proposed. Besides the baryon octet, the spin-0 and spin-1 nonets, a gluon condensate associated with broken scale invariance is incorporated. 
The observed values for the vacuum masses of the baryons and mesons are reproduced. In mean-field approximation, vector and scalar interactions yield a saturating nuclear equation of state. Finite nuclei can be reasonably described, too. The condensates and the effective baryon masses at finite baryon density and temperature are discussed. Chiral Lagrangian for strange hadronic matter (1997) Panajotis Papazoglou Stefan Schramm Jürgen Schaffner-Bielich Horst Stöcker Walter Greiner A generalized Lagrangian for the description of hadronic matter based on the linear SU(3)L × SU(3)R model is proposed. Besides the baryon octet, the spin-0 and spin-1 nonets, a gluon condensate associated with broken scale invariance is incorporated. The observed values for the vacuum masses of the baryons and mesons are reproduced. In mean-field approximation, vector and scalar interactions yield a saturating nuclear equation of state. We discuss the difficulties and possibilities to construct a chiral invariant baryon-meson interaction that leads to a realistic equation of state. It is found that a coupling of the strange condensate to nucleons is needed to describe the hyperon potentials correctly. The effective baryon masses and the appearance of an abnormal phase of nearly massless nucleons at high densities are examined. A nonlinear realization of chiral symmetry is considered, to retain a Yukawa-type baryon-meson interaction and to establish a connection to the Walecka model. Hot nuclear matter in the quark meson coupling model (1997) P. K. Panda Amruta Mishra Judah M. Eisenberg Walter Greiner We study here hot nuclear matter in the quark meson coupling model, which incorporates explicitly quark degrees of freedom, with quarks coupled to scalar and vector mesons. The equation of state of nuclear matter including the composite nature of the nucleons is calculated at finite temperatures. The calculations are done taking into account the medium-dependent bag constant. 
Nucleon properties at finite temperatures as calculated here are found to be appreciably different from the value at T=0. Dilepton production by bremsstrahlung of meson fields in nuclear collisions (1997) Igor N. Mishustin Leonid M. Satarov Horst Stöcker Walter Greiner We study the bremsstrahlung of virtual omega mesons due to the collective deceleration of nuclei at the initial stage of an ultrarelativistic heavy ion collision. It is shown that electromagnetic decays of these mesons may give an important contribution to the observed yields of dileptons. Mass spectra of e+e− and µ+µ− pairs produced in central Au+Au collisions are calculated under some simplifying assumptions on the space-time variation of the baryonic current in a nuclear collision process. Comparison with the CERES data for 160 AGeV Pb+Au collisions shows that the proposed mechanism gives a noticeable fraction of the observed e+e− pairs in the intermediate region of invariant masses. Sensitivity of the dilepton yield to the in-medium modification of masses and widths of vector mesons is demonstrated. Collective mechanism of dilepton production in high-energy nuclear collisions (1997) Igor N. Mishustin Leonid M. Satarov Horst Stöcker Walter Greiner Collective bremsstrahlung of vector meson fields in relativistic nuclear collisions is studied within the time-dependent Walecka model. Mutual deceleration of the colliding nuclei is described by introducing the effective stopping time and average rapidity loss of baryons. It is shown that electromagnetic decays of virtual ω mesons produced by the bremsstrahlung mechanism can provide a substantial contribution to the soft dilepton yield at the SPS bombarding energies. In particular, it may be responsible for the dilepton enhancement observed in 160 AGeV central Pb+Au collisions. Suggestions for future experiments to estimate the relative contribution of the collective mechanism are given. 
Structure of the vacuum in nuclear matter: a nonperturbative approach (1997) Amruta Mishra P. K. Panda Stefan Schramm Joachim Reinhardt Walter Greiner We compute the vacuum polarization correction to the binding energy of nuclear matter in the Walecka model using a nonperturbative approach. We first study such a contribution as arising from a ground-state structure with baryon-antibaryon condensates. This yields the same results as obtained through the relativistic Hartree approximation of summing tadpole diagrams for the baryon propagator. Such a vacuum is then generalized to include quantum effects from meson fields through scalar-meson condensates, which amounts to summing over a class of multiloop diagrams. The method is applied to study properties of nuclear matter and leads to a softer equation of state, giving a lower value of the incompressibility than would be reached without quantum effects. The density-dependent effective sigma mass is also calculated including such vacuum polarization effects. Relativistic transport theory of N, Delta and N*(1440) interacting through sigma, omega and pi mesons (1997) Guangjun Mao Ludwig Neise Horst Stöcker Walter Greiner Zhuxia Li A self-consistent relativistic integral-differential equation of the Boltzmann-Uehling-Uhlenbeck type for the N*(1440) resonance is developed based on an effective Lagrangian of baryons interacting through mesons. The closed time-path Green's function technique and semi-classical, quasi-particle and Born approximations are employed in the derivation. The non-equilibrium RBUU-type equation for the N*(1440) is consistent with those of the nucleons and deltas which we derived before. Thus, we obtain a set of coupled equations for the N, Delta and N*(1440) distribution functions. 
All the N*(1440)-relevant in-medium two-body scattering cross sections within the N, Delta and N*(1440) system are derived from the same effective Lagrangian in addition to the mean field and presented analytically, which can be directly used in the study of relativistic heavy-ion collisions. The theoretical prediction of the free pp → pp*(1440) cross section is in good agreement with the experimental data. We calculate the in-medium N+N → N+N*, N*+N → N+N and N*+N → N*+N cross sections in cold nuclear matter up to twice the nuclear matter density. The influence of different choices of the N*N* coupling strengths, which cannot be obtained through fitting certain experimental data, is discussed. The results show that the density dependence of the predicted in-medium cross sections is sensitive to the N*N* coupling strengths used. An evident density dependence will appear when a large scalar coupling strength g^(sigma)N*N* is assumed. PACS number(s): 24.10.Cn; 25.70.-z; 21.65.+f Transition to delta matter from hot, dense nuclear matter within a relativistic mean field formulation of the nonlinear sigma and omega model (1997) Zhuxia Li Guangjun Mao Yizhong Zhuo Walter Greiner An investigation of the transition to delta matter is performed based on a relativistic mean field formulation of the nonlinear sigma and omega model. We demonstrate that in addition to the Delta-meson coupling, the occurrence of the baryon resonance isomer also depends on the nucleon-meson coupling. Our results show that for the favored phenomenological values of m* and K, the Delta isomer exists at baryon density ~ 2–3 ρ0 if beta=1.31 is adopted. For universal coupling of the nucleon and Delta, the Delta density at baryon density ~ 2–3 ρ0 and temperature ~ 0.4–0.5 fm-1 is about normal nuclear matter density, which is in accord with a recent experimental finding.
{"url":"http://publikationen.stub.uni-frankfurt.de/solrsearch/index/search/searchtype/authorsearch/author/%22Walter+Greiner%22/start/0/rows/10/yearfq/1997/sortfield/author/sortorder/desc","timestamp":"2014-04-17T06:50:23Z","content_type":null,"content_length":"48674","record_id":"<urn:uuid:d55a58f4-7636-4f43-b82d-cbd7f4f0bef9>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Verifying trig identities? March 19th 2009, 07:59 AM #1 Mar 2009 Verifying trig identities? Okay, I've been getting some of these, but I can't seem to verify this identity... any help? Here's the problem Sin(x+y) + Sin(x-y) = 2sinxcosy Okay, I've been working on the left side, and distribute, getting: Sinx + Siny + Sinx - Siny And, the sinx's add up to the 2sinx that I need for the right side, but the siny's cancel out if I don't change them around. So I changed one of them to 1/cscy... but I can't seem to work with that and the other siny to end up with cosy. Where did I go wrong, or where do I go with it now? do not distribute use trig. formula for sin(x+y) and sin(x-y) sin(x+y) = sinx cosy + cosx siny sin(x-y) = sinx cosy - cosx siny now add try the following page for trigonometric identities mixture: trigonometric identities oh, dang, forgot about that identity! Okay, well, adding that up it ends up as 2sinx2cosy... but I need it to be 2sinx(1)cosy. or am I really tired and I'm thinking too out of the box to realize the entire term of 2sinxcosy consists of 2sinx's and 2cosy's? if not, what do I switch up to drop a cosy to satisfy the right side of the equation? oh man, am i sure forgetting everything about math, lol. March 19th 2009, 08:23 AM #2 Junior Member Jan 2007 March 19th 2009, 10:28 AM #3 Mar 2009 March 19th 2009, 10:58 AM #4
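Putting the sum and difference formulas from the second reply side by side, the identity the thread is after verifies in one line:

```latex
\sin(x+y) + \sin(x-y) = (\sin x\cos y + \cos x\sin y) + (\sin x\cos y - \cos x\sin y) = 2\sin x\cos y.
```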
{"url":"http://mathhelpforum.com/trigonometry/79503-verifying-trig-identities.html","timestamp":"2014-04-18T10:29:07Z","content_type":null,"content_length":"38587","record_id":"<urn:uuid:60c502e5-7e40-4fde-9f3f-4d1d075ea1d6>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Eureka - Operation Research Generalized Inventory Management Model The inventory control problem arises when it becomes necessary to create a stock of material resources or commodities with the purpose of meeting demand within a given time span (finite or infinite). The challenge in any inventory control problem is to determine the quantity of products to be ordered and the moment for placing the order, both of which affect the costs. Decisions regarding the size of an order and the moment of its placement can be based upon the minimization of the corresponding overall cost function. The total cost of an inventory management system can be described as a function of its primary components as follows: Total cost of an inventory management system = Acquisition cost + Ordering cost + Holding cost + Deficiency losses Unfortunately, a general solution to the inventory control problem cannot be obtained on the basis of one model. Therefore, many diverse models describing different particular cases have been developed. One of the determining factors in the development of an inventory management model is the pattern of demand. Dynamic Lot Sizing Model Consider a situation where demand for an item is known in advance with certainty for a number of future periods, and the demand for the item varies from period to period. For many practical situations, the demand is known precisely for a certain number of periods. Such a situation frequently arises in material management, that is, where the item is a raw material or a component part in a manufacturing process. Demand for the item for several periods in advance can be inferred from the master production schedule during the MRP process. We must plan a sequence of orders, or production batches, over a T-period planning horizon. In each period, a single decision must be made: the size of the order or the production batch. 
To specify the problem and model, we will make use of the following notation: t = time period (e.g., day, week, month); we will consider t = 1,2,…,T, where T represents the planning horizon. dt = demand for period t (in units, non-negative integers). Ct = unit production cost for period t, not including setup or inventory costs. At = setup (order) cost to produce (purchase) a lot in period t. Ht = holding cost to carry a unit of inventory from period t to period t+1. It = inventory (in units) left over at the end of period t. Xt = lot size (in units, non-negative integers) in period t (the decision variable). Using these variables, the problem can be formulated as follows: Min ∑t=1..T [ At·δ(Xt) + Ct·Xt + Ht·It ], where δ(Xt) = 1 if Xt > 0 and 0 otherwise, subject to It = It-1 + Xt - dt (*), It, Xt >= 0 (t = 1..T). Here constraints (*) are called inventory-balance constraints. Note that the inventory can also be rewritten as It = I0 + ∑s=1..t (Xs - ds), and therefore the It variables can be eliminated from the formulation. The basic problem is to meet all the demands at minimal cost. The only controls are the production quantities. We will also make the following standard assumptions: - Lead time is zero; orders arrive immediately. (Unit production time is negligible. A different assumption can be made for real conditions, under which the unit production time is known with negligible inaccuracy: if it takes two weeks to produce a lot of units and, according to the production program, the demand is met by the units produced in February, then the manufacture of that lot is to be launched two weeks ahead, i.e. in the second half of January.) - Demand is needed and consumed on the first day of a period. Holding costs are therefore not charged on the items consumed during the period. 
(Also, it is possible to model a situation where inventory charges are estimated from an average inventory level.) - Costs for each period depend on the current lot size and on the inventory level at the end of the period; demand for each period is met completely. There is a series of techniques for solving this problem. Some of them are heuristic methods; i.e. they do not guarantee obtaining the optimum solution. Moreover, the solutions can often be very far from optimal. There are exact algorithms too. One of them is the Wagner-Whitin algorithm. However, in order to apply the Wagner-Whitin algorithm, one important restriction is to be observed: at each stage, the production (purchase) cost per unit and the holding cost are concave functions (constant or non-increasing) of the volume of produced (purchased) and held units, respectively. The dynamic programming method relieves us from this restriction and allows applying arbitrary cost functions. However, it has a high computational cost, which increases very quickly with the number of possible actions. This peculiarity is to be taken into account when composing models. Since the values entered in the model are discrete (integer), each possible value represents an action choice. Therefore, we should choose a unit that actually affects the result of the managerial decision. For example, instead of describing a model in kilograms, it may be better to switch to larger units (a centner, 500 kg, a ton, a carriage…), if there is no real need for such a small unit. This would essentially cut the calculations. Also, limiting the upper bound of possible order sizes will let us do without labor-intensive calculations. Eureka: Dynamic Inventory for Excel (download) This is an Excel-based dynamic order size problem solver, using the dynamic programming method and the Wagner-Whitin algorithm. Once the dInventory Add-in is installed, Excel will get a corresponding toolbar. 
To create a new model table, click “Create”. That will create a new sheet with the model table that includes the title and one line. The number of lines in the table matches the number of periods – the planning horizon. In general, nothing binds you to lock periods to equal periods of time. Use the “+” and the “-“ buttons to increase or decrease the number of periods. For each period, we are to determine • Demands (integer values.) • Order Cost function. It unites Setup Cost and Order Cost. These may depend on Order Quantity for current period (as a rule), any other cell outside the table, or remain constant. But directly or indirectly they cannot depend on other cells of the table. • Holding Cost function. May depend on inventory volume for current period (as a rule), any other cell outside the table, or remain constant. • Initial inventory. • Maximum order quantity for current period. Provided only for purposes of the dynamic programming algorithm. By default it’s equal to the total demand from current period until next one. The remainder is calculated by the formula: Inventory[i] = Inventory[i-1] + OrderQuantity[i] - Demand[i.] The “Run(DP)” button launches the search for a solution with the dynamic programming algorithm; the “Run(WW)” button does the same with the Wagner-Whitin algorithm. The solution for the model is the Order Quantity column. Building the model in Excel imposes certain restrictions on the productivity of the calculation. You can order (suren.tamrazyan@gmail.com) the development of a custom application, which would run noticeably faster and probably with your information system.
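As an illustrative sketch only (the add-in's actual implementation is not shown here), a dynamic program like the one behind a "Run(DP)"-style solver can be written as follows; it assumes zero initial inventory and orders that cover whole periods, i.e. the zero-inventory-ordering property that holds under the concave-cost restriction mentioned above:

```python
def solve_lot_sizing(demand, order_cost, unit_cost, holding_cost):
    """Minimal-cost plan: best[t] = cheapest way to cover demand[t:].
    An order placed in period t covers demand for periods t..k."""
    T = len(demand)
    best = [0.0] * (T + 1)                 # best[T] = 0: nothing left to cover
    for t in range(T - 1, -1, -1):
        best[t] = float("inf")
        for k in range(t, T):
            qty = sum(demand[t:k + 1])
            # units for period s+1..k are carried across the boundary s -> s+1
            carry = sum(holding_cost[s] * sum(demand[s + 1:k + 1])
                        for s in range(t, k))
            cost = order_cost[t] + unit_cost[t] * qty + carry + best[k + 1]
            best[t] = min(best[t], cost)
    return best[0]

# Three periods, setup cost 100, unit cost 1, holding cost 1 per unit-period:
print(solve_lot_sizing([10, 20, 30], [100] * 3, [1] * 3, [1] * 3))  # 240.0
```

In the example the cheapest plan is a single order of 60 units in period 1 (one setup of 100, units 60, carrying 80), which beats ordering every period (360).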
Dosage Calculations Test

Hi everyone, I have recently been accepted into a local ADN program and was informed that we will have a dosage calculations test 2 weeks into the program. We must make a 90% and have 2 attempts, but my question is: will they prepare us for this during the first couple of weeks, or will we have to teach ourselves? I've never done dosage calculations before and want to know the best way to prepare. Thank you!

We had to teach ourselves. Ask your program if they will prepare you. Be glad; we had to pass at 100 percent.

I also had to teach myself. I got three tries and had to make 100 percent. And we are tested every semester on the first day of class. Find a good drug calculations workbook and work through it.

"Calculation of Drug Dosages" by Ogden is a great workbook to practice your material. We used this in Pharmacology. We were expected to learn it on our own. Had to pass the test with 90% or better or you failed the entire class. We have to do the same thing in nursing school as well. This book was very helpful, and doing the practice quizzes enabled me to get high scores on each of my tests. My first test was a 96% and I got 100% on the remaining tests. So, get a copy of this book and practice, practice, practice! If you have trouble with dimensional analysis or how to approach a dosage problem, I highly recommend the book "Math Attack: Strategies for Winning the Pharmacology Math Battle" by Karen Champion. It is GREAT!!! She gives you several different approaches to the same problem and you choose the method that works best for you. It really helped me find the solution style that "clicks" with me. She's also very reassuring and makes you feel like it's not impossible to do well in pharm math. After completing Pharm & passing my math exams, I feel like it'll be no problem in nursing school, since we'll continue with the same book. Good luck to you!!!!
That sounds a lot like the BSN program I am starting in the fall. I think we spend part of the first day of class talking about dosage calculations and then have a test two weeks later. I bought Dosage Calculations Made Incredibly Easy. I'm pretty good at math, I just haven't had to do it for a while.

In my school they reviewed the types of questions that would be on the exam, and after that you were on your own. One or two teachers may say you can come to them, but for the most part the class utilized each other for help. If you study each day and find your own way and method of doing the questions, you will be fine. Trust me, because I SUCK at math. Like, I've sucked since kindergarten...lol. And I passed every one of those exams with a 90 or higher. Good luck!

It is simple math, but you just have to remember to keep all of your needed information straight. The way we do it at our school is that in semester 2 you have to get a 90% on the 1st try, 95% in the 3rd semester, and 100% is needed in the last semester. You had two chances to retake, but you needed 100% on the retake. If not, you were on clinical probation and not allowed to pass meds.

I remember having a dosage calculations test for nearly every class. In the beginning, we had instruction on dosage calculations, such as: 650mg Tylenol ordered, have 325mg caplets, how many would you administer? As well as how many mL's to give; for example, MD orders 2mg morphine, have 5mg/1mL on hand, how many mL's would you give? Etc. For everything else, we had to teach ourselves. I suggest getting the "Calculate With Confidence" workbook. I'm mathematically-challenged and that book really helped me. It was also the required text for my program.

You had to have had at least intro algebra to get into nursing school. All medication calcs are "solve for x." They're actually easier than most of algebra.
If you can go to the grocery store and figure out which product is a better buy (price per ounce, say) then seriously, you ought to be able to pass this test. This is why they expect you to "teach yourself," because you are supposed to already know it and have the basic analytic ability to figure it out. Seriously. But hey. I see a lot of nursing students here who forgot their basic "solve for x" algebra course. Pick up a high school sophomore algebra book and check it out.
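For what it's worth, the "solve for x" here boils down to one ratio: desired dose over dose on hand. A tiny sketch (the function name and the example numbers are just illustrative, not from any nursing text):

```python
def units_to_give(ordered_dose, dose_on_hand):
    """Desired dose / dose on hand = number of tablets (or mL) to administer."""
    return float(ordered_dose) / dose_on_hand

# 650 mg ordered, 325 mg caplets on hand -> give 2 caplets
# 2 mg morphine ordered, 5 mg per 1 mL on hand -> give 0.4 mL
```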
Map Complexity [Public Discussion/Review]

As we announced in the past (in this thread), at some point we will have a public map gallery, which is nothing else than the public side of the big map database we're currently working on. After some discussion behind the scenes, we decided that it would be a good thing to have a public discussion for two items we want to add to the map gallery/database: Tags and Complexity Levels. In this topic we will discuss Complexity Levels; if you want to discuss Tags, please use this thread.

Right now we have set 4 complexity levels. Unfortunately complexity levels are very subjective, so we would like to hear your thoughts before proceeding further. The current complexity levels are:

• Simple
• Standard
• Difficult
• Extreme

A few questions about this topic:
1. How many values would you like to have for complexity, and what should they be called?
2. We can agree that the Classic map should be the basis for comparison; in this case, should it be considered Simple or Standard?

Give every man your ear, but few thy voice. Take each man's censure, but reserve thy judgment.

Re: Map Complexity [Public Discussion/Review]

4 is good but 3 (no Simple) might be the way to go. The Classic map should be Standard. Then look at the xml features of a map. Does it have:

• Bombarding
• Killer Neutrals
• (Multiple) Winning Conditions
• (Multiple) Losing Conditions

Assign points to these things to determine how much more complex, difficult or nonstandard a map is, and use that to determine if a map is Difficult or Extreme.

Re: Map Complexity [Public Discussion/Review]

greenoaks wrote: 4 is good but 3 (no Simple) might be the way to go. Classic map should be Standard.

Never more than 3 levels. Classic is the benchmark for standard maps.

Re: Map Complexity [Public Discussion/Review]

laughingcavalier wrote: Never more than 3 levels. Classic is the benchmark for standard maps.
i like Challenging/Complex more than Difficult/Extreme

Re: Map Complexity [Public Discussion/Review]

greenoaks wrote: 4 is good but 3 (no Simple) might be the way to go. Classic map should be Standard. Then look at the xml features of a map. Does it have: Bombarding, Killer Neutrals, (Multiple) Winning Conditions, (Multiple) Losing Conditions. Assign points to these things to determine how much more complex, difficult or nonstandard a map is, and use that to determine if a map is Difficult or Extreme.

I like the points idea. The handful of classifications is good for a shorthand, but detailed ratings based on features should be available so players can make their own judgements more easily.

Re: Map Complexity [Public Discussion/Review]

greenoaks wrote: Bombarding, Killer Neutrals, (Multiple) Winning Conditions, (Multiple) Losing Conditions

I would start with that list, but I would add a few more things:
1. One-way portals
2. Graphic difficulty (maps where players find it difficult to see the attack pathways and such for purely visual reasons; Northwest Passage, Falklands War, and Poison Rome spring to mind)
3. Overall number of terts

One-way portals would be part of the base difficulty, but graphic difficulty and number of terts would be multipliers. So, you would have a formula much like:

Difficulty = (1 + Number of Complicating Factors {bombards, one-ways, collection bonuses, autodeploys, victory conditions, losing conditions, conditional borders, killer neutrals}) x (1 + (subjective popular rating of graphical difficulty on a scale of 1 to 5)/10) x (1 + (number of terts on map / number of terts on largest known map on site))

This gives you a difficulty rating between 1 and about 27. However, it is expandable if even more Complicating Factors are introduced later. To make neater numbers I suppose we could take the square root of the whole thing.
SQRT[(1 + Number of Complicating Factors {bombards, one-ways, collection bonuses, autodeploys, victory conditions, losing conditions, conditional borders, killer neutrals}) x (1 + (subjective popular rating of graphical difficulty on a scale of 1 to 5)/10) x (1 + (number of terts on map / number of terts on largest known map on site))]

gives a value of roughly 1 to 5. I agree that Classic should be the benchmark for what is a Standard map. However, Classic by my formula comes out with a rating of only about 1.25 to maybe 1.5, so the range for a Simple map would be only from 1 to 1.2. This would apply to really simple maps like Doodle and Luxembourg. I think we need to have a lot more than 3 grades, however. If Simple describes Doodle and Standard describes Classic, it's ridiculous that there should be only one level above that. Arms Race is a whole order of magnitude more complex than Classic, but certainly Lunar War is another order of magnitude tougher than Arms Race, and Stalingrad is easily another order of magnitude tougher than Lunar War, and so on. I don't think it would be wrong to have five or seven levels above Standard.

Re: Map Complexity [Public Discussion/Review]

If we go with Dukasaur, then we could have 10 or more levels of complexity. This would be way too much. 5 is a good number.

Beginners - Doodle
Standard - Classic
Challenging - 1982
Complex - Das Schloss
Uber Complex - Stalingrad, Waterloo

Re: Map Complexity [Public Discussion/Review]

koontz1973 wrote: If we go with Dukasaur, then we could have 10 or more levels of complexity. This would be way too much. 5 is a good number.
Beginners - Doodle
Standard - Classic
Challenging - 1982
Complex - Das Schloss
Uber Complex - Stalingrad, Waterloo

i would scratch Beginners because the Classic map is the beginners map.

We have simpler maps though. So maybe not Beginners, but something that shows a level of game play a blind cat could play. It could be Elementary or Standard Lite.

Re: Map Complexity [Public Discussion/Review]

koontz1973 wrote: If we go with Dukasaur, then we could have 10 or more levels of complexity. This would be way too much. 5 is a good number. Beginners - Doodle; Standard - Classic; Challenging - 1982; Complex - Das Schloss; Uber Complex - Stalingrad, Waterloo

i agree with koontz about 5 levels...but koontz, how the hell did you get Waterloo into the Uber Complex category and Das Schloss into the Complex category?

* CC: Perth * Massacre a Paris * Rail S America * Promontory Summit * Gallipoli *

Re: Map Complexity [Public Discussion/Review]

cairnswk wrote: i agree with koontz about 5 levels...but koontz how the hell did you get Waterloo into the Uber Complex category and Das Schloss into the Complex category?

Personal preference. But this leads on to the next discussion: how the hell do you put a map into a category without hiring a math professor to run complex equations? My vote would then go for:

Standard Lite
Standard
Challenging
Complex
Uber Complex

and leave it up to someone else to put maps into each.
Re: Map Complexity [Public Discussion/Review]

greenoaks wrote: there is no need to have anything before Standard. Classic is the basic training map, the map for beginners. Sure, split the maps above Classic into many groups, but it is pointless to create a group before Beginners, because the only thing before beginners is 'Never played Risk before'.

+1 again. And the difficulty of deciding which complexity rating a map gets is more argument for keeping the complexity ratings simple & few. You could say Doodle is more complex than Classic, for example, because the action is so fast on a small map that you need to get your game right from turn 1... Only 3 complexity levels please.
And I do like the idea of some sort of mathematic formula, but I think the one Duka mentioned is too complex, lol. Re: Map Complexity [Public Discussion/Review] koontz1973 wrote: cairnswk wrote: koontz1973 wrote:If we go with Dukasaur, then we could have 10 or more levels of complexity. This would be way too much. 5 is a good number. Beginners - Doodle Standard - Classic Challenging - 1982 Complex - Das Schloss Uber Complex - Stalingrad, Waterloo i agree with koontz about 5 levels...but koontz how the hell did you get Waterloo into the Uber Complex category and Das Schloss into the Complex category? Personal preference. But this leads onto the next discussion. How the hell do you put a map into a category without hiring a math professor to run complex equations. Well, that's the beauty of computers: they run complex math equations all day long, and never get sick of the job. And they're cheaper than hiring professors. When something is to be computerized, there's no reason not to account for all the factors in detail. Simplifying formulas is for work you might have to do in the field, where you might have to work stuff out with pen-and-paper. Since CC is on the Internet, you will by definition be using a computer every time you're on CC. My vote would then go for: Standard Lite Uber Complex and leave it up to someone else to put maps into each. The first part of my formula, the Complicating Factors, would be entered by the mapmaker or possibly a cartography advisor, via a simple set of radio buttons, like answering a forum poll. A matter of a few seconds. The second part, the graphical difficulty, would probably best be done by a poll of the players, since there's a large subjective element. Again, a simple matter of them being directed to a forum poll, after that the computer does the work. The calculation of the number of terts could be completely automatic as would the crunching of the numbers and the presentation of a final score.
Limit evaluation

November 28th 2010, 05:57 AM

Hi forum members, I've attached 2 limits I'm struggling to evaluate. Could you give me a lead on them? Thanks in advance.

November 28th 2010, 06:20 AM

For the first one, you can find the limit of each term and sum them. Since the degree of the numerator is less than the denominator's in each case, each term approaches $0$. Hence, the limit is...

For the second one, it would be useful to rewrite it as $(3n^2 + 20n + 13)$ to the power $1/n$. The exponent here approaches $0$, and any positive number to the power $0$ is...

November 28th 2010, 06:46 AM

Quote: For the first one, you can find the limit of each term and sum them. Since the degree of the numerator is less than the denominator's in each case, each term approaches $0$. Hence, the limit is... For the second one, it would be useful to rewrite it as $(3n^2 + 20n + 13)$ to the power $1/n$. The exponent here approaches $0$, and any positive number to the power $0$ is...

The first one is wrong and the second one is misleading: in the first one, the number of summands depends on $n$, which tends to infinity. That is why the limit of the sum does not equal the sum of the limits (this is only true when there is a fixed number of summands).
November 28th 2010, 07:50 AM Uh-oh, it's been a while since I've taken calculus. Time to review! November 28th 2010, 01:24 PM thanks for the help I should practice the sandwich rule more, would it be safe to say it is better to try the sandwich approach with sums ? could that be used as the inequality for the sandwich ? $\frac{n}{n^2+n}\ge A_n\ge\frac{n^2}{n^2+1}$ for the first one and $13^{\frac{1}{n}} \le (3n^2 +20n +13)^{\frac{1}{n}} \le (3n^2 + 20n^2 + 13n^2)^{\frac{1}{n}} = (36n^2)^{\frac{1}{n}}$ for the second one.
orngStat: Orange Statistics for Predictors

This module contains various measures of quality for classification and regression. Most functions require an argument named res, an instance of ExperimentResults as computed by functions from orngTest, which contains predictions obtained through cross-validation, leave-one-out, testing on training data or test set examples.

To prepare some data for the examples on this page, we shall load the voting data set (the problem of predicting a congressman's party, republican or democrat, based on a selection of votes) and evaluate a naive Bayesian learner, a classification tree and a majority classifier using cross-validation. For examples requiring a multivalued class problem, we shall do the same with the vehicle data set (telling whether a vehicle described by features extracted from a picture is a van, a bus, or an Opel or Saab car).

part of statExamples.py

import orange, orngTest, orngTree

learners = [orange.BayesLearner(name = "bayes"),
            orngTree.TreeLearner(name = "tree"),
            orange.MajorityLearner(name = "majrty")]

voting = orange.ExampleTable("voting")
res = orngTest.crossValidation(learners, voting)

vehicle = orange.ExampleTable("vehicle")
resVeh = orngTest.crossValidation(learners, vehicle)

If examples are weighted, weights are taken into account. This can be disabled by giving unweighted=1 as a keyword argument. Another way of disabling weights is to clear the ExperimentResults' flag weights.

General Measures of Quality

CA(res, reportSE=False)

Computes classification accuracy, i.e. the percentage of matches between predicted and actual classes. The function returns a list of classification accuracies of all classifiers tested. If reportSE is set to true, the list will contain tuples with accuracies and standard errors.

If results are from multiple repetitions of experiments (like those returned by orngTest.crossValidation or orngTest.proportionTest), the standard error (SE) is estimated from the deviation of classification accuracy across folds (SD), as SE = SD/sqrt(N), where N is the number of repetitions (e.g. the number of folds).
If results are from a single repetition, we assume independence of examples and treat the classification accuracy as distributed according to a binomial distribution. This can be approximated by a normal distribution, so we report an SE of sqrt(CA*(1-CA)/N), where CA is the classification accuracy and N is the number of test examples.

Instead of ExperimentResults, this function can be given a list of confusion matrices (see below). Standard errors are in this case estimated using the latter method.

AP(res, reportSE=False)

Computes the average probability assigned to the correct class.

BrierScore(res, reportSE=False)

Computes the Brier score, defined as the average (over test examples) of sum[x] (t(x)-p(x))^2, where x is a class, t(x) is 1 for the correct class and 0 for the others, and p(x) is the probability that the classifier assigned to class x.

IS(res, apriori=None, reportSE=False)

Computes the information score as defined by Kononenko and Bratko (1991). The argument apriori gives the apriori class distribution; if it is omitted, the class distribution is computed from the actual classes of the examples in res.

So, let's compute all this and print it out.

part of statExamples.py

import orngStat

CAs = orngStat.CA(res)
APs = orngStat.AP(res)
Briers = orngStat.BrierScore(res)
ISs = orngStat.IS(res)

print "method\tCA\tAP\tBrier\tIS"
for l in range(len(learners)):
    print "%s\t%5.3f\t%5.3f\t%5.3f\t%6.3f" % (learners[l].name, CAs[l], APs[l], Briers[l], ISs[l])

The output should look like this.

method	CA	AP	Brier	IS
bayes	0.903	0.902	0.175	0.759
tree	0.846	0.845	0.286	0.641
majrty	0.614	0.526	0.474	-0.000

Script statExamples.py contains another example that also prints out the standard errors.

Confusion Matrix

confusionMatrices(res, classIndex=-1, {cutoff})

This function can compute two different forms of confusion matrix: one in which a certain class is marked as positive and the other(s) negative, and another in which no class is singled out.
The way to specify what we want is somewhat confusing due to backward compatibility issues. A positive-negative confusion matrix is computed (a) if the class is binary, unless the classIndex argument is -2, or (b) if the class is multivalued and classIndex is non-negative. The argument classIndex then tells which class is positive. In case (a), classIndex may be omitted; the first class is then negative and the second is positive, unless the baseClass attribute in the object with results has a non-negative value. In that case, baseClass is the index of the target class. The baseClass attribute of the results object should be set manually.

The result of the function is a list of instances of class ConfusionMatrix, containing the (weighted) number of true positives (TP), false negatives (FN), false positives (FP) and true negatives (TN).

We can also add the keyword argument cutoff (e.g. confusionMatrices(results, cutoff=0.3)); if we do, confusionMatrices will disregard the classifiers' class predictions and observe the predicted probabilities, and consider a prediction "positive" if the predicted probability of the positive class is higher than the cutoff.

The example below shows how setting the cutoff threshold from the default 0.5 to 0.2 affects the confusion matrix for the naive Bayesian classifier.
part of statExamples.py

cm = orngStat.confusionMatrices(res)[0]
print "Confusion matrix for naive Bayes:"
print "TP: %i, FP: %i, FN: %s, TN: %i" % (cm.TP, cm.FP, cm.FN, cm.TN)

cm = orngStat.confusionMatrices(res, cutoff=0.2)[0]
print "Confusion matrix for naive Bayes:"
print "TP: %i, FP: %i, FN: %s, TN: %i" % (cm.TP, cm.FP, cm.FN, cm.TN)

The output,

Confusion matrix for naive Bayes:
TP: 238, FP: 13, FN: 29.0, TN: 155
Confusion matrix for naive Bayes:
TP: 239, FP: 18, FN: 28.0, TN: 150

shows that the number of true positives increases (and hence the number of false negatives decreases) by only a single example, while five examples that were originally true negatives become false positives due to the lower threshold.

To observe how good the classifiers are at detecting vans in the vehicle data set, we would compute the matrix like this:

cm = orngStat.confusionMatrices(resVeh, vehicle.domain.classVar.values.index("van"))

and get results like these

TP: 189, FP: 241, FN: 10.0, TN: 406

while the same for class "opel" would give

TP: 86, FP: 112, FN: 126.0, TN: 522

The main difference is that there are only a few false negatives for the van, meaning that the classifier seldom misses it (if it says it's not a van, it's almost certainly not a van). Not so for the Opel car, where the classifier missed 126 of them and correctly detected only 86.

A general confusion matrix is computed (a) in the case of a binary class, when classIndex is set to -2, or (b) when we have a multivalued class and the caller doesn't specify the classIndex of the positive class. When called in this manner, the function cannot use the argument cutoff.

The function then returns a three-dimensional matrix, where the element A[learner][actualClass][predictedClass] gives the number of examples belonging to 'actualClass' for which the 'learner' predicted 'predictedClass'. We shall compute and print out the matrix for the naive Bayesian classifier.
part of statExamples.py

cm = orngStat.confusionMatrices(resVeh)[0]
classes = vehicle.domain.classVar.values
print "\t"+"\t".join(classes)
for className, classConfusions in zip(classes, cm):
    print ("%s" + ("\t%i" * len(classes))) % ((className, ) + tuple(classConfusions))

Sorry for the language, but it's time you learn to talk dirty in Python, too. "\t".join(classes) will join the strings from the list classes, putting tabulators between them. zip merges two lists, element by element; hence it will create a list of tuples, each containing a class name from classes and a list telling how many examples from this class were classified into each possible class. Finally, the format string consists of a %s for the class name and one tabulator and %i for each class. The data we provide for this format string is (className, ) (a tuple containing the class name), plus the misclassification list converted to a tuple. So, here's what this nice piece of code gives:

	bus	van	saab	opel
bus	56	95	21	46
van	6	189	4	0
saab	3	75	73	66
opel	4	71	51	86

Vans are clearly simple: 189 vans were classified as vans (we know this already, we've printed it out above), and the 10 misclassified pictures were classified as buses (6) and Saab cars (4). In all other classes, there were more examples misclassified as vans than correctly classified examples. The classifier is obviously quite biased towards vans.

sens(confm), spec(confm), PPV(confm), NPV(confm), precision(confm), recall(confm), F1(confm), Falpha(confm, alpha=2.0), MCC(confm)

With the confusion matrix defined in terms of positive and negative classes, you can also compute the sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], positive predictive value [TP/(TP+FP)] and negative predictive value [TN/(TN+FN)].
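The four ratios just defined are simple enough to spell out directly from the four counts; a sketch working from plain numbers rather than from orngStat's ConfusionMatrix objects:

```python
def ratios(TP, FP, FN, TN):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {"sens": TP / float(TP + FN),   # true positive rate
            "spec": TN / float(TN + FP),   # true negative rate
            "PPV":  TP / float(TP + FP),   # positive predictive value
            "NPV":  TN / float(TN + FN)}   # negative predictive value
```

For the naive Bayesian matrix above (TP=238, FP=13, FN=29, TN=155) this gives a sensitivity of about 0.891 and a specificity of about 0.923.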
In information retrieval, positive predictive value is called precision (the ratio of the number of relevant records retrieved to the total number of irrelevant and relevant records retrieved), and sensitivity is called recall (the ratio of the number of relevant records retrieved to the total number of relevant records in the database). The harmonic mean of precision and recall is called an F-measure; depending on the weighting between precision and recall, it is implemented as F1 [2*precision*recall/(precision+recall)] or, for the general case, Falpha [(1+alpha)*precision*recall / (alpha*precision + recall)]. The Matthews correlation coefficient (http://en.wikipedia.org/wiki/Matthews_correlation_coefficient) is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an average random prediction and -1 an inverse prediction.

If the argument confm is a single confusion matrix, a single result (a number) is returned. If confm is a list of confusion matrices, a list of scores is returned, one for each confusion matrix. Note that weights are taken into account when computing the matrix, so these functions don't check the 'weighted' keyword argument.

Let us print out the sensitivities and specificities of our classifiers.

part of statExamples.py

cm = orngStat.confusionMatrices(res)
print "method\tsens\tspec"
for l in range(len(learners)):
    print "%s\t%5.3f\t%5.3f" % (learners[l].name, orngStat.sens(cm[l]), orngStat.spec(cm[l]))

ROC Analysis

Receiver Operating Characteristic (ROC) analysis was initially developed for binary-like problems, and there is no consensus on how to apply it in multi-class problems, nor do we know for sure how to do ROC analysis after cross-validation and similar multiple sampling techniques.
If you are interested in the area under the curve, the function AUC will deal with those problems as specifically described below.

AUC(res, method = AUC.ByWeightedPairs)

Returns the area under the ROC curve (AUC) given a set of experimental results. For multivalued class problems, it will compute some sort of average, as specified by the argument method:

AUC.ByWeightedPairs (or 0)
Computes AUC for each pair of classes (ignoring examples of all other classes) and averages the results, weighting them by the number of pairs of examples from these two classes (e.g. by the product of the probabilities of the two classes). AUC computed in this way still behaves as a concordance index, e.g., it gives the probability that two randomly chosen examples from different classes will be correctly recognized (this is of course true only if the classifier knows from which two classes the examples came).

AUC.ByPairs (or 1)
Similar to the above, except that the average over class pairs is not weighted. This AUC is, like the binary one, independent of class distributions, but it is no longer related to the concordance index.

AUC.WeightedOneAgainstAll (or 2)
For each class, it computes the AUC for this class against all others (that is, treating the other classes as one class). The AUCs are then averaged by the class probabilities. This is related to a concordance index in which we test the classifier's (average) capability of distinguishing the examples from a specified class from those that come from other classes. Unlike the binary AUC, this measure is not independent of class distributions.

AUC.OneAgainstAll (or 3)
As above, except that the average is not weighted.

In the case of multiple folds (for instance if the data comes from cross-validation), the computation goes like this. When computing the partial AUCs for individual pairs of classes or singled-out classes, the AUC is computed for each fold separately and then averaged (ignoring the number of examples in each fold; it's just a simple average).
However, if a certain fold doesn't contain any examples of a certain class (from the pair), the partial AUC is computed treating the results as if they came from a single fold. This is not really correct, since the class probabilities from different folds are not necessarily comparable; yet since this will most often occur in leave-one-out experiments, comparability shouldn't be a problem.

Computing and printing out the AUCs looks just like printing out classification accuracies (except that we call AUC instead of CA, of course):

part of statExamples.py
    AUCs = orngStat.AUC(res)
    for l in range(len(learners)):
        print "%10s: %5.3f" % (learners[l].name, AUCs[l])

For vehicle, you can run exactly this same code; it will compute AUCs for all pairs of classes and return the average weighted by the probabilities of pairs. Or, you can specify the averaging method yourself, like this

    AUCs = orngStat.AUC(resVeh, orngStat.AUC.WeightedOneAgainstAll)

The following snippet tries out all four. (We don't claim that this is how the function needs to be used; it's better to stay with the default.)

part of statExamples.py
    methods = ["by pairs, weighted", "by pairs", "one vs. all, weighted", "one vs. all"]
    print " " * 25 + "  \tbayes\ttree\tmajority"
    for i in range(4):
        AUCs = orngStat.AUC(resVeh, i)
        print "%25s: \t%5.3f\t%5.3f\t%5.3f" % ((methods[i], ) + tuple(AUCs))

As you can see from the output,

                             bayes   tree    majority
    by pairs, weighted:      0.789   0.871   0.500
    by pairs:                0.791   0.872   0.500
    one vs. all, weighted:   0.783   0.800   0.500
    one vs. all:             0.783   0.800   0.500

AUC_single(res, classIndex)

Computes AUC where the class with the given classIndex is singled out and all other classes are treated as a single class.
To find how good our classifiers are at distinguishing between vans and other vehicles, call the function like this

    orngStat.AUC_single(resVeh, classIndex = vehicle.domain.classVar.values.index("van"))

AUC_pair(res, classIndex1, classIndex2)

Computes AUC between a pair of classes, ignoring examples from all other classes.

AUC_matrix(res)

Computes a (lower diagonal) matrix with AUCs for all pairs of classes. If there are empty classes, the corresponding elements in the matrix are -1.

Remember the beautiful(?) code for printing out the confusion matrix? Here it strikes again:

part of statExamples.py
    classes = vehicle.domain.classVar.values
    AUCmatrix = orngStat.AUC_matrix(resVeh)[0]
    print "\t" + "\t".join(classes[:-1])
    for className, AUCrow in zip(classes[1:], AUCmatrix[1:]):
        print ("%s" + ("\t%5.3f" * len(AUCrow))) % ((className, ) + tuple(AUCrow))

The remaining functions, which plot the curves and statistically compare them, require that the results come from a test with a single iteration, and they always compare one chosen class against all others. If you have cross-validation results, you can either use splitByIterations to split the results by folds, call the function for each fold separately and then sum the results up however you see fit, or you can set the ExperimentResults' attribute numberOfIterations to 1, to cheat the function - at your own responsibility for the statistical correctness. Regarding multi-class problems, if you don't choose a specific class, orngStat will use the class attribute's baseValue at the time when the results were computed. If baseValue was not given at that time, 1 (that is, the second class) is used as default.
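The pairwise averaging that AUC performs for multi-class problems (the AUC.ByPairs idea) can be sketched in plain Python along the lines of the Hand-Till M measure; the names below are illustrative, and this is a sketch rather than the orngStat implementation:

```python
from itertools import combinations

def auc_binary(pos_scores, neg_scores):
    """Probability that a positive example outscores a negative one (ties count 1/2)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

def auc_by_pairs(labels, probs):
    """Unweighted average of pairwise AUCs over all class pairs."""
    classes = sorted(set(labels))
    pair_aucs = []
    for i, j in combinations(classes, 2):
        # restrict to examples of classes i and j only
        pi = [p[i] for l, p in zip(labels, probs) if l == i]
        pj = [p[i] for l, p in zip(labels, probs) if l == j]
        qi = [p[j] for l, p in zip(labels, probs) if l == j]
        qj = [p[j] for l, p in zip(labels, probs) if l == i]
        # A(i,j): average of "i vs j using P(i)" and "j vs i using P(j)"
        pair_aucs.append(0.5 * (auc_binary(pi, pj) + auc_binary(qi, qj)))
    return sum(pair_aucs) / len(pair_aucs)

labels = [0, 0, 1, 1, 2, 2]
probs = [[0.8, 0.1, 0.1], [0.6, 0.3, 0.1],
         [0.2, 0.7, 0.1], [0.3, 0.5, 0.2],
         [0.1, 0.2, 0.7], [0.2, 0.2, 0.6]]
```

On this perfectly separable toy data every pairwise AUC is 1, so the average is 1 as well.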
We shall use the following code to prepare suitable experimental results

    ri2 = orange.MakeRandomIndices2(voting, 0.6)
    train = voting.selectref(ri2, 0)
    test = voting.selectref(ri2, 1)
    res1 = orngTest.learnAndTestOnTestData(learners, train, test)

AUCWilcoxon(res, classIndex=1)

Computes the area under the ROC curve (AUC) and its standard error using Wilcoxon's approach proposed by Hanley and McNeil (1982). If classIndex is not specified, the first class is used as "the positive" and the others as negative. The result is a list of tuples (aROC, standard error). To compute the AUCs with the corresponding confidence intervals for our experimental results, simply call

    orngStat.AUCWilcoxon(res1)

compare2AUCs(res, learner1, learner2, classIndex=1)

Compares the ROC curves of the learning algorithms with indices learner1 and learner2. The function returns three tuples; the first two hold the areas under the ROCs and the standard errors for both learners, and the third is the difference of the areas and its standard error: ((AUC1, SE1), (AUC2, SE2), (AUC1-AUC2, SE(AUC1)+SE(AUC2)-2*COVAR)). This function is broken at the moment: it returns some numbers, but they're wrong.

computeROC(res, classIndex=1)

Computes a ROC curve as a list of (x, y) tuples, where x is 1-specificity and y is sensitivity.

computeCDT(res, classIndex=1), ROCsFromCDT(cdt, {print})

These two functions are obsolete and shouldn't be called. Use AUC instead.

AROC(res, classIndex=1), AROCFromCDT(res, {print}), compare2AROCs(res, learner1, learner2, classIndex=1)

These are all deprecated, too. Instead, use AUCWilcoxon (for AROC), AUC (for AROCFromCDT), and compare2AUCs (for compare2AROCs).

Comparison of Algorithms

Computes a triangular matrix with McNemar statistics for each pair of classifiers. The statistic is distributed by the chi-square distribution with one degree of freedom; the critical value for 5% significance is around 3.84.
McNemarOfTwo(res, learner1, learner2)

McNemarOfTwo computes the McNemar statistic for a pair of classifiers, specified by the indices learner1 and learner2.

Regression

Several alternative measures, as given below, can be used to evaluate the success of numeric prediction:

MSE(res)
    Computes mean-squared error.
RMSE(res)
    Computes root mean-squared error.
MAE(res)
    Computes mean absolute error.
RSE(res)
    Computes relative squared error.
RRSE(res)
    Computes root relative squared error.
RAE(res)
    Computes relative absolute error.
R2(res)
    Computes the coefficient of determination, R-squared.

The following code uses most of the above measures to score several regression methods.

    import orange
    import orngRegression as r
    import orngTree
    import orngStat, orngTest

    data = orange.ExampleTable("housing")

    # definition of regressors
    lr = r.LinearRegressionLearner(name="lr")
    rt = orngTree.TreeLearner(measure="retis", mForPruning=2, minExamples=20, name="rt")
    maj = orange.MajorityLearner(name="maj")
    knn = orange.kNNLearner(k=10, name="knn")
    learners = [maj, rt, knn, lr]

    # cross validation, selection of scores, report of results
    results = orngTest.crossValidation(learners, data, folds=3)
    scores = [("MSE", orngStat.MSE), ("RMSE", orngStat.RMSE),
              ("MAE", orngStat.MAE), ("RSE", orngStat.RSE),
              ("RRSE", orngStat.RRSE), ("RAE", orngStat.RAE),
              ("R2", orngStat.R2)]

    print "Learner  " + "".join(["%-8s" % s[0] for s in scores])
    for i in range(len(learners)):
        print "%-8s " % learners[i].name + \
            "".join(["%7.3f " % s[1](results)[i] for s in scores])

The code above produces the following output:

    Learner  MSE     RMSE    MAE     RSE     RRSE    RAE     R2
    maj      84.585  9.197   6.653   1.002   1.001   1.001   -0.002
    rt       40.015  6.326   4.592   0.474   0.688   0.691   0.526
    knn      21.248  4.610   2.870   0.252   0.502   0.432   0.748
    lr       24.092  4.908   3.425   0.285   0.534   0.515   0.715

Plotting Functions

graph_ranks(filename, avranks, names, cd=None, lowv=None, highv=None, width=6, textspace=1, reverse=False, cdmethod=None)

Draws a CD graph, which is used to display the differences in methods' performance.
See Janez Demsar, Statistical Comparisons of Classifiers over Multiple Data Sets, Journal of Machine Learning Research, 7(Jan):1--30, 2006. Needs matplotlib to work.

filename
    Output file name (with extension). Formats supported by matplotlib can be used.
avranks
    List of average methods' ranks.
names
    List of methods' names.
cd
    Critical difference. Used for marking methods whose difference is not statistically significant.
lowv
    The lowest shown rank; if None, use 1.
highv
    The highest shown rank; if None, use len(avranks).
width
    Width of the drawn figure in inches, default 6 inches.
textspace
    Space on figure sides left for the description of methods, default 1 inch.
reverse
    If True, the lowest rank is on the right. Default: False.
cdmethod
    None by default. It can be an index of an element in avranks or names which specifies the method which should be marked with an interval. If specified, the interval is marked only around that method. This option is meant to be used with the Bonferroni-Dunn test.

    import orange, orngStat

    names = ["first", "third", "second", "fourth"]
    avranks = [1.9, 3.2, 2.8, 3.3]
    cd = orngStat.compute_CD(avranks, 30)  # tested on 30 datasets
    orngStat.graph_ranks("statExamples-graph_ranks1.png", avranks, names, \
        cd=cd, width=6, textspace=1.5)

The code above produces the following graph:

compute_CD(avranks, N, alpha="0.05", type="nemenyi")

Returns the critical difference for the Nemenyi or Bonferroni-Dunn test according to the given alpha (either alpha="0.05" or alpha="0.1") for the average ranks and number of tested data sets N. Type can be either "nemenyi" for the Nemenyi two-tailed test or "bonferroni-dunn" for the Bonferroni-Dunn test.

compute_friedman(avranks, N)

Returns a tuple composed of (Friedman statistic, degrees of freedom) and (Iman statistic - F-distribution, degrees of freedom) given the average ranks and the number of tested data sets N.

Utility Functions

splitByIterations(res)

Splits ExperimentResults of a multiple-iteration test into a list of ExperimentResults, one for each iteration.
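A sketch of what compute_CD and compute_friedman calculate, following the formulas in Demsar (2006); the helper names are illustrative, and the Nemenyi q value below is the tabulated alpha = 0.05 entry for k = 4 methods (an assumption looked up from Demsar's table, not computed here):

```python
from math import sqrt

def nemenyi_cd(avranks, N, q_alpha):
    """Critical difference CD = q_alpha * sqrt(k*(k+1) / (6*N))."""
    k = len(avranks)
    return q_alpha * sqrt(k * (k + 1) / (6.0 * N))

def friedman_stats(avranks, N):
    """Friedman chi-square and the Iman-Davenport F statistic."""
    k = len(avranks)
    chi2 = 12.0 * N / (k * (k + 1)) * (sum(r * r for r in avranks)
                                       - k * (k + 1) ** 2 / 4.0)
    ff = (N - 1) * chi2 / (N * (k - 1) - chi2)
    return chi2, ff

# four methods ranked on 30 data sets; average ranks must sum to k*(k+1)/2
avranks = [1.5, 2.0, 3.0, 3.5]
cd = nemenyi_cd(avranks, 30, q_alpha=2.569)
chi2, ff = friedman_stats(avranks, 30)
```

Two methods whose average ranks differ by more than cd would then be declared significantly different by the Nemenyi test.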
[R-sig-eco] nested factor for which I would like a parameter estimate

dougwyu  dougwyu at gmail.com
Sun Mar 13 08:29:41 CET 2011

Hello all,

I am trying to test whether "sound diversity" predicts "bird diversity." The question I have concerns how to deal with a factor that could be treated either as a random or a fixed factor.

Response: bird_entropy (continuous)
Fixed factors: landtype (factor), sound_entropy (continuous), AM/PM (factor)
Random factor: Plot

The problematic factor is AM/PM. We have 29 total sampling plots, each of which is measured for sound diversity at five subplots, once in the AM (dawn) and again in the PM (dusk), for a total of 10 measurements per plot. (Each plot has only one overall measure of bird diversity.) On the one hand, AMPM is nested within plot, but on the other hand, we would like to estimate a parameter for AMPM, since we expect different suites of sound producers (not just birds) at different times of the day. However, it's reasonable to expect temporal correlation between AM and PM.

The first model seems uncontroversial to me (these are post-model-selection):

Mod.lme1 <- lme(bird_entropy ~ sound_entropy * landuse, random = ~ 1 | plot / AMPM, data=Bird)

But I'm curious to know whether this second model is reasonable, and if so, how I would code the plot variable:

Mod.lme2 <- lme(bird_entropy ~ sound_entropy * landuse + AMPM, random = ~ 1 | plot, data=Bird)

Thanks all,
IBM Busts Record for 'Superconducting' Quantum Computer

Today's quantum computers are no more than experiments. Researchers can string together a handful of quantum bits - seemingly magical bits that store a "1" and "0" at the same time - and these ephemeral creations can run relatively simple algorithms. But new research from IBM indicates that far more complex quantum computers aren't that far away.

On Tuesday, IBM revealed that physicists at its Watson Research Center in Yorktown Heights, New York have made significant advances in the creation of "superconducting qubits," one of several research fields that could eventually lead to a quantum computer that's exponentially more powerful than today's classical computers.

According to Matthias Steffen - who oversees Big Blue's experimental quantum computing group - he and his team have improved the performance of superconducting qubits by a factor of two to four. "What this means is that we can really start thinking about much larger systems," he tells Wired, "putting several of these quantum bits together and performing much larger error correction."

David DiVincenzo - a professor at the Jülich Research Center's Institute of Quantum Information in western Germany and a former colleague of Steffen - agrees that IBM's new research is more than just a milestone. "These metrics have now - for the first time - attained the levels necessary to begin scaling up quantum computation to greater complexity," he says. "I think that we will soon see whole quantum computing modules, rather than just two- or three-qubit experiments."

Whereas the computer on your desk obeys the laws of classical physics - the physics of the everyday world - a quantum computer taps the mind-bending properties of quantum mechanics. In a classical computer, a transistor stores a single "bit" of information. If the transistor is "on," for instance, it holds a "1." If it's "off," it holds a "0."
But with a quantum computer, information is represented by a system that can exist in two states at the same time, thanks to the superposition principle of quantum mechanics. Such a qubit can store a "0" and "1" simultaneously. Information might be stored in the spin of an electron, for instance. An "up" spin represents a "1." A "down" spin represents a "0." And at any given time, this spin can be both up and down. "The concept has almost no analog in the classical world," Steffen says. "It would be almost like me saying I could be over here and over there where you are at the same time."

If you then put two qubits together, they can hold four values at once: 00, 01, 10, and 11. And as you add more and more qubits, you can build a system that's exponentially more powerful than a classical computer. You could, say, crack the world's strongest encryption algorithms in a matter of seconds. As IBM points out, a 250-qubit quantum computer would contain more bits than there are particles in the universe.

But building a quantum computer isn't easy. The idea was first proposed in the mid-80s, and we're still at the experimental stage. The trouble is that quantum systems so easily "decohere," dropping from two simultaneous states into just a single state. Your quantum bit can very quickly become an ordinary classical bit.

Researchers such as Matthias Steffen and David DiVincenzo aim to build systems that can solve this decoherence problem. At IBM, Steffen and his team base their research on a phenomenon known as superconductivity. In essence, if you cool certain substances to very low temperatures, they exhibit zero electrical resistance. Steffen describes this as something akin to a loop where current flows in two directions at the same time. A clockwise current represents a "1," and a counterclockwise current represents a "0." IBM's qubits are built atop a silicon substrate using aluminum and niobium superconductors.
Essentially, two superconducting electrodes are separated by a thin insulator - or Josephson junction - of aluminum oxide. The trick is to keep this quantum system from decohering for as long as possible. If you can keep the qubits in a quantum state for long enough, Steffen says, you can build the error correction schemes you need to operate a reliable quantum computer. The threshold is about 10 to 100 microseconds, and according to Steffen, his team has now reached this point with a "three-dimensional" qubit based on a method originally introduced by researchers at Yale University. Ten years ago, decoherence times were closer to a nanosecond. In other words, over the last ten years, researchers have improved the performance of superconducting qubits by a factor of more than 10,000.

IBM's team has also built a "controlled NOT gate" with traditional two-dimensional qubits, meaning they can flip the state of one qubit depending on the state of the other. This too is essential to building a practical quantum computer, and Steffen says his team can successfully flip that state 95 percent of the time - thanks to a decoherence time of about 10 microseconds. "So, not just is our single device performance remarkably good," he explains, "our demonstration of a two-qubit device - an elementary logic gate - is also good enough to get at least close to the threshold needed for a practical quantum computer. We're not quite there yet, but we're getting there."

The result is that the researchers are now ready to build a system that spans several qubits. "The next bottleneck is now how to make these devices better. The bottleneck is how to put five or ten of these on a chip," Steffen says. "The device performance is good enough to do that right now. The question is just: 'How do you put it all together?'"

Image: IBM
From Encyclopedia of Mathematics

The theory of anti-eigenvalues is a spectral theory based upon the turning angles of a matrix or operator (cf. Eigen value for the spectral theory of stretchings, rather than turnings, of a matrix or operator).

For a strongly accretive operator $A$ on a Hilbert space, the first anti-eigenvalue is

$$\mu_1(A) = \inf_{x \neq 0} \frac{\operatorname{Re}\langle Ax, x\rangle}{\|Ax\| \, \|x\|}. \tag{a1}$$

From (a1) one has immediately the notion of the (maximal) turning angle $\phi(A) = \cos^{-1}\mu_1(A)$ of $A$.

Minmax theorem. For any strongly accretive bounded operator $A$ on a Hilbert space,

$$\sup_{\|x\| \le 1} \; \min_{\epsilon \in \mathbb{R}} \|(\epsilon A - I)x\|^2 = \min_{\epsilon \in \mathbb{R}} \; \sup_{\|x\| \le 1} \|(\epsilon A - I)x\|^2. \tag{a2}$$

Using the minmax theorem, the right-hand side of (a2) is seen to define

$$\sin\phi(A) = \min_{\epsilon > 0} \|\epsilon A - I\| \tag{a3}$$

in such a way that $\cos^2\phi(A) + \sin^2\phi(A) = 1$ [a1].

Euler equation. For any strongly accretive bounded operator $A$, the first anti-eigenvectors satisfy a variational Euler equation (a4). When $A$ is a normal operator, (a4) is satisfied not only by the first anti-eigenvectors of $A$ but by all of its eigenvectors [a2], [a3].

The theory of anti-eigenvalues has been applied recently (from 1990 onward) to gradient and iterative methods for the solution of linear systems [a5], [a6]. For example, for a symmetric positive-definite $A$ with extreme eigenvalues $\lambda_1$ and $\lambda_n$, the Kantorovich convergence rate for steepest descent becomes

$$E(x_{k+1}) \le \left(\frac{\lambda_n - \lambda_1}{\lambda_n + \lambda_1}\right)^2 E(x_k) = \sin^2\phi(A)\, E(x_k).$$

Thus, the Kantorovich error rate is trigonometric. Similar trigonometric convergence bounds hold for conjugate gradient and related more sophisticated algorithms [a4]. Even the basic Richardson method $x_{k+1} = x_k + \epsilon(b - Ax_k)$ (cf. also Richardson extrapolation) may be seen to have optimal convergence rate $\sin\phi(A)$ [a5], [a6].

References

[a1] K. Gustafson, "Operator trigonometry", Linear Multilinear Alg., 37 (1994) pp. 139-159
[a2] K. Gustafson, "Antieigenvalues", Linear Alg. & Its Appl., 208/209 (1994) pp. 437-454
[a3] K. Gustafson, "Matrix trigonometry", Linear Alg. & Its Appl., 217 (1995) pp. 117-140
[a4] K. Gustafson, "Operator trigonometry of iterative methods", Numerical Linear Alg. Appl., to appear (1997)
[a5] K. Gustafson, "Lectures on computational fluid dynamics, mathematical physics, and linear algebra", Kaigai & World Sci. (1996/7)
[a6] K. Gustafson, D. Rao, "Numerical range", Springer (1997)

How to Cite This Entry:
Anti-eigenvalue. K. Gustafson (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Anti-eigenvalue&oldid=12538
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
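For a symmetric positive-definite matrix with extreme eigenvalues $\lambda_1$ and $\lambda_n$, the first anti-eigenvalue has the closed form $\mu_1(A) = 2\sqrt{\lambda_1\lambda_n}/(\lambda_1+\lambda_n)$. The following numerical sketch (my illustration, not from the entry) checks this against a direct minimization of $\operatorname{Re}\langle Ax,x\rangle/(\|Ax\|\,\|x\|)$ over unit vectors:

```python
from math import cos, sin, sqrt, pi

# A = diag(1, 4): lambda_1 = 1, lambda_n = 4
lam1, lamn = 1.0, 4.0
mu_closed = 2.0 * sqrt(lam1 * lamn) / (lam1 + lamn)  # = 0.8

def cosine_of_turning(theta):
    """<Ax, x> / (||Ax|| ||x||) for the unit vector x = (cos t, sin t)."""
    c, s = cos(theta), sin(theta)
    axx = lam1 * c * c + lamn * s * s                   # <Ax, x>
    ax_norm = sqrt(lam1 ** 2 * c * c + lamn ** 2 * s * s)  # ||Ax||; ||x|| = 1
    return axx / ax_norm

# brute-force minimization over a fine grid on [0, pi/2]
mu_numeric = min(cosine_of_turning(i * pi / 200000.0) for i in range(100001))
```

The grid minimum agrees with the closed form (0.8 here), and the minimizing vector is the first anti-eigenvector: the direction the matrix turns the most.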
Using the Binomial Theorem to compute f

November 1st 2012, 09:18 AM   #1

Hey, I was hoping someone could check and tell me if I have done the below question correctly. The question asks that we use the binomial theorem to compute $(2- i)^5$. The answer I get is $-38 -119i$. I for some reason think it is incorrect; could someone double check for me please? Thank you!

November 1st 2012, 09:41 AM   #2

Re: Using the Binomial Theorem to compute f

Yep. It's off.

$(2 - i)^5 = 2^5 - 5*2^4*i + 10*2^3*i^2 - 10*2^2*i^3 + 5*2*i^4 - i^5$
$=~(32-80+10) + (-80+40-1)i$

Does this help?
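A quick machine check (not part of the original thread) confirms the corrected expansion: Python's built-in complex type gives $(2-i)^5 = -38 - 41i$, matching $(32-80+10) + (-80+40-1)i$ above, so the imaginary part $-119$ in the first post was indeed off.

```python
# Python writes the imaginary unit as j
z = (2 - 1j) ** 5
print(z)  # (-38-41j)
```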
when is an element of $M_n(M)$ $\ast$-free from $M_n(\mathbb{C})$ for a $\ast$-non-commutative probability space $M$?

From "Lectures on the combinatorics of free probability" by Nica and Speicher we have a necessary and sufficient criterion for an element of $M_n(M)$ being free from $M_n(\mathbb{C})$ for a non-commutative probability space (NCPS) $M$. Do we have a similar result for such an element being $\ast$-free from $M_n(\mathbb{C})$ for a $\ast$-non-commutative probability space $M$? I can't think of even a good necessary condition.

The result that I mentioned above is: Let $x_{i,j} \in$ an NCPS $(M,\phi)$, $i,j=1, \cdots, n$. TFAE:

(1) The matrix $(x_{i,j})$ is free from $M_n(\mathbb{C})$ in $(M_n(M),Tr)$, where $M_n(M) = M_n(\mathbb{C}) \otimes M$ and $Tr=tr \otimes \phi$.

(2) The free cumulants of $\{x_{i,j}\}$ in $(M,\phi)$ are such that only cyclic cumulants $\kappa_m(x_{i(1),i(2)},x_{i(2),i(3)}, \cdots, x_{i(m),i(1)})$ are possibly different from 0, the value depending only on $m$, not on the tuple $(i(1), \cdots, i(m))$.

I am also defining a $\ast$-non-commutative probability space $M$ and $\ast$-freeness below:

Def 1 - A non-commutative probability space (NCPS) is a couple $(M,\phi)$ where $M$ is a unital algebra and $\phi$ is a unital linear functional on it. $(M, \phi)$ is called a $\ast$-non-commutative probability space if $M$ has an involution $\ast$ on it and $\phi$ is positive w.r.t. $\ast$.

My interest lies in the case where $M$ is a $II_1$ factor and $\phi$ is a tracial state. But the result we have is for a general NCPS.

Def 2 - A family $(A_i)$ of unital $\ast$-subalgebras of $M$ is called free if $\phi(a_1 \cdots a_n)=0$ whenever $n \ge 1$, $a_j \in A_{i(j)}$, $\phi(a_j)=0$ and $i(j) \ne i(j+1)$ for all $j=1, \cdots, n$. A family $(a_i)$ of elements of $M$ is called $\ast$-free if the family $(alg(1,a_i,a_i^*))$ is free in $M$.
von-neumann-algebras pr.probability

Could you please say what the characterization is which you would like to generalize. Also, could you please define $*$-free and a $*$-non-commutative probability space. – Jesse Peterson Apr 17 '11 at 0:08

Sorry, I should have mentioned the result in my question. I am editing the question along with the definitions. – Madhushree Apr 18 '11 at 9:48

The result mentioned above is Theorem 14.20 in Nica and Speicher's book. – Madhushree Apr 18 '11 at 10:44

1 Answer

Yes, one can get a similar characterization, only you will need to talk about $\ast$-cumulants instead of cumulants (in other words, you will need to allow both $x_{ij}$'s and $x_{ji}^\ast$'s in the formula involving the cyclic cumulants above, and require that all other cumulants vanish).

Let me explain the source behind the formula of Nica-Speicher. Associated to $x\in M_n(M)$ there are two $R$-transforms: the scalar-valued one (i.e., Voiculescu's $R$-transform computed for the element $x$ viewed as belonging to the non-commutative probability space $(M_n(M), \frac{1}{n} Tr\circ \tau)$) and the $M_n(\mathbb{C})$-valued $R$-transform (i.e., the $R$-transform of $x$ viewed as belonging to the non-commutative probability space $(M_n(M), E=1\otimes \tau : M_n(M)\to M_n(\mathbb{C}))$). Then a way to characterize freeness of $x$ is to say that the matrix-valued $R$-transform is the composition of the scalar-valued $R$-transform with the trace $\tau$ (so that the matrix-valued $R$-transform "factorizes" through the trace). For details, see section 3 of our paper with Nica and Speicher (http://arxiv.org/pdf/math.OA/0201001). When translated combinatorially, this corresponds to the characterization you mentioned.
A similar characterization is available (in the paper above) for the case that you compare $D$-freeness and $B$-freeness where $D\subset B$ is some subalgebra (the discussion above being for $\mathbb{C}=D\subset B=M_{n\times n}(\mathbb{C})$). Now the exact same characterization is available for the case of $\ast$-probability spaces (or, if you like, $n$-tuples of operators; indeed, to consider the $\ast$-operation all you need to do is to view $(x,x^\ast)$ as a pair). Another (equivalent) trick is to encode the $\ast$-distribution of $x$ as the $M_{2\times 2}(\mathbb{C})$-valued distribution of the matrix $$\left(\begin{matrix} 0 & x \cr x^\ast & 0 \end{matrix}\right)$$ and then write (in terms of factorization of $R$-transforms) that this matrix is free with amalgamation over $D=M_{2\times 2} \otimes 1$ from $B=M_{2\times 2}(\mathbb{C})\otimes M_{n\times n}(\mathbb{C})$.
Can the Weak Link in Psychological Research be Fixed?

My impression of psychological research is that it is conducted by bright, well-trained individuals armed with millions of dollars in research funds and that their work is resulting in massive amounts of data relevant to a wide range of important and interesting issues. There is, however, a component of this process that has become, over the years, the weak link in our goal to understand psychological phenomena: data analysis. A half century ago, there was a reasonably small gap between cutting-edge methods for analyzing data and methods used in psychological research. But for a variety of reasons, this gap has widened tremendously, particularly during the last twenty years.

To put it simply, all of the hypothesis testing methods taught in a typical introductory statistics course, and routinely used by applied researchers, are obsolete; there are no exceptions. Hundreds of journal articles and several books point this out, and no published paper has given a counterargument as to why we should continue to be satisfied with standard statistical techniques. These standard methods include Student's T for means, Student's T for making inferences about Pearson's correlation, and the ANOVA F, among others.

My comments are not meant as an indictment against all published studies. If groups of participants do not differ in any way, meaning that they have identical distributions (so in particular they have equal means, variances, and skewness), standard methods are just fine. That is, they guard against Type I errors. And when studying associations among a collection of variables, standard methods perform well when there is independence. It is when groups differ in some manner, or variables are dependent, that extremely serious practical problems arise.
If the goal is to collect data but not to discover true differences or true associations, or to poorly characterize how groups differ and how variables are related, stick with standard statistical techniques. It is not my intention here to explain why standard methods fail in applied psychological research; nontechnical explanations can be found in Wilcox (2001), and there are now many modern methods aimed at addressing these concerns (e.g., Wilcox, 1997, in press). Rather, my goal is to point out some roadblocks to achieving change in the hope that more influential individuals might be able to address these problems in an effective manner. But before continuing, let me briefly indicate the main sources of concern when using standard statistical techniques.

First, arbitrarily small departures from normality can destroy power, our ability to detect true differences and true associations. This became evident with the publication of a paper by Tukey (1960). There are in fact two types of non-normality that cause problems. One has to do with outliers (unusually small or large values) and the other has to do with skewness. Even when there are no outliers, skewness can be devastating (e.g., Westfall and Young, 1993), and the more obvious methods for dealing with this problem (simple transformations of the data) are known to fail.

The other problem with conventional techniques is that under general circumstances they use the wrong standard error. This can result in poor power as well, even under normality, and some problems persist no matter how large the sample sizes might be (e.g., Cressie and Whitford, 1986). Switching to conventional nonparametric methods does not avoid this problem, but again, there are effective ways of dealing with this issue (e.g., Brunner, Domhof & Langer, 2002; Cliff, 1996; Wilcox, in press).

If modern technology has so much to offer, why is it that most applied researchers don't take advantage of it?
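The power loss under small departures from normality is easy to demonstrate by simulation with a Tukey-style contaminated normal distribution. The sketch below (my illustration, not from the article) compares Student's T with a 20%-trimmed-mean (Yuen-type) test, here via the trim option of SciPy's ttest_ind (available in SciPy 1.7 and later); the function names and parameter choices are illustrative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def contaminated(n, shift=0.0, eps=0.1, k=10.0):
    """Tukey-style mixture: N(shift, 1) w.p. 1-eps, N(shift, k^2) w.p. eps."""
    scale = np.where(rng.random(n) < eps, k, 1.0)
    return shift + rng.standard_normal(n) * scale

reps, n = 2000, 30
hits_t, hits_trim = 0, 0
for _ in range(reps):
    a, b = contaminated(n), contaminated(n, shift=1.0)  # a true shift of 1.0
    if stats.ttest_ind(a, b).pvalue < 0.05:             # Student's T
        hits_t += 1
    if stats.ttest_ind(a, b, equal_var=False, trim=0.2).pvalue < 0.05:
        hits_trim += 1                                  # trimmed (Yuen-type) test

power_t, power_trim = hits_t / reps, hits_trim / reps
```

With a genuine shift present, the trimmed test detects it far more often than Student's T, even though the contamination affects only a tenth of the observations.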
The remainder of this article outlines my own observations relevant to this question in the hope that raising these issues helps our profession as a whole.

Commercial software. Several factors, taken together, make it difficult to change the status quo. The first is popular commercial software, which is both a blessing and a curse. It is a blessing for obvious reasons, but it is a curse because applied research is limited by the reluctance of commercial enterprises to modernize the point-and-click methods they provide. Without easy-to-use software, modern methods are inaccessible to most applied researchers, meaning that antiquated methods are often the only options for many psychologists. Years ago, one of my students asked a representative from a well-known software company why they do not add modern methods. His response: 'We are aware of the problem and have no plans to correct it.' The reason for this attitude is unclear, but a guess is that the explosion in the number of modern methods may be too daunting for commercial companies with an eye on the bottom line, so we pay the price in our inability to get the most out of our data. It seems we must address this problem ourselves, meaning we need to take steps to provide relatively easy-to-use software.

Standard intro course. The second general problem we face stems from what has become the standard introductory course. There are, of course, basic principles that must be covered, and conventional methods need to be taught, one reason being that they continue to be used. The problem is that the vast majority of introductory books (and some advanced books too) ignore the advances and insights from the last half century. The result is an unmistakable, albeit implied, message that important advances in statistical methods ceased circa 1955. Of course, this is incorrect. Moreover, textbooks reinforce the impression that the methods in commercial software perform well. Under the circumstances, why would any applied researcher bother to check with a statistician or a quantitative psychologist?
Under the circumstances, why would any applied researcher bother to check with a statistician or a quantitative I hasten to add that I do not intend to be overly critical of authors of standard textbooks. I am aware that some of these authors are cognizant of the problem we face and why standard statistical methods fail, but there are pressures that hinder change. To elaborate a bit, some years ago I thought it might be possible to improve data analysis by writing an undergraduate introductory book that at least touched on modern insights. I submitted two chapters for review, one of which described why serious practical problems arise when using Student’s T. I then described one of the simpler attempts at correcting these problems and indicated that there are even better techniques but that they are best left for a more advanced course. One referee was very positive and enthusiastic, but another argued vehemently that the book should not be published because it would confuse the instructor. A third referee was less negative but essentially echoed this view. ‘Anyone can teach stats.’ This brings me to the third barrier to achieving change: There seems to be a common attitude that almost anyone can teach a statistics course. The instructors I know are intelligent and very capable, but it is clear that many of them are too busy with their own area of expertise to keep up with advances in statistics. The reality is that keeping up with advances demands a fair amount of effort and we should not expect most non-quantitative psychologists to address this problem anymore than we would expect a pediatrician to be a neurosurgeon in her spare A few instructors I know are aware of advances in statistics but cannot imagine saying anything about them to students, in the belief that the students wouldn’t understand anyway. But we must face the problem of improving instruction if we want psychology to take advantage of modern technology. 
I know instructors who deal with this problem instead of avoiding it. They show that it can be done, but bringing about meaningful change seems to be just beyond our reach, at least for the moment.

Disciplinary attitudes. Finally, attitudes I've encountered from non-psychologists, primarily statisticians, seem rather telling. One of the individuals who reviewed my book published in 2001 was clearly a statistician. His reaction was that a nontechnical book that tries to explain modern insights related to fundamental principles is a waste of time; in essence, applied researchers are generally hopeless. Under this logic, statisticians are willing to use modern methods, but applied researchers will never avail themselves of what these methods have to offer. I refuse to accept this, which is one motivation for this article. Also, it seems to me that the lines of communication have virtually broken down between statistics and psychology, partly because each is absorbed in its own disciplinary enterprise, and partly because each views the other as isolationist. We spend millions of dollars collecting data. Surely a reasonable policy is to get the most out of what the data might tell us.

So can the weak link in psychological research be fixed? I would argue that the answer has to be yes. In fact, we have quantitative journals aimed at bridging the gap, plus editors of some prestigious applied journals are aware of this issue and have an interest in doing something about it. So there is hope that at least some subsets of psychological research are moving ahead. But it seems that more needs to be done. Specifically, we need to develop comprehensive strategies focused on eliminating the existing technical and "cultural" barriers that hinder the adoption of modern statistical techniques in applied research in psychology.

References

Brunner, E., Domhof, S., & Langer, F. (2002). Nonparametric Analysis of Longitudinal Data in Factorial Experiments. New York: Wiley.
Cliff, N. (1996). Ordinal Methods for Behavioral Data Analysis. Mahwah, NJ: Erlbaum.
Cressie, N. A. C., & Whitford, H. J. (1986). How to use the two sample t-test. Biometrical Journal, 28, 131-148.
Tukey, J. W. (1960). A survey of sampling from contaminated normal distributions. In I. Olkin et al. (Eds.), Contributions to Probability and Statistics. Stanford, CA: Stanford University Press.
Westfall, P. H., & Young, S. S. (1993). Resampling Based Multiple Testing. New York: Wiley.
Wilcox, R. R. (1997). Introduction to Robust Estimation and Hypothesis Testing. San Diego, CA: Academic Press.
Wilcox, R. R. (2001). Fundamentals of Modern Statistical Methods: Substantially Increasing Power and Accuracy. New York: Springer.
Wilcox, R. R. (in press). Applying Conventional Statistical Techniques. San Diego, CA: Academic Press.

Observer Vol. 15, No. 4, April 2002
I'm trying to solve a problem from Vector Mechanics for Engineers: Statics, ninth edition, by Beer/Johnston/Mazurek/Eisenberg.

The question: A rope ABCD is looped over two pipes. Knowing that the coefficient of static friction is 0.25, determine (a) the smallest value of the mass m for which equilibrium is possible, and (b) the corresponding tension in portion BC of the rope.

The answers given in the back of the book are (a) 22.8 kg and (b) 291 N. I can't seem to match these answers, and I don't know what I am doing wrong. Please, I need help!

Mechanical Engineering
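A rope looped over fixed pipes is the classic belt (capstan) friction setup, where impending slip is governed by T_large = T_small · e^(mu·beta). Without the book's figure the wrap angles are unknown, so the helper functions and all inputs below are my own illustrative sketch, not the book's solution:

```python
import math

def min_mass_for_equilibrium(m_heavy, mu, beta_total, g=9.81):
    """Smallest mass that keeps the rope from slipping when the other end
    supports m_heavy, using the capstan relation T_heavy = T_light * e^(mu*beta)
    with total wrap angle beta_total (radians) over both pipes."""
    T_heavy = m_heavy * g
    T_light = T_heavy / math.exp(mu * beta_total)  # impending slip toward the heavy side
    return T_light / g

def tension_after_wrap(T_light, mu, beta):
    """Tension in an intermediate segment (such as BC) after wrapping beta
    radians up from the light side, at impending slip."""
    return T_light * math.exp(mu * beta)
```

With a purely hypothetical 50 kg counterweight and 180 degrees of total wrap, `min_mass_for_equilibrium(50, 0.25, math.pi)` comes out near 22.8 kg, which happens to land on the book's answer, but the actual masses and wrap angles have to be read off the figure.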
At this site we maintain a list of the 5000 Largest Known Primes, which is updated hourly. This list is one of the most important databases at The Prime Pages: a collection of research, records and results all about prime numbers. This page summarizes our information about one of these primes.

This prime's information:

Description: 103040! - 1
Verification status (*): Proven
Official Comment: Factorial
Unofficial Comments: This prime has 1 user comment below.
Proof-code(s) (*): p301 : Winskill1, Fpsieve, PrimeGrid, OpenPFGW
Decimal Digits: 471794 (log[10] is 471793.32496352)
Rank (*): 1040 (digit rank is 1)
Entrance Rank (*): 136
Currently on list? (*): short
Submitted: 12/14/2010 21:31:36 CDT
Last modified: 12/18/2010 17:20:21 CDT
Database id: 96944
Status Flags: none
Score (*): 44.3267 (normalized score 2.7519)

Archival tags: There are certain forms classed as archivable: these primes may (at times) remain on this list even if they do not make the Top 5000 proper. Such primes are tracked with archival tags.

Factorial primes (archivable *)
Prime on list: yes, rank 4
Subcategory: "Factorial" (archival tag id 213027, tag last modified 2013-09-04 05:50:30)

User comments about this prime (disclaimer): User comments are allowed to convey mathematical information about this number, how it was proven prime.... See our guidelines and restrictions.

Verification data: The Top 5000 Primes is a list for proven primes only. In order to maintain the integrity of this list, we seek to verify the primality of all submissions. We are currently unable to check all proofs (ECPP, KP, ...), but we will at least trial divide and PRP check every entry before it is included in the list.

Trial division record:
prime_id: 96944
person_id: 9
machine: Ditto P4 P4
what: trial_divided
notes: Command: /home/ditto/client/TrialDiv/TrialDiv -q 1 103040 ! -1 2>&1 [Elapsed time: 40571.083 seconds]
modified: 2011-12-27 16:48:35
created: 2010-12-14 21:35:02
id: 123278

Primality proof record:
prime_id: 96944
person_id: 9
machine: Ditto P4 P4
what: prime
notes: Command: /home/ditto/client/pfgw -tp -q"103040!-1" 2>&1
  PFGW Version 20031027.x86_Dev (Beta 'caveat utilitor') [FFT v22.13 w/P4]
  Primality testing 103040!-1 [N+1, Brillhart-Lehmer-Selfridge]
  Running N+1 test using discriminant 103043, base 1+sqrt(103043)
  Using SSE2 FFT
  Adjusting authentication level by 1 for PRIMALITY PROOF
  Reduced from FFT(196608,19) to FFT(196608,18)
  Reduced from FFT(196608,18) to FFT(196608,17)
  Reduced from FFT(196608,17) to FFT(196608,16)
  3134544 bit request FFT size=(196608,16)
  Running N+1 test using discriminant 103043, base 2+sqrt(103043)
  Using SSE2 FFT
  Adjusting authentication level by 1 for PRIMALITY PROOF
  Reduced from FFT(196608,19) to FFT(196608,18)
  Reduced from FFT(196608,18) to FFT(196608,17)
  Reduced from FFT(196608,17) to FFT(196608,16)
  3134544 bit request FFT size=(196608,16)
  Calling Brillhart-Lehmer-Selfridge with factored part 33.98%
  103040!-1 is prime! (328910.6907s+2.6303s)
  [Elapsed time: 3.81 days]
modified: 2011-01-18 10:15:30
created: 2010-12-14 21:38:02
id: 123279

Query times: 0.0007 seconds to select prime, 0.0008 seconds to seek comments.
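For context, a factorial prime is a prime of the form n! ± 1; the record above is the case n = 103040 of the n! − 1 family. A small sketch (helper names mine, naive trial division, so only feasible for tiny n) finds the first few n for which n! − 1 is prime:

```python
def is_prime(n):
    """Naive trial division; fine for the small numbers checked here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def small_factorial_primes(limit):
    """Return the n <= limit for which n! - 1 is prime."""
    out, fact = [], 1
    for n in range(2, limit + 1):
        fact *= n
        if is_prime(fact - 1):
            out.append(n)
    return out
```

For n up to 15 this yields n = 3, 4, 6, 7, 12, 14 (so 5, 23, 719, 5039, 479001599, and 87178291199 are all prime); anything like the record above instead needs sieving plus a Brillhart-Lehmer-Selfridge N+1 proof, as the PFGW log shows.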
Square Law Series Expansion

When viewed as a Taylor series expansion such as Eq. (6.18), the simplest nonlinearity is clearly the square law nonlinearity, i.e., the quadratic term $f(x)=x^2$ of the expansion. Consider a simple signal processing system consisting only of the square-law nonlinearity: $y(n) = x^2(n)$. The Fourier transform of the output signal is easily found using the dual of the convolution theorem: $Y(\omega) = \frac{1}{2\pi}\,(X * X)(\omega)$, where $*$ denotes convolution. In general, the bandwidth of $x^2(n)$ is double that of $x(n)$.
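The bandwidth-doubling statement is easy to check numerically. A minimal sketch (my own, with arbitrarily chosen test frequencies): a signal band-limited to 80 Hz, once squared, carries energy out to 160 Hz.

```python
import numpy as np

fs, N = 1000, 1000                    # 1 s of data -> 1 Hz bin spacing, no leakage
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 80 * t)   # bandwidth: 80 Hz

def highest_bin(sig, rel_thresh=0.01):
    """Highest FFT bin (in Hz here) carrying significant energy."""
    mag = np.abs(np.fft.rfft(sig))
    return int(np.nonzero(mag > rel_thresh * mag.max())[0].max())

assert highest_bin(x) == 80
assert highest_bin(x * x) == 160      # squaring doubled the bandwidth
```

Squaring in time is convolution in frequency, so the 50 Hz and 80 Hz lines combine into sum and difference components at 30, 100, 130, and 160 Hz plus a DC term, exactly as the convolution $X*X$ predicts.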
Natural frequencies and mode shapes

Hi people, we were doing an experiment in lab where we excite a plate clamped at the circumference at various frequencies to detect its natural frequencies. My questions are:

1. Sometimes, at certain frequencies, we were observing multiple mode shapes (an overlap of two mode shapes, to be exact). What is the reason? It probably has something to do with an imperfection in our set-up. Does anyone have a more definitive answer?

2. The first theoretical natural frequency was 160 Hz. However, in reality, we observed the same mode shape at 80 Hz and 160 Hz. What may be the reason?

3. Why does the amplitude of vibration at the natural frequencies decrease with increasing natural frequency? That is, the first mode shape (with the lowest natural frequency) has the highest amplitude, and the later shapes have lower amplitudes. I was thinking frequency is proportional to the square root of the elastic modulus, but that's not the complete answer (the TA told me). Does it have anything to do with energy and power?

Thank you a lot for your help!
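For reference, the "theoretical natural frequency" of a thin clamped circular plate comes from classical plate theory; this sketch is background of my own, not from the thread, and the material and geometry numbers are purely illustrative. The tabulated eigenvalues are standard approximate values for the clamped boundary condition:

```python
import math

# Approximate tabulated eigenvalues lambda^2 for a clamped circular plate,
# keyed by (nodal diameters, nodal circles) -- standard reference values.
LAMBDA_SQ = {(0, 1): 10.22, (1, 1): 21.26, (2, 1): 34.88}

def clamped_plate_frequency(lam_sq, E, h, a, rho, nu=0.3):
    """Thin-plate natural frequency in Hz:
    f = lambda^2 / (2*pi*a^2) * sqrt(D / (rho*h)),  D = E*h^3 / (12*(1 - nu^2))."""
    D = E * h**3 / (12 * (1 - nu**2))          # flexural rigidity
    return lam_sq / (2 * math.pi * a**2) * math.sqrt(D / (rho * h))

# Illustrative numbers only: a 2 mm steel plate of 10 cm radius.
f01 = clamped_plate_frequency(LAMBDA_SQ[(0, 1)], 200e9, 0.002, 0.1, 7850)
```

Comparing the predicted fundamental against measurement is the usual first check; a response at exactly half the predicted frequency is often worth blaming on harmonics of the excitation rather than on a genuine mode, though the actual cause depends on the rig.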
mathematical terms - sorry if this is in the wrong forum

September 9th 2009, 12:56 PM

I apologize if this is posted in the incorrect forum. I couldn't quite find one that seemed appropriate. I have a question about terms in mathematics: Given the expression: $(x + 2)(x + 5)$ Could $(x + 2)$ be called a "term" in the expression? Or could it be called a factor of the expression? I guess I'm looking for what to call it. I've done some google searches, but couldn't find anything.

September 9th 2009, 01:20 PM

I don't think we can call it a term because a term is broken up by addition or subtraction signs. The x and 2 in (x+2) are called a variable and a monomial, respectively. Other than that I don't know what to call it. I did find this site, though, that is like an algebraic dictionary: Algebra Homework Help : Algebraic Terms & Definitions

September 9th 2009, 01:44 PM

Matt Westwood

Quote: "I apologize if this is posted in the incorrect forum. I couldn't quite find one that seemed appropriate. I have a question about terms in mathematics: Given the expression: $(x + 2)(x + 5)$ Could $(x + 2)$ be called a "term" in the expression? Or could it be called a factor of the expression? I guess I'm looking for what to call it. I've done some google searches, but couldn't find anything."

It would be a "factor". However, in that factor $x+2$, you have two "terms": $x$ and $2$. Now, if it were $3(x + 2)$, the $3$ would then be a "coefficient", which can be loosely understood to be a "constant factor".
Posts from May 2007 on The Unapologetic Mathematician

One very useful example of a category is the category of arrows of a given category $\mathcal{C}$. We start with any category $\mathcal{C}$ with objects ${\rm Ob}(\mathcal{C})$ and morphisms ${\rm Mor}(\mathcal{C})$. From this we build a new category called $\mathcal{C}^\mathbf{2}$, for reasons that I'll explain later. The objects of $\mathcal{C}^\mathbf{2}$ are just the morphisms of $\mathcal{C}$. The morphisms of this new category are where things start getting interesting. Let's take two objects of $\mathcal{C}^\mathbf{2}$ — that is, two morphisms of $\mathcal{C}$ — and lay them side-by-side. Now we want something that transforms one into the other. What we'll do is connect each of the objects on the left to the corresponding object on the right by an arrow, and require that the resulting square commute: $g\circ h=k\circ f$ as morphisms in $\mathcal{C}$. This is a morphism from $f$ to $g$. Sometimes we'll write $(h,k):f\rightarrow g$, and sometimes we'll name the square and write $\alpha:f\rightarrow g$. If we have three morphisms $f$, $g$, and $h$ in $\mathcal{C}$, and commuting squares $(k_1,k_2):f\rightarrow g$ and $(k_3,k_4):g\rightarrow h$, then we can get a commuting square $(k_3\circ k_1,k_4\circ k_2):f\rightarrow h$. We check that this square commutes: $h\circ k_3\circ k_1=k_4\circ g\circ k_1=k_4\circ k_2\circ f$. This gives a composition of commuting squares. It's easily checked that this is associative. Given any morphism $f:A\rightarrow B$ in $\mathcal{C}$ we can just apply the identity arrows to each of $A$ and $B$ to get a commuting square $(1_A,1_B)$ between $f$ and itself. It is clear that this square serves as the identity arrow on the object $f$ in $\mathcal{C}^\mathbf{2}$, completing our proof that arrows and commuting squares in $\mathcal{C}$ do form a category.

As expected, the only really interesting part of the scrimmage was the "power question".
This is basically a proof-based problem the whole team of 15 (or so) works on for an hour. This was always what I was best at, and tonight's was no exception. I'll post the question here for you to chew on. I'm restating it somewhat for this forum. You can ask for clarifications in the comments, but I'd rather you not post solutions since I intend to come back to it in a week to give my own (cleaned-up) solutions. I didn't write all the answers out myself, but I could have done so within the hour if I didn't have to write them out longhand. I also give credit to more fastidious members of the team for reviewing some of my answers and writing them out in the actual competition.

This problem is concerned with "arrangements" of "bits". A bit is just a symbol in one of two states: ${}0$ or $1$. An arrangement is a string of bits, considered to loop around from one end to the other. Actually the problem is written in terms of bits arranged around a circle, but I'll just write them as strings to avoid having to draw circular configurations here. We're interested in the following transformation on arrangements: a string $a_1a_2...a_n$ becomes the string $b_1b_2...b_n$, where $b_i=1$ if $a_i\neq a_{i+1}$ and $b_i=0$ if $a_i=a_{i+1}$. Since we're considering the strings to loop around, we have $b_n=1$ if $a_n\neq a_1$ and $b_n=0$ if $a_n=a_1$. As an example, the string $1000$ becomes the string $1001$.

Part I

1. What arrangements are created by starting with $1001$ and transforming it one, two, three, and four times?
2. Show the first 4 transformations of $100$.
3. Justify why $100$ will never become all zeroes no matter how many transformations are applied.

Part II

4. Show that any arrangement of two bits becomes all zeroes within two transformations.
5. Let $a_ia_{i+1}a_{i+2}$ be any three consecutive bits in an arrangement, which may have any number of other bits (these three may also wrap from the end of a string to the beginning). Show that transforming the arrangement twice gives a ${}0$ or $1$ at position $i$ depending on whether $a_i$ and $a_{i+2}$ are the same or different.
6. Use problem 5 to prove the following statement: if an arrangement with an even number of bits is transformed twice, then the result is the same as if every other bit were treated as an arrangement and transformed once. That is: $a_1a_2a_3a_4a_5a_6a_7a_8$ going to $b_1b_2b_3b_4b_5b_6b_7b_8$ in two transformations is equivalent to $a_1a_3a_5a_7$ and $a_2a_4a_6a_8$ going to $b_1b_3b_5b_7$ and $b_2b_4b_6b_8$ in one transformation each, and similarly for other even-length arrangements.
7. Extend the idea of problems 5 and 6 by proving the following. Let $a_1...a_{2^n}$ be an arrangement of length $2^n$. Prove that after $2^k$ transformations, the value in position $1$ depends only on whether $a_1$ and $a_{2^k+1}$ were the same or different in the original arrangement, for all $k<n$.
8. Prove that, for any positive integer $k$, any arrangement of $2^k$ bits becomes all zeros after $2^k$ transformations.

Part III

9. Justify why arrangements that are either all zeros or all ones are the only arrangements that give all zeros after one transformation.
10. Prove that one transformation on any arrangement of bits results in an arrangement with an even number of ones.
11. Combine problems 9 and 10 and prove that if an arrangement has an odd number of bits and is not all zeros and not all ones, then no number of transformations will result in all zeros.
12. Prove that if an arrangement has an even number of bits that is not a power of two, and exactly one bit is $1$, then no number of transformations will result in an arrangement of all zeros.

[UPDATE] Somehow the post got clipped. I've replaced the old material as close to my original wording as I remember.

I'm about to head off to participate on the "alumni" team in a scrimmage for the Howard County and Baltimore County teams going to the American Regions Math League.
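The transformation is easy to simulate, which makes a handy sanity check on several of the problems. A sketch (my own code, not part of the competition) verifying problem 1, the $k=3$ case of problem 8, and problem 3:

```python
from itertools import product

def step(bits):
    """One transformation: b_i = 1 exactly when a_i differs from its cyclic successor."""
    n = len(bits)
    return tuple(1 if bits[i] != bits[(i + 1) % n] else 0 for i in range(n))

# Problem 1: the four iterates of 1001
a, iterates = (1, 0, 0, 1), []
for _ in range(4):
    a = step(a)
    iterates.append(a)
assert iterates == [(1, 0, 1, 0), (1, 1, 1, 1), (0, 0, 0, 0), (0, 0, 0, 0)]

# Problem 8 for k = 3: every arrangement of 8 bits is all zeros after 8 steps
for bits in product((0, 1), repeat=8):
    b = bits
    for _ in range(8):
        b = step(b)
    assert b == (0,) * 8

# Problem 3: iterating 100 just cycles through 101, 110, 011 and never hits 000
b, seen = (1, 0, 0), set()
while b not in seen:
    seen.add(b)
    b = step(b)
assert (0, 0, 0) not in seen
```

Over $GF(2)$ the transformation is $T = I + S$ with $S$ the cyclic shift, which is one way to see problem 8: for $n = 2^k$ the binomial coefficients in $(I+S)^n$ are all even except the ends, so $T^n = I + S^n = 0$.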
As the term "alumnus" connotes, I did this stuff myself back in high school. To be honest, I thought that it was pretty silly even then. It ends up emphasizing speed and trivia over deep understanding of mathematics. The various Olympiads are better, but still not great. There's something in the society at large, though, that wants to reduce every single human activity to a contest, and mathematics for high school students is no exception. If I hadn't already been studying more advanced material on my own, I could easily see ARML beating the enjoyment of mathematics out of me. Still, some kids like running the races and like memorizing a billion little factoids. If they enjoy it, fine, and it's close enough to real mathematics to make it worth encouraging. And so I do my part.
A lot of the intuition goes through, however, and we do have a category $\mathbf{Cat}$ of small categories (with only a set of objects and a set of morphisms) and functors between them. Every category $\mathcal{C}$ comes with an identity functor $1_\mathcal{C}$. This is an example of an “endofunctor” (in analogy with “endomorphism”). Every category of algebraic structures we’ve considered — $\mathbf{Grp}$, $\mathbf{Mon}$, $\mathbf{Ring}$, $R-\mathbf{mod}$, etc. — comes with a “forgetful” functor to the category of sets. Remember that a group (for example) is a set with extra structure on top of it, and a group homomorphism is a function that preserves the group structure. If we forget all that extra structure we’re just left with sets and functions again. To be explicit, there is a functor $U:\mathbf{Grp}\rightarrow\mathbf{Set}$ that sends a group $(G,\cdot)$ to its underlying set $G$. It sends a homomorphism $f:G\rightarrow H$ to itself, now considered as a function on the underlying sets. It should be apparent that this sends the identity homomorphism on the group $G$ to the identity function on the set $G$, and that it preserves compositions. The same arguments go through for rings, monoids, $R$-modules. In fact, there are other forgetful functors that behave in much the same way. A ring is an abelian group with extra structure, so we can forget that structure to get a functor from $\mathbf{Ring}$ to $\mathbf{Ab}$ — the category of abelian groups. An abelian group, in turn, is a restricted kind of group. We can forget the restriction to get a functor from $\mathbf{Ab}$ to $\mathbf{Grp}$. Now for some more concrete examples. Remember that a monoid is a category with one object. So what’s a functor between such monoids? Consider monoids $M$ and $N$ as categories. Then there’s only one object in each, so the object function is clear. 
We're left with a function on the morphisms sending the identity of $M$ to the identity of $N$ and preserving compositions — a monoid homomorphism! What about functors between preorders, considered as categories? Now all the constraints are on the object function. Consider preorders $(P,\leq)$ and $(Q,\preceq)$ as categories. If there is an arrow from $a$ to $b$ in $P$ then there must be an arrow from $F(a)$ to $F(b)$. That is, if $a\leq b$ then $F(a)\preceq F(b)$. Functors in this case are just order-preserving functions. These two examples show how the language of categories and functors subsumes both of these disparate notions. Preorder relations translate into the existence of certain arrows, which functors must then preserve, while monoidal multiplications translate into compositions of arrows, which functors must then preserve. The categories of (preorders, order-preserving functions) and (monoids, monoid homomorphisms) both find a natural home within the category of (small categories, functors).

Like groups, rings, modules, and other algebraic constructs, we define a category by laying out what's in it, and how those things relate to each other. The first difference that gives some people pause is that we don't start with a set, but a class. Classes are pretty much like sets, but they can be "bigger". In particular, we sometimes run into technical problems with sets containing other sets, so we introduce classes as things that can hold any sort of sets with no problem. Of course we've only pushed back the problem to when we might want to collect classes together, but we'll burn that bridge when we come to it. Anyhow, there's really nothing that bad about basing an algebraic structure on a class. There are perfectly good reasons (we'll see) for putting a ring structure on a class. In this case we call the result a "large ring". On the other hand, when every class involved in a category is a set, we call it a "small category".
Seriously, it's not as big a deal as people seem to think. Okay, that out of the way; a category $\mathcal{C}$ consists of two classes: the "objects" and the "morphisms", or sometimes "points" and "arrows". These are denoted ${\rm Ob}(\mathcal{C})$ and ${\rm Mor}(\mathcal{C})$, respectively. Every morphism $m$ has a "source" and a "target" object: $s(m)$ and $t(m)$. If a morphism $m$ has source $a$ and target $b$ we often write $m:a\rightarrow b$. The class of all morphisms in $\mathcal{C}$ with source $a$ and target $b$ is written $\hom_\mathcal{C}(a,b)$, or just $\hom(a,b)$ if the category is understood. If all these "hom-classes" are actually sets, we say the category is "locally small". Most of the categories we consider will be locally small, and I'll just use this assumption without mentioning it explicitly. Given any three objects $a$, $b$, and $c$, we have an operation of "composition": $\circ:\hom(b,c)\times\hom(a,b)\rightarrow\hom(a,c)$. We think of this as taking an arrow from $a$ to $b$ and one from $b$ to $c$ and joining them tip-to-tail to make an arrow from $a$ to $c$. This composition must be associative: $(h\circ g)\circ f=h\circ(g\circ f)$ whenever these composites are defined. Also, every object $a$ has an "identity" morphism $1_a:a\rightarrow a$ so that $1_a\circ m=m$ for all $m\in\hom(b,a)$ and $m\circ1_a=m$ for all $m\in\hom(a,b)$. We can see that this looks a lot like the definition of a monoid, and for good reason: a monoid is "just" a (small) category with a single object. Walk through the definitions and say that there's only one object. You'll see that every morphism has the same source and target, so they can all be composed with each other. Then we've got a set of morphisms equipped with an associative composition with an identity element — a monoid! The most commonly seen use of categories is to describe other algebraic structures. The standard example here (which will motivate much of our later definitions) is $\mathbf{Set}$: the category of sets.
This has as objects the class of all sets (which can't itself be a set). The morphisms $\hom_\mathbf{Set}(X,Y)$ are all functions $f:X\rightarrow Y$. Similarly, we have the categories $\mathbf{Grp}$ — groups — $\mathbf{Ring}$ — rings with identity — $R-\mathbf{mod}$ — left $R$-modules — and so on. Each of these categories has as objects the class of all the appropriate algebraic structures, and as morphisms all homomorphisms of those structures. As a more concrete example, consider a ring $R$ with unit. We construct a small category $\mathbf{Mat}_R$ as follows: take as objects the set $\mathbb{N}$ of natural numbers. The morphisms $\hom_{\mathbf{Mat}_R}(m,n)$ are all $m\times n$ matrices with entries in $R$. The composition is regular matrix multiplication, and the identity on the object $n$ is the $n\times n$ identity matrix. Another great example of a category is a preorder. Given a preorder $(P,\leq)$ we take the set of elements $P$ as the objects of our category. Then we say that there is a single morphism in $\hom_P(x,y)$ if $x\leq y$ and no morphisms in the hom-set otherwise. Reflexivity tells us that there is a morphism in $\hom(x,x)$ for every object $x$ which can serve as an identity, and transitivity tells us that if there's a morphism in $\hom(x,y)$ and one in $\hom(y,z)$, then there's one in $\hom(x,z)$ which can serve as their composite.

For a good while we'll be giving a lot of definitions of concepts in the language of categories, usually motivated from the category of sets. Category theory gets a bad rap as involving a lot of definitions, but the language really does streamline a lot of thought about mathematics, so it's worth picking up a basic fluency. Everything I'll define in this first series I've actually already given good examples of in special cases, so the motivation should be apparent.
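The $\mathbf{Mat}_R$ example above is concrete enough to run. Here's a minimal sketch of my own (not from the post) over $R=\mathbb{Z}$ using numpy; note that with $\hom(m,n)$ taken as $m\times n$ matrices, as in the text, the composite $g\circ f$ of $f:a\rightarrow b$ and $g:b\rightarrow c$ corresponds to the matrix product $f\cdot g$:

```python
import numpy as np

def compose(g_, f_):
    """g∘f in Mat_Z, where hom(m, n) is the set of m-by-n integer matrices:
    the composite is the matrix product f·g under this convention."""
    return f_ @ g_

f = np.array([[1, 2, 0],
              [0, 1, 1]])            # f in hom(2, 3)
g = np.array([[1, 0],
              [2, 1],
              [0, 3]])               # g in hom(3, 2)
h = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])         # h in hom(2, 4)

# identities are identity matrices, and composition is associative
I2, I3 = np.eye(2, dtype=int), np.eye(3, dtype=int)
assert (compose(f, I2) == f).all() and (compose(I3, f) == f).all()
assert (compose(h, compose(g, f)) == compose(compose(h, g), f)).all()
```

The category axioms reduce, in this example, to facts every linear algebra course proves: matrix multiplication is associative and identity matrices are two-sided units.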
We'll see them coming up again and again in later work, which (I hope) will help lead to a comprehension of later mathematical concepts by analogy from the simpler concepts in algebra.

If anything has become clearer after a year in the application trenches it is this: the better-known you and your ideas are, the better chance you have in the job market. To that end, I'd like to advertise myself. Eventually the fall semester will start up, and with it the search for seminar speakers. Obviously I think I'd make a great choice. Here are a number of lectures I have basically ready to go.

• Functors extending the Kauffman Bracket
The Kauffman Bracket is a family of invariants of knots and links up to regular isotopy taking their values in commutative rings, and defined by a "skein theory". We want to find monoidal functors defined on the category $\mathcal{F}r\mathcal{T}ang$ of framed tangles so that if we restrict the functors to knots and links we recover (essentially) the old invariants. This approach highlights the fact that "skein theories" are actually just generating sets for monoidal categorical ideals, and that the skein-theoretic approach to knot invariants is another branch of representation theory. We thus study the representation theory of $R$-linearizations of the category of framed tangles, and of the Temperley-Lieb categories $\mathcal{TL}_\delta(R)$. We show that the representation theory of these categories is equivalent to the theory of (non-symmetric) nondegenerate bilinear forms over $R$.

• The Tangle Group
The group of a knot or link is a well-known invariant of ambient isotopy. We would like to extend this invariant to a monoidal functor $\Gamma$ on the category $\mathcal{T}ang$ of tangles in such a way that when we restrict $\Gamma$ to knots and links we recover (essentially) the old knot group.
Here, we define a monoidal bifunctor from the bicategory of (tangles, isotopies) to the bicategory of cospans of groups, and show how the restriction of the decategorification of this bifunctor to knots and links reproduces the knot group. We also indicate how the use of cospans immediately applies to generalize the fundamental quandle of a link, the fundamental biquandle of a virtual link, and other such invariants.

• A Categorification of Quandle Coloring Numbers by Anafunctors
The number of colorings of a link by a given quandle is a classical invariant of links up to ambient isotopy. We would like to categorify and extend this invariant to the category $\mathcal{T}ang$ of tangles. Here, we show how to associate, functorially, to each tangle an anafunctor between two comma categories of quandles. When we restrict this assignment to knots and links and specify a quandle $Q$ of colors we recover the $Q$-coloring invariant. If we first decategorify and specify a quandle $Q$ of colors we recover the $Q$-coloring matrix of a given tangle. This approach can be significantly generalized. We indicate the existence of a similar "$\mathcal{C}$-coloring" invariant for any co-$\mathcal{C}$ object in the category of pointed topological pairs up to homotopy.

And now some comments. Generally, these abstracts apply to the highest-level version of each talk. I can tweak any of them down a bit, mostly to adjust for familiarity of the audience with categories and with knot theory. The Kauffman Bracket talk is probably the most straightforward. It clearly highlights the relationship between skein theory and representation theory. Its primary interest is in this connection, and in the fact that it lays the groundwork for parallel categorifications of the Kauffman Bracket to Khovanov homology. The knot group talk should be clear to an algebraic topology audience.
It's really the genesis of the use of cospans in the study of tangles. For audiences more familiar with knot theory in particular, I can do the whole thing from the get-go in quandles. The quandle talk really isn't that abstract when it comes down to it, but it uses a number of tools possibly unfamiliar to the general mathematical audience. In fact, a good part of it is devoted to getting the definitions down straight. Once they're in place, the whole structure just sort of builds itself, which is how I really like my mathematics to go. The caveat, then, is that the audience really does need to either be interested in knot theory already, or somewhat familiar with and friendly towards categories. Otherwise it's really tough to motivate the material and to cover it within the usual microcentury. I could possibly put the latter two together in a pair of lectures, since the quandle coloring invariant is a direct outgrowth of the fundamental quandle of a tangle. That would also make it a bit easier to motivate the second half, so it may well go more smoothly as a pair to a more general audience. So, if your department is looking to fill a slot in an algebraic topology (or "quantum topology", as they're calling this stuff now) seminar, let's talk. Clearly the easier it is for me to get there from New Orleans the easier it will be to make arrangements. Also, though I've gotten used to paying out of pocket for these things, assistance in travel would also be helpful. I am particularly looking for an engagement in the Baltimore/Washington D.C. area around the weekend of October 6, so that gets high priority.

I'm wrapping up my coverage of ring theory (for now). There's a lot I've left unsaid about rings, and also about groups. I'm hoping, though, that I've given a certain amount of a feel for how algebraic structures work in preparation for the next topic: categories. There are a number of readers, I know, who have been waiting for this point almost as much as I have been.
There are also some who are dreading it. Everything up until this point has been stuff that everyone has to know, but categories are still a bit controversial in some circles. Many people find them even more abstract, or technical, or even content-free than other parts of algebra. Category theory is at turns praised and derided with the same phrase, “abstract nonsense”. Indeed the earliest uses were to make general statements about algebra, just like ring theory makes general statements about polynomials, and polynomials make general statements about numbers. For some reason there are still mathematicians who draw a line in the sand and say, “Here! No further!”, just as others saw it as the next natural step. Personally, I have been drawn to categories since I knew they existed. I still remember being shown the natural transformation from the identity functor on the category of vector spaces over a given field to the double-dual functor, and going back to Jeff Adams’ office (yes, the same Jeff Adams) again and again for more back in the spring of 1999. I hope now to say what it is that I saw then (and still see) in category theory, and to make the case for them. I really, honestly believe that within the next quarter-century nobody will be able to get a bachelor’s degree in mathematics without a passing familiarity with categories any more than one could avoid groups now, and it’s not just due to politicking on the part of its proponents as I’ve heard asserted. First of all, categories are tremendously useful as a metamathematical language. I’ll show in the future how it unifies the First Isomorphism theorems, for example. I’ll also show how, in the language of categories, direct products of groups are like greatest lower bounds. “So what,” the naysayer cries, “if this language says that those two concepts are related?” So, mathematics is about analogies. 
I can begin to understand this because I definitely understand that and this and that are similar in a certain way. Maybe knowing something about greatest lower bounds will tell me something new to look for in direct products of groups. Even if not, the relationships can help illuminate to newcomers — be they students or just lay readers — the essential points of the structures we consider, and more importantly why we consider them. But there’s also another side of categories that the opposition completely ignores: a category can be just as useful a concrete mathematical structure as a group can, and the framework of categories can harmoniously sew together other objects into a coherent whole. The various rings and modules of matrices over a given field meld into the category of all matrices over that field. The braid groups weave together into the category of tangles. And what do we gain from this categorical viewpoint? If unifying language isn’t enough for you, try this: category theory is, at its core, the language of the analytic/synthetic approach to mathematics in particular and all sciences in general. The scientific epistemology is to break complicated systems down into simpler parts, to understand those simple parts, and to understand how to reassemble them into the whole. This is exactly what category theory brings to the table: a systematic study of the nature of composition and how compositions transform when moving from one domain of discourse to another. Category theory is the language of analogies, and analogies are the lifeblood of mathematics. Algebra gives us analogies between equations. Categories give us analogies between theories. Our future is concerned with analogies between analogies. While I slept, The carnival came to town! (at The Geomblog) One thing we haven’t given good examples of is fields. 
We can get some from factoring out a maximal ideal from a commutative ring with unit, but the most familiar example — rational numbers — comes from a different construction. First we define a multiplicatively closed set. This is a subset $S$ of a commutative ring with unit $R$ which is, predictably enough, closed under the ring multiplication. We also require for technicality’s sake that $S$ contains the unit $1$. A good place to get such multiplicatively closed sets is as complements of prime ideals — given two elements $a$ and $b$ in $R$ but not in the prime ideal $P$, their product $ab$ must also be outside $P$. Another good way is to start with some collection of elements and take the submonoid they generate under multiplication. In general not all the elements of $S$ will be invertible in $R$. What we want to do is make a bigger ring that properly contains (a homomorphic image of) $R$ in which all elements of $S$ do have inverses. We’ll do this sort of like how we built the integers by adding negatives to the natural numbers. Consider the set of all elements $(r,s)$ with $r\in R$ and $s\in S$. We’ll think of this as the “fraction” $\frac{r}{s}$. Now of course we have too many elements. For example, $(s,s)$ should be “the same” as $(1,1)$ for all $s\in S$. We introduce the following equivalence relation: $(r_1,s_1)\sim(r_2,s_2)$ if and only if there is a $t\in S$ with $t(r_1s_2-r_2s_1)=0$. Notice that if $S$ contained no zero-divisors we could do away with the “there is a $t$” clause, but we might need it in general. So as usual we pass to the set of equivalence classes and assert that the result is a ring. The definitions of addition and multiplication are exactly what we expect if we remember fractions from elementary school. Choose representatives $(r_1,s_1)$ and $(r_2,s_2)$, and define $(r_1,s_1)+(r_2,s_2)=(r_1s_2+r_2s_1,s_1s_2)$ and $(r_1,s_1)(r_2,s_2)=(r_1r_2,s_1s_2)$.
From here it’s a straightforward-but-tedious verification that these operations are independent of the choices of representatives and that they satisfy the ring axioms. We call the resulting ring by a number of names. Two of the most common are $S^{-1}R$ and $R_S$. If $S$ is generated by some collection of elements $\{x_1,...,x_n\}$ we sometimes write $R[x_1^{-1},...,x_n^{-1}]$. There are a few more, but I’ll leave them alone for now. It comes with a homomorphism $\iota:R\rightarrow R_S$, sending $r$ to $(r,1)$. If $S$ contains no zero-divisors then this is an isomorphism onto its image, since then $(r_1,1)\sim(r_2,1)$ would imply that $r_1-r_2=0$. That is, a copy of $R$ sits inside $R_S$. This homomorphism has a nice universal property: if $f:R\rightarrow R'$ is any homomorphism of commutative rings with units sending each element of $S$ to a unit, then $f$ factors uniquely as $\bar{f}\circ\iota$. That is, $\iota:R\rightarrow R_S$ is the “most general” such homomorphism. Now let’s say we start with an integral domain $D$. This means that the ideal $\mathbf{0}$ consisting of only the zero element is prime. Then its complement — all nonzero elements of $D$ — is a multiplicatively closed set $D^{\times}$. We construct the field of fractions $D_{D^{\times}}$ by adding inverses to all the nonzero elements. Now every nonzero element has an inverse, so this really is a field. In fact, it’s the “most general” field containing $D$. And, finally, let’s apply this construction to the integers. They are an integral domain, so it applies. Now the field of fractions consists of all fractions $\frac{m}{n}$ with $m,n\in\mathbb{Z}$ and $n\neq0$, with the above-defined sum and product. That is, it consists of the fractions we all know from elementary school. We call this field $\mathbb{Q}$: the field of rational numbers.
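The pairs-and-equivalence construction above is concrete enough to run. Here is a minimal sketch for $R=\mathbb{Z}$ and $S=\mathbb{Z}\setminus\{0\}$; the class and method names are my own, not from the text, and since $\mathbb{Z}$ has no zero-divisors the "there is a $t$" clause collapses to plain cross-multiplication:

```python
# Sketch of the field of fractions of Z, built exactly as in the text:
# pairs (r, s) with s != 0, compared by cross-multiplication.
class Frac:
    def __init__(self, r, s):
        if s == 0:
            raise ValueError("denominator must lie in S = Z \\ {0}")
        self.r, self.s = r, s

    def __eq__(self, other):          # (r1,s1) ~ (r2,s2) iff r1*s2 - r2*s1 == 0
        return self.r * other.s - other.r * self.s == 0

    def __add__(self, other):         # (r1*s2 + r2*s1, s1*s2)
        return Frac(self.r * other.s + other.r * self.s, self.s * other.s)

    def __mul__(self, other):         # (r1*r2, s1*s2)
        return Frac(self.r * other.r, self.s * other.s)

    def __repr__(self):
        return f"{self.r}/{self.s}"

# 1/2 + 1/3 = 5/6, and (2,4) lies in the same equivalence class as (1,2):
print(Frac(1, 2) + Frac(1, 3))   # 5/6
print(Frac(2, 4) == Frac(1, 2))  # True
```

Note that nothing here ever reduces a fraction to lowest terms; the equivalence relation alone does all the work, just as in the construction.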
Now that we know we can talk about divisibility in terms of ideals, we remember a definition from back in elementary school: a number $p$ is “prime” if the only numbers that divide it are $1$ and $p$ itself. So, we might make the guess that a prime ideal $P$ is one so that the only ideals containing it are $P$ itself and the whole ring. Unfortunately, that’s not quite right. There’s actually a different definition of a prime number, and it just so happens for numbers that the two definitions describe (almost) the same numbers. In more general rings, however, they’re different. What we’ve just described we’ll call a “maximal” ideal, since you can’t make it any bigger without getting the whole ring. Here’s the other definition of a prime number: a number $p$ is prime if and only if whenever $p|ab$ then either $p|a$ or $p|b$. Let’s turn this into ideals. We’re defining a property of an ideal $P$ in terms of two other ideals $A$ and $B$. In the case of integers, these are the principal ideals $(a)$ and $(b)$ since all ideals in $\mathbb{Z}$ are principal. The product of two integers generates the ideal $(ab)=(a)(b)$ — the product of the two ideals, so we’ll also consider the product ideal $AB$. Now we can state our property: an ideal $P$ is prime if whenever $AB\subseteq P$ then either $A\subseteq P$ or $B\subseteq P$. We also insist that $P$ is not the whole ring, just as we insist that $1$ is not a prime number. Prime ideals have a number of nice properties, especially when we’re just looking at commutative rings with units. For instance, let’s consider the quotient $R/P$ of a commutative ring $R$ by a prime ideal $P$, and elements $a+P$ and $b+P$ in this quotient ring. If their product $ab+P=0$ then $ab\in P$ so $(ab)\subseteq P$. Now we can show that $(a)(b)\subseteq(ab)\subseteq P$, so either $(a)\subseteq P$ or $(b)\subseteq P$ since $P$ is prime. In particular $a\in P$ or $b\in P$, so $a+P=0$ or $b+P=0$.
That is, if the product of two elements in $R/P$ is zero, then one or the other must be — $R/P$ is an integral domain! What happens if we use a maximal ideal $M$ in this construction? Given any element $a+M\neq0$ in $R/M$, we have an element $a\notin M$. If we try to make an ideal containing all of $M$ and also $a$, then we get the whole ring $R$. In particular we get $1=xa+ym$ for some $x,y\in R$ and $m\in M$. Then $(x+M)(a+M)=xa+M=(1-ym)+M=1+M$ in $R/M$, so $x+M$ is an inverse of $a+M$ — $R/M$ is a field! Now we can be sure that there are rings with prime ideals that are not maximal, as indicated above. Take any integral domain $D$ that’s not a field. Then the ideal $\mathbf{0}$ is prime, since $D/\mathbf{0}\cong D$ is an integral domain, but it’s not maximal since $D$ isn’t a field. Of course I hear you cry out, “but maybe the only difference is ever the zero ideal!” Well, just take the direct sum of two copies of the ring: $D_1\oplus D_2$. Then the second copy is an ideal in the direct sum, and $(D_1\oplus D_2)/D_2\cong D_1$ is an integral domain but not a field. Thus $D_2$ is a prime ideal, but not a maximal one.
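Both quotient facts can be checked by brute force in a small case, assuming nothing beyond modular arithmetic: $\mathbb{Z}/(7)$, a quotient by a maximal ideal, is a field, while $\mathbb{Z}/(6)$, a quotient by an ideal that isn’t even prime, fails to be an integral domain. A small sketch (not from the original post; function names are mine):

```python
# In Z, the nonzero prime ideals (p) are exactly the maximal ones, so
# Z/(p) should be a field; for composite n, Z/(n) has zero divisors,
# so (n) is not even prime.

def is_field(n):
    """Every nonzero class mod n has a multiplicative inverse."""
    return all(any(a * b % n == 1 for b in range(1, n)) for a in range(1, n))

def zero_divisors(n):
    """Pairs of nonzero classes whose product is zero mod n."""
    return [(a, b) for a in range(1, n) for b in range(1, n) if a * b % n == 0]

print(is_field(7))        # True: (7) is maximal, so Z/(7) is a field
print(is_field(6))        # False
print(zero_divisors(6))   # [(2, 3), (3, 2), (3, 4), (4, 3)]
```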
Van Aubel's theorem This theorem can be stated: Start with any planar quadrilateral. Draw squares outwards from each edge, and draw lines between the centres of opposite squares. The two lines thus drawn will be equal in length, and perpendicular. Here are some illustrations (from answers.com): As you see, one of the pleasant things about this theorem is its generality: the quadrilateral does not have to be convex; its edges may cross, and one or two of them can even have zero length. I like this theorem: it has a nice unexpectedness about it (to me, anyway); also it's not hard to prove. I have a rather uninspiring proof using vectors. I think this would make a nice student project: verifying the theorem for different quadrilaterals, and (for the better students) proving it. There's a lovely interactive diagram of this theorem at http://www.mste.uiuc.edu/dildine/geometry/vanaubel.html.
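One way to run the "verifying the theorem for different quadrilaterals" project is with complex numbers: if the vertices are $a,b,c,d$, the centre of the square erected on the directed edge $a\to b$ (always on the same side) is $(a+b)/2 + i(b-a)/2$, and Van Aubel's theorem amounts to the exact identity $R-P = i(S-Q)$ between the two centre-to-centre segments, which forces equal length and perpendicularity at once. A small sketch (the function names are mine):

```python
# Numerical check of Van Aubel's theorem using complex numbers.
# square_centre(a, b): centre of the square erected on edge a->b,
# always on the same side of the directed edge.

def square_centre(a, b):
    return (a + b) / 2 + 1j * (b - a) / 2

def van_aubel_defect(a, b, c, d):
    p = square_centre(a, b)
    q = square_centre(b, c)
    r = square_centre(c, d)
    s = square_centre(d, a)
    # (r - p) should equal i*(s - q): same length, rotated 90 degrees
    return abs((r - p) - 1j * (s - q))

# a non-convex quadrilateral works just as well as a convex one
print(van_aubel_defect(0 + 0j, 4 + 0j, 1 + 1j, 0 + 3j))  # ~0.0
```

Trying random, crossed, or degenerate quadrilaterals always gives a defect of zero (up to floating-point error), matching the generality claimed above.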
MathGroup Archive: April 2005 [00730] Re: Integrating a complicated expression involving Sign[...] etc. • To: mathgroup at smc.vnet.net • Subject: [mg56348] Re: Integrating a complicated expression involving Sign[...] etc. • From: Paul Abbott <paul at physics.uwa.edu.au> • Date: Fri, 22 Apr 2005 06:23:40 -0400 (EDT) • Organization: The University of Western Australia • References: <d3t38d$8jd$1@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com In article <d3t38d$8jd$1 at smc.vnet.net>, Christian Mikkelsen <s010132 at student.dtu.dk> wrote: > I want to ask some general questions. I get some very complicated > intermediate results (see expressions below) that Mathematica does not > process (Integrate) within a day or two. > 1) Does it normally pay off just leaving it to it? If Mathematica is still > running does it mean that it is doing meaningful transformations and > calculations? Usually very unlikely. > 2) I would like to understand the structure of some of the intermediate > results ("Thetaintegrant" in particular) to see if Mathematica might need > a little help (see my earlier post and the kludge below). Is there any > good way to do that? Using TreeForm, for example if I could just look at > the upper most levels of the expression tree? Mathematica certainly needs some help here ... > 3) My problem seems to have a lot of structure so I am hopeful that > something can be done to help Mathematica a little. > Oh, and an easy one... :-) > 4) How do I use $Assumptions to tell Mathematica that A is real etc. > globally? This is possible, but usually not useful. > (* This is actually a spherical Bessel-function *) > j[k_, n_] = ((-I)^n*Sqrt[2*Pi]*BesselJ[1/2 + n, k])/Sqrt[k] > (* A, B, C are to be substituted by sines and cosines to make up polar > coordinates *) > StorMatrix = {{B^2+C^2, A B, A C}, > {B A, A^2+C^2, B C}, > {C A, C B, A^2+B^2}}; Do you have a reference for this matrix and the following computations?
What is the application and derivation of the integral and matrix? Often it is better to go back to the mathematics to decide the best approach for a Mathematica implementation. Importantly, if the symbolic result ends up being too complicated, it is unlikely to be that useful -- especially if the final goal is numerical computation of a range of integrals. > (* My integrant in the case (0, 0) *) > Integrant = Simplify[StorMatrix j[k A, n] j[k B, m] j[k a, 0] Exp[I k (A > x + B y + C z)]/.{n->0, m->0}, > Assumptions -> {A \[Element] Reals, B \[Element] Reals, > C \[Element] Reals, a > 0, > x \[Element] Reals, y \[Element] Reals, > z \[Element] Reals}]; > (* Mathematica refuses to do the definite integral but happily does the > indefinite one *) > KIntegral = Integrate[Integrant, k, > Assumptions -> {A \[Element] Reals, B \[Element] Reals, > C \[Element] Reals, a > 0, > x \[Element] Reals, y \[Element] Reals, > z \[Element] Reals}]; A better approach is to identify exactly what you are trying to compute. For example, consider the following integral: Integrate[j[n, t] Exp[I w t], {t, -Infinity, Infinity}] where j[n, t] is a spherical Bessel function (reversing the order of the arguments compared to your definition -- actually, j[n][t] is a better notation). This is, essentially, a much simpler version of your integral. However, even in this much simpler problem, Mathematica is not capable of computing this directly. However, it is just the Fourier transform of the spherical Bessel function, which is proportional to LegendreP[n, w] for -1 < w < 1. (Mathematica cannot compute this general Fourier transform directly.) The conclusion to draw from this, though, is that I expect that your general integral can be computed by suitable use of integral transforms. I note that at http://www.tcm.phy.cam.ac.uk/~pdh1001/thesis/node31.html the integral of a particular triple product of spherical Bessel functions is computed.
> (* I calculate the lower limit, no problems here *) > Klowlimit = Limit[KIntegral, k -> 0, > Assumptions -> {A \[Element] Reals, B \[Element] Reals, > C \[Element] Reals, a > 0, > x \[Element] Reals, y \[Element] Reals, > z \[Element] Reals}]//Simplify; > (* Upper limit, with ComplexExpand and cheating. True? *) > KinfIntegral = ComplexExpand[KIntegral] /. > {ExpIntegralEi[ I A_ k] -> Sign[ I A] Pi, > ExpIntegralEi[-I A_ k] -> Sign[-I A] Pi}; It should be possible to justify this type of approach (one that I often use). However, using Sign is probably not optimal. UnitStep is better (for example, Mathematica knows how to differentiate and integrate UnitStep). Another approach would be to use Piecewise. > Kinflimit = Limit[KinfIntegral, k -> \[Infinity], > Assumptions -> {A \[Element] Reals, B \[Element] Reals, > C \[Element] Reals, a > 0, > x \[Element] Reals, y \[Element] Reals, > z \[Element] Reals}]; > Thetaintegrant = (Kinflimit-Klowlimit)/.{A->Sqrt[1-w^2] P, B->Sqrt[1-w^2] > Q, C-> w}; > (* Warning: Mathematica 5.1 gets stuck on this integral : *) > Thetaintegral = > Integrate[Thetaintegrant, {w, -1, 1}, > Assumptions -> {P >= -1, P <= 1, Q >= -1, Q <= 1, a > 0}] You could try interchanging the order of integration, k <-> w. It does not look promising to me though. Paul Abbott Phone: +61 8 6488 2734 School of Physics, M013 Fax: +61 8 6488 1014 The University of Western Australia (CRICOS Provider No 00126G) 35 Stirling Highway Crawley WA 6009 mailto:paul at physics.uwa.edu.au AUSTRALIA http://physics.uwa.edu.au/~paul
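The Fourier-transform fact used above, that the transform of a spherical Bessel function is supported on (-1, 1) and proportional to LegendreP[n, w] there, is easy to check numerically. Here is a sketch in Python/SciPy rather than Mathematica; the normalisation, $\int_{-\infty}^{\infty} j_n(t)\,e^{-i\omega t}\,dt = \pi(-i)^n P_n(\omega)$ for $|\omega|<1$, is my reading of the standard integral representation of $j_n$, so treat the constant as an assumption:

```python
# Numerical check that the Fourier transform of a spherical Bessel
# function j_n is supported on (-1, 1) and proportional to P_n there.
# For even n, j_n is real and even, so the full-line transform is twice
# the half-line cosine transform, computed here with QUADPACK's Fourier
# routine (reached via weight='cos' and an infinite upper limit).
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn, eval_legendre

def half_line_cos_transform(n, w):
    val, _ = quad(lambda t: spherical_jn(n, t), 0, np.inf,
                  weight='cos', wvar=w)
    return val

for n in (0, 2):
    for w in (0.3, 0.7):
        got = 2 * half_line_cos_transform(n, w)
        want = np.pi * (-1) ** (n // 2) * eval_legendre(n, w)  # pi*(-i)^n*P_n(w)
        print(n, w, got, want)   # the two columns agree to several digits
print(2 * half_line_cos_transform(0, 3.0))  # ~0: outside (-1, 1)
```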
Quotient Space October 5th 2011, 03:14 PM #1 MHF Contributor Mar 2010 Quotient Space Let $f:X\to X$ be a linear transformation, and $V\subset X$ an invariant subspace of f, i.e., $f(V)\subset V$. Prove that f induces a linear transformation $\hat{f}:X/V\to X/V$. Since $V$ is invariant, we know $f(V)\subset V$. I am struggling with quotient spaces. I know it means X mod V, the set of cosets $\{x+V : x\in X\}$. I need some guidance and a good explanation of what is going on if possible. Re: Quotient Space what we would like to do is define $\hat{f}(x+X) = f(x)+X$. first we need to be sure that this is well-defined, since we are working with cosets, instead of elements. so suppose y is in x + X (that is, that y + X = x + X), so that y = x + v, for some vector v in X. then f(y) = f(x + v) = f(x) + f(v) = f(x) + v' (where v' is some element of X, since X is invariant for f), so f(y) is in f(x) + X, thus f(y) + X = f(x) + X. hence $\hat{f}(y+X) = f(y) + X = f(x) + X = \hat{f}(x+X)$, so $\hat{f}$ is indeed well-defined. from here, it's all downhill: the linearity of $\hat{f}$ is a direct consequence of the linearity of f: $\hat{f}((\alpha u + \beta v) + X) = f(\alpha u + \beta v) + X = \alpha f(u) + \beta f(v) + X$ $= (\alpha f(u) + X) + (\beta f(v) + X) = \alpha \hat{f}(u+X) + \beta \hat{f}(v+X)$ it might be helpful to see a simple concrete example. let $V =\mathbb{R}^2$, the Euclidean plane, with the usual vector operations, and suppose $X = \{(x,0) \in \mathbb{R}^2\}$. what do the elements of V/X look like? well anything in (x,y) + X has a 2nd coordinate of y, so the elements of V/X are all horizontal lines (we get one for each different real number y). so suppose f(x,y) = (3x+y, 2y). it should be clear X is an invariant subspace for f. then $\hat{f}$ is the mapping that takes the line going through y, to the line going through 2y. in other words, $\hat{f}$ acts "just like" the function a-->2a (of one real variable).
the reason being, when we act "mod X", we are "shrinking" the entire x-dimension down to 0. so what f does on the first coordinate becomes irrelevant, as far as $\hat{f}$ is concerned. Last edited by Deveno; October 6th 2011 at 06:22 AM. Re: Quotient Space what we would like to do is define $\hat{f}(x+V) = f(x)+V$. first we need to be sure that this is well-defined, since we are working with cosets, instead of elements. so suppose y is in x + V (that is, that y + V = x + V), so that y = x + v, for some vector v in V. then f(y) = f(x + v) = f(x) + f(v) = f(x) + v' (where v' is some element of V, since V is invariant for f), so f(y) is in f(x) + V, thus f(y) + V = f(x) + V. hence $\hat{f}(y+V) = f(y) + V = f(x) + V = \hat{f}(x+V)$, so $\hat{f}$ is indeed well-defined. from here, it's all downhill: the linearity of $\hat{f}$ is a direct consequence of the linearity of f: $\hat{f}((\alpha u + \beta v) + V) = f(\alpha u + \beta v) + V = \alpha f(u) + \beta f(v) + V$ $= (\alpha f(u) + V) + (\beta f(v) + V) = \alpha \hat{f}(u+V) + \beta \hat{f}(v+V)$ it might be helpful to see a simple concrete example. let $V =\mathbb{R}^2$, the Euclidean plane, with the usual vector operations, and suppose $X = \{(x,0) \in \mathbb{R}^2\}$. what do the elements of V/X look like? well anything in (x,y) + X has a 2nd coordinate of y, so the elements of V/X are all horizontal lines (we get one for each different real number y). so suppose f(x,y) = (3x+y, 2y). it should be clear X is an invariant subspace for f. then $\hat{f}$ is the mapping that takes the line going through y, to the line going through 2y. in other words, $\hat{f}$ acts "just like" the function a-->2a (of one real variable). the reason being, when we act "mod X", we are "shrinking" the entire x-dimension down to 0. so what f does on the first coordinate becomes irrelevant, as far as $\hat{f}$ is concerned. How did you know how to define $\hat{f}\mbox{?}$ I also don't really understand your example either.
Re: Quotient Space that is the usual definition of an "induced map". the map V--->V/X given by v-->v + X is "canonical", in that it does not depend on the choice of a basis for V. you can also easily show it is linear, let's say we call it T. thus f (via T) "induces" the map $\hat{f}$ by setting: $\hat{f} \circ T = T \circ f$ the reason we need X to be invariant for f, is because if x is in X, but f(x) is not in X, then f(v + X) might not be in the coset f(v) + X (although it will be in the coset f(v) + f(X), but this is not "the same" canonical map T). in this problem, all we have to work with is the map v-->v+X, and the map f, so our answer must only depend on those. as for the example, the situation is like this: linear spaces are very well-behaved, and have natural geometrical interpretations. a 1-dimensional space is a line, a 2-dimensional space is a plane, a 3-dimensional space is like the world we live in (ignoring the curvature introduced by the minkowski metric). higher-dimensional spaces are hard to "visualize" but their behavior is "the same" as lower dimensional spaces, just...more basis vectors to specify. so 3-space mod a line, would be like partitioning space into a set of parallel "straws" (really skinny ones). the result is 2-dimensional, by specifying a point in a plane which cuts through all the straws, we can tell "which" straw we're at...that is, which coset of the line (straw) that goes through the origin. similarly, 3-space mod a plane, is like partitioning space into sheets parallel to a plane that goes through the origin. if we have a line that crosses all the "sheets", specifying a point on that line, tells us which "sheet" (that is coset of the plane going through the origin) we are in. in vector spaces, "modding" via a subspace is analogous to "modding mod n" in the integers. when we take an integer mod n, we are essentially declaring "all multiples of n are 0" which just leaves a cycle between 0 and n-1. 
when we take V/X, we are essentially setting the entire subspace X as {0}, leaving only the remaining dim(V) - dim(X) dimensions as relevant. in fact, what the rank-nullity theorem says is: dim(V) = dim(X) + dim(V/X). this is just the first isomorphism theorem of group theory in the context of vector spaces, and can be proved the same way. Last edited by Deveno; October 5th 2011 at 05:12 PM. Re: Quotient Space So $\hat{f}$ isn't monic then, correct? What would be an example that shows $\hat{f}$ isn't monic? Re: Quotient Space it might be, it might not be. in the example i gave, $\hat{f}$ was monic, but if f is the 0-map, then so is $\hat{f}$. even if f isn't monic, $\hat{f}$ might be: suppose f(x,y,z) = (x,x,z) in $\mathbb{R}^3$. now f certainly isn't monic, f(1,2,3) = f(1,4,3), for example. but if our subspace X was all vectors of the form (0,y,0) (which is 1-dimensional, with basis {(0,1,0)}), then certainly f(0,y,0) = (0,0,0) is in X. so $\hat{f}((x,y,z)+X) = (x,x,z) + X = (x,0,z) + X$ (since (x,y,z) - (x,0,z) = (0,y,0) is in X). i claim $\hat{f}$ is monic. suppose $\hat{f}((x,y,z) + X) = \hat{f}((x',y',z') + X)$. then (x,0,z) + X = (x',0,z') + X, so we must have x' = x, and z' = z. thus (x,y,z) - (x',y',z') = (x,y,z) - (x,y',z) = (0,y-y',0) is in X, so (x,y,z) + X = (x',y',z') + X.
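The running example from earlier in the thread (f(x,y) = (3x+y, 2y), with X the x-axis in V = R^2) is small enough to check by machine: pick the canonical representative (0, y) for each coset, and verify both that the induced map is well-defined and that it acts like y --> 2y. A quick sketch (the function names are mine, not from the thread):

```python
# The coset (x, y) + X, with X the x-axis, is determined by y alone,
# so reduce_mod_X picks the canonical representative (0, y).  fhat is
# "apply f, then reduce"; well-definedness means the answer cannot
# depend on which representative of the coset we start from.

def f(v):
    x, y = v
    return (3 * x + y, 2 * y)

def reduce_mod_X(v):
    return (0.0, v[1])

def fhat(v):                       # induced map on V/X
    return reduce_mod_X(f(v))

# same coset, different representatives: (1, 5) + X == (42, 5) + X
print(fhat((1.0, 5.0)))   # (0.0, 10.0)
print(fhat((42.0, 5.0)))  # (0.0, 10.0) -- well-defined, acts like y -> 2y
```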
also f(x) + X = f(y) + X, merely implies f(x) and f(y) are in the same coset of V, that is: (f(x) - f(y)) + X = X, not "0" (of course 0 + X is the 0-vector (coset) in the coset space V/X, but is is unwise to denote this by simply "0", it's not the same type of vector as a vector in V, it's a vector composed of a SET of vectors in V (just as a residue class in the integers mod n isn't an integer, but an equivalence class of a SET of integers)). saying f(x) - f(y) is in X/V is meaningless....what is X/V? even if you meant f(x) - f(y) is in V/X, this is still wrong, f(x), f(y) are elements of V, NOT elements of V/X. part of your confusion is due to a bad typo in my orginal post, which i have edited. the cosets in V/X are cosets of X in V. i am more used to the notation U/V (which is more common), so most of the "stuff + V" expressions in my original post should have been "stuff + X". embarrassing o.o Re: Quotient Space $\hat{f}$ isn't monic. If I define f as $f:X\to X$ by $f(x)$, then f is a linear transformation. But $\hat{f}$ isn't monic since every element in the domain is mapped to $0+V$ Is this correct? Re: Quotient Space again, no. i don't know why you're fixated on whether or not $\hat{f}$ is monic. sometimes it is, sometimes it isn't, it depends on f and X. look all $\hat{f}$ is, is a linear transformation "as much like f as possible" on V/X. "modding by X" could take some of the domain values where multiple values in V get mapped by f to the same point, to a single point in V/X, but it might not. as far as the example you gave, how do you propose to extend f to a map defined on all of V? but sure, if X = V, and f is the identity on V, f(x) = x, then of course $\hat{f}$ takes everything to the "0" of V/V, because V/V has just ONE coset, with all of V in it! V/V = {0+V}, so $\hat{f}$ is the 0-map AND monic! but seriously, i think you are making this be far more complicated than it needs to be. 
a vector space is just an abelian group with some "extra structure" (namely, the scalar multiplication). and the quotient space is just the quotient group, as with any abelian group, along with an "induced" scalar multiplication: a(v + X) = av + X. $\hat{f}$ is just f "ignoring what happens in X" (since f(X) is a subset of X, which all just gets dropped in the + X part of the coset). October 5th 2011, 03:37 PM #2 MHF Contributor Mar 2011 October 5th 2011, 04:41 PM #3 MHF Contributor Mar 2010 October 5th 2011, 04:56 PM #4 MHF Contributor Mar 2011 October 5th 2011, 05:18 PM #5 MHF Contributor Mar 2010 October 5th 2011, 06:16 PM #6 MHF Contributor Mar 2011 October 6th 2011, 05:36 AM #7 MHF Contributor Mar 2010 October 6th 2011, 06:17 AM #8 MHF Contributor Mar 2011 October 6th 2011, 04:57 PM #9 MHF Contributor Mar 2010 October 6th 2011, 05:44 PM #10 MHF Contributor Mar 2011
Non-computable numbers What are the limits to computation? The computer science theory of computation can be intimidating because of its use of logic but taking a programmer's approach makes it seem much simpler. So if you want to know what a non-computable number is - read on. One of the big topics of computer science is computability and it can often seem mysterious and strange. How can there be things that aren't computable? The whole idea seems silly. There is another way of looking at the question that is more the way that a programmer would think about it - so let's get started. To be 100% clear this is a general discussion about ways of thinking about computability and not formal theorem and proof. Some of the ideas expressed here can be turned into exact proofs; others are better expressed in different and more precise language before proving them. The central issue is about how much "stuff" there is in anything. I'm not going to bore you with stories about the history of numbers - fascinating though it is. All you really need to know is that in the beginning were the whole numbers, which we can confuse with the integers (which include zero and the negative numbers). Then there were fractions and these filled in all the spaces on the number line. That is, if you draw a line between two points then each point will correspond to an integer or a fraction on some measuring system. (Figure: a small part of the number line.) To every number there is a point, but is the reverse true? Is there a number for every point? Notice that if you pick any two points on the line then you can always find another, usually fractional, point between them. In fact if you pick any two points you can find an infinity of fractions that lie between them. The fractions form a dense set that "covers" the number line.
The irrationals

Now, with the integers and the fractions there is a number to every point and a point to every number - NO! This is wrong but it is far from obvious that it is wrong. If you think it is obviously wrong then you have been exposed to too much math - not in itself a bad thing but it can sometimes make it harder to see the naive viewpoint. What happened was that some followers of Pythagoras discovered a shocking truth - that there were points on the line that didn't correspond to integers or fractions. The square root of two is one such point. You can find the point that corresponds to the square root of two simply by drawing a unit square and drawing the diagonal - now you have a line segment that is exactly the square root of two long and thus lines of that length exist and are easy enough to construct. That is, you can find a point on the line that is the square root of two quite easily. So far so much basic math but, and this is the big but, it can be proved (fairly easily) that there can be no fraction, i.e. a number of the form a/b, that is the square root of two. You can show that a line square root of two long exists and so there has to be a point on the number line that is that far from zero. Is there a rational number corresponding to this point? The simple answer is no! What this means is that if we only have the integers and the fractions or the rationals (because they are a ratio of two integers) then there are points on the line that do not correspond to a number - and this is unacceptable. The solution is that we simply expanded what we regard as a number and the irrational numbers were added to the mix. Now we have integers like one or two, fractions like 5/6 and irrationals like the square root of two. From the point of view of computing this is where it all gets very messy.
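There is a nice computational way to see the same gap: the continued-fraction convergents p/q of the square root of two satisfy p^2 - 2q^2 = +1 or -1, so no convergent, and in fact no fraction at all, ever squares to exactly 2, even though the approximations home in on that point on the line. A short sketch using exact rational arithmetic (the recurrence is the standard one for the square root of two):

```python
# Convergents of sqrt(2): 1/1, 3/2, 7/5, 17/12, ... via
# p' = p + 2q, q' = p + q.  Each satisfies p^2 - 2q^2 = +/-1,
# so (p/q)^2 is never exactly 2, yet the error shrinks towards 0.
from fractions import Fraction

p, q = 1, 1
for _ in range(10):
    frac = Fraction(p, q)
    assert frac * frac != 2          # never exactly the point sqrt(2)
    print(frac, float(frac * frac - 2))
    p, q = p + 2 * q, p + q
```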
{"url":"http://www.i-programmer.info/babbages-bag/1902-non-computable-numbers.html","timestamp":"2014-04-16T10:24:35Z","content_type":null,"content_length":"35507","record_id":"<urn:uuid:f66d5b49-8b1b-4936-a0f6-800030685756>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
def'n of Limit Point? and limit.

Let f: U -> R^n. Let a be a point such that for every delta > 0 there exists an x in U, different from a, with ||x - a|| < delta (in other words, a is a limit point of U). We say that lim f(x) = L as x -> a if for every epsilon > 0 there exists a delta > 0 such that 0 < ||x - a|| < delta implies ||f(x) - L|| < epsilon.

In set form, the definition reads: for every open set V about L, there exists an open set W about a such that f(W ∩ U), the image of W ∩ U under f, is contained in V. If D is the domain of f, then we see that L is a limit point of f(D).
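The epsilon-delta definition can be exercised numerically. The sketch below is my own illustrative example, not from the post: it takes f(x) = x^2, a = 2, L = 4; since |x^2 - 4| = |x - 2| * |x + 2| < 5 * |x - 2| whenever |x - 2| < 1, choosing delta = min(1, epsilon/5) works for any epsilon:

```python
import random

def f(x):
    return x * x

a, L = 2.0, 4.0

def delta_for(eps):
    # If |x - 2| < 1 then |x + 2| < 5, so |x^2 - 4| < 5 * |x - 2|.
    # Hence delta = min(1, eps / 5) guarantees |f(x) - L| < eps.
    return min(1.0, eps / 5.0)

for eps in (1.0, 0.1, 0.001):
    d = delta_for(eps)
    for _ in range(1000):
        x = a + random.uniform(-d, d)   # sample x with |x - a| <= delta
        if x != a:                      # the limit ignores x == a itself
            assert abs(f(x) - L) < eps
print("epsilon-delta check passed")
```

A passing run is not a proof, of course - the proof is the inequality in the comment - but it makes the quantifier order ("for every epsilon there exists a delta") concrete.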
{"url":"http://www.physicsforums.com/showthread.php?p=797135","timestamp":"2014-04-18T15:53:38Z","content_type":null,"content_length":"36662","record_id":"<urn:uuid:8c30a9f0-b35f-4f9b-961f-3fec1edb2366>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
November 18th 2008, 07:39 AM  #1  mcinnes (joined Oct 2008)

A ball is thrown straight up. Its height h (metres) after t seconds is given by h = -5t^2 + 10t + 2. To the nearest tenth of a second, when is the ball 6 m above the ground? Explain why there are two answers. Help me please.

November 18th 2008, 07:57 AM  #2  Soroban, Super Member (joined May 2006, Lexington, MA (USA))

Hello, mcinnes! It's simple algebra... Exactly where is your difficulty?

Quote: A ball is thrown straight up. Its height $h$ in metres after $t$ seconds is: $h \:=\:-5t^2+10t+2$. To the nearest tenth of a second, when is the ball 6 m above the ground? Explain why there are two answers.

The question is: when is $h = 6$?

We have: $-5t^2 + 10t + 2 \:=\:6 \quad\Rightarrow\quad 5t^2 - 10t + 4 \:=\:0$

Quadratic Formula: $t \;=\;\frac{10 \pm\sqrt{20}}{10} \;=\;\frac{5\pm\sqrt{5}}{5} \;\approx\;\begin{Bmatrix}1.4 \\ 0.6\end{Bmatrix}$ seconds.

It attains a height of 6 metres on the way up and again on the way down.

November 18th 2008, 08:01 AM  #3  mcinnes

I'm doing math on my own and need to see how some questions are answered, with all the work, so I can do the other questions I have. With this one, I don't get the two-answer part of it. I wouldn't mind knowing why or how there are two answers. :P

November 18th 2008, 09:38 AM  #4  Junior Member (joined Sep 2008)

If you graphed the equation you would find it is a parabola. He gave you the (physical) reason why there are two answers: if you throw a ball in the air it goes up, reaches a maximum height, and comes back down.
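Soroban's computation is easy to check in a few lines; this sketch just replays the quadratic formula for 5t^2 - 10t + 4 = 0:

```python
import math

# h(t) = -5t^2 + 10t + 2; setting h = 6 gives 5t^2 - 10t + 4 = 0.
a, b, c = 5.0, -10.0, 4.0
disc = b * b - 4 * a * c                    # 100 - 80 = 20
t_up   = (-b - math.sqrt(disc)) / (2 * a)   # smaller root: on the way up
t_down = (-b + math.sqrt(disc)) / (2 * a)   # larger root: on the way down
print(round(t_up, 1), round(t_down, 1))     # 0.6 1.4
```

Two answers because the height curve is a parabola: it crosses the level h = 6 once while rising and once while falling.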
{"url":"http://mathhelpforum.com/algebra/60250-quadratics.html","timestamp":"2014-04-18T17:57:05Z","content_type":null,"content_length":"39031","record_id":"<urn:uuid:6b2e9c40-7534-4775-afcf-df0449aa61d4>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Northbrook SAT Math Tutor

...Because there are 7 passages in this section to process in a mere 35 minutes, this section can be particularly challenging. I have developed strategies that are very effective at helping students to efficiently process these sections, as well as to develop their skills at correctly interpreting the many charts, tables, and graphs that appear in this section. It is a beast that can be tamed!
20 Subjects: including SAT math, reading, English, writing

...I look forward to helping you succeed in mathematics. I have a teaching certificate in mathematics issued by the South Carolina State Department of Education. During my two and a half years of teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by the South Carolina Department of ...
12 Subjects: including SAT math, calculus, geometry, algebra 1

...I also tutored math at that location, working on skills ranging from simple arithmetic through Geometry, Algebra II, and Trig. I am prepared to tutor any Calculus students as well. As opposed to standardized tests, in which the result is all-important, I believe comprehension and a consistent method of arriving at solutions should be stressed first, and that improved grades will ...
28 Subjects: including SAT math, English, reading, Spanish

...I also possess a teaching certification/degree from Northeastern. For photography, I have been a lifelong photographer as a hobby. I learned on manual rangefinder cameras in the 1970's.
37 Subjects: including SAT math, reading, English, writing

...Well-versed in matters of style, tone, and flow, I can help you improve your writing both at school and in the workplace. Geography is one of those subjects I always aced whenever it surfaced in my schoolwork. Although it hasn't been a big part of my college or professional background, I still ...
28 Subjects: including SAT math, English, reading, physics
{"url":"http://www.purplemath.com/Northbrook_SAT_math_tutors.php","timestamp":"2014-04-16T10:35:04Z","content_type":null,"content_length":"24257","record_id":"<urn:uuid:d0e92923-0091-4dbd-b22d-2f35eb1c089a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00444-ip-10-147-4-33.ec2.internal.warc.gz"}