url
stringlengths
6
1.61k
fetch_time
int64
1,368,856,904B
1,726,893,854B
content_mime_type
stringclasses
3 values
warc_filename
stringlengths
108
138
warc_record_offset
int32
9.6k
1.74B
warc_record_length
int32
664
793k
text
stringlengths
45
1.04M
token_count
int32
22
711k
char_count
int32
45
1.04M
metadata
stringlengths
439
443
score
float64
2.52
5.09
int_score
int64
3
5
crawl
stringclasses
93 values
snapshot_type
stringclasses
2 values
language
stringclasses
1 value
language_score
float64
0.06
1
https://cracku.in/37-suppose-the-employees-are-allowed-to-process-multi-x-cat-2017-shift-1
1,713,577,866,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296817463.60/warc/CC-MAIN-20240419234422-20240420024422-00719.warc.gz
171,201,699
25,329
Instructions Healthy Bites is a fast-food joint serving three items: burgers, fries and ice cream. It has two employees, Anish and Bani, who prepare the items ordered by the clients. Preparation time is 10 minutes for a burger and 2 minutes for an order of ice cream. An employee can prepare only one of these items at a time. The fries are prepared in an automatic fryer which can prepare up to 3 portions of fries at a time, and takes 5 minutes irrespective of the number of portions. The fryer does not need an employee to constantly attend to it, and we can ignore the time taken by an employee to start and stop the fryer; thus, an employee can be engaged in preparing other items while the frying is on. However, fries cannot be prepared in anticipation of future orders. Healthy Bites wishes to serve the orders as early as possible. The individual items in any order are served as and when ready; however, the order is considered to be completely served only when all the items of that order are served. The table below gives the orders of three clients and the times at which they placed their orders: Question 37 # Suppose the employees are allowed to process multiple orders at a time, but the preference would be to finish orders of clients who placed their orders earlier. At what time is the order placed by Client 2 completely served? Solution It is given that 1 burger takes 10 minutes, 1 ice cream takes 2 minutes, and 3 portions of fries take 5 minutes in the machine (an operator is not required). The employees are allowed to process multiple orders at a time, but the preference is to finish orders of clients who placed their orders earlier. The first order is 1 burger, 1 ice cream and 3 portions of fries. Anish can start working on the burger and Bani can start working on the ice cream for the first client at 10:00. The burger will be done at 10:10, the ice cream at 10:02 and the fries at 10:05. The second order (ice cream and fries) is placed at 10:05. Bani can prepare the ice cream for the second client at 10:05 and also start the fryer. The ice cream will be done by 10:07 but the fries will be done by 10:10. Thus, the order will be completely served by 10:10.
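The timeline in the solution can be checked with a short sketch (times are minutes after 10:00; the order contents are taken from the solution text, since the original order table did not survive extraction):

```python
# Durations in minutes: burger 10, ice cream 2, fries 5 (unattended fryer).
# Client 1 orders at t = 0: burger, ice cream, fries.
# Client 2 orders at t = 5: ice cream, fries.
burger_done = 0 + 10        # Anish starts the burger at 10:00
ice1_done = 0 + 2           # Bani makes client 1's ice cream
fries1_done = 0 + 5         # fryer runs on its own
client1_served = max(burger_done, ice1_done, fries1_done)   # 10:10

ice2_done = 5 + 2           # Bani is free from 10:02, starts at 10:05
fries2_done = 5 + 5         # fryer frees up at 10:05
client2_served = max(ice2_done, fries2_done)                # 10:10

print(client1_served, client2_served)  # 10 10 -> both served at 10:10
```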
497
2,185
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.75
3
CC-MAIN-2024-18
latest
en
0.967951
https://brainly.in/question/17983
1,484,565,451,000,000,000
text/html
crawl-data/CC-MAIN-2017-04/segments/1484560279169.4/warc/CC-MAIN-20170116095119-00302-ip-10-171-10-70.ec2.internal.warc.gz
812,616,851
10,169
By selling 36 bananas, a vendor loses the selling price of 4 bananas; find his loss per cent. 2 by abhishekdasonl 2014-06-13T16:09:16+05:30 Measuring in units of one banana's selling price: C.P. = 40, S.P. = 36, so loss = C.P. - S.P. = 4 and loss% = 4/40 x 100 = 10% 2014-06-13T16:38:30+05:30 Let S.P. of 1 banana be Rs. x. Then, loss on selling 36 bananas = S.P. of 4 bananas = Rs. 4x, and S.P. of 36 bananas = Rs. 36x. Loss = C.P. - S.P., so 4x = C.P. - 36x and C.P. = Rs. 40x. Loss% = Loss/C.P. * 100 = 4x/40x * 100 = 10% Therefore, Loss% = 10%
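As a numerical check (assuming a selling price of Rs. 1 per banana purely for illustration; only the ratio matters):

```python
sp_per_banana = 1.0              # assumed unit price for illustration
sp_total = 36 * sp_per_banana    # selling price of 36 bananas
loss = 4 * sp_per_banana         # loss equals the S.P. of 4 bananas
cp_total = sp_total + loss       # C.P. = S.P. + loss
loss_percent = loss / cp_total * 100
print(loss_percent)  # 10.0
```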
226
495
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2017-04
latest
en
0.775434
https://www.gradesaver.com/textbooks/math/algebra/algebra-1/chapter-7-exponents-and-exponential-functions-concept-byte-powers-of-powers-and-powers-of-products-page-432/4
1,686,209,210,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224654606.93/warc/CC-MAIN-20230608071820-20230608101820-00503.warc.gz
853,338,126
13,566
## Algebra 1 $a^{4+4} = a^{4\cdot2} =a^{8}$ $a^m \cdot a^n = a^{m+n}$ Use the rule above to obtain: $(a^4)^2=a^4 \cdot a^4 = a^{4+4} =a^{8}$
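The power-of-a-power rule can be spot-checked numerically (a sketch using arbitrary sample bases):

```python
# Verify (a^4)^2 = a^(4+4) = a^8 for a few integer bases.
for a in [2, 3, 5]:
    assert (a ** 4) ** 2 == a ** (4 + 4) == a ** 8
print("rule holds for sample bases")
```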
72
141
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.96875
4
CC-MAIN-2023-23
latest
en
0.238477
https://www.onlinemath4all.com/factoring-4th-degree-polynomials.html
1,627,966,265,000,000,000
text/html
crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00479.warc.gz
940,787,275
16,154
# FACTORING 4TH DEGREE POLYNOMIALS To factor a polynomial of degree 3 or more, we can use the synthetic division method. In this method, we find the factors of a polynomial by trial and error. Example 1 : Factor the following polynomial given that the product of two of the zeros is 8. x^4 + 2x^3 - 25x^2 - 26x + 120 Solution : Because the product of two of the zeros is 8, we can try 2 and 4 in synthetic division. Both divisions leave a remainder of zero, so x = 2 and x = 4 are two zeros of the given polynomial of degree 4. Because x = 2 and x = 4 are two zeros of the given polynomial, two of the factors are (x - 2) and (x - 4). To find the other factors, factor the quadratic quotient, which has the coefficients 1, 8 and 15. That is, x^2 + 8x + 15. x^2 + 8x + 15 = (x + 3)(x + 5) So, the factors of the given polynomial are (x - 2), (x - 4), (x + 3) and (x + 5) Example 2 : Factor : x^4 - 10x^3 + 37x^2 - 60x + 36 Solution : By trial and error, we can check whether 1 is a zero of the above polynomial. Because the remainder is 4 (not zero), 1 is not a zero of the given polynomial. Now, let us check with -1. Because the remainder is 144 (not zero), -1 is not a zero of the given polynomial. Now, let us check with 2. The remainder is zero, so 2 is a zero; dividing the quotient by 3 also leaves a zero remainder. Both x = 2 and x = 3 are zeros of the given polynomial, so two of the factors are (x - 2) and (x - 3). To find the other factors, factor the quadratic quotient, which has the coefficients 1, -5 and 6. That is, x^2 - 5x + 6. x^2 - 5x + 6 = (x - 2)(x - 3) So, the factors of the given polynomial are (x - 2), (x - 3), (x - 2) and (x - 3), that is, (x - 2)^2 (x - 3)^2.
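The trial-and-error zero checks above can be reproduced with a short synthetic-division routine (a generic helper written for this sketch, not taken from the original page):

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients in descending order) by (x - r).
    Returns (quotient_coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + out[-1] * r)
    return out[:-1], out[-1]

# Example 2: x^4 - 10x^3 + 37x^2 - 60x + 36
p = [1, -10, 37, -60, 36]
print(synthetic_division(p, 1)[1])    # remainder 4, so 1 is not a zero
print(synthetic_division(p, -1)[1])   # remainder 144, so -1 is not a zero
q, rem = synthetic_division(p, 2)     # remainder 0, so 2 is a zero
q2, rem2 = synthetic_division(q, 3)   # remainder 0, so 3 is also a zero
print(q2)  # remaining quadratic: [1, -5, 6], i.e. x^2 - 5x + 6
```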
1,155
4,587
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.84375
5
CC-MAIN-2021-31
latest
en
0.853088
http://www.trueachievements.com/a191445/a-walk-in-the-park-achievement.htm
1,477,436,622,000,000,000
text/html
crawl-data/CC-MAIN-2016-44/segments/1476988720468.71/warc/CC-MAIN-20161020183840-00366-ip-10-171-6-4.ec2.internal.warc.gz
762,086,280
7,391
12,620 (1,200) ## Chariot There are a maximum of 46 Chariot achievements (36 without DLC) worth 12,620 (1,200). 52,042 tracked gamers have this game, 73 have completed it (0.14%) # A Walk in the Park 22 (10) ### Find the Verdant Burrows sepulcher. • Unlocked by 10,693 tracked gamers (21% - TA Ratio = 2.20) Solution
109
328
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2016-44
latest
en
0.842759
https://www.worldsrichpeople.com/what-is-1nf-2nf-and-3nf-with-examples/
1,719,137,914,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198862466.81/warc/CC-MAIN-20240623100101-20240623130101-00041.warc.gz
935,186,570
11,840
# What is 1NF 2NF and 3NF with examples? Types of Normal Forms A relation is in 1NF if every attribute contains only atomic values. 2NF. A relation will be in 2NF if it is in 1NF and all non-key attributes are fully functionally dependent on the primary key. 3NF. A relation will be in 3NF if it is in 2NF and no transitive dependency exists. What is normalized data with example? The most basic form of data normalization is 1NF, which ensures there are no repeating entries in a group. To be considered 1NF, each entry must have only one single value for each cell and each record must be unique. For example, you are recording the name, address, gender of a person, and whether they bought cookies. ### What is 1NF 2NF 3NF? 1NF (First Normal Form) 2NF (Second Normal Form) 3NF (Third Normal Form) BCNF (Boyce-Codd Normal Form) 4NF (Fourth Normal Form) What is 3NF example? A relation is in 3NF when it is in 2NF and there is no transitive dependency; equivalently, a relation is in 3NF when it is in 2NF and all non-key attributes directly depend on the candidate key. Example row: Rollno = 7, Game = Tennis, Feestructure = 400. #### What is 2nd normal form with example? Second Normal Form (2NF) Example: Let's assume a school can store the data of teachers and the subjects they teach. In a school, a teacher can teach more than one subject. In the given table, the non-prime attribute TEACHER_AGE is dependent on TEACHER_ID, which is a proper subset of a candidate key. What is 2NF in database? Second normal form (2NF) is the second step in normalizing a database. 2NF builds on the first normal form (1NF). Normalization is the process of organizing data in a database so that it meets two basic requirements: There is no redundancy of data (all data is stored in only one place). ## What is 2NF in DBMS with example? How do I convert 3NF to 1NF? To normalize a table from 1NF to 3NF, you need to normalize it to 2NF first, then to 3NF. 
In the normalization process, you decompose a table into multiple tables that contain the same information as the original table. The normalization process usually removes many problems related to data modification. ### How do I find my 3NF? Third Normal Form Requirements There are two basic requirements for a database to be in 3NF: The database must meet the requirements of both 1NF and 2NF. All database columns must depend on the primary key, meaning that any column's value can be derived from the primary key only. What is 2nd normal form in DBMS? #### What is 2nd normal form in SQL? Second Normal Form (2NF) is based on the concept of full functional dependency. Second Normal Form applies to relations with composite keys, that is, relations with a primary key composed of two or more attributes. A relation with a single-attribute primary key is automatically in at least 2NF. What is database normalization and various normal forms? This tutorial explains what Database Normalization is and the various normal forms like 1NF, 2NF, 3NF and BCNF with SQL code examples: Database Normalization is a well-known technique used for designing database schema. ## What is 3NF example in SQL? 3NF Example Below is a 3NF example in an SQL database: We have again divided our tables and created a new table which stores Salutations. There are no transitive functional dependencies, and hence our table is in 3NF. Are most databases in 1NF or 3NF? Most databases are in 3NF. There are certain rules that each normal form follows. ### What is the current theory of normalization in MySQL server? The theory of data normalization in MySQL Server is still being developed further. For example, there are discussions even on 6th Normal Form. However, in most practical applications, normalization achieves its best in 3rd Normal Form. 
The evolution of Normalization in SQL theories is illustrated below.
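The 2NF decomposition described above (moving TEACHER_AGE out of the teacher-subject table) can be sketched in plain Python; the column names come from the example in the text, while the row values are made up for illustration:

```python
# Hypothetical 1NF rows: TEACHER_AGE depends only on TEACHER_ID,
# which is a proper subset of the candidate key (TEACHER_ID, SUBJECT).
rows = [
    {"TEACHER_ID": 25, "SUBJECT": "Chemistry", "TEACHER_AGE": 30},
    {"TEACHER_ID": 25, "SUBJECT": "Biology",   "TEACHER_AGE": 30},
    {"TEACHER_ID": 47, "SUBJECT": "English",   "TEACHER_AGE": 35},
]

# 2NF decomposition: split the partial dependency into its own table.
teacher = {r["TEACHER_ID"]: r["TEACHER_AGE"] for r in rows}
teacher_subject = [(r["TEACHER_ID"], r["SUBJECT"]) for r in rows]

print(teacher)          # {25: 30, 47: 35} -- each age stored exactly once
print(teacher_subject)  # [(25, 'Chemistry'), (25, 'Biology'), (47, 'English')]
```

After the split, updating a teacher's age touches one row instead of one row per subject taught, which is exactly the modification anomaly 2NF removes.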
969
4,105
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.109375
3
CC-MAIN-2024-26
latest
en
0.92777
https://www.neetprep.com/questions/1843-Physics/701-Dual-Nature-Radiation-Matter?courseId=1277&testId=1020201-NCERT-Exercise-Based-MCQs&questionId=199907--power--W-light-bulb-converted-visibleradiation-average-intensity-visible-radiation-distance--m-bulb-Wm-Wm-Wm-Wm
1,718,366,467,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861546.27/warc/CC-MAIN-20240614110447-20240614140447-00875.warc.gz
839,595,633
57,022
# About $$5$$% of the power of a $$100$$ W light bulb is converted to visible radiation. What is the average intensity of visible radiation at a distance of $$1$$ m from the bulb? 1. $$0.472$$ W/m2 2. $$0.398$$ W/m2 3. $$0.323$$ W/m2 4. $$0.401$$ W/m2 Subtopic: Particle Nature of Light | 63% From NCERT What is the de Broglie wavelength of a bullet of mass $$0.040$$ kg traveling at the speed of $$1.0$$ km/s? 1. $$1.65\times10^{-35}$$ m 2. $$1.05\times10^{-35}$$ m 3. $$2.15\times10^{-35}$$ m 4. $$2.11\times10^{-35}$$ m Subtopic: De-broglie Wavelength | 78% From NCERT An electron and a photon each have a wavelength of 1.00 nm. The momentum of the electron will be: 1. Greater than the photon's. 2. Equal to the photon's. 3. Less than the photon's. 4. None of these. Subtopic: De-broglie Wavelength | 68% The work function of cesium metal is $$2.14$$ eV. When light of frequency $$6\times10^{14}$$ Hz is incident on the metal surface, photoemission of electrons occurs. What is the stopping potential of the metal? 1. $$0.212$$ V 2. $$0.345$$ V 3. $$0.127$$ V 4. $$0.311$$ V Subtopic: Einstein's Photoelectric Equation | 66% From NCERT The photoelectric cut-off voltage in a certain experiment is $$1.5~\mathrm{V}$$. What is the maximum kinetic energy of photoelectrons emitted? 1. $$2.1 \times 10^{-19} \mathrm{~J}$$ 2. $$1.7 \times 10^{-19} \mathrm{~J}$$ 3. $$2.4 \times 10^{-19} \mathrm{~J}$$ 4. $$1.1 \times 10^{-19}~\mathrm{J}$$ Subtopic: Photoelectric Effect: Experiment | 76% Monochromatic light of wavelength $$632.8~\text{nm}$$ is produced by a helium-neon laser. The power emitted is $$9.42~\text{mW}$$. The energy of each photon in the light beam is: 1. $$4.801 \times 10^{-19}~\text{J}$$ 2. $$2.121 \times 10^{-19}~\text{J}$$ 3. $$5.043 \times 10^{-19}~\text{J}$$ 4. $$3.141 \times 10^{-19}~\text{J}$$ Subtopic: Particle Nature of Light | 60% The energy flux of sunlight reaching the surface of the earth is $$1.388\times10^{3}$$ W/m2. How many photons (nearly) per square meter are incident on the Earth per second? Assume an average wavelength of $$550~\text{nm}$$. 1. $$3.84\times10^{21}$$ 2. $$2.97\times10^{21}$$ 3. $$4.12\times10^{21}$$ 4. $$2.10\times10^{21}$$ Subtopic: Particle Nature of Light | 66% From NCERT In an experiment on the photoelectric effect, the slope of the cut-off voltage versus frequency of incident light is found to be . The value of Planck's constant is: 1. 2. 3. 4. Subtopic: Einstein's Photoelectric Equation | 62% A $$100$$ W sodium lamp radiates energy uniformly in all directions. The lamp is located at the center of a large sphere that absorbs all the sodium light which is incident on it. The wavelength of sodium light is $$589$$ nm. What is the energy per photon associated with the sodium light? 1. $$1.21$$ eV 2. $$2.21$$ eV 3. $$2.11$$ eV 4. $$1.11$$ eV Subtopic: Particle Nature of Light | 62% Light of frequency  is incident on a metal surface. Electrons with a maximum speed of  are ejected from the surface. What is the threshold frequency for the photoemission of electrons? 1. $$5.109 \times10^{14}$$ Hz 2. $$3.45 \times10^{14}$$ Hz 3. $$6.733 \times10^{14}$$ Hz 4. $$4.738 \times10^{14}$$ Hz Subtopic: Einstein's Photoelectric Equation | 52%
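The first two questions above can be checked in a few lines (a sketch using standard constant values, not taken from the source page; options 2 and 1 respectively come out as the answers):

```python
import math

# Q1: average intensity of visible radiation 1 m from a 100 W bulb,
# with 5% of the power converted to visible light, spread over a sphere.
P_visible = 0.05 * 100.0                          # W
intensity = P_visible / (4 * math.pi * 1.0 ** 2)  # W/m^2 at r = 1 m
print(round(intensity, 3))  # 0.398

# Q2: de Broglie wavelength of a 0.040 kg bullet at 1.0 km/s,
# lambda = h / (m v), with Planck's constant h in J s.
h = 6.626e-34
wavelength = h / (0.040 * 1000.0)
print(wavelength)  # about 1.66e-35 m
```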
1,488
4,629
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2024-26
latest
en
0.670332
https://www.marronniers-restaurant-stremeze.fr/Oct/11+37126.html
1,675,021,629,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764499758.83/warc/CC-MAIN-20230129180008-20230129210008-00356.warc.gz
895,893,660
9,699
# calculate the motor for conveyor ## Motor for conveyor [SOLVED] | All About Circuits Any conveyor I've ever had anything to do with doesn't just have a motor, they also use a reduction gear box. Just a motor alone usually can't be slowed down enough to run a conveyor at a slow enough speed, and the gearbox also allows a smaller horsepower motor to be used. ## Motor Selection Basics: Variable Speed Belt Conveyor This post explains how to use a motor sizing tool to calculate these values, and how to use them to select a motor for a variable speed belt conveyor. What is a belt conveyor? Belt conveyors use pulleys and belts to convert rotary motion to linear motion and move a load on its belt. There may be a linear guide to support the load. ## Conveyor Calculation Sheet - YouTube A simple program for conveyor design calculation using an Excel sheet; the calculation follows the CEMA standard. Outputs of the calculation: 1. Motor kW, ratio... ## Conveyor Belt Calculations - Bright Hub Engineering Understanding a basic conveyor belt calculation will ensure your conveyor design is accurate and is not putting too many demands on your system. ## Conveyor Motor Power Calculation - bulk-online.com The drive shaft of the conveyor is placed 0.8 m above the floor. The nominal power consumption for the empty conveyor is 1500 W. We need to design a motor for this; should we calculate the torque needed using the diameter of the driven shaft or the drive shaft, and what other things do we need to consider? ## Horsepower calculation - Pacific Conveyors In order to calculate horsepower, it is necessary to determine the resultant force (R) due to the weight (W) acting at incline angle (a). Force (R) equals Weight (W) multiplied by sine (a). 
Sines of common incline angles are listed in weights and tables. The force (T) required to move the load is the sum of the resultant force (R), plus the ... ## Calculation Of Motor Power On Rolling Mill example torque in conveyor motor power calculation''Steel Rolling Mill Motor Design Calculations cdsspgc co in April 28th, 2018 - An investigation on the roll force and torque Mar 28 2014 The calculation of roll force and torque in hot rolling mills the roll separating forces and ## Belt Conveyor Sizing Tool - Oriental Motor U.S.A. Corp. Motor Sizing Tools > Belt Conveyor Belt Conveyor Sizing Tool External force F A = lb Mechanism Placement Mechanism angle α = ° Other requirement (s) It is necessary to hold the load even after the power supply is turned off. → You need an electromagnetic brake. ## formula for conveyor capacity A conveyor system for controlling two conveyor belts with a single motor including a belt link for removing material from between the run and the return . ... Calculation Of Conveyor Speed Capacity Factors for Special Pitches Capacity Using the formula below the exact conveyor speed S can be calculated . ## Calculation of Torque for Drive Pulley of Roller Conveyor ... - Conveyor consists of 25 roller, 1 drive pulley & 1 tail pulley - Total load of the conveyor : 5,500 kg To move the conveyor, we install Drive Pulley with diameter of 550mm, and connected to Gear Box (RPM = 95) + Motor (5,5 HP ; RPM 1,500) How do we calculate how much is the Torque of the Drive Pulley? ## Belt Conveyors for Bulk Materials Practical Calculations Belt Conveyors are also a great option to move products through elevations. Incline Belt Conveyors from low to high and Decline Belt Conveyors from high to low. This manual is short, with quick and easy reading paragraphs, very practical for calculations of belt, chain conveyors and mechanical miscellaneous, in the metric and imperial system. 
## Conveyor Belt Calculations • Con Belt Conveyor length is approximately half of the total belt length. g = Acceleration due to gravity = 9.81 m/sec^2. mi = Load due to the idlers in Kg/m. mb = Load due to the belt in Kg/m. mm = Load due to the conveyed materials in Kg/m. δ = Inclination angle of the conveyor in Degree. H = vertical height of the conveyor in meters. ## Conveyor Belt Motors (20 points) The brushes in the motors that control the conveyor belts at Delectable Delights need to be replaced periodically. Since there are many conveyor belts through the buildings, a supply of brushes is always kept on hand. ## How To Measure Conveyor Speed With Encoders | Dynapar Synchronizing the speed of one conveyor belt with another requires multiple encoders and a master-slave architecture. The motor has an encoder mounted to the shaft as described above. The slave conveyor has an encoder mounted to a shaft originating from a set of rollers on the secondary conveyor. Both encoders are wired back to the controller. ## How To Calculate Motor Power For Conveyors-roller Crusher Conveyor Speed Calculator Fpm Formula Guide Cisco. Model TA, a slider bed conveyor 11' long, requires a 1/2 hp motor at 65 feet per minute for a total load of 320 pounds. You desire your conveyor to operate at 90 feet per minute. Calculate as follows: (1/2 x 90) / 65 = .69. You should select the next highest horsepower, or 3/4 hp. ## conveyor manufacturing calculation in excel conveyor power calculation in excel format. Belt conveyor motor power calculation; how to calculate belt conveyor capacity in Excel. Belt conveyor sizing calculation in excel sheet: Conveyor Software. The ability for the software to perform calculations in both standard and SI units. ## Screw Conveyor Torque | Engineering Guide S = Conveyor Speed. Torque is measured in inch-lbs. for screw conveyor components. 
The torque rating of the drive shaft, coupling shafts, coupling bolts and conveyor screw must be able to withstand Full Motor Torque without failing. Every KWS screw conveyor is designed to this criteria with a minimum safety factor of 5 to 1. ## mechanical engineering - Determining required torque for a ... If you plan on driving your conveyor pulley directly by your motor shaft you do not need to consider it. The conveyor belt will always have some belt efficiency so this must be considered and it can be easily looked up. The above equations are mainly used to calculate the torque required by the motor due to "frictional forces". ## Motor Sizing Calculations Calculate the value for load torque, load inertia, speed, etc. at the motor drive shaft of the mechanism. Refer to page 3 for calculating the speed, load torque and load inertia for various mechanisms. Select a motor type from AC Motors, Brushless DC Motors or Stepping Motors based on the required specifications. ## Calculation methods – conveyor belts Conveyor and processing belts Calculation methods – conveyor belts Content 1 Terminology 2 Unit goods conveying systems 3 Take-up range for ... determined from the installed motor power P M as per the given formula and used to select a belt type. With calculable effective pull F U. Conveyor and processing belts * accumulated goods. 5 c ## Screw Conveyor Example | Engineering Guide HP = Nameplate Horsepower of the motor on the screw conveyor S = Screw Conveyor Speed. The torque rating of the drive shaft, coupling shafts, coupling bolts and conveyor screw must be greater than Full Motor Torque for proper design. A 12-inch diameter screw conveyor was selected for the example. ## Calculating the Speed of a Conveyor System - Technical ... A conventional track conveyor would use the same formula as above if using an AC motor to drive the track. 
If using a servo-driven belt conveyor, the speed can be calculated within the servo motor's control system. Any servo conveyor will rely heavily on the takt time of the system and the type of material being transported to calculate the speed. ## Belt Calculator | Conveyor Belt Maintenance | Shipp Belting A catalog of tools for optimizing conveyor belt engineering and conveyor belt repair. ... These calculation tools are to provide product selection ONLY, and final application suitability is the sole responsibility of the user. Belt Speed Calculator. Sprocket or Roll Diameter * Motor RPM * GearBox Ratio * Speed. Inches Per Minute. Speed Per ... ## how to calculate motor hp (horse power) required to ... how to use a single motor to run 2 conveyor belts but with different directions? 1 answer 113 views 0 followers how to calculate torque and power in creo parametric 2.0? ## Motor Sizing Basics Part 1: How to Calculate Load Torque This can be calculated by multiplying force (F) by the rotation radius (r). In order to move the load (blue box), the motor must generate more torque than this value. To calculate load torque, multiply the force (F) by the distance away from the rotational axis, … ## Vertical Transportation Design and Traffic Calculations ... In a lift system which has an MG set supplying its DC hoist, calculate the size of the AC prime mover for a 49 passenger lift, running at 1.6 m/s, if the efficiency of the installation (including the MG set, the DC hoist motor and the shaft efficiency) is 70%, and the counterweight ratio is 40%. ## How to Select and Size Gearmotors for Conveyor ... This post provides step-by-step instructions for how to size and select a gearmotor in a belt-driven conveyor application. Before sizing a gearmotor, we must first know the application requirements. 
For our example, the conveyor system requirements are as follows:

- Able to handle a 200 lb (90 kg) load
- Have adjustable speed, up to 12 inches/second
- Be able…

## What are the best methods to calculate motor power and ...

Divide the linear speed you desire for the belt in m/s by the circumference of your drive drum in metres to get revolutions per second. For example, 0.3 m/s / 0.45 m = 0.66 rev/s. Multiply this number by 60 to get RPM. 0.66 * 60 = 40 RPM. Multiply the torque in …

## Conveyor Belt Drive Selection

Sample Acceleration Time Calculation. Drive – Conveyor (load characteristics as in Figure D1 below). Motor – 15 HP, 1800 RPM, TEFC, NEMA Design 'C'. WK² = 1.78 lb ft². Load – Constant torque at 90% FLT of motor with. This is the factor used in Column (d), Tables DT1 & …

## Calculating Conveyor Power for Bulk Handling | Rulmeca Corp

Since one horsepower (HP) = 33,000 ft-lbs/min, required conveyor drive power may be expressed in HP as follows: (Te in lbs) x (V in fpm) / ((33,000 ft-lbs/min)/HP) = HP. After calculating Te, it is important to calculate T2 slip (slack side tension …

## How do I calculate power required for a motor to drive a ...

Basically, it is a trolley, which will be moved up stairs, using a conveyor belt system at its base. I am trying to find the required power for …

## How is torque calculated in conveyor motor? - Motorization

How do you calculate conveyor? To calculate load torque, multiply the force (F) by the distance away from the rotational axis, which is the radius of the pulley (r). If the weight of the load (blue box) is 20 newtons, and the radius of the pulley is 5 cm, then the required torque for the application is 20 N x 0.05 m = 1 Nm.

## You asked: How do you calculate the conveyor of a motor ...

Belt conveyor with a Bodine 42A-FX PMDC gearmotor and WPM DC motor speed control. Step 1: Determine speed and torque. Step 1: Determine speed and torque, contd…. Step 2: Determine the type of motor/gearmotor that is best for the application. …
Step 3: Select a Gearmotor.

## Motor Sizing Basics Part 4 - How to Calculate Radial Load ...

W = T / y. With a belt conveyor, the motor torque provides the driving force that generates work. This is shown as T, which is the amount of torque in N·m. If we consider y (effective radius in meters) to be the radius of the pulley, then we can calculate the radial load W.
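The formulas that recur in the snippets above — load torque T = F × r, drum RPM from belt speed and circumference, and drive horsepower from effective tension — can be collected into a short sketch. This is an illustration of those quoted formulas, not code from any of the vendors named; the function names are mine, and the sample numbers are the ones quoted in the text.

```python
def load_torque_nm(force_n, radius_m):
    """Load torque T = F * r: force times the pulley radius (N*m)."""
    return force_n * radius_m

def drum_rpm(belt_speed_m_s, drum_circumference_m):
    """Drum speed: linear belt speed / drum circumference gives rev/s; times 60 gives RPM."""
    return belt_speed_m_s / drum_circumference_m * 60.0

def drive_power_hp(effective_tension_lbs, belt_speed_fpm):
    """Required drive power in HP: (Te * V) / (33,000 ft-lbs/min per HP)."""
    return effective_tension_lbs * belt_speed_fpm / 33000.0

# Examples quoted in the text:
print(load_torque_nm(20, 0.05))   # 20 N at a 0.05 m radius -> 1.0 N*m
print(drum_rpm(0.3, 0.45))        # 0.3 m/s on a 0.45 m circumference drum -> ~40 RPM
```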
https://mathshistory.st-andrews.ac.uk/Extras/Sinai_concepts/
# An elementary approach to concepts by Yakov G Sinai

Arne B Sletsjøe wrote four elementary articles which illustrate ideas introduced by Yakov Sinai. These articles are: Chaos; Dynamical billiard; Entropy of 0, 1-sequences; and The entropy of a dynamical system. These are to be found at https://abelprize.no/abel-prize-laureates/2014 Below we give extracts from these articles.

1. Chaos
2. Dynamical billiard
3. Entropy of 0, 1-sequences
4. The entropy of a dynamical system

1. Chaos.

Chaos as a phenomenon of daily life is something everybody has experienced. For mathematicians it has been important to understand the deeper meaning of this concept, and how to quantify chaotic behaviour. The term chaos has its origin from the Greek term χάος and has been interpreted as "a moving, formless mass from which the cosmos and the gods originated." A more up-to-date definition of the term is something like a state of complete confusion and disorder, with no immediate view of achieving stability. We have at least two kinds of chaos. A random system will in many cases appear to us as chaotic. Throwing a dice may result in a sequence 3, 1, 5, 3, 3, 2, 6, 1, ... , for which we are sure to find no pattern. Total unpredictability is often considered as chaos. Another kind of chaos is what is denoted deterministic chaos. Deterministic is more or less synonymous with predictable, and deterministic chaos may therefore seem to be somewhat paradoxical. But the chaotic behaviour stems from the fact that the system is sensitively dependent on its initial state. As an example, consider the following set-up. Onto a rather big sphere we drop small spheres, always trying to hit the top of the bigger sphere. The smaller spheres jump or roll in different directions, depending on which side of the top point they land. The chaotic behaviour is a result of small differences in the initial state, i.e. the landing point.
This is a deterministic chaotic situation, deterministic because the smaller spheres just obey the physical laws of motion, and chaotic because of the sensitive initial state dependency. Another example of deterministic chaos is the three-body problem. This problem concerns the trajectories of three bodies, which mutually influence each others' motion, due to gravitational forces. The system is deterministic because every single movement can be predicted using the physical laws of motion, and it is chaotic because of its sensitive dependency on the initial state. This dependency is often denoted the butterfly effect, referring to the theoretical example of a hurricane's formation being contingent on whether or not a distant butterfly has flapped its wings several weeks earlier. Even the apparently random like system of throwing dice is in fact deterministic. Fixing the initial position and velocity of the dice, the precise shape of the dice and taking into account our accurate knowledge of the surface of the table, we are able to predict the result of a throw of a dice, at least theoretically. But if we impose a small change in an input parameter, we are lost. So even if the system is deterministic, it appears to us as being stochastic. Let us illustrate some variations of a dynamical system using a marble and a pan. We put the marble in the pan. The initial state of the system is the position of the marble, and the dynamical system gives an accurate description of the trajectories of the marble. The marble will obviously move towards the lowest point of the pan. After some oscillation it will finally reach the equilibrium point. In this dynamical system all trajectories converge to the same point. If we perform the same experiment with a marble on a plane surface, the trajectories will not converge to one specific point, but spread out rectilinearly in all directions. 
A small change in the initial angle will cause an increasing distance between the trajectories, but the growth of the distance will be constant. The mathematical notion for measuring this sort of dispersion is Lyapunov's exponent. In the plane example Lyapunov's exponent is 0. In the pan, with all trajectories ending in the same point, Lyapunov's exponent is negative. The most interesting case is when Lyapunov's exponent is positive. In this case trajectories may disperse radically, even if their initial states are very close. Lyapunov's exponent gives a quantification of the rate of dispersion. The French mathematician Jacques Hadamard described in 1898 a dynamical system where Lyapunov's exponent is everywhere positive. Thus the dynamical system shows chaotic behaviour everywhere. It is said that Hadamard discovered chaos, at least that he was the first to formally describe a chaotic dynamical system. The connection between Kolmogorov-Sinai-entropy and Lyapunov's exponent is given in the so-called Pesin's Theorem. A consequence of Pesin's Theorem is that if the entropy is positive, then there exists a positive Lyapunov exponent, and vice versa. The result is by no means obvious. A positive Lyapunov exponent tells us that trajectories may diverge rapidly, even if their initial states are rather close. Positive Kolmogorov-Sinai-entropy indicates that the system as a whole shows a certain degree of uncertainty. Pesin's theorem says that the two ways of measuring chaotic behaviour of a dynamical system are equivalent.

2. Dynamical billiard.

A dynamical billiard is an idealisation of the game of billiard, but where the table can have shapes other than the rectangular and even be multidimensional. We use only one billiard ball, and the billiard may even have regions where the ball is kept out. Formally, a dynamical billiard is a dynamical system where a massless and point-shaped particle moves inside a bounded region.
The particle is reflected by specular reflections at the boundary, without loss of speed. In between two reflections the particle moves rectilinearly at constant speed. Remember that a specular reflection is characterised by the law of reflection: the angle of incidence equals the angle of reflection. An example of a dynamical billiard is the so-called Sinai's billiard. The table of the Sinai billiard is a square with a disk removed from its centre; the table is flat, having no curvature. The billiard ball is reflected alternately from the outer and the inner boundary. Sinai's billiard arises from studying the model of the behaviour of molecules in a so-called ideal gas. In this model we consider the gas as numerous tiny balls (gas molecules) bouncing inside a square, reflecting off the boundaries of the square and off each other. Sinai's billiard provides a simplified, but rather good illustration of this model. The billiard was introduced by Yakov G Sinai as an example of an interacting Hamiltonian system that displays physical thermodynamic properties: all of its possible trajectories are ergodic, and it has positive Lyapunov exponents. Thus the system shows chaotic behaviour. As a model of a classical gas, the Sinai billiard is sometimes called the Lorentz gas. Sinai's great achievement with this model was to show that the behaviour of the gas molecules follows the trajectories of the Hadamard dynamical system, as described by Hadamard in 1898, in the first paper that studied mathematical chaos systematically. A dynamical billiard doesn't have to be planar. In case of non-zero curvature, rectilinear motion is replaced by motion along geodesics, i.e. curves which give the shortest path between points in the billiard. The ball moves along geodesics at constant speed; thus the trajectories are completely described by the reflections at the boundary.
The system is deterministic: if we know the position and the angle of one reflection, the whole trajectory can be determined. The map that takes one state to the next is called the billiard transformation. The billiard transformation determines the dynamical system. In the ordinary rectangular billiard we observe no chaotic behaviour. A small change in the initial data will induce significant deviation in the long run, but the deviation will be a linear function of time. Chaotic behaviour is characterised by exponential growth in the deviation. For Sinai's billiard chaotic behaviour is observed. For a long time it was assumed that the reason for the exponential deviation of trajectories that are close to each other was the concave shape of the inner boundary. It was also believed that a concave shape was necessary to obtain the chaotic behaviour, just like a concave lens spreads the light. But in 1974 Leonid Bunimovich proved that a billiard table shaped like a stadium, where two opposing sides are replaced by semicircles, produces chaotic behaviour, in spite of the fact that this billiard is completely convex.

3. Entropy of 0, 1-sequences.

We consider a dynamical system where the state space consists of all infinite 0,1-sequences and where the dynamics is given by the shift operator. This dynamical system is named after the great seventeenth century mathematician, Jacob Bernoulli. Consider the following 0,1-sequence made up of 50 digits:

11010010001010111011011000101010100011100110100011

Do we have any reason to believe that this sequence is randomly generated? We state some relevant facts about the sequence.

1. The sequence contains 25 0's and equally many 1's. This fits with an assumption of randomness.
2. In 30 positions the sequence switches from 0 to 1 or vice versa, leaving 19 positions where the next digit is the same as the previous one. In a random sequence these numbers would tend to be the same.
3.
The sequence contains six subsequences with 3 consecutive digits being equal, but none with 4. In a randomly generated sequence of 50 digits, the probability of finding a subsequence of at least 4 consecutive digits being equal is approximately 98 %. The fact that our sequence has no such subsequence indicates that it is not randomly generated. Based on these arguments we conclude that we do not believe that the sequence is randomly generated. The truth is that the sequence is manually generated in an attempt to produce a sequence which looks random. Our mistake, as our small analysis shows, is that we have switched digits too often. Entropy of Bernoulli schemes A 0,1-sequence is known as a Bernoulli scheme. In a randomly generated process, we have equal probability $p = \large\frac{1}{2}\normalsize$ for the digits 0 and 1. In our example it seems that the occurrences of 0 and 1 have equal probability, but that the combinations 01 and 10 are more likely, say 60/40 %, than the combinations 00 and 11. This difference in the predictability is quantified in the entropy of the system. The more unpredictable a system is the higher is the entropy. A random 0, 1-sequence has entropy 0.693. Our example has entropy 0.673, which is slightly lower. In general, a Bernoulli scheme of two outcomes of probability $p$ and $1 − p$ has entropy given by $E = −p \ln p − (1 − p) \ln (1 − p)$ A Bernoulli scheme may have more than two outcomes. The set of all infinite sequences of letters is a Bernoulli scheme of 26 outcomes. The famous mathematician John von Neumann asked an intriguing question about Bernoulli schemes. He wondered if it is possible that two structurally different Bernoulli schemes can produce the same result. Is it possible to identify the two Bernoulli schemes BS($\large\frac{1}{2}\normalsize , \large\frac{1}{2}\normalsize$) and BS($\large\frac{1}{3}\normalsize , \large\frac{1}{3}\normalsize , \large\frac{1}{3}\normalsize$)? 
BS stands for Bernoulli scheme and the fractions give the probability for each outcome; thus BS($\large\frac{1}{3}\normalsize , \large\frac{1}{3}\normalsize , \large\frac{1}{3}\normalsize$) is the Bernoulli scheme of three outcomes of equal probability. The solution to the question of von Neumann was finally given by Donald Ornstein in 1970. The answer was no: two essentially different Bernoulli schemes provide different results. The basis for this result was given by Sinai and Kolmogorov in 1959. It turns out that the Kolmogorov-Sinai-entropy is precisely what separates different Bernoulli schemes.

4. The entropy of a dynamical system.

Towards the end of the 1950s, the Russian mathematician Andrey Kolmogorov held a seminar series on dynamical systems at Moscow University. A question often raised in the seminar concerned the possibility of deciding structural similarity between different dynamical systems. A young seminar participant, Yakov Sinai, presented an affirmative answer, introducing the concept of entropy of a dynamical system. Let us start by going one decade back in time. In 1948 the American mathematician Claude E Shannon published an article entitled "A Mathematical Theory of Communication". His idea was to use the formalism of mathematics to describe communication as a phenomenon. The purpose of all communication is to convey a message, but how this is done is the messenger's choice. Some will express themselves using numerous words or characters; others prefer to be more brief. The content of the information is the same, but the information density may vary. An example is the so-called SMS language. When sending an SMS message it is common to try to minimise the number of characters. The sentence "I love You" consists of 10 characters, while "I$\heartsuit$U" consists of only 3, but the content of the two messages is the same. Shannon introduced the notion of entropy to measure the density of information.
To what extent does the next character in the message provide us with more information? High Shannon entropy means that each new character provides new information; low Shannon entropy indicates that the next character just confirms something we already know. A dynamical system is a description of a physical system and its evolution over time. The system has many states and all states are represented in the state space of the system. A path in the state space describes the dynamics of the dynamical system. A dynamical system may be deterministic. In a deterministic system no randomness is involved in the development of future states of the system. A swinging pendulum describes a deterministic system. Fixing the position and the speed, the laws of physics will determine the motion of the pendulum. When throwing a dice, we have the other extreme; a stochastic system. The future is completely uncertain; the last toss of the dice has no influence on the next. In general, we can get a good overview of what happens in a dynamical system in the short term. However, when analysed in the long term, dynamical systems are difficult to understand and predict. The problem of weather forecasting illustrates this phenomenon; the weather condition, described by air pressure, temperature, wind, humidity, etc. is a state of a dynamical system. A weather forecast for the next ten minutes is much more reliable than a weather forecast for the next ten days. Yakov Sinai was the first to come up with a mathematical foundation for quantifying the complexity of a given dynamical system. Inspired by Shannon's entropy in information theory, and in the framework of Kolmogorov's Moscow seminar, Sinai introduced the concept of entropy for so-called measure-preserving dynamical systems, today known as Kolmogorov-Sinai-entropy. This entropy turned out to be a strong and far-reaching invariant of dynamical systems. The Kolmogorov-Sinai-entropy provides a rich generalisation of Shannon entropy. 
In information theory a message is an infinite sequence of symbols, corresponding to a state in the framework of dynamical systems. The shift operator, shifting the sequence one step, gives the dynamics of the system. Entropy measures to what extent we are able to predict the next step in the sequence. Another example concerns a container filled with gas. The state space of this physical system represents states of the gas, i.e. the position and the momentum of every single gas molecule, and the laws of nature determine the dynamics. Again, the degree of complexity and chaotic behaviour of the gas molecules will be the ingredients in the concept of entropy. Summing up, the Kolmogorov-Sinai-entropy measures the unpredictability of a dynamical system. The higher the unpredictability, the higher the entropy. This fits nicely with Shannon entropy, where unpredictability of the next character is equivalent to new information. It also fits with the concept of entropy in thermodynamics, where disorder increases the entropy, and the fact that disorder and unpredictability are closely related. Kolmogorov-Sinai-entropy has strongly influenced our understanding of the complexity of dynamical systems. Even though the formal definition is not that complicated, the concept has shown its strength through the highly adequate answers to central problems in the classification of dynamical systems.

Last Updated December 2023
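The entropy formula quoted in section 3, $E = −p \ln p − (1 − p) \ln (1 − p)$, extends to any finite list of outcome probabilities, and a few lines of code (a sketch of mine, not from the article) reproduce the numbers cited: 0.693 for a random 0,1-sequence, 0.673 for the 60/40 example, and the two entropies that separate von Neumann's schemes BS(1/2, 1/2) and BS(1/3, 1/3, 1/3).

```python
import math

def bernoulli_entropy(probs):
    """Entropy (natural logarithm) of a Bernoulli scheme with the given outcome probabilities."""
    return -sum(p * math.log(p) for p in probs if p > 0)

print(round(bernoulli_entropy([0.5, 0.5]), 3))        # 0.693 -- random 0,1-sequence
print(round(bernoulli_entropy([0.6, 0.4]), 3))        # 0.673 -- the 60/40 example
print(round(bernoulli_entropy([1/3, 1/3, 1/3]), 3))   # 1.099 -- BS(1/3,1/3,1/3), distinct from BS(1/2,1/2)
```

Since 0.693 ≠ 1.099, Ornstein's theorem confirms the two schemes are not isomorphic, as the article states.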
https://e-edukasyonph.com/math/if-there-are-12-teams-in-a-basketba-74915982
21.03.2023 08:32 camillebalajadia

# If there are 12 teams in a basketball tournament and each team must play every other team in the eliminations, how many elimination games will there be?
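The count asked for above is the number of ways to choose 2 teams out of 12, i.e. the combination C(12, 2). A one-line check (my addition, not part of the original page):

```python
from math import comb

# Each pair of teams meets exactly once in the eliminations
games = comb(12, 2)
print(games)  # 66
```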
http://csma31.csm.jmu.edu/physics/Courses/P140L/appendices/a1-fittingdata.htm
FITTING DATA

There are several methods that one can use to find a function that passes through a set of data points, thereby revealing a mathematical relationship. To perform a fit the experimenter must choose a functional form. Functional form defines the mathematical relationship between the dependent variable (y) and independent variable (x). The form usually contains parameters whose values must be chosen to fix the relationship. Examples of some functional forms follow.

Line: y = mx + b. Parameters: m, b
Exponential: y = A₀e^(λx). Parameters: A₀, λ
Polynomial: y = a + bx + cx² + dx³. Parameters: a, b, c, d

A computer program usually changes the parameters of interest in some pattern that is designed to find the best values for the parameters in an efficient way. The y-values calculated with the fit function are compared to the data y-values. The quality of the fit is judged by the difference. The method employed to search for the best parameters is unimportant as long as a good fit is found. Consider a function F(t) with four parameters, A, B, C, and D. Choosing different values for these parameters results in different lines, as shown in the graph below. Both lines shown on the plot represent the same functional form F(t) but with different values for the parameters. Neither function passes through the experimental values, shown as triangles with error bars. A fitting program keeps changing the parameter values and testing whether the new line passes through the data points. When the line passes sufficiently close to the data (y-values from the fit are close to the y-values of the data) the fitting program returns the values of the parameters. A good fitting routine can vary the parameters again to see how much a parameter can change while still passing through the error bars. This allows the routine to establish an uncertainty (range of possible values) for each parameter. In the laboratory, data analysis almost always requires both a value and an uncertainty.
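As a concrete illustration of a fit that returns both parameter values and uncertainties, here is a minimal sketch using NumPy's `polyfit` (my choice of tool — the lab itself uses DataFit, Graphical Analysis, Logger Pro, or Excel): the covariance matrix returned with `cov=True` gives the standard error of each parameter.

```python
import numpy as np

# Noisy data that roughly follows y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0 + np.array([0.05, -0.03, 0.02, -0.04, 0.01, -0.01])

# Fit a straight line; cov=True also returns the covariance matrix of the parameters
coeffs, cov = np.polyfit(x, y, 1, cov=True)
slope, intercept = coeffs
slope_err, intercept_err = np.sqrt(np.diag(cov))  # standard errors

print(f"slope = {slope:.3f} +/- {slope_err:.3f}")
print(f"intercept = {intercept:.3f} +/- {intercept_err:.3f}")
```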
The fitting routines

• DATAFIT,
• Logger Pro and
• GRAPHICAL ANALYSIS

provide values and uncertainties. Students therefore need to be able to pass data to one of these fitting programs, run the fit and retrieve parameters and uncertainties. If one uses a routine that doesn't provide parameter uncertainties then an alternative method to determine these uncertainties is required.

Trendline: Excel provides trend lines for charts. These lines are made to pass through the data. The parameters can be viewed by displaying the trend line function on the chart. The disadvantage of this method is that it doesn't indicate the uncertainty in the parameters. Therefore this approach is NOT recommended. However this method in conjunction with the approach described below can be used to find a line that fits the data and an estimate for the parameter uncertainties.

Finding Uncertainty (repeated trials method): An uncertainty can be determined (for a trend line analysis) by measuring more than one data set. Trend lines can be placed on each of the different data sets and the parameter values from each dataset (e.g. slope of a straight line) can be put into a table and compared using the SD to estimate the uncertainty in the fitted parameter (e.g. slope). This method requires the experimenter to repeat the experiment so that independent datasets are compared. This method can be used to estimate an uncertainty for any fit method. As mentioned above, routines such as DataFit provide uncertainties based on one data set. The two methods should agree.

DataFit: A separate program, DataFit, is one of the best tools for general fitting.

• Start the program using the DataFit icon.
• Enter the number of independent variables (usually 1).
• Decide if you want to have a column for y uncertainties (standard deviation column) or no column (usually no column). Typically, one wants to enter an uncertainty for each Y-data point.
• Hit OK.
• Paste the data to the data window.
• Choose regression under the solve menu.
• Choose nonlinear.
• Choose the functional form from among the options or provide a custom function.
• If the fit is successful the results can be obtained by choosing "Detailed…" in the results menu. Scroll down until you find the table "Regression Variable Results"; use Value and Standard Error for each parameter.

The results contain the parameters and their uncertainties (standard error) as well as a host of plots and other indicators.

Graphical Analysis: This package is supplied as part of the data collection and analysis tools from Vernier Software. This package allows the experimenter to enter or import data, to plot, calculate, graph and fit data. It has a complete set of tools so that a full analysis can be performed. It provides text boxes for comments, and graphs with sophisticated display options. Graphical Analysis is a fairly complete, additional spreadsheet which is available for student use. Copies of the software are available for installation on your home computer. Ask your instructor.

• Start Graphical Analysis by double clicking on the GA icon.
• Open a new analysis by choosing new under the file menu.
• Import or cut & paste data to the table window.
• Use the toolbar "Curve Fit" button or choose curve fit from the analyze menu.
• Choose the data to fit in the window that appears (y-column).
• Choose the functional form and click the "try fit" button.
• Complete the process by using the "OK" button.

A window should appear on the graph showing the parameters and the associated uncertainties. The root mean square error, RMSE, is also given.

Vernier Tech Info Library TIL # 1014 (from website)

MSE: Mean Squared Error. For every data point, you take the distance vertically from the point to the corresponding point on the curve fit and square the value. Then you add up all those values for all data points, and divide by the number of points.
The squaring is done so negative values do not cancel positive values. The smaller the Mean Squared Error, the closer the fit is to the data.

RMSE: Root Mean Squared Error is just the square root of the mean squared error. That is probably the most easily interpreted statistic, since it has the same units as the quantity plotted on the y axis. The RMSE is thus the distance, on average, of a data point from the fitted line, measured along a vertical line.

LoggerPro: Logger Pro, also provided by Vernier, has fitting functions available. These come in handy when recorded data needs to be fit quickly. Logger Pro does provide an estimate of the uncertainty for the fitted parameters.

• Highlight a section of data with the mouse.
• From the "analyze" menu choose "curve fit".
• Highlight a function.
• Hit the "Try Fit" button.
• Hit "OK".
• The results appear on the graph.

If you do not see the parameter uncertainties in the fitting summary dialog box then right click on the dialog box to get the options and check the appropriate boxes. There are several interesting options available. You can vary the parameters and see how the function changes. You can define additional functions. Also see the Logger Pro help files.

The following functions are useful for the restricted case of a straight-line relationship between the dependent and the independent variable.

Linest: The function LINEST returns the slope and intercept data from a straight-line LSQ fit. Since there are several values returned you must:

• Enter the formula LINEST(y range, x range, 1, 1) into a cell.
• Select a range of cells that include this formula at the upper left (2 cells across, 5 down).
• Press the F2 function key followed by Ctrl+Shift+Enter (Excel's array entry).
• In this array the slope, intercept and their uncertainties can be found.

Regression Analysis: This is just a fancy name for straight-line fitting. It assumes that the relationship between the variables is linear.
It therefore can be used to find the best straight line that passes through a set of (x, y) pairs. The regression function returns a full set of quantities that can be used to describe the quality of the fit. It also provides estimates of the uncertainty in the slope and intercept. To perform a regression in Excel:

• Use the Data Analysis item in the tools menu.
• Choose regression.
• Enter the y and x values.
• Hit OK.
• View the sheet with the results.
• The intercept and uncertainty are tabulated.
• A value for the slope (x variable 1) and an associated uncertainty for this value are also tabulated.
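The MSE and RMSE described in the Vernier note above come down to a few lines of arithmetic; here is a minimal sketch in plain Python (my illustration, not taken from any of the packages named):

```python
import math

def rmse(y_data, y_fit):
    """Root mean squared error: the average vertical distance of the data points from the fit."""
    squared = [(yd - yf) ** 2 for yd, yf in zip(y_data, y_fit)]
    return math.sqrt(sum(squared) / len(squared))

# Example: three data points against their fitted values; errors are 0, 0 and 2,
# so MSE = 4/3 and RMSE = sqrt(4/3) ~ 1.155
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))
```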
http://engaging-math.blogspot.ca/2015_09_01_archive.html
Monday, 21 September 2015

I Have, Who Has - Making Change in Canada

An I Have, Who Has game is not a new concept. The premise is that each person gets a card that has two statements. One is the "I have" statement and the other is the "Who has" statement. In this case the "I have" statement is an expression dealing with making change with money in Canada. If you are not from Canada, you may not know that here we no longer use pennies. This means that when we buy things and pay cash we actually have to round to the nearest nickel to make change. The way the game works is that a person starts by reading their "Who has" statement. For example, someone might say "Who has $4.35?". Someone else will have a card where their change equals $4.35 so they would say "I have $20 and it costs $15.63. Who has change of $1.75?" That is, they read their statement that equals $4.35 and then ask their "Who has" statement. Then someone else will have an expression that matches $1.75 and the game continues. If done correctly, it should end up with the person who started giving their "I have" statement. It works really well as a warm up and one of the nice things about this is that you could do it multiple days and kids will likely get different cards.
• MAT1L - DMS1.03 – round money values to stated accuracies (e.g., the nearest cent, the nearest dollar, the nearest ten dollars, the nearest hundred dollars, the nearest thousand dollars, and the nearest million dollars), in applications drawn from everyday situations; DMS2.01 – make the correct change for an offered amount with and without concrete materials (e.g., change from a $5 bill for an item costing $4.77); • MAT2L - EMS1.01 – read and interpret money values given in words, write money values as decimals, and round money values appropriately, in solving problems found in everyday contexts; • This is a small set that has only 9 cards in it (you can see that the card on the top left has the "I have" to match the "Who has" of the card on the bottom right). You will likely have more than 9 students in your class and so will need multiple sets (i.e. groups of 9). In order for the game to work, all cards need to be passed out. So some students may need to have more than one card. • Print out the set you want (ideally on coloured card stock) and we also suggest lamination to lengthen the lifespan of the cards. • Be sure to print out a set for yourself that you don't cut out so that it will be easier for you to check as students play the game. 1. Make sure you have gone over the rounding rules for money first. 2. Distribute the cards one per student. All cards must be handed out so some students might need more than one card. 3. Tell each person to calculate the change for their "I have" expression and check their answer with at least one other person. 4. Once students are confident with their answers, all students should stand in their groups and then you choose one to read their "Who has" statement. The person whose answer is the same should read their "I have" statement followed by their "Who has" statement and then sit down. Eventually the last person standing should be the person who started. 5.
A variation might be to have students walk to the front and stand next to the person who they were matched with and eventually form an entire loop around the class. • IHaveWhoHas-MakingChange (pdf) (doc) • IHaveWhoHas-BlankTemplate (doc) Did you use this activity? Do you have a way to make it better? If so tell us in the comment section. Thanks Tuesday, 8 September 2015 I Have, Who Has - Integers An I Have, Who Has game is not a new concept. The premise is that each person gets a card that has two statements. One is the "I have" statement and the other is the "Who has" statement. In this case the "I have" statement is an expression dealing with addition, subtraction, multiplication and division of integers. The way the game works is that a person starts by reading their "Who has" statement. For example, someone might say "Who has 2?". Someone else will have a card where their expression equals 2 so they would say "I have -2 + 5 - 1. Who has 7?" That is, they read their expression that equals 2 and then ask their "Who has" statement. Then someone else will have an expression that matches 7 and the game continues. If done correctly, it should end up with the person who started giving their "I have" statement. It works really well as a warm-up, and one of the nice things about this is that you could do it on multiple days and kids will likely get different cards. • Grade 8 - solve problems involving operations with integers, using a variety of tools • MPM1D, MFM1P - simplify numerical expressions involving integers and rational numbers, with and without the use of technology • There are two sets of cards that you could download here. One set (pictured here) has only 9 cards in it (you can see that the card on the top left has the "I have" to match the "Who has" of the card on the bottom right). Depending on the size of class you have, you might want to use this set multiple times (i.e. groups of 9) or use the larger set of 27.
Either way, in order for the game to work, all cards need to be passed out. So some students may need to have more than one card. • Regardless, print out the set you want (ideally on coloured card stock) and we also suggest lamination to lengthen the lifespan of the cards. • Be sure to print out a set for yourself that you don't cut out so that it will be easier for you to check as students play the game. 1. Distribute the cards one per student. All cards must be handed out so some students might need more than one card. 2. Tell each person to simplify their "I have" expression and check their answer with at least one other person. 3. Once students are confident with their simplification, all students should stand and then you choose one to read their "Who has" statement. The person whose simplified answer is the same should read their "I have" statement followed by their "Who has" statement and then sit down. Eventually the last person standing should be the person who started. 4. A variation might be to have students walk to the front and stand next to the person who they were matched with and eventually form an entire loop around the class. • IHaveWhoHas-Integers-9cards (pdf) (doc) • IHaveWhoHas-Integers-27cards (pdf) (doc) • IHaveWhoHas-BlankTemplate (doc) Did you use this activity? Do you have a way to make it better? If so tell us in the comment section. Thanks
1,521
6,535
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5
4
CC-MAIN-2018-17
latest
en
0.957861
https://brilliant.org/problems/tens-digit-twice-the-ones-digit/
1,529,412,055,000,000,000
text/html
crawl-data/CC-MAIN-2018-26/segments/1529267862929.10/warc/CC-MAIN-20180619115101-20180619135101-00115.warc.gz
568,500,285
17,237
# Tens Digit Twice the Ones Digit Arpit is thinking of a 2-digit number whose tens digit is equal to twice the ones digit. How many different numbers can Arpit be thinking of? Details and assumptions The number $$12 = 012$$ is a two-digit number, not a three-digit number.
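A quick brute-force check (not part of the original page) simply enumerates the two-digit numbers and tests the digit condition:

```python
# Two-digit numbers whose tens digit is twice the ones digit:
# n // 10 is the tens digit, n % 10 is the ones digit.
matches = [n for n in range(10, 100) if n // 10 == 2 * (n % 10)]
print(matches)       # -> [21, 42, 63, 84]
print(len(matches))  # -> 4
```

The ones digit cannot be 0 (that would force the tens digit to 0, giving no two-digit number), so only ones digits 1 through 4 work.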
67
278
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.21875
3
CC-MAIN-2018-26
latest
en
0.825761
https://myprivateresearcher.com/problem-1-area-of-triangle/
1,675,300,230,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764499954.21/warc/CC-MAIN-20230202003408-20230202033408-00538.warc.gz
425,634,284
15,289
# Problem 1: Area of triangle
Write a full program that asks the user to input six values that represent the x and y coordinates for three points (x1, y1), (x2, y2), (x3, y3). The three points represent a triangle's corners. Use these values to calculate the area of the triangle, then display the result. The formula for computing the area of a triangle is
SideLength = the square root of ((x2 – x1)² + (y2 – y1)²)
s = (side1 + side2 + side3) / 2
Area = the square root of (s(s – side1)(s – side2)(s – side3))
Sample run…
Enter x-coord for the first point: 1.5
Enter y-coord for the first point: -3.4
Enter x-coord for the second point: 4.6
Enter y-coord for the second point: 5
Enter x-coord for the third point: 9.5
Enter y-coord for the third point: -3.4
The area of the triangle is 33.6
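A sketch of the requested computation in Python (the assignment does not fix a language, so this is one possible choice; the console-input part of the sample run is omitted and the points are passed directly):

```python
import math

def side(p, q):
    """Distance between two points (x, y) - the SideLength formula."""
    return math.hypot(q[0] - p[0], q[1] - p[1])

def triangle_area(p1, p2, p3):
    """Heron's formula, as given in the problem statement."""
    a, b, c = side(p1, p2), side(p2, p3), side(p3, p1)
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# The points from the sample run
print(round(triangle_area((1.5, -3.4), (4.6, 5), (9.5, -3.4)), 1))  # -> 33.6
```

The sample answer checks out independently: two of the points share y = -3.4, giving a base of 8 and a height of 8.4, so the area is 0.5 × 8 × 8.4 = 33.6.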
405
1,508
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2023-06
latest
en
0.778921
http://www.toontricks.com/2019/01/tutorial-how-to-find-lowest-common.html
1,563,600,239,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195526446.61/warc/CC-MAIN-20190720045157-20190720071157-00034.warc.gz
269,330,368
52,502
# Tutorial: How to find the lowest common ancestor of two nodes in any binary tree? ### Question: The binary tree here may not necessarily be a binary search tree. The structure could be taken as - ``struct node { int data; struct node *left; struct node *right; }; `` The maximum solution I could work out with a friend was something of this sort - Consider this binary tree : Binary Tree http://lcm.csa.iisc.ernet.in/dsa/img151.gif The inorder traversal yields - 8, 4, 9, 2, 5, 1, 6, 3, 7 And the postorder traversal yields - 8, 9, 4, 5, 2, 6, 7, 3, 1 So for instance, if we want to find the common ancestor of nodes 8 and 5, then we make a list of all the nodes which are between 8 and 5 in the inorder tree traversal, which in this case happens to be [4, 9, 2]. Then we check which node in this list appears last in the postorder traversal, which is 2. Hence the common ancestor for 8 and 5 is 2. The complexity for this algorithm, I believe, is O(n) (O(n) for inorder/postorder traversals, the rest of the steps again being O(n) since they are nothing more than simple iterations in arrays). But there is a strong chance that this is wrong. :-) But this is a very crude approach, and I'm not sure if it breaks down for some case. Is there any other (possibly more optimal) solution to this problem? ### Solution:1 Nick Johnson is correct that an O(n) time complexity algorithm is the best you can do if you have no parent pointers. For a simple recursive version of that algorithm see the code in Kinding's post, which runs in O(n) time. But keep in mind that if your nodes have parent pointers, an improved algorithm is possible. For both nodes in question construct a list containing the path from root to the node by starting at the node, and front inserting the parent.
So for 8 in your example, you get (showing steps): {4}, {2, 4}, {1, 2, 4} Do the same for your other node in question, resulting in (steps not shown): {1, 2} Now compare the two lists you made looking for the first element where the lists differ, or the last element of one of the lists, whichever comes first. This algorithm requires O(h) time where h is the height of the tree. In the worst case O(h) is equivalent to O(n), but if the tree is balanced, that is only O(log(n)). It also requires O(h) space. An improved version is possible that uses only constant space, with code shown in CEGRD's post Regardless of how the tree is constructed, if this will be an operation you perform many times on the tree without changing it in between, there are other algorithms you can use that require O(n) [linear] time preparation, but then finding any pair takes only O(1) [constant] time. For references to these algorithms, see the lowest common ancestor problem page on Wikipedia. (Credit to Jason for originally posting this link) ### Solution:2 Starting from the `root` node and moving downwards, if you find any node that has either `p` or `q` as its direct child then it is the LCA. (edit - this should be if `p` or `q` is the node's value, return it. Otherwise it will fail when one of `p` or `q` is a direct child of the other.) Else if you find a node with `p` in its right(or left) subtree and `q` in its left(or right) subtree then it is the LCA. The fixed code looks like: ``treeNodePtr findLCA(treeNodePtr root, treeNodePtr p, treeNodePtr q) { // no root no LCA. if(!root) { return NULL; } // if either p or q is the root then root is LCA. if(root==p || root==q) { return root; } else { // get LCA of p and q in left subtree. treeNodePtr l=findLCA(root->left , p , q); // get LCA of p and q in right subtree. treeNodePtr r=findLCA(root->right , p, q); // if one of p or q is in the left subtree and the other is in the right, // then root is the LCA.
if(l && r) { return root; } // else if l is not null, l is LCA. else if(l) { return l; } else { return r; } } } `` The below code fails when either is the direct child of the other. ``treeNodePtr findLCA(treeNodePtr root, treeNodePtr p, treeNodePtr q) { // no root no LCA. if(!root) { return NULL; } // if either p or q is a direct child of root then root is LCA. if(root->left==p || root->left==q || root->right ==p || root->right ==q) { return root; } else { // get LCA of p and q in left subtree. treeNodePtr l=findLCA(root->left , p , q); // get LCA of p and q in right subtree. treeNodePtr r=findLCA(root->right , p, q); // if one of p or q is in the left subtree and the other is in the right, // then root is the LCA. if(l && r) { return root; } // else if l is not null, l is LCA. else if(l) { return l; } else { return r; } } } `` Code In Action ### Solution:3 Here is the working code in JAVA ``public static Node LCA(Node root, Node a, Node b) { if (root == null) { return null; } // If the root is one of a or b, then it is the LCA if (root == a || root == b) { return root; } Node left = LCA(root.left, a, b); Node right = LCA(root.right, a, b); // If both nodes lie in left or right then their LCA is in left or right, // Otherwise root is their LCA if (left != null && right != null) { return root; } return (left != null) ? left : right; } `` ### Solution:4 The answers given so far use recursion or store, for instance, a path in memory. Both of these approaches might fail if you have a very deep tree. Here is my take on this question. When we check the depth (distance from the root) of both nodes, if they are equal, then we can safely move upward from both nodes towards the common ancestor. If one of the depths is bigger, then we should move upward from the deeper node while staying in the other one.
Here is the code: ``findLowestCommonAncestor(v,w): depth_vv = depth(v); depth_ww = depth(w); vv = v; ww = w; while( depth_vv != depth_ww ) { if ( depth_vv > depth_ww ) { vv = parent(vv); depth_vv--; } else { ww = parent(ww); depth_ww--; } } while( vv != ww ) { vv = parent(vv); ww = parent(ww); } return vv; `` The time complexity of this algorithm is: O(n). The space complexity of this algorithm is: O(1). Regarding the computation of the depth, we can first remember the definition: If v is root, depth(v) = 0; Otherwise, depth(v) = depth(parent(v)) + 1. We can compute depth as follows: ``depth(v): int d = 0; vv = v; while ( vv is not root ) { vv = parent(vv); d++; } return d; `` ### Solution:5 Well, this kind of depends on how your binary tree is structured. Presumably you have some way of finding the desired leaf node given the root of the tree - simply apply that to both values until the branches you choose diverge. If you don't have a way to find the desired leaf given the root, then your only solution - both in normal operation and to find the last common node - is a brute-force search of the tree. ### Solution:6 This can be found at:- http://goursaha.freeoda.com/DataStructure/LowestCommonAncestor.html `` tree_node_type *LowestCommonAncestor( tree_node_type *root , tree_node_type *p , tree_node_type *q) { tree_node_type *l , *r , *temp; if(root==NULL) { return NULL; } if(root->left==p || root->left==q || root->right ==p || root->right ==q) { return root; } else { l=LowestCommonAncestor(root->left , p , q); r=LowestCommonAncestor(root->right , p, q); if(l!=NULL && r!=NULL) { return root; } else { temp = (l!=NULL)?l:r; return temp; } } } `` ### Solution:7 Tarjan's off-line least common ancestors algorithm is good enough (cf. also Wikipedia). There is more on the problem (the lowest common ancestor problem) on Wikipedia.
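The depth-equalising pseudocode from Solution 4 translates directly into runnable Python; a minimal sketch, assuming each node stores a `parent` pointer (the class and field names here are assumptions for illustration):

```python
class Node:
    """Tree node that only needs a parent pointer for this algorithm."""
    def __init__(self, parent=None):
        self.parent = parent

def depth(v):
    """Distance from the root: root has depth 0."""
    d = 0
    while v.parent is not None:
        v = v.parent
        d += 1
    return d

def lca(v, w):
    """Walk the deeper node up until depths match, then climb in lockstep."""
    dv, dw = depth(v), depth(w)
    while dv > dw:
        v, dv = v.parent, dv - 1
    while dw > dv:
        w, dw = w.parent, dw - 1
    # Now both nodes are at the same depth; climb together until they meet
    while v is not w:
        v, w = v.parent, w.parent
    return v

# Tiny tree: root -> a -> b, and root -> c
root = Node()
a = Node(root)
b = Node(a)
c = Node(root)
print(lca(b, c) is root)  # -> True
```

As in the pseudocode, this runs in O(h) time and O(1) extra space, where h is the height of the tree.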
### Solution:8 To find the common ancestor of two nodes:- • Find the given node Node1 in the tree using binary search and save all nodes visited in this process in an array, say A1. Time - O(logn), Space - O(logn) • Find the given Node2 in the tree using binary search and save all nodes visited in this process in an array, say A2. Time - O(logn), Space - O(logn) • If the A1 list or A2 list is empty then one of the nodes does not exist, so there is no common ancestor. • If the A1 list and A2 list are non-empty then look into the lists until you find a non-matching node. As soon as you find such a node, the node prior to that is the common ancestor. This would work for a binary search tree. ### Solution:9 I have made an attempt with illustrative pictures and working code in Java, http://tech.bragboy.com/2010/02/least-common-ancestor-without-using.html ### Solution:10 The below recursive algorithm will run in O(log N) for a balanced binary tree. If either of the nodes passed into the getLCA() function are the same as the root then the root will be the LCA and there will be no need to perform any recursion. Test cases. [1] Both nodes n1 & n2 are in the tree and reside on either side of their parent node. [2] Either node n1 or n2 is the root, the LCA is the root. [3] Only n1 or n2 is in the tree, LCA will be either the root node of the left subtree of the tree root, or the LCA will be the root node of the right subtree of the tree root. [4] Neither n1 nor n2 is in the tree, there is no LCA. [5] Both n1 and n2 are in a straight line next to each other, LCA will be either of n1 or n2, whichever is closest to the root of the tree.
``//find the search node below root bool findNode(node* root, node* search) { //base case if(root == NULL) return false; if(root->val == search->val) return true; //search for the node in the left and right subtrees, if found in either return true return (findNode(root->left, search) || findNode(root->right, search)); } //returns the LCA, n1 & n2 are the 2 nodes for which we are //establishing the LCA for node* getLCA(node* root, node* n1, node* n2) { //base case if(root == NULL) return NULL; //If 1 of the nodes is the root then the root is the LCA //no need to recurse. if(n1 == root || n2 == root) return root; //check on which side of the root n1 and n2 reside bool n1OnLeft = findNode(root->left, n1); bool n2OnLeft = findNode(root->left, n2); //n1 & n2 are on different sides of the root, so root is the LCA if(n1OnLeft != n2OnLeft) return root; //if both n1 & n2 are on the left of the root traverse left sub tree only //to find the node where n1 & n2 diverge otherwise traverse right subtree if(n1OnLeft) return getLCA(root->left, n1, n2); else return getLCA(root->right, n1, n2); } `` ### Solution:11 Just walk down from the whole tree's `root` as long as both given nodes, say `p` and `q`, for which the ancestor has to be found, are in the same sub-tree (meaning their values are both smaller or both larger than root's). This walks straight from the root to the Least Common Ancestor, not looking at the rest of the tree, so it's pretty much as fast as it gets. A few ways to do it. Iterative, O(1) space Python ``def lowestCommonAncestor(self, root, p, q): while (root.val - p.val) * (root.val - q.val) > 0: root = (root.left, root.right)[p.val > root.val] return root `` Java ``public TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) { while ((root.val - p.val) * (root.val - q.val) > 0) root = p.val < root.val ?
root.left : root.right; return root; } `` in case of overflow, I'd do (root.val - (long)p.val) * (root.val - (long)q.val) Recursive Python ``def lowestCommonAncestor(self, root, p, q): next = p.val < root.val > q.val and root.left or \ p.val > root.val < q.val and root.right return self.lowestCommonAncestor(next, p, q) if next else root `` Java ``public TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) { return (root.val - p.val) * (root.val - q.val) < 1 ? root : lowestCommonAncestor(p.val < root.val ? root.left : root.right, p, q); } `` ### Solution:12 In scala, the code is: ``abstract class Tree case class Node(a:Int, left:Tree, right:Tree) extends Tree case class Leaf(a:Int) extends Tree def lca(tree:Tree, a:Int, b:Int):Tree = { tree match { case Node(ab,l,r) => { if(ab==a || ab ==b) tree else { val temp = lca(l,a,b); val temp2 = lca(r,a,b); if(temp!=null && temp2 !=null) tree else if (temp==null && temp2==null) null else if (temp==null) r else l } } case Leaf(ab) => if(ab==a || ab ==b) tree else null } } `` ### Solution:13 ``Node *LCA(Node *root, Node *p, Node *q) { if (!root) return NULL; if (root == p || root == q) return root; Node *L = LCA(root->left, p, q); Node *R = LCA(root->right, p, q); if (L && R) return root; // if p and q are on both sides return L ? 
L : R; // either one of p,q is on one side OR p,q is not in L&R subtrees } `` ### Solution:14 If it is a full binary tree with children of node x as 2*x and 2*x+1, then there is a faster way to do it ``int get_bits(unsigned int x) { int high = 31; int low = 0,mid; while(high>=low) { mid = (high+low)/2; if(1<<mid==x) return mid+1; if(1<<mid<x) { low = mid+1; } else { high = mid-1; } } if(1<<mid>x) return mid; return mid+1; } unsigned int Common_Ancestor(unsigned int x,unsigned int y) { int xbits = get_bits(x); int ybits = get_bits(y); int diff,kbits; unsigned int k; if(xbits>ybits) { diff = xbits-ybits; x = x >> diff; } else if(xbits<ybits) { diff = ybits-xbits; y = y >> diff; } k = x^y; kbits = get_bits(k); return y>>kbits; } `` How does it work 1. get bits needed to represent x & y, which using binary search is O(log(32)) 2. the common prefix of the binary notation of x & y is the common ancestor 3. whichever is represented by a larger number of bits is brought down to the same number of bits by shifting right by diff 4. k = x^y erases the common prefix of x & y 5. find the bits representing the remaining suffix 6. shift x or y by the suffix bits to get the common prefix, which is the common ancestor. This works because it basically divides the larger number by two recursively until both numbers are equal. That number is the common ancestor. Dividing is effectively the right shift operation. So we need to find the common prefix of two numbers to find the nearest ancestor ### Solution:15 Here is the C++ way of doing it.
I have tried to keep the algorithm as easy as possible to understand: ``// Assuming that `BinaryNode_t` has `getData()`, `getLeft()` and `getRight()` class LowestCommonAncestor { typedef char type; // Data members which would behave as place holders const BinaryNode_t* m_pLCA; type m_Node1, m_Node2; static const unsigned int TOTAL_NODES = 2; // The core function which actually finds the LCA; It returns the number of nodes found // At any point of time if the number of nodes found are 2, then it updates the `m_pLCA` and once updated, we have found it! unsigned int Search (const BinaryNode_t* const pNode) { if(pNode == 0) return 0; unsigned int found = 0; found += (pNode->getData() == m_Node1); found += (pNode->getData() == m_Node2); found += Search(pNode->getLeft()); // below condition can be after this as well found += Search(pNode->getRight()); if(found == TOTAL_NODES && m_pLCA == 0) m_pLCA = pNode; // found ! return found; } public: // Interface method which will be called externally by the client const BinaryNode_t* Search (const BinaryNode_t* const pHead, const type node1, const type node2) { // Initialize the data members of the class m_Node1 = node1; m_Node2 = node2; m_pLCA = 0; // Find the LCA, populate to `m_pLCANode` and return (void) Search(pHead); return m_pLCA; } }; `` How to use it: ``LowestCommonAncestor lca; BinaryNode_t* pNode = lca.Search(pWhateverBinaryTreeNodeToBeginWith); if(pNode != 0) ...
`` ### Solution:16 The easiest way to find the Lowest Common Ancestor is using the following algorithm: ` Examine root node if value1 and value2 are strictly less than the value at the root node Examine left subtree else if value1 and value2 are strictly greater than the value at the root node Examine right subtree else return root ` ``public TreeNode LCA(TreeNode root, int value1, int value2) { while (root != null) { if (value1 < root.data && value2 < root.data) return LCA(root.left, value1, value2); else if (value1 > root.data && value2 > root.data) return LCA(root.right, value1, value2); else return root; } return null; } `` ### Solution:17 I found a solution 1. Take inorder 2. Take preorder 3. Take postorder Depending on the 3 traversals, you can decide which node is the LCA. From the LCA find the distance of both nodes. Add these two distances, which is the answer. ### Solution:18 Consider this tree If we do postorder and preorder traversal and find the first occurring common predecessor and successor, we get the common ancestor. postorder => 0,2,1,5,4,6,3,8,10,11,9,14,15,13,12,7 preorder => 7,3,1,0,2,6,4,5,12,9,8,11,10,13,15,14 • e.g. 1: Least common ancestor of 8,11 in postorder we have => 9,14,15,13,12,7 after 8 & 11 in preorder we have => 7,3,1,0,2,6,4,5,12,9 before 8 & 11 9 is the first common number that occurs after 8 & 11 in postorder and before 8 & 11 in preorder, hence 9 is the answer • e.g. 2: Least common ancestor of 5,10 11,9,14,15,13,12,7 in postorder 7,3,1,0,2,6,4 in preorder 7 is the first number that occurs after 5,10 in postorder and before 5,10 in preorder, hence 7 is the answer ### Solution:19 Here is what I think, 1. Find the route for the first node, store it in arr1. 2. Start finding the route for the second node; while doing so, check every value against arr1. 3. The moment a value differs, exit. The last matched value is the LCA. Complexity: step 1: O(n), step 2 =~ O(n), total =~ O(n).
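Solution 19's route-comparison outline can be sketched in runnable Python (the node class and helper names here are assumed for illustration, not taken from any answer above):

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def path_to(root, target, path):
    """Collect the root-to-target path into `path`; True if target is found."""
    if root is None:
        return False
    path.append(root)
    if root is target or path_to(root.left, target, path) \
            or path_to(root.right, target, path):
        return True
    path.pop()  # target not under this node; backtrack
    return False

def lca(root, a, b):
    p1, p2 = [], []
    if not (path_to(root, a, p1) and path_to(root, b, p2)):
        return None  # one of the nodes is missing, so no LCA
    ancestor = None
    for u, v in zip(p1, p2):
        if u is v:
            ancestor = u  # last node common to both routes
        else:
            break
    return ancestor

#        1
#       / \
#      2   3
#     / \
#    4   5
n4, n5 = Node(4), Node(5)
n2, n3 = Node(2, n4, n5), Node(3)
root = Node(1, n2, n3)
print(lca(root, n4, n5).val)  # -> 2
print(lca(root, n4, n3).val)  # -> 1
```

Both route searches and the comparison are linear in the number of nodes, matching the O(n) complexity claimed in the outline.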
### Solution:20 Here are two approaches in C# (.NET) (both discussed above) for reference: 1. Recursive version of finding the LCA in a binary tree (O(N) - as at most each node is visited). The main points of the solution: (a) the LCA is the only node in the binary tree where the two elements reside on either side of its subtrees (left and right); (b) it doesn't matter which node is present on which side - initially I tried to keep that info, and the recursive function became confusing; once I realized it, it became very elegant. 2. Searching both nodes (O(N)) and keeping track of the paths (uses extra space - so #1 is probably superior, even though the space is probably negligible if the binary tree is well balanced, as then the extra memory consumption will be just O(log(N))), so that the paths are compared (essentially similar to the accepted answer - but the paths are calculated assuming the binary tree node has no parent pointer). 3. Just for completion (not related to the question), LCA in a BST (O(log(N))). 4.
Tests Recursive: ``private BinaryTreeNode LeastCommonAncestorUsingRecursion(BinaryTreeNode treeNode, int e1, int e2) { Debug.Assert(e1 != e2); if(treeNode == null) { return null; } if((treeNode.Element == e1) || (treeNode.Element == e2)) { //we don't care which element is present (e1 or e2), we just need to check //if one of them is there return treeNode; } var nLeft = this.LeastCommonAncestorUsingRecursion(treeNode.Left, e1, e2); var nRight = this.LeastCommonAncestorUsingRecursion(treeNode.Right, e1, e2); if(nLeft != null && nRight != null) { //note that this condition will be true only at least common ancestor return treeNode; } else if(nLeft != null) { return nLeft; } else if(nRight != null) { return nRight; } return null; } `` where the above private recursive version is invoked by the following public method: ``public BinaryTreeNode LeastCommonAncestorUsingRecursion(int e1, int e2) { var n = this.FindNode(this._root, e1); if(null == n) { throw new Exception("Element not found: " + e1); } if (e1 == e2) { return n; } n = this.FindNode(this._root, e2); if (null == n) { throw new Exception("Element not found: " + e2); } var node = this.LeastCommonAncestorUsingRecursion(this._root, e1, e2); if (null == node) { throw new Exception(string.Format("Least common ancestor not found for the given elements: {0},{1}", e1, e2)); } return node; } `` Solution by keeping track of paths of both nodes: ``public BinaryTreeNode LeastCommonAncestorUsingPaths(int e1, int e2) { var path1 = new List<BinaryTreeNode>(); var node1 = this.FindNodeAndPath(this._root, e1, path1); if(node1 == null) { throw new Exception(string.Format("Element {0} is not found", e1)); } if(e1 == e2) { return node1; } List<BinaryTreeNode> path2 = new List<BinaryTreeNode>(); var node2 = this.FindNodeAndPath(this._root, e2, path2); if (node2 == null) { throw new Exception(string.Format("Element {0} is not found", e2)); } BinaryTreeNode lca = null; Debug.Assert(path1[0] == this._root); Debug.Assert(path2[0] ==
this._root); int i = 0; while((i < path1.Count) && (i < path2.Count) && (path2[i] == path1[i])) { lca = path1[i]; i++; } Debug.Assert(null != lca); return lca; } `` where FindNodeAndPath is defined as ``private BinaryTreeNode FindNodeAndPath(BinaryTreeNode node, int e, List<BinaryTreeNode> path) { if(node == null) { return null; } if(node.Element == e) { path.Add(node); return node; } var n = this.FindNodeAndPath(node.Left, e, path); if(n == null) { n = this.FindNodeAndPath(node.Right, e, path); } if(n != null) { path.Insert(0, node); return n; } return null; } `` BST (LCA) - not related (just for completion for reference) ``public BinaryTreeNode BstLeastCommonAncestor(int e1, int e2) { //ensure both elements are there in the bst var n1 = this.BstFind(e1, throwIfNotFound: true); if(e1 == e2) { return n1; } this.BstFind(e2, throwIfNotFound: true); BinaryTreeNode leastCommonAncestor = this._root; var iterativeNode = this._root; while(iterativeNode != null) { if((iterativeNode.Element > e1 ) && (iterativeNode.Element > e2)) { iterativeNode = iterativeNode.Left; } else if((iterativeNode.Element < e1) && (iterativeNode.Element < e2)) { iterativeNode = iterativeNode.Right; } else { // i.e., either iterative node is equal to e1 or e2, or in between e1 and e2 return iterativeNode; } } //control will never come here return leastCommonAncestor; } `` Unit Tests ``[TestMethod] public void LeastCommonAncestorTests() { int[] a = { 13, 2, 18, 1, 5, 17, 20, 3, 6, 16, 21, 4, 14, 15, 25, 22, 24 }; int[] b = { 13, 13, 13, 2, 13, 18, 13, 5, 13, 18, 13, 13, 14, 18, 25, 22}; BinarySearchTree bst = new BinarySearchTree(); foreach (int e in a) { bst.Add(e); bst.Delete(e); bst.Add(e); } for(int i = 0; i < b.Length; i++) { var n = bst.BstLeastCommonAncestor(a[i], a[i + 1]); Assert.IsTrue(n.Element == b[i]); var n1 = bst.LeastCommonAncestorUsingPaths(a[i], a[i + 1]); Assert.IsTrue(n1.Element == b[i]); Assert.IsTrue(n == n1); var n2 = bst.LeastCommonAncestorUsingRecursion(a[i], a[i + 1]);
Assert.IsTrue(n2.Element == b[i]); Assert.IsTrue(n2 == n1); Assert.IsTrue(n2 == n); } } `` ### Solution:21 If someone is interested in pseudocode (for university homework), here is one. ``GETLCA(BINARYTREE BT, NODE A, NODE B) IF Root==NIL return NIL ENDIF IF Root==A OR Root==B return Root ENDIF Left = GETLCA (Root.Left, A, B) Right = GETLCA (Root.Right, A, B) IF Left! = NIL AND Right! = NIL return Root ELSEIF Left! = NIL Return Left ELSE Return Right ENDIF `` ### Solution:22 Although this has been answered already, this is my approach to this problem using the C programming language. Although the code shows a binary search tree (as far as insert() is concerned), the algorithm works for a binary tree as well. The idea is to go over all nodes that lie from node A to node B in the inorder traversal, and look up the indices for these in the postorder traversal. The node with the maximum index in the postorder traversal is the lowest common ancestor. This is working C code implementing a function to find the lowest common ancestor in a binary tree. I am providing all the utility functions etc. as well, but jump to CommonAncestor() for quick understanding. ``#include <stdio.h> #include <malloc.h> #include <stdlib.h> #include <math.h> static inline int min (int a, int b) { return ((a < b) ? a : b); } static inline int max (int a, int b) { return ((a > b) ?
a : b); } typedef struct node_ { int value; struct node_ * left; struct node_ * right; } node; #define MAX 12 int IN_ORDER[MAX] = {0}; int POST_ORDER[MAX] = {0}; node * createNode(int value) { node * temp_node = (node *)malloc(sizeof(node)); temp_node->left = temp_node->right = NULL; temp_node->value = value; return temp_node; } node * insert(node * root, int value) { if (!root) { return createNode(value); } if (root->value > value) { root->left = insert(root->left, value); } else { root->right = insert(root->right, value); } return root; } /* Builds inorder traversal path in the IN array */ void inorder(node * root, int * IN) { static int i = 0; if (!root) return; inorder(root->left, IN); IN[i] = root->value; i++; inorder(root->right, IN); } /* Builds post order traversal path in the POST array */ void postorder (node * root, int * POST) { static int i = 0; if (!root) return; postorder(root->left, POST); postorder(root->right, POST); POST[i] = root->value; i++; } int findIndex(int * A, int value) { int i = 0; for(i = 0; i< MAX; i++) { if(A[i] == value) return i; } return -1; /* not found */ } int CommonAncestor(int val1, int val2) { int in_val1, in_val2; int post_val1, post_val2; int j=0, i = 0; int max_index = -1; in_val1 = findIndex(IN_ORDER, val1); in_val2 = findIndex(IN_ORDER, val2); post_val1 = findIndex(POST_ORDER, val1); post_val2 = findIndex(POST_ORDER, val2); for (i = min(in_val1, in_val2); i<= max(in_val1, in_val2); i++) { for(j = 0; j < MAX; j++) { if (IN_ORDER[i] == POST_ORDER[j]) { if (j > max_index) { max_index = j; } } } } printf("\ncommon ancestor of %d and %d is %d\n", val1, val2, POST_ORDER[max_index]); return max_index; } int main() { node * root = NULL; /* Build a tree with following values */ //40, 20, 10, 30, 5, 15, 25, 35, 1, 80, 60, 100 root = insert(root, 40); insert(root, 20); insert(root, 10); insert(root, 30); insert(root, 5); insert(root, 15); insert(root, 25); insert(root, 35); insert(root, 1); insert(root, 80); insert(root, 60); insert(root, 100); /* Get IN_ORDER traversal
in the array */ inorder(root, IN_ORDER); /* Get post order traversal in the array */ postorder(root, POST_ORDER); CommonAncestor(1, 100); } `` ### Solution:23 There can be one more approach. However it is not as efficient as the ones already suggested in other answers. • Create a path vector for the node n1. • Create a second path vector for the node n2. • A path vector is the set of nodes one would traverse to reach the node in question. • Compare both path vectors. At the first index where they mismatch, return the node at that index - 1. This would give the LCA. Cons for this approach: Need to traverse the tree twice to calculate the path vectors. Need additional O(h) space to store the path vectors. However this is easy to implement and understand as well. Code for calculating the path vector: ``private boolean findPathVector (TreeNode treeNode, int key, int pathVector[], int index) { if (treeNode == null) { return false; } pathVector [index++] = treeNode.getKey (); if (treeNode.getKey () == key) { return true; } if (findPathVector (treeNode.getLeftChild (), key, pathVector, index) || findPathVector (treeNode.getRightChild(), key, pathVector, index)) { return true; } pathVector [--index] = 0; return false; } `` ### Solution:24 Try like this ``node * lca(node * root, int v1, int v2) { if(!root) { return NULL; } if(root->data == v1 || root->data == v2) { return root; } else { if((v1 > root->data && v2 < root->data) || (v1 < root->data && v2 > root->data)) { return root; } if(v1 < root->data && v2 < root->data) { root = lca(root->left, v1, v2); } else if(v1 > root->data && v2 > root->data) { /* else if: avoid re-testing against the reassigned root */ root = lca(root->right, v1, v2); } } return root; } `` ### Solution:25 Crude way: • At every node • X = find if either of n1, n2 exists on the left side of the node • Y = find if either of n1, n2 exists on the right side of the node • if the node itself is n1 || n2, we can call it either found on left or right for the purposes of generalization.
• If both X and Y are true, then the node is the CA The problem with the method above is that we will be doing the "find" multiple times, i.e. there is a possibility of each node getting traversed multiple times. We can overcome this problem if we can record the information so as to not process it again (think dynamic programming). So rather than repeating the find at every node, we keep a record of what has already been found. Better Way: • For a given node, we check in a depth-first fashion whether left_set holds (meaning either n1 | n2 has been found in the left subtree) or right_set holds. (NOTE: We give the root itself the property of being left_set if it is either n1 | n2) • If both left_set and right_set, then the node is an LCA. Code: ``struct Node * findCA(struct Node *root, struct Node *n1, struct Node *n2, int *set) { int left_set, right_set; left_set = right_set = 0; struct Node *leftCA, *rightCA; leftCA = rightCA = NULL; if (root == NULL) { return NULL; } if (root == n1 || root == n2) { left_set = 1; if (n1 == n2) { right_set = 1; } } if(!left_set) { leftCA = findCA(root->left, n1, n2, &left_set); if (leftCA) { return leftCA; } } if (!right_set) { rightCA= findCA(root->right, n1, n2, &right_set); if(rightCA) { return rightCA; } } if (left_set && right_set) { return root; } else { *set = (left_set || right_set); return NULL; } } `` ### Solution:26 Code with a breadth-first search to make sure both nodes are in the tree; only then move forward with the LCA search. Please comment if you have any suggestions to improve.
I think we can probably mark nodes visited and restart the search at the point where we left off, to improve the search for the second node (if it isn't already marked visited) ``public class searchTree { static boolean v1=false,v2=false; public static boolean bfs(Treenode root, int value){ if(root==null){ return false; } Queue<Treenode> q1 = new LinkedList<Treenode>(); q1.add(root); while(!q1.isEmpty()) { Treenode temp = q1.peek(); if(temp!=null) { q1.remove(); if (temp.value == value) return true; if (temp.left != null) q1.add(temp.left); if (temp.right != null) q1.add(temp.right); } } return false; } public static Treenode lcaHelper(Treenode head, int x,int y){ if(head==null){ return null; } if(head.value == x || head.value ==y){ if (head.value == y){ v2 = true; return head; } else { v1 = true; return head; } } Treenode left = lcaHelper(head.left, x, y); Treenode right = lcaHelper(head.right,x,y); if(left!=null && right!=null){ return head; } return (left!=null) ? left:right; } public static int lca(Treenode head, int h1, int h2) { v1 = bfs(head,h1); v2 = bfs(head,h2); if(v1 && v2){ Treenode lca = lcaHelper(head,h1,h2); return lca.value; } return -1; } } `` ### Solution:27 You are correct that without a parent pointer, a traversal-based solution will give you O(n) time complexity. Traversal approach Suppose you are finding the LCA for nodes A and B: the most straightforward approach is to first get the path from root to A and then get the path from root to B. Once you have these two paths, you can easily iterate over them and find the last common node, which is the lowest common ancestor of A and B. Recursive solution Another approach is to use recursion. First, we can get the LCA from both the left tree and the right tree (if they exist). If either A or B is the root node, then the root is the LCA and we just return the root, which is the end point of the recursion. As we keep dividing the tree into sub-trees, eventually we'll hit either A or B.
To combine sub-problem solutions, if LCA(left tree) returns a node, we know that both A and B are located in the left tree and the returned node is the final result. If both LCA(left) and LCA(right) return non-empty nodes, it means A and B are in the left and right tree respectively. In this case, the root node is the lowest common node. Check Lowest Common Ancestor for detailed analysis and solution. ### Solution:28 Some of the solutions here assume that there is a reference to the root node, some assume that the tree is a BST. Sharing my solution using a hashmap, without a reference to the `root` node; the tree can be BST or non-BST (this assumes each node carries a `parent` pointer): `` var leftParent : Node? = left var rightParent : Node? = right var map = [Int : Node]() // assuming data is Int while leftParent != nil { map[(leftParent?.data)!] = leftParent leftParent = leftParent?.parent } while rightParent != nil { if let common = map[(rightParent?.data)!] { return common } rightParent = rightParent?.parent } `` ### Solution:29 ``public TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) { if(root==null || root == p || root == q){ return root; } TreeNode left = lowestCommonAncestor(root.left,p,q); TreeNode right = lowestCommonAncestor(root.right,p,q); return left == null ? right : right == null ? left : root; } `` ### Solution:30 Solution 1: Recursive - Faster • The idea is to traverse the tree starting from root. If either of the given keys p and q matches root, then root is the LCA, assuming that both keys are present. If root doesn't match either of the keys, we recurse for the left and right subtree. • The node which has one key present in its left subtree and the other key present in its right subtree is the LCA. If both keys lie in the left subtree, then the left subtree contains the LCA; otherwise the LCA lies in the right subtree.
• Time Complexity: O(n) • Space Complexity: O(h) - for recursive call stack ``class Solution { public TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) { if(root == null || root == p || root == q) return root; TreeNode left = lowestCommonAncestor(root.left, p, q); TreeNode right = lowestCommonAncestor(root.right, p, q); if(left == null) return right; else if(right == null) return left; else return root; // If(left != null && right != null) } } `` Solution 2: Iterative - Using parent pointers - Slower • Create an empty hash table. • Insert p and all of its ancestors in the hash table. • Check if q or any of its ancestors exist in the hash table; if yes, then return the first existing ancestor. • Time Complexity: O(n) - In the worst case we might be visiting all the nodes of the binary tree. • Space Complexity: O(n) - Space utilized by the parent-pointer hash table, ancestor set and queue would be O(n) each. ``class Solution { public TreeNode lowestCommonAncestor(TreeNode root, TreeNode p, TreeNode q) { HashMap<TreeNode, TreeNode> parent_map = new HashMap<>(); HashSet<TreeNode> ancestors_set = new HashSet<>(); Queue<TreeNode> queue = new LinkedList<>(); parent_map.put(root, null); queue.add(root); while(!parent_map.containsKey(p) || !parent_map.containsKey(q)) { TreeNode node = queue.poll(); if(node.left != null) { parent_map.put(node.left, node); queue.add(node.left); } if(node.right != null) { parent_map.put(node.right, node); queue.add(node.right); } } while(p != null) { ancestors_set.add(p); p = parent_map.get(p); } while(!ancestors_set.contains(q)) q = parent_map.get(q); return q; } } ``
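The recursive idea that recurs throughout these answers (Solutions 21, 29 and 30) can be condensed into a few lines of Python — a hedged illustration rather than any one answer's exact code; the `Node` class and the sample tree are invented for the demo:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def lca(root, p, q):
    """Return the lowest common ancestor node of values p and q, or None."""
    if root is None or root.value in (p, q):
        return root
    left = lca(root.left, p, q)
    right = lca(root.right, p, q)
    if left and right:       # p and q sit in different subtrees
        return root
    return left or right     # both sit on one side (or are absent)

# Small sample tree:        1
#                          / \
#                         2   3
#                        / \
#                       4   5
root = Node(1, Node(2, Node(4), Node(5)), Node(3))
```

As in Solution 30, this assumes both keys are present in the tree; a preliminary search (Solution 26's BFS) can enforce that.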
10,430
40,878
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.890625
4
CC-MAIN-2019-30
longest
en
0.917755
https://socratic.org/questions/what-are-the-answers-and-how-do-you-solve-them-1#614226
1,656,211,796,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103036363.5/warc/CC-MAIN-20220626010644-20220626040644-00376.warc.gz
587,658,042
6,795
# What are the answers and how do you solve them? May 15, 2018 The odds are 1/10, 3/10, 6/10, in that order. It's a dependent event. #### Explanation: P(blue,blue) is equal to the odds of picking blue the first time times the odds of picking blue the second time. The first time, the odds of picking blue is $\frac{2}{5}$ since there are $2$ blue marbles but $5$ total marbles. If you pick a blue marble, there is 1 blue marble remaining, and 4 total marbles remaining, making the odds of picking a blue marble on the second try equal to $\frac{1}{4}$ $\frac{2}{5} \cdot \frac{1}{4} = \frac{2}{20} = \frac{1}{10}$, so P(blue,blue)=1/10 P(red,red) involves a similar approach. The first time, the odds of picking a red marble is $\frac{3}{5}$ since there are $3$ red marbles, and $5$ marbles total. If you pick a red marble, there are $2$ red marbles remaining, and $4$ total marbles remaining, making the odds of picking a red marble on the second try equal to $\frac{2}{4}$ $\frac{3}{5} \cdot \frac{2}{4} = \frac{6}{20} = \frac{3}{10}$ The last one you can work out step by step, following a method similar to the one above, but there is a shortcut you can use for this case. The odds of picking two blue marbles is $\frac{1}{10}$, and the odds of picking two red marbles is $\frac{3}{10}$. The odds of picking two marbles of the same color in a row is $\frac{4}{10}$ then, or $\frac{2}{5}$ Therefore the odds of picking two different marble colors (P(one blue, one red)) is $1 - \frac{2}{5} = \frac{3}{5}$ There are two possibilities for P(one blue, one red) to happen, one of which is P(blue, red; in that order). The odds of picking a blue marble first is $\frac{2}{5}$, and the odds of picking a red marble after that is $\frac{3}{4}$.
Therefore the probability of the event taking place is $\frac{2}{5} \cdot \frac{3}{4} = \frac{3}{10}$, meaning P(blue, red; in that order) is $\frac{3}{10}$ So in conclusion P(blue,blue)=$\frac{1}{10}$ P(red,red)=$\frac{3}{10}$ P(one blue, one red)=$\frac{6}{10}$ P(blue, red; in that order)=$\frac{3}{10}$ May 15, 2018 $\frac{1}{10} , \frac{3}{10} , \frac{3}{10} , \frac{6}{10}$ and dependent. #### Explanation: This is a dependent event as the second draw depends on which marble you took in the first one. Knowing that, we calculate the chance of getting the first one multiplied by the chance of getting the second one from the remaining ones $\frac{2}{5} \cdot \frac{1}{4} = \frac{1}{10}$ (B, B) $\frac{3}{5} \cdot \frac{2}{4} = \frac{3}{10}$ (R,R) $\frac{2}{5} \cdot \frac{3}{4} = \frac{3}{10}$ (B, R) Getting one of each is also the sum of the chances of either order $\frac{2}{5} \cdot \frac{3}{4} + \frac{3}{5} \cdot \frac{2}{4} = \frac{6}{10}$ (1B, 1R) We can check that this is correct as the probabilities of (B, B), (R, R), (B,R) and (R, B) add up to 1
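The arithmetic in both answers can be double-checked by brute-force enumeration of ordered draws — a sketch, taking the bag of 2 blue and 3 red marbles from the answers above:

```python
from fractions import Fraction
from itertools import permutations

bag = ['B', 'B', 'R', 'R', 'R']            # 2 blue, 3 red
draws = list(permutations(bag, 2))          # all 5*4 = 20 ordered two-marble draws

def prob(event):
    """Exact probability of an event over all equally likely ordered draws."""
    hits = sum(1 for d in draws if event(d))
    return Fraction(hits, len(draws))

p_bb = prob(lambda d: d == ('B', 'B'))            # two blues
p_rr = prob(lambda d: d == ('R', 'R'))            # two reds
p_br_ordered = prob(lambda d: d == ('B', 'R'))    # blue then red
p_mixed = prob(lambda d: d[0] != d[1])            # one of each, either order
```

Because `permutations` treats the individual marbles as distinct, each of the 20 ordered draws is equally likely, so counting hits gives the exact probabilities.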
930
2,815
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 30, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.8125
5
CC-MAIN-2022-27
latest
en
0.888896
https://reefs.com/short-take-watts-up-with-your-tank-estimating-your-tank-s-electrical-costs/
1,716,544,653,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971058709.9/warc/CC-MAIN-20240524091115-20240524121115-00029.warc.gz
420,035,534
126,319
# Estimating Your Tank’s Electrical Costs Running powerful lighting over your little slice of the ocean?  Perhaps thousands of gallons per hour in water movement to keep those polyps swaying happily? Maybe a pair of high wattage tank heaters maintaining tropical reef temperatures as well? Have you ever wondered what your tank costs are for electrical consumption? In these days of seemingly ever increasing electrical costs, along with the push to conserve energy, it is important to see where your hard earned dollars are being spent. In this article I will attempt to explain (with some basic calculations) how I determined exactly what my tank draw is and how much it affects my monthly bill within a few cents. One method would be to calculate mathematically using the product of amperage, voltage and the power factor of each piece of equipment. However this would not offer a true picture of usage as these readings are based on the maximum output of any given piece of equipment. Maximum output with a one hundred percent power factor is rarely if ever achieved. Power factor is the ratio of true power or watts to the apparent power or volt-amperes. The power factor is expressed as a decimal or in percentage. Kilowatts indicate real power and Kilovolt-amperes apparent power. They are identical only when current and voltage are in phase i.e., when the power factor is 1.0 or one hundred percent. Some examples of typical power factors are as follows: • Incandescent lighting 1.0 • Fluorescent lighting 0.5 to 0.95 dependant upon ballast type • Mercury-Vapor Lighting 0.5 to 0.95 dependant upon ballast type • Single phase induction motors 1/20 to 1hp, power factor 0.55 to 0.75, average 0.68 at rated load My calculations will be based upon readings taken in a real time situation of actual draw (amperage) by either individual piece or group of components because ampmeters and voltmeters indicate total effective current and voltage regardless of the power factor. 
In other words the ampmeter doesn’t care how the component uses the amps, just that it is using X amount of amps to perform the job regardless of efficiency. Using the clamp on ampmeter negates the need to know the power factor. It measures what the component uses, not how it is used. The ampmeter will see what your residential electric meter sees. ## Some background information The tank (All-Glass Aquarium) is a 120 gallon reef-ready using a standard 30 gallon tank as a sump. Sump return is handled by a Supreme Mag12 pump which feeds a ¾ inch Sea-Swirl (Aquarium Currents Inc.) Rotating Return Device and the skimmer (Precision Marine Bullet 2) is fed by a Supreme Mag18. Additional water movement is provided by four Aquarium Systems Maxi-Jet 1200 power heads attached to a Red-Sea Fish Pharm Ltd. Wavemaster Pro wave maker, and a GenX Mak4 water pump which is part of a closed loop system. A 300 watt (w) Otto Computherm heater keeps the tank at 82ºF (~28°C). Lighting consists of two 400w 6500k Iwasaki halides powered by a PFO dual ballast, and two 110w VHO (very high output) actinics powered by an Icecap 660 ballast. And finally an Advanced Reef Technologies K2R calcium reactor using an Eheim model 1046 pump for re-circulation and fed by another Maxi-Jet 1200. In order to do this properly, the following items are needed: • clamp on style ampmeter • short length (approximately 12 inches) of 12/3 wire to make a “patch-cord”. • straight blade plug and connector (2 pole, 3 wire) • cost of electricity per kilowatt hour (kwh) Everything in this list can be found at a local hardware store, with the exception of electricity rates which can be found on your most recent bill or the electric company’s web site. In order to create the patch-cord you must carefully strip away the outer most insulation from the wire with a sharp instrument, leaving the insulation on the three internal wires (usually white, green, and black) intact.
Properly attach the straight blade plug and connector to the wires (white wire to silver connector, black wire to gold connector, and green wire to green connector). This patch-cord will be used in conjunction with the clamp on ampmeter to measure amperage (Figure 1). The meter I used is a Fluke 33 True RMS Clamp Meter (Figure 2). Electric company rates in my area (New York) are determined in two ways: either a regular rate of \$0.1237/kilowatt hour (kwh) (24 hours per day every day), or a day/night rate which runs \$0.1391/kwh (Day-time Period 7 a.m. to 11:30 p.m. EST or 8 a.m. to 12:30 p.m. EDT) and \$0.0571/kwh (Night-time Period 11:30 p.m. to 7 a.m. EST or 12:30 a.m. to 8 a.m. EDT). With so much electrical equipment on the tank, most people use power strips and I’m no exception. At this point we can measure the power draw (amperage) of each individual piece or the entire power strip worth of equipment. Once the patch-cord is placed between the power strip (or equipment plug) and the wall outlet, simply place the ampmeter around either the white or black lead of the patch-cord. Many digital ampmeters will display to the second or even third decimal place. For our purposes we can round off to the nearest tenth of an amp. I recommend measuring the lighting system separately from the rest of the tank so we can take photoperiod into account, thus giving a more accurate view of total costs involved. My tank is set up so that one circuit handles both halides and the sump return pump, and a second circuit handles the VHO lighting and the rest of the electrical equipment. This way if a circuit breaker should trip, there will be at least some light and water movement available to the tank. The following chart shows the readings taken as indicated.
| Equipment | Amps |
| --- | --- |
| Halide 1 (10 hours) | 3.4 amps |
| Halide 2 (10 hours) | 4 amps |
| VHO including fans (10 hours) | 2.4 amps |
| Other equipment | 4.7 amps |
| Total | 14.5 amps |

In order to convert amperage (a) to wattage (w) and ultimately kilowatts (kw), multiply amps and volts (v). As an average of line voltage (110-120v) we will use 115v for our purposes. This gives the following equations:
All Lights 9.8a x 115v = 1127w or 1.1kw
Other Equipment 4.7a x 115v = 540.5w or 0.5kw
Cost can be determined using these figures. For 10 hours a day total equipment draw is 1.6kw per hour. For the remaining 14 hours (all equipment without lights) the draw is 0.5kw per hour. Now we simply multiply kw by run time hours for kilowatt hours, then multiply by the previously mentioned regular rates.
1.6kw x 10 hrs = 16kwh x \$0.1237 = \$1.98/day
0.5kw x 14 hrs = 7kwh x \$0.1237 = \$0.87/day
Sub Total: \$2.85/day
\$2.85 x 30 days = \$85.50/month Total Cost
If light timers run 10am to 8pm EST, we can determine if the day/night rates will provide any savings.
1.6kw x 10 hrs = 16kwh x \$0.1391 = \$2.23/day (lights on, day rate)
0.5kw x 6.5 hrs = 3.3kwh x \$0.1391 = \$0.46/day (lights off, day rate)
0.5kw x 7.5 hrs = 3.8kwh x \$0.0571 = \$0.22/day (lights off, night rate)
Sub Total: \$2.91/day
\$2.91 x 30 days = \$87.30/month Total Cost
Based on current timer settings, no savings is realized by taking advantage of the day/night rates offered by the electric company. However, substantial savings would be evident should the lights be run during the discounted nighttime period. Hopefully this article has enlightened you as to where electrical costs are in relation to reef keeping. I wish to thank my friend Richard Boll, Electrical Engineer, for his expert advice. ## References 1. New York State Electric and Gas. 2. American Electrician’s Handbook (Eleventh Edition) by Terrell Croft and Wilford Summers (ISBN 0-07-013932-6). If you are unsure of what you are doing, consult an electrician!
This author cannot be held liable for any damage to equipment or bodily injury.
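The article's arithmetic generalizes to a small calculator. This sketch plugs in the measured amperages and the flat New York rate quoted above; the 115 V average line voltage and 30-day month are the author's stated assumptions, and the rounding to one decimal of a kilowatt mirrors the article's figures:

```python
VOLTS = 115  # assumed average line voltage, per the article

def kw(amps):
    """Convert a measured current draw in amps to kilowatts."""
    return amps * VOLTS / 1000.0

def monthly_cost(lights_kw, other_kw, photoperiod_h, rate, days=30):
    """Flat-rate monthly cost: lights + other while lights are on, other alone otherwise."""
    daily_kwh = (lights_kw + other_kw) * photoperiod_h + other_kw * (24 - photoperiod_h)
    return daily_kwh * rate * days

lights = round(kw(9.8), 1)   # 1.127 kW measured for all lights, rounded to 1.1 kW
other = round(kw(4.7), 1)    # 0.5405 kW for pumps, heater, etc., rounded to 0.5 kW

cost = monthly_cost(lights, other, photoperiod_h=10, rate=0.1237)
```

With the rounded figures this gives about \$85.35/month; the article's \$85.50 differs by a few cents only because it also rounds the daily subtotal to \$2.85 before multiplying by 30.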
1,987
7,806
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.734375
4
CC-MAIN-2024-22
latest
en
0.912218
https://math.stackexchange.com/questions/1553508/calculating-mean-of-sequence-given-means-of-subsets
1,563,615,970,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195526506.44/warc/CC-MAIN-20190720091347-20190720113347-00385.warc.gz
480,499,975
36,848
# Calculating mean of sequence given means of subsets Let $(s_n) = \{s_1,s_2,\dots,s_n\}$ be a sequence of real numbers. Let us assume that we do not know the value of any $s_i, \ i\in \{1,2,\dots,n\}$, but instead the sequence $(s_n)$ has been divided in some unknown way into $k$ non-overlapping subsequences, and let the length of the $j$th such subsequence, $j\in \{1,2,\dots,k\}$, be $l_j$. We can assume that all lengths $l_j$ are greater than zero and thus smaller than $n$. Assume we have been given only the set of means of these subsequences $(a_k) = \{a_1,a_2,\dots,a_k\}$, e.g. $a_1 = \frac{s_2+s_5+s_{n-2}}{3},$ thus $l_1 = 3.$ Question: is it possible to use only the values of $(a_k)$ and $n$ to find out the mean of $(s_n)$? If $k = n$ or if $k=1$, the question is trivial. If $l_j$ are equal for all values of $j$, the mean of $(a_k)$ is equal to the mean of $(s_n)$. Let us find all possible ways to represent $n$ as a sum of $k$ positive integers, and call such a representation a set of weights. For example, if $n=5$ and $k=3$, $(1,2,2)$ is a set of weights. If we consider every possible way to multiply the values of $(a_k)$ with corresponding values in a set of weights, add them together and divide the result by $n$, at least one of these is the true mean of $(s_n)$, namely the one with the set of weights $(l_1,l_2,\dots,l_k)$. However, we can then only say that the true mean is in this list of possibilities found using this method. • It is just an example, poorly constructed, it seems. I just came up with some numbers and wanted to use the n, the number of numbers in the sequence, instead of any fixed number. – Valtteri Nov 30 '15 at 17:44 • We cannot reconstruct the mean given the set of means of an unknown partition. – André Nicolas Nov 30 '15 at 17:51 $$\frac{1}{n}\sum_{i=1}^ka_il_i$$ But if you don't have the weights, then the averages no longer contain enough information. This is alluded to in the final paragraph.
For example, consider the sequences $1, 1, 2, 2$ and $1, 1, 1, 2$. We could split the first into $1, 1$ and $2, 2$ and the second into $1, 1, 1$ and $2$. In both cases, we get $k=2$ and $(a_1, a_2)$ = $(1, 2)$. But clearly the averages of the two sequences differ, so $k$, $n$ and the vector of $a_i$ are clearly not enough information to determine the average of the initial sequence. Entropy has won. • Well, that's true, I said that much in the final paragraph, but we do not know the values of $l_i$. – Valtteri Nov 30 '15 at 17:49 • Seems so. Was hoping that maybe there would be some way to find it as a mean of weighted means of means or something similar. Or that maybe the true mean would be the expected value of some random process that uses the values of $(a_k)$. But I don't know. – Valtteri Nov 30 '15 at 17:52 • Well, if all you're allowed to use is the set $a_k$ and the number $n$, and not the lengths, then it's impossible. As I demonstrate above with a counter-example, for a given $n$ and set $a_k$, there is more than one possibility for a sequence. So no matter what process you use, it's doomed if it is only allowed the information specified. – Colm Bhandal Nov 30 '15 at 17:58
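The counterexample above is easy to check mechanically — a sketch using the standard library's `statistics.mean` for the averages:

```python
from statistics import mean

seq1 = [1, 1, 2, 2]              # split into [1, 1] and [2, 2]
seq2 = [1, 1, 1, 2]              # split into [1, 1, 1] and [2]

parts1 = [[1, 1], [2, 2]]
parts2 = [[1, 1, 1], [2]]

means1 = sorted(mean(p) for p in parts1)   # subsequence means of seq1
means2 = sorted(mean(p) for p in parts2)   # subsequence means of seq2

# Same n, same k, same set of subsequence means...
same_inputs = (len(seq1) == len(seq2)
               and len(parts1) == len(parts2)
               and means1 == means2)

# ...but different overall means, so (n, k, {a_i}) cannot determine the mean.
overall1, overall2 = mean(seq1), mean(seq2)
```

The weighted formula $\frac{1}{n}\sum a_i l_i$ would distinguish the two, but only because the lengths $l_i$ differ between the partitions.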
946
3,148
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.40625
4
CC-MAIN-2019-30
latest
en
0.871997
https://www.doubtnut.com/question-answer/in-delta-abc-sin-a-a-b-cos-c-sin-c-c-b-cos-a-39170101
1,618,590,013,000,000,000
text/html
crawl-data/CC-MAIN-2021-17/segments/1618038088245.37/warc/CC-MAIN-20210416161217-20210416191217-00054.warc.gz
844,438,191
53,227
Class 12 MATHS Properties And Solutions Of Triangle # In Delta ABC, (sin A (a - b cos C))/(sin C (c - b cos A))= Updated On: 30-1-2020 Text Solution: the options are -2, -1, 0, 1; the correct answer is 1. <br> (sinA (a - b cos C))/(sinC (c - b cos A)) <br> = (sin A (b cos C + c cos B - b cos C))/(sin C (a cos B + b cos A - b cos A)) " " ("using the projection formulas " a = b cos C + c cos B " and " c = a cos B + b cos A) <br> = (sin A (c cos B))/(sin C (a cos B)) <br> = 1 " " ("as " c sin A = a sin C " by the sine rule")
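The identity can be sanity-checked numerically for an arbitrary triangle — a sketch in which the two angles are chosen freely (my choice, not from the problem) and the sides are generated via the sine rule:

```python
import math

# Pick an arbitrary triangle by two of its angles (radians); the third makes the sum pi.
A, B = 1.1, 0.7
C = math.pi - A - B

# Sine rule: a/sin A = b/sin B = c/sin C; take the common ratio to be 1.
a, b, c = math.sin(A), math.sin(B), math.sin(C)

value = (math.sin(A) * (a - b * math.cos(C))) / (math.sin(C) * (c - b * math.cos(A)))
```

Up to floating-point error, `value` comes out as 1 for any valid choice of angles, matching the derivation.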
480
1,111
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.25
3
CC-MAIN-2021-17
latest
en
0.460076
http://mymathforum.com/algebra/340116-probability-algebra.html
1,506,364,896,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818693240.90/warc/CC-MAIN-20170925182814-20170925202814-00488.warc.gz
236,504,777
9,581
My Math Forum Probability in Algebra Algebra Pre-Algebra and Basic Algebra Math Forum April 20th, 2017, 12:01 AM #1 Member   Joined: Jan 2017 From: US Posts: 91 Thanks: 5 Probability in Algebra Solve for the probability that a student is selected at random from a group of 20 students will draw a 6 from a bag with the numbers 1 though 15 in it. Show and explain all of your work. Could anyone tell me how I would go about this? Thanks April 20th, 2017, 12:49 AM #2 Senior Member     Joined: Sep 2015 From: Southern California, USA Posts: 1,413 Thanks: 717 does it matter which student is chosen? Well there's nothing in the problem that would lead you to believe one student will chose any differently than any other student so no. Thus the bit about choosing a student at random doesn't matter at all. What's the probability that any student will choose a 6 out of the 15 tiles. Well there's only one 6 out of the 15 so the probability is $p=\dfrac {1}{15}$ Thanks from Indigo28 April 20th, 2017, 01:31 AM   #3 Member Joined: Jan 2017 From: US Posts: 91 Thanks: 5 Quote: Originally Posted by romsek does it matter which student is chosen? Well there's nothing in the problem that would lead you to believe one student will chose any differently than any other student so no. Thus the bit about choosing a student at random doesn't matter at all. What's the probability that any student will choose a 6 out of the 15 tiles. Well there's only one 6 out of the 15 so the probability is $p=\dfrac {1}{15}$ Thank you! This seems so obvious to me now, I don't know why I didn't see how simple that was. Lol April 20th, 2017, 03:24 AM   #4 Senior Member Joined: May 2016 From: USA Posts: 786 Thanks: 312 Quote: Originally Posted by Indigo28 Solve for the probability that a student is selected at random from a group of 20 students will draw a 6 from a bag with the numbers 1 though 15 in it. Show and explain all of your work. Could anyone tell me how I would go about this? 
Thanks Romsek gave the correct answer to a reasonable interpretation of a flawed question. I do not think it is the only possible interpretation. "A student is selected at random from a group of 20 students will draw ..." The verb "will draw" has no subject. Romsek assumes quite sensibly that the intended subject is "the student, who is selected ..., will draw." He then deduces correctly that how the student is selected is irrelevant to whether that student draws a 6. I agree if that is the intended problem. But that very minimal addition to the words of the given problem may not be the correct addition needed to get to the intended problem. Is the problem you gave the complete and exact wording of the problem in your text? If not, what is the complete and exact wording?
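Romsek's point — that the random choice of student is irrelevant — can be illustrated by computing the probability with and without the student-selection step, using exact fractions (a small sketch, not part of either forum answer):

```python
from fractions import Fraction

# Probability that a given student draws the single 6 from tiles 1..15:
p_draw = Fraction(1, 15)

# Including the (irrelevant) uniform choice of one student out of 20:
# each of the 20 equally likely students has the same 1/15 chance,
# so the total probability is the same weighted average.
p_total = sum(Fraction(1, 20) * p_draw for _ in range(20))
```

The weighted average collapses back to 1/15, which is why the "selected at random from 20 students" clause drops out of the answer.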
827
3,214
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2017-39
longest
en
0.968365
https://www.physicsforums.com/threads/why-does-flemings-right-hand-rule-apply-for-induced-current.949073/
1,719,231,120,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198865383.8/warc/CC-MAIN-20240624115542-20240624145542-00309.warc.gz
814,029,897
19,138
# Why does Fleming's right hand rule apply for induced current • YourBoy In summary, the right hand rule applies for induced currents, and the left hand rule is for motors. The right hand rule is more consistent and easier to follow, and current flows in the direction that the right hand rule is applied. YourBoy I know that the right hand rule applies for induced currents, and the left hand rule is for motors, but why? It doesn't seem to connect with any other physics laws and is very counter-intuitive that the current will flow in one direction rather than the opposite direction if both are perpendicular to the path of motion and electric field, and the electric fields are symmetrical lengthwise. Fleming's right hand rule shows that if you have a force in a certain direction and a magnetic field to in another direction, current will flow in a direction perpendicular to both. If you had a motion upwards, and a field to the left, current would flow towards you. but why not away from you? If it flows away from you it would still be perpendicular to both motion and magnetic field, so what makes it flow towards you instead of away from you? I am looking for a physical explanation more than a mathematical one. #### Attachments • LnXDx.png 6 KB · Views: 540 YourBoy said: If you had a motion upwards, and a field to the left, current would flow towards you. but why not away from you? This is a simple matter of convention. We arbitrarily define the direction of the magnetic field to point in the direction that the right hand rule works. We could have arbitrarily chosen to use a left hand rule instead. Then the direction for the field would be chosen with the opposite sense such that a left hand rule would work. Okay, but then current would still be choosing one path over the other, why does it move in one direction particularly? 
The magnetic field is symmetrical lengthwise so it seems that current would not travel in either direction because the forces causing it to move would cancel out due to the field being symmetrical #### Attachments • bar_magnet_field.png 6.7 KB · Views: 486 YourBoy said: I am looking for a physical explanation more than a mathematical one. This is such a common request about many aspects of Physics and it is basically asking too much of 'verbal' language. If I asked you for a "Physical Reason" why and how the money that you are paying on your credit card bill is what it is then you would, I think, realize that it's all about the interest rates and the small print about what proportion gets used for this and that. You would acknowledge that it's all a matter of Maths and that it never involves actual notes and coins. Maths describes what goes on in your account and nothing else does it half as well. Why do you use a calculator? Because words don't do the job that you want. Physics is the same only moreso. For centuries, Scientists developed the optimum way to describe the Physical World and that involves Maths. We have not found a better way to model the universe than Maths (yet). If something else turns up you can be sure it still won't involve any of the simple 'mechanical' descriptions that people crave. It will be harder than Maths, even. YourBoy YourBoy said: Okay, but then current would still be choosing one path over the other, why does it move in one direction particularly? The magnetic field is symmetrical lengthwise so it seems that current would not travel in either direction because the forces causing it to move would cancel out due to the field being symmetrical I am not exactly sure what you are asking here. If you simply put a loop of wire around a magnet then you are correct, there is no current flow. On the other hand, if you run current through a loop of wire around a magnet (e.g. 
drive the current with a battery), then there will be a force exerted on that loop of wire. YourBoy said: why does it move in one direction particularly Firstly, it would need to be consistent ; it would need to do the same thing every time under the same circumstances. Secondly, if we had chosen to define the direction of the field or the direction of the current, we would be using the Left Hand Rule to predict the direction of the force. That direction depends on two totally arbitrary choices of direction of two other quantities. Edit: I was so involved with making the point that the actual direction depends on the arbitrary original choice of current and field definitions that I switched to the Left Hand Motor Rule and not the Right Hand 'Gene-RIGHT-er' Rule. Last edited: It looks like there is some confusion in the use of the words. In the picture, the thumb is showing something called motion. If that is the direction of the force on a current in a magnetic field, that picture is wrong, as sophiecentaur points out. YourBoy said: Okay, but then current would still be choosing one path over the other, why does it move in one direction particularly? The magnetic field is symmetrical lengthwise so it seems that current would not travel in either direction because the forces causing it to move would cancel out due to the field being symmetrical Let's say a large positively and uniformly charged plate is moving to the right. The electric field lines of the plate look like this: | | | , and those lines are moving to the right with the plate. An observer below the plate observes that kind of lines as depicted above. Now lets' say that that observer accelerates towards the plate. After the acceleration the lines appear to be oriented like this: / / / , according to the observer. If those lines that I drew were sticks, the sticks would tilt the same way. 
When the observer accelerates, clocks at the upper ends of the sticks run faster than clocks at the lower ends of the sticks, and the upper ends move faster than the lower ends, according to the observer - that's just simple relativity. That's the explanation, I think. Tilting of electric field lines. (For more magnetic field and less electric field add a negatively charged plate moving to the left next to the positively charged plate moving to the right) Last edited: YourBoy said: Okay, but then current would still be choosing one path over the other, why does it move in one direction particularly? The induced emf is in that direction. If you don't want a mathematical explanation then you won't want to look at Maxwell's equations but you would then have to accept "that's the way it is" as an answer. Last edited: davenn ## 1. Why is it called "Fleming's right hand rule"? Fleming's right hand rule is named after the British scientist, John Ambrose Fleming, who first described the relationship between induced current and magnetic fields in the late 19th century. The rule uses the right hand to represent the direction of the magnetic field, current, and motion, making it easy to remember and apply. ## 2. How does Fleming's right hand rule apply to induced current? Fleming's right hand rule states that when a conductor is moved through a magnetic field, an induced current will flow in the direction perpendicular to both the magnetic field and the direction of motion. This can be visualized by using the right hand to represent the direction of the magnetic field, current, and motion. ## 3. What is the significance of Fleming's right hand rule in electromagnetic induction? Fleming's right hand rule is a fundamental principle in the study of electromagnetic induction. It helps to determine the direction of induced current in a conductor, which is important in various applications such as generators, motors, and transformers. ## 4.
Is Fleming's right hand rule applicable to all types of conductors? Yes, Fleming's right hand rule can be applied to all types of conductors, including wires, coils, and rotating loops. As long as there is relative motion between the conductor and the magnetic field, an induced current will be produced according to the direction determined by the rule. ## 5. Can Fleming's right hand rule be used to determine the magnitude of induced current? No, Fleming's right hand rule only determines the direction of induced current, not the magnitude. The magnitude of induced current depends on factors such as the strength of the magnetic field, the velocity of the conductor, and the properties of the conductor itself.
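The "why this direction and not the other" question in the thread comes down to the cross product in the Lorentz force law F = qv × B: the result is perpendicular to both inputs, and flipping the sign convention for B flips the answer — which is exactly the arbitrariness the replies describe. A small sketch (the coordinate assignments for "upwards" and "to the left" are my assumption, chosen to mirror the example in the thread):

```python
def cross(a, b):
    # Right-handed cross product of two 3-vectors (tuples).
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

v = (0, 0, 1)    # motion upwards (+z)
B = (-1, 0, 0)   # field to the left (-x)

print(cross(v, B))            # (0, -1, 0): one definite perpendicular direction
print(cross(v, (1, 0, 0)))    # flipping the field convention flips the force
```

Both outputs are perpendicular to v and B; the handedness convention is what selects one of the two.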
1,899
8,748
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5
4
CC-MAIN-2024-26
latest
en
0.962412
https://www.xloypaypa.pub/codeforces-round-708-div2-solution/
1,726,279,207,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00447.warc.gz
989,826,538
7,042
Skipping problems A and B for this round. ## C2. k-LCM (hard version) A simple construction problem. The key value is $\frac{n}{2}$: after dividing by 2, the LCM is at most $\frac{n}{2}$. Then we just need a way to peel off ones and make the remainder divisible by 2. ## D. Genius Normal. I knew it was DP from the very beginning, and the memory limit points directly at state-compression DP. The breakthrough is: if we only move forward, there is no need to skip any questions, and the IQ depends only on i and j, so it is safe to move backward. The last issue is the case of equal tags, handled directly or via state compression. Just routine work. ## E2. Square-free division (hard version) I did not solve this problem by myself; I looked at others' solutions. I found the way to compute the left boundary and also thought of the DP definition, but could not compute the DP successfully. So sad. I was trying to find an $O(n\cdot{k})$ solution, because about 4,000,000 calculations are needed and I thought an $O(n\cdot{k^2})$ solution might get TLE. But after looking at others' solutions, $O(n\cdot{k^2})$ is fine. Then fine...
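For C2, the post only hints at the construction, so here is my reconstruction of the standard one (an assumption, not necessarily what the author coded): output k−3 ones, then split the remainder m = n−(k−3) into three parts by the residue of m mod 4, so that the LCM of all k parts is at most n/2. Assumes the problem's constraints 3 ≤ k ≤ n.

```python
from functools import reduce
from math import gcd

def k_lcm(n, k):
    # k-3 ones, then split m = n - (k-3) into three parts with small LCM.
    ones = [1] * (k - 3)
    m = n - (k - 3)
    if m % 2 == 1:            # odd: lcm of triple is (m-1)//2
        triple = [1, (m - 1) // 2, (m - 1) // 2]
    elif m % 4 == 0:          # divisible by 4: lcm is m//2
        triple = [m // 2, m // 4, m // 4]
    else:                     # m ≡ 2 (mod 4): (m-2)//2 is even, lcm is (m-2)//2
        triple = [2, (m - 2) // 2, (m - 2) // 2]
    return ones + triple

def lcm_all(xs):
    return reduce(lambda a, b: a * b // gcd(a, b), xs)
```

In every branch the parts sum to n and their LCM is bounded by n/2, which is the n/2 mentioned above.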
269
1,050
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2024-38
latest
en
0.906859
https://gmatclub.com/forum/if-x-and-y-are-selected-from-2-3-4-5-and-6-what-is-the-27591.html?fl=similar
1,498,510,847,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128320865.14/warc/CC-MAIN-20170626203042-20170626223042-00635.warc.gz
752,240,253
45,696
# If x and y are selected from 2, 3, 4, 5, and 6, what is the Author Message Director Joined: 17 Oct 2005 Posts: 928 If x and y are selected from 2, 3, 4, 5, and 6, what is the [#permalink] ### Show Tags 22 Mar 2006, 00:24 This topic is locked. If you want to discuss this question please re-post it in the respective forum. If x and y are selected from 2, 3, 4, 5, and 6, what is the probability that x*y is divisible by 4? Director Joined: 06 Feb 2006 Posts: 897 ### Show Tags 22 Mar 2006, 01:35 isn't it 2/5? 5C2 total possible number of picks... = 10 Favourable picks 4 Intern Joined: 18 Jan 2006 Posts: 15 ### Show Tags 22 Mar 2006, 03:22 is it 1/2. 5/5C2 Thanks _________________ If I cant win, I'll die flying Director Joined: 06 Feb 2006 Posts: 897 ### Show Tags 22 Mar 2006, 03:31 Gmate wrote: is it 1/2. 5/5C2 Thanks Oh yeah, 5 favourable outcomes.... 1/2 Director Joined: 17 Oct 2005 Posts: 928 ### Show Tags 22 Mar 2006, 15:57 can you guys elaborate on this one? Is there a quick way or did you guys just calculate all the outcomes? thanks Senior Manager Joined: 11 Nov 2005 Posts: 328 Location: London ### Show Tags 22 Mar 2006, 16:36 total outcome 10 favorable outcome 5 5/10 = 1/2 SVP Joined: 14 Dec 2004 Posts: 1689 ### Show Tags 22 Mar 2006, 23:25 I think we get 6 favorable outcomes.
(2,4) (3,4) (4,4) (5,4) (6,4) (2,6) Director Joined: 06 Feb 2006 Posts: 897 ### Show Tags 23 Mar 2006, 02:18 vivek123 wrote: I think we get 6 favorable outcomes. (2,4) (3,4) (4,4) (5,4) (6,4) (2,6) Nope you can not count 4 and 4, because you have to choose from 2 3 4 5 and 6... there are no two 4s Director Joined: 06 Feb 2006 Posts: 897 ### Show Tags 23 Mar 2006, 02:22 joemama142000 wrote: can you guys elaborate on this one? Is there a quick way or did you guys just calculate all the outcomes? thanks We calculated total outcomes by using the combination formula.... n!/(k!(n-k)!) n - denotes the total number of choices we have = 5 k - denotes the number of choices we are arranging = 2 In this example n!=1*2*3*4*5 k!=1*2 And yes, the favourable outcomes are calculated by hand but it does not take long... max 1 minute... Manager Joined: 20 Mar 2005 Posts: 201 Location: Colombia, South America ### Show Tags 24 Mar 2006, 00:28 vivek123 wrote: I think we get 6 favorable outcomes. (2,4) (3,4) (4,4) (5,4) (6,4) (2,6) in that case (2,2) and (6,6) would work too but it would be over 5^2 = 25 I guess Sima is right GMAT Club Legend Joined: 07 Jul 2004 Posts: 5043 Location: Singapore ### Show Tags 24 Mar 2006, 01:57 x*y must be a multiple of 4: 4,8,12,16,20,24,28 (x,y) sets (2,4) (2,6) (3,4) (4,5) (4,6) We don't have to consider (4,2) (6,2) (4,3) (5,4) and (6,4) as we're not interested in placing but groupings. # of ways to pick any 2 numbers = 5C2 = 10 P = 5/10 = 1/2 Director Joined: 24 Oct 2005 Posts: 659 Location: London ### Show Tags 24 Mar 2006, 02:54 I got the favourable outcomes as 2 2 2 4 2 6 3 4 4 2 4 3 4 4 4 5 4 6 5 4 6 2 6 4 Total outcomes = 25 (x can be any of the 5 nos and y can be any of the 5 nos.)
Probability = 12/25 Director Joined: 02 Mar 2006 Posts: 575 Location: France ### Show Tags 24 Mar 2006, 08:53 remgeo wrote: I got the favourable outcomes as 2 2 2 4 2 6 3 4 4 2 4 3 4 4 4 5 4 6 5 4 6 2 6 4 Total outcomes = 25 (x can be any of the 5 nos and y can be any of the 5 nos.) Probability = 12/25 I had the same result, because nowhere is it written that once you have picked a number, there is one less available for the next selection.
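The disagreement in this thread is just about sampling with or without replacement, and both cases can be enumerated directly. Note that the 12/25 list above misses (6,6), whose product 36 is divisible by 4, so the with-replacement answer is actually 13/25; without replacement the answer is 1/2, as posted:

```python
from fractions import Fraction
from itertools import product

nums = [2, 3, 4, 5, 6]

# With replacement: x and y picked independently (25 ordered pairs).
with_rep = list(product(nums, repeat=2))
p_with = Fraction(sum(x * y % 4 == 0 for x, y in with_rep), len(with_rep))

# Without replacement: x and y are two different numbers (20 ordered pairs).
no_rep = [(x, y) for x, y in with_rep if x != y]
p_without = Fraction(sum(x * y % 4 == 0 for x, y in no_rep), len(no_rep))

print(p_without, p_with)  # 1/2 13/25
```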
1,504
4,179
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.9375
4
CC-MAIN-2017-26
longest
en
0.887144
https://us.metamath.org/ileuni/elixpsn.html
1,718,882,612,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861940.83/warc/CC-MAIN-20240620105805-20240620135805-00765.warc.gz
531,025,344
8,094
Intuitionistic Logic Explorer < Previous   Next > Nearby theorems Mirrors  >  Home  >  ILE Home  >  Th. List  >  elixpsn GIF version Theorem elixpsn 6629 Description: Membership in a class of singleton functions. (Contributed by Stefan O'Rear, 24-Jan-2015.) Assertion Ref Expression elixpsn (𝐴𝑉 → (𝐹X𝑥 ∈ {𝐴}𝐵 ↔ ∃𝑦𝐵 𝐹 = {⟨𝐴, 𝑦⟩})) Distinct variable groups:   𝑥,𝐵,𝑦   𝑥,𝐹,𝑦   𝑥,𝐴,𝑦   𝑥,𝑉,𝑦 Proof of Theorem elixpsn Dummy variables 𝑧 𝑤 are mutually distinct and distinct from all other variables. StepHypRef Expression 1 sneq 3538 . . . 4 (𝑧 = 𝐴 → {𝑧} = {𝐴}) 21ixpeq1d 6604 . . 3 (𝑧 = 𝐴X𝑥 ∈ {𝑧}𝐵 = X𝑥 ∈ {𝐴}𝐵) 32eleq2d 2209 . 2 (𝑧 = 𝐴 → (𝐹X𝑥 ∈ {𝑧}𝐵𝐹X𝑥 ∈ {𝐴}𝐵)) 4 opeq1 3705 . . . . 5 (𝑧 = 𝐴 → ⟨𝑧, 𝑦⟩ = ⟨𝐴, 𝑦⟩) 54sneqd 3540 . . . 4 (𝑧 = 𝐴 → {⟨𝑧, 𝑦⟩} = {⟨𝐴, 𝑦⟩}) 65eqeq2d 2151 . . 3 (𝑧 = 𝐴 → (𝐹 = {⟨𝑧, 𝑦⟩} ↔ 𝐹 = {⟨𝐴, 𝑦⟩})) 76rexbidv 2438 . 2 (𝑧 = 𝐴 → (∃𝑦𝐵 𝐹 = {⟨𝑧, 𝑦⟩} ↔ ∃𝑦𝐵 𝐹 = {⟨𝐴, 𝑦⟩})) 8 elex 2697 . . 3 (𝐹X𝑥 ∈ {𝑧}𝐵𝐹 ∈ V) 9 vex 2689 . . . . . . 7 𝑧 ∈ V 10 vex 2689 . . . . . . 7 𝑦 ∈ V 119, 10opex 4151 . . . . . 6 𝑧, 𝑦⟩ ∈ V 1211snex 4109 . . . . 5 {⟨𝑧, 𝑦⟩} ∈ V 13 eleq1 2202 . . . . 5 (𝐹 = {⟨𝑧, 𝑦⟩} → (𝐹 ∈ V ↔ {⟨𝑧, 𝑦⟩} ∈ V)) 1412, 13mpbiri 167 . . . 4 (𝐹 = {⟨𝑧, 𝑦⟩} → 𝐹 ∈ V) 1514rexlimivw 2545 . . 3 (∃𝑦𝐵 𝐹 = {⟨𝑧, 𝑦⟩} → 𝐹 ∈ V) 16 eleq1 2202 . . . 4 (𝑤 = 𝐹 → (𝑤X𝑥 ∈ {𝑧}𝐵𝐹X𝑥 ∈ {𝑧}𝐵)) 17 eqeq1 2146 . . . . 5 (𝑤 = 𝐹 → (𝑤 = {⟨𝑧, 𝑦⟩} ↔ 𝐹 = {⟨𝑧, 𝑦⟩})) 1817rexbidv 2438 . . . 4 (𝑤 = 𝐹 → (∃𝑦𝐵 𝑤 = {⟨𝑧, 𝑦⟩} ↔ ∃𝑦𝐵 𝐹 = {⟨𝑧, 𝑦⟩})) 19 vex 2689 . . . . . 6 𝑤 ∈ V 2019elixp 6599 . . . . 5 (𝑤X𝑥 ∈ {𝑧}𝐵 ↔ (𝑤 Fn {𝑧} ∧ ∀𝑥 ∈ {𝑧} (𝑤𝑥) ∈ 𝐵)) 21 fveq2 5421 . . . . . . . 8 (𝑥 = 𝑧 → (𝑤𝑥) = (𝑤𝑧)) 2221eleq1d 2208 . . . . . . 7 (𝑥 = 𝑧 → ((𝑤𝑥) ∈ 𝐵 ↔ (𝑤𝑧) ∈ 𝐵)) 239, 22ralsn 3567 . . . . . 6 (∀𝑥 ∈ {𝑧} (𝑤𝑥) ∈ 𝐵 ↔ (𝑤𝑧) ∈ 𝐵) 2423anbi2i 452 . . . . 5 ((𝑤 Fn {𝑧} ∧ ∀𝑥 ∈ {𝑧} (𝑤𝑥) ∈ 𝐵) ↔ (𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵)) 25 simpl 108 . . . . . . . . 9 ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) → 𝑤 Fn {𝑧}) 26 fveq2 5421 . . . . . . . . . . . . 13 (𝑦 = 𝑧 → (𝑤𝑦) = (𝑤𝑧)) 2726eleq1d 2208 . . . . . . . . . . . 12 (𝑦 = 𝑧 → ((𝑤𝑦) ∈ 𝐵 ↔ (𝑤𝑧) ∈ 𝐵)) 289, 27ralsn 3567 . . 
. . . . . . . . 11 (∀𝑦 ∈ {𝑧} (𝑤𝑦) ∈ 𝐵 ↔ (𝑤𝑧) ∈ 𝐵) 2928biimpri 132 . . . . . . . . . 10 ((𝑤𝑧) ∈ 𝐵 → ∀𝑦 ∈ {𝑧} (𝑤𝑦) ∈ 𝐵) 3029adantl 275 . . . . . . . . 9 ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) → ∀𝑦 ∈ {𝑧} (𝑤𝑦) ∈ 𝐵) 31 ffnfv 5578 . . . . . . . . 9 (𝑤:{𝑧}⟶𝐵 ↔ (𝑤 Fn {𝑧} ∧ ∀𝑦 ∈ {𝑧} (𝑤𝑦) ∈ 𝐵)) 3225, 30, 31sylanbrc 413 . . . . . . . 8 ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) → 𝑤:{𝑧}⟶𝐵) 339fsn2 5594 . . . . . . . 8 (𝑤:{𝑧}⟶𝐵 ↔ ((𝑤𝑧) ∈ 𝐵𝑤 = {⟨𝑧, (𝑤𝑧)⟩})) 3432, 33sylib 121 . . . . . . 7 ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) → ((𝑤𝑧) ∈ 𝐵𝑤 = {⟨𝑧, (𝑤𝑧)⟩})) 35 opeq2 3706 . . . . . . . . 9 (𝑦 = (𝑤𝑧) → ⟨𝑧, 𝑦⟩ = ⟨𝑧, (𝑤𝑧)⟩) 3635sneqd 3540 . . . . . . . 8 (𝑦 = (𝑤𝑧) → {⟨𝑧, 𝑦⟩} = {⟨𝑧, (𝑤𝑧)⟩}) 3736rspceeqv 2807 . . . . . . 7 (((𝑤𝑧) ∈ 𝐵𝑤 = {⟨𝑧, (𝑤𝑧)⟩}) → ∃𝑦𝐵 𝑤 = {⟨𝑧, 𝑦⟩}) 3834, 37syl 14 . . . . . 6 ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) → ∃𝑦𝐵 𝑤 = {⟨𝑧, 𝑦⟩}) 399, 10fvsn 5615 . . . . . . . . . 10 ({⟨𝑧, 𝑦⟩}‘𝑧) = 𝑦 40 id 19 . . . . . . . . . 10 (𝑦𝐵𝑦𝐵) 4139, 40eqeltrid 2226 . . . . . . . . 9 (𝑦𝐵 → ({⟨𝑧, 𝑦⟩}‘𝑧) ∈ 𝐵) 429, 10fnsn 5177 . . . . . . . . 9 {⟨𝑧, 𝑦⟩} Fn {𝑧} 4341, 42jctil 310 . . . . . . . 8 (𝑦𝐵 → ({⟨𝑧, 𝑦⟩} Fn {𝑧} ∧ ({⟨𝑧, 𝑦⟩}‘𝑧) ∈ 𝐵)) 44 fneq1 5211 . . . . . . . . 9 (𝑤 = {⟨𝑧, 𝑦⟩} → (𝑤 Fn {𝑧} ↔ {⟨𝑧, 𝑦⟩} Fn {𝑧})) 45 fveq1 5420 . . . . . . . . . 10 (𝑤 = {⟨𝑧, 𝑦⟩} → (𝑤𝑧) = ({⟨𝑧, 𝑦⟩}‘𝑧)) 4645eleq1d 2208 . . . . . . . . 9 (𝑤 = {⟨𝑧, 𝑦⟩} → ((𝑤𝑧) ∈ 𝐵 ↔ ({⟨𝑧, 𝑦⟩}‘𝑧) ∈ 𝐵)) 4744, 46anbi12d 464 . . . . . . . 8 (𝑤 = {⟨𝑧, 𝑦⟩} → ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) ↔ ({⟨𝑧, 𝑦⟩} Fn {𝑧} ∧ ({⟨𝑧, 𝑦⟩}‘𝑧) ∈ 𝐵))) 4843, 47syl5ibrcom 156 . . . . . . 7 (𝑦𝐵 → (𝑤 = {⟨𝑧, 𝑦⟩} → (𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵))) 4948rexlimiv 2543 . . . . . 6 (∃𝑦𝐵 𝑤 = {⟨𝑧, 𝑦⟩} → (𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵)) 5038, 49impbii 125 . . . . 5 ((𝑤 Fn {𝑧} ∧ (𝑤𝑧) ∈ 𝐵) ↔ ∃𝑦𝐵 𝑤 = {⟨𝑧, 𝑦⟩}) 5120, 24, 503bitri 205 . . . 4 (𝑤X𝑥 ∈ {𝑧}𝐵 ↔ ∃𝑦𝐵 𝑤 = {⟨𝑧, 𝑦⟩}) 5216, 18, 51vtoclbg 2747 . . 3 (𝐹 ∈ V → (𝐹X𝑥 ∈ {𝑧}𝐵 ↔ ∃𝑦𝐵 𝐹 = {⟨𝑧, 𝑦⟩})) 538, 15, 52pm5.21nii 693 . 
2 (𝐹X𝑥 ∈ {𝑧}𝐵 ↔ ∃𝑦𝐵 𝐹 = {⟨𝑧, 𝑦⟩}) 543, 7, 53vtoclbg 2747 1 (𝐴𝑉 → (𝐹X𝑥 ∈ {𝐴}𝐵 ↔ ∃𝑦𝐵 𝐹 = {⟨𝐴, 𝑦⟩})) Colors of variables: wff set class Syntax hints:   → wi 4   ∧ wa 103   ↔ wb 104   = wceq 1331   ∈ wcel 1480  ∀wral 2416  ∃wrex 2417  Vcvv 2686  {csn 3527  ⟨cop 3530   Fn wfn 5118  ⟶wf 5119  ‘cfv 5123  Xcixp 6592 This theorem was proved from axioms:  ax-mp 5  ax-1 6  ax-2 7  ax-ia1 105  ax-ia2 106  ax-ia3 107  ax-io 698  ax-5 1423  ax-7 1424  ax-gen 1425  ax-ie1 1469  ax-ie2 1470  ax-8 1482  ax-10 1483  ax-11 1484  ax-i12 1485  ax-bndl 1486  ax-4 1487  ax-14 1492  ax-17 1506  ax-i9 1510  ax-ial 1514  ax-i5r 1515  ax-ext 2121  ax-sep 4046  ax-pow 4098  ax-pr 4131 This theorem depends on definitions:  df-bi 116  df-3an 964  df-tru 1334  df-nf 1437  df-sb 1736  df-eu 2002  df-mo 2003  df-clab 2126  df-cleq 2132  df-clel 2135  df-nfc 2270  df-ral 2421  df-rex 2422  df-reu 2423  df-v 2688  df-sbc 2910  df-un 3075  df-in 3077  df-ss 3084  df-pw 3512  df-sn 3533  df-pr 3534  df-op 3536  df-uni 3737  df-br 3930  df-opab 3990  df-mpt 3991  df-id 4215  df-xp 4545  df-rel 4546  df-cnv 4547  df-co 4548  df-dm 4549  df-rn 4550  df-res 4551  df-ima 4552  df-iota 5088  df-fun 5125  df-fn 5126  df-f 5127  df-f1 5128  df-fo 5129  df-f1o 5130  df-fv 5131  df-ixp 6593 This theorem is referenced by:  ixpsnf1o  6630 Copyright terms: Public domain W3C validator
3,890
5,159
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.078125
3
CC-MAIN-2024-26
latest
en
0.25953
https://gmatclub.com/forum/what-is-the-greatest-common-divisor-of-positive-integers-m-2362.html?kudos=1
1,498,437,922,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128320595.24/warc/CC-MAIN-20170625235624-20170626015624-00195.warc.gz
750,725,843
45,959
# What is the greatest common divisor of positive integers m Author Message Intern Joined: 06 Sep 2003 Posts: 19 Location: island What is the greatest common divisor of positive integers m [#permalink] ### Show Tags 12 Sep 2003, 10:12 This topic is locked. If you want to discuss this question please re-post it in the respective forum. What is the greatest common divisor of positive integers m and n? 1. m is a prime number. 2. m and n are consecutive integers. Thanks!! Mystery Senior Manager Joined: 22 May 2003 Posts: 329 Location: Uruguay ### Show Tags 12 Sep 2003, 17:21 B? If they are consecutive, the GCD is 1 Intern Joined: 06 Sep 2003 Posts: 19 Location: island ### Show Tags 12 Sep 2003, 17:32 Thanks!! mystery
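Statement 2 suffices because any common divisor of two consecutive integers must also divide their difference, which is 1. A quick numerical check of that fact:

```python
from math import gcd

# Consecutive integers are always coprime: d | n and d | n+1 implies d | 1.
assert all(gcd(n, n + 1) == 1 for n in range(1, 10_000))
print("gcd(n, n+1) == 1 for all tested n")
```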
402
1,526
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.84375
3
CC-MAIN-2017-26
latest
en
0.891919
https://www.scribd.com/doc/20668706/Pythagorean-Theorem-Word-Problems-Homework
1,521,664,962,000,000,000
text/html
crawl-data/CC-MAIN-2018-13/segments/1521257647692.51/warc/CC-MAIN-20180321195830-20180321215830-00161.warc.gz
872,798,832
25,752
# Take the WOO Name: Date: Section: Number : Homework: Pythagorean Theorem Word Problems 1. Find this missing side of this triangle: 5 8 2. Find the missing side of this triangle: 7 9 3. The bottom of a ladder must be placed 3 feet from a wall. The ladder is 12 feet long. How far above the ground does the ladder touch the wall? 4. How far from the base of the house do you need to place a 15-foot ladder so that it exactly reaches the top of a 12-foot tall wall? 5. Two sides of a right triangle are 8 and 12. a. Find the missing side if 8 and 12 are legs. b. Find the missing side if 8 and 12 are a leg and hypotenuse. Bonus: 6. What is the length of the diagonal of a 10 cm by 15 cm rectangle? 7. The diagonal of a rectangle is 25 in. The width is 15 inches. What is the length?
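Each problem on this worksheet reduces to c = √(a² + b²) for a hypotenuse or a = √(c² − b²) for a missing leg. The answers can be checked with a short Python sketch (the numbers come from problems 1, 3, 4, and 7; the helper name is mine):

```python
from math import hypot, sqrt

def leg(c, b):
    # Missing leg given hypotenuse c and the other leg b.
    return sqrt(c * c - b * b)

print(hypot(5, 8))   # problem 1: hypotenuse = sqrt(89) ≈ 9.43
print(leg(12, 3))    # problem 3: ladder reaches sqrt(135) ≈ 11.62 ft
print(leg(15, 12))   # problem 4: base must be 9.0 ft from the wall
print(leg(25, 15))   # problem 7: rectangle length is 20.0 in
```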
225
790
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.828125
4
CC-MAIN-2018-13
latest
en
0.870568
https://www.coursehero.com/file/6480464/quiz-3/
1,498,344,516,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128320362.97/warc/CC-MAIN-20170624221310-20170625001310-00142.warc.gz
843,451,151
69,245
# quiz 3 - View Results Quiz 3 Chs 7 8 9 (Only T/F MC... Name: Hussein Bzeih Attempt: 1 / 1 Out of: 16 Started: March 29, 2010 9:29pm Finished: March 29, 2010 10:20pm Time spent: 51 min. 41 sec. Student finished 28 min. 19 sec. ahead of the 80 min. time limit. Question 1 (4 points) Similar to HW #7.28, p. 268 - Blank 6th edition Arc-bot Technologies, manufacturers of six-axis electric servo-driven robots, has experienced the cash flows below in a shipping department. For each year the first number represents the Expenses and the second number represents the Savings. t = 0 -45,000 0.00; t = 1 -10,000 20,000; t = 2 -30,000 27,000; t = 3 -15,000 40,000; t = 4 -12,000 11,000; t = 5 -5,000 25,000. The number of i* values between 0% and 100% is: Student response: Percent Value Correct Response Student Response Answer Choices 100.0% a. one 0.0% b. three 0.0% c. two 0.0% d. five Score: 4 / 4 Question 2 (3 points) Similar to HW #8.30, p. 304 - Blank 6th edition An independent dirt contractor is trying to determine which size dump truck to buy. The cash flows associated with each size truck are estimated below. The contractor’s MARR is 20% per year, and all trucks are expected to have a useful life of 5 years. Truck Bed Size, cubic meters 15: -27,000; -20,500 Truck Bed Size, cubic meters 20: -32,000; -19,000 Truck Bed Size, cubic meters 25: -35,000; -18,000 The two figures for each size truck above represent: Initial ## This note was uploaded on 10/19/2011 for the course MES 304 taught by Professor Costa during the Spring '07 term at CSU Northridge.
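Question 1 can be settled numerically: add each year's expenses and savings into a net cash flow, then count how many times the present-worth function crosses zero for i between 0% and 100%. A hedged sketch (cash flows taken from the quiz statement; grid resolution is my choice):

```python
def npv(rate, flows):
    # Present worth, where flows[t] is the net cash flow in year t.
    return sum(c / (1 + rate) ** t for t, c in enumerate(flows))

# Net = expenses + savings for each year, from the quiz statement.
net = [-45_000, 10_000, -3_000, 25_000, -1_000, 20_000]

# Count sign changes of NPV(i) on a fine grid over 0%..100%.
grid = [i / 1000 for i in range(0, 1001)]
vals = [npv(i, net) for i in grid]
crossings = sum(1 for a, b in zip(vals, vals[1:]) if a * b < 0)
print(crossings)  # 1 root, matching the graded answer "one"
```

NPV starts at +6,000 at i = 0, ends at −37,062.50 at i = 100%, and is monotone in between, so there is exactly one i* in the interval.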
609
2,033
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.4375
3
CC-MAIN-2017-26
longest
en
0.892535
https://365financialanalyst.com/templates-and-models/mid-excel-template/
1,670,440,482,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446711218.21/warc/CC-MAIN-20221207185519-20221207215519-00400.warc.gz
113,819,800
27,972
# MID – Excel Template MID returns a specified number of characters from the middle of a text string. The user indicates how many characters to extract and which character to start from. An employee can extract from any position; MID is similar to the LEFT function, the main difference being that the user can decide which character of the word she wants to start from. Some other related topics you might be interested to explore are LEFT, RIGHT, and CONCATENATE. This is an open-access Excel template in XLSX format that will be useful for anyone who wants to work as a Financial Analyst, Business Analyst, Consultant, Corporate Executive, or everyone preparing a corporate presentation. You can now download the Excel template for free. ### 3-Statement Model – Excel Template The P&L, Balance sheet, and Cash flow statements are three interrelated parts. The P&L feeds net income on the liabilities and equity side of the Balance sheet. At the same time, we obtain Cash (an asset) by summing the bottom-line result of the Cash flow statement with previous year... ### Cash Flow – Excel Template The cash flow statement shows how a company generated and spent cash throughout a given timeframe. An important truth that is frequently neglected by inexperienced business owners is that profit does not equal cash. Every business owner and manager needs to have a clear idea of the cash flows...
### Balance Sheet – Excel Template If the P&L statement shows how profitable a company was over a given timeframe, we can say that the Balance sheet is like a momentary picture of the firm’s condition at the time of preparation. The Balance sheet shows what a company owns (assets) and what it owes (liabilities...
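Excel's MID(text, start_num, num_chars) uses 1-based positions. The behavior described above can be sketched as a Python equivalent (the function name is mine; Excel's MID also returns fewer characters when num_chars runs past the end, which the slice reproduces):

```python
def mid(text, start_num, num_chars):
    # Excel-style MID: 1-based start position, extract num_chars characters.
    return text[start_num - 1:start_num - 1 + num_chars]

print(mid("Financial", 4, 3))  # "anc", like =MID("Financial", 4, 3)
```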
445
2,173
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.609375
3
CC-MAIN-2022-49
longest
en
0.90352
https://library.fiveable.me/information-theory/unit-1/fundamental-concepts-key-principles/study-guide/iQs39r0Clj1MmAp6
1,726,578,959,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00049.warc.gz
337,718,925
42,354
# 1.2 Fundamental concepts and key principles Information theory fundamentals form the backbone of how we measure and understand information in various systems. This section introduces key concepts like entropy, mutual information, and channel capacity, which are crucial for quantifying uncertainty and information flow. These principles find wide-ranging applications, from data compression to error correction in digital communications. Understanding these concepts helps us optimize information transmission and storage in our increasingly data-driven world. ## Information Theory Fundamentals ### Concept of information measurement • Information reduces uncertainty about random variables; it is measured in bits, quantifying surprise or novelty in messages • A bit represents a binary choice between two equally likely outcomes, the fundamental unit of information • Self-information measures the information content of a single event, $I(x) = -\log_2(p(x))$, where p(x) is the probability of event x occurring • Average information content (entropy) is the expected value of self-information across all possible events ### Probability and information relationship • Inverse relationship between probability and information content: less probable events carry more information (winning the lottery); highly probable events carry less (the sun rising) • A uniform distribution maximizes information content: all outcomes equally likely (fair dice roll) • Deterministic events have zero information content: probability of 1 (certainty) (gravity pulling objects down) • The logarithmic nature of information allows additive properties when combining independent events (flipping two coins) ## Key Principles and Applications ### Key principles of information theory • Entropy measures the average uncertainty in a random variable, $H(X) = -\sum_{x} p(x) \log_2(p(x))$, maximum when all outcomes are equally likely (fair coin toss), minimum for deterministic variables (predetermined outcome) • Mutual information quantifies the dependence between two random variables, $I(X;Y) = H(X) - H(X|Y)$, the reduction in uncertainty about X given knowledge of Y (weather forecast affecting outdoor plans) • Channel capacity is the maximum rate of reliable
information transmission, $C = \max_{p(x)} I(X;Y)$, which depends on channel characteristics and input distribution (bandwidth in communication systems) • The source coding theorem relates entropy to data compression (ZIP files) • The channel coding theorem relates channel capacity to error-free communication (error-correcting codes in digital transmissions) ### Applications of information theory concepts • Data compression 1. Huffman coding creates variable-length prefix codes for lossless compression 2. Run-length encoding compresses sequences of repeated symbols • Hamming codes detect and correct single-bit errors in digital communications • Parity bits provide a simple error detection technique in data storage • Channel capacity calculation for the binary symmetric channel, $C = 1 - H(p)$, where p is the error probability (noisy telephone line) • Mutual information computation for discrete variables, $I(X;Y) = \sum_{x,y} p(x,y) \log_2(\frac{p(x,y)}{p(x)p(y)})$ (gene expression analysis) • Entropy estimation using empirical entropy calculated from observed frequencies in data (language modeling) • The information bottleneck method balances compression and preserved relevant information (feature selection in machine learning) ## Key Terms to Review (21) Average information content: Average information content refers to the expected value of the information produced by a stochastic source of data, quantifying how much uncertainty is reduced when a particular outcome is observed. This concept is central to understanding how information can be measured and transmitted, highlighting the relationship between probability and information in communication systems. Bit: A bit, short for binary digit, is the smallest unit of data in computing and digital communications. It represents a state of either 0 or 1, which forms the basis for all binary code used in computers and digital systems. Understanding bits is fundamental to grasping how information is stored, processed, and transmitted in various technologies.
Channel Capacity: Channel capacity is the maximum rate at which information can be reliably transmitted over a communication channel without errors, given the channel's characteristics and noise levels. Understanding channel capacity is essential for optimizing data transmission, developing efficient coding schemes, and ensuring reliable communication in various technologies. Channel Coding Theorem: The Channel Coding Theorem establishes the fundamental limits of reliable data transmission over a noisy communication channel. It asserts that it is possible to transmit information at a rate up to the channel capacity with an arbitrarily low probability of error, provided that appropriate coding schemes are used. This theorem connects the concepts of information theory and coding, illustrating how proper encoding can overcome noise and maximize the efficiency of data transfer. Claude Shannon: Claude Shannon was an American mathematician and electrical engineer, widely regarded as the father of Information Theory, who developed key concepts that quantify information, enabling efficient communication systems. His pioneering work laid the foundation for understanding how to measure and transmit information in the presence of noise, connecting directly to fundamental principles that drive modern telecommunications and data processing. Data Compression: Data compression is the process of encoding information using fewer bits than the original representation, reducing the amount of data needed to store or transmit. This technique plays a crucial role in enhancing efficiency by optimizing storage space and minimizing bandwidth usage, which are essential in various applications such as streaming, file storage, and communication systems. Deterministic Events: Deterministic events are outcomes that are fully predictable and follow a specific set of rules or laws, meaning there is no randomness involved. 
In information theory, these events are crucial because they allow for precise calculations and predictions, which can significantly influence data transmission and processing efficiency. Understanding deterministic events helps in grasping the foundational concepts of how information is structured and transmitted without ambiguity. Entropy Estimation: Entropy estimation refers to the process of quantifying the amount of uncertainty or randomness in a set of data. This concept is crucial for understanding how information is stored and transmitted, as it provides a measure of the inherent unpredictability in a random variable. By accurately estimating entropy, one can optimize coding strategies, improve data compression, and enhance the overall efficiency of communication systems. Error Detection and Correction: Error detection and correction refer to techniques used in digital communication and data storage to identify and rectify errors that occur during data transmission or storage. These methods ensure data integrity by allowing systems to detect corrupted or lost information and make necessary adjustments, thereby maintaining accurate communication between devices and reliable data retrieval. Hamming Codes: Hamming codes are a set of error-correcting codes that enable the detection and correction of errors in data transmission and storage. They work by adding redundant bits to the original data, allowing systems to identify and fix single-bit errors automatically. This is crucial for ensuring data integrity in various applications, particularly in digital communication and computer memory. Huffman Coding: Huffman coding is an efficient method of data compression that assigns variable-length codes to input characters, with shorter codes assigned to more frequent characters and longer codes to less frequent ones. 
This technique is closely tied to the principles of information theory, especially in the context of optimal coding strategies and entropy, making it a foundational concept in data compression algorithms. Information Bottleneck Method: The information bottleneck method is a technique in information theory that focuses on compressing the input data while retaining the most relevant information for predicting an output variable. It provides a framework for understanding how to balance the trade-off between retaining useful information and minimizing irrelevant data, effectively serving as a tool for feature selection and dimensionality reduction in various applications like machine learning and neural networks. Information Entropy: Information entropy is a measure of the unpredictability or uncertainty associated with a random variable. It quantifies the amount of information that is produced when one outcome of a random process occurs, and is essential for understanding how much information can be transmitted or stored in a system. This concept is foundational for analyzing data compression, coding schemes, and secure communication protocols. Logarithmic Nature of Information: The logarithmic nature of information refers to how information is measured and quantified using logarithmic scales, particularly in terms of bits. This concept is fundamental in understanding how information is processed, stored, and transmitted, allowing for a more efficient representation of data, especially in the context of communication systems and coding theory. Mutual Information: Mutual information is a measure of the amount of information that one random variable contains about another random variable. It quantifies the reduction in uncertainty about one variable given knowledge of the other, connecting closely to concepts like joint and conditional entropy as well as the fundamental principles of information theory. 
Noisy telephone line: A noisy telephone line refers to a communication channel where signal interference occurs, resulting in distortions or errors in the transmitted message. This noise can originate from various sources, such as electromagnetic interference, physical obstructions, or even hardware malfunctions, impacting the clarity and accuracy of the conversation taking place over the line. Understanding the concept of a noisy telephone line is crucial in information theory, as it highlights the challenges faced in transmitting information reliably and the importance of error detection and correction methods. Parity bits: Parity bits are extra bits added to a binary data set to help detect errors during data transmission or storage. They play a critical role in ensuring data integrity by indicating whether the number of set bits (1s) in the data is even or odd. This simple error-checking mechanism is a fundamental concept in error detection and correction, essential for reliable communication in digital systems. Run-Length Encoding: Run-length encoding (RLE) is a simple form of lossless data compression where sequences of the same data value, known as runs, are stored as a single data value and a count. This method is particularly effective for data with many repeated elements, as it reduces the amount of storage needed by replacing long sequences with a shorter representation. RLE connects to various fundamental concepts in information theory, showcases its applications in modern technology, and integrates well with transform coding techniques to optimize data compression. Self-information: Self-information quantifies the amount of information that a specific outcome provides, usually measured in bits. It helps to understand the uncertainty associated with an event; the more unlikely an event is, the higher its self-information value. 
This concept is foundational in communication systems, as it allows for the measurement of information content and plays a crucial role in determining the efficiency of data transmission. Source Coding Theorem: The Source Coding Theorem states that it is possible to compress the output of a discrete memoryless source to its entropy without losing any information. This theorem is fundamental in understanding how to efficiently represent and transmit data while minimizing redundancy, which ties into key concepts like data compression and channel capacity. Uniform Distribution: A uniform distribution is a probability distribution where all outcomes are equally likely within a specified range. This means that every event in the set has the same chance of occurring, leading to a flat probability function. Uniform distributions can be discrete, where a finite number of outcomes exist, or continuous, where the possible outcomes form an interval on the real line. This concept plays a crucial role in various areas such as statistics, information theory, and data analysis.
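The entropy formula $H(X) = -\sum_{x} p(x) \log_2(p(x))$ and the binary-symmetric-channel capacity $C = 1 - H(p)$ discussed in the study guide can be computed directly from their definitions. A minimal Python sketch (function names are my own, not from the guide):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum_x p(x) * log2 p(x), in bits.
    Terms with p(x) = 0 contribute nothing, so they are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with error probability p:
    C = 1 - H(p), where H is the binary entropy function."""
    return 1.0 - entropy([p, 1.0 - p])

# A fair coin toss has maximum entropy for two outcomes: exactly 1 bit.
print(entropy([0.5, 0.5]))   # 1.0
# A biased coin is more predictable, so it carries less information.
print(entropy([0.9, 0.1]))
# A noiseless channel (p = 0) carries 1 bit per use;
# at p = 0.5 the output is independent of the input and C = 0.
print(bsc_capacity(0.0))     # 1.0
```

The same `entropy` helper applied to empirical frequencies gives the simple entropy-estimation procedure the guide mentions for language modeling.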
2,142
12,490
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.234375
3
CC-MAIN-2024-38
latest
en
0.812236
https://www.teacherspayteachers.com/Product/Multiplying-Fractions-Using-Area-Models-Supplement-Common-Core-Aligned-1008471
1,513,478,660,000,000,000
text/html
crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00496.warc.gz
824,494,701
24,680
Multiplying Fractions Using Area Models Supplement (Common Core Aligned) File Type: Compressed Zip File, 1020 KB | 10 pages Product Description: Multiplying fractions is a key part of the fifth grade mathematics curriculum. This set of two worksheets focuses on helping students develop a conceptual understanding of what it means to multiply two fractions (answer keys included). Both worksheets include word problems that require students either to complete a fractional model or to create their own fractional model to solve, and they can be used as homework, assessment, independent practice, or remedial support for students who need additional practice with the concept. These worksheets are the perfect supplement to the Multiplying Fractions Using Area Models Practice Packet. This set of worksheets aligns with the following Common Core State Standards for Mathematics: -CCSS.Math.Content.5.NF.B.4 Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction. -CCSS.Math.Content.5.NF.B.6 Solve real world problems involving multiplication of fractions and mixed numbers, e.g., by using visual fraction models or equations to represent the problem. Please take some time to download a preview of this product. I hope your students benefit from working with this product! Thank you so much for your purchase. Total Pages: 10. Answer keys: Included. Teaching Duration: N/A. Price: \$1.50
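The arithmetic the standards target (multiplying a fraction or a whole number by a fraction, CCSS 5.NF.B.4) can be checked exactly with Python's `fractions` module; the specific numbers below are illustrative, not taken from the worksheets:

```python
from fractions import Fraction

# Area model: shading 2/3 of a square one way and 3/4 of it the other
# way overlaps on 6 of 12 equal parts, i.e. 1/2.
print(Fraction(2, 3) * Fraction(3, 4))  # 1/2

# A whole number times a fraction: 3 groups of 2/5 make 6/5.
print(3 * Fraction(2, 5))               # 6/5
```

`Fraction` keeps results in lowest terms automatically, which matches the simplified answers students read off a completed area model.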
336
1,580
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2017-51
latest
en
0.867615
https://tackk.com/s0aawx
1,506,301,434,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818690268.19/warc/CC-MAIN-20170925002843-20170925022843-00212.warc.gz
724,083,381
7,975
# Independent and Conditional Probability What is the difference between independent and conditional probability? What do they both mean? # Kahoot Quiz Write down 1 example of independent and 1 example of conditional probability. # Starter Class Question: Which is more likely: to pick the combination 3 of hearts, ace of spades, 7 of hearts, 9 of diamonds, or the Ace, 2, 3, 4 of spades? # Objectives 1) To be able to calculate the relative frequency of an event happening. 2) To be able to calculate the expected outcome of an experiment repeated multiple times. 3) To be able to design an experiment to test a hypothesis. What is the probability of rolling a 4 when we roll a die? How can we calculate the experimental probability? What parts of the experiment do we need to take into account? Was the relative frequency what you expected it to be? If not, what could be the reason? # Experiment 2, Counter Drop How can we calculate the probability that a dropped counter lands curved side up? # Analysis What was the result of the experiment? Was it what you expected? If you repeated the experiment 1000 times, how many curved-side-down landings would you expect?
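The relative-frequency and expected-outcome objectives in the lesson can be demonstrated with a short simulation (a sketch of my own, not part of the original lesson): roll a fair die many times and compare the empirical frequency of a 4 with the theoretical probability 1/6.

```python
import random

def relative_frequency(trials, seed=0):
    """Estimate P(rolling a 4) on a fair die by repeating the experiment."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    hits = sum(1 for _ in range(trials) if rng.randint(1, 6) == 4)
    return hits / trials

# Theoretical probability is 1/6, about 0.1667; the estimate settles
# near it as the number of trials grows. The expected count of 4s in
# 600 rolls is 600 * (1/6) = 100.
print(round(relative_frequency(60_000), 3))
```

The same pattern, with a biased coin in place of the die, answers the counter-drop question: estimate the chance of landing curved side up from the observed proportion, then multiply by 1000 to get the expected count.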
282
1,239
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.46875
3
CC-MAIN-2017-39
latest
en
0.923762
http://cerco.cs.unibo.it/export/2053/src/Clight/SimplifyCasts.ma
1,619,187,505,000,000,000
text/plain
crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00536.warc.gz
19,718,528
28,045
include "Clight/Csyntax.ma". include "Clight/TypeComparison.ma". (* IG: used to prove preservation of the semantics. *) include "Clight/Cexec.ma". include "Clight/casts.ma". include "Clight/CexecSound.ma". (* include "ASM/AssemblyProof.ma". *) (* I had to manually copy some of the lemmas in this file, including an axiom ... *) (* Attempt to remove unnecessary integer casts in Clight programs. This differs from the OCaml prototype by attempting to recursively determine whether subexpressions can be changed rather than using a fixed size pattern. As a result more complex expressions such as (char)((int)x + (int)y + (int)z) where x,y and z are chars can be simplified. A possible improvement that doesn't quite fit into this scheme would be to spot that casts can be removed in expressions like (int)x == (int)y where x and y have some smaller integer type. *) (* Attempt to squeeze an integer constant into a given type. This might be too conservative --- if we're going to cast it anyway, can't I just ignore the fact that the integer doesn't fit? *) (* [reduce_bits n m exp v] takes a vector of size n + m + 1 and returns (if all goes well) a vector of size * m+1 (an empty vector of bits makes no sense). It proceeds by removing /all/ the [n] leading bits equal * to [exp]. If it fails, it returns None. *) let rec reduce_bits (n,m:nat) (exp:bool) (v:BitVector (plus n (S m))) on n : option (BitVector (S m)) ≝ match n return λn. BitVector (n+S m) → option (BitVector (S m)) with [ O ⇒ λv. Some ? v | S n' ⇒ λv. if eq_b (head' ?? v) exp then reduce_bits ?? exp (tail ?? v) else None ? ] v. lemma reduce_bits_ok : ∀n,m.∀exp.∀v,v'. reduce_bits (S n) m exp v = Some ? v'→ reduce_bits n m exp (tail ?? v) = Some ? v'. #n #m #exp #v #v' #H whd in H:(??%?); lapply H -H cases (eq_b ? exp) [ 1: #H whd in H:(??%?); // | 2: #H normalize in H; destruct ] qed. lemma reduce_bits_trunc : ∀n,m.∀exp.∀v:(BitVector (plus n (S m))).∀v'. reduce_bits n m exp v = Some ? v' → v' = truncate n (S m) v.
#n #m elim n [ 1: #exp #v #v' #H normalize in v v' H; destruct normalize >vsplit_O_n // | 2: #n' #Hind #exp #v #v' #H >truncate_tail > (Hind ??? (reduce_bits_ok ?? exp v v' H)) // ] qed. lemma reduce_bits_dec : ∀n,m.∀exp.∀v. (∃v'.reduce_bits n m exp v = Some ? v') ∨ reduce_bits n m exp v = None ?. #n #m #exp #v elim (reduce_bits n m exp v) [ 1: %2 // | 2: #v' %1 @(ex_intro … v') // ] qed. (* pred_bitsize_of_intsize I32 = 31, … *) definition pred_bitsize_of_intsize : intsize → nat ≝ λsz. pred (bitsize_of_intsize sz). definition signed : signedness → bool ≝ λsg. match sg with [ Unsigned ⇒ false | Signed ⇒ true ]. (* [simplify_int sz sz' sg sg' i] tries to convert an integer [i] of width [sz] and signedness [sg] * into an integer of size [sz'] of signedness [sg']. * - It first proceeds by comparing the source and target width: if the target width is strictly superior * to the source width, the conversion fails. * - If the size is equal, the argument is returned as-is. * - If the target type is strictly smaller than the source, it tries to squeeze the integer to * the desired size. *) let rec simplify_int (sz,sz':intsize) (sg,sg':signedness) (i:bvint sz) : option (bvint sz') ≝ let bit ≝ signed sg ∧ head' … i in (* [nat_compare] does more than an innocent post-doc might think. It also computes the difference between the two args. * if x < y, nat_compare x y = lt(x, y-(x+1)) * if x = y, nat_compare x y = eq x * if x > y, nat_compare x y = gt(x-(y+1), y) *) match nat_compare (pred_bitsize_of_intsize sz) (pred_bitsize_of_intsize sz') return λn,m,x.BitVector (S n) → option (BitVector (S m)) with [ nat_lt _ _ ⇒ λi. None ? (* refuse to make constants larger *) | nat_eq _ ⇒ λi. Some ? i | nat_gt x y ⇒ λi. (* Here, we have [x]=31-([y]+1) and [y] ∈ {15; 7} OR [x] = 15-(7+1) and [y] = 7. I.e.: x=15,y=15 or x=23,y=7 or x=7,y=7. * In [reduce_bits n m bit i], [i] is supposed to have type BitVector n + (S m). 
Since its type is here (S x) + (S y), * we deduce that the actual arguments of [reduce_bits] are (S x) and (S y), which is consistent. * If [i] is of signed type and if it is negative, then it tries to remove the (S x) first "1" bits. * Otherwise, it tries to remove the (S x) first "0" bits. *) match reduce_bits ?? bit (i⌈BitVector (S (y+S x))↦BitVector ((S x) + (S y))⌉) with [ Some i' ⇒ if signed sg' then if eq_b bit (head' … i') then Some ? i' else None ? else Some ? i' | None ⇒ None ? ] ] i. >(commutative_plus_faster (S x)) @refl qed. lemma eq_intsize_identity : ∀id. eq_intsize id id = true. * normalize // qed. lemma sz_eq_identity : ∀s. sz_eq_dec s s = inl ?? (refl ? s). * normalize // qed. lemma type_eq_identity : ∀t. type_eq t t = true. #t normalize elim (type_eq_dec t t) [ 1: #Heq normalize // | 2: #H destruct elim H #Hcontr elim (Hcontr (refl ? t)) ] qed. lemma type_neq_not_identity : ∀t1, t2. t1 ≠ t2 → type_eq t1 t2 = false. #t1 #t2 #Hneq normalize elim (type_eq_dec t1 t2) [ 1: #Heq destruct elim Hneq #Hcontr elim (Hcontr (refl ? t2)) | 2: #Hneq' normalize // ] qed. definition size_le ≝ λsz1,sz2. match sz1 with [ I8 ⇒ True | I16 ⇒ match sz2 with [ I16 ⇒ True | I32 ⇒ True | _ ⇒ False ] | I32 ⇒ match sz2 with [ I32 ⇒ True | _ ⇒ False ] ]. definition size_lt ≝ λsz1,sz2. match sz1 with [ I8 ⇒ match sz2 with [ I16 ⇒ True | I32 ⇒ True | _ ⇒ False ] | I16 ⇒ match sz2 with [ I32 ⇒ True | _ ⇒ False ] | I32 ⇒ False ]. lemma size_lt_to_le : ∀sz1,sz2. size_lt sz1 sz2 → size_le sz1 sz2. #sz1 #sz2 elim sz1 elim sz2 normalize // qed. lemma size_lt_dec : ∀sz1,sz2. size_lt sz1 sz2 + (¬ (size_lt sz1 sz2)). * * normalize /2/ /3/ qed. lemma size_not_lt_to_ge : ∀sz1,sz2. ¬(size_lt sz1 sz2) → (sz1 = sz2) + (size_lt sz2 sz1). * * normalize /2/ /3/ qed. (* This function already exists in prop, we want it in type. *) definition sign_eq_dect : ∀sg1,sg2:signedness. (sg1 = sg2) + (sg1 ≠ sg2). * * normalize // qed. lemma size_absurd : ∀a,b. size_le a b → size_lt b a → False. 
* * normalize #H1 #H2 try (@(False_ind … H1)) try (@(False_ind … H2)) qed. (* This defines necessary conditions for [src_expr] to be coerced to "target_ty". * Again, these are /not/ sufficient conditions. See [simplify_inv] for the rest. *) definition necessary_conditions ≝ λsrc_sz.λsrc_sg.λtarget_sz.λtarget_sg. match size_lt_dec target_sz src_sz with [ inl Hlt ⇒ true | inr Hnlt ⇒ match sz_eq_dec target_sz src_sz with [ inl Hsz_eq ⇒ match sign_eq_dect src_sg target_sg with [ inl Hsg_eq ⇒ false | inr Hsg_neq ⇒ true ] | inr Hsz_neq ⇒ false ] ]. (* Inversion on necessary_conditions *) lemma necessary_conditions_spec : ∀src_sz,src_sg,target_sz, target_sg. (necessary_conditions src_sz src_sg target_sz target_sg = true) → ((size_lt target_sz src_sz) ∨ (src_sz = target_sz ∧ src_sg ≠ target_sg)). #src_sz #src_sg #target_sz #target_sg whd in match (necessary_conditions ????); cases (size_lt_dec target_sz src_sz) normalize nodelta [ 1: #Hlt #_ %1 // | 2: #Hnlt cases (sz_eq_dec target_sz src_sz) normalize nodelta [ 2: #_ #Hcontr destruct | 1: #Heq cases (sign_eq_dect src_sg target_sg) normalize nodelta [ 1: #_ #Hcontr destruct | 2: #Hsg_neq #_ %2 destruct /2/ ] ] ] qed. (* Compare the results [r1,r2] of the evaluation of two expressions. If [r1] is an * integer value smaller but containing the same stuff than [r2] then all is well. * If the first evaluation is erroneous, we don't care about anything else. *) definition smaller_integer_val ≝ λsrc_sz,dst_sz,sg. λr1,r2:res(val×trace). match r1 with [ OK res1 ⇒ let 〈val1, tr1〉 ≝ res1 in ∀v1. val1 = Vint src_sz v1 → match r2 with [ OK res2 ⇒ let 〈val2, tr2〉 ≝ res2 in ∃v2. (val2 = Vint dst_sz v2 ∧ v2 = cast_int_int src_sz sg dst_sz v1 ∧ tr1 = tr2 ∧ size_le dst_sz src_sz) | _ ⇒ False ] | Error errmsg1 ⇒ True ]. (* Simulation relation used for expression evaluation. *) inductive simulate (A : Type[0]) (r1 : res A) (r2 : res A) : Prop ≝ | SimOk : (∀a:A. r1 = OK ? a → r2 = OK ? a) → simulate A r1 r2 | SimFail : (∃err. r1 = Error ? 
err) → simulate A r1 r2. (* Invariant of simplify_expr *) inductive simplify_inv (ge : genv) (en : env) (m : mem) (e1 : expr) (e2 : expr) (target_sz : intsize) (target_sg : signedness) : bool → Prop ≝ (* Inv_eq states a standard simulation result. We enforce some needed equations on types to prove the cast cases. *) | Inv_eq : ∀result_flag. result_flag = false → typeof e1 = typeof e2 → simulate ? (exec_expr ge en m e1) (exec_expr ge en m e2) → simulate ? (exec_lvalue ge en m e1) (exec_lvalue ge en m e2) → simplify_inv ge en m e1 e2 target_sz target_sg result_flag (* Inv_coerce_ok states that we successfully squeezed the source expression to [target_sz]. The details are hidden in [smaller_integer_val]. *) | Inv_coerce_ok : ∀src_sz,src_sg. (typeof e1) = (Tint src_sz src_sg) → (typeof e2) = (Tint target_sz target_sg) → (smaller_integer_val src_sz target_sz src_sg (exec_expr ge en m e1) (exec_expr ge en m e2)) → simplify_inv ge en m e1 e2 target_sz target_sg true. (* Invariant of simplify_inside *) definition conservation ≝ λe,result:expr. ∀ge,en,m. simulate ? (exec_expr ge en m e) (exec_expr ge en m result) ∧ simulate ? (exec_lvalue ge en m e) (exec_lvalue ge en m result) ∧ typeof e = typeof result. (* This lemma proves that simplify_int actually implements an integer cast. *) (* The case 4 can be merged with cases 7 and 8. *) lemma simplify_int_implements_cast : ∀sz,sz'.∀sg,sg'.∀i,i'. simplify_int sz sz' sg sg' i = Some ? i' → i' = cast_int_int sz sg sz' i. * * [ 1: #sg #sg' #i #i' #Hsimp normalize in Hsimp ⊢ %; elim sg normalize destruct // | 2,3,6: #sg #sg' #i #i' #Hsimp normalize in Hsimp; destruct (* ⊢ %; destruct destruct whd in Hsimp:(??%?); destruct *) | 4: * * #i #i' #Hsimp whd in Hsimp:(??%?) ⊢ (??%?); normalize nodelta in Hsimp; normalize in i i' ⊢ %; normalize in match (signed ?)
in Hsimp; normalize in match (S (plus ??)) in Hsimp; normalize in match (plus 7 8) in Hsimp; lapply Hsimp -Hsimp cases (head' bool 15 i) normalize in match (andb ??); [ 1,3: elim (reduce_bits_dec 8 7 true i) [ 1,3: * #v' #Heq >Heq letin Heq_trunc ≝ (reduce_bits_trunc … Heq) normalize nodelta [ 1: cases (eq_b true ?) normalize #H destruct normalize @refl | 2: #H destruct normalize @refl ] | 2,4: #Heq >Heq normalize nodelta #H destruct ] | 2,4,5,6,7,8: elim (reduce_bits_dec 8 7 false i) [ 1,3,5,7,9,11: * #v' #Heq >Heq normalize nodelta letin Heq_trunc ≝ (reduce_bits_trunc … Heq) [ 1,3,4: cases (eq_b false ?) normalize nodelta #H destruct normalize @refl | 2,5,6: #H destruct normalize @refl ] | 2,4,6,8,10,12: #Heq >Heq normalize nodelta #H destruct ] ] | 5,9: * * #i #i' #Hsimp whd in Hsimp:(??%?) ⊢ (??%?); destruct @refl | 7, 8: * * #i #i' #Hsimp whd in Hsimp:(??%?) ⊢ (??%?); normalize nodelta in Hsimp; normalize in i i' ⊢ %; normalize in match (signed ?) in Hsimp; normalize in match (S (plus ??)) in Hsimp; normalize in match (plus 7 24) in Hsimp; lapply Hsimp -Hsimp cases (head' bool ? i) normalize in match (andb ??); [ 1,3,9,11: [ 1,2: (elim (reduce_bits_dec 24 7 true i)) | 3,4: (elim (reduce_bits_dec 16 15 true i)) ] [ 1,3,5,7: * #v' #Heq >Heq letin Heq_trunc ≝ (reduce_bits_trunc … Heq) normalize nodelta [ 1,3: cases (eq_b true ?) normalize #H destruct normalize @refl | 2,4: #H destruct normalize @refl ] | 2,4,6,8: #Heq >Heq normalize nodelta #H destruct ] | 2,4,5,6,7,8,10,12,13,14,15,16: [ 1,2,3,4,5,6: elim (reduce_bits_dec 24 7 false i) | 6,7,8,9,10,11,12: elim (reduce_bits_dec 16 15 false i) ] [ 1,3,5,7,9,11,13,15,17,19,21,23: * #v' #Heq >Heq normalize nodelta letin Heq_trunc ≝ (reduce_bits_trunc … Heq) [ 1,3,4,7,9,10: cases (eq_b false ?) normalize nodelta #H destruct normalize @refl | 2,5,6,8,11,12: #H destruct normalize @refl ] | 2,4,6,8,10,12,14,16,18,20,22,24: #Heq >Heq normalize nodelta #H destruct ] ] ] qed. 
(* Facts about cast_int_int *) (* copied from AssemblyProof *) lemma Vector_O: ∀A. ∀v: Vector A 0. v ≃ VEmpty A. #A #v generalize in match (refl … 0); cases v in ⊢ (??%? → ?%%??); // #n #hd #tl #abs @⊥ destruct (abs) qed. lemma Vector_Sn: ∀A. ∀n.∀v:Vector A (S n). ∃hd.∃tl.v ≃ VCons A n hd tl. #A #n #v generalize in match (refl … (S n)); cases v in ⊢ (??%? → ??(λ_.??(λ_.?%%??))); [ #abs @⊥ destruct (abs) | #m #hd #tl #EQ <(injective_S … EQ) %[@hd] %[@tl] // ] qed. lemma vector_append_zero: ∀A,m. ∀v: Vector A m. ∀q: Vector A 0. v = q@@v. #A #m #v #q >(Vector_O A q) % qed. corollary prod_vector_zero_eq_left: ∀A, n. ∀q: Vector A O. ∀r: Vector A n. 〈q, r〉 = 〈[[ ]], r〉. #A #n #q #r generalize in match (Vector_O A q …); #hyp >hyp in ⊢ (??%?); % qed. lemma vsplit_eq : ∀A. ∀m,n. ∀v : Vector A ((S m) + n). ∃v1:Vector A (S m). ∃v2:Vector A n. v = v1 @@ v2. # A #m #n elim m [ 1: normalize #v elim (Vector_Sn ?? v) #hd * #tl #Eq @(ex_intro … (hd ::: [[]])) @(ex_intro … tl) >Eq normalize // | 2: #n' #Hind #v elim (Vector_Sn ?? v) #hd * #tl #Eq elim (Hind tl) #tl1 * #tl2 #Eq_tl @(ex_intro … (hd ::: tl1)) @(ex_intro … tl2) destruct normalize // ] qed. lemma vsplit_eq2 : ∀A. ∀m,n : nat. ∀v : Vector A (m + n). ∃v1:Vector A m. ∃v2:Vector A n. v = v1 @@ v2. # A #m #n elim m [ 1: normalize #v @(ex_intro … (VEmpty ?)) @(ex_intro … v) normalize // | 2: #n' #Hind #v elim (Vector_Sn ?? v) #hd * #tl #Eq elim (Hind tl) #tl1 * #tl2 #Eq_tl @(ex_intro … (hd ::: tl1)) @(ex_intro … tl2) destruct normalize // ] qed. lemma vsplit_zero: ∀A,m. ∀v: Vector A m. 〈[[]], v〉 = vsplit A 0 m v. #A #m #v elim v [ % | #n #hd #tl #ih normalize in ⊢ (???%); % ] qed. lemma vsplit_zero2: ∀A,m. ∀v: Vector A m. 〈[[]], v〉 = vsplit' A 0 m v. #A #m #v elim v [ % | #n #hd #tl #ih normalize in ⊢ (???%); % ] qed. (* This is not very nice. Note that this axiom was copied verbatim from ASM/AssemblyProof.ma. * TODO sync with AssemblyProof.ma, in a better world we shouldn't have to copy all of this. 
*) axiom vsplit_succ: ∀A, m, n. ∀l: Vector A m. ∀r: Vector A n. ∀v: Vector A (m + n). ∀hd. v = l@@r → (〈l, r〉 = vsplit ? m n v → 〈hd:::l, r〉 = vsplit ? (S m) n (hd:::v)). axiom vsplit_succ2: ∀A, m, n. ∀l: Vector A m. ∀r: Vector A n. ∀v: Vector A (m + n). ∀hd. v = l@@r → (〈l, r〉 = vsplit' ? m n v → 〈hd:::l, r〉 = vsplit' ? (S m) n (hd:::v)). lemma vsplit_prod2: ∀A,m,n. ∀p: Vector A (m + n). ∀v: Vector A m. ∀q: Vector A n. p = v@@q → 〈v, q〉 = vsplit' A m n p. #A #m elim m [ #n #p #v #q #hyp >hyp <(vector_append_zero A n q v) >(prod_vector_zero_eq_left A …) @vsplit_zero2 | #r #ih #n #p #v #q #hyp >hyp cases (Vector_Sn A r v) #hd #exists cases exists #tl #jmeq >jmeq @vsplit_succ2 [1: % |2: @ih % ] ] qed. lemma vsplit_prod: ∀A,m,n. ∀p: Vector A (m + n). ∀v: Vector A m. ∀q: Vector A n. p = v@@q → 〈v, q〉 = vsplit A m n p. #A #m elim m [ #n #p #v #q #hyp >hyp <(vector_append_zero A n q v) >(prod_vector_zero_eq_left A …) @vsplit_zero | #r #ih #n #p #v #q #hyp >hyp cases (Vector_Sn A r v) #hd #exists cases exists #tl #jmeq >jmeq @vsplit_succ [1: % |2: @ih % ] ] qed. lemma cast_decompose : ∀s1, v. cast_int_int I32 s1 I8 v = (cast_int_int I16 s1 I8 (cast_int_int I32 s1 I16 v)). #s1 #v normalize elim s1 normalize nodelta normalize in v; elim (vsplit_eq ??? (v⌈Vector bool 32 ↦ Vector bool (16 + 16)⌉)) [ 2,4: // | 1,3: #l * #r normalize nodelta #Eq1 <(vsplit_prod bool 16 16 … Eq1) elim (vsplit_eq ??? (r⌈Vector bool 16 ↦ Vector bool (8 + 8)⌉)) [ 2,4: // | 1,3: #lr * #rr normalize nodelta #Eq2 <(vsplit_prod bool 8 8 … Eq2) cut (v = (l @@ lr) @@ rr) [ 1,3 : destruct >(vector_associative_append ? 16 8) // | 2,4: #Hrewrite destruct <(vsplit_prod bool 24 8 … Hrewrite) @refl ] ] ] qed. lemma cast_idempotent : ∀s1,s2,sz1,sz2,v. size_lt sz1 sz2 → cast_int_int sz2 s1 sz1 (cast_int_int sz1 s2 sz2 v) = v. #s1 #s2 * * #v elim s1 elim s2 normalize #H try @refl @(False_ind … H) qed. lemma cast_identity : ∀sz,sg,v. cast_int_int sz sg sz v = v. * * #v normalize // qed. 
lemma cast_collapse : ∀s1,s2,v. cast_int_int I32 s1 I8 (cast_int_int I16 s2 I32 v) = (cast_int_int I16 s1 I8 v). #s1 #s2 #v >cast_decompose >cast_idempotent [ 1: @refl | 2: // ] qed. lemma cast_composition_lt : ∀a_sz,a_sg, b_sz, b_sg, c_sz, val. size_lt c_sz a_sz → size_lt c_sz b_sz → (cast_int_int a_sz a_sg c_sz val = cast_int_int b_sz b_sg c_sz (cast_int_int a_sz a_sg b_sz val)). * #a_sg * #b_sg * #val whd in match (size_lt ??); whd in match (size_lt ??); #Hlt1 #Hlt2 [ 1,2,3,4,5,6,7,8,9,10,11,12,14,15,17,18,19,20,21,23,24,27: @(False_ind … Hlt1) @(False_ind … Hlt2) | 13,25,26: >cast_identity elim a_sg elim b_sg normalize // | 22: normalize elim b_sg elim a_sg normalize normalize in val; elim (vsplit_eq ??? (val⌈Vector bool 32 ↦ Vector bool (16 + 16)⌉)) [ 2,4,6,8: normalize // | 1,3,5,7: #left * #right normalize #Eq1 <(vsplit_prod bool 16 16 … Eq1) elim (vsplit_eq ??? (right⌈Vector bool 16 ↦ Vector bool (8 + 8)⌉)) [ 2,4,6,8: // | 1,3,5,7: #rightleft * #rightright normalize #Eq2 <(vsplit_prod bool 8 8 … Eq2) cut (val = (left @@ rightleft) @@ rightright) [ 1,3,5,7: destruct >(vector_associative_append ? 16 8) // | 2,4,6,8: #Hrewrite destruct <(vsplit_prod bool 24 8 … Hrewrite) @refl ] ] ] | 16: elim b_sg elim a_sg >cast_collapse @refl ] qed. lemma cast_composition : ∀a_sz,a_sg, b_sz, b_sg, c_sz, val. size_le c_sz a_sz → size_le c_sz b_sz → (cast_int_int a_sz a_sg c_sz val = cast_int_int b_sz b_sg c_sz (cast_int_int a_sz a_sg b_sz val)). #a_sz #a_sg #b_sz #b_sg #c_sz #val #Hle_c_a #Hle_c_b cases (size_lt_dec c_sz a_sz) cases (size_lt_dec c_sz b_sz) [ 1: #Hltb #Hlta @(cast_composition_lt … Hlta Hltb) | 2: #Hnltb #Hlta cases (size_not_lt_to_ge … Hnltb) [ 1: #Heq destruct >cast_identity // | 2: #Hltb @(False_ind … (size_absurd ?? Hle_c_b Hltb)) ] | 3: #Hltb #Hnlta cases (size_not_lt_to_ge … Hnlta) [ 1: #Heq destruct >cast_idempotent // | 2: #Hlta @(False_ind … (size_absurd ?? 
Hle_c_a Hlta)) ] | 4: #Hnltb #Hnlta cases (size_not_lt_to_ge … Hnlta) cases (size_not_lt_to_ge … Hnltb) [ 1: #Heq_b #Heq_a destruct >cast_identity >cast_identity // | 2: #Hltb #Heq @(False_ind … (size_absurd ?? Hle_c_b Hltb)) | 3: #Eq #Hlta @(False_ind … (size_absurd ?? Hle_c_a Hlta)) | 4: #Hltb #Hlta @(False_ind … (size_absurd ?? Hle_c_a Hlta)) ] ] qed. let rec assert_int_value (v : option val) (expected_size : intsize) : option (BitVector (bitsize_of_intsize expected_size)) ≝ match v with [ None ⇒ None ? | Some v ⇒ match v with [ Vint sz i ⇒ match sz_eq_dec sz expected_size with [ inl Heq ⇒ Some ?? | inr _ ⇒ None ? ] | _ ⇒ None ? ] ]. >Heq in i; #i @i qed. (* cast_int_int behaves as truncate (≃ vsplit) when downsizing *) (* ∀src_sz,target_sz,sg. ∀i. size_le target_sz src_sz → cast_int_int src_sz sg target_sz i = truncate *) lemma vsplit_to_truncate : ∀m,n,i. (\snd  (vsplit bool m n i)) = truncate m n i. #m #n #i normalize // qed. (* Some lemmas on how "simplifiable" operations behave under cast_int_int. *) lemma integer_add_cast_lt : ∀src_sz,target_sz,sg. ∀lhs_int,rhs_int. size_lt target_sz src_sz → (addition_n (bitsize_of_intsize target_sz) (cast_int_int src_sz sg target_sz lhs_int) (cast_int_int src_sz sg target_sz rhs_int) = cast_int_int src_sz sg target_sz (addition_n (bitsize_of_intsize src_sz) lhs_int rhs_int)). #src_sz #target_sz #sg #lhs_int #rhs_int #Hlt elim src_sz in Hlt lhs_int rhs_int; elim target_sz [ 1,2,3,5,6,9: normalize #H @(False_ind … H) | *: elim sg #_ normalize in match (bitsize_of_intsize ?); normalize in match (bitsize_of_intsize ?); #lint #rint normalize in match (cast_int_int ????); normalize in match (cast_int_int ????); whd in match (addition_n ???); whd in match (addition_n ???) in ⊢ (???%); >vsplit_to_truncate >vsplit_to_truncate cast_identity >cast_identity >cast_identity // qed. lemma integer_add_cast_le : ∀src_sz,target_sz,sg. ∀lhs_int,rhs_int.
size_le target_sz src_sz → (addition_n (bitsize_of_intsize target_sz) (cast_int_int src_sz sg target_sz lhs_int) (cast_int_int src_sz sg target_sz rhs_int) = cast_int_int src_sz sg target_sz (addition_n (bitsize_of_intsize src_sz) lhs_int rhs_int)). #src_sz #target_sz #sg #lhs_int #rhs_int #Hle cases (sz_eq_dec target_sz src_sz) [ 1: #Heq @(integer_add_cast_eq … Heq) | 2: #Hneq cut (size_lt target_sz src_sz) [ 1: elim target_sz in Hle Hneq; elim src_sz normalize // #_ * #H @(H … (refl ??)) | 2: #Hlt @(integer_add_cast_lt … Hlt) ] ] qed. lemma truncate_eat : ∀l,n,m,v. l = n → ∃tl. truncate (S n) m v = truncate l m tl. #l #n #m #v #len elim (Vector_Sn … v) #hd * #tl #Heq >len @(ex_intro … tl) >Heq >Heq elim (vsplit_eq2 … tl) #l * #r #Eq normalize <(vsplit_prod bool n m tl l r Eq) <(vsplit_prod2 bool n m tl l r Eq) normalize // qed. lemma integer_neg_trunc : ∀m,n. ∀i: BitVector (m+n). two_complement_negation n (truncate m n i) = truncate m n (two_complement_negation (m+n) i). #m elim m [ 1: #n #i normalize in i; whd in match (truncate ???); whd in match (truncate ???) in ⊢ (???%); Heq in ⊢ (??%?); >truncate_tail whd in match (tail ???) in ⊢ (??%?); whd in match (two_complement_negation ??) in ⊢ (??%?); lapply (Hind ? tl) #H whd in match (two_complement_negation ??) in H; (* trying to reduce add_with_carries *) normalize in match (S m'+n); whd in match (zero ?) in ⊢ (???%); >Heq in match (negation_bv ??) in ⊢ (???%); whd in match (negation_bv ??) in ⊢ (???%); >add_with_carries_unfold in ⊢ (???%); normalize in ⊢ (???%); cases hd normalize nodelta [ 1,2: (vsplit_to_truncate (S m')) >truncate_tail cases b normalize nodelta normalize in match (tail ???); @H ] ] qed. (* This was painful. *) lemma integer_sub_cast_lt : ∀src_sz,target_sz,sg. ∀lhs_int,rhs_int. 
size_lt target_sz src_sz → (subtraction (bitsize_of_intsize target_sz) (cast_int_int src_sz sg target_sz lhs_int) (cast_int_int src_sz sg target_sz rhs_int) = cast_int_int src_sz sg target_sz (subtraction (bitsize_of_intsize src_sz) lhs_int rhs_int)). #src_sz #target_sz #sg #lhs_int #rhs_int #Hlt elim src_sz in Hlt lhs_int rhs_int; elim target_sz [ 1,2,3,5,6,9: normalize #H @(False_ind … H) | *: elim sg #_ normalize in match (bitsize_of_intsize ?); normalize in match (bitsize_of_intsize ?); #lint #rint normalize in match (cast_int_int ????); normalize in match (cast_int_int ????); whd in match (subtraction ???); whd in match (subtraction ???) in ⊢ (???%); >vsplit_to_truncate >vsplit_to_truncate >integer_neg_trunc cast_identity >cast_identity >cast_identity // qed. lemma integer_sub_cast_le : ∀src_sz,target_sz,sg. ∀lhs_int,rhs_int. size_le target_sz src_sz → (subtraction (bitsize_of_intsize target_sz) (cast_int_int src_sz sg target_sz lhs_int) (cast_int_int src_sz sg target_sz rhs_int) = cast_int_int src_sz sg target_sz (subtraction (bitsize_of_intsize src_sz) lhs_int rhs_int)). #src_sz #target_sz #sg #lhs_int #rhs_int #Hle cases (sz_eq_dec target_sz src_sz) [ 1: #Heq @(integer_sub_cast_eq … Heq) | 2: #Hneq cut (size_lt target_sz src_sz) [ 1: elim target_sz in Hle Hneq; elim src_sz normalize // #_ * #H @(H … (refl ??)) | 2: #Hlt @(integer_sub_cast_lt … Hlt) ] ] qed. lemma simplify_int_success_lt : ∀sz,sg,sz',sg',i,i'. (simplify_int sz sz' sg sg' i=Some (bvint sz') i') → size_le sz' sz. * #sg * #sg' #i #i' #H whd in H:(??%?); try destruct normalize // qed. lemma smaller_integer_val_identity : ∀sz,sg,x. smaller_integer_val sz sz sg x x. #sz #sg * [ 2: #error // | 1: * #val #trace whd in match (smaller_integer_val ?????); #v1 #Hval %{v1} @conj try @conj try @conj // elim sz // ] qed. (* Inversion on exec_cast *) lemma exec_cast_inv : ∀castee_val,src_sz,src_sg,cast_sz,cast_sg,m,result. exec_cast m castee_val (Tint src_sz src_sg) (Tint cast_sz cast_sg) = OK ? 
result → ∃i. castee_val = Vint src_sz i ∧ result = Vint cast_sz (cast_int_int src_sz src_sg cast_sz i). #castee_val #src_sz #src_sg #cast_sz #cast_sg #m #result elim castee_val [ 1: | 2: #sz' #i | 3: #f | 4: #r | 5: #ptr ] [ 2: | *: whd in ⊢ ((??%?) → ?); #Habsurd destruct ] whd in ⊢ ((??%?) → ?); cases (sz_eq_dec sz' src_sz) [ 1: #Heq destruct >intsize_eq_elim_true normalize nodelta #Heq destruct %{i} /2/ | 2: #Hneq >intsize_eq_elim_false; try assumption #H destruct ] qed. (* Lemmas related to the Ebinop case *) lemma classify_add_int : ∀sz,sg. classify_add (Tint sz sg) (Tint sz sg) = add_case_ii sz sg. * * // qed. lemma classify_sub_int : ∀sz,sg. classify_sub (Tint sz sg) (Tint sz sg) = sub_case_ii sz sg. * * // qed. lemma bool_conj_inv : ∀a,b : bool. (a ∧ b) = true → a = true ∧ b = true. * * normalize #H @conj // qed. (* Operations where it is safe to use a smaller integer type on the assumption that we would cast it down afterwards anyway. *) definition binop_simplifiable ≝ λop. match op with [ Oadd ⇒ true | Osub ⇒ true | _ ⇒ false ]. (* Inversion principle for integer addition *) lemma iadd_inv : ∀sz,sg,v1,v2,m,r. sem_binary_operation Oadd v1 (Tint sz sg) v2 (Tint sz sg) m = Some ? r → ∃dsz,i1,i2. v1 = Vint dsz i1 ∧ v2 = Vint dsz i2 ∧ r = (Vint dsz (addition_n (bitsize_of_intsize dsz) i1 i2)). #sz #sg #v1 #v2 #m #r elim v1 [ 1: | 2: #sz' #i | 3: #f | 4: #r | 5: #ptr ] whd in ⊢ ((??%?) → ?); normalize nodelta >classify_add_int normalize nodelta #H destruct elim v2 in H; [ 1: | 2: #sz'' #i' | 3: #f' | 4: #r' | 5: #ptr' ] whd in ⊢ ((??%?) → ?); #H destruct elim (sz_eq_dec sz' sz'') [ 1: #Heq destruct >intsize_eq_elim_true in H; #Heq destruct %{sz''} %{i} %{i'} /3/ | 2: #Hneq >intsize_eq_elim_false in H; try assumption #H destruct ] qed. (* Inversion principle for integer subtraction. *) lemma isub_inv : ∀sz,sg,v1,v2,m,r. sem_binary_operation Osub v1 (Tint sz sg) v2 (Tint sz sg) m = Some ? r → ∃dsz,i1,i2. 
v1 = Vint dsz i1 ∧ v2 = Vint dsz i2 ∧ r = (Vint dsz (subtraction ? i1 i2)). #sz #sg #v1 #v2 #m #r elim v1 [ 1: | 2: #sz' #i | 3: #f | 4: #r | 5: #ptr ] whd in ⊢ ((??%?) → ?); normalize nodelta >classify_sub_int normalize nodelta #H destruct elim v2 in H; [ 1: | 2: #sz'' #i' | 3: #f' | 4: #r' | 5: #ptr' ] whd in ⊢ ((??%?) → ?); #H destruct elim (sz_eq_dec sz' sz'') [ 1: #Heq destruct >intsize_eq_elim_true in H; #Heq destruct %{sz''} %{i} %{i'} /3/ | 2: #Hneq >intsize_eq_elim_false in H; try assumption #H destruct ] qed. definition is_int : val → Prop ≝ λv. match v with [ Vint _ _ ⇒ True | _ ⇒ False ]. (* "negative" (in the sense of ¬ Some) inversion principle for integer addition *) lemma neg_iadd_inv : ∀sz,sg,v1,v2,m. sem_binary_operation Oadd v1 (Tint sz sg) v2 (Tint sz sg) m = None ? → ¬ (is_int v1) ∨ ¬ (is_int v2) ∨ ∃dsz1,dsz2,i1,i2. v1 = Vint dsz1 i1 ∧ v2 = Vint dsz2 i2 ∧ dsz1 ≠ dsz2. #sz #sg #v1 #v2 #m elim v1 [ 1: | 2: #sz' #i | 3: #f | 4: #r | 5: #ptr ] [ 2: | *: #_ %1 %1 % #H @H ] elim v2 [ 1: | 2: #sz'' #i' | 3: #f' | 4: #r' | 5: #ptr' ] [ 2: | *: #_ %1 %2 % #H @H ] whd in ⊢ ((??%?) → ?); normalize nodelta >classify_add_int normalize nodelta elim (sz_eq_dec sz' sz'') [ 1: #Heq destruct >intsize_eq_elim_true #Habsurd destruct (Habsurd) | 2: #Hneq >intsize_eq_elim_false try assumption #_ %2 %{sz'} %{sz''} %{i} %{i'} try @conj try @conj // ] qed. (* "negative" inversion principle for integer subtraction *) lemma neg_isub_inv : ∀sz,sg,v1,v2,m. sem_binary_operation Osub v1 (Tint sz sg) v2 (Tint sz sg) m = None ? → ¬ (is_int v1) ∨ ¬ (is_int v2) ∨ ∃dsz1,dsz2,i1,i2. v1 = Vint dsz1 i1 ∧ v2 = Vint dsz2 i2 ∧ dsz1 ≠ dsz2. #sz #sg #v1 #v2 #m elim v1 [ 1: | 2: #sz' #i | 3: #f | 4: #r | 5: #ptr ] [ 2: | *: #_ %1 %1 % #H @H ] elim v2 [ 1: | 2: #sz'' #i' | 3: #f' | 4: #r' | 5: #ptr' ] [ 2: | *: #_ %1 %2 % #H @H ] whd in ⊢ ((??%?) 
→ ?); normalize nodelta >classify_sub_int normalize nodelta elim (sz_eq_dec sz' sz'') [ 1: #Heq destruct >intsize_eq_elim_true #Habsurd destruct (Habsurd) | 2: #Hneq >intsize_eq_elim_false try assumption #_ %2 %{sz'} %{sz''} %{i} %{i'} try @conj try @conj // ] qed. lemma simplifiable_op_inconsistent : ∀op,sz,sg,v1,v2,m. ¬ (is_int v1) → binop_simplifiable op = true → sem_binary_operation op v1 (Tint sz sg) v2 (Tint sz sg) m = None ?. #op #sz #sg #v1 #v2 #m #H elim op normalize in match (binop_simplifiable ?); #H destruct elim v1 in H; [ 1,6: | 2,7: #sz' #i normalize in ⊢ (% → ?); * #H @(False_ind … (H I)) | 3,8: #f | 4,9: #r | 5,10: #ptr ] #_ whd in match (sem_binary_operation ??????); normalize nodelta >classify_add_int normalize nodelta // >classify_sub_int normalize nodelta // qed. notation > "hvbox('let' «ident x,ident y» 'as' ident E ≝ t 'in' s)" with precedence 10 for @{ match \$t return λx.x = \$t → ? with [ mk_Sig \${ident x} \${ident y} ⇒ λ\${ident E}.\$s ] (refl ? \$t) }. notation > "hvbox('let' « 〈ident x1,ident x2〉, ident y» 'as' ident E, ident F ≝ t 'in' s)" with precedence 10 for @{ match \$t return λx.x = \$t → ? with [ mk_Sig \${fresh a} \${ident y} ⇒ λ\${ident E}. match \${fresh a} return λx.x = \${fresh a} → ? with [ mk_Prod \${ident x1} \${ident x2} ⇒ λ\${ident F}. \$s ] (refl ? \${fresh a}) ] (refl ? \$t) }. (* This function will make your eyes bleed. You've been warned. * Implements a correct-by-construction version of Brian's original cast-removal code. Does so by * threading an invariant defined in [simplify_inv], which says roughly "simplification yields either what you hoped for, i.e. an integer value of the right size, OR something equivalent to the original expression". [simplify_expr] is not to be called directly: [simplify_inside] is the proper wrapper. * TODO: proper doc. Some cases are simplifiable. Some type equality tests are maybe dispensable.
* This function is slightly more conservative than the original one, but this should be incrementally * modifiable (replacing calls to simplify_inside by calls to simplify_expr, + proving correctness). * Also, I think that the proofs are factorizable to a great degree, but I'd rather have something * more or less "modular", case-by-case wise. *) let rec simplify_expr (e:expr) (target_sz:intsize) (target_sg:signedness) : Σresult:bool×expr. ∀ge,en,m. simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result) ≝ match e return λx. x = e → ? with [ Expr ed ty ⇒ λHexpr_eq. match ed return λx. ed = x → ? with [ Econst_int cst_sz i ⇒ λHdesc_eq. match ty return λx. x=ty → ? with [ Tint ty_sz sg ⇒ λHexprtype_eq. (* Ensure that the displayed type size [cst_sz] and actual size [sz] are equal ... *) match sz_eq_dec cst_sz ty_sz with [ inl Hsz_eq ⇒ match type_eq_dec ty (Tint target_sz target_sg) with [ inl Hdonothing ⇒ «〈true, e〉, ?» | inr Hdosomething ⇒ (* Do the actual useful work *) match simplify_int cst_sz target_sz sg target_sg i return λx. (simplify_int cst_sz target_sz sg target_sg i) = x → ? with [ Some i' ⇒ λHsimpl_eq. «〈true, Expr (Econst_int target_sz i') (Tint target_sz target_sg)〉, ?» | None ⇒ λ_. «〈false, e〉, ?» ] (refl ? (simplify_int cst_sz target_sz sg target_sg i)) ] | inr _ ⇒ (* The expression is ill-typed. *) «〈false, e〉, ?» ] | _ ⇒ λ_. «〈false, e〉, ?» ] (refl ? ty) | Ederef e1 ⇒ λHdesc_eq. let «e2,Hequiv» as Hsimplify ≝ simplify_inside e1 in «〈false, Expr (Ederef e2) ty〉, ?» | Eaddrof e1 ⇒ λHdesc_eq. let «e2,Hequiv» as Hsimplify ≝ simplify_inside e1 in «〈false, Expr (Eaddrof e2) ty〉, ?» | Eunop op e1 ⇒ λHdesc_eq. let «e2,Hequiv» as Hsimplify ≝ simplify_inside e1 in «〈false, Expr (Eunop op e2) ty〉, ?» | Ebinop op lhs rhs ⇒ λHdesc_eq. (* Type equality is enforced to prove the equalities needed in return by the invariant. *) match binop_simplifiable op return λx. (binop_simplifiable op) = x → (Σresult:(bool × expr). (∀ge,en,m.
simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result))) with [ true ⇒ λHop_simplifiable_eq. match assert_type_eq ty (typeof lhs) with [ OK Hty_eq_lhs ⇒ match assert_type_eq (typeof lhs) (typeof rhs) with [ OK Htylhs_eq_tyrhs ⇒ let «〈desired_type_lhs, lhs1〉, Hinv_lhs» as Hsimplify_lhs, Hpair_lhs ≝ simplify_expr lhs target_sz target_sg in let «〈desired_type_rhs, rhs1〉, Hinv_rhs» as Hsimplify_rhs, Hpair_rhs ≝ simplify_expr rhs target_sz target_sg in match desired_type_lhs ∧ desired_type_rhs return λx. (desired_type_lhs ∧ desired_type_rhs) = x → (Σresult:(bool × expr). (∀ge,en,m. simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result))) with [ true ⇒ λHdesired_eq. «〈true, Expr (Ebinop op lhs1 rhs1) (Tint target_sz target_sg)〉, ?» | false ⇒ λHdesired_eq. let «lhs1, Hequiv_lhs» as Hsimplify_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Hsimplify_rhs ≝ simplify_inside rhs in «〈false, Expr (Ebinop op lhs1 rhs1) ty〉, ?» ] (refl ? (desired_type_lhs ∧ desired_type_rhs)) | Error _ ⇒ let «lhs1, Hequiv_lhs» as Hsimplify_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Hsimplify_rhs ≝ simplify_inside rhs in «〈false, Expr (Ebinop op lhs1 rhs1) ty〉, ?» ] | Error _ ⇒ let «lhs1, Hequiv_lhs» as Hsimplify_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Hsimplify_rhs ≝ simplify_inside rhs in «〈false, Expr (Ebinop op lhs1 rhs1) ty〉, ?» ] | false ⇒ λHop_simplifiable_eq. let «lhs1, Hequiv_lhs» as Hsimplify_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Hsimplify_rhs ≝ simplify_inside rhs in «〈false, Expr (Ebinop op lhs1 rhs1) ty〉, ?» ] (refl ? (binop_simplifiable op)) | Ecast cast_ty castee ⇒ λHdesc_eq. match cast_ty return λx. x = cast_ty → ? with [ Tint cast_sz cast_sg ⇒ λHcast_ty_eq. match type_eq_dec ty cast_ty with [ inl Hcast_eq ⇒ match necessary_conditions cast_sz cast_sg target_sz target_sg return λx. x = (necessary_conditions cast_sz cast_sg target_sz target_sg) → (Σresult:(bool × expr). (∀ge,en,m. 
simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result))) with [ true ⇒ λHconditions. let «〈desired_type, castee1〉, Hcastee_inv» as Hsimplify1, Hpair1 ≝ simplify_expr castee target_sz target_sg in match desired_type return λx. desired_type = x → Σresult:bool×expr. (∀ge,en,m. simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result)) with [ true ⇒ λHdesired_eq. «〈true, castee1〉, ?» | false ⇒ λHdesired_eq. let «〈desired_type2, castee2〉, Hcast2» as Hsimplify2, Hpair2 ≝ simplify_expr castee cast_sz cast_sg in match desired_type2 return λx. desired_type2 = x → Σresult:bool×expr. (∀ge,en,m. simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result)) with [ true ⇒ λHdesired2_eq. «〈false, castee2〉, ?» | false ⇒ λHdesired2_eq. «〈false, Expr (Ecast ty castee2) cast_ty〉, ?» ] (refl ? desired_type2) ] (refl ? desired_type) | false ⇒ λHconditions. let «〈desired_type2, castee2〉, Hcast2» as Hsimplify2, Hpair2 ≝ simplify_expr castee cast_sz cast_sg in match desired_type2 return λx. desired_type2 = x → Σresult:bool×expr. (∀ge,en,m. simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result)) with [ true ⇒ λHdesired2_eq. «〈false, castee2〉, ?» | false ⇒ λHdesired2_eq. «〈false, Expr (Ecast ty castee2) cast_ty〉, ?» ] (refl ? desired_type2) ] (refl ? (necessary_conditions cast_sz cast_sg target_sz target_sg)) | inr Hcast_neq ⇒ (* inconsistent types ... *) let «castee1, Hcastee_equiv» as Hsimplify ≝ simplify_inside castee in «〈false, Expr (Ecast cast_ty castee1) ty〉, ?» ] | _ ⇒ λHcast_ty_eq. let «castee1, Hcastee_equiv» as Hsimplify ≝ simplify_inside castee in «〈false, Expr (Ecast cast_ty castee1) ty〉, ?» ] (refl ? cast_ty) | Econdition cond iftrue iffalse ⇒ λHdesc_eq. 
let «cond1, Hcond_equiv» as Hsimplify ≝ simplify_inside cond in match assert_type_eq ty (typeof iftrue) with [ OK Hty_eq_iftrue ⇒ match assert_type_eq (typeof iftrue) (typeof iffalse) with [ OK Hiftrue_eq_iffalse ⇒ let «〈desired_true, iftrue1〉, Htrue_inv» as Hsimplify_iftrue, Hpair_iftrue ≝ simplify_expr iftrue target_sz target_sg in let «〈desired_false, iffalse1〉, Hfalse_inv» as Hsimplify_iffalse, Hpair_iffalse ≝ simplify_expr iffalse target_sz target_sg in match desired_true ∧ desired_false return λx. (desired_true ∧ desired_false) = x → (Σresult:(bool × expr). (∀ge,en,m. simplify_inv ge en m e (\snd result) target_sz target_sg (\fst result))) with [ true ⇒ λHdesired_eq. «〈true, Expr (Econdition cond1 iftrue1 iffalse1) (Tint target_sz target_sg)〉, ?» | false ⇒ λHdesired_eq. let «iftrue1, Htrue_equiv» as Hsimplify_iftrue ≝ simplify_inside iftrue in let «iffalse1, Hfalse_equiv» as Hsimplify_iffalse ≝ simplify_inside iffalse in «〈false, Expr (Econdition cond1 iftrue1 iffalse1) ty〉, ?» ] (refl ? (desired_true ∧ desired_false)) | _ ⇒ let «iftrue1, Htrue_equiv» as Hsimplify_iftrue ≝ simplify_inside iftrue in let «iffalse1, Hfalse_equiv» as Hsimplify_iffalse ≝ simplify_inside iffalse in «〈false, Expr (Econdition cond1 iftrue1 iffalse1) ty〉, ?» ] | _ ⇒ let «iftrue1, Htrue_equiv» as Hsimplify_iftrue ≝ simplify_inside iftrue in let «iffalse1, Hfalse_equiv» as Hsimplify_iffalse ≝ simplify_inside iffalse in «〈false, Expr (Econdition cond1 iftrue1 iffalse1) ty〉, ?» ] (* Could probably do better with these, too. *) | Eandbool lhs rhs ⇒ λHdesc_eq. let «lhs1, Hlhs_equiv» as Eq1 ≝ simplify_inside lhs in let «rhs1, Hrhs_equiv» as Eq2 ≝ simplify_inside rhs in «〈false, Expr (Eandbool lhs1 rhs1) ty〉, ?» | Eorbool lhs rhs ⇒ λHdesc_eq. let «lhs1, Hlhs_equiv» as Eq1 ≝ simplify_inside lhs in let «rhs1, Hrhs_equiv» as Eq2 ≝ simplify_inside rhs in «〈false, Expr (Eorbool lhs1 rhs1) ty〉,?» (* Could also improve Esizeof *) | Efield rec_expr f ⇒ λHdesc_eq. 
let «rec_expr1, Hrec_expr_equiv» as Hsimplify ≝ simplify_inside rec_expr in «〈false,Expr (Efield rec_expr1 f) ty〉, ?» | Ecost l e1 ⇒ λHdesc_eq. (* The invariant requires that the toplevel [ty] type matches the type of [e1]. *) (* /!\ XXX /!\ We assume that the type of a cost-labelled expr is the type of the underlying expr. *) match type_eq_dec ty (typeof e1) with [ inl Heq ⇒ let «〈desired_type, e2〉, Hinv» as Hsimplify, Hpair ≝ simplify_expr e1 target_sz target_sg in «〈desired_type, Expr (Ecost l e2) (typeof e2)〉, ?» | inr Hneq ⇒ let «e2, Hexpr_equiv» as Eq ≝ simplify_inside e1 in «〈false, Expr (Ecost l e2) ty〉, ?» ] | Econst_float f ⇒ λHdesc_eq. «〈false, Expr ed ty〉, ?» (* | Evar id ⇒ λHdesc_eq. «〈false, Expr ed ty〉, ?» *) (* In order for the simplification function to be less dumb, we would have to use this line, which would in fact require altering the semantics of [load_value_of_type]. *) | Evar id ⇒ λHdesc_eq. «〈type_eq ty (Tint target_sz target_sg), Expr ed ty〉, ?» | Esizeof t ⇒ λHdesc_eq. «〈type_eq ty (Tint target_sz target_sg), Expr ed ty〉, ?» ] (refl ? ed) ] (refl ? e) and simplify_inside (e:expr) : Σresult:expr. conservation e result ≝ match e return λx. x = e → ? with [ Expr ed ty ⇒ λHexpr_eq. match ed return λx. x = ed → ? with [ Ederef e1 ⇒ λHdesc_eq. let «e2, Hequiv» as Hsimplify ≝ simplify_inside e1 in «Expr (Ederef e2) ty, ?» | Eaddrof e1 ⇒ λHdesc_eq. let «e2, Hequiv» as Hsimplify ≝ simplify_inside e1 in «Expr (Eaddrof e2) ty, ?» | Eunop op e1 ⇒ λHdesc_eq. let «e2, Hequiv» as Hsimplify ≝ simplify_inside e1 in «Expr (Eunop op e2) ty, ?» | Ebinop op lhs rhs ⇒ λHdesc_eq. let «lhs1, Hequiv_lhs» as Eq_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Eq_rhs ≝ simplify_inside rhs in «Expr (Ebinop op lhs1 rhs1) ty, ?» | Ecast cast_ty castee ⇒ λHdesc_eq. match type_eq_dec ty cast_ty with [ inl Hcast_eq ⇒ match cast_ty return λx. x = cast_ty → Σresult:expr. conservation e result with [ Tint cast_sz cast_sg ⇒ λHcast_ty_eq.
let «〈success, castee1〉, Htrans_inv» as Hsimplify, Hpair ≝ simplify_expr castee cast_sz cast_sg in match success return λx. x = success → Σresult:expr. conservation e result with [ true ⇒ λHsuccess_eq. «castee1, ?» | false ⇒ λHsuccess_eq. «Expr (Ecast cast_ty castee1) ty, ?» ] (refl ? success) | _ ⇒ λHcast_ty_eq. «e, ?» ] (refl ? cast_ty) | inr Hcast_neq ⇒ «e, ?» ] | Econdition cond iftrue iffalse ⇒ λHdesc_eq. let «cond1, Hequiv_cond» as Eq_cond ≝ simplify_inside cond in let «iftrue1, Hequiv_iftrue» as Eq_iftrue ≝ simplify_inside iftrue in let «iffalse1, Hequiv_iffalse» as Eq_iffalse ≝ simplify_inside iffalse in «Expr (Econdition cond1 iftrue1 iffalse1) ty, ?» | Eandbool lhs rhs ⇒ λHdesc_eq. let «lhs1, Hequiv_lhs» as Eq_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Eq_rhs ≝ simplify_inside rhs in «Expr (Eandbool lhs1 rhs1) ty, ?» | Eorbool lhs rhs ⇒ λHdesc_eq. let «lhs1, Hequiv_lhs» as Eq_lhs ≝ simplify_inside lhs in let «rhs1, Hequiv_rhs» as Eq_rhs ≝ simplify_inside rhs in «Expr (Eorbool lhs1 rhs1) ty, ?» | Efield rec_expr f ⇒ λHdesc_eq. let «rec_expr1, Hequiv_rec» as Eq_rec ≝ simplify_inside rec_expr in «Expr (Efield rec_expr1 f) ty, ?» | Ecost l e1 ⇒ λHdesc_eq. let «e2, Hequiv» as Eq ≝ simplify_inside e1 in «Expr (Ecost l e2) ty, ?» | _ ⇒ λHdesc_eq. «e, ?» ] (refl ? ed) ] (refl ? e). #ge #en #m [ 1,3,5,6,7,8,9,10,11,12: %1 try @refl cases (exec_expr ge en m e) #res try (@(SimOk ???) //) | 2: @(Inv_coerce_ok ge en m … target_sz target_sg target_sz target_sg) destruct /by refl/ (* whd in match (exec_expr ????); >eq_intsize_identity whd >sz_eq_identity normalize % [ 1: @conj // | 2: elim target_sz in i; normalize #i @I ] *) | 4: destruct @(Inv_coerce_ok ge en m ???? 
ty_sz sg) / by refl/ whd in match (exec_expr ????); whd in match (exec_expr ????); >eq_intsize_identity >eq_intsize_identity whd #v1 #Heq destruct (Heq) %{i'} try @conj try @conj try @conj // [ 1: @(simplify_int_implements_cast … Hsimpl_eq) | 2: @(simplify_int_success_lt … Hsimpl_eq) ] | 13: %1 // >Hexpr_eq cases (exec_expr ge en m e) #res try (@(SimOk ???) //) | 14: elim (type_eq_dec ty (Tint target_sz target_sg)) [ 1: #Heq >Heq >type_eq_identity @(Inv_coerce_ok ??????? target_sz target_sg) destruct [ 1,2: // | 3: @smaller_integer_val_identity ] | 2: #Hneq >(type_neq_not_identity … Hneq) %1 // destruct @(SimOk ???) // ] | 15: destruct %1 try @refl elim (Hequiv ge en m) * #Hexpr_sim #Hlvalue_sim #Htype_eq [ 1: (* Proving preservation of the semantics for expressions. *) cases Hexpr_sim [ 2: (* Case where the evaluation of e1 as an expression fails *) normalize * #err #Hfail >Hfail normalize nodelta @(SimFail ???) /2 by ex_intro/ | 1: (* Case where the evaluation of e1 as an expression succeeds (maybe) *) #Hsim %1 * #val #trace normalize #Hstep cut (∃ptr. (exec_expr ge en m e1 = OK ? 〈Vptr ptr, trace〉) ∧ (load_value_of_type ty m (pblock ptr) (poff ptr) = Some ? val)) [ 1: lapply Hstep -Hstep cases (exec_expr ge en m e1) [ 1: * #val' #trace' normalize nodelta cases val' normalize nodelta [ 1,2,3,4: #H1 destruct #H2 destruct #H3 destruct | 5: #pointer #Heq @(ex_intro … pointer) (* @(ex_intro … trace') *) cases (load_value_of_type ty m (pblock pointer) (poff pointer)) in Heq; normalize nodelta [ 1: #Heq destruct | 2: #val2 #Heq destruct @conj // ] ] | 2: normalize nodelta #errmesg #Hcontr destruct ] | 2: * #e1_ptr * #He1_eq_ptr #Hloadptr cut (∃ptr1. (exec_expr ge en m e2 = OK ? 〈Vptr ptr1, trace〉) ∧ (load_value_of_type ty m (pblock ptr1) (poff ptr1) = Some ? 
val)) [ 1: @(ex_intro … e1_ptr) @conj [ 1: @Hsim // | 2: // ] | 2: * #e2_ptr * #He2_exec #Hload_e2_ptr normalize >He2_exec normalize nodelta >Hload_e2_ptr normalize nodelta @refl ] ] ] | 2: (* Proving the preservation of the semantics for lvalues. *) cases Hexpr_sim [ 2: (* Case where the evaluation of e1 as an lvalue fails *) normalize * #err #Hfail >Hfail normalize nodelta @(SimFail ???) /2 by ex_intro/ | 1: (* Case where the evaluation of e1 as an expression succeeds (maybe) *) #Hsim %1 * * #block #offset #trace normalize #Hstep cut (∃ptr. (exec_expr ge en m e1 = OK ? 〈Vptr ptr, trace〉) ∧ pblock ptr = block ∧ poff ptr = offset) [ 1: lapply Hstep -Hstep cases (exec_expr ge en m e1) [ 1: * #val' #trace' normalize nodelta cases val' normalize nodelta [ 1,2,3,4: #H1 destruct #H2 destruct #H3 destruct | 5: #pointer #Heq @(ex_intro … pointer) (* @(ex_intro … trace') *) destruct try @conj try @conj // ] | 2: normalize nodelta #errmesg #Hcontr destruct ] | 2: * #e1_ptr * * #He1_eq_ptr #Hblock #Hoffset cut (∃ptr1. (exec_expr ge en m e2 = OK ? 〈Vptr ptr1, trace〉) ∧ pblock ptr1 = block ∧ poff ptr1 = offset) [ 1: @(ex_intro … e1_ptr) @conj try @conj // @Hsim // | 2: * #e2_ptr * * #He2_exec #Hblock #Hoffset normalize >He2_exec normalize nodelta // ] ] ] ] | 16: destruct %1 try @refl elim (Hequiv ge en m) * #Hexpr_sim #Hlvalue_sim #Htype_eq [ 1: (* Proving preservation of the semantics for expressions. *) cases Hlvalue_sim [ 2: (* Case where the evaluation of e1 as an expression fails *) * #err #Hfail @SimFail whd in match (exec_expr ????); >Hfail normalize nodelta /2 by ex_intro/ | 1: (* Case where the evaluation of e1 as an expression succeeds (maybe) *) #Hsim %1 * #val #trace whd in match (exec_expr ????); #Hstep cut (∃block,offset,r,ptype,pc. (exec_lvalue ge en m e1 = OK ? 〈block, offset, trace〉) ∧ (pointer_compat_dec block r = inl ?? 
pc) ∧ (ty = Tpointer r ptype) ∧ val = Vptr (mk_pointer r block pc offset)) [ 1: lapply Hstep -Hstep cases (exec_lvalue ge en m e1) [ 1: * * #block #offset #trace' normalize nodelta cases ty [ 2: #sz #sg | 3: #fsz | 4: #rg #ptr_ty | 5: #rg #array_ty #array_sz | 6: #domain #codomain | 7: #structname #fieldspec | 8: #unionname #fieldspec | 9: #rg #id ] normalize nodelta try (#Heq destruct) @(ex_intro … block) @(ex_intro … offset) @(ex_intro … rg) @(ex_intro … ptr_ty) cases (pointer_compat_dec block rg) in Heq; normalize [ 2: #Hnotcompat #Hcontr destruct | 1: #compat #Heq @(ex_intro … compat) try @conj try @conj try @conj destruct // ] | 2: normalize nodelta #errmesg #Hcontr destruct ] | 2: * #block * #offset * #region * #ptype * #pc * * * #Hexec_lvalue #Hptr_compat #Hty_eq #Hval_eq whd in match (exec_expr ????); >(Hsim … Hexec_lvalue) normalize nodelta destruct normalize nodelta >Hptr_compat // ] ] | 2: (* Proving preservation of the semantics of lvalues. *) @SimFail /2 by ex_intro/ ] | 17: destruct %1 try @refl elim (Hequiv ge en m) * #Hexpr_sim #Hlvalue_sim #Htype_eq [ 1: whd in match (exec_expr ge en m (Expr ??)); whd in match (exec_expr ge en m (Expr ??)); cases Hexpr_sim [ 2: * #error #Hexec >Hexec normalize nodelta @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e1) [ 2: #error #Hexec normalize nodelta @SimFail /2 by ex_intro/ | 1: * #val #trace #Hexec >(Hexec ? (refl ? (OK ? 
〈val,trace〉))) normalize nodelta @SimOk #a >Htype_eq #H @H ] ] | 2: @SimFail /2 by ex_intro/ ] | 18: destruct elim (bool_conj_inv … Hdesired_eq) #Hdesired_lhs #Hdesired_rhs -Hdesired_eq inversion (Hinv_lhs ge en m) [ 1: #result_flag_lhs #Hresult_lhs #Htype_lhs #Hsim_expr_lhs #Hsim_lvalue_lhs #Hresult_flag_lhs_eq_true #Hinv Hdesired_lhs #Habsurd destruct | 2: #lhs_src_sz #lhs_src_sg #Htype_lhs #Htype_lhs1 #Hsmaller_lhs #Hdesired_type_lhs #_ inversion (Hinv_rhs ge en m) [ 1: #result_flag_rhs #Hresult_rhs #Htype_rhs #Hsim_expr_rhs #Hsim_lvalue_rhs #Hdesired_type_rhs_eq #_ Hdesired_rhs #Habsurd destruct | 2: #rhs_src_sz #rhs_src_sg #Htype_rhs #Htype_rhs1 #Hsmaller_rhs #Hdesired_type_rhs #_ @(Inv_coerce_ok ge en m … target_sz target_sg lhs_src_sz lhs_src_sg) [ 1: >Htype_lhs // | 2: // | 3: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); (* Tidy up the type equations *) >Htype_lhs in Htylhs_eq_tyrhs; >Htype_rhs #Heq destruct lapply Hsmaller_rhs lapply Hsmaller_lhs generalize in match rhs_src_sz; #src_sz generalize in match rhs_src_sg; #src_sg -Hsmaller_lhs -Hsmaller_rhs -Htype_lhs -Htype_rhs -Hinv_lhs -Hinv_rhs >Htype_lhs1 >Htype_rhs1 -Htype_lhs1 -Htype_rhs1 (* Enumerate all the cases for the evaluation of the source expressions ... *) cases (exec_expr ge en m lhs); try // * #val_lhs #trace_lhs normalize nodelta cases (exec_expr ge en m rhs); try // * #val_rhs #trace_rhs normalize nodelta whd in match (m_bind ?????); (* specialize to the actual simplifiable operations. *) cases op in Hop_simplifiable_eq; [ 1,2: | *: normalize in ⊢ (% → ?); #H destruct (H) ] #_ [ 1: lapply (iadd_inv src_sz src_sg val_lhs val_rhs m) | 2: lapply (isub_inv src_sz src_sg val_lhs val_rhs m) ] cases (sem_binary_operation ? val_lhs (Tint src_sz src_sg) val_rhs (Tint src_sz src_sg) m) [ 1,3: #_ #_ #_ normalize @I ] #src_result #Hinversion_src elim (Hinversion_src src_result (refl ? (Some ? 
src_result))) #src_result_sz * #i1 * #i2 * * #Hval_lhs_eq #Hval_rhs_eq #Hsrc_result_eq whd in match (opt_to_res ???); whd in match (m_bind ?????); normalize nodelta >Hval_lhs_eq >Hval_rhs_eq #Hsmaller_rhs #Hsmaller_lhs whd #result_int #Hsrc_result >Hsrc_result in Hsrc_result_eq; #Hsrc_result_eq lapply (sym_eq ??? Hsrc_result_eq) -Hsrc_result_eq #Hsrc_result_eq cut (src_result_sz = src_sz) [ 1,3: destruct // ] #Hsz_eq lapply Hsmaller_lhs lapply Hsmaller_rhs cases (exec_expr ge en m lhs1) normalize nodelta [ 2,4: destruct #error normalize in ⊢ (% → ?); #H @(False_ind … (H i1 (refl ? (Vint src_sz i1)))) ] * #val_lhs1 #trace_lhs1 cases (exec_expr ge en m rhs1) [ 2,4: destruct #error #_ normalize in ⊢ (% → ?); #H @(False_ind … (H i2 (refl ? (Vint src_sz i2)))) ] * #val_rhs1 #trace_rhs1 whd in match (m_bind ?????); normalize nodelta [ 1: lapply (neg_iadd_inv target_sz target_sg val_lhs1 val_rhs1 m) lapply (iadd_inv target_sz target_sg val_lhs1 val_rhs1 m) | 2: lapply (neg_isub_inv target_sz target_sg val_lhs1 val_rhs1 m) lapply (isub_inv target_sz target_sg val_lhs1 val_rhs1 m) ] cases (sem_binary_operation ? val_lhs1 (Tint target_sz target_sg) val_rhs1 (Tint target_sz target_sg) m) [ 1,3: destruct #_ #Hneg_inversion elim (Hneg_inversion (refl ? (None ?))) (* Proceed by case analysis on Hneg_inversion to prove the absurdity of this branch *) * [ 1,4: whd in ⊢ (? → % → ?); normalize nodelta #Habsurd #Hcounterexample elim (Hcounterexample i1 (refl ? (Vint src_sz i1))) #i * * * #Hlhs1_is_int >Hlhs1_is_int in Habsurd; * #Habsurd @(False_ind … (Habsurd I)) | 2,5: whd in ⊢ (? → ? → % → ?); normalize nodelta #Habsurd #_ #Hcounterexample elim (Hcounterexample i2 (refl ? (Vint src_sz i2))) #i * * * #Hlhs1_is_int >Hlhs1_is_int in Habsurd; * #Habsurd @(False_ind … (Habsurd I)) | 3,6: #dsz1 * #dsz2 * #j1 * #j2 * * #Hval_lhs1 #Hval_rhs1 #Hsz_neq whd in ⊢ (% → % → ?); normalize nodelta #Hsmaller_lhs #Hsmaller_rhs elim (Hsmaller_lhs … i1 (refl ? 
(Vint src_sz i1))) #li * * * #Hval_lhs1_alt #H_lhs_cast_eq #Htrace_eq_lhs #Hsize_le elim (Hsmaller_rhs … i2 (refl ? (Vint src_sz i2))) #ri * * * #Hval_rhs1_alt #H_rhs_cast_eq #Htrace_eq_rhs #_ destruct elim Hsz_neq #Habsurd @(Habsurd (refl ? target_sz)) ] | 2,4: destruct #result #Hinversion #_ #Hsmaller_lhs #Hsmaller_rhs normalize nodelta elim (Hinversion result (refl ? (Some ? result))) #result_sz * #lhs_int * #rhs_int * * #Hlhs1_eq #Hrhs1_eq #Hop_eq elim (Hsmaller_lhs … i1 (refl ? (Vint src_sz i1))) #li * * * #Hval_lhs1_alt #H_lhs_cast_eq #Htrace_eq_lhs #Hsize_le elim (Hsmaller_rhs … i2 (refl ? (Vint src_sz i2))) #ri * * * #Hval_rhs1_alt #H_rhs_cast_eq #Htrace_eq_rhs #_ destruct [ 1: %{(addition_n (bitsize_of_intsize target_sz) (cast_int_int src_sz src_sg target_sz i1) (cast_int_int src_sz src_sg target_sz i2))} try @conj try @conj try @conj // >integer_add_cast_le try // | 2: %{(subtraction (bitsize_of_intsize target_sz) (cast_int_int src_sz src_sg target_sz i1) (cast_int_int src_sz src_sg target_sz i2))} try @conj try @conj try @conj // >integer_sub_cast_le try // ] ] ] ] ] | 19,20,21,22: destruct %1 try @refl elim (Hequiv_lhs ge en m) * #Hexpr_sim_lhs #Hlvalue_sim_lhs #Htype_eq_lhs elim (Hequiv_rhs ge en m) * #Hexpr_sim_rhs #Hlvalue_sim_rhs #Htype_eq_rhs [ 1,3,5,7: whd in match (exec_expr ????); whd in match (exec_expr ????); cases Hexpr_sim_lhs [ 2,4,6,8: * #error #Herror >Herror @SimFail /2 by refl, ex_intro/ | *: cases (exec_expr ge en m lhs) [ 2,4,6,8: #error #_ @SimFail /2 by refl, ex_intro/ | *: * #lval #ltrace #Hsim_lhs normalize nodelta cases Hexpr_sim_rhs [ 2,4,6,8: * #error #Herror >Herror @SimFail /2 by refl, ex_intro/ | *: cases (exec_expr ge en m rhs) [ 2,4,6,8: #error #_ @SimFail /2 by refl, ex_intro/ | *: * #rval #rtrace #Hsim_rhs whd in match (exec_expr ??? (Expr (Ebinop ???) ?)); >(Hsim_lhs 〈lval,ltrace〉 (refl ? (OK ? 〈lval,ltrace〉))) >(Hsim_rhs 〈rval,rtrace〉 (refl ? (OK ? 
〈rval,rtrace〉))) normalize nodelta >Htype_eq_lhs >Htype_eq_rhs @SimOk * #val #trace #H @H ] ] ] ] | *: @SimFail /2 by refl, ex_intro/ ] (* Jump to the cast cases *) | 23,30,31,32,33,34,35,36: %1 try @refl [ 1,4,7,10,13,16,19,22: destruct // ] elim (Hcastee_equiv ge en m) * #Hexec_sim #Hlvalue_sim #Htype_eq (* exec_expr simulation *) [ 1,3,5,7,9,11,13,15: cases Hexec_sim [ 2,4,6,8,10,12,14,16: destruct * #error #Hexec_fail @SimFail whd in match (exec_expr ge en m ?); >Hexec_fail /2 by refl, ex_intro/ | 1,3,5,7,9,11,13,15: #Hsim @SimOk * #val #trace Hdesc_eq whd in match (exec_expr ge en m ?); #Hstep cut (∃v1. exec_expr ge en m castee = OK ? 〈v1,trace〉 ∧ exec_cast m v1 (typeof castee) cast_ty = OK ? val) [ 1,3,5,7,9,11,13,15: lapply Hstep -Hstep cases (exec_expr ge en m castee) [ 2,4,6,8,10,12,14,16: #error1 normalize nodelta #Hcontr destruct | 1,3,5,7,9,11,13,15: * #val1 #trace1 normalize nodelta #Hstep @(ex_intro … val1) cases (exec_cast m val1 (typeof castee) cast_ty) in Hstep; [ 2,4,6,8,10,12,14,16: #error #Hstep normalize in Hstep; destruct | 1,3,5,7,9,11,13,15: #result #Hstep normalize in Hstep; destruct @conj @refl ] ] | 2,4,6,8,10,12,14,16: * #v1 * #Hexec_expr #Hexec_cast whd in match (exec_expr ge en m ?); >(Hsim … Hexec_expr ) normalize nodelta Hexec_cast // ] ] | 2,4,6,8,10,12,14,16: destruct @SimFail /2 by refl, ex_intro/ ] | 24: destruct inversion (Hcastee_inv ge en m) [ 1: #result_flag #Hresult_flag #Htype_eq #Hexpr_sim #Hlvalue_sim #Hresult_flag_2 Htype_castee (* Simplify the goal by culling impossible cases, using Hsmaller_val *) cases (exec_expr ge en m castee) in Hsmaller_eval; [ 2: #error // | 1: * #castee_val #castee_trace #Hsmaller normalize nodelta lapply (exec_cast_inv castee_val src_sz src_sg cast_sz cast_sg m) cases (exec_cast m castee_val (Tint src_sz src_sg) (Tint cast_sz cast_sg)) [ 2: #error #_ @I | 1: #result #Hinversion elim (Hinversion result (refl ? (OK ? 
result))) #castee_int * #Hcastee_val_eq #Hresult_eq whd in match (m_bind ?????); #result_int #Hresult_eq2 cases (exec_expr ge en m castee1) in Hsmaller; [ 2: #error normalize in ⊢ (% → ?); #Habsurd @(False_ind … (Habsurd castee_int Hcastee_val_eq)) | 1: * #val1 #trace1 whd in ⊢ (% → ?); normalize nodelta #Hsmaller elim (Hsmaller castee_int Hcastee_val_eq) #val1_int * * * #Hval1_eq #Hval1_int_eq #Hcastee_trace_eq destruct #Hle %{(cast_int_int src_sz src_sg target_sz castee_int)} try @conj try @conj try @conj try // [ 1: @cast_composition ] try assumption elim (necessary_conditions_spec … (sym_eq … Hconditions)) [ 2,4: * #Heq >Heq #_ elim target_sz // | 1,3: #Hlt @(size_lt_to_le ?? Hlt) ] ] ] ] ] ] | 25,27: destruct inversion (Hcast2 ge en m) [ 1,3: (* Impossible case. *) #result_flag #Hresult #Htype_eq #Hsim_expr #Hsim_lvalue #Hresult_contr Htype_castee2 // | 2,5: (* Prove simulation for exec_expr *) whd in match (exec_expr ??? (Expr ??)); cases (exec_expr ge en m castee) in Hsmaller_eval; [ 2,4: (* erroneous evaluation of the original expression *) #error #Hsmaller_eval @SimFail @(ex_intro … error) // | 1,3: * #val #trace normalize nodelta >Htype_castee lapply (exec_cast_inv val src_sz src_sg cast_sz cast_sg m) cases (exec_cast m val (Tint src_sz src_sg) (Tint cast_sz cast_sg)) [ 2,4: #error #_ #_ @SimFail /2 by ex_intro/ | 1,3: #result #Hinversion elim (Hinversion result (refl ??)) #val_int * #Hval_eq #Hresult_eq cases (exec_expr ge en m castee2) [ 2,4: #error #Hsmaller_eval normalize in Hsmaller_eval; @(False_ind … (Hsmaller_eval val_int Hval_eq)) | 1,3: * #val1 #trace1 #Hsmaller elim (Hsmaller val_int Hval_eq) #val1_int * * * #Hval1_eq #Hcast_eq #Htrace_eq #Hle destruct @SimOk normalize #a #H @H ] ] ] | 3,6: @SimFail /2 by refl, ex_intro/ ] ] | 26,28: destruct inversion (Hcast2 ge en m) [ 2,4: (* Impossible case. 
*) #src_sz #src_sg #Htype_castee #Htype_castee2 #Hsmaller_eval #Habsurd #Hinv_eq (* Do some gymnastics to transform the Habsurd jmeq into a proper, 'destruct'able eq *) letin Habsurd_eq ≝ (jmeq_to_eq ??? Habsurd) lapply Habsurd_eq -Habsurd_eq -Habsurd #Habsurd destruct | 1,3: (* All our attempts at casting down the expression have failed. We still use the resulting expression, as we may have discovered and simplified unrelated casts. *) #result_flag #Hresult #Htype_eq #Hsim_expr #Hsim_lvalue #_ #Hinv @(Inv_eq ???????) // [ 1,3: (* Simulation for exec_expr *) whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim_expr [ 2,4: * #error #Hexec_err >Hexec_err @SimFail @(ex_intro … error) // | 1,3: #Hexec_ok cases (exec_expr ge en m castee) in Hexec_ok; [ 2,4: #error #Hsim @SimFail normalize nodelta /2/ | 1,3: * #val #trace #Hsim normalize nodelta >Htype_eq >(Hsim 〈val,trace〉 (refl ? (OK ? 〈val,trace〉))) normalize nodelta @SimOk #a #H @H ] ] | 2,4: @SimFail /2 by refl, ex_intro/ ] ] | 29: destruct elim (Hcastee_equiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq @(Inv_eq ???????) // whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); [ 1: cases Hsim_expr [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by refl, ex_intro/ | 1: #Hexec_ok @SimOk * #val #trace cases (exec_expr ge en m castee) in Hexec_ok; [ 2: #error #Habsurd normalize in Habsurd; normalize nodelta #H destruct | 1: * #val #trace #Hexec_ok normalize nodelta >(Hexec_ok … 〈val, trace〉 (refl ? (OK ?
〈val, trace〉))) >Htype_eq normalize nodelta #H @H ] ] | 2: @SimFail /2 by refl, ex_intro/ ] | 37: destruct elim (bool_conj_inv … Hdesired_eq) #Hdesired_true #Hdesired_false -Hdesired_eq inversion (Htrue_inv ge en m) [ 1: #result_flag_true #Hresult_true #Htype_true #Hsim_expr_true #Hsim_lvalue_true #Hresult_flag_true_eq_false #Hinv Hdesired_true #Habsurd destruct | 2: #true_src_sz #true_src_sg #Htype_eq_true #Htype_eq_true1 #Hsmaller_true #_ #Hinv_true inversion (Hfalse_inv ge en m) [ 1: #result_flag_false #Hresult_false #Htype_false #Hsim_expr_false #Hsim_lvalue_false #Hresult_flag_false_eq_false #Hinv Hdesired_false #Habsurd destruct | 2: #false_src_sz #false_src_sg #Htype_eq_false #Htype_eq_false1 #Hsmaller_false #_ #Hinv_false >Htype_eq_true @(Inv_coerce_ok ??????? true_src_sz true_src_sg) [ 1,2: // | 3: whd in match (exec_expr ????); whd in match (exec_expr ??? (Expr ??)); elim (Hcond_equiv ge en m) * #Hexec_cond_sim #_ #Htype_cond_eq cases Hexec_cond_sim [ 2: * #error #Herror >Herror normalize @I | 1: cases (exec_expr ge en m cond) [ 2: #error #_ normalize @I | 1: * #cond_val #cond_trace #Hcond_sim >(Hcond_sim 〈cond_val,cond_trace〉 (refl ? (OK ? 
〈cond_val,cond_trace〉))) normalize nodelta >Htype_cond_eq cases (exec_bool_of_val cond_val (typeof cond1)) * [ 3,4: normalize // | 1,2: normalize in match (m_bind ?????); normalize in match (m_bind ?????); -Hexec_cond_sim -Hcond_sim -cond_val [ 1: (* true branch taken *) cases (exec_expr ge en m iftrue) in Hsmaller_true; [ 2: #error #_ @I | 1: * #val_true_branch #trace_true_branch #Hsmaller #val_true_branch #Hval_true_branch lapply Hsmaller -Hsmaller cases (exec_expr ge en m iftrue1) [ 2: #error normalize in ⊢ (% → ?); #Hsmaller @(False_ind … (Hsmaller val_true_branch Hval_true_branch)) | 1: * #val_true1_branch #trace_true1_branch #Hsmaller normalize nodelta elim (Hsmaller val_true_branch Hval_true_branch) #val_true1_int * * * #val_true1_branch #Hval_cast_eq #Htrace_eq #Hle %{val_true1_int} try @conj try @conj try @conj // ] ] | 2: (* false branch taken. Same proof as above, different arguments ... *) cut (false_src_sz = true_src_sz ∧ false_src_sg = true_src_sg) [ 1: >Htype_eq_true in Hiftrue_eq_iffalse; >Htype_eq_false #Htype_eq destruct (Htype_eq) @conj @refl ] * #Hsz_eq #Hsg_eq destruct cases (exec_expr ge en m iffalse) in Hsmaller_false; [ 2: #error #_ @I | 1: destruct * #val_false_branch #trace_false_branch #Hsmaller #val_false_branch #Hval_false_branch lapply Hsmaller -Hsmaller cases (exec_expr ge en m iffalse1) [ 2: #error normalize in ⊢ (% → ?); #Hsmaller @(False_ind … (Hsmaller val_false_branch Hval_false_branch)) | 1: * #val_false1_branch #trace_false1_branch #Hsmaller normalize nodelta elim (Hsmaller val_false_branch Hval_false_branch) #val_false1_int * * * #val_false1_branch #Hval_cast_eq #Htrace_eq #Hle %{val_false1_int} try @conj try @conj try @conj // ] ] ] ] ] ] ] ] ] | 38,39,40: destruct elim (Hcond_equiv ge en m) * #Hsim_expr_cond #Hsim_vlalue_cond #Htype_cond_eq elim (Htrue_equiv ge en m) * #Hsim_expr_true #Hsim_vlalue_true #Htype_true_eq elim (Hfalse_equiv ge en m) * #Hsim_expr_false #Hsim_vlalue_false #Htype_false_eq %1 try @refl [ 1,3,5: whd 
in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases (exec_expr ge en m cond) in Hsim_expr_cond; [ 2,4,6: #error #_ @SimFail /2 by ex_intro/ | 1,3,5: * #cond_val #cond_trace normalize nodelta cases (exec_expr ge en m cond1) [ 2,4,6: #error * [ 1,3,5: #Hsim lapply (Hsim 〈cond_val,cond_trace〉 (refl ? (OK ? 〈cond_val,cond_trace〉))) #Habsurd destruct | *: * #error #Habsurd destruct ] | 1,3,5: * #cond_val1 #cond_trace1 * [ 2,4,6: * #error #Habsurd destruct | 1,3,5: #Hsim lapply (Hsim 〈cond_val,cond_trace〉 (refl ? (OK ? 〈cond_val,cond_trace〉))) #Hcond_eq normalize nodelta destruct (Hcond_eq) >Htype_cond_eq cases (exec_bool_of_val cond_val (typeof cond1)) [ 2,4,6: #error @SimFail normalize /2 by refl, ex_intro / | 1,3,5: * (* true branch *) [ 1,3,5: normalize in match (m_bind ?????); normalize in match (m_bind ?????); cases Hsim_expr_true [ 2,4,6: * #error #Hexec_fail >Hexec_fail @SimFail /2 by refl, ex_intro/ | 1,3,5: cases (exec_expr ge en m iftrue) [ 2,4,6: #error #_ normalize nodelta @SimFail /2 by refl, ex_intro/ | 1,3,5: * #val_true #trace_true normalize nodelta #Hsim >(Hsim 〈val_true,trace_true〉 (refl ? (OK ? 〈val_true,trace_true〉))) normalize nodelta @SimOk #a #H @H ] ] | 2,4,6: normalize in match (m_bind ?????); normalize in match (m_bind ?????); cases Hsim_expr_false [ 2,4,6: * #error #Hexec_fail >Hexec_fail normalize nodelta @SimFail /2 by refl, ex_intro/ | 1,3,5: cases (exec_expr ge en m iffalse) [ 2,4,6: #error #_ normalize nodelta @SimFail /2 by refl, ex_intro/ | 1,3,5: * #val_false #trace_false normalize nodelta #Hsim >(Hsim 〈val_false,trace_false〉 (refl ? (OK ? 〈val_false,trace_false〉))) normalize nodelta @SimOk #a #H @H ] ] ] ] ] ] ] | 2,4,6: @SimFail /2 by ex_intro/ ] | 41,42: destruct elim (Hlhs_equiv ge en m) * #Hsim_expr_lhs #Hsim_lvalue_lhs #Htype_eq_lhs elim (Hrhs_equiv ge en m) * #Hsim_expr_rhs #Hsim_lvalue_rhs #Htype_eq_rhs %1 try @refl [ 1,3: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? 
(Expr ??)); cases Hsim_expr_lhs [ 2,4: * #error #Hexec_fail >Hexec_fail normalize nodelta @SimFail /2 by refl, ex_intro/ | 1,3: cases (exec_expr ge en m lhs) [ 2,4: #error #_ @SimFail /2 by refl, ex_intro/ | 1,3: * #lhs_val #lhs_trace #Hsim normalize nodelta >(Hsim 〈lhs_val,lhs_trace〉 (refl ? (OK ? 〈lhs_val,lhs_trace〉))) normalize nodelta >Htype_eq_lhs cases (exec_bool_of_val lhs_val (typeof lhs1)) [ 2,4: #error normalize @SimFail /2 by refl, ex_intro/ | 1,3: * whd in match (m_bind ?????); whd in match (m_bind ?????); [ 2,3: (* lhs evaluates to true *) @SimOk #a #H @H | 1,4: cases Hsim_expr_rhs [ 2,4: * #error #Hexec >Hexec @SimFail /2 by refl, ex_intro/ | 1,3: cases (exec_expr ge en m rhs) [ 2,4: #error #_ @SimFail /2 by refl, ex_intro/ | 1,3: * #rhs_val #rhs_trace -Hsim #Hsim >(Hsim 〈rhs_val,rhs_trace〉 (refl ? (OK ? 〈rhs_val,rhs_trace〉))) normalize nodelta >Htype_eq_rhs @SimOk #a #H @H ] ] ] ] ] ] | 2,4: @SimFail /2 by ex_intro/ ] | 43: destruct cases (type_eq_dec ty (Tint target_sz target_sg)) [ 1: #Htype_eq >Htype_eq >type_eq_identity @(Inv_coerce_ok ??????? target_sz target_sg) [ 1,2: // | 3: @smaller_integer_val_identity ] | 2: #Hneq >(type_neq_not_identity … Hneq) %1 // @SimOk #a #H @H ] | 44: destruct elim (Hrec_expr_equiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq %1 try @refl [ 1: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); whd in match (exec_lvalue ????) in Hsim_lvalue; whd in match (exec_lvalue' ?????); whd in match (exec_lvalue' ?????); >Htype_eq cases (typeof rec_expr1) normalize nodelta [ 2: #sz #sg | 3: #fl | 4: #rg #ty | 5: #rg #ty #n | 6: #tl #ty | 7: #id #fl | 8: #id #fl | 9: #rg #ty ] [ 1,2,3,4,5,8,9: @SimFail /2 by refl, ex_intro/ | 6,7: cases Hsim_lvalue [ 2,4: * #error #Herror >Herror normalize in ⊢ (??%?); @SimFail /2 by refl, ex_intro/ | 1,3: cases (exec_lvalue ge en m rec_expr) [ 2,4: #error #_ normalize in ⊢ (??%?); @SimFail /2 by refl, ex_intro/ | 1,3: #a #Hsim >(Hsim a (refl ? (OK ? 
a))) whd in match (m_bind ?????); @SimOk #a #H @H ] ] ] | 2: whd in match (exec_lvalue ??? (Expr ??)); whd in match (exec_lvalue ??? (Expr ??)); >Htype_eq cases (typeof rec_expr1) normalize nodelta [ 2: #sz #sg | 3: #fl | 4: #rg #ty | 5: #rg #ty #n | 6: #tl #ty | 7: #id #fl | 8: #id #fl | 9: #rg #ty ] [ 1,2,3,4,5,8,9: @SimFail /2 by refl, ex_intro/ | 6,7: cases Hsim_lvalue [ 2,4: * #error #Herror >Herror normalize in ⊢ (??%?); @SimFail /2 by refl, ex_intro/ | 1,3: cases (exec_lvalue ge en m rec_expr) [ 2,4: #error #_ normalize in ⊢ (??%?); @SimFail /2 by refl, ex_intro/ | 1,3: #a #Hsim >(Hsim a (refl ? (OK ? a))) whd in match (m_bind ?????); @SimOk #a #H @H ] ] ] ] | 45: destruct inversion (Hinv ge en m) [ 2: #src_sz #src_sg #Htypeof_e1 #Htypeof_e2 #Hsmaller #Hdesired_eq #_ @(Inv_coerce_ok ??????? src_sz src_sg) [ 1: >Htypeof_e1 // | 2: >Htypeof_e2 // | 3: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases (exec_expr ge en m e1) in Hsmaller; [ 2: #error normalize // | 1: * #val1 #trace1 #Hsmaller #val1_int #Hval1_eq cases (exec_expr ge en m e2) in Hsmaller; [ 2: #error normalize in ⊢ (% → ?); #Habsurd @(False_ind … (Habsurd val1_int Hval1_eq)) | 1: * #val2 #trace #Hsmaller elim (Hsmaller val1_int Hval1_eq) #val2_int * * * #Hval2_eq #Hcast #Htrace #Hle normalize nodelta %{val2_int} try @conj try @conj try @conj // ] ] ] | 1: #result_flag #Hresult #Htype_eq #Hsim_expr #Hsim_lvalue #Hdesired_typ #_ >Hresult %1 try @refl [ 1: >Htype_eq // | 2: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim_expr [ 2: * #error #Hexec_error >Hexec_error @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #He2_exec >He2_exec @SimOk #a #H @H ] ] | 3: @SimFail /2 by ex_intro/ ] ] | 46: destruct elim (Hexpr_equiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq %1 try @refl [ 1: whd in match (exec_expr ??? 
(Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim_expr [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hsim2 >Hsim2 @SimOk #a #H @H ] ] | 2: @SimFail /2 by ex_intro/ ] (* simplify_inside cases. Amounts to propagate a simulation result, except for the /cast/ case which actually calls * simplify_expr *) | 47, 48, 49: (* trivial const_int, const_float and var cases *) try @conj try @conj try @refl @SimOk #a #H @H | 50: (* Deref case *) destruct elim (Hequiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq try @conj try @conj [ 1: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); whd in match (exec_lvalue' ?????); whd in match (exec_lvalue' ?????); | 2: whd in match (exec_lvalue ??? (Expr ??)); whd in match (exec_lvalue ??? (Expr ??)); ] [ 1,2: cases Hsim_expr [ 2,4: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1,3: cases (exec_expr ge en m e1) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #H >H @SimOk #a #H @H ] ] | 3: // ] | 51: (* Addrof *) destruct elim (Hequiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq try @conj try @conj [ 1: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim_lvalue [ 2: * #error #Hlvalue_fail >Hlvalue_fail @SimFail /2 by ex_intro/ | 1: cases (exec_lvalue ge en m e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #H >H @SimOk #a #H @H ] ] | 2: @SimFail /2 by ex_intro/ | 3: // ] | 52: (* Unop *) destruct elim (Hequiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq try @conj try @conj [ 1: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? 
(Expr ??)); cases Hsim_expr [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #H >H @SimOk >Htype_eq #a #H @H ] ] | 2: @SimFail /2 by ex_intro/ | 3: // ] | 53: (* Binop *) destruct elim (Hequiv_lhs ge en m) * #Hsim_expr_lhs #Hsim_lvalue_lhs #Htype_eq_lhs elim (Hequiv_rhs ge en m) * #Hsim_expr_rhs #Hsim_lvalue_rhs #Htype_eq_rhs try @conj try @conj [ 1: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim_expr_lhs [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m lhs) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #lhs_value #Hsim_lhs cases Hsim_expr_rhs [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m rhs) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #rhs_value #Hsim_rhs lapply (Hsim_lhs lhs_value (refl ? (OK ? lhs_value))) lapply (Hsim_rhs rhs_value (refl ? (OK ? rhs_value))) #Hrhs >Hrhs #Hlhs >Hlhs >Htype_eq_rhs >Htype_eq_lhs @SimOk #a #H @H ] ] ] ] | 2: @SimFail /2 by ex_intro/ | 3: // ] | 54: (* Cast, fallback case *) try @conj try @conj try @refl @SimOk #a #H @H | 55: (* Cast, success case *) destruct inversion (Htrans_inv ge en m) [ 1: (* contradiction *) #result_flag #Hresult_flag #Htype_eq #Hsim_epr #Hsim_lvalue #Hresult_flag_true Hsrc_type_eq lapply (exec_cast_inv val src_sz src_sg cast_sz cast_sg m) cases (exec_cast m val ??) 
[ 2: #error #_ #_ @SimFail /2 by ex_intro/ | 1: #result #Hinversion elim (Hinversion result (refl ??)) #val_int * #Hval_eq #Hcast cases (exec_expr ge en m castee1) [ 2: #error #Habsurd normalize in Habsurd; @(False_ind … (Habsurd val_int Hval_eq)) | 1: * #val1 #trace1 #Hsmaller elim (Hsmaller val_int Hval_eq) #val1_int * * * #Hval1_int #Hval18int #Htrace #Hle @SimOk destruct normalize // ] ] ] | 2: @SimFail /2 by ex_intro/ | 3: >Htarget_type_eq // ] ] | 56: (* Cast, "failure" case *) destruct inversion (Htrans_inv ge en m) [ 2: (* contradiction *) #src_sz #src_sg #Htype_castee #Htype_castee1 #Hsmaller #Habsurd lapply (jmeq_to_eq ??? Habsurd) -Habsurd #Herror destruct | 1: #result_flag #Hresult_flag #Htype_eq #Hsim_expr #Hsim_lvalue #_ #_ try @conj try @conj try @conj [ 1: whd in match (exec_expr ????); whd in match (exec_expr ??? (Expr ??)); cases Hsim_expr [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ??? castee) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hexec_ok >Hexec_ok @SimOk >Htype_eq #a #H @H ] ] | 2: @SimFail /2 by ex_intro/ | 3: // ] ] | 57,58,59,60,61,62,63,64,68: try @conj try @conj try @refl @SimOk #a #H @H | 65: destruct elim (Hequiv_cond ge en m) * #Hsim_exec_cond #Hsim_lvalue_cond #Htype_eq_cond elim (Hequiv_iftrue ge en m) * #Hsim_exec_true #Hsim_lvalue_true #Htype_eq_true elim (Hequiv_iffalse ge en m) * #Hsim_exec_false #Hsim_lvalue_false #Htype_eq_false try @conj try @conj [ 1: whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim_exec_cond [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ??? cond) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: * #condb #condtrace #Hcond_sim lapply (Hcond_sim 〈condb, condtrace〉 (refl ? (OK ? 
〈condb, condtrace〉))) #Hcond_ok >Hcond_ok >Htype_eq_cond normalize nodelta cases (exec_bool_of_val condb (typeof cond1)) [ 2: #error @SimFail /2 by ex_intro/ | 1: * whd in match (m_bind ?????); whd in match (m_bind ?????); normalize nodelta [ 1: (* true branch taken *) cases Hsim_exec_true [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ??? iftrue) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #H >H @SimOk #a #H @H ] ] | 2: (* false branch taken *) cases Hsim_exec_false [ 2: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ??? iffalse) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #H >H @SimOk #a #H @H ] ] ] ] ] ] | 2: @SimFail /2 by ex_intro/ | 3: // ] | 66,67: destruct elim (Hequiv_lhs ge en m) * #Hsim_exec_lhs #Hsim_lvalue_lhs #Htype_eq_lhs elim (Hequiv_rhs ge en m) * #Hsim_exec_rhs #Hsim_lvalue_rhs #Htype_eq_rhs try @conj try @conj whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); [ 1,4: cases Hsim_exec_lhs [ 2,4: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1,3: cases (exec_expr ??? lhs) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hlhs >Hlhs >Htype_eq_lhs normalize nodelta elim a #lhs_val #lhs_trace cases (exec_bool_of_val lhs_val (typeof lhs1)) [ 2,4: #error @SimFail /2 by ex_intro/ | 1,3: * whd in match (m_bind ?????); whd in match (m_bind ?????); [ 2,3: @SimOk // | 1,4: cases Hsim_exec_rhs [ 2,4: * #error #Hexec_fail >Hexec_fail @SimFail /2 by ex_intro/ | 1,3: cases (exec_expr ??? rhs) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrhs >Hrhs >Htype_eq_rhs @SimOk #a #H @H ] ] ] ] ] ] | 2,5: @SimFail /2 by ex_intro/ | 3,6: // ] | 69: (* record field *) destruct elim (Hequiv_rec ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq try @conj try @conj whd in match (exec_expr ??? 
(Expr ??)); whd in match (exec_expr ??? (Expr ??)); whd in match (exec_lvalue' ??? (Efield rec_expr f) ty); whd in match (exec_lvalue' ??? (Efield rec_expr1 f) ty); [ 1: >Htype_eq cases (typeof rec_expr1) normalize nodelta [ 2: #sz #sg | 3: #fl | 4: #rg #ty' | 5: #rg #ty #n | 6: #tl #ty' | 7: #id #fl | 8: #id #fl | 9: #rg #id ] try (@SimFail /2 by ex_intro/) cases Hsim_lvalue [ 2,4: * #error #Hlvalue_fail >Hlvalue_fail @SimFail /2 by ex_intro/ | 1,3: cases (exec_lvalue ge en m rec_expr) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hexec >Hexec @SimOk #a #H @H ] ] | 2: (* Note: identical to previous case. Too lazy to merge and manually shift indices. *) >Htype_eq cases (typeof rec_expr1) normalize nodelta [ 2: #sz #sg | 3: #fl | 4: #rg #ty' | 5: #rg #ty #n | 6: #tl #ty' | 7: #id #fl | 8: #id #fl | 9: #rg #id ] try (@SimFail /2 by ex_intro/) cases Hsim_lvalue [ 2,4: * #error #Hlvalue_fail >Hlvalue_fail @SimFail /2 by ex_intro/ | 1,3: cases (exec_lvalue ge en m rec_expr) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hexec >Hexec @SimOk #a #H @H ] ] | 3: // ] | 70: (* cost label *) destruct elim (Hequiv ge en m) * #Hsim_expr #Hsim_lvalue #Htype_eq try @conj try @conj whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); [ 1: cases Hsim_expr [ 2: * #error #Hexec >Hexec @SimFail /2 by ex_intro/ | 1: cases (exec_expr ??? e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #H >H @SimOk #a #H @H ] ] | 2: @SimFail /2 by ex_intro/ | 3: // ] ] qed. (* Propagate cast simplification through statements and programs. *) definition simplify_e ≝ λe. pi1 … (simplify_inside e). let rec simplify_statement (s:statement) : statement ≝ match s with [ Sskip ⇒ Sskip | Sassign e1 e2 ⇒ Sassign (simplify_e e1) (simplify_e e2) | Scall eo e es ⇒ Scall (option_map ?? simplify_e eo) (simplify_e e) (map ?? 
simplify_e es) | Ssequence s1 s2 ⇒ Ssequence (simplify_statement s1) (simplify_statement s2) | Sifthenelse e s1 s2 ⇒ Sifthenelse (simplify_e e) (simplify_statement s1) (simplify_statement s2) (* TODO: try to reduce size of e *) | Swhile e s1 ⇒ Swhile (simplify_e e) (simplify_statement s1) (* TODO: try to reduce size of e *) | Sdowhile e s1 ⇒ Sdowhile (simplify_e e) (simplify_statement s1) (* TODO: try to reduce size of e *) | Sfor s1 e s2 s3 ⇒ Sfor (simplify_statement s1) (simplify_e e) (simplify_statement s2) (simplify_statement s3) (* TODO: reduce size of e *) | Sbreak ⇒ Sbreak | Scontinue ⇒ Scontinue | Sreturn eo ⇒ Sreturn (option_map ?? simplify_e eo) | Sswitch e ls ⇒ Sswitch (simplify_e e) (simplify_ls ls) | Slabel l s1 ⇒ Slabel l (simplify_statement s1) | Sgoto l ⇒ Sgoto l | Scost l s1 ⇒ Scost l (simplify_statement s1) ] and simplify_ls ls ≝ match ls with [ LSdefault s ⇒ LSdefault (simplify_statement s) | LScase sz i s ls' ⇒ LScase sz i (simplify_statement s) (simplify_ls ls') ]. definition simplify_function : function → function ≝ λf. mk_function (fn_return f) (fn_params f) (fn_vars f) (simplify_statement (fn_body f)). definition simplify_fundef : clight_fundef → clight_fundef ≝ λf. match f with [ CL_Internal f ⇒ CL_Internal (simplify_function f) | _ ⇒ f ]. definition simplify_program : clight_program → clight_program ≝ λp. transform_program … p simplify_fundef. (* Simulation on statement continuations. Stolen from labelSimulation and adapted to our setting. *) inductive cont_cast : cont → cont → Prop ≝ | cc_stop : cont_cast Kstop Kstop | cc_seq : ∀s,k,k'. cont_cast k k' → cont_cast (Kseq s k) (Kseq (simplify_statement s) k') | cc_while : ∀e,s,k,k'. cont_cast k k' → cont_cast (Kwhile e s k) (Kwhile (simplify_e e) (simplify_statement s) k') | cc_dowhile : ∀e,s,k,k'. cont_cast k k' → cont_cast (Kdowhile e s k) (Kdowhile (simplify_e e) (simplify_statement s) k') | cc_for1 : ∀e,s1,s2,k,k'. 
cont_cast k k' → cont_cast (Kseq (Sfor Sskip e s1 s2) k) (Kseq (Sfor Sskip (simplify_e e) (simplify_statement s1) (simplify_statement s2)) k') | cc_for2 : ∀e,s1,s2,k,k'. cont_cast k k' → cont_cast (Kfor2 e s1 s2 k) (Kfor2 (simplify_e e) (simplify_statement s1) (simplify_statement s2) k') | cc_for3 : ∀e,s1,s2,k,k'. cont_cast k k' → cont_cast (Kfor3 e s1 s2 k) (Kfor3 (simplify_e e) (simplify_statement s1) (simplify_statement s2) k') | cc_switch : ∀k,k'. cont_cast k k' → cont_cast (Kswitch k) (Kswitch k') | cc_call : ∀r,f,en,k,k'. cont_cast k k' → cont_cast (Kcall r f en k) (Kcall r (simplify_function f) en k'). lemma call_cont_cast : ∀k,k'. cont_cast k k' → cont_cast (call_cont k) (call_cont k'). #k0 #k0' #K elim K /2/ qed. inductive state_cast : state → state → Prop ≝ | swc_state : ∀f,s,k,k',e,m. cont_cast k k' → state_cast (State f s k e m) (State (simplify_function f) (simplify_statement s ) k' e m) | swc_callstate : ∀fd,args,k,k',m. cont_cast k k' → state_cast (Callstate fd args k m) (Callstate (simplify_fundef fd) args k' m) | swc_returnstate : ∀res,k,k',m. cont_cast k k' → state_cast (Returnstate res k m) (Returnstate res k' m) | swc_finalstate : ∀r. state_cast (Finalstate r) (Finalstate r) . record related_globals (F:Type[0]) (t:F → F) (ge:genv_t F) (ge':genv_t F) : Prop ≝ { rg_find_symbol: ∀s. find_symbol ? ge s = find_symbol ? ge' s; rg_find_funct: ∀v,f. find_funct ? ge v = Some ? f → find_funct ? ge' v = Some ? (t f); rg_find_funct_ptr: ∀b,f. find_funct_ptr ? ge b = Some ? f → find_funct_ptr ? ge' b = Some ? (t f) }. (* The return type of any function is invariant under cast simplification *) lemma fn_return_simplify : ∀f. fn_return (simplify_function f) = fn_return f. // qed. definition expr_lvalue_ind_combined ≝ λP,Q,ci,cf,lv,vr,dr,ao,uo,bo,ca,cd,ab,ob,sz,fl,co,xx. conj ?? (expr_lvalue_ind P Q ci cf lv vr dr ao uo bo ca cd ab ob sz fl co xx) (lvalue_expr_ind P Q ci cf lv vr dr ao uo bo ca cd ab ob sz fl co xx). lemma simulation_transitive : ∀A,r0,r1,r2. 
simulate A r0 r1 → simulate A r1 r2 → simulate A r0 r2. #A #r0 #r1 #r2 * [ 2: * #error #H >H #_ @SimFail /2 by ex_intro/ | 1: cases r0 [ 2: #error #_ #_ @SimFail /2 by ex_intro/ | 1: #elt #Hsim lapply (Hsim elt (refl ? (OK ? elt))) #H >H // ] ] qed. lemma sim_related_globals : ∀ge,ge',en,m. related_globals ? simplify_fundef ge ge' → (∀e. simulate ? (exec_expr ge en m e) (exec_expr ge' en m e)) ∧ (∀ed, ty. simulate ? (exec_lvalue' ge en m ed ty) (exec_lvalue' ge' en m ed ty)). #ge #ge' #en #m #Hrelated @expr_lvalue_ind_combined [ 1: #sz #ty #i @SimOk #a normalize // | 2: #ty #f @SimOk #a normalize // | 3: * [ 1: #sz #i | 2: #fl | 3: #id | 4: #e1 | 5: #e1 | 6: #op #e1 | 7: #op #e1 #e2 | 8: #cast_ty #e1 | 9: #cond #iftrue #iffalse | 10: #e1 #e2 | 11: #e1 #e2 | 12: #sizeofty | 13: #e1 #field | 14: #cost #e1 ] #ty #Hsim_lvalue try // whd in match (Plvalue ???); whd in match (exec_expr ????); whd in match (exec_expr ????); cases Hsim_lvalue [ 2,4,6: * #error #Hlvalue_fail >Hlvalue_fail @SimFail /2 by ex_intro/ | *: cases (exec_lvalue' ge en m ? ty) [ 2,4,6: #error #_ @SimFail /2 by ex_intro/ | *: #a #Hsim_lvalue lapply (Hsim_lvalue a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] | 4: #v #ty whd in match (exec_lvalue' ?????); whd in match (exec_lvalue' ?????); cases (lookup SymbolTag block en v) normalize nodelta [ 2: #block @SimOk // | 1: elim Hrelated #Hsymbol #_ #_ >(Hsymbol v) @SimOk // ] | 5: #e #ty #Hsim_expr whd in match (exec_lvalue' ?????); whd in match (exec_lvalue' ?????); cases Hsim_expr [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? 
a))) #Hrewrite >Hrewrite @SimOk // ] ] | 6: #ty #ed #ty' #Hsim_lvalue whd in match (exec_expr ????); whd in match (exec_expr ????); whd in match (exec_lvalue ????); whd in match (exec_lvalue ????); cases Hsim_lvalue [ 2: * #error #Hlvalue_fail >Hlvalue_fail @SimFail /2 by ex_intro/ | 1: cases (exec_lvalue' ge en m ed ty') [ 2: #error #_ @SimFail /2 by ex_intro/ | *: #a #Hsim_lvalue lapply (Hsim_lvalue a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] | 7: #ty #op #e #Hsim whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] | 8: #ty #op #e1 #e2 #Hsim1 #Hsim2 whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim1 [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite cases Hsim2 [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e2) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] ] ] | 9: #ty #cast_ty #e #Hsim whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] (* mergeable with 7 modulo intros *) | 10: #ty #e1 #e2 #e3 #Hsim1 #Hsim2 #Hsim3 whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim1 [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e1) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? 
a))) #Hrewrite >Hrewrite normalize nodelta cases (exec_bool_of_val (\fst a) (typeof e1)) [ 2: #error @SimFail /2 by ex_intro/ | 1: * [ 1: (* true branch *) cases Hsim2 | 2: (* false branch *) cases Hsim3 ] [ 2,4: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e2) | 3: cases (exec_expr ge en m e3) ] [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] ] ] | 11,12: #ty #e1 #e2 #Hsim1 #Hsim2 whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim1 [ 2,4: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1,3: cases (exec_expr ge en m e1) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite normalize nodelta cases (exec_bool_of_val ??) [ 2,4: #error @SimFail /2 by ex_intro/ | 1,3: * whd in match (m_bind ?????); whd in match (m_bind ?????); [ 2,3: @SimOk // | 1,4: cases Hsim2 [ 2,4: * #error #Hfail >Hfail normalize nodelta @SimFail /2 by ex_intro/ | 1,3: cases (exec_expr ge en m e2) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] ] ] ] ] | 13: #ty #sizeof_ty @SimOk normalize // | 14: #ty #e #ty' #field #Hsim_lvalue whd in match (exec_lvalue' ? en m (Efield ??) ty); whd in match (exec_lvalue' ge' en m (Efield ??) ty); normalize in match (typeof (Expr ??)); cases ty' in Hsim_lvalue; normalize nodelta [ 2: #sz #sg | 3: #fsz | 4: #rg #ptr_ty | 5: #rg #array_ty #array_sz | 6: #domain #codomain | 7: #structname #fieldspec | 8: #unionname #fieldspec | 9: #rg #id ] #Hsim_lvalue try (@SimFail /2 by ex_intro/) normalize in match (exec_lvalue ge en m ?); normalize in match (exec_lvalue ge' en m ?); cases Hsim_lvalue [ 2,4: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1,3: cases (exec_lvalue' ge en m e ?) [ 2,4: #error #_ @SimFail /2 by ex_intro/ | 1,3: #a #Hsim lapply (Hsim a (refl ? (OK ?
a))) #Hrewrite >Hrewrite @SimOk /2 by ex_intro/ ] ] | 15: #ty #lab #e #Hsim whd in match (exec_expr ??? (Expr ??)); whd in match (exec_expr ??? (Expr ??)); cases Hsim [ 2: * #error #Hfail >Hfail @SimFail /2 by ex_intro/ | 1: cases (exec_expr ge en m e) [ 2: #error #_ @SimFail /2 by ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite @SimOk // ] ] (* cf case 7, again *) | 16: * [ 1: #sz #i | 2: #fl | 3: #id | 4: #e1 | 5: #e1 | 6: #op #e1 | 7: #op #e1 #e2 | 8: #cast_ty #e1 | 9: #cond #iftrue #iffalse | 10: #e1 #e2 | 11: #e1 #e2 | 12: #sizeofty | 13: #e1 #field | 14: #cost #e1 ] #ty normalize in match (is_not_lvalue ?); [ 3,4,13: #Habsurd @(False_ind … Habsurd) ] #_ @SimFail /2 by ex_intro/ ] qed. lemma related_globals_expr_simulation : ∀ge,ge',en,m. related_globals ? simplify_fundef ge ge' → ∀e. simulate ? (exec_expr ge en m e) (exec_expr ge' en m (simplify_e e)) ∧ typeof e = typeof (simplify_e e). #ge #ge' #en #m #Hrelated #e whd in match (simplify_e ?); cases e #ed #ty cases ed [ 1: #sz #i | 2: #fl | 3: #id | 4: #e1 | 5: #e1 | 6: #op #e1 | 7: #op #e1 #e2 | 8: #cast_ty #e1 | 9: #cond #iftrue #iffalse | 10: #e1 #e2 | 11: #e1 #e2 | 12: #sizeofty | 13: #e1 #field | 14: #cost #e1 ] elim (simplify_inside (Expr ??)) #e' #Hconservation whd in Hconservation; @conj lapply (Hconservation ge en m) * * try // cases (exec_expr ge en m (Expr ??)) try (#error #_ #_ #_ @SimFail /2 by ex_intro/) * #val #trace #Hsim_expr #Hsim_lvalue #Htype_eq try @(simulation_transitive ???? Hsim_expr (proj1 ?? (sim_related_globals ge ge' en m Hrelated) ?)) qed. lemma related_globals_lvalue_simulation : ∀ge,ge',en,m. related_globals ? simplify_fundef ge ge' → ∀e. simulate ? (exec_lvalue ge en m e) (exec_lvalue ge' en m (simplify_e e)) ∧ typeof e = typeof (simplify_e e). 
#ge #ge' #en #m #Hrelated #e whd in match (simplify_e ?); cases e #ed #ty cases ed [ 1: #sz #i | 2: #fl | 3: #id | 4: #e1 | 5: #e1 | 6: #op #e1 | 7: #op #e1 #e2 | 8: #cast_ty #e1 | 9: #cond #iftrue #iffalse | 10: #e1 #e2 | 11: #e1 #e2 | 12: #sizeofty | 13: #e1 #field | 14: #cost #e1 ] elim (simplify_inside (Expr ??)) #e' #Hconservation whd in Hconservation; @conj lapply (Hconservation ge en m) * * try // cases (exec_lvalue ge en m (Expr ??)) try (#error #_ #_ #_ @SimFail /2 by ex_intro/) * #val #trace #Hsim_expr #Hsim_lvalue #Htype_eq (* Having to distinguish between exec_lvalue' and exec_lvalue is /ugly/. *) cases e' in Hsim_lvalue ⊢ %; #ed' #ty' whd in match (exec_lvalue ????); whd in match (exec_lvalue ????); lapply (proj2 ?? (sim_related_globals ge ge' en m Hrelated) ed' ty') #Hsim_lvalue2 #Hsim_lvalue1 try @(simulation_transitive ???? Hsim_lvalue1 Hsim_lvalue2) qed. lemma related_globals_exprlist_simulation : ∀ge,ge',en,m. related_globals ? simplify_fundef ge ge' → ∀args. simulate ? (exec_exprlist ge en m args ) (exec_exprlist ge' en m (map expr expr simplify_e args)). #ge #ge' #en #m #Hrelated #args elim args [ 1: /3/ | 2: #hd #tl #Hind normalize elim (related_globals_expr_simulation ge ge' en m Hrelated hd) * [ 2: * #error #Hfail >Hfail #_ @SimFail /2 by refl, ex_intro/ | 1: cases (exec_expr ge en m hd) [ 2: #error #_ #_ @SimFail /2 by refl, ex_intro/ | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Heq >Heq #Htype_eq >Htype_eq cases Hind normalize [ 2: * #error #Hfail >Hfail @SimFail /2 by refl, ex_intro/ | 1: cases (exec_exprlist ??? tl) [ 2: #error #_ @SimFail /2 by refl, ex_intro/ | 1: * #values #trace #Hsim lapply (Hsim 〈values, trace〉 (refl ? (OK ? 〈values, trace〉))) #Heq >Heq @SimOk // ] ] ] ] ] qed. lemma simplify_type_of_fundef_eq : ∀clfd. (type_of_fundef (simplify_fundef clfd)) = (type_of_fundef clfd). * // qed. lemma simplify_typeof_eq : ∀ge:genv.∀en:env.∀m:mem. ∀func. typeof (simplify_e func) = typeof func. 
#ge #en #m #func whd in match (simplify_e func); elim (simplify_inside func) #func' #H lapply (H ge en m) * * #_ #_ // qed. lemma simplify_fun_typeof_eq : ∀ge:genv.∀en:env.∀m:mem. ∀func. fun_typeof (simplify_e func) = fun_typeof func. #ge #en #m #func whd in match (simplify_e func); whd in match (fun_typeof ?) in ⊢ (??%%); >simplify_typeof_eq whd in match (simplify_e func); // qed. lemma simplify_is_not_skip: ∀s.s ≠ Sskip → ∃pf. is_Sskip (simplify_statement s) = inr … pf. * [ 1: * #Habsurd elim (Habsurd (refl ? Sskip)) | *: #a try #b try #c try #d try #e whd in match (simplify_statement ?); whd in match (is_Sskip ?); try /2 by refl, ex_intro/ ] qed. lemma call_cont_simplify : ∀k,k'. cont_cast k k' → cont_cast (call_cont k) (call_cont k'). #k0 #k0' #K elim K /2/ qed. lemma simplify_ls_commute : ∀l. (simplify_statement (seq_of_labeled_statement l)) = (seq_of_labeled_statement (simplify_ls l)). #l @(labeled_statements_ind … l) [ 1: #default_statement // | 2: #sz #i #s #tl #Hind whd in match (seq_of_labeled_statement ?) in ⊢ (??%?); whd in match (simplify_ls ?) in ⊢ (???%); whd in match (seq_of_labeled_statement ?) in ⊢ (???%); whd in match (simplify_statement ?) in ⊢ (??%?); >Hind // ] qed. lemma select_switch_commute : ∀sz,i,l. select_switch sz i (simplify_ls l) = simplify_ls (select_switch sz i l). #sz #i #l @(labeled_statements_ind … l) [ 1: #default_statement // | 2: #sz' #i' #s #tl #Hind whd in match (simplify_ls ?) in ⊢ (??%?); whd in match (select_switch ???) in ⊢ (??%%); cases (sz_eq_dec sz sz') [ 1: #Hsz_eq destruct >intsize_eq_elim_true >intsize_eq_elim_true cases (eq_bv (bitsize_of_intsize sz') i' i) normalize nodelta whd in match (simplify_ls ?) in ⊢ (???%); [ 1: // | 2: @Hind ] | 2: #Hneq >(intsize_eq_elim_false ? sz sz' ???? Hneq) >(intsize_eq_elim_false ? sz sz' ???? Hneq) @Hind ] ] qed. lemma elim_IH_aux : ∀lab. ∀s:statement.∀k,k'. cont_cast k k' → ∀Hind:(∀k:cont.∀k':cont. 
cont_cast k k' → match find_label lab s k with  [ None ⇒ find_label lab (simplify_statement s) k'=None (statement×cont) | Some (r:(statement×cont))⇒ let 〈s',ks〉 ≝r in  ∃ks':cont. find_label lab (simplify_statement s) k' = Some (statement×cont) 〈simplify_statement s',ks'〉 ∧ cont_cast ks ks']). (find_label lab s k = None ? ∧ find_label lab (simplify_statement s) k' = None ?) ∨ (∃st,kst,kst'. find_label lab s k = Some ? 〈st,kst〉 ∧ find_label lab (simplify_statement s) k' = Some ? 〈simplify_statement st,kst'〉 ∧ cont_cast kst kst'). #lab #s #k #k' #Hcont_cast #Hind lapply (Hind k k' Hcont_cast) cases (find_label lab s k) [ 1: normalize nodelta #Heq >Heq /3/ | 2: * #st #kst normalize nodelta * #kst' * #Heq #Hcont_cast' >Heq %2 %{st} %{kst} %{kst'} @conj try @conj // ] qed. lemma cast_find_label : ∀lab,s,k,k'. cont_cast k k' → match find_label lab s k with [ Some r ⇒ let 〈s',ks〉 ≝ r in ∃ks'. find_label lab (simplify_statement s) k' = Some ? 〈simplify_statement s', ks'〉 ∧ cont_cast ks ks' | None ⇒ find_label lab (simplify_statement s) k' = None ? ]. #lab #s @(statement_ind2 ? (λls. ∀k:cont .∀k':cont .cont_cast k k' →match find_label_ls lab ls k with  [None⇒ find_label_ls lab (simplify_ls ls) k' = None ? |Some r ⇒ let 〈s',ks〉 ≝r in  ∃ks':cont .find_label_ls lab (simplify_ls ls) k' =Some (statement×cont) 〈simplify_statement s',ks'〉 ∧cont_cast ks ks'] ) … s) [ 1: #k #k' #Hcont_cast whd in match (find_label ? Sskip ?); normalize nodelta @refl | 2: #e1 #e2 #k #k' #Hcont_cast whd in match (find_label ? (Sassign e1 e2) ?); normalize nodelta @refl | 3: #e0 #e #args #k #k' #Hcont_cast whd in match (find_label ? (Scall e0 e args) ?); normalize nodelta @refl | 4: #s1 #s2 #Hind_s1 #Hind_s2 #k #k' #Hcont_cast whd in match (find_label ? (Ssequence s1 s2) ?); whd in match (find_label ? (simplify_statement (Ssequence s1 s2)) ?); elim (elim_IH_aux lab s1 (Kseq s2 k) (Kseq (simplify_statement s2) k') ? 
Hind_s1) [ 3: try ( @cc_seq // ) | 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta elim (elim_IH_aux lab s2 k k' Hcont_cast Hind_s2) [ 2: * #st * #kst * #kst' * * #Hrewrite2 >Hrewrite2 #Hrewrite3 >Hrewrite3 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta // ] ] | 5: #e #s1 #s2 #Hind_s1 #Hind_s2 #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); elim (elim_IH_aux lab s1 k k' Hcont_cast Hind_s1) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta elim (elim_IH_aux lab s2 k k' Hcont_cast Hind_s2) [ 2: * #st * #kst * #kst' * * #Hrewrite2 >Hrewrite2 #Hrewrite3 >Hrewrite3 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta // ] ] | 6: #e #s #Hind_s #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); elim (elim_IH_aux lab s (Kwhile e s k) (Kwhile (simplify_e e) (simplify_statement s) k') ? Hind_s) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta // | 3: @cc_while // ] | 7: #e #s #Hind_s #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); elim (elim_IH_aux lab s (Kdowhile e s k) (Kdowhile (simplify_e e) (simplify_statement s) k') ? 
Hind_s) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta // | 3: @cc_dowhile // ] | 8: #s1 #cond #s2 #s3 #Hind_s1 #Hind_s2 #Hind_s3 #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); elim (elim_IH_aux lab s1 (Kseq (Sfor Sskip cond s2 s3) k) (Kseq (Sfor Sskip (simplify_e cond) (simplify_statement s2) (simplify_statement s3)) k') ? Hind_s1) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 3: @cc_for1 // | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta elim (elim_IH_aux lab s3 (Kfor2 cond s2 s3 k) (Kfor2 (simplify_e cond) (simplify_statement s2) (simplify_statement s3) k') ? Hind_s3) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 3: @cc_for2 // | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta elim (elim_IH_aux lab s2 (Kfor3 cond s2 s3 k) (Kfor3 (simplify_e cond) (simplify_statement s2) (simplify_statement s3) k') ? Hind_s2) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 3: @cc_for3 // | 1: * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 normalize nodelta // ] ] ] | 9,10: #k #k' #Hcont_cast normalize in match (find_label ???); normalize nodelta // | 11: #e #k #k' #Hcont_cast normalize in match (find_label ???); normalize nodelta // | 12: #e #ls #Hind #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); (* We can't elim the Hind on a list of labeled statements. We must proceed more manually. *) lapply (Hind (Kswitch k) (Kswitch k') ?) 
[ 1: @cc_switch // | 2: cases (find_label_ls lab ls (Kswitch k)) normalize nodelta [ 1: // | 2: * #st #kst normalize nodelta // ] ] | 13: #lab' #s0 #Hind #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); cases (ident_eq lab lab') normalize nodelta [ 1: #_ %{k'} /2/ | 2: #_ elim (elim_IH_aux lab s0 k k' Hcont_cast Hind) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Heq >Heq #Heq1 >Heq1 normalize nodelta // ] ] | 14: #l #k #k' #Hcont_cast // | 15: #l #s0 #Hind #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); elim (elim_IH_aux lab s0 k k' Hcont_cast Hind) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Heq >Heq #Heq1 >Heq1 normalize nodelta // ] | 16: #s0 #Hind #k #k' #Hcont_cast whd in match (find_label ???); whd in match (find_label ? (simplify_statement ?) ?); elim (elim_IH_aux lab s0 k k' Hcont_cast Hind) [ 2: * #st * #kst * #kst' * * #Hrewrite >Hrewrite #Hrewrite1 >Hrewrite1 #Hcont_cast' normalize nodelta %{kst'} /2/ | 1: * #Heq >Heq #Heq1 >Heq1 normalize nodelta // ] | 17: #sz #i #s0 #t #Hind_s0 #Hind_ls #k #k' #Hcont_cast whd in match (simplify_ls ?); whd in match (find_label_ls ???); lapply Hind_ls @(labeled_statements_ind … t) [ 1: #default_case #Hind_ls whd in match (seq_of_labeled_statement ?); elim (elim_IH_aux lab s0 (Kseq default_case k) (Kseq (simplify_statement default_case) k') ? 
Hind_s0) [ 2: * #st * #kst * #kst' * * #Hrewrite #Hrewrite1 #Hcont_cast' >Hrewrite >Hrewrite1 normalize nodelta whd in match (find_label_ls ???); >Hrewrite >Hrewrite1 normalize nodelta %{kst'} /2/ | 3: @cc_seq // | 1: * #Hrewrite #Hrewrite1 >Hrewrite normalize nodelta lapply (Hind_ls k k' Hcont_cast) cases (find_label_ls lab (LSdefault default_case) k) [ 1: normalize nodelta #Heq1 whd in match (simplify_ls ?); whd in match (find_label_ls lab ??); whd in match (seq_of_labeled_statement ?); whd in match (find_label_ls lab ??); >Hrewrite1 normalize nodelta @Heq1 | 2: * #st #kst normalize nodelta #H whd in match (find_label_ls lab ??); whd in match (simplify_ls ?); whd in match (seq_of_labeled_statement ?); >Hrewrite1 normalize nodelta @H ] ] | 2: #sz' #i' #s' #tl' #Hind #A whd in match (seq_of_labeled_statement ?); elim (elim_IH_aux lab s0 (Kseq (Ssequence s' (seq_of_labeled_statement tl')) k) (Kseq (simplify_statement (Ssequence s' (seq_of_labeled_statement tl'))) k') ? Hind_s0) [ 3: @cc_seq // | 1: * #Heq #Heq2 >Heq >Heq2 normalize nodelta lapply (A k k' Hcont_cast) cases (find_label_ls lab (LScase sz' i' s' tl') k) normalize nodelta [ 1: #H whd in match (find_label_ls ???); Heq2 normalize nodelta assumption | 2: * #st #kst normalize nodelta #H whd in match (find_label_ls ???); Heq2 normalize nodelta @H ] | 2: * #st * #kst * #kst' * * #Hrewrite #Hrewrite1 #Hcont_cast' >Hrewrite normalize nodelta %{kst'} @conj try // whd in match (find_label_ls ???); Hrewrite1 // ] ] ] qed. lemma cast_find_label_fn : ∀lab,f,k,k',s,ks. cont_cast k k' → find_label lab (fn_body f) k = Some ? 〈s,ks〉 → ∃ks'. find_label lab (fn_body (simplify_function f)) k' = Some ? 〈simplify_statement s,ks'〉 ∧ cont_cast ks ks'. #lab * #rettype #args #vars #body #k #k' #s #ks #Hcont_cast #Hfind_lab whd in match (simplify_function ?); lapply (cast_find_label lab body ?? Hcont_cast) >Hfind_lab normalize nodelta // qed. theorem cast_correction : ∀ge, ge'. related_globals ? 
simplify_fundef ge ge' → ∀s1, s1', tr, s2. state_cast s1 s1' → exec_step ge s1 = Value … 〈tr,s2〉 → ∃s2'. exec_step ge' s1' = Value … 〈tr,s2'〉 ∧ state_cast s2 s2'. #ge #ge' #Hrelated #s1 #s1' #tr #s2 #Hs1_sim_s1' #Houtcome inversion Hs1_sim_s1' [ 1: (* regular state *) #f #stm #k #k' #en #m #Hcont_cast lapply (related_globals_expr_simulation ge ge' en m Hrelated) #Hsim_related lapply (related_globals_lvalue_simulation ge ge' en m Hrelated) #Hsim_lvalue_related cases stm (* Perform the intros for the statements*) [ 1: | 2: #lhs #rhs | 3: #ret #func #args | 4: #stm1 #stm2 | 5: #cond #iftrue #iffalse | 6: #cond #body | 7: #cond #body | 8: #init #cond #step #body | 9,10: | 11: #retval | 12: #cond #switchcases | 13: #lab #body | 14: #lab | 15: #cost #body ] [ 1: (* Skip *) #Heq_s1 #Heq_s1' #_ lapply Houtcome >Heq_s1 whd in match (exec_step ??); whd in match (exec_step ??); inversion Hcont_cast [ 1: (* Kstop *) #Hk #Hk' #_ >fn_return_simplify cases (fn_return f) normalize nodelta [ 1: >Heq_s1 in Hs1_sim_s1'; >Heq_s1' #Hsim inversion Hsim [ 1: #f0 #s #k0 #k0' #e #m0 #Hcont_cast0 #Hstate_eq #Hstate_eq' #_ #Eq whd in match (ret ??) in Eq; destruct (Eq) %{(Returnstate Vundef Kstop (free_list m (blocks_of_env en)))} @conj [ 1: // | 2: %3 %1 ] | 2: #fd #args #k0 #k0' #m0 #Hcont_cast0 #Habsurd destruct (Habsurd) | 3: #res #k0 #k0' #m0 #Hcont_cast #Habsurd destruct (Habsurd) | 4: #r #Habsurd destruct (Habsurd) ] | 3: #irrelevant #Habsurd destruct | 5: #irrelevant1 #irrelevant2 #irrelevant3 #Habsurd destruct | *: #irrelevant1 #irrelevant2 #Habsurd destruct ] | 2: (* Kseq stm' k' *) #stm' #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ normalize nodelta #Eq whd in match (ret ??) in Eq; destruct (Eq) %{(State (simplify_function f) (simplify_statement stm') k0' en m)} @conj [ 1: // | 2: %1 // ] | 3: (* Kwhile *) #cond #body #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ normalize nodelta #Eq whd in match (ret ??) 
in Eq; destruct (Eq) %{(State (simplify_function f) (Swhile (simplify_e cond) (simplify_statement body)) k0' en m)} @conj [ 1: // | 2: %1 // ] | 4: (* Kdowhile *) #cond #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ normalize nodelta #Eq elim (Hsim_related cond) #Hsim_cond #Htype_cond_eq cases Hsim_cond [ 2: * #error #Hfail >Hfail in Eq; #Habsurd normalize in Habsurd; destruct | 1: cases (exec_expr ge en m cond) in Eq; [ 2: #error whd in match (m_bind ?????) in ⊢ (% → ?); #Habsurd destruct | 1: * #val #trace whd in match (m_bind ?????) in ⊢ (% → ?); Hrewrite_cond whd in match (m_bind ?????); (* case analysis on the outcome of the conditional *) cases (exec_bool_of_val val (typeof cond)) in Eq ⊢ %; [ 2: (* evaluation of the conditional fails *) #error normalize in ⊢ (% → ?); #Habsurd destruct (Habsurd) | 1: * whd in match (bindIO ??????); whd in match (bindIO ??????); #Eq destruct (Eq) [ 1: %{(State (simplify_function f) (Sdowhile (simplify_e cond) (simplify_statement body)) k0' en m)} @conj [ 1: // | 2: %1 // ] | 2: %{(State (simplify_function f) Sskip k0' en m)} @conj [ 1: // | 2: %1 // ] ] ] ] ] | 5,6,7: #cond #step #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ normalize nodelta #Eq whd in match (ret ??) in Eq ⊢ %; destruct (Eq) [ 1: %{(State (simplify_function f) (Sfor Sskip (simplify_e cond) (simplify_statement step) (simplify_statement body)) k0' en m)} @conj [ 1: // | 2: %1 // ] | 2: %{(State (simplify_function f) (simplify_statement step) (Kfor3 (simplify_e cond) (simplify_statement step) (simplify_statement body) k0') en m)} @conj [ 1: // | 2: %1 @cc_for3 // ] | 3: %{(State (simplify_function f) (Sfor Sskip (simplify_e cond) (simplify_statement step) (simplify_statement body)) k0' en m)} @conj [ 1: // | 2: %1 // ] ] | 8: #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ normalize nodelta #Eq whd in match (ret ??) 
in Eq ⊢ %; destruct (Eq) %{(State (simplify_function f) Sskip k0' en m)} @conj [ 1: // | 2: %1 // ] | 9: (* Call *) #r #f0 #en0 #k0 #k0' #Hcont_cast #Hind #Hk #Hk' #_ #Eq >fn_return_simplify cases (fn_return f) in Eq; normalize nodelta [ 1: #Eq whd in match (ret ??) in Eq ⊢ %; destruct (Eq) %{(Returnstate Vundef (Kcall r (simplify_function f0) en0 k0') (free_list m (blocks_of_env en)))} @conj [ 1: // | 2: %3 @cc_call // ] | 3: #irrelevant #Habsurd destruct (Habsurd) | 5: #irrelevant1 #irrelevant2 #irrelevant3 #Habsurd destruct (Habsurd) | *: #irrelevant1 #irrelevant2 #Habsurd destruct (Habsurd) ] ] | 2: (* Assign *) #Heq_s1 #Heq_s1' #_ lapply Houtcome >Heq_s1 whd in match (simplify_statement ?); #Heq whd in match (exec_step ??) in Heq ⊢ %; (* Begin by making the simplify_e disappear using Hsim_related *) elim (Hsim_lvalue_related lhs) * [ 2: * #error #Hfail >Hfail in Heq; #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_lvalue ge en m lhs) in Heq; [ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: * * #block #offset #trace whd in match (bindIO ??????); #Heq #Hsim #Htype_eq_lhs lapply (Hsim 〈block, offset, trace〉 (refl ? (OK ? 〈block, offset, trace〉))) #Hrewrite >Hrewrite -Hrewrite whd in match (bindIO ??????); (* After [lhs], treat [rhs] *) elim (Hsim_related rhs) * [ 2: * #error #Hfail >Hfail in Heq; #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ge en m rhs) in Heq; [ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: * #val #trace whd in match (bindIO ??????); #Heq #Hsim #Htype_eq_rhs lapply (Hsim 〈val, trace〉 (refl ? (OK ? 〈val, trace〉))) #Hrewrite >Hrewrite -Hrewrite whd in match (bindIO ??????); Heq_s1 whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; elim (Hsim_related func) in Heq; * [ 2: * #error #Hfail >Hfail #Htype_eq #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ??? 
func) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Heq >Heq #Htype_eq >Htype_eq whd in match (bindIO ??????) in ⊢ (% → %); elim (related_globals_exprlist_simulation ge ge' en m Hrelated args) [ 2: * #error #Hfail >Hfail #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_exprlist ge en m args) [ 2: #error #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #l -Hsim #Hsim lapply (Hsim l (refl ? (OK ? l))) #Heq >Heq whd in match (bindIO ??????) in ⊢ (% → %); elim Hrelated #_ #Hfunct #_ lapply (Hfunct (\fst a)) cases (find_funct clight_fundef ge (\fst a)); [ 1: #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 2: #clfd -Hsim #Hsim lapply (Hsim clfd (refl ? (Some ? clfd))) #Heq >Heq whd in match (bindIO ??????) in ⊢ (% → %); >simplify_type_of_fundef_eq >(simplify_fun_typeof_eq ge en m) cases (assert_type_eq (type_of_fundef clfd) (fun_typeof func)) [ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #Htype_eq cases ret [ 1: whd in match (bindIO ??????) in ⊢ (% → %); #Eq destruct (Eq) %{(Callstate (simplify_fundef clfd) (\fst  l) (Kcall (None (block×offset×type)) (simplify_function f) en k') m)} @conj [ 1: // | 2: %2 @cc_call // ] | 2: #fptr whd in match (bindIO ??????) in ⊢ (% → %); elim (Hsim_lvalue_related fptr) * [ 2: * #error #Hfail >Hfail #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_lvalue ge en m fptr) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #a #Hsim #Htype_eq_fptr >(Hsim a (refl ? (OK ? a))) whd in match (bindIO ??????) in ⊢ (% → %); #Heq destruct (Heq) %{(Callstate (simplify_fundef clfd) (\fst  l) (Kcall (Some (block×offset×type) 〈\fst  a,typeof (simplify_e fptr)〉) (simplify_function f) en k') m)} @conj [ 1: // | 2: >(simplify_typeof_eq ge en m) %2 @cc_call // ] ] ] ] ] ] ] ] ] ] | 4: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) 
in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; destruct (Heq) %{(State (simplify_function f) (simplify_statement stm1) (Kseq (simplify_statement stm2) k') en m)} @conj [ 1: // | 2: %1 @cc_seq // ] | 5: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; elim (Hsim_related cond) in Heq; * [ 2: * #error #Hfail >Hfail #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ge en m cond) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: * #condval #condtrace #Hsim lapply (Hsim 〈condval, condtrace〉 (refl ? (OK ? 〈condval, condtrace〉))) #Heq >Heq #Htype_eq_cond whd in match (bindIO ??????) in ⊢ (% → %); >(simplify_typeof_eq ge en m) cases (exec_bool_of_val condval (typeof cond)) [ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: * whd in match (bindIO ??????) in ⊢ (% → %); #Heq normalize nodelta in Heq ⊢ %; [ 1: destruct skip (condtrace) %{(State (simplify_function f) (simplify_statement iftrue) k' en m)} @conj [ 1: // | 2: Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; elim (Hsim_related cond) in Heq; * [ 2: * #error #Hfail >Hfail #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ge en m cond) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: * #condval #condtrace #Hsim lapply (Hsim 〈condval, condtrace〉 (refl ? (OK ? 〈condval, condtrace〉))) #Heq >Heq #Htype_eq_cond whd in match (bindIO ??????) in ⊢ (% → %); >(simplify_typeof_eq ge en m) cases (exec_bool_of_val condval (typeof cond)) [ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: * whd in match (bindIO ??????) 
in ⊢ (% → %); #Heq normalize nodelta in Heq ⊢ %; [ 1: destruct skip (condtrace) %{(State (simplify_function f) (simplify_statement body) (Kwhile (simplify_e cond) (simplify_statement body) k') en m)} @conj [ 1: // | 2: Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; destruct (Heq) %{(State (simplify_function f) (simplify_statement body) (Kdowhile (simplify_e cond) (simplify_statement body) k') en m)} @conj [ 1: // | 2: %1 @cc_dowhile // ] | 8: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; cases (is_Sskip init) in Heq; [ 2: #Hinit_neq_Sskip elim (simplify_is_not_skip init Hinit_neq_Sskip) #pf #Hrewrite >Hrewrite normalize nodelta whd in match (ret ??) in ⊢ (% → %); #Eq destruct (Eq) %{(State (simplify_function f) (simplify_statement init) (Kseq (Sfor Sskip (simplify_e cond) (simplify_statement step) (simplify_statement body)) k') en m)} @conj [ 1: // | 2: %1 @cc_for1 // ] | 1: #Hinit_eq_Sskip >Hinit_eq_Sskip whd in match (simplify_statement ?); whd in match (is_Sskip ?); normalize nodelta elim (Hsim_related cond) * [ 2: * #error #Hfail #_ >Hfail #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ge en m cond) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite #Htype_eq_cond >Hrewrite whd in match (m_bind ?????); whd in match (m_bind ?????); Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) 
in Heq ⊢ %; inversion Hcont_cast in Heq; normalize nodelta [ 1: #Hk #Hk' #_ | 2: #stm' #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ | 3: #cond #body #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ | 4: #cond #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 5,6,7: #cond #step #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 8: #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 9: #r #f0 #en0 #k0 #k0' #Hcont_cast #Hind #Hk #Hk' #_ ] #H whd in match (ret ??) in H ⊢ %; destruct (H) [ 1,4: %{(State (simplify_function f) Sbreak k0' en m)} @conj [ 1,3: // | 2,4: %1 // ] | 2,3,5,6: %{(State (simplify_function f) Sskip k0' en m)} @conj try // %1 // ] | 10: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; inversion Hcont_cast in Heq; normalize nodelta [ 1: #Hk #Hk' #_ | 2: #stm' #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ | 3: #cond #body #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ | 4: #cond #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 5,6,7: #cond #step #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 8: #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 9: #r #f0 #en0 #k0 #k0' #Hcont_cast #Hind #Hk #Hk' #_ ] #H whd in match (ret ??) in H ⊢ %; destruct (H) [ 1,4,6: %{(State (simplify_function f) Scontinue k0' en m)} @conj try // %1 // | 2: %{(State (simplify_function f) (Swhile (simplify_e cond) (simplify_statement body)) k0' en m)} @conj try // %1 // | 3: elim (Hsim_related cond) #Hsim_cond #Htype_cond_eq elim Hsim_cond in H; [ 2: * #error #Hfail >Hfail #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ??? cond) [ 2: #error #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #a #Hsim lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite whd in match (m_bind ?????) in ⊢ (% → %); Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) 
in Heq ⊢ %; cases retval in Heq; normalize nodelta [ 1: >fn_return_simplify cases (fn_return f) normalize nodelta whd in match (ret ??) in ⊢ (% → %); [ 2: #sz #sg | 3: #fl | 4: #rg #ty' | 5: #rg #ty #n | 6: #tl #ty' | 7: #id #fl | 8: #id #fl | 9: #rg #id ] #H destruct (H) %{(Returnstate Vundef (call_cont k') (free_list m (blocks_of_env en)))} @conj [ 1: // | 2: %3 @call_cont_simplify // ] | 2: #e >fn_return_simplify cases (type_eq_dec (fn_return f) Tvoid) normalize nodelta [ 1: #_ #Habsurd destruct (Habsurd) | 2: #_ elim (Hsim_related e) * [ 2: * #error #Hfail >Hfail #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ??? e) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #a #Hsim #Htype_eq_e lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite whd in match (m_bind ?????); whd in match (m_bind ?????); #Heq destruct (Heq) %{(Returnstate (\fst  a) (call_cont k') (free_list m (blocks_of_env en)))} @conj [ 1: // | 2: %3 @call_cont_simplify // ] ] ] ] ] | 12: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; elim (Hsim_related cond) in Heq; * [ 2: * #error #Hfail >Hfail #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: cases (exec_expr ??? cond) [ 2: #error #_ #_ #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #a #Hsim #Htype_eq_cond lapply (Hsim a (refl ? (OK ? a))) #Hrewrite >Hrewrite whd in match (bindIO ??????); whd in match (bindIO ??????); cases (\fst a) normalize nodelta [ 1,3,4,5: #a destruct (a) #b destruct (b) | 2: #sz #i whd in match (ret ??) in ⊢ (% → %); #Heq destruct (Heq) %{(State (simplify_function f) (seq_of_labeled_statement (select_switch sz i (simplify_ls switchcases))) (Kswitch k') en m)} @conj [ 1: // | 2: @(labeled_statements_ind … switchcases) [ 1: #default_s whd in match (simplify_ls ?); whd in match (select_switch sz i ?) in ⊢ (?%%); whd in match (seq_of_labeled_statement ?) 
in ⊢ (?%%); %1 @cc_switch // | 2: #sz' #i' #top_case #tail #Hind cut ((seq_of_labeled_statement (select_switch sz i (simplify_ls (LScase sz' i' top_case tail)))) = (simplify_statement (seq_of_labeled_statement (select_switch sz i (LScase sz' i' top_case tail))))) [ 1: >select_switch_commute >simplify_ls_commute @refl | 2: #Hrewrite >Hrewrite %1 @cc_switch // ] ] ] ] ] ] | 13: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; destruct (Heq) %{(State (simplify_function f) (simplify_statement body) k' en m)} @conj %1 // | 14: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; lapply (cast_find_label_fn lab f (call_cont k) (call_cont k')) cases (find_label lab (fn_body f) (call_cont k)) in Heq; normalize nodelta [ 1: #Habsurd destruct (Habsurd) | 2: * #st #kst normalize nodelta #Heq whd in match (ret ??) in Heq; #H lapply (H st kst (call_cont_simplify ???) (refl ? (Some ? 〈st,kst〉))) try // * #kst' * #Heq2 #Hcont_cast' >Heq2 normalize nodelta destruct (Heq) %{(State (simplify_function f) (simplify_statement st) kst' en m)} @conj [ 1: // | 2: %1 // ] ] | 15: #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (simplify_statement ?) in Heq ⊢ %; #Heq whd in match (exec_step ??) in Heq ⊢ %; destruct (Heq) %{(State (simplify_function f) (simplify_statement body) k' en m)} @conj [ 1: // | 2: %1 // ] ] | 2: (* Call state *) #fd #args #k #k' #m #Hcont_cast #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (exec_step ??) in ⊢ (% → %); elim fd in Heq_s1'; normalize nodelta [ 1: * #rettype #args #vars #body #Heq_s1' whd in match (simplify_function ?); cases (exec_alloc_variables empty_env ??) #local_env #new_mem normalize nodelta cases (exec_bind_parameters ????) 
[ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #new_mem_init whd in match (m_bind ?????); whd in match (m_bind ?????); #Heq destruct (Heq) %{(State (mk_function rettype args vars (simplify_statement body)) (simplify_statement body) k' local_env new_mem_init)} @conj [ 1: // | 2: %1 // ] ] | 2: #id #argtypes #rettype #Heq_s1' cases (check_eventval_list args ?) [ 2: #error #Habsurd normalize in Habsurd; destruct (Habsurd) | 1: #l whd in match (m_bind ?????); whd in match (m_bind ?????); #Habsurd destruct (Habsurd) ] ] | 3: (* Return state *) #res #k #k' #m #Hcont_cast #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (exec_step ??) in ⊢ (% → %); inversion Hcont_cast [ 1: #Hk #Hk' #_ | 2: #stm' #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ | 3: #cond #body #k0 #k0' #Hconst_cast0 #Hind #Hk #Hk' #_ | 4: #cond #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 5,6,7: #cond #step #body #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 8: #k0 #k0' #Hcont_cast0 #Hind #Hk #Hk' #_ | 9: #r #f0 #en0 #k0 #k0' #Hcont_cast #Hind #Hk #Hk' #_ ] normalize nodelta [ 1: cases res normalize nodelta [ 2: * normalize nodelta #i [ 3: #Heq whd in match (ret ??) in Heq; destruct (Heq) %{(Finalstate i)} @conj [ 1: // | 2: // ] | * : #Habsurd destruct (Habsurd) ] | *: #a try #b destruct ] | 9: elim r normalize nodelta [ 2: * * #block #offset #typ normalize nodelta cases (opt_to_io io_out io_in mem ? (store_value_of_type' ????)) [ 2: #mem whd in match (m_bind ?????); whd in match (m_bind ?????); #Heq destruct (Heq) %{(State (simplify_function f0) Sskip k0' en0 mem)} @conj [ 1: // | 2: %1 // ] | 1: #output #resumption whd in match (m_bind ?????); #Habsurd destruct (Habsurd) | 3: #eror #Habsurd normalize in Habsurd; destruct (Habsurd) ] | 1: #Heq whd in match (ret ??) 
in Heq; destruct (Heq) %{(State (simplify_function f0) Sskip k0' en0 m)} @conj [ 1: // | 2: %1 // ] ] | *: #Habsurd destruct (Habsurd) ] | 4: (* Final state *) #r #Heq_s1 #Heq_s1' #_ >Heq_s1 in Houtcome; whd in match (exec_step ??) in ⊢ (% → %); #Habsurd destruct (Habsurd) ]
# Python implementation of a multilayer perceptron (MLP), trained on the double-moon dataset

1. Load the required libraries and generate the dataset

import math
import random
import matplotlib.pyplot as plt
import numpy as np

def make_matrix(m, n, fill=0.0):
    # helper used below; it was missing from the original listing
    return [[fill] * n for _ in range(m)]

class moon_data_class(object):
    def __init__(self, N, d, r, w):
        self.N = N
        self.w = w
        self.d = d
        self.r = r

    def sgn(self, x):
        if x > 0:
            return 1
        else:
            return -1

    def sig(self, x):
        return 1.0 / (1 + np.exp(-x))

    def dbmoon(self):
        N1 = 10 * self.N
        N = self.N
        r = self.r
        w2 = self.w / 2
        d = self.d
        done = True
        data = np.empty(0)
        while done:
            # generate rectangular data
            tmp_x = 2 * (r + w2) * (np.random.random([N1, 1]) - 0.5)
            tmp_y = (r + w2) * np.random.random([N1, 1])
            tmp = np.concatenate((tmp_x, tmp_y), axis=1)
            tmp_ds = np.sqrt(tmp_x * tmp_x + tmp_y * tmp_y)
            # generate double-moon data --- upper moon
            idx = np.logical_and(tmp_ds > (r - w2), tmp_ds < (r + w2))
            idx = (idx.nonzero())[0]
            if data.shape[0] == 0:
                data = tmp.take(idx, axis=0)
            else:
                data = np.concatenate((data, tmp.take(idx, axis=0)), axis=0)
            if data.shape[0] >= N:
                done = False
        db_moon = data[0:N, :]
        # generate double-moon data --- lower moon
        data_t = np.empty([N, 2])
        data_t[:, 0] = data[0:N, 0] + r
        data_t[:, 1] = -data[0:N, 1] - d
        db_moon = np.concatenate((db_moon, data_t), axis=0)
        return db_moon

2. Define the activation functions

def rand(a, b):
    return (b - a) * random.random() + a

def sigmoid(x):
    # return np.tanh(-2.0*x)
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivate(x):
    # return -2.0*(1.0-np.tanh(-2.0*x)*np.tanh(-2.0*x))
    return x * (1 - x)  # derivative of the sigmoid, expressed in terms of its output

3. Define the neural network

class BP_NET(object):
    def __init__(self):
        self.input_n = 0
        self.hidden_n = 0
        self.output_n = 0
        self.input_cells = []
        self.bias_input_n = []
        self.bias_output = []
        self.hidden_cells = []
        self.output_cells = []
        self.input_weights = []
        self.output_weights = []
        self.input_correction = []
        self.output_correction = []

    def setup(self, ni, nh, no):
        self.input_n = ni + 1  # input layer + bias term
        self.hidden_n = nh
        self.output_n = no
        self.input_cells = [1.0] * self.input_n
        self.hidden_cells = [1.0] * self.hidden_n
        self.output_cells = [1.0] * self.output_n
        self.input_weights = make_matrix(self.input_n, self.hidden_n)
        self.output_weights = make_matrix(self.hidden_n, self.output_n)
        for i in range(self.input_n):
            for h in range(self.hidden_n):
                self.input_weights[i][h] = rand(-0.2, 0.2)
        for h in range(self.hidden_n):
            for o in range(self.output_n):
                self.output_weights[h][o] = rand(-2.0, 2.0)
        self.input_correction = make_matrix(self.input_n, self.hidden_n)
        self.output_correction = make_matrix(self.hidden_n, self.output_n)

    def predict(self, inputs):
        for i in range(self.input_n - 1):
            self.input_cells[i] = inputs[i]
        for j in range(self.hidden_n):
            total = 0.0
            for i in range(self.input_n):
                total += self.input_cells[i] * self.input_weights[i][j]
            self.hidden_cells[j] = sigmoid(total)
        for k in range(self.output_n):
            total = 0.0
            for j in range(self.hidden_n):
                total += self.hidden_cells[j] * self.output_weights[j][k]  # + self.bias_output[k]
            self.output_cells[k] = sigmoid(total)
        return self.output_cells[:]

    def back_propagate(self, case, label, learn, correct):
        # forward pass to fill output_cells
        self.predict(case)
        output_deltas = [0.0] * self.output_n
        error = 0.0
        # error = expected output - actual output
        for o in range(self.output_n):
            error = label[o] - self.output_cells[o]
            output_deltas[o] = sigmoid_derivate(self.output_cells[o]) * error  # keeps the delta bounded
        hidden_deltas = [0.0] * self.hidden_n
        for j in range(self.hidden_n):
            error = 0.0
            for k in range(self.output_n):
                error += output_deltas[k] * self.output_weights[j][k]
            hidden_deltas[j] = sigmoid_derivate(self.hidden_cells[j]) * error
        for h in range(self.hidden_n):
            for o in range(self.output_n):
                change = output_deltas[o] * self.hidden_cells[h]
                # adjust weight: learning rate * change for each node of the previous layer
                self.output_weights[h][o] += learn * change
        # update the input -> hidden weights
        for i in range(self.input_n):
            for h in range(self.hidden_n):
                change = hidden_deltas[h] * self.input_cells[i]
                self.input_weights[i][h] += learn * change
        error = 0
        for o in range(len(label)):
            for k in range(self.output_n):
                error += 0.5 * (label[o] - self.output_cells[k]) ** 2
        return error

    def train(self, cases, labels, limit, learn, correct=0.1):
        for i in range(limit):
            error = 0.0
            # learn = learn_speed_start / float(i + 1)
            for j in range(len(cases)):
                case = cases[j]
                label = labels[j]
                error += self.back_propagate(case, label, learn, correct)
            if (i + 1) % 500 == 0:
                print("error:", error)

    def test(self):
        N = 200
        d = -4
        r = 10
        width = 6
        data_source = moon_data_class(N, d, r, width)
        data = data_source.dbmoon()
        # x0 = [1 for x in range(1, 401)]
        input_cells = np.array([np.reshape(data[0:2 * N, 0], len(data)),
                                np.reshape(data[0:2 * N, 1], len(data))]).transpose()
        labels_pre = [[1.0] for y in range(1, 201)]
        labels_pos = [[0.0] for y in range(1, 201)]
        labels = labels_pre + labels_pos
        self.setup(2, 5, 1)  # initialize the network: input, hidden and output layer sizes
        self.train(input_cells, labels, 2000, 0.05, 0.1)  # tunable
        test_x = []
        test_y = []
        test_p = []
        y_p_old = 0
        for x in np.arange(-15., 25., 0.1):
            for y in np.arange(-10., 10., 0.1):
                y_p = self.predict(np.array([x, y]))
                if y_p_old < 0.5 and y_p[0] > 0.5:
                    test_x.append(x)
                    test_y.append(y)
                    test_p.append([y_p_old, y_p[0]])
                y_p_old = y_p[0]
        # plot the decision boundary
        plt.plot(test_x, test_y, 'g--')
        plt.plot(data[0:N, 0], data[0:N, 1], 'r*',
                 data[N:2 * N, 0], data[N:2 * N, 1], 'b*')
        plt.show()

if __name__ == '__main__':
    nn = BP_NET()
    nn.test()

4. Results of the run
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition) $t\ge-20$ Using the Distributive Property and the properties of inequality, the solution to the given inequality, $5(t+3)+9\ge3(t-2)-10 ,$ is \begin{array}{l}\require{cancel} 5(t)+5(3)+9\ge3(t)+3(-2)-10 \\\\ 5t+15+9\ge3t-6-10 \\\\ 5t+24\ge3t-16 \\\\ 5t-3t\ge-16-24 \\\\ 2t\ge-40 \\\\ t\ge-\dfrac{40}{2} \\\\ t\ge-20 .\end{array}
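The boundary value $t=-20$ derived above can be sanity-checked numerically; this is a quick sketch (helper names are ours, not from the textbook):

```python
def lhs(t):
    return 5 * (t + 3) + 9   # left side of the inequality

def rhs(t):
    return 3 * (t - 2) - 10  # right side

# t = -20 satisfies the inequality, t = -21 does not,
# consistent with the solution t >= -20
print(lhs(-20) >= rhs(-20))  # True
print(lhs(-21) >= rhs(-21))  # False
```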
# What is the meaning of filtration fraction?

## What is the meaning of filtration fraction?

The filtration fraction (FF) is the portion of plasma that is filtered across the glomerulus relative to the renal plasma flow (RPF). In a healthy individual, the usual filtration fraction is around 0.2, or 20% of the total renal plasma flow.

How do you find the filtration fraction?

Formulas

1. Filtration Fraction (FF) % = GFR / RPF * 100.
2. Filtration Fraction = Ultrafiltrate flow rate / [Blood flow rate x (1 – Hct) + Pre-dilution replacement flow rate]

What is Stage 3 kidney disease?

In Stage 3 CKD, your kidneys have mild to moderate damage, and they are less able to filter waste and fluid out of your blood. This waste can build up in your body and begin to harm other areas, for example by causing high blood pressure, anemia and problems with your bones. This buildup of waste is called uremia.

### What is excretion rate?

The urinary excretion rate is calculated based on short-term, defined-time sample collections with a known sample mass, and this measurement can be used to remove the variability in urine concentrations due to urine dilution.

What is the filtration fraction in dialysis?

The filtration fraction is the volume of plasma removed from the dialysed blood by ultrafiltration. The official definition is "the ratio of ultrafiltration rate to plasma water flow rate". A filtration fraction of 25% represents 25% of the plasma water removed by ultrafiltration.

What is the filtration fraction (QUF)?

The filtration fraction is defined as the ratio between the ultrafiltration flow rate QUF and the plasma flow rate QP (Neri et al., 2016), such that FF = QUF / QP.

## What is the normal filtration fraction of plasma?

The filtration fraction (FF) is the portion of plasma that is filtered across the glomerulus relative to the renal plasma flow (RPF). In a healthy individual, the usual filtration fraction is around 0.2, or 20% of the total renal plasma flow.
What is the normal filtration rate of the renal tubules?

The filtration fraction is the fraction of the plasma entering the kidney that filters into the lumen of the renal tubules, determined by dividing the glomerular filtration rate by the renal plasma flow; normally, it is around 0.17.
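The first formula above (FF % = GFR / RPF * 100) is simple enough to compute directly; this sketch uses an illustrative GFR of 120 mL/min and RPF of 600 mL/min (the function name is ours):

```python
def filtration_fraction_percent(gfr, rpf):
    # FF (%) = GFR / RPF * 100, per the formula quoted above
    return gfr / rpf * 100

print(filtration_fraction_percent(120, 600))  # 20.0, i.e. the typical ~20%
```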
# Discrete Math Forum Discrete Math Help Forum: Discrete mathematics, logic, set theory 1. ### Difference equation tutorial: draft of part I • Replies: 4 • Views: 4,563 Jun 14th 2014, 01:54 AM • Replies: 0 • Views: 4,463 Feb 11th 2011, 11:34 AM 3. ### List of rules used to moderate MHF - please read carefully. • Replies: 0 • Views: 2,959 Jul 19th 2010, 10:33 PM 1. ### Two combinatoric problems • Replies: 6 • Views: 942 Aug 21st 2010, 05:16 PM 2. ### Turing machine • Replies: 14 • Views: 1,084 Aug 21st 2010, 07:17 AM 3. ### Combination Problem • Replies: 7 • Views: 668 Aug 21st 2010, 03:49 AM 4. ### Minimal cost circulation • Replies: 0 • Views: 374 Aug 21st 2010, 03:47 AM 5. ### How many combinations without duplicates? • Replies: 2 • Views: 884 Aug 19th 2010, 09:38 PM 6. ### Check my proof! • Replies: 2 • Views: 526 Aug 19th 2010, 05:19 PM 7. ### Finding a counterexample • Replies: 2 • Views: 567 Aug 18th 2010, 08:56 AM 8. ### Quantifier Proofs • Replies: 4 • Views: 792 Aug 18th 2010, 04:36 AM 9. ### Combinatorial Proof • Replies: 7 • Views: 593 Aug 18th 2010, 04:08 AM 10. ### Jalapeños and Eggs... how many combinations for this set of numbers. THANK YOU • Replies: 3 • Views: 454 Aug 17th 2010, 02:16 PM 11. ### Deduction Theorem Help • Replies: 6 • Views: 661 Aug 17th 2010, 11:49 AM 12. ### Logical statements • Replies: 20 • Views: 1,489 Aug 17th 2010, 01:55 AM 13. ### An Equality of Binomial Sums • Replies: 2 • Views: 744 Aug 16th 2010, 12:25 AM 14. ### Induction question, help greatly appreciated :) • Replies: 1 • Views: 451 Aug 15th 2010, 11:58 PM 15. ### Universal Quantification proof • Replies: 8 • Views: 736 Aug 14th 2010, 11:29 AM 16. ### Optimization problem • Replies: 9 • Views: 656 Aug 13th 2010, 10:55 AM 17. ### Big-Oh algorithm Proof • Replies: 4 • Views: 965 Aug 12th 2010, 11:20 PM 18. ### Induction question, not sure how to do this, help appreciated • Replies: 1 • Views: 348 Aug 12th 2010, 05:54 PM 19. 
### quickie on digraphs • Replies: 0 • Views: 415 Aug 12th 2010, 02:24 PM 20. ### Codes • Replies: 0 • Views: 576 Aug 12th 2010, 02:36 AM 21. ### A functions proof • Replies: 3 • Views: 524 Aug 12th 2010, 02:10 AM 22. ### Finding the greatest common divisor • Replies: 1 • Views: 622 Aug 11th 2010, 06:27 PM • Replies: 2 • Views: 738 Aug 11th 2010, 11:54 AM 24. ### [SOLVED] Strange pure predicate calculus result, what went wrong? • Replies: 2 • Views: 447 Aug 11th 2010, 11:12 AM 25. ### Induction • Replies: 1 • Views: 513 Aug 11th 2010, 02:08 AM 26. ### binomial theorem questions. • Replies: 1 • Views: 469 Aug 10th 2010, 07:51 PM 28. ### [SOLVED] Math Riddle- Help • Replies: 1 • Views: 597 Aug 9th 2010, 12:38 PM
# How do I describe and calculate the effect of an impacting object? My lab studies the physiology of impact injury on biological tissues. I use a pneumatic cylinder to impart injury into a biological sample and then assess the molecular and physiological changes in that tissue. It is the first step in trying to understand the pathophysiology of traumatic brain injury. So, I have the mass of the internal moving components of the cylinder (the rod and piston body = 25grams) and I have the velocity of these moving components (let's call it 10m/s). I also have the sample and cylinder set up so that the total displacement is 5mm. The sample sits on a foam pad. Some of this displacement is represented by compression of the sample, but for the most part the sample rapidly accelerates and decelerates through this 5mm displacement. Most of the related literature simply reports the velocity of the impact. However, I know enough physics to know that velocity is but one piece of the impact physics. So, my questions: 1. Colloquially, one might ask what is the force imparted on the tissue. But, that might not be the correct term. What is the best way to label the effect of the cylinder on the tissue? Is Force correct? Would it be Kinetic Energy? I'm just trying to figure out the most informative/accurate description of the effect of the cylinder on the tissue. 2. Then, how do I calculate that (what ever it is: force, KE, ...)? ## 2 Answers By Expert Tutors By: Tutor New to Wyzant Experienced American Nuclear Physicist ## Still looking for help? Get the right answer, fast. Get a free answer to a quick problem. Most questions answered within 4 hours. #### OR Choose an expert and meet online. No packages or subscriptions, pay only for the time you need.
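Using the numbers given in the question (25 g, 10 m/s, 5 mm), the impact can be characterized by its kinetic energy and momentum, and, via the work-energy theorem, by an average force over the stroke. This is an idealized sketch only (constant deceleration assumed, sample compression and the foam pad ignored):

```python
mass = 0.025  # kg, rod + piston body
v = 10.0      # m/s, impact velocity
d = 0.005     # m, total displacement

kinetic_energy = 0.5 * mass * v**2   # J
momentum = mass * v                  # kg*m/s
avg_force = kinetic_energy / d       # N; work-energy theorem: F_avg * d = KE

print(kinetic_energy, momentum, avg_force)  # 1.25 J, 0.25 kg*m/s, 250 N
```

The peak force can be much larger than this 250 N average if the deceleration is not uniform, which is one reason reporting only velocity is incomplete.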
A292380 Base-2 expansion of a(n) encodes the steps where multiples of 4 are encountered when map x -> A252463(x) is iterated down to 1, starting from x=n.

0, 0, 0, 1, 0, 0, 0, 3, 2, 0, 0, 1, 0, 0, 0, 7, 0, 4, 0, 1, 0, 0, 0, 3, 4, 0, 6, 1, 0, 0, 0, 15, 0, 0, 0, 9, 0, 0, 0, 3, 0, 0, 0, 1, 2, 0, 0, 7, 8, 8, 0, 1, 0, 12, 0, 3, 0, 0, 0, 1, 0, 0, 2, 31, 0, 0, 0, 1, 0, 0, 0, 19, 0, 0, 8, 1, 0, 0, 0, 7, 14, 0, 0, 1, 0, 0, 0, 3, 0, 4, 0, 1, 0, 0, 0, 15, 0, 16, 2, 17, 0, 0, 0, 3, 0

OFFSET 1,8

LINKS Antti Karttunen, Table of n, a(n) for n = 1..16384

FORMULA a(n) = A048735(A156552(n)). a(n) = A292370(A292384(n)). Other identities. For n >= 1: a(n) AND A292382(n) = 0, where AND is a bitwise-AND (A004198). a(n) + A292382(n) = A156552(n). A000120(a(n)) + A000120(A292382(n)) = A001222(n). A000035(a(n)) = A121262(n).

EXAMPLE For n = 4, the starting value is a multiple of four, after which follows A252463(4) = 2, and A252463(2) = 1, the end point of iteration, and neither 2 nor 1 is a multiple of four, thus a(4) = 1*(2^0) + 0*(2^1) + 0*(2^2) = 1. For n = 8, the starting value is a multiple of four, after which follows A252463(8) = 4 (also a multiple), continuing as before as 4 -> 2 -> 1, thus a(8) = 1*(2^0) + 1*(2^1) + 0*(2^2) + 0*(2^3) = 3. For n = 9, the starting value is not a multiple of four, after which follows A252463(9) = 4 (which is), continuing as before as 4 -> 2 -> 1, thus a(9) = 0*(2^0) + 1*(2^1) + 0*(2^2) + 0*(2^3) = 2.

MATHEMATICA Table[FromDigits[Reverse@ NestWhileList[Function[k, Which[k == 1, 1, EvenQ@ k, k/2, True, Times @@ Power[Which[# == 1, 1, # == 2, 1, True, NextPrime[#, -1]] & /@ First@ #, Last@ #] &@ Transpose@ FactorInteger@ k]], n, # > 1 &] /.
k_ /; IntegerQ@ k :> If[Mod[k, 4] == 0, 1, 0], 2], {n, 105}] (* Michael De Vlieger, Sep 21 2017 *)

PROG
(Scheme) (define (A292380 n) (A292370 (A292384 n)))
(Python)
from sympy.core.cache import cacheit
from sympy.ntheory.factor_ import digits
from sympy import factorint, prevprime
from operator import mul
from functools import reduce
def a292370(n):
    k = digits(n, 4)[1:]
    return 0 if n==0 else int("".join(['1' if i==0 else '0' for i in k]), 2)
def a064989(n):
    f = factorint(n)
    return 1 if n==1 else reduce(mul, [1 if i==2 else prevprime(i)**f[i] for i in f])
def a252463(n): return 1 if n==1 else n//2 if n%2==0 else a064989(n)
@cacheit
def a292384(n): return 1 if n==1 else 4*a292384(a252463(n)) + n%4
def a(n): return a292370(a292384(n))
print([a(n) for n in range(1, 111)]) # Indranil Ghosh, Sep 21 2017
(PARI) a(n) = my(m=factor(n), k=-1, ret=0); for(i=1, matsize(m)[1], ret += bitneg(0, m[i, 2]-1) << (primepi(m[i, 1])+k); k+=m[i, 2]); ret; \\ Kevin Ryde, Dec 11 2020

CROSSREFS Cf. A005940, A048735, A156552, A292370, A292381, A292382, A292383, A292384.
Sequence in context: A206590 A206825 A336551 * A242165 A231724 A214851
Adjacent sequences: A292377 A292378 A292379 * A292381 A292382 A292383
KEYWORD nonn,base
AUTHOR Antti Karttunen, Sep 15 2017
STATUS approved
Convert GPHM to l/100km (Gallons per 100 miles (US) to Liter per 100 Kilometer)

## Gallons per 100 miles (US) into Liter per 100 Kilometer

Direct link to this calculator: https://www.convert-measurement-units.com/convert+Gallons+per+100+miles+US+to+Liter+per+100+Kilometer.php

# Convert Gallons per 100 miles (US) to Liter per 100 Kilometer (GPHM to l/100km):

1. Choose the right category from the selection list, in this case 'Fuel consumption'.
2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), brackets and π (pi) are all permitted at this point.
3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Gallons per 100 miles (US) [GPHM]'.
4. Finally choose the unit you want the value to be converted to, in this case 'Liter per 100 Kilometer [l/100km]'.
5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.
With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '367 Gallons per 100 miles (US)'. In so doing, either the full name of the unit or its abbreviation can be used; for example, either 'Gallons per 100 miles (US)' or 'GPHM'. Then, the calculator determines the category of the measurement unit that is to be converted, in this case 'Fuel consumption'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Alternatively, the value to be converted can be entered as follows: '80 GPHM to l/100km' or '92 GPHM into l/100km' or '39 Gallons per 100 miles (US) -> Liter per 100 Kilometer' or '43 GPHM = l/100km' or '12 Gallons per 100 miles (US) to l/100km' or '4 GPHM to Liter per 100 Kilometer' or '65 Gallons per 100 miles (US) into Liter per 100 Kilometer'. For this alternative, the calculator also figures out immediately into which unit the original value is specifically to be converted. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second. Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(53 * 7) GPHM'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '367 Gallons per 100 miles (US) + 1101 Liter per 100 Kilometer' or '14mm x 80cm x 83dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question.
If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 1.005 563 262 454 3×10^28. For this form of presentation, the number will be segmented into an exponent, here 28, and the actual number, here 1.005 563 262 454 3. For devices on which the possibilities for displaying numbers are limited, such as, for example, pocket calculators, one also finds the way of writing numbers as 1.005 563 262 454 3E+28. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 10 055 632 624 543 000 000 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications.

## How many Liter per 100 Kilometer make 1 Gallons per 100 miles (US)?

1 Gallons per 100 miles (US) [GPHM] = 2.352 Liter per 100 Kilometer [l/100km] - Measurement calculator that can be used to convert Gallons per 100 miles (US) to Liter per 100 Kilometer, among others.
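The conversion itself reduces to a single constant factor, since 1 US gallon = 3.785411784 L and 1 mile = 1.609344 km (both exact definitions); that ratio is where the 2.352 quoted above comes from. A small sketch (function name is ours):

```python
LITERS_PER_US_GALLON = 3.785411784  # exact definition
KM_PER_MILE = 1.609344              # exact definition

def gphm_to_l_per_100km(gphm):
    # gallons per 100 miles -> liters per 100 kilometers:
    # the "per 100" cancels, leaving (liters/gallon) / (km/mile)
    return gphm * LITERS_PER_US_GALLON / KM_PER_MILE

print(round(gphm_to_l_per_100km(1), 3))  # 2.352
```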
### Towards Solution of Large Scale Image Restoration and ```Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Towards Solution of Large Scale Image Restoration and Reconstruction Problems Rosemary Renaut Joint work with Anne Gelb, Aditya Viswanathan, Hongbin Guo, Doug Cochran,Youzuo Lin, Arizona State University November 4, 2009 National Science Foundation: Division of Computational Mathematics 1 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Outline 1 Motivation Quick Review 2 Statistical Results for Least Squares Summary of LS Statistical Results Implications of Statistical Results for Regularized Least Squares 3 Newton algorithm Algorithm with LSQR (Paige and Saunders) Results 4 Large Scale Problems Application in Image Reconstruction and Restoration 5 Stopping the EM Algorithm Statistically 6 Edge Detection for PSF Estimation National Science Foundation: Division of Computational Mathematics 2 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Signal/Image Restoration: Integral Model of Signal Degradation b(t) = R K(t, s)x(s)ds K(t, s) describes blur of the signal. Convolutional model: invariant K(t, s) = K(t − s) is Point Spread Function (PSF). Typically sampling includes noise e(t), model is Z b(t) = K(t − s)x(s)ds + e(t) Discrete model: given discrete samples b, find samples x of x Let A discretize K, assume known, model is given by b = Ax + e. Naı̈vely invert the system to find x! 
National Science Foundation: Division of Computational Mathematics 3 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Example 1-D Original and Blurred Noisy Signal Original signal x. Blurred and noisy signal b Gaussian PSF. National Science Foundation: Division of Computational Mathematics 4 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation The Solution: Regularization is needed Naı̈ve Solution A Regularized Solution National Science Foundation: Division of Computational Mathematics 5 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Least Squares for Ax = b: A Quick Review Background Consider discrete systems: A ∈ Rm×n , b ∈ Rm , x ∈ Rn Ax = b + e, Classical Approach Linear Least Squares (A full rank) xLS = arg min ||Ax − b||22 x Difficulty xLS sensitive to changes in right hand side b when A is ill-conditioned. For convolutional models system is ill-posed. National Science Foundation: Division of Computational Mathematics 6 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Introduce Regularization to Find Acceptable Solution Weighted Fidelity with Regularization • Regularize xRLS (λ) = arg min{kb − Axk2Wb + λ2 R(x)}, x Weighting matrix Wb • R(x) is a regularization term • λ is a regularization parameter which is unknown. Solution xRLS (λ) depends on λ. 
depends on the regularization operator R, and depends on the weighting matrix Wb.

The Weighting Matrix: Some Assumptions for Multiple Data Measurements
Given multiple measurements of the data b:
- Usually there is error in b: e is an m-vector of random measurement errors with mean 0 and positive definite covariance matrix Cb = E(ee^T).
- For uncorrelated heteroskedastic measurements (colored noise), Cb is a diagonal matrix of the error variances.
- For white noise, Cb = σ² I.
- Weighting the data fit term by Wb = Cb⁻¹ makes, theoretically, ẽ = Wb^(1/2) e uncorrelated.
- Difficulty: Wb may increase the ill-conditioning of A!
- For images, find Wb from the image data.

Formulation: Generalized Tikhonov Regularization with Weighting
Use R(x) = ||D(x − x0)||²:
x̂ = argmin J(x) = argmin { ||Ax − b||²_Wb + λ² ||D(x − x0)||² }.  (1)
- D is a suitable operator, often a derivative approximation. Assume N(A) ∩ N(D) = {0}.
- x0 is a reference solution, often x0 = 0; it might need to be an average solution.
- Having found λ, the posterior inverse covariance matrix is W̃x = A^T Wb A + λ² I. Posterior information can give some confidence on the parameter estimates.

Choice of λ is crucial: an example with D = I (figure, developed over several slides).
Choice of λ is crucial: different algorithms give different solutions.

Discrepancy Principle
Suppose the noise is white: Cb = σb² I. Find λ such that the regularized residual satisfies
σb² = (1/m) ||b − Ax(λ)||₂².
This can be implemented with a Newton root-finding algorithm, but the discrepancy principle typically oversmooths.
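A sketch of the discrepancy principle for Tikhonov regularization with D = I and Wb = I, on a hypothetical test problem; the residual norm is evaluated through the SVD filter factors, and since it is monotone in λ the root can be bracketed for `brentq`:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical small test problem: A, b, and a known white-noise level sigma_b.
rng = np.random.default_rng(1)
n = 40
t = np.arange(n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 18.0)   # assumed Gaussian kernel
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t / n)
sigma_b = 0.1
b = A @ x_true + sigma_b * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def residual_norm_sq(lam):
    # ||b - A x(lam)||^2 for Tikhonov with D = I, via the SVD filter factors
    return np.sum((lam**2 / (s**2 + lam**2)) ** 2 * beta**2)

# Discrepancy principle: residual^2 / m = sigma_b^2; the residual is monotone
# increasing in lam, so the root is bracketed by a tiny and a huge lam.
f = lambda lam: residual_norm_sq(lam) / n - sigma_b**2
lam_dp = brentq(f, 1e-10, 1e3)
x_dp = Vt.T @ (s / (s**2 + lam_dp**2) * beta)        # regularized solution
```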
Other choices [Vog02]: the L-curve, generalized cross validation (GCV), and unbiased predictive risk estimation (UPRE).

Some Standard Approaches I: L-curve, Find the Corner
Let r(λ) = (A(λ) − A)b, with influence matrix A(λ) = A (A^T Wb A + λ² D^T D)⁻¹ A^T.
- Plot log(||Dx||) against log(||r(λ)||) and find the corner.
- Expensive: requires a range of λ; the GSVD makes the calculations efficient.
- Not statistically based; there may be no corner.

Generalized Cross-Validation (GCV)
Let A(λ) = A (A^T Wb A + λ² D^T D)⁻¹ A^T; one can pick Wb = I. Minimize the GCV function
||b − Ax(λ)||²_Wb / [trace(I_m − A(λ))]²,
which estimates the predictive risk.
- It may have multiple minima.
- Expensive: requires a range of λ; the GSVD makes the calculations efficient.
- Requires a minimum; the function is sometimes flat.

Unbiased Predictive Risk Estimation (UPRE)
Minimize the expected value of the predictive risk via the UPRE function
||b − Ax(λ)||²_Wb + 2 trace(A(λ)) − m.
- Expensive: requires a range of λ; the GSVD makes the calculations efficient.
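For D = I and Wb = I the GCV function can be evaluated cheaply from one SVD. A sketch on hypothetical data; the residual and trace expressions below are the standard SVD identities for this case, not code from the talk:

```python
import numpy as np

# Hypothetical test problem (Gaussian blur, white noise).
rng = np.random.default_rng(5)
n = 50
t = np.arange(n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 18.0)
A /= A.sum(axis=1, keepdims=True)
x_true = np.cos(2 * np.pi * t / n)
b = A @ x_true + 0.05 * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(A)
beta = U.T @ b

def gcv(lam):
    phi = s**2 / (s**2 + lam**2)              # filter factors: eigenvalues of A(lam)
    resid2 = np.sum(((1 - phi) * beta) ** 2)  # ||b - A x(lam)||^2
    return resid2 / (n - phi.sum()) ** 2      # trace(I - A(lam)) = n - sum(phi)

# Evaluate on a logarithmic grid of lambda and take the minimizer.
lams = np.logspace(-8, 2, 200)
G = np.array([gcv(l) for l in lams])
lam_gcv = lams[np.argmin(G)]
x_gcv = Vt.T @ (s / (s**2 + lam_gcv**2) * beta)
```

The grid sweep is exactly the "requires a range of λ" cost the slide mentions; with the SVD in hand each evaluation is O(n).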
- Needs an estimate of the trace; a minimum is needed.

Background: Statistics of the Least Squares Problem
Theorem (Rao, 1973). Let r be the rank of A and let b ∼ N(Ax, σb² I) (the errors in the measurements are normally distributed with mean 0 and covariance σb² I). Then
J = min_x ||Ax − b||² ∼ σb² χ²(m − r),
i.e. J follows a χ² distribution with m − r degrees of freedom: basically the discrepancy principle.
Corollary (weighted least squares). For b ∼ N(Ax, Cb) and Wb = Cb⁻¹,
J = min_x ||Ax − b||²_Wb ∼ χ²(m − r).

Extension: Statistics of the Regularized Least Squares Problem
Theorem: χ² distribution of the regularized functional (Renaut/Mead 2008). (Note the weighting matrix on the regularization term.)
x̂ = argmin J_D(x) = argmin { ||Ax − b||²_Wb + ||x − x0||²_WD },  WD = D^T Wx D.  (2)
Assume Wb and Wx are symmetric positive definite, the problem is uniquely solvable (N(A) ∩ N(D) = {0}), and let the Moore-Penrose generalized inverse of WD be CD.
Statistics: the errors in the right hand side satisfy e ∼ N(0, Cb), and x0 is known, so that (x − x0) = f ∼ N(0, CD); x0 is the mean vector of the model parameters. Then
J_D(x̂(WD)) ∼ χ²(m + p − n).

Significance of the χ² Result J_D ∼ χ²(m + p − n)
For sufficiently large m̃ = m + p − n:
E(J(x(WD))) = m + p − n, and Var(J) = 2(m + p − n).
Hence, with high probability,
m̃ − √(2m̃) z_(α/2) < J(x̂(WD)) < m̃ + √(2m̃) z_(α/2),
(3)
where z_(α/2) is the relevant z-value for a χ² distribution with m̃ = m + p − n degrees of freedom.

General Result [MR09b], [RHM09], [MR09a]: The Cost Functional Follows a χ² Statistical Distribution
- If x0 is not the mean value, we obtain a non-central χ² distribution with centrality parameter c.
- If the problem is rank deficient, the degrees of freedom are reduced.
- With degrees of freedom m̃ and centrality parameter c:
E(J_D) = m̃ + c and Var(J_D) = 2m̃ + 4c.
This suggests: try to find WD so that E(J) = m̃ + c. First find λ only, i.e. find Wx = λ² I.

What Do We Need to Apply the Theory?
Requirements: the covariance Cb of the data parameters b (or of the model parameters x!), and a priori information x0, the mean of x. But x (and hence x0) is not known.
- If not known, use repeated data measurements to calculate Cb and the mean of b.
- Hence estimate the centrality parameter: E(b) = A E(x) gives the mean relation b̄ = A x̄, and
c = ||c||₂² = ||Q̃ U^T Wb^(1/2) (b − A x0)||₂²,
E(J_D) = E(||Q̃ U^T Wb^(1/2) (b − A x0)||₂²) = m + p − n + ||c||₂².
- Given the GSVD, estimate the degrees of freedom m̃; then use E(J) to find λ.

Designing the Algorithm I (assume x0 is the mean; experimentalists know something about the model parameters)
Recall: if Cb and Cx are good estimates of the covariances, |J_D(x̂) − (m + p − n)| should be small.
Thus, letting m̃ = m + p − n, we want
m̃ − √(2m̃) z_(α/2) < J(x(WD)) < m̃ + √(2m̃) z_(α/2),
where z_(α/2) is the relevant z-value for a χ² distribution with m̃ degrees of freedom.
GOAL: find Wx to make (3) tight. In the single-variable case, find λ such that J_D(x̂(λ)) ≈ m̃.

A Newton Line-Search Algorithm to Find λ = 1/σ (Basic Algebra)
Newton's method is used to solve F(σ) = J_D(σ) − m̃ = 0.
We use σ = 1/λ, and y(σ^(k)) is the current solution, for which x(σ^(k)) = y(σ^(k)) + x0. Then
∂J(σ)/∂σ = −(2/σ³) ||D y(σ)||² < 0.
Hence we have the basic Newton iteration
σ^(k+1) = σ^(k) ( 1 + (1/2) (σ^(k)/||Dy||)² (J_D(σ^(k)) − m̃) ).

Practical Details of the Algorithm: Large Scale Problems
Initialization:
- Convert the generalized Tikhonov problem to standard form. (If L is not invertible, one only needs to know how to form Ax and A^T x, and the null space of L.)
- Use the LSQR (Paige and Saunders) algorithm to find the bidiagonal matrix of the projected problem.
- Obtain a solution of the bidiagonal problem for a given initial σ.
Subsequent steps:
- Increase the dimension of the space if needed, reusing the existing bidiagonalization; a smaller system may also be used if appropriate.
- Each σ calculation reuses saved information from the Lanczos bidiagonalization.
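The Newton iteration on σ = 1/λ can be sketched for the white-noise, D = I case (so p = n and m̃ = m). All problem data are hypothetical, and a bisection safeguard is added for robustness, in the spirit of the line search the slide mentions:

```python
import numpy as np

# Hypothetical 1-D problem with Wb = I/sigma_b^2, D = I, x0 = 0, so that
# J_D(sigma) should be close to m at the statistically chosen parameter.
rng = np.random.default_rng(2)
m = 40
t = np.arange(m)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 18.0)
A /= A.sum(axis=1, keepdims=True)
sigma_b = 0.1
b = A @ np.sin(2 * np.pi * t / m) + sigma_b * rng.standard_normal(m)

U, s, Vt = np.linalg.svd(A / sigma_b)      # SVD of the whitened operator
beta = U.T @ (b / sigma_b)

def J_and_norm(sig):
    """J_D(sigma) and ||y(sigma)||^2 for lambda = 1/sigma, via filter factors."""
    lam2 = 1.0 / sig**2
    J = np.sum(lam2 * beta**2 / (s**2 + lam2))    # value of J_D at its minimizer
    y = Vt.T @ (s / (s**2 + lam2) * beta)         # regularized solution (x0 = 0)
    return J, float(y @ y)

# Safeguarded Newton on F(sigma) = J_D(sigma) - m, using the slide's update
# sigma_new = sigma*(1 + 0.5*(sigma/||y||)^2 * F), with bisection fallback.
lo, hi = 1e-4, 1e4            # assumed bracket: F(lo) > 0 > F(hi)
sig = 1.0
for _ in range(100):
    J, ynorm2 = J_and_norm(sig)
    F = J - m
    if abs(F) < 1e-8 * m:
        break
    lo, hi = (sig, hi) if F > 0 else (lo, sig)    # F is decreasing in sigma
    sig_new = sig * (1.0 + 0.5 * sig**2 / ynorm2 * F)
    sig = sig_new if lo < sig_new < hi else 0.5 * (lo + hi)
```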
Illustrating the Results for Problem Size 512: Two Standard Test Problems
Comparison for noise level 10%; on the left D = I, and on the right D is a first derivative operator.
Notice that the L-curve and χ²-LSQR perform well; UPRE does not perform well.

Real Data: Seismic Signal Restoration
The data set and goal:
- A real data set of 48 signals of length 3000; the point spread function is derived from the signals.
- Calculate the signal variance pointwise over all 48 signals.
- Goal: restore the signal x from Ax = b, where A is the PSF matrix and b is a given blurred signal.
- Method of comparison (no exact solution is known): use convergence with respect to downsampling.

Comparison at High Resolution, White Noise
Greater contrast is obtained with χ². UPRE is insufficiently regularized. The L-curve severely undersmooths (not shown).
The parameters are not consistent across resolutions.

The UPRE Solution: x0 = 0, White Noise
The regularization parameters are consistent: σ = 0.01005 at all resolutions.

The LSQR Hybrid Solution: White Noise
Regularization is quite consistent from resolution 2 to 100:
σ = 0.0000029, 0.0000029, 0.0000029, 0.0000057, 0.0000057.

Illustrating the Deblurring Result: Problem Size 65536
Example taken from RESTORE TOOLS (Nagy et al., 2007-8), with 15% noise.
The computational cost is minimal: the projected problem size is 15, with λ = 0.58.

Problem with 15% Grain Noise: Validation Against Increasing Subproblem Size
(a) Signal to noise ratio 10 log10(1/e), for relative error e. (b) Regularization parameter against problem size.

Illustrating the Progress of the Newton Algorithm post LSQR
Mathematics 30 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Illustrating the progress of the Newton algorithm with LSQR National Science Foundation: Division of Computational Mathematics 31 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Problem Grain noise 15% added for increasing subproblem size Figure: Signal to noise ratio 10 log10 (1/e) relative error e National Science Foundation: Division of Computational Mathematics 32 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation An Alternative Direction For Large Scale Problems : Domain Decomposition [Ren98] Domain decomposition of x into several domains: x = (xT1 , xT2 , ..., xTp )T . Corresponding to different splitting of image x, kernel operator A is split A = (A1 , A2 , ..., Ap ). eg: Different Splitting Schemes National Science Foundation: Division of Computational Mathematics 33 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Formulation - Regularized Least Squares [LRG09] The linear system Ax ≈ b is replaced with the split systems X Ai yi ≈ bi (x), bi (x) = b − Aj xj = b − Ax + Ai xi . j6=i Locally solve Ax ≈ b min kAi yi − bi (x)k2 , yi ∈<ni 1 ≤ i ≤ p. If the problem is ill-posed we have the regularized problem ˘ ¯ min k Ax − b k22 +λ2 kDxk22 . f Similarly, we will have splitting on operator assuming local regularization «„ « „ «« „ « „„ A2 Ap A A1 . 
The local subproblems are solved iteratively, using novel updates for the changing right hand sides [Saa87], [CW97].

Update Scheme: the global solution is updated from the local solutions at step k to step k + 1 by
x^(k+1) = Σ_{i=1}^{p} τi^(k+1) (x_local^(k+1))_i,
where (x_local^(k+1))_i = ( (x1^(k))^T, ..., (x_{i−1}^(k))^T, (yi^(k+1))^T, (x_{i+1}^(k))^T, ..., (xp^(k))^T )^T.

Feasibility of the Approach
1-D Phillips problem, size 1024, noise level 6%, regularization parameter 0.25:
(a) no splitting, relative error 0.0525; (b) 4 subproblems, relative error 0.0499.

2-D PET reconstruction, size 64 × 64, noise level 1.5%, regularization parameter 0.2:
(c) no splitting, SNR 11.73 dB; (d) 4 subproblems, SNR 12.24 dB.

A New Stopping Rule for the EM Method [GR09]
Quick rationale (PET): it is well known that the ML-EM method converges to overly noisy solutions, so the iteration has to stop before convergence [SV82], [HHL92], [HL94].
The detected counts in tube i are Poisson with mean bi = Σ_j aij xj. Hence the basic relationship is b ≈ Ax, where A is the projection matrix, b are the counts, and x is the density to be reconstructed.
EM iteration: x^(k+1) = (A^T (b ./ (A x^(k)))) .* x^(k).
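The EM update above is one line of NumPy. A minimal sketch with hypothetical data; the columns of A are normalized to sum to one, which is what makes the slide's simplified update (without division by A^T 1) the standard ML-EM fixed-point iteration:

```python
import numpy as np

# Hypothetical PET-like problem: b ~ Poisson(A x_true), A a random positive
# "projection" matrix with unit column sums, x the density to reconstruct.
rng = np.random.default_rng(3)
m, n = 60, 30
A = rng.random((m, n))
A /= A.sum(axis=0, keepdims=True)        # columns sum to 1, so A^T 1 = 1
x_true = rng.uniform(1.0, 5.0, n)
b = rng.poisson(A @ x_true).astype(float)

x = np.ones(n)                            # any positive starting density
for _ in range(200):
    Ax = A @ x
    x *= A.T @ (b / np.maximum(Ax, 1e-12))   # x_{k+1} = x_k .* A^T(b ./ Ax_k)
```

A useful invariant: after the first step the iterates exactly preserve the total counts, sum(x) = sum(b), and all iterates stay nonnegative; the talk's point is that letting this loop run to convergence gives overly noisy x.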
(Figure: (e) the true image, (f) the reconstruction at k = 95, (g) the noisier reconstruction at k = 500.)

A New Estimate of the Stopping Level
Algorithm: for each k until converged, with m tubes:
- Calculate a step of EM, x^(k), and update the tube means b^(k) = A x^(k).
- Bin the tubes so that all b^(k) > 20.
- Calculate y = (b − b^(k)) ./ √(b^(k)); then y ∼ N(0, 1).
- Calculate the mean ȳ and sample standard deviation s of the yi, i = 1 : m.
- Calculate α = √(m − 1) ȳ/s and pt(α), where α ∼ t(m − 1) (Student's t density with m − 1 degrees of freedom).
- Calculate β = (m − 1) s² and pN(β), where β ∼ N(m − 1, 2(m − 1)) (Gaussian density with mean m − 1).
- Calculate the likelihood of sampling α and β from the two distributions: l^(k) = pt(α) pN(β).
- When l^(k) has passed its maximum, i.e. l^(k) < l^(k−1), STOP; the solution is x^(k−1).

Simulations: Validation
Table: the best and the predicted stopping step for 11 simulations.
Best: 95 90 89 90 90 95 90 92 89 94 91
Pred: 96 88 94 89 89 95 100 100 95 93 94
Figure: l^(k) = pt(α) pN(β) over 500 steps, maximized at k = 96.

Extension to Mammogram Denoising: Early Ideas
The model:
- Assume blurring of the mammogram by a PSF kernel K; the measured image is b.
- Deconvolve in the Fourier domain and invert to give a noisy estimate of the optical density d.
- Each entry of d is a linear combination of x-ray energies with Poisson noise, and √d is close to normally distributed [ANS48].
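The per-step likelihood l^(k) = pt(α) pN(β) above can be sketched as follows. The data are hypothetical, and the statistic α = √(m − 1) ȳ/s follows the slide (the textbook one-sample t statistic uses √m instead):

```python
import numpy as np
from scipy import stats

def stopping_likelihood(b, b_model):
    """Likelihood that standardized residuals look like N(0,1) samples:
    alpha ~ t(m-1) tests the mean, beta = (m-1)s^2 (approx. N(m-1, 2(m-1)))
    tests the variance, following the slide's recipe."""
    y = (b - b_model) / np.sqrt(b_model)   # ~ N(0,1) if b ~ Poisson(b_model), means > 20
    m = y.size
    ybar, s = y.mean(), y.std(ddof=1)
    alpha = np.sqrt(m - 1) * ybar / s      # slide's statistic (vs. sqrt(m) classically)
    beta = (m - 1) * s**2
    p_t = stats.t.pdf(alpha, df=m - 1)
    p_N = stats.norm.pdf(beta, loc=m - 1, scale=np.sqrt(2 * (m - 1)))
    return p_t * p_N

# Demo on hypothetical tube counts: the likelihood is larger when the model
# means actually generated the Poisson data than for a misfit model.
rng = np.random.default_rng(4)
mu = np.full(500, 50.0)                    # means > 20, so no binning needed here
b = rng.poisson(mu).astype(float)
l_good = stopping_likelihood(b, mu)        # matched model
l_bad = stopping_likelihood(b, 0.8 * mu)   # biased model
```

In the stopping rule, `b_model` is the current EM prediction A x^(k), and the iteration stops once this likelihood starts to decrease.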
- To find the true optical density x, denoise the deconvolved d. Use total variation:
min_x Σ_{i=1}^{m} (√di − √xi)² + λ² ||x||_TV,  s.t. x ≥ 0.
- Given knowledge of the variance of the noise in the x-ray, automatically select λ using the statistical estimation approach.

Trial Experiment
Data set from the University of Florida (DDSM), cancer case 0001, left breast with CC scanning angle.
Figure: (a) original image, (b) restored image. The total yellow (calcification) is reduced by deblurring; the rectangle at bottom right indicates the deblurring.

PSF Estimation in Blurring Problems Using Edge Detection (Cochran, Gelb, Viswanathan, Renaut, Stefan)
Given the blurring model (PSF convolution operator K) and x ∈ L²(−π, π) piecewise smooth, we estimate the PSF starting with 2N + 1 blurred Fourier coefficients b̂(j), j = −N, ..., N:
b = K ∗ x + e.
Principle: apply a linear edge detector, denoted by T. We assume the edge detector can be written as a convolution with an appropriate kernel:
T ∗ (K ∗ x + e) = (K ∗ x + e) ∗ T = x ∗ K ∗ T + e ∗ T = (x ∗ T) ∗ K + e ∗ T ≈ [x] ∗ K + ẽ.
Here [x](s) is the jump function. For a jump discontinuity, the jump function at a point s depends only on the values of x at s+ and s−:
[x](s) := x(s+) − x(s−).
Hence, we observe shifted and scaled replicates of the PSF.
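The identity "edge detection of the blurred signal ≈ jump function convolved with the PSF" can be checked discretely: differencing a blurred unit step returns a shifted copy of the PSF, since the difference of a step is a discrete delta. A noise-free sketch with a hypothetical Gaussian PSF (discrete differencing stands in for the linear edge detector T):

```python
import numpy as np

n = 256
x = np.zeros(n)
x[n // 2:] = 1.0                          # unit jump at n/2, so [x] = 1 there
t = np.arange(-16, 17)
K = np.exp(-t**2 / 18.0)
K /= K.sum()                              # normalized Gaussian PSF, length 33

b = np.convolve(x, K, mode="same")        # blurred signal (no noise)
psf_est = np.diff(b)                      # edge detection: replicate of the PSF

# The 33 samples around the jump reproduce K (shift from diff/"same" cropping).
window = psf_est[n // 2 - 17 : n // 2 + 16]
```

With noise present, the replicate is contaminated by e ∗ T, which is exactly what motivates the filtering and TV-denoising steps on the following slides.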
Example (No Noise)
Figure: a function subjected to motion blur, N = 128. Panels: (a) the true function, (b) the motion blur PSF, (c) the blurred function, (d) the blur after edge detection against the true blur (normalized).

Representative Examples: Gaussian PSF
Figure: a function subjected to Gaussian blur, N = 128. Panels: (a) noisy blur estimation, (b) after low-pass filtering.
- Complex noise distribution on the Fourier coefficients: ê ∼ N(0, 1.5/(2N + 1)²).
- The second picture is subjected to low-pass (Gaussian) filtering.
- It is conceivable that parameter estimation for a Gaussian PSF can take into account the effect of the Gaussian filtering.

Representative Examples: Motion Blur
Figure: a function subjected to motion blur, N = 128. Panels: (a) noisy blur estimation, (b) after TV denoising.
- Conventional low-pass filtering cannot be performed here, since the blur itself is piecewise smooth.
- We compute the noisy blur estimate from the Fourier expansion of the blurred jump:
SN[b] ≈ [x] ∗ K + ẽ.
- Denoising problem formulation:
min_x ||x − SN[b]||₂² + λ² ||Dx||₁.

Future Work: Combining Approaches
- Extend the parameter selection methods to the domain decomposition problems for large scale.
- Use efficient schemes for large scale problems, e.g. right hand side updates.
- Extend to the edge detection approaches.
- Use a tensor product of the PSF for extension to 2-D: is it feasible?
- Use the parameter estimation techniques for the 2-D problem.
- Further development of statistical techniques for estimating acceptable solutions.

Bibliography I
F. J. Anscombe. The transformation of Poisson, binomial and negative-binomial data. Biometrika, 35:246-254, 1948.
T. F. Chan and W. L. Wan. Analysis of projection methods for solving linear systems with multiple right-hand sides. 1997.
H. Guo and R. A. Renaut. Revisiting stopping rules for iterative methods used in emission tomography: analysis and developments. Physics in Medicine and Biology, submitted, 2009.
H. M. Hudson, B. F. Hutton, and R. Larkin. Accelerated EM reconstruction using ordered subsets. J. Nucl. Med., 33:960, 1992.
H. M. Hudson and R. Larkin.
Accelerated image reconstruction using ordered subsets of projection data. IEEE Trans. Med. Imag., 13(4):601-609, 1994.

Bibliography II
Y. Lin, R. A. Renaut, and H. Guo. Multisplitting for regularized least squares. In preparation, 2009.
J. Mead and R. A. Renaut. Least squares problems with inequality constraints as quadratic constraints. Linear Algebra and its Applications, 2009.
J. Mead and R. A. Renaut. A Newton root-finding algorithm for estimating the regularization parameter for solving ill-conditioned least squares problems. Inverse Problems, 25, 2009.
R. A. Renaut. A parallel multisplitting solution of the least squares problem. BIT, 1998.
R. A. Renaut, I. Hnetynkova, and J. Mead. Regularization parameter estimation for large scale Tikhonov regularization using a priori information. Computational Statistics and Data Analysis, 54(1), 2009.

Bibliography III
Y. Saad. On the Lanczos method for solving symmetric linear systems with several right-hand sides. 1987.
L. A. Shepp and Y. Vardi. Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imag., MI-1(2):113-122, Oct. 1982.
C. R. Vogel. Computational Methods for Inverse Problems. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2002.
Other Results and Future Work
- A software package!
- Diagonal weighting schemes.
- Edge preserving regularization: total variation.
- Better handling of colored noise.
- The residual periodogram for large scale problems.

Algorithm Using the GSVD
Use the GSVD of [Wb^(1/2) A; D]. For γi the generalized singular values, and s = U^T Wb^(1/2) r:
m̃ = m − n + p,  s̃i = si/(γi² σx² + 1), i = 1, ..., p,  ti = s̃i γi.
Find the root of
F(σx) = Σ_{i=1}^{p} ( 1/(γi² σx² + 1) ) si² + Σ_{i=n+1}^{m} si² − m̃ = 0.
Equivalently: solve F = 0, where F(σx) = s^T s̃ − m̃ and F′(σx) = −2 σx ||t||₂².

Practical Details of the Algorithm: Finding the Parameter
- Step 1: bracket the root by a logarithmic search on σ to handle the asymptotes; this yields sigma_max and sigma_min.
- Step 2: calculate the step, with its steepness controlled by tolD.
Let t = Dy/σ^(k), where y is the current update; then
step = (1/2) ( 1/max{||t||, tolD} )² (J_D(σ^(k)) − m̃).
- Step 3: introduce a line search α^(k) in the Newton update,
sigma_new = σ^(k) (1 + α^(k) step).

Key Aspects of the Proof I: The Functional J
Algebraic simplifications: rewrite the functional as a quadratic form. The regularized solution is given in terms of the regularization matrix R(WD):
x̂ = x0 + (A^T Wb A + D^T Wx D)⁻¹ A^T Wb r,  with r = b − A x0,  (4)
  = x0 + R(WD) Wb^(1/2) r  (5)
  = x0 + y(WD),  where R(WD) = (A^T Wb A + D^T Wx D)⁻¹ A^T Wb^(1/2).  (6)
The functional is given in terms of the influence matrix A(WD) = Wb^(1/2) A R(WD):  (7)
J_D(x̂) = r^T Wb^(1/2) (I_m − A(WD)) Wb^(1/2) r  (8)
        = r̃^T (I_m − A(WD)) r̃,  with r̃ = Wb^(1/2) r:  (9)
a quadratic form.

Key Aspects of the Proof II: Properties of a Quadratic Form
χ² distribution of quadratic forms x^T P x for normal variables (Fisher-Cochran theorem):
- Let the components xi be independent normal variables, xi ∼ N(0, 1), i = 1 : n.
- A necessary and sufficient condition that x^T P x has a central χ² distribution is that P is idempotent, P² = P. In that case the number of degrees of freedom of the χ² is rank(P) = trace(P).
- When the means of the xi are µi ≠ 0, x^T P x has a non-central χ² distribution with non-centrality parameter c = µ^T P µ.
- A χ² random variable with n degrees of freedom and centrality parameter c has mean n + c and variance 2(n + 2c).
National Science Foundation: Division of Computational Mathematics 54 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Key Aspects of the Proof III: Requires the GSVD Lemma Assume invertibility and m ≥ n ≥ p. There exist unitary matrices U ∈ Rm×m , V ∈ Rp×p , and a nonsingular matrix X ∈ Rn×n such that » – Υ A=U X T D = V [M, 0p×(n−p) ]X T , (10) 0(m−n)×n Υ = diag(υ1 , . . . , υp , 1, . . . , 1) ∈ Rn×n , 0 ≤ υ1 ≤ · · · ≤ υp ≤ 1, υi2 + µ2i = 1, M = diag(µ1 , . . . , µp ) ∈ Rp×p , 1 ≥ µ1 ≥ · · · ≥ µp > 0, i = 1, . . . p. (11) The Functional with the GSVD Let Q̃ = diag(µ1 , . . . , µp , 0n−p , Im−n ) then J = r̃T (Im − A(WD ))r̃ = kQ̃U T r̃k22 = kkk22 National Science Foundation: Division of Computational Mathematics 55 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation Proof IV: Statistical Distribution of the Weighted Residual Covariance Structure Errors in b are e ∼ N (0, Cb ). Now b depends on x, b = Ax hence we can show b ∼ N (Ax0 , Cb + ACD AT ) (x0 is mean of x) Residual r = b − Ax ∼ N (0, Cb + ACD AT ). r̃ = Wb 1/2 r ∼ N (0, I + ÃCD ÃT ), Ã = Wb 1/2 A. Use the GSVD I + ÃCD ÃT = U Q−2 U T , Q = diag(µ1 , . . . , µp , In−p , Im−n ) Now k = QU r̃ then k ∼ N (0, QU (U Q−2 U T )U Q) ∼ N (0, Im ) T T But J = kQ̃U T r̃k2 = kk̃k2 , where k̃ is the vector k excluding components p + 1 : n. Thus JD ∼ χ2 (m + p − n). 
National Science Foundation: Division of Computational Mathematics 56 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation When mean of the parameters is not known, or x0 = 0 is not the mean Corollary: non-central χ2 distribution of the regularized functional Recall x̂ = argmin JD (x) = argmin{kAx − bk2Wb + k(x − x0 )k2WD }, WD = DT Wx D. Assume all assumptions as before, but x 6= x0 is the mean vector of the model parameters. Let c = kck22 = kQ̃U T Wb 1/2 A(x − x0 )k22 Then JD ∼ χ2 (m + p − n, c) The functional at optimum follows a non central χ2 distribution National Science Foundation: Division of Computational Mathematics 57 / 49 Motivation Statistical Results for Least Squares Newton algorithm Large Scale Problems Stopping the EM Algorithm Statistically Edge Detection for PSF Estimation A further result when A is not of full column rank The Rank Deficient Solution Suppose A is not full column rank. Then the filtered solution can be written in terms of the GSVD xFILT (λ) = p X p n n X X X fi γi2 s x̃ + s x̃ = s x̃ + si x̃i . i i i i i i + λ2 ) υi i=p+1 i=1 i=p+1 υi (γi2 i=p+1−r Here fi = 0, i = 1 : p − r, fi = γi2 /(γi2 + λ2 ), i = p − r + 1 : p. This yields J(xFILT (λ)) ∼ χ2 (m − n + r, c) notice degrees of freedom are reduced. National Science Foundation: Division of Computational Mathematics 58 / 49 ```
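The Newton iteration for $F(\sigma_x) = 0$ sketched on these slides can be illustrated with a toy implementation. This is a sketch under stated assumptions: plain undamped Newton on synthetic data, omitting the slides' bracketing and line-search safeguards, using $F'(\sigma_x) = -2\sigma_x\|t\|^2$ with $t_i = \gamma_i s_i/(\gamma_i^2\sigma_x^2+1)$.

```python
def chi2_newton(gamma, s, tail_sum, m_tilde, sigma0=1.0, tol=1e-12, max_iter=100):
    """Scalar Newton iteration for the root-finding problem
    F(sigma) = sum_i s_i^2 / (gamma_i^2 sigma^2 + 1) + tail_sum - m_tilde = 0,
    with F'(sigma) = -2 sigma ||t||^2, t_i = gamma_i s_i / (gamma_i^2 sigma^2 + 1)."""
    sigma = sigma0
    for _ in range(max_iter):
        d = [g * g * sigma * sigma + 1.0 for g in gamma]
        F = sum(si * si / di for si, di in zip(s, d)) + tail_sum - m_tilde
        if abs(F) < tol:
            break
        t_sq = sum((g * si / di) ** 2 for g, si, di in zip(gamma, s, d))
        sigma -= F / (-2.0 * sigma * t_sq)  # plain Newton step; no damping
    return sigma

# Synthetic example whose root is sigma = 1:
# F(1) = 4/2 + 9/5 - 3.8 = 0 for gamma = [1, 2], s = [2, 3], empty tail.
print(round(chi2_newton([1.0, 2.0], [2.0, 3.0], 0.0, 3.8, sigma0=0.5), 6))  # 1.0
```

In practice the slides' damped update sigmanew = σ(1 + α·step) and the logarithmic bracketing are what make the iteration robust near the asymptotes of F; the sketch above relies on a benign starting point instead.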
10,399
39,258
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2023-23
latest
en
0.70756
https://jstationx.com/how-long-is-the-first-half-of-a-high-school-football-game/
1,696,007,010,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00657.warc.gz
351,835,557
28,615
# How Long Is the First Half of a High School Football Game How Long Is the First Half of a High School Football Game? High school football games are an important part of American culture, with fans eagerly anticipating the thrill of each match. One commonly asked question among spectators is, “How long is the first half of a high school football game?” In this article, we will explore the duration of the first half and answer some common questions related to high school football. The first half of a high school football game typically lasts for 24 minutes. Each half is divided into two quarters, each lasting 12 minutes. However, it is important to note that the clock does not run continuously during these 24 minutes. There are various factors that can stop the clock, such as timeouts, incomplete passes, penalties, or a player going out of bounds. 1. How long is a high school football game in total? A high school football game usually lasts about two hours, including both halves and halftime. 2. How long is halftime? Halftime typically lasts 20 minutes in high school football. This gives players, coaches, and officials a chance to rest and strategize for the second half. 3. How many timeouts are allowed in a high school football game? In high school football, each team is allowed three timeouts per half. These timeouts can be used strategically to stop the clock, regroup, or make adjustments. 4. How long is each timeout? Each timeout in high school football lasts one minute. This gives teams an opportunity to discuss tactics, catch their breath, or make substitutions. 5. Can the clock stop during the first half? Yes, the clock can stop during the first half of a high school football game. It stops for various reasons, such as incomplete passes, penalties, or when a player goes out of bounds. 6. What happens if the game is tied at halftime?
If the game is tied at halftime, the teams simply play the second half; if the score is still tied at the end of regulation, the game may go into overtime to decide the winner. 7. How long is each quarter in high school football? Each quarter in high school football lasts 12 minutes. This time can be adjusted by officials in certain circumstances, such as injuries or weather conditions. 8. Can the clock start running again after it has stopped? Yes, the clock can start running again after it has stopped due to a timeout, incomplete pass, or other factors. The clock generally starts again when the ball is snapped for the next play. 9. What happens if a player gets injured during the first half? If a player gets injured during the first half, the game is temporarily stopped to provide necessary medical attention. The player may be substituted, and the clock may be paused until the game resumes. 10. Can a team score during halftime? No, a team cannot score during halftime. Halftime is a designated break period, and no plays or scoring can occur during this time. 11. Can coaches make changes to their game plan during halftime? Yes, halftime is an essential period for coaches to make adjustments to their game plan. They can analyze the first half’s performance, make strategic changes, and motivate their players for the second half. 12. How many officials are present during a high school football game? Typically, a high school football game is officiated by seven officials. They ensure that the game is played according to the rules and make crucial decisions throughout the match.
708
3,504
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2023-40
latest
en
0.966954
https://www.technicaljockey.com/tableau-training-consultant/lod-comparative-sales-analysis/
1,563,271,866,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195524522.18/warc/CC-MAIN-20190716095720-20190716121720-00093.warc.gz
855,840,799
22,660
Tableau LOD Comparative Analysis During my Corporate Tableau Training in Gurgaon, I often get questions about comparative analysis with LOD expressions in Tableau. If we want to compare a specific selection against the total in the same visualization, we can use an LOD expression as shown below. To understand the situation better, consider this scenario from the Sample Superstore data set: we have to analyze the sales of our various sub-categories across all 4 regions and also their sales in the selected regions. For Corporate training and Online Training contact at TJT@TECHNICALJOCKEY.COM The end result that we are expecting is as below: So the question we are seeking an answer to is as below. Question: How much sales is Phones making in all 4 regions (Central, East, South, and West), and what is the comparative sales of Phones in Central & East compared to all 4 regions? Step-1: Let's first make a viz with Region and Sales in Columns and Sub-Category in Rows. Usually, this is what we would build when asked about the sales of every sub-category in the various regions, right? The result will be like the image below: Now, let's analyze the second part of our question: the sales of each sub-category in the selected regions, Central & East. So, we'll drop Region onto Filters and choose Central & East. The result will be as below: But by doing this we have missed the first part of the question: the sales of every sub-category in all 4 regions. Remember, we have to analyze how much sales Phones is making in all 4 regions and what its comparative sales are in Central & East. That means we want a dual-axis chart showing sales in all regions and also in the selected regions. So, let's move to Step 2.
Step-2: Open a calculated field and name it 'Sub Category Sales LOD' with the following formula. What exactly are we trying to do here? We are fixing the sales of the sub-categories with this LOD calculation. That means that whenever we use the 'Sub Category Sales LOD' field, filters will not affect the value of Sales, since it is fixed: it gives the total sales of each sub-category irrespective of any filters. Step-3: Let's make a viz using this LOD calculation and see what happens. In a new sheet, drag Sub Category Sales LOD and Sales to Columns and Sub-Category to Rows. Put Region on the Filters shelf and choose Central & East. If you look at the Marks card you will notice there are 3 marks cards: All, SUM(Sub Category Sales LOD), and SUM(Sales). Put Sub Category Sales LOD from the data pane onto the Label of the SUM(Sub Category Sales LOD) marks card, and Sales from the data pane onto the Label of SUM(Sales). The result will be as below: Step-4: Now right-click SUM(Sales) on the Columns shelf to open the drop-down and choose Dual Axis. Then, from All in the Marks card, choose Bar from the Automatic drop-down. From the pane, click on T to remove labels. The result will be as below: Also right-click the axis at the top and select "Synchronize Axis" to get both axes synchronized and see the comparison clearly, as below: Also right-click the top axis and de-select "Show Header" to remove it, if you would like to show only one axis after the axes are synchronized. Step-5: The final formatting: drop Sales and Sub Category Sales LOD from the data pane onto the Detail shelf of the All card, click Tooltip, and make the following changes in the tooltip dialog box. This completes our comparative analysis: in one single visualization, we are able to see the sales of the selected regions and also the total sales. And we finally get the answer to our question.
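The Step-2 calculated field appears only as an image in the original post. Assuming the standard Sample Superstore field names, a FIXED LOD expression of the following shape is the likely intent — this is a reconstruction, not the author's actual screenshot:

```
// Sub Category Sales LOD  (reconstructed; verify against the original screenshot)
{ FIXED [Sub-Category] : SUM([Sales]) }
```

Because the expression is FIXED at the Sub-Category level, it is computed before dimension filters such as the Region filter are applied, which is exactly why it keeps showing the all-region total while SUM(Sales) reflects only Central & East.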
1,149
5,183
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.34375
3
CC-MAIN-2019-30
longest
en
0.930769
https://diffgeom.subwiki.org/w/index.php?title=Flow_of_a_metric&oldid=548
1,582,969,622,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875148850.96/warc/CC-MAIN-20200229083813-20200229113813-00403.warc.gz
339,550,966
7,559
# Flow of a metric ## Definition ### Definition with symbols Let $M$ be a Riemannian manifold. A flow of a metric on $M$ is defined as a map from an interval (possibly infinite) in the real numbers, to the space of all possible Riemannian metrics on the manifold. The domain of this map is often called the time domain, and we think of the Riemannian metric as evolving with time. We typically require the flow to be smooth, that is, the metric should vary smoothly as a function of time. ## Notions ### Invariants of a flow We may require that the flow does not change a certain property, or quantity, associated with the metric. For instance, a conformal flow is a flow that does not change the conformal class of the metric (viz. the metrics at any two points in time are conformally equivalent). A volume-normalized flow does not change the total volume of the manifold (viz. at any two points in time, the total volume of the manifold is the same). ### Goal of the flow The goal of the flow may be to eventually reach a particularly nice kind of metric. A flow is said to terminate in finite time if it reaches its goal in finite time, that is, after a finite time, the metric stops evolving. A flow is said to converge if, in the limit, it reaches a particular kind of metric. ## Other things which evolve with the metric ### Riemann curvature tensor The Riemann curvature tensor is a (1,3)-tensor (or a (0,4)-tensor) on the manifold, determined by the Levi-Civita connection, which in turn is determined by the Riemannian metric. Thus, as the Riemannian metric evolves, so does the curvature tensor. The curvature tensor flows along, or traces a path in, the space of $(0,4)$-tensors. In fact, since the Riemann curvature tensor always lives inside the Riemann curvature bundle (which is independent of the metric), we actually get a flow in the Riemann curvature bundle.
### Ricci curvature tensor The Ricci curvature tensor is a (0,2)-tensor on the manifold, determined by the Riemann curvature tensor, which in turn is determined by the Levi-Civita connection, which in turn is determined by the Riemannian metric. Thus, as the metric evolves, so does the Ricci curvature tensor. Further, the Ricci curvature tensor is a tensor of the same type as the metric tensor, hence their evolution can be studied together (this is precisely what is done for the Ricci flow and the volume-normalized Ricci flow). ### Scalar curvature As the Riemannian metric evolves, so does the scalar curvature, which is after all defined in terms of the Riemannian metric. The scalar curvature is an ordinary scalar function defined on the manifold. The evolution of scalar curvature is governed by a single equation as opposed to the multiple equations that govern the evolution of the Ricci flow and the volume-normalized Ricci flow.
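As a concrete illustration of a flow whose evolution equation couples the metric and the Ricci tensor (the article names the Ricci flow and its volume-normalized variant but does not write the equations; these are the standard forms):

```latex
% Ricci flow: the metric moves against twice its Ricci tensor
\frac{\partial}{\partial t} g_{ij} = -2\, R_{ij}

% Volume-normalized Ricci flow on an n-manifold, with r the average
% scalar curvature; the correction term keeps the total volume fixed
\frac{\partial}{\partial t} g_{ij} = -2\, R_{ij} + \frac{2r}{n}\, g_{ij}
```

Both are evolution equations for a (0,2)-tensor, which is why the Ricci tensor and the metric can be studied together under these flows, as noted above.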
666
2,909
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.546875
4
CC-MAIN-2020-10
latest
en
0.92167
https://www.physicsforums.com/threads/relativistic-momentum-football-problem.313208/
1,582,008,676,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875143635.54/warc/CC-MAIN-20200218055414-20200218085414-00190.warc.gz
866,334,007
15,605
# Relativistic momentum football problem ## Homework Statement A football player with a mass of 82.1 kg and a speed of 2.00 m/s collides head-on with a player from the opposing team whose mass is 126 kg. The players stick together and are at rest after the collision. Calculate the speed of the second player, assuming the speed of light is 3.00 m/s. ## Homework Equations p = mv/[sqrt(1-v^2/c^2)] ## The Attempt at a Solution Basically, how I attempted to solve this is I used the above equation for each player and set them equal to each other, using 3.00 m/s as the speed of light, because if they are at rest after the collision then their momenta must be equal. Because I have knowns for one player, and the mass of the other player, I should be able to solve for v, the other player's speed, but the answer I calculate isn't right... any suggestions? diazona Homework Helper First suggestion: show your work in more detail... I don't notice any obvious issues but maybe if you actually specify what you did something will become clear. p = 82.1*2/[sqrt(1-2^2/3^2)] p = 220.297 220.297 = 126*v/[sqrt(1-(v^2/3^2))] v = 1.94 m/s which is not correct
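Not part of the thread, but the momentum balance can be closed out symbolically and sanity-checked numerically: from p = mv/√(1−v²/c²), squaring and solving for v gives v = p/√(m² + p²/c²). A quick sketch:

```python
import math

# Numbers from the problem statement: m1 = 82.1 kg at v1 = 2.00 m/s,
# m2 = 126 kg, and the problem's deliberately tiny "speed of light" c = 3.00 m/s.
m1, v1, m2, c = 82.1, 2.00, 126.0, 3.00

p = m1 * v1 / math.sqrt(1.0 - v1**2 / c**2)  # relativistic momentum of player 1

# Players at rest after the collision => player 2 carries equal, opposite momentum:
#   p = m2*v2 / sqrt(1 - v2^2/c^2)   =>   v2 = p / sqrt(m2^2 + p^2/c^2)
v2 = p / math.sqrt(m2**2 + p**2 / c**2)
print(round(p, 1), round(v2, 3))  # 220.3 1.511
```

So the second player's speed works out to roughly 1.51 m/s under the stated c = 3.00 m/s, consistent with the thread's observation that 1.94 m/s is not correct.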
377
1,313
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.734375
4
CC-MAIN-2020-10
longest
en
0.927883
https://www.helpteaching.com/questions/267980/let-x-represent-any-number-in-the-set-3-6-7-9-which-inequali
1,539,829,992,000,000,000
text/html
crawl-data/CC-MAIN-2018-43/segments/1539583511642.47/warc/CC-MAIN-20181018022028-20181018043528-00517.warc.gz
948,956,271
5,514
##### Question Info This question is public and is used in 145 tests or worksheets. Type: Multiple-Choice Category: Inequalities Standards: 6.EE.B.5 Tags: PARCC Author: LBeth # Inequalities Question ## Grade 6 Inequalities CCSS: 6.EE.B.5 Let $x$ represent any number in the set $\{3, 6, 7, 9\}$. Which inequality is true for all values of $x$? 1. $x > 5$ 2. $x < 5$ 3. $x > 10$ 4. $x < 10$
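The extract does not include the answer key, but each choice can be brute-force checked against the set (my own addition, not part of the question bank item):

```python
values = [3, 6, 7, 9]

# Evaluate "true for all values of x" for each answer choice.
choices = {
    "x > 5": all(x > 5 for x in values),    # fails for x = 3
    "x < 5": all(x < 5 for x in values),
    "x > 10": all(x > 10 for x in values),
    "x < 10": all(x < 10 for x in values),  # holds for every element
}
print([name for name, holds in choices.items() if holds])  # ['x < 10']
```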
191
614
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2018-43
latest
en
0.831724
https://discuss.leetcode.com/topic/94924/golang-solution-39ms
1,516,188,035,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084886895.18/warc/CC-MAIN-20180117102533-20180117122533-00225.warc.gz
652,982,129
8,476
# Golang solution (39ms) • Maintain a global `maxSum` to keep track of the maximum path sum seen so far. • At every node we need to compare the maximum of four values: (1) the node itself, (2) the node + maximum path sum of the left subtree, (3) the node + maximum path sum of the right subtree, (4) the node + maximum path sums of both the left and right subtrees. • We compute the max of #1, #2, #3 and assign it to `maxSumEndingAtNode`. • Then we update the global `maxSum` with the maximum of either #4 or `maxSumEndingAtNode`. • Finally, we return `maxSumEndingAtNode`, since this value represents the maximum sum of a path that ends at this node (i.e., starts here and extends down into at most one subtree). ``````func maxPathSum(root *TreeNode) int { maxSum := math.MinInt32 maxPathSumHelper(root, &maxSum) return maxSum } func maxPathSumHelper(node *TreeNode, maxSum *int) int { if node == nil { return 0 } left, right := maxPathSumHelper(node.Left, maxSum), maxPathSumHelper(node.Right, maxSum) maxSumEndingAtNode := max(node.Val, node.Val+max(left, right)) *maxSum = max(*maxSum, max(node.Val+left+right, maxSumEndingAtNode)) return maxSumEndingAtNode } func max(x, y int) int { if x > y { return x } return y } ``````
351
1,275
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2018-05
latest
en
0.571692
https://archive.midrange.com/rpg400-l/201204/msg00114.html
1,610,929,364,000,000,000
text/html
crawl-data/CC-MAIN-2021-04/segments/1610703514046.20/warc/CC-MAIN-20210117235743-20210118025743-00628.warc.gz
228,121,969
12,043
Subject: Re: RPG math From: Charles Wilt Date: Thu, 12 Apr 2012 10:08:10 -0400 List-id: RPG programming on the IBM i / System i My first thought... Why worry about it in RPG when you don't appear to be worried about it in Excel? Are you under the impression that Excel is magical and thus doesn't suffer from the same precision issues as any other piece of software? http://lmgtfy.com/?q=excel+precision+error Given that Excel apparently stores numbers as floating point, and that there are differences between floating-point arithmetic and RPG's fixed-decimal arithmetic, expecting the same results is probably a fantasy. Personally, I'd consider RPG's fixed-decimal answer "right" over Excel's floating-point answer when dealing with real-world values, especially for currency. http://c2.com/cgi/wiki?FloatingPointCurrency However, if you want to consider Excel's floating-point answer "right", it would behoove you to use floating-point variables in RPG to match: d myRate s 8f 0 The true "right" solution in my mind: 1) use RPG fixed point 2) understand RPG's fixed-point precision rules 3) use %round() and/or %dec() as needed to control intermediate results as you want. HTH, Charles On Thu, Apr 12, 2012 at 6:55 AM, Dave <dfx1@xxxxxxxxxxxxxx> wrote: Hi all, I recently asked how to access an EXCEL formula from an RPG program, but here I am now having to reproduce the EXCEL PMT function in RPG. Maybe someone has already done this? Anyway, I've been given a formula to code and an EXCEL spreadsheet to control my results. 
Which I've coded like this : MonthlyRate = interestRate /12; MonthlyPayment = ( LoanAmount / MonthlyRate ) *  ( 1 - ( 1 + MonthlyRate ) ** ( -1  * numberOfPayments )); LoanRemaining = 12 * MonthlyPayment * 1 - ( 1 + MonthlyRate ) ** ( -1 * numberOfPayments - My fields are defined so (imposed) : LoanAmount,  MonthlyPayment, LoanRemaining 13S2 I've very little experience with RPG math and I'm worried about errors from intermediate calculations How should I define the rates MonthlyRate and interestRate so I don't get any loss of information after the divisions? Thanks.
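For reference — my own sketch, not from the list — Excel's PMT with fv = 0 and type = 0 reduces to the standard annuity formula, which in floating point looks like this. Note that it multiplies the principal by the rate, whereas the quoted RPG divides LoanAmount by MonthlyRate, so the two are worth double-checking against each other:

```python
def pmt(rate, nper, pv):
    """Periodic payment of an annuity: pv * rate / (1 - (1 + rate)**-nper).

    This is the magnitude of Excel's PMT(rate, nper, pv) with fv=0 and
    type=0 (Excel itself returns the payment negated).
    """
    return pv * rate / (1.0 - (1.0 + rate) ** -nper)

# Example: 100,000 borrowed at 6% nominal annual (0.5% monthly) over 360 months.
payment = pmt(0.06 / 12, 360, 100_000)
print(round(payment, 2))  # 599.55
```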
742
3,094
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.140625
3
CC-MAIN-2021-04
latest
en
0.881154
https://nl.mathworks.com/matlabcentral/profile/authors/1949113
1,638,805,299,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964363301.3/warc/CC-MAIN-20211206133552-20211206163552-00592.warc.gz
476,318,991
21,680
Community Profile # JD Last seen: about a year ago Active since 2016 #### Content Feed View by Solved UICBioE240 2.2 Make a 3x4 matrix that contains all ones. more than 4 years ago Solved UICBioE240 problem 1.13 Compute the following - y = x^5/(x^-1) and y = (1-(1/x^5))^-1. Have the final answer of y to equal a 1 by 2 vector. more than 4 years ago Solved UICBioE240 problem 1.4 So if A = [ 1 2 3; 4 5 6; 7 8 9] B = [ 3 3] more than 4 years ago Solved UICBioE240 problem 1.17 In the expression (2+5i), how does MATLAB read the expressions A = 2+5i B = 2+5*i C = both are okay Write capital letter a... more than 4 years ago Solved UICBioE240 problem 1.16 sin^2(pi/6) + cos^2(pi/6) more than 4 years ago Solved Calculate volume of box Calculate the volume of box, given its sides more than 4 years ago Solved Penny flipping - calculate winning probability (easy) Two players are playing a fair penny flipping game. For each flip, the winner adds one penny from the loser's collection to his/... more than 4 years ago Solved Permute diagonal and antidiagonal Permute diagonal and antidiagonal For example [1 2 3;4 5 6;7 8 9] -> [3 2 1;4 5 6;9 8 7] WITHOUT diag function (and variable n... more than 4 years ago Solved Matrix with different incremental runs Given a vector of positive integers a = [ 3 2 4 ]; create the matrix where the *i* th column contains the vector *1:a(i)... more than 4 years ago Solved Rotate input square matrix 90 degrees CCW without rot90 Rotate input matrix (which will be square) 90 degrees counter-clockwise without using rot90,flipud,fliplr, or flipdim (or eval).... more than 4 years ago Solved Negative without '-' Simple: return a negative number without using the '-' sign. Thanks to Problem <https://www.mathworks.com/matlabcentral/cody/... more than 4 years ago Solved UICBioE240 problem 1.14 Solve 3^x = 17 more than 4 years ago Solved Sum the Infinite Series Given that 0 < x and x < 2*pi where x is in radians, write a function [c,s] = infinite_series(x); that returns with the... 
more than 4 years ago Solved Integer sequence - 2 : Kolakoski sequence Get the n-th term of <https://oeis.org/A000002 Kolakoski Sequence>. more than 4 years ago Solved Find the stride of the longest skip sequence We define a _skip sequence_ as a regularly-spaced list of integers such as might be generated by MATLAB's <http://www.mathworks.... more than 4 years ago Solved Integer Sequence - II : New Fibonacci Crack the following Integer Sequence. (Hints : It has been obtained from original Fibonacci Sequence and all the terms are also ... more than 4 years ago Solved Is X a Fibonacci Matrix? In honor of Cleve's new blog and post: <http://blogs.mathworks.com/cleve/2012/06/03/fibonacci-matrices/> Is X a Fibonacci ... more than 4 years ago Solved Golomb's self-describing sequence (based on Euler 341) The Golomb's self-describing sequence {G(n)} is the only nondecreasing sequence of natural numbers such that n appears exactly G... more than 4 years ago Solved "Look and say" sequence What's the next number in this sequence? * [0] * [1 0] * [1 1 1 0] * [3 1 1 0] * [1 3 2 1 1 0] This a variant on the w... more than 4 years ago Solved Fibonacci-Sum of Squares Given the Fibonacci sequence defined by the following recursive relation, * F_n = F_(n-1) + F_(n-2) * where F_0 = 0 and F_1 ... more than 4 years ago Solved Transposition as a CIPHER This all about transcripting a text message. If the input string is: *s1 = 'My name is Sourav Mondal'*, then the output is: *s2... almost 5 years ago Solved Convert a structure into a string Convert the contents of each fields into a string. Example with an input structure s with 2 fields : s.age = '33' s.... almost 5 years ago Solved Use of regexp Given a string, containing several sentences, such as: 'I played piano. John played football. Anita went home. Are you safe?... almost 5 years ago Solved Compress strings (not springs) Please remove excess space, limit one space between others, and no space before punctuation marks. * For example, 'Trendy , ... 
almost 5 years ago Solved Convert String to Morse Code Convert a given string to international <http://en.wikipedia.org/wiki/Morse_code Morse Code> * Strings will only have [A-z], ... almost 5 years ago Solved How Many Months Until It's Today Again? Given a particular date, calculate how many months must pass before that same day of the month occurs on the same day of the wee... almost 5 years ago Solved What's size of TV? Many people buy TV. Usually they ask about diagonal. But also important are width and height. Let's assume that all TV have rati... almost 5 years ago Solved Split a string into chunks of specified length Given a string and a vector of integers, break the string into chunks whose lengths are given by the elements of the vector. Ex... almost 5 years ago Solved Guess Cipher Guess the formula to transform strings as follows: 'Hello World!' --> 'Ifmmp Xpsme!' 'Can I help you?' --> 'Dbo J ifmq zpv... almost 5 years ago Solved Skip by a multiple Given an integer create an array of its multiples. Array must have a length of 15 almost 5 years ago
1,494
5,147
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.234375
3
CC-MAIN-2021-49
latest
en
0.63366
https://caeassistant.com/blog/identifying-convergence-issues-abaqus/
1,686,312,862,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224656675.90/warc/CC-MAIN-20230609100535-20230609130535-00312.warc.gz
174,172,511
42,684
# Convergence and ABAQUS convergence issues

#### What does convergence in Abaqus mean?

The finite element method solves a problem through the main equation F = K × U. "K" is the stiffness matrix, "U" is the displacement matrix, and "F" is the force matrix. When this equation is solved correctly, we get accurate results, and we say the problem has converged.

#### What are the reasons behind the Abaqus convergence issues?

The most important reasons for convergence issues are:

• Defining inappropriate constraints that cause conflicts in boundary conditions or contact conditions.
• Using the wrong elements.
• Defining inadequate material data in the Property module.
• Defining inappropriate boundary conditions or contacts.
• Modeling an unstable physical system.
• Setting an improper increment size.

#### What should we do when faced with a convergence issue?

• First, check all boundary conditions, contact conditions, material properties, and load conditions.
• Then, in a model with several parts, insert one part at a time to minimize the number of issue sources.
• If the convergence problem persists, try to simulate a simpler model: make a 2D model, or leave plasticity out of the material behavior. Also, check the reasons for convergence issues reported in the .dat, .sta, .msg, and .odb files.

#### What is the error "Too many attempts made for this increment"?

"Too many attempts made for this increment" means the solver attempted several times to calculate the equations for this increment, but the convergence conditions were not satisfied; so you either revisit and modify the increment size, or look elsewhere to overcome this error.

#### How can I extract the reasons for convergence issues in the .sta and .msg files?

You can use the "*PRINT" command to add more information to the message files (.msg, .sta).
First, open the input file and add your desired "*PRINT" commands; then run the input file. After running the job, you will see the results in the .msg (Standard solver) and .sta (Explicit solver) files.

Engineering students, design engineers, and anyone working with finite element software, especially ABAQUS, have heard the words "convergence" or "convergence issues." Here, we explain what they mean and answer common questions such as "what is convergence?" and "how do I identify ABAQUS convergence issues?" The Abaqus convergence problem and Abaqus contact convergence are discussed throughout.

## 1 Finite element software

If you are an engineer, you always have design problems. You could solve them in a conventional way: using idealized models, simple physical approximations, writing equations on paper, etc. But nowadays we use computers and software to design. With finite element software, you can design complex models and solve complicated equations to get realistic results as quickly as possible. So be a pro engineer and solve the problem under real conditions with the help of finite element software.

When you use finite element software, you will encounter issues. Don't worry; this is an ordinary matter. Here you will learn how to find these issues and deal with Abaqus convergence problems.

## 2 Convergence in finite element

The finite element method solves a problem through this main equation:

F = K × U

"K" is the stiffness matrix, "U" is the displacement matrix, and "F" is the force matrix (see figure 1 as an example). When this equation is solved correctly, we get accurate results, and we say the problem has converged.
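As a toy illustration of the relation F = K × U (a hand-rolled sketch, not an Abaqus computation), take two springs in series with the left node fixed: assemble the stiffness matrix for the two free nodes, apply a force, and solve for the displacements. The stiffness and load values here are made up:

```python
import numpy as np

# Two springs in series, node 0 fixed; stiffness in N/mm (illustrative value).
k = 1000.0

# Global stiffness matrix K for the two free DOFs (node 0 already eliminated).
K = np.array([[2 * k, -k],
              [-k,     k]])

# Force matrix F: 50 N pulling on the last node.
F = np.array([0.0, 50.0])

# Solving K * U = F gives the displacement matrix U.
U = np.linalg.solve(K, F)
print(U)  # node displacements: 0.05 and 0.1 mm
```

When the linear system has a well-conditioned K, this solve succeeds directly; the convergence difficulties discussed below arise when K is singular (missing constraints) or when the problem is nonlinear and must be solved iteratively.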
If for any reason the equation cannot be solved, or it has issues that give us inaccurate results, we use the term "there are some convergence issues."

Figure 1: an example of a stiffness matrix with dimension 6×6

## 3 Symptoms of ABAQUS convergence issues

Now, how do we know there is a convergence issue? You can watch the solving procedure while running a job by clicking on "Monitor." As you see (figure 2), there are "Errors" and "Warnings" tabs with messages that inform you about your issues. You can also find more information in the .msg (Standard solver), .dat, and .sta (Explicit solver) files. These issues occur more in nonlinear models than in linear ones. Some examples of these symptoms are shown in figure 2.

Figure 2: some warnings in the monitor window / Abaqus convergence problem

## 4 Reasons behind the ABAQUS convergence issues | Abaqus convergence problem

This section explains the reasons for ABAQUS convergence issues or Abaqus convergence problems. There may be more than we cover here, but these are the most important and common ones. Incomplete and defective modeling in FE is the most common source of convergence issues; for example:

• Defining inappropriate constraints that cause conflicts in boundary conditions or contact conditions.
• Using the wrong elements.
• Defining inadequate material data in the Property module.
• Defining inappropriate boundary conditions or contacts.
• Modeling an unstable physical system.
• Setting an improper increment size.

Now, let us explain one or two of these conflicts. See figure 3 for an example. Imagine you have a box with pressure applied to it, and you want to calculate the stress. If you do not define any boundary condition, the box moves as a rigid body, the software cannot calculate the stress, and you will see an error. Note that this only happens when you analyze a static problem and use the Static step.
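The effect of the missing boundary condition can be mimicked numerically: without any constraint, the assembled stiffness matrix of even a single spring is singular (it has a rigid-body translation mode), so the static solve is impossible; fixing one node makes it solvable. A minimal sketch with made-up numbers:

```python
import numpy as np

k = 1000.0  # spring stiffness, N/mm (illustrative)

# One spring between two nodes, neither node constrained.
K_free = np.array([[k, -k],
                   [-k, k]])

# Rank 1 < 2: the rigid-body translation mode makes K_free singular,
# so a static solve cannot determine the displacements.
print(np.linalg.matrix_rank(K_free))  # 1

# Fixing node 0 (a boundary condition) removes the rigid-body mode:
K_bc = K_free[1:, 1:]                       # stiffness of the remaining free DOF
U = np.linalg.solve(K_bc, np.array([50.0])) # displacement of node 1: 0.05 mm
```

This is exactly why the pressurized box of figure 3 fails in a Static step: the solver ends up with a singular system, which it reports as a numerical singularity or convergence error.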
Another example: if you constrain a surface and apply pressure on it simultaneously (see figure 4), the software will show an error, because it is not possible to keep the surface fixed and apply pressure to it at the same time (inappropriate boundary conditions).

Figure 3: pressurized box without boundary conditions

Figure 4: inappropriate boundary conditions

## 5 Identifying ABAQUS convergence issues

Here we present some recommendations for identifying which symptoms and reasons are causing the issues (the "Errors" and the "Warnings"). The general approach is to list the top potential reasons, check them to see the changes in the software, and then fix them one at a time. Some of our recommendations:

### 5.1 One of the best ways is to simulate a simpler model

• If possible, make a 2D model or a linear one to have fewer details and elements.
• If possible, leave out plasticity and nonlinear geometry to understand the model's behavior.
• In a model with several parts, insert one part at a time to minimize the number of issue sources.

### 5.2 Set increment values

Figure 5: Static, General step window and the Incrementation tab

When the ABAQUS Standard solver starts to run a job, it splits the step into at most the maximum number of increments you specified (the default is 100). According to the specified increment size (figure 5), the solver starts to run the job. Regarding increments, you usually encounter three errors in the ABAQUS Standard solver:

"Too many attempts made for this increment" means the solver attempted several times to calculate the equations for this increment, but the convergence conditions were not satisfied; so you either revisit and modify the increment size, or look elsewhere to overcome this error.
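The logic behind this error can be caricatured in a few lines: run Newton-Raphson iterations on the increment, and if the residual does not converge within the allowed iterations, cut the increment back and retry; after too many failed attempts, the error is raised. This is only a sketch of the idea with a made-up stiffening spring, not Abaqus's actual algorithm:

```python
def newton_increment(f_target, u0, max_iter=12, tol=1e-6):
    """Newton-Raphson on r(u) = 1000*u + 5e6*u**3 - f_target (made-up nonlinear spring)."""
    u = u0
    for _ in range(max_iter):
        r = 1000.0 * u + 5e6 * u**3 - f_target   # force residual
        if abs(r) < tol:
            return u                              # increment converged
        kt = 1000.0 + 15e6 * u**2                 # tangent stiffness dr/du
        u -= r / kt                               # Newton correction
    return None                                   # this attempt failed to converge

def run_step(f_total, max_attempts=5):
    u, f_done = 0.0, 0.0
    df = f_total                                  # start with one big increment
    attempts = 0
    while f_done < f_total - 1e-12:
        result = newton_increment(f_done + df, u)
        if result is None:
            attempts += 1
            if attempts >= max_attempts:
                raise RuntimeError("Too many attempts made for this increment")
            df /= 2.0                             # cut back the increment and retry
        else:
            u, f_done = result, f_done + df
            df = min(df, f_total - f_done)
            attempts = 0
    return u

u_final = run_step(50.0)
print(u_final)  # converged displacement, approximately 0.0185
```

Shrinking the increment moves the Newton starting point closer to the solution, which is why cutting back often rescues a stubborn increment; when even the smallest allowed cutbacks fail, Abaqus gives up with this error.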
"Too many increments needed to complete the step" means the solver needs more increments; you have to increase the "Maximum number of increments."

"Time increment required is less than the minimum specified" means you have to decrease the "Minimum" value in "Increment size" to satisfy the convergence conditions.

### 5.3 See the reasons for convergence issues

Check the reasons for convergence issues reported in the .dat, .sta, .msg, and .odb files. You can add more information to the message files. For example, use the keyword command "*PRINT, CONTACT=YES" in the model's input file (.inp) to get contact information in the message file, or use "*PRINT, PLASTICITY=YES" to get the integration point numbers and element output for material issues.

You can use the "*PRINT" command to add more information to the message files (.msg, .sta). Be patient, and we will explain how to use this command. First, you need to know the input file (.inp). The input file is one of the ABAQUS files that contains model data such as loads, steps, etc. It is like the ".cae" file but smaller, and you can open it in a text editor and change whatever you want.

When you have created your model completely and created a job for it, before running it you can write an input file for the model by clicking the "Write Input" button in the "Job Manager" window (see figure 6). To use the edited file in ABAQUS, open it as shown in figure 7.

Figure 6: create an input file

Figure 7: open the input file in the software

Now that you know what an input file is, let's use the "*PRINT" command. You can find instructions for the "*PRINT" command, or any other keyword, under Keywords in the ABAQUS documentation (Figure 8).
Figure 8: finding the PRINT keyword in the ABAQUS documentation

As you can see in Figure 9, open your input file through the Edit Keywords window; then find the lines that define the loading conditions. After these lines and before "*END STEP," add your "*PRINT" commands, then click the OK button. After running the job, you will see the results in the .msg (Standard solver) and .sta (Explicit solver) files. Now you know how to write an input file, use it, and modify it. The next articles will discuss tools and methods to overcome convergence issues.

Figure 9: enter the PRINT command

I hope you have got enough information about the Abaqus convergence problem and Abaqus contact convergence issues in this post. It is also worth consulting the Abaqus documentation; starting an Abaqus simulation without any Abaqus tutorial can be hard.
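Put together, the edit described around Figure 9 might look like the fragment below. The step definition is a made-up example; only the two *PRINT lines placed before *END STEP are the point:

```
*STEP, NAME=Step-1
*STATIC
0.1, 1.0, 1e-05, 1.0
** ... loads and boundary conditions of the model ...
*PRINT, CONTACT=YES
*PRINT, PLASTICITY=YES
*END STEP
```

After running a job from this input file, the extra contact and plasticity diagnostics appear in the message files described above.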
2,200
10,173
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.109375
3
CC-MAIN-2023-23
latest
en
0.877546
https://calories-info.com/granola-bar-calories-kcal/
1,656,376,741,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103344783.24/warc/CC-MAIN-20220627225823-20220628015823-00055.warc.gz
198,535,369
61,399
# Granola Bar: Calories and Nutrition Analysis

## How many calories in a granola bar?

100 g of granola bars has about 418 calories (kcal).

Calories per: ounce | one granola bar | tablespoon | cup | half cup

To illustrate, a medium-size granola bar (35 g) has about 146 calories. That is about 6% of the daily calorie intake for an adult of medium weight and medium activity (for the calculation we assumed a 2400 kcal daily intake).

To visualize how much that actually is, keep in mind that the calories from a medium-size granola bar are similar to the calories from, e.g.:

• 2.5 apples
• 1.5 glasses of Coca Cola (220 ml glass)
• 1 slice of cheese
• 1 glass of milk
• 7.5 cubes of sugar

To burn these calories you would have to bike for at least 21 minutes, swim for about 17 minutes, or run for 15 minutes.

### Granola bar: calories and nutrition per 100 g (and per ounce)

| | per 100 g | per ounce |
|---|---|---|
| Calories | 418 kcal | ~ 118.5 kcal |
| Carbs Total | 70.2 g | ~ 19.9 g |
| Dietary fiber | 3.8 g | ~ 1.1 g |
| Fat | 16.57 g | ~ 4.7 g |
| Protein | 5.65 g | ~ 1.6 g |
| Water | 6.45 g | ~ 1.8 g |

### How many calories in 1, 2, 3 or 5 granola bars?

As noted above, a medium-size granola bar (35 g) has 146 calories. It is easy to count that two granola bars have about 292 calories and three granola bars have about 438 calories. The list below also gives the calorie counts for four and five granola bars.

• Medium-size granola bar (35 g): 146 kcal
• Tablespoon of granola bar (12 g): 50 kcal
• Cup of granola bar (90 g): 376 kcal
• Half cup of granola bar (45 g): 188 kcal
• Ounce (oz) of granola bars: 119 kcal
• Half of a medium-size granola bar: 73 kcal
• Small-size granola bar (28 g): 116.8 kcal
• Big-size granola bar (46 g): 189.8 kcal
• Two medium-size granola bars: 292 kcal
• Three medium-size granola bars: 438 kcal
• Four medium-size granola bars: 584 kcal
• Five medium-size granola bars: 730 kcal

### Protein in granola bars

Granola bars have 5.65 g of protein per 100 g.
When you multiply this value by the weight of a medium-size granola bar (35 g), you get about 2 g of protein.

### Carbs in granola bars

Granola bars have 70.2 g of carbohydrates per 100 g. In the same way as for protein, we can calculate that a medium-size granola bar (35 g) has about 24.6 g of carbs.

### Fat in granola bars

Granola bars have 16.57 g of fat per 100 g. So it is easy to count that a medium-size granola bar (35 g) has about 5.8 g of fat.

## A medium-size granola bar (35 g) has:

146 kcal. To burn these calories you have to:

• Bike: 21 min.
• Horse ride: 27 min.
• Play tennis: 14 min.
• Run: 15 min.
• Swim: 17 min.

## Granola bar - vitamins per 100 g

• Vitamin B1 (Thiamine): 0.169 mg
• Vitamin B2 (Riboflavin): 0.082 mg
• Vitamin B3 (Niacin): 0.765 mg
• Vitamin B6: 0.066 mg
• Vitamin B9 (Folic acid): 0.016 mcg
• Vitamin E: 0.31 mg
• Vitamin K: 0.016 mg

Calorie breakdown:

• 62% carbs
• 5% protein
• 33% fat

## Granola bar - minerals per 100 g

• Potassium: 237 mg
• Magnesium: 64 mg
• Calcium: 41 mg
• Sodium: 251 mg
• Iron: 2.19 mg

## Interesting charts - compare granola bar with other sweets

The charts below show how granola bars compare to other products in their category. When you click on a selected product, you will see a detailed comparison.
[Interactive comparison charts omitted. Chart titles: Fiber, Fat, Iron, Magnesium, Potassium, Protein, Sodium, Sugar, Vitamin B1, Vitamin B2, Vitamin B3 (Niacin), Vitamin B6, Vitamin B9 (Folic Acid), Vitamin E, Water, and Carbohydrates in Granola Bar Based On Sweets Category.]
Calories in Granola Bar Based On Sweets Category Jelly (3) more...Marshmallow (3) more...Popsicle (5) more...Candy cane (6) more...Twizzlers (12) more...Chocolate covered strawberries (19) more...Jam (20) more...Chocolate chips (21) more...Halvah (33) more...Tootsie Rolls (36) more...Granola bar (41) more...Fudge (49) more...Fudge candy (49) more...Pudding (51) more...Dark chocolate (56) more...Almond Joy Candy Bar (64) more...Heavy whipping cream (66) more...Light whipping cream (69) more...Reese's Peanut Butter Cup (78) more...Snickers Fun Size (95) more...Frozen custard (97) more...Snickers (98) more...Maple syrup (102) more...M&M's (105) more...Ferrero Rocher (105) more...Twix (109) more...Chocolate ice cream (109) more...Twix bar (fun size, mini) (109) more...Milky Way Mini (115) more...Kit Kat Mini (125) more...Kit Kat (125) more...Kit Kat White (125) more...Klondike bar (125) more...McDonalds Sundae Caramel (127) more...Soft serve ice cream (128) more...Ice cream (128) more...Vanilla ice cream (128) more...Ice Cream Cone McDonalds (129) more...Larabar Chocolate Truffles (133) more...Frozen yogurt (143) more...Nonfat frozen yogurt (147) more...Hershey's Kiss (160) more...Milk chocolate (189) more...Chocolate covered almond (222) more...Clif bar (368) more...Honey Nut Cheerios (425) more...Blow Pop (938) more... Calcium in Granola Bar Based On Sweets Category All information about nutrition on this website was created with help of information from the official United States Department of Agriculture database.
8,875
26,530
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2022-27
longest
en
0.757084
http://spotidoc.com/doc/725711/electric-potential
1,544,993,158,000,000,000
text/html
crawl-data/CC-MAIN-2018-51/segments/1544376827992.73/warc/CC-MAIN-20181216191351-20181216213351-00559.warc.gz
267,923,408
13,057
# Electric Potential

```
Physics 121 - Electricity and Magnetism
Lecture 05 - Electric Potential
Y&F Chapter 23, Sect. 1-5

- Electric potential energy versus electric potential
- Calculating the potential from the field
- Potential due to a point charge
- Equipotential surfaces
- Calculating the field from the potential
- Potentials on, within, and near conductors
- Potential due to a group of point charges
- Potential due to a continuous charge distribution
- Summary

Electrostatics: two spheres, different radii, one with charge
  Initially Q10 = 10 C on sphere 1 (r1 = 10 cm), Q20 = 0 on sphere 2
  (r2 = 20 cm). Connect a wire between the spheres, then disconnect it.
  - Are the final charges Q1f and Q2f equal?
  - What determines how the charge redistributes itself?
  Mechanical analogy - water pressure: open a valve between two tanks and
  water flows. What determines the final water levels?
  P = ρgy, so gy = PE per unit mass.

ELECTRIC POTENTIAL V(r):
potential energy due to an electric field, per unit (test) charge
- Closely related to electrostatic potential energy, but:
    ΔPE ~ work done (= force × displacement)
    ΔV  ~ work done per unit charge (= field × displacement)
- Potential summarizes the effect of charge on a distant point without
  specifying a test charge there (like field, unlike PE).
- Scalar field - easier to use than E (a vector).
- Both ΔPE and ΔV imply a reference level.
- Both PE and V describe conservative forces/fields, like gravity.
- The motion of charged particles can be determined using the Second Law
  (F = qE), or from PE via the work-kinetic-energy theorem and/or
  mechanical energy conservation.

Units, dimensions:
- Potential energy U: joules
- Potential V: [U]/[q] = joules/C = VOLTS
- Units of field: [V]/[d] = volts/meter - the same as N/C

Reminder: work done by a constant force
5-1: In the four cases shown in the sketch, a force F acts on an object and
does work. In all four cases the force has the same magnitude, and the
displacement Δs of the object is to the right and has the same magnitude;
the cases differ in the direction of F. Rank the cases in order of the work
done by the force on the object, from most positive to most negative:
  A. I, IV, III, II    B. II, I, IV, III    C. III, II, IV, I
  D. I, IV, II, III    E. III, IV, I, II

Work done by a constant force (a reminder):
  ΔW = F·Δs = F Δs cosθ
The work ΔW done on an object by a constant external force is the product of
- the magnitude F of the force,
- the magnitude Δs of the displacement of the point of application,
- and cosθ, where θ is the angle between the force and displacement vectors.
For the sketched cases: W_I = 0 (F ⊥ Δs), W_II = -FΔs (F opposite Δs),
W_III = +FΔs (F along Δs), W_IV = FΔs cosθ.
If the force varies in direction and/or magnitude along the path, use a
"path integral":
  ΔW = ∫(i→f) F·ds   (the result may depend on the path)

Definitions: electrostatic potential energy versus potential
Recall conservative fields: the work done BY THE FIELD on a test charge
moving from i to f does not depend on the path taken; the work done around
any closed path equals zero.
  dU = -dW = -Fe·ds,  with Fe = q0 E,  and dV ≡ dU/q0  (basic definitions)
POTENTIAL ENERGY DIFFERENCE (charge q0 moves from i to f along ANY path):
  Uf - Ui = ΔU = -ΔW = -∫(i→f) Fe·ds = -q0 ∫(i→f) E·ds
POTENTIAL DIFFERENCE (potential energy per unit charge):
  Vf - Vi = ΔV = ΔU/q0 = -ΔW/q0 = -∫(i→f) E·ds
(evaluate the integrals on ANY path from i to f)

Some distinctions and details (ΔU = q0 ΔV):
- The field depends on a charge distribution located elsewhere.
- A test charge q0 moved between i and f gains or loses potential energy ΔU;
  ΔU does not depend on the path.
- ΔV also does not depend on the path, and does not depend on |q0| (the
  test charge).
- Only differences in electric potential and PE are meaningful:
  - relative reference: choose an arbitrary zero level for ΔU or ΔV;
  - absolute reference: set Ui = 0 with all charges infinitely far apart.
- Volt (V) = SI unit of electric potential:
  1 volt = 1 joule per coulomb = 1 J/C;  1 J = 1 V·C and 1 J = 1 N·m.
- Electric field units, new name: 1 N/C = (1 N/C)(1 V·C / 1 N·m) = 1 V/m.
- A convenient energy unit, the electron volt:
  1 eV = work done moving charge e through a 1 volt potential difference
       = (1.60×10^-19 C)(1 J/C) = 1.60×10^-19 J

Work and PE: who/what does positive or negative work?
5-2: Suppose we exert a force and move a proton from point i to point f in
a uniform electric field directed as shown. Which statement is true?
  A. The electric field does positive work on the proton; the electric
     potential energy of the proton increases.
  B. The electric field does negative work on the proton; the electric
     potential energy of the proton decreases.
  C. Our force does positive work on the proton; its potential energy
     increases.
  D. Our force does positive work on the proton; its potential energy
     decreases.
  E. The changes cannot be determined.
Hint: which directions pertain to the displacement and the force?

EXAMPLE: find the change in potential as a test charge +q0 moves from point
i to point f in a uniform field E.
  ΔV_fi = -∫path E·ds;   ΔU_fi = -ΔW_fi = -∫path F·ds;   ΔU_fi = q0 ΔV_fi
(to convert potential to/from PE, just multiply/divide by q0)
ΔU and ΔV depend only on the endpoints: ANY path from i to f gives the same
results, so CHOOSE A SIMPLE PATH THROUGH POINT "o":
  ΔV_fi = ΔV_oi + ΔV_fo;  ΔV_oi = 0, since the displacement i → o is normal
  to the field (a path along an equipotential); then
  ΔV_fi = ΔV_fo = -E·Δx = +E|Δx|
- An external agent must do positive work on a positive test charge to move
  it from o to f; the E field does negative work.
- Units of E can be volts/meter.
- What are the signs of ΔU and ΔV if the test charge is negative?

Potential function for a point charge
- Charges infinitely far apart: choose V∞ = 0 (reference level).
- ΔU = work done on a test charge as it moves to its final location:
  ΔU = q0 ΔV.
- The field is conservative: choose the most convenient path = radial.
Find the potential V(R) a distance R from a point charge q:
  V(R) - V∞ = -∫(∞→R) E·ds along a radial path, with ds = r̂ dr and
  E(r) = (kq/r²) r̂:
  V(R) = kq ∫(R→∞) dr/r² = kq (1/R)
  V(R) = kq/R     (23.14)
- Positive for q > 0, negative for q < 0.
- Inversely proportional to r^1, NOT r^2.
Similarly, for the potential ENERGY (same method, but integrate the force):
  U(R) = Q V(R) = kqQ/R     (23.9)
- Shared PE between q and Q; the overall sign depends on both signs.

Equipotential surfaces: voltage and potential energy are constant on them,
i.e. ΔV = 0 and ΔU = 0.
- No change in potential energy along an equipotential.
- Zero work is done moving charges along an equipotential:
  ΔV = -E·Δs = -E Δs cosθ = 0 and ΔU = -ΔW = -Fe·Δs = 0
  for Δs along the surface.
- Equipotentials are perpendicular to the electric field lines.
CONDUCTORS ARE ALWAYS EQUIPOTENTIALS:
- charge on conductors moves to make E_inside = 0;
- E_surf is perpendicular to the surface, so ΔV = 0 along any path on or
  in a conductor.

Examples of equipotential surfaces:
- Uniform field: equipotentials are planes (evenly spaced).
- Point charge, or outside a sphere of charge: equipotentials are spheres
  (not evenly spaced).
- Dipole field: equipotentials are not simple shapes.

The field E(r) is the gradient of the potential:
  dV = -E·ds = -E ds cosθ
- The component of ds along E produces the potential change; the component
  of ds normal to E produces no change.
- The field is normal to the equipotential surfaces; for a path along an
  equipotential, ΔV = 0.
- Gradient = spatial rate of change:
  E = -dV/ds  (for ds perpendicular to the equipotential), i.e.
  E = -∇V = -( î ∂V/∂x + ĵ ∂V/∂y + k̂ ∂V/∂z )
  (math note: ∂f(x,y,z)/∂x is a "partial" derivative)
EXAMPLE - UNIFORM FIELD E, one dimension:
  ΔV = -E·Δs = -E Δs;  ΔU = q0 ΔV = -q0 E Δs = -F Δs

Potential difference between oppositely charged conductors
(parallel plate capacitor):
- Equal and opposite surface charges; all charge resides on the inner
  surfaces (opposite charges attract).
- Between the plates the field is uniform, E = σ/ε0; outside, E = 0.
- With Δx pointing from the negative to the positive plate:
  ΔV = Vf - Vi = E·Δx
Example: find the potential difference ΔV across the capacitor, assuming
σ = 1 nanocoulomb/m² and Δx = 1 cm:
  ΔV = E Δx = (σ/ε0) Δx = (1×10^-9 / 8.85×10^-12) × 10^-2 = 1.13 volts
A positive test charge +q gains potential energy ΔU = qΔV as it moves from
the - plate to the + plate along any path (including an external circuit).

Comparison of point charge and point mass formulas for vector and scalar
fields:
  FIELD VECTORS (force per unit mass / per unit charge):
    gravitation:    g(r) = -(GM/r²) r̂
    electrostatics: E(r) = (1/4πε0)(Q/r²) r̂    (N/C)
  FORCE VECTORS:
    gravitation:    F(r) = -(GmM/r²) r̂
    electrostatics: F(r) = (1/4πε0)(qQ/r²) r̂
  POTENTIAL ENERGY (scalar):
    gravitation:    Ug(r) = -GmM/r
    electrostatics: Ue(r) = (1/4πε0) qQ/r
  POTENTIAL (scalar):
    gravitation:    Vg(r) = -GM/r   (PE per unit mass - not used often)
    electrostatics: Ve(r) = (1/4πε0) Q/r   (PE per unit charge)
  Fields and forces ~ 1/R², but potentials and PEs ~ 1/R.
  F = -∂U/∂s,  E = -∂V/∂s

Visualizing the potential function V(r) for a positive point charge (2D):
V(r) ~ 1/r (a "hill"); for q negative, V is negative (a funnel).

Conductors are always equipotentials
Example: two spheres, different radii, one charged to 90,000 V.
Connect a wire between the spheres - charge moves until both conductors
come to the same potential: charge redistributes to make V1f = V2f.
  r1 = 10 cm, V10 = 90,000 V;  r2 = 20 cm, V20 = 0, Q20 = 0.
  Initially: V10 = kQ10/r1 = 9×10^4 volts, so Q10 = 1.0 μC.
  Find the final charges (Q1f + Q2f = Q10):
    V1f = kQ1f/r1 = V2f = k[Q10 - Q1f]/r2
    Q1f = Q10 (1 + r2/r1)^-1 = 0.33 μC;  Q2f = Q10 - Q1f = 0.67 μC
  Find the final potential(s):
    V1f = kQ1f/r1 = (9×10^9)(0.33×10^-6)/0.1 = 30,000 volts = V2f

Potential inside a hollow conducting shell (R = 10 cm):
  Vc = Vb (the shell is an equipotential) = 18,000 volts on the surface.
  The shell can be any closed surface (sphere or not).
  Find the potential Va at a point "a" inside the shell.
  Definition: ΔV_ab = Va - Vb = -∫(b→a) E·ds
  Apply Gauss' law, choosing a Gaussian surface just inside the shell:
  q_enc = 0, so E = 0 everywhere inside, and ΔV_ab = 0:
    Va = V_surface = Vb = Vc = Vd = 18,000 volts
  The potential is continuous across the surface - the field is not:
    inside:  V_inside = V_surf,   E_inside = 0
    outside: V_outside = kq/r,    E_outside = kq/r²

Potential due to a group of point charges: use superposition for n charges:
  V(r) = Σi Vi = (1/4πε0) Σi qi / |r - ri|
- The sum is an algebraic sum, not a vector sum.
- Reminder: for the electric field, superposition gives
  E(r) = Σi Ei = (1/4πε0) Σi (qi / |r - ri|²) r̂i   (a vector sum)
- E may be zero where V is not zero.
- V may be zero where E is not zero.

Examples: potential due to point charges
TWO EQUAL CHARGES - point P at the midpoint between them (separation d):
  EP = 0 by symmetry, but
  VP = kq/(d/2) + kq/(d/2) = 4kq/d - obviously not zero.
  F and E are zero at P, but work would have to be done to move a test
  charge to P from infinity.
  Let q = 1 nC, d = 2 m:  VP = 4 (9×10^9)(10^-9)/2 = 18 volts.
DIPOLE - otherwise positioned as above:
  VP = kq/(d/2) - kq/(d/2) = 0, but EP ≠ 0:
  EP = 2 kq/(d/2)² = 8kq/d²
  Let q = 1 nC, d = 2 m:  EP = 8 (9×10^9)(10^-9)/4 = 18 V/m (or N/C).
SQUARE with charges q, -q, -q, q on the corners (side a), P at the center,
each corner a distance d = a√2/2 from P:
  EP = 0 by symmetry, and
  VP = Σi kqi/d = (k/d)[q - q - q + q] = 0.
Same square with all four charges positive:
  EP = 0 by symmetry again, but
  VP = Σi kqi/d = 4kq/d = 8kq/(a√2) = 510 volts (e.g. q = 1 nC, a = 10 cm).

Another example: find the work done by a 12 volt battery in 1 minute as a
1 ampere current flows to light a lamp:
  Q = charge moved through the lamp by the current = i Δt
    = 1 A × 60 s = 60 C
  ΔU = Q ΔV = 60 × 12 = 720 joules
  ΔW = 720 joules of work done by the battery (it raises the PE of the
  charge it moves through the 12 V potential difference).

Electric field and electric potential
5-3: Which of the sketched figures (A-E, symmetric arrangements of point
charges ±q around a red point) have BOTH V = 0 and E = 0 at the red point?
(Check the algebraic sum Σ kqi/ri for V and the vector sum of the fields
for E.)

Method for finding the potential function V at a point P due to a
continuous charge distribution:
1. Assume V = 0 infinitely far away from the charge distribution (finite
   size).
2. Find an expression for dq, the charge in a "small" chunk of the
   distribution, in terms of λ, σ, or ρ:
     dq = λ dl   for a linear distribution
     dq = σ dA   for a surface distribution
     dq = ρ dV   for a volume distribution
   Typical challenge: express the above in terms of the chosen coordinates.
3. At point P, dV is the differential contribution to the potential due to
   a point-like charge dq located in the distribution; use symmetry:
     dV = dq/(4πε0 r)   (a scalar; r = distance from dq to P)
4. Use "superposition": add up (integrate) the contributions over the whole
   charge distribution, varying the displacement r as needed. The result is
   the scalar VP:
     VP = ∫dist dVP = (1/4πε0) ∫dist dq/r
   (a line, surface, or volume integral)
5. The field E can be gotten from the potential by taking the "gradient":
     E = -∇V;  E_s = -dV/ds is the rate of potential change perpendicular
     to the equipotential.

Example 23.11: potential along the z axis of a ring of charge
  Q = charge on the ring; λ = uniform linear charge density = Q/2πa;
  r = distance from dq to P = [a² + z²]^1/2;  ds = arc length = a dφ.
  dV = k dq/r, so
  V = ∫ring dV = (kaλ/r) ∫(0→2π) dφ = 2πkaλ/r
  V = kQ/[z² + a²]^1/2
  (all scalars - no need to worry about direction; almost the point charge
  formula)
  - As z → 0, V → kQ/a.
  - As a → 0 or z → ∞, V → kQ/z (point charge).
FIND THE ELECTRIC FIELD (along z by symmetry):
  Ez = -∂V/∂z = kQz/[z² + a²]^3/2
  - E → 0 as z → 0 (for finite a).
  - E → the point charge formula for z >> a.

Example: potential due to a charged rod
A rod of length L located along the x axis (one end at the origin) has a
uniform linear charge density λ. Find the electric potential at a point P
located on the y axis a distance d from the origin.
  r = [x² + d²]^1/2,  dq = λ dx
  dV = (1/4πε0) dq/r = (1/4πε0) λ dx/(x² + d²)^1/2
Integrate over the charge distribution:
  V = (λ/4πε0) ∫(0→L) dx/(x² + d²)^1/2
    = (λ/4πε0) [ ln( x + (x² + d²)^1/2 ) ] from 0 to L
    = (λ/4πε0) { ln[ L + (L² + d²)^1/2 ] - ln d }
Result:
  V = (λ/4πε0) ln{ [ L + (L² + d²)^1/2 ] / d }
Check by differentiating, with r = (x² + d²)^1/2:
  d/dx ln(x + r) = [1/(x + r)](1 + dr/dx) = [1/(x + r)](1 + x/r)
                 = [1/(x + r)][(r + x)/r] = 1/r

Example 23.10: potential near an infinitely long charged line, or outside
a charged conducting cylinder of radius R:
  E = 2kλ/r for r > R.
  ΔV_fi = Vf - Vi = -∫(i→f) E·dr = -2kλ ∫(i→f) dr/r = -2kλ ln(rf/ri)
        = +2kλ ln(ri/rf)
  (negative for rf > ri with λ positive)
Inside the conducting cylinder (r < R):
  E = 0, so ΔV_inside = -∫(i→f) E·dr = 0.
  The potential inside is constant and equals the surface value.

Example 23.12: potential at a symmetry point near a finite line of charge
  Length 2a, uniform linear charge density λ = Q/2a; point P on the
  perpendicular bisector, a distance x from the line.
  Charge in length dy: dq = λ dy; potential of a point charge:
  dV = k dq/r, with r = (x² + y²)^1/2.
  VP = ∫ dV = kλ ∫(-a→+a) dy/(x² + y²)^1/2
  Standard integral from tables:
    ∫ dy/(x² + y²)^1/2 = ln[ y + (x² + y²)^1/2 ]
  VP = kλ ln{ [ a + (x² + a²)^1/2 ] / [ -a + (x² + a²)^1/2 ] }
     = (kQ/2a) ln{ [ (x² + a²)^1/2 + a ] / [ (x² + a²)^1/2 - a ] }
Limiting cases:
  - the point charge formula for x >> 2a;
  - the Example 23.10 (near field) formula for x << 2a.

Example: potential on the symmetry axis of a charged disk
  Q = charge on a disk of radius R; uniform surface charge density
  σ = Q/πR². The disc is a set of rings, each da wide in radius.
  For one of the rings: dq = σ dA = σ a da dφ, with r = [a² + z²]^1/2 and
  dV = k dq/r:
  V_P,z = (1/4πε0) ∫(0→2π) dφ ∫(0→R) σ a da / [a² + z²]^1/2
  (a double integral)
Integrate twice: first over the azimuthal angle φ from 0 to 2π, which
yields a factor of 2π, then over the ring radius a from 0 to R, using the
antiderivative ∫ a da/[a² + z²]^1/2 = [a² + z²]^1/2:
  V_disk = (σ/2ε0) { [z² + R²]^1/2 - z }
"Far field" (z >> R): the disc looks like a point charge. Using
(1 + x)^1/2 ≈ 1 + x/2 - x²/8 + ... for x² < 1:
  V_disk ≈ (σ/2ε0) z [ (1 + R²/z²)^1/2 - 1 ] ≈ (σ/2ε0) R²/2z
         = (1/4πε0) Q/z
"Near field" (z << R): the disc looks like an infinite sheet of charge:
  V_disk ≈ (σR/2ε0)(1 - z/R), and E = -dV/dz = σ/2ε0.

Copyright R. Janow, Spring 2014
```
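The on-axis ring result (Example 23.11) is easy to sanity-check numerically: the analytic field should match the negative gradient of the potential, and far from the ring the potential should approach the point-charge value. A Python sketch — the charge and radius are illustrative values, not from the lecture:

```python
import math

K = 8.9875517923e9  # Coulomb constant k = 1/(4*pi*eps0) in N*m^2/C^2

def ring_potential(z, Q, a):
    """On-axis potential of a uniformly charged ring: V = kQ / sqrt(z^2 + a^2)."""
    return K * Q / math.sqrt(z * z + a * a)

def ring_field_z(z, Q, a):
    """On-axis field from E_z = -dV/dz = kQz / (z^2 + a^2)^(3/2)."""
    return K * Q * z / (z * z + a * a) ** 1.5

def minus_dV_dz(z, Q, a, h=1e-6):
    """Central-difference estimate of -dV/dz, to check the gradient relation."""
    return -(ring_potential(z + h, Q, a) - ring_potential(z - h, Q, a)) / (2 * h)

Q, a = 1e-9, 0.1  # sample values: a 1 nC ring of radius 10 cm

# The analytic field agrees with the numerical gradient of V
z = 0.25
assert abs(ring_field_z(z, Q, a) - minus_dV_dz(z, Q, a)) < 1e-3

# Far from the ring (z >> a), V approaches the point-charge potential kQ/z
z_far = 100.0
ratio = ring_potential(z_far, Q, a) / (K * Q / z_far)
assert abs(ratio - 1.0) < 1e-6
```

The same gradient check works for the disk formula, since both are one-dimensional potentials along a symmetry axis.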
7,106
16,619
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4
4
CC-MAIN-2018-51
latest
en
0.807133
https://www.printablemultiplication.com/make-your-own-multiplication-flash-cards/
1,656,662,765,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00792.warc.gz
979,946,244
10,269
# Make Your Own Multiplication Flash Cards

Learning multiplication after counting, addition, and subtraction is ideal. Children learn arithmetic through a natural progression: counting, addition, subtraction, multiplication, and finally division. This raises two questions: why learn arithmetic in this particular order, and why learn multiplication after counting, addition, and subtraction but before division?

## The following details answer these questions:

1. Children first learn counting by associating visible objects with their hands. A concrete example: how many apples are there in the basket? A more abstract example: how old are you?

2. From counting numbers, the next logical step is addition, and then subtraction. Addition and subtraction tables can be very helpful teaching tools for children, because they are visual aids that make the transition from counting easier.

3. Which should be learned next, multiplication or division? Multiplication is shorthand for addition. At this stage, children have a firm understanding of addition, so multiplication is the next logical form of arithmetic to learn.

## Review the essentials of multiplication using a multiplication table.

Let us review a multiplication example. Using a multiplication table, multiply four times three and get an answer of twelve: 4 x 3 = 12. The intersection of row three and column four of a multiplication table is 12; twelve is the answer. For children beginning to learn multiplication, this is easy. They can use addition to solve the problem, thereby confirming that multiplication is shorthand for addition. Example: 4 x 3 = 4 + 4 + 4 = 12. It is an excellent introduction to the multiplication table.

A further benefit: the multiplication table is visual and ties back to learning addition.

## Where should we start learning multiplication using the multiplication table?

1. First, get acquainted with the table.
2. Start with multiplying by one. Start at row number one. Move to column number one. The intersection of row one and column one is the answer: 1.
3. Repeat these steps for multiplying by one: multiply row one by columns one through twelve. The answers are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12, respectively.
4. Repeat these steps for multiplying by two. Multiply row two by columns one through five. The answers are 2, 4, 6, 8, and 10, respectively.
5. Let us jump ahead. Repeat these steps for multiplying by five. Multiply row five by columns one through twelve. The answers are 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, and 60, respectively.
6. Now let us raise the level of difficulty. Repeat these steps for multiplying by three. Multiply row three by columns one through twelve. The answers are 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, and 36, respectively.
7. If you are comfortable with multiplication so far, try a test. Solve the following multiplication problems in your head and then check your answers against the multiplication table: multiply six and two, multiply nine and three, multiply one and eleven, multiply four and four, and multiply seven and two. The answers are 12, 27, 11, 16, and 14, respectively.

If you got four out of five problems correct, create your own multiplication tests. Calculate the answers in your head, and check them using the multiplication table.
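For readers who like to tinker, the row-and-column lookup described above is easy to mimic in a few lines of code; a minimal sketch (the names here are invented for illustration):

```python
# Build a 12x12 multiplication table: table[row][col] holds row * col,
# mirroring the printed table's row/column intersections.
table = {row: {col: row * col for col in range(1, 13)} for row in range(1, 13)}

def multiply(row, col):
    """Look up the product at the intersection of a row and a column."""
    return table[row][col]

# The worked example from the text: row four, column three -> 12.
assert multiply(4, 3) == 12
# Multiplication is shorthand for repeated addition: 4 x 3 = 4 + 4 + 4.
assert multiply(4, 3) == 4 + 4 + 4
# The self-test answers quoted above: 6x2, 9x3, 1x11, 4x4, 7x2.
assert [multiply(6, 2), multiply(9, 3), multiply(1, 11),
        multiply(4, 4), multiply(7, 2)] == [12, 27, 11, 16, 14]
```

Printing the nested dictionary row by row reproduces the flash-card answers in the same order a child would read them off the table.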
821
3,639
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.6875
5
CC-MAIN-2022-27
latest
en
0.943275
http://mathhelpforum.com/calculus/2508-i-need-help-about-hypergeometric-series.html
1,524,430,780,000,000,000
text/html
crawl-data/CC-MAIN-2018-17/segments/1524125945648.77/warc/CC-MAIN-20180422193501-20180422213501-00373.warc.gz
201,781,165
11,076
1. I need help about hypergeometric series

The hypergeometric series: the infinite series

$\displaystyle 1+\frac{ab}{1!\,c}x+\frac{a(a+1)b(b+1)}{2!\,c(c+1)}x^2+\frac{a(a+1)(a+2)b(b+1)(b+2)}{3!\,c(c+1)(c+2)}x^3+\cdots$

where a, b and c are neither 0 nor negative integers. Please, can anyone help me by explaining how to prove that the hypergeometric series converges? Any help would be appreciated, thank you.

2. There are a few ways to write this, but try looking at it like this.

$\displaystyle S_{n}=1+\sum_{n=0}^{\infty}\frac{(a+n)!(b+n)!x^{n+1}}{(n+1)!(c+n)!}$

Whew. And I would definitely take the ratio test from here.

3. Thanks Jameson, but if you don't have any objection, can you give me the complete answer? I'm confused and not sure if I can do this. Thanks for any responses.

4. Ok. I'll try to TeX this all out.

Let $\displaystyle b_{n}=\frac{a_{n+1}}{a_n}$, i.e. $\displaystyle a_{n+1}\cdot\frac{1}{a_n}$.

So $\displaystyle b_n=\frac{(a+n+1)!(b+n+1)!x^{n+2}}{(n+2)!(c+n+1)!}\cdot\frac{(n+1)!(c+n)!}{(a+n)!(b+n)!x^{n+1}}$

All I did was substitute (n+1) for n, then divide that expression by $a_n$. But to simplify, I just flipped $a_n$ and multiplied. Now for some cancellation.

$\displaystyle b_n=\left(\frac{(a+n+1)(a+n)!(b+n+1)(b+n)!x^{n+1}\cdot x}{(n+2)(n+1)!(c+n+1)(c+n)!}\right)\left(\frac{(n+1)!(c+n)!}{(a+n)!(b+n)!x^{n+1}}\right)$

Cancel some terms out and you'll get

$\displaystyle b_n=\frac{(a+n+1)(b+n+1)x}{(n+2)(c+n+1)}$

Now you have your common ratio! So take the limit as $\displaystyle n\rightarrow\infty$ to see how the terms behave as n grows, and you should be able to show for which x this converges. Hopefully that makes sense.

5. I tried to take the limit as $\displaystyle n\rightarrow\infty$ and I get $\displaystyle \lim b_n=x$, and conclude that the series diverges for x > 1, converges for x < 1, and that the test fails at x = 1. We can use Gauss' test to examine the series' behavior there.

But I have one question: is it necessary to test the series' behavior at x = -1 as well, or not? Thanks.

6. Yes, sorry: the series converges for $\displaystyle |x|<1$, and you must test the endpoints 1 and -1 for convergence.

7. But that's very long & hard. Anyway, thanks so much, Jameson.
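The limit in post 5 can also be checked numerically: build the partial sums from the term ratio derived above and watch the ratio tend to x. A Python sketch — the sample parameters a, b, c below are illustrative choices, not values from the thread:

```python
import math

def hypergeom_2f1_partial(a, b, c, x, n_terms=400):
    """Partial sum of the hypergeometric series 1 + (ab/1!c)x + ...,
    built from the term ratio t_{n+1}/t_n = (a+n)(b+n)x / ((n+1)(c+n))."""
    term, total = 1.0, 1.0
    for n in range(n_terms):
        term *= (a + n) * (b + n) * x / ((n + 1) * (c + n))
        total += term
    return total

def term_ratio(a, b, c, x, n):
    """The ratio-test quantity b_n from the thread; it tends to x as n grows."""
    return (a + n + 1) * (b + n + 1) * x / ((n + 2) * (c + n + 1))

# The ratio approaches x, so by the ratio test the series converges for |x| < 1
assert abs(term_ratio(0.5, 1.0, 1.5, 0.25, 10_000) - 0.25) < 1e-3

# Cross-check against a known closed form: 2F1(1/2, 1; 3/2; z^2) = atanh(z)/z
z = 0.5
s = hypergeom_2f1_partial(0.5, 1.0, 1.5, z * z)
assert abs(s - math.atanh(z) / z) < 1e-9
```

Running the same partial sums with |x| > 1 shows the terms growing without bound, which is the divergent half of the ratio-test conclusion.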
756
2,258
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.15625
4
CC-MAIN-2018-17
latest
en
0.852835
https://ordonews.com/what-happens-when-traveling-at-near-light-speed/?amp=1
1,675,618,877,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500273.30/warc/CC-MAIN-20230205161658-20230205191658-00465.warc.gz
445,379,637
10,112
# What happens when traveling at near-light speed

(ORDO NEWS) — The speed of light in vacuum is an absolute, inviolable cosmic speed limit. Recall that the speed of light in a vacuum is 299,792,458 meters per second, or about 1.08 billion kilometers per hour!

According to Albert Einstein's Special Theory of Relativity, all physical processes in a moving body proceed more slowly than in a similar body at rest. This effect extends to time itself, which slows down as speed increases.

For example, on the International Space Station (ISS), time passes more slowly than on Earth, but the effect is extremely small, since the station's speed is still relatively low. The time lag on the ISS is only about 0.01 seconds for every year that passes on Earth.

#### But what happens if you move faster?

Of course, the greater the speed, the more noticeable the effect of time dilation. Let's take a time span of one hour (60 minutes) for a stationary object, whose speed is 0% of the speed of light.
• At 10% of the speed of light, one hour "turns" into 59.70 minutes;
• At 20% of the speed of light, one hour "turns" into 58.79 minutes;
• At 30% of the speed of light, one hour "turns" into 57.24 minutes;
• At 40% of the speed of light, one hour "turns" into 54.99 minutes;
• At 50% of the speed of light, one hour "turns" into 51.96 minutes;
• At 60% of the speed of light, one hour "turns" into 48.00 minutes;
• At 70% of the speed of light, one hour "turns" into 42.85 minutes;
• At 80% of the speed of light, one hour "turns" into 36.00 minutes;
• At 90% of the speed of light, one hour "turns" into 26.15 minutes;
• At 92% of the speed of light, one hour "turns" into 23.52 minutes;
• At 95% of the speed of light, one hour "turns" into 18.73 minutes;
• At 99% of the speed of light, one hour "turns" into 8.46 minutes;
• At 99.9% of the speed of light, one hour "turns" into 2.68 minutes;
• At 99.997% of the speed of light, one hour "turns" into 0.46 minutes;
• At 100% of the speed of light, one hour "turns" into 0 minutes.

Roughly speaking, a person traveling at 99% of the speed of light experiences time about seven times more slowly. Suppose this person were to fly at this speed for a year; on returning to Earth, he would find that about seven years had passed.

If the same person accelerated to 99.9999% of the speed of light, each year of his journey would turn into about 707 years on Earth. A year's travel at 99.9999999% of the speed of light would correspond to about 22,400 years on Earth. At 99.9999999999%, a year would turn into about 707,000 years on Earth. And another year of flight at 99.999999999999999% of the speed of light would amount to roughly 224 million years on Earth!

#### Accelerate to 100% of the speed of light, and time stops altogether … tempting?

Unfortunately (or fortunately), nothing with mass can actually reach the speed of light, and humanity does not have the technology to get anywhere close.
NASA's Parker Solar Probe, the fastest spacecraft ever built to study the Sun, was able to reach a speed of 692,018 kilometers per hour, or 0.06412% of the speed of light.
# 20 Making Friends Worksheets

Making Friends English ESL worksheets for distance learning and social skills (via en.islcollective.com).

Numbering Worksheets for Kids. Kids are usually introduced to this topic during their math education. The main reason is that learning math can be done with worksheets. With an organized worksheet, kids will be able to describe and explain the correct answer to any mathematical problem. But before we talk about how to create a math worksheet for kids, let's look at how children learn math.

In elementary school, children are exposed to a number of different ways of teaching a number of different subjects. Learning these subjects is important because it helps them develop logical reasoning skills. It is also an advantage for them to understand the concepts behind mathematical ideas. To make the learning process easy for children, the educational methods used should be easy as well. For example, if the method is simply counting, it is not advisable to use only numbers; the learning process should also be based on counting and dividing numbers in a meaningful way.

The main purpose of using a worksheet for kids is to provide a systematic way of teaching them how to count and multiply. Children love to learn in a systematic manner. In addition, there are a few benefits associated with creating a worksheet. Here are some of them: children get a clear idea about the number of objects they are going to add up. A good worksheet is one which shows the addition of different objects, which helps give children a clear picture of the actual process.
This helps children easily identify the objects and the quantities associated with them, and it supports the child's learning. It also gives children a platform to learn about the subject matter: they can compare and contrast the values of various objects, and by comparing and contrasting they come away with a clearer idea.

A child will also be able to solve a number of problems by simply using a few cells, and will learn to organize a worksheet and manipulate the cells to arrive at the right answer to any question. This is a useful part of a child's development. When he or she comes across an incorrect answer, he or she can find the right solution by using the worksheets and can work on a problem without having to refer to the teacher. Most importantly, he or she will be taught the proper way of doing the mathematical problem.

Math skills are an important part of learning and developing, and using worksheets will improve a child's math skills. Many teachers are not very impressed when they see the number of worksheets being used by their children; this is especially true in elementary schools, where teachers often feel that a child's performance is not good enough and that they cannot just hand out worksheets. However, what most parents and educators do not realize is that there are several ways to improve a child's performance: you just need to make use of a worksheet for kids. It is a very good option for your children to improve their performance in math.
Related worksheets:

- Making friends around the world ESL worksheet, by Dianasuzuki (via eslprintables.com)
- Hello, nice to meet you: basic conversation, making friends (via en.islcollective.com)
- Meeting people and making friends conversation questions (via eslprintables.com)
- Making new friends interactive worksheet (via liveworksheets.com)
- Pin on Printable Worksheet for Kindergarten (via pinterest.com)
- The Three Friends: making suggestions, English ESL (via en.islcollective.com)
- 20 Making Friends Worksheets Kindergarten in 2020 (via pinterest.com)
- English worksheets: Making Friends (via eslprintables.com)
- A good friend vs. a bad friend interactive worksheet (via liveworksheets.com)
- Vocabulary: friendship (via pinterest.com.au)
- Meeting a new friend ESL worksheet, by rosadonuria (via eslprintables.com)
- Making friends: a gap-fill questions interactive worksheet (via liveworksheets.com)
- Friends worksheet (via pinterest.co.uk)
- Making suggestions, English ESL worksheets for distance learning (via en.islcollective.com)
- Quiz & Worksheet: Friendship Skills for Kids with Autism (via study.com)
- My Friend Circle English worksheet for kids (via 1989generationinitiative.org)
- Irony Worksheet 2 (via ereadingworksheets.com)
- Substance Abuse Activity Worksheets, printable and group (via 1989generationinitiative.org)
- Friendship interactive worksheet (via liveworksheets.com)
# Tagged Questions

### Exercise: Attack on a Two-Round DES Cipher (192 views)
Working through the exercises in Cryptography Engineering (Schneier, Ferguson & Kohno) I have stalled on the following exercise: Consider a new block cipher, DES2, that consists only of two ...

### Can you explain "weak keys" for DES? (818 views)
A weak key for DES is a key $K$ such that $DES_{k_1}(DES_{k_2}(x))=x$ for all $x$. I don't get why the 4 keys $k_1||k_2$: $1^{112}$, $0^{112}$, $0^{56}||1^{56}$, $1^{56}||0^{56}$ are considered as ...

### Can you help me with this DES variant analysis? (125 views)
I'm struggling with some DES variant that I got as an exercise (exercise taken from Katz-Lindell Ex 5.14). The variant is as follows: The left half of the master key is used to derive all the ...

### How to choose keys for a block cipher? (229 views)
AES and DES are block ciphers. Mathematically, each is a mapping from plaintext space to ciphertext space using the keys, i.e. $\{0,1\}^k \times \{0,1\}^l \longrightarrow \{0,1\}^l$. I know that these ...

### How does DES decryption work? Is it the same as encryption or the reverse? [duplicate] (3k views)
If DES decryption is the same as encryption done in reverse order, then how can the reversed S-box convert 4 bits into 6 bits?

### Have these compositions of block ciphers the same security? (188 views)
I'm interested in the compositions of the block cipher DES, instantiated with independent keys $k_1$, $k_2$ and $k_3$. Are these 3 compositions equivalent in terms of security? ...

### Can we replace the XOR operation in DES with some other operation? (506 views)
Can we replace the XOR operation in the DES algorithm with some other operation? If so, does it work for both encryption and decryption?

### Why is MixColumns omitted from the last round of AES? (2k views)
All rounds of AES (and Rijndael) have a MixColumns step, save the last round which omits it. DES has a similar feature where the last round differs slightly. The rationale, if I recall correctly, ...

### Avalanche effect in DES (3k views)
I couldn't understand the avalanche effect in DES. Could someone explain how the avalanche effect happens in DES?

Suppose that a single evaluation of a block cipher (DES or AES) takes 10 operations, and the computer can do $10^{15}$ such operations per second. How long would it take to recover a DES key, ...
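The closing snippet sets up a classic back-of-the-envelope exercise. Under its stated assumptions (10 operations per block-cipher evaluation, 10^15 operations per second, and DES's 2^56-key space), a quick sketch:

```python
# Assumptions stated in the question: 10 ops per DES evaluation,
# 10**15 operations per second, 2**56 possible keys.
ops_per_eval = 10
ops_per_second = 10 ** 15
keyspace = 2 ** 56

evals_per_second = ops_per_second // ops_per_eval   # 10**14 trial decryptions per second
worst_case_s = keyspace / evals_per_second          # scanning the entire keyspace
average_s = worst_case_s / 2                        # the right key is found halfway, on average

print(f"worst case: {worst_case_s:,.0f} s  (~{worst_case_s / 60:.0f} min)")
print(f"average:    {average_s:,.0f} s  (~{average_s / 60:.0f} min)")
```

So under these (generous) assumptions a single machine exhausts the DES keyspace in about 12 minutes, which is why 56-bit keys are considered broken.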
# Dimensional Analysis Worksheet And Answers

Dimensional Analysis Worksheet And Answers: lots of grammar worksheets that cover quite a lot of subjects. Interactive therapy tools are unique and fascinating resources to enhance your therapy practice. Each interactive tool is like a small app that you can use on your computer, phone, or tablet at the click of a button. Try games and illustrated stories for kids, or activities and audio for adults.

In spreadsheet applications like the open-source LibreOffice Calc or Microsoft's Excel, a single document is called a 'workbook' and by default contains three arrays or 'worksheets'. One advantage of such programs is that they can contain formulae, so that if one cell value is changed, the entire document is automatically updated based on those formulae. Worksheet generators are often used to develop the type of worksheets that comprise a set of similar problems.

Past and present guidelines, reports, forms, instructions, worksheets, and other related resources. This interactive worksheet is offered for informational purposes only. The user should independently verify that all entries and calculations generated by the interactive worksheet are correct before relying on its results or filing it with a court.

If income varies a lot from month to month, use an average of the last twelve months, if available, or last year's income tax return. When you load a workbook from a spreadsheet file, it will be loaded with all its existing worksheets. Move on to activities in which students use the primary sources as historical evidence, like on DocsTeach.org.
This coloring math worksheet gives your child practice finding 1 more and 1 less than numbers up to 20. "Reading" pictures #1 and #2: where's the word? In these early reading worksheets, your child draws circles around the word under each picture and then guesses what the word might mean based on the picture.

This article will help you get familiar with the concept of a worksheet and its features. It's simple to add extra flair and personality to your projects with Adobe Spark's unique design assets. Add animated stickers from GIPHY or apply a text animation for short-form graphic videos in a single tap.

The W-4 form allows the employee to select an exemption level to reduce the tax withholding, or to specify an extra amount above the standard one. The form comes with two worksheets: one to calculate exemptions, and another to calculate the effects of other earnings (a second job, or a spouse's job). The bottom number on each worksheet is used to fill out two of the lines on the main W-4 form. The main form is filed with the employer, and the worksheets are discarded or kept by the employee. Many tax forms require advanced calculations and table references to calculate a key value, or may require supplemental information that is only relevant in some circumstances. Rather than incorporating the calculations into the main form, they are often offloaded onto a separate worksheet.

## Dimensional Analysis Worksheet And Answers

Visit the reading comprehension page for a complete collection of fiction passages and nonfiction articles for grades one through six. Enter the cost paid by each parent for work-related child care. If the cost varies, take the total yearly cost and divide by 12. The custodial parent is the parent who has the child more of the time. If each of you has the child 50% of the time, choose one of you to be the custodial parent.
Select Text Area: To select a text area, hold down the or key.
### Ovoid from largest axis

This presentation has been made for you by FRANCISCO GUIJARRO BELDA, from Isaac Albéniz Secondary School, Leganés, Madrid, Spain.

HOW TO DRAW AN OVOID UPON ITS LARGEST AXIS

1. Let's imagine AB is the given axis. You have to divide this segment into six equal parts by means of the Thales theorem (if you don't remember how to work it, go to the 1st Term Theory page on the blog and review everything).
2. Points 2 and 5 are crucial to the process. Keep them and get rid of the rest. Draw a line perpendicular to AB through point 2.
3. Draw a circle (green) with center at point 2 and radius 2A; you will get points C and D. Draw a semicircle (red) with center at point 2 and radius 2B; you will get points E and F.
4. Join points E and F to point 5 and extend these lines until they reach the red semicircle.
5. Draw two arcs centered at points E and F, with radii ED and FC respectively, until they meet the lines you have already drawn (shown in pink in the image). You will get points P and Q.
6. Join points P, Q and B by means of an arc with center at 5 and radius 5P (equal to 5Q and 5B).
7. Get rid of everything you don't need any more, and you are done. Your beautiful egg should look like the one below.
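A quick numeric check (my own coordinates and variable names, with A at the origin and the axis of length L along the x-axis) confirms why step 6 works: the point P produced in step 5 lies at distance L/6 from point 5, exactly the distance 5B, so the closing arc through P, Q and B exists and meets the side arcs tangentially.

```python
import numpy as np

L = 6.0                                   # length of the given axis AB
A  = np.array([0.0, 0.0])
B  = np.array([L, 0.0])
p2 = np.array([2 * L / 6, 0.0])           # division point "2"
p5 = np.array([5 * L / 6, 0.0])           # division point "5"

# Step 3: C, D at distance 2A = L/3 from point 2; E, F at distance 2B = 2L/3
C = p2 + np.array([0.0,  L / 3])
D = p2 + np.array([0.0, -L / 3])
E = p2 + np.array([0.0,  2 * L / 3])
F = p2 + np.array([0.0, -2 * L / 3])

# Step 5: the arc centered at E (radius ED) ends at P, on the line through E and 5;
# by symmetry the arc centered at F (radius FC, the same length) ends at Q.
r_side = np.linalg.norm(E - D)            # equals L; norm(F - C) gives the same value
P = E + r_side * (p5 - E) / np.linalg.norm(p5 - E)

# Step 6: the closing arc is centered at 5 with radius 5P, and it must also pass through B
r_tip_P = np.linalg.norm(P - p5)
r_tip_B = np.linalg.norm(B - p5)
print(r_side, r_tip_P, r_tip_B)           # 5P == 5B == L/6, so the arcs close up
```

The collinearity of E, 5 and P is what makes the side arc and the tip arc tangent at P, which is why step 5 stops the arcs exactly on those lines.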
## Sia weighs 4.5 kg more than Raima. If their total weight is 140 kg, find Sia's weight.

Step-by-step explanation:

Let Raima's weight be x. Therefore Sia's weight = x + 4.5.

The equation is:
(x) + (x + 4.5) = 140
2x + 4.5 = 140
2x = 140 − 4.5
2x = 135.5
x = 67.75

Therefore Sia's weight is 67.75 + 4.5 = 72.25 kg.

Hope it helps.
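The same steps can be verified with two lines of arithmetic (a sketch; the variable names are mine):

```python
# Let Raima's weight be raima; then raima + (raima + 4.5) = 140,
# so raima = (140 - 4.5) / 2 and Sia weighs 4.5 kg more.
total, diff = 140.0, 4.5
raima = (total - diff) / 2
sia = raima + diff
print(raima, sia)   # 67.75 72.25
```

The two weights sum back to 140 kg, confirming the solution.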
# Quantum Phase Estimation: Unlocking Hidden Information in Quantum Systems

H Hannan

Quantum Phase Estimation (QPE) is a pivotal quantum algorithm that unlocks deep insights into quantum systems. It can extract hidden details about quantum states, cementing its importance across applications in quantum computing and quantum physics. At its core, Quantum Phase Estimation uses controlled qubit operations to analyze quantum processes. The phases it determines reveal key properties of a system's dynamics and enable the measurement of eigenvalues, energy levels, simulation timescales, and other fundamental quantum quantities.

## Understanding Quantum Phase

Before learning about Quantum Phase Estimation, it's important to understand the concept of quantum phase. In quantum mechanics, a phase is an attribute of a quantum state that influences the behaviour of particles and waves: it determines the position of a particle on a wave and its behaviour in an interference pattern. Think of a quantum state like a spinning top: its phase is akin to the angle at which the top is tilted. This phase information is a fundamental aspect of quantum systems, and harnessing it can provide valuable insights.

## The Significance of Quantum Phase Estimation

Quantum Phase Estimation is a quantum algorithm designed to extract the phase information from a quantum state. The algorithm is significant because it enables tasks that would be infeasible or extremely time-consuming on classical computers. Among its various applications, QPE's most notable role is in quantum computing, where it is a crucial subroutine in several algorithms, including Shor's algorithm for factoring large numbers and for solving discrete logarithm problems, both of which have significant implications for cryptography.
## The Quantum Phase Estimation Algorithm

At its core, Quantum Phase Estimation is a quantum subroutine that takes advantage of quantum parallelism. It uses a quantum computer to estimate the phase of a quantum state encoded in a unitary operator. The algorithm involves two main stages: preparing the input state and applying the Quantum Phase Estimation procedure.

1. Preparing the Input State: To estimate the phase associated with a given quantum state |ψ⟩, the algorithm prepares an auxiliary register that interacts with the unitary operator. The input state is typically chosen to be an eigenstate of the operator, making it easier to extract the phase information.
2. Applying Controlled Operations: The core of the algorithm is the controlled application of the unitary operator to the input state. This is done using a series of controlled operations that introduce controlled phase shifts based on the eigenvalues of the unitary operator. By applying the controlled operations repeatedly, the algorithm effectively amplifies the phase information of the eigenstate.
3. Quantum Fourier Transform: After applying the controlled operations, the algorithm performs an inverse Quantum Fourier Transform on the auxiliary qubits. This step extracts the phase information and encodes it into the computational-basis amplitudes of the auxiliary qubits.
4. Measurement: Finally, the auxiliary qubits are measured, yielding a binary representation of the estimated phase. This binary fraction can then be converted to a decimal value that approximates the original phase.

## Quantum Phase Estimation in Action

Imagine you have a quantum state that encodes the factors of a large number in a quantum computer. By applying Quantum Phase Estimation, you could estimate the phase corresponding to the factors, revealing valuable information about the original number's properties.
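The four steps can be mimicked numerically for the simplest case: a single-qubit unitary with eigenvalue e^(2πi·φ) applied to its own eigenstate. Everything below (the function name, the 8-qubit counting register) is illustrative rather than from the article, and numpy's FFT stands in for the inverse Quantum Fourier Transform on the counting register:

```python
import numpy as np

def phase_estimation(phi, n_count=8):
    """Simulate textbook QPE for a unitary with eigenvalue e^(2*pi*i*phi),
    using an n_count-qubit counting register. Returns the estimated phase."""
    N = 2 ** n_count
    # Steps 1-2: with the eigenstate prepared, the controlled-U^(2^j) stage
    # leaves the counting register in (1/sqrt(N)) * sum_k e^(2*pi*i*phi*k) |k>
    amps = np.exp(2j * np.pi * phi * np.arange(N)) / np.sqrt(N)
    # Step 3: the inverse QFT concentrates the amplitude near k ~ phi * N
    # (with numpy's sign convention, np.fft.fft implements exactly this map)
    state = np.fft.fft(amps) / np.sqrt(N)
    # Step 4: "measure" by taking the most probable computational-basis outcome
    probs = np.abs(state) ** 2
    k = int(np.argmax(probs))
    return k / N

est = phase_estimation(0.328125)  # 84/256, exactly representable: est == 0.328125
```

With 8 counting qubits the answer is exact whenever φ is a multiple of 1/256, and otherwise lands on the nearest 8-bit fraction, which is the precision limit discussed under Challenges below.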
This phase-readout capability is what makes QPE a vital component of Shor's algorithm for factoring large numbers, a process that has implications for breaking classical encryption methods.

## Challenges and Future Directions

While Quantum Phase Estimation is a powerful tool, it is not without challenges. The algorithm's success hinges on the availability of a suitable eigenstate of the unitary operator, which may not always be straightforward to find. In addition, the algorithm's precision is limited by the number of qubits in the auxiliary register and the number of controlled operations applied.

The future of Quantum Phase Estimation is closely tied to the advancement of quantum hardware and error-correction techniques. As quantum computers become more robust and capable of handling larger and more complex calculations, the potential applications of QPE will continue to grow.

## Conclusion

Quantum Phase Estimation offers a glimpse into the hidden intricacies of quantum systems by calculating their phase properties. This algorithm is a key that unlocks quantum advantages across applications. As quantum technologies mature, Quantum Phase Estimation's usefulness will only increase, and its role in both theoretical quantum science and practical quantum computing will likely expand. By estimating quantum phases, this algorithm promises breakthroughs across the quantum landscape.
# Permutation and combination

Asked by NikhilD | 14 Apr, 2010, 11:28 AM

As we can see, 8 are unmarried people and 2 are married to each other, and since the married couple refuses to attend separately, the total number of invitees to choose from is effectively 9.

If the couple is excluded, she has to select 5 people from 8, so 8C5 ways.
If the couple is included, she has to select 3 more people from 8, so 8C3 ways.

Hence the total number of ways = 8C5 + 8C3 = 56 + 56 = 112.

Regards, Team TopperLearning.

Answered on 14 Apr, 2010, 04:26 PM
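The two binomial coefficients in the answer are easy to double-check (a quick sketch; `math.comb` is the standard-library binomial coefficient):

```python
from math import comb

# The two disjoint cases from the solution: couple excluded vs. couple included.
excluded = comb(8, 5)   # choose all 5 guests from the 8 single friends
included = comb(8, 3)   # the couple fills 2 places; choose 3 more from 8
total = excluded + included
print(excluded, included, total)   # 56 56 112
```

The cases are disjoint (the couple is either in or out), so the counts simply add.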
Community Profile

# pawan singh

Last seen: about a month ago. 56 total contributions since 2017. B.Tech ECE (2015).

Solved problems (all about 3 years ago):

- Replacing a row: For matrix G = [1 2 3; 4 5 6; 7 8 9], replace the 2nd row with 8s. Remember to create matrix G.
- Who is the smartest MATLAB programmer?: Examples: Input x = 'Is it Obama?' Output = 'Me!' Input x = 'Who ?' Ou...
- Triangle sequence: A sequence of triangles is constructed in the following way: 1) the first triangle is Pythagoras' 3-4-5 triangle 2) the s...
- Find a Pythagorean triple: Given four different positive numbers, a, b, c and d, provided in increasing order: a < b < c < d, find if any three of them com...
- Side of a rhombus: If a rhombus has diagonals of length x and x+1, then what is the length of its side, y?
- Is this triangle right-angled?: Given any three positive numbers a, b, c, return true if the triangle with sides a, b and c is right-angled. Otherwise, return f...
- Length of a short side: Calculate the length of the short side, a, of a right-angled triangle with hypotenuse of length c, and other short side of lengt...
- Dimensions of a rectangle: The longer side of a rectangle is three times the length of the shorter side. If the length of the diagonal is x, find the width...
- Side of an equilateral triangle: If an equilateral triangle has area A, then what is the length of each of its sides, x?
- Length of the hypotenuse: Given short sides of lengths a and b, calculate the length c of the hypotenuse of the right-angled triangle.
- Rotate a Matrix: Input a matrix x; output y is the matrix rotating x 90 degrees clockwise.
- Sum the numbers on the main diagonal: Sum the numbers on the main diagonal of an n-by-n matrix. For input: A = [1 2 4; 3 6 2; 2 4 7]...
- Evaluating a polynomial: Given the following polynomial and the value for x, determine y. y = 3x^5 - x^3 + 8x - 3. Example: x = 1, y = 3 - 1 + ...
- y equals x divided by 2: function y = x/2
- Area of a Square: Inside a square is a circle with radius r. What is the area of the square?
- 02 - Vector Variables 2: Make the following variable.
- Factorial Numbers: Factorial is multiplication of integers. So factorial of 6 is 720 = 1 * 2 * 3 * 4 * 5 * 6...
- Create times-tables: At one time or another, we all had to memorize boring times tables. 5 times 5 is 25. 5 times 6 is 30. 12 times 12 is way more th...
- Matlab Basics II - Velocity of a particle: A particle is moving in space, such that its velocity is given by the stated expression; write a...
- Create a Matrix of Zeros: Given an input x, create a square matrix y of zeros with x rows and x columns.
- Find out sum of all elements of given Matrix: A = [1 2 3; 4 5 6; 7 8 9]; answer must be: 45.
- Matlab Basics - Rounding I: Write a script to round x DOWN to the next lowest integer: e.g. x = 2.3 --> x = 2; also x = 2.7 --> x = 2.
- Given a and b, return the sum a+b in c.
- Add two numbers (For beginners)
- Find the hypotenuse: Given a and b (the two sides of a right triangle), find c, the hypotenuse.
- Area of an equilateral triangle: Calculate the area of an equilateral triangle of side x.
- Perfect Square or not: Find whether the given input x is a perfect square; if yes then output y = 1, else y = 0.
- Who has power to do everything in this world?: There is only one person who is older than this universe. He is the Indian version of Chuck Norris.
- Tell me the slope: Tell me the slope, given a vector with horizontal run first and vertical rise next. Example input: x = [10 2];
- Create a vector: Create a vector from 0 to n by intervals of 2.
1,405
4,750
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.09375
3
CC-MAIN-2020-50
latest
en
0.716717
http://www.nelson.com/mathfocus/grade3/quizzes/ch02/ch02_5.htm
1,582,923,918,000,000,000
text/html
crawl-data/CC-MAIN-2020-10/segments/1581875147647.2/warc/CC-MAIN-20200228200903-20200228230903-00063.warc.gz
209,556,460
6,269
Name:    Lesson 5 - Using Number Lines 1. Which number can be placed on the number line at the arrow? a. 138 b. 134 c. 126 d. 141 2. Which number can be placed on the number line at the arrow? a. 111 b. 124 c. 118 d. 108 3. Which number can be placed on the number line at the arrow? a. 291 b. 287 c. 297 d. 284 4. Which number can be placed on the number line at the arrow? a. 265 b. 273 c. 262 d. 267 5. Where can the number 462 be placed on the number line? a. between 460 and 480 b. between 440 and 460 c. between 480 and 500 6. Where can the number 376 be placed on the number line? a. between 350 and 360 b. between 380 and 390 c. between 370 and 380 7. Where can the number 208 be placed on the number line? a. between 200 and 210 b. between 190 and 200 c. between 210 and 220 8. Where can the number 233 be placed on the number line? a. between 220 and 230 b. between 230 and 240 c. between 240 and 250 9. Which number is between 350 and 370? a. 364 b. 341 c. 382 10. Which number is between 620 and 650? a. 619 b. 657 c. 648
349
1,071
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.265625
3
CC-MAIN-2020-10
latest
en
0.827085
https://mathhelpboards.com/threads/continuity-question.7120/
1,600,602,313,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600400197946.27/warc/CC-MAIN-20200920094130-20200920124130-00144.warc.gz
528,788,263
15,720
# Continuity Question #### ryo0071 ##### New member Okay so the question is: Let $$\displaystyle f:R^2 \rightarrow R$$ by $$\displaystyle f(x) = \frac{x_1^2x_2}{x_1^4+x_2^2}$$ for $$\displaystyle x \not= 0$$ Prove that for each $$\displaystyle x \in R$$, $$\displaystyle f(tx)$$ is a continuous function of $$\displaystyle t \in R$$ ($$\displaystyle R$$ is the real numbers, I'm not sure how to get it to look right). I am letting $$\displaystyle t_0 \in R$$ and $$\displaystyle \epsilon > 0$$ then trying to find a $$\displaystyle \delta > 0$$ so $$\displaystyle |f(t) - f(t_0)| < \epsilon$$ whenever $$\displaystyle |t - t_0| < \delta$$ I am stuck trying to find the delta that will work; in trying to find it I am unable to simplify out $$\displaystyle |t - t_0|$$ to use. Am I missing something really obvious here? Any help appreciated. #### Fantini ##### "Read Euler, read Euler." - Laplace MHB Math Helper Perhaps you mean that for each $x \in \mathbb{R}^2, x \neq 0 \in \mathbb{R}^2$ and $t \in \mathbb{R}$ the function $f(tx)$ is continuous? Because we have $$f(tx) = f(tx_1, tx_2) = \frac{(tx_1)^2 (tx_2)}{(tx_1)^4 + (tx_2)^2} = \frac{t^3 x_1^2 x_2}{t^4 x_1^4 + t^2 x_2^2} = \frac{t^3 x_1^2 x_2}{t^2 (t^2 x_1^4 + x_2^2)} = \frac{tx_1^2 x_2}{t^2 x_1^4 + x_2^2}.$$ This function tends to zero as $t \to 0$ and is continuous everywhere else by noting that it is the result of operations with continuous functions (power, quotient, products and compositions). EDIT: I think this needs a bit more explanation. If $x = (x_1, x_2) \neq 0$ then this means that $x_1 \neq 0$ or $x_2 \neq 0$ (this is a logical 'or', both can be nonzero). If $x_1 =0$ and $x_2 \neq 0$ then we obviously have $f(tx) = 0$ because the expression in the numerator is automatically zero while the denominator is nonzero. The same if the variables switch roles (the first becomes nonzero and the second becomes zero). Therefore the only case left to be discussed is when both are nonzero. Then you have what I just said.
Last edited: #### ryo0071 ##### New member Thank you for your response. I probably should have mentioned I have taken care of the cases where $$\displaystyle x_1 = 0$$ and $$\displaystyle x_2 \not= 0$$ as well as $$\displaystyle x_1 \not= 0$$ and $$\displaystyle x_2 = 0$$. Also, I am aware that it would be continuous since it is the result of operations on continuous functions, but I am trying to prove it using the epsilon-delta definition of the limit (by actually finding a delta that will work for an arbitrary epsilon, which is where I am getting stuck). #### Fantini ##### "Read Euler, read Euler." - Laplace MHB Math Helper I don't think you will manage to do it with the epsilon-delta definition. This function is the usual counterexample that you can have a function continuous at the origin along every line through it but actually discontinuous there: just consider the case where $x_2 = x_1^2$. In fact, you probably forgot to mention the definition at $x = 0$, else it is automatically continuous where it is defined. If you manage to show this by an epsilon-delta proof it would mean that it is continuous at the origin, which it is not.
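Fantini's counterexample can be made concrete with a few lines of code. The sketch below (plain Python, added for illustration and not part of the original thread) evaluates f along a straight line through the origin and along the parabola x2 = x1^2:

```python
def f(x1, x2):
    # f(x1, x2) = x1^2 * x2 / (x1^4 + x2^2), with f(0, 0) = 0
    if x1 == 0 and x2 == 0:
        return 0.0
    return (x1 ** 2 * x2) / (x1 ** 4 + x2 ** 2)

# Along any fixed line (t*a, t*b), the values tend to 0 as t -> 0.
a, b = 3.0, 5.0
line_vals = [f(t * a, t * b) for t in (0.1, 0.01, 0.001)]

# Along the parabola x2 = x1^2, the value is identically 1/2.
parab_vals = [f(t, t * t) for t in (0.1, 0.01, 0.001)]
```

Along every line the values shrink toward 0, while along the parabola they stay at 1/2, which is exactly why no epsilon-delta proof of continuity at the origin can succeed for the two-variable function.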
966
3,167
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.0625
4
CC-MAIN-2020-40
longest
en
0.792666
https://www.mrexcel.com/board/threads/help-with-a-date-range-function.123066/
1,670,460,347,000,000,000
text/html
crawl-data/CC-MAIN-2022-49/segments/1669446711221.94/warc/CC-MAIN-20221207221727-20221208011727-00323.warc.gz
940,221,075
17,285
# Help with a date range function... #### rizco ##### Board Regular Test.xls, Sheet1, row 1: A1 = Dates; B1:H1 = 1/12/2005, 1/27/2005, 2/1/2005, 2/15/2005, 2/28/2005, 3/12/2005, 3/27/2005. In the example above I have a date range. Let's say I am looking to display the second smallest date in February 2005 (2/15/2005). If all the dates were in Feb, I would just do the formula [=SMALL(D1:F1,2)] but they are not. I would like to enter a date in, say, cell Z1 that says 2/1/2005. I would like cell A2 to return a result that goes something like this: =SMALL(B1:H1),2 where Month and Year = Z1, returning the second smallest date during the month and year specified in cell Z1. #### IML ##### MrExcel MVP For non year specific you could use =MIN(IF(MONTH(A1:G1)=2,A1:G1)) (where 2 represents February) For year specific, try =MIN(IF((A1:G1>=--"2-1-05")*(A1:G1<=--("2-28-05")),A1:G1)) Both formulas require array (control shift enter) confirmation #### rizco ##### Board Regular Wouldn't that return the smallest date in February (2/1/2005) instead of the second smallest date (2/15/2005)? #### IML ##### MrExcel MVP Yes. I misread it. Let me play a bit more. #### rizco ##### Board Regular Hmmmm. This seemed to work. =SMALL(IF(MONTH(B1:H1)=2,B1:H1),2) Of course, it requires the (control shift enter) confirmation #### IML ##### MrExcel MVP Yes, but it fails if you only have 1 entry for that month.
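The array formula =SMALL(IF(MONTH(B1:H1)=2,B1:H1),2) that resolved the thread has a direct translation into ordinary code. A hedged Python sketch (the helper name kth_smallest_in_month is my own, not from the thread):

```python
from datetime import date

def kth_smallest_in_month(dates, year, month, k):
    # Mirror of SMALL(IF(MONTH(range)=m, range), k): keep only the dates
    # in the requested month/year, sort them, and take the k-th smallest.
    in_month = sorted(d for d in dates if d.year == year and d.month == month)
    if len(in_month) < k:
        return None  # like the formula failing with fewer than k entries
    return in_month[k - 1]

dates = [date(2005, 1, 12), date(2005, 1, 27), date(2005, 2, 1),
         date(2005, 2, 15), date(2005, 2, 28), date(2005, 3, 12),
         date(2005, 3, 27)]
second_feb = kth_smallest_in_month(dates, 2005, 2, 2)  # date(2005, 2, 15)
```

Unlike the spreadsheet formula, the guard on len(in_month) also handles IML's closing caveat about months with too few entries.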
778
2,603
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2022-49
latest
en
0.836762
http://nbviewer.jupyter.org/github/jckantor/Rainy-Lake-Hydrology/blob/master/notebooks/Namakan_and_Rainy_Lake_Water_Levels_1970-2010.ipynb
1,524,700,483,000,000,000
text/html
crawl-data/CC-MAIN-2018-17/segments/1524125948029.93/warc/CC-MAIN-20180425232612-20180426012612-00437.warc.gz
209,964,219
722,713
# Namakan and Rainy Lake Water Levels 1970-2014¶ Rainy Lake is a relatively large lake on the Minnesota/Ontario border that makes up a portion of Voyageurs National Park. In recent years Rainy Lake has experienced periods of high water levels and flooding. The purpose of the notes and calculations here is to quantify those episodes using publicly available data sets, explore possible reasons for these events, and examine what can be done to reduce the risk of future high water and flooding events. ### Initialization¶ Initialization of the graphics system and computational modules used in this IPython notebook. In [1]: # Display graphics inline with the notebook %matplotlib inline # Standard Python modules import numpy as np import matplotlib.pyplot as plt import pandas as pd import os import datetime # Modules to display images and data tables from IPython.display import Image from IPython.core.display import display import seaborn as sns sns.set_context('talk') # Data directory dir = '../data/' img = '../images/' ## Namakan Lake and Rainy Lake Levels¶ In [71]: plt.figure(figsize=(12,6)) RL.plot(lw=1.5) NL.plot(lw=1.5) ax1 = plt.gca() x1,x2 = ax1.get_xlim() y1,y2 = ax1.get_ylim() def rcPlot(yr,mo,dy): t = datetime.datetime(yr,mo,dy).toordinal() - datetime.datetime(1969,12,31).toordinal() plt.plot([t,t],[336,y2],'r--',lw=1.5) plt.text(t,335.7,'{0:4d}'.format(yr),ha='center',va='top',size=14) rcPlot(1940,10,3) rcPlot(1949,6,8) rcPlot(1957,10,1) rcPlot(1970,7,29) rcPlot(2000,1,5) plt.legend([RL.name,NL.name]); plt.title('Lake Levels' + ' ' + str(RL.dropna().index[0].year) + '-' + str(RL.dropna().index[-1].year)) plt.ylabel('meters') m2f = 3.28084 ax2 = plt.twinx() ax2.set_xlim(x1,x2) ax2.set_yticks([m2f*y for y in ax1.get_yticks()]) ax2.set_ylim(m2f*y1,m2f*y2) ax2.set_ylabel('feet') ax2.grid(None) fname = img + 'RainyNamakanLakeLevels.png' plt.savefig(fname) !convert $fname -trim $fname !convert $fname -transparent white $fname ## Rule Curve Performance 1970-1999¶ Load the rule
curves from another notebook. In [72]: NL1970 = pd.read_pickle(dir+'NL1970.pkl') plt.figure(figsize=(10,7)) NL1970['LRC'].plot(color='y') NL1970['URC'].plot(color='y') RL1970['LRC'].plot(color='y') RL1970['URC'].plot(color='y') plt.fill_between(NL1970.index, NL1970['LRC'].tolist(), NL1970['URC'].tolist(), color='y',alpha=1) plt.fill_between(NL1970.index, NL1970['LRC'].tolist(), NL1970['URC'].tolist(), color='y', alpha=1) plt.fill_between(RL1970.index, RL1970['LRC'].tolist(), RL1970['URC'].tolist(), color='y', alpha=1) plt.ylabel('Lake Level [meters]') plt.title('Rule Curves for Namakan and Rainy Lakes: 1971-1999') e = pd.Series([]) for (yr,r) in RL['1971':'1999'].groupby(RL['1971':'1999'].index.year): shift = datetime.datetime(2014,1,1) - datetime.datetime(yr,1,1) r = r.tshift(shift.days) r.plot() e = e.append((r - RL1970['URC']).tshift(-shift.days)) for (yr,r) in NL['1971':'1999'].groupby(NL['1971':'1999'].index.year): shift = datetime.datetime(2014,1,1) - datetime.datetime(yr,1,1) r.tshift(shift.days).plot() import matplotlib.dates as mdates plt.gca().xaxis.set_major_locator(mdates.MonthLocator()) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b')) fname = img + 'RuleCurvePerformance1970-1999.png' plt.savefig(fname) !convert $fname -trim$fname In [73]: NL2000 = pd.read_pickle(dir+'NL2000.pkl') plt.figure(figsize=(10,7)) NL2000['LRC'].plot(color='b') NL2000['URC'].plot(color='b') RL2000['LRC'].plot(color='b') RL2000['URC'].plot(color='b') plt.fill_between(NL2000.index, NL2000['LRC'].tolist(), NL2000['URC'].tolist(), color='b', alpha=1) plt.fill_between(RL2000.index, RL2000['LRC'].tolist(), RL2000['URC'].tolist(), color='b', alpha=1) plt.ylabel('Lake Level [meters]') plt.title('Rule Curves for Namakan and Rainy Lakes: 2000-2010') for (yr,r) in RL['2000':'2010'].groupby(RL['2000':'2010'].index.year): shift = datetime.datetime(2014,1,1) - datetime.datetime(yr,1,1) r = r.tshift(shift.days) r.plot() e = e.append((r - RL2000['URC']).tshift(-shift.days)) for 
(yr,r) in NL['2000':'2010'].groupby(NL['2000':'2010'].index.year): shift = datetime.datetime(2014,1,1) - datetime.datetime(yr,1,1) r.tshift(shift.days).plot() import matplotlib.dates as mdates plt.gca().xaxis.set_major_locator(mdates.MonthLocator()) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b')) fname = img + 'RuleCurvePerformance2000-2010.png' plt.savefig(fname) !convert $fname -trim$fname !convert $fname -transparent white$fname ## Frequency and Distribution of High Water Events¶ In [75]: p1970 = RL['1971':'1999'].index pA = p1970[(p1970.month >= 5) & (p1970.month <= 9)] p2000 = RL['2000':'2010'].index pB = p2000[(p2000.month >= 5) & (p2000.month <= 9)] def highEvents(str,S,h): print("\n{:s}".format(str),end="") print("exceeded {:d} days out of {:d} days.".format(S[S>h].count(),len(S.index))) print("{:>20s}: {:6.2f}%".format("Frequency", 100.0*float(S[S>h].count())/len(S.index))) print("{:>20s}: {:7.3f} meters".format("Median Value", (S[S>h]-h).median())) print("{:>20s}: {:7.3f} meters".format("95th Percentile", (S[S>h]-h).quantile(0.95))) r = pd.Series([]) r['Days in Period'] = len(S.index) r['Days exceeded'] = S[S>h].count() r['Frequency (%)'] = 100.0*float(S[S>h].count())/len(S.index) r['Median'] = (S[S>h]-h).median() r['95th Percentile'] = (S[S>h]-h).quantile(0.95) return r def highWater(per): print("\n\nPeriod: ", per[0].date().strftime("%Y/%b/%d"), "-", per[-1].date().strftime("%Y/%b/%d")) r = highEvents("Rule Curve",e[per],0.0) r = highEvents("Emergency High Water",RL[per],337.75) r = highEvents("All Gates Open",RL[per],337.90) highWater(pA) highWater(pB) Period: 1971/May/01 - 1999/Sep/30 Rule Curveexceeded 661 days out of 4437 days. Frequency: 14.90% Median Value: 0.068 meters 95th Percentile: 0.383 meters Emergency High Waterexceeded 346 days out of 4437 days. Frequency: 7.80% Median Value: 0.048 meters 95th Percentile: 0.356 meters All Gates Openexceeded 84 days out of 4437 days. 
Frequency: 1.89% Median Value: 0.124 meters 95th Percentile: 0.288 meters Period: 2000/May/01 - 2010/Sep/30 Rule Curveexceeded 303 days out of 1683 days. Frequency: 18.00% Median Value: 0.230 meters 95th Percentile: 0.704 meters Emergency High Waterexceeded 235 days out of 1683 days. Frequency: 13.96% Median Value: 0.187 meters 95th Percentile: 0.710 meters All Gates Openexceeded 147 days out of 1683 days. Frequency: 8.73% Median Value: 0.171 meters 95th Percentile: 0.621 meters In [76]: plt.subplot(2,1,1) plt.hist(e[pA],bins=np.arange(0,1.0,.02),color='y',alpha=0.5,normed=True) plt.xlim(0,1.0) plt.ylim(0,12) plt.title("{0} - {1}".format(pA[0].date().strftime("%Y/%b/%d"),pA[-1].date().strftime("%Y/%b/%d"))) plt.subplot(2,1,2) plt.hist(e[pB],bins=np.arange(0,1.0,.02),color='b',alpha=0.5,normed=True) plt.xlim(0,1.0) plt.ylim(0,12) plt.title("{0} - {1}".format(pB[0].date().strftime("%Y/%b/%d"),pB[-1].date().strftime("%Y/%b/%d"))) plt.tight_layout() In [79]: plt.figure(figsize=(9,5)) pd.Series(e[pA]).hist(alpha=0.4,bins=np.arange(0,1,.05),color='y',normed=True) pd.Series(e[pB]).hist(alpha=0.4,bins=np.arange(0,1,.05),color='b',normed=True) Out[79]: <matplotlib.axes._subplots.AxesSubplot at 0x118ceaf28> ## Stage Frequency for Rainy Lake Levels¶ In [80]: xlo = 336.3 xhi = 338.6 dx = 0.02 plt.figure(figsize=(10,4)) ts = RL['1971':'1999'] ts.hist(bins=np.arange(xlo,xhi+dx,dx)) ax = plt.axis() plt.axis([xlo,xhi,ax[2],ax[3]]) plt.title('Distribution of Rainy Lake Levels, 1971-1999') fname = img + 'RainyLakeLevelDistribution1970.png' plt.savefig(fname) !convert $fname -trim$fname !convert $fname -transparent white$fname plt.figure(figsize=(12,4)) ts = RL['2000':'2012'] ts.hist(bins=np.arange(xlo,xhi+dx,dx)) ax = plt.axis() plt.axis([xlo,xhi,ax[2],ax[3]]) plt.xlabel('Lake Level [meters]') plt.title('Distribution of Rainy Lake Levels, 2000-2012'); fname = img + 'RainyLakeLevelDistribution2000.png' plt.savefig(fname) !convert $fname -trim$fname In [83]: plt.figure(figsize=(10,4)) 
RL['1970':'1999'].hist(cumulative=True, bins=np.arange(xlo,xhi+dx,dx), alpha=0.4, color = 'b', normed=True) RL['2000':'2012'].hist(cumulative=True, bins=np.arange(xlo,xhi+dx,dx), alpha=0.4, color = 'y', normed=True) plt.axis([xlo,xhi,0,1.1]) plt.xlabel('Lake Level [meters]') plt.legend(['1970-1999','2000-2012'],loc = 'upper left') plt.title('Cumulative Distribution of Daily Levels for Rainy Lake'); In [84]: tsa = RL['1970':'1999'] tsb = RL['2000':'2012'] mostr = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'] plt.figure(figsize=(10,8)) for mo in range(1,13): plt.subplot(4,3,mo) tsa[tsa.index.month == mo].hist(cumulative=True, normed=True, bins=np.arange(xlo,xhi+3*dx,3*dx), color = 'b', alpha = 0.4) tsb[tsb.index.month == mo].hist(cumulative=True, normed=True, bins=np.arange(xlo,xhi+3*dx,3*dx), color = 'y', alpha = 0.4) plt.legend(['1970-1999','2000-2012'],loc='upper left') plt.axis([xlo,xhi,0,1.05]) plt.title(mostr[mo-1]) plt.tight_layout() In [85]: plt.figure() hist,bins = np.histogram([t for t in RL['1970':'1999'] if pd.notnull(t)],bins=100) hist = np.cumsum(hist) plt.semilogx([1-float(h)/hist.max() for h in hist],bins[:-1],color='b') hist,bins = np.histogram([t for t in RL['2000':'2012'] if pd.notnull(t)],bins=100) hist = np.cumsum(hist) plt.semilogx([1-float(h)/hist.max() for h in hist],bins[:-1],color='r') bins.size hist.size Out[85]: 100 ## Rule Curve Performance 2000-2014¶ Load the rule curves from another notebook. 
In [86]: NL2000 = pd.read_pickle(dir+'NL2000.pkl') plt.figure(figsize=(12,7)) NL2000['LRC'].plot(color='y') NL2000['URC'].plot(color='y') RL2000['LRC'].plot(color='y') RL2000['URC'].plot(color='y') NL2000['ELW'].plot(color='b',alpha=0.5) NL2000['EDL'].plot(color='b',alpha=0.5) NL2000['EHW'].plot(color='b',alpha=0.5) NL2000['AGO'].plot(color='b',alpha=0.5) RL2000['ELW'].plot(color='r',alpha=0.5) RL2000['EDL'].plot(color='r',alpha=0.5) RL2000['EHW'].plot(color='r',alpha=0.5) RL2000['AGO'].plot(color='r',alpha=0.5) plt.fill_between(NL2000.index, NL2000['LRC'].tolist(), NL2000['URC'].tolist(), color='b', alpha=0.5) plt.fill_between(RL2000.index, RL2000['LRC'].tolist(), RL2000['URC'].tolist(), color='r', alpha=0.5) plt.ylabel('Lake Level [meters]') plt.title('Rule Curves for Namakan and Rainy Lakes: ' + '2000 - ' + str(RL.dropna().index[-1].year)) e = pd.Series([]) for (yr,r) in RL['2000':].groupby(RL['2000':].index.year): shift = datetime.datetime(2014,1,1) - datetime.datetime(yr,1,1) r = r.tshift(shift.days) r.plot() e = e.append((r - RL2000['URC']).tshift(-shift.days)) for (yr,r) in NL['2000':].groupby(NL['2000':].index.year): shift = datetime.datetime(2014,1,1) - datetime.datetime(yr,1,1) r.tshift(shift.days).plot() plt.ylim(336,341.5) import matplotlib.dates as mdates plt.gca().xaxis.set_major_locator(mdates.MonthLocator()) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%b')) fname = img + 'RuleCurvePerformance2000-2014.png' plt.savefig(fname) !convert $fname -trim$fname
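The highEvents summary in the notebook above reduces to counting days on which a series exceeds a threshold and summarizing the excess. A stdlib-only sketch of that core calculation (illustrative levels, not the actual lake data):

```python
def exceedance_summary(levels, threshold):
    # Count how often a series exceeds a threshold and by how much,
    # mirroring the notebook's highEvents frequency/median logic.
    over = sorted(x - threshold for x in levels if x > threshold)
    freq = 100.0 * len(over) / len(levels)
    median = over[len(over) // 2] if over else None
    return {"days": len(over), "frequency_pct": freq, "median_excess": median}

# Made-up daily levels around the 337.75 m emergency high-water mark.
levels = [337.6, 337.8, 338.0, 337.7, 337.9, 338.1]
summary = exceedance_summary(levels, 337.75)
```

The notebook computes the same three quantities (days exceeded, frequency, median excess) per period with pandas; this shows the arithmetic without the time-series machinery.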
3,788
11,066
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.75
3
CC-MAIN-2018-17
longest
en
0.502071
https://www.systutorials.com/docs/linux/man/3-slaqge/
1,686,386,753,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224657144.94/warc/CC-MAIN-20230610062920-20230610092920-00663.warc.gz
1,143,122,911
8,936
# slaqge (3) - Linux Manuals slaqge.f - ## SYNOPSIS ### Functions/Subroutines subroutine slaqge (M, N, A, LDA, R, C, ROWCND, COLCND, AMAX, EQUED) SLAQGE scales a general rectangular matrix, using row and column scaling factors computed by sgeequ. ## Function/Subroutine Documentation ### subroutine slaqge (integerM, integerN, real, dimension( lda, * )A, integerLDA, real, dimension( * )R, real, dimension( * )C, realROWCND, realCOLCND, realAMAX, characterEQUED) SLAQGE scales a general rectangular matrix, using row and column scaling factors computed by sgeequ. Purpose: ``` SLAQGE equilibrates a general M by N matrix A using the row and column scaling factors in the vectors R and C. ``` Parameters: M ``` M is INTEGER The number of rows of the matrix A. M >= 0. ``` N ``` N is INTEGER The number of columns of the matrix A. N >= 0. ``` A ``` A is REAL array, dimension (LDA,N) On entry, the M by N matrix A. On exit, the equilibrated matrix. See EQUED for the form of the equilibrated matrix. ``` LDA ``` LDA is INTEGER The leading dimension of the array A. LDA >= max(M,1). ``` R ``` R is REAL array, dimension (M) The row scale factors for A. ``` C ``` C is REAL array, dimension (N) The column scale factors for A. ``` ROWCND ``` ROWCND is REAL Ratio of the smallest R(i) to the largest R(i). ``` COLCND ``` COLCND is REAL Ratio of the smallest C(i) to the largest C(i). ``` AMAX ``` AMAX is REAL Absolute value of largest matrix entry. ``` EQUED ``` EQUED is CHARACTER*1 Specifies the form of equilibration that was done. = 'N': No equilibration = 'R': Row equilibration, i.e., A has been premultiplied by diag(R). = 'C': Column equilibration, i.e., A has been postmultiplied by diag(C). = 'B': Both row and column equilibration, i.e., A has been replaced by diag(R) * A * diag(C). ``` Internal Parameters: ``` THRESH is a threshold value used to decide if row or column scaling should be done based on the ratio of the row or column scaling factors. 
If ROWCND < THRESH, row scaling is done, and if COLCND < THRESH, column scaling is done. LARGE and SMALL are threshold values used to decide if row scaling should be done based on the absolute size of the largest matrix element. If AMAX > LARGE or AMAX < SMALL, row scaling is done. ``` Author: Univ. of Tennessee Univ. of California Berkeley Univ. of Colorado Denver NAG Ltd. Date: September 2012 Definition at line 142 of file slaqge.f. ## Author Generated automatically by Doxygen for LAPACK from the source code.
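For the EQUED = 'B' case described above, the equilibrated matrix is diag(R) * A * diag(C). A tiny plain-Python illustration of that scaling (a sketch of the arithmetic only, not the Fortran routine, with made-up scale factors rather than ones computed by SGEEQU):

```python
def equilibrate(A, R, C):
    # Apply row scales R and column scales C: B[i][j] = R[i] * A[i][j] * C[j],
    # i.e. B = diag(R) * A * diag(C), as in the EQUED = 'B' case.
    return [[R[i] * A[i][j] * C[j] for j in range(len(A[0]))]
            for i in range(len(A))]

A = [[4.0, 2.0], [1.0, 8.0]]
R = [0.25, 0.5]   # row scale factors (stand-ins for SGEEQU output)
C = [1.0, 0.5]    # column scale factors
B = equilibrate(A, R, C)  # [[1.0, 0.25], [0.5, 2.0]]
```

EQUED = 'R' corresponds to calling this with C all ones, and EQUED = 'C' with R all ones.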
746
2,620
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.9375
3
CC-MAIN-2023-23
longest
en
0.803921
http://sosmath.com/CBB/viewtopic.php?f=3&t=57561&view=previous
1,369,533,212,000,000,000
text/html
crawl-data/CC-MAIN-2013-20/segments/1368706484194/warc/CC-MAIN-20130516121444-00023-ip-10-60-113-184.ec2.internal.warc.gz
255,121,163
7,293
# S.O.S. Mathematics CyberBoard Your Resource for mathematics help on the web! It is currently Sun, 26 May 2013 02:53:41 UTC All times are UTC [ DST ] Page 1 of 1 [ 5 posts ] Author Message Post subject: two curves touchPosted: Thu, 8 Mar 2012 18:45:46 UTC Member of the 'S.O.S. Math' Hall of Fame Joined: Fri, 28 Dec 2007 12:01:53 UTC Posts: 1263 When do two curves touch each other? If their slopes are equal at their point of intersection, is that sufficient? _________________ There is no god in this world except PARENTS and i have lost ONE Top Post subject: Re: two curves touchPosted: Thu, 8 Mar 2012 21:02:39 UTC Moderator Joined: Wed, 30 Mar 2005 04:25:14 UTC Posts: 12103 Location: Austin, TX mun wrote: When do two curves touch each other? If their slopes are equal at their point of intersection, is that sufficient? Huh? What do you mean by "touch", because usually that means they are just intersecting. If you're demanding tangency, you need to make sure the tangent spaces at the point are the same. _________________ (\ /) (O.o) (> <) This is Bunny. Copy Bunny into your signature to help him on his way to world domination Top Post subject: Re: two curves touchPosted: Thu, 8 Mar 2012 21:19:54 UTC Moderator Joined: Wed, 30 Mar 2005 04:25:14 UTC Posts: 12103 Location: Austin, TX mun wrote: When do two curves touch each other? If their slopes are equal at their point of intersection, is that sufficient? Huh? What do you mean by "touch", because usually that means they are just intersecting. If you're demanding tangency, you need to make sure the tangent spaces at the point are the same. where "the same" means that considering them as naturally embedded in the space where you have your curves are equal. _________________ (\ /) (O.o) (> <) This is Bunny.
Copy Bunny into your signature to help him on his way to world domination Top Post subject: Re: two curves touchPosted: Fri, 9 Mar 2012 08:16:47 UTC Moderator Joined: Mon, 29 Dec 2008 17:49:32 UTC Posts: 6010 Location: 127.0.0.1, ::1 (avatar courtesy of UDN) mun wrote: when did two curves touch each other. if there slopes are equal at their point of intersection then is it sufficient? Huh? What do you mean by "touch", because usually that means they are just intersecting. If you're demanding tangency, you need to make sure the tangent spaces at the point are the same. where "the same" means that considering them as naturally embedded in the space where you have your curves are equal. where we are assuming the curves are differentiable at the common point, blahblahblah. _________________ Top Post subject: Re: two curves touchPosted: Fri, 9 Mar 2012 08:36:13 UTC Moderator Joined: Wed, 30 Mar 2005 04:25:14 UTC Posts: 12103 Location: Austin, TX outermeasure wrote: mun wrote: when did two curves touch each other. if there slopes are equal at their point of intersection then is it sufficient? Huh? What do you mean by "touch", because usually that means they are just intersecting. If you're demanding tangency, you need to make sure the tangent spaces at the point are the same. where "the same" means that considering them as naturally embedded in the space where you have your curves are equal. where we are assuming the curves are differentiable at the common point, blahblahblah. _________________ (\ /) (O.o) (> <) This is Bunny. 
Copy Bunny into your signature to help him on his way to world domination
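The tangency criterion arrived at in this thread — equal values and equal tangent directions at the common point — can be checked numerically for curves given as graphs y = f(x). A sketch (central-difference slopes, my own helper name touches, tolerances chosen arbitrarily):

```python
def touches(f, g, x, h=1e-6, tol=1e-4):
    # Two differentiable curves are tangent at x when f(x) = g(x)
    # and f'(x) = g'(x); slopes are estimated by central differences.
    same_value = abs(f(x) - g(x)) < tol
    df = (f(x + h) - f(x - h)) / (2 * h)
    dg = (g(x + h) - g(x - h)) / (2 * h)
    return same_value and abs(df - dg) < tol

# y = x^2 and its tangent line y = 2x - 1 touch at x = 1;
# y = x merely crosses y = x^2 at x = 1 (equal values, unequal slopes).
tangent = touches(lambda x: x * x, lambda x: 2 * x - 1, 1.0)
crossing = touches(lambda x: x, lambda x: x * x, 1.0)
```

As the moderators note, equal slope at an intersection is the right one-dimensional reading of "same tangent space"; in higher dimensions one compares the full tangent spaces, which this sketch does not attempt.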
1,136
4,566
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2013-20
latest
en
0.934649
http://www.geeksforgeeks.org/write-a-c-program-to-reverse-digits-of-a-number/
1,513,087,301,000,000,000
text/html
crawl-data/CC-MAIN-2017-51/segments/1512948517181.32/warc/CC-MAIN-20171212134318-20171212154318-00711.warc.gz
394,772,402
16,919
# Write a C program to reverse digits of a number Write a C program to reverse digits of an integer. ```Input : num = 12345 Output : 54321 Input : num = 876 Output : 678 ``` ITERATIVE WAY Algorithm: ```Input: num (1) Initialize rev_num = 0 (2) Loop while num > 0 (a) Multiply rev_num by 10 and add remainder of num divided by 10 to rev_num rev_num = rev_num*10 + num%10; (b) Divide num by 10 (3) Return rev_num ``` Example: num = 4562 rev_num = 0 rev_num = rev_num *10 + num%10 = 2 num = num/10 = 456 rev_num = rev_num *10 + num%10 = 20 + 6 = 26 num = num/10 = 45 rev_num = rev_num *10 + num%10 = 260 + 5 = 265 num = num/10 = 4 rev_num = rev_num *10 + num%10 = 2650 + 4 = 2654 num = num/10 = 0 Program: ## C ```#include <stdio.h> /* Iterative function to reverse digits of num*/ int reversDigits(int num) { int rev_num = 0; while(num > 0) { rev_num = rev_num*10 + num%10; num = num/10; } return rev_num; } /*Driver program to test reversDigits*/ int main() { int num = 4562; printf("Reverse of no. is %d", reversDigits(num)); getchar(); return 0; } ``` ## Python ```# Python program to reverse a number n = 4562; rev = 0 while(n > 0): a = n % 10 rev = rev * 10 + a n = n // 10 print(rev) # This code is contributed by Shariq Raza ``` Time Complexity: O(Log(n)) where n is the input number. Output: ```2654 ``` RECURSIVE WAY Thanks to Raj for adding this to the original post. ```#include <stdio.h> /* Recursive function to reverse digits of num*/ int reversDigits(int num) { static int rev_num = 0; static int base_pos = 1; if(num > 0) { reversDigits(num/10); rev_num += (num%10)*base_pos; base_pos *= 10; } return rev_num; } /*Driver program to test reversDigits*/ int main() { int num = 4562; printf("Reverse of no. is %d", reversDigits(num)); getchar(); return 0; } ``` Time Complexity: O(Log(n)) where n is the input number.
See also: Reverse digits of an integer with overflow handled.

Note that the above programs don't consider leading zeroes: for 100, the program will print 1. As an exercise, try extending the functions above so that they also work for floating-point numbers.
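The overflow-handling variant mentioned above is only linked, not shown. As a sketch of the idea (not the linked article's code), here is a Python version that mimics a C `int` by rejecting any result that would exceed the signed 32-bit maximum; returning `-1` on overflow is an assumed convention:

```python
# Sketch: reverse digits while rejecting results that would overflow
# a signed 32-bit int, as a C implementation would need to.
INT_MAX = 2**31 - 1  # assumed 32-bit limit

def reverse_digits_checked(num):
    """Reverse the digits of a non-negative integer.

    Returns -1 if the reversed value would not fit in a signed
    32-bit integer (an illustrative error convention).
    """
    rev_num = 0
    while num > 0:
        digit = num % 10
        # Check *before* multiplying, so the intermediate never overflows.
        if rev_num > (INT_MAX - digit) // 10:
            return -1
        rev_num = rev_num * 10 + digit
        num //= 10
    return rev_num

print(reverse_digits_checked(4562))        # 2654
print(reverse_digits_checked(1463847412))  # 2147483641, still fits
print(reverse_digits_checked(2147483647))  # -1: 7463847412 overflows
```

The guard `rev_num > (INT_MAX - digit) // 10` is the standard trick: it tests whether `rev_num * 10 + digit` would exceed `INT_MAX` without ever computing the too-large value.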
http://mathhelpforum.com/trigonometry/23422-verify-solution-equation.html
# Thread: Verify solution of equation

1. ## Verify solution of equation

HI!

Directions: verify that the x-values are solutions of the equation.

2cos x − 1 = 0

(a.) x = pi/3

(b.) x = 5pi/3

Thanks again!

2. ## Re: Verify solution of equation

Originally Posted by overduex
> Directions: verify that the x-values are solutions of the equation. 2cos x − 1 = 0; (a.) x = pi/3; (b.) x = 5pi/3

First, convert the radians to degrees. Then find the decimal value of cos x. For example, in question (a), cos(pi/3) = cos 60° = 0.5. Then substitute 0.5 for cos x in the equation "2cos x − 1 = 0" and check that it holds: 2(0.5) − 1 = 0.

3. ## Re: Verify solution of equation

Originally Posted by Macleef
> Then, use your calculator to find the decimal values for cos x

I dearly hope that anyone who would be expected to solve such an equation has already memorized $\cos(60^\circ) = \frac{1}{2}$ and does not need a calculator to give them that answer.

-Dan
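The verification in the thread can also be done numerically. A quick check (not part of the original thread) that both given x-values make the left-hand side of 2cos x − 1 = 0 vanish:

```python
import math

# Substitute each candidate solution into 2*cos(x) - 1;
# the residual should be (numerically) zero for a true solution.
for x in (math.pi / 3, 5 * math.pi / 3):
    residual = 2 * math.cos(x) - 1
    print(x, residual)  # residual is ~0 up to floating-point error
    assert abs(residual) < 1e-12
```

This works because cos(5pi/3) = cos(−pi/3) = cos(pi/3) = 1/2.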
https://obliviousfinance.com/category/enterprise-equity-value/
# Enterprise / Equity Value

## What percentage dilution in Equity Value is “too high?”

There’s no strict “rule” here, but most bankers would say that anything over 10% is odd. If your basic Equity Value is \$100 million and the diluted Equity Value is \$115 million, you might want to check your calculations – it’s not necessarily wrong, but over 10% dilution is unusual for most companies.

## Should you use the book value or market value of each item when calculating Enterprise Value?

Technically, you should use market value for everything. In practice, however, you usually use market value only for the Equity Value portion, because it’s almost impossible to establish market values for the rest of the items in the formula – so you just take the numbers from the company’s Balance Sheet.

## Are there any problems with the Enterprise Value formula you just gave me?

Yes – it’s too simple. There are lots of other things you need to add into the formula with real companies:

• Net Operating Losses – Should be valued and arguably added in, similar to cash.
• Long-Term Investments – These should be counted, similar to cash.
• Equity Investments – Any investments in other …

## What’s the difference between Equity Value and Shareholders’ Equity?

Equity Value is the market value and Shareholders’ Equity is the book value. Equity Value can never be negative because shares outstanding and share prices can never be negative, whereas Shareholders’ Equity could be any value. For healthy companies, Equity Value usually far exceeds Shareholders’ Equity.

## A company has 1 million shares outstanding at a value of \$100 per share. It also has \$10 million of convertible bonds, with par value of \$1,000 and a conversion price of \$50. How do I calculate diluted shares outstanding?

This gets confusing because of the different units involved. First, note that these convertible bonds are in-the-money because the company’s share price is \$100, but the conversion price is \$50. So we count them as additional shares rather than debt. Next, we need to divide the value of the convertible bonds – \$10 million – …

## How do you account for convertible bonds in the Enterprise Value formula?

If the convertible bonds are in-the-money, meaning that the conversion price of the bonds is below the current share price, then you count them as additional dilution to the Equity Value; if they’re out-of-the-money then you count the face value of the convertibles as part of the company’s Debt.
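The if-converted arithmetic described above can be sketched in code. This is an illustration of the rule as stated (in-the-money converts become shares, out-of-the-money converts stay debt); the function name and the parameter names are my own:

```python
# Sketch of the if-converted treatment described above.
def diluted_shares(basic_shares, share_price,
                   convert_face_value, par_value, conversion_price):
    if share_price > conversion_price:       # in-the-money: converts to shares
        num_bonds = convert_face_value / par_value
        shares_per_bond = par_value / conversion_price
        return basic_shares + num_bonds * shares_per_bond
    return basic_shares                      # out-of-the-money: stays as debt

# The example from the text: 1,000,000 shares at $100; $10M of convertibles
# with $1,000 par and a $50 conversion price. That is 10,000 bonds, each
# converting into 1,000 / 50 = 20 shares, i.e. 200,000 extra shares.
print(diluted_shares(1_000_000, 100, 10_000_000, 1_000, 50))  # 1200000.0
```

If the share price were instead below \$50, the same call would return the basic 1,000,000 shares and the \$10 million face value would be counted as debt.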
https://web2.0calc.com/questions/suppose-you-have-a-weighted-coin-in-which-heads-comes
# Suppose you have a weighted coin in which heads comes up with probability $\frac34$ and tails with probability $\frac14$. If you flip heads, you win \$2, but if you flip tails, you lose \$1.

Suppose you have a weighted coin in which heads comes up with probability 3/4 and tails with probability 1/4. If you flip heads, you win \$2, but if you flip tails, you lose \$1. What is the expected win of a coin flip in dollars?

Jun 5, 2021

#1

The expected amount of money you will win from this is $\frac{3}{4} \cdot 2 + \frac{1}{4} \cdot (-1) = \boxed{\frac{5}{4}}$, which is the expected win of the coin flip. This is essentially multiplying the probability of each scenario (heads and tails) by the amount of money you would win in that scenario. Even though you will never win exactly $\frac{5}{4}$ dollars on any single coin flip, it is an average of how much you would profit per flip.

Jun 5, 2021
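The expected-value computation in the answer can be reproduced exactly with rational arithmetic (a check, not new material from the thread):

```python
from fractions import Fraction

# P(heads) = 3/4 pays $2; P(tails) = 1/4 loses $1.
outcomes = [(Fraction(3, 4), 2), (Fraction(1, 4), -1)]

# Expected value: sum of probability * payoff over all outcomes.
expected = sum(p * payoff for p, payoff in outcomes)
print(expected)  # 5/4
```

Using `Fraction` avoids any floating-point rounding and confirms the answer is exactly 5/4 dollars.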
http://mathhelpforum.com/calculus/213579-trigonometric-integral-transformation-required.html
# Thread: trigonometric integral: transformation required?

1. ## trigonometric integral: transformation required?

Hi. It's getting quite late at night, but I still have to figure out this integral:

$\int \cos^3 x \, \sin^4 x \, dx$

I believe this is supposed to be solved with integration by parts, but I think there needs to be some kind of regrouping or transformation to make it work. I'm not seeing it. Could somebody help me out?

2. ## Re: trigonometric integral: transformation required?

Originally Posted by infraRed
> I still have to figure out this integral: $\int \cos^3 x \, \sin^4 x \, dx$

Never go for integration by parts when there is a simple substitution.

\begin{align*} \int{\cos^3{(x)}\sin^4{(x)}\,dx} &= \int{\cos{(x)}\cos^2{(x)}\sin^4{(x)}\,dx} \\ &= \int{\cos{(x)}\left[ 1 - \sin^2{(x)} \right] \sin^4{(x)}\,dx} \\ &= \int{\cos{(x)}\left[ \sin^4{(x)} - \sin^6{(x)} \right] dx } \end{align*}

Now make the substitution $u = \sin{(x)} \implies du = \cos{(x)}\,dx$ and the integral becomes

\begin{align*} \int{\cos{(x)}\left[ \sin^4{(x)} - \sin^6{(x)} \right] dx} &= \int{u^4 - u^6\,du} \\ &= \frac{u^5}{5} - \frac{u^7}{7} + C \\ &= \frac{\sin^5{(x)}}{5} - \frac{\sin^7{(x)}}{7} + C \end{align*}
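The antiderivative found in the thread can be spot-checked numerically: differentiating F(x) = sin⁵(x)/5 − sin⁷(x)/7 with a central difference should recover the integrand cos³(x)·sin⁴(x). This check is mine, not part of the thread:

```python
import math

def F(x):
    # Antiderivative from the thread (constant of integration dropped).
    return math.sin(x)**5 / 5 - math.sin(x)**7 / 7

def integrand(x):
    return math.cos(x)**3 * math.sin(x)**4

# Central-difference derivative of F should match the integrand.
h = 1e-6
for x in (0.3, 1.1, 2.5):
    approx_deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(approx_deriv - integrand(x)) < 1e-6
```

A passing check at several sample points is good evidence (though of course not a proof) that the symbolic work is correct.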
https://dtkaplan.github.io/MC2/Modeling/01-parameters.html
# 8  Parameters

The variety of shapes of the nine pattern-book functions means that, often, one or another will be suitable for the modeling situation at hand. Combining the functions to create a greater diversity of shapes is the subject of Chapter 9. Even if the shape of the function used is appropriate, the pattern still needs to be “adjusted” so that the units of output and input are well matched to the phenomenon being modeled.

Let’s consider data from the outbreak of COVID-19 as an example. Figure 8.1 shows, day-by-day, the number of officially confirmed COVID-19 cases in the US in March 2020. During the outbreak, case numbers increased with time. As time went on, the rate of case-number increase itself grew faster and faster. This is the same pattern provided by the exponential function. Alongside the case-number data, Figure 8.1 shows the function $$\text{cases}(t) \equiv e^t$$ plotted as a $$\color{magenta}{\text{magenta}}$$ curve.

There is an obvious mismatch between the data and the function $$e^t$$. Does this mean the COVID pattern is not exponential? This chapter will introduce how modelers stretch and shift the individual pattern-book functions so that they can be used in models of real-world situations such as the outbreak of COVID-19.

## 8.1 Matching numbers to quantities

The coordinate axes in Figure 8.1 represent quantities. On the horizontal axis is time, measured in days. The vertical axis is denominated in “10000 cases,” meaning that the numbers on the vertical scale should be multiplied by 10000 to get the number of cases.

The exponential function takes as input a pure number and produces an output that is also a pure number. This is true for all the pattern-book functions. Since the graph axes don’t show pure numbers, it is no surprise that the pattern-book exponential function doesn’t align with the COVID case data.

Recall that pure numbers, like 17.32, do not have units.
Quantities, on the other hand, usually do have units, as in 17.3 days or 34 meters. If we want the input to the model function $$\text{cases}(t)$$ to be denominated in days, we will have to convert $$t$$ to a pure number (e.g. 10, not “10 days”) before the quantity is handed off as the argument to $$\exp()$$. We do this by introducing a parameter. In every case, these parameters are arranged to translate a with-units quantity into a pure number suitable as an input to the pattern-book function. Similarly, parameters will translate the pure-number output from the pattern-book function into a quantity with units.

The standard parameterization for the exponential function is $$e^{kt}$$. The parameter $$k$$ will be a quantity with units of “per-day.” Suppose we set $$k=0.2$$ per day. Then $$k\, t\,\big|_{t=10\ \text{days}} = 2$$. This “2” is a pure number because the units on the 0.2 (“per day”) and on the 10 (days) cancel out: $0.2\, \text{day}^{-1} \cdot 10\, \text{days} = 2\ .$

The use of a parameter like $$k$$ does more than handle the formality of converting input quantities into pure numbers. Having a choice for $$k$$ allows us to stretch or compress the function to align with the data. Figure 8.2 plots the modeling version of the exponential function alongside the COVID-case data.

## 8.2 Parallel scales

At the heart of how we use the pattern-book functions to model the relationship between quantities is the idea of conversion between one scale and another. Consider these everyday objects: a thermometer and a ruler. Each object presents a read-out of what’s being measured—temperature or length—on two different scales. At the same time, the objects provide a way to convert one scale to another.

A function gives the output for any given input. We represent the input value as a position on a number line—which we call an “axis”—and the output as a position on another number line, the two almost always drawn perpendicular to one another.
But the two number lines can just as well be parallel to one another. To evaluate the function, find the input value on the input scale and read off the corresponding output.

We can translate the correspondence between one scale and the other into the form of a straight-line function. For instance, if we know the temperature in Fahrenheit ($$^\circ$$F) and want to convert it to Celsius ($$^\circ$$C) we have the following function: $C(F) \equiv {\small\frac{5}{9}}(F-32)\ .$ Similarly, converting inches to centimeters can be accomplished with $\text{cm(inches)} \equiv 2.54 \, (\text{inches}-0)\ .$

Both of these scale-conversion functions have the form of the straight-line function, which can be written as $f(x) \equiv a x + b\ \ \ \text{or, equivalently, as}\ \ \ \ f(x) \equiv a(x-x_0)\ ,$ where $$a$$, $$b$$, and $$x_0$$ are parameters.

In Section 8.3, we will use the $$ax + b$$ form of scale conversion to scale the input to pattern-book functions, but we could equally well have used $$a(x-x_0)$$. In Section 8.4 we will introduce a second scale-conversion function, for the output from pattern-book functions. That scaling will also be in the form of a straight-line function: $$A x + B$$. The use of the lower-case parameter names ($$a$$, $$b$$) versus the upper-case parameter names ($$A$$, $$B$$) will help us distinguish the two different uses for scale conversion, namely input scaling versus output scaling.

## 8.3 Input scaling

Figure 8.3 is based on the data frame RI-tide, which is a minute-by-minute record of the tide level in Providence, Rhode Island (USA) for the period April 1 to 5, 2010. The level variable is measured in meters; the hour variable gives the time of the measurement in hours after midnight at the start of April 1. The pattern-book $$\sin()$$ and the function $$\color{magenta}{\text{level}}\color{blue}{(hour)}$$ have similar shapes, so it seems reasonable to model the tide data as a sinusoid.
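The two scale-conversion formulas from Section 8.2 can be written out and spot-checked directly. This is a Python sketch (the text's own computing examples use R):

```python
# Both conversions are straight-line functions of the form a*(x - x0).
def fahrenheit_to_celsius(F):
    # (5/9)*(F - 32), arranged so the boiling-point check is exact.
    return 5 * (F - 32) / 9

def inches_to_cm(inches):
    return 2.54 * (inches - 0)

print(fahrenheit_to_celsius(212))  # 100.0 (boiling point of water)
print(inches_to_cm(10))            # 25.4, up to floating-point rounding
```

Each function reads a position off one scale and returns the corresponding position on the other, exactly as the parallel-scales picture describes.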
However, the scale of the axes is different on the two graphs. To model the tide with a sinusoid, we need to modify the sinusoid to change the scale of the input and output.

First, let’s look at how to accomplish the input scaling. Specifically, we want the pure-number input $$t$$ to the sinusoid to be a function of the quantity $$hour$$. Our framework for this re-scaling is the straight-line function. We will replace the pattern-book input $$t$$ with a function $t(\color{blue}{hour}) \equiv a\, \color{blue}{hour} + b\ .$ The challenge is to find values for the parameters $$a$$ and $$b$$ that will transform the $$\color{blue}{\mathbf{\text{blue}}}$$ horizontal axis into the black horizontal axis, like this:

By comparing the two axes, we can estimate that $$\color{blue}{10} \rightarrow 4$$ and $$\color{blue}{100} \rightarrow 49$$. With these two coordinate points, we can find the straight-line function that turns $$\color{blue}{\mathbf{\text{blue}}}$$ into black by plotting the coordinate pairs $$(\color{blue}{10}, 4)$$ and $$(\color{blue}{100}, 49)$$ and finding the straight-line function that connects the points. You can calculate for yourself that the function that relates $$\color{blue}{\mathbf{\text{blue}}}$$ to black is $t(\color{blue}{time}) = \underbrace{\frac{1}{2}}_a \color{blue}{time} \underbrace{-1\LARGE\strut}_b$

Replacing the pure number $$t$$ as the input to pattern-book $$\sin(t)$$ with the transformed $$\frac{1}{2} \color{blue}{time} - 1$$ we get a new function: $g(\color{blue}{time}) \equiv \sin\left(\strut {\small\frac{1}{2}}\color{blue}{time} - 1\right)\ .$ Figure 11.6 plots $$g()$$ along with the actual tide data.

## 8.4 Output scaling

Just as the natural input needs to be scaled before it reaches the pattern-book function, so the output from the pattern-book function needs to be scaled before it presents a result suited for interpreting in the real world.
The overall result of input and output scaling is to tailor the pattern-book function so that it is ready to be used in the real world. Let’s return to Figure 11.6 which shows that the function $$g(\color{blue}{time})$$, which scales the input to the pattern-book sinusoid, has a much better alignment to the tide data. Still, the vertical axes of the two graphs in the figure are not the same. This is the job for output scaling, which takes the output of $$g(\color{blue}{time})$$ (bottom graph) and scales it to match the $$\color{magenta}{level}$$ axis on the top graph. That is, we seek to align the black vertical scale with the $$\color{magenta}{\mathbf{\text{magenta}}}$$ vertical scale. To do this, we note that the range of the $$g(\color{blue}{time})$$ is -1 to 1, whereas the range of the tide-level is about 0.5 to 1.5. The output scaling will take the straight-line form ${\color{magenta}{\text{level}}}({\color{blue}{time}}) = A\, g({\color{blue}{time}}) + B$ or, in graphical terms We can figure out parameters $$A$$ and $$B$$ by finding the straight-line function that connects the coordinate pairs $$(-1, \color{magenta}{0.5})$$ and $$(1, \color{magenta}{1.5})$$ as in Figure 8.7. You can confirm for yourself that the function that does the job is ${\color{magenta}{\text{level}}} = 0.5 g({\color{blue}{time}}) + 1\ .$ Putting everything together, that is, scaling both the input to pattern-book $$\sin()$$ and the output from pattern-book $$\sin()$$, we get ${\color{magenta}{\text{level}}}({\color{blue}{time}}) = \underbrace{0.5}_A \sin\left(\underbrace{\small\frac{1}{2}}_a {\color{blue}{time}} \underbrace{-1}_b\right) + \underbrace{1}_B$ ## 8.5 A procedure for building models We’ve been using pattern-book functions as the intermediaries between input scaling and output scaling, using this format. $f(x) \equiv A e^{ax + b} + B\ .$ We can use the other pattern-book functions—the gaussian, the sigmoid, the logarithm, the power-law functions—in the same way. 
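As a concrete instance of this input-and-output-scaling recipe, the tide model assembled in Section 8.4 can be written out and checked in code. This is a Python sketch of the formula level(time) = 0.5·sin(½·time − 1) + 1 (the text itself uses R):

```python
import math

# Input scaling: t = 0.5*time - 1;  output scaling: level = 0.5*sin(t) + 1.
def level(time):
    return 0.5 * math.sin(0.5 * time - 1) + 1

# The scaled output should stay inside the range of the tide data,
# roughly 0.5 to 1.5 meters, for all times.
values = [level(t) for t in range(0, 101)]
print(min(values), max(values))
assert all(0.5 <= v <= 1.5 for v in values)
```

Because sin() always lies between −1 and 1, the output scaling A = 0.5, B = 1 pins the model between 0.5 and 1.5, matching the vertical axis of the tide plot.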
That is, the basic framework for modeling is this: $\text{model}(x) \equiv A\, {g_{pattern\_book}}(ax + b) + B\ ,$ where $$g_{pattern\_book}()$$ is one of the pattern-book functions. To construct a basic model, your task has two parts:

1. Pick the specific pattern-book function whose shape resembles that of the relationship you are trying to model. For instance, we picked $$e^x$$ for modeling COVID cases versus time (at the start of the pandemic). We picked $$\sin(x)$$ for modeling tide levels versus time.

2. Find numerical values for the parameters $$A$$, $$B$$, $$a$$, and $$b$$. In a later chapter we will show you some ways to make this part of the task easier.

It is remarkable that models of a very wide range of real-world relationships between pairs of quantities can be constructed by picking one of a handful of functions, then scaling the input and the output. As we move on to other Blocks in MOSAIC Calculus, you will see how to generalize this to potentially complicated relationships among more than two quantities. That is a big part of the reason you’re studying calculus.

## 8.6 Other formats for scaling

Often, modelers choose to use input scaling in the form $$a (x - x_0)$$ rather than $$a x + b$$. The two are completely equivalent when $$x_0 = - b/a$$. The choice between the two forms is largely a matter of convention. But almost always the output scaling is written in the format $$A y + B$$.

For the COVID case-number data shown in Figure 8.2, we found that a reasonable match to the data can be had by input- and output-scaling the exponential: $\text{cases}(t) \equiv \underbrace{573}_A e^{\underbrace{0.19}_a\ t}\ .$ You might wonder why the parameters $$B$$ and $$b$$ aren’t included in the model. One reason is that cases and the exponential function already have the same range: zero and upwards. So there is no need to shift the output with a parameter $$B$$. Another reason has to do with the algebraic properties of the exponential function.
Specifically, $e^{a x + b}= e^b e^{ax} = {\cal A} e^{ax}$ where $${\cal A} \equiv e^b$$.

In the case of exponentials, writing the input scaling in the form $$e^{a(x-x_0)}$$ can provide additional insight, as a bit of symbolic manipulation of the model shows. As you know, the properties of exponentials and logarithms are such that $A e^{at} = e^{\log(A)} e^{at} = e^{a t + \log(A)} = e^{a\left(\strut t + \log(A)/a\right)} = e^{a(t-t_0)}\ ,$ where $t_0 = - \log(A)/a = - \log(573)/0.19 \approx -33.4\ .$

You can interpret $$t_0$$ as the starting point of the pandemic. When $$t = t_0$$, the model output is $$e^{a \cdot 0} = 1$$: the first case. According to the parameters we matched to the data for March, the pandemic’s first case would have happened about 33 days before March 1, which is late January. We know from other sources of information that the outbreak began in late January. It is remarkable that even though the curve was constructed without any data from January or even February, the data from March, translated through the curve-fitting process, pointed to the start of the outbreak. This is a good indication that the exponential form for the model is fundamentally correct.

## 8.7 Parameterization conventions

There are conventions for the symbols used for input-scaling parameterization of the pattern-book functions. Knowing these conventions makes it easier to read and assimilate mathematical formulas. In several cases, there is more than one conventional option. For instance, the sinusoid has a variety of parameterization forms that get used depending on which feature of the function is easiest to measure. The table below lists the forms that are used in practice.
Some standard forms of input-scaling parameterizations:

| Function | Written form | Parameter 1 | Parameter 2 |
|----------|--------------|-------------|-------------|
| Exponential | $$e^{kt}$$ | $$k$$ | not used |
| Exponential | $$e^{t/\tau}$$ | $$\tau$$ “time constant” | not used |
| Exponential | $$2^{t/\tau_2}$$ | $$\tau_2$$ “doubling time” | not used |
| Exponential | $$2^{-t/\tau_{1/2}}$$ | $$\tau_{1/2}$$ “half life” | not used |
| Power-law | $$[x - x_0]^p$$ | $$x_0$$ x-intercept | $$p$$ exponent |
| Sinusoid | $$\sin\left(\frac{2 \pi}{P} (t-t_0)\right)$$ | $$P$$ “period” | $$t_0$$ “time shift” |
| Sinusoid | $$\sin(\omega t + \phi)$$ | $$\omega$$ “angular frequency” | $$\phi$$ “phase shift” |
| Sinusoid | $$\sin(2 \pi \omega t + \phi)$$ | $$\omega$$ “frequency” | $$\phi$$ “phase shift” |
| Gaussian | dnorm(x, mean, sd) | mean (center) | sd “standard deviation” |
| Sigmoid | pnorm(x, mean, sd) | mean (center) | sd “standard deviation” |
| Straight-line | $$mx + b$$ | $$m$$ “slope” | $$b$$ “y-intercept” |
| Straight-line | $$m (x-x_0)$$ | $$m$$ “slope” | $$x_0$$ “center” |

## 8.8 Drill

Part 1: What is the period of the function $$\sin(6\pi t)$$?

1. 1/3
2. 1/2
3. 2
4. 3
5. 6

Part 2: What is the period of $$g(t)$$? $g(t) \equiv \frac{5}{\sin(2 \pi t)}$

1. 1
2. 5
3. $$2 \pi/5$$
4. $$5/2\pi$$
5. $$g(t)$$ isn’t periodic.

Part 3: What is the period of $$g(t)$$? $g(t) \equiv \text{dnorm}\left(\frac{2\pi}{5}(t-3)\right)$

1. 1
2. 5
3. $$2 \pi/5$$
4. $$5/2\pi$$
5. $$g(t)$$ isn’t periodic.

Part 4: One of the following choices is the standard deviation of the function graphed in Figure 8.8. Which one?

1. 0
2. 1
3. 2
4. 3
5. 4

Part 5: What is the value of the parameter “mean” for the function shown in Figure 11.11?

1. -2
2. -1
3. 0.5
4. 1
5. 2
6. “mean” is not a parameter of this function.

Part 6: What is the value of the parameter “sd” for the function shown in Figure 8.10?

1. -2
2. -1
3. 0.5
4. 1
5. 2
6. “sd” is not a parameter of this function.

Part 7: What is the value of the parameter “mean” for the function shown in Figure 8.11?

1. -2
2. -1
3. 0.5
4. 1
5. 2
6. “mean” is not a parameter of this function.
Part 8: What is the value of the parameter “sd” for the function shown in Figure 8.11?

1. -2
2. -1
3. 0.5
4. 1
5. 2
6. “sd” is not a parameter of this function.

## 8.9 Exercises

#### Exercise 8.01

Each of the following plots shows a basic modeling function whose input scaling has the form $$x - x_0$$. Your job is to figure out from the graph the numerical value of $$x_0$$. (Hint: For simplicity, $$x_0$$ in the questions will always be an integer.)

Part A: In plot (A), what is $$x_0$$? -2, -1, 0, 1, 2

Part B: In plot (B), what is $$x_0$$? -2, -1, 0, 1, 2

Part C: In plot (C), what is $$x_0$$? -2, -1, 0, 1, 2

Part D: In plot (D), what is $$x_0$$? -2, -1, 0, 1, 2

Part E: In plot (E), what is $$x_0$$? -2, -1, 0, 1, 2

#### Exercise 8.02

Each of the graphs shows two horizontal scales, one drawn on the edge of the graphics frame (black) and one drawn slightly above that (blue). Which horizontal scale (black or blue) corresponds to the pattern-book function shown in the graph?

Part A: For graph (A), which scale corresponds to the pattern-book function? black, blue, neither, both

Part B: For graph (B), which scale corresponds to the pattern-book function? black, blue, neither, both

Part C: For graph (C), which scale corresponds to the pattern-book function? black, blue, neither, both

#### Exercise 8.03

Find the straight-line function that will give the value on the bottom (black) scale for each point $$x$$ on the top (blue) scale. The function will take the top (blue)-scale reading as input and produce the bottom (black)-scale reading as output, that is: $\text{black}(x) \equiv a (x - x_0)$

Part A: For Graph A, which function maps blue $$x$$ to the value on the black scale?

1. $$\frac{1}{3} x$$
2. $$3\, x$$
3. $$x + 3$$
4. $$x - 3$$

Part B: For Graph B, which function maps blue $$x$$ to the value on the black scale?

1. $$-\frac{2}{3}\,x$$
2. $$\frac{3}{2} x$$
3. $$\frac{2}{3} x$$
4. $$-\frac{3}{2}x$$

Part C: For Graph C, which function maps blue $$x$$ to the value on the black scale?

1. $$\frac{1}{2}(x - 2)$$
2. $$3\, x$$
3. $$2\,x$$
4. $$2\,(x + 2)$$

Part D: For Graph D, which function maps blue $$x$$ to the value on the black scale?

1. $$\frac{2}{3} (x + 3)$$
2. $$\frac{3}{2} (x - 3)$$
3. $$\frac{3}{2} (x+1)$$
4. $$\frac{3}{2}(x - 2)$$

#### Exercise 8.04

The graph shows a linear combination of two sinusoids, one of period 0.6 and the other of period 2. There is also a baseline shift. That is, the graph shows the function: $A_1 \sin\left(\frac{2\pi}{2}t\right) + A_2 \sin\left(\frac{2\pi}{0.6} (t-.3)\right) + A_3$

Part A: What is $$A_3$$? -4, -2, 0, 2, 4

Part B: What is $$A_1$$? 0, 1, 2, 3.5

Part C: What is $$A_2$$? 0, 1, 2, 3.5

#### Exercise 8.05

The Bargain Basement store wants to sell its goods quickly. Consequently, they reduce each product’s price $$P$$ by 5% per day.

Part A: If a jacket costs \$80 today, how much will it cost in $$t$$ days?

1. $$P = 80 - 5t$$
2. $$P = 80 - 4t$$
3. $$P = 80 - 0.05t$$
4. $$P = 80 (0.05)^t$$
5. $$P = 80 (0.95)^t$$

You will need to use an R console to answer the next question. A hint: the answer is related to the answer from the previous question. Remember, to raise a number to a power, you can use an expression like 0.95^7.

Part B: You decided that you like the \$80 jacket, but you have a budget of only \$60. How long should you wait before coming back to the Bargain Basement store?

1. 3 days
2. 4 days
3. 5 days
4. 6 days

Part C: The answer to the first question is an exponential function, even if at first it doesn’t look like it. Which of these is the same function but written in the standard $$e^{kt}$$ format?

1. $$80 \exp( \ln(0.95) t)$$
2. $$0.95 \exp(80 t)$$
3. $$80 \exp(-\ln(0.95) t)$$
4. $$0.95 \exp(\ln(80) t)$$

#### Exercise 8.06

The three functions created by the statements below are different in important ways. Explain what those differences are.

f1 <- makeFun(sin((2*pi/P)*t) ~ t)
f2 <- makeFun(sin((2*pi/6)*t) ~ t)
f3 <- makeFun(sin((2*pi/P)*t) ~ t, P=6)

#### Exercise 8.07

Watch this movie showing the growth of a colony of E. coli. Each rod is one bacterium.
Bacteria exhibit exponential growth under optimal conditions. In general, if the rate of growth depends on some quantity (here, bacteria), then the exponential is the best first guess at a model. In the movie, notice that the rate of expansion depends on the number of bacteria present; the more E. coli, the faster the rate of growth. This is true for any exponential process: the instantaneous rate of growth or decay depends on the amount currently present.

If the experiment were continued indefinitely, the number of bacteria would eventually outgrow the petri dish or deplete their food source. When this happens, we say the bacteria have approached the carrying capacity of their environment. When the population is constrained in this way, a sigmoid would be a more appropriate model to start your modeling process. So, the deciding factor between exponential and sigmoid really depends upon whether 1) we assume a constrained or unconstrained environment, and 2) we let the bacteria reach the carrying capacity of the petri dish or not.

1. Is there any obvious sign that the bacteria are reaching the carrying capacity of their environment before the end of the movie?
2. Estimate the doubling time of the number of bacteria as they are growing exponentially. Do this by figuring out how long it takes the area of the colony to double (roughly). Hint: You’ll need to use the time marker in the bottom left corner of the movie.
3. Estimate the doubling time in another way, by observing an individual bacterium. At any point in the movie, choose a bacterium at random. Watch it until it splits in two then, immediately, note the time. Watch some more until one of the two children splits. The time difference between the mother’s split and the child’s split is the doubling time.
4. Compare the doubling time for a mother in the center of a large colony to the doubling time of a mother on the edge of the colony.
Is there any clear sign that growth in the more crowded part of the colony is slower than in the suburbs?

Credit: Math 141Z/142Z 2021-2022 development team.

#### Exercise 8.08

Here are four frames from a movie showing (through a microscope) the growth of E. coli bacteria.

1. In each frame, count the number of bacteria.

2. Construct a data frame recording the time stamp and the number of bacteria in each frame. The unit of observation is a frame. You can use a command like this, replacing the count with your own numbers:

Ecoli <- tibble::tribble(
  ~ time, ~ count,
  100, 36,
  150, 289,
  200, 1683,
  250, 2945
)

3. Make a point plot of the number of bacteria versus time. Use linear, logarithmic or semi-logarithmic axes as most appropriate to show a simple pattern.

A. Which type of axes shows the pattern most simply?

B. Is the pattern most consistent with linear growth, exponential growth, or power-law growth?

C. From your graph, find the parameter that describes the growth rate:
- If linear growth, the slope of the line (give units)
- If exponential growth, the doubling time (give units)
- If power law, the exponent (which will not have units).

#### Exercise 8.09

The diagram shows how the intensity of light from the sun depends on distance $$r$$. (Diagram: Wikipedia.) The intensity is the number of photons per unit area. Imagining each red line to be the path followed by one photon, the intensity can be calculated from the area of the surfaces at distance $$r$$, $$2r$$, and $$3r$$.

Part A: Which of these functional forms best models intensity $$\cal I$$ as a function of distance $$r$$?

1. Proportional: $$\cal I(r)\equiv ar+b$$
2. Power-law: $$\cal I(r)\equiv Ar^p$$
3. Exponential: $$\cal I(r)\equiv Ae^{kr}+C$$
4. Sine: $$\cal I(r)\equiv A\sin \left(\frac{2\pi}{p}(r-r_0)\right)+B$$
5. Sigmoid: $$\cal I(r)\equiv A\cdot pnorm(r,mean,sd)+B$$
6. Gaussian: $$\cal I(r)\equiv A\cdot dnorm(r,mean,sd)+B$$

#### Exercise 8.10

The performance $$p$$ of a worker depends on the level of stimulation/stress $$s$$ imposed by the task. This phenomenon has come to be known as the Yerkes-Dodson Stress Performance Curve, and you’ve probably experienced this yourself. If a task is not stimulating enough, people become inactive/bored and performance is negatively impacted. If tasks are over-stimulating (stressful), people become anxious, fatigued, and burnt out. The overall pattern is shown by the diagram.

Part A: Which of these functional forms best imitates the Yerkes-Dodson stress performance curve?

1. Proportional: $$p(s) \equiv as+b$$
2. Power-law: $$p(s) \equiv As^p$$
3. Exponential: $$p(s) \equiv Ae^{ks}+C$$
4. Sine: $$p(s) \equiv A\sin\left(\frac{2\pi}{P}(s-s_0)\right)+B$$
5. Sigmoid: $$p(s) \equiv A\cdot pnorm(s,mean,sd)+B$$
6. Gaussian: $$p(s) \equiv A\cdot dnorm(s,mean,sd)+B$$

A manager must balance workloads between too much and too little stimulation to get peak performance out of each team member.

#### Exercise 8.11

The graph shows the proportion $$P$$ of US cell-phone users who own a smartphone as a function of the year $$y$$. As a rule, when a quantity grows exponentially but is ultimately limited to some maximum level, the sigmoid is the choice for modeling. The proportion of smartphone owners grew exponentially during the early 2000s. As the number of smartphones increased, the broader familiarity with and advertisement of smartphones also increased, which helped sustain this exponential growth. However, adoption has slowed as smartphone penetration reaches the maximum carrying capacity. In other words, once everyone has a smartphone, the proportion of smartphone owners cannot increase—everyone already owns a smartphone. So, eventually the exponential growth must taper off. According to several datasets, this inflection point occurred sometime between 2013 and 2014.
This behavior is visible in the graphic below showing US smartphone penetration between Jan 2005 and Oct 2020 with raw data shown from 2010 to 2015. Part A During the initial, exponential phase of smartphone penetration, what was the doubling time for penetration? (Note that the horizontal axis labels have 1/4 year inbetween them.)
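For doubling-time questions like Part A, two readings from the exponential phase pin down the rate. A minimal Python sketch; the 2% and 16% penetration values below are made-up illustration numbers, not read from the actual graphic:

```python
import math

def doubling_time(t0, y0, t1, y1):
    """Doubling time of an exponential curve passing through (t0, y0) and (t1, y1)."""
    k = (math.log(y1) - math.log(y0)) / (t1 - t0)  # continuous growth rate
    return math.log(2) / k

# Hypothetical readings: 2% penetration in 2005 growing to 16% in 2008
# is three doublings in three years.
print(round(doubling_time(2005, 0.02, 2008, 0.16), 6))  # 1.0
```

The same formula works with colony areas or bacteria counts from the earlier exercises.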
6,918
25,468
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.03125
4
CC-MAIN-2023-50
latest
en
0.90552
https://www.ocf.berkeley.edu/~shidi/cs61a/w/index.php?title=Interpreter&oldid=920
1,642,434,208,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320300574.19/warc/CC-MAIN-20220117151834-20220117181834-00352.warc.gz
960,014,690
6,962
# Interpreter

An interpreter is a program that translates user input into computable data, then outputs desired results. It evaluates input by first separating user input into a sequence of tokens and analyzing them, a technique called parsing, then evaluating the parsed expression. Evaluating is often a complex process of applying functions to combine elements until the expression is completely simplified. The interpreter will use recursion to reach the most primitive elements of the expression before attempting to apply functions to them. For example, the expression `(+ (* 3 4) 5)` will be separated into `['(', '+', '(', '*', 3, 4, ')', 5, ')']`. The interpreter will recurse into the most deeply nested case, `(* 3 4)`, and evaluate it by applying `*` to 3 and 4. The expression will now be `(+ 12 5)`. Now the interpreter will apply `+` to 12 and 5, resulting in an output of 17. ## Parsing Parsing is the process of generating expression trees from raw text input[1]. User input in Scheme consists of nested and grouped elements conjoined by functions. A typical mathematical expression would look like `(+ 3 4)`, where the function precedes the elements. Since the input is text based, the interpreter will need to break it into individual elements, then create Pair objects that represent the expression. A `(+ 3 4)` input would become `['(', '+', '3', '4', ')']`, then be converted to `Pair("+", Pair('3', Pair('4', nil)))`. With this Pair object representation, the interpreter can now evaluate it. ## Calculator Main article: Calculator
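The tokenize-parse-evaluate pipeline described above can be sketched in a few lines of Python. This is an illustrative sketch using nested lists rather than the course's Pair class, not the actual project code:

```python
import operator
from functools import reduce

def tokenize(text):
    """Split a Scheme-style expression into parenthesis and atom tokens."""
    return text.replace('(', ' ( ').replace(')', ' ) ').split()

def parse(tokens):
    """Consume tokens and build a nested-list expression tree."""
    token = tokens.pop(0)
    if token == '(':
        subexpr = []
        while tokens[0] != ')':
            subexpr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ')'
        return subexpr
    try:
        return int(token)
    except ValueError:
        return token  # an operator symbol such as '+' or '*'

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul, '/': operator.truediv}

def evaluate(expr):
    """Recurse to the most primitive elements, then apply the operator."""
    if isinstance(expr, int):
        return expr
    op, *args = expr
    return reduce(OPS[op], [evaluate(a) for a in args])

print(evaluate(parse(tokenize("(+ (* 3 4) 5)"))))  # 17
```

The `(* 3 4)` subexpression is evaluated first because `evaluate` recurses into each argument before applying the operator, matching the order described in the article.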
366
1,631
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2022-05
latest
en
0.835737
https://www.homebrewersassociation.org/forum/index.php?action=profile;u=6247;area=showposts;start=7410
1,516,526,239,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084890394.46/warc/CC-MAIN-20180121080507-20180121100507-00746.warc.gz
910,044,925
7,941
### Show Posts

### Messages - tschmidlin

7411
##### The Pub / Re: OOOH Space beer.
« on: September 30, 2010, 06:30:00 AM »
They should flocculate fine, just not fall out of solution. Stirring the beer will easily make the larger particles move to the center, just like in the mash. Then you can draw off from the side. I think filtering would likely be the way to go though, and you'd probably still want to spin it so the beer all moves to the outside of the vessel as you're pumping out liquid.

7412
##### Beer Recipes / Re: How does this sound?
« on: September 30, 2010, 06:12:53 AM »
You can assume that honey will have a potential gravity of about 1.035. That means 1 lb in 1 gallon gets you 1.035. There might be better numbers available if you've measured your honey, but it's an approximation. We'll call this 35 gravity points. So, you added 1.5 gallons, which gets you 52.5 gravity points. If this is a 5 gallon batch, then you just divide by 5 to get 10.5 gravity points. Your starting gravity was 1.050, or 50 gravity points, so 50+10.5=60.5. So once you added the honey you can estimate a SG of 1.0605.

7413
##### General Homebrew Discussion / Re: What's Brewing This Weekend - 10/1
« on: September 30, 2010, 05:49:31 AM »
Not brewing - I'm officiating a friend's wedding in Portland OR on Saturday, I hope to hit the new Cascade and Hair of the Dog while I'm there. Hard to say though, the schedule might be too tight to hit both.

7414
##### Equipment and Software / Re: lost siphon
« on: September 29, 2010, 11:09:44 PM »
You can start it by mouth, but that's not very sanitary. Most people who bottle flat beer use a bottling bucket with a spigot at the bottom, so a siphon is not so much of an issue.
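The gravity-points arithmetic in post 7412 above generalizes to any fermentable addition. A sketch of that estimate; note it treats the amount added as pounds, since the 35-point figure is per pound per gallon (an assumption, as the post says gallons):

```python
def gravity_after_addition(og, pounds, ppg, gallons):
    """Estimate the new specific gravity after adding a fermentable to a batch.
    ppg is gravity points per pound per gallon (honey is roughly 35)."""
    points = (og - 1) * 1000 + pounds * ppg / gallons
    return 1 + points / 1000

# 1.5 lb of honey into a 5 gallon batch that started at 1.050.
print(round(gravity_after_addition(1.050, 1.5, 35, 5), 4))  # 1.0605
```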
7415
##### All Things Food / Re: Ethnic and Regional Cooking
« on: September 29, 2010, 09:58:56 PM »
I make some darned good mussels, too. Chorizo, garlic, shallots and whatever homebrewed Belgian beer I happen to have on tap.
No offense Jeff, but how could you make anything bad with those ingredients?

7416
##### Ingredients / Re: re-using dryhops
« on: September 29, 2010, 06:36:30 PM »
That's not surprising

7417
##### Ingredients / Re: re-using dryhops
« on: September 29, 2010, 04:11:48 PM »
Good point on the lupulin that I hadn't thought of. I should have, since a lot of my dry hopped beers end up with a layer of lupulin on the bottom of the glass.
Mine too, at least for the first pint or two (or five) out of the keg

7418
##### All Things Food / Re: Ethnic and Regional Cooking
« on: September 29, 2010, 04:05:42 PM »
We steamed some 2.5 pound lobsters up in Cape Cod this summer. Delicious. I'll still take blue crab as my favorite, though
We got a bushel of blue crab when we were in NC this past summer, it was awesome! But it's still no Dungeness.
Nice icon narvin, Mr. Boh!

7419
##### All Things Food / Re: BBQ Style
« on: September 29, 2010, 04:03:44 PM »
And . . . done. Plus I ordered some cherry and hickory dust to go in it. I want to smoke some cheese before Thanksgiving, this will give me a chance to practice a bit and get it done. Thanks, I hadn't heard of that product.
Here's a much cheaper source of dust. I have 5lbs of mesquite and 5lbs of apple, and 5lbs of sawdust is a LOT of sawdust as you might imagine, this thing only uses a few ounces per burn. I don't know if I'll ever use it all up.
edit: helps to include the link! http://www.psseasoning.com/index.cfm/act/products.view/category_id/20
Things that would have been good to know YESTERDAY! No big deal, if it works well for me I'll order a bunch of the other stuff.
I can use it as a change from the typical madrona/apple/plum that I use.

7420
##### All Things Food / Re: Smokers
« on: September 29, 2010, 03:56:36 PM »
Hmmmm... I'm not really sure. But better safe than sorry.
http://en.wikipedia.org/wiki/Metal_fume_fever
I wouldn't risk my life or health on anything found solely in wikipedia. I'd find another source for the safe temps for galvanized.

7421
##### Ingredients / Re: re-using dryhops
« on: September 29, 2010, 03:48:24 PM »
Although the alpha acids won't be as soluble since they haven't been isomerized, that won't stop the lupulin from falling to the bottom of the fermenter. Your bag might, and the hop mass will stop some, but my guess is you'll lose some. I have no idea how much though. I don't see any problem with doing it, especially if you're brewing the same day you remove them or you freeze the whole bag after removing it from the beer. I haven't done it though, so those are just my thoughts on the subject.

7422
##### All Things Food / Re: Smokers
« on: September 29, 2010, 07:41:50 AM »
After rereading the MSDS on it I'm not sure about the "fumes" aspect of it. At what point does galvanized steel fume? At pit temps (185-400F) or at much higher temps?
According to wikipedia:
Quote
Galvanized steel is suitable for high-temperature applications of up to 392 °F (200 °C). The use of galvanized steel at temperatures above this will result in peeling of the zinc at the intermetallic layer.

7423
##### All Things Food / Re: BBQ Style
« on: September 29, 2010, 07:27:44 AM »
Yeah, warm enough to be a bit soft . . . I imagine there would be some variability between cheese varieties and a softer cheese might not need to be as warm. But it's a great starting point.

7424
##### All Things Food / Re: BBQ Style
« on: September 29, 2010, 06:36:01 AM »
Oh I thought it had to be below 40F? The temps you describe sound better to me. Interesting.
Well, that's what they say . . .
http://www.macsbbq.co.uk/ColdSmoking.html

7425
##### The Pub / Re: Note to self...
« on: September 29, 2010, 06:15:18 AM »
1,745
5,961
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2018-05
latest
en
0.867511
https://www.armoredpenguin.com/crossword/Data/best/math/2d-figures.01.html
1,550,388,378,000,000,000
text/html
crawl-data/CC-MAIN-2019-09/segments/1550247481766.50/warc/CC-MAIN-20190217071448-20190217093448-00627.warc.gz
764,750,818
9,601
### Two-Dimensional Figures

Loren Byers

Across
4. angles formed by parallel lines and a transversal that lie outside the parallel lines
8. slides
9. parts of congruent shapes that match
10. turns
11. line that makes two halves of a figure symmetrical (match)
12. line segment perpendicular to the bases with endpoints on the base and the side opposite the base
13. the distance from the center to any point on the circle
14. polygon that is equilateral and equiangular
16. closed figure with four sides and four vertices
20. angles formed by parallel lines and a transversal that are on opposite sides of the transversal and inside the parallel lines
21. the distance across the circle through its center
25. when all angles are congruent
29. lines that intersect to form a right angle
31. the distance around a circle
33. parallelogram with four right angles
36. figures that have the same size and same shape
37. angles formed by parallel lines and a transversal that lie inside the parallel lines
38. if the sum of 2 angles measures 90 degrees, the angles are _____
40. flips
41. a simple, closed figure formed by three or more line segments

Down
1. alter the size of a figure
2. angles that are on opposite sides of the transversal and outside the parallel lines
3. all sides are congruent
4. angle formed when a side of a polygon is extended
5. angles that are in the same position on the parallel lines in relation to the transversal
6. parallelogram with four congruent sides and four right angles
7. the given point of a circle
12. two angles that have the same vertex, share a common side and do not overlap
15. any figure that can be turned or rotated less than 360 degrees about a fixed point so that the figure looks exactly as it does in its original position
17. quadrilateral with both pairs of opposite sides parallel and congruent
18. two pairs of opposite angles formed when two lines intersect
19. figure that has line symmetry
22. line that intersects two parallel lines forming eight angles
23. angle that has its vertex on the circle and sides that are chords
24. parallelogram with four congruent sides
26. angle whose vertex is the center of the circle
27. an angle inside a polygon
28. if 2 angles measure 180 degrees, the angles are _____
30. quadrilateral with one pair of opposite sides parallel
31. segment of a circle whose endpoints are on the circle
32. length of an altitude
34. set of all points in a plane that are the same distance from a given point
35. a line segment in a polygon that joins two nonconsecutive vertices
39. 3.14 or 22/7
743
3,009
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.125
4
CC-MAIN-2019-09
longest
en
0.894
https://tutorstips.com/question-no-17-chapter-no-5-t-s-grewal-11-class/
1,600,529,137,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600400192778.51/warc/CC-MAIN-20200919142021-20200919172021-00363.warc.gz
668,844,031
37,486
# Question No 17 Chapter No 5 – T.S. Grewal 11 Class

Question No 17 Chapter No 5

17. Prepare Accounting Equation from the following:
(a) Started business with cash Rs 1,00,000.
(b) Purchased goods for cash Rs 20,000 and on credit Rs 30,000
(c) Sold goods for cash costing Rs 10,000 and on credit costing Rs 15,000, both at a profit of 20%
(d) Paid salaries Rs 8,000

### Solution of Question No 17 Chapter No 5: –

| S. No. | Particulars | Cash | Stock | Debtors | Creditors | Capital |
|---|---|---|---|---|---|---|
| (a) | Started business with cash Rs 1,00,000 | +1,00,000 | – | – | – | +1,00,000 |
| | Balance | 1,00,000 | – | – | – | 1,00,000 |
| (b) | Purchased goods for cash Rs 20,000 and on credit Rs 30,000 | -20,000 | +50,000 | – | +30,000 | – |
| | Balance | 80,000 | 50,000 | – | 30,000 | 1,00,000 |
| (c) | Sold goods costing Rs 25,000 (Rs 10,000 for cash, Rs 15,000 on credit) at a profit of 20% | +12,000 | -25,000 | +18,000 | – | +5,000 |
| | Balance | 92,000 | 25,000 | 18,000 | 30,000 | 1,05,000 |
| (d) | Paid salaries Rs 8,000 | -8,000 | – | – | – | -8,000 |
| | Total | 84,000 | 25,000 | 18,000 | 30,000 | 97,000 |

Assets: Cash 84,000 + Stock 25,000 + Debtors 18,000 = 1,27,000/-
Liabilities: Creditors 30,000 = 30,000/-
Capital = 97,000/-
Liabilities + Capital: 30,000 + 97,000 = 1,27,000/-

### Explanation of All Transactions with images: –

This is not a part of the solution, so you don't have to write it in the exam. Why explain it, then? Because this explanation will help you understand each transaction with logic, so you don't need to memorize the transactions; just understand and remember the logic behind them.

#### Transaction No. 1

As we discussed in the previous topic, the owner and the business have separate identities in the eye of the law. So the business is treated as an artificial person, and anything invested by the owner into the business is treated as capital. In this transaction, as shown in the above image, the owner is investing her cash into the business, so it is treated as capital of the business. The business receives an asset, here cash (assets include cash, bank, stock, machinery and furniture).

#### Transaction No. 2

In this transaction, as shown in the above image, three accounts are involved: Stock (Purchase), Cash, and Creditors.

• Stock a/c (Purchase): because the business receives goods.
• Cash a/c: because the business pays part of the amount due in cash.
• Creditors a/c: because the business has not yet paid the remaining amount but must pay it in future, so a creditors account is created.

#### Transaction No. 3

In this transaction, as shown in the above image, four accounts are involved: Stock (Sale), Cash, Debtors, and Capital (Profit).

• Stock a/c (Sale): because the business gives (sells) its goods.
• Cash a/c: because part of the payment is received in cash.
• Debtors a/c: because the other part of the payment has not yet been received; it becomes a new asset.
• Capital (Profit): because the owner has a right to all profit of the business, so the amount of the profit is added to the capital a/c. (Profit = sale price - cost price = 30,000 - 25,000 = 5,000)

#### Transaction No. 4

In this transaction, as shown in the above image, two accounts are involved: Cash and Capital.

• Cash a/c: because cash goes out of the business.
• Capital a/c: because the business gets services from its employees and pays them for it, so this is an expense for the business. "All expenses and losses are deducted from the amount of capital."
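The four transactions can be replayed mechanically to confirm that the accounting equation (Assets = Liabilities + Capital) balances at the end. A minimal sketch of the working above, with amounts in Rs:

```python
# Replay the four transactions and check Assets = Liabilities + Capital.
cash = stock = debtors = creditors = capital = 0

cash += 100000; capital += 100000                                  # (a) capital introduced
cash -= 20000; stock += 50000; creditors += 30000                  # (b) goods bought for cash and on credit
cash += 12000; stock -= 25000; debtors += 18000; capital += 5000   # (c) goods sold at 20% profit
cash -= 8000; capital -= 8000                                      # (d) salaries paid

assets = cash + stock + debtors
print(assets, creditors + capital)  # 127000 127000
assert assets == creditors + capital
```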
922
3,323
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2020-40
latest
en
0.896594
https://nl.mathworks.com/matlabcentral/cody/problems/42643-matlab-basic-rounding-iii/solutions/1395248
1,600,533,571,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600400192778.51/warc/CC-MAIN-20200919142021-20200919172021-00311.warc.gz
532,248,365
16,544
Cody

# Problem 42643. MATLAB Basic: rounding III

Solution 1395248, submitted on 27 Dec 2017 by Anil Sarode.

### Test Suite

Test 1 Pass: x = -8.8; y_correct = -9; assert(isequal(round_x(x),y_correct))
Test 2 Pass: x = -8.4; y_correct = -9; assert(isequal(round_x(x),y_correct))
Test 3 Pass: x = 8.8; y_correct = 8; assert(isequal(round_x(x),y_correct))
Test 4 Pass: x = 8.4; y_correct = 8; assert(isequal(round_x(x),y_correct))
Test 5 Pass: x = 8.49; y_correct = 8; assert(isequal(round_x(x),y_correct))
Test 6 Pass: x = 128.52; y_correct = 128; assert(isequal(round_x(x),y_correct))
Test 7 Pass: x = pi; y_correct = 3; assert(isequal(round_x(x),y_correct))
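All seven test cases (-8.8 → -9 but 8.8 → 8) are consistent with rounding toward negative infinity, i.e. floor. The actual `round_x` solution is a MATLAB one-liner hidden by the site; a Python equivalent, assuming floor is the intended behavior:

```python
import math

def round_x(x):
    """Round toward negative infinity, matching every test case above."""
    return math.floor(x)

cases = [(-8.8, -9), (-8.4, -9), (8.8, 8), (8.4, 8),
         (8.49, 8), (128.52, 128), (math.pi, 3)]
assert all(round_x(x) == y for x, y in cases)
```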
263
779
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2020-40
latest
en
0.623183
https://www.overcomingbias.com/2020/11/why-abstaining-helps.html
1,675,773,204,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500456.61/warc/CC-MAIN-20230207102930-20230207132930-00318.warc.gz
926,930,652
12,403
# Why Abstaining Helps

Misunderstandings that I heard in response to these tweets have encouraged me to try to explain more clearly the logic of why most eligible voters should abstain from voting.

Think of each vote cast between two candidates as being either +1 or -1, so that the positive candidate wins if the sum of all votes cast is positive; the negative candidate wins otherwise. Abstaining is then a vote of 0. (If the vote sum is zero, the election is a tie.) Assume that there is one binary quality variable that expresses which of the two candidates is "better for the world", that these two options are equally likely, that each voter gets one binary clue correlated with that quality, and that voters vote simultaneously. What we should want is to increase the chance that the better candidate wins.

While, all else equal, each voter may prefer a higher quality candidate, they need not be otherwise indifferent. So if, based on other considerations, they have a strong enough preference for one of the candidates, such "partisan" voters will pick that candidate regardless of their clue. Thus their vote will not embody any info about candidate quality. They are so focused on other considerations that they won't help make for a more informed election, at least not via their vote.

The other "informed" voters care enough about quality that their vote will depend on their quality clue. Thus the total vote will be the sum of the partisan votes plus the informed votes. So the sum of the partisan votes will set a threshold that the informed votes must overcome to tip the election. For example, if the partisan sum is -10, then the informed votes must sum to at least 10 to tip the election toward the positive candidate. For our purposes here it won't matter if there is uncertainty over this sum of partisan votes or not; all that matters is that the partisan sum sets the threshold that informed votes must overcome.
Now in general we expect competing candidates to position themselves in political and policy spaces so that on average the partisan threshold is not too far from zero. After all, it is quite unusual for everyone to be very confident that one side will win. So I will from here on assume a zero threshold, though my analysis will be robust to modest deviations from that.

Assume for now that the clues of the informed voters are statistically independent of each other, given candidate quality. Then with many informed voters the sum of informed votes will approach a normal distribution, and the chance that the positive candidate wins is near the integral of this normal distribution above the partisan threshold. Thus all that matters from each individual voter is the mean and variance of their vote.

Any small correlation between a voter's clue and quality will create a small positive correlation between quality and their mean vote. Thus their vote will move the mean of the informed votes in the right direction. Because of this, many say that the more voters the better, no matter how poorly informed is each one. However, each informed voter adds to both the mean and the variance of the total vote, as shown in this diagram:

What matters is the "z-score" of the informed vote, i.e., the mean divided by its standard deviation. The chance that the better candidate wins is increasing in this z-score. So if a voter adds proportionally more to the standard deviation than they add to the mean, they make the final vote less likely to pick the better candidate, even if their individual contribution to the mean is positive. This is why poorly informed voters who vote can hurt elections, and it is why the relevant standard is your information compared to that of the other voters who don't abstain.
If you are an informed voter who wants to increase the chance that the better candidate wins, then you should abstain if you are not sufficiently well informed compared to the others who will vote. In a previous post I considered the optimal choice of when to abstain in two extreme cases: when all other informed voters also abstain optimally, and when no one else abstains but this one voter. Realistic cases should be somewhere between these extremes.

To model inequality in how informed various voters are, I chose a power law dependence of clue correlation on voter rank. If the power is high, then info levels fall very quickly as you move down in voter rank from the most informed voter. If the power is low, then info levels fall more slowly, and voters far down in rank may still have a lot of info.

I found that for a power less than 1/2, and ten thousand informed voters, everyone should vote in both extreme cases. That is, when info is distributed equally enough, it really does help to average everyone's clues via their votes. But for a power of 3/4, more than half should abstain even if no one else abstains, and only 6 of them should vote if all informed voters abstained optimally. For a power of 1, 80% should abstain even if no one else does, and only 2 of them should vote if all abstain optimally. For higher powers, it gets worse. My best guess is that a power of one is reasonable, as this is a very common power and also near the middle of the distribution of observed powers.

Thus even if everyone else votes, for the purpose of making the total vote have a better chance of picking the better candidate, you should abstain unless you are especially well informed, relative to the others who actually vote. And the more unequal you estimate the distribution of who is how informed, the more reluctant you should be to vote.
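The mean-versus-variance tradeoff is easy to see in a small Monte Carlo sketch. The numbers below are illustrative only (they are not the power-law model from the post): adding many barely-informed voters lowers the chance the better candidate wins, even though each one contributes positively to the mean.

```python
import random

def election_accuracy(ps, trials=20000, seed=0):
    """Estimate P(the +1 candidate wins) when each informed voter i votes
    according to a clue that is correct with probability ps[i].
    Partisan threshold is taken as zero; a tied sum is not a win."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        total = sum(1 if rng.random() < p else -1 for p in ps)
        wins += total > 0
    return wins / trials

strong = [0.70] * 10           # a few well-informed voters
weak = strong + [0.51] * 101   # same voters plus many barely-informed ones
print(election_accuracy(strong), election_accuracy(weak))
```

With these made-up parameters the ten-voter electorate picks the better candidate more often than the larger one, because the extra voters raise the standard deviation of the vote sum faster than they raise its mean.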
Note that the above analysis ignored the cost of getting informed and voting, and that people seem in general to be overconfident when they estimate their informedness rank. Both of these considerations should make you more willing to abstain.

In the above I assumed voter clues are independent, but what if they are correlated? For the same means, clue correlation increases the variance of the sum of individual votes. So all else equal, voters with correlated clues should be more willing to abstain, compared to other voters.

Yes, I've used binary clues throughout, and you might claim that all this analysis completely changes for non-binary clues. Possible, but that would surprise me.

Added 7a: Re the fact that it is possible and desirable to tell if you are poorly informed, I love this saying: If you're playing a poker game and you look around the table and can't tell who the sucker is, it's you.
1,331
6,559
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2023-06
latest
en
0.955078
http://physicshelpforum.com/advanced-electricity-magnetism/4015-electric-field-uniformly-charged-ring.html
1,553,634,514,000,000,000
text/html
crawl-data/CC-MAIN-2019-13/segments/1552912206016.98/warc/CC-MAIN-20190326200359-20190326222359-00463.warc.gz
165,357,632
10,117
Physics Help Forum

Electric Field of a Uniformly Charged Ring

Jan 24th 2010, 09:48 AM #1
Junior Member (Join Date: Jan 2010, Posts: 3)

Electric Field of a Uniformly Charged Ring

1. The problem statement, all variables and given/known data
A uniformly charged ring of radius 8.1 cm has a total charge of 118 micro Coulombs. The value of the Coulomb constant is 8.98755e9 N m^2/C^2. Find the magnitude of the electric field on the axis of the ring at 1.15 cm from the center of the ring. Answer in units of N/C.

2. Relevant equations
F = k Qq / r^2
E = k q / r^2

3. The attempt at a solution
I tried subtracting 1.15 cm from 8.1 cm for "r" and plugged that "r" value into the F equation, but that answer is wrong. By axis, do they mean horizontally (as in along the diameter) or vertically? Someone said I had to integrate but I don't know what to integrate and from what points.

Jan 24th 2010, 11:09 AM #2
Senior Member (Join Date: Apr 2008, Posts: 815)

Originally Posted by ILoveCollege (the problem statement and attempt above)

Hi ILoveCollege and welcome to PHF! It's a very common problem. For instance, I found in google Electric Field on the Axis of Charged Ring or Electric Field Along the Axis of a Charged Semicircle or Ring. Does it make sense?
__________________
Isaac
If the problem is too hard just let the Universe solve it.
Jan 24th 2010, 12:11 PM #3
Junior Member (Join Date: Jan 2010, Posts: 3)

Originally Posted by arbolis
Hi ILoveCollege and welcome to PHF! It's a very common problem. For instance, I found in google Electric Field on the Axis of Charged Ring or Electric Field Along the Axis of a Charged Semicircle or Ring. Does it make sense?

Yes! Thank You! I also have another question: "Sketch the gravitational "lines of force" for two equal point masses. These are isomorphic to the electrostatic lines of force of..." My question is wouldn't two equal "+" point charges produce the same electric field line diagram as two equal "-" point charges?

Jan 24th 2010, 12:17 PM #4
Senior Member (Join Date: Apr 2008, Posts: 815)

Originally Posted by ILoveCollege (the question above)

I'll wait for a more experienced person to answer you well on this question. If both lines of force are indeed isomorphic, then I believe that the answer is yes. Now I don't think that they are isomorphic so I have a doubt.
__________________
Isaac
If the problem is too hard just let the Universe solve it.
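For reference, the standard result the pages linked in the thread derive is E = k*Q*x / (x^2 + R^2)^(3/2) for the on-axis field (directed along the axis by symmetry). A quick numerical sketch with the thread's numbers:

```python
# On-axis electric field of a uniformly charged ring.
k = 8.98755e9        # Coulomb constant, N m^2 / C^2
Q = 118e-6           # total charge, C
R = 0.081            # ring radius, m (8.1 cm)
x = 0.0115           # distance along the axis from the center, m (1.15 cm)

E = k * Q * x / (x**2 + R**2) ** 1.5
print(f"{E:.3e} N/C")  # roughly 2.2e+07 N/C
```

Note the field depends on the axial distance x and the radius R together, which is why subtracting 1.15 cm from 8.1 cm gives the wrong answer.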
1,015
3,836
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.515625
4
CC-MAIN-2019-13
latest
en
0.923637
https://www.newcomputerworld.com/130-mm-to-inches/amp/
1,643,317,345,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320305288.57/warc/CC-MAIN-20220127193303-20220127223303-00464.warc.gz
958,404,067
17,034
Tech # 130 mm to inches – Convert Millimeters to Inches How do you convert 130 mm to inches (in)? Here we will demonstrate how to convert millimeters to inches as a decimal and give you the answer for 130 mm in inches. ## 130 mm to inches as a Decimal There are 25.4 millimeters per inch and 0.0393701 inches per millimeter. Therefore, you can answer 130 mm to inches in two different ways: you can either multiply 130 by 0.0393701 or divide 130 by 25.4. Here is the math to get the answer by dividing 130 mm by 25.4. 130 / 25.4 = 5.11811023622047 130 mm ≈ 5.118 inches ## Convert 130.0 mm to Inches 130.0 Millimeters (mm) = 5.11811 Inches (in) 1 mm = 0.03937 in 1 in = 25.4 mm ## 130 mm to Inches – Unit Definition Millimeter Definition – A millimeter is a unit for measuring small objects; it belongs to the metric system, is equal to 0.001 meters, and is abbreviated mm. The "millimeter" spelling is used in the United States, but it is spelled "millimetre" in the UK and other nations. A millimeter is equal to approximately 0.04 of an inch (to be specific, 0.0393700787402 inches). A millimeter is smaller than a centimeter, as 1 mm equals 0.1 of a centimeter in the metric system. One meter is equivalent to 1,000 mm, and mm is used when an object is too small to measure conveniently in inches. Inch Definition – The inch is the preferred unit of length measurement in the United States. It is equal to 1/36 of a yard, and 12 inches is equivalent to a foot. The word "inch" derives from the Old English ynce, which comes from the Latin uncia ("twelfth part"). The inch has two abbreviations: in. and ". Aside from the US, Canada and the UK use it for measurement. In Japan, the inch is used to gauge display screen sizes. The official symbol of the inch is in., but it is mainly displayed as a double prime ("), the same symbol used for quotes, i.e., 5".
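Both conversion directions described above come down to the single constant 25.4 mm per inch, which is exact by definition. A minimal illustrative sketch in Python (function names are my own):

```python
MM_PER_INCH = 25.4  # exact: the international inch is defined as 25.4 mm

def mm_to_inches(mm):
    return mm / MM_PER_INCH

def inches_to_mm(inches):
    return inches * MM_PER_INCH

print(round(mm_to_inches(130), 5))  # 5.11811
```

The same two functions cover every row of the "nearest numbers" table below: each inch value is just the millimeter value divided by 25.4.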
## Nearest Numbers for 130 Millimeters

| Millimeters | Inches |
|---|---|
| 130.1 mm | 5.1220472440945 in |
| 130.13 mm | 5.1232283464567 in |
| 130.18 mm | 5.1251968503937 in |
| 130.2 mm | 5.1259842519685 in |
| 130.3 mm | 5.1299212598425 in |
| 130.47 mm | 5.1366141732283 in |
| 130.5 mm | 5.1377952755906 in |
| 130.59 mm | 5.1413385826772 in |
| 130.6 mm | 5.1417322834646 in |
| 130.8 mm | 5.1496062992126 in |
| 130.81 mm | 5.15 in |
| 130.9 mm | 5.1535433070866 in |
| 130.92 mm | 5.1543307086614 in |
| 131 mm | 5.1574803149606 in |
| 131.1 mm | 5.1614173228346 in |
| 131.19 mm | 5.1649606299213 in |
| 131.2 mm | 5.1653543307087 in |
| 131.24 mm | 5.1669291338583 in |
| 131.35 mm | 5.1712598425197 in |
| 131.4 mm | 5.1732283464567 in |

Q: How many mm are in an inch?
A: There are 25.4 millimeters in one inch.

Q: How do you convert 130 Millimeters (mm) to Inches (in)?
A: 130 millimeters is equal to 5.11811 inches. The formula to convert 130 mm to in is 130 / 25.4.

Q: How many millimeters are in 130 inches?
A: 130 inches is equal to 3,302 millimeters (130 × 25.4 = 3,302).
1,202
4,222
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.53125
4
CC-MAIN-2022-05
latest
en
0.853452
http://web2.0calc.com/questions/solving-rational-equations_1
1,516,248,042,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084887065.16/warc/CC-MAIN-20180118032119-20180118052119-00355.warc.gz
366,268,630
6,680
# Solving Rational Equations

$${1\over x(x-4)}+{5-2x\over x^2-3x-4}={5\over x(x+1)}$$

Solve this rational equation.

#1
Solve for x:
1/(x (x - 4)) + (5 - 2 x)/(x^2 - 3 x - 4) = 5/(x (x + 1))
Hint: | Write the left hand side as a single fraction.
Bring 1/((x - 4) x) + (5 - 2 x)/(x^2 - 3 x - 4) together using the common denominator x (x - 4) (x + 1):
(-2 x^2 + 6 x + 1)/(x (x - 4) (x + 1)) = 5/(x (x + 1))
Hint: | Multiply both sides by a polynomial to clear fractions.
Cross multiply:
x (x + 1) (-2 x^2 + 6 x + 1) = 5 x (x - 4) (x + 1)
Hint: | Write the quartic polynomial on the left hand side in standard form.
Expand out terms of the left hand side:
-2 x^4 + 4 x^3 + 7 x^2 + x = 5 x (x - 4) (x + 1)
Hint: | Write the cubic polynomial on the right hand side in standard form.
Expand out terms of the right hand side:
-2 x^4 + 4 x^3 + 7 x^2 + x = 5 x^3 - 15 x^2 - 20 x
Hint: | Move everything to the left hand side.
Subtract 5 x^3 - 15 x^2 - 20 x from both sides:
-2 x^4 - x^3 + 22 x^2 + 21 x = 0
Hint: | Factor the left hand side.
The left hand side factors into a product:
-x (x + 1) (x + 3) (2 x - 7) = 0
Hint: | Multiply both sides by a constant to simplify the equation.
Multiply both sides by -1:
x (x + 1) (x + 3) (2 x - 7) = 0
Hint: | Find the roots of each factor in the product separately.
Split into four equations:
x = 0 or x + 1 = 0 or x + 3 = 0 or 2 x - 7 = 0
Hint: | Look at the second equation: Solve for x.
Subtract 1 from both sides:
x = 0 or x = -1 or x + 3 = 0 or 2 x - 7 = 0
Hint: | Look at the third equation: Solve for x.
Subtract 3 from both sides:
x = 0 or x = -1 or x = -3 or 2 x - 7 = 0
Hint: | Look at the fourth equation: Isolate terms with x to the left hand side.
x = 0 or x = -1 or x = -3 or 2 x = 7
Hint: | Solve for x.
Divide both sides by 2:
x = 0 or x = -1 or x = -3 or x = 7/2
Hint: | Now test that these solutions are correct by substituting into the original equation.
Check the solution x = -3.
1/((x - 4) x) + (5 - 2 x)/(x^2 - 3 x - 4) ⇒ (5 - 2(-3))/(-4 - 3(-3) + (-3)^2) + 1/((-4 - 3)(-3)) = 5/6
5/(x (x + 1)) ⇒ -5/(3 (1 - 3)) = 5/6: So this solution is correct
Hint: | Check the solution x = -1.
1/((x - 4) x) + (5 - 2 x)/(x^2 - 3 x - 4) ⇒ (5 - 2(-1))/(-4 - 3(-1) + (-1)^2) + 1/((-4 - 1)(-1)), which is undefined (division by zero)
5/(x (x + 1)) ⇒ -5/(1 - 1), which is undefined (division by zero): So this solution is extraneous
Hint: | Check the solution x = 0.
1/((x - 4) x) + (5 - 2 x)/(x^2 - 3 x - 4) ⇒ 1/((0 - 4)·0) + (5 - 2·0)/(-4 - 3·0 + 0^2), which is undefined (division by zero)
5/(x (x + 1)) ⇒ 5/(0 (1 + 0)), which is undefined (division by zero): So this solution is extraneous
Hint: | Check the solution x = 7/2.
1/((x - 4) x) + (5 - 2 x)/(x^2 - 3 x - 4) ⇒ 1/((1/2)(7/2 - 4)·7) + (5 - (2·7)/2)/(-4 - (3·7)/2 + (7/2)^2) = 20/63
5/(x (x + 1)) ⇒ 5/((7/2)(1 + 7/2)) = 20/63: So this solution is correct
Hint: | Gather any correct solutions.
The solutions are: x = -3 or x = 7/2
Guest Sep 13, 2017

#2
$$\frac{1}{x(x-4)}+\frac{5-2x}{x^2-3x-4}=\frac{5}{x(x+1)}$$
Factor the x^2 - 3x - 4. What two numbers add to -3 and multiply to -4? +1 and -4.
$$\frac{1}{x(x-4)}+\frac{5-2x}{(x+1)(x-4)}=\frac{5}{x(x+1)}$$
Multiply the first fraction by $$\frac{x+1}{x+1}$$ .
$$\frac{1(x+1)}{x(x-4)(x+1)}+\frac{5-2x}{(x+1)(x-4)}=\frac{5}{x(x+1)}$$
Multiply the middle fraction by $$\frac{x}{x}$$ .
$$\frac{1(x+1)}{x(x-4)(x+1)}+\frac{(5-2x)x}{(x+1)(x-4)x}=\frac{5}{x(x+1)}$$
Multiply the last fraction by $$\frac{x-4}{x-4}$$ .
$$\frac{1(x+1)}{x(x-4)(x+1)}+\frac{(5-2x)x}{(x+1)(x-4)x}=\frac{5(x-4)}{x(x+1)(x-4)}$$
Now we have a common denominator. Let's multiply both sides by this denominator...but first we must say that x ≠ 0, x ≠ 4, and x ≠ -1, because these values cause a zero in the denominator of the original equation.
1(x + 1) + (5 - 2x)x = 5(x - 4)          When x ≠ 0, x ≠ 4, and x ≠ -1.
Multiply out the parentheses and simplify.
x + 1 + 5x - 2x^2 = 5x - 20
-2x^2 + x + 21 = 0
Solve for x with the quadratic formula.
$$x = {-1 \pm \sqrt{1^2-4(-2)(21)} \over 2(-2)} \\~\\ x = {-1 \pm 13\over -4} \\~\\ x=\frac{12}{-4}=-3\qquad\text{ or }\qquad x=\frac{-14}{-4}=\frac{7}{2}$$ hectictar  Sep 13, 2017
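Both answers agree: x = -3 and x = 7/2 solve the equation, while x = 0 and x = -1 are excluded because they zero a denominator. As an independent check (not from the thread), exact rational arithmetic confirms this:

```python
from fractions import Fraction

def residual(x):
    """LHS minus RHS of 1/(x(x-4)) + (5-2x)/(x^2-3x-4) = 5/(x(x+1)),
    computed exactly with rationals; zero means x is a solution."""
    x = Fraction(x)
    return (1 / (x * (x - 4))
            + (5 - 2 * x) / (x**2 - 3 * x - 4)
            - 5 / (x * (x + 1)))

assert residual(-3) == 0              # x = -3 checks out
assert residual(Fraction(7, 2)) == 0  # x = 7/2 checks out

for bad in (0, -1):                   # both zero a denominator
    try:
        residual(bad)
    except ZeroDivisionError:
        print(f"x = {bad}: excluded (zero denominator)")
```

Using `Fraction` instead of floats keeps every intermediate value exact, so `== 0` is a genuine algebraic check rather than a floating-point approximation.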
1,940
4,419
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.59375
5
CC-MAIN-2018-05
latest
en
0.82852
https://moderncalculators.com/home-loan-affordability-calculator-affordability-calculator-house-affordability-calculator-home-loan/
1,721,074,680,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514713.74/warc/CC-MAIN-20240715194155-20240715224155-00839.warc.gz
352,302,219
67,753
Embark on your home-buying journey with confidence by leveraging the power of a home loan affordability calculator. This essential tool will help you determine your borrowing power, find the right property price range, and compare various loan scenarios. Discover how a house affordability calculator can simplify your path to homeownership and make your dream home a reality! This calculator estimates house affordability based on a fixed monthly amount allocated to housing costs. # House Affordability Calculator ## How much home can I afford? When it comes to buying a home, one of the most important questions you should ask yourself is, "How much home can I afford?" Determining how much you can afford to spend on a house can help you avoid financial stress and make the home buying process much smoother. In this article, we'll discuss the factors that impact home affordability and how you can calculate your home affordability. ## How to calculate your house affordability? To determine how much home you can afford, there are a few steps you should take: 1. Calculate Your Monthly Income: To calculate your monthly income, add up all of your sources of income, including your salary, bonuses, and any other income streams you may have. 2. Calculate Your Monthly Debt: To calculate your monthly debt, add up all of your monthly debt payments, including credit card payments, car payments, and any other loans you may have. 3. Calculate Your Debt-to-Income Ratio: Divide your total monthly debt payments by your gross monthly income; lenders use this ratio to gauge how much you can afford to borrow. 4. Calculate Your Maximum Monthly Payment: To calculate your maximum monthly payment, you'll need to consider a few factors, including your monthly income, debt-to-income ratio, and down payment. The general rule of thumb is that your monthly mortgage payment should not exceed 28% of your monthly income. However, this can vary depending on your individual financial situation. To calculate your maximum monthly payment, multiply your monthly income by 0.28.
For example, if your monthly income is \$5,000, your maximum monthly payment would be \$1,400. Next, subtract your monthly debt payments from your maximum monthly payment. For example, if your maximum monthly payment is \$1,400 and your monthly debt payments are \$500, your maximum monthly mortgage payment would be \$900. Finally, subtract your estimated monthly property taxes and insurance from your maximum monthly mortgage payment to determine the maximum amount you can afford to pay towards your principal and interest. 5. Use a Home Affordability Calculator: There are many home affordability calculators available online that can help you determine how much home you can afford based on your income, down payment, and other factors. These calculators can also help you estimate your monthly mortgage payments. ### Tips for Improving Your Home Affordability If you find that you can't afford as much home as you would like, there are a few things you can do to improve your home affordability: 1. Increase Your Income: Increasing your income can help you qualify for a larger mortgage. Consider taking on a side job or asking for a raise at work. 2. Save for a Larger Down Payment: Saving for a larger down payment can help you qualify for a lower interest rate and reduce your monthly mortgage payments. 3. Consider a Fixer-Upper: Consider buying a fixer-upper that needs some work. These homes are often priced lower than move-in ready homes and can be a great way to get into a larger home for less money. ## Which factors affect your Home Loan Affordability? Below are the key factors that can affect your home loan affordability. 1. Income and Employment Status Your income and employment status are the most important factors when it comes to determining your home loan affordability. Lenders want to see that you have a stable income and that it's sufficient to cover your monthly mortgage payments.
They'll also look at your employment history to see how long you've been with your current employer and whether you have a stable job. 2. Debt-to-Income Ratio Your debt-to-income ratio is another important factor that lenders consider when determining your home loan affordability. This ratio compares your monthly debt payments to your monthly income. Lenders want to see that your debt payments are manageable and that you have enough income left over to cover your monthly mortgage payments. 3. Credit Score Your credit score is a measure of your creditworthiness, and it plays a big role in determining your home loan affordability. Lenders use your credit score to determine how likely you are to repay your mortgage. A higher credit score can help you qualify for a lower interest rate, which can make your monthly mortgage payments more affordable. 4. Down Payment The size of your down payment can also affect your home loan affordability. Generally, the larger your down payment, the less you'll need to borrow, which can result in lower monthly mortgage payments. Additionally, a larger down payment can make you a more attractive borrower to lenders, which can help you qualify for a lower interest rate. 5. Property Taxes and Insurance When calculating your home loan affordability, it's important to consider property taxes and insurance. These expenses are typically included in your monthly mortgage payment, so they can have a big impact on your overall affordability. Property taxes and insurance can vary depending on the location and value of the property, so it's important to factor them into your calculations. 6. Other Expenses Finally, it's important to consider other expenses when determining your home loan affordability. These can include utilities, maintenance, and repairs. You'll also need to factor in any other debts you have, such as car loans or student loans. In conclusion, there are several factors that can affect your home loan affordability.
These include your income and employment status, debt-to-income ratio, credit score, down payment, property taxes and insurance, and other expenses. By considering these factors and using a home loan affordability calculator, you can determine how much you can afford to borrow for a home purchase and make a more informed decision about your home buying journey. ## What Are the Front-End and Back-End Ratios? When applying for a home loan, there are many factors that lenders consider to determine your eligibility. Two of the most important factors are your front-end ratio and back-end ratio. Understanding these ratios can help you determine how much you can afford to borrow and increase your chances of getting approved for a home loan. In this article, we will explain what front-end and back-end ratios are and how they impact your home loan affordability. ### Front-End Ratio Your front-end ratio, also known as your housing ratio, is the percentage of your monthly income that goes toward your housing expenses, such as your mortgage, property taxes, and insurance. Lenders use this ratio to determine if you can afford to make your monthly mortgage payments. The recommended front-end ratio is 28%, which means your housing expenses should not exceed 28% of your gross monthly income. To calculate your front-end ratio, simply divide your monthly housing expenses by your gross monthly income. For example, if your monthly housing expenses are \$1,500 and your gross monthly income is \$5,000, your front-end ratio would be 30%. ### Back-End Ratio Your back-end ratio, also known as your debt-to-income ratio (DTI), is the percentage of your monthly income that goes toward all of your monthly debt payments (including your mortgage, car payments, and credit cards); most lenders prefer it to be no more than 36%. To calculate your back-end ratio, simply divide your total monthly debt payments by your gross monthly income. For example, if your total monthly debt payments are \$2,000 and your gross monthly income is \$5,000, your back-end ratio would be 40%. ### Why Front-End and Back-End Ratios Matter Front-end and back-end ratios are important because they help lenders determine your ability to afford a home loan.
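The two calculations above are simple ratios. A minimal Python sketch using the article's example figures ($1,500 housing costs, $2,000 total debt, $5,000 gross income; function names are my own):

```python
def front_end_ratio(monthly_housing_costs, gross_monthly_income):
    # housing costs: mortgage principal and interest, property taxes, insurance
    return monthly_housing_costs / gross_monthly_income

def back_end_ratio(total_monthly_debt, gross_monthly_income):
    # total debt: housing costs plus car loans, credit cards, student loans, etc.
    return total_monthly_debt / gross_monthly_income

print(f"front-end: {front_end_ratio(1500, 5000):.0%}")  # front-end: 30%
print(f"back-end: {back_end_ratio(2000, 5000):.0%}")    # back-end: 40%
```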
If your ratios are too high, you may be at risk of defaulting on your mortgage payments, which can lead to foreclosure. On the other hand, keeping your ratios low improves your chances of getting approved for a home loan. ### Improving Your Front-End and Back-End Ratios If your front-end or back-end ratio is too high, there are several ways to improve it: 1. Pay off debt: Paying off debt will lower your back-end ratio and increase your chances of getting approved for a home loan. 2. Reduce your housing expenses: If you can reduce your housing expenses, your front-end ratio will improve. 3. Consider a co-signer: If you have a co-signer with good credit and a low DTI, it can improve your chances of getting approved for a home loan. ## Conventional Loans and the 28/36 Rule Conventional loans have specific requirements, including a debt-to-income ratio (DTI) that must be met. In this article, we'll explore the 28/36 rule that is used to determine whether you qualify for a conventional loan. ### What is the 28/36 Rule? The 28/36 rule is a guideline that lenders use to determine whether a borrower can afford a conventional loan. This rule states that a borrower's housing expenses, including principal, interest, taxes, and insurance, should not exceed 28% of their gross monthly income. Additionally, a borrower's total debt payments, including their mortgage, car payments, credit card payments, and other debts, should not exceed 36% of their gross monthly income. ### Why is the 28/36 Rule Important? The 28/36 rule is important because it helps lenders determine whether a borrower can afford a mortgage loan. If a borrower's DTI ratio is too high, they may not be able to make their mortgage payments each month, which could lead to default and foreclosure. The 28/36 rule is a way to ensure that borrowers are taking on a mortgage that they can realistically afford, based on their income and debt obligations.
To determine whether you meet the 28/36 rule, you'll need to calculate your DTI ratio. To do this, you'll need to add up your monthly debt payments, including your proposed mortgage payment, and divide that number by your gross monthly income. If your housing expenses exceed 28% of your gross monthly income or your total debt payments exceed 36% of your gross monthly income, you may not qualify for a conventional loan. For example, let's say you earn \$5,000 per month and your proposed mortgage payment is \$1,200 per month. You also have a car payment of \$300 per month and credit card payments of \$200 per month. Your total monthly debt payments would be \$1,700 (\$1,200 + \$300 + \$200). To calculate your DTI ratio, you would divide \$1,700 by \$5,000, which equals 34%. In this example, you would qualify for a conventional loan because your DTI ratio is below the 36% threshold. ### Alternatives to Conventional Loans If you don't meet the 28/36 rule requirements for a conventional loan, there are other options available. For example, you may be able to qualify for an FHA loan, which has more flexible requirements. FHA loans have a minimum credit score requirement of 580 and a maximum DTI ratio of 43%. Additionally, you may be able to qualify for a VA loan, which is a type of mortgage loan available to veterans and active-duty military members. VA loans have no down payment requirement and no minimum credit score requirement, although they do have a funding fee. If you're in the market to buy a home and are considering a conventional loan, it's important to understand the 28/36 rule and how it affects your ability to qualify for a mortgage. By calculating your DTI ratio, you can determine whether you meet the requirements for a conventional loan. If you don't meet the requirements, there are other options available, such as FHA and VA loans, that may be more suitable for your situation. 
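The 28/36 test described above reduces to two threshold comparisons. A small sketch using the article's example ($5,000 income, $1,200 proposed housing payment, $1,700 total monthly debt; the function name is my own):

```python
def meets_28_36_rule(gross_monthly_income, housing_payment, total_debt_payments):
    """True if housing <= 28% of gross income AND total debt <= 36%."""
    front_ok = housing_payment <= 0.28 * gross_monthly_income
    back_ok = total_debt_payments <= 0.36 * gross_monthly_income
    return front_ok and back_ok

# 24% housing ratio and 34% total DTI, both under the thresholds
print(meets_28_36_rule(5000, 1200, 1700))  # True
```

Raising either figure past its threshold (housing above $1,400 or total debt above $1,800 on this income) flips the result to False.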
## What Are FHA Loans and VA Loans? For many first-time homebuyers or those with limited financial resources, securing a loan can be a challenge. Fortunately, the Federal Housing Administration (FHA) and the Department of Veterans Affairs (VA) offer two popular loan programs that make homeownership more accessible. In this article, we'll take a closer look at FHA loans and VA loans, including their features, benefits, and eligibility requirements. ### What is an FHA Loan? An FHA loan is a mortgage insured by the Federal Housing Administration. The FHA is a government agency that was established in 1934 to help stimulate the housing market during the Great Depression. Today, the agency's primary role is to insure mortgages made by approved lenders, making it easier for borrowers to qualify for loans. One of the key benefits of an FHA loan is the lower down payment requirement. While most conventional loans require a minimum down payment of 20%, an FHA loan only requires a down payment of 3.5% for borrowers with a credit score of 580 or higher. For borrowers with a credit score between 500 and 579, a down payment of 10% is required. Another advantage of an FHA loan is that it is more forgiving of past credit problems. Borrowers with a bankruptcy, foreclosure, or short sale in their credit history may still be eligible for an FHA loan, provided they meet certain requirements and have reestablished their credit. FHA loans also have more lenient debt-to-income (DTI) ratios than conventional loans, making it easier for borrowers to qualify. However, FHA loans do have some drawbacks. For one, they require borrowers to pay mortgage insurance premiums (MIP) for the life of the loan. This can add up to thousands of dollars over the life of the loan. Additionally, FHA loans have loan limits that vary by county and are typically lower than conventional loan limits. ### What is a VA Loan? A VA loan is a mortgage guaranteed by the Department of Veterans Affairs.
The VA is a government agency that provides a wide range of benefits to eligible veterans, including home loans. VA loans are designed to help veterans, active-duty service members, and their families become homeowners. One of the biggest advantages of a VA loan is that it does not require a down payment. This means that eligible borrowers can finance 100% of the purchase price of their home. VA loans also have no private mortgage insurance (PMI) requirement, which can save borrowers hundreds of dollars per month. Another benefit of a VA loan is that it has more flexible credit requirements than conventional loans. While lenders will still look at your credit score, they will also consider other factors such as your income, employment history, and debt-to-income ratio. This means that veterans with less-than-perfect credit may still be able to qualify for a VA loan. Like FHA loans, VA loans do have some limitations. For example, there are loan limits that vary by county and may change from year to year. Additionally, VA loans are only available to eligible veterans, active-duty service members, and their spouses. To qualify for a VA loan, you must meet certain service requirements, such as serving at least 90 days during wartime or 181 days during peacetime. ### Which Loan Is Right for You? Choosing between an FHA loan and a VA loan depends on your individual circumstances. If you are a veteran or active-duty service member, a VA loan may be your best option since it offers a zero-down payment requirement and more flexible credit requirements. However, if you have a credit score of 580 or higher and want to put down a smaller down payment, an FHA loan may be a better fit. ## Custom Debt-to-Income Ratios If you're considering taking out a loan, one of the most important factors that lenders will look at is your debt-to-income ratio (DTI). Your DTI is a measure of your monthly debt payments compared to your monthly income.
Most lenders use a standard DTI ratio of 43% or less, but some lenders offer custom DTI ratios. In this article, we'll take a closer look at what custom DTI ratios are and how they can benefit borrowers. ### What is a Custom DTI Ratio? A custom DTI ratio is a debt-to-income ratio that is tailored to the individual borrower. While most lenders use a standard DTI ratio of 43% or less, some lenders may offer a higher DTI ratio for borrowers with unique circumstances. For example, if you have a high income or significant assets, a lender may be willing to offer a higher DTI ratio. Custom DTI ratios are not as common as standard DTI ratios, but they can be a valuable tool for borrowers who are otherwise unable to qualify for a loan. For example, if you have a lot of debt but a high income, a lender may be willing to offer a higher DTI ratio because they believe you have the means to repay the loan. ### Benefits of a Custom DTI Ratio One of the biggest benefits of a custom DTI ratio is that it can help borrowers who would otherwise be unable to qualify for a loan. If you have a high level of debt or other financial obligations, a standard DTI ratio may not accurately reflect your ability to repay a loan. By offering a custom DTI ratio, lenders can take into account other factors such as your income, assets, and credit score to make a more informed lending decision. Custom DTI ratios can also benefit borrowers who have a high level of debt but are able to manage it effectively. For example, if you have a lot of student loan debt but have a solid income and a good track record of making payments on time, a custom DTI ratio may allow you to qualify for a loan that you would otherwise be denied. In addition, custom DTI ratios can be useful for borrowers who have unique circumstances that may not be reflected in a standard DTI ratio. 
For example, if you are self-employed or have irregular income, a lender may be willing to offer a higher DTI ratio because they believe you have the ability to generate income in the future. ### How to Qualify for a Custom DTI Ratio While custom DTI ratios can be a valuable tool for borrowers, they are not available to everyone. In general, you will need to have a strong financial profile and a good credit score to qualify for a custom DTI ratio. To qualify for a custom DTI ratio, you will typically need to provide documentation of your income, assets, and debts. This may include bank statements, tax returns, and proof of employment. You will also need to have a good credit score and a history of making payments on time. If you are interested in obtaining a custom DTI ratio, the best place to start is by talking to a lender. They can help you understand your options and guide you through the application process. Keep in mind that not all lenders offer custom DTI ratios, so you may need to shop around to find a lender that can meet your needs. ## FAQs Below, we will go over some frequently asked questions on home affordability. ### What is Home Affordability? Home affordability refers to the ability to purchase a home without overextending your finances. In general, home affordability is determined by a combination of your income, expenses, and credit score. Lenders use this information to calculate how much money you can borrow and what your monthly mortgage payment will be. ### How Much House Can I Afford? The amount of house you can afford depends on your income, expenses, and credit score. Most lenders use a debt-to-income ratio (DTI) of 43% or less as a guideline for determining how much house you can afford. To calculate your DTI, add up all of your monthly debt payments and divide them by your gross monthly income. This will give you a percentage that represents how much of your income is going towards debt. ### How is Mortgage Affordability Calculated?
Mortgage affordability is calculated using a number of factors, including your income, expenses, credit score, and down payment amount. Lenders will typically look at your debt-to-income ratio (DTI) to determine how much money you can afford to borrow. They will also consider your credit score, employment history, and other factors to determine your creditworthiness. ### What is a Good Credit Score for Buying a House? A good credit score for buying a house is generally considered to be 620 or higher. However, the higher your credit score, the better your chances of getting approved for a mortgage and receiving a lower interest rate. A credit score of 700 or higher is typically considered excellent. ### What is PMI? PMI stands for private mortgage insurance. It is typically required by lenders when a borrower makes a down payment of less than 20% of the home's purchase price. PMI protects the lender in case the borrower defaults on the loan. The cost of PMI can vary, but it is usually around 0.3% to 1.5% of the loan amount per year. ### How Much Should I Save for a Down Payment? Most lenders require a down payment of at least 3% to 5% of the home's purchase price. However, a larger down payment can help you get a lower interest rate and reduce your monthly mortgage payment. It's generally recommended to save at least 20% of the home's purchase price for a down payment. ### How Much Will My Monthly Mortgage Payment Be? Your monthly mortgage payment will depend on a number of factors, including the size of your mortgage, your interest rate, and the length of your loan. You can use an online mortgage calculator to estimate your monthly payment based on these factors. ### What is an Escrow Account? An escrow account is an account set up by your lender to hold funds for property taxes and homeowners insurance. When you make your monthly mortgage payment, a portion of it is set aside in the escrow account to cover these expenses.
This ensures that these expenses are paid on time and in full.

1. ### Can I Negotiate My Mortgage Rate?

Yes, you can negotiate your mortgage rate with your lender. However, it's important to do your research and shop around for the best rates before you begin negotiating. Having a good credit score, a stable income, and a down payment of at least 20% can also help you get a better rate.

## How to use our home affordability calculator?

We will guide you through how to use a home affordability calculator.

Step 1: Gather Your Financial Information

Before using a home affordability calculator, you'll need to gather your financial information. This includes your income, expenses, and debt. You'll also need to know your credit score and the amount of money you have saved for a down payment.

Step 2: Enter Your Income

The first input to a home affordability calculator is your income. This includes your gross annual income and any other income you receive, such as bonuses or rental income.

Step 3: Enter Your Expenses

Next, you'll need to enter your monthly expenses. This includes expenses such as car payments, student loans, and credit card payments. You'll also need to enter your monthly expenses for utilities, groceries, and other necessary expenses.

Step 4: Enter Your Debt

After entering your expenses, you'll need to enter your debt. This includes any outstanding debt you have, such as credit card debt, car loans, and student loans.

Step 5: Enter Your Down Payment

The next step is to enter your down payment. This is the amount of money you plan to put down on your new home. A larger down payment can help you get a lower interest rate and reduce your monthly mortgage payment.

Step 6: Enter Your Interest Rate

The next step is to enter the interest rate you expect to receive on your mortgage. You can check current mortgage rates online or speak to a mortgage lender to get an estimate.

Step 7: Enter Your Loan Term

Finally, you'll need to enter your loan term. This is the length of time you plan to take to pay off your mortgage.
Most mortgages have a term of 30 years, but you can choose a shorter or longer term depending on your financial goals.

Step 8: Review Your Results

Once you've entered all of your financial information, the home affordability calculator will provide you with an estimate of how much you can afford to spend on a home. It will also show you what your monthly mortgage payment will be.

Step 9: Adjust Your Numbers

If the results of the home affordability calculator aren't what you were expecting, you can adjust your numbers. For example, you can increase your down payment, reduce your debt, or adjust your loan term to see how it affects your monthly mortgage payment.

Step 10: Consult a Mortgage Lender

While a home affordability calculator can provide you with a good estimate of how much you can afford to spend on a home, it's always a good idea to consult with a mortgage lender. A lender can help you understand the different mortgage options available to you and can provide you with a more accurate estimate of what your monthly mortgage payment will be.

### Benefits of Using a Home Loan Affordability Calculator

Next, we'll guide you through the process of calculating your house affordability.

Step 1: Determine Your Monthly Income

The first step in calculating your house affordability is to determine your monthly income. This includes your gross income, which is your income before taxes and other deductions are taken out. Be sure to include any bonuses or other sources of income you receive.

Step 2: Calculate Your Debt-to-Income Ratio

Your debt-to-income ratio is a measure of how much of your monthly income goes towards paying off debt. To calculate your debt-to-income ratio, add up all of your monthly debt payments, including credit card payments, car payments, student loans, and any other loans or debts you have. Then, divide this total by your monthly income. For example, if your monthly debt payments total \$1,500 and your monthly income is \$5,000, your debt-to-income ratio is 30%.
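The worked example above is a one-line calculation; here is a minimal Python sketch (the function name is ours, not from any particular calculator):

```python
def debt_to_income(monthly_debt_payments, gross_monthly_income):
    # DTI = total monthly debt payments / gross monthly income
    return monthly_debt_payments / gross_monthly_income

# The article's example: $1,500 of debt payments on $5,000 of income
ratio = debt_to_income(1500, 5000)
print(f"{ratio:.0%}")  # → 30%
```

Lenders quote the same number as a percentage; the 43% guideline mentioned earlier corresponds to `ratio <= 0.43`.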
Step 3: Determine Your Down Payment Your down payment is the amount of money you will put down on your new home. The larger your down payment, the lower your monthly mortgage payments will be. Most lenders require a down payment of at least 5% of the home's purchase price. Step 4: Calculate Your Maximum Monthly Payment To calculate your maximum monthly payment, you'll need to consider a few factors, including your monthly income, debt-to-income ratio, and down payment. The general rule of thumb is that your monthly mortgage payment should not exceed 28% of your monthly income. However, this can vary depending on your individual financial situation. To calculate your maximum monthly payment, multiply your monthly income by 0.28. For example, if your monthly income is \$5,000, your maximum monthly payment would be \$1,400. Next, subtract your monthly debt payments from your maximum monthly payment. For example, if your maximum monthly payment is \$1,400 and your monthly debt payments are \$500, your maximum monthly mortgage payment would be \$900. Finally, subtract your estimated monthly property taxes and insurance from your maximum monthly mortgage payment to determine the maximum amount you can afford to pay towards your principal and interest. Step 5: Use an Online Calculator If you prefer not to do the calculations manually, you can use an online house affordability calculator. These calculators will take into account your income, debt-to-income ratio, down payment, and other factors to determine how much you can afford to spend on a home. Step 6: Get Pre-Approved for a Mortgage Once you have an idea of how much you can afford to spend on a home, it's a good idea to get pre-approved for a mortgage. This will give you a better idea of what you can realistically afford and will make the home buying process smoother. In conclusion, calculating your house affordability is an important step in the home buying process. 
By determining your monthly income, debt-to-income ratio, down payment, and maximum monthly payment, you can ensure that you don't end up with a mortgage that is too expensive.

Conclusion: Utilizing the Home Loan Affordability Calculator, Affordability Calculator House, and Affordability Calculator Home Loan can significantly improve your ability to find and finance the perfect home. These calculators offer valuable insights and guidance, empowering you to make informed decisions about your home-buying journey. By taking advantage of these tools, you can find the ideal property that suits your financial needs and achieve your dream of homeownership.
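The manual method in Step 4 above (cap the payment at 28% of gross monthly income, then subtract existing debt payments) can be sketched the same way; the 28% front-end ratio is the article's rule of thumb, not a universal lender requirement:

```python
def max_affordable_payment(monthly_income, monthly_debts, front_end_ratio=0.28):
    # Rule-of-thumb cap: mortgage payment <= 28% of gross monthly income,
    # minus existing monthly debt payments (as in the Step 4 example).
    return monthly_income * front_end_ratio - monthly_debts

# The article's worked example: $5,000 income and $500 of monthly debts
print(round(max_affordable_payment(5000, 500)))  # → 900
```

Estimated monthly property taxes and insurance would then be subtracted from this figure to get the amount available for principal and interest, as the article notes.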
Source: https://www.topperlearning.com/doubts-solutions/how-many-millions-make-a-billion-4he3k7gii/
# How many millions make a billion?

Asked by Topperlearning User, 29th October 2013, 2:17 AM

We know that 1 billion can be written as 1,000,000,000 = 1000 x 1,000,000 = 1000 x 1 million. Hence, 1000 millions make a billion.

Answered by Expert, 29th October 2013, 4:17 AM
Source: https://www.jiskha.com/display.cgi?id=1191978815
# Algebra

posted by .

Solve the polynomial equation. In order to obtain the first root, use synthetic division to test the possible rational roots.

5. 2x^3-13x^2+22x-8=0
6. x^3-8x^2-x+8=0

Find the rational zero of the polynomial function and use it to find all the zeros of the function.

3. f(x)=x^3+3x^2-4x-12
4. f(x)=3x^3-14x^2+13x+6

• Algebra - you are correct in all cases
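These exercises combine two techniques: list the candidate roots with the Rational Root Theorem, then test each one by synthetic division. A sketch (function names are ours), applied to problem 6, x^3 - 8x^2 - x + 8 = 0:

```python
from fractions import Fraction

def synthetic_division(coeffs, r):
    # Divide a polynomial (coefficients listed from highest degree down)
    # by (x - r); returns (quotient coefficients, remainder).
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]

def rational_root_candidates(coeffs):
    # Rational Root Theorem: every rational root p/q has p dividing the
    # constant term and q dividing the leading coefficient.
    a0, an = abs(coeffs[-1]), abs(coeffs[0])
    ps = [d for d in range(1, a0 + 1) if a0 % d == 0]
    qs = [d for d in range(1, an + 1) if an % d == 0]
    return sorted({Fraction(s * p, q) for p in ps for q in qs for s in (1, -1)})

coeffs = [1, -8, -1, 8]          # x^3 - 8x^2 - x + 8
roots = [r for r in rational_root_candidates(coeffs)
         if synthetic_division(coeffs, r)[1] == 0]
print([str(r) for r in roots])   # → ['-1', '1', '8']
```

Once one root r is found, the quotient returned by `synthetic_division` is the depressed polynomial, which can be factored or solved directly for the remaining zeros.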
Source: http://healthcare-economist.com/2016/12/28/what-is-a-pseudo-r-squared/
Healthcare Economist

Unbiased Analysis of Today's Healthcare Issues

What is a Pseudo R-squared?

Written By: Jason Shafrin - Dec 28, 2016

When running an ordinary least squares (OLS) regression, one common metric to assess model fit is the R-squared (R2). The R2 metric is calculated as follows.

• R2 = 1 – [Σi(yi-ŷi)2]/[Σi(yi-ȳ)2]

The dependent variable is y, the predicted value from the OLS regression is ŷ, and the average value of y across all observations is ȳ. The index for observations is omitted for brevity. One can interpret the R2 metric in a variety of ways. UCLA's Institute for Digital Research and Education explains as follows:

1. R-squared as explained variability – The denominator of the ratio can be thought of as the total variability in the dependent variable, or how much y varies from its mean. The numerator of the ratio can be thought of as the variability in the dependent variable that is not predicted by the model. Thus, this ratio is the proportion of the total variability unexplained by the model. Subtracting this ratio from one results in the proportion of the total variability explained by the model. The more variability explained, the better the model.
2. R-squared as improvement from null model to fitted model – The denominator of the ratio can be thought of as the sum of squared errors from the null model–a model predicting the dependent variable without any independent variables. In the null model, each y value is predicted to be the mean of the y values. Consider being asked to predict a y value without having any additional information about what you are predicting. The mean of the y values would be your best guess if your aim is to minimize the squared difference between your prediction and the actual y value. The numerator of the ratio would then be the sum of squared errors of the fitted model. The ratio is indicative of the degree to which the model parameters improve upon the prediction of the null model.
The smaller this ratio, the greater the improvement and the higher the R-squared.

3. R-squared as the square of the correlation – The term "R-squared" is derived from this definition. R-squared is the square of the correlation between the model's predicted values and the actual values. This correlation can range from -1 to 1, and so the square of the correlation then ranges from 0 to 1. The greater the magnitude of the correlation between the predicted values and the actual values, the greater the R-squared, regardless of whether the correlation is positive or negative.

So then what is a pseudo R-squared? When running a logistic regression, many people would like a similar goodness of fit metric. An R-squared value does not exist, however, for logit regressions since these regressions rely on "maximum likelihood estimates arrived at through an iterative process. They are not calculated to minimize variance, so the OLS approach to goodness-of-fit does not apply." However, there are a few variations of a pseudo R-squared which are analogs to the OLS R-squared. For instance:

• Efron's Pseudo R-Squared. R2 = 1 – [Σi(yi-π̂i)2]/[Σi(yi-ȳ)2], where π̂i are the model's predicted values.
• McFadden's Pseudo R-Squared. R2 = 1 – [ln LL(M̂full)]/[ln LL(M̂intercept)]. This approach is one minus the ratio of two log likelihoods. The numerator is the log likelihood of the logit model selected and the denominator is the log likelihood if the model just had an intercept. McFadden's Pseudo R-Squared is the approach used as the default for a logit regression in Stata.
• McFadden's Pseudo R-Squared (adjusted). R2adj = 1 – [ln LL(M̂full)-K]/[ln LL(M̂intercept)]. This approach is similar to the one above, but the model is penalized for including too many predictors, where K is the number of regressors in the model. This adjustment, however, makes it possible to have negative values for McFadden's adjusted Pseudo R-squared.
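To make the formulas concrete, here is a small pure-Python sketch computing Efron's and McFadden's versions from a vector of fitted probabilities. The toy outcomes and probabilities below are invented for illustration, not drawn from a real model:

```python
import math

def log_likelihood(y, p):
    # Bernoulli log-likelihood of outcomes y under predicted probabilities p
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

def mcfadden_r2(y, p_model, n_params=None):
    # McFadden: 1 - ln LL(full) / ln LL(intercept-only). The intercept-only
    # model predicts the sample mean of y for every observation. Passing
    # n_params gives the adjusted version, which subtracts the number of
    # regressors K from the full model's log likelihood.
    p_null = sum(y) / len(y)
    ll_null = log_likelihood(y, [p_null] * len(y))
    ll_full = log_likelihood(y, p_model)
    if n_params is not None:
        ll_full -= n_params
    return 1 - ll_full / ll_null

def efron_r2(y, p_model):
    # Efron: 1 - sum of squared residuals / total sum of squares
    y_bar = sum(y) / len(y)
    sse_model = sum((yi - pi) ** 2 for yi, pi in zip(y, p_model))
    sse_null = sum((yi - y_bar) ** 2 for yi in y)
    return 1 - sse_model / sse_null

# Toy data: six outcomes and made-up fitted probabilities from "some" logit
y = [1, 1, 1, 0, 0, 0]
p_hat = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
print(round(mcfadden_r2(y, p_hat), 3))  # → 0.67
print(round(efron_r2(y, p_hat), 3))     # → 0.813
```

In practice the log likelihoods come from the fitted model object; statsmodels' logit results, for example, expose `llf` and `llnull`, from which the McFadden value is `1 - res.llf / res.llnull`.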
There are a number of other Pseudo R-Squared approaches that are listed on the UCLA IDRE website.
Source: https://math.stackexchange.com/questions/2985781/can-one-state-and-prove-that-euclidean-space-has-genus-0-in-hilberts-geometry
# Can one state and prove that Euclidean space has genus $0$ in Hilbert's geometry?

Related to a more historically inclined question, I'd like to ask if the language and axioms of Hilbert's geometry suffice to first state and then prove that in the standard model (Euclidean space) every circle can be (continuously) collapsed to a point, i.e. that Euclidean space has genus $0$. Or is this beyond Euclidean geometry? (If it cannot be proved, there must be models of genus $\neq 0$?)

• I'm fairly sure I can't give a definitive answer to this question, but regardless I think that some clarification is needed. What do you mean by proving Euclidean space has genus 0 in the language of Hilbert's geometry? I ask because continuity and continuous maps from the circle × the interval to Euclidean space are definitely not notions of Hilbert's geometry. Continued below. – jgon Nov 5 '18 at 14:23
• This is a crucial point though because geometric circles and topological circles do not coincide as notions at all. Consider the 2-torus with metric induced by the quotient $\Bbb{R}^2/\Bbb{Z}^2$. Then lines of rational slope are topologically circles, but not geometrically. Moreover, every geometric circle can be continuously contracted to a point, since geometric circles are the images of geometric circles in $\Bbb{R}^2$. However the torus of course doesn't have genus 0. (Depending on how you define circles in a Riemannian manifold I guess. I don't do much Riemannian geometry.) – jgon Nov 5 '18 at 14:24
• @jgon: Would geometric and topological closed curves coincide? – Hans-Peter Stricker Nov 5 '18 at 14:55
• I'm not sure. More precisely, I'm not sure how one would define a geometric closed curve in the language of Hilbert's geometry. – jgon Nov 5 '18 at 14:57
• This may be recklessly naive, but wouldn't it be as simple as proving: Given any circle $C$ with center $O$ and radius $R$, and given any $0<r<R$, there exists a circle centered at $O$ with radius $r$?
That seems to capture the idea of "a circle can be collapsed to a point" pretty naturally without needing to formally define continuity. – mweiss Nov 5 '18 at 15:15
Source: https://math.stackexchange.com/questions/1133309/correspondence-between-valuations-and-valuation-rings
# Correspondence between valuations and valuation rings.

Matsumura gives us the following definition of an additive valuation.

A map $\nu: K \rightarrow H \cup \{\infty\}$ from a field $K$ to $H \cup \{\infty\}$ is called an additive valuation or just a valuation of $K$ if it satisfies the conditions

(1) $\nu(xy) = \nu(x) + \nu(y)$

(2) $\nu(x+y) \geq \min\{\nu(x), \nu(y)\}$

(3) $\nu(x) = \infty \iff x=0$

At the end of the page (p.75), Matsumura asks us to prove the following:

Let $\nu: K \rightarrow G \cup \{\infty\}$ and $\nu' : K \rightarrow G' \cup \{\infty\}$ be two additive valuations where $K$ is a field and $G, G'$ are ordered groups. Suppose that both $\nu$ and $\nu'$ have the same valuation ring $R$. Then I want to prove that there is an order-isomorphism $\phi: H \rightarrow H'$ with $\nu' = \phi\nu$, where $H$ and $H'$ are the images of $\nu$ and $\nu'$, respectively.

This was my initial thought: Let $x \in H$, then $\exists \space a \in K$ with $\nu(a) = x$. So define $\phi: H \rightarrow H'$ by $x \mapsto \nu'(a)$. Suppose that $\nu(a)=x$ and $\nu(b)=y$. Then $\phi(x) + \phi(y)= \nu'(a) + \nu'(b) = \nu'(ab) = \phi(x + y)$ since $x + y =\nu(a) + \nu(b) = \nu(ab)$. So $\phi$ is obviously a group homomorphism. Also if $\nu(a) \geq \nu(b)$, then $\nu(ab^{-1}) = \nu(a) - \nu(b) \geq 0 \implies ab^{-1} \in R$. Since $\nu$ and $\nu'$ have the same valuation ring, $\nu'(ab^{-1}) \geq 0$ as well.

But...I later realized that $\nu$ is not necessarily injective. So it may be that two elements in $K$ are sent to the same element in $G$...so then $\phi$ would not be well-defined. However, I cannot see any other way to define $\phi$, so I would appreciate it if anybody helped me out.

Suppose there are elements $a,b\in K$ such that $\nu(a)=\nu(b) = x$, but $\nu'(a)<\nu'(b)$. Then $\nu(ab^{-1}) = x-x = 0$, so $ab^{-1}\in R$. However, $\nu'(ab^{-1}) = \nu'(a)-\nu'(b)<0$, which means that $ab^{-1}\notin R$. This is a contradiction.
Source: https://notrickszone.com/2012/08/27/oh-no-six-thousandths-of-one-percent-0-006-more-of-the-worlds-ice-melted-this-summer/
# Oh No! Six Thousandths Of One Percent (0.006%) More Of The World's Ice Melted This Summer!

So just how bad is the Arctic ice melt this year? Listening to the alarmists you'd think the world's ice supply was rapidly dwindling. When dealing with such phenomena, you have got to pull your eyeballs back a little and take a look at the entire picture to put things in their proper perspective.

The headlines are that Arctic ice melt will reach a record low since satellite measurements have been taken (all the way back to the 1970s, i.e. roughly a whole half an AMO cycle – sarc off). So how much more Arctic sea ice has melted (been dispersed) this year? Let's say the Arctic sea ice retreats to 3.5 million km2 by mid September. That would mean 800,000 sq. km less than 2007. That sounds frightening. But how much ice is that? Answer: 800,000 km2 x 0.002 km thick = 1600 cubic km. Holy moly!

Now, how much is that in relation to the world's total ice volume? This is important to know. If it's 2 or 3%, then we will need to worry. Using the numbers from Wikipedia we can calculate a rough inventory:

1. Antarctica continent:
Area = 13,700,000 km2 covered with ice
Mean ice thickness: 1.6 km
Ice volume: 21,920,000 km3

2. Antarctica sea ice (Aug):
Area 15,000,000 km2
Mean sea ice thickness: 0.002 km (rough conservative estimate)
Antarctica sea ice volume = 30,000 km3

3. Greenland
Ice volume (Wikipedia) = 2,850,000 km3

4. Arctic sea ice
Area, September 2007: 4.3 million km2
Average ice thickness: 0.002 km
Total September Arctic sea ice volume = 8600 km3

Adding them up, it yields a total ice volume of: 24,808,600 cubic km stockpiled on the planet (neglecting the glaciers on mountains, which are puny in comparison). This year in the Arctic, I estimate (see above) that a "whopping" 1600 km3 more Arctic sea ice will have melted by mid September. Yes, 1600 km3 from the total of almost 25 million we have stocked on Earth! How much is that in percent?
(1600 / 24,808,600) x 100 =

## 0.006%

0.006% more of the world's ice melted this year. At this rate it'll take 166 years to see a 1% reduction. This is like taking a glass of ice from a frozen swimming pool. The number is so small that it is outside the statistical margins of certainty. Scientists are not even sure how thick the ice is at many locations. As one reader points out: we are talking about parts per million here! 🙂

This is why it's just plain stupid to hysterically focus on a thin film of ice at one pole. It's utter nonsense. Ice-free Arctics happened in the past and are nothing new. So with the wave of hysteria about to be unleashed by the merchants and prophets of doom in the days and weeks ahead, it will do everyone some good to keep it all in perspective. Not only should the focus expand beyond the Arctic, but it also has to expand back beyond the cool days of Ronald Reagan. If you want a clear picture of the Arctic, go back and look at the entire Holocene. Yes, the climate is a little warmer than 30 years ago, and so a little more ice (0.006%) is going to melt. How astonishing.

### 17 responses to "Oh No! Six Thousandths Of One Percent (0.006%) More Of The World's Ice Melted This Summer!"

1. oh come on, even my 12 yr old nephew has learned in school, "the icecaps are melting". sarc/off
2. OMG! That's 60 ppm.
3. […] No Tricks Zone Share this:PrintEmailMoreStumbleUponTwitterFacebookDiggRedditLike this:LikeBe the first to like this. This entry was posted in Climate Change and tagged arctic sea ice. Bookmark the permalink. ← Ed Caryl: Arctic Ice Loss: Temperature Or Soot? […]
4. well written! Thx Pierre
5. Interesting to compare: __18th Sept. 2007 versus __26th Aug. 2012, here: http://www.arcticportal.org/news/29-2012-climate/830-seaiceextent26082012
6. […] Oh No! Six Thousandths Of One Percent (0.006%) More Of The World's Ice Melted This Summer! […]
7.
Rahmstorf gets his chance to take his position on the Arctic case public: http://www.sueddeutsche.de/wissen/eisschmelze-und-klimawandel-im-teufelskreis-der-erwaermung-1.1452223

"The Arctic is now sending us clear alarm signals; we can only hope that people will stop closing their eyes to them."

oh no….. 😉

8. […] How much is that in percent? (1600 / 24,808,600) x 100 = 0.006% […]
9. […] So just how bad is the Arctic ice melt this year? Listening to the alarmists you'd think the world's ice supply was rapidly dwindling. […]
10. […] If that is true, then the world has 0.006% less ice this year than in 2007. (Source). […]
11. Your apples to oranges comparison is as ridiculous as it is stupid. The last time the Arctic Ocean experienced an ice free summer was 2 million years ago. And loss of that ice amplifies warming in the Arctic. Sea ice melt doesn't contribute to sea level rise. But Greenland ice melt does. Once the sea ice is gone, Greenland is next. But you'll be here, I'm sure, when some city or island nation was drowned by ice melt to write some stupid brutish and imbecile puff piece about why we shouldn't worry, why the concerns of scientists are overblown, and why we should all go back to sleep.
1. "And loss of that ice amplifies warming in the Arctic." Got any evidence for that?
12. […] If you do look at volume (Antarctica and the Arctic), then there really is nothing to worry about. Global ice volume varies by only a few thousandths of a percent globally each year – even over decades. I discussed this once already not long ago HERE. […]
13. […] No Tricks Zone […]
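The post's inventory arithmetic can be checked with a short script using only the figures quoted in the post. Note the exact share works out to about 0.0064%, which the post rounds down to 0.006%:

```python
# Rough global ice inventory from the post
# (areas in km^2, thicknesses in km, volumes in km^3)
antarctic_continent = 13_700_000 * 1.6    # = 21,920,000 km^3
antarctic_sea_ice   = 15_000_000 * 0.002  # = 30,000 km^3
greenland           = 2_850_000           # Wikipedia figure cited in the post
arctic_sea_ice_2007 = 4_300_000 * 0.002   # = 8,600 km^3

total = antarctic_continent + antarctic_sea_ice + greenland + arctic_sea_ice_2007
extra_melt = 800_000 * 0.002              # 2012 shortfall vs 2007, in km^3

print(f"total ice stock: {total:,.0f} km^3")
print(f"share melted: {extra_melt / total:.4%}")
```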
Source: http://pabrandt06.blogspot.com/2012/09/mistake.html
## Friday, September 14, 2012 ### Mistake? I was grading some student work today and came upon this mistake. In the class, we are currently studying problem solving strategies. This particular problem came from the 'Look For A Pattern' lesson. The problem asks the student to find the next four rows in Pascal's Triangle. Now, the majority of my students have not seen or worked with Pascal's Triangle before, so to them this is a seemingly random set of rows of numbers. I did not give them the background knowledge of the Triangle beforehand either because I wanted to see what they came up with as their answer. Usually students get it correct right away. This, however, was new to me. Because I only gave the first five rows, if you look at them as whole numbers rather than individual digits, they are all powers of 11. Pretty neat. This student went with that and continued. Unfortunately, Pascal's Triangle differs from this point on. She did find a pattern, thinking outside the box. Now, I did mention specifically in the problem that it is Pascal's Triangle, but she found a pattern. Does she have an understanding of finding patterns? I believe so. Do I mark her wrong because her answer is different than what I've got on my answer key? If I'm grading on finding patterns (which I am), then no. If I'm grading on their understanding of Pascal's Triangle (which they may have never seen before), then yes. But then again, why would I grade on something they've never been exposed to? I thought this solution was interesting; thought I would share.
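The divergence the student ran into is easy to verify: reading each row of Pascal's Triangle as concatenated digits matches 11^n only through row 4; from row 5 on, the two-digit entries break the correspondence, because the powers of 11 carry between place values while concatenation does not. A quick sketch (not from the post):

```python
def pascal_row(n):
    # Row n of Pascal's Triangle (row 0 is [1]), built by summing neighbours
    row = [1]
    for _ in range(n):
        row = [1] + [a + b for a, b in zip(row, row[1:])] + [1]
    return row

for n in range(7):
    row = pascal_row(n)
    digits = "".join(str(x) for x in row)
    # True for rows 0-4, False once entries like 10 appear in row 5
    print(n, row, digits == str(11 ** n))
```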
Source: https://thedecisionlab.com/biases/availability-heuristic/
The # Availability Heuristic , explained. ## What is the availability heuristic? The availability heuristic describes our tendency to use information that comes to mind quickly and easily when making decisions about the future. ## Where it occurs Imagine you are considering either John or Jane, two employees at your company, for a promotion. Both have a steady employment record, though Jane has been the highest performer in her department during her tenure. However, in Jane’s first year, she unwittingly deleted a company project when her computer crashed. The vivid memory of having lost that project likely weighs more heavily on the decision to promote Jane than it should. This is due to the availability heuristic, which suggests that singular memorable moments have an outsized influence on decisions. ## Individual effects The availability heuristic can lead to bad decision-making because memories that are easily recalled are frequently insufficient for figuring out how likely things are to happen again in the future. Ultimately, this leaves the decision-maker with low-quality information to form the basis of their decision. ## Group effects Exploring the availability heuristic leads to troubling conclusions across many different academic and professional areas. If each one of us analyzes information in a way that prioritizes memorability and nearness over accuracy, then the model of a rational, logical chooser, which is predominant in economics as well as many other fields, can be flawed at times. The implications of the availability heuristic suggest that many academics, policy-makers, business leaders, and media figures have to revisit their basic assumptions about how people think and act in order to improve the quality and accuracy of their work. ## Why it happens A heuristic is a ‘rule-of-thumb’, or a mental shortcut, that helps guide our decisions. When we make a decision, the availability heuristic makes our choice easier. 
However, the availability heuristic challenges our ability to accurately judge the probability of certain events, as our memories may not be realistic models for forecasting future outcomes.[1] For example, if you were about to board a plane, how would you go about calculating the probability that you would crash? Many different factors could impact the safety of your flight, and trying to calculate them all would be very difficult. Provided you didn't google the relevant statistics, your brain may do something else to satisfy your curiosity. In fact, many of us do this on an everyday basis.

##### Your brain uses shortcuts

Your brain could use a common mental shortcut by drawing upon the information that most easily comes to mind. Perhaps you had just read a news article about a massive plane crash in a nearby country. The memorable headline, paired with the image of a wrecked plane wreathed in flames, left an easily recalled impression, which causes you to wildly overrate the chance that you'll be involved in a similar crash. This is the availability heuristic at work.

The availability heuristic exists because some memories and facts are spontaneously retrieved, whereas others take effort and reflection to recall. Certain memories are automatically recalled for two main reasons: they appear to happen often, or they leave a lasting imprint on our minds.

##### Certain memories are recalled more easily than others

Those that appear to happen often generally coincide with other shortcuts we use to comprehend our world. This is seen in a study that Tversky and Kahneman, two pioneers of behavioral science, conducted in 1973.[2] They asked participants whether more words begin with the letter K or more words have K as their third letter. Even though a typical text contains twice as many words in which K is the third letter rather than the first, 70% of the participants said that more words begin with K.
This is because it is much easier for people to think of words that begin with K (e.g., kitchen, kangaroo, kale) than words that have K as the third letter (e.g., ask, cake, biking). Since words that begin with K are easier to think of, it seems like there are more of them.

Other events leave a lasting impression, which primes their chance of recall when we make decisions. Tversky and Kahneman exposed this tendency in a study conducted in 1983,[3] in which half of the participants were asked to guess the chance that a massive flood would occur somewhere in North America, while the other half were asked the likelihood of a massive flood occurring due to an earthquake in California. By definition, the chance of a flood in California is necessarily smaller than that of a flood anywhere in North America. Participants said, nonetheless, that the chance of the flood in California, provoked by an earthquake, was higher than that in all of North America.

An explanation is that an earthquake in California is easier to imagine. There is a coherent story, which begins with a familiar event (the earthquake) that causes the flood, in a context that creates a vivid picture in one's head. A large, ambiguous area like all of North America does not create a clear picture, so the prediction has no lasting mental imprint to draw on.

## Why it's important

The availability heuristic has serious consequences in most professional fields and many aspects of one's daily life. People make thousands of decisions per day, and factors such as media coverage, emotional reactions, and vivid images have greater influence than they would in an entirely rational calculation. Awareness of our intrinsic biases can be a safeguard against fallacious reasoning, unintentional discrimination, or costly mistakes in investments and business decisions.

## How to avoid it

The availability heuristic is a label for a core cognitive habit: saving mental effort wherever we can.
Unfortunately, unlike a sleight-of-hand trick, simply knowing how it works is not sufficient to overcome it completely.[4] The availability heuristic describes behavior that results from numerous shortcuts that our brain makes in order to process all of the world's information. Although awareness alone cannot change one's thought process, it is essential in order to support and implement policies that take the heuristic into account. Taking steps to recognize and check the availability heuristic is crucial for ensuring fair treatment for consumers and citizens in areas ranging from regulating gambling law, to preventing discrimination, to holding the media accountable.

##### System 1 and System 2 thinking

In practice, guaranteeing thoughtful and rigorous mental analysis is challenging. The availability heuristic is everywhere, so avoiding its effects demands what Daniel Kahneman and Amos Tversky referred to as 'System 2 thinking'. System 2 refers to the mental network that is engaged in deliberative, careful, and reflective decision-making,[5] as opposed to System 1, which is fast and automatic. The availability heuristic works on System 1: upon thorough reflection, people are able to realize that their quick approximations of probable outcomes are skewed. Overcoming the availability heuristic involves activating System 2 thinking. This is often easier to do in collective decision-making, because others can catch instances when one is captivated by superficially convincing (but ultimately false) information.

A more deliberate strategy to counter the availability heuristic is called 'red-teaming'. Red-teaming involves nominating one member of a group to challenge the prevailing opinion, no matter their personal beliefs.[6] Intentionally seeking out the mistakes that occur in individual decision-making can reduce the chance that heuristics are reflexively treated as facts.
##### Red-teaming for debiasing the availability heuristic

For red-teaming, or similar initiatives, to effectively identify the availability heuristic, we must be aware of the bias in order to observe its effect on the behavior of the group. Understanding a bias may not eliminate it completely from our decision-making; however, it increases the chances that we will be able to identify it in group settings, or in the behavior of colleagues and collaborators. Heuristics like the availability heuristic are especially tenacious until one develops an understanding of how they work. A dedicated devil's advocate can fall prey to the same biases that they are designed to prevent unless they are specifically attentive to the cases where those biases take effect.

Combining expert insights from behavioral science with dedicated resources can prevent bad decision-making and can help increase productivity across a variety of environments. For those of us without an expert consultant on hand, learning about behavioral science is a solid first step towards leveraging its power to influence important choices.

## How it all started

Amos Tversky and Daniel Kahneman's work in 1973[7] helped generate insights about the availability heuristic. They described it as the process at work "whenever [one] estimates frequency or probability by the ease with which instances or associations could be brought to mind." In simpler terms, one guesses the likelihood that things happen by using easily recalled memories as a reference. The concluding remarks of their paper noted that analyzing the heuristics a person uses when making decisions can predict whether their judgement will be too high or too low. Everyday life is filled with uncertainty due to the seemingly infinite number of decisions and pieces of information that our brains process daily, which is why knowing about common heuristics is so important.
By being aware of the availability heuristic, we can make fewer judgement errors under uncertain conditions.

## Example 1 - Lottery winners

Let's say you watch a documentary series, or see a plethora of advertisements, about the luxurious lives of those who won the lottery. After watching, you mistakenly figure that your chances of winning are higher than they actually are. Why did this happen? The documentary showcased the winners' luxury houses and brand-new sports cars; this left a strong impression in your mind, which will ultimately help with ease of recall.

Later that day, you were feeling lucky, so you bought a Lotto 6/49 ticket with a \$40 million jackpot prize. Because of the documentary, you figured you had a decent chance of winning: after all, those people won, and they were regular people like you before buying that lucky ticket. However, you forgot the homework assignment you did for your statistics class a few years earlier, where you calculated the odds of winning the 6/49 lottery as 1 in 13,983,816.[8] Unfortunately, your ticket did not win, which may not have surprised you if you could have more easily recalled the actual odds you were up against.

## Example 2 - Drug use and the media

A study by Russell Eisenman in 1993[9] examined how media coverage of specific topics can impact people's perceptions via the availability heuristic. In this study, college students were asked if drug use in the United States was increasing or decreasing. They were more likely to say that it was increasing, despite reputable survey data from the National Household Survey on Drug Abuse that showed otherwise. Eisenman cited a 1984 study by Tyler and Cook[10] which concluded that constant media coverage of certain topics like drug use can distort perceptions of how often those events occur in the real world.
The key idea is that news stories about sensationalized and relatively rare topics such as drug use or plane crashes can evoke the availability heuristic. People wildly overestimate the chance that these events happen compared to other deadly events that are statistically more likely, such as heart disease or car accidents. Depending on what you watch and read (and, perhaps most importantly, how much they inform your actions), your decisions could be based on heavily biased information.

## Summary

##### What it is

The availability heuristic describes the mental shortcut where we make decisions based on emotional cues, familiar facts, and vivid images that leave an easily recalled impression in our minds.

##### Why it happens

The brain tends to minimize the effort necessary to complete routine tasks. When making decisions, especially ones involving probability, certain memories and knowledge jump out to replace the complicated task of calculating statistics. Some memories leave a lasting impression because they connect to emotional triggers. Others seem familiar because they align with the way we process the world, such as recognizing words by their first letter.

##### Example #1 – Lottery winners

One buys lottery tickets because the lifestyle that follows a winning ticket comes to mind easily and vividly, while the probability of winning is a complex calculation that does not jump out while one is at the ticket counter.

##### Example #2 – Drug use and the media

Sensational news stories seem much more likely to occur than unremarkable (yet dangerous) activities. The availability heuristic skews the distribution of fear towards events that leave a lasting mental impression due to their graphic content or unexpected occurrence, versus comparatively dangerous yet more probable events.
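The 1-in-13,983,816 odds cited in Example 1 are just the count of 6-number combinations drawn from 49 balls; a quick check (figures from the article, nothing else assumed):

```python
from math import comb

# Number of distinct 6-number tickets in a 49-ball lottery (Lotto 6/49)
combinations = comb(49, 6)  # 13,983,816

# Probability that one ticket hits the jackpot
p_jackpot = 1 / combinations  # about 7.15e-08
```

At roughly 7 chances in 100 million per ticket, the vivid documentary footage is doing all of the persuasive work.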
##### How to avoid it

The best way to avoid the availability heuristic, on a small scale, is to combine expertise in behavioral science with dedicated attention and resources to locate the points where it takes hold of individual choices. On a larger scale, the solution remains similar. Dedicating a specialized team to focus on the role of heuristics in public policy, institutional behavior, or media output can achieve more logical outcomes wherever human behavior is concerned.

## Related resources

This article examines how nudging can be used to help drive desirable outcomes in the medical field, from increasing organ donors to reducing the use of misprescribed antibiotics. The author notes that the availability heuristic can get in the way of our efforts to stay healthy, such as when we remember that taking a specific screening test in the past hurt. This can be harmful if it causes us to avoid potentially helpful screening tests in the future.

How to Protect An Aging Mind From Financial Fraud

This article explores how elderly individuals are more likely to be victims of financial fraud from a behavioral science lens, and how this fraud can be prevented. The author notes that elderly individuals may be more susceptible to financial fraud because they think more in the present, which can increase their vulnerability in financial decision-making environments. This is an example of the availability heuristic and explains why fraudulent emails sometimes leverage urgent calls to action.

## Sources

1. Korteling, J. E. (Hans), A.-M. Brouwer, and A. Toet. "A Neural Network Framework for Cognitive Biases." Frontiers in Psychology. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6129743/
2. Tversky, Amos, and Daniel Kahneman. "Availability: A Heuristic for Judging Frequency and Probability." Cognitive Psychology 5, no. 2 (1973): 207–32. https://doi.org/10.1016/0010-0285(73)90033-9.
3. Tversky, Amos, and Daniel Kahneman. "Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment." Psychological Review 90, no. 4 (1983): 293–315. doi:10.1037/0033-295x.90.4.293.
4. Mason, Betsy. "Making People Aware of Their Implicit Biases Doesn't Usually Change Minds. But Here's What Does Work." PBS, 10 June 2020.
5. Kahneman, Daniel. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2015.
6. See 1.
7. Samson, Alain. The Behavioral Economics Guide 2017 (with an introduction by Cass Sunstein). Retrieved from https://www.behavioraleconomics.com
8. "LOTTO 6/49 Odds & Payouts." OLG. https://lottery.olg.ca/en-ca/lotto-games/lotto-649/lotto-649-odds-and-payouts.
9. Eisenman, Russell. "Belief That Drug Usage in the United States Is Increasing When It Is Really Decreasing: An Example of the Availability Heuristic." Bulletin of the Psychonomic Society 31, no. 4 (1993): 249–252. doi:10.3758/bf03334920.
10. Tyler, Tom R., and Fay L. Cook. "The Mass Media and Judgments of Risk: Distinguishing Impact on Personal and Societal Level Judgments." Journal of Personality and Social Psychology 47, no. 4 (1984): 693–708. doi:10.1037/0022-3514.47.4.693.
https://questions.llc/questions/1046005
# Melissa Popp is thinking about buying some shares of R.H. Lawncare Equipment, at \$48 per share. She expects the price of the stock to rise to \$60 over the next 3 years. During that time she also expects to receive annual dividends of \$4 per share. A. What is the intrinsic worth of this stock, given a 12% required rate of return? B. What is its expected return?

## A. Intrinsic worth

The intrinsic worth is the present value, at the 12% required return, of the cash flows Melissa actually expects over her 3-year holding period: three annual \$4 dividends plus the expected \$60 sale price at the end of year 3.

PV of dividends = \$4 × [(1 − 1.12^-3) / 0.12] = \$4 × 2.4018 ≈ \$9.61

PV of sale price = \$60 / 1.12^3 = \$60 / 1.404928 ≈ \$42.71

Intrinsic worth ≈ \$9.61 + \$42.71 = \$52.31

(Note: dividing the dividend by the required return, \$4 / 0.12 = \$33.33, is only valid for a perpetuity. Here the dividends run for just 3 years, so that shortcut badly overstates their value.)

Since \$52.31 is above the \$48 market price, the stock appears underpriced at a 12% required return.

## B. Expected return

The expected return is the discount rate r that equates the \$48 purchase price with the expected cash flows:

\$48 = \$4 × [(1 − (1 + r)^-3) / r] + \$60 / (1 + r)^3

By trial and error: at r = 15% the right-hand side is about \$48.58, and at r = 16% it is about \$47.42. Interpolating gives r ≈ 15.5%.

A quick sanity check: the dividend yield is \$4 / \$48 ≈ 8.3% per year, and the annualized capital gain is (60/48)^(1/3) − 1 ≈ 7.7%, for a total of roughly 16%, close to the exact 15.5%. Either way, the expected return exceeds the 12% required return, consistent with the stock looking underpriced in part A.
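As a numerical check, both figures can be computed directly: discount the three \$4 dividends and the \$60 terminal price at 12% for the intrinsic worth, then bisect for the rate that makes the present value equal the \$48 price (all amounts come from the question; nothing else is assumed):

```python
def pv(rate, dividend=4.0, years=3, terminal=60.0):
    # Present value of 3 annual $4 dividends plus the $60 sale price at year 3
    return sum(dividend / (1 + rate) ** t for t in range(1, years + 1)) \
        + terminal / (1 + rate) ** years

# A. Intrinsic worth at the 12% required return
intrinsic = pv(0.12)  # about 52.31

# B. Expected return: the rate r with pv(r) == 48 (the current price).
# pv() is strictly decreasing in the rate, so bisection converges.
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if pv(mid) > 48.0:   # PV still too high -> the rate must be higher
        lo = mid
    else:
        hi = mid
expected = (lo + hi) / 2  # about 0.155, i.e. 15.5%
```

The two numbers tell the same story from different angles: a \$52.31 value against a \$48 price, or a 15.5% expected return against a 12% requirement.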
https://metanumbers.com/53975
## 53975

53,975 (fifty-three thousand nine hundred seventy-five) is an odd five-digit composite number following 53974 and preceding 53976. In scientific notation, it is written as 5.3975 × 10⁴. The sum of its digits is 29. It has a total of 4 prime factors and 12 positive divisors. There are 40,320 positive integers (up to 53975) that are relatively prime to 53975.

## Basic properties

• Is Prime? No
• Number parity: Odd
• Number length: 5
• Sum of Digits: 29
• Digital Root: 2

## Name

Short name: 53 thousand 975
Full name: fifty-three thousand nine hundred seventy-five

## Notation

Scientific notation: 5.3975 × 10⁴
Engineering notation: 53.975 × 10³

## Prime Factorization of 53975

Prime Factorization: 5² × 17 × 127

• ω(n) = 3: total number of distinct prime factors
• Ω(n) = 4: total number of prime factors
• rad(n) = 10795: product of the distinct prime numbers
• λ(n) = 1: the parity of Ω(n), such that λ(n) = (−1)^Ω(n)
• μ(n) = 0: returns 1 if n has an even number of prime factors (and is square-free), −1 if n has an odd number of prime factors (and is square-free), 0 if n has a squared prime factor
• Λ(n) = 0: returns log(p) if n is a power p^k of any prime p (for any k ≥ 1), else returns 0

The prime factorization of 53,975 is 5² × 17 × 127. Since it has a total of 4 prime factors, 53,975 is a composite number.
## Divisors of 53975

1, 5, 17, 25, 85, 127, 425, 635, 2159, 3175, 10795, 53975 (12 divisors)

• Even divisors: 0
• Odd divisors: 12
• 4k+1 divisors: 6
• 4k+3 divisors: 6

• τ(n) = 12: total number of the positive divisors of n
• σ(n) = 71424: sum of all the positive divisors of n
• s(n) = 17449: aliquot sum (sum of the proper positive divisors of n)
• A(n) = 5952: the sum of divisors σ(n) divided by the total number of divisors τ(n)
• G(n) = 232.325: the nth root of the product of the n divisors
• H(n) = 9.06838: the total number of divisors τ(n) divided by the sum of the reciprocals of the divisors

The number 53,975 can be divided by 12 positive divisors (out of which 0 are even, and 12 are odd). The sum of these divisors (counting 53,975) is 71,424; the average is 5,952.

## Other Arithmetic Functions (n = 53975)

• φ(n) = 40320: Euler totient; total number of positive integers not greater than n that are coprime to n
• λ(n) = 5040: Carmichael lambda; smallest positive number such that a^λ(n) ≡ 1 (mod n) for all a coprime to n
• π(n) ≈ 5495: total number of primes less than or equal to n
• r2(n) = 0: the number of ways n can be represented as the sum of 2 squares

There are 40,320 positive integers (less than 53,975) that are coprime with 53,975. And there are approximately 5,495 prime numbers less than or equal to 53,975.

## Divisibility of 53975

| m | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|
| n mod m | 1 | 2 | 3 | 0 | 5 | 5 | 7 | 2 |

The number 53,975 is divisible by 5.
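The divisor and totient figures above can be reproduced with a short brute-force script (pure arithmetic, no external data):

```python
n = 53975

# Trial-division prime factorization: {prime: exponent}
factors, m, p = {}, n, 2
while p * p <= m:
    while m % p == 0:
        factors[p] = factors.get(p, 0) + 1
        m //= p
    p += 1
if m > 1:
    factors[m] = factors.get(m, 0) + 1
# factors == {5: 2, 17: 1, 127: 1}

divisors = [d for d in range(1, n + 1) if n % d == 0]
tau = len(divisors)       # tau(n)   = 12
sigma = sum(divisors)     # sigma(n) = 71,424
s = sigma - n             # aliquot sum = 17,449

# Euler totient via the product formula over the distinct primes
phi = n
for q in factors:
    phi = phi // q * (q - 1)  # phi(n) = 40,320
```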
• Arithmetic
• Deficient
• Polite

## Base conversion (53975)

| Base | System | Value |
|------|--------|-------|
| 2 | Binary | 1101001011010111 |
| 3 | Ternary | 2202001002 |
| 4 | Quaternary | 31023113 |
| 5 | Quinary | 3211400 |
| 6 | Senary | 1053515 |
| 8 | Octal | 151327 |
| 10 | Decimal | 53975 |
| 12 | Duodecimal | 2729b |
| 20 | Vigesimal | 6eif |
| 36 | Base36 | 15nb |

## Basic calculations (n = 53975)

### Multiplication

| n×2 | n×3 | n×4 | n×5 |
|-----|-----|-----|-----|
| 107950 | 161925 | 215900 | 269875 |

### Division

| n/2 | n/3 | n/4 | n/5 |
|-----|-----|-----|-----|
| 26987.5 | 17991.7 | 13493.8 | 10795 |

### Exponentiation

| n² | n³ | n⁴ | n⁵ |
|----|----|----|----|
| 2913300625 | 157245401234375 | 8487320531625390625 | 458103125694480458984375 |

### Nth Root

| ²√n | ³√n | ⁴√n | ⁵√n |
|-----|-----|-----|-----|
| 232.325 | 37.7918 | 15.2422 | 8.83972 |

## 53975 as geometric shapes

### Circle (radius = n)

• Diameter: 107950
• Circumference: 339135
• Area: 9.1524e+09

### Sphere (radius = n)

• Volume: 6.58668e+14
• Surface area: 3.66096e+10
• Circumference: 339135

### Square (length = n)

• Perimeter: 215900
• Area: 2.9133e+09
• Diagonal: 76332.2

### Cube (length = n)

• Surface area: 1.74798e+10
• Volume: 1.57245e+14
• Space diagonal: 93487.4

### Equilateral Triangle (length = n)

• Perimeter: 161925
• Area: 1.2615e+09
• Height: 46743.7

### Triangular Pyramid (length = n)

• Surface area: 5.04598e+09
• Volume: 1.85315e+13
• Height: 44070.4

## Cryptographic Hash Functions

| Function | Hash |
|----------|------|
| md5 | fd9f53461c57c2ea3318c601f13beb0d |
| sha1 | bc133833b456d3a180c9e4dcb0d6268967155135 |
| sha256 | 5803d6c5133971f25f658aedb47aeef5ec85c24db61828b1b8885d26b4a79db1 |
| sha512 | 420bc947643248dae943fba2ad20dd641ea0172251ab2943c758e327740e04733b45b00a1794d01fa0ac749ba5f1ecabe9a550f1a5b1c0b42edebae011a34e25 |
| ripemd-160 | 31020dede07a7523f482b8a2fff294faa25fd5cd |
https://www.diyaudio.com/community/threads/need-help-with-lens-for-long-throw-projection-set-up.61478/
# Need help with lens for long throw projection set up

Status
This old topic is closed. If you want to reopen this topic, contact a moderator using the "Report Post" button.

#### Daniel Neuman

Hello All,

I sent basically the same message to the forum at diyprojectorcompany.com (that's where the theory is coming from) but I am trying to cast as wide a net as possible. This is my first post to this group. I have looked at a lot of messages and read thru the theory section on the website and have a few questions on what is the best lens to use for the following setup.

Source image: 3 by 5" (5.8" diag)
Projected image: 6 by 10', or 72 by 120" (139.9" diag)
Distance to screen: 20', or 240"

This is a magnification of 24x. Rearranging the equations in the theory to solve for focal length with the above numbers, I get a focal length of 9.5" or 242.5mm. Does this sound right to y'all? With this setup what would be the best focal length fresnel lens to use and why? I've read the theory section but one thing that confuses me is where do you place the projection lens in relation to the image? Do you place the image at the focal point of the projection lens?

Thanks,

#### Guy Grotke

simple equation

1/fl = 1/(LCD to lens) + 1/(lens to screen)

lens to screen = 240"
lens fl = 9.5"
so LCD to lens = 9.9"

LCD to lens is always more than the focal length of the lens. If you put it at exactly 9.5", then the image would focus at infinity!

I would get a pair of 220 mm fl fresnels, and then build a split design: Lamp arc / 220 mm space / fresnel with grooves facing LCD / 20 mm space / LCD / 20 mm space / fresnel with grooves facing LCD / 220 mm space / triplet. Fire it up, adjust triplet position to focus at your 20' throw distance, then adjust lamp arc to fresnel distance until you get a nice even image.

#### Daniel Neuman

Thanks again for the reply Guy. This is the kind of info I need. I do understand the thin lens equation but it does not seem practical because it's for focus at infinity.
But I have heard that infinity is 10X the focal length? Is this an approximation that people use? Is there not any way to narrow down the range of LCD to projection lens distance?

I know this is probably basic projection 101 but please help me understand the purpose of the fresnels. It's my understanding that if you put the light source at the focus of the first fresnel (in your suggested split design) it will produce a collimated beam of light across its entire area. This collimated beam then passes thru the image, lighting it evenly, then thru the second fresnel lens, which does the opposite of the first and starts to focus it down to a point. Before it's focused to a point the projection lens intercepts the light and focuses it out to the projection screen.

The fresnels also must be placed a certain distance away from the image, yes? Like 2cm on each side? If my image is small enough can I (should I) use a glass lens instead? I am worried that the fresnels will be too close to the ultra hot light source. What if I used the split fresnels with the light side having a much larger focal length to push the light farther away? Then the downstream fresnel (sorry, I don't have the correct term y'all use) could have a shorter focal length to keep the projection lens closer?

#### gunvald

I've been looking into this stuff recently also. Here's what I know from looking at the physics book for about a hundred pages:

You've got the right idea. Fresnels are cheater lenses, used instead of large, heavy, expensive convex lenses. If you can get a lens of the right size for not too much cash, that's the way to do it. Fresnels make more sense for the sizes of DIY projectors. If you want to push back the light, get a fresnel with a longer focal length, just like you thought. Alternately, I've read in other forums of people using low-E (low-emissivity) glass to keep the heat off the fresnel and lcd panel.
I don't know if this blurs the image though (scatters the light from your point source). Sounds like a great idea if it works.

#### Guy Grotke

yes & no...

>Is there not any way to narrow down the range of LCD to projection lens distance?
A: You need adjustability since your throw distance may change. Use the thin lens approximation to plan, and then some real experiments with your actual throw distance to find the place to mount your lens in the center of its adjustment range.

>fresnels work like this...
A: Yes, you've got it.

>fresnels 2cm [from LCD]?
A: Any closer and you will see fresnel rings in the screen image. They can be much farther away on the lamp side if focal lengths, etc. require it. Better to keep them on the lamp side of the LCD, if possible.

>can I (should I) use a glass lens instead?
A: Expensive & heavy, but glass will work fine.

>What if I used the split fresnels with the light side having a much larger focal length to push the light farther away?
A: Okay, but as you move away from the lamp, the light intensity drops by distance^2. Another strategy to gather more light is to put a precondensor lens close to the lamp, adjusted to capture a wide cone of light and direct it all (in a narrower cone) to the condensor fresnel.

>...downstream fresnel [the field fresnel] could have a shorter focal length to keep the projection lens closer?
A: The LCD to projection lens distance is fixed by the lens fl and the throw distance. The field fresnel will not affect that much. The ideal fresnel arrangement is to put them together 20mm before the LCD, with the lamp arc at the focal distance of the condensor fresnel, and the projection lens at the focal distance of the field fresnel. This is not always possible! So then you get to fiddle with the lamp arc to condensor fresnel distance to get those light rays focussed into the projection lens.

>heat control?
A: Add IR filter glass (DIYprojectorCompany.com) or a piece of Rosco Thermashield BEFORE the condensor fresnel. Use a fan to pull air from around your lamp and push it outside the enclosure. Your fresnels and LCD will stay nice and cool then.
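Guy's arithmetic can be checked against the thin-lens equation he quotes, 1/fl = 1/(LCD to lens) + 1/(lens to screen); the focal length and throw are the thread's own numbers:

```python
# Thin-lens equation: 1/f = 1/d_lcd + 1/d_screen, solved for d_lcd
f_lens = 9.5      # projection lens focal length, inches
d_screen = 240.0  # lens-to-screen throw, inches (20 feet)

d_lcd = 1.0 / (1.0 / f_lens - 1.0 / d_screen)  # LCD-to-lens distance, inches

# Resulting magnification (screen image size / source size)
magnification = d_screen / d_lcd
```

This gives an LCD-to-lens distance of about 9.89", matching Guy's ~9.9" (just over the focal length, as he says), and a magnification of about 24.3x, in line with the 24x Daniel is after.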
https://www.prettymotors.com/how-to-calculate-esal-using-truck-axle/
# How to Calculate ESAL Using Truck Axle?

To calculate ESALs, one must first know how many pounds each axle carries. To do so, one can use a weigh-in-motion (WIM) scale, which measures axle separation and weight while the truck is moving over it. The data collected from the scale can then be used to calculate the ESALs of various types of trucks. This data is collected from transportation departments across the country and is a starting point for calculating ESALs.

There are many ways to calculate ESALs. One method involves multiplying the total number of trucks in a given region by the average ESALs per truck. This method is often used by state and local agencies. It can also be used to estimate the ESALs of a given truck when it is running empty.

## How Do You Calculate an Axle Load?

When you're planning a haul, you'll need to know how much weight your truck can carry. You can use the weight distribution formula to determine how much you can safely transport on each axle. The total weight of your truck is the combination of the chassis weight, the payload, and any additional weight carried by your bodywork. Work out how that total is distributed over the axles, then compare each axle's share against its rated capacity.

For example, a single axle might be rated at a maximum of 14,300 pounds; carrying more than that requires spreading the load over a tandem (two-axle) group, though the steer axle is typically rated lower. When calculating your axle load, keep in mind that the axles of a tandem group must be separated by more than 40 inches. Federal regulations allow a tandem group to carry 34,000 pounds in total, but most states allow more with a permit.

A standard 2-axle, four-wheel class six truck has four heavy-duty leaf springs at its wheels to absorb the impact of highway bumps. As such, they can be regarded as a reasonably accurate scale for estimating an axle load.

## What is the Weight of an ESAL?
To calculate the weight of an ESAL from truck axle data, you must first understand axle separation, which determines how a truck's gross weight is distributed into loads on the individual axles. When calculating ESALs, keep in mind that highway traffic typically consists of different types of vehicles with different gross weights and configurations. This makes it challenging to convert mixed traffic into a single equivalent axle load, so detailed traffic data is needed to identify the vehicle types and estimate traffic volumes.

The most straightforward method of calculating ESALs uses the load equivalency factor (LEF) of a truck: multiply the number of trucks by the average number of ESALs per truck. Keep in mind, however, that different regions experience different loads, so the LEF for the same truck type in one state may be higher than in another.

## What is the ESAL?

The ESAL, or Equivalent Single Axle Load, is a way to express the pavement loading caused by a truck in terms of a standard axle, accounting for the weight carried on both the front and the rear axles. ESALs for a road are calculated by multiplying the number of trucks by a truck factor, usually based on a regional average.

ESALs are used in road safety and pavement engineering. One ESAL represents the impact of a single 18,000-pound axle load. A large tractor-trailer causes exponentially more damage than a sedan: the sedan would have to make about 20,000 passes to cause the same damage as one pass of the tractor-trailer.
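The sedan-versus-tractor-trailer comparison comes from the widely used fourth-power relationship between axle load and pavement damage. A minimal sketch of that relationship follows; the function name and the generalized exponent parameter are illustrative, not from any standard:

```python
def load_equivalency_factor(axle_load_lb, standard_load_lb=18000.0, exponent=4.0):
    """Approximate LEF of a single axle via the generalized fourth-power law.

    The LEF expresses how many passes of the standard 18,000 lb axle
    cause the same pavement damage as one pass of the given axle.
    """
    return (axle_load_lb / standard_load_lb) ** exponent

# Doubling the standard axle load: (36000 / 18000) ** 4 = 16
lef = load_equivalency_factor(36000)
```

With this approximation, a 36,000 lb single axle is equivalent to 16 passes of the standard axle, matching the damage comparison above.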
To design for ESALs, engineers typically start from a standard single axle load of 18,000 pounds.

## What is an 18-Kip ESAL?

An 18-kip ESAL corresponds to the standard 18,000-pound axle load required by AASHTO and consistent with HS20-44 axle standards. One road category described in terms of 18-kip ESALs permits truck traffic only during critical seasons; such a road must have a minimum EDLA of five over a 20-year design period and must be load-zoned to restrict truck traffic to those critical times. Using mike20793's ESAL calculator, you can determine the axle load, maximum axle load, and live-load distribution for each axle.

## How Do You Calculate Truck Loads?

There are many ways to calculate a vehicle's ESALs. One common method uses weigh-in-motion equipment in the roadway to record the weight and axle separation of a truck as it drives over the scale; the results give the ESALs for single axles and for dual-wheel axles. Full ESAL calculations are often complicated, however, so most agencies use an average LEF, or load equivalency factor, for a region or state. Other agencies use a standard "truck factor," the average number of ESALs per truck; this approach is commonly used for high-volume roads rather than low-volume roads.

Pavement designers use an ESAL tool to estimate the traffic volume and vehicle mix on a road. The tool converts the current daily traffic volume into equivalent passes of a single 18,000-pound truck axle. These procedures are detailed in the Iowa Department of Transportation's pavement design standards and apply to both two-lane asphalt roads and urban multi-lane roads.

## What is the Weight Limit Per Axle?

To calculate a vehicle's ESAL, you must know the load on each axle. To measure it, you will need a scale that records axle separation and weight in motion.
A weigh-in-motion scale measures axle separation and weight while the truck travels over it. Once you have this data, you can calculate an ESAL for your truck.

In Washington State the average LEF is 1.028 ESALs per truck, but local LEFs may vary greatly; the ESAL of an empty truck is much lower than that of a fully loaded one. The steps above yield an ESAL estimate from truck axle data, but the actual number will vary with circumstances.

ESAL factors are commonly used in pavement design to relate different axle load combinations to the standard 80 kN (18,000 lb) single axle load. These factors vary with the type of pavement and structure; for example, the slab depth of a rigid structure differs from the layer build-up of a flexible structure. The 1993 AASHTO Design Guide therefore distinguishes flexible and rigid ESAL factors, and flexible ESALs must be converted before being used to determine the maximum permissible load on a rigid pavement.

## How Do You Calculate the Load Equivalency Factor?

The load equivalency factor measures the pavement damage caused by a given axle load, relative to the standard 18,000-pound single axle, at a given time and location. It can be calculated for static or dynamic loads, and it depends on several factors, including the number of axles and their legal load limits. The damage a load causes to pavement grows roughly with the fourth power of the axle load, so doubling the load multiplies the damage by about sixteen: a 36,000-lb single axle load causes 16 times the damage that an 18,000-lb axle would have caused. The ESALs for these types of loads are listed in Table 1. It is therefore best to spread the load across two closely spaced axles. Many factors are taken into consideration when calculating load equivalency factors for rigid pavements.
These factors are more complex than the simple "fourth-power" equation and depend more heavily on material properties and pavement characterization.
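The procedures above — an average truck factor applied to daily traffic over a design period — can be sketched as follows. The function and every numeric input (AADT, truck share, growth rate, truck factor) are illustrative assumptions, not values from AASHTO or any agency's standard:

```python
def design_esals(aadt, truck_share, truck_factor, growth_rate, years):
    """Estimate cumulative design ESALs over the design period.

    aadt         - annual average daily traffic (vehicles/day)
    truck_share  - fraction of traffic that is trucks
    truck_factor - average ESALs per truck (regional average)
    growth_rate  - annual traffic growth, e.g. 0.02 for 2%
    years        - length of the design period
    """
    daily_trucks = aadt * truck_share
    total = 0.0
    for year in range(years):
        # Trucks in this year, grown from the base year, times 365 days
        annual_trucks = daily_trucks * ((1 + growth_rate) ** year) * 365
        total += annual_trucks * truck_factor
    return total

# e.g. 5,000 AADT, 12% trucks, 1.0 ESAL/truck, 2% growth, 20-year design
esals = design_esals(5000, 0.12, 1.0, 0.02, 20)
```

A real design would further split the total by direction and design lane; those distribution factors are omitted here for brevity.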
1,716
7,934
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.4375
3
CC-MAIN-2024-22
latest
en
0.947332
https://embed.planetcalc.com/378/
1,607,103,440,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141740670.93/warc/CC-MAIN-20201204162500-20201204192500-00709.warc.gz
293,934,585
11,309
# Roman numerals

Converter of Roman numbers and decimal numbers

It is well known that the Romans used Latin letters for writing numbers. The Roman numeral system is usually cited as a classic example of a nonpositional numeral system, i.e. a system in which the value of a digit does not depend on its position. Recall that in the Roman system I is 1, V is 5, X is 10, L is 50, C is 100, D is 500, and M is 1000. For example, the number 3 is written as III.

Things are not quite so simple, however, and the system is not purely nonpositional, because an additional rule modifies the value of a digit according to its place. That rule forbids writing the same digit four times in a row: 3 is III, but 4 is IV, where I (1) placed before the larger digit V (5) means subtraction, so it effectively counts as -1.

Anticipating the obvious question, we must say that the largest number allowed in this calculator is 3999. For larger numbers, which were used mainly in medieval times, several different notations existed, including the apostrophus and the vinculum, but none of them was ever standardised.

Below are two calculators: one converting numbers from 1 to 3999 into Roman numerals, and one for the reverse conversion.
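The conversion rules described above — additive symbols plus the subtractive pairs IV, IX, XL, XC, CD, CM, limited to the range 1–3999 — can be sketched in Python. This is an independent illustration, not the calculator's actual source:

```python
# Symbol values in descending order, with the subtractive pairs included
# so a simple greedy scan produces the canonical form.
ROMAN_VALUES = [
    (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
    (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
    (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I"),
]

def to_roman(n):
    """Convert an integer in 1..3999 to its Roman numeral."""
    if not 1 <= n <= 3999:
        raise ValueError("supported range is 1..3999")
    out = []
    for value, symbol in ROMAN_VALUES:
        count, n = divmod(n, value)
        out.append(symbol * count)
    return "".join(out)

def from_roman(s):
    """Convert a Roman numeral back to an integer.

    A digit is subtracted when the digit after it has a larger value.
    """
    ones = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + "\0"):
        v = ones[ch]
        total += -v if ones.get(nxt, 0) > v else v
    return total
```

For example, `to_roman(1994)` gives `"MCMXCIV"` and `from_roman("MMXXIV")` gives `2024`.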
343
1,457
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.15625
3
CC-MAIN-2020-50
longest
en
0.93991
https://forum.edugorilla.com/forums/topic/equation-of-the-sphere-which-circumscribes-the-tetrahedron-with-vertices-o-0-0-0-a-1-0-0-b/
1,632,352,946,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780057403.84/warc/CC-MAIN-20210922223752-20210923013752-00582.warc.gz
277,824,705
16,662
Topic: Equation of the sphere which circumscribes the tetrahedron with vertices O (0, 0, 0), A (1, 0, 0), B (0, 1, 0), C (0, 0, 1) is

### Options :-

1. x^2 + y^2 + z^2 + x + y + z = 0
2. x^2 + y^2 + z^2 - x - y - z = 0
3. x^2 + y^2 + z^2 + 2x + 2y + 2z = 0
4. none of these
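The correct option can be checked directly: a sphere through the origin has the form x^2 + y^2 + z^2 + a·x + b·y + c·z = 0, so it suffices to substitute the four vertices into each candidate. A small verification sketch:

```python
# Each candidate is (a, b, c) in x^2 + y^2 + z^2 + a*x + b*y + c*z = 0.
candidates = {
    1: (1, 1, 1),     # option 1: ... + x + y + z = 0
    2: (-1, -1, -1),  # option 2: ... - x - y - z = 0
    3: (2, 2, 2),     # option 3: ... + 2x + 2y + 2z = 0
}
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]

def on_sphere(coeffs, p):
    """Check whether point p satisfies the sphere equation with these coefficients."""
    a, b, c = coeffs
    x, y, z = p
    return x * x + y * y + z * z + a * x + b * y + c * z == 0

answers = [k for k, co in candidates.items()
           if all(on_sphere(co, p) for p in vertices)]
# answers == [2]: only x^2 + y^2 + z^2 - x - y - z = 0 passes through all four vertices
```

Substituting A (1, 0, 0) into option 2 gives 1 - 1 = 0, and likewise for B and C, while options 1 and 3 fail already at A.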
286
779
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2021-39
latest
en
0.798932
http://www.mathworks.com/help/symbolic/mupad_ref/signim.html?nocookie=true
1,448,522,251,000,000,000
text/html
crawl-data/CC-MAIN-2015-48/segments/1448398446535.72/warc/CC-MAIN-20151124205406-00119-ip-10-71-132-137.ec2.internal.warc.gz
539,435,289
9,214
signIm Sign of the imaginary part of a complex number Use only in the MuPAD Notebook Interface. This functionality does not run in MATLAB. signIm(z) Description signIm(z) represents the sign of Im(z). signIm(z) indicates whether the complex number z lies in the upper or in the lower half plane: signIm(z) yields 1 if Im(z) > 0, or if z is real and z < 0. At the origin: signIm(0) = 0. For all other numerical arguments, -1 is returned. Thus, signIm(z) = sign(Im(z)) if z is not on the real axis. If the position of the argument in the complex plane cannot be determined, then a symbolic call is returned. If appropriate, the reflection rule signIm(-x) = -signIm(x) is used. The functions diff and series treat signIm as a constant function. Cf. Example 2. Environment Interactions Properties of identifiers set via assume are taken into account. Examples Example 1 For numerical values, the position in the complex plane can always be determined: signIm(2 + I), signIm(-4 - I*PI), signIm(0.3), signIm(-2/7), signIm(-sqrt(2) + 3*I*PI) Symbolic arguments without properties lead to symbolic calls: signIm(x), signIm(x - I*sqrt(2)) Properties set via assume are taken into account: assume(x, Type::Real): signIm(x - I*sqrt(2)) assume(x > 0): signIm(x) assume(x < 0): signIm(x) assume(x = 0): signIm(x) unassume(x): Example 2 signIm is a constant function, apart from the jump discontinuities along the real axis. These discontinuities are ignored by diff: diff(signIm(z), z) Also series treats signIm as a constant function: series(signIm(z/(1 - z)), z = 0) Parameters z An arithmetical expression representing a complex number Return Values Either 1, -1, 0, or a symbolic call of type "signIm".
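The case analysis in the description can be mirrored in a small Python sketch for purely numerical inputs (`sign_im` is a hypothetical helper name, not part of MuPAD or MATLAB; symbolic arguments are of course out of scope here):

```python
def sign_im(z):
    """Numeric sketch of MuPAD's signIm semantics:

    returns  1 if Im(z) > 0, or if z is real and z < 0;
    returns  0 at the origin;
    returns -1 otherwise (Im(z) < 0, or z real and z > 0).
    """
    z = complex(z)
    if z.imag > 0:
        return 1
    if z.imag < 0:
        return -1
    # z lies on the real axis
    if z.real < 0:
        return 1
    if z.real > 0:
        return -1
    return 0

# Mirrors Example 1: sign_im(2 + 1j) -> 1, sign_im(0.3) -> -1, sign_im(-2/7) -> 1
```

Away from the real axis the result coincides with sign(Im(z)), as the description states; the special cases only matter on the axis itself.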
480
1,826
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.625
4
CC-MAIN-2015-48
latest
en
0.764175
https://scientips.com/fundamentals-of-accounting/
1,718,679,549,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861746.4/warc/CC-MAIN-20240618011430-20240618041430-00413.warc.gz
447,039,257
14,653
# Fundamentals of Accounting

## What is accounting?

Accounting is a systematic method of identifying, recording, measuring, classifying, verifying, summarizing, interpreting, and communicating financial information. It reveals the profit or loss for a given period, and the value of a firm's assets, liabilities, and owners' equity. Some people confuse this subject with mathematics; math is involved in every subject to some extent, but accounting focuses chiefly on the monetary transactions of a business. Accountants are the people hired by businesses to carry out the accounting process and analyse the results in order to assess the performance of the organization. This information is used by the stakeholders of the business, such as owners, shareholders, employees, and the government.

## What is book-keeping?

Accounting and book-keeping may appear similar to the untrained eye, but book-keeping is concerned with recording financial transactions in the appropriate accounts. For instance, a purchase of goods (products) would be entered in the purchases account. Book-keeping helps businesses keep track of every transaction that occurs.

## The role of accounting in providing information for monitoring progress and decision making

Every business has stakeholders: the people or institutions interested in the activities of the business and the potential users of its accounting information. They use the information to monitor the progress of the business they have a stake in, and it also helps them to make important economic decisions. Examples of stakeholders are:

1. The owner(s) of the business: These are the people who own the business and play an important role in its operations. Their investment may be larger than that of the other stakeholders. They want to know what return is being earned on the capital employed in the business: is the business growing year after year?
They also want to know the net worth of the business, as shown by the statement of financial position.

2. Potential owner(s): These are people who may buy a part of the business; they are also known as investors. Their interest revolves around the return on the amount invested, since they expect to receive a dividend in proportion to their shareholding. Accounting information is useful to them because they constantly compare current and past financial information across businesses in search of the best possible return.

3. Business creditors: Many transactions in the business world are credit transactions, meaning that payment is made after the trade has taken place. As a result, a business may owe money to a variety of individuals or businesses, known as creditors or trade payables. Suppliers and other creditors of the business need to know whether they will be paid on time, or at all; accounting records provide them this information.

4. Bank managers: Before lending to a business, banks require accounting records from current and past years. To protect repayment, they measure and assess the business's liquidity and financial position so as to ensure the loan can be serviced. In addition, if a business already carries substantial long-term loans, banks will be reluctant to lend any more money.

5. The government: Governments raise a large part of their revenue through corporate taxation, a tax levied on businesses as a percentage of the profit they make. Accounting information therefore helps governments calculate the amount of tax a business has to pay.

## The accounting equation

From the large multinational company down to the corner utility store, every business transaction has an impact on the company's statement of financial position. The financial position of a business is measured by the following categories:

1.
Assets: Assets are the economic resources of the entity, tangible or intangible, and include such things as cash, receivables (amounts owed to the firm by its customers), inventories, land, buildings, equipment, and even intangible assets such as patents and other legal rights. Assets embody probable future economic benefits to the owner.

2. Liabilities: These are the amounts owed to others, arising from loans, overdrafts, and other obligations incurred in the course of business. A liability lets the business acquire goods or services now while taking on the obligation to pay for them later.

3. Owners' equity: In simple terms, it is the difference between assets and liabilities, i.e. what the business owes to its owner(s).

The basic accounting equation or formula is:

Assets = Liabilities + Capital (Owners' Equity)

For example, a loan of $12,000 obtained from the bank increases assets (cash) by $12,000 and liabilities (bank loan) by $12,000, so the equation stays in balance.
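The double-entry effect of a bank loan — an asset and a liability rising by the same amount — can be sketched as follows; the account names and the dictionary layout are purely illustrative:

```python
# Sketch of the accounting equation: Assets = Liabilities + Equity.
assets = {"cash": 0.0}
liabilities = {"bank_loan": 0.0}
equity = {"capital": 0.0}

def balances():
    """Return (total assets, total liabilities, total equity)."""
    return (sum(assets.values()),
            sum(liabilities.values()),
            sum(equity.values()))

# A $12,000 loan obtained from the bank raises cash (an asset) and
# the loan account (a liability) by the same amount.
assets["cash"] += 12000
liabilities["bank_loan"] += 12000

a, l, e = balances()
assert a == l + e  # the equation stays in balance
```

Every transaction recorded this way touches at least two accounts, which is precisely why the equation can never go out of balance.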
910
4,859
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2024-26
longest
en
0.962224