| Column | Type | Min / classes | Max |
|---|---|---|---|
| url | stringlengths | 6 | 1.61k |
| fetch_time | int64 | 1,368,856,904B | 1,726,893,854B |
| content_mime_type | stringclasses | 3 values | |
| warc_filename | stringlengths | 108 | 138 |
| warc_record_offset | int32 | 9.6k | 1.74B |
| warc_record_length | int32 | 664 | 793k |
| text | stringlengths | 45 | 1.04M |
| token_count | int32 | 22 | 711k |
| char_count | int32 | 45 | 1.04M |
| metadata | stringlengths | 439 | 443 |
| score | float64 | 2.52 | 5.09 |
| int_score | int64 | 3 | 5 |
| crawl | stringclasses | 93 values | |
| snapshot_type | stringclasses | 2 values | |
| language | stringclasses | 1 value | |
| language_score | float64 | 0.06 | 1 |
https://documen.tv/question/kevin-is-designing-a-cover-for-a-comic-book-he-wants-to-have-a-rectangular-grid-with-2-3-10-squa-21582294-24/
1,632,006,461,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00414.warc.gz
285,281,907
15,710
## Kevin is designing a cover for a comic book. He wants to have a rectangular grid with x^2 - 3x - 10 squares in it, for some positive whole number x

Question: Kevin is designing a cover for a comic book. He wants to have a rectangular grid with x^2 - 3x - 10 squares in it, for some positive whole number x. What are the possible side lengths for the grid?

A. There are no possible whole number side lengths.
B. (x + 2) and (x - 5)
C. (x - 2) and (x + 5)
D. (x + 2) and (x + 5)

Answer: (x + 2) and (x - 5). Step-by-step explanation: the grid's area factors as x^2 - 3x - 10 = (x + 2)(x - 5), so those binomials are the side lengths.
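A quick symbolic check of that factoring (a minimal sketch, assuming `sympy` is available):

```python
# Factor the quadratic from the question and confirm answer choice B.
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**2 - 3*x - 10))  # prints (x - 5)*(x + 2)
```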
196
590
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.28125
3
CC-MAIN-2021-39
latest
en
0.910934
https://bestbinaryqxzp.web.app/plancarte4208zywe/gross-collection-rate-calculation-ci.html
1,639,050,105,000,000,000
text/html
crawl-data/CC-MAIN-2021-49/segments/1637964363791.16/warc/CC-MAIN-20211209091917-20211209121917-00484.warc.gz
198,319,275
6,008
Gross collection rate calculation

- Gross Collections Calculation. Cash Collected in 2014: $100,000. Gross Charges in 2014: $180,000. Gross Collection Rate = 55%.
- 26 Jun 2018: How to calculate medical accounts receivable days in A/R. We look at the following figures as benchmarks for medical billing and collections: within 60 days, the percentage hitting over 90 days will automatically be lower.
- Understanding Net Collection Rates (Medconverge, www.medconverge.com/2017/06/22/understanding-net-collection-rates, 9 Aug 2017): The net collection rate is an essential medical billing metric to measure the ability of your practice to collect the money it is owed. Podiatrists will ...
- Calculate your net collection rate by dividing payments by the sum of gross charges minus contractual adjustments. Your net collection rate, generally, should be at least 97%.
- 8 Feb 2017: Net collection percentage (NCP), or as the RBMA now refers to it, the Adjusted Collection Percentage, is calculated by first taking gross charges ...
- 7 Oct 2019: Calculate total charges minus approved write-offs (e.g., due to contractual reasons, bad debt, professional courtesy discounts, etc.) for the ...
- For example, to calculate the return rate needed to reach an investment goal with particular inputs, click the 'Return Rate' tab. ...
- Cascade taxes make the calculation of a net amount from a given gross amount a bit more difficult. ... If we divide that tax amount by the gross price, we get the tax rate. ...
- 15 Nov 2018: How to calculate: Total Accounts Receivable / (12 months of Gross ... a specific time period); Net Collection Rate: this metric is a measure of a ...
- 29 Dec 2015: The net collection ratio is calculated this way: cash collections divided by net charges. Net charges are the difference between gross charges ...
- The formula for gross margin percentage is as follows: gross_margin = 100 ...
- 15 Jan 2018: At the end of each year, when you evaluate your collection agency performance, what method do you use to calculate ...? If a collection agency has a 50% recovery rate for your company, Gross Collections Method (GCM) ...
- (Total Receivables - Credit Balance) / Average Daily Gross Charge Amount. ... To calculate the adjusted collection rate, divide payments (net of credits) by ...
- 1 Oct 1999: The calculation is: net collection percentage = total fee-for-service revenue (cash collected) divided by adjusted fee-for-service charges, or gross ...
- 15 Jan 2018: The formula for the collection ratio is to divide total receivables by average daily sales. A lengthy period during which receivables are outstanding ...
- Gross margin is expressed as a percentage. Generally, it is calculated as the selling price of an item, less the cost of goods sold (e.g. production or acquisition ...)
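The two headline metrics above, sketched in code; the contractual-adjustment figure is an assumption for illustration and is not from the source:

```python
# Gross rate uses the 2014 example figures; the adjustment is hypothetical.
cash_collected = 100_000            # payments received in 2014
gross_charges = 180_000             # total charges in 2014
contractual_adjustments = 77_000    # assumed write-offs, for illustration only

gross_rate = cash_collected / gross_charges
net_rate = cash_collected / (gross_charges - contractual_adjustments)
print(f"gross: {gross_rate:.1%}, net: {net_rate:.1%}")  # gross: 55.6%, net: 97.1%
```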
885
4,073
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2021-49
latest
en
0.878769
https://www.atozexams.com/mcq/quantitative-aptitude/437.html
1,695,601,956,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233506669.96/warc/CC-MAIN-20230924223409-20230925013409-00001.warc.gz
728,165,819
10,560
# 3 times BD = 4 times AD and 5 times DC = 12 times AD. BD = 56 - DC. What is the length of AC in the diagram below?
1. 39  2. 36  3. 49  4. 26
Answer: 39 (no explanation available for this question).

# If x, y, z are chosen from the 3 numbers -3, 12, 2 without repetition, what is the largest possible value of ...?
1. 214  2. 144  3. 432  4. 168
Answer: 144 (no explanation available).

# Excluding the top book in each pile, which two piles are of the same height above the ground?
1. C, A  2. A, B  3. B, C  4. None of these
Answer: None of these (no explanation available).

# Excluding the top book, which pile is the highest above the ground?
1. B  2. D  3. A  4. C
Answer: A (no explanation available).

# Taking all the books together, which 2 piles have the same height above the ground?
1. B, C  2. C, D  3. A, B  4. B, D
Answer: C, D (no explanation available).

# A man bought 35 books at Rs.200 each and sold them at Rs.209 each. He bought another set of 30 books at a total cost of Rs.5000 and, by selling them, got a total of Rs.5225. He also bought another 37 books at various prices between Rs.800 and Rs.1200 each and sold all of them at 4.5% profit. What is his total profit percentage?
1. 4%  2. 4.5%  3. 5%  4. Indeterminate
Answer: 4.5% (no explanation available).

# The areas of the largest triangle and the largest rectangle that can be inscribed in a semicircle of radius 'r' are 'T' and 'R' respectively. Which one is greater?
1. T > R  2. T < R  3. T = R  4. Indeterminate
Answer: T = R (no explanation available).

# The pressure of a given volume of gas varies directly as the temperature. When the temperature is constant, the product of the pressure and volume is constant. When the volume is 200 and the pressure is 250, the temperature is 100. What will the temperature be when the pressure is 300 and the volume is 400?
1. 120  2. 240  3. 360  4. 480
Answer: 240 (no explanation available).

# The incomes of two people are in the ratio 4 : 5 and their expenditures in the ratio 7 : 8 respectively. The total savings between the two of them is Rs.3000, and that is split in the ratio 1 : 2. Find their total income.
1. Rs.15,000  2. Rs.18,000  3. Rs.22,500  4. Rs.27,000
Answer: Rs.18,000 (no explanation available).

(Question text missing.) 1. 28 gallons  2. 16 gallons  3. 24 gallons  4. 36 gallons
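A numeric check of the gas question above (a minimal sketch): since P varies with T at fixed V and P*V is constant at fixed T, P*V = k*T throughout.

```python
# Solve for the constant k from the first state, then for T in the second.
P1, V1, T1 = 250, 200, 100
k = P1 * V1 / T1          # 500.0
P2, V2 = 300, 400
print(P2 * V2 / k)        # 240.0, matching the quoted answer
```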
785
2,541
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.625
4
CC-MAIN-2023-40
latest
en
0.812505
https://www.coursehero.com/file/6140788/Sample-Test-3-linear-regression-calcs/
1,490,873,058,000,000,000
text/html
crawl-data/CC-MAIN-2017-13/segments/1490218193716.70/warc/CC-MAIN-20170322212953-00244-ip-10-233-31-227.ec2.internal.warc.gz
894,736,539
73,954
Sample Test 3 (linear regression) calcs

# Sample Test 3

Data and calculations:

| rainfall, x (in/yr) | wheat, y (bushel/acre) | x^2 | xy | y^2 |
|---|---|---|---|---|
| 12.9 | 62.5 | 166.41 | 806.25 | 3906.25 |
| 7.2 | 28.7 | 51.84 | 206.64 | 823.69 |
| 11.3 | 52.2 | 127.69 | 589.86 | 2724.84 |
| 19.6 | 80.6 | 384.16 | 1579.76 | 6496.36 |
| 8.8 | 41.6 | 77.44 | 366.08 | 1730.56 |
| 10.3 | 44.5 | 106.09 | 458.35 | 1980.25 |
| 15.9 | 73.3 | 252.81 | 1165.47 | 5372.89 |
| 13.1 | 54.4 | 171.61 | 712.64 | 2959.36 |
| sums: 99.10 | 437.80 | 1338.050 | 5885.050 | 25994.200 |
| means: 12.39 | 54.73 | | | |

b1 numerator = 461.8, b1 denominator = 110.45, so b1 = 4.1811 bushels per inch of rainfall, and b0 = 2.9310 bushels.

Equation of the best-fitting straight line (the regression line): Y = 4.1811x + 2.9310, where x is the annual rainfall in inches and Y is the fitted wheat yield per acre in bushels.

r numerator = 461.8025 = b1 numerator (line 16); r denominator x term = ... (the preview ends here)

This note was uploaded on 02/19/2011 for the course OM 210 taught by Professor Singer during the Spring '08 term at George Mason.
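A quick cross-check of the hand-worked coefficients (a sketch, assuming numpy is available):

```python
# Least-squares fit of the rainfall/wheat data from the table above.
import numpy as np

rainfall = np.array([12.9, 7.2, 11.3, 19.6, 8.8, 10.3, 15.9, 13.1])
wheat = np.array([62.5, 28.7, 52.2, 80.6, 41.6, 44.5, 73.3, 54.4])
b1, b0 = np.polyfit(rainfall, wheat, 1)
print(f"Y = {b1:.4f}x + {b0:.4f}")  # Y = 4.1811x + 2.9310
```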
832
1,879
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2017-13
longest
en
0.679803
https://setscholars.net/python-exercise-write-a-python-program-to-find-the-sum-of-the-first-n-positive-integers/
1,722,796,857,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640408316.13/warc/CC-MAIN-20240804164455-20240804194455-00273.warc.gz
421,473,987
22,395
# Python Exercise: Write a python program to find the sum of the first n positive integers

## (Python Example for Citizen Data Scientist & Business Analyst)

Write a python program to find the sum of the first n positive integers.

Sample Solution Python Code:

```python
n = int(input("Input a number: "))
# Gauss's formula: 1 + 2 + ... + n = n * (n + 1) / 2.
# Note that "/" produces a float (hence the 3.0 below); use "//" to keep an int.
sum_num = (n * (n + 1)) / 2
print(sum_num)
```

Sample Output:

```
Input a number: 2
3.0
```
986
4,453
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2024-33
latest
en
0.645263
https://meaning-of-number.com/what-does-number-3-mean/
1,713,518,741,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296817382.50/warc/CC-MAIN-20240419074959-20240419104959-00008.warc.gz
338,312,849
39,540
# What does number 3 mean?

Last updated on March 16, 2023 by Francis

Number 3 is a fascinating number with a deep meaning and significance in the English language. It has been used in literature and art for centuries, and is often seen as a symbol of luck, creativity, and vigor. In this article, we will explore the symbolism behind number 3 and discuss what exactly it means in the English language.

## What is the Significance of Number Three?

Number three has numerous meanings and interpretations in various fields, including mathematics, philosophy, astrology, and literature. It is often seen as a symbol of stability, growth, and life, as well as spirituality and divinity. In this article, we will explore the significance of number three in various fields.

In mathematics, three is the smallest odd prime number, and the second prime number after two. Three is also the fourth Fibonacci number (1, 1, 2, 3), and forms the basis of many mathematical formulas. Three is also significant in geometry, as the triangle is the simplest and most basic shape.

In philosophy, three is often seen as the number of truth. Three is also the number of the universe, as it is the sum of the powers of two, which represent the duality of the universe. Three is also seen as the symbol of the three aspects of the self: mind, body, and spirit.

## Religious and Cultural Significance of Number Three

Number three also has a significant place in many religions and cultures. In Christianity, three is seen as the number of the Trinity, the three persons in one God. Three is also the number of the Magi, and is seen as the symbol of God's power. In Hinduism, three is seen as the number of the Trimurti, the three gods of creation, preservation, and destruction.

In Chinese culture, three is seen as a lucky number, as it is associated with the three-legged toad, which is a symbol of prosperity. In Japan, three is seen as a number of harmony, as it is composed of two positive numbers and one negative number.

## Symbolic Meaning of Number Three

Number three is also seen as a symbol of life, growth, and stability. It is often used to represent the three stages of life: the past, the present, and the future. Three is also seen as a symbol of balance and harmony, as it is composed of two equal and opposite forces.

In literature, three is often seen as a symbol of time. It is often used to represent the three stages of time: past, present, and future. It is also seen as a symbol of completion, as it is the number of words that complete a thought or sentence.

## Astrological Significance of Number Three

In astrology, three is seen as a symbol of the trine, which is a harmonious aspect between three planets. It is seen as a symbol of balance, harmony, and stability. It is also seen as a symbol of spiritual growth, as it is associated with the three fire signs of the zodiac: Aries, Leo, and Sagittarius.

## Number Three in Literature

In literature, three is often used as a motif to represent the interconnectedness of life and the balance between good and evil. It is often used to represent the three stages of life: the past, the present, and the future. It is also used to represent the three aspects of the self: mind, body, and spirit.

### Number Three in Fairy Tales

In fairy tales, three is often used as a motif to represent the three stages of life: the past, the present, and the future. It is often used to represent the three stages of a journey or a quest, or the three stages of a story. It is also used to represent the three aspects of the self: mind, body, and spirit.

### Number Three in Poetry

In poetry, three is often used to represent the trinity of the author, the reader, and the poem itself. It is also used to represent the three stages of a journey or a quest, or the three aspects of a story.

### Number Three in Mythology

In mythology, three is often used to represent the trinity of gods, goddesses, and humans. It is also used to represent the three stages of a journey or a quest, or the three aspects of a story.

### Number Three in Numerology

In numerology, three is often seen as a symbol of creativity, communication, and self-expression. It is also seen as a symbol of growth and stability. It is associated with the planet Jupiter and is seen as a symbol of abundance and prosperity.

### Number Three in Tarot

In tarot, three is often associated with knowledge, wisdom, and understanding. It is seen as a symbol of spiritual growth and enlightenment. It is also used to represent the trinity of the mind, body, and spirit.

### What is the significance of the number 3?

The number 3 is often seen as a symbol of completeness, unity, and balance. In many cultures, it is seen as a powerful and sacred number. For example, in Christianity, the Holy Trinity is composed of the Father, Son, and Holy Spirit. The number 3 also appears as a motif in many stories, such as the Three Little Pigs, the Three Musketeers, and the Three Billy Goats Gruff. In mathematics, 3 is the first odd prime number, and it is the second smallest prime number after 2.

### What is the spiritual meaning of the number 3?

In many spiritual traditions, the number 3 is seen as a symbol of transformation, growth, and positive change. It is associated with creativity, exploration, and self-expression. It is also seen as a sign of divine protection and guidance, as well as abundance and joy. In numerology, 3 is associated with the planet Jupiter, which is associated with luck, wealth, and wisdom.

### What is the significance of the number 3 in astrology?

In astrology, the number 3 is associated with the planet Jupiter and is considered a lucky number. It is associated with growth, expansion, and knowledge. It is a symbol of luck, wealth, and prosperity. It is also a sign of positive change, creativity, and exploration.

### What is the significance of 3 in numerology?

In numerology, the number 3 is associated with creativity, expansion, and communication. It is associated with the planet Jupiter, which is associated with luck, wealth, and wisdom. It is seen as a sign of joy and abundance, as well as divine protection and guidance. It is also associated with spiritual growth, transformation, and exploration.

### What is the significance of 3 in Chinese culture?

In Chinese culture, the number 3 is seen as a lucky number. It is a symbol of abundance, joy, and good fortune. It is also associated with growth, creativity, and exploration. It is seen as a sign of divine protection and guidance, and it is believed to bring good luck and success.

### What is the significance of 3 in Feng Shui?

In Feng Shui, the number 3 is associated with creativity and growth. It is seen as a sign of abundance and joy, as well as good fortune and success. It is associated with the planet Jupiter, which is associated with luck and wisdom. It is also believed to bring positive change, exploration, and divine protection and guidance.

Number 3 is a powerful symbol that has been used in many cultures throughout history. It is seen as a representation of the holy trinity, a connection between body, mind, and spirit, and a representation of the three stages of life. Number 3 symbolizes progress and growth, as well as creativity and a never-ending cycle of life. Whether it's used in literature, art, or numerology, the power of number 3 is undeniable. It is a reminder that life is a journey, and that each of us can strive to reach our highest potential.
1,659
7,572
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2024-18
latest
en
0.982361
https://neveradulldayinpoland.com/qa/how-many-degrees-is-1000-watts.html
1,621,055,145,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243989812.47/warc/CC-MAIN-20210515035645-20210515065645-00218.warc.gz
437,465,552
8,828
How Many Degrees Is 1000 Watts?

How hot is 25 watts? How much heat does a 25-watt bulb give off? These particular incandescent bulbs are not major heating dangers for the most part. They give off merely 25 watts of heat; nearly 90% of their power is wasted as heat. You can expect the surface temperature to be a mild 70°F at best.

How many watts is 200 degrees? Converting 200 centigrade heat units (CHU) to watt-hours: 200 CHU = 105.578 Wh, using 1 CHU = 0.527889 Wh and 1 Wh = 1.894336 CHU.

How do you convert heat dissipation to watts? For W to BTU/h conversions, 1 W is equal to 3.41 BTU/h. Therefore, to convert BTU/h into watts you divide by 3.41; to convert watts into BTU/h you multiply by 3.41.

Is 1000 watts enough for a microwave? What microwave wattage do I need? A 1,000-watt microwave will cook quickly and efficiently, so that's a great baseline. Microwaves with 700 watts or less are slower and may not cook evenly. In general, the higher the wattage, the faster the cooking time.

How many watts is 180 degrees? By the same conversion, 180 CHU = 95.020 Wh.

Is there a big difference between a 900-watt and a 1000-watt microwave? The main difference is how long it takes to cook food items. According to Microwave Cooking For One, it normally takes a 900-watt microwave longer to heat something up than a 1000-watt microwave.

How hot does a 700-watt microwave get? Roughly like an "average" oven temperature of 350 degrees:

| Microwave power | Comparable oven temperature |
|---|---|
| 700 W | 350°F |
| 800 W | 450°F |
| 900 W | 525°F (self-clean) |
| 1000 W | 575°F |
| 1100 W | 625°F (blow torch!) |

How do you calculate heat from watts? To calculate the wattage requirement to heat steel, use: Watts = 0.05 x lbs of steel x ΔT (in °F) / heat-up time (in hrs). Related formulas: Watts = 3.1 x gallons x ΔT (in °F) / heat-up time (in hrs); Watts = 165 x gallons per minute x ΔT (in °F); Watts = 1.35 x gallons x ΔT (in °F) / heat-up time (in hrs).

Can you convert watts to temperature? The ratio between heat and a substance's temperature rise is its specific heat capacity. This factor, along with the substance's mass and the length of time during which power acts on it, lets you convert the applied wattage to a final temperature, measured in degrees.

What is the best wattage microwave to buy? A 1,000-watt or 1,200-watt option is the best for most people's needs. It's enough power to heat or defrost anything from the freezer section thoroughly. Cheaper microwaves with under 700 watts take longer to cook your food efficiently and may struggle to heat things evenly.

How hot is 120 watts? Bulb wattage versus temperature:

| Wattage | Temperature | Temperature |
|---|---|---|
| 40 W | 110°F | 80°F |
| 60 W | 120°F | 89°F |
| 75 W | N/R | 95°F |
| 100 W | N/R | 106°F |
| 150 W | N/R | 120°F |

How hot is 1500 watts? A 1,500-watt heater producing 5,100 BTUs can heat 150 square feet. That's equivalent to a 10-by-15-foot room, an 11-by-14, or one sized 12-by-12.5 feet with a standard 8-foot ceiling.

How hot is 12 watts? Watts to centigrade heat units (IT) per minute:

| Watts (W) | CHU/min |
|---|---|
| 7 | 0.221157 |
| 12 | 0.379127 |
| 17 | 0.537096 |
| 22 | 0.695066 |

(4 more rows in the source table.)

How many watts do I need to heat a room? A very simple method for determining how much total heating wattage you need is to calculate the square footage of the room, then multiply this by 10 watts to produce a baseline wattage requirement. For example, if you are heating a 12-foot by 12-foot bedroom, it will have 144 square feet.

Is a 900-watt microwave sufficient? A 900-watt microwave should give you a good amount of cooking power, which should result in food that is cooked quickly. Most microwaves are rated between 600 and 1100 watts. Microwaves rated at 700 watts and less may not cook food at the same speed as those with higher ratings.

Is a 1200-watt microwave good? A higher wattage will cook foods faster, which is good if you use the microwave often. The power output of most microwaves falls between 600 and 1200 watts. Recipes that are written for the microwave usually specify a power of at least 800 watts so the foods cook evenly.

Is 800 watts enough for a microwave? An 800-watt microwave should be powerful enough for most uses. Most microwaves on the market today are rated between 600 and 1100/1200 watts. The higher the watts, the faster your food will cook. A lot of recipes for microwave cooking will often require at least 800 watts of power to ensure the food cooks evenly.

How much power does a 1200-watt microwave use? An average modern microwave will use around 1200 watts. Click calculate to find the energy consumption of a microwave oven using 1200 watts for 30 minutes a day at $0.10 per kWh. In addition to using energy while cooking or heating, a microwave will also use 2 to 7 watts of power while in standby mode.

How many degrees is 100 watts? A 100-watt incandescent light bulb has a filament temperature of approximately 4,600 degrees Fahrenheit.

How do you convert watts to degrees Celsius? By using our Fahrenheit/watt to Celsius/watt conversion tool, you know that one Fahrenheit/watt is equivalent to 0.55555555555556 Celsius/watt. Hence, to convert Fahrenheit/watt to Celsius/watt, we just need to multiply the number by 0.55555555555556.

Do watts matter in microwaves? In general, the higher the wattage, the faster and more evenly your food will cook. Most microwaves sit somewhere between 600 and 1,200 watts. Larger, more expensive microwaves tend to have a higher wattage, so this is a price and size consideration that can strongly influence microwave cooking performance.
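The W-to-BTU/h rule quoted above, as a pair of helpers (a minimal sketch; 3.412142 is simply a more precise value of the 3.41 factor):

```python
# Convert between watts and BTU per hour using 1 W ~= 3.41 BTU/h.
BTU_PER_HR_PER_WATT = 3.412142

def watts_to_btu_hr(watts):
    return watts * BTU_PER_HR_PER_WATT

def btu_hr_to_watts(btu_hr):
    return btu_hr / BTU_PER_HR_PER_WATT

print(round(watts_to_btu_hr(1500)))  # ~5118, i.e. the "1,500 W -> 5,100 BTU" heater
```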
1,508
6,021
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2021-21
latest
en
0.894894
http://www.formulaconversion.com/formulaconversioncalculator.php?convert=dekametersperhour_to_micrometersperhour
1,505,943,873,000,000,000
text/html
crawl-data/CC-MAIN-2017-39/segments/1505818687484.46/warc/CC-MAIN-20170920213425-20170920233425-00271.warc.gz
456,889,968
16,040
# Dekameters/hour to micrometers/hour (dam/hr to micrometers/hour) metric conversion calculator

Welcome to our dekameters/hour to micrometers/hour (dam/hr to micrometers/hour) conversion calculator. You can enter a value in either the dekameters/hour or micrometers/hour input fields. For an understanding of the conversion process, we include step-by-step and direct conversion formulas. If you'd like to perform a different conversion, just select between the listed speed units or use the search bar above.

Tip: use the swap button to switch from converting dekameters/hour to micrometers/hour to converting micrometers/hour to dekameters/hour.

1 dam/hr = 10,000,000 micrometers/hour; 1 micrometer/hour = 1.0E-7 dam/hr.

Algebraic steps / dimensional analysis formula: dam/hr x 10,000,000 = micrometers/hour.
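That single factor in code (a minimal sketch): 1 dam = 10 m = 10,000,000 micrometers, so the per-hour rates scale by the same constant.

```python
# Convert speeds between dekameters/hour and micrometers/hour.
UM_PER_DAM = 10_000_000

def dam_per_hr_to_um_per_hr(value):
    return value * UM_PER_DAM

def um_per_hr_to_dam_per_hr(value):
    return value / UM_PER_DAM

print(dam_per_hr_to_um_per_hr(1))  # 10000000
print(um_per_hr_to_dam_per_hr(1))  # 1e-07
```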
584
2,330
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2017-39
latest
en
0.723766
https://community.jmp.com/t5/Discussions/Generalized-Linear-Model-Question-for-binomial-distribution/td-p/313796
1,604,183,039,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00588.warc.gz
256,834,386
73,376
## Generalized Linear Model - Question for binomial distribution

(Level V) Hello everyone, I have a question regarding the Generalized Linear Model platform when we use a binomial distribution. The response variable can be specified using two continuous columns as Y, in this order: the count of the number of successes, and the count of the number of trials. What if, for some reason, the number of successes is greater than the number of trials for a few rows (e.g. successes = 60, trials = 50)? Is it automatically corrected during the analysis (i.e. the number of trials is updated to 60)? Or should it absolutely be corrected before launching the analysis, because it can lead to mistakes during the calculations?

1 ACCEPTED SOLUTION

## Re: Generalized Linear Model - Question for binomial distribution

(Staff) JMP accepts the data in only one way: first the number of events, then the total number of trials. There is no correction. JMP assumes that your data is correct. Learn it once, use it forever!

3 REPLIES

## Re: Generalized Linear Model - Question for binomial distribution

(Level VII) I can't answer whether JMP will automatically correct the number of trials for all rows, but obviously you have an issue with data collection or entry. If you have more successes than trials, what other data collection/entry errors are in the data set? I would correct these before proceeding with any analysis.

## Re: Generalized Linear Model - Question for binomial distribution

(Level V) Thanks for your answers @statman and @markbailey. Of course I totally agree with you: the cleaning step should be the very first step before doing any analysis. In some very particular cases the total number of trials can be slightly updated in comparison to the initial number provided. I just wanted to know, out of curiosity, how it was handled from a technical point of view and whether the correction was done automatically or not. But I definitely agree that it is better to clean first.
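The pre-analysis check the replies recommend, sketched in pandas (the column names and values here are hypothetical, not from the thread):

```python
# Flag rows where successes exceed trials before fitting a binomial model.
import pandas as pd

df = pd.DataFrame({"successes": [40, 60, 12], "trials": [50, 50, 30]})
bad_rows = df[df["successes"] > df["trials"]]
print(bad_rows)  # the 60-vs-50 row must be corrected before any fit
```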
536
2,590
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2020-45
latest
en
0.870247
https://www.giladhirschberger.com/fixing-limiting-reactant-stoichiometry-issues.html
1,715,996,493,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971057216.39/warc/CC-MAIN-20240517233122-20240518023122-00702.warc.gz
715,653,509
59,194
# Fixing Limiting Reactant Stoichiometry Issues

There is enough Al to provide 0.297 mol Fe, but only enough Fe2O3 to provide 0.250 mol Fe. This means that the amount of Fe actually produced is limited by the Fe2O3 present, which is therefore the limiting reagent. Learn how to express the concentration of a solution in terms of molarity, molality, and mass percent. Discover the differences between an electrolyte and a nonelectrolyte. Discover what titration is and how to calculate the concentration of an acid or base that has been titrated to equivalence. Recall that the coefficients of a balanced equation represent the stoichiometric amounts of the reactants and products. The limiting reactant is H2 and the excess reactant is O2. A mole is a tool used in chemistry to count molecules, based on their mass. By knowing the number of moles of both oxygen and glucose, you know how many molecules of each you are starting with. To find the ratio between the two, divide the number of moles of one reactant by the number of moles of the other. A correctly balanced equation will show the same number of atoms going into the equation as reactants as you have coming out in the form of products.

## Reactions and Stoichiometry

Limiting reagent problems use stoichiometry to determine the theoretical yield for a chemical reaction. The limiting reactant will be completely consumed in the reaction, and it limits the amount of product you can make. The reactant that is left over after the reaction is complete is known as the excess reactant. To calculate theoretical yield, start by finding the limiting reactant in the equation, which is the reactant that gets used up first when the chemical reaction takes place.

As demonstrated above, this type of leakage can be reduced to a very low level and would vary across cell builds. For flow batteries to achieve the decadal lifetimes that are ideal for grid storage, it is critical to separate and quantify the contributions of each source of capacity fade. To this end, methods for controlling certain contributions while measuring others are essential. We present an unbalanced, compositionally-symmetric flow cell method for revealing and quantifying different mechanisms of capacity fade in redox flow batteries that are based on molecular energy storage.

### English Language Learners Definition of Reactant

The reactant that would run out before the reaction proceeded to completion is called the limiting reactant, and the other reactants are termed excess reactants. But we only have 3.125 moles of oxygen available for the reaction, so we will run out of oxygen before ammonia. Therefore, oxygen is the limiting reactant and ammonia is in excess. In the first method, we find and compare the mole ratios of the reactants, while in the other, we find the amount of product that would be produced by each reactant. The one that produces the least amount of the end product is the limiting reagent. In organic chemistry, the term "reagent" denotes a chemical ingredient introduced to cause a desired transformation of an organic substance. This method is most useful when there are only two reactants. One reactant (A) is chosen, and the balanced chemical equation is used to determine the quantity of the other reactant (B) necessary to react with A. If the quantity of B actually present exceeds the amount required, then B is in excess and A is the limiting reagent. If the quantity of B present is less than required, then B is the limiting reagent. As discussed in the overview, in order to determine the limiting reactant, we need to use the given moles and calculate which reactant will form less product, based on the mole ratios in the chemical equation. The reactant molecules may be in the solid, liquid, or gaseous phase. Outside of actual metabolism, it is used for essential biological functions, such as the polymerase chain reaction and DNA sequencing. Each step of such a catalytic process is accompanied by conformational changes of the DNA polymerase. These transitions, together with the corresponding reaction pathways, have an intrinsically transient nature and have been extensively studied through single-molecule-based methods.
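The mole-ratio comparison described above, sketched for 2 H2 + O2 -> 2 H2O; the mole amounts are illustrative assumptions, not figures from the article:

```python
# Each reactant's possible product yield, alone; the smallest yield wins.
moles = {"H2": 3.0, "O2": 2.0}
coeff = {"H2": 2, "O2": 1}     # balanced-equation coefficients
product_coeff = 2              # for H2O

possible = {r: moles[r] / coeff[r] * product_coeff for r in moles}
limiting = min(possible, key=possible.get)
print(limiting, possible[limiting])  # H2 3.0 -> H2 limits; 3.0 mol H2O forms
```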
886
4,454
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.65625
3
CC-MAIN-2024-22
latest
en
0.939421
https://cs.stackexchange.com/questions/12129/finding-the-path-of-a-negative-weight-cycle-using-bellman-ford/12206
1,571,554,877,000,000,000
text/html
crawl-data/CC-MAIN-2019-43/segments/1570986703625.46/warc/CC-MAIN-20191020053545-20191020081045-00521.warc.gz
426,522,789
33,102
# Finding the path of a negative weight cycle using Bellman-Ford

I wrote a program which implements Bellman-Ford and identifies when negative-weight cycles are present in a graph. However, what I'm actually interested in is, given some starting vertex and a graph, which path do I actually trace to get back to the original vertex having traveled a negative amount. So, to be clear, say I have a graph with vertices a, b, c, and d, and there is a negative cycle between a, b, and d. Then when I check for negative weight cycles:

```
// Step 1: initialize graph
for each vertex v in vertices:
    if v is source then distance[v] := 0
    else distance[v] := infinity
    predecessor[v] := null

// Step 2: relax edges repeatedly
for i from 1 to size(vertices)-1:
    for each edge (u, v) with weight w in edges:
        if distance[u] + w < distance[v]:
            distance[v] := distance[u] + w
            predecessor[v] := u

// Step 3: check for negative-weight cycles
for each edge (u, v) with weight w in edges:
    if distance[u] + w < distance[v]:
        "Graph contains a negative-weight cycle"
```

Instead of it just telling me that a negative cycle is there, I would like it to tell me to go from a -> b -> d -> a. After the relaxing step, what do I have to change in my check for negative-weight cycles to get it to output this information?

• Here is the best information I've been able to find, but I'm still having trouble making sense of it.
• Also this, which suggests that I need to run breadth-first search on the predecessor array to find the information, but I'm not exactly sure where to start (what do I queue first?).
• Here is a Stack Overflow question which shows how to find one of the nodes in the path.

• Just to clarify, is $a\xrightarrow{1} b \xrightarrow{-2} c \xrightarrow{-1} b \xrightarrow{1} a$ a (negative) cycle? And are you looking for the most efficient algorithm, or is a working algorithm (but with a not-so-good complexity) enough? – wece May 21 '13 at 14:03
• I would prefer the most efficient, but it doesn't have to be. At the same time, if it's like $O(n^2)$ time on top of Bellman-Ford then I don't want that either. I know that the predecessor array has the information, so really I'm asking how do I extract it. And more like a->b = 1, b->d = -3, d->a = 1, but really just any negative weight cycle – Loourr May 21 '13 at 14:33
• We already have a question on this topic: Getting negative cycle using Bellman Ford. Does that thread answer your question? If not, please edit your question to state what you still need answered. – Gilles May 22 '13 at 22:33
• Some points of confusion: by "every node u" do you mean every node u in predecessor? What do you mean by "while v is white and has a predecessor"? And what does it mean to set v := predecessor[v]? Thanks for answering, @David Eisenstat – Loourr May 21 '13 at 23:02
• I was not questioning what := means but rather what predecessor[v] is, and why it's guaranteed to have a place in the predecessor array? – Loourr May 21 '13 at 23:33
• So in the step "for i from 1 to size(vertices)-1:" do I want to be doing "for i from 1 to size(vertices)"? And I'm still not sure what you mean by v "has a predecessor". – Loourr May 22 '13 at 3:02
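For reference, one standard way to recover the cycle itself from the predecessor array (a sketch in Python, not the poster's code): take any edge (u, v) that still relaxes after the |V|-1 rounds, step back through predecessor[] |V| times from v so you are guaranteed to be on the cycle, then follow predecessor[] until the starting vertex repeats.

```python
def extract_negative_cycle(predecessor, v, num_vertices):
    # Walk back num_vertices times: this is enough to land inside the cycle.
    for _ in range(num_vertices):
        v = predecessor[v]
    # Collect vertices until we return to where we started.
    cycle, start = [v], v
    v = predecessor[v]
    while v != start:
        cycle.append(v)
        v = predecessor[v]
    cycle.reverse()  # predecessor[] walks edges backwards, so flip the order
    return cycle
```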
834
3,165
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.546875
4
CC-MAIN-2019-43
longest
en
0.911104
https://www.educationquizzes.com/ks1/maths/practice-counting-and-number-recognition-10/
1,721,338,189,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514859.56/warc/CC-MAIN-20240718191743-20240718221743-00644.warc.gz
657,606,268
9,335
# Practice - Counting and Number Recognition - 10

Hello young mathematician! This quiz is full of fun and exciting questions about counting and number recognition. It's designed specially for young ones like you, who truly enjoy learning new things about numbers. Let's jump in and see if you can get all 10 questions right. Remember, it's all about trying your best. So, are you ready? Let's get started!

Question 1. Which of these is an even number? (1, 5, 3, 2) Answer: 2, because it can be evenly divided by 2.

Question 2. Which digits represent the number twelve? (21, 12, 2, 20) Answer: 12; the option 21 is 12 with its digits reversed.

Question 3. Which number is greater, 18 or 13? (18, 13, they are equal, cannot be determined) Answer: 18 is greater than 13.

Question 4. Ten minus three equals... (7, 8, 6, 10) Answer: 7; when you subtract 3 from 10, you are left with 7.

Question 5. What is half of 10? (5, 3, 7, 2) Answer: 5.

Question 6. Which of these represents zero? (1, 10, 0, 100) Answer: 0.

Question 7. How many pennies are there in 1 pound? (10, 100, 50, 1000) Answer: 100; 1 pound is equivalent to 100 pennies.

Question 8. If you have 5 apples and you add 2 more, how many do you have? (6, 8, 7, 5) Answer: 7.

Question 9. Which number comes next after 15? (16, 17, 14, 15) Answer: 16; it comes directly after 15.

Question 10. How many edges does a triangle have? (4, 5, 3, 6) Answer: 3.

Author: Graeme Haw
439
1,491
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.890625
4
CC-MAIN-2024-30
latest
en
0.932536
https://ontimemicrofinance.com/qa/quick-answer-what-are-the-two-types-of-discounts.html
1,624,605,417,000,000,000
text/html
crawl-data/CC-MAIN-2021-25/segments/1623487622113.11/warc/CC-MAIN-20210625054501-20210625084501-00260.warc.gz
391,819,761
8,040
# Quick Answer: What Are The Two Types Of Discounts?

## What is a discount?

The noun discount refers to an amount or percentage deducted from the normal selling price of something. As a verb, discount means to reduce the price: the manager can discount the item for you. The verb discount also means to disregard, underestimate, or dismiss.

## What is trade discount in simple words?

Definition of trade discount: a trade discount is a routine reduction from the regular, established price of a product. The use of trade discounts allows a company to vary the final price based on each customer's volume or status. Note that trade discounts are different from early-payment discounts.

## What is a discount strategy?

Discount pricing is one type of pricing strategy where you mark down the prices of your merchandise. The goal of a discount pricing strategy is to increase customer traffic, clear old inventory from your business, and increase sales.

## What is a normal cash discount?

An example of a typical cash discount is a seller who offers a 2% discount on an invoice due in 30 days if the buyer pays within the first 10 days of receiving the invoice. Giving the buyer a small cash discount would benefit the seller, as it would allow her to access the cash sooner.

## What is a good cash discount?

Saving as much as $3 per week adds up to $150 or more per year. An informal survey of restaurants around the country found 10 percent is the norm for cash discounts, but a few eateries took as much as 15 percent off the bill.

## How do you find a discount?

How to calculate a discount:
1. Convert the percentage to a decimal: represent the discount percentage in decimal form.
2. Multiply the original price by the decimal.
3. Subtract the discount from the original price.
To estimate instead: round the original price, find 10% of the rounded number, determine the number of "10s", estimate the discount, and account for 5%.

## How many types of discounts are there in accounting?

There are 3 types of discount: trade discount, quantity discount, and cash discount.

## What is a discount in math?

A reduction in price. Here the discount is $2. Sometimes discounts are in percent, such as a 10% discount, and then you need to do a calculation to find the price reduction. See: Percent.

## What is a discount code?

Definition: discount codes are personalized or publicly released codes offered to customers as a purchasing incentive that reduces the price of an order. Discount codes can be an effective means for ecommerce stores to attract shoppers and encourage repeat customers.

## What does a higher discount rate mean?

A higher discount rate implies greater uncertainty and a lower present value of our future cash flow. Calculating what discount rate to use in your discounted cash flow calculation is no easy choice: it's as much art as it is science.

## What are the types of discounts?

Types of discounts:
- Buy one, get one free. This discount may require a buyer to receive two of the same inventory item, or it could allow for a free item that differs from the initial purchase.
- Contractual discounts.
- Early payment discount.
- Free shipping.
- Order-specific discounts.
- Price-break discounts.
- Seasonal discount.
- Trade discount.

## What is a common type of discount?

The types are: 1. Quantity Discounts, 2. Trade Discounts, 3. Promotional Discounts, 4. ...

## How is trade discount calculated?

If the discount is a percentage, you calculate the trade discount by converting the percentage to a decimal and multiplying that decimal by the listed price. If the reseller is purchasing $1,000 worth of items at a 30-percent discount, the trade discount would be 1,000 x 0.3, which equals $300.

## How do you get trade discounts?

The trade discount may be stated as a specific dollar reduction from the retail price, or it may be a percentage discount. The trade discount customarily increases in size if the reseller purchases in larger quantities (such as a 20% discount if an order is 100 units or less, and a 30% discount for larger quantities).

## What is a discount account?

The sales discount account is a contra revenue account, which means that it reduces total revenues. As discounts are taken, the entry is a credit to the accounts receivable account for the amount of the discount taken and a debit to the sales discount reserve.

## What are the two types of cash discount?

In accounting, there are two different ways that cash discounts can be recorded in the books: the net method and the gross method. The net method treats sales revenue as the net amount after the given discount, and any discounts that the buyer doesn't take are recorded as interest revenue.
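The worked trade-discount example above, in code (a minimal sketch):

```python
# A 30% trade discount on a $1,000 list price.
list_price = 1_000.00
discount_rate = 0.30

trade_discount = list_price * discount_rate   # 300.0
net_price = list_price - trade_discount       # 700.0
print(trade_discount, net_price)
```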
973
4,715
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.796875
3
CC-MAIN-2021-25
latest
en
0.941582
https://www.coursehero.com/file/6962092/Note-that-they-asymptotes-are-shown-as-dotted-lines-Example/
1,490,735,139,000,000,000
text/html
crawl-data/CC-MAIN-2017-13/segments/1490218189884.21/warc/CC-MAIN-20170322212949-00068-ip-10-233-31-227.ec2.internal.warc.gz
877,635,255
22,398
Alg_Complete

# Note that the asymptotes are shown as dotted lines

Text preview (truncated): "...d so this graph will never cross the y-axis. It does get very close to the y-axis, but it will never cross or touch it, and so there is no y-intercept. Next, recall that we can determine where a graph will have x-intercepts by solving f(x) = 0. For rational functions this may seem like a mess to deal with. However, there is a nice fact about rational functions that we can use here: a rational function will be zero at a particular value of x only if the numerator is zero at that x and the denominator isn't zero at that x. In other words, to determine if a rational function is ever zero, all that we need to do is set the numerator equal to zero and solve. Once we have these solutions, we just need to check that none of them makes the denominator zero as well. In our case the numerator is one and will never be zero, and so this function will have no x-intercepts. Again, the graph will get very close to the x-axis but it will never touch or cross it. Finally, we need to address the fact that the graph gets very close t..."

This note was uploaded on 06/06/2012 for the course ICT 4 taught by Professor Mrvinh during the Spring '12 term at Hanoi University of Technology.
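The x-intercept rule from that excerpt, checked symbolically on an illustrative function of my own choosing (assuming sympy is available):

```python
# A rational function is zero only where its numerator is zero and its
# denominator is not; keep only the numerator roots that pass that check.
import sympy as sp

x = sp.symbols('x')
f = (x**2 - 1) / (x - 2)
roots = [r for r in sp.solve(sp.numer(f), x) if sp.denom(f).subs(x, r) != 0]
print(roots)  # [-1, 1]
```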
335
1,467
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.25
3
CC-MAIN-2017-13
longest
en
0.948818
https://math.stackexchange.com/questions/1170672/how-to-find-the-definite-integral-int-0-infty-fracx-sinh-ax-dx
1,558,622,679,000,000,000
text/html
crawl-data/CC-MAIN-2019-22/segments/1558232257259.71/warc/CC-MAIN-20190523143923-20190523165923-00497.warc.gz
567,268,586
33,165
# How to find the definite integral $\int_0^\infty \frac{x}{\sinh ax}\;dx$

I'm trying to prove that $$I:= \int_0^\infty \frac{x}{\sinh(ax)} dx = \frac{\pi^2}{4a^2}$$

Attempt: $$\sinh (ax) = \frac{1}{2}(e^{ax}-e^{-ax}) = \frac{1}{2}e^{-ax}(e^{2ax}-1)$$

Now I have $$\int_0^\infty \frac{x}{\sinh(ax)} dx = \int_0^\infty 2x \sum_{q=0}^\infty \frac{\frac{a^qx^q}{q!}}{e^{2ax}-1} dx$$

Substituting $y=2ax$ I get: $$I= \frac{1}{2a^2} \sum_{q=0}^\infty \frac{1}{2^qq!} \int_0^\infty \frac{y^{q+1}}{e^y-1}dy=\frac{1}{2a^2} \sum_{q=0}^\infty \frac{1}{2^qq!} \Gamma(q+2) \zeta(q+2)$$

Am I on the right track? What about the infinite series; why must it have the value $\frac{\pi^2}{2}$? Any hints will be highly appreciated.

• Have you tried parameter differentiation on $a$? – Gabriel Romon Mar 1 '15 at 18:37
• Have you tried integration by parts? – user207710 Mar 1 '15 at 18:44
• Yes, I have. But in many cases suitable series expansions can be helpful. – kryomaxim Mar 1 '15 at 18:44
• In general, $~\displaystyle\int_0^\infty\frac{x^{k-1}}{\sinh x}~dx~=~\Big(2-2^{1-k}\Big)~\Gamma\big(k\big)~\zeta\big(k\big)~$ – Lucian Mar 1 '15 at 20:26

We may write \begin{align} \int_0^{+\infty} \frac{x}{\sinh (ax)} \:dx&=2\int_0^{+\infty} \frac{x}{e^{ax} - e^{-ax}} \:dx\\\\ &=2\int_0^{+\infty} \frac{x}{1 - e^{-2ax}} e^{-ax}\:dx\\\\ &=2\int_0^{+\infty} x \sum_{n=0}^{\infty}e^{-a(2n+1)x}dx\\\\ &=2\sum_{n=0}^{\infty}\int_0^{+\infty} x \:e^{-a(2n+1)x}dx\\\\ &=\frac{2}{a^2}\sum_{n=0}^{\infty}\frac{1}{(2n+1)^2}\\\\ &=\frac{2}{a^2}\times\frac{\pi^2}{8}\\\\ &=\frac{\pi^2}{4a^2}. \end{align}

We have: $$\int_{0}^{+\infty}\frac{x}{\sinh(a x)}\,dx = \frac{1}{a^2}\int_{0}^{+\infty}\frac{x}{\sinh x}\,dx =\frac{1}{a^2}\int_{1}^{+\infty}\frac{2\log t}{t^2+1}\,dt$$ hence: $$\int_{0}^{+\infty}\frac{x}{\sinh(a x)}\,dx = -\frac{2}{a^2}\int_{0}^{1}\frac{\log u}{1+u^2}\,du = \frac{2}{a^2}\cdot\frac{\pi^2}{8}=\color{red}{\frac{\pi^2}{4a^2}}.$$
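As a quick numerical sanity check (our addition, not part of the thread), the closed form can be verified with SciPy; the choice a = 2 is arbitrary:

```python
import numpy as np
from scipy.integrate import quad

a = 2.0  # arbitrary positive parameter
val, err = quad(lambda x: x / np.sinh(a * x), 0, np.inf)
print(val, np.pi**2 / (4 * a**2))  # both ~0.61685
```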
918
1,933
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
3.84375
4
CC-MAIN-2019-22
latest
en
0.469989
https://math.stackexchange.com/questions/1407250/fourier-transform-of-the-cosine-function-with-phase-shift
1,713,954,940,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296819089.82/warc/CC-MAIN-20240424080812-20240424110812-00765.warc.gz
330,822,475
35,601
Fourier transform of the Cosine function with Phase Shift?

How can I calculate the Fourier transform of a delayed cosine? I haven't found anywhere how to do that. This is my attempt, hoping for a way to find it without using the definition: $$x(t) = \cos(2\pi f_ct -\theta) = \cos\bigg(2\pi f_c\bigg(t -\frac{\theta}{2\pi f_c}\bigg)\bigg)$$ ($f_c$ stands for the fundamental frequency of the signal and $\theta$ is the phase shift.) Now, using the Fourier time-shift property $$x(t-\theta) \longrightarrow X(f)e^{-j2\pi f\theta}$$ and knowing that the Fourier transform of $\cos(2\pi f_ct)$ is $\frac{1}{2}\delta(f-f_c)+\frac{1}{2}\delta(f+f_c)$, I get: $$\cos\bigg(2\pi f_c\bigg(t -\frac{\theta}{2\pi f_c}\bigg)\bigg) \longrightarrow \bigg[ \frac{1}{2}\delta(f-f_c)+\frac{1}{2}\delta(f+f_c) \bigg]e^{-j2\pi f\frac{\theta}{2\pi f_c}} = \frac{1}{2}\delta(f-f_c)e^{-\frac{jf\theta}{f_c}} +\frac{1}{2}\delta(f+f_c)e^{-\frac{jf\theta}{f_c}}$$ Is this a way to find it? If yes, can I simplify it further? And what happens with the simplifications when you're given the $f_c$? Thanks in advance.

• Why not take this approach? $$x(t) = \cos(2\pi f_ct -\theta) = \cos(2\pi f_ct)\cos(\theta)-\sin(2\pi f_ct)\sin(\theta)$$ – Moti Aug 24, 2015 at 0:49
• This seems harder for finding the Fourier transform. Why not complete that and add it as the answer? Aug 24, 2015 at 7:59
• What is the transform for $\cos(2\pi f_c t)$? – Moti Aug 25, 2015 at 4:27
• It's right there. I have written it. Do you see it? I will write it on one whole line, just so it's easier to see. Aug 25, 2015 at 8:19
• You just need to multiply the cos and sin transforms by the phase correction. It looks like what you got is the right result. – Moti Aug 26, 2015 at 3:58

The Fourier transform gives the locations and the (complex) amplitudes of the exponential, i.e. $e^{jwt}$, terms. By using the Euler identity $\cos(\theta)=\frac{e^{j\theta}+e^{-j\theta}}{2}$, the Fourier transforms of $\cos(wt)$ and $\sin(wt)$ can be found. This is due to the fact that $F(e^{jw_0t})=2\pi\delta(w-w_0)$. Thus the Fourier transform of the shifted cosine $x(t)=\cos(w_0t-\theta)$ is $$\cos(w_0t-\theta)=\frac{e^{j(w_0t-\theta)}+e^{-j(w_0t-\theta)}}{2} \\ \rightarrow F(\cos(w_0t-\theta))=F\left(\frac{e^{j(w_0t-\theta)}+e^{-j(w_0t-\theta)}}{2}\right)\\ =\frac{F(e^{j(w_0t-\theta)})+F(e^{-j(w_0t-\theta)})}{2} \\ =\frac{e^{-j\theta}F(e^{jw_0t})+e^{j\theta}F(e^{-jw_0t})}{2} \\ =\frac{e^{-j\theta}2\pi\delta(w-w_0)+e^{j\theta}2\pi\delta(w+w_0)}{2} \\ =\pi(e^{-j\theta}\delta(w-w_0)+e^{j\theta}\delta(w+w_0))$$
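A quick numerical illustration (our own, not from the thread): sampling a shifted cosine over an integer number of periods and taking the DFT shows the two spectral lines carrying the phase factors $e^{\mp j\theta}$, matching the result above. The bin index 8 and sample count 1024 are arbitrary choices:

```python
import numpy as np

N, k0, theta = 1024, 8, 0.7          # samples, cycles per window, phase shift
t = np.arange(N) / N                 # one window of unit duration
x = np.cos(2 * np.pi * k0 * t - theta)

X = np.fft.fft(x)
# the two nonzero bins sit at +k0 and -k0, with amplitude N/2 each
print(X[k0] / (N / 2))    # ~ exp(-1j*theta)
print(X[-k0] / (N / 2))   # ~ exp(+1j*theta)
print(np.exp(-1j * theta))
```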
928
2,396
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.9375
4
CC-MAIN-2024-18
latest
en
0.810718
https://blog.csdn.net/ddc2004/article/details/87904297
1,619,094,343,000,000,000
text/html
crawl-data/CC-MAIN-2021-17/segments/1618039603582.93/warc/CC-MAIN-20210422100106-20210422130106-00236.warc.gz
248,036,226
19,754
# Using Matlab-generated distortion coefficients for image correction in OpenCV

In Matlab the call is CamPoints = undistortPoints(distortedCamPoints, params);

The cameraParam object produced by Matlab's camera calibration routine (the params above) is a very large variable that bundles all camera-related parameters. The problem is that this function cannot be exported via Matlab Coder.

OpenCV's undistortPoints, by contrast, returns normalized coordinates, which have to be mapped back to pixel coordinates using the focal lengths and the principal point:

pixelX = x * focal length x + principal point x
pixelY = y * focal length y + principal point y

```cpp
// Requires #include <opencv2/imgproc.hpp> and using namespace cv / std.
vector<Point2f> distortPoints, unDistortPoints;
distortPoints.push_back(Point2f(711.13, 631.42));

Mat cameraMatrix = Mat::eye(3, 3, CV_32F);
cameraMatrix.at<float>(0, 0) = 7.8818e+03;   // fx
cameraMatrix.at<float>(0, 1) = 0;
cameraMatrix.at<float>(0, 2) = 1.3083e+03;   // cx
cameraMatrix.at<float>(1, 1) = 7.89e+03;     // fy
cameraMatrix.at<float>(1, 2) = 1.1996e+03;   // cy

Mat distCoeffs = Mat::zeros(5, 1, CV_32F);
distCoeffs.at<float>(0, 0) = -0.2363;        // k1, radial distortion
distCoeffs.at<float>(1, 0) = -0.9046;        // k2, radial distortion
distCoeffs.at<float>(2, 0) = 0;              // p1, tangential distortion
distCoeffs.at<float>(3, 0) = 0;              // p2, tangential distortion
distCoeffs.at<float>(4, 0) = 0;              // k3, radial distortion

undistortPoints(distortPoints, unDistortPoints, cameraMatrix, distCoeffs);

// undistortPoints returns normalized coordinates; convert back to pixels.
unDistortPoints[0].x = unDistortPoints[0].x * cameraMatrix.at<float>(0, 0) + cameraMatrix.at<float>(0, 2);
unDistortPoints[0].y = unDistortPoints[0].y * cameraMatrix.at<float>(1, 1) + cameraMatrix.at<float>(1, 2);
```
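For reference (our addition, not from the original post), the same correction can be done in Python, where passing the camera matrix as the P argument makes cv2.undistortPoints return pixel coordinates directly, so the manual re-projection step becomes unnecessary:

```python
import numpy as np
import cv2

camera_matrix = np.array([[7881.8, 0.0, 1308.3],
                          [0.0, 7890.0, 1199.6],
                          [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.array([-0.2363, -0.9046, 0.0, 0.0, 0.0], dtype=np.float32)

pts = np.array([[[711.13, 631.42]]], dtype=np.float32)  # shape (N, 1, 2)
undistorted = cv2.undistortPoints(pts, camera_matrix, dist_coeffs,
                                  P=camera_matrix)  # P maps back to pixels
print(undistorted.reshape(-1, 2))
```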
591
1,473
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2021-17
latest
en
0.219814
https://nz.education.com/resources/fourth-grade/math-puzzles/CCSS-Math-Content-4/
1,606,805,191,000,000,000
text/html
crawl-data/CC-MAIN-2020-50/segments/1606141652107.52/warc/CC-MAIN-20201201043603-20201201073603-00011.warc.gz
401,286,578
25,377
# Search Our Content Library

15 filtered results: Math Puzzles

- Number Crunchers: Operations Practice (Math Workbook). This workbook is packed with worksheets that let kids sharpen math skills by practicing the four basic math operations as well as factoring.
- Solving KenKen Puzzles (Math Lesson Plan). It's time for some fun with grid puzzles! In this lesson, students will learn how to solve KenKen puzzles using logic and applying their knowledge of addition, subtraction, multiplication, and division skills.
- Polygon Practice (Math Worksheet).
- All Kinds of Polygons (Math Worksheet). Use this fun math challenge worksheet to review all kinds of polygons!
- It's a Maze of Hexagons! (Math Worksheet). Students practice identifying prime numbers, regular polygons, and irregular polygons!
- Math Pattern Puzzles (Math Worksheet).
- What Kind of Polygon? (Math Worksheet).
- How Did You Solve the Puzzle? (Math Lesson Plan). Grid puzzles are a terrific way to practice thinking about math! In this lesson, students are asked to solve addition grid puzzles and compare processes. Use it on its own or as support for the lesson Solving KenKen Puzzles.
- Pentagon Perimeter Practice (Math Worksheet).
- Describing Polygons (Math Worksheet).
- More Pentagon Perimeter Practice (Math Worksheet). This is a great worksheet to practice calculating the perimeter for a pentagon.
- More Amaz-ing Hexagons! (Math Worksheet). This multi-part worksheet helps students practice identifying the differences between regular and irregular hexagons while also reviewing their knowledge of composite and prime numbers.
- Finish the Polygon Challenge (Math Worksheet). Brush up on the different types of polygons and what makes them unique!
- Vocabulary Cards: How Did You Solve the Puzzle? (Math Worksheet). Use these vocabulary cards with the EL Support Lesson: How Did You Solve the Puzzle?
507
2,348
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.21875
3
CC-MAIN-2020-50
latest
en
0.820294
http://apleaverules.tk/physics-reference-sheet-ncsecu.html
1,563,919,311,000,000,000
text/html
crawl-data/CC-MAIN-2019-30/segments/1563195529737.79/warc/CC-MAIN-20190723215340-20190724001340-00384.warc.gz
9,333,215
4,351
# Physics reference sheet ncsecu

## Physics reference sheets

In academic terms, a reference sheet (people often call these "cheat sheets" or study sheets) is a compilation of notes on a specific topic, such as math or physics formulas, that acts as a memory aid; they may also take the form of reference charts in PDF. Physics involves a lot of calculations and problem solving, and having the most frequently used physics equations and formulas on hand helps you perform these tasks more efficiently and accurately. Examples include the AP Physics 1 Equation Sheet, the Higher School Certificate examination reference sheets (Mathematics, Mathematics Extension 1, Mathematics Extension 2), the STAAR Physics reference materials (State of Texas Assessments of Academic Readiness), the NY Regents Reference Tables for Physics (Physics Regents Examination), and sheets published by individual schools such as Mona Shores High School. To obtain large-print or Braille editions of the Reference Tables (large-type editions are available only for the Earth Science Reference Tables, ESRTs), send requests on school letterhead to State Ed's fax, including subject, quantity, name, school address, and the school's BEDS code.

Massachusetts Comprehensive Assessment System, Introductory Physics Reference Sheet formulas:
- T = 1/f
- Q = mcΔT
- V = IR
- v = λf
- a_average = Δv/Δt
- Δx = v_iΔt + ½aΔt²
- s_average = d/Δt
- v_f = v_i + aΔt
- v_average = Δx/Δt
- F_net = ma
- F_g = Gm₁m₂/d²
- p = mv
- F_g = mg
- FΔt = Δp
- ΔPE = mgΔh
- eff = E_out/E_in
- F_e = kq₁q₂/d²
- W = ΔE = Fd
- KE = ½mv²

NCDPI Reference Tables for Physics, Version 2, Page 2 (Mechanics and Energy):
- Δx = vΔt
- x_f = x_i + v_iΔt + ½aΔt²
- Δv = aΔt
- v_f² = v_i² + 2aΔx
- F = ma
- F_g = mg
- F = Gm₁m₂/r²
- p = mv
- J = FΔt
- a_c = v²/r
- F_c = mv²/r
where a = uniform acceleration, a_c = centripetal acceleration, F = force, F_c = centripetal force, F_g = weight, and g = acceleration due to gravity.

Energy formulas:
- Kinetic energy = ½ (mass)(velocity)²: KE = ½mv²
- Gravitational potential energy = (mass)(acceleration due to gravity)(height): PE = mgh
- Elastic potential energy = ½ (spring constant)(distance stretched or compressed)²: PE_s = ½kx²

From the Reference Guide & Formula Sheet for Physics (Dr. Price, Page 4 of 8, Version 5/12):
- #84 Capacitance of a capacitor: C = κ·ε₀·A/d, where κ = dielectric constant, A = area of the plates, d = distance between the plates, and ε₀ = 8.85 × 10⁻¹² F/m.
- #85 Induced voltage: Emf = N·ΔΦ/Δt, where N = number of loops.

Basic Conversion Cheat Sheet:
- Three basic units of measurement: length, mass (weight), and volume.
  - The basic unit of length is the METER.
  - The basic unit of volume is the LITER.
  - The basic unit of mass (weight) is the GRAM.
- The metric system forms other units with prefixes on these base units.

Physics Reference Tables; AP Physics 1 equation sheet (CED); Advanced Placement Physics 1 equations, effective [year not given]. Access CBT and PBT practice tests, standard reference sheets for Mathematics, approved ELA graphic organizers, and physics reference sheets for students with disabilities. Also available are blank CBT reference response boxes, which allow students to practice answering constructed-response questions using the TestNav8 testing platform. Please note: you must use Adobe Acrobat Reader/Professional X or higher to open the secure PDF files of scoring materials.

You can use these physics formulas as a quick reference when solving problems in electricity, magnetism, and light; a cheat sheet of this kind (for example, the Physics II For Dummies Cheat Sheet) also includes a list of physics constants useful in a broad range of physics problems. A physics data sheet, formulae sheet, and periodic table of the elements are often published together. Formsbank offers a variety of free, multi-purpose forms for schools, colleges, universities, and training centers, and Reference.com covers physics topics including electricity, magnetism, and motion & mechanics. Course units covered by such sheets include Unit 2: Force & Circular Motion, Unit 3: Energy & Momentum, and Unit 4: Waves & Sound (Regents Physics Reference Sheet).
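A small worked example (our own, with made-up numbers) applying the kinematics and energy formulas above:

```python
# Hypothetical worked example using the reference-sheet formulas.
v_i = 2.0    # initial velocity, m/s
a = 1.5      # uniform acceleration, m/s^2
dt = 4.0     # elapsed time, s
m = 3.0      # mass, kg

v_f = v_i + a * dt                 # v_f = v_i + a*Δt
dx = v_i * dt + 0.5 * a * dt**2    # Δx = v_i*Δt + ½aΔt²
ke = 0.5 * m * v_f**2              # KE = ½mv²

print(v_f, dx, ke)  # 8.0 m/s, 20.0 m, 96.0 J
```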
1,292
5,019
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2019-30
latest
en
0.807116
http://maxima-online.org/tags/index/lhs
1,553,495,182,000,000,000
text/html
crawl-data/CC-MAIN-2019-13/segments/1552912203755.18/warc/CC-MAIN-20190325051359-20190325073359-00359.warc.gz
125,924,097
5,320
### lhs

Run Example

(%i1) rt: tan(atan(1/2) + atan(1/3)) = 1;
(%o1) tan(atan(1/2) + atan(1/3)) = 1
(%i2) trigsimp(lhs(rt));
(%o2) sin(atan(1/2) + atan(1/3)) / cos(atan(1/2) + atan(1/3))
(%i3) logarc(lhs(rt));
(%o3) -tan(%i*(log(%i/2 + 1) - log(1 - %i/2))/2 + %i*(log(%i/3 + 1) - log(1 - %i/3))/2)
(%i4) ratsimp(%);
(%o4)

Run Example

(%i1) eq: (x^2 - 17*x + 1)^(x^2 - 34*x + 1) = 1;
(%o1) (x^2 - 17*x + 1)^(x^2 - 34*x + 1) = 1
(%i2) ratdisp: false;
(%o2) false
(%i3) ratsimpexpons: false;
(%o3) false
(%i4) rat(lhs(eq)), ratsimp;
(%o4) (x^2 - 17*x + 1)^(x^2 - 34*x + 1)
(%i5)

Run Example

(%i1) derivabbrev: true;
(%o1) true
(%i2) x(t) := r(t)*sin(theta(t));
(%o2) x(t) := r(t)*sin(theta(t))
(%i3) y(t) := r(t)*cos(theta(t));
(%o3) y(t) := r(t)*cos(theta(t))
(%i4) xdot: diff(x(t), t, 1);
(%o4) r(t)*cos(theta(t))*theta(t)_t + r(t)_t*sin(theta(t))
(%i5) ydot: diff(y(t), t, 1);
(%o5) r(t)_t*cos(theta(t)) - r(t)*sin(theta(t))*theta(t)_t
(%i6) xdotdot: diff(xdot, t, 1);
(%o6) r(t)*cos(theta(t))*theta(t)_t_t - r(t)*sin(theta(t))*(theta(t)_t)^2 + 2*r(t)_t*cos(theta(t))*theta(t)_t + r(t)_t_t*sin(theta(t))
(%i7) ydotdot: diff(ydot, t, 1);
(%o7) -r(t)*sin(theta(t))*theta(t)_t_t - r(t)*cos(theta(t))*(theta(t)_t)^2 - 2*r(t)_t*sin(theta(t))*theta(t)_t + r(t)_t_t*cos(theta(t))
(%i8) Assumption1: xdotdot = 0;
(%o8) r(t)*cos(theta(t))*theta(t)_t_t - r(t)*sin(theta(t))*(theta(t)_t)^2 + 2*r(t)_t*cos(theta(t))*theta(t)_t + r(t)_t_t*sin(theta(t)) = 0
(%i9) Assumption2: ydotdot = 0;
(%o9) -r(t)*sin(theta(t))*theta(t)_t_t - r(t)*cos(theta(t))*(theta(t)_t)^2 - 2*r(t)_t*sin(theta(t))*theta(t)_t + r(t)_t_t*cos(theta(t)) = 0
(%i10) E1: solve(Assumption1, r(t));
(%o10) [r(t) = -(sin(theta(t))*r(t)_t_t + 2*cos(theta(t))*theta(t)_t*r(t)_t) / (cos(theta(t))*theta(t)_t_t - sin(theta(t))*(theta(t)_t)^2)]
(%i11) E2: solve(Assumption2, r(t));
(%o11) [r(t) = (cos(theta(t))*r(t)_t_t - 2*sin(theta(t))*theta(t)_t*r(t)_t) / (sin(theta(t))*theta(t)_t_t + cos(theta(t))*(theta(t)_t)^2)]
(%i12) solve(subst(E2, [E1]), r(t));
(%o12) []
(%i13) E3: solve(lhs(E1) - lhs(E2), r(t));
(%o13) [r(t)_t_t = -(theta(t)_t)^3*r(t)^2 / theta(t)_t_t]
(%i14)

(Here theta(t)_t denotes diff(theta(t), t) in the derivabbrev display.)

Help for Lhs
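For readers more familiar with Python, SymPy's Eq objects expose the same lhs/rhs accessors that Maxima's lhs provides (a small illustrative sketch, not part of the original page):

```python
from sympy import Eq, atan, tan, Rational, simplify

rt = Eq(tan(atan(Rational(1, 2)) + atan(Rational(1, 3))), 1)

print(rt.lhs)            # tan(atan(1/3) + atan(1/2))
print(rt.rhs)            # 1
print(simplify(rt.lhs))  # should simplify to 1, confirming the identity
```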
1,413
3,978
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.515625
4
CC-MAIN-2019-13
longest
en
0.213857
https://en.wikipedia.org/wiki/Talk:Geometrical_optics
1,498,731,500,000,000,000
text/html
crawl-data/CC-MAIN-2017-26/segments/1498128323895.99/warc/CC-MAIN-20170629084615-20170629104615-00031.warc.gz
767,581,945
10,365
# Talk:Geometrical optics

WikiProject Physics (Rated C-class, Mid-importance). This article is within the scope of WikiProject Physics, a collaborative effort to improve the coverage of Physics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. C: This article has been rated as C-Class on the project's quality scale. Mid: This article has been rated as Mid-importance on the project's importance scale.

I plan soon to move a more general description of geometrical optics to this page, focusing on the optics, rather than on the abstract mathematics. At that time, the mathematical treatment currently here may move to a subsection, or to a more specialized article. --Srleffler (talk) 04:48, 3 June 2009 (UTC)

Geometrical optics is primarily concerned with a discussion of rays and surfaces. Modern optics is now included as part of Maxwell's electromagnetic theory of light and quantum optics. --Jbergquist (talk) 07:04, 15 August 2013 (UTC)

Huygens's theory of interference may be considered to belong to the wave theory of light. --Jbergquist (talk) 07:11, 15 August 2013 (UTC)

## Ray diagram

Would this article be a good place to introduce the basic conventions of ray diagrams? Redbobblehat (talk) 10:56, 9 August 2009 (UTC)

Yes! --Srleffler (talk) 18:54, 9 August 2009 (UTC)

## The title is wrong

It should be geometric optics, not geometrical optics. — Preceding unsigned comment added by 2.101.202.195 (talk) 18:36, 27 March 2013 (UTC)

I agree. Does anyone object to moving the article? (I reverted the changes in text until we decide whether to move the article.) --Srleffler (talk) 05:21, 28 March 2013 (UTC)

Geometrical Optics is a subject heading at the British Library and the Library of Congress. --Jbergquist (talk) 06:56, 15 August 2013 (UTC)

Why do we have two adjectives with similar meanings? We talk about electricity, electrical engineering and electric motors, for instance. I've been thinking about this and my impression is that it's a little tricky. The -al may indicate a distinction of some kind and suggests a topic for discussion, an area of research, or a field of study. The form without -al is more often associated with uses and methods. Another set is the ellipse, an elliptic integral and an elliptical oval. This is just an impression and not intended to be the final word on usage. --Jbergquist (talk) 17:49, 15 August 2013 (UTC)
611
2,463
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2017-26
latest
en
0.977427
https://pt.cppreference.com/w/c/numeric/math/isinf
1,566,341,024,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027315681.63/warc/CC-MAIN-20190820221802-20190821003802-00352.warc.gz
620,208,894
10,743
# isinf

Defined in header <math.h>

#define isinf(arg) /* implementation defined */   (C99)

Determines whether the given floating-point number `arg` is positive or negative infinity. The macro returns an integral value.

### Parameters

arg - floating-point value

### Return value

Nonzero integral value if `arg` is infinite, 0 otherwise.
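A tiny usage sketch (our addition; shown in Python, whose math.isinf mirrors the behavior of this C99 macro):

```python
import math

values = [1.0, float('inf'), float('-inf'), float('nan')]
for v in values:
    # True only for +inf and -inf; NaN is not infinite
    print(v, math.isinf(v))
```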
1,084
4,187
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.734375
3
CC-MAIN-2019-35
latest
en
0.437386
http://photo.stackexchange.com/questions/48144/can-i-make-real-tilt-shift-2d-photos-from-3d-photos
1,464,189,851,000,000,000
text/html
crawl-data/CC-MAIN-2016-22/segments/1464049274994.48/warc/CC-MAIN-20160524002114-00104-ip-10-185-217-139.ec2.internal.warc.gz
225,652,911
17,862
# Can I make real tilt-shift 2D photos from 3D photos?

I have a 3D camera and I can shoot 3D stills. Is there a plug-in or some sort of post-production method to estimate the distance from the camera to every single object in the scene (maybe a pixel analysis) and then blur the individual pixels based on that distance, to achieve true tilt-shift?

- What format does it write the 3D images into? – dav1dsm1th Feb 22 '14 at 22:32
- What do you mean by "true tilt-shift"? Do you want to correct perspective and/or alter the focal plane (and sharpness area)? Or do you want to blur some parts of your photo relative to depth? – FredP Feb 22 '14 at 22:38

Given a stereo image pair you can estimate the depth to each point in the image (producing what is known as a depth map), from which you could simulate the tilted plane of focus of a tilt-shift lens. There's no simple way to estimate the depth map, but there are plenty of academic papers on the subject. Likewise there's no simple way to simulate depth of field from a depth map, though Photoshop's lens blur filter will get you close. It would be much easier to use a tilt-shift lens, however.

- @user26302: Part of the problem is: it is easy to create a depth map for points that have been recorded in both photos, but how do you handle occlusion? And the other thing: the noise ratio (between estimated and actual distance) is pretty high for background points, so two images are usually not enough. – TFuto Feb 25 '14 at 16:21

I'm not sure about the "tilt-shift" part, but the first part of the question is feasible; see for example http://dplenticular.com/technology/creating-3d-images/ . Obviously this works for getting a notion of the original depth, which one could use to create some post-production blurring, thus simulating a specific DOF (and also possibly a tilted focal plane).
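A rough sketch of the depth-dependent blur idea from the answer above (our own illustration; OpenCV's StereoBM block matcher and a simple per-band Gaussian blur stand in for the research-grade methods the answer alludes to, and the file names are hypothetical):

```python
import cv2
import numpy as np

# Hypothetical input: a rectified stereo pair (left.png / right.png).
left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# 1. Estimate a coarse depth proxy (disparity) from the stereo pair.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32)
disparity = cv2.normalize(disparity, None, 0.0, 1.0, cv2.NORM_MINMAX)

# 2. Blur each depth band by an amount that grows with its distance
#    from a chosen "in focus" disparity value.
focus, out = 0.8, left.copy()
for lo in np.arange(0.0, 1.0, 0.2):
    mask = (disparity >= lo) & (disparity < lo + 0.2)
    k = 1 + 2 * int(8 * abs(lo + 0.1 - focus))   # odd Gaussian kernel size
    out[mask] = cv2.GaussianBlur(left, (k, k), 0)[mask]

cv2.imwrite('fake_tilt_shift.png', out)
```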
472
1,942
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2016-22
longest
en
0.9163
https://economicskey.com/the-concept-of-marginal-revenue-3680
1,632,806,143,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780060201.9/warc/CC-MAIN-20210928032425-20210928062425-00438.warc.gz
275,070,980
13,665
Price, Quantity and Total Revenue

Suppose that a firm finds itself in possession of a complete monopoly in its industry. The firm might be the fortunate owner of a patent for a new anticancer drug, or it might own the operating code to a valuable computer program. If the monopolist wishes to maximize its profits, what price should it charge and what output level should it produce? To answer these questions, we need a new concept: marginal revenue (or MR). From the firm's demand curve, we know the relationship between price (P) and quantity sold (q). These are shown in columns (1) and (2) of Table 9-3 and as the black demand curve (dd) for the monopolist in Figure 9-3(a). We next calculate the total revenue at each sales level by multiplying price times quantity. Column (3) of Table 9-3 shows how to calculate the total revenue (TR), which is simply P x q. Thus 0 units bring in TR of 0; 1 unit brings in TR = \$180 x 1 = \$180; 2 units bring in \$160 x 2 = \$320; and so forth. In this example of a straight-line or linear demand curve, total revenue at first rises with output, since the reduction in P needed to sell the extra q is moderate in this upper, elastic range of the demand curve. But when we reach the midpoint of the straight-line demand curve, TR reaches its maximum. This comes at q = 5, P = \$100, with TR = \$500. Increasing q beyond this point brings the firm into the inelastic demand region. For inelastic demand, reducing price increases sales less than proportionally, so total revenue falls. Figure 9-3(b) shows TR to be dome-shaped, rising from zero at a very high price to a maximum of \$500 and then falling to zero as price approaches zero. How could you find the price at which revenues are maximized? You would see in Table 9-3 that TR is maximized when q = 5 and P = \$100. This is the point where the demand elasticity is exactly 1. Note that the price per unit can be called average revenue (AR) to distinguish it from total revenue. Hence, we get P = AR by dividing TR by q (just as we earlier got AC by dividing TC by q). Verify that if column (3) had been written down before column (2), we could have filled in column (2) by division.
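The table's arithmetic is easy to reproduce. A small sketch (our own; it reconstructs the linear demand schedule P = 200 - 20q that the quoted \$180, \$160, ... prices imply):

```python
# Linear demand consistent with the quoted prices: P = 200 - 20*q.
schedule = [(q, 200 - 20 * q) for q in range(0, 11)]

tr = [(q, p, p * q) for q, p in schedule]   # total revenue TR = P x q
for q, p, total in tr:
    print(f"q={q:2d}  P=${p:3d}  TR=${total}")

q_max, p_max, tr_max = max(tr, key=lambda row: row[2])
print(q_max, p_max, tr_max)  # 5, 100, 500: the unit-elastic midpoint
```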
648
2,453
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.859375
4
CC-MAIN-2021-39
latest
en
0.843636
raidersgameinfo.com
1,726,830,020,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00730.warc.gz
427,756,618
19,547
# Continuum Hypothesis – A Central Open Problem in Set Theory

A continuum is a continuous series or whole, ranging from one end or extreme to the other, with no clear dividing lines or points. Continuums are used to describe everything from the way a rainbow changes colors to the flow of blood in an arm or a foot.

#### Continuum Hypothesis: A Central Open Problem in Set Theory

In the late nineteenth century, Georg Cantor was trying to settle a very important open problem in set theory, called the continuum hypothesis (CH). He believed that it could be solved and that a solution would be a significant step forward in the development of mathematics.

When Cantor tried to solve the CH problem, he found it extremely difficult and frustrating. He also discovered that a solution could not be given in any reasonable manner using the axioms that were available to him at the time. Cantor eventually gave up on the problem and stopped trying to find a way to solve it. He came to regard the CH problem as a waste of his time and energy and a hindrance to his work.

It was not until the twentieth century that new mathematical methods were developed that enabled mathematicians to resolve the status of the CH problem. These methods allowed for the introduction of a universe of constructible sets, which is now called Godel's universe. A universe of constructible sets is a model of the mathematical universe of sets, including the real numbers. This universe is small enough that it can be made to satisfy all the requirements of set theory, and it can also be made to be consistent with the continuum hypothesis. Moreover, it is possible to build another universe in which the continuum hypothesis fails, complementing what Godel did for the universe in which it holds. This is a remarkable accomplishment, but it also raises a lot of questions about the solvability of the CH problem. The most important question is how we might be able to build a model of the mathematical universe in which the continuum hypothesis fails. This is a very interesting and important question for mathematics, because it shows that we are not stuck with the methods we have today.

In the 1990s, Jorg Brendle, Paul Larson, and Stevo Todorcevic showed that there are three new principles implicit in the continuum hypothesis which, taken together, put a limit on the size of the continuum. These principles are very different from the standard axioms of Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC). If these three principles are true, then there is a provable bound on the size of the continuum. The size of the continuum is a central problem in set theory, and it has been a major focus of research for the past two hundred years. The new ideas in this area have been extremely influential. They are now a staple of set theory and many other areas of mathematics, and they are helping to resolve the most important open problems in the field. These include the continuum hypothesis, the infinity arithmetic problem, and the sigma prime problem.
619
3,031
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.140625
3
CC-MAIN-2024-38
latest
en
0.975304
https://www.eng-tips.com/viewthread.cfm?qid=351172
1,716,669,449,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971058834.56/warc/CC-MAIN-20240525192227-20240525222227-00072.warc.gz
640,018,013
12,720
# Implementing Area Method: Step Response Test

## Implementing Area Method: Step Response Test

(OP) Hi, I'm a newbie using Matlab and dealing with system identification. But as it happened, I have to tune a PI controller for a process, so I thought an open-loop test like a step response test would be sufficient. I carried out the following test: set the control signal to a fixed value and see what happens. The control signal is set to 6 A at t = 0 and the output value is measured. I got the following data for Y (unit: kW):

0.03, 0.04, 0.04, 0.11, 0.11, 0.7, 0.7, 1.06, 1.06, 1.22, 1.22, 1.29, 1.29, 1.34, 1.39, 1.39, 1.42, 1.42, 1.42, 1.42, 1.43, 1.43, 1.42, 1.42, 1.4, 1.4, 1.4, 1.4, 1.4, 1.4, 1.43, 1.43, 1.42, 1.42, 1.39, 1.39, 1.4, 1.4, 1.4

The time vector is t = 0:0.5:39.

Then, according to Hägglund, one calculates the parameters of a first-order-plus-deadtime model G(s) = Ks/(1+sT)*e^(-sL):

```matlab
Ks = Y(end);               % not quite right, but close
A0 = trapz(T, (Ks - Y));   % area above the step response
t0 = A0 / Ks;              % average residence time, T + L
idx = find(T < t0);
t1 = T(idx); y1 = Y(idx);
A1 = trapz(t1, y1);        % area under the response up to t0
tau = exp(1) * A1 / Ks;    % time constant
L = max(0, (A0 - exp(1) * A1) / Ks);  % dead time
GM = tf(Ks, [tau 1], 'iodelay', L);
```

Well, I got something for the parameters: L = 2.34, Ks = 1.4/6, tau = T = 1.016. But what (practical) tuning rule is suitable for this problem? The dead time is twice the time constant, so it's hard to control. Does somebody have a suggestion? cheers,

### RE: Implementing Area Method: Step Response Test

(OP) It should be: L = 1.7, K = 1.4/6, T = 1.325
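One common answer to the OP's question (our addition, not from the thread) is Skogestad's SIMC rule, which is designed to cope with dominant dead time. A sketch using the OP's corrected parameters:

```python
# SIMC PI tuning for a first-order-plus-deadtime model
# G(s) = K/(1+s*T) * exp(-s*L)  (Skogestad, 2003).
K, T, L = 1.4 / 6, 1.325, 1.7   # the OP's corrected estimates
tau_c = L                        # common choice: closed-loop time constant = dead time

Kc = (1 / K) * T / (tau_c + L)   # controller gain
Ti = min(T, 4 * (tau_c + L))     # integral time
print(f"Kc = {Kc:.3f}, Ti = {Ti:.3f} s")  # Kc ~ 1.67, Ti ~ 1.325 s
```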
841
2,628
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.09375
3
CC-MAIN-2024-22
latest
en
0.782506
https://assignmentchef.com/product/solved-bim207-filename-and-topn/
1,708,829,612,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947474573.20/warc/CC-MAIN-20240225003942-20240225033942-00413.warc.gz
117,785,340
38,867
# [Solved] BIM207 - filename and topN

- Your program takes two arguments: filename and topN.
- You should read the given text file and preprocess the text in the following order: tokenize the text by whitespace (not just the space character; e.g., runs of spaces, tabs, newlines, etc.), remove punctuation, and apply lowercasing.
- You are asked to calculate the following:

Average Term Length By Initial Character: For example, if your tokens are ["apple", "banana", "avocado", "blueberry"], then your output should be like
a = 6
b = 7.5

Total Minimum Distance: For each term pair, calculate the score
f(t1) * f(t2) / (1 + ln Σd(t1,t2))
where f(t) is the count of the term t in the text and d(t1,t2) gives the minimum distance between t1 and t2 where t1 is followed by t2. For example, if the text is "aa bb cc aa cc dd bb" and t1 = aa and t2 = bb, then Σd(t1,t2) = 1 + 3 = 4. You should print only the topN pairs according to the score.

Important! Make sure the following commands run:
mvn clean package
java -jar target/bim207hw.jar sampleText.txt 10

Sample Output

InitialCharacter AverageLength
1 3.5
2 2.0
3 5.0
5 1.0
7 4.0
• 285714285714286
• 0
• 333333333333333
• 0
f 6.0
g 7.125
h 5.375
i 6.0
k 9.266666666666667
m 5.857142857142857
o 8.0
p 8.5
r 6.0
s 7.214285714285714
t 6.363636363636363
• 0
• 4285714285714284
y 10.0
z 7.5
ç 11.666666666666666
ö 11.090909090909092
ü 12.666666666666666
Pair{t1='yerleşkesindeki', t2='ve', factor=26.0}
Pair{t1='ve', t2='sayılı', factor=15.356018837890671}
Pair{t1='tarih', t2='ve', factor=13.0}
Pair{t1='donanımlı', t2='ve', factor=13.0}
Pair{t1='öğrencileri', t2='ve', factor=13.0}
Pair{t1='söyleşilere', t2='ve', factor=13.0}
Pair{t1='yaratıcı', t2='ve', factor=13.0}
Pair{t1='eden', t2='ve', factor=13.0}
Pair{t1='ve', t2='30425', factor=13.0}
Pair{t1='kültürel', t2='ve', factor=13.0}
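A compact sketch of the two metrics (our own illustration in Python; the actual assignment requires a Maven/Java build as shown above):

```python
import math
import re
from collections import Counter, defaultdict

def preprocess(text):
    # tokenize on whitespace, strip punctuation, lowercase
    cleaned = (re.sub(r'[^\w]', '', t).lower() for t in text.split())
    return [t for t in cleaned if t]

tokens = preprocess("aa bb cc aa cc dd bb")

# Average term length by initial character
by_init = defaultdict(list)
for t in tokens:
    by_init[t[0]].append(len(t))
for c in sorted(by_init):
    print(c, sum(by_init[c]) / len(by_init[c]))

# Total-minimum-distance score for one pair (t1 followed by t2)
def score(t1, t2):
    counts = Counter(tokens)
    dist_sum = 0
    for i, tok in enumerate(tokens):
        if tok == t1:
            gaps = [j - i for j in range(i + 1, len(tokens)) if tokens[j] == t2]
            if gaps:
                dist_sum += min(gaps)   # minimum distance with t1 before t2
    if dist_sum == 0:
        return 0.0
    return counts[t1] * counts[t2] / (1 + math.log(dist_sum))

print(score("aa", "bb"))  # Σd = 1 + 3 = 4, so 2*2/(1 + ln 4)
```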
767
2,110
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.59375
3
CC-MAIN-2024-10
latest
en
0.534701
https://discourse.processing.org/t/passing-shader-texture-with-negative-values-accesing-32bit-floats-shader-texture/22354
1,656,735,933,000,000,000
text/html
crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00603.warc.gz
254,890,062
6,015
# Passing shader textures with negative values / accessing 32-bit float shader textures

Hi there, in my Processing sketch I am passing textures between two shaders by first drawing into a PGraphics and then passing the PGraphics with the shader's .set() function. This works fine, but the PGraphics clamps all the texture values between 0 and 255. Is there a way to pass textures between shaders without the clamping, so I can also pass negative values?

Hi, welcome to the forum! To be able to use negative values the buffers should be of floating-point type, but by default Processing does not provide them. Maybe @kosowski or @neilcsmith know if it's doable?

Thank you Hamoid, it would be great to have some suggestions on how to handle that.

With difficulty, I think. It's probably possible dropping to low-level OpenGL (as described at https://github.com/processing/processing/wiki/Advanced-OpenGL ) but you'd have to create the framebuffers from scratch as far as I know. The obvious question is why you need this? I have worked around it in the past by treating 0.5 / 127 as the mid-point. Not great for resolution of steps, though!

I was using an example from Shadertoy on fluid simulation. Thanks for the tips; my next idea was indeed to remap things around 0.5/127, let's see if that works :).

For more resolution, you could pack your float into more than one byte (e.g. for depth-map information), like this:

```
vec4 packDepth(float depth) {
    float depthFrac = fract(depth * 255.0);
    return vec4(
        depth - depthFrac / 255.0,  // coarse part of the depth
        depthFrac,                  // fine (fractional) part
        0.0,                        // or whatever
        1.0);
}
```

and unpack like:

```
float rawget(int x, int y) {
    color c = pixel(x, y);
    float f = (blue(c) + green(c) / 255.0);
    if (inverted) f = 255 - f;
    return f;
}
```

Pretty naive, but it works for me. Somewhere I also have functions for a 3-byte float encoding (good for RGB-only pictures); I will dig them up if interested. Or you could encode them as 32-bit native IEEE floats… if you find a working way to do that (on Android!) I'd like to know.

There is this modification to Processing's source for enabling floating-point textures, but it breaks some other functionality. Besides, I think the PixelFlow library uses floating-point textures as well. Although the simplest method would be packing your values into the RGBA channels as @uheinema suggested (note you can use all 4 channels, thus 32 bits available).

For the people that have the same problem with accessing the 32-bit buffer texture of a loaded shader: for me the easiest was to use the Shadertoy class of PixelFlow. When you render your shader not to the PGraphics but only apply the shader with toyA.apply(width, height); then with toyA.tex.getFloatTextureData() you can get the 32-bit floats. If you render your shader to PGraphics, it doesn't store that 32-bit data in the texture of the ShaderToy, so nothing comes out when you get the texture data. Thanks everybody for the help.
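To make the RGBA-packing idea concrete (our addition, not from the thread): any IEEE-754 single-precision float occupies exactly four bytes, i.e. one RGBA texel. Python's struct module shows the round trip:

```python
import struct

value = -3.75  # negative values survive, unlike an 8-bit 0..255 channel

rgba = struct.pack('<f', value)          # 4 bytes -> R, G, B, A channels
print(list(rgba))                        # [0, 0, 112, 192]

restored = struct.unpack('<f', rgba)[0]  # exact reconstruction
print(restored)                          # -3.75
```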
738
3,052
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2022-27
latest
en
0.903664
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/4840/2/a/t/1/2/
1,718,449,330,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198861586.40/warc/CC-MAIN-20240615093342-20240615123342-00507.warc.gz
783,570,176
79,122
# Properties Label 4840.2.a.t.1.2 Level $4840$ Weight $2$ Character 4840.1 Self dual yes Analytic conductor $38.648$ Analytic rank $1$ Dimension $3$ CM no Inner twists $1$ # Related objects Show commands: Magma / PariGP / SageMath ## Newspace parameters comment: Compute space of new eigenforms [N,k,chi] = [4840,2,Mod(1,4840)] mf = mfinit([N,k,chi],0) lf = mfeigenbasis(mf) from sage.modular.dirichlet import DirichletCharacter H = DirichletGroup(4840, base_ring=CyclotomicField(2)) chi = DirichletCharacter(H, H._module([0, 0, 0, 0])) N = Newforms(chi, 2, names="a") //Please install CHIMP (https://github.com/edgarcosta/CHIMP) if you want to run this code chi := DirichletCharacter("4840.1"); S:= CuspForms(chi, 2); N := Newforms(S); Level: $$N$$ $$=$$ $$4840 = 2^{3} \cdot 5 \cdot 11^{2}$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 4840.a (trivial) ## Newform invariants comment: select newform sage: f = N[0] # Warning: the index may be different gp: f = lf[1] \\ Warning: the index may be different Self dual: yes Analytic conductor: $$38.6475945783$$ Analytic rank: $$1$$ Dimension: $$3$$ Coefficient field: 3.3.404.1 comment: defining polynomial  gp: f.mod \\ as an extension of the character field Defining polynomial: $$x^{3} - x^{2} - 5x - 1$$ x^3 - x^2 - 5*x - 1 Coefficient ring: $$\Z[a_1, a_2, a_3]$$ Coefficient ring index: $$1$$ Twist minimal: yes Fricke sign: $$+1$$ Sato-Tate group: $\mathrm{SU}(2)$ ## Embedding invariants Embedding label 1.2 Root $$-1.65544$$ of defining polynomial Character $$\chi$$ $$=$$ 4840.1 ## $q$-expansion comment: q-expansion sage: f.q_expansion() # note that sage often uses an isomorphic number field gp: mfcoefs(f, 20) $$f(q)$$ $$=$$ $$q+1.39593 q^{3} -1.00000 q^{5} +1.65544 q^{7} -1.05137 q^{9} +O(q^{10})$$ $$q+1.39593 q^{3} -1.00000 q^{5} +1.65544 q^{7} -1.05137 q^{9} +6.36226 q^{13} -1.39593 q^{15} -5.31088 q^{17} -4.36226 q^{19} +2.31088 q^{21} -5.57040 q^{23} +1.00000 q^{25} -5.65544 q^{27} -6.79186 q^{29} +0.259511 q^{31} -1.65544 q^{35} -0.791864 q^{37} +8.88128 q^{39} -2.74049 q^{41} -11.6554 q^{43} +1.05137 q^{45} +7.49868 q^{47} -4.25951 q^{49} -7.41363 q^{51} -1.84324 q^{53} -6.08942 q^{57} -7.15412 q^{59} +8.57040 q^{61} -1.74049 q^{63} -6.36226 q^{65} +15.6014 q^{67} -7.77589 q^{69} +1.05137 q^{71} -11.6865 q^{73} +1.39593 q^{75} -2.27284 q^{79} -4.74049 q^{81} -7.15412 q^{83} +5.31088 q^{85} -9.48098 q^{87} -8.31088 q^{89} +10.5324 q^{91} +0.362259 q^{93} +4.36226 q^{95} +14.3623 q^{97} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$3 q + q^{3} - 3 q^{5} - q^{7} + 6 q^{9}+O(q^{10})$$ 3 * q + q^3 - 3 * q^5 - q^7 + 6 * q^9 $$3 q + q^{3} - 3 q^{5} - q^{7} + 6 q^{9} - 2 q^{13} - q^{15} - 4 q^{17} + 8 q^{19} - 5 q^{21} - 2 q^{23} + 3 q^{25} - 11 q^{27} - 14 q^{29} - 2 q^{31} + q^{35} + 4 q^{37} - 11 q^{41} - 29 q^{43} - 6 q^{45} + q^{47} - 10 q^{49} + 8 q^{51} + 10 q^{53} + 2 q^{57} + 6 q^{59} + 11 q^{61} - 8 q^{63} + 2 q^{65} + 7 q^{67} + 28 q^{69} - 6 q^{71} - 4 q^{73} + q^{75} - 6 q^{79} - 17 q^{81} + 6 q^{83} + 4 q^{85} - 34 q^{87} - 13 q^{89} + 28 q^{91} - 20 q^{93} - 8 q^{95} + 22 q^{97}+O(q^{100})$$ 3 * q + q^3 - 3 * q^5 - q^7 + 6 * q^9 - 2 * q^13 - q^15 - 4 * q^17 + 8 * q^19 - 5 * q^21 - 2 * q^23 + 3 * q^25 - 11 * q^27 - 14 * q^29 - 2 * q^31 + q^35 + 4 * q^37 - 11 * q^41 - 29 * q^43 - 6 * q^45 + q^47 - 10 * q^49 + 8 * q^51 + 10 * q^53 + 2 * q^57 + 6 * q^59 + 11 * q^61 - 8 * q^63 + 2 * q^65 + 7 * q^67 + 28 * q^69 - 6 * q^71 - 4 * q^73 + q^75 - 6 * q^79 - 17 * q^81 + 6 * q^83 + 4 * q^85 - 34 * q^87 - 13 * q^89 + 28 * q^91 - 20 * 
q^93 - 8 * q^95 + 22 * q^97

## Coefficient data

For each $$n$$ we display the coefficients of the $$q$$-expansion $$a_n$$, the Satake parameters $$\alpha_p$$, and the Satake angles $$\theta_p = \textrm{Arg}(\alpha_p)$$. The source listing runs up to $$n = 1000$$; in it, $$a_n = 0$$ whenever $$n$$ is divisible by 2 or 11. The table below gives the data at the primes $$p \le 50$$.

| $$p$$ | $$a_p$$ | $$a_p / p^{(k-1)/2}$$ | $$\alpha_p$$ | $$\theta_p$$ |
|---|---|---|---|---|
| 2 | 0 | 0 | | |
| 3 | 1.39593 | 0.805942 | $$0.402971 \pm 0.915213i$$ | $$\pm 0.367978\pi$$ |
| 5 | −1.00000 | −0.447214 | | |
| 7 | 1.65544 | 0.625698 | $$0.312849 \pm 0.949803i$$ | $$\pm 0.398717\pi$$ |
| 11 | 0 | 0 | | |
| 13 | 6.36226 | 1.76457 | $$0.882287 \pm 0.470713i$$ | $$\pm 0.156003\pi$$ |
| 17 | −5.31088 | −1.28808 | $$-0.644039 \pm 0.764992i$$ | $$\pm 0.722743\pi$$ |
| 19 | −4.36226 | −1.00077 | $$-0.500385 \pm 0.865803i$$ | $$\pm 0.666808\pi$$ |
| 23 | −5.57040 | −1.16151 | $$-0.580754 \pm 0.814079i$$ | $$\pm 0.697242\pi$$ |
| 29 | −6.79186 | −1.26122 | $$-0.630609 \pm 0.776101i$$ | $$\pm 0.717195\pi$$ |
| 31 | 0.259511 | 0.0466095 | $$0.0233047 \pm 0.999728i$$ | $$\pm 0.492581\pi$$ |
| 37 | −0.791864 | −0.130182 | $$-0.0650908 \pm 0.997879i$$ | $$\pm 0.520734\pi$$ |
| 41 | −2.74049 | −0.427993 | $$-0.213996 \pm 0.976834i$$ | $$\pm 0.568648\pi$$ |
| 43 | −11.6554 | −1.77744 | $$-0.888719 \pm 0.458452i$$ | $$\pm 0.848404\pi$$ |
| 47 | 7.49868 | 1.09379 | $$0.546897 \pm 0.837200i$$ | $$\pm 0.315809\pi$$ |

## Twists

By twisting character:

| Char | Parity | Ord | Type | Twist | Min | Dim |
|---|---|---|---|---|---|---|
| 1.1 | even | 1 | trivial | 4840.2.a.t.1.2 | | 3 |
| 4.3 | odd | 2 | | 9680.2.a.cc.1.2 | | 3 |
| 11.10 | odd | 2 | | 4840.2.a.u.1.2 | yes | 3 |
| 44.43 | even | 2 | | 9680.2.a.ca.1.2 | | 3 |

By twisted newform:

| Twist | Min | Dim | Char | Parity | Ord | Type |
|---|---|---|---|---|---|---|
| 4840.2.a.t.1.2 | | 3 | 1.1 | even | 1 | trivial |
| 4840.2.a.u.1.2 | yes | 3 | 11.10 | odd | 2 | |
| 9680.2.a.ca.1.2 | | 3 | 44.43 | even | 2 | |
| 9680.2.a.cc.1.2 | | 3 | 4.3 | odd | 2 | |
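A quick way to read the table (a consistency check using only the displayed values): for a weight-2 form, the Satake parameters at a good prime are unit-modulus complex conjugates, so the normalized coefficient is twice the cosine of the Satake angle:

$$\alpha_p\,\overline{\alpha_p} = 1, \qquad \frac{a_p}{p^{(k-1)/2}} = \alpha_p + \overline{\alpha_p} = 2\cos\theta_p \quad (k = 2).$$

For example, at $$p = 3$$: $$2\cos(0.367978\pi) \approx 0.805942$$, matching the second column.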
19,599
34,927
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.71875
3
CC-MAIN-2024-26
latest
en
0.4733
https://www.aqua-calc.com/calculate/food-weight-to-volume/substance/riccioli-coma-and-blank-upc-column--blank-010978200548
1,566,243,308,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027314904.26/warc/CC-MAIN-20190819180710-20190819202710-00173.warc.gz
734,998,606
7,676
# Volume of RICCIOLI, UPC: 010978200548

## food weight to volume conversions

### calculate volume of generic and branded foods per weight

#### Volume, i.e. how many spoons, cups, gallons or liters in 100 grams of RICCIOLI, UPC: 010978200548

| Unit | Volume | Unit | Volume |
|---|---|---|---|
| centimeter³ | 316.85 | milliliter | 316.85 |
| foot³ | 0.01 | US cup | 1.34 |
| Imperial gallon | 0.07 | US dessertspoon | 42.86 |
| inch³ | 19.34 | US fluid ounce | 10.71 |
| liter | 0.32 | US gallon | 0.08 |
| meter³ | 0 | US pint | 0.67 |
| metric cup | 1.27 | US quart | 0.33 |
| metric dessertspoon | 31.68 | US tablespoon | 21.43 |
| metric tablespoon | 21.12 | US teaspoon | 64.28 |
| metric teaspoon | 63.37 | | |

#### Weight

| Unit | Weight | Unit | Weight |
|---|---|---|---|
| gram | 100 | ounce | 3.53 |
| kilogram | 0.1 | pound | 0.22 |
| milligram | 100 000 | | |

#### Nutrients (find foods rich in nutrients)

| Nutrient | Unit | Value /100 g |
|---|---|---|
| **Proximates** | | |
| Energy | kcal | 357 |
| Protein | g | 14.29 |
| Total lipid (fat) | g | 1.79 |
| Carbohydrate, by difference | g | 69.64 |
| Fiber, total dietary | g | 3.6 |
| Sugars, total | g | 3.57 |
| **Minerals** | | |
| Calcium, Ca | mg | 0 |
| Iron, Fe | mg | 3.21 |
| Sodium, Na | mg | 27 |
| **Vitamins** | | |
| Vitamin C, total ascorbic acid | mg | 0 |
| Thiamin | mg | 1 |
| Riboflavin | mg | 0.455 |
| Niacin | mg | 7.143 |
| Vitamin A, IU | IU | 0 |
| **Lipids** | | |
| Fatty acids, total saturated | g | 0 |
| Cholesterol | mg | 0 |

#### See how many calories in 0.1 kg (0.22 lbs) of RICCIOLI, UPC: 010978200548

| From | kilocalories (kcal) | kilojoule (kJ) |
|---|---|---|
| Carbohydrate | 0 | 0 |
| Fat | 0 | 0 |
| Protein | 0 | 0 |
| Other | 357 | 1 493.69 |
| Total | 357 | 1 493.69 |

• 78.90291 grams [g] of RICCIOLI, UPC: 010978200548 fill 1 metric cup
• 2.63391 ounces [oz] of RICCIOLI, UPC: 010978200548 fill 1 US cup
• RICCIOLI, UPC: 010978200548 weigh(s) 78.9 gram per (metric cup) or 2.63 ounce per (US cup), and contain(s) 357 calories per 100 grams or ≈3.527 ounces [ weight to volume | volume to weight | price | density ]
• Ingredients: DURUM WHEAT SEMOLINA, NIACIN, IRON, THIAMINE, RIBOFLAVIN, FOLIC ACID.
• For instance, compute how many cups or spoons a pound or kilogram of “RICCIOLI, UPC: 010978200548” fills. Volume of the selected food item is calculated based on the food density and its given weight. Visit our food calculations forum for more details.
• A few foods with a name containing, like or similar to RICCIOLI, UPC: 010978200548:
• ORGANIC RICCIOLI, UPC: 077890360699 contain(s) 357 calories per 100 grams or ≈3.527 ounces [ price ]
• RICCIOLI, UPC: 894729000188 contain(s) 357 calories per 100 grams or ≈3.527 ounces [ price ]
• Reference (ID: 217707)
• USDA National Nutrient Database for Standard Reference; National Agricultural Library; United States Department of Agriculture (USDA); 1400 Independence Ave., S.W.; Washington, DC 20250 USA.
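All of the volume figures above are one division away from the density bullet: volume = weight / density. A minimal check, assuming the standard 250 mL metric cup:

```c
/* Reproduce the page's volume figures from the stated density
 * (78.90291 g per metric cup; 1 metric cup = 250 mL). */
#include <stdio.h>

int main(void)
{
    double grams = 100.0;
    double g_per_metric_cup = 78.90291;   /* from the bullet above */
    double ml_per_metric_cup = 250.0;

    double density = g_per_metric_cup / ml_per_metric_cup;  /* g/mL */
    double volume_ml = grams / density;

    printf("100 g occupies %.2f mL (%.2f metric cups)\n",
           volume_ml, volume_ml / ml_per_metric_cup);
    /* prints 316.85 mL and 1.27 metric cups, matching the table */
    return 0;
}
```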
1,215
3,963
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2019-35
latest
en
0.556587
https://istopdeath.com/write-the-fraction-in-simplest-form-9-10%C3%B73-5/
1,675,734,372,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00645.warc.gz
293,515,371
15,840
# Write the Fraction in Simplest Form (9/10)÷(3/5)

To divide by a fraction, multiply by its reciprocal: (9/10) ÷ (3/5) = (9/10) · (5/3).

Cancel the common factor of 5: factor 5 out of 10, cancel the common factor, and rewrite the expression as (9/2) · (1/3).

Cancel the common factor of 3: factor 3 out of 9, cancel the common factor, and rewrite the expression as 3/2.

The result can be shown in multiple forms.

Exact Form: 3/2

Decimal Form: 1.5

Mixed Number Form: 1 1/2
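For reference, the cancellation steps above collapse into one line:

$$\frac{9}{10} \div \frac{3}{5} = \frac{9}{10}\cdot\frac{5}{3} = \frac{9}{2}\cdot\frac{1}{3} = \frac{3}{2} = 1.5 = 1\tfrac{1}{2}.$$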
108
433
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2023-06
latest
en
0.845283
https://software.intel.com/fr-fr/articles/low-power-intel-architecture-for-small-form-factor-devices-supplemental-information
1,438,179,959,000,000,000
text/html
crawl-data/CC-MAIN-2015-32/segments/1438042986444.39/warc/CC-MAIN-20150728002306-00200-ip-10-236-191-2.ec2.internal.warc.gz
883,662,256
15,780
# Low Power Intel® Architecture for Small Form Factor Devices: Supplemental Information

This entry is a supplement to the article Low Power Intel® Architecture for Small Form Factor Devices, by Ram Chary, Pat A. Correia, Raviprakash Nagaraj, and James Song. Additional technical information presented here was provided by Scott Noble.

The attached Microsoft PowerPoint* slides, included in the file small_form_factor.zip below, are provided to help answer two questions received from a developer by our support team:

Q. Radiated heat should be related to the surface area, so why do you say "300 cubic centimeters ... maximum power is about 5W"? I have another question: given that radiation is the constraint, what is the upper limit of the mobile device's total power dissipation, and how is it calculated?

A. The first question asks how we determined the 5W thermal limit for a 300cc device. This is based on a number of assumptions:

1. There is no fan; heat is dissipated by being conducted to the enclosure surface and then radiated out into the ambient air.
2. Ambient air is 35°C.
3. There is an ergonomic limit to the temperature of the enclosure (for people to hold the device, it cannot exceed 50°C). The challenge for maximum heat dissipation is to get all the surfaces up to 50°C. However, heat spreading is not ideal and there is a thermal gradient across the device. In addition, hot spots from components under the enclosure cover can limit the power dissipation capability by up to 50%.
4. A graph in the presentation shows an ideal situation of over 6W if there were ideal heat spreading to all the surfaces of the device. However, this is not practical, so another graph is shown with more realistic heat spreading (which still requires very good enclosure and system design). This graph shows the 4.5W to 5W capability.

We calculate the thermal capability based on thermal radiation by creating a FloTHERM model. We calculate the total surface area of the enclosure and estimate the total effective heat-dissipating surface area. We calculate based on some heat-spreading capability and also factor in hot spots. Some experiments confirmed this.

Added features that can extend the power capability would be the addition of a small fan, or a double-walled back cover (as shown in the slides) that enables some surfaces to reach higher temperatures without contacting the user and thus allows more heat dissipation.
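A first-order sanity check of these figures is easy to script. This is not Intel's FloTHERM model: the slab dimensions, the combined convection-plus-radiation coefficient h, and the hot-spot derating below are all assumed round numbers, chosen only to show how P = h * A * dT reproduces the ballpark:

```c
#include <stdio.h>

int main(void)
{
    /* assumed 300 cc slab: 15 cm x 10 cm x 2 cm */
    double w = 0.15, d = 0.10, t = 0.02;          /* meters */
    double area = 2.0 * (w * d + w * t + d * t);  /* enclosure surface, m^2 */

    double h  = 10.0;         /* assumed combined coefficient, W/(m^2*K) */
    double dT = 50.0 - 35.0;  /* 50 C skin limit minus 35 C ambient */

    double ideal  = h * area * dT;  /* perfect heat spreading */
    double derate = 0.75;           /* assumed gradient/hot-spot penalty */

    printf("surface area        : %.4f m^2\n", area);          /* 0.0400 */
    printf("ideal dissipation   : %.2f W\n", ideal);           /* 6.00   */
    printf("derated dissipation : %.2f W\n", ideal * derate);  /* 4.50   */
    return 0;
}
```

With these assumptions the numbers land on the article's 6 W ideal and 4.5-5 W realistic range.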
532
2,551
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2015-32
latest
en
0.915162
https://www.numbersaplenty.com/7677151
1,726,514,328,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00438.warc.gz
866,364,743
2,939
7677151 is a prime number

Base representation:

| Base | Representation |
|---|---|
| bin | 11101010010010011011111 |
| 3 | 112110001001221 |
| 4 | 131102103133 |
| 5 | 3431132101 |
| 6 | 432314211 |
| 7 | 122153236 |
| oct | 35222337 |
| 9 | 15401057 |
| 10 | 7677151 |
| 11 | 4373a59 |
| 12 | 26a2967 |
| 13 | 178a4c1 |
| 14 | 103bb1d |
| 15 | a19aa1 |
| hex | 7524df |

7677151 has 2 divisors, whose sum is σ = 7677152. Its totient is φ = 7677150.

The previous prime is 7677113. The next prime is 7677161. The reversal of 7677151 is 1517767.

It is a strong prime.

It is a cyclic number.

It is not a de Polignac number, because 7677151 - 2^11 = 7675103 is a prime.

It is a congruent number.

It is not a weakly prime, because it can be changed into another prime (7677161) by changing a digit.

It is a polite number, since it can be written as a sum of consecutive naturals, namely, 3838575 + 3838576.

It is an arithmetic number, because the mean of its divisors is an integer number (3838576).

Almost surely, 2^7677151 is an apocalyptic number.

7677151 is a deficient number, since it is larger than the sum of its proper divisors (1).

7677151 is an equidigital number, since it uses as many digits as its factorization.

7677151 is an evil number, because the sum of its binary digits is even.

The product of its digits is 10290, while the sum is 34.

The square root of 7677151 is about 2770.7672222690. The cubic root of 7677151 is about 197.2725662686.

The spelling of 7677151 in words is "seven million, six hundred seventy-seven thousand, one hundred fifty-one".
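Several of the claims above are mechanical to verify. A small sketch using naive trial division (fine at seven digits, and certainly not how the source site builds its tables):

```c
/* Spot-check a few of the stated properties of 7677151. */
#include <stdio.h>

static int is_prime(long n)
{
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(void)
{
    long n = 7677151;
    printf("prime: %d\n", is_prime(n));                        /* 1 */
    printf("n - 2^11 prime: %d\n", is_prime(n - 2048));        /* 1, so not de Polignac */
    printf("3838575 + 3838576 = %ld\n", 3838575L + 3838576L);  /* 7677151, polite */

    long prod = 1, sum = 0;
    for (long m = n; m > 0; m /= 10) { prod *= m % 10; sum += m % 10; }
    printf("digit product %ld, digit sum %ld\n", prod, sum);   /* 10290, 34 */
    return 0;
}
```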
443
1,456
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2024-38
latest
en
0.851886
https://dotnettutorials.net/lesson/toeplitz-matrix/
1,716,071,604,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971057516.1/warc/CC-MAIN-20240518214304-20240519004304-00842.warc.gz
188,511,592
58,279
# Toeplitz Matrix

## Toeplitz Matrix in C Language with Examples:

In this article, I am going to discuss the Toeplitz Matrix in C Language with Examples. Please read our previous article, where we discussed the TriDiagonal and TriBand Matrix in C Language with Examples.

##### Toeplitz Matrix:

A Toeplitz matrix is a matrix in which every diagonal holds a single repeated value. For example, this is a Toeplitz matrix of order 5×5 (the original article shows it as a figure; the values here are illustrative):

2 3 4 5 6
7 2 3 4 5
8 7 2 3 4
9 8 7 2 3
10 9 8 7 2

This matrix has all non-zero elements, so what is its useful property? If we observe the elements along any diagonal, they are the same. Since the elements repeat in this pattern, we do not have to store all n × n elements.

Then how many elements are sufficient to store in a single-dimension array? The 1st row and the 1st column are sufficient, because every other element duplicates one of them. The total number of elements to be stored in the array is therefore n + (n - 1) = 2n - 1.

So let us see how to represent this. We take a one-dimensional array B of size 2n - 1. In this case n is 5, so 2 * 5 - 1 = 9, and we take an array of size 9. First we store the 1st row, then the 1st column (excluding the shared corner element).

###### 1st Row Elements (upper triangle, i ≤ j): stored at B[j - i]

###### 1st Column Elements (lower triangle, i > j): stored at B[n + i - j - 1]

We have done the mapping of the elements. Now, what should be the formula for retrieving an element? If the element lies in the upper triangular part (including the diagonal), it comes from the stored 1st row; if it lies in the lower triangular part, it comes from the stored 1st column. For retrieving the elements, the conditions are:

Index(M[i][j]) = j - i, if i ≤ j
Index(M[i][j]) = n + i - j - 1, if i > j

So, these are the required formulas. Now let's see the code part:

##### Toeplitz Matrix Code in C Language:

```c
#include <stdio.h>
#include <stdlib.h>

struct Matrix
{
  int *B;  /* linear storage of size 2n - 1 */
  int n;   /* matrix dimension */
};

/* Store y at logical position M[i][j] (1-based indices). */
void Set (struct Matrix *m, int i, int j, int y)
{
  if (i <= j)
    m->B[j - i] = y;             /* 1st row / upper triangle */
  else
    m->B[m->n + i - j - 1] = y;  /* 1st column / lower triangle */
}

/* Read the element at logical position M[i][j]. */
int Get (struct Matrix m, int i, int j)
{
  if (i <= j)
    return m.B[j - i];
  return m.B[m.n + i - j - 1];
}

void Display (struct Matrix m)
{
  int i, j;
  printf ("\nMatrix is: \n");
  for (i = 1; i <= m.n; i++)
  {
    for (j = 1; j <= m.n; j++)
      printf ("%d ", Get (m, i, j));
    printf ("\n");
  }
}

int main ()
{
  struct Matrix M;
  int i, j, y;

  printf ("Enter Dimension of Matrix: ");
  scanf ("%d", &M.n);
  M.B = (int *) malloc ((2 * M.n - 1) * sizeof (int));

  printf ("Enter all the elements of the matrix:\n");
  for (i = 1; i <= M.n; i++)
    for (j = 1; j <= M.n; j++)
    {
      scanf ("%d", &y);
      Set (&M, i, j, y);
    }

  Display (M);
  free (M.B);
  return 0;
}
```

In the next article, I am going to discuss a Menu Driven Program for Matrices with Examples. Here, in this article, I have tried to explain the Toeplitz Matrix in C Language with Examples, and I hope you enjoy it.
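As a concrete check of the two index formulas (this helper is not part of the original article), the following prints the B-index used for every M[i][j] when n = 5:

```c
/* Print the storage index of M[i][j] for n = 5 using the
 * Toeplitz mapping: j - i for i <= j, n + i - j - 1 for i > j. */
#include <stdio.h>

int main(void)
{
    int n = 5;
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            int idx = (i <= j) ? (j - i) : (n + i - j - 1);
            printf("%2d ", idx);
        }
        printf("\n");
    }
    return 0;
}
```

The output shows 0 along the main diagonal, 1-4 filling the first row, and 5-8 filling the first column, i.e. exactly one array slot per diagonal.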
858
2,974
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2024-22
latest
en
0.833054
https://root.cern.ch/doc/v608/GSLNLSMinimizer_8cxx_source.html
1,675,361,663,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500035.14/warc/CC-MAIN-20230202165041-20230202195041-00564.warc.gz
511,059,461
16,249
ROOT 6.08/07 Reference Guide

GSLNLSMinimizer.cxx

```cpp
// @(#)root/mathmore:$Id$
// Author: L. Moneta Wed Dec 20 17:16:32 2006

/**********************************************************************
 *                                                                    *
 * Copyright (c) 2006 LCG ROOT Math Team, CERN/PH-SFT                 *
 *                                                                    *
 *                                                                    *
 **********************************************************************/

// Implementation file for class GSLNLSMinimizer

#include "Math/GSLNLSMinimizer.h"

// NOTE: two include lines were lost in the page extraction here; from the
// classes used below they are presumably Math/MinimTransformFunction.h and
// Math/MultiNumGradFunction.h.

#include "Math/Error.h"
#include "GSLMultiFit.h"
#include "gsl/gsl_errno.h"

#include "Math/FitMethodFunction.h"
//#include "Math/Derivator.h"

#include <iostream>
#include <iomanip>
#include <cassert>
#include <memory>

namespace ROOT {

   namespace Math {

// class to implement transformation of chi2 function
// in general could make template on the fit method function type

class FitTransformFunction : public FitMethodFunction {

public:

   FitTransformFunction(const FitMethodFunction & f,
                        const std::vector<EMinimVariableType> & types,
                        const std::vector<double> & values,
                        const std::map<unsigned int, std::pair<double, double> > & bounds) :
      FitMethodFunction( f.NDim(), f.NPoints() ),
      fOwnTransformation(true),
      fFunc(f),
      fTransform(new MinimTransformFunction( new MultiNumGradFunction(f), types, values, bounds) ),
      fGrad(std::vector<double>(f.NDim()))  // initializer lost in extraction; reconstructed from the fGrad usage below
   {
      // constructor
      // need to pass to MinimTransformFunction a new pointer which will be managed by the class itself
      // pass a gradient pointer although it will not be used by the class
   }

   FitTransformFunction(const FitMethodFunction & f, MinimTransformFunction *transFunc ) :
      FitMethodFunction( f.NDim(), f.NPoints() ),
      fOwnTransformation(false),
      fFunc(f),
      fTransform(transFunc),
      fGrad(std::vector<double>(f.NDim()))  // initializer lost in extraction; reconstructed
   {
      // constructor from an already existing Transformation object.
      // Ownership of the transformation object is passed to the caller
   }

   ~FitTransformFunction() {
      if (fOwnTransformation) {
         assert(fTransform);
         delete fTransform;
      }
   }

   // re-implement data element
   virtual double DataElement(const double * x, unsigned i, double * g = 0) const {
      // transform from x internal to x external
      const double * xExt = fTransform->Transformation(x);
      if ( g == 0) return fFunc.DataElement( xExt, i );
      // gradient case: two lines were lost in extraction here; from the
      // surrounding calls they evaluate with the gradient and transform it
      // back to the internal variables:
      double val = fFunc.DataElement( xExt, i, &fGrad[0]);
      fTransform->GradientTransformation( x, &fGrad.front(), g);
      return val;
   }

   IMultiGenFunction * Clone() const {
      // not supported
      return 0;
   }

   // dimension (this is number of free dimensions)
   unsigned int NDim() const {
      return fTransform->NDim();
   }

   unsigned int NTot() const {
      return fTransform->NTot();
   }

   // forward of transformation functions
   const double * Transformation( const double * x) const { return fTransform->Transformation(x); }

   /// inverse transformation (external -> internal)
   void InvTransformation(const double * xext, double * xint) const { fTransform->InvTransformation(xext,xint); }

   /// inverse transformation for steps (external -> internal) at external point x
   void InvStepTransformation(const double * x, const double * sext, double * sint) const { fTransform->InvStepTransformation(x,sext,sint); }

   /// transform gradient vector (external -> internal) at internal point x
   void GradientTransformation(const double * x, const double *gext, double * gint) const { fTransform->GradientTransformation(x,gext,gint); }

   void MatrixTransformation(const double * x, const double *cint, double * cext) const { fTransform->MatrixTransformation(x,cint,cext); }

private:

   // objects of this class are not meant for copying or assignment
   FitTransformFunction(const FitTransformFunction& rhs);
   FitTransformFunction& operator=(const FitTransformFunction& rhs);

   double DoEval(const double * x) const {
      return fFunc( fTransform->Transformation(x) );
   }

   bool fOwnTransformation;
   const FitMethodFunction & fFunc;      // reference to original fit method function
   MinimTransformFunction * fTransform;  // pointer to transformation function
   mutable std::vector<double> fGrad;    // declaration lost in extraction; reconstructed from usage

};

// GSLNLSMinimizer implementation

GSLNLSMinimizer::GSLNLSMinimizer( int type ) :
   //fNFree(0),
   fSize(0),
   fChi2Func(0)
{
   // Constructor implementation : create GSLMultiFit wrapper object
   const gsl_multifit_fdfsolver_type * gsl_type = 0; // use default type defined in GSLMultiFit
   if (type == 1) gsl_type = gsl_multifit_fdfsolver_lmsder; // scaled lmder version
   if (type == 2) gsl_type = gsl_multifit_fdfsolver_lmder;  // unscaled version

   fGSLMultiFit = new GSLMultiFit( gsl_type );

   fEdm = -1;

   // default tolerance and max iterations
   // (the two lines reading the defaults were lost in extraction; they
   //  presumably query ROOT::Math::MinimizerOptions)
   int niter = ROOT::Math::MinimizerOptions::DefaultMaxIterations();
   if (niter <= 0) niter = 100;
   SetMaxIterations(niter);

   fLSTolerance = ROOT::Math::MinimizerOptions::DefaultTolerance();
   if (fLSTolerance <= 0) fLSTolerance = 0.0001; // default internal value

   SetTolerance(fLSTolerance);  // line lost in extraction; reconstructed
}

GSLNLSMinimizer::~GSLNLSMinimizer () {
   assert(fGSLMultiFit != 0);
   delete fGSLMultiFit;
}

void GSLNLSMinimizer::SetFunction(const ROOT::Math::IMultiGenFunction & func) {
   // set the function to minimizer
   // need to create vector of functions to be passed to GSL multifit
   // support now only Chi2 implementation

   // call base class method. It will clone the function and set ndimension
   BasicMinimizer::SetFunction(func);  // call lost in extraction; reconstructed from the comment above
   // need to check if function can be used
   const ROOT::Math::FitMethodFunction * chi2Func = dynamic_cast<const ROOT::Math::FitMethodFunction *>(ObjFunction());
   if (chi2Func == 0) {
      if (PrintLevel() > 0) std::cout << "GSLNLSMinimizer: Invalid function set - only Chi2Func supported" << std::endl;
      return;
   }
   fSize = chi2Func->NPoints();
   fNFree = NDim();

   // use vector by value
   fResiduals.reserve(fSize);
   for (unsigned int i = 0; i < fSize; ++i) {
      fResiduals.push_back( LSResidualFunc(*chi2Func, i) );
   }
   // keep pointers to the chi2 function
   fChi2Func = chi2Func;
}

void GSLNLSMinimizer::SetFunction(const ROOT::Math::IMultiGradFunction & func) {
   // set the function to minimizer using gradient interface
   // not supported yet, implemented using the other SetFunction
   return SetFunction(static_cast<const ROOT::Math::IMultiGenFunction &>(func) );
}

bool GSLNLSMinimizer::Minimize() {
   // set initial parameters of the minimizer
   int debugLevel = PrintLevel();

   assert (fGSLMultiFit != 0);
   if (fResiduals.size() != fSize || fChi2Func == 0) {
      MATH_ERROR_MSG("GSLNLSMinimizer::Minimize","Function has not been set");
      return false;
   }

   unsigned int npar = NPar();
   unsigned int ndim = NDim();
   if (npar == 0 || npar < ndim) {
      MATH_ERROR_MSGVAL("GSLNLSMinimizer::Minimize","Wrong number of parameters",npar);
      return false;
   }

   // set residual functions and check if a transformation is needed
   std::vector<double> startValues;

   // transformation needs a grad function. Delegate fChi2Func to given object
   // (the declaration of gradFunction was lost in extraction; from the pointer
   //  parameter of CreateTransformation it is presumably a MultiNumGradFunction*)
   MinimTransformFunction * trFuncRaw = CreateTransformation(startValues, gradFunction);
   // need to transform in a FitTransformFunction which is set in the residual functions
   std::unique_ptr<FitTransformFunction> trFunc;
   if (trFuncRaw) {
      trFunc.reset(new FitTransformFunction(*fChi2Func, trFuncRaw) );
      //FitTransformationFunction *trFunc = new FitTransformFunction(*fChi2Func, trFuncRaw);
      for (unsigned int ires = 0; ires < fResiduals.size(); ++ires) {
         fResiduals[ires] = LSResidualFunc(*trFunc, ires);
      }

      assert(npar == trFunc->NTot() );
   }

   if (debugLevel >= 1) std::cout << "Minimize using GSLNLSMinimizer " << std::endl;

//   // use a global step size = min (step vectors)
//   double stepSize = 1;
//   for (unsigned int i = 0; i < fSteps.size(); ++i)
//      //stepSize += fSteps[i];
//      if (fSteps[i] < stepSize) stepSize = fSteps[i];

   int iret = fGSLMultiFit->Set( fResiduals, &startValues.front() );
   if (iret) {
      MATH_ERROR_MSGVAL("GSLNLSMinimizer::Minimize","Error setting the residual functions ",iret);
      return false;
   }

   if (debugLevel >= 1) std::cout << "GSLNLSMinimizer: " << fGSLMultiFit->Name() << " - start iterating......... " << std::endl;

   // start iteration
   unsigned int iter = 0;
   int status;
   bool minFound = false;
   do {
      status = fGSLMultiFit->Iterate();

      if (debugLevel >= 1) {
         std::cout << "----------> Iteration " << iter << " / " << MaxIterations() << " status " << gsl_strerror(status) << std::endl;
         const double * x = fGSLMultiFit->X();
         if (trFunc.get()) x = trFunc->Transformation(x);
         int pr = std::cout.precision(18);
         std::cout << " FVAL = " << (*fChi2Func)(x) << std::endl;
         std::cout.precision(pr);
         std::cout << " X Values : ";
         for (unsigned int i = 0; i < NDim(); ++i)
            std::cout << " " << VariableName(i) << " = " << X()[i];
         std::cout << std::endl;
      }

      if (status) break;

      // check also the delta in X()
      status = fGSLMultiFit->TestDelta( Tolerance(), Tolerance() );
      if (status == GSL_SUCCESS) {
         minFound = true;
      }

      // double-check with the gradient
      int status2 = fGSLMultiFit->TestGradient( Tolerance() );
      if ( minFound && status2 != GSL_SUCCESS) {
         // check now edm
         fEdm = fGSLMultiFit->Edm();
         if (fEdm > Tolerance() ) {
            // continue the iteration
            status = status2;
            minFound = false;
         }
      }

      if (debugLevel >= 1) {
         std::cout << " after Gradient and Delta tests: " << gsl_strerror(status);
         if (fEdm > 0) std::cout << ", edm is: " << fEdm;
         std::cout << std::endl;
      }

      iter++;

   }
   while (status == GSL_CONTINUE && iter < MaxIterations() );

   // check edm
   fEdm = fGSLMultiFit->Edm();
   if ( fEdm < Tolerance() ) {
      minFound = true;
   }

   // save state with values and function value
   const double * x = fGSLMultiFit->X();
   if (x == 0) return false;

   SetFinalValues(x);

   SetMinValue( (*fChi2Func)(x) );
   fStatus = status;

   fErrors.resize(NDim());

   // get errors from cov matrix
   const double * cov = fGSLMultiFit->CovarMatrix();
   if (cov) {

      fCovMatrix.resize(ndim*ndim);

      if (trFunc.get() ) {
         trFunc->MatrixTransformation(x, fGSLMultiFit->CovarMatrix(), &fCovMatrix[0] );
      }
      else {
         std::copy(cov, cov + ndim*ndim, fCovMatrix.begin() );
      }

      for (unsigned int i = 0; i < ndim; ++i)
         fErrors[i] = std::sqrt(fCovMatrix[i*ndim + i]);
   }

   if (minFound) {

      if (debugLevel >= 1) {
         std::cout << "GSLNLSMinimizer: Minimum Found" << std::endl;
         int pr = std::cout.precision(18);
         std::cout << "FVAL = " << MinValue() << std::endl;
         std::cout << "Edm = " << fEdm << std::endl;
         std::cout.precision(pr);
         std::cout << "NIterations = " << iter << std::endl;
         std::cout << "NFuncCalls = " << fChi2Func->NCalls() << std::endl;
         for (unsigned int i = 0; i < NDim(); ++i)
            std::cout << std::setw(12) << VariableName(i) << " = " << std::setw(12) << X()[i] << " +/- " << std::setw(12) << fErrors[i] << std::endl;
      }

      return true;
   }
   else {
      if (debugLevel >= 1) {
         std::cout << "GSLNLSMinimizer: Minimization did not converge" << std::endl;
         std::cout << "FVAL = " << MinValue() << std::endl;
         std::cout << "Edm = " << fGSLMultiFit->Edm() << std::endl;
         std::cout << "Niterations = " << iter << std::endl;
      }
      return false;
   }
   return false;
}

const double * GSLNLSMinimizer::MinGradient() const {
   // return gradient (internal values)
   return fGSLMultiFit->Gradient();  // body lost in extraction; presumably forwards to GSLMultiFit
}

double GSLNLSMinimizer::CovMatrix(unsigned int i , unsigned int j ) const {
   // return covariance matrix element
   unsigned int ndim = NDim();
   if ( fCovMatrix.size() == 0) return 0;
   if (i > ndim || j > ndim) return 0;
   return fCovMatrix[i*ndim + j];
}

int GSLNLSMinimizer::CovMatrixStatus() const {  // signature lost in extraction; reconstructed from the header docs
   // return covariance matrix status = 0 not computed,
   // 1 computed but is approximate because minimum is not valid, 3 is fine
   if ( fCovMatrix.size() == 0) return 0;
   // case minimization did not finish correctly
   if (fStatus != GSL_SUCCESS) return 1;
   return 3;
}

   } // end namespace Math

} // end namespace ROOT
```
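For readers without ROOT, the solver loop above can also be driven through GSL's C API directly. A minimal sketch of the same iterate-then-test-delta pattern, fitting a straight line (this uses the gsl_multifit_fdfsolver interface wrapped by the class above; it is deprecated in GSL 2.x in favor of gsl_multifit_nlinear, and the model, data points, and tolerances are invented for illustration):

```c
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_multifit_nlin.h>

struct data { size_t n; const double *x, *y; };

/* residuals r_i = y_i - (a*x_i + b) */
static int line_f(const gsl_vector *p, void *params, gsl_vector *r) {
   struct data *d = (struct data *) params;
   double a = gsl_vector_get(p, 0), b = gsl_vector_get(p, 1);
   for (size_t i = 0; i < d->n; i++)
      gsl_vector_set(r, i, d->y[i] - (a * d->x[i] + b));
   return GSL_SUCCESS;
}

/* Jacobian J_ij = dr_i/dp_j */
static int line_df(const gsl_vector *p, void *params, gsl_matrix *J) {
   struct data *d = (struct data *) params;
   (void) p;
   for (size_t i = 0; i < d->n; i++) {
      gsl_matrix_set(J, i, 0, -d->x[i]);
      gsl_matrix_set(J, i, 1, -1.0);
   }
   return GSL_SUCCESS;
}

static int line_fdf(const gsl_vector *p, void *params, gsl_vector *r, gsl_matrix *J) {
   line_f(p, params, r);
   line_df(p, params, J);
   return GSL_SUCCESS;
}

int main(void) {
   const double xs[] = {0, 1, 2, 3}, ys[] = {1.0, 3.1, 4.9, 7.2};
   struct data d = {4, xs, ys};

   gsl_multifit_function_fdf fdf;
   fdf.f = line_f; fdf.df = line_df; fdf.fdf = line_fdf;
   fdf.n = 4; fdf.p = 2; fdf.params = &d;

   gsl_multifit_fdfsolver *s =
      gsl_multifit_fdfsolver_alloc(gsl_multifit_fdfsolver_lmsder, 4, 2);
   gsl_vector *p0 = gsl_vector_calloc(2);  /* start from (0, 0) */
   gsl_multifit_fdfsolver_set(s, &fdf, p0);

   /* iterate-then-test-delta, the same pattern as Minimize() above */
   int status; unsigned iter = 0;
   do {
      status = gsl_multifit_fdfsolver_iterate(s);
      if (status) break;
      status = gsl_multifit_test_delta(s->dx, s->x, 1e-6, 1e-6);
   } while (status == GSL_CONTINUE && ++iter < 100);

   printf("a = %.3f, b = %.3f after %u iterations\n",
          gsl_vector_get(s->x, 0), gsl_vector_get(s->x, 1), iter);

   gsl_vector_free(p0);
   gsl_multifit_fdfsolver_free(s);
   return 0;
}
```

The ROOT class above layers variable transformations, the EDM cross-check, and covariance extraction on top of this bare loop.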
4,758
16,617
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2023-06
latest
en
0.274687
https://www.thedonutwhole.com/how-many-cups-of-pasta-are-in-a-16-ounce-box/
1,723,478,941,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641045630.75/warc/CC-MAIN-20240812155418-20240812185418-00644.warc.gz
781,809,344
28,969
# How many cups of pasta are in a 16 ounce box? Determining how many cups of uncooked pasta come in a 16 ounce box can be useful for meal planning and preparation. When following a recipe, it’s important to use the correct pasta measurements for best results. While boxes often list approximate cup amounts, the exact quantity can vary between pasta shapes and brands. We’ll explore the average pasta cup yield per 16 ounce box and how to measure pasta amounts. On average, a 16 ounce box of uncooked pasta contains between 4-6 cups, depending on the pasta shape. Smaller shapes like elbow macaroni fit more cups per box while larger shapes like rotini contain fewer cups per box. The exact amount can vary slightly between brands. ## Measuring Pasta by Weight vs Volume Pasta is typically measured in two ways – by weight in ounces, or by volume in cups. The weight of pasta is consistent while the cup amount can vary. Measuring pasta by weight instead of volume can give more accurate and consistent results: • Weight measurements are precise – A kitchen scale will always measure 16 ounces of pasta as 16 ounces. Volume measures like cups can have slight variations. • Pasta density affects volume – Dense pasta fits less cups per pound while lighter pasta fits more cups per pound. Measuring by weight accounts for this density difference. • Different shapes affect cups per box – Smaller pasta shapes take up more cups per ounce compared to larger pasta shapes. While measuring pasta by weight is most accurate, many recipes specify pasta amounts by volume in cups. Understanding approximately how many cups come in a 16 ounce box can help guide recipe instructions. ## What Factors Affect Pasta Cup Amount? There are a few key factors that determine how many cups of uncooked pasta fit into a 16 ounce box: ### Pasta Shape The shape of the pasta affects how many cups it contains. Smaller and thinner pasta shapes take up more volume per ounce. Larger and denser shapes have fewer cups per ounce. For example: • Small elbow macaroni – approx 6 cups per 16 ounces • Medium shells – approx 5 cups per 16 ounces • Larger rotini – approx 4 cups per 16 ounces ### Pasta Density Some types of pasta are denser than others, which affects the cup yield per ounce. Dense whole wheat or gluten-free pasta contains fewer cups per pound compared to lighter traditional semolina pasta. ### Brand Differences Subtle differences in pasta density and size between brands can lead to small variations in cups per 16 ounce box. For example, one brand’s rotini may be slightly smaller or denser than another’s, resulting in a difference of a few tablespoons per box. ## Average Cups per 16 Ounce Box While amounts can vary slightly based on the factors above, most 16 ounce boxes of uncooked pasta contain approximately: • Small pasta shapes (elbows, small shells): About 6 cups • Medium pasta shapes (penne, fusilli): About 5 cups • Large pasta shapes (rotini, rigatoni): About 4 cups Some whole wheat and gluten-free pastas may contain around 1 less cup per 16 ounces compared to traditional semolina pasta. ## Measuring Pasta Precisely by Volume For precise pasta volume measurements, instead of relying on the package directions, it’s best to directly measure the pasta yourself using measuring cups: 1. Use dry measuring cups, not liquid – Dry cups are specifically calibrated for dry ingredients. Scoop cups into the pasta then level off the top. 2. Weigh the measuring cup first – Tare your measuring cup on a kitchen scale to account for the cup’s weight. 3. 
Measure pasta shape amounts separately – Don't mix pasta shapes in one cup measure. 4. Break up large pasta – Snap long pasta like spaghetti in half before measuring. 5. Pack lightly – Add pasta loosely to cups without packing or pressing. 6. Use the same brand – Stick to one pasta brand when measuring to be consistent. Using this precise volume measurement method will give you the true cup amount for that specific shape and brand of pasta. ## Pasta Cooking Ratios Understanding the general ratio of uncooked pasta to cooked pasta can also help estimate cup amounts. As a rule of thumb: • 1 cup uncooked pasta = ~2-2.5 cups cooked • 8 ounces uncooked pasta = ~4-5 cups cooked This ratio allows you to convert between cooked and uncooked pasta amounts in recipes. ## Sample Pasta Cup Amounts Here are some example cup measurements for popular pasta shapes in 16 ounce boxes:

| Pasta Shape | Cups per 16 Ounce Box |
| --- | --- |
| Elbow macaroni | 6 cups |
| Penne | 5 cups |
| Rotini | 4 cups |
| Farfalle (bowtie) | 5 cups |
| Rigatoni | 4 cups |
| Small shells | 5.5 cups |
| Medium shells | 4.5 cups |
| Wagon wheels | 4 cups |
| Orzo | 6 cups |
| Lasagna noodles | 4 cups broken |

Keep in mind amounts may vary slightly between brands. Measuring cups precisely yourself will give specifics for the exact type of pasta you are using. ## Pasta Serving Size by Cup Knowing approximately how much cooked pasta equals one serving can also assist with portioning. In general: • 1/2 cup cooked pasta = 1 child portion • 1 cup cooked pasta = 1 adult portion This varies a bit depending on the pasta shape. For example, a serving of heavier filled pasta like ravioli may be closer to 3/4 cup. But for most basic types of pasta, 1/2 to 1 cup cooked is a standard serving amount per person. ## Cooking Time by Pasta Shape Cooking time can also vary depending on the type and shape of pasta. Here are approximate cooking times for common pasta shapes:

| Pasta Shape | Cooking Time |
| --- | --- |
| Small pasta (elbows, small shells) | 7-10 minutes |
| Medium pasta (penne, rotini, fusilli) | 10-12 minutes |
| Large pasta (rigatoni, ziti) | 12-15 minutes |
| Fresh pasta | 2-5 minutes |
| Lasagna noodles | 10-12 minutes |
| Egg noodles | 7-10 minutes |

Refer to package instructions for exact cooking times. Factors like altitude may also affect pasta cooking. ## Ingredient Substitutions If you don't have quite enough pasta for a recipe, here are some possible ingredient substitutions: • Broken spaghetti or vermicelli – can substitute for smaller pasta shapes like elbows • Small shells or macaroni – can substitute for penne or rotini • Cook pasta to al dente – uses less pasta than cooking until very soft • Stretch pasta with vegetables – zucchini spirals, spaghetti squash, or carrots can extend pasta • Bulk up with beans or lentils – add chickpeas, white beans, or lentils to make pasta servings more hearty Getting creative with substitutions can help if you don't have quite enough of the pasta shape your recipe calls for. ## Pasta Package Size Comparisons Knowing how 16 ounce pasta boxes compare to other common package sizes can also assist with shopping and substitution: • 8 oz box = approx 2-3 cups uncooked pasta • 12 oz box = approx 3-4 cups uncooked pasta • 16 oz box = approx 4-6 cups uncooked pasta • 20 oz box = approx 6-8 cups uncooked pasta • 1 lb (16 oz) bag = approx 4-6 cups uncooked pasta Recognizing comparable substitution amounts between different pasta package sizes can be helpful for recipes.
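To see the conversions above in one place, here is a small sketch in Python; the function names are invented for illustration, and the factors are the approximate cups-per-box figures from the tables, not exact constants:

    # approximate uncooked cups per 16 oz box, from the table above
    CUPS_PER_16_OZ = {"elbow macaroni": 6, "penne": 5, "rotini": 4}

    def uncooked_cups(shape, ounces):
        # scale the per-16-oz figure to another package size
        return CUPS_PER_16_OZ[shape] * ounces / 16

    def cooked_cups(uncooked):
        # rule of thumb above: 1 cup uncooked is about 2 to 2.5 cups cooked
        return (2 * uncooked, 2.5 * uncooked)

    print(uncooked_cups("penne", 12))  # 3.75, matching the 12 oz box estimate
    print(cooked_cups(1))              # (2, 2.5)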
## Sample Recipes and Uses To give an idea of how 16 ounce pasta boxes can be used in recipes: ### Baked Ziti Use one 16-ounce box of ziti or penne pasta: – Boil and drain pasta – Toss with marinara sauce and cheese – Bake until hot and bubbly ### Pasta Salad Use one 16-ounce box of rotini, fusilli, or farfalle pasta: – Boil until al dente and drain – Toss with chopped vegetables, cheese, dressing – Chill before serving ### Mac and Cheese Use one 16-ounce box of elbow macaroni or small shells: – Cook pasta until just underdone – Drain then stir into cheese sauce – Top with breadcrumbs or extra cheese ### Minestrone Soup Use 1/2 of a 16-ounce box of small pasta shells, ditalini, or elbows: – Cook pasta separately just until done – Drain and add to soup near end of cooking – Simmer briefly until heated through ## Tips for Measuring Pasta Here are some helpful tips for successfully measuring out pasta portions: • Weigh pasta for accuracy – Use a kitchen scale for most precision. • Measure cups precisely – Scoop and level dry cups, don't rely on package cups. • Know your pasta shape – Cups per box varies based on shape and density. • Use the same pasta – Don't mix brands or shapes when measuring. • Break long pasta – Snap strands in half before measuring. • Pack lightly – Add pasta without cramming or shaking cups. • Separate servings – Measure each person's portion in different cups. Using proper measuring techniques will give you the best results for pasta recipes. ## FAQs ### Why do some 16 ounce pasta boxes contain fewer cups than others? The pasta shape and density affect how many cups fit per 16 ounce box. Smaller pasta shapes fit more cups while larger, denser shapes contain fewer cups at the same 16 ounce weight. ### How much pasta should I cook per person? On average, 1/2 cup uncooked pasta per child or 1 cup uncooked pasta per adult is a standard cooked single serving. This may vary a bit by pasta shape. ### What's the difference between measuring pasta by weight versus volume? Weight in ounces is more precise since it remains consistent. Volume measures like cups can vary based on the pasta's size, shape and density. But recipes often specify pasta amounts in cups. ### Should I break long pasta before measuring it? Yes, it's best to snap long spaghetti or linguine strands in half before measuring cups. This allows it to fit loosely into cups rather than cramming or overflowing. ### How can I estimate cup amounts if I don't have measuring cups? If you don't have measuring cups handy, 1 handful or grasped fist of dry pasta equals approximately 1/2 cup cooked pasta per person. Or use the ratio of 8 ounces dry pasta = 4-5 cups cooked. ## Conclusion Understanding how many cups of dried pasta come in a 16 ounce box is useful knowledge for cooking. While amounts vary between shapes from 4-6 cups, using measuring cups specifically for your pasta type will give you the precise volume. Weighing pasta in grams or ounces can also provide very accurate results. Knowing general cup amounts and cooking ratios allows you to estimate portion sizes and make recipe substitutions when needed.
2,281
10,130
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2024-33
latest
en
0.912936
https://www.ajc.com/blog/high-school-sports/maxwell-class-region-region-preview/EcZdyRS8qVy25m9lJuIFMP/
1,642,888,083,000,000,000
text/html
crawl-data/CC-MAIN-2022-05/segments/1642320303884.44/warc/CC-MAIN-20220122194730-20220122224730-00656.warc.gz
675,436,145
141,211
# Maxwell Class A Region by Region Preview The Maxwell Ratings region by region breakdowns are based on a Monte Carlo simulation of the 2014 season. The simulation completed 1,000,000 trials at the rate of approximately 51 seasons per second. While the Maxwell Ratings reflect each team's strength, the Monte Carlo simulation highlights the impact of the season's structure (i.e., schedules, region alignments, etc.). All out of state opponents were considered equal to the average of the GHSA team's classification (i.e., out of state opponents of Class A GHSA teams were treated as an average Class A team) and results from games with out of state opponents were not factored in calculating the Class A point standings. Although some regions may use different criteria, in the simulation all region standings were decided by recursive head-to-head comparisons with all remaining ties being broken randomly. ## Region Strength and Parity Each region is shown with its "Competitive Rating", which is the rating required to win 80% of all games against the region's teams in an infinite round robin competition, its "Average Rating", which is the rating required to win 50% of all games against the region's teams in an infinite round robin competition, and its "Parity Index", which is the probability of the region champion being a different team in two consecutive simulations adjusted for the number of teams in the region. The Parity Index takes into account each team's schedule and game site. A higher Parity Index indicates a region whose championship is more open than a region with a lower Parity Index.

| Region | Competitive Rating | Average Rating | Parity Index |
| --- | --- | --- | --- |
| 5 - A | 56.41 | 30.33 | 0.366 |
| 7 - Div A - A | 47.94 | 31.77 | 0.868 |
| 2 - A | 47.86 | 30.05 | 0.867 |
| 4 - A | 45.95 | 27.08 | 0.722 |
| 7 - Div B - A | 45.84 | 23.26 | 0.602 |
| 3 - Div A - A | 45.12 | 25.81 | 0.632 |
| 8 - A | 43.18 | 19.51 | 0.666 |
| 3 - Div B - A | 41.10 | 20.65 | 0.662 |
| 6 - Div A - A | 39.97 | 26.91 | 0.780 |
| 6 - Div B - A | 37.92 | 21.19 | 0.551 |
| 1 - A | 28.91 | 13.44 | 0.739 |

## Region by Region Breakdown Each team is shown with its rating, schedule strength, projected overall and region records, average finish in the region, and the number of times winning the region along with its associated odds.
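For readers who want the mechanics, here is a stripped-down sketch of one region's Monte Carlo trial in Python; the logistic rating-to-probability mapping, the scale constant and the function names are illustrative assumptions, not the actual Maxwell model:

    import random

    def win_prob(r_a, r_b, scale=12.0):
        # hypothetical logistic mapping from rating gap to win probability
        return 1 / (1 + 10 ** (-(r_a - r_b) / scale))

    def region_title_odds(ratings, trials=100_000):
        titles = {t: 0 for t in ratings}
        teams = list(ratings)
        for _ in range(trials):
            wins = {t: 0 for t in teams}
            for i, a in enumerate(teams):       # single round-robin region schedule
                for b in teams[i + 1:]:
                    w = a if random.random() < win_prob(ratings[a], ratings[b]) else b
                    wins[w] += 1
            # standings approximated by win count, remaining ties broken randomly
            champ = max(teams, key=lambda t: (wins[t], random.random()))
            titles[champ] += 1
        return {t: n / trials for t, n in titles.items()}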
### 1 - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mitchell County | 31 | 30.19 | 24.45 | 5.91-4.09 | 4.19-0.82 | 1.74 | 527,284 | 0.90 |
| Miller County | 38 | 24.24 | 22.47 | 5.26-4.74 | 3.73-1.27 | 2.22 | 301,756 | 2.31 |
| Terrell County | 48 | 15.60 | 27.18 | 3.28-6.72 | 2.73-2.27 | 3.03 | 109,992 | 8.09 |
| Randolph-Clay | 52 | 12.46 | 15.54 | 4.22-5.78 | 2.42-2.58 | 3.77 | 50,030 | 18.99 |
| Calhoun County | 57 | 5.99 | 23.84 | 2.29-7.71 | 1.68-3.32 | 4.42 | 10,930 | 90.49 |
| Baconton | 70 | -14.11 | 10.91 | 1.54-8.46 | 0.26-4.74 | 5.83 | 8 | 124,999.00 |
| Stewart County† | 71 | -21.46 | 9.24 | 1.56-8.44 | | | | |

† - Plays non-region schedule

### 2 - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Charlton County | 8 | 43.37 | 29.84 | 7.71-2.29 | 5.40-1.61 | 2.53 | 280,152 | 2.57 |
| Clinch County | 11 | 40.92 | 27.95 | 6.88-3.12 | 5.00-2.01 | 2.65 | 264,861 | 2.78 |
| Irwin County | 12 | 40.84 | 33.06 | 6.27-3.73 | 4.96-2.04 | 2.75 | 234,297 | 3.27 |
| Wilcox County | 15 | 39.28 | 28.44 | 6.83-3.17 | 4.87-2.13 | 3.03 | 193,704 | 4.16 |
| Turner County | 30 | 30.48 | 27.44 | 5.48-4.52 | 3.56-3.44 | 4.53 | 26,157 | 37.23 |
| Telfair County | 42 | 19.46 | 28.66 | 3.58-6.42 | 2.09-4.91 | 6.15 | 683 | 1,463.13 |
| Lanier County | 51 | 13.02 | 27.71 | 2.61-7.39 | 1.43-5.57 | 6.93 | 144 | 6,943.44 |
| Atkinson County | 58 | 5.22 | 25.89 | 2.09-7.91 | 0.71-6.29 | 7.44 | 2 | 499,999.00 |

### 3 - Div A - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Calvary Day | 3 | 47.57 | 21.73 | 8.90-1.10 | 4.50-0.50 | 1.50 | 631,521 | 0.58 |
| Savannah Christian | 19 | 37.65 | 24.16 | 7.15-2.85 | 3.61-1.39 | 2.23 | 249,591 | 3.01 |
| Claxton | 32 | 29.78 | 28.45 | 5.29-4.71 | 2.96-2.04 | 2.93 | 109,419 | 8.14 |
| Savannah Country Day | 44 | 18.56 | 29.11 | 3.57-6.43 | 1.75-3.25 | 4.13 | 5,388 | 184.60 |
| Portal | 47 | 16.23 | 26.83 | 3.58-6.42 | 1.48-3.52 | 4.66 | 3,897 | 255.61 |
| Jenkins County | 56 | 7.18 | 20.64 | 2.62-7.38 | 0.70-4.30 | 5.54 | 184 | 5,433.78 |

### 3 - Div B - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Emanuel County Institute | 13 | 39.68 | 21.72 | 7.94-2.06 | 3.43-0.57 | 1.49 | 586,856 | 0.70 |
| Johnson County | 20 | 36.61 | 24.85 | 6.63-3.37 | 3.18-0.82 | 1.81 | 350,183 | 1.86 |
| Treutlen | 45 | 17.59 | 22.04 | 4.07-5.93 | 1.76-2.24 | 3.09 | 58,243 | 16.17 |
| Wheeler County | 55 | 8.81 | 19.21 | 3.28-6.72 | 1.05-2.95 | 4.11 | 4,267 | 233.36 |
| Montgomery County | 62 | 2.46 | 18.91 | 2.34-7.66 | 0.57-3.43 | 4.49 | 451 | 2,216.29 |

### 4 - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Marion County | 5 | 46.38 | 23.79 | 8.60-1.40 | 6.73-1.27 | 1.90 | 490,723 | 1.04 |
| Dooly County | 9 | 42.68 | 27.30 | 7.70-2.30 | 6.30-1.70 | 2.22 | 323,308 | 2.09 |
| Hawkinsville | 23 | 35.89 | 29.88 | 5.96-4.04 | 5.35-2.65 | 3.56 | 92,609 | 9.80 |
| Brookstone | 25 | 35.23 | 26.26 | 6.51-3.49 | 5.25-2.75 | 3.73 | 59,118 | 15.92 |
| Taylor County | 28 | 31.66 | 26.91 | 5.78-4.22 | 4.67-3.33 | 4.30 | 32,914 | 29.38 |
| Pacelli | 43 | 19.27 | 30.20 | 3.23-6.77 | 2.82-5.18 | 6.51 | 932 | 1,071.96 |
| Greenville | 46 | 17.38 | 30.18 | 3.03-6.97 | 2.67-5.34 | 6.60 | 330 | 3,029.30 |
| Schley County | 53 | 11.80 | 25.27 | 3.14-6.86 | 2.00-6.00 | 7.32 | 66 | 15,150.52 |
| Central (Talbotton) | 69 | -11.91 | 24.22 | 1.31-8.69 | 0.21-7.79 | 8.86 | - | - |

### 5 - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Eagle's Landing Christian | 1 | 62.81 | 45.86 | 7.27-2.73 | 3.78-0.22 | 1.19 | 827,602 | 0.21 |
| Landmark Christian | 7 | 43.58 | 27.49 | 7.37-2.63 | 2.75-1.25 | 2.16 | 146,355 | 5.83 |
| Our Lady of Mercy | 21 | 36.54 | 24.47 | 6.55-3.45 | 2.38-1.62 | 2.69 | 25,959 | 37.52 |
| Strong Rock Christian | 60 | 3.53 | 28.07 | 2.49-7.51 | 0.70-3.30 | 4.31 | 70 | 14,284.71 |
| Mount Vernon Presbyterian | 65 | -3.55 | 3.26 | 4.14-5.86 | 0.39-3.61 | 4.65 | 14 | 71,427.57 |

### 6 - Div A - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mount Paran Christian | 16 | 38.52 | 27.62 | 7.09-2.91 | 3.19-0.81 | 1.71 | 526,640 | 0.90 |
| Trion | 29 | 31.08 | 21.46 | 6.95-3.05 | 2.44-1.56 | 2.45 | 285,737 | 2.50 |
| Walker | 37 | 24.31 | 24.88 | 4.96-5.04 | 1.73-2.27 | 3.34 | 107,474 | 8.30 |
| Christian Heritage | 35 | 26.07 | 28.06 | 4.74-5.26 | 1.91-2.10 | 3.21 | 71,404 | 13.00 |
| Mount Zion (Carroll) | 49 | 14.36 | 23.06 | 3.32-6.68 | 0.74-3.26 | 4.30 | 8,745 | 113.35 |
| North Cobb Christian† | 72 | -25.73 | -2.81 | 1.48-8.52 | | | | |

† - Plays non-region schedule

### 6 - Div B - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mount Pisgah Christian | 18 | 38.13 | 26.06 | 7.01-2.99 | 4.33-0.67 | 1.40 | 711,596 | 0.41 |
| Whitefield Academy | 36 | 25.70 | 19.10 | 5.99-4.01 | 2.99-2.01 | 2.89 | 165,660 | 5.04 |
| King's Ridge Christian | 39 | 23.68 | 26.82 | 4.36-5.64 | 2.78-2.22 | 3.08 | 68,282 | 13.65 |
| Pinecrest Academy | 40 | 20.87 | 23.31 | 4.46-5.54 | 2.50-2.50 | 3.54 | 45,947 | 20.76 |
| Fellowship Christian | 50 | 13.90 | 18.85 | 3.96-6.04 | 1.65-3.35 | 4.66 | 7,431 | 133.57 |
| St. Francis | 59 | 4.98 | 17.51 | 3.07-6.93 | 0.75-4.25 | 5.43 | 1,084 | 921.51 |

### 7 - Div A - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tattnall Square | 14 | 39.52 | 18.42 | 7.87-2.13 | 3.49-1.51 | 2.26 | 351,894 | 1.84 |
| Stratford Academy | 10 | 41.54 | 28.27 | 7.13-2.87 | 3.64-1.36 | 2.34 | 329,306 | 2.04 |
| Wilkinson County | 24 | 35.56 | 29.58 | 6.01-3.99 | 2.89-2.11 | 3.04 | 163,088 | 5.13 |
| First Presbyterian | 26 | 34.81 | 25.69 | 6.51-3.49 | 2.92-2.08 | 3.11 | 132,509 | 6.55 |
| Mount de Sales | 34 | 26.94 | 27.43 | 5.26-4.74 | 1.92-3.09 | 4.32 | 23,196 | 42.11 |
| Twiggs County | 63 | -0.29 | 33.43 | 1.14-8.86 | 0.15-4.85 | 5.93 | 7 | 142,856.14 |

### 7 - Div B - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Aquinas | 2 | 51.00 | 29.40 | 8.30-1.70 | 3.57-0.43 | 1.38 | 639,018 | 0.56 |
| Lincoln County | 6 | 44.58 | 23.10 | 7.68-2.32 | 3.27-0.73 | 1.77 | 330,011 | 2.03 |
| Hancock Central | 41 | 19.57 | 21.01 | 4.54-5.46 | 1.80-2.21 | 3.23 | 30,491 | 31.80 |
| Georgia Military College | 54 | 10.63 | 30.52 | 2.45-7.55 | 1.19-2.81 | 3.78 | 467 | 2,140.33 |
| Warren County | 68 | -9.85 | 26.70 | 0.79-9.21 | 0.18-3.82 | 4.84 | 13 | 76,922.08 |
| Glascock County† | 67 | -6.75 | 0.87 | 3.83-6.17 | | | | |

† - Plays non-region schedule

### 8 - A

| Team | Rank | Rating | Sch Str | Overall (proj.) | Region (proj.) | Avg Fin | First | Odds |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Prince Avenue Christian | 4 | 47.27 | 17.05 | 8.59-1.41 | 7.15-0.85 | 1.65 | 591,213 | 0.69 |
| George Walton Academy | 17 | 38.50 | 16.96 | 7.62-2.38 | 6.17-1.84 | 2.75 | 195,196 | 4.12 |
| Athens Academy | 22 | 35.93 | 10.13 | 7.85-2.15 | 5.90-2.10 | 3.09 | 128,022 | 6.81 |
| Commerce | 27 | 32.51 | 19.61 | 6.32-3.68 | 5.40-2.60 | 3.55 | 56,938 | 16.56 |
| Athens Christian | 33 | 29.06 | 15.20 | 6.54-3.46 | 5.01-3.00 | 4.08 | 28,626 | 33.93 |
| Lakeview Academy | 61 | 3.44 | 19.41 | 3.18-6.82 | 2.53-5.47 | 6.63 | 5 | 199,999.00 |
| Towns County | 64 | -2.38 | 19.63 | 2.66-7.34 | 2.07-5.93 | 6.92 | - | - |
| Hebron Christian Academy | 66 | -6.48 | 19.12 | 2.15-7.85 | 1.61-6.39 | 7.46 | - | - |
| Providence Christian | 73 | -30.56 | 22.07 | 0.19-9.81 | 0.17-7.83 | 8.88 | - | - |
3,839
8,233
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2022-05
latest
en
0.942924
http://www.evi.com/q/0.043_inches_in_mm
1,419,075,211,000,000,000
text/html
crawl-data/CC-MAIN-2014-52/segments/1418802769709.84/warc/CC-MAIN-20141217075249-00011-ip-10-231-17-201.ec2.internal.warc.gz
506,522,457
5,342
0.043 inches in mm • 0.043 inches is 1.09 millimeters. Top ways people ask this question: • .043 inch equals how many mm (66%) • what is 0.043inches in mm (14%) • 0.043 inches in mm (5%) • how many mm is .043 inches (3%) • .043 inches to mm (2%) • convert .043 inch into mm (2%) • 0.043 inch to mm (1%) • convert 0.043 inches to mm (1%) • convert 0.043 inch to mm (1%) • convert .043 inches to millimeter (1%) Other ways this question is asked: • .043 inches in mm • .043 inches = how many mm • 0.043 inch is how many mm • .043 in. converted to mm • .043 inch to mm • 0.043 inch in mm • 0.043 inches in millimetres • convert .043 inches to mm • convert .043 inch to mm • what is .043in in mm • .043 inch to milimeters • how many millimeters is .043 inch • 0.043'' to mm • how many mm is .043inches • what is .043 inches in mm • convert .043 inches to millimeters • 0.043'' in mm • convert 0.043 inches into mm • convert 0.043inches to mm • .043 inches equals how many millimeters • .043inches to mm • 0.043 inches to mm • how many mm is 0.043" • convert 0.043inch to mm • whats 0.043inches in mm • convert .043 inches into mm • .043 inch in mm • convert .043inch into mm • how many millimeters is .043 inches • .043inch to mm? • 0.043 inches is how many mm • 0.043 inches equals how many mm • .043 inches equals how many mm • whats 0.043 inches in mm
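For reference, the arithmetic behind the answer: 1 inch is defined as exactly 25.4 mm, so 0.043 in × 25.4 mm/in = 1.0922 mm, which rounds to the 1.09 mm shown above.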
559
1,766
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.171875
3
CC-MAIN-2014-52
latest
en
0.852506
http://www.diracdelta.co.uk/science/source/r/o/rod/source.html
1,550,398,813,000,000,000
text/html
crawl-data/CC-MAIN-2019-09/segments/1550247481832.13/warc/CC-MAIN-20190217091542-20190217113542-00452.warc.gz
356,195,502
3,957
# Rod An old English unit of length. Also known as Pole. Conversions 1 rod = 5.0292 m 1 rod = 0.25 chain 1 rod = 25 links 1 rod = 16.5 feet 1 rod = 5.5 yards 1 rod = 198 inches
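All of these conversions follow from the defining relation 1 rod = 16.5 ft: 16.5 ft × 0.3048 m/ft = 5.0292 m, 16.5 ft × 12 in/ft = 198 in, 198 in ÷ 36 in/yd = 5.5 yd, and since a chain is 4 rods (100 links), 1 rod = 0.25 chain = 25 links.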
69
179
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2019-09
latest
en
0.823244
https://number.academy/4798
1,652,900,213,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662522309.14/warc/CC-MAIN-20220518183254-20220518213254-00218.warc.gz
492,656,881
14,097
# Number 4798 Number 4,798 spell 🔊, write in words: four thousand, seven hundred and ninety-eight . Ordinal number 4798th is said 🔊 and write: four thousand, seven hundred and ninety-eighth. The meaning of number 4798 in Maths: Is Prime? Factorization and prime factors tree. The square root and cube root of 4798. What is 4798 in computer science, numerology, codes and images, writing and naming in other languages. Other interesting facts related to 4798. ## What is 4,798 in other units The decimal (Arabic) number 4798 converted to a Roman number is (IV)DCCXCVIII. Roman and decimal number conversions. The number 4798 converted to a Mayan number is Decimal and Mayan number conversions. #### Weight conversion 4798 kilograms (kg) = 10577.7 pounds (lbs) 4798 pounds (lbs) = 2176.4 kilograms (kg) #### Length conversion 4798 kilometers (km) equals to 2982 miles (mi). 4798 miles (mi) equals to 7722 kilometers (km). 4798 meters (m) equals to 15742 feet (ft). 4798 feet (ft) equals 1463 meters (m). 4798 centimeters (cm) equals to 1889.0 inches (in). 4798 inches (in) equals to 12186.9 centimeters (cm). #### Temperature conversion 4798° Fahrenheit (°F) equals to 2647.8° Celsius (°C) 4798° Celsius (°C) equals to 8668.4° Fahrenheit (°F) #### Power conversion 4798 Horsepower (hp) equals to 3528.44 kilowatts (kW) 4798 kilowatts (kW) equals to 6524.35 horsepower (hp) #### Time conversion (hours, minutes, seconds, days, weeks) 4798 seconds equals to 1 hour, 19 minutes, 58 seconds 4798 minutes equals to 3 days, 7 hours, 58 minutes ### Codes and images of the number 4798 Number 4798 morse code: ....- --... ----. ---.. Sign language for number 4798: Number 4798 in braille: Images of the number Image (1) of the numberImage (2) of the number More images, other sizes, codes and colors ... ### Gregorian, Hebrew, Islamic, Persian and Buddhist year (calendar) Gregorian year 4798 is Buddhist year 5341. Buddhist year 4798 is Gregorian year 4255 . Gregorian year 4798 is Islamic year 4304 or 4305. Islamic year 4798 is Gregorian year 5276 or 5277. Gregorian year 4798 is Persian year 4176 or 4177. Persian year 4798 is Gregorian 5419 or 5420. Gregorian year 4798 is Hebrew year 8558 or 8559. Hebrew year 4798 is Gregorian year 1038. The Buddhist calendar is used in Sri Lanka, Cambodia, Laos, Thailand, and Burma. The Persian calendar is official in Iran and Afghanistan. ## Mathematics of no. 4798 ### Multiplications #### Multiplication table of 4798 4798 multiplied by two equals 9596 (4798 x 2 = 9596). 4798 multiplied by three equals 14394 (4798 x 3 = 14394). 4798 multiplied by four equals 19192 (4798 x 4 = 19192). 4798 multiplied by five equals 23990 (4798 x 5 = 23990). 4798 multiplied by six equals 28788 (4798 x 6 = 28788). 4798 multiplied by seven equals 33586 (4798 x 7 = 33586). 4798 multiplied by eight equals 38384 (4798 x 8 = 38384). 4798 multiplied by nine equals 43182 (4798 x 9 = 43182). show multiplications by 6, 7, 8, 9 ... ### Fractions: decimal fraction and common fraction #### Fraction table of 4798 Half of 4798 is 2399 (4798 / 2 = 2399). One third of 4798 is 1599,3333 (4798 / 3 = 1599,3333 = 1599 1/3). One quarter of 4798 is 1199,5 (4798 / 4 = 1199,5 = 1199 1/2). One fifth of 4798 is 959,6 (4798 / 5 = 959,6 = 959 3/5). One sixth of 4798 is 799,6667 (4798 / 6 = 799,6667 = 799 2/3). One seventh of 4798 is 685,4286 (4798 / 7 = 685,4286 = 685 3/7). One eighth of 4798 is 599,75 (4798 / 8 = 599,75 = 599 3/4). One ninth of 4798 is 533,1111 (4798 / 9 = 533,1111 = 533 1/9). show fractions by 6, 7, 8, 9 ... 
### Advanced math operations #### Is Prime? The number 4798 is not a prime number. The closest prime numbers are 4793 and 4799. The 4798th prime number in order is 46439. #### Factorization and factors (dividers) The prime factors of 4798 are 2 * 2399. The factors of 4798 are 1, 2, 2399, 4798. Total factors: 4. Sum of factors: 7200 (2402). #### Powers The second power, 4798², is 23,020,804. The third power, 4798³, is 110,453,817,592. #### Roots The square root √4798 is 69.267597. The cube root ∛4798 is 16.86631. #### Logarithms The natural logarithm of the number: ln 4798 = loge 4798 = 8.475954. The logarithm to base 10: log10 4798 = 3.681060. The Napierian logarithm: log1/e 4798 = -8.475954. ### Trigonometric functions The cosine of 4798 is -0.705252. The sine of 4798 is -0.708957. The tangent of 4798 is 1.005253. ### Properties of the number 4798 More math properties ... ## Number 4798 in Computer Science Code type / code value: PIN 4798. It is acceptable to use 4798 as a password or PIN. Number of bytes: 4.7KB. Unix time: Unix time 4798 is equal to Thursday Jan. 1, 1970, 1:19:58 a.m. GMT. IPv4, IPv6: number 4798 internet address in dotted format is v4 0.0.18.190, v6 ::12be. 4798 Decimal = 1001010111110 Binary. 4798 Decimal = 20120201 Ternary. 4798 Decimal = 11276 Octal. 4798 Decimal = 12BE Hexadecimal (0x12be hex). 4798 BASE64: NDc5OA==. 4798 MD5: d6317f80523fdf2a7375da19c9a006b8. 4798 SHA1: 5dc9c4219047a81f6ba30abb6955a3d9e0ef4361. 4798 SHA224: 2635c83023fd587a6f5fe7ba8ea91142f7d3e1c7f89a714abc6d1a3e. 4798 SHA256: bcaf29cf76c157164339236eb7b4a3038d4e0bd039eaa1f5919ab2efb1a23239. More SHA codes related to the number 4798 ... If you know something interesting about the 4798 number that you did not find on this page, do not hesitate to write us here. ## Numerology 4798 ### The meaning of the number 9 (nine), numerology 9 Character frequency 9: 1. The number 9 (nine) is the sign of ideals, Universal interest and the spirit of combat for humanitarian purposes. It symbolizes the inner Light, prioritizing ideals and dreams, experienced through emotions and intuition. It represents the ascension to a higher degree of consciousness and the ability to display love for others. He/she is creative, idealistic, original and caring. More about the meaning of the number 9 (nine), numerology 9 ... ### The meaning of the number 8 (eight), numerology 8 Character frequency 8: 1. The number eight (8) is the sign of organization, perseverance and control of energy to produce material and spiritual achievements. It represents the power of realization, abundance in the spiritual and material world. Sometimes it denotes a tendency to sacrifice but also to be unscrupulous. More about the meaning of the number 8 (eight), numerology 8 ... ### The meaning of the number 7 (seven), numerology 7 Character frequency 7: 1. The number 7 (seven) is the sign of the intellect, thought, psychic analysis, idealism and wisdom. This number first needs to gain self-confidence and to open his/her life and heart to experience trust and openness in the world. And then you can develop or balance the aspects of reflection, meditation, seeking knowledge and knowing. More about the meaning of the number 7 (seven), numerology 7 ... ### The meaning of the number 4 (four), numerology 4 Character frequency 4: 1. The number four (4) came to establish stability and to follow the process in the world. It needs to apply a clear purpose to develop internal stability. It evokes a sense of duty and discipline. Number 4 personality speaks of solid construction.
It teaches us to evolve in the tangible and material world, to develop reason and logic and our capacity for effort, accomplishment and work. More about the meaning of the number 4 (four), numerology 4 ... ## Interesting facts about the number 4798 ### Asteroids • (4798) Mercator is asteroid number 4798. It was discovered by E. W. Elst from La Silla Observatory on 9/26/1989. ### Areas, mountains and surfaces • Cerro Quichunque (Peru) is the 773rd highest mountain in the world with a height of 15,741 feet (4,798 meters) above sea level. • Cerro Mesani (Peru) is the 773rd highest mountain in the world with a height of 15,741 feet (4,798 meters) above sea level. • Mount Emin (Zaire) is the 773rd highest mountain in the world with a height of 15,741 feet (4,798 meters) above sea level. ### Distances between cities • There is a 2,982 miles (4,798 km) direct distance between Aba (Nigeria) and Munich (Germany). • There is a 2,982 miles (4,798 km) direct distance between Bangalore (India) and Gaziantep (Turkey). • There is a 2,982 miles (4,798 km) direct distance between Benxi (China) and Multān (Pakistan). • There is a 2,982 miles (4,798 km) direct distance between Bucheon-si (South Korea) and Makassar (Indonesia). • There is a 4,798 miles (7,721 km) direct distance between Cape Town (South Africa) and Damascus (Syria). • There is a 2,982 miles (4,798 km) direct distance between City of London (United Kingdom) and Ilorin (Nigeria). • There is a 4,798 miles (7,721 km) direct distance between Harare (Zimbabwe) and Prague (Czech Republic). • There is a 4,798 miles (7,721 km) direct distance between Chihuahua (Mexico) and Teresina (Brazil). • There is a 2,982 miles (4,798 km) direct distance between Ilorin (Nigeria) and London (United Kingdom). • There is a 2,982 miles (4,798 km) direct distance between Indore (India) and İstanbul (Turkey). • There is a 4,798 miles (7,721 km) direct distance between İstanbul (Turkey) and Nanchang (China). • There is a 2,982 miles (4,798 km) direct distance between Jakarta (Indonesia) and Sūrat (India). • There is a 4,798 miles (7,721 km) direct distance between Jerusalem (Israel) and Jilin (China). • There is a 4,798 miles (7,721 km) direct distance between Jerusalem (Israel) and Kowloon (Hong Kong). • There is a 2,982 miles (4,798 km) direct distance between Khartoum (Sudan) and Ufa (Russia). • There is a 4,798 miles (7,721 km) direct distance between Lomé (Togo) and Tashkent (Uzbekistan). • There is a 2,982 miles (4,798 km) direct distance between Makassar (Indonesia) and Seongnam-si (South Korea). • There is a 2,982 miles (4,798 km) direct distance between Mumbai (India) and Zibo (China). • There is a 4,798 miles (7,721 km) direct distance between Muscat (Oman) and Yono (Japan). • There is a 2,982 miles (4,798 km) direct distance between Peshawar (Pakistan) and Pyongyang (North Korea). • There is a 4,798 miles (7,721 km) direct distance between Port Harcourt (Nigeria) and Thiruvananthapuram (India). • There is a 4,798 miles (7,721 km) direct distance between Tiruchirappalli (India) and Zaria (Nigeria). ### Mathematics • 4798 is a value of n for which n!!! + 1 is prime. ## Number 4,798 in other languages How to say or write the number four thousand, seven hundred and ninety-eight in Spanish, German, French and other languages. The character used as the thousands separator varies by language.
Spanish: 🔊 (número 4.798) cuatro mil setecientos noventa y ocho German: 🔊 (Anzahl 4.798) viertausendsiebenhundertachtundneunzig French: 🔊 (nombre 4 798) quatre mille sept cent quatre-vingt-dix-huit Portuguese: 🔊 (número 4 798) quatro mil, setecentos e noventa e oito Chinese: 🔊 (数 4 798) 四千七百九十八 Arabian: 🔊 (عدد 4,798) أربعة آلاف و سبعمائةثمانية و تسعون Czech: 🔊 (číslo 4 798) čtyři tisíce sedmset devadesát osm Korean: 🔊 (번호 4,798) 사천칠백구십팔 Danish: 🔊 (nummer 4 798) firetusinde og syvhundrede og otteoghalvfems Hebrew: (מספר 4,798) ארבע אלף שבע מאות תשעים ושמנה Dutch: 🔊 (nummer 4 798) vierduizendzevenhonderdachtennegentig Japanese: 🔊 (数 4,798) 四千七百九十八 Indonesian: 🔊 (jumlah 4.798) empat ribu tujuh ratus sembilan puluh delapan Italian: 🔊 (numero 4 798) quattromilasettecentonovantotto Norwegian: 🔊 (nummer 4 798) fire tusen, syv hundre og nitti-åtte Polish: 🔊 (liczba 4 798) cztery tysiące siedemset dziewięćdzisiąt osiem Russian: 🔊 (номер 4 798) четыре тысячи семьсот девяносто восемь Turkish: 🔊 (numara 4,798) dörtbinyediyüzdoksansekiz Thai: 🔊 (จำนวน 4 798) สี่พันเจ็ดร้อยเก้าสิบแปด Ukrainian: 🔊 (номер 4 798) чотири тисячi сiмсот дев'яносто вiсiм Vietnamese: 🔊 (con số 4.798) bốn nghìn bảy trăm chín mươi tám Other languages ... ## News to email #### Receive news about "Number 4798" to email Privacy Policy. ## Comment If you know something interesting about the number 4798 or any natural number (positive integer) please write us here or on facebook.
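The arithmetic facts listed above can be checked with a short script; this is a quick verification sketch, not part of the original page:

    import math

    n = 4798
    assert n == 2 * 2399                                           # prime factorization
    assert sum(d for d in range(1, n + 1) if n % d == 0) == 7200   # sum of factors
    assert n ** 2 == 23_020_804 and n ** 3 == 110_453_817_592      # powers
    print(math.sqrt(n))                # 69.267597...
    print(n ** (1 / 3))                # 16.86631...
    print(math.log(n), math.log10(n))  # 8.475954..., 3.681060...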
3,794
11,973
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2022-21
latest
en
0.685603
https://www.jiskha.com/display.cgi?id=1360636819
1,500,634,281,000,000,000
text/html
crawl-data/CC-MAIN-2017-30/segments/1500549423769.10/warc/CC-MAIN-20170721102310-20170721122310-00714.warc.gz
809,384,277
3,763
# Math posted by . Next door neighbors Moe and Larry are using hoses from each of their houses to fill Moe's pool. They know it takes 12 hours using both hoses. They also know that Moe's hose, used alone, takes 40% less time than using Larry's hose alone. How much time is required to fill the pool using only Moe's hose? • Math - the first thing you do is percent over a hundred and ..i don't know the rest sorry
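A complete worked answer (standard work-rate algebra; this was not part of the original thread): let Larry's hose alone take L hours, so Moe's takes 40% less, 0.6L hours. Filling rates add, so 1/L + 1/(0.6L) = 1/12. The left side is (1/L)(1 + 5/3) = (8/3)/L, giving L = 12 × 8/3 = 32 hours, and Moe's hose alone needs 0.6 × 32 = 19.2 hours. Check: 1/32 + 1/19.2 = 0.03125 + 0.052083... = 0.083333... = 1/12.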
103
417
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.984375
3
CC-MAIN-2017-30
longest
en
0.968954
https://practicaldev-herokuapp-com.global.ssl.fastly.net/uzochukwueddie/number-formatter-in-javascript-3cl5
1,653,712,541,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00331.warc.gz
542,801,646
24,467
Uzochukwu Eddie Odozi # Number Formatter in Javascript In this short tutorial, we are going to create a simple number formatter using Javascript. Numbers greater than or equal to a thousand will be shortened and replaced with a number category symbol. Examples of symbols: Thousand = K, Million = M, Billion = G, Trillion = T, Quadrillion = P, Quintillion = E, and so on. The method below is the number formatter functionality:

function numberFormatter(number, digits) {
  const symbolArray = [
    { value: 1, symbol: "" },
    { value: 1E3, symbol: "K" },
    { value: 1E6, symbol: "M" },
    { value: 1E9, symbol: "G" },
  ];
  const regex = /\.0+$|(\.[0-9]*[1-9])0+$/;
  let result = "";
  for (let i = 0; i < symbolArray.length; i++) {
    if (number >= symbolArray[i].value) {
      // keep the largest matching symbol; the regex trims trailing zeros left by toFixed
      result = (number / symbolArray[i].value).toFixed(digits).replace(regex, "$1") + symbolArray[i].symbol;
    }
  }
  return result;
}

Other values that can be added are 1E12 = 1000000000000 (T), 1E15 = 1000000000000000 (P), 1E18 = 1000000000000000000 (E). ## Examples numberFormatter(1000, 1) = "1K" numberFormatter(10000, 1) = "10K" numberFormatter(2134564, 1) = "2.1M" numberFormatter(5912364758694, 3) = "5.912T" (this last one requires adding the 1E12/"T" entry to symbolArray) You can follow me on Twitter and also subscribe to my YouTube channel.
413
1,366
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.625
3
CC-MAIN-2022-21
latest
en
0.59433
https://slideplayer.com/slide/7904583/
1,631,899,488,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780055684.76/warc/CC-MAIN-20210917151054-20210917181054-00173.warc.gz
562,339,763
21,779
# How do you calculate the density of a substance? ## Presentation on theme: "How do you calculate the density of a substance?"— Presentation transcript: How do you calculate the density of a substance? Density Calculations. Covered in This Lesson: This lesson will demonstrate how to calculate density to the correct number of significant figures (sig figs) using experimental measurements. When calculating density we measure the mass and the volume of the sample. The volume can be measured using one of several methods. Density Formula: When calculating density you measure the mass of the sample using a balance and the volume of the sample using one of the methods, then apply the formula: density = mass/volume, i.e. d = m/V (equivalently m = d·V). Methods: Volume. There are three methods to measure the volume of a substance, and the method used is determined by the type of substance: the volume of a liquid, the volume of a regular solid, and the volume of an irregular solid. Volume of a Liquid: If the substance is a liquid, measure it directly in a graduated cylinder. Example 1: If a liquid sample has a mass of 4.83 g and a volume of 2.7 mL, what is its density? 4.83 g / 2.7 mL = 1.788... g/mL. Because 2.7 has 2 sig figs the final answer is 1.8 g/mL. Volume of a Regular Solid: If you have a regular solid like a cylinder, cube, or rectangular solid, measure the dimensions needed and apply the volume formula. Example 2: If a cubic solid sample has a mass of g and the side is 2.55 cm, what is its density? V = s³ = (2.55 cm)³ = 16.58 cm³, and D = m/V. Because 2.55 has 3 sig figs the final answer is 2.44 g/cm³. Volume of an Irregular Solid: If you have an irregular solid, use displacement: take the initial volume of water in a graduated flask, submerge the sample in the flask, and take the final volume while the solid is submerged. Subtract the initial volume from the final volume. Example 3: Find the density of a solid sample with a mass of 75.45 g when displacement is used to find the volume: V = final volume − initial volume = 53.18 mL, so 75.45 g / 53.18 mL = 1.419 g/mL. Since both mass and volume have 4 sig figs the final answer is 1.419 g/mL. Practice Problems: Find the density of a liquid with a volume of 3.8 mL and a mass of 3.75 g. Find the density of a rectangular solid with a length of 4.5 cm, a width of 2.5 cm, a height of 1.5 cm and a mass of g. Find the density of an irregular solid with a mass of 1.2 g, an initial volume of 25.7 mL and final volume of 18.4 mL when displacement is used.
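A minimal sketch of the three methods in Python, using the numbers from the practice problems (the function name is invented for illustration):

    def density(mass_g, volume_ml):
        # d = m / V
        return mass_g / volume_ml

    print(density(3.75, 3.8))         # liquid in a graduated cylinder: ~0.99 g/mL
    print(4.5 * 2.5 * 1.5)            # regular solid: V = l * w * h = 16.875 cm^3
    print(density(1.2, 25.7 - 18.4))  # displacement: V = 7.3 mL, so ~0.16 g/mL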
617
2,496
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.6875
5
CC-MAIN-2021-39
latest
en
0.873933
http://math.stackexchange.com/questions/879557/cant-solve-this-trignometric-equation-why-am-i-wrong
1,469,703,062,000,000,000
text/html
crawl-data/CC-MAIN-2016-30/segments/1469257828010.65/warc/CC-MAIN-20160723071028-00110-ip-10-185-27-174.ec2.internal.warc.gz
151,938,795
18,529
# Can't solve this trigonometric equation, why am I wrong? There is this trig equation: $$5\tan x - 2\tan 2x = 0 \text{ for } 0 < x < 360$$ So far I've gotten $x = 0, 180$ (from $\tan x = 0$) and all I have to solve now is $$\tan^2 x = 0.2$$ which gives me two angles, $24.1$ and $-24.1$. For some reason, this is wrong? Can someone please tell me where I'm going wrong? Thank you in advance! UPDATE I'm so sorry! I made a mistake in my mathjax, I fixed it. - Hint: \begin{align} 5\tan x - 2\tan^2 x &= 0\\ (5 - 2\tan x)\tan x&=0\\ \tan x=0\qquad&;\qquad 5 - 2\tan x=0 \end{align} – Tunk-Fey Jul 27 '14 at 12:38 @Shabbeh I made a mistake in my mathjax, so sorry! I fixed it though – Samir Chahine Jul 27 '14 at 12:43 It should be $\tan x = 2.5$ – Darth Geek Jul 27 '14 at 12:43 ## 2 Answers Answer to the Original Version: We have $$\tan x(2\tan x-5)=0$$ If $\tan x=0$, $x=n\cdot180^\circ$ where $n$ is any integer. If $2\tan x-5=0$, $\tan x=\frac52$. Google says $\arctan\frac52\approx68.1985905^\circ$, so $x\approx m\cdot180^\circ+68.1985905^\circ$ where $m$ is any integer. Probably you have meant $0<x<360^\circ$, so $0<m\cdot180^\circ+68.1985905^\circ<360^\circ$. Answer to the Edited Version: Using the Double Angle formula, $$5\tan x=2\tan2x=2\cdot\frac{2\tan x}{1-\tan^2x}$$ $$\tan x(1-5\tan^2 x)=0$$ The case $\tan x=0$ has been dealt with already. $1-5\tan^2x=0\iff \tan^2x=\frac15\implies\cos2x=\frac{1-\tan^2x}{1+\tan^2x}=\frac23$ Using Google, $\arccos\frac23\approx48.1896851^\circ$, so $2x=m\cdot360^\circ\pm48.1896851^\circ\implies x=?$ Find $m$ such that $0<x<360^\circ$. - I'm sorry I made a mistake while writing out my question, I had a different question, sorry. – Samir Chahine Jul 27 '14 at 12:43 @SamirChahine, Please find the edited version – lab bhattacharjee Jul 27 '14 at 12:58 Answered to the original question: $$5\tan x - 2\tan^2 x = 0 \iff \tan x(5 - 2\tan x) = 0$$ That means $\tan x = 0\;$ or $\;2\tan x = 5 \iff \tan x = \frac 52$. Can you take it from here? - Rectify the sign – lab bhattacharjee Jul 27 '14 at 12:42 I had an error while writing my question, sorry! I fixed my question but I'm sure this would have been correct otherwise. – Samir Chahine Jul 27 '14 at 12:44 @amWhy Why have you removed the "you" from "can you take it from here?"? – alexqwx Jul 27 '14 at 12:56 @alexqwx I thought 'you', I just forgot to write "you"! Thanks for pointing out the missing word! – amWhy Jul 27 '14 at 12:57
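Carrying the accepted hint to a finish (my arithmetic, not part of the original thread): $\tan^2 x = \frac15$ gives $\tan x = \pm\frac{1}{\sqrt5} \approx \pm0.4472$, so in $0^\circ < x < 360^\circ$ the solutions are $x \approx 24.1^\circ, 155.9^\circ, 204.1^\circ, 335.9^\circ$, together with $x = 180^\circ$ from the $\tan x = 0$ factor. The asker's $-24.1^\circ$ is the same solution as $335.9^\circ$ once shifted into the required range.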
942
2,500
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0}
4.25
4
CC-MAIN-2016-30
latest
en
0.738131
https://mathvideoprofessor.com/courses/sixth-grade/lessons/unit-8-data-sets-and-distribution/topic/got-data/
1,701,526,733,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100427.59/warc/CC-MAIN-20231202140407-20231202170407-00444.warc.gz
428,226,234
35,678
Got Data? Warmup Activity #1 Data Visualization • This sketch shows ten data points with values between 0 and 20. • Try another 10 data points by using the table on the right. • Make sure all ten data points are different. • Change some values (but keep all ten different) to make the other one higher. Activity #2 `Enter Data in a Boxplot.` • Enter your data one element at a time in the curly brackets in the input box below. • Once you have entered all your data, drag the red boxplot to match your data. • Check your boxplot when finished! Activity #3 `Exercise Questions.` Here below is a list of questions. • For each question, decide if the responses will produce numerical data or categorical data and give two possible responses. (1.) What is your favorite breakfast food? (2.) How did you get to school this morning? (3.) How many different teachers do you have? (4.) What is the last thing you ate or drank? (5.) How many minutes did it take you to get ready this morning—from waking up to leaving for school? Challenge #1 Priya and Han collected data on the birth months of students in their class. Here are the lists of their records for the same group of students. This list shows Priya's records: Jan, Apr, Jan, Feb, Oct, May, June, July, Aug, Aug, Sep, Jan, Feb, Mar, Apr, Nov, Nov, Dec, Feb, Mar. This list shows Han's records: 1, 4, 1, 2, 10, 5, 6, 7, 8, 8, 9, 1, 2, 3, 4, 11, 11, 12, 2, 3. (1). How are their records alike? How are they different? (2). What kind of data—categorical or numerical—do you think the variable "birth month" produces? Explain how you know. Challenge #2 Here below is a dot plot for a data set. Challenge #3 Think about the responses to these survey questions. Do they produce numerical or categorical data? Quiz Time
475
1,789
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.78125
4
CC-MAIN-2023-50
latest
en
0.862078
http://de.metamath.org/mpegif/friendshipgt3.html
1,601,287,488,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00138.warc.gz
31,920,037
16,036
Metamath Proof Explorer > MPE Home > Th. List > friendshipgt3 Theorem friendshipgt3 25720 Description: The friendship theorem for big graphs: In every finite friendship graph with order greater than 3 there is a vertex which is adjacent to all other vertices. (Contributed by Alexander van der Vekens, 9-Oct-2018.) Assertion friendshipgt3: the formal expression and distinct variable groups were lost in extraction. Proof of Theorem friendshipgt3 (53 steps; the symbolic expressions were lost in extraction): the proof combines frgraregorufrg 25671, frgraogt3nreg 25719, frisusgra 25591 and usgn0fidegnn0 25628, involving friendship graphs (FriendGrph), vertex degree (VDeg) and regular undirected simple graphs (RegUSGrph), with standard set-theory and arithmetic lemmas. This theorem was proved from axioms: ax-mp 5, ax-1 6, ax-2 7, ax-3 8, ax-gen 1665, ax-4 1678, ax-5 1748, ax-6 1794, ax-7 1838, ax-8 1869, ax-9 1871, ax-10 1886, ax-11 1891, ax-12 1904, ax-13 2052, ax-ext 2398, ax-rep 4529, ax-sep 4539, ax-nul 4547, ax-pow 4594, ax-pr 4652, ax-un 6588, ax-inf2 8137, ax-cnex 9584, ax-resscn 9585, ax-1cn 9586, ax-icn 9587, ax-addcl 9588, ax-addrcl 9589, ax-mulcl 9590, ax-mulrcl 9591, ax-mulcom 9592, ax-addass 9593, ax-mulass 9594, ax-distr 9595, ax-i2m1 9596, ax-1ne0 9597, ax-1rid 9598, ax-rnegex 9599, ax-rrecex 9600, ax-cnre 9601, ax-pre-lttri 9602, ax-pre-lttrn 9603, ax-pre-ltadd 9604, ax-pre-mulgt0 9605, ax-pre-sup 9606. This theorem depends on definitions: df-bi 188, df-or 371, df-an 372, df-3or 983, df-3an 984, df-tru 1440, df-fal 1443, df-ex 1660, df-nf 1664, df-sb 1787, df-eu 2267, df-mo 2268, df-clab 2406, df-cleq 2412, df-clel 2415, df-nfc 2570, df-ne 2618, df-nel 2619, df-ral 2778, df-rex 2779, df-reu 2780, df-rmo 2781, df-rab 2782, df-v 3080, df-sbc 3297, df-csb 3393, df-dif 3436, df-un 3438, df-in 3440, df-ss 3447, df-pss 3449, df-nul 3759, df-if 3907, df-pw 3978, df-sn 3994, df-pr 3996, df-tp 3998, df-op 4000, df-ot 4002, df-uni 4214, df-int 4250, df-iun 4295, df-disj 4389, df-br 4418, df-opab 4476, df-mpt 4477, df-tr 4512, df-eprel 4756, df-id 4760, df-po 4766, df-so 4767, df-fr 4804, df-se 4805, df-we 4806, df-xp 4851, df-rel 4852, df-cnv 4853, df-co 4854, df-dm 4855, df-rn 4856, df-res 4857, df-ima 4858, df-pred 5390, df-ord 5436, df-on 5437, df-lim 5438, df-suc 5439, df-iota 5556, df-fun 5594, df-fn 5595, df-f 5596, df-f1 5597, df-fo 5598, df-f1o 5599, df-fv 5600, df-isom 5601, df-riota 6258, df-ov 6299, df-oprab 6300, df-mpt2 6301, df-om 6698, df-1st 6798, df-2nd 6799, df-wrecs 7027, df-recs 7089, df-rdg 7127, df-1o 7181, df-2o 7182, df-oadd 7185, df-er 7362, df-ec 7364, df-qs 7368, df-map 7473, df-pm 7474, df-en 7569, df-dom 7570, df-sdom 7571, df-fin 7572, df-sup 7953, df-inf 7954, df-oi 8016, df-card 8363, df-cda 8587, df-pnf 9666, df-mnf 9667, df-xr 9668, df-ltxr 9669, df-le 9670, df-sub 9851, df-neg 9852, df-div 10259, df-nn 10599, df-2 10657, df-3 10658, df-n0 10859, df-z 10927, df-uz 11149, df-rp 11292, df-xadd 11399, df-ico 11630, df-fz 11772, df-fzo 11903, df-fl 12014, df-mod 12083, df-seq 12200, df-exp 12259, df-hash 12502, df-word 12640, df-lsw 12641, df-concat 12642, df-s1 12643, df-substr 12644, df-reps 12647, df-csh 12865, df-s2 12918, df-cj 13130, df-re 13131, df-im 13132, df-sqrt 13266, df-abs 13267, df-clim 13519, df-sum 13720, df-dvds 14273, df-gcd 14432, df-prm 14583, df-phi 14672, df-usgra 24932, df-nbgra 25019, df-wlk 25107, df-trail 25108, df-pth 25109, df-spth 25110, df-wlkon 25113, df-spthon 25116, df-wwlk 25278, df-wwlkn 25279, df-clwwlk 25350, df-clwwlkn 25351, df-2wlkonot 25457, df-2spthonot 25459, df-2spthsot 25460, df-vdgr 25493, df-rgra 25523, df-rusgra 25524, df-frgra 25588. This theorem is referenced by: friendship 25721. Copyright terms: Public domain
3,503
6,942
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.921875
3
CC-MAIN-2020-40
latest
en
0.147639
https://socratic.org/questions/how-do-you-differentiate-f-x-5-2x-3-3-2-using-the-chain-rule
1,713,405,612,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296817184.35/warc/CC-MAIN-20240417235906-20240418025906-00680.warc.gz
486,453,255
5,687
# How do you differentiate f(x) = 5(2x-3)^(3/2) using the chain rule? Feb 20, 2016 You can do it like this: #### Explanation: $f(x)=5(2x-3)^{3/2}$ $\therefore f'(x) = 5 \times \frac{3}{2} (2x-3)^{1/2} \times 2$ $\therefore f'(x) = 15(2x-3)^{1/2}$ or $f'(x) = 15\sqrt{2x-3}$
170
383
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.3125
4
CC-MAIN-2024-18
latest
en
0.449222
https://de.maplesoft.com/support/help/addons/view.aspx?path=Student%2FStatistics%2FOneSampleZTest%2Foverview
1,713,564,311,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296817455.17/warc/CC-MAIN-20240419203449-20240419233449-00286.warc.gz
174,547,343
21,452
Student/Statistics/OneSampleZTest/overview - Maple Help

Student[Statistics][OneSampleZTest] Overview

overview of the One Sample Z-Test

Description

• The One Sample Z Test is used to test, based on the sample drawn, whether the population the sample comes from follows a normal distribution with mean equal to the test value, assuming that the standard deviation is known. Cases where the standard deviation is known are rare; a more common test in that case is the One Sample T Test.

• Requirements for using the One Sample Z Test:

1. The goal is to test the hypothesis that the population the sample is drawn from follows a normal distribution with the mean equal to the value that is set.

2. The population studied is assumed to follow a normal distribution.

3. The standard deviation of the population is already known.

• The formula is:

$Z=\frac{(\mathrm{Mean}(X)-\mu_0)\sqrt{N}}{\sigma}$

where $X$ is the sample, $\mu_0$ is the test value of the mean, $\sigma$ is the known standard deviation, and $N$ is the sample size. When the sample size $N$ is sufficiently large, $Z$ is approximately $\mathrm{Normal}(0,1)$.

Example

After a math exam, Professor Lee marked and recorded the grades of 100 students randomly selected from the entire group of 1000 students. Their average grade is 65. He has reason to assume the grades are normally distributed with standard deviation equal to 15. Now he wants to test whether the grades of all students follow a normal distribution whose mean is 64.5.

1. Determine the null hypothesis: $\mu_0 = 64.5$ (the actual mean).

2. Substitute the information into the formula: $z=\frac{65-64.5}{15/\sqrt{100}} = 0.333333$

3. Compute the p-value: $p\text{-value}=\Pr(|Z|>0.333333)=\Pr(Z<-0.333333)+\Pr(Z>0.333333) = 0.738883$, where $Z \sim \mathrm{Normal}(0,1)$.

4. Draw the conclusion: the test does not provide enough evidence to conclude that the null hypothesis is false, so we fail to reject the null hypothesis.
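The worked example is easy to reproduce numerically. Here is a minimal sketch in Python (an illustration, not part of the Maple help page), using scipy and the numbers from Professor Lee's example:

```python
from math import sqrt
from scipy.stats import norm

x_bar, mu0, sigma, n = 65, 64.5, 15, 100

# Z statistic: (Mean(X) - mu0) * sqrt(N) / sigma
z = (x_bar - mu0) * sqrt(n) / sigma

# Two-sided p-value: Pr(Z < -|z|) + Pr(Z > |z|) for Z ~ Normal(0, 1)
p_value = 2 * norm.sf(abs(z))

print(round(z, 6))        # 0.333333
print(round(p_value, 6))  # 0.738883
```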
572
2,225
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 12, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.78125
5
CC-MAIN-2024-18
latest
en
0.825223
https://www.coursehero.com/file/5919821/hw8solutions/
1,527,006,465,000,000,000
text/html
crawl-data/CC-MAIN-2018-22/segments/1526794864798.12/warc/CC-MAIN-20180522151159-20180522171159-00327.warc.gz
711,452,474
50,694
# hw8solutions

Math 521 - Advanced Calculus I
Instructor: J. Metcalfe
Due: February 8, 2010

Assignment 8

1. Calculate $\lim_{n \to \infty} (\sqrt{n^2 + n} - n)$.

Here, we note that

$\sqrt{n^2 + n} - n = \frac{n}{\sqrt{n^2 + n} + n} = \frac{1}{\sqrt{1 + (1/n)} + 1}.$

From here, it is intuitively clear that the limit is $1/2$, but we have not yet justified pulling the limit inside of the square root. We have proved in class that you can pull the limit inside of addition and division. Thus, it suffices to show that $\sqrt{1 + (1/n)} \to 1$. To this end, we know that $\sqrt{1 + (1/n)} \geq 1$ for all $n$. Moreover, by the Archimedean Principle, we know that 1 is the greatest lower bound. Indeed, if $a > 1$, then we can choose $n \in \mathbb{N}$ sufficiently large so that $1/n < a^2 - 1$, or $1 + (1/n) < a^2$, or $\sqrt{1 + (1/n)} < a$. Moreover, we know that $\sqrt{1 + (1/n)}$ is decreasing. Indeed,

$n \leq n + 1 \implies \frac{1}{n+1} \leq \frac{1}{n} \implies 1 + \frac{1}{n+1} \leq 1 + \frac{1}{n} \implies \sqrt{1 + (1/(n+1))} \leq \sqrt{1 + (1/n)}.$

Since this sequence is monotonically decreasing and bounded below, it must converge to its greatest lower bound, which is 1. For this latter step, you could also cite Theorem 3.2.10 of the text.

2. Let $b \in \mathbb{R}$ satisfy $0 < b < 1$. Show that $\lim nb^n = 0$.

We write $b = \frac{1}{1+a}$. Then $a = \frac{1}{b} - 1 > 0$ since $b < 1$. Then

$b^n = \frac{1}{(1+a)^n} \leq \frac{1}{(1/2)n(n-1)a^2}$

by the binomial theorem. And,

$nb^n \leq \frac{2}{(n-1)a^2}.$

The right side clearly tends to 0 as $n \to \infty$, and thus, by the squeeze lemma, $nb^n \to 0$.
2,647
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.53125
5
CC-MAIN-2018-22
latest
en
0.864296
https://mathexamination.com/lab/metrization-theorems.php
1,624,305,389,000,000,000
text/html
crawl-data/CC-MAIN-2021-25/segments/1623488289268.76/warc/CC-MAIN-20210621181810-20210621211810-00485.warc.gz
342,038,327
8,602
## Do My Metrization Theorems Lab As stated over, I used to write a straightforward and straightforward math lab with only Metrization Theorems Nonetheless, the easier you make your lab, the less complicated it becomes to get stuck at the end of it, after that at the start. This can be very irritating, and all this can take place to you since you are making use of Metrization Theorems and/or Modular Equations inaccurately. With Modular Equations, you are currently making use of the incorrect equation when you obtain stuck at the beginning, if not, after that you are most likely in a dead end, and there is no possible way out. This will only get worse as the issue becomes more intricate, yet then there is the inquiry of exactly how to proceed with the trouble. There is no way to properly go about resolving this kind of math issue without having the ability to right away see what is going on. It is clear that Metrization Theorems and Modular Equations are hard to discover, and also it does take technique to create your very own feeling of instinct. However when you want to address a mathematics issue, you need to utilize a tool, and also the tools for discovering are made use of when you are stuck, and also they are not made use of when you make the wrong relocation. This is where lab Help Service comes in. For instance, what is wrong with the question is incorrect suggestions, such as getting a partial value when you do not have sufficient working components to complete the entire job. There is an excellent factor that this was wrong, as well as it refers logic, not intuition. Reasoning allows you to comply with a detailed procedure that makes good sense, as well as when you make an incorrect relocation, you are typically forced to either try to go forward as well as fix the blunder, or try to go backward and also do an in reverse action. One more instance is when the trainee does not recognize an action of a process. These are both sensible failures, and there is no chance around them. Also when you are stuck in a location that does not enable you to make any kind of kind of step, such as a triangle, it is still crucial to recognize why you are stuck, to ensure that you can make a far better move as well as go from the step you are stuck at to the next area. With this in mind, the very best way to resolve a stuck scenario is to merely take the step forward, as opposed to trying to step. The two procedures are different in their technique, yet they have some fundamental resemblances. Nevertheless, when they are tried together, you can swiftly tell which one is better at solving the problem, and you can additionally tell which one is extra effective. Allow's speak about the very first instance, which associates with the Metrization Theorems math lab. This is not also difficult, so allow's very first go over just how to begin. Take the adhering to process of attaching a part to a panel to be used as a body. This would certainly require 3 dimensions, and also would be something you would need to attach as part of the panel. Now, you would certainly have an extra measurement, however that does not suggest that you can simply maintain that dimension and go from there. When you made your primary step, you can conveniently ignore the measurement, and afterwards you would need to go back and backtrack your steps. Nevertheless, rather than bearing in mind the extra measurement, you can utilize what is called a "mental faster way" to aid you bear in mind that added dimension. 
As you make your initial step, imagine on your own taking the dimension as well as attaching it to the part you want to affix to, and after that see how that makes you really feel when you duplicate the procedure. Visualisation is an extremely powerful strategy, and also is something that you must not skip over. Envision what it would feel like to actually affix the part and also be able to go from there, without the measurement. Currently, let's look at the 2nd example. Let's take the exact same procedure as in the past, now the student needs to remember that they are going to move back one step. If you tell them that they need to return one action, however then you remove the concept of needing to return one step, after that they won't know how to proceed with the issue, they will not recognize where to look for that step, and also the procedure will be a mess. Instead, use a mental faster way like the mental diagram to psychologically show them that they are mosting likely to return one action. as well as put them in a setting where they can move forward from there. without having to think about the missing a step. ## Hire Someone To Do Your Metrization Theorems Lab " Metrization Theorems - Required Aid With a Mathematics lab?" However, lots of pupils have had a problem understanding the principles of linear Metrization Theorems. Thankfully, there is a new layout for straight Metrization Theorems that can be used to instruct linear Metrization Theorems to students that fight with this concept. Trainees can use the lab Help Service to help them discover brand-new methods in straight Metrization Theorems without dealing with a hill of problems and without needing to take a test on their ideas. The lab Aid Solution was produced in order to assist having a hard time pupils as they relocate from college as well as secondary school to the college as well as task market. Lots of trainees are unable to take care of the stress of the understanding process and also can have very little success in understanding the concepts of direct Metrization Theorems. The lab Help Solution was created by the Educational Testing Solution, who offers a range of different online tests that pupils can take as well as exercise. The Examination Help Solution has actually assisted several trainees enhance their ratings and also can help you enhance your scores also. As pupils move from college and secondary school to the university and also work market, the TTS will aid make your pupils' transition much easier. There are a few different ways that you can make use of the lab Assist Solution. The main manner in which trainees use the lab Assist Service is through the Answer Managers, which can assist trainees find out methods in linear Metrization Theorems, which they can utilize to help them prosper in their courses. There are a number of issues that trainees experience when they initially use the lab Assist Service. Trainees are usually overloaded as well as do not recognize how much time they will require to devote to the Solution. The Answer Managers can assist the pupils assess their principle discovering and help them to examine all of the product that they have currently found out in order to be planned for their following training course work. The lab Assist Service works the same way that a professor does in terms of assisting pupils realize the concepts of linear Metrization Theorems. 
By providing your trainees with the devices that they need to find out the crucial principles of direct Metrization Theorems, you can make your pupils more effective throughout their researches. Actually, the lab Aid Solution is so effective that several pupils have actually switched from standard mathematics class to the lab Aid Service. The Job Manager is created to help trainees manage their homework. The Job Supervisor can be set up to set up just how much time the student has readily available to finish their assigned research. You can likewise set up a personalized period, which is a fantastic attribute for trainees who have a hectic schedule or a very active secondary school. This attribute can assist trainees stay clear of sensation bewildered with math tasks. An additional beneficial feature of the lab Assist Solution is the Trainee Assistant. The Trainee Aide aids pupils handle their work and gives them an area to post their research. The Pupil Aide is valuable for pupils who do not want to get bewildered with addressing numerous concerns. As trainees get more comfy with their tasks, they are urged to get in touch with the Task Manager as well as the Pupil Aide to obtain an online support group. The on-line support system can aid pupils preserve their focus as they address their tasks. Every one of the assignments for the lab Help Solution are consisted of in the package. Trainees can login and complete their appointed job while having the student aid offered in the background to help them. The lab Aid Solution can be an excellent assistance for your students as they begin to browse the difficult university admissions and job hunting waters. Trainees should be prepared to obtain made use of to their projects as promptly as feasible in order to reach their major objective of getting into the university. They have to strive sufficient to see results that will permit them to walk on at the following level of their research studies. Getting utilized to the procedure of finishing their assignments is extremely crucial. Pupils have the ability to find different methods to help them discover how to use the lab Help Solution. Understanding how to make use of the lab Aid Solution is vital to pupils' success in college and also job application. ## Pay Someone To Take My Metrization Theorems Lab Metrization Theorems is used in a lot of colleges. Some teachers, however, do not utilize it extremely properly or use it incorrectly. This can have an unfavorable impact on the pupil's discovering. So, when designating projects, utilize an excellent Metrization Theorems assistance service to help you with each lab. These services provide a range of valuable services, including: Jobs might need a great deal of reviewing and browsing on the computer. This is when utilizing a help service can be a wonderful benefit. It enables you to get more work done, boost your understanding, as well as stay clear of a lot of tension. These sorts of research services are a superb means to begin working with the best sort of aid for your demands. Metrization Theorems is among one of the most hard subjects to master for pupils. Dealing with a solution, you can make sure that your needs are satisfied, you are educated properly, and also you recognize the material appropriately. There are many ways that you can show yourself to function well with the course and be successful. Make use of a correct Metrization Theorems help solution to guide you as well as obtain the job done. 
Metrization Theorems is one of the hardest classes to find out but it can be conveniently mastered with the appropriate assistance. Having a research solution also helps to enhance the pupil's qualities. It enables you to include additional debt along with enhance your GPA. Getting additional credit scores is frequently a big advantage in many colleges. Trainees that do not take full advantage of their Metrization Theorems class will end up continuing of the rest of the class. The good news is that you can do it with a fast and very easy solution. So, if you wish to continue in your course, use a good assistance service. Something to remember is that if you actually want to increase your quality degree, your program work requires to obtain done. As high as possible, you require to comprehend as well as work with all your issues. You can do this with an excellent help solution. One benefit of having a research service is that you can aid on your own. If you don't feel confident in your ability to do so, after that an excellent tutor will have the ability to assist you. They will certainly be able to address the troubles you face as well as aid you recognize them to get a far better grade. When you finish from secondary school and enter university, you will certainly require to strive in order to remain ahead of the other pupils. That suggests that you will certainly need to strive on your research. Using an Metrization Theorems service can assist you get it done. Maintaining your qualities up can be difficult due to the fact that you normally require to examine a whole lot as well as take a great deal of tests. You don't have time to deal with your qualities alone. Having an excellent tutor can be a fantastic help since they can help you as well as your homework out. An assistance solution can make it much easier for you to manage your Metrization Theorems class. On top of that, you can find out more about on your own and assist you be successful. Find the very best tutoring solution and you will be able to take your study abilities to the following degree.
2,544
12,598
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.921875
3
CC-MAIN-2021-25
latest
en
0.976152
https://stackoverflow.com/questions/7848065/diet-plan-logic-matching-the-best-result
1,603,424,361,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107880519.12/warc/CC-MAIN-20201023014545-20201023044545-00477.warc.gz
526,198,062
29,458
# Diet plan logic, matching the best result [closed]

I have the following problem. I want to make a diet plan using php & mysql. I have the following:

• Bread: 1g of Protein, 2g of carbohydrate, 4g of fat
• Sugar: 3g of Protein, 6g of carbohydrate, 1g of fat
• Coffee: 8g of Protein, 2g of carbohydrate, 2g of fat
• Meat: 7g of Protein, 0g of carbohydrate, 12g of fat
• Milk: 16g of Protein, 12g of carbohydrate, 2g of fat

Having the above, I want to find the best combination of those above to match the following total: GOAL: 160g Protein -- 41g of carbohydrate -- 120g of fat, and show the result like: 5 pieces of Meat, 3 pieces of Milk, etc. I don't have a problem with php & mysql. I try to find the logic behind this problem.

Not trivial. Get a book or a good web page about Dynamic Programming, specifically the Knapsack problem.

Here's a brute-force solution that should work. This is a SQL script (written in SQL Server, should work in MySql, but may require minor changes) that will iterate through all possible combinations of items before finding the optimal solution.

``````-- Limits by protein/carb/fat
DECLARE @protein_limit INT
SET @protein_limit = 160
DECLARE @carb_limit INT
SET @carb_limit = 90
DECLARE @fat_limit INT
SET @fat_limit = 120

-- Table holding valid items
DECLARE @items TABLE (
    id INT IDENTITY(1,1),
    name VARCHAR(50),
    protein INT,
    carb INT,
    fat INT
)

INSERT INTO @items
SELECT 'Bread', 1, 2, 4
UNION SELECT 'Sugar', 3, 6, 1
UNION SELECT 'Coffee', 8, 2, 2
UNION SELECT 'Meat', 7, 0, 12
UNION SELECT 'Milk', 16, 12, 2

DECLARE @item_count INT
SELECT @item_count = COUNT(*) FROM @items

-- From: http://stackoverflow.com/questions/9507635/pivot-integer-bitwise-values-in-sql/9509598#9509598
DECLARE @bits TABLE (
    number INT,
    [bit] INT,
    value INT
)

; with AllTheNumbers as (
    select cast (POWER(2, @item_count) as int) - 1 Number
    union all
    select Number - 1 from AllTheNumbers where Number > 0
),
Bits as (
    select @item_count - 1 Bit
    union all
    select Bit - 1 from Bits where Bit > 0
)
INSERT INTO @bits (number, [bit], value)
select *, case when (Number & cast (POWER(2, Bit) as int)) != 0 then 1 else 0 end
from AllTheNumbers
cross join Bits
order by Number, [Bit] desc

-- Table to hold trials - brute force!
DECLARE @trials TABLE (
    trial_id INT,
    item_id INT,
    item_quantity INT
)

DECLARE @trial_max INT
SET @trial_max = (@protein_limit + @carb_limit + @fat_limit) * (POWER(2, @item_count))
DECLARE @trial_id INT
SET @trial_id = 1
DECLARE @base_quantity INT

WHILE @trial_id <= @trial_max
BEGIN
    SET @base_quantity = FLOOR((@trial_id / POWER(2, @item_count)))

    INSERT INTO @trials (trial_id, item_id, item_quantity)
    SELECT @trial_id + 1 + b.number
        , id
        , @base_quantity + b.value
    FROM @items a
    JOIN @bits b ON a.id = b.[bit] + 1

    --UPDATE @trials
    --SET item_quantity = @base_quantity + (@trial_id % item_id)
    --WHERE trial_id = @trial_id

    SET @trial_id = @trial_id + POWER(2, @item_count)
END

-- Get results of each trial
SELECT *
FROM @trials a
JOIN @items b ON a.item_id = b.id
ORDER BY a.trial_id

-- Use the trial_id field to reference the results of the previous select
SELECT *
FROM (
    SELECT trial_id
        , SUM(protein * item_quantity) AS protein_total
        , SUM(carb * item_quantity) AS carb_total
        , SUM(fat * item_quantity) AS fat_total
    FROM @trials a
    JOIN @items b ON a.item_id = b.id
    GROUP BY trial_id
) a
WHERE protein_total <= @protein_limit
    AND carb_total <= @carb_limit
    AND fat_total <= @fat_limit
ORDER BY ((@protein_limit - protein_total) + (@carb_limit - carb_total) - (@fat_limit - fat_total)) ASC

-- This last query gets the best fit
SELECT c.name
    , b.item_quantity
FROM (
    SELECT *
        , ROW_NUMBER() OVER (ORDER BY ((@protein_limit - protein_total) + (@carb_limit - carb_total) - (@fat_limit - fat_total)) ASC) AS rn
    FROM (
        SELECT trial_id
            , SUM(protein * item_quantity) AS protein_total
            , SUM(carb * item_quantity) AS carb_total
            , SUM(fat * item_quantity) AS fat_total
        FROM @trials a
        JOIN @items b ON a.item_id = b.id
        GROUP BY trial_id
    ) a
    WHERE protein_total <= @protein_limit
        AND carb_total <= @carb_limit
        AND fat_total <= @fat_limit
) a
JOIN @trials b ON a.trial_id = b.trial_id
JOIN @items c ON b.item_id = c.id
WHERE a.rn = 1
``````

This will return three results, each a different way of viewing the data. Let me know if it works!

The problem itself is a bit flawed. Are you trying to get as close as you can to those targets without going over? Are you merely looking for 'some solution' that's close to those numbers? Depending on how you define what qualifies as an acceptable answer, this can be a very easy or very, very hard problem to solve just off the cuff. For example, add a new ingredient that has 1g each of protein, carb, and fat. Also add 3 more ingredients, each of which has 1g of a unique nutrient: one is 1g protein, 0 carb/fat, one is 1g carb, 0g protein/fat, etc. Here you have at least two, if not many, solutions that would each match the target exactly. Let's continue on and also assume the protein food is gross to you, so you'd rather have a lot more of the 1g/1g/1g ingredient instead. How do we weigh solutions if we can't quite hit the target but don't want you to drink 15 glasses of milk and nothing else? The knapsack problem is a great start, but there are a million different directions you can branch this problem off into. If you're going to try and code a solution, I recommend solving for something specific and then expanding that as you understand what's going on under the hood.
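Building on that advice, here is a minimal brute-force sketch in Python (an illustration, not from the answers above). The item data and goal come from the question; the piece-count bound and the scoring rule (total absolute distance from the goal) are arbitrary choices of exactly the kind the second answer says you have to pin down:

```python
from itertools import product

# (protein, carb, fat) per piece, from the question
items = {'Bread': (1, 2, 4), 'Sugar': (3, 6, 1),
         'Coffee': (8, 2, 2), 'Meat': (7, 0, 12), 'Milk': (16, 12, 2)}
goal = (160, 41, 120)
max_pieces = 12  # arbitrary search bound; raise it for a wider search

def totals(counts):
    # Sum each nutrient over all items, weighted by piece counts.
    return tuple(sum(n * v[i] for n, v in zip(counts, items.values()))
                 for i in range(3))

# Exhaustively try every combination of 0..max_pieces of each item and
# keep the one minimizing total absolute distance from the goal.
best = min(product(range(max_pieces + 1), repeat=len(items)),
           key=lambda c: sum(abs(t - g) for t, g in zip(totals(c), goal)))

print({name: n for name, n in zip(items, best) if n})
print(totals(best))  # compare against the goal
```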
1,559
5,463
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.875
3
CC-MAIN-2020-45
latest
en
0.74609
https://www.gamedev.net/forums/topic/387519-sdl-rect-collision-detection/
1,544,938,108,000,000,000
text/html
crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00403.warc.gz
899,293,280
31,763
# SDL rect collision detection

## Recommended Posts

I have a question about rectangle collision detection in SDL; it made sense until it started confusing me. I've created a program from a tutorial that shows a simple example of collision detection and a rectangle that can be moved. My problem is that the rectangle you can move sometimes stops before it hits the other rectangle; let's call that one a wall instead. The moving rectangle sometimes leaves a little space between itself and the wall. I experimented with it for a while to fix it, but the only thing I found out was that if I set the width and height of the wall to a multiple of ten (like 10 or 20) it left no space at all. For example, when I set the width to 65 it leaves the biggest space possible, but if I change it to 60 or 70 the space disappears. If I change the value to 62 or 68, the space is still there but much smaller; it's like the closer the value is to a multiple of ten, the smaller the space between the rectangles becomes. Is this some kind of known bug or just the bad part of rectangle collision detection? I don't understand why the rectangle mysteriously becomes bigger and leaves an invisible space that the rectangle collides with, while it doesn't change shape when the value is what I mentioned before. I'm tired right now; if necessary, I will post the code too tomorrow.

Well, what are you doing after it hits the wall? If you're not moving the rectangle, it will leave whatever room it had previously between it and the wall. So say it starts 100 pixels from the wall and you move it by 10 pixels each cycle. If the block is 20 pixels wide, it will hit after 8 cycles of movement and end up right next to it. If it's 25 pixels wide, it will move seven times and be 5 pixels from the wall and not be able to move further. If you move the rectangle to the closest point to the wall after a hit, I don't know what the problem is.

That's because in my tutorial, when there's a collision I don't put the rect next to the wall, I undo the movement. Just do a

if( rect moved right and touched wall )
    put rect on left side of wall
else if( rect moved up and touched wall )
    put rect on bottom side of wall
else if( rect moved left and touched wall )
    put rect on right side of wall
etc etc

Quote: Original post by Lazy Foo: That's because in my tutorial, when there's a collision I don't put the rect next to the wall, I undo the movement. Just do a: if( rect moved right and touched wall ) put rect on left side of wall; else if( rect moved up and touched wall ) put rect on bottom side of wall; else if( rect moved left and touched wall ) put rect on right side of wall; etc etc

Yeah, it goes to the opposite side instead, with the same speed, I understood that. But shouldn't the collision work that way no matter what value the wall has anyway? Instead of putting it beside the wall, undoing the movement should do it.

Make it so that the rectangle gets put up next to the wall. Or else you could reduce the speed of movement to 1 pixel, but that's probably not practical. The reason why undoing the movement won't make them end up right next to each other is logical. If it's moving 10 pixels and it crashes into a wall, then you move it back 10 pixels.
This means that the only time it would end up being in contact with the wall is when it was in contact the previous frame. The only time it is in contact the previous frame is when the wall's position is a multiple of its speed (assuming that the rectangle starts at a multiple of its speed, too).

And FYI, next time you don't need to tell us that you're using SDL, since it really isn't relevant to the problem.

Well, I thought it was because this kind of behaviour in general is kind of strange in my opinion, so I pointed out what API I used just in case. People tend to be mad if you don't add all the details in the thread. I can add that I'm using many walls; I've added three walls. It looks like this in the game loop:

SDL_FillRect(screen, &wall, SDL_MapRGB(screen->format, 0x77, 0x77, 0x77));
SDL_FillRect(screen, &wall2, SDL_MapRGB(screen->format, 0x77, 0x77, 0x77));
SDL_FillRect(screen, &wall3, SDL_MapRGB(screen->format, 0x77, 0x77, 0x77));

Outside the game loop:

wall.x = 520; wall.y = 40; wall.w = 70; wall.h = 400;
wall2.x = 0; wall2.y = 40; wall2.w = 590; wall2.h = 10;
wall3.x = 0; wall3.y = 440; wall3.w = 590; wall3.h = 10;

And I also created the global rect objects at the top of the program. I don't know if this is a good way to code, but the if statement becomes something like this now and seems a little, what should I say, hard to read:

if ((box.y<0) || (box.y + SQUARE_HEIGHT > SCREEN_HEIGHT) || (check_collision(box, wall) || (check_collision(box, wall2) || (check_collision(box, wall3)))))
{
    box.y -= yVel;
}

If I want to add multiple walls, is that a good way to do it? Lazy Foo, I would be glad if you could explain the pseudocode you added, like where I should add it.

Quote: Original post by troutofdoom: Well, what are you doing after it hits the wall? If you're not moving the rectangle, it will leave whatever room it had previously between it and the wall. So say it starts 100 pixels from the wall and you move it by 10 pixels each cycle. If the block is 20 pixels wide, it will hit after 8 cycles of movement and end up right next to it. If it's 25 pixels wide, it will move seven times and be 5 pixels from the wall and not be able to move further. If you move the rectangle to the closest point to the wall after a hit, I don't know what the problem is.

I understand what you mean now, it sounds logical; in other words, the rectangle moves 10 pixels when you're moving it, and if the wall is at 22, the two remaining pixels will leave a space. That clears up some misunderstanding at least.

Edit: Changing the xVel and yVel to 1 is proof that it works that way. I can always have it that way and change the max frames to a high value, but I also want to learn how to make it stop instead of undoing. I can't find a way to do it. I guess I must change this class function to make it work.
1,784
6,943
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.640625
3
CC-MAIN-2018-51
latest
en
0.954883
https://www.unitconverters.net/power/foot-pound-force-second-to-gigajoule-second.htm
1,720,781,893,000,000,000
text/html
crawl-data/CC-MAIN-2024-30/segments/1720763514387.30/warc/CC-MAIN-20240712094214-20240712124214-00398.warc.gz
660,777,684
3,720
# Convert Foot Pound-force/second to Gigajoule/second

Please provide values below to convert foot pound-force/second to gigajoule/second [GJ/s], or vice versa.

### Foot Pound-force/second to Gigajoule/second Conversion Table

| Foot Pound-force/second | Gigajoule/second [GJ/s] |
| --- | --- |
| 0.01 foot pound-force/second | 1.3558179483294E-11 GJ/s |
| 0.1 foot pound-force/second | 1.3558179483294E-10 GJ/s |
| 1 foot pound-force/second | 1.3558179483294E-9 GJ/s |
| 2 foot pound-force/second | 2.7116358966589E-9 GJ/s |
| 3 foot pound-force/second | 4.0674538449883E-9 GJ/s |
| 5 foot pound-force/second | 6.7790897416472E-9 GJ/s |
| 10 foot pound-force/second | 1.3558179483294E-8 GJ/s |
| 20 foot pound-force/second | 2.7116358966589E-8 GJ/s |
| 50 foot pound-force/second | 6.7790897416472E-8 GJ/s |
| 100 foot pound-force/second | 1.3558179483294E-7 GJ/s |
| 1000 foot pound-force/second | 1.3558179483294E-6 GJ/s |

### How to Convert Foot Pound-force/second to Gigajoule/second

1 foot pound-force/second = 1.3558179483294E-9 GJ/s
1 GJ/s = 737562149.27833 foot pound-force/second

Example: convert 15 foot pound-force/second to GJ/s:

15 foot pound-force/second = 15 × 1.3558179483294E-9 GJ/s = 2.0337269224942E-8 GJ/s
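The conversion is a single multiplication by the factor above; here is a minimal sketch in Python (not part of the original page):

```python
FT_LBF_PER_S_TO_GJ_PER_S = 1.3558179483294e-9  # factor from the table above

def ft_lbf_s_to_gj_s(power: float) -> float:
    """Convert foot pound-force/second to gigajoule/second."""
    return power * FT_LBF_PER_S_TO_GJ_PER_S

print(ft_lbf_s_to_gj_s(15))  # ≈ 2.0337269224941e-08 GJ/s
```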
447
1,264
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.6875
3
CC-MAIN-2024-30
latest
en
0.533943
https://www.coursehero.com/file/57931263/Laws-of-gravitationpdf/
1,601,571,165,000,000,000
text/html
crawl-data/CC-MAIN-2020-40/segments/1600402131777.95/warc/CC-MAIN-20201001143636-20201001173636-00651.warc.gz
692,905,411
77,842
# Laws of gravitation.pdf - Newton's Law of Universal Gravitation

Isaac Newton compared the acceleration of the moon to the acceleration of objects on earth. Believing that gravitational forces were responsible for each, Newton was able to draw an important conclusion about the dependence of gravity upon distance. This comparison led him to conclude that the force of gravitational attraction between the Earth and other objects is inversely proportional to the distance separating the earth's center from the object's center. But distance is not the only variable affecting the magnitude of a gravitational force. In accord with Newton's famous equation F_net = m*a, Newton knew that the force which caused the apple's acceleration (gravity) must be dependent upon the mass of the apple. And since the force acting to cause the apple's downward acceleration also causes the earth's upward acceleration (Newton's third law), that force must also depend upon the mass of the earth. So for Newton, the force of gravity acting between the earth and any other object is directly proportional to the mass of the earth, directly proportional to the mass of the object, and inversely proportional to the square of the distance which separates the centers of the earth and the object. But Newton's law of universal gravitation extends gravity beyond earth. Newton's law of universal gravitation is about the universality of gravity. Newton's place in the Gravity Hall of Fame is not due to his discovery of gravity, but rather due to his discovery that gravitation is universal. ALL objects attract each other with a force of gravitational attraction. This force of gravitational attraction is directly dependent upon the masses of both objects and inversely proportional to the square of the distance which separates their centers. Newton's conclusion about the magnitude of gravitational forces is summarized symbolically as
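the well-known inverse-square law

$F_{\text{grav}} = G \cdot \frac{m_1 \cdot m_2}{d^2}$

where $m_1$ and $m_2$ are the masses of the two objects, $d$ is the distance separating their centers, and $G$ is the universal gravitational constant.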
443
2,251
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2020-40
longest
en
0.925064
http://www.techiedelight.com/check-strings-can-derived-circularly-rotating/
1,519,301,677,000,000,000
text/html
crawl-data/CC-MAIN-2018-09/segments/1518891814105.6/warc/CC-MAIN-20180222120939-20180222140939-00084.warc.gz
563,037,201
18,381
# Check if strings can be derived from each other by circularly rotating them

Check if a given string can be derived from another string by circularly rotating it. The rotation can be clockwise or anti-clockwise. For example,

Input: X = ABCD, Y = DABC
Output: Yes
Y can be derived from X by right-rotating string X by 1 unit

For two given strings X and Y, a simple solution would be to check whether string Y is a substring of string XX or not. If yes, they can be derived from each other. For example, consider string X = ABCD and Y = DABC:

XX = ABCD + ABCD = ABCDABCD

String Y is clearly a substring of ABCDABCD. The implementation can be seen here. This solution seems efficient, but it uses O(n) extra space. How to do this using O(1) space?

The idea is to rotate the string X in place and check whether it becomes equal to string Y or not. We have to consider every possible rotation of string X (i.e., rotation by 1 unit, 2 units, and so on up to n-1 units, where n is the length of string X). Note that clockwise or anti-clockwise rotation doesn't matter. Below is an implementation of the idea.

Output: Given strings can be derived from each other

In the for loop, I think it should be `rotate(X.begin(), X.begin()+I, X.end())` since we are checking for each rotation.
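For illustration, here is a minimal sketch of the described O(1)-extra-space idea in Python (the article's own listings were in C++ and Java, so this is an illustration rather than the original code); it compares characters via index arithmetic, so no rotated copies of the string are built:

```python
def can_derive_by_rotation(x: str, y: str) -> bool:
    """Check whether y is some circular rotation of x, using O(1) extra space."""
    n = len(x)
    if n != len(y):
        return False
    # Rotation by k units maps x[(i + k) % n] onto y[i].
    return any(all(x[(i + k) % n] == y[i] for i in range(n))
               for k in range(n))

print(can_derive_by_rotation("ABCD", "DABC"))  # True
print(can_derive_by_rotation("ABCD", "ACBD"))  # False
```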
364
1,481
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.09375
3
CC-MAIN-2018-09
latest
en
0.857444
https://questions.examside.com/past-years/gate/gate-me/strength-of-materials/torsion
1,713,263,872,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296817081.52/warc/CC-MAIN-20240416093441-20240416123441-00857.warc.gz
453,630,230
30,319
# Torsion · Strength of Materials · GATE ME

## Marks 1

GATE ME 2017 Set 1
A motor driving a solid circular steel shaft transmits $$40$$ $$kW$$ of power at $$500$$ $$rpm$$. If the diameter of the shaft is $$40$$ $$mm,$$ the m...

GATE ME 2016 Set 1
The cross sections of two hollow bars made of the same material are concentric circles as shown in the figure. It is given that $$r_3 > r_1$$ and $$r_4$$ ...

GATE ME 2015 Set 2
A rope brake dynamometer attached to the crank shaft of an $$I.C.$$ engine measures a brake power of $$10$$ kW when the speed of rotation of the shaft...

GATE ME 2015 Set 1
Consider a stepped shaft subjected to a twisting moment applied at $$B$$ as shown in the figure. Assume shear modulus, $$G=77$$ GPa. The angle of twis...

GATE ME 2014 Set 3
Two solid circular shafts of $${{R_1}}$$ and $${{R_2}}$$ are subjected to the same torque. The maximum shear stresses developed in the two shafts are $${\...

GATE ME 2009
A solid circular shaft of diameter $$d$$ is subjected to a combined bending moment, $$M$$ and torque, $$T.$$ The material property to be used for desi...

GATE ME 2006
For a circular shaft of diameter $$'d'$$ subjected to torque $$T,$$ the maximum value of the shear stress is

GATE ME 2003
Maximum shear stress developed on the surface of a solid circular shaft under pure torsion is 240 MPa. If the shaft diameter is doubled then the maxim...

## Marks 2

GATE ME 2016 Set 3
Two circular shafts made of the same material, one solid $$(S)$$ and one hollow $$(H)$$, have the same length and polar moment of inertia. Both are subjec...

GATE ME 2016 Set 2
A rigid horizontal rod of length $$2L$$ is fixed to a circular cylinder of radius $$R$$ as shown in the figure. Vertical forces of magnitude $$P$$ are...

GATE ME 2015 Set 2
A hollow shaft of $$1$$ m length is designed to transmit a power of $$30$$ kW at $$700$$ rpm. The maximum permissible angle of twist in the shaft is ...

GATE ME 2012
A solid circular shaft needs to be designed to transmit a torque of 50 N.m. If the allowable shear stress of the material is $$140$$ MPa, assuming a fac...

GATE ME 2011
A torque $$T$$ is applied at the free end of a stepped rod of circular cross-sections as shown in the figure. The shear modulus of the material of the...

GATE ME 2009
A solid shaft of diameter, $$d$$ and length, $$L$$ is fixed at both the ends. A torque, $${T_0}$$ is applied at a distance, $$L/4$$ from the left end ...

GATE ME 2007
A machine frame shown in the figure below is subjected to a horizontal force of $$600N$$ parallel to the $$Z$$-direction. The normal and shear str...

GATE ME 2007
A machine frame shown in the figure below is subjected to a horizontal force of $$600N$$ parallel to the $$Z$$-direction. The maximum principal st...

GATE ME 2005
The two shafts $$AB$$ and $$BC$$, of equal length and diameters $$d$$ and $$2d$$, are made of the same material. They are joined at $$B$$ through a sh...

GATE ME 2004
A torque of $$10N-m$$ is transmitted through a stepped shaft as shown in figure. The torsional stiffnesses of individual sections of lengths $$MN...

GATE ME 2004
A solid circular shaft of $$60$$ mm diameter transmits a torque of $$1600$$ N - m. The value of maximum shear developed is

GATE ME 1994
Two shafts $$A$$ and $$B$$ are made of the same material. The diameter of shaft $$B$$ is twice that of shaft $$A$$. The ratio of power which can be t...

GATE ME 1993
The compound shaft shown is built-in at the two ends. It is subjected to a twisting moment $$T$$ at the middle. What is the ratio of the reaction torq...

GATE ME 1993
A circular rod of diameter $$d$$ and length $$3d$$ is subjected to a compressive force $$F$$ acting at the top point as shown below. Calculate the str...
1,025
3,680
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2024-18
latest
en
0.744386
https://www.adventuresincre.com/sumproduct-weighted-average-real-estate/
1,726,582,301,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00414.warc.gz
585,471,713
29,137
# Using SUMPRODUCT to Calculate Weighted Average in Real Estate (Updated Aug 2024)

In my experience, using the SUMPRODUCT function in Excel to calculate weighted average is one of the most oft-used Excel techniques in real estate financial modeling. I learned this technique on day one of my first real estate internship and I continue to use it at least once a week to this day. So in this blog post, I’ll show you how to use this in your real estate financial modeling.

Note: If you’re an Accelerator member, Michael and I have touched on this topic as it relates to course 1 in some Q&A’s. Not yet an Accelerator member? Consider joining the real estate financial modeling training program used by top real estate companies and elite universities to train the next generation of CRE professionals.

## The Math Behind Weighted Average

To fully understand this concept, it’s first necessary to think back to your 7th grade math class when Mrs. Cauliflower went over weighted average. To teach weighted average, she first taught you to calculate the arithmetic mean (i.e. basic averaging formula). Or in other words, to calculate the average of a string of values you find the sum of that string of values, and then divide the total by the number of values in the string:

Average = (a1 + a2 + a3 + … + an) ÷ n

or

5.8 = (5 + 7 + 4 + 3 + 10) ÷ 5

However, in some cases it’s necessary to assign greater or lesser weight to each value in the string. Thus, to allow for varying weight between each value in the string, we use the weighted average formula:

Weighted Average = (a1 * v1 + a2 * v2 + a3 * v3 + … + an * vn) ÷ (v1 + v2 + v3 + … + vn)

or

5.57 = (5 * 1 + 7 * 2 + 4 * 1 + 3 * 2 + 10 * 1) ÷ (1 + 2 + 1 + 2 + 1)

To hammer this home, here’s my favorite weighted average explainer video on YouTube:

## Weighted Average in Real Estate Financial Analysis

So how does Mrs. Cauliflower’s class on weighted averages or some random Youtuber’s explainer video on the subject relate to real estate financial modeling? Well, this logic is used frequently in real estate when averaging weighted values. As I mentioned above, I use it at least weekly and you will too.

I most commonly use the weighted average formula when averaging a set of sale or lease comps. Each comp will be weighted differently depending on, for instance, the number of units the comp has. I also regularly use this concept when performing rent roll analysis or when calculating variable vs. fixed cash flow. Sometimes it even becomes necessary to write conditional weighted average logic in Excel in order to only weight and average specific values in an array. And so knowing how to perform this calculation in Excel quickly and accurately is essential to being fully proficient modeling real estate in Excel.

## Using SUMPRODUCT in Excel to Calculate Weighted Average

So how do you calculate weighted average in Excel? You really have two options. The first option is to do the math the long way, as outlined above. Or in other words, write a formula that multiplies each value by its respective weight, add up the total, and then divide that total by the sum of the weights.

The problem with doing the weighted average calculation this way is that oftentimes, you’re dealing with a string of values many cells long. Imagine writing a formula to calculate the weighted average of a unit mix table with 50 unit types! That would involve writing a formula with 50 weights and 50 values – it would take several minutes to write!
So instead, there’s a much faster way: using Excel’s SUMPRODUCT() function. If you’re unfamiliar with SUMPRODUCT in Excel, it essentially performs this portion (a1 * v1 + a2 * v2 + a3 * v3 + … + an * vn) of your weighted average calculation instantly. Or in other words, it calculates the sum product of two (or more) arrays. Completing the weighted average calculation then is as simple as dividing that SUMPRODUCT() result by the SUM() of the weight array. Let me show you what I mean using a real-life real estate example and the following Excel logic:

Weighted Average = SUMPRODUCT(Component Array, Weight Array)/SUM(Weight Array)
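The same Excel logic translates directly to code. Here is a minimal sketch in Python (an illustration, not part of the article), with a small hypothetical unit mix standing in for the comp set; each unit type's rent is weighted by its unit count:

```python
# Hypothetical unit mix: (unit count, monthly rent) per unit type
unit_mix = [(24, 1150), (36, 1400), (12, 1825)]

weights = [units for units, _ in unit_mix]
rents = [rent for _, rent in unit_mix]

# SUMPRODUCT(Component Array, Weight Array) / SUM(Weight Array)
weighted_avg_rent = (sum(w * r for w, r in zip(weights, rents))
                     / sum(weights))

print(weighted_avg_rent)  # 1387.5
```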
947
4,154
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.46875
4
CC-MAIN-2024-38
latest
en
0.867664
https://stanford.library.sydney.edu.au/entries/legal-probabilism/
1,632,873,348,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780060908.47/warc/CC-MAIN-20210928214438-20210929004438-00569.warc.gz
575,293,113
72,680
# Legal Probabilism First published Tue Jun 8, 2021 Legal probabilism is a research program that relies on probability theory to analyze, model and improve the evaluation of evidence and the process of decision-making in trial proceedings. While the expression “legal probabilism” seems to have been coined by Haack (2014b), the underlying idea can be traced back to the early days of probability theory (see, for example, Bernoulli 1713). Another term that is sometimes encountered in the literature is “trial by mathematics” coined by Tribe (1971). Legal probabilism remains a minority view among legal scholars, but attained greater popularity in the second half of the twentieth century in conjunction with the law and economics movement (Becker 1968; Calabresi 1961; Posner 1973). To illustrate the range of applications of legal probabilism, consider a stylized case. Alice is charged with murder. Traces at the crime scene, which the perpetrator probably left, genetically match Alice. An eyewitness testifies that Alice ran away from the scene after the crime was committed. Another witness asserts that Alice had previously threatened the victim multiple times. Alice, however, has an alibi. Mark claims he was with her for the entire day. This case raises several questions. How should the evidence be evaluated? How to combine conflicting pieces of evidence, such as the incriminating DNA match and the alibi testimony? If the standard of decision in a murder case is proof beyond a reasonable doubt, how strong should the evidence be to meet this standard? Legal probabilism is a theoretical framework that helps to address these different questions. Finkelstein and Fairley (1970) gave one of the first systematic analyses of how probability theory, and Bayes’ theorem in particular, can help to evaluate evidence at trial (see Section 1). After the discovery of DNA fingerprinting, many legal probabilists focused on how probability theory could be used to quantify the strength of a DNA match (see Section 2). Recent work in Artificial Intelligence made it possible to use probability theory—in the form of Bayesian networks—to evaluate complex bodies of evidence consisting of multiple components (see Section 3). Following the work of Lempert (1977), likelihood ratios are now commonly used as probabilistic measures of the relevance of the evidence presented at trial (see Section 4). Other authors, starting with seminal papers by Kaplan (1968) and Cullison (1969), deployed probability theory and decision theory to model decision-making and standards of proof at trial (see Section 5). Legal probabilism is no doubt controversial. Many legal theorists and philosophers, starting with Tribe (1971), leveled several critiques against it. These critiques range from difficulties in assessing the probability of someone’s criminal or civil liability to the dehumanization of trial decisions to misconstruing how the process of evidence evaluation and decision-making takes place in trial proceedings. A key challenge for legal probabilism—one that has galvanized philosophical attention in recent years—comes from the paradoxes of legal proof or puzzles of naked statistical evidence. Nesson (1979), Cohen (1977) and Thomson (1986) formulated scenarios in which, despite a high probability of guilt or civil liability based on the available evidence, a verdict against the defendant seems unwarranted (see Section 6). 
Other challenges for legal probabilism include the problem of conjunction and the reference class problem (see Section 7).

Legal probabilism can also be understood as a far-reaching research program that aims to analyze—by means of probability theory—the trial system as a whole, including institutions such as the jury system and trial procedure. Some early French probability theorists examined the relationship between jury size, jury voting rules and the risk of convicting an innocent (Condorcet 1785; Laplace 1814; Poisson 1837; for more recent discussions, see Kaye 1980; Nitzan 2009; Suzuki 2015). At the root of this more radical version of legal probabilism lies the dream of discerning patterns in the behavior of individuals and improving legal institutions (Hacking 1990). Today’s rapid growth of data—paired with machine learning and the pervasiveness of cost-benefit analysis—has rendered this dream more alive than ever before (Ferguson 2020). For a critique of mathematization, quantification and cost-benefit analysis applied to the justice system, see Allen (2013) and Harcourt (2018). This entry will not, however, discuss this far-reaching version of legal probabilism.

## 1. Probabilistic Toolkit

This section begins with a review of the axioms of probability and its interpretations, and then shows how probability theory helps to spot mistakes that people may fall prey to when they assess evidence at trial, such as the prosecutor’s fallacy, the base rate fallacy, and the defense attorney’s fallacy. This section also examines how probabilities are assigned to hypotheses and how hypotheses are formulated at different levels of granularity.

### 1.1 Probability and its interpretation

Standard probability theory consists of three axioms:

Table 1

| Axiom | In words | In symbols |
| --- | --- | --- |
| Non-negativity | The probability of any proposition $$A$$ is greater than or equal to 0. | $$\Pr(A)\geq 0$$ |
| Normality | The probability of any logical tautology is 1. | If $$\models A$$, then $$\Pr(A)=1$$ |
| Additivity | The probability of the disjunction of two propositions $$A$$ and $$B$$ is the sum of their respective probabilities, provided the two propositions are logically incompatible. | If $$\models \neg (A \wedge B)$$, then $$\Pr(A\vee B)=\Pr(A)+\Pr(B)$$ |

An important notion in probability theory is that of conditional probability, that is, the probability of a proposition $$A$$ conditional on a proposition $$B$$, in symbols, $$\Pr(A \pmid B)$$. Although it is sometimes taken as a primitive notion, conditional probability is usually defined as the probability of the conjunction $$\Pr(A \wedge B)$$ divided by the probability of the proposition being conditioned on, $$\Pr(B)$$, or in other words,

$\Pr(A \pmid B)= \frac{\Pr(A \wedge B)}{\Pr(B)}\quad \text{assuming } \Pr(B)\neq 0.$

This notion is crucial in legal applications. The fact-finders at trial might want to know the probability of “The defendant was at the scene when the crime was committed” conditional on “Mrs. Dale asserts she saw the defendant run away from the scene”. Or they might want to know the probability of “The defendant is the source of the traces found at the crime scene” conditional on “The DNA expert asserts that the defendant’s DNA matches the traces at the scene.” In general, the fact-finders are interested in the probability of a given hypothesis $$H$$ about what happened conditional on the available evidence $$E$$, in symbols, $$\Pr(H \pmid E)$$.
Most legal probabilists agree that the probabilities ascribed to statements that are disputed in a trial—such as “The defendant is the source of the crime traces” or “The defendant was at the crime scene when the crime was committed”—should be understood as evidence-based degrees of belief (see, for example, Cullison 1969; Kaye 1979b; Nance 2016). This interpretation addresses the worry that since past events did or did not happen, their probabilities should be 1 or 0. Even if the objective chances are 1 or 0, statements about past events could still be assigned different degrees of belief given the evidence available. In addition, degrees of belief are better suited than frequencies for applications to unrepeatable events, such as actions of individuals, which are often the focus of trial disputes. Further engagement with these issues lies beyond the scope of this entry (for a more extensive discussion, see the entry on the interpretations of probability as well as Childers 2013; Gillies 2000; Mellor 2004; Skyrms 1966).

Some worry that, except for obeying the axioms of probability, degrees of belief are in the end assigned in a subjective and arbitrary manner (Allen and Pardo 2019). This worry can be alleviated by noting that degrees of belief should reflect a conscientious assessment of the evidence available which may also include empirical frequencies (see §1.2 below and the examples in ENFSI 2015). In some cases, however, the relevant empirical frequencies will not be available. When this happens, degrees of belief can still be assessed by relying on common sense and experience. Sometimes there will be no need to assign exact probabilities to every statement about the past. In such cases, the relevant probabilities can be expressed approximately with sets of probability measures (Shafer 1976; Walley 1991), probability distributions over parameter values, or intervals (see later in §1.4).

### 1.2 Probabilistic fallacies

Setting aside the practical difficulties of assigning probabilities to different statements, probability theory is a valuable analytical tool in detecting misinterpretations of the evidence and reasoning fallacies that may otherwise go unnoticed.

#### 1.2.1 Assuming independence

A common error that probability theory helps to identify consists in assuming without justification that two events are independent of one another. A theorem of probability theory states that the probability of the conjunction of two events, $$A\wedge B$$, equals the product of the probabilities of the conjuncts, $$A$$ and $$B$$, that is,

$\Pr(A \wedge B) = \Pr(A)\times \Pr(B),$

provided $$A$$ and $$B$$ are independent of one another in the sense that the conditional probability $$\Pr(A \pmid B)$$ is the same as the unconditional probability $$\Pr(A)$$. More formally, note that

$\Pr(A \pmid B)=\frac{\Pr(A \wedge B)}{\Pr(B)},$

so

$\Pr(A \wedge B)=\Pr(A)\times \Pr(B)$

provided $$\Pr(A)=\Pr(A \pmid B)$$. The latter equality means that learning about $$B$$ does not change one's degree of belief about $$A$$.

The trial of Janet and Malcolm Collins, a couple accused of robbery in 1964 Los Angeles, illustrates how the lack of independence between events can be overlooked.
The prosecutor called an expert witness, a college mathematician, to the stand, and asked him to consider the following features and assume they had the following probabilities: black man with a beard (1 in 10), man with a mustache (1 in 4), white woman with blond hair (1 in 3), woman with a ponytail (1 in 10), interracial couple in a car (1 in 1,000), couple driving a yellow convertible (1 in 10). The mathematician calculated, correctly on the assumption of independence, the probability of a random couple displaying all these features: 1 in 12 million (assuming the individual probability estimates were correct). Relying on this argument, the jury convicted the couple. If those features are so rare in Los Angeles and the robbers had them—the jury must have reasoned—the Collins must be the robbers.

The conviction was later reversed by the Supreme Court of California in People v. Collins (68 Cal.2d 319, 1968). The Court pointed out the mistake of assuming that multiplying the probabilities of each feature would give the probability of their joint occurrence. This assumption holds only if the features in question are probabilistically independent. But this is not the case since the occurrence of, say, the feature "man with a beard" might very well correlate with the feature "man with a mustache". The same correlation might hold for the features "white woman with blond hair" and "woman with a ponytail". Besides the lack of independence, another problem is the fact that the probabilities associated with each feature were not obtained by any reliable method.

The British case R. v. Clark (EWCA Crim 54, 2000) is another example of how the lack of independence between events can be easily overlooked. Sally Clark had two sons. Her first son died in 1996 and her second son died in similar circumstances a few years later in 1998. They both died within a few weeks after birth. Could it just be a coincidence? At trial, the paediatrician Roy Meadow testified that the probability that a child from an affluent family such as the Clarks would die of Sudden Infant Death Syndrome (SIDS) was 1 in 8,543. Assuming that the two deaths were independent events, Meadow calculated that the probability of both children dying of SIDS was

$\frac{1}{8,543}\times \frac{1}{8,543}, \quad \text{which approximately equals} \quad \frac{1}{73\times 10^6}$

or 1 in 73 million. This impressively low number no doubt played a role in the outcome of the case. Sally Clark was convicted of murdering her two infant sons (though the conviction was ultimately reversed on appeal). The $$\slashfrac{1}{(73\times 10^6)}$$ figure rests on the assumption of independence. This assumption is seemingly false since environmental or genetic factors may predispose a family to SIDS (for a fuller discussion of this point, see Dawid 2002; Barker 2017; Sesardic 2007).

#### 1.2.2 The prosecutor's fallacy

Another mistake that people often make while assessing the evidence presented at trial consists in conflating the two directions of conditional probability, $$\Pr(A\pmid B)$$ and $$\Pr(B \pmid A)$$. For instance, if you toss a die, the probability that the result is 2 given that it is even (which equals $$\slashfrac{1}{3}$$) is different from the probability that the result is even given that it is 2 (which equals 1). In criminal cases, confusion about the two directions of conditional probability can lead to exaggerating the probability of the prosecutor's hypothesis.
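The asymmetry in the die example can be checked in miniature. The following sketch (plain Python) computes both conditional probabilities by brute-force enumeration:

```python
# A toy sketch: for a fair die, Pr(result is 2 | result is even) = 1/3,
# while Pr(result is even | result is 2) = 1.
from fractions import Fraction

outcomes = [1, 2, 3, 4, 5, 6]  # equiprobable outcomes of a fair die

def pr(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def pr_given(event, condition):
    # Pr(A | B) = Pr(A and B) / Pr(B)
    return pr(lambda o: event(o) and condition(o)) / pr(condition)

print(pr_given(lambda o: o == 2, lambda o: o % 2 == 0))  # 1/3
print(pr_given(lambda o: o % 2 == 0, lambda o: o == 2))  # 1
```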
Suppose an expert testifies that the blood found at the crime scene matches the defendant's and it is 5% probable that a person unrelated to the crime—someone who is not the source of the blood found at the scene—would match by coincidence. Some may be tempted to interpret this statement as saying that the probability that the defendant is not the source of the blood is 5% and thus it is 95% probable that the defendant is the source. This flawed interpretation is known as the prosecutor's fallacy, sometimes also called the transposition fallacy (Thompson and Schumann 1987). The 5% figure is the conditional probability $$\Pr(\match \pmid \neg \source)$$ that, assuming the defendant is not the source of the crime scene blood $$(\neg \source)$$, he would still match $$(\match)$$. The 5% figure is not the probability $$\Pr(\neg \source \pmid \match)$$ that, if the defendant matches ($$\match$$), he is not the source ($$\neg \source$$). By conflating the two directions and thinking that

$\Pr(\neg \source \pmid \match)=\Pr(\match \pmid \neg \source)=5\%,$

one is led to erroneously conclude that $$\Pr(\source \pmid \match)=95\%$$.

The same conflation occurred in the Collins case discussed earlier. Even if the calculations were correct, the 1 in 12 million probability that a random couple would have the specified characteristics should be interpreted as $$\Pr(\match\pmid \textsf{innocent})$$, not as the probability that the Collins were innocent given that they matched the eyewitness description, $$\Pr(\textsf{innocent}\pmid \match)$$. Presumably, the jurors convicted the Collins because they thought it was virtually impossible that they were not the robbers. But the 1 in 12 million figure, assuming it is correct, only shows that $$\Pr(\match\pmid \textsf{innocent})$$ equals $$\slashfrac{1}{(12\times 10^6)}$$, not that $$\Pr(\textsf{innocent}\pmid \match)$$ equals $$\slashfrac{1}{(12\times 10^6)}$$.

#### 1.2.3 Bayes' theorem and the base rate fallacy

The relation between the probability of the hypothesis given the evidence, $$\Pr(H \pmid E)$$, and the probability of the evidence given the hypothesis, $$\Pr(E \pmid H)$$, is captured by Bayes' theorem:

$\Pr(H \pmid E) = \frac{\Pr(E \pmid H)}{\Pr(E)} \times \Pr(H),\quad\text{ assuming } \Pr(E) \neq 0.$

The probability $$\Pr(H)$$ is called the prior probability of $$H$$ (it is prior to taking evidence $$E$$ into account) and $$\Pr(H\pmid E)$$ is the posterior probability of $$H$$. This terminology is standard, but is slightly misleading because it suggests a temporal ordering which does not have to be there. Next, consider the ratio

$\frac{\Pr(E \pmid H)}{\Pr(E)},$

sometimes called the Bayes factor.[1] This is the ratio between the probability $$\Pr(E \pmid H)$$ of observing evidence $$E$$ assuming $$H$$ (often called the likelihood) and the probability $$\Pr(E)$$ of observing $$E$$. By the law of total probability, $$\Pr(E)$$ results from adding the probability of observing $$E$$ assuming $$H$$ and the probability of observing $$E$$ assuming $$\neg H$$, each weighted by the prior probabilities of $$H$$ and $$\neg H$$ respectively:

$\Pr(E)= \Pr(E \pmid H)\times \Pr(H)+\Pr(E \pmid \neg H)\times \Pr(\neg H).$

As is apparent from Bayes' theorem, multiplying the prior probability by the Bayes factor yields the posterior probability $$\Pr(H \pmid E)$$. Other things being equal, the lower the prior probability $$\Pr(H)$$, the lower the posterior probability $$\Pr(H \pmid E)$$.
The base rate fallacy consists in ignoring the effect of the prior probability on the posterior (Kahneman and Tversky 1973). This leads to thinking that the posterior probability of a hypothesis given the evidence is different than it actually is (Koehler 1996). Consider the blood evidence example discussed previously. By Bayes' theorem, the two conditional probabilities $$\Pr(\neg \source \pmid \match)$$ and $$\Pr(\match \pmid \neg \source)$$ are related as follows:

$\Pr(\neg \source \pmid \match) = \frac{\Pr(\match \pmid \neg \source)\times \Pr(\neg \source)}{\Pr(\match \pmid \neg \source)\times \Pr(\neg \source) + \Pr(\match \pmid \source)\times \Pr(\source)}.$

Absent any other compelling evidence to the contrary, it should initially be very likely that the defendant, as anyone else, had little to do with the crime. Say, for illustrative purposes, that the prior probability $$\Pr(\neg \source)= .99$$ and so $$\Pr(\source)=.01$$ (more on how to assess prior probabilities later in §1.4). Next, $$\Pr(\match \pmid \source)$$ can be set approximately to $$1$$, because if the defendant were the source of the blood at the crime scene, he should match the blood at the scene (setting aside the possibility of false negatives). The expert estimated the probability that someone who is not the source would coincidentally match the blood found at the scene—that is, $$\Pr(\match \pmid \neg \source)$$—as equal to .05. By Bayes' theorem,

$\Pr(\neg \source \pmid \match) = \frac{.05 \times .99}{(.05 \times .99)+ (1 \times .01)} \approx .83.$

The posterior probability that the defendant is the source, $$\Pr(\source\pmid \match)$$, is roughly $$1-.83=.17$$, much lower than the exaggerated value of .95. This posterior probability would have dropped even lower if the prior probability were lower. A similar analysis applies to the Collins case. If the prior probability of the Collins's guilt is sufficiently low—say 1 in 6 million—the posterior guilt probability given the match with the eyewitness description would be roughly .7, much less impressive than

$\left(1-\frac{1}{12\times 10^6}\right)\approx .9999999.$

#### 1.2.4 Defense attorney's fallacy

Although base rate information should not be overlooked, paying excessive attention to it and ignoring other evidence leads to the so-called defense attorney's fallacy. As before, suppose the prosecutor expert testifies that the defendant matches the traces found at the crime scene and there is a 5% probability that a random person, unrelated to the crime, would coincidentally match, so $$\Pr(\match\pmid \neg\source)=.05$$. To claim that it is 95% likely that the defendant is the source of the traces, or in symbols, $$\Pr(\source\pmid \match)=.95$$, would be to commit the prosecutor's fallacy described earlier. But suppose the defense argues that, since the population in town is 10,000 people, $$5\%$$ of them would match by coincidence, that is, $$10,000\times 5\%=500$$ people. Since the defendant could have left the traces just as well as any of the other 499 people, he is $$\slashfrac{1}{500}=.2\%$$ likely to be the source, a rather unimpressive figure. The match—the defense concludes—is worthless evidence.

This analysis portrays the prosecutor's case as weaker than it need be. If the investigators narrowed down the set of suspects to, say, 100 people—a potential piece of information the defense ignored—the match would make the defendant 16% likely to be the source. This fact can be verified using Bayes' theorem (left as an exercise for the reader).
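The exercise can be worked in a few lines of code. The sketch below (Python, using the probabilities assumed in the text) computes the posterior via Bayes' theorem; note that a pool of 100 suspects corresponds to a prior of $$\slashfrac{1}{100}=.01$$, the same prior used for illustration above.

```python
# A sketch of the Bayes'-theorem computations above, with the probabilities
# assumed in the text: Pr(match | source) = 1, Pr(match | not source) = .05.

def pr_source_given_match(prior_source,
                          pr_match_if_source=1.0,
                          pr_match_if_not_source=0.05):
    num = pr_match_if_source * prior_source
    den = num + pr_match_if_not_source * (1 - prior_source)
    return num / den

post = pr_source_given_match(0.01)  # prior .01, i.e., 1 out of 100 suspects
print(round(1 - post, 2))           # 0.83 -> Pr(not source | match)
print(round(post, 3))               # 0.168 -> the "16%" figure in the text
```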
Even assuming, as the defense claims, that the pool of suspects comprises as many as 10,000 people, the prior probability that the defendant is the source would be $$\slashfrac{1}{10,000}$$, and since the defendant matches the traces, this probability should rise to (approximately) $$\slashfrac{1}{500}$$. Given the upward shift in the probability from $$\slashfrac{1}{10,000}$$ to $$\slashfrac{1}{500}$$, the match should not be considered worthless evidence (more on this later in §2.1).

### 1.3 Odds version of Bayes' theorem

The version of Bayes' theorem considered so far is informationally demanding since the law of total probability—which spells out $$\Pr(E)$$, the denominator in the Bayes factor $$\slashfrac{\Pr(E \pmid H)}{\Pr(E)}$$—requires one to consider $$H$$ and the catch-all alternative hypothesis $$\neg H$$, comprising all possible alternatives to $$H$$. A simpler version of Bayes' theorem is the so-called odds formulation:

$\frac{\Pr(H \pmid E)}{\Pr(H' \pmid E)} = \frac{\Pr(E \pmid H)}{\Pr(E \pmid H')} \times \frac{\Pr(H)}{\Pr(H')},$

or in words:

$\textit{posterior odds} = \textit{likelihood ratio} \times \textit{prior odds}.$

The ratio

$\frac{\Pr(H)}{\Pr(H')}$

represents the prior odds, where $$H$$ and $$H'$$ are two competing hypotheses, not necessarily one the complement of the other. The likelihood ratio

$\frac{\Pr(E \pmid H)}{\Pr(E \pmid H')}$

compares the probability of the same item of evidence $$E$$ given the two hypotheses. The posterior odds

$\frac{\Pr(H \pmid E)}{\Pr(H' \pmid E)}$

compares the probabilities of the hypotheses given the evidence. This ratio is different from the probability $$\Pr(H \pmid E)$$ of a specific hypothesis given the evidence.

As an illustration, consider the Sally Clark case (previously discussed in §1.2). The two hypotheses to compare are that Sally Clark's sons died of natural causes (natural) and that Clark killed them (kill). The evidence available is that the two sons died in similar circumstances one after the other (two deaths). By Bayes' theorem (in the odds version),

$\frac{\Pr(\textsf{kill} \pmid \textsf{two deaths})}{\Pr(\textsf{natural} \pmid \textsf{two deaths})} = \frac{\Pr(\textsf{two deaths} \pmid \textsf{kill})}{\Pr(\textsf{two deaths} \pmid \textsf{natural})} \times \frac{\Pr(\textsf{kill})}{\Pr(\textsf{natural})}.$

The likelihood ratio

$\frac{\Pr(\textsf{two deaths} \pmid \textsf{kill})}{\Pr(\textsf{two deaths} \pmid \textsf{natural})}$

compares the likelihood of the evidence (two deaths) under each hypothesis (kill v. natural). Since under either hypothesis both babies would have died—by natural causes or killing—the ratio should equal one (Dawid 2002). What about the prior odds

$\frac{\Pr(\textsf{kill})}{\Pr(\textsf{natural})}?$

Recall the $$\slashfrac{1}{(73\times 10^6)}$$ figure given by the pediatrician Roy Meadow. This figure was intended to convey how unlikely it was that two babies would die of natural causes, one after the other. If it is so unlikely that both would die of natural causes—one might reason—it must be likely that they did not actually die of natural causes and quite likely that Clark killed them. But did she? The prior probability that they died of natural causes should be compared to the prior probability that a mother would kill them. To have a rough idea, suppose that in a mid-size country like the United Kingdom 1 million babies are born every year, of whom 100 are murdered by their mothers. So the chance that a mother would kill one baby in a year is 1 in 10,000.
What is the chance that the same mother kills two babies? Say we appeal to the (controversial) assumption of independence. On this assumption, the chance that a mother kills two babies equals 1 in 100 million. Assuming independence,

$\Pr(\textsf{natural})=\frac{1}{73\times10^6}$

and

$\Pr(\textsf{kill})=\frac{1}{100\times 10^6}.$

This means that the prior odds would equal .73. With a likelihood ratio of one, the posterior odds would also equal .73. On this analysis, the hypothesis that Clark killed her sons would be .73 times as likely as the hypothesis that they died of natural causes, or in other words, the natural cause hypothesis would be about 1.37 times more likely than the hypothesis that Clark killed her sons.

A limitation of this analysis should be kept in mind. The .73 ratio is a measure of the probability of one hypothesis compared to another. From this ratio alone, the posterior probability of the individual hypotheses cannot be deduced. Only if the competing hypotheses $$H$$ and $$H'$$ are exclusive and exhaustive, say one is the negation of the other, can the posterior probability be derived from the posterior odds

$PO=\frac{P(H \pmid E)}{P(H' \pmid E)}$

via the equality

$\Pr(H\pmid E)= \frac{PO}{1+PO}.$

But the hypotheses kill and natural are not exhaustive, since the two babies could have died in other ways. So the posterior odds

$\frac{\Pr(\textsf{kill} \pmid \textsf{two deaths})}{\Pr(\textsf{natural} \pmid \textsf{two deaths})}$

cannot be translated into the posterior probabilities of individual hypotheses. (A more sophisticated probabilistic analysis is presented in §3.3.)
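The odds calculation is compact enough to script. The sketch below (Python) reproduces it using the rough figures assumed above, which are illustrative rather than established statistics:

```python
# A sketch of the odds-form analysis of the Clark case, with the rough
# figures assumed in the text (illustrative, not established statistics).

pr_natural = 1 / 73_000_000    # Meadow's figure: two SIDS deaths
pr_kill = 1 / 100_000_000      # two killings, assuming independence

prior_odds = pr_kill / pr_natural   # = 73/100 = 0.73
likelihood_ratio = 1.0              # both babies die under either hypothesis
posterior_odds = likelihood_ratio * prior_odds

print(round(posterior_odds, 2))     # 0.73: kill is *less* likely than natural
print(round(1 / posterior_odds, 2)) # 1.37: natural causes, comparatively
```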
### 1.4 Where do the numbers come from?

In the examples considered so far, probabilities were assigned on the basis of empirical frequencies and expert opinions. For example, an expert testimony that only 5% of people possess a particular blood type was used to set $$\Pr(\match \pmid \neg \source)=.05$$. Probabilities were also assigned on the basis of common sense, for example, $$\Pr(\match \pmid \source)$$ was set to 1 assuming that if someone is the source of the crime traces, the person must match the crime traces (setting aside the possibility of framing, fabrication of the traces or false negatives in the test results).

But probabilities need not always be assigned exact values. Agreeing on an exact value can in fact be extremely difficult, especially for prior probabilities. This is a challenge for legal probabilism (see later in §7.3). One way to circumvent this challenge is to avoid setting exact values and adopt plausible intervals. This method is based on sensitivity analysis, an assessment of how prior probabilities affect other probabilities.

Consider the paternity case State v. Boyd (331 N.W.2d 480, Minn. 1983). Expert witness Dr. Polesky testified that 1,121 unrelated men would have to be randomly selected from the general population before another man could be found with all the appropriate genes to have fathered the child in question. This formulation can be misleading since the expected number of matching DNA profiles need not be the same as the number of matching profiles actually found in a population (the error of thinking otherwise is called the expected value fallacy). On a more careful formulation, the probability that a random person would be a match is $$\slashfrac{1}{1121}$$, or in symbols,

$\Pr(\match\pmid \textsf{not-father})=\frac{1}{1121}.$

As explained earlier in §1.2.3, this probability cannot be easily translated into the probability that a person whose genetic profile matches is the father, or in symbols, $$\Pr(\textsf{father}\pmid \match)$$. The latter should be calculated using Bayes' theorem:

$\Pr(\textsf{father}\pmid \match) = \frac{\Pr(\match\pmid \textsf{father}) \times \Pr(\textsf{father})}{\Pr(\match \pmid \textsf{father}) \times \Pr(\textsf{father}) + \Pr(\match \pmid \textsf{not-father}) \times \Pr(\textsf{not-father})}$

In actual practice, the formula to establish paternity is more complicated (Kaiser and Seber 1983), but for the sake of illustration, we abstract away from this complexity. Suppose the prior probability that the defendant would be the father is as low as .01, or in symbols, $$\Pr(\textsf{father})=.01$$. Plugging this prior probability into Bayes' theorem, along with $$\Pr(\match\pmid \textsf{father})=1$$ and $$\Pr(\match\pmid \textsf{not-father})=\slashfrac{1}{1121}$$, gives a probability of paternity $$\Pr(\textsf{father}\pmid \match)$$ that is equal to about .92.

But why take the prior probability $$\Pr(\textsf{father})$$ to be .01 and not something else? The idea in sensitivity analysis is to look at a range of plausible probability assignments and investigate the impact of such choices. In legal applications, the key question is whether assignments that are most favorable to the defendant would still return a strong evidentiary case against the defendant. In the case at hand, the expert testimony is so strong that for a wide spectrum of the prior probability $$\Pr(\textsf{father})$$, the posterior probability $$\Pr(\textsf{father} \pmid \match)$$ remains high, as the plot in Figure 1 shows.

Figure 1

The posterior surpasses .9 once the prior is above approximately .008. In a paternity case, given the mother's testimony and other evidence, it is clear that the probability of fatherhood before the expert testimony is taken into account should be higher than that, whatever its exact value. Without settling on an exact value, the interval of values above .008 guarantees a posterior probability of paternity of at least .9. This is amply sufficient to meet the preponderance standard that governs civil cases such as paternity disputes (on standards of proof, see later in Section 5).
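The sensitivity analysis behind Figure 1 is straightforward to reproduce. The sketch below (Python, with the probabilities assumed above) sweeps the prior and finds, by grid search, the smallest prior whose posterior reaches .9:

```python
# A sketch of the sensitivity analysis behind Figure 1: sweep the prior
# Pr(father) and compute Pr(father | match) by Bayes' theorem.

PR_MATCH_IF_FATHER = 1.0
PR_MATCH_IF_NOT_FATHER = 1 / 1121

def posterior(prior):
    num = PR_MATCH_IF_FATHER * prior
    return num / (num + PR_MATCH_IF_NOT_FATHER * (1 - prior))

# Smallest prior (on a fine grid) whose posterior reaches .9.
priors = [i / 100_000 for i in range(1, 100_000)]
threshold = next(p for p in priors if posterior(p) >= 0.9)

print(round(threshold, 4))        # ~0.008, as read off the plot
print(round(posterior(0.01), 2))  # 0.92, the figure reported above
```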
### 1.5 Source, activity and offense level hypotheses

Difficulties in assessing probabilities go hand in hand with the choice of the hypotheses of interest. To some approximation, hypotheses can be divided into three levels: offense, activity, and source level hypotheses. At the offense level, the issue is whether the defendant is guilty, as in the statement "Smith is guilty of manslaughter". At the activity level, hypotheses describe what happened and what those involved did or did not do. An example of an activity level hypothesis is "Smith stabbed the victim". Finally, source level hypotheses describe the source of the traces, such as "The victim left the stains at the scene", without specifying how the traces got there.

Overlooking differences in hypothesis level can lead to serious confusions. Consider a case in which a DNA match is the primary incriminating evidence. DNA evidence is one of the most widely used forms of quantitative evidence currently available, but it does not work much differently from blood evidence or other forms of trace evidence. In testifying about a DNA match, trial experts will often assess the probability that a random person, unrelated to the crime, would coincidentally match the crime stain profile (Foreman et al. 2003). This is called the Genotype Probability or Random Match Probability, which can be extremely low, in the order of 1 in 100 million or even lower (Donnelly 1995; Kaye and Sensabaugh 2011; Wasserman 2002).

It is tempting to equate the random match probability with $$\Pr(\match \pmid \textsf{innocence})$$ and, together with the prior $$\Pr(\textsf{innocence})$$, use Bayes' theorem to calculate the posterior probability of innocence $$\Pr(\textsf{innocence} \pmid \match)$$. This would be a mistake. Applying Bayes' theorem is of course recommended and helps to avoid the prosecutor's fallacy, the conflation of $$\Pr(\textsf{innocence} \pmid \match)$$ and $$\Pr(\match \pmid \textsf{innocence})$$. The problem here lies elsewhere. Equating the random match probability with $$\Pr(\match \pmid \textsf{innocence})$$ overlooks the difference between offense, activity and source level hypotheses. A DNA match cannot speak directly to the question of guilt or innocence. Even if the suspect is the source of the genetic material at the scene, the match does not establish that the defendant did visit the scene and came into contact with the victim, and even if they did, it does not follow that they committed the crime they were accused of.

Few forms of evidence can speak directly to offense level hypotheses. Circumstantial evidence that is more amenable to a probabilistic quantification, such as DNA matches and other trace evidence, does not. Eyewitness testimony may speak more directly to offense level hypotheses, but it is also less easily amenable to a probabilistic quantification (see however Friedman 1987, recent results by Wixted and Wells 2017, and a survey of related issues by Urbaniak et al. 2020). This makes it difficult to assign probabilities to offense level hypotheses. Moving beyond source level hypotheses requires a close collaboration between scientists, investigators and attorneys (see Cook et al. 1998 for a discussion).

## 2. The Strength of Evidence

The posterior probability of a hypothesis given the evidence should not be confused with the strength (or probative value, weight) of the evidence in favor of the hypothesis. The strength of an item of evidence reflects its impact on the probability of the hypothesis. Suppose the prior probability of $$H$$ is extremely low, say $$\Pr(H)=.001$$, but taking evidence $$E$$ into account brings this probability up to 35%, that is, $$\Pr(H \pmid E)=.35$$. This is a dramatic upward shift. Even though the posterior probability of $$H$$ given $$E$$ is not high, $$E$$ strongly favors $$H$$. This section examines how probability theory helps to assess the strength of evidence.

### 2.1 Bayes factor v. likelihood ratio

The notion of strength of evidence—as distinct from prior and posterior probability—can be captured formally in a number of different ways (for a comprehensive discussion, see the entry on confirmation theory). One measure of the strength of evidence is the Bayes factor

$\frac{\Pr(E \pmid H)}{\Pr(E)}$

(already discussed in §1.2.3). This is an intuitively plausible measure of evidential strength.
Note that by Bayes' theorem

$\Pr(H \pmid E) = \textit{BayesFactor}(H, E) \times \Pr(H),$

and thus the Bayes factor is greater than one if and only if the posterior probability $$\Pr(H \pmid E)$$ is higher than the prior probability $$\Pr(H)$$. The greater the Bayes factor (for values above one), the greater the upward shift from prior to posterior probability, the more strongly $$E$$ positively supports $$H$$. Conversely, the smaller the Bayes factor (for values below one), the greater the downward shift from prior to posterior probability, the more strongly $$E$$ negatively supports $$H$$. If $$\Pr(H)=\Pr(H\pmid E)$$ the evidence has no impact, upwards or downwards, on the prior probability of $$H$$.

The Bayes factor is an absolute measure of evidence $$E$$'s support toward $$H$$ since it compares the probability of $$E$$ under hypothesis $$H$$ against the probability of $$E$$ in general. The denominator is calculated following the law of total probability:

$\Pr(E)= \Pr(E \pmid H) \Pr(H)+\Pr(E \pmid \neg H) \Pr(\neg H).$

The catch-all alternative hypothesis $$\neg H$$ can be replaced by a more fine-grained set of alternatives, say $$H_1, H_2, \dots H_k$$, provided $$H$$ and its alternatives cover the entire space of possibilities. The law of total probability would then read:

$\Pr(E) = \Pr(E\pmid H)\Pr(H) +\sum_{i=1}^k \Pr(E\pmid H_i)\Pr(H_i).$

Instead of the Bayes factor, the strength of evidence can be assessed by means of the likelihood ratio, a comparative measure of whether evidence $$E$$ supports a hypothesis $$H$$ more than a competing hypothesis $$H'$$, in symbols,

$\frac{\Pr(E \pmid H)}{\Pr(E \pmid H')}.$

An expert, for instance, may testify that the blood-staining on the jacket of the defendant is ten times more likely to be seen if the wearer of the jacket hit the victim (hypothesis $$H$$) rather than if he did not (hypothesis $$H'$$) (Aitken, Roberts, and Jackson 2010: 38). If the evidence supports $$H$$ more than $$H'$$, the ratio will be above one, and if the evidence supports $$H'$$ more than $$H$$, the ratio will be below one. The greater the likelihood ratio (for values above one), the stronger the evidence in favor of $$H$$ as contrasted with $$H'$$. The smaller the likelihood ratio (for values below one), the stronger the evidence in favor of the competing hypothesis $$H'$$ as contrasted with $$H$$.

The relationship between the likelihood ratio

$\frac{\Pr(E \pmid H)}{\Pr(E \pmid H')}$

and the posterior odds

$\frac{\Pr(H \pmid E)}{\Pr(H' \pmid E)}$

is apparent in the odds version of Bayes' theorem (see earlier in §1.3). If the likelihood ratio is greater (lower) than one, the posterior odds will be greater (lower) than the prior odds of $$H$$. The likelihood ratio, then, is a measure of the upward or downward impact of the evidence on the odds of two hypotheses $$H$$ and $$H'$$.

A competitor of the likelihood ratio as a measure of evidentiary strength is an even simpler notion, the probability $$\Pr(E \pmid H)$$. It is tempting to think that, whenever $$\Pr(E \pmid H)$$ is low, $$E$$ should be strong evidence against $$H$$. Consider an example by Triggs and Buckleton (2014). In a child abuse case, the prosecutor offers evidence that a couple's child rocks (a movement pattern) and that only 3% of non-abused children rock, $$\Pr(\textsf{child rocks} \pmid \textsf{no abuse})=.03$$. If it is unlikely that a non-abused child would rock, the fact that this child rocks might seem strong evidence of abuse. But this reading of the 3% figure is mistaken.
It could well be that 3% of abused children rock, $$\Pr(\textsf{child rocks} \pmid \textsf{abuse})=.03$$. If rocking is unlikely under either hypothesis—which means the likelihood ratio

$\frac{\Pr(\textsf{child rocks} \pmid \textsf{abuse})}{\Pr(\textsf{child rocks} \pmid \textsf{no abuse})}$

equals one—rocking cannot count as evidence of abuse. In order to avoid exaggerations of the evidence, it is best to assess it by means of the likelihood ratio rather than the probability of the evidence given a hypothesis (ENFSI 2015; Royall 1997).

### 2.2 Cold-hit DNA matches

To better understand likelihood ratios, it is instructive to look at DNA evidence as a case study, focusing in particular on cold-hit matches. DNA evidence may be used to corroborate other evidence in a case or as the primary incriminating evidence. Suppose different investigative leads point to an individual, Mark Smith, as the perpetrator. Suppose the investigators also find several traces at the crime scene left by the perpetrator, and laboratory analyses show that the genetic profile associated with the traces matches Smith. In this scenario, the DNA match corroborates the other evidence against Smith. In contrast, suppose no investigative lead allowed the police to identify a suspect. The only evidence consists of the traces found at the crime scene. The police run the genetic profile associated with the traces through a database of profiles and find a match, a so-called cold hit. Since in cold-hit cases there is no other evidence, cold-hit matches are the primary item of evidence against the defendant. Some scholars believe that this circumstance weakens the evidentiary value of the match. Others disagree. We now examine some of the main arguments in this debate.

#### 2.2.1 Random match v. database match

Suppose an expert testifies that the crime traces genetically match the defendant and that the random match probability is extremely low, say 1 in 100 million. The random match probability—often interpreted as the probability that someone who is not the source would coincidentally match, $$\Pr(\match \pmid \neg \source)$$—is a common measure of the strength of a DNA match. The lower this probability, the more strongly incriminating the match. Strictly speaking, a match is strong evidence that the defendant is the source only if the likelihood ratio

$\frac{\Pr(\textsf{DNA match} \pmid \source)}{\Pr(\textsf{DNA match} \pmid \neg \source)}$

is significantly greater than one. In practice, however, when the random match probability is low—that is, $$\Pr(\match \pmid \neg \source)$$ is low—the likelihood ratio should be significantly above one because the probability that the individual who is the source would match, $$\Pr(\match \pmid \source)$$, should be high so long as the test has a low false negative rate. For practical purposes, then, a low random match probability does count as strong incriminating evidence.

When it comes to cold-hit matches, however, further complications arise. The Puckett case can serve as an illustration. In 2008, John Puckett was identified through a database search of 338,000 profiles. He was the only individual in the database who matched the traces collected from Diana Sylvester, a victim of rape in 1972. The expert witness testified that Puckett's genetic profile should occur randomly among Caucasian men with a frequency of 1 in 1.1 million. This would seem strong evidence against Puckett's innocence.
But the DNA expert for the defense, Bicka Barlow, pointed out that besides the cold-hit match the evidence against Puckett was slim. Barlow argued that the correct assessment of the cold-hit match required multiplying 1/1.1 million by the size of the database. Call the result of this multiplication the database match probability. Multiplying 1/1.1 million by 338,000 yields a database match probability of roughly 1/3, an unimpressive figure. If someone in the database could match with a probability of 1/3, the cold-hit match should not count as strong evidence against Puckett. This was Barlow's argument.

Barlow followed a 1996 report by the National Research Council, often referred to as NRC II (National Research Council 1996). The report recommended that in cold-hit cases the random match probability should be multiplied by the size of the database. This correction was meant to guard against the heightened risk of mistaken matches for the innocent people in the database. NRC II used an analogy. If you toss several different coins at once and all show heads on the first attempt, this outcome seems strong evidence that the coins are biased. If, however, you repeat this experiment sufficiently many times, it is almost certain that at some point all coins will land heads. This outcome should not count as evidence that the coins are biased. According to NRC II, repeating the coin toss experiment multiple times is analogous to trying to find a match by searching through a database of profiles. As the size of the database increases, it is more likely that someone in the database who had nothing to do with the crime would match.

The aptness of the analogy has been challenged (Donnelly and Friedman 1999). Searching a larger database no doubt increases the probability of finding a match at some point. But the relevant proposition here is not "At least one of the profiles in the database would randomly match the crime sample". Rather, the relevant proposition is "The profile of the defendant on trial would randomly match the crime sample". The probability of finding a match between the defendant and the crime sample does not increase because other people in the database are tested. In fact, suppose everyone in the world is recorded in the database. A unique cold-hit match would be extremely strong evidence of guilt since everybody would be excluded as a suspect except one matching individual. Yet if the random match probability were multiplied by the size of the database, the probative value of the match would come out quite low. This is counterintuitive.

Another analogy is sometimes used to argue that the evidentiary value of a cold-hit match should be weakened. The analogy is between searching for a match in a database and multiple hypothesis testing, an objectionable research practice. In classical hypothesis testing, if the probability of a type I error in a single test of a hypothesis is .05, the probability of at least one type I error will increase as the same hypothesis is tested multiple times. The database match probability—the argument goes—would correct for the increased risk of type I error. However, as Balding (2002, 2005) points out, multiple testing consists in testing the same hypothesis multiple times against new evidence. In cold-hit cases, multiple hypotheses—each concerning a different individual in the database—are tested only once and then excluded if a negative match occurs. From this perspective, the hypothesis that the defendant is the source is one of the many hypotheses subject to testing.
The cold-hit match supports that hypothesis and rules out the others.

#### 2.2.2 The likelihood ratio of cold-hit matches

A more principled way to assess cold-hit matches is based on the likelihood ratio. The proposal draws from the literature on the so-called island problem, studied by Eggleston (1978), Dawid (1994), and Dawid and Mortera (1996). Let the prosecutor's hypothesis $$H_p$$ be "The suspect is the source of the crime traces" and the defense hypothesis $$H_d$$ be "The suspect is not the source of the crime traces". Let $$M$$ be the DNA match between the crime stain and the suspect (included in the database) and $$D$$ the information that no one among the profiles in the database, except for the suspect, matches the crime stain. The likelihood ratio associated with $$M$$ and $$D$$ should be (Balding and Donnelly 1996; Taroni et al. 2014):

$V = \frac{\Pr(M,D\pmid H_p)}{\Pr(M,D\pmid H_d)}.$

Since $$\Pr(A\wedge B)=\Pr(A\pmid B)\times \Pr(B)$$, for any statements $$A$$ and $$B$$, this ratio can be written as

$V = \frac{\Pr(M\pmid H_p,D)}{\Pr(M\pmid H_d,D)} \times \frac{\Pr(D\pmid H_p)}{\Pr(D\pmid H_d)}.$

The first ratio

$\frac{\Pr(M\pmid H_p,D)}{\Pr(M\pmid H_d,D)}$

is roughly $$\slashfrac{1}{\gamma}$$, where $$\gamma$$ is the random match probability. The second ratio

$\frac{\Pr(D\pmid H_p)}{\Pr(D\pmid H_d)}$

is the database search ratio, defined as follows (for details, see Balding and Donnelly 1996; Taroni et al. 2014):

$\frac{\Pr(D\pmid H_p)}{\Pr(D\pmid H_d)} = \frac{1}{1-\varphi},$

where $$\Pr(S \pmid H_d)=\varphi$$ and $$S$$ stands for the proposition that someone in the database is the source of the crime traces. Donnelly and Friedman (1999) derived a similar formula for this ratio. As the database gets larger, $$\varphi$$ increases and so does the database search ratio. This ratio equals one only if no one in the database could be the source. Since the likelihood ratio $$V$$ of the cold-hit match results from multiplying the likelihood ratio of the DNA match and the database search ratio, $$V$$ will always be greater than the mere likelihood ratio of the match (except for the unrealistic case in which $$\varphi=0$$). Thus, a cold-hit DNA match should count as stronger evidence than a DNA match of a previously identified suspect. Dawid and Mortera (1996) study different database search strategies and consider the possibility that information about the match is itself uncertain, but the general point remains. Under reasonable assumptions, ignoring the database search would give a conservative assessment of the evidentiary strength of the cold-hit match.

There is intuitive resistance to basing a conviction on a cold-hit match, but this resistance is less strong in the case of an ordinary match (more on this later in Section 6). This preference for convictions based on an ordinary DNA match seems in tension with the claim that a cold-hit match is stronger evidence of guilt than an ordinary match. There is a way to reconcile both sides, however. The key is to keep in mind that the evidentiary strength—measured by the likelihood ratio—should not be confused with the posterior probability of guilt given the evidence. If the cold-hit match is the only evidence of guilt, the posterior probability of guilt may well be lower compared to cases in which other evidence, such as investigative leads, supplements the DNA match. This lower posterior probability would justify the intuitive resistance towards convictions in cold-hit cases despite the stronger probative value of the cold-hit match.
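The comparison is easy to make concrete. The sketch below (Python) computes $$V$$ for illustrative values of $$\gamma$$ and $$\varphi$$; both numbers are assumptions made for the sake of the example, not figures from any actual case:

```python
# A sketch comparing the likelihood ratio of an ordinary DNA match
# (roughly 1/gamma) with the likelihood ratio V of a cold-hit match,
# V = (1/gamma) * 1/(1 - phi). The values are illustrative assumptions.

gamma = 1 / 1_100_000  # random match probability
phi = 0.3              # Pr(someone in the database is the source | H_d)

lr_match = 1 / gamma                     # ordinary match
v_cold_hit = lr_match * (1 / (1 - phi))  # cold-hit match

print(f"{lr_match:,.0f}")     # 1,100,000
print(f"{v_cold_hit:,.0f}")   # 1,571,429: the cold hit is stronger evidence
```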
### 2.3 Choosing competing hypotheses

The likelihood ratio is helpful for assessing the strength of the evidence, as the preceding discussion about cold-hit matches shows. One major difficulty, however, is the choice of the hypotheses $$H$$ and $$H'$$ that should be compared. The hypotheses should compete with one another—say, in a criminal trial, $$H$$ is the hypothesis put forward by the prosecution and $$H'$$ is the hypothesis put forward by the defense. But this constraint leaves open the possibility for manipulations and misinterpretations of the evidence. Let us examine some of the main controversies in the literature on this topic.

#### 2.3.1 Ad hoc hypotheses and Barry George

Consider again a stylized DNA evidence case. Suppose the prosecutor puts forward the hypothesis that the suspect left the traces found at the crime scene. This hypothesis is well supported by laboratory analyses showing that the defendant genetically matches the traces. The defense, however, responds by putting forward the following ad hoc hypothesis: "The crime stain was left by some unknown person who happened to have the same genotype as the suspect". Since the probability of the DNA match given either hypothesis is 1, the likelihood ratio equals 1 (Evett, Jackson, and Lambert 2000). The problem generalizes. For any item of evidence and any given prosecutor's hypothesis $$H$$, there is an ad hoc competing hypothesis $$H^*$$ such that

$\frac{\Pr(E \pmid H)}{\Pr(E \pmid H^*)}=1.$

Hypothesis $$H^*$$ is a just-so hypothesis, one that is selected only because it explains the evidence just as well as hypothesis $$H$$ does (Mayo 2018: 30–55). If no further constraints are placed on the choice of the competing hypotheses—it would seem—no evidence could ever incriminate a defendant. Judges and jurors will often recognize ad hoc hypotheses for what they are—artificial theories that should not be taken seriously. Perhaps the common sense of the participants in a trial will suffice to constrain the choice of hypotheses in the right way. But real cases are complex, and it is not always obvious whether a choice of competing hypotheses, which are not obviously ad hoc, is legitimate or not.

A notable example is R. v. George (2007 EWCA Crim 2722). Barry George was accused of murdering British journalist and TV celebrity Jill Dando. A single particle of firearm residue was found one year later in George's coat pocket and it matched the residue from the crime scene. This was the key incriminating evidence against him. George was convicted, and his first appeal was unsuccessful. After the first appeal, Ian Evett from the Forensic Science Service worried that the evidence had not been properly assessed at trial. The jurors were presented with the conditional probability of finding the firearm residue in George's coat given the defense hypothesis that George did not fire the gun. This probability was estimated to be quite low. But the jurors were not presented with the conditional probability of finding the same evidence given the prosecutor's hypothesis that George did fire the gun that shot Dando. An expert witness, Mr. Keeley, was asked to provide both conditional probabilities and estimated each to be $$\slashfrac{1}{100}$$, which indicated that the firearm residue had no probative value. George appealed again in 2007 and, relying on Keeley's estimates, won the appeal. A study of the trial transcript shows that Keeley's choice of hypotheses lacked coherence and the likelihood ratio based on them was therefore meaningless.
On one occasion, Keeley compared the hypothesis that the particle found in George's pocket came from a gun fired by George himself with the alternative hypothesis that the particle came from another source. On another occasion, Keeley took the prosecutor's hypothesis to be "The particle found in George's pocket came from the gun that killed Dando." But the conditional probability of the evidence given this hypothesis should not be low. It should actually be one. The most charitable reading of the trial transcript suggests that the expert had in mind the hypotheses "George was the man who shot Dando" and "The integrity of George's coat was corrupted". But Keeley gave no justification for why these hypotheses should be compared in the likelihood ratio (see Fenton et al. 2014 for details).

A related complication is that competing hypotheses can concern any factual dispute, from minute details such as whether the cloth used to suffocate the victim was red or blue, to ultimate questions such as whether the defendant stabbed the victim. The likelihood ratio varies across hypotheses formulated at different levels of granularity: offense, activity and source level hypotheses (on this distinction, see earlier in §1.5). It is even possible that, at the source level, the likelihood ratio favors one side, say the prosecution, but at the offense level, the likelihood ratio favors the other side, say the defense, even though the hypotheses at the two levels are quite similar (Fenton et al. 2014).

#### 2.3.2 Exclusive and exhaustive?

The confusion in the Barry George case is attributable to the absence of clear rules for choosing the hypotheses in the likelihood ratio. One such rule could be: pick competing hypotheses that are exclusive (they cannot both be true) and exhaustive (they cannot both be false). In this way, the parties would not be able to pick ad hoc hypotheses and skew the assessment of the evidence in their own favor.

There are other good reasons why the hypotheses in the likelihood ratio should be exclusive and exhaustive. For if they are not, the likelihood ratio can deliver counterintuitive results. To see why, first consider hypotheses that are not mutually exclusive. Let $$H_p$$ stand for "The defendant is guilty" and $$H_d$$ for "The defendant was not at the crime scene". Let $$E$$ stand for "Ten minutes before the crime took place the defendant—seen at a different location—was overheard on the phone saying 'go ahead and kill him'". The evidence positively supports each hypothesis, yet it is conceivable that the likelihood ratio should equal one in this context.

Further, consider two competing hypotheses that are not exhaustive. Suppose Fred and Bill attempted to rob a man. The victim resisted, was struck on the head and died. Say $$H_p$$ stands for "Fred struck the fatal blow" and $$H_d$$ for "Bill struck the fatal blow". The hypotheses are not exhaustive because a third hypothesis is "The man did not die from the blow". Suppose $$E$$ is the information that the victim had a heart attack six months earlier. The likelihood ratio

$\frac{\Pr(E \pmid H_p)}{\Pr(E \pmid H_d)}$

equals one since

$\Pr(E\pmid H_p)=\Pr(E\pmid H_d).$

Yet $$E$$ reduces the probability of both $$H_p$$ and $$H_d$$. So, in this case, the evidence negatively supports each hypothesis, contrary to what the likelihood ratio suggests. But relying on exclusive and exhaustive hypotheses is not without complications either.
Consider an expert who decides to formulate the defense hypothesis by negating the prosecution hypothesis, say, "the defendant did not hit the victim in the head". This choice of defense hypothesis can be unhelpful in assessing the evidence. What is the probability that the suspect would carry such and such a blood stain if he did not hit the victim in the head? This depends on whether he was present at the scene, what he was doing at the time and many other circumstances. As Evett, Jackson, and Lambert (2000) point out, the choice of a particular hypothesis to be used in the evaluation of the strength of the evidence will depend on contextual factors. More often than not, the hypotheses chosen will not be mutually exclusive.

Comparing exclusive and exhaustive hypotheses can also be unhelpful for jurors or judges making a decision at trial. In a paternity case, for example, the expert should not compare the hypotheses "The accused is the father of the child" and its negation, but rather, "The accused is the father of the child" and "The father of the child is a man unrelated to the putative father" (Biedermann et al. 2014). Even though the relatives of the accused are potential fathers, considering such a far-fetched possibility would make the assessment of the evidence more difficult than needed (Evett et al. 2000).

Exclusive and exhaustive hypotheses guard against arbitrary comparisons and ensure a more principled assessment of the evidence. The drawback is that such hypotheses cover the entire space of possibilities, and sifting through this space is cognitively unfeasible (Allen 2017). In this respect, comparing more circumscribed hypotheses is preferable. The danger of doing so, however, is that likelihood ratios heavily depend on the hypotheses that are compared. The more latitude in the choice of the hypotheses, the more variable the likelihood ratio as a measure of evidentiary value.

### 2.4 The two-stain problem

A case study that further illustrates the limitations of the likelihood ratio as a measure of evidentiary strength is the two-stain problem, originally formulated by Evett (1987). In Evett's original formulation, two stains from two different sources are left at the crime scene, and the suspect's blood matches one of them. Let the first hypothesis be that the suspect was one of the two men who committed the crime and the second hypothesis the negation of the first. Evett (1987) shows (see his paper for details) that the likelihood ratio of the match relative to these two hypotheses is $$\slashfrac{1}{2q_1}$$ where $$q_1$$ is the estimated frequency of the characteristics of the first stain. Surprisingly, the likelihood ratio does not depend on the frequency associated with the second stain.

Consider now a more complex two-stain scenario. Suppose a crime was committed by two people, who left two stains at the crime scene: one on a pillow and another on a sheet. John Smith, who was arrested for a different reason, genetically matches the DNA on the pillow, but not the one on the sheet. Meester and Sjerps (2004) argue that there are three plausible pairs of hypotheses associated with numerically different likelihood ratios (see their paper for details). The three options are listed in Table 2, where $$R$$ is the random match probability of Smith's genetic profile and $$\delta$$ the prior probability that Smith was one of the crime scene donors.

Table 2

| $$H_p$$ | $$H_d$$ | LR |
| --- | --- | --- |
| Smith was one of the crime scene donors. | Smith was not one of the crime scene donors. | $$\frac{R}{2}$$ |
| Smith was the pillow stain donor. | Smith was not one of the crime scene donors. | $$R$$ |
| Smith was the pillow stain donor. | Smith was not the pillow stain donor. | $$\frac{R(2-\delta)}{2(1-\delta)}$$ |
Even though the likelihood ratios are numerically different, the posterior odds they give rise to are the same. Note that the prior odds of the three $$H_p$$'s in the table should be written in terms of $$\delta$$. Following Meester and Sjerps (2004), the prior odds of the first hypothesis in the table are $$\slashfrac{\delta}{(1-\delta)}$$. The prior odds of the second hypothesis are $$\slashfrac{(\delta/2)}{(1-\delta)}$$. The prior odds of the third hypothesis are $$\slashfrac{(\delta/2)}{(1-(\delta/2))}$$. In each case, the posterior odds—the result of multiplying the prior odds by the likelihood ratio—are the same: $$R\times \slashfrac{\delta}{(2(1-\delta))}$$. So despite differences in the likelihood ratio, the posterior odds of the different hypotheses are the same so long as the priors are appropriately related.

Meester and Sjerps (2004) recommend that each likelihood ratio should be accompanied by a tabular account of how a choice of prior odds (or prior probabilities) will impact the posterior odds, for a sensible range of priors (for a general discussion of this strategy, called "sensitivity analysis", see the earlier discussion in §1.4). In this way, the impact of the likelihood ratio is made clear, no matter the hypotheses chosen. This strategy concedes that likelihood ratios are insufficiently informative, and that they should be combined with other information, such as a range of priors, to allow for an adequate assessment of the evidence.

## 3. Bayesian Networks for Legal Applications

So far we examined how probability theory can help to assess single items of evidence such as a DNA match. But things are often more complicated. In legal cases, different lines of evidence may converge, such as two witnesses who testify that the defendant was seen at the crime scene, or they may diverge, such as a witness who asserts the defendant was seen at the crime scene while DNA testing shows no genetic match between the defendant and the scene. Another source of complexity is that the hypotheses put forward by the parties in a trial are often complex structures of statements. How can different statements, and their supporting evidence, be combined and the overall prosecutor's case (or the defense's case) be evaluated?

The probability of a hypothesis given multiple pieces of evidence can, in principle, be assessed by sequential applications of Bayes' theorem. Consider, for example, a case in which the defendant faces two pieces of incriminating evidence: a DNA match and hair evidence found at the scene matching the defendant's hair color. Assume—as is often done—that someone's hair color is independent of someone's genetic profile. Say the likelihood ratio of the hair evidence is 40 and the likelihood ratio of the DNA match is 200, that is, the hair evidence is 40 times, and the DNA match 200 times, more likely given the guilt hypothesis than given the innocence hypothesis. If the prior odds of guilt to innocence are $$\slashfrac{1}{1000}$$, the posterior odds would be $$\slashfrac{1}{1000}\times 40\times 200=8$$.
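The sketch below (Python) scripts this sequential update; the prior odds and likelihood ratios are the illustrative values just given:

```python
# Sequential updating: with independent items of evidence, the posterior
# odds are the prior odds multiplied by each likelihood ratio in turn.
from functools import reduce

prior_odds = 1 / 1000
likelihood_ratios = [40, 200]  # hair evidence, DNA match (assumed independent)

posterior_odds = reduce(lambda odds, lr: odds * lr,
                        likelihood_ratios, prior_odds)
print(round(posterior_odds, 3))                         # 8.0
print(round(posterior_odds / (1 + posterior_odds), 2))  # 0.89 as a probability
```

The conversion from odds to a probability in the last line is legitimate here because guilt and innocence, unlike the hypotheses in the Clark example of §1.3, are exclusive and exhaustive.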
These calculations are straightforward, but in more realistic cases, there will be complications. The parties at trial will often put forward several piecemeal claims that need to be combined together to form a theory of what happened. For example, the prosecutor may present eyewitness testimony to argue that the defendant ran away from the crime scene, along with documentary evidence as proof of a motive. The different piecemeal claims, each supported by distinct pieces of evidence, must be combined to form structured hypotheses about what happened. Since different claims and different pieces of evidence may depend on one another, direct calculations would soon become unmanageable. Fortunately, a tool exists to make the task easier: Bayesian networks. This section identifies guidelines for deploying Bayesian networks in the presentation, aggregation and evaluation of complex bodies of evidence and hypotheses.

### 3.1 Bayesian networks to the rescue

The idea that Bayesian networks can be used for probabilistic reasoning in legal fact-finding started gaining traction in the late eighties (Friedman 1986) and early nineties (Edwards 1991). Two recent books on the topic with an emphasis on legal applications are Fenton and Neil 2013 [2018] and Taroni et al. 2014. A Bayesian network comprises two components: first, a directed acyclic graph of relations of dependence (represented by arrows) between variables (represented by nodes); second, conditional probability tables.

Consider the graphical component first. The graph is acyclic because the arrows connecting the nodes do not form loops. As an illustration, let $$H$$ be the claim that the suspect committed the murder, BT the presence of a blood type B match with a crime scene stain, and $$W$$ the fact that an eyewitness observed the suspect near the scene around the time of the crime. The graphical component of the Bayesian network would look like this:

Figure 2

The ancestors of a node $$X$$ are all those nodes from which we can reach $$X$$ by following the arrows going forwards. The parents of a node $$X$$ are those for which we can do this in one step. The descendants of $$X$$ are all those nodes which can be reached from $$X$$ by following the arrows going forward. The children are those for which we can do this in one step. In the example, $$H$$ is the parent (and ancestor) of both $$W$$ and BT, which are its children (and descendants). There are no non-parent ancestors or non-children descendants.

The variables, which are represented by nodes and are connected by arrows, stand in relations of probabilistic dependence. To describe these relations, the graphical model is accompanied by conditional probability tables. For parentless nodes such as $$H$$, the tables specify the prior probabilities of all their possible states. Assuming $$H$$ stands for a binary random variable, with two possible states, the prior probabilities could be:

Table 3

| | Prior |
| --- | --- |
| $$H=\text{murderer}$$ | .01 |
| $$H=\text{not.murderer}$$ | .99 |

The .01 figure for $$H=\text{murderer}$$ rests on the assumption that, absent any incriminating evidence, the defendant is unlikely to be guilty. For children nodes, the tables specify their conditional probability given combinations of their parents' states. If the variables are binary, an assignment of values for them could be:

Table 4

| | $$H=\text{murderer}$$ | $$H=\text{not.murderer}$$ |
| --- | --- | --- |
| $$W=\text{seen}$$ | .7 | .4 |
| $$W=\text{not.seen}$$ | .3 | .6 |

Table 5

| | $$H=\text{murderer}$$ | $$H=\text{not.murderer}$$ |
| --- | --- | --- |
| $$\BT=\text{match}$$ | 1 | .063 |
| $$\BT=\text{no.match}$$ | 0 | .937 |

According to the tables above, even if the defendant is not the culprit, the eyewitness testimony would still incriminate him with probability .4, while the blood evidence would do so with probability only .063.
The blood type frequency estimate is realistic (Lucy 2013: 141), and so are the conditional probabilities for the eyewitness identification. As expected, eyewitness testimony is assumed to be less trustworthy than blood match evidence (but for complications about assessing eyewitness testimony, see Wixted and Wells 2017; Urbaniak et al. 2020). The three probability tables above are all that is needed to define the probability distribution. The tables do not specify probabilistic dependencies between nodes that are not in a relation of child/parent, such as BT and $$W$$. Since there is no arrow between them, nodes BT and $$W$$ are assumed to be independent conditional on $$H$$, that is, $$\Pr(W \pmid H)=\Pr(W \pmid H \wedge \BT)$$. This fact represents, as part of the structure of the network, the independence between eyewitness testimony and blood evidence. A generalization of this fact is the so-called Markov condition (see the textbook by Neapolitan [2004] and the supplement on Bayesian networks of the entry on artificial intelligence).

While the Bayesian network above—comprising a directed acyclic graph along with probability tables—is simple, a correct intuitive assessment of the probability of the hypothesis given the evidence is already challenging. Try to guess the probability that the defendant committed the murder ($$H=\text{murderer}$$) given the following states of the evidence:

- The suspect's blood type matches the crime stain but information about the witness is unavailable.
- The suspect's blood type matches the crime stain but the witness says they did not see the suspect near the crime scene.
- The suspect's blood type matches the crime stain and the witness says they saw the suspect near the crime scene.

Already at this level of complexity, calculations by hand become cumbersome. In contrast, software for Bayesian networks (see, for example, the $$\textsf{R}$$ package $$\textsf{bnlearn}$$ developed by Marco Scutari and described in Scutari and Denis 2015) will easily give the following results:

Table 6

| | $$H=\text{murderer}$$ |
| --- | --- |
| $$\BT=\text{match}, W=?$$ | .138 |
| $$\BT=\text{match}, W=\text{not.seen}$$ | .074 |
| $$\BT=\text{match}, W=\text{seen}$$ | .219 |

Perhaps surprisingly, the posterior probability of $$H=\text{murderer}$$ is about .22 even when both pieces of evidence are incriminating ($$\BT=\text{match}, W=\text{seen}$$).
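For a network this small, the results can also be checked without dedicated software. The sketch below (plain Python, not using $$\textsf{bnlearn}$$) computes the joint distribution defined by Tables 3–5 and conditions on each evidence state:

```python
# A sketch verifying Table 6 by direct computation from Tables 3-5.

prior = {"murderer": 0.01, "not.murderer": 0.99}
pr_W = {"murderer": {"seen": 0.7, "not.seen": 0.3},
        "not.murderer": {"seen": 0.4, "not.seen": 0.6}}
pr_BT = {"murderer": {"match": 1.0, "no.match": 0.0},
         "not.murderer": {"match": 0.063, "no.match": 0.937}}

def posterior_murderer(bt, w=None):
    """Pr(H = murderer | BT = bt [, W = w]); w=None means W is unobserved."""
    def weight(h):
        p = prior[h] * pr_BT[h][bt]
        return p if w is None else p * pr_W[h][w]
    num = weight("murderer")
    return num / (num + weight("not.murderer"))

print(round(posterior_murderer("match"), 3))              # 0.138
print(round(posterior_murderer("match", "not.seen"), 3))  # 0.074
print(round(posterior_murderer("match", "seen"), 3))      # 0.219
```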
The hypothesis node might take values from the range of 1–40, say the distance in meters from which the gun was shot, and the evidence node might be a continuous variable representing the density of gunshot residues (Taroni et al. 2014).

A more complex idiom, called the evidence accuracy idiom, consists of two arrows going into the evidence node (Bovens and Hartmann 2004; Fenton, Neil, and Lagnado 2013). One incoming arrow comes from the hypothesis node and the other from the accuracy node. This idiom can be used to model, say, an alcohol test:

Figure 4

The directions of the arrows indicate that the accuracy of the evidence (accuracy node) and the alcohol level (hypothesis node) influence the outcome of the test (evidence node). The graphical model represents different sources of uncertainty. The uncertainty associated with the sensitivity and specificity of the test—that is, the probability that the test reports an excessive alcohol level when the level is excessive (sensitivity) and the probability that the test reports a normal alcohol level when the level is normal (specificity)—is captured by the arrow going from the hypothesis node (Excess alcohol level) to the evidence node (Evidence for excess). Other sources of uncertainty include the possibility that the police officer lied about the test report or the possibility that the driver took medications which then affected the alcohol level. These possibilities can be taken into consideration by adding an accuracy node (or multiple accuracy nodes, if each factor is kept separate from the others).

When multiple items of evidence depend on one another—as may happen in many legal cases—this situation is modeled by the evidence dependency idiom. Following an example by Fenton and Neil (2013 [2018]), if one of two security cameras directed at the same location captured an image of someone who looks like the defendant but isn’t him, it is likely that the same person walked by the second camera, which also captured the same image. In such cases, presenting the second recording as independent from the first would lead to overestimating the strength of the evidence.

| Node | Proposition |
|---|---|
| H | Defendant present at the scene |
| C1 | Camera 1 captures image of a matching person |
| C2 | Camera 2 captures image of a matching person |
| D | What cameras capture is dependent |

Figure 5

The network structure is quite natural. The truth of the hypothesis, say, the defendant was present at the crime scene, influences whether the cameras capture an image of someone who looks like the defendant. However, if the two camera recordings are dependent on one another (for instance, they are directed at the same spot with a similar angle), the fact that the second camera captured the same image as the first does not make the hypothesis more likely once the first camera recording is known.

Finally, the scenario idiom can model complex hypotheses, consisting of a sequence of events organized in space and time (a scenario). A graphical model that uses the scenario idiom would consist of the following components: first, nodes for the states and events in the scenario, with each node linked to the supporting evidence; second, a separate scenario node that has states and events as its children; finally, a node corresponding to the ultimate hypothesis as a child of the scenario node. The graphical model would look like this (Vlek et al. 2014):

Figure 6 [An extended description of figure 6 is in the supplement.]

The scenario node unifies the different events and states.
Because of this unifying role, increasing the probability of one part of the scenario (say State/event 2) will also increase the probability of the other parts (State/event 1 and State/event 3). This captures the fact that the different components of a scenario form an interconnected sequence of events. A discussion of modeling crime scenarios by means of other graphical devices (called structural scenario spaces) mixed with probabilities can be found in the work of Shen et al. (2007); Bex (2011, 2015); and Verheij (2017). See also the survey by Di Bello and Verheij (2018). Dawid and Mortera (2018) give a treatment of scenarios in terms of Bayesian networks. Lacave and Díez (2002) show how Bayesian networks can be used to construct explanations.

### 3.3 Modeling an entire case

Kadane and Schum (2011) made one of the first attempts to model an entire criminal case, Sacco and Vanzetti from 1920, using probabilistic graphs. More recently, Fenton and Neil (2013 [2018]) constructed a Bayesian network for the Sally Clark case (discussed earlier in §1.3), reproduced below:

Figure 7 [An extended description of figure 7 is in the supplement.]

The arrows depict relationships of influence between variables. Whether Sally Clark’s sons, call them $$A$$ and $$B$$, died by SIDS or murder (A.cause and B.cause) influences whether signs of disease (A.disease and B.disease) and bruising (A.bruising and B.bruising) were present. Since son $$A$$ died first, whether $$A$$ was murdered or died by SIDS (A.cause) influences how son $$B$$ died (B.cause). How the sons died determines how many sons were murdered (No.murdered), and how many sons were murdered determines whether Sally Clark is guilty (Guilty).

According to the calculation by Fenton and Neil (2013 [2018]) (see their paper for details), the prior probability of Guilty = Yes should be .0789. After taking into account the incriminating evidence presented at trial, such as that there were signs of bruising but no signs of a preexisting disease affecting the children, the posterior probabilities are as follows:

Table 8

| Evidence (cumulative) | $$\Pr(\textrm{Clark guilty})$$ |
|---|---|
| A bruising | .2887 |
| A no signs of disease | .3093 |
| B bruising | .6913 |
| B no signs of disease | .7019 |

The incriminating evidence, combined, brings the probability of guilt from .0789 to .7019. This is a significant increase, but not quite enough for a conviction. If one wishes to perform sensitivity analysis—see earlier discussion in §1.4—by modifying some of the probabilities, this can be easily done. During the appeal trial, new evidence was discovered, in particular, evidence that son $$A$$ was affected by a disease. Once this evidence is taken into account, the probability of guilt drops to .00459 (and if signs of disease were also present on $$B$$, the guilt probability would drop even further to .0009). For a general discussion on how to elicit probabilities, see Renooij (2001) and Gaag et al. (1999).

## 4. Relevance

The preceding sections modeled evidence assessment, using Bayes’ theorem (Section 1), likelihood ratios (Section 2) and Bayesian networks (Section 3). Evidence assessment, however, begins with a preliminary decision, the identification of relevant evidence. Once a piece of evidence is deemed relevant, the next step is to assess its strength (probative value, weight). This section discusses how probability theory helps to identify relevant evidence.

### 4.1 Likelihood ratios

The U.S.
Federal Rules of Evidence define relevant evidence as evidence that has any tendency to make the existence of any fact that is of consequence to the determination of the action more probable or less probable than it would be without the evidence (rule 401). This definition is formulated in a probabilistic language. Legal probabilists interpret it using the likelihood ratio, a standard probabilistic measure of evidential relevance (Aitken et al. 2010; Aitken and Taroni 1995 [2004]; Lempert 1977; Lyon and Koehler 1996; Sullivan 2019). The likelihood ratio (discussed in Section 2) is the probability of observing the evidence given the prosecutor’s or plaintiff’s hypothesis, divided by the probability of observing the same evidence given the defense’s hypothesis. Let $$E$$ be the evidence, $$H$$ the prosecutor’s or plaintiff’s hypothesis, and $$H'$$ the defense’s hypothesis. The likelihood ratio is defined as follows:

$LR(E,H,H') = \frac{P(E\pmid H)}{P(E\pmid H')}$

On the likelihood ratio interpretation, relevance depends on the choice of the competing hypotheses. A piece of evidence is relevant—in relation to a pair of hypotheses $$H$$ and $$H'$$—provided the likelihood ratio $$LR(E, H, H')$$ is different from one and irrelevant otherwise. For example, the bloody knife found in the suspect’s home is relevant evidence in favor of the prosecutor’s hypothesis because we think it is far more likely to find such evidence if the suspect committed the crime (prosecutor’s hypothesis) than if he didn’t (defense hypothesis) (Finkelstein 2009). In general, for values greater than one, $$LR(E, H, H')>1$$, the evidence supports the prosecutor’s or plaintiff’s hypothesis $$H$$, and for values below one, $$LR(E, H, H')<1$$, the evidence supports the defense’s hypothesis $$H'$$. If the evidence is equally likely under either hypothesis, $$LR(E, H, H')=1$$, the evidence is irrelevant.

### 4.2 The Small Town Murder objection

This account of relevance has been challenged by cases in which the evidence is intuitively relevant and yet its likelihood ratio, arguably, equals one. Here is one problematic case:

> Small Town Murder: A person accused of murder in a small town was seen driving to the small town at a time prior to the murder. The prosecution’s theory is that he was driving there to commit the murder. The defense theory is an alibi: he was driving to the town to visit his mother. The probability of this evidence if he is guilty equals that if he is innocent, and thus the likelihood ratio is 1 … Yet, every judge in every trial courtroom of the country would admit it [as relevant evidence]. (The difficulty has been formulated by Ronald Allen, see the discussion in Park et al. 2010)

Counterexamples of this sort abound. Suppose a prisoner and two guards had an altercation because the prisoner refused to return a food tray. The prisoner had not received a package sent to him by his family and kept the tray in protest. According to the defense, the prisoner was attacked by the guards, but according to the prosecution, he attacked the guards. The information about the package sent to the prisoner and the withholding of the tray fails to favor either version of the facts, yet it is relevant evidence (Pardo 2013).

It is true that if a piece of evidence $$E$$ fits equally well with two competing hypotheses $$H$$ and $$H'$$, then $$P(E\pmid H)=P(E\pmid H')$$ and thus $$LR(E,H,H')$$ will equal 1. But the likelihood ratio may change depending on the selection of hypotheses.
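A small sketch makes the point vivid; all the probability values below are invented purely for illustration:

```python
def lr(p_e_given_h, p_e_given_h_prime):
    """Likelihood ratio LR(E, H, H') = Pr(E | H) / Pr(E | H')."""
    return p_e_given_h / p_e_given_h_prime

# E = "the defendant was seen driving toward the town".
# Ultimate hypotheses (guilt vs. the alibi): the sighting is just as
# expected either way, so the ratio is 1.
print(lr(0.9, 0.9))    # -> 1.0: E looks irrelevant

# Sub-hypotheses ("he was in town" vs. "he was not in town"): the
# sighting is far more probable if he was in town.
print(lr(0.9, 0.05))   # -> 18.0: E is highly relevant
```

Relative to the ultimate hypotheses, the sighting is useless; relative to the sub-hypothesis about the suspect’s whereabouts, it is strongly probative.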
Rule 401 makes clear that relevant evidence should have any tendency to make the existence of any fact that is of consequence [emphasis ours] to the determination of the action more probable or less probable. So the range of hypotheses to compare should be broad. Just because the likelihood ratio equals one for a specific selection of $$H$$ and $$H'$$, it does not follow that it equals one for any selection of $$H$$ and $$H'$$ which are of consequence to the determination of what happened. In Small Town Murder, whether the suspect was in town is of consequence for determining what happened (if he was not in town, he could not have committed the crime). The fact that he was seen driving is helpful information for establishing whether he was in town.

But if the range of hypotheses $$H$$ and $$H'$$ to compare in the likelihood ratio $$LR(E, H, H')$$ is broad, this may raise another concern. The choice of hypotheses needed to determine the relevance of an item of evidence might depend on other items of evidence, and so it might be difficult to determine relevance until one has heard all the evidence. This fact—Ronald Allen and Samuel Gross argue in Park et al. (2010)—makes the probabilistic account of relevance impractical. In response, David Kaye points out that deciding whether a reasonable juror would find evidence $$E$$ helpful requires only looking at what hypotheses or stories the juror would reasonably consider. Since the jurors will rely on several clues about which stories are reasonable, this task is computationally easier than going over all possible combinations of hypotheses (Park et al. 2010).

The problem with the paradoxes of relevance is that in complex situations there is no single likelihood ratio that corresponds to a piece of evidence. The problematic cases focus on a single likelihood ratio based on non-exclusive or non-exhaustive hypotheses. However, evidence can be relevant so long as it has a probabilistic impact on a pertinent sub-hypothesis, even without having a probabilistic impact on the prosecutor’s or defense’s ultimate hypotheses. When this happens, the evidence is relevant, in agreement with Rule 401 of the Federal Rules of Evidence. Bayesian networks (discussed in the preceding section) help to see how pieces of evidence can increase or decrease the probability of different sub-hypotheses (for more details, see de Zoete et al. 2019).

## 5. Standards of Proof

After the evidence has been presented, examined and cross-examined at trial, trained judges or lay jurors must reach a decision (see Laudan 2010 for a few caveats on what the decision should be about). The decision criterion is defined by law and consists of a standard of proof, also called the burden of persuasion. If the evidence against the defendant is sufficiently strong to meet the requisite proof standard, the defendant should be found liable. This section begins with a description of standards of proof in the law, then outlines a probabilistic account of standards of proof, and discusses some objections to this account.

### 5.1 Legal background

In criminal proceedings, the governing standard is “proof beyond a reasonable doubt”. In civil cases, the standard is typically “preponderance of the evidence”. The latter is less demanding than the former, so the same body of evidence may be enough to meet the preponderance standard, but not enough to meet the beyond a reasonable doubt standard. A vivid example of this difference is the 1995 trial of O.J. Simpson, who was charged with murdering his former wife.
He was acquitted of the criminal charges, but when the family of the victim brought a lawsuit against him, they prevailed. O.J. Simpson did not kill his former wife according to the beyond a reasonable doubt standard, but he did according to the preponderance standard. An intermediate standard, called “clear and convincing evidence”, is sometimes used for civil proceedings in which the decision is particularly weighty, for example, a decision whether someone should be involuntarily committed to a hospital facility.

How to define standards of proof, or whether they should even be defined in the first place, remains contentious (Diamond 1990; Horowitz and Kirkpatrick 1996; Laudan 2006; Newman 1993; Walen 2015). Judicial opinions offer different paraphrases, sometimes conflicting, of what these standards mean. The meaning of “proof beyond a reasonable doubt” is the most controversial. It has been equated to “moral certainty” or “abiding conviction” (Commonwealth v. Webster, 59 Mass. 295, 1850), or to

> proof of such a convincing character that a reasonable person would not hesitate to rely and act upon it in the most important of his own affairs. (US Federal Jury Practice and Instructions, Devitt et al. 1987, 12.10, 354)

But courts have also cautioned that there is no need to define the term because “jurors know what is reasonable and are quite familiar with the meaning of doubt” and attempts to define it only “muddy the water” (U.S. v. Glass, 846 F.2d 386, 1988). Probability theory can bring conceptual clarity to an otherwise heterogeneous legal doctrine, or at least this is the position of legal probabilists.

### 5.2 Probability thresholds

Legal probabilists have proposed to interpret proof beyond a reasonable doubt as the requirement that the defendant’s probability of guilt, given the evidence presented at trial, meet a threshold, say, $$>$$.95. Variations of this view are common (see, for example, Bernoulli 1713; DeKay 1996; Kaplan 1968; Kaye 1979b; Laplace 1814; Laudan 2006). This interpretation is, in some respects, plausible. From a legal standpoint, the requirement that guilt be established with high probability, still short of 1, accords with the principle that proof beyond a reasonable doubt is the most stringent standard of all but “does not involve proof to an absolute certainty” and thus “it is not proof beyond any doubt” (R. v. Lifchus, 1997, 3 SCR 320, 335).

Reliance on probabilistic ideas is even more explicit in the standard “preponderance of the evidence”—also called “balance of probabilities”—which governs decisions in civil disputes. This standard can be interpreted as the requirement that the plaintiff—the party making the complaint against the defendant—establish its version of the facts with probability greater than .5. The .5 threshold, as opposed to a more stringent threshold of .95 for criminal cases, reflects the fact that preponderance is less demanding than proof beyond a reasonable doubt. The intermediate standard “clear and convincing evidence” is more stringent than the preponderance standard but not as stringent as the beyond a reasonable doubt standard. Since it lies in between the other two, it can be interpreted as the requirement that the plaintiff establish its version of the facts with, say, probability at the level of .75–.8.

Some worry that a mechanical application of numerical thresholds would undermine the humanizing function of trial decision-making.
As Tribe put it,

> induced by the persuasive force of formulas and the precision of decimal points to perceive themselves as performing a largely mechanical and automatic role, few jurors … could be relied upon to recall, let alone to perform, [their] humanizing function. (1971: 1376)

Thresholds, however, can vary depending on the costs and benefits at stake in each case (see later discussion). So they need not be applied mechanically without considering the individual circumstances (Hedden and Colyvan 2019). Furthermore, if jurors are numerically literate, they should not lose sight of their humanizing function, as they would no longer be intimidated by numbers. So the worry suggests the need to ensure that jurors are numerically literate, not the need to dispense with probabilistic thresholds altogether.

Even if numerical thresholds cannot be used in the daily business of trial proceedings, they can still serve as theoretical concepts for understanding the role of proof standards in the justice system, such as regulating the relative frequency of false positive and false negative decisions or minimizing expected costs. A more stringent threshold will decrease the number of false positives (say false convictions) at the cost of increasing the number of false negatives (say false acquittals), and a less stringent threshold will increase the number of false positives while decreasing the number of false negatives. This trade-off has been described, among others, by Justice Harlan in his concurring opinion in In re Winship, 397 U.S. 358, 397 (1970). As shown below, the formal apparatus of probability theory, in combination with expected utility theory, can make this point more precise.

### 5.3 Minimizing expected costs

Expected utility theory recommends that agents take the course of action that, among the available alternatives, maximizes expected utility. On this view, the standard of proof is met whenever the expected utility (or cost) of a decision against the defendant (say, a conviction) is greater (or lower) than the expected utility (or cost) of a decision in favor of the defendant (say, an acquittal) (DeKay 1996; Hamer 2004; Kaplan 1968). Let $$c(CI)$$ be the cost of convicting a factually innocent defendant and $$c(AG)$$ the cost of acquitting a factually guilty defendant. For a conviction to be justified, the expected cost of convicting an innocent—that is, $$c(CI)$$ discounted by the probability of innocence $$[1-\Pr(G\pmid E)]$$—must be lower than the expected cost of acquitting a guilty defendant—that is, $$c(AG)$$ discounted by the probability of guilt $$\Pr(G\pmid E)$$. This holds just in case

$\frac{\Pr(G\pmid E)}{1- \Pr(G\pmid E)} > \frac{c(CI)}{c(AG)}.$

This inequality serves to identify the probability threshold that should be met for decisions against the defendant. When the cost ratio $$\slashfrac{c(CI)}{c(AG)}$$ is set to 9—which might be appropriate in a criminal case since convicting an innocent is often considered more harmful than acquitting a guilty defendant (however, see Laudan 2016)—the inequality holds only if $$\Pr(G \pmid E)$$ meets the .9 threshold. The same analysis mutatis mutandis applies to civil cases in which mistaken decisions comprise mistaken attributions of liability (false positives) and mistaken failures to attribute liability (false negatives).
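Solving the inequality for $$\Pr(G\pmid E)$$ makes the implied threshold explicit; the label $$r$$ for the cost ratio is ours, introduced here only for convenience. Since $$\slashfrac{\Pr(G\pmid E)}{1-\Pr(G\pmid E)} > r$$ holds just in case

$\Pr(G\pmid E) > \frac{r}{1+r},$

a cost ratio of 9 yields the $$\slashfrac{9}{10}=.9$$ threshold just mentioned, and, more generally, every choice of cost ratio determines a unique probability threshold.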
If the cost ratio is one—as might be appropriate in a civil case in which false positives and false negatives are equally harmful—the inequality holds only if the probability that the defendant is liable meets the .5 threshold.

Figure 8 [An extended description of figure 8 is in the supplement.]

This analysis only considers the costs of mistaken decisions, but leaves out the benefits associated with correct decisions. More comprehensive analyses would consider both (Lillquist 2002; Laudan and Saunders 2009), but the basic insight would remain the same. Trial decision-making is viewed as one instrument among others for maximizing overall social welfare (Posner 1973). On this account of proof standards, the stringency of the threshold depends on costs and benefits, and thus different cases may require different thresholds. Cases in which the charge is more serious than others—say, murder compared to petty theft—may require higher thresholds so long as the cost of a mistaken decision against the defendant is more significant. Whether or not standards of proof should vary in this way is debated (Kaplow 2012; Picinali 2013; see also the entry on the legal concept of evidence).

The standard “proof beyond a reasonable doubt” is often paired with the Blackstone ratio, the principle that it is better that ten guilty defendants go free than that even just one innocent be convicted. The exact ratio is in fact a matter of controversy (Volokh 1997). It is tempting to think that, say, the .9 threshold guarantees a 1:9 ratio between false convictions and false acquittals. But this would be hasty for at least two reasons. First, probabilistic thresholds affect the expected rate of mistaken decisions. The actual rate may deviate from its expected value (Kaye 1999). Second, if the threshold is .9, at most 10% of decisions against defendants are expected to be mistaken (false convictions) and at most 90% of the decisions in favor of the defendant are expected to be mistaken (false acquittals). The exact ratio will depend on the probabilities assigned to defendants and how they are distributed (Allen 2014). What can be said, in general, is that the threshold that minimizes the expected rate of incorrect decisions overall, no matter the underlying distribution, lies at .5 (see Kaye 1982, 1999; Cheng & Pardo 2015 for a proof).

### 5.4 Alternatives to probabilistic thresholds

There exist several theoretical alternatives to the probabilistic interpretation of proof standards in the scholarly literature. Pennington and Hastie (1991, 1993) have proposed the story model according to which judges and jurors first make sense of the evidence by constructing stories of what happened, and then select the best story on the basis of multiple criteria, such as coherence, fit with the evidence and completeness. Pardo and Allen (2008) argue that the version of the facts that best explains the evidence should prevail in a court of law. On the role of coherence and story construction in evidence assessment and decision-making at trial, see (Amaya 2015; Griffin 2013; Simon 2004). Another approach is due to Gordon, Prakken, and Walton (2007) and Prakken and Sartor (2009) who view the trial as a place in which arguments and counterarguments confront one another. The party that has the best arguments, all things considered, should prevail. On this view, probability estimates can themselves be the target of objections and counterarguments.
Along these lines, Stein (2008) argues that, in order to warrant a verdict against the defendant, the evidence should have survived individualized scrutiny, not merely support a high probability of liability. Philosophers and legal theorists have also leveled distinctively epistemological critiques. Ho (2008) and Haack (2014) hold that degrees of epistemic warrant for a claim, which depend on multiple factors—such as the extent to which the evidence supports the claim and the extent to which it is comprehensive—cannot be equated to probabilities. Gardiner (2019) argues that standards of proof should rule out all error possibilities that are relevant, and these need not coincide with error possibilities that are probable. Finally, some epistemologists argue that a belief that is assigned high probability, no matter how high, is not enough to warrant knowledge, and knowledge should be the standard for trial verdicts (Blome-Tillmann 2017; Duff et al. 2007; Levanon 2019; Littlejohn 2020; Moss forthcoming).

Scholars and commentators have also voiced more specific objections that need not invalidate the probabilistic framework but rather call for refinements. Nance (2016) argues that the evidence on which to base a trial decision should be reasonably complete—it should be all the evidence that one would reasonably expect to see from a conscientious investigation of the facts. A similar argument can be found in Davidson and Pargetter (1987). Arguably, probability-based decision thresholds can accommodate these considerations, for example, by lowering the probability of civil or criminal liability whenever the body of evidence is one-sided or incomplete (Friedman 1996; Kaye 1979c, 1986). Another strategy is to give a probability-based account of the notion of completeness of the evidence and other seemingly non-probabilistic criteria (Urbaniak 2018).

There is a plethora of other objections. The puzzles of naked statistical evidence and the conjunction paradox are two of the most widely debated in the literature. These and other objections are examined in the sections that follow.

## 6. Naked Statistical Evidence

The puzzles of naked statistical evidence consist of hypothetical scenarios in which the probability of the defendant’s civil or criminal liability, given the evidence, is above the requisite threshold. Yet many have the intuition that the defendant should not be found liable. The question is how to justify this intuition despite the fact that the probability of liability meets the threshold. The puzzles of naked statistical evidence concern the question of sufficiency, namely, whether a body of evidence is enough to meet the proof standard applicable in a case. They are not about whether some evidence should be admissible at trial (on the distinction, see Picinali 2016).

### 6.1 Blue Bus, Gatecrasher, Prisoner

Blue Bus (Tribe 1971). Mrs. Brown is run down by a bus. It is common knowledge that 80% of the buses in town are owned by Blue Bus, and the remaining 20% by Red Bus. There was no witness to the accident except Brown who is, however, color-blind. Since Blue Bus owns 80% of the buses in town, it is 80% likely that Brown was run down by a bus operated by Blue Bus, well above the 50% threshold for civil liability. Yet, merely presenting the 80% naked statistics should not be sufficient for Brown to prevail in a civil lawsuit against Blue Bus.

Gatecrasher (Cohen 1977). It is known that 499 people paid for admission to a rodeo, but the total number of spectators was 1000.
Suppose no paper tickets were issued and no witness could identify those who paid. For any spectator picked at random, it is more than 50% likely that they did not pay for admission. But even if this probability is above the .5 threshold for civil liability, it would be odd that the rodeo organizers could win a lawsuit against any of the spectators simply by presenting the 501-out-of-1000 naked statistics.

Prisoner (Nesson 1979). 100 prisoners are exercising in a prison yard. Suddenly 99 of them attack and kill the only guard on duty. One prisoner played no role whatsoever in the assault. These are the undisputed facts in the case, and there is no further information about what happened. If a prisoner is picked at random, his probability of guilt would be as high as .99. Yet the intuition is that this cannot be enough to establish guilt beyond a reasonable doubt.

These scenarios are like lottery cases in which the probability of a proposition such as “my ticket is a loser” is high, yet intuitively the proposition cannot count as knowledge (see, e.g., Harman 1968; Ebert, Smith, and Durbach 2018; Hawthorne 2004; Lawlor 2013; Nelkin 2000). The evidence in these scenarios—in particular, Gatecrasher and Prisoner—does not single out an individual specifically, but applies to any member of a group. Just as in lottery cases any ticket is very likely to lose, anyone who attended the rodeo or any prisoner in the yard is very likely to be liable. In this sense, naked statistical evidence is sometimes contrasted with individualized or case-specific evidence such as trace evidence or eyewitness testimony (Colyvan, Regan, and Ferson 2001; Stein 2005; Thomson 1986; Williams 1979).

The distinction is contested, however. Any form of evidence, after all, relies on a categorization that places the defendant in a class with others, be it the class of those who have such-and-such facial features or were in such-and-such a place (Harcourt 2006; Saks and Kidd 1980; Schauer 2003; Schoeman 1987; Wright 1988). Tillers (1997, 2005) notes that it is not always objectionable to base inferences about the behavior of an individual on the behavior of others; for example, membership in a gang or in the Ku Klux Klan can be indicative of someone’s beliefs. Still, there is intuitive resistance against verdicts of liability based on naked statistics, but this resistance is less pronounced for verdicts based on more traditional forms of evidence, such as trace evidence or eyewitness testimony. The asymmetry might just be an artifact of history, since the testimony of an honest witness has been the bedrock of the Anglo-American trial system. However, the resistance toward naked statistical evidence along with the preference for other forms of evidence has also been verified empirically (Arkes, Shoots-Reinhard, and Mayes 2012; Niedermeier, Kerr, and Messé 1999; Wells 1992) and is not limited to the legal context (Ebert et al. 2018; Friedman and Turri 2015; Sykes and Johnson 1999).

Some scholars have expressed reservations about the relevance of hypothetical scenarios featuring naked statistical evidence. Since these scenarios are removed from trial practice, they might be an unreliable guide for theorizing about the trial (Allen and Leiter 2001; Dant 1988; Schmalbeck 1986). These scenarios, however, are partly modeled on real cases. For example, Blue Bus is modeled on Smith v. Rapid Transit, Inc. 317 Mass. 469 (1945). The hypothetical also bears a similarity with the landmark case Sindell v. Abbott Laboratories, 26 Cal.
3d 588 (1980), in which different companies marketed the same drug that was later shown to have caused cancer to a particular individual. Since the drug was sold by multiple companies, it was impossible to determine which company was responsible. Statistics about the companies’ market shares were used to attribute liability in the absence of better, more individualized evidence.

### 6.2 Cold-hits

Legal scholars have drawn parallels between naked statistical evidence and DNA evidence in cold-hit cases (Roth 2010). The peculiarity of cold-hit cases is that the defendant is identified through a database search of several different genetic profiles, and as a consequence, the evidence in cold-hit cases consists almost exclusively of a DNA match between the crime traces and the defendant (see earlier discussion in §2.2). The match—as is customary—is complemented by a statistical estimate of the frequency of the profile, say, one in one hundred million people share the same matching profile. Given the largely statistical nature of the evidence, cold-hit cases can be seen as realistic examples of scenarios such as Prisoner or Gatecrasher. But whether we should think about cold-hit cases in this way is contested. Some authors place naked statistical evidence and cold-hit matches on a par (Smith 2018) and others do not (Enoch and Fisher 2015; Enoch, Spectre, and Fisher 2012).

Some appellate courts in the United States have ruled that cold-hit matches, even though they are uncorroborated by other evidence, constitute sufficient ground for verdicts of criminal liability (Malcom 2008). Judge Hardwick, for example, writes that

> if DNA material is found in a location, quantity, and type inconsistent with casual contact and there is a one in one quintillion likelihood that someone else was the source of the material, the evidence is legally sufficient to support a guilty verdict. Missouri v. Abdelmalik, 273 S.W.3d 61, 66 (Mo. Ct. App. 2008)

Such pronouncements by appellate courts lend support to the view that the statistics underlying cold-hit DNA matches are unlike naked statistical evidence (Cheng and Nunn 2016; Di Bello 2019).

### 6.3 Revisionist responses

The puzzles of naked statistical evidence are some of the most difficult for legal probabilism. They directly challenge the claim that a high probability should suffice for a judgment of criminal or civil liability. One line of response that legal probabilists can pursue is to recommend revising our intuitions in hypothetical cases and questioning their relevance as a guide for theorizing about standards of proof. Legal probabilists can argue that the preference for traditional forms of evidence over statistical evidence is an unwarranted bias (Laudan 2006; Papineau forthcoming). They can point out that research in psychology and cognitive science has shown that eyewitness testimony and fingerprint evidence are often unreliable (Simons and Chabris 1999), prone to manipulation (Loftus 1979 [1996]), and influenced by subjective considerations and matters of context (Dror, Charlton, and Péron 2006; Zabell 2005). Relying on statistical evidence, on the other hand, should improve the overall accuracy of trial decisions (Koehler and Shaviro 1990). Legal probabilists can also argue that the puzzles of naked statistical evidence are confined to hypotheticals, and our judgments about statistical evidence may well differ in more realistic cases (Hedden and Colyvan 2019; Ross 2021). Few have defended such revisionist views, however.
The literature has predominantly tried to vindicate the intuitive difference between naked statistical evidence and other forms of evidence. What follows is a discussion of some of the proposals in the literature, focusing specifically on the probabilistic ones. An examination of the non-probabilistic solutions to the paradoxes of naked statistical evidence falls outside the scope of this entry (for critical surveys, see Redmayne 2008; Gardiner 2018; Pardo 2019).

### 6.4 Non-revisionist responses

Below are six non-revisionist moves legal probabilists can make in response to the paradoxes of naked statistical evidence.

First, legal probabilists can deny that in scenarios such as Gatecrasher the probability of liability is high enough to meet the required threshold. Kaye (1979a) argues that whenever there is no other evidence besides naked statistical evidence, this is not enough for the plaintiff to win because the evidence is suspiciously thin. This circumstance should lead the fact-finders to lower the probability of liability below the requisite threshold. Along similar lines, Nance (2016) argues that when the evidence presented at trial is incomplete—that is, evidence that one would reasonably expect to see at trial is missing—the defendant should not be found liable. This strategy might work in scenarios such as Blue Bus in which the paucity of the evidence could well be the plaintiff’s fault. But it is unclear whether this strategy works for scenarios such as Gatecrasher or Prisoner in which the paucity of the evidence is a characteristic of the scenarios themselves, not anyone’s fault.

Second, legal probabilists can appeal to the reference class problem (Colyvan, Regan, and Ferson 2001; see also later Section 7). An individual may fall under different reference classes. If 3% of those who smoke have lung cancer and 0.1% of those who exercise regularly have lung cancer, what about someone who smokes and exercises regularly? In Gatecrasher, it is arbitrary to single out the group of people at the stadium and not the group of people with a history of trespassing. There is no clear reason why one or the other reference class was chosen. The choice of another reference class could have led to a different conclusion about the defendant’s probability of liability. This approach has also been endorsed by scholars who are decidedly opposed to legal probabilism (Allen and Pardo 2007). Critics of this approach note that the reference class problem affects any evidence-based judgment. In assessing the strength of any piece of evidence, different reference classes can be used, say the class of witnesses who give detailed descriptions of what they saw or the class of witnesses who are nervous (Redmayne 2008).

Third, legal probabilists can observe that probabilistic claims based on naked statistical evidence are not resilient enough against possible countervailing evidence (on the notion of resilience and stability of belief, see Skyrms 1980; Leitgeb 2014). If an eyewitness were to assert that the prisoner did not participate in the riot, the probability that he would be guilty of killing the guard should be lowered significantly. Presumably, trial verdicts cannot be so volatile. They should aim at some degree of stability even in light of possible further evidence (Bolinger 2021). A problem with this approach is that more evidence could always be found that would change one’s probabilistic assessment.
A further problem is that the puzzles of naked statistical evidence are cases in which no further evidence is—or could possibly be—available. But adding the assurance that the probabilistic claims could not be revised does not make the naked statistics less problematic. Fourth, legal probabilists can insist that verdicts solely based on naked statistics do not promote the goal of expected utility maximization. This is not so much because the naked statistics are bad evidence, but rather, because reliance on them may have a number of unintended costs, such as suboptimal allocation of the burden of error or lack of deterrence. In Blue Bus, for example, a verdict against the company with the largest market share might create a perverse economic incentive against larger companies (Posner 1973). It is unclear how this explanation could be extended to other cases such as Gatecrasher or Prisoner, and there are even variants of Blue Bus not susceptible to this objection (Wells 1992). Alternatively, Dahlman (2020) argues that verdicts based on naked statistical evidence do not provide any added incentive for lawful conduct because they would not make a distinction between lawful and unlawful behavior. (On deterrence and naked statistics, see also Enoch, Spectre, and Fisher 2012; Enoch and Fisher 2015). Fifth, another line of response for legal probabilists is to concede that the paradoxes of naked statistical evidence demonstrate the inadequacy of simple probabilistic thresholds as proof standards (Urbaniak 2019). Instead of the posterior probability of liability, a number of scholars have focused on the likelihood ratio $$P(E\pmid H)/P(E \pmid H')$$. Their argument is that, even though naked statistical evidence can support a high posterior probability of liability, the likelihood ratio of this evidence equals one because it could be presented against a defendant no matter what the defendant did. If so, naked statistical evidence should have no evidential value (Cheng 2012; Sullivan 2019). However, Dahlman (2020) has criticized this argument noting that, under plausible assumptions, the likelihood ratio of naked statistical evidence is significantly greater than one. Di Bello (2019) argues that in cases such as Prisoner and Gatecrasher the likelihood ratio could take a range of different values depending on the background information that is taken into account. The likelihood ratio of naked statistical evidence in those scenarios is therefore neither one nor greater than one, but strictly speaking unknown (for a critique of this argument, see Urbaniak et al. 2020). Finally, legal probabilists can reformulate probabilistic standards of proof using the concept of knowledge. Moss (2018) argues that a probabilistic belief, just like a full belief, can constitute knowledge. This notion of probabilistic knowledge can be used to formulate an account of standards of proof for trial decisions (Moss, 2018: ch. 10; Moss, forthcoming). The preponderance standard for civil cases would be met if, based on the evidence presented at trial, judges or jurors have at least .5 credence that the defendant is liable, and this probabilistic belief constitutes knowledge. The standard in criminal cases—proof beyond a reasonable doubt—would instead require knowledge of the probabilistic belief that the defendant is extremely likely to be guilty or simply full-fledged knowledge of guilt. 
To the extent that naked statistical evidence does not warrant full-fledged knowledge or probabilistic knowledge, knowledge-based accounts offer a solution to the puzzles of naked statistical evidence. In addition, the probabilistic knowledge account of standards of proof promises to vindicate the intuition that probability alone is not enough for legal proof while also acknowledging the key role that probability plays in decision-making at trial (however, for a critique of knowledge-based accounts of legal proof from a legal standpoint, see Allen 2021).

## 7. Further Objections

Aside from the paradoxes of naked statistical evidence, the conjunction paradox is one of the most widely discussed objections against legal probabilism. This section examines this paradox together with a few other objections. Many of these objections can be traced back to the seminal work of Cohen (1977), who also leveled criticisms against Bayesian epistemology more generally (for further discussion, see Earman 1992; Bovens and Hartmann 2004; Bradley 2015 and the entry on Bayesian epistemology).

### 7.1 The difficulty about conjunction

Suppose the plaintiff is to prove two separate claims, $$A$$ and $$B$$, according to the governing standard of proof, say, preponderance of the evidence (which the legal probabilist may interpret as the requirement that the facts be established with greater probability than .5). If the plaintiff has proven each claim with probability .7, the burden of proof should be met. And yet, if the two claims are independent, the probability of their conjunction is only $$.7\times.7=.49$$, below the required threshold. Arguably, common law systems subscribe to a conjunction principle which states that if $$A$$ and $$B$$ are established according to the governing standard of proof, so is their conjunction. Probability theory—the criticism goes—cannot capture this principle. This is the so-called conjunction paradox or difficulty about conjunction. It was originally formulated by Cohen (1977) and has enjoyed great popularity ever since (Allen 1986; Allen and Stein 2013; Allen and Pardo 2019; Haack 2014; Schwartz and Sober 2017; Stein 2005).

Without rejecting the conjunction principle of the common law, legal probabilists can respond in a few different ways. Dawid (1987) argues that the difficulty about conjunction disappears if evidential support is modeled probabilistically by likelihood ratios instead of posterior probabilities. He writes that

> suitably measured, the support supplied by the conjunction of several independent testimonies exceeds that supplied by any of its constituents.

Although the original paradox pertains to posterior probabilities of liability, Dawid argues that the paradox does not arise for likelihood ratios. Garbolino (2014) also recommends switching to likelihood ratios. Yet the conjunction paradox still arises for likelihood ratios when the items of evidence, say $$a$$ and $$b$$, are assessed relative to a composite hypothesis, say $$A \wedge B$$. Suppose $$a$$ and $$b$$ provide positive support for $$A$$ and $$B$$, respectively. Urbaniak (2019) shows that the combined likelihood ratio

$\frac{\Pr(a \wedge b \pmid A\wedge B)}{\Pr(a\wedge b \pmid \neg (A \wedge B))}$

can be lower than the individual likelihood ratios

$\frac{\Pr(a \pmid A)}{\Pr(a \pmid \neg A)}$

and

$\frac{\Pr(b \pmid B)}{\Pr(b \pmid \neg B)}.$

Cheng (2012) provides another probability-based solution to the conjunction paradox.
He argues that the standard of proof in civil cases should require that the plaintiff’s hypothesis be comparatively more probable on the evidence than the defendant’s hypothesis. On this account, the probability of $$A\wedge B$$ given the overall evidence in a case should be compared with the probability of the alternative hypotheses given the same evidence. The alternatives to be considered are as follows:

- $$A\wedge \neg B$$,
- $$\neg A \wedge B$$, and
- $$\neg A \wedge \neg B$$.

Given suitable assumptions, the probabilities of these alternatives will fall below the probability of $$A\wedge B$$ provided $$A$$ and $$B$$, individually, are supported by the evidence. So whenever the standard of proof is met for individual claims $$A$$ and $$B$$, the standard should also be met for the composite claim $$A \wedge B$$. Kaplow (2014) advances a similar argument. Urbaniak (2019) points out that Cheng splits the defense hypothesis into three sub-cases, $$A\wedge \neg B$$, $$\neg A \wedge B$$, and $$\neg A \wedge \neg B$$, but does not consider the alternative $$\neg (A \wedge B)$$. The probability of $$\neg (A \wedge B)$$ may actually exceed that of $$A\wedge B$$. If $$\neg (A \wedge B)$$ is the alternative hypothesis, the standard might not be met for the composite claim $$A \wedge B$$, even when the standard is met for individual claims $$A$$ and $$B$$ (for another critique of Cheng’s approach, see Allen & Stein 2013).

Finally, legal probabilists can pursue a holistic approach in response to the conjunction paradox. This solution has been defended by opponents of legal probabilism (Allen 1986; Allen and Pardo 2019), but can also be adopted by legal probabilists. Instead of assessing the posterior probabilities of individual claims given individual pieces of evidence, the holistic approach recommends assessing the probability of the overall composite claim, say $$A \wedge B$$, in light of all the evidence available (Hedden and Colyvan, 2019; see however the response by Allen 2020). Bayesian networks (see earlier discussion in Section 3) can help to assess the evidence holistically (de Zoete, Sjerps, and Meester 2017; de Zoete and Sjerps 2018; Neil et al. 2019).

### 7.2 Cohen’s other objections

Cohen (1977) leveled a few other objections against legal probabilism. They are less well-known than the paradoxes of naked statistical evidence or the conjunction paradox, but are still worth examining.

#### 7.2.1 Completeness

The statement $$\Pr(\neg H\pmid E) = 1-\Pr(H\pmid E)$$ is a theorem of the probability calculus. If the probability of $$H$$ given $$E$$ is low, the probability of $$\neg H$$ given the same evidence must be high. This fact may seem to create evidence from ignorance. If $$E$$ is meager evidence of $$H$$ (that is, $$\Pr(H \pmid E)$$ is low), it must be strong evidence for the negation of $$H$$ (that is, $$\Pr(\neg H \pmid E)$$ is high). This seems wrong. Intuitively, some evidence can weakly support both a hypothesis and its negation. For example, suppose one has heard a rumor that the defendant spent the night at the bar when the crime was committed. The rumor is weak evidence for the claim that the defendant spent the night at the bar. It does not follow that the rumor is strong evidence that the defendant did not spend the night at the bar. Evidence may actually have no bearing whatsoever on a hypothesis. Probability seems unable to capture this fact, or at least this is the objection.
This difficulty motivated the development of a non-classical theory of probability and evidential support by Dempster (1968) and Shafer (1976). Legal probabilists, however, need not reject classical probability theory. They can respond that the difficulty just described arises only because one is inclined to measure the strength of evidence in terms of the posterior probability $$\Pr(H \pmid E)$$, rather than by means of the likelihood ratio (on this distinction, see earlier in Section 2). If $$E$$ weakly supports $$H$$—that is, the likelihood ratio

$\frac{\Pr(E \pmid H)}{\Pr(E \pmid \neg H)}$

is barely above one—it does not follow that $$E$$ strongly supports $$\neg H$$. In fact, it rather follows that $$E$$ weakly disfavors $$\neg H$$, because

$\frac{\Pr(E \pmid \neg H)}{\Pr(E \pmid H)}$

will be slightly below one.

#### 7.2.2 Corroboration

When two or more independent witnesses testify to the truth of the same proposition, and their stories are relatively unlikely, the probability of the proposition in question should increase significantly. This phenomenon of “confidence boost” is known as corroboration. In the case of circumstantial evidence, the analogous phenomenon is called convergence. Cohen (1977) argues that no probabilistic measure of evidential support captures this phenomenon. He examines different probabilistic proposals—Boole’s formula (Boole 1857), Ekelöf’s principle (Ekelöf 1964), a formula due to Lambert and Kruskal (1988)—and finds them inadequate: the confidence boost that is expected from corroboration is not adequately captured by any of these proposals.

More recently, better probabilistic accounts of corroboration have been developed (Fenton and Neil 2013 [2018]; Robertson, Vignaux, and Berger 2016; Taroni et al. 2014). A recurrent theme in this line of work is that corroboration can be accounted for in probabilistic terms by multiplying the likelihood ratios of the individual pieces of evidence. The thought is that the product of the likelihood ratios of different pieces of evidence exceeds the likelihood ratio associated with each individual piece (see general discussion in Bovens and Hartmann 2004: ch. 5). Cohen, however, insisted that the confidence boost due to corroboration should be large, and such a large boost is not reflected in the product of the likelihood ratios. What needs further exploration is the size of the confidence boost and the features of the evidence that affect the boost. Urbaniak and Janda (2020) offer a detailed discussion of Cohen’s objections and candidates for a solution.

### 7.3 The problem of priors

Another objection often leveled against legal probabilism is the problem of priors. This problem emerges as one sets out to assess the posterior probability of a hypothesis given the available evidence, $$\Pr(H \pmid E)$$. To carry out the calculations, Bayes’ theorem requires as a starting point the prior probability of the hypothesis, $$\Pr(H)$$, irrespective of the evidence. The correct assessment of this prior probability is by no means obvious. Different strategies have been proposed.

First, the prior probability can be equated to $$1/k$$ where $$k$$ is the number of alternative equiprobable hypotheses. If there are $$k$$ possible hypotheses and none is more probable than the others, it is natural to assign $$1/k$$ to each hypothesis. This approach, however, would render prior probabilities quite sensitive to the choice of hypotheses and thus potentially arbitrary.
In addition, this approach is particularly unsuitable for criminal cases. If the two hypotheses are “the defendant is guilty” and “the defendant is innocent”, the prior probability of each would be 50%. Defendants in criminal cases, however, should be presumed innocent until proven guilty. A prior probability of guilt at the level of .5 seems excessive. The presumption of innocence—a procedural protection afforded to all defendants in many countries—should require that the prior probability of guilt be set to a small value (Allen et al. 1995). But it is unclear how low that value should be. Would .001 be sufficiently low or should it be .000001? All that can be said, perhaps, is that in criminal cases the prior probability of guilt should be extremely low (Friedman 2000).

Alternatively, the prior probability can be equated to $$1/n$$ where $$n$$ is the total number of possible suspects or wrongdoers who could have committed the crime or civil wrong under dispute at trial. This, too, is a plausible proposal. Since someone must have committed the wrong, absent any further evidence anyone could have committed it, so $$1/n$$ is a reasonable starting point. But this proposal also quickly runs into difficulties. In some cases, whether a wrong was committed by anyone can itself be disputed. Or there might be cases in which an illegal act was certainly committed and the defendant took part in it, but it is not clear what illegal act it was, say murder or manslaughter.

To avoid some of the difficulties described above, other models rely on relevant background information, for example, geographical information about people’s opportunities to commit crimes (Fenton et al. 2019). But even if these models are successful in giving well-informed assessments of prior probabilities, a deeper difficulty lingers. That is, any assessment of prior probabilities, no matter how it is done, is likely to violate existing normative requirements of the trial system (Dahlman 2017; Engel 2012; Schweizer 2013). If the assessment of prior probabilities relies on demographic information, people who belong to certain demographic groups will be regarded as having a higher prior probability of committing a wrong than others. But if some people’s priors are higher than other people’s priors, it will be easier to convict or find liable those who are assigned higher priors, even if the evidence against them is the same as the evidence against those assigned lower priors. This outcome can be seen as unfair, especially in criminal cases, since it exposes to a higher risk of conviction those innocents who are assigned a higher prior probability because of the demographic group they belong to (Di Bello and O’Neil 2020).

Perhaps, as some have suggested, legal probabilists should do away with prior probabilities and rely on likelihood ratios instead as a guide for trial decisions (Sullivan 2019). Another strategy for avoiding the problem of priors is to consider an interval of values and see the extent to which different possible priors affect the posterior probability (Finkelstein and Fairley 1970), as discussed in Section 1.4.

### 7.4 The reference class problem

Another challenge to legal probabilism is the reference class problem. The reference class problem, originally formulated by Venn (1866), arises because the same event may belong to multiple reference classes in which the frequencies of the event in question are different.
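The problem can be stated in a few lines of code, reusing the hypothetical smoker/exerciser frequencies from Section 6.4 (the dictionary and the numbers are illustrative only):

```python
# One person, two reference classes, two frequency-based estimates.
lung_cancer_rate = {"smokers": 0.03, "regular exercisers": 0.001}

def frequency_estimate(reference_class):
    """Probability estimate for an individual, relative to a chosen class."""
    return lung_cancer_rate[reference_class]

# For someone who both smokes and exercises regularly, the data alone
# do not determine which of the two estimates is the right one.
print(frequency_estimate("smokers"))             # 0.03
print(frequency_estimate("regular exercisers"))  # 0.001
```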
A common approach, given among others by Reichenbach (1935 [1949: 374]), is to rely on “the narrowest class for which reliable statistics can be compiled”. This may work in some cases provided reliable statistics are available. But what if someone belongs to different classes that are equally narrow?

Consider the case of Charles Shonubi, a Nigerian citizen working in New Jersey, who was arrested on 10 December 1991 at JFK Airport in New York for smuggling heroin into the United States. He was found carrying 103 balloons in his gastrointestinal tract containing 427.4 grams of heroin. During the sentencing proceedings, the prosecutor argued that since Shonubi made seven trips between the United States and Nigeria prior to his arrest, he smuggled a larger total quantity of heroin than 427.4 grams. The prosecution offered data on amounts of heroin seized from 117 Nigerian drug smugglers who were arrested at JFK airport between 1 September 1990 and 10 December 1991. The expert for the prosecutor, Dr. Boyum, calculated that it was 99% likely that Shonubi smuggled at least 2090.2 grams in total before his final trip (U.S. v. Shonubi 895 F. Supp 460, E.D.N.Y. 1995). Shonubi was a member of the reference class “people found carrying heroin while traveling between Nigeria and New York” but also the class “toll collectors at the George Washington Bridge”. Why rely on the former and not the latter to make inferences about how much heroin Shonubi smuggled into the United States? (Colyvan, Regan, and Ferson 2001).

What follows examines the specific difficulties that the reference class problem poses for legal probabilism and how legal probabilists might respond.

#### 7.4.1 The challenge

Allen and Pardo (2007) argue that the reference class problem poses a challenge for legal probabilism and more specifically for probabilistic measures of evidentiary strength such as likelihood ratios (see earlier in Section 2). The problem is that the same piece of evidence may be assigned different likelihood ratios depending on the reference class chosen. For example, the denominator in the likelihood ratio associated with a DNA match is the probability of a match given that a random person, unrelated to the crime, is the source. This probability depends on the frequency of the profile in a select population. But which reference population should one choose? Since nothing in the natural world picks out one reference class over another—the argument goes—the likelihood ratio would be an arbitrary measure of evidentiary strength.

It is tempting to dismiss this challenge by noting that expert witnesses work with multiple reference classes and entertain plausible ranges of values (Nance 2007). In fact, relying on multiple reference classes is customary in the assessment of DNA evidence. In Darling v. State, 808 So. 2d 145 (Fla. 2002), for example, a Polish woman living in Orlando was sexually assaulted and killed. The DNA expert testified about multiple random match probabilities, relying on frequencies about African-Americans, Caucasians and Southeastern Hispanics from the Miami area. Since the perpetrator could have belonged to any of these ethnic groups, the groups considered by the expert were all relevant under different scenarios of what could have happened.

Unlike expert witnesses, appellate courts often prefer that only one reference class be considered. In another case, Michael Pizarro, who matched the crime traces, was convicted of raping and suffocating his 13-year-old half-sister (People v.
The FBI analyst testified at trial that the likelihood of finding another unrelated Hispanic individual with the same genetic profile was approximately $$1/250,000$$. Since the race of the perpetrator was not known, Pizarro appealed, arguing that the DNA evidence was inadmissible. The appellate court sided with Pizarro and objected to the presentation of frequency estimates for the Hispanic population as well as frequencies for any other racial or ethnic groups. The court wrote:

> It does not matter how many Hispanics, Caucasians, Blacks, or Native Americans resemble the perpetrator if the perpetrator is actually Asian.

The uneasiness that appellate courts display when expert witnesses testify about multiple reference classes is understandable. Perhaps the reference class most favorable to the defendant should be selected, giving the accused the benefit of the doubt. This might be appropriate in some cases. But suppose the random match probability associated with a DNA match is 1 in 10 for people in group A, while it is 1 in 100 million for people in group B. Always going for the reference class that is most favorable to the defendant will in some cases weaken the incriminating force of DNA matches more than necessary.

#### 7.4.2 Relevance and critical questions

Legal probabilists have formulated different criteria for identifying the most appropriate reference class. Franklin (2011) is optimistic. On his approach, the most appropriate reference class for drawing an inference about an event or outcome $$B$$ is the class defined by the intersection of all the features that are relevant to $$B$$. Relevance is measured statistically given the available data as the co-variation of $$B$$ with the feature in question. Co-variation will be measured using appropriate statistical criteria, such as the correlation coefficient between two variables. For instance, in the Shonubi case, features such as being Nigerian, being a drug courier, and traveling toward JFK were all relevant for making an inference about the total amount of drugs Shonubi carried. Other features for which data were available, such as being a toll collector at the George Washington Bridge, were not relevant. The optimism of Franklin’s approach, however, is dimmed by the pervasiveness of the reference class problem (for more details, see Hájek 2007).

Instead of focusing on relevance only, the choice of the most appropriate reference class may also involve a mix of statistical (or epistemic) criteria and non-epistemic criteria. Dahlman (2018) proposes a list of critical questions, such as: Is the reference class heterogeneous or homogeneous? Is it robust? Does it put people in the reference class at an unfair disadvantage? The first two questions are epistemic, but the third is not. If people in certain ethnic or socio-economic groups engage in criminal acts more often than others, relying on ethnic or socio-economic reference classes may heighten society’s stigma and prejudice toward members of these groups. Relying on these reference classes should therefore be avoided, not because they are irrelevant but because they are prejudicial.
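Before turning to model selection, the dependence of the likelihood ratio on the reference class can be put in miniature. In the Python sketch below the profile frequencies are invented for illustration (the 1 in 250,000 figure only echoes the order of magnitude quoted in Pizarro); the same reported DNA match receives likelihood ratios that differ by seven orders of magnitude depending on the population used for the denominator.

```python
# Source-level likelihood ratio for a reported DNA match, computed against
# different reference classes. Numerator: probability of a match if the
# defendant is the source (taken as 1, ignoring lab error). Denominator:
# random match probability in the chosen population. All frequencies are
# invented for illustration.

profile_frequency = {
    "population A": 1 / 10,
    "population B": 1 / 100_000_000,
    "mixed urban sample": 1 / 250_000,
}

for reference_class, rmp in profile_frequency.items():
    lr = 1 / rmp
    print(f"{reference_class:>20}: random match prob = {rmp:.2e}, LR = {lr:,.0f}")
```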
#### 7.4.3 Model selection

The reference class problem can be thought of as a particular case of the model selection problem (Cheng 2009). The model should capture the data to some extent, but not overfit the data. Random variation in the data should not be built into the model. In statistics, different criteria for model selection exist, most notably Akaike’s Information Criterion (AIC), which is a popular measure of the trade-off between a model’s fit with the data and its complexity.

Consider the Shonubi case again. In order to predict the total amount of heroin Shonubi transported, a simple linear model could be used, where each reference class comes with an empirically established multiplier $$\beta$$ for the expected total amount of drugs carried, based on the amount carried on a single trip. Picking too generic a class (say, “airline passenger”) would lower the empirical adequacy of the model, but relying on too narrow a class would incorporate random noise. A model using the class “toll collector” would clearly perform (in terms of statistical measures such as AIC) worse than, say, a model based on the class “Nigerian drug courier at JFK”.

Colyvan and Regan (2007) argue that the reference class problem is a form of model uncertainty. For them, the reference class problem stems from uncertainty about the particular statistical model that should be employed. Model uncertainty is evident in the Shonubi case, as alternative models were discussed. The first model consisted in multiplying the amount which Shonubi was found carrying when he was arrested by the number of trips he made between Nigeria and New York. Admittedly, this model was too simplistic. The expert witness for the prosecution, Dr. Boyum, developed a second model. It was based on the DEA data and consisted in the simulation of 100,000 possible series of seven trips by re-sampling sets of seven net weights from the 117 known cases.

Boyum’s model was criticized by another expert in the case, Dr. Finkelstein, who complained that the model did not take into consideration systematic differences between trips. Presumably, smugglers will tend to carry larger quantities as they become more experienced. When they carry larger quantities, it should be more likely they would be apprehended. If the trip effect holds, the data would mostly be about advanced smugglers who tend to carry more than beginners. No empirical evidence for or against the trip effect theory could be found. Information on the number of previous trips was missing from the data, so an appropriate regression model for the trip effect could not be developed. On the other hand, Judge Weinstein argued that beginner smugglers do practice swallowing with grapes, and thus the learning curve should not be excessively steep. In addition, beginners are more likely to be apprehended than advanced smugglers. If so, the data would not be biased by the trip effect. Interestingly, information on Nigerian drug smuggling practices undermines the trip effect theory: the drug cartel did not waste money on sending half-filled drug mules, but rather made them practice swallowing balloons weeks prior to their first trip (Treaser 1992; Wren 1999).

Statistical analyses of the evidence in Shonubi were published after the case was decided. These analyses took into account other potential sources of error, such as outliers and biased data. Gastwirth, Freidlin, and Miao (2000) showed that, because of sensitivity to outliers, the inference that the total amount was above 3000 grams was unstable. However, even looking at the data in the light most favorable to the defendant, the total amount of drugs should be above the 1000 gram threshold (Gastwirth, Freidlin, and Miao 2000; Izenman 2000a,b).
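Boyum’s resampling method is easy to sketch in code. Since the 117 seizure weights are not reproduced in this entry, the observed sample below is a synthetic stand-in; only the resampling logic mirrors the expert’s procedure, and the printed percentile is not the figure from the case.

```python
# A Boyum-style simulation: build 100,000 hypothetical seven-trip careers by
# resampling single-trip net weights (with replacement) from the observed
# seizures, then inspect a low percentile of the simulated totals. The
# "observed" weights here are synthetic placeholders, not the DEA data.
import random

random.seed(0)
observed = [random.uniform(100.0, 1000.0) for _ in range(117)]  # stand-in

totals = sorted(
    sum(random.choices(observed, k=7))  # one simulated seven-trip career
    for _ in range(100_000)
)
print("1st percentile of simulated totals (grams):", round(totals[1000], 1))
```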
Besides Shonubi, many other cases raise the reference class or model selection problem in their own way. Just to list a few:

• Vuyanich v. Republic National Bank involved race and sex discrimination allegations. The case involved nine different expert witnesses deploying various statistical analyses. The case ended with a 127-page opinion.
• E.E.O.C. v. Federal Reserve Bank of Richmond was a similar case, where the appropriateness of various methods for group comparisons and the choice of the level of aggregation for the analysis of employment data was at play.
• Gulf South Insulation v. U.S. Consumer Product Safety Commission was related to banning the use of urea-formaldehyde foam insulation. The difficulty lay in the choice of a risk assessment model to calculate the risk of increased incidence of cancer.

These cases are interesting as well as complicated. They are discussed in some detail in Fienberg (1989).

## Bibliography

• Aitken, Colin, Paul Roberts, and Graham Jackson, 2010, “Fundamentals of Probability and Statistical Evidence in Criminal Proceedings: Guidance for Judges, Lawyers, Forensic Scientists and Expert Witnesses” (Practitioners Guide No 1), Royal Statistical Society’s Working Group on Statistics and the Law. [Aitken, Roberts, and Jackson 2010 available online]
• Aitken, Colin G.G. and Franco Taroni, 1995 [2004], Statistics and the Evaluation of Evidence for Forensic Scientists, Chichester, UK: John Wiley & Sons. Second edition, 2004. doi:10.1002/0470011238
• Allen, Ronald J., 1986, “A Reconceptualization of Civil Trials”, Boston University Law Review, 66: 401–437.
• –––, 2013, “Complexity, the Generation of Legal Knowledge, and the Future of Litigation”, UCLA Law Review, 60: 1384–1411.
• –––, 2014, “Burdens of Proof”, Law, Probability and Risk, 13(3–4): 195–219. doi:10.1093/lpr/mgu005
• –––, 2017, “The Nature of Juridical Proof: Probability as a Tool in Plausible Reasoning”, International Journal of Evidence and Proof, 21(2): 133–142.
• –––, 2020, “Legal Probabilism—A Qualified Rejection: A Response to Hedden and Colyvan”, Journal of Political Philosophy, 28(1): 117–128.
• –––, 2021, “Naturalized Epistemology and the Law of Evidence Revisited”, Quaestio Facti, 2: 253–283.
• Allen, Ronald J., David J. Balding, Peter Donnelly, Richard Friedman, David H. Kaye, Lewis Henry LaRue, Roger C. Park, Bernard Robertson, and Alexander Stein, 1995, “Probability and Proof in State v. Skipper: An Internet Exchange”, Jurimetrics, 35(3): 277–310.
• Allen, Ronald J. and Brian Leiter, 2001, “Naturalized Epistemology and the Law of Evidence”, Virginia Law Review, 87(8): 1491–1550.
• Allen, Ronald J. and Alex Stein, 2013, “Evidence, Probability and the Burden of Proof”, Arizona Law Journal, 55(3): 557–602.
• Allen, Ronald J. and Michael S. Pardo, 2007, “The Problematic Value of Mathematical Models of Evidence”, The Journal of Legal Studies, 36(1): 107–140. doi:10.1086/508269
• –––, 2019, “Relative Plausibility and Its Critics”, The International Journal of Evidence & Proof, 23(1–2): 5–59. doi:10.1177/1365712718813781
• Amaya, Amalia, 2015, The Tapestry of Reason: An Inquiry into the Nature of Coherence and its Role in Legal Argument, Oxford: Hart Publishing.
• Arkes, Hal R., Brittany Shoots-Reinhard, and Ryan S. Mayes, 2012, “Disjunction Between Probability and Verdict in Juror Decision Making”, Journal of Behavioral Decision Making, 25(3): 276–294. doi:10.1002/bdm.734
• Balding, David J., 2002, “The DNA Database Search Controversy”, Biometrics, 58(1): 241–244. doi:10.1111/j.0006-341X.2002.00241.x
• –––, 2005, Weight-of-Evidence for Forensic DNA Profiles, Hoboken, NJ: John Wiley & Sons.
• Balding, David J. and Peter Donnelly, 1996, “Evaluating DNA Profile Evidence When the Suspect Is Identified Through a Database Search”, Journal of Forensic Sciences, 41(4): 13961J. doi:10.1520/JFS13961J
• Barker, Matthew J., 2017, “Connecting Applied and Theoretical Bayesian Epistemology: Data Relevance, Pragmatics, and the Legal Case of Sally Clark”, Journal of Applied Philosophy, 34(2): 242–262. doi:10.1111/japp.12181
• Becker, Gary S., 1968, “Crime and Punishment: An Economic Approach”, Journal of Political Economy, 76(2): 169–217. doi:10.1086/259394
• Bernoulli, Jacobi, 1713, Ars Conjectandi, Basileae: Impensis Thurnisiorum, fratrum. Translated as The Art of Conjecture, 2005, Edith Dudley Sylla (trans), Baltimore: Johns Hopkins University Press.
• Bex, Floris J., 2011, Arguments, Stories and Criminal Evidence: A Formal Hybrid Theory (Law and Philosophy Library 92), Dordrecht: Springer Netherlands. doi:10.1007/978-94-007-0140-3
• –––, 2015, “An Integrated Theory of Causal Stories and Evidential Arguments”, in Proceedings of the 15th International Conference on Artificial Intelligence and Law (ICAIL ’15), San Diego CA: ACM Press, 13–22. doi:10.1145/2746090.2746094
• Biedermann, Alex, Tacha Hicks, Franco Taroni, Christophe Champod, and Colin Aitken, 2014, “On the Use of the Likelihood Ratio for Forensic Evaluation: Response to Fenton et al.”, Science & Justice, 54(4): 316–318. doi:10.1016/j.scijus.2014.04.001
• Blome-Tillmann, Michael, 2017, “‘More Likely Than Not’—Knowledge First and the Role of Bare Statistical Evidence in Courts of Law”, in Adam Carter, Emma Gordon, & Benjamin Jarvis (eds.), Knowledge First—Approaches in Epistemology and Mind, Oxford: Oxford University Press, pp. 278–292. doi:10.1093/oso/9780198716310.003.0014
• Bolinger, Renée Jorgensen, 2021, “Explaining the Justificatory Asymmetry Between Statistical and Individualized Evidence”, in The Social Epistemology of Legal Trials, Zachary Hoskins and Jon Robson (eds.), New York: Routledge, pp. 60–76.
• Boole, George, 1857, “On the Application of the Theory of Probabilities to the Question of the Combination of Testimonies or Judgments”, Transactions of the Royal Society of Edinburgh, 21(4): 597–653. doi:10.1017/S0080456800032312
• Bovens, Luc and Stephan Hartmann, 2004, Bayesian Epistemology, Oxford: Oxford University Press. doi:10.1093/0199269750.001.0001
• Bradley, Darren, 2015, A Critical Introduction to Formal Epistemology, London: Bloomsbury Publishing.
• Calabresi, Guido, 1961, “Some Thoughts on Risk Distribution and the Law of Torts”, Yale Law Journal, 70(4): 499–553.
• Cheng, Edward K., 2009, “A Practical Solution to the Reference Class Problem”, Columbia Law Review, 109: 2081–2105.
• –––, 2012, “Reconceptualizing the Burden of Proof”, Yale Law Journal, 122(5): 1254–1279.
• Cheng, Edward K. and G. Alexander Nunn, 2016, “DNA, Blue Bus, and Phase Changes”, The International Journal of Evidence & Proof, 20(2): 112–120. doi:10.1177/1365712715623556
• Cheng, Edward K. and Michael S. Pardo, 2015, “Accuracy, Optimality and the Preponderance Standard”, Law, Probability and Risk, 14(3): 193–212. doi:10.1093/lpr/mgv001
• Childers, Timothy, 2013, Philosophy and Probability, Oxford: Oxford University Press.
• Cohen, L. Jonathan, 1977, The Probable and The Provable, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198244127.001.0001
• –––, 1981, “Subjective Probability and the Paradox of the Gatecrasher”, Arizona State Law Journal, 1981: 627–634.
• Colyvan, Mark and Helen M. Regan, 2007, “Legal Decisions and the Reference Class Problem”, The International Journal of Evidence & Proof, 11(4): 274–285. doi:10.1350/ijep.2007.11.4.274
• Colyvan, Mark, Helen M. Regan, and Scott Ferson, 2001, “Is It a Crime to Belong to a Reference Class”, Journal of Political Philosophy, 9(2): 168–181. doi:10.1111/1467-9760.00123
• Condorcet, Marquis de, 1785, Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix, Paris.
• Cook, R., I.W. Evett, G. Jackson, P.J. Jones, and J.A. Lambert, 1998, “A Hierarchy of Propositions: Deciding Which Level to Address in Casework”, Science & Justice, 38(4): 231–239. doi:10.1016/S1355-0306(98)72117-3
• Cullison, Alan D., 1969, “Probability Analysis of Judicial Fact-Finding: A Preliminary Outline of the Subjective Approach”, Toledo Law Review, 1: 538–598.
• Dahlman, Christian, 2017, “Unacceptable Generalizations in Arguments on Legal Evidence”, Argumentation, 31(1): 83–99. doi:10.1007/s10503-016-9399-1
• –––, 2018, “Determining the Base Rate for Guilt”, Law, Probability and Risk, 17(1): 15–28. doi:10.1093/lpr/mgx009
• –––, 2020, “Naked Statistical Evidence and Incentives for Lawful Conduct”, The International Journal of Evidence & Proof, 24(2): 162–179. doi:10.1177/1365712720913333
• Dant, Mary, 1988, “Gambling on the Truth: The Use of Purely Statistical Evidence as a Basis for Civil Liability”, Columbia Journal of Law and Social Problems, 22: 31–70.
• Davidson, Barbara and Robert Pargetter, 1987, “Guilt beyond Reasonable Doubt”, Australasian Journal of Philosophy, 65(2): 182–187. doi:10.1080/00048408712342861
• Dawid, Alexander Philip, 1987, “The Difficulty About Conjunction”, The Statistician, 36(2/3): 91–97. doi:10.2307/2348501
• –––, 1994, “The Island Problem: Coherent Use of Identification Evidence”, in Aspects of Uncertainty: A Tribute to D. V. Lindley, P. R. Freeman and A. F. M. Smith (eds.), Chichester/New York: Wiley, pp. 159–170.
• –––, 2002, “Bayes’s Theorem and Weighing Evidence by Juries”, in Bayes’s Theorem, Richard Swinburne (ed.), Oxford: Oxford University Press, pp. 71–90.
• Dawid, A. Philip and Julia Mortera, 1996, “Coherent Analysis of Forensic Identification Evidence”, Journal of the Royal Statistical Society: Series B (Methodological), 58(2): 425–443. doi:10.1111/j.2517-6161.1996.tb02091.x
• –––, 2018, “Graphical Models for Forensic Analysis”, in Handbook of Graphical Models, Marloes Maathuis, Mathias Drton, Steffen Lauritzen, and Martin Wainwright (eds.), Boca Raton, FL: CRC Press, pp. 491–514.
• DeKay, Michael L., 1996, “The Difference between Blackstone-Like Error Ratios and Probabilistic Standards of Proof”, Law & Social Inquiry, 21(1): 95–132. doi:10.1111/j.1747-4469.1996.tb00013.x
• Dempster, A. P., 1968, “A Generalization of Bayesian Inference”, Journal of the Royal Statistical Society: Series B (Methodological), 30(2): 205–232. doi:10.1111/j.2517-6161.1968.tb00722.x
• Devitt, Edward J., Charles B. Blackmar and Michael A. Wolff, 1987, Federal Jury Practice and Instructions, (4th ed), St. Paul: West Publishing Company.
• de Zoete, Jacob, Marjan Sjerps, and Ronald Meester, 2017, “Evaluating Evidence in Linked Crimes with Multiple Offenders”, Science & Justice, 57(3): 228–238. doi:10.1016/j.scijus.2017.01.003
• de Zoete, Jacob, Norman Fenton, Takao Noguchi, and David Lagnado, 2019, “Resolving the So-Called ‘Probabilistic Paradoxes in Legal Reasoning’ with Bayesian Networks”, Science & Justice, 59(4): 367–379. doi:10.1016/j.scijus.2019.03.003
• de Zoete, Jacob and Marjan Sjerps, 2018, “Combining Multiple Pieces of Evidence Using a Lower Bound for the LR”, Law, Probability and Risk, 17(2): 163–178. doi:10.1093/lpr/mgy006
• Diamond, Henry A., 1990, “Reasonable Doubt: To Define, or Not to Define”, Columbia Law Review, 90(6): 1716–1736.
• Di Bello, Marcello, 2019, “Trial by Statistics: Is a High Probability of Guilt Enough to Convict?”, Mind, 128(512): 1045–1084. doi:10.1093/mind/fzy026
• Di Bello, Marcello and Collin O’Neil, 2020, “Profile Evidence, Fairness, and the Risks of Mistaken Convictions”, Ethics, 130(2): 147–178. doi:10.1086/705764
• Di Bello, Marcello and Bart Verheij, 2018, “Evidential Reasoning”, in Handbook of Legal Reasoning and Argumentation, Giorgio Bongiovanni, Gerald Postema, Antonino Rotolo, Giovanni Sartor, Chiara Valentini, and Douglas Walton (eds.), Dordrecht: Springer Netherlands, 447–493. doi:10.1007/978-90-481-9452-0_16
• Donnelly, Peter, 1995, “Nonindependence of Matches at Different Loci in DNA Profiles: Quantifying the Effect of Close Relatives on the Match Probability”, Heredity, 75(1): 26–34. doi:10.1038/hdy.1995.100
• Donnelly, Peter and Richard D. Friedman, 1999, “DNA Database Searches and the Legal Consumption of Scientific Evidence”, Michigan Law Review, 97(4): 931–984. doi:10.2307/1290377
• Dror, Itiel E., David Charlton, and Ailsa E. Péron, 2006, “Contextual Information Renders Experts Vulnerable to Making Erroneous Identifications”, Forensic Science International, 156(1): 74–78. doi:10.1016/j.forsciint.2005.10.017
• Duff, Antony, Lindsay Farmer, Sandra Marshall, and Victor Tadros, 2007, The Trial on Trial (Volume 3): Towards a Normative Theory of the Criminal Trial, Oxford: Hart Publishing.
• Earman, John, 1992, Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory, Cambridge, MA: MIT Press.
• Ebert, Philip A., Martin Smith, and Ian Durbach, 2018, “Lottery Judgments: A Philosophical and Experimental Study”, Philosophical Psychology, 31(1): 110–138. doi:10.1080/09515089.2017.1367767
• Edwards, Ward, 1991, “Influence Diagrams, Bayesian Imperialism, and the Collins Case: An Appeal to Reason”, Cardozo Law Review, 13: 1025–1074.
• Eggleston, Richard, 1978, Evidence, Proof and Probability, London: Weidenfeld and Nicolson.
• Ekelöf, Per Olof, 1964, “Free Evaluation of Evidence”, Scandinavian Studies in Law, 8: 47–66.
• [ENFSI] European Network of Forensic Science Institutes, 2015, ENFSI Guidelines for Evaluative Reporting in Forensic Sciences. [ENFSI 2015 available online]
• Engel, Christoph, 2012, “Neglect the Base Rate: It’s the Law!”, Max Planck Institute (MPI) for Research on Collective Goods Preprint, 2012/23. doi:10.2139/ssrn.2192423
• Enoch, David and Talia Fisher, 2015, “Sense and ‘Sensitivity’: Epistemic and Instrumental Approaches to Statistical Evidence”, Stanford Law Review, 67: 557–611.
• Enoch, David, Levi Spectre, and Talia Fisher, 2012, “Statistical Evidence, Sensitivity, and the Legal Value of Knowledge”, Philosophy & Public Affairs, 40(3): 197–224. doi:10.1111/papa.12000
• Evett, I.W., 1987, “On Meaningful Questions: A Two-Trace Transfer Problem”, Journal of the Forensic Science Society, 27(6): 375–381. doi:10.1016/S0015-7368(87)72785-6
• Evett, I.W., G. Jackson, and J.A. Lambert, 2000, “More on the Hierarchy of Propositions: Exploring the Distinction between Explanations and Propositions”, Science & Justice, 40(1): 3–10. doi:10.1016/S1355-0306(00)71926-5
• Fenton, Norman, Daniel Berger, David Lagnado, Martin Neil, and Anne Hsu, 2014, “When ‘Neutral’ Evidence Still Has Probative Value (with Implications from the Barry George Case)”, Science & Justice, 54(4): 274–287. doi:10.1016/j.scijus.2013.07.002
• Fenton, Norman, David Lagnado, Christian Dahlman, and Martin Neil, 2019, “The Opportunity Prior: A Proof-Based Prior for Criminal Cases”, Law, Probability and Risk, 15(4): 237–253. doi:10.1093/lpr/mgz007
• Fenton, Norman and Martin Neil, 2013 [2018], Risk Assessment and Decision Analysis with Bayesian Networks, Boca Raton, FL: Chapman and Hall/CRC Press. Second edition, 2018.
• Fenton, Norman, Martin Neil, and David A. Lagnado, 2013, “A General Structure for Legal Arguments About Evidence Using Bayesian Networks”, Cognitive Science, 37(1): 61–102. doi:10.1111/cogs.12004
• Ferguson, Andrew Guthrie, 2020, “Big Data Prosecution and Brady”, UCLA Law Review, 67: 180–256.
• Fienberg, Stephen E. (ed.), 1989, The Evolving Role of Statistical Assessments as Evidence in the Courts, New York: Springer New York. doi:10.1007/978-1-4612-3604-7
• Finkelstein, Michael O., 2009, Basic Concepts of Probability and Statistics in the Law, New York, NY: Springer New York. doi:10.1007/b105519
• Finkelstein, Michael O. and William B. Fairley, 1970, “A Bayesian Approach to Identification Evidence”, Harvard Law Review, 83(3): 489–517. doi:10.2307/1339656
• Foreman, L.A., C. Champod, I.W. Evett, J.A. Lambert, and S. Pope, 2003, “Interpreting DNA Evidence: A Review”, International Statistical Review, 71(3): 473–495. doi:10.1111/j.1751-5823.2003.tb00207.x
• Franklin, James, 2011, “Objective Bayesian Conceptualisation of Proof and Reference Class Problems”, Sydney Law Review, 33(3): 545–561.
• Friedman, Ori and John Turri, 2015, “Is Probabilistic Evidence a Source of Knowledge?”, Cognitive Science, 39(5): 1062–1080. doi:10.1111/cogs.12182
• Friedman, Richard D., 1986, “A Diagrammatic Approach to Evidence”, Boston University Law Review, 66(4): 571–622.
• –––, 1987, “Route Analysis of Credibility and Hearsay”, The Yale Law Journal, 97(4): 667–742.
• –––, 1996, “Assessing Evidence”, (Review of Aitken & Taroni 1995 and Robertson & Vignaux 1995) Michigan Law Review, 94(6): 1810–1838.
• –––, 2000, “A Presumption of Innocence, Not of Even Odds”, Stanford Law Review, 52(4): 873–887.
• Gaag, Linda C. van der, Silja Renooij, Cilia L. M. Witteman, Berthe M. P. Aleman, and Babs G. Taal, 1999, “How to Elicit Many Probabilities”, UAI'99: Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence, pp. 647–654. [Gaag et al. 1999 available online]
• Garbolino, Paolo, 2014, Probabilità e logica della prova, Milan: Giuffrè Editore.
• Gardiner, Georgi, 2018, “Legal Burdens of Proof and Statistical Evidence”, in David Coady & James Chase (eds.), Routledge Handbook of Applied Epistemology, London: Routledge, ch. 14.
• –––, 2019, “The Reasonable and the Relevant: Legal Standards of Proof”, Philosophy & Public Affairs, 47(3): 288–318. doi:10.1111/papa.12149
• Gastwirth, Joseph L. (ed.), 2000, Statistical Science in the Courtroom, New York, NY: Springer New York. doi:10.1007/978-1-4612-1216-4
• Gastwirth, Joseph L., Boris Freidlin, and Weiwen Miao, 2000, “The Shonubi Case as an Example of the Legal System’s Failure to Appreciate Statistical Evidence”, in Gastwirth 2000: 405–413. doi:10.1007/978-1-4612-1216-4_21
• Gillies, Donald, 2000, Philosophical Theories of Probability (Philosophical Issues in Science), London: Routledge.
• Gordon, Thomas F., Henry Prakken, and Douglas Walton, 2007, “The Carneades Model of Argument and Burden of Proof”, Artificial Intelligence, 171(10–15): 875–896. doi:10.1016/j.artint.2007.04.010
• Griffin, Lisa, 2013, “Narrative, Truth, and Trial”, Georgetown Law Journal, 101: 281–335.
• Haack, Susan, 2014a, Evidence Matters: Science, Proof, and Truth in the Law, Cambridge: Cambridge University Press. doi:10.1017/CBO9781139626866
• –––, 2014b, “Legal Probabilism: An Epistemological Dissent”, in Haack 2014a: 47–77.
• Hacking, Ian, 1990, The Taming of Chance, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511819766
• Hájek, Alan, 2007, “The Reference Class Problem Is Your Problem Too”, Synthese, 156(3): 563–585.
• Hamer, David, 2004, “Probabilistic Standards of Proof, Their Complements and the Errors That Are Expected to Flow from Them”, University of New England Law Journal, 1(1): 71–107.
• Harcourt, Bernard E., 2006, Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age, Chicago: University Of Chicago Press.
• –––, 2018, “The Systems Fallacy: A Genealogy and Critique of Public Policy and Cost-Benefit Analysis”, The Journal of Legal Studies, 47(2): 419–447. doi:10.1086/698135
• Harman, Gilbert, 1968, “Knowledge, Inference, and Explanation”, American Philosophical Quarterly, 5(3): 164–173.
• Hawthorne, John, 2004, Knowledge and Lotteries, Oxford: Clarendon Press. doi:10.1093/0199269556.001.0001
• Hedden, Brian and Mark Colyvan, 2019, “Legal Probabilism: A Qualified Defence”, Journal of Political Philosophy, 27(4): 448–468. doi:10.1111/jopp.12180
• Hepler, Amanda B., A. Philip Dawid, and Valentina Leucari, 2007, “Object-Oriented Graphical Representations of Complex Patterns of Evidence”, Law, Probability and Risk, 6(1–4): 275–293. doi:10.1093/lpr/mgm005
• Ho Hock Lai, 2008, A Philosophy of Evidence Law: Justice in the Search for Truth, Oxford: Oxford University Press.
• Horowitz, Irwin A. and Laird C. Kirkpatrick, 1996, “A Concept in Search of a Definition: The Effects of Reasonable Doubt Instructions on Certainty of Guilt Standards and Jury Verdicts”, Law and Human Behavior, 20(6): 655–670. doi:10.1007/BF01499236
• Izenman, Alan Julian, 2000a, “Assessing the Statistical Evidence in the Shonubi Case”, in Gastwirth 2000: 415–443. doi:10.1007/978-1-4612-1216-4_22
• –––, 2000b, “Introduction to Two Views on the Shonubi Case”, in Gastwirth 2000: 393–403. doi:10.1007/978-1-4612-1216-4_20
• Kadane, Joseph B. and David A. Schum, 2011, A Probabilistic Analysis of the Sacco and Vanzetti Evidence, New York: John Wiley & Sons.
• Kahneman, Daniel and Amos Tversky, 1973, “On the Psychology of Prediction”, Psychological Review, 80(4): 237–251. doi:10.1037/h0034747
• Kaiser, L. and Seber, G.A., 1983, “Paternity testing: I. Calculation of paternity indexes”, American Journal of Medical Genetics, 15(2): 323–329. doi:10.1002/ajmg.1320150216
• Kaplan, John, 1968, “Decision Theory and the Fact-Finding Process”, Stanford Law Review, 20(6): 1065–1092.
• Kaplow, Louis, 2012, “Burden of Proof”, Yale Law Journal, 121(4): 738–1013.
• –––, 2014, “Likelihood Ratio Tests and Legal Decision Rules”, American Law and Economics Review, 16(1): 1–39. doi:10.1093/aler/aht020
• Kaye, David H., 1979a, “Probability Theory Meets Res Ipsa Loquitur”, Michigan Law Review, 77(6): 1456–1484.
• –––, 1979b, “The Laws of Probability and the Law of the Land”, The University of Chicago Law Review, 47(1): 34–56.
• –––, 1979c, “The Paradox of the Gatecrasher and Other Stories”, The Arizona State Law Journal, 1979: 101–110.
• –––, 1980, “Mathematical Models and Legal Realities: Some Comments on the Poisson Model of Jury Behavior”, Connecticut Law Review, 13(1): 1–15.
• –––, 1982, “The Limits of the Preponderance of the Evidence Standard: Justifiably Naked Statistical Evidence and Multiple Causation”, American Bar Foundation Research Journal, 7(2): 487–516. doi:10.1111/j.1747-4469.1982.tb00464.x
• –––, 1986, “Do We Need a Calculus of Weight to Understand Proof Beyond a Reasonable Doubt?”, Boston University Law Review, 66: 657–672.
• –––, 1999, “Clarifying the Burden of Persuasion: What Bayesian Rules Do and Not Do”, International Commentary on Evidence, 3(1): 1–28. doi:10.1177/136571279900300101
• Kaye, David H. and George F. Sensabaugh, 2011, “Reference Guide on DNA Identification Evidence”, in Reference Manual on Scientific Evidence, third edition, Federal Judicial Center, 129–210.
• Koehler, Jonathan J., 1996, “The Base Rate Fallacy Reconsidered: Descriptive, Normative, and Methodological Challenges”, Behavioral and Brain Sciences, 19(1): 1–17. doi:10.1017/S0140525X00041157
• Koehler, Jonathan J. and Daniel N. Shaviro, 1990, “Veridical Verdicts: Increasing Verdict Accuracy Through the Use of Overtly Probabilistic Evidence and Methods”, Cornell Law Review, 75(2): 247–279.
• Kruskal, William, 1988, “Miracles and Statistics: The Casual Assumption of Independence”, Journal of the American Statistical Association, 83(404): 929–940. doi:10.1080/01621459.1988.10478682
• Lacave, Carmen and Francisco J. Díez, 2002, “A Review of Explanation Methods for Bayesian Networks”, The Knowledge Engineering Review, 17(2): 107–127. doi:10.1017/S026988890200019X
• Laplace, Pierre-Simon, 1814, Essai Philosophique sur les Probabilités, Paris. Translated as A philosophical essay on probabilities, Frederick Wilson Truscott and Frederick Lincoln Emory (trans), 1951, New York: Dover.
• Laudan, Larry, 2006, Truth, Error, and Criminal Law: An Essay in Legal Epistemology, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511617515
• –––, 2010, “Need Verdicts Come in Pairs?”, The International Journal of Evidence & Proof, 14(1): 1–24. doi:10.1350/ijep.2010.14.1.338
• –––, 2016, The Law’s Flaws: Rethinking Trials and Errors?, London: College Publications.
• Laudan, Larry and Harry Saunders, 2009, “Re-Thinking the Criminal Standard of Proof: Seeking Consensus about the Utilities of Trial Outcomes”, International Commentary on Evidence, 7(2): article 1. [Laudan and Saunders 2009 available online]
• Lawlor, Krista, 2013, Assurance: An Austinian View of Knowledge and Knowledge Claims, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780199657896.001.0001
• Leitgeb, Hannes, 2014, “The Stability Theory of Belief”, Philosophical Review, 123(2): 131–171. doi:10.1215/00318108-2400575
• Lempert, Richard O., 1977, “Modeling Relevance”, Michigan Law Review, 75: 1021–1057.
• Levanon, Liat, 2019, “Statistical Evidence, Assertions and Responsibility”, The Modern Law Review, 82(2): 269–292. doi:10.1111/1468-2230.12404
• Lillquist, Erik, 2002, “Recasting Reasonable Doubt: Decision Theory and the Virtues of Variability”, University of California Davis Law Review, 36(1): 85–197.
• Littlejohn, Clayton, 2020, “Truth, Knowledge, and the Standard of Proof in Criminal Law”, Synthese, 197(12): 5253–5286. doi:10.1007/s11229-017-1608-4
• Loftus, Elizabeth F., 1979 [1996], Eyewitness Testimony, Cambridge, MA: Harvard University Press. Revised edition, 1996.
• Lucy, David, 2013, Introduction to Statistics for Forensic Scientists, Chichester: John Wiley & Sons.
• Lyon, Thomas D. and Jonathan J. Koehler, 1996, “Relevance Ratio: Evaluating the Probative Value of Expert Testimony in Child Sexual Abuse Cases”, Cornell Law Review, 82(1): 43–78.
• Malcom, Brooke G., 2008, “Convictions Predicated on DNA Evidence Alone: How Reliable Evidence Became Infallible”, Columbia Law Review, 38(2): 313–338.
• Mayo, Deborah G., 2018, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars, Cambridge: Cambridge University Press. doi:10.1017/9781107286184
• Meester, Ronald and Marjan Sjerps, 2004, “Why the Effect of Prior Odds Should Accompany the Likelihood Ratio When Reporting DNA Evidence”, Law, Probability and Risk, 3(1): 51–62. doi:10.1093/lpr/3.1.51
• Mellor, David Hugh, 2004, Probability: A Philosophical Introduction, London: Routledge.
• Moss, Sarah, 2018, Probabilistic Knowledge, Oxford: Oxford University Press.
• –––, forthcoming, “Knowledge and Legal Proof”, Oxford Studies in Epistemology 7, Oxford: Oxford University Press.
• Nance, Dale A., 2007, “The Reference Class Problem and Mathematical Models of Inference”, The International Journal of Evidence & Proof, 11(4): 259–273. doi:10.1350/ijep.2007.11.4.259
• –––, 2016, The Burdens of Proof: Discriminatory Power, Weight of Evidence, and Tenacity of Belief, Cambridge: Cambridge University Press. doi:10.1017/CBO9781316415771
• National Research Council, [NRC II] 1996, The Evaluation of Forensic DNA Evidence, Washington, DC: The National Academies Press. doi:10.17226/5141
• Neapolitan, Richard E., 2004, Learning Bayesian Networks, Upper Saddle River, NJ: Pearson/Prentice Hall.
• Neil, Martin, Norman Fenton, David Lagnado, and Richard David Gill, 2019, “Modelling Competing Legal Arguments Using Bayesian Model Comparison and Averaging”, Artificial Intelligence and Law, 27(4): 403–430. doi:10.1007/s10506-019-09250-3
• Neil, Martin, Norman Fenton, and Lars Nielson, 2000, “Building Large-Scale Bayesian Networks”, The Knowledge Engineering Review, 15(3): 257–284. doi:10.1017/S0269888900003039
• Nelkin, Dana K., 2000, “The Lottery Paradox, Knowledge, and Rationality”, Philosophical Review, 109(3): 373–408. doi:10.1215/00318108-109-3-373
• Nesson, Charles R., 1979, “Reasonable Doubt and Permissive Inferences: The Value of Complexity”, Harvard Law Review, 92(6): 1187–1225. doi:10.2307/1340444
• Newman, Jon O., 1993, “Beyond ‘Reasonable Doubt’”, New York University Law Review, 68(5): 979–1002.
• Niedermeier, Keith E., Norbert L. Kerr, and Lawrence A. Messé, 1999, “Jurors’ Use of Naked Statistical Evidence: Exploring Bases and Implications of the Wells Effect”, Journal of Personality and Social Psychology, 76(4): 533–542. doi:10.1037/0022-3514.76.4.533
• Nitzan, Shmuel, 2009, Collective Preference and Choice, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511803871
• Papineau, David, forthcoming, “The Disvalue of Knowledge”, Synthese, first online: 4 October 2019. doi:10.1007/s11229-019-02405-4
• Pardo, Michael S., 2013, “The Nature and Purpose of Evidence Theory”, Vanderbilt Law Review, 66: 547–613.
• –––, 2019, “The Paradoxes of Legal Proof: A Critical Guide”, Boston University Law Review, 99(1): 233–290.
• Pardo, Michael S. and Ronald J. Allen, 2008, “Juridical Proof and the Best Explanation”, Law and Philosophy, 27(3): 223–268. doi:10.1007/s10982-007-9016-4
• Park, Roger C., Peter Tillers, Frederick C. Moss, D. Michael Risinger, David H. Kaye, Ronald J. Allen, Samuel R. Gross, Bruce L. Hay, Michael S. Pardo, and Paul F. Kirgis, 2010, “Bayes Wars Redivivus—An Exchange”, International Commentary on Evidence, 8(1). doi:10.2202/1554-4567.1115
• Pennington, Nancy and Reid Hastie, 1991, “A Cognitive Theory of Juror Decision Making: The Story Model”, Cardozo Law Review, 13: 519–557.
• –––, 1993, “Reasoning in Explanation-Based Decision Making”, Cognition, 49(1–2): 123–163. doi:10.1016/0010-0277(93)90038-W
• Picinali, Federico, 2013, “Two Meanings of ‘Reasonableness’: Dispelling the ‘Floating’ Reasonable Doubt”, The Modern Law Review, 76(5): 845–875. doi:10.1111/1468-2230.12038
• –––, 2016, “Base-Rates of Negative Traits: Instructions for Use in Criminal Trials”, Journal of Applied Philosophy, 33(1): 69–87. doi:10.1111/japp.12109
• Poisson, Siméon Denis, 1837, Recherches sur la Probabilité des Jugements en Matière Criminelle et en Matière Civile, Paris: Bachelier. Translated as Researches into the Probabilities of Judgements in Criminal and Civil Cases, 2013, Oscar Sheynin (trans), Berlin: NG-Verlag.
• Posner, Richard, 1973, The Economic Analysis of Law, Boston: Brown & Company.
• Prakken, Henry and Giovanni Sartor, 2009, “A Logical Analysis of Burdens of Proof”, in Legal Evidence and Proof: Statistics, Stories, Logic, Hendrik Kaptein, Henry Prakken, & Bart Verheij (eds.), London/New York: Routledge, pp. 223–253.
• Redmayne, Mike, 2008, “Exploring the Proof Paradoxes”, Legal Theory, 14(4): 281–309. doi:10.1017/S1352325208080117
• Reichenbach, Hans, 1935 [1949], Wahrscheinlichkeitslehre; eine untersuchung über die logischen und mathematischen grundlagen der wahrscheinlichkeitsrechnung, Leiden: A. W. Sijthoff’s uitgeversmaatschappij. Translated as The Theory of Probability: An Inquiry into the Logical and Mathematical Foundations of the Calculus of Probability, second edition, Ernest H. Hutten and Maria Reichenbach (trans), Berkeley, CA: University of California Press.
• Renooij, Silja, 2001, “Probability Elicitation for Belief Networks: Issues to Consider”, The Knowledge Engineering Review, 16(3): 255–269. doi:10.1017/S0269888901000145
• Robertson, Bernard and G.A. Vignaux, 1995 [2016], Interpreting Evidence: Evaluating Forensic Science in the Courtroom, Chichester, UK: John Wiley & Sons. Second edition is Robertson, Vignaux, and Berger 2016.
• Robertson, Bernard, G.A. Vignaux, and Charles E.H. Berger, 2016, Interpreting Evidence: Evaluating Forensic Science in the Courtroom, second edition, Chichester, UK: John Wiley & Sons. First edition is Robertson and Vignaux 1995. doi:10.1002/9781118492475
• Ross, Lewis, 2021, “Rehabilitating Statistical Evidence”, Philosophy and Phenomenological Research, 102(1): 3–23. doi:10.1111/phpr.12622
• Roth, Andrea, 2010, “Safety in Numbers? Deciding When DNA Alone Is Enough to Convict”, New York University Law Review, 85(4): 1130–1185.
• Royall, Richard M., 1997, Statistical Evidence: A Likelihood Paradigm, London/New York: Chapman & Hall.
• Saks, Michael J. and Robert F. Kidd, 1980, “Human Information Processing and Adjudication: Trial by Heuristics”, Law and Society Review, 15(1): 123–160.
• Schmalbeck, Richard, 1986, “The Trouble with Statistical Evidence”, Law and Contemporary Problems, 49(3): 221–236.
• Schoeman, Ferdinand, 1987, “Statistical vs. Direct Evidence”, Noûs, 21(2): 179–198. doi:10.2307/2214913
• Schwartz, David S. and Elliott R. Sober, 2017, “The Conjunction Problem and the Logic of Jury Findings”, William & Mary Law Review, 59(2): 619–692.
• Schweizer, Mark, 2013, “The Law Doesn’t Say Much About Base Rates”, SSRN Electronic Journal. doi:10.2139/ssrn.2329387
• Scutari, Marco and Jean-Baptiste Denis, 2015, Bayesian Networks: With Examples in R, New York: Chapman and Hall/CRC. doi:10.1201/b17065
• Sesardic, Neven, 2007, “Sudden Infant Death or Murder? A Royal Confusion About Probabilities”, The British Journal for the Philosophy of Science, 58(2): 299–329. doi:10.1093/bjps/axm015
• Shafer, Glenn, 1976, A Mathematical Theory of Evidence, Princeton, NJ: Princeton University Press.
• Schauer, Frederick, 2003, Profiles, Probabilities, and Stereotypes, Cambridge, MA: Belknap Press.
• Shen, Qiang, Jeroen Keppens, Colin Aitken, Burkhard Schafer, and Mark Lee, 2007, “A Scenario-Driven Decision Support System for Serious Crime Investigation”, Law, Probability and Risk, 5(2): 87–117. doi:10.1093/lpr/mgl014
• Simon, Dan, 2004, “A Third View of the Black Box: Cognitive Coherence in Legal Decision Making”, University of Chicago Law Review, 71: 511–586.
• Simons, Daniel J. and Christopher F. Chabris, 1999, “Gorillas in Our Midst: Sustained Inattentional Blindness for Dynamic Events”, Perception, 28(9): 1059–1074. doi:10.1068/p281059
• Skyrms, Brian, 1966, Choice and Chance: An Introduction to Inductive Logic, Belmont, CA: Dickenson Pub.
• –––, 1980, Causal Necessity: a Pragmatic Investigation of the Necessity of Laws, New Haven, CT: Yale University Press.
• Smith, Martin, 2018, “When Does Evidence Suffice for Conviction?”, Mind, 127(508): 1193–1218. doi:10.1093/mind/fzx026
• Stein, Alex, 2005, Foundations of Evidence Law, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198257363.001.0001
• –––, 2008, “The Right to Silence Helps the Innocent: A Response to Critics”, Cardozo Law Review, 30(3): 1115–1140.
• Sullivan, Sean Patrick, 2019, “A Likelihood Story: The Theory of Legal Fact-Finding”, University of Colorado Law Review, 90(1): 1–66.
• Suzuki, Jeff, 2015, Constitutional Calculus: The Math of Justice and the Myth of Common Sense, Baltimore, MD: Johns Hopkins University Press.
• Sykes, Deanna L. and Joel T. Johnson, 1999, “Probabilistic Evidence Versus the Representation of an Event: The Curious Case of Mrs. Prob’s Dog”, Basic and Applied Social Psychology, 21(3): 199–212. doi:10.1207/S15324834BASP2103_4
• Taroni, Franco, Alex Biedermann, Silvia Bozza, Paolo Garbolino, and Colin Aitken, 2014, Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science, second edition, Chichester, UK: John Wiley & Sons, Ltd. doi:10.1002/9781118914762
• Thompson, William C. and Edward L. Schumann, 1987, “Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor’s Fallacy and the Defense Attorney’s Fallacy”, Law and Human Behavior, 11(3): 167–187. doi:10.1007/BF01044641
• Thomson, Judith Jarvis, 1986, “Liability and Individualized Evidence”, Law and Contemporary Problems, 49(3): 199–219.
• Tillers, Peter, 1997, “Introduction: Three Contributions to Three Important Problems in Evidence Scholarship”, Cardozo Law Review, 18: 1875–1889. [The portion of this article that discusses statistical evidence is often referred to as ‘United States v. Shonubi: A Statistical Oddity’.]
• –––, 2005, “If Wishes Were Horses: Discursive Comments on Attempts to Prevent Individuals from Being Unfairly Burdened by Their Reference Classes”, Law, Probability and Risk, 4(1–2): 33–49. doi:10.1093/lpr/mgi001
• Treaser, Joseph B., 1992, “Nigerian Connection Floods U.S. Airport with Asian Heroin”, New York Times, National edition, 15 February 1992, Section 1, Page 1.
• Tribe, Laurence H., 1971, “Trial by Mathematics: Precision and Ritual in the Legal Process”, Harvard Law Review, 84(6): 1329–1393. doi:10.2307/1339610
• Triggs, Christopher M. and John S. Buckleton, 2004, “Comment on: Why the Effect of Prior Odds Should Accompany the Likelihood Ratio When Reporting DNA Evidence”, Law, Probability and Risk, 3(1): 73–82. doi:10.1093/lpr/3.1.73
• Urbaniak, Rafal, 2018, “Narration in Judiciary Fact-Finding: A Probabilistic Explication”, Artificial Intelligence and Law, 26(4): 345–376. doi:10.1007/s10506-018-9219-z
• –––, 2019, “Probabilistic Legal Decision Standards Still Fail”, Journal of Applied Logics, 6(5): 865–902.
• Urbaniak, Rafal and Pavel Janda, 2020, “Probabilistic Models of Legal Corroboration”, The International Journal of Evidence & Proof, 24(1): 12–34. doi:10.1177/1365712719864608
• Urbaniak, Rafal, Alicja Kowalewska, Pavel Janda, and Patryk Dziurosz-Serafinowicz, 2020, “Decision-Theoretic and Risk-Based Approaches to Naked Statistical Evidence: Some Consequences and Challenges”, Law, Probability and Risk, 19(1): 67–83. doi:10.1093/lpr/mgaa001
• Venn, John, 1866, The Logic of Chance: An Essay on the Foundations and Province of the Theory of Probability, with Especial Reference to Its Application to Moral and Social Science, London/Cambridge: Macmillan.
• Verheij, Bart, 2017, “Proof with and without Probabilities: Correct Evidential Reasoning with Presumptive Arguments, Coherent Hypotheses and Degrees of Uncertainty”, Artificial Intelligence and Law, 25(1): 127–154. doi:10.1007/s10506-017-9199-4
• Vlek, Charlotte S., Henry Prakken, Silja Renooij, and Bart Verheij, 2014, “Building Bayesian Networks for Legal Evidence with Narratives: A Case Study Evaluation”, Artificial Intelligence and Law, 22(4): 375–421. doi:10.1007/s10506-014-9161-7
• Volokh, Alexander, 1997, “n Guilty Men”, University of Pennsylvania Law Review, 146(2): 173–216.
• Walen, Alec, 2015, “Proof Beyond a Reasonable Doubt: A Balanced Retributive Account”, Louisiana Law Review, 76(2): 355–446.
• Walley, Peter, 1991, Statistical Reasoning with Imprecise Probabilities, London: Chapman and Hall.
• Wasserman, David, 2002, “Forensic DNA Typing”, in A Companion to Genethics, Justine Burley and John Harris (eds.), Oxford, UK: Blackwell Publishing, 349–363. doi:10.1002/9780470756423.ch26
• Wells, Gary L., 1992, “Naked Statistical Evidence of Liability: Is Subjective Probability Enough?”, Journal of Personality and Social Psychology, 62(5): 739–752. doi:10.1037/0022-3514.62.5.739
• Williams, Glanville, 1979, “The Mathematics of Proof (Parts I and II)”, Criminal Law Review, 297–312 (part I) and 340–354 (part II).
• Wixted, John T. and Gary L. Wells, 2017, “The Relationship Between Eyewitness Confidence and Identification Accuracy: A New Synthesis”, Psychological Science in the Public Interest, 18(1): 10–65. doi:10.1177/1529100616686966
• Wren, Christopher S., 1999, “A Pipeline of the Poor Feeds the Flow of Heroin: Traffickers Field More ‘Swallowers’ To Evade Sophisticated Drug Crackdown”, New York Times, National Edition, 21 February 1999, section 1, page 37.
• Wright, Richard W., 1988, “Causation, Responsibility, Risk, Probability, Naked Statistics, and Proof: Pruning the Bramble Bush by Clarifying the Concepts”, Iowa Law Review, 73: 1001–1077.
• Zabell, Sandy L., 2005, “Fingerprint Evidence”, Journal of Law and Policy, 13(1): 143–179.
# Basic Rocket Science: Sub-Orbital Versus Orbital

The private space venture Blue Origin made some history on November 23rd 2015 with a successful launch, re-entry, and landing of its fully reusable, passenger-carrying rocket. The project's single-stage launch vehicle lofted a prototype 6-human module to an altitude of approximately 100 kilometers (330,000 feet). The module re-entered the atmosphere and deployed parachutes for a dry touchdown, while the rocket performed a remarkable free-fall and powered landing to return to the very same launchpad it had left a short time earlier.

There are no bones to be picked about this: it's a very, very cool technical accomplishment. But it also serves to provide some perspective on the challenges of getting into space, getting into orbit, and getting beyond.

Let's break the problem down very simply, starting with a slide I sometimes use in teaching the rudiments of spaceflight. The final equation represents the extreme, the whole hog. If you want to escape the Earth entirely, to get out to interplanetary space in one ballistic shot, you need to quickly reach the escape velocity v_e, which is about 11.2 kilometers per second (40,300 km/hour). Of course you don't have to do it like this. As long as your rocket can provide an upwards force greater than the gravitational force pulling against you, it's perfectly fine to slowly crawl your way out to deep space.

But how does this compare to, say, reaching a low Earth orbit? In terms of the required velocity to maintain a circular orbit at some height h_orbit above the planet, this next slide tells all.

This seems promising: you only need to reach about 70% of escape velocity in order to hold an orbit. If we ignore the energy required to gain an orbital altitude (to get above the drag of the atmosphere), the energy needed to just reach that orbital velocity is about 50% of that needed for complete escape.

Except, how does this stack up against reaching a sub-orbital point? In other words, doing what Blue Origin did, which is basically to shoot straight up and fall back down again. I won't list the details here, but the calculation is simple; we can just ask what the difference is in gravitational potential energy between an object at some altitude above the surface of the Earth and at the surface of the Earth. For a 100 kilometer jaunt that change in energy is about 1.5% of the energy required to reach escape velocity, or about 3% of the energy required to establish an orbit.

In other words, to progress from making a 100 km sub-orbital 'drop' to getting into low-Earth orbit involves roughly a factor of 32 increase in energy budget. And that figure takes no account of how you manage to expend the energy, together with all the inefficiencies of propulsion and the impeding forces (like atmospheric friction) that are going to add to the recipe. A rocket to orbit must be a whole lot bigger and more powerful - just ask SpaceX.

Being stuck deep in a gravity well, with a cloak of atmosphere above our heads, may have been a critical ingredient for our evolution and for the four billion years of biological evolution that came before, but it sure can suck when it comes to reaching space.
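The post's percentages are easy to check with a few lines of arithmetic. The sketch below uses standard physical constants; the 400 km altitude is my stand-in for a typical low Earth orbit, not a figure from the post.

```python
# Back-of-the-envelope energy comparison: escape velocity, circular orbital
# velocity at 400 km, and the potential energy of a 100 km vertical hop.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg
R = 6.371e6            # Earth radius, m

v_escape = (2 * G * M / R) ** 0.5
v_orbit = (G * M / (R + 4.0e5)) ** 0.5     # circular orbit at 400 km (assumed)

e_escape = 0.5 * v_escape**2               # kinetic energy per kg
e_orbit = 0.5 * v_orbit**2                 # per kg, ignoring climb energy
e_hop = G * M * (1 / R - 1 / (R + 1.0e5))  # potential energy of a 100 km hop

print(f"escape velocity    : {v_escape / 1000:.1f} km/s")
print(f"orbital velocity   : {v_orbit / 1000:.1f} km/s")
print(f"orbit/escape energy: {e_orbit / e_escape:.0%}")
print(f"hop/orbit energy   : {e_hop / e_orbit:.1%}")
```

Running this reproduces the article's ballpark figures: roughly 11.2 km/s to escape, an orbital energy near half the escape energy, and a 100 km hop costing only a few percent of the orbital energy.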
# Making Money From Games You Give Away: Understanding and Serving Your Players through Analytics

## Statistics: The Analyst’s Toolbox

Statistics—the manipulation and interpretation of data—is a large and complex area of mathematics that is the basis for analytics in F2P games. The multitude of tools, such as formulae and methods of data interpretation, is often bewildering to nonmathematicians. For this reason, many leading F2P companies have extended their recruitment to city traders and other professional statisticians to fill analyst positions—a role previously unheard of in games. The considerable understanding of these experts provides deeper insight into the data of your games, highlighting correlations that might otherwise be missed or misunderstood. Although a full and complete explanation of all of the tools used by statisticians is outside the scope of this, and almost any, book, there are a few terms and techniques you should be aware of.

### Averages: Mean, Mode and Median

Averages—the typical amount in a data sample—are one of the most simple but useful tools that an analyst can use. When people talk about an average, they are commonly referring to the mean average: taking the sum of all the data and dividing by the sample size. For example, if your game had 500,000 players in a given day and it made $25,000 in revenue, the mean average is the sum of the data ($25,000) divided by the sample size (500,000):

$25,000 / 500,000 = $0.05 ARPDAU

Although the actual revenue or other data from a single player in isolation will vary greatly (the amount by which is known as a range), the mean average will tell you the outcome you can expect to attribute to each player when considered in a group.

The mode and median averages are a bit less common, however. The mode is the most frequently occurring value in a list, and the median is the value found in the exact middle of a data set ordered from lowest to highest. For example, if your $25,000 revenue came from three IAPs—5,000 sales at $1, 4,000 sales at $3 and 1,000 sales at $8—the mode IAP purchase would be $1 because it is the most commonly occurring value at 5,000 units. The mode tells you which option is most popular and therefore is most likely to occur when you consider a single purchase.

To calculate the median, however, you must first ascertain the middle value. You could eliminate the highest and lowest values until you are left with one value, which is the median. But in some cases, as in the preceding example, you will be left with two values. Here's why: there are 10,000 samples, so the median is between sample 5,000 ($1) and 5,001 ($3). In this instance the median value is the mean average of these two samples. Therefore, the median sales price is $2. Knowing the median allows you to understand where a sample sits in a data set.

> “Data is dangerous. Ask the wrong question and you’ll get the wrong answer, steering your game development into trouble.” —Henrique Olifiers, Gamer-in-Chief, Bossa Studios
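For concreteness, the three averages can be checked against this section's worked IAP numbers with Python's statistics module (the tooling is my choice for illustration; the chapter does not prescribe any):

```python
# The worked numbers from this section: 10,000 IAP sales at three price points.
from statistics import mean, median, mode

sales = [1.00] * 5000 + [3.00] * 4000 + [8.00] * 1000

print("mean  :", mean(sales))    # $25,000 / 10,000 purchases = 2.50 per sale
print("mode  :", mode(sales))    # most frequent price point: 1.00
print("median:", median(sales))  # mean of samples 5,000 and 5,001: 2.00
```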
### Causation and Variables

Proving causation—that one factor has a distinct and provable effect upon another—is the central purpose of analytics. Causation is what makes your hypothesis either fit the behavior of your players or prove to be wildly wrong. Often, the aim is to find a link between a dependent variable and an independent variable. For instance, you could consider an output, such as the number of players buying an IAP, as a dependent variable, and an input, such as an IAP's price, as an independent variable. When a dependent variable changes in relation to an independent variable, there is causation and a basis for a hypothesis. This link can be described using a technique called regression analysis.

### Regression Analysis

Regression analysis is a set of statistical techniques that estimates the relationship between variables. Regression analysis can build a model of, for instance, the links between the price and sales of an IAP and therefore predict the price point that will return maximum revenue. It is commonly carried out by humans, but in some cases can be somewhat automated in analytics software.

For example, imagine you have tested price and recorded the subsequent sales of an IAP in a multivariate test at $0.99, $1.99, $2.99, $6.99, $9.99 and $19.99 (Figure 4.4). From the data, you could suggest that the sales of IAPs (the dependent variable) decrease as price (the independent variable) increases. Specifically, the manner in which the drop occurs is an example of exponential decay. You could then predict and model sales at each dollar increment (Figure 4.5) using your own formula. Using that data, you could predict revenue by multiplying sales by price, thereby finding the price that would produce the maximum revenue (Figure 4.6).

Although this is a very simple example, it does show that when regression analysis is used well, as with other tools of analytics, it enables you to have a greater understanding of player behavior. In turn, this information can be interpreted to serve your players via better games.
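The regression step can be sketched in code. The sales figures below are invented stand-ins, since the chapter's actual test data lives in Figures 4.4 through 4.6; the fit is an ordinary least-squares line on the log of sales, which is one simple way to model exponential decay.

```python
# Fit sales ~ a * exp(b * price) to hypothetical test results, then pick the
# revenue-maximising whole-dollar price. Data values are invented.
import math

data = [(0.99, 6097), (1.99, 3698), (2.99, 2243),
        (6.99, 303), (9.99, 68), (19.99, 1)]  # (price, units sold)

# Ordinary least squares on log(sales) gives the decay model's parameters.
n = len(data)
xs = [price for price, _ in data]
ys = [math.log(units) for _, units in data]
x_bar, y_bar = sum(xs) / n, sum(ys) / n
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
    sum((x - x_bar) ** 2 for x in xs)
a = math.exp(y_bar - b * x_bar)

# Predict revenue (price * modelled sales) at each dollar increment.
best = max(range(1, 21), key=lambda p: p * a * math.exp(b * p))
print(f"model: sales ~ {a:.0f} * exp({b:.2f} * price)")
print(f"revenue-maximising price: ${best}")
```

With these made-up numbers the model recovers a decay rate near -0.5 per dollar and a revenue-maximising price of about $2; with real test data the same two steps (fit, then predict revenue) apply unchanged.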
# Binomial Mean and Standard Deviation – Probability | Class 12 Maths

The binomial distribution is the probability distribution of the number of successes in a sequence of Bernoulli trials: if a Bernoulli trial is performed n times, the probability of exactly x successes is given by the binomial distribution. Keep in mind that each trial is independent of the others, with only two possible outcomes, satisfying the same conditions as Bernoulli trials. Consider the case of tossing a coin n times: the probability of getting exactly x heads (or tails) can be calculated using the binomial distribution. If the coin is tossed only once, the binomial distribution reduces to the Bernoulli distribution.

A random variable $$X$$ which takes values $$0, 1, 2, \ldots, n$$ is said to follow the binomial distribution if its probability distribution function is given by

$$P(X = r) = \binom{n}{r} p^r q^{n-r}, \qquad r = 0, 1, 2, \ldots, n$$

where $$p, q > 0$$ and $$p + q = 1$$; here $$p$$ is the probability of success of an event and $$q$$ is the probability of failure.

## The mean or expected value of the binomial distribution

The mean of the binomial distribution, like any other mean, is the sum over all outcomes of the number of successes times its probability:

$$\text{Mean} = \sum_{r} r \, P(r) = \sum_{r} r \binom{n}{r} p^r q^{n-r}$$

$$= \sum_{r} r \cdot \frac{n}{r} \binom{n-1}{r-1} \, p \cdot p^{r-1} q^{n-r} \qquad \left[\text{since } \binom{n}{r} = \tfrac{n}{r}\binom{n-1}{r-1}\right]$$

$$= np \sum_{r} \binom{n-1}{r-1} p^{r-1} q^{(n-1)-(r-1)} = np \, (q+p)^{n-1} \qquad \left[\text{by the binomial theorem } (a+b)^m = \sum_{k=0}^{m} \binom{m}{k} a^k b^{m-k}\right]$$

$$= np \qquad [\text{since } p + q = 1]$$

Therefore, $$\text{Mean} = np$$.

## The variance of the binomial distribution

Variance measures how spread out the numbers are from the mean: it is the probability-weighted average of the squared deviations from the mean. For the binomial distribution,

$$\text{Variance} = \Big(\sum_{r} r^2 P(r)\Big) - \text{Mean}^2 = \sum_{r} [r(r-1) + r] \binom{n}{r} p^r q^{n-r} - (np)^2$$

$$= \sum_{r} r(r-1) \binom{n}{r} p^r q^{n-r} + \sum_{r} r \binom{n}{r} p^r q^{n-r} - (np)^2$$

$$= n(n-1)p^2 \sum_{r} \binom{n-2}{r-2} p^{r-2} q^{n-r} + np - (np)^2$$

$$= n(n-1)p^2 (q+p)^{n-2} + np - n^2p^2 \qquad [\text{by the binomial theorem}]$$

$$= n^2p^2 - np^2 + np - n^2p^2 = np - np^2 = np(1-p) = npq$$

Therefore, $$\text{Variance} = npq$$.

## The standard deviation of the binomial distribution

Standard deviation is also a standard measure of how spread out the values are from the mean:

$$\text{Standard Deviation} = \sqrt{\text{Variance}} = \sqrt{npq}$$

Example 1. A coin is tossed five times. What is the probability of getting exactly 3 heads? Also find the mean, variance, and standard deviation.

Solution: n = 5 (number of trials), p = 1/2 (probability of a head on each trial), q = 1 − 1/2 = 1/2, r = 3 (number of successes, i.e., heads).

$$P(X = 3) = \binom{5}{3} \left(\tfrac{1}{2}\right)^3 \left(\tfrac{1}{2}\right)^{2} = 10 \cdot \tfrac{1}{8} \cdot \tfrac{1}{4} = \tfrac{5}{16}$$

Mean = np = 5 × 1/2 = 5/2. Variance = npq = 5 × 1/2 × 1/2 = 5/4. Standard deviation = $$\sqrt{5/4} = \sqrt{5}/2$$.

Example 2. A die is tossed thrice. What is the probability of getting an even number exactly once? What are the mean, variance, and standard deviation?

Solution: n = 3 (number of trials), p = 3/6 = 1/2 (2, 4, 6 are the even numbers on a die), q = 1 − 1/2 = 1/2, r = 1 (number of successes, i.e., exactly one even number).
) P(X=r)= nCr * pr * qn-r =3C (1/2)1  (1/2)3-1 = 3!/(1!*2!) 1/2 * 1/4 = 3 * (1/2) * (1/4) = 3/8 Mean = np = 3 * 1/2 = 3/2 Variance = npq = 3 * 1/2 * 1/2 = 3/4 Standard deviation= (3/4)1/2 =31/2/2 Example 3. If the probability of defective bolts is 0.1, find the mean, variance, and standard deviation for the distribution of defective bolts in a total of 500 bolts. Solution: Considering as a case of binomial distribution , n = 500( no. of trials which we can are no. of bolts here) p = probability of one defective bolt during each trial p = 0.1 Q=1-0.1 =0.9 Mean = np = 500 * 0.1 =50 Variance = npq = 500 * 0.1 * 0.9 = 45 Standard Deviation = (variance)1/2 = (45)1/2 = 6.71 Example 4. Two cards are drawn successively from a pack of 52 cards with replacement. Find the probability distribution for no. of aces. Also find the mean, variance, and standard deviation. Solution: n = 2(no. of trials) p = probability of getting an ace in each trial = 4/52 =1/13 q = 1-1/13 =12/13 r = no. of successes i.e no. of aces (0,1,2) P(X=r) = 2Cr (1/13)r (12/13)2-r For r = 0 P(0) = 2C0 (1/13) (12/13) 2-0 = 144/169 For r=1 P(1) = 2C1 (1/13)1 (12/13)2-1 = 24/169 For r=2 P(2) = 2C2 (1/13) 2 (12/13)2-2 =1/169 Therefore, probability distribution can be given as : Mean =np = 2 * 1/13 =2/13 Variance = npq = 2 * (1/13) * (12/13) = 24/16 Standard Deviation= (variance)1/2 = (24/169)1/2 = 0.376 My Personal Notes arrow_drop_up
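The closed-form results above are easy to sanity-check numerically. The following short Python sketch (added here for illustration; it is not part of the original tutorial, and `binom_pmf` is just a name chosen for this example) recomputes Example 1 directly from the pmf:

```python
from math import comb, sqrt

def binom_pmf(r, n, p):
    """P(X = r) for a Binomial(n, p) random variable."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

n, p = 5, 0.5                 # Example 1: five coin tosses
print(binom_pmf(3, n, p))     # 0.3125 = 5/16, probability of exactly 3 heads

mean = n * p                  # np = 2.5
variance = n * p * (1 - p)    # npq = 1.25
print(mean, variance, sqrt(variance))   # 2.5 1.25 1.118... (= sqrt(5)/2)
```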
1,821
5,071
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.28125
4
CC-MAIN-2023-14
longest
en
0.890154
http://crypto.stackexchange.com/questions/10195/distributed-asymmetric-key-decryption-with-threshold/10251
1,469,315,479,000,000,000
text/html
crawl-data/CC-MAIN-2016-30/segments/1469257823802.12/warc/CC-MAIN-20160723071023-00130-ip-10-185-27-174.ec2.internal.warc.gz
52,468,865
17,778
# Distributed Asymmetric Key Decryption with threshold

Threshold decryption of public key encryption schemes allows the decryption key to be distributed among trustees. Then, to decrypt the ciphertext, at least a threshold $t$ of the trustees must run the decryption protocol in order to obtain the plaintext. Now, most of the literature is either based on generic multiparty computation (which I am not keen on, for some reason) or based on distributed RSA decryption and the sharing of RSA functions. Are there any other ways of achieving distributed asymmetric-key threshold decryption? Would elliptic curves be a good choice for distributing the decryption?

Comments:

- You can create a random symmetric key for each message, apply Shamir secret sharing to it (symmetric secret sharing), and then encrypt each share with a different public key. – CodesInChaos, Sep 6 '13
- I heard about this technique from a colleague, but could not locate the actual reference for the paper. Do you have it? – sashank, Sep 6 '13
- No idea if anybody wrote a paper about it. It's just a straightforward and useful combination of asymmetric encryption and Shamir secret sharing. I came up with it myself and I expect many others to have done so before me. – CodesInChaos, Sep 8 '13
- Yes, another colleague also spontaneously gave me this idea, but somehow it does not fit squarely into my application. Thanks anyway! – sashank, Sep 8 '13

Answer:

I assume you mean a threshold encryption scheme in which a dealer generates $(PK,SK_1,\dots,SK_n)$ and distributes the secret keys to users indexed by $1,\dots,n$, and in which a combiner who obtains $t$ partially-decrypted ciphertexts can retrieve the plaintext.

• Dodis and Katz showed a generic construction of a CCA-secure threshold encryption scheme from a secret sharing scheme and a CCA-secure labeled PKE scheme. See Dodis and Katz, Chosen-Ciphertext Security of Multiple Encryption (TCC 2005).
• DDH-based constructions (in the ROM) are proposed by Gennaro and Shoup, Securing threshold cryptosystems against chosen ciphertext attack (EUROCRYPT 1998, JoC 2002).
• You can also find a pairing-based construction in Boneh, Boyen and Halevi, Chosen Ciphertext Secure Public Key Threshold Encryption Without Random Oracles (CT-RSA 2006).
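The hybrid approach suggested in the comments, i.e. sharing a fresh per-message symmetric key rather than the long-term private key, is straightforward to sketch. Below is a minimal, illustrative Python implementation of just the Shamir split/reconstruct step over a prime field; the 3-of-5 parameters, the choice of prime, and the function names are assumptions made for this example, and the step of encrypting each share under a different trustee's public key is elided:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough to embed a 16-byte key

def split(secret, n, t):
    """Shamir t-of-n sharing of an integer secret < PRIME."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):  # evaluate the random degree-(t-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

key = secrets.randbelow(PRIME)         # fresh symmetric key for one message
shares = split(key, n=5, t=3)          # each share would then be encrypted to one trustee
assert reconstruct(shares[:3]) == key  # any 3 of the 5 shares recover the key
```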
522
2,295
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.5625
3
CC-MAIN-2016-30
latest
en
0.871307
https://physics.stackexchange.com/questions/600338/completeness-relation-of-spherical-harmonics
1,653,020,307,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662531352.50/warc/CC-MAIN-20220520030533-20220520060533-00727.warc.gz
542,149,106
65,383
# Completeness relation of spherical harmonics

In spherical coordinates, the resolution of the identity can be written as
$$1=\int_0^{2\pi}d\phi\int_0^{\pi}\sin\theta\, d\theta\, |\theta,\phi\rangle\langle\theta,\phi| \equiv \int d\Omega\, |\Omega\rangle\langle \Omega|,$$
where $|\Omega\rangle = |\theta,\phi\rangle$. For spherical harmonics $Y_{lm}(\Omega)$ we then have
$$\delta_{l'l}\delta_{m'm} = \int d\Omega\, Y_{l'm'}^\ast(\Omega)\, Y_{lm}(\Omega).$$
The resolution of the identity in the angular momentum basis is given by
$$1=\sum_{l=0}^{\infty}\sum_{m=-l}^l |l,m\rangle\langle l,m|,$$
so that
$$\langle \Omega \mid\Omega'\rangle = \sum_{l=0}^{\infty}\sum_{m=-l}^l \langle\Omega\mid l,m\rangle\langle l,m\mid \Omega'\rangle \iff \delta(\Omega-\Omega')=\sum_{l=0}^{\infty}\sum_{m=-l}^l Y_{lm}(\Omega)\, Y_{lm}^\ast(\Omega').$$
Now, the term $\delta(\Omega-\Omega')$ is often rewritten as $\frac1{\sin\theta}\delta(\theta-\theta')\delta(\phi-\phi')$. How does one find this expression?

Answer:

For $\delta^{(2)}(\Omega-\Omega')$ to behave like a delta function, we should get $1$ when we integrate it over the surface of the unit sphere. In other words, we should have
$$1=\int \mathrm{d}^2\Omega\, \delta^{(2)}(\Omega-\Omega') = \int \mathrm{d}\theta\, \mathrm{d}\phi\, \sin\theta\, \delta^{(2)}(\Omega-\Omega').$$
You can see that this works out if we take
$$\delta^{(2)}(\Omega-\Omega')=\frac{1}{\sin\theta}\, \delta(\theta-\theta')\, \delta(\phi-\phi'),$$
but it will not work out if we do not include the $(\sin\theta)^{-1}$ factor: without it we would get $\sin\theta'$ instead of $1$.

More generally, in $D$ spacetime dimensions one should write the $D$-dimensional Dirac delta function as
$$\frac{1}{\sqrt{|g|}}\,\delta^{(D)}(x).$$
In spherical coordinates on the unit sphere, $\sqrt{|g|}=\sin \theta$. This is another argument to explain the factor of $(\sin\theta)^{-1}$.
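As a numerical sanity check (added here; not part of the original exchange), the orthonormality relation $\int d\Omega\, Y_{l'm'}^\ast Y_{lm} = \delta_{l'l}\delta_{m'm}$, with exactly the $\sin\theta$ measure discussed above, can be verified by brute-force quadrature. Note that `scipy.special.sph_harm` swaps the angle names relative to the physics convention used here, and the grid sizes below are arbitrary choices:

```python
import numpy as np
from scipy.special import sph_harm

# scipy convention: sph_harm(m, l, azimuthal_angle, polar_angle)
n_th, n_ph = 200, 400
polar = np.linspace(0, np.pi, n_th)      # theta in the notation above
azim = np.linspace(0, 2 * np.pi, n_ph)   # phi in the notation above
PH, TH = np.meshgrid(azim, polar)
dA = (np.pi / n_th) * (2 * np.pi / n_ph)

def inner(l1, m1, l2, m2):
    """Approximate integral of conj(Y_{l1 m1}) * Y_{l2 m2} * sin(theta) dtheta dphi."""
    integrand = np.conj(sph_harm(m1, l1, PH, TH)) * sph_harm(m2, l2, PH, TH) * np.sin(TH)
    return np.sum(integrand) * dA

print(abs(inner(2, 1, 2, 1)))  # ~1: normalization, thanks to the sin(theta) measure
print(abs(inner(2, 1, 3, 1)))  # ~0: orthogonality
```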
638
1,924
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 3, "x-ck12": 0, "texerror": 0}
3.46875
3
CC-MAIN-2022-21
longest
en
0.674331
https://www.sophia.org/tutorials/slope-questions-answered
1,603,961,129,000,000,000
text/html
crawl-data/CC-MAIN-2020-45/segments/1603107903419.77/warc/CC-MAIN-20201029065424-20201029095424-00719.warc.gz
901,458,899
10,690
Author: Christopher Danielson

##### Description:

This brief packet consists of a short video using student questions and responses to an in-class task to address common questions about slope and first differences.

Tutorial

## Introduction

In class, I occasionally ask students to list out things they know, things they want to know, and things they have learned about a topic (this is referred to as a "KWL" among educators, for Know, Want to know, Learned). We recently did a KWL on slope in College Algebra. I noticed that various students did an excellent job of addressing each other's questions in the "want to know" part through their "know" and "learned" responses. This brief packet consists of a video showing students' questions and answers; the video concludes with a final answer from me.

Learners should be familiar with the concept of "first difference," which is a quick tool for analyzing change in function tables. The idea is not explained in this packet, but it is demonstrated at the end of the video.

## Students' questions and answers on slope

This video uses student responses to a classroom task to ask and answer each other's questions. It finishes with a brief instructor explanation of the relationship between first differences and slope.

## Summary

So slope is a rate of change (such as velocity). Slope is the ratio of the change in y to the change in x, and first differences analyze change in y. When the change in x is 1, the first differences and the slope are identical. First differences are a particularly useful tool when presented with a table of data, and they are especially helpful when the table is non-linear. Non-linear functions technically do not have slopes, but we can still compute first differences and look for patterns that tell us about the behavior of the function. First differences are also an important precursor to the calculus topic of the derivative.
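The summary above can be made concrete in a few lines of Python (added here for illustration; not part of the original packet): with a unit step in x, the first differences of a linear table are constant and equal the slope, while a non-linear table gives varying first differences.

```python
xs = [0, 1, 2, 3, 4]
ys = [3, 5, 7, 9, 11]                   # table of y = 2x + 3; the step in x is 1
first_diffs = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]
print(first_diffs)                      # [2, 2, 2, 2] -> constant, so linear with slope 2

ys_quad = [x**2 for x in xs]            # non-linear table (y = x^2)
print([ys_quad[i + 1] - ys_quad[i] for i in range(len(xs) - 1)])  # [1, 3, 5, 7] -> not constant
```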
528
2,657
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.65625
4
CC-MAIN-2020-45
latest
en
0.947693
https://gmatclub.com/forum/movies-you-have-seen-recently-36583.html?fl=similar
1,510,942,485,000,000,000
text/html
crawl-data/CC-MAIN-2017-47/segments/1510934803848.60/warc/CC-MAIN-20171117170336-20171117190336-00764.warc.gz
624,563,435
44,214
# Movies you have seen recently

Senior Manager (faifai0714), 11 Oct 2006, 10:35: This thread is a discussion of movies you have seen recently, from old classics to recent releases. Last week I saw The Departed. It was great. Jack Nicholson's performance stole the entire movie.

Current Student, 11 Oct 2006, 10:47: Saw "Miami Vice" last weekend and wasn't exactly thrilled with Colin Farrell's performance. He just can't substitute for Don Johnson, sorry. The action was decent, though. Learjets, Scarabs, Hummers, and especially the fire from the Ferrari's exhaust pipes as it speeds along the Intracoastal Highway raise the audience's pulse. Oh yeah, at 39, Gong Li is still HOT. Overall a (B-). http://www.youtube.com/watch?v=Rm7nkfu25Tw Next on the list is the new 007, "Casino Royale."

Senior Manager, 15 Oct 2006, 15:08: Children of Men, a very thought-provoking, imaginative story!! Perfect for the gmatclub intellectuals. Great performances from Clive Owen & Michael Caine too! http://www.imdb.com/title/tt0206634/

Intern, 16 Oct 2006, 22:28: SUPER SIZE ME

Senior Manager, 17 Oct 2006, 19:29: Haha, I love that documentary. I don't feel like going to McDonald's anymore, unless I am hungry in the middle of the night. If you go out on the streets in America you'll find a lot of obese people...

Current Student, 17 Oct 2006, 20:32: ~65% of the total population (kids included), to be exact. More like a SUPER SIZED nation.

Senior Manager, 17 Oct 2006, 21:12: I saw Basic Instinct 2... Sharon Stone is getting old and she acts like she's 30... it's just so pathetic... BI 2 is one of those movies that is so bad it's good... the last 10 minutes are the best part of the film, but they still don't redeem the quality of the whole movie...

Intern, 07 Nov 2006, 13:14 (Borat): Have you seen Borat, the movie? I saw it and I have mixed emotions about it. Some people think it's hysterical, others are kind of offended. What do you think?

Manager, 08 Nov 2006, 08:05: Borat looks funny, but it wasn't out in my town. Did it only appear in a few towns?

Manager, 08 Nov 2006, 10:28: Borat was only out on about 800 screens, and it is hilarious. If you're offended by it, that's fine, but you should also be offended by plenty of the regular Americans in the movie.

VP (tennis_ball), 25 Nov 2006, 03:23: Borat will be shown here on Dec 28th. Gonna catch it. Haha.

VP (tennis_ball), 25 Nov 2006, 03:24, quoting the post on The Departed: Chinese fans find the movie a bore. Know why?

Senior Manager (faifai0714), 25 Nov 2006, 16:40: Because they have seen the original Hong Kong version. I'm a Chinese fan too. It didn't bother me to watch the same story again, but I like how the US version interprets the film differently. I don't want to go into further details, but you know what I mean if you have seen both movies.

VP, 26 Nov 2006, 18:28: People who have watched the original version gave low ratings to the Hollywood remake. That is why I didn't go to watch it. After all, I have been avoiding Hollywood sh!t in recent years. They only know sequels after sequels, adaptations of books and computer games, and remakes of other or old films. Hmmm.

Senior Manager, 27 Nov 2006, 16:53: I finally got to see Borat. It's funny in the beginning, but the jokes run out of steam towards the end. I can see why people find it offensive, but one reason people love Borat is his sincerity. People like to tell white lies, but Borat doesn't seem to care much. There's not really much of a story here... it's just Borat goofing around everywhere he goes. The credits design is cheesy and the sound mix of the film is terrible.

Senior Manager, 27 Nov 2006, 16:56, quoting "Borat will be shown here on Dec 28th": Wow, Singapore allows this type of movie to be shown? That is surprising... the amount of sex jokes in this movie would be enough to get it banned in Singapore.

VP, 29 Nov 2006, 19:27: Haha, maybe they will censor some of the sex jokes and give it a maturity rating. I am going to see whether it is censored before I decide to watch it. Hey, prostitution is not illegal here, at least. Lol. So you can guess how twisted the government's policy is, but it's all twisted towards the $.

VP, 29 Jan 2007, 18:48: I watched Borat almost a month ago. It was hilarious, but it was offensive as well to some people. I felt it even though I am neutral on most of the sensitive topics.

Senior Manager, 30 Mar 2007, 21:12: Anybody seen 300?

Senior Manager, 01 Apr 2007, 19:42: Here. Not much story, just action (the "man-ness"). It was entertaining, though. How much do movie tickets cost in the US??? Here in Korea, it's around $7-8 per adult.
2,709
8,865
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2017-47
latest
en
0.893746
https://worksheetsbag.com/mcq-chapter-3-current-electricity-class-12-physics/
1,718,943,573,000,000,000
text/html
crawl-data/CC-MAIN-2024-26/segments/1718198862036.35/warc/CC-MAIN-20240621031127-20240621061127-00890.warc.gz
545,139,970
27,304
# MCQ Chapter 3 Current Electricity Class 12 Physics

Please refer to the Current Electricity MCQ Questions for Class 12 Physics below. These MCQ questions with answers have been designed as per the latest NCERT and CBSE books and the syllabus issued for the current academic year. These objective questions on Current Electricity will help you prepare for the exams and score more marks.

## Current Electricity MCQ Questions Class 12 Physics

Please see the solved MCQ Questions for Current Electricity in Class 12 Physics below. All questions and answers have been prepared by expert faculty of standard 12 based on the latest examination guidelines.

### MCQ Questions Class 12 Physics Current Electricity

Question. A charge is moving across a junction. Then (a) momentum will be conserved (b) momentum will not be conserved (c) at some places momentum will be conserved and at some other places momentum will not be conserved (d) none of these. Answer: D

Question. Which of the following I-V graphs represents an ohmic conductor? (The graph options are not reproduced here.) Answer: A

Question. The I-V characteristic shown in the figure represents (a) ohmic conductors (b) non-ohmic conductors (c) insulators (d) superconductors. Answer: B

Question. Which of the following is correct for the V-I graph of a good conductor? (The graph options are not reproduced here.) Answer: A

Question. The resistivity of the alloy manganin (a) is nearly independent of temperature (b) increases rapidly with increase in temperature (c) decreases with increase in temperature (d) increases rapidly with decrease in temperature. Answer: A

Question. An electric heater is connected to the voltage supply. After a few seconds the current reaches its steady value; its initial current will be (a) equal to its steady current (b) slightly higher than its steady current (c) slightly less than its steady current (d) zero. Answer: B

Question. In a series combination of two or more resistances, (a) the current through each resistance is the same (b) the voltage across each resistance is the same (c) neither the current nor the voltage is the same across each resistance (d) both the current and the voltage are the same across each resistance. Answer: A

Question. Combine three resistors of 5 Ω, 4.5 Ω and 3 Ω in such a way that the total resistance of the combination is maximum: (a) 12.5 Ω (b) 13.5 Ω (c) 14.5 Ω (d) 16.5 Ω. Answer: A

Question. A cell having an emf ε and internal resistance r is connected across a variable external resistance R. As the resistance R is increased, the plot of the potential difference V across R is given by... (The graph options are not reproduced here.) Answer: B

Question. In a parallel combination of n cells, we obtain (a) more voltage (b) more current (c) less voltage (d) less current. Answer: B

Question. If n cells, each of emf ε and internal resistance r, are connected in parallel, then the total emf and internal resistance will be (a) ε, r/n (b) ε, nr (c) nε, r/n (d) nε, nr. Answer: A

Question. In a Wheatstone bridge, if the battery and galvanometer are interchanged, then the deflection in the galvanometer will (a) change in the previous direction (b) change in the opposite direction (c) not change (d) none of these. Answer: C

Question. When a metal conductor connected to the left gap of a metre bridge is heated, the balancing point (a) shifts towards the right (b) shifts towards the left (c) remains unchanged (d) remains at zero. Answer: B

Question. In a potentiometer of 10 wires, the balance point is obtained on the 7th wire. To shift the balance point to the 9th wire, we should (a) decrease the resistance in the main circuit (b) increase the resistance in the main circuit (c) decrease the resistance in series with the cell whose emf is to be measured (d) increase the resistance in series with the cell whose emf is to be determined. Answer: D

Question. AB is the wire of a potentiometer. With an increase in the value of resistance R, the shift in the balance point J will be (a) towards B (b) towards A (c) remains constant (d) first towards B, then back towards A. Answer: A

Question. A current of 5 A passes through a copper conductor (resistivity = 1.7 × 10⁻⁸ Ω-m) of radius of cross-section 5 mm. Find the mobility of the charges if their drift velocity is 1.1 × 10⁻³ m/s. (2019 Main, 10 April I) (a) 1.5 m²/V-s (b) 1.3 m²/V-s (c) 1.0 m²/V-s (d) 1.8 m²/V-s. Answer: C

Question. A metal wire of resistance 3 Ω is elongated to make a uniform wire of double its previous length. This new wire is now bent and its ends joined to make a circle. If two points on this circle subtend an angle of 60° at the centre, the equivalent resistance between these two points will be (a) 7/2 Ω (b) 5/2 Ω (c) 12/5 Ω (d) 5/3 Ω. Answer: D

Question. In an experiment, the resistance of a material is plotted as a function of temperature (in some range). As shown in the figure, it is a straight line. (The options refer to the figure, which is not reproduced here.) Answer: C

Question. In a conductor, if the number of conduction electrons per unit volume is 8.5 × 10²⁸ m⁻³ and the mean free time is 25 fs (femtoseconds), its approximate resistivity is (take mₑ = 9.1 × 10⁻³¹ kg) (a) 10⁻⁷ Ω-m (b) 10⁻⁵ Ω-m (c) 10⁻⁶ Ω-m (d) 10⁻⁸ Ω-m. Answer: D

Question. Determine the charge on the capacitor in the following circuit: (a) 2 μC (b) 200 μC (c) 10 μC (d) 60 μC. (The circuit figure is not reproduced here.) Answer: B

Question. A copper wire is stretched to make it 0.5% longer. The percentage change in its electrical resistance, if its volume remains unchanged, is (a) 2.0% (b) 1.0% (c) 0.5% (d) 2.5%. Answer: B

Question. The space between two concentric conducting spheres of radii a and b (b > a) is filled with a medium of resistivity ρ. The resistance between the two spheres will be... (The formula options are not reproduced here.) Answer: B

Question. A wire of resistance R is bent to form a square ABCD as shown in the figure. The effective resistance between E and C is... [E is the mid-point of arm CD] (The options are not reproduced here.) Answer: A

Question. A uniform metallic wire has a resistance of 18 Ω and is bent into an equilateral triangle. The resistance between any two vertices of the triangle is (a) 12 Ω (b) 8 Ω (c) 2 Ω (d) 4 Ω. Answer: D

Question. Mobility of electrons in a semiconductor is defined as the ratio of their drift velocity to the applied electric field. If, for an n-type semiconductor, the density of electrons is 10¹⁹ m⁻³ and their mobility is 1.6 m²/(V-s), then the resistivity of the semiconductor (since it is an n-type semiconductor, the contribution of holes is ignored) is close to (a) 2 Ω-m (b) 0.2 Ω-m (c) 0.4 Ω-m (d) 4 Ω-m. Answer: C

Question. When a 5 V potential difference is applied across a wire of length 0.1 m, the drift speed of electrons is 2.5 × 10⁻⁴ m/s. If the electron density in the wire is 8 × 10²⁸ m⁻³, the resistivity of the material is close to (a) 1.6 × 10⁻⁸ Ω-m (b) 1.6 × 10⁻⁷ Ω-m (c) 1.6 × 10⁻⁵ Ω-m (d) 1.6 × 10⁻⁶ Ω-m. Answer: C

Question. The drift speed of electrons when 1.5 A of current flows in a copper wire of cross-section 5 mm² is v. If the electron density in copper is 9 × 10²⁸ /m³, the value of v in mm/s is close to (take the charge of the electron to be 1.6 × 10⁻¹⁹ C) (a) 0.02 (b) 0.2 (c) 2 (d) 3. Answer: A

Question. In an aluminium (Al) bar of square cross-section, a square hole is drilled and filled with iron (Fe) as shown in the figure. The electrical resistivities of Al and Fe are 2.7 × 10⁻⁸ Ω-m and 1.0 × 10⁻⁷ Ω-m, respectively. The electrical resistance between the two faces P and Q of the composite bar is... (The options are not reproduced here.) Answer: B

Question. Consider a thin square sheet of side L and thickness t, made of a material of resistivity ρ. The resistance between the two opposite faces shown by the shaded areas in the figure is (a) directly proportional to L (b) directly proportional to t (c) independent of L (d) independent of t. Answer: C

Question. A steady current flows in a metallic conductor of non-uniform cross-section. The quantity/quantities constant along the length of the conductor is/are (a) current, electric field and drift speed (b) drift speed only (c) current and drift speed (d) current only. Answer: D

Question. Six equal resistances are connected between points P, Q and R as shown in the figure. The net resistance will be maximum between (a) P and Q (b) Q and R (c) P and R (d) any two points. Answer: A

Question. The effective resistance between points P and Q of the electrical circuit shown in the figure is... (The options are not reproduced here.) Answer: A

Question. Read the following statements carefully (1993, 2M). Y: The resistivity of a semiconductor decreases with increase of temperature. Z: In a conducting solid, the rate of collisions between free electrons and ions increases with increase of temperature. Select the correct statement(s) from the following: (a) Y is true but Z is false (b) Y is false but Z is true (c) both Y and Z are true (d) Y is true and Z is the correct reason for Y. Answer: C

Question. A piece of copper and another of germanium are cooled from room temperature to 80 K. The resistance of (a) each of them increases (b) each of them decreases (c) copper increases and germanium decreases (d) copper decreases and germanium increases. Answer: D

Question. In the circuit shown, the potential difference between A and B is (a) 3 V (b) 1 V (c) 6 V (d) 2 V. Answer: D

Question. To verify Ohm's law, a student connects the voltmeter across the battery as shown in the figure. The measured voltage is plotted as a function of the current, and the following graph is obtained. If V₀ is almost zero, identify the correct statement: (a) the emf of the battery is 1.5 V and its internal resistance is 1.5 Ω (b) the value of the resistance R is 1.5 Ω (c) the potential difference across the battery is 1.5 V when it sends a current of 1000 mA (d) the emf of the battery is 1.5 V and the value of R is 1.5 Ω. Answer: A

Question. In the given circuit, an ideal voltmeter connected across the 10 Ω resistance reads 2 V. The internal resistance r of each cell is (a) 1.5 Ω (b) 0.5 Ω (c) 1 Ω (d) 0 Ω. Answer: B

Question. In the figure shown, what is the current (in amperes) drawn from the battery? You are given: R1 = 15 Ω, R2 = 10 Ω, R3 = 20 Ω, R4 = 5 Ω, R5 = 25 Ω, R6 = 30 Ω, E = 15 V. (a) 13/24 (b) 7/18 (c) 20/3 (d) 9/32. Answer: D

Question. For the circuit shown, with R1 = 1.0 Ω, R2 = 2.0 Ω, E1 = 2 V and E2 = E3 = 4 V, the potential difference between the points a and b is approximately (in volts) (a) 2.7 (b) 2.3 (c) 3.7 (d) 3.3. Answer: D

Question. In a Wheatstone bridge (see figure), resistances P and Q are approximately equal. When R = 400 Ω, the bridge is balanced. On interchanging P and Q, the value of R for balance is 405 Ω. The value of X is close to (a) 404.5 Ω (b) 401.5 Ω (c) 402.5 Ω (d) 403.5 Ω. Answer: C

Question. In the given circuit diagram, the currents I₁ = −0.3 A, I₄ = 0.8 A and I₅ = 0.4 A are flowing as shown. The currents I₂, I₃ and I₆, respectively, are (a) 1.1 A, 0.4 A, 0.4 A (b) 1.1 A, −0.4 A, 0.4 A (c) 0.4 A, 1.1 A, 0.4 A (d) −0.4 A, 0.4 A, 1.1 A. Answer: A

Question. In the circuit shown in the figure, the current through (a) the 3 Ω resistor is 0.50 A (b) the 3 Ω resistor is 0.25 A (c) the 4 Ω resistor is 0.50 A (d) the 4 Ω resistor is 0.25 A. Answer: D

Question. In the given circuit, the cells have zero internal resistance. The currents (in amperes) passing through resistances R1 and R2, respectively, are (a) 0.5, 0 (b) 1, 2 (c) 2, 2 (d) 0, 1. Answer: A

Question. In the given circuit, the internal resistance of the 18 V cell is negligible. If R1 = 400 Ω, R3 = 100 Ω and R4 = 500 Ω, and the reading of an ideal voltmeter across R4 is 5 V, then the value of R2 will be (a) 550 Ω (b) 230 Ω (c) 300 Ω (d) 450 Ω. Answer: C

Question. When the switch S in the circuit shown is closed, the value of the current i will be (a) 4 A (b) 3 A (c) 2 A (d) 5 A. Answer: D

Question. Two batteries with emf 12 V and 13 V are connected in parallel across a load resistor of 10 Ω. The internal resistances of the two batteries are 1 Ω and 2 Ω, respectively. The voltage across the load lies between (a) 11.7 V and 11.8 V (b) 11.6 V and 11.7 V (c) 11.5 V and 11.6 V (d) 11.4 V and 11.5 V. Answer: C

Question. In the circuit below, the current in each resistance is (a) 0.25 A (b) 0.5 A (c) 0 A (d) 1 A. Answer: C

Question. In the circuit shown below, the current in the 1 Ω resistor is (a) 1.3 A, from P to Q (b) 0.13 A, from Q to P (c) 0 A (d) 0.13 A, from P to Q. Answer: B

Question. Find the value of the current through the 2 Ω resistance for the given circuit: (a) 5 A (b) 2 A (c) zero (d) 4 A. Answer: C

Question. In the given circuit, it is observed that the current I is independent of the value of the resistance R6. The resistance values must then satisfy... (The formula options are not reproduced here.) Answer: C

Question. In the circuit shown (P ≠ R), the reading of the galvanometer is the same with switch S open or closed. Then (a) I_R = I_G (b) I_F = I_G (c) I_Q = I_G (d) I_Q = I_R. Answer: A

Question. A battery of internal resistance 4 Ω is connected to the network of resistances shown in the figure. In order that maximum power can be delivered to the network, the value of R (in Ω) should be (a) 4/9 (b) 2 (c) 8/3 (d) 18. Answer: B

Read the two statements, Assertion (A) and Reason (R), carefully and mark the correct option from those given below:
(a) Both Assertion (A) and Reason (R) are correct statements, and the Reason is the correct explanation of the Assertion.
(b) Both Assertion (A) and Reason (R) are correct statements, but the Reason is not the correct explanation of the Assertion.
(c) Assertion (A) is a correct statement but Reason (R) is a wrong statement.
(d) Assertion (A) is a wrong statement but Reason (R) is a correct statement.

Question. Assertion: The drift velocity of electrons in a metallic wire will decrease if the temperature of the wire is increased. Reason: On increasing temperature, the conductance of a metallic wire decreases. Answer: B

Question. Assertion: The chemical reactions involved in primary cells are irreversible, and those in secondary cells are reversible. Reason: Primary cells can be recharged, but secondary cells cannot be recharged. Answer: C

Question. Assertion: If the length of the conductor is doubled, the drift velocity will become half of the original value (keeping the potential difference unchanged). Reason: At constant potential difference, drift velocity is inversely proportional to the length of the conductor. Answer: A

Question. Assertion: The material used in the construction of a standard resistance is constantan or manganin. Reason: The temperature coefficient of constantan is very small. Answer: A

Question. Assertion: A 200 W bulb glows with more brightness than a 100 W bulb. Reason: A 100 W bulb has more resistance than a 200 W bulb. Answer: A

Question. Assertion: Fuse wire must have high resistance and a low melting point. Reason: A fuse is used for small current flow only. Answer: C

Question. Assertion: Two electric bulbs of 50 watt and 100 watt are given. When connected in series, the 50 watt bulb glows more, but when connected in parallel, the 100 watt bulb glows more. Reason: In a series combination, power is directly proportional to the resistance of the circuit, but in a parallel combination, power is inversely proportional to the resistance of the circuit. Answer: A

Question. Assertion: It is advantageous to transmit electric power at high voltage. Reason: High voltage implies high current. Answer: C

Question. Assertion: Though the same current flows through the line wires and the filament of the bulb, the heat produced in the filament is much higher than that in the line wires. Reason: The filament of bulbs is made of a material of high resistance and high melting point.
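Several of the numerical questions above reduce to one-line applications of relations such as I = n e A v_d and Millman's theorem. As an illustration (added here; not part of the original question bank), the drift-speed and parallel-battery questions can be checked as follows:

```python
# Drift-speed question: I = n e A v_d  =>  v_d = I / (n e A)
I, A = 1.5, 5e-6                  # current in amperes; 5 mm^2 in m^2
n_e, e = 9e28, 1.6e-19            # electron density (m^-3) and charge (C), as given
v_d = I / (n_e * e * A)
print(f"v_d = {v_d * 1e3:.3f} mm/s")   # ~0.021 mm/s -> option (a) 0.02

# Two-battery question, via Millman's theorem:
# V = (E1/r1 + E2/r2) / (1/r1 + 1/r2 + 1/R)
E1, r1, E2, r2, R = 12.0, 1.0, 13.0, 2.0, 10.0
V = (E1 / r1 + E2 / r2) / (1 / r1 + 1 / r2 + 1 / R)
print(f"V = {V:.4f} V")                # 11.5625 V -> between 11.5 V and 11.6 V, option (c)
```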
4,346
14,910
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.296875
3
CC-MAIN-2024-26
latest
en
0.882742
https://ncatlab.org/nlab/show/string+2-group
1,656,661,747,000,000,000
application/xhtml+xml
crawl-data/CC-MAIN-2022-27/segments/1656103922377.50/warc/CC-MAIN-20220701064920-20220701094920-00560.warc.gz
480,147,369
21,897
# nLab string 2-group

Context: spin geometry, string geometry.

| Dynkin label | sp. orth. group | spin group | pin group | semi-spin group |
|---|---|---|---|---|
|  | SO(2) | Spin(2) | Pin(2) |  |
| B1 | SO(3) | Spin(3) | Pin(3) |  |
| D2 | SO(4) | Spin(4) | Pin(4) |  |
| B2 | SO(5) | Spin(5) | Pin(5) |  |
| D3 | SO(6) | Spin(6) |  |  |
| B3 | SO(7) | Spin(7) |  |  |
| D4 | SO(8) | Spin(8) |  | SO(8) |
| B4 | SO(9) | Spin(9) |  |  |
| D5 | SO(10) | Spin(10) |  |  |
| B5 | SO(11) | Spin(11) |  |  |
| D6 | SO(12) | Spin(12) |  |  |
| $\vdots$ |  |  |  |  |
| D8 | SO(16) | Spin(16) |  | SemiSpin(16) |
| $\vdots$ |  |  |  |  |
| D16 | SO(32) | Spin(32) |  | SemiSpin(32) |

# Contents

## Idea

The string 2-group is a smooth 2-group-refinement of the topological group called the string group. It is the ∞-group extension induced by the smooth/stacky version of the first fractional Pontryagin class/second Chern class.

## Definition

A string 2-group extension $String(G)$ is defined for every simple simply connected compact Lie group $G$, such as the spin group $G = Spin(n)$ or the special unitary group $G = SU(n)$ (for non-low $n$). Since string structures arise predominantly as higher analogs of spin structures, the default choice is $G = Spin$, and in that case one usually just writes $String = String(Spin)$, for short.

Recall first that the string group in Top is one step in the Whitehead tower of the orthogonal group.

###### Definition

For $n \in \mathbb{N}$ let $Spin(n)$ denote the spin group, regarded as a topological group. Write $B Spin(n) \in$ Top for its classifying space and $\frac{1}{2}p_1 : B Spin(n) \to B^4 \mathbb{Z}$ for a representative of the characteristic class called the first fractional Pontryagin class. Its homotopy fiber in the (∞,1)-topos Top $\simeq$ ∞Grpd is denoted $B String(n) := B O(n)\langle 7 \rangle$:

$\array{ B String(n) &\to& * \\ \downarrow && \downarrow \\ B Spin(n) &\stackrel{\frac{1}{2} p_1}{\to}& B^4 \mathbb{Z} } \,.$

The loop space $String(n) := \Omega B String(n)$ is the ∞-group-object in Top called the string group.

Write now

$(\Pi \dashv Disc \dashv \Gamma \dashv coDisc) : Smooth\infty Grpd \stackrel{\overset{\Pi}{\to}}{\stackrel{\overset{Disc}{\leftarrow}}{\stackrel{\overset{\Gamma}{\to}}{\underset{coDisc}{\leftarrow}}}} \infty Grpd \simeq Top$

for the (∞,1)-topos Smooth∞Grpd of smooth ∞-groupoids, regarded as a cohesive (∞,1)-topos over ∞Grpd.

###### Proposition

There is a lift through $\Pi$ of $\frac{1}{2} p_1$ to the smooth first fractional Pontryagin class $\frac{1}{2}\mathbf{p}_1 : \mathbf{B}Spin(n) \to \mathbf{B}^3 U(1)$ in Smooth∞Grpd. This is shown in (FSS).

###### Definition

Write $\mathbf{B}String(n)$ for the homotopy fiber of the smooth first fractional Pontryagin class

$\array{ \mathbf{B}String &\to& * \\ \downarrow && \downarrow \\ \mathbf{B}Spin &\stackrel{\frac{1}{2}\mathbf{p}_1}{\to}& \mathbf{B}^3 U(1) }$

in Smooth∞Grpd. Its loop space object $String(n) := \Omega \mathbf{B}String(n)$ is the smooth ∞-group called the smooth string 2-group.

## Properties

Write $\vert - \vert := \vert\Pi(-)\vert : Smooth \infty Grpd \stackrel{\Pi}{\to} \infty Grpd \stackrel{\vert - \vert}{\to} Top$.

###### Proposition

The smooth string 2-group (def. above) indeed maps under $\vert-\vert$ to the topological string group: $\vert \mathbf{B}String(n) \vert \simeq B String(n) \,.$

###### Proof

Since $\mathbf{B}^3 U(1)$ is presented by a simplicial presheaf that is degreewise presented by a paracompact smooth manifold (a finite product of the circle group with itself), it follows from the general properties of $\Pi$ discussed at Smooth∞Grpd that $\Pi$ preserves the homotopy fiber of $\frac{1}{2}\mathbf{p}_1$.

## Presentations

Several explicit presentations of the string Lie 2-group are known.
### By Lie integration of the string Lie 2-algebra

We discuss a presentation of the smooth string 2-group by Lie integration of the skeletal version of the string Lie 2-algebra. Recall the identification of L-∞ algebras $\mathfrak{g}$ with their dual Chevalley-Eilenberg algebras $CE(\mathfrak{g})$.

###### Definition

Write $\mu := \langle - ,[-,-]\rangle : \mathfrak{so}(n) \to b^2 \mathbb{R}$ for the canonical degree-3 cocycle in the Lie algebra cohomology of the special orthogonal group, normalized such that the 3-form

$\Omega^\bullet(Spin(n)) \hookleftarrow CE(\mathfrak{so}(n)) \stackrel{\mu}{\leftarrow} CE(b^2 \mathbb{R})$

represents the image in de Rham cohomology of a generator of the integral cohomology group $H^3(G,\mathbb{Z}) \simeq \mathbb{Z}$.

Define the string Lie 2-algebra $\mathfrak{string}(n) := \mathfrak{so}(n)_\mu$ to be given by the Chevalley-Eilenberg algebra

$CE(\mathfrak{string}(n)) := \wedge^\bullet ( \mathfrak{so}(n)^* \oplus \langle b\rangle , d_{\mathfrak{string}})$

which is that of $\mathfrak{so}(n)$ with a single generator $b$ in degree 2 adjoined (so that $d b$ is of degree 3) and the differential given by

$d_{\mathfrak{string}}|_{\mathfrak{so}(n)^*} = d_{\mathfrak{so}(n)};$

$d_{\mathfrak{string}} : b \mapsto \mu \,.$

###### Proposition

We have a pullback square in $L_\infty Alg$

$\array{ \mathfrak{string}(n) &\to& e b \mathbb{R} \\ \downarrow && \downarrow \\ \mathfrak{so}(n) &\stackrel{\mu}{\to}& b^2 \mathbb{R} } \,.$

See string Lie 2-algebra for more discussion.

###### Proposition

The Lie integration of $\mathfrak{string}(n)$ yields a presentation of the smooth string 2-group of the definition above:

$\mathbf{cosk}_3 \exp(\mathfrak{string}(n)) \simeq \mathbf{B} String(n) \,.$

This is essentially the model considered in (Henriques), discussed here in the context of Smooth∞Grpd as described in (FSS).

###### Proof

We observe that the image under Lie integration of the $L_\infty$-algebra pullback diagram from the proposition above is a pullback diagram in $[CartSp_{smooth}^{op}, sSet]_{proj}$ that presents the defining homotopy fiber. Before applying the coskeleton operation we have immediately

$\exp(-) \; :\; \left( \array{ \mathfrak{string}(n) &\to& e b \mathbb{R} \\ \downarrow && \downarrow \\ \mathfrak{so}(n) &\stackrel{\mu}{\to}& b^2 \mathbb{R} } \right) \;\mapsto \; \left( \array{ \exp(\mathfrak{string}(n)) &\to& \exp(e b \mathbb{R}) \\ \downarrow && \downarrow \\ \exp(\mathfrak{so}(n)) &\stackrel{\mu}{\to}& \exp(b^2 \mathbb{R}) } \right)$

such that on the right we still have a pullback diagram. We discuss the descent of this pullback diagram along the projection $\exp(\mathfrak{so}(n)) \to \mathbf{cosk}_3 \exp(\mathfrak{so}(n))$. Notice from Lie integration the weak equivalence

$\int_{\Delta^\bullet} : \exp(b^n \mathbb{R}) \simeq \mathbf{B}^{n+1}\mathbb{R}_c \,.$

Let $I$ be the set of maps $\partial \Delta[4] \to \exp(b^2 \mathbb{R})$ that fit into a diagram

$\array{ \partial \Delta[4] &\to& \exp(b^2 \mathbb{R}) \\ \downarrow && \downarrow^{\mathrlap{\int_{\Delta^\bullet}}} \\ && \mathbf{B}^3 \mathbb{R}_c \\ \downarrow && \downarrow \\ \Delta[4] &\to& \mathbf{B}^3 (\mathbb{Z} \to \mathbb{R})_c }$

(closed 3-forms on 3-balls whose integral is an integer). Write

$\exp(b^2 \mathbb{R}/\mathbb{Z}) := \mathbf{cosk}_3 \left( (I \times \Delta[4])\coprod_{I \times \partial \Delta[4]} \mathbf{cosk_3} \exp(b^2 \mathbb{R}) \right)$

for the result of filling all these by 4-cells. Similarly define $\exp(e b \mathbb{R}/\mathbb{Z})$.
Then applying the coskeleton functor to the above pullback diagram and using the projection (FSS)

$\array{ \exp(\mathfrak{so}(n)) &\stackrel{\exp(\mu)}{\to}& \exp(b^2 \mathbb{R}) \\ \downarrow && \downarrow \\ \mathbf{cosk}_3\exp(\mathfrak{so}(n)) &\stackrel{\frac{1}{2}\mathbf{p}_1}{\to}& \exp(b^2 \mathbb{R}/\mathbb{Z}) }$

we get the diagram

$\array{ \mathbf{cosk}_3 \exp(\mathfrak{string}(n)) &\to& \exp(e b \mathbb{R}/\mathbb{Z}) \\ \downarrow && \downarrow \\ \mathbf{cosk}_3 \exp(\mathfrak{so}) &\stackrel{\frac{1}{2}\mathbf{p}_1}{\to}& \exp(b^2 \mathbb{R}/\mathbb{Z}) } \,.$

This is again a pullback diagram of a fibration resolution of the point inclusion, hence presents the homotopy fiber in question.

### By strict Lie $2$-group

A realization of the string 2-group as a strict 2-group internal to diffeological spaces was given in (BCSS). This is one of three different (there should be more) weakly equivalent such models by strict 2-groups internal to diffeological spaces. (This particular section, and its results, are joint work of Urs Schreiber and Danny Stevenson, to date unpublished.)

We have the following pattern of routes through Lie integration:

$\array{ StrLie \omega Grpd &&&& StrLie \omega Grpd &\stackrel{\simeq}{\leftarrow}& LieCrsdCmplx \\ \uparrow^{\Pi_n S CE} &&&& \uparrow && \uparrow^{\exp(-)} \\ L_\infty Algebras && \leftarrow&& Str L_\infty Algebras &\to& DiffCrsdCmplx }$

Here $StrLie \omega Grpd$ is strict ω-groupoids internal to diffeological spaces, $LieCrsdCmplx$ is accordingly smooth crossed complexes, $L_\infty Algebras$ is all L-infinity algebras, and $Str L_\infty Algebras$ is strict $L_\infty$-algebras. The vertical morphism on the right is term-wise ordinary Lie integration. The other vertical morphisms take an L-infinity algebra, form the sheaf on Diff of flat ∞-Lie algebroid differential forms, and then take the path n-groupoid $\Pi_n(-)$ of that. For the String case this yields

$\array{ \Pi_2(\Omega^\bullet_{fl}(-,\mathfrak{so}_{\mu_3})) &\stackrel{\simeq}{\mapsto}& \mathbf{B} String_{Mick} &\stackrel{\simeq}{\mapsto}& \mathbf{B} String_{BCSS} &\leftarrow|& (\hat \Omega Spin \to P Spin) \\ \uparrow &&&\nearrow& \uparrow && \uparrow \\ \mathfrak{so}_{\mu_3} &&\stackrel{\simeq}{\mapsto}&& \mathfrak{string} &\mapsto& (\hat \Omega \mathfrak{so} \to P \mathfrak{so}) } \,,$

where

• $\mathfrak{so}_{\mu_3}$ denotes the weak, skeletal string Lie 2-algebra;
• $\mathfrak{string}$ is its equivalent strict version given by BCSS;
• the diagonal morphism is the construction in BCSS;
• the strict 2-groupoid $\Pi_2(\Omega^\bullet_{fl}(-,\mathfrak{so}_{\mu_3}))$ has, notice, as morphisms smooth paths in $Spin(n)$ that are composed by concatenation;
• the 2-groupoid $\mathbf{B}String_{Mick}$ is a version of the string Lie 2-group that manifestly uses the Mickelsson cocycle (morphisms are paths in $Spin(n)$ that are composed using the group product);
• the 2-groupoid $\mathbf{B}String_{BCSS}$ is the version given in BCSS (morphisms again are paths in $Spin(n)$ that are composed using the group product).
### As an automorphism 2-group of fermionic CFT

The string 2-group also appears as a certain automorphism 2-group inside the 3-category of fermionic conformal nets (Douglas-Henriques).

### As the automorphisms of the Wess-Zumino-Witten gerbe 2-connection

For $G$ a compact simply connected simple Lie group, there is the "WZW gerbe", hence the circle 2-bundle with connection on $G$ whose curvature 3-form is the left-invariant extension $\langle \theta \wedge [\theta \wedge \theta]\rangle$ of the canonical Lie algebra 3-cocycle to the group:

$\mathcal{L}_{WZW} \;\colon\; G \longrightarrow \mathbf{B}^2U(1) \,.$

###### Proposition

The string 2-group is the smooth 2-group of automorphisms of $\mathcal{L}_{WZW}$ which cover the left action of $G$ on itself (hence the "Heisenberg 2-group" of $\mathcal{L}_{WZW}$ regarded as a prequantum 2-bundle):

$\mathbf{Aut}(\mathcal{L}_{WZW}) \simeq String(G) \,.$

This is due to (Fiorenza-Rogers-Schreiber 13, section 2.6.1).

fivebrane 6-group $\to$ string 2-group $\to$ spin group $\to$ special orthogonal group $\to$ orthogonal group $\hookrightarrow$ general linear group

## References

A crossed module presentation of a topological realization of the string 2-group is implicit in the literature.

A realization of the string 2-group in ∞-groupoids internal to Banach spaces, by Lie integration of the skeletal version of the string Lie 2-algebra, is in: André Henriques, Integrating L∞-algebras (the model referred to as (Henriques) above).

A realization of the string 2-group in strict 2-groups internal to Fréchet manifolds, by Lie integration of a strict Lie 2-algebra incarnation of the string Lie 2-algebra, is in: John Baez, Alissa Crans, Urs Schreiber, Danny Stevenson, From loop groups to 2-groups (cited as (BCSS) above).

A realization of the string 2-group as a 2-group in finite-dimensional smooth manifolds is in the literature.

A discussion as an ∞-group object in Smooth∞Grpd, and the realization of the smooth first fractional Pontryagin class, is in (FSS).

A 2-group model which has a smoothening of the topological string group in lowest degree has been given in the literature.

A construction explicitly in terms of the "basic" bundle gerbe on $G$ is discussed in the literature.

Via fermionic nets/2-Clifford algebra: see (Douglas-Henriques).

The realization of the string 2-group as the Heisenberg 2-group of the WZW gerbe is due to (Fiorenza-Rogers-Schreiber 13).

A model of the string 2-group using the smooth free loop space (instead of the based loop space) is discussed in the literature, as are a discussion in the context of matrix factorizations and equivariant K-theory, further 2-group-extensions by the circle 2-group (e.g. of tori), and a general definition of smooth string group extensions $A\rightarrow \mathrm{String}(H)\rightarrow H$ of a compact simply connected Lie group $H$, with $A$ not necessarily chosen to be $\mathbf{B}U(1)$ but only of the same homotopy type.
4,137
12,918
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 107, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.828125
3
CC-MAIN-2022-27
longest
en
0.616785
http://www.npl.washington.edu/av/altvw81.html
1,448,650,747,000,000,000
text/html
crawl-data/CC-MAIN-2015-48/segments/1448398450559.94/warc/CC-MAIN-20151124205410-00162-ip-10-71-132-137.ec2.internal.warc.gz
590,312,097
7,410
Analog Science Fiction & Fact Magazine, "The Alternate View" columns of John G. Cramer

# The Alcubierre Warp Drive

### by John G. Cramer

Alternate View Column AV-81. Keywords: Alcubierre warp drive, FTL, spacewarp solution, Einstein's equations, general relativity. Published in the November-1996 issue of Analog Science Fiction & Fact Magazine. This column was written and submitted 4/15/96 and is copyrighted ©1996 by John G. Cramer; it may not be reproduced without the explicit permission of the author.

The theoretical physicist Miguel Alcubierre was born in Mexico City, where he lived until 1990, when he traveled to Cardiff in the UK to enter graduate school at the University of Wales. He received his PhD from that institution in 1993 for research in numerical general relativity, solving Einstein's gravitational equations with fast computers. He continues to work in this field, devising numerical techniques for describing the physics of orbiting black holes that spin down to collision.

Two years ago Alcubierre published a remarkable paper which grew from his work in general relativity, the current "standard model" for space-time and gravitation. His paper describes a very unusual solution to Einstein's equations of general relativity, described in the title as a "warp drive" and in the abstract as "a modification of space time in a way that allows a space ship to travel at an arbitrarily large speed". In this Alternate View column, I want to explore Alcubierre's work and its implications.

Let's start by considering the well-known velocity-of-light speed limit, as viewed by special relativity and by general relativity. In the context of special relativity, the speed of light is the absolute speed limit of the universe for any object having a real mass (i.e., everything but the semi-mythical tachyon), for two reasons. First, giving a fast object even more kinetic energy has the main effect of causing an increase in mass-energy rather than speed, with mass-energy going infinite as speed snuggles up to the velocity of light. By this mechanism, relativistic mass increase limits massive objects to sub-light velocities.

There is also a second faster-than-light (FTL) prohibition supplied by special relativity. Suppose a device like the "ansible" of LeGuin and Card were discovered that permitted faster-than-light or instantaneous communication. Special relativity is based on the treatment of all reference frames (i.e., coordinate systems moving at some constant velocity) with perfect even-handedness and democracy. Therefore, FTL communication is implicitly ruled out by special relativity because it could be used to perform "simultaneity tests" of the readings of separated clocks which would reveal the preferred or "true" reference frame of the universe. The existence of such a preferred frame is in conflict with special relativity.

General relativity treats special relativity as a restricted sub-theory that applies locally to any region of space sufficiently small that its curvature can be neglected. General relativity does not forbid faster-than-light travel or communication, but it does require that the local restrictions of special relativity must apply. In other words, light speed is the local speed limit, but the broader considerations of general relativity may provide an end-run way of circumventing this local statute. One example of this is a wormhole [see my AV columns in Analog, June-1989 and May-1990] connecting two widely separated locations in space, say five light-years apart.
An object might take a few minutes to move at low speed through the neck of a wormhole, observing the local speed-limit laws all the way. However, by transiting the wormhole the object has traveled five light-years in a few minutes, producing an effective speed of a million times the velocity of light.

Another example of FTL in general relativity is the expansion of the universe itself. As the universe expands, new space is being created between any two separated objects. The objects may be at rest with respect to their local environment and with respect to the cosmic microwave background, but the distance between them may grow at a rate greater than the velocity of light. According to the standard model of cosmology, parts of the universe are receding from us at FTL speeds, and therefore are completely isolated from us. As the rate of expansion of the universe diminishes due to the pull of gravity, remote parts of the universe that have been out of light-speed contact with us since the Big Bang are coming over the lightspeed horizon and becoming newly visible to our region of the universe.

Alcubierre has proposed a way of beating the FTL speed limit that is somewhat like the expansion of the universe, but on a more local scale. He has developed a "metric" for general relativity, a mathematical representation of the curvature of space, that describes a region of flat space surrounded by a "warp" that propels it forward at any arbitrary velocity, including FTL speeds. Alcubierre's warp is constructed of hyperbolic tangent functions which create a very peculiar distortion of space at the edges of the flat-space volume. In effect, new space is rapidly being created (like an expanding universe) at the back side of the moving volume, and existing space is being annihilated (like a universe collapsing to a Big Crunch) at the front side of the moving volume. Thus, a space ship within the volume of the Alcubierre warp (and the volume itself) would be pushed forward by the expansion of space at its rear and the contraction of space in front. [A figure from Alcubierre's paper, showing the curvature of space in the region of the travelling warp, appeared here.]

For those familiar with the usual rules of special relativity, with its Lorentz contraction, mass increase, and time dilation, the Alcubierre warp metric has some rather peculiar aspects. Since a ship at the center of the moving volume of the metric is at rest with respect to locally flat space, there are no relativistic mass increase or time dilation effects. The on-board spaceship clock runs at the same speed as the clock of an external observer, and that observer will detect no increase in the mass of the moving ship, even when it travels at FTL speeds. Moreover, Alcubierre has shown that even when the ship is accelerating, it travels on a free-fall geodesic. In other words, a ship using the warp to accelerate and decelerate is always in free fall, and the crew would experience no accelerational gee-forces. Enormous tidal forces would be present near the edges of the flat-space volume because of the large space curvature there, but by suitable specification of the metric, these would be made very small within the volume occupied by the ship.

All of this, for those of us who would like to go to the stars without the annoying limitations imposed by special relativity, appears to be too good to be true. "What's the catch?" we ask. As it turns out, there are two "catches" in the Alcubierre warp drive scheme.
The first is that, while his warp metric is a valid solution of Einstein's equations of general relativity, we have no idea how to produce such a distortion of space-time. Its implementation would require the imposition of radical curvature on extended regions of space. Within our present state of knowledge, the only way of producing curved space is by using mass, and the masses we have available for works of engineering lead to negligible space curvature. Moreover, even if we could do engineering with mini black holes (which have lots of curved space near their surfaces) it is not clear how an Alcubierre warp could be produced. Alcubierre has also pointed out a more fundamental problem with his warp drive. General relativity provides a procedure for determining how much energy density (energy per unit volume) is implicit in a given metric (or curvature of space-time). He shows that the energy density is negative, rather large, and proportional to the square of the velocity with which the warp moves forward. This means that the weak, strong, and dominant energy conditions of general relativity are violated, which can be taken as arguments against the possibility of creating a working Alcubierre drive. Alcubierre, following the lead of wormhole theorists, argues that quantum field theory permits the existence of regions of negative energy density under special circumstances, and cites the Casimir effect as an example. Thus, the situation for the Alcubierre drive is similar to that of stable wormholes: they are solutions to the equations of general relativity, but one would need "exotic matter" with negative mass-energy to actually produce them, and we have none at the moment. The possibilities for FTL travel or communication implicit in the Alcubierre drive raise the possibility of causality violations and "timelike loops", i.e., back-in-time communication and time travel. Alcubierre points out that his metric contains no such closed causal loops, and so is free of their paradoxes. However, he speculates that it would probably be possible to construct a metric similar to the one he presented which would contain such loops. A scheme for converting FTL signaling to back-in-time signaling requires some gymnastics with moving reference frames to invert the time sequence of the "send" event and the "receive" event in a signal transmission. I described such a scheme in a recent column on quantum tunneling and FTL signaling [Analog, December-1995]. In the case of the` Alcubierre drive, this would probably require either externally moving the warp generating mechanism at near lightspeed velocities or embedding one warp within the flat-space region of another. The implications of the Alcubierre warp drive for science fiction are fairly clear. If the theoretical and engineering problems outlined above could be overcome, we would have FTL travel, fully consistent with general relativity, that is reminiscent of the warp drives of the good old-time space operas. Remember, however, that using such a drive would undoubtedly require the manipulation of planet-scale quantities of energy (positive or negative). The user would also have to be very careful to avoid the tidal forces of the distorted-space region at the edges of the flat-space region containing the ship. And there is also the question of writing the environmental impact statement. What would happen to external objects (space dust, rocks, other ships, asteroids, planets, ...) 
that happened to lie in the path of an Alcubierre ship and entered the region of distorted space-time at the leading edge of the warp, where space is rapidly being collapsed? The nuclei of any matter transiting that region would first experience enormous compressional forces, probably form a quark-gluon plasma reminiscent of the first microsecond of the Big Bang, and then explode in a flood of pi mesons and other fundamental particles when the compression forces were released, stealing energy from the warp field in the process. A ship traveling in an Alcubierre space warp should be equipped with plenty of radiation shielding. Perhaps that is not a problem, since the equations for the metric and the energy density of the warp do not seem to depend on how much mass is placed in the flat-space region which is given an FTL velocity.

References:

The Alcubierre Warp Drive: Miguel Alcubierre, Classical and Quantum Gravity, v. 11, pp. L73-L77, (1994).

General Relativity: C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation, W.H. Freeman (1973).

SF Novels by John Cramer: my two hard SF novels, Twistor and Einstein's Bridge, are newly released as eBooks by Book View Cafe and are available at: http://bookviewcafe.com/bookstore/?s=Cramer .

AV Columns Online: Electronic reprints of about 177 "The Alternate View" columns by John G. Cramer, previously published in Analog, are available online at: http://www.npl.washington.edu/av.

Note (2/18/97): see also a recent paper by Pfenning and Ford applying quantum limits to the Alcubierre warp drive. JGC
2,551
12,204
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2015-48
longest
en
0.940584
https://math.stackexchange.com/questions/3141858/sequential-analysis-without-knowing-the-hypothetical-probability-distribution
1,566,696,783,000,000,000
text/html
crawl-data/CC-MAIN-2019-35/segments/1566027322160.92/warc/CC-MAIN-20190825000550-20190825022550-00271.warc.gz
536,914,716
29,046
# Sequential analysis without knowing the hypothetical probability distribution? When learning sequential probability ratio test, I get the impression that one should know exactly what the hypothesis is, and what the likelihood function is, in order to calculate and accumulate the likelihood ratio. But what if we don't know the exact form of likelihood function? Suppose in a game, a person is faced with $$N$$ screens, and each screen will show one random number every second. The person is told that for each screen, all the random numbers will be generated from one specific probability distribution (and will never be changed), but exactly what distribution is not known. The person is also told that one of the distributions has an expectation $$X$$. The poor guy's job is to guess which screen has the distribution with expectation $$X$$ based on observing the numbers sequentially, and should decide AS QUICKLY AS possible once reaching enough confidence. What is the theoretically best way to finish this game? I understand that central limit theorem (CLT) states that when observations are long enough, the sum of all random numbers from any distribution approaches normal distribution. But in this case the person may need to decide with only a few observations. And I am not sure whether CLT is applicable here. Thanks for any hints and suggestions.
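Not an answer, but one concrete nonparametric baseline that makes the question precise: if the observations are known to be bounded in some interval [a, b] (an extra assumption the puzzle does not grant), you can maintain an anytime-valid Hoeffding confidence interval around each screen's running mean and stop as soon as exactly one screen's interval still contains X. The function name and the union-bound schedule below are my own choices, not a standard API:

```python
import math
import random

def pick_screen_with_mean(streams, X, a=0.0, b=1.0, delta=1e-3, max_steps=100_000):
    """Sequentially watch `streams` (one callable per screen, each returning one
    observation per call) and guess which screen has mean X.

    Nonparametric sketch: for observations bounded in [a, b], Hoeffding's
    inequality gives a confidence radius around each running mean; a union
    bound over screens and time steps makes it valid at every stopping time.
    We stop as soon as exactly one screen's interval still contains X.
    """
    n_screens = len(streams)
    sums = [0.0] * n_screens
    for t in range(1, max_steps + 1):
        for i, draw in enumerate(streams):
            sums[i] += draw()
        # Anytime-valid radius: per-step budget delta / (n_screens * t * (t+1)),
        # which sums to delta over all screens and all times.
        radius = (b - a) * math.sqrt(
            math.log(2 * n_screens * t * (t + 1) / delta) / (2 * t))
        candidates = [i for i in range(n_screens)
                      if abs(sums[i] / t - X) <= radius]
        if len(candidates) == 1:
            return candidates[0], t    # (screen index, observations used)
    return None, max_steps             # undecided within the budget

# Toy usage: three screens, the middle one has mean 0.5.
random.seed(0)
screens = [lambda: random.betavariate(2, 6),   # mean 0.25
           lambda: random.random(),            # mean 0.50
           lambda: random.betavariate(6, 2)]   # mean 0.75
print(pick_screen_with_mean(screens, X=0.5))
```

Without the boundedness (or some other tail) assumption there is no distribution-free guarantee at all, which is exactly why the classical SPRT insists on fully specified likelihoods.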
258
1,367
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2019-35
latest
en
0.947295
http://www.reddit.com/r/shittyaskscience/comments/10tcz5/if_a_unicorn_has_one_horn_does_a_unihorn_have_one/?sort=controversial
1,405,223,763,000,000,000
text/html
crawl-data/CC-MAIN-2014-23/segments/1404776435941.77/warc/CC-MAIN-20140707234035-00051-ip-10-180-212-248.ec2.internal.warc.gz
451,892,100
14,172
[–] IAMA sciencer with over 30 years experience in scientistry. AMA!
No. Unihorns have one fawn, while Unifawns have one vaughn. Univaughns are the ones that have one corn. The suffix-to-possession relationship rotates in kind of a square shape, if that makes sense

[–] PhD in sticking lightbulbs in microwaves [S]
Science is hard...

[–]
How many corns does Vince Vaughn have?

[–] Head of Wumbology Dep. at South Hampton Institute of Technology
Exactly pi corns.

[–] IAMA sciencer with over 30 years experience in scientistry. AMA!
Well Vince Vaughn is in a movie called "The Watch", yeah? It can be estimated that several some people will see the movie in theatres. Let that number of people = p It can be estimated that, of those people, about 70% of those will buy popcorn. And in each box of popcorn, it may be estimated that the number of corn kernels used to create the popcorn is equivalent to 1.5 cobs. Let cobs = c There are 4 lead actors in the movie "The Watch", so assuming they all took equal share of the corn, the formula would be something like this: c = (0.7p x 1.5)/4 = (1.05p)/4 = 0.2625p Therefore, we can conclude that the number of corns that Vince Vaughn has is equivalent to 0.2625 times the number of people who see the movie "The Watch" in cinemas. EDIT: Math was shitty

[–]
Shit. Fuck my Degree in Biology. Can't even explain this shit.

[–] My dissertation was on a cocktail napkin
Excellent question. I hope Dr. Scholls is a redditor. I'm really eager to hear his response.
542
2,057
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.75
4
CC-MAIN-2014-23
latest
en
0.969533
https://www.studysmarter.us/explanations/chemistry/physical-chemistry/thermodynamics/
1,653,220,129,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662545326.51/warc/CC-MAIN-20220522094818-20220522124818-00480.warc.gz
1,179,293,973
35,223
# Thermodynamics

When you think of chemistry, you might imagine a scientist in a laboratory creating an explosive reaction. Some chemical reactions release energy in the form of heat. Physical processes also involve energy. For example, when ice melts, it requires energy to change from a solid state to a liquid. Thermodynamics is all about the energy changes involved in physical and chemical processes.

## What is thermodynamics?

Thermodynamics is the study of thermal energy or heat in chemical and physical processes. It deals with how thermal energy converts to other kinds of energy and how this affects the properties of a system. In thermodynamics we separate the universe into two parts: the system and the surroundings. We do this to make it easier to work out calculations. A system is a substance or a collection of substances and energy. Everything else that is not in the system, we call the surroundings. For example, if a reaction takes place in a jar, the jar is the system. Everything outside the jar is the surroundings.

A thermodynamic system. Olive [Odagbu] - StudySmarter Originals

### What is energy?

Before we get into thermodynamics, we need to talk about energy. What is energy? Scientists struggle to define it. Here is a simple definition: Energy is the capacity to do work or to transfer heat. Let's break that down: In chemistry, work (w or W) is when a force acts on something to make it move. So if there is no motion, no work is done. Heat (q or Q) means the transfer of energy through thermal interactions like radiation or conduction. Everything in the universe is energy. Even when things are not moving they have the capacity to do work or transfer heat. We classify energy into two basic types: kinetic energy and potential energy. Kinetic energy has to do with moving objects, while potential energy is stored energy. All other kinds of energy come under these two basic types. We can see the relationship between kinetic and potential energy in a roller coaster. When the wagon reaches a crest and pauses it has no kinetic energy, but a lot of potential energy. When the roller coaster is allowed to free fall, as it increases in speed, its kinetic energy also increases. (Image credit: blogs.unicamp.br)

## What are the laws of thermodynamics?

The laws of thermodynamics help us understand how energy moves. Scientists like Isaac Newton and James Joule discovered four basic principles that govern the study of thermodynamics. We call them the four laws of thermodynamics. In this article we will only consider the first and second laws.

### What is the first law of thermodynamics?

Previously, you learned about the law of conservation of energy which says: "Energy cannot be created or destroyed, it only converts from one form to another." In thermodynamics, we know the law of conservation of energy as the first law. However, we add an extra sentence: "The total amount of energy in the universe is constant." -The First Law of Thermodynamics

Remember everything in the universe is energy. That energy is never lost, it only changes from one form to another. So, the total amount of energy in the universe remains the same. We'll get to the second law of thermodynamics a little later. First, let's review what you know about enthalpy.

## What is enthalpy?

Enthalpy (H) is the thermal energy stored in a system. We also call it heat content.
A glass of water has a specific value of mass, volume and temperature. That same glass of water also has a specific enthalpy. But enthalpy is not as easy to measure as the volume, temperature and mass. We can never know the absolute enthalpy of a system. That is like trying to measure the total volume of water in the ocean - next to impossible! But, if you pour five litres of orange juice into the ocean, you can say that the volume has increased by five litres. In the same way, chemists are interested in the energy that goes in and out of a system. Some chemical reactions release energy in the form of heat. We call them exothermic reactions.

A heat transfer or chemical change under constant pressure is called enthalpy change or heat of reaction (𝚫H).

1. We use the Greek symbol 'delta' 𝚫 to represent change in energy.
2. All chemical reactions occur under a constant pressure.

Fortunately, enthalpy is a state function, or pathway independent. That means it does not matter how we arrive at a value for enthalpy change, we will always get the same value. So if we know the heat value at the beginning and end of an experiment, we can measure the enthalpy change.

## What is Hess' Law?

We can calculate enthalpy change using an equation called Hess' Law. So far, you have learned that enthalpy is the thermal energy in a system and that it is pathway independent. A Swiss scientist named Germain Hess summed up this discovery in a law named after him. "Enthalpy change in a chemical reaction is independent of the route by which the chemical change occurs." -Hess' Law

So as long as you start with the same reactants and end with the same products, the enthalpy change is the same. It doesn't matter whether you do it in one step, two steps or fifteen steps. Hess' Law. StudySmarter Originals

We express Hess' Law by the following equation:

ΔH(reaction) = ΔH(direct route) = ΔH(indirect route)

where ΔH(reaction) is the enthalpy change of a reaction, ΔH(direct route) is the enthalpy change along the direct route, and ΔH(indirect route) is the total enthalpy change along the indirect route.

## What is lattice enthalpy?

You have previously learned there is energy stored between the bonds of the atoms in a molecule. We call the amount of energy stored between the bonds of the atoms in a covalent compound bond enthalpy. What about ionic bonds? You may remember that we call the structure formed by an ionic compound a lattice or crystal lattice. A lattice is a regular, geometrical, 3-dimensional arrangement of atoms or ions.

Lattice enthalpy (ΔHlatt) is the enthalpy change involved in forming one mole of an ionic lattice from gaseous ions under standard state conditions. Lattice enthalpy is a measure of the strength of the bonds between the ions in an ionic compound. However, these bonds can only completely break when the ions are in a gaseous state, where they are so far apart we consider their forces to be negligible. We cannot measure lattice enthalpy - we have to calculate it. We use a type of Hess' cycles known as Born-Haber cycles to calculate lattice enthalpy.

## What are Born-Haber cycles?

A Born-Haber cycle is a theoretical model we use to calculate lattice enthalpy. Here's an overview of how the cycles work:

• We draw lines representing energy levels at different points in the reaction.
• The base line represents the ionic solid
• The top line is the energy level of the gaseous ions
• The height difference represents the lattice enthalpy or the drop in energy as we go from one to the other.

Born-Haber cycle for the lattice enthalpy of lithium fluoride.
StudySmarter Originals

Remember, we cannot measure lattice enthalpy, but we can find the enthalpy of formation experimentally. The enthalpy of formation is usually much smaller than the lattice enthalpy, so we draw it as a much smaller drop in energy in the Born-Haber cycle. You also write the species that we are interested in above the line. The principle here is the same as the one we use in Hess' Law cycles: If we create an indirect route to the gaseous ions, we can use the equation for Hess' Law to find the lattice enthalpy.

## What is the second law of thermodynamics?

Now that you know a little about lattice enthalpy and Born-Haber cycles, let's get back to the laws of thermodynamics. After you learned the first law, you might have wondered: if all the energy in the universe is constant, why is the universe so random? Why does ice melt and sugar dissolve? Why does popcorn pop all over the place? The second law explains it like this: "In spontaneous changes, the universe tends toward a state of greater disorder." -The Second Law of Thermodynamics

The second law explains why energy moves in one direction and not in the other. For example, heat will always flow from a hotter body to a colder one. Ice will always melt and sugar will dissolve. These reactions happen spontaneously. We call this randomness in the universe entropy.

### What is entropy?

Entropy (S) is a measure of the disorder of a system. The greater the disorder, the higher the entropy. The energy in natural systems tends to move in the direction of increasing entropy. Entropy also increases as a system changes from a solid to a liquid to a gas. Think about how the particles go from highly ordered in a solid to the random movements and collisions in a gas! Entropy increases as a system changes from a solid to a liquid to a gas.

### What are spontaneous reactions?

You might have guessed that a spontaneous reaction happens all by itself, without the input of energy from outside. Reactions are more likely to occur if there is an increase in entropy. So the particles move from an ordered state to a less ordered state. From the previous example of melting ice, you can see spontaneous changes do not have to happen immediately. They can be incredibly slow! Another example of a spontaneous reaction is when iron turns to rust. The reaction of iron with oxygen is a spontaneous reaction that happens over a long period of time. Eventually iron turns to iron oxide or 'rust'.

In a spontaneous reaction, change in enthalpy (ΔH) decreases while entropy increases (ΔS). The balance between enthalpy and entropy is something we call free energy or Gibbs free energy (ΔG). It shows us the route of a reaction. We express the relationship between entropy and enthalpy in the equation below (a quick numerical sketch follows at the end of this article):

ΔG = ΔH - T ΔS

Where:
ΔG: Change in free energy.
ΔH: Change in enthalpy.
ΔS: Change in entropy.
T: Absolute temperature.

## Thermodynamics - Key takeaways

• Lattice enthalpy (ΔHlatt) is the enthalpy change involved in the formation of 1 mole of an ionic lattice from gaseous ions under standard state conditions.
• We cannot measure lattice enthalpy, it must be calculated using Born-Haber cycles.
• Born-Haber cycles are a theoretical model we use to calculate lattice enthalpy. The principle is the same as Hess' Law cycles. If we create an indirect route to the gaseous ions, we can use the equation for Hess' Law to find the lattice enthalpy.
• The second law of thermodynamics states that in spontaneous changes, the universe tends toward a state of greater disorder.
• Entropy (S) is a measure of the disorder of a system.
The greater the disorder, the higher the entropy.
• A spontaneous reaction is one that happens all by itself, without the input of energy from the outside. In a spontaneous reaction, change in enthalpy (ΔH) decreases and entropy increases (ΔS).
• Gibbs free energy (ΔG) is the difference between enthalpy and entropy.

## Thermodynamics

Thermodynamics is the study of thermal energy or heat in chemical and physical processes. It deals with how thermal energy is converted to other kinds of energy and how this affects the properties of a system.

We use the letter Q to symbolise heat in thermodynamics. Heat (q or Q) means the transfer of energy through thermal interactions like radiation or conduction.

## Final Thermodynamics Quiz

Question: What is lattice enthalpy?
Answer: Lattice enthalpy is the enthalpy change involved in the formation of 1 mole of an ionic lattice from gaseous ions under standard state conditions.

Question: Which of the following represents lattice enthalpy?

Question: What is enthalpy?
Answer: Enthalpy (H) is the thermal energy stored in a system.

Question: What is entropy?
Answer: Entropy (S) is a measure of the disorder of a system.

Question: What is the second law of thermodynamics?
Answer: In spontaneous changes, the universe tends toward a state of greater disorder.

Question: What is a spontaneous reaction?
Answer: A spontaneous reaction is one that happens all by itself, without input of energy from the outside. In a spontaneous reaction, change in enthalpy (ΔH) decreases and entropy increases (ΔS).

Question: What is the equation for Gibbs free energy?
Answer: ΔG = ΔH - T ΔS

Question: In what order do we draw Born-Haber cycles?
1. Draw lines for enthalpies of formation.
2. Draw top line for gaseous ions.
3. Calculate lattice enthalpy using Hess' Law.
4. Draw base line for ionic solid.
Answer: correct order is D, B, A, C

Question: What is Hess' Law?
Answer: Enthalpy change in a chemical reaction is independent of the route by which the chemical change occurs.

Question: Which is the correct equation for Hess's Law?
Answer: Both are correct! (The enthalpy change for a reaction is equal to the sum of the enthalpy of formation of all the products minus the sum of the enthalpy of formation of all the reactants.)
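To make the ΔG = ΔH - TΔS relation above concrete, here is a minimal sketch. The numbers are illustrative only (roughly those for melting ice near its melting point, not values from this article):

```python
def gibbs_free_energy(delta_h_j, delta_s_j_per_k, temperature_k):
    """Return dG = dH - T*dS in joules per mole.

    Negative dG means the process is spontaneous at that temperature.
    """
    return delta_h_j - temperature_k * delta_s_j_per_k

# Illustrative values for ice -> water: dH ~ +6010 J/mol, dS ~ +22.0 J/(mol*K)
for T in (263.15, 273.15, 283.15):  # -10 C, 0 C, +10 C
    dG = gibbs_free_energy(6010.0, 22.0, T)
    print(f"T = {T:6.2f} K  ->  dG = {dG:+7.1f} J/mol "
          f"({'spontaneous' if dG < 0 else 'not spontaneous'})")
```

Below 273.15 K the computed ΔG is positive (ice does not melt spontaneously), above it ΔG is negative (melting is spontaneous), and at the melting point itself it is approximately zero.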
3,006
13,883
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.140625
3
CC-MAIN-2022-21
latest
en
0.899066
https://numbermatics.com/n/107520/
1,713,935,616,000,000,000
text/html
crawl-data/CC-MAIN-2024-18/segments/1712296819067.85/warc/CC-MAIN-20240424045636-20240424075636-00285.warc.gz
373,342,207
7,304
# 107520

## 107,520 is an even composite number composed of four prime numbers multiplied together.

What does the number 107520 look like? This visualization shows the relationship between its 4 prime factors (large circles) and 88 divisors. 107520 is an even composite number. It is composed of four distinct prime numbers multiplied together. It has a total of eighty-eight divisors.

## Prime factorization of 107520:

### 2^10 × 3 × 5 × 7

(2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 × 3 × 5 × 7)

See below for interesting mathematical facts about the number 107520 from the Numbermatics database.

### Names of 107520

• Cardinal: 107520 can be written as One hundred seven thousand, five hundred twenty.

### Scientific notation

• Scientific notation: 1.0752 × 10^5

### Factors of 107520

• Number of distinct prime factors ω(n): 4
• Total number of prime factors Ω(n): 13
• Sum of prime factors: 17

### Divisors of 107520

• Number of divisors d(n): 88
• Complete list of divisors:
• Sum of all divisors σ(n): 393024
• Sum of proper divisors (its aliquot sum) s(n): 285504
• 107520 is an abundant number, because the sum of its proper divisors (285504) is greater than itself. Its abundance is 177984

### Bases of 107520

• Binary: 11010010000000000 (base 2)
• Base-36: 2AYO

### Squares and roots of 107520

• 107520 squared (107520^2) is 11560550400
• 107520 cubed (107520^3) is 1242990379008000
• The square root of 107520 is 327.9024245109
• The cube root of 107520 is 47.5513756221

### Scales and comparisons

How big is 107520?
• 107,520 seconds is equal to 1 day, 5 hours, 52 minutes.
• To count from 1 to 107,520 would take you about five hours. This is a very rough estimate, based on a speaking rate of half a second every third order of magnitude. If you speak quickly, you could probably say any randomly-chosen number between one and a thousand in around half a second. Very big numbers obviously take longer to say, so we add half a second for every extra x1000. (We do not count involuntary pauses, bathroom breaks or the necessity of sleep in our calculation!)
• A cube with a volume of 107520 cubic inches would be around 4 feet tall.

### Recreational maths with 107520

• 107520 backwards is 025701
• 107520 is a Harshad number.
• The number of decimal digits it has is: 6
• The sum of 107520's digits is 15
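The factor and divisor figures above are easy to check. A short dependency-free sketch (it also prints the complete divisor list, which the page above leaves blank):

```python
def prime_factorization(n):
    """Return {prime: exponent} by trial division (fine at this size)."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

n = 107520
f = prime_factorization(n)      # {2: 10, 3: 1, 5: 1, 7: 1} -> 2^10 * 3 * 5 * 7

num_divisors, sigma = 1, 1
for p, e in f.items():
    num_divisors *= e + 1                    # d(n) = product of (e_i + 1)
    sigma *= (p ** (e + 1) - 1) // (p - 1)   # sigma(n): geometric series per prime

print(num_divisors)             # 88
print(sigma)                    # 393024 = sum of all divisors
print(sigma - n)                # 285504 = aliquot sum; > n, so 107520 is abundant

# The complete divisor list elided above:
divisors = sorted({d for i in range(1, int(n ** 0.5) + 1) if n % i == 0
                   for d in (i, n // i)})
print(divisors)
```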
926
3,437
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.46875
3
CC-MAIN-2024-18
latest
en
0.861158
http://dict.cnki.net/h_51449786002.html
1,597,248,612,000,000,000
text/html
crawl-data/CC-MAIN-2020-34/segments/1596439738905.62/warc/CC-MAIN-20200812141756-20200812171756-00206.warc.gz
29,927,233
8,835
trace form: translations in the Mathematics category (CNKI dictionary entry)

Candidate Chinese translations: 迹形式 "trace form" (0 uses); 迹型 "trace form/type" (3 uses).

Bilingual example sentences for the translation 迹型:

- Applying Trace Form to Semi-simple Algebras and Representation of Finite Groups / 迹型到半单代数和群表示理论中的应用
- In this paper, we discuss the derivations of the symplectic ternary algebras, the correspondence between the trace form of a Lie triple system and that of a symplectic ternary algebra, and the Wedderburn principal theorem. / 本文讨论了辛三代数的导子性质,其迹型与李三系型的关系,及其分解原理。
- Using the trace form, we prove the existence of the complete set of primitive central idempotents of a semi-simple algebra A over a field K of characteristic zero and express A as a direct sum of a finite number of irreducible submodules, further obtaining a series of essential results on ordinary representations of finite groups. This method is shorter and clearer than the conventional methods ([1] or [2]). / 本文利用迹型证明了特征为0的域K上的半单代数存在本原中心幂等元完全系并且获得半单代数的直和分解,进而得到有限群常表示的一系列基本结果,这种方法比传统的方法(参见[1]和[2])简捷且明晰得多。

Bilingual examples where "trace form" is translated as an undetermined term:

- Z-GRADED LIE SUPERALGEBRAS WITH A NONSINGULAR TRACE FORM / 具有非退化迹型的Z-阶化李超代数
- In this paper, a Bhattacharyya inequality for operator statistical models in a Hilbert space is given in operator-trace form, and we also give conditions under which operators achieve this bound. / 该文在Hilbert空间算子统计模型中,导出了用算子迹表示的Bhattacharyya不等式,并通过算子方程给出了达到Bh下界的算子值估计量所应满足的条件。
- Let L be a finite-dimensional simple Z-graded Lie superalgebra. In this paper, we obtain necessary conditions for L to possess a nondegenerate trace form. Using this result, we obtain that the Killing forms of the Lie superalgebras X(n) and X(m,n,t) of Cartan type are degenerate. / 本文给出了Z-阶化有限维单李超代数具有非退化迹型的一个必要条件.应用这一结果,我们可以得到Cartan型李超代数X(n)与X(m,n,t)的Killing型是退化的.

English usage examples for "trace form" (from indexed abstracts):

- Specifically, we present a characterization of the Kuz'min radical in terms of a trace form associated with some representation ρ, which is analogous to the characterization which we have in the case of Lie algebras.
- We show that the essential dimension of a finite-dimensional central simple algebra coincides with the essential dimension of its r-linear trace form, $$(a_1, \ldots, a_r) \mapsto tr(a_1 \ldots a_r),$$ for any r ≥ 3.
- In this article, we show that the second trace form of a central simple algebra A of even degree over a field of characteristic two is non-degenerate and we compute its classical invariants.
- If K has characteristic not two, it is shown in [U] that ?2,A does not give much more information than the usual trace form.
- Let ?A and ?2,A be respectively the trace form and the second trace form of A.

Related Chinese-language abstract with English translation:

- Some fundamental theoretical problems in 4-dimension descriptive geometry, such as the projection law of a trace-form plane and the characteristics of the projection positions of geometrical elements, have not been perfected yet and should be discussed further. In this paper, after analyzing those characteristics, 14 propositions about the representations of general positions of trace-form planes and semi-parallel/semi-perpendicular planes are presented. Based on these results, a new method for the classification of the various positions between planes and for the study of their projection characteristics is given. / 四维画法几何中关于迹线平面的投影规律及平面在四维投影体系中处于各种位置时的投影特征等基本理论问题,尚有不完备和可商榷之处,本文在对迹线平面的投影性质进行了分析归纳之后,提出了14个有关迹线表示的斜平面、半平行平面及半垂直平面的投影命题,并在此基础上提出一种新的关于各种位置平面的分类和研究其投影性质的方法。
1,911
5,814
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2020-34
latest
en
0.390435
https://fountainessays.com/histogram-assignment/
1,653,186,135,000,000,000
text/html
crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00215.warc.gz
315,429,748
15,424
# Histogram Assignment

This assignment refers to the Histogram shown on the left that depicts the distribution of a set of quiz scores. Q 1 requires application of learning from Module 9 combined with new learning from Module 10. Q's 2-6 require your interpretation of the Histogram.

1. Based only on the level of measurement (NOIR Scale) of the quiz scores, would the appropriate branch of statistics to analyze the scores be parametric or non-parametric? Explain.
2. Is the distribution of the Quiz 1 scores normal or skewed? Explain.
3. If the distribution of quiz scores is skewed, is the direction of the skew positive or negative? Explain.
4. If quiz scores are normally distributed, would the appropriate branch of statistics used to analyze these data be parametric or non-parametric? Explain.
5. If distribution of quiz scores is skewed, would the appropriate branch of statistics used to analyze these data be parametric or non-parametric? Explain.
6. Provide a short summary statement of your Histogram findings. In other words, what is your overall interpretation of class performance on the quiz based on your analysis of the Histogram?
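For Q2 and Q3, a quick way to read skew off a histogram is to compare the mean with the median: mean below median suggests negative (left) skew, mean above median suggests positive (right) skew. A small sketch with hypothetical quiz scores (not the data in the assignment's figure):

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical quiz scores with a long left tail (a few very low scores).
scores = [95, 92, 90, 88, 88, 85, 84, 82, 80, 78, 75, 70, 55, 40, 30]

mean, median = statistics.mean(scores), statistics.median(scores)
print(f"mean={mean:.1f}, median={median}")  # mean < median -> negative (left) skew

plt.hist(scores, bins=8, edgecolor="black")
plt.axvline(mean, linestyle="--", label=f"mean {mean:.1f}")
plt.axvline(median, linestyle=":", label=f"median {median}")
plt.xlabel("Quiz score")
plt.ylabel("Count")
plt.legend()
plt.show()
```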
734
3,491
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.1875
3
CC-MAIN-2022-21
latest
en
0.80825
http://math.stackexchange.com/questions/20527/plotting-glucose-production-against-time-x-axis-y-axis-issue
1,469,634,373,000,000,000
text/html
crawl-data/CC-MAIN-2016-30/segments/1469257826908.63/warc/CC-MAIN-20160723071026-00198-ip-10-185-27-174.ec2.internal.warc.gz
160,775,898
17,397
# Plotting glucose production against time, x-axis & y-axis issue

Working on a pre-lab; some of the directions for drawing graphs are that the "x-axis" is for the non-variables, and the "y-axis" is for the variables (what you have measured). Now, the first thing I have to do is to make a standard curve for "Glucose concentration" and "Absorption". The next step is to measure the "Enzyme activity" and use the standard curve to determine the amount of glucose, since I will have the "Absorbance" values from the "Enzyme activity" process. Finally, we are asked to plot a graph that shows the glucose production against time. Based on that, what (glucose production, time) goes where (x-axis, y-axis)? Thanks. -

What is a "non-variable"? A constant? I think what you meant to say is that the "x-axis" is usually reserved for the independent variable, while the "y-axis" is reserved for the dependent variable. Time is an example of an independent variable. Whether you perform your experiment or not, it is still flowing, so to speak. On the other hand, all the variables you measure during your experiment will likely take on different values at different times; they therefore depend on time. – Raskolnikov Feb 5 '11 at 16:44

## 1 Answer

Time goes on the x-axis almost always; glucose production goes on the y-axis. -

But, isn't time a variable? And, thus has to go to the y-axis? – Simplicity Feb 5 '11 at 15:08

The time at which you measured something can't be changed afterwards, so no. – TROLLKILLER Feb 5 '11 at 15:19
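A minimal plotting sketch of the accepted convention: time (independent) on the x-axis, the measured glucose (dependent) on the y-axis. The numbers are made-up placeholders, not lab data:

```python
import matplotlib.pyplot as plt

time_min = [0, 5, 10, 15, 20, 25, 30]             # independent variable -> x-axis
glucose_mg = [0.0, 0.8, 1.5, 2.1, 2.6, 2.9, 3.1]  # measured (dependent) -> y-axis

plt.plot(time_min, glucose_mg, marker="o")
plt.xlabel("Time (min)")
plt.ylabel("Glucose produced (mg)")
plt.title("Glucose production over time")
plt.show()
```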
381
1,518
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.578125
3
CC-MAIN-2016-30
latest
en
0.939383
http://scienceline.ucsb.edu/getkey.php?key=128
1,701,496,009,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00200.warc.gz
45,287,663
4,729
UCSB Science Line

Why is snow "white" but ice cubes are colorless? Both are frozen water? Question Date: 2002-05-10

Answer 1: Let's go over what we know about color. Sunlight is made up of lots of different colors all mixed up together. Light can either be reflected or can pass through things. Different colors might be reflected or transmitted differently, too. An example of this is a prism. A prism takes white light and transmits each different color slightly differently. This ends up spreading the white light out into a rainbow. Not surprisingly, a rainbow is made when light from the sun is transmitted through drops of water differently, causing the different colors to separate.

For snow to be white, it means that it must be reflecting all the different colors of light equally. For ice to be clear, it is transmitting all the colors of light equally and not reflecting them back to your eye. To understand where the difference comes from, we need to think about the structure of snow. Let's start with ice. Ice isn't really as transparent as a pane of glass. If you look through an ice cube, everything looks kind of murky. This is because the ice is bending the light a little bit -- it doesn't pass through the ice in a straight line -- and so things get blurry.

Snow is made completely out of a bunch of tiny flakes of ice. So when you are looking at a snow bank, you are looking at a bunch of tiny ice flakes and a whole bunch of air that fills the spaces between the snow flakes. Since each snow flake is ice, it will bend light passing through it slightly. This light will hit another flake, then another, then another, and bounce around randomly from flake to flake until it eventually comes right back out again. Some of the light does get absorbed by the snow but a lot of it comes out. Since snow doesn't distinguish between all the different colors of light, they all get reflected back and so the snow appears white.

So here is a question for you: What happens if you take an ice cube and you start scraping off a pile of ice flakes? Would the pile look clear like the ice cube or would it look white like snow?

Actually, if you have a really large chunk of ice (for example, a glacier) you will notice that the ice looks a little blue, not clear. This is because ice absorbs red light better than blue light. As light travels through the ice, it has less and less red in it but the same amount of blue, so it appears bluish.

Answer 2: I believe the difference is that snow is made up of a bunch of small water crystals, whereas big blocks of ice are not. All the crystals in the snow would tend to reflect the light in all sorts of directions, with the net effect that a lot of the light that hits a pile of snow is reflected, making the snow look white. With ice, more of the light is transmitted through, which makes the ice seem colorless. An even more dramatic example is the difference between diamonds and graphite; both are carbon. Diamonds have a crystalline structure and tend to reflect light in interesting ways. Graphite does not have the same crystal structure and looks black.
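The bounce-around-and-come-back-out picture in Answer 1 can be caricatured with a one-dimensional random walk: a photon enters the snow, scatters forward or backward at each flake, is occasionally absorbed, and escapes if it steps back past the surface. Even with modest absorption, most photons escape, which is why snow looks bright white. This is a toy model, not real radiative transfer:

```python
import random

def escape_fraction(n_photons=100_000, absorb_prob=0.01, max_steps=10_000):
    """Toy 1-D scattering model: a photon starts one flake deep, steps one
    flake up or down per scattering event, may be absorbed at each step,
    and escapes when it gets back to the surface (depth 0)."""
    escaped = 0
    for _ in range(n_photons):
        depth = 1
        for _ in range(max_steps):
            if random.random() < absorb_prob:
                break                       # absorbed: photon lost in the snow
            depth += random.choice((-1, 1))
            if depth <= 0:
                escaped += 1                # back out through the surface
                break
    return escaped / n_photons

random.seed(1)
print(escape_fraction())  # roughly 0.87 with these settings: most light comes back out
```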
670
3,147
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.1875
3
CC-MAIN-2023-50
latest
en
0.958805
https://www.scryer.pl/ugraphs
1,723,174,383,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722640751424.48/warc/CC-MAIN-20240809013306-20240809043306-00282.warc.gz
774,956,879
4,119
## Module ugraphs

```
:- use_module(library(ugraphs)).
```

Graph manipulation library

The S-representation of a graph is a list of (vertex-neighbours) pairs, where the pairs are in standard order (as produced by keysort) and the neighbours of each vertex are also in standard order (as produced by sort). This form is convenient for many calculations. A new UGraph from raw data can be created using `vertices_edges_to_ugraph/3`.

Adapted to support some of the functionality of the SICStus ugraphs library by Vitor Santos Costa. Ported from YAP 5.0.1 to SWI-Prolog by Jan Wielemaker. Ported from SWI-Prolog to Scryer by Adrián Arroyo Calle

#### vertices(+Graph, -Vertices)

Unify Vertices with all vertices appearing in Graph. Example:

```
?- vertices([1-[3,5],2-[4],3-[],4-[5],5-[]], L).
L = [1, 2, 3, 4, 5]
```

#### vertices_edges_to_ugraph(+Vertices, +Edges, -UGraph) is det.

Create a UGraph from Vertices and edges. Given a graph with a set of Vertices and a set of Edges, Graph must unify with the corresponding S-representation. Note that the vertices without edges will appear in Vertices but not in Edges. Moreover, it is sufficient for a vertex to appear in Edges.

```
?- vertices_edges_to_ugraph([],[1-3,2-4,4-5,1-5], L).
L = [1-[3,5], 2-[4], 3-[], 4-[5], 5-[]]
```

In this case all vertices are defined implicitly. The next example shows three unconnected vertices:

```
?- vertices_edges_to_ugraph([6,7,8],[1-3,2-4,4-5,1-5], L).
L = [1-[3,5], 2-[4], 3-[], 4-[5], 5-[], 6-[], 7-[], 8-[]]
```

#### add_vertices(+Graph, +Vertices, -NewGraph)

Unify NewGraph with a new graph obtained by adding the list of Vertices to Graph. Example:

```
?- add_vertices([1-[3,5],2-[]], [0,1,2,9], NG).
NG = [0-[], 1-[3,5], 2-[], 9-[]]
```

#### del_vertices(+Graph, +Vertices, -NewGraph) is det.

Unify NewGraph with a new graph obtained by deleting the list of Vertices and all the edges that start from or go to a vertex in Vertices from the Graph. Example:

```
?- del_vertices([1-[3,5],2-[4],3-[],4-[5],5-[],6-[],7-[2,6],8-[]], [2,1], NL).
NL = [3-[],4-[5],5-[],6-[],7-[6],8-[]]
```

#### add_edges(+Graph, +Edges, -NewGraph)

Unify NewGraph with a new graph obtained by adding the list of Edges to Graph. Example:

```
?- add_edges([1-[3,5],2-[4],3-[],4-[5],5-[],6-[],7-[],8-[]], [1-6,2-3,3-2,5-7,3-2,4-5], NL).
NL = [1-[3,5,6], 2-[3,4], 3-[2], 4-[5], 5-[7], 6-[], 7-[], 8-[]]
```

#### ugraph_union(+Graph1, +Graph2, -NewGraph)

NewGraph is the union of Graph1 and Graph2. Example:

```
?- ugraph_union([1-[2],2-[3]],[2-[4],3-[1,2,4]],L).
L = [1-[2], 2-[3,4], 3-[1,2,4]]
```

#### del_edges(+Graph, +Edges, -NewGraph)

Unify NewGraph with a new graph obtained by removing the list of Edges from Graph. Notice that no vertices are deleted. Example:

```
?- del_edges([1-[3,5],2-[4],3-[],4-[5],5-[],6-[],7-[],8-[]], [1-6,2-3,3-2,5-7,3-2,4-5,1-3], NL).
NL = [1-[5],2-[4],3-[],4-[],5-[],6-[],7-[],8-[]]
```

#### graph_subtract(+Set1, +Set2, ?Difference)

Is based on `ord_subtract/3`

#### edges(+Graph, -Edges)

Unify Edges with all edges appearing in Graph. Example:

```
?- edges([1-[3,5],2-[4],3-[],4-[5],5-[]], L).
L = [1-3, 1-5, 2-4, 4-5]
```

#### transitive_closure(+Graph, -Closure)

Generate the graph Closure as the transitive closure of Graph. Example:

```
?- transitive_closure([1-[2,3],2-[4,5],4-[6]],L).
L = [1-[2,3,4,5,6], 2-[4,5,6], 4-[6]]
```

#### transpose_ugraph(Graph, NewGraph) is det.

Unify NewGraph with a new graph obtained from Graph by replacing all edges of the form V1-V2 by edges of the form V2-V1. The cost is O(|V|*log(|V|)). Notice that an undirected graph is its own transpose. Example:

```
?- transpose_ugraph([1-[3,5],2-[4],3-[],4-[5],5-[],6-[],7-[],8-[]], NL).
NL = [1-[],2-[],3-[1],4-[2],5-[1,4],6-[],7-[],8-[]]
```

#### compose(+LeftGraph, +RightGraph, -NewGraph)

Compose NewGraph by connecting the drains of LeftGraph to the sources of RightGraph. Example:

```
?- compose([1-[2],2-[3]],[2-[4],3-[1,2,4]],L).
L = [1-[4], 2-[1,2,4], 3-[]]
```

#### top_sort(+Graph, -Sorted) is semidet.

Sorted is a topological sorted list of nodes in Graph. A topological sort is possible if the graph is connected and acyclic. In the example we show how topological sorting works for a linear graph:

```
?- top_sort([1-[2], 2-[3], 3-[]], L).
L = [1, 2, 3]
```

#### top_sort(+Graph, -Sorted, ?Tail) is semidet.

The predicate `top_sort/3` is a difference list version of `top_sort/2`.

#### neighbours(+Vertex, +Graph, -Neighbours) is det.

Neighbours is a sorted list of the neighbours of Vertex in Graph. Example:

```
?- neighbours(4,[1-[3,5],2-[4],3-[],4-[1,2,7,5],5-[],6-[],7-[],8-[]], NL).
NL = [1,2,7,5]
```

#### neighbors(+Vertex, +Graph, -Neighbours) is det.

Same as `neighbours/3`.

#### connect_ugraph(+UGraphIn, -Start, -UGraphOut) is det.

Adds Start as an additional vertex that is connected to all vertices in UGraphIn. This can be used to create a topological sort for an unconnected graph. Start is before any vertex in UGraphIn in the standard order of terms. No vertex in UGraphIn can be a variable. Can be used to order an unconnected graph as follows:

```
top_sort_unconnected(Graph, Vertices) :-
    (   top_sort(Graph, Vertices)
    ->  true
    ;   connect_ugraph(Graph, Start, Connected),
        top_sort(Connected, Ordered0),
        Ordered0 = [Start|Vertices]
    ).
```

#### before(+Term, -Before) is det.

Unify Before to a term that comes before Term in the standard order of terms. Throws `instantiation_error` if Term is unbound.

#### complement(+UGraphIn, -UGraphOut)

UGraphOut is a ugraph with an edge between all vertices that are not connected in UGraphIn and all edges from UGraphIn removed. Example:

```
?- complement([1-[3,5],2-[4],3-[],4-[1,2,7,5],5-[],6-[],7-[],8-[]], NL).
NL = [1-[2,4,6,7,8],2-[1,3,5,6,7,8],3-[1,2,4,5,6,7,8],
     4-[3,5,6,8],5-[1,2,3,4,6,7,8],6-[1,2,3,4,5,7,8],
     7-[1,2,3,4,5,6,8],8-[1,2,3,4,5,6,7]]
```

#### reachable(+Vertex, +UGraph, -Vertices)

True when Vertices is an ordered set of vertices reachable in UGraph, including Vertex. Example:

```
?- reachable(1,[1-[3,5],2-[4],3-[],4-[5],5-[]],V).
V = [1, 3, 5]
```
2,155
5,994
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.890625
3
CC-MAIN-2024-33
latest
en
0.842445
http://www.computeraideddesignguide.com/autocad-exercises/
1,590,953,041,000,000,000
text/html
crawl-data/CC-MAIN-2020-24/segments/1590347413624.48/warc/CC-MAIN-20200531182830-20200531212830-00504.warc.gz
157,664,560
25,885
Do you know all the tricks in AutoCAD? When I first started, I was struggling to learn AutoCAD online, and I wish someone had walked me through the process. Here, I have made an attempt to provide a nice learning process for those of you willing to take the first steps in AutoCAD design. Learning while actually practicing has always been more efficient than any other technique. If you have ever asked yourself, "Where do I start with AutoCAD?", then you are in the right place. By working through these AutoCAD exercises, you can propel yourself from a beginner to a better designer. What you should learn on the first day has been provided in this series of 2D AutoCAD exercises, and at the end of it, you should be able "to fly on your own".

AutoCAD Exercises #1: Drawing your first piece of 2D using AutoCAD. Using the LINE command and fixing your settings for a better user experience. Learning how to use coordinates and using exclusively the command window to draw.

AutoCAD Exercises #2: Learning how to draw straight lines with definite dimensions. A basic concept for daily use while working with AutoCAD.

AutoCAD Exercises #3: Learning the use of OSNAP and using the skills you gained to design a more complex 2D drawing.

AutoCAD Exercises #4: Using some math and drawing lines with specific angles and dimensions. Learning the basic concept of angles in AutoCAD, and actually putting the trick to use along the way.

AutoCAD Exercises #5: The FILLET command and the use of OSNAP. Learning how to place objects. Learning how to use center points.

AutoCAD Exercises #6: The CHAMFER command. Using the CHAMFER command and learning more about the command window.

AutoCAD Exercises #7: How to use HATCH in AutoCAD. You will need skills from Day #4 to get the exercise done, and you will learn how to actually use the HATCH command.

AutoCAD Exercises #8: Playing with the TANGENT feature of OSNAP. You will have a complex figure to replicate, and you will have to learn how the TANGENT feature of OSNAP works. You will also have to learn a new trick for drawing a circle using this very technique.

AutoCAD Exercises #9: A clear description of how the ARC command works and a complex exercise to try the technique out. All instructions are given for you to easily assimilate the tutorial.

AutoCAD Exercises #10: A tricky exercise. You will have to use all the techniques you have learned in previous exercises to accomplish this one. Hints are given, and an opportunity to ask questions is given as well.

AutoCAD Exercises #11: A concrete exercise where you will see the advantages of learning the ARRAY command. A polar array is designed, and some old techniques you must have learned from the beginning will be helpful.

AutoCAD Exercises #12: We have more than an exercise; the aim is to force you to use the ARRAY command. A rectangular array is to be created.

AutoCAD Exercises #13: The MIRROR command. The MIRROR command has not been used since the beginning. This exercise makes it impossible for you if you don't use the MIRROR command. Learning the concept of symmetry in AutoCAD.

AutoCAD Exercises #14: Creating layers and changing line type. Using the ROTATE command and the OFFSET command. Learning how to combine a set of techniques toward something definite.

AutoCAD Exercises #15: Using the combination of all techniques learned so far to achieve a complex 2D AutoCAD exercise. Previously learned techniques will be needed in this session, and a little bit of math too.

AutoCAD Exercises #16: This is a nice AutoCAD exercise that will make you test what you have learned so far.
It is testing your ability to combine all you know to get a quite tough exercise done. Of course, hints have been provided to help you get it done.

AutoCAD Exercises #17: A revision exercise. Be ready to use your calculator. You will need a bit more precision in this session. Emphasis has been placed on the ROTATE command, OFFSET command and FILLET command in this exercise.

AutoCAD Exercises #18: Learning how to draw a polygon in AutoCAD. Here is shown how you can easily construct a perfect star in AutoCAD; you may want to convert it into a block for future use. A complex figure is provided as well to help you practice.

AutoCAD Exercises #19: Learning how to construct a complex drawing using techniques provided to you in previous sessions. This serves as a test, to see how far you have gone since the beginning of this series of exercises.

AutoCAD Exercises #20: Exam day. Grab your exam eBook for free and come back to me with questions and suggestions on how to make this course easier to understand if possible.
960
4,536
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.109375
3
CC-MAIN-2020-24
latest
en
0.92825
https://sites.math.washington.edu/~burke/crs/407/models/m9.html
1,686,221,714,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224654871.97/warc/CC-MAIN-20230608103815-20230608133815-00504.warc.gz
561,022,762
1,076
Model 9: Investments over Time

An investor has money-making activities A and B available at the beginning of each of the next 5 years (call them years 1 to 5). Each dollar invested in A at the beginning of a year returns \$1.40 (a profit of \$0.40) 2 years later (in time for immediate reinvestment). Each dollar invested in B at the beginning of a year returns \$1.70 3 years later.

In addition, money-making activities C and D will each be available at one time in the future. Each dollar invested in C at the beginning of year 2 returns \$1.90 at the end of year 5. Each dollar invested in D at the beginning of year 5 returns \$1.30 at the end of year 5.

The investor begins with \$50,000 and wishes to know which investment plan maximizes the amount of money that can be accumulated by the beginning of year 6. Formulate the linear programming model for this problem.
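One standard way to set this up (a sketch, not the only valid formulation): let A_t and B_t be the dollars placed in activities A and B at the beginning of year t, keep C_2 and D_5 as in the statement, and introduce R_t for cash held idle during year t. Investments whose payoff arrives after the beginning of year 6 (A in year 5, B in years 4 and 5) are simply omitted, since they cannot contribute to the objective:

```latex
% Decision variables (R_t is introduced here): A_t, B_t = dollars placed in
% A, B at the beginning of year t; C_2, D_5 as given; R_t = idle cash in year t.
\begin{align*}
\max\ Z ={} & 1.40A_4 + 1.70B_3 + 1.90C_2 + 1.30D_5 + R_5 \\
\text{s.t.}\quad
& A_1 + B_1 + R_1 = 50000              && \text{(year 1: initial capital)} \\
& A_2 + B_2 + C_2 + R_2 = R_1          && \text{(year 2: nothing matures yet)} \\
& A_3 + B_3 + R_3 = 1.40A_1 + R_2      && \text{(year 3: } A_1 \text{ pays off)} \\
& A_4 + R_4 = 1.40A_2 + 1.70B_1 + R_3  && \text{(year 4: } A_2, B_1 \text{ pay off)} \\
& D_5 + R_5 = 1.40A_3 + 1.70B_2 + R_4  && \text{(year 5: } A_3, B_2 \text{ pay off)} \\
& A_t,\, B_t,\, C_2,\, D_5,\, R_t \ge 0
\end{align*}
```

The objective collects everything on hand at the beginning of year 6: the maturing payoffs of A_4, B_3, C_2, and D_5, plus any cash R_5 left uninvested in year 5.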
213
877
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2023-23
latest
en
0.910124
https://brilliant.org/practice/eulers-theorem-warmup/?subtopic=modular-arithmetic&chapter=eulers-theorem
1,534,340,408,000,000,000
text/html
crawl-data/CC-MAIN-2018-34/segments/1534221210105.8/warc/CC-MAIN-20180815122304-20180815142304-00326.warc.gz
636,332,052
12,470
Number Theory

# Euler's Theorem Warmup

Is there a positive integer $$n$$ such that $$2^n \equiv 1 \pmod{7} \, ?$$

$\frac{1}{15}, \frac{2}{15}, \frac{3}{15}, \ldots, \frac{14}{15}, \frac{15}{15}$ How many of these fractions cannot be reduced?

Is 999999 divisible by 7? Hint: $$999999 = 10^6 - 1.$$

What is the last digit of $$3^{100} \, ?$$

Which of these is congruent to $$10^{100} \pmod{11} \, ?$$
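A brute-force check of these warmups (answers appear in the comments, so skip this if you want to solve them unaided):

```python
from math import gcd

# 2^n = 1 (mod 7)?  Powers of 2 mod 7 cycle through 2, 4, 1, ...
print([pow(2, n, 7) for n in range(1, 7)])   # [2, 4, 1, 2, 4, 1]: yes, n = 3 works

# How many of k/15 (k = 1..15) cannot be reduced?  Those with gcd(k, 15) == 1.
print(sum(1 for k in range(1, 16) if gcd(k, 15) == 1))   # 8, which is phi(15)

# Is 999999 = 10^6 - 1 divisible by 7?  (Fermat: 10^6 = 1 mod 7)
print(999999 % 7 == 0)                       # True

# Last digit of 3^100: powers of 3 end 3, 9, 7, 1, ... and 100 % 4 == 0.
print(pow(3, 100, 10))                       # 1

# 10^100 mod 11: since 10 = -1 (mod 11), this is (-1)^100 = 1.
print(pow(10, 100, 11))                      # 1
```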
161
461
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.671875
3
CC-MAIN-2018-34
longest
en
0.744717
https://socratic.org/questions/how-do-you-find-the-exact-values-of-sin-u-2-cos-u-2-tan-u-2-using-the-half-angle-2
1,723,479,025,000,000,000
text/html
crawl-data/CC-MAIN-2024-33/segments/1722641045630.75/warc/CC-MAIN-20240812155418-20240812185418-00292.warc.gz
423,112,600
6,013
# How do you find the exact values of sin(u/2), cos(u/2), tan(u/2) using the half angle formulas given cosu=3/5, 0<u<pi/2? ##### 1 Answer Jan 9, 2017 $\sin \left(\frac{u}{2}\right) = \frac{\sqrt{5}}{5}$ $\cos \left(\frac{u}{2}\right) = \frac{2 \sqrt{5}}{5}$ $\tan \left(\frac{u}{2}\right) = \frac{1}{2}$ #### Explanation: Use the half-angle identities: $2 {\cos}^{2} \left(\frac{u}{2}\right) = 1 + \cos u$ $2 {\sin}^{2} \left(\frac{u}{2}\right) = 1 - \cos u$ In this case: $2 {\cos}^{2} \left(\frac{u}{2}\right) = 1 + \frac{3}{5} = \frac{8}{5}$ ${\cos}^{2} \left(\frac{u}{2}\right) = \frac{8}{10} = \frac{4}{5}$ $\cos \left(\frac{u}{2}\right) = \pm \frac{2}{\sqrt{5}} = \pm \frac{2 \sqrt{5}}{5}$ Since 0 < u < pi/2, the angle u/2 also lies in Quadrant I, so $\cos \left(\frac{u}{2}\right)$ is positive. $2 {\sin}^{2} \left(\frac{u}{2}\right) = 1 - \frac{3}{5} = \frac{2}{5}$ ${\sin}^{2} \left(\frac{u}{2}\right) = \frac{2}{10} = \frac{1}{5}$ $\sin \left(\frac{u}{2}\right) = \pm \frac{1}{\sqrt{5}} = \pm \frac{\sqrt{5}}{5}$ Likewise, $\sin \left(\frac{u}{2}\right)$ is positive. $\tan \left(\frac{u}{2}\right) = \frac{\sin}{\cos} = \left(\frac{\sqrt{5}}{5}\right) \left(\frac{5}{2 \sqrt{5}}\right) = \frac{1}{2}$
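A quick numeric sanity check of these values, using nothing beyond Python's math module:

```python
import math

u = math.acos(3/5)   # cos u = 3/5 with 0 < u < pi/2
half = u / 2

print(math.isclose(math.sin(half), math.sqrt(5)/5))    # True
print(math.isclose(math.cos(half), 2*math.sqrt(5)/5))  # True
print(math.isclose(math.tan(half), 1/2))               # True
```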
543
1,200
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.5
4
CC-MAIN-2024-33
latest
en
0.443127
https://edurev.in/course/quiz/attempt/-1_Test-Traffic-Engineering-2/e5ec3d64-b3c8-47af-92d8-f8a7ce271792
1,675,168,579,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764499871.68/warc/CC-MAIN-20230131122916-20230131152916-00184.warc.gz
253,076,966
40,100
Test: Traffic Engineering- 2 # Test: Traffic Engineering- 2 Test Description ## 20 Questions MCQ Test Transportation Engineering | Test: Traffic Engineering- 2
Test: Traffic Engineering- 2 - Question 1 ### Which one of the following diagrams illustrates the relation between speed 'u' and density 'k' of traffic flow? Detailed Solution for Test: Traffic Engineering- 2 - Question 1 At u = 0, k = kmax; at u = umax, k → 0. The curve (a) satisfies these conditions. Curve (c) represents the relation between density and volume of traffic.
Test: Traffic Engineering- 2 - Question 2 ### Which one of the following methods of O-D traffic surveys is conducted for comprehensive analysis of traffic and transportation data? Detailed Solution for Test: Traffic Engineering- 2 - Question 2 Comprehensive analysis of traffic and transportation data requires: - Origin and destination in each zone - Mode of transportation - Number of vehicles and passengers in each vehicle - Purpose of each trip - Selection of route - Length of trip - Intermediate stops and their reason, etc. All this can be collected by the roadside interview method.
Test: Traffic Engineering- 2 - Question 3 ### The lost time due to starting delay on a traffic signal is noted to be 3s, the actual green time is 25s and yellow time is 3s. How much is the effective green time?
Detailed Solution for Test: Traffic Engineering- 2 - Question 3 Effective green time = Actual green time + Yellow time - Lost time = 25 + 3 - 3 = 25 seconds
Test: Traffic Engineering- 2 - Question 4 In a speed and delay study, if the average journey time on a stretch of road of length 3.5 km is 7.55 minutes and the average stopped delay is 1.8 minutes, the average running speed will be, nearly Detailed Solution for Test: Traffic Engineering- 2 - Question 4 Average running time = Average journey time - Average stopped delay = 7.55 - 1.8 = 5.75 minutes Average running speed = 3.5 km / (5.75/60) hr ≈ 36.5 kmph
Test: Traffic Engineering- 2 - Question 5 If L is the length of vehicle in meters, C is the clear distance between two consecutive vehicles (stopping sight distance), V is the speed of vehicles in km/hour, then the maximum number (N) of vehicles/hour is equal to Detailed Solution for Test: Traffic Engineering- 2 - Question 5 Maximum number of vehicles per hour, N = 1000V/(L + C)
Test: Traffic Engineering- 2 - Question 6 When the speed of the traffic flow becomes zero, then Detailed Solution for Test: Traffic Engineering- 2 - Question 6 At zero speed, density is maximum and volume is zero.
Test: Traffic Engineering- 2 - Question 7 It was noted that on a section of road, the free speed was 80 kmph and the jam density was 70 vpkm. The maximum flow in vph that could be expected on this road is Detailed Solution for Test: Traffic Engineering- 2 - Question 7 qmax = (free speed × jam density)/4 = (80 × 70)/4 = 1400 vph
Test: Traffic Engineering- 2 - Question 8 If the normal flows on two approach roads at an intersection are respectively 500 pcu per hr and 300 pcu per hr, the saturation flows are 1600 pcu per hr on each road and the total lost time per signal cycle is 16 s, then the optimum cycle time by Webster’s method is Detailed Solution for Test: Traffic Engineering- 2 - Question 8 Optimum cycle time C0 = (1.5L + 5)/(1 - Y), where L = total lost time per cycle = 16 sec and Y = y1 + y2 = 500/1600 + 300/1600 = 0.5 ∴ C0 = (1.5 × 16 + 5)/(1 - 0.5) = 58 seconds
Test: Traffic Engineering- 2 - Question 9 When two roads with two-lane, two-way traffic cross at an uncontrolled intersection, the total number of potential major conflict points would be Detailed Solution for Test: Traffic Engineering- 2 - Question 9 The number of potential conflict points depends on the number of lanes on the intersecting roads. For two-way traffic at a right-angled road intersection, the conflict points are 24, whereas for two-way traffic at a T-intersection the conflict points are 18 only.
Test: Traffic Engineering- 2 - Question 10 An Enoscope is used for measuring
Test: Traffic Engineering- 2 - Question 11 Matching List-I (Traffic flow characteristics) with List-II (Figure/symbol) and select the correct answer using the codes given below the lists: Codes:
Test: Traffic Engineering- 2 - Question 12 The traffic conflicts that may occur in a rotary intersection are
Test: Traffic Engineering- 2 - Question 13 In which of the following traffic signal systems are the cycle length and cycle division automatically varied? Detailed Solution for Test: Traffic Engineering- 2 - Question 13 Flexible progressive system (the most efficient method of signalling).
Test: Traffic Engineering- 2 - Question 14 Matching List-I with List-II and select the correct answer using the codes given below the lists: Codes:
Test: Traffic Engineering- 2 - Question 15 Traffic volume is equal to
Test: Traffic Engineering- 2 - Question 16 With increase in speed of the traffic stream, the maximum capacity of the lane
Test: Traffic Engineering- 2 - Question 17 When the speed of traffic flow becomes zero, then
Test: Traffic Engineering- 2 - Question 18 The most efficient traffic signal system is
Test: Traffic Engineering- 2 - Question 19 A traffic rotary is justified where
Test: Traffic Engineering- 2 - Question 20 When a number of roads are meeting at a point and only one of the roads is important, then the suitable shape of rotary is
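The numerical solutions above are easy to reproduce; here is a short Python sketch of questions 3, 4, 7 and 8 (the Greenshields relation qmax = free speed × jam density / 4 and Webster's C0 = (1.5L + 5)/(1 - Y) are the standard formulas those solutions rely on; variable names are ours):

```python
# Q3: effective green time = actual green + amber - lost time
print(25 + 3 - 3)                           # 25 s

# Q4: running speed = distance / (journey time - stopped delay)
print(round(3.5 / ((7.55 - 1.8) / 60), 1))  # 36.5 kmph

# Q7: Greenshields maximum flow = free speed * jam density / 4
print(80 * 70 / 4)                          # 1400.0 vph

# Q8: Webster optimum cycle time C0 = (1.5L + 5) / (1 - Y)
Y = 500/1600 + 300/1600                     # sum of critical flow ratios = 0.5
L = 16                                      # total lost time per cycle (s)
print((1.5*L + 5) / (1 - Y))                # 58.0 s
```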
1,560
6,822
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.5625
4
CC-MAIN-2023-06
latest
en
0.901944
http://webgraph.di.unimi.it/docs/it/unimi/dsi/webgraph/algo/BetweennessCentrality.html
1,548,276,592,000,000,000
text/html
crawl-data/CC-MAIN-2019-04/segments/1547584350539.86/warc/CC-MAIN-20190123193004-20190123215004-00044.warc.gz
240,365,972
4,832
it.unimi.dsi.webgraph.algo ## Class BetweennessCentrality • java.lang.Object • it.unimi.dsi.webgraph.algo.BetweennessCentrality • ```public class BetweennessCentrality extends java.lang.Object``` Computes the betweenness centrality using an implementation of Brandes's algorithm (Ulrik Brandes, “A Faster Algorithm for Betweenness Centrality”, Journal of Mathematical Sociology 25(2):163−177, 2001) that uses multiple parallel breadth-first visits. To use this class you first create an instance, and then invoke `compute()`. After that, you can peek at the field `betweenness` to discover the betweenness of each node. For every three distinct nodes s, t and v, let σst be the number of shortest paths from s to t, and σst(v) the number of such paths on which v lies. The betweenness centrality of node v is defined to be the sum of δst(v)=σst(v) / σst over all pairs of distinct nodes s, t different from v (the summand is assumed to be zero whenever the denominator is zero). Brandes's approach consists in performing a breadth-first visit from every node, recording the distance of the node from the current source. After each visit, nodes are considered in decreasing order of distance, and for each of them we consider the arcs (v,w) such that the distance of w is exactly one plus the distance of v: in this case we say that v is a parent of w. Such parents are used to compute the values of δ (exactly as in the original algorithm, but without any need to keep an explicit set of parents, which is important since this class is memory intensive). Every visit is independent and is carried out by a separate thread. The only contention point is the update of the array accumulating the betweenness score, which is negligible. The downside is that running on k cores requires approximately k times the memory of the sequential algorithm, as only the graph and the betweenness array will be shared. This class keeps carefully track of overflows in path counters, and will throw an exception in case they happen. Thanks to David Gleich for making me note this serious problem, which is often overlooked. • ### Nested Class Summary Nested Classes Modifier and Type Class and Description `static class ` `BetweennessCentrality.PathCountOverflowException` An exception telling that the path count exceeded 64-bit integer arithmetic. • ### Field Summary Fields Modifier and Type Field and Description `double[]` `betweenness` The array of betweenness value. `protected java.util.concurrent.atomic.AtomicInteger` `nextNode` The next node to be visited. `protected boolean` `stop` Whether to stop abruptly the visiting process. • ### Constructor Summary Constructors Constructor and Description `BetweennessCentrality(ImmutableGraph graph)` Creates a new class for computing betweenness centrality, using as many threads as the number of available processors. ```BetweennessCentrality(ImmutableGraph graph, int requestedThreads)``` Creates a new class for computing betweenness centrality. ```BetweennessCentrality(ImmutableGraph graph, int requestedThreads, ProgressLogger pl)``` Creates a new class for computing betweenness centrality. ```BetweennessCentrality(ImmutableGraph graph, ProgressLogger pl)``` Creates a new class for computing betweenness centrality, using as many threads as the number of available processors. • ### Method Summary All Methods Modifier and Type Method and Description `void` `compute()` Computes betweenness centrality. 
`static void` `main(java.lang.String[] arg)` • ### Methods inherited from class java.lang.Object `clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait` • ### Field Detail • #### nextNode `protected final java.util.concurrent.atomic.AtomicInteger nextNode` The next node to be visited. • #### stop `protected volatile boolean stop` Whether to stop abruptly the visiting process. • #### betweenness `public final double[] betweenness` The array of betweenness value. • ### Constructor Detail • #### BetweennessCentrality ```public BetweennessCentrality(ImmutableGraph graph, int requestedThreads, ProgressLogger pl)``` Creates a new class for computing betweenness centrality. Parameters: `graph` - a graph. `requestedThreads` - the requested number of threads (0 for `Runtime.availableProcessors()`). `pl` - a progress logger, or `null`. • #### BetweennessCentrality ```public BetweennessCentrality(ImmutableGraph graph, ProgressLogger pl)``` Creates a new class for computing betweenness centrality, using as many threads as the number of available processors. Parameters: `graph` - a graph. `pl` - a progress logger, or `null`. • #### BetweennessCentrality ```public BetweennessCentrality(ImmutableGraph graph, int requestedThreads)``` Creates a new class for computing betweenness centrality. Parameters: `graph` - a graph. `requestedThreads` - the requested number of threads (0 for `Runtime.availableProcessors()`). • #### BetweennessCentrality `public BetweennessCentrality(ImmutableGraph graph)` Creates a new class for computing betweenness centrality, using as many threads as the number of available processors. Parameters: `graph` - a graph. • ### Method Detail • #### compute ```public void compute() throws java.lang.InterruptedException``` Computes betweenness centrality. Results can be found in `betweenness`. Throws: `java.lang.InterruptedException` • #### main ```public static void main(java.lang.String[] arg) throws java.io.IOException, java.lang.InterruptedException, JSAPException``` Throws: `java.io.IOException` `java.lang.InterruptedException` `JSAPException`
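As a prose-to-code illustration of Brandes's algorithm itself (not of this class's API or of its memory-lean, multi-threaded variant), here is a minimal single-threaded sketch in Python. It keeps explicit parent lists, which the javadoc above deliberately avoids, and the function name and dict-of-lists graph encoding are our own choices:

```python
from collections import deque

def betweenness(adj):
    """Textbook Brandes on an unweighted graph given as {node: [neighbors]}."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        sigma = dict.fromkeys(adj, 0)      # number of shortest s->v paths
        dist = dict.fromkeys(adj, -1)
        parents = {v: [] for v in adj}
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:                        # breadth-first visit from s
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:  # arc (v, w): v is a parent of w
                    sigma[w] += sigma[v]
                    parents[w].append(v)
        delta = dict.fromkeys(adj, 0.0)
        for w in reversed(order):           # decreasing distance from s
            for v in parents[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# On the path a-b-c, only b lies on the a<->c shortest paths: score 2.0.
print(betweenness({'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}))
```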
1,256
5,552
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.703125
3
CC-MAIN-2019-04
latest
en
0.902876
https://testbook.com/question-answer/the-ratio-between-the-capacity-of-an-open-delta-or--63aeb1910193c15e4d91c763
1,725,965,152,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00270.warc.gz
542,729,307
46,581
# The ratio between the capacity of an Open-delta or V-V connection (of three-phase transformers) and the capacity of a Delta-delta connection (of three-phase transformers) is This question was previously asked in BSPHCL JE Electrical 2019 Official Paper: Batch 2 (Held on 31 Jan 2019) 1. 57.7% 2. 87.7% 3. 67.7% 4. 47.7% Option 1 : 57.7% ## Detailed Solution Concept: Open-delta connection: • An open delta connection transformer uses two single-phase transformers to provide a three-phase supply to the load. • An open delta connection system is also called a V-V system. • Open delta connection systems are usually only used in emergency conditions, when one of the transformers in the delta-delta system is damaged and removed from service. • Efficiency for the operation of an open delta connection is low compared to delta-delta systems, which are used during standard operations. Power supplied by the open delta connection: $${P_{V - V}} = \sqrt 3 {V_{ph}}{I_{ph}}$$ Power supplied by the delta-delta connection: $${P_{{\rm{\Delta }} - {\rm{\Delta }}}} = 3{V_{ph}}{I_{ph}}$$ Therefore, $${P_{V - V}} = \frac{1}{{\sqrt 3 }}{P_{{\rm{\Delta }} - {\rm{\Delta }}}}$$ Hence, the ratio between the capacity of an Open-delta and the capacity of a Delta-delta connection is 57.7%
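A one-line numeric check of that ratio (plain Python; the formulas are exactly the ones above):

```python
import math

# P_vv / P_dd = (sqrt(3) * Vph * Iph) / (3 * Vph * Iph) = 1 / sqrt(3)
print(100 / math.sqrt(3))   # 57.735..., i.e. about 57.7%
```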
394
1,388
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.203125
3
CC-MAIN-2024-38
latest
en
0.854177
http://myriverside.sd43.bc.ca/serenao2016/category/grade-11/math-11/
1,725,916,976,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00617.warc.gz
19,227,257
10,909
# Serena's Blog ### Math 11 This week in Precalc 11, we continued what we started on trigonometry before our winter break. Since I did not do the week 16 blog post, I have learned a lot about trigonometry this semester that I haven't shared yet…. Continue Reading → This week in precalc 11, we finished our whole unit on rational expressions and equations. One of my favorite things we learned was solving rational equations. There are many different ways you can solve a rational equation such as moving… Continue Reading → This week in precalc 11, we learned many new things. We started our new unit, Rational Expressions and Equations. We learned how to multiply, divide, add and subtract rational expressions. What we recalled starting the unit was that a rational number… Continue Reading → This week in precalc 11, we learned many new concepts and learned a completely different vocabulary. We learned something called reciprocal functions. For example, if we have then the reciprocal would be Some new vocabulary that we learned are invariant… Continue Reading → This week in precalc 11, we started learning about absolute values and reciprocal functions. As we already know, the absolute value of a number cannot be negative so we have to always keep that in mind. If we have… Continue Reading → This week in precalc 11 we focused on the new unit we just started, graphing inequalities and systems of equations. A lot of the things we learned this week were review from what we did in grade 9. What we… Continue Reading → This week in precalc 11, we reviewed for our midterm and we started our next unit. During the week, I spent a lot of time reviewing old concepts in the book and realized I forgot how to do a lot… Continue Reading → This week in Precalc 11, we learned many new things and started preparing for our unit test and midterm. One of the most interesting things we learned this week was equivalent forms of the quadratic function. From what we learned,… Continue Reading → This week in precalc 11, we learned a lot of new things. One of my favorite things I learned this week was analyzing which is standard form We know that can tell us … If it is congruent to If… Continue Reading → This week in Precalc 11, we mostly focused on reviewing for our solving quadratic equations test but we did learn one thing before our test. In the picture above is the quadratic formula, the highlighted area is called the discriminant…. Continue Reading →
520
2,478
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.578125
4
CC-MAIN-2024-38
latest
en
0.954879
https://www.doubtnut.com/question-answer/if-equations-x2-a-x-120-x2-b-x-150a-n-dx2-a-bx-360-have-a-common-positive-root-then-find-the-values--29381
1,632,429,906,000,000,000
text/html
crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00349.warc.gz
771,420,330
84,171
# If the equations x^2 + ax + 12 = 0, x^2 + bx + 15 = 0 and x^2 + (a + b)x + 36 = 0 have a common positive root, then find the values of a and b.
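The page itself does not show the worked solution, so here is our own check (not the site's): adding the first two equations and subtracting the third leaves x^2 - 9 = 0, so the common positive root is x = 3, which then pins down a and b.

```python
# common positive root forced by the algebra above
x = 3
a = -(x**2 + 12) / x          # from x^2 + a*x + 12 = 0  ->  a = -7
b = -(x**2 + 15) / x          # from x^2 + b*x + 15 = 0  ->  b = -8
print(a, b)                   # -7.0 -8.0
print(x**2 + (a + b)*x + 36)  # 0.0, so the third equation holds too
```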
418
944
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2021-39
latest
en
0.413096
https://www.doubtnut.com/question-answer/find-the-distance-between-the-points-p-1-3-4-and-q-4-1-2--867
1,623,793,982,000,000,000
text/html
crawl-data/CC-MAIN-2021-25/segments/1623487621627.41/warc/CC-MAIN-20210615211046-20210616001046-00461.warc.gz
669,493,848
67,270
Class 11 MATHS Introduction To Three Dimensional Geometry # Find the distance between the points P (1, -3, 4) and Q (-4, 1, 2). Text Solution Solution : The distance between two points (x_1, y_1, z_1) and (x_2, y_2, z_2) is given by D = sqrt((x_2-x_1)^2 + (y_2-y_1)^2 + (z_2-z_1)^2). Therefore, the distance between P(1, -3, 4) and Q(-4, 1, 2) is PQ = sqrt((-4-1)^2 + (1-(-3))^2 + (2-4)^2) = sqrt(25+16+4) = sqrt(45) = 3*sqrt(5).
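The same computation in two lines of Python (math.dist needs Python 3.8 or later):

```python
import math

P, Q = (1, -3, 4), (-4, 1, 2)
print(math.dist(P, Q), 3 * math.sqrt(5))   # both 6.708..., i.e. 3*sqrt(5)
```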
531
1,232
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.796875
4
CC-MAIN-2021-25
latest
en
0.484501
http://www.math-only-math.com/worksheet-on-greater-or-smaller-fraction.html
1,503,481,894,000,000,000
text/html
crawl-data/CC-MAIN-2017-34/segments/1502886118195.43/warc/CC-MAIN-20170823094122-20170823114122-00102.warc.gz
598,870,507
8,681
# Worksheet on Greater or Smaller Fraction Practice the math question given in the worksheet on greater or smaller fraction. The questions are based on fractions having the same denominators but different numerators and fractions having the same numerators but different denominators. 1. Write greater or smaller between the given pairs of fractions: (i) 1/8 ______ 3/8 (ii) 3/7 ______ 2/7 (iii) 15/41 ______ 10/41 (iv) 7/15 ______ 9/15 (v) 112/132 ______ 108/132 (vi) 25/39 ______ 37/39 2. Mark greater (>) or smaller (<) between the given pairs of fractions: (i) 1/9 ______ 1/10 (ii) 5/17 ______ 5/19 (iii) 11/12 ______ 11/13 (iv) 15/21 ______ 15/23 (v) 21/43 ______ 21/45 (vi) 5/21 ______ 5/17 3. Fill in the gaps: (i) 1/5 of 15 (ii) 1/3 of 12 (iii) 2/3 of 24 (iv) 3/7 of 28 (v) 5/8 of 32 (vi) 7/8 of 64 4. Write the fractions in 'ascending order' and 'descending order': (i) 1/4, 3/4, 2/4, 4/4 (ii) 5/16, 7/16, 1/16, 9/16, 11/16 (iii) 5/7, 2/7, 6/7, 3/7 (iv) 6/11, 3/11, 11/11, 8/11, 7/11 5. Davis bought 7 mangoes. 2/7 of them were damaged. How many mangoes were in good condition? 6. Find 3/8 of 32 7. Shelly bought 16 chocolates, 3/4 of which was eaten. Find the number of remaining chocolates. Answers for the worksheet on greater or smaller fraction are given below to check the exact answers of the above questions on finding the greater fraction or smaller fraction between the given pairs of fractions. 1. (i) < (ii) > (iii) > (iv) < (v) > (vi) < 2. (i) > (ii) > (iii) > (iv) > (v) > (vi) < 3. (i) 3 (ii) 4 (iii) 16 (iv) 12 (v) 20 (vi) 56 4. (i) ascending order - 1/4, 2/4, 3/4, 4/4 descending order - 4/4, 3/4, 2/4, 1/4 (ii) ascending order - 1/16, 5/16, 7/16, 9/16, 11/16 descending order - 11/16, 9/16, 7/16, 5/16, 1/16 (iii) ascending order - 2/7, 3/7, 5/7, 6/7 descending order - 6/7, 5/7, 3/7, 2/7 (iv) ascending order - 3/11, 6/11, 7/11, 8/11, 11/11 descending order -11/11, 8/11, 7/11, 6/11, 3/11 5. 5 mangoes 6. 12 7. 4 chocolates
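If you want to check answers like these programmatically, Python's fractions module compares and reduces exactly; a small sketch covering one item from each style of question:

```python
from fractions import Fraction

# Q1(i): same denominators, so compare numerators
print(Fraction(1, 8) < Fraction(3, 8))    # True, 1/8 is smaller

# Q2(i): same numerators, so the bigger denominator gives the smaller fraction
print(Fraction(1, 9) > Fraction(1, 10))   # True

# Q4(iii): ascending order of 5/7, 2/7, 6/7, 3/7
print(sorted(Fraction(n, 7) for n in (5, 2, 6, 3)))
# [Fraction(2, 7), Fraction(3, 7), Fraction(5, 7), Fraction(6, 7)]

# Q6: 3/8 of 32
print(Fraction(3, 8) * 32)                # 12
```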
828
2,011
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.28125
4
CC-MAIN-2017-34
longest
en
0.782161
https://us.sofatutor.com/math/videos/reviewing-representations-of-ratios
1,717,050,081,000,000,000
text/html
crawl-data/CC-MAIN-2024-22/segments/1715971059506.69/warc/CC-MAIN-20240530052602-20240530082602-00691.warc.gz
517,585,577
41,396
# Reviewing Representations of Ratios Rating Ø 5.0 / 1 ratings The authors Chris S. ## Basics on the topicReviewing Representations of Ratios After this lesson you will be able to apply your knowledge of ratios to many real world scenarios. The video begins with a review of ratio notation and how to find unit rate. It continues with how to use ratio tables and tape diagrams to find equivalent and associated ratios. It concludes with a review of ratios in graphs, and finding the constant of proportionality. Review representations of ratios by going on a free-sample shopping spree, with Segway Sam at the supermarket! This video includes key concepts, notation, and vocabulary such as: ratio (a comparative, proportional relationship between two amounts); equivalent ratios (two ratios which represent the same proportional relationship); tape diagrams (a diagram used to visualize a set of equivalent ratios); ratio tables (a table of equivalent ratios); unit rate (a proportional relationship which compares a quantity to one unit of another quantity). Before watching this video, you should already be familiar with writing ratios, simplifying ratios, tape diagrams, ratio tables, and plotting ratios on a coordinate plane. After watching this video, you will be prepared to learn more about rates, unit rates, and solve real world rate problems. Common Core Standard(s) in focus: 6.RP.3.a A video intended for math students in the 6th grade Recommended for students who are 11-12 years old ### TranscriptReviewing Representations of Ratios Segway Sam is visiting his favorite supermarket. The free samples at the Steal-a-Deal market are incredible! Sam never seems to buy anything and has a tendency to go a bit overboard with the free samples. He decides to keep track of his calories while snacking using his new, fancy fitness watch. Reviewing representations of ratios will help Sam keep his free-sample eating-habits in check. Sam takes a good look at the cheese cubes. His fancy fitness watch tells him the caloric content of the cheese in the form of a ratio. 3 cheese cubes are 15 calories. We can write the ratio of cheese cubes to calories as 3 to 15 or in fraction form, 3 over 15. We could also think about calories to cheese cubes instead, giving us an associated ratio 15 to 3 or in fraction form, 15 over 3. We could also reduce these ratios to their unit rate: the ratios equivalent to these ones in which one of the terms is equal to one. Here we have that the unit rate of 3 over 15 is 1 over 5, or 1 cube per 5 calories. The unit rate of this associated ratio is then 5 over 1, or 5 calories per cube. He excitedly grabs 2 cheese cubes, giving him 2 times 5 calories, or 10 calories. Good deal! Segway Sam rolls on to his next target. Yo! Jackpot! Tater tots! Let’s take a look at the table given by Sam's fancy watch. 3 tots have 66 calories. Heavy tots! But wait a minute, are we sure this is a table of equivalent ratios? A table is a ratio table if every ratio in the table reduces to the same fraction in simplest form. We see that 3 over 66 reduces to 1 over 22. 6 over 132 also reduces to 1 over 22, the same with 9 over 198, and 30 over 660. This means that all of these ratios are equivalent ratios, and that this table is a ratio table. Sam grabs 3 tots, adding 66 to his calorie count, and Segway's off! Hot dog, what have we here? Tasty mini-dogs, right off the grill. The watch says that each dog is 12 calories. Sam's got his eye on four dogs though, how many calories is that? 
We can use a tape diagram to figure this out. We draw one square to represent one mini-dog, and we draw 12 squares to represent 12 calories. So each square represents 1, showing us visually that for every one mini-dog, there are 12 calories. How do we use this tape diagram to figure out how many calories four are? Since 1 dog to 12 calories is an equivalent ratio to 4 dogs to however many calories, we know that for every 4 mini-dogs there will be 12 times 4 calories. Which adds up to 48 calories. Hrm. That's a bit too many calories for Sam. If he takes 3 dogs, then by writing a 3 in every calorie square and summing them up gives us 36 calories. Sam can live with that. Sam still has room for dessert! And dessert is served! Piping hot chocolate chip cookies, ready to melt in your mouth! Sam's thinking of taking 20 cookies! We’re gonna need a graph to see how many cookies and calories he is willing to scarf down! The given table shows us that 3 cookies gives him 90 calories, and so on. We can plot the points on a graph. 'y', the dependent variable, is calories. That makes sense because Sam's calories depend on the number of cookies he eats. The number of cookies is 'x', the independent variable. A straight line through the origin means we’re graphing a set of equivalent ratios! The unit rate shows up on the graph as the 'y'-coordinate of the point (1,30). The unit rate is also known as the constant of proportionality, as it is the constant which every ratio is equal to. So in our case, the number of calories, 'y', over the number of cookies, 'x', is always equal to 30. Notice that this is also the slope of the line. The unit rate, or constant of proportionality, and the slope are always equal to each other. We multiply by 'x' on both sides to get the equation 'y' equals '30x'. Now we use our equation to calculate the calorie count for 20 cookies. Substituting 20 for 'x' gives us 'y' equals 30 times 20. 'y' equals 600. This means that 20 cookies gives Sam 600 calories of sweet excess! He stuffs 20 cookies in his pack, and he’s off and rolling! Man, knowing how to represent ratios with fractions, tables, tape diagrams, graphs and equations sure makes calorie counting easy. Wait a minute, Steal-A-Deal security doesn't seem to like Sam's sample-eating-habits! He's on Sam's trail and riding a turbo Segway! Well, I guess this is as good of a time as any for Sam to segue into...watch out!
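The two checks the transcript walks through, the ratio-table test and the cookie equation, fit in a few lines of Python (using the exact numbers from the episode):

```python
from fractions import Fraction

# A table is a ratio table iff every row reduces to the same fraction.
tot_table = [(3, 66), (6, 132), (9, 198), (30, 660)]
print(len({Fraction(a, b) for a, b in tot_table}) == 1)   # True: all 1/22

# Cookies: constant of proportionality k = 90 / 3 = 30 calories per cookie,
# so y = 30x, and 20 cookies cost Sam 600 calories.
k = Fraction(90, 3)
print(k, k * 20)                                          # 30 600
```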
1,410
5,951
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.28125
4
CC-MAIN-2024-22
latest
en
0.927558
https://oeis.org/A121365
1,621,099,482,000,000,000
text/html
crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00570.warc.gz
429,748,076
3,874
A121365 a(n) = 6*a(n-1) - 9*a(n-2) + n + 1. 1 1, 1, 1, 2, 9, 43, 185, 732, 2737, 9845, 34449, 118102, 398585, 1328607, 4384393, 14348912, 46633953, 150663529, 484275617, 1549681962, 4939611241, 15690529811, 49686677721, 156905298052, 494251688849 OFFSET 1,4 LINKS Index entries for linear recurrences with constant coefficients, signature (8, -22, 24, -9). FORMULA a(n) = (36 + (n-4)*3^n + 9*n)/36. O.g.f.: -x*(-1+7*x-15*x^2+8*x^3)/((-1+x)^2*(3*x-1)^2). - R. J. Mathar, Dec 10 2007 MAPLE A121365:= n-> (36 + (n-4)*3^n + 9*n)/36: seq(A121365(n), n=1..30); # Wesley Ivan Hurt, Apr 29 2014 MATHEMATICA Table[(9*(n+4)+(n-4)*3^n)/36, {n, 25}] CROSSREFS Cf. A121968. Sequence in context: A222472 A132847 A275620 * A018960 A217666 A002310 Adjacent sequences: A121362 A121363 A121364 * A121366 A121367 A121368 KEYWORD nonn,easy AUTHOR Zak Seidov, Sep 06 2006 STATUS approved
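A quick cross-check of the closed form in the FORMULA section against the defining recurrence (our own verification script, not part of the OEIS entry):

```python
def a(n):
    # closed form from the FORMULA line; the numerator is always divisible by 36
    return (36 + (n - 4) * 3**n + 9 * n) // 36

seq = [1, 1]                                  # a(1), a(2)
for n in range(3, 26):
    seq.append(6 * seq[-1] - 9 * seq[-2] + n + 1)

print(seq[:10])                               # [1, 1, 1, 2, 9, 43, 185, 732, 2737, 9845]
print(all(a(n) == seq[n - 1] for n in range(1, 26)))   # True
```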
538
1,443
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3
3
CC-MAIN-2021-21
latest
en
0.525962
http://pub41.bravenet.com/forum/static/show.php?usernum=3444295554&frmid=18&msgid=898368&cmd=show
1,537,621,108,000,000,000
text/html
crawl-data/CC-MAIN-2018-39/segments/1537267158429.55/warc/CC-MAIN-20180922123228-20180922143628-00087.warc.gz
205,387,271
9,692
### Number Watch Web Forum This forum is about wrong numbers in science, politics and the media. It respects good science and good English. Number Watch Web Forum Author Comment confidence levels If I flip a coin, there is a 50% chance it will come up heads (50% confidence level). If I flip a second coin, there is also a 50% chance it will come up heads (50% confidence level). If I flip two coins at the same time, there is not a 50% chance they will both come up heads (50% confidence level). There is only a 25% chance (confidence level) that they will both come up heads. One multiplies the probabilities: 0.5 x 0.5 = 0.25 (25%). If you have two studies, each with a 95% confidence level, and combine the data into one study, does that new study still have a 95% confidence level, or is the confidence level only 90% (0.95 x 0.95 ≈ 0.90)? If you combine 5 such studies into one, will the confidence level for the new study be 95%, or will it be only 77% (0.95 x 0.95 x 0.95 x 0.95 x 0.95 ≈ 0.77)? Re: confidence levels If you go from the Number Watch index to FAQs and then "What has the weakest link to do with fallacies in medical statistics?" you will find a table with the results you require. Re: confidence levels Thank you Dr. B. But, does that table apply to a meta-analysis of 10 studies to determine the probability of one disease being caused/associated with one particular factor? Using that table, am I correct in finding the 1993 EPA meta-analysis (10 studies) on SHS and lung cancer has only a 60% CI? Re: confidence levels It is important not to confuse two equal and opposite frauds. A data dredge takes one big survey and pretends it is a lot of little trials, to which the table applies. A metastudy takes a lot of little insignificant trials and pretends they are one big significant one. How these data are combined is a more obscure process that I have never fathomed. That EPA metastudy has five clear frauds in it (see for example April 2003), the biggest being that they began work on the anti-smoking legislation four years before they started to manufacture the test data. The results actually indicate that SHS is harmless. Re: confidence levels "A metastudy takes a lot of little insignificant trials and pretends they are one big significant one." Would it be true to say that a metastudy does little more than increase the probability that a conclusion is incorrect? It would seem that the amount of said increase would be up to the maths of 'conditional probability'. Going back to coin flipping. If you take two coins in your hand and then throw them under a cloth: each coin has a 50% chance of being a heads; but, given the condition that your first coin comes out from under the cloth as a heads, there is only a 25% probability the second coin will also do so. Perhaps part of the problem with metastudies is that there is no consideration of 'conditional probability'? Re: confidence levels Now you have lost me. Re: confidence levels That is certainly not your fault. I was trying to explain a concept and it was, I'm afraid, poorly done. Re: confidence levels This free-to-view paper taken from the Lancet should give an idea of how the confidence interval for the "meta analysis" relates to the confidence intervals for the individual contributing studies in the meta analysis. The meta analysis still has 95% confidence levels.
http://tobaccodocuments.org/pm/2047231315-1318.pdf The paper seems to be using a technique for combining results from the contributing studies called the Mantel-Haenszel method (it quotes some other methods as well), for which details are given on this webpage: https://www.ctspedia.org/do/view/CTSpedia/StudyAnalysisMH Re: confidence levels Meta-analysis can work fine if you are combining several similarly-executed trials which then gives you the statistical power to see effects that wouldn't be powered for in the smaller trials. The problem with most of the epidemiological stuff is not the use of meta-analysis per se, it is that the statistical testing is applied to large numbers of post-hoc hypotheses and the tests are really designed to look for effects of interventions. Since you can't ethically perform an interventional study with tobacco smoke (we know it's bad for you and the study has no prospect of benefitting the participants) you have to assign "treatment groups" on the basis of asking people about past, incidental exposure. This is not only notoriously inaccurate it introduces a range of biases you can't control for. One odd result off the top of my head: Dutch tea drinkers are more likely to smoke than Dutch non-tea drinkers. If the correlation is strong enough (or the study large enough) you could demonstrate that drinking tea causes heart disease in the Netherlands. Bigger studies also take disproportionate effort to do important things like age and sex matching of controls - importantly failing to do this is likely to dilute real effects but potentiate stochastic effects. In the clinical world, statistically significant results (uncorrected for multiplicity) on things other than your powered, primary efficacy variable, are considered interesting things that might or might not be worthy of further investigation. At best in efficacy they are supportive of a claim. You couldn't usually base a strong enough claim to get a marketing license on them, for example. In the public health world, one P<0.05 among twenty risks (never benefits) tested for is considered adequate justification for draconian legislation.
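The "weakest link" arithmetic from the opening post is a one-liner: for k independent tests each kept at its own 95% level, the chance that all k stay inside their bounds is 0.95^k.

```python
# joint probability that k independent 95%-level tests all hold
for k in (1, 2, 5, 10):
    print(k, round(0.95**k, 3))
# 1 0.95   2 0.903   5 0.774   10 0.599
```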
1,227
5,562
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.328125
3
CC-MAIN-2018-39
longest
en
0.912902
https://www.physicsforums.com/threads/why-is-this-certain-angle-20-degrees.409886/
1,521,528,227,000,000,000
text/html
crawl-data/CC-MAIN-2018-13/segments/1521257647299.37/warc/CC-MAIN-20180320052712-20180320072712-00326.warc.gz
838,068,938
15,872
# Why is this certain angle 20 degrees? 1. Jun 13, 2010 ### elementG 1. The problem statement, all variables and given/known data Problem #199 http://img88.imageshack.us/img88/8008/scan0001vd.jpg [Broken] Solution http://img21.imageshack.us/img21/2131/199hg.jpg [Broken] 2. Relevant equations Why is the angle from VB/A 20 degrees from the solution diagram? It would seem that I had to know that the direction of VB/A had the same angle as VA in terms of the geometry (if two parallel lines are cut by a transversal, its alternating interior angles are equal). I just don't see how you can assume that. 3. The attempt at a solution Since drawing a triangle is the first part, I don't have any "attempt" at it yet. Last edited by a moderator: May 4, 2017 2. Jun 13, 2010 ### DaveC426913 I'm not sure what you're asking. You are given that the bearing is 20 degrees. i.e. A takes a bearing of B and sees it is 20 degrees East of North, thus theta is 20 degrees. 3. Jun 13, 2010 ### elementG I saw that 20 degrees was given, I just don't see how the angle is 20 degrees on the solution diagram. Ship A observes ship B at 20 degrees, but how is VB/A also 20 degrees down from horizontal? Sorry for the confusion! 4. Jun 13, 2010 ### DaveC426913 It's been a while, sorry, VB/A represents what part of the diagram? I'd assumed we only care about angle theta, which is 20. 5. Jun 13, 2010 ### elementG VB/A comes off the head of VA. I just don't see how it's 20 degrees when VB/A and VA are connected as seen on the solutions diagram. 6. Jun 13, 2010 ### DaveC426913 Sorry, I hadn't looked at the second diagram. Well, the solution triangle is just a rearrangement of the starting configuration. The two vectors start off at 20 degrees, why would that change? 7. Jun 13, 2010 ### elementG Oh, I guess I made the wrong assumption. I was assuming the angle that VB/A made was not necessarily 20 degrees. I guess the reason I'm still (a little bit) confused is because I can't see it geometrically. Like say for instance, if I'm still on the assumption that the angle is not 20 degrees for VB/A and I label it as an unknown, how would I geometrically prove that the angle is 20 degrees? 8. Jun 13, 2010 ### DaveC426913 You would not be able to solve the problem. You're given the angle because you need it.
644
2,400
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.921875
4
CC-MAIN-2018-13
longest
en
0.953371
https://healthyliving.azcentral.com/purpose-taking-pulse-rate-during-exercise-2647.html
1,686,371,928,000,000,000
text/html
crawl-data/CC-MAIN-2023-23/segments/1685224656963.83/warc/CC-MAIN-20230610030340-20230610060340-00530.warc.gz
336,144,271
28,399
You may have noticed other people taking their heart rates during the middle of a workout, and wondered what they were doing. In most cases, it's likely not just a matter of curiosity, but a matter of getting the most out of each workout. While some researchers say monitoring your heart rate is really only necessary for the most advanced athletes, if you want to try it out, the process is fairly basic and simple to do. ## Basics When you take your pulse, you're measuring the number of times your heart beats within a one-minute period. When you exercise, your heart beats faster to bring more oxygen to your cells. That's also why you begin breathing harder when you work out -- your lungs need to fill with air more often to shuttle oxygen into your bloodstream. ## Target Heart Rate When you work out, the ideal condition is to be within a "target heart rate," which allows your body to get the most out of your workout. If you're in a particular heart rate zone, for example, you'll be able to burn more fat, while being above the target can mean overexertion and a less efficient workout. According to the Cleveland Clinic, the target heart rate for most people is about 60 to 80 percent of the individual's maximum heart rate. Maximum heart rate is the absolute fastest your heart can beat -- which typically goes down with age. According to the Cleveland Clinic, you should not exercise above 85 percent of your maximum heart rate, as that can increase the risk to your cardiovascular and orthopedic health. ## Baseline To determine your target heart rate, you'll first need to determine your resting heart rate. To do this, place your index and middle fingers on either side of your windpipe when you first wake in the morning. Watching a clock, count the number of beats you feel in 10 seconds, and then multiply that number by 6. That is your resting heart rate. To determine your target range you'll also need to know your maximum heart rate; men can do this by subtracting their age from 220. For example, if you are 40, your maximum would be 180. According to researchers at Northwestern University, however, women should calculate their heart rate slightly differently, subtracting 88 percent of the woman's age from 206. ## Calculation You'll now need to do a bit of math to determine your target heart rate. According to researchers at the University of Montana, you can do this by first subtracting your resting heart rate from the max heart rate. Using the example of the 40-year-old with a resting heart rate of 80, subtract 80 from 180 to get to 100. Then multiply that number by .60 -- so in this example, that would be 60. Then add the resting heart rate back -- in this example, add 60 plus 80 to get to 140. This is the target heart rate for a 40 year old with a resting heart rate of 80. While exercising, check your pulse and see if you're close to that; if not, slow down or speed up the intensity of your workout to be within that ideal range.
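The whole calculation section condenses to a few lines; a sketch, with the function name and the female flag being our own choices (the 206 minus 88 percent of age rule is the Northwestern adjustment mentioned above):

```python
def target_heart_rate(age, resting, intensity=0.60, female=False):
    maximum = 206 - 0.88 * age if female else 220 - age  # max heart rate
    return (maximum - resting) * intensity + resting      # Karvonen-style

print(target_heart_rate(40, 80))   # 140.0, matching the worked example
```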
649
2,989
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.515625
3
CC-MAIN-2023-23
latest
en
0.945446
https://math.answers.com/questions/37_-5_plus_y
1,708,895,085,000,000,000
text/html
crawl-data/CC-MAIN-2024-10/segments/1707947474643.29/warc/CC-MAIN-20240225203035-20240225233035-00243.warc.gz
381,962,773
46,716
# 37 -5 plus y Updated: 9/25/2023 ShaniaBrowngp3571 Lvl 1 8y ago 32 Gretchen Medhurst Lvl 10 2y ago Wiki User 8y ago = 32 + y Wiki User 8y ago 32y Related questions ### What is the solution to the linear equation 6 plus y equals 37? If: 6+y =37 Then: y = 31 32 ### What is -2(y plus 5) plus y? -2(y plus 5) plus y? 5 plus y. ### 37 plus y equals 87y equals? So if I understand your question correct, it should be 37+Y=87Y? ### What is 2 plus (5 plus y)? 2 + (5 + y) = 7 + y ### What is Y x plus 5 Y 6? The answer to Y x plus 5 Y 6 is Y(x+5Y5).One possible solution to y x plus 5 y 6 is Y(x+5Y5). ### 80 equals 3 y plus 2 y plus 4 plus 1? 80 = 3 y + 2 y + 4 + 1 80 = 5 y + 5 80 - 5 = 5 y 75 = 5 y 15 = y So Y is 15 5+y+2y? = 5+3y? ### What is 3 plus y times 5 plus y? (3+y)(5+y) 3*5 + 3*y + y*5 + y*y 15 + 3y + 5y + y215 + 8y + y2y2 + 8y + 15 ### What equation represents y x2 10x plus 30 in vertex form y (x 5)2 plus 55 y (x 5)2 plus 30 y (x 5)2 5 y (x 5)2 plus 5? It is difficult to tell because this site uses a useless browser: one which cannot accept most mathematical symbols. So all we can see of the given equation is "y x2 10x plus 30", and the options appear as "y (x 5)2 plus 55 y (x 5)2 plus 30 y (x 5)2 5 y (x 5)2 plus 5".If the equation is y = x^2 - 10x + 30 then the answer is y = (x - 5) ^2 + 5. (y - 5)(y - 6)
600
1,419
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
4.15625
4
CC-MAIN-2024-10
latest
en
0.671148
https://mathstatbites.org/predicting-the-future-events/
1,726,145,770,000,000,000
text/html
crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00426.warc.gz
354,574,568
11,626
# Predicting the Future (events) Predicting the Future (events) Authors & Year: Qinglong Tian, Fanqi Meng, Daniel J. Nordman, and William Q. Meeker (2021) Journal: Journal of the American Statistical Association [DOI: 10.1080/01621459.2020.1850461] Review Prepared by David Han Statistical Prediction for Quality Engineering For quality assessments in reliability and industrial engineering, it is often necessary to predict the number of future events (e.g., system or component failures). Examples include the prediction of warranty returns and the prediction of future product failures that could lead to serious property damages and/or human casualties. Business decisions such as a product recall are based on such predictions. These applications also require a prediction interval to quantify prediction uncertainty arising from the combination of process variability and parameter uncertainty. In this research, the authors studied the within-sample prediction of the number of future events, given the observed data. The term within-sample prediction was used to distinguish from the more widely known new-sample prediction. For new-sample prediction, past data are used to produce a prediction interval for the lifetime of a single unit from a new independent sample. For within-sample prediction, however, the sample is unchanged. The future quantity that researchers wish to predict (e.g., the number of future failures) relates to the same sample that provided the original data. How to Construct Prediction Interval? To compute a prediction interval, one needs the conditional probability distribution for a random quantity of interest, given the observed data. This conditional distribution is indexed by the parameters, which are usually unknown in practice. This requires estimation of the parameters from the observed data. The popular plug-in (PL) method (a.k.a. the naive or estimative method) replaces the unknown parameter by a consistent estimator, which converges to the true parameter in probability as the sample size grows. Although the PL method has been criticized for ignoring the estimation uncertainty, many researchers argued that the coverage probability of the PL method has a good accuracy under certain conditions. The authors of the current research, however, demonstrated that the PL method fails to provide an asymptotically correct interval for the within-sample prediction. That is, for large amounts of data, the coverage probability always fails to converge to the nominal confidence level. As a solution, they suggested the calibration-bootstrap (CB) method and established its asymptotic correctness. The basic idea of a bootstrap method is that inference based on sample data can be modeled by resampling the sample data and performing inference from the resampled data. The authors also presented two alternative methods based on a predictive distribution. The first is a general method using direct parametric bootstrap (DB) samples. Using bootstrap samples, it establishes the future failure probability distribution. And the predictive distribution is obtained by averaging over this distribution. The second method, inspired by generalized pivotal quantities (GPQ), does so based on a special probability function known as the log-location-scale distribution family, which is popular in reliability engineering. Which One to Use in Practice? Through simulations, the performance difference of these four methods was investigated. 
Figure 1 compares the coverage probabilities of the prediction bounds based on single-cohort data generated from a Weibull distribution with 20% probability of failure in a future time interval. Pf1 is the probability of failure before an experiment terminates, representing the amount of information that can be used to construct prediction intervals. Each column is for a different prediction bound (i.e., a one-sided interval, either lower or upper), while the horizontal dashed line represents the nominal level, 90% or 95%. As illustrated, the PL method fails to attain asymptotically correct coverage probability. On the other hand, the DB and GPQ methods are close to each other, and they tend to outperform the CB method since their coverage probabilities are much closer to the nominal level, even though all three methods are asymptotically valid. With some additional benefits such as the ease of implementation and computational stability, the authors recommend predictive distribution methods, especially the GPQ method, for general applications involving within-sample prediction. Figure 1. Coverage probabilities versus expected number of events based on the four methods Now & Onward The present results clearly warn us about the deficiency of the PL method for computing a prediction interval, although it has been well known and widely used for its simplicity. Moving forward, it is advised that statistical practitioners and quality engineers should use the DB or GPQ method for within-sample predictions. The research does not stop here. In many applications, test units are exposed to various operating or environmental conditions, resulting in different probabilities of time-to-failure. Prediction intervals that utilize covariate information, such as temperature and humidity, would be useful for manufacturers and regulators in making informed decisions (e.g., a potential product recall). Moreover, there could be seasonality effects in time-to-failure processes, which would impact within-sample predictions. Future research would develop predictive inferential methods extending the results discussed in this paper by incorporating constant or time-varying covariates.
1,023
5,685
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.546875
3
CC-MAIN-2024-38
latest
en
0.86887
http://gbhsweb.glenbrook225.org/gbs/science/phys/mmedia/optics/ifcmb.html
1,516,410,662,000,000,000
text/html
crawl-data/CC-MAIN-2018-05/segments/1516084888341.28/warc/CC-MAIN-20180120004001-20180120024001-00042.warc.gz
143,638,136
2,055
## Image Formation for Concave Mirrors - Case B

A GIF Animation

To view an object in any type of mirror, a person must sight along a line at the image of the object. All persons capable of seeing the image must sight along a line of sight directed towards the precise image location. As a person sights in a mirror at the image of an object, there will be a reflected ray of light coming from the mirror to that person's eye. The origin of this light ray is the object. A multitude of light rays from the object are incident on the mirror in a variety of directions; yet as you sight at the image, only a small portion of the many rays will reflect off the mirror and travel to your eye. To see an object in a mirror, you must sight at the image; and when you do, reflected rays of light will travel from the mirror to your eye along your line of sight.

Not all people viewing the object in the mirror will sight along the same geometrical line of sight. The precise direction of the sight line depends on the location of the object, the location of the person, and the type of mirror. Yet all of the lines of sight, regardless of their direction, will pass through the image location. In fact, the image location is defined as the location where reflected rays intersect. Since all people see a reflected ray of light as they sight at an image in the mirror, the image location must be the intersection point of these reflected rays.

In the animation above, an object is positioned above the principal axis of a concave mirror and between the center of curvature (C) and the focal point (F). The concave mirror will produce an image of the object which is inverted (positioned below the principal axis) and located somewhere beyond the center of curvature (C) of the mirror. Any person viewing this image must sight at this image position. The animation depicts the path of light to each person's eye. Different people are sighting in different directions; yet each person is sighting at the same image location. As seen in the animation, the image location is the intersection point of all the reflected rays.

For more information on the ray nature of light, visit The Physics Classroom. Detailed information is available there on the following topics:

Why is an Image Formed?
The Anatomy of a Curved Mirror
Reflection of Light and Image Formation
Two Rules of Reflection for Concave Mirrors
Ray Diagrams - Concave Mirrors
Image Characteristics for Concave Mirrors
The Mirror Equation - Concave Mirrors

Other animations can be seen at the Multimedia Physics Studios.
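The case described above can be checked numerically with the mirror equation, 1/f = 1/d_o + 1/d_i. The focal length and object distance below are made-up values chosen so the object sits between F and C (f < d_o < 2f); this sketch is an illustration added here, not part of the original page.

f = 10.0    # focal length in cm, so C sits at 2f = 20 cm
d_o = 15.0  # object distance: between F (10 cm) and C (20 cm)

d_i = 1.0 / (1.0 / f - 1.0 / d_o)  # image distance from the mirror equation
m = -d_i / d_o                     # magnification; negative means inverted

print(f"image distance = {d_i:.1f} cm, i.e. beyond C at {2 * f:.0f} cm")
print(f"magnification  = {m:.2f} (inverted and enlarged)")
# image distance = 30.0 cm, i.e. beyond C at 20 cm
# magnification  = -2.00 (inverted and enlarged)

As expected, the image lands beyond the center of curvature and is inverted, matching the animation.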
533
2,593
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.765625
3
CC-MAIN-2018-05
latest
en
0.948108
https://www.cardschat.com/forum/live-poker-75/lets-chat-about-gto-in-live-495865/
1,675,407,649,000,000,000
text/html
crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00146.warc.gz
701,788,138
16,499
# Let's chat about GTO in live games

#### Pokerstudy
##### Legend

So, I am studying GTO for live games, and sure, the charts may work against pros (if the pros are also playing GTO?), but some of the % decisions in the charts just seem ridiculous to me, especially blind vs blind. Do these charts really work against rec players? Again, I am not looking to be a pro, just to play games to make extra income.

#### Poker_Mike
##### Legend Loyaler

What charts are you looking at?

#### Pokerstudy
##### Legend

The ones I am looking at are GTO charts from Johnathan Little, but I imagine there are others out there as well? The game is in theory solved via solvers, and there are charts that display the stats, or rather explain the percentage of times you should play in each situation per position/stack size/RFI/vs raisers or limps, etc. Obviously, nobody is a computer and can remember all of the data, but I feel that getting a better understanding of it would benefit my live game.

I just am having a hard time seeing the logic in some of the "solved" equations. This makes me think that GTO really only works against players who are themselves playing GTO… So I am trying to see how I would really benefit from studying GTO; I am puzzled, TBH. Is it helpful to learn? I just do not know. I did say in the past that I believe GTO would not work against players not playing GTO, but I want to take another look at it just to be sure lol. Thanks

#### Poker_Mike
##### Legend Loyaler

I have known a few players to keep a chart on their smartphone during live games. They can't use a smartphone while they are in a hand - but when not in a hand they are checking and confirming the chart's recommendations for opening range/calling range depending on their seat position. You do this a couple of dozen times and you will begin to memorize and predict the range per the chart. I agree that GTO is not perfect - but it is one approach to the game. Good luck!

#### Beanfacekilla
##### Legend

There has been a debate about GTO vs Exploitative for a while now. If you play live low stakes like 1/2 or even 2/5, most of that stuff will be irrelevant IMO. Think of it like this: your opponents will be level zero or level 1 thinkers, and GTO is like level 50. Play level 2.

Pretty easy to beat live low stakes if you have self discipline and know how to play. Don't tilt. That's it. EZ game.

#### Pokerstudy
##### Legend

I like it! That is exactly how I have been playing: a solid strategy, going hours into tournaments and eventually getting into the money. I like the level 2 analogy! Solid level 2 is my game plan. I may have exposed my playing style a bit (for now), but I learned a lot just now and have a focus for what to do at rec games moving forward. Thank you for the awesome reply!

#### Vilgeoforc
##### Visionary

GTO strategy will be a good way out in the game against an unknown opponent, or when you have not yet figured out how to play against a specific opponent. But after that you will need to exploit his weaknesses, otherwise you will lose money.
1,133
4,779
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
2.53125
3
CC-MAIN-2023-06
latest
en
0.964812
https://scikit-survival.readthedocs.io/en/v0.21.0/user_guide/coxnet.html
1,701,275,160,000,000,000
text/html
crawl-data/CC-MAIN-2023-50/segments/1700679100112.41/warc/CC-MAIN-20231129141108-20231129171108-00296.warc.gz
589,901,996
12,343
# Penalized Cox Models#

Cox's proportional hazards model is often an appealing model, because its coefficients can be interpreted in terms of hazard ratios, which often provides valuable insight. However, if we want to estimate the coefficients of many features, the standard Cox model falls apart, because internally it tries to invert a matrix that becomes singular due to correlations among features.

## Ridge#

This mathematical problem can be avoided by adding an $$\ell_2$$ penalty term that shrinks the coefficients toward zero. The modified objective has the form

$\arg\max_{\beta}\quad\log \mathrm{PL}(\beta) - \frac{\alpha}{2} \sum_{j=1}^p \beta_j^2 ,$

where $$\mathrm{PL}(\beta)$$ is the partial likelihood function of the Cox model, $$\beta_1,\ldots,\beta_p$$ are the coefficients for $$p$$ features, and $$\alpha \geq 0$$ is a hyper-parameter that controls the amount of shrinkage. The resulting objective is often referred to as ridge regression. If $$\alpha$$ is set to zero, we obtain the standard, unpenalized Cox model.

[1]: import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

from sksurv.datasets import load_breast_cancer
from sksurv.linear_model import CoxPHSurvivalAnalysis, CoxnetSurvivalAnalysis
from sksurv.preprocessing import OneHotEncoder

from sklearn import set_config
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

set_config(display="text")  # displays text representation of estimators

To demonstrate the use of penalized Cox models we are going to use the breast cancer data, which contains the expression levels of 76 genes, age, estrogen receptor status (er), tumor size and grade for 198 individuals. The objective is to predict the time to distant metastasis. First, we load the data and perform one-hot encoding of the categorical variables er and grade.

[2]: X, y = load_breast_cancer()
Xt = OneHotEncoder().fit_transform(X)

[2]: (output: the first five rows of Xt, 82 columns in total — 76 gene expression levels such as X200726_at, plus age, er=positive, three grade indicator columns, and size)

Let us begin by fitting a penalized Cox model for various values of $$\alpha$$ using sksurv.linear_model.CoxPHSurvivalAnalysis and recording the coefficients we obtain for each $$\alpha$$.

[3]: alphas = 10.0 ** np.linspace(-4, 4, 50)
coefficients = {}

cph = CoxPHSurvivalAnalysis()
for alpha in alphas:
    cph.set_params(alpha=alpha)
    cph.fit(Xt, y)
    key = round(alpha, 5)
    coefficients[key] = cph.coef_

coefficients = pd.DataFrame.from_dict(coefficients).rename_axis(index="feature", columns="alpha").set_index(Xt.columns)

Now, we can inspect how the coefficients change for varying $$\alpha$$.
[4]: def plot_coefficients(coefs, n_highlight):
    _, ax = plt.subplots(figsize=(9, 6))
    n_features = coefs.shape[0]
    alphas = coefs.columns
    for row in coefs.itertuples():
        ax.semilogx(alphas, row[1:], ".-", label=row.Index)

    alpha_min = alphas.min()
    top_coefs = coefs.loc[:, alpha_min].map(abs).sort_values().tail(n_highlight)
    for name in top_coefs.index:
        coef = coefs.loc[name, alpha_min]
        plt.text(alpha_min, coef, name + " ", horizontalalignment="right", verticalalignment="center")

    ax.yaxis.set_label_position("right")
    ax.yaxis.tick_right()
    ax.grid(True)
    ax.set_xlabel("alpha")
    ax.set_ylabel("coefficient")

[5]: plot_coefficients(coefficients, n_highlight=5)

We can see that if the penalty has a large weight (to the right), all coefficients are shrunk almost to zero. As the penalty's weight is decreased, the coefficients' values increase. We can also observe that the paths for X203391_at and tumor grade quickly separate themselves from the remaining coefficients, which indicates that this particular gene expression level and tumor grade are important predictive factors for time to distant metastasis.

## LASSO#

While the $$\ell_2$$ (ridge) penalty does solve the mathematical problem of fitting a Cox model, we would still need to measure the expression levels of all 76 genes to make predictions. Ideally, we would like to select a small subset of features that are most predictive and ignore the remaining gene expression levels. This is precisely what the LASSO (Least Absolute Shrinkage and Selection Operator) penalty does. Instead of merely shrinking coefficients toward zero, it performs a type of continuous subset selection, where a subset of coefficients are set to exactly zero and are effectively excluded. This reduces the number of features that we would need to record for prediction. In mathematical terms, the $$\ell_2$$ penalty is replaced by an $$\ell_1$$ penalty, which leads to the optimization problem

$\arg\max_{\beta}\quad\log \mathrm{PL}(\beta) - \alpha \sum_{j=1}^p |\beta_j| .$

The main challenge is that we cannot directly control the number of features that get selected; the value of $$\alpha$$ implicitly determines it. Thus, we need a data-driven way to select a suitable $$\alpha$$ and obtain a parsimonious model. We can do this by first computing the $$\alpha$$ that would ignore all features (all coefficients zero) and then incrementally decreasing its value, say until we reach 1% of the original value. This has been implemented in sksurv.linear_model.CoxnetSurvivalAnalysis by specifying l1_ratio=1.0 to use the LASSO penalty and alpha_min_ratio=0.01 to search over 100 $$\alpha$$ values down to 1% of the estimated maximum.

[6]: cox_lasso = CoxnetSurvivalAnalysis(l1_ratio=1.0, alpha_min_ratio=0.01)
cox_lasso.fit(Xt, y)

[6]: CoxnetSurvivalAnalysis(alpha_min_ratio=0.01, l1_ratio=1.0)

[7]: coefficients_lasso = pd.DataFrame(cox_lasso.coef_, index=Xt.columns, columns=np.round(cox_lasso.alphas_, 5))

plot_coefficients(coefficients_lasso, n_highlight=5)

The figure shows that the LASSO penalty indeed selects a small subset of features for large $$\alpha$$ (to the right), with only two features (purple and yellow lines) being non-zero. As $$\alpha$$ decreases, more and more features become active and are assigned a non-zero coefficient, until the entire set of features is used (to the far left). Similar to the plot above for the ridge penalty, the path for X203391_at stands out, indicating its importance in breast cancer.
However, the overall most important factor seems to be a positive estrogen receptor status (er).

## Elastic Net#

The LASSO is a great tool to select a subset of discriminative features, but it has two main drawbacks. First, it cannot select more features than the number of samples in the training data, which is problematic when dealing with very high-dimensional data. Second, if the data contains a group of features that are highly correlated, the LASSO penalty is going to randomly choose one feature from this group. The Elastic Net penalty overcomes these problems by using a weighted combination of the $$\ell_1$$ and $$\ell_2$$ penalties, solving

$\arg\max_{\beta}\quad\log \mathrm{PL}(\beta) - \alpha \left( r \sum_{j=1}^p |\beta_j| + \frac{1 - r}{2} \sum_{j=1}^p \beta_j^2 \right) ,$

where $$r \in [0; 1[$$ is the relative weight of the $$\ell_1$$ and $$\ell_2$$ penalties. The Elastic Net penalty combines the subset selection property of the LASSO with the regularization strength of the ridge penalty. This leads to better stability compared to the LASSO penalized model: for a group of highly correlated features, the latter would choose one feature randomly, whereas the Elastic Net penalized model would tend to select all of them. Usually, it is sufficient to give the $$\ell_2$$ penalty only a small weight to improve stability of the LASSO, e.g. by setting $$r = 0.9$$. As for the LASSO, the weight $$\alpha$$ implicitly determines the size of the selected subset and usually has to be estimated in a data-driven manner.

[8]: cox_elastic_net = CoxnetSurvivalAnalysis(l1_ratio=0.9, alpha_min_ratio=0.01)
cox_elastic_net.fit(Xt, y)

[8]: CoxnetSurvivalAnalysis(alpha_min_ratio=0.01, l1_ratio=0.9)

[9]: coefficients_elastic_net = pd.DataFrame(
    cox_elastic_net.coef_, index=Xt.columns, columns=np.round(cox_elastic_net.alphas_, 5)
)

plot_coefficients(coefficients_elastic_net, n_highlight=5)

## Choosing penalty strength $$\alpha$$#

Previously, we focused on the estimated coefficients to get some insight into which features are important for estimating time to distant metastasis. However, for prediction, we need to pick one particular $$\alpha$$, and the subset of features it implies. Here, we are going to use cross-validation to determine which subset and $$\alpha$$ generalize best.

Before we can use GridSearchCV, we need to determine the set of $$\alpha$$ values we want to evaluate. To do this, we fit a penalized Cox model to the whole data and retrieve the estimated set of alphas. Since we are only interested in the alphas and not the coefficients, we can use only a few iterations for improved speed. Note that we are using StandardScaler to account for scale differences among features and to allow direct comparison of coefficients.

[10]: import warnings
from sklearn.exceptions import FitFailedWarning
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

coxnet_pipe = make_pipeline(StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=0.9, alpha_min_ratio=0.01, max_iter=100))
warnings.simplefilter("ignore", UserWarning)
warnings.simplefilter("ignore", FitFailedWarning)
coxnet_pipe.fit(Xt, y)

[10]: Pipeline(steps=[('standardscaler', StandardScaler()),
                ('coxnetsurvivalanalysis',
                 CoxnetSurvivalAnalysis(alpha_min_ratio=0.01, l1_ratio=0.9,
                                        max_iter=100))])

Using the estimated set of alphas, we perform 5-fold cross-validation to estimate the performance – in terms of concordance index – for each $$\alpha$$. Note: this can take a while.
[11]: estimated_alphas = coxnet_pipe.named_steps["coxnetsurvivalanalysis"].alphas_
cv = KFold(n_splits=5, shuffle=True, random_state=0)
gcv = GridSearchCV(
    make_pipeline(StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=0.9)),
    param_grid={"coxnetsurvivalanalysis__alphas": [[v] for v in estimated_alphas]},
    cv=cv,
    error_score=0.5,
    n_jobs=1,
).fit(Xt, y)

cv_results = pd.DataFrame(gcv.cv_results_)

We can visualize the results by plotting the mean concordance index and its standard deviation across all folds for each $$\alpha$$.

[12]: alphas = cv_results.param_coxnetsurvivalanalysis__alphas.map(lambda x: x[0])
mean = cv_results.mean_test_score
std = cv_results.std_test_score

fig, ax = plt.subplots(figsize=(9, 6))
ax.plot(alphas, mean)
ax.fill_between(alphas, mean - std, mean + std, alpha=0.15)
ax.set_xscale("log")
ax.set_ylabel("concordance index")
ax.set_xlabel("alpha")
ax.axvline(gcv.best_params_["coxnetsurvivalanalysis__alphas"][0], c="C1")
ax.axhline(0.5, color="grey", linestyle="--")
ax.grid(True)

The figure shows that there is a range to the right where $$\alpha$$ is too large and sets all coefficients to zero, as indicated by the 0.5 concordance index of a purely random model. On the other extreme, if $$\alpha$$ becomes too small, too many features enter the model and the performance approaches that of a random model again. The sweet spot (orange line) is somewhere in the middle. Let's inspect that model.

[13]: best_model = gcv.best_estimator_.named_steps["coxnetsurvivalanalysis"]
best_coefs = pd.DataFrame(best_model.coef_, index=Xt.columns, columns=["coefficient"])

non_zero = np.sum(best_coefs.iloc[:, 0] != 0)
print(f"Number of non-zero coefficients: {non_zero}")

non_zero_coefs = best_coefs.query("coefficient != 0")
coef_order = non_zero_coefs.abs().sort_values("coefficient").index

_, ax = plt.subplots(figsize=(6, 8))
non_zero_coefs.loc[coef_order].plot.barh(ax=ax, legend=False)
ax.set_xlabel("coefficient")
ax.grid(True)

Number of non-zero coefficients: 22

The model selected a total of 22 features, and it deemed X204540_at to be the most important one, followed by X203391_at and positive estrogen receptor status.

## Survival and Cumulative Hazard Function#

Having selected a particular $$\alpha$$, we can perform prediction, either in terms of risk score using the predict function or in terms of the survival or cumulative hazard function. For the latter two, we first need to re-fit the model with fit_baseline_model enabled.

[14]: coxnet_pred = make_pipeline(StandardScaler(), CoxnetSurvivalAnalysis(l1_ratio=0.9, fit_baseline_model=True))
coxnet_pred.set_params(**gcv.best_params_)
coxnet_pred.fit(Xt, y)

[14]: Pipeline(steps=[('standardscaler', StandardScaler()),
                ('coxnetsurvivalanalysis',
                 CoxnetSurvivalAnalysis(alphas=[0.03860196504106796],
                                        fit_baseline_model=True,
                                        l1_ratio=0.9))])

For instance, we can now select a patient and determine how positive or negative estrogen receptor status would affect the survival function.
[15]: surv_fns = coxnet_pred.predict_survival_function(Xt)

time_points = np.quantile(y["t.tdm"], np.linspace(0, 0.6, 100))
legend_handles = []
legend_labels = []
_, ax = plt.subplots(figsize=(9, 6))
for fn, label in zip(surv_fns, Xt.loc[:, "er=positive"].astype(int)):
    (line,) = ax.step(time_points, fn(time_points), where="post", color=f"C{label}", alpha=0.5)
    if len(legend_handles) <= label:
        name = "positive" if label == 1 else "negative"
        legend_labels.append(name)
        legend_handles.append(line)

ax.legend(legend_handles, legend_labels)
ax.set_xlabel("time")
ax.set_ylabel("Survival probability")
ax.grid(True)

We can observe that patients with positive estrogen receptor status tend to have a better prognosis.
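As a brief usage follow-up: risk scores come straight from predict on the fitted pipeline, and cumulative hazard curves are available analogously to the survival curves because fit_baseline_model=True was set. This sketch is not part of the original notebook and assumes a scikit-survival version whose pipelines forward predict_cumulative_hazard_function to the final estimator, as they do for predict_survival_function above.

risk_scores = coxnet_pred.predict(Xt)  # higher score means higher predicted risk
print(risk_scores[:5])

chf_fns = coxnet_pred.predict_cumulative_hazard_function(Xt.iloc[:2])
for fn in chf_fns:
    print(fn(time_points[:5]))  # cumulative hazard evaluated at a few time points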
3,802
14,172
{"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.015625
3
CC-MAIN-2023-50
latest
en
0.795989
https://academicassignmentexperts.com/computer-science-homework-help-647/
1,695,519,520,000,000,000
text/html
crawl-data/CC-MAIN-2023-40/segments/1695233506539.13/warc/CC-MAIN-20230923231031-20230924021031-00553.warc.gz
91,018,973
25,850
# Computer Science homework help

SIMnet – Excel 365/2019 Capstone – Level 3: Working with Sales Data Alternate with VLOOKUP

In this project, you will work with sales data from Top't Corn, a popcorn company with an online store, multiple food trucks, and two retail stores. You will begin by copying the sales data for one of the retail stores from another workbook. Next, you will insert a new worksheet and enter sales data for the four food truck locations, formatting the data and calculating totals. You will create a pie chart to represent the total units sold by location and a column chart to represent sales by popcorn type. You will format the charts and then set up the worksheet for printing. Next, you will help Top't Corn calculate payments for a loan and decide whether or not the purchase is a good idea. Working with the daily sales data for one of the brick-and-mortar stores, you will apply conditional formatting to find the top 10 sales dates. You will also calculate the sales for each date, and the average, minimum, and maximum sales. You will use Goal Seek to find the appropriate price to reach a higher daily average sales goal. You will use VLOOKUP to look up sales data for a specific date. Finally, you will work with their online sales data to format it as an Excel table and apply sorting and filtering. You will create a PivotTable and a PivotChart from a copy of the online sales data to summarize the sales.

Skills needed to complete this project:

• Open a workbook
• Copy a worksheet to another workbook
• Close a workbook
• Insert a worksheet
• Name a worksheet
• Move a worksheet
• Enter text
• Enter numbers
• Edit text
• Autofit a column
• Apply a cell style
• Change font color
• Merge and center text across cells
• Apply bold formatting
• Apply number formatting
• Enter a SUM function
• Copy formula using AutoFill
• Insert a pie chart
• Apply a chart Quick Layout
• Move a chart
• Insert a column chart
• Switch the row/column in a column chart
• Change the chart title
• Apply a chart Quick Style
• Show chart data labels
• Preview how a worksheet will look when printed
• Change worksheet orientation
• Change the print margins
• Scale a worksheet for printing
• Change the color of a worksheet tab
• Apply a column width
• Calculate a loan payment with PMT
• Enter a simple formula using multiplication
• Enter a simple formula using subtraction
• Create a formula referencing cells in another worksheet
• Enter an AVERAGE function
• Use the IF function
• Hide a worksheet
• Apply date formats
• Apply Top Ten conditional formatting
• Use an absolute reference in a formula
• Name a range of cells
• Use a named range in a formula
• Use the MIN function in a formula
• Use the MAX function in a formula
• Wrap text
• Analyze data with Goal Seek
• Use the VLOOKUP function in a formula
• Convert data into a table
• Apply a table Quick Style
• Use the table Total row
• Sort data in a table
• Filter data in a table
• Create a PivotTable
• Create a PivotChart
• Unhide a worksheet

1. Open the start file EX2019-Capstone-Level3. Note: If the workbook opens in Protected View, click the Enable Editing button in the Message Bar at the top of the worksheet so you can modify it.
2. The file will be renamed automatically to include your name. Change the project file name if directed to do so by your instructor, and save it.
3. Copy the OldTownStore worksheet from the OldTownSales workbook (downloaded from the Resources link) to the capstone project.
   1. Open the Excel file OldTownSales.
   2. Copy the worksheet OldTownStore. In the Move or Copy dialog, be sure to check the Create a copy check box and select your capstone project Excel file from the Move selected sheets to book drop-down list. Make the correct selection to ensure the copied worksheet will appear at the end, after the TysonsStore2018 worksheet in your capstone workbook.
   3. Close the OldTownSales workbook when you have successfully copied the OldTownStore worksheet to the capstone workbook.
4. Insert a new worksheet and rename it: MobileSales
5. If necessary, move the MobileSales worksheet so it appears first in the workbook.
6. Enter the text and sales data as shown in the table below. Check your work carefully.

|   | A | B | C | D | E |
|---|---|---|---|---|---|
| 1 | Top't Corn Mobile Sales (July) | | | | |
| 2 | | Truck Location | | | |
| 3 | | Farragut Square | GW | Georgetown | K Street |
| 4 | Old Bay | 2500 | 800 | 600 | 900 |
| 5 | Truffle | 3200 | 600 | 1200 | 1500 |
| 6 | Sea Salt and Caramel | 4200 | 1500 | 1400 | 1200 |

1. Format the data as follows:
   1. Apply the Title cell style to cell A1.
   2. Apply the Purple fill color to cell A1. Use the first color at the right in the row of Standard colors.
   3. Apply the White, Background 1 font color to cell A1. Use the first color at the left in the first row of Theme colors.
   4. Merge and center the worksheet title across cells A1:E1.
   5. Apply the Heading 2 cell style to cell B2.
   6. Merge and center cells B2:E2.
   7. Bold cells B3:E3.
   8. Apply the Accounting Number Format with 0 digits after the decimal to cells B4:E6.
   9. AutoFit columns A:E.
2. Calculate total sales for each of the truck locations.
   1. Enter the word Total in cell A7.
   2. Enter a SUM function in cell B7 to calculate the total of cells B4:B6.
   3. Use AutoFill to copy the formula to cells C7:E7.
   4. Apply the Total cell style to cells A7:E7.
3. Insert a pie chart (2-D Pie) to show the Old Bay sales for the month by location. Each piece of the pie should represent the Old Bay sales for a single location. Note: You must complete this step correctly in order to receive points for completing the next step. Check your work carefully.
4. Modify the pie chart as follows:
   1. Apply the Layout 6 Quick Layout.
   2. Move the chart so it appears below the sales data.
5. Insert a clustered column chart (2-D Column) to show the sales for each type of popcorn for each location. Do not include the totals. Note: You must complete this step correctly in order to receive points for completing the next step. Check your work carefully.
6. Modify the column chart as follows:
   1. If necessary, modify the chart so each location is represented by a data series and the popcorn types are listed along the x axis.
   2. Change the chart title to: July Sales by Popcorn Type
   3. Apply the Style 5 chart Quick Style.
   4. Display the chart data labels using the Outside End option.
   5. If necessary, move the chart so it is next to the pie chart and the tops of the charts are aligned.
7. Preview how the worksheet will look when printed, and then apply print settings to print the worksheet on a single page. Hint: If you have one of the charts selected, deselect it before previewing the worksheet. Preview the worksheet again when you are finished to check your work.
   1. Change the orientation so the page is wider than it is tall.
   2. Change the margins to the preset narrow option.
   3. Change the printing scale so all columns will print on a single page.
8. Top't Corn is considering a new truck purchase. Calculate the monthly loan payments and total cost of the loan.
   1. Insert a new worksheet between the MobileSales sheet and the OnlineSales sheet.
   2. Name the new worksheet: TruckLoan
   3. Change the color of the worksheet tab to Orange. Use the third color from the left in the row of Standard colors.
   4. Enter the loan terms as shown below.

|   | A | B |
|---|---|---|
| 1 | Price | 55000 |
| 2 | Interest (annual) | 3% |
| 3 | Loan term (in months) | 24 |
| 4 | Monthly payment | |

1. AutoFit column A.
2. Set the width of column B to 16.
3. Apply the Currency number format to cell B1. Display two digits after the decimal.
4. Enter a formula using the PMT function in cell B4. Be sure to use a negative value for the Pv argument.
5. In cell A6, type: Total payments
6. In cell B6, enter a formula to calculate the total paid over the life of the loan (the monthly payment amount * the number of payments). Use cell references.
7. In cell A7, type: Interest paid
8. In cell B7, enter a formula to calculate the total interest paid over the life of the loan (the total payments – the original price of the truck). Use cell references.
9. Apply borders using the Thick Outside Borders option around cells A6:B7.
10. In cell A9, type: Average sales
11. In cell B9, enter a formula to calculate the average sales per month for the truck locations. Hint: Use cells B7:E7 from the MobileSales worksheet as the function argument.
12. Apply the Currency number format to cell B9. Display two digits after the decimal.
13. In cell A10, type: Buy new truck?
14. In cell B10, enter a formula using the IF function to display Yes if the monthly payment for the truck loan is less than the average sales per month for the current trucks. Display No if it is not.
15. This workbook includes two worksheets for data from the Tysons store. You should only be working with the latest data from 2019.
    1. Hide the TysonsStore2018 worksheet.
16. Complete the following steps in the TysonsStore2019 worksheet:
    1. Select cells A2:A32, and apply the Short Date date format.
    2. Find the top ten sales items for the month. Select cells B2:D32 and use conditional formatting to apply a green fill with dark green text to the top 10 values.
    3. In cell F2, enter a formula to calculate the daily total in dollars. Multiply the value in the Daily Total (# Sold) column by the current price per box in cell K1. Use an absolute reference where appropriate and copy the formula to cells F3:F32.
    4. In cell G2, enter a formula using the IF function to determine whether the daily sales goal in cell K2 was met. Display yes if the value in the Daily Total ($) column is greater than or equal to the daily sales goal. Display no if it is not. Use an absolute reference where appropriate and copy the formula to cells G3:G32.
    5. Create a named range DailyTotals for cells F2:F32.
    6. In cell K3, enter a formula using the named range DailyTotals to calculate the average daily sales in dollars.
    7. In cell K4, enter a formula using the named range DailyTotals to find the lowest daily sales in dollars.
    8. In cell K5, enter a formula using the named range DailyTotals to find the highest daily sales in dollars.
    9. Wrap the text in cell J7.
    10. Use Goal Seek to find the new price per box (cell K8) to reach a new daily average sales goal of $3,000 in cell K7. Accept the solution found by Goal Seek.
    11. Modify cell K8 to show two places after the decimal.
    12. Create a named range SalesData for cells A2:G32.
    13. In cell K10, enter 8/19/2019 as the lookup date.
    14. In cell K11, enter a formula using VLOOKUP to display whether or not the sales goal was met for the date listed in cell K10. Use the named range SalesData for the Table_array argument. The formula should return the value in the Sales Goal Met? column (column 7 in the data array) only when there is an exact match.
17. Make a copy of the OnlineSales worksheet and name it PivotData. The PivotData worksheet should be the last worksheet in the workbook.
18. Go to the OnlineSales worksheet and format the sales data as a table using the table style Aqua, Table Style Light 9.
19. Display the table Total row.
    1. Display the total for the Quantity column.
    2. Remove the count from the State column.
20. Sort the data alphabetically by values in the Item column.
21. Filter the table to show only rows where the value in the State column is MD.
22. Create a PivotTable using the data in cells A3:D120 from the data in the PivotData worksheet. The PivotTable should appear on its own worksheet. Use values from the Item column as the rows and the sum of values in the Quantity column as the values.
23. Name the PivotTable worksheet: PivotTable. It should be located to the left of the PivotData worksheet.
24. Insert a PivotChart on the PivotTable worksheet. Use a pie chart to represent the total quantity for each item. If necessary, move the PivotChart to the right of the PivotTable so it does not cover the data.
25. This workbook includes a hidden worksheet with online sales data from the 2018 buy one get one free sale.
    1. Unhide the BOGOSale2018 worksheet.
26. Save and close the workbook.
27. Upload and save your project file.
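For step 8, the TruckLoan arithmetic can be sanity-checked outside Excel. The sketch below reproduces the annuity formula behind Excel's PMT function (assuming monthly compounding, i.e. =PMT(B2/12, B3, -B1)) and takes the average-sales figure from the MobileSales totals entered earlier; it is an illustration added here, not part of the assignment.

price = 55_000.0    # cell B1
annual_rate = 0.03  # cell B2
n_months = 24       # cell B3

r = annual_rate / 12
payment = price * r / (1 - (1 + r) ** -n_months)  # equivalent to =PMT(B2/12, B3, -B1)
total_paid = payment * n_months                   # cell B6
interest_paid = total_paid - price                # cell B7

# =AVERAGE(MobileSales!B7:E7): the column totals are 9900, 2900, 3200, 3600
avg_monthly_sales = (9900 + 2900 + 3200 + 3600) / 4
buy_new_truck = "Yes" if payment < avg_monthly_sales else "No"  # cell B10

print(f"Monthly payment: {payment:,.2f}")  # roughly 2,363.97
print(f"Total payments:  {total_paid:,.2f}")
print(f"Interest paid:   {interest_paid:,.2f}")
print(f"Average sales:   {avg_monthly_sales:,.2f} -> Buy new truck? {buy_new_truck}")

Since the monthly payment (about $2,364) is less than the average monthly truck sales ($4,900), the IF formula in cell B10 should display Yes for this data.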
6,097
24,056
{"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0}
3.453125
3
CC-MAIN-2023-40
longest
en
0.778834
http://mathhelpforum.com/algebra/65691-trigonometric-inequality.html
1,524,596,082,000,000,000
text/html
crawl-data/CC-MAIN-2018-17/segments/1524125947033.92/warc/CC-MAIN-20180424174351-20180424194351-00119.warc.gz
192,520,239
14,891
# Thread: Trigonometric inequality

1. ## Trigonometric inequality

This one is fun. Find what values of $\displaystyle x$ make $\displaystyle \cos(\sin(x))\geqslant\sin(\cos(x))$ true. Next find all values that make $\displaystyle \sin(\cos(x))\geqslant\cos(\sin(x))$ true. Note: No calculus.

2. Originally Posted by Mathstud28 [the problem above]

it's a pretty problem! it's always true that $\displaystyle \sin(\cos x) < \cos (\sin x).$ first note that $\displaystyle -\sqrt{2} \leq \cos x \pm \sin x \leq \sqrt{2},$ because $\displaystyle (\cos x \pm \sin x)^2 = 1 \pm \sin(2x) \leq 2.$ but $\displaystyle \sqrt{2} < \frac{\pi}{2}.$ thus:

$\displaystyle (1)\ \ \ \frac{-\pi}{2} < \frac{\cos x + \sin x - \frac{\pi}{2}}{2} < 0,$

$\displaystyle (2) \ \ \ 0 < \frac{\cos x - \sin x + \frac{\pi}{2}}{2} < \frac{\pi}{2}.$

now we have:

$\displaystyle \sin(\cos x) \ - \ \cos(\sin x)= 2 \sin \left(\frac{\cos x + \sin x - \frac{\pi}{2}}{2} \right) \cos \left(\frac{\cos x - \sin x + \frac{\pi}{2}}{2} \right) < 0,$

by $\displaystyle (1)$ and $\displaystyle (2). \ \ \Box$

3. Originally Posted by NonCommAlg [the solution above]

Aha! Similar to my solution... I admit though it was easier for me because I was given that $\displaystyle \sin(\cos(x))<\cos(\sin(x))$ and was just asked to prove it.

4. try your inequality's cousin: prove that: $\displaystyle (\sin x)^{\cos x} < (\cos x)^{\sin x},$ for all $\displaystyle 0 \leq x < \frac{\pi}{4}.$ i'd like to see different approaches. of course, using calculus probably would be the most convenient one and you may use it. but are there other ways to do it?

5. Originally Posted by NonCommAlg [the cousin problem above]

Besides the obvious way of doing it with maxes and stuff, how about this method? It uses some shady postulates (that may be untrue), but this is for fun so why not just suggest it? Suppose the inequality presented is true; then so must be the inequality $\displaystyle \cos(x)\ln(\sin(x))<\sin(x)\ln(\cos(x))$, or alternatively $\displaystyle -\sin(x)\ln(\cos(x))<-\cos(x)\ln(\sin(x))$. It can be verified that both these functions are monotonically increasing, positive, and continuous on the specified interval. Now here is where the geometrically logical but probably incorrect "lemma" I am using comes into play.
It goes something to the tune that if $\displaystyle f,g>0$ and $\displaystyle f,g$ are monotonic as well as continuous on the interval $\displaystyle [a,b]$, then $\displaystyle f(x)<g(x)~~~a\leqslant x \leqslant b\quad\Longleftrightarrow\quad \int_a^b f(x)dx<\int_a^b g(x)dx$. Now suppose that this is true; it can be easily verified that

$\displaystyle -\int_0^{\frac{\pi}{4}}\cos(x)\ln(\sin(x))\,dx=\frac{\sqrt{2}\ln(2)}{4}+\frac{\sqrt{2}}{2} \;>\; -\int_0^{\frac{\pi}{4}}\sin(x)\ln(\cos(x))\,dx=1-\frac{\sqrt{2}\ln(2)}{4}-\frac{\sqrt{2}}{2}.$

Now supposing that my "made up" lemma is correct, this proves the inequality. I have learned to not always trust my geometric intuition... so I'm not too sure about this.

6. Originally Posted by Mathstud28 [the lemma argument above]

unfortunately your made up lemma is not always true, e.g. $\displaystyle f(x)=\frac{x+1}{4}, \ g(x)=x.$ then $\displaystyle \int_0^1 f(x) \ dx < \int_0^1 g(x) \ dx,$ but neither $\displaystyle f < g$ nor $\displaystyle f > g$ on the unit interval. the trouble here comes from this fact that $\displaystyle f$ and $\displaystyle g$ intersect! so you need to assume that $\displaystyle f,g$ do not intersect in the interval. also i don't think we need $\displaystyle f,g$ to be positive.

7. Originally Posted by NonCommAlg [the counter-example above]

Dangit! I forgot to say that. I knew that they cannot intersect... and no, we do not need them positive; we need $\displaystyle fg>0$. I have come up with a "proof" of my lemma if anyone wants to see it.

EDIT: Wait, they cannot intersect except possibly at the endpoints of the interval, because then $\displaystyle f<g$ is reversed after that point. For example $\displaystyle x<\frac{x+1}{4}~~0<x<\frac{1}{3}$ but it is reversed afterwards... but you applied my lemma to the interval $\displaystyle [0,1]$, and $\displaystyle x\not<\frac{x+1}{4}~~\forall x \in[0,1]$

EDIT EDIT: We don't even need $\displaystyle fg>0$; we must just have $\displaystyle |f|<|g|$

EDIT EDIT EDIT: I am busy now but I will come back later and write out this lemma in a clear manner... I will then attempt to prove it

8. Originally Posted by Mathstud28 [the EDITs above]

my example was a counter-example to this side of your inequality: $\displaystyle \int_a^b f < \int_a^b g \Longrightarrow f < g.$ in my example $\displaystyle \int_0^1 f < \int_0^1 g$ but $\displaystyle f \not< g$ in [0,1]. you still have a lot of things to do: first you need to show that $\displaystyle \cos(x) \ln(\sin(x)) \neq \sin(x) \ln(\cos(x))$ on the interval and then monotonicity of the functions!

9. Originally Posted by Mathstud28 [the original problem] Originally Posted by NonCommAlg [the cousin problem]

I have the following solution using calculus. Let $\displaystyle f(t):=\cos(\sin(t))-\sin(\cos(t))$ for $\displaystyle t\in\mathbb{R}$. Then, we see that $\displaystyle f$ is $\displaystyle 2\pi$ periodic, hence it suffices to prove $\displaystyle f\geq0$ on $\displaystyle [0,2\pi]$. On the other hand, we have $\displaystyle f(t)=f(2\pi-t)$ for all $\displaystyle t\in[0,2\pi]$, which indicates that it suffices to prove $\displaystyle f(t)\geq0$ for all $\displaystyle t\in[0,\pi]$. Clearly, $\displaystyle \cos$ is positive (and decreasing) on $\displaystyle [0,\pi/2]$ and $\displaystyle 0\leq\sin(t)\leq1<\pi/2$ for all $\displaystyle t\in[\pi/2,\pi]$; also, $\displaystyle \sin$ is negative (and increasing) on $\displaystyle [-\pi/2,0]$ and $\displaystyle -\pi/2<-1\leq\cos(t)\leq0$ for all $\displaystyle t\in[\pi/2,\pi]$. Therefore, $\displaystyle f>0$ on $\displaystyle [\pi/2,\pi]$. To complete the proof we have to prove $\displaystyle f\geq0$ on $\displaystyle [0,\pi/2]$. Similar reasoning about the increasing and decreasing natures of the functions $\displaystyle \cos$ and $\displaystyle \sin$, together with the fact $\displaystyle \sin(t)\leq t$ for all $\displaystyle t\in[0,\infty)$, gives $\displaystyle f(t)\geq\cos(t)-\sin(\cos(t))\geq\cos(t)-\cos(t)=0$ for all $\displaystyle t\in[0,\pi/2]$, and the proof is hence completed.
10. So the statement is this:

Suppose that $\displaystyle f,g$ possess the following characteristics on $\displaystyle [a,b]$: they are positive, they are continuous, they do not intersect, and they are monotonic. Then on $\displaystyle [a,b]$ it is true that $\displaystyle f<g~\Longleftrightarrow~\int_a^b f(x)\,dx<\int_a^b g(x)\,dx$.

First let us prove that $\displaystyle f<g~\implies~\int_a^b f(x)\,dx<\int_a^b g(x)\,dx$. Consider any partition $\displaystyle P_n$ of $\displaystyle [a,b]$ consisting of the set of points $\displaystyle \left\{x_i\right\},~0\leqslant i\leqslant n$. Now, in the usual way, define $\displaystyle \Delta x_i=x_i-x_{i-1}$, $\displaystyle (Mf)_i=\sup_{x_{i-1}<x<x_i}f(x)$, and $\displaystyle U\left(P_n,f\right)=\sum_{i=1}^{n}(Mf)_i\cdot\Delta x_i$. Now it is clear that since $\displaystyle f(x)<g(x)$ we have $\displaystyle \sup_{x_{i-1}<x<x_i}f(x)<\sup_{x_{i-1}<x<x_i}g(x)$, and since $\displaystyle \Delta x_i>0$ this implies that $\displaystyle \sup_{x_{i-1}<x<x_i}f(x)\cdot\Delta x_i<\sup_{x_{i-1}<x<x_i}g(x)\cdot\Delta x_i$. Finally we can conclude that $\displaystyle U\left(P_n,f\right)=\sum_{i=1}^{n}(Mf)_i\cdot\Delta x_i<\sum_{i=1}^n (Mg)_i\cdot\Delta x_i=U\left(P_n,g\right)$. And since $\displaystyle f,g$ are continuous, thus Riemann integrable, $\displaystyle \int_a^b f(x)\,dx=\inf_{n\in\mathbb{N}}\left\{U\left(P_n,f\right)\right\}<\inf_{n\in\mathbb{N}}\left\{U\left(P_n,g\right)\right\}=\int_a^b g(x)\,dx$.

Now let us prove that $\displaystyle \int_a^b f(x)\,dx<\int_a^b g(x)\,dx~\implies~f(x)<g(x)$. Define $\displaystyle P_n,\Delta x_i,(Mf)_i,U\left(P_n,f\right)$ as before.

1. It is clear that either $\displaystyle (Mf)_i<(Mg)_i$ for all $\displaystyle i$ or $\displaystyle (Mf)_i>(Mg)_i$ for all $\displaystyle i$. To see this, first define $\displaystyle h(x)=f(x)-g(x)$; it is clear that $\displaystyle h$ is continuous. Then suppose that there were two values $\displaystyle 1\leqslant j,k \leqslant n$ such that $\displaystyle (Mf)_j<(Mg)_j$ and $\displaystyle (Mf)_k>(Mg)_k$. Then there exists a $\displaystyle \xi\in[x_{j-1},x_j]$ such that $\displaystyle f(\xi)<g(\xi)\implies h(\xi)<0$, and there exists a $\displaystyle \xi_1\in[x_{k-1},x_k]$ such that $\displaystyle f(\xi_1)>g(\xi_1)\implies h(\xi_1)>0$. Now, because $\displaystyle h$ is continuous and $\displaystyle [a,b]$ is connected, this implies there exists an $\displaystyle x\in[a,b]$ such that $\displaystyle h(x)=0\implies f(x)=g(x)$, which contradicts that the functions do not intersect.

2. So from the fact that $\displaystyle \inf_{n\in\mathbb{N}}\left\{U\left(P_n,f\right)\right\}<\inf_{n\in\mathbb{N}}\left\{U\left(P_n,g\right)\right\}$ we can see that $\displaystyle (Mf)_i<(Mg)_i$ for every $\displaystyle i$.

3. So all that is left to do is prove that $\displaystyle (Mf)_i<(Mg)_i\implies f(x)<g(x)~\forall x\in[x_{i-1},x_i]$. To do this, once again define $\displaystyle h(x)=f(x)-g(x)$. Let $\displaystyle \xi_f$ be the point such that $\displaystyle f(\xi_f)=\sup_{x_{i-1}\leqslant x \leqslant x_i}f(x)$, and let $\displaystyle \xi_g$ be defined similarly. Now, since $\displaystyle [x_{i-1},x_i]$ is compact, it follows that $\displaystyle \xi_f,\xi_g\in[x_{i-1},x_i]$.

Now consider when $\displaystyle f,g$ are monotonically increasing; it is clear then that $\displaystyle \xi_f=\xi_g=x_i$. So $\displaystyle h(x_i)=f(x_i)-g(x_i)=\sup_{x_{i-1}\leqslant x \leqslant x_i}f(x)-\sup_{x_{i-1}\leqslant x \leqslant x_i}g(x)=(Mf)_i-(Mg)_i<0$. So now suppose there was a point $\displaystyle y\in[x_{i-1},x_i]$ such that $\displaystyle f(y)>g(y)$; then at that point $\displaystyle h(y)>0$, and by the connectedness of $\displaystyle [x_{i-1},x_i]$ and the continuity of $\displaystyle h(x)$ there must be a point in $\displaystyle [x_{i-1},x_i]$ at which $\displaystyle h=0\implies f=g$, but this contradicts the two functions not intersecting. The proof is done similarly for $\displaystyle f,g$ monotonically decreasing.

4. Now, since the interval $\displaystyle [x_{i-1},x_i]$ was arbitrary in 3., this completes the proof. $\displaystyle \blacksquare$
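One detail worth spelling out in the step "$\displaystyle f<g$ implies $\displaystyle (Mf)_i<(Mg)_i$" (an editorial remark, not from the thread): pointwise strict inequality alone only yields $\displaystyle (Mf)_i\leqslant(Mg)_i$ in general. For instance, $\displaystyle f(x)=x<g(x)=\frac{x+1}{2}$ holds for all $\displaystyle x\in(0,1)$, yet $\displaystyle \sup_{0<x<1}f(x)=\sup_{0<x<1}g(x)=1$. What makes the inequality strict above is that $\displaystyle f,g$ are continuous and satisfy $\displaystyle f<g$ on the closed subinterval $\displaystyle [x_{i-1},x_i]$, so the supremum of $\displaystyle f$ is attained at some $\displaystyle \xi$ there and $\displaystyle (Mf)_i=f(\xi)<g(\xi)\leqslant(Mg)_i$.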
11. Originally Posted by Mathstud28
So the statement is this: Suppose that $\displaystyle f,g$ possess the following characteristics on $\displaystyle [a,b]$: they are positive, they are continuous, they do not intersect, and they are monotonic. Then on $\displaystyle [a,b]$ it is true that $\displaystyle f<g~\Longleftrightarrow~\int_a^b f(x)\,dx<\int_a^b g(x)\,dx$. [...]

ok, i didn't read your proof but i'm sure it's a good practice for you since you're studying Rudin! first of all, you don't need to assume $\displaystyle f,g$ are positive or monotonic. "continuous" and "not intersecting" are the only conditions we need: let $\displaystyle h=g-f.$ suppose first that $\displaystyle h > 0$ on the interval. it's not hard to see that the integral of a positive continuous function is positive.* thus $\displaystyle \int_a^b h > 0.$ conversely, suppose $\displaystyle \int_a^b h > 0.$ since $\displaystyle f, g$ do not intersect, we have $\displaystyle h \neq 0$ everywhere on $\displaystyle [a,b]$. so by the intermediate value theorem, either $\displaystyle h > 0$ or $\displaystyle h < 0$ everywhere on $\displaystyle [a,b]$. but if $\displaystyle h < 0,$ then $\displaystyle -h > 0,$ and hence by * we'll have $\displaystyle \int_a^b (-h)> 0,$ and hence $\displaystyle \int_a^b h < 0,$ which is a contradiction. Q.E.D.

* in general, if $\displaystyle h$ is continuous, non-negative and not identically 0 on $\displaystyle [a,b]$, then $\displaystyle \int_a^b h > 0.$ Hint: since $\displaystyle h$ is not identically 0 over $\displaystyle [a,b]$, there exists a subinterval of $\displaystyle [a,b]$ over which $\displaystyle h > 0.$
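Filling in that hint (a sketch; the thread leaves it as an exercise): if $\displaystyle h$ is continuous, non-negative, and $\displaystyle h(x_0)>0$ for some $\displaystyle x_0\in[a,b]$, then by continuity there is a $\displaystyle \delta>0$ such that $\displaystyle h(x)>\frac{h(x_0)}{2}$ for all $\displaystyle x$ in the subinterval $\displaystyle [c,d]=[x_0-\delta,x_0+\delta]\cap[a,b]$, which has positive length. Since $\displaystyle h\geq0$ on the rest of $\displaystyle [a,b]$, it follows that

$\displaystyle \int_a^b h\geq\int_c^d h\geq\frac{h(x_0)}{2}\,(d-c)>0.$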
12. Originally Posted by NonCommAlg
ok, i didn't read your proof but i'm sure it's a good practice for you since you're studying Rudin!

Yeah, I am not trying to be easy, I am trying to be as rigorous as possible... now this may not always be the best way... but it helps me learn all the material, since I end up using three fifths of it in one proof. And I understand your proof, but the reason it is so short is that a lot of the stuff you just stated, I proved... now of course for a mathematician such as yourself this is obvious... but I thought for us other folks it would be best to show it. Thanks for your time, NonCommAlg.