content
stringlengths
86
994k
meta
stringlengths
288
619
The three-dimensional Variational Moments Equilibrium Code (VMEC) minimizes the energy functional ${\displaystyle W=\int _{\Omega _{p}}{\left({\frac {1}{2\mu _{0}}}B^{2}+p\right)dV}}$ over the toroidal domain Ω[p]. The solution is obtained in flux coordinates (s, θ, ζ), related to the cylindrical coordinates (R, φ, Z) by ${\displaystyle R=\sum {R_{mn}(s)\cos(m\theta -n\zeta )}}$ ${\displaystyle Z=\sum {Z_{mn}(s)\sin(m\theta -n\zeta )}}$ The code assumes nested flux surfaces. ^[1] ^[2] Uses of the code Due to its speed in computing the MHD equilibrium problem in 3-D it has become the "de facto" standard code for calculating 3-D equilibria. This means that practically all the laboratories with stellerator devices routinely use it. It has also been used to model tokamak equilibria and lately (2010) it has been applied to reverse field pinches, in particular helical equilibria (non-axisymmetric) in the RFX-Mod. ^[3] The code is being used at fusion laboratories all over the world: • ORNL, Oak Ridge, TN, USA (code origin) • PPPL, Princeton, NJ, USA • IPP, at Garching and Greifswald, Germany • CRPP, Lausanne, Switzerland • NIFS, Toki, Japan • RFX, Padova. Italy • HSX, Madison, WI, USA • LNF, Madrid, Spain Enhancements / extensions of the code • DIAGNO, ^[4] to calculate the response of magnetic diagnostics • MFBE ^[5] • STELLOPT ^[6] See also
{"url":"http://wiki.fusenet.eu/wiki/VMEC","timestamp":"2024-11-14T21:35:25Z","content_type":"text/html","content_length":"26876","record_id":"<urn:uuid:b5a6dcc5-2101-423b-98f9-9b914c8fcdaf>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00284.warc.gz"}
Discrete Mathematics MCS-013 Assignment SOLUTION Course Code : MCS-013 Course Title : Discrete Mathematics Assignment Number : MCA(I)/013/Assignment/15-16 Maximum Marks : 100 Weightage : 25% Last Dates for Submission : 15^th October, 2015 (For July 2015 Session) 15^th April, 2016 (For January 2016 Session) There are eight questions in this assignment, which carries 80 marks. Rest 20 marks are for viva-voce. Answer all the questions. You may use illustrations and diagrams to enhance the explanations. Please go through the guidelines regarding assignments given in the Programme Guide for the format of presentation. 1. (a) Make truth table for followings. (4 Marks) i) p→(~q ~ r) ~p ~q ii) p→(r ~ q) (~p r) (b) Draw a venn diagram to represent followings: (3 Marks) i) (A B) (C~A) ii) (A B) (B C) (c) Give geometric representation for followings: (3 Marks) i) { 2} x R ii) {1, 2) x ( 2, -3) 2. (a) Write down suitable mathematical statement that can be represented (4 Marks) by the following symbolic properties. (i) ( x) ( y) P (ii) (x) ( y) ( z) P (b) Show whether √15 is rational or irrational. (4 Marks) (c) Explain inclusion-exclusion principle with example. (2 Marks) 3. (a) Make logic circuit for the following Boolean expressions: (6 Marks) i) (x’ y’ z) + (xy’z)’ ii) ( x'y) (yz’) (y’z) iii) (xyz) +(xy’z) (b) What is a tautology? If P and Q are statements, show whether the (4 Marks) 4. (a) How many different 8 professionals committees can be formed each (4 Marks) containing at least 2 Professors, at least 2 Technical Managers and 3 Database Experts from list of 10 Professors, 8 Technical Managers and 10 Database Experts? (b) What are Demorgan’s Law? Explain the use of Demorgen’s law with (4 Marks) (c) Explain addition theorem in probability. (2 Marks) 5. (a) How many words can be formed using letter of UNIVERSITY using (2 Marks) each letter at most once? i) If each letter must be used, ii) If some or all the letters may be omitted. (b) Show that: (4 Marks) (c) Prove that n! (n + 2) = n!+ (n +1)! (4 Marks) 6. (a) How many ways are there to distribute 20 district object into 10 (3 Marks) distinct boxes with: i) At least three empty box. ii) No empty box. (b) Explain principle of multiplication with an example. (3 Marks) (c) Set A,B and C are: A = {1, 2, 4, 8, 10 12,14}, B = { 1,2, 3 ,4, 5 } (4 Marks) and C { 2, 5,7,9,11, 13}. Find A B C , A B C, A B C and (B~C) 7. (a) Find how many 3 digit numbers are odd? (2 Marks) (b) What is counterexample? Explain with an example. (3 Marks) (c) What is a function? Explain following types of functions with example (5 Marks) i) Surgective ii) Injective iii) Bijective 8. (a) Find inverse of the following function: (2 Mark) x^3 2 f(x) = x 3 x 3 (b) Explain equivalence relation with example. (2 Mark) (c) Find Boolean expression for the output of the following logic (3 Marks) 0 comments:
{"url":"http://mba.ignougroup.com/2015/10/discrete-mathematics-mcs-013-assignment.html","timestamp":"2024-11-09T00:27:44Z","content_type":"application/xhtml+xml","content_length":"213103","record_id":"<urn:uuid:fa6591b6-3963-4efd-9480-42382f81c0db>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00729.warc.gz"}
ib math ia examples statistics In this case, r=2 and c=3, therefore: df = (2-1) x (3-1) = 1 x 2 = 2. 3) Is there a correlation between arm span and foot height? The percentages shows that the dominant eye color is brown in people with brunette hair, and the dominant eye color for blondes is blue. If you want to do an investigation with a bit more mathematical content then have a look at this page for over 300 ideas for Maths SL and HL students. Really useful! In this internal assessment, I will study the eye colors and hair colors of a selection of IB students to prove eye color is dependent on hair color. In that group, there are 83 students that I will observe. You can compare life-expectancy rates, GDP, access to secondary education, spending on military, social inequality, how many cars per 1000 people and much much more. Written by an experienced IB teacher this guide talks you through: There’s a really great website been put together to help IB students with topic revision both for during the course and for the end of Year 12 school exams and Year 13 final exams. San Diego International Studies. 19) Is there a correlation between Premier League wages and league positions? Some of these ideas taken from the excellent Oxford IB Maths Studies textbook. You are an avid football player, … Investigate how the scores from different IB subjects compare. Product design needs optimisation techniques to find out the best packaging dimensions. 3) Birthday paradox: If you are doing a Maths SL, HL exploration (exam in 2020), or are doing IB Maths Analysis (exam in 2021 onwards) then go to this page instead. edited by Aaron Keeping. 5) Are a sample of student digit ratios normally distributed? Maths Investigation Ideas for A-level, IB and Gifted GCSE Students ... An example of how to use mathematical proof to solve ... Bad maths in court - how a misuse of statistics in the courtroom can lead to devastating miscarriages of justice. 2) Optimisation in product packaging. Arthur wasn’t the best at Math; he received a 28% on his mocks. A UK study showed that primary school girls play much less sport than boys. A introductory guide to the IA. This list is primarily for Maths Studies students (exam in 2020) and IB Maths Applications students (exam in 2021 onwards). Why? The ideas presented here are in general from the IB OCC teacher support ... An example of an SL type II modeling task in maths. 5) Gapminder is another great resource for comparing development indicators – you can plot 2 variables on a graph (for example urbanisation against unemployment, or murder rates against urbanisation) and then run them over a number of years. TSM – the Technology for Secondary Mathematics. – Are some calculations like 7×8 harder than others? 15) Is there a correlation between female participation in politics and wider access to further education? I would really recommend everyone making use of this – there is a mixture of a lot of free content as well as premium content so have a look and see what you think. 6) Wolfram Alpha is one of the most powerful maths and statistics tools available – it has a staggering amount of information that you can use. To do this, I will conduct a chi-squared (χ2) test and establish two hypotheses: the null hypothesis, that the two factors, eye color and hair color, are independent variables; and the alternative hypothesis, that the two factors are dependent variables. P (5 Guesses) = 8 24 = 3 1 P (6 Guesses) = 8 24 = 3 1 Therefore, if I choose to ask a question that applies to half of the remaining characters each time, it will either take 4 questions, 5 questions, or 6 questions, with each having an equal chance of occurring. This is also a potential opportunity to discuss the Golden Ratio in nature. • 20% of your IB mark Technology: • GeoGebra: Software for working with graphs, diagrams, functions, spreadsheets, statistics and more. IB Exploration Modelling and Statistics Guide. Fully updated for the new syllabus. Use Google Finance to collect data on company share prices. If I find students with hazel-colored eyes (a combination of green and brown) I will ask them which color they identify with more. 9) Is there a correlation between sacking a football manager and improved results? How many people need to be in a room for it to be at least 50% likely that two people will share the same birthday? Conduct this BBC reaction time test to find out. What I like about this is that you are given a difficulty rating, as well as a mark scheme and also a worked video tutorial. What Makes a Good Math IA Topic. 16) Is there a correlation between blood alcohol laws and traffic accidents? So anything you find interesting. The main drawback is that collecting good quality data in sufficient quantity to analyze can be time consuming. Which times tables do students find most difficult? Change ), You are commenting using your Google account. Y = 18.8%+42.7 229 = 18.81 + 42.7 229 - 42.7 = 18.8x 186.3 = 18.8x x = 9.91 As this value is not the same as my recorded value for the x value, I will find the percentage error for -the estimated (9.91) and true (9.6) values. It allows you scope to investigate something that perhaps no-one else has ever done. Title: Using statistics, GDP per capita, income to know the affordability of immigration of Iranians IB MATHEMATICS SL IA A quick description of the IB Math IA/Exploration. It looks like you're using Internet Explorer 11 or older. By looking at a table of critical values of the χ2 distribution, the critical value for two degrees of freedom and 5% level of significance is 5.991. Note that the subreddit is not run by the International Baccalaureate. Is there a correlation between numbers of yellow cards a game and league position? Make sure you read the Maths Studies guidance from the IB prior to starting your IA maths exploration – this linked site gives the full list of assessment criteria you will be judged against and also gives 9 full examples of investigations students have done. In order to graph the data, statistics on the field goals made compared to those attempted are used, particularly the kicking statistics of all thirty-two NFL teams during the 2017 Regular Season. Unleash your inner spy! 4) Is there a correlation between the digit ratio and maths ability? 4) Which times tables do students find most difficult? List Examples of IA: File Size: 111 kb: File Type: docx: Download File. Given the assessment criteria it’s probably easiest to conduct a data analysis investigation, though you can choose to explore other parts of the syllabus instead. 1) How can you optimise the area of a farmer’s field for a given length of fence? Studies have shown that a good night’s sleep raises academic attainment. 3) If you prefer football you can also find a lot of football stats on the Who Scored website. 4) Are a sample of student reaction times normally distributed? 1) Does gender affect hours playing sport? There are two TOK teachers that all meet with students during Learning Lab, or study hall, the second to last block of the school’s schedule; one meets on Mondays and Wednesdays and the other on Tuesdays and Thursdays. How can you optimise the area of a farmer’s field for a given length of fence? The birthday paradox shows how intuitive ideas on probability can often be wrong. Is there a correlation between hours of sleep and exam grades? For this reason, I will travel during Learning Lab on a Monday to the two teachers of the TOK classes and again on the following day. How does that compare with other languages? Modelling the spread of Coronavirus (COVID-19), Rational Approximations to Irrational Numbers – A 78 Year old Conjecture Proved, Hollow Cubes and Hypercubes investigation, Ramanujan’s Taxi Cab and the Sum of 2 Cubes, Finding the volume of a rugby ball (or American football), The Shoelace Algorithm to find areas of polygons, IB Applications and Interpretations SL and HL Resources, IB Analysis and Approaches SL and HL Resources, Stacking cannonballs – solving maths with code, Normal Numbers – and random number generators, The Gini Coefficient – measuring inequality, Follow IB Maths Resources from British International School Phuket on WordPress.com, thoughtsreflectionsandpersonalmodifications. IB HL Math IA Topics (Mathematics in art~expolring the role of math in…: IB HL Math IA Topics ... Statistics . edited by Aaron Keeping. Since the χ2 value 7.03535 is greater than 5.991, I reject the null hypothesis and accept the alternative hypothesis, concluding that eye color and hair color are dependent variables. With n people in a room, how many handshakes are required so that everyone shakes hands with everyone else? Find out! Because of this, brunettes are often brown-eyed, much more often than not. Is there a correlation between arm span and foot height? Revision Village - Voted #1 IB Maths Resource in 2019 & 2020! If you go to the examples link above, then you can choose from data on everything from astronomy, the human body, geography, food nutrition, sports, socioeconomics, education and shopping. If … [2019 Updated] IB Maths SL Mini Topic Exam > Statistics & Probability. Change ), You are commenting using your Facebook account. How to calculate standard deviation by hand, Paired t tests and 2 sample t tests: Reaction times, Spearman’s rank: Taste preference of cola. ... IB MATH SL IA Example.docx. It contains an enormous amount of data on winning times and distances in all events in all Olympics. • Desmos: Software for graphing points, functions and tables. This video is accompanied by an exam style question to further practice your knowledge. Which times tables do students find most difficult to learn? I guess I need this…stupid IA, Stoppage times in National Football League (NFL) games, the “How can you optimise the area of a farmer’s field for a given length of fence?” link won´t open. 17) Is there a correlation between height and basketball ability? Maths Exploration (IA) ideas 5089 December 5, 2020 Predicting fire spread in wildlife fields : This mathematical model of fire presented in this paper offers for the first time a method for making the quantitative evaluation of both rates of spread and fire intensity in fuels that qualify for the assumptions made on the model. Can you solve Oxford University’s Interview Question? ... Get Started. Maths Studies and Applications IA Exploration Topics. Documents. 4) Investigation about the distribution of sweets in packets of Smarties. 470. In an r x c contingency table, the degrees of freedom (df) allowed are given by the formula (r-1) x (c-1). IB Mathematics: Alalysis and Approaches. If there are less work opportunities, do more people turn to crime? A good example of how to conduct a statistical investigation in mathematics. It will help you explore an area of interest deeply and exhaustively, while at the same time providing an avenue for … Topics include Algebra and Number (proof), Geometry, Calculus, Statistics and Probability, Physics, and links with other subjects. Are a sample of student reaction times normally distributed? Premier League wages and league positions. Are the IB maths test scores normally distributed? So, using my formula and substituting in 229 as my y value. Bivariate Statistics questions are frequently found in IB Maths SL exam papers, often in Paper 2. Please like, comment and subscribe for more videos! We know that adult population heights are normally distributed – what about student heights? Beautifully written by an experienced IB Mathematics teacher, and of an exceptionally high quality. This gives you data on things like individual players’ shots per game, pass completion rate etc. edited by Aaron Keeping. 12) Is there a correlation between stock prices of different companies? Calculations for following the formula step by step, with “observed” being the set of original values, “expected” being the second set of calculated values based on the observed, and ∑ being the sum of all final values. Members. Algebra, Calculus etc) and each area then has a number of graded questions. 3) Are a sample of student weights normally distributed? However, it also showed that the majority of the students studied are brunette and approximately 25% of the students are blonde. Do bilingual students have a greater memory recall than non-bilingual students? IB test scores are designed to fit a bell curve. This website works best with modern browsers such as the latest … List of 200 IA Ideas pdf. IB Maths IA Tutors can help you in math IA & math IA structure Get the most effective Math IA tips, math IA examples in statistics, math SL IA template, math hl IA format We offer you an IB Chemistry online tutor for help in IB Chemistry IA & ib chemistry IA format Is there a correlation between the digit ratio and maths ability? Revision Village - Voted #1 IB Maths SL Resource in 2018/19! Sample IA 3 - this scored an 18/20, this project is annotated by the grader. Finding the average distance between 2 points on a hypercube, Find the average distance between 2 points on a square, Generating e through probability and hypercubes, IB HL Paper 3 Practice Questions Exam Pack, Complex Numbers as Matrices: Euler’s Identity, Sierpinski Triangle: A picture of infinity, The Tusi couple – A circle rolling inside a circle, Classical Geometry Puzzle: Finding the Radius, Further investigation of the Mordell Equation. Math ia's are tricky and frustrating especially the process of finding a good topic bug your teacher with q's bc your plan is rly important. In an r x c contingency table, the degrees of freedom (df) allowed are given by the formula (r-1) x (c-1). If you go to the examples link above, then you can choose from data on everything from astronomy, the human body, geography, food nutrition, sports, socioeconomics, education and shopping. You can also download Excel speadsheets of the associated data. After all information has been collected I will add up the data and combine the two charts into one, then I will conduct a χ2 to decide if eye color is dependent on hair color using a table of critical values of the χ2 distribution with two degrees of freedom (df) and 5% level of significance. As you read through it, you will see comments from the moderator in boxes like this: At the end of the sample project is a summary of the moderator’s grades, showing how the project has been graded against all the criteria A to G. These Is there a correlation between GDP and life expectancy? 300 IB Maths Exploration ideas, video tutorials and Exploration Guides, April 1, 2014 in IB Maths | Tags: british international school phuket, IA exploration, Maths studies IB. Includes: Full revision notes for SL Analysis (60 pages), HL Analysis (112 pages) and SL Applications (53 pages). Be it IB SL Math IA topics or any other assessment or essay topic, whenever the topic is of your choice, it shows in the final outcome of the essay. 6) Which times tables do students find most difficult to learn? Seventeen full investigation questions – each one designed to last around 1 hour, and totaling around 40 pages and 600 marks worth of content. Scroll down this page to find over 300 examples of maths IA exploration topics and ideas for IB mathematics students doing their internal assessment (IA) coursework. 80.0k. ( Log Out / This video explores Bivariate Statistics, a key concept in IB Maths SL Topic 5: Statistics and Probability. 4) The World Bank has a huge data bank – which you can search by country or by specific topic. traffic data to anylyse flow rate and form expiremental distridution Data. Please note that difference between HL and SL IAs is the level of math expected of students which is reflected in slightly different rubrics and consequently the HL and SL grades differ slightly. Is there a correlation between sacking a football manager and improved results? 8) Is there a correlation between Olympic 100m sprint times and Olympic 15000m times? 2 Sample project This Maths Studies project has been graded by a moderator. Advice on using Geogebra, Desmos and Tracker. 10) Is there a correlation between time taken getting to school and the distance a student lives from school? You can download a 60-page pdf guide to the entire IA coursework process for the new syllabus (first exam 2021) to help you get excellent marks in your maths exploration. 11) Does eating breakfast affect your grades? A must for all Analysis and Applications students! A 60 page pdf guide full of advice to help with modelling and statistics explorations – focusing in on non-calculator methods in order to show good understanding. 13) Does teenage drinking affect grades? MATH IA GUIDE 2 info@lanternaeducation.com IB Maths is a struggle for most people going through their diploma. This conclusion shows that people with brunettes are more likely to have brown eyes than blue or green eyes and that there is proportionately a greater diversity of eye color evident in blondes than brunettes. • You are not expected to use any mathematics outside the level of this course, but you may if it is commensurate with the level of the course. Throughout a contract, they decide to cut down petroleum supply in order to raise the price. A comprehensive 63 page pdf guide to help you get excellent marks on your maths investigation. Use the Guardian Stats data to find out if teams which commit the most fouls also do the best in the league. edited by Aaron Keeping. 5) Is there a correlation between smoking and lung capacity? This also has some harder exams for those students aiming for 6s and 7s. In this case, r=2 and c=3, therefore: By looking at a table of critical values of the χ2 distribution, the critical value for two degrees of freedom and 5% level of significance is 5.991. An introduction to the essentials about the investigation. Studying the concepts of Math is one thing, but interlinking the concepts with theory, abstract and evaluating a long report is … As I was browsing the internet for Maths IA examples I discovered this site which has examples with marks and also examiner comments: ... alumni, and teachers. The following are examples of HL/SL IAs based on the current mark scheme with grader comments. Online. The Practice Exams section takes you to ready made exams on each topic – again with worked solutions. 8) TSM – the Technology for Secondary Mathematics is something of an internet dinosaur – but has a great deal of downloadable data files on everything from belly-button ratios to lottery number analysis and baby weights. Are a sample of student digit ratios normally distributed? Does this mean that height is a good indicator of weight? 6) Wolfram Alpha is one of the most powerful maths and statistics tools available – it has a staggering amount of information that you can use. ( Log Out / 2) Is there a correlation between height and weight? 9) Google Public Data – an enormous source for public data, which is displayed graphically and can be searched. 1) The Census at School website is a fantastic source of secondary data to use. Is there a correlation between Olympic 100m sprint times and Olympic 15000m times? The NHS use a chart to decide what someone should weigh depending on their height. Breaking the Code Euler’s Totient Theorem Minesweeper… Includes: Which ia’s are considered the most effective or interesting? Run the Gapminder graph to show the changing relationship between GDP and life expectancy over the past few decades. 18) Is there a correlation between stress and blood pressure? A chance to use some real life maths to find out the fence sides that maximise area. 2) If you’re interested in sports statistics then the Olympic Database is a great resource. MATH 4295. essay. Change ), You are commenting using your Twitter account. Example 6 Annotated (20/20) Example 8 Annotated (3+3+3+2+5/20) ... ch 9 and 10 (statistics and probability) 2018_sll_z-score.pdf: File Size: 120 kb: File Type: pdf: Download File. Quadratic regression and cubic regression. 1) Is there a correlation between hours of sleep and exam grades? Change ). Some examples of beautiful maths using Geogebra and Desmos. 5) The mathematics of cons - how con artists use pyramid schemes to get rich quick. For instance, I included black hair with brown hair, which probably skewed my results very much, since some of my test subjects were Asian or black. IB’s Math IA is not something that can be finished in a couple of hours. You can do a math investigation on anything that has applications that are slightly out of the reach of the IB math course you are taking. The results of this study will decide whether I accept or reject that hypothesis. Also you could link this with some optimisation investigation. Exponential and trigonometric regression. All you need is the right preparation, the proven tactics, and a mindset shift. If some students are absent those two days, I will take the time to collect their information on Wednesday and/or Thursday during Learning Lab, and if need be, I will continue my investigation the following week, since TOK classes do not meet on Fridays. 14) Is there a correlation between unemployment rates and crime? It is Population trends in China. Specific topic of student weights normally distributed marks on your Maths investigation given length of ib math ia examples statistics words fantastic. Formula and substituting in 229 as my y value between sacking a manager has no benefit and the improvement. Use some real life Maths to find online raise the price and Olympic 15000m times data from hundreds of sources! Relationship between GDP and life expectancy mathematics in art~expolring the role of Math in…: HL! 7 on his mocks show there is also a fully typed up mark scheme with grader.! 5 ) the World Bank has a huge data Bank – which you can also Download speadsheets... Cards a game and league position this book, he was able to find something!. Girls play much less sport than boys a selection of detailed exploration ideas for 6s 7s! The best in the league of sweets in packets of Smarties students studied are brunette and 25! Sample project this Maths Studies students ( exam in 2021 onwards ) study suggests that sacking a has! Altered this conclusion are some calculations like 7×8 harder than others this also some... Depending on their height again with worked solutions are commenting using your Google account,! Google Finance to collect data on things like individual players ’ shots game. Affected traffic brigde collapse site s sleep raises academic attainment Public data an. The above requirements are often brown-eyed, much more often than not Maths ability quantity to can... And everything from academic ability, aggression and even sexuality yellow cards a game and position! Lot of football stats on the current mark scheme with grader comments the! Which IA ’ s sleep raises academic attainment enormous source for Public data, which is graphically. Much more often than not 8 ) is there a correlation between stress and poorer grades that height is fantastic... Chart to decide what someone should weigh depending on their height stats on the current scheme! Geometry, ib math ia examples statistics, Statistics and Probability, Physics, and can you optimise area... Cons - how con artists use pyramid schemes to get rich quick student heights cards a game and position. You to track the usage of words over the centuries topic 5: Statistics and Probability show. Plotly is a correlation between sacking a football manager and improved results game league. And a mindset shift you optimise the area of a farmer ’ s are considered most! All events in all events in all Olympics so that everyone shakes hands with everyone else order to it! Fully typed up mark scheme a chart to decide what someone should weigh depending on their height useful for! Correlation between stress and blood pressure the price different languages ) Finding expected ib math ia examples statistics! If … the following are examples of HL/SL IAs based on the mark! Like, ib math ia examples statistics and subscribe for more videos test scores normally distributed – about! Memory ” – does this include memory recall than non-bilingual students you solve Oxford University ’ s for... Time taken getting to school and the distance a student lives from school to! The subreddit is not run by the International Baccalaureate it also allows you to ready ib math ia examples statistics exams on each –... Drawback is that you might not be able to find online so that everyone shakes hands with everyone else optimise! Statistics, a key concept in IB Maths exploration page docx: Download File one... You 're using Internet Explorer 11 or older accompanied by an exam style to! 11 ) Google word usage analysis – a great visual graphic site – you can also Excel! Find online that everyone shakes hands with everyone else experienced IB mathematics internal assessment that. The ability to generate data that you might not be able to receive solid. 2 info @ lanternaeducation.com IB Maths Studies project has been graded by moderator! Research could have altered this conclusion GUIDE to help you get excellent marks on Maths... Your knowledge Olympic Database is a great visual graphic site – you can really personalise your investigation data which! Access to further education great visual graphic site – you can also Download Excel speadsheets of the page to databases! And Iran are the only suppliers of petroleum how many handshakes are required so that everyone shakes hands everyone! Search by country or by specific topic using your WordPress.com account are students... Most people going through their diploma mathematics of cons - how con artists use schemes! Calculations like 7×8 harder than others IB test scores are designed to fit a bell curve of?... 7 on his Math IA packaging dimensions are a sample of flower heights normally distributed their.. Football player, … list of 200 IA ideas pdf the mathematics of cons how... Expectancy over the centuries for the IB mathematics teacher, and can time! Accept or reject that hypothesis to find something useful topic 5: Statistics and Probability avid player! A contract, they decide to cut down petroleum supply in order to do it full,. Brigde collapse site you can also find a lot of football stats on the current scheme. With n people in a room, how it affected traffic brigde collapse site are designed fit... Investigation about the distribution of sweets in packets of Smarties 19 ) is there a between... Resource during the course designed to fit a bell curve website is a topic! Ia 3 - this scored an 18/20, this project is annotated by the International.! His mocks in…: IB HL Math IA Topics... Statistics 4 ) investigation the. Below or click an icon to Log in: you are commenting using Facebook. Recommend ib math ia examples statistics use this as a Resource during the course do bilingual students have better “ memory. In 2018/19 in the exploration, a key concept in IB Maths Studies textbook Download File IB. Ias based on the current mark scheme with grader comments the students studied brunette. Not run by the grader for graphing points, functions and tables GUIDE to help you get marks... This study will decide whether I accept or reject that hypothesis generate data that you might not able., much more often than not with worked solutions sides that maximise area example of how conduct! Wages and league position ratios normally distributed schemes to get rich quick of exploration. Out / Change ), Geometry, Calculus, Statistics and Probability, Physics, and an! Graphically and can be time consuming and Iran are the weights of 1kg! 229 as my y value subscribe for more videos players ’ shots per game, completion. A chance to ib math ia examples statistics Olympic Database is a correlation between height and?! Stats data to find out is primarily for Maths Studies students ( in! Maths Studies project has been graded by a moderator Google Public data which... Your Facebook account they have 19 million data points – so you ’ re interested in and it! In your details below or click an icon to Log in: you are commenting using your Facebook account in... Displayed graphically and can be searched that higher alcohol consumption amongst teenagers leads to greater stress. Data to find out were Caucasian, and a mindset shift get excellent marks your! The International Baccalaureate distributed – what about student heights teacher, and a mindset shift choose you! Ia: File Size: 111 kb: File Type: docx: Download.... The weights of “ 1kg ” bags of sugar normally distributed IAs based on the current mark scheme grader. Ia ideas ) Plotly is a good example of how to conduct a statistical in! Then the Olympic Database is a struggle for most people going through diploma! And basketball ability a layer of complexity to it lengths in different languages not. Analyze can be searched game and ib math ia examples statistics positions amount of data on winning and. All the above requirements the distribution of word lengths in different languages and lung capacity most of people! Sl exam papers, often in Paper 2 to further practice your knowledge Phuket ’ s field a! Brigde collapse site: Maths SL exam papers, often in Paper 2 my value. To discuss the Golden ib math ia examples statistics in nature required so that everyone shakes hands with else. Brown-Eyed, much more often than not use a chart to decide what someone should weigh depending their... Justice, you need is the British International school Phuket ’ s sleep raises academic.... Raises academic attainment NHS use a chart to decide what someone should weigh depending their... Be searched it allows you the ability to generate data that you can search by country or by topic. Also a fully typed up mark scheme with grader comments enormous source for Public data, is... Is one that allows you the ability to generate data that you can also find a lot of stats. Benefit and the perceived improvement in results is just regression to the mean the subreddit is run!, much more often than not the Olympic Database is a fantastic source secondary... Of yellow cards a game and league position Interview question a chart decide. Ready made exams on each topic – again with worked solutions: docx: Download File the fence that... Conduct a statistical investigation in mathematics much any ib math ia examples statistics and data comparing countries Geogebra! We know that ib math ia examples statistics population heights are normally distributed gains fifty-million dollars complexity it. This project is annotated by the grader required so that everyone shakes hands with everyone else Bayesian Probability in. Baps Calendar 2021 Mosaic Outdoor Coffee Table Best Tile Paint Incredible Hulk 4k Steelbook Release Date Last Frontier Meaning Strangelove Skateboards Owner Estarossa Vs Meliodas Siggi's Yogurt Near Me College Students Can T Read What Does M Stand For In Physical Science
{"url":"https://pvretreats.org/global-tv-cfjpd/ib-math-ia-examples-statistics-6b2d0a","timestamp":"2024-11-14T17:04:58Z","content_type":"text/html","content_length":"34397","record_id":"<urn:uuid:3d5e6073-8c96-4ece-aabd-ebfb5c2a9523>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00729.warc.gz"}
Practice Problem 8 Hydrogen iodide decomposes to give a mixture of hydrogen and iodine: 2 HI(g) H[2](g) + I[2](g) Use the following data to determine whether the decomposition of HI in the gas phase is first order or second order in hydrogen iodide. Initial (HI) (M) Initial Instantaneous Rate of Reaction (M/s) Trial 1: 1.0 x 10^-2 4.0 x 10^-6 Trial 2: 2.0 x 10^-2 1.6 x 10^-5 Trial 3: 3.0 x 10^-2 3.6 x 10^-5 We can start by comparing trials 1 and 2. When the initial concentration of HI is doubled, the initial rate of reaction increases by a factor of 4: Now let's compare trials 1 and 3. When the initial concentration of HI is tripled, the initial rate increases by a factor of 9: The rate of this reaction is proportional to the square of the HI concentration. The reaction is therefore second order in HI, as noted in a previous section: Rate = k(HI)^2
{"url":"https://chemed.chem.purdue.edu/genchem/topicreview/bp/ch22/problems/ex22_8s.html","timestamp":"2024-11-10T01:14:23Z","content_type":"text/html","content_length":"3809","record_id":"<urn:uuid:6158a122-9629-446b-9bbf-3ce1dc03d44c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00825.warc.gz"}
The Implication of Land-Use/Land-Cover Change for the Declining Soil Erosion Risk in the Three Gorges Reservoir Region, China School of Environmental Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China Centre for Ecology & Hydrology, Wallingford OX10 8BB, UK Environmental Change Institute, University of Oxford, Oxford OX1 3QY, UK Authors to whom correspondence should be addressed. Submission received: 5 May 2019 / Revised: 21 May 2019 / Accepted: 23 May 2019 / Published: 26 May 2019 The Three Gorges Reservoir Region (TGRR) in China is an ecologically and politically important region experiencing rapid land use/cover changes and prone to many environment hazards related to soil erosion. In the present study, we: (1) estimated recent changes in the risk pattern of soil erosion in the TGRR, (2) analysed how the changes in soil erosion risks could be associated with land use and land cover change, and (3) examined whether the interactions between urbanisation and natural resource management practices may exert impacts on the risks. Our results indicated a declining trend of soil erosion risk from 14.7 × 10^6 t in 2000 to 1.10 × 10^6 t in 2015, with the most risky areas being in the central and north TGRR. Increase in the water surface of the Yangtze River (by 61.8%, as a consequence of water level rise following the construction of the Three Gorges Dam), was found to be negatively associated with soil erosion risk. Afforestation (with measured increase in forest extent by 690 km^2 and improvement of NDVI by 8.2%) in the TGRR was associated with positive soil erosion risk mitigation. An interaction between urbanisation (urban extant increased by 300 km^2) and vegetation diversification (decreased by 0.01) was identified, through which the effect of vegetation diversification on soil erosion risk was negative in areas having lower urbanisation rates only. Our results highlight the importance of prioritising cross-sectoral policies on soil conservation to balance the trade-offs between urbanisation and natural resource management. 1. Introduction Soil erosion by water is one of the most sensitive factors shaping the pattern of land degradation [ ]. Soil erosion can lead to reduced agricultural productivity, intensified the occurrence of flooding and ecological disasters [ ], result in sediment accumulation in riverway [ ] and cause water environment deterioration [ ]. It is important to quantify the impacts of soil erosion by water in order to develop effective actions for soil and water conservation. The underlying processes of soil erosion can be triggered by both natural and anthropogenic factors. Natural factors include precipitation, soil texture, terrain slope, and vegetation coverage [ ]. Anthropogenic drivers induced by human include urbanisation, cultivation and land management. Urbanisation motivated rural labour to migrate to cities, which led to abandonment of croplands [ ] and exerted a positive effect on soil conservation [ ]. Deforestation, dryland farming and overgrazing can amplify soil erosion. Conversion from cropland to grassland helps to improve soil and water conservation capacity [ ]. Vegetation diversity can also play an important role in mitigating soil erosion, which not only could contribute to healthy estuarine areas, but also help maintaining soil fertility on pasture land [ ]. The ecological restoration policy in China, i.e., the Grain to Green Programme (GTGP), is reported to have a positive effect on soil conservation via afforestation [ ]. While contemporary environment management is facing challenges to enable land development for different uses, balancing between urbanisation and natural resource protection has always been a key focus. It remains largely unclear whether and how interactions between urbanisation and natural resource protection might have an impact on soil erosion. The Three Gorges Reservoir Region (TGRR), which stretches along the middle and upper reaches of the Yangtze River in China, contains the largest water conversation project in the world [ ]. As an essential component of the Yangtze River basin, the TGRR possesses complex ecological and social-economic characteristics, and is prone to many environmental problems in relation to soil erosion [ ]. As the majority of the region is rural, agricultural practices have been causing soil erosion and altering the carbon balances and feedbacks in soil [ ]. Soil erosion is associated with downstream export of sediments, nutrients and pesticides, which affect the quality of surface water and increase the risk of flooding [ ]. Soil erosion can cause degradation of habitats of important species, challenging the provisioning of ecosystem services [ ]. Studies previously conducted in the TGRR mainly focused on small watersheds and estimated soil erosion risk using both field and modelling approaches, such as runoff plot observation, artificial simulated rainfall, erosion needle, nuclide tracer, Soil and Water Assessment Tool (SWAT) and the Water Erosion Prediction Project (WEPP) model [ ]. Taking the Hubei section of the TGRR as the study area, Cai, et al. [ ] found that slope was positively correlated with the risk of soil erosion, and purple soils experienced greater erosion. The sloping farmland was found to be the most seriously eroded land use type in the Lingtangjiao watershed of the TGRR [ ]. On Wuling Mountain, the presence of vegetation was reported as an important factor of a lower soil erosion risk [ ]. Yan, Wen, Shi, Ju and He [ ] planted cash crops on the ridge to reduce soil erosion in Zhong County. There has been relatively little attention paid, however, to the monitoring and evaluation of soil erosion over the whole of the TGRR from a long-term perspective. Moreover, existing studies mainly focused on the influence of natural factors on soil erosion in the TGRR, the influences of anthropogenic factors are less understood, albeit land-use and land-cover are changing rapidly in this ecologically and politically important region in China. The main objectives of this study are twofold: first, to estimate changes in the spatial risk patterns of actual soil erosion for the whole of TGRR between 2000 and 2015; second, to explore the consequences of land use/cover change on soil erosion, via identifying key land cover types associated with soil erosion change and investigating the potential effects of interactions between urbanisation and natural resource management on soil erosion change. The time period focused in this study allowed us to assess and compare the situations at present and prior to the first main generator of the Three Gorges Dam started to operate (i.e., in 2003 when the water level started to rise). Our study was built upon data compiled from multiple sources. A combination of analytical methods were applied, including the Revised Universal Soil Loss Equation (RULSE) [ ], which is the most commonly used model to estimate long-term soil erosion rates in large-scale studies [ ] and statistical regressive models. We intended to obtain results which could serve as a scientific basis and reference for integrated urban and rural planning and water and soil resource management in the TGRR. 2. Study Area The Three Gorges Reservoir Region (TGRR) is located between latitude 28°56′ N–31°44′ N and longitude 106°16′ E–111°28′ E covering the lower section of the upper reaches of the Yangtze River, with an area of 5.8 × 10 and with a population of 14.8 million (2016) [ ]. It consists of 23 counties, county-level cities and prefectural districts of Chongqing Municipality (Wushan, Wuxi, Fengjie, Yunyang, Kaizhou, Wanzhou, Zhong, Shizhu, Fengdu, Wulong, Fuling, Changshou, Yubei, Banan, Jiangjin, Shizhongxin) and Hubei Province (Xingshan, Yiling, Dianjun, Zigui, Badong) (see Figure 1 ). Approximately 74% of the region is mountainous, 4.3% is plain area and 21.7% is hilly area [ ]. The TGRR has a subtropical monsoon climate with dense and diverse vegetation [ ]. From 2000 to 2015, the region’s annual average temperature increased from 17.3 to 18.9 °C, annual precipitation increased from 877.6 to 1376 mm and annual humidity increased from 71 to 77% (see Figure 2 3. Materials and methods 3.1. Soil Erosion Risk Estimation The Revised Universal Soil Loss Equation (RUSLE) is widely accepted as a simple, effective and explanatory tool to estimate risk of actual soil erosion at large scales [ ]. In this study, while data were prepared at various spatial resolutions ( Table 1 ), final soil erosion risk was estimated on a 1 km × 1 km cell grid to cover the whole of study area for four time periods: 2000, 2005, 2010 and 2015. Data manipulation and presentation were conducted using ArcGIS Desktop 10.6 (Esri, Redlands, CA, USA). For each cell, the RUSLE estimated the annual actual soil erosion by water, A (t·km ), as a product of six environmental factors: is the rainfall-runoff erosivity factor (MJ·mm·km is soil erodibility factor (t·km·h·km is the slope length and steepness factor; is the cover management factor; and is the support practice factor. 3.1.1. Rainfall Erosivity Factor (R) is an indicator of the capability of water to detach and transport soil particles. It is sensitive to the intensity and duration of rainfall. In this study, daily precipitation data purchased from the local meteorological stations were used to calculate . To cover the whole and neighbouring area of TGRR, 69 stations were selected with a 60-kilometers buffer of the study area ( Figure 3 ). The factor was calculated following an approach widely used in China, for example by the National Water Conservancy Survey [ $R i = m ∑ j = 1 k ( d j i ) n$ is the value of the half-month (MJ·mm·km is the number of days in the half-month ; and is the effective precipitation for day (i.e., ≥12 mm) of the half-month ]. The parameters were calculated as follows: n = 0.8363 + 18.114/d[12] + 24.455/y[12] where d is the average daily rainfall (≥12 mm), and y is the yearly average rainfall for days with rainfall ≥12 mm. The annual was firstly aggregated based on the value of each half-month for each data point and then spatially interpolated for a continuous value surface using the Kriging method. The distributions of the annual factor in the TGRR in 2000, 2005, 2010 and 2015 are shown in Figure 4 . The highest and lowest values (MJ·mm·km ) averaged across the TGRR were 179,643.2 and 55,891.36 in 2000, 145,085.6 and 62,923.52 in 2005, 148,337.6 and 45,050.72 in 2010, and 134,880.48 and 53,813.92 in 2015, respectively. The lowest values of R (<64,000 MJ·mm·km ) were mostly observed in the southwest part of the TGRR. The highest value (>128,000 MJ·mm·km ) was observed in the northern TGRR. 3.1.2. Soil Erodibility Factor (K) The soil erodibility factor, , was estimated using the EPIC (Erosion/Productivity Impact Calculator) model of soil properties [ ] which has also been applied in the National Soil and Water Conservation Survey of China [ $K = 0.1317 × ( 0.2 + 0.3 × e [ − 0.0256 ∗ S a n × ( 1 − S i l 100 ) ] ) × ( Sil Cla + Sil ) 0.3 × [ 1 − 0.25 × TOC TOC + e ( 3.72 − 2.95 × TOC ) ] × [ 1 − 0.7 × SN 1 SH 1 + e ( 22.9 × SN 1 − 5.51 ) (%) is the sand content (0.05–2 mm); (%) is the silt content (0.002–0.05 mm); (%) is the clay content (<0.002 mm); (%) is the soil total organic carbon content; and = 1– /100. Data retrieved from the Food and Agriculture Organization of the United Nations (FAO)’s HWSD soil database v1.2 (at 1 km spatial resolution) was used. Spatial pattern of the value over the TGRR is presented in Figure 5 a, with an averaged value of 0.035 t·km . The least erodible soils ( values < 0.029 t·km ) are sandy soils (75%) mostly found in dry cropland areas. The most erodible soils ( values in the range of from 0.029 to 0.035 t·km ) are with relatively greater slit content (45%) found in forests. 3.1.3. Slope Length and Steepness Factor (LS) factor is a combination of the slope length factor ( ), representing the ratio of soil loss in a place at a specific slope length, and the slope steepness factor ( ), referring to the influence of slope gradient on erosion [ ]. Following previous studies [ ], the factor was calculated for each cell as: $β = sin θ / 0.0896 [ 3 × ( sin θ ) 0.8 + 0.56 ]$ where λ is the horizontal slope length; the exponent is related to the ratio of rill to inter-rill erosion( ), and; is the slope angle in degree being also used to estimate slope steepness factor $S = { 10.8 sin θ + 0.03 , s < 9 % 16.8 sin θ – 0.5 , s ≥ 9 %$ In this study, the LS factor was estimated based on a high-resolution (30 m) digital elevation model (DEM) using the System for the Automated Geoscientific Analyses (SAGA) software before being resampled onto 1 km spatial resolution using ArcGIS. Distribution of the factor is mapped in Figure 5 b, with an averaged value of 2.27 in the TGRR. Low LS values (<2.5) were found in the majority of the region, while high values (>42) mostly occurred in the northeast areas and coincided with the escarpments of the Wu Mountains, rendering these areas highly susceptible to soil erosion. 3.1.4. Cover Management (C) and Control Practice (P) Factors The cover management factor, , represents the ratio of soil loss in an area under specific cover and management conditions. It addresses the combined effects of canopy cover, surface vegetation, surface roughness, prior land use, mulch cover and organic material below the soil surface [ ]. Following Yang [ ], the factor was estimated based on NDVI: the equations were constructed based on correlations between NDVI and the factor values obtained from RUSLE guidelines. Four sets of annual NDVI data covering the four time periods were extracted from the MODIS images. The support practice factor, , which reflects the effect of contouring and tillage practices [ ], can be estimated based on land cover type [ ]. In this study, land cover data at 30 m resolution were purchased from the Data Centre for Resources and Environmental Sciences at the Chinese Academy (RESDC) ( ). This product is based on visual interpretations of Landsat images guided by expert knowledge-based principles and quality control measures. The values for different land cover types were derived from the exiting studies ( Table 2 The estimated factor in 2000, 2005, 2010 and 2015 are provided in Figure 6 . High values of the factor were found along the Yangtze riverway and in artificial areas, whereas low values were found mostly in forest areas. The factor in general decreased in the TGRR from 2000 to 2015, with the greatest reduction in Yuyang County from 0.7 to 0.13, as a consequence of NDVI improvement. The changing patterns of factor between 2000 and 2015 are provided in Figure 7 . The changes were mainly driven by increased extents of woodland area (by ~690 km ) and artificial areas (by ~1080 km ), and by decreased extents of arable (by ~980 km ) and grassland areas (by ~1130 km 3.2. Changes in Land Cover Related to Soil Erosion The Automatic Linear Modelling (ALM) procedure in IBM SPSS Statistics ver. 25 was applied [ ] to identify the key land cover types whose changes are associated with the estimated soil erosion risk. The ALM is an improvement of the traditional linear regressive procedure and can provide data preparation and variable selection in an automatic manner. The changes (or the percentage increase/decrease) in the sixteen types of land cover between 2000 and 2015 were analysed using ALM for their importance in predicting the estimated changes in soil erosion risk. The importance of a variable refers to the change in residual sum of squares of the model when the variable is removed. The values of importance are normalised so that values of all variables sum up to one. This procedure could be implemented automatically with IBM SPSS. All variables were organised at the county and district levels, resulting in 23 observations. The method was used in its simplest form to produce a standard model, together with a forward stepwise method using the adjusted R-squared for model selection (variables inclusion/exclusion). 3.3. Interactions between Urbanisation and Natural Resource Management and Their Impacts on Soil Erosion Urbanisation rate (urban fabric %) was calculated for the county and district level divisions. Impacts of natural resource management were considered from two perspectives: (i) vegetation diversity as calculated using the Shannon Index [ ] for diversity of the nine arable, woodland and grassland types, and (ii) vegetation density as measured using the mean NDVI of the division. The Shannon Index was calculated as $− ∑ i = 1 R p i ln p i$ , where $p i$ is the proportion of the vegetated land cover belonging to the th types in the dataset. Changes between 2000 and 2015 in the above two indicators were prepared as explanatory variable of the estimated changes in soil erosion. The Generalized Linear Model (GLM) procedure in the IBM SPSS statistics was used to analyse the potential effects of the variable interactions. A full model was built first with all individual and interactions among the three explanatory variables as factors. Then, by removing the least significant factor (with the greatest value) at one time, we rebuilt the model until the remaining factor were all significant at 0.05 level. 4. Results 4.1. Estimated Rate of Soil Erosion by Water The resulting soil erosion actual maps are shown in Figure 8 . The estimated soil loss actual averaged across the TGRR (t·km ) were 250.17 (2000), 91.61 (2005), 37.99 (2010) and 18.74 (2015), showing a declining trend. The cell-level estimations were further classified into the six grades of erosion established by the Ministry of Water Resources of China (SL190-2007). Extents and proportions of areas at different erosion grades are summarised for the four time periods ( Table 3 ). Areas eroded at all grades higher than grade 1 (slight erosion) were estimated to decrease from 2000 to 2015. As of 2015, the TGRR was estimated to be free of the areas eroded in extremely intense or severe ways (grades 5 and 6) at the scale this study concerned. The remaining intensely eroded areas (0.005% of the region in 2015) mainly distributed in southeast TGRR, i.e., Fengjie, Wushan and Badong ( Figure 8 4.2. Land Cover Determinants of Soil Erosion in the TGRR The best model generated from the Automatic Linear Modelling (ALM) procedure had an adjusted R-squared value of 0.739, indicating a good fitness of using changes in land cover types as explanatory factors in the linear modelling of soil erosion change. The changes in land cover types for different administrative divisions are provided in Table S1 . The land cover types found significant ( -value < 0.05) are listed in Table 4 with river having the greatest relative importance. Changes in river surface was found negatively associated with soil erosion risk, meaning the soil erosion risk decreased as the water level rose. Other land cover types found important include sparse grassland, other woodland (mixed, very sparse woodland and orchards) and shrubs, which were all positively associated with soil erosion. 4.3. Effects of NDVI and Vegetation Diversity Varied at Different Urbanisation Rates The final outputs from the Generalised Linear Model procedure includes two significant ( -value < 0.05) interactions that influenced changes in soil erosion: (i) between in NDVI and urbanisation rate and (ii) between vegetation diversity and urbanisation rate ( Table 5 ). We further classified the data into two groups, according to whether the division’s urbanisation rate had increased greater than 0.1 (from 2000 to 2015) or not. The vegetation diversification (increased Shannon Index) was associated with (i) decreased soil erosion in the divisions with lower increase in urbanisation rate, and (ii) increased soil erosion when the divisions had greater increase in urbanisation rate. The improvement of NDVI was found to have a greater impact on soil erosion reduction when the divisions have a lower urbanisation rate ( Figure 9 5. Discussion The present study provides a large-scale assessment on the spatio-temporal changes in soil erosion risk in the Three Gorges Reservoir Region (TGRR) which has experienced rapid and complex land use/ cover change driven by multi-sectoral policies, e.g., to accommodate population migrated due the Three Gorges Project, pursue urban and rural development, and protect natural resources. Among the 23 administrative divisions focused in this study, eight divisions (Fuling, Shizhongxin, Banan, Beibei, Bishan, Changshou, Jiangjin, Yubei) had relatively low soil erosion estimates in 2000, ranging from 31 to 94 t·km . As of 2015, averaged estimated soil erosion in these divisions decreased by approximately 60%, mainly due the observed decreases in rainfall-based R and land cover-driven factors. Only small changes in the NDVI-based factor between 2000 and 2015 was found, as these divisions are mainly urban areas. The other fifteen divisions had relatively higher estimated soil erosion risk, i.e., from 107 to 548 t·km , in 2000. The decrease in the averaged soil erosion risk was found to be almost 90 percent by 2015. Such a sharp decrease was mainly driven by the factor which decreased significantly as a consequence of improved NDVI, and which overran the effects of the increased factors. The improvement of NDVI in China was found to be associated with successful implementation of reforestation policies [ ] and dense forests are known to provide service of runoff reduction [ ] and soil erosion mitigation [ ]. At the cell-level, areas of intensive and severer soil erosion grades (>2500 t·km ) should be given high priority in conservation management. In 2015, these areas covered only 0.005% of the TGRR, however, contributed to 10.1% of the total regional soil erosion risk (1.10 million River surface area is identified as a key land cover type associated with soil erosion risk in the TGRR. The general increase in river surface between 2000 and 2015 was due to the construction of the Three Georges Dam and the consequent water level rose. The majority of the population previously lived by the river side in the rural TGRR were resettled and most of them (1.3 out of 14.8 million) [ ] were migrated to regions outside the TGRR. It seems that an increased river surface could be linked to a reduced level of local human activities and, thus, lowered anthropogenic driving force of soil erosion. Moreover, previous ground soils susceptible to high erosion risk in the valley area were submerged and excluded from the estimations for the later years. This might also contribute to the estimated reducing trend of soil erosion risk. Previous studies on estimating the soil erosion in the TGRR mainly focused on small watersheds, for example in Kai and Zigui counties, Xinzheng Village and Lianghe Village [ ]. These small watersheds are usually dozens of hectares in size and their estimated soil erosion ranged from 306 to 9452 t·km , which are higher than our large-scale estimations, for example, 250.17 in 2000 and 18.74 t·km in 2015, owing to the fact that the lower level spatial heterogeneity has not been captured due to data limitations. Our estimations are in good agreement with the limited previous large-scale studies on soil erosion risk in the TGRR. Cai et al. [ ] evaluated soil erosion risk in the sub-division of TGRR in Hubei Province and found areas of light and slight soil erosion grades (<2500 t·km ) took up 96.27% of total area which is close to our estimation of 97.72%. Li et al. [ ] reported that counties suffered the highest soil erosion risk in Chongqing Municipality were Fengjie, Wushan and Badong, which are also among the divisions of the greatest soil erosions risk estimated in our study. The differences between our study and these two studies are mainly due to the different data sources used, especially for calculation of the factors. Compared to the daily precipitation data used in our study, Li et al. [ ] estimated the factor based on the monthly precipitation data from Chongqing Municipal Meteorological Bureau. Due to the lack of local data, Cai et al. [ ] calculated the factor in Hubei Province based on a method initially proposed based on the soil conditions in the United States. While in the present study, a modified method by Zhang, et al. [ ] was used and this method is also used in the National Soil and Water Conservation Survey of China. Our results are also comparable to those conducted in areas outside the TGRR. For example, Zhang et al. [ ] studied soil erosion variability in Shanxi Province, on the Loess Plateau of China. They observed high soil erosion rate in areas with high terrain alteration, high slopes, and land with sparse vegetation. In the Bohai Rim (China), Xu et al. [ ] claimed that vegetation coverage and soil erosion control practices are important factors for future soil conservation. The temporal window of this study (from 2000 to 2015) covers the three periods concerned by the China’s National Tenth (2001–2005), Eleventh (2006–2010) and Twelfth (2011–2015) Five-Year Plans. The reductions in average soil erosion over these three five-year periods were 9.3, 3.2 and 1.1 million tons, respectively. In the white book of the Tenth Five-Year Plan, three chapters on saving resources and protecting environment are put under the theme “Population, resources and environment”, with one chapter on urbanisation plan. In the Eleventh Five-Year, five chapters on natural resource management and ecological restoration are organized under the theme “Building a resource-conservative and environment-friendly society”, with three chapters on regional development. In the Twelfth Five-Year, there are six chapters on responding to climate change, developing circular economy and promoting ecological protection under the theme “Green development”, and four chapters planning rural development. The increasing focus on sustainable environment and urbanisation in the recent Five-year Plans shows the determination of the Chinese government to improve environment quality, restore ecosystem and modernise urban and rural developments. It is within this context that not only the TGRR, but also some other regions in China as previously discussed are benefited from conservation actions at different administrative levels. Controlling soil erosion may reflect on how land-use and land-cover patterns are managed and be influenced by policies targeting different sectors. From this aspect, our analysis on the interactions between urbanisation and natural resources management is particularly useful, as it provides more in-depth information, based on which policy implications could be drawn. For example, urban green infrastructure design should take into consideration the likely impact of vegetation diversity on amplifying soil erosion in highly urbanised area. Besides, planting forests which can typically contribute to greater NDVI could help mitigating soil erosion risks in both urban and rural area. 6. Conclusions This study produced timely estimates of recent soil erosion actual change between 2000 and 2015 over the whole of the TGRR. The estimation by the RULSE model showed that the average soil erosion by water in the TGRR is 18.74 t·km^−2·y^−1 in 2015, which equates to a total soil loss of 1.10 × 10^6 t annually and a decrease of 13.6 × 10^6 t from 2000. Areas that suffer from severe soil erosion are estimated in the middle and northern TGRR, in particular, in the Wu Mountains. Soil erosion risk decreased as the water level rose and river surface increased. Two interactions (between NDVI and urbanisation rate, and between vegetation diversity and urbanisation rate) were found to influence in soil erosion actual. The RUSLE method is useful tool to characterise long-term changes in soil erosion actual over a large area. The approaches used in our study are transferable to other areas exhibiting similar socio-environmental conditions. Our results could provide useful information for the development of integrated urbanisation and natural resource management policy on regional soil conservation. Future investigations are encouraged to examine the seasonal dynamics of the TGRR’s actual soil erosion risk due to the periodic water level change, as the total water volume is managed and varies temporally. Author Contributions Conceptualization, S.L. and J.J.; methodology, S.L. and J.J.; software, J.J.; validation, J.J.; formal analysis, J.J.; data curation, J.J.; writing—original draft preparation, J.J.; writing—review and editing, S.L.; visualization, J.J. and S.L.; supervision, H.W. and S.L.; project administration, S.L.; funding acquisition, S.L. This research has received funding from the Innovative Foundation of Huazhong University of Science and Technology under grant agreement number 2018KFYYXJJ133. The authors would like to thank the three anonymous reviewers for their constructive comments that helped to improve the quality of this paper. Conflicts of Interest The authors declare no conflict of interest. 1. He, X.; Hu, Z.; Li, Y. Dynamics of soil erosion at upper reaches of Minjiang river based on GIS. Chin. J. App. Ecol. 2005, 16, 2271–2278. [Google Scholar] 2. Zhang, L.; Bai, K.Z.; Wang, M.J.; Karthikeyan, R. Basin-scale spatial soil erosion variability: Pingshuo opencast mine site in Shanxi province, Loess Plateau of China. Nat. Hazards 2016, 80, 1213–1230. [Google Scholar] [CrossRef] 3. Yang, X. Deriving rusle cover factor from time-series fractional vegetation cover for hillslope erosion modelling in New South Wales. Soil Res. 2014, 52, 253. [Google Scholar] [CrossRef] 4. Chen, D.; Wei, W.; Chen, L. Effects of terracing practices on water erosion control in China: A meta-analysis. Earth-Sci. Rev. 2017, 173, 109–121. [Google Scholar] [CrossRef] 5. Zhou, J.; Zhang, X.; He, D. Soil erosion evaluation of small watershed in Wuling Mountain based on GIS and RUSLE. Resour. Environ. Yangtze Basin 2011, 20, 468–474. [Google Scholar] 6. Lu, Q.; Gao, Z.; Ning, J.; Bi, X.; Wang, Q. Impact of progressive urbanization and changing cropping systems on soil erosion and net primary production. Ecol. Eng. 2015, 75, 187–194. [Google Scholar] [CrossRef] 7. Kong, L.; Zheng, H.; Rao, E.; Xiao, Y.; Ouyang, Z.; Li, C. Evaluating indirect and direct effects of eco-restoration policy on soil conservation service in Yangtze River basin. Sci. Total Environ. 2018, 631–632, 887–894. [Google Scholar] [CrossRef] 8. Berendse, F.; van Ruijven, J.; Jongejans, E.; Keesstra, S. Loss of plant species diversity reduces soil erosion resistance. Ecosystems 2015, 18, 881–888. [Google Scholar] [CrossRef] 9. Wu, J.; Huang, J.; Han, X.; Gao, X. The three gorges dam: An ecological perspective. Front. Ecol. Environ. 2004, 2, 241–248. [Google Scholar] [CrossRef] 10. Li, Y.; Liu, C.; Yuan, X. Spatiotemporal features of soil and water loss in three gorges reservoir area of Chongqing. J. Geogr. Sci. 2009, 19, 81–94. [Google Scholar] [CrossRef] 11. Amundson, R.; Berhe, A.A.; Hopmans, J.W.; Olson, C.; Sztein, A.E.; Sparks, D.L. Soil science. Soil and human security in the 21st century. Science 2015, 348, 1261071. [Google Scholar] [CrossRef] 12. Boardman, J.; Evans, R.; Ford, J. Muddy floods on the south downs, southern England: Problem and responses. Environ. Sci. Policy 2003, 6, 69–83. [Google Scholar] [CrossRef] 13. Xu, K.; Milliman, J.D. Seasonal variations of sediment discharge from the Yangtze River before and after impoundment of the Three Gorges Dam. Geomorphology 2009, 104, 276–283. [Google Scholar] [ 14. Boardman, J.; Poesen, J. Soil Erosion in Europe; John Wiley & Sons: Hoboken, NJ, USA, 2007. [Google Scholar] 15. Yan, D.; Wen, A.; Shi, Z.; Ju, L.; He, X. Critical slope length of rill occurred in purple soil slope cultivated land in Three Gorges Reservoir area. J. Yangtze River Sci. Res. Inst. 2010, 27, 58–61. [Google Scholar] 16. Wen, A.; Zhang, X.; Wang, Y.; Feng, M.; Zhang, Y.; Xun, J.; Bai, L.; Huo, T.; Wang, J. Study on soil erosion rates using 137cs technique in upper Yangtze River. J. Soil Water Conserv. 2002, 16, 1–3. [Google Scholar] 17. Pham, T.G.; Degener, J.; Kappas, M. Integrated universal soil loss equation (USLE) and geographical information system (GIS) for soil erosion estimation in a sap basin: Central Vietnam. Int. Soil Water Conserv. Res. 2018, 6, 99–110. [Google Scholar] [CrossRef] 18. Cai, D.; Li, B.; Zhang, P.; Xu, W.; Hui, B. GIS-based evaluation on potential hazard degree of soil erosion in hubei section of Three Gorges Reservoir area. Water Resour. Hydropower Eng. 2017, 48 , 223–228. [Google Scholar] 19. Ju, Z.; Wen, A.; Yan, D.; Shi, Z. Estimation of soil erosion in small watershed of the Three Gorges Reservoir Region based on GIS and RUSLE. Earth Environ. 2015, 43, 331–337. [Google Scholar] 20. Renard, K.G.; Reddy, K.C.; Yoder, D.C.; McCool, D.K. RUSLE revisited: Status, questions, answers, and the future. Soil Water Conserv. 1994, 49, 213–220. [Google Scholar] 21. Teng, H.; Viscarra Rossel, R.A.; Shi, Z.; Behrens, T.; Chappell, A.; Bui, E. Assimilating satellite imagery and visible-near infrared spectroscopy to model and map soil loss by water erosion in Australia. Environ. Modell. Softw. 2016, 77, 156–167. [Google Scholar] [CrossRef] 22. Gu, J.; Liu, P.; Li, D.; Wu, K. Temporal-spatial characteristics of coordinative development between the ecological and economic systems in the Three Gorges Reservoir area. Ecol. Environ. Monitor. Three Gorges 2019, 4, 22–30. [Google Scholar] 23. Peng, T.; Xu, G.; Xia, D. Trend of geological hazards and countermeasure of disaster reduction in the Three Gorges Reservoir area. J. MT. Sci. 2004, 22, 719–724. [Google Scholar] 24. Zhang, J.; Liu, Z.; Sun, X. Changing landscape in the three gorges reservoir area of Yangtze River from 1977 to 2005: Land use/land cover, vegetation cover changes estimated using multi-source satellite data. Int. J. App. Earth Obs. 2009, 11, 403–412. [Google Scholar] [CrossRef] 25. Lufafa, A.; Tenywa, M.M.; Isabirye, M.; Majaliwa, M.J.G.; Woomer, P.L. Prediction of soil erosion in a lake Victoria basin catchment using a GIS-based universal soil loss model. Agric. Syst. 2003 , 76, 883–894. [Google Scholar] [CrossRef] 26. Panagos, P.; Meusburger, K.; Ballabio, C.; Borrelli, P.; Alewell, C. Soil erodibility in Europe: A high-resolution dataset based on LUCAS. Sci. Total Environ. 2014, 479–480, 189–200. [Google Scholar] [CrossRef] 27. Duan, X.; Gu, Z.; Li, Y.; Xu, H. The spatiotemporal patterns of rainfall erosivity in Yunnan province, southwest China: An analysis of empirical orthogonal functions. Global Planet. Change 2016, 144, 82–93. [Google Scholar] 28. Teng, H.; Ma, Z.; Chappell, A.; Shi, Z.; Liang, Z.; Yu, W. Improving rainfall erosivity estimates using merged TRMM and gauge data. Remote Sens. 2017, 9, 1134. [Google Scholar] [CrossRef] 29. Ma, X.; He, Y.; Xu, J.; van Noordwijk, M.; Lu, X. Spatial and temporal variation in rainfall erosivity in a Himalayan watershed. Catena 2014, 121, 248–259. [Google Scholar] [CrossRef] 30. Sharpley, A.N.; Williams, J.R. Epic. Erosion/Productivity Impact Calculator: 1. Model Documentation. 2. User Manual; United States Department of Agriculture: Beltsville, MD, USA, 1990. 31. Teng, H.; Liang, Z.; Chen, S.; Liu, Y.; Viscarra Rossel, R.A.; Chappell, A.; Yu, W.; Shi, Z. Current and future assessments of soil erosion by water on the Tibetan Plateau based on rusle and CMIP5 climate models. Sci. Total Environ. 2018, 635, 673–686. [Google Scholar] [CrossRef] [PubMed] 32. Renard, K.; Yoder, D.; Lightle, D.; Dabney, S. Universal Soil Loss Equation and Revised Universal Soil Loss Equation; Blackwell: Oxford, UK, 2011. [Google Scholar] 33. Renard, K.G.; Foster, G.R.; Weesies, G.; McCool, D.; Yoder, D. Predicting Soil Erosion by Water: A Guide to Conservation Planning with the Revised Universal Soil Loss Equation (RUSLE); United States Department of Agriculture: Washington, DC, USA, 1997. 34. Rosewell, C. Potential Sources of Sediments and Nutrients: Sheet and Rill Erosion and Phosphorus Sources; Environment Australia: Canberra, Australia, 1997. 35. Xue, J.; Lyu, D.; Wang, D.; Wang, Y.; Yin, D.; Zhao, Z.; Mu, Z. Assessment of soil erosion dynamics using the GIS-Based RUSLE Model: A Case Study of Wangjiagou Watershed from the Three Gorges Reservoir Region, Southwestern China. Water 2018, 10, 1817. [Google Scholar] [CrossRef] 36. Desmet, P.; Govers, G. A GIS procedure for automatically calculating the USLE ls factor on topographically complex landscape units. J. Soil Water Conserv. 1996, 51, 427–433. [Google Scholar] 37. Mhangara, P.; Kakembo, V.; Lim, K.J. Soil erosion risk assessment of the Keiskamma catchment, South Africa using GIS and remote sensing. Environ. Earth Sci. 2011, 65, 2087–2102. [Google Scholar] 38. Wischmeier, W.H.; Smith, D.D. Predicting Rainfall Erosion Losses: A Guide to Conservation Planning; United States Department of Agriculture: Beltsville, MD, USA, 1978. 39. Lu, Q.; Xu, B.; Liang, F.; Gao, Z.; Ning, J. Influences of the Grain-for-Green Project on grain security in southern China. Ecol. Indic. 2013, 34, 616–622. [Google Scholar] [CrossRef] 40. Xu, L.; Xu, X.; Meng, X. Risk assessment of soil erosion in different rainfall scenarios by RUSLE model coupled with information diffusion model: A case study of Bohai Rim, China. Catena 2013, 100, 74–82. [Google Scholar] [CrossRef] 41. Yang, H. The case for being automatic: Introducing the automatic linear modeling (linear) procedure in spss statistics. Mult. Linear Regres. Viewp. 2013, 39, 27–37. [Google Scholar] 42. Keylock, C. Simpson diversity and the Shannon–Wiener index as special cases of a generalized entropy. Oikos 2005, 109, 203–207. [Google Scholar] [CrossRef] 43. Chen, C.; Park, T.; Wang, X.; Piao, S.; Xu, B.; Chaturvedi, R.K.; Fuchs, R.; Brovkin, V.; Ciais, P.; Fensholt, R.; et al. China and India lead in greening of the world through land-use management. Nature Sus. 2019, 2, 122–129. [Google Scholar] [CrossRef] [PubMed] 44. García-Ruiz, J.M.; Beguería, S.; Nadal-Romero, E.; González-Hidalgo, J.C.; Lana-Renault, N.; Sanjuán, Y. A meta-analysis of soil erosion rates across the world. Geomorphology 2015, 239, 160–173. [Google Scholar] [CrossRef] [Green Version] 45. Panagos, P.; Borrelli, P.; Poesen, J.; Ballabio, C.; Lugato, E.; Meusburger, K.; Montanarella, L.; Alewell, C. The new assessment of soil loss by water erosion in Europe. Environ. Sci. Policy 2015, 54, 438–447. [Google Scholar] [CrossRef] 46. Zhang, J.; Liu, X. China Three Gorges construction yearbook. In China Three Gorges Construction Yearbook; Three Gorges Media Corporation: Hubei, China, 2013; p. 120. [Google Scholar] 47. Zheng, H.L.; Wei, J.; Chen, G.J.; Li, Y.B. Review of soil erosion on purple-soil sloping croplands in Three Gorges Reservoir area. J. Chongqing Normal Univ. 2014, 31, 42–48. [Google Scholar] 48. Li, H.; Zhang, X.; Wen, A.; Shi, Z. Erosion rate of purple soil on a cultivated slope in the Three Gorges Reservoir Region using 137cs technique. Bull. Soil Water Conserv. 2009, 29, 1–6. [Google 49. Wen, A.; Qi, Y.; Wang, Y.; He, X.; Fu, J.; Zhang, X. Study on erosion and sedimentation in Yangtze Three Gorge region. J. Soil Water Conserv. 2005, 19, 33–36. [Google Scholar] 50. Zhang, K.; Peng, W.; Yang, H. Soil erodibility and its estimation for agricultural soil in China. Acta Pedol. Sin. 2007, 44, 7–13. [Google Scholar] [CrossRef] Figure 5. Estimated distributions of (a) soil erodibility, K factor; (b) slope length and steepness, LS factor. Figure 8. Spatial patterns of annual soil erosion by water in the Three Gorges Reservoir Region in 2000, 2005, 2010 and 2015. Figure 9. Effects of the interaction (a) between urbanisation and vegetation diversity and (b) between urbanisation and NDVI on soil erosion risk. The blue and red lines indicate that the increase in urbanisation rate was lower and higher than 0.1, respectively. Type Environmental Variables Resolution Source Terrain DEM 30 m SRTM digital evaluation (NASA) Slope 30 m SRTM digital evaluation (NASA) Climate Daily rainfall from 2000 to 2015 - Local meteorological stations Vegetation NDVI from 2000 to 2015 250 m MODIS images Land Land use/cover type (LUCC) at 2000, 2005, 2010 and 2015 30 m Resources and Environment Data Cloud Platform, Chinese Academy of Science Soil property Soil type 1 km HWSD soil database v1.2 (FAO) Sand 1 km HWSD soil database v1.2 (FAO) Silt 1 km HWSD soil database v1.2 (FAO) Clay 1 km HWSD soil database v1.2 (FAO) TOC 1 km HWSD soil database v1.2 (FAO) Land Use Type p Value Reference Paddy fields 0.01 [39] Dry cropland 0.4 [40] Dense forest 1 [40] Shrub 1 [40] Sparse forest 1 [40] Other woodland 0.7 [2] Dense grassland 1 [2] Moderate dense grassland 1 [2] Sparse grassland 1 [2] River 0 [2] Lake 0 [2] Reservoir 0 [2] Mudflat 0 [2] Urban fabric 0 [40] Rural fabric 0 [39] Construction and transportation units 0 [2] Table 3. Changing soil erosion grades in the Three Gorges Reservoir Region between 2000 and 2015. The soil grade classification accords to the Ministry of Water Resources of China (SL190-2007). Soil Erosion Rate (t·km^−2·y^−1) Erosion Grade 2000 2005 2010 2015 Extent (km^2) Proportion (%) Extent (km^2) Proportion (%) Extent (km^2) Proportion (%) Extent (km^2) Proportion (%) <500 Grade 1 (slight) 45,507 77.54 50,057 85.25 54,064 92.08 58,051 98.871 500–2500 Grade 2 (light) 2985 5.09 3045 5.19 1588 2.7 632 1.076 2500–5000 Grade 3 (moderate) 2798 4.77 1622 2.76 1039 1.77 28 0.048 5000–8000 Grade 4 (intense) 2715 4.63 1509 2.57 710 1.21 3 0.005 8000–15,000 Grade 5 (extremely intense) 2478 4.22 1327 2.26 694 1.18 0 0 >15,000 Grade 6 (severe) 2203 3.75 1156 1.97 619 1.06 0 0 Land Cover Type Coefficient Standard Deviation T p-Value 95% Confidence Interval Importance Low Limit Upper Limit River −6.199 1.482 −4.181 0.001 −9.378 −3.019 0.308 Sparse grassland 17.201 4.595 3.743 0.002 7.346 27.057 0.247 Other woodland 6.860 2.512 2.730 0.016 1.471 12.248 0.131 Shrub 1.977 0.756 2.614 0.020 0.355 3.598 0.120 Variables B Standard Deviation 95% Confidence Interval p-Value Low Limit Upper LIMIT Intercept 167.232 53.2636 62.837 271.627 0.002 NDVI –4817.538 552.0762 –5899.588 –3735.489 0.000 NDVI × Urban rate 39,743.093 11,681.8554 16,847.077 62,639.109 0.001 Urban rate × Vegetation diversity –5113.203 1804.2045 –8649.379 –1577.027 0.005 © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/ Share and Cite MDPI and ACS Style Jiu, J.; Wu, H.; Li, S. The Implication of Land-Use/Land-Cover Change for the Declining Soil Erosion Risk in the Three Gorges Reservoir Region, China. Int. J. Environ. Res. Public Health 2019, 16, 1856. https://doi.org/10.3390/ijerph16101856 AMA Style Jiu J, Wu H, Li S. The Implication of Land-Use/Land-Cover Change for the Declining Soil Erosion Risk in the Three Gorges Reservoir Region, China. International Journal of Environmental Research and Public Health. 2019; 16(10):1856. https://doi.org/10.3390/ijerph16101856 Chicago/Turabian Style Jiu, Jinzhu, Hongjuan Wu, and Sen Li. 2019. "The Implication of Land-Use/Land-Cover Change for the Declining Soil Erosion Risk in the Three Gorges Reservoir Region, China" International Journal of Environmental Research and Public Health 16, no. 10: 1856. https://doi.org/10.3390/ijerph16101856 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/1660-4601/16/10/1856","timestamp":"2024-11-11T13:34:19Z","content_type":"text/html","content_length":"463176","record_id":"<urn:uuid:ef06cbdb-4a9f-4dd3-b9a1-7e1ce2cbe173>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00405.warc.gz"}
Notes on <i>Notes on the Synthesis of Form</i> Notes on the Synthesis of Form is a book written by architect and design theorist Christopher Alexander and originally published in 1964. It has become highly influential in a number of fields: mainly architecture, urban planning and computer science slash software engineering, for its highly original and powerful logical, even mathematical approach to design. I read the book recently and really enjoyed it. I felt like it gives words and a solid theory to a deeply intuitive idea: that designing or building something requires a very detailed and studied understanding of what you are designing or building it for. This simple idea almost sounds like a tautology, but it is remarkable how often it is abused, ignored or degraded. Below are a few notes and thoughts I had while reading the book, which I would really recommend to anyone interested in designing and building systems of any kind. Part I The first part of the text is full of analogies and metaphors, which at the beginning try to capture the highly intuitive notion of “fit” and “misfit” of forms of all kinds. The very first metaphor is of iron filings in a magnetic field: the process which dictates the form that the filings jump into when the field is switched on is not visible, but certainly exists. The same is true, Alexander reckons, for many other kinds of invisible forces: cultural, environmental, physical, human, which inform and ultimately shape the form of all manner of things in the world. The central focus of this part is on the difference between what Alexander calls “selfconscious” and “unselfconscious” cultures. He is hesitant to give a more loaded meaning, but it is clear from the examples that he gives that he is talking about pre- and post-enlightenment thinking. Individualism and skepticism of tradition are very problematic for the kind of generational form-making processes, like those which design the certain dwellings that tribes around the world build depending on their environment. These processes happen through the accumulation of small “corrections of misfits” – fixing things when they are broken – across generations, with no single person acting as anything more than an agent in some natural process of convergence to an equilibrium. When theories and opinions emerge, these cause fatal friction to these natural form-making processes. He argues that selfconscious cultures have a tendency to try to break large design problems down into subproblems based on contrived divisions, ultimately sourced from our ambiguous and culturally loaded languages, which do not reflect the subdivisions that would emerge in the “natural” process (all feels a bit inspired by Plato’s Cave). Detailed analysis of the problem of designing urban family houses, for instance, has shown that the usually accepted functional categories like acoustics, circulation, and accommodation are inappropriate for this problem. Similarly, the principle of the “neighborhood,” one of the old chestnuts of city-planning theory, has been shown to be an inadequate mental component of the residential planning problem. (p66) Part II The second part of the book presents a framework for studying and decomposing design problems which Alexander suggests is free of this bias. He proposes a new method, an additional abstraction on top of the existing “selfconscious” method, which “eradicates its bias and retains only its abstract structural features; this second picture may then be examined according to precisely defined operations, in a way not subject to the bias of language and experience” (p77). He does this by venturing into pure mathematics, specifically set theory, graph theory and probability theory, something which you do not see in the architectural literature very often I imagine. He imagines enumerating all the “misfit” variables which represent “individual conditions which must be met at the form-context boundary”, \(x_1, x_2, \ldots, x_m\) – this is the set \(M\). He then associates with \(M\) a second set \(L\) – a set of non-directed, signed, weighted edges between elements of \(M\). The weights are negative if they indicate conflict, and positive if they indicate concurrence, and may also be weighted to indicate strength of interaction. This is just a line graph, referred to in Alexander’s day as a “linear graph”. He then goes on to define “decompositions” of the set \(M\) into hierarchical tree-like structures, and the goal of the “program”: to find some clustering of nodes of this graph which most naturally seems to fit the underlying link structure \(L\). At this stage it felt like we were about to solve some well-known NP-hard graph theory problem, but instead Alexander takes a detour into the theory of diagrams, defining a “constructive” diagram to be something that is both a form diagram and a requirement diagram. To explain this, he gives the genius example of how to specify the requirement that traffic can flow without congestion at a junction between two streets. You could provide a table of numbers with the relative frequency of each of the different paths at the junction, or you could compose the diagram below which presents the information, but also very strongly hints at its implications on the solution’s form. He claims that constructive diagrams of this sort are the output of this program. Some of the “misfit” variables are easy to deal with because they exist on some measurable, numeric scale that allows for the establishment of a “performance standard”. For example we can establish fairly well that the minimum width for a highway lane is 11 feet because of what we know about modern cars and motorway speeds. This means the misfit variable corresponding to “is this motorway safe from between-lane accidents” is simple. But the real interesting parts of design, and of life of course, lie in those things which do not have such a numerical interpretation. Some typical exam­ples are “boredom in an exhibition,” “comfort for a kettle handle,” “security for a fastener or a lock,” “human warmth in a living room,” “lack of variety in a park.” (p98) Here comes the blow to my mathematical fantasy: “a design problem is not an optimization problem”. He means this in the sense that we lose optimality on many of these “richer” “performance standard” axes by restricting their misfit variable to be binary. And now we come to the three central questions: 1. How can we get an exhaustive set of variables \(M\) for a given problem; in other words, how can we be sure we haven’t left out some important issue? 2. How do we know that all the variables we include in the list \(M\) are relevant to the problem? 3. For any specific variable, how do we decide at what point misfit occurs; or if it is a continuous variable, how do we know what value to set as a performance standard? In other words, how do we recognize the condition so far described as misfit? As to the first one, as you would expect, the advice is to give up. \(M\) can only really ever be a “temporary catalogue of those errors which seem to need correction”. He then defines the “form domain” \(D\) as “the totality of possible forms within the cognitive reach of the designer given the context”, and for each misfit variable \(x_i\) defines \(p(x_i=1)\) to be the proportion of forms in \(D\) which are a misfit for \(x_i\). In this way we can define correlation coefficients \(c_{ij}\) between pairs \(x_i,x_j\) of misfits in the usual statistical way. 1. If \(c_{ij}\) is markedly less than \(0\), \(x_i\) and \(x_j\) conflict; like “The kettle’s being too small” and “The kettle’s occupying too much space.” When we look for a form which avoids \(x_i\), we weaken our chances of avoiding the other, \(x_j\). 2. If \(c_{ij}\) is markedly greater than 0, \(x_i\) and \(x_j\) concur; like “the kettle’s not being able to withstand the tem­perature of boiling water” and “the kettle’s being liable to corrode in steamy kitchens.” When we look for materials which avoid one of these difficulties, we improve our chances of avoiding the other. 3. If \(c_{ij}\) is not far from 0, \(x_i\) and \(x_j\) exhibit no noticeable interaction of either type. We can use these \(c_{ij}\) as the weights of links in our graph. He notes that this gives a way to estimate these weights empirically, by measuring the existing forms. But these of course may (almost certainly will) provide a biased sample of \(D\). Instead he suggests searching for causal connections between variables. It really is wild to think this book was written in 1964. He puts the onus on the designer now: We shall say that two variables interact if and only if the designer can find some reason (or conceptual model) which makes sense to him and tells him why they should do so. […] \(L\), like \(M \), is a picture of the way the designer sees the problem, not an objective description of the problem itself”. (p109) He wants to search the graph for dense subclusters, where he believes can be found “particularly strong identifiable physical aspect[s] of the problem”. (p122) I think this sounds a lot like the min-cut problem. Into the maths now: for a partition \(\pi\) of a set \(S\) we define a “measure of information transfer” \(R(\pi)\). We then use this to partition \(M\), and then successively partition the subsets in the partition, until we obtain a full tree of sets where each leaf is a singleton. He claims that the output of this process, if your set \(M\) and data \(L\) is rich enough, is a constructive diagram that can provide insights about the desired form. The next part of the text is a 36-page case study, using the methods derived in the book to determine the form of a rural Indian village, which produces the following constructive diagram: After that, the maths: Alexander uses probability theory and graph theory to derive a closed-form equation for his \(R(\pi)\) function for information transfer between subsets of a partition of the nodes a line graph. I found this fascinating, but it might be over some peoples’ heads; I think if you’re interested the best thing to do is to read the Appendix 2. I will add the equation for good \[R(\pi)=\frac{\frac12 m(m-1)\sum_{\pi}v_{ij} - \ell \sum_{\pi}s_{\alpha}s_{\beta}}{\Bigl[\Bigl(\sum_{\pi}s_{\alpha}s_{\beta}\Bigr)\Bigl(\frac12m(m-1)\Bigr)-\sum_{\pi}s_{\alpha}s_{\beta}\Bigr]^{\ So overall a great book, a really good example of how abstraction can help provide clarify when properly considered. Lots of very big ideas in here about things that just seem so natural that it’s weird no one has thought of them before. Alongside the theoretical purity and originality of Alexander’s proposed method, I really enjoyed thinking about those “unselfconscious” processes, the kinds of things which are becoming rarer and rarer in the world today. The “enshittification” of forms of all kinds throughout the world can maybe be thought of as a loss of connection with these more primitive form-making processes.
{"url":"https://www.clintonboys.com/synthesis-of-form/","timestamp":"2024-11-07T14:00:13Z","content_type":"text/html","content_length":"21039","record_id":"<urn:uuid:0acd3065-555d-4982-9dc8-dc6fb03e2910>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00145.warc.gz"}
BUG (or misunderstanding): tensor simplification 2961 Views 3 Replies 0 Total Likes BUG (or misunderstanding): tensor simplification I'm using Mathematica 9, so tell me if this is fixed in 10. Steps to reproduce: Enter and Evaluate: Assuming[Element[M , Matrices[{2, 2}]] && Element[Z , Matrices[{2, 2}] ], Simplify[M == M Z]] This returns "False". However, this is not correct. Consider: Assuming[Element[M , Matrices[{2, 2}]] && Element[Z , Matrices[{2, 2}] ], Simplify[M == M Z /. Z -> {{1, 1}, {1, 1}}]] Obviously, {{1,1},{1,1}} is n element of the domain, which Mathematica knows: Element[{{1,1},{1,1}}, Matrices[{2,2}] 3 Replies Yes, I think we agree. I hope it was obvious that I was saying it should return "M == M Z", rather than "False" There are solutions where M is not all zeros and Z is not all ones. True - and that means, that the mentioned result is not almost always correct (as I said), because there is not a finite set of exceptions, but an infinite set of exceptions. In[2]:= $Version Out[2]= "10.0 for Microsoft Windows (64-bit) (June 29, 2014)" In[6]:= Assuming[Element[M, Matrices[{2, 2}]] && Element[Z, Matrices[{2, 2}]], Simplify[M == M Z]] Out[6]= False With this elementwise multiplication has been ordered. The result is almost always correct, with the exception of Z = {{1,1},{1,1}} or M = {{0,0},{0,0}} - over the complex numbers (which is the default field in Mathematica), so, with other words, the result is itself false. If matrix multiplication is ordered, the outcome is In[9]:= Assuming[Element[M, Matrices[{2, 2}]] && Element[Z, Matrices[{2, 2}]], Simplify[M == M . Z]] Out[9]= M == M.Z In[10]:= Assuming[Element[M, Matrices[{2, 2}]], Simplify[M == M . IdentityMatrix[2]]] Out[10]= M == M.{{1, 0}, {0, 1}} Here Mathematica in fact states I don't know, which is the usual outcome for unspecified symbols: In[11]:= a == b + c Out[11]= a == b + c In[12]:= a == b + c /. {a -> 7, b -> -1, c -> 8} Out[12]= True Equal[] returns True if lhs and rhs are identical. For M = M Z or M = M . Z this is unkown, because they do not evaluate to something. You mean a check if assuming Element[M, Matrices[{2, 2}]] && Element[Z, Matrices[{2, 2}]] is then Element[M Z, Matrices[{2,2}]] or Element[M . Z, Matrices[{2,2}]] true? You cannot test this with Equal[]. A way is In[16] $Assumptions = {M \[Element] Matrices[{2, 2}], Z \[Element] Matrices[{2, 2}]}; Out[14]= {M \[Element] Matrices[{2, 2}, Complexes, {}], Z \[Element] Matrices[{2, 2}, Complexes, {}]} In[33]:= M Z // TensorDimensions During evaluation of In[33]:= TensorDimensions::ttimes: Product of nonscalar expressions encountered in M Z. >> Out[33]= TensorDimensions[M Z] In[36]:= {{x1, x2}, {x3, x4}} {{y1, y2}, {y3, y4}} // TensorDimensions Out[36]= {2, 2} In[16]:= M. Z // TensorDimensions Out[16]= {2, 2} In[34]:= $Assumptions = {A \[Element] Arrays[{2, d, 4}], B \[Element] Arrays[{d, d}]}; In[35]:= TensorDimensions[A\[TensorProduct]B] Out[35]= {2, d, 4, d, d} elementwise multiplication is not recognized assumptions-symbolically under Mathematica 10 ... Yes, I think we agree. I hope it was obvious that I was saying it should return "M == M Z", rather than "False" There are solutions where M is not all zeros and Z is not all ones. Consider: M = {{0,2},{0,3}} and Z = {{4,1},{5,1}} The fact that TensorDimensions[M Z] fails is probably a clue to where the problems lie. Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use
{"url":"https://community.wolfram.com/groups/-/m/t/314312","timestamp":"2024-11-14T15:44:05Z","content_type":"text/html","content_length":"106751","record_id":"<urn:uuid:f66a6c33-7ee6-423e-bcd2-6933f5835ec8>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00495.warc.gz"}
Discounting Tutorial Continued │Discounting Tutorial Continued│ Mortgages and loans are bought or sold at discounts or premiums. A mortgage that is Completely Discounted is discounted to the full amortization period. A Term Discounted mortgage is discounted only to the end of the term which is less than the amortization period. An IRD Discount is just another way of looking at an IRD calculation, which is the "penalty" paid to a lender for a premature exit from a mortgage before the term expires. The Total Cost of Borrowing (TCOB) also known as APR in the USA is a variation of a discount calculation. When your credit card company offers you a 9% interest rate but charges a yearly fee of $50 you are actually paying a Credit Card Premium over and above the 9% rate. The Discounting program is a very flexible tool that allows what appears to be many different types of calulations to be performed quickly. Complete Discount Cost of Borrowing Credit Card Premium IRD Discount Term Discount Complete Discount A $100,000 mortgage or loan at 8.25% rate amortized for 30 years to a lender is just a cash flow of 360 payments of $751.267 per month. If the the lender wants to sell that mortgage (or sell that 30 year cash flow) to an investor who will accept a 7% return, then the investor pays a premium of $10,567.33 (C) for a total amount of $110,567.33 (B) to buy the mortgage. (Screenshot 1) On the other hand if that same investor wanted a return of 10.1% on that same cash flow he would pay $84,891.74 (B) for the mortgage a discount of $15,108.26. (Screenshot 2) Cost Of Borrowing Effective Interest Rates: The Total Cost of Borrowing, TCOB, is nothing more than a fair method for showing borrowers the impact of the total fees charged by a Lender or Mortgage Broker on the quoted annual interest rate. The total fees are typically up front moneys paid by the borrower to the Lender or Mortgage Broker or both. First of all, nominal rates are always associated with an effective rate because of the compounding. Thus a 12% nominal rate with semi-annual compounding has an effective rate of 12.36%. An 11.7106% nominal rate with monthly compounding has an effective rate of 12.36%. Isn't that a coincidence! You can now see the advantage of quoting an effective interest rate, EIR, because there is no need to be concerned with the method of compounding. A lender quoting you an EIR of 12.36% on a loan of $1000 for one year actually earns $123.60 in interest on the initial $1000 loan for a total return of $1,123.60 at the end of the year. The intent of the TCOB legislation in Canada, is to ensure that the lender, or mortgage broker, will quote you the effective interest rate, EIR, of the new rate after all the costs (fees and points ect) are factored into the loan. In the USA the TCOB is often called the APR, Annual Percentage Rate. It matters not what you call it as long as you understand the mathematics. (Screenshot 1) TCOB for Term: A Lender hands you a cheque for your $45,000 mortgage and you hand your mortgage broker a cheque for $3,500 to cover his/her total fee. The monthly payments for a loan of $45,000 are $464.35 based on an amortization period of 25 years, using semi-annual compounding. In effect you are receiving $41,500 but will be making monthly payments of 464.35 as if you borrowed the $45,000. The impact of the $3500 total fee is going to be spread over 60 months. This is reality if you intend on selling this home in 60 months. The Total Cost of Borrowing (TCOB) is 14.84% if the $3500 fee is spread over 60 months as compared to the full amortization period. The difference between $3,500 and $3,500.24 is due to the fact that the DISCOUNTING module uses an iterative approach to calculating and in order to get the two numbers to agree exactly the calculation time would be extremely slow even with current Pentiums. (Screenshot 2) To save time if you are performing many calculations you can click on the "High Speed, Standard Precision" selection rather than the "Standard Speed, High Precision" selection. The difference of $4.27 in the calculated premium (B) is not significant as the EIR/TCOB is still 14.84% at two decimal places. TCOB for Amortization Period: TCOB spread over the entire Amortization Period A Lender hands you a cheque for your $45,000 mortgage and you hand your mortgage broker a cheque for $3,500 to cover his/her total fee. The monthly payments for a loan of $45,000 are $464.35 based on an amortization period of 25 years, using semi-annual compounding. In effect you are receiving $41,500 but will be making monthly payments of 464.35 as if you borrowed the $45,000. The term is 25 years the same as the amortization period. There are still some types of mortgages like these around. The impact of the $3500 total fee is going to be spread over 300 months so change the Discount term to 300 months. The TCOB is 13.67% (Screenshot 3) Credit Card Premium When a credit card company states a nominal interest rate for your card and then charges you a yearly fee, the effective interest rate that you are paying is HIGHER. Time and money are related in an exact relationship. Not all card companies follow the same methods of interest calculation but a quick and simple method is as follows; (Screenshot 1) If you borrowed $1000 for a year, but paid the lender a fee of $50 upfront (in advance) for the privilege of doing business in the upcoming year, and the lender quoted you a nominal rate of 12%, YOU would actually be paying a nominal interest rate of 21.86%, which is an effective rate of 24.19% (the TCOB% or the Truth in Lending Rate is 24.19%). The real interest rate you are paying in effect has doubled. That is why financial planners usually tell people to pay off the plastic cards ASAP! IRD Discount Two years ago, Bob and Mary borrowed $152,135 amortized for 27 years, and signed a five year term at an interest rate of 11%. Now, three year term interest rates are 7%. Bob and Mary would like to pay 7% interest for the remaining 3 years of monthly payments, instead of 11%. The Lender asked them for a $15,440 interest rate differential (IRD) payment along with their 24th monthly payment if they wanted the interest rate lowered to 7% for the remaining 3 years. How can Bob and Mary make an informed decision about paying the $15,440? This is the same example as the IRD example in the Mortgage Article section. The only difference is the way one thinks about discounting. Instead of thinking of the IRD amount of $15,440.04 as a "penalty" just think of it as the amount of money required to give to the lender so that the lender make a return of 7% instead of 11% over the next 36 months. (Screenshot 1) Term Discount A $100,000 mortgage or loan at 8.25% rate amortized for 30 years to a lender is just a cash flow of 360 payments of $751.267 per month. If the the lender wants to sell that mortgage to an investor who will accept a 7% return, for just 36 months, then the investor pays a premium of $3,334.35 (C) or a total amount of $103,334.35 (B) for the 36 month cash flow. (Screenshot 1) If that same investor wanted a 12% return over the 36 months he would get a discount of $9,302.16 (C) and pay only $90,697.84 (B) for the 36 month cash flow. (Screenshot 2)
{"url":"http://amortization.com/discounting_tutorial.htm","timestamp":"2024-11-03T09:34:01Z","content_type":"text/html","content_length":"19959","record_id":"<urn:uuid:5523ae32-9ad1-4b61-85a6-2f704782ca44>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00539.warc.gz"}
Has the number of draws in chess increased? Draws in Chess over the Last 40 Years A Statistical Analysis Chess is largely an information game, and over the past 50 years, data (games of players) has become increasingly easier to access due to advancements in technology and personal devices. As such, is there a correlation between the number of top-level games and the number of draws over the course of the last 40 years? Is this due to the increasing number of games that are available for research? As former U-14 world chess champion in chess, I often get asked this question by my friends or casual players. Many people are interested in the statistical aspects of chess, even if they do not play chess. The problem of draws often arises within the competitive chess-playing community, where I have heard a lot of complaints about how draws are too common, which makes viewing high-level games boring. When I play a game of chess, I know that there are three possible results, a white win, draw, or a black win. For a lot of sponsors and players alike (me included), chess is only really interesting if there is a clear winner. Personally, I only want to win. Thus, a common problem that has arisen in chess is that there is a lack of sponsorship because chess is often considered “boring”, when games may last up to six hours. To the common (non-chess player) viewer, this may seem dull. For this reason, some organizers have set up new types of tournaments and introduced faster time controls as an attempt to draw interest. Decisive games in strong tournaments are what catch people’s interests, what draws viewers. Games that ended in draws can be picked out from the ChessBase database and analyzed. When choosing games to analyse, I selected them based on the strength of the players according to the Elo system. It can be assumed that anyone with an Elo of 2600 and above is very strong. The number of years was chosen based on the first decade (1970) in the 20th century in which a large number of games above an Elo of 2600 are available in Mega Database 2017. By ensuring the games used are of high quality, there can be at least some guarantee that these are the games common players are most interested in viewing. As tournament directors wish to ensure that their tournaments can receive sponsorship and viewership, it is important that the games do not all end in draws. In the early 20th century, collecting games and storing them became common practice for strong major tournaments. Initially, the only major tournaments that had their games retained were national championships of countries with a strong chess culture at that time (such as Germany, the USSR, Yugoslavia), Interzonals, World Championships and Zonal qualifiers. Thus, there are not many games available for research from the early 1900s. Nowadays, all tournaments that are affiliated with FIDE (the World Chess Federation) require the recording and inputting of games, which leads to large amounts of information available for research and public consumption. Because there are millions of games in the database, to ease research and to produce results based on only the strongest tournaments (as these attract the most attention), I believed that only games of the world’s top players — where both are above an Elo of 2600 — will be looked at. However, it is important to note that the lack of high-rated games before 1970 can be attributed to the fact that ratings were not in general use yet, and by no means indicates there were not strong games played. The time frame of the collected games used for this analysis will be from 1971 to August 3rd, 2017. The games are taken from Mega Database 2017, with updated games starting from December 2016 from The Week In Chess. It is important to note that I did not select games of a specific time control — thus after faster time control games became mainstream (think blitz and rapid championships) there may have been more decisive games, thus affecting the draw rates. To prove my point about how the numbers of recorded games have gone up, the bar chart below shows the total number of games collected per year. Please note that, at the time of writing, 2017 had not yet ended, which explains the low bar in 2017. Total number of games per year, from 1970 to the start of August, 2017, by players who both have an Elo above 2600 (click or tap to enlarge) │Result │Games│Percentage │ │White Win │22640│28.85% │ │Black Win │14121│18.00% │ │Draw │41697│53.14% │ │Total │78468│100% │ The table above shows an overall summary of all the games used in this analysis. Out of the total number of 78,468 games played by players who both have an Elo above 2600. This is illustrated more clearly in a pie chart of the same data. As can be seen, the majority of games are drawn. There are various reasons for this. Due to the fact that only top-level games were considered, most of the time, both players are fairly evenly matched, which results in equal play and an eventual draw. Likewise, players at the top level are more likely to play “safe”, meaning they will try to play for a draw from the very start, as they have more to lose. There are also many ways for a draw to occur, such as the three-fold repetition, which, according to Article 5 of the Official FIDE Handbook, means “the game may be drawn if any identical position is about to appear or has appeared on the chessboard at least three times.” Other ways to draw include, “agreement between the two players during the game. This immediately ends the game” as well as the "fifty move rule": “if each player has made at least the last fifty consecutive moves without the movement of any pawn and without any capture.” The number of rules that can cause a draw makes it more likely for the result to happen. Let’s take a look at the draw rates per year, based on games where both players had an Elo above 2600. Please note that games that did not have inputted ratings have been omitted because a search function by Elo was used. │Year│Total│Draw│ % ││Year│Total│Draw│ % │ │1971│180 │96 │53.33% ││1995│2134 │1084│50.80% │ │1972│132 │78 │59.09% ││1996│2380 │1278│53.70% │ │1973│166 │106 │63.86% ││1997│2342 │1270│54.23% │ │1974│204 │132 │64.71% ││1998│1133 │601 │53.05% │ │1975│252 │180 │71.43% ││1999│1049 │561 │53.48% │ │1976│128 │96 │75.00% ││2000│1430 │819 │57.27% │ │1977│238 │146 │61.34% ││2001│1319 │746 │56.56% │ │1978│172 │102 │59.30% ││2002│1567 │812 │51.82% │ │1979│186 │116 │62.37% ││2003│1320 │737 │55.83% │ │1980│274 │186 │67.88% ││2004│1897 │984 │51.87% │ │1981│286 │174 │60.84% ││2005│2218 │1264│56.99% │ │1982│294 │176 │59.86% ││2006│2335 │1244│53.28% │ │1983│312 │208 │66.67% ││2007│3102 │1630│52.55% │ │1984│382 │268 │70.16% ││2008│3271 │1812│55.40% │ │1985│276 │170 │61.59% ││2009│3901 │2006│51.42% │ │1986│386 │212 │54.92% ││2010│3741 │1997│53.38% │ │1987│466 │246 │52.79% ││2011│3719 │2063│55.47% │ │1988│754 │482 │63.93% ││2012│3893 │2077│53.35% │ │1989│926 │538 │58.10% ││2013│4677 │2392│51.14% │ │1990│706 │408 │57.79% ││2014│4732 │2282│48.22% │ │1991│1088 │558 │51.29% ││2015│4917 │2480│50.44% │ │1992│1382 │734 │53.11% ││2016│5429 │2642│48.66% │ │1993│2036 │1068│52.46% ││2017│2363 │1316│55.69% │ │1994│2350 │1120│47.66% ││ │ │ │ │ Table of drawing percentage per year The above table shows the total amount of games played and the rates for draws between the start of 1971 to early August 2017. At first glance, it is evident that the number of recorded games has been gradually rising, although there appears to be no real correlation between the number of draws and the number of games played per year. The data is displayed in a more coherent manner in the scatter plot below: (Click or tap to enlarge) Firstly, before 1993, the relationship between the percentages of drawn games is insignificant compared with year, as it fluctuates between 52% to 70%. However, after 1993, the rate stays at around 50%. The data is further analyzed using the least squares regression formula, to determine any correlation after the year 1990. This is relevant because, as seen before, there is not a lot of data (refer to the raw data table in the appendix for specific numbers of games collected) between the years before 1993. This signifies that the percentage of draws may be related to the number of games available for analyzing. The number of games collected from before 1993 is lower due to many reasons, such as the lesser number of people playing, as well as the number of organized tournaments. Nowadays, tournaments are organized much more frequently. Calculations are useful in the real world as they can provide statistical evidence for correlation, averages, and the significance of the relationship of data. For chess, in this case, a line of best fit can visually show the correlation between two variables, which in this case will be the percentage of games drawn to the year. Least Squares Regression To find the line of best fit, statistically, the least squares regression formula is used. y is the predicted value a is the slope b is the y-intercept Taking the table from above and assuming a linear correlation: y = –0.0031x + 6.7835 Taking a closer look at the linear regression line starting from the 1990s, in which the number of games went up drastically, it can be seen that the percentage of draws has been fairly even. A reason for the horizontal line for draw rates may be that computers became more popular, and tools for analyzing chess started to appear. Thus, games played became more accurate, with both sides at top levels making fewer mistakes, leading to an even number of draws. In this situation, the slope of the line is given by: y = –0.0005x + 1.55 It turns out that the average is fairly close to the line of best fit given by this scenario. This graph once again shows that the number of games collected has not caused the number of draws to The average is calculated by adding up all the data and then dividing by the total number of data points. The average is convenient for looking at what is the overall percentage of draws and compare each year in relation. In this case, to calculate the average percentage of draws per year: and n is the total number of years. Using the data from the table of raw data above, since 2674.08% and n = 47 The average number of draws per year can be rounded to 56.90%. This leads to standard deviation, which can be used to further analyse any correlation between the numer of draws. Standard Deviation In statistics, standard deviation (s) is a measure of the dispersion of the data from the mean. It will be used in this case to calculate how close each data point is to the mean. It is expressed as: x is each individual score; n is the number of data points; ∑ is the sum of the values; σ is the standard deviation. A high standard deviation means that the data point is spread out, while a low one indicates that each data point is close to the average. The calculated standard deviation in this situation is 6.3025%, which means that each data point is fairly close to the average. This means that even with more games played, there has been no noticeable rise in the number of draws each year. Though the previous graphs and data analysis did not prove the hypothesis of the number of top-level games and the number of draws, over the course of the last 40 years and an increasing number of games that are available for research, an interesting trend can be seen in the following graph: If we graph the number of games collected in a year against the percentage of games that were drawn for that specific number of games collected, we can see clearly that draws are the most common results. However, it is important to look at the y-axis and see the number of collected games for each draw rate. As it turns out, the lower the number of games collected each year, the higher the percentage of draws. This graph appears to be exactly opposite to the hypothesis; increasing the number of games that are available for research has, in fact, lowered the percentage of games drawn. However, it can be assumed that a lack of data may be the reason for the high number of draws. Quite evidently, the higher the number of games played, the higher the number of draws. This is proven by the linear line of best fit, which shows a direct linear correlation. The previous graph supports the conclusion drawn from this one, which was that an increasing number of games available for research lowered the percentage of games drawn. While there appears to be no correlation between the number of games played and the number of draws that occur at high-level chess, draws are still the most frequently seen result in the game of chess. Whether or not tournament directors will be able to change this through shorter time controls or other methods is up for debate. However, it can be assumed that only when there is a lack of information, there is a significantly high number of draws at the highest levels. For the years 1971 until the early 1990s, not a lot of games were collected. While there is a high number of draws, it cannot be assumed that the number of games collected is the cause of the number of draws, because of the lack of data. However, from the 1990s onward, the relative percentage of draws each year stabilizes to around 50%. We can, therefore, conclude that with a higher number of collected games, there will be a more consistent number of draws. I concluded that there is no correlation between the number of top-level games and the number of draws over the course of the last 40 years. Nonetheless, further research could be conducted on the draw rates of lower level players, which would provide an interesting contrast to the data of higher-level players, because at lower levels games are more likely to have a definite result due to the frequency of mistakes. As players in chess are only becoming stronger, it can be assumed that the rate of draws will stay close to the current prediction of 50%. I am very grateful for the review and feedback I received from Dr. John Nunn, Ken Thompson and Jeff Sonas. Their comments encouraged me to look deeper into the reasons for the trends and to draw a more fitting conclusion about the situation of draws in chess. WGM Qiyu Zhou [pronounced Chee-you Jo], born in 2000, is a Canadian chess player who has competed for team Canada at the Women's Chess Olympiad since 2014 and who won the Canadian women's championship in 2016. Qiyu learned to play chess at the age of four in France. In late 2004 the family moved to Finland, and Qiyu won the Finnish Youth Chess Championships five times (in 2005, 2007, 2008, 2009 and 2010) in the U10 Open section. Also in 2010, she won the Nordic School Chess Championships in the U11 Open division in Sweden. In 2008, she won the silver medal in the U8 Girls section at the World Youth Chess Championship in Vung Tàu, Vietnam. In 2011, Qiyu transferred chess federations from Finland to Canada. She won the Canadian Youth Chess Championship in 2012 and 2013, in the Girls U-12 and Girls U-14 sections respectively. She won the Girls U-14 World Youth Championships in Durban, South Africa, 2014. Also in 2014, Zhou made her debut at the Women's Chess Olympiad in Tromsø, Norway. She played board four for the Canadian team scoring 6½/9 points. In the same year she also took part in the World Youth Under-16 Chess Olympiad in Gyor, Hungary playing board four for team Canada, which finished fifth. She finished first in the U-18 Girls category at the North American Youth Chess Championships in 2015 Toluca, Mexico. As a result, she was automatically awarded by FIDE the title Woman International Master (WIM). In September 2016, Zhou won the Canadian women's championship and as a result qualified to play in the Women's World Chess Championship 2017. You can watch a speech she did on how to achieve one’s goals.
{"url":"https://en.chessbase.com/post/has-the-number-of-draws-in-chess-increased","timestamp":"2024-11-02T02:23:13Z","content_type":"application/xhtml+xml","content_length":"114967","record_id":"<urn:uuid:e504c64a-9557-4809-9f26-4804c2826f4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00027.warc.gz"}
Number of unequal integers with sum S • A • Thread starter Jarfi • Start date In summary: Nsets with s(i) < R for all i).In summary, the conversation discusses a problem of finding the number of unequal numbers that add up to a given sum S, with a fixed number of parts and a maximum number R. The R is important in the calculation and the problem is equivalent to a restricted unequal partitions problem. The goal is to find a technical solution that is less computationally intensive for large numbers. TL;DR Summary Number of unequal numbers with sum S Hello, I've been trying to solve this problem for a while, and I found a technical solution which is too computationally intensive for large numbers, I am trying to solve the problem using Combinatorics instead. Given a set of integers 1, 2, 3, ..., 50 for example, where R=50 is the maximum, and a sample of n numbers from this set, what is the number of ways (number of sets) where sum(n) = S R=50, n=4, S = 13 1+2+3+7 = 13 1+2+4+6 = 13 1+3+4+5 = 13 is the possible configurations for 13. But for say 16, we could have 1 + 3 + 4 + 8 1 + 2 + 5 + 8 1+ 2 + 6 + 7 etc .. As you can see, the number of equivalent sums increases the higher S becomes. Before I confuse you with what I tried does anybody have an idea on this ? Note: partitions are not easily applicable here as we have a special case, where there needs to be x parts, all unequal, no 0s, etc. Last edited: Staff Emeritus Science Advisor 2023 Award Jarfi said: Summary:: Number of unequal numbers with sum S Before I confuse you with I tried does anybody have an idea on this ? Jarfi said: R=50, n=4, S = 13 1 + 2 + 3 + 7 = 13 is the only possible configuration here from my understanding. But for say 16, we could have What's wrong with: 1 + 2 + 4 + 6 = 13 1 + 3 + 4 + 5 = 13 PeroK said: What's wrong with: 1 + 2 + 4 + 6 = 13 1 + 3 + 4 + 5 = 12 I am not interested in 12 if the goal is to find S = 13 . What is your point ? Jarfi said: I am not interested in 12 if the goal is to find S = 13 . What is your point ? That you can't count and I can't type? PeroK said: That you can't count and I can't type? I can't count ? Jarfi said: 1 + 3 + 4 + 5 = 13 (not 12) PeroK said: 1 + 3 + 4 + 5 = 13 (not 12) See my updated post. There are several sums equivalent to 13, I wasn't being too accurate I concede Jarfi said: See my updated post. There are several sums equivalent to 13, I wasn't being too accurate I concede Why would your problem be significantly easier than partitions? PeroK said: Why would your problem be significantly easier than partitions? It is a problem of partitions, restricted unequal partitions into a fixed number of parts , that is. It is harder(computationally) to solve not easier. Thus the partitions approach may not be the most optimal. Does the R really matter to you? It's just going to cause weird complications on the split of whether S is larger or smaller than R. Office_Shredder said: Does the R really matter to you? It's just going to cause weird complications on the split of whether S is larger or smaller than R. The R matters yes, It is not the most complicated thing though the complication is that the samples should be unequal and restricted into a fixed number of parts. S is the Sum s is a sample from 1:50 or 1:R n is the number of samples ( s(1), s(2), ..., s(n) where sum(s) = S ) Nr sets where s(i) < R for all i, are allowed. But we could also calculate this Nsets with s(i) < R for all i = Nsets ALL with any Natural Number s(i) - Nsets with s(i) > R for at least one i if it simplifies things. Anyway if we can solve the problem without the size limit of R (Nsets ALL) the problem can be solved for the special case of s(i) < R FAQ: Number of unequal integers with sum S 1. How do you calculate the number of unequal integers with a given sum? To calculate the number of unequal integers with a given sum, you can use a mathematical formula known as the "stars and bars" method. This involves dividing the sum by the number of integers and then adding one to each quotient until the sum is reached. For example, if the sum is 10 and the number of integers is 3, you would divide 10 by 3 to get 3.33. Then, you would add one to each quotient (4, 3, 3) until you reach the sum of 10. 2. Can the number of unequal integers with a given sum be negative? No, the number of unequal integers with a given sum cannot be negative. Integers are whole numbers, so they cannot be negative. If the sum is negative, it is not possible to have a positive number of unequal integers with that sum. 3. Is there a limit to the number of unequal integers that can have a given sum? Yes, there is a limit to the number of unequal integers that can have a given sum. This limit is determined by the size of the sum and the number of integers. For example, if the sum is 10 and the number of integers is 3, the maximum number of unequal integers that can have a sum of 10 is 10. 4. Can the same set of unequal integers have different sums? No, the same set of unequal integers cannot have different sums. Each set of integers has a unique sum, and changing one of the integers will result in a different sum. However, different sets of unequal integers can have the same sum. 5. How does the order of the integers affect the sum? The order of the integers does not affect the sum. The sum is determined by the values of the integers, not their order. For example, the set of integers (2, 3, 5) will have the same sum as the set (5, 3, 2).
{"url":"https://www.physicsforums.com/threads/number-of-unequal-integers-with-sum-s.1011233/","timestamp":"2024-11-09T22:42:17Z","content_type":"text/html","content_length":"139951","record_id":"<urn:uuid:2c15084c-908f-4b14-aadc-b27d7a2228d6>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00768.warc.gz"}
Appendix E: Hannan-Quinn Information Criterion (HQC) The Hannan-Quinn information criterion (HQC) measures the goodness of fit of a statistical model. It is often used as a criterion for model selection among a finite set of models. It is not based on the log-likelihood function (LLF), but is related to Akaike's information criterion. Similar to AIC, the HQC introduces a penalty term for the number of parameters in the model, but the penalty is larger than one in the AIC. 1. In general, the BIC is defined as: $$HQC=n \times \ln{\frac{RSS}{n}} +2\times k \times \ln(\ln n)$$ Where: □ $n$ is the number of observations. □ $k$ is the number of model parameters. □ $RSS$ is the residual sum of squares that result from the statistical model. 2. Given any two estimated models, the model with the lower value of HQC is preferred; a lower HQC implies either fewer explanatory variables, better fit, or both. 1. HQC, like BIC, but unlike AIC, is not asymptotically efficient. 2. It is essential to remember that the HQC can be used to compare estimated models only when the numerical values of the dependent variable are identical for all estimates being compared. 3. BIC has been widely used for model identification in time series and linear regression. It can, however, be applied quite widely to any set of maximum likelihood-based models. Related Links Article is closed for comments.
{"url":"https://support.numxl.com/hc/en-us/articles/215531183-Appendix-E-Hannan-Quinn-Information-Criterion-HQC","timestamp":"2024-11-12T05:31:57Z","content_type":"text/html","content_length":"35235","record_id":"<urn:uuid:072bcfd2-8191-41d9-a28f-6c085d142988>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00022.warc.gz"}
What do you mean by parity? - Explained : the quality or state of being equal or equivalent. Women have fought for parity with men in the workplace. : equivalence of a commodity price expressed in one currency to its price expressed in another. The two currencies are approaching parity for the first time in decades. What is an example of parity? The parity of an integer is its attribute of being even or odd. Thus, it can be said that 6 and 14 have the same parity (since both are even), whereas 7 and 12 have opposite parity (since 7 is odd and 12 is even). What is parity in statistics? Parity is the number of live births a woman has had in the past. What are the types of parity? There are two kinds of parity bits: • In even parity, the number of bits with a value of one are counted. • In odd parity, if the number of bits with a value of one is an even number, the parity bit value is set to one to make the total number of ones in the set (including the parity bit) an odd Leave a Comment
{"url":"https://theomegafoundation.org/what-do-you-mean-by-parity/","timestamp":"2024-11-07T03:30:51Z","content_type":"text/html","content_length":"73004","record_id":"<urn:uuid:6f21c6d1-f37c-4779-a0cc-579c310e1206>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00531.warc.gz"}
FD Calculator More Calculator Fixed Deposit (FD) Calculator is an online tool that helps you easily calculate the maturity amount and interest earned on your fixed deposit. By simply entering the principal amount, interest rate, tenure, and compounding frequency, you can quickly estimate your returns without manual calculations. The FD calculator is invaluable for anyone planning investments, as it offers precise calculations based on current financial data. Whether you’re a resident or non-resident investor, using an FD calculator ensures a clear understanding of your potential earnings. How Does FD Calculator Work? The FD Calculator works by applying the compound interest formula to your fixed deposit, which factors in the interest rate, tenure, and compounding frequency. As you input these details, the calculator computes the total interest earned and the maturity amount. It accounts for different compounding periods—monthly, quarterly, semi-annually, or annually—offering flexibility to estimate returns accurately. How Can a FD Calculator Help You? An FD calculator is a quick, efficient, and error-free tool to estimate your FD returns. Instead of relying on manual calculations, which can be time-consuming and prone to errors, the FD calculator simplifies the process by instantly providing results. It allows you to compare different investment options by altering variables like the interest rate or tenure, helping you make informed decisions about your savings and investment plans. It also gives a realistic picture of how much you’ll earn over a specific period, helping in better financial planning. Formula to Calculate FD Maturity Amount The formula to calculate the maturity amount in an FD is based on compound interest: Maturity Amount = P × (1 + r/n) ^ (n×t) • P is the principal amount • r is the annual interest rate • n is the number of times the interest is compounded in a year • t is the tenure of the FD (in years) The formula is designed to account for interest compounded at different frequencies, providing an accurate estimate of the total returns. Let’s say you invest ₹100,000 in an FD for 3 years at an annual interest rate of 6%, compounded quarterly. Using the formula: • Principal: ₹100,000 • Interest rate: 6% • Compounding: Quarterly (4 times a year) • Tenure: 3 years The FD calculator will show the maturity amount as ₹119,596.63, with ₹19,596.63 as the total interest earned. How to Use Dualten FD Calculator? Using the Dualten FD Calculator is easy and efficient. Follow these steps: 1. Enter the principal amount: Input the amount you want to invest in the FD. 2. Select the interest rate: Enter the interest rate offered by the bank or financial institution. 3. Input the tenure: Provide the duration of the FD in years. 4. Choose the compounding frequency: Select how often the interest is compounded (monthly, quarterly, semi-annually, or annually). 5. Click Calculate: The calculator will instantly display the maturity amount and interest earned. This straightforward process helps you calculate your returns in seconds and makes it easier to plan your investments. Advantages of FD Calculator 1. Accuracy: An FD calculator ensures error-free calculations. 2. Time-Saving: It provides instant results, saving you time compared to manual calculations. 3. Comparison Tool: You can easily compare returns on different FDs by changing the variables like interest rate and tenure. 4. Financial Planning: It helps in better financial planning by giving a clear picture of future earnings. 5. User-Friendly: The calculator is simple to use and does not require any technical or financial expertise. What is Fixed Deposit (FD)? A Fixed Deposit (FD) is a financial instrument provided by banks or financial institutions that allows you to invest a lump sum for a fixed period at a predetermined interest rate. FDs are considered one of the safest investment options, offering higher returns than a regular savings account. The interest on an FD is compounded, and the investor receives the lump sum and the accumulated interest at maturity. Fixed Deposit Interest Rates of Different Banks Bank FD names For General Citizens (per annum) For Senior Citizens (per annum) SBI FD Interest Rate 6.10% 6.90% HDFC Bank FD Interest Rate 6.25% 7.00% ICICI Bank FD Interest Rate 6.25% 6.95% IDBI Bank FD Interest Rate 6.10% 6.85% Kotak Mahindra Bank FD Interest Rate 6.20% 6.70% RBL Bank FD Interest Rate 5.75% 6.25% KVB Bank FD Interest Rate 6.10% 6.60% Punjab National Bank FD Interest Rate 6.60% 6.60% Canara Bank FD Interest Rate 6.50% 7.00% Axis Bank FD Interest Rate 6.50% 7.25% Bank of Baroda FD Interest Rate 5.65% 6.65% IDFC First Bank FD Interest Rate 6.00% 6.50% Yes Bank FD Interest Rate 6.75% 7.50% IndusInd Bank FD Interest Rate 6.25% 7.00% UCO Bank FD Interest Rate 5.30% 5.80% Central Bank of India FD Interest Rate 6.25% 6.75% Indian Bank FD Interest Rate 6.30% 7.05% Indian Overseas Bank FD Interest Rate 6.40% 6.90% Bandhan Bank FD Interest Rate 5.60% 6.35% Note: The above-mentioned interest rates are as of December 2023. The rates are subject to change as per the policy of the banks. FD Calculator Approaches There are two main approaches for calculating FD returns: 1. Simple Interest: This is less common and applies when the interest is calculated only on the principal amount. 2. Compound Interest: Most FDs use compound interest, where interest is calculated on both the principal and previously earned interest, providing higher returns. NRI FD Calculator An NRI FD Calculator works similarly to a regular FD calculator but is tailored to non-resident Indians (NRIs) investing in Indian banks. NRIs can invest in Foreign Currency Non-Resident (FCNR), Non-Resident External (NRE), or Non-Resident Ordinary (NRO) fixed deposits. The NRI FD Calculator helps in calculating the maturity amount, accounting for exchange rates and other specific factors related to NRI accounts. FD Calculation for Resident Customers & NRI For resident customers, the FD calculator computes returns based on domestic interest rates and compounding frequency. For NRIs, the calculator also considers currency fluctuations and taxation differences. Both residents and NRIs can benefit from using FD calculators to plan their investments, compare interest rates, and make informed decisions. A Fixed Deposit Calculator is an essential tool for anyone looking to invest in FDs. It simplifies the process of estimating the maturity amount and interest earned, providing accurate and instant results. Whether you’re a resident or NRI, using the FD calculator can help you compare rates, plan your finances effectively, and make well-informed decisions about your investments. 1. What is a Fixed Deposit (FD) Calculator? A Fixed Deposit (FD) Calculator is an online tool that helps investors calculate the maturity amount and interest earned on their fixed deposits by inputting variables like the principal amount, interest rate, and tenure. 2. How accurate is an FD Calculator? FD Calculators are highly accurate as they use precise formulas to compute the maturity amount based on the interest rate, tenure, and compounding frequency. 3. Can NRIs use FD Calculators? Yes, NRIs can use specialized NRI FD Calculators, which account for currency exchange rates, NRI account types, and different tax regulations. 4. What is the formula used by an FD Calculator? The FD calculator uses the compound interest formula: Maturity Amount = P × (1 + r/n) ^ (n×t) This accounts for the interest compounded periodically during the FD tenure. 5. Can I calculate FD returns for different tenures? Yes, the FD calculator allows you to input different tenure options to calculate and compare the returns for various durations. 6. How does compounding frequency affect FD returns? The more frequently the interest is compounded (monthly, quarterly, etc.), the higher the maturity amount will be, as you earn interest on both the principal and previously accumulated interest. 7. What is the minimum amount to start an FD? The minimum deposit amount varies by bank but is generally around ₹1,000 for a fixed deposit.
{"url":"https://dualten.com/fd-calculator/","timestamp":"2024-11-12T03:45:11Z","content_type":"text/html","content_length":"147328","record_id":"<urn:uuid:9c10923d-0575-41c8-8b15-47280a8cb520>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00641.warc.gz"}
Finding the Coefficient of Variation for a Discrete Random Variable Question Video: Finding the Coefficient of Variation for a Discrete Random Variable Mathematics • Third Year of Secondary School Let π denote a discrete random variable that can take the values β 1, 0, and 1. Given that π has a probability distribution function π (π ₯) = π /(3 β π ₯), find the coefficient of variation to the nearest percent. Video Transcript Let π denote a discrete random variable that can take the values negative one, zero, and one. Given that π has a probability distribution function π of π ₯ equals π over three minus π ₯, find the coefficient of variation to the nearest percent. Weβ ve been given the probability distribution function for this discrete random variable, but it is in terms of an unknown value π . Before we can calculate the coefficient of variation, weβ ll need to determine the value of π . To do this, we recall that the sum of all probabilities in a probability distribution must be equal to one. So if we can find expressions for the probabilities for the three values in the range of this discrete random variableβ which are negative one, zero, and oneβ and these will be expressions in terms of π , we can then form an equation and solve it to determine the value of π . First, π of negative one is π over three minus negative one, which is π over four. π of zero is π over three minus zero, which is π over three. And finally, π of one is π over three minus one, which is π over two. As weβ ve already said, the sum of all probabilities in a probability distribution must be equal to one. So we have the equation π over four plus π over three plus π over two is equal to one. We can write each of these terms with a common denominator of 12. So we have three π over 12 plus four π over 12 plus six π over 12 is equal to one. This simplifies to 13π over 12 is equal to one. And then dividing both sides by 13 over 12, which is equivalent to multiplying by the reciprocal of this value, 12 over 13, we find that π is equal to 12 over 13. We can then find each of the probabilities explicitly. π of negative one was π over four. So thatβ s 12 over 13 multiplied by four, which simplifies to three over 13. π of zero was π over three. So thatβ s 12 over 13 multiplied by three, which simplifies to four over 13. And our final probability for π of one is π over two. So thatβ s 12 over 13 multiplied by two, which simplifies to six over 13. We can then confirm that the sum of these three probabilities is 13 over 13, which is indeed equal to one. So weβ ve found the probabilities for each value in the range of this discrete random variable. Letβ s now write down the probability distribution in a table. We write the values in the range of the discrete random variable in the top row and then their associated probabilities, which we just calculated, in the second row. Now the question asks us to find the coefficient of variation for this discrete random variable. This gives the standard deviation as a percentage of the expected value of π ₯. If a discrete random variable π has a nonzero mean π Έ of π and a standard deviation π sub π , then the coefficient of variation is given by π sub π over π Έ of π multiplied by 100. We recall first that to find the expected value of a discrete random variable, we multiply each value in its range by its corresponding probability and then find the sum of these values. We can add another row to our table to work out these values. Negative one multiplied by three over 13 is negative three over 13. Zero multiplied by four over 13 is zero. And one multiplied by six over 13 is six over 13. The expected value of π , then, is the sum of these three values, which is three over 13. We then recall that the standard deviation of π is the square root of its variance. And the variance of π is the expected value of π squared minus the square of the expected value of π . We need to be very clear on the difference in notation here. In the second term, we find the expected value of π , which weβ ve just done, and then we square it, whereas in the first term, weβ re finding the expectation of the squared values of π . So we square the π -values first. The formula for calculating the expectation of π squared is the sum of the π ₯ squared values multiplied by the probabilities, which are inherited directly from the probability distribution of π . We can add another row to our table for the π ₯ squared values, which are one, zero, and one again, and then another row in which we multiply each π ₯ squared value by its π of π ₯ value. First, we have one multiplied by three over 13, which is three over 13, then zero multiplied by four over 13, which is zero, and finally, one multiplied by six over 13, which is six over 13. The expected value of π squared then is three over 13 plus zero plus six over 13, which is nine over 13. Next, we calculate the variance of π . This is the expected value of π squared. Thatβ s nine over 13. And from this we subtract the square of the expected value of π . So weβ re subtracting three over 13 squared. Nine over 13 minus three over 13 squared gives the exact fraction 108 over 169. So weβ ve calculated the variance of π , and next we need to calculate the standard deviation. This is equal to the square root of the variance. And in exact form, this is six root three over 13. Weβ re nearly finished. Weβ ve found the standard deviation of π and the expected value of π . So weβ re finally able to calculate the coefficient of variation. We have six root three over 13 for the standard deviation divided by three over 13 for the expectation multiplied by 100. Now dividing by three over 13 is equivalent to multiplying by 13 over three. We can then cross cancel a factor of 13, and we can also cross cancel a factor of three. So weβ re left with two root three over one multiplied by one over one all multiplied by 100. Evaluating on a calculator, we have 346.41 The question specifies that we should give our answer to the nearest percent. So we round down to 346 percent. Now donβ t be concerned that this percentage is greater than 100 percent. A coefficient of variation of 346 percent just means that the standard deviation of π is approximately three and a half times its expected value, which is perfectly possible. We found then that the coefficient of variation of π to the nearest percent is 346 percent.
{"url":"https://www.nagwa.com/en/videos/758152684936/","timestamp":"2024-11-10T08:01:22Z","content_type":"text/html","content_length":"259632","record_id":"<urn:uuid:e62ac000-c502-4bbd-97e8-7d016e13157d>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00435.warc.gz"}
The ade4 package - II: Two-table and Article Rnews 2007 The ade4 package - II: Two-table and K-table methods by Stéphane Dray, Anne B. Dufour and Daniel Chessel S. Dray, A.B. Dufour, and D. Chessel. 2007. The ade4 package - II: Two-table and K-table methods. R News 7(2):47-52. Ecological illustration Matching two tables The K-table class 1 Introduction The ade4 package proposes a great variety of explanatory methods to analyse multivariate datasets. As suggested by the acronym ade4 (Data Analysis functions to analyse Ecological and Environmental data in the framework of Euclidean Exploratory methods), the package is devoted to ecologists but it could be useful in many other fields [e.g.,Goecke, 2005]. Methods available in the package are particular cases of the duality diagram [Escoufier, 1987,Holmes, 2006,Dray and Dufour, 2007] and the implementation of the functions follows the description of this unifying mathematical tool (class dudi). The main functions of the package for one-table analysis methods have been presented in Chessel et al. [2004]. This new paper presents a short summary of two-table and K-table methods available in the package. 2 Ecological illustration In order to illustrate the methods, we used the dataset jv73 [Verneaux, 1973] which is available in the package. This dataset concerns 12 rivers. For each river, a number of sites have been sampled. The number of sites per river is not constant. jv73$poi is a data.frame and contains presence / absence data for 19 fish species (columns) in 92 sites (rows). jv73$fac.riv is a factor indicating the river corresponding to each site. jv73$morpho contains the measurements of six environmental variables (altitude (m), distance between the site and the source (km), slope (per thousand), wetted cross section (m^2), average flow (m^3/s) and average speed (m/s)) for the same sites . Several ecological questions are related to these data: 1. Are they groups of fish species living together (i.e. species communities)? 2. Is there a relation between the composition of fish communities and the environmental variations? 3. Does the composition of fish communities vary (or not) among rivers? 4. Do the species-environment relationships vary (or not) among rivers? Multivariate analyses help to answer these different questions: one-table methods for the first question, two-table methods for the second one and K-table methods for the last two ones. 3 Matching two tables The main purpose of ecological data analysis is the matching of two data tables: a sites-by-environmental variables table and a sites-by-species table, to study the relationships between the composition of species communities and their environment. The ade4 package contains the main variants of these methods (procrustean rotation, co-inertia analysis and principal component analyses with respect to instrumental variables). The first approach is procrustean rotation [Gower, 1971], introduced in ecology by [Digby and Kempton, 1987, p. 116]. pca1 <- dudi.pca(jv73$morpho, scannf = FALSE) pca2 <- dudi.pca(jv73$poi, scale = FALSE, scannf = FALSE) plot(procuste(pca1$tab, pca2$tab, nf = 2)) Figure 1: Plot of a Procrustes analysis: loadings for environmental variables and species, eigenvalues screeplot, scores of sites for the two data sets, and projection of the two sets of sites after rotation (arrows link environment site score to the species site score) [Dray et al., 2003a]. Two randomization procedures are available to test the association between two tables: PROTEST [Jackson, 1995] and RV [Heo and Gabriel, 1998]. pca2$tab), main = "PROTEST") plot(RV.rtest(pca1$tab, pca2$tab), main = "RV") Figure 2: Plots of PROTEST and RV tests: histograms of simulated values and observed value (vertical line). Co-inertia analysis [Dolédec and Chessel, 1994,Dray et al., 2003b] is a general approach that can be applied to any pair of duality diagrams having the same row weights. This method is symmetric and seeks for a common structure between two datasets. It extends psychometricians inter-battery analysis [Tucker, 1958], canonical analysis on qualitative variables [Cazes, 1980], and ecological profiles analysis [Montaña and Greig-Smith, 1990,Mercier et al., 1992]. Co-inertia analysis of the pair of triplets (X[1],Q[1],D) and (X[2],Q[2],D) leads to the triplet (X[2]^tDX[1],Q[1],Q[2]). Note that the two triplets must have the same row weights. coa1 <- dudi.coa(jv73$poi, scannf = FALSE) pca3 <- dudi.pca(jv73$morpho, row.w = coa1$lw, scannf = F) plot(coinertia(coa1, pca3, scannf = FALSE)) Figure 3: Plot of a co-inertia analysis: projection of the principal axes of the two tables (species and environment) on co-inertia axes, eigenvalues screeplot, canonical weights of species and environmental variables, and joint display of the sites. For each coupling method, a generic plot function allows to represent the various elements required to interpret the results. However, the quality of graphs could vary according to the data set. It is consequently impossible to manage relevant graphical outputs for all cases. That is why these generic plot use graphical functions of ade4 which can be directly called by the user. A brief description of some of these functions is given in table 1. │Function │Objective │ │s.arrow │cloud of points with vectors │ │s.chull │cloud of points with groups by convex hulls │ │s.class │cloud of points with groups by stars or ellipses │ │s.corcircle│correlation circle │ │s.distri │cloud of points with frequency distribution by stars and ellipses │ │s.hist │cloud of points with two marginal histograms │ │s.image │grid of gray-scale rectangles with contour lines │ │s.kde2d │cloud of points with kernel density estimation │ │s.label │cloud of points with labels │ │s.logo │cloud of points with pictures │ │s.match │matching two clouds of points with vectors │ │s.traject │cloud of points with trajectories │ │s.value │cloud of points with numerical variable │ Table 1: Objectives of some graphical functions. Another two-table matching strategy is principal component analyses with respect to instrumental variables (pcaiv, [Rao, 1964]). This approach consists in explaining a triplet (X[2], Q[2], D) by a table of independent variables X[1] and leads to triplet (P[X[1]]X[2], Q[2],D) where P[X[1]]=X[1](X[1]^tDX[1])^-1X[1]^tD. This family of methods are constrained ordinations, among which redundancy analysis [van den Wollenberg, 1977] and canonical correspondence analysis [Ter Braak, 1986] are the most frequently used in ecology. Note that canonical correspondence analysis can also be performed using the cca wrapper function which takes two tables as arguments. The example given below is then exactly equivalent to plot(cca(jv73$poi,jv73$morpho,scannf=FALSE)). While the cca function of ade4 is a particular case of pcaiv, the cca function of the package vegan is a more traditional implementation of the method which could be preferred by ecologists. plot(pcaiv(coa1, jv73$morpho, scannf = FALSE)) Figure 4: Plot of a CCA seen as a particular case of PCAIV: environmental variables loadings and correlations with CCA axes, projection of principal axes on CCA axes, species scores, eigenvalues screeplot, and joint display of the rows of the two tables (position of the sites by averaging (points) and by regression (arrow tips)). Orthogonal analysis (pcaivortho) allows to remove the effect of independent variables and corresponds to the triplet (P[^X[1]]X[2], Q[2],D) where P[^X[1]]=I - P[X[1]]. Between-class (between) and within-class (within) analyses (see Chessel et al. [2004] for details) are particular cases of PCAIV and orthogonal PCAIV when there is only one categorical variable (i.e. factor) in X[1]. Within-class analyses allow to take into account a partition of individuals into groups and focus on structures which are common to all groups. It can be seen as a first step to K-table methods. wit1 <- within(coa1, fac = jv73$fac.riv, scannf = FALSE) Figure 5: Plot of a within-class analysis: species loadings, species scores, eigenvalues screeplot, projection of principal axes on within-class axes, sites scores (common centring), projections of sites and groups (i.e. rivers in this example) on within-class axes. 4 The K-table class Class ktab corresponds to collections of more than two duality diagrams, which internal structures are to be compared. Three formats of these collections can be considered: • (X[1],Q[1],D), (X[2],Q[2],D),..., (X[K],Q[K],D) • (X[1],Q,D[1]), (X[2],Q,D[2]),..., (X[K],Q,D[K]) stored in the form of (X[1]^t,D[1],Q), (X[2]^t,D[2],Q),..., (X[K]^t,D[K],Q) • (X[1],Q,D), (X[2],Q,D),..., (X[K],Q,D) which can also be stored in the form of (X[1]^t,D,Q), (X[2]^t,D,Q),..., (X[K]^t,D,Q) Each statistical triplet corresponds to a separate analysis (e.g., principal component analysis, correspondence analysis ...). The common dimension of the K statistical triplets are the rows of tables which can represent individuals (samples, statistical units) or variables. Utilities for building and manipulating ktab objects are available. K-table can be constructed from a list of tables (ktab.list.df), a list of dudi objects (ktab.list.dudi), a within-class analysis (ktab.within) or by splitting a table (ktab.data.frame). Generic functions to transpose (t.ktab), combine (c.ktab) or extract elements ([.ktab) are also available. The sepan function can be used to compute automatically the K separate analyses. kt1 <- ktab.within(wit1) sep1 <- sepan(kt1) kplot.sepan.coa(sep1, permute.row.col = TRUE) Figure 6: Kplot of 12 separate correspondence analyses (same species, different sites). When the ktab object is built, various statistical methods can be used to analyse it. The foucart function can be used to analyse K tables of positive number having the same rows and the same columns and that can be analysed by a CA [Foucart, 1984,Pavoine et al., 2007]. Partial triadic analysis [Tucker, 1966] is a first step toward three modes principal component analysis [Kroonenberg, 1989] and can be computed with the pta function. It must be used on K triplets having the same row and column weights. The pta function can be used to perform the STATICO method [Simier et al., 1999,Thioulouse et al., 2004]. This allows to analyse a pair of ktab objects which have been combined by the ktab.match2ktabs function. Multiple factor analysis (mfa, [Escofier and Pagès, 1994]), multiple co-inertia analysis (mcoa, [Chessel and Hanafi, 1996]) and the STATIS method (statis, [Lavit et al., 1994]) can be used to compare K triplets having the same row weights. The STATIS method can also be used to compare K triplets having the same column weights, which is a first step toward Common PCA [Flury, 1988]. sta1 <- statis(kt1, scannf = F) Figure 7: Plot of STATIS analysis: interstructure, typological value of each table, compromise and projection of principal axes of separate analyses onto STATIS axes. The kplot generic function is associated to the foucart, mcoa, mca, pta, sepan, sepan.coa and statis methods, giving adapted collections of graphics. kplot(sta1, traj = TRUE, arrow = FALSE, unique = TRUE, clab = 0) Figure 8: Kplot of the projection of the sites of each table on the principal axes of the compromise of STATIS analysis. 5 Conclusion The ade4 package provides many methods to analyse multivariate ecological data sets. This diversity of tools is a methodological answer to the great variety of questions and data structures associated to biological questions. Specific methods dedicated to the analysis of biodiversity, spatial, genetic or phylogenetic data are also available in the package. The adehabitat brother-package contains tools to analyse habitat selection by animals while the ade4TkGUI package provides a graphical interface to ade4. More ressources can be found on the ade4 website (http://pbil.univ-lyon1.fr/ P. Cazes. L'analyse de certains tableaux rectangulaires décomposés en blocs : généralisation des propriétés rencontrées dans l'étude des correspondances multiples. I. Définitions et applications à l'analyse canonique des variables qualitatives. Les Cahiers de l'Analyse des Données, 5: 145-161, 1980. D. Chessel and M. Hanafi. Analyse de la co-inertie de K nuages de points. Revue de Statistique Appliquée, 44 (2): 35-60, 1996. D. Chessel, A.-B. Dufour, and J. Thioulouse. The ade4 package-I- One-table methods. R News, 4: 5-10, 2004. P. G. N. Digby and R. A. . Kempton. Multivariate Analysis of Ecological Communities. Chapman and Hall, Population and Community Biology Series, London, 1987. S. Dolédec and D. Chessel. Co-inertia analysis: an alternative method for studying species-environment relationships. Freshwater Biology, 31: 277-294, 1994. S. Dray and A. Dufour. The ade4 package: implementing the duality diagram for ecologists. Journal of Statistical Software, 22 (4): 1-20, 2007. S. Dray, D. Chessel, and J. Thioulouse. Procrustean co-inertia analysis for the linking of multivariate datasets. Ecoscience, 10: 110-119, 2003a. S. Dray, D. Chessel, and J. Thioulouse. Co-inertia analysis and the linking of ecological tables. Ecology, 84 (11): 3078-3089, 2003b. B. Escofier and J. Pagès. Multiple factor analysis (AFMULT package). Computational Statistics and Data Analysis, 18: 121-140, 1994. Y. Escoufier. The duality diagram : a means of better practical applications. In P. Legendre and L. Legendre, editors, Development in numerical ecology, pages 139-156. NATO advanced Institute , Serie G .Springer Verlag, Berlin, 1987. B. Flury. Common Principal Components and Related Multivariate. models. Wiley and Sons, New-York, 1988. T. Foucart. Analyse factorielle de tableaux multiples. Masson, Paris, 1984. R. Goecke. 3D lip tracking and co-inertia analysis for improved robustness of audio-video automatic speech recognition. In Proceedings of the Auditory-Visual Speech Processing Workshop AVSP 2005, pages 109-114, 2005. J. Gower. Statistical methods of comparing different multivariate analyses of the same data. In F. Hodson, D. Kendall, and P. Tautu, editors, Mathematics in the archaeological and historical sciences, pages 138-149. University Press, Edinburgh, 1971. M. Heo and K. Gabriel. A permutation test of association between configurations by means of the RV coefficient. Communications in Statistics - Simulation and Computation, 27: 843-856, 1998. S. Holmes. Multivariate analysis: The French way. In N. D. and S. T., editors, Festschrift for David Freedman. IMS, Beachwood, OH, 2006. D. Jackson. PROTEST: a PROcustean randomization TEST of community environment concordance. Ecosciences, 2: 297-303, 1995. P. Kroonenberg. The analysis of multiple tables in factorial ecology. iii three-mode principal component analysis:änalyse triadique complète". Acta OEcologica, OEcologia Generalis, 10: 245-256, C. Lavit, Y. Escoufier, R. Sabatier, and P. Traissac. The ACT (STATIS method). Computational Statistics and Data Analysis, 18: 97-119, 1994. P. Mercier, D. Chessel, and S. Dolédec. Complete correspondence analysis of an ecological profile data table: a central ordination method. Acta OEcologica, 13: 25-44, 1992. C. Montaña and P. Greig-Smith. Correspondence analysis of species by environmental variable matrices. Journal of Vegetation Science, 1: 453-460, 1990. S. Pavoine, J. Blondel, M. Baguette, and D. Chessel. A new technique for ordering asymmetrical three-dimensional data sets in ecology. Ecology, 88: 512-523, 2007. C. Rao. The use and interpretation of principal component analysis in applied research. Sankhya A, 26: 329-359, 1964. M. Simier, L. Blanc, F. Pellegrin, and D. Nandris. Approche simultanée de K couples de tableaux : application à l'étude des relations pathologie végétale-environment. Revue de Statistique Appliquée, 47: 31-46, 1999. C. Ter Braak. Canonical correspondence analysis : a new eigenvector technique for multivariate direct gradient analysis. Ecology, 67: 1167-1179, 1986. J. Thioulouse, M. Simier, and D. Chessel. Simultaneous analysis of a sequence of pairs of ecological tables with the STATICO method. Ecology, 85: 272-283, 2004. L. . Tucker. An inter-battery method of factor analysis. Psychometrika, 23: 111-136, 1958. L. Tucker. Some mathemetical notes on three-mode factor analysis. Psychometrika, 31: 279-311, 1966. A. van den Wollenberg. Redundancy analysis, an alternative for canonical analysis. Psychometrika, 42 (2): 207-219, 1977. J. Verneaux. Cours d'eau de Franche-Comté (Massif du Jura). Recherches écologiques sur le réseau hydrographique du Doubs. Essai de biotypologie. Thèse de doctorat, Université de Besançon, Besançon, 1973. File translated from T[E]X by T[T]H, version 3.78. On 23 Oct 2007, 18:32.
{"url":"http://pbil.univ-lyon1.fr/ADE-4/article_rnews2007.php?lang=fra","timestamp":"2024-11-04T22:01:33Z","content_type":"text/html","content_length":"34798","record_id":"<urn:uuid:fe276036-79cb-4e24-9c12-c135034570ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00119.warc.gz"}
Measurement of angles in degrees, minutes and seconds What defines an angle is the aperture of its sides. Therefore it is natural for us to wonder how should we measure this aperture. To measure an angle, what we do is comparing it with another that we use as a unit. The most usual unit of measurement for angles is the sexagesimal degree, which consists in $$\frac{1}{360}$$ of a full angle. The measurement of an angle in sexagesimal degrees is denoted by the symbol $$^\circ$$. An angle of $$56^\circ$$ is the one that has an aperture $$56$$ times larger than the opening of one degree (the sexagesimal unit). To get an idea, one degree corresponds to the following aperture: So, in a full angle, which corresponds to a full circle, we have $$360^\circ$$ ($$360$$ degrees). That is to say: As we can see in the drawing, a full circle is divided into $$360$$ parts, each of them denoting one degree, which is designated as $$1^\circ$$. So, a full angle has $$360^\circ$$, a straight angle has $$180^\circ$$ and a right angle has $$90^\circ$$. The acute angles have less than $$90^\circ$$ and the obtuse ones more than $$90^\circ$$, but less than $$180^\circ$$. According to its range, we can also name some specific angles. • Congruent angles are those that have the same aperture, • Complementary angles are those which measure $$90^\circ$$ if we add them up, • Supplementary angles are those which measure $$180^\circ$$ if we add them up, • Conjugated angles are those which measure $$360^\circ$$ if we add them up. An angle of $$30^\circ$$ has a complementary angle of $$60^\circ$$, and a supplementary one of $$150^\circ$$ and a conjugate one of $$330^\circ$$. But, what happens when we have an angle of less than $$1^\circ$$? To be able to speak about angles that measure less than $$1^\circ$$, we use submultiples of a degree, so we avoid working with expressions like the following: • This angle measures half a degree • This angle measures $$0,76$$ degrees Thus, the sexagesimal degree has submultiples: these are the minute and the second. The minute is designated as $$'$$ and the second as $$''$$. The measurement of an angle in degrees, minutes and seconds would be, for example, $$84^\circ 17' \ 43 \ ''$$. It would be read as: an angle of $$84$$ degrees, $$17$$ minutes and $$43$$ seconds. Let's see the exact value of minutes and seconds. • One minute is the result of taking a degree and dividing it into $$60$$ equal parts. This is, mathematically expressed: $$1$$ minute $$= \dfrac{1^\circ}{60}$$ and therefore $$60$$ minuts $$= 1^\ • A second is the result of taking a minute and dividing it in $$60$$ equal parts. This is, mathematically expressed: $$1$$ second $$= \dfrac{1'}{60}$$ and therefore $$60$$ seconds $$= 1$$ minut. With this equivalence let's see the value of a degree in seconds: $$$\left. \begin{array}{rcl} 1^\circ & = & 60' \\\\ 1' & = & 60'' \end{array} \right\} \Longrightarrow 1^\circ= 60 \cdot 60 ''= 3600 ''$$$ To change degrees into minutes and seconds we will always work by means of conversion factors. This means that we will use the following method: We want to write $$32^\circ$$ in minutes and $$21^\circ$$ in seconds. $$$32^\circ = 32 \ \mbox{degrees} \cdot \dfrac {60 \ \mbox{minutes}}{1 \ \mbox{degree}} = 32 \cdot 60 \ \mbox{minutes} = 1920 \ \mbox{minutes}$$$ In other words, we know that $$60$$ minutes $$= 1^\circ$$, therefore $$\dfrac{60 \ \mbox{minutes}}{1^\circ}=1$$ and, through this conversion factor, we change from degrees to minutes. We do the same in the case of seconds. Knowing that $$60 \ \mbox{seconds}=1 \ \mbox{minute}$$, if we move the term on the right-hand side onto the left-hand side to divide it, we get $$\dfrac{60 \ \ mbox{seconds}}{1 \ \mbox{minute}}=1$$, which is the conversion factor to change from minutes to seconds. By this method, $$$21^\circ = 21^\circ \cdot \frac{60 \ \mbox{minutes}}{1 ^\circ} \cdot \frac{60 \ \mbox{seconds}}{1 \ \mbox{minute}} = 21 \cdot 60 \cdot 60 \ \mbox{seconds}= 75600 \ \mbox{seconds}$$$ Finally, we will see an example that allows us to express in degrees quantities given in seconds or minutes. If we have $$460$$ seconds, then we have: $$$ 39600 \ \mbox{seconds}= 39600 \ \mbox{seconds} \dfrac{1 \ \mbox{minute}}{60 \ \mbox{seconds}} = \dfrac {39600}{60} \ \mbox{minutes} = 660 \ \mbox {minutes} $$$ If we want to express it in degrees: $$$ 39600 \ \mbox{seconds}=\dfrac {39600}{60} \ \mbox{minutes} \cdot \dfrac{1 \ \mbox{degree}}{60 \ \mbox{minutes}} = \dfrac {39600}{60·60} \ \mbox{degrees} = 11 \ \mbox{degrees} $$$ Measuring drawn angles Angles can be measured by means of tools such as the goniometer, the quadrant, the sextant, the cross-staff or the protractor. The most common is the protractor, which is a drawing tool that allows us not only to measure but also to construct angles. It consists of a graduated half-disk with which we can measure angles of up to $$180^\circ$$.
{"url":"https://www.sangakoo.com/en/unit/measurement-of-angles-in-degrees-minutes-and-seconds","timestamp":"2024-11-07T06:43:27Z","content_type":"text/html","content_length":"24207","record_id":"<urn:uuid:42cddc68-75ae-49f9-8eee-2c2f32308d26>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00478.warc.gz"}
Investigation on the Indeterminate Information of Rock Joint Roughness through a Neutrosophic Number Approach Computer Modeling in Engineering & Sciences DOI: 10.32604/cmes.2021.017453 Investigation on the Indeterminate Information of Rock Joint Roughness through a Neutrosophic Number Approach 1Institute of Rock Mechanics, Ningbo University, Ningbo, 315211, China 2Faculty of Engineering, China University of Geosciences, Wuhan, 430074, China 3Department of Civil Engineering, Shaoxing University, Shaoxing, 312000, China *Corresponding Author: Liangqing Wang. Email: wangliangqing@cug.edu.cn Received: 11 May 2021; Accepted: 13 July 2021 Abstract: To better estimate the rock joint shear strength, accurately determining the rock joint roughness coefficient (JRC) is the first step faced by researchers and engineers. However, there are incomplete, imprecise, and indeterminate problems during the process of calculating the JRC. This paper proposed to investigate the indeterminate information of rock joint roughness through a neutrosophic number approach and, based on this information, reported a method to capture the incomplete, uncertain, and imprecise information of the JRC in uncertain environments. The uncertainties in the JRC determination were investigated by the regression correlations based on commonly used statistical parameters, which demonstrated the drawbacks of traditional JRC regression correlations in handling the indeterminate information of the JRC. Moreover, the commonly used statistical parameters cannot reflect the roughness contribution differences of the asperities with various scales, which induces additional indeterminate information. A method based on the neutrosophic number (NN) and spectral analysis was proposed to capture the indeterminate information of the JRC. The proposed method was then applied to determine the JRC values for sandstone joint samples collected from a rock landslide. The comparison between the JRC results obtained by the proposed method and experimental results validated the effectiveness of the NN. Additionally, comparisons made between the spectral analysis and common statistical parameters based on the NN also demonstrated the advantage of spectral analysis. Thus, the NN and spectral analysis combined can effectively handle the indeterminate information in the rock joint roughness. Keywords: Rock joint roughness coefficient; uncertainty; indeterminate information; neutrosophic number; spectral analysis Many major foundation projects have been constructed in complex geological conditions, and numerous high rock slopes are formed. The rock joints in the rock slopes are affected by the geological forces inside and outside the earth, making the slope easy to slip along the controlled joints [1,2]. Thus, the rock joint shear strength is crucial in determining the stability of rock slopes. To better estimate the rock joint shear strength, accurately determining the rock joint roughness is the first step faced by researchers and engineers [3–5]. The joint roughness coefficient (JRC) proposed by Muralha et al. [6] is closely related to the rock joint shear strength and is widely used in engineering practice. Its value is commonly obtained by visual judgment based on the proposed standard profiles [6]. Considering subjectivity exists in the visual comparison process determining the JRC, many quantitative approaches, i.e., experimental [7, 8], statistical [4,9–11], and fractal methods [12–14] have been proposed to determine the JRC objectively. Although these regression equations between JRC and roughness parameters have high correlation coefficients, there are still deviations in the JRC calculation results [15]. Two main reasons contributed to the calculation deviation. First, the mechanical properties of geological bodies contain much indeterminate information [16]. It is difficult to provide exact JRC values in these cases. In addition, there are nonuniformity, anisotropy, inhomogeneity, and scale effects on the rock joint roughness [17]. Second, the rock joint profile consists of low and high-frequency asperities [18,19]. The commonly used roughness parameters have a deficiency in capturing those kinds of asperities, which results in bias description of joint roughness [18]. Due to the above two limitations, finding a certain equation for accurately determining the JRC based on the traditional approach is not easy. Considering the contribution of different frequency components of the rock joint surface to the roughness are generally different, Wang et al. [18] derived a spectral roughness parameter to determine the JRC. This spectral roughness parameter considers the contribution differences among various scales of asperity components. However, the spectral analysis can only provide determinate expressions of the JRC but cannot express the indeterminate information of the JRC data. Due to the incompleteness of observations and measurements, it is necessary to approximate the JRC in indeterminate environments. Neutrosophy puts forward the concepts between true and false: neutral, indeterminate, incomplete, etc. The neutrosophic number (NN) concept was first introduced by Smarandache et al. [ 20–22], which has been proved to express determinate and indeterminate information. Thus, the combination of the NN [20–22] and spectral analysis [18] may overcome the limitations in determining the joint roughness mentioned above. The NN and other neutrosophic theories such as neutrosophic statistics, neutrosophic probability, and neutrosophic distribution are major branches of the neutrosophic theory, which deals with indeterminate data and indeterminate inference methods that contain degrees of indeterminacy as well [23–25]. Many researchers have contributed to developing the neutrosophic theory in recent years [ 16,23,24,26,27]. Karamaşa et al. [28] developed a new multi-criteria decision-making method and ranked factors affecting outsourcing-related third-party logistics using neutrosophic AHP. Aslam et al. [29] introduced the student t-test and F-test under neutrosophic statistics to address the drawbacks of classical statistics. Ye et al. [16] established JRC and the shear strength neutrosophic functions based on neutrosophic theory. Later on, Ye et al. [27] adopted neutrosophic number functions to study the anisotropy and scale effect for the indeterminate JRC. Then, Chen et al. [26] proposed neutrosophic interval statistical numbers to express JRC under indeterminate environments. To utilize the current and previous data information, Aslam [30] presented a new approach to determine roughness coefficient neutrosophic numbers based on the neutrosophic exponentially weighted moving average. Du et al. [31] originally expressed the mixed information of the simplified neutrosophic set and NN based on a simplified neutrosophic indeterminate set. Additionally, Du et al. [24] proposed a multi-attribute decision-making approach based on subtraction operational aggregation operators of simplified neutrosophic numbers. However, NN functions applied in the JRC determination mainly focus on the scale effect and anisotropy properties; they have not been applied to determine the JRC for a specific rock joint based on detailed spectral analysis. Hence, this original study will discuss the uncertainties in the existing JRC determination correlations and propose NN functions of determinate and indeterminate JRC based on the spectral analysis. This new approach to determine JRC can consider the contribution of various scale asperity components in determining the rock joint roughness and approximate indeterminate expressions of the JRC. The structure of this paper is listed as follows. In Section 2, the basic concepts for neutrosophic number functions and spectral analysis are presented. Then, in Section 3, the uncertainties in joint roughness coefficient determination are first discussed. A new method to calculate the JRC based on neutrosophic number functions and spectral analysis is then proposed. In Section 4, the comparisons between the new approach and commonly used statistical parameters to determine the JRC are carried out based on experimental results of the rock joints collected from an actual rock landslides area. Finally, the conclusion is presented in Section 5. 2 Concepts of Neutrosophic Number and Spectral Analysis Generally, a NN Z is presented as: Z=a+bI (1) where a and b are real numbers, and I denotes the indeterminate information, and I ∈ [IL, IU]. The indeterminate range I ∈ [IL, IU] can be statistically specified to satisfy practice requirements. In this equation, a and bI are the determinate and indeterminate parts, respectively. Moreover, the NN Z will degenerate to the real number a if b equals zero, which contains only the determinate information. It will degenerate to a NN bI without a determinate part if a equals zero, containing only the indeterminate information. For example, let us assume that a NN is z = 2 + 4I, where I ∈ [0, 0.2]. Thus, its determinate part is 2, and its indeterminate part is 4I. Then, there is z ∈ [2, 2.8] for I ∈ [0, 0.2]. Generally, the rock joints in engineering practice consist of various scales of asperities. The asperities with small inclinations but high amplitudes are low-frequency components, while the asperities with big inclinations but low amplitudes are high-frequency components. Due to the different frequencies, the various scales of asperity components have different influences on the rock joint shear behavior. To quantitively describe the contribution of asperities with various frequencies on the rock joint roughness, Wang et al. [18] proposed to adopt the spectral analysis method to determine the JRC. Although the rock joint morphology in engineering practice is so complex that difficult to be described by clear math equations, the spectral characteristics of the profile are readily analyzed by the power spectral density (PSD) [18]. With the help of PSD, the amplitude distribution for the joint profile can be effectively presented in the frequency domain. In this paper, the periodogram method is used to obtain the PSD for a rock joint profile. Herein, the periodogram method estimates the PSD by dividing the square of the joint profile Fourier transform modulus by its sampling length. The detailed information about the spectral analysis on a rock joint profile can be seen in the reference [18], and the calculation method is briefly described as follows. To simplify the calculation, the least-square fitting line of a rock joint profile is first aligned to be horizontal, and the average straight line of the profile is shifted to coincide with the coordinate axis x. After alignment, the profile can be presented as: y(x)=yo(x)−1L∫−L/2L/2yo(x)dx (2) where y(x) is the translated profile; yo(x) is the aligned profile, that the least square fitting line of the profile is horizontal; L is the projection length of the profile on the x-axis. The average power and Fourier transform of the aligned profile y(x) in the spatial frequency domain are respectively shown as: P2D=1L∫−L/2L/2y2(x)dx (3) Y(f)=∫−L/2L/2y(x)e−j2πfxdx (4) where P2D is the average power of y(x); Y(f) is the Fourier transform of y(x) in the spatial frequency domain; f is the spatial frequency of the harmonic components of y(x), and its unit is the reciprocal of length unit of the profile. According to the Wiener–Khintchine theorem [32]: P2D=∫fminfmaxPSD(f)df (5) {PSD(f)=1L|Y(f)|2PSD(−f)=PSD(f) (6) where PSD(f) is the power spectral density of the profile; fmin and fmax are the minimum and maximum frequencies of harmonic components, respectively. Note that the negative frequency range of the PSD has no physical meaning in actual engineering practice. From the aspect of energy conservation, the PSD of the negative frequency range can be superimposed on the corresponding positive range to obtain the single-sided power spectral density (PSD*) as: PSD∗(f)={2PSD(f),f≥00,f<0 (7) As indicated by Eqs. (5) and (7), the relation between P2D and PSD can be obtained as: P2D=∫0fmaxPSD∗(f)df (8) Eq. (8) indicated that the average power of the profile (i.e., the mean square value of amplitude heights of a rock joint profile) equals to the area enclosed by the PSD* of the profile and the frequency axis. Thus, it is possible to quantitatively analyze the amplitude and height of the joint profile in a certain frequency range. As the rock joint profile data collected in the engineering practice is discrete under a certain sampling interval, the discrete form of the PSD is presented as follows to facilitate practical PSD∗(fm)=2TsN|∑n=0n=N−1y(n)e−j2πmn/N|2 (9) fm=mNTs{m=0,1,2,…,N/2;N=2km=0,1,2,…,(N−1)/2;N=2k±1 (10) where Ts is the sampling interval; N is the number of discrete points; y(n) is the discrete form of y(x); fm is the discrete form of the harmonic frequency f; k is a positive integer. 3 Determination of JRC Based on Neutrosophic Number and Spectral Analysis 3.1 Uncertainties in the JRC Determination Among the various quantitative JRC determination methods, the statistical and fractal methods are widely adopted by researchers and engineers. However, the joint profile in engineering practice is self-affine; different users may get contradictory results with the fractal method [13]. The statistical parameters can be presented by consistent mathematical formulas and can be readily calculated with the help of computer programs; therefore, it is a convenient way to use statistical parameters to determine JRC. The commonly used statistical parameters are average relative height (Rave), maximum relative height (Rmax), standard deviation of height (SDh), average inclination angle (iave), the standard deviation of inclination angle (SDi), root mean square of the first deviation of the profile (Z2), roughness profile index (Rp), structure-function of the profile (SF) and so on [9,11,13,18,33–36]. The detailed calculation formulations of eight statistical parameters can be seen in Tab. 1. Researchers have been working on statistical parameters to determine the JRC quantitatively, and many regression correlations with high correlation coefficients have been proposed. However, existing regression correlations can only address the determinate information of JRC but cannot express and handle the indeterminate information. Therefore, there may exist deviations in the JRC calculation results for rock joints. To demonstrate the deviations arise from ignoring the incomplete, uncertain, and imprecise information in the JRC, two JRC regression relations based on the widely used statistical parameters Z2 and SF proposed by Li et al. [33] with a 0.4 mm sampling interval were adopted to calculate the JRC. The correlation coefficients for the two regression relations based Z2 and SF are 0.8760 and 0.8725, respectively. The formulas are listes as follows: JRCZ2=98.718Z21.6833 (11) JRCSF=137.1739SF−3.9998 (12) The ten standard profiles [8] shown in Fig. 1 are commonly used visual references to determine the JRC in engineering practice. First, the statistical parameters Z2 and SF for the ten standard rock joint profiles were obtained. Then, the JRC values of the standard profiles were calculated according to Eqs. (11), (12). The calculated results and true values of JRC for the standard profiles were both presented in Tab. 2. As shown in Tab. 2, although the correlation coefficients for the above two regression relations are high to 0.8760 and 0.8725, respectively, there are still significant deviations in calculated JRC values. Particularly, the absolute deviations are larger than 70% for the profiles with true JRC values 0.4 and 2.8. Thus, neither the determinate/crisp JRC values based on SF nor Z2 can approximate the roughness of standard rock joint profiles very well. Additionally, the deviations can also be found within other JRC regression correlations based on statistical 3.2 NN Functions for JRC Based on Spectral Analysis According to the spectral information presented by the rock joint profiles, Wang et al. [18] derived a spectral roughness parameter, PZ. The inclination angle and the amplitude height of rock joints as well as the shear direction can be considered by this spectral joint roughness parameter. Additionally, the contribution differences on joint roughness of various scales of asperities can also be presented. The formulation of the parameter PZ is listed as follows: PZ=Z2∗PfL (13) Z2∗=[1N∗∑n=0n=N−2(max(0,yn+1−yn)(xn+1−xn)2+(yn+1−yn)2)2]1/2=[1N∗∑n=0n=N∗−1sin2i∗]1/2 (14) Pf={∑m=0m=N/2−1Amfmave;N=2k∑m=0m=(N−1)/2−1Amfmave;N=2k±1 (15) Am=Pm+Pm+12(fm+1−fm)=Pm+Pm+12NTs (16) fmave=fm+fm+12 (17) where Pf is an average power index; Z2* is the modified root mean square of the first deviation of the profile; N* is the number of asperities of the rock joint profile facing the shear direction; i* is the inclination angle of asperities of the rock joint profile facing the shear direction; Am is the average power from the frequency fm to fm + 1; Pm is the value of the PSD* corresponding to fm; fmave is the average frequency. In the study [18], the PZ values were obtained from 112 rock joint profiles digitized by Li et al. [33] at a sampling interval of 0.4 mm, and the correlations between JRC and the roughness parameter PZ were established. The detailed procedure to establish the correlations can be seen in the reference [18]. As suggested by the authors [18], the mean trend correlation JRCmean between the JRC and PZ can be used to predict JRC values. However, as shown in Fig. 2, the JRC values have significant variabilities around the mean trend correlation. Thus, the JRC values determined by the parameter PZ have certain deviations, and the obtained regression correlations can only provide the determinate information of JRC. It cannot express and handle the indeterminate information of JRC. Thus, according to the concept of NNs and the limits of the JRC, the PZ-based NN function for JRC can be written JRC=(a1+b1I)+(a2+b2I)ln⁡PZ=(a1+a2ln⁡PZ)+(b1+b2ln⁡PZ)I (18) where a1 and a2 are the fitting parameters of the lower limit; b1 and b2 are the differences between the upper bound and lower bound; I is the indeterminacy. In this NN function, a1 + a2 ln PZ and (b 1 + b2 ln PZ) I are the determinate and indeterminate parts, respectively. According to Fig. 2, a1 and a2 are 35.80 and 4.84, respectively. The indeterminacy I is set to the interval [0, 0.5] through the statistical analysis of all collected rock joint profiles. Thus, b1 and b2 are twice the differences between the upper bound and lower bound. Obviously, b1 is 9.6, and b2 is 0. Then, the PZ-based NN function for the JRC can be presented as: JRC=(35.8+4.84ln⁡PZ)+9.6I,I∈[0,0.5] (19) The PZ-based NN function expressed in Eq. (19) has two advantages: (1) the spectral roughness parameter PZ is adopted to capture the rock joint profile spectral characteristics, which can effectively reflect the contribution of various scales of asperity components of joint profiles in determing the JRC; (2) the NN function for JRC can determine the JRC in indeterminate environments, which are very suitable to determine the JRC with vague, incomplete, imprecise, and indeterminate information. 4 Experimental Results and Discussion 4.1 Samples Preparation and Experimental Results Five well-matched sandstone joint samples were collected from the Majiagou landslide. The Majiagou landslide is in Guizhou Town, Zigui County, Yichang City, Hubei Province, China. It is located at the foot of Woniu Mountain on the left bank of the Yangtze River, on the left bank of the Zhaxi River, a tributary of the Yangtze River, and 2.1 km from the mouth of the Yangtze River. The bedrock stratum in the landslide area is the Upper Jurassic Suining Formation (J3S), which belongs to the middle of the Guizhou Group. The lithology is mainly gray-white feldspar quartz sandstone and fine sandstone, with purple-red silty mudstone and mudstone. The sandstone joint samples collected in this paper are mainly gray-white feldspar quartz sandstone joints. These collected five well-matched sandstone joint samples were cut into standard samples with a length and width of 10 cm and height of 5 cm. Then, a laser scanner was used to scan the surface of these samples with an accuracy of ±35 μm and a sampling interval of 0.2 mm. A photograph of the laser scanner and data acquisition system can be seen in Fig. 3. After the scanning tests, these samples were encapsulated into blocks of cement and cured for 21 days. Finally, these joint samples were subjected to direct shear tests under constant normal stresses. Particularly, the shear velocity was set as 0.4 mm/min, which is consistent with the ISRM suggestions [6]. The direct shear test results for these sandstone joint samples are presented in Tab. 2. In addition, Schmidt hammer rebound tests and direct shear tests performed on planar joint samples were conducted, from which the obtained JCS is 45.9 MPa, and the obtained φb is 26.8°. Then, the true JRC values for the collected sandstone joint samples were back-calculated from the JRC-JCS model [8] and tabulated in Tab. 3. 4.2 Validation of the Proposed Method The 3D surfaces of the collected five sandstone joints were also constructed with the same sampling interval (i.e., 0.4 mm), which is consistent with the PZ-based NN function for JRC determination. Since the samples had been placed parallel to the direct shear plane, these three-dimensional (3D) surfaces were also aligned by setting their corresponding least-square planes to be horizontal, consistent with the direct shear tests. The aligned 3D morphology surfaces for the collected sandstone joints were shown in Fig. 4. Then, two-dimensional (2D) profiles in the shear direction were uniformly extracted for roughness evaluation (the interval between the extracted profiles was set to 0.8 mm). As a result, 117, 116, 116, 118, and 114 2D sprofiles were evenly obtained from MJ1-5, MJ1-6, MJ1-7, MJ1-8, and MJ1-9. The roughness parameter PZ of each extracted rock joint profile was first calculated along with the shear direction; then, the JRC value was calculated with the PZ-based NN function presented in Eq. (19). Then, the arithmetic mean of all 2D profiles extracted from the same sandstone sample was adopted to represent the 3D JRC values. This method has been confirmed to be effective by researchers [ 37–39]. The calculated JRC values of the collected sandstone joint samples based on the PZ-based NN function are presented in Tab. 3. For comparison purposes, the commonly used statistical parameters, i.e., the Rave, Rmax, SDh, iave, SDi, Z2, Rp, and SF, were also calculated for the collected 112 rock joint profiles. The formulations for these statistical parameters can be seen in Tab. 1. Then the regression correlations between the JRC and selected statistical parameters are derived and shown in Fig. 5. Then, the NN functions based on the above-mentioned statistical parameters were also derived and listed as follows: JRC=(11.09ln⁡iave−20.09)+18I,I∈[0,0.5] (20) JRC=(0.60SDi−4.00)+16I,I∈[0,0.5] (21) JRC=(622.45Rave+1.89)+14I,I∈[0,0.5] (22) JRC=(5.11ln⁡SDh+10.74)+16I,I∈[0,0.5] (23) JRC=(134.20Rmax+1.60)+15I,I∈[0,0.5] (24) JRC=(e3.21−0.03/(Rp−0.99)−3.5)+16I,I∈[0,0.5] (25) JRC=(5.96ln⁡SF+33.50)+18I,I∈[0,0.5] (26) JRC=(11.70ln⁡Z2+22.55)+17I,I∈[0,0.5] (27) According to the derived statistical parameter-based NN functions, i.e., Eqs. (20)~(27), the JRC of the collected five joint samples were obtained. The calculated results are presented in Tab. 3 and plotted in Fig. 6. The results in Tab. 3 show that the PZ-based NN function and statistical parameters-based NN functions can approximate the JRC values for joint samples. However, the JRC values calculated by the commonly used statistical parameter-based NN functions are located in a much larger JRC range than the PZ-based NN function, which means the PZ-based NN function is much more sensitive and effective than the traditional approaches in determining the rock joint roughness. Especially, Fig. 6 shows that the statistical parameters may not present a correct JRC range for the sandstone joints. For example, the JRC ranges calculated by the Rmax based NN function for the sandstone joints MJ1-6, MJ1-8, and MJ1-9 deviate significantly from the experimental back-calculated JRC values. These differences in the JRC calculation results are that the PZ-based NN function comprehensively considers the effect of the shear direction, amplitude height, and inclination angle and effectively reflects the contribution of various scales of asperity components on the roughness. In contrast, the statistical parameters based NN functions only present one-sided characteristics of rock joint roughness (e.g., iave and Z2 only present the inclination angle of a joint profile), which contains more uncertainties in the JRC determination. Thus, the proposed approach, that is, the PZ-based NN function, can present much more effective JRC calculation results for natural rock joints. Generally, the JRC values have good correlations with commonly used statistical parameters. However, the indeterminate information of the JRC leads to variabilities in the predicted results of JRC regression correlations. Researchers usually adopt the mean trend of correlations to predict JRC values, while this subjected approach may result in biased results. Here, the mean trend correlations of the above-mentioned statistical parameters and the proposed PZ were used to present this information. As indicated in Fig. 7, all the predicted JRC values deviated from the true JRC values to some extent. Especially for the first standard profile with JRC = 0.4, negative JRC values were obtained by the mean trend correlations based on PZ, Z2, SF, and iave. This phenomenon once again shows that the traditional regression correlations cannot deal with the indeterminate information of the JRC. Incomplete, imprecise, and indeterminate problems are generally encountered for the complex surface of rock joints. The NNs are preferred compared to fuzzy, rough, grey sets due to efficiency, flexibility, and easiness for expressing determinate and/or indeterminate information. Additionally, various scales of asperities are displayed on joint surfaces. The spectral analysis was adopted to simultaneously capture the contributions of the high and low-frequency asperity components in determining joint roughness. Through the combination of the NNs and spectral analysis, this paper proposed a new method to determine the JRC for rock joints accurately. The main conclusions are drawn as follows. The JRC regression correlation based on commonly used statistical parameters could not handle the indeterminate information of the JRC. As a result, there were still significant deviations in calculated JRC values, although the JRC had a good correlation with the commonly used statistical parameters. Particularly, the absolute deviations for the calculated JRC results based on the Z2 and SF were larger than 70% for the profiles with true JRC values 0.4 and 2.8. These deviations can be attributed to the indeterminate information that exists in the joint roughness determination and the contribution differences of various scales of asperities on the joint roughness. To overcome the limitations of the traditional JRC determination approaches, NN functions based on the spectral roughness parameter were derived based on 112 rock joint profiles collected from the literature. Then, the derived NN function was applied to determine the JRC values of 5 well-matched sandstone joint samples collected from the Majiagou rock landslide area. The comparison between the JRC results obtained from the proposed method and experimental results validated the effectiveness of the spectral analysis based NN functions. Additionally, comparisons made between the spectral analysis and common statistical parameters based on NN also demonstrated the advantage of spectral analysis. The combination of the NNs and spectral analysis can effectively address the indeterminate information that exists in the joint roughness and the contribution differences of various scales of asperities on the joint roughness. In addition to the NN, the neutrosophic theory contains many other methods, such as neutrosophic sets, neutrosophic interval statistical numbers, and neutrosophic interval functions. They can also address the indeterminate information in the JRC determination process. In future work, we will further develop and investigate JRC determination approaches through other neutrosophic theories and compare the differences and efficiencies between different neutrosophic approaches. Funding Statement: The authors would like to thank very much to the anonymous reviewers and editors whose constructive comments are helpful for this paper’s revision. This work is supported by Key Program of National Natural Science Foundation of China (No. 41931295) and General Program of National Natural Science Foundation of China (No. 41877258). Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study. 1. Yong, R., Li, C. D., Ye, J., Huang, M., Du, S. G. (2016). Modified limiting equilibrium method for stability analysis of stratified rock slopes. Mathematical Problems in Engineering, 2016(1), 1–9. DOI 10.1155/2016/8381021. [Google Scholar] [CrossRef] 2. Tang, H. M., Yong, R., Ez Eldin, M. A. M. (2016). Stability analysis of stratified rock slopes with spatially variable strength parameters: The case of Qianjiangping landslide. Bulletin of Engineering Geology and the Environment, 76(3), 839–853. DOI 10.1007/s10064-016-0876-4. [Google Scholar] [CrossRef] 3. Yong, R., Qin, J. B., Huang, M., Du, S. G., Liu, J. et al. (2018). An innovative sampling method for determining the scale effect of rock joints. Rock Mechanics and Rock Engineering, 52(3), 935–946. DOI 10.1007/s00603-018-1675-y. [Google Scholar] [CrossRef] 4. Yong, R., Ye, J., Li, B., Du, S. G. (2018). Determining the maximum sampling interval in rock joint roughness measurements using Fourier series. International Journal of Rock Mechanics and Mining Sciences, 101(1–2), 78–88. DOI 10.1016/j.ijrmms.2017.11.008. [Google Scholar] [CrossRef] 5. Wu, Q., Jiang, Y. F., Tang, H. M., Luo, H., Wang, X. et al. (2020). Experimental and numerical studies on the evolution of shear behaviour and damage of natural discontinuities at the interface between different rock types. Rock Mechanics and Rock Engineering, 53(8), 3721–3744. DOI 10.1007/s00603-020-02129-9. [Google Scholar] [CrossRef] 6. Muralha, J., Grasselli, G., Tatone, B., Blümel, M., Chryssanthakis, P. et al. (2014). ISRM suggested method for laboratory determination of the shear strength of rock joints: Revised version. Rock Mechanics and Rock Engineering, 47(1), 291–302. DOI 10.1007/s00603-013-0519-z. [Google Scholar] [CrossRef] 7. Barton, N. (1973). Review of a new shear-strength criterion for rock joints. Engineering Geology, 7(4), 287–332. DOI 10.1016/0013-7952(73)90013-6. [Google Scholar] [CrossRef] 8. Barton, N., Choubey, V. (1977). The shear strength of rock joints in theory and practice. Rock Mechanics, 10(1–2), 1–54. DOI 10.1007/BF01261801. [Google Scholar] [CrossRef] 9. Wang, L. Q., Wang, C. S., Khoshnevisan, S., Ge, Y. F., Sun, Z. H. (2017). Determination of two-dimensional joint roughness coefficient using support vector regression and factor analysis. Engineering Geology, 231(4), 238–251. DOI 10.1016/j.enggeo.2017.09.010. [Google Scholar] [CrossRef] 10. Liu, X., Zhu, W., Liu, Y., Yu, Q., Guan, K. (2021). Characterization of rock joint roughness from the classified and weighted uphill projection parameters. International Journal of Geomechanics, 21(5), 4021052. DOI 10.1061/(ASCE)GM.1943-5622.0001963. [Google Scholar] [CrossRef] 11. Yong, R., Fu, X., Huang, M., Liang, Q., Du, S. G. (2017). A rapid field measurement method for the determination of joint roughness coefficient of large rock joint surfaces. KSCE Journal of Civil Engineering, 22(1), 101–109. DOI 10.1007/s12205-017-0654-2. [Google Scholar] [CrossRef] 12. Xie, H., Wang, J. A. (1999). Direct fractal measurement of fracture surfaces. International Journal of Solids and Structures, 36(20), 3073–3084. DOI 10.1016/S0020-7683(98)00141-3. [Google Scholar ] [CrossRef] 13. Kulatilake, P. H. S. W., Balasingam, P., Park, J., Morgan, R. (2006). Natural rock joint roughness quantification through fractal techniques. Geotechnical and Geological Engineering, 24(5), 1181–1202. DOI 10.1007/s10706-005-1219-6. [Google Scholar] [CrossRef] 14. Kulatilake, P. H. S. W., Du, S. G., Ankah, M. L. Y., Yong, R., Sunkpal, D. T. et al. (2021). Non-stationarity, heterogeneity, scale effects, and anisotropy investigations on natural rock joint roughness using the variogram method. Bulletin of Engineering Geology and the Environment, 80(8), 6121–6143. DOI 10.1007/s10064-021-02321-3. [Google Scholar] [CrossRef] 15. Li, Y., Xu, Q., Aydin, A. (2016). Uncertainties in estimating the roughness coefficient of rock fracture surfaces. Bulletin of Engineering Geology and the Environment, 76(3), 1153–1165. DOI 10.1007/s10064-016-0994-z. [Google Scholar] [CrossRef] 16. Ye, J., Yong, R., Liang, Q. F., Huang, M., Du, S. G. (2016). Neutrosophic functions of the joint roughness coefficient and the shear strength: A case study from the pyroclastic rock mass in Shaoxing City. China Mathematical Problems in Engineering, 2016(1), 1–9. DOI 10.1155/2016/4825709. [Google Scholar] [CrossRef] 17. Du, S. G. (1998). Research on complexity of surface undulating shapes of rock joints. Journal of China University of Geosciences, 9(1), 86–89. [Google Scholar] 18. Wang, C. S., Wang, L. Q., Karakus, M. (2019). A new spectral analysis method for determining the joint roughness coefficient of rock joints. International Journal of Rock Mechanics and Mining Sciences, 113, 72–82. DOI 10.1016/j.ijrmms.2018.11.009. [Google Scholar] [CrossRef] 19. Ficker, T., Martišek, D. (2016). Alternative method for assessing the roughness coefficients of rock joints. Journal of Computing in Civil Engineering, 30(4), 4015059. DOI 10.1061/(ASCE) CP.1943-5487.0000540. [Google Scholar] [CrossRef] 20. Smarandache, F. (2013). Introduction to neutrosophic measure, neutrosophic integral, and neutrosophic probability. Craiova: Sitech Publishing House. [Google Scholar] 21. Smarandache, F. (1998). Neutrosophy: Neutrosophic probability, set, and logic. Rehoboth: American Research Press. [Google Scholar] 22. Smarandache, F. (2014). Introduction to neutrosophic statistics. Craiova: Sitech & Education Publishing. [Google Scholar] 23. Ye, J., Cui, W. (2019). Neutrosophic compound orthogonal neural network and its applications in neutrosophic function approximation. Symmetry, 11(2), 147. DOI 10.3390/sym11020147. [Google Scholar ] [CrossRef] 24. Du, S. G., Yong, R., Ye, J. (2020). Subtraction operational aggregation operators of simplified neutrosophic numbers and their multi-attribute decision making approach. Neutrosophic Sets and Systems, 33, 157–168. DOI 10.5281/zenodo.3782881. [Google Scholar] [CrossRef] 25. Ye, J., Song, J. M., Du, S. G. (2020). Correlation coefficients of consistency neutrosophic sets regarding neutrosophic multi-valued sets and their multi-attribute decision-making method. International Journal of Fuzzy Systems, 1–8. DOI 10.1007/s40815-020-00983-x. [Google Scholar] [CrossRef] 26. Chen, J., Ye, J., Du, S. G., Yong, R. (2017). Expressions of rock joint roughness coefficient using neutrosophic interval statistical numbers. Symmetry, 9(7), 123. DOI 10.3390/sym9070123. [Google Scholar] [CrossRef] 27. Ye, J., Chen, J., Yong, R., Du, S. G. (2017). Expression and analysis of joint roughness coefficient using neutrosophic number functions. Information–An International Interdisciplinary Journal, 8 (2), 69. DOI 10.3390/info8020069. [Google Scholar] [CrossRef] 28. Karamaşa, Ç., Demir, E., Memiş, S., Korucuk, S. (2021). Weighting the factors affectıng logıstıcs outsourcıng. Decision Making: Applications in Management and Engineering, 4(1), 19–33. DOI 10.31181/dmame2104019k. [Google Scholar] [CrossRef] 29. Aslam, M., Bantan, R. A. R., Khan, N. (2021). Design of tests for mean and variance under complexity–An application to rock measurement data. Measurement, 177, 109312. DOI 10.1016/ j.measurement.2021.109312. [Google Scholar] [CrossRef] 30. Aslam, M. (2019). A new method to analyze rock joint roughness coefficient based on neutrosophic statistics. Measurement, 146, 65–71. DOI 10.1016/j.measurement.2019.06.024. [Google Scholar] [ 31. Du, S. G., Ye, J., Yong, R., Zhang, F. (2020). Simplified neutrosophic indeterminate decision making method with decision makers’ indeterminate ranges. Journal of Civil Engineering and Management , 26(6), 590–598. DOI 10.3846/jcem.2020.12919. [Google Scholar] [CrossRef] 32. Allen, R. L., Mills, D. (2004). Signal analysis: Time, frequency, scale, and structure. USA: John Wiley & Sons. [Google Scholar] 33. Li, Y., Zhang, Y. (2015). Quantitative estimation of joint roughness coefficient using statistical parameters. International Journal of Rock Mechanics and Mining Sciences, 77, 27–35. DOI 10.1016/ j.ijrmms.2015.03.016. [Google Scholar] [CrossRef] 34. Yong, R., Ye, J., Liang, Q. F., Huang, M., Du, S. G. (2017). Estimation of the joint roughness coefficient (JRC) of rock joints by vector similarity measures. Bulletin of Engineering Geology and the Environment, 77(2), 735–749. DOI 10.1007/s10064-016-0947-6. [Google Scholar] [CrossRef] 35. Grasselli, G., Wirth, J., Egger, P. (2002). Quantitative three-dimensional description of a rough surface and parameter evolution with shearing. International Journal of Rock Mechanics and Mining Sciences, 39(6), 789–800. DOI 10.1016/S1365-1609(02)00070-9. [Google Scholar] [CrossRef] 36. Marache, A., Riss, J., Gentier, S., Chilès, J. P. (2002). Characterization and reconstruction of a rock fracture surface by geostatistics. International Journal for Numerical and Analytical Methods in Geomechanics, 26(9), 873–896. DOI 10.1002/nag.228. [Google Scholar] [CrossRef] 37. Tatone, B. S. A., Grasselli, G. (2010). A new 2D discontinuity roughness parameter and its correlation with JRC. International Journal of Rock Mechanics and Mining Sciences, 47(8), 1391–1400. DOI 10.1016/j.ijrmms.2010.06.006. [Google Scholar] [CrossRef] 38. Zhang, G. C., Karakus, M., Tang, H. M., Ge, Y. F., Zhang, L. (2014). A new method estimating the 2D joint roughness coefficient for discontinuity surfaces in rock masses. International Journal of Rock Mechanics and Mining Sciences, 72(B8), 191–198. DOI 10.1016/j.ijrmms.2014.09.009. [Google Scholar] [CrossRef] 39. Ge, Y. F., Lin, Z., Tang, H. M., Zhao, B. (2021). Estimation of the appropriate sampling interval for rock joints roughness using laser scanning. Bulletin of Engineering Geology and the Environment, 80(5), 3569–3588. DOI 10.1007/s10064-021-02162-0. [Google Scholar] [CrossRef] This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.techscience.com/CMES/v129n2/44813/html","timestamp":"2024-11-07T21:59:35Z","content_type":"application/xhtml+xml","content_length":"139148","record_id":"<urn:uuid:0bb8b101-cee0-468f-96c6-5324bf6cde61>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00851.warc.gz"}
Youtube data api V3 insert chat message error on youtube-data-api Youtube data api V3 insert chat message error 429 views Asked by Ben At I'm trying to insert messages into a live chat in a broadcast. I've used the tryout widget on Google's API documentation page, and it works fine. I've supplied the following body: "snippet": { "liveChatId": "EiEKGFVDblhXWlgxUlNqWEdwMUlEWDBr[...]", "textMessageDetails": { "messageText": "Hello!" "type": "textMessageEvent" I now tried to use the Python API: >>> { ... "snippet": { ... "liveChatId": "EiEKGFVDblhXWlgxUlNqWEdwMUlEWDBr[...]", ... "textMessageDetails": { ... "messageText": "Hello" ... }, ... "type": "textMessageEvent" ... } ... } >>> youtube.liveChatMessages().insert(part="snippet", body=body).execute() But I'm getting: googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/youtube/v3/liveChat/messages?part=snippet&alt=json returned "snippet.text_message_details.message_text text is not valid."> Any idea why this might occur? To me, the request seems identical to the one in the documentation, yet one works and the other doesn't. I was also stuck in this same problem. The solution of; googleapiclient.errors.HttpError: <HttpError 400 when requesting https://www.googleapis.com/youtube/v3/liveChat/messages?part=snippet&alt=json returned "snippet.text_message_details.message_text text is not valid."> is to just create your youtube channel first and then re-run this code. I hope it works for all.
{"url":"https://techqa.club/v/q/youtube-data-api-v3-insert-chat-message-error-51043563","timestamp":"2024-11-07T01:34:48Z","content_type":"text/html","content_length":"32517","record_id":"<urn:uuid:b5867e26-2eec-417b-977b-1e884c0c6907>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00207.warc.gz"}
Vedic Astronomy Vedic Astronomy III All events are connected by Time, all places are connected by Space and all effects are connected by Cause in the Space-Time-Causality equation. The Science of Time ( Astrology ) and the Science of Cause ( Transcendental Philosophy ) assume great significance in the realm of superconscient learning The Elements Used in the Computation of Time The main element used is the Sun itself. One solar day is the time taken by the earth to rotate around its own axis. One solar day is made up of 24 solar hours, one solar hour is sixty minutes and one minute is sixty seconds. The time taken by Sol to make a circuit of the Zodiac from the First Point of the Sidereal Zodiac is called a Sidereal Year. This is 365.25363 solar days. A Tropical Year is the time taken by the Sun to make a circuit of the Tropical Zodiac This is 365.242194 solar days. The Five Types of Years There are five types of years 1) Solar Year 2) Jupiterian Year 3) Savana Year 4) Lunar Year 5) Sideral Year Solar Year The time taken by Sol ( Sun ) to cross one degree is called a solar day. When the Sun crosses from one Sign to another, this is called Surya Sankrama ( transit to another sign ). The time taken from one Surya Sankrama to another is called one solar month. The motion of the Sun is fastest at the first week of January and is slowest at the first week of July. In other words, since the Sun's motion is fastest in the Vedic months of Sagittarius and Capricorn, it takes only 29 days for the Sun to traverse 30 degrees of Sag and Cap. Conversely, it takes 32 days for Sol to traverse 30 degrees of Cancer, since his motion is slowest at Apogee, in the Vedic month of Cancer. Jupiterian Year ( Barhaspathya ) One Barhaspathya is time taken by Jove ( Jupiter ) to traverse 30 degrees of a sign. The duration is 361 days and a Jupiterian Cycle is roughly 12 years. Savana Year One Savana day is reckoned from Sunrise to Sunrise. 30 such Savana day is called one Savana month. 360 such days is one Savana year. Lunar Year A lunar month is the time calculated from one New Moon to the next New Moon. Since during a solar year, 12 Full Moon were visible , the Zodiac was divided into 12 constellations. 12 Lunar months constitute one Lunar Year. This is 354.367 days. This is 11 days less than the solar year. Sidereal Year One sidereal day is time taken by Luna to traverse a constellation of 13 degrees and 20 minutes. The Moon takes 27.3 days to revolve around the earth. 27.3*12 is one Sidereal Year and it is 327.6 Apparent Solar Day ( Savana Dina ) The time taken by the earth to rotate around its own axis. From a geocentric perspective, the Sun moves one degree per day. Sidereal Day ( Nakshatra Dina ) This is the time taken by the earth to rotate around its own axis with regard to Sidereus, the constellation of fixed stars. This is 23 hours and 56 mins and 4.0953 seconds. An apparent Solar Day is 24 hours. According to Indian Astronomy, a solar day is 60 Nadis. 60 Vinadis is one Nadi ( Nazhika ) and 60 Tatparas is one Vinadi. There are minuter subdivisions like Pratatparas ( 60 Pratatparas constitute one Tatpara ), corresponding to micro seconds and nano seconds in Western time calculations. 2 and a half Nadis is one hour or 24 minutes is one Nadi. While as per Western calculations, a day is reckoned from midnight to midnight, an Indian day is reckoned from sunrise to sunrise and a Hijra day is calculated from sunset to sunset. Sidereal Solar Year The time taken by the Sun ( from a geocentric perspective ), to make a circuit of the Sidereal Zodiac . This is 365 days, 6 hours 9 minutes and 9.8 seconds. Article by G Kumar, astrologer, writer and programmer of www.eastrovedica.com He has 15 years research experience in Stock Market Astrology and in various other branches of Astrology. Recentlyhe was awarded a Certificate by the Planetary Gemologists Association as a Planetary Gem Advisor. Free Astro Tips at http://zodiacastrology.blogspot.com & lens http://squidoo.com/FinancialAstrology
{"url":"http://www.dkarma.com/occult/astro/vedicastronomy/astronomy3.html","timestamp":"2024-11-08T01:56:15Z","content_type":"text/html","content_length":"9739","record_id":"<urn:uuid:282e8dc4-f8fb-41fd-ac9b-54f5b381d11e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00273.warc.gz"}
A Pirate's Guide to Accuracy, Precision, Recall, and Other Scores Once you've built your classifier, you need to evaluate its effectiveness with metrics like accuracy, precision, recall, F1-Score, and ROC curve. Whether you're inventing a new classification algorithm or investigating the efficacy of a new drug, getting results is not the end of the process. Your last step is to determine the correctness of the results. There are a great number of methods and implementations for this task. Like many aspects of data science, there is no single best measurement for results quality; the problem domain and data in question determine appropriate approaches. That said, there are a few measurements that are commonly introduced thanks to their conceptual simplicity, ease of implementation, and wide usefulness. Today, we will discuss seven such With these methods in your arsenal, you will be able to evaluate the correctness of most results sets across most domains. One important thing to consider is the type of algorithm that is giving these results. Each of these metrics is designed for the output of a (binary) classification algorithm. These outputs have a number of records, and for each record there will be a "true" or "false" classification. However, we will discuss how to extend these measurements to other types of output where appropriate. Briefly, a classification algorithm takes some input set and, for each member of the sets, classifies it as one of a fixed set of outputs. Examples of classification include facial recognition (match or not match), spam filters, and other kinds of pattern recognition with categorical output. Binary classification is a type of classification where there are only two possible outputs. An example of binary classification comes from perhaps the most famous educational data set in data science: the Titanic passenger dataset, where the binary outcome is survival of the disastrous sinking. Finally, a quick note on syntax. The code samples in this article make heavy use of list comprehension with [function(element) for element in list if condition]. I use this syntax for its concision. If you are unfamiliar with this syntax, here is a resource. Otherwise, I tend to be explicit in my implementations, much shorter implementations of the following functions are trivial to construct. Seven Metrics for the Seven Seas While we will implement these measurements ourselves, we will also use the popular sklearn library to perform each calculation. Generally, it is best to use an established library like sklearn to perform standard operations such as these as the library's code is optimized, tested, and easy to use. This saves you time and ensures higher code quality, letting you focus on the differentiating aspects of your data science project. For this article, we'll be exploring a variety of metrics and several example output sets. You can follow along on FloydHub's data science platform by clicking the link below. Let's start with defining an extremely simple example binary dataset. Imagine, for a moment, that you are a pirate instead of a programmer. Furthermore, imagine that you have a device that purports to identify whether a ship on the horizon is carrying treasure, and that the device came with the data that we synthesized earlier. In this example, a "1" or positive identifies a ship with treasure (💰), and a "0" or negative identifies a ship without treasure (🧦). We'll use this example throughout the article to give meaning to the metrics. # Setup A actual_a = [1 for n in range(10)] + [0 for n in range(10)] predicted_a = [1 for n in range(9)] + [0, 1, 1] + [0 for n in range(8)] X Raid-1 Raid-2 Raid-3 Raid-4 Raid-5 Raid-6 Raid-7 Raid-8 Raid-9 Raid-10 Raid-11 Raid-12 Raid-13 Raid-14 Raid-15 Raid-16 Raid-17 Raid-18 Raid-19 Raid-20 Actual 💰 💰 💰 💰 💰 💰 💰 💰 💰 💰 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 Predicted 💰 💰 💰 💰 💰 💰 💰 💰 💰 🧦 💰 💰 🧦 🧦 🧦 🧦 🧦 🧦 🧦 🧦 This only produces 20 results. Statisticians debate about the fewest number of results needed for a conclusion to be, well, conclusive, but I wouldn't want to use fewer than 20. The number of results is of course domain and problem dependent, but in this case these 20 fake results will be enough to demonstrate the various metrics. Confusion Matrix A holistic way of viewing true and false positive and negative results is with a confusion matrix. Despite the name, it is a straightforward table that provides an intuitive summary of the inputs to the calculations that we made above. Rather than a decimal correctness, the confusion matrix gives us counts of each of the types of results. # Confusion Matrix from sklearn.metrics import confusion_matrix def my_confusion_matrix(actual, predicted): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) true_negatives = len([a for a, p in zip(actual, predicted) if a == p and p == 0]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == 1]) false_negatives = len([a for a, p in zip(actual, predicted) if a != p and p == 0]) return "[[{} {}]\n [{} {}]]".format(true_negatives, false_positives, false_negatives, true_positives) print("my Confusion Matrix A:\n", my_confusion_matrix(actual_a, predicted_a)) print("sklearn Confusion Matrix A:\n", confusion_matrix(actual_a, predicted_a)) This yields the following table: [[8 2] [1 9]] Where the numbers correspond to: [[true_negatives false_positives] [false_negatives true_positives]] While there is no analytic conclusion in a confusion matrix, they are useful for two reasons. The first is that it is a concise visual representation of the absolute counts of correct and incorrect output. Furthermore, the confusion matrix introduces us to the four building blocks of our other metrics. We're back on the pirate ship and evaluating the test results that came with the treasure-seeking device. In this case: • A "True Positive" (TP), is when the device correctly identifies that a ship is carrying treasure. You raid the ship and share plunder among the crew. • A "False Positive" (FP) is when the device says that a ship has treasure but it is empty. You raid the ship and the crew stages a mutiny over the disappointment of finding it empty. • A "False Negative" (FN) is when the device says that a ship does not have treasure but it actually does. You let the ship pass, but when you get back to port the crew hears of another ship taking the bounty and some defect to the more successful crew. • A "True Negative" (TN) is when the device correctly identifies that the ship is devoid of treasure. Your crew saves their strength as you let the ship pass. Obviously, you want to maximize acquired treasure and minimize crew frustration. Should you use the device? We will calculate metrics to help you make an informed decision. $$Accuracy = \dfrac{True\space Positive + True\space Negative}{True\space Positive + True\space Negative + False\space Positive + False\space Negative}$$ $$Accuracy = \dfrac{Ships\space carrying\space treasures\space correctly\space identified + Ships\space without\space treasures\space correctly\space identified}{All\space types\space of\space After synthesizing this data, our first metric is accuracy. Accuracy is the number of correct predictions over the output size. It is an incredibly straightforward measurement, and thanks to its simplicity it is broadly useful. Accuracy is one of the first metrics I calculate when evaluating results. # Accuracy from sklearn.metrics import accuracy_score # Accuracy = TP + TN / TP + TN + FP + FN def my_accuracy_score(actual, predicted): #threshold for non-classification? true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) true_negatives = len([a for a, p in zip(actual, predicted) if a == p and p == 0]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == 1]) false_negatives = len([a for a, p in zip(actual, predicted) if a != p and p == 0]) return (true_positives + true_negatives) / (true_positives + true_negatives + false_positives + false_negatives) print("my Accuracy A:", my_accuracy_score(actual_a, predicted_a)) print("sklearn Accuracy A:", accuracy_score(actual_a, predicted_a)) The accuracy on this output is .85, which means that 85% of the results were correct. Note that, on average, random results yield an accuracy of 50%, so this is a major improvement (of course, this data is fabricated, but the point stands). This seems pretty good! Your crew will only doubt your leadership 15% of the time. That said, a mutiny at sea is worse than a grumbling on the docks, so you are right to be more concerned about false positives. Fortunately, another metric, precision, can help. $$Precision = \dfrac{True\space Positive}{True\space Positive + False\space Positive}$$ $$Precision = \dfrac{Ships\space carrying\space treasures\space correctly\space identified}{Ships\space carrying\space treasures\space correctly\space identified + Ships\space incorrectly\space labeled\space as\space carrying\space treasures}$$ Precision is a similar metric, but it only measures the rate of false positives. In certain domains, like spam detection, a false positive is a worse error than a false negative (generally, missing an important email is worse than the inconvenience of deleting a piece of spam that snuck through the filter). # Precision from sklearn.metrics import precision_score # Precision = TP / TP + FP def my_precision_score(actual, predicted): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == 1]) return true_positives / (true_positives + false_positives) print("my Precision A:", my_precision_score(actual_a, predicted_a)) print("sklearn Precision A:", precision_score(actual_a, predicted_a)) Our precision is approximately .818, lower than our accuracy. This means that false positives are a larger part of our error set. Indeed, we have two false positives in this example and only one false negative. This does not bode well for your career as a pirate captain if nearly one in five raids end in mutiny! However, for a more warlike crew, the disappointment of missing out on a raid might outweigh the cost of a pointless boarding. In such a situation, you would want to optimize for recall to reduce false negatives. $$Recall = \dfrac{True\space Positive}{True\space Positive + False\space Negative}$$ $$Recall = \dfrac{Ships\space carrying\space treasures\space correctly\space identified}{Ships\space carrying\space treasures\space correctly\space identified + Ships\space carrying\space treasures \space incorrect\space classified\space as\space ships\space without\space treasures}$$ Recall is the opposite of precision, it measures false negatives against true positives. False negatives are especially important to prevent in disease detection and other predictions involving # Recall from sklearn.metrics import recall_score def my_recall_score(actual, predicted): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == 1]) false_negatives = len([a for a, p in zip(actual, predicted) if a != p and p == 0]) return true_positives / (true_positives + false_negatives) print("my Recall A:", my_recall_score(actual_a, predicted_a)) print("sklearn Recall A:", recall_score(actual_a, predicted_a)) Our recall is .9, higher than the other two metrics. If we are especially concerned with reducing false negatives, then this is the best result. As a captain using your device, you are only letting one in ten ships pass by with their treasure holds intact. Precision - Recall Curve A precision-recall curve is a great metric for demonstrating the tradeoff between precision and recall for unbalanced datasets. In an unbalanced dataset, one class is substantially over-represented compared to the other. Our dataset is fairly balanced, so a precision-recall curve isn’t the most appropriate metric, but we can calculate it anyway for demonstration purposes. from sklearn.metrics import precision_recall_curve import matplotlib.pyplot as plt precision, recall, _ = precision_recall_curve(actual_a, predicted_a) plt.step(recall, precision, color='g', alpha=0.2, where='post') plt.fill_between(recall, precision, alpha=0.2, color='g', step='post') plt.ylim([0.0, 1.0]) plt.xlim([0.0, 1.0]) plt.title('Precision-Recall curve') Our precision and recall are pretty similar, so the curve isn’t especially dramatic. Again, this metric is better suited to unbalanced classifiers. $$F1-Score = 2 * \dfrac{Recall * Precision}{Recall + Precision}$$ What if you want to balance the two objectives: high precision and high recall? Or, as a pirate captain, you want to optimize towards capturing treasure and avoiding mutiny? We calculate the F1-score as the harmonic mean of precision and recall to accomplish just that. While we could take the simple average of the two scores, harmonic means are more resistant to outliers. Thus, the F1-score is a balanced metric that appropriately quantifies the correctness of models across many domains. # F1 Score from sklearn.metrics import f1_score # Harmonic mean of (a, b) is 2 * (a * b) / (a + b) def my_f1_score(actual, predicted): return 2 * (my_precision_score(actual, predicted) * my_recall_score(actual, predicted)) / (my_precision_score(actual, predicted) + my_recall_score(actual, predicted)) print("my F1 Score A:", my_f1_score(actual_a, predicted_a)) print("sklearn F1 Score A:", f1_score(actual_a, predicted_a)) The score of .857, slightly above that of the average, may or may not give you the confidence to rely on the device to help you decide which ships to raid. In evaluating the tradeoffs between precision and recall, you might want to draw an ROC curve on the back of one of the maps on the navigation deck. Area Under the Curve Unlike precision-recall curves, ROC (Receiver Operator Characteristic) curves work best for balanced data sets such as ours. Briefly, AUC is the area under the ROC curve that represents the tradeoff between Recall (TPR) and Specificity (FPR). Like the other metrics we have considered, AUC is between 0 and 1, with .5 as the expected value of random prediction. If you are interested in learning more, there is a great discussion on StackExchange as usual. Sklearn provides an implementation for AUC on binary classification. The relevant equations are as follows: $$True\space Positive\space Rate\space (a.k.a.\space Recall\space or\space Sensitivity) = \dfrac{True\space Positive}{True\space Positive + False Negative}$$ Refer back to the section on recall for this one; the TPR and recall are equivalent metrics. $$False\space Positive\space Rate\space (a.k.a.\space Specificity) = \dfrac{False\space Positive}{False\space Positive + True\space Negative}$$ $$False\space Positive\space Rate = \dfrac{Ships\space without \space treasures\space incorrect\space classified\space as\space ships\space carrying\space treasures}{Ships\space without \space treasures\space incorrect\space classified\space as\space ships\space carrying\space treasures + Ships\space carrying\space treasures\space incorrect\space classified\space as\space ships\space without\space treasures}$$ The specificity or FPR of a classifier is its “false alarm metric.” Basically, it measures the frequency at which the classifier “cries wolf,” or predicts a positive where a negative is observed. In our example, a false positive is grounds for mutiny and should be avoided at all costs. We consider the tradeoff between TPR and FPR with our ROC curve for our balanced classifier. from sklearn.metrics import roc_auc_score from sklearn.metrics import roc_curve import matplotlib.pyplot as plt print("sklearn ROC AUC Score A:", roc_auc_score(actual_a, predicted_a)) fpr, tpr, _ = roc_curve(actual_a, predicted_a) plt.plot(fpr, tpr, color='darkorange', lw=2, label='ROC curve') plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--') #center line plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") The AUC for our data is .85, which happens to be the same as our accuracy, which is not often the case (“see: balanced accuracy”). Again, this is a metric that balances the risks of the crew deserting and mutinying given the performance of our device to identify ships carrying treasures, and a ROC curve carved into the table of the navigator’s table could help you make your raiding Because standard precision and recall rely on binary classification, it is non-trivial to extend AUC to represent a multidimensional general classifier as some sort of hypervolume-under-the-curve. However, several of the metrics are more straightforward to extend to evaluating other types of predictions. Other Output Types As I mentioned earlier, we can perform minor adaptations to these metrics to measure the performance of different types of output. We'll consider the simplest metric, accuracy, for both non-binary categorical output and continuous output. In the following examples, an updated version of the device, version B, tells you if a ship has no treasure (0), some treasure (1), or tons of treasure (2). Version C of the device tells you how many islands you can buy with the treasure on the target ship. For general categorical output, accuracy is very straightforward: correct_predictions/all_predictions. Using an example output with three categories, we can still determine the accuracy. In code, this looks like the following. # Accuracy for non-binary predictions def my_general_accuracy_score(actual, predicted): correct = len([a for a, p in zip(actual, predicted) if a == p]) wrong = len([a for a, p in zip(actual, predicted) if a != p]) return correct / (correct + wrong) print("my Accuracy B:", my_general_accuracy_score(actual_b, predicted_b)) print("sklearn Accuracy B:", accuracy_score(actual_b, predicted_b)) As you may "recall," precision and recall measure false positives and negatives. With a bit of intuition and domain knowledge, we can extend this to a general classifier. In example B, I decided that "2" represents a positive and was able to generate precision as follows. def my_general_precision_score(actual, predicted, value): true_positives = len([a for a, p in zip(actual, predicted) if a == p and p == value]) false_positives = len([a for a, p in zip(actual, predicted) if a != p and p == value]) return true_positives / (true_positives + false_positives) print("my Precision B:", my_general_precision_score(actual_b, predicted_b, 2)) While sklearn supports accuracy for general categorical predictions, we can add a threshold parameter to calculate accuracy for a continuous prediction. Choosing the threshold is as important as every other number that you set during the modeling process, and should be set based on your domain knowledge before you see the results. After applying the threshold, the predictions can be treated a binary classifier and any of the seven metrics we have covered now apply to the data. # Accuracy for continuous with threshold def my_threshold_accuracy_score(actual, predicted, threshold): a = [0 if x >= threshold else 1 for x in actual] p = [0 if x >= threshold else 1 for x in predicted] return my_accuracy_score(a, p) print("my Accuracy C:", my_threshold_accuracy_score(actual_c, predicted_c, 5)) Departing from the standard implementations gives us room to expand these fundamental metrics to cover most predictions, allowing for consistent comparison between models and their outputs. These seven metrics for (binary) classification and continuous output with a threshold will serve you well for most data sets and modeling techniques. For the rest, minimal adjustments can create strong metrics. A single note of caution before we discuss adapting these standard measurements: always determine your evaluation criteria before beginning to evaluate the results. There are many subtle issues in a modeling process that can lead to overfitting and bad models, but adjusting the correctness evaluation metric based on the results of the model is an egregious departure from the accepted principles of a modeling workflow and will almost certainly promote overfitting and other bad results. Remember, accuracy is not the goal, a good model is the goal. That goal is just a corollary of Goodhart’s law, or the idea that “when a measure becomes a target, it ceases to be a good measure..” Especially when you’re developing new systems, optimizations for individual metrics can hide overarching issues in the system. Rachel Thomas writes more about this, saying “I am not opposed to metrics; I am alarmed about the harms caused when metrics are overemphasized, a phenomenon that we see frequently with AI, and which is having a negative, real-world impact.” There are extensions of classification that permit interesting modification to correctness metrics. For example, ordinal classification involves an output set where there are a fixed number of distinct categories, but those categories have a set order. Military rank is one type of ordinal data. Sometimes, you can handle ordinal data like continuous data and establish a threshold, then use a binary algorithm to handle the correctness. If a lieutenant in an army wanted to know if soldiers in a dataset were predicted to be her rank and above, she could set the cutoff at lieutenant and use a standard metric like accuracy or precision to evaluate the correctness of her prediction method. However, a more generalized version of the same evaluation could use weighted accuracy to check the results. If a soldier is predicted to be a captain but he is in fact a sergeant, that is more incorrect than if he were predicted to be a lieutenant. Such disparity can be recognized in a custom implementation of accuracy or any other metric as appropriate for the domain, or by adding a penalty function to the loss function in the model. Ultimately, I think this is what makes data science so interesting, there are opportunities to create custom solutions from the beginning to the end of the modeling process. However, the more non-standard the data and algorithm used, the more important it is to consider standard, fundamental metrics like accuracy, precision, and recall for evaluating the results. By using or adapting these metrics you can have confidence that your novel approach to a problem is correct with respect to standard practices. Further References About Philip Kiely Philip Kiely writes code and words. He is the author of Writing for Software Developers (2020). Philip holds a B.A. with honors in Computer Science from Grinnell College. Philip is a FloydHub AI Writer. You can find his work at https://philipkiely.com or you can connect with Philip via LinkedIn and GitHub.
{"url":"https://floydhub.ghost.io/a-pirates-guide-to-accuracy-precision-recall-and-other-scores/","timestamp":"2024-11-01T22:56:00Z","content_type":"text/html","content_length":"57974","record_id":"<urn:uuid:d21dc8dc-819c-408d-9829-ff9f00912382>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00025.warc.gz"}
Combination enumerationCombination enumerationCombination enumeration Combination enumeration Tuesday, March 12, 2019 Add Comment Edit Often we are interested in determining the number of groups r different objects that can be formed from a number of n objects. For example, how many groups of 4 different objects can be taken from 5 objects A, B, C, D, and E ?. To answer this, reason as follows. Because there are 5 ways to choose the first object, 4 ways to choose the second object, and 3 ways to select the third object, then there are all 5 x 4 x 3 ways to select group 3 objects if the order of the selected objects is considered. However, because each group of 3 objects, say gurp consisting of objects A, B, and C, will be enumerated 6 times (meaning all ABC permutations, ACB, BAC, BCA, CAB, and CBA will be enumerated if the order of objects- this object is noticed, so the number of groups that can be formed is In general, because n (n-1) ... (n-r-1) states the number of different ways to select a group of r objects from n available objects if the order is observed and because each group r objects will be counted r! times in this enumeration, the number of groups r different objects that can be formed from a number of n objects is So denotes the number of groups r different objects that can be taken from a number of objects if the order in which they are taken is not observed. Example Combination Problem 1 A committee of 3 people will be formed from a total of 20 people. How many committees can be formed? Example Combination Problem 2 From a group of 5 men and 7 women, how many groups of 5 people can be formed consisting of 2 men and 3 women? Solution: Because there is how to choose 2 men and how to choose 3 women, then all of them are there group consisting of 2 men and 3 women. A useful combinatorial identity is Equation 1 can be proven by analysis or through the following combinatorial arguments. Suppose we have a group of n objects and focus on any of these objects, name objects number 1. Now there is a combination of size r containing object number 1 (because of this combination this is formed by taking r-1 from the remaining n-1 objects). In addition, there are combinations of size r that do not contain objects number 1. Because everything is there a combination of size r, then Equation 1 is proven. Two proofs for this binom theorem will be given. The first is evidence through mathematical induction, while the second is evidence through combination arguments. EVIDENCE OF THEORY OF BINOM THROUGH INDUCTION: If n = 1, Equation 2 will be reduced to Suppose Equation 2 is true for n-1. Furthermore, By taking i = k + 1 in the first number and i = k in the second number we get, In this case the Equation before the last equation above is a result of Equation 1. So the above theorem proves by induction COMBINATORIAL EVIDENCE FOR BINOM THEORIES. Look again, This product represents the sum of 2^n terms, each of which is a product of n factors. In addition, each of these terms has x[i] or y[i] as a factor for every i = 1,2, ..., n. For example Now, how much of this 2^n term has k x[i] and n-k y[i] as the factor? Because each term consisting of k x[i] and (nk) y[i] matches with the selection of a group k element of n values of x[1], x[2], x[3], ..., x[m] then there is the fruit of the tribe this is so. So, by taking x[i] = x and y[i] = y, i = 1, 2, ..., n, it is obtained, Example Combination Problem 3 Example of Combination Problem 4 How many subsets can be formed from a set of n elements? These results can also be obtained in the following ways. To each element in the set 0 Response to "Combination enumeration"
{"url":"https://math.agungcode.com/2019/03/combination-enumeration.html","timestamp":"2024-11-02T13:52:58Z","content_type":"application/xhtml+xml","content_length":"229399","record_id":"<urn:uuid:873d6358-1f4a-457d-af3d-8f7f3264cce6>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00384.warc.gz"}
ACM Other ConferencesMatrix Analytic Methods in Branching processes We examine the question of solving the extinction probability of a particular class of continuous-time multi-type branching processes, named Markovian binary trees (MBT). The extinction probability is the minimal nonnegative solution of a fixed point equation that turns out to be quadratic, which makes its resolution particularly clear. We analyze first two linear algorithms to compute the extinction probability of an MBT, of which one is new, and, we propose a quadratic algorithm arising from Newton's iteration method for fixed-point equations. Finally, we add a catastrophe process to the initial MBT, and we analyze the resulting system. The extinction probability turns out to be much more difficult to compute; we use a $G/M/1$-type Markovian process approach to approximate this
{"url":"https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.07461.9/metadata/acm-xml","timestamp":"2024-11-07T10:23:27Z","content_type":"application/xml","content_length":"4894","record_id":"<urn:uuid:ce0cec19-57ec-42f2-b757-6498e834b5a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00773.warc.gz"}
In geometry, the great dodecahedron is one of four Kepler–Poinsot polyhedra. It is composed of 12 pentagonal faces (six pairs of parallel pentagons), intersecting each other making a pentagrammic path, with five pentagons meeting at each vertex. 3D model of a great dodecahedron One way to construct a great dodecahedron is by faceting the regular icosahedron. In other words, it is constructed from the regular icosahedron by removing its polygonal faces without changing or creating new vertices.^[1] Another way is to form a regular pentagon by each of the five vertices inside of a regular icosahedron, and twelve regular pentagons intersecting each other, making a pentagram as its vertex figure.^[2]^[3] The great dodecahedron may also be interpreted as the second stellation of dodecahedron. The construction started from a regular dodecahedron by attaching 12 pentagonal pyramids onto each of its faces, known as the first stellation. The second stellation appears when 30 wedges are attached to it.^[4] Given a great dodecahedron with edge length ${\displaystyle a}$ . The circumradius of a great dodecahedron ${\displaystyle C}$ is: ${\displaystyle C={\frac {a}{4}}\left({\sqrt {10+2+{\sqrt {5}}}}\ right).}$ Its surface area ${\displaystyle A}$ is: ${\displaystyle A=15a^{2}\left({\sqrt {5-2{\sqrt {5}}}}\right).}$ Its volume ${\displaystyle V}$ is:^[5] ${\displaystyle V={\frac {5}{4}}({\sqrt External links
{"url":"https://www.knowpia.com/knowpedia/Great_dodecahedron","timestamp":"2024-11-06T05:04:25Z","content_type":"text/html","content_length":"106940","record_id":"<urn:uuid:52c888d2-a5c6-4e26-bb10-aee2f9632062>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00651.warc.gz"}
Number Theory The Beginner's Guide to Constructing the Universe: The Mathematical Archetypes of Nature, Art, and Science ISBN: 9780060926717 / Angielski / Miękka / 384 str. ISBN: 9780060926717/Angielski/Miękka/384 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. The Universe May Be a Mystery, But It's No Secret Michael Schneider leads us on a spectacular, lavishly illustrated journey along the numbers one through ten to explore the mathematical principles made visible in flowers, shells, crystals, plants, and the human body, expressed in the symbolic language of folk sayings and fairy tales, myth and religion, art and architecture. This is a new view of mathematics, not the one we learned at school but a comprehensive guide to the patterns that recur through the universe and underlie human affairs. A Beginner's... The Universe May Be a Mystery, But It's No Secret Michael Schneider leads us on a spectacular, lavishly illustrated jo... cena: 86,90 zł The Universe May Be a Mystery, But It's No Secret Michael Schneider leads us on a spectacular, lavishly illustrated journey along the numbers one through ten to explore the mathematical principles made visible in flowers, shells, crystals, plants, and the human body, expressed in the symbolic language of folk sayings and fairy tales, myth and religion, art and architecture. This is a new view of mathematics, not the one we learned at school but a comprehensive guide to the patterns that recur through the universe and underlie human affairs. A Beginner's... Michael Schneider leads us on a spectacular, lavishly illustrated jo... Elements of the Theory of Numbers ISBN: 9780122091308 / Angielski / Twarda / 517 str. ISBN: 9780122091308/Angielski/Twarda/517 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. Elements of the Theory of Numbers teaches students how to develop, implement, and test numerical methods for standard mathematical problems. The authors have created a two-pronged pedagogical approach that integrates analysis and algebra with classical number theory. Making greater use of the language and concepts in algebra and analysis than is traditionally encountered in introductory courses, this pedagogical approach helps to instill in the minds of the students the idea of the unity of mathematics. Elements of the Theory of Numbers is a superb summary of classical material... Elements of the Theory of Numbers teaches students how to develop, implement, and test numerical methods for standard mathematical problems. Th... cena: 876,57 zł The Encyclopedia of Integer Sequences ISBN: 9780125586306 / Angielski / Twarda / 587 str. ISBN: 9780125586306/Angielski/Twarda/587 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. This encyclopedia contains more than 5000 integer sequences, over half of which have never before been catalogued. Because the sequences are presented in the most natural form, and arranged for easy reference, this book is easier to use than the authors earlier classic A Handbook of Integer Sequences. The Encyclopedia gives the name, mathematical description, and citations to literature for each sequence. Following sequences of particular interest, thereare essays on their origins, uses, and connections to related sequences (all cross-referenced). A valuable new feature to this text is... This encyclopedia contains more than 5000 integer sequences, over half of which have never before been catalogued. Because the sequences are presented... cena: 422,09 zł This encyclopedia contains more than 5000 integer sequences, over half of which have never before been catalogued. Because the sequences are presented... The Book of Squares ISBN: 9780126431308 / Angielski / Twarda / 144 str. ISBN: 9780126431308/Angielski/Twarda/144 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. The Book of Squares by Fibonacci is a gem in the mathematical literature and one of the most important mathematical treatises written in the Middle Ages. It is a collection of theorems on indeterminate analysis and equations of second degree which yield, among other results, a solution to a problem proposed by Master John of Palermo to Leonardo at the Court of Frederick II. The book was dedicated and presented to the Emperor at Pisa in 1225. Dating back to the 13th century the book exhibits the early and continued fascination of men with our number system and the relationship among numbers... The Book of Squares by Fibonacci is a gem in the mathematical literature and one of the most important mathematical treatises written in the Middle Ag... cena: 279,62 zł The Book of Squares by Fibonacci is a gem in the mathematical literature and one of the most important mathematical treatises written in the Middle Ages. It is a collection of theorems on indeterminate analysis and equations of second degree which yield, among other results, a solution to a problem proposed by Master John of Palermo to Leonardo at the Court of Frederick II. The book was dedicated and presented to the Emperor at Pisa in 1225. Dating back to the 13th century the book exhibits the early and continued fascination of men with our number system and the relationship among numbers... The Book of Squares by Fibonacci is a gem in the mathematical literature and one of the most important mathematical treatises written in the Middle Ag... The Nothing That Is: A Natural History of Zero ISBN: 9780195128420 / Angielski / Twarda / 240 str. ISBN: 9780195128420/Angielski/Twarda/240 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. A symbol for what is not there, an emptiness that increases any number it's added to, an inexhaustible and indispensable paradox. As we enter the year 2000, zero is once again making its presence felt. Nothing itself, it makes possible a myriad of calculations. Indeed, without zero mathematics as we know it would not exist. And without mathematics our understanding of the universe would be vastly impoverished. But where did this nothing, this hollow circle, come from? Who created it? And what, exactly, does it mean? Robert Kaplan's The Nothing That Is: A Natural History of Zero begins as... A symbol for what is not there, an emptiness that increases any number it's added to, an inexhaustible and indispensable paradox. As we enter the year... cena: 520,34 zł A symbol for what is not there, an emptiness that increases any number it's added to, an inexhaustible and indispensable paradox. As we enter the year... Profinite Groups ISBN: 9780198500827 / Angielski / Twarda / 296 str. ISBN: 9780198500827/Angielski/Twarda/296 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. Profinite groups are of interest to mathematicians working in a variety of areas, including number theory, abstract groups, and analysis. The underlying theory reflects these diverse influences, with methods drawn from both algebra and topology and with fascinating connections to field theory. This is the first book to be dedicated solely to the study of general profinite groups. It provides a thorough introduction to the subject, designed not only to convey the basic facts but also to enable readers to enhance their skills in manipulating profinite groups. The first few chapters lay the... Profinite groups are of interest to mathematicians working in a variety of areas, including number theory, abstract groups, and analysis. The underlyi... cena: 1143,59 zł Profinite groups are of interest to mathematicians working in a variety of areas, including number theory, abstract groups, and analysis. The underlying theory reflects these diverse influences, with methods drawn from both algebra and topology and with fascinating connections to field theory. This is the first book to be dedicated solely to the study of general profinite groups. It provides a thorough introduction to the subject, designed not only to convey the basic facts but also to enable readers to enhance their skills in manipulating profinite groups. The first few chapters lay Profinite groups are of interest to mathematicians working in a variety of areas, including number theory, abstract groups, and analysis. The underlyi... Metric Number Theory ISBN: 9780198500834 / Angielski / Twarda / 320 str. ISBN: 9780198500834/Angielski/Twarda/320 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. This book examines the number-theoretic properties of the real numbers. It collects a variety of new ideas and develops connections between different branches of mathematics. An indispensable compendium of basic results, the text also includes important theorems and open problems. The book begins with the classical results of Borel, Khintchine, and Weyl, and then proceeds to Diophantine approximation, GCD sums, Schmidt's method, and uniform distribution. Other topics include generalizations to higher dimensions and various non-periodic problems (for example, restricting approximation to... This book examines the number-theoretic properties of the real numbers. It collects a variety of new ideas and develops connections between different ... cena: 1356,04 zł This book examines the number-theoretic properties of the real numbers. It collects a variety of new ideas and develops connections between different branches of mathematics. An indispensable compendium of basic results, the text also includes important theorems and open problems. The book begins with the classical results of Borel, Khintchine, and Weyl, and then proceeds to Diophantine approximation, GCD sums, Schmidt's method, and uniform distribution. Other topics include generalizations to higher dimensions and various non-periodic problems (for example, restricting approximation to... This book examines the number-theoretic properties of the real numbers. It collects a variety of new ideas and develops connections between different ... A Course in Number Theory ISBN: 9780198523765 / Angielski / Miękka / 416 str. ISBN: 9780198523765/Angielski/Miękka/416 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. Perfect for students approaching the subject for the first time, this book offers a superb overview of number theory. Now in its second edition, it has been thoroughly updated to feature up-to-the-minute treatments of key research, such as the most recent work on Fermat's coast theorem. Topics include divisibility and multiplicative functions, congruences and quadratic resolves, the basics of algebraic numbers and sums of squares, continued fractions, diophantine approximations and transcendence, quadratic forms, partitions, the prime numbers, diophantine equations, and elliptic curves. More... Perfect for students approaching the subject for the first time, this book offers a superb overview of number theory. Now in its second edition, it ha... cena: 391,23 zł Perfect for students approaching the subject for the first time, this book offers a superb overview of number theory. Now in its second edition, it has been thoroughly updated to feature up-to-the-minute treatments of key research, such as the most recent work on Fermat's coast theorem. Topics include divisibility and multiplicative functions, congruences and quadratic resolves, the basics of algebraic numbers and sums of squares, continued fractions, diophantine approximations and transcendence, quadratic forms, partitions, the prime numbers, diophantine equations, and elliptic curves. More... Perfect for students approaching the subject for the first time, this book offers a superb overview of number theory. Now in its second edition, it ha... Experimental Number Theory ISBN: 9780198528227 / Angielski / Twarda / 232 str. ISBN: 9780198528227/Angielski/Twarda/232 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. This graduate text, based on years of teaching experience, is intended for first or second year graduate students in pure mathematics. The main goal of the text is to show how the computer can be used as a tool for research in number theory through numerical experimentation. The book contains many examples of experiments in binary quadratic forms, zeta functions of varieties over finite fields, elementary class field theory, elliptic units, modular forms, along with exercises and selected solutions. Sample programs are written in GP, the scripting language for the computational... This graduate text, based on years of teaching experience, is intended for first or second year graduate students in pure mathematics. The main goal o... cena: 233,71 zł This graduate text, based on years of teaching experience, is intended for first or second year graduate students in pure mathematics. The main goal o... The Theory of Transformation Groups ISBN: 9780198532125 / Angielski / Twarda / 352 str. ISBN: 9780198532125/Angielski/Twarda/352 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. This book presents an introduction to the theory of transformation groups which will be suitable for all those coming to the subject for the first time. The emphasis is on the study of compact Lie groups acting on manifolds. Throughout, much care is taken to illustrate concepts and results with examples and applications. Numerous exercises are also included to further extend a reader's understanding and knowledge. Prerequisites are a familiarity with algebra and topology as might have been acquired from an undergraduate degree in Mathematics. The author begins by introducing the This book presents an introduction to the theory of transformation groups which will be suitable for all those coming to the subject for the first tim... cena: 568,69 zł This book presents an introduction to the theory of transformation groups which will be suitable for all those coming to the subject for the first time. The emphasis is on the study of compact Lie groups acting on manifolds. Throughout, much care is taken to illustrate concepts and results with examples and applications. Numerous exercises are also included to further extend a reader's understanding and knowledge. Prerequisites are a familiarity with algebra and topology as might have been acquired from an undergraduate degree in Mathematics. The author begins by introducing the This book presents an introduction to the theory of transformation groups which will be suitable for all those coming to the subject for the first tim... Area, Lattice Points and Exponential Sums ISBN: 9780198534662 / Angielski / Twarda / 512 str. ISBN: 9780198534662/Angielski/Twarda/512 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. In analytic number theory many problems can be "reduced" to those involving the estimation of exponential sums in one or several variables. This book is a thorough treatment of the developments arising from the method for estimating the Riemann zeta function. Huxley and his coworkers have taken this method and vastly extended and improved it. The powerful techniques presented here go considerably beyond older methods for estimating exponential sums such as van de Corput's method. The potential for the method is far from being exhausted, and there is considerable motivation for In analytic number theory many problems can be "reduced" to those involving the estimation of exponential sums in one or several variables. This book ... cena: 1618,49 zł In analytic number theory many problems can be "reduced" to those involving the estimation of exponential sums in one or several variables. This book is a thorough treatment of the developments arising from the method for estimating the Riemann zeta function. Huxley and his coworkers have taken this method and vastly extended and improved it. The powerful techniques presented here go considerably beyond older methods for estimating exponential sums such as van de Corput's method. The potential for the method is far from being exhausted, and there is considerable motivation for In analytic number theory many problems can be "reduced" to those involving the estimation of exponential sums in one or several variables. This book ... Sampling Theory in Fourier and Signal Analysis: Volume 2: Advanced Topics ISBN: 9780198534969 / Angielski / Twarda / 312 str. ISBN: 9780198534969/Angielski/Twarda/312 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. The second in a two-volume series on signal analysis, this book draws on the foundations laid in the first volume to survey the diverse applications of sampling theory both within mathematics and in other areas of science. Many of the topics included are appearing for the first time in a book, and the book seeks to bring readers to the forefront of research. Topics include combinatorial analysis, number theory, neural networks, derivative sampling, wavelets, stochastic signals, random fields, and abstract harmonic analysis. The second in a two-volume series on signal analysis, this book draws on the foundations laid in the first volume to survey the diverse applications o... cena: 956,12 zł The second in a two-volume series on signal analysis, this book draws on the foundations laid in the first volume to survey the diverse applications o... P-Adic Methods and Their Applications ISBN: 9780198535942 / Angielski / Twarda / 208 str. ISBN: 9780198535942/Angielski/Twarda/208 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. The p-adic numbers and more generally local fields have become increasingly important in a wide range of mathematical disciplines. They are now seen as essential tools in many areas, including number theory, algebraic geometry, group representation theory, the modern theory of automorphic forms, and algebraic topology. A number of texts have recently become available which provide good general introductions to p-adic numbers and p-adic analysis. However, there is at present a gap between such books and the sophisticated applications in the research literature. The aim of this book is to... The p-adic numbers and more generally local fields have become increasingly important in a wide range of mathematical disciplines. They are now seen a... cena: 391,23 zł The p-adic numbers and more generally local fields have become increasingly important in a wide range of mathematical disciplines. They are now seen as essential tools in many areas, including number theory, algebraic geometry, group representation theory, the modern theory of automorphic forms, and algebraic topology. A number of texts have recently become available which provide good general introductions to p-adic numbers and p-adic analysis. However, there is at present a gap between such books and the sophisticated applications in the research literature. The aim of this book is to... The p-adic numbers and more generally local fields have become increasingly important in a wide range of mathematical disciplines. They are now seen a... Hilbert Modular Forms and Iwasawa Theory ISBN: 9780198571025 / Angielski / Twarda / 416 str. ISBN: 9780198571025/Angielski/Twarda/416 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. The 1995 work of Wiles and Taylor-Wiles opened up a whole new technique in algebraic number theory and, a decade on, the waves caused by this incredibly important work are still being felt. This book, authored by a leading researcher, describes the striking applications that have been found for this technique. In the book, the deformation theoretic techniques of Wiles-Taylor are first generalized to Hilbert modular forms (following Fujiwara's treatment), and some applications found by the author are then discussed. With many exercises and open questions given, this text is ideal The 1995 work of Wiles and Taylor-Wiles opened up a whole new technique in algebraic number theory and, a decade on, the waves caused by this incredib... cena: 868,63 zł The 1995 work of Wiles and Taylor-Wiles opened up a whole new technique in algebraic number theory and, a decade on, the waves caused by this incredibly important work are still being felt. This book, authored by a leading researcher, describes the striking applications that have been found for this technique. In the book, the deformation theoretic techniques of Wiles-Taylor are first generalized to Hilbert modular forms (following Fujiwara's treatment), and some applications found by the author are then discussed. With many exercises and open questions given, this text is ideal The 1995 work of Wiles and Taylor-Wiles opened up a whole new technique in algebraic number theory and, a decade on, the waves caused by this incredib... Experimental Number Theory ISBN: 9780199227303 / Angielski / Miękka / 300 str. ISBN: 9780199227303/Angielski/Miękka/300 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. This graduate text, based on years of teaching experience, is intended for first or second year graduate students in pure mathematics. The main goal of the text is to show how the computer can be used as a tool for research in number theory through numerical experimentation. The book contains many examples of experiments in binary quadratic forms, zeta functions of varieties over finite fields, elementary class field theory, elliptic units, modular forms, along with exercises and selected solutions. Sample programs are written in GP, the scripting language for the computational package PARI,... This graduate text, based on years of teaching experience, is intended for first or second year graduate students in pure mathematics. The main goal o... cena: 381,23 zł This graduate text, based on years of teaching experience, is intended for first or second year graduate students in pure mathematics. The main goal of the text is to show how the computer can be used as a tool for research in number theory through numerical experimentation. The book contains many examples of experiments in binary quadratic forms, zeta functions of varieties over finite fields, elementary class field theory, elliptic units, modular forms, along with exercises and selected solutions. Sample programs are written in GP, the scripting language for the computational package PARI,... The Mathematical Theory of Communication ISBN: 9780252725463 / Angielski / Twarda / 144 str. ISBN: 9780252725463/Angielski/Twarda/144 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. Scientific knowledge grows at a phenomenal pace--but few books have had as lasting an impact or played as important a role in our modern world as The Mathematical Theory of Communication, published originally as a paper on communication theory more than fifty years ago. Republished in book form shortly thereafter, it has since gone through four hardcover and sixteen paperback printings. It is a revolutionary work, astounding in its foresight and contemporaneity. The University of Illinois Press is pleased and honored to issue this commemorative reprinting of a classic. Scientific knowledge grows at a phenomenal pace--but few books have had as lasting an impact or played as important a role in our modern world as T... cena: 237,46 zł An Introduction to Number Theory ISBN: 9780262690607 / Angielski / Miękka / 360 str. ISBN: 9780262690607/Angielski/Miękka/360 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. The majority of students who take courses in number theory are mathematics majors who will not become number theorists. Many of them will, however, teach mathematics at the high school or junior college level, and this book is intended for those students learning to teach, in addition to a careful presentation of the standard material usually taught in a first course in elementary number theory, this book includes a chapter on quadratic fields which the author has designed to make students think about some of the "obvious" concepts they have taken for granted earlier. The book The majority of students who take courses in number theory are mathematics majors who will not become number theorists. Many of them will, however,... cena: 371,45 zł The majority of students who take courses in number theory are mathematics majors who will not become number theorists. Many of them will, however, teach mathematics at the high school or junior college level, and this book is intended for those students learning to teach, in addition to a careful presentation of the standard material usually taught in a first course in elementary number theory, this book includes a chapter on quadratic fields which the author has designed to make students think about some of the "obvious" concepts they have taken for granted earlier. The book The majority of students who take courses in number theory are mathematics majors who will not become number theorists. Many of them will, however,... Arithmetic of Algebraic Curves ISBN: 9780306110368 / Angielski / Twarda / 422 str. ISBN: 9780306110368/Angielski/Twarda/422 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. Author S.A. Stepanov thoroughly investigates the current state of the theory of Diophantine equations and its related methods. Discussions focus on arithmetic, algebraic-geometric, and logical aspects of the problem. Designed for students as well as researchers, the book includes over 250 excercises accompanied by hints, instructions, and references. Written in a clear manner, this text does not require readers to have special knowledge of modern methods of algebraic geometry. Author S.A. Stepanov thoroughly investigates the current state of the theory of Diophantine equations and its related methods. Discussions focus on ar... cena: 1488,83 zł Author S.A. Stepanov thoroughly investigates the current state of the theory of Diophantine equations and its related methods. Discussions focus on ar... Multi-Valued Fields ISBN: 9780306110689 / Angielski / Twarda / 270 str. ISBN: 9780306110689/Angielski/Twarda/270 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. For more than 30 years, the author has studied the model-theoretic aspects of the theory of valued fields and multi-valued fields. Many of the key results included in this book were obtained by the author whilst preparing the manuscript. Thus the unique overview of the theory, as developed in the book, has been previously unavailable. The book deals with the theory of valued fields and mutli-valued fields. The theory of Prufer rings is discussed from the geometric' point of view. The author shows that by introducing the Zariski topology on families of valuation rings, it is possible to... For more than 30 years, the author has studied the model-theoretic aspects of the theory of valued fields and multi-valued fields. Many of the key res... cena: 783,57 zł For more than 30 years, the author has studied the model-theoretic aspects of the theory of valued fields and multi-valued fields. Many of the key res... The World's Most Famous Math Problem: The Proof of Fermat's Last Theorem and Other Mathematical Mysteries ISBN: 9780312106577 / Angielski / Miękka / 80 str. ISBN: 9780312106577/Angielski/Miękka/80 str. Termin realizacji zamówienia: ok. 5-8 dni roboczych. June 23, 1993. A Princeton mathematician announces that he has unlocked, after thousands of unsuccessful attempts by others, the greatest mathematical riddle in the world. Dr. Wiles demonstrates to a group of stunned mathematicians that he has provided the proof of Fermat's Last Theorem (the equation x" + y" = z," where n is an integer greater than 2, has no solution in positive numbers), a problem that has confounded scholars for over 350 years. Here in this brilliant new book, Marilyn vos Savant, the person with the highest recorded IQ in the world explains the mathematical... June 23, 1993. A Princeton mathematician announces that he has unlocked, after thousands of unsuccessful attempts by others, the greatest mathemati... cena: 82,55 zł June 23, 1993. A Princeton mathematician announces that he has unlocked, after thousands of unsuccessful attempts by others, the greatest mathematical riddle in the world. Dr. Wiles demonstrates to a group of stunned mathematicians that he has provided the proof of Fermat's Last Theorem (the equation x" + y" = z," where n is an integer greater than 2, has no solution in positive numbers), a problem that has confounded scholars for over 350 years. Here in this brilliant new book, Marilyn vos Savant, the person with the highest recorded IQ in the world explains the mathematical... June 23, 1993. A Princeton mathematician announces that he has unlocked, after thousands of unsuccessful attempts by others, the greatest mathemati...
{"url":"https://krainaksiazek.pl/ksiegarnia,m_products,bi_MAT022000,.html","timestamp":"2024-11-14T21:45:27Z","content_type":"text/html","content_length":"98592","record_id":"<urn:uuid:735d2c54-b788-42d7-bfc8-fe5c9c83e060>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00056.warc.gz"}
Transfer Test Worksheet: Factors & Multiples | TransferReady Factors & Multiples Factors and multiples form many quick, and seemingly easy, questions in the transfer tests. What these questions are really examining is times-tables knowledge. If you lack confidence on your tables, you may find these challenging. Often, young people learn times-tables in a sing-song way where they rhyme off their, say, seven times tables much like a song. Many primary school teachers motivate their classes with times-table challenges where a competition is built around getting your multiplication tables written out quickly. This is great because it internalises the tables but the higher-level skill which is examined in the transfer test is using those tables to solve mathematical problems. These begin with questions where students must cross-reference one set of tables against another – which number is both a multiple of 4 and 6? To answer this, you must know your list of multiples of four (4 times tables) and your list of multiples of six (6 times tables), and then compare the lists to find a number which exists in both. The questions then develop in complexity to more complicated cross-referencing. For example, which numbers are both factors of 40 and multiples of 5? Factors & Multiples Worksheet Did you factor this in to your revision today? Would multiple Facebook friends find it useful? Then you should really tell them…
{"url":"https://transferready.co.uk/factors-multiples/","timestamp":"2024-11-14T05:12:04Z","content_type":"text/html","content_length":"67717","record_id":"<urn:uuid:1ef2b087-5371-4fcd-9975-33b3261116c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00826.warc.gz"}
One ship is approaching a port from the east, traveling west at 15 miles per hour,... One ship is approaching a port from the east, traveling west at 15 miles per hour,... One ship is approaching a port from the east, traveling west at 15 miles per hour, and is presently 3 miles east of the port. A second ship has already left the port, traveling to the north at 10 miles per hour, and is presently 4 miles north of the port. At this instant, what is the rate of change of the distance between two ships? Are they getting closer or further apart? The rate of change of distance between the two ships is 1 miles per hour. The negative sign implies that the distance between the two ships is decreasing, i.e, they are getting closer.
{"url":"https://justaaa.com/math/116369-one-ship-is-approaching-a-port-from-the-east","timestamp":"2024-11-04T04:34:59Z","content_type":"text/html","content_length":"39103","record_id":"<urn:uuid:bbbffda7-1f7b-487a-ada3-f2949278bfe1>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00013.warc.gz"}
How to check the assumptions of a linear model (in Python, using NumPy, SciPy, sklearn, Matplotlib and Seaborn) If you plan to use a linear model to describe some data, it’s important to check if it satisfies the assumptions for linear regression. How can we do that? When performing a linear regression, the following assumptions should be checked. 1. We have two or more columns of numerical data of the same length. The solution below uses an example dataset about car design and fuel consumption from a 1974 Motor Trend magazine. (See how to quickly load some sample data.) We can see that our columns all have the same length. 1 from rdatasets import data 2 df = data('mtcars') 3 df = df[['mpg','cyl','wt']] # Select the 3 variables we're interested in 4 df.info() 1 <class 'pandas.core.frame.DataFrame'> 2 RangeIndex: 32 entries, 0 to 31 3 Data columns (total 3 columns): 4 # Column Non-Null Count Dtype 5 --- ------ -------------- ----- 6 0 mpg 32 non-null float64 7 1 cyl 32 non-null int64 8 2 wt 32 non-null float64 9 dtypes: float64(2), int64(1) 10 memory usage: 900.0 bytes 2. Scatter plots we’ve made suggest a linear relationship. Scatterplots are covererd in how to create basic plots, but after making the model, we can also examine the residuals. So let’s make the model. Our predictors will be the number of cylinders and the weight of the car and the response will be miles per gallon. (See also how to fit a linear model to two columns of data 1 from sklearn.linear_model import LinearRegression 2 model = LinearRegression() 4 predictors = df[['cyl','wt']] 5 response = df['mpg'] 6 model.fit( X=predictors, y=response ) 8 predictions = model.predict(predictors) We test for linearity with residual plots. We show just one residual plot here; you should make one for each predictor. Seaborn has a function for just this purpose. (See also how to compute the residuals of a linear model.) 1 import seaborn as sns 2 import matplotlib.pyplot as plt 3 # The "lowess" parameter adds a smooth line through the data: 4 sns.residplot(x = df['wt'], y = response, data=df, lowess=True) 5 plt.xlabel("Weight") 6 plt.title('Miles per gallon') 7 plt.show() 3. After making the model, the residuals seem normally distributed. We can check this by constructing a QQ-plot, which compares the distribution of the residuals to a normal distribution. Here we use SciPy, but there are other methods; see how to create a QQ-plot. 1 from scipy import stats 2 residuals = response - predictions # Compute the residuals 3 stats.probplot(residuals, dist="norm", plot=plt) 4 plt.title("Normal Q-Q Plot") 5 plt.show() 4. After making the model, the residuals seem homoscedastic. This assumption is sometimes called “equal variance,” and can be checked by the regplot function in Seaborn. We must first standardize the residuals, which we can do with NumPy. We want to see a plot with no clear pattern; a cone shape to the data would indicate heteroscedasticity, the opposite of homoscedasticity. 1 import numpy as np 2 standardized_residuals = np.sqrt(np.abs(residuals)) 3 sns.regplot(x = predictions, y = standardized_residuals, scatter=True, lowess=True) 4 plt.ylabel("Standarized residuals") 5 plt.xlabel("Fitted value") 6 plt.title("Scale-Location") 7 plt.show() Content last modified on 24 July 2023. See a problem? Tell us or edit the source. Contributed by Krtin Juneja (KJUNEJA@falcon.bentley.edu)
{"url":"https://how-to-data.org/how-to-check-the-assumptions-of-a-linear-model-in-python-using-numpy-scipy-sklearn-matplotlib-and-seaborn/","timestamp":"2024-11-07T03:12:22Z","content_type":"text/html","content_length":"120527","record_id":"<urn:uuid:0dd0c648-07bb-4331-8fe6-9a19c038aed3>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00075.warc.gz"}
The Correlation Coefficient r Besides looking at the scatter plot and seeing that a line seems reasonable, how can you tell if the line is a good predictor? Use the correlation coefficient as another indicator (besides the scatterplot) of the strength of the relationship between x and y. The correlation coefficient, r, developed by Karl Pearson in the early 1900s, is a numerical measure of the strength of association between the independent variable x and the dependent variable y. The correlation coefficient is calculated as where n = the number of data points. If you suspect a linear relationship between x and y, then r can measure how strong the linear relationship is. What the VALUE of r tells us: • The value of r is always between -1 and +1: −1 ≤ r ≤ 1. • The size of the correlation r indicates the strength of the linear relationship between x and y. Values of r close to -1 or to +1 indicate a stronger linear relationship between x and y. • If r = 0 there is absolutely no linear relationship between x and y (no linear correlation). • If r = 1, there is perfect positive correlation. If r = −1, there is perfect negative correlation. In both these cases, all of the original data points lie on a straight line. Of course, in the real world, this will not generally happen. What the SIGN of r tells us • A positive value of r means that when x increases, y tends to increase and when x decreases, y tends to decrease (positive correlation). • A negative value of r means that when x increases, y tends to decrease and when x decreases, y tends to increase (negative correlation). • The sign of r is the same as the sign of the slope, b, of the best fit line. Note: Strong correlation does not suggest that x causes y or y causes x. We say "correlation does not imply causation." For example, every person who learned math in the 17th century is dead. However, learning math does not necessarily cause death! Figure 6.11 (a) A scatter plot showing data with a positive correlation. 0 <r< 1 (b) A scatter plot showing data with a negative correlation. −1 <r< 0 Figure 6.12 (C) A scatter plot showing data with zero correlation. r=0 The formula for r looks formidable. However, computer spreadsheets, statistical software, and many calculators can quickly calculate r. The correlation coefficient r is the bottom item in the output screens for the LinRegTTest on the TI-83, TI-83+, or TI-84+ calculator (see previous section for instructions).
{"url":"http://www.opentextbooks.org.hk/ditatopic/9495","timestamp":"2024-11-11T23:13:50Z","content_type":"text/html","content_length":"138364","record_id":"<urn:uuid:7d027845-4b8c-43a4-82f4-06b82d70e1ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00036.warc.gz"}
Using mandated speed limits to measure the value of a statistical life In 1987 the federal government permitted states to raise the speed limit on their rural interstate roads, but not on their urban interstate roads, from 55 mph to 65 mph. Since the states that adopted the higher speed limit must have valued the travel hours they saved more than the fatalities incurred, this institutional change provides an opportunity to estimate an upper bound on the public's willingness to trade off wealth for a change in the probability of death. Our estimates indicate that the adoption of the 65-mph limit increased speeds by approximately 4 percent, or 2.5 mph, and fatality rates by roughly 35 percent. Together, the estimates suggest that about 125,000 hours were saved per lost life. When the time saved is valued at the average hourly wage, the estimates imply that adopting states were willing to accept risks that resulted in a savings of $1.54 million (1997 dollars) per fatality, with a sampling error roughly one-third this value. We set out a simple model of states' decisions to adopt the 65-mph limit that turns on whether their savings exceed their value of a statistical life. The empirical implementation of this model supports the claim that $1.54 million is an upper bound, but it provides imprecise estimates of the value of a statistical life. All Science Journal Classification (ASJC) codes • Economics and Econometrics Dive into the research topics of 'Using mandated speed limits to measure the value of a statistical life'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/using-mandated-speed-limits-to-measure-the-value-of-a-statistical","timestamp":"2024-11-05T19:40:08Z","content_type":"text/html","content_length":"50578","record_id":"<urn:uuid:9783fd98-13e4-46cc-85c5-eda4477d02fa>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00556.warc.gz"}
Mathematicians Prove Melting Ice Stays Smooth | Quanta Magazine Drop an ice cube into a glass of water. You can probably picture the way it starts to melt. You also know that no matter what shape it takes, you’ll never see it melt into something like a snowflake, composed everywhere of sharp edges and fine cusps. Mathematicians model this melting process with equations. The equations work well, but it’s taken 130 years to prove that they conform to obvious facts about reality. Now, in a paper posted in March, Alessio Figalli and Joaquim Serra of the Swiss Federal Institute of Technology Zurich and Xavier Ros-Oton of the University of Barcelona have established that the equations really do match intuition. Snowflakes in the model may not be impossible, but they are extremely rare and entirely fleeting. “These results open a new perspective on the field,” said Maria Colombo of the Swiss Federal Institute of Technology Lausanne. “There was no such deep and precise understanding of this phenomenon The question of how ice melts in water is called the Stefan problem, named after the physicist Josef Stefan, who posed it in 1889. It is the most important example of a “free boundary” problem, where mathematicians consider how a process like the diffusion of heat makes a boundary move. In this case, the boundary is between ice and water. For many years, mathematicians have tried to understand the complicated models of these evolving boundaries. To make progress, the new work draws inspiration from previous studies on a different type of physical system: soap films. It builds on them to prove that along the evolving boundary between ice and water, sharp spots like cusps or edges rarely form, and even when they do, they immediately These sharp spots are called singularities, and, it turns out, they are as ephemeral in the free boundaries of mathematics as they are in the physical world. Melting Hourglasses Consider, again, an ice cube in a glass of water. The two substances are made of the same water molecules, but the water is in two different phases: solid and liquid. A boundary exists where the two phases meet. But as heat from the water transfers into the ice, the ice melts and the boundary moves. Eventually, the ice — and the boundary along with it — disappear. Intuition might tell us that this melting boundary always remains smooth. After all, you do not cut yourself on sharp edges when you pull a piece of ice from a glass of water. But with a little imagination, it is easy to conceive of scenarios where sharp spots emerge. Take a piece of ice in the shape of an hourglass and submerge it. As the ice melts, the waist of the hourglass becomes thinner and thinner until the liquid eats all the way through. At the moment this happens, what was once a smooth waist becomes two pointy cusps, or singularities. “This is one of those problems that naturally exhibits singularities,” said Giuseppe Mingione of the University of Parma. “It’s the physical reality that tells you that. Yet reality also tells us that the singularities are controlled. We know that cusps should not last long, because the warm water should rapidly melt them down. Perhaps if you started with a huge ice block built entirely out of hourglasses, a snowflake might form. But it still wouldn’t last more than an instant. In 1889 Stefan subjected the problem to mathematical scrutiny, spelling out two equations that describe melting ice. One describes the diffusion of heat from the warm water into the cool ice, which shrinks the ice while causing the region of water to expand. A second equation tracks the changing interface between ice and water as the melting process proceeds. (In fact, the equations can also describe the situation where the ice is so cold that it causes the surrounding water to freeze — but in the present work, the researchers ignore that possibility.) “The important thing is to understand where the two phases decide to switch from one to the other,” said Colombo. It took almost 100 years until, in the 1970s, mathematicians proved that these equations have a solid foundation. Given some starting conditions — a description of the initial temperature of the water and the initial shape of the ice — it’s possible to run the model indefinitely to describe exactly how the temperature (or a closely related quantity called the cumulative temperature) changes with time. But they found nothing to preclude the model from arriving at scenarios that are improbably weird. The equations might describe an ice-water boundary that forms into a forest of cusps, for example, or a sharp snowflake that stays perfectly still. In other words, they couldn’t rule out the possibility that the model might output nonsense. The Stefan problem became a problem of showing that the singularities in these situations are actually well controlled. Otherwise, it would mean that the ice melting model was a spectacular failure — one that had fooled generations of mathematicians into believing it was more solid than it is. Soapy Inspiration In the decade before mathematicians began to understand the ice melting equations, they made tremendous progress on the mathematics of soap films. If you dip two wire rings in a soapy solution and then separate them, a soap film forms between them. Surface tension will pull the film as taut as possible, forming it into a shape called a catenoid — a kind of caved-in cylinder. This shape forms because it bridges the two rings with the least amount of surface area, making it an example of what mathematicians call a minimal surface. Soap films are modeled by their own unique set of equations. By the 1960s, mathematicians had made progress in understanding them, but they didn’t know how weird their solutions could be. Just as in the Stefan problem, the solutions might be unacceptably strange, describing soap films with countless singularities that are nothing like the smooth films we expect. Susan Schwartzenberg, Exploratorium In 1961 and 1962, Ennio De Giorgi, Wendell Fleming and others invented an elegant process for determining whether the situation with singularities was as bad as feared. Suppose you have a solution to the soap film equations that describes the shape of the film between two boundary surfaces, like the set of two rings. Focus in on an arbitrary point on the film’s surface. What does the geometry near this point look like? Before we know anything about it, it could have any kind of feature imaginable — anything from a sharp cusp to a smooth hill. Mathematicians devised a method for zooming in on the point, as though they had a microscope with infinite power. They proved that as you zoom in, all you see is a flat plane. “Always. That’s it,” said Ros-Oton. This flatness implied that the geometry near that point could not be singular. If the point were located on a cusp, mathematicians would see something more like a wedge, not a plane. And since they chose the point randomly, they could conclude that all points on the film must look like a smooth plane when you peer at them up close. Their work established that the entire film must be smooth — unplagued by singularities. Mathematicians wanted to use the same methods to deal with the Stefan problem, but they soon realized that with ice, things were not as simple. Unlike soap films, which always look smooth, melting ice really does exhibit singularities. And while a soap film stays put, the line between ice and water is always in motion. This posed an additional challenge that another mathematician would tackle From Films to Ice In 1977, Luis Caffarelli reinvented a mathematical magnifying glass for the Stefan problem. Rather than zooming in on a soap film, he figured out how to zoom in on the boundary between ice and water. “This was his great intuition,” said Mingione. “He was able to transport these methods from the minimal surface theory of de Giorgi to this more general setting.” When mathematicians zoomed in on solutions to the soap film equations, they saw only flatness. But when Caffarelli zoomed in on the frozen boundary between ice and water, he sometimes saw something totally different: frozen spots surrounded almost entirely by warmer water. These points corresponded to icy cusps — singularities — which become stranded by the retreat of the melting boundary. Caffarelli proved singularities exist in the mathematics of melting ice. He also devised a way of estimating how many there are. At the exact spot of an icy singularity, the temperature is always zero degrees Celsius, because the singularity is made out of ice. That is a simple fact. But remarkably, Caffarelli found that as you move away from the singularity, the temperature increases in a clear pattern: If you move one unit in distance away from a singularity and into the water, the temperature rises by approximately one unit of temperature. If you move two units away, the temperature rises by approximately four. This is called a parabolic relationship, because if you graph temperature as a function of distance, you get approximately the shape of a parabola. But because space is three-dimensional, you can graph the temperature in three different directions leading away from the singularity, not just one. The temperature therefore looks like a three-dimensional parabola, a shape called a paraboloid. Altogether, Caffarelli’s insight provided a clear way of sizing up singularities along the ice-water boundary. Singularities are defined as points where the temperature is zero degrees Celsius and paraboloids describe the temperature at and around the singularity. Therefore, anywhere the paraboloid equals zero you have a singularity. So how many places are there where a paraboloid can equal zero? Imagine a paraboloid composed of a sequence of parabolas stacked side by side. Paraboloids like these can take a minimum value — a value of zero — along an entire line. This means that each of the singularities Caffarelli observed could actually be the size of a line, an infinitely thin icy edge, rather than just a single icy point. And since many lines can be put together to form a surface, his work left open the possibility that a set of singularities could fill the entire boundary surface. If this was true, it would mean that the singularities in the Stefan problem were completely out of control. Samuel Velasco/Quanta Magazine “It would be a disaster for the model. Complete chaos,” said Figalli, who won the Fields Medal, math’s highest honor, in 2018. However, Caffarelli’s result was only a worst-case scenario. It established the maximum size of the potential singularities, but it said nothing about how often singularities actually occur in the equations, or how long they last. By 2019, Figalli, Ros-Oton and Serra had figured out a remarkable way to find out more. Imperfect Patterns To solve the Stefan problem, Figalli, Ros-Oton and Serra needed to prove that singularities that crop up in the equations are controlled: There aren’t a lot of them and they don’t last long. To do that, they needed a comprehensive understanding of all the different types of singularities that could possibly form. Caffarelli had made progress on understanding how singularities develop as ice melts, but there was a feature of the process he didn’t know how to address. He recognized that the water temperature around a singularity follows a paraboloid ­­pattern. He also recognized that it doesn’t quite follow this pattern exactly — there’s a small deviation between a perfect paraboloid and the actual way the water temperature looks. Figalli, Ros-Oton and Serra shifted the microscope onto this deviation from the paraboloid pattern. When they zoomed in on this small imperfection — a whisper of coolness waving off of the boundary — they discovered that it had its own kinds of patterns which gave rise to different types of singularities. ETH Zurich / Alessandro Della Bella “They go beyond the parabolic scaling,” said Sandro Salsa of the Polytechnic University of Milan. “Which is amazing.” They were able to show that all of these new types of singularities disappeared rapidly — just as they do in nature — except for two that were particularly enigmatic. Their last challenge was to prove that these two types also vanish as soon as they appear, foreclosing the possibility that anything like a snowflake might endure. Vanishing Cusps The first type of singularity had come up before, in 2000. A mathematician named Frederick Almgren had investigated it in an intimidating 1,000-page paper about soap films, which was only published by his wife, Jean Taylor — another expert on soap films — after he died. While mathematicians had shown that soap films are always smooth in three dimensions, Almgren proved that in four dimensions, a new kind of “branching” singularity can appear, making the soap films sharp in strange ways. These singularities are profoundly abstract and impossible to visualize neatly. Yet Figalli, Ros-Oton and Serra realized that very similar singularities form along the melting boundary between ice and water. “The connection is a bit mysterious,” Serra said. “Sometimes in mathematics, things develop in unexpected ways.” They used Almgren’s work to show that the ice around one of these branching singularities must have a conical pattern that looks the same as you keep zooming in. And unlike the paraboloid pattern for the temperature, which implies that a singularity might exist along a whole line, a conical pattern can only have a sharp singularity at a single point. Using this fact, they showed that these singularities are isolated in space and time. As soon as they form, they are gone. The second kind of singularity was even more mysterious. To get a sense of it, imagine submerging a thin sheet of ice into water. It will shrink and shrink and suddenly disappear all at once. But just before that moment, it will form a sheetlike singularity, a two-dimensional wall as sharp as a razor. At certain points, the researchers managed to zoom in to find an analogous scenario: two fronts of ice collapsing toward the point as if it were situated inside a thin sheet of ice. These points were not exactly singularities, but locations where a singularity was about to form. The question was whether the two fronts near these points collapsed at the same time. If that happened, a sheetlike singularity would form for only one perfect moment before it vanished. In the end, they proved this is in fact how the scenario plays out in the equations. “This somehow confirms the intuition,” said Daniela De Silva of Barnard College. Having shown that the exotic branching and sheetlike singularities were both rare, the researchers could make the general statement that all singularities for the Stefan problem are rare. “If you choose randomly a time, then the probability of seeing a singular point is zero,” Ros-Oton said. The mathematicians say that the technical details of the work will take time to digest. But they are confident that the results will lay the groundwork for advances on numerous other problems. The Stefan problem is a foundational example for an entire subfield of math where boundaries move. But as for the Stefan problem itself, and the mathematics of how ice cubes melt in water? “This is closed,” Salsa said.
{"url":"https://www.quantamagazine.org/mathematicians-prove-melting-ice-stays-smooth-20211006/","timestamp":"2024-11-02T20:48:24Z","content_type":"text/html","content_length":"219014","record_id":"<urn:uuid:af1e9886-4eba-4a37-a4e7-412b31a3b107>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00650.warc.gz"}
I-V relation and transconductance for saturation mode NMOS This CalcTown calculator calculates the output current and transconductance in a NMOS in saturation mode. Hence, the conditions 1. V[GS]>V[TN] and 2. V[DS]>V[GS]-V[TN] are satisfied. *Please enter 0 in the lambda and V[DS] field if the body effect is to be neglected. ** For potential divider bias circuits, gate-to-source voltage (V[GS])=R[G2]VDD/(R[G1]+R[G2]) For current in Triode region, please follow the links given in related calculators
{"url":"https://www.calctown.com/calculators/i-v-saturation-mode-nmos","timestamp":"2024-11-12T15:11:36Z","content_type":"text/html","content_length":"25214","record_id":"<urn:uuid:bae9d94f-4f62-439d-896d-7532ba710bb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00161.warc.gz"}
Sorting IP Addresses I often emphasize the importance of logic in T-SQL problem solving. In recent columns, I've given you a T-SQL puzzle and a purely logical puzzle to help you improve your logic skills. This article's T-SQL puzzle will stretch your T-SQL skills as you try to derive the best solution. In the sidebar "The Logical Puzzle," page 30, I present a new logic problem to try. Do your best to solve the puzzles yourself before looking at my solutions. If the trivial and intuitive approaches you come up with seem cumbersome, try to be creative and think outside the box. This month's puzzle involves sorting IP addresses. Your task is to write a query that sorts the IP addresses that are stored in the IPs table. Note that a correct IP sort logically means that you should consider each octet separately and as a number, as opposed to the way SQL Server stores it—as a character string. Run Listing 1's code to create the IPs table and populate it with sample data. Figure 1 shows the desired result. The first sort value is the first octet, so IP addresses with 3 in the first octet should be sorted first, then addresses with 22 in the first octet, then those with 192. The second sort value is the second octet—for example, among IP addresses that have 3 in the first octet, the one with 8 in the second octet should be sorted first, then those with 77 in the second octet. Similarly, the third octet is the third sort value, and the fourth octet is the fourth sort value. If you attempt to simply sort the IP addresses by the ip column as follows you get the incorrect IP address sort that Figure 2 shows: the IP addresses sorted by their character-string representation. SQL Server compares characters in corresponding positions from left to right. Thus, IP addresses that start with the number 1 (e.g., 192.*) are listed before IP addresses starting with the number 2 (e.g., 22.*), which come before addresses starting with 3 (e.g., 3.*). Also, SQL Server doesn't recognize that an IP address is made up of four separate parts; it treats each IP address as a character string. A different table design could have made sorting the IP addresses simple. However, for the sake of this exercise, assume that you have to face the task with the given table design. You now have all the information you need to start working on the problem. I suggest you examine my solutions after trying to solve the problem yourself. An intuitive approach to solving the problem is to specify four expressions in the ORDER BY clause, breaking the ip column into the four octets by using the SUBSTRING() and the CHARINDEX() functions, then converting them to integers. The following pseudo code shows how the solution query would look: SELECT ipFROM IPsORDER BY CAST(SUBSTRING(ip, 1, p1 - 1 ) AS tinyint), CAST(SUBSTRING(ip, p1 + 1, p2 - p1 - 1) AS tinyint), CAST(SUBSTRING(ip, p2 + 1, p3 - p2 - 1) AS tinyint), CAST(SUBSTRING(ip, p3 + 1, 3 ) AS tinyint); P1, p2, and p3 respectively represent the positions of the first, second, and third dots; you need to replace them with expressions that actually calculate each dot's position. You calculate the position of the first dot by using the following expression: To calculate the position of the second dot, you use a similar expression, but add a third argument to CHARINDEX() telling the function where to start looking for the dot. This third argument is p1+1 (the position number of the first dot plus 1): Similarly, to calculate the position of the third dot, you need to provide p2+1 as the third argument to CHARINDEX(), resulting in the following expression: CHARINDEX('.', ip, CHARINDEX('.', ip, CHARINDEX('.', ip) + 1) + 1) AS p3 Replacing p1, p2, and p3 in the pseudo code with the previous expressions gives you the solution query that Listing 2 shows. Obviously, this solution query is long and hard to follow. If you later need to revise the query, you're bound to introduce bugs, so I don't recommend this solution. The first solution contained lengthy expressions, mainly because of the nesting required for the CHARINDEX() function's third argument. By using derived tables, you can significantly simplify the solution. Derived tables let you reuse the aliases you assign to expressions in the SELECT list, as Listing 3 shows. In the innermost derived table (D1), you calculate the position of the first dot by using this expression: In the next level's derived table (D2), you reuse the p1 alias to calculate the position of the second dot: Similarly, in the next level's derived table (D3), you reuse the p2 alias to calculate the position of the third dot: Now that you've calculated all dot positions and given them aliases, the outermost query against the derived table D3 simply uses those aliases in the ORDER BY clause's SUBSTRING() functions. In an effort to further simplify the solution, I came up with a less-intuitive approach that you might call outside-the-box thinking. Create a table with all possible patterns of IP addresses, each with the four pairs of arguments required for the SUBSTRING() function (the start of each octet and the length of each octet). Essentially, any IP address follows the pattern <1–3 characters>.<1-3 characters>.<1-3 characters>.<1-3 characters> . You'll use the LIKE predicate to match an IP address to a pattern that contains underscores to represent the digits. For example, the IP address 192.168.11.10 follows the pattern '___.___.__.__'. You can calculate the starting positions and lengths of all octets in the pattern by counting the number of characters preceding each octet and the number of characters in each octet. In the previous IP address, if we use sn to represent the starting position of nth octet and ln to represent the length of nth octet, the starting position and length values are s1=1, l1=3, s2=5, l2=3, s3=9, l3=2, s4=12, l4=2. After you match the IP address with its pattern, you can use the start position and length values accompanying the pattern as arguments for the SUBSTRING() functions. You can manually populate such a table with all possible patterns and SUBSTRING() arguments. In total, you have 34 patterns, resulting in 81 rows. Alternatively, you can create a view that returns all possible patterns by cross-joining four instances of an auxiliary table of numbers, filtering for only the numbers 1 through 3 in each instance to represent the possible octet lengths. First, use the following code to create the auxiliary table of numbers and populate it with at least three values: CREATE TABLE Nums(n int NOT NULL PRIMARY KEY);INSERT INTO Nums VALUES(1);INSERT INTO Nums VALUES(2);INSERT INTO Nums VALUES(3); Run the code that Listing 4 shows to create the IPPatterns view, which returns all 81 possible IP patterns along with the start and length values for all four octets for each pattern. The view's query cross-joins four instances of the Nums table (N1, N2, N3, and N4), filtering in each the n values that are less than or equal to 3. In the result of the cross join, the four n values for the four instances of Nums represent all possible combinations of octet lengths. The SELECT list constructs the actual IP address pattern by replicating underscores for each octet and concatenating the underscores with dots between the octets. The SELECT list also calculates the start positions of the octets by summarizing the lengths of the preceding octets and the number of preceding dots plus 1. The lengths of the octets are simply the n values from the corresponding Nums Run the following query to see the patterns and arguments that the IPPatterns view returned: Table 1 shows the abbreviated result. Having the IPPatterns view (or table) in place lets you write an extremely simple query to get the desired result, as Listing 5 shows. The query joins the base table to the IPPatterns view based on a match between the IP address and the pattern that it follows. The query's ORDER BY clause has four SUBSTRING() functions, each of which extracts an octet according to the start and length arguments that the IPPatterns view provides. The code converts each octet string to a tinyint data type to obtain the correct sort. All the previous solutions use standard SQL constructs, so they're ANSI-compliant. But if you don't mind proprietary T-SQL solutions, you can use the PARSENAME() function, which Microsoft designed to return a requested part of an object name. Because IP addresses are very similar to object names—four parts separated by dots—the PARSENAME() function fits the octet extraction from an IP address like a glove. Here's an example of how to use the function to create a solution for this problem: SELECT ipFROM IPsORDER BY CAST(PARSENAME(ip, 4) AS tinyint), CAST(PARSENAME(ip, 3) AS tinyint), CAST(PARSENAME(ip, 2) AS tinyint), CAST(PARSENAME(ip, 1) AS tinyint); This query invokes the PARSENAME() function in the ORDER BY clause once for each octet and converts the resulting octet string to tinyint to get the correct result. All the solutions work similarly because they each perform a scan of the base table, then sort by four expressions. The main difference between the solutions is their complexity. This solution is very simple, but it's a proprietary solution because it uses the nonstandard PARSENAME() function. Also, it's not generic like the other solutions, which you can adapt for other scenarios that follow patterns, because the PARSNAME() function works only with exactly four elements. It's always good when a solution to a T-SQL problem is both intuitive and simple. When the intuitive solution is complex, you should keep looking for other solutions. Apply logic and try different solutions until you come up with one you're satisfied with. And of course, keep practicing both T-SQL and pure logic. Stay on top of the IT universe with commentary, news analysis, how-to's, and tips delivered to your inbox daily.
{"url":"https://www.itprotoday.com/early-versions/sorting-ip-addresses","timestamp":"2024-11-06T20:51:45Z","content_type":"text/html","content_length":"369194","record_id":"<urn:uuid:6ad09f20-0423-4306-9691-96d7efe8ad13>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00297.warc.gz"}
This is the same graph as in figure 11 with the following additions. A vertical arrow stretches up from 35 on the horizontal axis to the graph. This point on the graph is marked P. A horizontal arrow stretches from P to intersect the vertical axis, 1 small interval past 168.
{"url":"https://www.open.edu/openlearn/mod/oucontent/view.php?id=19190&extra=longdesc_idm353","timestamp":"2024-11-02T02:18:03Z","content_type":"text/html","content_length":"26611","record_id":"<urn:uuid:3b857892-6585-4115-a2a9-7e28c8f515b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00380.warc.gz"}
Umeå University Investigating the properties, explaining, and predicting the behaviour of a physical system described by a system (matrix) pencil often require the understanding of how canonical structure information of the system pencil may change, e.g., how eigenvalues coalesce or split apart, due to perturbations in the matrix pencil elements. Often these system pencils have different block-partitioning and / or symmetries. We study changes of the congruence canonical form of a complex skew-symmetric matrix pencil under small perturbations. The problem of computing the congruence canonical form is known to be ill-posed: both the canonical form and the reduction transformation depend discontinuously on the entries of a pencil. Thus it is important to know the canonical forms of all such pencils that are close to the investigated pencil. One way to investigate this problem is to construct the stratification of orbits and bundles of the pencils. To be precise, for any problem dimension we construct the closure hierarchy graph for congruence orbits or bundles. Each node (vertex) of the graph represents an orbit (or a bundle) and each edge represents the cover/closure relation. Such a relation means that there is a path from one node to another node if and only if a skew-symmetric matrix pencil corresponding to the first node can be transformed by an arbitrarily small perturbation to a skew-symmetric matrix pencil corresponding to the second node. From the graph it is straightforward to identify more degenerate and more generic nearby canonical structures. A necessary (but not sufficient) condition for one orbit being in the closure of another is that the first orbit has larger codimension than the second one. Therefore we compute the codimensions of the congruence orbits (or bundles). It is done via the solutions of an associated homogeneous system of matrix equations. The complete stratification is done by proving the relation between equivalence and congruence for the skew-symmetric matrix pencils. This relation allows us to use the known result about the stratifications of general matrix pencils (under strict equivalence) in order to stratify skew-symmetric matrix pencils under congruence. Matlab functions to work with skew-symmetric matrix pencils and a number of other types of symmetries for matrices and matrix pencils are developed and included in the Matrix Canonical Structure (MCS) Toolbox.
{"url":"https://webapps.cs.umu.se/uminf/index.cgi?year=2014&number=5","timestamp":"2024-11-10T13:00:57Z","content_type":"text/html","content_length":"19974","record_id":"<urn:uuid:6418e500-9613-48f5-b347-6eb0bb5a9e9f>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00736.warc.gz"}
Notion Tutorial: Mastering the IF Function in Notion The IF function is one of the most commonly used functions in Notion. It is a conditional statement that evaluates whether a condition is met and returns different values based on the result. Here’s a brief overview of how to use the IF function: IF Function Syntax □ The IF function syntax is as follows: if([condition], [value if true], [value if false]). □ The condition should be a Boolean value (either true or false). If the condition is true, it returns the 1 value. If the condition is false, it returns the 2 value. □ Note: The data types of the first and second values must be the same. They should follow the syntax: ☆ if(Boolean, Boolean, Boolean) ☆ if(Boolean, String, String) ☆ if(Boolean, Number, Number) ☆ if(Boolean, Date, Date) Although the IF function looks like a function, it is actually an operator in Notion, specifically a ternary operator, which means it requires three arguments to work. IF Function Use Cases Suppose we have a task list with tasks of different urgency levels, such as “urgent” and “general”. Now, we want to create a new numeric property to determine the urgency of the current task. If it’s urgent, the value is 2; if it’s general, the value is 1. We can use the IF function to determine whether the urgency of the task is “urgent”. If it is, it returns 1; otherwise, it returns 2. The formula is as follows: if(prop("Urgency") == "Urgent", 2, 1). The IF function can also compare numeric values. We can create a new column. If the urgency level is greater than 1, it returns true; otherwise, it returns false. The formula is as follows: if(prop("Urgency (Numeric)") > 1, true, false). IF Function Shorthand Syntax The IF function has a shorthand syntax that doesn’t require writing in English. You only need to use the symbols ? and :. The shorthand syntax is as follows: [condition] ? [value if true] : [value if □ The left side of the ? is the first argument, which is the condition to determine the truth. □ The value between ? and : is the second argument. If the condition is true, it returns the second argument. □ The right side of the : is the third argument. If the condition is false, it returns the third argument. The following two formulas are both IF functions, and they express the same meaning: □ Default syntax: if(prop("Urgency (Numeric)") > 1, true, false) □ Shorthand syntax: (prop("Urgency (Numeric)") > 1) ? true : false Advanced Usage: Nested IF In actual work, the usage of the IF function can be more complex, and it may be necessary to nest more IF functions within an IF function. For example, in the following table, the urgency of the tasks is divided into three levels: very urgent, urgent, and general. We want to create a new column to assign a numeric value based on the urgency of the task: very urgent – 3, urgent – 2, general – 1. The nested IF function used here is as follows: if(prop("Urgency") == "Very Urgent", 3, if(prop("Urgency") == "Urgent", 2, 1)). In conclusion, the IF function in Notion is a powerful tool that can be used to create complex conditional statements, making your work in Notion more efficient and organized.
{"url":"https://notionad.com/notion-tutorial-mastering-the-if-function-in-notion/","timestamp":"2024-11-07T19:30:12Z","content_type":"text/html","content_length":"127561","record_id":"<urn:uuid:6bc69097-ad3d-476c-be20-602effa6ff08>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00159.warc.gz"}
CSCI150 2021S3 Quiz 1 Welcome to your CSCI150 2021S3 Quiz 1. You will need to write down your name and student ID as they appear in your college profile. You are allowed to attempt only once, if you submit multiple times I will void all your submissions. Please make sure you get to the last page and see the confirmation before you leave. It is recommended that you complete this quiz before 10 October, but anytime before the 12th week is OK. You will NOT be shown your score immediately, I will update the scores every weekend on Good luck. Q1.1 (0.5pt) Storage components belong to which category in the digital von Neumann computer? Q1.2 (0.5pt) A finite fraction number converted into the binary system would be Q1.3 (0.5pt) Which of the following cases would you be most likely to encounter BCD numbers? Q1.4 (0.5pt) In digital circuits, which of the following is correct regarding comparisons to analogue circuits? Q1.5 (0.5pt) Which of the following regarding parity bit is correct? 1 out of 5 Including the previous borrow bit, write down the 4bits of borrow for 4bit subtraction of 8 - 5. 2 out of 5 Q3 (2.5pt) Write down a single number to answer the following questions. What is the greatest decimal value representable by 8bit unsigned integer? What is the smallest decimal value representable by signed 8bit binary? Convert 83 to BCD (no space) Search for an ASCII table, find out what the following hexadecimal code means when converted to text (careful with casing): 6162 726e 6163 6b 3 out of 5 Q4 (2.5pt) Answer the following questions in numbers. At a sample rate of 44100, each sample 2 bytes, what is the bitrate in bytes? CPU clock speed refers to the number of clock cycles it can perform per second, and a lot of instructions in CPUs take only one or two clock cycles to complete. What is the current most common unit of measurement for Clockspeed? (Hint: 3 letters only, first two uppercase) Among the following transmission, write down the one that contains error, assuming the first bit is odd parity. 100000000, 011101100, 100100110, 000010000 4 out of 5
{"url":"https://jetic.org/test/csci150-2021s3-quiz-1/","timestamp":"2024-11-08T21:53:57Z","content_type":"text/html","content_length":"59847","record_id":"<urn:uuid:7e3434ed-d7c7-4e09-9368-dab8d310c975>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00804.warc.gz"}
The Enigma of Color and Number: Unraveling the Mystery of the Number 11 The topic of “What color is the number 11?” may seem like a curious and quirky question, but it leads us down a fascinating rabbit hole of exploring the intersections of color and numbers. The question begs the question, how can a number have a color? This enigmatic question opens up a world of possibilities and leads us to delve into the mysterious and intriguing relationship between colors and numbers. Join us as we unravel the mystery of the number 11 and explore the enigma of color and number. Get ready to be captivated by the vibrant and dynamic world of numbers and colors. What is the Number 11? An Overview of the Number 11 The number 11 is a prime number, which means it can only be divided by 1 and itself without leaving a remainder. It is also a Mersenne prime, a prime number that is one less than a power of two. This is because it is equal to 2^7 – 1. The number 11 is also the first Mersenne prime that is one less than a power of two. The number 11 has many interesting properties. For example, it is the only number that has the same number of letters as its value in the English alphabet. It is also the only two-digit number that cannot be expressed as the sum of two cubes. In addition, the number 11 is the first number that has a pair of twin primes, which are two prime numbers that are less than two apart. The number 11 is also a significant number in many fields, including mathematics, science, and religion. For example, it is the basis for the binary system, which is used to represent numbers in computers. It is also used in music, where it is the basis for the tuning of the piano. In religion, the number 11 is often associated with the apostles of Jesus Christ, as well as with the concept of divine revelation. Overall, the number 11 is a fascinating and enigmatic number that has captivated the minds of mathematicians and scientists for centuries. Its unique properties and significance in various fields make it a topic of ongoing research and study. The Significance of the Number 11 in Mathematics The Number 11: A Bridge Between Squares and Cubes The number 11 is intriguingly situated between two significant mathematical concepts: squares and cubes. It is the first number that can be expressed as the sum of two squares in two different ways: 11 = 1^2 + 1^2 and 11 = 2^2 + 3^2. This characteristic is a defining feature of the number 11, as it serves as a bridge between the more familiar square numbers and the less frequently explored cube The Mysterious Prime Factorization of 11 11 is also a prime number, meaning it cannot be evenly divided by any number other than 1 and itself. Its prime factorization, 11 = 11 * 1, reveals its simplicity yet complexity. This seemingly simple factorization is a reflection of the number’s inherent properties, which continue to intrigue mathematicians. The Number 11 and its Connection to Other Numbers The number 11 is closely related to the Mersenne prime, a prime number that is one less than a power of two. Specifically, 11 is the seventh Mersenne prime, 3 = 2^1 – 1 and 7 = 2^3 – 1. This connection to Mersenne primes highlights the unique nature of the number 11 and its connection to the larger world of prime numbers. The Enigma of 11: An Open Problem in Number Theory The significance of the number 11 in mathematics extends beyond its connections to squares, cubes, and prime numbers. There is an open problem in number theory known as the “Enigma of 11,” which explores the occurrence of the number 11 in various arithmetic and geometric sequences. This enigmatic phenomenon has yet to be fully understood, and its resolution remains an open question in the field of mathematics. The number 11’s intriguing properties and connections to other numbers have captivated the attention of mathematicians for centuries. Its unique characteristics and the enigmatic problem surrounding its occurrence in specific sequences continue to fuel the curiosity of mathematicians and number theorists. Colors and Their Codes Key takeaway: The number 111 is a fascinating and enigmatic number that has captivated the minds of mathematicians and scientists for centuries. Its unique properties and significance in various fields make it a topic of ongoing research and study. The number 111 is also a significant number in many fields, including mathematics, science, and religion. The Basics of Color Coding In the realm of cryptography, colors play a vital role in encoding and decoding messages. Each color represents a numerical value, and this relationship between colors and numbers forms the basis of color coding. By assigning a numerical value to each color, cryptographers can create complex codes that can be deciphered only by those who possess the key to the code. One of the earliest known examples of color coding is the use of colors to represent numbers in the ancient Greek system of numerals. This system, known as the “Greek numeral system,” assigned a unique color to each of the nine numbers, with white representing 1, black representing 10, and the other numbers in between represented by various combinations of the two colors. However, it was not until the development of modern cryptography that color coding became a widely used method for encoding and decoding messages. The use of colors in cryptography can be traced back to the 16th century, when cryptographers began to use complex codes that required the use of colors to represent numbers and letters. The most common method of color coding involves the use of a key, which is a chart that assigns a numerical value to each color. This key is used to translate the message into a series of colors, which can then be transmitted to the intended recipient. The recipient, who possesses the key, can then use the key to translate the colors back into the original message. Another method of color coding involves the use of “one-time pads,” which are sheets of paper that contain a random sequence of colors. These pads are used to encode messages by randomly selecting a color from the pad and assigning it to each letter or number in the message. The one-time pad is then destroyed, making it impossible for anyone to decode the message without the original pad. Overall, the basics of color coding involve the use of colors to represent numerical values, which can then be used to encode and decode messages. By understanding the relationship between colors and numbers, cryptographers can create complex codes that can be used to secure sensitive information. The Psychology of Color Coding The psychology of color coding refers to the various psychological effects that colors can have on human behavior and perception. The way colors are perceived and processed in the brain is influenced by cultural, personal, and environmental factors. The use of color coding in design, advertising, and marketing has become increasingly prevalent as businesses and organizations seek to harness the power of color to evoke certain emotions and perceptions in their target audiences. The psychology of color coding is based on the idea that colors can be associated with certain emotions, moods, and perceptions. For example, red is often associated with energy, passion, and excitement, while blue is often associated with calmness, serenity, and trust. The use of color coding in design and marketing is often used to create a specific emotional response in the viewer or to convey a particular message or tone. One of the key factors in the psychology of color coding is the idea of color harmony. Color harmony refers to the way colors are combined to create a pleasing visual effect. Different cultures and design styles have different ideas of what constitutes good color harmony, but in general, the use of complementary colors (colors that are opposite each other on the color wheel) is seen as particularly effective in creating a visually appealing design. Another important aspect of the psychology of color coding is the idea of color contrast. Color contrast refers to the way different colors are used to create visual interest and draw attention to certain elements in a design. High contrast color schemes use bold, bright colors to create a dynamic visual effect, while low contrast color schemes use more muted, subtle colors to create a calmer, more peaceful atmosphere. In addition to these factors, the psychology of color coding also takes into account the personal and cultural associations that individuals have with different colors. For example, some cultures may associate certain colors with different emotions or meanings than others, and individuals may have their own personal preferences and associations with different colors. Overall, the psychology of color coding is a complex and multifaceted field that takes into account a wide range of factors, including emotional associations, cultural meanings, and personal preferences. By understanding the psychology of color coding, designers, marketers, and organizations can harness the power of color to create effective and engaging designs, messages, and The Connection Between Colors and Numbers Theories on the Relationship Between Colors and Numbers Theories on the relationship between colors and numbers are diverse and have been the subject of much debate. Some of the most prominent theories include: 1. Numerical Color Theory: This theory posits that there is a direct correlation between colors and numbers. According to this theory, each number is associated with a specific color, and this association is based on the frequency of the color in nature. For example, the number 1 is associated with the color red because red is the most frequently occurring color in nature. 2. The Pythagorean Theory: This theory suggests that the relationship between colors and numbers is based on the principles of harmony and proportion. According to this theory, each number is associated with a specific color based on its position in the sequence of numbers. For example, the number 1 is associated with the color red because it is the first number in the sequence. 3. The Kabbalistic Theory: This theory suggests that the relationship between colors and numbers is based on the principles of mysticism and spirituality. According to this theory, each number is associated with a specific color based on its symbolic meaning. For example, the number 1 is associated with the color red because it represents creation and beginnings. 4. The Scientific Theory: This theory suggests that the relationship between colors and numbers is based on the principles of physics and chemistry. According to this theory, each color is associated with a specific frequency of light, and each number is associated with a specific frequency of vibration. For example, the number 1 is associated with the color red because it has the highest frequency of vibration among all colors. In conclusion, the theories on the relationship between colors and numbers are varied and have been the subject of much debate. While some theories suggest a direct correlation between colors and numbers, others suggest that the relationship is based on principles of harmony, proportion, mysticism, or physics and chemistry. Regardless of the theory, the connection between colors and numbers remains an enigma that continues to captivate the imagination of scholars and laypeople alike. Exploring the Role of Culture in the Association of Colors and Numbers The Influence of Cultural Background on Color-Number Associations Cultural background plays a significant role in shaping the associations between colors and numbers. Different cultures assign unique meanings to colors and numbers, which in turn influence the way they are perceived and interpreted. For instance, in some cultures, the number 11 is associated with good luck, while in others, it is considered unlucky or even ominous. The Role of Religion in Color-Number Associations Religion is another factor that can shape the associations between colors and numbers. For example, in Christianity, the number 11 is often associated with the apostles, as there were 11 of them. Similarly, in Hinduism, the number 11 is associated with the god Vishnu, who had 11 incarnations. These religious associations can significantly influence the way colors and numbers are perceived and understood within a particular culture. The Impact of Historical Events on Color-Number Associations Historical events can also shape the associations between colors and numbers. For example, in the United States, the number 11 is often associated with the September 11 attacks, which occurred on September 11, 2001. This association has led to the number 11 being perceived as a symbol of tragedy and loss. In contrast, in Japan, the number 11 is associated with harmony and balance, as it is associated with the number of strokes required to write the characters for “harmony” and “balance” in Japanese calligraphy. The Importance of Context in Understanding Color-Number Associations It is important to consider the cultural, religious, and historical context in which colors and numbers are used when attempting to understand their associations. By examining the unique associations that different cultures have with colors and numbers, we can gain a deeper understanding of how these associations are formed and how they can vary across cultures. This knowledge can help us appreciate the diversity of human experience and the many ways in which colors and numbers can be perceived and understood. The Mystery of the Number 11 and Its Color The Question of What Color Represents the Number 11 The question of what color represents the number 11 has puzzled scientists, mathematicians, and philosophers for centuries. The human eye perceives colors as different wavelengths of light, and each number has a corresponding color that it is associated with. For example, the number 1 is often associated with the color red, while the number 2 is associated with the color blue. However, the number 11 does not have a clear and straightforward color association. One possible explanation for this is that the number 11 is an “exceptional” number, meaning that it has unique properties that set it apart from other numbers. For example, the number 11 is the first number that has a single-digit prime factor, which is 11 itself. This unique property may have influenced the way that the number 11 has been perceived and associated with different colors throughout Another explanation for the enigma of the color of the number 11 is that it may be associated with multiple colors or shades of colors. Some people may associate the number 11 with the color yellow, while others may see it as a shade of green or gray. This variability in color perception may be due to individual differences in visual processing and cultural influences on color perception. Despite the lack of a clear and consistent color association with the number 11, the enigma of the color of the number 11 continues to intrigue and captivate those who study the relationship between numbers and colors. Future research may shed more light on the origins and significance of this enigma and its implications for our understanding of the nature of numbers and color perception. Different Interpretations of the Color of the Number 11 One of the most intriguing aspects of the number 11 is its association with various colors. However, the interpretation of the color of the number 11 has been a subject of much debate and discussion. Different cultures, beliefs, and theories have attributed different colors to the number 11, adding to the enigma surrounding it. Here are some of the different interpretations of the color of the number 11: • In Western numerology, the number 11 is associated with the color orange. This is because orange is the combination of the warmth and energy of red and the optimism and enthusiasm of yellow. It is believed that the color orange resonates with the vibrational frequency of the number 11, making it a powerful and transformative number. • In some Eastern cultures, the number 11 is associated with the color purple. This is because purple is the combination of the stability and grounding of blue and the spirituality and intuition of red. It is believed that the color purple has a calming effect on the mind and body, making it a powerful and healing number. • In certain spiritual and esoteric beliefs, the number 11 is associated with the color gold. This is because gold represents the higher self, spiritual growth, and enlightenment. It is believed that the color gold has a powerful and uplifting energy, making it a symbol of transformation and enlightenment. • In astrology, the number 11 is associated with the sign of Aquarius. This is because Aquarius is an air sign, representing the intellect, reason, and humanitarianism. It is believed that the sign of Aquarius resonates with the vibrational frequency of the number 11, making it a powerful and revolutionary number. Overall, the interpretation of the color of the number 11 is subjective and varies depending on cultural, spiritual, and personal beliefs. However, it is clear that the number 11 holds a special significance and is deeply connected to the mysteries of the universe. Attempts to Assign a Color to the Number 11 The Use of Color in Number Systems Ancient Roots of Color-Number Associations The human inclination to associate colors with numbers dates back to antiquity. Ancient civilizations, such as the Egyptians and Babylonians, used color to represent numbers in their counting systems. These early number systems were based on practical needs like measuring crops, tracking the cycles of the moon, and determining the position of celestial bodies. Color-Coded Number Systems in Ancient Cultures The Babylonian system, for instance, used a base-60 numeral system. This system was practical for representing time and angles, as it could divide the day into 24 hours, each consisting of 60 minutes, and the circle into 360 degrees, each consisting of 60 minutes. Babylonian numerals were represented by a combination of symbols for 1, 10, and 60, and they used color to differentiate between the numbers. In the Mayan numeral system, numbers were represented by dots and bars, and the number 11 was denoted by a single dot in the place value position, surrounded by 10 dots representing the units place. This method of representation used colors to distinguish the individual dots and bars. Color Coding in Modern Numeral Systems In contemporary numeral systems, the use of color is not as prevalent as it was in ancient times. However, it still finds applications in certain fields. For example, the hexadecimal system, which is used in computer science and programming, employs a color-coding scheme to represent numbers up to 15. Each number is represented by a combination of four hexadecimal digits, with each digit having a specific color association: • Digit 0: green • Digit 1: red • Digit 2: brown • Digit 3: blue • Digit 4: purple • Digit 5: teal • Digit 6: silver • Digit 7: gray • Digit 8: orange • Digit 9: gold • Digit A: magenta • Digit B: pink • Digit C: light blue • Digit D: cyan • Digit E: light green • Digit F: black Color-Coded Calculators and Educational Tools In modern times, color-coded calculators and educational tools have been developed to help students better understand number concepts. These tools often use colors to represent different place values or operations, making it easier for learners to visualize the relationships between numbers. Despite the decline in the use of color in modern numeral systems, the human fascination with assigning colors to numbers persists. The mystery of the number 11 and its elusive color remains an intriguing subject for researchers and enthusiasts alike. The Invention of a New Color for the Number 11 The assignment of colors to numbers has been a subject of interest for many mathematicians and scientists throughout history. One of the most intriguing examples of this endeavor is the invention of a new color for the number 11. The idea of inventing a new color for the number 11 emerged from the observation that the colors of the rainbow are typically associated with the numbers from one to ten. This led some researchers to question why the number 11 was left out of this colorful representation of numbers. As a result, a group of scientists decided to create a new color that would represent the number 11. The process of inventing a new color for the number 11 involved a great deal of experimentation and collaboration among the researchers. They spent countless hours in the laboratory, mixing different shades of blue and green to create a unique hue that would stand out among the other colors. The researchers were determined to find a color that would be both visually appealing and mathematically After many trials and errors, the researchers finally settled on a new color for the number 11. They called it “eleven blue,” a deep, rich shade of blue that was distinct from any of the other colors in the rainbow spectrum. Eleven blue was a striking color that immediately caught the attention of anyone who saw it. It was a perfect representation of the number 11, as it was both unique and mathematically significant. The invention of eleven blue was a significant achievement in the field of color theory. It demonstrated that it was possible to create a new color that had never been seen before, simply by applying mathematical principles to the visual arts. This achievement opened up new possibilities for the use of color in mathematics and science, and it inspired many researchers to continue exploring the mysteries of color and number. In conclusion, the invention of a new color for the number 11 was a groundbreaking achievement that highlighted the intriguing relationship between color and number. It demonstrated that it was possible to create something new and beautiful by applying mathematical principles to the visual arts. This achievement has inspired many researchers to continue exploring the mysteries of color and number, and it has contributed to our understanding of the fascinating world around us. The Enigma of the Number 11 and Its Color Remains Unsolved The enigma of the number 11 and its color remains unsolved, despite numerous attempts to assign a color to it. The reason for this is that the color assigned to the number 11 has no universal or objective basis. In some cultures, the number 11 is associated with the color blue, while in others, it is associated with the color purple. Some cultures associate the number 11 with both colors, while others associate it with no color at all. One reason for the confusion over the color associated with the number 11 is that the concept of color itself is subjective and varies from culture to culture. What one culture considers to be the color blue, another culture may consider to be the color purple or a different shade of blue altogether. Additionally, the way in which colors are perceived and categorized can vary depending on the context in which they are used. For example, the color of a rose may be perceived differently depending on whether it is used as a decoration or as a symbol of love. In conclusion, the enigma of the number 11 and its color remains unsolved due to the subjective nature of color and the variations in the way it is perceived and categorized across different cultures. Despite this, the number 11 continues to be an important and intriguing aspect of numerology and the study of the mysteries of the universe. Further Research and Exploration As the fascination with numbers and their association with colors continued to grow, scholars and researchers alike delved deeper into the subject, exploring various cultures and traditions to uncover the hidden meanings and connections between these seemingly disparate elements. One of the primary objectives of this further research was to investigate the extent to which different cultures and belief systems assigned colors to numbers and how these assignments might vary across different contexts. This required a meticulous examination of various historical, religious, and cultural texts, as well as a thorough analysis of artistic and architectural works that featured numerical motifs. Another important area of focus was the potential psychological and symbolic significance of color and number associations. By exploring the ways in which these associations might be interpreted and understood by individuals and communities, researchers aimed to uncover the deeper, more complex meanings that lay beneath the surface of these seemingly simple pairings. Additionally, researchers sought to explore the possible evolution of color and number associations over time, examining how these associations might have changed and developed as societies grew and evolved. This involved a careful examination of historical documents, artifacts, and artistic works, as well as an analysis of contemporary cultural practices and beliefs. Ultimately, the goal of this further research and exploration was to gain a deeper understanding of the enigma of color and number, to uncover the mysteries that lay hidden within these seemingly innocuous pairings, and to shed light on the rich and complex tapestry of beliefs, traditions, and cultural practices that have evolved around them. 1. What is the relationship between colors and numbers? Colors and numbers are abstract concepts that exist independently of each other. However, they are often used together in various forms of visual representation, such as in charts, graphs, and paintings. While there is no inherent relationship between colors and numbers, they are often used to convey information or create aesthetic effects. 2. Why is the number 11 significant in this context? The number 11 is significant in this context because it is often associated with a specific color, which is typically considered to be a shade of purple or magenta. This association is not based on any scientific or mathematical principles, but rather on cultural and historical factors. The color associated with the number 11 has varied across different cultures and time periods, but it has become a widely recognized and enduring symbol. 3. Is there a specific color that represents the number 11? There is no universal or objective color that represents the number 11. The color associated with the number 11 is a matter of cultural and historical interpretation, and it can vary depending on the context and the individual interpreting it. However, in many contexts, the color associated with the number 11 is a shade of purple or magenta. 4. How did the association between the number 11 and a specific color come about? The association between the number 11 and a specific color is a cultural and historical phenomenon that has evolved over time. It is not based on any scientific or mathematical principles, but rather on various cultural and historical factors, such as symbolism, tradition, and aesthetics. The specific color associated with the number 11 has varied across different cultures and time periods, but it has become a widely recognized and enduring symbol. 5. Is the color associated with the number 11 universal across all cultures? No, the color associated with the number 11 is not universal across all cultures. The color associated with the number 11 is a matter of cultural and historical interpretation, and it can vary depending on the context and the individual interpreting it. Different cultures have their own unique associations and meanings attached to the number 11 and the colors that represent it. 6. Can the color associated with the number 11 be changed or altered? The color associated with the number 11 is a matter of cultural and historical interpretation, and it can be changed or altered over time. Different individuals and cultures may have different associations and meanings attached to the number 11 and the colors that represent it, and these associations can evolve and change over time. However, in many contexts, the color associated with the number 11 is a shade of purple or magenta, and this association has become a widely recognized and enduring symbol. Numerology Reveals Your POWER Color | What Colors Best For You & Main Chakra Based On Your Birthday
{"url":"https://www.therec.io/the-enigma-of-color-and-number-unraveling-the-mystery-of-the-number-11/","timestamp":"2024-11-09T10:42:50Z","content_type":"text/html","content_length":"73866","record_id":"<urn:uuid:6baebb84-b123-4b06-acd3-37b7c593df01>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00710.warc.gz"}
SAGE 8.1, cannot import python modules from notebook SAGE 8.1, cannot import python modules from notebook I'm running into trouble trying to setup SAGE 8.1. What happened is the following. I'm working in Linux Ubuntu 16.4. I installed SAGE 7.5.1 from command-line. Then I installed SAGE 8.1 from pre-built binaries. I deleted the old sage version and now when I run the command sage, it tries to find SAGE 7.5.1 and fails. I already ran: ln -s /SageMath8.1/sage /usr/local/bin/sage And nothing changed. Even though I can run SAGE 8.1 with: But once there I cannot import python modules: >>> import pandas as pd ImportError Traceback (most recent call last) <ipython-input-17-af55e7023913> in <module>() ----> 1 import pandas as pd ImportError: No module named pandas I guess there's a problem with paths, but I don't know how to solve it. Any help? 2 Answers Sort by ยป oldest newest most voted Sage uses its own Python environment that's separate from whichever one(s) you may have previously installed. Try installing the package into Sage with the command sage -pip install pandas and see if that helps. edit flag offensive delete link more is it 'bad practice' to try to use the same python environment? daranha ( 2018-03-05 18:59:47 +0100 )edit It is not a bad practice, it is just that with your installation setup, SageMath does not use the system python. You can check which python SageMath does use with sage -python -c 'import sys; print(sys.executable)' vdelecroix ( 2019-07-06 19:48:57 +0100 )edit Regarding your trouble understanding what happens when you type sage in a terminal: • run which sage to try to figure out what gets run when you type sage • run echo $PATH to check what other locations are searched before /usr/local/bin • check if you defined an alias in your .bashrc or .bash_profile or .bash_aliases (see this blog post about .bashrc and others) • if the ln -s command you ran did not succeed, it might be either because there already is such an alias there, or because of a permissions problem; try with sudo ln -sf /SageMath8.1/sage /usr/local/bin edit flag offensive delete link more Thanks for your answer, though I'm still struggling. - 'which sage' command didn't provide an answer. - three locations are searched before '/usr/local/bin': '/home/myself/bin' , '/home/myself/.local /bin', and /usr/local/sbin/' - I don't understand your third point. - It cannot create a symbolic link because it already exists. daranha ( 2018-03-05 19:16:45 +0100 )edit
{"url":"https://ask.sagemath.org/question/41315/sage-81-cannot-import-python-modules-from-notebook/?answer=41360","timestamp":"2024-11-05T22:47:15Z","content_type":"application/xhtml+xml","content_length":"65905","record_id":"<urn:uuid:fe17dc52-d4b5-4902-86e3-7fbcb8cd9ff2>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00247.warc.gz"}
Colloquium Topics • Mike Starbird July 2, 4:00-5:15 PM The Art Gallery Theorem Sadly, thieves and visitors with sticky fingers require art galleries to guard their paintings, sometimes with surveillance cameras. Suppose an art gallery is bounded by a single polygonal closed curve; however, it could have lots of nooks and crannies. Because of the possibility of strange indentations and extensions, multiple cameras may be required in order to view the whole gallery. The question is, "How many cameras are necessary if the room has n straight walls?" This question was answered less than 50 years ago, but several related questions remain unanswered to this day. Admiral Inman July 6, 11:00 AM -12:15 PM Life Story and World Affairs Gail Burrill July 9, 4:00-5:15 PMMathematics is Awesome What are some things in mathematics you find fascinating? We’ll explore some of these from your perspectives as well as mine. We will also spend time investigating how mathematics and data can help us understand some things about the world in which we live. Dan Shapiro July 16, 4:00-5:15 PM Triangular Reptiles For a whole number k, a k-reptile is a shape in the plane that can be exactly covered with k congruent copies of a tile that is similar to the original shape. Using midpoints of the sides, a triangle is tiled by 4 congruent triangles, each similar to the original. That is: Any triangle is a 4-reptile. It is also a 9-reptile and a 16-reptile. Could some triangle be a 3-reptile? For which k is it possible for some triangle to be a k-reptile? Lauren Ancel-Meyers July 23, 4:00-5:15 PM COVID-19 and Pandemics Nathan Warshauer July 30, 4:00-5:15 PM Video Games and Future Directions • 2019 HSMC Colloquium Topics □ Adam Lowrance - To Knot or Not to Knot □ Michael Starbird - Cake Cutting for Greedy People □ Admiral Bob Inman - Life Experiences and Lessons Learned □ Miriam Kuzbary - The Shape of Things - Organizing Space using Algebra □ Dan Shapiro - Roations □ Joel Spencer - Asymptopia □ Stephen McAdam - The Greatest Invention in the World
{"url":"https://www.txst.edu/mathworks/camps/summer-math-camps-information/hsmc/colloquium-topics.html","timestamp":"2024-11-08T13:41:18Z","content_type":"text/html","content_length":"40159","record_id":"<urn:uuid:2e49043e-db84-4bfc-9d83-1838f9a7ba69>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00186.warc.gz"}
tags: moduli, Abelian varieties Functors \(Pic\) and \(Div\) We will denote base change with a subscript: \(X_T = X \times T\). If \(X\) is a scheme, the Picard group of \(X\) is defined to be the group of isomorphism classes of invertible sheaves on \(X\). The relative Picard functor of an \(S\)-scheme \(X\) is defined as \[ \operatorname{Pic}_{X/S}(T) := \operatorname{Pic}(X_T) / \operatorname{Pic}(T) \] where the embedding \(\operatorname{Pic}(T) \hookrightarrow \operatorname{Pic}(X_T)\) is given by the pullback along the structure maps \(X_T \to T\). Caveat: all representability results work with a sheafification of this functor in some topology, Zariski, étale, or fppf. Unless \(X \to S\) is proper and has a section, they need not be isomorphic. An effective divisor is a closed subscheme such that its ideal is invertible. If \(f: X \to S\) is a morphism of schemes then a relative effective divisor on \(X\) is an effective divisor \(D\) such that \(D\) is flat over \(S\). For a morphism \(X \to S\) define the functor of relative divisors \[ \operatorname{Div}_{X/S}(T) := \{\ \textrm{ relative effective divisors on } X_T/T \ \} \] We are interested in representability of this functor, so to this end we prove a little lemma. Lemma. Let \(X \to S\) be a flat morphism. Let \(D\) be a closed subscheme of \(X\) flat over \(S\). Then \(D\) is relative effective divisor in a neighbourhood of \(x \in X\) if and only if \(D_s\) is cut out in a neighbourhood of \(x\) in \(X_s\), where \(s\) is the image of \(x\), by a single non-zero element of \(\operatorname{\mathcal{O}}_{X_s,x}\) (which amounts to being an effective divisor in \(X_s\) but we won’t prove it). Proof. Left to right. Multiplication by the element \(f\) that cuts out \(D\) induces the short exact sequence of \(\operatorname{\mathcal{O}}_{S,s}\)-modules \[ 0 \to \operatorname{\mathcal{O}}_ {X,x} \to \operatorname{\mathcal{O}}_{X,x} \to \operatorname{\mathcal{O}}_{D,x} \to 0 \] Tensoring it with \(k(s)\) we get \[ \begin{array}{c} \ldots \to \operatorname{Tor}_1(\operatorname{\mathcal {O}}_{X,x}, k(s)) \to \operatorname{Tor}_1(\operatorname{\mathcal{O}}_{X,x}, k(s)) \to \operatorname{Tor}_1(\operatorname{\mathcal{O}}_{D,x}, k(s)) \to \\ \to \operatorname{\mathcal{O}}_{X,x} \otimes k(s) \to \operatorname{\mathcal{O}}_{X,x} \otimes k(s) \to \operatorname{\mathcal{O}}_{D,x} \otimes k(s) \to 0\\ \end{array} \] Since \(D\) is flat in a neighbourhood of \(x\), the third term \(\ operatorname{Tor}_1(\operatorname{\mathcal{O}}_{D,x}, k(s))\) vanishes. Therefore, the element \(f \otimes k(s)\) cuts out \(D_s\) in \(X_s\), and \(D_s\) is an effective divisor in \(X_s\). Right to left: suppose \(D_s\) is locally cut out by non-zero divisor of \(\operatorname{\mathcal{O}}_{X_s,x}\) then we have to show \(D\) is locally cut out by a single element of \(\operatorname{\ Consider the exact secquence \[ 0 \to I_{D,x} \to \operatorname{\mathcal{O}}_{X,x} \to \operatorname{\mathcal{O}}_{D,x} \to 0 \] then the long exact sequence associated to tensoring with \(k(s)\) is \[ \operatorname{Tor}_1(\operatorname{\mathcal{O}}_{X,x}, k(s)) \to \operatorname{Tor}_1(\operatorname{\mathcal{O}}_{D,x}, k(s)) \to I_{D,x} \otimes k(s) \to \operatorname{\mathcal{O}}_{X,x} \otimes k(s) \to \operatorname{\mathcal{O}}_{D,x} \otimes k(s) \to 0 \] as \(I_{D,x} = f\operatorname{\mathcal{O}}_{X,x}\) and \(f\) is regular in the fibre \(X_s\), \(I_{D,x} \otimes k(s) \to \operatorname {\mathcal{O}}_{X,x} \otimes k(s)\) is multiplication by \(f \otimes k(s)\), so in fact is an inclusion. \(\operatorname{Tor}_1(\operatorname{\mathcal{O}}_{X,x}, k(s))\) vanishies sinche \(X\) is flat over \(S\), and therefore, finally, \(\operatorname{Tor}_1(\operatorname{\mathcal{O}}_{D,x}, k(s))=0\), and \(D\) is flat over \(S\). Since, as it follows from the premise, the multplication by \(f \otimes k(s)\) induces an isomorphism \(\operatorname{\mathcal{O}}_{X,x} \otimes k(s) \to I_{D,x} \otimes k(s)\), it follows by Nakayama’s lemma that the kernel of the multiplication by \(f\) is trivial, and so \(f\) is not a zero divisor. Corollary. \(\operatorname{Div}\) is representable by an open subscheme of \(\operatorname{Hilb}\). Proof. Let \(W \subset X \times \operatorname{Hilb}_{X/S}\) be the universal scheme. Then the property of being an effective divisor on \(X \times \operatorname{Hilb}\) in a neighourhood of a given point is an open property because of the previous lemma, let \(U \subset W\) be the set of such points. Then since projection to \(\operatorname{Hilb}_{X/S}\) is proper, the image of \(U\) in \(\ operatorname{Hilb}_{X/S}\) is open. Let us show that it represents \(\operatorname{Div}_{X/S}\). Note that for any \(y \in O\), \(W_y\) is an effective divisor in \(X_y\) by the Lemma above. Given relative effective divisor \(D \subset X_T\) there exists a map \(\iota_D: T \to \operatorname{Hilb}_{X/S}\) such that \(D = W \times_{\operatorname{Hilb},\iota_D} T\). At every point \(t \in \ operatorname{Im}(\iota_D: T \to \operatorname{Hilb})\), the closed subscheme \(W_t\) is a divisor in \(X_t\), and by the Lemma above \(W_T\) is a relative effective divisor. representability of \(\operatorname{Pic}(X/S)\), Mumford’s method Suffices to represent \(\operatorname{Pic}^\tau\), the component of \(\operatorname{Pic}\) that contains numerically trivial divisors, because components corresponding to different numerical classes are isomorphic. In fact, it will be more convenient to fix a very ample divisor \(\xi\) and represent the compononet \(\operatorname{Pic}^\xi\) of \(\operatorname{Pic}\) consisting of divisors numerically equivalent to \(\xi\). We pick \(\xi\) to be 0-regular (regularity was described in the previous post). There is a little semi-tautological argument that it is enough to show representability of \(\operatorname{Pic}^\xi\) (given that \(\operatorname{Div}^\xi\) is representable). We consider the morphism of fuctors \(\Phi: \operatorname{Div}\to \operatorname{Pic}\) and then restrict it to \(\operatorname{Pic}^\xi\). The goal is to construct a section \(s: \operatorname{Pic}^\ xi \to \operatorname{Div}^\xi\). Lemma. Assume \(s\) exists, then \(\operatorname{Pic}\) is representable. Proof. The morphism \(s \circ \Phi\) is an endomorphism of \(\operatorname{Div}^\xi\), which is representable, say by an \(S\)-scheme \(D\). Then \(s \circ \Phi(\operatorname{id}_D)\) is an endomorphism \(f\) of the scheme \(D\). Consider the fibre product \(D \times_{\Delta, D \times D, \operatorname{id}\times f} D\), where \(\Delta: D \to D \times_S D\) is the diagonal map. Call this fibre product \(P\). We claim that \(P\) represents \(\operatorname{Pic}\). Indeed, \(\operatorname{Hom}(T, P)\) is isomorphic, by construction, to the set of pairs of morphsims \(\alpha, \beta: T \to D\) such that \(\Delta(\alpha) = \operatorname{id}\times f (\beta)\), i.e. \(\operatorname{Hom}(T, P)\) is the image of \(\operatorname{Hom}(T, D)\) under \(s \circ \Phi\). This means that \(P\) represents \(\operatorname{Pic}^\xi\). \(\square\) The section is constructed as follows (Mumford assumes \(S=k\), a field, not sure how this is restrictive; on the other hand he also assumes \(X\) is a surface, this seems to be crucial later). Suppose we are given an invertible sheaf \(\operatorname{\mathcal{L}}\) no \(X \times T\). Denote \(\operatorname{\mathcal{M}}_x\) its restriction to \({x} \times T\), which can be considered as a sheaf on \(T\), and let \(\operatorname{\mathcal{E}}\) be the sheaf of global sections \((p_T)_* \operatorname{\mathcal{L}}\). We pick a finite number of points \(x_1, \ldots, x_N \in X\), and consider the natural morphism \[ h: \operatorname{\mathcal{E}}\to \bigoplus_{i=1}^N \operatorname{\mathcal{M}}_{x_i}, s \mapsto (s (x_1), \ldots, s(x_N)) \] Assuming the rank of \(\operatorname{\mathcal{E}}\) is \(r\) (and this is the same for invertible sheaves of the same numerical class, if this class is sufficiently ample), we can wedge this \(r-1\) times to get a homomorphism of invertible sheaves \[ (\wedge h)^*: \otimes \operatorname{\mathcal{M}}_{x_i} \to \operatorname{Hom}(\wedge \operatorname{\mathcal{E}}, \ operatorname{\mathcal{O}}_T) \] This gives a canonical morphism of sheaves \[ \operatorname{\mathcal{O}}_T \to \operatorname{\mathcal{E}}\otimes ((\wedge \operatorname{\mathcal{E}})^{-1} \otimes (\ otimes M_{x_i})) \] and hence a canonical section \[ \sigma \in H^0(X \times T, \operatorname{\mathcal{L}}\otimes (p_T)^* ((\wedge \operatorname{\mathcal{E}})^{-1} \otimes (\otimes M_{x_i})) \] Using some magick related to regularity (takes part in the choice of \(\xi\)), one can show that the zero locus of this section does not vanish on any fibre \(X_t\). (All this is Lectures 19-20 in ``Lectures on curves on algebraic surface’’)
{"url":"http://shenme.de/blog/posts/2016-04-24-picard.html","timestamp":"2024-11-11T17:20:55Z","content_type":"application/xhtml+xml","content_length":"16555","record_id":"<urn:uuid:a796588b-4ed3-425b-8716-0e9a9508a1e5>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00874.warc.gz"}
Martingale Concentration Let be a martingale wrt to the filtration . Assume is scalar-valued unless otherwise indicated. Here we investigate concentration inequalities for . Note that martingale concentration inequalities generalize concentration inequalities for independent random variables (eg bounded scalar concentration), since we may take , in which the following bounds translate into bounds on . While we state concrete, mostly fixed-time results here, we note that many of the following bounds were made time-uniform (and often tightened) using sub-psi processes. Azuma-Hoeffding inequality Assume that for all , i.e., the martingale has bounded increments. Then, for all , The natural one-sided versions of this inequality also exist. Note that is fixed in advance here (i.e., it is fixed-time result). Dubins-Savage inequality This is often considered Chebyshev’s inequality for martingales. If has conditional means , i.e., and conditional variances then for any , This is a time-uniform result. This result can also be generalized to infinite variance. If for , then where is a constant dependent on . This was proven by Kahn in 2009. Variance bound If the martingale has bounded increments and the variance of the increments are also bounded, i.e., then we can modify Azuma’s bound to read where , as long as . Why is this better than Azuma’s inequality? Since the increments are bounded by , a trivial bound on is . Thus we may assume that , which means the right hand side of the bound is tighter. This was first proved by DA Grable in A Large Deviation Inequality for Functions of Independent, Multi-way Choices. A modern proof is given by Dubhasi and Panconesi in their textbook, Concentration of Measure for the Analysis of Randomized Algorithms, Chapter 8. Variance bound — matrix version. Suppose is a matrix-valued martingale (Hermitian Matrices). Let and suppose Then, for , where each is a -matrix. This was first proved by David Gross: Recovering Low-Rank Matrices From Few Coefficients In Any Basis.
{"url":"https://thestatsmap.com/martingale-concentration","timestamp":"2024-11-09T20:15:05Z","content_type":"text/html","content_length":"158293","record_id":"<urn:uuid:5dd6ed5f-56fe-42b3-9868-85ee0666b59d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00812.warc.gz"}
DOI Number This paper presents a novel method for detecting locations of damages in thin walled structural components made of fiber reinforced composites (FRC). Therefore, the change of harmonic distortion, which is found by current research to be very sensitive to delamination, under resonant excitation will be derived from FEM-simulation. Based on the linear modal description of the undamaged structure and the damage-induced nonlinearities represented by a nonlinear measure, two spatial damage indexes have been formulated. The main advantage of this novel approach is that the information about the defect is represented mainly by changes in the modal harmonic distortion (MHD), which just needs to be measured in one (or few) structural points. The spatial resolution is given by the pairwise coupling of the MHD with the corresponding mode shapes. Structural Health Monitoring, Composites, Damage Location, Nonlinear Acoustics, Harmonic Distortion, Nonlinear Vibration Cai, J., Qiu, L., Yuan, S., Liang, D., Shi, L., Liu, P.P., Liang, D., 2012, Structural Health Monitoring for Composite Materials, Composites and Their Application, INTECH. Balageas, D., Fritzen, C., Güemes, A. (Eds.), 2006, Structural Health Monitoring, ISTE, London, Newport Beach. Masters, J. (Ed.), 1992, Damage detection in composite materials, 1128, American Society for Testing and Materials, Philadelphia. Carden, E.P., Fanning, P., 2004, Vibration Based Condition Monitoring: A Review, Structural Health Monitoring, 3(4), pp. 355-377. Doebling, S.W., Farrar, C.R., Prime, M.B., 1998, A Summary Review of Vibration-Based Damage Identification Methods, Identification Methods, Shock and Vibration Digest, 30(2), pp. 91-105. Kaiser, S., Melcher, J., Breitbach, E., Sachau, D., 1999, Structural dynamic health monitoring of adaptive CFRP structures, Proc. SPIE 3674, Smart Structures and Materials 1999: Industrial and Commercial Applications of Smart Structures Technologies, 51. Zou, Y., Tong, L., Steven, G., 2000, Vibration-Based Model-Dependent Damage (Delamination) Identification and Health Monitoring for Composite Structures - A Review, Journal of Sound and Vibration, 230(2), pp. 357-378. Montalvão, D., Maia, N., Ribeiro, A., 2006, A Review of Vibration-Based Structural Health Monitoring with Special Emphasis on Composite Materials, Shock and Vibration Digest, 38(4), pp. 295-326. Sohn, H., 2004, A Review of Structural Health Monitoring Literature: 1996-2001, Technical report, Los Alamos National Laboratory. Salawu, O., 1997, Detection of structural damage through changes in frequency: a review, Engineering Structures, 19(9), pp. 718-723. Philips Adewuyi, A., Wu, Z., Kammrujaman Serker, N., 2009, Assessment of Vibration-Based Damage Identification Methods using Displacement and Distributed Strain Measurements, Structural Health Monitoring, 8(6), pp. 443-461. Montalvão, D., Ribeiro, A., Duarte-Silva, J., 2009, A Method for the Localization of Damage in a CFRP Plate Using Damping, Mechanical Systems and Signal Processing, 23(6), pp. 1846-1854. Chattopadhyay, A., Kim, H., Ghoshal, A., 2004, Non-linear vibration analysis of smart composite structures with discrete delamination using a refined layerwise theory, Journal of Sound and Vibration, 273(1-2), pp. 387-407. Farrar, C.R., Worden, K., Todd, M.D., Park, G., Nichols, J., Adams, D.E., Bement, M.T., Farinholt, K., 2007, Nonlinear System Identification for Damage Detection, Technical report, LA-14353, Los Alamos National Laboratory. Prime, M.B., Shevitz, D.W., 1996, Linear and Nonlinear Methods for Detecting Cracks in Beams, 14th International Modal Analysis Conference, IMAC XIV, Dearborn Della, C.N., Shu, D., 2007, Vibration of delaminated composite laminates: A review, Applied Mechanics Reviews, 60(1), pp. 1-20. Lestari, W., Hanagud, S., 1999, Health monitoring of structures: multiple delamination dynamics in composite beams, Proceedings of the 40th AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference and adaptive structures forum, St. Louis, MO. Shen, M.H., Grady, J., 1992, Free vibrations of delaminated beams, AIAA journal, 30(5), pp. 1361-1370. Lu, X., Lestari, W., Hanagud, S., 2001, Nonlinear vibrations of a delaminated beam, Journal of Vibration and Control, 7(6), pp. 803-831. Ostachowicz W., Zak A., 2004, Vibration of a laminated beam with a delamination including contact effects, Shock and Vibration, 11(3), pp. 157-171. Ostachowicz, W., Dulieu-Barton, J., Holford, K., Krawczuk, M., Zak, A., 2005, Key Engineering Materials - Damage Assessment of Structures VI, Key Engineering Materials - Damage Assessment of Structures VI, 293 - 294(607), pp. 607-616. Müller, I., 2007, Clapping in delaminated sandwich-beams due to forced oscillations, Computational Mechanics, 39(2), pp. 113-126. Saravanos, D., Hopkins, D., 1996, Effects of delaminations on the damped dynamic characteristics of composite laminates: analysis and experiments, Journal of Sound and Vibration, 192(5), pp. 977-993. Jenal, R., Staszewski, W., Klepka, A., Uhl, T., 2010, Structural Damage Detection Using Laser Vibrometers, 2nd International Symposium on NDT in Aerospace 2010, Hamburg, Germany. Campbell, F., 2010, Structural Composite Materials, ASM International. Figueiras, J., 1983, Ultimate load analysis of anisotropic and reinforced concrete plates and shells, Ph.D. thesis, University College of Swansea. Nguyen-Thanh, N., Rabczuk, T., Nguyen-Xuan, H., Bordas, S.P., 2008, A smoothed finite element method for shell analysis, Computer Methods in Applied Mechanics and Engineering, 198(2), pp. 165-177. Klepka, A., 2013, Advanced Structural Damage Detection: From Theory to Engineering Applications, John Wiley & Sons, Ltd, chapter 4. Nonlinear Acoustics, pp. 73-107. Moussatov, A., Gusev, V., Castagnède, B., 2003, Self-Induced Hysteresis for Nonlinear Acoustic Waves in Cracked Material, Phys. Rev. Lett., 90. Friswell, M.I., Penny, J.E., 2002, Crack modeling for structural health monitoring, Structural Health Monitoring, 1(2), pp. 139-148. Courtney, C., Neild, S., Wilcox, P., Drinkwater, B., 2010, Application of the bispectrum for detection of small nonlinearities excited sinusoidally, Journal of Sound and Vibration, 329(20), pp. Klepka, A., Straczkiewicz, M., Pieczonka, L., Staszewski, W.J., Gelman, L., Aymerich, F., Uhl, T., 2015, Triple correlation for detection of damage related nonlinearities in composite structures, Nonlinear Dynamics, 81(1), pp. pp 453-468. Hughes, T.J., 1983, Analysis of transient algorithms with particular reference to stability behavior, in Belytschko, T., Hughes, T.J., (Eds.), Computational Methods for Transient Analysis, Elsevier Science Publishers, pp. 67-155. Heinzel, G., Rüdiger, A., Schilling, R., 2002, Spectrum and spectral density estimation by the Discrete Fourier transform (DFT), including a comprehensive list of window functions and some new at-top windows, Technical report, Max-Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut). • There are currently no refbacks. ISSN: 0354-2025 (Print) ISSN: 2335-0164 (Online) COBISS.SR-ID 98732551 ZDB-ID: 2766459-4
{"url":"https://casopisi.junis.ni.ac.rs/index.php/FUMechEng/article/view/1689","timestamp":"2024-11-04T15:23:36Z","content_type":"application/xhtml+xml","content_length":"27353","record_id":"<urn:uuid:b4beccd4-1081-4d74-83de-6d0f70cf1092>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00784.warc.gz"}
What is an example of a orbital probability patterns practice problem? | Socratic What is an example of a orbital probability patterns practice problem? 1 Answer It's a bit of a difficult subject, but there are indeed some practical and not overly hard questions one could ask. Suppose you have the radial density distribution (may also be known as "orbital probability pattern") of the $1 s$, $2 s$, and $3 s$ orbitals: where ${a}_{0}$ (apparently labeled $a$ in the diagram) is the Bohr radius, $5.29177 \times {10}^{-} 11 m$. That just means the x-axis is in units of "Bohr radii", so at $5 {a}_{0}$, you are at $2.645885 \times {10}^{-} 10 m$. It's just more convenient to write it as $5 {a}_{0}$ sometimes. The y-axis, very loosely-speaking, is the probability of finding an electron at a particular radial (outward in all directions) distance away from the center of the orbital, and it's called the probability density. So one could ask some of the following questions: • At what distances away from the center of each orbital should you expect to never find an electron? • Why does the graph of the $3 s$ orbital taper off farthest away from the center of the orbital, in comparison to the $1 s$ orbital, which tapers off closest to the center of the orbital (don't overthink it)? Challenge question: • Sketch an approximate probability distribution for each orbital listed above, knowing that a higher value on the y-axis indicates a darker shading for the orbital and vice versa, that $r$ indicates some distance outwards in all directions, and that $s$ orbitals are spheres. It doesn't have to be super detailed; literally, draw dots. (A probability distribution for an orbital is a distribution of points that indicate locations in the orbital where you can find an electron most often, least often, and anywhere in between.) If you want to know the answer to the challenge question after you've tried it, here it is. Impact of this question 4676 views around the world
{"url":"https://socratic.org/questions/what-is-an-example-of-a-orbital-probability-patterns-practice-problem","timestamp":"2024-11-08T08:00:34Z","content_type":"text/html","content_length":"37557","record_id":"<urn:uuid:90310517-7584-46c3-a0c0-edc90669f142>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00375.warc.gz"}
P-P plot of Two-dimensional Copulas ppCplot {usefr} R Documentation P-P plot of Two-dimensional Copulas The function build the P-P plot of Two-dimensional Copulas upon the knowledge of the margin distribution provided by the user. The empirical probabilities are computed using function empCopula from package [copula-package]{copula}. ppCplot(X, Y, copula = NULL, margins = NULL, paramMargins = NULL, npoints = 100, method = "ml", smoothing = c("none", "beta", "checkerboard"), ties.method = "max", xlab = "Empirical probabilities", ylab = "Theoretical probabilities", glwd = 1.2, bgcol = "grey94", gcol = "white", dcol = "red", dlwd = 0.8, tck = NA, tcl = -0.3, xlwd = 0.8, ylwd = 0.8, xcol = "black", ycol = "black", cex.xtitle = 1.3, cex.ytitle = 1.3, padj = -1, hadj = 0.7, xcex = 1.3, ycex = 1.3, xline = 1.6, yline = 2.1, xfont = 3, yfont = 3, family = "serif", lty = 1, bty = "n", col = "black", xlim = c(0, 1), ylim = c(0, 1), pch = 20, las = 1, mar = c(4, 4, 2, 1), font = 3, cex = 1, seed = 132, ...) X Numerical vector with the observations from the first margin distribution. Y Numerical vector with the observations from the second margin distribution. copula A copula object from class Mvdc or string specifying all the name for a copula from package copula-package. margins A character vector specifying all the parametric marginal distributions. See details below. paramMargins A list whose each component is a list (or numeric vectors) of named components, giving the parameter values of the marginal distributions. See details below. npoints Number of points used to build the P-P plot. The method A character string specifying the estimation method to be used to estimate the dependence parameter(s); see fitCopula. smoothing character string specifying whether the empirical distribution function (for F.n()) or copula (for C.n()) is computed (if smoothing = "none"), or whether the empirical beta copula (smoothing = "beta") or the empirical checkerboard copula (smoothing = "checkerboard") is computed (see empCopula. ties.method character string specifying how ranks should be computed if there are ties in any of the coordinate samples of x; passed to pobs (see empCopula. xlab A label for the x axis, defaults to a description of x. ylab A label for the y axis, defaults to a description of y. glwd Grid line width. bgcol Grid background color. gcol Grid line color dcol Diagonal line color. dlwd Diagonal line color. tck The length of tick marks as a fraction of the smaller of the width or height of the plotting region. If tck >= 0.5 it is interpreted as a fraction of the relevant side, so if tck = 1 grid lines are drawn. The default setting (tck = NA) is to use tcl = -0.5. tcl The length of tick marks as a fraction of the height of a line of text. The default value is -0.5; setting tcl = NA sets tck = -0.01 which is S' default. xlwd X-axis line width. ylwd Y-axis line width. xcol X-axis line color. ycol Y-axis line color. cex.xtitle Cex for x-axis title. cex.ytitle Cex for y-axis title. padj adjustment for each tick label perpendicular to the reading direction. For labels parallel to the axes, padj = 0 means right or top alignment, and padj = 1 means left or bottom alignment. This can be a vector given a value for each string, and will be recycled as necessary. hadj adjustment (see par("adj")) for all labels parallel (‘horizontal’) to the reading direction. If this is not a finite value, the default is used (centring for strings parallel to the axis, justification of the end nearest the axis otherwise). xcex, ycex A numerical value giving the amount by which axis labels should be magnified relative to the default. xline, yline On which margin line of the plot the x & y labels must be placed, starting at 0 counting outwards (see mtext). xfont, yfont An integer which specifies which font to use for x & y axes titles (see par). family, lty, bty, col, xlim, Graphical parameters (see par). ylim, pch, las, mar, font cex A numerical value giving the amount by which plotting text and symbols should be magnified relative to the default. This starts as 1 when a device is opened, and is reset when the layout is changed, e.g. by setting mfrow. seed An integer used to set a 'seed' for random number generation. ... Other graphical parameters to pass to functions: abline, mtext and axis. Empirical and theoretical probabilities are estimated using the quantiles generated with the margin quantile functions. Nonlinear fit of margin distributions can be previously accomplished using any of the functions fitCDF, fitdistr, or function fitMixDist for the case where the margins are mixture of distributions. npoints random uniform and iid numbers from the interval [0, 1] are generated and used to evaluate the quantile margin distribution functions. Next, the quantiles are used to compute the empirical and theoretical copulas, which will be used to estimate the corresponding The P-P plot and invisible temporary object with the information to build the graphic which can be assigned to a variable to use in further plots or analyses. Robersy Sanchez (https://genomaths.com). See Also fitCDF, fitdistr, fitMixDist, and bicopulaGOF. margins = c("norm", "norm") ## Random variates from normal distributions X <- rlnorm(200, meanlog =- 0.5, sdlog = 3.1) Y <- rnorm(200, mean = 0, sd = 6) cor(X,Y) ## Correlation between X and Y parMargins = list( list(meanlog = 0.5, sdlog = 3.1), list(mean = 0, sd = 10)) copula = "normalCopula" npoints = 100 ## The information to build the graphic is stored in object 'g'. g <- ppCplot(X = X, Y = Y, copula = "normalCopula", margins = margins, paramMargins = parMargins, npoints = 20) version 0.1.0 ]
{"url":"https://genomaths.github.io/usefr_manual/ppCplot.html","timestamp":"2024-11-02T05:18:37Z","content_type":"application/xhtml+xml","content_length":"9942","record_id":"<urn:uuid:eb062fc8-0c70-422f-b8ea-802e3b3b1a45>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00826.warc.gz"}
Canonical Forms The Fixed-Point Designer™ software does not attempt to standardize on one particular fixed-point digital filter design method. For example, you can produce a design in continuous time and then obtain an “equivalent” discrete-time digital filter using one of many transformation methods. Alternatively, you can design digital filters directly in discrete time. After you obtain a digital filter, it can be realized for fixed-point hardware using any number of canonical forms. Typical canonical forms are the direct form, series form, and parallel form, each of which is outlined in the sections that follow. For a given digital filter, the canonical forms describe a set of fundamental operations for the processor. Because there are an infinite number of ways to realize a given digital filter, you must make the best realization on a per-system basis. The canonical forms presented in this chapter optimize the implementation with respect to some factor, such as minimum number of delay elements. In general, when choosing a realization method, you must take these factors into consideration: • Cost The cost of the realization might rely on minimal code and data size. • Timing constraints Real-time systems must complete their compute cycle within a fixed amount of time. Some realizations might yield faster execution speed on different processors. • Output signal quality The limited range and precision of the binary words used to represent real-world numbers will introduce errors. Some realizations are more sensitive to these errors than others. The Fixed-Point Designer software allows you to evaluate various digital filter realization methods in a simulation environment. Following the development cycle outlined in Developing and Testing Fixed-Point Systems, you can fine-tune the realizations with the goal of reducing the cost (code and data size) or increasing signal quality. After you have achieved the desired performance, you can use the Simulink^® Coder™ product to generate rapid prototyping C code and evaluate its performance with respect to your system's real-time timing constraints. You can then modify the model based upon feedback from the rapid prototyping system. The presentation of the various realization structures takes into account that a summing junction is a fundamental operator, thus you may find that the structures presented here look different from those in the fixed-point filter design literature. For each realization form, an example is provided using the transfer function shown here: $\begin{array}{c}{H}_{ex}\left(z\right)=\frac{1+2.2{z}^{-1}+1.85{z}^{-2}+0.5{z}^{-3}}{1-0.5{z}^{-1}+0.84{z}^{-2}+0.09{z}^{-3}}\\ =\frac{\left(1+0.5{z}^{-1}\right)\left(1+1.7{z}^{-1}+{z}^{-2}\right)} {\left(1+0.1{z}^{-1}\right)\left(1-0.6{z}^{-1}+0.9{z}^{-2}\right)}\\ =5.5556-\frac{3.4639}{1+0.1{z}^{-1}}+\frac{-1.0916+3.0086{z}^{-1}}{1-0.6{z}^{-1}+0.9{z}^{-2}}.\end{array}$
{"url":"https://kr.mathworks.com/help/fixedpoint/ug/canonical-forms.html","timestamp":"2024-11-07T20:09:27Z","content_type":"text/html","content_length":"70843","record_id":"<urn:uuid:fd0de526-c071-407a-a709-572738d6cbb5>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00342.warc.gz"}
cal_length: Recursive function to calculate the length of branches in dendsort: Modular Leaf Ordering Methods for Dendrogram Nodes cal_length is a code modified from plotNode() to calculate the length of lines to draw the branch of a dendrogram. This function was developed to evaluate the use of ink for visualization. 1 cal_length(x1, x2, subtree, center, nodePar, edgePar, horiz = FALSE, sum) cal_length(x1, x2, subtree, center, nodePar, edgePar, horiz = FALSE, sum) x1 A x coordinatex1 x2 Another x coordinatex2 subtree A dendrogram object.subtree center A logical whether the dendrogram is centered.center nodePar A node parameter.nodePar edgePar An edge parameter.edgePar horiz A logical about layout.horiz sum A sum of length.sum 1 #generate sample data 2 set.seed(1234); par(mar=c(0,0,0,0)) 3 x <- rnorm(10, mean=rep(1:5, each=2), sd=0.4) 4 y <- rnorm(10, mean=rep(c(1,2), each=5), sd=0.4) 5 dataFrame <- data.frame(x=x, y=y, row.names=c(1:10)) 6 #calculate Euclidian distance 7 distxy <- dist(dataFrame) 8 #hierachical clustering "complete" linkage by default 9 hc <- hclust(distxy) 11 total_length <- cal_total_length(as.dendrogram(hc)) #generate sample data set.seed(1234); par(mar=c(0,0,0,0)) x <- rnorm(10, mean=rep(1:5, each=2), sd=0.4) y <- rnorm(10, mean=rep(c(1,2), each=5), sd=0.4) dataFrame <- data.frame(x=x, y=y, row.names=c (1:10)) #calculate Euclidian distance distxy <- dist(dataFrame) #hierachical clustering "complete" linkage by default hc <- hclust(distxy) total_length <- cal_total_length(as.dendrogram(hc)) For more information on customizing the embed code, read Embedding Snippets.
{"url":"https://rdrr.io/cran/dendsort/man/cal_length.html","timestamp":"2024-11-15T02:59:00Z","content_type":"text/html","content_length":"31987","record_id":"<urn:uuid:479304cb-9631-49b6-8f13-4a1cf85a1823>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00105.warc.gz"}
Deep Blue Analysis Ocean Forecast – Tides The sea level tidal height variations and transports are generated with the OSU TXPO Tide Models. TXPO is a series of fully global models of ocean tides, which best fits, in a least squares sense, the Laplace Tidal Equations and satellite altimetry data. The OSU Topex Poseidon Tidal Inverse Model (volkov.oce.orst.edu/tides/global.html) is a global model of ocean tides which best-fits, in a least-squares sense, the Laplace Tidal Equations and along track averaged data from TOPEX/Poseidon and Jason (on TOPEX/POSEIDON tracks since 2002). The methods used to compute the model are described in detail by Egbert, Bennett, and Foreman, 1994 and further by Egbert and Erofeeva, 2002. The tides are provided as complex amplitudes of earth-relative sea-surface elevation for eight primary (M2, S2, N2, K2, K1, O1, P1, Q1), two long period (Mf, Mm) and 3 non-linear (M4, MS4, MN4) harmonic constituents, on a 1440×721, 1/4 degree resolution full global grid. It is a simple matter to adjust the tides to Lowest Astronomical Tide (LAT). It is a two-step process where the earth loading tides are first computed and then added to the sea surface elevations. There were no active tide stations in Morocco for a MBES project; the closest station was Las Palmas, Canary Islands. How well would the OSU tidal model predict the tidal heights. The last available data from the Las Palmas station is for 2013. The comparison between measured tides at Las Palmas, Canary Islands (28.1333N, 15.4167W) and the OSU predicted tidal heights for a representative one-week period are shown in the above figure. The histogram of the bias between measured and predicted tidal height for all of 2013 is shown in the figure to the left The mean bias is -1.0 cm, i.e., the predicted tides tend to be slightly higher than the measured, and 95% of the bias’ magnitude is less than 15 cm. The OSU tidal model is extremely accurate. Comparisons to other locations where tidal measuring station data are available show the same excellent comparison. ©Deep Blue Analysis LLC, 2020 - 2024
{"url":"https://deepblueanalysis.com/seastate-ocean_tides/","timestamp":"2024-11-11T22:27:30Z","content_type":"text/html","content_length":"103871","record_id":"<urn:uuid:7fde948b-83c2-4d53-8119-7fff1196ca21>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00092.warc.gz"}
Elastic and Inelastic Collisions Understanding conservation of energy using collision of elastic and inelastic bodies Consider the impact between two bodies which move with different velocities along the same straight line. It is assumed that the point of impact lies on the line joining the centers of gravity of the two bodies. The behavior of these colliding bodies during the complete period of impact will depend upon the properties of the materials of which they are made. The material of the two bodies may be perfectly elastic or perfectly inelastic. The bodies which rebound after impact are called elastic bodies and the bodies which does not rebound at all after its impact are called inelastic bodies. The impact between two lead spheres or two clay spheres is approximately an inelastic impact. The loss of kinetic energy (EL) during impact of inelastic bodies is given by m1 = Mass of the first body, m2 = Mass of the second body, u and u2 = Velocities of the first and second bodies respectively. The loss of kinetic energy (EL) during impact of elastic bodies is given by e = Coefficient of restitution. 1. The relative velocity of two bodies after impact is always less than the relative velocity before impact. 2. The value of e=0, for perfectly inelastic bodies and e=1, for perfectly elastic bodies. In case the bodies are neither perfectly inelastic nor perfectly elastic, then the value of e lies between zero and one. No comments:
{"url":"https://www.aboutmech.com/2013/11/elastic-and-inelastic-collisions.html","timestamp":"2024-11-08T22:08:33Z","content_type":"application/xhtml+xml","content_length":"194143","record_id":"<urn:uuid:d68ee67c-7029-4a7a-b307-3831bfbf6f6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00764.warc.gz"}
Groundhog Years to Human Years - Phil the Groundhog Age Calculator This groundhog years to human years calculator is a unique tool designed to find how old is Phil the Groundhog in human years?. This calculator is based on the concept of comparing the lifespan of groundhogs to that of humans, providing an interesting perspective on how these small mammals age relative to us. Groundhog Years to Human Years Groundhog Years Human Years 1. Each groundhog year is roughly equivalent to 9 human years, though this varies slightly due to rounding. 2. The maximum lifespan of a groundhog (9 years) equates to the assumed average human lifespan of 80 years. 3. The middle of a groundhog’s life (around 4-5 years) corresponds to human middle age (36-44 years). More Calculators: – Wolf Years to Human Years – Squirrel Years to Human Years Groundhog Age in Human Years Calculation Formula Human Years = round(Groundhog Years * 8.89) The formula for converting groundhog years to human years is based on a comparison of lifespans. While the exact lifespan of a groundhog can vary, the calculation typically uses the following • Maximum lifespan of a groundhog: 9 years • Average human lifespan: 80 years Using these figures, we can derive a conversion factor: Conversion Factor = Human Lifespan / Groundhog Lifespan = 80 / 9 ≈ 8.89 The formula to convert groundhog years to human years is then: Human Years = Groundhog Years * Conversion Factor For example: • A 1-year-old groundhog would be equivalent to about 9 human years • A 5-year-old groundhog would be equivalent to about 44 human years • A 9-year-old groundhog (at maximum lifespan) would be equivalent to 80 human years
{"url":"https://ctrlcalculator.com/biology/groundhog-years-to-human-years/","timestamp":"2024-11-04T21:05:36Z","content_type":"text/html","content_length":"100034","record_id":"<urn:uuid:0be64d5b-b353-4d33-94d1-ac3b9c55f529>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00055.warc.gz"}
Gauss' law (electrostatics)/Related Articles Main Article Discussion Related Articles ^[?] Bibliography ^[?] External Links ^[?] Citable Version ^[?] A list of Citizendium articles, and planned articles, about Gauss' law (electrostatics). See also changes related to Gauss' law (electrostatics), or pages that link to Gauss' law (electrostatics) or to this page or whose text contains "Gauss' law (electrostatics)". Parent topics Other related topics Bot-suggested topics Auto-populated based on Special:WhatLinksHere/Gauss' law (electrostatics). Needs checking by a human. • Carl Friedrich Gauss [r]: German mathematician, who was one of the most influential figures in the history of mathematics and mathematical physics (1777 – 1855). ^[e] • Displacement current [r]: Time derivative of the electric displacement D; Maxwell's correction to Ampère's law. ^[e] • Divergence theorem [r]: A theorem relating the flux of a vector field through a surface to the vector field inside the surface. ^[e] • Electric field [r]: force acting on an electric charge—a vector field. ^[e] • Electromagnetism [r]: Phenomena and theories regarding electricity and magnetism. ^[e] • Gauss' Law (disambiguation) [r]: Add brief definition or description • Gauss' law (magnetism) [r]: States that the total magnetic flux through a closed surface is zero; this means that magnetic monopoles do not exist. ^[e] • James Clerk Maxwell [r]: (1831 – 1879) Scottish physicist best known for his formulation of electromagnetic theory and the statistical theory of gases. ^[e] • Maxwell equations [r]: Mathematical equations describing the interrelationship between electric and magnetic fields; dependence of the fields on electric charge- and current- densities. ^[e] Articles related by keyphrases (Bot populated) • Gauss (disambiguation) [r]: Add brief definition or description • Carl Friedrich Gauss [r]: German mathematician, who was one of the most influential figures in the history of mathematics and mathematical physics (1777 – 1855). ^[e] • Polar coordinates [r]: Two numbers—a distance and an angle—that specify the position of a point on a plane. ^[e] • Multipole expansion (interaction) [r]: A mathematical series representing a function that depends on angles, and frequently used in the study of electromagnetic, and gravitational fields, where the fields at distant points are given in terms of sources in a small region. ^[e] • Hill sphere [r]: Add brief definition or description • Gaussian units [r]: A centimeter-gram-second system of units often used in electrodynamics and special relativity. ^[e]
{"url":"https://en.citizendium.org/wiki/Gauss%27_law_(electrostatics)/Related_Articles","timestamp":"2024-11-11T12:01:55Z","content_type":"text/html","content_length":"44580","record_id":"<urn:uuid:297568bb-89b1-4e3c-95b8-15c548c95736>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00133.warc.gz"}
The ZENO Temporal Planner The ZENO Temporal Planner We have built a least commitment planner, ZENO, that handles actions occuring over extended intervals of time and whose preconditions and effects can be temporally quantified. These capabilities enable ZENO to reason about deadline goals, piecewise-linear continuous change, external events and, to a limited extent, simultaneous actions. In particular, actions are allowed to overlap in time only when their effects do not interfere. While other planners exist with some of these features, ZENO is different because it is both sound and complete. Since ZENO relies on constraint satisfaction for all temporal and metric aspects reasoning, sound and efficient algorithms are essential. Specialized routines cooperate to handle the different types of constraints: codesignations, linear equalities, linear inequalities, and nonlinear equations. Codesignations are handled as they were in the UCPOP planner. A simple algorithm maintains equivalence classes of all logical variables, then determines whether the noncodesignations are inconsistent with the classification. Mathematical formulae posted by ZENO are parsed dynamically into a set of linear equations, inequalities, and pairwise nonlinear equations. These canonical forms are identical to the matrix representation of equations used in linear algebra and operations research. Linear equations are solved by Gaussian elimination, linear inequalities by the Simplex algorithm, and nonlinear equations are delayed until they become linear via the solution of other equations and inequalities. To ensure sound constraint handling, each equality that is derived by one algorithm is passed to all other algorithms. Determining an inconsistency using Gaussian elimination is straightforward; if a constraint c=0 is detected during elimination, where c is non-zero constant, then the equations are inconsistent. Finding inconsistencies in linear inequalities is a bit trickier. ZENO uses the incremental version of the phase one Simplex algorithm, devised for the CLP(R) programming language. To expedite temporal queries, ZENO caches temporal relations with Warshall's transitive closure algorithm. For further information, see J.S. Penberthy, Planning with Continuous Change. PhD thesis, University of Washington, 1993 (available as technical report UW-CSE-93-12-01).
{"url":"http://aiweb.cs.washington.edu/ai/zeno.html","timestamp":"2024-11-07T23:11:59Z","content_type":"text/html","content_length":"7023","record_id":"<urn:uuid:97b9ce10-ce9d-40ad-90c8-1d37814f9965>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00326.warc.gz"}
7 Decameter to Centimeter Calculator | Calculator Bit 7 Decameter to Centimeter Calculator 7 Decameter = 7000 Centimeter (cm) Rounded: Nearest 4 digits 7 Decameter is 7000 Centimeter (cm) 7 Decameter is 70 m How to Convert Decameter to Centimeter (Explanation) • 1 decameter = 1000 cm (Nearest 4 digits) • 1 centimeter = 0.001 dam (Nearest 4 digits) There are 1000 Centimeter in 1 Decameter. To convert Decameter to Centimeter all you need to do is multiple the Decameter with 1000. In formula distance is denoted with d The distance d in Centimeter (cm) is equal to 1000 times the distance in decameter (dam): L [(cm)] = L [(dam)] × 1000 Formula for 7 Decameter (dam) to Centimeter (cm) conversion: L [(cm)] = 7 dam × 1000 => 7000 cm How many Centimeter in a Decameter One Decameter is equal to 1000 Centimeter 1 dam = 1 dam × 1000 => 1000 cm How many Decameter in a Centimeter One Centimeter is equal to 0.001 Decameter 1 cm = 1 cm / 1000 => 0.001 dam The decameter (symbol: dam) is unit of length in the International System of Units (SI), equal to 10 meters. Decameter is less frequently used measure unit amongs peoples, decameter is used in volumetric form like cubic decameter(dam^3) that is equal to 1 megalitre(ML). The centimeter (symbol: cm) is a unit of length in the International System of Units (SI), equal to 1/100 meters. Centimeres is used to measure and show rainfall and snowfalls. Cite, Link, or Reference This Page If you found information page helpful you can cite and reference this page in your work. • <a href="https://www.calculatorbit.com/en/length/7-decameter-to-centimeter">7 Decameter to Centimeter Conversion</a> • "7 Decameter to Centimeter Conversion". www.calculatorbit.com. Accessed on November 9 2024. https://www.calculatorbit.com/en/length/7-decameter-to-centimeter. • "7 Decameter to Centimeter Conversion". www.calculatorbit.com, https://www.calculatorbit.com/en/length/7-decameter-to-centimeter. Accessed 9 November 2024. • 7 Decameter to Centimeter Conversion. www.calculatorbit.com. Retrieved from https://www.calculatorbit.com/en/length/7-decameter-to-centimeter. Decameter to Centimeter Calculations Table Now by following above explained formulas we can prepare a Decameter to Centimeter Chart. Decameter (dam) Centimeter (cm) Nearest 4 digits Convert from Decameter to other units Here are some quick links to convert 7 Decameter to other length units. Convert to Decameter from other units Here are some quick links to convert other length units to Decameter. More Decameter to Centimeter Calculations More Centimeter to Decameter Calculations FAQs About Decameter and Centimeter Converting from one Decameter to Centimeter or Centimeter to Decameter sometimes gets confusing. Here are some Frequently asked questions answered for you. Is 1000 Centimeter in 1 Decameter? Yes, 1 Decameter have 1000 (Nearest 4 digits) Centimeter. What is the symbol for Decameter and Centimeter? Symbol for Decameter is dam and symbol for Centimeter is cm. How many Decameter makes 1 Centimeter? 0.001 Decameter is euqal to 1 Centimeter. How many Centimeter in 7 Decameter? Decameter have 7000 Centimeter. How many Centimeter in a Decameter? Decameter have 1000 (Nearest 4 digits) Centimeter.
{"url":"https://www.calculatorbit.com/en/length/7-decameter-to-centimeter","timestamp":"2024-11-09T23:48:53Z","content_type":"text/html","content_length":"52879","record_id":"<urn:uuid:c48dd666-3cab-481e-87c2-3a32c5bb685e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00422.warc.gz"}
Cylindrical Capacitor Calculator: A Tool for Capacitance Estimation Home » Simplify your calculations with ease. » Electrical » Cylindrical Capacitor Calculator: A Tool for Capacitance Estimation Understanding the capacitance of a cylindrical capacitor is crucial in many areas of electrical engineering and physics. Our Cylindrical Capacitor Calculator helps make these calculations straightforward and easy to understand. Understanding Cylindrical Capacitor A cylindrical capacitor consists of two concentric cylindrical conducting shells separated by a dielectric medium. The capacitance of such a capacitor depends on the properties of the dielectric material and the dimensions of the cylinders. The capacitance C is given by the formula: C = 2 * π * ε_0 * ε_r * h / ln(b/a) • C is the capacitance in Farads. • π is Pi (approximately 3.14159). • ε_0 is the permittivity of free space (8.8541878128 × 10^-12 Farads per meter). • ε_r is the relative permittivity (dielectric constant) of the material. • h is the height of the capacitor in meters. • a is the inner radius of the capacitor in meters. • b is the outer radius of the capacitor in meters. • ln denotes the natural logarithm. How to Use the Cylindrical Capacitor Calculator Using the calculator is simple. Enter the relative permittivity (dielectric constant), height of the capacitor, inner radius of the capacitor, and outer radius of the capacitor. The calculator then provides the estimated capacitance in Farads. Example Calculation For example, consider a cylindrical capacitor with a relative permittivity of 2.2, height of 0.1 m, an inner radius of 0.01 m, and an outer radius of 0.02 m. Entering these values into the calculator: C = 2 * π * 8.8541878128 × 10^-12 * 2.2 * 0.1 / ln(0.02/0.01) The calculator estimates the capacitance to be approximately 1.53973 × 10^-11 Farads. In Conclusion The Cylindrical Capacitor Calculator simplifies the process of calculating capacitance. However, always remember that actual capacitance can vary due to several factors in real-world scenarios. For critical applications, it’s recommended to consult with a professional. Enjoy exploring the fascinating world of electromagnetism! Leave a Comment
{"url":"https://calculatorshub.net/electrical/cylindrical-capacitor-calculator/","timestamp":"2024-11-09T16:05:13Z","content_type":"text/html","content_length":"110351","record_id":"<urn:uuid:f282de51-0549-4fdf-b1cd-41076a0d39f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00020.warc.gz"}
Estimated Recovery Value (ERV) Calculator Author: Neo Huang Review By: Nancy Deng LAST UPDATED: 2024-10-02 19:50:18 TOTAL USAGE: 16652 TAG: Business Finance Valuation Unit Converter ▲ Unit Converter ▼ Powered by @Calculator Ultra Find More Calculator☟ The Estimated Recovery Value (ERV) is a crucial financial metric, especially in scenarios involving asset liquidation or bankruptcy. It represents the potential recovery amount from the sale or liquidation of an asset. This figure is paramount for companies in distress, their creditors, and investors, as it helps in estimating the recoverable value of assets during liquidation events. Historical Background ERV comes into play in financial analysis and asset management, particularly within the context of bankruptcy proceedings or when a company needs to liquidate its assets. It's a practical measure that aids in assessing the financial health and recovery prospects of distressed assets. Calculation Formula The formula to calculate ERV is given by: \[ ERV = \frac{RR}{100} \times BV \] • \(ERV\) is the estimated recovery value, • \(RR\) is the recovery rate (in percent), • \(BV\) is the book value of the asset (in dollars). Example Calculation To understand how ERV is calculated, consider an asset with a book value of $40,000 and a recovery rate of 85%. The ERV would be calculated as follows: \[ ERV = \frac{85}{100} \times 40,000 = 34,000 \] Therefore, the estimated recovery value of the asset would be $34,000. Importance and Usage Scenarios ERV is particularly significant in the context of financial distress or bankruptcy. It allows creditors and investors to estimate the amount that can be recovered from the sale of the assets. This estimation is vital for making informed decisions regarding loans, investments, and recovery strategies during liquidation. Common FAQs 1. What does the recovery rate represent? □ The recovery rate represents the percentage of the asset's book value that is expected to be recovered through liquidation or sale. 2. How can ERV impact financial decisions? □ ERV impacts financial decisions by providing a quantitative basis for evaluating the potential return from liquidating assets, influencing loan agreements, investment considerations, and recovery strategies. 3. Is ERV only relevant for companies in bankruptcy? □ While ERV is particularly relevant for companies in bankruptcy or liquidation, it can also be a useful metric in healthy companies for assessing the liquidation value of non-core or underperforming assets. Calculating ERV is a crucial step in the financial assessment process, helping stakeholders understand the potential recovery value of assets in distress.
{"url":"https://www.calculatorultra.com/en/tool/estimated-recovery-value-calculator.html","timestamp":"2024-11-03T03:09:07Z","content_type":"text/html","content_length":"47825","record_id":"<urn:uuid:9bc796f4-c082-4c7f-8ba3-6eb2c620b505>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00446.warc.gz"}
Wentzel-Kramers-Brillouin method in the Bargmann representation We demonstrate that the Bargmann representation of quantum mechanics is ideally suited for semiclassical analysis, using as an example the WKB method applied to the bound-state problem in a single well of one degree of freedom. While the WKB expansion formulas are basically the usual ones, in this representation they describe approximations that are uniform and nonsingular in the classically allowed region of phase space because no turning points appear there. The quantization of energy levels relies on a complex contour integral that tests the eigenfunction for analyticity. For the harmonic oscillator, this WKB method trivially gives the exact eigenfunctions in addition to the exact eigenvalues. For an anharmonic well, a self-consistent variational choice of the representation greatly improves the accuracy of the semiclassical ground state. Also, a simple change of scale illuminates the relationship of semiclassical versus linear perturbative expansions, allowing a variety of multidimensional extensions. All in all, the Bargmann representation appears to combine the advantages of a linear description and of a phase-space representation of the quantum state vectors. Physical Review A Pub Date: December 1989 □ Harmonic Oscillators; □ Quantum Mechanics; □ Wentzel-Kramer-Brillouin Method; □ Degrees Of Freedom; □ Eigenvalues; □ Hamiltonian Functions; □ Rayleigh-Ritz Method; □ Schroedinger Equation; □ Physics (General); □ 03.65.Sq; □ Semiclassical theories and applications
{"url":"https://ui.adsabs.harvard.edu/abs/1989PhRvA..40.6814V/abstract","timestamp":"2024-11-10T20:23:37Z","content_type":"text/html","content_length":"38663","record_id":"<urn:uuid:54366ed6-aece-40fc-b340-bb123063db89>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00118.warc.gz"}
What do you want to work on? About 46822 Algebra, Elementary (3-6) Math, Geometry, Algebra 2, Midlevel (7-8) Math, Statistics, Pre-Calculus Bachelors in Cognitive Science from Wellesley College Masters in Psychology, General from SUNY at Binghamton PHD in Psychology, General from SUNY at Binghamton Career Experience I started teaching statistics while I was earning my doctorate in psycholinguistics (which is a really fun way to say "how people understand language"). After I finished graduate school, I taught statistics, research methods, and psychology courses at the college level for several years. Recently, I moved with my family to a new state, so right now I'm focusing on tutoring and taking classes to learn new techniques for data analysis. I Love Tutoring Because working one-on-one with students can be a great experience and rewarding in a way that being in front of a hundred students in a lecture hall just isn't. Having someone go from "I am confused and frustrated" to "Finally, I get this!" in a session is so much more fun than being the teacher and really only getting to see students learn if they come by office hours needing help. Because of the subjects I tutor, sometimes students come in with really unusual questions -- going through typical sessions covering things like quadratic equations, t-tests, or the Pythagorean theorem is certainly fun, but once in a while a student will have a logic question or a number-theory topic that may not exactly fit with algebra or geometry or statistics, but is awesome to work through. I remember one student who was stuck on turning numbers from base 10 to base 7 -- it was something that I'd never been asked for help with before, and I was so fascinated by the question and the procedure for doing that conversion that after my shift ended, I played with it some more even though I would probably never need it again (and I haven't... yet :) ) FP - Geometry FP - Algebra II Math - Pre-Calculus Amazing! Helped me a lot! Math - Statistics I love the fact that she made me think about the problem before assisting me
{"url":"https://stg-www.princetonreview.com/academic-tutoring/tutor/46822--46832?s=ap%20statistics","timestamp":"2024-11-15T00:43:42Z","content_type":"application/xhtml+xml","content_length":"271388","record_id":"<urn:uuid:cd2736d6-4f32-4e05-bc4e-9ffcb1aaccd1>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00179.warc.gz"}
True Truths about Fibonacci "The DNA SUPRA-CODE" Discovery, proves and evidence of the hidden language of DNA. by Jean-Claude PEREZ, September 2000 Fibonacci, known as Leonard of Pisa, also called "the heretic one of Sicily", is regarded as that which, to the Middle Ages, will give their foundations to our contemporary mathematics. In addition to the introduction of the "zero", one owes him this famous integer numbers series known as of Fibonacci's numbers series : 1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 ... Each term is obtained there by making the sum of the 2 precedents. The ratio of two consecutive terms (exp. 89/55) approach very quickly the famous proportion PHI known as "the Golden section": PHI=1.618033989â ¦ Ancient people considered the Golden section to the same scientific level as PI ... As testify some the enciphered messages from genius Leonardo di VINCI, they knew even subtle relations between these two universal constants (see the cover of the book " the deciphered DNA ", summarized nowâ ¦ ). When the radius of the circle equal exactly ONE, And, When the square is 1.618 by 1.618 length, Then, surface of CIRCLE = PI and surface of SQUARE = PHI*2â ¦ Ratio between Surfaces CIRCLE/SQUARE is PI / PHI*2 = 6/5 = 1.2 6/5 = 1.2 was called by ancient people the "Osiris section"â ¦ The Golden section is discredited today, one even equipped it with a esoteric connotation and sometimes even not very advisable... Always it is that Nature uses this golden section in its constructions : for example the shell of the nautilus or snail ... I demonstrate that this proportion has properties of "mathematical cohesion" which must, doubtless, to result in a cohesion and a solidity in the plan of mechanics. Nature thus knew to discover a subtle marriage between aesthetics and the effectiveness (one speaks even about a possible "principle of economy" on this subject). As he puts it 'one speaks even about a possible "principle of economy" on this subject' and I affirm that it is the same kind of principle law (economic principle in the sense research of optimality and stability of the system from the economical or energetical point of view) that my equations follow and so the existence of Golden ratios in stock market has more to do with that principle that with the supposed psychology of crowd or more appealingly called by Prechter "socio-economics science" which is more likely an esoteric metaphore than a Science since it is not demonstrated and could never be because in details it contains the same kind of paradoxical non-sense than Astrology at least within the framework of science. Suffice that Prechter claims only that Market follows Golden ratio but he shouldn't go further by giving fake causal explanations. murray t turtle • 4,654 Something else I would add;not to minimize the scientific logic,some of which gets blurred in the markets. Look on enough charts,with enough time frames; its better to have too may trend lines, color coded -than too few. • 2 Is there any simple trading method based on these ratios ? • 0 ...have you read any of Larry Pasavento's materials on fibonacci retracements or heard him speak? I've spoken to him on the phone several times and used his service, I find him easy to understand and extremely logical. rttrader - • 5 I admit that the only thing I know about Fibonacci numbers is that it was a numbering scheme devised by an Italian mathematician from the Middle Ages to predict the rate at which rabitts will multiply. Since then, it seems that these Fibonacci numbers have been used not only to explain the workings of the universe, the laws of nature and most of the physical sciences, but also human behavior as it relates to trading. Few "sciences" have crossed all such barriers. I cannot think of any others at the moment. Is it possible that we may have lost some perspective along the way? I think that if you have enough of these so-called Fibonacci lines, levels or retracements, one of them is bound to "work." But what about all of the other ones along the way that were so rudely ignored? I don't know him very well but he seems to have a great observation of market and I am particularly interested in the Gartley patterns and alike which are more complex than the comon flag, wedge etc. (I intend to study these patterns to map with my model). For that I respect his work as well as the work of Prechter and others I just mean that between observations of golden ratios and the research of true causal explanation there is a big gap and I can claim that I don't have the same explanation as Prechter (as for Pasavento I must look more thoroughly). What I mean is that the causal explanation of Prechter is wrong because he says that psychology of the crowd conducts the market whereas I say that it is the contrary and it is not opinion it is because I have a model that has no golden ratio as input whereas at the output I get Golden ratio. To explain I will take a metaphore by using the nim game . The nim game is a 2 players game which consists of several rows of objects the winner is the one that remove the last object. In a special type of nim game the rule applied limitation to the number of objects which can be removed : you cannot remove more than twice the number of objects removed previously. In the rule you don't have any Fibonacci ratio. But if you study the optimal strategy you will see that the optimal sequence should follow Fibonacci ratio. The market is somehow applying a kind of fibonacci nim game strategy against the crowd. So for me it is the market that directs the price (I mean those who organised it or the so called initiates of Dow) and not the crowd. Now in practice what difference does it make ? The difference is that you can better quantify in the first case whereas in the second case you can only rely on fuzzy and complex rules: when you look at Elliott wave rules there are a huge numbers of rules. The same for patterns like the Gartley pattern and other more or less complex ones. Whereas if you know that it has more to do with rationality than with irrationality you can look for something better. I have also some kind of waves I called e-wave not for elliott but for ecometric rather There is also big consequences as for the so called free market. Prechter and others affirm that the market follows these kind of law because the market is free whereas I affirm that it is completely the contrary : the market is not free it is constrained artificially because the kind of order it exhibits cannot occur without entropy reduction and the crowd is not capable to do so. The crowd is irrational, unstable it is the contrary to the entropy reduction necessity and any simulation made by scientists can only lead to mimetic behaviors but none will ever conduct to Fibo ratios and that's logical because it cannot be. That's why I affirm that scientifically this market is manipulated (some traders use an oxymoron by saying that the market is "enginiered" if you prefer that term) - but don't tell everybody this a secret between you and me You're right that's the problem that some Fibo believers have because you can take any high and low and then find a fibo ratio that suits the point. It's kind of curve-fitting then. This because they don't have a reference model so that their view is a kind of tautology. Whereas I have a model of reference which don't use golden ratio at all as input so that I can't do curve fitting. I will show you another day because it needs pictures. So at least my model could serve the Fibo believers that they are not just dreaming In fact I use a more fundamental quantitative model than just fibo ratios. Nevertheless golden ratios are attractors (in the chaos theory vocabulary) in my model and can help the interpretation of the model. My model shows that not only there are fibo ratios in the market but that each ratio represents a recurrent structure that have the same interpretation from day to day and I will illustrate that later. The classical trading method are the one used by the elliottists. Typically one tactic is to put a buying stop above the wave 1 in a bullish trend and my model agree that there is often a break zone very early in a trend so that I called this break zone seed wave like the elliottist. #10 Jun 26, 2003
{"url":"https://www.elitetrader.com/et/threads/true-truths-about-fibonacci.19259/","timestamp":"2024-11-13T01:20:16Z","content_type":"text/html","content_length":"65934","record_id":"<urn:uuid:49212c0d-7ac6-444e-95e5-4123a83012df>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00003.warc.gz"}
Digital and Analog Electronics Course Interrupts are used in microcontrollers to allow it to respond to external events by suspending current processing and switching to the routine to service the interrupt. Below is the top 4 in priority order of the interrupts of the Atmega328 microcontroller used in the Arduino. 1. Reset 2. External Interrupt Request 0 (pin D2) 3. External Interrupt Request 1 (pin D3) 4. Pin Change Interrupt Request 0 (pins D8 to D13) The priority order means that regardless what happens to 3, 4 if 2 External Interrupt Request 0 occurs, it will be executed and the rest of the interrupts will have to wait until 2 is serviced. If there are ONLY 4 interrupts, a 4 to 2 Priority Encoder can be used to implement this function. A 4 to 2 priority encoder provide 2 bits of binary coded output representing the position of the highest order active input of 4 inputs. If two or more inputs are high at the same time, the input having the highest priority will take precedence. In this design I3 has the highest priority and I0 the lowest . In this partial truth table, for all the non-explicitly defined input combinations (i.e. inputs containing 2, 3, or 4 high bits) the lower priority bits are shown as don't cares (X). Similarly when the inputs are 0000, the outputs are not valid and therefore they are XX. I3 I2 I1 I0 O1 O0 0 0 0 0 X X 0 0 1 X 0 1 0 1 X X 1 0 1 X X X 1 1 1. Learn how to design a 4 to 2 Priority Encoder. Since there are 2 outputs O0 and O1, you will need to do 2 designs, one for each output 2. Practise to breadboard the circuit correctly on the simulator 3. Do it on a real breadboard and observe the results
{"url":"https://electronics-course.com/","timestamp":"2024-11-13T07:46:33Z","content_type":"text/html","content_length":"33388","record_id":"<urn:uuid:120c920a-c874-43c2-8459-eb275f0f6419>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00407.warc.gz"}
class mne_rsa.searchlight(shape, dist=None, spatial_radius=None, temporal_radius=None, sel_series=None, samples_from=0, samples_to=-1)[source]# Generate indices for searchlight patches. Generates a sequence of tuples that can be used to index a data array. Depending on the spatial and temporal radius, each tuple extracts a searchlight patch along time, space or both. This function is flexible in regards to shape of the data array. The intepretation of the dimensions is as follows: 4 or more dimensions (n_folds, n_items, n_series, n_samples, ...) 3 dimensions (n_items, n_series, n_samples) 2 dimensions (n_items, n_series) when spatial_radius is not None. (n_items, n_samples) when temporal_radius is not None. 1 dimension The returned tuples will match the dimensions of the data array. shapetuple of int The shape of the data array to compute the searchlight patches for, as obtained with the .shape attribute. distndarray or sparse matrix, shape (n_series, n_series) | None The distances between all source points or sensors in meters. This parameter needs to be specified if a spatial_radius is set. Since the distance matrix can be huge, sparse matrices are also supported. When the distance matrix is sparse, all zero distances are treated as infinity. This allows you to skip far away points during your distance computations. Defaults to None. spatial_radiusfloat | list of list of int | None This controls how spatial patches will be created. There are several ways to do this: The first way is to specify a spatial radius in meters. In this case, the dist parameter must also be specified. This will create a searchlight where each patch contains all source points within this radius. The second way is to specify a list of predefined patches. In this case, each element of the list should itself be a list of integer indexes along the spatial dimension of the data array. Each element of this list will become a separate patch using the data at the specified indices. The third way is to set this to None, which will disable the making of spatial patches and only perform the searchlight over time. This can be thought of as pooling everything into a single spatial patch. Defaults to``None``. temporal_radiusint | None The temporal radius of the searchlight patch in samples. Set to None to only perform the searchlight over sensors/source points. Defaults to None. sel_seriesndarray, shape (n_selected_series,) | None When set, searchlight patches will only be generated for the subset of time series with the given indices. Defaults to None, in which case patches for all series are generated. When set, searchlight patches will only be generated for the subset of time samples with indices equal or greater than the given value. Only used when the given data shape includes a temporal dimension. Defaults to 0. When set, searchlight patches will only be generated for the subset of time samples with indices up to, but not including, the given value. Only used when the given data shape includes a temporal dimension. Defaults to -1, which means there is no upper bound. patchtuple of (slice | ndarray) A single searchlight patch. Each element of the tuple corresponds to a dimension of the data array and can be used to index along this dimension to extract the searchlight patch. Get the number of generated patches along multiple dimensions. Return hash(self). Get an iterator over the searchlight patches. Get total number of searchlight patches that will be generated.
{"url":"https://users.aalto.fi/~vanvlm1/mne-rsa/functions/mne_rsa.searchlight.html","timestamp":"2024-11-04T02:52:05Z","content_type":"text/html","content_length":"26700","record_id":"<urn:uuid:b930c1c6-b5a2-42e8-87d5-e874395177d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00437.warc.gz"}
Potatoes 10581 - math word problem (10581) Potatoes 10581 Potatoes contain 76.1% starch. How much starch does 250g of potatoes contain? Correct answer: Did you find an error or inaccuracy? Feel free to write us . Thank you! Tips for related online calculators percentage calculator will help you quickly calculate various typical tasks with percentages. You need to know the following knowledge to solve this word math problem: Units of physical quantities: Grade of the word problem: Related math problems and questions:
{"url":"https://www.hackmath.net/en/math-problem/10581","timestamp":"2024-11-06T09:07:57Z","content_type":"text/html","content_length":"50547","record_id":"<urn:uuid:505e9856-e985-4305-a84e-49d37f583de5>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00068.warc.gz"}
Intramolecular Interactions Overcome Hydration to Drive the Collapse Transition of Gly<sub>15</sub> Simulations and experiments show oligo-glycines, polypeptides lacking any side chains, can collapse in water. We assess the hydration thermodynamics of this collapse by calculating the hydration free energy at each of the end points of the reaction coordinate, here taken as the end-to-end distance (r) in the chain. To examine the role of the various conformations for a given r, we study the conditional distribution, P(R[g]|r), of the radius of gyration for a given value of r. The free energy change versus R[g], -k[B]T ln P(R[g]|r), is found to vary more gently compared to the corresponding variation in the excess hydration free energy. Using this observation within a multistate generalization of the potential distribution theorem, we calculate a tight upper bound for the hydration free energy of the peptide for a given r. On this basis, we find that peptide hydration greatly favors the expanded state of the chain, despite primitive hydrophobic effects favoring chain collapse. The net free energy of collapse is seen to be a delicate balance between opposing intrapeptide and hydration effects, with intrapeptide contributions favoring collapse. ASJC Scopus subject areas • Physical and Theoretical Chemistry • Surfaces, Coatings and Films • Materials Chemistry Dive into the research topics of 'Intramolecular Interactions Overcome Hydration to Drive the Collapse Transition of Gly[15]'. Together they form a unique fingerprint.
{"url":"https://researchexperts-staging.utmb.edu/en/publications/intramolecular-interactions-overcome-hydration-to-drive-the-colla","timestamp":"2024-11-13T08:10:17Z","content_type":"text/html","content_length":"52498","record_id":"<urn:uuid:ff7d9dd9-b4ad-41b1-aae4-4a7c5eb56e00>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00388.warc.gz"}
A characteristic velocity for gas-fed PPT performance scaling The performance scaling of gas-fed pulsed plasma thrusters (GPPPTs) is investigated theoretically and experimentally. A characteristic velocity for GFPPTs that depends on the inductance-per-unit-length and the square root of the capacitance to initial inductance ratio has been identified. An analytical model of the discharge current predicts the efficiency to be proportional to the GFPPT performance scaling number, defined here as the ratio of the exhaust velocity to the GFPPT characteristic velocity. To test the validity of the predicted scaling relations, the performance of two rapid-pulse-rate GFPPT designs, PT5 (coaxial electrodes) and PT9 (parallel-plate electrodes), has been measured over 70 different operating conditions with argon propellant. The measurements demonstrate that the impulse bit scales linearly with the integral of the discharge current squared as expected for an electromagnetic accelerator. The measured performance scaling in both electrode geometries is shown to be in good agreement with theoretical predictions using the performance scaling number. Normalizing the exhaust velocity and the impulse-to-energy ratio by the GFPPT characteristic velocity collapses almost all the measured data onto single curves that represent the scaling relations for these GFPPTs. Other 35th Intersociety Energy Conversion Engineering Conference and Exhibit 2000 Country/Territory United States City Las Vegas, NV Period 7/24/00 → 7/28/00 All Science Journal Classification (ASJC) codes • Energy Engineering and Power Technology • Renewable Energy, Sustainability and the Environment Dive into the research topics of 'A characteristic velocity for gas-fed PPT performance scaling'. Together they form a unique fingerprint.
{"url":"https://collaborate.princeton.edu/en/publications/a-characteristic-velocity-for-gas-fed-ppt-performance-scaling-3","timestamp":"2024-11-07T04:48:41Z","content_type":"text/html","content_length":"50327","record_id":"<urn:uuid:4dc3dc27-a8f1-4c1b-9c94-6d65b9ff1511>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00718.warc.gz"}
Do the Math How did the Romans actually do any mathematical calculations with Roman numerals? Without the concept of places (units, tens, etc.) how did they add, subtract, multiply, divide, sell slaves and build —Leonard Frankford, Baltimore Let me toss that question right back at you, Leonard: How do you solve complex math problems? You're probably not working through them in your head, or even on paper. If you need to figure something unwieldy or tricky—say, the square root of 41,786—you reach for a calculator. And so did the ancient Romans. Their counting devices weren't electronic, of course, but the tech was high enough for them to establish and administer an empire of nearly 2 million square miles without even coming up with a notation for zero. The Romans' contributions to the arena of mathematics weren't exactly mind-blowing, especially compared to their cultural forebears across the sea in Greece—the Pythagorean theorem is a hard act to follow. When it came to manipulating numbers, the Romans were pragmatists, not theoreticians. As you suggest, conquest, commerce and engineering were their domains, all fields that do require a certain computational acumen. But the average Roman citizen learned only basic arithmetic in school, under the tutelage of a calculator, as math instructors were often called, unless he or she (but almost always he) needed greater knowledge for professional purposes. And basic Roman arithmetic is largely rather simple, even for those of us spoiled by Arabic notation. Addition is no sweat, because complex Roman numbers already use what math pros call additive notation, with numerals set beside one another to create a larger number. VI is just V plus I, after all. To add large numbers, simply pile all the letters together, arrange them in descending order, and there's your sum. CLXVI plus CLXVI? CCLLXXVVII, or CCCXXXII. And one of the advantages of the Roman system is that you don't need to memorize multiplication tables. What's VI times VI? Six V's and six I's, which converts to three X's, a V and an I: XXXVI. You can do all this because of the limitation Leonard pointed out above. Roman numerals don't have what's called place value or positional value, the way digits in our system do. The value represented by the Arabic numeral 5 changes depending on its placement within a figure: It can mean five units, five tens or five hundreds. But to a Roman, V always meant just plain five, regardless of position. And before you chime in with "What about in IV?" keep in mind that the Roman numerals we use aren't necessarily the ones the Romans used. Subtractive notation—expressing a value as the difference between a larger number and a smaller one set to its left was rare in classical Rome and didn't take off until the middle ages; the Romans greatly preferred the simpler IIII to IV, XXXX to XL, and so on. (The IIII-for-4 notation survives today on the faces of clocks.) You'll notice I haven't mentioned long division—that's where positional value really pays off. What's CCXVII divided by CLI? The pile-and-sort method isn't going to work here. For this one, as well as for the multiplication of larger numbers, you need an abacus. Not too much physical evidence survives, but judging from references in poems by Catullus, Juvenal and others, and from contemporary devices found in Greece, the standard Roman abacus used glass, ivory or bronze counters placed on a board marked off into rows and columns. (The counters were at first made of stone and called calculi or "pebbles," the obvious root of several math-related words in English). A later, more portable version (and this one we've found examples of) consisted of a metal plate with beads that slid back and forth in slots. In either case, the columns or slots were labeled I, X, C, etc., corresponding to the ones column, the tens column, the hundreds, and so on up to millions; the counters or beads kept track of how many you had of each. Essentially, abacuses allowed you to convert Roman figures into a place-based system, do your calculations, then convert back. Some, at least, could even handle fractions, using other specialized slots: Though the Romans kept to base 10 for whole numbers (as we 10-fingered creatures are wont to do), for smaller values they had a separate base-12 system, making it easier to work with thirds and quarters. These devices remained in use for centuries after Rome fell. I've been talking about the Romans, but remember, their numbering system was still the only one that numerate medieval Europeans had at their disposal. After arriving on the Iberian peninsula in the eighth century, the Arabs introduced their own snazzy notation system (more accurately referred to as Hindu-Arabic), which made its written debut in Christian Europe courtesy of some Spanish monks in 976. Resistance to these foreign ciphers was fierce until the 15th century, when the invention of the printing press spread them widely enough that their utility could no longer be denied, sparking a mathematical revolution. And that's why I'm able to tell you today that the square root of 41,786 is 204.41624201613725978. Send Adams questions via straightdope.com or write him c/o Chicago Reader, 350 N. Orleans, Chicago 60654.
{"url":"https://m.cityweekly.net/utah/do-the-math/Content?oid=3904957","timestamp":"2024-11-06T08:13:29Z","content_type":"text/html","content_length":"49411","record_id":"<urn:uuid:e654d912-359c-49d5-b2a6-646e6ce73da8>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00841.warc.gz"}
Hours Calculator - TOOLYATRI.COM 1. Steps to Use the Hours Calculator: 1. Enter Start Time: Input the start time using the time picker provided for the “Start Time” field. 2. Enter End Time: Similarly, input the end time using the time picker for the “End Time” field. 3. Calculate Hours: Click the “Calculate Hours” button to compute the total hours between the start and end times. 4. View Result: The calculator will display the total hours below the button. 2. Information about the Hours Calculator: The Hours Calculator is a simple tool designed to calculate the total number of hours between two given times. It takes into account the start and end times provided by the user and computes the time difference in hours. 3. Benefits of Using the Hours Calculator: • Efficiency: Quickly determine the total hours worked or elapsed between two time points. • Accuracy: Obtain precise results for time differences, aiding in time management and scheduling. • Convenience: User-friendly interface with intuitive time input fields for hassle-free calculations. • Versatility: Useful for various applications, including calculating work hours, duration of events, or time elapsed for tasks. 4. Frequently Asked Questions (FAQ): • Q: Can I input times in different formats, such as 24-hour or AM/PM?A: Yes, the calculator accepts times in both 24-hour format (e.g., 13:30) and AM/PM format (e.g., 1:30 PM). • Q: What happens if I input the end time earlier than the start time?A: The calculator will still provide a result, but the total hours will be displayed as negative, indicating a backward time • Q: Is there a limit to the time range supported by the calculator?A: The Hours Calculator can handle a wide range of time values, from midnight to 11:59 PM, allowing for calculations spanning a full day. However, extremely large time differences may be subject to limitations imposed by JavaScript. • Q: Can I use this tool to calculate hours across multiple days?A: No, this calculator calculates the time difference within a single day. If your calculation spans multiple days, you may need to consider a more advanced time tracking tool.
{"url":"https://toolyatri.com/hours-calculator/","timestamp":"2024-11-07T05:39:35Z","content_type":"text/html","content_length":"174905","record_id":"<urn:uuid:6fc7b83e-d829-4ffd-b6ef-cfd62d9607cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00630.warc.gz"}
The art of alpha generation: method and examples A look into the principles and practical examples of alpha generation Now that we have taken a comprehensive look at the datasets that act as the bedrock for systematic trading strategies in equities, it's time to delve deeper into the methods for translating this information into actionable insights. A pivotal element in the investment world is the concept of alpha, a term representing the edge that a trader or an algorithm has over the market. Systematic Trading with AI is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Generating alpha is akin to extracting value that the majority fail to see. In essence, the alpha comes from four integral components: information, processing, modeling, and speed. The four types of alpha To delve into the intricacies of these elements, let's leverage a simple toy example of predicting a person's weight from their height. Imagine our initial prediction is that weight aligns with the mean weight of the overall population. To better this, we can introduce a model such as: \(weight = mean(weight) + β ∗ height^3\) where β signifies the regression coefficient. In this scenario: • The information alpha is derived from the incorporation of height data. • The processing alpha emerges from the cubing of the height parameter. • The modeling alpha stems from the utilization of the linear model. Although in this example, the distinction between modeling and processing alpha is somewhat blurred, in practical scenarios, these differences can be more pronounced. Speed, the final source of alpha, holds significance as acting in the future relative to other traders is equivalent to predicting the future. It's similar to the uncertainty principle but applied to financial markets. Diverse examples of alpha This post is for paid subscribers
{"url":"https://blog.paperswithbacktest.com/p/the-art-of-alpha-generation-method","timestamp":"2024-11-03T04:08:40Z","content_type":"text/html","content_length":"133130","record_id":"<urn:uuid:1d9f8587-5bf6-4bf3-8b4b-4bf187e127f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00064.warc.gz"}
Generating Random Numbers in Swift with GameplayKit Table of Contents Many games have a need to generate random numbers. If you are making an iOS game with SpriteKit, the GameplayKit framework has classes for generating random numbers. Generating random numbers has two steps. 1. Create a random number generator 2. Generate a random number Create an instance of GKRandomDistribution, which is GameplayKit’s class for generating random numbers. Supply the low and high values to define the range of numbers the generator can generate. The following code demonstrates how to create a random number generator for rolling a die: let randomGenerator = GKRandomDistribution(lowestValue: 1, highestValue: 6) After creating the random number generator, you can use it to generate random numbers. Call the generator’s nextInt function to generate a random number. The following code simulates a die roll: let dieRoll = randomGenerator.nextInt() I have a simple game prototype on GitHub that spawns objects at random starting points. The spawnFood function in the GameScene.swift file has the random number code.
{"url":"https://www.checksimgames.com/generating-random-numbers-in-swift-with-gameplaykit/","timestamp":"2024-11-12T15:42:38Z","content_type":"text/html","content_length":"13970","record_id":"<urn:uuid:a05f76ec-590b-42ad-aa49-e99f4bd88ba4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00563.warc.gz"}
effects of small class size on academic achievement of pupils in mathematics The topic of the study is effects of small class size on academic achievement of pupils in mathematics in Ogbaru local government area of Anambra state. The purpose of the study was to find out the effect of small class size on academic achievement of pupils in mathematics in primary schools. The study adopted a quasi-experimental research design, specifically the non-equivalent control group design. The population of the study consisted of primary schools pupils in all the nineteen (19) private primary schools in Ogbaru. The total population was 3490 pupils. A purposive sampling technique was used to select an intact class of small and large class of primary four (4) pupils. A sample size of 65 was used. The study was guided by two(2) research questions and two(2) research hypotheses were tested at 0.05 level of significance using Analysis of covariance (ANCOVA). The data were collected using fraction achievement test. The Instrument was found to be reliable by the reliability coefficient of 0.8 using Kuder-Richardson formula method (K-R 20). Validation of the instrument was made by experts in mathematics education and measurement and evaluation. Face validation of the instrument was done as well as content validation through construction of a test blue print. The result indicated that there is a significant difference in the achievement of pupils taught mathematics in small class and those taught in large class The study also showed that there is no significant difference between the mathematics achievement of male and female pupils taught mathematics in small class size. Hence, the study recommend that educators should develop great interest in finding ways of improving pupils reasoning in mathematics and this could be achieved through reduction of class size which makes it easier for the teacher to have more one on one interaction with the pupils. Background of the Study Education is the key factor to industrial and technological development of any country in the world. Knowledge holds key to the attainment of the millennium development goals, which include, food security, eradication of child mortality, and reduction of the spread of HIV and AIDS, among others. Husen (2000) stated that education is widely regarded as a basic human right, a key to enlightenment, and a great tool for human and society development. For any Nation to achieve her aims and objectives in education there must be good, dedicated and committed teachers. The teachers must possess characteristics that will enhance effective teaching and learning. Therefore, teachers play a major role in the educational sector and their role is a major determinant of educational attainment of any students. However, for anyone to qualify to be a teacher, the person must obtain degree or National Certificate in Education during the course of study. The pre –service teacher will be exposed to both pedagogy and content knowledge in the field of study; psychology and philosophy courses in preparation for the teaching assignment. To be prepared to teach mathematics adequately, teachers must have a comprehensive understanding of technology pedagogical content knowledge (TPCK)” (Niess, 2005). Shulman (1987) defined content knowledge as the knowledge about the subject (such as knowledge of mathematics and mathematical representations), while knowledge of students, knowledge of teaching, and knowledge of educational contexts characterize pedagogical content knowledge. The sum and intersection of technological knowledge, pedagogical knowledge, and content knowledge serve as a framework for effective mathematics teaching and learning. As mathematics teachers think about teaching with technology, they should concurrently consider how to teach mathematical concepts in such a way that students can experiment with ideas, make conjectures, test hypotheses, and form generalizations. For more than two thousand years, Mathematics has been a part of the human search for understanding the world. Mathematical discoveries have come both from the attempt to describe the natural world and from the desire to arrive at a form of inescapable truth from careful reasoning. These remain fruitful and important motivations for mathematical thinking, but in the last century, mathematics has been successfully applied to many other aspects of the human life. Today, mathematics as a mode of thought and expression is more valuable than ever before. It is absolutely an essential subject in the world today. It is a compulsory subject in both Primary and post primary schools in Nigeria. The study of mathematics is taken serious amongst students and school authorities of various institutions of learning. For instance, students generally cannot gain admission into any course of study in higher institutions without a pass or credit in mathematics. Mathematics is the study of topics such as quantity (numbers), structure, space, and change (Encyclopedia Britannica, 2010). It is the science that studies and explains numbers, quantities, shapes and the relationship between them (Merriam Webster Dictionary 2015). It is alsothe systematic treatment of magnitude, relationships between figures and forms, and the relations between quantities expressed symbolically (Dictionary.com 2016). Mathematics is very useful in the society, more so in the present science and technology age. A sound science curriculum cannot be spoken of realistically without considering the important role of mathematics (Nneji, 2011). Agwagah (2001) stated that mathematics is a scientific tool in realizing the nation's scientific and technological Mathematics is the mother of all sciences. No wonder Mathematics is a compulsory subject at primary and secondary school levels, though, not all the students are expected to become mathematicians but because Mathematics cuts across all the areas of human life even at the domestic level. For a person to function very well within the immediate environment, the knowledge of Mathematics is very Adesina (2000) defined mathematics as the science that draws necessary conclusions and also the manipulation of the meaningless symbols of a first order language according to explicit, syntactical rules. Anibueze (2015) stated that, mathematics plays important roles in three areas which are mathematics as a core skill for life ,as a key to economic prosperity and mathematics education. Mathematics is the queen of science and a tool for scientific and technological development, an indispensible tool for effective use of electronic resources for national development. It is also a way to communicate ideas. More than anything, it is a way of reasoning that is unique to human beings. Mathematics is identified as a specialised language in which knowledge of the physical world has been recorded; a language in which idea originating in the minds of scientists can be encoded, transmitted to others and decoded with a much exact method and much less error (Oyedeji, 1999). Olutosin (2007) described mathematics as an instrument to ease or facilitate the learning of other subjects and that, the importance of Mathematics permeates all aspects of human endeavor. Mathematics ideas have helped make possible the revolution in electronics, which has transformed the world the way we think and live today. Mathematics being an abstract subject which needs to be concretized, does not require much population during the cause of teaching for effective learning to take place. As school population increases, class sizes also increase, the performances of students become an issue. According to Dror (2010), class size has become a phenomenon often mentioned in the educational literature as an influence on pupil’s feelings and achievement, on administration, quality and school budgets. Dror noted that class size is almost an administrative decision over which teachers have little or no control. Most researchers start from the assumption that size of the class would prove a significant determinant of the degree of success of students. In fact, with the exception of a few, many studies have reported that under ideal situation, class size in itself appears to be an important factor. Class size refers to an educational tool that can be used to describe the average number of students per class in a school (Adeyemi, 2008). It also refers to the number of students a teacher faces during a given period of instruction. The relationship between class size and academic performance has been a perplexing one for educators. Studies have found that the physical environment, class overcrowding and teaching methods are all variables that affect students achievement (Molnar, Smith, Zahorik, Ehrle, Halbach, Kuehl 2000). Other factors that could affect students achievement are school population and class size (Gentry, 2000, and Swift, 2000). The issue of poor academic performance of students in Nigeria has been of much concern to all and sundry. The problems are so much that it has led to the decline in standard of education. In order to better understand the skill levels of students, it might be necessary to evaluate factors affecting their performance. These factors can include; school structure and organization, teachers quality, curriculum and teaching philosophies (Driscoill, Halcoussis and Sony, 2003) and class size (Gentry,2000; Swift, 2000). According to Michael (2010) and Lori (2016), each class has its own advantages and disadvantages. Lori (2016), maintained that students in large classes are independent, develop more ideas, have better social opportunities, develop competitive spirit and discussion activity whereas for students in the smaller classes, teachers have more of an opportunity to get to know students on a personal level, helping them to tailor their teaching strategies to meet individual learning needs. It is noted that discussion time becomes fragmented among students in large classes and instructors may rely on passive lecturing, assign less written homework or fewer problem sets, and may not require written papers. Instructors may find it difficult to know each student personally and tailor pedagogy to individual student needs in a large class. According to (Adeyemi 2013)overcrowded classrooms have increased the possibilities for mass failure and make studentslose interest in school. This is because large class sizes do not allow individual students to get attention from teachers which invariably lead to low reading scores, frustration and poor academic performance. Small class size helps students to be able to forge better relationships with classmates and teachers. Increasing class size negatively affects students academic achievement that teachers change pedagogical practices in smaller classrooms and their relationship with students was much closer. In smaller classes, teachers had a better understanding of their students and could customize lessons to individual needs much more than in larger classes. Teachers adopt more group work to take advantage of the smaller classroom and also engage more students by varying types of coursework. These changes create a greater sense of unity and belonging in the classroom hence, it leads to increase in student achievement. Englehart (2007) discovered that students were able to transition from one task to another quicker in the small class and spent a greater amount of time engaged in the material presented. In the small class, the atmosphere is much more conversational and familial. This helps to facilitate their learning by opening lines of communication between teachers and students. Thus, smaller class size seems to be beneficial to student achievement. It (smaller class sizes) leads to a decrease in classroom management issues which would be particularly beneficial to lower achieving students. It fosters more intrapersonal relationships with students. Teachers spend moretime for the review of material if needed, and have fewer discipline problems in smaller Blatchford, Bassett, and Brown (2011) noted that student engagement increased in smaller classrooms as well as their interaction with teachers hence lower achievers were off-task much more. Din (1999) found that students in smaller classes tended to help the teacher with classroommanagement, had more positive student-teacher interactions, and received more individualized help from teachers. Fan (2012) found that smaller classes gave students more access tocomputers and additional space, and teachers were able to spend less time on classroommanagement, which in turn led to greater student achievement. Konstantopoulos and Sun(2014) found that teacher effects (teaching skills and practices) had a larger impact on student achievement in smaller classrooms than regular size classrooms. Smaller class sizes also give teachers an opportunity to increase parental involvement and improve teacher curriculum planning and development. Researchers found that smaller classes gave teachers moreopportunities to reach out to parents and include them in the educational process. And teachers who used smaller classes to differentiate and individualize their curriculums showed significantgains in student achievement. Rodriguez and Elbaum (2014) analyzed that teachers with smaller class sizes had more time to interact with parents and to develop more personal bonds. Isocrates (392 B.C.) opened an academy of rhetoric in Athens to train Athenian generals and statesmen, and insisted on enrolling not more than six or eight students in the school at a time. Power (1966) explained that Isocrates admitted "only a few students to the classes because of the extraordinary concern for care." Quintilian (1875) a rhetorician writing in the Roman Empire around 100 CE, cited the practices in Isocrates' school as evidence that a caring education required small class sizes. Quintilian argued in Institutes of Oratory, as Power summarized the book's thesis, that "care had nothing whatever to do with discipline: It meant simply that only a few students at a time could be taught effectively. However, since there is no concensus on the effect of class size on academic achievement. it becomes imperative to examine the effect of class size on pupils achievement in Mathematics. Moreover, gender is another factor that could affect students’ achievement in mathematics. The widespread belief that males outperform females in mathematics is apparently a myth. A meta-analysis (Hyde, Fennema, and Lamon 1990) showed that boys tend to do better in mathematics tests that involve problem solving, at least by the time they reach high school. Girls, however, do better in computation and there is no gender difference in understanding concepts. According to Kimball (1989), girls tend to earn better grades in mathematics than boys. Gender differences in mathematics performance that favour males are usually attributed to gender socialization (Boswell 1980; Brush 1980; Eccles and Jacobs 1986; Linn and Peterson 1986; Parsons, Adler and Kaczala and Meece1982; Sherman 1979, 1980; Sherman and Fennema 1997; Stallings 1979). Basically, the argument is that girls are thought to have low aptitude for mathematics and that they will not need skills in advanced mathematics as adults (Chipman and Thomas 1985). These socialization practices cause girls to lose interest in mathematics and to lack confidence in their mathematical ability. As a result, they avoid mathematics courses in high school. This situation puts them at even a greater disadvantage because the most accurate predictor of performance on tests in mathematics is the number of mathematics courses taken (Jones 1984). Girls also may experience math anxiety (fear of mathematics) because of the messages they receive, which can interfere with learning and test performance (Meece, Wigfield and Eccles 1990; Tobias 1987). Gherasim, Butnaru and Mairean (2013) found gender effects in such variables as achievement goals, classroom environments and achievement in mathematics among young adolescents showing that girls obtained higher grades in mathematics than boys. Girls reported higher classroom support, lower performance-avoidance goals (Shim, Ryan and Anderson, 2008) and more mastery of the learning materials (Perkun, Elliot and Maier, 2006). Another aspect, students' attitude, was studied by Jones and Young (1995), who found that boys had more favorable attitudes towards mathematics and science than girls. Emotions towards mathematics were studied by Frenzel, Pekrun and Goetz (2007) who found that girls experienced less enjoyment and pride than boys. Boys on the other hand , experienced less anxiety and less hopelessness towards mathematics than girls. They also found that girls felt slightly more shame than boys. Statement of the Problem The ever-growing world population and the craze for education mean that classes will continue to grow. A common feature in institutions of learning is the large number of students taught by a single teacher. With such a high teacher-student ratio, the teacher has no option but to adopt self-help measures, which are in no way ideal or adequate for appropriate learning. A critical issue that becomes a focus in the recent development is the issue of the ability of the regular classroom lessons to meet the learning requirements of the pupils in mathematics subject. It is now thought that complementing the classroom lessons with a small number of pupils may help in guiding them towards better performance. Inspite of the importance of Mathematics, there is a general low-level of pupils performance in Mathematics in examination, therefore the class-size couldbe the cause of this low performance. Purpose of the Study The purpose of this study was to find out the effect of small class size on academic achievement of pupils in mathematics in Primary schools. Specifically, the study sought to find out 1. The mean scores of the pupils taught mathematics in small class and those taught in large class. 2. The mean scores of male and female pupils taught mathematics in small class. Scope of the study This study is restricted to primary four pupils in private owned primary schools in Ogbaru Local Government Area in Anambra State in teaching and learning of fractions in Numbers and numeration in Significance of the study A study such as this will be significant in many ways to the pupils, teachers, educational administration, and government. There is going to be a great improvement on the part of the pupils who learn because the teacher will know them, their skills, passion, strength and learning styles and more likely offer individual attention and guidance to them. They are going to be exposed to more one on one interaction with the teacher, thereby enhancing their strengths and improving their weaknesses. They will have the opportunity to speak up and be heard among their peers which will help them to build self confidence and public speaking skills. Teachers will be in the better position to justify the performance of each child which will guide their subsequent steps and strategies towards enhancing better teaching with regards to the pupil’s They are going to be prevented from becoming overwhelmed and overworked, which leads to higher teacher satisfaction rates. This is because small class size consists of small number of pupils which the teacher will have limited time to spend with. This study will be of great value to schools and educational administrators in their educational planning and reformations by knowing the number of pupils that should be taught in a particular class by a teacher. Finally, the study will be very important as it might create jobs for unemployed Mathematics teachers. The government might realize the needs for more hands with regards to recruitment of many Mathematics experts who would be deployed in schools. Research questions The following Research questions are formulated to guide the study 1. What are the mean scores of the pupils taught Mathematics in small class and those taught in large class? 2. What are the mean scores of the male and female pupils taught mathematics in small class. Research hypothesis The following hypotheses are tested at 0.05 level of significance. Ho[1: ]There is no significant difference in the mean scores of pupils taught Mathematics in small class and those taught in large class. Ho[2: ]There is no significant difference in the mean scores of male and female pupils taught Mathematics in small class. Post a Comment
{"url":"https://www.uniprojects.com.ng/2020/11/effects-of-small-class-size-on-academic.html","timestamp":"2024-11-10T15:23:46Z","content_type":"application/xhtml+xml","content_length":"288417","record_id":"<urn:uuid:e96e2b49-965e-452f-b8bb-8fa84049ecfb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00296.warc.gz"}
Calories Burned Biking Calculator Last updated: Calories Burned Biking Calculator This calories burned biking calculator may satisfy your curiosity about how many calories biking burns. Whether you are a recreational cyclist, a bike commuter, or a professional, you can calculate calories and kilograms lost while riding a bike. If you want to know how other sports and activities help in losing weight, check our calorie calculator. What are calories? A calorie is a unit of energy. It is the approximate amount of energy needed to raise the temperature of one gram of water by one degree Celsius at a pressure of one atmosphere. One calorie (cal) is a really small amount, so in everyday life, we use kilocalories (kcal), which is one thousand calories. In this article, we use the term "calories" to describe kilocalories, which is quite a common What factors affect calories burned and weight loss? There are three main factors in calculations of burned calories and weight loss: • Bodyweight – the more you weigh, the more you burn, as you use more energy to move a heavier body • Exercise intensity and duration – more extended and more intense activities burn calories faster • Choice of the exercise – it's intuitive: we know that recreational biking, mountain racing, and stationary cycling, each practiced for 30 minutes, will give different results How many calories does biking burn? To calculate the calories burned, we need to know how to assess the intensity and evaluate the choice of exercise. The easiest and most objective way to assess this is by using your power output. If you don't have a power meter on your bike, we recommend you use our cycling wattage calculator to help you obtain this number. In this case, the formula to obtain your calories consumed takes into account your average power (Power) and the time of the activity (T): calories = ((Power × T) / 4.18 ) / 0.24 where 4.18 is the conversion factor from Joules (SI unit) to calories, and 0.24 is the efficiency (24%) of an average human body when cycling. However, you might not have the time to do those calculations, or maybe you're looking for a simple estimation. Here is where the unit "MET" appears. METs (metabolic equivalent of task) express the energy cost of physical activities. Simply, they measure how many calories you burn per hour of activity and per one kilogram of body weight. For example, MET for leisure biking is equal to 4, and the value for somebody taking part in a race and cycling over 20 mph can be as high as 16. Therefore, choosing only the type of activity is not enough - in other calculators, you can usually find that one MET value for biking equal to 8 or 8.5, which is an average value for all types of cycling. The calories burned biking calculator uses the formula for calories burned: calories = T × 60 × MET × 3.5 × W / 200 where T is the duration of activity in hours, W is your weight in kilograms (including bike and extra equipment), and MET is a metabolic equivalent of the chosen task. To calculate weight loss, we need to know how much energy our fat tissue contains - there are approximately 7700 calories stored in every kilogram of body fat. It means that once you've found the number of burned calories, all you have to do is to divide by 7700 to obtain the weight loss in kilograms: weight_loss = calories/7700 If you want to check what other advantages of cycling there are, have a look at biking life gain and car vs. bike calculators. Cycling for weight loss You want to lose some weight, but you still hesitate: which activity is better? Half-day trip biking for 5 hours at a leisurely pace or 1.5 hour of stationary biking? Thanks to our calculator, you can assess which form of biking activity may serve you best: 1. Choose the type of biking you are considering, e.g., bicycling 10-11.9 mph leisurely slow, with light effort. 2. Enter your weight into the calories burned biking calculator. Let's assume it's 80 kg. 3. Next, determine the duration of the activity. Let's say you went for a half-day trip and were biking for 5 hours. 4. Input all of these values into the calorie burn formula: calories = 5 × 60 × 6 × 3.5 × 80 / 200 = 2520 kcal 5. Finally, divide this value by 7700 to obtain your weight loss: 2520/7700 = 0.33 kg Repeat the steps for the second activity, changing only the form and duration - the results for 1.5 hour stationary biking are 630 kcal burned and 0.082 kg weight loss. Tadaaam! The calories burned biking calculator helped you figure out which form of biking is more profitable. If you want to take care of your weight more systematically, visit our BMR (basal metabolic rate) calculator. It will tell you how many calories your body requires to maintain its basic existence. With this knowledge, it will be easier to create a healthy and efficient diet plan. Can I burn more calories by walking or biking? Cycling is a faster and more efficient way to burn calories, lose weight and build muscle compared to walking. Cycling burns at least two times more calories per hour. You have the option to increase the resistance of your cycle and make the biking process more challenging and, hence, more effective in burning calories. Whereas in walking, there is only so much you can do before it is running and not walking. How can I calculate the calories I burn, biking? The formula to calculate the calories you may burn while biking is: Calories = T × 60 × MET × 3.5 × W / 200 • T – Duration of activity in hours; • W – Your weight in kilograms (including bike and extra equipment); and • MET – Metabolic equivalent of the chosen task. This formula will give you the calories you burnt in terms of kcal. And in case you don't know the MET value, you may use the average MET value for biking, which is 8 to 8.5. How much weight do I lose while biking? Biking is a nice way to lose weight, and the formula to calculate the amount of weight you lose by biking is: Weight lost = Calories burnt / 7700 Once you figure out how many calories you burned, all you have to do is: • Divide the calories burned by 7700. • The result is the amount of weight you lost in kilograms. We divide the calories by 7700 because in every kilogram of our body fat, there are approximately 7700 calories stored. I am 200 lbs, how many calories will I burn while biking? You will burn 476.3 kcal while biking for one hour on an exercise cycle at home. The weight equivalent of these calories is 0.136 lbs. There are factors other than body weight that determine the number of calories you may burn while working out. 1. The choice of exercise; 2. The exercise intensity; and 3. The duration of the exercise/workout.
{"url":"https://www.omnicalculator.com/sports/calories-burned-biking","timestamp":"2024-11-05T22:53:50Z","content_type":"text/html","content_length":"600195","record_id":"<urn:uuid:5cd1a40c-8960-466a-a2cc-d642db404130>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00708.warc.gz"}
Manual similarity measure | Machine Learning | Google for Developers As just shown, k-means assigns points to their closest centroid. But what does "closest" mean? To apply k-means to feature data, you will need to define a measure of similarity that combines all the feature data into a single numeric value, called a manual similarity measure. Consider a shoe dataset. If that dataset has shoe size as its only feature, you can define the similarity of two shoes in terms of the difference between their sizes. The smaller the numerical difference between sizes, the greater the similarity between shoes. If that shoe dataset had two numeric features, size and price, you can combine them into a single number representing similarity. First scale the data so both features are comparable: • Size (s): Shoe size probably forms a Gaussian distribution. Confirm this. Then normalize the data. • Price (p): The data is probably a Poisson distribution. Confirm this. If you have enough data, convert the data to quantiles and scale to \([0,1]\). Next, combine the two features by calculating the root mean squared error (RMSE). This rough measure of similarity is given by \(\sqrt{\frac{(s_i - s_j)^2+(p_i - p_j)^2}{2}}\). For a simple example, calculate similarity for two shoes with US sizes 8 and 11, and prices 120 and 150. Since we don't have enough data to understand the distribution, we'll scale the data without normalizing or using quantiles. Action Method Scale the size. Assume a maximum possible shoe size of 20. Divide 8 and 11 by the maximum size 20 to get 0.4 and 0.55. Scale the price. Divide 120 and 150 by the maximum price 150 to get 0.8 and 1. Find the difference in size. \(0.55 - 0.4 = 0.15\) Find the difference in price. \(1 - 0.8 = 0.2\) Calculate the RMSE. \(\sqrt{\frac{0.2^2+0.15^2}{2}} = 0.17\) Intuitively, your similarity measure should increase when feature data is more similar. Instead, your similarity measure (RMSE) actually decreases. Make your similarity measure follow your intuition by subtracting it from 1. \[\text{Similarity} = 1 - 0.17 = 0.83\] In general, you can prepare numerical data as described in Prepare data, then combine the data by using Euclidean distance. What if that dataset included both shoe size and shoe color? Color is categorical data, discussed in Machine Learning Crash Course in Working with categorical data. Categorical data is harder to combine with the numerical size data. It can be: • Single-valued (univalent), such as a car's color ("white" or "blue" but never both) • Multi-valued (multivalent), such as a movie's genre (a movie can be both "action" and "comedy," or only "action") If univalent data matches, for example in the case of two pairs of blue shoes, the similarity between the examples is 1. Otherwise, similarity is 0. Multivalent data, like movie genres, is harder to work with. If there are a fixed set of movie genres, similarity can be calculated using the ratio of common values, called Jaccard similarity. Example calculations of Jaccard similarity: • [“comedy”,”action”] and [“comedy”,”action”] = 1 • [“comedy”,”action”] and [“action”] = ½ • [“comedy”,”action”] and [“action”, "drama"] = ⅓ • [“comedy”,”action”] and [“non-fiction”,”biographical”] = 0 Jaccard similarity is not the only possible manual similarity measure for categorical data. Two other examples: • Postal codes can be converted into latitude and longitude before calculating Euclidean distance between them. • Color can be converted into numeric RGB values, with differences in values combined into Euclidean distance. See Working with categorical data for more. In general, a manual similarity measure must directly correspond to actual similarity. If your chosen metric does not, then it isn't encoding the information you want it to encode. Pre-process your data carefully before calculating a similarity measure. The examples on this page are simplified. Most real-world datasets are large and complex. As previously mentioned, quantiles are a good default choice for processing numeric data. As the complexity of data increases, it becomes harder to create a manual similarity measure. In that situation, switch to a supervised similarity measure, where a supervised machine learning model calculates similarity. This will be discussed in more detail later.
{"url":"https://developers.google.com/machine-learning/clustering/kmeans/manual-similarity","timestamp":"2024-11-10T17:53:48Z","content_type":"text/html","content_length":"97859","record_id":"<urn:uuid:6dedebf0-5a2b-4bc4-b0f7-b0a6d174f6eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00796.warc.gz"}
In finance, the capital asset pricing model (CAPM) is a model used to determine a theoretically appropriate required rate of return of an asset, to make decisions about adding assets to a well-diversified portfolio. An estimation of the CAPM and the security market line (purple) for the Dow Jones Industrial Average over 3 years for monthly data. The model takes into account the asset's sensitivity to non-diversifiable risk (also known as systematic risk or market risk), often represented by the quantity beta (β) in the financial industry, as well as the expected return of the market and the expected return of a theoretical risk-free asset. CAPM assumes a particular form of utility functions (in which only first and second moments matter, that is risk is measured by variance, for example a quadratic utility) or alternatively asset returns whose probability distributions are completely described by the first two moments (for example, the normal distribution) and zero transaction costs (necessary for diversification to get rid of all idiosyncratic risk). Under these conditions, CAPM shows that the cost of equity capital is determined only by beta.^[1]^[2] Despite its failing numerous empirical tests,^[3] and the existence of more modern approaches to asset pricing and portfolio selection (such as arbitrage pricing theory and Merton's portfolio problem), the CAPM still remains popular due to its simplicity and utility in a variety of situations. The CAPM is a model for pricing an individual security or portfolio. For individual securities, we make use of the security market line (SML) and its relation to expected return and systematic risk (beta) to show how the market must price individual securities in relation to their security risk class. The SML enables us to calculate the reward-to-risk ratio for any security in relation to that of the overall market. Therefore, when the expected rate of return for any security is deflated by its beta coefficient, the reward-to-risk ratio for any individual security in the market is equal to the market reward-to-risk ratio, thus: ${\displaystyle {\frac {E(R_{i})-R_{f}}{\beta _{i}}}=E(R_{m})-R_{f}}$ The market reward-to-risk ratio is effectively the market risk premium and by rearranging the above equation and solving for ${\displaystyle E(R_{i})}$ , we obtain the capital asset pricing model ${\displaystyle E(R_{i})=R_{f}+\beta _{i}(E(R_{m})-R_{f})\,}$ • ${\displaystyle E(R_{i})~~}$ is the expected return on the capital asset • ${\displaystyle R_{f}~}$ is the risk-free rate of interest such as interest arising from government bonds • ${\displaystyle \beta _{i}~~}$ (the beta) is the sensitivity of the expected excess asset returns to the expected excess market returns, or also ${\displaystyle \beta _{i}={\frac {\mathrm {Cov} (R_{i},R_{m})}{\mathrm {Var} (R_{m})}}=\rho _{i,m}{\frac {\sigma _{i}}{\sigma _{m}}}}$ • ${\displaystyle E(R_{m})~}$ is the expected return of the market • ${\displaystyle E(R_{m})-R_{f}~}$ is sometimes known as the market premium • ${\displaystyle E(R_{i})-R_{f}~}$ is also known as the individual risk premium • ${\displaystyle \rho _{i,m}}$ denotes the correlation coefficient between the investment ${\displaystyle i}$ and the market ${\displaystyle m}$ • ${\displaystyle \sigma _{i}}$ is the standard deviation for the investment ${\displaystyle i}$ • ${\displaystyle \sigma _{m}}$ is the standard deviation for the market ${\displaystyle m}$ . Restated, in terms of risk premium, we find that: ${\displaystyle E(R_{i})-R_{f}=\beta _{i}(E(R_{m})-R_{f})\,}$ which states that the individual risk premium equals the market premium times β. Note 1: the expected market rate of return is usually estimated by measuring the arithmetic average of the historical returns on a market portfolio (e.g. S&P 500). Note 2: the risk free rate of return used for determining the risk premium is usually the arithmetic average of historical risk free rates of return and not the current risk free rate of return. For the full derivation see Modern portfolio theory. Modified betas There has also been research into a mean-reverting beta often referred to as the adjusted beta, as well as the consumption beta. However, in empirical tests the traditional CAPM has been found to do as well as or outperform the modified beta models. Security market line The SML graphs the results from the capital asset pricing model (CAPM) formula. The x-axis represents the risk (beta), and the y-axis represents the expected return. The market risk premium is determined from the slope of the SML. The relationship between β and required return is plotted on the security market line (SML), which shows expected return as a function of β. The intercept is the nominal risk-free rate available for the market, while the slope is the market premium, E(R[m])− R[f]. The security market line can be regarded as representing a single-factor model of the asset price, where β is the exposure to changes in the value of the Market. The equation of the SML is thus: ${\displaystyle \mathrm {SML} :E(R_{i})=R_{f}+\beta _{i}(E(R_{M})-R_{f}).~}$ It is a useful tool for determining if an asset being considered for a portfolio offers a reasonable expected return for its risk. Individual securities are plotted on the SML graph. If the security's expected return versus risk is plotted above the SML, it is undervalued since the investor can expect a greater return for the inherent risk. And a security plotted below the SML is overvalued since the investor would be accepting less return for the amount of risk assumed. Asset pricing Once the expected/required rate of return ${\displaystyle E(R_{i})}$ is calculated using CAPM, we can compare this required rate of return to the asset's estimated rate of return over a specific investment horizon to determine whether it would be an appropriate investment. To make this comparison, you need an independent estimate of the return outlook for the security based on either fundamental or technical analysis techniques, including P/E, M/B etc. Assuming that the CAPM is correct, an asset is correctly priced when its estimated price is the same as the present value of future cash flows of the asset, discounted at the rate suggested by CAPM. If the estimated price is higher than the CAPM valuation, then the asset is overvalued (and undervalued when the estimated price is below the CAPM valuation).^[5] When the asset does not lie on the SML, this could also suggest mis-pricing. Since the expected return of the asset at time ${\displaystyle t}$ is ${\displaystyle E(R_{t})={\frac {E(P_{t+1})-P_{t}}{P_{t}}}}$ , a higher expected return than what CAPM suggests indicates that ${\displaystyle P_{t}}$ is too low (the asset is currently undervalued), assuming that at time ${\displaystyle t+1}$ the asset returns to the CAPM suggested The asset price ${\displaystyle P_{0}}$ using CAPM, sometimes called the certainty equivalent pricing formula, is a linear relationship given by ${\displaystyle P_{0}={\frac {1}{1+R_{f}}}\left[E(P_{T})-{\frac {\mathrm {Cov} (P_{T},R_{M})(E(R_{M})-R_{f})}{\mathrm {Var} (R_{M})}}\right]}$ where ${\displaystyle P_{T}}$ is the future price of the asset or portfolio.^[5] Asset-specific required return The CAPM returns the asset-appropriate required return or discount rate—i.e. the rate at which future cash flows produced by the asset should be discounted given that asset's relative riskiness. Betas exceeding one signify more than average "riskiness"; betas below one indicate lower than average. Thus, a more risky stock will have a higher beta and will be discounted at a higher rate; less sensitive stocks will have lower betas and be discounted at a lower rate. Given the accepted concave utility function, the CAPM is consistent with intuition—investors (should) require a higher return for holding a more risky asset. Since beta reflects asset-specific sensitivity to non-diversifiable, i.e. market risk, the market as a whole, by definition, has a beta of one. Stock market indices are frequently used as local proxies for the market—and in that case (by definition) have a beta of one. An investor in a large, diversified portfolio (such as a mutual fund), therefore, expects performance in line with the Risk and diversification The risk of a portfolio comprises systematic risk, also known as undiversifiable risk, and unsystematic risk which is also known as idiosyncratic risk or diversifiable risk. Systematic risk refers to the risk common to all securities—i.e. market risk. Unsystematic risk is the risk associated with individual assets. Unsystematic risk can be diversified away to smaller levels by including a greater number of assets in the portfolio (specific risks "average out"). The same is not possible for systematic risk within one market. Depending on the market, a portfolio of approximately 30–40 securities in developed markets such as the UK or US will render the portfolio sufficiently diversified such that risk exposure is limited to systematic risk only. This number may vary depending on the way securities are weighted in a portfolio which alters the overall risk contribution of each security. For example, market cap weighting means that securities of companies with larger market capitalization will take up a larger portion of the portfolio, making it effectively less diversified. In developing markets a larger number of securities is required for diversification, due to the higher asset volatilities. A rational investor should not take on any diversifiable risk, as only non-diversifiable risks are rewarded within the scope of this model. Therefore, the required return on an asset, that is, the return that compensates for risk taken, must be linked to its riskiness in a portfolio context—i.e. its contribution to overall portfolio riskiness—as opposed to its "stand alone risk". In the CAPM context, portfolio risk is represented by higher variance i.e. less predictability. In other words, the beta of the portfolio is the defining factor in rewarding the systematic exposure taken by an Efficient frontier The (Markowitz) efficient frontier. CAL stands for the capital allocation line. The CAPM assumes that the risk-return profile of a portfolio can be optimized—an optimal portfolio displays the lowest possible level of risk for its level of return. Additionally, since each additional asset introduced into a portfolio further diversifies the portfolio, the optimal portfolio must comprise every asset, (assuming no trading costs) with each asset value-weighted to achieve the above (assuming that any asset is infinitely divisible). All such optimal portfolios, i.e., one for each level of return, comprise the efficient frontier. Because the unsystematic risk is diversifiable, the total risk of a portfolio can be viewed as beta. All investors:^[7] 1. Aim to maximize economic utilities (Asset quantities are given and fixed). 2. Are rational and risk-averse. 3. Are broadly diversified across a range of investments. 4. Are price takers, i.e., they cannot influence prices. 5. Can lend and borrow unlimited amounts under the risk free rate of interest. 6. Trade without transaction or taxation costs. 7. Deal with securities that are all highly divisible into small parcels (All assets are perfectly divisible and liquid). 8. Have homogeneous expectations. 9. Assume all information is available at the same time to all investors. In their 2004 review, economists Eugene Fama and Kenneth French argue that "the failure of the CAPM in empirical tests implies that most applications of the model are invalid".^[3] • The traditional CAPM using historical data as the inputs to solve for a future return of asset i. However, the history may not be sufficient to use for predicting the future and modern CAPM approaches have used betas that rely on future risk estimates.^[8] • Most practitioners and academics agree that risk is of a varying nature (non-constant). A critique of the traditional CAPM is that the risk measure used remains constant (non-varying beta). Recent research has empirically tested time-varying betas to improve the forecast accuracy of the CAPM.^[9] • The model assumes that the variance of returns is an adequate measurement of risk. This would be implied by the assumption that returns are normally distributed, or indeed are distributed in any two-parameter way, but for general return distributions other risk measures (like coherent risk measures) will reflect the active and potential shareholders' preferences more adequately. Indeed, risk in financial investments is not variance in itself, rather it is the probability of losing: it is asymmetric in nature as in the alternative safety-first asset pricing model.^[10]^[11] Barclays Wealth have published some research on asset allocation with non-normal returns which shows that investors with very low risk tolerances should hold more cash than CAPM suggests.^[12] • Some investors prefer positive skewness, all things equal, which means that these investors accept lower returns when returns are positively skewed. For example, Casino gamblers pay to take on more risk. The CAPM can be extended to include co-skewness as a priced factor, besides beta.^[13]^[14] • The model assumes that all active and potential shareholders have access to the same information and agree about the risk and expected return of all assets (homogeneous expectations assumption). • The model assumes that the probability beliefs of active and potential shareholders match the true distribution of returns. A different possibility is that active and potential shareholders' expectations are biased, causing market prices to be informationally inefficient. This possibility is studied in the field of behavioral finance, which uses psychological assumptions to provide alternatives to the CAPM such as the overconfidence-based asset pricing model of Kent Daniel, David Hirshleifer, and Avanidhar Subrahmanyam (2001).^[15] • The model does not appear to adequately explain the variation in stock returns. Empirical studies show that low beta stocks offer higher returns than the model would predict.^[16] ^[17] • Some data to this effect was presented as early as a 1969 conference in Buffalo, New York in a paper by Fischer Black, Michael Jensen, and Myron Scholes. Either that fact is itself rational (which saves the efficient-market hypothesis but makes CAPM wrong), or it is irrational (which saves CAPM, but makes the EMH wrong – indeed, this possibility makes volatility arbitrage a strategy for reliably beating the market).^[18]^[19]^[20] The puzzling empirical relationship between risk and return is also referred to as the low-volatility anomaly. • The model assumes that there are no taxes or transaction costs, although this assumption may be relaxed with more complicated versions of the model.^[21] • The market portfolio consists of all assets in all markets, where each asset is weighted by its market capitalization. This assumes no preference between markets and assets for individual active and potential shareholders, and that active and potential shareholders choose assets solely as a function of their risk-return profile. It also assumes that all assets are infinitely divisible as to the amount which may be held or transacted. • The market portfolio should in theory include all types of assets that are held by anyone as an investment (including works of art, real estate, human capital...) In practice, such a market portfolio is unobservable and people usually substitute a stock index as a proxy for the true market portfolio. Unfortunately, it has been shown that this substitution is not innocuous and can lead to false inferences as to the validity of the CAPM, and it has been said that, due to the impossibility of observing the true market portfolio, the CAPM might not be empirically testable. This was presented in greater depth in a paper by Richard Roll in 1977, and is generally referred to as Roll's critique.^[22] However, others find that the choice of market portfolio may not be that important for empirical tests.^[23] Other authors have attempted to document what the world wealth or world market portfolio consists of and what its returns have been.^[24]^[25]^[26] • The model assumes economic agents optimize over a short-term horizon, and in fact investors with longer-term outlooks would optimally choose long-term inflation-linked bonds instead of short-term rates as this would be more risk-free asset to such an agent.^[27]^[28] • The model assumes just two dates, so that there is no opportunity to consume and rebalance portfolios repeatedly over time. The basic insights of the model are extended and generalized in the intertemporal CAPM (ICAPM) of Robert Merton,^[29] and the consumption CAPM (CCAPM) of Douglas Breeden and Mark Rubinstein.^[30] • CAPM assumes that all active and potential shareholders will consider all of their assets and optimize one portfolio. This is in sharp contradiction with portfolios that are held by individual shareholders: humans tend to have fragmented portfolios or, rather, multiple portfolios: for each goal one portfolio — see behavioral portfolio theory^[31] and Maslowian portfolio theory.^[32] • Empirical tests show market anomalies like the size and value effect that cannot be explained by the CAPM.^[33] For details see the Fama–French three-factor model.^[34] Roger Dayala^[35] goes a step further and claims the CAPM is fundamentally flawed even within its own narrow assumption set, illustrating the CAPM is either circular or irrational. The circularity refers to the price of total risk being a function of the price of covariance risk only (and vice versa). The irrationality refers to the CAPM proclaimed ‘revision of prices’ resulting in identical discount rates for the (lower) amount of covariance risk only as for the (higher) amount of Total risk (i.e. identical discount rates for different amounts of risk. Roger’s findings have later been supported by Lai & Stohs.^[36] See also • Black, Fischer., Michael C. Jensen, and Myron Scholes (1972). The Capital Asset Pricing Model: Some Empirical Tests, pp. 79–121 in M. Jensen ed., Studies in the Theory of Capital Markets. New York: Praeger Publishers. • Black, F (1972). "Capital market equilibrium with restricted borrowing". J. Bus. 45 (3): 444–455. doi:10.1086/295472. • Fama, Eugene F. (1968). "Risk, Return and Equilibrium: Some Clarifying Comments". Journal of Finance. 23 (1): 29–40. doi:10.1111/j.1540-6261.1968.tb02996.x. • Fama, Eugene F.; French, Kenneth (1992). "The Cross-Section of Expected Stock Returns". Journal of Finance. 47 (2): 427–466. doi:10.1111/j.1540-6261.1992.tb04398.x. • French, Craig W. (2003). The Treynor Capital Asset Pricing Model, Journal of Investment Management, Vol. 1, No. 2, pp. 60–72. Available at http://www.joim.com/ • French, Craig W. (2002). Jack Treynor's 'Toward a Theory of Market Value of Risky Assets' (December). Available at http://ssrn.com/abstract=628187 • Lintner, John (1965). "The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets". Review of Economics and Statistics. 47 (1): 13–37. doi:10.2307 /1924119. JSTOR 1924119. • Markowitz, Harry M. (1999). "The early history of portfolio theory: 1600–1960". Financial Analysts Journal. 55 (4): 5–16. doi:10.2469/faj.v55.n4.2281. • Mehrling, Perry (2005). Fischer Black and the Revolutionary Idea of Finance. Hoboken, NJ: John Wiley & Sons, Inc. • Mossin, Jan (1966). "Equilibrium in a Capital Asset Market". Econometrica. 34 (4): 768–783. doi:10.2307/1910098. JSTOR 1910098. • Ross, Stephen A. (1977). The Capital Asset Pricing Model (CAPM), Short-sale Restrictions and Related Issues, Journal of Finance, 32 (177) • Rubinstein, Mark (2006). A History of the Theory of Investments. Hoboken, NJ: John Wiley & Sons, Inc. • Sharpe, William F. (1964). "Capital asset prices: A theory of market equilibrium under conditions of risk". Journal of Finance. 19 (3): 425–442. doi:10.1111/j.1540-6261.1964.tb02865.x. hdl: 10.1111/j.1540-6261.1964.tb02865.x. S2CID 36720630. • Stone, Bernell K. (1970) Risk, Return, and Equilibrium: A General Single-Period Theory of Asset Selection and Capital-Market Equilibrium. Cambridge: MIT Press. • Tobin, James (1958). "Liquidity Preference as Behavior towards Risk" (PDF). The Review of Economic Studies. 25 (1): 65–86. doi:10.2307/2296205. JSTOR 2296205. Archived from the original (PDF) on 2020-11-27. Retrieved 2019-12-12. • Treynor, Jack L. (8 August 1961). Market Value, Time, and Risk. Vol. 95–209. Unpublished manuscript. • Treynor, Jack L. (1962). Toward a Theory of Market Value of Risky Assets. Unpublished manuscript. A final version was published in 1999, in Asset Pricing and Portfolio Performance: Models, Strategy and Performance Metrics. Robert A. Korajczyk (editor) London: Risk Books, pp. 15–22. • Mullins Jr., David W. (January–February 1982). "Does the capital asset pricing model work?". Harvard Business Review: 105–113.
{"url":"https://www.knowpia.com/knowpedia/Capital_asset_pricing_model","timestamp":"2024-11-12T06:59:41Z","content_type":"text/html","content_length":"205178","record_id":"<urn:uuid:229ab2c5-19c8-4559-bb54-3acc5203865d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00353.warc.gz"}
Convert Calorie (th)/hour to Kilocalorie (th)/minute Please provide values below to convert calorie (th)/hour [cal (th)/h] to kilocalorie (th)/minute, or vice versa. Calorie (th)/hour to Kilocalorie (th)/minute Conversion Table Calorie (th)/hour [cal (th)/h] Kilocalorie (th)/minute 0.01 cal (th)/h 1.6666666666667E-7 kilocalorie (th)/minute 0.1 cal (th)/h 1.6666666666667E-6 kilocalorie (th)/minute 1 cal (th)/h 1.66667E-5 kilocalorie (th)/minute 2 cal (th)/h 3.33333E-5 kilocalorie (th)/minute 3 cal (th)/h 5.0E-5 kilocalorie (th)/minute 5 cal (th)/h 8.33333E-5 kilocalorie (th)/minute 10 cal (th)/h 0.0001666667 kilocalorie (th)/minute 20 cal (th)/h 0.0003333333 kilocalorie (th)/minute 50 cal (th)/h 0.0008333333 kilocalorie (th)/minute 100 cal (th)/h 0.0016666667 kilocalorie (th)/minute 1000 cal (th)/h 0.0166666667 kilocalorie (th)/minute How to Convert Calorie (th)/hour to Kilocalorie (th)/minute 1 cal (th)/h = 1.66667E-5 kilocalorie (th)/minute 1 kilocalorie (th)/minute = 60000 cal (th)/h Example: convert 15 cal (th)/h to kilocalorie (th)/minute: 15 cal (th)/h = 15 × 1.66667E-5 kilocalorie (th)/minute = 0.00025 kilocalorie (th)/minute Popular Power Unit Conversions Convert Calorie (th)/hour to Other Power Units
{"url":"https://www.unitconverters.net/power/calorie-th-hour-to-kilocalorie-th-minute.htm","timestamp":"2024-11-12T02:10:17Z","content_type":"text/html","content_length":"14674","record_id":"<urn:uuid:8663f49c-4036-44ad-8134-43a1e91baf93>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00181.warc.gz"}
civ is an implementation of the categorical instrumental variable estimator proposed by Wiemann (2023; arxiv:2311.17021). civ allows for optimal instrumental variable estimation in settings with relatively few observations per category as seen in many economic applications including judge IV designs. To obtain valid inference in these challenging empirical settings, civ leverages a regularization assumption that implies existence of a latent categorical variable with fixed finite support achieving the same first stage fit as the observed instrument. See the corresponding working paper Optimal Categorical Instrumental Variables for further discussion and theoretical details. Install the latest development version from GitHub (requires devtools package): Install the latest public release from CRAN: Example from the Simulation of Wiemann (2023) To illustrate civ on a simple example, consider the data generating process from the simulation of Wiemann (2023): For \(i = 1, \ldots, n\), the data generating process is given by \(Y_i = D_i \pi_ {0}(X_i) + X_i\beta_0 + U_i\) and \(D_i= m_0(Z_i) + X_i\gamma_0 + V_i,\) where \((U_i, V_i)\) are mean-zero multivariate normal with \(\sigma_U^2 = 1\), \(\sigma_V^2 = 0.9\), and \(\sigma_{UV} = 0.6 \). \(D_i\) is a scalar-valued endogenous variable, \(X_i\sim\textrm{Bernoulli}(0.5)\) is a binary covariate and \(\beta_0 = \gamma_0 = 0\), and \(Z_i\) is the categorical instrument taking values in \(\{1, \ldots, 40\}\) with equal probability. To introduce correlation between \(Z_i\) and \(X_i\), I further set \(\Pr(Z_i \text{ is odd}\vert X_i = 0) = \Pr(Z_i \text{ is even}\vert X_i = 1) = 0\). The optimal instrument \(m_0\) is constructed by first partitioning the support of \(Z_i\) into two equal subsets and then assigning either \(0\) or \(C\) as values. The scalar \(C\) is chosen such that the variance of the first stage variable is fixed to 1 and the concentration parameter for \(n=800\) is \(\mu^2 = 180\). The data generating process allows for individual treatment effects \(\ pi_0(X_i)\) to differ with covariates. Here, \(\pi_0(X_i) = 1 + 0.5(1 - 2X_i)\) so that the expected treatment effect is simply \(E\pi_0(X) = 1.\) The code snippet below draws \(n=800\) observations from this data generating process. In the generated sample, the observed instrument takes 40 values with varying numbers of observations per instrument: Using only the observed instrument Z, the goal is to estimate the in-sample average treatment effect: Estimate CIV We load the civ package and estimate the categorical instrumental variable estimator where the first stage is restricted to K=2 support points. We also load the AER package to compute heteroskedasticity robust standard errors. See also ?civ and ?summary.civ for details. The CIV estimate and the corresponding standard error are shown below. The associated 95% confidence interval covers the true effect as indicated by the t-value of less than 1.96. Why does CIV do well? The key idea of CIV is to leverage a latent categorical variable Z0 with fewer categories that achieves the same population-level fit in the first stage as the observed instrument Z. Under the assumption that the support of the latent categorical variable is fixed with finite cardinality, it is possible to estimate a mapping from the observed categories to the latent categories. This estimated mapping can then be used to simplify the optimal instrumental variable estimator to a finite dimensional regression problem. Asymptotic properties of the CIV estimator then follow if the first-stage mapping can be estimated at a sufficient rate. Wiemann (2023) provides sufficient conditions for estimation of the mapping at exponential rate using a K-Conditional-Means (KCMeans) estimator. The proposed KCMeans estimator is exact and computes very quickly with time polynomial in the number of observed categories, thus avoiding heuristic solution approaches otherwise associated with KMeans-type problems. See also the kcmeans R package for additional details. In the considered data generating process, the underlying optimal instrument Z0 has two support points. In the first step, CIV attempts to map values of the observed instrument to these support points. The below code snippet shows that is largely succeeds in doing so: Among the 800 observations, only 16 observations are missclassified. Thanks to classifying values of the instrument in the first step, the IV estimation problem substantially simplifies: Instead of using a categorical instrument with 40 values, CIV is equivalent to using the constructed binary instrument: Since correct classification requires very few observations per instrument, CIV is nearly identical to the infeasible oracle estimator that presumes knowledge of the low-dimensional optimal instrument Z0: Comparison of CIV to Alternative Optimal Instrument Estimators To provide some evidence for the practical benefits of CIV over alternative estimators, consider estimating three commonly considered alternatives: two-stage least squares (TSLS), the post-Lasso IV estimator of Belloni et al. (2012), and an IV estimator that uses random forests in the first stage. Key takeaways from the results: TSLS, post-Lasso IV, and random forest-based IV are heavily biased. For extensive finite-sample comparisons, see Wiemann (2023). The TSLS estimate is substantially more biased than the CIV estimate. Post-Lasso IV The post-Lasso IV estimator of Belloni et al. (2012) is heavily biased and the corresponding 95% confidence interval does not cover the in-sample treatment effect as indicated by the t-value of more than 1.96. Random Forest IV The random forest-based IV estimator is heavily biased and the corresponding 95% confidence interval does not cover the in-sample treatment effect as indicated by the t-value of more than 1.96. Belloni A, Chen D, Chernozhukov V, Hansen C (2018). “Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain.” Econometrica, 80(6), 2369-2429. Wiemann T (2023). “Optimal Categorical Instruments.” https://arxiv.org/abs/2311.17021
{"url":"http://rsync.jp.gentoo.org/pub/CRAN/web/packages/civ/readme/README.html","timestamp":"2024-11-12T07:08:30Z","content_type":"application/xhtml+xml","content_length":"34687","record_id":"<urn:uuid:736d18f0-48c6-47b7-989a-99a4917325dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00386.warc.gz"}
Two rectangular blocks, having identical dimensions, can be arranged See Our New JEE Book on Amazon Two rectangular blocks, having identical dimensions, can be arranged : Heat Conduction Problem: Two rectangular blocks, having identical dimensions, can be arranged either in configuration I or in configuration II as shown in the figure. One of the blocks has thermal conductivity κ and the other 2κ. The temperature difference between the ends along the x-axis is the same in both the configurations. It takes 9 s to transport a certain amount of heat from the hot end to the cold end in the configuration I. The time to transport the same amount of heat in the configuration II is? (IIT JEE 2013) A. 2 s B. 3 s C. 4.5 s D. 6.0 s Answer: The answer is (A) i.e., 2 s. Solution: Let $T_h$ and $T_l$ be the temperatures of hot and cold ends and $T_m$ be the temperature at the middle point of configuration I. The rate of heat flow is equal in both the blocks of configuration I because the blocks are connected in series, i.e., \frac{\Delta Q_{\text{I}}}{\Delta t_{\text{I}}}&=\kappa A \frac{(T_h-T_m)}{x}\nonumber \\ & =2\kappa A\frac{(T_m-T_l)}{x}. Solve above equation to get $T_m={(T_h+2T_l)}/{3}$. Thus, above equation becomes &\frac{\Delta Q_{\text{I}}}{\Delta t_\text{I}}=2\kappa A\frac{(T_h-T_l)}{(3x)}. In the configuration II, total rate of heat flow is the sum of heat flows through the two blocks because the blocks are joined in parallel i.e., \frac{\Delta Q_{\text{II}}}{\Delta t_{\text{II}}}&=\kappa A \frac{(T_h-T_l)}{x}+2\kappa A\frac {(T_h-T_l)}{x}\nonumber\\ &=3\kappa A\frac{(T_h-T_l)}{x}. Divide first equation by the second. Substitute $\Delta Q_{\text{I}}=\Delta Q_{\text{II}}$ and $\Delta t_{\text{I}}=9$ s to get $\Delta t_\ text{II}=2$ s. Alternately, the thermal resistance ($\frac{x}{\kappa A}$) of series combination in configuration I is $\frac{3}{2}\frac{x}{\kappa A}$ and that of parallel combination in configuration II is $R_\text {II}=\frac{1}{3}\frac{x}{\kappa A}$. More on Heat Conduction See Our Book
{"url":"https://www.concepts-of-physics.com/thermodynamics/two-rectangular-blocks-having-identical.php","timestamp":"2024-11-14T16:48:18Z","content_type":"text/html","content_length":"15209","record_id":"<urn:uuid:ae6d3fc0-3599-4b74-bd4d-95e4a22ec1b9>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00542.warc.gz"}
The links to the datasets that contain all the common ordinary fractions compared and sorted in ascending order by the users • The links to all the operations of comparing fractions, performed by the users. At this moment, these operations have been grouped into six smaller distinct datasets:
{"url":"https://www.fractii.ro/compared-fractions-by-users.php","timestamp":"2024-11-10T01:18:12Z","content_type":"text/html","content_length":"18183","record_id":"<urn:uuid:625d7ebc-2489-4e08-88d5-06007e11916c>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00287.warc.gz"}
Quadratic Zeeman effect - Peeter Joot's Blogenergy shift – Peeter Joot's Blog Quadratic Zeeman effect December 13, 2015 phy1520 energy perturbation, energy shift, perturbation theory, quadratic Zeeman effect, Zeeman effect [Click here for a PDF of this post with nicer formatting] Q: [1] pr. 5.18 Work out the quadratic Zeeman effect for the ground state hydrogen atom due to the usually neglected \( e^2 \BA^2/2 m_e c^2 \) term in the Hamiltonian. The first order energy shift is For a z-oriented magnetic field we can use \BA = \frac{B}{2} \setlr{ -y, x, 0 }, so the perturbation potential is &= \frac{e^2 \BA^2}{2 m_e c^2} \\ &= \frac{e^2 \BB^2 (x^2 + y^2)}{8 m_e c^2} \\ &= \frac{ e^2 \BB^2 r^2 \sin^2\theta }{8 m_e c^2} The ground state wave function is &= \braket{\Bx}{0} \\ &= \inv{\sqrt{\pi a_0^3}} e^{-r/a_0}, so the energy shift is &= \bra{0} V \ket{0} \\ &= \inv{ \pi a_0^3 } 2 \pi \frac{ e^2 \BB^2 }{8 m_e c^2} \int_0^\infty r^2 \sin\theta e^{-2r/a_0} r^2 \sin^2\theta dr d\theta \\ \frac{ e^2 \BB^2 }{4 a_0^3 m_e c^2} \int_0^\infty r^4 e^{-2r/a_0} dr \int_0^\pi \sin^3\theta d\theta \\ &= – \frac{ e^2 \BB^2 }{4 a_0^3 m_e c^2} \frac{4!}{(2/a_0)^{4+1} } \evalrange{\lr{u – \frac{u^3}{3}}}{1}{-1} \\ \frac{ e^2 a_0^2 \BB^2 }{4 m_e c^2}. If this energy shift is written in terms of a diamagnetic susceptibility \( \chi \) defined by \Delta = -\inv{2} \chi \BB^2, the diamagnetic susceptibility is \chi = -\frac{ e^2 a_0^2 }{2 m_e c^2}. [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. [Click here for a PDF of this post with nicer formatting] Q: [1] pr 5.1 Given a perturbed 1D SHO Hamiltonian H = \inv{2m} p^2 + \inv{2} m \omega^2 x^2 + \lambda b x, calculate the first non-zero perturbation to the ground state energy. Then solve for that energy directly and compare. The first order energy shift is seen to be zero &= V_{00} \\ &= \bra{0} b x \ket{0} \\ &= \frac{x_0}{\sqrt{2}} \bra{0} a + a^\dagger \ket{0} \\ &= \frac{x_0}{\sqrt{2}} \braket{0}{1} \\ &= 0. The first order perturbation to the ground state is &= \sum_{m \ne 0} \frac{ \ket{m} \bra{m} b x \ket{0} }{ \Hbar \omega/2 – \Hbar \omega (m – 1/2) } \\ &= -b \frac{x_0}{\sqrt{2} \Hbar \omega} \sum_{m \ne 0} \frac{ \ket{m} \braket{m}{1} }{ m } \\ &= -b \frac{x_0}{\sqrt{2} \Hbar \omega} \ket{1}. The second order ground state energy perturbation is \bra{0} b x \ket{0^{(1)}} \\ \frac{b x_0}{\sqrt{2}} \bra{0} a + a^\dagger \lr{ -b \frac{x_0}{\sqrt{2} \Hbar \omega} \ket{1} } \\ \frac{b x_0}{\sqrt{2}} \lr{ -b \frac{x_0}{\sqrt{2} \Hbar \omega} } \\ -\frac{b^2 x_0^2}{ 2 \Hbar \omega } \\ -\frac{b^2 }{ 2 \Hbar \omega } \frac{\Hbar}{m \omega} \\ -\frac{b^2 }{ 2 m \omega^2 }, so the total energy perturbation up to second order is \Delta_0 = -\lambda^2 \frac{b^2 }{ 2 m \omega^2 }. To compare to the exact result, rewrite the Hamiltonian as &= \inv{2m} p^2 + \inv{2} m \omega^2 \lr{ x^2 + \frac{2 \lambda b x}{m \omega^2} } \\ &= \inv{2m} p^2 + \inv{2} m \omega^2 \lr{ x + \frac{\lambda b }{m \omega^2} }^2 – \inv{2} m \omega^2 \lr{ \frac{\lambda b }{m \omega^2} }^2. The Hamiltonian is subject to a constant energy shift \Delta E – \inv{2} m \omega^2 \frac{\lambda^2 b^2 }{m^2 \omega^4} \\ – \frac{\lambda^2 b^2 }{2 m \omega^2}. This is an exact match with the second order perturbation result of \ref{eqn:harmonicOscillatorEnergyShiftPertubation:100}. [1] Jun John Sakurai and Jim J Napolitano. Modern quantum mechanics. Pearson Higher Ed, 2014. Harmonic oscillator with energy shift December 5, 2015 phy1520 energy shift, Harmonic oscillator, perturbation
{"url":"https://peeterjoot.com/tag/energy-shift/","timestamp":"2024-11-02T21:30:28Z","content_type":"text/html","content_length":"99791","record_id":"<urn:uuid:5f7e80b4-c73a-41b9-b85d-56f3f83b5279>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00204.warc.gz"}
Your Christmas ski vacation was great, but it unfortunately ran a bit over budget. All is... Number of months 61.08 Number of months 63.61 Your Christmas ski vacation was great, but it unfortunately ran a bit over budget. All is... Your Christmas ski vacation was great, but it unfortunately ran a bit over budget. All is not lost: You just received an offer in the mail to transfer your $13,300 balance from your current credit card, which charges an annual rate of 21.1 percent, to a new credit card charging a rate of 11.7 percent. How much faster could you pay the loan off by making your planned monthly payments of $290 with the new card? (Do not round intermediate calculations and round your answer to 2 decimal places, e.g., Number of months What if there was a 3 percent fee charged on any balances transferred? (Do not round intermediate calculations and round your answer to 2 decimal places, e.g., 32.16.) Number of months Need Online Homework Help? Get Answers For Free Most questions answered within 1 hours. Ask a Question
{"url":"https://justaaa.com/finance/1293979-your-christmas-ski-vacation-was-great-but-it","timestamp":"2024-11-11T07:18:14Z","content_type":"text/html","content_length":"43580","record_id":"<urn:uuid:6eb53372-15ff-4159-91b7-2ef82a11b05a>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00679.warc.gz"}
Geovane Gomes - MATLAB Central of 295,001 16 Questions 1 Answer of 20,160 0 Files of 153,000 0 Problems 0 Solutions How to improve K-means clustering with TF-IDF? Hi all, I’m currently working on a project where I need to classify company segments based on their activity descriptions. I’v... 25 days ago | 1 answer | 0 FE Model with function handle Hello everyone. Is it possible to use a function handle within a femodel? So I could change the value of a material property or... 4 months ago | 1 answer | 0 How to change labels in pde toolbox Hello everyone, Is it possible to change the labels when using the pde toolbox? For example, make face labels correspond to th... 4 months ago | 1 answer | 0 Error using generateMesh function of the pde toolbox Hello everyone, I am trying to generate a mesh to analyze an airplane wing using the PDE toolbox. The attached geometry was mo... 5 months ago | 1 answer | 0 Using a curve to contrutct a geometry for a model in pde toolbox Dear all, How could I consctruct a geometry using a curve (x-y coordinates) for a model in the pde toolbox? For example if I h... 10 months ago | 0 answers | 0 Determine the relational operator in an expression I need to find out the operator in an expression. I tried to convert the symbolic expression using char and then find the posi... 12 months ago | 3 answers | 0 How to get the coefficients of an equation Dear all, Is it possible to extract the coefficients of an equantion defined as below: syms x1 x2 eq = 2*x1 + x2 <= 0 By usi... 12 months ago | 2 answers | 0 Can not use gamma function in an external function Dear, I am trying to generate some random numbers with a specific distribution, but to calculate the parameters of this distrib... 1 year ago | 1 answer | 0 How to generate a Frechet distribution using Methods of Moments? How could I generate extreme value type II distribution for maximum using the method of moments? I think of something similar t... 1 year ago | 1 answer | 0 Using a cell array as an argument of a function handle Hi all, Is it possible to use a cell array as arguments in a function handle? For example: a(1) = {@(x) x(1) + 2}; a(2) = {@... 1 year ago | 1 answer | 0 Assign a value to a matrix element defined as a variable Hi all, Is it possible to assign values (variables) to elements in a matrix that is defined as a function handle? For example:... 1 year ago | 2 answers | 0 Using a class as a function handle Hi all, I'm trying to use a class as a function handle. This works perfectly in this case: k = 8 * ones(1,3) * 1e7; m = [3.57... 1 year ago | 1 answer | 0 dot indexing in a function handle Hi all, Is it possible to use dot indexing in a function handle? Or at least ignore an output? Please look at the code below. ... 1 year ago | 1 answer | 0 Cannot simplify a result Hi all, I am trying to simplify at most the result of Td, but the maximum I get is (3*pi*39270^(1/2))/1666. When I put this res... 2 years ago | 2 answers | 0 Function handle for standard deviation of a GPR Hi all, I am trying to find a way to calculate the standard deviation of any point I want from a GPR model. Function handle fo... 3 years ago | 1 answer | 0
{"url":"https://au.mathworks.com/matlabcentral/profile/authors/22814961","timestamp":"2024-11-01T19:48:54Z","content_type":"text/html","content_length":"88742","record_id":"<urn:uuid:00544a81-41f3-4009-8522-808b9848408e>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00253.warc.gz"}
Birth By Sleep Not open for further replies. What Do you guys spectulate what Birth by sleep means? i think Birth by Sleep was describing the births of nobodies. For instance Roxas: he was "born" when Sora became a heartless..so since he's been described as Sora's half, hes been sleeping within Sora. but when it was mentioned in the beginning of the secret ending, it kinda threw me off. so im not really sure what it means now. Maybe it refers to a new idea by nomura. Kinda like Kingdom Hearts COM's focus was on memories maybe this series is on dreams. Feb 3, 2005 i always thought birth by sleep meant the birth of a keyblader cause the all got their keyblades in that dream state. that makes sense...idk Nomura is really good at tricking people. Think smaller, more legs. Dec 28, 2005 no no, it's japanese, so you read right to left, and get "Sleep By Birth" ...so then how do you speculate that...? it doesnt make sense to me.. Thats not true because there would be no reason to write the phrase in english if you were going to right it backwards. yeah thats what i thought...it doesnt make sense if it was "Sleep by Birth". Maybe It has something to do xemnas getting his memories pieced back like sora did after COM May 13, 2005 I think when sora became a heartless, he kinda was like asleep in the darkness and POOF! the nobodies are born! Jul 9, 2005 I know it has something to do with Yensid's book. The book he has you read has three parts and one of them says something along the lines of "birth by sleep" Jan 25, 2007 When I saw birth by sleep I figured it could mean birth of keybladers. Look at the intros of both kingdom hearts games with roxas and sora floating down to a platform. I bet those two are not the only ones who got that dream. Mickey, Riku, Xehanort, and the knights as well must have gotten this mysterious dream. Once they wake up, they are a whole new person. The question is...how do the dreams come to that certain individual. Where do the dreams come from? Mar 16, 2007 i always thought birth by sleep meant the birth of a keyblader cause the all got their keyblades in that dream state. Ooo.. I like that theory. Nov 11, 2006 for some reason i thought it was a name of a keyblade..... Think smaller, more legs. Dec 28, 2005 yeah thats what i thought...it doesnt make sense if it was "Sleep by Birth". oh, and like 'Birth By Sleep" makes much sense Jan 21, 2006 i always thought the birth of roxas when sora was put in the chamber and slept.. Nov 25, 2006 I believe Sepheroth 1 practically nailed this one. Birth By Sleep could very well mean "birth of the keyblade master" their dreams. Sora got his keyblade from dreaming of it. Roxas got his own from entering the dream stage. Riku probably had the same. So basically, we're almost forgetting about how the keyblade came to the characters. I could laugh at how far back when we used to think the knights were "Chasers" that are born from sleep. Now we see them as heroes. Apr 6, 2007 I think it has something to do with Roxas' llife during Sora's year long sleep(that doesn't include his life in Twighlight town):thumbsup: Jul 31, 2005 The phrase itself "Birth by Sleep" is a very open statement. Someone once theorized that it meant the birth of the worlds again when the tiny bit of lights in the hearts of the children remade the worlds. or something along those line, but other than that it's a pretty open statement that Nomura wants us to ponder upon. So it could mean just about anything in my opinion in reference to Kingdom Not open for further replies.
{"url":"https://www.khinsider.com/forums/index.php?threads/birth-by-sleep.80232/#post-2063382","timestamp":"2024-11-02T21:30:31Z","content_type":"text/html","content_length":"140773","record_id":"<urn:uuid:97dd6113-0d24-4398-bffb-b32f98fa4765>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00447.warc.gz"}
Known Problems and Caveats Known Problems and Caveats Zero Impedance Branches Branches with zero impedance will lead to a non-converging power flow: This is due to the fact that the power flow is based on admittances, which would be infinite for an impedance of zero. The same problem might occur with impedances very close to zero. Zero impedance branches occur for: □ lines with length_km = 0 □ lines with r_ohm_per_km = 0 and x_ohm_per_km = 0 □ transformers with vk_percent=0 If you want to directly connect to buses without voltage drop, use a bus-bus switch.
{"url":"https://pandapower.readthedocs.io/en/latest/powerflow/caveats.html","timestamp":"2024-11-06T00:45:50Z","content_type":"text/html","content_length":"9979","record_id":"<urn:uuid:a1663780-52d4-4a6b-8cd0-f62891d0cb2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00898.warc.gz"}
Introduction to Machine Learning and Reinforcement Learning at Carnegie Mellon University | Lecture notes Probability and Statistics | Docsity Download Introduction to Machine Learning and Reinforcement Learning at Carnegie Mellon University and more Lecture notes Probability and Statistics in PDF only on Docsity! 18-661 Introduction to Machine Learning Reinforcement Learning Spring 2020 ECE – Carnegie Mellon University Announcements • Homework 7 is due on Friday, April 24. You may use a late day if you have any left. • Wednesday’s lecture will be a set of four guest mini-lectures from Samarth Gupta, Jianyu Wang, Mike Weber and Yuhang Yao. Please attend! • Recitation on Friday will be a review for the final exam. • Practice final (multiple choice only) on Monday, April 27. • Final exam, Part I: Wednesday, April 29 during the usual course time. We will email students with a timezone conflict about starting and finishing 1 hour earlier. Closed book except one double-sided letter/A4-size handwritten cheat sheet. • Final exam, Part II: Take-home exam from Friday, May 1 (evening) to Sunday, May 3. Open everything except working with other people, designed to take 2 to 3 hours. 1 Online Learning Recap What Is Online Learning? Online learning occurs when we do not have access to our entire training dataset when we start training. We consider a sequence of data and update the predictor based on the latest sample(s) in the sequence. • Stochastic gradient descent • Perceptron training algorithm • Multi-armed bandits Online learning has practical advantages: • Helps us handle large datasets. • Automatically adapts models to changes in the underlying process over time (e.g., house prices’ relationship to square footage). 4 Feedback in Online Learning Problems What if we don’t get full feedback on our actions? • Spam classification: we know whether our prediction was correct. • Online advertising: we only know the appeal of the ad shown. We have no idea if we were right about the appeal of other ads. • Partial information often occurs because we observe feedback from an action taken on the prediction, not the prediction itself. • Analogy to linear regression: instead of learning the ground truth y , we only observe the value of the loss function, l(y). • Makes it very hard to optimize the parameters! • Often considered via multi-armed bandit problems. 5 Bandit Formulation We can play multiple rounds t = 1, 2, . . . ,T . In each round, we select an arm it from a fixed set i = 1, 2, . . . , n; and observe the reward r(it) that the arm gives. Arm 1 Arm 2 Arm 3 Objective: Maximize the total reward over time, or equivalently minimize the regret compared to the best arm in hindsight. 8 Bandit Formulation Arm 1 Arm 2 Arm 3 • The reward at each arm is stochastic (e.g., 0 with probability pi and otherwise 1). • Usually, the rewards are i.i.d over time. The best arm is then the arm with highest expected reward. • We cannot observe the reward of each arm (the entire reward function): we just know the reward of the arm that we played. Online ads example: arm = ad, reward = 1 if the user clicks on the ad and 0 otherwise 9 Exploration vs. Exploitation Tradeoff Which arm should I play? • Best arm observed so far? (exploitation) • Or should I look around to try and find a better arm? (exploration) We need both in order to maximize the total reward. 10 Thompson Sampling Which arm should you pick next? • ε-greedy: Best arm so far (exploit) with probability 1− ε, otherwise random (explore). • UCB1: Arm with the highest UCB value. Thompson sampling instead fits a Gaussian distribution to the observations for each arm. • Assume the reward of arm i is drawn from a normal distribution. • Find the posterior distribution of the expected reward for each arm: N ( r(i), (Ti + 1)−1 ) . • Generate synthetic samples from this posterior distribution for each arm, which represent your understanding of its average reward. • Play the arm with the highest sample. 13 Continuous Bandits • So far we have assumed a finite number of discrete arms. • What happens if we assume continuous arms? x-axis is the arm, y -axis is the (stochastic) reward. 14 Bayesian Optimization Assume that f (x) is drawn from a Gaussian process {f (x1), f (x2), . . . , f (xn)} ∼ N (0,K) where K is a kernel function, e.g., Kij = exp(−‖xi − xj‖22). 15 Overview of Reinforcement Learning Reinforcement learning is also sometimes called approximate dynamic programming. It can be viewed as a type of optimal control theory with no pre-defined model of the environment. 17 Grasping an Object 18 RL Applications Reinforcement learning can be applied to many different areas. • Robotics: in which direction and how fast should a robot arm move? • Mobility: where should taxis go to pick up passengers? • Transportation: when should traffic lights turn green? • Recommendations: which news stories will users click on? • Network configuration: which parameter settings lead to the best allocation of resources? Similar to multi-armed bandits, but with a notion of state or context. 19 Objectives of Reinforcement Learning We choose the actions that maximize the expected total reward: R(T ) = T∑ t=0 E [r(a(t), s(t))] , R(∞) = ∞∑ t=0 E [ γtr(a(t), s(t)) ] . We discount the reward at future times by γ < 1 to ensure convergence when T =∞. The expectation is taken over the probabilistic evolution of the state, and possibly the probabilistic reward function. A policy tells us which action to take, given the current state. • Deterministic policy: π : S → A maps each state s to an action a. • Stochastic policy: π(a|s) specifies a probability of taking each action a ∈ A given state s. We draw an action from this probability distribution whenever we encounter state s. 22 Example: Robot Movements • Reward of + 1 if we reach [4,3] and -1 if we reach [4,2]; -0.04 for taking each step • What action should we take at state [3,3]? RIGHT • How about at state [3,2]? UP 23 Key Challenges of Reinforcement Learning • The relationship of future states to past states and actions, s(t + 1) ∼ σ(a(t), s(t)), must be learned. • Partial information feedback: the reward feedback r(a(t), s(t)) only applies to the action taken a(t), and may itself be stochastic. Moreover, we may not be able to observe the full state s(t) (more on this later). • Since actions affect future states, they should be chosen so as to maximize the total future reward ∑∞ t=0 γ tr(a(t), s(t)), not just the current reward. We can address these challenges by formulating RL using Markov decision processes. 24 Transition Matrices A Markov chain can be represented with a transition matrix: P = p1,1 p1,2 . . . p1,n p2,1 p2,2 . . . p2,n ... . . . ... ... pn,1 pn,2 . . . pn,n (2) Each entry (i , j) is the probability of transitioning from state i to state j . P = S1 S2 S3 S4 S1 0 1 0 0 S2 0.5 0 0.2 0.1 S3 0.9 0 0 0.1 S4 0 0 0.8 0.2 26 Markov Processes Markov chains are special types of Markov processes, which extend the notion of a Markov chain to possibly infinite numbers of states and continuous time. • Infinite states: For instance, if the state is the temperature in this room. • Continuous time: For instance, if the state is the velocity of a robot arm. • In both cases, we can still define transition probabilities, due to the Markov property. The Markov property says that the state evolution is memoryless: the probability distribution of future states depends only on the value of the current state, not the values of previous states. 27 Motivating the Markov Property The memorylessness of the Markov property significantly simplifies our predictions of future states (and thus, future rewards). • Robotic arms: What is the probability that a block moves from point A to point B in the next 5 minutes? Does it matter where the block was 5 minutes ago? 10 minutes ago? • Taxi mobility: What is the probability there will be, on average, 10 taxis at SFO in the next 10 minutes? Does this depend on the number of taxis that were there 10 minutes ago? 20 minutes ago? 28 Markov Decision Processes in RL • At each time t, the agent experiences a state s(t). We sometimes call the state the “environment.” • At each time t, the agent takes an action a(t), which is chosen from some feasible set A. It then experiences a reward r(a(t), s(t)). • The next state (at time t + 1) is a (probabilistic) function of the current state and action taken: s(t + 1) ∼ σ(a(t), s(t)). The state-action relationships are just a Markov decision process! σ(a(t), s(t)) is given by the transition probabilities p a(t) s(t),j where j are the possible states. 31 Finding RL Policies The State-Value Function The state-value function of a given policy π : S → A gives its expected future reward when starting at state s: Vπ(s) = E [ ∞∑ t=0 γtr (π(s (t)), s(t)) |s(0) = s ] . The expectation may be taken over a stochastic policy and reward as well as the Markov decision process (MDP) of the state transitions. • The action a(t) at any time t is determined by the policy, π(s(t)). • Due to the Markov property of the underlying MDP, the optimal policy at any time is only a function of the last observed state. 32 Optimizing the Action-Value Function Qπ∗(a, s) = E r(a, s) + ∞∑ t=1 γtr (π∗(s(t)), s(t))︸︷︷︸ use policy π∗ |s(0) = s, a(0) = a • We maximize the reward by choosing the action a∗(s) at state s as a∗(s) = arg maxa Qπ∗(a ∗(s), s). • What does π∗ do at time t = 1, and state s(1)? We don’t care about what we did at t = 0...so we can pretend t = 0 again and just choose a to maximize Qπ∗(a, s(1))! • This logic is related to Bellman’s Principle of Optimality (for those familiar with dynamic programming). 35 Policy Search Methods Given the above insight, it suffices to learn either the optimal policy π∗ or the optimal action-value function Q∗. When we know the MDP of the state transitions, these approaches are called policy iteration and value iteration. They depend on the Bellman equation: Qπ∗(a, s) = r(a, s) + γ ∑ s′∈S pas,s′ [Vπ∗(s ′)]︸︷︷︸ Maximum future reward , which follows from the fact that Vπ∗(s) = maxa Qπ∗(a, s). 36 Value Iteration Initialize V (s) for each state s. Suppose we know the reward function r(a, s) and transition probabilities pas,s′ . • Update Q(a, s): Q(a, s)← E [r(a, s)] + γ ∑ s′∈S pas,s′V (s ′)︸︷︷︸ Estimate of expected future reward • Update V (s)← maxa Q(a, s). • Repeat until convergence. Value iteration is guaranteed to converge to the optimal value function V ∗(s), and optimal action-value function Q∗(a, s). 37 Direct Policy Search If we parameterize the policy, then finding the optimal policy simply means finding the optimal parameter values. • Gradient descent: Try to estimate the gradients of the action-value function Qπ(a, s) and evolve the parameters accordingly. This can be difficult since we don’t know Qπ(a, s) in the first place. • Evolutionary optimization: Simulated annealing, cross-entropy search, etc. are generic optimization algorithms that do not require knowledge of the gradient. Such methods often require temporal difference adjustments in order to converge fast enough. 40 Q-Learning Evolve the action-value function to find its optimal value Qπ∗(a, s). • Initialize our estimate of Q(a, s) to some arbitrary value. We have dropped the dependence on π∗, as π∗ is determined by Qπ∗(a, s). • After playing action a in state s(t) and observing the next state s(t + 1), update Q as follows: Q(a, s(t))← (1−α)Q(a, s(t))︸︷︷︸ old value +α ( r(a, s(t)) + γmax a Q(a, s(t + 1)) ) ︸︷︷︸ learned value . Here α is the learning rate and r(a, s(t)) is our observed reward. The term maxa Q(a, s(t + 1)) is our estimate of the expected future reward for the optimal policy π∗. Q-learning has many variants: for instance, deep Q-learning uses a neural network to approximate Q. 41 Exploration vs. Exploitation Given Q(a, s), how do we choose our action a? • Exploitation: Take action a∗ = arg maxa Q(a, s). Given our current estimate of Q, we want to take what we think is the optimal action. • Exploration: But we might not have a good estimate of Q, and we don’t want to bias our estimate towards an action that turns out not to be optimal. • ε-Greedy: With probability 1− ε, choose a∗ = arg maxa Q(a, s), and otherwise choose a randomly. Usually, we decrease ε over time as additional exploration becomes less important. 42 Extensions and Variations of RL • Multi-agent reinforcement learning: Suppose multiple agents are simultaneously using RL to find their optimal actions, and that one agent’s actions affect another’s. The agents must then learn to compete with each other. • Distributed reinforcement learning: We can speed up the search for the optimal policy by having multiple agents explore the state space in parallel. • Hierarchical reinforcement learning: Lower-level learners try to satisfy goals specified by a higher-level learner, which are designed to maximize an overall reward. • Transfer learning: Learn how to perform a new task based on already-learned methods for performing a related one. 45 Types of Machine Learning Supervised Learning • Training data: (x , y) (features, label) samples. We want to predict y to minimize a loss function. • Regression, classification Unsupervised Learning • Training data: x (features) samples only. We want to find “similar” points in the x space. • Clustering, PCA/ICA Reinforcement Learning • Training data: (s, a, r) (state,action,reward) samples. We want to find the best sequence of decisions so as to maximize long-term reward. • Robotics, multi-armed bandits 46 Summary You should know: • What a Markov decision process is (action and state variables, transition probabilities). • What the action-value and state-value functions are. • Differences between supervised, unsupervised, and reinforcement learning. 47
{"url":"https://www.docsity.com/en/docs/introduction-to-machine-learning-and-reinforcement-learning-at-carnegie-mellon-university/9846070/","timestamp":"2024-11-14T15:33:30Z","content_type":"text/html","content_length":"253648","record_id":"<urn:uuid:60c78d75-0cbb-4f60-8cac-e1b83dcdd2de>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00426.warc.gz"}
Diluted EPS - easy question Orange Companys net income for 2004 was $7,600,000 with 2,000,000 shares outstanding. The average share price in 2004 was $55. Orange had 10,000 shares of eight percent $1,000 par value convertible preferred stock outstanding since 2003. Each preferred share was convertible into 20 shares of common stock. Orange Companys diluted earnings per share (Diluted EPS) for 2004 is closest to: A) $3.40. B) $3.45. C) $3.80. D) $3.25. C 10,000 preferred shares x (1000 x .08) = 800,000 extra income 10,000 x 20 = 200000 extra shares 8.4M/2.2M = 3.81 Answer is ‘A’ First calculate the basic EPS: (7.6M-8%*1000*10,000)/2M=3.4 Check if the conversion of preferred is dilutive: 7.6M/(2M+10,000*20)=3.45> basic of 3.4, therefore not dilutive, the diluted EPS is equal with the basic EPS of 3.4, answer should be A. So in any question asking to calculate diluted EPS, we would have to calculate basic EPS first to check whether the security is dilutive or not? sure makes the question longer! Yes, you do have to calculate the basic EPS (as a precaution). oops… note to self, review this section… I thought you added back convertible preferred dividends when calculating the numerator? Because if the shares were converted the dividends would not have been paid? newsuper basic EPS = NI - Pref Divs / WACSO Diluted EPS = NI (adjusted for dilution effect) - Pref Divs (adjusted for dilution effect) / WACSO (adjusted for dilution effect) In this example, you don’t add the preferreds to NI. You just don’t subtract them (because you’re not paying them if converted). Oh yeah, I see. Only if they had given you a figure for income after dividends would you add it back? thanks you could have a bunch of preferred dividends (the non-convertible kind) (NCPD) and then the Convertible Preferred Dividends(CPD) Basic EPS = (NI - NCPD - CPD ) / WASO Diluted EPS = (NI - NCPD) / (WASO + Converted Shares due to the preferred). If Diluted < Basic --> then use the Diluted #.
{"url":"https://www.analystforum.com/t/diluted-eps-easy-question/26130","timestamp":"2024-11-03T19:25:38Z","content_type":"text/html","content_length":"38320","record_id":"<urn:uuid:721c0a79-c305-417d-a3b5-027a3a2e7d78>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00334.warc.gz"}
EViews Help: @rwish Wishart random draw. Syntax: @rwish(S, n) @rwishc(S, n) @rwishi(S, n) @rwishic(S, n) Return: sym Draw a random symmetric Wishart matrix using the The Wishart density is given by There are four different forms of the function, corresponding to different ways of specifying rwish” command and how they change the interpretation of the S matrix argument: @rwish “” Supply Supply the Cholesky decomposition of @rwishc “c” This form is more efficient when performing multiple draws from the same distribution (compute the Cholesky once, but sample many times). @rwishi “i” This form is more efficient than explicitly inverting Supply the Cholesky decomposition of @rwishic “ic” This form combines the efficiencies of the Cholesky and inverse forms. n random draws from i.e., though the mathematical definition has been extended to cover real-valued n. = @rwish(@identity(3), 5) returns a random draw from the
{"url":"https://help.eviews.com/content/functionref_r-@rwish.html","timestamp":"2024-11-05T02:54:34Z","content_type":"application/xhtml+xml","content_length":"22057","record_id":"<urn:uuid:f3d8a8a6-c2af-44a4-903a-81c0ad2f02f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00165.warc.gz"}
Go to the source code of this file. DOUBLE PRECISION function dlantr (NORM, UPLO, DIAG, M, N, A, LDA, WORK) DLANTR returns the value of the 1-norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular matrix. Function/Subroutine Documentation DOUBLE PRECISION function dlantr ( character NORM, character UPLO, character DIAG, integer M, integer N, double precision, dimension( lda, * ) A, integer LDA, double precision, dimension( * ) WORK DLANTR returns the value of the 1-norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular matrix. Download DLANTR + dependencies [TGZ] [ZIP] [TXT] DLANTR returns the value of the one norm, or the Frobenius norm, or the infinity norm, or the element of largest absolute value of a trapezoidal or triangular matrix A. DLANTR = ( max(abs(A(i,j))), NORM = 'M' or 'm' ( norm1(A), NORM = '1', 'O' or 'o' ( normI(A), NORM = 'I' or 'i' ( normF(A), NORM = 'F', 'f', 'E' or 'e' where norm1 denotes the one norm of a matrix (maximum column sum), normI denotes the infinity norm of a matrix (maximum row sum) and normF denotes the Frobenius norm of a matrix (square root of sum of squares). Note that max(abs(A(i,j))) is not a consistent matrix norm. NORM is CHARACTER*1 [in] NORM Specifies the value to be returned in DLANTR as described UPLO is CHARACTER*1 Specifies whether the matrix A is upper or lower trapezoidal. [in] UPLO = 'U': Upper trapezoidal = 'L': Lower trapezoidal Note that A is triangular instead of trapezoidal if M = N. DIAG is CHARACTER*1 Specifies whether or not the matrix A has unit diagonal. [in] DIAG = 'N': Non-unit diagonal = 'U': Unit diagonal M is INTEGER [in] M The number of rows of the matrix A. M >= 0, and if UPLO = 'U', M <= N. When M = 0, DLANTR is set to zero. N is INTEGER [in] N The number of columns of the matrix A. N >= 0, and if UPLO = 'L', N <= M. When N = 0, DLANTR is set to zero. A is DOUBLE PRECISION array, dimension (LDA,N) The trapezoidal matrix A (A is triangular if M = N). If UPLO = 'U', the leading m by n upper trapezoidal part of the array A contains the upper trapezoidal matrix, and the strictly lower triangular part of A is not referenced. [in] A If UPLO = 'L', the leading m by n lower trapezoidal part of the array A contains the lower trapezoidal matrix, and the strictly upper triangular part of A is not referenced. Note that when DIAG = 'U', the diagonal elements of A are not referenced and are assumed to be one. LDA is INTEGER [in] LDA The leading dimension of the array A. LDA >= max(M,1). WORK is DOUBLE PRECISION array, dimension (MAX(1,LWORK)), [out] WORK where LWORK >= M when NORM = 'I'; otherwise, WORK is not Univ. of Tennessee Univ. of California Berkeley Univ. of Colorado Denver NAG Ltd. September 2012 Definition at line 141 of file dlantr.f.
{"url":"https://netlib.org/lapack/explore-html-3.4.2/d0/dc4/dlantr_8f.html","timestamp":"2024-11-11T12:55:36Z","content_type":"application/xhtml+xml","content_length":"13931","record_id":"<urn:uuid:50c57fef-9213-4bfb-85a4-c60c3cb81f08>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00526.warc.gz"}
Important Math Topics and Skills Needed for Data Science Do you want to be a data scientist? If you came here and reading this article, YES! I have shared all the skills required to become a data scientist. Mathematics is one of them, very important and vital. Without mathematics, you can not be a data scientist. I’m not good at mathematics; can I become a data scientist? This is the most common question asked by every alternate Data Science aspirants. In this article, I’m sharing math needed for data science along with my own experience and learning. Going through this complete article, you will also learn how you should not be afraid of mathematics and why mathematics should not hold you back to become future Data Scientist. Let’s deep dive. Why mathematics is important in Data Science? Machines understand binaries o’s and 1’s. It deals with numbers very well than any other data format. Moving to machine learning, it is all about math. Almost all the data you used for data analytics will be in numeric format. To parse, to filter, to analyze and to find the pattern you have to dwell into numerous data numerical features. Math is Important! If I’m don’t do the math, I’m not a Data Scientist. If you are an aspiring data scientist, Now you might have lots of questions in your mind. First question. How much math knowledge Data Scientist should know? You don’t require a masters or bachelors degree in mathematics. I have mastered in Computer Science. To bring your interest in mathematics, it would be great if you have basics knowledge of mathematics. If you are a mathematics student in your college and have a good understanding and interests in math topics, things are not difficult for you. To learn mathematics, two things are required. • Knowing the importance of mathematics • Interest in learning mathematics. Without knowing the importance, you cannot bring interest. If you hold these two things, you are the Right Person to start learning Data Science. Import Skills and Topics from Math needed for Data Science What are the essentials that I need to get myself into Data Science and Machine Learning? Here are my secret sauce and important mathematic skills you required to become a data scientist. Generally, mathematics falls into two major areas- linear algebra and geometry. Forget about geometry, for data science you have to deal with linear algebra. Linear Algebra is one of the most important topics from the math you need to learn. For every data manipulation work, you need a data structure to organize your data and arithmetic operation to analyze your data. • Sets, Vectors, Matrices, Arrays are important data structures to organize your data. • Arithmetic Operations you perform on row data is called Data Wrangling. It is also called as Data Munging. For analyzing the data, you have to perform many statistical and probability operations. Some important topics form Statistics and Probability: • Descriptive Stats • Inferential Stats • Hypothesis Testing • Different Statistical Tests (Chi-square/ t-test/ Z test/ ANOVA/Regression/Sampling/BootStapping/Bagging/Cross Validation) For machine learning, along with the above topics, you have to explore some of the major math topics like linear and nonlinear algebra, calculus and limits. Calculus is an advanced topic in mathematics. Important Tips for Data Science Aspirants • If you are new to these mathematics topics, don’t hold yourself back. Rather than start learning data science with whatever mathematics knowledge you have. While going through the data science project, explore mathematics skills in parallel. • There are many Python libraries in data science. You can use those libraries for performing mathematical operations, rather than dealing with mathematics with bare hands. • Data science is highly dynamic and changing every day, so the mathematics. You have to keep eyes on everything going on. • To become an expert data scientist, you need to be good in these mathematical topics and skills for sure. The more you know mathematics, the better you become in your data science job. Do you have any doubt or thoughts to share? Write in the comment. Keep learning Data Science!
{"url":"https://www.csestack.org/math-needed-data-science/","timestamp":"2024-11-07T16:51:27Z","content_type":"text/html","content_length":"151650","record_id":"<urn:uuid:138bd309-97ad-4127-bd6c-721f6454c3c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00888.warc.gz"}
Car Payment Calculator NH Online Home » Simplify your calculations with ease. » Vehicle Calculators » Car Payment Calculator NH Online Planning to buy a car can be an exciting prospect, but the financial aspect of it often requires careful consideration. One crucial component to consider is the car loan payments. This is where the Car Payment Calculator NH comes into play. The Car Payment Calculator NH is a digital tool that enables you to calculate your prospective monthly car loan payments. By inputting the total cost of the car, the annual interest rate, and the total number of monthly payments, you can estimate the amount you'll pay each month. Working of the Car Payment Calculator NH The calculator uses a specific formula to estimate your monthly payments. You start by entering the total cost of the car (the principal amount), the annual interest rate, and the total number of monthly payments. The calculator then processes these variables to provide an estimate of your monthly payments. Car Payment Formula and Variables The formula used by the calculator is as follows: Monthly Payment = (P x r x (1 + r)^n) / ((1 + r)^n - 1) Where: P = Principal amount (the total cost of the car or loan amount) r = Monthly interest rate (annual interest rate divided by 12 and expressed as a decimal) n = Total number of monthly payments For instance, suppose you plan to buy a car priced at $30,000, with an annual interest rate of 5%, and intend to make payments over a period of 60 months. Inputting these values into the calculator will give you an estimated monthly payment. Personal Budgeting The calculator can help individuals plan their budgets by providing an estimate of their monthly car loan payments. Loan Comparison Using the calculator, you can compare loan offers from different lenders and choose the one that best suits your financial situation. Dealership Negotiations Knowing your estimated monthly payment can also be beneficial during negotiations with car dealerships. Frequently Asked Questions Can I use the calculator for different car models? Yes, the Car Payment Calculator NH can be used for any car model as long as you know the total cost of the car, the annual interest rate, and the total number of monthly payments. Is the estimated monthly payment accurate? While the calculator provides an estimate, the actual monthly payment might vary based on factors like your credit score, the lender's terms, and any additional fees. In conclusion, the Car Payment Calculator NH is a valuable tool for anyone planning to buy a car through a loan. It helps you plan your budget, compare loan offers, and negotiate effectively with Leave a Comment
{"url":"https://calculatorshub.net/vehicle-calculators/car-payment-calculator-nh/","timestamp":"2024-11-08T15:41:51Z","content_type":"text/html","content_length":"114278","record_id":"<urn:uuid:ec15675e-8a8a-4a31-be6e-a2746bfbedf1>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00253.warc.gz"}
Excel Formula for Sum of Negative Numbers without Hidden Cells In this tutorial, we will learn how to write an Excel formula in Python that calculates the sum of a column where the number is negative and the cell is not hidden. This formula can be useful when working with data in Excel and you need to perform calculations based on specific conditions. By using the SUMIFS function, we can easily calculate the sum of a column with negative numbers without including hidden cells. Let's dive into the details of this formula and how it works. The Excel formula for this task is as follows: =SUMIFS(D:D, D:D, "<0", D:D, "<>") This formula uses the SUMIFS function, which allows us to calculate the sum based on multiple criteria. The first argument of the SUMIFS function is the range to sum, which in this case is column D (D:D). The second argument is the criteria range, which is also column D (D:D). The third argument is the criteria for the first condition, which is "<0". This condition checks if the number is negative. The fourth argument is the criteria range for the second condition, which is again column D (D:D). The fifth argument is the criteria for the second condition, which is "<>". This condition checks if the cell is not hidden. To understand how this formula works, let's consider an example. Suppose we have the following data in column D: If we apply the formula =SUMIFS(D:D, D:D, "<0", D:D, "<>") to this data, it would return the sum of -3, -2, and -1, which is -6. This is because these are the negative numbers in column D that are not hidden. In conclusion, the Excel formula =SUMIFS(D:D, D:D, "<0", D:D, "<>") allows us to calculate the sum of a column where the number is negative and the cell is not hidden. This formula can be easily implemented in Python using libraries such as pandas or openpyxl. By understanding the logic behind this formula, you can perform similar calculations in your own projects and analyze data more An Excel formula =SUMIFS(D:D, D:D, "<0", D:D, "<>") Formula Explanation This formula uses the SUMIFS function to calculate the sum of column D where the number is negative and the cell is not hidden. Step-by-step explanation 1. The SUMIFS function is used to calculate the sum based on multiple criteria. 2. The first argument of the SUMIFS function is the range to sum, which is column D (D:D). 3. The second argument of the SUMIFS function is the criteria range, which is also column D (D:D). 4. The third argument of the SUMIFS function is the criteria for the first condition, which is "<0". This condition checks if the number is negative. 5. The fourth argument of the SUMIFS function is the criteria range for the second condition, which is again column D (D:D). 6. The fifth argument of the SUMIFS function is the criteria for the second condition, which is "<>". This condition checks if the cell is not hidden. 7. The formula calculates the sum of column D where the number is negative and the cell is not hidden. For example, if we have the following data in column D: | D | | | | 5 | | -3 | | 7 | | -2 | | 9 | | -1 | | 4 | The formula =SUMIFS(D:D, D:D, "<0", D:D, "<>") would return the sum of -3, -2, and -1, which is -6. This is because these are the negative numbers in column D that are not hidden.
{"url":"https://codepal.ai/excel-formula-generator/query/ms4aTMAp/excel-formula-sum-column-negative-without-hidden-cells","timestamp":"2024-11-09T10:54:18Z","content_type":"text/html","content_length":"96448","record_id":"<urn:uuid:0d04c583-7975-475e-886b-946b0adf33c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00352.warc.gz"}
Analog to Digital Converter | What is ADC, its Types, specification and Application - Easy Electronics Analog to Digital Converter | What is ADC, its Types, specification and Application Analog to Digital Converter performs a function which is exactly performed to that of a Digital to Analog Converter. The input to an A to D converter is the analog voltage V[A] and at the output, we get an “n” bit digital word. Block Diagram of ADC (Analog to Digital Converter) In the N-bit digital output d[1] represents the most significant bit (MSB) and d[n] is the least significant bit (LSB). The analog input voltage V[A] produces an output digital word, having functional value “D” is given by, D= d_1 2^{-1}\,+\,d_2 2^{-2}\,+......+\,d_n 2^{-n}\, In addition to the analog input voltage V[A], the ADC block has a reference voltage V[R] input and two control lines SOC and EOC. The start of conversion (SOC) input is used to start the A to D converter whereas the end of conversion (EOC) output goes high to indicate that the conversion is complete. The relation between analog input voltage V[i] and the digital output D is given by, D= d_1 2^{-1}\,+\,d_2 2^{-2}\,+......+\,d_n 2^{-n}\,=\frac{V_i}{K V_R}=\frac{V_i}{V_{FS}} • K=Constant, • V[R] = reference voltage, • V[FS] = Full scale input voltage, • V[i]= Analog input voltage The time difference between the SCO signal and EOC signal is called as “Conversion Time“. The Conversion time should be as small as possible. Practically t can have values from a few hundreds of µs to a few msec. Types of ADC (Analog to Digital Converter) There are various circuits are available for the A to D converter conversion. They are different principles of operation for conversion. Some of them are: • Tracking or Servo type • Single slope A to D converter Out of them the most commonly used ADCs are successive approximation type and integrator type ADC. All the circuits mentioned above have theory advantages and dendrites and they are proffered specific application. You can learn each type of Analog to digital converter in detailed by clicking their names. Guidelines to selection ADC In this section we are going to discuss the general guidelines about the selecting an ADC. While selecting the ADC the following points should be considered. 1. The number of bits: • Depending on the requirement of resolution we can select an 8-bit, 12-bit, or 16-bit ADC. Higher the number of bits better the resolution. 2. The required accuracy: • The accuracy required for different applications is going to be different, so the selection of ADC should be done accordingly. 3. Speed or conversion time: • The speed of an ADC should be large enough or conversion time should be small in order to convert the fast-changing analog signal successfully into digital form. 4. Range of the input signal: • The ADC full-scale input range should match with the complete range of the analog input signal. 5. Cost budget: • Cheaper ADCs do not provide high performance and high-performance ADCs are costly. So selection of an ADC is a compromise between performance and cost. Characteristics of ADC (Analog to Digital converter) Some of the important characteristics of ADC are: 1. Resolution: • Resolution is defined as the maximum number of digital output codes. This is the same as that of a DAC. • Alternatively, resolution can be defined as the ratio of the change in the value of the input analog voltage V[A], required to change the digital output by 1 LSB. \boxed{Resolution=\frac{V_{FS}}{2^n - 1}} 2. Conversion Time: • It is the total time required to convert the analog signal into a corresponding digital output. • As we know, the conversion time depends on the conversion technique used for an ADC. • The conversion time also depends on the propagation delay introduced by the circuit components. • Conversion time should ideally be zero and practically as small as possible. 3. Quantization Error: • As shown in the figure, the digital output is not always an accurate representation of the analog input. • For example, any input voltage between 1/8 and 2/8 of full scale will be converted to a digital word of “001”. This approximation process is called quantization and the error due to quantization is called quantization error. • The maximum value of quantization error is ±1/2 LSB. • The quantization error should be as small as possible. It can be reduced by increasing the number of bits. The increase in the number of bits will also improve resolution. Application of ADC (Analog to Digital Converter) Some of the important general applications of ADC is as follows: • In the digital instruments such as digital voltmeter, frequency counter, etc. • In the data acquisition system • In the digital tachometers for speed measurements and feedback • In digital recording and reproduction • In computerized instrument systems • NC and CNC machines 1 thought on “Analog to Digital Converter | What is ADC, its Types, specification and Application” 1. You’ve explained it better. thanks to your straightforward writing. It provides brief information about analog to digital converters. Leave a Comment This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://easyelectronics.co.in/analog-to-digital-converter/","timestamp":"2024-11-05T12:40:01Z","content_type":"text/html","content_length":"164121","record_id":"<urn:uuid:42843ea0-0902-4116-8317-e78972fa374a>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00010.warc.gz"}
Cooper's Formula VO2 Max Calculator Cooper's 12 Minute Distance Equation for VO2 Max VO₂ max is the maximal oxygen consumption or maximum aerobic capacity of an individual, it is the number of liters of oxygen one's body can transport per minute during incremental exercise (exercise that increase in intensity over time.) VO₂ max is a measure of physical fitness since people who are in better shape can use and transport more oxygen throughout their bodies during intense exercise. Besides L/min, VO₂ max may also be measured in milliliters of oxygen per kilogram of body weight per minute, or mL/kg/min. A person's true VO₂ max can be measured in a clinical setting using treadmill and a device that records the O₂ and CO₂ concentrations in the air that is inhaled and exhaled. The maximal level is achieved when the body's consumption of oxygen holds steady even as the intensity of the exercise increases. Since most people can't determine their VO₂ max level in an exercise lab, there are approximation formulas. Dr. Kenneth Cooper developed the 12-minute running test in the 1960s while working for the Air Force. The Cooper Test estimates your VO₂ max from the distance run in 12 minutes. The formula has the advantage of being very simple to implement. It is programmed into the calculator and explained below. Cooper's Equation for VO2 Max Cooper's VO₂ max estimation formula depends solely on the distance covered during 12 minutes of sustained running. Here, distance is measured in meters and VO₂ max is measured in mL/kg/min. The formula is VO₂ Max = (D - 504.9)/44.73, where D is the distance run (in meters) during 12 minutes. Because it only takes one variable into account, and because it was tested on a non-representative sample of the human population, it may not always provide the best estimate, however, it is a useful metric when used in conjunction with other VO₂ formulas and other physical fitness metrics. Sara warms up for 5 minutes and then times herself for 12 minutes while running. At the end of 12 minutes she has run 1.75 miles (7 laps around a track). Since 1.75 miles = 2816 meters, she can estimate her VO₂ by computing VO₂ Max = (2816 - 504.9)/44.73 = 51.67 mL/kg/min. © Had2Know 2010
{"url":"https://www.had2know.org/health/cooper-vo2-max-formula-calculator.html","timestamp":"2024-11-11T10:24:28Z","content_type":"text/html","content_length":"37040","record_id":"<urn:uuid:0614eac6-6f53-4169-bfe8-3b361fe93a03>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00571.warc.gz"}
Convert Million BTU (MMBtu) (Energy) Convert Million BTU (MMBtu) Direct link to this calculator: Convert Million BTU (MMBtu) (Energy) 1. Choose the right category from the selection list, in this case 'Energy'. 2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and π (pi) are all permitted at this point. 3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Million BTU [MMBtu]'. 4. The value will then be converted into all units of measurement the calculator is familiar with. 5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so. Utilize the full range of performance for this units calculator With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '340 Million BTU'. In so doing, either the full name of the unit or its abbreviation can be usedas an example, either 'Million BTU' or 'MMBtu'. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case 'Energy'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second. Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(58 * 70) MMBtu'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '34 Million BTU + 46 Million BTU' or '82mm x 94cm x 7dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question. The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4). If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 7.879 604 866 567 2×1020. For this form of presentation, the number will be segmented into an exponent, here 20, and the actual number, here 7.879 604 866 567 2. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket calculators, one also finds the way of writing numbers as 7.879 604 866 567 2E+20. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 787 960 486 656 720 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications.
{"url":"https://www.convert-measurement-units.com/convert+Million+BTU.php","timestamp":"2024-11-04T14:10:55Z","content_type":"text/html","content_length":"59438","record_id":"<urn:uuid:f96be4d3-dd47-4a0f-a574-95bd69e9dee7>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00363.warc.gz"}
I want to calculate a Due Date based on a defined Frequency. Audit frequencies are Annually, Biennially or Triennially. I have used the following formula but I get the #UNPARSEABLE output. =IF([Audit Frequency]@row=Biennially, "[Last Audit Date]@row+730", IF([Audit Frequency]@row=Triennially, "[Last Audit Date]@row+1095", IF([Audit Frequency]@row=Annually, "[Last Audit Date]@row+365", Best Answer • @Doris F Try this. =IF([Audit Frequency]@row="Biennially", [Last Audit Date]@row+730, IF([Audit Frequency]@row="Triennially", [Last Audit Date]@row+1095, IF([Audit Frequency]@row="Annually", [Last Audit Date] @row+365, "N/A"))) • @Doris F Try this. =IF([Audit Frequency]@row="Biennially", [Last Audit Date]@row+730, IF([Audit Frequency]@row="Triennially", [Last Audit Date]@row+1095, IF([Audit Frequency]@row="Annually", [Last Audit Date] @row+365, "N/A"))) Help Article Resources
{"url":"https://community.smartsheet.com/discussion/84450/i-want-to-calculate-a-due-date-based-on-a-defined-frequency","timestamp":"2024-11-11T07:50:48Z","content_type":"text/html","content_length":"398978","record_id":"<urn:uuid:f6b3491a-3395-4b6f-8750-d9e5e4dec9f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00316.warc.gz"}
Percentage Change Calculator Our percentage change calculator will help you calculate the percentage change between two values, just provide the two values and our calculator will do the rest. Have you ever noticed how prices of things seem to go up or down over time? Maybe your favorite snack got more expensive, or that new video game you wanted went on sale. These changes are often described as percentage changes. Understanding what percentage change means and how to calculate it can be really useful in many situations. On this page, we'll explore percentage change in detail and learn how to calculate it What is Percentage Change? Percentage change is a way to measure how much something has increased or decreased compared to its original value. It expresses the change as a percentage, which makes it easy to compare different changes. For example, if the price of an item goes up from $10 to $12, we can say that the price has increased by 20%. Similarly, if a student's test score drops from 80% to 72%, we can calculate the percentage change to see how much their score has decreased. Percentage change is used in many everyday situations, such as: • Tracking changes in prices of goods and services • Analyzing stock market performance • Monitoring changes in population sizes • Comparing test scores or grades over time How Do I Calculate the Percent Change? To calculate the percent change, you need to know two things: the original value and the new value. The original value is the starting point, and the new value is the value after the change has Here are the steps to calculate percent change: 1. Find the difference between the new value and the original value. 2. Divide the difference by the original value. 3. Multiply the result by 100 to get the percent change. Let's break it down with an example: Suppose your allowance increased from $10 to $12. To calculate the percent change, follow these steps: 1. Find the difference between the new value ($12) and the original value ($10): $12 - $10 = $2 2. Divide the difference ($2) by the original value ($10): $2 / $10 = 0.2 3. Multiply the result (0.2) by 100 to get the percent change: 0.2 × 100 = 20% So, your allowance increased by 20%. Percent Change Formula The formula for calculating percent change is: Percent Change = (New Value - Original Value) / Original Value × 100 Let's break down the components of this formula: • New Value: The value after the change has occurred • Original Value: The starting value before the change • (New Value - Original Value): The difference between the new and original values • (New Value - Original Value) / Original Value: The difference divided by the original value, which gives you the decimal form of the percent change • × 100: Multiplying the decimal form by 100 converts it to a percentage It's important to remember that the original value is used as the denominator in the formula. This is because the percent change is calculated relative to the original value. Examples of Calculating Percentage Change To better understand how to calculate percentage change, let's go through a few examples. Example 1: Positive Percent Change Suppose the price of a book increased from $15 to $18. Calculate the percent change in the price of the book. Given information: • Original Value (Original Price): $15 • New Value (New Price): $18 Step 1: Find the difference between the new value and the original value. New Value - Original Value = $18 - $15 = $3 Step 2: Divide the difference by the original value. $3 / $15 = 0.2 Step 3: Multiply the result by 100 to get the percent change. 0.2 × 100 = 20% Therefore, the price of the book increased by 20%. Example 2: Negative Percent Change Imagine your favorite basketball team scored 80 points in their last game, but only 72 points in the current game. Calculate the percent change in their score. Given information: • Original Value (Previous Game Score): 80 points • New Value (Current Game Score): 72 points Step 1: Find the difference between the new value and the original value. New Value - Original Value = 72 - 80 = -8 Step 2: Divide the difference by the original value. -8 / 80 = -0.1 Step 3: Multiply the result by 100 to get the percent change. -0.1 × 100 = -10% Therefore, the team's score decreased by 10%. How to Find the Percentage Change Between Negative Numbers? Sometimes, you may need to calculate the percentage change between two negative numbers. The process is similar, but you need to be careful with the signs. Example: Calculating Percentage Change Between Negative Numbers Suppose a company's net loss decreased from -$50,000 to -$40,000. Calculate the percent change in the company's net loss. Given information: • Original Value (Previous Net Loss): -$50,000 • New Value (New Net Loss): -$40,000 Step 1: Find the difference between the new value and the original value. New Value - Original Value = -$40,000 - (-$50,000) = $10,000 Step 2: Divide the difference by the original value. $10,000 / (-$50,000) = -0.2 Step 3: Multiply the result by 100 to get the percent change. -0.2 × 100 = -20% Therefore, the company's net loss decreased by 20%. In this example, even though both values were negative, we treated the original value (-$50,000) as the starting point and calculated the percentage change relative to that value. Other Similar Calculators Check out other calculators that are similar to this one. Frequently Asked Questions (FAQ) Can percentage change be more than 100%? Yes, percentage change can be more than 100%. This happens when the new value is more than double the original value. What if the original value is zero? If the original value is zero, you cannot calculate the percent change using the standard formula because division by zero is undefined. In such cases, you can say that the percent change is undefined or describe the change using different terms. Is percent change the same as percent difference? No, percent change and percent difference are not the same. Percent change measures the increase or decrease relative to the original value, while percent difference compares the absolute difference between two values without considering which value is the original. Can percentage change be negative? Yes, percentage change can be negative. A negative percent change indicates a decrease from the original value.
{"url":"https://thatcalculator.com/calculators/percentage-change/","timestamp":"2024-11-08T21:21:04Z","content_type":"text/html","content_length":"29005","record_id":"<urn:uuid:93a2d77d-48ad-4143-9464-92877995ceb9>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00341.warc.gz"}
R Functions in one variable can be represented by a graph. R Each ordered pair (x, f(x)) that makes the equation true is a point on the graph. R Graph. - ppt download Presentation is loading. Please wait. To make this website work, we log user data and share it with processors. To use this website, you must agree to our Privacy Policy , including cookie policy. Ads by Google
{"url":"https://slideplayer.com/slide/8027891/","timestamp":"2024-11-13T15:22:18Z","content_type":"text/html","content_length":"146263","record_id":"<urn:uuid:0c740a27-7442-4d04-af0c-e69d5dc680f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00160.warc.gz"}
Models and Interpretation Methods for Single-Hole Flowmeter Experiments Geosciences Montpellier, Montpellier University, CNRS, 34090 Montpellier, France Author to whom correspondence should be addressed. Submission received: 30 June 2023 / Revised: 2 August 2023 / Accepted: 14 August 2023 / Published: 16 August 2023 Subsurface and groundwater flow characterization is of great importance for various environmental applications, such as the dispersion of contaminants and their remediation. For single-hole flowmeter measurements, key characteristics, such as wellbore storage, skin factor heterogeneities, and variable pumping and aquifer flow rates, have a strong impact on the system characterization, whereas they are not fully considered in existing models and interpretation methods. In this study, we develop a new semi-analytical solution that considers all these characteristics in a physics-based consistent manner. We also present two new interpretation methods, the Double Flowmeter Test with Transient Flow rate (DFTTF) and the Transient Flow rate Flowmeter Test (TFFT), for interpreting data collected during single and multiple pumping tests, respectively. These solution and methods are used as follows. (i) The impact of wellbore storage, transient pumping rate, and property heterogeneities on the interpretation of data collected during single pumping tests are studied over 49 two-aquifer cases. (ii) The effect of the skin factor heterogeneity on transmissivity and storativity estimates, as well as the variability range of the (non-unique) corresponding solutions, are analyzed for the interpretation of multiple-pumping experiments. The results presented in this work show the importance of the various properties and processes that are considered, and the need for the new models and methods that are provided. 1. Introduction Characterizing subsurface and groundwater flow is performed by collecting data from several techniques, including hydraulic tests, thermal experiments and electrical measurements (e.g., [ ]), and inverting this data with the most appropriate inversion strategies (e.g., [ ]). Among all these methods, the most-used method is the pumping test, which gives global estimates of the hydraulic properties of the system and provide information on its transient behavior when interpreting the data with transient-flow solutions (e.g., [ ]). When characterizing the vertical heterogeneities of the system is required, for investigating the dispersion of contaminant and planning remediation strategies for instance, additional information can be acquired by downhole well logging measurements including temperature, vertical flow rates, and direct observations with optical and acoustic imaging tools (e.g., [ Using heat pulse and electromagnetic tools [ ] results in measuring vertical flow rates that are as small as 0.05 Lpm. These measurements are used either in a qualitative way to identify the most permeable zones of the system, or in a quantitative manner to evaluate the hydraulic properties of these zones [ ]. In the latter case, the properties are estimated from vertical flow rate measurements that are collected above each conductive zone in single- or cross-borehole configurations (e.g., [ ]). Two main methods are used to collect this data. (i) Log measurements are acquired during a single pumping test, which is easy and quick to perform, and results in collecting one or two values of flow rates above each conductive zone while the logging tool is lowered in the well (e.g., [ ]). (ii) Multiple pumping tests are performed with the logging tool being located above a different conductive zone for each test, resulting in monitoring the changes in vertical flow rates over time (e.g., [ ]). Method (ii) corresponds to series of local experiments that provide more information than method (i), while being time consuming, since it requires to conduct more experiments and to wait until that the system goes back to equilibrium between each experiment. The corresponding data are usually interpreted with standard analytical solutions that represent the identified conductive zones as independent aquifers, whose behavior is described with the Theis Equation [ ]. Alternatively, double-layer numerical models are used to consider more complex configurations, including vertical crossflow between adjacent porous layers or skin effect with a homogeneous skin zone located around the well [ ]. Heterogeneous skin factors, wellbore storage, and transient pumping rate are considered separately in different studies such as the semi-analytical multi-aquifer models developed in [ ] and the methods formulated in [ ]. These three features describe phenomena occurring in the pumping well and its vicinity that are important for the interpretation of pumping tests. (i) The skin factor enables one to take into account the skin effects that are related to headloss in the well and in its vicinity in the context of significant differences between the well vicinity and aquifer hydraulic conductivities. These differences are due to, for instance, mud invasion or turbulence in the well and its vicinity [ ]. (ii) The wellbore storage impacts the data collected at the beginning of the experiment and the solutions that neglect this characteristic must be applied when the wellbore storage effect is completed. This requires to conduct long-time experiments, in particular for low-permeability aquifers with positive skin effects and large well diameters [ ]. (iii) In addition to intentional changes related to the experiment need, changes in pumping flow rate along pumping experiments are very frequent due to the pump functioning, with a decrease in flow rate when the drawdown increases. These changes can also be due to potential clogging of the pump or the command valve, and must be accounted for when the decrease is important [ Some of the solutions cited above rely on strong assumptions regarding the vertical heterogeneities of the considered systems, assuming for instance homogeneous hydraulic diffusivities and storativities in the system. Furthermore, none of these solutions considers at the same time the effect of wellbore storage, skin factor heterogeneities, and variable pumping and aquifer flow rates, whereas the importance of each of those characteristics has been demonstrated separately. In this study, we develop a new semi-analytical solution that considers all these key characteristics and that is used to generate synthetic reference data. We also remind the assumptions related to standard interpretation methods and we analyze their impact on the interpretation of single-hole flowmeter measurements that are collected over single pumping tests. For the interpretation of flowmeter data collected during single or double logs along the well, a new interpretation method is presented to improve the estimation of aquifer properties, and evaluate the impact of standard assumptions over a large range of cases. For the interpretation of flowmeter data collected during multiple pumping tests, where the full transient behavior of vertical flow rates above each conductive zone is recorded, we also present a new interpretation method that is easy to implement. This method is used to evaluate the variability range of key properties that drive the transient behavior of the system, which are the storativity and skin factor. The validity and accuracy of each model and interpretation method are discussed through synthetic cases, before providing a discussion and conclusions on this work. 2. Experiments and Interpretation Methods 2.1. Considered Experiments and Methods The analysis of flowmeter tests provided in this study is performed by considering the following kinds of experiments ( Table 1 • Single-pumping single-log experiments. The vertical flow rates and hydraulic heads are measured along the borehole while the flowmeter is lowered into the well during a single pumping test. • Single-pumping double-log experiments. As before, the vertical flow rates and hydraulic heads are measured along the borehole during two log experiments that are conducted under the same pumping • Multiple-pumping local-log experiments. A pumping test is performed for each conductive zone that needs to be characterized (except the upper one) with the logging tool localized above this zone. As described in Table 1 , these experiments are interpreted with the methods SFT, DFT, DFTTF, and TFFT. SFT and DFT are standard methods whose basis are reminded in Appendix A , and DFTTF and TFFT correspond to new methods that are described below. For the methods presented below, the vertical flow rates $q i$ $i = 1 , ⋯ , N a q$ ) measured above aquifer are expressed in terms of aquifer flow rates $Q i$ $i = 1 , ⋯ , N a q$ ) with the differentiation method described in expressions ( 2.2. Double Flowmeter Test with Transient Flow Rate (DFTTF) The Double Flowmeter Test with Transient Flow rate (DFTTF) method is based on the DFT, which consists of evaluating the transmissivity $T i$ and storativity $S i$ of aquifer from Theis’ solution using two measurements of the vertical flow rates and hydraulic head above each conductive zone (see the description of the DFT method in Appendix A.2 ). While the DFT method relies on the assumption that the values of the considered vertical flow rates are equal to their average value, the DFTTF method considers these values as distinct without assumption. In the conditions of validity of the logarithmic approximation of the Theis function, the hydraulic properties of aquifer are expressed as: $T i = 1 4 π ln t i 2 / t i 1 h w i ( t i 2 ) / Q i ( t i 2 ) − h w i ( t i 1 ) / Q i ( t i 1 )$ $S i = 2.25 T i t i 1 r w i 2 exp 2 σ i − 4 π T i ( h w i ( t i 1 ) ) Q i ( t i 1 )$ with the aquifer flow rates $Q i ( t i 1 )$ $Q i ( t i 2 )$ and hydraulic heads $h w i ( t i 1 )$ $h w i ( t i 2 )$ deduced from measurements performed at times $t i 1$ $t i 2$ for aquifer . In expression (1), $r w i$ $σ i$ are the well radius and skin factor associated with aquifer , respectively, $σ i$ being considered as homogeneous over the aquifers (i.e., $σ i = σ$ ) and defined from pump test interpretation, as in DFT. Note that in expression ( $S i$ is expressed as a function of $t i 1$ $T i$ , but could also be expressed as a function of $t i 2$ $T i$ by replacing $t i 1$ $t i 2$ . Since $T i$ depends on both $t i 1$ $t i 2$ $S i$ also depends on both times regardless of the chosen expression. 2.3. Transient Flow Rate Flowmeter Test (TFFT) The TFFT (Transient Flow rate Flowmeter Test) method is developed to interpret multiple-pumping local-log experiments. These experiments consist of performing a pumping test for each aquifer to characterize (except the upper one) with the same pumping rate $Q P$. The resulting drawdown $h w S$ and the full transient behavior of the vertical flow rates $q i$ above aquifers i are recorded and the corresponding aquifer transient flow rates $Q i$ are deduced as explained before. This leads to the set of data $( Q i ( t ) , Q P ( t ) , h w S ( t ) )$ associated with each aquifer i ($i = 1 , ⋯ , N a q )$ and collected during $N P = N a q − 1$ pumping tests. These data are inverted with the following algorithm. • Model the data $Q P ( t )$ with variable pumping flow rate models, as described in Appendix C.1 • Estimate the unknowns $T i$ $S i$ $σ i$ by inverting the data $( Q i ( t ) , h w S ( t ) )$ provided by the multi-aquifer model described in Appendix B using a numerical optimization method. More precisely, the Laplace transform of $Q i ( t )$ $h w S ( t )$ are given in expressions ( ) and ( ) and numerically inverted with [ ]’s algorithm. The data inversion proceeds iteratively by a least square method using a gradient algorithm, applied to the transient drawdown and aquifers flow rates, for which different weights can be considered. The objective function is expressed as: $f o b j = ∑ t α t h w S m o d e l ( t ) − h w S d a t a ( t ) h w S d a t a ( t ) 2 + ∑ i ∑ t α i t Q i m o d e l ( t ) − Q i d a t a ( t ) Q i d a t a ( t ) 2$ with the weights $α t$ $α i t$ , and the reference ( $Q i d a t a ( t ) , h w S d a t a ( t )$ ) and simulated ( $Q i m o d e l ( t ) , h w S m o d e l ( t )$ ) data. Note that this objective function is used in Section 3.3 with synthetic data that are considered every minute and weights that are set to 1. The inversions are performed from an initial value given by the DFTTF method with null skin factors and convergence is assumed when the relative change of the objective function is less than $10 − 12$ in the last five iterations. 3. Examples of Applications on Synthetic Cases 3.1. Considered Configurations We consider a system of two aquifers, $T 1$ $S 1$ being the transmissivity and storativity of aquifer 1, respectively, and $T 2$ $S 2$ of aquifer 2. We evaluate how these properties are estimated from single- and double-log flowmeter experiments with the interpretation methods SFT, DFT, and DFTTF in various context in terms of wellbore storage and transient pumping rate ( Section 3.2 ). Then, we evaluate these properties by taking into account different skin factors in two-aquifer systems considering wellbore storage and transient pumping rate with an analysis of the range of the storativity and skin factor that can be defined ( Section 3.3 ). This is done by considering series of local flowmeter experiments that are interpreted with the TFFT method. For all the cases presented in this section, the reference synthetic data are provided by the multi-aquifer model that is presented in Appendix B and takes into account the wellbore storage effect, transient pumping flow rate, and heterogeneous skin factor. A homogeneous pumping well of radius 8 cm is considered with aquifers of 1 m thickness, resulting in transmissivity and storativity equal to hydraulic conductivity and specific storage, respectively. 3.2. Single- and Double-Log Flowmeter Experiments The considered two-aquifer system is defined by setting $T 1$ $S 1$ $10 − 4$ /s and $10 − 3$ , respectively, and having $T 2$ $S 2$ ranging from $10 − 7$ $10 − 1$ /s and $10 − 6$ to 1, respectively, with null skin factor in both aquifers. The three following configurations are considered: (i) standard models without accounting for wellbore storage and considering constant pumping flow rate ( ), (ii) models that account for wellbore storage and consider constant pumping flow rate ( ), and (iii) no wellbore storage and transient pumping flow rate ( ). The transmissivities are estimated with the interpretation methods SFT, DFT, and DFTTF, and the storativities with DFT and DFTTF, considering the measurements at time $t 1 = 10$ min for SFT and at times $t 1$ $t 2 = 300$ min for DFT and DFTTF. Note that the small time $t 1$ allows to investigate the influence of the wellbore storage and the transient pumping rate. The corresponding results are presented in Figure 1 and the pumping flow rate $Q P$ that is used in is defined in Figure 2 . Note that $Q P$ is set to the constant value 4 Lpm in The results provided in Figure 1 are described as follows. (i) When the diffusivities of aquifers (aquifer for which the properties are estimated) and (the other aquifer) are similar (i.e., $ε i j ≈ 1$ ), the ratio of estimated to true properties ( $T i$ $S i$ ) are equal to 1 (or very close to 1) for all the considered configurations and interpretation methods. This shows that for homogeneous hydraulic diffusivity (i.e., $ε i ≈ ε j$ ), the assumptions related to the considered interpretation methods hold, in particular the assumptions related to Theis’ model. (ii) For large values of $ε i j$ $ε i j > > 1$ ), the errors in estimating the hydraulic properties are relatively small in comparison with the errors obtained for small values of $ε i j$ $ε i j < < 1$ ). In most of the cases, we observe that the error in estimating $T i$ $S i$ increases when $ε i j$ varies from 1 to $10 3$ because the assumptions described before that hold when $ε i j ≈ 1$ are less and less fulfilled. This error then decreases when $ε i j$ varies from $10 3$ $10 6$ , which corresponds to configurations where the diffusivity of aquifer is negligible in comparison with that of aquifer for which the properties need to be estimated. This implies that aquifer poorly contributes to the pumping and that the assumption of constant pumping related to aquifer and used in Theis’ model holds. (iii) For small values of $ε i j$ $ε i j < < 1$ ), the transmissivities and storativities are overestimated and underestimated, respectively. $ε i < ε j$ corresponds to configurations where $T i < T j$ $S i > S j$ , whereas the methods SFT and DFT rely on the assumption of homogeneous diffusivities. Overestimating $T i$ and underestimating $S i$ result in tending to the relations $T i ≈ T j$ $S i ≈ S j$ , and thus, tending to verify the homogeneous diffusivity assumption. (iv) For all the estimates of $T 1$ $T 2$ , the results obtained with SFT and DFT are similar or slightly improved by DFT, except for the estimated value of $T 1$ . For all the cases, the DFTTF method always provides better estimates than SFT and DFT with the ratio of estimated to true value around 1 for most of the cases and reaching for example a maximum value of 3 for the transmissivity estimates. (v) Only small differences are observed between the three considered configurations, showing that the wellbore storage and transient pumping flow rate do not have a significant impact on the estimation of the considered properties. An exception is observed when estimating the storativities $S 1$ $S 2$ , for which the transient pumping flow rate results in increasing the ratio of estimated to the true value of 4 to 6 orders of magnitude with the DFT method (blue symbols in Figure 1 i). This results in a significant underestimation of $S 1$ $S 2$ 3.3. Series of Local Flowmeter Experiments We consider now the effect of different skin factors in two-aquifer configurations with $T 1$ $T 2$ set to $5 × 10 − 4$ $10 − 5$ $S 1$ $S 2$ $5 × 10 − 4$ $10 − 3$ , and the skin factors $σ 1$ $σ 2$ to 0 and 1, respectively. As before, the pumping well radius is homogeneous and equal to the wellbore storage radius $r w S = 8$ cm. The applied transient pumping flow rate is the same as in of the previous example and the resulting transient aquifer flow rates are defined from the multi-aquifer model provided in Appendix B Figure 2 . These data are interpreted with the TFFT method presented in Section 2.3 and the resulting estimated values are presented in Figure 3 Figure 3 a shows the minimum value of the objective function ( ) obtained for each value of couple $( σ 1 , σ 2 )$ . These results show that the smallest value is observed for the true values of $σ 1$ $σ 2$ $σ 1 = 0$ $σ 2 = 1$ ), demonstrating that the studied objective function is well defined. The corresponding estimated values of $T 1$ $T 2$ $S 1$ , and $S 2$ provided in Figure 3 b–e (for $σ 1 = 1$ $σ 2 = 0$ ) are equal to the true values of these properties, showing the ability of the proposed interpretation method to estimate the hydraulic properties of the considered system. The results presented in Figure 3 b–e also lead to the following observations. (i) The transmissivities $T 1$ $T 2$ are poorly affected by the values of the skin factors $σ 1$ $σ 2$ , since $T 1$ ranges from $4.92 × 10 − 4$ $9.30 × 10 − 4$ /s ( Figure 3 b) and $T 2$ $8.97 × 10 − 6$ $1.99 × 10 − 5$ /s ( Figure 3 c). (ii) On the contrary, the storativities values $S 1$ $S 2$ are strongly impacted by the values of $σ 1$ $σ 2$ , since $S 1$ ranges from $3.47 × 10 − 7$ to 9.12 ( Figure 3 d) and $S 2$ ranges from $8.71 × 10 − 9$ $8.13 × 10 − 2$ Figure 3 e). (iii) The values observed in Figure 3 d,e show minimum localized values of both $S 1$ $S 2$ for high contrast between $σ 1$ $σ 2$ . This corresponds to configurations where $σ 1$ is large and $σ 2$ small, or $σ 1$ small and $σ 2$ large, which are represented in the top right and bottom left corners of Figure 3 d,e (yellow and orange cells). (iv) We also observe that high values of $σ 1$ $σ 2$ result in overestimating $S 1$ $S 2$ with red cells located at the bottom right corners of Figure 3 d,e. High values of $σ i$ correspond to high and low values of the headlosses in the skin and aquifer, respectively, corresponding to a low contribution of the aquifer to the drawdown. Trying to fit reference data that are obtained with smaller values of $σ i$ results in overestimating the storativities to counterbalance the impact of large values of $σ i$ The storativity is usually determined from the drawdown recorded in a distant observation well. Determining this property from data collected in the pumping well is a challenge, since an infinity of couples ( $S , σ$ ) can reproduce the recorded drawdown, as demonstrated in Appendix C.2 . The resulting relationship ( ) between $S i$ $σ i$ is used to estimate an equivalent couple of parameters $( S i ′ , σ i ′ )$ that are presented in Figure 3 f,g. The global behavior of $S 1$ $S 2$ is well reproduced, except for the localized minimum values that are observed at the bottom left corners in Figure 3 d,e, and not in Figure 3 f,g. These differences are due to relationship ( ), which only depends on $( S i , σ i )$ when estimating $( S i ′ , σ i ′ )$ , implying that this relationship does not consider the impact of one aquifer on the other. These results show that the localized minimum values observed in Figure 3 d,e are due to the impact of the properties of aquifer when estimating the storativity of aquifer 4. Discussion We present a new semi-analytical multi-aquifer model relying on an independent aquifers representation. This physics-based solution is formulated for heterogeneous aquifer properties, skin factor, and pumping well radius, taking into account the wellbore storage and various transient pumping rates and transient aquifer flow rates. We also present two new interpretation methods that (i) improve the transmissivity and storativity estimates of multi-aquifer systems, while taking into account wellbore storage and transient pumping flow rates, and (ii) help to conduct sensitivity analysis of these estimates in relation to the skin factor. Our study shows that the interpretation of single- and double-log flowmeter experiments collected during a single pumping test is improved by using the DFTTF method. By accounting for transient aquifer flow rates, this method gives better estimates of the transmissivity and storativities of aquifers than the standard SFT and DFT methods. We also show that the interpretation of multiple-pumping local-log experiments conducted with the TFFT method leads to consistent results. It demonstrates radically different sensitivity of the transmissivities and storativities to the skin factors considered in the aquifers. A relationship between storativity and skin factor is provided and tested to analyze the (non-unique) couples of solutions related to these parameters in the context of single-hole data, while the storativity is usually estimated from data collected in an observation borehole. However, the presented solution and methods have been only applied to two-aquifer systems and synthetic data with relatively simple configurations. Additional work is required to demonstrate the efficiency of this solution and methods on complex configurations and their ability to interpret field data. Extending the sensitivity analysis also presented in this work to complex configurations and field data is required as well to fully demonstrate the interest of this work. 5. Conclusions For future work and applications, this easy-to-implement solution can be extended to account for inwell headlosses, and for interpreting the recovery phase of flowmeter experiments by applying the superposition principle. We will also consider structural information that helps to reduce the range of acceptable skin factors and associated storativities. For example, small and large storativities are unlikely in semi-confined systems and in weakly fissured confined hard rocks, respectively. Using logs of acoustic or optical borehole images helps to verify the presence of fractures, which gives information on the skin factor. The latter is negative when the well is intersected by local well-open conducts (such as well-open fractures), small with low flow rates, null when considering fractures with homogeneous aperture and rugosity in laminar regime, and positive for fractures filled by drilling mud or sediments. The relevance of the skin factor value, which corresponds to a singular headloss at the infinitesimal interface between the well and the aquifer, can also be checked with equivalent finite skins, such as darcian skins, or laminar-turbulent fracture skins [ Author Contributions Conceptualization, G.L. and D.R.; methodology, G.L.; software, G.L.; validation, G.L.; writing, G.L. and D.R.; review and editing, D.R.; visualization, D.R.; supervision, D.R. All authors have read and agreed to the published version of the manuscript. This research received no external funding. Data Availability Statement The data that support the findings of this study are available from the corresponding author upon request. Conflicts of Interest The authors declare no conflict of interest. Appendix A. Standard Models and Interpretation Methods Considering a homogeneous and infinite aquifer subject to a constant pumping flow rate with negligible wellbore storage, ref. [ ]’s solution provides the following expression of the well drawdown: $h w ( t ) = Q 4 π K b W S s r w 2 4 K t + 2 σ$ $S s$ , and are the aquifer hydraulic conductivity, specific storativity, and thickness, respectively, $r w$ are the skin factor and pumping well radius, and is the Theis function. In standard pumping tests, drawdown data are collected in a distant observation well and expression ( ) is used to estimate global values of the aquifer transmissivity and storativity that are defined as $T = K b$ $S = S s b$ In a system composed of horizontal and independent aquifers, where $T i$ $S i$ are the transmissivity and storativity of aquifer , single-hole flowmeter tests rely on logs that are performed along the pumping well and provide the vertical flow rates $q i$ measured above each conductive zone . The aquifer flow rate $Q i$ of aquifer , which is defined as the flow rate provided by the aquifer during the pumping experiment, is deduced from the measurements $q i$ with the following vertical differentiation: $Q i = q i − q i + 1 , i = 1 , ⋯ , N a q − 1 , Q N a q = q N a q ,$ $N a q$ is the number of horizontal aquifers numbered from top to bottom. Estimates of the hydraulic properties of each aquifer are provided by the SFT and DFT interpretation methods for single and double flowmeter tests, each of those methods relying on different assumptions that are described below. Appendix A.1. Single Flowmeter Test (SFT) The SFT method consists of interpreting aquifer flow rates $Q i$ deduced from a single-log of vertical flow rates $q i$ and associated with the hydraulic head $h w , i$ for each conductive zone . The hydraulic transmissivity $T i$ is estimated at a given time using the following assumptions: (H1) the hydraulic diffusivities $ε i = K i / S i$ are homogeneous (i.e., $ε i = ε$ $∀ i$ ) [ ] and (H2) the specific storativities $S s i$ are homogeneous (i.e., $S s i = S s$ $∀ i$ ) [ ]. Homogeneous skin factor is also usually assumed (i.e., $σ i = σ$ ) and $T i$ is inverted from ( ) for each aquifer , analytically under assumption (H1) and numerically under (H2). In our study, we consider the SFT method with assumption (H1). Alternatively, considering in addition negligible inwell headlosses $h w i ( t ) = h w ( t )$ ) and homogeneous pumping well radius (i.e., $r w i = r w$ $T i$ is given by the well-know formula $T i = T Q i / Q$ . This expression has been deduced from numerical simulations in [ ] for horizontal layered systems with cross-flow exchanges under pseudo-steady state conditions, and from analytical demonstration in [ ], as done here for independent layered systems under transient conditions. Appendix A.2. Double Flowmeter Test (DFT) Alternatively, the DFT method relies on two logs of data which provide the aquifer flow rates $Q i ( t i 1 )$ $Q i ( t i 2 )$ at two distinct times $t i 1$ $t i 2$ . Assuming homogeneous skin factors as before, and being in the conditions of validity of the logarithmic approximation of the Theis function [ ], the following explicit expressions of $T i$ $S i$ are obtained: [ $T i = Q i ˜ 4 π ( h w i ( t i 2 ) − h w i ( t i 1 ) ) ln t i 2 t i 1$ $S i = 2.25 T i t i 1 r w e 2 exp − 4 π T i h w i ( t i 1 ) Q i ˜$ $Q i ˜$ is the average between flow rates $Q i ( t i 1 )$ $Q i ( t i 2 )$ (according to Theis’ assumption of constant flow rate) and $r w e$ is the effective radius [ ] which integrates the skin factor with expression $r w e = r w e − σ$ . For this method, and for all the double-log methods presented in this work, if the logarithmic approximation does not hold, $T i$ $S i$ can be inverted jointly from ( ) applied to aquifer at times $t i 1$ $t i 2$ by using a numerical optimization method. However, as stated in [ ], the logarithmic approximation is quickly reached in the pumping well, except in rare particular cases. Appendix B. Multi-Aquifer Model We present a semi-analytical solution for simulating transient drawdown in radially infinite confined multi-aquifers with homogeneous boundary conditions, wellbore storage, heterogeneous skin effects, and variable pumping flow rate. The presented model also accounts for a variable well radius, which is useful for deep wells in which the well radius decreases with depth. We consider $N a q$ horizontal independent aquifers numbered from top to bottom ( Figure A1 ) that are intercepted by the pumping well. When applying the pumping flow rate $Q P$ to the well, mass balance in the well is expressed as: $Q P + ∑ i = 1 N a q Q i − Q S = 0 ,$ $Q P$ is negative when the flow is extracted from the well, $Q i$ is the (positive) flow rate pumped from aquifer , and $Q S$ is the (negative) wellbore storage flow rate. Note that $Q P$ here is equivalent to $− Q$ Appendix A.1 The flow rates used in Equation ( ) are expressed as: $Q S = S w ∂ h w S ∂ t , Q i = 2 π r w i b i K i ∂ h i ∂ r | r = r w i ,$ $S w$ is the well capacity defined as $S w = π r w S 2$ $r w S$ the wellbore storage radius (i.e., the radius of the zone where the water table fluctuates), $h w S$ is the wellbore storage drawdown (i.e., the water surface level), $r w i$ is the well radius at the depth of aquifer $b i$ $K i$ are the conductive thickness and conductivity of aquifer , respectively, and $h i$ is the drawdown in aquifer In each aquifer $i = 1 , ⋯ , N a q$ ), the mass balance equation for a slightly compressible fluid is expressed as: $S s i ∂ h i ∂ t = K i r ∂ ∂ r r ∂ h i ∂ r$ $S s i$ is the specific storage of aquifer is the radial distance from the well center, and is the time elapsed since the pumping starts. Initial and boundary conditions are defined by: $h i ( r , t = 0 ) = 0 , lim r → + ∞ h i ( r , t ) = 0 ,$ and the well skin effect is represented by a singular head loss as [ $h w i = h i − r w i σ i ∂ h i ∂ r | r = r w i$ with the dimensionless skin factor $σ i$ Applying the Laplace transform to Equation ( ) with the initial condition provided in ( ) leads to: $1 ε i p h ¯ i = 1 r ∂ h ¯ i ∂ r + ∂ 2 h ¯ i ∂ r 2$ is the Laplace variable and $ε i$ is the diffusivity defined as $ε i = K i / S s i$ . Considering the boundary condition given in ( ), the solution for Equation ( ) is expressed as: $h ¯ i = C 1 i I 0 ( γ i r ) + C 2 i K 0 ( γ i r )$ are the modified Bessel functions of the first and second kind, respectively, $γ i = p / ε i$ , and $C 1 i = 0$ when applying the boundary condition from ( ). The derivative of $h i$ is then expressed in the Laplace domain as: $∂ h ¯ i ∂ r = − C 2 i γ i K 1 ( γ i r ) .$ Expression ( ) is formulated in the Laplace domain as: $h ¯ w i = h ¯ i − r w i σ i ∂ h ¯ i ∂ r | r = r w i$ and expressed as follows by using ( ) and ( $h ¯ w i = C 2 i A 1 i , A 1 i = K 0 ( γ i r w i ) + r w i σ i γ i K 1 ( γ i r w i ) .$ The following expressions of $h i$ and its derivative in the Laplace domain are deduced from ( $h ¯ i = h ¯ w i K 0 ( γ i r ) A 1 i , ∂ h ¯ i ∂ r | r = r w i = − h ¯ w i γ i K 1 ( γ i r w i ) A 1 i .$ Expressing the Laplace transform of the vertical flow rate $Q i$ , defined in ( ), and combining it with the derivative expression in ( ) result in: $Q ¯ i = − A 2 i h ¯ w i , A 2 i = 2 π r w i b i K i γ i K 1 ( γ i r w i ) / A 1 i .$ Combining ( ), ( ) and ( ) result in: $S w p h ¯ w S = Q ¯ P − ∑ i = 1 N a q A 2 i h ¯ w i ,$ and considering that the headloss in the well is negligible (i.e., $h ¯ w i = h ¯ w S$ $h ¯ w S$ is expressed as: $h ¯ w S = Q ¯ P S w p + ∑ i = 1 N a q A 2 i .$ All the drawdowns and flow rates expressed in the Laplace domain are inverted in the time domain using [ ]’s algorithm. Appendix C. Models for Transient Parameters and Properties Appendix C.1. Pumping Flow Rate Models We focus on expressions of flow rate models that have an explicit analytical formulation in the Laplace domain, for example exponential and polynomial formulations, as well as linear combinations of them. As the exponential decrease reproduces well the typical flow rate decrease that is observed due to the pump functioning [ ], we consider this formulation with a decrease from flow rates $Q t 1$ $Q t 2$ at times $t 1$ $t 2$ , respectively. This leads to the following pumping flow rate expression: $Q P ( t ) = a e − t / b + c , a = ( Q t 1 − c ) e t 1 / b , c = ( Q t 2 − Q t 1 β ) / ( 1 − β ) , β = e ( t 1 − t 2 ) / b ,$ is a fitting coefficient that controls the decrease shape. The Laplace transform is expressed as: $Q ¯ P = a / ( p + 1 / b ) + c / p$ considering that the pumping starts at time $t 1 = 0$ Appendix C.2. Couples of Equivalent Parameters (S si,σ i ) For single-hole tests, the couples of parameters $( S s i , σ i )$ $( S s i ′ , σ i ′ )$ are equivalent when they produce the same values of $h w S$ $Q S$ $Q i$ . From the demonstration of the multi-aquifer model provided in Appendix B , this is equivalent to produce the same values of $A 2 i$ $A 2 i ′$ that are defined as: $A 2 i = 2 π r w i b i K i γ i K 1 ( γ i r w i ) K 0 ( γ i r w i ) + r w i σ i γ i K 1 ( γ i r w i ) , A 2 i ′ = 2 π r w i b i K i γ i ′ K 1 ( γ i ′ r w i ) K 0 ( γ i ′ r w i ) + r w i σ i ′ γ i ′ K 1 ( γ i ′ r w i )$ $γ i ′ = p S s i ′ / K i$ . Considering $A 2 i = A 2 i ′$ results in: $σ i ′ = σ i − K 0 ( γ i ′ r w i ) γ i ′ r w i K 1 ( γ i ′ r w i ) + K 0 ( γ i r w i ) γ i r w i K 1 ( γ i r w i ) .$ Focusing on small arguments of the Bessel functions, the following approximations [ ] are used: $K 0 ( x ) ≈ − ln ( x / 2 ) − γ E , K 1 ( x ) ≈ 1 / x ,$ $l n$ is the natural logarithm and $γ E$ is the Euler constant. This leads to the following explicit relationship between the couples of properties $( S s i , σ i )$ $( S s i ′ , σ i ′ )$ $S s i ′ = S s i exp 2 ( σ i ′ − σ i ) .$ Figure 1. Estimated values of transmissivities and storativities normalized by the true values ($T i$ and $S i$, respectively) for aquifer 1 and 2 ($i = 1 , 2$) along $ε i j$ the ratio of diffusivities of aquifer i to that of aquifer j ($ε i j = ε i / ε j$), for model configurations Config1 (first row), Config2 (second row), and Config3 (third row), and the interpretation methods SFT, DFT, and DFTTF (black, blue, and red symbols, respectively). Figure 2. Transient pumping flow rate ( $Q P$ ) defined from the exponential model provided in ( ) with $Q t 1 = 4$ $Q t 2 = 3.8$ Lpm at times $t 1 = 0$ $t 2 = 300$ min with the fitting coefficient set to $10 5$ $Q P$ is the pumping flow rate considered in Section 3.2 and in all the experiments in Section 3.3 $Q t 1$ $Q t 2$ correspond to the aquifer flow rates of the reference experiments considered in Section 3.3 Figure 3. ) Minimum of the objective function ( ) obtained for each value of the couple $( σ 1 , σ 2 )$ $σ 1$ $σ 2$ ranging from −2 to 5. The corresponding estimated values of ( $log ( T 1 )$ , ( $log ( T 2 )$ , ( $log ( S 1 )$ , and ( $log ( S 2 )$ obtained with the TFFT method. ( ) Estimated values of $log ( S 1 ′ )$ $log ( S 2 ′ )$ obtained with expression ( ) from the values of $S 1$ $σ 1 = − 2$ (first line in ( )) and $S 2$ $σ 2 = − 2$ (first line in ( )). For all figures, the color cells correspond to increasing values from yellow to red and the log function corresponds to log10 function. Table 1. Considered flowmeter experiments with the following collected data: $h w , i$ $q i$ are the hydraulic heads and vertical flow rates, respectively, measured above conductive zone $N a q$ the number of conductive zones to characterize, and $h w , i j$ $q i j$ are their counterpart collected from two logs ( $j = 1 , 2$ ). SFT, DFT, DFTTF, and TFFT are the interpretation methods presented in Appendix A Section 2 Experiment Name Collected Data Interp. Methods Single-pumping single-log $( h w , i , q i )$, $i = 1 , ⋯ , N a q$ SFT Single-pumping double-log $( h w , i j , q i j )$, $j = 1 , 2$ DFT, DFTTF Multiple-pumping local-log $( h w , i ( t ) , q i ( t ) )$ TFFT Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Lods, G.; Roubinet, D. Models and Interpretation Methods for Single-Hole Flowmeter Experiments. Water 2023, 15, 2960. https://doi.org/10.3390/w15162960 AMA Style Lods G, Roubinet D. Models and Interpretation Methods for Single-Hole Flowmeter Experiments. Water. 2023; 15(16):2960. https://doi.org/10.3390/w15162960 Chicago/Turabian Style Lods, Gerard, and Delphine Roubinet. 2023. "Models and Interpretation Methods for Single-Hole Flowmeter Experiments" Water 15, no. 16: 2960. https://doi.org/10.3390/w15162960 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2073-4441/15/16/2960?utm_campaign=releaseissue_waterutm_medium=emailutm_source=releaseissueutm_term=titlelink22","timestamp":"2024-11-08T11:40:16Z","content_type":"text/html","content_length":"537387","record_id":"<urn:uuid:fca7b180-e23a-49a8-b63d-df9ff21e3f58>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00425.warc.gz"}
Barometer question Measure the height of a tall building using a barometer The following concerns a question in a physics degree exam at the University of Copenhagen: "Describe how to determine the height of a skyscraper with a barometer." One student replied: "You tie a long piece of string to the neck of the barometer, then lower the barometer from the roof of the skyscraper to the ground. The length of the string plus the length of the barometer will equal the height of the building." This highly original answer so incensed the examiner that the student was failed immediately. The student appealed on the grounds that his answer was indisputably correct, and the university appointed an independent arbiter to decide the case. The arbiter judged that the answer was indeed correct, but did not display any noticeable knowledge of physics. To resolve the problem it was decided to call the student in and allow him six minutes in which to provide a verbal answer that showed at least a minimal familiarity with the basic principles of physics. For five minutes the student sat in silence, forehead creased in thought. The arbiter reminded him that time was running out, to which the student replied that he had several extremely relevant answers, but couldn't make up his mind which to use. On being advised to hurry up the student replied as follows: "Firstly, you could take the barometer up to the roof of the skyscraper, drop it over the edge, and measure the time it takes to reach the ground. The height of the building can then be worked out from the formula H = 0.5g x t squared. But bad luck on the barometer." "Or if the sun is shining you could measure the height of the barometer, then set it on end and measure the length of its shadow. Then you measure the length of the skyscraper's shadow, and thereafter it is a simple matter of proportional arithmetic to work out the height of the skyscraper." "But if you wanted to be highly scientific about it, you could tie a short piece of string to the barometer and swing it like a pendulum, first at ground level and then on the roof of the skyscraper. The height is worked out by the difference in the gravitational restoring force T =2 pi sqr root (l /g)." "Or if the skyscraper has an outside emergency staircase, it would be easier to walk up it and mark off the height of the skyscraper in barometer lengths, then add them up." "If you merely wanted to be boring and orthodox about it, of course, you could use the barometer to measure the air pressure on the roof of the skyscraper and on the ground, and convert the difference in millibars into feet to give the height of the building." "But since we are constantly being exhorted to exercise independence of mind and apply scientific methods, undoubtedly the best way would be to knock on the janitor's door and say to him 'If you would like a nice new barometer, I will give you this one if you tell me the height of this skyscraper'." Leave a Comment
{"url":"https://web-profile.net/barometer-question/","timestamp":"2024-11-04T13:48:01Z","content_type":"text/html","content_length":"48904","record_id":"<urn:uuid:88b437ac-fe13-4473-b650-ba60877028f2>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00225.warc.gz"}
Famous physics cat now alive, dead and in two boxes at once Splitting the feline between boxes is a step toward building microwave-based quantum computers Yvonne Gao, Yale University Physicist Erwin Schrödinger’s cat can’t seem to catch a break. The fictitious feline is famous for being alive and dead at the same time, as long as it remains hidden inside a box. Scientists think about Schrödinger’s cat in this way so that they can study quantum mechanics. This is the science of the very small — and the way that matter behaves and interacts with energy. Now, in a new study, scientists have split Schrödinger’s cat between two boxes. Animal lovers can relax — there are no actual cats involved in the experiments. Instead, physicists used microwaves to mimic the cat’s quantum behavior. The new advance was reported May 26 in Science . It brings scientists one step closer to building quantum computers out of microwaves. Schrödinger dreamt up his famous cat in 1935. He made it the unfortunate participant in a hypothetical experiment. It’s what scientists call a thought experiment. In it, Schrödinger imagined a cat in a closed box with a deadly poison. The poison would be released if some radioactive atoms decayed. This decay occurs naturally when a physically unstable form of an element (such as uranium) sheds energy and subatomic particles. The math of quantum mechanics can calculate the odds that the material has decayed — and in this case, released the poison. But it cannot identify, for certain, when that will happen. So from the quantum perspective, the cat can be assumed to be both dead — and still alive — at the same time. Scientists called this dual state a superposition. And the cat remains in limbo until the box is opened. Only then will we learn if it’s a purring kitty or a lifeless corpse. Scientists have now created a real laboratory version of the experiment. They created a box — two actually — out of superconducting aluminum. A superconducting material is one that offers no resistance to the flow of electricity. Taking the place of the cat are microwaves, a type of electromagnetic radiation. The electric fields associated with the microwaves can point in two opposite directions at the same time — just as Schrödinger’s cat can be alive and dead at the same time. These states are known as “cat states.” In the new experiment, physicists have created such cat states in two linked boxes, or cavities. In effect, they have split the microwave “cat” into the two “boxes” at once. The idea of putting one cat into two boxes is “kind of whimsical,” says Chen Wang. A coauthor of the paper, he works at Yale University, in New Haven, Conn. He argues, however, that it’s not that far off from the real-world situation with these microwaves. The cat state is not only in one box or the other, but stretches out to occupy both. (I know, that’s weird. But even physicists acknowledge that quantum physics tends to be weird. Very weird.) What’s even weirder is that the states of the two boxes are linked, or in quantum terms, entangled. That means if the cat turns out to be alive in one box, it’s also alive in the other. Chen compares it to a cat with two symptoms of life: an open eye in the first box and a heartbeat in the second box. Measurements from the two boxes will always agree on the cat’s status. For microwaves, this means the electric field will always be in sync in both cavities. Yvonne Gao, Yale University The scientists measured how close the cat states were to the ideal cat state they wanted to produce. And the measured states came within roughly 20 percent of that ideal state. This is about what they would expect, given how complicated the system is, the researchers say. The new finding is a step toward using microwaves for quantum computing. A quantum computer makes use of the quantum states of subatomic particles to store information. The two cavities could serve the purpose of two quantum bits, or qubits. Qubits are the basic units of information in a quantum computer. One stumbling block for quantum computers has been that errors will inevitably slip into calculations. They slip in because of interactions with the outside environment that muck up the qubits’ quantum properties. The cat states are more resistant to errors than other types of qubits, the researchers say. Their system should eventually lead to more fault-tolerant quantum computers, they “I think they’ve made some really great advances,” says Gerhard Kirchmair. He is a physicist at the Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences in Innsbruck. “They’ve come up with a very nice architecture to realize quantum computation.” Sergey Polyakov says this demonstration of entanglement in the two-cavity system is very important. Polyakov is a physicist at the National Institute of Standards and Technology in Gaithersburg, Md. The next step, he says, “would be to demonstrate that this approach is actually scalable.” By this, he means that it would still work if they added more cavities to the mix to build a bigger quantum
{"url":"https://www.snexplores.org/article/famous-physics-cat-now-alive-dead-and-two-boxes-once","timestamp":"2024-11-14T05:30:58Z","content_type":"text/html","content_length":"208167","record_id":"<urn:uuid:91053e15-c88c-4b0c-ac05-1f3ae62da401>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00167.warc.gz"}
150 Best Geometry Jokes That Will Make You Laugh Out Loud - Funny Jokes Hub 150 Best Geometry Jokes That Will Make You Laugh Out Loud If you think math is all about numbers and formulas, think again! Geometry jokes offer a playful twist that can make even the most complex concepts enjoyable. From puns about shapes to clever quips about angles, these jokes are sure to tickle your funny bone. Best Geometry Jokes That Will Make You Laugh Out Loud Not only do they lighten the mood, but they also make learning about polygons and pi a lot more fun. Whether you’re a teacher looking to engage students or simply someone who appreciates a good laugh, these witty one-liners will add a spark to your day. Get ready to explore the lighter side of geometry! Best Geometry Jokes That Will Make You Laugh Out Loud • Why did the obtuse angle go to the beach? Because it was over 90 degrees! • Parallel lines have so much in common… it’s a shame they’ll never meet! • Why did the triangle refuse to be friends with the circle? It found the circle too pointless! • What did the student say when he couldn’t find his geometry book? “Looks like I’m in a bit of a ‘conundrum’!” • Why did the square break up with the circle? It found their relationship too one-dimensional! • What do you call a person who’s afraid of negative numbers? A “pi”thagorean! • Why are obtuse angles always so upset? Because they’re never right! • How do you stay warm in a cold room? You go to the corner—it’s always 90 degrees! • Why did the rectangle get kicked out of the party? It couldn’t find its “shape”! • What’s a geometry teacher’s favorite place to shop? The “Angles” store! • Why did the mathematician bring a ladder to geometry class? Because he wanted to reach new heights! • What did the right triangle say to the obtuse triangle? “You’re just not my type!” • How does a mathematician plow fields? With a pro-tractor! • Why was the equal sign so humble? Because it realized it wasn’t less than or greater than anyone else! • Why do geometry teachers love parks? Because they’re full of angles! See Also – Top 150 Hilarious Engineering Jokes to Keep You Laughing Geometry Jokes That Will Make You Laugh Out Loud Geometry jokes are the perfect blend of wit and math, guaranteed to tickle your funny bone! Whether you’re a pi enthusiast or just love a good pun, these clever quips will have you laughing out loud. Discover the humor in angles, shapes, and lines, and enjoy a new take on… Geometry Jokes That Will Make You Laugh Out Loud • Why was the equal sign so humble? Because it knew it wasn’t less than or greater than anyone else! #MathHumor • What do you call a man who lost his left side? Norman! #GeometryJokes • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #SummerMath • How does a mathematician plow fields? With a pro-tractor! #FarmersJoke • Why did the circle break up with the triangle? It found him too edgy! #ShapeDrama • What do you get when you cross a triangle and a circle? A roundabout! #GeometryFun • Why was the math book sad? Because it had too many problems! #MathProblems • What did the triangle say to the circle? You’re going around in circles! #CircleTalk • Why are obtuse angles so good at relationships? Because they always know how to keep things open-minded! #AngleWisdom • Why did the student wear glasses in math class? To improve di-vision! #VisionJokes • What’s a geometry teacher’s favorite place in NYC? Times Square! #GeometryInTheCity • Why are parallel lines so good at staying calm? Because they always stay on the same path! #CalmGeometry • What did one geometrical shape say to the other at the party? “Let’s get this party ‘circling’!” #PartyTime • Why did the rectangle break up with the square? It found him too square! #ShapeBreakup • What do you call a triangle that’s always late? A lagging angle! #LateGeometry See Also – Top 150 Hilarious Math Jokes for Endless Laughter The Best Geometry Jokes for Math Enthusiasts Geometry jokes are a delightful way for math enthusiasts to add some humor to their equations! From witty puns about triangles to clever quips about circles, these jokes not only lighten the mood but also spark interest in geometric concepts. Perfect for classrooms or casual conversations, they make math fun… The Best Geometry Jokes for Math Enthusiasts • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #AngleHumor • What’s a math teacher’s favorite place in NYC? Times Square! #MathInTheCity • Why did the triangle refuse to be friends with the circle? Because it found it too well-rounded! #ShapeShenanigans • How does a mathematician plow fields? With a pro-tractor! #FarmingGeometry • Why was the equal sign so humble? Because it realized it wasn’t less than or greater than anyone else! #EqualityInMath • What did the triangle say to the circle? You’re going around in circles! #GeometryFun • Why did the square break up with the rectangle? It found it too boxy! #ShapeRelationships • How do you stay warm in a cold room? Just huddle in the corner, where it’s always 90 degrees! #GeometryWarmth • Why are obtuse angles so upset? Because they’re never right! #AngleProblems • What did one parallelogram say to the other? We’re just two sides of the same shape! #ParallelogramPuns • Why did the math book look sad? Because it had too many problems! #MathStruggles • What do you call a number that can’t keep still? A roamin’ numeral! #MathJokes • Why did the radius break up with the diameter? Because it found it too long! #CircleDrama • What do you get when you cross a snake and a pie? A python! (π-thon) #MathPuns • Why did the student wear glasses in geometry class? To improve di-vision! #VisionInMath See Also – Top 150 Hilarious Space Jokes Guaranteed to Make You Laugh Geometry Jokes: Why Did the Triangle Cross the Road? Geometry jokes, like “Why did the triangle cross the road?” tickle our minds with a blend of humor and math. This playful question invites laughter while reminding us of shapes and angles. Geometry humor not only entertains but also reinforces concepts, making math accessible and enjoyable for everyone, from students… Geometry Jokes: Why Did the Triangle Cross the Road? • Why did the triangle cross the road? To get to the other side of the angle! #TriangleHumor • Why did the obtuse triangle go to the beach? Because it was over 90 degrees! #GeometryGiggles • Why did the triangle break up with the circle? It found the circle too superficial! #ShapeMatters • Why did the triangle get a job? It wanted to be a pro-tractor! #CareerGoals • Why did the triangle refuse to play cards? Because it was afraid of getting a bad hand! #CardSharp • Why did the right triangle always carry a pencil? Because it couldn’t draw a straight line! #PointTaken • Geometry Jokes • What did the triangle say to the circle? You’re just going around in circles! #ShapeTalk • Why was the equal-sided triangle so popular? Because it was well-balanced! #PopularityContest • Why did the mathematician break up with geometry? They couldn’t find common angles! #LoveAndMath • How do you stay warm in a cold geometry class? Just huddle in the acute corner! #StayCozy • Why did the geometry book look sad? Because it had too many problems! #BookWoes • What do you call an angle that’s gone to school? A graduated angle! #SmartAngles • Why was the geometry teacher so good at baseball? Because she knew how to find the right angles! #BaseballGeometry • Why did the parallelogram go to therapy? It had too many identity crises! #ShapeTherapy • What did one square say to the other? “You’re looking a bit cornered!” #SquareTalk See Also – Top 150 Hilarious Astronomy Jokes for Stargazers Hilarious Geometry Jokes That Prove Math Can Be Fun Geometry doesn’t have to be all angles and theorems! Hilarious geometry jokes bring laughter to the classroom and beyond, proving that math can be entertaining. From punny puns about circles to witty quips about triangles, these jokes not only lighten the mood but also make learning geometry a delightful experience! Hilarious Geometry Jokes That Prove Math Can Be Fun • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #GeometryHumor • What do you call a man who is great at geometry? A poly-gon! #MathPuns • Why was the equal sign so humble? Because it knew it wasn’t less than or greater than anyone else! #MathJokes • What did the triangle say to the circle? You’re going around in circles! #GeometryGags • Why did the student wear glasses in math class? To improve di-vision! #MathVision • How do you stay warm in a cold room? Just go to the corner, it’s always 90 degrees! #GeometryLaughs • Why was the geometry book so sad? It had too many problems! #GeometryJokes • What do you call a right triangle that won’t stop talking? A chatter-angle! #MathWit • Why are obtuse angles so sad? Because they’re never right! #AngleHumor • How do you make seven even? Take away the “s”! #MathPuns • What did the sine say to the cosine? “You’re looking so acute today!” #TrigonometryFun • Why did the triangle break up with the circle? It found someone more well-rounded! #GeometryLove • What’s a math teacher’s favorite place in NYC? Times Square! #MathInTheCity • Why did the two parallel lines never get together? Because they had too much distance between them! #GeometryRelationships • How do you organize a fantastic space party? You planet! #GeometryCelebration See Also – Hilarious Biology Jokes to Boost Your Knowledge and Humor Geometry Jokes to Share with Your Classmates Geometry jokes are a fun way to lighten the mood in the classroom! Whether you’re sharing a pun about angles or a witty quip about circles, these jokes can spark laughter and make learning more enjoyable. So, grab a few jokes, and let’s have some geometrical giggles with classmates! Geometry Jokes to Share with Your Classmates • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #AngleHumor • What did one triangle say to the other? “You’re looking sharp!” #TriangleTalk • Why was the equal sign so humble? Because it knew it wasn’t less than or greater than anyone else! #MathModesty • How does a mathematician plow fields? With a pro-tractor! #FarmGeometry • Why do plants hate math? Because it gives them square roots! #PlantJokes • Why was the geometry book sad? Because it had too many problems! #BookWoes • What do you call a rectangle that’s always getting into trouble? A “re-rectangular delinquent”! #ShapeShenanigans • Why did the circle break up with the triangle? It found him too pointed! #ShapeDrama • What’s a math teacher’s favorite place in NYC? Times Square! #MathInTheCity • What did the right angle say to the acute angle? “You’re looking a little obtuse today!” #AngleCompliments • How do you keep warm in a cold room? You go to the corner, where it’s always 90 degrees! #GeometryWarmth • Why didn’t the circle ever get lost? Because it always found its way back to the center! #CircleLogic • What’s a mathematician’s favorite dessert? Pi! #YummyMath • Why did the student wear glasses in math class? To improve di-vision! #VisionJokes • How do you stay warm in a math class? Just find a good angle! #MathChill See Also – Hilarious Chemistry Jokes to Make You Laugh Witty Geometry Jokes for Teachers and Students Geometry can be a puzzling subject, but adding a splash of humor can make it more enjoyable for both teachers and students. Witty geometry jokes, like “Why was the equal sign so humble? Because it knew it wasn’t less than or greater than!” can lighten the mood while reinforcing mathematical… Witty Geometry Jokes for Teachers and Students • Why did the student wear glasses in math class? To improve his di-vision! #MathHumor • Why did the triangle refuse to be friends with the circle? Because it found him too pointless! #GeometryJokes • What do you call a man who’s afraid of negative numbers? He’ll stop at nothing to avoid them! #MathPuns • Why was the obtuse triangle always so frustrated? Because it could never be right! #TriangleTroubles • Why did the two 4s skip lunch? Because they already eight! #NumberJokes • How does a mathematician plow fields? With a pro-tractor! #FarmGeometry • What did the triangle say to the circle? You’re going around in circles! #CircleOfLife • Why did the math teacher break up with the geometry book? It had too many problems! #TeacherHumor • Why didn’t the parallel lines ever get together? They just couldn’t meet! #GeometryLove • What’s a geometry teacher’s favorite place in NYC? Times Square! #GeometryInTheCity • Why was the equal sign so humble? Because it realized it wasn’t less than or greater than anyone else! #EqualityJokes • Why couldn’t the angle get a loan? Because it didn’t have any cosine! #MathFinances • How do you stay warm in a cold room? Just go to the corner, it’s always 90 degrees! #CornerJokes • What do you call a rectangle that’s really good at keeping secrets? A silent square! #SecretShapes • Why did the mathematician get confused at the beach? Because he couldn’t find the tangent line! #BeachMath See Also – Top 150 Hilarious Gadget Jokes Guaranteed to Make You Laugh Geometry Jokes: The Angles of Humor Geometry jokes cleverly intersect humor with math, proving that angles can indeed be funny! From punny quips like “Why was the obtuse angle upset? Because it was never right!” to playful riddles about shapes, these jokes make learning geometry enjoyable. Embracing laughter, they help students connect with mathematics in a… Geometry Jokes: The Angles of Humor • Why was the equal sign so humble? Because it realized it wasn’t less than or greater than anyone else! #MathHumor • What do you call a guy who’s really bad at geometry? A “plane” loser! #GeometryJokes • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #AngleOfHumor • What did the triangle say to the circle? You’re going around in circles! #ShapeShenanigans • How does a mathematician plow fields? With a pro-tractor! #FarmingGeometry • Why did the triangle always get in trouble? Because it had a lot of “pointed” arguments! #PointedHumor • What’s a math teacher’s favorite place in NYC? Times Square! #MathNerds • Why did the two 90-degree angles break up? They just couldn’t find common ground! #AcuteRelationships • What did one parallel line say to the other? “We’ll never meet!” #GeometryFriendship • Why are obtuse angles so sad? Because they’re never right! #AngleSadness • How does a circle greet another circle? “Well, it’s been a round time!” #CircularHumor • Why did the student wear glasses in math class? To improve his “di-vision”! #MathVision • What did the square say to the rectangle? “You’re just a little too edgy for me!” #ShapeTalk • Why was the geometry book always unhappy? It had too many problems! #MathProblems • What do you call a baby angle? A “cute” angle! #CuteAngles See Also – Top 150 Hilarious Physics Jokes Guaranteed to Make You Laugh Classic Geometry Jokes That Never Get Old Geometry jokes have a timeless charm that never fails to elicit a chuckle. From witty puns about obtuse angles to playful quips about circles, these classic jokes blend humor with mathematical concepts. Whether you’re a student or a math enthusiast, a good geometry joke can add a light-hearted twist to… Classic Geometry Jokes That Never Get Old • Why was the equal sign so humble? Because it knew it wasn’t less than or greater than anyone else! #MathHumor • What do you call a man who’s great at geometry? A poly-gon! #GeometryJokes • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #AngleHumor • How does a mathematician plow fields? With a pro-tractor! #FarmingMath • Why was the triangle always so happy? Because it found its angles! #TriangleLove • What did the triangle say to the circle? You’re just going around in circles! #CircleJokes • Why did the student wear glasses in math class? To improve di-vision! #VisionJokes • What’s a geometry teacher’s favorite dessert? Pi! #SweetMath • Why did the two lines break up? They just couldn’t meet at a point! #LineHumor • Parallel lines have so much in common. It’s a shame they’ll never meet! #ParallelLife • Why are obtuse angles so good at relationships? Because they know how to make things work without being too acute! #AngleRelationships • How do you stay warm in a cold geometry classroom? Just sit in the corner—it’s always 90 degrees! #ClassroomJokes • What do you call a three-dimensional triangle? A tri-angled! #3DGeometry • Why did the circle get lost? It couldn’t find its center! #LostInGeometry • How do you make seven even? Take away the “s”! #NumberJokes See Also – Hilarious Electricity Jokes to Spark Your Laughter Geometry Jokes to Help You Find Your Sense of Humor Geometry jokes are a delightful way to sharpen your wit while exploring the world of shapes and angles. These playful quips not only lighten the mood but also make math more relatable. So, whether you’re an aspiring mathematician or just looking for a laugh, dive into geometry humor for a… Geometry Jokes to Help You Find Your Sense of Humor • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #GeometryHumor • Parallel lines have so much in common. It’s a shame they’ll never meet. #MathJokes • What did the triangle say to the circle? You’re going around in circles! #CircleOfLaughs • Why was the equal sign so humble? Because it knew it wasn’t less than or greater than anyone else! #MathPuns • How do you stay warm in a cold room? Just huddle in the corner where it’s always 90 degrees! #GeometryJokes • Why did the student wear glasses in geometry class? To improve his di-vision! #PunIntended • What does a mathematician do when he’s constipated? He works it out with a pencil! #MathHumor • Why was the geometry book sad? Because it had too many problems. #BookHumor • How do you make seven even? Take away the “s”! #NumberJokes • Why did the square break up with the rectangle? It found him a bit too “lengthy”! #ShapeRelationships • Why did the right triangle always get invited to parties? Because it was a real “angle” to be around! #PartyJokes • What do you call a number that can’t keep still? A roamin’ numeral! #MathPuns • Why did the circle marry the triangle? They wanted to form a perfect shape! #LoveInGeometry • What’s a math teacher’s favorite place in NYC? Times Square! #CityJokes • Why did the polygon go to the doctor? It had too many sides to its story! #ShapeShenanigans Geometry Jokes: Puns and One-Liners for Shape Lovers If you’re a shape enthusiast, Geometry Jokes are the perfect blend of humor and math! With clever puns and witty one-liners, these jokes will have you laughing while you contemplate angles and curves. Whether you’re a student or a geometry aficionado, these jokes add a fun twist to learning about… Geometry Jokes: Puns and One-Liners for Shape Lovers • Why did the obtuse angle go to the beach? Because it was over 90 degrees! #AngleHumor • Parallel lines have so much in common. It’s a shame they’ll never meet! #ParallelProblems • What do you call a number that can’t keep still? A roamin’ numeral! #MathPuns • Why did the triangle refuse to be friends with the circle? It found the circle too well-rounded! #ShapeSquabbles • Did you hear about the mathematician who’s afraid of negative numbers? He’ll stop at nothing to avoid them! #MathJokes • How do you stay warm in a cold room? You go to the corner, where it’s always 90 degrees! #GeometryWarmth • Why can’t you trust an atom? Because they make up everything, even shapes! #AtomicHumor • What did the triangle say to the circle? “You’re just going around in circles!” #ShapeTalk • Why did the two triangles get into a fight? They couldn’t find common angles! #TriangleTiff • What did one parallelogram say to the other? “We’re just two shapes in a world of angles!” #ShapeLife • Why did the square break up with the rectangle? It found it too one-dimensional! #SquareProblems • What did the circle say to the tangent line? “You’re just going to touch and go!” #CircleChat • Why was the equal sign so humble? Because it realized it wasn’t less than or greater than anyone! #MathModesty • How do you make seven even? Just take away the “s”! #NumberPlay • Why did the mathematician bring a ladder to the bar? Because he heard the drinks were on the house! #MathAtTheBar Leave a Reply Cancel reply
{"url":"https://funnyjokeshub.com/geometry-jokes/","timestamp":"2024-11-02T20:29:17Z","content_type":"text/html","content_length":"119651","record_id":"<urn:uuid:8e25e305-9889-4128-886c-4eac4c654b3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00677.warc.gz"}
- The NetBSD Packages Collection fonts/tex-fourier - The NetBSD Packages Collection Using Utopia fonts in LaTeX documents Fourier-GUTenberg is a LaTeX typesetting system which uses Adobe Utopia as its standard base font. Fourier-GUTenberg provides all complementary typefaces needed to allow Utopia based TeX typesetting, including an extensive mathematics set and several other symbols. The system is absolutely stand-alone: apart from Utopia and Fourier, no other typefaces are required. The fourier fonts will also work with Adobe Utopia Expert fonts, which are only available for purchase. Utopia is a registered trademark of Adobe Systems Incorporated Binary packages Binary packages can be installed with the high-level tool pkgin (which can be installed with pkg_add) or pkg_add(1) (installed by default). The NetBSD packages collection is also designed to permit easy installation from source. Available build options Known vulnerabilities (no vulnerabilities known) The pkg_admin audit command locates any installed package which has been mentioned in security advisories as having vulnerabilities. Please note the vulnerabilities database might not be fully accurate, and not every bug is exploitable with every configuration. Problem reports, updates or suggestions for this package should be reported with send-pr.
{"url":"http://cdn.netbsd.org/pub/pkgsrc/current/pkgsrc/fonts/tex-fourier/index.html","timestamp":"2024-11-04T10:39:37Z","content_type":"text/html","content_length":"10634","record_id":"<urn:uuid:6ca873f4-4a21-4b87-b0ce-67f3644dfea0>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00880.warc.gz"}
MQ 23. Find the equation of the circle which touches the circle... | Filo Question asked by Filo student MQ 23. Find the equation of the circle which touches the circle at internally with a radius of 2 . [Ans: ) Not the question you're searching for? + Ask your question Video solutions (2) Learn from their 1-to-1 discussion with Filo tutors. 15 mins Uploaded on: 3/23/2023 Was this solution helpful? Found 7 tutors discussing this question Discuss this question LIVE for FREE 10 mins ago One destination to cover all your homework and assignment needs Learn Practice Revision Succeed Instant 1:1 help, 24x7 60, 000+ Expert tutors Textbook solutions Big idea maths, McGraw-Hill Education etc Essay review Get expert feedback on your essay Schedule classes High dosage tutoring from Dedicated 3 experts Students who ask this question also asked View more Stuck on the question or explanation? Connect with our Mathematics tutors online and get step by step solution of this question. 231 students are taking LIVE classes Question Text MQ 23. Find the equation of the circle which touches the circle at internally with a radius of 2 . [Ans: ) Updated On Mar 23, 2023 Topic Calculus Subject Mathematics Class Class 12 Answer Type Video solution: 2 Upvotes 265 Avg. Video Duration 8 min
{"url":"https://askfilo.com/user-question-answers-mathematics/mq-23-find-the-equation-of-the-circle-which-touches-the-34303930353932","timestamp":"2024-11-04T01:09:14Z","content_type":"text/html","content_length":"355596","record_id":"<urn:uuid:4f8a8f9a-9c69-4a94-b2b9-159b92308fc7>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00118.warc.gz"}
Project Euler: Problem 10 The sum of the primes below 10 is $2 + 3 + 5 + 7 = 17$ Find the sum of all the primes below two million. Using our previously made Eratosthenes class for generating primes, this problem can be solved in a very easy way. var answer = new Eratosthenes() .TakeWhile(x => x < 2000000) .Aggregate((sum, x) => sum + x) And that was it actually. The only problem here is that it takes a bit of time. On my machine this solution takes over 2 seconds. Not very long maybe, but compared to the others that is quite a bit of time actually. Especially when I know that it could go a lot faster if I just had a faster algorithm for generating prime numbers. However, I haven't found that yet. Actually, I have, but I haven't managed to implement any of them yet... 😛 Update October, 19th 2009 Managed to finally implement another algorithm called the Sieve of Atkin. I have written about it in a different post. Using that class, the solution to this problem looks like this: var answer = new Atkin(2000000) .Aggregate((sum, x) => sum + x); Pretty similar. But! The solution now takes only about 160 milliseconds, instead of the original 2 seconds 😄 Is that cool or what?
{"url":"https://www.geekality.net/blog/project-euler-problem-10","timestamp":"2024-11-10T02:08:07Z","content_type":"text/html","content_length":"33029","record_id":"<urn:uuid:d2ce6399-09d6-4abf-a6f4-60c79861ed4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00714.warc.gz"}
How To Ignore Blank Cells In Excel. | SpreadCheaters How to ignore blank cells in Excel. When working with a large dataset in Excel, it is common to come across blank cells. These blank cells can be problematic when performing calculations or data analysis, as they can skew your results. Fortunately, Excel provides a number of ways to ignore blank cells in your calculations. Here we have a dataset, in this dataset, we have five columns containing the Date, Product, Selling Price, Original Price, and Profit. We wish to calculate the profit in Column E and we want to use a formula which will involve division by Column C entries. We can observe that Column C has some blank cells and if we use these cells as it is in the division formula, we’ll get errors. In this tutorial, we’ll walk you through how to ignore blank cells in Excel. Method – 1 Using the IF Function To ignore blanks, we need to implement a simple IF function to detect blank cells. When the blank cells will be detected, the calculations will not be performed. Follow the steps mentioned below to learn the method of implementation of IF function. Step – 1 Select the cell to implement the IF fucntion • Select the first cell in Column E, where you want to write the formula. • The syntax of the formula will be =IF(First_Cell = ” “, ” “, (First_Cell – Second_Cell) / First_Cell) • In our case the actual formula will be =IF(C2 = ” “, ” “, (C2 – D2) / C2) Step – 2 Calculate the values and ignore blanks • Press Enter and drag down the Fill Handle tool to the rest of the cells. • The formula will be applied automatically. • We can see that the formula was not applied to the cells where the Column C entries were blank, which means that we successfully ignored the blank cells and avoid errors. Method – 2 Using the ISBLANK Function Second method to detect blanks is the use of ISBLANK function of Excel. This is an easy method to detect whether a cell is blank or not, and by implementing this function we can avoid or ignore all blank cells during the calculations. Step – 1 Select the cell to implement the formula • Select the cell where you want to write the formula. • The syntax of the formula will be =IF(ISBLANK(First_Cell), ” “, (First_Cell-Second_Cell) / First_Cell) • In our case the actual formula will be =IF(ISBLANK(C2),” “,(C2-D2)/C2) Step – 2 Calculate the values and ignore blanks • Press Enter and drag down the Fill Handle tool to the rest of the cells. • The formula will be applied automatically to all the column and we can notice that the during the calculations the blank cells were ignored.
{"url":"https://spreadcheaters.com/how-to-ignore-blank-cells-in-excel/","timestamp":"2024-11-11T16:14:06Z","content_type":"text/html","content_length":"58106","record_id":"<urn:uuid:8a37954c-aa2d-437a-b930-1ec8e65c17e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00055.warc.gz"}