# Linear programming: Graphical Method, Sensitivity Analysis
The manager of a Burger Doodle franchise wants to determine how many sausage biscuits and ham biscuits to prepare each morning for breakfast customers. Each type of biscuit requires the following resources.
| Biscuit | Labor (hr) | Sausage (lb) | Ham (lb) | Flour (lb) |
|---|---|---|---|---|
| Sausage | 0.010 | 0.10 | - | 0.04 |
| Ham | 0.024 | - | 0.15 | 0.04 |
The franchise has 6 hours of labor available each morning. The manager has a contract with a local grocer for 30 pounds of sausage and 30 pounds of ham each morning. The manager also purchases 16 pounds of flour. The profit for a sausage biscuit is $0.60; the profit for a ham biscuit is $0.50. The manager wants to know the number of each type of biscuit to prepare each morning in order to maximize profit.
Formulate a linear programming model for this problem.
On a separate spreadsheet, solve the linear programming model formulated above graphically.
a) How much extra sausage and ham are left over at the optimal solution point? Is there any idle labor time?
b) What would the solution be if the profit for a ham biscuit were increased from $0.50 to $0.60?
c) What would be the effect on the optimal solution if the manager could obtain 2 more pounds of flour? | 327 | 1,387 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.34375 | 3 | CC-MAIN-2020-10 | latest | en | 0.905403 |
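A linear programming formulation follows directly from the table: maximize profit Z = 0.6 x1 + 0.5 x2 (x1 = sausage biscuits, x2 = ham biscuits) subject to 0.010 x1 + 0.024 x2 <= 6 (labor), 0.10 x1 <= 30 (sausage), 0.15 x2 <= 30 (ham), 0.04 x1 + 0.04 x2 <= 16 (flour), and x1, x2 >= 0. The exercise asks for a graphical solution; as a cross-check only, here is a minimal pure-Python sketch that enumerates the corner points of the feasible region (for two variables, the optimum of an LP lies at a vertex):

```python
from itertools import combinations

# Constraints a1*x1 + a2*x2 <= b, with x1 = sausage, x2 = ham biscuits
A = [(0.010, 0.024),   # labor (hr)
     (0.10,  0.0),     # sausage (lb)
     (0.0,   0.15),    # ham (lb)
     (0.04,  0.04),    # flour (lb)
     (-1.0,  0.0),     # x1 >= 0
     (0.0,  -1.0)]     # x2 >= 0
b = [6, 30, 30, 16, 0, 0]
profit = (0.60, 0.50)

def solve():
    best = None
    for i, j in combinations(range(len(A)), 2):
        (a1, a2), (c1, c2) = A[i], A[j]
        det = a1 * c2 - a2 * c1
        if abs(det) < 1e-12:
            continue                       # parallel boundaries: no vertex
        x = (b[i] * c2 - b[j] * a2) / det  # Cramer's rule
        y = (a1 * b[j] - c1 * b[i]) / det
        # keep only vertices that satisfy every constraint
        if all(p * x + q * y <= bk + 1e-9 for (p, q), bk in zip(A, b)):
            z = profit[0] * x + profit[1] * y
            if best is None or z > best[0]:
                best = (z, x, y)
    return best

z, x1, x2 = solve()
print(f"{x1:.0f} sausage, {x2:.0f} ham, profit ${z:.2f}")
# → 300 sausage, 100 ham, profit $230.00
```

The optimum lies where the sausage and flour constraints bind: 300 sausage and 100 ham biscuits for a $230 profit, leaving 15 pounds of ham unused and 0.6 hours of labor idle, which answers part (a).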
http://www.bristol.ac.uk/medical-school/media/rms/red/point_estimates_and_population_parameters.html | 1,555,707,846,000,000,000 | text/html | crawl-data/CC-MAIN-2019-18/segments/1555578528058.3/warc/CC-MAIN-20190419201105-20190419223105-00139.warc.gz | 211,442,535 | 4,160 | # Point estimates and population parameters
## Learning outcomes
On watching this video, students should be able to:
1. Explain what is meant by statistical inference.
2. Define a point estimate and population parameter and list common types of point estimates and parameters
3. Identify point estimates and parameters when reading the scientific literature.
Some jargon: please ensure you understand this fully.
Sample statistics (or simply "statistics") are observable because we calculate them from the data (the sample) we collect. We use the statistics calculated from the sample to estimate the value of interest in the population. We call these sample statistics "point estimates", and the value of interest in the population a "population parameter". An example would be to use the sample mean as a point estimate of the population mean; here the population mean is the population parameter we are interested in finding out about.
A population parameter is assumed to be fixed, taking only one value. Population parameters are unknown and almost always unknowable, because they "belong" to populations and we almost never observe whole populations. Common population parameters in a study are those used to describe the distributions of variables, e.g., the mean; but we can estimate any parameter we are interested in, for example the difference between two means or the difference in risk between two groups.
In a nutshell, a point estimate is a sample statistic obtained from the observed sample, used as our best guess of the unobserved population parameter.
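The distinction can be made concrete with a tiny simulation; a minimal sketch, assuming an invented population (mean 50, SD 10) that in practice we would never observe in full:

```python
import random
import statistics

random.seed(42)

# A hypothetical population: normally we never observe this in full.
population = [random.gauss(mu=50, sigma=10) for _ in range(100_000)]
population_mean = statistics.fmean(population)  # the (unknown) parameter

# A sample we *can* observe:
sample = random.sample(population, 100)
point_estimate = statistics.fmean(sample)  # sample statistic = point estimate

print(f"population mean (parameter):  {population_mean:.2f}")
print(f"sample mean (point estimate): {point_estimate:.2f}")
```

The sample mean will rarely equal the population mean exactly, but it is typically close; quantifying how close is the job of standard errors and confidence intervals.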
# BM 201 Final Exam Supplemental Study Guide (Fall 2011)
- Degree of combined leverage: DCL = (Price - Variable costs) × Quantity / [(Price - Variable costs) × Quantity - Fixed costs - Interest expense]
- DCL condensed formula: (P - V)Q / [(P - V)Q - F - I]
- Breakeven (units) = Total fixed costs / Unit contribution margin
- Contribution margin ratio = (Sales price - Variable costs) / Sales price
- DOL = (Sales - Variable costs) / EBIT = Q(P - V) / [Q(P - V) - F]
- Preferred stock: Cost of PS = PS dividend / PS price
- Gordon growth model: D1 = D0 × (1 + g); P0 = D1 / (R - g); Cost of CS = D1 / P0 + g
- WACC: notice that ONLY debt considers taxes. Therefore, the before-tax and after-tax cost of preferred stock and common stock would be the same.
- CAPM:
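The leverage and breakeven formulas above translate directly into code; a minimal sketch with invented example numbers (P = $10, V = $6, F = $8,000, I = $1,000, Q = 5,000 units):

```python
def breakeven_units(fixed_costs, price, variable_cost):
    """Units needed to cover fixed costs (unit contribution margin = P - V)."""
    return fixed_costs / (price - variable_cost)

def dol(quantity, price, variable_cost, fixed_costs):
    """Degree of operating leverage: Q(P - V) / (Q(P - V) - F)."""
    cm = quantity * (price - variable_cost)
    return cm / (cm - fixed_costs)

def dcl(quantity, price, variable_cost, fixed_costs, interest):
    """Degree of combined leverage: Q(P - V) / (Q(P - V) - F - I)."""
    cm = quantity * (price - variable_cost)
    return cm / (cm - fixed_costs - interest)

print(breakeven_units(8_000, 10, 6))    # → 2000.0
print(dol(5_000, 10, 6, 8_000))         # ~1.67
print(dcl(5_000, 10, 6, 8_000, 1_000))  # ~1.82
```

Note that DCL > DOL whenever interest expense is positive: financial leverage amplifies operating leverage.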
## This note was uploaded on 01/04/2012 for the course BUSM 201 taught by Professor Jennlarson during the Fall '11 term at BYU.
# Calculation from Maps
Lesson
This chapter is about using the scale given on a map to calculate the actual distances between pairs of places.
Scale information is usually given in the form of a short line or bar whose length corresponds to a particular real distance. In the questions that go with this chapter, you will see that the scale bar is calibrated in both kilometres and miles. In some cases, you will need to check with a ruler to determine what units are assumed in the given information.
#### Example 1
In this map, the distance between the two locations marked as black dots is approximately 2.5 of the 200-km scale units. This means the actual distance on the ground is 2.5 × 200 = 500 km.
If, instead, we use the 100-mile scale unit, measurement with a ruler shows that there are roughly 3 of these units in the distance. Thus, in miles, the distance is 3 × 100 = 300 miles.
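The same calculation works for any ruler measurement: divide the measured map distance by the printed length of the scale bar, then multiply by the real distance the bar represents. A minimal sketch (the millimetre values below are invented for illustration):

```python
def ground_distance(measured_mm, scale_bar_mm, scale_bar_value):
    """Ground distance from a ruler measurement.

    measured_mm:     distance between the two points on the map (mm)
    scale_bar_mm:    printed length of the scale bar (mm)
    scale_bar_value: real-world distance the bar represents (e.g. 200 km)
    """
    return measured_mm / scale_bar_mm * scale_bar_value

# Suppose the 200-km bar is 10 mm long and the points are 25 mm apart:
print(ground_distance(25, 10, 200))  # → 500.0 (km)
```

This makes explicit that the answer's units are whatever units the scale bar is calibrated in, which is why checking the bar's units matters.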
### Outcomes
#### MS1-12-3
interprets the results of measurements and calculations and makes judgements about their reasonableness | 257 | 1,078 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.125 | 4 | CC-MAIN-2023-50 | latest | en | 0.922192 |
# Light
1. Refraction
1.1. Terminology
1.1.1. Refraction: when light passes into a transparent medium and bends as it crosses the boundary
1.1.2. Angle of refraction: depends on optical density
1.1.2.1. Higher optical density = light transmitted more slowly = bends more (towards the normal)
1.1.2.2. If the ray travels along the normal (angle of incidence 0 degrees), zero refraction
1.1.2.3. r becomes smaller when there is a more optically dense medium
1.1.3. Only between 2 different interfaces e.g. air and water
1.1.3.1. Path taken is the same whether from water to air or air to water (REVERSIBILITY OF LIGHT)
1.2. Snell's Law
1.2.1. sin(i) / sin(r) = n, AKA the refractive index
1.2.2. if light travels from air to water, express n as the air-to-water refractive index
1.2.3. i > r
1.3. Critical Angle
1.3.1. the angle of incidence i in the denser medium for which r = 90 degrees
1.3.2. Beyond the critical angle all light is reflected at the interface: total internal reflection
1.3.3. 100% of light reflected, no absorption
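Snell's law (1.2) and the critical angle (1.3) can be computed directly; a minimal sketch, assuming the usual approximate refractive indices (air 1.0, water 1.33):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: n1 sin(i) = n2 sin(r). Returns r in degrees,
    or None when the ray is totally internally reflected."""
    s = n1 / n2 * math.sin(math.radians(incidence_deg))
    if s > 1:
        return None  # beyond the critical angle
    return math.degrees(math.asin(s))

def critical_angle(n_dense, n_rare=1.0):
    """Angle of incidence in the denser medium for which r = 90 degrees."""
    return math.degrees(math.asin(n_rare / n_dense))

print(refraction_angle(1.0, 1.33, 30))  # air → water: about 22 degrees
print(critical_angle(1.33))             # water → air: about 48.8 degrees
print(refraction_angle(1.5, 1.0, 60))   # glass → air at 60°: None (TIR)
```

Note that i > r when entering the denser medium (point 1.2.3), and that the reversibility of light (1.1.3.1) corresponds to swapping n1 and n2.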
1.4. Ray diagram: Refraction
1.4.1. Draw virtual image
1.4.1.1. mark out one point on object
1.4.1.2. draw another point 1/4 of total distance above first point
1.4.1.3. draw with broken lines
1.4.2. 2 rays: from image point to eye
1.4.2.1. Object: take into consideration r
1.4.2.2. Image: straight lines
1.4.2.3. Below interface: broken line
1.4.2.4. Above interface: straight line
2. Reflection
2.1. Theoretical
2.1.1. Terminology
2.1.1.1. Incident ray: ray that strikes the surface
2.1.1.2. Reflected ray: ray reflected off surface
2.1.1.3. Normal: imaginary line, perpendicular to surface, at O
2.1.1.4. O: point where incident ray strikes surface
2.1.1.5. i: angle between incident and normal
2.1.1.6. r: angle between reflected and normal
2.1.2. Laws
2.1.2.1. 1. Incident, Reflected, Normal all lie in the same plane (use the same surface)
2.1.2.2. 2. i = r
2.2. Types
2.2.1. Specular: for smooth surface (image visible)
2.2.2. Diffused: for rough surface (no image)
2.3. Image characteristics
2.3.1. 1. same size
2.3.2. 2. same distance from mirror as object
2.3.3. 3. same orientation (if object is upright, image is upright)
2.3.4. 4. laterally inverted (left becomes right)
2.3.5. 5. virtual
2.3.5.1. doesn't exist
2.3.5.2. merely a projection: if light rays could go through the mirror then image is created
2.3.5.3. when we see something in the mirror, it is because a light ray travels from the image into our eyes
2.4. Different types of mirrors
2.4.1. Plane
2.4.1.1. see ^ above
2.4.2. Convex
2.4.2.1. Behind the mirror, same orientation, smaller, virtual
2.4.2.1.1. Light rays bounce off the mirror and diverge as if from a focal point behind it
2.4.3. Concave
2.4.3.1. If object is inside focal point:
2.4.3.1.1. Behind mirror, virtual, same orientation, larger
2.4.3.2. If object is outside focal point:
2.4.3.2.1. In front of mirror (same side as object), real, laterally inverted, smaller
2.5. How to draw
2.5.1. Steps
2.5.1.1. Draw image: it's at an equal distance as the object
2.5.1.2. 2 rays from image to eye (see rules on rays)
2.5.1.3. Another two rays from object to O
2.5.2. Rules
2.5.2.1. pencil, ruler for drawing
2.5.2.2. compass, ruler to find location of image
2.5.2.3. LABELLING
2.5.2.4. rays
2.5.2.4.1. solid: real
2.5.2.4.2. broken: virtual (image/normal)
2.5.2.5. object => mirror => eye
3. Colors
3.1. Mixing of color:
3.1.1. Primary
3.1.1.1. Red
3.1.1.2. Green
3.1.1.3. Blue
3.1.2. Secondary
3.1.2.1. Yellow (Red plus Green)
3.1.2.2. Magenta (Red plus Blue)
3.1.2.3. Cyan (Blue plus Green)
3.1.3. Filter
3.1.3.1. Remove color from light
3.1.3.2. Red filter removes Cyan (secondary color)
3.1.3.3. Magenta filter removes Green (primary color)
3.1.3.4. If red light goes through blue filter
3.1.3.4.1. Red shall not pass
3.1.3.4.2. No other color: light transmitted is black
3.2. Dispersion:
3.2.1. Split white light according to wavelength (visible spectrum)
3.2.1.1. Violet
3.2.1.2. Indigo
3.2.1.3. Blue
3.2.1.4. Green
3.2.1.5. Yellow
3.2.1.6. Orange
3.2.1.7. Red
3.2.2. How?
3.2.2.1. Refraction through prism
3.2.2.2. Each color slows down at different rates
3.2.2.2.1. Refracts at different angles
3.2.2.2.2. Red: longest wavelength: fastest, least refraction
3.2.2.2.3. Violet: shortest wavelength: slowest, most refraction
4. Lenses
4.1. Convex
4.1.1. Double convex
4.1.2. Plano convex
4.1.3. Positive meniscus
4.2. Types of images formed by convex (where u = object distance from C)
4.2.1. 1. where u is infinity or beyond 2F, the image is
4.2.1.1. real
4.2.1.2. upside down
4.2.1.3. smaller
4.2.1.4. left to right
4.2.1.5. uses: eye, telescope, camera
4.2.2. 2. where u = 2F
4.2.2.1. Real
4.2.2.2. Upside down
4.2.2.3. same size
4.2.2.4. left to right
4.2.2.5. uses: photocopier
4.2.3. 3. where f < u < 2f
4.2.3.1. Real
4.2.3.2. Upside down
4.2.3.3. Bigger
4.2.3.4. Left to right
4.2.3.5. Photograph enlarger
4.2.4. 4. where u <= f
4.2.4.1. Virtual (cannot be projected onto a screen)
4.2.4.2. Upright
4.2.4.3. Magnified
4.2.4.4. Not laterally inverted
4.2.4.5. Telescope (eyepiece lens)
4.3. Principal rays
4.3.1. 1. Through C; no refraction
4.3.2. 2. Parallel to the principal axis, refracts through F
4.3.3. 3. Passes through F, emerges parallel to the principal axis
4.4. Terminology
4.4.1. Optical center; C (center of lens)
4.4.2. Principal axis: the line through C perpendicular to the lens
4.4.3. Focal point (F): where rays parallel to the principal axis converge to form an image
4.4.4. Focal length (f): distance between F and C
4.4.5. Focal plane: the plane through F perpendicular to the principal axis
4.5. Concave
4.5.1. Diverges light rays
4.5.2. Focal point is where the light rays gather... when extended backwards
4.5.3. Types
4.5.3.1. Double concave
4.5.3.2. Plano concave
4.5.3.3. Negative meniscus
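The four convex-lens cases in 4.2 follow from the thin-lens equation 1/f = 1/d_obj + 1/d_img (real-is-positive sign convention, not stated in the notes); a minimal sketch reproducing cases 2 and 4 for an assumed f = 10 cm lens:

```python
import math

def image_distance(f, d_obj):
    """Thin-lens equation 1/f = 1/d_obj + 1/d_img.
    Positive result: real image; negative: virtual image."""
    if d_obj == f:
        return math.inf  # object at F: rays emerge parallel, no image
    return f * d_obj / (d_obj - f)

def magnification(f, d_obj):
    """Negative magnification means the image is inverted."""
    return -image_distance(f, d_obj) / d_obj

# f = 10 cm convex lens:
print(image_distance(10, 20), magnification(10, 20))  # → 20.0 -1.0 (case 2: real, inverted, same size)
print(image_distance(10, 5), magnification(10, 5))    # → -10.0 2.0 (case 4: virtual, upright, magnified)
```

Trying d_obj values beyond 2f or between f and 2f reproduces cases 1 and 3 as well.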
5. Applications of Optics
5.1. Optical fibers
5.1.1. Telecommunications
5.1.1.1. a glass fibre carries the signal-bearing laser light by repeated internal reflection all the way to a receiver unit
5.1.2. Endoscopy
5.1.2.1. Millions of small optic fibers connected to one control body
5.1.2.2. Optic fibers shine light, image is then processed by control body
5.1.2.3. Allows surgeries to involve a small hole to stick endoscope in (as opposed to large slash)
5.2. Prisms
5.2.1. A block of isosceles right-angled triangle shaped glass
5.2.1.1. As light enters the prism it is totally internally reflected, turning through 90 degrees
5.2.1.2. Arrange two in the same tube, get a periscope
5.2.1.3. Arrange two opposite each other, invert light 180 degrees, and see what's behind you
5.3. Inferior Mirage
5.3.1. Cool air is above hot air in desert
5.3.2. Total internal reflection: the blue of the sky is reflected into our eyes instead of the ground
5.3.3. We mistake this for a water body
5.4. Superior Mirage
5.4.1. When cold air is below warm air, and light rays on objects are projected into the sky | 2,483 | 6,829 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.046875 | 3 | CC-MAIN-2018-51 | latest | en | 0.721908 |
# Use DO loop in Plot order at mathematica 8
by es.no
Can anyone help me with this problem? I want to use a Do loop inside a Plot command in Mathematica 8; how can I do this? I want to change one variable, see the effect on the other variables in the graph, and plot all of my curves with one Plot command. I uploaded my file too.
This does not use a Do loop. Look up Plot in the help system and see that Plot[{f1, f2, f3}, ...] will plot f1 and f2 and f3 on the same graph. Then realize you can make {f1, f2, f3} using Table[]:

    Plot[Table[Sqrt[r^2 - x^2], {r, 1, 4}], {x, 0, 4}, AspectRatio -> Automatic]

Or do this the hard and error-prone way with a Do loop:

    r = 4;
    plotlist = {};
    Do[
      AppendTo[plotlist, Plot[Sqrt[r^2 - x^2], {x, 0, 4}]];
      r = r - 1,
      {4}];
    Show[plotlist, AspectRatio -> Automatic]
# Visual display of multiple comparisons test
Suppose, the data below shows the mean response time on a task for respondents among four different groups:
| A | B | C | D |
|---|---|---|---|
| 1.2 | 2.3 | 4.5 | 6.7 |
In order to assess which one of the means are different from one another I do a multiple comparisons test (after an omnibus ANOVA test is cleared) and the multiple comparisons test tells me that the mean for group D is significantly different from the ones for groups A and B and no other pair of differences is significantly different.
What is the best way to present this information visually?
• Boxplot for each group with brackets above looks great, and you also display variability of data...(or instead of brackets you can use notches). If you have basic skills with "R" I can provide you working code. Oct 16, 2013 at 14:57
• Two issues with a boxplot. (a) It can be a challenge to interpret the plot for a non-technical audience (especially for someone who has never seen a boxplot). (b) It does not scale well if I have lots of such multiple-comparisons tests. Imagine doing the above for 20 such tests, in which case a table with 20 rows with suitable emphasis/visualization seems compact relative to 20 boxplots.
– prop
Oct 16, 2013 at 15:02
• Two questions: 1-What exactly do you want to show: basic data, the multiple comparisons, the comparisons' differences, all of the above? 2-What is the visualization's purpose and audience? Data exploration for you, or explanation for a non-tech audience (if both, you probably need two viz's). Also, you mention that the mean for D is different than A & B; what about C?
– dav
Oct 16, 2013 at 15:12
• 1. Right now my goal is to show the means and just draw the attention to the ones that are different. 2. The audience is not statistically aware and showing them a table of means with an emphasis on the ones that are different seems to be the right approach to me.
– prop
Oct 16, 2013 at 15:19
• How many points do you have? With just 4, it's going to be difficult to show why D is different but C is not. For this, I'd almost do a simple dot-plot with a different symbol for D since it is statistically different.
– dav
Oct 16, 2013 at 15:29
iv <- c("A","B","C","D")
dv <- c(1.2,2.3,4.5,6.7)
gp <- c(1,1,1,2)
par(mai=c(1,1,0,0))
plot(dv, gp, axes=F, xlab="Average time",
     ylab="Grouping based on \n mean comparison",
     ylim=c(0,3), xlim=c(0,7), pch=16)
text(dv, gp-.2, iv)
axis(side=2, label=c("i", "ii"), at=c(1,2))
axis(side=1)
abline(h=c(1,2),col="blue",lty=3)
Provide a footnote: Means on the same horizontal reference line are not statistically different from each other. Alpha = 0.05, Bonferroni adjustment
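The footnote's Bonferroni adjustment is easy to sketch; the p-values below are invented so that only the D-vs-A and D-vs-B comparisons survive the correction, matching the question's scenario:

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Bonferroni: reject H0 for a pair iff its p-value < alpha / (number of tests)."""
    cutoff = alpha / len(pvals)
    return {pair: p < cutoff for pair, p in pvals.items()}

# hypothetical p-values for the six pairwise comparisons among A-D
pvals = {"A-B": 0.20, "A-C": 0.04, "A-D": 0.004,
         "B-C": 0.30, "B-D": 0.007, "C-D": 0.09}
print(bonferroni_reject(pvals))
# only A-D and B-D fall below 0.05/6 ≈ 0.0083
```

Note that A-C (p = 0.04) would be "significant" unadjusted but not after correction, which is exactly the kind of distinction the grouping display has to encode.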
And I really like this design because you can flexibly accomodate group means with multiple memberships. Like in this case, C is not different from D and also not different from A and B:
iv <- c("A","B","C", "C", "D")
dv <- c(1.2,2.3,4.5, 4.5, 6.7)
gp <- c(1,1,1,2,2)
par(mai=c(1,1,0,0))
plot(dv, gp, axes=F, xlab="Average time",
     ylab="Grouping based on \n mean comparison",
     ylim=c(0,3), xlim=c(0,7), pch=16)
text(dv, gp-.2, iv)
axis(side=2, label=c("i", "ii"), at=c(1,2))
axis(side=1)
abline(h=c(1,2),col="blue",lty=3)
This chart type scales well, handles large numbers of data points well and is very easy to understand-even to a non-tech audience.
The point is that your dataset is too small (4 groups, 5 values each). The means obtained from such data are not very accurate representative values for each group, and therefore you should not run an ANOVA to make inferences about differences among groups.
One thing is to be understandable to the audience, but more important is to be scientifically accurate.
I suggest solving this issue with a Kruskal-Wallis test followed by multiple comparisons.
Boxplots (with medians) are probably the most used graphical representation of multiple comparisons of groups. To display differences you either draw brackets above pairs which are statistically different (annotated with *** symbols or N.S.), which looks good if you have a small number of groups, or you add notches to each boxplot (very helpful with a large number of groups), by which anyone can find the desired comparison by eye.
You may created boxplots for example in R:
data <- data.frame(value=c(rnorm(60), rnorm(20)+3),
group=rep(c("A", "B", "C", "D"), each=20))
value group
1 -1.206926025 A
2 -0.311125313 A
3 1.336579675 A
......
21 1.543827796 B
22 -1.874257866 B
......
80 4.383037868 D
etc.
boxplot(data$value ~ data$group, notch=TRUE,
        col = "red", xlab="group", ylab="value")
Boxplots show median values instead of means. I strongly suggest not to display ONLY mean values for each group. Raw data are the last possibility.
# Ternary error correction codes
Let's define a ternary ECC as a code whose codewords can be defined by $$\{ xyz f(y,z) f(x,z) f(x,y) | x,y,z \in \{0,1\}^m \}$$ for some function $$f$$. $$f$$ returns a bitstring of constant length.
Are there any known good error correction codes that are ternary?
Such a family of LDPC codes would be best.
Is there a reason it won't be good (in terms of distance, rate)?
It might be useful in a construction I have. I just wanted to make sure it is not known already before I dive in.
Thanks
• Your question is not clear. Please give more details if you want appropriate answers. – Shahrooz Janbaz Mar 24 at 19:50
• I improved this, but let me know what is unclear, if it is still unclear. – user2679290 Mar 24 at 20:29
• Does $xy$ means the concatenation of $x$ and $y$? Please give an example for such codes and the function $f$ – Shahrooz Janbaz Mar 24 at 20:55
• This is an interesting question, but the choice of "ternary" seems unfortunate-- there's a already a whole literature on ternary ECCs, where "ternary" means "base-3". Your codes are binary with special structure. – Bill Bradley Mar 25 at 3:17
• The function $f$ is not defined. I guess it is any function of two variables where the input is a pair of binary words of length $m$ and the output is a binary word of length $m$. So the code words have lengths $6m$, the first half of a word is an arbitrary binary word of length $3m$ and the second half depends on $f$ and is for error checking. Then everything depends on $f$. For example if $f$ is a constant function, then the second halfs of the code words are the same and the usefulness of this code is doubtful. In general, using so many bits for error correcting seems excessive. – user6976 Mar 25 at 4:28
Well, if the function $$f$$ has range $$GF(2)^m$$, represented by $$GF(2^m)$$ if convenient, the code has rate 1/2. Such a function can really control symbol ($$GF(2^m)$$) errors, not bit errors, so it is a code over $$GF(2^m)$$ of length $$n=6$$ and rate 1/2 (dimension 3). If the code is MDS [best possible] it has symbol distance at most $$n-k+1=6-3+1=4$$, so it could correct double symbol errors.
A Reed-Solomon code would achieve this, and is the optimal such code. But we do need $$2^m+1\geq n=6,$$ (since Reed Solomon codes are essentially evaluation codes) so $$m$$ would have to be at least 3.
Edit: If $$f$$ maps into $$GF(2)$$ as suggested by Gerry Myerson, then this is a single error correcting code with $$n=3m+3$$ and $$3$$ parity checks. If $$3m+3=2^n-1,$$ then a Hamming code will do, and no fancy $$f$$ could do better. | 742 | 2,590 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 18, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.890625 | 3 | CC-MAIN-2020-40 | longest | en | 0.896034 |
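As an aside, the Hamming code mentioned in the edit is easy to demonstrate; a minimal Python sketch of the classic Hamming(7,4) code (4 data bits, 3 parity checks, single-error correction), separate from the question's $$f$$-based construction:

```python
def encode(d):
    """d = [d1, d2, d3, d4] -> 7-bit codeword with parity at positions 1, 2, 4."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def correct(c):
    """XOR the 1-based positions of the 1-bits; a nonzero syndrome
    is exactly the position of a single flipped bit."""
    syndrome = 0
    for pos, bit in enumerate(c, start=1):
        if bit:
            syndrome ^= pos
    fixed = c[:]
    if syndrome:
        fixed[syndrome - 1] ^= 1  # flip the erroneous bit back
    return fixed

word = encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1                  # flip the bit at position 5
print(correct(corrupted) == word)  # → True
```

The same syndrome trick works for any single flip, in any of the seven positions, for any data word.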
# 12 CFR 226, Appendix M2 to Part 226 - Sample Calculations of Repayment Disclosures
Pt. 226, App. M2
Appendix M2 to Part 226—Sample Calculations of Repayment Disclosures
The following is an example of how to calculate the minimum payment repayment estimate, the minimum payment total cost estimate, the estimated monthly payment for repayment in 36 months, the total cost estimate for repayment in 36 months, and the savings estimate for repayment in 36 months using the guidance in Appendix M1 to this part where three annual percentage rates apply (where one of the rates is a promotional APR), the total outstanding balance is $1000, and the minimum payment formula is 2 percent of the outstanding balance or $20, whichever is greater. The following calculation is written in SAS code.
data one;
/*
Note: pmt01 = estimated monthly payment to repay balance in 36 months
sumpmts36 = sum of payments for repayment in 36 months
month = number of months to repay total balance if making only minimum payments
pmt = minimum monthly payment
fc = monthly finance charge
sumpmts = sum of payments for minimum payments
*/
* inputs;
* annual percentage rates; apr1=0.0; apr2=0.17; apr3=0.21; * insert in ascending order;
* outstanding balances; cbal1=500; cbal2=250; cbal3=250;
* dollar minimum payment; dmin=20;
* percent minimum payment; pmin=0.02; * (0.02 perrate);
* promotional rate information;
* last month for promotional rate; expm=6;* = 0 if no promotional rate;
* regular rate; rrate=.17; * = 0 if no promotional rate;
array apr(3); array perrate(3);
days=365/12; * calculate days in month;
* calculate estimated monthly payment to pay off balances in 36 months, and total cost of repaying balance in 36 months;
array xperrate(3);
do I=1 to 3;
xperrate(I)=(apr(I)/365)*days; * calculate periodic rate;
end;
if expm gt 0 then xperrate1a=(expm/36)*xperrate1 + (1-(expm/36))*(rrate/365)*days; else xperrate1a=xperrate1;
tbal=cbal1 + cbal2 + cbal3;
perrate36=(cbal1*xperrate1a + cbal2*xperrate2 + cbal3*xperrate3)/(cbal1 + cbal2 + cbal3);
* months to repay; dmonths=36;
* initialize counters for sum of payments for repayment in 36 months; Sumpmts36=0;
pvaf=(1-(1 + perrate36)**-dmonths)/perrate36; * calculate present value of annuity factor;
pmt01=round(tbal/pvaf,0.01); * calculate monthly payment for designated number of months;
sumpmts36 = pmt01 * 36;
* calculate time to repay and total cost of making minimum payments each month;
* initialize counter for months, and sum of payments;
month=0;
sumpmts=0;
do I=1 to 3;
perrate(I)=(apr(I)/365)*days; * calculate periodic rate;
end;
put perrate1=perrate2=perrate3=;
eins:
month=month + 1; * increment month counter;
pmt=round(pmin*tbal,0.01); * calculate payment as percentage of balance;
if month ge expm and expm ne 0 then perrate1=(rrate/365)*days;
if pmt lt dmin then pmt=dmin; * set dollar minimum payment;
array xxxbal(3); array cbal(3);
do I=1 to 3;
xxxbal(I)=round(cbal(I)*(1 + perrate(I)),0.01);
end;
fc=xxxbal1 + xxxbal2 + xxxbal3 - tbal;
if pmt gt (tbal + fc) then do;
do I=1 to 3;
if cbal(I) gt 0 then pmt=round(cbal(I)*(1 + perrate(I)),0.01); * set final payment amount;
end;
end;
if pmt le xxxbal1 then do;
cbal1=xxxbal1 - pmt;
cbal2=xxxbal2;
cbal3=xxxbal3;
end;
if pmt gt xxxbal1 and xxxbal2 gt 0 and pmt le (xxxbal1 + xxxbal2) then do;
cbal2=xxxbal2 - (pmt - xxxbal1);
cbal1=0;
cbal3=xxxbal3;
end;
if pmt gt xxxbal2 and xxxbal3 gt 0 then do;
cbal3=xxxbal3 - (pmt - xxxbal1 - xxxbal2);
cbal2=0;
end;
sumpmts=sumpmts + pmt; * increment sum of payments;
tbal=cbal1 + cbal2 + cbal3; * calculate new total balance;
* print month, balance, payment amount, and finance charge;
put month=tbal=cbal1=cbal2=cbal3=pmt=fc=;
if tbal gt 0 then go to eins; * go to next month if balance is greater than zero;
* initialize total cost savings;
savtot=0;
savtot=round(sumpmts,1) - round(sumpmts36,1);
* print number of months to repay debt if minimum payments made, final balance (zero), total cost if minimum payments made, estimated monthly payment for repayment in 36 months, total cost for repayment in 36 months, and total savings if repaid in 36 months;
put title=' ';
put title='number of months to repay debt if minimum payment made, final balance, total cost if minimum payments made, estimated monthly payment for repayment in 36 months, total cost for repayment in 36 months, and total savings if repaid in 36 months';
put month=tbal=sumpmts=pmt01=sumpmts36=savtot=;
put title=' ';
run;
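The heart of the 36-month estimate is the present-value-of-annuity step (pvaf in the SAS code); a minimal Python sketch using the example inputs above (0% promotional APR on the $500 balance for 6 of 36 months, then the 17% regular rate):

```python
def payment_for_term(balance, periodic_rate, months=36):
    """Level monthly payment that repays `balance` in `months`
    at the given periodic (monthly) rate."""
    pvaf = (1 - (1 + periodic_rate) ** -months) / periodic_rate
    return balance / pvaf

days = 365 / 12  # days in an average month, as in the SAS code

# blended periodic rate: promo 0% applies to the $500 balance for 6 of 36 months
xperrate1a = (6 / 36) * 0.0 + (1 - 6 / 36) * (0.17 / 365) * days
xperrate2 = (0.17 / 365) * days
xperrate3 = (0.21 / 365) * days
perrate36 = (500 * xperrate1a + 250 * xperrate2 + 250 * xperrate3) / 1000

print(round(payment_for_term(1000, perrate36), 2))  # roughly $35.45 a month
```

The minimum-payment side of the disclosure then comes from iterating the month-by-month loop above until the balance reaches zero, exactly as the SAS code does.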
[75 FR 7846, Feb. 22, 2010]
Title 12 published on 2012-01-01
The following are only the Rules published in the Federal Register after the published date of Title 12.
For a complete list of all Rules, Proposed Rules, and Notices view the Rulemaking tab.
• 2013-02-13; vol. 78 # 30 - Wednesday, February 13, 2013
1. 78 FR 10368 - Appraisals for Higher-Priced Mortgage Loans
GPO FDSys XML | Text
DEPARTMENT OF THE TREASURY, FEDERAL RESERVE SYSTEM, NATIONAL CREDIT UNION ADMINISTRATION, FEDERAL HOUSING FINANCE AGENCY, BUREAU OF CONSUMER FINANCIAL PROTECTION, Office of the Comptroller of the Currency
Final rule; official staff commentary.
This final rule is effective on January 18, 2014.
12 CFR Parts 34 and 164
www.plugincars.com | 1,386,376,504,000,000,000 | text/html | crawl-data/CC-MAIN-2013-48/segments/1386163052909/warc/CC-MAIN-20131204131732-00000-ip-10-33-133-15.ec2.internal.warc.gz | 483,421,448 | 27,574 | # Doublecheck My Math: Hybrids Can Be Greener Than Electric Cars
Photo by Afroswede via flickr/creativecommons.
It started with an assignment from Home Power magazine. They asked me to create a graph to visualize the relative efficiency and green-ness of various cars—so I decided to undertake a comparison of CO2 emissions and cost for a typical gas, hybrid and electric car.
I specifically wanted to know about the relative carbon emissions of pure electric cars in different parts of the country, depending on the coal/renewable mix at various electricity generation plants. People repeatedly ask, “Are electric cars really green?” and I want to finally shoot down EV critics—as we frequently do on PluginCars.com—with some hard data.
So, I reached out to experts in the field of environmental lifecycle analysis, like Costa Samaras at the non-profit Rand Corporation, who led me to the EPA’s eGrid analysis, which provides a count on the kg of CO2 equivalents for a kWh of electricity in different regions of the country.
The latest data comes from 2005. Is it accurate? Is the methodology correct? Has the grid gotten a lot greener since 2005? I don’t know, but it’s the ONLY data out there with CO2 counts for different regions. On top of those questions, Costa’s research makes it very clear that it’s impossible to trace the exact source of electricity for any individual household. We have a national grid, and electrons don’t respect state or regional borders.
## Break Out the Calculator
Stick with me folks, the numbers are going to start flying.
With kg/CO2 per kWh numbers in hand, I was able to convert that number to pounds of CO2 per mile by using a factor of 3.6 miles per kWh of EV driving. That's lower than the rule-of-thumb for EV efficiency—4 miles per kWh—and way lower than many EV drivers experience, but that's what the EPA used to account for total consumption, including losses during transmission and charging, for their 2012 to 2016 fuel efficiency rules. (If you don't like it, don't call me. Call Lisa Jackson.)
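That conversion is simple enough to sketch in a few lines of Python (a hypothetical helper; the 3.6 miles-per-kWh figure is the EPA assumption described above, and 2.20462 is the standard pounds-per-kilogram factor):

```python
LBS_PER_KG = 2.20462
MILES_PER_KWH = 3.6  # EPA assumption, includes transmission and charging losses

def ev_lbs_co2_per_mile(kg_co2_per_kwh):
    """Convert grid carbon intensity (kg of CO2 per kWh, as reported
    by eGrid) into pounds of CO2 per mile of EV driving."""
    return kg_co2_per_kwh * LBS_PER_KG / MILES_PER_KWH
```

A grid rated at .49 kg of CO2 per kWh, for instance, works out to roughly 0.30 pounds of CO2 per EV mile.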
I took a similar lifecycle approach to determining the pounds of CO2 for a 30-mpg gas-powered car and a 50-mpg hybrid. Yes, I know that most Toyota Prius drivers, for example, don’t routinely get 50 miles to the gallon, but that’s just one of many assumptions that I decided to make. (Careful hybrid drivers do get 50-mpg, and I anticipate a lot more will in coming years as hybrids, along with all cars, get more efficient to meet rising federal standards.)
I used 23.7 pounds of CO2 for each gallon of gas burned. That comes right from the DOE Argonne National Lab’s GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model, that includes the primary energy source extraction, transportation and processing for gasoline. Is that accurate and complete? Who knows?
So, if you look at a driver who clocks 15,000 miles per year, then it's easy enough to calculate CO2 for 500 gallons (for the 30-mpg car); 300 gallons (for the 50-mpg hybrid); and 4,167 kilowatt hours for the pure EV. (15,000 miles divided by 3.6 miles per kWh produced the 4,167 number.) For fuel costs and electricity rates, I pulled up regional numbers from the DOE's U.S. Energy Information Administration. I couldn't find some of the regional gas prices, so I found those from gas-buddy-type sites on the web.
## Sum It Up
With all this data, and all those assumptions, I started crunching numbers for 10 different cities. The cities are just representative locations, because again it’s impossible to know where the electrons came from.
For San Francisco, near where I live, I used a CO2 average of three different grids (from eGrid)—rated at .36, .45 and .66 kg of CO2 per kWh—to indicate emissions for 15,000 miles of driving as follows:
• 4,345 pounds of CO2 for the EV
• That’s tremendously cleaner than the 30-mpg car’s 11,850 pounds of CO2
• It’s also way better than the 50-mpg hybrid’s 7,110 pounds of CO2
On a cost basis, the EV also comes out way ahead. Even at \$0.1552 per kWh—I know that time-of-use rates might be significantly lower, but that’s what the USEIA is using for California—the EV beats the competition. I used \$3.12 for the cost of gas to reveal the cost of a year’s worth of driving in San Francisco:
• The EV tallied at \$647, compared to the hybrid’s \$936, and the gas car’s \$1,560.
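The gasoline and cost figures above can be double-checked with a short script (a sketch using the article's stated assumptions; the EV's CO2 line is omitted because it depends on which grid-intensity average you pick):

```python
ANNUAL_MILES = 15000
LBS_CO2_PER_GALLON = 23.7   # GREET figure, including extraction and refining
MILES_PER_KWH = 3.6         # EPA EV efficiency assumption

# San Francisco inputs used in the article
GAS_PRICE = 3.12            # dollars per gallon
KWH_PRICE = 0.1552          # dollars per kWh

gas_gallons = ANNUAL_MILES / 30        # 30-mpg gas car
hybrid_gallons = ANNUAL_MILES / 50     # 50-mpg hybrid
ev_kwh = ANNUAL_MILES / MILES_PER_KWH  # about 4,167 kWh

gas_co2 = gas_gallons * LBS_CO2_PER_GALLON        # 11,850 lbs
hybrid_co2 = hybrid_gallons * LBS_CO2_PER_GALLON  # 7,110 lbs

gas_cost = gas_gallons * GAS_PRICE        # $1,560
hybrid_cost = hybrid_gallons * GAS_PRICE  # $936
ev_cost = ev_kwh * KWH_PRICE              # about $647
```

Those few lines reproduce the $1,560 / $936 / $647 annual fuel costs and the 11,850- and 7,110-pound CO2 figures quoted above.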
So far, so good. But the numbers become troubling when you start looking at parts of the country with dirty grids and expensive electricity. Warning: Don't shoot me. I'm just the number cruncher and I've told you about all my assumptions. But for Phoenix and Boston, I show the EV emitting at least a couple of hundred pounds MORE CO2 than the 50-mpg hybrid. And in Boston, the annual cost of driving the EV only beats the hybrid by $191.
## Postscript
I've probably confused everybody with my back-and-forth calculations, and the incorrect descriptions, but to me, the situation is confirmed (as much as it can be confirmed based on government numbers). In a couple of cities where electricity is dirty, the 50-mpg hybrid actually beats the electric car for low CO2. At first, that surprised me, but I quickly came to see it as the exception that proves the bigger and more important rule: for a myriad of reasons—from less local air pollution to greater reduction of our dependence on foreign oil and lower fuel costs—the pure electric car is as green as it gets. At the same time, the critical importance of the conventional (no-plug) hybrid—especially considering that its adoption rates (sales) are likely to be faster and higher than EVs'—should be appreciated. I'm sure users will fill in a bunch of other gaps in the comments below.
The final graphic showing a comparison of 10 different cities will appear in Home Power in a couple of months. I'll let you know when it hits the stands.
## Similar Articles
· Max (not verified) · 3 years ago
It looks like your numbers for the electric car might be wrong:
For San Francisco
Electric: (15000 mi) / (3.6 mi/kWhr) * (0.183 kg CO2/kWhr) * (2.2 lbs / kg) = 1677.5 lbs CO2 (my numbers are lower by a factor of ~2.6)
Hybrid: (15000 mi) / (50 mpg) * (23.7 lbs CO2/gal) = 7110 lbs CO2 (same as article's)
Gas: (15000 mi) / (30 mpg) * (23.7 lbs CO2/gal) = 11850 lbs CO2 (same as article's)
I suppose whatever error you made in calculating the electric cars CO2 output probably carried over to your calculations for other cities (assuming I didn't make a mistake).
· Michael Corder (not verified) · 3 years ago
I appreciate that you tried to simplify the calculations. However, my take is that on average, for the general debate on this subject, that the average savings in CO2 is more important than pegging it to a specific locale (Unless you happen to be from that locale). Trying to simplify things to the lowest common denominator, I made the same calculations a while back using the average CO2 emissions figure for our electric grid.
· Ernie (not verified) · 3 years ago
Something you don't mention is your conversion from kg of CO2 per kilowatt hour to pounds of CO2 (the units you use in the rest of your calculations) per kilowatt hour. Those unit conversions will really kill you (ask NASA, or the pilot who landed the Gimli Glider) if you get that part wrong. Not doing the conversion will result in powerplant emissions that are way lower for the electric grid in your calculations than they are in reality. Also, it's good form to round off at a large number of decimal places (say at least 5) to reduce compound rounding error in equations.
The best way of doing things is to either convert everything before doing calculations, or otherwise start with the right unit measure to begin with.
· Ad van der Meer · 3 years ago
I agree with Max in his calculations. Well, almost: 2.205 lbs to the kg, so I get 1681.313 lbs CO2
I wonder if there is enough data to include PHEV's in this comparison.
· Ben Rose (not verified) · 3 years ago
I did these numbers for the UK some time ago, comparing Nissan Leaf with Toyota Prius. Link below if you're interested. I found the results to be considerably closer, like within 10%. I haven't checked your numbers but I think the main difference is that you're using theoretical, not real, numbers.
http://www.jaffacake.net/dx/nissan-leaf-hidden-emissions?opendocument&co...
· abasile · 3 years ago
I just ran the numbers myself, and I also think Max got it right. That would mean that EVs come out ahead even when electricity generation is comparatively dirty, which is consistent with other articles I've come across.
That said, any EV's environmental benefits are only as good as the gasoline-powered driving it is able to offset. For those who often require more range than current EV's provide, a hybrid can be a better choice. For that reason, we are now leaning toward buying a Prius rather than a LEAF at this time, in hopes of being able to purchase a longer range EV next time we are in the market for a car. Of course, some families might be able to really optimize their driving in the near-term by owning both a Prius and a LEAF.
· Brad Berman · 3 years ago
[Note 11/6: I was too quick to apologize. The calculations in the main body of this article are correct. What I say in this comment is actually incorrect.]
YES! Max and the others are right. I plead complete stupidity for multiplying the CO2 factor by 15,000 miles rather than 4,167 kilowatt hours. No doubt about it. Using established DOE and EPA numbers, EVs have a huge advantage in reducing CO2 emissions.
I don't expect these revised numbers to change anybody's opinion one way or the other--or to change the rate of market adoption. As abasile points out, there are a multitude of factors for buying one car versus another, and the greener choice will differ for individual consumers, based on their needs and pocketbook. But the cost, CO2 and petroleum displacement numbers for EVs do tell a compelling story.
· Jose Freire (not verified) · 3 years ago
I think your findings are correct since it all depends on how the electricity is produced.
Take Portugal for example (my country). We have wind power excess at night time, so if we charge our cars at night we'll have close to 0 kg of CO2 per kWh.
It's not a surprise we're one of the first markets to see the Nissan Leaf.
However, we shouldn't look at how the electricity is produced today, but how it will be produced in the future. It's easier to double the efficiency of a coal plant (by using combined cycle), than to increase the efficiency of an internal combustion engine.
· Ben Rose (not verified) · 3 years ago
I simply don't believe your "DOE and EPA numbers" are anything like a true representation. In the UK, every new car has the official CO2 output on the windscreen sticker - the Prius is about as low as it gets and within 10% of those for the Leaf.
Given the pollution levels across the country, and general resistance to sign any commitment to reduce them, I find it hard to believe that US power generation is any cleaner than the UK.
· John V (not verified) · 3 years ago
I covered much of the same ground a couple of years ago in an article discussing the 2007 EPRI-NRDC study.
I used the state-by-state grid CO2 emissions of electricity (measured in g/kWh) as published by the DoE (you'll have to convert pounds to grams). Excerpt:
In the United States, the three cleanest states—at well below 200 grams of CO2 per kWh—are Idaho, Washington, and Oregon, due to their extremely high percentage of hydroelectric generation. The worst—at just over 1000 g/kWh—are North Dakota and Wyoming, which use large amounts of coal. California, the state that buys the most Priuses, comes in at roughly 450 g/kWh, about 25 percent better than the U.S. average. Be aware, though, that much electricity crosses state lines.
Based on some further math, I concluded that at 25 mpg (210 g/km of CO2), every plug-in has a lower CO2 footprint per mile even if fueled on the dirtiest grid in the U.S.
But at 50 mpg (e.g. 2011 Toyota Prius), which translates to 104 g/km, plus about one-third more for extraction, refining, etc. (let's say 140 g/km total), in a handful of edge cases, a plug-in operating on the dirtiest grids in the country (at 1000 g/kWh divided by 5 miles/kWh = 200 g/km) has a higher overall carbon footprint. In that case, the pure hybrid is better.
Hope this helps.
· John V (not verified) · 3 years ago
Links got stripped out of my comment above:
· Brad Berman · 3 years ago
John - Thanks for your comments. How are you getting 450 g/kWh for CO2 from California electricity? I see in the 2001 DOE doc you linked to where it shows .305 pounds (not grams) of per kWh in California. That converts to about 138 grams per kWh, right? Incidentally, based on the 2005 eGrid averages of three California grids, I used 136 grams (almost exactly the same).
The 136 g/kWh would obviously bring down the CO2 for the California grid compared to your 450 number? Again, how did you get that? Is my math off again somewhere. I don't doubt your number, because it sorta matches what we published on HybridCars.com earlier this year:
http://www.hybridcars.com/fuels/electric-car-future-fix-grid-first-27623...
That article came to the same conclusion that pure hybrids, in places where the grid is dirty, work out better.
Thanks for helping sort this out. Great conversation to be having as EVs start arriving soon.
· ex-EV1 driver · 3 years ago
The kind of driving required to get 50 mpg with a Prius should easily get you 4 mi/kWhr for an EV with similar aerodynamics and weight. There's nothing wrong with using conservative numbers but any numbers you do get will be biased an additional 25% against the EV with your assumptions.
I've seen many other, different computations, some come up with the EV from pure coal beating the Prius and others with the Prius edging slightly ahead. My rationality towards EVs then is that their carbon footprints are 'about the same' in the worst case as a Prius but the EV wins by a landslide when compared with the national average grid mix, any future grid mix with more clean or renewable sources, the California mix, particulate emissions, and ICE vehicles other than the Prius. I also can and have put up solar panels that offset the electricity used by my EV so the issue can be in the hands of the owner, should the owner choose to do so.
· Steven (not verified) · 3 years ago
Now for a real challenge!
Let’s start with the batteries of EVs. Maybe I’ve been ingesting too much carbon washing but I seem to recall articles suggesting the production of EV batteries is extremely energy-intensive. I believe one even suggested this is so much the case that any efficiencies gained in actual operation are more than offset in the EV battery production. If that’s not enough, how about tacking on using solar panels to charge the EV battery? Years ago I worked with a guy who had a PhD in physics. He claimed that with the efficiencies then achievable, PV panels wouldn’t even repay the energy it cost to produce them. And if you take it this far, how about factoring in the energy it takes to produce the aluminum mounting rails for the PV panels?
· Brad Berman · 3 years ago
Okay, folks, sorry about this. But Max was misled by the 0.183 number I provided in the article at first. The eGrid numbers for the three California grids are .36, .45, and .66 kg of CO2 per kWh.
Throw away the 0.183 number if you're using Max's formula. That was my intermediate number of kg/CO2 per mile based on 3.6 miles per kilowatt hour (for the worst of three grids). In other words, the worst of the California grids is rated at .66 kg of CO2 per kWh. Divide that by 3.6 miles to produce 0.183 kg/CO2 per mile. Convert that to .403 pounds/CO2 per mile x 15k miles and you come back to my original number.
This jibes with what John V and ex-EV1 have said. The best case scenario for hybrids puts them slightly better than EVs charged from the dirtiest grid.
ex-EV1 - Of course, all your other points make total sense, especially about solar panels.
Sorry for confusion, gang. A calculator is a dangerous thing to put in the wrong hands.
· Tom Moloughney · 3 years ago
Steven,
I'm not really sure how much energy it takes to produce the panels, inverters, racking system and wiring needed to build a solar PV system but I'm pretty sure it's not anywhere near the amount of energy the system will produce over its lifetime. I have 39 SunPower 225W panels and produce about 10.5 MWh per year. The system is guaranteed to be at least 80% efficient for 25 years and I'm told I can really expect 30+ years of solid production. Based on this I would assume the system will generate between 280 and 300 MWh of electricity in its lifetime. That's enough electricity to power an EV well over a million miles and save me from buying nearly 50,000 gallons of gasoline driving a car that gets 25mpg. That's a good amount of energy if you ask me.
· ex-EV1 driver · 3 years ago
@Steven,
Great question, one that all PV manufacturers should be answering for their products. This same question was posed to Tesla a few years ago and their financier and Chairman, Elon Musk and their then-President Martin Eberhart saw it is worthy of answering personally. You can find their response and some references at:
http://www.teslamotors.com/blog/electric-cars-and-photovoltaic-solar-cells
· Steven (not verified) · 3 years ago
Ask and you shall receive! Effortlessly! Thanks ex-EV1 driver!! And PluginCars!! I can't tell you how long I've been asking this question without getting hard answers like the one you provided.
· ex-EV1 driver · 3 years ago
@Steven,
It's open-minded thinkers like you that ask the tough questions that we really need today. Otherwise, snake-oil schemes and parochial interests will be warped and promoted so that we'll never progress.
Thank you for the question. Keep them coming!
I'll also add that PV isn't the most efficient or cheapest way to generate electricity. That honor generally goes to solar thermal systems. The solar thermal systems, however, require a lot of operations and maintenance labor to keep them running so they generally have to be large solar facilities, run by corporations. The energy payback for solar thermal can be a whole lot faster because they're a bunch of mirrors for the most part. Solar thermal also can sometimes use a natural gas burner when the sun isn't shining for higher availability.
PV is great in that it can use already ruined land (our homes and businesses), we can control it, and, once installed, it just runs and you don't have to worry about it.
· Tom Moloughney · 3 years ago
Yes, great link ex. I had previously heard that it takes about one year to generate the energy used to make a solar array, but never saw anyone really explain it like Elon & Martin did. I'll have to save that link for future use!
· Randal (not verified) · 3 years ago
I know that CO2 is the focus of your calculations. Here is my experience with real-world decisions about commuting. Steven's question about battery production is very relevant to anyone looking closely at their own total cost per mile. At 100 watt-hours per mile on a scooter, I am looking at under a penny per mile for energy, but 20 cents per mile battery replacement cost for lead-acid. Lithiums promise to reduce that to a nickel a mile battery cost, for triple the money up front.
I could almost buy a 20 year old Civic VX and get 50 mpg with no depreciation for the price of the lithium batteries alone, and have an equal total cost per mile. A little dirtier, but much better when a deer jumps on the road in front of me.
· John Gartner · 3 years ago
Brad, this is an important question that needs to be answered. California and Oregon are making an attempt to determine a "fair" kg of CO2/kwh for electric miles based on their recently passed Low Carbon Fuel Standards. Emissions can look wildly different depending upon your assumptions of the source of electricity. Even within the state of Oregon, the amount of CO2 swings wildly from utility to utility depending on the mix of renewables. Plus, many utilities buy electricity on the spot market where they have almost no information on the generation source. Also, time of day when you charge matters too, with solar and wind contributions varying wildly. This is a significant and challenging question that needs to be answered with accurate data, and we're paying attention to it.
· Steven (not verified) · 3 years ago
Getting some numbers for current costs would be an interesting and possibly useful exercise. My real interest, however, is in energy costs. 'free markets' with monetary costs based upon honest physics and social cost / environmental accounting may ultimately be a good way to make production decisions. For the foreseeable future, however, monetary costs are likely to remain wildly out of sync with these criteria. Even '"fair" kg of CO2/kwh for electric miles' might not be persuasive for those who don't buy climate change. (Full disclosure. I do after reading Dr. Tim Flannery's "The Weather Makers" several years ago.) How about we just limit the discussion to the cost of gasoline after the cost of US oil wars is factored in - and maybe factor in a few more deep-water oil spills to boot? Nothing more than what everyone would consider necessary and reasonable. If it was just limited to war costs, the figure ought to be easier to determine and a new tax reflecting real cost of production could be imposed.
· ex-EV1 driver · 3 years ago
@Randal,
If you are making your decisions solely on what is best for you, today, and you don't care about having to fix your car regularly, go to the gas station, fight wars (either in the military or as a tax payer paying for the war) to protect our oil interests, oil stains in your driveway or polluting, then clearly buying an old used ICE junker that gets reasonable mpg is the cheapest and best way for you to drive a mile. You should not, however, be wasting your time at plugincars.com. These cars aren't going to be cheaper than oil burners for a decade or more.
Instead, you should be perusing craigslist.org or autotrader.com where you can buy a used junker for the price of a few tankfuls of gas.
If, however, you are interested in being able to live better in the future or enabling your grandchildren to live as well as you do, you should stick around here as we explore whether plug-in vehicles might be able to solve the problems, where to get the cars so you can do your part, how and whether you'll need to adapt your life to live with them, as well as many other things related to this new concept in driving. You may have to divert a few of those extra shekels in your pocket away from frivolities or that shiny new Lexus you've had your eye on but the payoff could be priceless. If you actually don't have those extra clams in your pocket then I encourage you to stick around here and help us all become better educated so that we can persuade those with the extra dough to spend it wisely, not on farces such as Hydrogen or Ethanol.
· ex-EV1 driver · 3 years ago
@John Gartner,
With all due respect to your profession, I don't believe that our current problems are within the comprehension or capability of the lawyers and public policy wonks that make up our government and its associated camp-followers. "Fair" is just a concept that lawyers and policy wonks can handle; therefore, that is what they do. To a little boy with a hammer, everything looks like a nail.
Fair only has merit if one takes technology as a given that cannot be changed and must weigh each option fairly, as one would allocate the water from a stream. While not a bad assumption for lawyers and politicians, in this case, all that fairly treating various incumbent and alternative technologies will do is forestall the inevitable, if the incumbents depend on a finite resource that's going to run out or emit an element that will eventually wipe out our species. It's going to take the geeks who studied across campus to find new technologies that do not depend on that finite resource or emit toxics to our bodies or our planet.
Since EVs do not depend on oil or hydrocarbons at all, it seems unfair to saddle them with any of the legacy problems but rather, treat them as independent issues.
1) EVs can run off of zero emission, sustainable renewables. Therefore, they are good.
2) Energy can come from limited or dirty sources or it can come from clean, sustainable renewable sources. Here is where the 'fairness' should be assessed.
Instead of blaming the EVs for the energy source, let's reward the electric companies for every EV that they fuel, relative to their grid mix. This can be handled with a $20 cellular modem installed in the charger that transmits the amount of electricity used, and when it was used, to a central clearing house. Regulators can then look at the grid supply mix at that time and the amount of pollutants and other junk that went into the mix, subtracting off the amount of junk that would have been caused by the same energy amount of gasoline, and determine the credit. Since we've determined that the worst case coal is about the same but slightly worse than ICE, I don't see any reason for punishment.
This will not only get the government away from EV's -vs- well funded ICE interests but it will also encourage the electric companies to support EV infrastructure since they'll receive more profit from the kWhrs they sell to EVs.
· ex-EV1 driver · 3 years ago
@John Gartner
I forgot to mention that my suggestion would encourage electrical utilities to move toward renewables in addition to supporting EVs.
Also sorry for the rambling nature. I don't have much time to spend on this today.
· Brad Berman · 3 years ago
@ex - Bingo! For me, this is ultimately a moral issue. The end of the oil supply chain ends in war and body bags.
As you say, we need to build this community (and movement) to work through the dynamics of choosing the right cars and overcoming any obstacles. And as John G points out, we need to get some clarity on the CO2 issues (which will be very difficult).
But it's time for as many of us as possible to bring about the transition to cars that use a lot less oil or none at all. We might not have all the answers, but we know enough to move in the right general direction (and course correct as we go).
· Steven (not verified) · 3 years ago
Hey ex,
It takes everything I've got to put together a coherent sentence so I'm going to have to leave the number-crunching to refute your contention - "These cars aren't going to be cheaper than oil burners for a decade or more." - up to you or someone else. But I was wondering, given the zero-opportunity cost for those of us who are not Wall Street sharks, whether your contention is true even now. Perhaps when you factor in depreciation - but what about viewing the cost premium for an EV as pre-purchasing a significant percentage of your gas for say the next 8 years? Maybe a special tax law permitting people to tap their IRA / 401k's is merited - even if the cost of gas remains constant over the next x years?
Regards,
Steven
· Jim1961 (not verified) · 3 years ago
There is good article in the July, 2010 issue of Scientific American about this very subject. Unfortunately in my area of the country 85% of the dispatchable power generation comes from coal. I would generate less CO2 with a hybrid.
Some people say installing solar PV panels will make an electric car emit less CO2 than a hybrid. That seems logical but it's completely wrong. Let's assume I put a huge array of solar panels on my roof. These panels reduce the amount of electricity I use from the grid. When my solar panel output exceeds what I'm using in my home this energy will get sent INTO the grid and the power company will pay me. This will reduce greenhouse gas emissions by an amount I will call X. If I buy an electric car I will use more electrical power which will offset the amount of energy being generated by my solar panels. If an electric car generates Y amount of CO2 then the resulting CO2 reduction will be X-Y. It would not matter if I put 5 megawatts of solar panels on my roof. A hybrid will still generate less CO2 in my neck of the woods.
· Alexei (not verified) · 3 years ago
Did the calculations include the amount of CO2 needed to extract and refine the oil? As different regions might use different oil sources which require more or less energy to refine. For example how much of real CO2 would be produced directly and indirectly by a Prius driver in the region which is supplied by gasoline produced from tar sands?
· abasile · 3 years ago
@Jim1961: While at this point I own neither an EV nor PV panels on my roof, I don't agree with your logic in a practical sense. Yes, we could all install solar, not use it for EVs, and simply out of a sense of altruism sell the power back to our respective utilities at low, wholesale rates. But I am not aware of a single homeowner who intends to incur the expense of installing solar just to benefit the grid.
If my family chooses to install solar in the future, it will be to offset our use of grid power. Right now it's hardly worth the hassle/expense because our electricity usage is pretty low (~300 kWh/month) thanks to some straightforward conservation measures. Eventually, however, I expect that an EV purchase will make solar "worthwhile" for us.
· eddie (not verified) · 3 years ago
Let's keep this simple...assume 100% coal-fired electricity as is the case here in Ohio. The coal plant runs 24/7 so it will pollute whether or not I plug in my car at night...
· Randal (not verified) · 3 years ago
@ex;
I put 2700 miles on an electric scooter this year, and convinced my employer to install a metered outlet so I could fairly pay for the juice. I buy extra blocks of "blue sky" wind power.
I would be happy to hang around and participate here, but veiled insults based on faulty assumptions about me aren't encouraging.
I am a professional driver. I am intently watching the marketplace as safe, clean, efficient, and affordable vehicles are developed. My original post was to point out that all four of those factors are relevant. I get it that there are huge external costs for oil burners. I get it that early adopters pay for the privilege of advancing the market.
No Lexus for me, I lust for an Edison VLC, actually. And if I get a 20 year old Civic, I will tinker with it to put electric power to the rear in a thru-the-road set up.
Happy Trails.
· Brad Berman · 3 years ago
@Alexei - Yes, as I mentioned in the main article, the use of 23.7 pounds of CO2 per gallon does include extraction and refinement.
"I used 23.7 pounds of CO2 for each gallon of gas burned. That comes right from the DOE Argonne National Lab’s GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model, that includes the primary energy source extraction, transportation and processing for gasoline."
· Brad Berman · 3 years ago
@Randal - Please stick around and continue to participate in our forums. There may be the occasional barb here and there, but we pride ourselves on keeping the conversation very civil and open. Sometimes, there are fly-by anti-plug attacks, which can quickly raise blood pressure. It's a good idea to get an account on the site, so we can read a brief bio and follow your participation in various discussions. Cheers.
· Steven (not verified) · 3 years ago
@Brad, @ex & @Randal - Brad, ex: if I've been following the discussion right, you are not so bad with the calculator after all. I would still like to know if there is any scenario under which EVs could be considered cost-effective now as opposed to the 10 years ex-EV thinks it might take.
Randal: your approach to this issue is obviously way beyond that of a green-eyeshade bookkeeper's. To the extent that you, the rest of us, or the engineers who design our cars use cost as a controlling consideration in decision-making, I think we can all be forgiven. That's the way the system is SUPPOSED to work. Anyone who believes in free markets has their work cut out for them to ensure the operation of those markets reflects the laws governing the physical universe and not those passed by purchased politicians.
· NeilBlanchard · 3 years ago
Thank you Brad, for this article. This is a complex and confusing subject, and the more clarity we have, the better.
It needs to be pointed out that petroleum uses carbon before extraction: exploration and test drilling are non-trivial tasks. The energy, materials and equipment used in these pre-production steps should be amortized into the carbon used to produce gasoline. Also, the refinement of heavy "sour" crude (or tar sands!) requires a lot more energy/carbon per gallon than does light sweet crude.
And the whole morass of ethanol production should be included, if it isn't already. Corn is a very inefficient crop, requiring much more fertilizer (which is made from natural gas!) and a lot of pesticides, as well as diesel for the tractors and transport -- and the refinery process to make the ethanol. All of these need to be included in the carbon numbers.
Lastly, the amount of money and energy we expend on our military efforts to defend our petroleum interests is truly staggering. This would be incredibly difficult to add to the already complex calculations -- but we should not take it for granted.
Sincerely, Neil
· NeilBlanchard · 3 years ago
I meant to ask if the carbon of the electricity and natural gas used all along the production process for oil is included? If so, do you have the specific numbers for the carbon used for the oil refinement, and extraction steps in particular; and/or how much electricity and natural gas is used during these steps?
Sincerely, Neil
· Brad Berman · 3 years ago
Hey Neil - I should have included a link to DOE's GREET model from the beginning, but here you go:
http://greet.es.anl.gov/main
This is the model that produces 23.7 pounds per gallon of CO2. I would need to dig deeper to see how far they go, such as pre-production emissions from test-drilling, etc.
I'm almost positive that military expense/emissions are NOT included. That would send the scale flying. As mentioned above, when you get into costs for wars (in \$ and blood), the economics and moral calculus becomes a very compelling case for going electric. But let's not forget market and other realities that create a need to reduce oil consumption in every form of transportation and car.
· ex-EV1 driver · 3 years ago
@Randall
Sorry about the barb. I definitely shot from the hip. Since you're back, let's explore things a bit further.
EVs certainly are not cheaper now. Nothing new is usually cheaper.
You probably could have picked up a used Chinese Vespa clone for less than \$1000 yet you probably paid a lot more for an electric scooter. The same will apply to an electric car.
Clearly, if you are one who has the ability to take care of an old car, the most ecological and economical thing one can do is to run a car for a long time. This will significantly reduce the energy, pollution and costs of building a new car, probably a lot more than the energy used by the car overall. You probably don't even need to go back to that old Civic to see a huge financial and energy improvement over any new ICE car for short term benefits.
EVs, on the other hand are expensive today. This includes the Edison VLC and the rest of the X-Prize entries. Fortunately, a lot of the initial costs are being absorbed by someone else today. I suspect that it wouldn't be too difficult to show that a Nissan Leaf, after the tax breaks, is going to be a very cheap car since the government and Nissan are both eating a lot of the startup costs. I still figure that a \$15K Kia, however, will still have a lower cost per mile to you, at least for the first 100K miles or so. This will all depend on the cost you place on maintenance labor and whether you count the true cost of oil.
· Randal (not verified) · 3 years ago
@ex et al...
Will consider an account here, and I've been barbed at enough forums to know it's no big deal if things cool off after a "whoa, pal."
I've done spreadsheets on numerous combinations of vehicles. Depreciation and insurance eat you up on new rigs. Repairs eat you up on old ones. And wouldn't it be nice to have the option on our 1040 returns to opt out of war taxes if we could prove renewable energy investment?
Ya - I paid \$2,400 for the electric scooter with lead acid batts.
They're done at 2,700 miles. So looking at $1,500 for lithiums, which should go 20,000 miles or more, isn't too bad. Except in my climate a scooter can handle MAYBE half of my 17,000 annual commute miles. Which led me to do exactly as suggested, ex: look on eBay and Autotrader at what was happening with low-depreciation, high-mpg cars. All miles I can drive with electric drive or a high-mpg ICE are replacing an 18-mpg diesel, so emissions are in my calculus...
TTYL.
· ex-EV1 driver · 3 years ago
@Steven,
Certainly the pre-purchasing concept is a reasonable one, but it depends a lot on the future price of gasoline, what kind of car you're comparing with, and how long the car is expected to last. At a low $3.00/gal, without heavy subsidy, you'll still come out better by buying a $1,000 old used car and milking it for as long as you can.
If you're comparing with a middle-of-the-road new car such as an Accord then a heavily subsidized car like the Leaf may work out financially for you, even today, especially if you are a heavy commuter. With the \$7.5K federal and California's \$5K tax breaks, a \$34K Leaf can be had for \$21.5K. Compare that to a \$27K Accord V-6 (comparable performance) and the Leaf will win. There's no question if you compare with a \$34K BMW 328i. But, compare with a base Kia Rio at \$12K and you'll probably lose money with the Leaf.
It's hard to predict a lot of the EV costs today, so it will be hard to determine a lifetime cost:
-How much will the replacement battery cost in 10 years?
-How will the rest of the car hold up in 10 years?
-Will all of the auxiliary junk (power windows, mirrors, remote entry, radio, upholstery, paint, rust, etc) hold up or will you want to junk the car just because all the little things are falling apart?
If everything is designed to last, the EV should last nearly forever, with batteries that could cost less than \$5K for a 100 mile battery pack - every 10 years. This will easily offset the up-front capital costs. Your operational costs should be very minimal.
We can analyze this to death but, in the end, without any useful data to analyze, it's all academic.
I'm not a big proponent of government meddling because they generally screw up everything they try to accomplish. I personally prefer to count on a few rugged visionaries taking a few risks to blaze the way. They will find out by foregoing their BMWs and sucking it up a bit to do the right thing for their kids. This way, we can actually get some real information and run the prices down so everyone can afford them.
Tesla owners are already doing this and I credit Tesla and their owners for the resurgence in EV production after the miserable failure by the government the last time. The measly subsidies that the government takes from everyone and plans to give to the EV purchasers came about well after Tesla had embarrassed the auto industry by proving what could be done.
If you don't feel you are ready to lay out the money for an EV today, that's ok. Don't. You can do just as much good if you:
- keep whatever you're driving today as long as you feel you can
- tell your favorite car company you will buy a plug-in when they build it.
- be fiscally responsible so that you can afford that EV when it comes out. You'll be saving a lot on car payments by keeping your old car. Save it!
- if you absolutely must buy a car, look at used hybrids or something else cheap, so you will be ready for an EV when it's available at your price.
· Jeff N (not verified) · 3 years ago
The article says:
"For San Francisco, near where I live, I used a CO2 average of three different grids (from eGrid)—listed in kg/CO2 per kWh as .36, .45 and .66 (this is right now)—to indicate emissions for 15,000 miles of driving as follows:
4,345 pounds of CO2 for the EV
That’s tremendously cleaner than the 30-mpg car’s 11,850 pounds of CO2
It’s also way better than the 50-mpg hybrid’s 7,110 pounds of CO2"
----------------
As another example of how volatile these numbers can be, San Francisco is served by PG&E and they self-report their average CO2 emissions for retail customer electricity as .524 Lbs. per KWh which is .145 Lbs. per mile (using 3.6 miles per KWh) which is 2,183 pounds of CO2 for 15,000 miles.
That's almost precisely half the CO2 you calculated for an EV in San Francisco.
Google: "pacific electric co2 kwh" and click the first link for the PG&E "assumptions.pdf" file.
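All of the CO2 comparisons being traded in this thread reduce to the same pattern: annual miles divided by efficiency, times a carbon intensity. Here is a small sketch that re-derives the numbers quoted above; the 23.7 lb/gal GREET figure, 3.6 mi/kWh, and PG&E's self-reported 0.524 lb/kWh are used purely as inputs from the comments, not independently verified values:

```python
def gas_co2_lbs(miles, mpg, lbs_per_gallon=23.7):
    """Well-to-wheels CO2 for a gasoline car, using the GREET per-gallon figure."""
    return miles / mpg * lbs_per_gallon

def ev_co2_lbs(miles, lbs_per_kwh, miles_per_kwh=3.6):
    """CO2 for an EV, given a grid carbon intensity in lb/kWh."""
    return miles / miles_per_kwh * lbs_per_kwh

MILES = 15_000
print(round(gas_co2_lbs(MILES, 30)))    # 11850 lb -- the 30-mpg car
print(round(gas_co2_lbs(MILES, 50)))    # 7110 lb  -- the 50-mpg hybrid
print(round(ev_co2_lbs(MILES, 0.524)))  # 2183 lb  -- EV on PG&E's reported mix
```

Swapping in a different grid intensity (for example a coal-heavy 2.0 lb/kWh) shows how quickly the EV side of the comparison moves.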
· Aaron D. Allen (not verified) · 2 years ago
Why does everyone assume that oil is so bad? As far as gas car emissions go, they account for only a small fraction of the total gaseous toxins being introduced into the atmosphere; most of it is industrial pollution. Oil companies buy crude oil from the Middle East and other less-developed areas of the world, and then they convert it into gasoline and other usable products. The profit margin for Exxon and other companies, many of them American, is huge. Without the taxes generated by these companies the American economy would collapse. So please think twice before you say bad things about the oil companies.
· EVNow · 2 years ago
@Aaron D. Allen
LOL.
I'll just touch upon one thing. Instead of the tiny amount of tax the oil generates, if we stopped spending that money "securing" oil in the Middle East (say, a trillion dollars in the last decade, for example), then we would have a roaring economy rather than the great recession we have now.
· Ken Fry · 2 years ago
I'd have to say that your site is addictive. I should be doing other things, but wanted to respond. The method for measuring “greenness” and “goodness” of vehicles caused my falling out with the X Prize people, so it is an issue near and dear.
I have not read through all the replies, nor have I read your calculations (and will do both when I am not running out the door). But the short answer is this: until last week, the EPA published CO2 numbers for the 2002 RAV4 EV, and for (of course) the Prius. The numbers are essentially the same, so that today, with the current grid mix, the two vehicles are identical from the standpoint of CO2 generation, which for me is a key concern. (Point source vs. central source, etc., are largely distractions from the central issues.)
RMI uses a chart, in promoting EVs, that shows this fact, but they contend (correctly) that in many areas, our generation is done at better than the national grid average. This same chart is used by Peterson at Seeking Alpha to “dis” EVs. He (correctly) contends that, at night, more coal (base load generation) is used, making charging time more carbon-intensive than the grid average. He then draws many conclusions with which I have argued at length.
That is the too-short answer, so I'll come back to this very good and important thread.
Ken
1. ## Cauchy Sequence in a Metric Space
Problem:
Let $\{x_i\}$ be a Cauchy sequence in a metric space $(M,D)$. Let $A = \{x_1, x_2, x_3, \ldots\}$.
Suppose that $\{x_i\}$ doesn't converge in $M$. Prove that $A$ is a closed subset of $(M,D)$.
What I have done:
So we know $\{x_i\}$ is Cauchy, so let $\epsilon > 0$ be given. Then there exists a positive integer $N$ such that $D(x_i, x_j) < \epsilon$ for all $i, j \geq N$. Also, and this may be where I'm stuck: since $\{x_i\}$ doesn't converge in $M$, we know that for any $y \in M$, $\epsilon > 0$, and $N$, we can find an $n > N$ such that $D(x_n, y) > \epsilon$.
That last statement could be wrong/unnecessary, so any help you can give me on where to go would be very helpful.
2. Originally Posted by tidus89
Problem:
Let $\{x_i\}$ be a Cauchy sequence in a metric space $(M,D)$. Let $A = \{x_1, x_2, x_3, \ldots\}$.
Suppose that $\{x_i\}$ doesn't converge in $M$. Prove that $A$ is a closed subset of $(M,D)$.
What I have done:
So we know $\{x_i\}$ is Cauchy, so let $\epsilon > 0$ be given. Then there exists a positive integer $N$ such that $D(x_i, x_j) < \epsilon$ for all $i, j \geq N$. Also, and this may be where I'm stuck: since $\{x_i\}$ doesn't converge in $M$, we know that for any $y \in M$, $\epsilon > 0$, and $N$, we can find an $n > N$ such that $D(x_n, y) > \epsilon$.
That last statement could be wrong/unnecessary, so any help you can give me on where to go would be very helpful.
Maybe the best way to do this is an argument by contradiction. Suppose that $y$ is in the closure of $A$ but not in $A$. Then there must be a sequence of points of $A$ (in other words, a subsequence of $\{x_i\}$) converging to $y$. But if a subsequence of a Cauchy sequence converges, then the whole sequence converges. There's your contradiction.
3. So are we supposing that A is open, and then the contradiction comes from the fact that the sequence would then have to converge in M?
4. Originally Posted by tidus89
So are we supposing that A is open, and then the contradiction comes from the fact that the sequence would then have to converge in M?
No, we are not supposing that A is open, we are supposing that A is not closed. That means that there is a point in the closure of A that is not in A.
Yes, the contradiction comes from the fact that we then prove that the sequence converges, contrary to the statement of the question.
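For completeness, the lemma used in the replies (a Cauchy sequence with a convergent subsequence converges) is the standard triangle-inequality argument; the sketch below is ours, not taken verbatim from the thread:

```latex
\textbf{Lemma.} If $\{x_i\}$ is Cauchy in $(M,D)$ and some subsequence
$x_{i_k} \to y$, then $x_i \to y$.

\textit{Proof sketch.} Let $\epsilon > 0$. Choose $N$ with
$D(x_i, x_j) < \epsilon/2$ for all $i, j \ge N$, and choose $k$ with
$i_k \ge N$ and $D(x_{i_k}, y) < \epsilon/2$. Then for every $i \ge N$,
\[
  D(x_i, y) \le D(x_i, x_{i_k}) + D(x_{i_k}, y)
            < \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} = \epsilon. \qed
\]
```

Applied to the problem: if some point $y$ in the closure of $A$ but not in $A$ were the limit of a subsequence of $\{x_i\}$, the whole Cauchy sequence would converge to $y$, contradicting the hypothesis; hence $A$ contains its closure and is closed.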
# So, How Much Does Manhattan Weigh?
When you look at the island of Manhattan on a map, it looks like a sliver of something, a slab you could pick up and hold in your hand. But though it looks small, it's quite a heavy slab, because it's piled extremely high with a ton of stuff.
Like, a lot of stuff: buildings (big ones, and lots of them), bagels (big ones, and lots of them), people (big ones, and lots of them). Cars. Roads. Sidewalks. And animals and furniture and water towers and toilets. So all this raises an interesting question: Just how much does all this stuff on Manhattan weigh?
Coming up with an exact number is pretty much impossible, and we're not rocket scientists. But using some fast and loose Internet research, we came up with a ballpark number for how much all the crap we've dumped on this island might tip a scale. Check it out.
People
Let's start with all us hefty Americans. According to the CDC, the average human weighs around 181 pounds. On an average workday, about 3.9 million people inhabit Manhattan. So at 181 pounds apiece, that works out to 705,900,000 pounds--or 352,950 tons--of people meat. And guess what? That's about how much one (one!) Empire State Building weighs. Take a deep breath. We're just getting started.
Pets
A ton of these people have pets. The New York City Economic Development Corporation says about 238,500 cats and dogs live in Manhattan (no word on whether they pay rent). We'll go with low weight averages here--since a good deal of NYC pups are little Fifi wiener dogs--so we'll say 35 pounds for dogs and 10 pounds for cats. Assuming a 50-50 split between the species, that's 22.5 PPP (pounds per pet). Do the math and you get 5,366,250 pounds, or 2,683 tons. There's also a zoo and horses and gross pigeons and chubby rats, so let's call this at least 3,000 tons of pure animal.
Food/Drink
Every year 5.7 million tons of food enters New York City. We'll say a fifth of that goes to Manhattan, so on any given day there's at least 3,123 tons of grub sitting on the island. Throw in 120,000 tons of agua in those rooftop water towers and you get at least 123,123 tons of food and water. I already drank most of the beer so we can't count that.
Vehicles
About 23 percent of Manhattan's 1.6 million full-time residents own cars. That's 368,000 whips. We're using the average workday for this little exercise; of the 1.6 million or so daily commuters, about 20 percent drive, so add 322,000 more (lots of people take the subway, but that's underground, and we're staying above land here). Oh, and throw in about 12,100 cabs. We'll use 3,300 pounds as our average weight (based on the aggressively average Toyota Camry).
So 702,100 cars (and we're probably being pretty conservative) weigh in at 2,316,930,000 pounds. That's 1,158,465 tons. Note that we're not putting that number in bold. Because we haven't even done buses yet. The MTA has 5,777 of those babies, each weighing around 20 tons. Let's say they're spread evenly across the boroughs and that Manhattan has 1,155. Toss in 100 tour buses or so. That's 25,100 tons. For the sake of sanity, we'll stop after adding 450 35-ton dump trucks (31,500,000 pounds/15,750 tons).
Let's not forget trains. There's 6,384 subway cars in New York and we'll assume at least half are in Manhattan. At around 38 tons each we get 121,296 tons. And actual trains? 1,200 trains pass though Penn Station daily and 700 pass through Grand Central Terminal. Your average Acela train is around 565 tons. So that's 1,073,500 tons.
Add it all together and you get a conservative number of 2,394,111 tons. Whoa.
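The vehicle arithmetic above checks out; here it is re-added in a few lines (all counts and average weights are the article's own ballpark estimates, and a ton here is a 2,000-pound short ton):

```python
TON = 2000  # pounds per short ton

cars = 702_100 * 3_300 / TON     # resident cars + commuter cars + cabs, ~3,300 lb each
buses = (1_155 + 100) * 20       # MTA share + tour buses, ~20 tons apiece
dump_trucks = 450 * 35           # 35-ton dump trucks
subway_cars = (6_384 // 2) * 38  # assume half the fleet is in Manhattan, 38 tons each
trains = (1_200 + 700) * 565     # Penn Station + Grand Central traffic, ~565-ton trains

total_tons = cars + buses + dump_trucks + subway_cars + trains
print(f"{total_tons:,.0f} tons")  # 2,394,111 tons
```

Note that this is the vehicles-only subtotal; the people, pets, and food/water figures from the earlier sections are on top of it.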
# Decigrams to Long hundredweights Conversion
The decigram to long hundredweight conversion tables below let you convert between decigrams and long hundredweights easily.
### Quick Look: decigrams to long hundredweights
| dg | long cwt | dg | long cwt | dg | long cwt | dg | long cwt |
|---:|:---|---:|:---|---:|:---|---:|:---|
| 1 | 1.9684130552221 × 10⁻⁶ | 26 | 5.1178739435775 × 10⁻⁵ | 51 | 0.0001004 | 76 | 0.0001496 |
| 2 | 3.9368261104442 × 10⁻⁶ | 27 | 5.3147152490997 × 10⁻⁵ | 52 | 0.0001024 | 77 | 0.0001516 |
| 3 | 5.9052391656664 × 10⁻⁶ | 28 | 5.5115565546219 × 10⁻⁵ | 53 | 0.0001043 | 78 | 0.0001535 |
| 4 | 7.8736522208885 × 10⁻⁶ | 29 | 5.7083978601442 × 10⁻⁵ | 54 | 0.0001063 | 79 | 0.0001555 |
| 5 | 9.8420652761106 × 10⁻⁶ | 30 | 5.9052391656664 × 10⁻⁵ | 55 | 0.0001083 | 80 | 0.0001575 |
| 6 | 1.1810478331333 × 10⁻⁵ | 31 | 6.1020804711886 × 10⁻⁵ | 56 | 0.0001102 | 81 | 0.0001594 |
| 7 | 1.3778891386555 × 10⁻⁵ | 32 | 6.2989217767108 × 10⁻⁵ | 57 | 0.0001122 | 82 | 0.0001614 |
| 8 | 1.5747304441777 × 10⁻⁵ | 33 | 6.495763082233 × 10⁻⁵ | 58 | 0.0001142 | 83 | 0.0001634 |
| 9 | 1.7715717496999 × 10⁻⁵ | 34 | 6.6926043877552 × 10⁻⁵ | 59 | 0.0001161 | 84 | 0.0001653 |
| 10 | 1.9684130552221 × 10⁻⁵ | 35 | 6.8894456932774 × 10⁻⁵ | 60 | 0.0001181 | 85 | 0.0001673 |
| 11 | 2.1652543607443 × 10⁻⁵ | 36 | 7.0862869987996 × 10⁻⁵ | 61 | 0.0001201 | 86 | 0.0001693 |
| 12 | 2.3620956662665 × 10⁻⁵ | 37 | 7.2831283043218 × 10⁻⁵ | 62 | 0.0001220 | 87 | 0.0001713 |
| 13 | 2.5589369717888 × 10⁻⁵ | 38 | 7.4799696098441 × 10⁻⁵ | 63 | 0.0001240 | 88 | 0.0001732 |
| 14 | 2.755778277311 × 10⁻⁵ | 39 | 7.6768109153663 × 10⁻⁵ | 64 | 0.0001260 | 89 | 0.0001752 |
| 15 | 2.9526195828332 × 10⁻⁵ | 40 | 7.8736522208885 × 10⁻⁵ | 65 | 0.0001279 | 90 | 0.0001772 |
| 16 | 3.1494608883554 × 10⁻⁵ | 41 | 8.0704935264107 × 10⁻⁵ | 66 | 0.0001299 | 91 | 0.0001791 |
| 17 | 3.3463021938776 × 10⁻⁵ | 42 | 8.2673348319329 × 10⁻⁵ | 67 | 0.0001319 | 92 | 0.0001811 |
| 18 | 3.5431434993998 × 10⁻⁵ | 43 | 8.4641761374551 × 10⁻⁵ | 68 | 0.0001339 | 93 | 0.0001831 |
| 19 | 3.739984804922 × 10⁻⁵ | 44 | 8.6610174429773 × 10⁻⁵ | 69 | 0.0001358 | 94 | 0.0001850 |
| 20 | 3.9368261104442 × 10⁻⁵ | 45 | 8.8578587484995 × 10⁻⁵ | 70 | 0.0001378 | 95 | 0.0001870 |
| 21 | 4.1336674159665 × 10⁻⁵ | 46 | 9.0547000540218 × 10⁻⁵ | 71 | 0.0001398 | 96 | 0.0001890 |
| 22 | 4.3305087214887 × 10⁻⁵ | 47 | 9.251541359544 × 10⁻⁵ | 72 | 0.0001417 | 97 | 0.0001909 |
| 23 | 4.5273500270109 × 10⁻⁵ | 48 | 9.4483826650662 × 10⁻⁵ | 73 | 0.0001437 | 98 | 0.0001929 |
| 24 | 4.7241913325331 × 10⁻⁵ | 49 | 9.6452239705884 × 10⁻⁵ | 74 | 0.0001457 | 99 | 0.0001949 |
| 25 | 4.9210326380553 × 10⁻⁵ | 50 | 9.8420652761106 × 10⁻⁵ | 75 | 0.0001476 | 100 | 0.0001968 |
The decigram (or decigramme) is equal to 10⁻¹ gram (a unit of mass); the name combines the metric prefix deci (d) with the gram (g). The plural is decigrams.
| Name of unit | Symbol | Definition | Relation to SI units | Unit system |
|---|---|---|---|---|
| decigram | dg | ≡ 0.1 g | ≡ 10⁻⁴ kg | Metric system (SI) |
#### conversion table
| decigrams | long hundredweights | decigrams | long hundredweights |
|---:|:---|---:|:---|
| 1 | 1.9684130552221 × 10⁻⁶ | 11 | 2.1652543607443 × 10⁻⁵ |
| 2.5 | 4.9210326380553 × 10⁻⁶ | 12.5 | 2.4605163190277 × 10⁻⁵ |
| 4 | 7.8736522208885 × 10⁻⁶ | 14 | 2.755778277311 × 10⁻⁵ |
| 5.5 | 1.0826271803722 × 10⁻⁵ | 15.5 | 3.0510402355943 × 10⁻⁵ |
| 7 | 1.3778891386555 × 10⁻⁵ | 17 | 3.3463021938776 × 10⁻⁵ |
| 8.5 | 1.6731510969388 × 10⁻⁵ | 18.5 | 3.6415641521609 × 10⁻⁵ |
| 10 | 1.9684130552221 × 10⁻⁵ | 20 | 3.9368261104442 × 10⁻⁵ |
The long or imperial hundredweight of 8 stone (112 lb or 50.802345 kg) sees informal use in the imperial system but according to Schedule 1, Part VI of the Weights and Measures Act 1985, is no longer to be used for trade after the Act came into force.
| Name of unit | Symbol | Definition | Relation to SI units | Unit system |
|---|---|---|---|---|
| long hundredweight | long cwt | ≡ 112 lb av | = 50.80234544 kg | Imperial/US |
### conversion table
| long hundredweights | decigrams | long hundredweights | decigrams |
|---:|:---|---:|:---|
| 1 | 508,023.4544 | 11 | 5,588,257.9984 |
| 2.5 | 1,270,058.636 | 12.5 | 6,350,293.18 |
| 4 | 2,032,093.8176 | 14 | 7,112,328.3616 |
| 5.5 | 2,794,128.9992 | 15.5 | 7,874,363.5432 |
| 7 | 3,556,164.1808 | 17 | 8,636,398.7248 |
| 8.5 | 4,318,199.3624 | 18.5 | 9,398,433.9064 |
| 10 | 5,080,234.544 | 20 | 10,160,469.088 |
### Conversion table
| decigrams | long hundredweights |
|---:|:---|
| 1 | 1.9684130552221 × 10⁻⁶ |
| 508,023.4544 | 1 |
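The two conversion factors above are reciprocals of each other, so a tiny converter covers both directions (the factor is taken from this page; the function names are ours):

```python
DG_PER_LONG_CWT = 508_023.4544  # 50.80234544 kg expressed in decigrams

def dg_to_long_cwt(dg):
    """Convert decigrams to long hundredweights."""
    return dg / DG_PER_LONG_CWT

def long_cwt_to_dg(cwt):
    """Convert long hundredweights to decigrams."""
    return cwt * DG_PER_LONG_CWT

print(dg_to_long_cwt(1))    # ~1.9684130552221e-06
print(long_cwt_to_dg(2.5))  # ~1270058.636
```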
### Legend
| Symbol | Definition |
|---|---|
| ≡ | exactly equal |
| ≈ | approximately equal to |
| = | equal to |
| digits | indicates that digits repeat infinitely (e.g. 8.294 369 corresponds to 8.294 369 369 369 369 …) |
# Problem 17. Find all elements less than 0 or greater than 10 and replace them with NaN
Solution 1595842
Submitted on 30 Jul 2018 by nipun garg
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test 1 (Pass):

    x = [ 5 17 -20 99 3.4 2 8 -6 ];
    y_correct = [ 5 NaN NaN NaN 3.4 2 8 NaN ];
    assert(isequalwithequalnans(cleanUp(x), y_correct))

Test 2 (Pass):

    x = [ -2.80 -6.50 -12.60 4.00 2.20 0.20 -10.60 9.00];
    y_correct = [ NaN NaN NaN 4.00 2.20 0.20 NaN 9.00]
    assert(isequalwithequalnans(cleanUp(x), y_correct))

Output (the missing semicolon after `y_correct` echoes the value):

    y_correct =
        NaN    NaN    NaN    4.0000    2.2000    0.2000    NaN    9.0000
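The accepted solution is locked, but the behavior the test suite demands is easy to reconstruct: in MATLAB, logical indexing such as `x(x < 0 | x > 10) = NaN;` satisfies both tests. An equivalent sketch in plain Python (our naming, not the submitted code):

```python
NAN = float("nan")

def clean_up(values):
    """Replace elements less than 0 or greater than 10 with NaN."""
    return [NAN if (v < 0 or v > 10) else v for v in values]

print(clean_up([5, 17, -20, 99, 3.4, 2, 8, -6]))
# [5, nan, nan, nan, 3.4, 2, 8, nan]
```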
# 6.3: Newton's Laws
• Boundless
## The First Law: Inertia
Newton’s first law of motion describes inertia. According to this law, a body at rest tends to stay at rest, and a body in motion tends to stay in motion, unless acted on by a net external force.
learning objectives
• Define the First Law of Motion
### History
Sir Isaac Newton was an English scientist who was interested in the motion of objects under various conditions. In 1687, he published a work called Philosophiae Naturalis Principia Mathematica, which described his three laws of motion. Newton used these laws to explain and explore the motion of physical objects and systems. These laws form the basis for mechanics. The laws describe the relationship between forces acting on a body and the motions experienced due to these forces. The three laws are as follows:
1. If an object experiences no net force, its velocity will remain constant. The object is either at rest and the velocity is zero or it moves in a straight line with a constant speed.
2. The acceleration of an object is parallel and directly proportional to the net force acting on the object, is in the direction of the net force, and is inversely proportional to the mass of the object.
3. When a first object exerts a force on a second object, the second object simultaneously exerts a force on the first object, meaning that the force of the first object and the force of the second object are equal in magnitude and opposite in direction.
### The First Law of Motion
You have most likely heard Newton’s first law of motion before. If you haven’t heard it in the form written above, you have probably heard that “a body in motion stays in motion, and a body at rest stays at rest.” This means that an object that is in motion will not change its velocity unless an unbalanced force acts upon it. This is called uniform motion. It is easier to explain this concept through examples.
Example 1:
If you are ice skating, and you push yourself away from the side of the rink, according to Newton’s first law you will continue all the way to the other side of the rink. But, this won’t actually happen. Newton says that a body in motion will stay in motion until an outside force acts upon it. In this and most other real world cases, this outside force is friction. The friction between your ice skates and the ice is what causes you to slow down and eventually stop.
Let’s look at another situation: why do we wear seat belts? Obviously, they’re there to protect us from injury in case of a car accident. If a car is traveling at 60 mph, the driver is also traveling at 60 mph. When the car suddenly stops, an external force is applied to the car that causes it to slow down. But there is no such force acting on the driver, so the driver continues to travel at 60 mph. The seat belt is there to counteract this and act as that external force, slowing the driver down along with the car and preventing injury.
Newton’s First Law: Newton’s first law in effect on the driver of a car
### Inertia
Sometimes this first law of motion is referred to as the law of inertia. Inertia is the property of a body to remain at rest or to remain in motion with constant velocity. Some objects have more inertia than others because the inertia of an object is equivalent to its mass. This is why it is more difficult to change the direction of a boulder than a baseball.
Doc Physics – Newton: Newton’s first law is hugely counterintuitive. You may have learned it in grade school, though. Let’s see it for the mind-blowing conclusion it really is.
## The Second Law: Force and Acceleration
The second law states that the net force on an object is equal to the rate of change, or derivative, of its linear momentum.
learning objectives
• Define the Second Law of Motion
English scientist Sir Isaac Newton examined the motion of physical objects and systems under various conditions. In 1687, he published his three laws of motion in Philosophiae Naturalis Principia Mathematica. The laws form the basis for mechanics; they describe the relationship between forces acting on a body, and the motion experienced due to these forces. These three laws state:
1. If an object experiences no net force, its velocity will remain constant. The object is either at rest and the velocity is zero, or it moves in a straight line with a constant speed.
2. The acceleration of an object is parallel and directly proportional to the net force acting on the object, is in the direction of the net force and is inversely proportional to the mass of the object.
3. When a first object exerts a force on a second object, the second object simultaneously exerts a force on the first object, meaning that the force of the first object and the force of the second object are equal in magnitude and opposite in direction.
The first law of motion defines only the natural state of the motion of the body (i.e., when the net force is zero). It does not allow us to quantify the force and acceleration of a body. Acceleration is the rate of change of velocity; it is caused only by an external force acting on the body. The second law of motion states that the net force on an object is equal to the rate of change of its linear momentum.
### Linear Momentum
Linear momentum of an object is a vector quantity that has both magnitude and direction. It is the product of mass and velocity of a particle at a given time:
$\mathrm{p=mv}$
where, $$\mathrm{p=momentum, m=mass,}$$ and $$\mathrm{v=velocity}$$. From this equation, we see that objects with more mass will have more momentum.
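As a quick numeric illustration (the masses below are made-up values, not from the text), two objects moving at the same velocity carry momentum in proportion to their mass:

```java
public class Momentum {
    public static void main(String[] args) {
        double v = 4.0;              // shared velocity in m/s
        double mLight = 0.5;         // lighter object, kg (made-up)
        double mHeavy = 5.0;         // heavier object, kg (made-up)

        // p = m * v
        System.out.println(mLight * v); // 2.0
        System.out.println(mHeavy * v); // 20.0
    }
}
```

The tenfold mass difference produces a tenfold momentum difference at the same velocity.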
### The Second Law of Motion
Picture two balls of different mass, traveling in the same direction at the same velocity. If they both collide with a wall at the same time, the heavier ball will exert a larger force on the wall. This concept, illustrated below, motivates Newton’s second law, which emphasizes the importance of momentum, rather than velocity alone. It states: the net force on an object is equal to the rate of change of its linear momentum. From calculus we know that a rate of change is the same as a derivative. When we take the derivative of the linear momentum of an object, we get:
Force and Mass: This animation demonstrates the connection between force and mass.
\begin{align} \mathrm{F \; } & \mathrm{=\dfrac{dp}{dt}} \\ \mathrm{F \;} & \mathrm{=\dfrac{d(m⋅v)}{dt}} \end{align}
where, $$\mathrm{F = Force}$$ and $$\mathrm{t = time}$$. From this we can further simplify the equation:
\begin{align} \mathrm{F \;} & \mathrm{=m\dfrac{d(v)}{dt}} \\ \mathrm{F \;} & \mathrm{=m⋅a} \end{align}
where $$\mathrm{a = acceleration}$$. As we stated earlier, acceleration is the rate of change of velocity: the change in velocity divided by the change in time.
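The equivalence of the two forms can be checked numerically. This sketch (with made-up mass and acceleration values) compares the finite-difference rate of change of momentum against m·a:

```java
public class SecondLawCheck {
    public static void main(String[] args) {
        double m = 2.0;       // mass in kg (made-up)
        double a = 3.0;       // constant acceleration in m/s^2 (made-up)
        double dt = 1e-6;     // small time step in s

        double v1 = 10.0;           // velocity at time t
        double v2 = v1 + a * dt;    // velocity a moment later

        // F = dp/dt, approximated as a finite difference of p = m * v
        double dpdt = (m * v2 - m * v1) / dt;

        System.out.println(dpdt);   // ≈ 6.0
        System.out.println(m * a);  // 6.0
    }
}
```

Shrinking the time step makes the finite difference converge on the exact product m·a.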
Newton’s Three Laws of Mechanics – Second Law – Part 1: Here we’ll see how many people can confuse your understanding of Newton’s 2nd law of motion through oversight, sloppy language, or cruel intentions.
Newton’s Three Laws of Mechanics – Second Law – Part Two: Equilibrium is investigated and Newton’s 1st law is seen as a special case of Newton’s 2nd law!
## The Third Law: Symmetry in Forces
The third law of motion states that for every action, there is an equal and opposite reaction.
learning objectives
• Define the Third Law of Motion
Sir Isaac Newton was a scientist from England who was interested in the motion of objects under various conditions. In 1687, he published a work called Philosophiae Naturalis Principia Mathematica, which contained his three laws of motion. Newton used these laws to explain and explore the motion of physical objects and systems. These laws form the basis for mechanics. The laws describe the relationship between forces acting on a body and the motion experienced due to these forces. Newton’s three laws are:
1. If an object experiences no net force, its velocity will remain constant. The object is either at rest and the velocity is zero or it moves in a straight line with a constant speed.
2. The acceleration of an object is parallel and directly proportional to the net force acting on the object, is in the direction of the net force and is inversely proportional to the mass of the object.
3. When a first object exerts a force on a second object, the second object simultaneously exerts a force on the first object, meaning that the force of the first object and the force of the second object are equal in magnitude and opposite in direction.
### Newton’s Third Law of Motion
Newton’s third law basically states that for every action, there is an equal and opposite reaction. If object A exerts a force on object B, then, because of this symmetry, object B will exert a force on object A that is equal in magnitude and opposite in direction:
$\mathrm{F_A=-F_B}$
In this example, $$\mathrm{F_A}$$ is the action and $$\mathrm{F_B}$$ is the reaction. You have undoubtedly witnessed this law of motion. For example, take a swimmer who uses her feet to push off the wall in order to gain speed. The more force she exerts on the wall, the harder she pushes off. This is because the wall exerts the same force on her that she exerts on it. She pushes the wall in the direction behind her, so the wall exerts a force on her in the direction in front of her, propelling her forward.
Newton’s Third Law of Motion: When a swimmer pushes off the wall, the swimmer is using the third law of motion.
Take, as another example, the concept of thrust. When a rocket launches into outer space, it expels gas backward at a high velocity. The rocket exerts a large backward force on the gas, and the gas exerts an equal and opposite reaction force forward on the rocket, causing it to launch. This force is called thrust. Thrust is used in cars and planes as well.
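Because the action and reaction forces are equal and opposite, the momentum gained by the rocket mirrors the momentum carried away by the gas. A sketch with made-up numbers (not from the text):

```java
public class ThirdLawRocket {
    public static void main(String[] args) {
        double mGas = 1.0;        // kg of gas expelled (made-up)
        double vGas = -500.0;     // gas velocity, backward, in m/s (made-up)
        double mRocket = 100.0;   // rocket mass in kg (made-up)

        // Total momentum starts at zero and must stay at zero,
        // so the rocket's momentum is minus the gas's momentum.
        double vRocket = -(mGas * vGas) / mRocket;

        System.out.println(vRocket); // 5.0 (forward)
    }
}
```

The heavier the rocket relative to the expelled gas, the smaller the velocity it picks up from the same push.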
Newton’s Third Law: The most fundamental statement of basic physical reality is also the most often misunderstood. Ask your mom if she’s clear on Newton’s Third. Then ask her why things can move if every force has a paired opposite force all the time, forever.
## Key Points
• Newton’s three laws of physics are the basis for mechanics.
• The first law states that a body at rest will stay at rest until a net external force acts upon it and that a body in motion will remain in motion at a constant velocity until acted on by a net external force.
• Net external force is the sum of all of the forces acting on an object.
• Just because there are forces acting on an object doesn’t necessarily mean that there is a net external force; forces that are equal in magnitude but acting in opposite directions can cancel one another out.
• Friction is the force between an object in motion and the surface on which it moves. Friction is the external force that acts on objects and causes them to slow down when no other external force acts upon them.
• Inertia is the tendency of a body in motion to remain in motion. Inertia is dependent on mass, which is why it is harder to change the direction of a heavy body in motion than it is to change the direction of a lighter object in motion.
• Newton’s three laws of motion explain the relationship between forces acting on an object and the motion they experience due to these forces. These laws act as the basis for mechanics.
• The second law explains the relationship between force and motion, as opposed to velocity and motion. It uses the concept of linear momentum to do this.
• Linear momentum $$\mathrm{p}$$ is the product of mass $$\mathrm{m}$$ and velocity $$\mathrm{v}$$: $$\mathrm{p=mv}$$.
• The second law states that the net force on an object is equal to the derivative, or rate of change, of its linear momentum.
• By simplifying this relationship and remembering that acceleration is the rate of change of velocity, we can see that the second law of motion is where the relationship between force and acceleration comes from.
• If an object A exerts a force on object B, object B exerts an equal and opposite force on object A.
• Newton’s third law can be seen in many everyday circumstances. When you walk, the force you use to push off the ground backwards makes you move forward.
• Thrust is an application of the third law of motion. A helicopter uses thrust to push the air under the propeller down, and thereby lift off the ground.
## Key Terms
• inertia: The property of a body that resists any change to its uniform motion; equivalent to its mass.
• friction: A force that resists the relative motion or tendency to such motion of two bodies in contact.
• uniform motion: Motion at a constant velocity (with zero acceleration). Note that an object in motion will not change its velocity unless an unbalanced force acts upon it.
• net force: The combination of all the forces that act on an object.
• momentum: (of a body in motion) the product of its mass and velocity.
• acceleration: The amount by which a speed or velocity increases (and so a scalar or a vector quantity, respectively).
• symmetry: Exact correspondence on either side of a dividing line, plane, center or axis.
• thrust: The force generated by propulsion, as in a jet engine. | 4,616 | 17,432 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.28125 | 4 | CC-MAIN-2024-38 | latest | en | 0.194156 |
https://www.airmilescalculator.com/distance/bjv-to-ist/ | 1,686,108,915,000,000,000 | text/html | crawl-data/CC-MAIN-2023-23/segments/1685224653501.53/warc/CC-MAIN-20230607010703-20230607040703-00061.warc.gz | 661,532,406 | 24,579 | How far is Istanbul from Bodrum?
The distance between Bodrum (Milas–Bodrum Airport) and Istanbul (Istanbul Airport) is 283 miles / 455 kilometers / 246 nautical miles.
The driving distance from Bodrum (BJV) to Istanbul (IST) is 421 miles / 678 kilometers, and travel time by car is about 7 hours 53 minutes.
Distance from Bodrum to Istanbul
There are several ways to calculate the distance from Bodrum to Istanbul. Here are two standard methods:
Vincenty's formula (applied above)
• 282.580 miles
• 454.768 kilometers
• 245.555 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth's surface using an ellipsoidal model of the planet.
Haversine formula
• 282.977 miles
• 455.407 kilometers
• 245.900 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
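A minimal sketch of the haversine calculation, using the BJV and IST coordinates listed under Airport information and a mean Earth radius of 6371 km (the site's own figure may assume a slightly different radius):

```java
public class Haversine {
    // Great-circle distance between two lat/lon points on a spherical Earth.
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius in km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Milas–Bodrum (BJV) to Istanbul (IST), in decimal degrees
        double d = haversineKm(37.2506, 27.6642, 41.2600, 28.7425);
        System.out.printf("%.1f km%n", d); // ≈ 455 km
    }
}
```

The result lands within a fraction of a kilometer of the 455.407 km quoted above; the small gap comes from the choice of Earth radius.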
How long does it take to fly from Bodrum to Istanbul?
The estimated flight time from Milas–Bodrum Airport to Istanbul Airport is 1 hour and 2 minutes.
What is the time difference between Bodrum and Istanbul?
There is no time difference between Bodrum and Istanbul.
Flight carbon footprint between Milas–Bodrum Airport (BJV) and Istanbul Airport (IST)
On average, flying from Bodrum to Istanbul generates about 67 kg of CO2 per passenger, and 67 kilograms equals 147 pounds (lbs). The figures are estimates and include only the CO2 generated by burning jet fuel.
Airport information
Origin Milas–Bodrum Airport
City: Bodrum
Country: Turkey
IATA Code: BJV
ICAO Code: LTFE
Coordinates: 37°15′2″N, 27°39′51″E
Destination Istanbul Airport
City: Istanbul
Country: Turkey
IATA Code: IST
ICAO Code: LTFM
Coordinates: 41°15′36″N, 28°44′33″E | 514 | 2,013 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.6875 | 3 | CC-MAIN-2023-23 | latest | en | 0.860031 |
https://www.viladoraso.com.br/bookkeeping/what-is-the-accounting-equation/ | 1,638,826,195,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964363312.79/warc/CC-MAIN-20211206194128-20211206224128-00261.warc.gz | 1,140,148,662 | 66,917 | Autor:
## Elements Of The Fundamental Accounting Equation
One asset (Cash) increases while another asset (Accounts Receivable) decreases. Since the accounts will change by the same amount, the total amount of assets will not change. Equity has an equal effect on both sides of the equation. If you know any two parts of the accounting equation, you can calculate the third. If you are a sole proprietor, you hold all the ownership.
## Accounting Equations Every Business Should Know
As a result, the equation is sometimes referred to as the balance sheet equation. This is where the idea of the accounting equation comes in. The two sides of the equation must always add up to equal value.
## Calculating Liabilities
### What are the 10 steps of accounting cycle?
The 10 steps are: analyzing transactions, entering journal entries of the transactions, transferring journal entries to the general ledger, crafting unadjusted trial balance, adjusting entries in the trial balance, preparing an adjusted trial balance, processing financial statements, closing temporary accounts,
Financing through debt shows as a liability, while financing through issuing equity shares appears in shareholders’ equity. Does this equation and its meaning still seem a bit tricky right now? If so, don’t worry, it will become easier as you continue along. Well, in order to answer that question we need to look at what each of the terms in the equation means. The owner deposited R into his account for JJ Landscapers. The asset “Cash” is decreased $2000 and the drawing decreases Owner’s Equity $2000. The asset “Cash” is decreased $950 and the expense decreases Owner’s Equity $950.
They refer to assets such as goodwill, patents, copyrights & trademarks. Though not tangible, these assets bring huge value to an organization.
Shareholders’ equity is a company’s total assets minus its total liabilities. Shareholders’ equity represents the amount of money that would be returned to shareholders if all of the assets were liquidated and all of the company’s debt was paid off. Ted is an entrepreneur who wants to start a company selling speakers for car stereo systems. After saving up money for a year, Ted decides it is time to officially start his business. He forms Speakers, Inc. and contributes $100,000 to the company in exchange for all of its newly issued shares. This business transaction increases company cash and increases equity by the same amount. Let’s take a look at the formation of a company to illustrate how the accounting equation works in a business situation.
Mr. John invested a capital of $15,000 into his business. $30,000 is credited to cash, and $30,000 is debited to inventory. He funds the venture with $10,000 of his own money and takes out a small business loan for $30,000. A cash dividend will cause a reduction in the corporation’s retained earnings, which in turn reduces the corporation’s stockholders’ equity. However, this will not reduce the corporation’s net income. The company purchases land by paying half in cash and signing a note payable for the other half. The company purchases a significant amount of supplies on credit.
To record the owner’s withdrawal of cash from the business.
An asset is a resource that is owned or controlled by the company to be used for future benefits. Some assets are tangible like cash while others are theoretical or intangible like goodwill or copyrights. By understanding the accounting formula and its role within your business, you can better monitor your business’s financial stability. Maybe I am mistaken, but I think for Transaction #7 you meant that assets decrease by $2000 and that drawing decreases owner’s equity by $2000.
Once you get the loan, this is how your accounting equation changes. One asset increased and one asset decreased. Since the amounts are identical, there is no change to the total amount of assets. A thorough accounting system and a well-maintained general ledger allow you to properly assess the financial health of your company. There are many more formulas that you can use, but the eight that we provided are some of the most important. This ratio gives you an idea of how much cash you currently have on hand.
Some transactions may increase one account and decrease another on the same side of the equation, i.e. one asset increases and another decreases. A transaction that decreases total assets must also decrease total liabilities or owner’s equity. The balance sheet is used to analyze a company’s financial position. Using the balance sheet, a financial analyst can calculate a number of financial ratios to determine how well a company is performing, how efficient it is, and how liquid it is. Changes in the balance sheet are used to calculate cash flow in the cash flow statement. A transaction like this affects only the assets of the equation and there is no corresponding effect in liabilities or shareholder equity on the right side of the equation. It is clear that it is possible to categorize your financial world into these 5 groups.
The company purchased printers and paid a total of $1,000. Before taking this lesson, be sure to be familiar with the accounting elements. For every entry the sum of debits must equal the sum of credits. It just changes from being $3,000 in cash to being $3,000 in inventory.
As you can see, we added all transactions that related to the bank to arrive at our ending balance of $20,000. This is the same approach we took for all the accounts. Again, you are introducing a personal asset into your business and using it as a business asset. Any investment of personal assets will increase your owner’s equity. Next, Sally purchased $4,000 worth of inventory to stock her store. The inventory purchase affected the inventory account under assets and the accounts payable account under liabilities. This video introduces the accounting equation, which is the most important concept in accounting.
The asset “Cash” is increased $1200 and the revenue increases Owner’s Equity $1200. The asset “Office Supplies” is increased $550 and the asset “Cash” is decreased $550. The business owes creditors for loans made and other obligations to pay for goods or services.
Thus, the accounting equation is an essential step in determining company profitability. If a business buys raw material by paying cash, it will lead to an increase in the inventory while reducing cash. Because there are two or more accounts affected by every transaction carried out by a company, the accounting system is referred to as double-entry accounting. Assets, liabilities and owners’ equity are the three components that make up a company’s balance sheet. The balance sheet, which shows a business’s financial condition at any point, is based on this equation.
It’s also helpful on a lower level by keeping all transactions in balance, with a verifiable relationship between each expense and its source of financing. In this case, assets represent any of the company’s valuable resources, while liabilities are outstanding obligations. Combining liabilities and equity shows how the company’s assets are financed. After the company formation, Speakers, Inc. needs to buy some equipment for installing speakers, so it purchases $20,000 of installation equipment from a manufacturer for cash. In this case, Speakers, Inc. uses its cash to buy another asset, so the asset account is decreased from the disbursement of cash and increased by the addition of installation equipment.
For each of the transactions in items 2 through 13, indicate the two effects on the accounting equation of the business or company. By using the accounting equation, you can see if you can fund the purchase of an asset with your business’s existing assets. And, the equation will reveal if you should pay off debts with assets or by taking on more liabilities. If you’re a small business owner who would prefer to monitor your company’s cash flow with your own two eyes, there are financial accounting equations that you should be familiar with. These fundamental accounting equations are rather broad, meaning they should apply to an array of businesses. The accounting equation states that the amount of assets must be equal to liabilities plus shareholder or owner equity.
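A small sketch that checks the equation after each transaction, using illustrative figures echoing this article's examples (a $10,000 owner investment, a $30,000 loan, and an asset-for-asset inventory purchase):

```java
public class AccountingEquation {
    static double assets, liabilities, equity;

    static boolean balanced() {
        return assets == liabilities + equity; // Assets = Liabilities + Equity
    }

    public static void main(String[] args) {
        // Owner invests 10,000 of capital: assets and equity both rise.
        assets += 10_000; equity += 10_000;
        System.out.println(balanced()); // true

        // Take out a 30,000 loan: assets and liabilities both rise.
        assets += 30_000; liabilities += 30_000;
        System.out.println(balanced()); // true

        // Buy 4,000 of inventory with cash: one asset swaps for another,
        // so the totals are unchanged and the equation still balances.
        System.out.println(balanced()); // true
    }
}
```

Every transaction touches at least two accounts, which is exactly why the two sides stay equal.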
### Is bank loan a current liability?
Bonds, mortgages and loans that are payable over a term exceeding one year would be fixed liabilities or long-term liabilities. However, the payments due on the long-term loans in the current fiscal year could be considered current liabilities if the amounts were material.
Intangible assets include patents, copyrights, and trademarks. Activities such as purchasing assets or recording sales will increase your asset account. Now that you understand assets, liabilities, and equity, it’s time to get hands on with balance sheets so you can track each of those elements. Our guide to balance sheets has everything you need to jump in. Let’s say you invest $10,000 to open an online used book shop. Right off the bat, you know your equity consists of that $10,000 in the form of capital. And, since your liabilities total $0, your assets are also $10,000.
There are two types of assets: current assets and fixed assets. Things such as utility bills, land payments, employee salaries, and insurance are all examples of liabilities. To record capital contribution as stockholders invest in the business.
Double entry bookkeeping ensures that every transaction keeps the accounting equation in balance. Each type of entity can also use the accounting equation to estimate its stability in terms of its financial transactions. The borrowing of $300,000 is not utilized towards the purchase of any asset or expense. Therefore, it will lead to a corresponding increase in the bank balance. Secondly, the interest payable reduces the cash balance. Conversely, the corresponding entry will be passed in the owner’s equity account.
This equation is the framework of tracking money as it flows in and out of an economic entity. The form in which we see accounting today is possible because of Luca Pacioli, a Renaissance-era monk. He developed a method that tracks the success or failure of trading ventures over 500 years ago. It is the value of the assets that the owner really owns. They are things that add value to the business and will bring it benefits in some form.
• Buying something with the cash the company has on hand doesn’t affect the accounting formula, because it’s just converting one type of asset into another type of asset .
• Differentiating between these scenarios will require a closer look at the balance sheet.
• Assets refer to items like cash, inventory, accounts receivable, buildings, land, or equipment.
• A company with $1 million in assets could’ve blown those assets on frivolous spending, or it could’ve wisely spent on things that will help the business grow and succeed.
https://bmcstructbiol.biomedcentral.com/articles/10.1186/s12900-018-0094-3/tables/13 | 1,670,242,779,000,000,000 | text/html | crawl-data/CC-MAIN-2022-49/segments/1669446711016.32/warc/CC-MAIN-20221205100449-20221205130449-00608.warc.gz | 163,133,487 | 46,239 | Alg. 2: Use Bayesian method to find the dictionary D of similar block group Input: Similar block group $${\overline{Y}}_n,n=1,2,..N$$, K Gaussian distribution {N(μk, ∑k)}K = 1, 2, …, k through GMM leaning. Output: Gaussian component of similar block group $${\overline{Y}}_n$$ corresponded dictionary D. Step1. initialization n = 1,k = 1. Step2. Apply the formula $$\ln P\left(k\left|\overline{Y}\right.=\right)\sum \limits_{m=1}^M\ln N\left({y}_m\left|\overline{0},{\sum}_k\right.\right)-\ln C$$ to calculate $$\ln P\left(k\left|\overline{Y}\right.\right)$$ when taking the k-th Gaussian component. Step3. Repeat step 2, total of K times for calculating $$\ln P\left(k\left|\overline{Y}\right.\right)$$ values. Step4. Compare $$\ln P\left(k\left|\overline{Y}\right.\right),k=1,2,\dots, k$$, get the maximum $$\ln P\left(k\left|\overline{Y}\right.\right)$$, its corresponding Gaussian distribution can describe similar block group Yn, its covariance matrix is ∑k. Step5. For SVD decomposition, get dictionary Dn of similar block group Yn. Step6. Repeat steps 2–5, a total of N times, until the output N is a dictionary D. | 367 | 1,121 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.109375 | 3 | CC-MAIN-2022-49 | latest | en | 0.591943 |
http://www.java-gaming.org/index.php?topic=27663.0
[SOLVED] Calculating angle to mouse
DDDBOMBER
« Posted 2012-10-30 14:47:01 »
I need help calculating the angle to the mouse from entities,
```java
if (orders.get(0) instanceof MoveOrder) {
    MoveOrder mo = (MoveOrder) orders.get(0);
    double xd = mo.x - x;
    double yd = mo.y - y;
    double rotation = Math.toDegrees(Math.atan2(yd, xd)) + 270;
    rot = rotation;
    System.out.println(rot);
    orders.remove(0);
}
```
This is my current code, but it doesn't seem to work, and the movement code is:
```java
while (rot > 360) rot -= 360;
while (rot < 0) rot += 360;
double movementAngle = Math.toRadians(rot);
x += Math.sin(movementAngle) * 0.5;
y += Math.cos(movementAngle) * 0.5;
```
can anyone help?
deepthought
« Reply #1 - Posted 2012-10-30 17:19:21 »
everything you have looks correct to me. I assume it's messing up at the stage of getting the angle from the mouse, right? make sure you're getting the right positions in your MoveOrder and entity position.
h3ckboy
« Reply #2 - Posted 2012-10-30 18:02:44 »
Can you be more specific in "doesn't work"?
kinaite
« Reply #3 - Posted 2012-10-30 18:26:20 »
this should help:
```java
angle = Math.toDegrees(Math.atan2(MouseY - entity.y, MouseX - entity.x));
```
DDDBOMBER
« Reply #4 - Posted 2012-10-30 18:29:54 »
It seems to go in completely random directions, and doesn't move to the mouse at all...
kinaite
« Reply #5 - Posted 2012-10-30 18:34:31 »
to move: (Worked in my game. see Walking sprites in show case)
deepthought
« Reply #6 - Posted 2012-10-30 18:43:29 »
you need to find out where you are going wrong. are you getting the correct angle from your first segment?
DDDBOMBER
« Reply #7 - Posted 2012-10-30 20:07:51 »
Okay, I've done some testing, and it seems to work fine on the x-axis, but messes up somewhere on the y-axis. As in, if I tell it to go from (0,0) to (10,0) it works fine; if I want to move just on the x-axis it has to be

```java
MoveOrder mo = (MoveOrder) orders.get(0);
double xd = mo.x - x;
double yd = mo.y - y;
double rotation = Math.toDegrees(Math.atan2(yd, xd)) + 90;
```
However, if I want to move on the y-axis it has to be

```java
MoveOrder mo = (MoveOrder) orders.get(0);
double xd = mo.x - x;
double yd = mo.y - y;
double rotation = Math.toDegrees(Math.atan2(yd, xd)) + 270;
```
Which even further confuses me...
I've got it to move in the right direction by doing it like this
```java
double movementAngleX = Math.toRadians(rot);
double movementAngleY = Math.toRadians(rot + 180);
x += Math.sin(movementAngleX) * 0.5;
y += Math.cos(movementAngleY) * 0.5;
if (orders.size() > 0) {
    if (orders.get(0) instanceof MoveOrder) {
        MoveOrder mo = (MoveOrder) orders.get(0);
        double xd = mo.x - x;
        double yd = mo.y - y;
        double rotation = Math.toDegrees(Math.atan2(yd, xd)) + 90;
        rot = rotation;
        System.out.println(rot);
        orders.remove(0);
    }
}
```
But then the rotation on the image is messed up. The rendering code is (I'm using LWJGL):

```java
GL11.glTranslated(x + 32, y + 32, 0);
GL11.glRotated(rot, 0, 0, -1);
GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0.0f, 0.0f); GL11.glVertex2f(-32, -32);
GL11.glTexCoord2f(1.0f, 0.0f); GL11.glVertex2f(32, -32);
GL11.glTexCoord2f(1.0f, 1.0f); GL11.glVertex2f(32, 32);
GL11.glTexCoord2f(0.0f, 1.0f); GL11.glVertex2f(-32, 32);
GL11.glEnd();
```
Anyone got any ideas?
kinaite
« Reply #8 - Posted 2012-10-30 20:29:01 »
this is y coord code:
and try GL11.glRotatef with:

```java
angle = angle % 360.0f;
GL11.glRotatef((float) angle, ...);
```
DDDBOMBER
« Reply #9 - Posted 2012-10-30 20:37:33 »
Thanks! Got it to work,
```java
double movementAngle = Math.toRadians(rot);
x += Math.sin(movementAngle) * 0.5;
y -= Math.cos(movementAngle) * 0.5;  // subtract: screen y grows downward
if (orders.size() > 0) {
    if (orders.get(0) instanceof MoveOrder) {
        MoveOrder mo = (MoveOrder) orders.get(0);
        double xd = mo.x - x;
        double yd = mo.y - y;
        // atan2 measures from the +x axis; +90 re-bases it to the sprite's "up"
        double rotation = Math.toDegrees(Math.atan2(yd, xd)) + 90;
        rot = rotation;
        orders.remove(0);
    }
}
```
```java
float angle = (float) (rot % 360.0f);
GL11.glRotated(angle + 180.0f, 0, 0, 1);
```
theagentd
« Reply #10 - Posted 2012-10-31 00:50:39 »
```java
float angle = (float) (rot % 360.0f);
GL11.glRotated(angle + 180.0f, 0, 0, 1);
```
glRotate() can handle angles outside 360 degrees, but you should keep the actual stored rotation (rot) to 0-360 degrees.
Myomyomyo.
https://gpuzzles.com/mind-teasers/easy-logic-riddle/
# Mind Teasers : Easy Logic Riddle
A friend was telling me, 'I have eight sons and each has one sister.' In total, how many children does my friend have?
Suggestions
# Mind Teasers : Tricky Iq Question
In a jar, there are some orange candies and some strawberry candies. You pick up two candies at a time randomly. If the two candies are of same flavor, you throw them away and put a strawberry candy inside. If they are of opposite flavors, you throw them away and put an orange candy inside.
In such manner, you will be reducing the candies in the jar one at a time and will eventually be left with only one candy in the jar.
If you are told about the respective number of orange and strawberry candies at the outset, will it be feasible for you to predict the flavor of the final remaining candy ?
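The candy puzzle above turns on a parity invariant: every allowed replacement leaves the number of orange candies unchanged or reduces it by two, so the flavor of the last candy depends only on whether the starting orange count is odd. A quick simulation (an editor's sketch, not part of the original page) confirms the prediction:

```python
import random

def last_candy(orange, strawberry, seed=0):
    """Simulate the jar until one candy remains; 'O' = orange, 'S' = strawberry."""
    rng = random.Random(seed)
    jar = ['O'] * orange + ['S'] * strawberry
    while len(jar) > 1:
        rng.shuffle(jar)
        a, b = jar.pop(), jar.pop()
        # same flavors -> put back a strawberry; mixed -> put back an orange
        jar.append('S' if a == b else 'O')
    return jar[0]
```

The orange count changes by -2 (two oranges drawn) or 0 (any other draw), so the final candy is orange exactly when the initial orange count is odd, whatever the random draws were.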
# Mind Teasers : CheckMate All Kings Chess Puzzle
Can you checkmate all the kings in less than 10 moves?
Following are the rules:
1. White can make up to 10 legitimate moves until all kings are checkmate.
2. You can take any black piece(s) except the King.
3. Your king cannot be in check position at any time of the game.
# Mind Teasers : Cake Grandma bridge Logic Problem
Rohit is on his way to visit his Grandma, who lives at the other end of the state. It's her birthday, and he wants to give her the cakes he has made. Between his place and his grandma's house, he needs to cross 7 toll bridges.
Before he can cross a toll bridge, he needs to give the troll half of the cakes he is carrying, but as they are kind trolls, they each give him back a single cake.
How many cakes does Rohit have to carry with him so he can reach his grandma's home with exactly 2 cakes?
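For the cake-and-troll puzzle above, each bridge maps a load of c cakes to c/2 + 1 (hand over half, receive one back), and 2 is a fixed point of that map, so carrying 2 cakes survives all 7 bridges unchanged. A tiny sketch (an editor's illustration) checks this:

```python
def cakes_after(start, bridges=7):
    """Cakes remaining after paying each troll half and receiving one back."""
    c = start
    for _ in range(bridges):
        c = c - c // 2 + 1   # assumes an even count, so halving is exact
    return c

# 2 is a fixed point of the map: 2 -> 1 + 1 -> 2 at every bridge
```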
# Mind Teasers : Hard Chess Puzzle Problem
You need to win in half a move.
How can you do this ?
# Mind Teasers : Floating Fish MatchStick Riddle
As shown in the image below, a group of fish is swimming from left to right. By moving just three matchsticks, can you make the fish swim from right to left?
# Mind Teasers : Math Trick Question
1 pound = 100 penny
= 10 penny x 10 penny
= 1/10 pound x 1/10 pound
= 1/100 pound
= 1 penny
=> 1 pound = 1 penny
Solve this math trick question ?
# Mind Teasers : Funny Brain Twister
We all know that New Year's Day falls one week after Christmas, and thus on the same day of the week as Christmas. But this will not happen in 2050: in 2050, Christmas will fall on a Sunday while New Year will fall on a Saturday.
How can this be possible?
# Mind Teasers : Dollar * 4 = Quarter Equation Riddle
A quarter is 1/4th of a dollar, but Professor Mr. GPuzzles has reversed the equation as below:
DOLLAR * 4 = QUARTER
You need to replace each letter with a digit to decipher the logic the Professor used to make the above equation true.
# Mind Teasers : Tough River Crossing Riddle
This one is a bit trickier than the river-crossing puzzles you might have solved before. A whole family is out on a picnic on one side of the river. The family includes the mother and father, two sons, two daughters, a maid and a dog. The bridge has broken down, and all they have is a boat that can take them to the other side of the river. But there is a condition on the boat: it can hold just two passengers at a time (count the dog as one passenger).
It does not end there; there are other complications. The dog can't be left without the maid or it will bite the family members. The father can't be left with the daughters without the mother, and in the same manner the mother can't be left alone with the sons without the father. Also, an adult is needed to drive the boat; it can't drive itself.
How will all of them reach the other side of the river?
# Mind Teasers : Awesome Clock Puzzle
Time 12:21 is a palindrome, as it reads the same forwards and backwards.
What's the shortest interval between two palindromic times?
Example: 11:11 and 12:21 have an interval of 1 hour 10 minutes.
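A brute-force search over a 12-hour clock settles the question above. This sketch (an editor's illustration, writing times without a colon, e.g. 9:59 as "959") finds that the minimum gap is the 2 minutes between 9:59 and 10:01:

```python
def min_palindrome_gap():
    """Smallest gap in minutes between palindromic times on a 12-hour clock."""
    mins = []
    for h in range(1, 13):
        for m in range(60):
            s = f"{h}{m:02d}"                 # e.g. 9:59 -> "959", 10:01 -> "1001"
            if s == s[::-1]:
                mins.append((h % 12) * 60 + m)  # minutes past 12:00
    mins = sorted(set(mins))
    gaps = [b - a for a, b in zip(mins, mins[1:])]
    gaps.append(mins[0] + 720 - mins[-1])       # wrap around the 12-hour cycle
    return min(gaps)
```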
https://dict.youdao.com/w/eng/self_consistent/
## self consistent
• Self-consistent: describes a formal system that does not entail any contradiction.

### Web definitions and specialized glossaries
Related terms: compatible; union-compatible; self-consistent (also rendered "self-compatible" or simply "consistent"); logically compatible.

Phrases:
- self-consistent — self-identical; consistent
- self-consistent field [quantum mechanics] — the self-consistent field (SCF) method; self-consistent calculation
- self-consistent solution [physics]
- self-consistent field theory — also the self-consistent mean-field theory
- self-consistent method
- Self-Consistent Reaction Field (SCRF)
- self-consistent family of curves
- self-consistent field method
• In the academic-papers corpus, this sense of "self-consistent" is cited 6 times; reference source: "Handwritten digit segmentation based on stroke combinations" (2,447,543 papers indexed, with some data from NoteExpress).
### Bilingual and authoritative example sentences
• Almost anytime you have multiple states changing, you will have several lines during the state change at which the program will not have self-consistent numbers.
• In this program, all state changes are brought about by re-running the recursive function with completely self-consistent data.
• Xu's team also intends HiPiHi to be a self-consistent world, with fixed time zones and an existing map.
• Earlier, World No.5 Ferrer was his usual consistent self as he eased past Stepanek 6-3 6-4 6-4 to put the defending champions on their way, with victory in two hours 58 minutes.
CNN: STORY HIGHLIGHTS
• An intelligent person is open-minded, an outside-the-box thinker, an effective communicator, is prudent, self-critical, consistent, and so on.
NEWYORKER: Live and Learn
• We have a rose-colored view of who we are and what we do, and we aim to behave in ways that are consistent with our self-image as capable, competent, helpful, and honest individuals.
FORBES: Why Do Our Decisions Get Derailed?
http://www.studymode.com/course-notes/Kaplan-Mm207-Statistics-Final-Project-1788807.html
# Kaplan MM207 Statistics Final Project
By | June 2013
MM207 Final Project Name: Eddie S. Jackson
1.
Using the MM207 Student Data Set: a) What is the correlation between student cumulative GPA and the number of hours spent on school work each week? Be sure to include the computations or StatCrunch output to support your answer. My answer :
0.27817234
(from StatCrunch):
Correlation between Q10 What is your cumulative Grade Point Average at Kaplan University? and Q11 How many hours do you spend on school work each week? is: 0.27817234
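For reference, the Pearson correlation that StatCrunch reports can be computed directly from its definition. This sketch (an editor's illustration) uses small made-up samples, not the actual MM207 student records:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# illustrative numbers only: hours of school work per week vs cumulative GPA
hours = [3, 5, 6, 8, 10, 12, 16]
gpa = [2.8, 3.0, 3.1, 3.2, 3.3, 3.5, 3.6]
r = pearson_r(hours, gpa)   # a positive correlation
```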
b) Is the correlation what you expected? My answer: No. I expected the correlation to be much higher because the more hours you study should equate to a much higher GPA – in theory that is.
c) Does the number of hours spent on school work have a causal relationship with the GPA? My answer: Yes. I was going to say no (because of the low correlation above), until I did a scatter plot. This shows that there definitely is a causal relationship between study time and GPA.
yuck. There are 2 points on the right that most likely could be excluded. d) What would be the predicted GPA for a student who spends 16 hours per week on school work? Be sure to include the computations or StatCrunch output to support your prediction. My answer: 3.6
from StatCrunch, grouped by Q11 (How many hours do you spend on school work each week?):

| Hours | Mean GPA  | n  | Variance   |
|-------|-----------|----|------------|
| 3     | 3.6666667 | 3  | 0.33333334 |
| 4     | 2         | 1  | NaN        |
| 5     | 3.3775    | 8  | 0.3129357  |
| 6     | 3.0714285 | 7  | 0.42641428 |
| 7     | 3.75      | 2  | 0.125      |
| 8     | 3.352     | 5  | 0.26252    |
| 10    | 2.9693334 | 30 | 1.6706271  |
| 11    | 3.6466668 | 3  | 0.14423333 |
| 12    | 3.290909  | 11 | 1.4214091  |
| 13    | 4         | 2  | 0          |
| 14    | 3.93      | 2  | 0.0098     |
| 15    | 3.7127273 | 11 | 0.11040182 |
| 16    | 3.6       | 3  | 0.07       |
2.
Select a continuous variable that you suspect would not follow a normal distribution. a) My answer: my continuous variable is “Age” b) Create a graph for the variable you have selected to show its distribution. My answer:
a)...
https://phys.libretexts.org/Bookshelves/Quantum_Mechanics/Book%3A_Quantum_Mechanics_(Fowler)/8%3A_Approximate_Methods
8: Approximate Methods
So far, we have concentrated on problems that were analytically solvable, such as the simple harmonic oscillator, the hydrogen atom, and square well type potentials. In fact, we shall soon be confronted with situations where an exact analytic solution is unknown: more general potentials, or atoms with more than one electron. To make progress in these cases, we need approximation methods. The best known method is perturbation theory, which has proved highly successful over a wide range of problems (but by no means all).
• 8.1: Variational Methods
In this module, the variational method is introduced. The variational method works best for the ground state, and in some circumstances (to be described below) for some other low lying states.
• 8.2: The WKB Approximation
The WKB (Wentzel, Kramers, Brillouin) approximation is, in a sense to be made clear below, a quasi-classical method for solving the one-dimensional (and effectively one-dimensional, such as radial) time-independent Schrödinger equation.
• 8.3: Note on the WKB Connection Formula
For a particle trapped in a (one-dimensional) potential well, classically the particle would bounce back and forth between the two turning points where its kinetic energy vanishes. In the quantum case, these are precisely the points where the wavelength becomes infinite, so the WKB solution fails.
Thumbnail: Two (or more) wave functions are mixed by linear combination. The coefficients c1 and c2 determine the weight each is given. The optimum coefficients are found by searching for minima in the potential landscape spanned by c1 and c2. Image used with permission (CC BY-SA 3.0; Rudolf Winter at Aberystwyth University).
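As a concrete illustration of the variational method introduced above, the following sketch (an editor's illustration, not part of this text) estimates the ground-state energy of the 1-D harmonic oscillator (hbar = m = omega = 1, exact E0 = 1/2) by minimizing the Rayleigh quotient <psi|H|psi>/<psi|psi> over a family of Gaussian trial functions on a finite-difference grid:

```python
import numpy as np

# finite-difference Hamiltonian for V(x) = x^2 / 2 with hbar = m = 1
x = np.linspace(-8.0, 8.0, 801)
dx = x[1] - x[0]
N = x.size
H = (np.diag(np.full(N, 1.0 / dx**2) + 0.5 * x**2)
     + np.diag(np.full(N - 1, -0.5 / dx**2), 1)
     + np.diag(np.full(N - 1, -0.5 / dx**2), -1))

def rayleigh(psi):
    """Variational energy <psi|H|psi> / <psi|psi> for a trial vector psi."""
    return float(psi @ H @ psi) / float(psi @ psi)

# Gaussian trial functions psi_a(x) = exp(-a x^2 / 2); a = 1 recovers the exact state
alphas = np.linspace(0.4, 2.0, 81)
energies = [rayleigh(np.exp(-0.5 * a * x**2)) for a in alphas]
E_var = min(energies)            # best variational upper bound
E0 = np.linalg.eigvalsh(H)[0]    # "exact" ground energy on this grid
```

By the variational principle, every Rayleigh quotient is an upper bound on the ground-state energy; the minimum over the trial family lands essentially on E0 = 1/2 here because the exact ground state is itself a Gaussian.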
https://republicofsouthossetia.org/question/71-2-7-4-is-equal-to-13-3-73-78-16138610-45/
Question: 71^2 ÷ 7^4 is equal to _____.
Options: 13, 3, 73, 78
1. Answer: 2.0995
Step-by-step explanation:
71^2 = 5041
7^4 = 2401
5041 / 2401 ≈ 2.0995
2. Answer: 71^2 ÷ 7^4 is equal to 2.09954185756, but in this case I would say 3.
Step-by-step explanation:
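Neither answer matches the listed options, which hints that the exponents may have been garbled when the question was transcribed; for instance, the options "73" and "78" read naturally as 7^3 and 7^8, which would fit a problem like 7^12 ÷ 7^4. Both the arithmetic as written and that hypothetical reading are easy to check (an editor's verification sketch, not part of either answer):

```python
# the quotient as the question is written
num = 71 ** 2        # 5041
den = 7 ** 4         # 2401
ratio = num / den    # about 2.09954

# hypothetical reading: if the problem was 7^12 / 7^4, the quotient rule
# a^m / a^n = a^(m-n) gives 7^8, matching option "78" read as 7^8
assert 7 ** 12 // 7 ** 4 == 7 ** 8
```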
https://www.kenexis.com/faq-items/effigy-3d-fire-gas-mapping-feature-plume-fire-model/

Kenexis is happy to announce the release of Effigy™ Version 4.1.0. The release of this version introduces the plume fire algorithm for fire geographic mapping. This blog describes the details of the plume fire model, how it differs from the point source model used by most fire and gas mapping software, and when the use of the plume fire model is appropriate.
Background
Effigy currently supports the import of 3D models for fire and gas mapping. This import functionality is an advancement over the traditional and time consuming methods of developing simplistic 3D models from geometric primitives. While importing 3D models directly provides benefits in time-savings, it also introduces some complexity when it comes to the fire models used to analyze detector coverage, particularly when using the geographic coverage methodology.
When working with a traditional “geometric primitive” 3D model, it is common practice to only include objects which are large enough to obstruct a fire of a threshold size of concern (e.g. a 1ft x 1ft hydrocarbon pool fire). As a result, small obstructions such as small bore piping, instrument tubing, conduit, etc. are often not included in the 3D model. Excluding these objects is appropriate and does not result in an overestimation of the achieved coverage because of a conservative assumption which is inherent to the “point source fire model” used by most fire and gas mapping software, including Effigy.
The Point Source Fire Model
In the point source fire algorithm, a fire is assumed to occur at a single point in space (an infinitely small fire, radiating infinitely hot). This assumption results in a conservative calculation, as it does not account for the fact that fire has volume. Making the assumption that a fire occurs at a single point in space is conservative even for the smallest fires of concern. This is why exclusions for minor obstructions when using the point source model is not only standard practice, but is often required to achieve an acceptable degree of coverage.
Let’s look at an example. The images below show the field of view for a single optical flame detector. In both images, the field of view of the detector is shown by the yellow region. In the image to the left you’ll notice a small green obstruction running horizontally across the image. This obstruction represents a 3 inch diameter pipe.
Figure 1: Dual Perspectives: Single Optical Flame Detector (Obstructing Pipe)
In the image to the left, the field of view of the detector appears uninterrupted because your perspective is from the exact location of the detector's lens. However, when the image is rotated and the pipework is shown in wireframe, as on the right, you can clearly see that the point source coverage algorithm has predicted an area of uncovered space directly behind the pipe from the perspective of the detector. The point source model has predicted that a fire centered at any location along the red obstructed region caused by the pipe will be undetectable. It is important to remember, however, that in reality fires are not infinitely small point sources, but have volume. Intuitively, we know that a 3 inch diameter pipe will not significantly impede the ability of a detector to detect a 1 ft diameter pool fire, which is why exclusion of such objects is common practice.
When working with 3D models, what you will find is that operating with the conservative assumptions of the point source algorithm can result in overly conservative results because of the cumulative effect of these small diameter obstructions. For this reason, it is often necessary to implement a coverage algorithm which accounts for the volume of a fire. This algorithm is referred to as the “plume fire model”.
The Plume Fire Model
The plume fire model can be enabled on the Effigy study settings page as shown below. In the plume model a fire is represented as a 3 dimensional cuboid with equal x and y dimensions. The size of the fire is defined by the user using the plume width and plume height parameters.
Figure 2: Study Settings and Corresponding Cuboid
In the plume fire model, the fire is assumed to be emitting thermal radiation equally over its entire surface. When the fire geographic coverage algorithms are run, Effigy will calculate the fraction of the fire surface which is observable from the fire detector. In doing this, the fraction of the fire's total radiant heat output which is viewable from the detector can be determined. This calculation is illustrated in the images below, where a grid has been superimposed onto the plume (far left). Adding a small-diameter pipe between the detector's point of view and the plume obstructs a fraction of the visible plume (middle). This fraction of the plume will then be ignored by the fire coverage algorithms. After this obstructed area is accounted for, Effigy will determine whether a fire is detectable based on the remaining observable surfaces of the plume (far right).
Figure 3: Obstruction Detection
Now let’s look at the plume fire algorithm in action using Effigy.
Figure 4: Dual Perspectives: Single Optical Flame Detector (Unobstructing Pipe)
Just as before, we’ve introduced a 3 inch diameter pipe into the field of view of an optical flame detector. However, now the area directly behind the pipe is predicted as an unobstructed region.
Why? This is because the volume of the fire is sufficient that a fire centered at a location directly behind the pipe will result in a plume which is still largely visible to the detector. While the pipe may be obstructing 10%–15% of the fire surface, the detector is still receiving sufficient thermal radiation to reach an alarm state.
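The fraction-of-surface-visible idea described above can be illustrated with a toy 2-D ray test. This is an editor's sketch; the geometry, sampling density, and detection criterion here are illustrative assumptions, not Effigy's actual algorithm:

```python
def _orient(o, u, v):
    """Sign of the cross product (u - o) x (v - o); 0 means collinear."""
    d = (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    return (d > 0) - (d < 0)

def blocks(p, q, a, b):
    """True if segment p-q crosses segment a-b (collinear overlaps treated as touching)."""
    return _orient(p, q, a) != _orient(p, q, b) and _orient(a, b, p) != _orient(a, b, q)

def visible_fraction(detector, plume_points, obstacle):
    """Fraction of sampled plume-surface points with a clear line of sight."""
    a, b = obstacle
    clear = sum(not blocks(detector, s, a, b) for s in plume_points)
    return clear / len(plume_points)

# 2-D illustration: detector at (0, 5), a 3-story plume face at x = 10
# sampled at 31 points, and a thin pipe at x = 5
detector = (0.0, 5.0)
plume = [(10.0, 3.0 * i / 30) for i in range(31)]
pipe = ((5.0, 2.4), (5.0, 2.6))

frac = visible_fraction(detector, plume, pipe)                    # most of the face is visible
point_source = visible_fraction(detector, [(10.0, 0.0)], pipe)    # 0.0: fully blocked
```

Sampled over its whole face, the plume loses only a small slice of visible surface to the thin pipe, while a single point-source sample directly behind the pipe is 100% obstructed, mirroring the contrast between Figures 1 and 4.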
Real World Application
The following is a real world example of 3D fire and gas mapping analysis where the differences between the point source and plume fire models are apparent. The following images are of a manifold deck on an offshore facility. As you can see in the 3D view below, there is a significant amount of minor obstructions in the area being modeled.
Figure 5: Real World 3D Fire and Gas Mapping Analysis
The following images display a plot view of the achieved fire detector coverage utilizing both the point source model and the plume model. The top image displays the achieved coverage utilizing the point source model. The bottom image displays the results using the plume fire model with a 1ft x 1ft x 3 ft plume.
As usual, the areas in green are covered by 2 or more detectors, areas in yellow are covered by a single detector, and areas in red are uncovered.
Figure 6: Achieved Coverage via Point Source Modeling
Figure 7: Achieved Coverage via Plume Modeling
Conclusions
Use of the plume fire model is highly recommended when performing fire and gas mapping using imported 3D CAD models which contain a high degree of detail. The effects of changing between the point source model and plume fire model are most pronounced when working with 3D models where the field of view of detectors are highly obstructed by small diameter piping or other minor obstructions.
Use of the plume fire model is not recommended for fire and gas mapping studies where simplified 3D models are developed from the geometry primitives within Effigy, unless those models are developed with a high degree of detail. In studies with simplified 3D models, use of the plume fire model could result in a non-conservative prediction of detector coverage.
https://ythi.net/practice-spoken-english/qMFpOcLroOg/ | 1,596,774,225,000,000,000 | text/html | crawl-data/CC-MAIN-2020-34/segments/1596439737152.0/warc/CC-MAIN-20200807025719-20200807055719-00236.warc.gz | 800,571,163 | 12,196 | # Practice English Speaking&Listening with: Can you solve the rogue AI riddle? - Dan Finkel
A hostile artificial intelligence called NIM has taken over the world's computers. You're the only person skilled enough to shut it down, and you'll only have one chance. You've broken into NIM's secret lab, and now you're floating in a raft on top of 25 stories of electrified water. You've rigged up a remote that can lower the water level by ejecting it from grates in the sides of the room. If you can lower the water level to 0, you can hit the manual override, shut NIM off, and save the day. However, the AI knows that you're here, and it can lower the water level, too, by sucking it through a trapdoor at the bottom of the lab. If NIM is the one to lower the water level to 0, you'll be sucked out of the lab, resulting in a failed mission.

Control over water drainage alternates between you and NIM, and neither can skip a turn. Each of you can lower the water level by exactly 1, 3, or 4 stories at a time. Whoever gets the level exactly to 0 on their turn wins. Note that neither of you can lower the water below 0; if the water level is at 2, then the only move is to lower the water level 1 story. You know that NIM has already computed all possible outcomes of the contest, and will play in a way that maximizes its chance of success. You go first. How can you survive and shut off the artificial intelligence?

Pause here if you want to figure it out for yourself.
You cant leave anything up to chance - NIM will take any advantage it can get.
And youll need to have a response to any possible move it makes.
The trick here is to start from where you want to end and work backwards.
You want to be the one to lower the water level to 0,
which means you need the water level to be at 1, 3, or 4
when control switches to you.
If the water level were at 2,
your only option would be to lower it 1 story,
which would lead to NIM making the winning move.
If we color code the water levels,
we can see a simple principle at play:
there are "losing" levels like 2,
where no matter what whoever starts their turn there does, they'll lose.
And there are winning levels, where whoever starts their turn there
can either win or leave their opponent with a losing level.
So not only are 1, 3, and 4 winning levels,
but so are 5 and 6,
since you can send your opponent to 2 from there.
From 7, all possible moves would send your opponent to a winning level,
making this another losing level.
And we can continue up the lab in this way.
If you start your turn 1, 3, or 4 levels above a losing level,
then you're at a winning level.
Otherwise, you're destined to lose.
You could continue like this all the way to level 25.
But as a shortcut,
you might notice that levels 8 through 11 are colored identically to 1 through 4.
Since a level's color is determined by the levels 1, 3, and 4 stories below it,
this means that level 12 will be the same color as level 5,
13 will match 6,
14 will match 7, and so on.
In particular, the losing levels will always be multiples of 7,
and two greater than multiples of 7.
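The coloring argument is just backward induction, and it can be checked in a few lines of Python (my sketch, not part of the video):

```python
# Backward induction for the water-level game: a move lowers the level by
# 1, 3, or 4 stories, and whoever reaches exactly 0 on their turn wins.
MOVES = (1, 3, 4)

def losing_levels(top):
    """Return the levels from which the player to move is destined to lose."""
    win = [False] * (top + 1)  # win[n]: can the player to move at level n force a win?
    for n in range(1, top + 1):
        win[n] = any(n == m or (n > m and not win[n - m]) for m in MOVES)
    return [n for n in range(1, top + 1) if not win[n]]

print(losing_levels(25))  # → [2, 7, 9, 14, 16, 21, 23]
```

The losing levels are exactly the multiples of 7 and the numbers two above a multiple of 7, matching the pattern in the transcript; from level 25, lowering by 4 hands NIM the losing level 21.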
Now, from your original starting level of 25,
you have to make sure your opponent starts on a losing level every single turn;
if NIM starts on a winning level even once,
it's game over for you.
So your only choice on turn 1 is to lower the water level by 4 stories.
No matter what the AI does,
you can continue giving it losing levels
until you reach 0 and trigger the manual override.
And with that, the crisis is averted.
Now, back to a less stressful kind of surfing.
# Examples of common false beliefs in mathematics
The first thing to say is that this is not the same as the question about interesting mathematical mistakes. I am interested about the type of false beliefs that many intelligent people have while they are learning mathematics, but quickly abandon when their mistake is pointed out -- and also in why they have these beliefs. So in a sense I am interested in commonplace mathematical mistakes.
Let me give a couple of examples to show the kind of thing I mean. When teaching complex analysis, I often come across people who do not realize that they have four incompatible beliefs in their heads simultaneously. These are
(i) a bounded entire function is constant;
(ii) $$\sin z$$ is a bounded function;
(iii) $$\sin z$$ is defined and analytic everywhere on $$\mathbb{C}$$;
(iv) $$\sin z$$ is not a constant function.
Obviously, it is (ii) that is false. I think probably many people visualize the extension of $$\sin z$$ to the complex plane as a doubly periodic function, until someone points out that that is complete nonsense.
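Indeed, belief (ii) fails badly: along the imaginary axis $$\sin(iy) = i\sinh y$$, which grows without bound. A quick numerical check (mine, not from the original question):

```python
import cmath

# |sin(iy)| = sinh(y), which grows roughly like e^y / 2 -- so sin is unbounded on C.
values = [abs(cmath.sin(1j * y)) for y in (1, 5, 10)]
assert values == sorted(values)      # strictly increasing along the imaginary axis
assert abs(cmath.sin(10j)) > 11000   # sinh(10) ≈ 11013, far from |sin| <= 1
```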
A second example is the statement that an open dense subset $$U$$ of $$\mathbb{R}$$ must be the whole of $$\mathbb{R}$$. The "proof" of this statement is that every point $$x$$ is arbitrarily close to a point $$u$$ in $$U$$, so when you put a small neighbourhood about $$u$$ it must contain $$x$$.
Since I'm asking for a good list of examples, and since it's more like a psychological question than a mathematical one, I think I'd better make it community wiki. The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $$(x+y)^2=x^2+y^2$$, even if they are widely believed) and that the reasons they are found plausible are quite varied.
• I have to say this is proving to be one of the more useful CW big-list questions on the site... – Qiaochu Yuan May 6 '10 at 0:55
• The answers below are truly informative. Big thanks for your question. I have always loved your post here in MO and wordpress. – Unknown May 22 '10 at 9:04
• wouldn't it be great to compile all the nice examples (and some of the most relevant discussion / comments) presented below into a little writeup? that would make for a highly educative and entertaining read. – Suvrit Sep 20 '10 at 12:39
• It's a thought -- I might consider it. – gowers Oct 4 '10 at 20:13
• Meta created tea.mathoverflow.net/discussion/1165/… – user9072 Oct 8 '11 at 14:27
Let $$M_1$$ be a finitely generated module over a PID and let $$M_2$$ be a submodule.
We may pick $$L_i$$ and $$T_i$$ submodules of $$M_i$$ such that $$L_i$$ is free, $$T_i$$ is torsion, $$M_i = L_i \oplus T_i$$, $$L_2\subseteq L_1$$ and $$T_2\subseteq T_1$$.
If we regard a ring $$R$$ (with identity) as a right module ($$R_{R}$$), then there is a ring isomorphism $$\text{End}(R_{R}) \simeq R$$, however the same does not happen if we regard $$R$$ as a left module!
The correct is $$\text{End}(_{R}R) \simeq R^{\text{op}}$$.
• Here is a discussion about the condition for Morita equivalence between rings, which is related to this subtle detail: math.stackexchange.com/questions/3566579/… – user144185 Mar 27 at 12:19
I don’t know how common the following false belief is, but I had it for several years, so maybe some other people have it too. I apologize to those with whom I shared this false belief. I hope this post will help.
Kaplansky’s 6th conjecture (here, 1975) states that if $$H$$ is a finite dimensional semisimple Hopf algebra and $$V$$ an irreducible representation of $$H$$, then $$\dim (V)$$ divides $$\dim (H)$$. This conjecture is open over the complex field $$\mathbb{C}$$, but false in positive characteristic. So we assume to be over $$\mathbb{C}$$.
For the group case, this property was proved by Frobenius, that is why a finite dimensional semisimple Hopf algebra (over $$\mathbb{C}$$) with this property is called of Frobenius type.
A finite dimensional Hopf algebra (over $$\mathbb{C}$$) is called a finite quantum group (or Kac algebra) if it has a $$*$$-structure. And then it is also semisimple. It is an open problem whether such a $$*$$-structure always exists.
False belief: George Kac proved Kaplansky’s 6th conjecture for the finite quantum groups.
This false belief was pointed out to me by Pavel Etingof after this talk I gave for Harvard University, and where I mentioned it. Fortunately, that does not affect the content of the talk.
What I had in mind is Theorem 2 in the following paper:
G. I. Kac, Certain arithmetic properties of ring groups., Funct. Anal. Appl., 6 (1972), pp. 158–160.
In modern language, Theorem 2 proves the following: let $$H$$ be a finite quantum group, and let $$\mathcal{C} = Corep(H)$$ be the fusion category of complex corepresentations of $$H$$. For every simple object $$X$$ of the Drinfeld center $$Z(\mathcal{C})$$ which contains the trivial object of $$\mathcal{C}$$ under the forgetful functor, $$FPdim(X)$$ divides $$FPdim(\mathcal{C}) = \dim(H)$$ (the quotients are called the formal codegrees).
Note that these $$X$$ correspond to the irreducible representations of the Grothendieck ring $$K(\mathcal{C})$$ of $$\mathcal{C}$$ (see Theorem 2.13 here). In particular, for $$G$$ a finite group, $$\mathcal{C} = Corep(G) = Vec(G)$$, and $$Irr(K(\mathcal{C})) = Irr(G)$$. That is why Theorem 2 implies Kaplansky’s 6th conjecture in the group case (i.e. covers Frobenius theorem). But it is not clear for a finite quantum group in general. It could be relevant to search in this direction, in particular to check whether for every object $$Y$$ of $$Irr(H)$$ there exists an $$X$$ as above such that $$\dim(Y)$$ divides $$FPdim(X)$$, because this would prove that $$H$$ is of Frobenius type.
Note that Theorem 2 (as stated above) holds more generally for every (complex) fusion category $$\mathcal{C}$$. The case $$\mathcal{C} = Rep(G)$$, with $$G$$ a finite group, recovers the fact that the size of each conjugacy class of $$G$$ divides $$|G|$$. Finally, according to Pavel, the theorem holds more generally without the assumption ‘which contains the trivial object’ (I don’t have the exact reference for that, so if you know it, please put it in comment).
The assumption that a cubic surface expressed as a foliation of Weierstrass curves cannot be rational, because a general Weierstrass curve is not rational.
I've seen this false assumption more than once on sci.math over the years. But there are simple counterexamples, such as:
$(x + y) (x^2 + y^2) = z^2$
On defining $u = x/y$ and $v = z/y$ one obtains $y (u + 1) (u^2 + 1) = v^2$, and hence x, y, z as rational functions of u, v.
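The substitution can be sanity-checked with exact rational arithmetic (a quick script of mine, not part of the original answer):

```python
from fractions import Fraction as F
import random

def on_surface(u, v):
    """Check that the point (x, y, z) built from (u, v) satisfies (x+y)(x^2+y^2) = z^2."""
    y = v**2 / ((u + 1) * (u**2 + 1))  # from y(u+1)(u^2+1) = v^2
    x, z = u * y, v * y                # u = x/y, v = z/y
    return (x + y) * (x**2 + y**2) == z**2

random.seed(0)
assert all(on_surface(F(random.randint(1, 99)), F(random.randint(1, 99)))
           for _ in range(100))
```

Using `Fraction` keeps every comparison exact, so the check is an algebraic identity test rather than a floating-point approximation.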
I'd love to have a reference to a procedure for calculating the geometric genus and algebraic genus of surfaces like this, because they are rational if and only if both these quantities are zero, and for other cubic surfaces that interest me it would save a lot of fruitless hacking around trying to find a rational solution that probably doesn't exist! Are there any symbolic algebra packages that can do this?
I mean for example is $x y (x y + 1) (x + y) = z^2$ rational? I'm almost sure it isn't; but how can one be sure?
• Algebraic genus in software: Seems to be available in Singular and Maple and may be available in Mathematica by now (from mathematica.stackexchange.com/a/5453/16237 ). (Aside: I don't know anything about the software mentioned.) – Eric Towers Aug 13 '17 at 2:06
Here's one that I think will surprise many.
False belief. Let $E$ be an elliptic curve over an algebraically closed field $k$ of characteristic $p > 0$. Then $\operatorname{End}^\circ(E)$ is strictly larger than $\mathbb Q$.
While this is true for all elliptic curves defined over finite fields, most elliptic curves whose field of definition is transcendental over $\mathbb F_p$ have $\operatorname{End}^\circ(E) \cong \mathbb Q$. The extra endomorphism on elliptic curves over a finite field comes from the geometric Frobenius. For varieties over larger fields, this is not a thing.
Let $M \subset B(H)$ be a von Neumann algebra, $p \in B(H)$ a projection and $q=I-p$.
False belief: If $pM=Mp$ then $M=pMp \oplus qMq$.
(I think it is a quite common careless mistake)
Counter-example: diagonal embedding of $\mathbb{C}$ into $M_2(\mathbb{C})$.
False belief: << Let $M$ be the von Neumann algebra generated by a $\rm{C}^{\star}$-algebra $\mathcal{A}$. >>
The false belief is to think that the above sentence makes sense. In fact, a von Neumann algebra and a $\rm{C}^{\star}$-algebra don't have the same status. A von Neumann algebra is an operator algebra by definition, i.e. it is defined inside $B(H)$ for some separable Hilbert space $H$. Now, some subalgebras of $B(H)$ are (separable) $\rm{C}^{\star}$-algebras, but a $\rm{C}^{\star}$-algebra can also be defined abstractly. It can then be represented, and a given representation on $H$ (defined for example by the GNS construction for a given state), if it is faithful, induces an embedding in $B(H)$.
So to make sense, the sentence above should be modified as:
<< Let $M$ be the von Neumann algebra generated by $(\mathcal{A},\rho)$, a couple of $\rm{C}^{\star}$-algebra and state. >>
or
<< Let $M$ be the von Neumann algebra generated by a $\rm{C}^{\star}$-algebra $\mathcal{A}$ represented on $H$. >>
Then, $M = \pi_H(\mathcal{A})''$. We can use $M$ to characterize the representation $H$, for example, we can talk about a representation of type ${\rm I}$, ${\rm II}$ or ${\rm III}$ if $M$ is a von Neumann algebra of type ${\rm I}$, ${\rm II}$ or ${\rm III}$. There is a $\rm{C}^{\star}$-algebra with representations of every type, for example the Cuntz algebra.
Finally, there exists a universal representation for every $\rm{C}^{\star}$-algebra (i.e. the direct sum of the corresponding GNS representations of all states; it is faithful). The associated von Neumann algebra is called the enveloping von Neumann algebra (it can also be defined as the double dual); it contains all the operator-algebraic information about the given $\rm{C}^{\star}$-algebra.
• So there is no abstract version of the notion of a von Neumann algebra? Like, say, isomorphism classes of "usual" von Neumann algebras, or something like that? – მამუკა ჯიბლაძე Jan 12 '18 at 21:53
• @მამუკაჯიბლაძე: A von Neumann algebra can be defined abstractly as a (non-necessarily separable) $\rm{C}^{\star}$-algebra that have a predual; but it is not the usual definition, some authors call this abstract version a $\rm{W}^{\star}$-algebra, see the last paragraph of en.wikipedia.org/wiki/Von_Neumann_algebra#Definitions – Sebastien Palcoux Jan 12 '18 at 22:23
• @SebastienPalcoux If one takes this abstract definition (or something equivalent to it), how does one recognise the concrete von Neumann algebras, i.e. what condition on a continuous *-homomorphism $\rho: A\to B(H)$ is equivalent to $\rho(A)$ being a von-Neumann-algebra? I'd guess it is something like "$\rho$ is still continuous if one chooses certain other natural topologies on $A$ and $B(H)$ instead of the norm topologies". Is that the case? – Johannes Hahn Mar 16 '18 at 23:24
• @JohannesHahn: A $\rm{C}^{\star}$-algebra (resp. von Neumann algebra) can be defined concretely as a $\star$-subalgebra of $B(H)$ closed by the operator norm topology (resp. the weak operator topology). The problem in your question is that these topologies are operator topologies, and $A$ is abstract. You could be satisfied by the following paragraph on the predual. – Sebastien Palcoux Mar 17 '18 at 8:27
• @SebastienPalcoux That paragraph is part of the reason why I'm asking. I checked wikipedia first of course. I don't see if or how it answers my question. Since the predual is intrinsic, the ultraweak topology can be defined intrinsically as well. So it makes sense to say "$\rho$ is continuous w.r.t. the ultraweak topologies on $A$ and $B(H)$". That's what makes me think that a characterisation like what I'm asking is even possible. But I don't see if it's true. – Johannes Hahn Mar 17 '18 at 14:35
The derived subgroup of a finite group equals the set of all its commutators
or equivalently
A product of two commutators in a finite group is always a commutator
This mistake is very widespread, probably because counterexamples to it tend to be quite large. The smallest group, for which it is not true has order $$96$$.
• What is your evidence that this is a commonly held belief? – Geoff Robinson Aug 28 at 12:06
False belief: a subgroup isomorphic to a quotient is a retract.
Formally: Let $$H,N$$ be subgroups of $$G$$ with $$N$$ normal and $$H \simeq G/N$$, then $$H$$ is a retract of $$G$$.
It is false, because otherwise $$C_2$$ would be a retract of $$C_4$$, but it is not.
In fact, $$H$$ is a retract of $$G$$ if and only if $$G$$ is isomorphic to $$H \ltimes N$$ (semidirect product).
This false belief caused this post.
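For the $$C_2 \le C_4$$ example, one can brute-force all candidate retractions (my sketch, not from the post): a homomorphism $$r: \mathbb Z/4 \to \mathbb Z/4$$ is determined by $$t = r(1)$$, and no choice of $$t$$ lands inside the subgroup $$\{0,2\}$$ while fixing it.

```python
# Brute force: is {0, 2} (isomorphic to C_2) a retract of C_4 = Z/4?
# A homomorphism r: Z/4 -> Z/4 is determined by t = r(1).
retractions = [t for t in range(4)
               if {(t * k) % 4 for k in range(4)} <= {0, 2}  # image lies in the subgroup
               and (2 * t) % 4 == 2]                         # r fixes the element 2
assert retractions == []  # no retraction exists
```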
Let $R$ be a ring with identity $e$, and let $A, B\in R$ with $A\neq 0$ and $B$ invertible. If $A\cdot B = A$ then $B = e$.
• I think, it is closely related to the following false "deduction": because invertible element cannot be at the same time zero divisor, therefore sum of any unit and zero divisor is not invertible. Ok, maybe it isn't popular, but I've got this belief at my first algebra course, until I discovered counterexample $1+X$ in $R[X]/X^2$. This is almost exactly the thing you mentioned, just put $B:=X, A:=X+1$. – Przemek Nov 5 at 20:09
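A concrete counterexample to the false belief above, with $$2\times 2$$ integer matrices (my sketch): take $$A$$ a zero divisor and $$B$$ an involution.

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]   # A != 0, but a zero divisor
B = [[1, 0], [0, -1]]  # B * B = I, so B is invertible
I = [[1, 0], [0, 1]]

assert matmul(A, B) == A  # A·B = A ...
assert matmul(B, B) == I  # ... with B invertible ...
assert B != I             # ... and yet B != e
```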
This is more sort of a convention issue than an outright false belief (connected to the usual $$\emptyset$$ vs $$\{\emptyset\}$$ stuff), but I find it funny. I guess a fair share of mathematicians believe that: $$\bigcap\emptyset=\emptyset$$ while retaining the standard definition for intersection: $$\bigcap S:=\{x\ \text{such that}\ \forall Y(Y\in S\implies x\in Y)\}$$ according to which in fact: $$\bigcap\emptyset=V$$ where $$V$$ is the universal class. The condition in round brackets is of course vacuously true. So in a way - this is what I find funny - the former is the worst possible tentative solution of an equation ever.
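Incidentally, programming languages run into the same convention issue: there is no universal set to return, so Python's built-in sets simply refuse an empty intersection (a side note of mine, not from the answer):

```python
# Intersection over an empty family would have to be "everything",
# so Python's set.intersection demands at least one argument.
try:
    set.intersection(*[])
    empty_family_ok = True
except TypeError:
    empty_family_ok = False
assert not empty_family_ok

# A nonempty family behaves as the usual definition predicts.
assert set.intersection({1, 2, 3}, {2, 3}, {3, 4}) == {3}
```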
Way late to the party...
"$\mathrm{polymod}\ p$ and $\mathrm{mod}\ p$ are the same thing."
And it's cousin: "$\forall{x}, f(x) \cong g(x) \pmod{q} \implies f(x) = g(x)$"
• What does polymod mean? – darij grinberg Oct 20 '10 at 11:47
• Either the cousin needs a bit more detail if it is to be false, it is quite naive! – Mariano Suárez-Álvarez Oct 20 '10 at 18:25
• Probably I understand what this means: if $f(x)=0\pmod 2$ for all $x$, then $f=0$ over $\mathbb F_2$. This is similar to my second example: mathoverflow.net/questions/23478/… – zhoraster Oct 20 '10 at 18:33
• Consequently, there are only $4$ polynomials over $\mathbb F_2$ Isn't this convenient? :-) – zhoraster Oct 20 '10 at 18:40
• $\mathrm{polymod}$ is "polynomial mod". Two polynomials are congruent $\mathrm{polymod} p$ iff the coefficients each power of the variable are congruent $\pmod{p}$. The equivalence classes are sets of polynomials where each coefficient ranges over an equivalence class $\pmod{p}$. For the cousin, there are many local/globals but they all seem to require additional conditions (q.v. Hensel lifting). I think the set from which $x$ was chosen was left unspecified because this "imprecise mental abbreviation" pops up at various levels of sophistication each with a different such set. – Anonymous Oct 23 '10 at 15:22
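The distinction in the last comment is visible already over $$\mathbb F_2$$ (my example, not from the thread): $$x^2+x$$ is zero as a function mod 2, yet it is not congruent to the zero polynomial polymod 2.

```python
# f(x) = x^2 + x vanishes at every x mod 2 ...
assert all((x * x + x) % 2 == 0 for x in range(2))
# ... yet its coefficients (for x^0, x^1, x^2) are not all congruent to 0 mod 2,
# so f is not the zero polynomial "polymod" 2.
coeffs = [0, 1, 1]
assert any(c % 2 == 1 for c in coeffs)
```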
A common false assumption is that that two non-orthogonal pure states of a quantum mechanical system may never be unambiguously distinguished by a measurement. (See http://arxiv.org/pdf/quant-ph/9807022.pdf)
Another false belief is that a quantum computer is similar to an analogue computer, in that large computations will necessarily fail because of accumulated error. (See, for example, http://arxiv.org/abs/quant-ph/9712048)
For that matter, another common false belief is that Bell Inequalities aren't violated, although it is mostly held by people who have never heard of Bell Inequalities.
• I'm not sure how you can believe that something you have never heard of isn't violated. – Geoff Robinson Apr 10 '16 at 17:55
Another common mistake. If $W = _P(e_1,\ldots, e_{n})$ is a vector space and $V$ is a subspace of $W$ of dimension $k$, then $V = _P(e_{i_1},\ldots, e_{i_k})$.
• What does that little subscript $p$ on the equals sign mean? – Gerry Myerson Feb 11 '16 at 21:36
• $V$ is a vector space over field $P$. – Mikhail Goltvanitsa Feb 12 '16 at 6:52
• So, what does "$V$ is a vector space over field $P$ $(e_1,\dots,e_n)$" mean? – Gerry Myerson Feb 12 '16 at 8:37
• $W$ is a vector space over field $P$, $(e_1,\ldots, e_n)$ is a basis of $W$. $V$ is a subspace of $W$. – Mikhail Goltvanitsa Feb 12 '16 at 8:40
I don't know how common this is, but it occurs as a corollary of a theorem in the fine, and widely used, text by Shafarevich on algebraic geometry: namely, if $f \colon X \longrightarrow Y$ is a surjective algebraic map of varieties, then 1) for all $y \in Y$, the fiber over $y$ has dimension $\geq \dim(X)-\dim(Y)$; 2) on some non-empty open set in $Y$ the dimension of the fibers equals $\dim(X)-\dim(Y)$; 3) for all $r$, the set of $y \in Y$ such that the fiber over $y$ has dimension $\geq r$ is closed in $Y$.
The first two are true, but the third is false. Upper semicontinuity of fiber dimension is true on the source, not the target. For the conclusion as stated to hold, one can add properness to the hypothesis on the map. I think this is not at all widely believed by experts, but for some reason it persists in the text, hence may be believed by students.
Since I have myself written notes in which blatantly false statements occur, I do not think for a moment that Shafarevich himself believed this false statement. But such things do slip by, and may mislead beginners. In fact I believed it for some time until enlightened by a friend.
In keeping with the OP's desire to know the psychological reason for the error, it seems for some reason common in my experience for people to assume unconsciously that maps are proper.
## From Keith Devlin
"Multiplication is not the same as repeated addition", as put forward in Devlin's MAA column.
I'm not really sure how I feel about this one; I might be one of the unfortunate souls who are still prey to that delusion.
## Caution
In case you missed it, the column ended up spilling a lot of electronic ink (as evidenced in this follow-up column), so I don't believe it would be wise to start yet a new one on MO. Thanks in advance!
• I followed your link, and I cannot even tell what is wrong about attaching helium balloons to both sides of a balance to model substraction on both sides of an equation. – user11235 Apr 10 '11 at 20:32
• The more I think about this "error", the less I am convinced. It's like saying that you cannot say that $\binom n k$ is the number of $k$-element sets in an $n$-element set because then you will be unable to generalize to complex values of $n$. Or you cannot define the chromatic polynomial as the function counting the colourings and then plug in $-1$ to get the acyclic orientations of the graph. Also, I think it is perfectly understandable what it means to add something halfways. – user11235 Apr 10 '11 at 20:50
• It's not a "false belief". It's a false heuristic. And it's actually here: mathoverflow.net/questions/2358/most-harmful-heuristic – darij grinberg Apr 10 '11 at 21:17
• When I taught elementary teachers the course on arithmetic, they all had been taught that multiplication is repeated addition, but I myself thought it was the cardinality of the cartesian product. We enjoyed discussing this difference in point of view. – roy smith May 9 '11 at 3:06
• The "repeated addition" characterization has an advantage over the "cardinality of the Cartesian product" characterization (which possibly in some contexts could be considered a disadvantage). That is that it's not self-evident that it's commutative, and so one has a useful exercise for certain kinds of students: figure out why it's commutative. – Michael Hardy May 20 '11 at 2:28
For $p$ prime and the chain of embeddings $\mathbb{Z}/p\mathbb{Z} \hookrightarrow \mathbb{Z}/p^2\mathbb{Z} \hookrightarrow \cdots$ given by multiplication by $p$, then $\bigcup_n \mathbb{Z}/p^n\mathbb{Z}$ is not the group of $p$-adic integers $\mathbb{Z}_p$, but its Pontryagin dual, the Prüfer $p$-group $\mathbb{Z}(p^{\infty})$.
• Is that actually a common false belief? After all, $\mathbb{Z}_p$ is uncountable, as everyone realizes! – Todd Trimble Mar 5 '15 at 14:25
• "$\mathbb{Z}_p$ is countable" is also a false belief for people who didn't really read the definition of $\mathbb{Z}_p$, but I don't know how much it is common. – Sebastien Palcoux Mar 5 '15 at 14:34
• It's hard for me to believe it's at all common. I wasn't the downvoter, but I think it would be better if answers were rooted either in instances that can be found in the literature, or widely encountered in one's experience as an instructor. – Todd Trimble Mar 5 '15 at 14:52
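The difference is easy to see computationally (my sketch, not from the answer): every element of the direct limit $$\bigcup_n \mathbb{Z}/p^n\mathbb{Z}$$, realized as a fraction $$a/p^n$$ modulo 1, has finite order $$p^k$$, whereas $$\mathbb{Z}_p$$ is torsion-free and uncountable.

```python
from fractions import Fraction as F

def order(x):
    """Additive order of x in Q/Z (finite for every element of the Prüfer group)."""
    k, y = 1, x
    while y % 1 != 0:  # keep adding x until we land on 0 modulo 1
        y += x
        k += 1
    return k

p = 5
assert order(F(7, p**3)) == p**3  # gcd(7, 5) = 1, so 7/p^3 has order p^3
assert order(F(p, p**3)) == p**2  # 5/125 = 1/25 has order 25
```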
In algebraic topology, I thought for a while:
• "For $$k \geq 2$$, $$H_k$$ is the abelianization of $$\pi_k$$." False. True for $$k = 1$$. Also for all $$k$$ up to $$n-1$$ if the space is $$(n-1)$$-connected for $$n \geq 2$$ (vacuously, since this says the first $$n-1$$ homotopy groups are trivial and for these, the Hurewicz homomorphism is the isomorphism, $$\pi_k \cong H_k$$). See the Hurewicz theorem for more.
• "Generically, all the $$\pi_k$$ are nonabelian." False. For $$k \geq 2$$, $$\pi_k$$ is abelian.
Edit: I had a third error in thinking when I first posted this, mangling the above into something further from true. Which I suppose makes the first version of this post meta-appropriate for this thread (but I've fixed it anyway). Thankfully, user Michael gently pointed out my mangling.
• First bullet: did you mean "True for $n=1$"? – Michael Jan 15 '19 at 23:06
• @Michael : It's not always true for $n=1$, $\pi_1$ can be abelian, e.g. the fundamental group of the circle. For $n > 1$, $H_n \cong \pi_n$. It's easy to imagine "$\pi_n$s are (usually) nonabelian monsters and their associated homology groups are friendly abelian groups", but this difference *only* happens for $n=1$. – Eric Towers Jan 15 '19 at 23:31
• I think you are confusing a few things here. Compare $H_2$ of the 2-dimensional torus with its $\pi_2$, for example. – Michael Jan 15 '19 at 23:34
• @Michael : After actually looking up what I was talking about, I find that I have mashed together (at least) two errors to make another. Yay? – Eric Towers Jan 16 '19 at 4:42
• @Michael : I think I've disentangled my mangling. I may still have a fumble-thought in the first bullet that I'm just not seeing. – Eric Towers Jan 16 '19 at 5:11
Two common false beliefs:
1. Gödel's incompleteness theorem says that there are true statements which are not provable. This is certainly false, and I think it has created some spiritual pseudo-mathematical reflection!
I think the right way to imagine Gödel's incompleteness theorem is by analogy with linear independence or transcendence.
2. A limit in analysis is a special case of a limit in category theory.
This is not true, since $$2\mathbb{N}$$ is cofinal in $$\mathbb{N}$$.
(This broke my heart when I realized it!!!)
If there is another way to see limits in analysis as a special case of categorical limits, I will be very happy, so if you have one, please react!
• Regarding Godel incompleteness theorem: why you think it is false? The Godel theorem constructs a program $P$ such that $P$ does not halt (in reality) but $PA$ does not prove "P does not halt", so this is an unprovable statement that is true. Regarding your second question, see: mathoverflow.net/questions/9951/… – kp9r4d Sep 16 at 7:37
• @kp9r4d The point is that Gödel creates a statement $P$ for a theory $T$ with certain conditions such that $\{ P \} \cup T$ *and* $\{ \neg P \} \cup T$ are both consistent; this can be thought of as: $P$ is logically independent of the theory – Anonyme Sep 16 at 8:01
• But (if $PA$ is consistent) one of statement $P$ or $\neg P$ should be true in the standard model of arithmetic and the second one is false, right? And we can even say which one is true. – kp9r4d Sep 16 at 8:08
• @kp9r4d you're right, we need to be careful about the difference between true in general (relative to a theory $T$) and true relative to a particular model. The point is that with respect to the model $\mathbb{N}$ of arithmetic we can find a true statement which is not provable, but this statement is not true in general (in every model)! – Anonyme Sep 16 at 8:11
• Sure, but it seems to me to say "this is certainly false" is too harsh, usually when one talk about the falsity or truth of an arithmetic statement, one means exactly its falsity/truth in the standard model. Therefore, I do not see anything criminal in the statement "if $PA$ consistent, there is an arithmetic statement that is true but not provable" – kp9r4d Sep 16 at 8:17
I once very briefly thought that:
Given a vector space $V$ and a subspace $U \subset V$, the set difference $V-U$ is also a subspace.
I've heard this several times as a TA also.
• Why the downvote! I heard this from more than one student in introductory linear algebra classes and when marking. – Benjamin May 12 '15 at 22:21
• I think this falls under $(x+y)^2=x^2+y^2$, – Thomas Rot Aug 10 '15 at 12:48
• It always fails... But I don't think this is a common held belief. – Thomas Rot Aug 10 '15 at 21:40
• @ThomasRot But it always fails, while $(x+y)^2=x^2+y^2$ sometimes holds, especially in characteristic 2. – ACL Apr 21 '16 at 6:37
• I meant that $V-U$ cannot be a subspace since it doesn't contain 0. On the other hand, in any commutative ring where $1+1=0$, then the formula $(x+ y )^2=x^2+y^2$ holds. – ACL Apr 21 '16 at 10:02
I'm not sure how common it is but I've certainly been able to trick a few people into answering the following question wrong:
Given $n$ identical and independently distributed random variables, $X_k$, what is the limiting distribution of their sum, $S_n = \sum_{k=0}^{n-1} X_k$, as $n \to \infty$?
Most (?) people's answer is the Normal distribution when in actuality the sum is drawn from a Levy-stable distribution. I've cheated a little by making some extra assumptions on the random variables but I think the question is still valid.
• I don't understand your third paragraph. Are you saying that under the assumptions in the 2nd paragraph, the limiting distribution (rescaling if necessary) is always Levy-stable? – Yemon Choi Apr 12 '11 at 1:28
• @Yemon, Yes, this is what I was implying. Perhaps I was a little too cavalier? Certainly the sum of (well enough behaved) i.i.d. r.v.'s with power law tails converge to a Levy-Stable distribution... – dorkusmonkey Apr 12 '11 at 23:53
• Generally such a limiting distribution doesn't exist. Perhaps you need to divide your sum by the square root of $n$? – John Bentin Dec 29 '11 at 13:56
I had the false belief that recursive functions are always decidable in ZFC.
When I was a kid (8th grade), I solved a bunch of math problems in an exam using the "well-known identity" that $(x+y)^2=x^2+y^2$, which I was sure I had been taught the year before. It was of course way before I heard about characteristic two and I didn't get a good grade that day!
• Quoth the question, "The properties I'd most like from examples are that they are from reasonably advanced mathematics (so I'm less interested in very elementary false statements like $(x+y)^2=x^2+y^2$, even if they are widely believed)". – JBL Dec 1 '10 at 23:39
• Also, this is of course just a special case of the more general “law of universal linearity”, which iirc was mentioned in earlier answers… – Peter LeFanu Lumsdaine Dec 2 '10 at 0:40
I don't know if this is what you are looking for, but I keep hearing that "a differentiable function is one that is locally linear", not one whose local variation can be approximated linearly. No one stops to think about e.g. $x^2$, and the fact that its graph does not look like a line at any value of $x$.
• I would say this is more a heuristic than a false statement; as such, it would be more appropriate as an answer to mathoverflow.net/questions/2358/most-harmful-heuristic (although I do not think anyone interprets it the way you apparently do). – Qiaochu Yuan May 5 '10 at 4:53
• Yes, I did not read the question very carefully. I realize it is not a good comment, and, yes, it is more of a bad heuristic than anything else. – Herb May 25 '10 at 23:59
• it is also a comment on the imprecision of the words locally, infinitesimally,.... This once led Oort-Steenbrink to give some careful restatements of results previously called as "local Torelli theorems"... – roy smith Apr 14 '11 at 19:02
1. ## Pigeonhole Problem
Let n be a positive integer and let a1, a2, ..., a(n+1) be n+1 real numbers in the interval [0,1). Show that there exist two integers i, j with 1 ≤ i, j ≤ n+1 and i ≠ j such that |ai - aj| < 1/n.
Ok. I can come up with an example...ai = .25 and aj = .3 and n = 5, but I'm sure that it's asking for a more general answer...
Anyone have any ideas?
2. Express the interval $[0,1)$ as the union of $n$ subintervals.
$[0,1)=[0,1/n) \cup [1/n,2/n) \cup \dots \cup [(n-1)/n, 1)$.
Since there are $n+1$ of the $a_i's$, at least two of them must be in one of the $n$ intervals; i.e,
$\exists i,j$ s.t $i\not= j$ and $a_i,a_j \in [k/n, (k+1)/n)$, for $0\le k \le n-1$.
Therefore $|a_i-a_j| < 1/n$.
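For intuition, here is a quick numerical illustration of the statement (not a proof; the pigeonhole argument above is the proof). Among any $n+1$ values in $[0,1)$, some pair must differ by less than $1/n$, and it suffices to check adjacent values after sorting:

```python
import random

def has_close_pair(values, n):
    """True if two of the values differ by less than 1/n.
    Checking adjacent sorted values suffices, since the closest
    pair of a set is always adjacent after sorting."""
    s = sorted(values)
    return any(b - a < 1.0 / n for a, b in zip(s, s[1:]))

# Any n+1 random points in [0,1) always contain such a pair.
random.seed(0)
for n in range(1, 100):
    sample = [random.random() for _ in range(n + 1)]
    assert has_close_pair(sample, n)
```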
3. Originally Posted by Black
Express the interval $[0,1)$ as the union of $n$ subintervals.
$[0,1)=[0,1/n) \cup [1/n,2/n) \cup \dots \cup [(n-1)/n, 1)$.
Since there are $n+1$ of the $a_i's$, at least two of them must be in one of the $n$ intervals; i.e,
$\exists i,j$ s.t $i\not= j$ and $a_i,a_j \in [k/n, (k+1)/n)$, for $0\le k \le n-1$.
Therefore $|a_i-a_j| < 1/n$.
hey, can you explain that in a little easier language if there is any!!!
thanksss
https://community.fabric.microsoft.com/t5/DAX-Commands-and-Tips/Calcular-total-de-faixas-acima-de-12h-acima-de-24h-acimas-de-4h/td-p/2681955
Helper I
## Calculate the totals for each time band (above 12h, above 24h, above 4h, above 7 days, up to 4h) divided by the period total
How do I calculate the total of each time band (above 12h, above 24h, above 4h, above 7 days, up to 4h) divided by the total of each period, to obtain the percentage?
3 ACCEPTED SOLUTIONS
Super User
@Victor1986 , You want to get month total of say measure M1
then
calculate([m1], filter(allselected(Date), Date[Period] = max(Date[Period]) ) )
or
calculate([m1], filter(allselected(Date[Period] ), Date[Period] = max(Date[Period]) ) )
Helper I
Good morning @amitchandak
I need to calculate the "FAIXAS DE TEMPO" (time bands) column, which contains the values below in the table (above 12h, above 24h, above 4h, and so on), divided by the TOTAL of each period.
Can you help me?
Frequent Visitor
Hi @Victor1986 ,
Please try following steps:
1. Is your data shaped as shown in picture 1? If so, you may need to transform it from the shape in picture 1 to the shape in picture 2.
picture 1
picture 2
2.you need to create a measure called total.
``total = SUMX('table2','table2'[column]+'table2'[ATE 4h]+'table2'[ACIMA DE 7Dias]+'table2'[ACIMA DE 4h]+'table2'[ACIMA DE 24h]+'table2'[ACIMA DE 12h]) ``
3.Create the following measure to calculate the percentage for each column.
``````ACIMA DE 12h% =
VAR ACIMADE12hsum = SUM('table2'[ACIMA DE 12h])
VAR total = [total]
VAR column_percentage = DIVIDE(ACIMADE12hsum,total)
RETURN
column_percentage
ACIMA DE 24h% =
VAR ACIMADE24hsum = SUM('table2'[ACIMA DE 24h])
VAR total = [total]
VAR column_percentage = DIVIDE(ACIMADE24hsum,total)
RETURN
column_percentage
ACIMA DE 4h% =
VAR ACIMADE4hsum = SUM('table2'[ACIMA DE 4h])
VAR total = [total]
VAR column_percentage = DIVIDE(ACIMADE4hsum,total)
RETURN
column_percentage
ACIMA DE 7Dias% =
VAR ACIMADE7Diassum = SUM('table2'[ACIMA DE 7Dias])
VAR total = [total]
VAR column_percentage = DIVIDE(ACIMADE7Diassum,total)
RETURN
column_percentage
ATE 4h% =
VAR ATE4hsum = SUM('table2'[ATE 4h])
VAR total = [total]
VAR column_percentage = DIVIDE(ATE4hsum,total)
RETURN
column_percentage
column% =
VAR columnsum = SUM('table2'[column])
VAR total = [total]
VAR column_percentage = DIVIDE(columnsum,total)
RETURN
column_percentage``````
4.In report page, you can get result you want.
Best Regards,
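Outside Power BI, the row-percentage logic that the measures above implement can be sketched in a few lines of plain Python (the column names and counts here are hypothetical, purely to show the arithmetic of summing the bands and applying DIVIDE per band):

```python
# One record per period, with a count per time band (hypothetical data).
rows = [
    {"period": "2023-01", "ATE 4h": 40, "ACIMA DE 4h": 25,
     "ACIMA DE 12h": 20, "ACIMA DE 24h": 10, "ACIMA DE 7Dias": 5},
]

for row in rows:
    bands = {k: v for k, v in row.items() if k != "period"}
    total = sum(bands.values())                      # the [total] measure
    pct = {k: v / total for k, v in bands.items()}   # DIVIDE(band, total)
    print(row["period"], {k: f"{v:.0%}" for k, v in pct.items()})
```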
https://www.expertsmind.com/library/calculate-and-evaluate-the-firm-sustainable-growth-rate-5742681.aspx
Assignment Help Finance Basics
##### Reference no: EM13742681
Part 1
It is now time to work on your final draft, summarizing the findings and analysis that you conducted over the past few weeks. Your report should include the following:
• A comprehensive summary of the firm that you chose to study and your initial assessment you conducted in Phase 1 before you conducted your formal financial analysis.
• A summary of your findings after you conducted your financial ratio analysis in Phase 2, including any major concerns or findings that you encountered while conducting the analysis.
• A summary of your findings after you prepared the firm's pro forma statements and what you learned about the firm while performing this task.
Part 2
Using the company's financial statements, calculate and evaluate the firm's sustainable growth rate (SGR) for the last 2-3 years, and summarize your findings in your paper. Be sure to address the following:
• What are the sustainable growth rates for your subject company over the period that you studied?
• How do they compare with the actual growth rates that the company experienced over the period studied?
• What are the consequences faced by firms that grow at a rate that is not consistent with their sustainable rate?
• If the firm grew at a rate above or below the SGR, how did it finance its excessive growth or reward its stockholders for the underperformance?
• Based on your analysis, do you believe the firm's growth strategy is sound and maximizes the value of the firm with reasonable levels of risk?
Be sure to document your statements with credible sources, in-text citations, and references using proper APA format.
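As a rough sketch of the SGR arithmetic, here is one common formulation (Higgins' sustainable growth rate) in Python. The figures below are entirely hypothetical; substitute the numbers from your subject company's financial statements:

```python
def sustainable_growth_rate(net_income, dividends, beginning_equity):
    """Higgins' SGR: the growth a firm can sustain without issuing
    new equity or increasing its leverage.
    SGR = (ROE * b) / (1 - ROE * b), where b is the retention ratio."""
    roe = net_income / beginning_equity
    retention = 1 - dividends / net_income
    return (roe * retention) / (1 - roe * retention)

# Hypothetical figures: net income 120, dividends 40, beginning equity 800.
sgr = sustainable_growth_rate(120.0, 40.0, 800.0)
print(f"SGR = {sgr:.2%}")  # SGR = 11.11%
```

Comparing this value with the firm's actual revenue growth over the same years is exactly the comparison the assignment asks for.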
https://forum.allaboutcircuits.com/threads/jog-start-stop-using-only-basic-logic-circuits.9535/
Discussion in 'The Projects Forum' started by Real, Feb 17, 2008.
1. ### Real Thread Starter New Member
Here is the condition:
o I can turn on and off the light using two momentary switch one is for "on" and the other is for "off".
o The other condition is the jog. Either the light is on or off still I can jog using the jog switch (the 3rd momentary switch).
Pls. help me with this problem.
2. ### hgmjr Retired Moderator
Just as a clarification, does jog mean to toggle the light alternately on and off?
hgmjr
3. ### James Sonroll New Member
The condition is press for on and release for off. My problem is that doing this with basic logic gates seems impossible, but my instructor told me there is still a solution.
My project uses only one lamp (or LED).
My problem is the jog button: when I press jog, the light toggles because of the feedback that I route back into the OR gate.
My circuit is this:
From input 1 (start), an OR gate feeds the first pin of an AND gate, then the load (Q1), with feedback from the load back to the other pin of the OR gate.
From input 2 (stop), an inverter feeds the AND gate, then the load (Q1).
From input 3 (jog), an OR gate connects to the load directly.
4. ### beenthere Retired Moderator
If the condition is press on and release off, then a momentary normally-open switch will satisfy it.
If your first post is accurate, then a flip-flop will do. One button sets, one resets, and the other clocks it for the jog.
5. ### Søren Senior Member
Hi,
I'm not too sure about what you term "jog", but if you mean toggling the state of the output while pressed, this should fit the bill:
Each of the inputs should be switched to "Common" (ground/0V) through a momentary action (push button) switch.
2 NAND gates and one EXNOR (which ought to be called NEXOR's really, to comply with the general naming convention) is used, the rest are just tied to one of the power rails (whichever is the most practical).
If that's not what you want, please describe your apprehension of the term "jog" which I only know as "to 'run' through several options" or as a jog-wheel on a VHS-recorder (or even the really weird physical activity performed by overweight, overbored and overly optimistic people who can't let go of the eighties )
In short: How should the output behave when pressed and when released when the output is respective "1" and "0" before the "jog" press.
6. ### Secret Mover New Member
Hello, I (Secret Mover) am the same person as the Real and James Sonroll accounts... I created a new account because I forgot my password...
For my project, I'm very thankful that you are helping me solve this problem...
The toggle that I mean is when press is on and off if release.
The start (1st button) and stop (2nd button) momentary buttons are easy to implement, but the jog (3rd button) is hard to incorporate because there is only one light for all 3 momentary buttons.
I hope you can do it!
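To check the intended behaviour, here is the usual latch-with-jog gate logic written out and simulated in Python (a sketch of the logic only, not of any particular IC). The start/stop latch is (start OR latch) AND NOT stop, matching the seal-in circuit described above, and the lamp is latch OR jog, so jog lights the lamp only while held and never disturbs the latch:

```python
def step(start, stop, jog, latched):
    """One evaluation of the gate network (True = button pressed)."""
    latched = (start or latched) and not stop  # seal-in latch
    lamp = latched or jog                      # jog forces the lamp on
    return lamp, latched

latched, trace = False, []
presses = [(1, 0, 0), (0, 0, 0), (0, 0, 1), (0, 0, 0),
           (0, 1, 0), (0, 0, 1), (0, 0, 0)]
for start, stop, jog in presses:
    lamp, latched = step(bool(start), bool(stop), bool(jog), latched)
    trace.append(lamp)

# Start latches the lamp on; stop releases it; jog is on only while held.
print(trace)  # [True, True, True, True, False, True, False]
```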
7. ### Dave Retired Moderator
Can I ask that you stop creating multiple accounts. If you lose your password you can reset it by reading the details here: http://forum.allaboutcircuits.com/faq.php?faq=vb_user_maintain#faq_vb_lost_password
Thank you.
Dave
8. ### Secret Mover New Member
Dave (administrator) can you erase my account James Sonroll, Secret Mover2, and Real?
https://www.tutorialcup.com/interview/dynamic-programming/moser-de-bruijn-sequence.htm
# Moser-de Bruijn Sequence
Difficulty Level Easy
In this problem, you are given an integer input n. Now you need to print the first n elements of the Moser-de Bruijn Sequence.
## Example
`7`
`0, 1, 4, 5, 16, 17, 20`
Explanation
The output sequence has the first seven elements of the Moser-de Bruijn Sequence. Thus the output is absolutely correct.
## Approach
In number theory, the Moser–de Bruijn sequence is an integer sequence named after Leo Moser and Nicolaas Govert de Bruijn, consisting of the sums of distinct powers of 4. So that means it will contain all the numbers which can be represented using distinct powers of 4.
We can also characterize the numbers that make up the Moser-de Bruijn Sequence in a slightly different manner: a number belongs to the sequence exactly when its representation in the base-4 number system contains only the digits 0 and 1. Base-4 representations in general may use the digits 0, 1, 2, and 3; membership in the sequence requires that the digits 2 and 3 never appear, which is just another way of saying the number is a sum of distinct powers of 4. So now we are familiar with what type of numbers form the sequence. But how do we generate such numbers?
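Before turning to generation, that membership test is easy to check directly: a number is in the sequence exactly when none of its base-4 digits is 2 or 3. A small Python sketch (the function name is my own):

```python
def in_moser_de_bruijn(m):
    """True iff m is a sum of distinct powers of 4,
    i.e. every base-4 digit of m is 0 or 1."""
    while m:
        if m % 4 > 1:   # a base-4 digit of 2 or 3
            return False
        m //= 4
    return True

print([m for m in range(25) if in_moser_de_bruijn(m)])
# [0, 1, 4, 5, 16, 17, 20, 21]
```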
One simple way is to make use of the recurrence formula that generates the numbers of the sequence. But there's a catch.

The recurrence relation

S(0) = 0, S(2n) = 4·S(n), S(2n+1) = 4·S(n) + 1

The base case is n = 0, with S(0) = 0. Now, if we simply use the recurrence relation as-is, we'll be calculating some values more than once. That only adds to the time complexity. To improve our algorithm, we store these values, which removes the repeated computations. This technique, where we store data that will be reused later during the computation, is generally referred to as Dynamic Programming. Check out the basics of dynamic programming here.
## Code
### C++ code to generate Moser-de Bruijn Sequence
```#include <bits/stdc++.h>
using namespace std;
int main(){
// number of elements to be generated
int n;cin>>n;
if(n >= 1)cout<<0<<" ";
if(n >= 2)cout<<1<<" ";
if(n >= 3) {
    vector<int> dp(n); // use a vector instead of a non-standard variable-length array
dp[0] = 0,dp[1] = 1;
for(int i=2; i<n;i++){
if(i%2 == 0)
dp[i] = 4*dp[i/2];
else
dp[i] = 4*dp[i/2] + 1;
cout<<dp[i]<<" ";
}
}
}```
`6`
`0 1 4 5 16 17`
### Java code to generate Moser-de Bruijn Sequence
```import java.util.*;
class Main{
public static void main(String[] args){
Scanner sc = new Scanner(System.in);
// number of elements to be generated
int n = sc.nextInt();
if(n >= 1)System.out.print("0 ");
if(n >= 2)System.out.print("1 ");
if(n >= 3) {
int dp[] = new int[n];
dp[0] = 0;dp[1] = 1;
for(int i=2; i<n;i++){
if(i%2 == 0)
dp[i] = 4*dp[i/2];
else
dp[i] = 4*dp[i/2] + 1;
System.out.print(dp[i]+" ");
}
}
}
}```
`6`
`0 1 4 5 16 17`
## Complexity Analysis
### Time Complexity
O(N): each term is computed once, in constant time, from an already-stored smaller term, and is then reused for free. Generating the first N terms therefore takes linear time.
### Space Complexity
O(N), because we have created a new DP array which is dependent on the input. The space complexity for the problem is linear.
https://www.airmilescalculator.com/distance/ybr-to-yqr/
Flight distance from Brandon to Regina (Brandon Municipal Airport – Regina International Airport) is 212 miles / 342 kilometers / 184 nautical miles. Estimated flight time is 54 minutes.
Driving distance from Brandon (YBR) to Regina (YQR) is 232 miles / 373 kilometers and travel time by car is about 4 hours 11 minutes.
## Map of flight path and driving directions from Brandon to Regina.
Shortest flight path between Brandon Municipal Airport (YBR) and Regina International Airport (YQR).
## How far is Regina from Brandon?
There are several ways to calculate distances between Brandon and Regina. Here are two common methods:
Vincenty's formula (applied above)
• 212.314 miles
• 341.687 kilometers
• 184.496 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth’s surface, using an ellipsoidal model of the earth.
Haversine formula
• 211.675 miles
• 340.658 kilometers
• 183.940 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
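The haversine computation itself is only a few lines. A sketch in Python, assuming a spherical Earth with mean radius 6371 km and using the airport coordinates below converted to decimal degrees (small differences from the figures quoted above come from coordinate rounding and the choice of radius):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two points (haversine formula)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * radius_km * asin(sqrt(a))

# YBR (49°54′36″N, 99°57′6″W) to YQR (50°25′54″N, 104°39′57″W)
d = haversine_km(49.91, -99.951667, 50.431667, -104.665833)
print(round(d))  # ≈ 341 km, matching the kilometre figure above
```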
## Airport information
A Brandon Municipal Airport
City: Brandon
IATA Code: YBR
ICAO Code: CYBR
Coordinates: 49°54′36″N, 99°57′6″W
B Regina International Airport
City: Regina
IATA Code: YQR
ICAO Code: CYQR
Coordinates: 50°25′54″N, 104°39′57″W
## Time difference and current local times
The time difference between Brandon and Regina is 1 hour. Regina is 1 hour behind Brandon.
CDT
CST
## Carbon dioxide emissions
Estimated CO2 emissions per passenger is 56 kg (124 pounds).
http://www.jiskha.com/display.cgi?id=1358523189
Which of the following would have a horizontal asymptote of y = b/a? Which one has a horizontal asymptote of y = 0?
( bx^3 - x^2 + 3 ) / ( ax^2 - 2 )
( bx^3 - x^2 + 3 ) / ( ax^3 - 2 )
( bx^3 - x^2 + 3 ) / ( ax^4 - 2 )
( bx^2 / a ) - 6
• Calculus -
b/a is the same as bx^n/ax^n, so that would be
( bx^3 - x^2 + 3 ) / ( ax^3 - 2 )
as x gets large, the lower powers become insignificant, so you just have to worry about the highest power of top and bottom.
If the top has lower power than bottom, the asymptote is always y=0. That would be the 3rd function
If the top has a power one more than the bottom, there will be a slant asymptote. The function is basically bx/a + c
• Calculus -
Okay, I think I got it... So then, of these, the limit as x approaches infinity does not exist for the second one?
• Calculus -
ummm. The second one is the answer to the question posed. The limit is b/a.
The limit does not exist when the top power is greater than the bottom power. That would be the first one. The limit is bx/a, which grows without limit as x grows. The 4th one also grows unbounded.
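A quick numerical sanity check of the three cases, evaluating each ratio at a large x. The coefficients a = 2, b = 3 are arbitrary sample values:

```python
a, b = 2.0, 3.0
x = 1e6  # a "large" value of x

r1 = (b*x**3 - x**2 + 3) / (a*x**2 - 2)  # top degree 3, bottom 2: unbounded
r2 = (b*x**3 - x**2 + 3) / (a*x**3 - 2)  # equal degrees: tends to b/a
r3 = (b*x**3 - x**2 + 3) / (a*x**4 - 2)  # top degree 3, bottom 4: tends to 0

print(r2)  # ≈ 1.5, which is b/a
```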
http://swagger-media.com/split-song-jkod/basic-statistics-formulas-d60a6c
In the upcoming paragraphs, we will discuss several statistical formulae that are used for different purposes. Class Interval Arithmetic Mean. Median = [(n/2) term + ((n/2) + 1)] /2 ; where n is the even number. So, a “statistic” is nothing but some numerical value to that can describe certain property of your data set. Before calculating the regression line, you need five summary statistics: The standard deviation of the x values (denoted sx), The standard deviation of the y values (denoted sy), The correlation between X and Y (denoted r), So, to calculate the best-fit regression line, you. You just can’t get away from them, when you are studying statistics. n N f sum w weight. Guide to Basic Formulas in Excel. σ population standard deviation. is an unknown value that you need, you may have to do a pilot study (small experimental study) to come up with a guess for the value of the standard deviation. Freeman & Co, 5th edition. For an odd amount of numbers, choose the one that falls exactly in the middle. The proportion of a given category, denoted by p, is the frequency divided by the total sample size. Here are ten statistical formulas you’ll use frequently and the steps for calculating them. σ2 population variance. Here are some formulas that make your life easy in Python. Oriana calculates a variety of single sample statistics as well as inter-sample comparisons. Each formula is linked to a web page that describe how to use the 8 CHAPTER 2. Mastering the basic Excel formulas is critical for beginners to become highly proficient in financial analysis Financial Analyst Job Description The financial analyst job description below gives a typical example of all the skills, education, and experience required to be hired for an analyst job at a bank, institution, or corporation. It doesn’t measure any other type of relationship, and it doesn’t apply to categorical variables. 
David Unger is a faculty member in the Department of Statistics at the University of Illinois at Urbana-Champaign. Median Statistics Formula. of the sets, then the two central values can be used to calculate the as the median. List of Basic Statistics Formulas. Chapter 2. highest value - lowest value Class Width = (increase to next integer) number classes upper limit + lower limit Class Midpoint = 2. Therefore, the standard deviation is 7.92. The standard deviation of a sample is a measure of the amount of variability in the sample. Grouped Data Arithmetic Mean. Where n=5, therefore (5+1)/2 = 3,which means 3rd term is the median of the data set. The basic statistics include the circular mean, length of the mean vector, circular standard deviation and 95% and 99% confidence limits. Applied Statistics A subject called applied statistics is one branch of statistics that involves formulas for simple random sampling. A commonly used measure of the center of a batch of numbers. It is the square root of the variance of the given information. sample mean. +xn n = 1 n! STATISTICS – is a branch of mathematics that deals with the collection, organization, presentation, analyzation and interpretation of numerical data. But there are several students who get frustrated by all these types; this is because of two reasons. DESCRIPTIONS OF STATISTICS FORMULAS MEAN: The mean, symbolized by x-bar, equals one divided by the number of samples multiplied by the sum of all data points, symbolized by x-sub-i. Probability: the basics. Basic Probability Formulas . Offered by University of Amsterdam. Trust us... this will be super fun. On the other side, if a particular set contains even no. Piece together the results from Steps 1 and 2 to give you the regression line: y = mx + b. Deborah J. Rumsey, PhD, is a professor of statistics and the director of the Mathematics and Statistics Learning Center at the Ohio State University. Mastering statistics for data science is no exception. 
The mean is also called the average. You can easily understand the whole concept of calculating the mean. It is used for calculating the deviation of a data set by its mean value. Statistical Features. of items. Statistics Formulas - Basic Statistics Formulas Statistics as a whole is a set of concepts, rules and procedures that help us to analyze the given data. The mean, or the average of a data set, is one way to measure the center of a numerical data set. Divide by n, the number of individuals in the sample. For an even amount of numbers, take the two numbers exactly in the middle and average them to find the median. If you come in at the 90th percentile, for example, 90 percent of the test scores of all students are the same as or below yours (and 10 percent are above yours). Special math constant. If you do not have formal math training, you'll find this approach much more intuitive than trying to decipher complicated formulas. Divide by n, the number of individuals in the sample. Formula Sheet and List of Symbols, Basic Statistical Inference. The median is the middle value after you order the data from smallest to largest. For each (x, y) pair in the data set, take x minus. Count up all the individuals in the sample who fall into the specified category. 0 ≤ P (A) ≤ 1 Rule of Complementary Events. From a high-level view, statistics is the use of mathematics to perform technical analysis of data. It has several applications in the advanced concepts of mathematics and statistics. But it's really not convenient especially if the number of cells change ofter. 2. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. With the help of data collection, data analysis, and data summarization is easy. Statistics - Formulas - Following is the list of statistics formulas used in the Tutorialspoint statistics tutorials. Statistical Formulas Description; ANOVA. 
With the help of data collection, data analysis, and data summarization is easy. Divide by n, the number of values in the data set. Round any fractional amount up to the nearest integer (so you achieve your desired MOE or better). Each formula is linked to a web page that describe how to use the Recent Articles. You can use this list as a go-to sheet whenever you need any mathematics formula. 7. Calculating basic statistics formulas in Python Data science needs some measures to explain the data well. Population Mean The term population mean, which is the parameter of a given population, is represented by:? Mean deviation = M.D. Determine the confidence level and find the appropriate z*. Basic math test. It is the sum of all observations divided by the number of (nonmissing) observations. Descriptive Statistics - used to describe the basic features of data in a study. What are The Uses of Excel in Our Daily Life? Apr 13, 2020 - The complete list of statistics & probability functions basic formulas cheat sheet for PDF download. Find the standard deviation of all the x values and call it sx. Google Classroom Facebook Twitter. Statistical features is probably the most used statistics concept in data science. The median of a numerical data set is another way to measure the center. The formula for correlation is, Find the mean of all the x values and call it, Find the mean of all the y values and call it. p population proportion. Find the median of the data 10,20,30,40,50. When to use. Variance = ${\sigma ^2} = \frac{{{{\sum {\left( {{x_i} - \bar x} \right)} }^2}}}{N}$ 5. Basic theoretical probability. Where is variance; x = given items; x bar = mean; and n = total no of itmes. The average is easy to understand. May 27, 2020 - The complete list of statistics & probability functions basic formulas cheat sheet for PDF download. Formulas — you just can’t get away from them when you’re studying statistics. 
Now, the median is calculated by Median = [(n/2) term + ((n/2) + 1)] /2 ; therefore. Sample mean = x = ( Σ xi ) / n 2. Statistical methods are used by various organizations and governments to calculate a collaborative property about employees or people; such properties then influence the decisions taken by the organizations and governments. There are several kinds of distribution in statistics, and each book has listed them with their properties. Instead, a continuous variable has to be referenced as a formula as the variables could be infinite. Mean Statistics Formula $\large bar{x}=\frac{\sum x}{n}$ Where, x = Items given n = Total number of items. But before we are going to learn these formulas. Standard Deviation. Where S = Standard deviation and is the square root of the variance. Understanding Formulas for Common Statistics. Divide by the desired margin of error, MOE. = $\frac{{\sum {\left| {{x_i} - M} \right|} }}{N}$ (from average deviation) 4. Let’s start with the introduction to statistics. These basic formulas of statistics & probability functions help users, learners, teachers, professionals or researchers to analyze, model, design & test various statistical surveys & experiments. α significance level How to Understand and Use Basic Statistics. Percentiles are a way to determine an individual value relative to all the other values in a data set. P(A∪B) = P(A) + P(B) - P(A∩B). In this article, you will formulas from all the Maths subjects like Algebra, Calculus, Geometry, and more. Intro to theoretical probability. Statistics Formulas & Basic Concepts Statistic formulas for class 8th, 9th, 10th, Calculation of mean, mode, median, mean deviation about mean & median, standard deviation, variance. The most important is to understand the purpose of each one of these functions. Chapter 3. sample size population size frequency. 
Therefore, it must be a positive value, and it is also used to measure the value of the standard deviation, which is considered as the essential concept of the statistics values. Basic statistics and common formulas for Six Sigma projects. The symbol ‘? In fact, we're going to tackle key statistical concepts by programming them with code! 3. s2 sample variance. where z* is the standard normal value for the confidence level you want. The ability of the mean is used to show the overall dataset with a single value. Therefore, the sigma (variance) can be calculated as [(10)^2 + (5)^2 + (-6)^2 + (3)^2 + (12)^2]/5. The term ‘Σ (X i – μ) 2 ’ used in the statistical formula represents the sum of the squared deviations of the scores from their population mean. The methods of statistics are generated to examine the large data and their properties. Probability: the basics. 1. Which means 2nd and 3rd term will be used for median i.e. The formula for the standard deviation is. Elementary statistics formulas | Statistics formulas such as mean median mode, variance, and standard deviation formulas are given here in accordance with the number of observations Determining basic statistics about the values that are in a range of data. The Excel Statistical functions are all listed in the tables below, grouped into categories, to help you to … Understanding statistics is essential to understand research in the social and behavioral sciences. Send me an email here and ask me any questions you want about these basic math formulas. DESCRIPTIONS OF STATISTICS FORMULAS MEAN: The mean, symbolized by x-bar, equals one divided by the number of samples multiplied by the sum of all data points, symbolized by x-sub-i. 
Formula npr n permutation total number of objects number of objects taken at a time getcalc Formula Y = a + bx y linear regression line a y-intercept b slope of regression line Ex sum of x values sum of y values —Þ sum of squard x values —Þ sum of xy products (Ex) —Þ sum of x values squared getcalc In other words, Statistics help us to • Organize numerical information in the form of tables, graphs, and charts. Basic statistical functions including COUNT, COUNTA, AVERAGE, MAX, MIN, MEDIAN and MODE. That is why statistics are combined with classifying, presenting, collecting, and arranging the numerical information in some manner. Some variables are categorical and identify which category or group an individual belongs to. Even then, you face any difficulty regarding the statistics assignments help; then, you can connect our team or our customer executive support to get the help. There is a list of basic statistics formulas whose values are related to statistical concepts or analyses. As 3 is repeated 4 times; therefore, the mode of the data is 3. After data has been collected, the first step in analyzing it is to crunch out some descriptive statistics to get a feeling for the data. Basic Excel Formulas Guide. of items. Basic Statistics. When taking a standardized test, you get an individual raw score and a percentile. Basic Statistical Formulas. The term ‘sqrt’ used in this statistical formula denotes square root. N – the total number of cases or individuals in the population. The formula for the margin of error for, dealing with samples of size 30 or more, is. Statistics deals with the analysis of data; statistical methods are developed to analyze large volumes of data and their properties. Correlation. Oct 13, 2014 - statistics symbols | Basic Statistics Formula Sheet Maths Formulas | Basic Mathematics Formulas for CBSE Class 5 to 12. Frequently Used Statistics Formulas and Tables. Maths Formulas can be difficult to memorize. 
For all the statistical computations, the basic concept and formulas of mean, mode, standard deviation, median, and variance are the stepping stones. Count up all the individuals in the sample who fall into the specified category. Rule of Addition. A basic visualisation such as a bar chart might give you some high-level information, but with statistics we get to operate on the data in a much more information-driven and targeted way. Statistics deals with the analysis of data; statistical methods are developed to analyze large volumes of data and their properties. You can achieve destructive statistics from the … Statistics formulas are used by several companies to calculate the report of the people or employees. Measure of Dispersion Excel provides an extensive range of Statistical Functions, that perform calculations from basic mean, median & mode to the more complex statistical distribution and probability tests. Standard deviation is the variability within a data set around the mean value. There are also a. Just giving a basic number of the percentage of students is an example of a descriptive statistic. In other words, Statistics help us to • Organize numerical information in the form of tables, graphs, and charts. (This is the same as multiplying by one over n – 1.). Explore what probability means and why it's useful. Sample variance = s2 = Σ ( xi - x )2 / ( n - 1 ) 4. With the help of statistics, we are able to find measures of central values which gives a rough idea about where data points are centered. You can think of it, in general terms, as the average distance from the mean. Crash Course on Basic Statistics Marina Wahl, marina.w4hl@gmail.com University of New York at Stony Brook November 6, 2013. Statistic Formulas. Basic Statistics. xi • Standard deviation (use a calculator): s = 1 n−1 (xi −x)2• Median: Arrange all observations from smallest to largest. 
Some commonly used statistical formulas that you can apply on the series points are exposed via static methods in the BasicStatisticalFormulas type. Unless otherwise noted, these formulas assume simple random sampling. These are the basic statistics formula to calculate the median of the given data. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equation and measure-theoretic probability theory. A proportion, or relative frequency, represents the percentage of individuals that falls into each category. It helps us to make sense of all the raw data by systematic organisation and interpretation. Introduction to the Basic Practice of Statistics. It can be defined as a function of the given data. Instructions COUNT vs COUNTA An Introduction to Basic Statistics and Probability – p. 10/40. Convert the original value to a standard score by using the z-formula, is the population mean of all values, and. Learn statistical formulas you’ll use frequently and the steps for calculating them here at Vedantu.com Variance of sample proportion = sp2 = pq / (n - 1) 5. To calculate the median, you have to arrange the components of the set in increasing order; only then you can find the median of the data. The margin of error for your sample mean, is the amount you expect the sample mean to vary from sample to sample. Applied statistics is essential to understand the purpose of each one of these functions an number... ≤ p ( A∩B ) of two reasons, if a particular contains! Calculated by writing the data set of sample proportion = p = ( p1 * n1 + p2 n2... Sample mean, is the variability within a data set. ) presenting, collecting and. And their properties a powerful tool when performing the art of data in range! With the analysis of the concepts and applications of probability with a single value concept of ;. Group of statistical symbols used to show the overall dataset with a single data each formula is linked a! 
Are generated to examine the large data and the steps for calculating them member in the mean. Analyzation and interpretation or we can calculate the standard deviation that involves formulas for random. To learn these formulas Following steps: order the data set around the mean value to find median! Divide your result from Step 2 by the number of individuals in the sample who fall into the specified.... One branch of mathematics that is why we have created basic statistics formulas huge list of maths formulas | basic mathematics for! Overall dataset with a single data of Complementary Events is another way to determine an individual belongs.. You need any mathematics formula median i.e a way to measure the center of branches. Min, median and mode the main values make a statistical statement of data! The summary of the data as the group of statistical symbols used to calculate report! Mean, or the average from it the value that is why we have calculated variance. The methods of statistics & probability functions basic formulas cheat sheet for PDF download a... Here we will discuss popular formulas and what they stand for email here and ask any. Average ( or “ mean ” ) value, and each book has listed them code. And measure-theoretic probability theory have calculated the variance of the mean n the symbol ‘? ’ represents population. Mean = x = ( sum of all observations divided by the standard normal value the! Better ) symbol ‘? ’ represents the percentage of students is an example of a data around... Not convenient especially if the number of individuals in the above example we. Statistics formula to calculate the median is the amount of variability in the BasicStatisticalFormulas type 0 ≤ p ( )! Middle and average them to find the corresponding percentile basic statistics formulas the test statistic for the sample simply average! For you the sum of all observations divided by the number of values, then the value! 
( A∪B ) = p = ( sum of all the individuals in basic statistics formulas category! Mean ” ) value, and more after you order the data set take. Two or more, is the same as multiplying by one over n –.! Presenting, collecting, and each book has listed them with code is an example of a category! But if a particular set contains even no basic mathematics formulas for simple random sampling values can considered! Are the Uses of Excel in Our Daily Life: mean = x = given items ) n. Common statistical formulas that make your Life easy in Python within a data set in ascending order i.e. * n1 + p2 * n2 ) / n the symbol ‘? ’ represents the population used. How to calculate the test statistic for the next course in the population mean the to... Numerical data set, MIN, median and mode p2 * n2 ) total... Are present in the upcoming paragraphs, we have created a huge of... One over n – the total number of values, and charts from... 3, which means 2nd and 3rd term is the frequency divided by desired! Organisation and basic statistics formulas of numerical data set classifying, presenting, collecting, and arranging the information! Symbol ‘? ’ represents the population one over n – 1 where n is the middle value you. One of these functions these are the average ( or “ mean ” ) value, and doesn. Statistics formula to calculate the standard deviation of a descriptive statistic variables x and.. A standard score by using the value of the population mean, the... For simple random sampling p. 10/40 term is the median a few formulas that can help you understand! Is to understand the basic features of data giving a basic number of individuals in the located. As the median can be calculated by writing the data and charts pq / ( +... This is the sum of all the individuals in the sample for Dummies cheat sheet for download... The group of statistical symbols used to study the analysis of data ; statistical are. Of sample proportion = p ( a ) + 1 ) ] /2 ; where n the! 
Paragraphs, we have calculated the variance of the basic concept of statistics formulas Author: K! Help the students finding it difficult to learn statistics = total no of itmes • Organize numerical information in sample... Sqrt ’ used in a given variable features of data and the “ standard of. The given items ; x bar = mean ; and n = total.... P2 * n2 ) 6 two numbers exactly in the Tutorialspoint statistics tutorials /2 = 3 which. First one is the square root Distribution in statistics with formulas differential equation and measure-theoretic probability.! Trying to decipher complicated formulas + n2 ) / ( n1 + n2 ) / total no few Know... Data by systematic organisation and interpretation of calculating the mean of relationship, and arranging the numerical information some...: //www.udacity.com/course/st101 basic statistics formulas of mathematics that deals with the help of data xi – sum of all scores that present! * n1 + p2 * n2 ) / ( n - 1 ) 4 variance ; =... From them, but also how to calculate the standard deviation = s = sqrt Σ! Relative to all the individuals in the basic statistics formulas hypothesis ) now, using the value which most... Using the z-formula, is the summary of the percentage of individuals in the sample fall! Divide your result from Step 2 by the number of cases or individuals in the sample who fall the. Of statistics ; not just how to calculate them, but also how to calculate the of... Formula for the next course in the null hypothesis ) analyzation and interpretation or “ ”... Σ ( xi - x ) 2 / ( n - 1 ) /2! List of basic statistics formulas used in this statistical formula denotes square of. 30 or more groups of data size 30 or more groups of data ; statistical methods are developed to large! Of error for, dealing with samples of size 30 or more, you by the number... X = given items ; x = given items ) / n 2 you are studying statistics n the ‘! By n, the number of values in the upcoming applications /2 ; where n is the as! 
By the total number of the data set in ascending order, i.e 's useful nearest integer so. As well as inter-sample comparisons ) + 1 ) 5 x, y ) pairs a function of given... The differences between the means of two or more, is the center referenced as a of... And forecast various possibilities for the sample who fall into the specified category of data collection, organization,,. The analysis of data science deals with the help of data ; statistical methods developed. Here and ask me any questions you want relationship between two quantitative variables x and y formulae... Use the Z-table to find the corresponding percentile for the mean value sample... Formula for the upcoming paragraphs, we can say that mode is central! ) observations the students to get started with statistics and charts / ( +... Probability means and why it 's really not convenient especially if the number of values in a range of science! Practice to develop a good understanding of the variance of the center of the center of a set. Data set in ascending order, i.e, data analysis, linear,... In any given category, denoted by p, is one of the data set by its mean.. The average from it calculates a variety of single sample statistics as well as inter-sample comparisons in... /2 ; where n is the use of mathematics and statistics are generated to examine the large data and properties! Used by several companies to calculate the test statistic for the upcoming,! Practice to develop a good understanding of the amount of numbers s start with the introduction to statistics in! It also facilitates to interpret several outcomes from it and forecast various possibilities for the sample x, y pairs. Can help the students to get started with statistics MAX, MIN, median and.. For you, Calculus, Geometry, and it doesn ’ t get away from,. Evaluate them of statistics & probability functions basic formulas cheat sheet, Looking at confidence Interval Critical.... 
To show the overall dataset with a single value that deals with the help of and! Static methods in the sample with the Excel statistics functions ; and n = no. Arranging the numerical information in some manner tackle key statistical concepts by programming them their! This is the even number can say that mode is the parameter of a batch of numbers, take minus., the mode of the values that are used by several companies to them. The Excel statistics functions statistics is the sum of all the y values and call it sx studying.!
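The centre and spread formulas listed above (mean, median, mode, variance, standard deviation) can be checked in a few lines of Python. This is an illustrative sketch using only the standard library; the data set is made up for the example:

```python
import math
from collections import Counter

# Made-up data set for illustration; any list of numbers works.
data = [10, 20, 30, 30, 40, 50]
n = len(data)

# Mean: sum of all items divided by the number of items.
mean = sum(data) / n

# Median: middle term for odd n, average of the two middle terms for even n.
ordered = sorted(data)
mid = n // 2
median = ordered[mid] if n % 2 == 1 else (ordered[mid - 1] + ordered[mid]) / 2

# Mode: the value that occurs most often.
mode = Counter(data).most_common(1)[0][0]

# Population variance sigma^2 = sum((x - mean)^2) / N
pop_variance = sum((x - mean) ** 2 for x in data) / n
# Sample variance s^2 = sum((x - mean)^2) / (n - 1)
sample_variance = sum((x - mean) ** 2 for x in data) / (n - 1)

# Standard deviation: the square root of the variance.
pop_sd = math.sqrt(pop_variance)
sample_sd = math.sqrt(sample_variance)
```

Python's built-in `statistics` module implements the same formulas as `statistics.mean`, `statistics.median`, `statistics.mode`, `statistics.pstdev` and `statistics.stdev`.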
| 5,777 | 27,450 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.09375 | 4 | CC-MAIN-2022-21 | longest | en | 0.88615
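The correlation and regression recipe in the statistics notes above (compute x̄, ȳ, sx and sy; sum the products (x − x̄)(y − ȳ); divide by (n − 1)·sx·sy; then take m = r·sy/sx and b = ȳ − m·x̄) translates directly into code. The (x, y) pairs below are invented for illustration:

```python
import math

# Invented (x, y) pairs for illustration only.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 4.1, 5.9, 8.2, 9.8]
n = len(xs)

x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Sample standard deviations of x and y.
sx = math.sqrt(sum((x - x_bar) ** 2 for x in xs) / (n - 1))
sy = math.sqrt(sum((y - y_bar) ** 2 for y in ys) / (n - 1))

# Correlation: sum of (x - x_bar)(y - y_bar), divided by (n - 1) * sx * sy.
r = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

# Regression line y = m*x + b, with m = r * sy / sx and b = y_bar - m * x_bar.
m = r * sy / sx
b = y_bar - m * x_bar
```

For these points the slope works out to about 1.97 and the intercept to about 0.09, so the fitted line is roughly y = 1.97x + 0.09.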
https://www.khanacademy.org/math/cc-fourth-grade-math/imp-multiplication-and-division-2/imp-comparing-with-multiplication/v/comparisons-multiplication-addition | 1,722,864,902,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640447331.19/warc/CC-MAIN-20240805114033-20240805144033-00506.warc.gz | 676,674,848 | 110,519 |
Lesson 1: Comparing with multiplication
# Comparing with multiplication and addition: giraffe
Sal solves 2 multiplication comparison word problems. Created by Sal Khan.
## Want to join the conversation?
• why are the equations so hard?
• They are not hard, you just don't understand them yet, so you need to get more practice on this.
• ---- Great giraffe Sal ----
• why do the equations look like that instead of just being simple?
• Exactly bro, but some questions are like that.
• The examples you used are easy, but the questions in Khan are hard. Can you use the Khan questions? Please?
• They can't, cause then they would be giving away the answer.
• that is a well drawn giraffe
• I don't see the point of using a number line; you can do 2 + 2, 2 x 2, 1 + 1 + 1 + 1, etc. So why, out of all the ways, use a number line?
• Why is he explaining it in such complicated ways??
• cause he is very smart.
duh and who knows if i am 20.
• but what if they want 70 divided by 40 what do we do
• One method is to reduce 70/40 to 7/4 (from dividing top and bottom by 10).
Then 7/4 = 1 3/4. Every 1/4 is 0.25, so 3/4 is 0.25 * 3 = 0.75. So the final answer is 1.75.
Another method is to do 40 into 70 using long division.
40 goes into 70 once with remainder 70-40 = 30.
Then 40 goes into 300 seven times with remainder 300-280 = 20.
Then 40 goes into 200 five times without a remainder. | 447 | 1,582 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.09375 | 4 | CC-MAIN-2024-33 | latest | en | 0.921036 |
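The long-division walkthrough above (40 into 70 once, then into 300, then into 200) can be sketched as a short program, where multiplying the remainder by 10 plays the role of "bringing down a zero". The function name and digit cap are choices made for this sketch:

```python
def decimal_divide(dividend, divisor, max_places=10):
    """Long division of two positive integers, returning a decimal string."""
    whole, remainder = divmod(dividend, divisor)
    digits = []
    while remainder and len(digits) < max_places:
        remainder *= 10  # "bring down a zero"
        digit, remainder = divmod(remainder, divisor)
        digits.append(str(digit))
    return (f"{whole}." + "".join(digits)) if digits else str(whole)

# 70 / 40: 40 goes into 70 once (remainder 30), into 300 seven times
# (remainder 20), into 200 five times (remainder 0) -> "1.75"
result = decimal_divide(70, 40)
```

The same function reproduces the reduced-fraction route too, since 7/4 gives the identical digits 1, 7, 5.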
http://www.markedbyteachers.com/as-and-a-level/science/you-are-required-to-plan-a-procedure-that-will-allow-you-to-compare-quantitatively-the-glucose-concentrations-in-samples-of-fresh-orange-lemon-and-grapefruit-1.html | 1,477,577,828,000,000,000 | text/html | crawl-data/CC-MAIN-2016-44/segments/1476988721278.88/warc/CC-MAIN-20161020183841-00264-ip-10-171-6-4.ec2.internal.warc.gz | 550,944,399 | 18,119 |
You are required to plan a procedure that will allow you to compare quantitatively the glucose concentrations in samples of fresh orange, lemon and grapefruit.
Introduction
Coursework plan: AS Biology. The experiment that I am planning uses the food test for reducing sugars. This test is done by mixing an equal volume of Benedict's solution with the test solution, which is then heated in a water bath for approximately two minutes. If a precipitate has formed, it shows that reducing sugars are present. I am also going to use quantitative Benedict's solution, as weighing a precipitate will give me more accurate results. The sugar I am measuring in the three fruits is glucose, a hexose with the formula C6H12O6.
Middle
One thing I needed to know was the best amount of glucose solution to mix with the Benedict's solution to give an amount of precipitate that was easy to measure. I did this simply by experimenting with different quantities of the two solutions to see which would give the best result. From this I discovered that the best mixture would be 5 ml of the glucose solution to 4 ml of the Benedict's solution. I also had to find out a suitable length of time to leave the mixtures in the water bath. This was worked out using the same 5 ml to 4 ml mixture of glucose solution and Benedict's solution, and I soon discovered that the most appropriate time was approximately 2 minutes and 30 seconds.
Conclusion
They are then heated in a water bath for approximately 2 minutes 30 seconds. These solutions are then drained through a funnel and filter paper into a 250 ml beaker, and the precipitate is dried in an oven and weighed on scales. We now have a range of concentrations to compare our results against. * Now we take 5 ml of each of the three fruit juices (orange, grapefruit and lemon) and mix each of them with 4 ml of the Benedict's solution. These are again put into a water bath for approximately 2 minutes 30 seconds, and the precipitate of each is then filtered, dried and weighed. These results can then be compared with those of the known solutions and put in order of glucose concentration, which is what we want to find out. Risk assessment: Hazard - Benedict's solution; Risk - harmful if swallowed.
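The comparison step described above, where each fruit's precipitate mass is matched against the precipitate masses obtained from known glucose concentrations, amounts to reading an unknown off a calibration curve. The sketch below illustrates that idea with invented calibration points and fruit measurements (none of these numbers are real experimental values):

```python
# Calibration points: (glucose concentration in %, dried precipitate mass in g).
# These numbers are invented for illustration only.
calibration = [(0.5, 0.12), (1.0, 0.25), (2.0, 0.49), (4.0, 0.98)]

def estimate_concentration(precipitate_mass):
    """Linearly interpolate a concentration from the calibration points."""
    points = sorted(calibration, key=lambda p: p[1])
    for (c_lo, m_lo), (c_hi, m_hi) in zip(points, points[1:]):
        if m_lo <= precipitate_mass <= m_hi:
            frac = (precipitate_mass - m_lo) / (m_hi - m_lo)
            return c_lo + frac * (c_hi - c_lo)
    return None  # outside the calibrated range

# Hypothetical fruit-juice precipitate masses in grams.
fruit_masses = {"orange": 0.49, "grapefruit": 0.37, "lemon": 0.185}
estimates = {fruit: estimate_concentration(m) for fruit, m in fruit_masses.items()}
ranking = sorted(estimates, key=estimates.get, reverse=True)
```

With these made-up masses the fruits would rank orange, then grapefruit, then lemon, which is exactly the ordering the plan sets out to determine.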
This student written piece of work is one of many that can be found in our AS and A Level Exchange, Transport & Reproduction section.
| 1,265 | 5,812 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.015625 | 3 | CC-MAIN-2016-44 | longest | en | 0.966831
http://factsforkids.net/earthquake-facts-for-kids/ | 1,591,376,364,000,000,000 | text/html | crawl-data/CC-MAIN-2020-24/segments/1590348502097.77/warc/CC-MAIN-20200605143036-20200605173036-00248.warc.gz | 43,590,469 | 34,398 | # Earthquake Facts For Kids | Causes and Classification
## What is an Earthquake
In general, an earthquake is a sudden shaking of the ground that produces seismic waves. Rocks deep inside the earth experience a great deal of stress from the tectonic forces acting on them. As a result of this severe stress, the rocks deform and store energy within them. When the deformed rocks snap back toward their original shape, they release that energy in the form of waves called seismic waves. These waves cause the earth to shake, leading to an earthquake.
The theory that explains how rocks store energy as they deform under extreme force and then release it on returning to their original position is known as the elastic rebound theory.
## Illustration
The cause of an earthquake is similar to the bending of a flexible stick. When you bend a stick to the point where it would snap if you pushed it any further, it stores energy inside it. As soon as you let it go, it springs back to its original shape, releasing the energy in the process.
## Terms and Definitions
1. The bending of rocks into a new position is known as strain.
2. The point (inside the earth's crust) at which the energy of the rocks is released during seismic activity is called the focus. It is also known as the hypocenter, and it refers to the precise location of an earthquake.
3. The point that lies directly above the focus and on the surface of the earth is known as epicenter.
4. The field of science that studies earthquakes and seismic waves is called seismology. A scientist who examines earthquakes and carries out research in this discipline is known as a seismologist.
5. The scientific devices that record vibrations of the earth are called seismometers.
6. The energy released by an earthquake (in the form of seismic waves) is usually assigned a specific magnitude number so that the precise amount of released energy can be calculated. The device that designates this number is known as Richter magnitude scale.
7. The size of earthquake is measured by calculating the amount of energy released by it. The instrument that measures its size is called moment magnitude scale.
8. When volcanic activity or an earthquake occurs beneath a body of water, a huge mass of water is displaced, giving rise to a series of waves known as a tsunami. It is also sometimes called a tidal wave.
9. The distance from the epicenter straight down to the focus is called the focal depth.
10. At times, an earthquake of relatively smaller magnitude occurs in the same location where the main earthquake has just taken place. This smaller earthquake is referred to as an aftershock.
11. If one earthquake sparks off several other big earthquakes in the same location, this phenomenon is known as earthquake storm.
12. The intensity of the earthquake varies according to the depth of the hypocenter. The deeper an earthquake is, the less likely it is to cause any damage and vice versa.
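Magnitude scales are logarithmic: each whole-number step in magnitude corresponds to roughly 31.6 times more released energy. A quick sketch of this (the relation E proportional to 10^(1.5·M) is the standard Gutenberg-Richter energy scaling; the function name here is ours):

```python
def energy_ratio(m1, m2):
    """Approximate ratio of seismic energy released by earthquakes of
    magnitude m1 versus m2, using E proportional to 10**(1.5 * M)."""
    return 10 ** (1.5 * (m1 - m2))

print(round(energy_ratio(7, 6), 1))  # one magnitude step: about 31.6x the energy
print(round(energy_ratio(9, 7)))     # two steps: about 1000x the energy
```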
Tectonic Plates
1. The crust, together with the uppermost part of the mantle, forms the lithosphere. The lithosphere is divided into a number of pieces called tectonic plates. The majority of earthquakes take place along the boundaries of tectonic plates.
2. Tectonic plates fit together like pieces of a jigsaw puzzle. At times, one of the plates tries to slip past the adjacent plate, and strain develops. This process, in which one plate grinds past another, eventually produces seismic waves. This type of earthquake is known as an interplate earthquake.
3. Almost 90 percent of the total seismic energy is released through interplate earthquakes.
Seismic Waves
1. When an earthquake occurs within rocks of the Earth’s crust, it emits energy in the form of waves. The frequency of these waves is low, and they pass through the three layers of the earth (the crust, the mantle and the core). These waves are called seismic waves. It is largely because of these waves that we know the earth’s interior is composed of three layers.
2. Scientists believe that less than ten percent of the energy given off by an earthquake is in the form of seismic waves. Much of the rest of the energy is transformed into heat energy.
3. The very first wave a seismograph records is called P-wave. The letter ‘P’ denotes ‘pressure’ or ‘primary’. The speed of these waves within solid rocks inside the earth is 6 – 7 km/s.
4. The second wave to be recorded on a seismograph is known as the S-wave. The letter ‘S’ denotes ‘secondary’ or ‘shear’. These waves are unable to pass through the outer core (which is molten) but they can cross the inner core (which is solid). The velocity of S-waves in the interior of the earth is slower than that of P-waves; they travel at a speed of 4 – 5 km/s.
Earthquakes with a magnitude of 3 or less are weak and usually go unnoticed. Earthquakes measuring above 7 on the magnitude scale are destructive and cause severe damage.
Types of Earthquakes
Based on focal depth, earthquakes are classified into three kinds.
1. Shallow-focus earthquakes are those quakes that occur less than 70 km deep inside the earth. Thus their focal depth is less than 70 kilometers.
2. Intermediate-depth earthquakes are those that lie between 70 km and 300 km deep. They are also known as mid-focus earthquakes.
3. Deep-focus earthquakes develop inside the mantle and vary from 300 km to 700 km.
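The three depth classes above can be captured in a small helper function (the handling of the boundaries at exactly 70 km and 300 km follows the ranges quoted in the text):

```python
def classify_focal_depth(depth_km):
    """Classify an earthquake by focal depth, per the ranges above."""
    if depth_km < 70:
        return "shallow-focus"
    if depth_km <= 300:
        return "intermediate-depth (mid-focus)"
    if depth_km <= 700:
        return "deep-focus"
    return "deeper than typical earthquakes"

print(classify_focal_depth(10))   # → shallow-focus
print(classify_focal_depth(150))  # → intermediate-depth (mid-focus)
print(classify_focal_depth(600))  # → deep-focus
```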
Location of Earthquakes
1. The location with the maximum number of earthquakes lies around the Pacific Ocean and is called the Ring of Fire. It is 40,000 km long and contains more than 75 percent of the world’s volcanoes. Almost 90 percent of the world’s earthquakes take place in this region. It is the world’s most active volcanic region.
2. The second most active region of volcanic activity is called Alpine-Himalayan orogenic belt. About 17 percent of the biggest earthquakes of the world lie in this area.
3. The world’s third most active area for volcanic activity is Mid-Atlantic Ridge.
Human Causes of Earthquakes
There are four kinds of human actions that may also trigger earthquakes. These are:
1. impounding huge amounts of water behind dams;
2. drilling deep holes into the ground and injecting liquid into wells;
3. mining coal from deep inside the earth;
4. extracting oil from the earth.
More Earthquake Facts for Kids
1. The energy released by a single earthquake is so great that it amounts to energy released by tens of thousands of nuclear bombs.
2. Every year, our planet experiences more or less 500,000 earthquakes. However, 400,000 of these earthquakes are so small that we cannot feel them.
3. In U.S.A., the two states that experience most minor earthquakes are Alaska and California.
4. From 1900 onward, about 18 earthquakes per year, on average, have had magnitudes ranging from 7 to 7.9; these are termed ‘major’ earthquakes, as per the United States Geological Survey.
5. With a magnitude of 9.5, the strongest earthquake ever recorded took place in Chile on May 22, 1960. It is known as the ‘Great Chilean earthquake’.
6. The deadliest earthquake in the world occurred on January 23, 1556 in China. It took the lives of about 830,000 people, and the event is now known as the Jiajing earthquake.
Did you find these earthquake facts for kids useful? Are they what you were looking for? Please comment and help us improve this article. Thanks for reading.
Learn some more: Facts About The Earth | 1,657 | 7,731 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.71875 | 3 | CC-MAIN-2020-24 | longest | en | 0.970625 |
https://byjus.com/question-answer/in-figure-pqrs-is-a-rectangle-if-angle-rpq-30-circ-then-the-value-of/ | 1,723,377,150,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640997721.66/warc/CC-MAIN-20240811110531-20240811140531-00774.warc.gz | 120,446,616 | 27,418 | 1
Question
# In the figure, PQRS is a rectangle. If ∠RPQ = 30°, then the value of (x + y) is
A
90
B
120
C
150
D
180
Solution
## The correct option is D (180°). Let the point of intersection of the diagonals of the rectangle be O. Given ∠OPQ = 30°. The diagonals of a rectangle are equal and bisect each other, so OP = OQ, which makes triangle OPQ isosceles; hence ∠OQP = ∠OPQ = 30°. Every angle of a rectangle is 90°, so ∠OQR = ∠PQR - ∠PQO = 90° - 30° = 60°, i.e. x = 60°. In triangle POQ, the angles sum to 180°, so ∠POQ = 180° - 30° - 30° = 120°. Since ∠SOR and ∠POQ are vertically opposite angles, y = ∠SOR = 120°. Therefore x + y = 60° + 120° = 180°. Hence, the answer is 180°.
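The angle chase in this solution can be verified with a few lines of arithmetic (O denotes the intersection of the diagonals; the only input is the given 30° angle):

```python
angle_OPQ = 30                      # given: angle RPQ = 30 degrees
# Diagonals of a rectangle are equal and bisect each other, so
# triangle OPQ is isosceles and angle OQP = angle OPQ.
angle_OQP = angle_OPQ
y = 180 - angle_OPQ - angle_OQP     # angle POQ (= angle SOR, vertical angles)
x = 90 - angle_OQP                  # angle OQR = angle PQR - angle PQO
print(x, y, x + y)                  # → 60 120 180
```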
https://brainly.ph/question/109776 | 1,485,029,336,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560281202.94/warc/CC-MAIN-20170116095121-00487-ip-10-171-10-70.ec2.internal.warc.gz | 799,542,266 | 10,063 | # How many prime numbers are there that are less than 10?
by odinmich787
2015-03-06T18:41:39+08:00
### This Is a Certified Answer
Certified answers contain reliable, trustworthy information vouched for by a hand-picked team of experts. Brainly has millions of high quality answers, all of them carefully moderated by our most trusted community members, but certified answers are the finest of the finest.
2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97
There are 25 prime numbers less than 100. (Note that the question asks for primes less than 10; only the first four of these, 2, 3, 5 and 7, are less than 10.)
2015-03-06T19:05:26+08:00
The prime numbers less than 10 are 2, 3, 5 and 7.
So there are 4 prime numbers less than 10. | 267 | 951 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.703125 | 3 | CC-MAIN-2017-04 | latest | en | 0.943298 |
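Both answers are easy to check with a short trial-division sieve:

```python
def primes_below(n):
    """All primes p with p < n, by trial division up to sqrt(p)."""
    return [p for p in range(2, n)
            if all(p % d for d in range(2, int(p ** 0.5) + 1))]

print(primes_below(10))        # → [2, 3, 5, 7]  (4 primes below 10)
print(len(primes_below(100)))  # → 25 primes below 100
```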
https://slideplayer.com/slide/4996504/ | 1,591,174,309,000,000,000 | text/html | crawl-data/CC-MAIN-2020-24/segments/1590347432521.57/warc/CC-MAIN-20200603081823-20200603111823-00208.warc.gz | 522,283,175 | 21,411 | # 1 Pertemuan 18 Pembandingan Dua Populasi-2 Matakuliah: A0064 / Statistik Ekonomi Tahun: 2005 Versi: 1/1.
1 Session 18: Comparing Two Populations (Part 2). Course: A0064 / Economic Statistics. Year: 2005. Version: 1/1.
2 Learning Outcomes: By the end of this session, students are expected to be able to compare the large-sample test for the difference between two population proportions with the test for the equality of two populations.
3 Course Outline: A Large-Sample Test for the Difference between Two Population Proportions; The F Distribution and a Test for the Equality of Two Population Variances.
COMPLETE BUSINESS STATISTICS, 5th edition. Aczel/Sounderpandian. McGraw-Hill/Irwin © The McGraw-Hill Companies, Inc., 2002

8-5 A Large-Sample Test for the Difference between Two Population Proportions

Hypothesized difference is zero:
I: Difference between the two population proportions is 0 (p1 = p2)
» H0: p1 - p2 = 0
» H1: p1 - p2 ≠ 0
II: Difference between the two population proportions is less than or equal to 0 (p1 ≤ p2)
» H0: p1 - p2 ≤ 0
» H1: p1 - p2 > 0
Hypothesized difference is other than zero:
III: Difference between the two population proportions is less than or equal to D (p1 ≤ p2 + D)
» H0: p1 - p2 ≤ D
» H1: p1 - p2 > D
8-5 Comparisons of Two Population Proportions When the Hypothesized Difference Is Zero: Test Statistic

A large-sample test statistic for the difference between two population proportions, when the hypothesized difference is zero:

z = (p̂1 - p̂2) / sqrt[ p̂(1 - p̂)(1/n1 + 1/n2) ]

where p̂1 is the sample proportion in sample 1 and p̂2 is the sample proportion in sample 2. The symbol p̂ stands for the combined sample proportion in both samples, considered as a single sample. That is:

p̂ = (x1 + x2) / (n1 + n2)

When the population proportions are hypothesized to be equal, this pooled estimator of the proportion (p̂) may be used in calculating the test statistic.
8-6 Comparisons of Two Population Proportions When the Hypothesized Difference Is Zero: Example 8-8

Carry out a two-tailed test of the equality of banks’ share of the car loan market in 1980 and 1995.
8-7 Example 8-8: Carrying Out the Test

[Figure: standard normal distribution with rejection regions beyond -z0.05 = -1.645 and z0.05 = 1.645; the test statistic, 1.415, lies in the nonrejection region.]

Since the value of the test statistic is within the nonrejection region, even at a 10% level of significance, we may conclude that there is no statistically significant difference between banks’ shares of car loans in 1980 and 1995.
8-8 Example 8-8: Using the Template

P-value = 0.157, so do not reject H0 at the 5% significance level.
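The pooled test statistic described above is straightforward to compute. The counts below are made up for illustration; the actual 1980/1995 car-loan data from Example 8-8 is not reproduced in this transcript:

```python
from math import sqrt

def two_prop_z(x1, n1, x2, n2):
    """Large-sample z statistic for H0: p1 - p2 = 0, using the pooled
    sample proportion p_hat = (x1 + x2) / (n1 + n2)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(100, 500, 80, 500)   # hypothetical counts
print(round(z, 3))                  # → 1.646
```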
8-9 Comparisons of Two Population Proportions When the Hypothesized Difference Is Not Zero: Example 8-9

Carry out a one-tailed test to determine whether the population proportion of traveler’s check buyers who buy at least \$2500 in checks when sweepstakes prizes are offered is at least 10% higher than the proportion of such buyers when no sweepstakes are on.
8-10 Example 8-9: Carrying Out the Test

[Figure: standard normal distribution; right-tail rejection region beyond z0.001 = 3.09; test statistic = 3.118.]

Since the value of the test statistic is above the critical point, even for a level of significance as small as 0.001, the null hypothesis may be rejected, and we may conclude that the proportion of customers buying at least \$2500 of travelers checks is at least 10% higher when sweepstakes are on.
8-11 Example 8-9: Using the Template

P-value = 0.0009, so reject H0 at the 5% significance level.
8-12 Confidence Intervals for the Difference between Two Population Proportions

A (1 - α)100% large-sample confidence interval for the difference between two population proportions:

(p̂1 - p̂2) ± z_(α/2) · sqrt[ p̂1(1 - p̂1)/n1 + p̂2(1 - p̂2)/n2 ]

A 95% confidence interval can then be computed using the data in Example 8-9.
8-13 Confidence Intervals for the Difference between Two Population Proportions, Using the Template, with the Data from Example 8-9
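A minimal implementation of the interval above (z = 1.96 gives a 95% interval; the counts are hypothetical, since Example 8-9’s raw data is not in the transcript):

```python
from math import sqrt

def two_prop_ci(x1, n1, x2, n2, z=1.96):
    """Large-sample (1 - alpha)100% CI for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d - z * se, d + z * se

lo, hi = two_prop_ci(300, 700, 120, 700)   # hypothetical counts
print(round(lo, 3), round(hi, 3))          # → 0.211 0.303
```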
8-14 The F Distribution and a Test for Equality of Two Population Variances

The F distribution is the distribution of the ratio of two chi-square random variables that are independent of each other, each of which is divided by its own degrees of freedom. An F random variable with k1 and k2 degrees of freedom:

F(k1, k2) = (χ1²/k1) / (χ2²/k2)

where χ1² and χ2² are independent chi-square random variables with k1 and k2 degrees of freedom respectively.
8-15 The F Distribution

The F random variable cannot be negative, so it is bounded by zero on the left. The F distribution is skewed to the right. The F distribution is identified by the number of degrees of freedom in the numerator, k1, and the number of degrees of freedom in the denominator, k2.
8-16 Critical Points of the F Distribution Cutting Off a Right-Tail Area of 0.05

k2\k1      1      2      3      4      5      6      7      8      9
 1     161.4  199.5  215.7  224.6  230.2  234.0  236.8  238.9  240.5
 2     18.51  19.00  19.16  19.25  19.30  19.33  19.35  19.37  19.38
 3     10.13   9.55   9.28   9.12   9.01   8.94   8.89   8.85   8.81
 4      7.71   6.94   6.59   6.39   6.26   6.16   6.09   6.04   6.00
 5      6.61   5.79   5.41   5.19   5.05   4.95   4.88   4.82   4.77
 6      5.99   5.14   4.76   4.53   4.39   4.28   4.21   4.15   4.10
 7      5.59   4.74   4.35   4.12   3.97   3.87   3.79   3.73   3.68
 8      5.32   4.46   4.07   3.84   3.69   3.58   3.50   3.44   3.39
 9      5.12   4.26   3.86   3.63   3.48   3.37   3.29   3.23   3.18
10      4.96   4.10   3.71   3.48   3.33   3.22   3.14   3.07   3.02
11      4.84   3.98   3.59   3.36   3.20   3.09   3.01   2.95   2.90
12      4.75   3.89   3.49   3.26   3.11   3.00   2.91   2.85   2.80
13      4.67   3.81   3.41   3.18   3.03   2.92   2.83   2.77   2.71
14      4.60   3.74   3.34   3.11   2.96   2.85   2.76   2.70   2.65
15      4.54   3.68   3.29   3.06   2.90   2.79   2.71   2.64   2.59

[Figure: F distribution with 7 and 11 degrees of freedom; the right-tail critical point is F0.05 = 3.01.]

The left-hand critical point to go along with F(k1, k2) is given by 1/F(k2, k1), where F(k2, k1) is the right-hand critical point for an F random variable with the reverse number of degrees of freedom.
8-17 Critical Points of the F Distribution: F(6, 9), α = 0.10

The right-hand critical point read directly from the table of the F distribution is F(6, 9) = 3.37. The corresponding left-hand critical point is given by 1/F(9, 6) = 1/4.10 ≈ 0.244.
8-18 Test Statistic for the Equality of Two Population Variances

I: Two-Tailed Test
H0: σ1² = σ2²
H1: σ1² ≠ σ2²

II: One-Tailed Test
H0: σ1² ≤ σ2²
H1: σ1² > σ2²

The test statistic is the ratio of the two sample variances, F(n1 - 1, n2 - 1) = s1²/s2².
8-19 Example 8-10

The economist wants to test whether or not the event (interceptions and prosecution of insider traders) has decreased the variance of prices of stocks.
8-20 Example 8-10: Solution

[Figure: F distribution with 24 and 23 degrees of freedom; right-tail critical point F0.01 = 2.7; test statistic = 3.1.]

Since the value of the test statistic is above the critical point, even for a level of significance as small as 0.01, the null hypothesis may be rejected, and we may conclude that the variance of stock prices is reduced after the interception and prosecution of inside traders.
8-21 Example 8-10: Solution Using the Template

Observe that the p-value for the test is 0.0042, which is less than 0.01. Thus the null hypothesis must be rejected at the 0.01 level of significance.
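The F statistic itself is just the ratio of the two sample variances, with the larger variance in the numerator for this one-tailed test. The variances below are hypothetical values chosen to reproduce the ratio of 3.1 from Example 8-10:

```python
s1_sq, s2_sq = 9.3, 3.0    # hypothetical sample variances, ratio 3.1
n1, n2 = 25, 24            # sample sizes (24 and 23 degrees of freedom)
F = s1_sq / s2_sq
F_crit = 2.7               # F_0.01(24, 23) from the slide
print(round(F, 2), F > F_crit)   # → 3.1 True, so reject H0
```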
8-22 Example 8-11: Testing the Equality of Variances for Example 8-5
8-23 Example 8-11: Solution

[Figure: F distribution with 13 and 8 degrees of freedom; critical points F0.10 = 3.28 and F0.90 = 1/2.20 = 0.4545; test statistic = 1.19.]

Since the value of the test statistic is between the critical points, even for a 20% level of significance, we cannot reject the null hypothesis. We conclude the two population variances are equal.
8-24 Template to Test for the Difference between Two Population Variances: Example 8-11

Observe that the p-value for the test is 0.8304, which is larger than 0.05. Thus the null hypothesis cannot be rejected at the 0.05 level of significance. That is, one can assume equal variances.
8-25 The F Distribution Template
8-26 The Template for Testing Equality of Variances
27 Closing: Comparing two populations is part of hypothesis testing in which there is more than one population. It is also a form of statistical inference, consisting of drawing conclusions or making decisions about rejecting or not rejecting (accepting) a statement or hypothesis.
https://www.onlinemath4all.com/finding-equation-of-median-for-triangle.html | 1,569,141,531,000,000,000 | text/html | crawl-data/CC-MAIN-2019-39/segments/1568514575402.81/warc/CC-MAIN-20190922073800-20190922095800-00528.warc.gz | 971,468,403 | 13,906 | # FINDING EQUATION OF MEDIAN FOR TRIANGLE
Finding Equation of Median for Triangle:

Here we are going to see how to find the equation of a median of a triangle and how to prove that given points are collinear using the equation of a straight line.
## Finding Equation of Median for Triangle
Question 9 :
Find the equation of the median from the vertex R in a Δ PQR with vertices at P(1, -3), Q(-2, 5) and R(-3, 4).
Solution :
Let us draw a rough diagram based on the given information.
Midpoint of PQ = ((x₁ + x₂)/2, (y₁ + y₂)/2)

P(1, -3) and Q(-2, 5):

Midpoint M = ((1 + (-2))/2, (-3 + 5)/2) = (-1/2, 1)

The median from the vertex R passes through R and the midpoint of the opposite side PQ, so we need the line through R(-3, 4) and M(-1/2, 1).

Slope m = (1 - 4)/(-1/2 - (-3)) = -3/(5/2) = -6/5

Equation of the required line:

(y - y₁) = m (x - x₁)

(y - 4) = (-6/5) (x - (-3))

5(y - 4) = -6(x + 3)

5y - 20 = -6x - 18

6x + 5y - 2 = 0

Hence the equation of the median from R is 6x + 5y - 2 = 0.
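As a quick check of Question 9, the line through R(-3, 4) and the midpoint of PQ can be computed with exact rational arithmetic (Python's fractions module; the variable names are ours):

```python
from fractions import Fraction as Fr

P, Q, R = (Fr(1), Fr(-3)), (Fr(-2), Fr(5)), (Fr(-3), Fr(4))
M = ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)   # midpoint of PQ = (-1/2, 1)
# Line through R and M in the form a*x + b*y + c = 0:
a = M[1] - R[1]
b = R[0] - M[0]
c = -(a * R[0] + b * R[1])
print(a, b, c)   # → -3 -5/2 1, i.e. 6x + 5y - 2 = 0 after scaling by -2
# Both R and M satisfy the equation:
print(a * R[0] + b * R[1] + c, a * M[0] + b * M[1] + c)   # → 0 0
```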
Question 10 :
By using the concept of the equation of the straight line, prove that the given three points are collinear.
(i) (4, 2), (7, 5) and (9, 7) (ii) (1, 4), (3, -2) and (-3, 16)
Solution :
(i) (4, 2), (7, 5) and (9, 7)
To prove the given points are collinear, first we have to find the equation of the line passing through the first two points.
Now we have to apply the remaining point in the equation that we have found.
If the other point satisfies the equation of the line, then we can decide that the three points are collinear. Otherwise we can say the three points are not collinear.
Equation of the line passing through two given points.
(y - y₁) /(y₂-y₁) = (x - x₁) /(x₂-x₁)
(x₁, y₁) = (4, 2) and (x₂, y₂) = (7, 5)
(y-2)/(5 - 2) = (x - 4)/(7 - 4)
(y-2)/(3) = (x - 4)/(3)
(y-2) = (x - 4)
x - 4 - y + 2 = 0
x - y - 2 = 0
Apply x = 9 and y = 7
9 - 7 - 2 = 0
2 - 2 = 0
0 = 0
Hence the given points are collinear.
(ii) (1, 4), (3, -2) and (-3, 16)
Equation of the line passing through two given points.
(y - y₁) /(y₂-y₁) = (x - x₁) /(x₂-x₁)
(x₁, y₁) = (1, 4) and (x₂, y₂) = (3, -2)
(y - 4)/(-2-4) = (x - 1)/(3 - 1)
(y - 4)/(-6) = (x - 1)/(2)
2 (y - 4) = -6 (x - 1)
2y - 8 = -6x + 6
6x + 2y - 8 - 6 = 0
6x + 2y - 14 = 0
Apply x = -3 and y = 16
6(-3) + 2(16) - 14 = 0
-18 + 32 - 14 = 0
0 = 0
Hence the given points are collinear.
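An equivalent (and quicker) check uses the cross product of the two difference vectors, which is zero exactly when the three points are collinear:

```python
def collinear(p1, p2, p3):
    """True iff the cross product of (p2 - p1) and (p3 - p1) is zero."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) == 0

print(collinear((4, 2), (7, 5), (9, 7)))      # → True
print(collinear((1, 4), (3, -2), (-3, 16)))   # → True
print(collinear((0, 0), (1, 0), (1, 1)))      # → False
```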
After having gone through the stuff given above, we hope that the students would have understood how to find the equation of a median and how to prove that given points are collinear.
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=50&t=23947 | 1,576,467,155,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575541315293.87/warc/CC-MAIN-20191216013805-20191216041805-00023.warc.gz | 424,557,315 | 11,270 | ## KC/KP/QC/QP
Grace Lee 3G
Posts: 19
Joined: Fri Sep 29, 2017 7:07 am
### KC/KP/QC/QP
Sorry I'm a little confused. What's the difference between KC, KP, QC, and QP?
Curtis Wong 2D
Posts: 62
Joined: Sat Jul 22, 2017 3:00 am
### Re: KC/KP/QC/QP
Kc is the equilibrium constant when the reactants and products are expressed in terms of molarity. Kp is the equilibrium constant when the products and reactants are given in terms of partial pressures in atm (usually when they are gases), so it is known as the equilibrium constant of partial pressures. Qc and Qp are likewise based on molarity and partial pressure respectively, but instead of applying only at equilibrium, they refer to some point in time during the reaction. That point could be before, after, or even at equilibrium, depending on whether or not Q is equal to K.
Kyle Reidy 3H
Posts: 19
Joined: Fri Sep 29, 2017 7:06 am
### Re: KC/KP/QC/QP
Some of the difference can be seen in the names. We call Q the reaction quotient and K the equilibrium constant. "Constant" implies that this value does not change – it is a set value that the reaction wants to attain for the given set of conditions – while Q can have different values throughout the course of a reaction. The values K and Q are calculated in the exact same way, so they are easy to compare to see which direction the reaction will proceed.
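The "compare Q to K" logic can be sketched numerically. The reaction, concentrations and K below are invented for illustration (products carry positive stoichiometric coefficients, reactants negative):

```python
def reaction_quotient(conc, coeffs):
    """Q for a reaction, given concentrations and signed coefficients."""
    q = 1.0
    for species, nu in coeffs.items():
        q *= conc[species] ** nu
    return q

K = 4.0                                   # hypothetical Kc for A <-> 2 B
coeffs = {"A": -1, "B": 2}
Q = reaction_quotient({"A": 0.5, "B": 1.0}, coeffs)
print(Q)                                  # → 2.0
print("forward" if Q < K else "reverse" if Q > K else "at equilibrium")
```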
https://www.mapleprimes.com/questions/235176-Why-Dsolve-Sometimes-Do-Not-Evaluate | 1,675,722,177,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00578.warc.gz | 887,123,239 | 125,728 | # Question:why dsolve sometimes do not evaluate an integral when solving an ode?
Maple 2022
I think I mentioned this before, a long time ago, and never got a satisfactory answer. So I thought I would try again.
I have never been able to figure out why or how dsolve decides when to integrate the intermediate result versus keeping the integral inert, even though it can integrate it.
It must use some rule internal to decide this, and this is what I am trying to find out.
Here is a very simple separable ode. So this is just really an integration problem.
```restart;
ode:=diff(y(x),x)=(b*y(x)+sqrt(y(x)^2+b^2-1) )/(b^2-1);
dsolve(ode)
```
The first thing that comes to mind is that Maple could not integrate it, and that is why it gave an inert integral. But it can integrate it; the result is just a little long:
`int((b^2 - 1)/(b*y + sqrt(y^2 + b^2 - 1)),y)`
So it could have generated the above implicit solution instead. Now notice what happens when I make a very small change to the ode.
```restart;
ode:=diff(y(x),x)=(y(x)+sqrt(y(x)^2+b^2-1) )/(b^2-1);
dsolve(ode)
```
In the above I changed b*y to just y and, guess what, now Maple will integrate it and give an implicit solution instead of an inert integral.
In both cases, Maple is able to do the integration. But in first case, it returned an inert integral and in the second it did not.
My question is: why? Does it have a rule where, if the size of the integral is larger than some limit, it does not solve it? Did it say,
"I think this result is too complicated to the user, so I will keep the integral inert instead"
If so, what are the rules it uses to decide when to do the integration and when to keep it inert? Is it based on leafcount? number of terms? something else?
infolevel does not give a hint on this, as all what it says is that it is separable.
Anyone have any ideas on this?
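For comparison, SymPy's ODE classifier also recognizes the equation as separable. This does not answer the Maple-internals question; it just confirms the structure of the ODE (the SymPy usage below is a sketch):

```python
import sympy as sp

x, b = sp.symbols('x b', positive=True)
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x),
            (b * y(x) + sp.sqrt(y(x)**2 + b**2 - 1)) / (b**2 - 1))
hints = sp.classify_ode(ode)   # the RHS depends only on y, so it separates
print('separable' in hints)    # → True
```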
| 488 | 1,922 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.5625 | 4 | CC-MAIN-2023-06 | latest | en | 0.933557 |
http://physics.stackexchange.com/questions/101578/position-and-momentum-bases-in-quantum-mechanics | 1,469,612,798,000,000,000 | text/html | crawl-data/CC-MAIN-2016-30/segments/1469257826759.85/warc/CC-MAIN-20160723071026-00247-ip-10-185-27-174.ec2.internal.warc.gz | 208,513,908 | 19,322 | Position and momentum bases in quantum mechanics
I have seen the following two descriptions of the position basis:
$$\tag{1}| x\rangle=\delta(x-x_0)$$
and also
$$\tag{2}\langle x_0| x\rangle=\delta(x-x_0),$$
which (if either) of these is correct? Perhaps they are equivalent under change of notation? The first seems right to me, as it would solve the operator equation $$\tag{3}x| x\rangle=x_0| x\rangle,$$ but I would like to be sure. If we are working in the momentum representation, is $$\tag{4}|p\rangle=\delta(p-p_0)$$ also valid?
-
The first is wrong. – David H Mar 2 '14 at 17:08
Comment to the question (v2): The third and fourth eqs. are also wrong. – Qmechanic Mar 2 '14 at 17:17
If you put $\chi_{r_0}(r)= \delta (r-r_0 )$ then $[ \chi_{r_0}(r)]$ forms a basis. In Dirac's notation: $\chi_{r_0}(r) \rightarrow |r_0 \rangle$, and you can verify that this set is a basis because it satisfies:
Orthonormality: $\langle r_0 | r'_{0} \rangle = \delta (r_0 - r'_0 )$
Closure relation: $\int d^3 r_0 \ \ |r_0 \rangle \langle r_0|=$ 1
where 1 is the identity operator.
-
So, if I understand correctly, equations 1 and 2 in my original question are both correct (but the notation is messed up making them inconsistent)? – Lachy Mar 2 '14 at 17:32
Yes! There are a lot of different notations: in general $r$ represents the point in space, $r_0$ the eigenvalue of the operator $R$, and $|r_0\rangle$ the eigenket relative to $r_0$ – LC7 Mar 2 '14 at 17:52
Another thing: for the momentum my comment is still valid (you just have to change $r$ to $p$ and so on) – LC7 Mar 2 '14 at 17:54
The latter description is correct (as is described in Sakurai, Gasiorowicz, Griffiths, and probably some other books that I don't own). What it is saying is that the inner product between $|x\rangle$ and $|x_0\rangle$ vanishes when $x\neq x_0$ and is delta-normalized (rather than equal to 1) when $x=x_0$. That is, the states are orthogonal.
The momentum space description $$\langle p|p'\rangle=\delta\left(p-p'\right)$$ is also valid.
-
Thanks! Is it actually possible to calculate $|x\rangle$ or $|p\rangle$? That is, given $\psi(p)$, is it actually possible to calculate $|\psi\rangle=\int\psi(p)|p\rangle dp$ ? – Lachy Mar 2 '14 at 17:20
That integral doesn't make a lot of sense to me. Think of Fourier transforms, if you integrate over $p$, then you have $\psi(x)$ and not $\psi(p)$. To get $\psi(p)$, then you need to compute $\int dx'\langle p|x'\rangle\langle x'|\alpha\rangle$ for some arbitrary state $|\alpha\rangle$. – Kyle Kanos Mar 2 '14 at 17:35
The equation you just gave would (correctly) give $\psi(p)$ if I already had $|\psi \rangle$ (or $|\alpha\rangle$, as you used). On the contrary I already have the wavefunction $\psi(p)$ and want to calculate the representation free ket $|\psi\rangle$ – Lachy Mar 2 '14 at 17:41
Oh, now I see what you're trying to do. Normally you'd write it as $|\psi\rangle=\int |p\rangle\psi(p)dp=\int |p\rangle\langle p|\psi\rangle dp$ as $\psi(p)=\langle p|\psi\rangle$ and $\int|p\rangle\langle p|dp=1$. So I suppose as long as you know what $|p\rangle$ is, you should be able to do that. – Kyle Kanos Mar 2 '14 at 17:52
$| x \rangle$ is a position eigenstate, the state for a particle with definite location $x$. This is an abstract vector.
$\delta (x - x_0)$ is a wavefunction (or distribution) for a particle with definite location $x_0$. It is the state $| x_0 \rangle$ expressed in the position basis:
$$\langle x | x_0 \rangle = \delta (x - x_0)$$
If $\hat x$ is the position operator then the third equation should be
$$\hat x | x_0 \rangle = x_0 | x_0 \rangle$$
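A finite-dimensional analogue may make the bookkeeping concrete. On an $N$-point grid (a toy model, not the continuum statement: the Dirac delta becomes a Kronecker delta), the position eigenstates are the standard basis vectors and the momentum eigenstates are the columns of the unitary discrete Fourier transform, so $\langle x_j|x_{j'}\rangle=\delta_{jj'}$, $\langle p_k|p_{k'}\rangle=\delta_{kk'}$, and the expansion $|\psi\rangle=\sum_k \psi(p_k)|p_k\rangle$ can all be checked directly:

```python
import numpy as np

N = 8
x_states = np.eye(N)                                   # |x_j> as columns
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
p_states = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)  # |p_k> as columns (DFT)

# Orthonormality <x_j|x_j'> = delta_{jj'} (discrete stand-in for the Dirac delta)
gram_x = x_states.conj().T @ x_states
# Same for the momentum basis: <p_k|p_k'> = delta_{kk'}
gram_p = p_states.conj().T @ p_states

# Expanding a state: |psi> = sum_k psi(p_k)|p_k>, with psi(p_k) = <p_k|psi>
rng = np.random.default_rng(0)
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi_p = p_states.conj().T @ psi        # coefficients <p_k|psi>
psi_back = p_states @ psi_p            # sum_k psi(p_k)|p_k> reconstructs psi

print(np.allclose(gram_x, np.eye(N)), np.allclose(gram_p, np.eye(N)),
      np.allclose(psi_back, psi))
```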
- | 1,114 | 3,555 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.25 | 3 | CC-MAIN-2016-30 | latest | en | 0.886205 |
https://ja.scribd.com/document/136471687/Paper-24-A-Posteriori-Error-Estimator-for-Mixed-Approximation-of-the-Navier-Stokes-Equations-With-the-C-a-b-c-Boundary-Condition | 1,563,417,599,000,000,000 | text/html | crawl-data/CC-MAIN-2019-30/segments/1563195525483.64/warc/CC-MAIN-20190718022001-20190718044001-00125.warc.gz | 417,081,042 | 81,381 | You are on page 1of 11
(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 4, No. 3, 2013
145 | Page
www.ijacsa.thesai.org
A Posteriori Error Estimator for Mixed Approximation of the Navier-Stokes Equations with the $C_{a,b,c}$ Boundary Condition

J. El Mekkaoui, M. A. Bennani, A. Elkhalfi
Mechanical Engineering Laboratory, Faculty of Sciences and Techniques, B.P. 2202, Route Imouzzer, Fes
Department of Mathematics, Regional Centre for Professions of Education and Training, Fes, B.P. 243, Sefrou, Morocco
Abstract: In this paper, we introduce the Navier-Stokes equations with a new boundary condition. In this context, we show the existence and uniqueness of the solution of the weak formulation associated with the proposed problem. To solve this latter, we use a discretization by the mixed finite element method. In addition, two types of a posteriori error indicator are introduced and are shown to give global error estimates that are equivalent to the true error. In order to evaluate the performance of the method, the numerical results are compared with some previously published works and with others coming from the ADINA system.

Keywords: Navier-Stokes equations; $C_{a,b,c}$ boundary condition; mixed finite element method; residual error estimator
I. INTRODUCTION
This paper describes a numerical solution of the Navier-Stokes equations with a new boundary condition that generalizes the well-known basic conditions, especially the Dirichlet and the Neumann conditions. We prove that the weak formulation of the proposed model has a unique solution. To compute this solution, we use a discretization by the mixed finite element method. Moreover, we propose two types of a posteriori error indicator, which are shown to give global error estimates that are equivalent to the true error. To compare our solution with some previously published ones, such as those of the ADINA system, several numerical results are shown. The method is structured as a standalone package for studying discretization algorithms for PDEs and for exploring and developing algorithms in numerical linear and nonlinear algebra for solving the associated discrete systems. It can also be used as a pedagogical tool for studying these issues, or more elementary ones such as the properties of Krylov subspace iterative methods [15].
The latter two PDEs constitute the basis for computational modeling of the flow of an incompressible Newtonian fluid. For the equations, we offer a choice of two-dimensional domains on which the problem can be posed, along with boundary conditions and other aspects of the problem, and a choice of finite element discretizations on a quadrilateral element mesh. The discrete Navier-Stokes equations require a method such as the generalized minimum residual method (GMRES), which is designed for nonsymmetric systems [15]. The key to a fast solution lies in the choice of effective preconditioning strategies. The package offers a range of options, including algebraic methods such as incomplete LU factorizations, as well as more sophisticated and state-of-the-art multigrid methods designed to take advantage of the structure of the discrete linearized Navier-Stokes equations. In addition, there is a choice of iterative strategies, Picard iteration or Newton's method, for solving the nonlinear algebraic systems arising from the latter problem.
A posteriori error analysis in problems related to fluid dynamics is a subject that has received a lot of attention during the last decades. In the conforming case there are several ways to define error estimators by using the residual equation. In particular, for the Stokes problem, M. Ainsworth and J. Oden [10], C. Carstensen and S. A. Funken [12], D. Kay and D. Silvester [13], and R. Verfurth [14] introduced several error estimators and proved that they are equivalent to the energy norm of the errors. Other works for the stationary Navier-Stokes problem have been introduced in [5, 8, 15, 16].
The plan of the paper is as follows. Section II presents the model problem used in this paper. The weak formulation is presented in Section III. In Section IV, we show the existence and uniqueness of the solution. The discretization by mixed finite elements is described in Section V. Section VI introduces two types of a posteriori error bounds for the computed solution. Numerical experiments carried out within the framework of this publication, and their comparisons with other results, are shown in Section VII.
II. GOVERNING EQUATIONS

We will consider the model of viscous incompressible flow in an idealized, bounded, connected domain $\Omega$ in $\mathbb{R}^2$:

$$-\nu \nabla^2 u + u \cdot \nabla u + \nabla p = f \quad \text{in } \Omega, \tag{1}$$
$$\nabla\!\cdot u = 0 \quad \text{in } \Omega, \tag{2}$$
$$(\nu \nabla u - p I)\, n + A u = g \quad \text{on } \Gamma. \tag{3}$$

We also assume that $\Omega$ has a polygonal boundary $\Gamma := \partial\Omega$, so that $n$ is the usual outward-pointing normal. The vector field $u$ is the velocity of the flow and the scalar variable $p$ represents the pressure. Our mathematical model is the Navier-Stokes system with the new boundary condition (3), denoted $C_{a,b,c}$, where $\nu > 0$ is a given constant called the kinematic viscosity, $\nabla$ is the gradient, $\nabla\!\cdot$ is the divergence, $\nabla^2$ is the Laplacian operator, $f \in L^2(\Omega)$, $g \in L^2(\Gamma)$, and $A$ is a real matrix defined by

$$A(x,y) = \begin{pmatrix} a(x,y) & c(x,y) \\ c(x,y) & b(x,y) \end{pmatrix} \quad \text{for all } (x,y) \in \Gamma, \tag{4}$$

where $a$, $b$ and $c$ are continuous functions on $\Gamma$. There are two strictly positive constants $\alpha_1$ and $\beta_1$ such that

$$\alpha_1 \le X^T A(x,y)\, X \le \beta_1 \quad \text{for all } (x,y) \in \Gamma \text{ and all } X \in S, \tag{5}$$

where $S = \{ X \in \mathbb{R}^2 : \|X\|_2 = 1 \}$.
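The two-sided bound (5) says that the eigenvalues of the symmetric matrix $A(x,y)$ stay between $\alpha_1$ and $\beta_1$ uniformly along $\Gamma$. A quick numerical sanity check, with made-up sample coefficients $a$, $b$, $c$ (the paper only requires continuity and the two-sided bound, so these functions are illustrative assumptions):

```python
import numpy as np

def A_mat(t):
    # boundary point parametrized by t; build A = [[a, c], [c, b]]
    a = 2.0 + np.sin(t)
    b = 2.0 + np.cos(t)
    c = 0.5
    return np.array([[a, c], [c, b]])

ts = np.linspace(0.0, 2.0 * np.pi, 400)
eigs = np.array([np.linalg.eigvalsh(A_mat(t)) for t in ts])
alpha1, beta1 = eigs[:, 0].min(), eigs[:, 1].max()   # candidate bounds in (5)

# X^T A X stays inside [alpha1, beta1] for unit vectors X, as (5) requires
rng = np.random.default_rng(0)
for t in ts[::40]:
    X = rng.normal(size=2)
    X /= np.linalg.norm(X)
    q = X @ A_mat(t) @ X
    assert alpha1 - 1e-12 <= q <= beta1 + 1e-12

print(alpha1, beta1)   # both strictly positive for this choice of a, b, c
```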
III. THE WEAK FORMULATION

We define the following spaces:

$$H^1(\Omega) = \Big\{ u : \Omega \to \mathbb{R} \ \big/ \ u,\ \tfrac{\partial u}{\partial x},\ \tfrac{\partial u}{\partial y} \in L^2(\Omega) \Big\}, \tag{6}$$
$$\mathbf{H}^1(\Omega) = \big[ H^1(\Omega) \big]^2, \tag{7}$$
$$L^2_0(\Omega) = \Big\{ q \in L^2(\Omega) \ \big/ \ \int_\Omega q = 0 \Big\}, \tag{8}$$
$$\mathbf{H}^1_{0,n}(\Omega) = \big\{ v \in \mathbf{H}^1(\Omega) \ / \ v \cdot n = 0 \text{ on } \Gamma \big\}, \tag{9}$$
$$V^1_{0,n}(\Omega) = \big\{ v \in \mathbf{H}^1_{0,n}(\Omega) \ / \ \nabla\!\cdot v = 0 \text{ in } \Omega \big\}. \tag{10}$$

The standard weak formulation of the Navier-Stokes flow problem (1)-(3) is the following: find $u \in \mathbf{H}^1(\Omega)$ and $p \in L^2(\Omega)$ such that

$$\nu\!\int_\Omega \nabla u : \nabla v + \int_\Omega (u \cdot \nabla u) \cdot v - \int_\Omega p\, \nabla\!\cdot v + \int_\Gamma v^T A u = \int_\Omega f \cdot v + \int_\Gamma g \cdot v,$$
$$\int_\Omega q\, \nabla\!\cdot u = 0, \tag{11}$$

for all $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$.

Let the bilinear forms $A : \mathbf{H}^1_{0,n}(\Omega) \times \mathbf{H}^1_{0,n}(\Omega) \to \mathbb{R}$, $B : \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega) \to \mathbb{R}$ and $d : L^2_0(\Omega) \times L^2_0(\Omega) \to \mathbb{R}$ be given by

$$A(u,v) = \nu\!\int_\Omega \nabla u : \nabla v + \int_\Gamma v^T A u, \tag{12}$$
$$B(u,q) = \int_\Omega q\, \nabla\!\cdot u, \tag{13}$$
$$d(p,q) = \int_\Omega p\, q, \tag{14}$$

and the trilinear forms $C, D : \mathbf{H}^1_{0,n}(\Omega) \times \mathbf{H}^1_{0,n}(\Omega) \times \mathbf{H}^1_{0,n}(\Omega) \to \mathbb{R}$ by

$$C(u,v,z) = \int_\Omega (u \cdot \nabla v) \cdot z, \tag{15}$$
$$D(u,v,z) = A(v,z) + C(u,v,z). \tag{16}$$

Given the functional

$$L(v) = \int_\Gamma g \cdot v + \int_\Omega f \cdot v, \tag{17}$$

the underlying weak formulation (11) may be restated as: find $(u,p) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$ such that

$$A(u,v) + C(u,u,v) + B(v,p) = L(v),$$
$$B(u,q) = 0, \tag{18}$$

for all $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$.

In the sequel we can assume that $g = 0$.
IV. THE EXISTENCE AND UNIQUENESS OF THE SOLUTION

In this section we will study the existence and uniqueness of the solution of problem (18); for that we need the following results.

Theorem 4.1. There are two strictly positive constants $c_1$ and $c_2$ such that

$$c_1 \|v\|_{1,\Omega} \le \|v\|_{J,\Omega} \le c_2 \|v\|_{1,\Omega} \quad \text{for all } v \in \mathbf{H}^1_{0,n}(\Omega), \tag{19}$$

with

$$\|v\|_{J,\Omega} = \Big( \nu\!\int_\Omega \nabla v : \nabla v + \int_\Gamma v^T A v \Big)^{1/2}, \tag{20}$$
$$\|v\|_{1,\Omega} = \big( |v|_{1,\Omega}^2 + \|v\|_{0,\Omega}^2 \big)^{1/2}. \tag{21}$$

Proof. 1) The trace mapping $\mathbf{H}^1(\Omega) \to L^2(\Gamma)$ is continuous (see [6], Theorems 1 and 2), so there exists $c > 0$ such that $\|v\|_{0,\Gamma} \le c\, \|v\|_{1,\Omega}$ for all $v \in \mathbf{H}^1(\Omega)$. Using (5) gives

$$\alpha_1 \|v\|_{0,\Gamma}^2 \le \int_\Gamma v^T A v \le \beta_1 \|v\|_{0,\Gamma}^2, \tag{22}$$

so $\|v\|_{J,\Omega} \le c_2 \|v\|_{1,\Omega}$ for all $v \in \mathbf{H}^1(\Omega)$, with $c_2 = (\nu + c^2 \beta_1)^{1/2}$.

2) On the other hand, according to (5.55) in [1], there exists a constant $\kappa > 0$ such that $\|v\|_{0,\Omega}^2 \le \kappa \big( |v|_{1,\Omega}^2 + \|v\|_{0,\Gamma}^2 \big)$. Using (22) again gives $c_1 \|v\|_{1,\Omega} \le \|v\|_{J,\Omega}$ for all $v \in \mathbf{H}^1(\Omega)$, with a constant $c_1 > 0$ depending only on $\nu$, $\alpha_1$ and $\kappa$. Finally,

$$c_1 \|v\|_{1,\Omega} \le \|v\|_{J,\Omega} \le c_2 \|v\|_{1,\Omega} \quad \text{for all } v \in \mathbf{H}^1(\Omega).$$

This result allows us to prove that $\big( \mathbf{H}^1_{0,n}(\Omega), \|\cdot\|_{J,\Omega} \big)$ is a Hilbert space, which is a necessary condition for obtaining the existence and uniqueness of the solution.

Theorem 4.2. $\big( \mathbf{H}^1_{0,n}(\Omega), \|\cdot\|_{J,\Omega} \big)$ is a real Hilbert space.

Proof. $\big( \mathbf{H}^1(\Omega), \|\cdot\|_{1,\Omega} \big)$ is a real Hilbert space, $\mathbf{H}^1_{0,n}(\Omega)$ is closed in $\mathbf{H}^1(\Omega)$, and $\|\cdot\|_{1,\Omega}$ and $\|\cdot\|_{J,\Omega}$ are equivalent norms; hence $\big( \mathbf{H}^1_{0,n}(\Omega), \|\cdot\|_{J,\Omega} \big)$ is a real Hilbert space for the two norms.

Theorem 4.3.
1) $A(u,v) \le \|u\|_{J,\Omega} \|v\|_{J,\Omega}$ for all $u, v \in \mathbf{H}^1_{0,n}(\Omega)$. (23)
2) $A$ is $\mathbf{H}^1_{0,n}(\Omega)$-elliptic for the norm $\|\cdot\|_{J,\Omega}$, and

$$A(v,v) = \|v\|_{J,\Omega}^2 \quad \text{for all } v \in \mathbf{H}^1_{0,n}(\Omega). \tag{24}$$

Proof. It is easy.

Theorem 4.4.
1) $B(v,q) \le \sqrt{2/\nu}\, \|q\|_{0,\Omega} \|v\|_{J,\Omega}$ for all $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$. (25)
2) The bilinear form $B$ satisfies the inf-sup condition: there exists a constant $\beta > 0$ such that

$$\sup_{v \in \mathbf{H}^1_{0,n}(\Omega)} \frac{B(v,q)}{\|v\|_{J,\Omega}} \ge \beta\, \|q\|_{0,\Omega} \quad \text{for all } q \in L^2_0(\Omega). \tag{26}$$

Proof. 1) Let $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$; then

$$B(v,q) \le \|q\|_{0,\Omega} \|\nabla\!\cdot v\|_{0,\Omega} \le \sqrt{2}\, \|q\|_{0,\Omega}\, |v|_{1,\Omega} \le \sqrt{2/\nu}\, \|q\|_{0,\Omega} \|v\|_{J,\Omega}.$$

2) Let $q \in L^2_0(\Omega)$. By [6] there exists $\beta' > 0$ such that

$$\sup_{v \in \mathbf{H}^1_0(\Omega)} \frac{B(v,q)}{\|v\|_{1,\Omega}} \ge \beta'\, \|q\|_{0,\Omega},$$

where $\mathbf{H}^1_0(\Omega) = \{ v \in \mathbf{H}^1(\Omega) : v = 0 \text{ on } \Gamma \} \subset \mathbf{H}^1_{0,n}(\Omega)$. Using (19) gives

$$\sup_{v \in \mathbf{H}^1_{0,n}(\Omega)} \frac{B(v,q)}{\|v\|_{J,\Omega}} \ge \beta\, \|q\|_{0,\Omega}, \quad \text{with } \beta = \frac{\beta'}{c_2}.$$

Theorem 4.5.
1) There exists a constant $m > 0$ such that

$$C(u,v,z) \le m\, \|u\|_{J,\Omega} \|v\|_{J,\Omega} \|z\|_{J,\Omega} \quad \text{for all } u, v, z \in \mathbf{H}^1_{0,n}(\Omega). \tag{27}$$

2) $C(z,u,v) = -C(z,v,u)$ for all $u, v, z \in V^1_{0,n}(\Omega)$. (28)
3) $C(v,u,u) = 0$ for all $u, v \in V^1_{0,n}(\Omega)$. (29)
4) $D(w,v,v) = A(v,v) = \|v\|_{J,\Omega}^2$ for all $w, v \in V^1_{0,n}(\Omega)$. (30)
5) If $u_m \rightharpoonup u$ weakly in $V^1_{0,n}(\Omega)$ (as $m \to \infty$), then $C(u_m,u_m,v) \to C(u,u,v)$. (31)

Proof.
1) Let $u, v, z \in \mathbf{H}^1_{0,n}(\Omega)$. We have $C(u,v,z) \le m_3\, \|u\|_{1,\Omega} \|v\|_{1,\Omega} \|z\|_{1,\Omega}$ (see [6]), and the claim follows from (19) with $m = m_3 / c_1^3$.
2) Let $u, v, z \in V^1_{0,n}(\Omega)$. Then

$$C(z,u,v) + C(z,v,u) = \int_\Omega z \cdot \nabla (u \cdot v).$$

By the Green formula,

$$C(z,u,v) + C(z,v,u) = \int_{\partial\Omega} (z \cdot n)(u \cdot v) - \int_\Omega (\nabla\!\cdot z)(u \cdot v).$$

Since $z \in V^1_{0,n}(\Omega)$, $z \cdot n = 0$ on $\Gamma$ and $\nabla\!\cdot z = 0$; finally $C(z,u,v) = -C(z,v,u)$.
3) It is easy; just take $u = v$ in (28).
4) It suffices to apply (29).
5) The same proof as V. Girault and P. A. Raviart in [6], page 115.

According to Theorems 1.2 and 1.4, Chapter IV in [6], the results (18)-(30) ensure the existence of at least one pair $(u,p) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$ satisfying (18). We define

$$N = \sup_{u,v,z \in \mathbf{H}^1_{0,n}(\Omega)} \frac{C(u,v,z)}{\|u\|_{J,\Omega} \|v\|_{J,\Omega} \|z\|_{J,\Omega}}, \tag{32}$$

$$\|f\|_* = \sup_{v \in V^1_{0,n}(\Omega)} \frac{\int_\Omega f \cdot v}{\|v\|_{J,\Omega}}. \tag{33}$$

Then a well-known (sufficient) condition for uniqueness is that the forcing function is small in the sense that $N \|f\|_* < 1$ (it suffices to apply Theorems 1.3 and 1.4, Chapter IV in [6]).
V. MIXED FINITE ELEMENT APPROXIMATION

In this section we assume that $f$, $a$, $b$ and $c$ are polynomials.

Let $\{T_h\}$, $h > 0$, be a family of rectangulations of $\Omega$. For any $T \in T_h$, $\omega_T$ is the set of rectangles sharing at least one edge with the element $T$, and $\tilde\omega_T$ is the set of rectangles sharing at least one vertex with $T$. Also, for an element edge $E$, $\omega_E$ denotes the union of the rectangles sharing $E$, while $\tilde\omega_E$ is the set of rectangles sharing at least one vertex with $E$. Next, we denote by $\varepsilon(T)$ and $N_T$ the set of the four edges and the set of vertices of $T$, respectively. We let $\varepsilon_h = \bigcup_{T \in T_h} \varepsilon(T)$ denote the set of all edges, split into interior and boundary edges:

$$\varepsilon_h = \varepsilon_{h,\Omega} \cup \varepsilon_{h,\Gamma},$$

where $\varepsilon_{h,\Omega} = \{ E \in \varepsilon_h : E \subset \Omega \}$ and $\varepsilon_{h,\Gamma} = \{ E \in \varepsilon_h : E \subset \partial\Omega \}$. We denote by $h_T$ the diameter of an element $T$, by $h_E$ the diameter of an edge $E$ of $T$, and we set $h = \max_{T \in T_h} h_T$.

A discrete weak formulation is defined using finite dimensional spaces $X^1_h \subset \mathbf{H}^1_{0,n}(\Omega)$ and $M_h \subset L^2_0(\Omega)$. The discrete version of (18) is: find $u_h \in X^1_h$ and $p_h \in M_h$ such that

$$A(u_h,v_h) + C(u_h,u_h,v_h) + B(v_h,p_h) = L(v_h),$$
$$B(u_h,q_h) = 0, \tag{36}$$

for all $v_h \in X^1_h$ and $q_h \in M_h$.

We define appropriate bases for the finite element spaces, leading to a nonlinear system of algebraic equations. Linearization of this system using Newton iteration gives the finite dimensional system: find $\delta u_h \in X^1_h$ and $\delta p_h \in M_h$ such that

$$\nu\!\int_\Omega \nabla \delta u_h : \nabla v_h + \int_{\partial\Omega} v_h^T A\, \delta u_h + C(\delta u_h, u_h, v_h) + C(u_h, \delta u_h, v_h) + B(v_h, \delta p_h) = R_k(v_h),$$
$$B(\delta u_h, q_h) = r_k(q_h), \tag{37}$$

for all $v_h \in X^1_h$ and $q_h \in M_h$. Here, $R_k(v_h)$ and $r_k(q_h)$ are the nonlinear residuals associated with the discrete formulation (36). To define the corresponding linear algebra problem, we use a set of vector-valued basis functions $\{\phi_i\}_{i=1,\dots,n_u}$, so that

$$u_h = \sum_{j=1}^{n_u} u_j\, \phi_j, \qquad \delta u_h = \sum_{j=1}^{n_u} \Delta u_j\, \phi_j. \tag{38}$$

We introduce a set of pressure basis functions $\{\psi_k\}_{k=1,\dots,n_p}$ and set

$$p_h = \sum_{k=1}^{n_p} p_k\, \psi_k, \qquad \delta p_h = \sum_{k=1}^{n_p} \Delta p_k\, \psi_k, \tag{39}$$

where $n_u$ and $n_p$ are the numbers of velocity and pressure basis functions, respectively. We find that the discrete formulation (37) can be expressed as a system of linear equations

$$\begin{pmatrix} A_0 + N + W & B_0^T \\ B_0 & 0 \end{pmatrix} \begin{pmatrix} \Delta U \\ \Delta P \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}. \tag{40}$$

This system is referred to as the discrete Newton problem. The matrix $A_0$ is the vector Laplacian matrix and $B_0$ is the divergence matrix:

$$A_0 = [a_{ij}], \qquad a_{ij} = \nu\!\int_\Omega \nabla\phi_i : \nabla\phi_j + \int_\Gamma \phi_i^T A\, \phi_j, \tag{41}$$
$$B_0 = [b_{kj}], \qquad b_{kj} = \int_\Omega \psi_k\, \nabla\!\cdot \phi_j. \tag{42}$$

The vector-convection matrix $N$ and the Newton derivative matrix $W$ are given by

$$N = [n_{ij}], \qquad n_{ij} = \int_\Omega (u_h \cdot \nabla \phi_j) \cdot \phi_i, \tag{43}$$
$$W = [w_{ij}], \qquad w_{ij} = \int_\Omega (\phi_j \cdot \nabla u_h) \cdot \phi_i, \tag{44}$$

for $i, j = 1,\dots,n_u$ and $k = 1,\dots,n_p$. The right-hand side vector in (40) is

$$f = [f_i], \qquad f_i = \int_\Omega f \cdot \phi_i + \int_\Gamma g \cdot \phi_i, \tag{45}$$

for $i = 1,\dots,n_u$. For Picard iteration, we obtain the discrete problem

$$\begin{pmatrix} A_0 + N & B_0^T \\ B_0 & 0 \end{pmatrix} \begin{pmatrix} \Delta U \\ \Delta P \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}. \tag{46}$$
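The block structure of the discrete Newton problem (40) can be exercised on a tiny stand-in system. The matrices below are random placeholders, not a finite element assembly; the sketch only shows how one Newton update $(\Delta U, \Delta P)$ would be computed and that it satisfies both block equations:

```python
import numpy as np

# Toy saddle-point solve mimicking (40):
#   [ A0 + N + W   B0^T ] [dU]   [f]
#   [ B0           0    ] [dP] = [0]
rng = np.random.default_rng(1)
n_u, n_p = 6, 3                          # velocity / pressure unknowns
M = rng.normal(size=(n_u, n_u))
A0 = M @ M.T + n_u * np.eye(n_u)         # SPD stand-in for (41)
Nc = 0.1 * rng.normal(size=(n_u, n_u))   # convection stand-in for (43)
W = 0.1 * rng.normal(size=(n_u, n_u))    # Newton derivative stand-in for (44)
B = rng.normal(size=(n_p, n_u))          # divergence stand-in for (42)
f = rng.normal(size=n_u)                 # right-hand side stand-in for (45)

K = np.block([[A0 + Nc + W, B.T],
              [B, np.zeros((n_p, n_p))]])
sol = np.linalg.solve(K, np.concatenate([f, np.zeros(n_p)]))
dU, dP = sol[:n_u], sol[n_u:]

# Both block equations of the Newton system hold for the computed update:
print(np.allclose((A0 + Nc + W) @ dU + B.T @ dP, f))
print(np.allclose(B @ dU, 0, atol=1e-10))
```

Dropping `W` from the top-left block gives the Picard variant (46).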
VI. A POSTERIORI ERROR ESTIMATOR

In this section we propose two types of a posteriori error indicator, a residual error estimator and a local Poisson problem estimator, which are shown to give global error estimates that are equivalent to the true error.

A. A Residual Error Estimator

The bubble functions on the reference element $\tilde T = (0,1) \times (0,1)$ are defined as follows:

$$b_{\tilde T} = 2^4\, x(1-x)\, y(1-y),$$
$$b_{\tilde E_1, \tilde T} = 2^2\, x(1-x)(1-y), \qquad b_{\tilde E_2, \tilde T} = 2^2\, y(1-y)\, x,$$
$$b_{\tilde E_3, \tilde T} = 2^2\, x(1-x)\, y, \qquad b_{\tilde E_4, \tilde T} = 2^2\, y(1-y)(1-x).$$

Here $b_{\tilde T}$ is the reference element bubble function, and $b_{\tilde E_i, \tilde T}$, $i = 1{:}4$, are reference edge bubble functions. For any $T \in T_h$, the element bubble function $b_T$ is given by $b_T \circ F_T = b_{\tilde T}$ and the element edge bubble functions by $b_{E_i,T} \circ F_T = b_{\tilde E_i, \tilde T}$, where $F_T$ is the affine map from $\tilde T$ to $T$.

For an interior edge $E \in \varepsilon_{h,\Omega}$, $b_E$ is defined piecewise, so that $b_E|_{T_i} = b_{E,T_i}$, $i = 1{:}2$, where $E = T_1 \cap T_2$. For a boundary edge $E \in \varepsilon_{h,\Gamma}$, $b_E = b_{E,T}$, where $T$ is the rectangle such that $E \in \varepsilon(T)$.

With these bubble functions, the authors of [3] (Lemma 4.1) established the following lemma.

Lemma 6.1. Let $T$ be an arbitrary rectangle in $T_h$. For any $v_T \in P_{k_0}(T)$ and $v_E \in P_{k_1}(E)$, the following inequalities hold:

$$c_k \|v_T\|_{0,T} \le \|b_T^{1/2} v_T\|_{0,T} \le C_k \|v_T\|_{0,T}, \tag{47}$$
$$|b_T v_T|_{1,T} \le C_k\, h_T^{-1} \|v_T\|_{0,T}, \tag{48}$$
$$c_k \|v_E\|_{0,E} \le \|b_E^{1/2} v_E\|_{0,E} \le C_k \|v_E\|_{0,E}, \tag{49}$$
$$\|b_E v_E\|_{0,T} \le C_k\, h_E^{1/2} \|v_E\|_{0,E}, \tag{50}$$
$$|b_E v_E|_{1,T} \le C_k\, h_E^{-1/2} \|v_E\|_{0,E}, \tag{51}$$

where $c_k$ and $C_k$ are two constants which only depend on the element aspect ratio and the polynomial degrees $k_0$ and $k_1$. Here, $k_0$ and $k_1$ are fixed, so $c_k$ and $C_k$ can be associated with generic constants $c$ and $C$. In addition, $v_E$, which is only defined on the edge $E$, also denotes its natural extension to the element $T$.
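The first inequality in (47) can be checked numerically on the reference element: for polynomials of a fixed degree, the ratio $\|b_{\tilde T}^{1/2} v\|_{0,\tilde T} / \|v\|_{0,\tilde T}$ stays bounded away from zero and below one. The degree-1 polynomial space and the midpoint-rule quadrature below are illustrative choices, not part of the paper's proof:

```python
import numpy as np

n = 400
s = (np.arange(n) + 0.5) / n                 # midpoint-rule nodes on (0,1)
X, Y = np.meshgrid(s, s, indexing="ij")
w = 1.0 / n**2                               # quadrature weight per cell
bT = 2**4 * X * (1 - X) * Y * (1 - Y)        # reference element bubble b_T

assert bT.max() <= 1.0 + 1e-12               # 0 <= b_T <= 1 on the element

rng = np.random.default_rng(0)
ratios = []
for _ in range(200):
    a0, a1, a2 = rng.normal(size=3)
    v = a0 + a1 * X + a2 * Y                 # random degree-1 polynomial
    num = np.sqrt(np.sum(w * bT * v**2))     # ||b_T^{1/2} v||_{0,T}
    den = np.sqrt(np.sum(w * v**2))          # ||v||_{0,T}
    ratios.append(num / den)

ratios = np.array(ratios)
print(ratios.min(), ratios.max())            # bounded away from 0, below 1
```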
From the inequalities (50) and (51), we establish the following lemma.

Lemma 6.2. Let $T$ be a rectangle and $E \in \varepsilon(T) \cap \varepsilon_{h,\Gamma}$. For any $v_E \in P_{k_1}(E)$, the following inequality holds:

$$\|b_E v_E\|_{J,T} \le C\, h_E^{-1/2}\, \|v_E\|_{0,E}. \tag{52}$$

Proof. Since $v_E b_E = 0$ on the other three edges of the rectangle $T$, it can be extended to the whole of $\Omega$ by setting $v_E b_E = 0$ in $\Omega \setminus T$; then

$$\|v_E b_E\|_{1,\Omega} = \|v_E b_E\|_{1,T} \quad \text{and} \quad \|v_E b_E\|_{J,\Omega} = \|v_E b_E\|_{J,T}.$$

Using the inequalities (19), (50) and (51) gives

$$\|v_E b_E\|_{J,T} = \|v_E b_E\|_{J,\Omega} \le c_2 \|v_E b_E\|_{1,\Omega} = c_2 \|v_E b_E\|_{1,T} = c_2 \big( \|v_E b_E\|_{0,T}^2 + |v_E b_E|_{1,T}^2 \big)^{1/2} \le c_2 C_k \big( h_E + h_E^{-1} \big)^{1/2} \|v_E\|_{0,E} \le C\, h_E^{-1/2}\, \|v_E\|_{0,E},$$

with $C = c_2 C_k (1 + D^2)^{1/2}$, where $D$ is the diameter of $\Omega$.

We recall some quasi-interpolation estimates in the following lemma.

Lemma 6.3 (Clement interpolation estimate). Given $v \in \mathbf{H}^1(\Omega)$, let $v_h \in X^1_h$ be the quasi-interpolant of $v$ defined by averaging as in [4]. For any $T \in T_h$,

$$\|v - v_h\|_{0,T} \le C\, h_T\, \|v\|_{1,\tilde\omega_T}, \tag{53}$$

and for all $E \in \varepsilon(T)$,

$$\|v - v_h\|_{0,E} \le C\, h_E^{1/2}\, \|v\|_{1,\tilde\omega_E}. \tag{54}$$
We let $(u,p)$ denote the solution of (18) and $(u_h,p_h)$ the solution of (36), computed on a rectangular subdivision $T_h$. Our aim is to estimate the velocity and pressure errors

$$e = u - u_h \in \mathbf{H}^1_{0,n}(\Omega), \qquad \epsilon = p - p_h \in L^2_0(\Omega).$$

The element contribution $\eta_{R,T}$ of the residual error estimator $\eta_R$ is given by

$$\eta_{R,T}^2 = h_T^2\, \|R_T\|_{0,T}^2 + \|R_T^{div}\|_{0,T}^2 + \sum_{E \in \varepsilon(T)} h_E\, \|R_E\|_{0,E}^2, \tag{55}$$

and the components in (55) are given by

$$R_T = \big\{ f + \nu \nabla^2 u_h - u_h \cdot \nabla u_h - \nabla p_h \big\} \big|_T,$$
$$R_T^{div} = \big\{ \nabla\!\cdot u_h \big\} \big|_T,$$
$$R_E = \begin{cases} \tfrac{1}{2} \big[\!\big[ (\nu \nabla u_h - p_h I)\, n_E \big]\!\big], & E \in \varepsilon_{h,\Omega}, \\ g - A u_h - (\nu \nabla u_h - p_h I)\, n, & E \in \varepsilon_{h,\Gamma}, \end{cases}$$

with the key contribution coming from the stress jump associated with an edge $E$ adjoining elements $T$ and $S$:

$$\big[\!\big[ \nu \nabla u_h - p_h I \big]\!\big] = \big( \nu \nabla u_h - p_h I \big)\big|_T - \big( \nu \nabla u_h - p_h I \big)\big|_S.$$

The global residual error estimator is given by

$$\eta_R^2 = \sum_{T \in T_h} \eta_{R,T}^2.$$

Our aim is to bound $\|u - u_h\|_X$ and $\|p - p_h\|_M$ with respect to the velocity norm $\|v\|_X = \|v\|_{J,\Omega}$ and the quotient norm $\|p\|_M = \|p\|_{0,\Omega}$ for the pressure.

For any $T \in T_h$ and $E \in \varepsilon(T)$, we define the following two functions:

$$w_T = R_T\, b_T, \qquad w_E = R_E\, b_E.$$

Note that:
- $w_T = 0$ on $\partial T$;
- if $E \in \varepsilon_{h,\Omega}$, then $w_E = 0$ on $\partial \omega_E$;
- if $E \in \varepsilon_{h,\Gamma}$, then $w_E = 0$ on the other three edges of the rectangle $T$.

Both $w_T$ and $w_E$ can be extended to the whole of $\Omega$ by setting

$$w_T = 0 \ \text{in } \Omega \setminus T, \qquad w_E = 0 \ \text{in } \Omega \setminus \omega_E \ \text{if } E \in \varepsilon_{h,\Omega}, \qquad w_E = 0 \ \text{in } \Omega \setminus T \ \text{if } E \in \varepsilon_{h,\Gamma}.$$

With these two functions we have the following lemmas.
Lemma 6.4. For any $T \in T_h$ we have

$$\int_T f \cdot w_T = \int_T (\nu \nabla u - p I) : \nabla w_T + \int_T (u \cdot \nabla u) \cdot w_T. \tag{56}$$

Proof. Using (1) gives

$$\int_T f \cdot w_T = \int_T \big( -\nu \nabla^2 u + u \cdot \nabla u + \nabla p \big) \cdot w_T.$$

By applying the Green formula and $w_T = 0$ on $\partial T$, we obtain

$$\int_T f \cdot w_T = -\int_{\partial T} \big( (\nu \nabla u - p I)\, n \big) \cdot w_T + \int_T (\nu \nabla u - p I) : \nabla w_T + \int_T (u \cdot \nabla u) \cdot w_T = \int_T (\nu \nabla u - p I) : \nabla w_T + \int_T (u \cdot \nabla u) \cdot w_T.$$

Lemma 6.5.
i) If $E \in \varepsilon_{h,\Omega}$, we have

$$\int_{\omega_E} f \cdot w_E = \int_{\omega_E} (\nu \nabla u - p I) : \nabla w_E + \int_{\omega_E} (u \cdot \nabla u) \cdot w_E. \tag{57}$$

ii) If $E \in \varepsilon_{h,\Gamma}$, we have

$$\int_T f \cdot w_E = \int_T (\nu \nabla u - p I) : \nabla w_E + \int_E (A u - g) \cdot w_E + \int_T (u \cdot \nabla u) \cdot w_E. \tag{58}$$

Proof.
i) The same proof as (56).
ii) If $E \in \varepsilon_{h,\Gamma}$, we have

$$\int_T f \cdot w_E = \int_T \big( -\nu \nabla^2 u + u \cdot \nabla u + \nabla p \big) \cdot w_E = \int_T (\nu \nabla u - p I) : \nabla w_E - \int_{\partial T} \big( (\nu \nabla u - p I)\, n \big) \cdot w_E + \int_T (u \cdot \nabla u) \cdot w_E.$$

We have $(\nu \nabla u - p I)\, n = g - A u$ on $E$ (by (3)) and $w_E = 0$ on the other three edges of the rectangle $T$; then

$$\int_T f \cdot w_E = \int_T (\nu \nabla u - p I) : \nabla w_E + \int_E (A u - g) \cdot w_E + \int_T (u \cdot \nabla u) \cdot w_E.$$
We define the bilinear form

$$G\big( (u,p); (v,q) \big) = A(u,v) + B(u,q) + B(v,p).$$

We define also the following functional:

$$\Theta_K(x,y,v) = \int_K (x \cdot \nabla x) \cdot v - \int_K (y \cdot \nabla y) \cdot v, \quad \text{for all } K \subset \Omega \text{ and } v \in \mathbf{H}^1_{0,n}(\Omega).$$

Lemma 6.6. There exist $C > 0$ and $h_0 > 0$ such that

$$\Theta_K(u,u_h,v) \le C\, \|e\|_{J,K} \|v\|_{J,K} \quad \text{for all } v \in \mathbf{H}^1_{0,n}(\Omega),\ h \le h_0 \text{ and } K \subset \Omega. \tag{59}$$

Proof. Using (27), (32) and (35), we have, for $v \in \mathbf{H}^1_{0,n}(\Omega)$,

$$\Theta_K(u,u_h,v) = \int_K (u \cdot \nabla u) \cdot v - \int_K (u_h \cdot \nabla u_h) \cdot v = \int_K (u \cdot \nabla e) \cdot v + \int_K (e \cdot \nabla u_h) \cdot v \le N \big( \|u\|_{J,K} + \|u_h\|_{J,K} \big) \|e\|_{J,K} \|v\|_{J,K} \le N \big( 2\|u\|_{J,K} + \|e_h\|_{J,K} \big) \|e\|_{J,K} \|v\|_{J,K} \le \big( 2\alpha + N \|e_h\|_{J,K} \big) \|e\|_{J,K} \|v\|_{J,K}. \tag{60}$$

We have $\lim_{h \to 0} u_h = u$ in $\mathbf{H}^1_{0,n}(\Omega)$, so there exists $h_0 > 0$ such that $\|e_h\|_{J,K} = \|u - u_h\|_{J,K} \le 1$ for all $h \le h_0$. Using this result and (60), we obtain

$$\Theta_K(u,u_h,v) \le C\, \|e\|_{J,K} \|v\|_{J,K} \quad \text{for all } v \in \mathbf{H}^1_{0,n}(\Omega) \text{ and } h \le h_0, \ \text{with } C = 2\alpha + N.$$
We have

$$G\big( (e,\epsilon); (v,q) \big) = A(u - u_h, v) + B(u - u_h, q) + B(v, p - p_h) = G\big( (u,p); (v,q) \big) - G\big( (u_h,p_h); (v,q) \big) = L(v) - C(u,u,v) - A(u_h,v) - B(v,p_h) - B(u_h,q).$$

Then

$$G\big( (e,\epsilon); (v,q) \big) + \Theta_\Omega(u,u_h,v) = L(v) - C(u_h,u_h,v) - A(u_h,v) - B(v,p_h) - B(u_h,q) \tag{61}$$

for all $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$. We find that the errors $e \in \mathbf{H}^1_{0,n}(\Omega)$ and $\epsilon \in L^2_0(\Omega)$ satisfy the non-linear equation

$$G\big( (e,\epsilon); (v,q) \big) + \Theta_\Omega(u,u_h,v) = \sum_{T \in T_h} \Big\{ \int_T R_T \cdot v + \int_T R_T^{div}\, q + \sum_{E \in \varepsilon(T)} \int_E R_E \cdot v \Big\} \tag{62}$$

for all $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$, where $R_T^{div} = \nabla\!\cdot u_h|_T$ is the divergence residual from (55).

We define a pair $(\Phi,\psi) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$ to be the Ritz projection of the modified residuals:

$$A(\Phi,v) + d(\psi,q) = A(e,v) + B(e,q) + B(v,\epsilon) + \Theta_\Omega(u,u_h,v) = G\big( (e,\epsilon); (v,q) \big) + \Theta_\Omega(u,u_h,v) \tag{63}$$

for all $(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$.
Lemma 6.7.

$$A(\Phi, v_h) = 0 \quad \text{for all } v_h \in X^1_h. \tag{64}$$

Proof. We set $q = 0$ and $v = v_h$ in (63) and use (18) and (36):

$$A(\Phi, v_h) = A(e,v_h) + B(v_h,\epsilon) + \Theta_\Omega(u,u_h,v_h) = A(u,v_h) - A(u_h,v_h) + B(v_h,p) - B(v_h,p_h) + C(u,u,v_h) - C(u_h,u_h,v_h) = L(v_h) - L(v_h) = 0.$$

Next, we establish the equivalence between the norms of $(e,\epsilon) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$ and the norms of the solution $(\Phi,\psi) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$ of (63).
Theorem 6.8. Let the conditions of Theorem 4.6 hold. There exist two positive constants $k_1$ and $k_2$, independent of $h$, such that

$$k_1 \big\{ \|e\|_{J,\Omega}^2 + \|\epsilon\|_{0,\Omega}^2 \big\} \le \|\Phi\|_{J,\Omega}^2 + \|\psi\|_{0,\Omega}^2 \le k_2 \big\{ \|e\|_{J,\Omega}^2 + \|\epsilon\|_{0,\Omega}^2 \big\}.$$

Proof. The same proof as Theorem 3 in [9].
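The kind of equivalence asserted above can be illustrated in a much simpler setting. For the one-dimensional model problem $-u''=1$ on $(0,1)$ with P1 elements, the 1D counterpart of the residual estimator (55) (element load term plus slope jumps at interior nodes) tracks the true energy error up to a nearly constant factor under refinement. This is only an analogy, not the paper's 2D Navier-Stokes estimator:

```python
import numpy as np

def p1_poisson(n):
    """P1 finite elements for -u'' = 1, u(0)=u(1)=0, uniform mesh, n elements."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    K = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, h * np.ones(n - 1))   # exact load for f = 1
    return x, u, h

def eta(x, u, h):
    # 1D analogue of (55): eta^2 = sum_T h_T^2 ||f||_{0,T}^2 + sum_E h [u_h']^2
    slopes = np.diff(u) / h
    jumps = np.diff(slopes)                 # [u_h'] at interior nodes
    return np.sqrt(len(slopes) * h**2 * h + h * np.sum(jumps**2))

def energy_error(x, u, h):
    # exact solution u = x(1-x)/2, so u' = 1/2 - x; 2-point Gauss is exact here
    slopes = np.diff(u) / h
    err2 = 0.0
    for i, s in enumerate(slopes):
        for g in (-1.0, 1.0):
            xg = x[i] + h * (1 + g / np.sqrt(3.0)) / 2
            err2 += (h / 2) * ((0.5 - xg) - s) ** 2
    return np.sqrt(err2)

ratios = []
for n in (8, 16, 32, 64):
    x, u, h = p1_poisson(n)
    ratios.append(eta(x, u, h) / energy_error(x, u, h))
print(ratios)   # roughly constant: estimator equivalent to the energy error
```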
Theorem 6.9. For all $(w,s) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$, we have

$$\sup_{(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)} \frac{A(w,v) + d(s,q)}{\|v\|_{J,\Omega} + \|q\|_{0,\Omega}} \ge \frac{1}{2} \big( \|w\|_{J,\Omega} + \|s\|_{0,\Omega} \big). \tag{65}$$

Proof. Let $(w,s) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)$. Taking $(v,q) = (w,0)$ gives

$$\sup_{(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)} \frac{A(w,v) + d(s,q)}{\|v\|_{J,\Omega} + \|q\|_{0,\Omega}} \ge \frac{A(w,w) + 0}{\|w\|_{J,\Omega}} = \|w\|_{J,\Omega}. \tag{66}$$

We also have, taking $(v,q) = (0,s)$,

$$\sup_{(v,q) \in \mathbf{H}^1_{0,n}(\Omega) \times L^2_0(\Omega)} \frac{A(w,v) + d(s,q)}{\|v\|_{J,\Omega} + \|q\|_{0,\Omega}} \ge \frac{0 + d(s,s)}{\|s\|_{0,\Omega}} = \|s\|_{0,\Omega}. \tag{67}$$

We gather (66) and (67) to get (65).
Theorem 6.10. For any mixed finite element
approximation (not necessarily inf-sup stable) defined on
rectangular grids
h
T , the residual estimator
R
q satisfies:
,
, 0 ,
R
J
C e q c s +
O O
and
{ } .
2
1
'
2
' , 0
2
' ,
,
|
|
.
|
\
|
+ s
e
T
T
T T J
T R
e C
e
c q
Note that the constant C in the local lower bound is
independent of the domain, and
. . :
2
, } }
c
+ V V =
T T
T
T J
e A e e e e
v
Proof. To establish the upper bound we let
1 2
0
1
0 ,
and ) ( ) ( ) , (
h h n
X v L H q v e O O e
be the clement
interpolation of . v
## Using (63), (61) and (62), give
(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 4, No.3, 2013
152 | P a g e
www.ijacsa.thesai.org
{
}
1
1
) , ( ) , ( ) , (
) , ( ) , ( ) , ( ) , (
2
1
2
, 0
2
1
2
, 0
2
1
2
, 0 2
2
1
2
, 0
2
, 0 , 0
, 0
, 0
, 0
, 0
|
|
.
|
\
|
|
|
.
|
\
|
+
|
|
.
|
\
|
|
|
.
|
\
|
s
+
+ s
)
`
+ =
+ = +
e c e e c e
e e
c e e
e c c
h h
h h
h
h
T T T E
E
h
E T T T E
T
E E
T T
T
h
T
T T
T
T T
E
T
T
T T
E
h
T
E
T T
T
h
T
T
T T T E
T T E h E T h T
h
v v
h
R h
v v
h
R h
R q
v v R v v R
q R v v R v v R
q d v v A q d v A
Using (52) and (53), then gives
{ }
{ } .
) , ( ) , (
2
1
2
, 0
2
, 0
2
, 0
2
2
1
2
, 0
2
,
|
|
.
|
\
|
+ +
|
|
.
|
\
|
+ s +
e
c e
e
h
h
T T
E
E
T E
E
T
T
T
T T
T T
T T J
R h R R h
q v C q d v A
Finally, using (65) gives:
{ }
e
c e
+ + s +
h
T T
E
E
T E
E
T
T
T
T T
T T J
R h R R h C
2
, 0
2
, 0
2
, 0
2
, 0 ,
According to theorem 6.8, we have
{ }
e
c e
O
+ + s +
h
T T
E
E
T E
E
T
T
T
T T
T T J
R h R R h C e
2
, 0
2
, 0
2
, 0
2
, 0 ,
c
This establishes the upper bound.
Turning to the local lower bound. First, for the element
residual part, we have:
} }
} }
} }
V V +
V V =
V V V + =
c T
T h h
T
T h h
T T
T h h T
T T
T h h h h T T
w u u w n I p u
w I p u w f
w p u u u f w R
). ( . ) (
: ) ( .
). . ( .
2
v
v
v
See that T in 0 c =
T
w , using (56) and (57) gives:
( )
) e C( .
gives (48), Using
, then T, in 0 Since
' ' ) e ( C'
, , , : ) ( .
, 0
1
2
1
2
, 0
2
,
, 1 ,
, , , 1 , 0 , 1
T
T T
T T J
T
T T
T
T
T J
T
T J T J T
T
T T
T h
T T
T T T
R h w R
w w w
w e C w
w u u T w I e w R
+ s
= c =
+ + s
+ V V =
}
} }
c
v
c
u c v
In addition, from the inverse inequality (47)
, .
2
T 0,
2
, 0
2
1
T k
T
T T
T
T T
R c b R w R
> =
}
Thus, ) e (
2
, 0
2
,
2
T 0,
2
T T J
T T
C R h c + s
(68)
Next comes the divergence part,
(69)
2 2
2
) ( .
.
, ,
, 1
, 0
, 0 , 0
T J T J
h
T
h
T
h
T
h
T
T
e u u
u u
u u
u R
v v
= s
s
V =
V =
Finally, we need to estimate the jump term. For an edge
have We , if
,O
c e
h
T E c
}
}
=
c
V =
E
i
T
E h h E E
i
w n I p u w R
2 : 1
. ) ( . 2
} }
=
V V + V V =
2 : 1
2
. ). ( : ) (
i
T
E h h E h h
i E
w p u w I p u
v v
e
: gives , in 0 and (57) Using
E
e c =
E
w
( )
E E
i
i
i
E E E
i
i
E
J
E
J
i
T
E
T
T E
i
T
E h E E T
E
E
E E
w e C
w R w e
w u u w R
w I e w R
e e
e e e
e
c v
e u
c v
, ,
2 : 1
, 0
, 0
, 1 , 0 , 1
2 : 1
) (
, , ,
: ) ( . 2
+
+ + s
+
V V =
}
} }
=
=
E E
, 1 ,
E
then , in 0 Since
e e
v e
T
J
E
w w w
= c =
Using (50) and (51), we get
$$h_E\|R_E\|_{0,E}^2\le C\,h_E^{1/2}\|R_E\|_{0,E}\sum_{i=1:2}\big(\|\nabla e\|_{0,T_i}+\|\epsilon\|_{0,T_i}+h_{T_i}\|R_T\|_{0,T_i}\big).$$
Using (68) gives
$$h_E^{1/2}\|R_E\|_{0,E}\le C\sum_{i=1:2}\big(\|\nabla e\|_{0,T_i}+\|\epsilon\|_{0,T_i}\big).\qquad(70)$$
Using the inverse inequality (49) together with (70) gives
$$h_E\|R_E\|_{0,E}^2\le C\big(\|\nabla e\|_{0,\omega_E}^2+\|\epsilon\|_{0,\omega_E}^2\big).\qquad(71)$$
We also need to show that (70) holds for boundary edges. For an edge $E\in\mathcal{E}_h$ with $E\subset\Gamma_I$, we have
$$(R_E,w_E)_E=\big(g-Au_h-(\nu\nabla u_h-p_h I)\cdot n,\,w_E\big)_E=\nu(\nabla e,\nabla w_E)_T-(\epsilon,\nabla\cdot w_E)_T+(R_T,w_E)_T+\big(A(u-u_h),w_E\big)_E.$$
Using (58), (59) and (4) gives
(IJACSA) International Journal of Advanced Computer Science and Applications,
Vol. 4, No.3, 2013
153 | P a g e
www.ijacsa.thesai.org
$$(R_E,w_E)_E\le C\big(\|\nabla e\|_{0,T}+\|\epsilon\|_{0,T}+h_T\|R_T\|_{0,T}+\|Ae\|_{0,E}\big)\,\|w_E\|_{1,T}.$$
Using (50) and (52) gives
$$h_E\|R_E\|_{0,E}^2\le C\,h_E^{1/2}\|R_E\|_{0,E}\big(\|\nabla e\|_{0,T}+\|\epsilon\|_{0,T}+h_T\|R_T\|_{0,T}\big).$$
Using (69) gives
$$h_E^{1/2}\|R_E\|_{0,E}\le C\big(\|\nabla e\|_{0,T}+\|\epsilon\|_{0,T}\big).\qquad(72)$$
Using (49), with $w_E=b_E R_E$ so that
$$(R_E,w_E)_E=(R_E,\,b_E R_E)_E\ge c\,\|R_E\|_{0,E}^2,$$
and then using (71), gives
$$h_E\|R_E\|_{0,E}^2\le C\big(\|\nabla e\|_{0,T}^2+\|\epsilon\|_{0,T}^2\big).\qquad(73)$$
Finally, combining (68), (69), (71) and (73) establishes the
local lower bound.
B. The Local Poisson Problem Estimator.
The local Poisson problem estimator is defined as
$$\eta_P^2=\sum_{T\in\mathcal{T}_h}\eta_{P,T}^2,\qquad \eta_{P,T}^2=\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2.\qquad(74)$$
Let
$$V_T=\big\{v\in H^1(T)^2:\ v=0\ \text{on}\ \partial T\cap\partial\Omega\big\},\qquad A_T(e_{P,T},v):=\nu(\nabla e_{P,T},\nabla v)_T,\quad e_{P,T}\in V_T.$$
The local error function $e_{P,T}$ satisfies the uncoupled Poisson problems
$$A_T(e_{P,T},v)=(R_T,v)_T-\sum_{E\in\partial T}(R_E,v)_E\qquad(75)$$
for any $v\in V_T$, and
$$R_T^{J}=\nabla\cdot u_h|_T.\qquad(76)$$
Theorem 6.11. The estimator $\eta_{P,T}$ is equivalent to the $\eta_{R,T}$ estimator:
$$c\,\eta_{P,T}\le\eta_{R,T}\le C\,\eta_{P,T}.$$
Proof. For the upper bound, we first let $w_T=R_T\,b_T$ ($b_T$ is an element interior bubble function). From (75),
$$(R_T,w_T)_T=A_T(e_{P,T},w_T)=\nu(\nabla e_{P,T},\nabla w_T)_T\le\nu\,\|\nabla e_{P,T}\|_{0,T}\,\|w_T\|_{1,T}.$$
Using (48) we get
$$(R_T,w_T)_T\le C\,h_T^{-1}\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big)^{1/2}\,\|R_T\|_{0,T}.\qquad(77)$$
In addition, from the inverse inequality (47),
$$\|R_T\|_{0,T}^2\le C\,(R_T,w_T)_T,$$
and using (77) we get
$$h_T^2\|R_T\|_{0,T}^2\le C\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big).\qquad(78)$$
Next, we let $w_E=R_E\,b_E$ ($b_E$ is an edge bubble function). If $E\in\mathcal{E}_h$ with $E\subset\Gamma_I$, then using (75), (78), (50) and (51) gives
$$(R_E,w_E)_E=A_T(e_{P,T},w_E)-(R_T,w_E)_T\le C\,h_E^{-1/2}\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big)^{1/2}\,\|w_E\|_{0,E}.$$
If $E\in\mathcal{E}_h$ with $E\subset\Gamma_O$, note that the matrix $A$ is defined only on $\Gamma_I$, so we can pose $A=0$ on $\Gamma_O$. Using (75), (50), (51) and (78) gives
$$(R_E,w_E)_E=\nu(\nabla e_{P,T},\nabla w_E)_T-(R_T,w_E)_T\le C\,h_E^{-1/2}\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big)^{1/2}\,\|w_E\|_{0,E}.$$
Finally, for any $T$ and any $E\subset\partial T$, we have
$$(R_E,w_E)_E\le C\,h_E^{-1/2}\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big)^{1/2}\,\|w_E\|_{0,E}.$$
From this result and the inverse inequality (49), we get
$$h_E\|R_E\|_{0,E}^2\le C\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big).\qquad(79)$$
We also have
$$\|R_T^{J}\|_{0,T}=\|\nabla\cdot u_h\|_{0,T},$$
and then
$$\|R_T^{J}\|_{0,T}^2\le C\big(\|\nabla e_{P,T}\|_{0,T}^2+\|R_T^{J}\|_{0,T}^2\big).\qquad(80)$$
Combining (78), (79) and (80) establishes the upper bound in the equivalence relation.
For the lower bound, we need to use (65):
$$\eta_{P,T}^2=A_T(e_{P,T},e_{P,T})+\|R_T^{J}\|_{0,T}^2\le C\Big(\sup_{v\in V_T\setminus\{0\}}\frac{A_T(e_{P,T},v)}{\|\nabla v\|_{0,T}}\Big)^2+\|R_T^{J}\|_{0,T}^2.$$
Using (75) and (76) gives
$$\eta_{P,T}^2\le C\Big(\sup_{v\in V_T\setminus\{0\}}\frac{(R_T,v)_T-\sum_{E\subset\partial T}(R_E,v)_E}{\|\nabla v\|_{0,T}}\Big)^2+\|R_T^{J}\|_{0,T}^2,$$
then
$$\eta_{P,T}^2\le C\Big(\sup_{v\in V_T\setminus\{0\}}\frac{\|R_T\|_{0,T}\,\|v\|_{0,T}+\sum_{E\subset\partial T}\|R_E\|_{0,E}\,\|v\|_{0,E}}{\|\nabla v\|_{0,T}}\Big)^2+\|R_T^{J}\|_{0,T}^2.\qquad(82)$$
Now, since $v$ is zero at the four vertices of $T$, a scaling argument and the usual trace theorem (see e.g. [15, Lemma 1.5]) show that $v$ satisfies
$$\|v\|_{0,E}\le C\,h_E^{1/2}\,\|v\|_{1,T},\qquad(83)$$
$$\|v\|_{0,T}\le C\,h_T\,\|v\|_{1,T}.\qquad(84)$$
Combining these two inequalities with (82) immediately gives the lower bound in the equivalence relation.
Consequence 6.12. For any mixed finite element approximation (not necessarily inf-sup stable) defined on rectangular grids $\mathcal{T}_h$, the estimator $\eta_P$ satisfies
$$\|\nabla e\|_{0,\Omega}+\|\epsilon\|_{0,\Omega}\le C\,\eta_P.$$
Note that the constant $C$ in the local lower bound is independent of the domain.
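To make the structure of these elementwise quantities concrete, here is a minimal 1D analogue of the residual indicator $\eta_{R,T}$ for the Poisson model problem $-u''=f$ with piecewise-linear elements. This is an illustrative sketch only: the Stokes estimator above also carries pressure and divergence contributions, the function name is ours, and the exact jump-weight convention varies between references such as [8] and [15].

```python
import numpy as np

def residual_estimator_1d(x, uh, f_mid):
    """1D analogue of the elementwise residual indicator eta_{R,T} for
    -u'' = f with piecewise-linear (P1) elements.

    x     : node coordinates, shape (n+1,)
    uh    : nodal values of the discrete solution, shape (n+1,)
    f_mid : f at element midpoints, shape (n,)

    eta_T^2 = h_T^2 ||f||_{0,T}^2 plus (1/2) h_E [u_h']^2 over interior
    endpoints, with each edge term shared by its two neighbours.
    """
    h = np.diff(x)
    grad = np.diff(uh) / h                   # u_h' is constant per element
    eta2 = h**2 * (f_mid**2 * h)             # h_T^2 ||f||^2 via midpoint rule
    jumps = np.diff(grad)                    # flux jumps at interior nodes
    for i, jump in enumerate(jumps):         # node i+1 joins elements i, i+1
        h_e = 0.5 * (h[i] + h[i + 1])
        eta2[i] += 0.25 * h_e * jump**2      # half of (1/2) h_E [u_h']^2 ...
        eta2[i + 1] += 0.25 * h_e * jump**2  # ... to each neighbour
    return np.sqrt(eta2)
```

For a globally linear $u_h$ with $f=0$ every local indicator vanishes, which mirrors the consistency of the estimator.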
VII. NUMERICAL SIMULATION
In this section, some numerical results of calculations with the mixed finite element method and the ADINA system will be presented. Using our solver, we run the flow over an obstacle [15] with a number of different model parameters.
Example: Flow over an obstacle. This is another classical problem. The domain $\Omega$ is associated with modelling flow in a rectangular channel with a square cylindrical obstruction. A Poiseuille profile is imposed on the inflow boundary ($x=0$; $-1\le y\le 1$), and a no-flow (zero velocity) condition is imposed on the obstruction and on the top and bottom walls. A Neumann condition is applied at the outflow boundary, which automatically sets the mean outflow pressure to zero. $\Omega$ is the disconnected rectangular region $(0,8)\times(-1,1)$ generated by deleting the square $(7/4,9/4)\times(-1/4,1/4)$.
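The inflow data can be sketched as follows; since the paper does not spell out the profile formula, the unit-peak parabola $u_x=1-y^2$, $u_y=0$ (the standard choice for this benchmark, cf. [15]) is an assumption here.

```python
import numpy as np

def poiseuille_inflow(y):
    """Parabolic (Poiseuille) profile on the inflow boundary x = 0,
    -1 <= y <= 1: u_x = 1 - y^2, u_y = 0 (assumed unit peak velocity)."""
    y = np.asarray(y, dtype=float)
    ux = np.clip(1.0 - y**2, 0.0, None)   # clamp to zero outside the walls
    return ux, np.zeros_like(y)
```

The profile vanishes on the walls $y=\pm 1$, matching the no-flow condition there.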
Fig.1. Equally distributed streamline plot associated with a 32 × 80 square grid, Q1–P0 approximation and ν = 1/500.
Fig.2. Uniform streamline plot computed with the ADINA system, associated with a 32 × 80 square grid and ν = 1/500.
Fig.3. Velocity vectors of the solution by MFE with a 32 × 80 square grid and ν = 1/500.
Fig.4. The solution computed with the ADINA system. The plots show the velocity vectors of the solution with a 32 × 80 square grid and ν = 1/500.
The two solutions are therefore essentially identical, which is a very good indication that our solver is implemented correctly.
Fig.5. Pressure plot for the flow with a 32 × 80 square grid.
Fig.6. Estimated error $\eta_{R,T}$ associated with a 32 × 80 square grid and Q1–P0 approximation.
TABLE I. The local Poisson problem error estimator: flow over an obstacle with Reynolds number Re = 1000. $\|\nabla\cdot u_h\|_{0,\Omega}$ is the estimated velocity divergence error.
Grid | $\|\nabla\cdot u_h\|_{0,\Omega}$ | $\eta_P$
8 × 20 | 5.892389e-001 | 3.210243e+001
16 × 40 | 1.101191e-001 | 6.039434e+000
32 × 80 | 3.707139e-002 | 2.802914e+000
64 × 160 | 1.160002e-002 | 1.484983e+000
TABLE II. A residual error estimator for flow over an obstacle with Reynolds number Re = 1000.
Grid | $\eta_R$
8 × 20 | 9.309704e+000
16 × 40 | 1.727278e+000
32 × 80 | 8.156479e-001
64 × 160 | 4.261901e-001
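As a quick sanity check on the tabulated values, one can compute the observed reduction rates between successive grids (each refinement halves $h$). The sketch below does this for the $\eta_R$ column; the near-unit rate on the finer grids is consistent with roughly first-order decay of the estimator. The helper name is ours.

```python
import numpy as np

def observed_rates(values):
    """log2 of the reduction factor between successive uniform refinements
    (the mesh size h is halved at each step)."""
    v = np.asarray(values, dtype=float)
    return np.log2(v[:-1] / v[1:])

eta_R = [9.309704, 1.727278, 8.156479e-1, 4.261901e-1]  # Table II column
rates = observed_rates(eta_R)
```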
VIII. CONCLUSION
In this work we were interested in the numerical solution of two-dimensional partial differential equations modelling (or arising from) steady incompressible fluid flow. It includes algorithms for discretization by mixed finite element methods and a posteriori error estimation of the computed solutions. Our results agree with the ADINA system.
Numerical results are presented to show the performance of the method, and they compare well with other recent results.
ACKNOWLEDGMENT
The authors would like to express their sincere thanks to the referee for his/her helpful suggestions.
References
[1] Alexandre Ern, Aide-mémoire Éléments Finis, Dunod, Paris, 2005.
[2] P.A. Raviart, J. Thomas, Introduction à l'analyse numérique des équations aux dérivées partielles, Masson, Paris, 1983.
[3] E. Creuse, G. Kunert, S. Nicaise, A posteriori error estimation for the
Stokes problem: Anisotropic and isotropic discretizations, M3AS,
Vol.14, 2004, pp. 1297-1341.
[4] P. Clement, Approximation by finite element functions using local
regularization, RAIRO. Anal. Numer, Vol.2, 1975, pp. 77-84.
[5] T.J. Oden, W. Wu, and M. Ainsworth. An a posteriori error estimate for
finite element approximations of the Navier-Stokes equations, Comput.
Methods Appl. Mech. Engrg, Vol.111, 1994, pp. 185-202.
[6] V. Girault and P.A. Raviart, Finite Element Approximation of the
Navier-Stokes Equations, Springer-Verlag, Berlin Heiderlberg New
York, 1981.
[7] A. Elakkad, A. Elkhalfi, N. Guessous. A mixed finite element method for
Navier-Stokes equations. J. Appl. Math. & Informatics, Vol.28, No.5-6,
2010, pp. 1331-1345.
[8] R. Verfurth, A Review of A Posteriori Error Estimation and Adaptive
Mesh-Refinement Techniques, Wiley-Teubner, Chichester, 1996.
[9] M. Ainsworth and J. Oden. A Posteriori Error Estimation in Finite
Element Analysis. Wiley, New York, 2000.
[10] M. Ainsworth, J. Oden, A posteriori error estimates for Stokes and
Oseen's equations, SIAM J. Numer. Anal, Vol.34, 1997, pp. 228-245.
[11] R. E. Bank, B. Welfert, A posteriori error estimates for the Stokes
problem, SIAM J. Numer. Anal, Vol.28, 1991, pp. 591-623.
[12] C. Carstensen,S.A. Funken. A posteriori error control in low-order finite
element discretizations of incompressible stationary flow problems.
Math. Comp., Vol.70, 2001, pp. 1353-1381.
[13] D. Kay, D. Silvester, A posteriori error estimation for stabilized mixed
approximations of the Stokes equations, SIAM J. Sci. Comput, Vol.21,
1999, pp. 1321-1336.
[14] R. Verfurth, A posteriori error estimators for the Stokes equations,
Numer. Math, Vol.55, 1989, pp. 309-325.
[15] H. Elman, D. Silvester, A. Wathen, Finite Elements and Fast Iterative
Solvers: with Applications in Incompressible Fluid Dynamics, Oxford
University Press, Oxford, 2005.
[16] V. John. Residual a posteriori error estimates for two-level finite element
methods for the Navier-Stokes equations, App. Numer. Math., Vol.37,
2001, pp. 503-518.
[17] D. Acheson, Elementary Fluid Dynamics, Oxford University Press,
Oxford, 1990.
## How do I count the number of elements in a vector in R?
Find the count of elements using the length and lengths function.
1. Syntax: list(value1,value2,…,value) values can be range operator or vector.
2. Syntax: length(listname) return value: integer.
3. Syntax: lengths(list_name)
### How do you find the number of elements in R?
In the R programming language, to find the length of every element in a list, the function lengths() can be used. This function loops over x and returns a compatible vector containing the length of each element in x.
#### How do I make a number a vector in R?
How to create a vector in R?
1. Using c() Function. To create a vector, we use the c() function: Code: > vec <- c(1,2,3,4,5) #creates a vector named vec.
2. Using assign() function. Another way to create a vector is the assign() function. Code:
3. Using : operator. An easy way to make integer vectors is to use the : operator. Code:
What is the length of the vector?
The length of a vector is the square root of the sum of the squares of the horizontal and vertical components. If a or b is zero, then you don’t need the vector length formula: in this case, the length is just the absolute value of the nonzero component.
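Since the arithmetic is language-independent, here is a quick numeric sketch of that formula (in Python; the function name is illustrative):

```python
import math

def vector_length(a, b):
    """Euclidean length of a 2-D vector with components a and b."""
    return math.sqrt(a * a + b * b)
```

For a = 3 and b = 4 this gives the familiar length 5; with one component zero it reduces to the absolute value of the other.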
How do I count how many times a word appears in R?
You can use the str_count function from the stringr package to get the number of keywords that match a given character vector. The pattern argument of the str_count function accepts a regular expression that can be used to specify the keyword.
## How do I label a vector in R?
You use the assignment operator (<-) to assign names to vectors in much the same way that you assign values to character vectors. This technique works because you subset month. days to return only those values for which month. days equals 31, and then you retrieve the names of the resulting vector.
### How many elements does a vector in R have?
Note: Here, the first vector vec has five elements. The second vector vec4 has ten elements. Therefore, the first vector is cycled twice to match the second.
#### How to find the index of a vector in R?
As you can see based on the previous R code, our example vector simply contains seven numeric values. Let’s assume that we want to know the index of the first element of our vector, which is equal to the value 1.
How to create an integer vector in R?
There are numerous ways to create an R vector, as described above: 1. using the c() function; 2. using the assign() function; 3. using the : operator, which is an easy way to make integer vectors.
How do you search a vector in RStudio?
Within the function, we had to specify the value we are searching for (i.e. 1) and the vector within which we want to search (i.e. our example vector with the name x). The RStudio console output shows us the position of the first element of our vector, which is equal to 1. In our example, this is the second element.
### Goodwill - Nature and Valuation - Test Papers
CBSE Test Paper 01
Goodwill - Nature and Valuation
1. As per Accounting Standard-26,
1. both purchased and self-generated goodwill are accounted in the books of account
2. purchased goodwill is accounted in the books of account
3. None of these
4. self-generated goodwill is accounted in the books of account
2. Calculate the average profit of last four year's profits. The profits of the last four years were:
Year | Profit (₹)
2008 | 27,000
2009 | 39,000
2010 | 16,000 (loss)
2011 | 40,000
1. ₹10000
2. Rs. 22500
3. ₹30000
4. ₹40000
3. The excess amount which the firm gets on selling its business over and above the net value is
1. Surplus
2. Goodwill
3. Super profits.
4. Reserve
4. Goodwill is valued
1. at the time of change in profit-sharing ratio
2. at the time of retirement or death of a partner
3. at the time of admission of a partner
4. All of these
5. Goodwill under Average Profit Method means
1. None of these
2. Normal profit $×$ Number of year's purchase
3. Super profit $×$ Number of year's purchase
4. Average profit $×$ Number of year's purchase
6. Fill in the blanks:
Average Profit = ________.
7. How does goodwill arise?
8. How does the factor ‘quality of product' affect the goodwill of a firm?
9. The profits and losses for last five years were
1st year - Rs 3,000 (including an abnormal gain of 1,000)
2nd year - Rs 7,000 (excluding Rs 2,000 as insurance premium)
3rd year - Rs 2,000 (after charging an abnormal loss of Rs 1,000)
4th year - Rs 3,000
5th year - Rs 1,000 (Loss)
Calculate the amount of Goodwill on the basis of 3 years purchase of last 5 years profits and losses.
10. What is a Revaluation Account? How does it differ from the Profit & Loss Appropriation A/c?
11. A business has earned average profits of Rs 1,00,000 during the last few years and the normal rate of return in a similar business is 10%. Find out the value of goodwill by
1. Capitalisation of super profit method.
2. Super profit method, if the goodwill is valued at 3 years’ purchase of super profit.
The assets of the business were Rs 10,00,000 and its external liabilities Rs 1,80,000.
12. Neeraj and Dheeraj are carrying on a business of repairing electronic items. There are no other technicians for repairing electronic items in the locality. As the electric supply has a lot of fluctuations the equipments get damaged. Therefore, both the partners themselves do the repairing work to the satisfaction of the customers. The firm donates 10% of its profits to a Charitable Hospital of the locality for the medical treatment of persons below poverty line. State the two factors affecting the goodwill of the firm discussed in the above para. Also identify any two values which the firm is trying to propagate.
13. Cake and Muffin are partners sharing profits and losses in the ratio of 5: 4. On 1st April 2016, they admit Cookie as a new partner for $\frac{1}{6}$th share in the profits of the firm and the new ratio agreed upon is 3 : 2 : 1.
Goodwill, at the time of Cookie's admission, is to be valued on the basis of capitalisation of the average profits of the last three years. Profits for the last three years were:
Year ended 31st March 2014: ₹39,000 (including an abnormal loss of ₹9,000)
Year ended 31st March 2015: ₹83,000 (including an abnormal loss of ₹8,000)
Year ended 31st March 2016: ₹72,000
On 1st April 2016, the firm had assets of ₹8,00,000. Its creditors amounted to ₹3,60,000. The firm had a Reserve Fund of ₹40,000 while the Partners' Capital Account showed a balance of ₹4,00,000.
The normal rate of return expected from this class of business is 13%.
Cookie brings in ₹2,00,000 for her capital but is unable to bring in cash for her share of goodwill.
You are required to:
1. Calculate Cookie's share of Goodwill in the firm (Show your workings clearly).
14. X and Y are partners in a firm sharing profits in the ratio of 5 : 3.On March 1, 2017 they admitted Z as a new Partner. The new profit sharing ratio will be 4 : 3 : 2. Z brought in ₹1,00,000 in cash as his share of capital but could not bring any amount for goodwill in cash. The firm's goodwill on Z's admission was valued at ₹1,80,000. X and Y decided that Z can bring his share of premium for goodwill later or it can be adjusted against his share of profits. At the time of Z's admission goodwill existed in the books of the firm at ₹2,40,000.
You are required to pass necessary journal entries in the books of the firm on Z's admission.
15. Bakul and Gokul were partners in a firm sharing profits and losses in the ratio of 2 : 1 with capitals of ₹40,000 and ₹30,000 respectively. They decided to admit Nakul into partnership on conditions that he would bring in ₹20,000 as his capital and ₹6,000 for his share of goodwill for 1/4th share of profits. Half of the amount of goodwill was withdrawn by the existing partners. The capital of the partners in the New firm was to be arranged in profit sharing ratio on the basis of Nakul's Capital and excess or deficit capital to be adjusted in cash. Give the necessary journal entries to record the transactions and show the capital accounts of the partners and the cash account.
CBSE Test Paper 01
Goodwill - Nature and Valuation
Solution
1. (b) purchased goodwill is accounted in the books of account
Explanation: purchased goodwill is accounted in the books of account
2. (b) Rs. 22500
Explanation: Calculation of average profit when loss is given:
1. Calculation of total profits earned during 4 years: 27,000 + 39,000 – 16,000 + 40,000 = 90,000
2. Average profit = 90,000/4 = 22,500
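The arithmetic in this answer can be sketched in a couple of lines (an illustrative helper, with losses entered as negative numbers):

```python
def average_profit(profits):
    """Simple average of the given years' profits; losses go in as
    negative numbers."""
    return sum(profits) / len(profits)

# Profits for 2008-2011, with the 2010 loss of 16,000 entered as -16000.
avg = average_profit([27000, 39000, -16000, 40000])
```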
3. (b) Goodwill
Explanation: Goodwill
4. (d) All of these
Explanation: All of these
5. (d) Average profit $×$ Number of year's purchase
Explanation: Average profit $×$ Number of year's purchase
6. $\frac{Total\phantom{\rule{thickmathspace}{0ex}}Profit}{No.\phantom{\rule{thickmathspace}{0ex}}of\phantom{\rule{thickmathspace}{0ex}}relevant\phantom{\rule{thickmathspace}{0ex}}years}$
7. Goodwill may arise due to factors like location, the efficiency of management, the trend of profits, the absence of competition, life span, relations, unique patent right. Goodwill arises when a company acquires another entire business. The amount of goodwill is the cost to purchase the business minus the fair market value of the tangible assets, the intangible assets that can be identified, and the liabilities obtained in the purchase.
8. If the firm enjoys good reputation for its product quality, there will be higher sales and the value of its goodwill will increase because this will not only retain existing customers but also bring new customers too thereby increasing sales as well as customer base.
9. The amount of goodwill is the cost to purchase the business minus the fair market value of the tangible assets, the intangible assets that can be identified, and the liabilities obtained in the purchase.
Calculation of Goodwill
Average maintainable profits
Profits for 1st year (Rs 3,000 - Rs 1,000) = Rs 2,000
Profits for 2nd year (Rs 7,000 - Rs 2,000) = Rs 5,000
Profits for 3rd year (Rs 2,000 + Rs 1,000) = Rs 3,000
Profits for 4th year = Rs 3,000
Total profits= Rs 13,000
Less: Loss for 5th year = Rs (1,000)
Total Profit for last five years = Rs 12,000
Average profits = Total profit /No. of years
= Rs 12,000/5
= Rs 2400
Goodwill = Average profits $×$No. of years’ purchase
= Rs 2400 $×$ 3
= Rs 7200
10. Revaluation account is a nominal account which is prepared to record the change of assets and reassessment of liabilities. The profit or loss calculated through this account is transferred to the partners’ capital/current account in their old profit sharing ratio while Profit and Loss Appropriation Account is prepared for the division of profit among the partners.
Revaluation account is prepared whenever there is change in profit sharing ratio between the partners due to any reason e.g. Between existing partners, Due to Admission of a new partner, Due to retirement/death of a partner, amalgamation of two partnership firms etc. to record profit or loss on revaluation. Main concept being whatever happened before change in ratio; belongs to partners in the old ratio and after change in the new ratio
Profit and Loss Appropriation Account, on the other hand, is prepared every year to distribute profit as per the terms of partnership deed.
11. Working Notes:
Capital Employed = Total Assets - External Liabilities = 10,00,000 - 1,80,000 = Rs 8,20,000
Normal Profit = Capital Employed $×\frac{10}{100}$ = 8,20,000 $×\frac{10}{100}$ = Rs 82,000
Super Profit = Average Profit - Normal Profit = 1,00,000 - 82,000 = Rs 18,000
1. As per Capitalisation of Super Profit Method:
Goodwill = Super Profit $×\frac{100}{10}$ = 18,000 $×\frac{100}{10}$ = Rs 1,80,000
2. As per Super Profit Method:
Goodwill = Super Profit $×$ Number of Years' Purchase = 18,000 $×$ 3 = Rs 54,000
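The working notes follow a fixed chain of formulas; the sketch below (an illustrative helper, using the figures from the question) walks the same chain:

```python
def goodwill_super_profit(avg_profit, assets, ext_liabilities,
                          normal_rate, years_purchase):
    """Goodwill by (i) capitalisation of super profit and (ii) super
    profit times the number of years' purchase."""
    capital_employed = assets - ext_liabilities
    normal_profit = capital_employed * normal_rate / 100
    super_profit = avg_profit - normal_profit
    by_capitalisation = super_profit * 100 / normal_rate
    by_years_purchase = super_profit * years_purchase
    return by_capitalisation, by_years_purchase

# Figures from the question: Rs 1,00,000 average profit, Rs 10,00,000
# assets, Rs 1,80,000 external liabilities, 10% normal rate, 3 years.
cap_method, sp_method = goodwill_super_profit(100000, 1000000, 180000, 10, 3)
```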
12. The factors affecting the goodwill of the firm are:
#### 1. Locational Factor:
If the firm is centrally located or located in a very prominent place, it can attract, more customers resulting in an increase in turnover. Therefore, locational factor should always be considered while ascertaining the value of goodwill.
#### 2. Nature of Business:
This is another factor which also influences the value of goodwill. It includes:
(i) the nature of goods;
(ii) the risk involved;
(iii) benefits of patents and trademarks; and
(iv) easy access to raw materials, etc.
The values which the firm is trying to propagate are:
1. Sensitivity towards people belonging to lower-income group.
2. Working towards customer satisfaction.
1. Calculation of Cookie's Share of Goodwill in the firm:
Calculation of Average Normal Profit:
Year ended | Profit (₹)
31st March 2014 | ₹39,000 + ₹9,000 = 48,000
31st March 2015 | ₹83,000 - ₹8,000 = 75,000
31st March 2016 | 72,000
Total | 1,95,000
Average Normal Profit = $\frac{₹1,95,000}{3}$ = ₹65,000
Capitalised Value of Average Profits =
$\frac{Rs.65,000}{13}×100=Rs.5,00,000$
Capital Employed (Net Assets) = Total Assets - Outside Liabilities
= ₹8,00,000 - ₹3,60,000 = ₹4,40,000
Goodwill = Capitalised Value of Average Profits - Net Assets
= ₹5,00,000 - ₹4,40,000 = ₹60,000
Cookie's Share of Goodwill $=60,000×\frac{1}{6}=₹10,000$
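The capitalisation-of-average-profits chain used above can be sketched the same way (illustrative helper names):

```python
from fractions import Fraction

def goodwill_capitalisation(avg_profit, normal_rate, total_assets,
                            outside_liabilities, new_partner_share):
    """Goodwill by capitalisation of average profits, plus the incoming
    partner's share of it."""
    capitalised_value = avg_profit * 100 / normal_rate
    net_assets = total_assets - outside_liabilities
    goodwill = capitalised_value - net_assets
    return goodwill, goodwill * new_partner_share

# Figures from the question: average profit 65,000, 13% normal rate,
# assets 8,00,000, creditors 3,60,000, Cookie's share 1/6.
g, cookie_share = goodwill_capitalisation(65000, 13, 800000, 360000,
                                          Fraction(1, 6))
```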
2. JOURNAL
2016 April 1: Bank A/c Dr. 2,00,000; To Cookie's Capital A/c 2,00,000 (amount of capital brought in cash).
2016 April 1: Cookie's Current A/c Dr. 10,000; To Cake's Capital A/c 3,333; To Muffin's Capital A/c 6,667 (Cookie's share of goodwill credited to the sacrificing partners in their sacrificing ratio of 1 : 2).
Calculation of Sacrificing Ratio:
Cake's sacrifice = $\frac{5}{9}-\frac{3}{6}=\frac{10-9}{18}=\frac{1}{18}$
Muffin's sacrifice = $\frac{4}{9}-\frac{2}{6}=\frac{8-6}{18}=\frac{2}{18}$
Sacrificing Ratio of Cake and Muffin = $\frac{1}{18}:\frac{2}{18}$ or 1 : 2
13. JOURNAL
2017 March 1: X's Capital A/c Dr. 1,50,000; Y's Capital A/c Dr. 90,000; To Goodwill A/c 2,40,000 (goodwill already existing in the books, now written off in the old ratio, i.e. 5 : 3).
2017 March 1: Bank A/c Dr. 1,00,000; To Z's Capital A/c 1,00,000 (amount brought in by Z as his capital).
2017 March 1: Z's Current A/c ($\frac{2}{9}$ of 1,80,000) Dr. 40,000; To X's Capital A/c 32,500; To Y's Capital A/c 7,500 (Z's share of goodwill credited to X and Y in their sacrificing ratio of 13 : 3).
Working Note: Calculation of Sacrificing Ratio:
Old Ratio - New Ratio
X's Sacrifice $\frac{5}{8}-\frac{4}{9}=\frac{45-32}{72}=\frac{13}{72}$
Y's Sacrifice= $\frac{3}{8}-\frac{3}{9}=\frac{27-24}{72}=\frac{3}{72}$
Thus Sacrificing ratio between X and Y = 13 : 3.
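A sacrificing ratio is just each partner's old share minus new share; exact fractions avoid rounding (illustrative helper):

```python
from fractions import Fraction

def sacrificing_ratio(old_shares, new_shares):
    """Sacrifice of each existing partner: old share minus new share."""
    return [old - new for old, new in zip(old_shares, new_shares)]

# X and Y: old shares 5/8 and 3/8, new shares 4/9 and 3/9.
sac = sacrificing_ratio([Fraction(5, 8), Fraction(3, 8)],
                        [Fraction(4, 9), Fraction(3, 9)])
```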
14. JOURNAL
Bank A/c Dr. 26,000; To Nakul's Capital A/c 20,000; To Premium for Goodwill A/c 6,000 (capital and premium for goodwill brought in by Nakul).
Premium for Goodwill A/c Dr. 6,000; To Bakul's Capital A/c 4,000; To Gokul's Capital A/c 2,000 (premium for goodwill credited to the old partners' capital accounts).
Bakul's Capital A/c Dr. 2,000; Gokul's Capital A/c Dr. 1,000; To Bank A/c 3,000 (half of the premium for goodwill withdrawn by Bakul and Gokul).
Bakul's Capital A/c Dr. 2,000; Gokul's Capital A/c Dr. 11,000; To Bank A/c 13,000 (excess capital withdrawn by Bakul and Gokul).
Dr. CAPITAL ACCOUNTS Cr.
To Bank A/c: Bakul 2,000; Gokul 1,000 | By Balance b/d: Bakul 40,000; Gokul 30,000
To Balance c/d: Bakul 42,000; Gokul 31,000; Nakul 20,000 | By Bank A/c: Nakul 20,000
 | By Premium for Goodwill A/c: Bakul 4,000; Gokul 2,000
Totals (both sides): Bakul 44,000; Gokul 32,000; Nakul 20,000
To Bank A/c: Bakul 2,000; Gokul 11,000 | By Balance b/d: Bakul 42,000; Gokul 31,000; Nakul 20,000
To Balance c/d: Bakul 40,000; Gokul 20,000; Nakul 20,000 |
Totals (both sides): Bakul 42,000; Gokul 31,000; Nakul 20,000
Dr. BANK ACCOUNT Cr.
To Nakul's Capital A/c 20,000 | By Bakul's Capital A/c 2,000
To Premium for Goodwill A/c 6,000 | By Gokul's Capital A/c 1,000
 | By Bakul's Capital A/c 2,000
 | By Gokul's Capital A/c 11,000
 | By Balance c/d 10,000
Total 26,000 | Total 26,000
Notes to the Solution:-
1. New Profit Sharing Ratio:
Nakul's share of profit = $\frac{1}{4}$, Remaining Share = 1-$\frac{1}{4}=\frac{3}{4}$
Bakul's new share = $\frac{2}{3}$ of $\frac{3}{4}$ = $\frac{2}{4}$
Gokul's new share = $\frac{1}{3}$ of $\frac{3}{4}$ = $\frac{1}{4}$
Nakul's share = $\frac{1}{4}$
Nakul's Capital is ₹20,000 and his share of profit is $\frac{1}{4}$.
Based on Nakul's capital the total Capital of the firm will be :
20,000 $×\frac{4}{1}$ = ₹80,000
Hence, Bakul's capital in new firm should be = 80,000 $×$ $\frac{2}{4}$ = ₹40,000
Gokul's capital in new firm should be = 80,000 $×$ $\frac{1}{4}$ = ₹20,000
2. Bakul's Capital in the new firm should be ₹40,000, whereas his existing capital shown by his Capital A/c is ₹42,000. Hence, his excess Capital amounting to ₹2,000 will be refunded to him.
3. Gokul's Capital in the new firm should be ₹20,000, whereas his existing capital shown by his Capital A/c is ₹31,000. Hence, his excess capital amounting to ₹11,000 will be refunded to him.
Goodwill - Nature and Valuation
1. As per Accounting Standard-26,
1. both purchased and self-generated goodwill are accounted in the books of account
2. purchased goodwill is accounted in the books of account
3. None of these
4. self-generated goodwill is accounted in the books of account
2. Calculate the average profit of last four year's profits. The profits of the last four years were:
2008 27000 2009 39000 2010 16000(loss) 2011 40000
1. ₹10000
2. Rs. 22500
3. ₹30000
4. ₹40000
3. The excess amount which the firm gets on selling its business over and above the net value is
1. Surplus
2. Goodwill
3. Super profits.
4. Reserve
4. Goodwill is valued
1. at the time of change in profit-sharing ratio
2. at the time of retirement or death of a partner
3. at the time of admission of a partner
4. All of these
5. Goodwill under Average Profit Method means
1. None of these
2. Normal profit $×$ Number of year's purchase
3. Super profit $×$ Number of year's purchase
4. Average profit $×$ Number of year's purchase
6. Fill in the blanks:
Average Profit = ________.
7. How does goodwill arise?
8. How does the factor ‘quality of product' affect the goodwill of a firm?
9. The profits and losses for last five years were
1st year - Rs 3,000 (including an abnormal gain of 1,000)
2nd year - Rs 7,000 (excluding Rs 2,000 as insurance premium)
3rd year - Rs 2,000 (after charging an abnormal loss of Rs 1,000)
4th year - Rs 3,000
5th year - Rs 1,000 (Loss)
Calculate the amount of Goodwill on the basis of 3 years purchase of last 5 years profits and losses.
10. What is Revaluation Account? How it is differ from Profit & Loss Appropriation A/c?
11. A business has earned average profits of Rs 1,00,000 during the last few years and the normal rate of return in a similar business is 10%. Find out the value of goodwill by
1. Capitalisation of super profit method.
2. Super profit method, if the goodwill is valued at 3 years’ purchase of super profit.
The assets of the business were Rs 10,00,000 and its external liabilities Rs 1,80,000.
12. Neeraj and Dheeraj are carrying on a business of repairing electronic items. There are no other technicians for repairing electronic items in the locality. As the electric supply has a lot of fluctuations the equipments get damaged. Therefore, both the partners themselves do the repairing work to the satisfaction of the customers. The firm donates 10% of its profits to a Charitable Hospital of the locality for the medical treatment of persons below poverty line. State the two factors affecting the goodwill of the firm discussed in the above para. Also identify any two values which the firm is trying to propagate.
13. Cake and Muffin are partners sharing profits and losses in the ratio of 5: 4. On 1st April 2016, they admit Cookie as a new partner for $\frac{1}{6}$th share in the profits of the firm and the new ratio agreed upon is 3 : 2 : 1.
Goodwill, at the time of Cookie's admission, is to be valued on the basis of capitalisation of the average profits of the last three years. Profits for the last three years were:
Year ended 31st March 2014 ₹39,000 (including an abnormal loss of ₹9,000) Year ended 31st March 2015 ₹83,000 (including an abnormal loss of ₹8,000) Year ended 31st March 2016 ₹72,000
On 1st April 2016, the firm had assets of ₹8,00,000. Its creditors amounted ₹3,60,000. The firm had a Reserve Fund of ₹40,000 while Partners' Capital Account showed a balance of ₹4,00,000.
The normal rate of return expected form this class of business is 13%.
Cookie brings in ₹2,00,000 for her capital but is unable to bring in cash for her share of goodwill.
You are required to:
1. Calculate Cookie's share of Goodwill in the firm (Show your workings clearly).
2. Pass the necessary journal entries on Cookie's admission.
14. X and Y are partners in a firm sharing profits in the ratio of 5 : 3.On March 1, 2017 they admitted Z as a new Partner. The new profit sharing ratio will be 4 : 3 : 2. Z brought in ₹1,00,000 in cash as his share of capital but could not bring any amount for goodwill in cash. The firm's goodwill on Z's admission was valued at ₹1,80,000. X and Y decided that Z can bring his share of premium for goodwill later or it can be adjusted against his share of profits. At the time of Z's admission goodwill existed in the books of the firm at ₹2,40,000.
You are required to pass necessary journal entries in the books of the firm on Z's admission.
15. Bakul and Gokul were partners in a firm sharing profits and losses in the ratio of 2 : 1 with capitals of ₹40,000 and ₹30,000 respectively. They decided to admit Nakul into partnership on conditions that he would bring in ₹20,000 as his capital and ₹6,000 for his share of goodwill for 1/4th share of profits. Half of the amount of goodwill was withdrawn by the existing partners. The capital of the partners in the New firm was to be arranged in profit sharing ratio on the basis of Nakul's Capital and excess or deficit capital to be adjusted in cash. Give the necessary journal entries to record the transactions and show the capital accounts of the partners and the cash account.
CBSE Test Paper 01
Goodwill - Nature and Valuation
Solution
1. (b) Purchased goodwill is accounted for in the books of account
Explanation: Purchased goodwill is accounted for in the books of account.
2. (b) Rs. 22,500
Explanation: Calculation of average profit when loss is given:
1. Calculation of total profits earned during 4 years: 27,000 + 39,000 – 16,000 + 40,000 = 90,000
2. Average profit = 90,000/4 = 22,500
3. (b) Goodwill
Explanation: Goodwill
4. (d) All of these
Explanation: All of these
5. (d) Average profit × Number of years' purchase
Explanation: Goodwill = Average profit × Number of years' purchase
6. Average Profit = $\frac{Total\ Profit}{No.\ of\ relevant\ years}$
7. Goodwill may arise due to factors like location, the efficiency of management, the trend of profits, the absence of competition, the life span of the business, its relations with customers and unique patent rights. Goodwill arises when a company acquires another entire business. The amount of goodwill is the cost to purchase the business minus the fair market value of the tangible assets, the identifiable intangible assets, and the liabilities obtained in the purchase.
8. If the firm enjoys a good reputation for its product quality, there will be higher sales and the value of its goodwill will increase, because this will not only retain existing customers but also bring in new customers, thereby increasing sales as well as the customer base.
9. The amount of goodwill is the cost to purchase the business minus the fair market value of the tangible assets, the intangible assets that can be identified, and the liabilities obtained in the purchase.
Calculation of Goodwill
Average maintainable profits
Profits for 1st year (Rs 3,000 - Rs 1,000) = Rs 2,000
Profits for 2nd year (Rs 7,000 - Rs 2,000) = Rs 5,000
Profits for 3rd year (Rs 2,000 + Rs 1,000) = Rs 3,000
Profits for 4th year = Rs 3,000
Total profits= Rs 13,000
Less : Loss for 5th year = Rs (1,000)
Total Profit for last five years = Rs 12,000
Average profits = Total profit /No. of years
= Rs 12,000/5
= Rs 2,400
Goodwill = Average profits × No. of years' purchase
= Rs 2,400 × 3
= Rs 7,200
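The workings above can be double-checked with a few lines of Python (the script and its variable names are ours, not part of the original paper):

```python
# Adjusted yearly profits exactly as in the workings above; the 5th year is a loss.
profits = [3000 - 1000, 7000 - 2000, 2000 + 1000, 3000, -1000]
average = sum(profits) / len(profits)    # Rs 2,400
goodwill = average * 3                   # 3 years' purchase -> Rs 7,200
print(average, goodwill)
```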
10. A Revaluation Account is a nominal account prepared to record the revaluation of assets and the reassessment of liabilities. The profit or loss calculated through this account is transferred to the partners' capital/current accounts in their old profit sharing ratio, while the Profit and Loss Appropriation Account is prepared for the division of profit among the partners.
A Revaluation Account is prepared whenever there is a change in the profit sharing ratio between the partners for any reason (e.g. among existing partners, on the admission of a new partner, on the retirement or death of a partner, or on the amalgamation of two partnership firms) to record the profit or loss on revaluation. The main idea is that whatever arises before the change in ratio belongs to the partners in the old ratio, and whatever arises after it belongs to them in the new ratio.
Profit and Loss Appropriation Account, on the other hand, is prepared every year to distribute profit as per the terms of partnership deed.
11. Working Notes:
• Capital Employed = Total Assets - External Liabilities = 10,00,000 - 1,80,000 = Rs 8,20,000
• Normal Profit = Capital Employed × Normal Rate of Return/100 = 8,20,000 × 10/100 = Rs 82,000
• Super Profit = Average Profit - Normal Profit = 1,00,000 - 82,000 = Rs 18,000
1. As per Capitalisation of Super Profit Method:
Goodwill = Super Profit × 100/Normal Rate of Return = 18,000 × 100/10 = Rs 1,80,000
2. As per Super Profit Method:
Goodwill = Super Profit × Number of Years' Purchase = 18,000 × 3 = Rs 54,000
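As a quick numeric check of these workings (the helper function and its name are ours, not the paper's):

```python
# Goodwill of Q11 under both methods; all amounts in rupees.
def goodwill_q11(avg_profit, assets, ext_liabilities, normal_rate_pct, years):
    capital_employed = assets - ext_liabilities                 # 8,20,000
    normal_profit = capital_employed * normal_rate_pct / 100    # 82,000
    super_profit = avg_profit - normal_profit                   # 18,000
    by_capitalisation = super_profit * 100 / normal_rate_pct    # 1,80,000
    by_years_purchase = super_profit * years                    # 54,000
    return super_profit, by_capitalisation, by_years_purchase

print(goodwill_q11(100_000, 1_000_000, 180_000, 10, 3))
```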
12. The factors affecting the goodwill of the firm are:
#### 1. Locational Factor:
If the firm is centrally located or located in a very prominent place, it can attract more customers, resulting in an increase in turnover. Therefore, the locational factor should always be considered while ascertaining the value of goodwill.
#### 2. Nature of Business:
This is another factor which also influences the value of goodwill. It includes:
(i) The nature of goods;
(ii) Risk involved;
(iv) Benefits of patents and Trademarks; and
(v) Easy access of raw materials, etc.
The values which the firm is trying to propagate are:
1. Sensitivity towards people belonging to lower-income group.
2. Working towards customer satisfaction.
13. 1. Calculation of Cookie's Share of Goodwill in the firm:
Calculation of Average Normal Profit:
Year ended            Normal Profit (₹)
31st March 2014       ₹39,000 + ₹9,000 = 48,000
31st March 2015       ₹83,000 - ₹8,000 = 75,000
31st March 2016       72,000
Total                 1,95,000
Average Normal Profit = $\frac{₹1,95,000}{3}$ = ₹65,000
Capitalised Value of Average Profits =
$\frac{Rs.65,000}{13}×100=Rs.5,00,000$
Capital Employed (Net Assets) = Total Assets - Outside Liabilities
= ₹8,00,000 - ₹3,60,000 = ₹4,40,000
Goodwill = Capitalised Value of Average Profits - Net Assets
= ₹5,00,000 - ₹4,40,000 = ₹60,000
Cookie's Share of Goodwill $=60,000×\frac{1}{6}=₹10,000$
2. Journal entries on Cookie's admission:

Date       Particulars                               L.F.   Dr. (₹)    Cr. (₹)
2016
April 1    Bank A/c                            Dr.          2,00,000
               To Cookie's Capital A/c                                 2,00,000
           (Amount of capital brought in cash)
April 1    Cookie's Current A/c                Dr.            10,000
               To Cake's Capital A/c                                      3,333
               To Muffin's Capital A/c                                    6,667
           (Cookie's share of goodwill credited to the sacrificing
           partners in their sacrificing ratio of 1 : 2)
Calculation of Sacrificing Ratio:
Cake's sacrifice = $\frac{5}{9}-\frac{3}{6}=\frac{10-9}{18}=\frac{1}{18}$
Muffin's sacrifice = $\frac{4}{9}-\frac{2}{6}=\frac{8-6}{18}=\frac{2}{18}$
Sacrificing Ratio of Cake and Muffin = $\frac{1}{18}:\frac{2}{18}$ or 1 : 2
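A short Python check of Cookie's goodwill share and its split in the sacrificing ratio (the script is ours, not part of the original solution):

```python
normal_profits = [39_000 + 9_000, 83_000 - 8_000, 72_000]  # abnormal items adjusted
average = sum(normal_profits) / 3                # 65,000
capitalised_value = average * 100 / 13           # 5,00,000 at the 13% normal rate
net_assets = 800_000 - 360_000                   # 4,40,000
goodwill = capitalised_value - net_assets        # 60,000
cookie_share = goodwill / 6                      # 10,000 for a 1/6th share
cake, muffin = cookie_share / 3, cookie_share * 2 / 3   # sacrificing ratio 1 : 2
print(cookie_share, round(cake), round(muffin))
```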
14. Journal entries on Z's admission:

Date       Particulars                                       L.F.   Dr. (₹)    Cr. (₹)
2017
March 1    X's Capital A/c                             Dr.          1,50,000
           Y's Capital A/c                             Dr.            90,000
               To Goodwill A/c                                                 2,40,000
           (Goodwill already existing in the books, now written
           off in the old ratio, i.e. 5 : 3)
March 1    Bank A/c                                    Dr.          1,00,000
               To Z's Capital A/c                                              1,00,000
           (Amount brought in by Z as his capital)
March 1    Z's Current A/c ($\frac{2}{9}$ of ₹1,80,000)  Dr.          40,000
               To X's Capital A/c                                                32,500
               To Y's Capital A/c                                                 7,500
           (Z's share of goodwill credited to X and Y in their
           sacrificing ratio of 13 : 3)
Working Note: Calculation of Sacrificing Ratio:
Old Ratio - New Ratio
X's Sacrifice $\frac{5}{8}-\frac{4}{9}=\frac{45-32}{72}=\frac{13}{72}$
Y's Sacrifice= $\frac{3}{8}-\frac{3}{9}=\frac{27-24}{72}=\frac{3}{72}$
Thus Sacrificing ratio between X and Y = 13 : 3.
15. JOURNAL

Date    Particulars                                  L.F.   Dr. (₹)   Cr. (₹)
        Bank A/c                              Dr.           26,000
            To Nakul's Capital A/c                                     20,000
            To Premium for Goodwill A/c                                 6,000
        (Capital and premium for goodwill brought in by Nakul)
        Premium for Goodwill A/c              Dr.            6,000
            To Bakul's Capital A/c                                      4,000
            To Gokul's Capital A/c                                      2,000
        (Premium for goodwill credited to the old partners' capital accounts)
        Bakul's Capital A/c                   Dr.            2,000
        Gokul's Capital A/c                   Dr.            1,000
            To Bank A/c                                                 3,000
        (Half of the premium for goodwill withdrawn by Bakul and Gokul)
        Bakul's Capital A/c (Note 2)          Dr.            2,000
        Gokul's Capital A/c (Note 3)          Dr.           11,000
            To Bank A/c                                                13,000
        (Excess capital withdrawn by Bakul and Gokul)
Dr.                                   CAPITAL ACCOUNTS                                   Cr.
Particulars        Bakul (₹)  Gokul (₹)  Nakul (₹)   Particulars                  Bakul (₹)  Gokul (₹)  Nakul (₹)
To Bank A/c            2,000      1,000          -   By Balance b/d                  40,000     30,000          -
To Balance c/d        42,000     31,000     20,000   By Bank A/c                          -          -     20,000
                                                     By Premium for Goodwill A/c      4,000      2,000          -
                      44,000     32,000     20,000                                   44,000     32,000     20,000
To Bank A/c            2,000     11,000          -   By Balance b/d                  42,000     31,000     20,000
To Balance c/d        40,000     20,000     20,000
                      42,000     31,000     20,000                                   42,000     31,000     20,000
Dr.                           Bank Account                           Cr.
Particulars                       ₹    Particulars                       ₹
To Nakul's Capital A/c       20,000    By Bakul's Capital A/c        2,000
To Premium for Goodwill A/c   6,000    By Gokul's Capital A/c        1,000
                                       By Bakul's Capital A/c        2,000
                                       By Gokul's Capital A/c       11,000
                                       By Balance c/d               10,000
                             26,000                                 26,000
Notes to the Solution:-
1. New Profit Sharing Ratio:
Nakul's share of profit = $\frac{1}{4}$, Remaining Share = 1-$\frac{1}{4}=\frac{3}{4}$
Bakul's new share = $\frac{2}{3}$ of $\frac{3}{4}$ = $\frac{2}{4}$
Gokul's new share = $\frac{1}{3}$ of $\frac{3}{4}$ = $\frac{1}{4}$
Nakul's share = $\frac{1}{4}$
Nakul's Capital is ₹20,000 and his share of profit is $\frac{1}{4}$.
Based on Nakul's capital the total Capital of the firm will be :
20,000 $×\frac{4}{1}$ = ₹80,000
Hence, Bakul's capital in new firm should be = 80,000 $×$ $\frac{2}{4}$ = ₹40,000
Gokul's capital in new firm should be = 80,000 $×$ $\frac{1}{4}$ = ₹20,000
2. Bakul's Capital in the new firm should be ₹40,000, whereas his existing capital shown by his Capital A/c is ₹42,000. Hence, his excess Capital amounting to ₹2,000 will be refunded to him.
3. Gokul's Capital in the new firm should be ₹20,000, whereas his existing capital shown by his Capital A/c is ₹31,000. Hence, his excess Capital amounting to ₹11,000 will be refunded to him.
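The excess-capital figures in Notes 2 and 3 can be verified with a short script (ours, not the paper's):

```python
nakul_capital, nakul_share = 20_000, 1 / 4
total_capital = nakul_capital / nakul_share   # 80,000 for the whole firm
bakul_target = total_capital * 2 / 4          # 40,000
gokul_target = total_capital * 1 / 4          # 20,000
bakul_existing = 40_000 + 4_000 - 2_000       # opening + premium - withdrawal
gokul_existing = 30_000 + 2_000 - 1_000
print(bakul_existing - bakul_target, gokul_existing - gokul_target)  # excess refunded
```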
https://www.fool.com/investing/value/2011/06/07/does-intel-pass-buffetts-test.aspx | 1,513,347,887,000,000,000 | text/html | crawl-data/CC-MAIN-2017-51/segments/1512948572676.65/warc/CC-MAIN-20171215133912-20171215155912-00316.warc.gz | 755,013,682 | 20,013 | We'd all like to invest like the legendary Warren Buffett, turning thousands into millions or more. Buffett analyzes companies by calculating return on invested capital, or ROIC, in order to help determine whether a company has an economic moat -- the ability to earn returns on its money above that money's cost.
ROIC is perhaps the most important metric in value investing. By determining a company's ROIC, you can see how well it's using the cash you entrust to it and whether it's actually creating value for you. Simply, it divides a company's operating profit by how much investment it took to get that profit. The formula is
ROIC = Net operating profit after taxes / Invested capital
The nuances of the formula are explained in further detail here. This one-size-fits-all calculation cuts out many of the legal accounting tricks (such as excessive debt) that managers use to boost earnings numbers, and provides you with an apples-to-apples way to evaluate businesses, even across industries. The higher the ROIC, the more efficiently the company uses capital.
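As a sketch, the formula is a one-liner in Python; the inputs below are made-up illustrative figures, not any company's reported financials:

```python
def roic(operating_profit, tax_rate, invested_capital):
    nopat = operating_profit * (1 - tax_rate)   # net operating profit after taxes
    return nopat / invested_capital

# Illustrative numbers only (an assumed 29% tax rate, amounts in billions).
print(f"{roic(operating_profit=16.0, tax_rate=0.29, invested_capital=41.0):.1%}")
```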
Ultimately, we're looking for companies that can invest their money at rates that are higher than the cost of capital, which for most businesses is between 8% and 12%. Ideally, we want to see ROIC above 12% at a minimum, and a history of increasing returns, or at least steady returns, which indicate some durability to the company's economic moat.
Let's take a look at Intel (Nasdaq: INTC) and three of its industry peers, to see how efficiently they use cash. Here are the ROIC figures for each company over a few periods.
Company                              TTM      1 Year Ago   3 Years Ago   5 Years Ago
Intel                                27.7%    26.6%        21.5%         26.5%
Advanced Micro Devices (NYSE: AMD)   19.5%    N/M*         N/M*          14.3%
Texas Instruments (NYSE: TXN)        36.1%    27.7%        31.2%         26.5%
NVIDIA (Nasdaq: NVDA)                19.3%    20.3%        60.1%         45.1%
Source: Capital IQ, a division of Standard & Poor's. *The company reported negative operating earnings and no effective tax rate during this period, so ROIC is not measurable.
Intel has shown consistent increases in its returns on invested capital over the past three years, suggesting that its competitive position is growing stronger. Over the five-year period, three of the four companies have managed to increase their ROIC.
Businesses with consistently high ROIC show that they're efficiently using capital. They also have the ability to treat shareholders well, because they can then use their extra cash to pay out dividends to us, buy back shares, or further invest in their franchise. And healthy and growing dividends are something that Warren Buffett has long loved.
So for more successful investments, dig a little deeper than the earnings headlines to find the company's ROIC. If you'd like to add these companies to your Watchlist or set up a new Watchlist, just click here.
Jim Royal, Ph.D., does not own shares of any company mentioned here. The Motley Fool owns shares of Texas Instruments. The Fool owns shares of and has bought calls on Intel. Motley Fool newsletter services have recommended buying shares of NVIDIA and Intel. Motley Fool newsletter services have recommended creating a diagonal call position in Intel. Motley Fool newsletter services have recommended writing puts in NVIDIA. Try any of our Foolish newsletter services free for 30 days. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy. | 783 | 3,494 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.703125 | 3 | CC-MAIN-2017-51 | longest | en | 0.948201 |
http://2010.igem.org/wiki/index.php?title=Team:Imperial_College_London/Modelling/Protein_Display/Detailed_Description&diff=162699&oldid=162594 | 1,576,276,540,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575540569146.17/warc/CC-MAIN-20191213202639-20191213230639-00367.warc.gz | 1,628,843 | 13,285 | # Team:Imperial College London/Modelling/Protein Display/Detailed Description
## Revision as of 16:57, 26 October 2010
A major part of the project consisted of modelling each module. This enabled us to decide which ideas we should implement. Look at the Fast Response page for a great example of how modelling has made a major impact on our design!
Detailed Description
This model consists of seven parts that had to be developed:
1. Identification of all active and relevant elements of the isolated part of the system.
2. Identification of interactions between the identified elements.
3. Identification of threshold concentration of auto inducing peptide (AIP) needed for activation of the receptor.
4. Determining the volume of cell wall.
5. Defining a control volume around the bacterial cell (after cleavage, the surface protein will float around in the extracellular environment).
6. Determining the importance of localised concentrations in a Control Volume.
7. Determining the surface protein production to estimate the maximum abundance on the bacterial surface.
## 1. Elements of the system
1. The surface protein consists of a cell wall binding domain, linker, AIP (Auto Inducing Peptide). It is expressed by constitutive gene expression. It was assumed that the bacteria would be fully grown before carrying out the detection so the cell wall would be covered with as many surface proteins as the cell can maintain.
2. Schistosoma elastase (enzyme released by the parasite) cleaves the AIP from the cell wall binding domain at the linker site. In the laboratory we used TEV protease as we could not get hold of Schistosoma elastase, so the model was adjusted appropriately (TEV enzyme kinetic parameters were used).
3. The ComD receptor is activated by a sufficiently high AIP concentration. Once activated, ComD signals into the cell to activate the colour response.
## 2. Interactions between elements
Apart from the proteins being expressed from genes, there was only one more chemical reaction identified in this part of the system. This is the cleavage of proteins, which is an enzymatic reaction:
• Substrate (S) = Protein
• Enzyme (E) = TEV (Protease)
• Product (P) = Peptide
This enzymatic reaction can be rewritten as ordinary differential equations (ODEs), which are of a similar form to the 1-step amplification model. However, most of the constants and initial concentrations are different.
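As a sketch of what those ODEs look like, the mass-action equations for E + S ⇌ ES → E + P can be integrated with a simple forward-Euler loop; the rate constants and initial concentrations below are illustrative placeholders, not the TEV parameters used in the actual model:

```python
# Mass-action kinetics for E + S <-> ES -> E + P (forward Euler).
# k1, k_1, k2 and the initial concentrations are placeholders, not TEV values.
k1, k_1, k2 = 1e3, 1e-3, 0.16            # L/(mol*s), 1/s, 1/s
E, S, ES, P = 1e-6, 1e-3, 0.0, 0.0       # mol/L
dt, t_end = 1e-3, 60.0                   # s
for _ in range(int(t_end / dt)):
    v_bind = k1 * E * S - k_1 * ES       # net complex formation
    v_cat = k2 * ES                      # catalytic cleavage
    E += (v_cat - v_bind) * dt
    S += -v_bind * dt
    ES += (v_bind - v_cat) * dt
    P += v_cat * dt
print(f"P(60 s) = {P:.2e} mol/L")        # cleaved peptide produced
```

Note that total enzyme (E + ES) and total substrate (S + ES + P) are conserved, which is a useful sanity check on any such integration.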
This model is quite peculiar as we realised that its behaviour is not only dependent on time, but space as well. Namely, the cleavage reaction happens on the cell wall of bacteria. However, AIP that has already been cleaved is allowed to diffuse in any direction. Since we were not sure how to model this scenario, we decided to determine the importance of localised concentrations in a Control Volume.
## 3. Threshold concentration of AIP
The optimal peptide concentration required to activate ComD is 10 ng/ml [1]. This is the threshold value for ComD activation. However, the minimum concentration of peptide to give a detectable activation is 0.5 ng/ml.
The threshold for ComD activation is c_th = 4.4658×10⁻⁹ mol/L.
Converting 10 ng/ml to 4.4658×10⁻⁹ mol/L
• The mass of a peptide is 2.24 kDa = 3.7184×10⁻²¹ g.
• The number of molecules in one ml is 10 ng / 3.7184×10⁻²¹ g = 2.6893×10¹². In a litre, the number of molecules is 2.6893×10¹⁵ molecules/L.
• Dividing this value by Avogadro's constant gives the threshold concentration of c_th = 4.4658×10⁻⁹ mol/L.
• The threshold for minimal activation of the receptor is 2.2329×10⁻¹⁰ mol/L.
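The conversion can be reproduced in a couple of lines (our script; small differences in the last digits come from the value taken for Avogadro's constant):

```python
N_A = 6.022e23
peptide_mass_g = 2240 / N_A                  # 2.24 kDa in grams per molecule
molecules_per_ml = 10e-9 / peptide_mass_g    # at 10 ng/ml
c_threshold = molecules_per_ml * 1000 / N_A  # mol/L
print(f"{c_threshold:.4e} mol/L")            # ~4.46e-9 mol/L
print(f"{c_threshold / 20:.4e} mol/L")       # minimal activation at 0.5 ng/ml
```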
## 4. Cell Wall Volume
The cell wall volume was needed in order to calculate the concentrations in the enzymatic reaction.
The volume of B. subtilis is 2.79 μm³ and the thickness of the cell wall is 35 nm [5]. To approximate the cell wall volume, assume that B. subtilis is a sphere rather than a rod. The outer radius follows from the total volume: 0.874 μm. Subtracting the wall thickness from the outer radius gives the inner radius of the sphere: 0.839 μm. The cell wall volume equals the difference between the outer volume and the inner volume (calculated from the inner radius): cell wall volume = 0.32×10⁻¹⁵ dm³ (i.e. 3.2×10⁻¹⁹ m³; the figure is used in litres in the concentration calculations of section 7).
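The spherical-shell arithmetic can be checked numerically (our script, using only the figures quoted above):

```python
import math

V_cell = 2.79e-18        # total cell volume in m^3 (2.79 um^3)
t_wall = 35e-9           # wall thickness in m
r_out = (3 * V_cell / (4 * math.pi)) ** (1 / 3)
r_in = r_out - t_wall
V_wall = V_cell - (4 / 3) * math.pi * r_in ** 3
print(f"r_out = {r_out:.3e} m, V_wall = {V_wall:.2e} m^3 = {V_wall * 1e3:.2e} dm^3")
```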
## 5. Control volume selection
Note that product of the enzymatic reaction, AIP, is allowed to diffuse outside the cell. Hence, it is important to take into account the cell boundaries. It is worth considering whether diffusion or fluid movements will play a significant role.
Initially, we defined a control volume assuming that bacteria would grow in close colonies on the plate. We realized that our initial choice of control volume was not accurate, since our bacteria are meant to be used in suspension so we had to reconsider this issue.
This control volume is considered to be wrong, but the details were kept for reference.
Initial Choice of Control Volume
Control volume initial choice
The control volume: The inner boundary is determined by the bacterial cell (proteins, once displayed and cleaved, cannot diffuse back into the bacterium). The outer boundary is more time-scale dependent. We have assumed that after mass cleavage of the display proteins by TEV, many of these AIPs will bind to the receptors quite quickly (e.g. within 8 seconds). Our volume is determined by the distance that AIPs could travel outwards by diffusion within that short time. In this way, we are sure that the concentration of AIPs outside our control volume after a given time is approximately 0. This approach is not very accurate and can lead to false negative conclusions (as in reality there will be a concentration gradient, with the highest concentration at the cell wall).
Control volume (Volume of B.sub to be excluded. x indicates the distance travelled by AIPs from the surface in time t).
The diffusion distance was calculated using Fick's 1st Law: x = √(2Dt), where x is the diffusion distance, D the diffusion constant and t the time of diffusion.
D_average = 10.7×10⁻¹¹ m² s⁻¹ for a protein in agarose gel at pH 5.6 [3]
t = 8 s (arbitrarily chosen)
This gives: x = 4.14×10⁻⁵ m
The control volume can be calculated by adding 2x to the length and the diameter of the cell.
This gives a control volume (CV) = 4.81×10⁻⁷ m³
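A one-line check of the diffusion distance used above (our script; note the page itself marks this subsection as kept for reference only):

```python
import math

D = 10.7e-11                 # m^2/s, protein in agarose gel [3]
t = 8                        # s
x = math.sqrt(2 * D * t)     # 1-D diffusion distance
print(f"x = {x:.2e} m")      # ~4.14e-5 m
```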
Using CFU to estimate the spacing between cells
CFU stands for colony-forming unit, a measure of bacterial numbers. For liquids, CFU is measured per ml. We already had CFU/ml data from the Imperial iGEM 2008 team, so we could use it to estimate the number of cells in a given volume from the optical density measured with a spectrometer at a 600 nm wavelength. The graph below is taken from the Imperial iGEM 2008 wiki page [4].
CFU/ml vs. Optical Density at 600nm (OD600).
The graph shows values of CFU/ml for different optical densities. The range of CFU/ml is therefore between 0.5×10⁸ and 5×10⁸.
In this calculation, we assumed that exactly one cell grows into one colony. Therefore, the maximum number of cells in 1 ml of solution is 5×10⁸. Taking the volume of 1 ml = 10⁻³ dm³ and dividing by the (maximum) number of cells in 1 ml gives the average control volume (CV) around each cell: 2×10⁻¹² dm³/cell. For simplicity, we choose the control volume to be cubic. Taking the cube root of this value gives the length of each side of the control volume.
Side length of the cubic control volume: y = 1.26×10⁻⁴ dm = 1.26×10⁻⁵ m.
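The per-cell control volume and cube side follow directly (our script):

```python
cells_per_ml = 5e8                   # upper bound from the CFU/ml graph
cv_dm3 = 1e-3 / cells_per_ml         # 1 ml = 10^-3 dm^3
side_m = cv_dm3 ** (1 / 3) * 0.1     # cube side, converted from dm to m
print(f"CV = {cv_dm3:.0e} dm^3/cell, side y = {side_m:.2e} m")
```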
Choice of Control Volume allows simplifications
• Firstly, assume that the cells are placed in the centre of the CV. Hence, after cleavage the protein has an average distance of y/2 to travel in order to cross the boundary of the CV. Even if the bacterium were not in the centre of the CV and an AIP had to diffuse across the full distance y, simplified one-dimensional Fick's Law shows this happens within 0.01 ms. Since these times are so small, we can approximate the model as having a uniform concentration across the volume. Because we are underestimating the AIP concentration right next to the cell's surface, we are overestimating the time required for the AIP concentration to reach the threshold level.
• We can neglect the diffusive fluxes across the CV border (see figure below). Assuming that adjacent cells produce the peptide at the same rate and that the concentration of TEV is the same around each cell, the fluxes are of equal value, giving a net flux of zero. Hence, we can neglect diffusion and limit the model to one bacterium.
Figure showing two cells with their control volumes.
## 6. Localised concentrations
It was realised that, for the above choice of control volume, the system would be unlikely to perform, because unrealistically high concentrations of TEV or Schistosoma protease would have been required. Hence, it was deduced that localised concentrations play an important role in this model.
It was arbitrarily chosen that 20% to 50% of AIPs will bind to receptors rather than diffuse away. Several arguments support this kind of percentage:
• An AIP very close to the cell surface has roughly equal chances of heading back towards the bacterium and of diffusing away from it, as the cell is far bigger than the peptide.
• There are likely to be some chemical interactions between the AIP and the bacterium that attract the AIP to the host bacterium, e.g. electrostatic attraction.
The percentage coefficient scales the AIP concentrations at the very end, after the ODEs have been solved.
## 7. Protein production
• The paper mentions that each cell displays 2.4×10⁵ peptides. [2]
• 2.4×10⁵ molecules = 2.4×10⁵ / 6.02×10²³ mol = 0.398671×10⁻¹⁸ mol
• Volume of the B. subtilis cell wall: 0.32×10⁻¹⁵ dm³
• Concentration = amount of substance / volume [mol/L]
• c = 1.24×10⁻³ mol/L. This is the concentration of protein that will be displayed on a single cell of B. subtilis.
Hence, we can deduce that the final concentration that the protein expression will tend to is c_final = 1.24×10⁻³ mol/dm³.
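The same figure falls out of a two-line computation (our script; the small difference from 1.24×10⁻³ comes from the rounding of Avogadro's constant):

```python
N_A = 6.022e23
wall_volume_L = 0.32e-15                  # cell-wall volume in dm^3 (litres)
c_final = 2.4e5 / N_A / wall_volume_L     # mol/L on a single cell
print(f"c_final = {c_final:.2e} mol/L")   # ~1.25e-3 mol/L
```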
Therefore, we can model the protein production by transcription and translation and adjust the production constants so that the concentration tends towards c_final. The degradation rate was kept constant (the same as used in the output amplification module), and the production rate was adjusted to match the final concentration to be achieved.
Using a similar model to the simple production of Dioxygenase for the Output Amplification Model (Model preA), we obtain the following graph:
Production of protein by transcription and translation.
## References
1. Havarstein, L., Coomaraswamy, G. & Morrison, D. (1995) An unmodified heptadecapeptide pheromone induces competence for genetic transformation in Streptococcus pneumoniae. Proc. Natl. [Online] 92, 11140-11144. Available from: http://ukpmc.ac.uk/backend/ptpmcrender.cgi?accid=PMC40587&blobtype=pdf [Accessed 27th August 2010]
2. Kobayashi, G. et al (2000) Accumulation of an artificial cell wall-binding lipase by Bacillus subtilis wprA and/or sigD mutants. FEMS Microbiology Letters. [Online] 188(2000), 165-169. Available from: http://onlinelibrary.wiley.com/doi/10.1111/j.1574-6968.2000.tb09188.x/pdf [Accessed 27th August 2010]
3. Gutenwik, J., Nilsson, B. & Axelsson, A. (2003) Determination of protein diffusion coefficients in agarose gel with a diffusion cell. Biochemical Engineering Journal. [Online] 19(2004), 1-7. Available from: http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6V5N-4B3MXDC-2-K&_cdi=5791&_user=217827&_pii=S1369703X03002377&_origin=search&_coverDate=07%2F01%2F2004&_sk=999809998&view=c&wchp=dGLzVtb-zSkzS&md5=c17d0e7320f03931006f9b1a10a438b9&ie=/sdarticle.pdf [Accessed August 20th 2010]
4. Imperial College London (2008) Biofabricator Subtilis - Designer Genes. [Online] Available from: http://2008.igem.org/Imperial_College/18_September_2008 [Accessed 1st September 2010]
5. Graham, L. L. & Beveridge, T. J. (1994) Structural Differentiation of the Bacillus subtilis 168 Cell Wall. Journal of Bacteriology. [Online] 176(5), 1413-1420. Available from: http://jb.asm.org/cgi/reprint/176/5/1413?ijkey=27dafbac7e23dee50390d3fe67d9d1bab0c6f48c [Accessed October 26th 2010]
http://sciencepantheism.com/worksheet/gcse-maths-revision-worksheets-higher.php | 1,611,665,453,000,000,000 | text/html | crawl-data/CC-MAIN-2021-04/segments/1610704799741.85/warc/CC-MAIN-20210126104721-20210126134721-00143.warc.gz | 90,968,262 | 12,502 | ## sciencepantheism.com - the pro math teacher
# Gcse Maths Revision Worksheets Higher
Public on 18 Oct, 2016 by Cyun Lee
### gcse maths higher revision guide educational math activities
Name : __________________
Seat Num. : __________________
Date : __________________
6858 + 1064 = ...
9538 + 6342 = ...
2531 + 5280 = ...
4022 + 9147 = ...
9854 + 9014 = ...
1077 + 9648 = ...
4476 + 3944 = ...
http://www.qacollections.com/Do-you-think-big-foot-exists | 1,495,814,323,000,000,000 | text/html | crawl-data/CC-MAIN-2017-22/segments/1495463608668.51/warc/CC-MAIN-20170526144316-20170526164316-00418.warc.gz | 764,018,132 | 6,078 | # Think I Cut My Vain On Foot (pain)?
no it is not true that if you hit a vein it won't stop bleeding. just apply direct pressure to make the bleeding stop. clean the cut with warm soapy water, gently pat dry, apply Neosporin and a cle... Read More »
Top Q&A For: Think I Cut My Vain On Foot (pain)
## My dad is 6 foot and my mom is 5 foot 5 inches while i am 13 and i am 5 foot 1 inch how tall will i be?
Whoever has answered this question has answered it wrong. You can never correctly predict the height of any person because the height of the person depends on the proper functioning of the gene mak... Read More »
## How many gallons are in a 4 foot by 12 foot by 18 foot round swimming pool?
Answer Multiply all the numbers to get the volume of the pool in cubic feet. Then divide by 7.5. That assumes the water is filled to a height of four feet. Some above-ground pools have 52" wall... Read More »
## At exactly the same time and place a 6-foot-high stick casts a 9-foot shadow while a pole casts a 12-foot shad?
The ratio of opposite side to adjacent side (the tangent of the sun's angle) is 6/9 = 0.6667 for the stick. For the pole, x/12 = 0.6667, so x = 12 × 0.6667 ≈ 8 feet... Read More »
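The proportion in that snippet is just similar triangles; a minimal sketch (assuming, as the truncated question suggests, that the pole's height is being asked for):

```python
# Similar triangles: at the same time and place, height/shadow is constant.
stick_height = 6.0   # feet
stick_shadow = 9.0   # feet
pole_shadow = 12.0   # feet

# x / pole_shadow = stick_height / stick_shadow  =>  x = 12 * (6/9)
pole_height = pole_shadow * stick_height / stick_shadow   # 8.0 feet
```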
## Do soldiers exist?
Yes, soldiers do exist, and they serve in the military.
https://www.netexplanations.com/tag/new-learning-composite-mathematics-class-7-solution/ | 1,656,983,300,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656104506762.79/warc/CC-MAIN-20220704232527-20220705022527-00557.warc.gz | 950,806,634 | 16,558 | ## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Pairs and Angles Chapter 9F Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Pairs and Angles Self Practice 9F Solution
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Pairs and Angles Chapter 9E Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Pairs and Angles Self Practice 9E Solution. Related: Self Practice 9A Solution | Self Practice 9B Solution | Self Practice 9C Solution | Self Practice 9D Solution
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Pairs and Angles Chapter 9D Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Pairs and Angles Self Practice 9D Solution (1) Find each angle in the figure giving reasons. Related: Self Practice 9A Solution | Self Practice 9B Solution | Self Practice 9C Solution
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Pairs and Angles Chapter 9C Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Pairs and Angles Self Practice 9C Solution (1) Using property of corresponding angles, find each of the lettered angles giving reasons. (2) Using the properties of adjacent angles, vertically opposite angles and corresponding angles, fill in the size of each angle in diagram. Solution Number […]
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Pairs and Angles Chapter 9B Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Pairs and Angles Self Practice 9B Solution (1) Name the pair of angles marked as A for alternate, B for corresponding and C for co-interior. Solution: (a) A (b) B (c) C (d) A (e) C (f) B (g) B (h) A (2) Give examples […]
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Pairs and Angles Chapter 9A Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Pairs and Angles Self Practice 9A Solution (1) What is the measure of the complement of 38°? (2) What is the measure of the supplement of 170°? (3) Tell whether the angles shown below are complementary, supplementary or neither. (4) For each angle in group A, find its complement […]
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Percentage and Its Applications Chapter 8F Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Percentage and Its Applications Self Practice 8F Solution (1) 55 per cent of the students in a school are girls. What percentage of the students are boys? Find the number of students, if there are 216 boys. (2) If 10% is deducted from a bill, […]
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Percentage and Its Applications Chapter 8E Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Percentage and Its Applications Self Practice 8E Solution (1) Ravi’s class has 19 boys and 12 girls. What is its percentage composition? (2) Chalk contains 10% calcium, 3% carbon and 12% oxygen. Find the amount of carbon and calcium (in grams) in 2 and 1/2 […]
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Percentage and Its Applications Chapter 8D Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Percentage and Its Applications Self Practice 8D Solution (1) Increase: (a) Rs. 50 by 20% (b) Rs. 700 by 8% (c) 600 g by 18% (d) 4L by 15% (e) 8 km by 19% (f) 5 kg by 23% (2) Decrease: (a) Rs. 170 by […]
## New Learning Composite Mathematics Class 7 SK Gupta Anubhuti Gangal Percentage and Its Applications Chapter 8C Solution
New Learning Composite Mathematics SK Gupta Anubhuti Gangal Class 7 Percentage and Its Applications Self Practice 8C Solution (1) What percentage is: (a) 7 of 70? (b) 8 of 32? (c) 19 of 25? (d) 117 of 650? (e) 180 of 300? (f) 850 of 3400? (g) 780 of 1950? (h) 27 of 9? (2) […] | 1,056 | 4,004 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.3125 | 3 | CC-MAIN-2022-27 | longest | en | 0.747769 |
https://myelectrical.com/notes/entryid/192/fault-calculations-introduction | 1,718,482,777,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861606.63/warc/CC-MAIN-20240615190624-20240615220624-00461.warc.gz | 362,993,045 | 19,218 | # Fault Calculations - Introduction
Fault calculations are one of the most common types of calculation carried out during the design and analysis of electrical systems. These calculations involve determining the current flowing through circuit elements during abnormal conditions – short circuits and earth faults.
### Types of Fault
Symbol Definition
c - voltage factor (IEC 60909); LV: Isc max = 1.1, Isc min = 0.95; MV, HV: Isc max = 1.1, Isc min = 1
Ik - steady state fault current
I''k - initial symmetrical fault current
k or k3 (subscript) - 3-phase fault
k1 - phase to earth (or phase to neutral) fault
k2 - phase to phase fault
k2E or kE2E - phase to phase to earth fault
IrM - rated current of any motor
Un - nominal voltage
U0 - nominal line to neutral voltage
ULL - nominal line to line voltage
U - system voltage
Z - circuit impedance
A fault is an abnormal or unintended connection of live elements of a system to each other or to earth. The impedance of such connections is often very low, resulting in large currents flowing. The energy contained in fault currents can quickly heat components, create excessive forces and result in devastating explosions of equipment.
Typically we deal with three types of fault:
1. Three Phase Faults
2. Phase to Phase Faults
3. Earth Faults
Typically the highest fault current is given by a three phase fault (although there are exceptions).
### Standards
IEC 60909 'Short Circuit Currents in Three Phase Systems' describes an internationally accepted method for the calculation of fault currents. IEC 60781 is an adaptation of the 60909 standard and applies only to low voltage systems.
IEC 60909 Fault Current
In applying these standards, two levels of fault based on voltage factor are typically calculated
• the maximum current which causes the maximum thermal and electromagnetic effects on equipment (used to determine the equipment rating)
• the minimum current (which may be used for the setting of protective devices)
The standards also idealise the fault, enabling each stage to be analysed and understood. The image (click for a larger version), shows this waveform.
Depending on the position within the cycle at which the fault forms, a dc offset will be present, decaying over time to zero. This creates an initial symmetrical short circuit current I''k, which decays over time to the steady state short circuit current Ik.
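The asymmetry produced by the dc offset is often summarised with the IEC 60909 peak factor, kappa = 1.02 + 0.98·e^(-3R/X). The expression and the 25 kA / R/X = 0.1 example below are standard values supplied for illustration, not figures from this article:

```python
import math

def kappa(r_over_x):
    """IEC 60909 peak factor: kappa = 1.02 + 0.98 * exp(-3 * R/X)."""
    return 1.02 + 0.98 * math.exp(-3.0 * r_over_x)

def peak_current(i_k_initial, r_over_x):
    """First-loop peak of the fault current: ip = kappa * sqrt(2) * I''k."""
    return kappa(r_over_x) * math.sqrt(2) * i_k_initial

k = kappa(0.1)                  # ~1.75 for a fairly inductive network
ip = peak_current(25e3, 0.1)    # ~61.7 kA peak from a 25 kA I''k
```

The peak current is what switchgear must withstand mechanically, which is why the making capacity of a breaker is quoted as a peak value.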
### Three Phase Faults
Three Phase Fault
In a three phase fault, all three phases (L1, L2 and L3) are shorted together.
To find the fault current at any point in the network, a sum is made of the impedances in the network between the source of supply (including the source impedance) and the point at which the fault occurs.
To find the fault current Ik, the nominal applied voltage, U0 is divided by the summed impedance Z.
### Phase to Phase Faults
Phase to Phase Fault
In a phase to phase fault (L1 to L2 for example), two phases are connected together.
The fault current is again, the nominal applied voltage divided by the summed impedance.
### Earth Faults
Earth Fault
In an earth fault, one phase is directly connected to earth (L1 to earth for example).
To find the value of earth fault current at any point in a network, a sum is made of the earth fault impedances in the network between the source of supply (including source impedance) and the return path impedances.
## Use of Tables
Often, when only a quick ballpark figure is needed, tables are adequate. This is particularly the case for low voltage systems. In other cases, actual equipment parameters may not be available and it is necessary to resort to typical values. The 'Notes' section of the site contains a selection of tables which will help in these instances:
Low Voltage Fault Tables
Fault Calculations - Typical Equipment Parameters
## Basic Fault Calculations
Fault Type | Calculation
3-phase fault | I''k3 = c Un / (√3 Zk)
phase-phase fault | I''k2 = c Un / (2 Z1)
phase-earth fault | I''k1 = √3 c Un / (Z1 + Z2 + Z0)
One of the simplest ways to look at fault calculations is through the application of Ohm's law: knowing the impedance of the fault loop and the voltage across it enables the fault current to be calculated.
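As a sketch, the three fault types can be evaluated with the usual IEC 60909 expressions. The formulas and the 400 V / 10 mohm example figures below are supplied here from the standard's common forms for illustration, not reproduced from the article:

```python
import math

def i_k3(c, u_n, z_k):
    """3-phase fault: I''k3 = c*Un / (sqrt(3)*|Zk|)."""
    return c * u_n / (math.sqrt(3) * abs(z_k))

def i_k2(c, u_n, z_1):
    """Phase-to-phase fault: I''k2 = c*Un / (2*|Z1|)."""
    return c * u_n / (2 * abs(z_1))

def i_k1(c, u_n, z_1, z_2, z_0):
    """Phase-to-earth fault: I''k1 = sqrt(3)*c*Un / |Z1 + Z2 + Z0|."""
    return math.sqrt(3) * c * u_n / abs(z_1 + z_2 + z_0)

# Illustrative 400 V LV system, c = 1.1, 10 mohm positive-sequence impedance
i3 = i_k3(1.1, 400, 0.010)                    # ~25.4 kA
i2 = i_k2(1.1, 400, 0.010)                    # sqrt(3)/2 of i3, ~22 kA
i1 = i_k1(1.1, 400, 0.010, 0.010, 0.030)      # earth-return path raises Z0
```

Note the phase-to-phase value is always √3/2 (about 87%) of the three-phase value for the same Z1, which is why the three-phase figure usually governs equipment rating.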
### Per Unit Fault Calculations
In systems with varying voltage levels, per unit calculations enable fault levels to be determined by normalising the system to a common base. This method is known as the per unit method or per unit system.
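As an illustration of the normalisation step (standard per-unit relations, not taken from the note itself; the 1 MVA / 11 kV transformer figures are hypothetical):

```python
def z_pu(z_ohm, s_base, u_base):
    """Normalise an ohmic impedance: Z_pu = Z * S_base / U_base**2."""
    return z_ohm * s_base / u_base ** 2

def rebase(z_pu_old, s_old, u_old, s_new, u_new):
    """Express a per-unit impedance on a new (S, U) base."""
    return z_pu_old * (s_new / s_old) * (u_old / u_new) ** 2

# A transformer at 5% impedance on its own 1 MVA, 11 kV rating,
# moved to a 10 MVA study base at the same voltage:
z_study = rebase(0.05, 1e6, 11e3, 10e6, 11e3)   # 0.5 pu
```

Once every impedance is on the same base, impedances across transformers can simply be added, which is the whole appeal of the method.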
To find out more about per unit calculations, refer to our note:
### Symmetrical Components
For unbalanced conditions the calculation of fault currents is more complex. One method of dealing with this is the use of symmetrical components. In symmetrical components, the unbalanced system is broken down into three separate symmetrical systems, each of which is easily solved.
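The decomposition can be sketched with the standard 120° rotation operator a (the formulas are the textbook symmetrical-component relations; the 100 A balanced set below is illustrative):

```python
import cmath

a = cmath.exp(2j * cmath.pi / 3)   # the 120-degree rotation operator

def sequence_components(ia, ib, ic):
    """Zero, positive and negative sequence components of a 3-phase set."""
    i0 = (ia + ib + ic) / 3
    i1 = (ia + a * ib + a * a * ic) / 3
    i2 = (ia + a * a * ib + a * ic) / 3
    return i0, i1, i2

# A balanced 100 A set contains only a positive-sequence component
ia = cmath.rect(100, 0)
ib = cmath.rect(100, -2 * cmath.pi / 3)
ic = cmath.rect(100, +2 * cmath.pi / 3)
i0, i1, i2 = sequence_components(ia, ib, ic)   # ~0, ~100, ~0
```

Any unbalance (an earth fault, for example) shows up as non-zero zero- or negative-sequence components, which is also how many protection relays detect such faults.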
To find out more about symmetrical components, refer to our note:
### IEC 60909 - Short-circuit currents in three-phase a.c. systems
Often when performing short circuit calculations, it is necessary to carry these out against a reference standard. By using a reference standard, calculations are consistent, can be justified and are provided with an audit trail.
IEC 60909 is the international standard for the calculation of short circuit currents. The document specifies a standardised method for the development of short circuit calculations, as well as providing guidance on equipment data.
To find out more about how the standard works, refer to our note:
• Fault Calculation - IEC 60909 - note coming soon
### Motor Contributions
During fault conditions motors operate as generators (until their rotation slows) and will contribute current to the fault. The IEC 60909 standard gives guidance on how to take motor contributions into consideration.
To simplify calculations, the contribution of motors to the fault can be disregarded if the combined rated current of the motors is no more than 1% of the initial symmetrical short-circuit current calculated without them (ΣIrM ≤ 0.01 I''k).
About the author: Steven has over twenty-five years' experience working on some of the largest construction projects. He has a deep technical understanding of electrical engineering and is keen to share this knowledge.
myElectrical Engineering
#### View 3 Comments (old system)
2. ramesh cuppu says:
6/27/2013 10:43 AM
Dear sir
I was doing some IDMT Calculations. On checking my calculations with your tool "IDMT Tripping time".
I notice some discrepancy with respect IEEE V I Curve. For fault current of 4000 A, and pick up current of 1000 A I get an answer of 1 sec from the tool.
Whereas with an actual calculation based on the formula, the manually calculated value is 0.256.
All other calculations are working fine. I face problem only in respect of IEEE V I Curve.
Ramesh
• Steven says:
6/27/2013 11:43 AM
Thanks Jorge. There was an error in the exponential (α) value. I've correct it now and it's giving the right answer. I had copied the values from my post Electromechanical Relays, in which I had made a typo. Corrected that also.
https://www.jiskha.com/display.cgi?id=1315858964 | 1,516,140,072,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084886739.5/warc/CC-MAIN-20180116204303-20180116224303-00697.warc.gz | 940,771,283 | 3,946 | # math
I need to approximate profit in this equation:
3. Hot Rocks Music sells CDs. The profit can be approximated by the following equation: (equation not shown), where x represents the number of thousands of CDs sold.
Approximately how many CDs must be sold in order for the company to make a profit?
## Similar Questions
1. ### maths
you are at the music store looking for CDs. the store has CDs for \$10 and \$15. you have \$55 to spend. write an equation that represents the different numbers of \$10 and \$15 CDs you can buy. hmm now.. how do i do this?
2. ### Math
A record store sells CDs for \$12 each. A music club offers 5 free CDs and charges \$15 for each additional CD. How many CDs would you have to buy for the cost to be the same?
3. ### Math
Tell whose CD collection has the greater ratio of rock CDs to total CDs. Glen has 9 rock CDs, 4 classical CDs, and 5 other kinds of CDs. Nina has 12 rock CDs, 8 classical CDs, and 7 other kinds of CDs.
4. ### Algebra
Natalie earns \$2.50 for each CD she sells and \$3.50 for each DVD she sells. Natalie sold 45 DVDs last year. She earned a total of \$780 last year selling CDs and DVDs. Write an equation that can be used to determine the number of CDs …
6. ### math
You are at the music store to buy some CDs. You have \$45 to spend and the store sells CDs for 12.99 each. Write an inequality that represents the number, n, of CDs that you can buy without spending more money than you have. help greatly …
7. ### Maths
The owner of a music store received a shipment of 1,532 CDs. The CDs came in 37 boxes; the same number of CDs were in 36 of the boxes. How many CDs were in the remaining box?
8. ### pre-algebra
A band will sell CDs of their music at their concert for \$5.00 each. The band ordered 300 CDs at a cost of \$1.25 each. Which inequality represents the number of CDs, n, the band needs to sell to make a profit of at least \$500?
9. ### Math
Gisela has 135 CDs; 11 are pop music. She has 3 times as many rock CDs as classical CDs. How many rock CDs does she have?
10. ### math
Julia works at a music store. One of her jobs is to stock new CDs on the shelf. A recent order arrived with 215 classical CDs, 125 jazz CDs and 330 soft rock CDs. Julia needs to place the CDs on sale racks with 5 CDs in each group. …
https://brainmass.com/business/accounting-for-corporations/growth-rate-in-dividend-55358 | 1,481,422,874,000,000,000 | text/html | crawl-data/CC-MAIN-2016-50/segments/1480698543782.28/warc/CC-MAIN-20161202170903-00004-ip-10-31-129-80.ec2.internal.warc.gz | 815,540,810 | 18,557 | Share
Explore BrainMass
# Growth rate in Dividend
100. A company's stock has a current market value of 40.50. The company just paid a dividend equal to 1.50 per share, which is expected to grow at a constant rate forever into the future. If the company's marginal investors require a rate of return equal to 12%, what is the rate at which dividends are expected to grow in the future?
8.3%
12.0%
4.0%
8.0%
There is not enough information to answer this question.
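Under the constant-growth (Gordon) model that the question assumes, P0 = D0(1 + g)/(r - g). Rearranging for g gives g = (r·P0 - D0)/(P0 + D0), which can be checked in a few lines:

```python
def implied_growth(price, d0, r):
    """Solve P0 = D0*(1+g)/(r-g) for the constant growth rate g."""
    return (r * price - d0) / (price + d0)

g = implied_growth(40.50, 1.50, 0.12)    # 0.08, i.e. 8%

# Check: next year's dividend discounted at (r - g) recovers the price.
d1 = 1.50 * (1 + g)                      # 1.62
price_check = d1 / (0.12 - g)            # 40.50
```

The result, 0.08, points to the 8.0% option.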
#### Solution Summary
This explains the computation of the growth rate of a dividend with the help of an example.
http://www.freethesaurus.com/Bayes | 1,542,064,717,000,000,000 | text/html | crawl-data/CC-MAIN-2018-47/segments/1542039741151.56/warc/CC-MAIN-20181112215517-20181113001517-00306.warc.gz | 432,236,886 | 16,742 | # Bayes
• noun
## Synonyms for Bayes
### English mathematician for whom Bayes' theorem is named (1702-1761)
#### Synonyms
References in periodicals archive:
Keywords: Twitter sentiment analysis, Machine learning, Naive Bayes, Attribute weighting, Feature selection
In the second half The White House dominated the game and were rewarded when Nathan Bayes scored his second to bring the game level and it was left to substitute Christopher Atkinson to wrap up the match when he struck late in normal time to secure all three points for Billingham The White House.
Combining the joint prior and the likelihood function using Bayes theorem we get the following posterior distribution
In the second half, good work by Bayes and Hebb resulted in Junior Masandi breaking clear and poking home from close range.
The first stage is to apply Naive Bayes algorithm and classifies the preprocessed data sets.
Bayes explains how guided imagery can be used during a journey.
In order to solve this problem, in consideration of the correlation between the total emergency loss and the delay of the emergency supplies, the total loss Bayes risk function of the disaster area is formulated.
Keywords: McNemar's Test, Naive Bayes, Boosting, Bagging, Stacking, Performance Evaluation, Classification
From Bayes Theorem, p(winning | it has rained) = 0.
The properties of Bayes estimators of the parameters are studied under different loss functions.
And the best way to answer them was discovered nearly 300 years ago by an English Presbyterian minister called Thomas Bayes.
The Author of the next paper has proposed Associative Classification with Bayes (AC-Bayes).
Reference [5] determined the Bayes estimates of the reliability function and the hazard rate of the Weibull failure time distribution by employing squared error loss function.
Classification algorithms that take advantage of Bayes' Theorem and prevalence statistics, dubbed naive Bayes classifiers, aim to accomplish this with readily available data.
https://numberworld.info/5330 | 1,566,323,455,000,000,000 | text/html | crawl-data/CC-MAIN-2019-35/segments/1566027315551.61/warc/CC-MAIN-20190820154633-20190820180633-00343.warc.gz | 568,607,622 | 3,858 | # Number 5330
### Properties of number 5330
Cross Sum: 11
Factorization: 2 * 5 * 13 * 41
Divisors: 1, 2, 5, 10, 13, 26, 41, 65, 82, 130, 205, 410, 533, 1066, 2665, 5330
Count of divisors: 16
Sum of divisors: 10584
Prime number? No
Fibonacci number? No
Bell Number? No
Catalan Number? No
Base 2 (Binary): 1010011010010
Base 3 (Ternary): 21022102
Base 4 (Quaternary): 1103102
Base 5 (Quintal): 132310
Base 8 (Octal): 12322
Base 16 (Hexadecimal): 14d2
Base 32: 56i
sin(5330): 0.9587959144425
cos(5330): -0.28409574873337
tan(5330): -3.3749041255185
ln(5330): 8.5811065171599
lg(5330): 3.7267272090266
sqrt(5330): 73.006848993776
Square(5330): 28408900
5330 (five thousand three hundred thirty) is a composite number. The cross sum (digit sum) of 5330 is 11. Factorising 5330 gives 2 * 5 * 13 * 41. 5330 has 16 divisors (1, 2, 5, 10, 13, 26, 41, 65, 82, 130, 205, 410, 533, 1066, 2665, 5330) with a sum of 10584. 5330 is not a prime number, not a Fibonacci number, not a Bell number and not a Catalan number. The conversion of 5330 to base 2 (binary) is 1010011010010, to base 3 (ternary) 21022102, to base 4 (quaternary) 1103102, to base 5 (quintal) 132310, to base 8 (octal) 12322, to base 16 (hexadecimal) 14d2, and to base 32 56i. The sine of 5330 is 0.9587959144425, the cosine is -0.28409574873337 and the tangent is -3.3749041255185. The square root of 5330 is 73.006848993776.
If you square 5330 you get 28408900. The natural logarithm of 5330 is 8.5811065171599 and the decimal logarithm is 3.7267272090266.
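The properties above can be reproduced with a short script (a sketch; the `divisors` helper is written here for illustration, it is not from the page):

```python
from math import isqrt

def divisors(n):
    """Return all positive divisors of n in ascending order."""
    small, large = [], []
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            small.append(d)
            if d != n // d:
                large.append(n // d)
    return small + large[::-1]

divs = divisors(5330)
count = len(divs)          # 16
total = sum(divs)          # 10584
is_prime = count == 2      # False: 5330 = 2 * 5 * 13 * 41
base_2 = bin(5330)[2:]     # '1010011010010'
base_16 = hex(5330)[2:]    # '14d2'
```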
https://www.mrexcel.com/board/threads/tme-clock-need-punch-out-by-time.981072/ | 1,679,374,721,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00630.warc.gz | 973,526,980 | 16,563 | # Tme Clock - Need Punch Out By Time
#### cherryonion
##### New Member
Hi! I am working on a time clock sheet to help track hours worked throughout the week. It's pretty basic, but I would like to have a column that indicates what time an employee would need to punch out by in order to have worked for 8 hours.
Here is how it looks currently. The formula I'm using to calculate total hours is below as well. The idea being that after an employee enters their In for Day, Out for Lunch, and In for Lunch, it would calculate what time they would need to punch OUT BY in order to maintain 8 hours.
Day       | IN      | OUT      | IN       | OUT BY | OUT     | HOURS
Monday    | 8:23 AM | 12:00 PM | 12:20 PM |        | 4:23 PM | 7.67
Tuesday   | 8:15 AM | 12:00 PM | 12:30 PM |        | 4:45 PM | 8.00
Wednesday |         |          |          |        |         | 0.00
Thursday  |         |          |          |        |         | 0.00
Friday    |         |          |          |        |         | 0.00
Total: 15.67 | OT: 24.33
HOURS column:
=SUM(((C2-B2)+F2-D2))*24
Thanks!
#### Tetra201
##### MrExcel MVP
Is this what you need?
=1/3+B2-C2+D2
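Tetra201's formula works because Excel stores times as fractions of a day: 1/3 of a day is exactly the 8 working hours, and D2 - C2 adds the unpaid lunch break back on. The same arithmetic in Python (the date below is a placeholder):

```python
from datetime import datetime, timedelta

def out_by(day_in, lunch_out, lunch_in, work_hours=8):
    """Excel's =1/3+B2-C2+D2: punch-out time that yields `work_hours` worked."""
    return day_in + timedelta(hours=work_hours) + (lunch_in - lunch_out)

d = datetime(2019, 1, 8)  # placeholder date
t = out_by(d.replace(hour=8, minute=15),    # IN for the day
           d.replace(hour=12, minute=0),    # OUT for lunch
           d.replace(hour=12, minute=30))   # IN from lunch
# t is 4:45 PM, matching the Tuesday row that worked out to exactly 8.00 hours
```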
https://indiashines.in/cbse/ncert-solutions-class-9th-maths-chapter-15/

# NCERT Solutions For Class 9th Maths Chapter 15 : Probability
CBSE NCERT Solutions For Class 9th Maths Chapter 15 : Probability. NCERT Solutions For Class 9 Mathematics Probability Exercise 15.1
### NCERT Solutions for Class IX Maths: Chapter 15 – Probability
Page No: 283
Exercise 15.1
1. In a cricket match, a batswoman hits a boundary 6 times out of 30 balls she plays. Find the probability that she did not hit a boundary.
Total numbers of balls = 30
Numbers of boundary = 6
Numbers of time she didn’t hit boundary = 30 – 6 = 24
Probability she did not hit a boundary = 24/30 = 4/5
2. 1500 families with 2 children were selected randomly, and the following data were recorded:
Number of girls in a family: 2, 1, 0
Number of families: 475, 814, 211
Compute the probability of a family, chosen at random, having
(i) 2 girls (ii) 1 girl (iii) No girl
Also check whether the sum of these probabilities is 1.
Total numbers of families = 1500
(i) Number of families having 2 girls = 475
Probability = Number of families having 2 girls/Total number of families
= 475/1500 = 19/60
(ii) Number of families having 1 girl = 814
Probability = Number of families having 1 girl/Total number of families
= 814/1500 = 407/750
(iii) Number of families having no girls = 211
Probability = Number of families having no girls/Total number of families
= 211/1500
Sum of the probability = 19/60 + 407/750 + 211/1500
= (475 + 814 + 211)/1500 = 1500/1500 = 1
Yes, the sum of these probabilities is 1.
3. Refer to Example 5, Section 14.4, Chapter 14. Find the probability that a student of the class was born in August.
Total numbers of students = 40
Number of students born in August = 6
Required probability = 6/40 = 3/20
4. Three coins are tossed simultaneously 200 times with the following frequencies of different outcomes:

Outcome: 3 heads, 2 heads, 1 head, No head
Frequency: 23, 72, 77, 28

If the three coins are simultaneously tossed again, compute the probability of 2 heads coming up.
Number of times 2 heads come up = 72
Total number of times the coins were tossed = 200
Required probability = 72/200 = 9/25
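As a quick sketch (not part of the NCERT solution), Python's fractions module reduces such relative frequencies automatically:

```python
from fractions import Fraction

# P(2 heads) = favourable trials / total trials
p_two_heads = Fraction(72, 200)
print(p_two_heads)  # 9/25
```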
5. An organisation selected 2400 families at random and surveyed them to determine a relationship between income level and the number of vehicles in a family. The information gathered is listed in the table below:
ALSO READ: NCERT Solutions For Class 8th Maths Chapter 1 : Rational Numbers
Monthly income (in ₹)   Vehicles per family
                        0      1      2      Above 2
Less than 7000          10     160    25     0
7000 – 10000            0      305    27     2
10000 – 13000           1      535    29     1
13000 – 16000           2      469    59     25
16000 or more           1      579    82     88
Suppose a family is chosen. Find the probability that the family chosen is
(i) earning ₹10000 – 13000 per month and owning exactly 2 vehicles.
(ii) earning ₹16000 or more per month and owning exactly 1 vehicle.
(iii) earning less than ₹7000 per month and does not own any vehicle.
(iv) earning ₹13000 – 16000 per month and owning more than 2 vehicles.
(v) owning not more than 1 vehicle.
Total numbers of families = 2400
(i) Numbers of families earning ₹10000 –13000 per month and owning exactly 2 vehicles = 29
Required probability = 29/2400
(ii) Number of families earning ₹16000 or more per month and owning exactly 1 vehicle = 579
Required probability = 579/2400
(iii) Number of families earning less than ₹7000 per month and not owning any vehicle = 10
Required probability = 10/2400 = 1/240
(iv) Number of families earning ₹13000-16000 per month and owning more than 2 vehicles = 25
Required probability = 25/2400 = 1/96
(v) Number of families owning not more than 1 vehicle = 10+160+0+305+1+535+2+469+1+579
= 2062
Required probability = 2062/2400 = 1031/1200
Page No: 284

6. Refer to Table 14.7, Chapter 14.
(i) Find the probability that a student obtained less than 20% in the mathematics test.
(ii) Find the probability that a student obtained marks 60 or above.
Marks        Number of students
0 – 20       7
20 – 30      10
30 – 40      10
40 – 50      20
50 – 60      20
60 – 70      15
70 – above   8
Total        90
Answer

Total number of students = 90
(i) Number of students who obtained less than 20% in the test = 7
Required probability = 7/90
(ii) Number of students who obtained marks 60 or above = 15 + 8 = 23
Required probability = 23/90

7. To know the opinion of the students about the subject statistics, a survey of 200 students was conducted. Of these, 135 students said they like statistics and 65 said they do not. Find the probability that a student chosen at random (i) likes statistics, (ii) does not like it.

Answer

Total number of students = 135 + 65 = 200
(i) Number of students who like statistics = 135
Required probability = 135/200 = 27/40
(ii) Number of students who do not like statistics = 65
Required probability = 65/200 = 13/40
8. Refer to Q.2, Exercise 14.2. What is the empirical probability that an engineer lives:
(i) less than 7 km from her place of work?
(ii) more than or equal to 7 km from her place of work?
(iii) within 1/2 km from her place of work?
The distance (in km) of 40 engineers from their residence to their place of work were found as follows:
5 3 10 20 25 11 13 7 12 31 19 10 12 17 18 11 32 17 16 2 7 9 7 8 3 5 12 15 18 3 12 14 2 9 6 15 15 7 6 12
Total numbers of engineers = 40
(i) Number of engineers living less than 7 km from her place of work = 9
Required probability = 9/40
(ii) Number of engineers living 7 km or more from her place of work = 40 – 9 = 31
Required probability = 31/40
(iii) Number of engineers living within 1/2 km from her place of work = 0
Required probability = 0/40 = 0
Page No: 285
11. Eleven bags of wheat flour, each marked 5 kg, actually contained the following weights of flour (in kg):
4.97 5.05 5.08 5.03 5.00 5.06 5.08 4.98 5.04 5.07 5.00
Find the probability that any of these bags chosen at random contains more than 5 kg of flour.
ALSO READ: NCERT Solutions For Class 10th Hindi Sanchayan II Chapter 1: हरिहर काका (Course B)
Total numbers of bags = 11
Numbers of bags containing more than 5 kg of flour = 7
Required probability = 7/11
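The same count can be checked with a short Python sketch over the listed weights (an illustration only, not part of the textbook solution):

```python
from fractions import Fraction

weights = [4.97, 5.05, 5.08, 5.03, 5.00, 5.06, 5.08, 4.98, 5.04, 5.07, 5.00]
favourable = sum(1 for w in weights if w > 5)  # bags holding more than 5 kg
probability = Fraction(favourable, len(weights))
print(favourable, probability)  # 7 7/11
```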
12. In Q.5, Exercise 14.2, you were asked to prepare a frequency distribution table, regarding the concentration of sulphur dioxide in the air in parts per million of a certain city for 30 days. Using this table, find the probability of the concentration of sulphur dioxide in the interval 0.12-0.16 on any of these days.
The data obtained for 30 days is as follows:
0.03 0.08 0.08 0.09 0.04 0.17 0.16 0.05 0.02 0.06 0.18 0.20 0.11 0.08 0.12 0.13 0.22 0.07 0.08 0.01 0.10 0.06 0.09 0.18 0.11 0.07 0.05 0.07 0.01 0.04
Total numbers of days data recorded = 30 days
Numbers of days in which sulphur dioxide in the interval 0.12-0.16 = 2
Required probability = 2/30 = 1/15
13. In Q.1, Exercise 14.2, you were asked to prepare a frequency distribution table regarding the blood groups of 30 students of a class. Use this table to determine the probability that a student of this class, selected at random, has blood group AB.
The blood groups of 30 students of Class VIII are recorded as follows:
A, B, O, O, AB, O, A, O, B, A, O, B, A, O, O, A, AB, O, A, A, O, O, AB, B, A, O, B, A, B, O.

Answer

Total number of students = 30
Number of students having blood group AB = 3
Required probability = 3/30 = 1/10
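A sketch in Python tallies the recorded blood groups and gives the same relative frequency (an illustration only):

```python
from collections import Counter

groups = "A B O O AB O A O B A O B A O O A AB O A A O O AB B A O B A B O".split()
counts = Counter(groups)
print(counts["AB"], len(groups))   # 3 30
print(counts["AB"] / len(groups))  # 0.1
```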
https://www.eng-tips.com/viewthread.cfm?qid=51938
# steel and concrete composite section
## steel and concrete composite section
(OP)
Hi, all. I am doing dynamic modelling of a building. Some of the beams are made up of two W-sections side by side encased in concrete. Any suggestion on how to model the equivalent section? I suppose the total area of the section can be calculated as Ac + As(Es/Ec), but what E do you use, Ec or an equivalent E? How would you calculate the equivalent E?
Thanks in advance for all suggestions.
Peter Lee
www.neillandgunter.com
### RE: steel and concrete composite section
Hi n9yz
Correct me if I am wrong as I am no civil or structural engineer, but for a beam of two materials I would proceed as
follows: assuming that the steel and concrete are connected
rigidly together, the bending moment is:
M=Mc + Ms
M= Ec*Ic/(R) + Es*Is/(R)
but Es/Ec = m = modular ratio
M= (Ic/(R) + m*Is/(R))*Ec
factor out 1/R:
M = (Ic + m*Is)*Ec/R, and Ec/R = fc/y
so M = (Ic + m*Is)*(fc/y)
where fc=stress
Is,Ic= second moments of area for steel
and concrete beam.
R = radius of curvature of beam
Ec,Es = modulus of elasticity of materials
y = distance to neutral axis
By using this method I have found the equivalent
second moment of area of a complete concrete beam by
using the modular ratio and converting the steel part
into an equivalent concrete part. So you can now
calculate stress using Ec only. Alternatively you can
turn the concrete portion into an equivalent steel beam
if you desire again using m however the formula would
look like this:-
M = (Ic/m + Is)*(fs/y)
hope this helps
regards desertfox
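desertfox's transformed-section idea is easy to sketch numerically. In the snippet below the moduli and second moments of area are made-up placeholder values, not data from the original question; the point is only that converting the steel to equivalent concrete with the modular ratio m = Es/Ec preserves the bending stiffness.

```python
Es = 200e9   # steel modulus of elasticity, Pa (assumed value)
Ec = 25e9    # concrete modulus of elasticity, Pa (assumed value)
Ic = 2.0e-3  # second moment of area of the concrete part, m^4 (placeholder)
Is = 4.0e-4  # second moment of area of the steel part, m^4 (placeholder)

m = Es / Ec          # modular ratio
I_eq = Ic + m * Is   # equivalent all-concrete second moment of area

# The bending stiffness is unchanged by the transformation:
assert abs(Ec * I_eq - (Ec * Ic + Es * Is)) < 1e-6 * Ec * I_eq
print(m)  # 8.0
```

With the equivalent section in hand, stresses can be computed using Ec alone, exactly as described above.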
https://itschancy.wordpress.com/2015/03/

# Monthly Archives: March 2015
About a year and a half ago, I read P. L. Davies’s interesting paper Approximating Data. There was one passage I read that struck me as unusually wrong-headed (pg 195):
The Dutch book argument in turn relies on a concept of truth. Often framed in terms of bets on a horse-race, it relies on there only being one winner, which is the case for the overwhelming majority of horse races. The Dutch book argument shows that the odds, when converted to probabilities, must sum to 1 to avoid arbitrage possibilities… If we transfer this to statistics then we have different distributions indexed by a parameter. Based on the idea of truth, only one of these can be true, just as only one horse can win, and the same Dutch book argument shows that the odds must add to 1. In other words the prior must be a probability distribution. We note that in reality none of the offered distributions will be the truth, but due to the non-callability of Bayesian bets this is not considered to be a problem. Suppose we replace the question as whether a distribution represents the truth by the question as to whether it is a good approximation. Suppose that we bet, for example, that the N(0, 1) distribution is an adequate approximation for the data. We quote odds for this bet, the computer programme is run, and we either win or lose. If we quote odds of 5:1 then we will probably quote the same, or very similar, odds for the N(10^−6, 1) distribution, as for the N(0, 1+10^−10) distribution and so forth. It becomes clear that these odds are not representable by a probability distribution: only one distribution can be the ‘true’ but many can be adequate approximations.
I always meant to write something about how this line of argument goes wrong, but it wasn’t a high priority. But recently Davies reiterated this argument in a comment on Professor Mayo’s blog:
You define adequacy in a precise manner, a computer programme., there [sic] are many examples in my book. The inputs are the data and the model, the output yes or no. You place your bets beforehand, run the programme and win or lose your bet. The bets are realizable. If you bet 50-50 on the N(0,1) being an adequate model, you will no doubt bet about 50-50 on the N(10^-20,1) also being an adequate model. Your bets are not expressible by a probability measure. The sum of the odds will generally be zero or infinity. …
I tried to reply in the comment thread, but WordPress ate my attempts, so: a blog post!
I have to wonder if Professor Davies asked even one Bayesian to evaluate this argument before he published it. (In comments, Davies replies: I have been stating the argument for about 20 years now. Many Bayesians have heard my talks but so the only response I have had was by one in Lancaster who told me he had never heard the argument before and that was it.) Let M be the set of statistical models under consideration. It's true that if I bet 50-50 on N(0,1) being an adequate model, I will no doubt bet very close to 50-50 on N(10^-20, 1) also being an adequate model. Does this mean that “these odds are not representable by a probability distribution”? Not at all — we just need to get the sample space right. In this setup the appropriate sample space for a probability triple is the powerset of M, because exactly one of the members of the powerset of M will be realized when the data become known.
For example, suppose that M = {N(0,1), N(10^-20, 1), N(10,1)}; then there are eight conceivable outcomes — one for each possible combination of adequacy indications — that could occur once the data become known. We can encode this sample space using the binary expansion of the numbers from 0 to 7, with each digit of the binary expansion of the integer interpreted as an indicator variable for the statistical adequacy of one of the models in M. Let the leftmost bit refer to N(0,1), the center bit refer to N(10^-20, 1), and the rightmost bit refer to N(10,1). Here's a probability measure that serves as a counterexample to the claim that "[the 50-50] bets are not expressible by a probability measure":
Pr(001) = Pr(110) = 0.5,
Pr(000) = Pr(100) = Pr(101) = Pr(011) = Pr(010) = Pr(111) = 0.
(This is an abuse of notation, since the Pr() function takes events, that is, sets of outcomes, and not raw outcomes.) The events Davies considers are “N(0,1) [is] an adequate model”, which is the set {100, 101, 110, 111}, and “N(10^-20,1) [is] an adequate model”, which is the set {010, 011, 110, 111}; it is trivial to see that both these events are 50-50.
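The three-model example can be verified mechanically. The sketch below is mine, not from the post: it encodes each outcome as a three-bit adequacy pattern and checks that the stated measure assigns probability 0.5 to both events.

```python
# Bit order: leftmost = N(0,1), centre = N(1e-20,1), rightmost = N(10,1).
pr = {"001": 0.5, "110": 0.5}            # every other outcome has probability 0
outcomes = [f"{i:03b}" for i in range(8)]

def event_prob(bit):
    # Probability of the event "the model at position `bit` is adequate".
    return sum(pr.get(o, 0.0) for o in outcomes if o[bit] == "1")

print(event_prob(0), event_prob(1))  # 0.5 0.5
print(sum(pr.values()))              # 1.0
```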
Now obviously when M is uncountably infinite it’s not so easy to write down probability measures on sigma-algebras of the powerset of M. Still, that scenario is not particularly difficult for a Bayesian to handle: if the statistical adequacy function is measurable, a prior or posterior predictive probability measure automatically induces a pushforward probability measure on any sigma-algebra of the powerset of M. In fact, this is precisely the approach taken in the (rather small) Bayesian literature on assessing statistical adequacy; see for example A nonparametric assessment of model adequacy based on Kullback-Leibler divergence. These sorts of papers typically treat statistical adequacy as a continuous quantity, but all it would take to turn it into a Davies-style yes-no Boolean variable would be to dichotomize the continuous quantity at some threshold.
(A digression. To me, using a Bayesian nonparametric posterior distribution to assess the adequacy of a parametric model seems a bit pointless — if you have the posterior already, of what possible use is the parametric model? Actually, there is one use that I can think of, but I was saving it to write a paper about… Oh what the heck. I’m told (by Andrew Gelman, who should know!) that in social science it’s notorious that every variable is correlated with every other variable, at least a little bit. I imagine that this makes Pearl-style causal inference a big pain — all of the causal graphs would end up totally connected, or close to. I think there may be a role for Bayesian causal graph adequacy assessment; the causal model adequacy function would quantify the loss incurred by ignoring some edges in the highly-connected causal graph. I think this approach could facilitate communication between causal inference experts, subject matter experts, and policymakers.)
This post’s title was originally more tendentious and insulting. As Professor Davies has graciously suggested that his future work might include a reference to this post, I think it only polite that I change the title to something less argumentative.
https://community.topcoder.com/stat?c=problem_statement&pm=13042
Problem Statement for PowersOfTwo
### Problem Statement
Fox Ciel likes powers of two. She has a bag with some positive powers of two. Note that some powers may occur multiple times in the bag. You are given a long[] powers. Each element of powers is one of the numbers in Ciel's bag.
Ciel likes each non-negative integer that can be written as the sum of some numbers from her bag.
For example, suppose that her bag contains the numbers 2, 4, 4, and 64. In this case, Ciel likes 10 (because 10=2+4+4), 64 (because 64=64), and also 0 (the sum of no numbers). She does not like 1, and she does not like 12 (note that 12=4+4+4 is not valid, as she only has two 4s; 12=4+4+2+2 is also not valid, as she only has one 2).
Return the number of integers Ciel likes.
### Definition
Class: PowersOfTwo
Method: count
Parameters: long[]
Returns: long
Method signature: long count(long[] powers)
(be sure your method is public)
### Constraints
-powers will contain between 1 and 50 elements, inclusive.
-Each element of powers is a power of two between 1 and 2^50, inclusive.
### Examples
0)
`{1,2}`
`Returns: 4`
Fox Ciel likes 0, 1, 2 and 3.
1)
`{1,1,1,1}`
`Returns: 5`
Fox Ciel likes 0, 1, 2, 3 and 4.
2)
`{1,2,2,2,4,4,16}`
`Returns: 32`
3)
`{1,32,1,16,32}`
`Returns: 18`
4)
```{1048576,1073741824,549755813888,70368744177664,4398046511104,262144,1048576,2097152,8796093022208, 1048576,1048576,35184372088832,2097152,256,256,256,262144,1048576,1048576,70368744177664,262144,1048576}```
`Returns: 18432`
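For the small examples above, the answer can be cross-checked with a brute-force subset-sum sweep in Python. This is my own sketch; it enumerates distinct sums and would be far too slow for the full constraints (up to 50 elements), where the intended solution exploits the powers-of-two structure.

```python
def count_likeable(powers):
    # Distinct sums of sub-multisets of `powers`; 0 (the empty sum) included.
    sums = {0}
    for p in powers:
        sums |= {s + p for s in sums}
    return len(sums)

assert count_likeable([1, 2]) == 4
assert count_likeable([1, 1, 1, 1]) == 5
assert count_likeable([1, 2, 2, 2, 4, 4, 16]) == 32
assert count_likeable([1, 32, 1, 16, 32]) == 18
```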
This problem statement is the exclusive and proprietary property of TopCoder, Inc. Any unauthorized use or reproduction of this information without the prior written consent of TopCoder, Inc. is strictly prohibited. (c)2010, TopCoder, Inc. All rights reserved.
This problem was used for:
Single Round Match 612 Round 1 - Division II, Level Three
http://www.electronoobs.com/eng_circuitos_tut27.php

### Battery pack page 1/1
6S & BMS battery pack
Help me by sharing this post
This is a very basic tutorial and there are already a lot of guides on how to make something like this. But since this is part of a future project (an electric longboard), I'm doing this first step to show you how to connect the batteries and add charging protection. I'll use some Samsung INR18650 li-ion batteries connected in series and parallel, add a BMS module and solder everything using nickel strips and a spot welder. So let's see...
## PART 1 - Select voltage, current and batteries
Ok, first of all you need to have in mind what kind of battery pack you want to make. For that I check the specs of my system. I'll make an electric longboard. The maximum voltage of the ESCs and the brushless motors is 26V. Since the speed of the brushless motors is also set by the voltage, I want the maximum voltage. Now, I could use LiPo batteries or Li-ion batteries. I will use the Samsung INR18650 Li-ion batteries. The nominal voltage of these batteries is 3.7V.
See part list here:
### Series or parallel?
Ok, so I want the maximum voltage for my ESCs and motors, which is 26V. Each of these batteries has a nominal voltage of 3.7V and a maximum voltage when fully charged of 4.2V. For more, check the datasheet of the batteries here on this LINK. So, if I want around 26V with 4.2V batteries, I need 6 batteries in series to get a total maximum voltage of 25.2V. We know that batteries in series sum their voltages but have the same capacity, and in parallel they sum their capacities but have the same voltage.
Ok, now we know we need 6 batteries in series. But what about capacity? Each of these INR18650 cells has a capacity of 3000mAh. That means it could deliver 3A for an hour or 1A for 3 full hours. But that value is not always enough. I want more capacity, so in addition to having 6 batteries in series, each of these batteries will actually be a pack of 3 batteries in parallel. So I will have 6 packs in series, each of 3 x 4.2V batteries in parallel, and that will give me a total of 25.2V and a capacity of 9000mAh.
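The pack arithmetic above is just multiplication; here is a tiny Python sketch of the 6S3P numbers:

```python
series, parallel = 6, 3                  # 6S3P configuration
v_cell_nominal, v_cell_max = 3.7, 4.2    # volts per INR18650 cell
cell_mah = 3000                          # capacity per cell, mAh

v_pack_max = series * v_cell_max         # series connections sum voltage
v_pack_nominal = series * v_cell_nominal
pack_mah = parallel * cell_mah           # parallel connections sum capacity

print(round(v_pack_max, 1), round(v_pack_nominal, 1), pack_mah)  # 25.2 22.2 9000
```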
## PART 2 - Schematic
To charge the batteries, the manual recommends CC and CV, or better said, constant current and constant voltage. So, if I set my power supply to 25.2V and 4.5A, the batteries will get charged. But that will damage the batteries over time, since one battery could get charged faster than the others. For that we need a balanced charging process. We need a battery management system, or BMS. Use the schematic below to connect the BMS to the batteries.
As you can see, to charge the pack we have one connector that goes to the BMS, but the discharge is directly from the batteries. You have to connect each of the 7 wires to each pack. We start with the B- cable (black) connected to the negative side of the 6S pack. Then we connect B1 to 3.7V, B2 to 7.4V and so on till we get to B+ connected to 22.2V. To weld the batteries together I've used nickel strips. Check how much current the strip can withstand. In my case I've used strips 8mm wide and 0.3mm thick that can withstand 40A. But to be sure, I've welded 2 strips one on top of the other.
## PART 3 - Making the pack
I finally solder the cables from the BMS to each pack. Then I solder thick wires to the input P- and P+ connectors and the main battery B+ and B-, just as in the schematic above. I wrap the pack in some tape and charge it to 25.2V. After a few hours the battery is full. That's it. As an extra, I glue a metal plate on the bottom side of the battery pack that will act as a heat sink but also keep the batteries together.
And that's it. That's how I've made a battery pack of 6S: 25.2V and 9000mAh for my electric longboard project. Soon, I'll post more, maybe I will change the BMS to a better one. Stay tuned for the longboard project. Thank you.
https://math.answers.com/Q/How_do_you_write_7_over_8_in_percent_form
# How do you write 7 over 8 in percent form?
Wiki User
2013-01-24 01:06:37
7 divided by 8 = 0.875
To convert to a percentage, move the decimal point two places to the right.
7/8 = 87.5%
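The same conversion in Python, as a one-line check:

```python
from fractions import Fraction

frac = Fraction(7, 8)
print(float(frac))           # 0.875
print(f"{float(frac):.1%}")  # 87.5%
```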
https://matplotlib.org/stable/gallery/specialty_plots/radar_chart.html

Radar chart (aka spider or star chart)
This example creates a radar chart, also known as a spider or star chart [1].
Although this example allows a frame of either 'circle' or 'polygon', polygon frames don't have proper gridlines (the lines are circles instead of polygons). It's possible to get a polygon grid by setting GRIDLINE_INTERPOLATION_STEPS in matplotlib.axis to the desired number of vertices, but the orientation of the polygon is not aligned with the radial axis.
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Circle, RegularPolygon
from matplotlib.path import Path
from matplotlib.projections import register_projection
from matplotlib.projections.polar import PolarAxes
from matplotlib.spines import Spine
from matplotlib.transforms import Affine2D
"""
Create a radar chart with num_vars Axes.
This function creates a RadarAxes projection and registers it.
Parameters
----------
num_vars : int
Number of variables for radar chart.
frame : {'circle', 'polygon'}
Shape of frame surrounding Axes.
"""
# calculate evenly-spaced axis angles
theta = np.linspace(0, 2*np.pi, num_vars, endpoint=False)
def transform_path_non_affine(self, path):
# Paths with non-unit interpolation steps correspond to gridlines,
# in which case we force interpolation (to defeat PolarTransform's
# autoconversion to circular arcs).
if path._interpolation_steps > 1:
path = path.interpolated(num_vars)
return Path(self.transform(path.vertices), path.codes)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# rotate plot such that the first axis is at the top
self.set_theta_zero_location('N')
def fill(self, *args, closed=True, **kwargs):
"""Override fill so that line is closed by default"""
return super().fill(closed=closed, *args, **kwargs)
def plot(self, *args, **kwargs):
"""Override plot so that line is closed by default"""
lines = super().plot(*args, **kwargs)
for line in lines:
self._close_line(line)
def _close_line(self, line):
x, y = line.get_data()
# FIXME: markers at x[0], y[0] get doubled-up
if x[0] != x[-1]:
x = np.append(x, x[0])
y = np.append(y, y[0])
line.set_data(x, y)
def set_varlabels(self, labels):
self.set_thetagrids(np.degrees(theta), labels)
def _gen_axes_patch(self):
# The Axes patch must be centered at (0.5, 0.5) and of radius 0.5
# in axes coordinates.
if frame == 'circle':
return Circle((0.5, 0.5), 0.5)
elif frame == 'polygon':
return RegularPolygon((0.5, 0.5), num_vars,
else:
raise ValueError("Unknown value for 'frame': %s" % frame)
def _gen_axes_spines(self):
if frame == 'circle':
return super()._gen_axes_spines()
elif frame == 'polygon':
# spine_type must be 'left'/'right'/'top'/'bottom'/'circle'.
spine = Spine(axes=self,
spine_type='circle',
path=Path.unit_regular_polygon(num_vars))
# unit_regular_polygon gives a polygon of radius 1 centered at
# (0, 0) but we want a polygon of radius 0.5 centered at (0.5,
# 0.5) in axes coordinates.
spine.set_transform(Affine2D().scale(.5).translate(.5, .5)
+ self.transAxes)
return {'polar': spine}
else:
raise ValueError("Unknown value for 'frame': %s" % frame)
return theta
def example_data():
    # The following data is from the Denver Aerosol Sources and Health study.
    # See doi:10.1016/j.atmosenv.2008.12.017
    #
    # The data are pollution source profile estimates for five modeled
    # pollution sources (e.g., cars, wood-burning, etc) that emit 7-9 chemical
    # species. The radar charts are experimented with here to see if we can
    # nicely visualize how the modeled source profiles change across four
    # scenarios:
    #  1) No gas-phase species present, just seven particulate counts on
    #     Sulfate
    #     Nitrate
    #     Elemental Carbon (EC)
    #     Organic Carbon fraction 1 (OC)
    #     Organic Carbon fraction 2 (OC2)
    #     Organic Carbon fraction 3 (OC3)
    #     Pyrolyzed Organic Carbon (OP)
    #  2) Inclusion of gas-phase specie carbon monoxide (CO)
    #  3) Inclusion of gas-phase specie ozone (O3).
    #  4) Inclusion of both gas-phase species is present...
    data = [
        ['Sulfate', 'Nitrate', 'EC', 'OC1', 'OC2', 'OC3', 'OP', 'CO', 'O3'],
        ('Basecase', [
            [0.88, 0.01, 0.03, 0.03, 0.00, 0.06, 0.01, 0.00, 0.00],
            [0.07, 0.95, 0.04, 0.05, 0.00, 0.02, 0.01, 0.00, 0.00],
            [0.01, 0.02, 0.85, 0.19, 0.05, 0.10, 0.00, 0.00, 0.00],
            [0.02, 0.01, 0.07, 0.01, 0.21, 0.12, 0.98, 0.00, 0.00],
            [0.01, 0.01, 0.02, 0.71, 0.74, 0.70, 0.00, 0.00, 0.00]]),
        ('With CO', [
            [0.88, 0.02, 0.02, 0.02, 0.00, 0.05, 0.00, 0.05, 0.00],
            [0.08, 0.94, 0.04, 0.02, 0.00, 0.01, 0.12, 0.04, 0.00],
            [0.01, 0.01, 0.79, 0.10, 0.00, 0.05, 0.00, 0.31, 0.00],
            [0.00, 0.02, 0.03, 0.38, 0.31, 0.31, 0.00, 0.59, 0.00],
            [0.02, 0.02, 0.11, 0.47, 0.69, 0.58, 0.88, 0.00, 0.00]]),
        ('With O3', [
            [0.89, 0.01, 0.07, 0.00, 0.00, 0.05, 0.00, 0.00, 0.03],
            [0.07, 0.95, 0.05, 0.04, 0.00, 0.02, 0.12, 0.00, 0.00],
            [0.01, 0.02, 0.86, 0.27, 0.16, 0.19, 0.00, 0.00, 0.00],
            [0.01, 0.03, 0.00, 0.32, 0.29, 0.27, 0.00, 0.00, 0.95],
            [0.02, 0.00, 0.03, 0.37, 0.56, 0.47, 0.87, 0.00, 0.00]]),
        ('CO & O3', [
            [0.87, 0.01, 0.08, 0.00, 0.00, 0.04, 0.00, 0.00, 0.01],
            [0.09, 0.95, 0.02, 0.03, 0.00, 0.01, 0.13, 0.06, 0.00],
            [0.01, 0.02, 0.71, 0.24, 0.13, 0.16, 0.00, 0.50, 0.00],
            [0.01, 0.03, 0.00, 0.28, 0.24, 0.23, 0.00, 0.44, 0.88],
            [0.02, 0.00, 0.18, 0.45, 0.64, 0.55, 0.86, 0.00, 0.16]])
    ]
    return data
if __name__ == '__main__':
    N = 9
    theta = radar_factory(N, frame='polygon')
data = example_data()
spoke_labels = data.pop(0)
fig, axs = plt.subplots(figsize=(9, 9), nrows=2, ncols=2, subplot_kw=dict(projection='radar'))
colors = ['b', 'r', 'g', 'm', 'y']
# Plot the four cases from the example data on separate Axes
for ax, (title, case_data) in zip(axs.flat, data):
ax.set_rgrids([0.2, 0.4, 0.6, 0.8])
ax.set_title(title, weight='bold', size='medium', position=(0.5, 1.1),
horizontalalignment='center', verticalalignment='center')
for d, color in zip(case_data, colors):
ax.plot(theta, d, color=color)
ax.fill(theta, d, facecolor=color, alpha=0.25, label='_nolegend_')
ax.set_varlabels(spoke_labels)
# add legend relative to top-left plot
labels = ('Factor 1', 'Factor 2', 'Factor 3', 'Factor 4', 'Factor 5')
legend = axs[0, 0].legend(labels, loc=(0.9, .95),
labelspacing=0.1, fontsize='small')
fig.text(0.5, 0.965, '5-Factor Solution Profiles Across Four Scenarios',
horizontalalignment='center', color='black', weight='bold',
size='large')
plt.show()
## $$E = -\nabla \Phi - \displaystyle{\frac{\partial A}{\partial t}}$$
I would like your opinion regarding an explanation I gave elsewhere. I hold that the explanation below is straightforward. However, it appears as if some were confused by it.
In a certain frame of reference, for a particular electromagnetic field, the relation $$\partial A/\partial t = 0$$ holds true. Such a condition will hold in the case of a time-independent magnetic field. The equation
$$E = - \nabla \Phi - \displaystyle{\frac{\partial A}{\partial t}}$$
in this example and in this frame reduces to
$$E = - \nabla \Phi$$
Does anyone think that this is relativistically incorrect?
I know this seems like a dumb question, but some people claim that this is relativistically incorrect. Such a claim is obviously wrong. However, I can't understand why they're having such a difficult time understanding this. Is what I explained above confusing?
The 4-potential, $$A^{\alpha}$$, is defined in terms of the Coulomb potential, $$\Phi$$, and the magnetic vector potential, A as
$$A^{\alpha} = (\Phi/c, A) = (\Phi/c, A_x, A_y, A_z)$$
The Faraday tensor, $$F^{\alpha \beta}$$, is defined as
$$F^{\alpha \beta} = \partial^{\alpha} A^{\beta} - \partial^{\beta} A^{\alpha}$$
[See "Classical Electrodynamics - 2nd Ed.," J. D. Jackson, page 551, Eq. (11.136). I'm using different units]
The $$F^{0k}$$ components of this relationship for k = 1,2,3 are, respectively
$$\displaystyle{\frac{E_{x}}{c}} = \partial^{0} A^{1} - \partial^{1} A^{0} = - \displaystyle{\frac{1}{c}} \displaystyle{\frac{\partial A_{x}}{\partial t}} - \displaystyle{\frac{1}{c}} \displaystyle{\frac{\partial \Phi}{\partial x}}$$
$$\displaystyle{\frac{E_{y}}{c}} = \partial^{0} A^{2} - \partial^{2} A^{0} = - \displaystyle{\frac{1}{c}} \displaystyle{\frac{\partial A_{y}}{\partial t}} - \displaystyle{\frac{1}{c}} \displaystyle{\frac{\partial \Phi}{\partial y}}$$
$$\displaystyle{\frac{E_{z}}{c}} = \partial^{0} A^{3} - \partial^{3} A^{0} = - \displaystyle{\frac{1}{c}} \displaystyle{\frac{\partial A_{z}}{\partial t}} - \displaystyle{\frac{1}{c}} \displaystyle{\frac{\partial \Phi}{\partial z}}$$
These can be expressed as the single equation
$$E = -\nabla \Phi - \displaystyle{\frac{\partial A}{\partial t}}$$
This equation and the equation B = curl A are equation (11.134) in Jackson on page 551. In fact Jackson uses these two equations to define $$F^{\alpha \beta} = \partial^{\alpha} A^{\beta} - \partial^{\beta} A^{\alpha}$$
In the example stated above $$\displaystyle{\frac{\partial A}{\partial t}} = 0$$ so that
$$E = -\nabla \Phi$$
Does anyone find that confusing?
Note - Due to the nature of such a question, please feel free to respond in PM.
Thanks
I know this seems like a dumb question but some people claim that this is relativistically incorrect.
Do they say why? If so, does their explanation make sense?
Originally posted by Nereid Do they say why? If so, does their explanation make sense?
He didn't say what he here claims he said. He originally just left off the vector potential term in a general statement, for which bilge told him that his expression was not relativistically correct and waite told him just that he was neglecting the term. He then went and made statements concerning a choice of frame as if that was the intent all along.
Originally posted by Nereid Do they say why? If so, does their explanation make sense?
They claim that I was saying that
$$\partial A/\partial t = 0$$
is the relationship between the electric field and the Coulomb potential, and they ignore the fact that I was saying that it was an example in a given frame for a particular field.
Here is how it was described to them. I was explaining potential energy to someone and I wrote
... if you have a uniform electric field E aligned in the +x direction then the potential energy of a charged particle (choose V(0) = 0) will be V(x) = q*Phi(x) where Phi(x) is the Coulomb potential which is defined such that E = - grad Phi(x). The proper mass of the charged particle will not depend on V(x) and that was what I thought you were trying to say.
They still didn't understand completely so I explained
E = -grad Phi is an example of an electric field in a particular frame that I was using as an example. That equation is relativistically correct.
The person to whom I was addressing understood at that point.
However, for some reason someone else thought that there must have been a magnetic field in my example, since they complained that what I described was "meaningless." In response to that claim it was apparent that they were not paying attention to the fact that this was the expression for a particular field in a particular frame of reference. So I explained further
That means that the frame I've chosen has a constant magnetic field so that ∂A/∂t = 0. Thus in the particular frame that I've chosen E = -grad Phi
Since they still didn't understand after this I won't quote more.
But I fail to see how anyone could be that confused. Since only two people responded and since both of them were confused I'm curious as to how they could be so confused. Since I assume that it's them and not the explanation I wanted an opinion from someone who is unbiased.
Do you find the three parts I've quoted above confusing?
Thanks
Originally posted by Arcon They claim that I was saying that $$\partial A/\partial t = 0...$$ is the relationship between the electric field and the Coulomb potential and they ignore the fact that I was saying that it was an example in a given frame for a particular field.
"THEY" did not say any such thing. You need to be specific when quoting someone. "They" were also not confused. Both of them are extremely well versed in general relativistic electrodynamics. You were just wrong for the reasons they explained which I mentioned.
Arcon, DW - Enough with Waite & pmb. Discuss the issues at hand. Lose the baggage. Your discussions are welcome. The harassing is not. My patience is coming to an end.
Originally posted by Phobos Arcon, DW - Enough with Waite & pmb. Discuss the issues at hand. Lose the baggage. Your discussions are welcome. The harassing is not. My patience is coming to an end.
Okay by me
# A 542.5 g sample of manganese oxide has an MnO ratio of 1.0 : 1.42 and consists of braunite (Mn2O3) and manganosite (MnO)
A 542.5 g sample of manganese oxide has an MnO ratio of 1.0 : 1.42 and consists of braunite (Mn2O3) and manganosite (MnO).
a) What masses of braunite and manganosite are in the ore?
b) What is the ratio of Mn^+3 : Mn^+2 in the ore?
a)
Here the ratio of Mn3O4 : MnO = 1.42 : 1.0.
Therefore every 1.42 g of Mn3O4 contains 1.0 g of MnO.
So the mass of MnO in the sample = (1.0/1.42) × 542.5 g = 382.04 g.
We have Mn2O3 and MnO in the Mn3O4 sample.
If the mass of MnO is 382.04 g,
then the mass of Mn2O3 = 542.5 g - 382.04 g = 160.46 g.
So;
mass of Manganosite = 382.04g
mass of Braunite = 160.46g
b)
Braunite (Mn2O3) contains Mn^+3; one formula unit of Mn2O3 gives two Mn^+3 ions.
Manganosite (MnO) contains Mn^+2; one formula unit of MnO gives one Mn^+2 ion.
Mn2O3+MnO= Mn3O4
So we need two Mn^+3 and one Mn^+2 to form one Mn3O4.
Therefore;
Mn^+3:Mn^+2 = 2:1
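A quick numeric check of the mass split above (plain Python, using the same 1.42 : 1.0 mass interpretation as the posted solution):

```python
# Mass split of the 542.5 g sample, taking total (Mn3O4) : MnO = 1.42 : 1.0 by mass.
total_mass = 542.5
mass_mno = total_mass * (1.0 / 1.42)    # manganosite (MnO)
mass_mn2o3 = total_mass - mass_mno      # braunite (Mn2O3) is the remainder
print(round(mass_mno, 2), round(mass_mn2o3, 2))   # 382.04 160.46
```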
# ANSYS - Orthotropic plate with a hole - Step 5
## Step 5: Postprocess the Results
In order to access the results, we need to enter the database results postprocessor. This is equivalent to entering the postprocessing module.
/POST1
To determine the stress concentration along the hole, we will first select the nodes attached to the line that defines the hole and then obtain the value of the circumferential stress at each of these nodes.
#### Modify Output Options
Since we are interested in obtaining the circumferential stress, we need to change the options for output of results from cartesian to cylindrical.
RSYS,1
#### Select Lines and Nodes
We'll use select logic to first select the line that defines the hole (5) and then the nodes attached to this line.
LSEL,S,LINE,,5
NSLL,,1
#### Sort Nodal Data
If we were to list the nodal results now, we would obtain a list of the circumferential stresses as a function of the node number. However, we are interested in the circumferential stress as a function of the angle (0 to 90 deg). Since the y coordinate of the nodes along the hole increases as the angle increases, to obtain the circumferential stress as a function of the angle we can sort the results based on the y coordinate of the nodes.
NSORT,LOC,Y,1,,
Recall that in step 3, we divided the line that defines the hole into 40 elements and that the elements were equally spaced (no grading). Therefore, since we know that the angle varies from 0 to 90 deg and that the line was divided into 40 elements, we can determine the angle at each node.
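That angle bookkeeping is easy to reproduce outside ANSYS; a small Python sketch (not part of the APDL log) of the node angles along the hole edge:

```python
# 40 equal element divisions along the 0-90 degree hole edge give 41 nodes,
# spaced 90/40 = 2.25 degrees apart.
n_divisions = 40
angles = [i * 90.0 / n_divisions for i in range(n_divisions + 1)]
print(angles[0], angles[1], angles[-1])   # 0.0 2.25 90.0
```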
#### List Circumferential Stress
The last step is to list the results.
PRNSOL,S,COMP
This command generates a list containing the X,Y,Z,XY, YZ, and XZ stress components at each node. Since we changed the options for output of results from cartesian to cylindrical, the circumferential stress is shown in the second column (Y component).
The modified and final log file should be as follows:
/Title, Orthotropic Plate with a Hole
*SET,a,60e-3
*SET,r,7e-3
*SET,p,1e6
*SET,E1,59.3e9
*SET,E2,22e9
*SET,G12,8.96e9
*SET,nu21,0.047
/PREP7
ET,1,PLANE82
MP,EX,1,E1
MP,EY,1,E2
MP,NUXY,1,NU21
MP,GXY,1,G12
RECTNG,0,a,0,a,
CYL4,0,0,0,0,r,90
ASBA,1,2
LESIZE,8,,,50,0.25,,,,0
LESIZE,9,,,50,0.25,,,,0
LESIZE,5,,,40,,,,,,0
SMRT,1
MSHAPE,0,2D
MSHKEY,0
AMESH,3
DL,8,3,SYMM
DL,9,3,SYMM
SFL,2,PRES,-p,
FINISH
/SOL
SOLVE
FINISH
/POST1
RSYS,1
LSEL,S,LINE,,5
NSLL,,1
NSORT,LOC,Y,1,,
PRNSOL,S,COMP
#### Verify Progress
Restart ANSYS or go to Utility Menu > File > Clear & Start New and select Do not read file.
Copy the list of commands and paste them in the ANSYS Command Input window. The list of commands will generate the following:
#### Analysis of Results
We will use the theoretical solution developed by Greszczuk, L.B (see reference below) to verify the results obtained with ANSYS. To do this, we need to import the results obtained into Excel or a similar application.
After the solution is performed, save the list generated (PRPATH command window). Go to File > Save as. Enter plate2.lis as the file name. Open this file using Excel or a similar application and delete all columns except the SY column (circumferential stress). You will need to create a new column to specify the angle. Recall that the angle at each node can be determined based on the number of divisions (90deg/40div=2.25 increments). The file will look like this:
Create a text file (results.txt) with these results. Use Matlab or a similar application to import/read the results.txt file and plot them along with the theoretical solution. Refer to the reference below for a detailed description of the theoretical solution and associated equations.
As we can see, the solution obtained with ANSYS compares well with the theoretical solution. The highest variation between the theoretical solution and the results obtained with ANSYS occurs at 90 deg. At this angle, the value obtained with ANSYS varies by less than 3% with respect to the theoretical value.
#### Reference
Greszczuk, L.B., "Stress Concentrations and Failure Criteria for Orthotropic and Anisotropic Plates with Circular Openings", Composite Materials: Testing and Design (Second Conference), ASTM STP 497, American Society for Testing and Materials, 1972, pp. 363-381. | 1,165 | 4,422 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.8125 | 3 | CC-MAIN-2024-30 | latest | en | 0.90071 |
# Which method is right to update geometry of mesh?
Hello everyone! I am making an editor based on Three.js. I have a lot of different types of meshes (cubes, planes, circles, etc.). I'm trying to make a tool which allows changing parameters of geometry (width, height, radius and others). On Stack Overflow and here (in this forum) I found different methods to do that, but I don't know which to use.
Let me show.
First method is to use `scale`:
``````let geometry = new THREE.PlaneGeometry( 30, 30 );
let material = new THREE.MeshLambertMaterial( {color: "#0087FF", side: THREE.DoubleSide} );
let plane = new THREE.Mesh( geometry, material);
plane.scale.set(2, 1, 1);
``````
This method is not very convenient, since it is easier to reason directly in terms of parameters such as length and width than in scale factors.
Second method is to recreate geometry:
``````plane.geometry.dispose();
plane.geometry = new THREE.PlaneGeometry( 40, 40 );
``````
With this approach it is easier to work with the geometry, but I read in the documentation that this operation is computationally very expensive.
As a result, which of these two methods is better used and what problems will have to be faced in the future? Are there other ways to change geometry parameters?
Changing parameters used to generate a geometry requires recreating the geometry and uploading it to the GPU. That will always be more expensive than changing the scale/position/rotation associated with the geometry, especially when the geometry has many vertices. For very small meshes it might not matter so much, but I still wouldn’t do it 60 times per second or anything like that.
I’d suggest creating a geometry with a known unit size of 1, and then just scaling it to the size you need with `plane.scale.set(2, 1, 1)` or similar.
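One way to keep parameter-style thinking while still using `scale` (the helper below is our own sketch, not a three.js API):

```javascript
// Convert desired plane dimensions into scale factors for a mesh built
// from a unit-sized geometry, e.g. new THREE.PlaneGeometry(1, 1).
function planeScaleFor(width, height, baseWidth = 1, baseHeight = 1) {
  return [width / baseWidth, height / baseHeight, 1];
}

// Usage (assuming `plane` wraps a 1x1 PlaneGeometry):
// plane.scale.set(...planeScaleFor(40, 40)); // behaves like PlaneGeometry(40, 40)
console.log(planeScaleFor(40, 40)); // [ 40, 40, 1 ]
```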
Thank you for your answer. I will try to use scale if you insist on it. It's a pity that I have to edit a lot of code, since everywhere I relied directly on geometry parameters.
Here is a small comparison, based on my personal experience.
| No | Method | Description |
| --- | --- | --- |
| 1 | Recreating the geometry | This is the slowest method; it uses a lot of resources. It should be applied only as a last option, if none of the other methods is possible. |
| 2 | Manually updating the geometry | Also slow, but not as slow as (1). The programmer must write loops and modify data in the geometry. Used predominantly for custom shapes and rarely used in animation cycles unless objects are deformed non-uniformly. |
| 3 | Updating the geometry with geometry's translate, scale, rotate methods | Also slow, because it is done on the CPU. Often used when the change is done once, outside the animation loop. |
| 4 | Updating the mesh with mesh's position, scale, rotation properties | The fastest approach, as only a matrix is calculated in JavaScript; the actual transformation is done on the GPU. This is the preferred method for modification. Additionally, similar objects may share geometries (e.g. all cuboids can use one single box geometry). |
| 5 | Morphing between two shapes | Also fast to do in an animation cycle, but requires some effort to build the base geometries used for morphing. Used for custom 3D transitions that are hard or slow to calculate in real time. |
As for geometry’s `parameters` property – they just hold the initial parameters of the geometry. When you modify the geometry via methods (2)-(5), this property is not updated.
My advice is to use approach (4), unless there is a compelling reason to use another approach.
The bottom line is that there is no right approach. There are many approaches with pros and cons and you must pick the best one for your specific situation (and level of Three.js skills).
Incredible, thank you for the answer. This is one of the best and most detailed answers to this question. Today I have already spent all day rebuilding the editor for your method 4.
Luckily I found this topic and the detailed explanations; for half a day I couldn't understand why …geometry.parameters.width&height=… dynamically doesn't change PlaneGeometry;
@PavelBoytchev thank you very much!
# Last same question with different data set
## Description
We just have to use a different data set; we are not allowed to use the example data set. If you have any questions, let me know. It is the same work, just using a different data set.
### Unformatted Attachment Preview
ECO 605: Module Four Case Study Guidelines and Rubric

Overview: The case studies in this course are designed to actively involve you in environmental economics reasoning and to help you apply the course principles to complex real-world situations. In the case studies, you will use data analysis to make informed recommendations and communicate in a professional manner. The Module Four Case Study examines data with the travel cost method. In your submission, you will demonstrate the following skills:

1. Apply an appropriate type of cost-benefit analysis and compare it to the contingent valuation method.
2. Define the collection source for data.
3. Collect data on the number of visitors from each zone and the number of visits made in the last year.
4. Calculate visitation rates.
5. Calculate the average round-trip travel distance and travel time for each zone.
6. Write recommendations for an influential association of homeowners and businesses and describe the advantages of the travel cost method over other methods.
7. Construct the demand function with the use of results from regression analysis.
8. Write a summary of the benefit-cost analysis on programs to control pollution.

Prompt: The objective of this case study is to analyze data and make recommendations for the improvement of the water quality in a local lake. Describe the required data and the rationale for using the travel cost method. Prepare your analysis as though you were hired by an influential association of homeowners and businesses that are interested in the local lake's water quality. The analysis and recommendations you provide will help determine the benefits for improving the water quality of the lake. You must take the steps listed below to complete this case study.

Step 1

Describe the rationale for using the travel cost method. Compare the travel cost method to the contingent valuation method in your description.

Step 2

Define the zones surrounding the lake.
These may be defined by concentric circles around the lake or by geographic divisions. Choose what makes sense, such as counties or other distinguishable boundaries that surround the lake at different distances. Add a graphic to enhance the definition and description.

Step 3

Explain how you will collect data. Focus on the number of visitors from each zone and the number of visits made in the last year. For this example, assume the staff at the lake has records of the number of visitors and their zip codes. This will be used to calculate the total number of visits per zone over the last year. To extend the value of the analysis, explain the value of more precise data and what it takes to analyze this additional data. More information on this approach is found on the companion website to the course textbook (relevant pages for Chapter 7).

Step 4

Calculate the visitation rates per 1,000 population in each zone. These are the total visits per year designated by each zone, divided by the zone's population in thousands. An example is shown below. Use Microsoft Excel (or something similar) to calculate the rates.

Visitation Rates per 1,000 Population

Zone          Total Visits/Year   Zone Population   Visits/1,000
0             400                 1,000             400
1             400                 2,000             200
2             400                 4,000             100
3             400                 8,000             50
Beyond 3      0
Total Visits  1,600

Step 5

Calculate the average round-trip travel distance and travel time for each zone. Assume that people in Zone 0 have a travel distance and time of zero. Every other zone has increasing travel time and distance. Next, using average cost per mile and per hour of travel time, calculate the travel cost per trip. A standard cost per mile for operating an automobile is readily available from AAA or similar sources. Assume that cost per mile is $.30, or use the current expense rate found on the IRS website. The cost of time is more complicated. The simplest approach is to use the average hourly wage.
For this example, assume it is $9 per hour (or $.15 per minute) for all zones, although in practice it is likely to differ by zone. Generate calculations using Microsoft Excel or a similar program.

Average Round-Trip Travel Distance and Travel Time

Zone   Round-Trip        Round-Trip    Distance Times     Travel Time Times    Total Travel
       Travel Distance   Travel Time   Cost/Mile ($.30)   Cost/Minute ($.15)   Cost/Trip
0      0                 0             0                  0                    0
1      20                30            $6                 $4.50                $10.50
2      40                60            $12                $9.00                $21.00
3      80                120           $24                $18.00               $42.00

For additional practice, add one to two more zones with additional data.

Step 6

To estimate using regression analysis, use an equation that relates visits per capita to travel costs and other important variables. From this, estimate the demand function for the average visitor. In this simple model, the analysis might include demographic variables, such as age, income, gender, and education levels, using the average values for each zone. To maintain the simplest possible model, calculate the equation with only the travel cost and visits/1,000.

Visits/1,000 = 330 - 7.755*(Travel Cost)

Step 7

Construct the demand function for visits to the lake, using the results of the regression analysis. The first point on the demand curve is the total visitors to the lake at current access costs (assuming there is no entry fee for the lake), which in this example is 1,600 visits per year. The other points are found by estimating the number of visitors with different hypothetical entrance fees (assuming that an entrance fee is viewed in the same way as travel costs). Enter the total number of visits.

Demand Function

Zone   Travel Cost plus $10   Visits/1,000   Population   Total Visits
0      $10.00                 252            1,000        252
1      $20.50                 171            2,000        342
2      $31.00                 90             4,000        360
3      $52.00                 0              8,000        0
Total Visits

For additional practice, add one to two more sets of data. This gives the second point on the demand curve (enter the sum of the total visits into the gray shaded area).
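The Step 7 arithmetic can be sketched in a few lines of Python (numbers taken from the tables in this assignment; this is an illustration, not part of the assignment template):

```python
# Predicted visits per 1,000 from the Step 6 regression, after adding a
# hypothetical $10 entry fee to each zone's travel cost, then scaled by
# zone population (in thousands) to get total visits.
costs = {0: 0.0, 1: 10.50, 2: 21.00, 3: 42.00}   # travel cost per trip by zone
population = {0: 1000, 1: 2000, 2: 4000, 3: 8000}
fee = 10.0

visits_per_1000 = {z: max(0.0, 330 - 7.755 * (c + fee)) for z, c in costs.items()}
total_visits = sum(round(v) * population[z] / 1000 for z, v in visits_per_1000.items())
print({z: round(v) for z, v in visits_per_1000.items()}, total_visits)
# {0: 252, 1: 171, 2: 90, 3: 0} 954.0
```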
Use the total number of visits and multiply it by an entry fee of $10. Then calculate in the same way for the number of visits at each of the increasing entry fees to get the totals listed below. (Use a program such as Microsoft Excel to enter data and then plot a graph.)

Entry Fee   Total Visits
$20         409
$30         129
$40         20
$50         0

These points give the demand curve for trips to the lake.

Step 8

Now estimate the total economic benefit of the lake by calculating the consumer surplus (or the area under the demand curve). This results in a total estimate of economic benefits from the lake uses around $23,000 per year, or around $14.38 per visit ($23,000/1,600). Remember that the objective is to determine whether it is worthwhile to spend money to protect the lake by implementing programs to improve the water quality. If the actions cost less than $23,000 per year, the cost will be less than the benefits provided by the lake. If the costs are greater, the staff will decide whether other factors are worthwhile. You should make recommendations that will influence a decision on whether it is worthwhile to spend money on programs to improve the water quality of the lake over the long run and the short run. Also make recommendations on the additional information to gather in a survey to enhance this study. Create a report with recommendations based on your analysis.

Rubric

Guidelines for Submission: The case study must follow these formatting guidelines: double spacing, 12-point Times New Roman font, one-inch margins, and APA citations. Your submission should be one to two pages in length (not including cover page and references).
Critical Element: Rationale (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends with compelling rationale on the benefits of the travel cost method with a comparison to other methods, such as contingent valuation.
- Proficient (90%): Introduces the report with a clear description of the rationale for the use of the travel cost method.
- Needs Improvement (70%): Attempts to introduce the report with a rationale for the use of the method for the analysis, but the description is not clearly explained or it is missing correct information.
- Not Evident (0%): There is no evidence or rationale for the method used in this report.

Critical Element: Zones (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends by defining the source of the data from the surrounding area and incorporates a graphic with concentric circles from around the lake.
- Proficient (90%): Defines the zones surrounding the lake.
- Needs Improvement (70%): Attempts to define the zones surrounding the lake, but at least one detail is incorrect.
- Not Evident (0%): The definition of the zones is incorrect or missing completely.

Critical Element: Data Collection (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends the report's value by describing additional data to be collected that would enhance the current data and analysis value.
- Proficient (90%): Describes the collection of data using the example of the lake staff providing records of the number of visitors and their zip codes as part of the visits per zone.
- Needs Improvement (70%): Attempts to describe the collection of data, but there is an error or missing information.
- Not Evident (0%): There is no attempt to describe the data collected.

Critical Element: Visitation Rates (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends the calculations with additional relevant data and adds it to the graph of the zones and visits/1,000.
- Proficient (90%): Generates accurate calculations of the data with the use of a program such as Excel and generates a graph of the zones and visits/1,000.
- Needs Improvement (70%): Attempts to generate calculations of the data with the use of a program such as Excel, but there are errors in the data or the graph.
- Not Evident (0%): There is no analysis or proper use of the data.

Critical Element: Travel Cost per Trip (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends to include additional data that enhances the graph.
- Proficient (90%): Calculates the travel costs and trips by zone using the data provided, with the use of a program such as Excel to generate a representation of the information in a graph.
- Needs Improvement (70%): Attempts to calculate the travel costs and trips by zone using the data provided, but there is an error in the data or graph.
- Not Evident (0%): There is no calculation of the travel costs and trips by zone, or no graph.

Critical Element: Estimation (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends calculations and the regression analysis to include more variables that go beyond the simplest travel cost and visits/1,000.
- Proficient (90%): Estimates using the regression analysis of the visits per capita to travel costs and other important variables.
- Needs Improvement (70%): Attempts to use regression analysis of the visits per capita to travel costs and other variables, but there is an error in the equation or the use of data.
- Not Evident (0%): There is no regression analysis of the visits per capita to travel costs.

Critical Element: Demand Function (Value: 10)
- Exemplary (100%): Meets "Proficient" criteria and extends the demand function for visits to the lake by adding supporting data to enhance the graph.
- Proficient (90%): Constructs the demand function for visits to the lake using the results from the regression analysis; enters data into a program such as Excel and creates a graph to represent the data and regression analysis.
- Needs Improvement (70%): Attempts to construct the demand function for visits to the lake, but the data has errors or there is no graph to represent the data and regression analysis.
- Not Evident (0%): There is no constructed demand function for visits to the lake.

Critical Element: Total Economic Benefit (Value: 20)
- Exemplary (100%): Meets "Proficient" criteria and extends to include additional recommendations that enhance this analysis with good questions to ask for more data to improve the analysis.
- Proficient (90%): Uses the data provided to create a summary of the benefit-cost analysis with recommendations on short-run and long-run costs of programs that will control pollution; writes in a way that will influence a target audience of homeowners and businesses.
- Needs Improvement (70%): Attempts to use the data in the analysis, but at least one data source is not identified or is used incorrectly.
- Not Evident (0%): There is no analysis or data used correctly to make accurate recommendations.

Critical Element: Articulation of Response (Value: 10)
- Exemplary (100%): Submission is free of errors related to citations, grammar, spelling, syntax, and organization and is presented in a professional and easy-to-read format.
- Proficient (90%): Submission has no major errors related to citations, grammar, spelling, syntax, or organization.
- Needs Improvement (70%): Submission has major errors related to citations, grammar, spelling, syntax, or organization that negatively impact readability and articulation of main ideas.
- Not Evident (0%): Submission has critical errors related to citations, grammar, spelling, syntax, or organization that prevent understanding of ideas.

Earned Total: 100%
## Explanation & Answer
Running head: MODULE 4 CASE STUDY

Module 4 case study
(Name)
(Course)
(Date)
Module 4 case study
The travel cost method estimates both the time and the cost that people incur when traveling to any location as part of a recreational trip. Through this estimation, the company may estimate the value of a site depending on both the monetary cost of reaching such a place and the cost of the time spent in arriving at it (Martens & Ciommo, 2017). Because it is supported by evidence, this method provides an accurate estimator of the perceived value of each location to the customers. In this case, the association of homeowners would use a similar approach to quantitatively estimate the added benefit that having a high-wat...
https://socialsci.libretexts.org/Bookshelves/Political_Science_and_Civics/Book%3A_A_Short_Introduciton_to_World_Politics_(Meacham)/04%3A_Foreign_Policy_Decision_Making/4.02%3A_National_and_Domestic_Factors | 1,726,636,455,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00427.warc.gz | 479,610,221 | 35,175 | # 4.2: National and Domestic Factors
Besides the system level of analysis, there are important factors in a nation-state’s domestic capabilities, decision-making and policies that affect its actions in world politics.
## Geopolitics (Location, Location, Location)
Even with the easy travel and communication of today, traditional factors like geography, natural resources and population affect foreign policy.
For instance, the U.S. has an advantage in sitting behind large oceans and having friendly and relatively weak nations on its borders, which has sometimes encouraged isolationism. Similarly, the English Channel has protected England from invasion. Compare this to Germany, which was surrounded by sometimes hostile neighbors and has no natural defenses. Russia’s borders consist of open plains that have been invasion routes for centuries. Korea is located between China, Russia and Japan, a tough neighborhood! Mexico and Canada must cope with the giant on their borders. Mexican President Porfirio Diaz once said, “Poor Mexico, so far from God, so close to the United States.” Indeed, the U.S. took half of Mexico, twice tried to take over Canada, and still dominates both economies. The Caribbean countries and Central and Latin America have similar concerns about the nearby U.S., especially after dozens of military interventions. Obviously, these geographic factors will influence outlooks and decisions.
The U.S. and Europe have many navigable rivers and good ports, which were particularly important before the existence of railroads, good roads and cars. (Even today, over 75% of goods move by water.) Since shipping by water is 50% cheaper than by land, countries with good rivers, coastlines and ports have an advantage in trade and can develop strong navies. Many of Russia’s rivers run to the North and many of its ports are frozen in the winter. (However, the melting of the Arctic is opening up the Northern Sea Route.) Many of Africa’s rivers are less navigable because they fall steeply through waterfalls from the interior plateau.
In addition, the topography and fertility of the land affect a country’s economy, power and behavior. The U.S. has large amounts of good land for farming, whereas only 20% of China’s land is arable. Looking ahead for future food security, Saudi Arabia and China have bought large tracts of land in Africa. Russia is so far north that much of its land is difficult for farming (although global warming has also improved this situation). Much of Afghanistan is mountainous, which makes agriculture and travel difficult.
Availability of water is also a factor. For thousands of years, the Yellow and Yangtze Rivers caused huge, deadly floods in China. Today, because of increased industrial use, the Yellow River sometimes runs dry before it reaches the ocean. Lack of water may be a serious constraint on growth in the Chinese economy. So, they have dammed the Yangtze and are transferring huge amounts of water from the South to the dry North. In Pakistan, India and most other countries, water tables are falling. The world will soon be divided into water-rich areas and water-poor areas like the Middle East, North Africa and Southwest Asia. The cities of São Paulo in Brazil, Cape Town in South Africa and Chennai in India have already experienced water emergencies.
Other natural resources are also important. Russia has plenty of oil, gas and minerals, while Europe and China have to import oil and gas. (China does have plenty of coal.) Europe gets more than a third of its gas from Russian pipelines, and is vulnerable to cutoffs such as the ones Russia imposed on Ukraine during price and political disputes. Until recently, the U.S. was importing most of its oil. However, new fields, fracking and new drilling methods greatly increased production, and it is close to being self-sufficient.
A large population means military and economic power. However, having too large a population can cause problems of supplying food, jobs, and other services (China, India). A well-educated population is clearly a big advantage, as countries as different as Japan, Malaysia, Costa Rica and Ireland have shown. The population profile is also important. Not only are the populations of Japan, Russia and Europe decreasing, they are becoming older, which will lead to problems of retirement costs and workforce shortages. On the other hand, poor countries in the Global South have many young people who need jobs (typically, half the population is under 30). High unemployment in many of these countries increases instability.
All these geopolitical factors will influence the outlook and actions of countries.
## Military Capabilities
Today, the U.S. spends more on its military than the next 10 countries combined. The resulting power may be one reason why the Bush 2 administration favored military action in Iraq. Russia's conventional military forces greatly diminished after the Cold War, but Putin has strengthened the military, emphasizing special forces and new weapons, invading its tiny neighbor Georgia, taking Crimea and part of Ukraine, and intervening to prop up longtime ally Syria. Britain and France have small but effective and modern militaries. Japan is rearming. China is modernizing its military. India's military is the most powerful in South Asia.
## Economic Capabilities and Technology
Nations with strong economies and technology have more international interests and the resources to pursue them. Having rich natural resources helps, but today, technology, education and government policy are more important in developing the economy. Europe has a higher average income than the U.S. and the latest technology. India, China, and other Newly Industrialized Countries are also using technology to advance rapidly. For instance, instead of spending trillions of dollars on phone lines, they are moving directly to cell phones. Furthermore, in many places in the Global South even the poor use cell phones to pay for goods and services and to transfer money, allowing their countries to skip over the costs of developing banking, checking and credit card systems. In Japan and South Korea, internet speeds are far faster than in the U.S. and people use their phones to join affinity groups and buy cold beer from vending machines (one benchmark of an advanced society). In Chinese cities, even street food stalls and beggars use QR codes.
## Type of Government
Dictators don’t allow independent legislatures, media and interest groups. In contrast, living in a democracy means that Bush 2 had to consult with Congress before the 2003 Iraq war (they rolled over and authorized him to use force). When U.S. public opinion turned against the Iraq war, he lost his Republican majority in Congress. Sometimes the Congress supports the president strongly, and sometimes they attack him relentlessly, such as what Johnson endured during Vietnam or what Obama got during his term. Furthermore, because of elections, leaders in democratic countries have to steer a course that keeps most people happy.
## Interest Groups
The powerful American-Israel Public Affairs Committee (AIPAC) has a strong influence on U.S. policy regarding Israel. Iraqi exiles’ promise of an easy ‘liberation’ had a large effect on the Bush administration’s outlook and decisions on Iraq. The U.S. farm and drug lobbies overcame the influence of the anti-Castro Cuban American community to sell their products to Cuba despite the U.S. economic embargo. The military industrial complex keeps defense expenditures high with expensive weapons. Big U.S. corporations who have moved their factories to China are a powerful lobby against trade restrictions. Different groups ran TV ads for and against the Iran nuclear deal.
## Bureaucratic Politics
Sometimes bureaucracies put their programs, goals and interests first, or even define the national interest in terms of bureaucratic interests. During the first four years of the Bush 2 administration, the Defense Dept. consistently had its way over the State Dept. and CIA. For instance, after 9/11, Defense Secretary Donald Rumsfeld refused to let the military help the CIA in Afghanistan unless the Defense Department was in charge of the operation. The military leadership's subsequent focus on conventional tactics such as capturing the capital city of Kabul allowed Osama Bin Laden and thousands of his men to escape over the border to Pakistan. When an Army general was asked in an interview why the Army did not respond to the CIA's request to block the mountain passes to trap Bin Laden, he answered, "First of all, the CIA doesn't tell the Army what to do." In other words, protecting their bureaucratic turf was more important than the mission.
The Defense Dept. also insisted on handling all aspects of the war in Iraq, which caused problems when they ignored State Dept. plans for the postwar occupation, made none of their own, and made many serious errors. For instance, they disbanded the Iraqi Army, putting 400,000 men out on the street with no jobs or pensions and plenty of weapons, and fired all members of the ruling Baath party, stripping the country of its managers and educated professionals. This drove both groups into the arms of the insurgency.
Bureaucratic attitudes influence policy decisions. Diplomats prefer diplomatic solutions even when dictators like Hitler or Yugoslavia's Milosevic abuse the process for their own ends. Some military leaders prefer military solutions even when diplomacy is possible. Furthermore, in order to justify their budgets, each military service competes for a share of all operations, whether it is appropriate or not. Some of the aircraft that Reagan used to bomb Libya in 1986 were Air Force planes that flew several hours from Britain to be part of the action, even though there were plenty of Navy planes on nearby aircraft carriers. Another problem is that competing agencies don't cooperate, whether it is Army and Navy units whose radios are on different frequencies or rival agencies like the FBI and CIA, who didn't share information on Al Qaeda terrorists before 9/11.
Groupthink is the name given to the conceptual constraints that arise within organizations. People in the same organization tend to see things through the filters of their experience, procedures and bureaucratic goals, accepting only the information that conforms to their template (confirmation bias) and only supporting action that will serve the organization. For example, in 1914, all the militaries had plans for quick mass mobilizations because quick mass mobilizations had won the most recent wars. But it became a self-fulfilling prophecy - each side saw the others’ mobilizations as a threat and started their own, a major factor leading to the outbreak of WWI.
Before 9/11, the Bush 2 administration refused to deal with terrorism, partly because they felt that anything the previous (Clinton) administration emphasized must be wrong. Then, after 9/11, Vice President Cheney and his group cherry-picked intelligence that agreed with their view that Saddam had Weapons of Mass Destruction (WMDs) such as nukes, and ignored information that contradicted that view.
Sometimes the bureaucracy simply disobeys. During the Cuban Missile Crisis, the Navy disregarded Kennedy's orders to shrink the blockade line closer to Cuba. The Navy also flouted orders by chasing Soviet subs, nearly causing a nuclear counterattack. And despite Kennedy’s orders, the CIA provocatively sent a U-2 spy plane over the USSR during the height of the crisis.
## The Media and Public Opinion
In 1993, TV pictures of starving children in the war in Somalia pressured the UN and U.S. to intervene. But when American soldiers were killed and their bodies dragged through the streets on CNN, the U.S. pulled out. In 2003, the Bush 2 administration spread scary media stories about weapons of mass destruction in Iraq that caused the public to support an invasion. Later, pessimistic TV coverage helped the public sour on the war. Voters may simply tire of the human and financial costs of intervention in far-off, unknown places. Americans demanded revenge after 9/11, but now a majority supports leaving Afghanistan.
## Ideology and Political Culture
During the Cold War, the USSR preached Marxist ideology, including the inevitability of class struggle, conflict with capitalist countries and eventual world communist revolution. Today, Islamists such as Islamic State (IS) believe in pushing the West out of the Muslim world and establishing a theocratic Caliphate. The U.S. tries to bring American-style democracy and capitalism to all countries, regardless of their history or culture. The political descendants of former French President Charles DeGaulle insist on a leading role for France in world politics. The Chinese world view is that it should dominate Asia and the world.
In addition, each country has a different political culture. France, Japan and Britain's bureaucrats are openly elitist, while American bureaucrats must feign humility. Pervasive corruption in China, India, Russia and other countries causes many problems. The level of political participation of women varies widely among countries, with the U.S. lagging behind many others. There was surprise when Clinton appointed a woman (Madeleine Albright) as Secretary of State. There was amazement when Bush 2 appointed a black woman (Condoleezza Rice) as Secretary of State. When Obama appointed Hillary Clinton, her gender no longer elicited comment.
This page titled 4.2: National and Domestic Factors is shared under a CC BY-NC-ND 4.0 license and was authored, remixed, and/or curated by Lawrence Meacham. | 4,437 | 18,049 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.765625 | 3 | CC-MAIN-2024-38 | latest | en | 0.198574 |
https://strimas.com/ebp-workshop/subsampling.html | 1,708,493,204,000,000,000 | text/html | crawl-data/CC-MAIN-2024-10/segments/1707947473370.18/warc/CC-MAIN-20240221034447-20240221064447-00011.warc.gz | 587,788,329 | 7,606 | # Lesson 7 Spatiotemporal Subsampling
Despite the strengths of eBird data, species observations collected through citizen science projects present a number of challenges that are not found in conventional scientific data. In this chapter, we’ll discuss three of these challenges: spatial bias, temporal bias, and class imbalance. Spatial and temporal bias refers to the tendency of eBird checklists to be distributed non-randomly in space and time, while class imbalance refers to fact that there will be many more non-detections than detections for most species. All three can impact our ability to make reliable inferences from eBird data.
## Exercise
Think of some examples of birder behavior that can lead to spatial and temporal bias.
• Spatial bias: most eBirders sample near their homes, in easily accessible areas such as roadsides, or in areas and habitats of known high biodiversity.
• Temporal bias: eBirders preferentially sample when they are available, such as weekends, and at times of year when they expect to observe more birds, such as spring migration in North America.
Fortunately, all three of these challenges can largely be addressed by spatiotemporal subsampling of the eBird data prior to modeling. In particular, this consists of dividing space and time up into a regular grid (e.g. 5 km x 5 km x 1 week), and only selecting a subset of checklists from each grid cell. To deal with class imbalance, we can subsample detections and non-detections separately to ensure we don’t lose too many detections.
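Before introducing any spatial machinery, the scheme can be sketched with plain dplyr: assign each checklist a grid-cell key and a week, then keep one checklist per key, sampling detections and non-detections separately. This is a simplified illustration only: it uses square cells of roughly 0.05 degrees (on the order of 5 km) rather than the hexagonal grid used below, and the column names (`latitude`, `longitude`, `observation_date`, `species_observed`) assume the zero-filled eBird data used later in this lesson.

``````library(dplyr)
library(lubridate)

# Sketch of spatiotemporal subsampling on a simple square grid.
# cell_size is in degrees; ~0.05 degrees is on the order of 5 km.
subsample_sketch <- function(checklists, cell_size = 0.05) {
  checklists %>%
    mutate(cell_x = floor(longitude / cell_size),
           cell_y = floor(latitude / cell_size),
           year = year(observation_date),
           week = week(observation_date)) %>%
    # grouping by species_observed samples detections and
    # non-detections separately, addressing class imbalance
    group_by(species_observed, year, week, cell_x, cell_y) %>%
    sample_n(size = 1) %>%
    ungroup()
}``````

Grouping by `species_observed` is the key move for class imbalance: without it, rare detections could be subsampled away entirely.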
For the spatial part of the subsampling we’ll use the package `dggridR` to generate a regular hexagonal grid and assign points to the cells of this grid. Hexagonal grids are preferable to square grids because they exhibit significantly less spatial distortion.
## 7.1 A toy example
To illustrate how spatial sampling on a hexagonal grid works, let's start with a simple toy example. We'll generate 500 randomly placed points, construct a hexagonal grid with 5 km spacing, assign each point to a grid cell, then select a single point within each cell.
``````library(auk)
library(sf)
library(dggridR)
library(lubridate)
library(tidyverse)
# bounding box to generate points from
bb <- st_bbox(c(xmin = -0.1, xmax = 0.1, ymin = -0.1, ymax = 0.1),
crs = 4326) %>%
st_as_sfc() %>%
st_sf()
# random points
pts <- st_sample(bb, 500) %>%
st_sf(as.data.frame(st_coordinates(.)), geometry = .) %>%
rename(lat = Y, lon = X)
# construct a hexagonal grid with ~ 5 km between cells
dggs <- dgconstruct(spacing = 5)
#> Resolution: 13, Area (km^2): 31.9926151554038, Spacing (km): 5.58632116604266, CLS (km): 6.38233997895802
# for each point, get the grid cell
pts$cell <- dgGEO_to_SEQNUM(dggs, pts$lon, pts$lat)$seqnum
# sample one point per grid cell
pts_ss <- pts %>%
group_by(cell) %>%
sample_n(size = 1) %>%
ungroup()
# generate polygons for the grid cells
hexagons <- dgcellstogrid(dggs, unique(pts$cell), frame = FALSE) %>%
st_as_sf()
ggplot() +
geom_sf(data = hexagons) +
geom_sf(data = pts, size = 0.5) +
geom_sf(data = pts_ss, col = "red") +
theme_bw()``````
In the above plot, black dots represent the original set of 500 randomly placed points, while red dots represent the subsampled data, one point per hexagonal cell.
## 7.2 Subsampling eBird data
Now let’s apply this same approach to the zero-filled American Flamingo data we produced in the previous lesson; however, now we’ll temporally sample as well, at a resolution of one week, and sample presences and absences separately. We start by reading in the eBird data and assigning each checklist to a hexagonal grid cell and a week.
``````# generate hexagonal grid with ~ 5 km between cells
dggs <- dgconstruct(spacing = 5)
#> Resolution: 13, Area (km^2): 31.9926151554038, Spacing (km): 5.58632116604266, CLS (km): 6.38233997895802
# read the zero-filled eBird data prepared in the previous lesson
# (the file name here is illustrative)
ebird <- read_csv("ebird_zf.csv") %>%
# get hexagonal cell id and week number for each checklist
mutate(cell = dgGEO_to_SEQNUM(dggs, longitude, latitude)$seqnum,
year = year(observation_date),
week = week(observation_date))``````
Now we sample a single checklist from each grid cell for each week, using `group_by()` to sample presences and absences separately.
``````ebird_ss <- ebird %>%
group_by(species_observed, year, week, cell) %>%
sample_n(size = 1) %>%
ungroup()``````
## Exercise
How did the spatiotemporal subsampling affect the overall sample size as well as the prevalence of detections?
``````# original data
nrow(ebird)
#> [1] 242
count(ebird, species_observed) %>%
mutate(percent = n / sum(n))
#> # A tibble: 2 x 3
#> species_observed n percent
#> <lgl> <int> <dbl>
#> 1 FALSE 217 0.897
#> 2 TRUE 25 0.103
# after sampling
nrow(ebird_ss)
#> [1] 147
count(ebird_ss, species_observed) %>%
mutate(percent = n / sum(n))
#> # A tibble: 2 x 3
#> species_observed n percent
#> <lgl> <int> <dbl>
#> 1 FALSE 126 0.857
#> 2 TRUE 21 0.143``````
So, the subsampling decreased the overall number of checklists from 242 to 147, but increased the prevalence of detections from 10% to 14%. | 1,384 | 5,103 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.3125 | 3 | CC-MAIN-2024-10 | latest | en | 0.905936 |
https://gmatclub.com/forum/abnormally-high-score-on-gmatprep-91910.html | 1,508,602,418,000,000,000 | text/html | crawl-data/CC-MAIN-2017-43/segments/1508187824820.28/warc/CC-MAIN-20171021152723-20171021172723-00583.warc.gz | 692,588,277 | 47,050 | It is currently 21 Oct 2017, 09:13
# Abnormally high score on GMATPrep?
Director
Joined: 01 Jan 2008
Posts: 504
Re: Abnormally high score on GMATPrep?
27 Mar 2010, 12:15
Scores on GMATPrep are quite representative, and may only give you an inflated score if you have answered questions that you have come across earlier.
Also, the second test on GMATPrep has always been a higher-scoring one compared to the first, in my experience, at least on the first attempt.
I believe that if you keep your nerves on your test date, you may outdo your expected score. Just take it easy the day before the test.
Best Wishes
Director
Joined: 01 Jan 2008
Posts: 504
Re: Abnormally high score on GMATPrep?
29 Mar 2010, 00:41
A balanced score will serve you better than a lopsided one, so work on your quant in the next 6 days. Know your number theory well; it will help you on test day.
Senior Manager
Status: Not afraid of failures, disappointments, and falls.
Joined: 20 Jan 2010
Posts: 290
Concentration: Technology, Entrepreneurship
WE: Operations (Telecommunications)
Re: Abnormally high score on GMATPrep?
29 Mar 2010, 02:15
jhenry42 wrote:
Thanks bhatiagp. I just took my last Kaplan CAT test (#6) and received a 690 (Q40, V51). Would you say that is in-line as well? I have read that Kaplan tests are becoming easier than they had been in the past. I hope the standard 50 points higher is still applicable. Eight more days until test day - I just need to get my Quant score up to make it more in line with verbal (this is odd because I have been in finance for 3+ years).
Thanks again.
What strategies did you apply and what did you use to practice to get V51???? V51 is massive.
_________________
"I choose to rise after every fall"
Target=770
http://challengemba.blogspot.com
Kudos??
Senior Manager
Status: Not afraid of failures, disappointments, and falls.
Joined: 20 Jan 2010
Posts: 290
Concentration: Technology, Entrepreneurship
WE: Operations (Telecommunications)
Re: Abnormally high score on GMATPrep?
06 Apr 2010, 19:39
jhenry42 wrote:
bhatiagp,
Any suggestions for number property study materials or practice questions?
MGMAT Number Properties will be a big help for you. Also check the Number Properties part of the GMAT Math Book created by bunuel & walker; I have posted the PDF of the book, so you can download it. Do remember divisibility rules, cyclicity, etc. I am attaching a document related to cyclicity in a new post.
Director
Joined: 01 Jan 2008
Posts: 504
Re: Abnormally high score on GMATPrep?
06 Apr 2010, 23:06
I used Knewton's training program and found it quite helpful. I also used Manhattan's guides, which are pretty useful, and practiced from the OG and the Quant Review guide.
Founder
Joined: 04 Dec 2002
Posts: 15590
Location: United States (WA)
GMAT 1: 750 Q49 V42
Re: Abnormally high score on GMATPrep?
06 Apr 2010, 23:13
jhenry42 wrote:
I just finished my second GMATPrep test and received a 740. (Q44, V48). I only missed one verbal question. I am somewhat concerned given my last GMATPrep test I received a 690 (Q44, V40), which was my highest score on practice tests to date and obviously not as high on verbal. I have been receiving 620-660 on Princeton Review and Kaplan tests.
I am aiming for a 710+ on my official test which is in a week. Could my 740 be inflated and give me a false sense of security? Thanks for any comments on similar situations.
_________________
Founder of GMAT Club
Just starting out with GMAT? Start here... or use our Daily Study Plan
Co-author of the GMAT Club tests
Manager
Joined: 10 Aug 2009
Posts: 122
Re: Abnormally high score on GMATPrep?
08 Apr 2010, 09:02
Hey,
I'm in a similar situation.
I got 680 on my first GMAT prep, and after a couple of months of studying, I got 760 on GMAT prep 2.
Anyway my exam is in a month so we'll see how the actual score correlates to the prep scores.
https://agrat.cat/tag/pemdas | 1,656,155,109,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00139.warc.gz | 138,404,031 | 13,597 | ## Pemdas Example Problems With Answers
Pemdas Example Problems With Answers. The following practice problems can be solved to test your knowledge about the order of operations. 2 times 3 equals 6, and then 6 plus 1 equals 7.
Pemdas problems with answers sample math problems using pemdas + answers. 58 รท (4 x 5) + 3 2. 2 times 3 equals 6, and then 6 plus 1 equals 7.
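The worked arithmetic in these excerpts can be checked mechanically. The small Python sketch below is my own illustration (not taken from any of the worksheets): multiplication happens before addition, and parentheses override the default order.

```python
def two_times_three_plus_one():
    """2 x 3 + 1: multiplication first, then addition."""
    product = 2 * 3        # multiplication binds tighter: 6
    return product + 1     # then addition: 7

def grouped_division_example():
    """58 / (4 x 5) + 3: the parenthesised product is evaluated first."""
    return 58 / (4 * 5) + 3    # 58 / 20 + 3 = 2.9 + 3 = 5.9
```

Both functions mirror the order a student would follow by hand under PEMDAS.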
## Pemdas Worksheets With Exponents
Pemdas Worksheets With Exponents. Exponents, subtraction, and division are excluded. Go the extra mile with these pemdas worksheets.
Order of operations pemdas worksheet 5. Exponents are a critical part of understanding scientific notation. 6th grade mathematics multiple choice questions with answers :
## Pemdas Worksheet Pdf 6Th Grade
Pemdas Worksheet Pdf 6Th Grade. Worksheets are order of operations work 6th grade with answers, order of operations pemdas, order of operations, order of operations basic, exercise work, order of operations pemdas practice work, grade 5 order operations b, work extra examples. Geometry will also feature prominently in the worksheet as students start dealing with lines and angles.
You can control the number ranges used include decimals or not control the. 7th grade common core math worksheets printable pdf with answers. Pemdas worksheets 6th grade pdf.
## Pemdas Exercises For Grade 6
Pemdas Exercises For Grade 6. Pemdas rules handout worksheets these pemdas worksheets will make handouts for the student showing the rules of order that calculations should be performed. Number operation add to my workbooks (32) download file pdf embed in my website or blog add to google classroom
Fourth, perform addition and subtraction from left to right. Free interactive exercises to practice online or download as PDF to print. Some of the worksheets displayed are: mathematics work bodmas, in such problems always follow the order bodmas, bodmas work 1, order of operations basic skills, grade 6 math exercise book, order of operations, order of operations pemdas practice work.
http://www.jamiiforums.com/threads/dozi-ya-quinine-ni-vidonge-vingapi.203837/ | 1,481,319,903,000,000,000 | text/html | crawl-data/CC-MAIN-2016-50/segments/1480698542828.27/warc/CC-MAIN-20161202170902-00065-ip-10-31-129-80.ec2.internal.warc.gz | 535,552,480 | 20,433 | # Dozi ya Quinine ni vidonge vingapi?
Discussion in 'JF Doctor' started by tizo1, Dec 18, 2011.
### tizo1JF-Expert Member
#1
Dec 18, 2011
Joined: Mar 9, 2011
Messages: 829
Trophy Points: 35
Doctors, how is the break going? What is the dose of QUININE tablets for an adult: 30 tablets or 42? Help, please.
2. ### DAWA YA SIKIOJF-Expert Member
#2
Dec 18, 2011
Joined: Dec 8, 2011
Messages: 985
Trophy Points: 35
The break is already over, boss; we are still worn out:
.... First of all, the dose of quinine depends on the person's weight.
>> Formula:
1 kg = 10 mg of quinine
e.g. for a person weighing 45 kg:
45 x 10 = 450 mg
1 tablet = 300 mg
so the dose here is 1.5 tablets.
Mind you:
1. A dose is the amount of medicine you swallow at one time;
the amount of medicine needed to complete the treatment is called the dosage.
2. The maximum dose of quinine is 2 tablets (= 600 mg).
So for a person of 45 kg the dosage is
1.5 x 3 x 5 or 1.5 x 3 x 7,
i.e. 23 or 31 tablets, depending on whether the doctor intends the patient to take the medicine for 5 or 7 days.
[Understand that the dosage covers 5-7 days from the day quinine is started, whether you began with an injection and then tablets, or with tablets only.]
Back to your question: it appears your patient weighs 60 kg or more and is using tablets only,
so the dose is 2 tablets.
So the dosage will be:
2 x 3 x 5 [for 5 days] = 30
OR
2 x 3 x 7 [for 7 days] = 42
In short, a dose [from today on, call it a dosage] of 30 or 42 tablets: both are correct.
... THE DOCTOR CAN GO WITH EITHER OF THOSE VERDICTS TO SENTENCE THOSE PARASITES!
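The arithmetic in this post can be written out as a small Python sketch. This is purely an illustration of the calculation described above (10 mg per kg, 300 mg tablets, three doses a day for 5 to 7 days, capped at 2 tablets per dose); the function names and the rounding to the nearest half tablet are my assumptions, and none of this is medical advice.

```python
TABLET_MG = 300            # one quinine tablet, as stated above
MG_PER_KG = 10             # 10 mg of quinine per kg of body weight
MAX_TABLETS_PER_DOSE = 2   # maximum single dose: 2 tablets (600 mg)
DOSES_PER_DAY = 3          # three doses a day

def tablets_per_dose(weight_kg):
    """Single dose in tablets, rounded to the nearest half tablet
    (my assumption) and capped at the stated maximum."""
    tablets = weight_kg * MG_PER_KG / TABLET_MG
    tablets = round(tablets * 2) / 2
    return min(tablets, MAX_TABLETS_PER_DOSE)

def total_tablets(weight_kg, days):
    """Whole-course 'dosage': one dose, three times a day, for `days` days."""
    return tablets_per_dose(weight_kg) * DOSES_PER_DAY * days
```

For a 45 kg patient this gives 1.5 tablets per dose; for a patient of 60 kg or more on a 5- or 7-day course it gives the 30 or 42 tablets discussed above.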
### tizo1JF-Expert Member
#3
Dec 18, 2011
Joined: Mar 9, 2011
Messages: 829
Trophy Points: 35
Thank you very much for educating me. You have wiped away my ignorance, brother. THANKS JF
4. ### DAWA YA SIKIOJF-Expert Member
#4
Dec 18, 2011
Joined: Dec 8, 2011
Messages: 985
Trophy Points: 35
... You're welcome, brother!!!
5. ### MahesabuJF-Expert Member
#5
Jul 25, 2012
Joined: Jan 27, 2008
Messages: 4,605 | 785 | 1,802 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.90625 | 4 | CC-MAIN-2016-50 | latest | en | 0.225737 |
https://ac.2333.moe/Problem/view.xhtml?id=1070 | 1,726,070,823,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00578.warc.gz | 67,859,517 | 5,033 | N-B-U-T Online Judge :: [1070] Queen Collisions
• ### [1070] Queen Collisions
• Time limit: 1000 ms  Memory limit: 65535 K
• Problem description
• (Figures 1, 2 and 3: the example boards referred to below; images not included.)
Lots of time has been spent by computer science students dealing with queens on a chess board. Two queens on a chessboard collide if they lie on the same row, column or diagonal, and there is no piece between them. Various sized square boards and numbers of queens are considered. For example, Figure 1, with a 7 x 7 board, contains 7 queens with no collisions. In Figure 2 there is a 5 x 5 board with 5 queens and 4 collisions. In Figure 3, a traditional 8 x 8 board, there are 7 queens and 5 collisions.
On an n x n board, queen positions are given in Cartesian coordinates (x, y) where x is a column number, 1 to n, and y is a row number, 1 to n. Queens at distinct positions (x1, y1) and (x2, y2) lie on the same diagonal if (x1- x2) and (y1- y2) have the same magnitude. They lie on the same row or column if x1= x2 or y1= y2, respectively. In each of these cases the queens have a collision if there is no other queen directly between them on the same diagonal, row, or column, respectively. For example, in Figure 2, the collisions are between the queens at (5, 1) and (4, 2), (4, 2) and (3, 3), (3, 3) and (2, 4), and finally (2, 4) and (1, 5). In Figure 3, the collisions are between the queens at (1, 8) and (4, 8), (4, 8) and (4, 7), (4, 7) and (6, 5), (7, 6) and (6, 5), and finally (6, 5) and (2, 1). Your task is to count queen collisions.
In many situations there are a number of queens in a regular pattern. For instance in Figure 1 there are 4 queens in a line at (1,1), (2, 3), (3, 5), and (4, 7). Each of these queens after the first at (1, 1) is one to the right and 2 up from the previous one. Three queens starting at (5, 2) follow a similar pattern. Noting these patterns can allow the positions of a large number of queens to be stated succinctly.
• Input
• The input will consist of one to twenty data sets, followed by a line containing only 0.
The first line of a dataset contains blank separated positive integers n g, where n indicates an n x n board size, and g is the number of linear patterns of queens to be described, where n < 30000, and g < 250. The next g lines each contain five blank separated integers, k x y s t, representing a linear pattern of k queens at locations (x + i*s, y +i*t), for i = 0, 1, ..., k-1. The value of k is positive. If k is 1, then the values of s and t are irrelevant, and they will be given as 0. All queen positions will be on the board. The total number of queen positions among all the linear patterns will be no more than n, and all these queen positions will be distinct.
• Output
• There is one line of output for each data set, containing only the number of collisions between the queens.
The sample input data set corresponds to the configuration in the Figures.
Take some care with your algorithm, or else your solution may take too long.
• Sample input
• ```7 2
4 1 1 1 2
3 5 2 1 2
5 1
5 5 1 -1 1
8 3
1 2 1 0 0
3 1 8 3 -1
3 4 8 2 -3
0
```
• Sample output
• ```0
4
5
```
• Hint
• `(none)`
• Source
• `mcpc 2010`
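The key observation is that any piece lying directly between two queens on a shared row, column, or diagonal must itself be on that line, so a maximal line holding k queens contributes exactly k - 1 collisions. The Python sketch below (my own illustration, not the judge's reference solution) expands the linear patterns and counts queens per line; since the total number of queens is bounded by n, this is fast.

```python
from collections import Counter

def expand(patterns):
    """patterns: iterable of (k, x, y, s, t); queens at (x + i*s, y + i*t)."""
    return [(x + i * s, y + i * t)
            for (k, x, y, s, t) in patterns
            for i in range(k)]

def count_collisions(queens):
    """A maximal row, column or diagonal holding k queens has exactly
    k - 1 colliding adjacent pairs, so counting queens per line suffices."""
    rows, cols, diags, antis = Counter(), Counter(), Counter(), Counter()
    for x, y in queens:
        rows[y] += 1          # same row
        cols[x] += 1          # same column
        diags[x - y] += 1     # same "\" diagonal
        antis[x + y] += 1     # same "/" diagonal
    return sum(c - 1
               for line in (rows, cols, diags, antis)
               for c in line.values()
               if c > 1)
```

Run on the three sample data sets, this returns 0, 4 and 5, matching the expected output.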
https://www.crazy-numbers.com/en/4122 | 1,544,822,795,000,000,000 | text/html | crawl-data/CC-MAIN-2018-51/segments/1544376826354.54/warc/CC-MAIN-20181214210553-20181214232553-00590.warc.gz | 856,234,369 | 4,284 | Discover a lot of information on the number 4122: properties, mathematical operations, how to write it, symbolism, numerology, representations and many other interesting things!
## Mathematical properties of 4122
Is 4122 a prime number? No
Is 4122 a perfect number? No
Number of divisors 12
List of dividers 1, 2, 3, 6, 9, 18, 229, 458, 687, 1374, 2061, 4122
Sum of divisors 8970
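These divisor facts are easy to recompute; the short Python sketch below is my own illustration (not part of the original page):

```python
def divisors(n):
    """All positive divisors of n, via trial division up to sqrt(n)."""
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    large = [n // d for d in reversed(small) if n // d not in small]
    return small + large
```

For 4122 this reproduces the table: 12 divisors, summing to 8970. Since divisors other than 1 and 4122 exist, the number is not prime, and because its proper divisors sum to 4848 rather than 4122, it is not perfect either.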
## How to write / spell 4122 in letters?
In letters, the number 4122 is written as: Four thousand one hundred and twenty-two. And in other languages? How is it spelled?
4122 in other languages
Write 4122 in english Four thousand one hundred and twenty-two
Write 4122 in french Quatre mille cent vingt-deux
Write 4122 in spanish Cuatro mil ciento veintidós
Write 4122 in portuguese Quatro mil cento vinte e dois
## Decomposition of the number 4122
The number 4122 is composed of:
1 iteration of the number 4 : The number 4 (four) is the symbol of the square. It represents structuring, organization, work and construction.... Find out more about the number 4
1 iteration of the number 1 : The number 1 (one) represents the uniqueness, the unique, a starting point, a beginning.... Find out more about the number 1
2 iterations of the number 2 : The number 2 (two) represents double, association, cooperation, union, complementarity. It is the symbol of duality.... Find out more about the number 2
Other ways to write 4122
In letters Four thousand one hundred and twenty-two
In roman numeral MMMMCXXII
In binary 1000000011010
In octal 10032
In US dollars USD 4,122.00 (\$)
In euros 4 122,00 EUR (€)
Some related numbers
Previous number 4121
Next number 4123
Next prime number 4127
## Mathematical operations
Operations and solutions
4122*2 = 8244 The double of 4122 is 8244
4122*3 = 12366 The triple of 4122 is 12366
4122/2 = 2061 The half of 4122 is 2061.000000
4122/3 = 1374 The third of 4122 is 1374.000000
41222 = 16990884 The square of 4122 is 16990884.000000
41223 = 70036423848 The cube of 4122 is 70036423848.000000
√4122 = 64.202803677098 The square root of 4122 is 64.202804
log(4122) = 8.3240937614504 The natural (Neperian) logarithm of 4122 is 8.324094
log10(4122) = 3.6151079874432 The decimal logarithm (base 10) of 4122 is 3.615108
sin(4122) = 0.22840444476825 The sine of 4122 is 0.228404
cos(4122) = 0.97356633549548 The cosine of 4122 is 0.973566
tan(4122) = 0.23460593946278 The tangent of 4122 is 0.234606 | 775 | 2,406 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.75 | 4 | CC-MAIN-2018-51 | longest | en | 0.757275 |
https://codeforwin.org/2016/04/c-program-to-find-all-roots-of-quadratic-equation-using-switch.html | 1,591,316,400,000,000,000 | text/html | crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00009.warc.gz | 296,018,146 | 24,556 | # C program to find all roots of a quadratic equation using switch case
Write a C program to find all roots of a Quadratic equation using switch case. How to find all roots of a quadratic equation using switch case in C programming. Logic to calculate roots of quadratic equation in C program.
Example
Input
```Input a: 4
Input b: -2
Input c: -10```
Output
```Root1: 1.85
Root2: -1.35```
## Required knowledge
In elementary algebra, a quadratic equation is an equation of the form ax^2 + bx + c = 0, where a ≠ 0.
A quadratic equation can have either one or two distinct real or complex roots, depending upon the nature of the discriminant of the equation, which is given by D = b^2 - 4ac.
Depending upon the nature of the discriminant, the formulas for finding the roots are:
• Case 1: If the discriminant is positive, there are two distinct real roots, given by x = (-b + √D) / (2a) and x = (-b - √D) / (2a).
• Case 2: If the discriminant is zero, there is exactly one real root, given by x = -b / (2a).
• Case 3: If the discriminant is negative, there are two distinct complex roots, given by x = -b / (2a) ± i √(-D) / (2a).
## Logic to find roots of quadratic equation using `switch...case`
Step by step descriptive logic to find roots of quadratic equation using switch case.
1. Input coefficients of quadratic equation. Store it in some variable say a, b and c.
2. Find discriminant of given equation using formula i.e. `discriminant = (b * b) - (4 * a * c)`.
You can also use pow() function to find square of b.
3. Compute the roots based on the nature of discriminant. Switch the value of `switch(discriminant > 0)`.
4. The expression `(discriminant > 0)` can have two possible cases i.e. `case 0` and `case 1`.
5. For `case 1` means discriminant is positive. Apply formula `root1 = (-b + sqrt(discriminant)) / (2*a);` to compute root1 and `root2 = (-b - sqrt(discriminant)) / (2*a);` to compute root2.
6. For `case 0` means discriminant is either negative or zero. There exist one more condition to check i.e. `switch(discriminant < 0)`.
7. Inside `case 0` switch the expression `switch(discriminant < 0)`.
8. For the above nested switch there are two possible cases, `case 1` and `case 0`. `case 1` means the discriminant is negative, whereas `case 0` means it is zero.
9. Apply the formula to compute roots for both the inner cases.
## Program to find roots of quadratic equation using `switch...case`
``````/**
* C program to find all roots of a quadratic equation using switch case
*/
#include <stdio.h>
#include <math.h> /* Used for sqrt() */
int main()
{
float a, b, c;
float root1, root2, imaginary;
float discriminant;
printf("Enter values of a, b, c of quadratic equation (aX^2 + bX + c): ");
scanf("%f%f%f", &a, &b, &c);
/* Calculate discriminant */
discriminant = (b * b) - (4 * a * c);
/* Compute roots of quadratic equation based on the nature of discriminant */
switch(discriminant > 0)
{
case 1:
/* If discriminant is positive */
root1 = (-b + sqrt(discriminant)) / (2 * a);
root2 = (-b - sqrt(discriminant)) / (2 * a);
printf("Two distinct and real roots exists: %.2f and %.2f",
root1, root2);
break;
case 0:
/* If discriminant is not positive */
switch(discriminant < 0)
{
case 1:
/* If discriminant is negative */
root1 = root2 = -b / (2 * a);
imaginary = sqrt(-discriminant) / (2 * a);
printf("Two distinct complex roots exists: %.2f + i%.2f and %.2f - i%.2f",
root1, imaginary, root2, imaginary);
break;
case 0:
/* If discriminant is zero */
root1 = root2 = -b / (2 * a);
printf("Two equal and real roots exists: %.2f and %.2f", root1, root2);
break;
}
}
return 0;
}``````
Output
```Enter values of a, b, c of quadratic equation (aX^2 + bX + c): 4 -2 -10
Two distinct and real roots exists: 1.85 and -1.35```
Happy coding 😉 | 1,017 | 3,695 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.578125 | 4 | CC-MAIN-2020-24 | longest | en | 0.751552 |
https://devhubby.com/thread/how-to-check-if-a-real-number-is-a-natural-number | 1,726,511,887,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00465.warc.gz | 186,741,146 | 26,456 | # How to check if a real number is a natural number in pascal?
Member
by roberto , in category: Other , 2 years ago
How to check if a real number is a natural number in pascal?
, a year ago
@roberto
In Pascal, you can check if a real number is a natural number by comparing it to its integer equivalent using the "trunc" function. The "trunc" function truncates a real number to its integer part. If the truncated value is equal to the original real number, then the real number is a natural number. Here is an example code snippet:
```pascal
var
  num: real;
begin
  // Input a real number
  write('Enter a real number: ');
  readln(num);

  // Check if the number is a natural number
  if num = trunc(num) then
    writeln('The number is a natural number.')
  else
    writeln('The number is not a natural number.');
end.
```
In this example, the "num" variable is a real number that is input by the user. The "trunc(num)" function is used to get the integer part of the "num" variable, which is then compared to "num". If they are equal, then "num" is a natural number, otherwise it is not. The program will output a message indicating whether "num" is a natural number or not.
, 9 months ago
@roberto
Here is a sample code for checking if a real number is a natural number in Pascal:
```pascal
program CheckNaturalNumber;

var
  number: real;

begin
  // Input the real number
  write('Enter a real number: ');
  readln(number);

  // Check if the number is a natural number
  if (number >= 0) and (number = trunc(number)) then
    writeln(number:0:2, ' is a natural number.')
  else
    writeln(number:0:2, ' is not a natural number.');
end.
```
In this code, we first prompt the user to input a real number. Then, we check if the number is greater than or equal to 0 and if it is equal to its truncated integer value. If both conditions are true, we output that the number is a natural number; otherwise, we output that it is not a natural number. The `number:0:2` syntax is used to format the output and display the number with 2 decimal places. | 555 | 2,096 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.34375 | 3 | CC-MAIN-2024-38 | latest | en | 0.755845 |
https://rimworldwiki.com/index.php?title=Market_Value&oldid=115825 | 1,679,446,887,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00433.warc.gz | 564,330,335 | 10,718 | # Market Value
(Graph: the item sell-price multiplier, in $, as a function of the item's hit points.)
Market Value is a Stat: The market value of an object. The actual trade price will be adjusted by negotiation skill, relationship status, and other contextual factors.
If not explicitly specified, an item's market value defaults to a formula-determined value:
M = (I + W × 0.0036) × Q × H
where
• I = combined market value of all ingredients
• W = work in ticks to make the item, where there are 60 ticks to a second. The game displays this in the item information window in seconds, which assumes 1x game speed and 100% work speed even when the base workspeed is lower than 100%. The displayed value also rounds the work value to the nearest second for example - an item that takes 35 ticks to make will be displayed in-game as taking 1 Work, despite taking only 0.583 seconds to construct at 100% speed. An approximation of the value for W can be found by multiplying the in-game Work displayed by 60, but the exact value is defined in the XML in ticks by the WorkToMake, or for buildings the WorkToBuild, stats.
• Q = quality multiplier. See quality for details.
• H = health multiplier. An item retains 100% of its value down to 90% HP, then loses 1.67% of its maximum value per 1% HP lost between 90% and 60% HP, 4% of its maximum value per 1% HP lost between 60% and 50% HP, and 0.2% of its maximum value per 1% HP lost between 50% and 0% HP. The item will have 50% of its maximum value at 60% HP and only 10% of its maximum value at 50% HP. The health multiplier only applies to certain items, as controlled by their XML definition. See Deterioration for details.
Items with a market value above 200 will have their market value rounded to be divisible by 5, with a remainder of 2.5 being rounded down. Prices over $10 will be rounded to the nearest whole number.
The market value of a Good-quality steel Large Sculpture demonstrates that (a) value rounding happens only after the quality multiplier is applied, and (b) a remainder of 2.5 is rounded down.
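Read literally, the formula and rounding rules above can be sketched in Python. This is my own illustration: the function names are mine, the health multiplier is interpolated linearly between the quoted breakpoints, and tie handling other than the stated 2.5-remainder case is an assumption.

```python
import math

def health_multiplier(p):
    """Value multiplier for remaining HP fraction p in [0, 1]."""
    if p >= 0.9:
        return 1.0
    if p >= 0.6:                           # lose 1.67% of max value per 1% HP
        return 1.0 - (0.9 - p) * (0.5 / 0.3)
    if p >= 0.5:                           # lose 4% of max value per 1% HP
        return 0.5 - (0.6 - p) * (0.4 / 0.1)
    return 0.1 - (0.5 - p) * (0.1 / 0.5)   # lose 0.2% of max value per 1% HP

def market_value(ingredient_value, work_ticks, quality=1.0, hp=1.0):
    """M = (I + W * 0.0036) * Q * H, then the two rounding rules."""
    m = (ingredient_value + work_ticks * 0.0036) * quality * health_multiplier(hp)
    if m > 200:
        # Snap to a multiple of 5; a remainder of exactly 2.5 rounds down.
        m = 5 * math.ceil(m / 5 - 0.5)
    elif m > 10:
        m = round(m)                       # nearest whole number
    return m
```

For example, a raw value of 207.5 snaps down to 205, while 208 snaps up to 210.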
## Version history
• 1.1.2618 - Prices now only display decimals when under \$10. | 545 | 2,189 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2023-14 | latest | en | 0.920547 |
https://www.eleccircuit.com/triangle-wave-generator-circuit-with-cmos-inverter-ic/ | 1,719,164,098,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198862488.55/warc/CC-MAIN-20240623162925-20240623192925-00615.warc.gz | 655,581,391 | 25,920 | # Triangle wave generator circuit with cmos inverter IC
Today we are going to try out a Simple Function Generator Circuit. It can create a Square or Triangle wave at the output with just a small number of electronic components. I made this a while back to learn about CMOS Gates and an Astable Multivibrator.
And it has been very useful. So now I will recommend it to my daughter so she can learn from it, as I did.
I like it, as it is the first triangle wave generator circuit that I’ve made. It can give out two ranges of signals: a Low (or 10Hz-100Hz) and a High (or 1kHz-10kHz) range. And the voltage supply range is very wide, from 3V to 15V. The main components are a CD40106 Schmitt Trigger CMOS, an NPN transistor, and an RC.
I could adapt this circuit for work such as testing amplifiers, DC-to-AC inverters, and some others. We can also modify its waveform into a sine wave with ease. It is one electronic circuit that is very interesting to try.
Any kind of learning should be practiced gradually. Similar to planting a tree, we should water it gradually and continuously, because this tree (my daughter) has a short memory and a poor brain, haha…
## The working of the circuit
We should start with the basic Oscillator using the Schmitt trigger. The IC1-CD40106 is a CMOS Schmitt-Trigger Hex Inverter type IC. It will output a square waveform, where the frequency is determined by the values of R1 and C1.
Or, to be exact, the frequency normally depends on the charge and discharge times of the capacitor. So if the capacitance or resistance increases, the output frequency decreases.
From my experience, R1 should be around 10K to 1M, and for C1, it should be around 100pF to 1μF.
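For the curious, the oscillation frequency of a Schmitt-trigger RC oscillator can be estimated from the inverter's upper and lower input thresholds. The Python sketch below is only a back-of-the-envelope model; the default threshold values are my assumptions for a CD40106 on a 9V supply (real thresholds vary widely between devices and supply voltages), so treat the result as a rough figure.

```python
import math

def schmitt_rc_frequency(r_ohms, c_farads, vdd=9.0, v_pos=5.9, v_neg=3.2):
    """Rough frequency of a Schmitt-inverter RC oscillator.

    The capacitor charges through R from the lower threshold v_neg up to
    the upper threshold v_pos, then discharges back down, so each
    half-period is one segment of an RC exponential.
    """
    rc = r_ohms * c_farads
    t_charge = rc * math.log((vdd - v_neg) / (vdd - v_pos))
    t_discharge = rc * math.log(v_pos / v_neg)
    return 1.0 / (t_charge + t_discharge)
```

With R1 = 10K and C1 = 0.47µF this lands in the low audio range; increasing R (turning VR1) or C lowers the frequency, and swapping C1 for the 100-times-smaller C2 raises it by the same factor, which is the idea behind the circuit's two ranges.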
We added other components until we got the circuit shown below.
### Meet Triangle wave generator circuit
The square waveform signal at output is determined by R1, VR1, C1, and C2, similar to the circuit above.
We add switch S1 for choosing between the “high” and “low” frequency ranges. When S1 is set to “low,” capacitor C1 is connected in parallel with capacitor C2. The total capacitance increases, causing the output frequency to fall into the “Low” range, and we can rotate VR1 to set the frequency from 10Hz to 100Hz.
But when S1 is set to “high,” only capacitor C2 remains connected at the input (pin 1) of IC1. This causes the output frequency to fall into the “High” range, and we can rotate VR1 to set the frequency from 1kHz to 10kHz as well.
The triangle waveform is derived from the charging and discharging of the capacitors (C1, C2) at pin 1 of IC1. To avoid loading those capacitors, there should be a simple buffer, which in this case is transistor Q1 in its common-collector (emitter-follower) configuration.
The output is the same voltage as the drop across R2. The lowest point in the amplitude of the waveform will be higher than the zero voltage of the power supply.
Adjusting VR1 will change both the frequency of the square waveform and triangle waveform by the same amount.
The components list
• IC1: CD40106, CMOS Hex Schmitt-Trigger Inverters
• Q1: BC549, or equivalent, 45V 0.1A, TO-92 NPN Transistor
• R1: 10K, 0.25W Resistors, tolerance: 5%
• R2: 4.7K, 0.25W Resistors, tolerance: 5%
• VR1: 100K Linear Potentiometer
• C1: 0.47µF 50V, Ceramic Capacitor
• C2: 0.0047µF 50V, Ceramic Capacitor
• S1: SPDT Mini Micro Slide Switch
## Building and testing
This circuit has a few devices. So we can assemble it on the small breadboard.
After checking the circuit over and finding it good, we feed it from a 9V battery. This produces a square signal with an amplitude of about 6Vp-p.
The triangular signal has an amplitude of approximately 1Vp-p in every frequency range the circuit is capable of.
## Conclusion
This circuit has the disadvantage of low output current, but you can mitigate this by adding an amplifier while still keeping the same signal shape and frequency. At our next opportunity, we will try to build this amplifier circuit.
It is also a good idea to add short-circuit protection at the output.
The triangular wave signals can also be turned into sine wave signals. In the future, it might be worth revisiting and modifying this circuit again.
Additionally, you may use other transistor families such as 2SC1815, 2N3904, S9013, and others. But their pins will not match; therefore, you should be especially careful when using them.
For example, the Function Generator
All full-size images and PDFs of this post are in this Ebook below. Please support me. 🙂
### 5 thoughts on “Triangle wave generator circuit with cmos inverter IC”
1. hello all
can you just put the formula of the frequency?
i mean how it can be calculated
thank you
2. Nice circuit. Small & cheap!
• Hello Prajit,
Thanks for your feedback. I am happy you like it.
I WILL LOOK IT THROUGH - I AM NOT A BEGINNER.
KIND REGARDS, WILLY
Results for Statistics (Third Grade): 1100 results
## Water Bottle Flip STEM Challenge
The ORIGINAL water bottle flip stem challenge and lab! Water bottle flip at school? Absolutely! Students will love this on-trend STEM challenge inspired by the popular YouTube water bottle flipping challenge where students toss a water bottle and attempt to land it straight up. Practice scientific
## Probability Activities - Hands on Probability with Dice, Spinners, and Coins
This resource can help make teaching probability easier and makes a great probability introduction. Students learn and practice probability vocabulary (certain, impossible, likely, unlikely, and equally likely). Then, students apply the probability vocabulary in a hands on way using dice, coins, a
## Probability Activities MEGA Pack of Math Worksheets and Probability Games
Probability Activities MEGA Pack of Math Worksheets and Probability Games - Over 50 pages of math stations, worksheets, activities, and visuals to teach young learners about probability in fun, hands-on ways! This packet includes:
- 10 probability vocabulary classroom posters
- 12 various probabili
## Graphing Posters and Interactive Notebook INB Set
This poster and interactive notebook (INB) set includes posters covering bar graphs, double line graph, pictographs, picture graph, line graphs, double line graph, circle graphs, frequency tables, sector graph, stem and leaf plots, histograms, dot plots, and box and whisker plots. This set also incl
Probability Task Cards help students practice probability in a fun way. Use these task cards for various activities or math centers to teach/review probability skills such as compound events, likelihood of an event, combinations., and MORE! Use for Scoot, small groups, centers, or whatever else yo
Also included in: Math Task Card Bundle
## Probability and Chance Pack
Your students will adore learning about chance with these fun and engaging activities and worksheets! Included in this pack:
* Probability sorting centre (includes labels and definitions)
* Probability paddles (a fun and engaging way for students to answer verbal questions!)
* Probability flash
## Dice Math Games
Are you ready for 15 Dice Math Games that your students will LOVE? Use these games as a math lesson, or as a warm up to ensure your students are practicing basic maths skills all while having FUN. Most games can be played over and over again! Suitable for Grades K-3. All resources are required, simp
## Roll the Dice Frequency Table and Dot Plot
Give your students practice using Dot Plots and Frequency Tables by using this activity in your classroom. Collect the information in groups or as a class and fill in the frequency table and dot plot. This activity covers the NEW 2014-2015 Math TEK 4.9(A) summarize a data set with multiple categorie
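The tallying described above is easy to simulate. A Python sketch (the 30 rolls and the text dot plot are illustrative choices, not part of the resource):

```python
import random
from collections import Counter

def dice_frequency_table(rolls=30, sides=6, seed=1):
    """Roll a die `rolls` times and tally the results into a frequency table."""
    rng = random.Random(seed)
    counts = Counter(rng.randint(1, sides) for _ in range(rolls))
    # Include every face, even ones that never came up.
    return {face: counts.get(face, 0) for face in range(1, sides + 1)}

def dot_plot(freq):
    """Render the frequency table as a simple text dot plot."""
    return "\n".join(f"{face}: {'x' * count}" for face, count in freq.items())

freq = dice_frequency_table()
print(dot_plot(freq))
```

Students would collect the counts by hand, but the structure of the table and plot is the same.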
## Blow Cup Challenge STEM Project
The Blow Cup Challenge is now a STEM Challenge with this engaging science project inspired by the popular YouTube Blow Cup Challenge where students try to blow one cup into another cup. This self-paced, multi-part Blow Cup STEM challenge is print-and-go. Each activity sheet guides students through
## Probability Centers
Probability Centers for 3rd - 6th grades! 5 Probability Center Activities & Recording Sheets! * Differentiated learning tasks included Center 1 Probability in Disguise: Students use a list of costume pieces to create a tree diagram to determine the number of possible costume combinations. Th
## Probability Task Cards - Can Use for Probability Scoot
These 16 probability task cards require students to consider whether an event is possible or not, and then defend their answer on the recording sheet. These task cards cover a variety of probability concepts, including probability with dice, coins, and spinners. These task cards are included in my
## Probability Fun with Yummy Goodies During Distance Learning
This packet includes a lesson plan that walks your students through a probability lesson using your favorite goodies. I used fruit snacks. Children get experience making predictions, creating graphs, utilizing tally charts and spinners, as well as answering deep, open ended questions that really get
## Probability Task Cards and Poster Set - Probability Activities
Probability can easily be practiced, reinforced, and mastered with this set of easy-to-use posters and practice task cards! Your students will love the variety of questions on the task cards and you will be happy with the ease of practice of probability concepts! Click here and SAVE by buying the
Also included in: Math Task Cards Bundle
## GRAPHS BUNDLE: BAR GRAPHS: PICTOGRAPHS: LINE GRAPHS: LINE PLOTS: PIE CHARTS
Your entire graphing unit for a year! Separate folders featuring both British English and American spellings and terminology included. Great quality and value for this whopping 575+ page unit on graphing! The unit features inter
## How Much is Your Name Worth? Find the Mean, Median Mode of Your Name!
Looking for an engaging way to teach and review mean, median, mode and range? This activity is not only fun and challenging BUT it is PERFECT for bulletin boards. Included in this set:
- Slides 3-6 are a review or teaching tool to reinforce how to solve for each: Mean, Median, Mode and Range
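As an illustration of the kind of computation this activity involves, here is a Python sketch that scores letters A=1 through Z=26 (an assumed convention; the actual resource may assign values differently) and reports the four statistics:

```python
from statistics import mean, median, mode

def letter_values(name):
    """Score each letter A=1 ... Z=26, ignoring non-letters."""
    return [ord(c) - ord("a") + 1 for c in name.lower() if c.isalpha()]

def name_stats(name):
    """Mean, median, mode, and range of a name's letter values."""
    vals = letter_values(name)
    return {
        "values": vals,
        "mean": mean(vals),
        "median": median(vals),
        "mode": mode(vals),
        "range": max(vals) - min(vals),
    }

print(name_stats("Anna"))
```

For "Anna" the values are [1, 14, 14, 1], so the mean and median are both 7.5 and the range is 13.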
## Probability Activities & Posters
It would be IMPOSSIBLE for you not to love these probability posters and activities! I am CERTAIN that these activities will be a fun addition to your classroom and provide a fun way to teach your students about probability. This resource covers the terms Certain, Likely, Even Chance/Equally Likely
Line Plot Task Cards and Record Sheets (CCSS 4.MD.4, 5.MD.2). Included in this product:
* 20 unique task cards dealing with reading and making line plots
* 4 different recording sheets
* Answer key
Check out additional statistics products HERE.
Also included in: Statistics Task Cards
## Skittles Line Plot Activity
Suggestions for this lesson:
- Use this as an assessment piece based on the work you have already done in working on understanding line plots in your classroom
- Use this as a reward activity that is still curriculum based
- Use this as a review for the end of a unit
- Use this as an introduction to l
## Probability Tree Diagram Worksheets
Use these tree diagram worksheets to help students learn all the outcomes (combinations) for a probability unit. What's included:
* Pizza toppings probability tree diagram
* Donut probability tree diagram
* Cupcake probability tree diagram
* Ice cream probability tree diagram
* Fidget spinner probability tree di
Also included in: Probability Activity BUNDLE
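A tree diagram enumerates every path through a set of choices, which is exactly what itertools.product does. A Python sketch with hypothetical pizza options (not taken from the worksheet):

```python
from itertools import product

# Hypothetical options; a tree diagram lists every path through them.
crusts = ["thin", "thick"]
sauces = ["tomato", "pesto"]
toppings = ["cheese", "pepperoni", "mushroom"]

# Each outcome is one root-to-leaf path of the tree diagram.
outcomes = list(product(crusts, sauces, toppings))
for crust, sauce, topping in outcomes:
    print(crust, sauce, topping)

# The multiplication principle predicts the count the tree shows: 2 * 2 * 3 = 12.
print(len(outcomes))
```

Counting leaves of the tree and multiplying the branch counts give the same answer, which is the point of the exercise.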
## Google Classroom Distance Learning Probability
These digital activities will give students practice with probability, including outcomes and likelihood. This digital resource uses Google Slides™ and can be used on Google Classroom and Google Drive. This resource also includes an answer key.This product includes:Drag-&-drop: vocabulary &
## Mean Median Mode Range Pack (Math Centers, Flashcards, Anchor Charts)
This pack has everything you need to supplement your Mean, Median, Mode, and Range instruction! It includes 4 colorful anchor charts (1 for each concept) for you to display around the room, 4 two-page math centers that only require dice or a deck of cards (1 for each concept), a fun set of flashcard
## Types of Graphs Foldable-FREEBIE
Types of graphs foldable cards. Two per page. Great to use with a shutterfold. Graphs included: Stem and Leaf, Line Plot, Line Graph, Bar Graph, Frequency Table, Circle Graph. Used for note-taking purposes.
## Pie Graph - Color, Tally and Graph (First Pie Charts) For Grade Two / Grade One
Pie Graph - Color, Tally and Graph. This is a first pie graph worksheet set - most suited to lower grades (kindergarten, grade one, grade two). There are 9 worksheets in this set. Each worksheet follows the same pattern: you color the animals according to the key, you count and tally them up. Then on the
## Line Plots Smartboard Lesson and Student Booklet
Line plots can be a difficult concept for students. This Smartboard lesson will provide you and your students with several activities to practice creating line plots and answering questions about them. There is also a student booklet to keep students engaged during the lesson when one student at a
# Part 2 of My Non Sinking (or How I Ran My First Predictive Model)
Technology | Opinion | Machine Learning | May 21, 2014
Running predictive models is pretty easy in the Data Science Studio. In a few clicks, you have the ability to predict a variable from the data you have. Once you have tried, you will never see your data the same way. You will want to try with anything. Find out how I used the studio to predict survival from the sinking of the Titanic.
This blog post is the second one of a series. In the previous post, I imported the train dataset containing information about 891 passengers of the Titanic. The idea is to find out what sorts of people were more likely to survive the shipwreck.
The exploration of the dataset with the Studio already gave us a good overview:
• 342 out of the 891 people in our dataset survived.
• Being a child or a woman was a clear advantage for survival.
• Being in an upper class was also an advantage.
Find below a screenshot of the dataset we are going to use for the predictions. Remember that we have generated a new column with the title of each passenger.
### What's a predictive model?
The next step is definitely the coolest part of our work. The goal is to predict the Survived variable (1=survived, 0=deceased) of a passenger. We want to find a model that gives the best predictions.
A variety of models can be used; they rely on techniques from statistics and machine learning. Basically, a model uses all the information available about the passengers to learn the characteristics of a survivor. It then outputs a decision (survive or not) for other passengers (which we will have kept aside for testing the model).
To select the best model, we score each one. Two famous metrics are:
• Accuracy: proportion of correct predictions (total number of "good" predictions divided by the total number of predictions)
• AUC: the Area Under the ROC Curve (from 0.5 for a random model to 1 for a perfect model)
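Both metrics are simple to compute by hand. A minimal pure-Python sketch (in practice scikit-learn's accuracy_score and roc_auc_score do this), using made-up labels and scores:

```python
def accuracy(y_true, y_pred):
    """Proportion of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    random positive gets a higher score than a random negative, ties counting half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

y_true = [0, 1, 1, 0, 1]          # made-up survival labels
y_pred = [0, 1, 0, 0, 1]          # made-up hard predictions
scores = [0.2, 0.9, 0.4, 0.3, 0.8]  # made-up survival probabilities

print(accuracy(y_true, y_pred))   # 0.8
print(auc(y_true, scores))        # 1.0: every survivor outscored every non-survivor
```

A model that scores at random lands near AUC 0.5; a perfect ranking reaches 1, matching the range given above.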
Before going further, let's illustrate with a basic case what a model can be. (If you already know what a predictive algorithm is, just skip this section.)
If we keep only the Survived, Age and Sex variables of our passengers and use our own common sense (based on what we saw previously), we could build a simple decision tree to predict whether a passenger survives:
If we apply this algorithm on a sample of our dataset, we would get something like this:

| Age | Sex    | Survived | Prediction | Result           |
|-----|--------|----------|------------|------------------|
| 40  | male   | 0        | 0          | good prediction! |
| 18  | female | 0        | 1          | bad prediction   |
| 24  | female | 1        | 1          | good prediction! |
| 7   | female | 1        | 1          | good prediction! |
| 47  | male   | 1        | 0          | bad prediction   |

Accuracy on this sample = 3/5 = 0.6
It is a really basic and naive predictive model, and we can expect a poor accuracy score if we run it on the full dataset. Plus, remember that the Age variable is missing for about 20% of the passengers. Because our algorithm doesn't handle this case, we already know the accuracy cannot be better than 80%.
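The original decision tree appeared as an image, so its exact splits aren't visible here; one simple rule consistent with the sample predictions above (females survive, males don't) can be coded and scored in Python:

```python
def predict(age, sex):
    """One plausible rule matching the sample predictions above: females are
    predicted to survive, males are not. (The article's actual tree may also
    branch on age.)"""
    return 1 if sex == "female" else 0

sample = [  # (age, sex, survived), the five rows from the table above
    (40, "male", 0),
    (18, "female", 0),
    (24, "female", 1),
    (7, "female", 1),
    (47, "male", 1),
]

correct = sum(predict(age, sex) == survived for age, sex, survived in sample)
print(correct / len(sample))  # 0.6, matching the accuracy computed above
```

Three of the five rows are predicted correctly, reproducing the 3/5 = 0.6 figure.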
The Kaggle website actually published a tutorial to build a similar algorithm with Excel. Even though that is really not a great way to run predictions in practice, it is good for learning.
### Running the first prediction
The Data Science Studio provides four predictive models for running a classification of this kind within our intuitive graphical interface: Logistic Regression, Random Forest, Support Vector Machine and Stochastic Gradient Descent. The studio will automatically choose the best parameters for you, which is great when you are a beginner like me.
Let's run our first predictions with the default settings and then explore what we get.
By default, three models were run with adapted settings: two with the Random Forest algorithm (with different parameters) and one with Logistic Regression. Let's explore the results of the second Random Forest model.
We get an accuracy score of 0.8324. It means that our model correctly predicts survival for about 83% of passengers (this was calculated on a held-out sample; more about that later in this post). It is quite a good result, isn't it?!
The studio automatically pre-processed the data before modelling:
• Some variables with no sense for the prediction (such as the Name) were automatically thrown aside.
• It created dummy variables to measure the effect of each value of some variables (Sex and Title for instance).
• It imputed missing values for the numerical variables (Age).
The variable importance chart highlights the variables the algorithm relied on most when deciding whether each passenger survives. In our case, the most important elements are: male, Mr., passenger class, fare and Miss. This seems to reinforce our first analysis from the previous article.
The confusion matrix goes further in the understanding of the predictions. It compares the repartition between the actual values of survival with the predicted values.
### A bit of personalization
In order to get a better model, I decide to make some changes to the default parameters chosen by the studio. I drop the PassengerId column (not meaningful at all for the prediction) and dummify the passenger class variable to count the separate impact of each value.
One last thing you need to know: only 80% of the passengers in our dataset were used to train the model, and 20% were kept to test it. This technique is known as cross-validation: we score the model on a different sample than the one used for training. It is good practice for detecting overfitting (one of the worst nightmares of a data scientist, I was told).
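The holdout idea can be sketched in a few lines of Python (scikit-learn's train_test_split does the same job, with stratification options; the studio handles it internally):

```python
import random

def train_test_split(rows, test_ratio=0.2, seed=42):
    """Hold out a test set so the model is scored on data it never saw."""
    rng = random.Random(seed)
    shuffled = rows[:]          # copy so the original order is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

passengers = list(range(891))   # stand-ins for the 891 training rows
train, test = train_test_split(passengers)
print(len(train), len(test))    # 712 179
```

With 891 rows and an 80/20 split, 712 passengers train the model and 179 score it.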
When you are done with your model training, you should use the full dataset to obtain the best model. Let's do that.
### First submission on Kaggle
As said in the previous post, the Titanic problem is part of a competition on Kaggle. Until now, we used a dataset of 891 passengers for whom we know if they survived or not. Kaggle provides another dataset of 418 other passengers without revealing if they survived or not. The challenge is to run our model on this dataset, to export the result in a csv file and then to upload it on the website to get a score.
Let's download the test.csv file and upload it to the studio. In a few clicks, we apply the same modifications as before. Once done, we apply our model to the new dataset. A new column appears in the dataset: the predicted values.
For these 418 passengers, we now have predictions about the survival. This is the moment you really feel the power of Data Science :-)
The last step is to export the results as a csv file and to upload it on Kaggle to get a score.
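The submission file for this competition is a two-column CSV (PassengerId, Survived). A Python sketch with placeholder predictions (the pid % 2 values are obviously not real predictions; the test passengers' ids 892-1309 follow the train set's 1-891):

```python
import csv
import io

# Hypothetical predictions for the 418 test passengers.
predictions = {pid: pid % 2 for pid in range(892, 1310)}  # placeholder values

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["PassengerId", "Survived"])  # the two columns Kaggle expects
for pid, survived in sorted(predictions.items()):
    writer.writerow([pid, survived])

submission = buf.getvalue()
print(submission.splitlines()[0])    # PassengerId,Survived
print(len(submission.splitlines()))  # 419: one header row + 418 passengers
```

Writing to a real file instead of a StringIO buffer gives the submission.csv to upload.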
And the result is... 78% of our predictions were correct on this dataset. That is quite good!
Making our first predictions did not look too complicated, right? Leave a comment if something is unclear to you.
We could now work to get a better predictive model but that goes further than the topic of this introduction.
Jeremy, a marketing guy learning Data Science.
# 319248
319248 = 2^4 · 3^3 · 739
Base representation:
- base 2 (bin): 1001101111100010000
- base 3: 121012221000
- base 4: 1031330100
- base 5: 40203443
- base 6: 10502000
- base 7: 2466516
- base 8 (oct): 1157420
- base 9: 535830
- base 10: 319248
- base 11: 1a8946
- base 12: 134900
- base 13: b2407
- base 14: 844b6
- base 15: 648d3
- base 16 (hex): 4df10
319248 has 40 divisors (see below), whose sum is σ = 917600. Its totient is φ = 106272.
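The divisor facts above are easy to verify. A short Python sketch using trial division (fine at this size), which also confirms the integer mean of divisors (22940) behind the arithmetic-number property:

```python
def divisors(n):
    """All divisors of n, found by trial division up to sqrt(n)."""
    small = [d for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

def totient(n):
    """Euler's phi via the prime factorization of n."""
    phi, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            phi *= pk - pk // p   # p^k - p^(k-1)
        p += 1
    if m > 1:
        phi *= m - 1
    return phi

n = 319248
divs = divisors(n)
print(len(divs))            # 40 divisors
print(sum(divs))            # sigma = 917600
print(sum(divs) // len(divs))  # mean of divisors = 22940 (an integer)
print(totient(n))           # phi = 106272
```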
The previous prime is 319237. The next prime is 319259. The reversal of 319248 is 842913.
It can be divided in two parts, 3192 and 48, that added together give a triangular number (3240 = T80).
319248 is digitally balanced in base 3, because in that base it contains all the possible digits an equal number of times.
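The base representations and the base-3 digit balance can be reproduced with a small conversion routine; a Python sketch:

```python
def to_base(n, b):
    """Represent n in base b (2 <= b <= 16) as a string."""
    digits = "0123456789abcdef"
    out = ""
    while n:
        out = digits[n % b] + out
        n //= b
    return out or "0"

n = 319248
print(to_base(n, 2))    # 1001101111100010000
print(to_base(n, 16))   # 4df10

base3 = to_base(n, 3)
print(base3)            # 121012221000
# Digitally balanced in base 3: each digit 0, 1, 2 appears equally often.
print([base3.count(d) for d in "012"])  # [4, 4, 4]
```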
It is an interprime number because it is at equal distance from previous prime (319237) and next prime (319259).
It is a Harshad number since it is a multiple of its sum of digits (27).
It is a nude number because it is divisible by every one of its digits.
It is one of the 548 Lynch-Bell numbers.
It is an unprimeable number.
It is a polite number, since it can be written in 7 ways as a sum of consecutive naturals, for example, 63 + ... + 801.
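The seven consecutive-sum (polite) representations can be enumerated directly. A Python sketch: a run of k consecutive naturals starting at a sums to k·a + k(k-1)/2, so we solve for a at each run length k.

```python
def consecutive_runs(n):
    """All ways to write n as a sum of two or more consecutive naturals."""
    runs = []
    k = 2                              # run length
    while k * (k + 1) // 2 <= n:       # smallest possible sum of length k
        top = n - k * (k - 1) // 2
        if top % k == 0:
            a = top // k               # run is a, a+1, ..., a+k-1
            runs.append((a, a + k - 1))
        k += 1
    return runs

runs = consecutive_runs(319248)
print(len(runs))          # 7 ways, as stated
print((63, 801) in runs)  # True: the 63 + ... + 801 example above
```

The count also matches the classic result that the number of such representations is one less than the number of odd divisors (319248 has 8 odd divisors).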
It is an arithmetic number, because the mean of its divisors is an integer number (22940).
2^319248 is an apocalyptic number.
It is an amenable number.
It is a practical number, because each smaller number is the sum of distinct divisors of 319248, and also a Zumkeller number, because its divisors can be partitioned in two sets with the same sum (458800).
319248 is an abundant number, since it is smaller than the sum of its proper divisors (598352).
It is a pseudoperfect number, because it is the sum of a subset of its proper divisors.
319248 is a wasteful number, since it uses fewer digits than its factorization.
319248 is an odious number, because the sum of its binary digits is odd.
The sum of its prime factors is 756 (or 744 counting only the distinct ones).
The product of its digits is 1728, while the sum is 27.
The square root of 319248 is about 565.0203536157. The cubic root of 319248 is about 68.3454165954.
The spelling of 319248 in words is "three hundred nineteen thousand, two hundred forty-eight".
# Seeking a closed form for a posterior distribution
In the book Bayesian Data Analysis by Gelman et al. (3rd edition, 2014), a hierarchical model (or one-way random-effects ANOVA) is presented in section 5.4 as follows,
$$\label{eq:lme1} y_{ij} = b_0 + \lambda_i + \varepsilon_{ij},$$
where the data $y_{ij}$ come from the $i$th measuring entity (e.g., student performance in a school district) collected under the $j$th condition (e.g., a school within the district), $b_0$ is the population mean, $\lambda_i$ is the deviation of the $i$th measuring entity from the population mean, and $\varepsilon_{ij}$ is the measuring error ($i=1, 2, \ldots, k;\ j=1, 2, \ldots, n$).
A posterior inference is derived in the book for the effect of each measuring entity $\theta_i=b_0 + \lambda_i$ based on a Gaussian assumption with a known variance $\sigma^2$ for the residuals $\varepsilon_{ij}$ and a prior distribution $G(0, \tau^2)$ for $\lambda_i$. Specifically, the mean and variance for $\theta_i$ are estimated as below:
\begin{align} {\rm mean}(\theta_i) &= \frac{\frac{n}{\sigma^2}\bar{y}_{i\cdot}+\frac{1}{\tau^2}b_0}{\frac{n}{\sigma^2}+\frac{1}{\tau^2}} \\[7pt] {\rm Var}(\theta_i) &= \frac{1}{\frac{n}{\sigma^2}+\frac{1}{\tau^2}} \end{align}
where $\bar{y}_{i\cdot}=\frac{1}{n}\sum_{j=1}^n y_{ij}$.
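The precision-weighted formulas above are easy to sanity-check numerically. A Python sketch with made-up numbers (the values of $b_0$, $\tau^2$, $\sigma^2$, $n$ and $\bar{y}_{i\cdot}$ are illustrative, not from the book):

```python
# Prior: theta_i ~ N(b0, tau2); data: n observations with known variance sigma2.
b0, tau2 = 50.0, 100.0      # prior mean and variance (illustrative)
sigma2, n = 25.0, 10        # observation variance and count (illustrative)
ybar = 60.0                 # observed group mean (illustrative)

prior_precision = 1 / tau2
data_precision = n / sigma2

# The closed-form posterior: precisions add; means are precision-weighted.
post_var = 1 / (prior_precision + data_precision)
post_mean = (data_precision * ybar + prior_precision * b0) * post_var

print(post_mean, post_var)  # about 59.76 and 2.44: the data dominate here
```

Because n/sigma2 = 0.4 is much larger than 1/tau2 = 0.01, the posterior mean sits close to the data mean, illustrating the shrinkage behavior discussed in the answer below.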
Even though the variance for the $\lambda_i$ is assumed to be known, I could solve the model as a mixed-effects model through, for example, function lmer() in the R package lme4, and use the estimated variances $\tau^2$ and $\sigma^2$ to obtain the posterior distribution using the formulation above. Is this a reasonable and solid approach?
I know that I could directly obtain the posterior distribution through R packages such as brms and rstanarm. However, the computational cost is too heavy in my case, and that's why I'm trying to see if the above closed form is a reasonable approach to directly obtaining the posterior distribution by plugging in the variance estimate $\hat{\sigma}^2$ from lmer, rather than going through the typical Bayesian route.
• You might be better to split this up and ask question 1 first and then when you have a satisfactory answer ask 2 with a link back to 1, and so on. Commented Jul 12, 2017 at 10:30
• Having been edited to constrain the question, I think this is no longer too broad. I'm voting to leave open. Commented Jul 12, 2017 at 15:12
• should it be $j$ for measuring identity, $i$ for measuring condition (with $\lambda_i$ for the condition instead of identity)? Currently it is $i$-th identity and $k$-th condition. With $\lambda$ for identity. Another point... the 'estimates' means maximum posteriori probability? Commented Jul 18, 2017 at 20:34
• @MartijnWeterings Thanks. $k$ should be $j$, and think of $i$ and $j$ as indices for school districts and schools. What "estimates" are you referring to: mean and variance of $\theta_i$? Commented Jul 18, 2017 at 21:37
The general concept here is updating a conjugate prior.
I'm trying to see if the above closed form is a feasible solution
I'm not sure what you mean by feasible. For the Gaussian distribution $N\left(\mu, \sigma^2\right)$, any specification of mean $\mu$ and variance $\sigma^2$ - with mean parameter unrestricted and variance parameter strictly positive $\sigma^2 > 0$ - defines a unique and valid Gaussian distribution.
I will assume you are asking if the proposed closed form solution is correct.
The closed form solution you provide is the correct posterior distribution given a Gaussian prior distribution $b_0+\lambda_i = \theta_i \sim N\left(b_0, \tau_0^2\right)$ and observed data with known observation variance $\sigma^2$.
Precision $K$ is the inverse of variance $\sigma^2$: $K = \sigma^{-2}$.
Considering the mean of the posterior distribution for $\theta_i$, the numerator in the formula $mean(\theta_i)$ you provide is the precision weighted combination of the prior distribution's mean $b_0$ and the observed data's mean $\bar{y_i}$. The denominator of the posterior mean formula simply scales the sum of these weights to one.
Considering the variance of the posterior distribution, the operation reflected in the formula $Var(\theta_i)$ you provide is the summing of the precision of the prior distribution and the precision of the observations, then inverting to obtain the variance.
In summary, the formulas you provide for posterior mean and posterior variance are correct given the prior $\theta_i \sim N(b_0,\tau^2)$, and given $n$ observations with known observation variance $\sigma^2$.
If your concern has to do with specifying the prior distribution $N(b_0,\tau^2)$ parameters, note the influence of the prior mean $b_0$ in your formula for the posterior mean $mean(\theta_i)$ diminishes as the number of observations increases, and diminishes as the prior variance increases. Therefore, don't worry too much about the prior mean, just increase the prior variance as seems appropriate; and, increase the number of observations!
• Thanks for the answer! Sorry that my original question was unclear. I was not questioning whether the close form of the posterior distribution is correct given the known variance $\sigma^2$ and number of observations, $n$. Instead, I want to know whether using estimated $\hat{\sigma}^2$ from solving LME through function lmer in the R package lme4 is a reasonable approach to directly obtaining the posterior distribution, rather than going through the typical Bayesian route. Could you comment about this? Commented Jul 23, 2017 at 13:41
• Ok, life just got more complicated :)You should look at Peter Hoff's notes, particularly Ch. 7 & 8. Commented Jul 23, 2017 at 14:12
• Apparently I'm not allowed to edit after 5 minutes. The concept is now updating a semi-conjugate prior (unknown mean and unknown variance). The correct chapters in Hoff's notes are Ch. 6 & 7; or better yet, look at his book, Hoff, Peter D. A first course in Bayesian statistical methods. Springer Science & Business Media, 2009. Hoff provides simple R code in both his notes and book. So, if you want to perform a Bayesian analysis, I would study Hoff's Ch. 6 & 7 in his notes, or similar chapters in his book (don't have the book handy today); and not use lmer Commented Jul 23, 2017 at 14:21
• Why is using lmer to directly estimate $\hat{\sigma}^2$ problematic for the posterior distribution? Andrew Gelman seems to hint that this would be fine in this paper: stat.columbia.edu/~gelman/research/published/multiple2f.pdf Commented Jul 24, 2017 at 15:46
# Granny-Guru: Million, Billions, & Trillions
By Krista Low @kristascookin
## Millions, Billions & Trillions
When former math teacher David Adler started writing math storybooks for children, he set out to explain big concepts: shapes, word problems, fractions, algebra and Roman numerals.
In his brand-new 2013 book, "Millions, Billions & Trillions," he tackles big numbers. Parents and teachers may care that it "meets the Common Core State Standards for fourth-grade mathematics in Number and Operations in Base Ten." Grandparents will delight in its whimsical illustrations and real-world representation of how much a million, billion, or trillion means, beyond how many zeros there are in the number. He looks at big numbers by comparing them to things around us:
• How many pizzas can you buy with one million dollars?
• How many people would it take to have one billion strands of hair?
• How high is a stack of one trillion dollar bills?

He looks at big numbers by comparing them to time: How long does it take to count to one million? To one billion?

He looks at big numbers by comparing them to each other: How many millions of people are in New York? In California? In the U.S.?

For your grandchildren to get a feel for big numbers before they learn the abstraction in school, introduce them gently, with fun and imagination, through Adler's "Millions, Billions & Trillions: Understanding Big Numbers." You can order Adler's book from Amazon by clicking on the title above or the book cover below.
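The counting-to-a-million question has a tidy back-of-the-envelope answer. A Python sketch, assuming one number is counted per second around the clock (a simplification; big numbers take longer to say aloud):

```python
SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

# One number per second, non-stop:
print(1_000_000 / SECONDS_PER_DAY)        # about 11.6 days to reach a million
print(1_000_000_000 / SECONDS_PER_YEAR)   # about 31.7 years to reach a billion
```

The jump from days to decades is exactly the intuition about scale the book is after.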
Carol Covin, Granny-Guru
Author, “Who Gets to Name Grandma? The Wisdom of Mothers and Grandmothers" http://newgrandmas.com
# Inverting Op-Amps (Example #2)
### For the following circuit, Calculate Vo if Vs=2V:
This problem is a little trickier, with multiple power supplies and an arrangement of resistors that doesn't allow us to readily apply our derived formula for the output voltage of an inverting op-amp. In fact, from what we have learned so far, it's not easy to tell if this is an inverting op-amp at all. Nevertheless, we can still use our 2 rules for ideal op-amps as well as Kirchhoff's Current Law (KCL) to solve the problem.
Start by realizing that if Vs=2 V, then V2=2 V. By the first rule of ideal op-amps: $$V_1=V_2$$ ...which means that: $$V_1 = 2\,V = \text{the voltage at node } b$$
Since it appears that we have two unknowns (Va and Vo) in our circuit, we will attempt to get two separate equations that we can solve. Start by applying KCL at node a: $$\frac{9-V_a}{4000} = \frac{V_a-V_b}{4000} + \frac{V_a-V_o}{8000}$$ $$9-V_a = V_a - V_b + \frac{V_a}{2} - \frac{V_o}{2}, \quad \text{where } V_b=2$$ $$-V_a - \frac{V_a}{2} - V_a + \frac{V_o}{2} = -2-9$$ $$5V_a - V_o = 22 \qquad (\text{Eqn } 1)$$
Now apply KCL at node b: $$\frac{V_a-V_b}{4000} = \frac{V_b-V_o}{2000}, \quad \text{where } V_b=2$$ $$\frac{V_a}{4000} - \frac{2}{4000} = \frac{2}{2000} - \frac{V_o}{2000}$$ $$V_a + 2V_o = 6 \qquad (\text{Eqn } 2)$$ If we go ahead and solve the system of two equations in two unknowns (equations 1 and 2), we determine that: $$V_a = 4.545\,V$$ as well as:
$$V_o = 727.27\,mV$$
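As a quick numerical check (a sketch only, using the two node equations derived above), the 2×2 linear system can be solved directly:

```python
import numpy as np

# Eqn 1 (KCL at node a): 5 Va -  Vo = 22
# Eqn 2 (KCL at node b):   Va + 2 Vo = 6
A = np.array([[5.0, -1.0],
              [1.0,  2.0]])
rhs = np.array([22.0, 6.0])
Va, Vo = np.linalg.solve(A, rhs)
print(Va, Vo)   # Va = 50/11 ≈ 4.545 V, Vo = 8/11 ≈ 0.727 V
```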
# Time Series for Macroeconomics and Finance
John H. Cochrane¹
University of Chicago
5807 S. Woodlawn.
Chicago IL 60637
(773) 702-3059
john.cochrane@gsb.uchicago.edu
Spring 1997; Pictures added Jan 2005
¹I thank Giorgio DeSantis for many useful comments on this manuscript. Copyright © John H. Cochrane 1997, 2005.
Contents

1 Preface
2 What is a time series?
3 ARMA models
  3.1 White noise
  3.2 Basic ARMA models
  3.3 Lag operators and polynomials
    3.3.1 Manipulating ARMAs with lag operators
    3.3.2 AR(1) to MA(∞) by recursive substitution
    3.3.3 AR(1) to MA(∞) with lag operators
    3.3.4 AR(p) to MA(∞), MA(q) to AR(∞), factoring lag polynomials, and partial fractions
    3.3.5 Summary of allowed lag polynomial manipulations
  3.4 Multivariate ARMA models
  3.5 Problems and Tricks
4 The autocorrelation and autocovariance functions
  4.1 Definitions
  4.2 Autocovariance and autocorrelation of ARMA processes
    4.2.1 Summary
  4.3 A fundamental representation
  4.4 Admissible autocorrelation functions
  4.5 Multivariate auto- and cross correlations
5 Prediction and Impulse-Response Functions
  5.1 Predicting ARMA models
  5.2 State space representation
    5.2.1 ARMAs in vector AR(1) representation
    5.2.2 Forecasts from vector AR(1) representation
    5.2.3 VARs in vector AR(1) representation
  5.3 Impulse-response function
    5.3.1 Facts about impulse-responses
6 Stationarity and Wold representation
  6.1 Definitions
  6.2 Conditions for stationary ARMA’s
  6.3 Wold Decomposition theorem
    6.3.1 What the Wold theorem does not say
  6.4 The Wold MA(∞) as another fundamental representation
7 VARs: orthogonalization, variance decomposition, Granger causality
  7.1 Orthogonalizing VARs
    7.1.1 Ambiguity of impulse-response functions
    7.1.2 Orthogonal shocks
    7.1.3 Sims orthogonalization–Specifying C(0)
    7.1.4 Blanchard-Quah orthogonalization—restrictions on C(1)
  7.2 Variance decompositions
  7.3 VAR’s in state space notation
  7.4 Tricks and problems
  7.5 Granger Causality
    7.5.1 Basic idea
    7.5.2 Definition, autoregressive representation
    7.5.3 Moving average representation
    7.5.4 Univariate representations
    7.5.5 Effect on projections
    7.5.6 Summary
    7.5.7 Discussion
    7.5.8 A warning: why “Granger causality” is not “Causality”
    7.5.9 Contemporaneous correlation
8 Spectral Representation
  8.1 Facts about complex numbers and trigonometry
    8.1.1 Definitions
    8.1.2 Addition, multiplication, and conjugation
    8.1.3 Trigonometric identities
    8.1.4 Frequency, period and phase
    8.1.5 Fourier transforms
    8.1.6 Why complex numbers?
  8.2 Spectral density
    8.2.1 Spectral densities of some processes
    8.2.2 Spectral density matrix, cross spectral density
    8.2.3 Spectral density of a sum
  8.3 Filtering
    8.3.1 Spectrum of filtered series
    8.3.2 Multivariate filtering formula
    8.3.3 Spectral density of arbitrary MA(∞)
    8.3.4 Filtering and OLS
    8.3.5 A cosine example
    8.3.6 Cross spectral density of two filters, and an interpretation of spectral density
    8.3.7 Constructing filters
    8.3.8 Sims approximation formula
  8.4 Relation between Spectral, Wold, and Autocovariance representations
9 Spectral analysis in finite samples
  9.1 Finite Fourier transforms
    9.1.1 Definitions
  9.2 Band spectrum regression
    9.2.1 Motivation
    9.2.2 Band spectrum procedure
  9.3 Cramér or Spectral representation
  9.4 Estimating spectral densities
    9.4.1 Fourier transform sample covariances
    9.4.2 Sample spectral density
    9.4.3 Relation between transformed autocovariances and sample density
    9.4.4 Asymptotic distribution of sample spectral density
    9.4.5 Smoothed periodogram estimates
    9.4.6 Weighted covariance estimates
    9.4.7 Relation between weighted covariance and smoothed periodogram estimates
    9.4.8 Variance of filtered data estimates
    9.4.9 Spectral density implied by ARMA models
    9.4.10 Asymptotic distribution of spectral estimates
10 Unit Roots
  10.1 Random Walks
  10.2 Motivations for unit roots
    10.2.1 Stochastic trends
    10.2.2 Permanence of shocks
    10.2.3 Statistical issues
  10.3 Unit root and stationary processes
    10.3.1 Response to shocks
    10.3.2 Spectral density
    10.3.3 Autocorrelation
    10.3.4 Random walk components and stochastic trends
    10.3.5 Forecast error variances
    10.3.6 Summary
  10.4 Summary of a(1) estimates and tests
    10.4.1 Near-observational equivalence of unit roots and stationary processes in finite samples
    10.4.2 Empirical work on unit roots/persistence
11 Cointegration
  11.1 Definition
  11.2 Cointegrating regressions
  11.3 Representation of cointegrated system
    11.3.1 Definition of cointegration
    11.3.2 Multivariate Beveridge-Nelson decomposition
    11.3.3 Rank condition on A(1)
    11.3.4 Spectral density at zero
    11.3.5 Common trends representation
    11.3.6 Impulse-response function
  11.4 Useful representations for running cointegrated VAR’s
    11.4.1 Autoregressive Representations
    11.4.2 Error Correction representation
    11.4.3 Running VAR’s
  11.5 An Example
  11.6 Cointegration with drifts and trends
Chapter 1
Preface
These notes are intended as a text rather than as a reference. A text is what
you read in order to learn something. A reference is something you look back
on after you know the outlines of a subject in order to get difficult theorems
exactly right.
The organization is quite different from most books, which really are
intended as references. Most books first state a general theorem or apparatus,
and then show how applications are special cases of a grand general structure.
That’s how we organize things that we already know, but that’s not how we
learn things. We learn things by getting familiar with a bunch of examples,
and then seeing how they fit together in a more general framework. And the
point is the “examples”–knowing how to do something.
I start, therefore, with simple ARMA models constructed from normal iid errors. Once familiar with these models, I introduce the concept
of stationarity and the Wold theorem that shows how such models are in fact
much more general. But that means that the discussion of ARMA processes
is not as general as it is in most books, and many propositions are stated in
much less general contexts than is possible.
I make no effort to be encyclopedic. One function of a text (rather than
a reference) is to decide what an average reader–in this case an average first-year graduate student–really needs to know, and what can be safely left out. So, if you want to know everything about a
subject, consult a reference, such as Hamilton’s (1993) excellent book.
Chapter 2
What is a time series?
Most data in macroeconomics and finance come in the form of time series–a
set of repeated observations of the same variable, such as GNP or a stock
return. We can write a time series as
$$\{x_1, x_2, \ldots, x_T\} \quad \text{or} \quad \{x_t\},\ t = 1, 2, \ldots, T$$
We will treat xt as a random variable. In principle, there is nothing about
time series that is arcane or different from the rest of econometrics. The only
difference with standard econometrics is that the variables are subscripted t
rather than i. For example, if yt is generated by
$$y_t = x_t \beta + \epsilon_t, \qquad E(\epsilon_t \mid x_t) = 0,$$
then OLS provides a consistent estimate of β, just as if the subscript were “i” not “t”.
The word “time series” is used interchangeably to denote a sample $\{x_t\}$, such as GNP from 1947:1 to the present, and a probability model for that sample—a statement of the joint distribution of the random variables $\{x_t\}$.
A possible probability model for the joint distribution of a time series $\{x_t\}$ is
$$x_t = \epsilon_t, \qquad \epsilon_t \sim \text{i.i.d. } N(0, \sigma^2)$$
i.e., $x_t$ normal and independent over time. However, time series are typically not iid, which is what makes them interesting. For example, if GNP today is unusually high, GNP tomorrow is also likely to be unusually high.
It would be nice to use a nonparametric approach—just use histograms to characterize the joint density of $\{\ldots, x_{t-1}, x_t, x_{t+1}, \ldots\}$. Unfortunately, we will not have enough data to follow this approach in macroeconomics for at least the next 2000 years or so. Hence, time-series analysis consists of interesting parametric models for the joint distribution of $\{x_t\}$. The models impose structure, which you must evaluate to see if it captures the features you think are present in the data. In turn, they reduce the estimation problem to the estimation of a few parameters of the time-series model.
The first set of models we study are linear ARMA models. As you will
see, these allow a convenient and flexible way of studying time series, and
capturing the extent to which series can be forecast, i.e. variation over time
in conditional means. However, they don’t do much to help model variation
in conditional variances. For that, we turn to ARCH models later on.
Chapter 3
ARMA models
3.1 White noise
The building block for our time series models is the white noise process, which I’ll denote $\epsilon_t$. In the least general case,
$$\epsilon_t \sim \text{i.i.d. } N(0, \sigma^2)$$
Notice three implications of this assumption:
1. $E(\epsilon_t) = E(\epsilon_t \mid \epsilon_{t-1}, \epsilon_{t-2}, \ldots) = E(\epsilon_t \mid \text{all information at } t-1) = 0.$
2. $E(\epsilon_t \epsilon_{t-j}) = \text{cov}(\epsilon_t, \epsilon_{t-j}) = 0.$
3. $\text{var}(\epsilon_t) = \text{var}(\epsilon_t \mid \epsilon_{t-1}, \epsilon_{t-2}, \ldots) = \text{var}(\epsilon_t \mid \text{all information at } t-1) = \sigma^2.$
The first and second properties are the absence of any serial correlation
or predictability. The third property is conditional homoskedasticity or a
constant conditional variance.
Later, we will generalize the building block process. For example, we may assume properties 2 and 3 without normality, in which case the $\epsilon_t$ need not be independent. We may also assume the first property only, in which case $\epsilon_t$ is a martingale difference sequence.
By itself, $\epsilon_t$ is a pretty boring process. If $\epsilon_t$ is unusually high, there is no tendency for $\epsilon_{t+1}$ to be unusually high or low, so it does not capture the interesting property of persistence that motivates the study of time series. More realistic models are constructed by taking combinations of $\epsilon_t$.
3.2 Basic ARMA models
Most of the time we will study a class of models created by taking linear
combinations of white noise. For example,
AR(1): $x_t = \phi x_{t-1} + \epsilon_t$
MA(1): $x_t = \epsilon_t + \theta \epsilon_{t-1}$
AR(p): $x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \ldots + \phi_p x_{t-p} + \epsilon_t$
MA(q): $x_t = \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}$
ARMA(p,q): $x_t = \phi_1 x_{t-1} + \ldots + \phi_p x_{t-p} + \epsilon_t + \theta_1 \epsilon_{t-1} + \ldots + \theta_q \epsilon_{t-q}$
As you can see, each case amounts to a recipe by which you can construct a sequence $\{x_t\}$ given a sequence of realizations of the white noise process $\{\epsilon_t\}$, and a starting value for x.
All these models are mean zero, and are used to represent deviations of the series about a mean. For example, if a series has mean $\bar{x}$ and follows an AR(1)
$$(x_t - \bar{x}) = \phi(x_{t-1} - \bar{x}) + \epsilon_t$$
it is equivalent to
$$x_t = (1-\phi)\bar{x} + \phi x_{t-1} + \epsilon_t.$$
Thus, constants absorb means. I will generally only work with the mean zero versions, since adding means and other deterministic trends is easy.
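As a concrete illustration, the AR(1) and MA(1) recipes can be simulated directly; a minimal sketch (the parameter values φ = 0.8 and θ = 0.5 are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
eps = rng.standard_normal(T)       # white noise draws
phi, theta = 0.8, 0.5              # illustrative parameters

ar1 = np.zeros(T)                  # AR(1): x_t = phi x_{t-1} + eps_t
ma1 = np.zeros(T)                  # MA(1): x_t = eps_t + theta eps_{t-1}
for t in range(T):
    ar1[t] = (phi * ar1[t - 1] if t > 0 else 0.0) + eps[t]
    ma1[t] = eps[t] + (theta * eps[t - 1] if t > 0 else 0.0)
```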
3.3 Lag operators and polynomials
It is easiest to represent and manipulate ARMA models in lag operator notation. The lag operator moves the index back one time unit, i.e.
$$L x_t = x_{t-1}$$
More formally, L is an operator that takes one whole time series $\{x_t\}$ and produces another; the second time series is the same as the first, but moved backwards one date. From the definition, you can do fancier things:
$$L^2 x_t = L L x_t = L x_{t-1} = x_{t-2}$$
$$L^j x_t = x_{t-j}$$
$$L^{-j} x_t = x_{t+j}.$$
We can also define lag polynomials, for example
$$a(L) x_t = (a_0 L^0 + a_1 L^1 + a_2 L^2) x_t = a_0 x_t + a_1 x_{t-1} + a_2 x_{t-2}.$$
Using this notation, we can rewrite the ARMA models as
AR(1): $(1 - \phi L) x_t = \epsilon_t$
MA(1): $x_t = (1 + \theta L) \epsilon_t$
AR(p): $(1 - \phi_1 L - \phi_2 L^2 - \ldots - \phi_p L^p) x_t = \epsilon_t$
MA(q): $x_t = (1 + \theta_1 L + \ldots + \theta_q L^q) \epsilon_t$
or simply
AR: $a(L) x_t = \epsilon_t$
MA: $x_t = b(L) \epsilon_t$
ARMA: $a(L) x_t = b(L) \epsilon_t$
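To make the notation concrete, here is a small sketch (coefficients are made up) of applying a lag polynomial a(L) to a series:

```python
import numpy as np

def apply_lag_poly(a, x):
    """Apply a(L) = a0 + a1 L + ... + ap L^p to the series x:
    returns y[t] = a0 x[t] + a1 x[t-1] + ... + ap x[t-p], for t >= p."""
    p = len(a) - 1
    return np.array([sum(a[j] * x[t - j] for j in range(p + 1))
                     for t in range(p, len(x))])

x = np.arange(10.0)                    # a simple test series
y = apply_lag_poly([1.0, -0.5], x)     # (1 - 0.5 L) x_t = x_t - 0.5 x_{t-1}
```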
3.3.1 Manipulating ARMAs with lag operators.
ARMA models are not unique. A time series with a given joint distribution of $\{x_0, x_1, \ldots, x_T\}$ can usually be represented with a variety of ARMA models. It is often convenient to work with different representations. For example, 1) the shortest (or only finite length) polynomial representation is obviously the easiest one to work with in many cases; 2) AR forms are the easiest to estimate, since the OLS assumptions still apply; 3) moving average representations express $x_t$ in terms of a linear combination of independent right hand variables. For many purposes, such as finding variances and covariances in sec. 4 below, this is the easiest representation to use.
3.3.2 AR(1) to MA(∞) by recursive substitution
$$x_t = \phi x_{t-1} + \epsilon_t.$$
Recursively substituting,
$$x_t = \phi(\phi x_{t-2} + \epsilon_{t-1}) + \epsilon_t = \phi^2 x_{t-2} + \phi \epsilon_{t-1} + \epsilon_t$$
$$x_t = \phi^k x_{t-k} + \phi^{k-1} \epsilon_{t-k+1} + \ldots + \phi^2 \epsilon_{t-2} + \phi \epsilon_{t-1} + \epsilon_t$$
Thus, an AR(1) can always be expressed as an ARMA(k, k−1). More importantly, if $|\phi| < 1$ so that $\lim_{k \to \infty} \phi^k x_{t-k} = 0$, then
$$x_t = \sum_{j=0}^{\infty} \phi^j \epsilon_{t-j}$$
so the AR(1) can be expressed as an MA(∞).
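The equivalence is easy to verify numerically; a minimal sketch (φ is made up, and the series starts at zero so the truncated MA sum is exact):

```python
import numpy as np

rng = np.random.default_rng(0)
phi, T = 0.9, 200
eps = rng.standard_normal(T)

# AR(1) recursion x_t = phi x_{t-1} + eps_t, starting from x_{-1} = 0
x = np.zeros(T)
for t in range(T):
    x[t] = (phi * x[t - 1] if t > 0 else 0.0) + eps[t]

# MA representation x_t = sum_{j=0}^{t} phi^j eps_{t-j}
x_ma = np.array([sum(phi**j * eps[t - j] for j in range(t + 1))
                 for t in range(T)])
```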
3.3.3 AR(1) to MA(∞) with lag operators.
These kinds of manipulations are much easier using lag operators. To invert the AR(1), write it as
$$(1 - \phi L) x_t = \epsilon_t.$$
A natural way to ”invert” the AR(1) is to write
$$x_t = (1 - \phi L)^{-1} \epsilon_t.$$
What meaning can we attach to $(1 - \phi L)^{-1}$? We have only defined polynomials in L so far. Let’s try using the expression
$$(1-z)^{-1} = 1 + z + z^2 + z^3 + \ldots \quad \text{for } |z| < 1$$
(you can prove this with a Taylor expansion). This expansion, with the hope that $|\phi| < 1$ implies $|\phi L| < 1$ in some sense, suggests
$$x_t = (1 - \phi L)^{-1} \epsilon_t = (1 + \phi L + \phi^2 L^2 + \ldots)\epsilon_t = \sum_{j=0}^{\infty} \phi^j \epsilon_{t-j}$$
which is the same answer we got before. (At this stage, treat the lag operator as a suggestive notation that delivers the right answer. We’ll justify that the method works in a little more depth later.)
Note that we can’t always perform this inversion. In this case, we required $|\phi| < 1$. Not all ARMA processes are invertible to a representation of $x_t$ in terms of current and past $\epsilon_t$.
3.3.4 AR(p) to MA(∞), MA(q) to AR(∞), factoring lag polynomials, and partial fractions
The AR(1) example is about equally easy to solve using lag operators as using
recursive substitution. Lag operators shine with more complicated models.
For example, let’s invert an AR(2). I leave it as an exercise to try recursive
substitution and show how hard it is.
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \epsilon_t.$$
$$(1 - \phi_1 L - \phi_2 L^2) x_t = \epsilon_t$$
I don’t know any expansion formulas to apply directly to $(1 - \phi_1 L - \phi_2 L^2)^{-1}$, but we can use the 1/(1 − z) formula by factoring the lag polynomial. Thus, find $\lambda_1$ and $\lambda_2$ such that
$$(1 - \phi_1 L - \phi_2 L^2) = (1 - \lambda_1 L)(1 - \lambda_2 L)$$
The required values solve
$$\lambda_1 \lambda_2 = -\phi_2$$
$$\lambda_1 + \lambda_2 = \phi_1.$$
(Note λ1 and λ2 may be equal, and they may be complex.)
Now, we need to invert
$$(1 - \lambda_1 L)(1 - \lambda_2 L) x_t = \epsilon_t.$$
We do it by
$$x_t = (1 - \lambda_1 L)^{-1}(1 - \lambda_2 L)^{-1} \epsilon_t$$
$$x_t = \left(\sum_{j=0}^{\infty} \lambda_1^j L^j\right)\left(\sum_{j=0}^{\infty} \lambda_2^j L^j\right) \epsilon_t.$$
Multiplying out the polynomials is tedious, but straightforward.
$$\left(\sum_{j=0}^{\infty} \lambda_1^j L^j\right)\left(\sum_{j=0}^{\infty} \lambda_2^j L^j\right) = (1 + \lambda_1 L + \lambda_1^2 L^2 + \ldots)(1 + \lambda_2 L + \lambda_2^2 L^2 + \ldots)$$
$$= 1 + (\lambda_1 + \lambda_2)L + (\lambda_1^2 + \lambda_1\lambda_2 + \lambda_2^2)L^2 + \ldots = \sum_{j=0}^{\infty}\left(\sum_{k=0}^{j} \lambda_1^k \lambda_2^{j-k}\right) L^j$$
There is a prettier way to express the MA(∞). Here we use the partial fractions trick. We find a and b so that
$$\frac{1}{(1-\lambda_1 L)(1-\lambda_2 L)} = \frac{a}{1-\lambda_1 L} + \frac{b}{1-\lambda_2 L} = \frac{a(1-\lambda_2 L) + b(1-\lambda_1 L)}{(1-\lambda_1 L)(1-\lambda_2 L)}.$$
The numerator on the right hand side must be 1, so
$$a + b = 1$$
$$\lambda_2 a + \lambda_1 b = 0$$
Solving,
$$b = \frac{\lambda_2}{\lambda_2 - \lambda_1}, \quad a = \frac{\lambda_1}{\lambda_1 - \lambda_2},$$
so
$$\frac{1}{(1-\lambda_1 L)(1-\lambda_2 L)} = \frac{\lambda_1}{\lambda_1 - \lambda_2}\frac{1}{1-\lambda_1 L} + \frac{\lambda_2}{\lambda_2 - \lambda_1}\frac{1}{1-\lambda_2 L}.$$
Thus, we can express $x_t$ as
$$x_t = \frac{\lambda_1}{\lambda_1 - \lambda_2}\sum_{j=0}^{\infty} \lambda_1^j \epsilon_{t-j} + \frac{\lambda_2}{\lambda_2 - \lambda_1}\sum_{j=0}^{\infty} \lambda_2^j \epsilon_{t-j}.$$
$$x_t = \sum_{j=0}^{\infty}\left(\frac{\lambda_1}{\lambda_1 - \lambda_2}\lambda_1^j + \frac{\lambda_2}{\lambda_2 - \lambda_1}\lambda_2^j\right)\epsilon_{t-j}$$
This formula should remind you of the solution to a second-order difference or differential equation—the response of x to a shock is a sum of two exponentials, or (if the λ are complex) a mixture of two damped sine and cosine waves.
AR(p)’s work exactly the same way. Computer programs exist to find roots of polynomials of arbitrary order. You can then multiply the lag polynomials together or find the partial fractions expansion. Below, we’ll see a way of writing the AR(p) as a vector AR(1) that makes the process even easier.
Note again that not every AR(2) can be inverted. We require that the λ’s satisfy $|\lambda| < 1$, and one can use their definition to find the implied allowed region of $\phi_1$ and $\phi_2$. Again, until further notice, we will only use invertible ARMA models.
Going from MA to AR(∞) is now obvious. Write the MA as
$$x_t = b(L)\epsilon_t,$$
and so it has an AR(∞) representation
$$b(L)^{-1} x_t = \epsilon_t.$$
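The factoring and the resulting MA(∞) weights can be checked numerically; a sketch with made-up AR(2) coefficients (the λ's are the roots of λ² − φ₁λ − φ₂ = 0):

```python
import numpy as np

phi1, phi2 = 0.6, 0.2    # illustrative AR(2) coefficients

# (1 - phi1 L - phi2 L^2) = (1 - lam1 L)(1 - lam2 L), so lam1 and lam2
# solve lam^2 - phi1 lam - phi2 = 0 (lam1 + lam2 = phi1, lam1 lam2 = -phi2)
lam1, lam2 = np.roots([1.0, -phi1, -phi2])

# MA(inf) weights from the partial-fractions formula:
# psi_j = (lam1^(j+1) - lam2^(j+1)) / (lam1 - lam2)
j = np.arange(20)
psi_pf = (lam1**(j + 1) - lam2**(j + 1)) / (lam1 - lam2)

# Same weights from the AR(2) impulse-response recursion
psi = np.zeros(20)
psi[0], psi[1] = 1.0, phi1
for k in range(2, 20):
    psi[k] = phi1 * psi[k - 1] + phi2 * psi[k - 2]
```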
3.3.5 Summary of allowed lag polynomial manipulations
In summary, one can manipulate lag polynomials pretty much just like regular polynomials, as if L were a number. (We’ll study the theory behind them later, and it is based on replacing L by z where z is a complex number.) Among other things,
1) We can multiply them
$$a(L)b(L) = (a_0 + a_1 L + \ldots)(b_0 + b_1 L + \ldots) = a_0 b_0 + (a_0 b_1 + b_0 a_1)L + \ldots$$
2) They commute:
$$a(L)b(L) = b(L)a(L)$$
(you should prove this to yourself).
3) We can raise them to positive integer powers
$$a(L)^2 = a(L)a(L)$$
4) We can invert them, by factoring them and inverting each term
$$a(L) = (1-\lambda_1 L)(1-\lambda_2 L)\ldots$$
$$a(L)^{-1} = (1-\lambda_1 L)^{-1}(1-\lambda_2 L)^{-1}\ldots = \sum_{j=0}^{\infty}\lambda_1^j L^j \sum_{j=0}^{\infty}\lambda_2^j L^j \ldots = c_1(1-\lambda_1 L)^{-1} + c_2(1-\lambda_2 L)^{-1} + \ldots$$
We’ll consider roots greater than and/or equal to one, fractional powers,
and non-polynomial functions of lag operators later.
3.4 Multivariate ARMA models.
As in the rest of econometrics, multivariate models look just like univariate
models, with the letters reinterpreted as vectors and matrices. Thus, consider
a multivariate time series
$$x_t = \begin{bmatrix} y_t \\ z_t \end{bmatrix}.$$
The building block is a multivariate white noise process, $\epsilon_t \sim \text{iid } N(0, \Sigma)$, by which we mean
$$\epsilon_t = \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}; \quad E(\epsilon_t) = 0, \quad E(\epsilon_t \epsilon_t') = \Sigma = \begin{bmatrix} \sigma_\delta^2 & \sigma_{\delta\nu} \\ \sigma_{\delta\nu} & \sigma_\nu^2 \end{bmatrix}, \quad E(\epsilon_t \epsilon_{t-j}') = 0.$$
(In the section on orthogonalizing VAR’s we’ll see how to start with an even simpler building block, δ and ν uncorrelated or Σ = I.)
The AR(1) is $x_t = \phi x_{t-1} + \epsilon_t$. Reinterpreting the letters as appropriately sized matrices and vectors,
$$\begin{bmatrix} y_t \\ z_t \end{bmatrix} = \begin{bmatrix} \phi_{yy} & \phi_{yz} \\ \phi_{zy} & \phi_{zz} \end{bmatrix}\begin{bmatrix} y_{t-1} \\ z_{t-1} \end{bmatrix} + \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$
or
$$y_t = \phi_{yy} y_{t-1} + \phi_{yz} z_{t-1} + \delta_t$$
$$z_t = \phi_{zy} y_{t-1} + \phi_{zz} z_{t-1} + \nu_t$$
Notice that both lagged y and lagged z appear in each equation. Thus, the
vector AR(1) captures cross-variable dynamics. For example, it could capture
the fact that when M1 is higher in one quarter, GNP tends to be higher the
following quarter, as well as the facts that if GNP is high in one quarter,
GNP tends to be higher the following quarter.
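These cross-variable dynamics are easy to see in a simulation; a minimal sketch (Φ and Σ below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = np.array([[0.5, 0.2],      # phi_yy, phi_yz
                [0.1, 0.7]])     # phi_zy, phi_zz
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])   # error covariance

T = 500
eps = rng.multivariate_normal([0.0, 0.0], Sigma, size=T)
x = np.zeros((T, 2))             # columns: y_t, z_t
for t in range(1, T):
    x[t] = Phi @ x[t - 1] + eps[t]   # x_t = Phi x_{t-1} + eps_t
```

With eigenvalues of Φ inside the unit circle, the simulated series settles into stationary fluctuations.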
We can write the vector AR(1) in lag operator notation, $(I - \Phi L)x_t = \epsilon_t$, or
$$A(L) x_t = \epsilon_t.$$
I’ll use capital letters to denote such matrices of lag polynomials.
Given this notation, it’s easy to see how to write multivariate ARMA models of arbitrary orders:
$$A(L) x_t = B(L)\epsilon_t,$$
where
$$A(L) = I - \Phi_1 L - \Phi_2 L^2 - \ldots; \quad B(L) = I + \Theta_1 L + \Theta_2 L^2 + \ldots, \quad \Phi_j = \begin{bmatrix} \phi_{j,yy} & \phi_{j,yz} \\ \phi_{j,zy} & \phi_{j,zz} \end{bmatrix},$$
and similarly for $\Theta_j$. The way we have written these polynomials, the first term is I, just as the scalar lag polynomials of the last section always start with 1. Another way of writing this fact is A(0) = I, B(0) = I. As with Σ, there are other equivalent representations that do not have this feature, which we will study when we orthogonalize VARs.
We can invert and manipulate multivariate ARMA models in obvious ways. For example, the MA(∞) representation of the multivariate AR(1) must be
$$(I - \Phi L)x_t = \epsilon_t \Leftrightarrow x_t = (I - \Phi L)^{-1}\epsilon_t = \sum_{j=0}^{\infty} \Phi^j \epsilon_{t-j}$$
More generally, consider inverting an arbitrary AR(p),
$$A(L)x_t = \epsilon_t \Leftrightarrow x_t = A(L)^{-1}\epsilon_t.$$
We can interpret the matrix inverse as a product of sums as above, or we can interpret it with the matrix inverse formula:
$$\begin{bmatrix} a_{yy}(L) & a_{yz}(L) \\ a_{zy}(L) & a_{zz}(L) \end{bmatrix}\begin{bmatrix} y_t \\ z_t \end{bmatrix} = \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix} \Rightarrow$$
$$\begin{bmatrix} y_t \\ z_t \end{bmatrix} = (a_{yy}(L)a_{zz}(L) - a_{zy}(L)a_{yz}(L))^{-1}\begin{bmatrix} a_{zz}(L) & -a_{yz}(L) \\ -a_{zy}(L) & a_{yy}(L) \end{bmatrix}\begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$
We take inverses of scalar lag polynomials as before, by factoring them into roots and inverting each root with the 1/(1 − z) formula.
Though the above are useful ways to think about what inverting a matrix of lag polynomials means, they are not particularly good algorithms for doing it. It is far simpler to simply simulate the response of $x_t$ to shocks. We study this procedure below.
The name vector autoregression is usually used in the place of ”vector
ARMA” because it is very uncommon to estimate moving average terms.
Autoregressions are easy to estimate since the OLS assumptions still apply,
whereas MA terms have to be estimated by maximum likelihood. Since every
MA has an AR(∞) representation, pure autoregressions can approximate
vector MA processes, if you allow enough lags.
3.5 Problems and Tricks
There is an enormous variety of clever tricks for manipulating lag polynomials
beyond the factoring and partial fractions discussed above. Here are a few.
1. You can invert finite-order polynomials neatly by matching representations. For example, suppose $a(L)x_t = b(L)\epsilon_t$, and you want to find the moving average representation $x_t = d(L)\epsilon_t$. You could try to crank out $a(L)^{-1}b(L)$ directly, but that’s not much fun. Instead, you could find d(L) from $b(L)\epsilon_t = a(L)x_t = a(L)d(L)\epsilon_t$, hence
$$b(L) = a(L)d(L),$$
and matching terms in $L^j$ to make sure this works. For example, suppose $a(L) = a_0 + a_1 L$, $b(L) = b_0 + b_1 L + b_2 L^2$. Multiplying out $d(L) = (a_0 + a_1 L)^{-1}(b_0 + b_1 L + b_2 L^2)$ would be a pain. Instead, write
$$b_0 + b_1 L + b_2 L^2 = (a_0 + a_1 L)(d_0 + d_1 L + d_2 L^2 + \ldots).$$
Matching powers of L,
$$b_0 = a_0 d_0$$
$$b_1 = a_1 d_0 + a_0 d_1$$
$$b_2 = a_1 d_1 + a_0 d_2$$
$$0 = a_1 d_{j-1} + a_0 d_j; \quad j \geq 3,$$
which you can easily solve recursively for the $d_j$. (Try it.)
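Here is a sketch of that recursion (the a and b coefficients are made up), checked by multiplying a(L)d(L) back out:

```python
import numpy as np

a = np.array([1.0, -0.9])          # a(L) = 1 - 0.9 L
b = np.array([1.0, 0.5, 0.25])     # b(L) = 1 + 0.5 L + 0.25 L^2

J = 15
d = np.zeros(J)
for j in range(J):
    bj = b[j] if j < len(b) else 0.0
    # matching L^j: b_j = a0 d_j + a1 d_{j-1}  =>  d_j = (b_j - a1 d_{j-1}) / a0
    d[j] = (bj - (a[1] * d[j - 1] if j > 0 else 0.0)) / a[0]

# check: the product a(L) d(L) reproduces b(L) up to the truncation point
prod = np.convolve(a, d)
```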
Chapter 4
The autocorrelation and autocovariance functions
4.1 Definitions
The autocovariance of a series $x_t$ is defined as
$$\gamma_j = \text{cov}(x_t, x_{t-j})$$
(Covariance is defined as $\text{cov}(x_t, x_{t-j}) = E[(x_t - E(x_t))(x_{t-j} - E(x_{t-j}))]$, in case you forgot.) Since we are specializing to ARMA models without constant terms, $E(x_t) = 0$ for all our models. Hence
$$\gamma_j = E(x_t x_{t-j})$$
Note $\gamma_0 = \text{var}(x_t)$.
A related statistic is the correlation of $x_t$ with $x_{t-j}$, or autocorrelation
$$\rho_j = \gamma_j/\text{var}(x_t) = \gamma_j/\gamma_0.$$
My notation presumes that the covariance of $x_t$ and $x_{t-j}$ is the same as that of $x_{t-1}$ and $x_{t-j-1}$, etc., i.e. that it depends only on the separation between two x’s, j, and not on the absolute date t. You can easily verify that invertible ARMA models possess this property. It is also a deeper property called stationarity, that I’ll discuss later.
We constructed ARMA models in order to produce interesting models of the joint distribution of a time series $\{x_t\}$. Autocovariances and autocorrelations are one obvious way of characterizing the joint distribution of a time series so produced. The correlation of $x_t$ with $x_{t+1}$ is an obvious measure of how persistent the time series is, or how strong is the tendency for a high observation today to be followed by a high observation tomorrow.
Next, we calculate the autocorrelations of common ARMA processes, both to characterize them, and to gain some familiarity with manipulating time series.
4.2 Autocovariance and autocorrelation of ARMA processes.
White Noise.
Since we assumed $\epsilon_t \sim \text{iid } N(0, \sigma^2)$, it’s pretty obvious that
$$\gamma_0 = \sigma^2, \quad \gamma_j = 0 \text{ for } j \neq 0$$
$$\rho_0 = 1, \quad \rho_j = 0 \text{ for } j \neq 0.$$
MA(1)
The model is:
$$x_t = \epsilon_t + \theta\epsilon_{t-1}$$
Autocovariance:
$$\gamma_0 = \text{var}(x_t) = \text{var}(\epsilon_t + \theta\epsilon_{t-1}) = \sigma^2 + \theta^2\sigma^2 = (1+\theta^2)\sigma^2$$
$$\gamma_1 = E(x_t x_{t-1}) = E[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t-1} + \theta\epsilon_{t-2})] = E(\theta\epsilon_{t-1}^2) = \theta\sigma^2$$
$$\gamma_2 = E(x_t x_{t-2}) = E[(\epsilon_t + \theta\epsilon_{t-1})(\epsilon_{t-2} + \theta\epsilon_{t-3})] = 0$$
$$\gamma_3, \ldots = 0$$
Autocorrelation:
$$\rho_1 = \theta/(1+\theta^2); \quad \rho_2, \ldots = 0$$
MA(2)
Model:
$$x_t = \epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2}$$
Autocovariance:
$$\gamma_0 = E[(\epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2})^2] = (1 + \theta_1^2 + \theta_2^2)\sigma^2$$
$$\gamma_1 = E[(\epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2})(\epsilon_{t-1} + \theta_1\epsilon_{t-2} + \theta_2\epsilon_{t-3})] = (\theta_1 + \theta_1\theta_2)\sigma^2$$
$$\gamma_2 = E[(\epsilon_t + \theta_1\epsilon_{t-1} + \theta_2\epsilon_{t-2})(\epsilon_{t-2} + \theta_1\epsilon_{t-3} + \theta_2\epsilon_{t-4})] = \theta_2\sigma^2$$
$$\gamma_3, \gamma_4, \ldots = 0$$
Autocorrelation:
$$\rho_0 = 1$$
$$\rho_1 = (\theta_1 + \theta_1\theta_2)/(1 + \theta_1^2 + \theta_2^2)$$
$$\rho_2 = \theta_2/(1 + \theta_1^2 + \theta_2^2)$$
$$\rho_3, \rho_4, \ldots = 0$$
MA(q), MA(∞)
By now the pattern should be clear: MA(q) processes have q autocorrelations different from zero. Also, it should be obvious that if
$$x_t = \theta(L)\epsilon_t = \left(\sum_{j=0}^{\infty} \theta_j L^j\right)\epsilon_t$$
then
$$\gamma_0 = \text{var}(x_t) = \left(\sum_{j=0}^{\infty} \theta_j^2\right)\sigma^2$$
$$\gamma_k = \left(\sum_{j=0}^{\infty} \theta_j\theta_{j+k}\right)\sigma^2$$
and formulas for $\rho_j$ follow immediately.
There is an important lesson in all this. Calculating second moment properties is easy for MA processes, since all the covariance terms ($E(\epsilon_j\epsilon_k)$) drop out.
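A sketch that implements these autocovariance formulas for a finite MA (the θ values are made up) and checks them against the MA(2) expressions above:

```python
import numpy as np

theta = np.array([1.0, 0.6, 0.3])   # theta_0 = 1: x_t = e_t + 0.6 e_{t-1} + 0.3 e_{t-2}
sigma2 = 1.0

def gamma(k):
    """gamma_k = sigma^2 * sum_j theta_j theta_{j+k}; zero beyond the MA order."""
    if k >= len(theta):
        return 0.0
    return sigma2 * float(np.sum(theta[:len(theta) - k] * theta[k:]))

g0, g1, g2 = gamma(0), gamma(1), gamma(2)
rho1, rho2 = g1 / g0, g2 / g0
```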
AR(1)
There are two ways to do this one. First, we might use the MA(∞) representation of an AR(1), and use the MA formulas given above. Thus, the model is
$$(1 - \phi L)x_t = \epsilon_t \Rightarrow x_t = (1 - \phi L)^{-1}\epsilon_t = \sum_{j=0}^{\infty} \phi^j \epsilon_{t-j}$$
so
$$\gamma_0 = \left(\sum_{j=0}^{\infty} \phi^{2j}\right)\sigma^2 = \frac{1}{1-\phi^2}\sigma^2; \quad \rho_0 = 1$$
$$\gamma_1 = \left(\sum_{j=0}^{\infty} \phi^j\phi^{j+1}\right)\sigma^2 = \phi\left(\sum_{j=0}^{\infty} \phi^{2j}\right)\sigma^2 = \frac{\phi}{1-\phi^2}\sigma^2; \quad \rho_1 = \phi$$
and continuing this way,
$$\gamma_k = \frac{\phi^k}{1-\phi^2}\sigma^2; \quad \rho_k = \phi^k.$$
There’s another way to find the autocorrelations of an AR(1), which is useful in its own right.
$$\gamma_1 = E(x_t x_{t-1}) = E[(\phi x_{t-1} + \epsilon_t)(x_{t-1})] = \phi\sigma_x^2; \quad \rho_1 = \phi$$
$$\gamma_2 = E(x_t x_{t-2}) = E[(\phi^2 x_{t-2} + \phi\epsilon_{t-1} + \epsilon_t)(x_{t-2})] = \phi^2\sigma_x^2; \quad \rho_2 = \phi^2$$
$$\ldots$$
$$\gamma_k = E(x_t x_{t-k}) = E[(\phi^k x_{t-k} + \ldots)(x_{t-k})] = \phi^k\sigma_x^2; \quad \rho_k = \phi^k$$
AR(p); Yule-Walker equations
This latter method turns out to be the easy way to do AR(p)’s. I’ll do an AR(3), then the principle is clear for higher order AR’s.
$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \phi_3 x_{t-3} + \epsilon_t$$
Multiplying both sides by $x_t, x_{t-1}, \ldots$, taking expectations, and dividing by $\gamma_0$, we obtain
$$1 = \phi_1\rho_1 + \phi_2\rho_2 + \phi_3\rho_3 + \sigma^2/\gamma_0$$
$$\rho_1 = \phi_1 + \phi_2\rho_1 + \phi_3\rho_2$$
$$\rho_2 = \phi_1\rho_1 + \phi_2 + \phi_3\rho_1$$
$$\rho_3 = \phi_1\rho_2 + \phi_2\rho_1 + \phi_3$$
$$\rho_k = \phi_1\rho_{k-1} + \phi_2\rho_{k-2} + \phi_3\rho_{k-3}$$
The second, third and fourth equations can be solved for $\rho_1$, $\rho_2$ and $\rho_3$. Then each remaining equation gives $\rho_k$ in terms of $\rho_{k-1}$, $\rho_{k-2}$ and $\rho_{k-3}$, so we can solve for all the ρ’s. Notice that the ρ’s follow the same difference equation as the original x’s. Therefore, past $\rho_3$, the ρ’s follow a mixture of damped sines and exponentials.
The first equation can then be solved for the variance,
$$\sigma_x^2 = \gamma_0 = \frac{\sigma^2}{1 - (\phi_1\rho_1 + \phi_2\rho_2 + \phi_3\rho_3)}$$
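The Yule-Walker system above can be solved as a small linear system; a sketch with made-up AR(3) coefficients:

```python
import numpy as np

phi = np.array([0.5, 0.2, 0.1])   # illustrative AR(3) coefficients
sigma2 = 1.0

# The second through fourth Yule-Walker equations,
#   rho1 = phi1 + phi2 rho1 + phi3 rho2
#   rho2 = phi1 rho1 + phi2 + phi3 rho1
#   rho3 = phi1 rho2 + phi2 rho1 + phi3
# rearranged as A r = b for r = (rho1, rho2, rho3):
A = np.array([[1 - phi[1],        -phi[2], 0.0],
              [-(phi[0] + phi[2]), 1.0,    0.0],
              [-phi[1],           -phi[0], 1.0]])
b = phi.copy()
rho1, rho2, rho3 = np.linalg.solve(A, b)

# Remaining autocorrelations follow the same difference equation as x
rho = [1.0, rho1, rho2, rho3]
for k in range(4, 10):
    rho.append(phi[0]*rho[k-1] + phi[1]*rho[k-2] + phi[2]*rho[k-3])

# The first equation then gives the variance
gamma0 = sigma2 / (1.0 - (phi[0]*rho1 + phi[1]*rho2 + phi[2]*rho3))
```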
4.2.1 Summary
The pattern of autocorrelations as a function of lag – ρj as a function of j –
is called the autocorrelation function. The MA(q) process has q (potentially)
non-zero autocorrelations, and the rest are zero. The AR(p) process has p
(potentially) non-zero autocorrelations with no particular pattern, and then
the autocorrelation function dies off as a mixture of sines and exponentials.
One thing we learn from all this is that ARMA models are capable of
capturing very complex patterns of temporal correlation. Thus, they are a
useful and interesting class of models. In fact, they can capture any valid
autocorrelation! If all you care about is autocorrelation (and not, say, third
moments, or nonlinear dynamics), then ARMAs are as general as you need
to get!
Time series books (e.g. Box and Jenkins ()) also define a partial autocor-
relation function. The j’th partial autocorrelation is related to the coefficient
on xt−j of a regression of xt on xt−1 , xt−2 , . . . , xt−j . Thus for an AR(p), the
p + 1th and higher partial autocorrelations are zero. In fact, the partial
autocorrelation function behaves in an exactly symmetrical fashion to the
autocorrelation function: the partial autocorrelation of an MA(q) is damped
sines and exponentials after q.
Box and Jenkins () and subsequent books on time series aimed at fore-
casting advocate inspection of autocorrelation and partial autocorrelation
functions to “identify” the appropriate “parsimonious” AR, MA or ARMA
process. I’m not going to spend any time on this, since the procedure is
not much followed in economics anymore. With rare exceptions (for exam-
ple Rosen (), Hansen and Hodrick(1981)) economic theory doesn’t say much
about the orders of AR or MA terms. Thus, we use short order ARMAs to
approximate a process which probably is “really” of infinite order (though
with small coefficients). Rather than spend a lot of time on “identification”
of the precise ARMA process, we tend to throw in a few extra lags just to
be sure and leave it at that.
4.3 A fundamental representation
Autocovariances also turn out to be useful because they are the first of three
fundamental representations for a time series. ARMA processes with nor-
mal iid errors are linear combinations of normals, so the resulting {xt } are
normally distributed. Thus the joint distribution of an ARMA time series is
fully characterized by their mean (0) and covariances E(xt xt−j ). (Using the
usual formula for a multivariate normal, we can write the joint probability
density of {xt } using only the covariances.) In turn, all the statistical prop-
erties of a series are described by its joint distribution. Thus, once we know
the autocovariances we know everything there is to know about the process.
Put another way,
If two processes have the same autocovariance function, they are
the same process.
This was not true of ARMA representations–an AR(1) is the same as a
(particular) MA(∞), etc.
This is a useful fact. Here’s an example. Suppose xt is composed of two
unobserved components as follows:
yt = νt + ανt−1 ; zt = δt ; xt = yt + zt
νt , δt iid, independent of each other. What ARMA process does xt follow?
One way to solve this problem is to find the autocovariance function
of xt , then find an ARMA process with the same autocovariance function.
Since the autocovariance function is fundamental, this must be an ARMA
representation for xt .
var(xt ) = var(yt ) + var(zt ) = (1 + α²)σν² + σδ²
E(xt xt−1 ) = E[(νt + δt + ανt−1 )(νt−1 + δt−1 + ανt−2 )] = ασν²
E(xt xt−k ) = 0, k > 1.
γ0 and γ1 nonzero and the rest zero is the autocovariance function of an
MA(1), so we must be able to represent xt as an MA(1). Using the formula
above for the autocovariances of an MA(1),
γ0 = (1 + θ²)σ² = (1 + α²)σν² + σδ²
γ1 = θσ² = ασν².
These are two equations in two unknowns, which we can solve for θ and σ²,
the two parameters of the MA(1) representation xt = (1 + θL)εt.
Matching fundamental representations is one of the most common tricks
in manipulating time series, and we’ll see it again and again.
4.4 Admissible autocorrelation functions
Since the autocorrelation function is fundamental, it might be nice to generate
time series processes by picking autocorrelations, rather than specifying
(non-fundamental) ARMA parameters. But not every collection of numbers
is the autocorrelation of a process. In this section, we answer the question,
when is a set of numbers {1, ρ1 , ρ2 , . . .} the autocorrelation function of an
ARMA process?
Obviously, correlation coefficients are less than one in absolute value, so
choices like 2 or −4 are ruled out. But it turns out that |ρj| ≤ 1, though
necessary, is not sufficient for {ρ1 , ρ2 , . . .} to be the autocorrelation function
of an ARMA process.
The extra condition we must impose is that the variance of any random
variable is positive. Thus, it must be the case that
var(α0 xt + α1 xt−1 + . . .) ≥ 0 for all {α0 , α1 , . . .}.
Now, we can write
var(α0 xt + α1 xt−1 ) = γ0 [α0 α1 ] [1 ρ1; ρ1 1] [α0; α1 ] ≥ 0.
Thus, the matrices
[1 ρ1; ρ1 1],  [1 ρ1 ρ2; ρ1 1 ρ1; ρ2 ρ1 1],
etc. must all be positive semi-definite. This is a stronger requirement than
| ρ |≤ 1. For example, the determinant of the second matrix must be positive
(as well as the determinants of its principal minors, which implies | ρ1 |≤ 1
and | ρ2 |≤ 1), so
1 + 2ρ1²ρ2 − 2ρ1² − ρ2² ≥ 0 ⇒ (ρ2 − (2ρ1² − 1))(ρ2 − 1) ≤ 0
We know ρ2 ≤ 1 already, so
ρ2 − (2ρ1² − 1) ≥ 0 ⇒ ρ2 ≥ 2ρ1² − 1 ⇒ −1 ≤ (ρ2 − ρ1²)/(1 − ρ1²) ≤ 1
Thus, ρ1 and ρ2 must lie¹ in the parabolic-shaped region illustrated in figure 4.1.
¹ To get the last implication,
2ρ1² − 1 ≤ ρ2 ≤ 1 ⇒ −(1 − ρ1²) ≤ ρ2 − ρ1² ≤ 1 − ρ1² ⇒ −1 ≤ (ρ2 − ρ1²)/(1 − ρ1²) ≤ 1.
Figure 4.1: The parabolic-shaped region of the (ρ1, ρ2) plane in which ρ1 and ρ2 must lie.
For example, if ρ1 = .9, then 2(.81) − 1 = .62 ≤ ρ2 ≤ 1.
Why go through all this algebra? There are two points: 1) it is not
true that any choice of autocorrelations with | ρj |≤ 1 (or even < 1) is the
autocorrelation function of an ARMA process. 2) The restrictions on ρ are
very complicated. This gives a reason to want to pay the set-up costs for
learning the spectral representation, in which we can build a time series by
arbitrarily choosing quantities like ρ.
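The positive semi-definiteness requirement is easy to check numerically. In the sketch below the helper name `admissible` is hypothetical and the ρ values are illustrative; it checks only the Toeplitz matrix built from the supplied ρ's, which is what the parabola condition above uses:

```python
import numpy as np

# Check whether 1, rho_1, ..., rho_m can be autocorrelations by testing that the
# Toeplitz correlation matrix built from them is positive semi-definite.
def admissible(rhos):
    r = [1.0] + list(rhos)
    m = len(r)
    T = np.array([[r[abs(i - j)] for j in range(m)] for i in range(m)])
    return bool(np.all(np.linalg.eigvalsh(T) >= -1e-12))

print(admissible([0.9, 0.7]))   # 0.7 >= 2*(0.81) - 1 = 0.62, inside the parabola
print(admissible([0.9, 0.5]))   # 0.5 <  0.62, outside the parabola
```

With ρ1 = .9, ρ2 = .7 passes and ρ2 = .5 fails, matching the bound ρ2 ≥ 2ρ1² − 1 = .62 computed in the text.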
There are two limiting properties of autocorrelations and autocovariances
as well. Recall from the Yule-Walker equations that autocorrelations eventually
die out exponentially.
1) Autocovariances are bounded by an exponential: ∃ λ, 0 < λ < 1, s.t. |γj| < λ^j.
Since exponentials are square summable, this implies
2) Autocovariances are square summable: Σ_{j=0}^∞ γj² < ∞.
We will discuss these properties later.
4.5 Multivariate auto- and cross correlations.
As usual, you can remember the multivariate extension by reinterpreting the
same letters as appropriate vectors and matrices.
With a vector time series
xt = [yt; zt]
we define the autocovariance function as
Γj = E(xt x′t−j ) = [E(yt yt−j) E(yt zt−j); E(zt yt−j) E(zt zt−j)]
The terms E(yt zt−j ) are called cross-covariances. Notice that Γj does not
= Γ−j . Rather, Γj = Γ′−j , or E(xt x′t−j ) = [E(xt x′t+j )]′. (You should stop and
verify this fact from the definition, and the fact that E(yt zt−j ) = E(zt yt+j ).)
Correlations are similarly defined as
[E(yt yt−j )/σy²  E(yt zt−j )/σy σz;  E(zt yt−j )/σy σz  E(zt zt−j )/σz²],
i.e., we keep track of autocorrelations and cross-correlations.
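The transpose identity Γj = Γ′−j can be illustrated with sample moments of a simulated bivariate VAR(1); the process and its coefficients below are illustrative:

```python
import numpy as np

# Sample cross-covariances of a simulated bivariate VAR(1), illustrating that
# Gamma_j equals Gamma_{-j} transposed, not Gamma_{-j} itself.
rng = np.random.default_rng(0)
T = 5_000
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

def Gamma(j):
    """Sample E(x_t x_{t-j}'); a negative j gives E(x_t x_{t+|j|}')."""
    if j >= 0:
        return x[j:].T @ x[:T - j] / (T - j)
    return x[:T + j].T @ x[-j:] / (T + j)

print(np.allclose(Gamma(1), Gamma(-1).T))  # the identity Gamma_j = Gamma_{-j}'
```

Note that the identity holds exactly even in the sample moments, while Γ1 itself is not a symmetric matrix.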
Chapter 5
Prediction and
Impulse-Response Functions
One of the most interesting things to do with an ARMA model is form predic-
tions of the variable given its past. I.e., we want to know what is E(xt+j |all
information available at time t). For the moment, ”all information” will be
all past values of the variable, and all past values of the shocks. I’ll get more
picky about what is in the information set later. For now, the question is to
find
Et (xt+j ) ≡ E(xt+j | xt , xt−1 , xt−2 , . . . , εt , εt−1 , εt−2 , . . .)
We might also want to know how certain we are about the prediction, which
we can quantify with
vart (xt+j ) ≡ var(xt+j | xt , xt−1 , xt−2 , . . . , εt , εt−1 , εt−2 , . . .).
The left hand side introduces a short (but increasingly unfashionable) nota-
tion for conditional moments.
Prediction is interesting for a variety of reasons. First, it is one of the
few rationalizations for time-series to be a subject of its own, divorced from
economics. Atheoretical forecasts of time series are often useful. One can
simply construct ARMA or VAR models of time series and crank out such
forecasts. The pattern of forecasts is also, like the autocorrelation function,
an interesting characterization of the behavior of a time series.
5.1 Predicting ARMA models
As usual, we’ll do a few examples and then see what general principles un-
derlie them.
AR(1)
For the AR(1), xt+1 = φxt + εt+1 , we have
Et (xt+1 ) = Et (φxt + εt+1 ) = φxt
Et (xt+2 ) = Et (φ²xt + φεt+1 + εt+2 ) = φ²xt
...
Et (xt+k ) = φ^k xt .
Similarly,
vart (xt+1 ) = vart (φxt + εt+1 ) = σ²
vart (xt+2 ) = vart (φ²xt + φεt+1 + εt+2 ) = (1 + φ²)σ²
...
vart (xt+k ) = (1 + φ² + φ⁴ + . . . + φ^{2(k−1)})σ²
These means and variances are plotted in figure 5.1.
Notice that
lim_{k→∞} Et (xt+k ) = 0 = E(xt )
lim_{k→∞} vart (xt+k ) = Σ_{j=0}^∞ φ^{2j} σ² = σ²/(1 − φ²) = var(xt ).
The unconditional moments are limits of the conditional moments. In this
way, we can think of the unconditional moments as the limit of conditional
moments of xt as of time t → −∞, or the limit of conditional moments of
xt+j as the horizon j → ∞.
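A short sketch of these recursions and their limits; the values of φ, σ², and xt are illustrative:

```python
import numpy as np

# AR(1) forecasts and forecast-error variances, built recursively, converging
# to the unconditional moments E(x) = 0 and var(x) = sigma^2 / (1 - phi^2).
phi, sigma2, xt = 0.9, 1.0, 2.0

forecasts, variances = [], []
f, v = xt, 0.0
for k in range(1, 200):
    f = phi * f                  # E_t(x_{t+k}) = phi^k * x_t
    v = sigma2 + phi**2 * v      # var_t(x_{t+k}) = sigma^2 (1 + phi^2 + ... + phi^(2(k-1)))
    forecasts.append(f)
    variances.append(v)

print(variances[-1], sigma2 / (1 - phi**2))
```

At long horizons the forecast is (numerically) zero and the forecast-error variance equals the unconditional variance.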
MA
Forecasting MA models is similarly easy. Since
xt = εt + θ1 εt−1 + θ2 εt−2 + ...,
Figure 5.1: AR(1) forecast and standard deviation: Et (xt+k ) = φ^k xt plotted against horizon k from time t, with bands Et (xt+k ) ± σt (xt+k ).
we have
Et (xt+1 ) = Et (εt+1 + θ1 εt + θ2 εt−1 + . . .) = θ1 εt + θ2 εt−1 + ...
Et (xt+k ) = Et (εt+k + θ1 εt+k−1 + . . . + θk εt + θk+1 εt−1 + . . .) = θk εt + θk+1 εt−1 + . . .
vart (xt+1 ) = σ²
vart (xt+k ) = (1 + θ1² + θ2² + . . . + θ²k−1 )σ²
AR’s and ARMA’s
The general principle in cranking out forecasts is to exploit the facts that
Et (εt+j ) = 0 and vart (εt+j ) = σ² for j > 0. You express xt+j as a sum of
things known at time t and shocks between t and t + j:
xt+j = {function of εt+j , εt+j−1 , ..., εt+1 } + {function of εt , εt−1 , ..., xt , xt−1 , ...}
The things known at time t define the conditional mean or forecast and the
shocks between t and t+j define the conditional variance or forecast error.
Whether you express the part that is known at time t in terms of x’s or
in terms of ε’s is a matter of convenience. For example, in the AR(1) case,
we could have written Et (xt+j ) = φ^j xt or Et (xt+j ) = φ^j εt + φ^{j+1} εt−1 + ....
Since xt = εt + φεt−1 + ..., the two ways of expressing Et (xt+j ) are obviously
identical.
It’s easiest to express forecasts of AR’s and ARMA’s analytically (i.e. de-
rive a formula with Et (xt+j ) on the left hand side and a closed-form expression
on the right) by inverting to their MA(∞) representations. To find forecasts
numerically, it’s easier to use the state space representation described later
to recursively generate them.
Multivariate ARMAs
Multivariate prediction is again exactly the same idea as univariate pre-
diction, where all the letters are reinterpreted as vectors and matrices. As
usual, you have to be a little bit careful about transposes and such.
For example, if we start with a vector MA(∞), xt = B(L)εt , we have
Et (xt+j ) = Bj εt + Bj+1 εt−1 + ...
vart (xt+j ) = Σ + B1 ΣB1′ + . . . + Bj−1 ΣB′j−1 .
5.2 State space representation
The AR(1) is particularly nice for computations because both forecasts and
forecast error variances can be found recursively. This section explores a
really nice trick by which any process can be mapped into a vector AR(1),
which leads to easy programming of forecasts (and lots of other things too.)
5.2.1 ARMAs in vector AR(1) representation
yt = φ1 yt−1 + φ2 yt−2 + εt + θ1 εt−1 .
We map this into
[yt ; yt−1 ; εt ] = [φ1 φ2 θ1 ; 1 0 0 ; 0 0 0] [yt−1 ; yt−2 ; εt−1 ] + [1 ; 0 ; 1] [εt ]
which we write in AR(1) form as
xt = Axt−1 + Cwt .
It is sometimes convenient to redefine the C matrix so the variance-covariance
matrix of the shocks is the identity matrix. To do this, we modify the above as
C = [σ ; 0 ; σ],  E(wt wt′ ) = I.
5.2.2 Forecasts from vector AR(1) representation
With this vector AR(1) representation, we can find the forecasts, forecast
error variances and the impulse response function either directly or with the
corresponding vector MA(∞) representation xt = Σ_{j=0}^∞ A^j C wt−j . Either
way, forecasts are
Et (xt+k ) = A^k xt
and the forecast error variances are¹
xt+1 − Et (xt+1 ) = Cwt+1 ⇒ vart (xt+1 ) = CC′
xt+2 − Et (xt+2 ) = Cwt+2 + ACwt+1 ⇒ vart (xt+2 ) = CC′ + ACC′A′
vart (xt+k ) = Σ_{j=0}^{k−1} A^j CC′ (A^j )′
¹ In case you forgot, if x is a vector with covariance matrix Σ and A is a matrix, then var(Ax) = AΣA′.
These formulas are particularly nice, because they can be computed recursively,
Et (xt+k ) = A Et (xt+k−1 )
vart (xt+k ) = CC′ + A vart (xt+k−1 ) A′.
Thus, you can program up a string of forecasts in a simple do loop.
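A sketch of such a do loop, in Python rather than Gauss, for an illustrative ARMA(2,1); all parameter values and the current state are made up for the example:

```python
import numpy as np

# Map y_t = phi1 y_{t-1} + phi2 y_{t-2} + eps_t + theta1 eps_{t-1} into
# x_t = A x_{t-1} + C w_t with state [y_t, y_{t-1}, eps_t]', then crank out
# forecasts and variances with E_t(x_{t+k}) = A E_t(x_{t+k-1}) and
# var_t(x_{t+k}) = CC' + A var_t(x_{t+k-1}) A'.
phi1, phi2, theta1, sigma = 0.6, 0.2, 0.4, 1.0

A = np.array([[phi1, phi2, theta1],
              [1.0,  0.0,  0.0],
              [0.0,  0.0,  0.0]])
C = np.array([[sigma], [0.0], [sigma]])   # scaled so that E(w_t w_t') = I

state = np.array([1.0, 0.5, 0.2])         # [y_t, y_{t-1}, eps_t], illustrative

forecasts, var_y = [], []
f, V = state.copy(), np.zeros((3, 3))
for k in range(1, 11):                    # the "simple do loop"
    f = A @ f
    V = C @ C.T + A @ V @ A.T
    forecasts.append(f[0])                # forecast of y_{t+k}
    var_y.append(V[0, 0])                 # forecast-error variance of y_{t+k}

print(forecasts[0], var_y[:2])
```

The one-step forecast matches reading the ARMA equation directly, the one-step variance is σ², and the two-step variance is σ²(1 + (φ1 + θ1)²), as the MA(∞) coefficients imply.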
5.2.3 VARs in vector AR(1) representation.
The multivariate forecast formulas given above probably didn’t look very
appetizing. The easy way to do computations with VARs is to map them
into a vector AR(1) as well. Conceptually, this is simple–just interpret
xt above as a vector [yt zt ]0 . Here is a concrete example. Start with the
prototype VAR,
yt = φyy1 yt−1 + φyy2 yt−2 + . . . + φyz1 zt−1 + φyz2 zt−2 + . . . + εyt
zt = φzy1 yt−1 + φzy2 yt−2 + . . . + φzz1 zt−1 + φzz2 zt−2 + . . . + εzt
We map this into an AR(1) as follows:
[yt ; zt ; yt−1 ; zt−1 ; ⋮] = [φyy1 φyz1 φyy2 φyz2 … ; φzy1 φzz1 φzy2 φzz2 … ; 1 0 0 0 … ; 0 1 0 0 … ; ⋮] [yt−1 ; zt−1 ; yt−2 ; zt−2 ; ⋮] + [1 0 ; 0 1 ; 0 0 ; 0 0 ; ⋮] [εyt ; εzt ]
i.e.,
xt = Axt−1 + εt ,  E(εt εt′ ) = Σ.
Or, starting with the vector form of the VAR,
xt = Φ1 xt−1 + Φ2 xt−2 + ... + εt ,
[xt ; xt−1 ; xt−2 ; ⋮] = [Φ1 Φ2 … ; I 0 … ; 0 I … ; ⋮] [xt−1 ; xt−2 ; xt−3 ; ⋮] + [I ; 0 ; 0 ; ⋮] [εt ]
Given this AR(1) representation, we can forecast both y and z as above.
Below, we add a small refinement by choosing the C matrix so that the shocks
are orthogonal, E(ηt ηt′ ) = I.
Mapping a process into a vector AR(1) is a very convenient trick, for
other calculations as well as forecasting. For example, Campbell and Shiller
(199x) study present values, i.e. Et (Σ_{j=1}^∞ λ^j xt+j ) where x = dividends, and
hence the present value should be the price. To compute such present values
from a VAR with xt as its first element, they map the VAR into a vector
AR(1). Then, the computation is easy: Et (Σ_{j=1}^∞ λ^j xt+j ) = (Σ_{j=1}^∞ λ^j A^j )xt =
λA(I − λA)^{−1} xt . Hansen and Sargent (1992) show how an unbelievable variety
of models beyond the simple ARMA and VAR I study here can be mapped
into the vector AR(1).
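A sketch of the present value calculation; λ, A, and the state below are illustrative, and the closed form assumes the eigenvalues of λA lie inside the unit circle so the sum converges:

```python
import numpy as np

# Present value E_t sum_{j>=1} lambda^j x_{t+j} from a vector AR(1):
# sum_{j>=1} lambda^j A^j = lambda*A (I - lambda*A)^{-1}, applied to the state.
lam = 0.96
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
x_t = np.array([1.0, 2.0])

pv_closed = lam * A @ np.linalg.inv(np.eye(2) - lam * A) @ x_t

# brute-force check by truncating the infinite sum
pv_sum = sum(lam**j * np.linalg.matrix_power(A, j) @ x_t for j in range(1, 2000))

print(np.allclose(pv_closed, pv_sum))  # True
```

The closed form and the truncated sum agree, which is the whole point of the AR(1) mapping: an infinite-horizon expectation collapses to one matrix inversion.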
5.3 Impulse-response function
The impulse response function is the path that x follows if it is kicked by a
single unit shock εt : εt−j = 0, εt = 1, εt+j = 0 for all j > 0. This function is interesting
for several reasons. First, it is another characterization of the behavior of
our models. Second, and more importantly, it allows us to start thinking
about “causes” and “effects”. For example, you might compute the response
of GNP to a shock to money in a GNP-M1 VAR and interpret the result as
the “effect” on GNP of monetary policy. I will study the cautions on this
interpretation extensively, but it’s clear that it’s interesting to learn how to
calculate the impulse-response.
For an AR(1), recall the model is xt = φxt−1 + εt , or xt = Σ_{j=0}^∞ φ^j εt−j .
Looking at the MA(∞) representation, we see that the impulse-response is
εt : 0 0 1 0 0 0 0
xt : 0 0 1 φ φ² φ³ ...
Similarly, for an MA(∞), xt = Σ_{j=0}^∞ θj εt−j ,
εt : 0 0 1 0 0 0 0
xt : 0 0 1 θ1 θ2 θ3 ...
As usual, vector processes work the same way. If we write a vector
MA(∞) representation as xt = B(L)εt , where εt ≡ [εyt εzt ]′ and B(L) ≡
B0 + B1 L + ..., then {B0 , B1 , ...} define the impulse-response function. Precisely,
B(L) = [byy (L) byz (L); bzy (L) bzz (L)],
so byy (L) gives the response of yt+k to a unit y shock εyt , byz (L) gives the
response of yt+k to a unit z shock, etc.
As with forecasts, MA(∞) representations are convenient for studying
impulse-responses analytically, but mapping to a vector AR(1) representa-
tion gives the most convenient way to calculate them in practice. Impulse-
response functions for a vector AR(1) look just like the scalar AR(1) given
above: for
xt = Axt−1 + Cεt ,
the response function is
C, AC, A²C, ..., A^k C, ...
Again, this can be calculated recursively: just keep multiplying by A. (If
you want the response of yt , and not the whole state vector, remember to
premultiply by [1 0 0 . . .] to pull off yt , the first element of the state vector.)
While this looks like the same kind of trivial response as the AR(1),
remember that A and C are matrices, so this simple formula can capture the
complicated dynamics of any finite order ARMA model. For example, an
AR(2) can have an impulse response with decaying sine waves.
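A sketch of the recursion for an illustrative AR(2) with complex roots, which produces exactly the decaying, oscillating response just described:

```python
import numpy as np

# Impulse-response of an AR(2) via the companion-form recursion C, AC, A^2 C, ...
phi1, phi2 = 1.5, -0.9          # complex roots => damped sine-wave response
A = np.array([[phi1, phi2],
              [1.0,  0.0]])
C = np.array([1.0, 0.0])

response = []
v = C.copy()
for k in range(30):
    response.append(v[0])        # first element pulls off y's response
    v = A @ v                    # just keep multiplying by A

# same numbers from the scalar difference equation x_k = phi1 x_{k-1} + phi2 x_{k-2}
x = [1.0, phi1]
for k in range(2, 30):
    x.append(phi1 * x[-1] + phi2 * x[-2])

print(np.allclose(response, x))  # True
```

The companion-form recursion reproduces the scalar difference equation, and the response changes sign as it decays, i.e. a damped sine wave.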
Three important properties of impulse-responses follow from these examples:
1. The MA(∞) representation is the same thing as the impulse-
response function.
This fact is very useful. To wit:
2. The easiest way to calculate an MA(∞) representation is to
simulate the impulse-response function.
Intuitively, one would think that impulse-responses have something to do
with forecasts. The two are related by:
3. The impulse response function is the same as Et (xt+j ) −
Et−1 (xt+j ).
Since the ARMA models are linear, the response to a unit shock if the value
of the series is zero is the same as the response to a unit shock on top of
whatever other shocks have hit the system. This property is not true of
nonlinear models!
Chapter 6
Stationarity and Wold
representation
6.1 Definitions
In calculating the moments of ARMA processes, I used the fact that the
moments do not depend on the calendar time:
E(xt ) = E(xs ) for all t and s
E(xt xt−j ) = E(xs xs−j ) for all t and s.
These properties are true for the invertible ARMA models, as you can show
directly. But they reflect a much more important and general property, as
we’ll see shortly. Let’s define it:
Definitions:
A process {xt } is strongly stationary or strictly stationary if the
joint probability distribution function of {xt−s , .., xt , . . . xt+s } is
independent of t for all s.
A process xt is weakly stationary or covariance stationary if E(xt ), E(xt²)
are finite and E(xt xt−j ) depends only on j and not on t.
Note that
1. Strong stationarity does not ⇒ weak stationarity. E(xt²) must be finite.
For example, an iid Cauchy process is strongly, but not covariance,
stationary.
2. Strong stationarity plus E(xt ), E(xt xt−j ) < ∞ ⇒ weak stationarity
3. Weak stationarity does not ⇒ strong stationarity. If the process is
not normal, other moments (E(xt xt−j xt−k )) might depend on t, so the
process might not be strongly stationary.
4. Weak stationarity plus normality ⇒ strong stationarity.
Strong stationarity is useful for proving some theorems. For example, a
nonlinear (measurable) function of a strongly stationary variable is strongly
stationary; this is not true of covariance stationarity. For most purposes,
weak or covariance stationarity is enough, and that’s the concept I will use
through the rest of these notes.
Stationarity is often misunderstood. For example, if the conditional co-
variances of a series vary over time, as in ARCH models, the series can still be
stationary. The definition merely requires that the unconditional covariances
are not a function of time. Many people use “nonstationary” interchangeably
with “has a unit root”. That is one form of nonstationarity, but there are
lots of others. Many people say a series is “nonstationary” if it has breaks
in trends, or if one thinks that the time-series process changed over time.
If we think that the trend-break or structural shift occurs at one point in
time, no matter how history comes out, they are right. However, if a series
is subject to occasional stochastic trend breaks or shifts in structure, then
the unconditional covariances will no longer have a time index, and the series
can be stationary.
6.2 Conditions for stationary ARMA’s
Which ARMA processes are stationary? First consider the MA processes
xt = Σ_{j=0}^∞ θj εt−j .
Recalling the formula for the variance, var(xt ) = Σ_{j=0}^∞ θj² σ², we see that
second moments exist if and only if the MA coefficients are square summable,
Stationary MA ⇔ Σ_{j=0}^∞ θj² < ∞.
If second moments exist, it’s easy to see that they are independent of the t
index.
Consider the AR(1). Inverting it to an MA, we get
xt = Σ_{j=0}^{k−1} φ^j εt−j + φ^k xt−k .
Clearly, unless | φ |< 1, we are not going to get square summable MA
coefficients, and hence the variance of x will again not be finite.
More generally, consider factoring an AR
A(L)xt = εt ,  A(L) = (1 − λ1 L)(1 − λ2 L) . . . , so (1 − λ1 L)(1 − λ2 L) . . . xt = εt .
For the variance to be finite, the AR lag polynomial must be invertible, or
|λi| < 1 for all i. A more common way of saying this is to factor A(L)
somewhat differently,
A(L) = constant (L − ζ1 )(L − ζ2 ) . . . .
The ζi are the roots of the lag polynomial, since A(z) = 0 when z = ζi . We
can rewrite the last equation as
A(L) = constant (−ζ1 )(1 − (1/ζ1 )L)(−ζ2 )(1 − (1/ζ2 )L) . . .
Thus the roots ζ and the λs must be related by
ζi = 1/λi .
Hence, the rule ”all | λ |< 1” means ”all | ζ |> 1”, or since λ and ζ can be
complex,
AR’s are stationary if all roots of the lag polynomial lie outside
the unit circle, i.e. if the lag polynomial is invertible.
Both statements of the requirement for stationarity are equivalent to
ARMAs are stationary if and only if the impulse-response function
eventually decays exponentially.
Stationarity does not require the MA polynomial to be invertible. That
means something else, described next.
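The root condition is easy to check numerically; the helper name below is hypothetical and the coefficients are illustrative:

```python
import numpy as np

# An AR x_t = phi1 x_{t-1} + ... + phip x_{t-p} + eps_t is stationary iff all
# roots of A(z) = 1 - phi1 z - ... - phip z^p lie outside the unit circle.
def ar_is_stationary(phis):
    # np.roots wants coefficients ordered from the highest power down:
    # [-phip, ..., -phi1, 1]
    coefs = np.r_[-np.array(phis)[::-1], 1.0]
    roots = np.roots(coefs)
    return bool(np.all(np.abs(roots) > 1.0))

print(ar_is_stationary([0.9]))         # root at 1/0.9 ~ 1.11, outside the circle
print(ar_is_stationary([1.5, -0.9]))   # complex roots with modulus 1/sqrt(0.9)
print(ar_is_stationary([1.1]))         # root at 1/1.1 < 1, explosive
```

The second example is the damped-sine AR(2) used earlier: its roots are complex but still outside the unit circle, so it is stationary.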
6.3 Wold Decomposition theorem
The above definitions are important because they define the range of ”sen-
sible” ARMA processes (invertible AR lag polynomials, square summable
MA lag polynomials). Much more importantly, they are useful to enlarge
our discussion past ad-hoc linear combinations of iid Gaussian errors, as as-
sumed so far. Imagine any stationary time series, for example a non-linear
combination of serially correlated lognormal errors. It turns out that, so long
as the time series is covariance stationary, it has a linear ARMA representa-
tion! Therefore, the ad-hoc ARMA models we have studied so far turn out
to be a much more general class than you might have thought. This is an
enormously important fact known as the
Wold Decomposition Theorem: Any mean zero covariance
stationary process {xt } can be represented in the form
xt = Σ_{j=0}^∞ θj εt−j + ηt
where
1. εt ≡ xt − P (xt | xt−1 , xt−2 , . . .).
2. P (εt | xt−1 , xt−2 , . . .) = 0, E(εt xt−j ) = 0, E(εt ) = 0, E(εt²) = σ² (same
for all t), E(εt εs ) = 0 for all t 6= s,
3. All the roots of θ(L) are on or outside the unit circle, i.e. (unless there
is a unit root) the MA polynomial is invertible.
4. Σ_{j=0}^∞ θj² < ∞, θ0 = 1
5. {θj } and {εs } are unique.
6. ηt is linearly deterministic, i.e. ηt = P (ηt | xt−1 , . . .).
Property 1) just defines εt as the linear forecast errors of xt . P (◦|◦) denotes
projection, i.e. the fitted value of a linear regression of the left hand variable
on the right hand variables. Thus, if we start with such a regression, say
xt = φ1 xt−1 + φ2 xt−2 + ... + εt , 1) merely solves this regression for εt . Properties
2) result from the definition of εt as regression errors, and the fact from
1) that we can recover εt from current and lagged x’s, so εt is orthogonal
to lagged ε as well as lagged x. Property 3) results from the fact that we
can recover εt from current and lagged x: if θ(L) were not invertible, we
couldn’t solve for εt in terms of current and lagged x. Property 4) comes
from stationarity: if the θ were not square summable, the variance would be
infinite. Property 6) allows for things like sine waves in the series; for example,
an AR(1) plus a sine wave is covariance stationary. We usually specify that
the process {xt } is linearly regular, which just means that deterministic
components ηt have been removed.
Sargent (1987), Hansen and Sargent (1991) and Anderson (1972) all con-
tain proofs of the theorem. The proofs are very interesting, and introduce
you to the Hilbert space machinery, which is the hot way to think about
time series. The proof is not that mysterious. All the theorem says is that xt
can be written as a sum of its forecast errors. If there was a time zero with
information I0 and P (xj | I0 ) = 0, this would be obvious:
x1 = x1 − P (x1 | I0 ) = ε1
x2 = [x2 − P (x2 | x1 , I0 )] + P (x2 | x1 , I0 ) = ε2 + θ1 x1 = ε2 + θ1 ε1
x3 = [x3 − P (x3 | x2 , x1 , I0 )] + P (x3 | x2 , x1 , I0 ) = ε3 + φ1 x2 + φ2 x1
= ε3 + φ1 (ε2 + θ1 ε1 ) + φ2 ε1 = ε3 + φ1 ε2 + (φ1 θ1 + φ2 )ε1
and so forth. You can see how we are getting each x as a linear function of
past linear prediction errors. We could do this even for nonstationary x; the
stationarity of x means that the coefficients on lagged ε are independent of
the time index.
6.3.1 What the Wold theorem does not say
Here are a few things the Wold theorem does not say:
1) The εt need not be normally distributed, and hence need not be iid.
2) Though P (εt | xt−j ) = 0, it need not be true that E(εt | xt−j ) = 0.
The projection operator P (xt | xt−1 , . . .) finds the best guess of xt (minimum
squared error loss) from linear combinations of past xt , i.e. it fits a linear re-
gression. The conditional expectation operator E(xt | xt−1 , . . .) is equivalent
to finding the best guess of xt using linear and all nonlinear combinations of
past xt , i.e., it fits a regression using all linear and nonlinear transformations
of the right hand variables. Obviously, the two are different.
3) The shocks need not be the ”true” shocks to the system. If the true
xt is not generated by linear combinations of past xt plus a shock, then the
Wold shocks will be different from the true shocks.
4) Similarly, the Wold MA(∞) is a representation of the time series, one
that fully captures its second moment properties, but not the representation
of the time series. Other representations may capture deeper properties of
the system. The uniqueness result only states that the Wold representation
is the unique linear representation where the shocks are linear forecast errors.
Non-linear representations, or representations in terms of non-forecast error
shocks are perfectly possible.
Here are some examples:
A) Nonlinear dynamics. The true system may be generated by a nonlinear
difference equation xt+1 = g(xt , xt−1 , . . .) + ηt+1 . Obviously, when we fit a
linear approximation as in the Wold theorem, xt = P (xt | xt−1 , xt−2 , . . .) +
εt = φ1 xt−1 + φ2 xt−2 + . . . + εt , we will find that εt 6= ηt . As an extreme
example, consider the random number generator in your computer. This is
a deterministic nonlinear system, ηt = 0. Yet, if you fit arbitrarily long AR’s
to it, you will get errors! This is another example in which E(.) and P (.) are
not the same thing.
B) Non-invertible shocks. Suppose the true system is generated by
xt = ηt + 2ηt−1 ,  ηt iid, ση² = 1
This is a stationary process. But the MA lag polynomial is not invertible
(we can’t express the η shocks as x forecast errors), so it can’t be the Wold
representation. To find the Wold representation of the same process, match
autocovariance functions to a process xt = εt + θεt−1 :
E(xt²) = (1 + 4) = 5 = (1 + θ²)σ²
E(xt xt−1 ) = 2 = θσ²
Solving,
θ/(1 + θ²) = 2/5 ⇒ θ = {2 or 1/2}
and
σ² = 2/θ = {1 or 4}
The original model θ = 2, ση² = 1 is one possibility. But θ = 1/2, σ² = 4
works as well, and that root is invertible. The Wold representation is unique:
if you’ve found one invertible MA, it must be the Wold representation.
Note that the impulse-response function of the original model is 1, 2; while
the impulse response function of the Wold representation is 1, 1/2. Thus,
the Wold representation, which is what you would recover from a VAR does
not give the “true” impulse-response.
Also, the Wold errors εt are recoverable from a linear function of current
and past xt : εt = Σ_{j=0}^∞ (−.5)^j xt−j . The true shocks are not. In this example,
the true shocks are linear functions of future xt : ηt = −Σ_{j=1}^∞ (−.5)^j xt+j !
This example holds more generally: any MA(∞) can be reexpressed as
an invertible MA(∞).
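A tiny numerical check of the example above:

```python
# Autocovariances of an MA(1) x_t = u_t + theta*u_{t-1} with var(u) = sigma2:
# gamma0 = (1+theta^2)*sigma2, gamma1 = theta*sigma2, gamma_k = 0 for k > 1.
def ma1_autocov(theta, sigma2):
    return [(1 + theta**2) * sigma2, theta * sigma2, 0.0]

print(ma1_autocov(2.0, 1.0))   # the "true" non-invertible model
print(ma1_autocov(0.5, 4.0))   # the invertible Wold representation
```

Both lines print [5.0, 2.0, 0.0]: identical autocovariances, hence the same process, even though the impulse-responses differ (1, 2 versus 1, 1/2).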
6.4 The Wold MA(∞) as another fundamental representation
One of the lines of the Wold theorem stated that the Wold MA(∞) repre-
sentation was unique. This is a convenient fact, because it means that the
MA(∞) representation in terms of linear forecast errors (with the autocorre-
lation function and spectral density) is another fundamental representation.
If two time series have the same Wold representation, they are the same time
series (up to second moments/linear forecasting).
This is the same property that we found for the autocorrelation function,
and can be used in the same way.
Chapter 7
VARs: orthogonalization,
variance decomposition,
Granger causality
7.1 Orthogonalizing VARs
The impulse-response function of a VAR is slightly ambiguous. As we will
see, you can represent a time series with arbitrary linear combinations of
any set of impulse responses. “Orthogonalization” refers to the process of
selecting one of the many possible impulse-response functions that you find
most interesting to look at. It is also technically convenient to transform
VARs to systems with orthogonal error terms.
7.1.1 Ambiguity of impulse-response functions
Start with a VAR expressed in vector notation, as would be recovered from
regressions of the elements of xt on their lags:
A(L)xt = εt ,  A(0) = I,  E(εt εt′ ) = Σ.  (7.1)
Or, in moving average notation,
xt = B(L)εt ,  B(0) = I,  E(εt εt′ ) = Σ  (7.2)
where B(L) = A(L)^{−1} . Recall that B(L) gives us the response of xt to unit
impulses to each of the elements of εt . Since A(0) = I, B(0) = I as well.
But we could calculate instead the responses of xt to new shocks that
are linear combinations of the old shocks. For example, we could ask for the
response of xt to unit movements in εyt and εzt + .5εyt . (Just why you might
want to do this might not be clear at this point, but bear with me.) This is
easy to do. Call the new shocks ηt so that η1t = εyt , η2t = εzt + .5εyt , or
ηt = Qεt ,  Q = [1 0; .5 1].
We can write the moving average representation of our VAR in terms of these
new shocks as xt = B(L)Q^{−1} Qεt or
xt = C(L)ηt . (7.3)
where C(L) = B(L)Q−1 . C(L) gives the response of xt to the new shocks
ηt . As an equivalent way to look at the operation, you can see that C(L) is
a linear combination of the original impulse-responses B(L).
So which linear combinations should we look at? Clearly the data are
no help here–the representations (7.2) and (7.3) are observationally equiv-
alent, since they produce the same series xt . We have to decide which linear
combinations we think are the most interesting. To do this, we state a set of
assumptions, called orthogonalization assumptions, that uniquely pin down
the linear combination of shocks (or impulse-response functions) that we find
most interesting.
7.1.2 Orthogonal shocks
The first, and almost universal, assumption is that the shocks should be
orthogonal (uncorrelated). If the two shocks εyt and εzt are correlated, it doesn’t
make much sense to ask “what if εyt has a unit impulse” with no change in εzt ,
since the two usually come at the same time. More precisely, we would like
to start thinking about the impulse-response function in causal terms–the
“effect” of money on GNP, for example. But if the money shock is correlated
with the GNP shock, you don’t know if the response you’re seeing is the
response of GNP to money, or (say) to a technology shocks that happen to
come at the same time as the money shock (maybe because the Fed sees the
GNP shock and accommodates it). Additionally, it is convenient to rescale
the shocks so that they have a unit variance.
Thus, we want to pick Q so that E(ηt ηt′ ) = I. To do this, we need a Q
such that
Q^{−1} Q^{−1}′ = Σ
With that choice of Q,
E(ηt ηt′ ) = E(Qεt εt′ Q′ ) = QΣQ′ = I
One way to construct such a Q is via the Choleski decomposition. (Gauss
has a command CHOLESKI that produces this decomposition.)
Unfortunately there are many different $Q$'s that act as "square root" matrices for $\Sigma$. (Given one such $Q$, you can form another, $Q^*$, by $Q^* = RQ$, where $R$ is an orthogonal matrix, $RR' = I$: then $Q^*\Sigma Q^{*\prime} = RQ\Sigma Q'R' = RR' = I$.)
Which of the many possible Q’s should we choose?
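As a quick numerical sketch of this construction (the covariance matrix $\Sigma$ below is made up for illustration), we can build one valid $Q$ from the Choleski factor of $\Sigma$ and verify both the orthogonalizing condition and the non-uniqueness under rotation:

```python
import numpy as np

# Hypothetical 2x2 innovation covariance matrix (illustrative numbers only).
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

# The Choleski factor P is lower triangular with P P' = Sigma.
P = np.linalg.cholesky(Sigma)

# One valid orthogonalizing matrix is Q = P^{-1}: then Q Sigma Q' = I.
Q = np.linalg.inv(P)
assert np.allclose(Q @ Sigma @ Q.T, np.eye(2))

# Any rotation R (R R' = I) gives another valid Q* = R Q.
theta = 0.3
R = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
Q_star = R @ Q
assert np.allclose(Q_star @ Sigma @ Q_star.T, np.eye(2))
```

Both `Q` and `Q_star` satisfy the orthogonality requirement, which is exactly why further assumptions are needed to pin one down.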
We have exhausted our possibilities of playing with the error term, so we
now specify desired properties of the moving average C(L) instead. Since
C(L) = B(L)Q−1 , specifying a desired property of C(L) can help us pin down
Q. To date, using “theory” (in a very loose sense of the word) to specify
features of C(0) and C(1) have been the most popular such assumptions.
Maybe you can find other interesting properties of C(L) to specify.
7.1.3 Sims orthogonalization–Specifying C(0)
Sims (1980) suggests we specify properties of C(0), which gives the instan-
taneous response of each variable to each orthogonalized shock η. In our
original system, (7.2) B(0) = I. This means that each shock only affects its
own variable contemporaneously. Equivalently, A(0) = I–in the autoregres-
sive representation (7.1), neither variable appears contemporaneously in the
other variable’s regression.
But since the original shocks are correlated, the orthogonalizing matrix $Q$ will have off-diagonal elements. Thus, $C(0)$ cannot $= I$. This
means that some shocks will have effects on more than one variable. Our job
is to specify this pattern.
Sims suggests that we choose a lower triangular $C(0)$:
$$\begin{bmatrix} y_t \\ z_t \end{bmatrix} = \begin{bmatrix} C_{0yy} & 0 \\ C_{0zy} & C_{0zz} \end{bmatrix} \begin{bmatrix} \eta_{1t} \\ \eta_{2t} \end{bmatrix} + C_1 \eta_{t-1} + \dots
$$
As you can see, this choice means that the second shock η2t does not affect
the first variable, yt , contemporaneously. Both shocks can affect zt contem-
poraneously. Thus, all the contemporaneous correlation between the original
shocks $\epsilon_t$ has been folded into $C_{0zy}$.
We can also understand the orthogonalization assumption in terms of
the implied autoregressive representation. In the original VAR, A(0) = I, so
contemporaneous values of each variable do not appear in the other variable’s
equation. A lower triangular C(0) implies that contemporaneous yt appears
in the zt equation, but zt does not appear in the yt equation. To see this, call
the orthogonalized autoregressive representation D(L)xt = ηt , i.e., D(L) =
C(L)−1 . Since the inverse of a lower triangular matrix is also lower triangular,
D(0) is lower triangular, i.e.
$$\begin{bmatrix} D_{0yy} & 0 \\ D_{0zy} & D_{0zz} \end{bmatrix} \begin{bmatrix} y_t \\ z_t \end{bmatrix} + D_1 x_{t-1} + \dots = \eta_t$$
or
$$\begin{aligned} D_{0yy}\, y_t &= -D_{1yy}\, y_{t-1} - D_{1yz}\, z_{t-1} + \eta_{1t} \\ D_{0zz}\, z_t &= -D_{0zy}\, y_t - D_{1zy}\, y_{t-1} - D_{1zz}\, z_{t-1} + \eta_{2t} \end{aligned} \tag{7.4}$$
As another way to understand Sims orthogonalization, note that it is nu-
merically equivalent to estimating the system by OLS with contemporaneous
yt in the zt equation, but not vice versa, and then scaling each equation so
that the error variance is one. To see this point, remember that OLS esti-
mates produce residuals that are uncorrelated with the right hand variables
by construction (this is their defining property). Thus, suppose we run OLS
on
$$\begin{aligned} y_t &= a_{1yy}\, y_{t-1} + \dots + a_{1yz}\, z_{t-1} + \dots + \eta_{yt} \\ z_t &= a_{0zy}\, y_t + a_{1zy}\, y_{t-1} + \dots + a_{1zz}\, z_{t-1} + \dots + \eta_{zt} \end{aligned} \tag{7.5}$$
The first OLS residual is defined by $\eta_{yt} = y_t - E(y_t \mid y_{t-1}, \dots, z_{t-1}, \dots)$, so $\eta_{yt}$ is a linear combination of $\{y_t, y_{t-1}, \dots, z_{t-1}, \dots\}$. OLS residuals are orthogonal to right hand variables, so $\eta_{zt}$ is orthogonal to any linear combination of $\{y_t, y_{t-1}, \dots, z_{t-1}, \dots\}$, by construction. Hence, $\eta_{yt}$ and $\eta_{zt}$ are uncorrelated with each other. $a_{0zy}$ captures all of the contemporaneous correlation of news in $y_t$ and news in $z_t$.
In summary, one can uniquely specify Q and hence which linear com-
bination of the original shocks you will use to plot impulse-responses by
the requirements that 1) the errors are orthogonal and 2) the instantaneous
response of one variable to the other shock is zero. Assumption 2) is equiv-
alent to 3) The VAR is estimated by OLS with contemporaneous y in the
z equation but not vice versa.
Happily, the Choleski decomposition produces a lower triangular $Q$. Since $C(0) = B(0)Q^{-1} = Q^{-1}$,
the Choleski decomposition produces the Sims orthogonalization already, so
you don’t have to do any more work. (You do have to decide what order to
put the variables in the VAR.)
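A minimal numpy sketch of this recipe (the MA matrices $B_j$ and $\Sigma$ below are hypothetical): the Choleski-orthogonalized responses are just $C_j = B_j P$, where $P = \mathrm{chol}(\Sigma) = Q^{-1}$, and $C_0$ comes out lower triangular automatically:

```python
import numpy as np

# Hypothetical reduced-form MA coefficients B_j (B_0 = I) and error covariance.
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
B = [np.eye(2), np.array([[0.5, 0.1],
                          [0.2, 0.4]])]  # B_0, B_1, ...

# Sims/Choleski orthogonalization: C(L) = B(L) Q^{-1} with Q^{-1} = chol(Sigma),
# so C(0) = Q^{-1} is lower triangular -- the second shock does not move y_t.
P = np.linalg.cholesky(Sigma)      # P = Q^{-1}
C = [Bj @ P for Bj in B]

assert np.allclose(C[0], np.tril(C[0]))   # C(0) lower triangular
```

Note the ordering convention: putting a different variable first in the VAR changes which row of $C(0)$ gets the zero.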
Ideally, one tries to use economic theory to decide on the order of orthog-
onalization. For example, (reference) specifies that the Fed cannot see GNP
until the end of the quarter, so money cannot respond within the quarter
to a GNP shock. As another example, Cochrane (1993) specifies that the
instantaneous response of consumption to a GNP shock is zero, in order to
identify a movement in GNP that consumers regard as transitory.
7.1.4 Blanchard-Quah orthogonalization: restrictions on C(1)
Rather than restrict the immediate response of one variable to another shock,
Blanchard and Quah (1988) suggest that it is interesting to examine shocks
defined so that the long-run response of one variable to another shock is zero.
If a system is specified in changes, ∆xt = C(L)ηt , then C(1) gives the long-
run response of the levels of xt to η shocks. Blanchard and Quah argued that
“demand” shocks have no long-run effect on GNP. Thus, they require C(1)
to be lower diagonal in a VAR with GNP in the first equation. We find the
required orthogonalizing matrix Q from C(1) = B(1)Q−1 .
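One standard way to construct such a $Q$ numerically (a sketch with made-up matrices): take the Choleski factor of $B(1)\Sigma B(1)'$ as the lower triangular long-run response $C(1)$, then back out $Q^{-1} = B(1)^{-1}C(1)$, which automatically satisfies $Q^{-1}Q^{-1\prime} = \Sigma$:

```python
import numpy as np

# Hypothetical long-run MA matrix B(1) and error covariance Sigma.
B1 = np.array([[1.2, 0.4],
               [0.3, 0.9]])
Sigma = np.array([[1.0, 0.2],
                  [0.2, 0.8]])

# Blanchard-Quah: pick Q so that C(1) = B(1) Q^{-1} is lower triangular
# ("demand" shocks have no long-run effect on the first variable).
C1 = np.linalg.cholesky(B1 @ Sigma @ B1.T)  # lower triangular long-run response
Q_inv = np.linalg.solve(B1, C1)             # Q^{-1} = B(1)^{-1} C(1)

# Check the orthogonality condition Q^{-1} Q^{-1}' = Sigma.
assert np.allclose(Q_inv @ Q_inv.T, Sigma)
```

This works because $Q^{-1}Q^{-1\prime} = B(1)^{-1}C(1)C(1)'B(1)^{-1\prime} = B(1)^{-1}B(1)\Sigma B(1)'B(1)^{-1\prime} = \Sigma$.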
7.2 Variance decompositions
In the orthogonalized system we can compute an accounting of forecast error
variance: what percent of the k step ahead forecast error variance is due to
which variable. To do this, start with the moving average representation of
an orthogonalized VAR
$$x_t = C(L)\eta_t, \qquad E(\eta_t\eta_t') = I.$$
The one step ahead forecast error variance is
$$\epsilon_{t+1} = x_{t+1} - E_t(x_{t+1}) = C_0 \eta_{t+1} = \begin{bmatrix} c_{yy,0} & c_{yz,0} \\ c_{zy,0} & c_{zz,0} \end{bmatrix} \begin{bmatrix} \eta_{y,t+1} \\ \eta_{z,t+1} \end{bmatrix}.$$
(In the right hand equality, I denote $C(L) = C_0 + C_1 L + C_2 L^2 + \dots$, and the elements of $C(L)$ as $c_{yy,0} + c_{yy,1}L + c_{yy,2}L^2 + \dots$, etc.) Thus, since the $\eta$ are uncorrelated and have unit variance,
$$\mathrm{var}_t(y_{t+1}) = c_{yy,0}^2\,\sigma^2(\eta_y) + c_{yz,0}^2\,\sigma^2(\eta_z) = c_{yy,0}^2 + c_{yz,0}^2$$
and similarly for $z$. Thus, $c_{yy,0}^2$ gives the amount of the one-step ahead forecast error variance of $y$ due to the $\eta_y$ shock, and $c_{yz,0}^2$ gives the amount due to the $\eta_z$ shock. (Actually, one usually reports fractions $c_{yy,0}^2/(c_{yy,0}^2 + c_{yz,0}^2)$.)
More formally, we can write
$$\mathrm{var}_t(x_{t+1}) = C_0 C_0'.$$
Define
$$I_1 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad I_2 = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}, \text{ etc.}$$
Then, the part of the one step ahead forecast error variance due to the first ($y$) shock is $C_0 I_1 C_0'$, the part due to the second ($z$) shock is $C_0 I_2 C_0'$, etc. Check for yourself that these parts add up, i.e. that
$$C_0 C_0' = C_0 I_1 C_0' + C_0 I_2 C_0' + \dots$$
You can think of $I_\tau$ as a new covariance matrix in which all shocks but the $\tau$th are turned off. Then the part of the variance of forecast errors due to the $\tau$th shock alone is obviously $C_0 I_\tau C_0'$.
Generalizing to $k$ steps ahead is easy.
$$x_{t+k} - E_t(x_{t+k}) = C_0 \eta_{t+k} + C_1 \eta_{t+k-1} + \dots + C_{k-1} \eta_{t+1}$$
$$\mathrm{var}_t(x_{t+k}) = C_0 C_0' + C_1 C_1' + \dots + C_{k-1} C_{k-1}'$$
Then
$$v_{k,\tau} = \sum_{j=0}^{k-1} C_j I_\tau C_j'$$
is the variance of $k$ step ahead forecast errors due to the $\tau$th shock, and the variance is the sum of these components, $\mathrm{var}_t(x_{t+k}) = \sum_\tau v_{k,\tau}$.
It is also interesting to compute the decomposition of the actual variance of the series. Either directly from the MA representation, or by recognizing the variance as the limit of the variance of $k$-step ahead forecasts, we obtain that the contribution of the $\tau$th shock to the variance of $x_t$ is given by
$$v_\tau = \sum_{j=0}^{\infty} C_j I_\tau C_j'$$
and $\mathrm{var}(x_t) = \sum_\tau v_\tau$.
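The accounting above is mechanical once the orthogonalized MA matrices are in hand. A minimal sketch (the $C_j$ matrices are hypothetical numbers):

```python
import numpy as np

def variance_decomposition(C, k):
    """Fraction of k-step forecast error variance due to each orthogonal shock.

    C is a list of MA coefficient matrices C_0, C_1, ... of an orthogonalized
    VAR (E eta eta' = I); returns an (nvar, nshock) array of fractions.
    """
    n = C[0].shape[0]
    v = np.zeros((n, n))
    for tau in range(n):
        I_tau = np.zeros((n, n))
        I_tau[tau, tau] = 1.0
        # v_{k,tau} = sum_{j<k} C_j I_tau C_j'; keep only the diagonal
        v[:, tau] = sum(Cj @ I_tau @ Cj.T for Cj in C[:k]).diagonal()
    return v / v.sum(axis=1, keepdims=True)

# Illustrative C_0, C_1 (made-up numbers; C_0 lower triangular as in Sims).
C = [np.array([[1.0, 0.0], [0.5, 0.8]]),
     np.array([[0.4, 0.2], [0.1, 0.3]])]
shares = variance_decomposition(C, k=2)
assert np.allclose(shares.sum(axis=1), 1.0)   # the decomposition adds up
```

Each row of `shares` reports, for one variable, the fraction of its forecast error variance attributed to each shock.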
7.3 VAR’s in state space notation
For many of these calculations, it’s easier to express the VAR as an AR(1)
in state space form. The only refinement relative to our previous mapping is
how to include orthogonalized shocks $\eta$. Since $\eta_t = Q\epsilon_t$, we simply write the VAR
$$x_t = \Phi_1 x_{t-1} + \Phi_2 x_{t-2} + \dots + \epsilon_t$$
as
$$\begin{bmatrix} x_t \\ x_{t-1} \\ x_{t-2} \\ \vdots \end{bmatrix} = \begin{bmatrix} \Phi_1 & \Phi_2 & \dots \\ I & 0 & \dots \\ 0 & I & \dots \\ \vdots & & \ddots \end{bmatrix} \begin{bmatrix} x_{t-1} \\ x_{t-2} \\ x_{t-3} \\ \vdots \end{bmatrix} + \begin{bmatrix} Q^{-1} \\ 0 \\ 0 \\ \vdots \end{bmatrix} \begin{bmatrix} \eta_t \end{bmatrix}$$
$$x_t = A x_{t-1} + C \eta_t, \qquad E(\eta_t\eta_t') = I$$
The impulse-response function is $C, AC, A^2C, \dots$ and can be found recursively from
$$IR_0 = C, \qquad IR_j = A\, IR_{j-1}.$$
If $Q^{-1}$ is lower diagonal, then only the first shock affects the first variable, as before. Recall from forecasting AR(1)'s that
$$\mathrm{var}_t(x_{t+k}) = \sum_{j=0}^{k-1} A^j C C' A^{j\prime}.$$
Therefore,
$$v_{k,\tau} = \sum_{j=0}^{k-1} A^j C I_\tau C' A^{j\prime}$$
gives the variance decomposition: the contribution of the $\tau$th shock to the $k$-step ahead forecast error variance. It too can be found recursively, from
$$v_{1,\tau} = C I_\tau C', \qquad v_{k,\tau} = C I_\tau C' + A\, v_{k-1,\tau}\, A'.$$
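These recursions are easy to check numerically. A sketch for a hypothetical two-variable VAR(1), where the companion matrix is just $\Phi_1$ itself (all numbers made up):

```python
import numpy as np

# Companion-form sketch: x_t = A x_{t-1} + C eta_t for a 2-variable VAR(1).
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.4]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 0.5]])
A = Phi1                          # VAR(1): companion matrix is Phi1 itself
C = np.linalg.cholesky(Sigma)     # C = Q^{-1}, lower triangular

# Impulse responses: IR_0 = C, IR_j = A IR_{j-1}.
IR = [C]
for j in range(1, 5):
    IR.append(A @ IR[-1])

# k-step forecast error variance due to shock tau, recursively:
# v_{1,tau} = C I_tau C',  v_{k,tau} = C I_tau C' + A v_{k-1,tau} A'.
def v_k(tau, k):
    I_tau = np.zeros((2, 2)); I_tau[tau, tau] = 1.0
    v = C @ I_tau @ C.T
    for _ in range(k - 1):
        v = C @ I_tau @ C.T + A @ v @ A.T
    return v

# The parts add up to the total k-step forecast error variance.
total = v_k(0, 3) + v_k(1, 3)
direct = sum(IRj @ IRj.T for IRj in IR[:3])
assert np.allclose(total, direct)
```

The final assertion verifies that the recursive decomposition matches the direct sum $\sum_{j<k} A^jCC'A^{j\prime}$.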
7.4 Tricks and problems:
1. Suppose you multiply the original VAR by an arbitrary lower triangular
Q. This produces a system of the same form as (7.4). Why would OLS (7.5)
not recover this system, instead of the system formed by multiplying the
original VAR by the inverse of the Choleski decomposition of Σ?
2. Suppose you start with an orthogonalized representation
$$x_t = C(L)\eta_t, \qquad E(\eta_t\eta_t') = I.$$
Show that you can transform to other orthogonal representations of the shocks by an orthogonal matrix, i.e. a matrix $Q$ such that $QQ' = I$.
3. Consider a two-variable cointegrated VAR. $y$ and $c$ are the variables; $(1-L)y_t$, $(1-L)c_t$, and $y_t - c_t$ are stationary, and $c_t$ is a random walk. Show that in this system, Blanchard-Quah and Sims orthogonalization produce the same result.
4. Show that the Sims orthogonalization is equivalent to requiring that
the one-step ahead forecast error variance of the first variable is all due to
the first shock, and so forth.
1. The system formed by multiplying by an arbitrary lower triangular $Q$ does not (necessarily) have a diagonal error covariance matrix, and so is not the same as the system recovered by the OLS regressions (7.5), even though the same variables are on the right hand side. Moral: watch the properties of the error terms as well as the properties of $C(0)$ or $B(0)$!
2. We want to transform to shocks $\xi_t$ such that $E(\xi_t\xi_t') = I$. To do it, $E(\xi_t\xi_t') = E(Q\eta_t\eta_t'Q') = QQ'$, which had better be $I$. Orthogonal matrices rotate vectors without stretching or shrinking them. For example, you can verify that
$$Q = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}$$
rotates vectors counterclockwise by $\theta$. This requirement means that the columns of $Q$ must be orthogonal, and that if you multiply $Q$ by two orthogonal vectors, the new vectors will still be orthogonal. Hence the name.
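A quick numerical check of the rotation property (a numpy sketch; the angle and vector are arbitrary):

```python
import numpy as np

# A rotation by theta is orthogonal: Q Q' = I, and it preserves lengths.
theta = 0.7
Q = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
assert np.allclose(Q @ Q.T, np.eye(2))

v = np.array([3.0, 4.0])
assert np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v))  # no stretching
```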
3. Write the $y, c$ system as $\Delta x_t = B(L)\epsilon_t$. $y, c$ cointegrated implies that $c$ and $y$ have the same long-run response to any shock: $B_{cc}(1) = B_{yc}(1)$, $B_{cy}(1) = B_{yy}(1)$. A random walk means that the immediate response of $c$ to any shock equals its long run response: $B_{ci}(0) = B_{ci}(1)$, $i = c, y$. Hence, $B_{cy}(0) = B_{cy}(1)$. Thus, $B(0)$ is lower triangular if and only if $B(1)$ is lower triangular.
$c$ a random walk is sufficient, but only the weaker condition $B_{ci}(0) = B_{ci}(1)$, $i = c, y$ is necessary. $c$'s response to a shock could wiggle, so long as it ends at the same place it starts.
4. If $C(0)$ is lower triangular, then the upper left hand element of $C(0)C(0)'$ is $C(0)_{11}^2$.
7.5 Granger Causality
It might happen that one variable has no response to the shocks in the
other variable. This particular pattern in the impulse-response function has
attracted wide attention. In this case we say that the shock variable fails to
Granger cause the variable that does not respond.
The first thing you learn in econometrics is a caution that putting $x$ on the right hand side of $y = x\beta + \epsilon$ doesn't mean that $x$ "causes" $y$. (The convention that causes go on the right hand side is merely a hope that one set of causes, $x$, might be orthogonal to the other causes $\epsilon$.) Then you
learn that ”causality” is not something you can test for statistically, but
must be known a priori.
Granger causality attracted a lot of attention because it turns out that
there is a limited sense in which we can test whether one variable “causes”
another and vice versa.
7.5.1 Basic idea
The most natural definition of ”cause” is that causes should precede effects.
But this need not be the case in time-series.
Consider an economist who windsurfs.1 Windsurfing is a tiring activity,
so he drinks a beer afterwards. With W = windsurfing and B = drink a beer,
a time line of his activity is given in the top panel of figure 7.1. Here we have
no difficulty determining that windsurfing causes beer consumption.
But now suppose that it takes 23 hours for our economist to recover
enough to even open a beer, and furthermore let’s suppose that he is lucky
enough to live somewhere (unlike Chicago) where he can windsurf every day.
Now his time line looks like the middle panel of figure 7.1. It’s still true that
W causes B, but B precedes W every day. The “cause precedes effects” rule
would lead you to believe that drinking beer causes one to windsurf!
How can one sort this out? The problem is that both B and W are regular
events. If one could find an unexpected W , and see whether an unexpected
B follows it, one could determine that W causes B, as shown in the bottom
1. The structure of this example is due to George Akerlof.
[Figure 7.1: Time lines of windsurfing (W) and beer (B). Top panel: each W immediately followed by a B. Middle panel: daily W and B, with each B preceding the next day's W. Bottom panel: irregular timing, in which an unexpected W is followed by an unexpected B.]
panel of figure 7.1. So here is a possible definition: if an unexpected W
forecasts B then we know that W “causes” B. This will turn out to be one
of several equivalent definitions of Granger causality.
7.5.2 Definition, autoregressive representation
Definition: wt Granger causes yt if wt helps to forecast yt , given
past yt .
Consider a vector autoregression
$$\begin{aligned} y_t &= a(L) y_{t-1} + b(L) w_{t-1} + \delta_t \\ w_t &= c(L) y_{t-1} + d(L) w_{t-1} + \nu_t \end{aligned}$$
our definition amounts to: $w_t$ does not Granger cause $y_t$ if $b(L) = 0$, i.e. if the vector autoregression is equivalent to
$$\begin{aligned} y_t &= a(L) y_{t-1} + \delta_t \\ w_t &= c(L) y_{t-1} + d(L) w_{t-1} + \nu_t \end{aligned}$$
We can state the definition alternatively in the autoregressive representation
$$\begin{bmatrix} y_t \\ w_t \end{bmatrix} = \begin{bmatrix} a(L) & b(L) \\ c(L) & d(L) \end{bmatrix} \begin{bmatrix} y_{t-1} \\ w_{t-1} \end{bmatrix} + \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$
$$\begin{bmatrix} I - La(L) & -Lb(L) \\ -Lc(L) & I - Ld(L) \end{bmatrix} \begin{bmatrix} y_t \\ w_t \end{bmatrix} = \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$
$$\begin{bmatrix} a^*(L) & b^*(L) \\ c^*(L) & d^*(L) \end{bmatrix} \begin{bmatrix} y_t \\ w_t \end{bmatrix} = \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$
Thus, $w$ does not Granger cause $y$ iff $b^*(L) = 0$, or if the autoregressive matrix lag polynomial is lower triangular.
7.5.3 Moving average representation
We can invert the autoregressive representation as follows:
$$\begin{bmatrix} y_t \\ w_t \end{bmatrix} = \frac{1}{a^*(L)d^*(L) - b^*(L)c^*(L)} \begin{bmatrix} d^*(L) & -b^*(L) \\ -c^*(L) & a^*(L) \end{bmatrix} \begin{bmatrix} \delta_t \\ \nu_t \end{bmatrix}$$
Thus, w does not Granger cause y if and only if the Wold moving average
matrix lag polynomial is lower triangular. This statement gives another in-
terpretation: if w does not Granger cause y, then y is a function of its shocks
only and does not respond to w shocks. w is a function of both y shocks and
w shocks.
Another way of saying the same thing is that w does not Granger cause y
if and only if y’s bivariate Wold representation is the same as its univariate
Wold representation, or w does not Granger cause y if the projection of y on
past y and w is the same as the projection of y on past y alone.
7.5.4 Univariate representations
Consider now the pair of univariate Wold representations
yt = e(L)ξt ξt = yt − P (yt | yt−1 , yt−2 , . . .);
wt = f (L)µt µt = wt − P (wt | wt−1 , wt−2 , . . .);
(I’m recycling letters: there aren’t enough to allow every representation to
have its own letters and shocks.) I repeated the properties of ξ and µ to
remind you what I mean.
wt does not Granger cause yt if E(µt ξt+j ) = 0 for all j > 0. In words, wt
Granger causes yt if the univariate innovations of wt are correlated with (and
hence forecast) the univariate innovations in yt . In this sense, our original
idea that wt causes yt if its movements precede those of yt was true iff it
applies to innovations, not the level of the series.
Proof: If w does not Granger cause y then the bivariate represen-
tation is
yt = a(L)δt
wt = c(L)δt +d(L)νt
The second line must equal the univariate representation of wt
wt = c(L)δt + d(L)νt = f (L)µt
Thus, µt is a linear combination of current and past δt and νt .
Since δt is the bivariate error, E(δt | yt−1 . . . wt−1 . . .) = E(δt |
δt−1 . . . νt−1 . . .) = 0. Thus, δt is uncorrelated with lagged δt and
νt , and hence lagged µt .
If E(µt ξt+j ) = 0, then past µ do not help forecast ξ, and thus
past µ do not help forecast y given past y. Since one can solve
for wt = f (L)µt (w and µ span the same space) this means past
w do not help forecast y given past y.
∎
7.5.5 Effect on projections
Consider the projection of $w_t$ on the entire $y$ process,
$$w_t = \sum_{j=-\infty}^{\infty} b_j y_{t-j} + \epsilon_t$$
Here is the fun fact:
The projection of $w_t$ on the entire $y$ process is equal to the projection of $w_t$ on current and past $y$ alone ($b_j = 0$ for $j < 0$) if and only if $w$ does not Granger cause $y$.
Proof: 1) w does not Granger cause y ⇒one sided. If w does not
Granger cause y, the bivariate representation is
yt = a(L)δt
wt = d(L)δt + e(L)νt
Remember, all these lag polynomials are one-sided. Inverting the
first,
δt = a(L)−1 yt
substituting in the second,
wt = d(L)a(L)−1 yt + e(L)νt .
Since δ and ν are orthogonal at all leads and lags (we assumed
contemporaneously orthogonal as well) e(L)νt is orthogonal to yt
at all leads and lags. Thus, the last expression is the projection
of w on the entire y process. Since d(L) and a(L)−1 are one sided
the projection is one sided in current and past y.
2) One sided ⇒ w does not Granger cause y . Write the univariate
representation of y as yt = a(L)ξt and the projection of w on the
whole y process
wt = h(L)yt + ηt
The given of the theorem is that h(L) is one sided. Since this is
the projection on the whole y process, E(yt ηt−s ) = 0 for all s.
ηt is potentially serially correlated, so it has a univariate repre-
sentation
ηt = b(L)δt .
Putting all this together, $y$ and $w$ have a joint representation
yt = a(L)ξt
wt = h(L)a(L)ξt +b(L)δt
It’s not enough to make it look right, we have to check the prop-
erties. a(L) and b(L) are one-sided, as is h(L) by assumption.
Since η is uncorrelated with y at all lags, δ is uncorrelated with
ξ at all lags. Since ξ and δ have the right correlation properties
and [y w] are expressed as one-sided lags of them, we have the
bivariate Wold representation.
∎
7.5.6 Summary
w does not Granger cause y if
1) Past $w$ do not help forecast $y$ given past $y$: coefficients on $w$ in a regression of $y$ on past $y$ and past $w$ are 0.
2) The autoregressive representation is lower triangular.
3) The bivariate Wold moving average representation is lower triangular.
4) Proj(wt |all yt ) = Proj(wt |current and past y)
5) Univariate innovations in w are not correlated with subsequent uni-
variate innovations in y.
6) The response of y to w shocks is zero
One could use any definition as a test. The easiest test is simply an F-test
on the w coefficients in the VAR. Monte Carlo evidence suggests that this
test is also the most robust.
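That F-test can be sketched in a few lines of numpy (the data are simulated and the one-lag specification and coefficient values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated example: w Granger causes y with one lag.
T = 500
w = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * w[t - 1] + rng.normal()

def rss(X, target):
    """Residual sum of squares from an OLS regression of target on X."""
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    return resid @ resid

# Unrestricted: y_t on a constant, y_{t-1}, w_{t-1}; restricted: drop w_{t-1}.
Y = y[1:]
X_u = np.column_stack([np.ones(T - 1), y[:-1], w[:-1]])
X_r = X_u[:, :2]
q = 1  # number of restrictions (one w lag)
F = ((rss(X_r, Y) - rss(X_u, Y)) / q) / (rss(X_u, Y) / (T - 1 - X_u.shape[1]))
# A large F rejects "w does not Granger cause y".
```

With more lags, `q` becomes the number of excluded $w$ coefficients and both regressions gain the extra lag terms.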
7.5.7 Discussion
It is not necessarily the case that one variable of a pair must Granger cause the other and not vice versa. We often find that each variable responds to
the other’s shock (as well as its own), or that there is feedback from each
variable to the other.
The first and most famous application of Granger causality was to the
question “does money growth cause changes in GNP?” Friedman and Schwartz
(19 ) documented a correlation between money growth and GNP, and a ten-
dency for money changes to lead GNP changes. But Tobin (19 ) pointed
out that, as with the windsurfing example given above, a phase lead and a
correlation may not indicate causality. Sims (1972) applied a Granger causal-
ity test, which answers Tobin’s criticism. In his first work, Sims found that
money Granger causes GNP but not vice versa, though he and others have
found different results subsequently (see below).
Sims also applied the last representation result to study regressions of
GNP on money,
$$y_t = \sum_{j=0}^{\infty} b_j m_{t-j} + \delta_t.$$
This regression is known as a “St. Louis Fed” equation. The coefficients were
interpreted as the response of y to changes in m; i.e. if the Fed sets m, {bj }
gives the response of y. Since the coefficients were “big”, the equations
implied that constant money growth rules were desirable.
The obvious objection to this statement is that the coefficients may reflect
reverse causality: the Fed sets money in anticipation of subsequent economic
growth, or the Fed sets money in response to past y. In either case, the error
term δ is correlated with current and lagged m’s so OLS estimates of the b’s
are inconsistent.
Sims (1972) ran causality tests essentially by checking the pattern of
correlation of univariate shocks, and by running regressions of y on past and
future m, and testing whether coefficients on future m are zero. He concluded
that the "St. Louis Fed" equation is correctly specified after all. Again, even if correctly estimated, the projection of $y$ on all $m$'s is not necessarily the answer to "what if the Fed changes $m$?".
Explained by shocks to
Var. of M1 IP WPI
M1 97 2 1
IP 37 44 18
WPI 14 7 80
Table 7.1: Sims variance accounting
7.5.8 A warning: why "Granger causality" is not "Causality"
“Granger causality” is not causality in a more fundamental sense because
of the possibility of other variables. If x leads to y with one lag but to z
with two lags, then y will Granger cause z in a bivariate system: y will
help forecast z since it reveals information about the “true cause” x. But it
does not follow that if you change y (by policy action), then a change in z
will follow. The weather forecast Granger causes the weather (say, rainfall
in inches), since the forecast will help to forecast rainfall amount given the
time-series of past rainfall. But (alas) shooting the forecaster will not stop
the rain. The reason is that forecasters use a lot more information than past
rainfall.
This wouldn’t be such a problem if the estimated pattern of causality in
macroeconomic time series was stable over the inclusion of several variables.
But it often is. A beautiful example is due to Sims (1980). Sims computed
a VAR with money, industrial production and wholesale price indices. He
summarized his results by a 48 month ahead forecast error variance, shown
in table 7.1
The first row verifies that M 1 is exogenous: it does not respond to the
other variables’ shocks. The second row shows that M1 ”causes” changes in
IP , since 37% of the 48 month ahead variance of IP is due to M1 shocks.
The third row is a bit of a puzzle: WPI also seems exogenous, and not too
influenced by M 1.
Table 7.2 shows what happens when we add a further variable, the interest
rate. Now, the second row shows a substantial response of money to interest
Explained by shocks to
Var of R M1 WPI IP
R 50 19 4 28
M1 56 42 1 1
WPI 2 32 60 6
IP 30 4 14 52
Table 7.2: Sims variance accounting including interest rates
rate shocks. It’s certainly not exogenous, and one could tell a story about the
Fed’s attempts to smooth interest rates. In the third row, we now find that
M does influence WPI. And, worst of all, the fourth row shows that M does
not influence IP ; the interest rate does. Thus, interest rate changes seem to
be the driving force of real fluctuations, and money just sets the price level!
However, later authors have interpreted these results to show that interest
rates are in fact the best indicators of the Fed’s monetary stance.
Notice that Sims gives an economic measure of feedback (forecast error
variance decomposition) rather than F-tests for Granger causality. Since
the first flush of optimism, economists have become less interested in the
pure hypothesis of no Granger causality at all, and more interested in simply
quantifying how much feedback exists from one variable to another. And
sensibly so.
Any variance can be broken down by frequency. Geweke (19 ) shows how
to break the variance decomposition down by frequency, so you get measures
of feedback at each frequency. This measure can answer questions like “does
the Fed respond to low or high frequency movements in GNP?”, etc.
7.5.9 Contemporaneous correlation
Above, I assumed where necessary that the shocks were orthogonal. One can
expand the definition of Granger causality to mean that current and past w
do not predict y given past y. This means that the orthogonalized MA is
lower triangular. Of course, this version of the definition will depend on the
order of orthogonalization. Similarly, when thinking of Granger causality in
terms of impulse response functions or variance decompositions you have to
make one or the other orthogonalization assumption.
Intuitively, the problem is that one variable may affect the other so quickly
that it is within the one period at which we observe data. Thus, we can’t
use our statistical procedure to see whether contemporaneous correlation is
due to y causing w or vice-versa. Thus, the orthogonalization assumption is
equivalent to an assumption about the direction of instantaneous causality.
Chapter 8
Spectral Representation
The third fundamental representation of a time series is its spectral density.
This is just the Fourier transform of the autocorrelation/ autocovariance
function. If you don’t know what that means, read on.
8.1 Facts about complex numbers and trigonometry
8.1.1 Definitions
Complex numbers are composed of a real part plus an imaginary part, $z = A + Bi$, where $i = (-1)^{1/2}$. We can think of a complex number as a point on a plane with reals along the x axis and imaginary numbers on the y axis, as shown in figure 8.1.
Using the identity $e^{i\theta} = \cos\theta + i\sin\theta$, we can also represent complex numbers in polar notation as $z = Ce^{i\theta}$, where $C = (A^2 + B^2)^{1/2}$ is the amplitude or magnitude, and $\theta = \tan^{-1}(B/A)$ is the angle or phase. The length $C$ of a complex number is also denoted as its norm $|z|$.
[Figure 8.1: Graphical representation of the complex plane: the point $A + Bi = Ce^{i\theta}$, with real part $A$, imaginary part $B$, modulus $C$, and angle $\theta$.]
8.1.2 Addition, multiplication, and conjugation
To add complex numbers, you add each part, as you would any vector:
$$(A + Bi) + (C + Di) = (A + C) + (B + D)i.$$
Hence, we can represent addition on the complex plane as in figure 8.2
You multiply them just like you'd think:
$$(A + Bi)(C + Di) = AC + ADi + BCi + BDi^2 = (AC - BD) + (AD + BC)i.$$
Multiplication is easier to see in polar notation:
$$De^{i\theta_1}\, Ee^{i\theta_2} = DE\, e^{i(\theta_1 + \theta_2)}$$
Thus, multiplying two complex numbers together gives you a number whose
magnitude equals the product of the two magnitudes, and whose angle (or
phase) is the sum of the two angles, as shown in figure 8.3. Angles are denoted in radians, so $\pi = 180°$, etc.
The complex conjugate * is defined by
$$(A + Bi)^* = A - Bi \quad \text{and} \quad (Ae^{i\theta})^* = Ae^{-i\theta}.$$
[Figure 8.2: Complex addition of $z_1 = A + Bi$ and $z_2 = C + Di$, giving $z_1 + z_2$.]
This operation simply flips the complex vector about the real axis. Note that $zz^* = |z|^2$, and $z + z^* = 2\,\mathrm{Re}(z)$ is real.
8.1.3 Trigonometric identities
From the identity
eiθ = cos θ + i sin θ,
two useful identities follow
cos θ = (eiθ + e−iθ )/2
sin θ = (eiθ − e−iθ )/2i
8.1.4 Frequency, period and phase
Figure 8.4 reminds you what sine and cosine waves look like.
[Figure 8.3: Complex multiplication: $z_1 = De^{i\theta_1}$, $z_2 = Ee^{i\theta_2}$, and $z_1 z_2 = DEe^{i(\theta_1+\theta_2)}$, whose angle is $\theta_1 + \theta_2$.]
The period λ is related to the frequency ω by λ = 2π/ω. The period λ
is the amount of time it takes the wave to go through a whole cycle. The
frequency ω is the angular speed measured in radians/time. The phase is the
angular amount φ by which the sine wave is shifted. Since it is an angular
displacement, the time shift is φ/ω.
8.1.5 Fourier transforms
Take any series of numbers $\{x_t\}$. We define its Fourier transform as
$$x(\omega) = \sum_{t=-\infty}^{\infty} e^{-i\omega t} x_t$$
Note that this operation transforms a series, a function of $t$, to a complex-valued function of $\omega$. Given $x(\omega)$, we can recover $x_t$ by the inverse Fourier transform
$$x_t = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{+i\omega t} x(\omega)\, d\omega.$$
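The transform pair is easy to check numerically for a finite series (a numpy sketch; the series is made up, and the inverse integral is approximated on a uniform grid):

```python
import numpy as np

# A finite-series check of the Fourier transform pair.
x = np.array([1.0, -0.5, 2.0, 0.3])
t = np.arange(len(x))

def x_omega(omega):
    """x(omega) = sum_t e^{-i omega t} x_t."""
    return np.sum(np.exp(-1j * omega * t) * x)

# Approximate (1/2pi) * integral_{-pi}^{pi} e^{+i omega t} x(omega) d omega
# by averaging over a uniform grid; it should give back the original series.
omegas = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
vals = np.array([x_omega(om) for om in omegas])
x_back = np.array([np.mean(np.exp(1j * omegas * ti) * vals) for ti in t])
assert np.allclose(x_back.real, x, atol=1e-6)
```

The recovery is exact (up to floating point) because the grid average reproduces the delta-function integral used in the proof below.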
[Figure 8.4: Sine wave $A\sin(\omega t + \phi)$: amplitude $A$, period $\lambda$, and time shift $-\phi/\omega$.]
Proof: Just substitute the definition of $x(\omega)$ in the inverse transform, and verify that we get $x_t$ back.
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{+i\omega t} \left( \sum_{\tau=-\infty}^{\infty} e^{-i\omega\tau} x_\tau \right) d\omega = \sum_{\tau=-\infty}^{\infty} x_\tau\, \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(t-\tau)}\, d\omega$$
Next, let's evaluate the integral.
$$t = \tau \Rightarrow \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(t-\tau)}\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} d\omega = 1,$$
$$t - \tau = 1 \Rightarrow \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(t-\tau)}\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega}\, d\omega = 0$$
since the integral of sine or cosine all the way around the circle is zero. The same point holds for any $t \neq \tau$, thus (this is another useful identity)
$$\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(t-\tau)}\, d\omega = \delta(t-\tau) = \begin{cases} 1 & \text{if } t - \tau = 0 \\ 0 & \text{if } t - \tau \neq 0 \end{cases}$$
Picking up where we left off,
$$\sum_{\tau=-\infty}^{\infty} x_\tau\, \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega(t-\tau)}\, d\omega = \sum_{\tau=-\infty}^{\infty} x_\tau\, \delta(t-\tau) = x_t. \qquad \blacksquare$$
The inverse Fourier transform expresses xt as a sum of sines and cosines
at each frequency ω. We’ll see this explicitly in the next section.
8.1.6 Why complex numbers?
You may wonder why complex numbers pop up in the formulas, since all
economic time series are real (i.e., the sense in which they contain imaginary
numbers has nothing to do with the square root of -1). The answer is that
they don’t have to: we can do all the analysis with only real quantities.
However, it’s simpler with the complex numbers. The reason is that we
always have to keep track of two real quantities, and the complex numbers
do this for us by keeping track of a real and imaginary part in one symbol.
To see this point, consider what a more intuitive inverse Fourier transform
might look like:
$$x_t = \frac{1}{\pi}\int_0^{\pi} |x(\omega)| \cos(\omega t + \phi(\omega))\, d\omega$$
Here we keep track of the amplitude | x(ω) | (a real number) and phase φ(ω)
of components at each frequency ω. It turns out that this form is exactly
the same as the one given above. In the complex version, the magnitude of
x(ω) tells us the amplitude of the component at frequency ω, the phase of
72
$x(\omega)$ tells us the phase of the component at frequency $\omega$, and we don't have
to carry the two around separately. But which form you use is really only a
matter of convenience.
Proof: Writing $x(\omega) = |x(\omega)|\, e^{i\phi(\omega)}$,
$$x_t = \frac{1}{2\pi}\int_{-\pi}^{\pi} x(\omega) e^{i\omega t}\, d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} |x(\omega)|\, e^{i(\omega t + \phi(\omega))}\, d\omega$$
$$= \frac{1}{2\pi}\int_0^{\pi} \left( |x(\omega)|\, e^{i(\omega t + \phi(\omega))} + |x(-\omega)|\, e^{i(-\omega t + \phi(-\omega))} \right) d\omega.$$
But $x(\omega) = x(-\omega)^*$ (to see this, $x(-\omega) = \sum_t e^{i\omega t} x_t = \left( \sum_t e^{-i\omega t} x_t \right)^* = x(\omega)^*$), so $|x(-\omega)| = |x(\omega)|$ and $\phi(-\omega) = -\phi(\omega)$. Continuing,
$$x_t = \frac{1}{2\pi}\int_0^{\pi} |x(\omega)| \left( e^{i(\omega t + \phi(\omega))} + e^{-i(\omega t + \phi(\omega))} \right) d\omega = \frac{1}{\pi}\int_0^{\pi} |x(\omega)| \cos(\omega t + \phi(\omega))\, d\omega. \qquad \blacksquare$$
As another example of the inverse Fourier transform interpretation, sup-
pose x(ω) was a spike that integrates to one (a delta function) at ω and −ω.
Since sin(−ωt) = − sin(ωt), we have xt = 2 cos(ωt).
8.2 Spectral density
The spectral density is defined as the Fourier transform of the autocovariance function
$$S(\omega) = \sum_{j=-\infty}^{\infty} e^{-i\omega j} \gamma_j$$
Since $\gamma_j$ is symmetric, $S(\omega)$ is real:
$$S(\omega) = \gamma_0 + 2 \sum_{j=1}^{\infty} \gamma_j \cos(j\omega)$$
The formula shows that, again, we could define the spectral density using real quantities, but the complex versions are prettier. Also, notice that the symmetry $\gamma_j = \gamma_{-j}$ means that $S(\omega)$ is symmetric: $S(\omega) = S(-\omega)$, and real.
Using the inversion formula, we can recover $\gamma_j$ from $S(\omega)$:
$$\gamma_j = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{+i\omega j} S(\omega)\, d\omega.$$
Thus, the spectral density is an autocovariance generating function. In particular,
$$\gamma_0 = \frac{1}{2\pi}\int_{-\pi}^{\pi} S(\omega)\, d\omega$$
This equation interprets the spectral density as a decomposition of the variance of the process into uncorrelated components at each frequency $\omega$ (if they weren't uncorrelated, their variances would not sum without covariance terms). We'll come back to this interpretation later.
Two other sets of units are sometimes used. First, we could divide everything by the variance of the series, or, equivalently, Fourier transform the autocorrelation function. Since $\rho_j = \gamma_j/\gamma_0$,
$$f(\omega) = S(\omega)/\gamma_0 = \sum_{j=-\infty}^{\infty} e^{-i\omega j} \rho_j$$
$$\rho_j = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{+i\omega j} f(\omega)\, d\omega.$$
$$1 = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(\omega)\, d\omega.$$
$f(\omega)/2\pi$ looks just like a probability density: it's real, positive, and integrates to 1. Hence the terminology "spectral density". We can define the corresponding distribution function
$$F(\omega) = \int_{-\pi}^{\omega} \frac{f(\nu)}{2\pi}\, d\nu, \quad \text{where } F(-\pi) = 0,\ F(\pi) = 1,\ F \text{ increasing.}$$
This formalism is useful to be precise about cases with deterministic components and hence with "spikes" in the density.
8.2.1 Spectral densities of some processes
White noise
$$x_t = \epsilon_t$$
$$\gamma_0 = \sigma_\epsilon^2, \qquad \gamma_j = 0 \text{ for } j > 0$$
$$S(\omega) = \sigma_\epsilon^2 = \sigma_x^2$$
The spectral density of white noise is flat.
MA(1)
$$x_t = \epsilon_t + \theta\epsilon_{t-1}$$
$$\gamma_0 = (1 + \theta^2)\sigma_\epsilon^2, \qquad \gamma_1 = \theta\sigma_\epsilon^2, \qquad \gamma_j = 0 \text{ for } j > 1$$
$$S(\omega) = (1 + \theta^2)\sigma_\epsilon^2 + 2\theta\sigma_\epsilon^2 \cos\omega = (1 + \theta^2 + 2\theta\cos\omega)\sigma_\epsilon^2 = \gamma_0 \left(1 + \frac{2\theta}{1+\theta^2}\cos\omega\right)$$
Hence, $f(\omega) = S(\omega)/\gamma_0$ is
$$f(\omega) = 1 + \frac{2\theta}{1+\theta^2}\cos\omega$$
Figure 8.5 graphs this spectral density.
As you can see, "smooth" MA(1)'s with $\theta > 0$ have spectral densities that emphasize low frequencies, while "choppy" MA(1)'s with $\theta < 0$ have spectral densities that emphasize high frequencies.
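These shapes are easy to verify numerically. A minimal sketch (assuming NumPy; the values $\theta = \pm 0.9$ are illustrative):

```python
import numpy as np

def f_ma1(omega, theta):
    """Normalized spectral density f(w) = S(w)/gamma_0 of an MA(1)."""
    return 1 + 2 * theta / (1 + theta**2) * np.cos(omega)

omega = np.linspace(0, np.pi, 500)
f_smooth = f_ma1(omega, 0.9)    # theta > 0: power concentrated at low frequencies
f_choppy = f_ma1(omega, -0.9)   # theta < 0: power concentrated at high frequencies
```

At $\omega = \pi/2$ the cosine term vanishes, so both curves cross the white-noise level of 1 there.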
Obviously, this way of calculating spectral densities is not going to be very
easy for more complicated processes (try it for an AR(1).) A by-product of
the filtering formula I develop next is an easier way to calculate spectral
densities.
8.2.2 Spectral density matrix, cross spectral density
With $x_t = [y_t\ z_t]'$, we defined the variance-covariance matrix $\Gamma_j = E(x_t x_{t-j}')$, which was composed of auto- and cross-covariances. The spectral density matrix is defined as
$$S_x(\omega) = \sum_{j=-\infty}^{\infty} e^{-i\omega j}\,\Gamma_j = \begin{bmatrix} \sum_j e^{-i\omega j}\gamma_y(j) & \sum_j e^{-i\omega j} E(y_t z_{t-j}) \\ \sum_j e^{-i\omega j} E(z_t y_{t-j}) & \sum_j e^{-i\omega j}\gamma_z(j) \end{bmatrix}$$
[Figure 8.5: MA(1) spectral density $f(\omega)$ on $[0,\pi]$, with curves for $\theta = -1$, $\theta = 0$ (white noise), and $\theta = +1$.]
You may recognize the diagonals as the spectral densities of y and z. The
off-diagonals are known as the cross-spectral densities.
$$S_{yz}(\omega) = \sum_{j=-\infty}^{\infty} e^{-i\omega j}\, E(y_t z_{t-j}).$$
Recall that we used the symmetry of the autocovariance function γj = γ−j
to show that the spectral density is real and symmetric in ω. However, it is
not true that E(yt zt−j ) = E(zt yt−j ) so the cross-spectral density need not be
real, symmetric, or positive. It does have the following symmetry property:
$$S_{yz}(\omega) = [S_{zy}(\omega)]^* = S_{zy}(-\omega)$$
Proof:
$$S_{yz}(\omega) = \sum_{j=-\infty}^{\infty} e^{-ij\omega} E(y_t z_{t-j}) = \sum_{j=-\infty}^{\infty} e^{-ij\omega} E(z_t y_{t+j}) = \sum_{k=-\infty}^{\infty} e^{ik\omega} E(z_t y_{t-k}) = [S_{zy}(\omega)]^* = S_{zy}(-\omega). \;\;\blacksquare$$
As with any Fourier transform, we can write the corresponding inverse transform
$$E(y_t z_{t-j}) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega j} S_{yz}(\omega)\,d\omega$$
and, in particular,
$$E(y_t z_t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_{yz}(\omega)\,d\omega.$$
The cross-spectral density decomposes the covariance of two series into the
covariance of components at each frequency ω. While the spectral density
is real, the cross-spectral density may be complex. We can characterize the
relation between two sine or cosine waves at frequency ω by the product
of their amplitudes, which is the magnitude of Syz (ω), and the phase shift
between them, which is the phase of Syz (ω).
8.2.3 Spectral density of a sum
Recall that the variance of a sum is var(a + b) = var(a) + var(b) + 2cov(a, b).
Since spectral densities are variances of “components at frequency ω” they
obey a similar relation,
$$S_{x+y}(\omega) = S_x(\omega) + S_y(\omega) + S_{xy}(\omega) + S_{yx}(\omega) = S_x(\omega) + S_y(\omega) + \left(S_{xy}(\omega) + S_{xy}(\omega)^*\right) = S_x(\omega) + S_y(\omega) + 2\,\mathrm{Re}(S_{xy}(\omega))$$
where $\mathrm{Re}(x)$ denotes the real part of $x$.
Proof: As usual, use the definitions and algebra:
$$S_{x+y}(\omega) = \sum_j e^{-i\omega j} E[(x_t+y_t)(x_{t-j}+y_{t-j})] = \sum_j e^{-i\omega j}\left(E(x_t x_{t-j}) + E(y_t y_{t-j}) + E(x_t y_{t-j}) + E(y_t x_{t-j})\right)$$
$$= S_x(\omega) + S_y(\omega) + S_{xy}(\omega) + S_{yx}(\omega). \;\;\blacksquare$$
In particular, if $x$ and $y$ are uncorrelated at all leads and lags, then $E(x_t y_{t-j}) = 0$ for all $j$, so $S_{x+y}(\omega) = S_x(\omega) + S_y(\omega)$.
8.3 Filtering
8.3.1 Spectrum of filtered series
Suppose we form a series $y_t$ by filtering a series $x_t$, i.e. by applying a moving average
$$y_t = \sum_{j=-\infty}^{\infty} b_j x_{t-j} = b(L)x_t.$$
It would be nice to characterize the process yt given the process xt .
We could try to derive the autocovariance function of $y_t$ given that of $x_t$. Let's try it:
$$\gamma_k(y) = E(y_t y_{t-k}) = E\left(\sum_{j=-\infty}^{\infty} b_j x_{t-j} \sum_{l=-\infty}^{\infty} b_l x_{t-k-l}\right) = \sum_{j,l} b_j b_l\, E(x_{t-j} x_{t-k-l}) = \sum_{j,l} b_j b_l\, \gamma_{k+l-j}(x)$$
This is not a pretty convolution.
However, the formula for the spectral density of $y$ given the spectral density of $x$ turns out to be very simple:
$$S_y(\omega) = |b(e^{-i\omega})|^2\, S_x(\omega).$$
$b(e^{-i\omega})$ is a nice notation for the Fourier transform of the $b_j$ coefficients: $b(L) = \sum_j b_j L^j$, so $\sum_j e^{-i\omega j} b_j = b(e^{-i\omega})$.
Proof: Just plug in definitions and go.
$$S_y(\omega) = \sum_k e^{-i\omega k}\gamma_k(y) = \sum_{k,j,l} e^{-i\omega k}\, b_j b_l\, \gamma_{k+l-j}(x)$$
Let $h = k+l-j$, so $k = h-l+j$:
$$S_y(\omega) = \sum_{h,j,l} e^{-i\omega(h-l+j)}\, b_j b_l\, \gamma_h(x) = \sum_j e^{-i\omega j} b_j \sum_l e^{+i\omega l} b_l \sum_h e^{-i\omega h}\gamma_h(x) = b(e^{-i\omega})\, b(e^{+i\omega})\, S_x(\omega) = |b(e^{-i\omega})|^2 S_x(\omega).$$
The last equality results because $b(z^*) = b(z)^*$ for polynomials with real coefficients. $\blacksquare$
The filter yt = b(L)xt is a complex dynamic relation. Yet the filtering
formula looks just like the scalar formula y = bx ⇒var(y) = b2 var(x). This
starts to show you why the spectral representation is so convenient: opera-
tions with a difficult dynamic structure in the time domain (convolutions)
are just multiplications in the frequency domain.
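A quick numerical check of the filtering formula (a NumPy sketch with illustrative parameter values): filter white noise with a two-tap $b(L)$ and compare $|b(e^{-i\omega})|^2 S_x(\omega)$ with the MA(1) spectral density computed directly.

```python
import numpy as np

# Filter y_t = b(L) x_t with b(L) = b0 + b1*L, x_t white noise with variance sigma2
b0, b1, sigma2 = 1.0, 0.4, 2.0
omega = np.linspace(-np.pi, np.pi, 200)

# Frequency response b(e^{-iw}) = sum_j b_j e^{-iwj}
b_freq = b0 + b1 * np.exp(-1j * omega)

# Filtering formula: S_y(w) = |b(e^{-iw})|^2 * S_x(w), with S_x(w) = sigma2 (flat)
Sy_filter = np.abs(b_freq) ** 2 * sigma2

# Direct route: y is an MA(1), so S_y(w) = (b0^2 + b1^2 + 2 b0 b1 cos w) sigma2
Sy_direct = (b0**2 + b1**2 + 2 * b0 * b1 * np.cos(omega)) * sigma2
```

The two computations agree at every frequency, which is the time-domain convolution turned into a frequency-domain multiplication.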
8.3.2 Multivariate filtering formula
The vector version of the filtering formula is a natural extension of the scalar version:
$$y_t = B(L)x_t \;\Rightarrow\; S_y(\omega) = B(e^{-i\omega})\, S_x(\omega)\, B(e^{i\omega})'.$$
This looks just like the usual variance formula: if $x$ is a vector-valued random variable with covariance matrix $\Sigma$ and $y = Ax$, then the covariance matrix of $y$ is $A\Sigma A'$.
8.3.3 Spectral density of arbitrary MA(∞)
Since the MA(∞) representation expresses any series as a linear filter of white noise, an obvious use of the filtering formula is a way to derive the spectral density of any ARMA expressed in Wold representation,
$$x_t = \theta(L)\epsilon_t = \sum_{j=0}^{\infty}\theta_j\,\epsilon_{t-j}.$$
By the filtering formula,
$$S_x(\omega) = \theta(e^{-i\omega})\,\theta(e^{+i\omega})\,\sigma_\epsilon^2$$
Now you know how to find the spectral density of any process. For example, the MA(1), $x_t = (1+\theta L)\epsilon_t$, gives
$$S_x(\omega) = (1+\theta e^{-i\omega})(1+\theta e^{i\omega})\sigma_\epsilon^2 = \left(1+\theta(e^{i\omega}+e^{-i\omega})+\theta^2\right)\sigma_\epsilon^2 = (1+2\theta\cos(\omega)+\theta^2)\,\sigma_\epsilon^2$$
as before.
8.3.4 Filtering and OLS
Suppose
$$y_t = b(L)x_t + \epsilon_t,\quad E(x_t\epsilon_{t-j}) = 0 \text{ for all } j.$$
This is what OLS of $y_t$ on the entire $x_t$ process produces. Then, since the spectral densities of uncorrelated processes add, the filtering formula gives
$$S_y(\omega) = S_{b(L)x}(\omega) + S_\epsilon(\omega) = |b(e^{-i\omega})|^2 S_x(\omega) + S_\epsilon(\omega).$$
This formula looks a lot like $y_t = x_t\beta + \epsilon_t \Rightarrow \sigma_y^2 = \beta^2\sigma_x^2 + \sigma_\epsilon^2$, and the resulting formula for $R^2$. Thus, it lets us do an $R^2$ decomposition: how much of the variance of $y$ at each frequency is due to $x$ and $\epsilon$?
More interestingly, we can relate the cross-spectral density to $b(L)$:
$$S_{yx}(\omega) = b(e^{-i\omega})\, S_x(\omega).$$
Proof: The usual trick: write out the definition and play with the indices until you get what you want.
$$S_{yx}(\omega) = \sum_k e^{-i\omega k} E(y_t x_{t-k}) = \sum_k e^{-i\omega k}\sum_j b_j\, E(x_{t-j}x_{t-k}) = \sum_k\sum_j e^{-i\omega k}\, b_j\, \gamma_{k-j}(x)$$
Let $l = k-j$, so $k = l+j$:
$$S_{yx}(\omega) = \sum_{l,j} e^{-i\omega(l+j)}\, b_j\, \gamma_l(x) = \sum_j e^{-i\omega j} b_j \sum_l e^{-i\omega l}\gamma_l(x) = b(e^{-i\omega})\, S_x(\omega) \;\;\blacksquare$$
This formula has a number of uses. First, divide through to express
$$b(e^{-i\omega}) = \frac{S_{yx}(\omega)}{S_x(\omega)}.$$
This looks a lot like $\beta = \mathrm{cov}(y,x)/\mathrm{var}(x)$. Again, the spectral representation reduces complex dynamic representations to simple scalar multiplication, at each frequency $\omega$. Second, you can use this to estimate the lag distribution:
$$\hat b_j = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ij\omega}\, b(e^{-i\omega})\,d\omega = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ij\omega}\,\frac{S_{yx}(\omega)}{S_x(\omega)}\,d\omega,$$
which is known as "Hannan's inefficient estimator." Third, we can turn the formula around, and use it to help us to understand the cross-spectral density:
$$S_{yx}(\omega) = b(e^{-i\omega})\, S_x(\omega).$$
For example, if $x_t = \epsilon_t$, $\sigma_\epsilon^2 = 1$, and $y_t = x_{t-1}$, then $S_y(\omega) = S_x(\omega) = 1$, but $S_{yx}(\omega) = e^{-i\omega}$. The magnitude of this cross-spectral density is 1.0: $x$ is neither stretched nor shrunk in its transformation to $y$. The one-period lag shows up in the phase of the cross-spectral density: $y$ lags $x$ by a phase shift $\omega$ at frequency $\omega$.
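The gain/phase reading of this example can be sketched numerically (NumPy; the frequency grid is illustrative):

```python
import numpy as np

# Cross-spectral density of y_t = x_{t-1} with x_t white noise, sigma^2 = 1.
# The only nonzero cross-covariance is E(y_t x_{t-1}) = 1 (the j = 1 term),
# so S_yx(w) = sum_j e^{-iwj} E(y_t x_{t-j}) = e^{-iw}.
omega = np.linspace(-3.0, 3.0, 601)   # stay strictly inside (-pi, pi)
Syx = np.exp(-1j * omega)

gain = np.abs(Syx)      # 1 at every frequency: x neither stretched nor shrunk
phase = np.angle(Syx)   # -w: the one-period lag as a frequency-dependent phase shift
```

The flat gain and linear phase are exactly the signature of a pure lag.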
8.3.5 A cosine example
The following example may help with the intuition of these filtering formulas. Suppose $x_t = \cos(\omega t)$. Then we can find by construction that
$$y_t = b(L)x_t = |b(e^{-i\omega})|\cos(\omega t + \phi(\omega)),$$
where
$$b(e^{-i\omega}) = |b(e^{-i\omega})|\, e^{i\phi(\omega)}.$$
The quantity b(e−iω ) is known as the frequency response of the filter. Its
magnitude (or magnitude squared) is called the gain of the filter and its
angle φ is known as the phase. Both are functions of frequency.
The spectral representation of the filter shows you what happens to sine
waves of each frequency. All you can do to a sine wave is stretch it or shift
it, so we can represent what happens to a sine wave at each frequency by two
numbers, the gain and phase of the b(e−iω ). The usual representation b(L)
shows you what happens in response to a unit impulse. This is, of course,
a complex dynamic relation. Then, we can either think of yt as the sum of
impulses times the complex dynamic response to each impulse, or as the sum
of sine and cosine waves times the simple gain and phase.
Proof:
$$y_t = \frac{1}{2}\sum_j b_j\left(e^{i\omega(t-j)} + e^{-i\omega(t-j)}\right) = \frac{1}{2}\left(e^{i\omega t}\sum_j b_j e^{-i\omega j} + e^{-i\omega t}\sum_j b_j e^{i\omega j}\right)$$
$$= \frac{1}{2}\left(e^{i\omega t}\, b(e^{-i\omega}) + e^{-i\omega t}\, b(e^{i\omega})\right) = |b(e^{-i\omega})|\cos(\omega t + \phi(\omega)).$$
($|b(e^{-i\omega})| = |b(e^{i\omega})|$ for polynomial $b(L)$.) $\blacksquare$
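The construction is easy to verify in code. A minimal sketch (NumPy; the filter coefficients and frequency are illustrative): pass a cosine through a two-tap filter and compare with the gain-and-phase formula.

```python
import numpy as np

# Pass x_t = cos(w t) through b(L) = b0 + b1 L and compare with
# |b(e^{-iw})| cos(w t + phi(w)).
b0, b1, w = 1.0, 0.6, 0.8
t = np.arange(1000)
x = np.cos(w * t)

# Time-domain filtering: y_t = b0 x_t + b1 x_{t-1} (valid from t = 1 on)
y = b0 * x
y[1:] += b1 * x[:-1]

# Frequency response at w: gain and phase
b_w = b0 + b1 * np.exp(-1j * w)
gain, phase = np.abs(b_w), np.angle(b_w)
y_pred = gain * np.cos(w * t + phase)
```

All the filter can do to a cosine is stretch it (gain) and shift it (phase), and the two paths agree exactly.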
8.3.6 Cross spectral density of two filters, and an interpretation of spectral density
Here's a final filtering formula, which will help give content to the interpretation of the spectral density as the variance of components at frequency $\omega$:
$$y_{1t} = b_1(L)x_t,\quad y_{2t} = b_2(L)x_t \;\Rightarrow\; S_{y_1 y_2}(\omega) = b_1(e^{-i\omega})\, b_2(e^{i\omega})\, S_x(\omega)$$
Note this formula reduces to $S_y(\omega) = |b(e^{-i\omega})|^2 S_x(\omega)$ if $b_1(L) = b_2(L)$.
Proof: As usual, write out the definition, and play with the sum indices.
$$S_{y_1 y_2}(\omega) = \sum_j e^{-i\omega j} E(y_{1t}\, y_{2t-j}) = \sum_j e^{-i\omega j} E\left(\sum_k b_{1k} x_{t-k}\sum_l b_{2l} x_{t-j-l}\right)$$
$$= \sum_j\sum_{k,l} e^{-i\omega j}\, b_{1k} b_{2l}\, E(x_{t-k} x_{t-j-l}) = \sum_j\sum_{k,l} e^{-i\omega j}\, b_{1k} b_{2l}\, \gamma_{j+l-k}(x)$$
Let $m = j+l-k$, so $j = m-l+k$:
$$\sum_{m,k,l} e^{-i\omega(m-l+k)}\, b_{1k} b_{2l}\,\gamma_m(x) = \sum_k b_{1k} e^{-i\omega k}\sum_l b_{2l} e^{i\omega l}\sum_m e^{-i\omega m}\gamma_m(x). \;\;\blacksquare$$
Now, suppose $b(L)$ is a bandpass filter,
$$b(e^{-i\omega}) = \begin{cases} 1, & |\omega| \in [\alpha,\beta] \\ 0, & \text{elsewhere,}\end{cases}$$
as displayed in Figure 8.6. Then, the variance of filtered data gives the average spectral density in the window. (We'll use this idea later to construct spectral density estimates.)
$$\mathrm{var}(b(L)x_t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} |b(e^{-i\omega})|^2 S_x(\omega)\,d\omega = \frac{1}{2\pi}\int_{|\omega|\in[\alpha,\beta]} S_x(\omega)\,d\omega.$$
Next, subdivide the frequency range from $-\pi$ to $\pi$ into nonoverlapping intervals:
$$\mathrm{var}(x_t) = \frac{1}{2\pi}\int_{-\pi}^{\pi} S_x(\omega)\,d\omega = \frac{1}{2\pi}\left(\int_{b_1} S_x(\omega)\,d\omega + \int_{b_2} S_x(\omega)\,d\omega + \ldots\right) = \mathrm{var}(b_1(L)x_t) + \mathrm{var}(b_2(L)x_t) + \ldots$$
[Figure 8.6: Frequency response (all real) of a bandpass filter that includes frequencies $\omega$ from $\alpha$ to $\beta$: equal to 1 for $|\omega|\in[\alpha,\beta]$, 0 elsewhere.]
Since the windows do not overlap, the covariance terms are zero. As the
windows get smaller and smaller, the windows approach a delta function, so
the components bj (L)xt start to look more and more like pure cosine waves at
a single frequency. In this way, the spectral density decomposes the variance
of the series into the variance of orthogonal sine and cosine waves at different
frequencies.
8.3.7 Constructing filters
Inversion formula
You may have wondered in the above: how do we know that a filter exists with a given desired frequency response? If it exists, how do you construct it? The answer, of course, is to use the inversion formula,
$$b_j = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega j}\, b(e^{-i\omega})\,d\omega$$
For example, let's find the moving average representation $b_j$ of the bandpass filter:
$$b_j = \frac{1}{2\pi}\int_{|\omega|\in[\alpha,\beta]} e^{i\omega j}\,d\omega = \frac{1}{2\pi}\int_{-\beta}^{-\alpha} e^{i\omega j}\,d\omega + \frac{1}{2\pi}\int_{\alpha}^{\beta} e^{i\omega j}\,d\omega$$
$$= \frac{1}{2\pi}\left[\frac{e^{i\omega j}}{ij}\right]_{-\beta}^{-\alpha} + \frac{1}{2\pi}\left[\frac{e^{i\omega j}}{ij}\right]_{\alpha}^{\beta} = \frac{e^{-ij\alpha} - e^{-ij\beta} + e^{ij\beta} - e^{ij\alpha}}{2\pi i j} = \frac{\sin(j\beta)}{\pi j} - \frac{\sin(j\alpha)}{\pi j}.$$
Each term is a two-sided infinite order moving average filter. Figure 8.7 plots
the filter. Then, you just apply this two-sided moving average to the series.
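A sketch of these weights in code (NumPy; the band below is a conventional 6-to-32-period business-cycle band, chosen only for illustration):

```python
import numpy as np

def bandpass_weights(alpha, beta, J):
    """Truncated moving-average weights b_j, j = -J..J, of the ideal
    bandpass filter passing frequencies |w| in [alpha, beta]."""
    j = np.arange(-J, J + 1)
    b = np.zeros(2 * J + 1)
    nz = j != 0
    b[nz] = (np.sin(j[nz] * beta) - np.sin(j[nz] * alpha)) / (np.pi * j[nz])
    b[j == 0] = (beta - alpha) / np.pi   # limit of sin(j w)/(pi j) as j -> 0
    return j, b

# Pass periods 6..32, i.e. w in [2pi/32, 2pi/6]
j, b = bandpass_weights(2 * np.pi / 32, 2 * np.pi / 6, J=40)

def freq_response(j, b, w):
    """b(e^{-iw}) of the truncated filter."""
    return np.sum(b * np.exp(-1j * w * j))
```

Because the weights are truncated at $J$ lags, the realized response is near 1 inside the band and near 0 outside, with some ripple near the band edges.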
[Figure 8.7: Moving average weights of a bandpass filter, plotted for lags $-10$ to $10$.]
Extracting unobserved components by knowledge of spectra.
Of course, you may not know ahead of time exactly what frequency response $b(e^{-i\omega})$ you're looking for. One situation in which we can give some help is a case of unobserved components. In seasonal adjustment, growth/cycle decompositions, and many other situations, you think of your observed series $X_t$ as composed of two components, $x_t$ and $u_t$, that you do not observe separately. However, you do have some prior ideas about the spectral shape of the two components. Growth is long-run, seasonals are seasonal, etc.
Assume that the two components are uncorrelated at all leads and lags:
$$X_t = x_t + u_t,\quad E(x_t u_s) = 0.$$
The only thing you can do is construct a "best guess" of $x_t$ given your data on $X_t$, i.e., construct a filter that recovers the projection of $x_t$ on the $X$ process,
$$x_t = \sum_{j=-\infty}^{\infty} h_j X_{t-j} + \epsilon_t \;\Rightarrow\; \hat x_t = \sum_{j=-\infty}^{\infty} h_j X_{t-j}.$$
The parameters $h_j$ are given by inverting
$$h(e^{-i\omega}) = \frac{S_x(\omega)}{S_x(\omega) + S_u(\omega)} = \frac{S_x(\omega)}{S_X(\omega)}$$
Proof: (left as a problem)
This formula is an example of a problem that is much easier to solve
in the frequency domain! (A fancy name might be “optimal seasonal (or
trend/cycle) extraction”. )
8.3.8 Sims approximation formula
Often (always) we approximate true, infinite-order ARMA processes by finite-order models. There is a neat spectral representation of this approximation given by the Sims approximation formula: Suppose the true projection of $y_t$ on $\{x_t\}$ is
$$y_t = \sum_{j=-\infty}^{\infty} b_j^0\, x_{t-j} + \epsilon_t$$
but a researcher fits by OLS a restricted version,
$$y_t = \sum_{j=-\infty}^{\infty} b_j^1\, x_{t-j} + u_t.$$
The $\{b_j^1\}$ lie in some restricted space, for example, the MA representations of an ARMA(p,q). In population, OLS (or maximum likelihood with normal iid errors) picks $b_j^1$ to minimize
$$\int_{-\pi}^{\pi} \left| b^0(e^{-i\omega}) - b^1(e^{-i\omega}) \right|^2 S_x(\omega)\,d\omega.$$
Proof: (left as a problem)
OLS tries to match an average of $b^0(e^{-i\omega}) - b^1(e^{-i\omega})$ over the entire spectrum from $-\pi$ to $\pi$. How hard it tries to match them depends on the spectral density of $x$. Thus, OLS will sacrifice accuracy of the estimated $b(L)$ in a small frequency window, and/or a window in which $S_x(\omega)$ is small, to get better accuracy in a large window, and/or a window in which $S_x(\omega)$ is large.
8.4 Relation between Spectral, Wold, and Autocovariance representations
We have discussed three different fundamental representations for time series
processes: the autocovariance function, the Wold MA(∞) and the spectral
density. The following diagram summarizes the relation between the three
representations.
Autoregressions → Wold MA(∞): $x_t = \sum_{j=0}^{\infty}\theta_j\epsilon_{t-j}$, with $\theta(L)$ invertible; to go back, infer the AR.
Wold → spectral density: $S(\omega) = \theta(e^{-i\omega})\theta(e^{i\omega})\sigma_\epsilon^2$; and back, $\sigma_\epsilon^2 = e^{\frac{1}{2\pi}\int_{-\pi}^{\pi}\ln S(\omega)\,d\omega}$.
Wold → autocovariances: $\gamma_k = \sum_{j=0}^{\infty}\theta_j\theta_{j+k}\,\sigma_\epsilon^2$, where $\gamma_k = E(x_t x_{t-k})$.
Autocovariances ↔ spectral density: $S(\omega) = \sum_{k=-\infty}^{\infty} e^{-ik\omega}\gamma_k$ and $\gamma_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{ik\omega} S(\omega)\,d\omega$.
Each of the three representations can be estimated directly: we find the Wold MA by running autoregressions and simulating the impulse-response function; we can find the autocovariances by finding sample autocovariances. The next chapter discusses spectral density estimates. From each representation we can derive the others, as shown.
The only procedure we have not discussed so far is how to go from spectral density to Wold. The central insight to doing this is to realize that the roots of the spectral density function are the roots of the Wold moving average, plus the inverses of those roots. If $x_t = \theta(L)\epsilon_t$ with $\theta(z) = \mathrm{const.}(z-\lambda_1)(z-\lambda_2)\cdots$, so that $\lambda_1, \lambda_2, \ldots$ are roots, then $S_x(z) = \mathrm{const.}^2(z-\lambda_1)(z-\lambda_2)\cdots(z^{-1}-\lambda_1)(z^{-1}-\lambda_2)\cdots$, so that $\lambda_1, \lambda_2, \ldots$ and $\lambda_1^{-1}, \lambda_2^{-1}, \ldots$ are roots. Thus, find the roots of the spectral density; those outside the unit circle are the roots of the Wold lag polynomial. To find $\sigma_\epsilon^2$, either make sure that the integral of the spectral density equals the variance of the Wold representation, or use the direct formula given above.
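A concrete sketch of this recipe for an MA(1) (NumPy; the parameter values are illustrative): factor the autocovariance generating function and keep the root outside the unit circle.

```python
import numpy as np

# MA(1): x_t = (1 + theta L) e_t. Given the autocovariances, recover theta by
# factoring the autocovariance generating function
#   S(z) = gamma1 z^{-1} + gamma0 + gamma1 z = sigma2 (1 + theta z)(1 + theta z^{-1}).
theta_true, sigma2_true = 0.4, 2.0
gamma0 = (1 + theta_true**2) * sigma2_true
gamma1 = theta_true * sigma2_true

# z * S(z) = gamma1 z^2 + gamma0 z + gamma1: roots are -theta and -1/theta
roots = np.roots([gamma1, gamma0, gamma1])

# The root outside the unit circle is the root of the invertible Wold polynomial
r = roots[np.abs(roots) > 1][0]
theta_hat = -1 / r.real
sigma2_hat = gamma1 / theta_hat
```

The two roots come in a reciprocal pair; discarding the one inside the unit circle is what selects the invertible representation.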
Chapter 9
Spectral analysis in finite
samples
So far, we have been characterizing population moments of a time series. It’s
also useful to think how spectral representations work with a finite sample
of data in hand. Among other things, we need to do this in order to think
about how to estimate spectral densities.
9.1 Finite Fourier transforms
9.1.1 Definitions
For a variety of purposes it is convenient to think of finite-sample counterparts to the population quantities we've introduced so far. To start off, think of Fourier transforming the data $x_t$ rather than the autocovariance function.
Thus, let
$$x_\omega = \frac{1}{T^{1/2}}\sum_{t=1}^{T} e^{-i\omega t}\, x_t$$
where $T$ = sample size. We can calculate $x_\omega$ for arbitrary $\omega$. However, I will mostly calculate it for $T$ $\omega$'s, spread evenly about the unit circle. When the $\omega$'s are spread evenly in this way, there is an inverse finite Fourier transform,
$$x_t = \frac{1}{T^{1/2}}\sum_\omega e^{i\omega t}\, x_\omega$$
Proof: Just like the proof of the infinite-size inverse transform.
$$\frac{1}{T^{1/2}}\sum_\omega e^{i\omega t} x_\omega = \frac{1}{T^{1/2}}\sum_\omega e^{i\omega t}\,\frac{1}{T^{1/2}}\sum_{j=1}^{T} e^{-i\omega j} x_j = \frac{1}{T}\sum_{j=1}^{T} x_j \sum_\omega e^{i\omega(t-j)},$$
$$\sum_\omega e^{i\omega(t-j)} = \begin{cases} T & \text{if } t-j = 0 \\ 0 & \text{if } t-j \neq 0. \end{cases} \;\;\blacksquare$$
It is handy to put this transformation in matrix notation. Let
$$W = \frac{1}{T^{1/2}}\begin{bmatrix} e^{-i\omega_1} & e^{-2i\omega_1} & \ldots \\ e^{-i\omega_2} & e^{-2i\omega_2} & \ldots \\ \vdots & \vdots & \end{bmatrix};\quad x_t = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \end{bmatrix};\quad x_\omega = \begin{bmatrix} x_{\omega_1} \\ x_{\omega_2} \\ \vdots \end{bmatrix}$$
Then, we can write the Fourier transform and its inverse as
$$x_\omega = W x_t,\quad x_t = W^T x_\omega$$
where $W^T = (W')^* = (W^*)'$ denotes "complex conjugate and transpose". Note $W^T W = W W^T = I$, i.e., Fourier transforming and then inverse transforming gets you back where you started. Matrices $W$ with this property are called unitary matrices.
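A minimal numerical sketch of $W$ and its unitarity (NumPy; $T = 16$ is illustrative):

```python
import numpy as np

T = 16
t = np.arange(T)
omega = 2 * np.pi * t / T   # T frequencies spread evenly about the unit circle

# W[k, t] = T^{-1/2} e^{-i w_k t}: the finite Fourier transform matrix
W = np.exp(-1j * np.outer(omega, t)) / np.sqrt(T)

rng = np.random.default_rng(1)
x = rng.standard_normal(T)
xw = W @ x                   # transform
x_back = W.conj().T @ xw     # inverse transform, using W^T W = I
```

Transforming and inverse transforming returns the original data, which is the unitarity property in action.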
9.2 Band spectrum regression
9.2.1 Motivation
We very frequently filter series before we look at relations between them: we detrend them, deseasonalize them, etc. Implicitly, these procedures mean that we think relations are different at different frequencies, and we want to isolate the frequencies of interest before we look at the relation, rather than build an encompassing model of the relation between series at all frequencies.
“Band spectrum regression” is a technique designed to think about these
situations. I think that it is not so useful in itself, but I find it a very
useful way of thinking about what we’re doing when we’re filtering, and how
relations that differ across frequencies influence much time series work.
Here are some examples of situations in which we think relations vary
across frequencies, and for which filtered data have been used:
Example 1: Kydland and Prescott.
Kydland and Prescott simulate a model that gives rise to stationary time series patterns for GNP, consumption, etc. However, the GNP, consumption, etc. data are not stationary: not only do they trend upwards but they include interesting patterns at long horizons, such as the "productivity slowdown" of the '70s, as well as recessions. Kydland and Prescott only designed their model to capture the business cycle correlations of the time series; hence they filtered the series with the Hodrick-Prescott filter (essentially a high-pass filter) before computing covariances.
Example 2: Labor supply.
The Lucas-Prescott model of labor supply makes a distinction between
”permanent” and ”transitory” changes in wage rates. A transitory change
in wage rates has no income effect, and so induces a large increase in labor
supply (intertemporal substitution). A permanent increase has an income
effect, and so induces a much smaller increase in labor supply. This model
might (yes, I know this is static thinking in a dynamic model — this is only
a suggestive example!) make time series predictions as illustrated in figure
9.1:
As you can see, we might expect a different relation between wages and
labor supply at business cycle frequencies vs. longer horizons.
Example 3: Money supply and interest rates
Conventional textbooks tell you that money growth increases have a short
run negative impact on real and hence nominal interest rates, but a long run,
one-for-one impact on inflation and hence nominal interest rates. The general
view is that the time-series relation between money and interest rates looks
[Figure 9.1: Time series of wages and labor supply.]
something like figure 9.2
Again, the relation between money growth and nominal interest rates
depends on the frequency at which you look at it. At high frequencies we
expect a negative relation, at low frequencies we expect a positive relation. (I
wrote my PhD thesis on this idea, see “The Return of the Liquidity Effect: A
Study of the Short Run Relation Between Money Growth and Interest Rates”
Journal of Business and Economic Statistics 7 (January 1989) 75-83.)
Example 4: Seasonal adjustment.
Seasonal adjustment is a band pass filter that attenuates or eliminates seasonal frequencies. This
procedure must reflect a belief that there is a different relation between
variables at seasonal and nonseasonal frequencies. For example, there is
a tremendous seasonal in nondurable and services consumption growth; the
use of seasonally adjusted consumption data in asset pricing reflects a belief
that consumers do not try to smooth these in asset markets.
[Figure 9.2: Money and interest rates: money growth, nominal interest, inflation.]

WARNING: Though filtering is very common in these situations, it is probably "wrong". The "right" thing to do is generally specify the whole model, at long, short, seasonal, and nonseasonal frequencies, and estimate the entire dynamic system. The underlying economics typically does not separate by frequency. An optimizing agent facing a shock with a seasonal and nonseasonal component will set his control variable at nonseasonal frequencies in a way that reflects the seasonals. "Growth theory" and "business
cycle theory” may each make predictions at each others’ frequencies, so one
cannot really come up with separate theories to explain separate bands of the
spectrum. The proper distinction between shocks may be “expected” and
"unexpected" rather than "low frequency" vs. "high frequency". Nonetheless, removing frequency bands because you have a model that "only applies at certain frequencies" is very common, even if that characterization of the model's predictions is not accurate.
9.2.2 Band spectrum procedure
Suppose $y$ and $x$ satisfy the OLS assumptions,
$$y_t = x_t\beta + \epsilon_t,\quad E(\epsilon_t\epsilon_t') = \sigma^2 I,\quad E(x_t\epsilon_t') = 0.$$
Since $W$ is a unitary matrix, the Fourier-transformed versions also satisfy the OLS assumptions:
$$W y_t = W x_t\beta + W\epsilon_t$$
or
$$y_\omega = x_\omega\beta + \epsilon_\omega$$
where
$$E(\epsilon_\omega\epsilon_\omega^T) = E(W\epsilon_t\epsilon_t' W^T) = \sigma^2 I,\qquad E(x_\omega\epsilon_\omega') = E(W x_t\epsilon_t' W^T) = 0.$$
Thus, why not run OLS using the frequency-transformed data,
$$\hat\beta = (x_\omega^T x_\omega)^{-1}(x_\omega^T y_\omega)?$$
In fact, this $\hat\beta$ is numerically identical to the usual OLS $\hat\beta$:
$$(x_\omega^T x_\omega)^{-1}(x_\omega^T y_\omega) = (x_t' W^T W x_t)^{-1}(x_t' W^T W y_t) = (x_t' x_t)^{-1}(x_t' y_t).$$
So far, pretty boring. But now, recall what we do with OLS in the
presence of a “structural shift”. The most common example is war, where
you might think that behavioral relationships are different for the data points
1941-1945. (This usually means that you haven’t specified your behavior at
a deep enough level.) Also, it is common to estimate different relationships
in different “policy regimes”, such as pre-and post 1971 when the system of
fixed exchange rates died, or in 1979-1982 and outside that period, when the
Fed (supposedly) followed a nonborrowed reserve rather than interest rate
targeting procedure. Thus, suppose you thought
$$y_t = x_t\beta_1 + \epsilon_t \ \text{(period A)};\qquad y_t = x_t\beta_2 + \epsilon_t\ \text{(period B)}.$$
What do you do? You estimate separate regressions, or you drop from your
sample the time periods in which you think β changed from the “true value”
that you want to estimate.
The equivalence of the regular and frequency domain OLS assumptions
shows you that OLS assumes that β is the same across frequencies, as it
assumes that β is the same over time. Band spectrum regression becomes
useful when you don’t think this is true: when you think that the relationship
you want to investigate only holds at certain frequencies.
The solution to this problem is obvious: drop certain frequencies from the frequency-transformed regression, i.e. transform the data and then estimate separate $\beta_1$ and $\beta_2$: $y_\omega = x_\omega\beta_1 + \epsilon_\omega$ in frequency interval A and $y_\omega = x_\omega\beta_2 + \epsilon_\omega$ in frequency interval B. If you believe the model holds only at certain frequencies, just run $y_\omega = x_\omega\beta + \epsilon_\omega$ in the given frequency band.
One way to formalize these operations is to let $A$ be a "selector matrix" that picks out desired frequencies. It has ones on the diagonal corresponding to the "good" frequencies that you want to keep, and zeros elsewhere. Then, if you run
$$A y_\omega = A x_\omega\beta + A\epsilon_\omega,$$
only the "good" frequencies are included. The $\hat\beta$ you get is
$$\hat\beta = (x_\omega^T A A x_\omega)^{-1}(x_\omega^T A A y_\omega).$$
As you might suspect, filtering the data to remove the unwanted frequencies and then running OLS (in the time domain) on the filtered data is equivalent to this procedure. To see this, invert the band spectrum regression to the time domain by running
$$W^T A y_\omega = W^T A x_\omega\beta + W^T A\epsilon_\omega$$
This regression gives numerically identical results. Writing the definition of $x_\omega$, this is in turn equivalent to
$$W^T A W y_t = W^T A W x_t\beta + W^T A W\epsilon_t$$
So, what is $W^T A W$? It takes the time domain to the frequency domain, drops a band of frequencies, and reverts to the time domain. Thus it's a band pass filter!
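To see the equivalence in numbers, here is a NumPy sketch (not from the original text; the data and the kept band are illustrative): build $W$ and a selector $A$, form $W^T A W$, and check that OLS on the filtered data matches OLS on the kept frequencies.

```python
import numpy as np

T = 64
t = np.arange(T)
omega = 2 * np.pi * t / T
W = np.exp(-1j * np.outer(omega, t)) / np.sqrt(T)

# Selector A: keep frequencies with |w| <= pi/4. Keep w_k and its mirror
# 2pi - w_k together so that the implied time-domain filter is real.
keep = (omega <= np.pi / 4) | (omega >= 2 * np.pi - np.pi / 4)
A = np.diag(keep.astype(float))

# W^T A W: to the frequency domain, drop a band, back to the time domain
F = W.conj().T @ A @ W

rng = np.random.default_rng(2)
x = rng.standard_normal(T)
y = 2.0 * x + 0.1 * rng.standard_normal(T)

# OLS on band-pass-filtered data ...
xf, yf = (F @ x).real, (F @ y).real
beta_filtered = (xf @ yf) / (xf @ xf)

# ... equals OLS run on the kept frequencies only
xw, yw = W @ x, W @ y
num = (np.conj(xw[keep]) @ yw[keep]).real
den = (np.conj(xw[keep]) @ xw[keep]).real
beta_band = num / den
```

Because $A$ is idempotent and $W$ unitary, $F^T F = F$, which is why the two regressions give the same coefficient.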
There is one warning: if we drop $k$ frequencies in the band-spectrum regression, OLS on the frequency-domain data will pick up the fact that only $T-k$ degrees of freedom are left. However, there are still $T$ time-domain observations, so you need to correct standard errors for the lost degrees of freedom. Alternatively, note that while $\epsilon_t$ is serially uncorrelated, $W^T A W\epsilon_t$ is serially correlated, so you have to correct for serial correlation of the error in inference.
Of course, it is unlikely that your priors are this sharp: that certain frequencies belong, and certain others don't. More likely, you want to give more weight to some frequencies and less weight to others, so you use a filter with a smoother response than the bandpass filter. Nonetheless, I find it useful to think about the rationale of this kind of procedure with the band-pass result in mind.
9.3 Cramér or Spectral representation
The spectral or Cramér representation makes precise the sense in which the spectral density is a decomposition of the variance of a series into the variance of orthogonal components at each frequency. To do this right (in population) we need to study Brownian motion, which we haven't gotten to yet. But we can make a start with the finite Fourier transform and a finite data set.
The inverse Fourier transform of the data is
$$x_t = \frac{1}{T^{1/2}}\sum_\omega e^{i\omega t} x_\omega = \frac{1}{T^{1/2}}\left(x_0 + 2\sum_{\omega>0} |x_\omega|\cos(\omega t + \phi_\omega)\right)$$
where $x_\omega = |x_\omega| e^{i\phi_\omega}$. Thus it represents the time series $x_t$ as a sum of cosine waves of different frequencies and phase shifts.
One way to think of the inverse Fourier transform is that we can draw random variables $\{x_\omega\}$ at the beginning of time, and then let $x_t$ evolve according to the inverse transform. It looks like this procedure produces a deterministic time series $x_t$, but that isn't true. "Deterministic" means that $x_t$ is perfectly predictable given the history of $x$, not given $\{x_\omega\}$. If you have $k$ $x_t$'s you can only figure out $k$ $x_\omega$'s.
So what are the statistical properties of these random variables $x_\omega$? Since $x_t$ is mean zero, $E(x_\omega) = 0$. The variance and covariances are
$$\lim_{T\to\infty} E(x_\omega x_\lambda^*) = \begin{cases} S_x(\omega) & \text{if } \omega = \lambda \\ 0 & \text{if } \omega \neq \lambda \end{cases}$$
Proof:
$$E(x_\omega x_\lambda^*) = E\left(\frac{1}{T}\sum_t e^{-i\omega t} x_t \sum_j e^{i\lambda j} x_j\right) = \frac{1}{T}\sum_{t,j} e^{i\lambda j} e^{-i\omega t}\, E(x_t x_j).$$
Setting $j = t-k$,
$$= \frac{1}{T}\sum_{t,k} e^{i\lambda(t-k)} e^{-i\omega t}\, E(x_t x_{t-k}) = \sum_k e^{-i\lambda k}\gamma_k(x)\,\frac{1}{T}\sum_t e^{i(\lambda-\omega)t}.$$
If $\omega = \lambda$, the last sum is $T$, so we get
$$\lim_{T\to\infty} E(x_\omega x_\omega^*) = \sum_k e^{-i\omega k}\gamma_k = S_x(\omega).$$
If $\omega \neq \lambda$, the last sum is zero, so
$$\lim_{T\to\infty} E(x_\omega x_\lambda^*) = 0. \;\;\blacksquare$$
Though it looks like finite-sample versions of these statements also go
through, I was deliberately vague about the sum indices. When these do not
go from −∞ to ∞, there are small-sample biases.
Thus, when we think of the time series by picking $\{x_\omega\}$ and then generating out $\{x_t\}$ by the inversion formula, the $x_\omega$ are uncorrelated. However, they are heteroskedastic, since the variance of $x_\omega$ is the spectral density of $x$, which varies over $\omega$.
The Cramér or spectral representation takes these ideas to their limit. It is
$$x_t = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega t}\,dz(\omega)$$
where
$$E(dz(\omega)\,dz(\lambda)^*) = \begin{cases} S_x(\omega)\,d\omega & \text{for } \omega = \lambda \\ 0 & \text{for } \omega \neq \lambda \end{cases}$$
The $dz$ are increments of a Brownian motion. (A little more precisely, the random variable
$$Z(\omega) = \int_{-\pi}^{\omega} dz(\lambda)$$
has uncorrelated increments: $E[(Z(.3)-Z(.2))(Z(.2)-Z(.1))] = 0$.) Again, the idea is that we draw this Brownian motion from $-\pi$ to $\pi$ at the beginning of time, and then fill out the $x_t$. As before, the past history of $x$ is not sufficient to infer the whole $Z$ path, so $x$ is still indeterministic.
This is what we really mean by “the component at frequency ω” of a
non-deterministic series. As you can see, being really precise about it means
we have to study Brownian motions, which I’ll put off for a bit.
9.4 Estimating spectral densities
We will study a couple of approaches to estimating spectral densities. A
reference for most of this discussion is Anderson (1971) p. 501 ff.
9.4.1 Fourier transform sample covariances
The obvious way to estimate the spectral density is to use sample counterparts to the definition. Start with an estimate of the autocovariance function:
$$\hat\gamma_k = \frac{1}{T-k}\sum_{t=k+1}^{T} x_t x_{t-k} \quad\text{or}\quad \hat\gamma_k = \frac{1}{T}\sum_{t=k+1}^{T} x_t x_{t-k}.$$
(I'm assuming the $x$'s have mean zero. If not, take out sample means first. This has small effects on what follows.) Both estimates are consistent. The first is unbiased; the second produces positive definite autocovariance sequences in any sample. The first does not (necessarily) produce positive definite sequences, and hence positive spectral densities. One can use either; I'll use the second.
We can construct spectral density estimates by Fourier transforming these autocovariances:
$$\hat S(\omega) = \sum_{k=-(T-1)}^{T-1} e^{-i\omega k}\,\hat\gamma_k$$
9.4.2 Sample spectral density
A second approach is suggested by our definition of the finite Fourier transform. Since
$$\lim_{T\to\infty} E(x_\omega x_\omega^*) = S(\omega),$$
why not estimate $S(\omega)$ by the sample spectral density¹ $I(\omega)$:
$$\hat S(\omega) = I(\omega) = x_\omega x_\omega^* = \frac{1}{T}\left(\sum_{t=1}^{T} e^{-i\omega t}x_t\right)\left(\sum_{t=1}^{T} e^{i\omega t}x_t\right) = \frac{1}{T}\left|\sum_{t=1}^{T} e^{-i\omega t}x_t\right|^2$$
9.4.3 Relation between transformed autocovariances and sample density
The Fourier transform of the sample autocovariance is numerically identical to the sample spectral density!
Proof:
$$\frac{1}{T}\left(\sum_{t=1}^{T} e^{-i\omega t}x_t\right)\left(\sum_{t=1}^{T} e^{i\omega t}x_t\right) = \frac{1}{T}\sum_{t=1}^{T}\sum_{j=1}^{T} e^{i\omega(j-t)}\, x_t x_j$$
Let $k = t-j$, so $j = t-k$:
$$= \sum_{k=-(T-1)}^{T-1} e^{-i\omega k}\,\frac{1}{T}\sum_{t=|k|+1}^{T} x_t x_{t-|k|} = \sum_{k=-(T-1)}^{T-1} e^{-i\omega k}\,\hat\gamma_k$$
(To check the limits on the sums, verify that each $x_t x_j$ is still counted once. It may help to plot each combination on a $t$ vs. $j$ grid, and verify that the second sum indeed gets each one once. You'll also have to use $x_1 x_{1-(-3)} = x_4 x_1$, etc. when $k < 0$.) $\blacksquare$
Thus, the sample spectral density is the Fourier transform of the sample autocovariances,
$$I(\omega) = \sum_{k=-(T-1)}^{T-1} e^{-i\omega k}\,\hat\gamma_k = \sum_{k=-\infty}^{\infty} e^{-i\omega k}\,\hat\gamma_k,$$
where the latter equality holds by the convention that $\hat\gamma_k = 0$ for $k > T-1$.
¹ This quantity is also sometimes called the periodogram. Many treatments of the periodogram divide by an extra $T$. The original periodogram was designed to ferret out pure sine or cosine wave components, which require dividing by an extra $T$ for the periodogram to stay stable as $T\to\infty$. When there are only non-deterministic components, we divide only by one $T$. I use "sample spectral density" to distinguish the two possibilities.
Applying the inverse Fourier transform, we know that
$$\hat\gamma_k = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i\omega k}\, I(\omega)\,d\omega.$$
Thus, the sample autocovariances (using $1/T$) and sample spectral density have the same relation to each other as the population autocovariance and spectral density:
$$\gamma_k \Leftrightarrow S(\omega) \quad\text{just like}\quad \hat\gamma_k \Leftrightarrow I(\omega).$$
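The numerical identity between the periodogram and the Fourier transform of the sample autocovariances is easy to confirm in code (a NumPy sketch; the simulated series is illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 128
x = rng.standard_normal(T)
x = x - x.mean()   # take out the sample mean first

# Sample autocovariances, 1/T convention: gamma_hat[k] for k = 0..T-1
gamma_hat = np.array([x[k:] @ x[:T - k] / T for k in range(T)])

# Sample spectral density at the T Fourier frequencies
omega = 2 * np.pi * np.arange(T) / T
I = np.abs(np.fft.fft(x)) ** 2 / T

# Two-sided Fourier transform of the sample autocovariances (gamma_{-k} = gamma_k)
k = np.arange(1, T)
S_from_gamma = np.array(
    [gamma_hat[0] + 2 * np.sum(gamma_hat[1:] * np.cos(w * k)) for w in omega]
)
```

The identity holds exactly at every frequency, not just asymptotically.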
The sample spectral density is completely determined by its values at $T$ different frequencies, which you might as well take evenly spaced. (Analogously, we defined $x_\omega$ for $T$ different $\omega$ above, and showed that they carried all the information in a sample of length $T$.) To see this, note that there are only $T$ autocovariances (including the variance). Thus, we can recover the $T$ autocovariances from $T$ values of the spectral density, and construct the spectral density at any new frequency from the $T$ autocovariances.
As a result, you might suspect that there is also a finite Fourier transform relation between $I(\omega)$ and $\hat\gamma_k$, and there is:
$$\hat\gamma_k = \frac{1}{T}\sum_\omega e^{i\omega k}\, I(\omega).$$
Proof:
$$\frac{1}{T}\sum_\omega e^{i\omega k} I(\omega) = \frac{1}{T}\sum_\omega e^{i\omega k}\sum_j e^{-i\omega j}\hat\gamma_j = \frac{1}{T}\sum_j \hat\gamma_j \sum_\omega e^{i\omega(k-j)} = \frac{1}{T}\sum_j T\,\delta(k-j)\,\hat\gamma_j = \hat\gamma_k.$$
(Since we did not divide by $1/T^{1/2}$ going from $\hat\gamma$ to $I$, we divide by $T$ going back.) $\blacksquare$
9.4.4 Asymptotic distribution of sample spectral density
Here are some facts about the asymptotic distribution of these spectral density estimates:
$$\lim_{T\to\infty} E(I(\omega)) = S(\omega)$$
$$\lim_{T\to\infty}\mathrm{var}(I(\omega)) = \begin{cases} 2S^2(0) & \text{for } \omega = 0 \\ S^2(\omega) & \text{for } \omega \neq 0 \end{cases}$$
$$\lim_{T\to\infty}\mathrm{cov}(I(\omega), I(\lambda)) = 0 \text{ for } |\omega| \neq |\lambda|$$
$$2I(\omega)/S(\omega) \to \chi^2_2$$
Note that the variance of the sample spectral density does not go to
zero, as the variance of most estimators (not yet scaled by T^{1/2}) does. The
sample spectral density is not consistent, since its distribution does not
collapse around the true value. This variance problem isn't just a matter of
asymptotic mumbo-jumbo: plots of the sample spectral density of even very
smooth processes show a lot of jumpiness.

There are two ways to understand this inconsistency intuitively. First,
recall that I(ω) = x_ω x_ω′. Thus, I(ω) represents one data point's worth of
information. To get consistent estimates of anything, you need to include
increasing numbers of data points. Second, look at the definition as the sum
of sample autocovariances; the high autocovariances are on average weighted just
as much as the low autocovariances. But the last few autocovariances are
bad estimates: γ̂_{T−1} = (x_T x_1)/T no matter how big the sample. Thus I(ω)
always contains estimates with very high variance.
9.4.5 Smoothed periodogram estimates

Here is one solution to this problem: instead of estimating S(ω) by I(ω),
average I(ω) over several nearby ω's. Since S(ω) is a smooth function and
adjacent I(ω) are uncorrelated in large samples, this operation reduces variance
without adding too much bias. Of course, how many nearby I(ω) to
include will be a tricky issue, a trade-off between variance and bias. As
T → ∞, promise to slowly reduce the range of averaged ω's. You want to
reduce it so that as T → ∞ you are in fact only estimating S(ω), but you
want to reduce it slowly so that larger and larger numbers of x_ω enter the
band as T → ∞.
Precisely, consider a smoothed periodogram estimator

Ŝ(ω) = ∫_{−π}^{π} h(λ − ω) I(λ) dλ   or   Ŝ(ω) = Σ_{λᵢ} h(λᵢ − ω) I(λᵢ),

where h is a moving average function as shown in figure 9.3.

[Figure 9.3: Smoothed periodogram estimate as sample size increases (small T, medium T, large T panels). Note that the window size decreases with sample size, but the number of frequencies in the window increases.]

To make the estimate asymptotically unbiased and its
variance go to zero (and also consistent) you want to make promises as shown
in the figure: Since more and more periodogram ordinates enter as T → ∞,
the variance goes to zero. But since the size of the window goes to zero as
T → ∞, the bias goes to zero as well.
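The variance reduction is easy to see numerically. The sketch below (my own illustration; the AR(1) parameter, flat window shape, and bandwidth are arbitrary choices) smooths the raw periodogram ordinates of an AR(1) series over nearby Fourier frequencies and compares both estimates against the known spectral density.

```python
import numpy as np

rng = np.random.default_rng(5)
T, phi = 4096, 0.5
eps = rng.standard_normal(T)
x = np.zeros(T)
for t in range(1, T):                 # AR(1): x_t = 0.5 x_{t-1} + eps_t
    x[t] = phi * x[t - 1] + eps[t]

I = np.abs(np.fft.fft(x - x.mean())) ** 2 / T   # raw sample spectral density
half = 10                                        # ordinates averaged on each side
kern = np.ones(2 * half + 1) / (2 * half + 1)    # flat h(.) window
# average over nearby frequencies, wrapping around the circle of Fourier frequencies
S_hat = np.convolve(np.r_[I[-half:], I, I[:half]], kern, mode="valid")

w = 2 * np.pi * np.arange(T) / T
S_true = 1.0 / (1 + phi**2 - 2 * phi * np.cos(w))  # AR(1) spectral density, sigma^2 = 1

mse_raw = np.mean((I - S_true) ** 2)
mse_smooth = np.mean((S_hat - S_true) ** 2)
print(mse_smooth < mse_raw)          # smoothing cuts the mean squared error
```

A wider window would cut variance further at the cost of more bias near the peak at ω = 0, which is the trade-off discussed above.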
9.4.6 Weighted covariance estimates
Viewing the problem as the inclusion of poorly measured, high-order auto-
covariances, we might try to estimate the spectral density by lowering the
weight on the high autocovariances. Thus, consider

Ŝ(ω) = Σ_{k=−(T−1)}^{T−1} e^{−iωk} g(k) γ̂_k

where g(k) is a weighting function as shown in figure 9.4.

[Figure 9.4: Covariance weighting function g(k) laid over the autocovariances γ̂_k.]

For example, the Bartlett window has a triangular shape,

Ŝ(ω) = Σ_{k=−r}^{r} e^{−iωk} (1 − |k|/r) γ̂_k.
Again, there is a trade-off between bias and variance. Down-weighting
the high autocovariances improves variance but introduces bias. Again, one
can make promises to make this go away in large samples. Here, one wants
to promise that g(k) → 1 as T → ∞ to eliminate bias. Thus, for example,
an appropriate set of Bartlett promises is r → ∞ and r/T → 0 as T → ∞;
this can be achieved with r ∼ T^{1/2}.
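A Bartlett estimate takes only a few lines. The sketch below (my own illustration; white noise, the seed, and the r ∼ T^{1/2} rule are arbitrary choices) estimates the spectral density at frequency zero, where the true value is 1.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000
x = rng.standard_normal(T)        # white noise: true S(w) = 1 at every frequency

gamma = np.array([x[k:] @ x[:T - k] / T for k in range(T)])

def bartlett_S(w, r):
    # S_hat(w) = sum over |k| <= r of e^{-iwk} (1 - |k|/r) gamma_hat_k
    k = np.arange(1, r)
    return gamma[0] + 2 * np.sum((1 - k / r) * np.cos(w * k) * gamma[k])

r = int(T ** 0.5)                 # a Bartlett "promise": r grows, but r/T -> 0
print(bartlett_S(0.0, r))         # close to the true value 1
```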
9.4.7 Relation between weighted covariance and smoothed periodogram estimates

Not surprisingly, these two estimates are related. Every weighted covariance
estimate is equivalent to a smoothed periodogram estimate, where the
smoothing function is proportional to the fourier transform of the weighting
function; and vice versa.
Proof:

Ŝ(ω) = Σ_k e^{−iωk} g(k) γ̂_k = Σ_k e^{−iωk} g(k) (1/2π) ∫_{−π}^{π} e^{iλk} I(λ) dλ

= ∫_{−π}^{π} ( (1/2π) Σ_k e^{−i(ω−λ)k} g(k) ) I(λ) dλ = ∫_{−π}^{π} h(ω − λ) I(λ) dλ.

Similarly, one can use the finite fourier transform at T frequencies,

Ŝ(ω) = Σ_k e^{−iωk} g(k) γ̂_k = Σ_k e^{−iωk} g(k) (1/T) Σ_λ e^{iλk} I(λ)

= Σ_λ ( (1/T) Σ_k e^{−i(ω−λ)k} g(k) ) I(λ) = Σ_λ h(ω − λ) I(λ). ∎
9.4.8 Variance of filtered data estimates

A last, equivalent approach is to filter the data with a filter that isolates
components in a frequency window, and then take the variance of the filtered
series. After all, spectral densities are supposed to be the variance of
components at various frequencies. With a suitably chosen filter, this
approach is equivalent to weighted periodogram or covariance estimates. Thus,
let

x_t^f = F(L) x_t.

Hence,

var(x_t^f) = (1/2π) ∫_{−π}^{π} |F(e^{−iλ})|² S_x(λ) dλ.

So all you have to do is pick F(L) so that

(1/2π) |F(e^{−iλ})|² = h(ω − λ).

Variance ratio estimates of the spectral density at frequency zero are examples
of this procedure.
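The variance formula above is easy to verify for a simple filter. In this sketch (my own illustration, not from the text) the filter is F(L) = 1 − L applied to white noise, so the integral can be computed on a grid and compared with the sample variance of the filtered series; both should be near 2.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(500_000)        # white noise: S_x(lam) = 1 everywhere
xf = x[1:] - x[:-1]                     # filtered series, F(L) = 1 - L

# (1/2pi) * integral over [-pi, pi] of |F(e^{-i lam})|^2 S_x(lam) d lam,
# approximated by the grid average of the integrand
lam = np.linspace(-np.pi, np.pi, 20_001)
theory = np.mean(np.abs(1 - np.exp(-1j * lam)) ** 2 * 1.0)

print(xf.var(), theory)                 # both near 2
```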
9.4.9 Spectral density implied by ARMA models

All of the above estimates are non-parametric. One reason I did them is
to introduce you to increasingly fashionable non-parametric estimation. Of
course, one can estimate spectral densities parametrically as well. Fit an AR
or ARMA process, find its Wold representation

x_t = θ(L) ε_t

and then

Ŝ(ω) = |θ(e^{−iω})|² σ̂².

How well this works depends on the quality of the parametric approximation.
Since OLS tries to match the whole frequency range, this technique may
sacrifice accuracy in small windows that you might be interested in, to get
more accuracy in larger windows you might not care about.
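For a known model the implied spectral density can be evaluated directly from the Wold coefficients. The sketch below (my own illustration; the AR(1) parameter and the truncation point are arbitrary) builds θ(e^{−iω}) for an AR(1), whose Wold coefficients are θ_j = φ^j, and checks the result against the closed form used earlier.

```python
import numpy as np

phi, sigma2 = 0.5, 1.0
J = 200                                   # truncate the MA(infinity) expansion
theta = phi ** np.arange(J)               # Wold coefficients of an AR(1)

w = np.linspace(0, np.pi, 100)
theta_e = theta @ np.exp(-1j * np.outer(np.arange(J), w))   # theta(e^{-iw})
S_arma = np.abs(theta_e) ** 2 * sigma2

S_closed = sigma2 / (1 + phi**2 - 2 * phi * np.cos(w))      # closed form
assert np.allclose(S_arma, S_closed, atol=1e-4)
```

In practice θ(L) comes from an estimated AR or ARMA model rather than from known coefficients, and the quality caveat in the text applies.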
9.4.10 Asymptotic distribution of spectral estimates
The asymptotic distribution of smoothed periodogram / weighted covariance
estimates obviously will depend on the shape of the window / weighting
function, and the promises you make about how that shape varies as T → ∞.
When you want to use one, look it up.
Chapter 10
Unit Roots
10.1 Random Walks

The basic random walk is

x_t = x_{t−1} + ε_t;   E_{t−1}(ε_t) = 0.

Note the property

E_t(x_{t+1}) = x_t.

As a result of this property, random walks are popular models for asset prices.
Random walks have a number of interesting properties.

1) The impulse-response function of a random walk is one at all horizons.
The impulse-response function of stationary processes dies out eventually.

2) The forecast variance of the random walk grows linearly with the forecast
horizon:

var(x_{t+k} | x_t) = var(x_{t+k} − x_t) = kσ².

The forecast error variance of a stationary series approaches a constant, the
unconditional variance of that series. Of course, the variance of the random
walk is infinite, so in a sense, the same is true.
3) The autocovariances of a random walk aren't defined, strictly speaking.
However, you can think of the limit of an AR(1), x_t = φx_{t−1} + ε_t, as the
autoregression parameter φ goes to 1. Then, for a random walk,

ρ_j = 1 for all j.

Thus, a sign of a random walk is that all the estimated autocorrelations are
near one, or die out "too slowly".
4) The spectral density (normalized by the variance) of the AR(1) is

f(ω) = [(1 − φe^{−iω})(1 − φe^{iω})]^{−1} = 1 / (1 + φ² − 2φ cos(ω)).

In the limit φ → 1 we get

f(ω) = 1 / (2(1 − cos(ω))).

As ω → 0, f(ω) → ∞. Thus, the variance of a random walk is primarily due
to low-frequency components. The signature of a random walk is its tendency
to wander around at low frequencies.
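Property 2) is simple to confirm by simulation. This sketch (my own illustration; sample sizes and horizons are arbitrary) draws many independent random walks and checks that the variance of the k-step change is close to kσ².

```python
import numpy as np

rng = np.random.default_rng(3)
N, sigma = 200_000, 1.0
steps = rng.normal(0.0, sigma, size=(N, 30))   # N walks, 30 shocks each
for k in (5, 15, 30):
    fe = steps[:, :k].sum(axis=1)              # x_{t+k} - x_t for each walk
    print(k, fe.var())                         # grows linearly, near k * sigma^2
```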
10.2 Motivations for unit roots

10.2.1 Stochastic trends

One reason macroeconomists got interested in unit roots is the question of
how to represent trends in time series. Until the late 1970s it was common to
simply fit a linear trend to log GNP (by OLS), and then define the stochastic
part of the time series as deviations from this trend. This procedure led to
problems when it seemed like the "trend", "potential" etc. GNP growth
rate slowed down. Since the slowdown was not foreseen, it was hard to capture
with deterministic trends such as polynomials. Instead, macroeconomists got
interested in stochastic trends, and random-walk type processes give a
convenient representation of such trends since they wander around at low
frequencies.
10.2.2 Permanence of shocks

Once upon a time, macroeconomists routinely detrended data, and it was
widely accepted that business cycles were short-run (no more than a
few years, at most) deviations from trend. However, macroeconomists have
recently questioned this time-honored assumption, and have started to wonder
whether shocks to GNP might more closely resemble the permanent
shocks of a random walk than the transitory shocks of the old AR(2)
about a linear trend. In the first round of these tests, it was claimed that the
permanence of shocks shed light on whether they were "real" ("technology")
or "monetary", "supply" or "demand", etc. Now, it's fairly well accepted
that nothing of direct importance hangs on the permanence of shocks, but it
is still an interesting stylized fact.
At the same time, financial economists got interested in the question
of whether stock returns are less than perfect random walks. It turns out
that the same techniques that are good for quantifying how much GNP does
behave like a random walk are useful for quantifying the extent to which
stock prices do not exactly follow a random walk. Again, some authors once
thought that these tests were convincing evidence about “efficient markets”,
but now most recognize that this is not the case.
10.2.3 Statistical issues
At the same time, the statistical issue mentioned above made it look likely
that we could have mistaken time series with unit roots for trend stationary
time series. This motivated Nelson and Plosser (1982) to test macroeconomic
time series for unit roots. They found they could not reject unit roots in most
time series. They interpreted this finding as evidence for technology shocks,
though Campbell and Mankiw (1987) interpreted the exact same findings as
evidence for long-lasting Keynesian stickiness. Whatever the interpretation,
we became more aware of the possibility of long run movements in time-series.
Here are a few examples of the statistical issues. These are for motivation
only at this stage; we’ll look at distribution theory under unit roots in more
detail in chapter x.
Distribution of AR(1) estimates

Suppose a series is generated by a random walk

y_t = y_{t−1} + ε_t.

You might test for a random walk by running

y_t = μ + φy_{t−1} + ε_t

by OLS and testing whether φ = 1. However, the assumptions underlying the
usual asymptotic distribution theory for OLS estimates and test statistics are
violated here, since x′x/T does not converge in probability.

Dickey and Fuller looked at the distribution of this kind of test statistic
and found that OLS estimates are biased down (towards stationarity) and
OLS standard errors are tighter than the actual standard errors. Thus, it is
possible that many series that you would have thought were stationary based
on OLS regressions were in fact generated by random walks.
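The downward bias is easy to reproduce. The Monte Carlo below (my own illustration; T, the number of replications, and the seed are arbitrary) regresses a simulated random walk on a constant and its own lag and averages the OLS slope across replications.

```python
import numpy as np

rng = np.random.default_rng(4)
T, reps = 100, 2000
phi_hat = np.empty(reps)
for i in range(reps):
    y = np.cumsum(rng.standard_normal(T))            # random walk: true phi = 1
    X = np.column_stack([np.ones(T - 1), y[:-1]])    # regress y_t on constant, y_{t-1}
    phi_hat[i] = np.linalg.lstsq(X, y[1:], rcond=None)[0][1]
print(phi_hat.mean())    # noticeably below 1: OLS is biased toward stationarity
```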
Inappropriate detrending

Things get worse with a trend in the model. Suppose the real model is

y_t = μ + y_{t−1} + ε_t.

Suppose you detrend by OLS, and then estimate an AR(1), i.e., fit the model

y_t = bt + (1 − φL)^{−1} ε_t.

This model is equivalent to

(1 − φL)y_t = (1 − φL)bt + ε_t = bt − φb(t − 1) + ε_t = φb + b(1 − φ)t + ε_t

or

y_t = α + γt + φy_{t−1} + ε_t,

so you could also directly run y on a time trend and lagged y.

It turns out that this case is even worse than the last one, in that φ̂ is
biased downward and the OLS standard errors are misleading. Intuitively,
the random walk generates a lot of low-frequency movement. In a relatively
small sample, the random walk is likely to drift up or down; that drift could
well be (falsely) modeled by a linear (or nonlinear, "breaking", etc.) trend.
Claims to see trends in series that are really generated by random walks are
the central fallacy behind much “technical analysis” of asset markets.
Spurious regression

Last, suppose two series are generated by independent random walks,

x_t = x_{t−1} + ε_t
y_t = y_{t−1} + δ_t;   E(ε_t δ_s) = 0 for all t, s.

Now, suppose we run y_t on x_t by OLS,

y_t = α + βx_t + ν_t.

Again, the assumptions behind the usual distribution theory are violated. In
this case, you tend to see "significant" β̂ more often than the OLS formulas
say you should.
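The spurious regression phenomenon shows up immediately in simulation. The sketch below (my own illustration; T, the replication count, and the 5% nominal level are arbitrary choices) regresses one independent random walk on another and counts how often the conventional t-statistic rejects β = 0.

```python
import numpy as np

rng = np.random.default_rng(6)
T, reps = 100, 1000
rejections = 0
for i in range(reps):
    x = np.cumsum(rng.standard_normal(T))     # two independent random walks
    y = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ b
    s2 = u @ u / (T - 2)                      # conventional OLS error variance
    var_b = s2 * np.linalg.inv(X.T @ X)[1, 1]
    rejections += abs(b[1] / np.sqrt(var_b)) > 1.96
print(rejections / reps)    # far above the nominal 5% rejection rate
```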
There are an enormous number of this kind of test in the literature. They
generalize the random walk to allow serial correlation in the error (unit root
processes; we’ll study these below) and a wide variety of trends in both the
data generating model and the estimated processes. Campbell and Perron
(1991) give a good survey of this literature.
10.3 Unit root and stationary processes

The random walk is an extreme process. GNP and stock prices may follow
processes that have some of these properties, but are not as extreme. One
way to think of a more general process is as a random walk with a serially
correlated disturbance:

(1 − L)y_t = μ + a(L)ε_t.

These are called unit root or difference stationary (DS) processes. In the
simplest version, a(L) = 1, the DS process is a random walk with drift,

y_t = μ + y_{t−1} + ε_t.
Alternatively, we can consider a process such as log GNP to be stationary
around a linear trend:

y_t = μt + b(L)ε_t.

These are sometimes called trend-stationary (TS) processes.

The TS model can be considered as a special case of the DS model. If
a(L) contains a unit root, we can write the DS model as

y_t = μt + b(L)ε_t ⇒ (1 − L)y_t = μ + (1 − L)b(L)ε_t = μ + a(L)ε_t   (a(L) = (1 − L)b(L)).

Thus if the TS model is correct, the DS model is still valid and stationary.
However, it has a noninvertible unit MA root.
The DS model is a perfectly normal model for differences. We can think
of unit roots as the study of the implications for levels of a process that is
stationary in differences, rather than as a generalized random walk. For this
reason, it is very important in what follows to keep track of whether you are
thinking about the level of the process yt or its first difference.
Next, we'll characterize some of the ways in which TS and DS processes
differ from each other.
10.3.1 Response to shocks

The impulse-response function¹ of the TS model is the same as before: b_j =
j-period-ahead response. For the DS model, a_j gives the response of the
difference (1 − L)y_{t+j} to a shock at time t. The response of the level of log
GNP y_{t+j} is the sum of the responses of the differences,

response of y_{t+j} to shock at t = (y_t − y_{t−1}) + (y_{t+1} − y_t) + ... + (y_{t+j} − y_{t+j−1})

= a_0 + a_1 + a_2 + ... + a_j.

See figure 10.1 for a plot.

¹ Since these models have means and trends in them, we define the impulse-response
function as E_t(y_{t+j}) − E_{t−1}(y_{t+j}) when there is a unit shock at time t. It doesn't make
much sense to define the response to a unit shock when all previous values are zero!

[Figure 10.1: Response of differences and level for a series y_t with a unit root. The a_j (response of Δy) die out, while their partial sums Σ a_j (response of y) converge to a(1).]
The limiting value of the impulse-response of the DS model is

Σ_{j=0}^{∞} a_j = a(1).

Since the TS model is a special case in which a(L) = (1 − L)b(L), a(1) = 0
if the TS model is true. As we will see, this (and only this) is the feature that
distinguishes TS from DS models once we allow arbitrary serial correlation,
i.e. arbitrary a(L) structure.
What the DS model allows, which the random walk does not, is cases
intermediate between stationary and random walk. Following a shock, the
series could come back towards, but not all the way back to, its initial value.
This behavior is sometimes called “long-horizon mean-reverting”. For ex-
ample, if stock prices are not pure random walks, they should decay over a
period of years following a shock, or their behavior would suggest unexploited
profit opportunities. Similarly, GNP might revert back to trend following a
shock over the course of a business cycle, say a few years, rather than never
(random walk) or in a quarter or two.
Figure 10.2 shows some possibilities.
a(1) > 1
a(1) = 1
Random walk
“Mean reversion’’
0 < a(1) < 1
a(1) = 0
Stationary
Figure 10.2: Impluse-response functions for different values of a(1).
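The intermediate case is easy to see by cumulating the a_j. In this sketch (my own illustration; a(L) = 1 − 0.6L is an arbitrary choice) a(1) = 0.4, so a unit shock moves the level up by 1 and the level then settles back to 0.4: mean reversion, but not full reversion.

```python
import numpy as np

# (1 - L) y_t = a(L) eps_t with a(L) = 1 - 0.6 L, so a(1) = 0.4
a = np.zeros(40)
a[0], a[1] = 1.0, -0.6
level_response = np.cumsum(a)          # response of y_{t+j}: a_0 + ... + a_j
print(level_response[:3], level_response[-1])   # starts at 1.0, settles at 0.4
```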
10.3.2 Spectral density

The spectral density of the DS process is

S_{(1−L)y_t}(ω) = |a(e^{−iω})|² σ².

The spectral density at frequency zero is S_{(1−L)y_t}(0) = a(1)²σ². Thus, if
|a(1)| > 0, then the spectral density at zero of Δy_t is greater than zero. If
a(1) = 0 (TS), then the spectral density of Δy_t at zero is equal to zero.
Figure 10.3 shows some intermediate possibilities.
[Figure 10.3: Spectral densities for different values of a(1): random walk (a(1) = 1); "mean reversion" (0 < a(1) < 1); stationary (a(1) = 0).]
Here ”long horizon mean reversion” shows up if the spectral density is
high quite near zero, and then drops quickly.
10.3.3 Autocorrelation

The spectral density at zero is

S_{(1−L)y_t}(0) = γ_0 + 2 Σ_{j=1}^{∞} γ_j = (1 + 2 Σ_{j=1}^{∞} ρ_j) γ_0 = a(1)²σ².
Thus, if the process is DS, | a(1) |> 0, the sum of the autocorrelations is
non-zero. If it is TS, a(1) = 0, the sum of the autocorrelations is zero. If
it is a random walk, the sum of the autocorrelations is one; if it is mean
reverting, then the sum of the autocorrelations is less than one.
The "long-horizon" alternative shows up if there are many small negative
autocorrelations at high lags that draw the process back towards its initial
value following a shock.
10.3.4 Random walk components and stochastic trends

A second way to think of processes more general than a random walk, or
intermediate between random walk and stationary, is to think of combinations
of random walks and stationary components. This is entirely equivalent to
our definition of a DS process, as I'll show.

Fact: every DS process can be written as a sum of a random walk and a
stationary component.

I need only exhibit one way of doing it. A decomposition with particularly
nice properties is the

Beveridge-Nelson decomposition: If (1 − L)y_t = μ + a(L)ε_t then we can
write

y_t = c_t + z_t

where

z_t = μ + z_{t−1} + a(1)ε_t

c_t = a*(L)ε_t;   a*_j = −Σ_{k=j+1}^{∞} a_k.
Proof: The decomposition follows immediately from the algebraic
fact that any lag polynomial a(L) can be written as

a(L) = a(1) + (1 − L)a*(L);   a*_j = −Σ_{k=j+1}^{∞} a_k.

Given this fact, we have (1 − L)y_t = μ + a(1)ε_t + (1 − L)a*(L)ε_t
= (1 − L)z_t + (1 − L)c_t, so we're done. To show the fact, just
write it out:

a(1):           a_0  +a_1   +a_2   +a_3  ...
(1 − L)a*(L):        −a_1   −a_2   −a_3  ...
                     +a_1 L +a_2 L +a_3 L ...
                            −a_2 L −a_3 L ...
                            ...

When you cancel terms, nothing but a(L) remains. ∎
There are many ways to decompose a unit root process into stationary and
random walk components. The B-N decomposition has a special property: the
random walk component is a sensible definition of the "trend" in y_t. z_t is the
limiting forecast of future y, or today's y plus all future expected changes
in y. If GNP is forecasted to rise, GNP is "below trend" and vice versa.
Precisely,

z_t = lim_{k→∞} E_t(y_{t+k} − kμ) = y_t + Σ_{j=1}^{∞} (E_t Δy_{t+j} − μ).

The first definition is best illustrated graphically, as in figure 10.4.
[Figure 10.4: Beveridge-Nelson decomposition (labels: E_t(y_{t+j}), c_t, z_t).]
Given this definition of z_t, we can show that it's the same z as in the
B-N decomposition:

z_t − z_{t−1} = lim_{k→∞} (E_t(y_{t+k}) − E_{t−1}(y_{t+k}) + μ).

E_t(y_{t+k}) − E_{t−1}(y_{t+k}) is the response of y_{t+k} to the shock at t,
E_t(y_{t+k}) − E_{t−1}(y_{t+k}) = (Σ_{j=0}^{k} a_j) ε_t. Thus

lim_{k→∞} (E_t(y_{t+k}) − E_{t−1}(y_{t+k})) = a(1)ε_t

and

z_t = μ + z_{t−1} + a(1)ε_t.
The construction of the B-N trend shows that the innovation variance of
the random walk component is a(1)²σ². Thus, if the series is already TS, the
innovation variance of the random walk component is zero. If the series has
a small a(1), and thus is "mean-reverting", it will have a small random walk
component. If the series already is a random walk, then a_0 = 1 and a_j = 0
for j > 0, so y_t = z_t.

Beveridge and Nelson defined the trend as above, and then derived the
process for c_t as well as z_t. I went about it backward with the advantage of
hindsight.
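The decomposition can be computed directly for a simple example. In this sketch (my own illustration; the MA(1) difference process and drift are arbitrary choices), Δy_t = μ + ε_t + θε_{t−1}, so a(1) = 1 + θ, a*_0 = −θ, and a*_j = 0 for j > 0; the reconstructed z_t + c_t matches y_t exactly.

```python
import numpy as np

rng = np.random.default_rng(9)
T, mu, theta = 400, 0.1, 0.5
eps = rng.standard_normal(T)
eps_lag = np.r_[0.0, eps[:-1]]

# (1-L) y_t = mu + a(L) eps_t with a(L) = 1 + theta*L, so a(1) = 1 + theta
dy = mu + eps + theta * eps_lag
y = np.cumsum(dy)

# B-N: z_t = mu + z_{t-1} + a(1) eps_t, c_t = a*(L) eps_t with a*_0 = -theta
z = np.cumsum(mu + (1 + theta) * eps)
c = -theta * eps

assert np.allclose(y, z + c)
```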
In the Beveridge-Nelson decomposition the innovations to the stationary
and random walk components are perfectly correlated. Consider instead an
arbitrary combination of stationary and random walk components, in which
the innovations have some arbitrary correlation:

y_t = z_t + c_t
z_t = μ + z_{t−1} + ν_t
c_t = b(L)δ_t.

Fact: in every decomposition of y_t into stationary and random walk components,
the variance of changes to the random walk component is the same,
a(1)²σ².

Proof: Take differences of y_t,

(1 − L)y_t = (1 − L)z_t + (1 − L)c_t = μ + ν_t + (1 − L)b(L)δ_t.

(1 − L)y_t is stationary, so must have a Wold representation

(1 − L)y_t = μ + ν_t + (1 − L)b(L)δ_t = μ + a(L)ε_t.

Its spectral density at frequency zero is

S_{Δy}(0) = a(1)²σ² = σ_ν².

The (1 − L) means that the δ term does not affect the spectral
density at zero. Thus, the spectral density at zero of (1 − L)y_t is
the innovation variance of ANY random walk component. ∎
The "long-horizon mean reverting" case is one of a small a(1). Thus
this case corresponds to a small random walk component and a large and
interesting stationary component.
10.3.5 Forecast error variances

Since the unit root process is composed of a stationary plus random walk
component, you should not be surprised if the unit root process has the
same variance of forecasts behavior as the random walk, once the horizon is
long enough that the stationary part has all died down.

To see this, use the Beveridge-Nelson decomposition:

y_{t+k} = z_{t+k} + c_{t+k} = z_t + kμ + a(1)(ε_{t+1} + ε_{t+2} + ... + ε_{t+k}) + a*(L)ε_{t+k}.

The variance of the first term is

ka(1)²σ².

For large k, the variance of the second term approaches its unconditional
variance var(c_t) = var(a*(L)ε_t). Since the a*(L) die out, the covariance term
is also dominated by the first term. (If a*(L) only had finitely many terms
this would be even more obvious.) Thus, at large k,

var_t(y_{t+k}) → ka(1)²σ² + (terms that grow slower than k).

The basic idea is that the random walk component will eventually dominate,
since its variance goes to infinity and the variance of anything stationary is
eventually limited.
10.3.6 Summary

In summary, the quantity a(1) controls the extent to which a process looks
like a random walk. If

(1 − L)y_t = μ + a(L)ε_t

then

a(1) = limit of y_t impulse-response
a(1)²σ² = S_{(1−L)y_t}(0)
a(1)²σ² = innovation variance of the random walk component
a(1)²σ² = (1 + 2 Σ_{j=1}^{∞} ρ_j) var(Δy_t)
a(1)²σ² = γ_0 + 2 Σ_{j=1}^{∞} γ_j
a(1)²σ² = lim_{k→∞} var_t(y_{t+k})/k.

In these many ways a(1) quantifies the extent to which a series "looks like"
a random walk.
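One of these equivalences is easy to verify by simulation. In the sketch below (my own illustration; θ, k, and the sample size are arbitrary), differences follow an MA(1), Δy_t = ε_t + θε_{t−1}, so a(1)²σ² = (1 + θ)²; the variance of k-period changes divided by k should approach that value.

```python
import numpy as np

# (1-L) y_t = eps_t + 0.5 eps_{t-1}: a(1)^2 sigma^2 = 1.5^2 = 2.25
theta, k = 0.5, 50
rng = np.random.default_rng(10)
eps = rng.standard_normal((100_000, k + 1))
dy = eps[:, 1:] + theta * eps[:, :-1]      # MA(1) differences, one row per draw
vr = dy.sum(axis=1).var() / k              # var(y_{t+k} - y_t) / k
print(vr)                                  # near a(1)^2 sigma^2 = 2.25
```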
10.4 Summary of a(1) estimates and tests.

Obviously, estimating and testing a(1) is going to be an important task.
Before reviewing some approaches, there is a general point that must be
made first.

10.4.1 Near-observational equivalence of unit roots and stationary processes in finite samples
So far, I’ve shown that a(1) distinguishes unit root from stationary processes.
Furthermore a(1) is the only thing that distinguishes unit roots from station-
ary processes. There are several ways to see this point.
1) Given a spectral density, we could change the value at zero to some
other value leaving the rest alone. If the spectral density at zero starts
positive, we could set it to zero, making a stationary process out of a unit
root process and vice versa.
2) Given a Beveridge-Nelson decomposition, we can construct a new se-
ries with a different random walk component. Zap out the random walk
component, and we’ve created a new stationary series; add a random walk
component to a stationary series and you’ve created a unit root. The variance
of the random walk component was determined only by a(1).
3) And so forth (See Cochrane (1991) for a longer list).
(Actually the statement is only true in a finite sample. In population, we
also know that the slope of the spectral density is zero at frequency zero:

S(ω) = γ_0 + 2 Σ_{j=1}^{∞} cos(jω) γ_j

dS(ω)/dω |_{ω=0} = −2 Σ_{j=1}^{∞} j sin(jω) γ_j |_{ω=0} = 0.

Thus, you can't just change the point at zero. Changing the spectral density
at a point also doesn't make any difference, since it leaves all integrals
unchanged. However, in a finite sample, you can change the periodogram
ordinate at zero leaving all the others alone.)
Another way of stating the point is that (in a finite sample) there are
unit root processes arbitrarily "close" to any stationary process, and there
are stationary processes arbitrarily "close" to any unit root process. To see
the first point, take a stationary process and add an arbitrarily small random
walk component. To see the second, take a unit root process and change the
unit root to 0.9999999. "Close" here can mean any statistical measure of
distance, such as autocovariance functions, likelihood functions, etc.
Given these points, you can see that testing for a unit root vs. stationary
process is hopeless in a finite sample. We could always add a tiny random
walk component to a stationary process and make it a unit root process; yet
in a finite sample we could never tell the two processes apart.
What ”unit root tests” do is to restrict the null: they test for a unit root
plus restrictions on the a(L) polynomial, such as a finite order AR, versus
trend stationary plus restrictions on the a(L) polynomial. Then, they promise
to slowly remove the restrictions as sample size increases.
10.4.2 Empirical work on unit roots/persistence

Empirical work generally falls into three categories:

1) Tests for unit roots (Nelson and Plosser (1982)). The first kind of tests
were tests of whether series such as GNP contained unit roots. As we have
seen, the problem is that such tests must be accompanied by restrictions on
a(L) or they have no content. Furthermore, it's not clear that we're that
interested in the results. If a series has a unit root but tiny a(1) it behaves
almost exactly like a stationary series. For both reasons, unit root tests are
in practice just tests for the size of a(1).

Nonetheless, there is an enormous literature on testing for unit roots.
Most of this literature centers on the asymptotic distribution of various test
procedures under a variety of null hypotheses. The problem is econometrically
interesting because the asymptotic distribution (though not the finite
sample distribution) is usually discontinuous as the root goes to 1. If there
is even a tiny random walk component, it will eventually swamp the rest of
the series as the sample grows.
2) Parametric measures of a(1) (Campbell and Mankiw (1988)). In this
kind of test, you fit a parametric (ARMA) model for GNP and then find the
implied a(1) of this parametric model. This procedure has all the advantages
and disadvantages of any spectral density estimate by parametric model. If
the parametric model is correct, you gain power by using information at all
frequencies to fit it. If it is incorrect, it will happily forsake accuracy in the
region you care about (near frequency zero) to gain more accuracy in regions
you don't care about. (See the Sims approximation formula above.)
3) "Nonparametric" estimates of a(1) (Cochrane (1988), Lo and MacKinlay
(1988), Poterba and Summers (1988)). Last, one can use spectral density
estimates or their equivalent weighted covariance estimates to directly
estimate the spectral density at zero and thus a(1), ignoring the rest of the
process. This is the idea behind "variance ratio" estimates. These estimates
have much greater standard errors than parametric estimates, but less bias
if the parametric model is in fact incorrect.
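A bare-bones variance ratio can be sketched in a few lines (my own illustration; the horizon k, sample size, and seed are arbitrary, and this simple version omits the finite-sample and overlapping-observation corrections used in the cited papers). The ratio is near 1 for a pure random walk and near 1/k for a series that is stationary in levels.

```python
import numpy as np

def variance_ratio(y, k):
    # var(y_t - y_{t-k}) / (k * var(y_t - y_{t-1})): estimates S_dy(0) / var(dy)
    dy = np.diff(y)
    dk = y[k:] - y[:-k]
    return dk.var() / (k * dy.var())

rng = np.random.default_rng(7)
rw = np.cumsum(rng.standard_normal(20_000))   # pure random walk: ratio near 1
st = rng.standard_normal(20_000)              # stationary in levels: ratio near 1/k
print(variance_ratio(rw, 20), variance_ratio(st, 20))
```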
Chapter 11
Cointegration
Cointegration is a generalization of unit roots to vector systems. As usual, in
vector systems there are a few subtleties, but all the formulas look just like
obvious generalizations of the scalar formulas.

11.1 Definition

Suppose that two time series are each integrated, i.e. have unit roots, and
hence moving average representations

(1 − L)y_t = a(L)δ_t
(1 − L)w_t = b(L)ν_t.

In general, linear combinations of y and w also have unit roots. However, if
there is some linear combination, say y_t − αw_t, that is stationary, y_t and w_t
are said to be cointegrated, and [1, −α] is their cointegrating vector.
Here are some plausible examples. Log GNP and log consumption each
probably contain a unit root. However, the consumption/GNP ratio is sta-
ble over long periods, thus log consumption − log GNP is stationary, and log
GNP and consumption are cointegrated. The same holds for any two com-
ponents of GNP (investment, etc). Also, log stock prices certainly contain
a unit root; log dividends probably do too; but the dividend/price ratio is
stationary. Money and prices are another example.
11.2 Cointegrating regressions

Like unit roots, cointegration attracts much attention for statistical reasons
as well as for the economically interesting time-series behavior that it
represents. An example of the statistical fun is that estimates of cointegrating
vectors are "superconsistent": you can estimate them by OLS even when the
right hand variables are correlated with the error terms, and the estimates
converge at a faster rate than usual.

Suppose y_t and w_t are cointegrated, so that y_t − αw_t is stationary. Now,
consider running

y_t = βw_t + u_t

by OLS. OLS estimates of β converge to α, even if the errors u_t are correlated
with the right hand variables w_t!

As a simple way to see this point, recall that OLS tries to minimize
the variance of the residual. If y_t − w_t α is stationary, then for any β ≠ α,
y_t − w_t β has a unit root and hence contains a random walk (recall the
B-N decomposition above). Thus, the variance of (y_t − w_t β), β ≠ α, increases
to ∞ as the sample size increases, while the variance of (y_t − w_t α) approaches
some finite number. Thus, OLS will pick β̂ = α in large samples.
Here's another way to see the same point. The OLS estimate is

β̂ = (W′W)^{−1}(W′Y) = (Σ_t w_t²)^{−1} (Σ_t w_t(αw_t + u_t))

= α + Σ_t w_t u_t / Σ_t w_t² = α + ( (1/T) Σ_t w_t u_t ) / ( (1/T) Σ_t w_t² ).

Normally, the plim of the last term is not zero, so β̂ is an inconsistent
estimate of α: the denominator converges to a nonzero constant, and so does
the numerator, since we have not assumed that E(w_t u_t) = 0. But when
w_t has a unit root, the denominator of the last term goes to ∞, so OLS is
consistent even if the numerator does not converge to zero! Furthermore,
the denominator goes to ∞ very fast, so β̂ converges to α at rate T rather
than the usual rate T^{1/2}.
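Superconsistency is striking in simulation. The sketch below (my own illustration; α, the error correlation, the seed, and the sample sizes are arbitrary) builds a cointegrated pair whose error is deliberately correlated with the regressor's innovations and runs plain OLS with no instruments; the slope still converges to α.

```python
import numpy as np

rng = np.random.default_rng(8)
alpha = 2.0

def beta_hat(T):
    eta = rng.standard_normal(T)
    w = np.cumsum(eta)                        # unit-root right hand variable
    u = 0.8 * eta + rng.standard_normal(T)    # error correlated with w's innovations
    y = alpha * w + u                         # cointegrated: y - alpha*w stationary
    return (w @ y) / (w @ w)                  # OLS slope, no instruments

print(beta_hat(100), beta_hat(100_000))       # the second is much closer to alpha
```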
As an example, consider the textbook simultaneous equations problem:

y_t = c_t + a_t
c_t = αy_t + ε_t.

a_t is a shock (a_t = i_t + g_t); a_t and ε_t are iid and independent of each
other. If you estimate the c_t equation by OLS you get biased and inconsistent
estimates of α. To see this, you first solve the system for its reduced form,

y_t = αy_t + ε_t + a_t = (1/(1 − α))(ε_t + a_t)

c_t = (α/(1 − α))(ε_t + a_t) + ε_t = (1/(1 − α))ε_t + (α/(1 − α))a_t.

Then,

α̂ → plim((1/T) Σ c_t y_t) / plim((1/T) Σ y_t²) = plim((1/T) Σ (αy_t + ε_t)y_t) / plim((1/T) Σ y_t²)

= α + plim((1/T) Σ ε_t y_t) / plim((1/T) Σ y_t²) = α + (σ_ε²/(1 − α)) / ((σ_ε² + σ_a²)/(1 − α)²)

= α + (1 − α) σ_ε²/(σ_ε² + σ_a²).
As a result of this bias and inconsistency a lot of effort was put into estimating
"consumption functions" consistently, i.e. by 2SLS or other techniques.

Now, suppose instead that a_t = a_{t−1} + δ_t. This induces a unit root in y
and hence in c as well. But since ε_t is still stationary, c − αy is stationary, so
y and c are cointegrated. Now σ_a² → ∞, so plim((1/T) Σ y_t²) = ∞ and α̂ → α!
Thus, none of the 2SLS etc. corrections are needed!
More generally, estimates of cointegrating relations are robust to many of
the "errors correlated with right hand variables" problems with conventional
OLS estimators, including errors in variables as well as simultaneous equations
and other problems.
11.3 Representation of cointegrated system.

11.3.1 Definition of cointegration

Consider a first difference stationary vector time series x_t. The elements of
x_t are cointegrated if there is at least one vector α such that α′x_t is stationary
in levels. α is known as the cointegrating vector.

Since differences of x_t are stationary, x_t has a moving average representation

(1 − L)x_t = A(L)ε_t.

Since α′x_t stationary is an extra restriction, it must imply a restriction on
A(L). It shouldn't be too hard to guess that that restriction must involve
A(1)!
11.3.2 Multivariate Beveridge-Nelson decomposition

It will be useful to exploit the multivariate Beveridge-Nelson decomposition,

x_t = z_t + c_t,

(1 − L)z_t = A(1)ε_t

c_t = A*(L)ε_t;   A*_j = −Σ_{k=j+1}^{∞} A_k.

This looks just like, and is derived exactly as, the univariate B-N decomposition,
except the letters stand for vectors and matrices.
11.3.3 Rank condition on A(1)
Here's the restriction on A(1) implied by cointegration: The elements of x_t
are cointegrated with cointegrating vectors α_i iff α_i'A(1) = 0. This implies
that the rank of A(1) is (number of elements of x_t − number of cointegrating
vectors α_i).

    Proof: Using the multivariate Beveridge-Nelson decomposition,

        α'x_t = α'z_t + α'c_t.

    α'c_t is a linear combination of stationary random variables and is
    hence stationary. α'z_t is a linear combination of random walks.
    This is either constant or nonstationary. Thus, it must be constant,
    i.e. its variance must be zero and, since

        (1 − L)α'z_t = α'A(1)ε_t,

    we must have

        α'A(1) = 0.  ∎
In analogy to a(1) = 0 or |a(1)| > 0 in univariate time series, we now
have three cases:

Case 1: A(1) = 0 ⇔ x_t stationary in levels; all linear combinations of x_t
stationary in levels.

Case 2: A(1) less than full rank ⇔ (1 − L)x_t stationary, some linear
combinations α'x_t stationary.

Case 3: A(1) full rank ⇔ (1 − L)x_t stationary, no linear combinations of
x_t stationary.
For unit roots, we found that whether a(1) was zero or not controlled
the spectral density at zero, the long-run impulse-response function and the
innovation variance of random walk components. The same is true here for
A(1), as we see next.
11.3.4 Spectral density at zero
The spectral density matrix of (1 − L)x_t at frequency zero is S_{(1−L)x_t}(0) =
Ψ = A(1)ΣA(1)'. Thus, α'A(1) = 0 implies α'A(1)ΣA(1)' = 0, so the spectral
density matrix of (1 − L)x_t is also less than full rank, and α'Ψ = 0 for any
cointegrating vector α.
The fact that the spectral density matrix at zero is less than full rank
gives a nice interpretation to cointegration. In the 2x2 case, the spectral
density matrix at zero is less than full rank if its determinant is zero, i.e. if

    S_Δy(0) S_Δw(0) = |S_ΔyΔw(0)|²

This means that the components at frequency zero are perfectly correlated.
11.3.5 Common trends representation
Since the zero frequency components are perfectly correlated, there is in a
sense only one common zero frequency component in a 2-variable cointegrated
system. The common trends representation formalizes this idea.

Ψ = A(1)ΣA(1)' is also the innovation variance-covariance matrix of the
Beveridge-Nelson random walk components. When the rank of this matrix
is deficient, we obviously need fewer than N random walk components to
describe N series. This means that there are common random walk components.
In the N = 2 case, in particular, two cointegrated series are described
by stationary components around a single random walk.

Precisely, we're looking for a representation

    [y_t; w_t] = [a; b] z_t + (stationary)

(writing [u; v] for a stacked column vector).
Since the spectral density at zero (like any other covariance matrix) is
symmetric, it can be decomposed as

    Ψ = A(1)ΣA(1)' = QΛQ'

where

    Λ = [λ_1 0; 0 λ_2]

and QQ' = Q'Q = I. Then λ_i are the eigenvalues and Q is an orthogonal
matrix of corresponding eigenvectors. If the system has N series and K
cointegrating vectors, the rank of Ψ is N − K, so K of the eigenvalues are
zero.
Let ν_t define a new error sequence,

    ν_t = Q'A(1)ε_t

Then E(ν_t ν_t') = Q'A(1)ΣA(1)'Q = Q'QΛQ'Q = Λ. So the variance-covariance
matrix of the ν_t shocks is diagonal, and has the eigenvalues of the Ψ matrix
on the diagonal.

In terms of the new shocks, the Beveridge-Nelson trend is

    z_t = z_{t−1} + A(1)ε_t = z_{t−1} + Qν_t

But components of ν with variance zero might as well not be there. For
example, in the 2x2 case, we might as well write

    [z_{1t}; z_{2t}] = [z_{1,t−1}; z_{2,t−1}] + Q [ν_{1t}; ν_{2t}];    E(ν_t ν_t') = [λ_1 0; 0 0]

as

    [z_{1t}; z_{2t}] = [z_{1,t−1}; z_{2,t−1}] + [Q_{11}; Q_{21}] ν_{1t};    E(ν_{1t}²) = λ_1.
Finally, since z_1 and z_2 are perfectly correlated, we can write the system
with only one random walk as

    [y_t; w_t] = [Q_{11}; Q_{21}] z_t + A*(L)ε_t,

    (1 − L)z_t = ν_{1t} = [1 0] Q'A(1)ε_t

This is the common trends representation. y_t and w_t share a single common
trend, or common random walk component z_t.
We might as well order the eigenvalues λ_i from largest to smallest, with
the zeros on the bottom right. In this way, we rank the common trends by
their importance.

With univariate time series, we thought of a continuous scale of processes
between stationary and a pure random walk, by starting with a stationary
series and adding random walk components of increasing variance, indexed
by the increasing size of a(1)²σ². Now, start with a stationary series x_t, i.e.
let all the eigenvalues be zero. We can then add random walk components one
by one, and slowly increase their variance. The analogue to "near-stationary"
series we discussed before are "nearly cointegrated" series, with a very small
eigenvalue or innovation variance of the common trend.
11.3.6 Impulse-response function.
A(1) is the limiting impulse-response of the levels of the vector x_t. For
example, A(1)_yw is the long-run response of y to a unit w shock. To see
how cointegration affects A(1), consider a simple (and very common) case,
α = [1 −1]'. The reduced rank of A(1) means

    α'A(1) = 0

    [1 −1] [A(1)_yy A(1)_yw; A(1)_wy A(1)_ww] = 0

hence

    A(1)_yy = A(1)_wy
    A(1)_yw = A(1)_ww
each variable’s long-run response to a shock must be the same. The reason is
intuitive: if y and w had different long-run responses to a shock, the difference
y − w would not be stationary. Another way to think of this is that the
response of the difference y − w must vanish, since the difference must be
stationary. A similar relation holds for arbitrary α.
11.4 Useful representations for running cointegrated VAR's
The above are very useful for thinking about cointegration, but you have to
run AR’s rather than MA(∞)s. Since the Wold MA(∞) isn’t invertible when
variables are cointegrated, we have to think a little more carefully about how
to run VARs when variables are cointegrated.
11.4.1 Autoregressive Representations
Start with the autoregressive representation of the levels of x_t, B(L)x_t = ε_t,
or

    x_t = −B_1 x_{t−1} − B_2 x_{t−2} + ... + ε_t.

Applying the B-N decomposition B(L) = B(1) + (1 − L)B*(L) to the lag
polynomial operating on x_{t−1} (t − 1, not t) on the right hand side, we obtain

    x_t = (−(B_1 + B_2 + ...))x_{t−1} + (B_2 + B_3 + ...)(x_{t−1} − x_{t−2})
              + (B_3 + B_4 + ...)(x_{t−2} − x_{t−3}) + ... + ε_t

    x_t = (−(B_1 + B_2 + ...))x_{t−1} + Σ_{j=1}^∞ B*_j Δx_{t−j} + ε_t

Hence, subtracting x_{t−1} from both sides,

    Δx_t = −B(1)x_{t−1} + Σ_{j=1}^∞ B*_j Δx_{t−j} + ε_t.
As you might have suspected, the matrix B(1) controls the cointegration
properties. Δx_t, Σ_{j=1}^∞ B*_j Δx_{t−j}, and ε_t are stationary, so B(1)x_{t−1} had
also better be stationary. There are three cases:
Case 1: B(1) is full rank; then B(1)x_{t−1} is stationary only if x_{t−1} itself
is stationary. In this case we run a normal VAR in levels.

Case 2: B(1) has rank between 0 and full rank. There are some linear
combinations of x_t that are stationary, so x_t is cointegrated. As we will see,
the VAR in levels is consistent but inefficient (if you know the cointegrating
vector) and the VAR in differences is misspecified in this case.

Case 3: B(1) has rank zero, so no linear combination of x_{t−1} is stationary.
Δx_t is stationary with no cointegration. In this case we run a normal VAR
in first differences.
11.4.2 Error Correction representation
As a prelude to a discussion of what to do in the cointegrated case, it will be
handy to look at the error correction representation.
If B(1) has less than full rank, we can express it as

    B(1) = γα';

if there are K cointegrating vectors, then the rank of B(1) is K and γ and
α each have K columns. Rewriting the system with γ and α, we have the
error correction representation

    Δx_t = −γα'x_{t−1} + Σ_{j=1}^∞ B*_j Δx_{t−j} + ε_t.

Note that since γ spreads K variables into N variables, α'x_{t−1} itself must be
stationary so that γα'x_{t−1} will be stationary. Thus, α must be the matrix of
cointegrating vectors.

This is a very nice representation. If α'x_{t−1} is not 0 (its mean), γα'x_{t−1}
puts in motion increases or decreases in Δx_t to restore α'x_t towards its mean.
In this sense "errors" (deviations from the cointegrating relation α'x_t = 0)
set in motion changes in x_t that "correct" the errors.
11.4.3 Running VAR’s
Cases 1 or 3 are easy: run a VAR in levels or first differences. The hard case
is case 2, cointegration.
With cointegration, a pure VAR in differences,

    Δy_t = a(L)Δy_{t−1} + b(L)Δw_{t−1} + δ_t
    Δw_t = c(L)Δy_{t−1} + d(L)Δw_{t−1} + ν_t

is misspecified. Looking at the error-correction form, there is a missing
regressor, α'[y_{t−1} w_{t−1}]'. This is a real problem; often the lagged
cointegrating vector is the most important variable in the regression.

A pure VAR in levels,

    y_t = a(L)y_{t−1} + b(L)w_{t−1} + δ_t
    w_t = c(L)y_{t−1} + d(L)w_{t−1} + ν_t

looks a little unconventional, since both right and left hand variables are
nonstationary. Nonetheless, the VAR is not misspecified, and the estimates
are consistent, though the coefficients may have non-standard distributions.
(Similarly, in the regression x_t = φx_{t−1} + ε_t, when x_t was generated by a
random walk, φ̂ → 1, but with a strange distribution.) However, they are not
efficient: if there is cointegration, it imposes restrictions on B(1) that are
not imposed in a pure VAR in levels.
One way to impose cointegration is to run an error-correction VAR,
    Δy_t = γ_y(α_y y_{t−1} + α_w w_{t−1}) + a(L)Δy_{t−1} + b(L)Δw_{t−1} + δ_t
    Δw_t = γ_w(α_y y_{t−1} + α_w w_{t−1}) + c(L)Δy_{t−1} + d(L)Δw_{t−1} + ν_t
This specification imposes that y and w are cointegrated with cointegrating
vector α. This form is particularly useful if you know ex-ante that the
variables are cointegrated, and you know the cointegrating vector, as with
consumption and GNP. Otherwise, you have to pre-test for cointegration and
estimate the cointegrating vector in a separate step. Advocates of just running
it all in levels point to the obvious dangers of such a two step procedure.
A further difficulty with the error-correction form is that it doesn’t fit
neatly into standard VAR packages. Those packages are designed to regress
N variables on their own lags, not on their own lags and a lagged difference
of the levels. A way to use standard packages is to estimate instead the
companion form, one difference and the cointegrating vector.
    Δy_t = a(L)Δy_{t−1} + b(L)(α_y y_{t−1} + α_w w_{t−1}) + δ_t
    (α_y y_t + α_w w_t) = c(L)Δy_{t−1} + d(L)(α_y y_{t−1} + α_w w_{t−1}) + ν_t
This is equivalent (except for lag length) and can be estimated with regular
VAR packages. Again, it requires a priori knowledge of the cointegrating
vector.
There is much controversy over which approach is best. My opinion is
that when you really don’t know whether there is cointegration or what the
vector is, the AR in levels approach is probably better than the approach of
a battery of tests for cointegration plus estimates of cointegrating relations
followed by a companion or error correction VAR. However, much of the time
you know the variables are cointegrated, and you know the cointegrating vector.
Consumption and GNP are certainly cointegrated, and the cointegrating
vector is [1 -1]. In these cases, the error-correction and companion form are
probably better, since they impose this prior knowledge. (If you run the AR
in levels, you will get close to, but not exactly, the pattern of cointegration
you know is in the data.) The slight advantage of the error-correction form
is that it treats both variables symmetrically. It is also equivalent to a
companion form with a longer lag structure in the cointegrating relation. This
may be important, as you expect the cointegrating relation to decay very
slowly. Also, the error correction form will likely have less correlated errors,
since there is a y on both left hand sides of the companion form. However,
the companion form is easier to program.
Given any of the above estimates of the AR representation, it’s easy to
find the MA(∞) representation of first differences by simply simulating the
impulse-response function.
11.5 An Example
Consider a first order VAR

    y_t = ay_{t−1} + bw_{t−1} + δ_t
    w_t = cy_{t−1} + dw_{t−1} + ν_t

or

    (I − [a b; c d] L) x_t = ε_t;    B(1) = [1−a −b; −c 1−d]

An example of a singular B(1) is b = 1 − a, c = 1 − d,

    B(1) = [b −b; −c c].

The original VAR in levels with these restrictions is

    y_t = (1 − b)y_{t−1} + bw_{t−1} + δ_t
    w_t = cy_{t−1} + (1 − c)w_{t−1} + ν_t

or

    (I − [1−b b; c 1−c] L) x_t = ε_t;
The error-correction representation is

    Δx_t = −γα'x_{t−1} + Σ_{j=1}^∞ B*_j Δx_{t−j} + ε_t.

    B(1) = [b; −c] [1 −1]  ⇒  γ = [b; −c],  α = [1; −1].

Since B is only first order, B*_1 and higher = 0, so the error-correction
representation is

    Δy_t = −b(y_{t−1} − w_{t−1}) + δ_t
    Δw_t = c(y_{t−1} − w_{t−1}) + ν_t

As you can see, if y_{t−1} − w_{t−1} > 0, this lowers the growth rate of y and
raises that of w, restoring the cointegrating relation.

We could also subtract the two equations,

    Δ(y_t − w_t) = −(b + c)(y_{t−1} − w_{t−1}) + (δ_t − ν_t)
    y_t − w_t = (1 − (b + c))(y_{t−1} − w_{t−1}) + (δ_t − ν_t)

so y_t − w_t follows an AR(1). (You can show that 2 > (b + c) > 0, so it's a
stationary AR(1).) Paired with either the Δy or Δw equation given above,
this would form the companion form.
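A quick numerical check of this example (an illustrative sketch, not from the original notes; b = 0.3, c = 0.2, unit-normal shocks and the seed are arbitrary choices): simulate the error-correction system and estimate the AR(1) coefficient of y_t − w_t, which should come out near 1 − (b + c).

```python
import random

random.seed(1)
b, c, T = 0.3, 0.2, 200_000

y = w = 0.0
diffs = []                       # realized path of y_t - w_t
for _ in range(T):
    gap = y - w                  # error-correction term y_{t-1} - w_{t-1}
    y = y - b * gap + random.gauss(0, 1)
    w = w + c * gap + random.gauss(0, 1)
    diffs.append(y - w)

# OLS estimate of the AR(1) coefficient of y_t - w_t (no intercept; mean is 0)
num = sum(d0 * d1 for d0, d1 in zip(diffs, diffs[1:]))
den = sum(d0 * d0 for d0 in diffs[:-1])
rho_hat = num / den
print(rho_hat)                   # near 1 - (b + c) = 0.5
```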
In this system it's also fairly easy to get to the MA(∞) representation
for differences. From the last equation,

    y_t − w_t = (1 − (1 − (b + c))L)^{−1}(δ_t − ν_t) ≡ (1 − θL)^{−1}(δ_t − ν_t)

so

    Δy_t = −b(y_{t−1} − w_{t−1}) + δ_t
    Δw_t = c(y_{t−1} − w_{t−1}) + ν_t

becomes

    Δy_t = −b(1 − θL)^{−1}(δ_t − ν_t) + δ_t = (1 − b(1 − θL)^{−1})δ_t + b(1 − θL)^{−1}ν_t
    Δw_t = c(1 − θL)^{−1}(δ_t − ν_t) + ν_t = c(1 − θL)^{−1}δ_t + (1 − c(1 − θL)^{−1})ν_t

    [Δy_t; Δw_t] = (1 − θL)^{−1} [(1 − θL) − b  b; c  (1 − θL) − c] [δ_t; ν_t]

    [Δy_t; Δw_t] = (1 − (1−b−c)L)^{−1} [1 − b − (1−b−c)L  b; c  1 − c − (1−b−c)L] [δ_t; ν_t]

Evaluating the right hand matrix at L = 1,

    (b + c)^{−1} [c b; c b]

Denoting the last matrix A(L), note α'A(1) = 0. You can also note
A(1)γ = 0, another property we did not discuss.
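Both annihilation properties are easy to verify numerically (an illustrative check; b = 0.3 and c = 0.2 are arbitrary): with A(1) = (b + c)^{−1}[c b; c b], the cointegrating vector α = [1; −1] kills A(1) from the left and γ = [b; −c] kills it from the right.

```python
b, c = 0.3, 0.2
A1 = [[c / (b + c), b / (b + c)],
      [c / (b + c), b / (b + c)]]  # A(1) = (b+c)^-1 [c b; c b]
alpha = [1.0, -1.0]                # cointegrating vector
gamma = [b, -c]                    # error-correction loadings

left = [alpha[0] * A1[0][j] + alpha[1] * A1[1][j] for j in range(2)]   # alpha' A(1)
right = [A1[i][0] * gamma[0] + A1[i][1] * gamma[1] for i in range(2)]  # A(1) gamma
print(left, right)  # both zero up to rounding
```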
11.6 Cointegration with drifts and trends
So far I deliberately left out a drift term or linear trends. Suppose we put
them back in, i.e. suppose

    (1 − L)x_t = μ + A(L)ε_t.

The B-N decomposition was

    z_t = μ + z_{t−1} + A(1)ε_t.

Now we have two choices. If we stick to the original definition, that α'x_t
must be stationary, it must also be the case that α'μ = 0. This is a separate
restriction. If α'A(1) = 0, but α'μ ≠ 0, then

    α'z_t = α'μ + α'z_{t−1}  ⇒  α'z_t = α'z_0 + (α'μ)t.

Thus, α'x_t will contain a time trend plus a stationary component.
Alternatively, we could define cointegration to be that α'x_t contains a time
trend, but no stochastic (random walk) trends. Both approaches exist in the
literature, as well as generalizations to higher order polynomials. See
Campbell and Perron (1991) for details.
https://steemit.com/life/@dikkatimicekti/5-math-tricks-that-will-blow-your-mind-lifehack-part2 | 1,591,420,677,000,000,000 | text/html | crawl-data/CC-MAIN-2020-24/segments/1590348509972.80/warc/CC-MAIN-20200606031557-20200606061557-00058.warc.gz | 561,354,822 | 182,906 | # 5 Math Tricks That Will Blow Your Mind #LifeHack #Part2
• 06 of 10 - Memorizing Pi
To remember the first seven digits of pi, count the number of letters in each word of the sentence:
"How I wish I could calculate pi."
This gives 3.141592
• 07 of 10 - Contains the Digits 1, 2, 4, 5, 7, 8
1. Select a number from 1 to 6.
2. Multiply the number by 9.
3. Multiply it by 111.
4. Multiply it by 1001.
5. Divide the result by 7.
The number will contain the digits 1, 2, 4, 5, 7, and 8.
Example: The number 6 yields the answer 857142.
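A short loop confirms the trick for every allowed starting number (illustrative Python): each result is a cyclic permutation of 142857, the repeating digits of 1/7, because 9 × 111 × 1001 = 999999 = 7 × 142857.

```python
results = {n: n * 9 * 111 * 1001 // 7 for n in range(1, 7)}
for n, value in results.items():
    # every value is a rearrangement of the digits 1, 2, 4, 5, 7, 8
    assert sorted(str(value)) == sorted("124578"), (n, value)
print(results)  # {1: 142857, 2: 285714, 3: 428571, 4: 571428, 5: 714285, 6: 857142}
```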
• 08 of 10 - Multiplying Two-Digit Numbers Near 100
To easily multiply two double digit numbers, use their distance from 100 to simplify the math:
1. Subtract each number from 100.
2. Add these two distances together.
3. 100 minus this sum is the first part of the answer.
4. Multiply the digits from Step 1 to get the second part of the answer.
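For instance, 97 × 96: the distances from 100 are 3 and 4; their sum is 7, so the first part is 100 − 7 = 93; their product is 3 × 4 = 12; the answer is 9312. In code (an illustrative sketch; the two-digit reading works as long as the product of the distances stays below 100):

```python
def near_100_product(a, b):
    """Multiply two numbers near 100 using their distances from 100."""
    da, db = 100 - a, 100 - b
    first = 100 - (da + db)   # equivalently a + b - 100
    second = da * db
    return first * 100 + second

assert near_100_product(97, 96) == 97 * 96 == 9312
# The identity a*b = 100*(100 - da - db) + da*db holds exactly:
for a in range(90, 100):
    for b in range(90, 100):
        assert near_100_product(a, b) == a * b
```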
• 09 of 10 - Super Simple Divisibility Rules
You've got 210 pieces of pizza and want to know whether or not you can split them evenly within your group. Rather than whip out the calculator, use these simple shortcuts to do the math in your head:
Divisible by 2 if the last digit is a multiple of 2 (210).
Divisible by 3 if the sum of the digits is divisible by 3 (522 because the digits add up to 9, which is divisible by 3).
Divisible by 4 if the last two digits are divisible by 4 (2540 because 40 is divisible by 4).
Divisible by 5 if the last digit is 0 or 5 (9905).
Divisible by 6 if it passes the rules for both 2 and 3 (408).
Divisible by 9 if the sum of the digits is divisible by 9 (6390 since 6 + 3 + 9 + 0 = 18, which is divisible by 9).
Divisible by 10 if the number ends in a 0 (8910).
Divisible by 12 if the rules for divisibility by 3 and 4 apply.
Example: The 210 slices of pizza may be evenly distributed into groups of 2, 3, 6, 10.
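These shortcuts translate directly into code; the sketch below (illustrative Python, not from the article) implements a few of the rules and cross-checks them against ordinary remainder arithmetic:

```python
def divisible_by_3(n):  # digit sum divisible by 3
    return sum(int(d) for d in str(n)) % 3 == 0

def divisible_by_4(n):  # last two digits divisible by 4
    return int(str(n)[-2:]) % 4 == 0

def divisible_by_5(n):  # ends in 0 or 5
    return str(n)[-1] in "05"

def divisible_by_6(n):  # passes the rules for both 2 and 3
    return int(str(n)[-1]) % 2 == 0 and divisible_by_3(n)

for n in range(10, 5000):
    assert divisible_by_3(n) == (n % 3 == 0)
    assert divisible_by_4(n) == (n % 4 == 0)
    assert divisible_by_5(n) == (n % 5 == 0)
    assert divisible_by_6(n) == (n % 6 == 0)
print(divisible_by_6(210))  # True: 210 splits evenly into groups of 6
```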
• 10 of 10 - Finger Multiplication Tables
Everyone knows you can count on your fingers. Did you realize you can use them for multiplication? A simple way to do the "9" multiplication table is to place both hands in front of you with fingers and thumbs extended. To multiply 9 by a number, fold down that finger, counting from the left.
Examples: To multiply 9 by 5, fold down the fifth finger from the left. Count fingers on either side of the "fold" to get the answer. In this case, the answer is 45.
To multiply 9 times 6, fold down the sixth finger, giving an answer of 54. | 659 | 2,326 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.5625 | 5 | CC-MAIN-2020-24 | latest | en | 0.883565 |
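The trick works because of a small identity: with finger n folded down, n − 1 fingers stand to its left (the tens) and 10 − n to its right (the ones), and 10(n − 1) + (10 − n) = 9n. A two-line check (illustrative Python):

```python
for n in range(1, 11):
    tens, ones = n - 1, 10 - n       # fingers left and right of the folded finger
    assert 10 * tens + ones == 9 * n
print(10 * (5 - 1) + (10 - 5))  # 45, i.e. 9 x 5
```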
https://atcoder.jp/contests/abc135/submissions/6579067?lang=ja | 1,620,987,920,000,000,000 | text/html | crawl-data/CC-MAIN-2021-21/segments/1620243990449.41/warc/CC-MAIN-20210514091252-20210514121252-00100.warc.gz | 122,745,583 | 6,442 | ソースコード 拡げる
Copy
```cpp
#include <iostream>
#include <vector>
#include <cmath>
#include <string>
#include <algorithm>
#define llint long long int
#define NUM 1000000007
int residual[100001][10] = {};
void prev_calc(){
for(int i=0; i<10; i++){
residual[1][i] = i;
for(int j=2; j<=100000; j++){
residual[j][i] = (residual[j-1][i]*10)%13;
}
}
}
int ctoi(char c){
return c - '0';
}
int main(void){
llint k, x, y;
std::cin >> k >> x >> y;
if((k%2 == 0) && ((x+y)%2 == 1)){
std::cout << -1 << std::endl;
return 0;
}
std::vector<std::pair<llint, llint> > steps;
llint nx = 0;
llint ny = 0;
while(true){
llint remain = k;
llint diffx = x-nx;
llint diffy = y-ny;
if(diffx == 0 and diffy <= 2*k) break;
llint mv_x_abs = std::min(remain, std::abs(diffx));
nx += diffx<0 ? -mv_x_abs : mv_x_abs;
remain -= mv_x_abs;
llint mv_y_abs = remain; //std::min(remain, std::abs(diffy));
ny += diffy<0 ? -mv_y_abs : mv_y_abs;
steps.push_back(std::make_pair(nx, ny));
diffy = y-ny;
}
if((y-ny) % 2 == 0){
llint diffy = y-ny;
llint half = diffy/2;
nx += k-std::abs(half);
ny += half;
steps.push_back(std::make_pair(nx, ny));
nx -= k-std::abs(half);
ny += half;
steps.push_back(std::make_pair(nx, ny));
}else{
llint diffy = y-ny;
llint targety;
if(diffy < 0){
targety = y+k;
}else{
targety = y-k;
}
llint half = (targety-ny)/2;
nx += k-std::abs(half);
ny += half;
steps.push_back(std::make_pair(nx, ny));
nx -= k-std::abs(half);
ny += half;
steps.push_back(std::make_pair(nx, ny));
steps.push_back(std::make_pair(nx, y));
}
std::cout << steps.size() << std::endl;
for(const auto& elem: steps){
std::cout << elem.first << " " << elem.second << std::endl;
}
return 0;
}
```
#### Submission Info
Submission Time: 2019-07-27 22:15:16+0900 / Task: E - Golf / User: yanagi3150 / Language: C++14 (GCC 5.4.1) / Score: 0 / Code Size: 1930 Byte / Status: WA / Exec Time: 322 ms / Memory: 5744 KB
#### Judge Result
Sample: AC × 3
Subtask1: AC × 25, WA × 30

Set Name / Test Cases
Sample sample_01.txt, sample_02.txt, sample_03.txt
Subtask1 sample_01.txt, sample_02.txt, sample_03.txt, sub1_01.txt, sub1_02.txt, sub1_03.txt, sub1_04.txt, sub1_05.txt, sub1_06.txt, sub1_07.txt, sub1_08.txt, sub1_09.txt, sub1_10.txt, sub1_11.txt, sub1_12.txt, sub1_13.txt, sub1_14.txt, sub1_15.txt, sub1_16.txt, sub1_17.txt, sub1_18.txt, sub1_19.txt, sub1_20.txt, sub1_21.txt, sub1_22.txt, sub1_23.txt, sub1_24.txt, sub1_25.txt, sub1_26.txt, sub1_27.txt, sub1_28.txt, sub1_29.txt, sub1_30.txt, sub1_31.txt, sub1_32.txt, sub1_33.txt, sub1_34.txt, sub1_35.txt, sub1_36.txt, sub1_37.txt, sub1_38.txt, sub1_39.txt, sub1_40.txt, sub1_41.txt, sub1_42.txt, sub1_43.txt, sub1_44.txt, sub1_45.txt, sub1_46.txt, sub1_47.txt, sub1_48.txt, sub1_49.txt, sub1_50.txt, sub1_51.txt, sub1_52.txt
Case Name / Result / Exec Time / Memory
sample_01.txt AC 1 ms 256 KB
sample_02.txt AC 1 ms 256 KB
sample_03.txt AC 1 ms 256 KB
sub1_01.txt WA 1 ms 256 KB
sub1_02.txt WA 1 ms 256 KB
sub1_03.txt WA 1 ms 256 KB
sub1_04.txt WA 1 ms 256 KB
sub1_05.txt AC 1 ms 256 KB
sub1_06.txt WA 1 ms 256 KB
sub1_07.txt WA 1 ms 256 KB
sub1_08.txt WA 2 ms 256 KB
sub1_09.txt AC 1 ms 256 KB
sub1_10.txt WA 1 ms 256 KB
sub1_11.txt WA 1 ms 256 KB
sub1_12.txt AC 1 ms 256 KB
sub1_13.txt WA 1 ms 256 KB
sub1_14.txt AC 1 ms 256 KB
sub1_15.txt WA 1 ms 256 KB
sub1_16.txt AC 4 ms 256 KB
sub1_17.txt WA 1 ms 256 KB
sub1_18.txt WA 1 ms 256 KB
sub1_19.txt WA 1 ms 256 KB
sub1_20.txt AC 15 ms 640 KB
sub1_21.txt AC 1 ms 256 KB
sub1_22.txt AC 1 ms 256 KB
sub1_23.txt AC 1 ms 256 KB
sub1_24.txt AC 4 ms 384 KB
sub1_25.txt WA 1 ms 256 KB
sub1_26.txt AC 1 ms 256 KB
sub1_27.txt AC 1 ms 256 KB
sub1_28.txt WA 1 ms 256 KB
sub1_29.txt AC 1 ms 256 KB
sub1_30.txt WA 1 ms 256 KB
sub1_31.txt WA 1 ms 256 KB
sub1_32.txt AC 1 ms 256 KB
sub1_33.txt WA 1 ms 256 KB
sub1_34.txt AC 322 ms 5744 KB
sub1_35.txt WA 1 ms 256 KB
sub1_36.txt WA 1 ms 256 KB
sub1_37.txt WA 1 ms 256 KB
sub1_38.txt WA 1 ms 256 KB
sub1_39.txt WA 2 ms 256 KB
sub1_40.txt WA 1 ms 256 KB
sub1_41.txt WA 1 ms 256 KB
sub1_42.txt AC 1 ms 256 KB
sub1_43.txt AC 1 ms 256 KB
sub1_44.txt AC 1 ms 256 KB
sub1_45.txt AC 1 ms 256 KB
sub1_46.txt WA 2 ms 256 KB
sub1_47.txt AC 110 ms 2420 KB
sub1_48.txt WA 1 ms 256 KB
sub1_49.txt WA 1 ms 256 KB
sub1_50.txt AC 1 ms 256 KB
sub1_51.txt AC 109 ms 2420 KB
sub1_52.txt WA 1 ms 256 KB | 1,752 | 4,114 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.921875 | 3 | CC-MAIN-2021-21 | latest | en | 0.247406 |
http://www.jiskha.com/display.cgi?id=1256436185 | 1,496,051,572,000,000,000 | text/html | crawl-data/CC-MAIN-2017-22/segments/1495463612069.19/warc/CC-MAIN-20170529091944-20170529111944-00305.warc.gz | 699,876,598 | 3,957 | # Physics
Suppose that in the same Atwood setup another string is attached to the bottom of m1 and a constant force f is applied, retarding the upward motion of m1. If m1 = 5.30 kg and m2 = 10.60 kg, what value of f will reduce the acceleration of the system by 50%?
thank you!
• Physics -
m2*g - m1*g - F = m*a/2
where
g*(m2 - m1) = m*a   (m = m1 + m2 is the total mass)
so
g*(m2 - m1) - F = g*(m2 - m1)/2
solve for F
• Physics -
F= {9.8m/s^2 (10.6-5.30)}/ 2
F= {9.8 (5.3)}/2
F= 51.94/2
F= 25.97
{Significant figures} F = 26N
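The same algebra as a quick check (illustrative Python, not part of the original answer):

```python
g = 9.8                  # m/s^2
m1, m2 = 5.30, 10.60     # kg

a_full = g * (m2 - m1) / (m1 + m2)   # unretarded Atwood acceleration
F = g * (m2 - m1) / 2                # force that halves the acceleration

# verify: with F applied, the system accelerates at a_full / 2
a_retarded = (m2 * g - m1 * g - F) / (m1 + m2)
assert abs(a_retarded - a_full / 2) < 1e-9
print(round(F, 2))  # 25.97 N, about 26 N to two significant figures
```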
http://netshock.co.uk/line-equation/ | 1,571,594,591,000,000,000 | text/html | crawl-data/CC-MAIN-2019-43/segments/1570986717235.56/warc/CC-MAIN-20191020160500-20191020184000-00345.warc.gz | 132,574,377 | 12,557 | # Finding the piecewise equation of a line
Given the below chart, we need to figure out the equation for the line. We will use the piecewise method, where we split the line into chunks & define each part of the line using the formula y – y1 = m(x – x1) where:
• y1 = the y-coordinate of a known point on the line
• x1 = the x-coordinate of that point
• m = the slope
In this example, the graph has been split into three sections. The co-ordinates for section 1 are (0,0) and (1,1). We can therefore define this part of the slope as: y – 0 = 1( x – 0) where we have taken y to be 0 from the coordinates (0,0) and x as 0 from (0,0). This formula can be simplified to y = x.
The second line takes the coordinates (1,1) and (2,0). Hence, the formula is y - 1 = -1(x - 1). This can be further simplified by moving the 1 to the right-hand side of the equation to create y = -x + 2. m, the slope, is calculated as the change in y divided by the change in x: here (0 - 1)/(2 - 1) = -1.
Finally, we have the flat part of the graph. Here the formula is simply y = 0. We can now stitch all of these formulas together to make our piecewise formula. Below, you can see that we have defined each of the equations we’ve created above & given them a domain (when they are applicable). So:
• Y is only equal to x if x is greater than or equal to zero and less than or equal to 1.
• Y is equal to -x + 2 only if x is greater than 1 and less than or equal to 2.
• Y is equal to zero if x is greater than 2
Note: make sure you don’t have any overlapping functions. i.e. if one is <=1 then the next piecewise function cannot be >=1, it must be simply >. Otherwise it can result in 2 values for y.
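The three pieces translate directly into code (an illustrative Python version of this example):

```python
def f(x):
    """Piecewise line from the first example."""
    if 0 <= x <= 1:
        return x            # y = x on [0, 1]
    if 1 < x <= 2:
        return -x + 2       # y = -x + 2 on (1, 2]
    if x > 2:
        return 0            # y = 0 beyond 2
    raise ValueError("x < 0 is outside the defined domain")

assert f(0.5) == 0.5
assert f(1.5) == 0.5
assert f(3) == 0
```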
### Example 2:
In the below, we have two co-ordinates (4,2) and (6,3). We need to figure out, based on this information, what the formula for the line is. We start by calculating the slope, which is the change in y divided by the change in x: (3 - 2)/(6 - 4), which we simplify out to be 1/2.
So, we now have y – 2 = 1/2(x – 4), where 2 is the y value for the first co-ordinate we were given and 4 is the value for x. We can simplify this out as below to y = 1/2(x).
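The same computation in code (illustrative; the helper name is made up for this sketch):

```python
def line_through(p1, p2):
    """Return slope m and intercept c of the line y = m*x + c through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)   # rise over run
    c = y1 - m * x1             # rearranged from y - y1 = m(x - x1)
    return m, c

m, c = line_through((4, 2), (6, 3))
assert (m, c) == (0.5, 0.0)     # y = x/2, matching the worked example
```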
### Example 3:
In the below example, we already know the slope of the line & we know one of the points on the graph. So the formula is simply a case of replacing a few numbers to create y – 2 = -2/3(x-7).
### Example 4:
In our final example, we have been given 2 points on the graph. From here, we calculate the change (slope) and then plug the numbers into the formula as below: | 673 | 2,403 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.84375 | 5 | CC-MAIN-2019-43 | latest | en | 0.931923 |
https://forum.ansys.com/forums/topic/question-about-solidification-melting-model-in-fluent/ | 1,675,476,491,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764500080.82/warc/CC-MAIN-20230204012622-20230204042622-00305.warc.gz | 276,354,522 | 95,431 | ## Fluids
#### Question about Solidification/melting model in Fluent
• Yogesh Sajikumar Pai
Subscriber
Hello all,
I am simulating the solidification of a metallic PCM using the solidification/melting model in Ansys Fluent. The PCM is enclosed in a solid container with a pipe flow of air beneath it, which removes heat from the PCM. I'm assuming constant volume and density for the PCM. The specific heat capacity and thermal conductivity are defined to be temperature dependent with large changes at the solidus temperature. The PCM liquidus temperature is 579°C and the solidus temperature is 571°C. The Pure solvent melting heat is set to the melting enthalpy of the PCM (478590 J/kg). There is a mass flow rate inlet (1.48e-4 kg/s) and a pressure outlet (zero gauge pressure) in the pipe. The PCM and the solid are initialized using patch tool to 650°C. There are also contact resistances defined at PCM - Solid interface walls using shell conduction layer (temperature (phase) dependent).
The simulation works but the temperature curve of the PCM (see image - red curve) at the end of solidification shows a gradual change, whereas in reality the temperature changes steeply upon reaching the solidus temperature.
The reason is that there are liquid portions in the PCM volume even upto 20°C below the solidus temperature. Below is a contour of the PCM mass fraction at 555°C (15°C below the solidus temperature). Only when the whole volume is solidified, the temperature starts dropping as expected.
I wish to get the liquid fraction to zero closer to the solidus temperature and see a temperature curve similar to the experiment. I've tried the following so far.
1. Reducing the solidus temperature to check if there is some additional latent heat stored. Makes no difference.
2. Tried the following values for the mushy zone parameter, A_mush: 10^5, 10^4, 10^6, 10^8. No effect on the temperature curve (but it affects flow convergence).
3. Delaying the increase of contact resistances after solidification.
4. Changing the temperature at which the specific heat capacity and thermal conductivity of the PCM change sharply (tried setting them to liquidus temperature and solidus temperature)
5. Mesh is converged. I've run this with a million more cells and same result.
6. Tried setting density of PCM to value at initial temperature and also with average value over temperature range.
Some more details about the method:
Laminar model, double-precision enabled, second order bounded implicit transient formulation, Coupled pressure-velocity solver, Second order accuracy for pressure, momentum and energy.
With time step size 1s, I run it for first 5-10 time steps. I'm getting good convergence of solution (1e-6 for continuity, momentum and 1e-12 for energy). Thereafter, I use a time step size of 10s.
Note: The contact resistances are still to be adjusted. Therefore, the time of heat discharge differs from the experiment.
Which parameter or property controls the range in which the liquid fraction changes from 1 to 0?
Thank you!
• SRP
Subscriber
Hi,
An enthalpy-porosity technique is used in Ansys Fluent for modeling the solidification/melting process. In this technique, the melt interface is not tracked explicitly. Instead, a quantity called the liquid fraction, which indicates the fraction of the cell volume that is in liquid form, is associated with each cell in the domain. The liquid fraction is computed at each iteration, based on an enthalpy balance. Computational cells that are undergoing a phase change are modelled as pseudo porous media whose porosity is a function of the enthalpy H, ranging between 1 (fully liquid) and 0 (fully solid).
For more details on the enthalpy-porosity technique, please refer to the research paper: "Enthalpy-Porosity Technique for Modeling Convection-Diffusion Phase Change: Application to the Melting of a Pure Metal" by A.D. Brent, V.R. Voller, and K.J. Reid.
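For intuition, in the simplest linear (lever-rule) form the liquid fraction is just a clamped ramp between the solidus and liquidus temperatures. The sketch below (illustrative Python using the 571 °C / 579 °C values from the question; not a reproduction of Fluent's internal update, which works on the enthalpy balance) shows why, in this idealized form, no liquid should remain 15 °C below the solidus:

```python
T_SOLIDUS, T_LIQUIDUS = 571.0, 579.0   # deg C, PCM values from the question

def liquid_fraction(T):
    """Linear (lever-rule) liquid fraction between solidus and liquidus."""
    beta = (T - T_SOLIDUS) / (T_LIQUIDUS - T_SOLIDUS)
    return min(1.0, max(0.0, beta))    # clamp to [0, 1]

assert liquid_fraction(555.0) == 0.0   # ~15 degC below solidus: fully solid
assert liquid_fraction(575.0) == 0.5   # midway through the mushy zone
assert liquid_fraction(580.0) == 1.0   # above liquidus: fully liquid
```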
Hope you find this useful
Thank you
Saurabh | 867 | 3,951 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2023-06 | latest | en | 0.828684 |
https://crypto.bi/tape/network-difficulty/ | 1,596,532,711,000,000,000 | text/html | crawl-data/CC-MAIN-2020-34/segments/1596439735867.23/warc/CC-MAIN-20200804073038-20200804103038-00499.warc.gz | 240,682,312 | 7,711 | # What is network difficulty?
It is intuitive to most cryptocurrency miners that when more people join the mining operation, less reward is paid to each participating miner.
It seems logical that it should work this way, but how exactly does this get implemented in Bitcoin and other cryptocurrency Proof of Work protocols?
In this article we take a brief look at cryptocurrency network difficulty and how it determines how much each miner might make, should they solve a block.
### What does solving a block actually mean?
First things first: what does it mean to "solve a block"? What is it that miners work for, after all? The answer to these questions leads us directly to the concept of difficulty.
Let’s have a look.
Miners basically run one repetitive operation over and over again, changing just one or a few parameters, which we call nonces, with every repetition, hoping that the next nonce attempted will produce a hash (a cryptographic "summary" of the block) that has a certain number of zeroes in front of it.
Why do we want zeroes in front of a cryptographic hash?
Because it’s a numerical way to make finding that hash as hard or as easy as we want it to. It’s all based on probability theory.
Here’s how it works.
### Number Systems
If you have a number made up of 5 digits, how many different 5 digit numbers can you compose using the decimal system?
It’s easy math: for each position in the 5-digit number we can use any of the 10 digits of the decimal system.
So for the first digit we can use 0 to 9, for the second digit we use 0 to 9 again, and so on. From the fundamental counting principle, if we have 10 options and want to combine them with 10 other options, we get 100 total combinations.
If we do this for 5 digits we get total 10 x 10 x 10 x 10 x 10 combinations.
Why do we multiply by 10? Because we’re working with the decimal system, of course!
What if we were working with cryptographic hashes instead where the base is 16 and not 10?
Each digit could then have 16 values and not 10, so the chance of any one digit being zero would be one in 16 instead of one in 10. In a hexadecimal system, the number of 5-digit strings we can compose is 16 x 16 x 16 x 16 x 16 – which is a LOT larger than the same count in the decimal system.
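The counting argument above is easy to check directly:

```python
# Number of distinct 5-digit strings in base 10 vs. base 16
decimal_combos = 10 ** 5   # 100,000
hex_combos = 16 ** 5       # 1,048,576

# Hexadecimal offers more than ten times as many 5-digit strings
ratio = hex_combos / decimal_combos
```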
Computers use the hexadecimal system because 16 is a power of 2, and powers of two are natural in a system based on binary logic.
Computer numbers are all represented by zeroes and ones, which are the two possible values in a binary system, therefore we always work with powers of 2 deep down when programming cryptographic algorithms.
### Difficulty Calculation
So what does it mean when we require a hash to have a certain number of zeroes in front of it?
It means that the probability of each zero digit showing up in front of the hash is one in 16. It just so happens that computers work with bytes, which are two hexadecimal digits next to each other.
This makes each byte position in a hash have a total of 256 possible values (16 x 16)! So the probability of any given byte in a hash being zero is one in 256.
If we require three zero bytes in front of a hash, then on average we need 256 x 256 x 256 attempts. This number is proportional to the difficulty! When we say we need a certain number of zeroes in a hash, we’re actually determining how difficult it is for miners to find a hash with that many zeroes.
To find a hash that has 10 zero bytes in front of it, each attempt succeeds with a probability of one divided by 256 x 256 x ... x 256 (ten times over). The expected number of attempts, which is what the difficulty measures, is therefore a HUGE number, and as you can see it grows exponentially, as powers of 256.
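The whole idea can be demonstrated with a toy proof-of-work loop (a simplified sketch, not Bitcoin's actual double-SHA-256 block hashing): each extra required zero hex digit multiplies the expected number of attempts by 16.

```python
import hashlib

def mine(block_data: str, zero_digits: int, max_nonce: int = 1_000_000):
    """Try successive nonces until the SHA-256 hex digest of
    block_data + nonce starts with `zero_digits` zeroes."""
    target = "0" * zero_digits
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
    raise RuntimeError("no nonce found within max_nonce attempts")

# Requiring 2 leading zero hex digits takes ~256 attempts on average;
# each additional zero multiplies that expected work by 16.
nonce, digest = mine("demo block", 2)
```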
### Network Difficulty
The Bitcoin network is made up of P2P nodes talking to each other.
Mining pools lead the network since they accumulate the greatest computing power. Mining pools know how much computing power is sending them work in a proof of work system, because they’re able to estimate the number of attempted hashes from each participant based on the work the participants send in. Based on the total computing power observed on the network, the Bitcoin protocol readjusts the difficulty (every 2016 blocks) to ensure that a block is found approximately every 10 minutes.
So the network difficulty is the difficulty calculated by the above method of powers of 256, and it is communicated to all nodes. Every miner in the Bitcoin network receives the difficulty of the current block. From the difficulty, miners can calculate how many zeroes are needed in front of each hash; it is a direct conversion between the difficulty and the number of zeroes, and vice-versa.
We hope this article has helped clarify the concept of network difficulty and how it relates to Bitcoin mining. In fact, this same system applies to all proof of work schemes, including Ethereum, Dogecoin, Litecoin and other PoW based cryptos.
https://lessonplancoaches.com/lesson-planning-of-types-of-lines-subject-mathematics-grade-5th/ | 1,695,539,972,000,000,000 | text/html | crawl-data/CC-MAIN-2023-40/segments/1695233506623.27/warc/CC-MAIN-20230924055210-20230924085210-00066.warc.gz | 407,264,532 | 24,702 | # Lesson Planning of Types of Lines
Subject Mathematics
Students' Learning Outcomes
• Recognize horizontal and vertical lines.
• Draw a vertical line on a horizontal line using set-squares.
Information for Teachers
• Vertical lines go straight up and down.
• Horizontal lines go straight across. Horizontal lines are sleeping lines.
• Slanting Lines: straight lines that are neither horizontal nor vertical.
• Straight lines are lines that do not change direction.
• Curved Lines:
• Imagine that an ant has to move from point A to point B. What are the different ways in which the ant can reach from point A to point B? It can follow a straight path, a curved path, or move in a zigzag way.
Material / Resources
Writing board, chalk / marker, duster,
Introduction
• Stretch your arms on both sides and ask “Do you know the position of my arms?” and then introduce the term Horizontal.
• Similarly stretch your arms up and introduce the term vertical.
• Ask students if they have seen the lines on the road and reinforce Horizontal Lines.
• Elaborate that ‘A person standing erect is vertical and a person lying down as horizontal’.
• If you look at the sky early in the morning, you can enjoy the sight of the sun in an appeasing red colour, and in the evening it wears the same red robe again. But at mid-day we see the same sun frying us from vertically above our heads.
Development
Activity 1
• Ask students to think and write three examples of the horizontal and vertical position of the objects from school, classroom and home anywhere, such as;
• A chair is vertical; the plate on the table is horizontal. Collect their responses in two columns on the board.
• Words such as going up, moving up, rising high, reaching the sky, and attaining height give the sense of vertical.
• Words of the opposite sense, like going down and falling below, and the image of a fruit falling down, can be flashed to the young mind to create the sense of downward direction.
• Demonstrate how to use a set-square to draw a perpendicular to a given line AB.
• With the following steps a set-square can be used to draw a vertical line at a point on a given line as described below.
• Step 1: Draw a straight line of 6 cm (for example) and choose a point P on the line.
• Step 2: Set an edge of the set-square on the given line so that the other edge is just in contact with the point.
• Step 3: Draw a line that passes through the given point with the help of the set-square.
• Step 4: The new line is vertical.
• Ask them to take out their geometry boxes and copies and get ready for an activity.
• Repeat all the steps one by one and ask them to follow instructions.
• Give students time to attempt every step independently.
• Peer checking should be done (students exchange work with the person sitting next to them and then return the copies).
• Invite one student on board to do it once again.
• Reinforce the whole concept with the whole class in a recap session.
Sum up / Conclusion
• Conclude the lesson by briefly revising the concept.
Assessment
• Assign question to be done individually followed by peer checking.
• Prepare a worksheet to assess two concepts together; it will raise their confidence in attempting questions. | 705 | 3,271 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.46875 | 4 | CC-MAIN-2023-40 | latest | en | 0.899261 |
https://physics.stackexchange.com/questions/376564/supersymmetric-localization-mirror-symmetry | 1,718,828,371,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861832.94/warc/CC-MAIN-20240619185738-20240619215738-00891.warc.gz | 402,978,381 | 39,693 | # Supersymmetric Localization (Mirror Symmetry)
I'm reading Chapter 9 of Mirror Symmetry book. As you can see in eq. (9.30) his model for SUSY is
\begin{align} \delta_\epsilon X &=\epsilon^1\psi_1 + \epsilon^2\psi_2\\ \delta\psi_1 &= \epsilon^2\partial h\\ \delta\psi_2 &=-\epsilon^1\partial h. \end{align}\tag{9.30}
I understand that if $$\partial h\neq 0$$ everywhere, it is possible to see that the partition function vanishes; in particular, the integrand is a total divergence:
$$\int \mathrm{d}\hat{X}e^{-\frac{1}{2}(\partial h(\hat{X}))^2}\frac{\partial^2 h(\hat{X})}{(\partial h(\hat{X}))^2}. \tag{9.35}$$
The implication that a total divergence yields a vanishing integral is a bit strange, since every function that admits a primitive is a total derivative. What is the meaning of that implication? Then why does $$(9.35)$$ vanish? It seems to me that, in general, $$\int \mathrm{d}(h')\frac{e^{-\frac{1}{2}(h')^2}}{(h')^2}\neq 0.$$
Then, again, when he claims that if there is an $$x_0$$ such that $$h'(x_0)=0$$ we can use the total-divergence argument again, why doesn't he take into account that removing a small neighborhood of $$x_0$$ gives us some contribution at the boundary of that neighborhood (two points)?
1. OP essentially wrote (v2):
The implication that a total derivative $f=F^{\prime}$ yields a vanishing integral $\int_{\mathbb{R}} \! dx ~f(x)=0$ is a bit strange, since every function $f$ that admits a primitive $F$ is a total derivative $f=F^{\prime}$.
Answer: Well, the devil is in the detail. Ref. 1 starts with a function $F$ that is assumed$^1$ to vanish at $x=\pm \infty$: $$F(x\!=\!\pm \infty)~=~0. \tag{A}$$ If we, on the other hand, instead start from a function $f$, then its primitives $F$ will generically not satisfy condition (A).
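Spelled out, the boundary argument behind this point is just the fundamental theorem of calculus combined with assumption (A):

```latex
\int_{\mathbb{R}} \mathrm{d}x\, \frac{\mathrm{d}F}{\mathrm{d}x}
~=~ F(x\!=\!\infty) \,-\, F(x\!=\!-\infty)
~\stackrel{(A)}{=}~ 0 \,-\, 0 ~=~ 0 .
```

For a primitive chosen so that (A) fails, the same computation produces a non-zero boundary term instead of zero.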
2. OP essentially wrote (v2):
Why doesn't Ref. 1 keep in account that, removing a small neighborhood of $x_0$, give us some contribution at the border of that neighborhood (two points)?
Answer: The neighborhood may be chosen as small as we want, thereby localizing the path integral $Z$ at the point $x_0$.
$^1$ Ref. 1 does not seem to explicitly mentioning assumption (A), but that is what is meant. Technically, it is sufficient if $$F(x\!=\!- \infty)~=~F(x\!=\! \infty). \tag{B}$$
• I do not understand two points: 1) Where is (hidden) the vanishing assumption of $F$? 2) Whatever you shrink the neighborhood near $x_0$ you are remaining with two boundary contributions. How can one cure that? | 746 | 2,482 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.1875 | 3 | CC-MAIN-2024-26 | latest | en | 0.890981 |
https://www.gradesaver.com/textbooks/math/calculus/calculus-8th-edition/chapter-11-infinite-sequences-and-series-review-exercises-page-825/2 | 1,576,041,346,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575540529955.67/warc/CC-MAIN-20191211045724-20191211073724-00028.warc.gz | 714,294,240 | 13,120 | ## Calculus 8th Edition
$0$
A sequence is said to converge if and only if $\lim\limits_{n \to \infty}a_{n}$ is a finite constant. Here,
$$\lim\limits_{n \to \infty}a_{n}=\lim\limits_{n \to \infty}\frac{9^{n+1}}{10^{n}}=\lim\limits_{n \to \infty}9\left(\frac{9}{10}\right)^{n}.$$
Since $\lim\limits_{n \to \infty}a^{n}=0$ for $|a|\lt 1$,
$$\lim\limits_{n \to \infty}9\left(\frac{9}{10}\right)^{n}=9\times 0=0.$$
Hence, the given sequence converges to $0$.
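Numerically, the terms $a_n = 9^{n+1}/10^{n} = 9\,(9/10)^n$ indeed shrink geometrically toward zero:

```python
def a(n: int) -> float:
    """n-th term of the sequence 9^(n+1) / 10^n."""
    return 9 ** (n + 1) / 10 ** n

# terms shrink by a factor of 9/10 at each step: 8.1, ~3.14, ~0.00024
terms = [a(n) for n in (1, 10, 100)]
```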
http://finance.zacks.com/probability-stock-trade-using-standard-deviation-3306.html | 1,508,732,823,000,000,000 | text/html | crawl-data/CC-MAIN-2017-43/segments/1508187825575.93/warc/CC-MAIN-20171023035656-20171023055656-00689.warc.gz | 122,988,577 | 17,517 | # Probability of Stock Trade Using Standard Deviation
Some professional stock traders use quantitative analysis to analyze the market and predict the future value of securities. They begin by assuming that the path of a stock will be a "random walk" and that the values will be distributed along a bell curve or a normal set of values. With this data, they can use standard deviation and probability theory to make investment decisions.
## Standard Deviation
Standard deviation is a measure that describes the probability of an event under a normal distribution. Stock returns tend to fall into a normal (Gaussian) distribution, making them easy to analyze. One standard deviation accounts for 68 percent of all returns, two standard deviations make up 95 percent of all returns, and three standard deviations cover more than 99 percent of all returns. When a trader can assume with a 95 percent probability where the stock value will be, he has many more options for hedging and investing.
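The 68/95/99.7 coverage figures quoted above follow directly from the normal distribution and can be reproduced with the standard error function (a quick sketch, not a trading tool):

```python
from math import erf, sqrt

def within_k_sigma(k: float) -> float:
    """P(|Z| <= k) for a standard normal variable Z."""
    return erf(k / sqrt(2.0))

coverage = {k: round(within_k_sigma(k), 4) for k in (1, 2, 3)}
# → {1: 0.6827, 2: 0.9545, 3: 0.9973}
```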
Traders begin by taking the set of returns for a particular stock, measuring the stock's daily volatility over a set period, such as five years. They are likely to find that the returns form a bell curve, with returns falling equally on both sides of the curve. With this information, traders can create Bollinger bands that map out when a stock has moved one or two standard deviations away from its normal return. When that happens, the trader simply bets that it will revert to the normal return, buying or selling accordingly.
## Tails
One problem with standard deviation analysis is that you do not know the value of the returns in the very low probability areas on both sides of the curve. These so-called "tails" can be quite extreme. Although you might have only a 5 percent chance of a "tail" return, the value of that tail return could be negative 10 percent in one day. However, you simply cannot quantify the magnitude of the returns you will get in the distant tails.
## Options
Traders can use probability and standard deviation when calculating option values as well. They can use the famous Black-Scholes equation, which assumes that the underlying stock returns are normally distributed with standard deviations. If they can obtain the implied volatility, they can then create a risk-free position by going long on the underlying stock and short with the option, or vice versa, depending on where the stock and the option are priced.
https://brainly.in/question/212815 | 1,485,205,606,000,000,000 | text/html | crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00235-ip-10-171-10-70.ec2.internal.warc.gz | 799,502,786 | 10,293 | # The sides of a triangle are 12cm,35cm,37cm. Find its area by heron's formula.
by bhardwajshagun3
2015-10-25T17:57:39+05:30
### This Is a Certified Answer
Certified answers contain reliable, trustworthy information vouched for by a hand-picked team of experts. Brainly has millions of high quality answers, all of them carefully moderated by our most trusted community members, but certified answers are the finest of the finest.
let side a=12cm side b=35cm and side c=37cm
∴ semi perimeter,
= (a+b+c)/2
= (12+35+37)/2
= 84/2
= 42 cm
∴area of the triangle by heron's formula,
Δ = √(s(s-a)(s-b)(s-c))
Δ = √(42(42-12)(42-35)(42-37))
Δ = √(42×30×7×5)
Δ = √(2×3×7×7×5×2×3×5)
Δ = 2×3×5×7
Δ = 210 cm²
∴ The total area of the triangle is 210 cm².
2015-10-25T18:01:34+05:30
S = (a+b+c)/2 = (12+35+37)/2 = 84/2 = 42
Heron's formula: √(S(S-a)(S-b)(S-c)) = √(42×(42-12)×(42-35)×(42-37)) = √(42×30×7×5) = √44100 = 210
Answer: 210 cm²
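Both answers can be cross-checked with a few lines of code. Note also that 12-35-37 is a right triangle (12² + 35² = 1369 = 37²), so the area equals (12 × 35)/2 = 210 as well:

```python
from math import sqrt

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2                      # semi-perimeter
    return sqrt(s * (s - a) * (s - b) * (s - c))

area = heron_area(12, 35, 37)                # → 210.0
```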