Search Results
We can describe the relationship between a 3D world point and a 2D image plane point, both expressed in homogeneous coordinates, using a linear transformation – a 3×4 matrix. Then we can extend this
to account for an image plane which is a regular grid of discrete pixels.
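As a rough illustration of that relationship (the matrix values below are hypothetical, for demonstration only), a 3×4 camera matrix applied to a homogeneous world point looks like this:

```python
import numpy as np

# Hypothetical 3x4 camera matrix P (illustrative values, not from the page)
P = np.array([[800.0,   0.0, 320.0, 0.0],
              [  0.0, 800.0, 240.0, 0.0],
              [  0.0,   0.0,   1.0, 0.0]])

X = np.array([0.2, -0.1, 2.0, 1.0])  # 3D world point in homogeneous coordinates

x = P @ X                # homogeneous image point (3-vector)
u, v = x[:2] / x[2]      # divide by the third component to get pixel coordinates
print(u, v)              # approx 400.0, 200.0
```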
150,000 mile Buddy 125!
There's another Buddy with over 40,000 miles on it! This one is a 2009 125 cc.
The only repairs it has needed that weren't regular maintenance items were two stators that went bad, first lasted 4,800 miles, second lasted 3,000 miles(the third one is an NCY and it has lasted
over 32,000 miles.) Also, the clutch bearings went bad, not sure when maybe 15K, and the contra spring got weak by 18k so I replaced the whole clutch unit with a used one from a Buddy 150 that had
about 4k on it. And I had to replace the spark plug cap.
It hasn't even taken that much maintenance to get to 40k, belts have lasted ~13-16k, spark plugs ~14k, rollers ~8 to ~12k, valves done every ~8k. It still has the original air filter, fuel filter,
brakes, and battery.
Most of those miles were done with me riding like I stole it, with lots and lots of 100+ mile days, a bunch of 200+ days, and several 300 mile days. Plus one 400 mile day. Just last month I did a 350
mile day.
I can put together a more detailed maintenance history if anyone wants, but the main thing I do is change the oil and ride the thing. I try to change the oil every 1,500 miles with Shell Rotella T 15W-40, and the filter every 2nd or 3rd oil change. For gas, 99% of the time I just put the cheap stuff in it, usually getting ~92 BMPG.
I hit 40k in front of the local dealer (after doing 10 laps of the parking lot.)
*Not sure what, if any, maintenance was done to the scooter before I got it; I bought it used with ~940 miles on it.
Last edited by scootERIK on Sat Nov 21, 2020 10:46 pm, edited 20 times in total.
WOW! Congrats!
I read with interest your maintenance schedules. I too have been milking the original plug, fuel filter, brakes, belt, and rollers at 12,000. I broke down and swapped the air filter at 11,000 for no
reason other than it was showing some black over the red.
I suspect I'll switch out the belt, rollers, and plug at my next oil change interval at 13,500. I use Rotella T 5W-40 full synthetic every 2,000 to 2,500 miles and change the filter each time. It
appears I'm throwing away perfectly good filters. Or perhaps it helps to throw them out with my longer interval.
My original battery only lasted 8,000 miles because I never hooked it up to a tender. It only needed a valve adjustment at 5,000; since then, the valves have held fine. My rear tire needs to be replaced about every 6,000.
It's cool that it happened in front of the dealer, or close to it anyway.
That pesky stator, I just changed mine. I thought about getting the NCY one but didn't, and now I wish I would have.
I will next time it goes out, which is inevitable it seems.
Congrats! Still looks great!
"Things fall apart - it's scientific" - David Byrne
'06 Cream Buddy 125, 11 Blur 220, 13 BMW C 650 GT, 68 Vespa SS180, 64 Vespa GS MK II, 65 Lambretta TV 175, 67 Vespa GT, 64 Vespa 150 VBB 64 Vespa GL
Great work! I love hearing that you rode it hard, and you clearly weren't concerned with the number of miles you were accumulating. I'm always perplexed when I see scooters where the rider seems to
be trying to limit the number of miles they put on a year. Not sure what the point of having a scooter is if you aren't riding it A LOT.
scootERIK wrote:I didn't really feel it happening but the seat on my Buddy is pretty much done. I was at the local shop and I sat on a new Buddy, and the difference became very noticeable. I
might have to try finding a used low mileage seat.
Typical that you won't notice the change as it's happened over time.
Honestly? I don't think I'd want to swap out my seat. I thought about it after I read this, but I think my seat has broken in just nicely to fit my scrawny arse; I don't want to 'try on' a new one.....
Aging is mandatory, growing up is optional.
My kids call me 'crazy', I prefer 'Eccentric'.
Nullius in verba
TCaruso wrote:Amazing.
I just hit 3,000 and posted a discussion on what should I service. I guess I'll just do another oil change.
Good luck with yours. Hope you hit 100,000.
I don't see 100,000 happening, unless Genuine wants to give me a parts sponsorship, then maybe. My big goal is 56,500 miles, then I will have done 50k real miles on it since the odometer is off by about 11%, plus the 942 miles it had on it when I got it (plus some rounding). But in the meantime I am going for 45,500 (= 40,000 real miles), then on to 50k.
The speedometer is off, but people have said the odometer is accurate, which I proved for myself on my Blur using my phone's GPS. I downloaded a speedometer app which includes an odometer function.
The scooter's odometer matched the GPS reading just about exactly.
Some people can break a crowbar in a sandbox.
Slam wrote:...I'm always perplexed when I see scooters where the rider seems to be trying to limit the number of miles they put on a year. Not sure what the point of having a scooter is if you
aren't riding it A LOT.
+1. Same for motorcycles. I bought my '08 S40 with 2700 miles on it, June of 2014. I've already got it to 10K in the year I've had it. Granted, my 2 wheelers have always been my "daily drivers"
unless it's snowy/icy. But man, how can you just not ride?!
babblefish wrote:The speedometer is off, but people have said the odometer is accurate, which I proved for myself on my Blur using my phone's GPS. I downloaded a speedometer app which includes an
odometer function. The scooter's odometer matched the GPS reading just about exactly.
In my experience, ALL brands of motorcycles have "optimistic" speedometers, generally in the neighborhood of 10%. It's also my experience that those same bikes' odometers are, in fact, accurate, so you can forget about having to factor in any odometer error.
babblefish wrote:The speedometer is off, but people have said the odometer is accurate, which I proved for myself on my Blur using my phone's GPS. I downloaded a speedometer app which includes an
odometer function. The scooter's odometer matched the GPS reading just about exactly.
I ran a GPS app for a bunch of miles last year (~4K) and if I remember right my odometer was off by ~9.8% average, with single runs from ~9 to ~11%. On a long ride this year I measured 317 miles on the GPS app and it was 351 miles on the odometer.
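A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

```python
# Odometer error implied by the GPS comparison above
gps_miles = 317.0   # distance measured by the GPS app
odo_miles = 351.0   # distance shown on the Buddy's odometer

error = (odo_miles - gps_miles) / gps_miles * 100
print(f"odometer reads high by about {error:.1f}%")  # ~10.7%, within the ~9-11% range

# Indicated mileage for 50k real miles, starting from 942 already on the clock
real_goal = 50_000
indicated = real_goal * (odo_miles / gps_miles) + 942
print(f"{indicated:.0f} indicated")  # ~56,300, close to the 56,500 target
```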
Federal regs state MC's must not read slower than actual speeds.
http://www.unece.org/fileadmin/DAM/tran ... 039r1e.pdf
http://www.popularmechanics.com/cars/ho ... 7/4260708/
Lots of reasons. Variation in tire height plus tire wear. Don't want us "speeding" but have the speedometer say we're goin' slower.
And, depending on the profile of your tire, your tire will rotate faster anytime you are not upright such as in a curve or a turn. If your tire has a more rounded profile, you are riding on the sides
in curves/turns so the diameter is smaller, thus faster rotation. Cars don't do that. They're always on the tread and never on the sidewall. Except for stunt drivers.
edited for some silly wording to make the meaning clear(er). "if your tire is round"..... I meant "rounded" Those square and tetrahedral tires ride a little rough.
Well at 44,611.6 miles my Buddy 125 died. Not sure what happened yet, but there is a good chance it needs a new top end. I have been losing power and burning more oil over the last few thousand miles. Today I did a long ride near full throttle, the second half into a 20+ mph headwind, and that was too much for it. Going to open it up later in the week to see what happened and figure out what it will take to get it back on the road.
This was the first time my Buddy failed to get me home, that's not too shabby.
About 100 miles before the end.
Congrats on the mileage. I'm just at 10,341 on my Buddy so that means I'm only a quarter of the way. Just normal maint and replacement of normal wear parts so far. Zero problems.
If I didn't ride the Vespa and the Ural a lot more than the Buddy, I'd be a lot closer to the all 4's.
The Racer's Motto:
Broken bones heal,
Chicks dig the scars,
The pain is temporary,
but the glory is forever!
Finally opened it up. Turns out part of the piston came off and was banging around in there (didn't find the piece). The piece messed the head up pretty good (it hit just below the spark plug in the picture).
Haven't fully inspected the cylinder since I can't seem to get it off the rest of the motor.
lovemysan wrote:I've got a complete 125 top with maybe 5k miles on it I'd sell cheap. But if I were you, big bore it. Do a 150 head too.
I sent you a PM, at least I think I did.
Concerning getting the cylinder barrel off, in addition to the 4 nuts holding the head down, did you remove the 2 bolts at the base of the barrel that secure it to the crankcase? It's also sometimes
necessary to use a rubber mallet on the barrel to break it loose from the crankcase due to the base gasket.
Edit: Oops, my bad. The two extra bolts I was referring to also hold the head down, so you obviously got them off.
I second the suggestion of going to a 150cc BBK since the cost is more or less the same. Rejetting the carb may not be required since according to the service manual, both the 125 and 150cc engines
use the same size main jet, but I would double check with sparkplug readings. Oh, if you go the 150cc route, make sure you get a BBK meant for a 125cc engine because the piston wrist pin is smaller
in a 125 engine, 13mm vs 15mm on a 150cc engine.
Good luck with the engine work and good job on getting so many miles from your Buddy.
Last edited by babblefish on Tue Oct 27, 2015 12:27 pm, edited 2 times in total.
Some people can break a crowbar in a sandbox.
avescoots1134 wrote:Ouch, looks like your engine issue was more heat related than wear related. Kaboom!
Two things catch my eye: 1) the sparkplug protrudes pretty far into the combustion chamber (detonation perhaps?), and 2) the sparkplug color looks a bit light. If the scooter was run at wide open throttle a lot during its life, then heat and/or detonation could have definitely been a problem. But still, 44K+ miles isn't bad.
Some people can break a crowbar in a sandbox.
The spark plug in the picture isn't the one that was in it when it died, I put an old one in right after it died just in case that was the problem. The spark plug that was in it when it died was
pretty black.
Every time I showed people my spark plugs they told me that I was running a little lean but not enough to be concerned. Maybe running a little lean added up over time, and I did a lot of wide open riding for extended periods.
scootERIK wrote:The spark plug in the picture isn't the one that was in it when it died, I put an old one in right after it died just in case that was the problem. The spark plug that was in it
when it died was pretty black.
Every time I showed people my spark plugs they told me that I was running a little lean but not enough to be concerned. Maybe running a little lean added up over time, and I did a lot of wide open riding for extended periods.
This has me a little confused. If the original sparkplug was pretty black, then why would anyone say you were running lean? Did the original plug protrude into the combustion chamber the same amount?
Some people can break a crowbar in a sandbox.
babblefish wrote:
scootERIK wrote:The spark plug in the picture isn't the one that was in it when it died, I put an old one in right after it died just in case that was the problem. The spark plug that was in
it when it died was pretty black.
Every time I showed people my spark plugs they told me that I was running a little lean but not enough to be concerned. Maybe running a little lean added up over time, and I did a lot of wide open riding for extended periods.
This has me a little confused. If the original sparkplug was pretty black, then why would anyone say you were running lean? Did the original plug protrude into the combustion chamber the same amount?
I might have worded that poorly. I changed the spark plug on the side of the road after the scooter died to see if that was the problem. The plug that was in the scooter when it died was black, but
it wasn't black 300 miles earlier when I had it out to show it to a buddy.
As for the amount of protrusion, I don't know. This plug was installed into a hot motor and I probably over tightened it.
Check the part #s of both the plug you removed and the one in there now to see if they were recommended for use in that model scooter. The depth that plug is protruding concerns me too. Certainly
possible that the piece of piston broke off after striking the plug. Check the other plug for signs of damage.
babblefish wrote:
scootERIK wrote:The spark plug in the picture isn't the one that was in it when it died, I put an old one in right after it died just in case that was the problem. The spark plug that was in
it when it died was pretty black.
Every time I showed people my spark plugs they told me that I was running a little lean but not enough to be concerned. Maybe running a little lean added up over time, and I did a lot of wide open riding for extended periods.
This has me a little confused. If the original sparkplug was pretty black, then why would anyone say you were running lean? Did the original plug protrude into the combustion chamber the same amount?
He did mention that he was burning some oil before - that will cause the ash buildup pretty quickly.
avescoots1134 wrote:
babblefish wrote:
scootERIK wrote:The spark plug in the picture isn't the one that was in it when it died, I put an old one in right after it died just in case that was the problem. The spark plug that was
in it when it died was pretty black.
Every time I showed people my spark plugs they told me that I was running a little lean but not enough to be concerned. Maybe running a little lean added up over time, and I did a lot of wide open riding for extended periods.
This has me a little confused. If the original sparkplug was pretty black, then why would anyone say you were running lean? Did the original plug protrude into the combustion chamber the same amount?
He did mention that he was burning some oil before - that will cause the ash buildup pretty quickly.
Maybe it started burning oil because a piece of the piston broke off, thereby compromising the piston ring seal.
Some people can break a crowbar in a sandbox.
Just as a point of reference, I posted a picture of a 150cc engine head with a stock CR7 sparkplug installed. Note that none of the sparkplug's threads are showing in the combustion chamber. The
reason this is important is because under a heavy load, it is possible for the edge of the exposed thread to heat up to red hot which can cause detonation problems. Also, because of the flat top
piston, it is not possible for the sparkplug to contact the piston unless it protrudes into the combustion chamber by 15mm or more.
Some people can break a crowbar in a sandbox.
Does detonation sound anything like an exhaust valve that needs adjustment? If so then you may be onto something. There have been a lot of days recently where it ran good for the first 30-45 minutes; after that it would lose top speed and get a little noisy (but only at speeds over 45 bmph).
The plug in the picture is a CR7HSA, it's an NGK plug that comes in a box that says Honda Genuine Parts.
scootERIK wrote:Does detonation sound anything like an exhaust valve that needs adjustment? If so then you may be onto something. There have been a lot of days recently where it ran good for the first 30-45 minutes; after that it would lose top speed and get a little noisy (but only at speeds over 45 bmph).
Yes it can. Kind of a muted tapping sound. And the scenario you described leading up to the sound is, well, the perfect scenario.
The plug in the picture is a CR7HSA, it's an NGK plug that comes in a box that says Honda Genuine Parts.
That is the same plug that is installed in the head that I posted. Not sure why it sticks so much further into the combustion chamber in your head. Maybe a mis-machined head that sinks the plug in deeper?
One other thing, the main jet size spec'ed in the service manual is a #102, but mine had a #92 installed when I bought the scooter. Maybe you should check yours to see if it also has a too small main
jet. Might have been done in order to meet US emissions laws.
Some people can break a crowbar in a sandbox.
I still can't get the cylinder off. I beat on it pretty good and even tried a strap wrench around it to see if I could turn it to break it loose, but haven't had any luck. It looks so easy in all the YouTube videos.
As much as I wanted to get to 50k I am starting to wonder if I would be better off just selling it or parting it out.
scootERIK wrote:I still can't get the cylinder off. I beat on it pretty good and even tried a strap wrench around it to see if I could turn it to break it loose, but haven't had any luck. It looks so easy in all the YouTube videos.
As much as I wanted to get to 50k I am starting to wonder if I would be better off just selling it or parting it out.
Just for sentimental value, it would be great to get the scoot going again. After all, you've come this far with it.
Can you turn the crank to move the piston up and down? Just checking to see if the piston is frozen to the cylinder which would make it a little more difficult to remove.
You won't be able to twist the cylinder because although you can see clearance around the studs at the top, there are spacers at the bottom of the studs, the same as the ones at the top, which help locate the head.
Since you're not going to reuse the cylinder anyway, don't worry about doing any damage to it. Like Avescoots said, just whack it harder, but not sideways; hit it with an upward motion, toward the head side. Work around the cylinder. If you have a propane torch, use it to heat up the crankcase at the base of the cylinder.
Some people can break a crowbar in a sandbox.
Couple quick questions if I go big bore and big valve-
1. Do I need to run a decompression tube? Like this http://www.scooterworks.com/ncy-oil-dec ... jeO024aunI
I did a little research, and on some of the generic GY6 forums it seems like most people think it is a waste of money.
2. For jetting, what would be a decent starting size? I was thinking about starting with a 105 (with stock air box and exhaust); based on what I have read I probably have a 92 in it right now and that would run way too lean with the BBK. I know people don't like to make jetting suggestions, but I'm not looking for perfect, just something close or maybe a little rich that I can run for the first few miles.
scootERIK wrote:Couple quick questions if I go big bore and big valve-
1. Do I need to run a decompression tube? Like this http://www.scooterworks.com/ncy-oil-dec ... jeO024aunI
I did a little research, and on some of the generic GY6 forums it seems like most people think it is a waste of money.
That's one thing I've been meaning to try since I've bumped my engine up to 180cc. At the moment, the only crankcase vent I'm using is the one that comes stock on the valve cover.
2. For jetting, what would be a decent starting size? I was thinking about starting with a 105 (with stock air box and exhaust); based on what I have read I probably have a 92 in it right now and that would run way too lean with the BBK. I know people don't like to make jetting suggestions, but I'm not looking for perfect, just something close or maybe a little rich that I can run for the first few miles.
Mine was a 92 and after the displacement increase to 180, I increased the main jet size a step at a time while checking sparkplug color until I reached my present 118 size. It's still just a touch lean, but acceptable. But keep in mind that my engine has a lot more than just a BBK installed. For you, a 102-105 would be a good starting point.
Some people can break a crowbar in a sandbox.
DK7ZB’s balun
(Steyer nd) describes the DK7ZB balun / match for VHF and UHF Yagis.
To understand how the “DK7ZB-Match” works look at the left picture. Inside the coax cable we have two currents I1 and I2 with the same amount but with a phase shift of 180°.
No. At any point along the coaxial line, a current I on the outer surface of the inner conductor causes an equal current in the opposite direction on the inner surface of the outer conductor.
As the currents are shown with the designated directions, I2=I1, not I2=I1∠180°.
A consequent simplification is that I4=I2-I3=I1-I3.
There is an issue with the current arrow I3 in the lower right of the diagram. It might imply that the only current in the conductors is I3, but the current between the nearby node and lower end of
the shield is I3-I1.
If the structure was much much shorter than the wavelength, there would be negligible phase change in currents along the structure, so I1 would be uniform along the centre conductor, I2 uniform along
the inside surface of the outer conductor, and I3 uniform along the outer surface of the outer conductor.
The diagram notation does show that I3 (which is equal to the dipole drive imbalance) is uniform along the structure, and that I3 flows to ground.
It seems that the diagram appears in (Straw 2003).
DK7ZB goes on:
If we connect a dipole or the radiator of a Yagi direct to the coax, a part of I2 is not running to the arm 2 but down the outer part of the coax shield. Therefore I1 and I4 are not in balance
and the dipole is fed asymmetric.
But how can we suppress the common-mode current I3? A simple solution is to ground the outer shield in a distance of lambda/4 at the peak of the current.
So, the length of the structure is in fact a quarter wavelength electrically, or close to it to achieve the choking effect. I3 will be in the form of a standing wave with current maximum at the lower
(‘grounded’) end, and current minimum at the upper end.
It happens also that his usual configuration of this balun is that there is a standing wave on the inside of the coax, and so I1 and I2 are not uniform along the conductor, and whilst it is relevant
to the designed impedance transformation, it is inconsequential to reduction of dipole current imbalance.
DK7ZB continues with the development of his variation of a Pawsey balun:
But now we get a new interesting problem: For the transformation 28/50 Ohm we need a quarterwave piece of coax with an impedance of 37,5 Ohm (2×75 Ohm parallel). The velocity of the wave inside
the coax is lower than outside (VF = 0,667 for PE).
The outside of the shield has air (and a little bit of insulation) in the surrounding and VF = 0,97. For grounding the common mode currents this piece should have a length of 50 cm; with a VF = 0,667 and a length of 34,5 cm this piece of coax is too short. By making a loop of these two cables as shown in the picture below we get an additional inductivity and we come closer to an electrical length of lambda/4. Ideal is coax cable with foam-PE and a VF = 0,82.
Above is DK7ZB’s implementation of his balun with the loop and “additional inductivity.”
I copied the above implementation and measured the common mode impedance Zcm.
Above is the Zcm measurement. There is a quite narrow self resonance where Zcm is quite high for about 10MHz bandwidth centred on 125MHz, but at 144MHz Zcm=83-j260Ω which is too low to qualify as a
good common mode choke.
Like all narrowband / tuned common mode chokes, tuning to the desired frequency band is essential to their effective operation.
Like most published balun designs, this one is published without measurements to demonstrate its operation or effectiveness.
The Hire Success Parity Index Value | Hire Success®
One of the newest features of the Hire Success Personality Profile reports is the Trait "Parity" Index Value (or "Parity"). This index differs from the Temperament's Relative Strength Index ("RSI")
that has always been shown on reports and reflected graphically in the bar chart.
Where the RSI shows the relationship between the strength of each of the temperaments in a holistic manner (that is, they must all add up to 100% or the "whole" person), the Parity Index shows how
the applicant responded to all of the adjectives as related to each temperament independently of all others. Therefore, the Parity Index is on a scale of 0-100.
A score of 0%, although unlikely, would theoretically show that the person responded to all of the adjectives on the test form with a value of "5,” meaning that not a single adjective associated with
that temperament described him or her. Conversely, a score of 100% would indicate that the applicant responded with a value of "1" on all of the adjectives related to that particular temperament.
Why is this important?
When evaluating the temperaments relative to each other, the sum of all the RSI values must equal 100%. Think of this like the values in a typical pie chart. If there are 4 elements to the pie chart,
no matter how high or low the values in each "slice" of the pie, they still always add up to 100%, or a whole pie. Using this as an illustration, let's examine the following two examples:
Person X has the following values for the 4 quadrants – in our case, the four Personality Temperaments. Let's say the values are: A = 35, with B, C, and D together making up the remaining 65 (a total of 100).
In a pie chart, "A" would equal 35% of the whole pie, even though the "parity" on a scale of 0-100 was 35. Now, consider Person Y, who has the following values: A = 70, with B, C, and D together making up the remaining 130 (a total of 200).
In a pie chart, "A" would still equal 35% of the whole pie, even though the "parity" index was 70, or twice as close to a perfect "A" type as Person X. This would mean that the RSI would make them
appear to be equally strong as an "A" type, simply because relative to all of the values used, 70 = 35% of the total of 200, just as 35 = 35% of 100.
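A small sketch makes the distinction concrete (the B/C/D splits below are hypothetical, chosen only so the totals match the 100 and 200 described above):

```python
# RSI vs. Parity: same relative strength, very different absolute alignment.
# B/C/D values are hypothetical; only "A" and the totals come from the text.
person_x = {"A": 35, "B": 30, "C": 20, "D": 15}   # parity values, total 100
person_y = {"A": 70, "B": 60, "C": 40, "D": 30}   # parity values, total 200

def rsi(parity):
    total = sum(parity.values())
    return {k: 100 * v / total for k, v in parity.items()}

print(rsi(person_x)["A"])  # 35.0 -> "A" is 35% of the pie
print(rsi(person_y)["A"])  # 35.0 -> also 35%, despite parity of 70 vs 35
```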
The Parity Index Value is a way of illustrating that even though relatively, both sets are equally distributed when compared to each other, Person Y far more closely parallels (or has "parity" with)
the "A" type than Person X. In this way, we’re trying to provide more information that illustrates the "parity" or "alignment" with the characteristics of each temperament, as well as a relative
picture of the strength of each type.
By providing this value, we don't want to make the report confusing, but instead to provide you with a different perspective on the strength of each of the Personality Temperaments.
Angle of Depression Calculator
Understanding the Angle of Depression Calculator
The Angle of Depression Calculator displayed above is a useful tool for determining the angle at which an observer should look downward from a higher point to a lower point. This calculation can be beneficial in various practical scenarios. In construction, architecture, and even in daily life activities, knowing the angle of depression can help in assessing and planning different tasks.
What is the Angle of Depression?
The angle of depression is the angle between the observer's horizontal line of sight and the line down to the point of observation below. It essentially measures how steeply the observer is looking downward. Understanding and calculating this angle can be crucial for ensuring safety, precision, and effectiveness in projects that require height and distance assessment.
Applications of the Angle of Depression Calculator
The calculator can be particularly useful in:
• Construction Projects: When determining the appropriate slopes for ramps or the angle for drainage systems.
• Surveying: For land surveyors who need to measure the angle of their line of sight to an object below them.
• Aviation: Pilots can use it to calculate their approach angle when landing.
• Photography: When setting up camera angles for specific shots from elevated positions.
Benefits of Using the Angle of Depression Calculator
Using this calculator brings several benefits:
• Accuracy: It allows for precise calculation, ensuring that any conclusions or decisions drawn from the angle measurements are reliable.
• Convenience: It simplifies the process of performing angle calculations, saving both time and effort for users.
• Versatility: Applicable in diverse fields such as construction, surveying, aviation, and photography.
How the Angle of Depression Calculation Works
The angle of depression is determined by two key measurements: the vertical height (h) between the observer and the point observed, and the horizontal distance (d) between these two points. By taking
the inverse tangent of the height divided by the distance, the angle of depression is calculated in degrees. This method ensures an accurate and quick computation, making it easy to use even for
those unfamiliar with trigonometric calculations.
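In code, the computation is a one-liner (a minimal sketch, assuming the height and distance are given in the same units):

```python
import math

def angle_of_depression(height: float, distance: float) -> float:
    """Angle of depression in degrees, given vertical height and
    horizontal distance in the same units."""
    return math.degrees(math.atan(height / distance))

print(angle_of_depression(30, 100))  # ~16.7 degrees
```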
Enhancing Project Efficiency
Incorporating this calculator can significantly improve efficiency in various projects. By providing a quick and accurate means to measure angles, users can ensure more precise planning and
execution. This aids in the overall success and safety of projects, making it a valuable resource in several professional and personal contexts.
What is the angle of depression?
The angle of depression is the angle between the horizontal line at the observer's eye level and the line of sight down to the point of observation. It essentially measures the steepness of the downward line of sight from the observer.
How does the Angle of Depression Calculator work?
The calculator requires two inputs: the vertical height (h) between the observer's position and the point of observation, and the horizontal distance (d) from the observer to the point. By using
these inputs, the calculator computes the angle of depression by taking the inverse tangent (arctan) of the height divided by the distance.
Why is the angle of depression important in construction?
In construction, determining the angle of depression is essential for planning slopes for ramps, drainage systems, and other structural elements that require precise angle measurements to ensure
stability and functionality.
Can the Angle of Depression Calculator be used in photography?
Yes, photographers often use the angle of depression to set up camera angles for specific shots taken from elevated positions. This ensures the correct perspective and composition, especially in
aerial and landscape photography.
Is this calculator useful for land surveyors?
Absolutely, land surveyors frequently measure the angle of depression when mapping terrains or assessing land elevations. The calculator provides a quick and accurate means to determine this angle,
facilitating efficient surveying.
How accurate are the results from the calculator?
The results are highly accurate as they are based on the principles of trigonometry. However, the accuracy also depends on the precision of the input measurements. Ensure that both the vertical
height and horizontal distance are measured correctly for the best results.
Are there any prerequisites to using the Angle of Depression Calculator?
There are no specific prerequisites other than having the vertical height and horizontal distance measurements available. Basic knowledge of these concepts can be helpful but is not necessary, as the
calculator simplifies the process.
Can pilots use this calculator for landing approaches?
Yes, pilots can use the angle of depression calculator to calculate their approach angle for landing. This helps in making safe and precise landings by determining the correct angle of descent from a
given height to the runway.
How do I measure the vertical height and horizontal distance?
Vertical height can be measured using tools like a measuring tape or laser distance measurer from the observer's position to the point below. The horizontal distance can be measured using similar tools along a straight line connecting the observer to the point of observation. Accurate measurements are crucial for precise angle calculations.
011 (part 2 of 3) 10.0 points
Determine how long the ski jumper is airborne.
Answer in units of s.

012 (part 3 of 3) 10.0 points
What is the magnitude of the relative angle φ with which the ski jumper hits the slope?
Answer in units of °.

013 10.0 points
A particle is moving at constant speed in a counterclockwise circle of radius r about the point x = y = 0. At a certain instant it is at x = -r, y = 0.
What is the direction of its acceleration at that instant?
1. It is along ĵ.
2. It is along î.
3. It is along -ĵ.
4. It is along -î.

014 10.0 points
A ball on the end of a string is whirled around in a horizontal circle of radius 0.394 m. The plane of the circle is 1.22 m above the ground. The string breaks and the ball lands 2.56 m away from the point on the ground directly beneath the ball's location when the string breaks.
The acceleration of gravity is 9.8 m/s².
Find the centripetal acceleration of the ball during its circular motion.
Answer in units of m/s².

015 10.0 points
A rowboat crosses a river with a velocity of 0.6 m/s at an angle 41° North of West relative to the water. The river is 519 m wide and carries a current of 0.89 m/s due East.
[Figure: river-crossing diagram showing the boat's 0.6 m/s velocity at 41° N of W in the water's frame of reference, and the 0.89 m/s current due East in the shore's frame of reference.]
When the boat reaches the opposite bank, how far East or West (downstream "+" or upstream "-") from its starting point (on the opposite bank) does it land?
Note: In your answer, take an Eastward displacement to be positive and a Westward displacement to be negative.
Answer in units of m.

016 10.0 points
A bee wants to fly to a flower located due North of the hive on a windy day. The wind blows from East to West at speed 6 m/s. The airspeed of the bee (i.e., its speed relative to the air) is 14.6 m/s.
In which direction should the bee head in order to fly directly to the flower, due North relative to the ground?
Answer in units of ° East of North.
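The original page gives no solutions; as a hedged check on problems 014-016 (my own working, using only standard kinematics and relative velocity), a short script:

```python
import math

# 014: ball leaves a horizontal circle (r = 0.394 m) at height 1.22 m,
# lands 2.56 m away horizontally. Projectile motion gives its launch speed.
g, r, h, x = 9.8, 0.394, 1.22, 2.56
t = math.sqrt(2 * h / g)        # fall time, ~0.499 s
v = x / t                       # launch (tangential) speed, ~5.13 m/s
print(v**2 / r)                 # centripetal acceleration, ~66.8 m/s^2

# 015: boat 0.6 m/s at 41° N of W relative to water; current 0.89 m/s East.
vn = 0.6 * math.cos(math.radians(41))         # northward component, ~0.453 m/s
ve = 0.89 - 0.6 * math.sin(math.radians(41))  # eastward ground component, ~0.496 m/s
print(ve * (519 / vn))                        # ~569 m East (positive, downstream)

# 016: bee airspeed 14.6 m/s must cancel a 6 m/s westward wind.
print(math.degrees(math.asin(6 / 14.6)))      # head ~24.3° East of North
```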
Arc Length Calculator
When any two of the following values are known, you can use the arc length calculator to calculate the length of an arc along with other related measurements:
• Central Angle and Radius
• Radius and Segment Height
• Radius and Sector Area
• Radius and Chord Length
• Central Angle and Diameter
• Central Angle and Sector Area
• Central Angle and Chord Length
• Chord Length and Central Height
What Is Arc Length?
The length of an arc can be defined as the total distance between two points along a section of any curve. It depends on:
• Sector Area
• Chord
• Radius of Circle
Calculation of the length of an irregular arc segment is known as rectification of a curve.
Arc Length Formula:
When the angle equals \( 360° \) (that is, \( 2\pi \) radians), the arc length equals the full circumference, so arc length is proportional to the central angle. It can be stated as:
\( L / \theta = C / 2\pi \)
Substituting the circumference \( C = 2\pi r \):
\( L / \theta = 2\pi r / 2\pi \)
After cancelling, only \( L / \theta = r \) remains.
To calculate arc length, multiply the radius by the central angle \( \theta \) (in radians): \( L = r \times \theta \)
How To Find The Arc Length?
There are 2 different ways to find a circle’s arc length which are:
In radians: to find arc length from the radius, the formula is \( s = \theta \times r \)
In degrees: to find arc length, the formula is \( s = 2\pi r \left(\dfrac{\theta}{360°}\right) \)
Also, you can use the arc length calculator for quick calculations.
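As a quick illustration (a minimal sketch, not the calculator's actual code), both formulas in Python:

```python
import math

def arc_length_rad(radius: float, angle_rad: float) -> float:
    # s = r * theta, with theta in radians
    return radius * angle_rad

def arc_length_deg(radius: float, angle_deg: float) -> float:
    # s = 2*pi*r * (theta / 360), with theta in degrees
    return 2 * math.pi * radius * angle_deg / 360

print(arc_length_rad(50, math.pi / 4))  # ~39.27
print(arc_length_deg(50, 45))           # ~39.27 (same arc)
```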
Example: given the following values, find the sector area and the length of the arc.
\( Radius\ of\ circle\ (r) = 50\ cm \)
\( Central\ angle\ (\theta) = \dfrac{\pi}{4} \)
To find the arc length:
\( s = 50 \times \dfrac{\pi}{4} = \dfrac{25\pi}{2}\ \text{cm} \approx 39.27\ \text{cm} \)
For the sector area of the circle:
\( A = \dfrac{1}{2}r^{2}\theta = \dfrac{1}{2}(50)^{2}\left(\dfrac{\pi}{4}\right) \approx 981.75\ \text{cm}^{2} \)
How To Find Arc Length Using Sector Area And Central Angle?
If you have a sector area and the central angle, you can still calculate the arc of a circle. The formula is as follows:
\( L = \theta \times \sqrt{\dfrac{2A}{\theta}} \)
Suppose the sector area is 300,000 cm² and the central angle is 60 degrees. How do you calculate the arc length without the radius of the circle? Let's see how!
\( 1\ \text{cm}^{2} = \dfrac{1}{10000}\ \text{m}^{2} \)
\( 300000\ \text{cm}^{2} = \dfrac{300000}{10000}\ \text{m}^{2} = 30\ \text{m}^{2} \)
\( 1\ \text{degree} = \dfrac{\pi}{180}\ \text{radians} \)
\( 60\ \text{degrees} = 60 \times \dfrac{\pi}{180}\ \text{radians} = 1.0472\ \text{rad} \)
\( Sector\ area\ of\ circle\ (A) = 30\ \text{m}^{2} \)
\( Central\ angle\ (\theta) = 1.0472\ \text{rad} \)
\( L = \theta \times \sqrt{\dfrac{2A}{\theta}} \)
\( L = 1.0472 \times \sqrt{\dfrac{2(30)}{1.0472}} \)
\( L = 1.0472 \times \sqrt{\dfrac{60}{1.0472}} \)
\( L = 1.0472 \times \sqrt{57.296} \)
\( L = 1.0472 \times 7.569 \)
\( L = 7.926\ \text{m} \)
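A hedged sketch of the same computation in code (deriving the radius from the sector area first):

```python
import math

def arc_length_from_sector(area: float, angle_rad: float) -> float:
    # A = (1/2) r^2 theta  =>  r = sqrt(2A / theta), and L = r * theta
    return angle_rad * math.sqrt(2 * area / angle_rad)

print(arc_length_from_sector(30, math.radians(60)))  # ~7.93 m
```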
Is Arc Length The Same As The Angle?
No. The angle is the opening between two radii of a circle, while the arc length is the distance along the curve between the points where those radii meet the circle.
What Is The Difference Between Chord Length And Arc Length?
The chord length is the straight-line distance between two points on the circle, while the arc length is the distance along the curve between those two points (a portion of the circumference).
From the source of Wikipedia: Arc length, general approach.
Neural Network
Topic 1: Neural Networks
Continuing with Module 4 on "Deep Learning and Its Applications," the next topic delves into "Neural Networks," laying the groundwork for understanding the architecture, functioning, and types of
neural networks that form the basis for deep learning models.
Slide 1: Title Slide
• Title: Neural Networks
• Subtitle: The Building Blocks of Deep Learning
• Instructor's Name and Contact Information
Slide 2: Introduction to Neural Networks
- Definition and overview of neural networks as the foundation of deep learning.
- The inspiration behind neural networks: mimicking the human brain's architecture and functioning.
- Basic components: neurons, weights, biases, and activation functions.
Shifting our focus to neural networks, let's prepare an introduction that serves as the foundation for understanding deep learning. This topic is essential for grasping how complex models like those
in deep learning are constructed and function to perform tasks ranging from image and speech recognition to natural language processing.
Definition and Overview
Neural networks are a subset of machine learning and form the backbone of deep learning. They are designed to recognize patterns and solve complex problems by mimicking the structure and function of
the human brain. At their core, neural networks consist of layers of interconnected nodes, or "neurons," which process input data and can learn to perform tasks without being explicitly programmed
with task-specific rules.
Inspiration Behind Neural Networks
The fundamental inspiration for neural networks comes from the desire to replicate the human brain's incredible computing power. The brain's ability to process information through a vast network of
neurons, each connecting to thousands of others, has inspired the development of artificial neural networks (ANNs). These networks aim to simulate the brain's efficiency in pattern recognition and
decision-making processes.
Basic Components
• Neurons: The basic units of a neural network, analogous to the nerve cells in the human brain. In an ANN, a neuron receives input, processes it, and passes the output to the next layer of
• Weights and Biases: Weights are the parameters that scale the input received by a neuron, determining how much influence the input will have on the output. Biases are additional parameters that
allow the activation function to be adjusted. Together, weights and biases are adjusted during the learning process to minimize the difference between the predicted output and the actual output.
• Activation Functions: These functions determine whether a neuron should be activated or not, based on whether the neuron's input is relevant for the model's prediction. Activation functions
introduce non-linearity to the model, enabling it to learn complex patterns. Common examples include the sigmoid, tanh, and ReLU functions.
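To make these three components concrete, here is a minimal sketch (illustrative only, not from the course materials) of a single neuron's forward computation in NumPy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One neuron: weights scale each input, the bias shifts the weighted sum,
# and the activation function decides how strongly the neuron "fires".
x = np.array([0.5, -1.2, 3.0])   # inputs (features)
w = np.array([0.4, 0.1, -0.6])   # weights
b = 0.2                          # bias

output = sigmoid(np.dot(w, x) + b)
print(output)  # a value between 0 and 1
```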
This introduction sets the stage for a deeper exploration into the workings of neural networks, including how they are trained to recognize patterns and make predictions. Understanding these basic
components is crucial for anyone looking to delve into the field of deep learning and artificial intelligence.
Would you like to expand on any of these components, or should we move on to another topic or concept within neural networks?
Slide 3: Architecture of Neural Networks
- Explanation of the layered structure of neural networks: input layer, hidden layers, and output layer.
- The role of each layer in processing information and making predictions.
- Introduction to the concept of depth in neural networks and its importance.
Slide 4: How Neural Networks Learn
- Overview of the learning process in neural networks: forward propagation and backpropagation.
- The concept of loss functions and how they guide the learning process.
- The role of optimization algorithms (e.g., gradient descent) in adjusting weights to minimize loss.
Slide 5: Activation Functions
- Introduction to activation functions and their purpose in neural networks.
- Examples of common activation functions: Sigmoid, Tanh, ReLU, and their variants.
- How activation functions introduce non-linearity, enabling neural networks to learn complex patterns.
Let's dive deeper into the architecture and functioning of neural networks, breaking down the concepts across the next slides to ensure a comprehensive understanding.
Slide 3: Architecture of Neural Networks
Explanation of the Layered Structure
Neural networks are structured in layers, each consisting of a number of interconnected neurons that process information:
- Input Layer: This is the first layer of the neural network. It receives the raw input data and passes it on to the next layers for processing. Each neuron in the input layer represents a feature of the input data.
- Hidden Layers: Located between the input and output layers, hidden layers perform the bulk of the computation. They extract and process features from the input data, with each subsequent layer working with a more abstract representation of the data. The number of hidden layers and the neurons within them determine the network's "depth."
- Output Layer: This layer produces the final output of the network. The structure of the output layer depends on the specific task (e.g., classification, regression).
Role of Each Layer in Processing Information
• The input layer acts as the interface between the raw data and the neural network.
• Hidden layers extract and refine features from the input, with deeper layers capturing more complex patterns.
• The output layer translates the processed information from the hidden layers into a form suitable for the task at hand, such as a class label or a continuous value.
Introduction to the Concept of Depth
• The depth of a neural network is defined by the number of hidden layers it contains. Deeper networks, with more hidden layers, can model more complex relationships by learning a hierarchy of
features, from simple to complex. However, increasing depth comes with challenges, such as the potential for overfitting and increased computational cost.
Slide 4: How Neural Networks Learn
Overview of the Learning Process
• Forward Propagation: The process of moving the input data through the network to generate an output. Each neuron applies its weights and biases to the input, passes it through an activation
function, and forwards the result to the next layer.
• Backpropagation: After comparing the output with the actual expected result, the network calculates the error using a loss function. Backpropagation then helps in distributing the error back
through the network, allowing the weights to be adjusted to minimize this error.
Concept of Loss Functions
• Loss functions measure the difference between the network's prediction and the actual target values. Common examples include Mean Squared Error (MSE) for regression tasks and Cross-Entropy Loss
for classification tasks. The choice of loss function is crucial as it guides the learning process.
Role of Optimization Algorithms
• Optimization algorithms, like gradient descent, update the weights and biases to minimize the loss. They work by calculating the gradient of the loss function with respect to each weight and
bias, then adjusting those weights and biases in the direction that reduces the loss.
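A toy example (a sketch, not production code) tying these pieces together: one weight trained by repeated gradient-descent steps on a mean-squared-error loss:

```python
# Tiny linear model y_hat = w * x, trained on one (x, y) pair with MSE loss.
x, y = 2.0, 8.0          # input and target (so the ideal weight is 4.0)
w = 0.5                  # initial weight
lr = 0.05                # learning rate

for step in range(20):
    y_hat = w * x                    # forward propagation
    loss = (y_hat - y) ** 2          # squared-error loss (single sample)
    grad = 2 * (y_hat - y) * x       # dLoss/dw via the chain rule
    w -= lr * grad                   # gradient descent update

print(w)  # approaches 4.0
```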
Slide 5: Activation Functions
Introduction to Activation Functions
Activation functions are non-linear transformations applied to the input of a neuron, determining whether it should be activated ("fired") or not. They are essential for neural networks to model
complex, non-linear relationships.
Examples of Common Activation Functions
• Sigmoid: Outputs a value between 0 and 1, useful for binary classification.
• Tanh (Hyperbolic Tangent): Similar to the sigmoid but outputs values between -1 and 1, offering a zero-centered range which often leads to better convergence in practice.
• ReLU (Rectified Linear Unit): Outputs the input directly if it's positive, otherwise, it outputs zero. It has become very popular due to its computational efficiency and the ability to enable
faster learning.
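For reference, the three functions named above in NumPy (a minimal sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                  # squashes to (-1, 1), zero-centered

def relu(z):
    return np.maximum(0.0, z)          # passes positives, zeros out negatives

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z))
```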
How Activation Functions Introduce Non-Linearity
Without non-linearity, a neural network, regardless of its depth, would behave just like a single-layer network. Activation functions allow neural networks to learn and model complex patterns, such
as those found in images, speech, and text data, which are not possible to model with linear transformations alone.
Each of these slides builds upon the last, offering a structured approach to understanding neural networks—from their architecture and learning mechanisms to the critical role of activation
functions. Would you like to delve into any of these topics further, or discuss additional aspects of neural networks?
Slide 6: Types of Neural Networks
- Brief overview of different types of neural networks and their applications:
- Feedforward Neural Networks (FNNs) for basic predictions.
- Convolutional Neural Networks (CNNs) for image processing.
- Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks for sequential data and time series analysis.
Slide 7: Training Neural Networks
- The process of training neural networks, including data preparation, model fitting, and validation.
- The importance of training data quality and quantity for successful model learning.
- Techniques to avoid overfitting, such as regularization and dropout.
Slide 8: Neural Network Applications
- Highlighting various applications of neural networks across different industries:
- Image and speech recognition, natural language processing, gaming, and autonomous vehicles.
- Discussion on the impact of neural networks in advancing AI capabilities and solving complex problems.
Slide 9: Challenges and Solutions
- Common challenges in designing and training neural networks: computational resources, data requirements, model interpretability.
- Emerging solutions and best practices to address these challenges, including transfer learning and model compression techniques.
Slide 10: Tools and Libraries for Neural Networks
- Overview of popular frameworks and libraries for building and training neural networks: TensorFlow, Keras, PyTorch.
- Comparison of these tools in terms of features, usability, and community support.
Slide 11: Future of Neural Networks
- Exploration of future directions and trends in neural network research and applications.
- The potential for new architectures and algorithms to further enhance the capabilities of neural networks.
Slide 12: Getting Started with Neural Networks
- Practical tips for students interested in exploring neural networks, including online resources, courses, and project ideas.
- Encouragement to engage with the AI community through forums, hackathons, and conferences.
Slide 13: Conclusion and Q&A
- Recap of the key concepts covered in the lecture on neural networks.
- Emphasis on the transformative potential of neural networks in various domains.
- Open floor for questions, encouraging students to discuss their thoughts or clarify doubts about neural networks.
Let's outline detailed content for these slides, focusing on neural networks, their types, applications, challenges, and the future landscape.
Slide 6: Types of Neural Networks
Brief Overview
• Feedforward Neural Networks (FNNs): The simplest type of neural network where connections between nodes do not form a cycle. Ideal for basic prediction problems.
• Convolutional Neural Networks (CNNs): Specialized for processing data with a grid-like topology, such as images. They use convolutional layers to efficiently learn spatial hierarchies of
• Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks: Suited for sequential data like time series or natural language. RNNs have the ability to retain information across
inputs, while LSTMs are designed to avoid long-term dependency problems.
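For a feel of how these types differ in code, a hedged sketch using the Keras API (layer choices and sizes are illustrative, not prescriptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Feedforward network for a simple prediction task
fnn = tf.keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(10,)),
    layers.Dense(1),
])

# Convolutional network for small grayscale images
cnn = tf.keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# LSTM network for sequences (e.g., 50 timesteps of 8 features)
rnn = tf.keras.Sequential([
    layers.LSTM(64, input_shape=(50, 8)),
    layers.Dense(1),
])
```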
Slide 7: Training Neural Networks
Training Process
• Data Preparation: Involves collecting, cleaning, and preprocessing data to feed into the neural network.
• Model Fitting: Adjusting the weights of the network through backpropagation based on the error between the predicted and actual outputs.
• Validation: Using a separate dataset not seen by the model during training to evaluate its performance and generalizability.
Importance of Data Quality
High-quality and diverse data sets are crucial for the successful training of neural networks, directly impacting their ability to learn and make accurate predictions.
Avoiding Overfitting
Introduce techniques like regularization (L1, L2) and dropout to prevent neural networks from overfitting to the training data, enhancing their ability to generalize.
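As an illustration of those two techniques in Keras (a sketch; the 0.5 dropout rate and L2 strength are common defaults, not tuned values):

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4),  # L2 weight penalty
                 input_shape=(20,)),
    layers.Dropout(0.5),  # randomly silences half the units during training
    layers.Dense(1, activation="sigmoid"),
])
```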
Slide 8: Neural Network Applications
Applications Across Industries
• Image and Speech Recognition: Use of CNNs for facial recognition systems and voice-activated assistants.
• Natural Language Processing (NLP): Utilizing RNNs and LSTMs for translation, sentiment analysis, and chatbots.
• Gaming and Autonomous Vehicles: Neural networks drive decision-making in real-time gaming and are key to the development of self-driving cars.
Impact on AI
Discuss how neural networks have been pivotal in advancing AI, solving complex problems that were previously thought to be beyond the capabilities of machines.
Slide 9: Challenges and Solutions
Common Challenges
• Computational Resources: High demand for processing power and memory.
• Data Requirements: Need for large, annotated datasets.
• Model Interpretability: Difficulty in understanding the decision-making process of complex models.
Highlight emerging solutions like transfer learning, which allows models to leverage pre-trained networks for new tasks, and model compression techniques to reduce the size and computational needs of
neural networks.
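Transfer learning in particular is only a few lines in practice; a hedged sketch reusing a pretrained image backbone:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Reuse ImageNet features; train only a small new head for the new task.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained weights

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # e.g., 5 new classes
])
```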
Slide 10: Tools and Libraries for Neural Networks
Overview of Tools
• TensorFlow: An open-source platform for machine learning developed by Google.
• Keras: A Python deep learning API running on top of TensorFlow, designed for easy and fast prototyping.
• PyTorch: Developed by Facebook, known for its flexibility and dynamic computational graph.
Discuss the features, usability, and community support of these tools, helping students understand which framework might be best suited for their projects.
Slide 11: Future of Neural Networks
Future Directions
Explore potential advancements in neural network architectures and algorithms, such as attention mechanisms and transformers, that could further enhance their capabilities.
Discuss the trends towards more efficient, explainable, and scalable neural network models and the exploration of unsupervised learning techniques.
Slide 12: Getting Started with Neural Networks
Practical Tips
Offer resources for learning, such as MOOCs (e.g., Coursera, edX), documentation and tutorials from TensorFlow or PyTorch, and project ideas for hands-on experience.
Community Engagement
Encourage students to engage with the AI community through forums like Stack Overflow, GitHub, hackathons, and conferences to learn from real-world projects and networking.
Slide 13: Conclusion and Q&A
Summarize the transformative potential of neural networks across various domains, underscoring the importance of understanding different types, training techniques, and the latest trends.
Encouragement for Exploration
Motivate students to dive into neural network technologies, emphasizing that the field is rapidly evolving and offers endless opportunities for innovation.
Open Floor for Questions
Invite questions and discussions on neural networks, encouraging students to share their thoughts or seek clarification on any aspects covered in the lecture.
This structured presentation aims to provide a comprehensive overview of neural networks, from foundational concepts to practical applications and future directions, fostering an engaging and
informative learning experience.
What is: Triangular Distribution
What is Triangular Distribution?
The Triangular Distribution is a continuous probability distribution that is defined by three parameters: the minimum value (a), the maximum value (b), and the mode (c). This distribution is
particularly useful in scenarios where limited sample data is available, allowing practitioners to make informed estimates based on the known minimum, maximum, and most likely outcomes. The shape of
the distribution resembles a triangle, hence the name, and it is often employed in fields such as project management, risk analysis, and decision-making processes. Its simplicity and ease of use make
it a popular choice for modeling uncertain variables.
Characteristics of Triangular Distribution
One of the key characteristics of the Triangular Distribution is its shape, which is defined by the three parameters mentioned earlier. The distribution is symmetric when the mode is exactly halfway
between the minimum and maximum values. In contrast, it becomes skewed when the mode is closer to either the minimum or maximum. This flexibility allows analysts to represent various types of
uncertainty in their data. Additionally, the Triangular Distribution has a finite support, meaning that the probability of outcomes outside the defined range is zero, making it a bounded distribution.
Probability Density Function (PDF)
The Probability Density Function (PDF) of the Triangular Distribution is piecewise-defined, reflecting the triangular shape of the distribution. For a given value x within the interval [a, b], the
PDF can be expressed as follows:
– For \( a \leq x < c \):
\[ f(x) = \frac{2(x - a)}{(b - a)(c - a)} \]
– For \( c \leq x \leq b \):
\[ f(x) = \frac{2(b - x)}{(b - a)(b - c)} \]
This mathematical representation allows users to calculate the likelihood of various outcomes within the defined range, providing insights into the distribution of potential results.
Cumulative Distribution Function (CDF)
The Cumulative Distribution Function (CDF) of the Triangular Distribution provides the probability that a random variable X is less than or equal to a certain value x. The CDF is also
piecewise-defined and can be expressed as follows:
– For \( x < a \):
\[ F(x) = 0 \]
– For \( a \leq x < c \):
\[ F(x) = \frac{(x - a)^2}{(b - a)(c - a)} \]
– For \( c \leq x < b \):
\[ F(x) = 1 - \frac{(b - x)^2}{(b - a)(b - c)} \]
– For \( x \geq b \):
\[ F(x) = 1 \]
This function is essential for understanding the probability of outcomes and is widely used in statistical analysis and simulations.
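As a rough illustration, the same quantities can be evaluated numerically. The sketch below uses scipy's triang distribution, which is parameterized by a shape value in [0, 1] plus loc and scale rather than a, b, c directly; the particular a, b, c values are invented for the example and are not from the text.

# Illustrative sketch: triangular PDF/CDF via scipy (example parameters a=2, b=10, c=4).
from scipy.stats import triang

a, b, c = 2.0, 10.0, 4.0
shape = (c - a) / (b - a)          # relative position of the mode within [a, b]
dist = triang(shape, loc=a, scale=b - a)

print(dist.pdf(3.0))               # density f(x) at x = 3
print(dist.cdf(6.0))               # F(6) = P(X <= 6)
print(dist.mean())                 # (a + b + c) / 3 for a triangular distribution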
Applications of Triangular Distribution
Triangular Distribution finds its applications in various fields, particularly in project management and risk assessment. It is frequently used in Monte Carlo simulations to model uncertain variables
when precise data is unavailable. For instance, project managers can use the Triangular Distribution to estimate project completion times by defining the best-case, worst-case, and most likely
scenarios. This approach helps in assessing risks and making informed decisions based on the potential range of outcomes.
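A hedged sketch of that workflow (the three-point task estimates below are purely illustrative) can be written with numpy's triangular sampler:

# Monte Carlo sketch: total project duration from (best, most likely, worst) estimates in days.
import numpy as np

rng = np.random.default_rng(0)
tasks = [(3, 5, 10), (2, 4, 7), (5, 8, 15)]    # hypothetical tasks
n = 100_000
total = sum(rng.triangular(lo, mode, hi, size=n) for lo, mode, hi in tasks)
print(total.mean(), np.percentile(total, 90))  # expected duration and a 90th-percentile estimate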
Comparison with Other Distributions
When comparing the Triangular Distribution to other probability distributions, such as the Normal or Uniform distributions, it is important to note its unique characteristics. Unlike the Normal
Distribution, which is defined by its mean and standard deviation, the Triangular Distribution relies on three specific values, making it easier to use in situations with limited data. Additionally,
while the Uniform Distribution assumes equal probability across its range, the Triangular Distribution allows for varying probabilities based on the mode, providing a more nuanced representation of the underlying uncertainty.
Advantages of Using Triangular Distribution
One of the primary advantages of using the Triangular Distribution is its simplicity and ease of understanding. It requires only three parameters, making it accessible for practitioners who may not
have extensive statistical training. Furthermore, the Triangular Distribution is versatile and can be applied in various scenarios, from financial modeling to engineering projects. Its ability to
represent skewed data effectively makes it a valuable tool for analysts seeking to model uncertainty in their predictions.
Limitations of Triangular Distribution
Despite its advantages, the Triangular Distribution does have limitations. One significant drawback is its assumption of linearity between the minimum, mode, and maximum values, which may not
accurately reflect the underlying data in all cases. Additionally, the distribution may not be suitable for modeling complex phenomena that require more sophisticated statistical techniques. Analysts
must be cautious when applying the Triangular Distribution and consider whether it adequately represents the uncertainty inherent in their specific context.
Conclusion on Triangular Distribution
The Triangular Distribution serves as a practical and effective tool for modeling uncertainty in various fields, particularly when data is limited. Its straightforward parameters and
piecewise-defined functions allow analysts to make informed decisions based on the known minimum, maximum, and most likely outcomes. While it has its limitations, the Triangular Distribution remains
a popular choice for practitioners in statistics, data analysis, and data science, providing valuable insights into uncertain variables and aiding in risk assessment and decision-making processes. | {"url":"https://statisticseasily.com/glossario/what-is-triangular-distribution/","timestamp":"2024-11-04T08:18:15Z","content_type":"text/html","content_length":"139494","record_id":"<urn:uuid:2d9af327-6bda-4491-b3d3-f2c90839fd97>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00455.warc.gz"} |
How many amps are in a Volt?
A “volt” is a unit of electric potential, also known as electromotive force, and represents “the potential difference between two points of a conducting wire carrying a constant current of 1 ampere,
when the power dissipated between these points is equal to 1 watt.” Stated another way, a potential of one volt appears …
How many watts is 1 amp 24 volts?
Equivalent Volts and Watts Measurements
Voltage Power Current
24 Volts 24 Watts 1 Amps
24 Volts 48 Watts 2 Amps
24 Volts 72 Watts 3 Amps
24 Volts 96 Watts 4 Amps
How many watts is 1 amp 220 volts?
220 watts
In order to know how many watts a current delivers, simply multiply the amps by the voltage. So 1 amp at 220 volts is equal to 220 watts.
How do you convert amps to volts?
Volts = Watts / Amps
1. 2400 Watts / 20 Amps = 120 Volts.
2. 2400 Watts / 10 Amps = 240 Volts.
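The same rearrangements of P = V × I can be sketched in a few lines of code (a generic illustration, not part of the original article):

# DC power relationships: P = V * I, so V = P / I and I = P / V.
def volts_from(watts, amps):
    return watts / amps

def amps_from(watts, volts):
    return watts / volts

print(volts_from(2400, 20))   # 120.0 V
print(volts_from(2400, 10))   # 240.0 V
print(amps_from(500, 120))    # ~4.17 A for a 500 W load on a 120 V circuit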
How many volts is 70 amps?
Equivalent Volts and Amps Measurements
Voltage Current Power
12 Volts 5 Amps 60 Watts
12 Volts 5.417 Amps 65 Watts
12 Volts 5.833 Amps 70 Watts
12 Volts 6.25 Amps 75 Watts
How many watts can 1 amp handle?
120 watts
That means that 1 amp = 120 watts.
How many watts is 3 amps?
360 watts
3 Amps To Watts (Example 1) In short, 3 amps is 360 watts.
What’s the difference between amps, volts and Watts?
Amps represent the volume of water present.
Voltage represents the water pressure
Watts are the energy created by the closed system that powers the mill.
Ohms represent the amount of resistance created by the size of the pipe.
How many volts are in 1 amp?
At 120V, 120 watts make 1 amp. That means that 1 amp = 120 watts. At 240V, 240 watts make 1 amp. Example 1: How Many Amps Is 500 Watts? Let’s say we have a 500W air conditioner plugged into a 120 V circuit: 500 W / 120 V ≈ 4.17 amps.
How do Watts compare to amps?
Amps is the unit of current flow, while Watts is the unit for power. Amps, when multiplied by voltage, equates to Watts. Measuring amps is much easier compared to measuring watts. Amps apply only to electricity, while watts can be used for other forms of energy.
What is the relationship between volts and amps?
The volt is the unit of potential difference, voltage and electromotive force, whereas the amp is the unit of current. The volt is measured by the voltmeter whereas the amp is measured by the
ammeter. The volts and amp both are correlated with ohms law. | {"url":"https://www.rhumbarlv.com/how-many-amps-are-in-a-volt/","timestamp":"2024-11-03T15:31:18Z","content_type":"text/html","content_length":"63307","record_id":"<urn:uuid:bf231505-ffd7-4ed6-827c-76980e329fd3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00544.warc.gz"} |
Operators | MistQL
Version: 0.4.12
Operators are functions with specialized syntax for ease of use. They come in two forms: unary and binary.
Unary Operators#
There are only two unary operators in MistQL
Operator Parameter Type Return Type Description
! Any Boolean false if the argument is truthy, true otherwise
- Number Number Returns the negative of the number
Binary Operators#
Binary operators make up the vast majority of MistQL's operators.
Operator Parameter Types Return Type Description
+ number or string or array number or string or array Adds two numbers, concatenates two strings, or concatenates two arrays, depending on argument type
- number number Subtracts one number from another
* number number Multiplies 2 numbers
/ number number Divides one number by another
% number number Computes a mod b
< number, string, boolean or boolean Less Than
> number, string, boolean or boolean Greater Than
<= number, string, boolean or boolean Less Than or Equal
>= number, string, boolean or boolean Greater Than or Equal
== any boolean Whether two values are equivalent
!= any boolean Whether two values are not equivalent
=~ string or regex boolean Whether the left hand value matches the right hand pattern. Alias for match.
&& t t Returns the first if the first is falsy, the second otherwise.
\|\| t t Returns the first if the first is truthy, the second otherwise. NOTE: The backslashes aren't necessary. I just can't figure out how to format
it properly for Docusaurus.
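As a tentative illustration of a few of these operators in use (this assumes the mistql Python binding and its query entry point; consult the MistQL docs for the exact API, and note the sample data is invented):

# Hedged sketch: evaluating MistQL expressions from Python (binding API assumed).
import mistql

data = {"events": [{"type": "click", "n": 2}, {"type": "view", "n": 5}]}
print(mistql.query('events | filter type == "click"', data))   # filtering with ==
print(mistql.query("1 + 2 * 3", data))                          # * binds tighter than +, giving 7
print(mistql.query("!false && 4 > 3", data))                    # unary ! and a comparison feeding into &&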
Operator precedence and associativity#
Below are in order from highest to lowest, where all operators on the same level are equal precedence.
Operator Associativity
. ltr
unary !, unary - rtl
*, /, % ltr
+, - ltr
<, >, <=, >= ltr
==, !=, =~ ltr
&& ltr
\|\| ltr
[function application] ltr
\| ltr | {"url":"https://www.mistql.com/docs/reference/operators/","timestamp":"2024-11-10T04:34:07Z","content_type":"text/html","content_length":"15794","record_id":"<urn:uuid:24c2a8bb-3479-48a9-a8c8-6c1b0709a3f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00683.warc.gz"} |
Eigenvalue problems (Femlisp User Manual)
3.3.6 Eigenvalue problems
There is some preliminary support for solving eigenvalue problems by Wielandt’s iteration. For example, the first eigenvalue of the Laplace operator on a unit square can be approximated with
(storing  ; leading STORING form assumed/restored here so that ^ below can read the stored result blackboard
 (let ((problem (cdr-model-problem 2 :evp (list :lambda (box 20.0)
                                                :mu (box 1.0)))))
   (solve (blackboard :problem problem
                      :success-if '(or (>= :time 5) (>= :nr-levels 5))
                      :output 1))))
(slot-value (^ :problem) 'lambda)
(plot (^ :solution))
Note that the multigrid algorithm has not yet been adapted for eigenvalue problems. Therefore, a sparse decomposition is used for solving the linear systems which does not work for large problems. | {"url":"http://femlisp.org/femlisp-html/Eigenvalue-problems.html","timestamp":"2024-11-11T08:09:07Z","content_type":"text/html","content_length":"4254","record_id":"<urn:uuid:d38c1844-4a80-4172-837c-88977925af17>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00222.warc.gz"} |
Mentor Interview with Dr. Nicholas Pritchard
\documentclass[a4paper]{article} \usepackage[english]{babel} \usepackage[utf8]{inputenc} \usepackage{amsmath} \usepackage{graphicx} \usepackage[colorinlistoftodos]{todonotes} \title{Mentor Interview
with Dr. Nicholas Pritchard} \author{Ray Martin Llavore Lipat} \date{\today} \begin{document} \maketitle \section{How it's like being good at Math?} According to Dr. Prtichard he said that earning
the power of doing mathematics is based on Natural Ability. Since Mathematics is a primary quantitative skills from a left brain, some people like to count by object such as patterns. He said that to
be good at mathematics doesn't have to be genetic sometimes but based on one's hobbies that are intellectual such as Chess. Without fear of making a single mistake, Mathematics can be time consuming
and with the use of patience it takes a lot of tries to get a single correct answer. Doing a very intense work and have proficient study habits are one of the best benefits to become good at math.
Overall what makes Dr. Pritchard good at mathematics is by aptitude, having no fear of getting wrong answer and have basic numeracy since elementary school. \section{What are blockades from being
good at Math?} He emphasized part of "Attitude is Everything!", but when it comes with mathematics many people prefer memorization by words. The opposite of what Dr. Pritchard was saying of
mathematical ability was a lack of numeracy and an overemphasis on creativity, which can leave one's brain imbalanced. Not having persistence when one is mentally lethargic is another obstacle. He explained that if
one does not work really hard or give a little work which the consequences are having a bad study habits that lead of deficiency of mathematical ability. Some students born with different genetics
and have different hobbies which may vary. Being timid or frightened at showing effort also decrease mathematical skills even though making mistakes doesn't have to be a major problem. Therefore Dr.
Pritchard believes that what stops him from being good at mathematics are random natural ability from genes or hobby-based, fear of making mistakes and showing a little effort. \end{document} | {"url":"https://da.overleaf.com/articles/mentor-interview-with-dr-nicholas-pritchard/crqmwsfnhvkb","timestamp":"2024-11-09T23:08:03Z","content_type":"text/html","content_length":"38011","record_id":"<urn:uuid:258bf0db-bf57-4727-8499-696c38ab66a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00529.warc.gz"} |
What our customers say...
Thousands of users are using our software to conquer their algebra homework. Here are some of their experiences:
I've never seen anything like it! Step-by-step, I'm learning complicated algebra right alongside my kids!
Cathy Dixx, OH
It's nice to know that educational software that's actually fun for the kids even exists. It sure does beat a lot of that junk they try to sell you these days.
Brian Clapman, WI
If it wasn't for Algebrator, I never would have been confident enough in myself to take the SATs, let alone perform so well in the mathematical section (especially in algebra). I have the chance to go
to college, something no one in my family has ever done. After the support and love of both my mother and father, I think we'd all agree that I owe the rest of my success as a student to your
software. It really is remarkable!
John Davis, TX
Search phrases used on 2008-05-08:
Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. Can you find yours among
• Holt modern biology study guide section 9.1 review answers
• flow chart to solve math story problems
• saxon college
• math answers for algebra 2
• middle school pre algebra final exam review
• solve the equation for the given variable
• Free Fraction worksheets 7th grade
• slop calculator
• grammer arabic
• ti-83 plus usage instructions cubes
• vc++ code solving polynomial
• 2 step algebra equation worksheets
• solving proportions worksheets
• solve for square feet
• Solving Second Order Differential Equation
• free online algebra tutorials
• printable math problem solvers for third grade
• linear equations fifth grade activities
• holt middle school course 2 workbook answers
• algebra ratios
• math lesson plans/slope
• solve roots equation matlab
• algerbra for dummies
• fractions for dummies mathematics
• math order of operations worksheets
• erb math 4th grade practice
• solving simple inequations worksheets
• 7th grade formula sheet with pictures
• ks2 math scales gamea
• online prentice hall algebra 1 book
• free algebra ks3
• Algebraic equasions
• exponent practice problems 5th grade
• factoring calculator polynomial
• printable measurement papers for first grade
• Radical Expressions Online Calculator
• Mathamatics + area
• interger worksheets
• maple solve implicit equation
• "thank you for your order" download
• simplify difference quotient calculator
• subtract fractions and simplify with an odd denominator
• measurement math tricks and trivia
• McDougal Littell Inc. Chapter8 review games and activities for algebra 1
• finding slope in quadratic
• mcgraw-hill algebra test answers
• fraction answers
• Glencoe Algebra 1 answers
• prentice hall algebra 1 indiana homework site
• how to add or subtract rational expressions
• objective mathematics
• glencoe physics principles and problems guided reading
• two adding subtracting multiplying and dividing word problems
• sample star questions for 6th grade math
• partial differential equations "sample exams"
• telecommunications for 3rd graders
• Quadratic equation texas TI-84
• complementary solution of second order nonhomogeneous linear differential equation
• prentice hall mathematics algebra 1 answers key
• algebrahelp software
• prentice hall mathematics algebra 1 book answer key
• mathpower worksheet
• 11th grade math games
• equivalent polar equations
• polynomials for idiots
• mathematics powerpoint gcse sequences
• free college worksheets
• how can i cheat on algebra 2
• mathematics trivia
• math applet 6 X 8 McKenna
• integer practice worksheet
• TI 84 PLUS EMULATOR
• fluid mechanics beginners
• using probability on ti-83
• first gradeprintable worksheets
• sat test questions fo 5 grader
• greatest common factors of 325
• free science test papers for secondary 1
• free download agebra worksheets
• solving higher order polynomials
• how to solve math combinations
• runge kutta for solving 2nd order differential equations
• 1st grade math homework sheets
• ontario grade 12 maths exam paper
• ratios formula and example
• solving second order nonlinear differential equations in matlab
• WWWFREE .ENGLISH GRAMMER | {"url":"https://softmath.com/algebra-help/pritable-math-work-for-the-ged.html","timestamp":"2024-11-11T04:41:34Z","content_type":"text/html","content_length":"35278","record_id":"<urn:uuid:47e009b7-d9a7-44a8-aac9-c45070a10472>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00453.warc.gz"} |
Algorithms and Complexity (Freiburg)
Group Seminar
"Algorithms and Complexity"
Winter Term 2011/12
Alexander Souza
24.11.2011, 13:30, SR 052-02-017
PTAS for Densest k-Subgraph in Interval Graphs
Dr. Tim Nonner, IBM Research
Given an interval graph and integer k, we consider the problem of finding a subgraph of size k with a maximum number of induced edges, called densest k subgraph problem in interval graphs. It has
been shown that this problem is NP hard even for chordal graphs (Perl,Corneil'84), and there is probably no PTAS for general graphs (Khot'06). However, the exact complexity status for interval graphs
is a long-standing open problem (Perl,Corneil'84), and the best known approximation result is a 3-approximation algorithm (Liazi,Milis08}. We shed light on the approximation complexity of finding a
densest k-subgraph in interval graphs by presenting a polynomial-time approximation scheme (PTAS), that is, we show that there is an (1+epsilon)-approximation algorithm for any epsilon > 0, which is
the first such approximation scheme for the densest k subgraph problem in an important graph class without any further restrictions.
24.11.2011, 14:00, SR 052-02-017
Local Matching Dynamics in Social Networks
Jun.-Prof. Dr. Martin Hoefer, RWTH Aachen
Stable marriage and roommates problems have many applications in economics and computer science. In these scenarios, each player is a rational agent and wants to be matched to another player. Players
have preferences over their possible matches. In many applications, however, a player is not aware of all other players and must explore the population before finding a good match. We incorporate
this aspect by studying stable matching under dynamic locality constraints in social networks. Our interest is to characterize the behavior of local improvement dynamics and their convergence to
locally stable matchings that do not allow any incentive to deviate with respect to their imposed information structure in the network.
24.11.2011, 15:00, SR 052-02-017
The Car Sharing Problem
Jun.-Prof. Dr. Patrick Briest, U Paderborn
We consider a novel type of metric task system, termed the car sharing problem, in which the operator of a car sharing program aims to serve the requests of customers occurring at different
locations. Requests are modeled as a stochastic process with known parameters and a request is served if a car is located at the position of its occurrence at this time. Customers pay the service
provider according to the distance they travel and similarly the service provider incurs cost proportional to the distance traveled when relocating a car from one position to another between
requests. We derive an efficient algorithm to compute a redistribution policy that yields average long-term revenue within a factor of 2 of optimal and provide a complementing proof of APX-hardness.
Considering a variation of the problem in which requests occur simultaneously in all locations, we arrive at an interesting repeated balls-into-bins process, for which we prove bounds on the average
number of occupied bins. | {"url":"https://ac.informatik.uni-freiburg.de/lak_teaching/ws11_12/group-seminar.php","timestamp":"2024-11-14T07:40:25Z","content_type":"application/xhtml+xml","content_length":"9131","record_id":"<urn:uuid:35c14084-326f-46a8-9f3e-a9bb71cd4291>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00126.warc.gz"} |
Rotor 37 compressor
Edit me
We recommend going through the tutorial in
Get started
before running this case.
The following is an aerodynamic shape optimization case for the Rotor37 axial compressor rotor.
Case: Axial compressor aerodynamic optimization at transonic conditions
Geometry: Rotor37
Objective function: Torque
Design variables: 50 FFD points moving in the y and z directions
Constraints: Constant mass flow rate and total pressure ratio
Rotation speed: 1800 rad/s
Mesh cells: 40 K
Adjoint solver: DATurboFoam
Fig. 1. Mesh and FFD points for the Rotor37 case
To run this case, first download tutorials and untar it. Then go to tutorials-main/Rotor37_Compressor and run the “preProcessing.sh” script to generate the mesh:
Then, use the following command to run the optimization with 4 CPU cores:
mpirun -np 4 python runScript.py 2>&1 | tee logOpt.txt
Post Processing:
Use the following command to load the OpenFOAM environment:
Next, run the following command:
This will generate new folders and allow the deletion of all of the processor folders. This makes the entire directory a bit smaller and allows for easier processing.
Open the paraview.foam file in the ParaView application. Check the boxes next to blade and hub and uncheck the box next to mesh. After hitting apply, the blade will appear on the viewer. This will
show the pressure gradient by default. There is a play button at the top of the window that will show all iterations through each timestep of the optimization. The original blade should look like
figure 2 and figure 3 as the top and bottom, respectively.
Fig. 2. Pressure gradient on the top of the original blade
Fig. 3. Pressure gradient on the bottom of the orignal blade
After optimization, the pressure gradients should look as follows:
Fig. 4. Pressure gradient on the top of the optimized blade
Fig. 5. Pressure gradient on the bottom of the optimized blade
After this, the blade can be sliced using a cylindrical slice. The slices allow the profile of the blade to be viewed. Figures 6 and 7 show a comparison of the original blade vs the optimized blade
at the root and at the tip.
Fig. 6. The original airfoil and the optimized airfoil at the root
Fig. 7. The original airfoil and the optimized airfoil at the tip
On each slice, use the “Plot on sorted lines” filter to achieve a graph of the pressure distribution across the airfoil. Figures 8 and 9 show overlayed plots of the original shape and the optimized shape at the root and at the tip.
Fig. 8. Overlayed plot of pressure distribution at the root
Fig. 9. Overlayed plot of pressure distribution at the tip
To see the Mach gradient, check the box next to mesh and uncheck the blade and hub. The slice will update to show the mesh. This will need to be copied 4 or 5 more times, with each slice being translated up by 10°. A calculator needs to be put onto each slice to calculate the Mach number. The Mach number can then be selected to view instead of the pressure. Figures 10 and 11 show the Mach gradients for both the original and optimized cases at the root and at the tip.
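(As a hedged aside that is not part of the original tutorial: the Calculator expression for the Mach number is typically the velocity magnitude divided by the local speed of sound, e.g. something like mag(U)/sqrt(1.4*287*T) for air treated as an ideal gas, assuming the case writes out the U and T fields; adjust the ratio of specific heats and gas constant to match the solver setup.)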
Fig. 10. Mach gradient at root, left is original and right is optimized
Fig. 11. Mach gradient at tip, left is orignal and right is optimized | {"url":"https://dafoam.github.io/mydoc_tutorials_aero_rotor37.html","timestamp":"2024-11-10T23:34:24Z","content_type":"text/html","content_length":"28689","record_id":"<urn:uuid:1a554016-957a-4bcd-b0d9-39184d7f13ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00110.warc.gz"} |
How are pressure and temperature related in the Ideal Gas Law?
The pressure of a given amount of gas is directly proportional to its absolute temperature, provided that the volume does not change (Amontons’s law). The volume of a given gas sample is directly
proportional to its absolute temperature at constant pressure (Charles’s law).
Is the Ideal Gas Law accurate at any temperature or pressure?
At low pressures molecules are far enough apart that they do not interact with one another. In other words, the Ideal Gas Law is accurate only at relatively low pressures (relative to the critical
pressure pcr) and high temperatures (relative to the critical temperature Tcr).
What gas law uses pressure and temperature?
Charles’s law
Charles’s law—named for J. -A. -C. Charles (1746–1823)—states that, at constant pressure, the volume V of a gas is directly proportional to its absolute (Kelvin) temperature T, or V/T = k.
Does Ideal Gas Law apply pressure?
The pressure, P, volume V, and temperature T of an ideal gas are related by a simple formula called the ideal gas law.
Why does the ideal gas law break down at high pressure and low temperature?
The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size becomes important. It also fails for most heavy gases, such as many
refrigerants, and for gases with strong intermolecular forces, notably water vapor.
How do you find pressure in ideal gas law?
The ideal gas law formula states that pressure multiplied by volume is equal to moles times the universal gas constant times temperature (PV = nRT); a short worked sketch follows the definition list below.
Ideal Gas Law Formula
1. P = pressure.
2. V = volume.
3. n = number of moles.
4. T = temperature.
5. R = gas constant.
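A minimal worked sketch (the numbers are arbitrary illustration values, using R = 8.314 J/(mol·K) in SI units):

# Solve the ideal gas law PV = nRT for pressure.
R = 8.314  # J/(mol*K), universal gas constant

def pressure(n_moles, temp_kelvin, volume_m3):
    return n_moles * R * temp_kelvin / volume_m3

# 1 mol of an ideal gas at 300 K in 0.025 m^3:
print(pressure(1.0, 300.0, 0.025))  # ~99,800 Pa, close to 1 atm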
How do you calculate the ideal gas law?
Ideal gas law equation. The properties of an ideal gas are all linked in one formula of the form pV = nRT, where: p is the pressure of the gas, measured in Pa, V is the volume of the gas, measured in m^3, n is the amount of substance, measured in moles, R is the ideal gas constant, and T is the temperature, measured in kelvins.
What is the formula for ideal gas law?
The ideal gas law is an equation used in chemistry to describe the behavior of an “ideal gas,” a hypothetical gaseous substance that moves randomly and does not interact with other gases. The
equation is formulated as PV=nRT, meaning that pressure times volume equals number of moles times the ideal gas constant times temperature.
What is the ideal gas law?
Ideal Gas Law Definition. The ideal gases obey the ideal gas law perfectly. This law states that: the volume of a given amount of gas is directly proportional to the number on moles of gas, directly
proportional to the temperature and inversely proportional to the pressure.
How do you calculate ideal gas?
Ideal gas law equation. The properties of an ideal gas are all lined in one formula of the form pV = nRT , where: p is the pressure of the gas, measured in Pa, V is the volume of the gas, measured in
m^3, n is the amount of substance, measured in moles, | {"url":"https://neighborshateus.com/how-are-pressure-and-temperature-related-in-the-ideal-gas-law/","timestamp":"2024-11-02T21:40:47Z","content_type":"text/html","content_length":"51029","record_id":"<urn:uuid:a591b800-e61b-4224-a2aa-a824e2a40092>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00681.warc.gz"} |
Why does the Earth not collapse into the Sun, and instead stay in orbit?
Right now, and ever since the formation of our solar system, two opposing influences have acted on the planets: the Sun's gravitational pull drawing them inward, and their own orbital motion (inertia, often described as a centrifugal effect) carrying them outward. If gravity dominated, the planets would spiral inward; if their inertia dominated, they would spiral outward into deep space. The planets are constantly trying to fly off into deep space, but the gravity of the Sun keeps pulling them into a curved orbit. That is the reason they are not collapsing.
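A quick back-of-the-envelope sketch (standard constants, not part of the original answer): for a roughly circular orbit, setting the Sun's gravitational pull equal to the centripetal force needed to keep Earth on its curved path gives an orbital speed v = sqrt(G*M/r).

# Circular-orbit speed where gravity supplies the centripetal force: v = sqrt(G*M/r).
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
r = 1.496e11       # m, average Earth-Sun distance

v = math.sqrt(G * M_sun / r)
print(v / 1000)    # ~29.8 km/s, which matches Earth's actual average orbital speed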
I hope this would be helpful, please rate my answer.... | {"url":"https://justaaa.com/physics/450560-why-does-earth-not-collapse-into-the-sun-and","timestamp":"2024-11-03T03:05:03Z","content_type":"text/html","content_length":"39498","record_id":"<urn:uuid:07e055fb-0872-402d-bb76-0bab0eb940cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00219.warc.gz"} |
Design & Analysis of Experiments (MA702)
Overview of probability, probability functions, random variables, probability distributions, Sampling theory: random samples, statistics, sampling distributions, central limit theorem, parameter
estimation, point estimation, interval estimation of means and variance, hypothesis testing, goodness of fit tests, Analysis of variance of one – way, two – way classified data, experimental designs:
CRD, RBD, LSD, factorial experiments
Douglas Montgomery, Design and Analysis of Experiments, 3rd Edition, John Wiley.
Sheldon Ross M., Introduction to Probability & Statistics for Engineers & Scientists, John Wiley
Hogg R. V., Craig A. T., Introduction to Mathematical Statistics, 4th Edition, McMillan | {"url":"https://chemical.nitk.ac.in/course/design-analysis-experiments-ma702","timestamp":"2024-11-02T01:58:32Z","content_type":"application/xhtml+xml","content_length":"33016","record_id":"<urn:uuid:07469aef-d5d4-4e6f-a479-a4cafbed36df>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00550.warc.gz"} |
Piet Hut: Book Review of Structure and Interpretation of Classical Mechanics
Foundations of Physics 32, 323-326 (2002)
Piet Hut
Institute for Advanced Study
Princeton, NJ
This is the first book I have come across that explains classical mechanics using the variational principle in such a way that there are no ambiguities in either the presentation or the notation.
The resultant leap toward clarity and precision is likely to influence a new generation of physics students, opening their eyes to the beauty of classical mechanics. Quite likely, some of these
students will be inspired to find newer and deeper interpretations of classical mechanics. In turn, such deeper insight may well lead to a deeper understanding of quantum mechanics. After all,
classical mechanics is only a limiting case of quantum mechanics, and the variational principle is the main bridge between the two.
Many physics students are introduced to Hamiltonians and Lagrangians only in their first course on quantum mechanics. It often comes as a surprise to them to hear that both Hamilton and Lagrange
were long dead by 1925, and that in fact their methods had been around for most of the nineteenth century. Rushing on to quantum mechanics this way turns the historical development on its head,
thus obscuring the most fundamental relationship between classical and quantum mechanics, and making the latter seem even more mysterious and weird than it really is. As a result, students come
away with the impression that quantum mechanics uses tools that are fundamentally different from those used in classical mechanics. Furthermore, it seems as if one has to go through great trouble
to extract something resembling classical equations of motion from approximate treatments of expectation values of quantum mechanical variables.
I wish I had been introduced to classical mechanics through this book before learning the details of quantum mechanics. As it happened, I was born thirty years too early to read this book as a
freshman. Instead, I had to stumble my way through various mechanics representations, in a way that is not atypical of what happened to many of my colleagues.
At the end of my second year as an undergraduate at the University of Utrecht, in The Netherlands, I had completed a detailed course in classical mechanics and an introductory course, more
something like a guided tour, in quantum mechanics. I knew that after the summer I would get my first real course in quantum mechanics, and I very much looked forward to it. From the introductory
tour I had realized that the world of the quantum offered a totally different reality than the clockwork world of classical mechanics. I was very curious to see how the twain would meet: how it
could be possible that the intrinsically spontaneous, unruly and unreproducible quantum world could ever have given rise on larger scales to the staid semblance of the clockwork world of
classical mechanics.
Just around this time, an older student told me that I first might want to read about the variational principle in classical mechanics. He gave me a bound volume of course notes on Lagrangian and
Hamiltonian dynamics. I vividly remember taking the volume under my arm, walking to a nice terrace next to one of the Utrecht canals, and getting engrossed in the text. The applications were all
very familiar, only the approach was sheer magic. I saw in classical garb the equivalence of wave functions and phase interference in classical mechanics, in ways that had been discovered more
than a century before quantum mechanics! How was that possible? What did that mean? I still remember putting the volume down, after a couple hours, and staring at the way the sunlight, reflected
from the waves on the water, was painting ever-shifting caustics inside the arches of the stone bridge next to my terrace. No better metaphor could have presented itself in front of my eyes. So
this was how the world is hanging together, I thought, through waves and phases and interference, even on the classical level!
For several days I felt the impact of this revelation. Now I really wanted to know what was going on. Soon I went through the library in search of books on the variational principle in classical
mechanics. I found several heavy tomes, borrowed them all, and started on the one that looked most attractive. Alas, it didn't take long for me to realize that there was quite a bit of
hand-waving involved. There was no clear definition of the procedure used for computing path integrals, let alone for the operations of differentiating them in various ways, by using partial
derivatives and/or using an ordinary derivative along a particular path. And when and why the end points of the various paths had to be considered fixed or open to variation also was unclear,
contributing to the overall confusion.
Working through the canned exercises was not very difficult, and from an instrumental point of view, my book was quite clear, as long as the reader would stick to simple examples. But the
ambiguity of the presentation frustrated me, and I started scanning through other, even more detailed books. Alas, nowhere did I find the clarity that I desired, and after a few months I simply
gave up. Like generations of students before me, I reluctantly accepted the dictum that `you should not try to understand quantum mechanics, since that will lead you astray for doing physics',
and going even further, I also gave up trying to really understand classical mechanics! Psychological defense mechanisms turned my bitter sense of disappointment into a dull sense of
Given this background, I was delighted to read the preface of Structure and Interpretation of Classical Mechanics. The very first quote from Jacobi, cited by Arnold, showed that the authors must
have had similar struggles as I did as a student: ``In almost all textbooks, even the best, this [variational] principle is presented so that it is impossible to understand.'' To see a book that
sets out on the very first page to attack this problem is heart-warming.
I hope that a new generation of students will stumble upon Structure and Interpretation of Classical Mechanics, and thereby will make a better and deeper acquaintance with both classical and
quantum mechanics than I did. The central feature of this new book is that it has a built-in guarantee that there is zero ambiguity in any of the 1200 or so mathematical equations that appear.
This sounds like a remarkable if not impossible claim. The key here is that every mathematical expression is in one-to-one correspondence with an equivalent expression written in computer code.
Each of these expressions has been compiled and tested out by the authors and by many of their students over several years, while the authors were developing and simultaneously teaching the
material in this book.
The key to using a computer language is summarized by the authors, also in the preface: ``The requirement that the computer be able to interpret any expression provides strict and immediate
feedback as to whether the expression is correctly formulated. Experience demonstrates that interaction with the computer in this way uncovers and corrects many deficiencies in understanding.''
It may seem strange that a piece of computer code can capture such notions as evaluating various types of variations of functions containing partial derivatives along either arbitrary or
prescribed paths in phase space. When expressed in more traditional programming languages encountered in physics or in business, such as Fortran or C or C++, writing a package to model
variational mechanics would have been a very complex task, resulting in a much more opaque product. Fortunately, the authors chose a functional language, Scheme, a particularly lean Lisp-like
language, in which these ideas can be expressed in a very economical and clear way. Functional composition, the key to mathematically clarifying variational mechanics, has an exact counterpart in
the functional programming approach that is central in Scheme.
Scheme is widely available, and often for free as open-source shareware, and there are some excellent textbooks on Scheme, notably "Structure and Interpretation of Computer Programs", by H.
Abelson and G. J. Sussman (M.I.T. Press, 2nd edition, 1996). While such books will be helpful, the current book is self-contained: an appendix gives an elementary introduction to Scheme, which
suffices for the book.
One characteristic of Scheme is that it is easy to learn: the syntax of the language is far simpler than that of, say, Fortran or C (to say nothing of C++). Even so, if the reader is curious
about the full power of Scheme as well as the principles behind its design, the Abelson/Sussman book is well worth reading. The invitation to both abstract and compact thinking offered by Scheme
leaves its mark, once you start playing with it, even for a short time. In my case, as soon as I read the first edition in 1985, I was hooked on Scheme as my language of choice. And even though I
find myself forced to use other languages in collaborative projects in astrophysics, my programming style in those other languages, too, clearly reveals the inspiration I have received from
The content of the book is thoroughly modern, and covers topics from Lie series to a variety of topics in chaos theory. The authors have an impressive track record in the latter area: they were
the first ones to show the fact that the solar system is chaotic, a rather startling discovery when it was announced in 1988 (in Science, 241, 433; using a special-purpose computer, the Digital
Orrery, they found that Pluto's orbit has an inverse Lyapunov exponent of 20 Myr). Finally, all chapters in the book are interspersed with numerous helpful exercises, all of which have been
tested extensively during the six years that the authors have taught the material presented in this book to undergraduates at M.I.T. | {"url":"https://www.ias.edu/piet/publ/other/sicm","timestamp":"2024-11-02T05:53:29Z","content_type":"text/html","content_length":"62837","record_id":"<urn:uuid:f0bcfaa4-c38d-4d9c-94f2-4c14477d3f99>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00411.warc.gz"} |
The Positive Effect of Fruit Supplements on Cognitive Tasks
Question 1: A researcher was interested in the positive effect of fruit supplements on cognitive tasks. He recruited 20 participants all of whom were measured on a standardised attention task. The 20
participants were then split into 4 equal sized groups and each received a different fruit supplement for a week (placebo, blueberry, strawberry and apple). All participants were then re-measured on
the standardised attention task and change in score was calculated (a high score indicates a greater improvement). His hypotheses were: the placebo supplement would yield the lowest level of change
compared to the fruit supplements and that out of the fruit supplements the blueberry would yield the greatest change. Data can be found U:\P24111\assessment\fruit.
a. Fully describe the variables in this experiment
b. Explain why it would not be suitable to use multiple t-tests to analyse these data and state which test would be appropriate to use
c. Carry out the appropriate tests to check that the two assumptions for the test named in b have been met. Report their results, and describe what the implications are for ANOVA
d. In answering c you should have found that the data does not meet the assumptions of ANOVA. At this point we could use data transformation. Explore this as an option, would we be able to transform
the data? If so which data transformation should we use?
e. Despite the violation of assumption found above carry out the test named in b, including post-hoc tests. Formally summarise all the findings of the study relating this back to the hypotheses
f. Given that there were specific hypotheses we could have use planned comparisons rather than post-hoc tests. Draw up a table of orthogonal contrasts that would test these hypotheses
g. If in section c we found that these data did not meet all of the assumptions of ANOVA, rather than data transformation we could have decided to carry out a non-parametric test. Carry out an
appropriate non-parametric test, if appropriate include post hoc tests. Report your findings
h. What are the disadvantages of using a non-parametric test over a parametric one and in this example has the use of a non-parametric test made a difference?
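(An illustrative aside, not part of the assignment brief: the assignment is written for SPSS and the fruit data file is not reproduced here, but the same one-way analysis can be sketched with invented placeholder scores as below.)

# Sketch: homogeneity-of-variance check, one-way ANOVA, and the non-parametric alternative.
from scipy import stats

placebo    = [1.2, 0.8, 1.5, 0.9, 1.1]
blueberry  = [2.5, 3.1, 2.8, 3.4, 2.9]
strawberry = [2.0, 1.8, 2.2, 2.4, 1.9]
apple      = [1.6, 1.9, 1.4, 2.0, 1.7]

print(stats.levene(placebo, blueberry, strawberry, apple))    # assumption check
print(stats.f_oneway(placebo, blueberry, strawberry, apple))  # one-way ANOVA
print(stats.kruskal(placebo, blueberry, strawberry, apple))   # Kruskal-Wallis equivalent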
Question 2: A researcher was interested in how adults with and without movement difficulties process visual information. She recruited a group of adults with Developmental Coordination Disorder (DCD)
and a group of typically developing (TD) adults. Half of the adults in each group were given visuo-motor training for two weeks. Following this period all of the adults completed a visual processing
task (reaction time measured). Her specific research questions were: is there a visual processing deficit in DCD? And if so does a visuo-motor training programme improve these skills in the same way
as it does for typically developing adults? Data collected can be found U:\P24111\assessment\DCD
a. Name the dependent and independent variable(s) and explain which parametric test(s) you would need to carry out in order to answer the research questions
b. Run appropriate assumption tests for the parametric statistical test(s) named above. Report the necessary statistics and state what these statistics tell us.
c. Run the ANOVA report the findings which directly relate back to the research questions
After the researcher analysed the data she realised that her cohort may have a possible confounding variable of age. Although the groups had a similar mean age, the practice and non practice groups
had not been matched. Following this realisation she decided to treat age as a covariate and use ANCOVA to re-analyse her data
d. Draw a table with the mean ages of the four conditions (TD no practice, TD practice, DCD no practice, DCD practice). Give reasons as to why you think age might/might not be a covariate
e. In SPSS carry out the ANCOVA and report all of the findings. Has adding the covariate altered the conclusions we can make from the data, if so what would you conclude now?
f. What do these new results mean in terms of the initial research questions?
g. Regardless of whether the covariate was significant, report the parameter estimate for the covariate and explain what this means | {"url":"https://www.myassignmenthelp.net/question/the-positive-effect-of-fruit-supplements-on-cognitive-tasks","timestamp":"2024-11-02T08:08:12Z","content_type":"text/html","content_length":"54249","record_id":"<urn:uuid:2df9edfb-3185-42dd-aad1-3bf963529c32>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00557.warc.gz"} |
Online Sample Size Calculator: Slovin’s Formula Calculator
Sample size calculator
How to determine a sample size and other questions answered
Running a customer survey and not sure how big your random sample size should be? We'll walk through how to determine your sample size and set up your first survey.
What is a sample size?
A sample size is a total number of data points collected in a study (e.g., the number of responses to a single survey question). Identifying sample size is key to determining whether data accurately
reflects the population as a whole. A larger sample size generally means greater accuracy.
What calculator do you use?
We're big fans of CXL's brilliant calculator. Check it out here.
What is Slovin’s Formula?
Slovin’s formula is used to calculate the sample size necessary to achieve a certain confidence interval when sampling a population. This formula is used when you don’t have enough information about
a population’s behavior (or the distribution of a behavior) to otherwise know the appropriate sample size.
Slovin’s formula is written as: n = N / (1 + Ne²)
Where: n = the sample size, N = the total population, and e = the margin of error (as a decimal).
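As a small illustrative sketch of the calculation (not part of the original article):

# Slovin's formula: n = N / (1 + N * e^2), rounded up to whole respondents.
import math

def slovin(population, margin_of_error):
    return math.ceil(population / (1 + population * margin_of_error ** 2))

print(slovin(20000, 0.05))    # 393 for the calculator's default N = 20,000 at +/-5%
print(slovin(500000, 0.05))   # 400 for the 500,000-visitor example at +/-5%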
How a sample size calculator works
A calculator helps you determine the number of people you need to survey (your sample size calculation) based on the level of accuracy you want to achieve. Accuracy is determined by the following
three variables:
1. Population size
Population size is the total number of people in the audience you want to study. For example, if you want to use an on-page survey to study your website visitors, and you have 500,000 unique visitors
per month, enter “500000” as your population size.
2. Confidence level
Confidence level determines how certain you can be of your results. The industry standard for market research is a 95% confidence level, which means that if you ran the experiment 20 times, you’d get
the same results (within a certain margin of error) 19 times.
3. Margin of error
Margin of error (or confidence interval) is the amount your data would be expected to vary if you ran the survey multiple times. For example, let’s say you asked your users a simple “yes or no”
question, and 80% answered “yes” (the question itself doesn’t matter). If you had a margin of error of 5%, with a confidence level of 95%, then running this experiment 20 times would mean that 75% -
85% would answer “yes” in approximately 19 out of 20 experiments.
Lower limit: 80% - 5% = 75% Upper limit: 80% + 5% = 85%
Don’t know your numbers? If you don’t know your numbers, you can submit the form using industry standards. These default figures appear in the calculator when you load the page (Population size:
20000, Confidence rate: 95%, Margin of error: +/-5%).
Other useful resources:
The beginner's guide to website feedback
The ultimate guide to survey questions with 50+ examples for you to use
How to analyze open-ended questions [free template]
External factors that affect survey accuracy
No random sampling is a perfect representation of an entire target population, but you can get close. The following factors can impact accuracy, and understanding them will help you create better surveys.
Population size
If your population size is small, you’ll need to sample a much larger percentage of your population. As the following chart shows, once your population is large enough, boosting your sample size does little (or nothing) to increase accuracy.
Non-response errors
If your questions discourage certain segments from responding, you’ve got a built-in non-response bias. For example, if you ask clients whether they’ve ever downloaded movies illegally, the “yes”
crowd may be less likely to respond for obvious reasons—and that skews your data.
Selection errors
Selection errors occur when you sample a group of people outside the demographic you want to study (e.g., you want to study all your website visitors, but you only collect feedback from your paying
Sampling errors
Sampling errors occur when a certain demographic is overrepresented in your sample (e.g., you have a 50-50 mix of men and women, but for whatever reason, more women respond).
3 golden rules to get solid data with sample sizes
1) The bigger your sample, the better the results (up to a point).
2) A smaller margin of error requires a larger population size.
3) A higher confidence level requires a larger sample size.
4 surveys you can use to improve your business
Collecting accurate survey data can help you better understand your users and improve your products and messaging in a variety of ways.
Market research Market research is any set of techniques used to better understand a company’s target market. Companies use market research to adapt their products and messaging to market demand.
When gathering market research data, use the calculator to achieve a large enough sample size.
Customer satisfaction Customer satisfaction surveys can help identify areas for improvement. That said, the sample size isn’t super important here because the scores don’t matter nearly as much as
the reasons behind them. Study as much feedback as possible to understand what’s working and what isn’t.
Net Promoter Score Net Promoter Score (NPS) is a popular way to measure customer loyalty, ranging from -100 to 100 (the higher the number, the more they love you).
Net Promoter, Net Promoter System, Net Promoter Score, NPS, and the NPS-related emoticons are registered trademarks of Bain & Company, Inc., Fred Reichheld, and Satmetrix Systems, Inc
Post-purchase surveys Sending surveys to your customers after they’ve made a purchase, asking how you’re doing and what you can improve, will give you a good sense of how to hold onto good customers
and how to win new ones.
Read more
The best way to get good at surveys is to dive right in. Take a look at our ultimate guide to survey questions to get inspired while the knowledge is fresh!
Get started with Surveys
Over 402 million people from 180 different countries have left feedback via Hotjar Surveys.
Unlimited surveys on paid plans
Store survey responses forever | {"url":"https://www.hotjar.com/poll-survey-sample-size-calculator/","timestamp":"2024-11-13T02:33:21Z","content_type":"text/html","content_length":"218583","record_id":"<urn:uuid:e7fed739-64ca-4305-b1bd-ef47263032bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00328.warc.gz"} |
[Solved] A township is to treat 5,00,000 litres of sewage per day whi
A township is to treat 5,00,000 litres of sewage per day which has a 5-day BOD of 150 ppm. An oxidation pond is used for the purpose. The effluent can have a BOD of 15 ppm. The loading is to be 40 kg
of a 5-day BOD per hectare per day. The required area of the pond is:
This question was previously asked in
JSSC JE (Civil) Re-Exam Official Paper-II (Held On: 04 Nov, 2022)
Answer (Detailed Solution Below)
Option 4 : 1.6875 ha
Treating sewage (Q[s]) = 5,00,000 lit/day
5-day BOD of sewage = 150 ppm or 150 mg/l
Effluent BOD = 15 ppm or 15 mg/l
5-day BOD loading = 40 kg/ha/day
5-day BOD removed by oxidation pond from sewage = 150 - 15 = 135 mg/l
5-day BOD of sewage deposit in pond per day = \({135\times 500000}=67.5\times 10^6\, mg\)
5-day BOD of sewage deposit in pond per day = 67.5 kg
Required area of pond (A[req]) = \({67.5\over 40}=1.6875\, ha\)
Required area of pond (Areq) = 1.6875 ha.
Hence option (4) is correct.
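The same arithmetic as a quick sketch (values straight from the problem statement):

# Oxidation pond area from the 5-day BOD loading.
flow_l_per_day = 500_000
bod_in, bod_out = 150, 15          # mg/L
loading = 40                       # kg of 5-day BOD per hectare per day

bod_removed_kg = (bod_in - bod_out) * flow_l_per_day / 1e6   # mg/day converted to kg/day
area_ha = bod_removed_kg / loading
print(bod_removed_kg, area_ha)     # 67.5 kg/day and 1.6875 ha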
-> Candidates with a diploma in the concerned engineering branch are eligible for this post. Prepare for the exam with JSSC JE Previous Year Papers. | {"url":"https://testbook.com/question-answer/a-township-is-to-treat-500000-litres-of-sewage-p--64b133906d5f0d86a4bb7310","timestamp":"2024-11-09T10:05:16Z","content_type":"text/html","content_length":"197398","record_id":"<urn:uuid:8f63a591-9af7-4379-b7d7-349bb416c7bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00870.warc.gz"} |
Brainstorm: Chiron / Ascendant Astrology Aspects
Kaleidoscopic outlook. Iridescent vision. Wholistic outlook. Total viewpoint. How do you integrate everything you see? The doors of perception. How do you integrate what you witness? Eyewitness, with
your own eyes. How well have you tested your vision? Do you trust your own eyes? How do you integrate your environment so you can operate in the world? Resist compartmentalizing your experiences.
Resist separating and categorizing each experience. Stop putting people into categories. Let your perceptions merge and coalesce. Find the common thread. What links your interactions together? Resist
creating a separate personae for each interaction. Always be the same person, wherever you are. Stop altering your appearance depending on where you’re going. Stop altering your personality based on
who you are going to hang out with. What is your true appearance? What do you really look like? What is your true outlook? Don’t be a chameleon unless it serves a larger purpose. Come as you are.
Show your true colors. Show yourself. Help people cross the bridge of understanding, but let them meet you halfway.
Your very presence creates a bridge. You see the Rainbow Bridge. you operate from the Rainbow Bridge. Your perception is mending. The way you operate, mends the environment. Mender. Mentor. Your very
presence mends the environment. Your very presence creates a loophole. What are you doing here? You’re not supposed to be here! Did you slip through the cracks? Rejected at birth. Rejected by both
parents. Taken in by a mentor.
You have the ability to take on a multitude of roles. You take on those roles especially where you act as a mentor, guide, teacher, or healer to others. Wholistic self-presentation which allows you to
express each part of your personality. Integration of personality. Fragmented outlook. Being able to see how the world is fragmented, and being able to come up with solutions to create cohesion. An
approach to life that seeks to blend and mend disparate elements in the environment. Creating bridges to understanding in your immediate environment. Creating a bridge between yourself and others.
Are you going to help, or not? Are you going move forward through the world and help create the bridge? You are the wound that needs to be seen. You lead by example. Show your bleeding heart. Bring
out your dead. Your example is a salve. Let people come to you for your expertise. You look like an expert. You look like you have some solutions. You look like you know what’s up. You look through a
window pain. You see how the fragments create a whole. You see how the whole is made up of different facets. How can you help others integrate what they encounter? You feel torn apart by being out in
the world. There’s so much to see. There are so many different things happening at once. You can’t take it all in. You can’t process everything at once. Your perception is overwhelmed. So. many.
issues. To the cave, Chiron!
The door that nobody else will go in at, seems always to swing open widely for me.
― Clara Barton, Founder of the American Red Cross (Chiron conjunct Ascendant in Aries)
Chiron / Ascendant Aspects
1st harmonic, 1/1. This aspect comes from taking the 360 degrees of a circle and dividing by 1, then multiplying by 1. Like this, 360 / 1 = 360 x 1 = 360.
Intensity and emphasis; focus/focal point; impact. Lack of awareness. Concentration of energy. Fusion. Fusing disparate energies together.
1/20 harmonic. This aspect comes from dividing the 360 degrees of a circle by 20, then multiply by 1. Like this, 360 / 20 = 18 x 1 = 18.
1/12 harmonic. Take the 360 degrees of a circle, and divide by 12, then multiply by 1. Like this, 360 / 12 = 30 x 1 = 30 . Or, 360 / 6 = 60 / 2 = 30 (half a sextile).
The natural next step; the next thing that needs to be done; how one thing naturally leads to another; what you should be preparing for next; where things are heading, whether you want them to go
there or not. What all your hard work and preparation are for. The next big thing. Natural direction.
What you might be blind to. Something just over your shoulder, out of sight, and beyond the periphery of your vision. A blindspot. The thing that came immediately before, and is already nearly
forgotten. Efforts at memory. Reclaiming. People who don’t have your best interests at heart. Better work on some yoga moves to look over your shoulder, and behind your back.
1/10 harmonic. This aspect comes from dividing the 360 degrees of a circle by 10, then multiply by 1. Like this, 360 / 10 = 36 x 1 = 36.
1/9 harmonic. This aspect comes from dividing the 360 degrees of a circle by 9, then multiply by 1. Like this, 360 / 9 = 40 x 1 = 40.
1/8 harmonic. Take the 360 degrees of a circle and divide by 8, then multiply by 1. Like this, 360 / 8 = 45 x 1 = 45. Or, take a square, and split it in half. Like this, 360 / 4 = 90 / 2 = 45.
1/8 harmonic. This aspect comes from dividing the 360 degrees of a circle by 8, then multiply by 1. Like this, 360 / 8 = 45° x 1 = 45°. This aspect is the same as the semi-square.
Some of the posts may show the octile at 90°. This is a known error and I’m working on fixing it, but I have to edit each post individually, and that takes more time than I have right now!
1/7 harmonic. This aspect comes from dividing the 360 degrees of a circle by 7, then multiply by 1. Like this, 360 / 7 ≈ 51.43° (51° 25′ 43″).
1/6 harmonic. Take the 360 degrees of a circle and divide by 6, then multiply by 1. Like this, 360 / 6 = 60 x 1 = 60. Or, take a trine, and split it in half. Like this, 360 / 3 = 120 / 2 = 60.
Opportunities for positive change; awareness of a time to take positive advantage; opportunities that pass unnoticed without conscious effort. No, there may not be another chance. Learning to be
quicker to act on good luck. Learning not to let good things pass you by. Learning that positive experiences require effort too. Positive doesn’t mean passive.
This aspect comes from dividing the 360 degrees of a circle by 10, then multiplying by 2. Like this, 360 / 10 = 36 x 2 = 72. This aspect is the same as a quintile.
1/5 harmonic. This aspect comes from dividing the 360 degrees of a circle by 5, then multiply by 1. Like this, 360 / 5 = 72 x 1 = 72.
Creativity, talents, quirks, special skills.
2/9 harmonic. This aspect comes from dividing the 360 degrees of a circle by 9, then multiplying by 2. Like this, 360 / 9 = 40 x 2 = 80.
1/4 harmonic This aspect comes from dividing the 360 degrees of a circle by 4, then multiplying by 1. Like this, 360 / 4 = 90 x 1 = 90.
Dynamic creative tension; inner stress and tension that build until you are forced to find creative solutions. Opportunities to push your limits to greater achievement; inner restlessness and lack of
satisfaction that impel you to make changes and push yourself; you are challenged to combine these energies; applying energy to solve a problem; creative problem-solving. Internal stress and tension.
Something needs to give.
2/7 harmonic. This aspect comes from dividing the 360 degrees of a circle by 7, then multiply by 2. Like this, 360 / 7 = 51.26° x 2 = 102° 51’26”.
This aspect comes from dividing the 360 degrees of a circle by 10, then multiply by 3. Like this, 360 / 10 = 36 x 3 = 108.
This aspect comes from dividing the 360 degrees of a circle by 9 , then multiply by 3. Like this, 360 / 9 = 40 x 3 = 120.
Same as the trine.
1/3 harmonic. This aspect comes from dividing the 360 degrees of a circle by 3, then multiply by 1. Like this, 360 / 3 = 120 x 1 = 120.
Flow. Ease. Status quo. Circumstances and characteristics that are not changing. Things that are stable; maintaining the status quo; peace and tranquility; idleness; good times; use these gifts to
your advantage. Negatively, unimpeded flow. Events that cannot be stopped. Unstoppable, sweeping movement. Lack of self-discipline. Laziness and self-indulgence. Times that are easy, lacking
challenge and opportunities for growth. Taking situations for granted. Keeping everything like it is. If it ain’t broke, don’t fix it.
5/8 harmonic. Take the 360 degrees of a circle and divide by 8, then multiply by 5. Like this, 360 / 8 = 45 x 5 = 135.
Blockages. Physical manifestation of problems and accidents. Inner psychological blockages that manifest as outer events.
2/5 harmonic. This aspect comes from dividing the 360 degrees of a circle by 5, then multiply by 2. Like this, 360 / 5 = 72 x 2 = 144.
5/12 harmonic. Take the 360 degrees of a circle and divide by 12, then multiply by 5. Like this, 360 / 12 = 30 x 5 = 150.
Adjusting expectations; tempering over-enthusiasm and over-reaching; keeping yourself in check; adjusting to meet changing circumstances; doing what you need to do to make things work. Adjusting your
attitude. Adjusting your perspective. Making-do in uncomfortable situations. Developing flexibility and adaptability. Bending, until you almost break. A see-saw, with too much weight on one side, and
not enough on the other side.
3/7 harmonic This aspect comes from dividing the 360 degrees of a circle by 7, then multiply by 3. Like this, 360 / 7 = 51.26° x 3 = 154° 17’ 08”.
4/9 harmonic. This aspect comes from dividing the 360 degrees of a circle by 9, then multiply by 4. Like this, 360 / 9 = 40 x 4 = 160.
11/24 harmonic. This aspect comes from dividing the 360 degrees of a circle into 15-degree equal segments. Like this, 360 / 24 = 15, then multiply by 11. Like this, 360 / 24 = 15 x 11 = 165.
This aspect does not make sense to me at the moment, but here’s the link.
I would think a quin-decile would take the 360 degrees of a circle, divide by 10, then multiply by 5. Like this 360 / 10 = 36 x 5 = 180. Which is exactly the same as an opposition, because it is a 5/
10 harmonic, which reduces to 1/2 harmonic. 360 / 2 = 180 x 1 = 180.
2nd harmonic, 1/2. This aspect comes from dividing the 360 degrees of a circle by 2, then multiply by 1. Like this, 360 / 2 = 180 x 1 = 180.
Challenges from outside; awareness of other options; opportunities that require you to stretch outside your known boundaries; an opportunity to stand back and reflect; seeing yourself from a
distance; putting yourself in someone else’s shoes; seeing yourself through someone else’s eyes. Gaining awareness. Seeing a situation from a new vantage point. Getting distance, and gaining perspective.
This aspect comes from dividing the 360 degrees of a circle by 7, then multiply by 4.
This aspect comes from dividing the 360 degrees of a circle by 7, then multiply by 5.
This aspect comes from dividing the 360 degrees of a circle by 7, then multiply by 6.
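All of the aspect angles above come from the same arithmetic. As a quick illustration (my own addition, not part of the original page), a few lines of Python reproduce the whole table:

def harmonic_angle(n, k=1):
    # k/n harmonic: divide the 360-degree circle by n, multiply by k, fold into 0-180
    angle = (360.0 / n) * k % 360.0
    return 360.0 - angle if angle > 180.0 else angle

for n, k in [(12, 1), (10, 1), (8, 1), (7, 1), (6, 1), (5, 1), (4, 1), (3, 1), (12, 5), (2, 1)]:
    print(f"{k}/{n} harmonic: {harmonic_angle(n, k):.2f} degrees")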
Chiron #2060
• Cheiron
• The Wounded Healer
• Alternative medicine
• Bridge
• Chronic pain
• Coping
• Damage / Hit points
• Exhaustion
• Fatigue
• Guide
• Hands, healing hand (Cheiron means 'hand')
• Healing & Wholeness
• Healing psychic wounds of rejection, restoring spirit to body (from the Centaur Report)
• Identity crisis?
• Injury
• Key
• Learning
• Loophole
• Maverick
• Mentor / Teacher - and having your pupils turn against you, even if it's unintentional on their part
• Putting the 'Chiron' in chronic pain
• Quest
• Reaching the limit of your abilities & realizing it's time to move on from a particular path, calling, or profession
• Rejection by your parents
• 'Right to die' issues
• Unsolvable problem
• Worn out
• Wounded
Many of the Chiron keywords above generously supplied by Zane B. Stein, from his book, Essence and Application: A View from Chiron.
• Centaur
• Shot by his pupil, Heracles, with an arrow poisoned with the blood of the Hydra
• He was shot in either the foot or the thigh
• Spent the rest of his life trying to alleviate the pain of the wound
• He willingly gave up immortality to die and escape chronic pain
• Associate of the Centaurs: Pholus, Nessus
• Father of Euippe
• Grandson of Uranus
• Husband of Chariklo
• Son of Saturn (Cronus, Chronos, Kronos)
• Son of Philyra
• Teacher of heroes (Asclepius, Aristaeus, Actaeon, Achilles, Jason, Medus)
• Victim of the Hydra. The arrow that hit Chiron was poisoned with the Hydra's blood (each time you cut one of the Hydra's heads off, two new heads grow).
Chiron is both a minor planet and a comet. It is located between the orbits of Saturn and Uranus. Chiron takes about 50 years to make one complete cycle through all the signs of the zodiac. Chiron is in Libra for the shortest amount of time – about 1.5 years; and in Aries the longest – about 8 years.
• Appearance, Demeanor, & Mannerisms
• Force of Personality
• Your Affect On Others
• What's Right in Front of Your Face
• How You Take Initiative
• How You Show Up & the Circumstances of Your Birth
• Approach
• First Impressions
• First Focus & Immediate Pressure
• Immediate environment
• Right in front of your face
• Leading
• Initiating
• Putting yourself out there
• Taking the first step on a new path | {"url":"https://astrofix.net/brainstorm-chiron-ascendant-astrology-aspects/","timestamp":"2024-11-02T09:44:52Z","content_type":"text/html","content_length":"400244","record_id":"<urn:uuid:5dacec49-287d-4b19-994f-40af3b185b40>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00329.warc.gz"} |
What Is a Degree in Math and Science? - Grain d'pirate
What’s Diploma in Science and Math?
The term may sound unfamiliar, but it is a vital element of any profession built on science and math.
Math, or calculus as we often call it, was developed out of engineering practice. It has a long history in engineering, but much of its value was recognized after World War II, when it was found that recurring mathematical problems could be solved with a consistent approach.
A person may well fail in a job application without a thorough knowledge of the basics of mathematics. It is therefore important to earn at least a certificate in math, or better a degree, since with one you are much more likely to find a job.
There are many topics one needs to know before entering this field, and the important point is that to master them you have to understand a great deal of mathematics.
It is important to master fundamentals such as counting, addition, subtraction, multiplication, division, trigonometry, geometry, calculus, probability, statistics, algebra, and linear algebra. These are some of the basic concepts that students need to understand in order to succeed in math and science.
Besides knowing the concepts, it is crucial to have a sense of direction and control. A student should be able to apply math in different circumstances and applications: in algebra, for example, one should be able to solve equations, and if one wants to control machines, one needs to be able to do the calculations.
Although there are many institutes aimed at teaching math, it is not easy to get into a math profession, and it is not simple to find a good institute that offers the right course. In fact, before one can learn on one's own, one must have a solid foundation; a student can go through an online course and then apply, or simply seek help from a teacher.
{"url":"http://www.graindpirate.fr/what-is-a-degree-in-math-and-science/","timestamp":"2024-11-09T17:09:47Z","content_type":"text/html","content_length":"37602","record_id":"<urn:uuid:c4d69669-f60c-4233-bb13-390d1545784d>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00142.warc.gz"}
Suppose X and Y represent two different school populations where X > Y and X and Y must be greater than 0. Which of the following expressions is the largest? Explain why. Show all work necessary.
X² + Y²
X² − Y²
2(X + Y)
(X + Y)²
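A worked comparison (added here; the original page leaves the solution out): (X + Y)² = X² + 2XY + Y², which is greater than X² + Y² because 2XY > 0, and X² + Y² is in turn greater than X² − Y². Finally, (X + Y)² > 2(X + Y) whenever X + Y > 2, which certainly holds for school populations. So (X + Y)² is the largest of the four expressions.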
{"url":"https://cpep.org/mathematics/289951-suppose-x-and-y-represent-two-different-school-populations-where-x-y-a.html","timestamp":"2024-11-12T03:34:39Z","content_type":"text/html","content_length":"24114","record_id":"<urn:uuid:81f383b4-c490-4df4-9c18-b98efc4981f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00167.warc.gz"}
Higgs mechanism and renormalization group flow: are they compatible?
Michael Dütsch
January 31, 2015
Usually the Lagrangian of a model for massive vector bosons is derived in a geometric way by the Higgs mechanism. We investigate whether this geometric structure is maintained under the
renormalization group (RG) flow. Using the framework of Epstein-Glaser renormalization, we find that the answer is 'no', if the renormalization mass scale(s) are chosen in a way corresponding to the
minimal subtraction scheme. This result is derived for the $U(1)$-Higgs model to 1-loop order. On the other hand we give a model-independent proof that physical consistency, which is a weak form of
BRST-invariance of the time-ordered products, is stable under the RG-flow.
open access link
'Quantum Mathematical Physics - A Bridge between Mathematics and Physics', Editors Felix Finster, Johannes Kleiner, Christian Röken, Jürgen Tolksdorf, Birkhäuser Verlag (2016)
article file | {"url":"https://www.lqp2.org/node/1159","timestamp":"2024-11-08T00:46:47Z","content_type":"text/html","content_length":"16511","record_id":"<urn:uuid:a9cdec76-593b-4f05-8e78-992af4537177>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00182.warc.gz"} |
10.4 Node Voltage Method
Chapter 10 – DC Network Analysis
The node voltage method of analysis solves for unknown voltages at circuit nodes in terms of a system of KCL equations. This analysis looks strange because it involves replacing voltage sources with equivalent current sources. Also, resistor values in ohms are replaced by equivalent conductances in siemens, G = 1/R. The siemens (S) is the unit of conductance, having replaced the obsolete mho unit; in any event, S = Ω⁻¹ = mho.
Method for Node Voltage Calculation
We start with a circuit having conventional voltage sources. A common node E[0] is chosen as a reference point. The node voltages E[1] and E[2] are calculated with respect to this point.
Replacing voltage sources and associated series resistors with equivalent current sources and parallel resistors yields the modified circuit. Substitute resistor conductances in siemens for
resistance in ohms.
I1 = E1/R1 = 10/2 = 5 A
I2 = E2/R5 = 4/1 = 4 A
G1 = 1/R1 = 1/2 Ω = 0.5 S
G2 = 1/R2 = 1/4 Ω = 0.25 S
G3 = 1/R3 = 1/2.5 Ω = 0.4 S
G4 = 1/R4 = 1/5 Ω = 0.2 S
G5 = 1/R5 = 1/1 Ω = 1.0 S
The parallel conductances (resistors) may be combined by the addition of the conductances. Though, we will not redraw the circuit. The circuit is ready for the application of the node voltage method.
GA = G1 + G2 = 0.5 S + 0.25 S = 0.75 S
GB = G4 + G5 = 0.2 S + 1 S = 1.2 S
Deriving a general node voltage method, we write a pair of KCL equations in terms of unknown node voltages V[1] and V[2] this one time. We do this to illustrate a pattern for writing equations by inspection.
GAE1 + G3(E1 - E2) = I1 (1)
GBE2 - G3(E1 - E2) = I2 (2)
(GA + G3 )E1 -G3E2 = I1 (1)
-G3E1 + (GB + G3)E2 = I2 (2)
The coefficients of the last pair of equations above have been rearranged to show a pattern. The sum of conductances connected to the first node is the positive coefficient of the first voltage in
equation (1). The sum of conductances connected to the second node is the positive coefficient of the second voltage in equation (2). The other coefficients are negative, representing conductances
between nodes. For both equations, the right-hand side is equal to the respective current source connected to the node. This pattern allows us to quickly write the equations by inspection. This leads
to a set of rules for the node voltage method of analysis.
Node Voltage Rules:
• Convert voltage sources in series with a resistor to an equivalent current source with the resistor in parallel.
• Change resistor values to conductances.
• Select a reference node(E[0])
• Assign unknown voltages (E[1])(E[2]) … (E[N])to the remaining nodes.
• Write a KCL equation for each node 1, 2, … N. The positive coefficient of the first voltage in the first equation is the sum of conductances connected to the node. The coefficient for the second voltage in the second equation is the sum of conductances connected to that node. Repeat for the coefficient of the third voltage, third equation, and other equations. These coefficients fall on a diagonal.
• All other coefficients for all equations are negative, representing conductances between nodes. In the first equation, the second coefficient is the conductance from node 1 to node 2, and the third coefficient is the conductance from node 1 to node 3. Fill in negative coefficients for other equations. (A short code sketch after this list shows the same bookkeeping.)
• The right-hand side of the equations is the current source connected to the respective nodes.
• Solve the system of equations for unknown node voltages.
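The rules above amount to bookkeeping over the branch conductances. As a rough sketch (my own illustration, not from the original text; the helper name and the branch format are assumptions), the matrix can be assembled like this:

import numpy as np

def node_matrix(n_nodes, branches):
    # branches: (node_i, node_j, conductance), with node 0 as the reference node
    G = np.zeros((n_nodes, n_nodes))
    for i, j, g in branches:
        if i > 0:
            G[i - 1, i - 1] += g      # sum of conductances at node i (diagonal)
        if j > 0:
            G[j - 1, j - 1] += g      # sum of conductances at node j (diagonal)
        if i > 0 and j > 0:
            G[i - 1, j - 1] -= g      # negative conductance between nodes
            G[j - 1, i - 1] -= g
    return G

# The two-node example below: GA from node 1 to ground, G3 between nodes 1 and 2, GB from node 2 to ground
print(node_matrix(2, [(1, 0, 0.75), (1, 2, 0.4), (2, 0, 1.2)]))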
Example of Node Voltage Method
Example: Set up the equations and solve for the node voltages using the numerical values in the above figure.
(0.5+0.25+0.4)E1 -(0.4)E2= 5
-(0.4)E1 +(0.4+0.2+1.0)E2 = -4
(1.15)E1 -(0.4)E2= 5
-(0.4)E1 +(1.6)E2 = -4
E1 = 3.8095
E2 = -1.5476
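As a quick cross-check (my own addition, not part of the original text), the same two equations can be solved numerically with NumPy:

import numpy as np

# Coefficient matrix and right-hand sides from the two equations above
A = np.array([[1.15, -0.40],
              [-0.40, 1.60]])
b = np.array([5.0, -4.0])

E1, E2 = np.linalg.solve(A, b)
print(round(E1, 4), round(E2, 4))  # 3.8095 -1.5476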
The solution of two equations can be performed with a calculator, or with Octave (not shown). The solution is verified with SPICE based on the original schematic diagram with voltage sources.
Though, the circuit with the current sources could have been simulated.
V1 11 0 DC 10
V2 22 0 DC -4
r1 11 1 2
r2 1 0 4
r3 1 2 2.5
r4 2 0 5
r5 2 22 1
.DC V1 10 10 1 V2 -4 -4 1
.print DC V(1) V(2)
v(1) v(2)
3.809524e+00 -1.547619e+00
One more example. This one has three nodes. We do not list the conductances on the schematic diagram. However, G[1] = 1/R[1], etc.
There are three nodes to write equations for by inspection. Note that the coefficients are positive for equation (1) E[1], equation (2) E[2], and equation (3) E[3]. These are the sums of all
conductances connected to the nodes. All other coefficients are negative, representing a conductance between nodes. The right-hand side of the equations is the associated current source, 0.136092 A
for the only current source at node 1. The other equations are zero on the right-hand side for a lack of current sources. We are too lazy to calculate the conductances for the resistors on the
diagram. Thus, the subscripted G’s are the coefficients.
(G1 + G2)E1 -G1E2 -G2E3 = 0.136092
-G1E1 +(G1 + G3 + G4)E2 -G3E3 = 0
-G2E1 -G3E2 +(G2 + G3 + G5)E3 = 0
We are so lazy that we enter reciprocal resistances and sums of reciprocal resistances into the octave “A” matrix, letting octave compute the matrix of conductances after “A=”. The initial entry line
was so long that it was split into three rows. This is different than previous examples. The entered “A” matrix is delineated by starting and ending square brackets. Column elements are
space-separated. Rows are “new line” separated. Commas and semicolons are not needed as separators. Though, the current vector at “b” is semicolon-separated to yield a column vector of currents.
octave:12> A = [1/150+1/50 -1/150 -1/50
> -1/150 1/150+1/100+1/300 -1/100
> -1/50 -1/100 1/50+1/100+1/250]
A =
0.0266667 -0.0066667 -0.0200000
-0.0066667 0.0200000 -0.0100000
-0.0200000 -0.0100000 0.0340000
octave:13> b = [0.136092;0;0]
b =
   0.136092
   0
   0
octave:14> x=A\b
x =
   24.000
   17.655
   19.310
Note that the “A” matrix diagonal coefficients are positive, and that all other coefficients are negative.
The solution as a voltage vector is at “x”. E[1] = 24.000 V, E[2] = 17.655 V, E[3] = 19.310 V. These three voltages compare to the previous mesh current and SPICE solutions to the unbalanced bridge
problem. This is no coincidence, for the 0.13609 A current source was purposely chosen to yield the 24 V used as a voltage source in that problem.
• Given a network of conductances and current sources, the node voltage method of circuit analysis solves for unknown node voltages from KCL equations.
• See the rules above for details in writing the equations by inspection.
• The unit of conductance G is the siemens S. Conductance is the reciprocal of resistance: G = 1/R | {"url":"https://www.technocrazed.com/10-4-node-voltage-method","timestamp":"2024-11-14T08:40:27Z","content_type":"text/html","content_length":"87967","record_id":"<urn:uuid:93392f77-d6af-4ea8-a2f4-0d7053d8cbf7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00723.warc.gz"} |
Trapezoid Side Calculator: Find the Unknown Side Length of a Trapezoid
If you need to find the length of one of the sides of a trapezoid, our free online trapezoid side calculator can help. Simply input the length of the one base and the length of the middle line, and
our calculator will find the length of the unknown side.
Trapezoid Side Formulas
The formula to calculate the length of a side of a trapezoid is:
s = \sqrt{m^2 + \frac{(b - a)^2}{4}}
The formula to calculate the length of a base of a trapezoid is:
b = 2m - a
where s is the length of the unknown side, m is the length of the middle line, and a and b are the lengths of the two parallel sides of the trapezoid.
Example Problems
Here are some examples of how to use the trapezoid side formula to find the length of a side of a trapezoid.
Example 1
Find the length of side s in a trapezoid where b = 8, a = 6, and m = 5.
We can use the trapezoid side formula to solve for s:
s = \sqrt{m^2 + \frac{(b - a)^2}{4}}
s = \sqrt{5^2 + \frac{(8 - 6)^2}{4}} = \sqrt{26}
s ≈ 5.10
Therefore, the length of side s is approximately 5.10 units.
Example 2
Find the length of side b in a trapezoid where a = 6, and m = 7.
b = 2m - a
b = 2*7 - 6
b = 8
Therefore, the length of side b is 8 units.
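A small Python helper implementing the two formulas exactly as given on this page (the function names are my own; treat this as a sketch of the page's formulas rather than a general trapezoid solver):

import math

def trapezoid_side(a, b, m):
    # unknown side from the page's formula: s = sqrt(m^2 + (b - a)^2 / 4)
    return math.sqrt(m**2 + (b - a)**2 / 4)

def trapezoid_base(a, m):
    # other base from the midline formula: b = 2m - a
    return 2 * m - a

print(round(trapezoid_side(6, 8, 5), 2))  # 5.1 (Example 1)
print(trapezoid_base(6, 7))               # 8   (Example 2)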
Use our free online calculator to find the length of the unknown side of a trapezoid. Simply enter the length of the one side and the length of the middle line, and our calculator will use the
trapezoid side formula to find the length of the unknown side.
Using our calculator can save you time and prevent calculation errors. Try it out today! | {"url":"https://owlcalculator.com/geometry/trapezoid-side-calculator","timestamp":"2024-11-06T04:34:53Z","content_type":"text/html","content_length":"529177","record_id":"<urn:uuid:bd8195a7-24ac-44b1-bdbe-e7f8030aede3>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00604.warc.gz"} |
Math Archives | Trogramming
Draw Julia set animations by simply defining a 2d line through the complex plane. A Julia set is simply a recursive function or iterative process that takes the form z = z^2 + c, where the initial value of z is a complex number representing a point in 2d space, and c is any complex number. The z value is updated until […] | {"url":"https://trogramming.com/blog/tag/math/","timestamp":"2024-11-04T21:03:25Z","content_type":"text/html","content_length":"53779","record_id":"<urn:uuid:1d360b3e-5e94-439f-98b3-d9cc5941a39e>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00256.warc.gz"}
Decimal to HEX
Free Decimal to HEX Converter - Convert Decimal to Hexadecimal online
Decimal to Hexadecimal Converter is an easy-to-use decimal to hexadecimal converter, it can convert a number from base 10 (decimal) to hexadecimal (base 16).
The different bases of numbers
The decimal system, or base 10, is the most commonly used number system. It uses 10 digits, from 0 to 9. The hexadecimal system, or base 16, is a slightly less common number system that uses 16
digits, from 0 to 9 and A to F.
To convert a decimal number to hexadecimal, we first have to understand the different bases of numbers. In the decimal system, each digit has a value of 10^n, where n is the position of the digit
from right to left. For example, the number 1234 can be written as:
1 * 10^3 + 2 * 10^2 + 3 * 10^1 + 4 * 10^0 = 1000 + 200 + 30 + 4 = 1234
Similarly, in the hexadecimal system, each digit has a value weighted by 16^n. However, since hexadecimal needs more than 10 digits, the letters A–F are used to represent the values 10–15. For example, the hexadecimal numeral 1234 represents the decimal value:
1 * 16^3 + 2 * 16^2 + 3 * 16^1 + 4 * 16^0 = 4096 + 512 + 48 + 4 = 4660
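As an added illustration (not part of the original article), the positional rule above is all a program needs. In Python the conversion can be done with the built-ins, or with an explicit repeated-division loop:

def to_hex(n):
    # convert a non-negative integer to a hex string by repeated division by 16
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(digits[n % 16])
        n //= 16
    return "".join(reversed(out))

print(to_hex(123))    # 7B
print(hex(123))       # 0x7b (built-in)
print(int("7B", 16))  # 123 (back to decimal)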
Now that we understand how numbers in different bases are written, we can proceed to convert between them.
What is a decimal to hex converter?
Converting from decimal to hexadecimal is a common task in computer science and programming. While there are many ways to do this, a decimal to hex converter is an easy-to-use tool that can quickly
and easily convert a number from base 10 to base 16.
When working with decimal numbers, we usually use the base 10 numbering system. This means that each digit in a number can be any value from 0 to 9. However, in some cases we need to convert a decimal number into another number system, such as binary or hexadecimal.
Hexadecimal, or "hex" for short, uses a base-16 numbering system. This means that each digit in a hexadecimal number can be any value from 0 to 15. To represent values greater than 9, we use the letters A through F: the value 10 is written as A, 11 as B, 12 as C, 13 as D, 14 as E, and 15 as F.
To convert a decimal number to hexadecimal using a conversion tool, simply enter the decimal number into the converter and click "convert". The converter will then output the equivalent
hexadecimal number.
How to use a decimal to hex converter
To convert a decimal number to hexadecimal, you can use an online decimal to hexadecimal converter, such as the one at DecimalToHex.net.
To use the converter, simply enter the decimal number you want to convert into the input field and click the "Convert" button. The converter will then show you the hexadecimal equivalent of your
decimal number.
You can also use the converter to convert a hexadecimal number back to a decimal number. To do this, simply enter the hexadecimal number in the input field and click the "Convert" button. The
converter will then show you the decimal equivalent of your hexadecimal number.
How to convert decimal to hexadecimal
To convert a decimal to a hexadecimal number, you can use the built-in basic conversion tool in the Windows calculator. For example, to convert the decimal number 123 to its hexadecimal equivalent,
open Windows Calculator and enter "123" in the input field. Then click the "Basic Conversion" button and select "Hex" from the drop-down menu. The calculator will then display the number "7B" in the
output field, which is the hexadecimal equivalent of 123.
You can also use an online decimal to hexadecimal converter such as ConvertCalculator.com. To use this converter, simply enter a decimal number in the input field and click the "Convert" button.The
converter then displays the number in both decimal and hexadecimal form.
What are the benefits of converting from decimal to hexadecimal?
Converting from decimal to hexadecimal has many advantages. Hexadecimal is a more concise way of representing binary numbers than decimals. It is also easier to read and write hexadecimal numbers
than binary numbers. In addition, the conversion between hexadecimal and binary is very simple, which is why it is often used as a shorthand for binary data.
The decimal to hex converter is a great way to quickly and easily convert between the two number bases. Whether you need to convert for a specific purpose or just want to know how to do it, this tool
is the perfect solution. Give it a try and see how easy it is to use!
David Miller
CEO / Co-Founder
Our mission is to provide 100% free online tools useful for different situations. Whether you need to work with text, images, numbers or web tools, we've got you covered. We are committed to
providing useful and easy-to-use tools to make your life easier. | {"url":"https://toolswad.com/decimal-to-hex","timestamp":"2024-11-12T00:14:54Z","content_type":"text/html","content_length":"82057","record_id":"<urn:uuid:71711c52-c621-4ff5-95b5-34c4b042fd3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00258.warc.gz"} |
Each SquareMaze object represents a randomly-generated square maze and its solution. More...
#include "maze.h"
SquareMaze ()
No-parameter constructor. More...
void makeMaze (int width, int height)
Makes a new SquareMaze of the given height and width. More...
bool canTravel (int x, int y, int dir) const
This uses your representation of the maze to determine whether it is possible to travel in the given direction from the square at coordinates (x,y). More...
void setWall (int x, int y, int dir, bool exists)
Sets whether or not the specified wall exists. More...
vector< int > solveMaze ()
Solves this SquareMaze. More...
PNG * drawMaze () const
Draws the maze without the solution. More...
PNG * drawMazeWithSolution ()
This function calls drawMaze, then solveMaze; it modifies the PNG from drawMaze to show the solution vector and the exit. More...
Each SquareMaze object represents a randomly-generated square maze and its solution.
(Note that by "square maze" we mean a maze in which each cell is a square; the maze itself need not be a square.)
SquareMaze::SquareMaze ( ) inline
No-parameter constructor.
Creates an empty maze.
void SquareMaze::makeMaze ( int width,
int height
Makes a new SquareMaze of the given height and width.
If this object already represents a maze it will clear all the existing data before doing so. You will start with a square grid (like graph paper) with the specified height and width. You will select
random walls to delete without creating a cycle, until there are no more walls that could be deleted without creating a cycle. Do not delete walls on the perimeter of the grid.
Hints: You only need to store 2 bits per square: the "down" and "right" walls. The finished maze is always a tree of corridors.)
width The width of the SquareMaze (number of cells)
height The height of the SquareMaze (number of cells)
bool SquareMaze::canTravel ( int x,
int y,
int dir
) const
This uses your representation of the maze to determine whether it is possible to travel in the given direction from the square at coordinates (x,y).
For example, after makeMaze(2,2), the possible input coordinates will be (0,0), (0,1), (1,0), and (1,1).
• dir = 0 represents a rightward step (+1 to the x coordinate)
• dir = 1 represents a downward step (+1 to the y coordinate)
• dir = 2 represents a leftward step (-1 to the x coordinate)
• dir = 3 represents an upward step (-1 to the y coordinate)
You can not step off of the maze or through a wall.
This function will be very helpful in solving the maze. It will also be used by the grading program to verify that your maze is a tree that occupies the whole grid, and to verify your maze solution.
So make sure that this function works!
x The x coordinate of the current cell
y The y coordinate of the current cell
dir The desired direction to move from the current cell
whether you can travel in the specified direction
void SquareMaze::setWall ( int x,
int y,
int dir,
bool exists
Sets whether or not the specified wall exists.
This function should be fast (constant time). You can assume that in grading we will not make your maze a non-tree and then call one of the other member functions. setWall should not prevent cycles
from occurring, but should simply set a wall to be present or not present. Our tests will call setWall to copy a specific maze into your implementation.
x The x coordinate of the current cell
y The y coordinate of the current cell
dir Either 0 (right) or 1 (down), which specifies which wall to set (same as the encoding explained in canTravel). You only need to support setting the bottom and right walls of every square
in the grid.
exists true if setting the wall to exist, false otherwise
vector< int > SquareMaze::solveMaze ( )
Solves this SquareMaze.
For each square on the bottom row (maximum y coordinate), there is a distance from the origin (i.e. the top-left cell), which is defined as the length (measured as a number of steps) of the only path
through the maze from the origin to that square.
Select the square in the bottom row with the largest distance from the origin as the destination of the maze. solveMaze() returns the winning path from the origin to the destination as a vector of
integers, where each integer represents the direction of a step, using the same encoding as in canTravel().
If multiple paths of maximum length exist, use the one with the destination cell that has the smallest x value.
Hint: this function should run in time linear in the number of cells in the maze.
a vector of directions taken to solve the maze
PNG * SquareMaze::drawMaze ( ) const
Draws the maze without the solution.
First, create a new PNG. Set the dimensions of the PNG to (width*10+1,height*10+1), where height and width were the arguments to makeMaze. Blacken the entire topmost row and leftmost column of
pixels, except the entrance (1,0) through (9,0). For each square in the maze, call its maze coordinates (x,y). If the right wall exists, then blacken the pixels with coordinates ((x+1)*10,y*10+k) for
k from 0 to 10. If the bottom wall exists, then blacken the pixels with coordinates (x*10+k, (y+1)*10) for k from 0 to 10.
The resulting PNG will look like the sample image, except there will be no exit from the maze and the red line will be missing.
a PNG of the unsolved SquareMaze
PNG * SquareMaze::drawMazeWithSolution ( )
This function calls drawMaze, then solveMaze; it modifies the PNG from drawMaze to show the solution vector and the exit.
Start at pixel (5,5). Each direction in the solution vector corresponds to a trail of 11 red pixels in the given direction. If the first step is downward, color pixels (5,5) through (5,15) red. (Red
is 0,1,0.5,1 in HSLA). Then if the second step is right, color pixels (5,15) through (15,15) red. Then if the third step is up, color pixels (15,15) through (15,5) red. Continue in this manner until
you get to the end of the solution vector, so that your output looks analogous to the picture described above.
Make the exit by undoing the bottom wall of the destination square: call the destination maze coordinates (x,y), and whiten the pixels with coordinates (x*10+k, (y+1)*10) for k from 1 to 9.
a PNG of the solved SquareMaze
The documentation for this class was generated from the following files: | {"url":"https://courses.grainger.illinois.edu/cs225/sp2018/doxygen/mp7/classSquareMaze.html","timestamp":"2024-11-02T04:27:20Z","content_type":"application/xhtml+xml","content_length":"20823","record_id":"<urn:uuid:547d7ef8-7c24-46d0-ade2-7631dd5e6212>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00297.warc.gz"} |
SICP Exercises Chapter 1 Part 1
I started reading through "Structure and Interpretation of Computer Programs" and thought it would be a good idea to publish my progress here. Here are my solutions for the first ten exercises in the
Exercise 1.1: Below is a sequence of expressions. What is the result printed by the interpreter in response to each expression? Assume that the sequence is to be evaluated in the order in which it is presented.
10
; 10
(+ 5 3 4)
; 12
(- 9 1)
; 8
(/ 6 2)
; 3
(+ (* 2 4) (- 4 6))
; 6
(define a 3)
; 3
(define b (+ a 1))
; 4
(+ a b (* a b))
; 19
(= a b)
; false
(if (and (> b a) (< b (* a b)))
    b
    a)
; 4
(cond ((= a 4) 6)
((= b 4) (+ 6 7 a))
(else 25))
; 16
(+ 2 (if (> b a) b a))
; 6
(* (cond ((> a b) a)
((< a b) b)
(else -1))
(+ a 1))
; 16
Exercise 1.2: Translate the following expression into prefix form.
$\frac{5 + 4 + \left(2 - \left(3 - \left(6 + \frac{4}{5}\right)\right)\right)}{3(6 - 2)(2 - 7)}$
(/ (+ 5 4 (- 2 (- 3 (+ 6 (/ 4 5)))))
(* 3 (- 6 2) (- 2 7)))
Exercise 1.3: Define a procedure that takes three numbers as arguments and returns the sum of the squares of the two larger numbers.
(define (sqr a) (* a a))
(define (caterpillar x y z)
(- (+ (sqr x) (sqr y) (sqr z))
(sqr (min x y z))))
(define (bookworm x y z)
(if (> x y)
(+ (sqr x) (sqr (max y z)))
(+ (sqr y) (sqr (max x z)))))
(define (ladybug x y z)
(define (sumsqr a b) (+ (sqr a) (sqr b)))
(max (sumsqr x y) (sumsqr x z) (sumsqr y z)))
Exercise 1.4: Observe that our model of evaluation allows for combinations whose operators are compound expressions. Use this observation to describe the behavior of the following procedure:
(define (a-plus-abs-b a b)
((if (> b 0) + -) a b))
Answer: If b is positive, add it to a. Otherwise, subtract it from a. This gives us the sum of a and the absolute value of b.
Exercise 1.5: Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order evaluation or normal-order evaluation. He defines the following two
(define (p) (p))
(define (test x y)
(if (= x 0)
      0
      y))
Then he evaluates the expression
(test 0 (p))
What behavior will Ben observe with an interpreter that uses applicative-order evaluation? What behavior will he observe with an interpreter that uses normal-order evaluation? Explain your answer.
(Assume that the evaluation rule for the special form if is the same whether the interpreter is using normal or applicative order: The predicate expression is evaluated first, and the result
determines whether to evaluate the consequent or the alternative expression.)
Answer: An interpreter that uses applicative-order evaluation would evaluate the parameter (p) in the expression (test 0 (p)) before it is expanded. Trying to evaluate the expression (p) would cause
an infinite loop.
In normal-order evaluation, the expression (test 0 (p)) is expanded to (if (= 0 0) 0 (p)) in which (p) isn't evaluated because the predicate (= 0 0) evaluates to true.
Exercise 1.6: Alyssa P. Hacker doesn't see why if needs to be provided as a special form. "Why can't I just define it as an ordinary procedure in terms of cond?" she asks. Alyssa's friend Eva Lu Ator
claims this can indeed be done, and she defines a new version of if:
(define (new-if predicate then-clause else-clause)
(cond (predicate then-clause)
(else else-clause)))
Eva demonstrates the program for Alyssa:
(new-if (= 2 3) 0 5)
; 5
(new-if (= 1 1) 0 5)
; 0
Delighted, Alyssa uses new-if to rewrite the square-root program:
(define (sqrt-iter guess x)
(new-if (good-enough? guess x)
          guess
          (sqrt-iter (improve guess x)
                     x)))
What happens when Alyssa attempts to use this to compute square roots? Explain.
Answer: The recursion stop condition does not work as intended as new-if evaluates else-clause even when the predicate evaluates to false.
Exercise 1.7: The good-enough? test used in computing square roots will not be very effective for finding the square roots of very small numbers. Also, in real computers, arithmetic operations are
almost always performed with limited precision. This makes our test inadequate for very large numbers. Explain these statements, with examples showing how the test fails for small and large numbers.
An alternative strategy for implementing good-enough? is to watch how guess changes from one iteration to the next and to stop when the change is a very small fraction of the guess. Design a
square-root procedure that uses this kind of end test. Does this work better for small and large numbers?
Answer: The given test good-enough? will not be very effective for finding the square roots of very small numbers because of the fixed tolerance value 0.001, e.g. (sqrt 0.000004) would return
0.03129261341049664. Also, most computers have native support for precision up to 64 bits but variable-precision can be achieved in software with libraries such as GMP.
(define (square x) (* x x))
(define (average x y) (/ (+ x y) 2))
(define (improve guess x) (average guess (/ x guess)))
(define (good-enough? guess x) (< (abs (- (square guess) x)) 0.001))
(define (sqrt-iter guess x)
(if (good-enough? guess x)
      guess
      (sqrt-iter (improve guess x) x)))
(define (sqrt x) (sqrt-iter 1.0 x))
(sqrt 0.000004)
; 0.03129261341049664 :(
(sqrt 9999999999999)
; 3162277.660168221 :)
(define (good-enough? previous-guess guess x)
(< (/ (abs (- guess previous-guess)) x) 0.001))
(define (sqrt-iter previous-guess guess x)
(if (good-enough? previous-guess guess x)
      guess
      (sqrt-iter guess (improve guess x) x)))
(define (sqrt x) (sqrt-iter 0 1.0 x))
(sqrt 0.000004)
; 0.0020000000000000235 :)
(sqrt 9999999999999)
; 1.0
As you can see above, our solution works better for small numbers but not so much for big numbers.
Exercise 1.8: Newton’s method for cube roots is based on the fact that if y is an approximation to the cube root of x, then a better approximation is given by the value
$\frac{x/y^{2} + 2y}{3}$
Use this formula to implement a cube-root procedure analogous to the square-root procedure. (In Section 1.3.4 we will see how to implement Newton’s method in general as an abstraction of these
square-root and cube-root procedures.)
(define (improve-cube guess x)
(/ (+ (/ x guess guess) (* 2 guess)) 3))
(define (cbrt-iter previous-guess guess x)
(if (good-enough? previous-guess guess x)
      guess
      (cbrt-iter guess (improve-cube guess x) x)))
Exercise 1.9: Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its
argument by 1.
(define (+ a b)
(if (= a 0) b (inc (+ (dec a) b))))
(define (+ a b)
(if (= a 0) b (+ (dec a) (inc b))))
Using the substitution model, illustrate the process generated by each procedure in evaluating (+ 4 5). Are these processes iterative or recursive?
(+ 4 5)
(inc (+ (dec 4) 5))
(inc (inc (+ (dec 3) 5)))
(inc (inc (inc (+ (dec 2) 5))))
(inc (inc (inc (inc (+ (dec 1) 5)))))
(inc (inc (inc (inc 5))))
(inc (inc (inc 6)))
(inc (inc 7))
(inc 8)
This process is clearly recursive, it grows with the recursion and rolls back when it hits the stop condition.
(+ 4 5)
(+ 3 6)
(+ 2 7)
(+ 1 8)
(+ 0 9)
This process is iterative thanks to applicative order evaluation.
Exercise 1.10: The following procedure computes a mathematical function called Ackermann’s function.
(define (A x y)
(cond ((= y 0) 0)
((= x 0) (* 2 y))
((= y 1) 2)
(else (A (- x 1) (A x (- y 1))))))
What are the values of the following expressions?
(A 1 10)
(A 2 4)
(A 3 3)
Consider the following procedures, where A is the procedure defined above:
(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))
Give concise mathematical definitions for the functions computed by the procedures f, g, and h for positive integer values of n. For example, (k n) computes $5n^2$ .
(A 1 10) ; 1024
(A 2 4) ; 65536
(A 3 3) ; 65536
(define (f n) (A 0 n)) ; computes 2n
(define (g n) (A 1 n)) ; computes 2^n unless n is 0
(define (h n) (A 2 n)) ; computes 2^(2^(2^(2...(n times)))) unless n is 0
{"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/lvmbdv/sicp-exercises-chapter-1-part-1-lel","timestamp":"2024-11-04T06:15:50Z","content_type":"text/html","content_length":"125686","record_id":"<urn:uuid:313e8c21-4b20-4931-8023-907e82200be7>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00787.warc.gz"}
Data Science Topics
This page contains most of the topics I've covered in a self-set curriculum as I study the field of data science (with a strong focus on machine learning). Bullets without a link are topics that I
plan to get to, but will not post an article on in the immediate future. Links labeled "coming soon" are posts currently in progress.
Machine Learning
The General ML Framework
• Building machine learning pipelines
Machine Learning Models
Classification algorithms are used when you have a dataset of observations where we'd like to use the features associated with an observation to predict its class.
Example: Predict the type of flower when provided information on sepal length, sepal width, color, petal width, and petal length.
Regression algorithms are used when you have a dataset of observations where you'd like to use the features to predict a continuous output.
Example: Predict the price of a house using the following features: sq ft, number of rooms, zip code, age of house, school district.
• Gaussian Process Regression
Clustering is a popular technique to find groups or segments in your data that are similar. This is an unsupervised learning algorithm in the sense that you don't train the algorithm and give it
examples for what you'd like it to do, you just let the clustering algorithm explore the data and provide you with new insights.
• Density-based spatial clustering of applications with noise (DBSCAN)
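As a minimal sketch (my own, not part of the curriculum; the dataset and parameter values are arbitrary), scikit-learn makes this "let the algorithm explore the data" workflow only a few lines long:

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Toy data: three blobs in 2D
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# DBSCAN finds dense regions without being told how many clusters to expect
labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X)
print(set(labels))  # cluster ids, with -1 marking noise points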
Dimensionality Reduction
When we're building machine learning models, sometimes we deal with datasets with well over 1,000 or even 10,000 dimensions. While this allows us to account for many features, these features are
often redundant. Ideally, due to the curse of dimensionality, we'd like to limit our data to capture the true signal in the data and ignore the noise. Dimensionality reduction is one technique to
reduce the dimension of our feature-space while maintaining the maximum amount of information. Dimensionality reduction is also very convenient for visualizing higher-dimensional data sets in two or
three dimensions. This paper provides a great overview of the different techniques available for dimensionality reduction.
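A minimal sketch of the idea (my own, assuming scikit-learn and its bundled digits dataset):

from sklearn.decomposition import PCA
from sklearn.datasets import load_digits

# 64-dimensional digit images squeezed down to 2 dimensions for visualization
X = load_digits().data
X_2d = PCA(n_components=2).fit_transform(X)
print(X.shape, "->", X_2d.shape)  # (1797, 64) -> (1797, 2)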
Neural Networks
Neural networks are one of the most popular approaches to machine learning today, achieving impressive performance on a large variety of tasks. Often referred to as the "universal function approximator", this approach is flexible enough to learn a wide variety of tasks.
• Foundation
• Training
• Convolutional neural networks
□ Image segmentation
☆ Instance image segmentation
□ Object detection
☆ Two stage methods: Faster R-CNN
☆ Evaluating object detection models
□ Facial recognition
• Recurrent neural networks
□ Gated recurrent units: Introducing intentional memory
□ Long short term memory networks: Learning what to remember and what to forget
• Transfer learning
□ Image recognition
□ Natural language processing
• One-shot learning
Reinforcement Learning
Reinforcement learning is an approach to machine learning where agents are rewarded to accomplish some task. "Good" behavior is reinforced via a reward, so this approach can more realistically be
considered a method of reward maximization. This book is the canonical resource for learning RL.
Machine Learning Applications
• Building a recommendation system with collaborative filtering
Natural Language Processing
• Preprocessing text data for NLP
Data Visualization
The following links are external links to useful resources. At this time, I haven't written any blog posts on data visualizations but wanted to save a few external posts for future reference.
Data Acquisition and Wrangling | {"url":"https://www.jeremyjordan.me/data-science/","timestamp":"2024-11-05T03:08:27Z","content_type":"text/html","content_length":"25481","record_id":"<urn:uuid:8ec48b1f-9e14-4527-a0c7-8dbff0e3688b>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00864.warc.gz"} |
What is the lottery winner paradox? - Shop Smart Guides
The lottery winner paradox refers to the statistical phenomenon where multiple people winning the lottery at the same time is actually more likely than a single person winning the lottery multiple
times consecutively. This seems counterintuitive, since the odds of winning the lottery even once are extremely low. However, when looking at the probabilities involved, having multiple winners of
the same lottery drawing is mathematically more probable.
What are the odds of winning the lottery?
Lottery odds vary depending on the specific game, but they are always extremely long odds. For example, the odds of winning the Mega Millions jackpot are 1 in 302,575,350. The odds of winning
Powerball are 1 in 292,201,338. State lotteries have odds in the same range. Even scratch off instant lottery tickets have odds that usually start around 1 in 4 or 5 and go up to around 1 in
3,000,000 at the highest prize levels.
The key takeaway is that the odds of winning any lottery jackpot are extremely low for any single player. The chance of randomly selecting the correct sequence of numbers is incredibly rare. While
someone does always eventually win the lottery, the probability that any individual ticket will win is miniscule.
What does probability theory say?
Probability theory governs random events like lottery drawings. The basic principles of probability dictate that it is actually more likely for multiple people to each win the lottery only once than
it is for one person to win the lottery multiple times consecutively. The simple explanation is that the odds of any one person winning more than once are astronomically low, while the odds of
different people each winning once are higher in aggregate.
To illustrate with a simple example, say there is a 1 in 10 chance of something happening. If that event is attempted once, there is a 1 in 10 chance it happens. But if the same person attempts it twice in a row, the probability of succeeding both times becomes only 1 in 100. However, if two people each attempt it once, the probability remains 1 in 10 for each of them – combined, there is roughly a 2 in 10 chance that at least one of them succeeds.
In the lottery example, the odds of winning might be 1 in 300 million. For one person to win twice in a row, the odds are 1 in 90 quadrillion (300 million x 300 million). However, the odds that at least one of two people, each playing once, wins the lottery are far higher at about 1 in 150 million (two chances at 1 in 300 million).
Examples of the paradox
There are several real world examples that illustrate the counterintuitive nature of the lottery winner paradox:
• In 1993 the Spanish Christmas lottery had 110 second-prize winners, compared to just 1 first prize winner. The odds of multiple people winning second prize was much higher than one person winning
first prize multiple drawings in a row.
• In 2009, the Bulgarian lottery had the same winning sequence come up in two consecutive drawings. This was an extremely rare event, with odds of 1 in 4.2 million.
• The UK National Lottery has had several cases of multiple winners for the same drawing, while having no one win the jackpot more than once.
These examples demonstrate that is is very uncommon for one person to win the lottery jackpot multiple times, but relatively more likely for multiple winners to split the jackpot in a single drawing.
Why does this seem counterintuitive?
The lottery winner paradox seems to go against common sense. One might expect a single person winning the lottery twice to be no less likely than two different people each winning it once. However, probability theory shows the opposite is true. There are a few reasons for this:
• Our brains tend to underestimate aggregations of probability and overestimate the likelihood of rare events happening consecutively.
• We tend to think of outlandish events like winning the lottery as being predictable or destiny, even though they are random.
• We put more emotional weight and significance on one person winning repeatedly, compared to dispersed wins.
• We underestimate just how insanely unlikely it is to win the lottery multiple times, compared to multiple people winning it once.
In summary, the reason it seems so counterintuitive is due to both quirks in human psychology as well as a lack of true appreciation for just how unlikely consecutive lottery wins really are.
Can this paradox be explained mathematically?
Yes, the lottery winner paradox can be explained mathematically. First, define the following variables:
• P(A) = The probability of a single person winning the lottery once
• P(B) = The probability of a single person winning the lottery twice
• P(C) = The probability that at least one of two people, each playing once, wins the lottery
Now we can express the probabilities mathematically like so:
• P(A) = 1 in 300 million
• P(B) = P(A) x P(A) = 1 in 90 quadrillion
• P(C) = P(A) x 2 people = 1 in 150 million
This shows that P(C) > P(B), demonstrating mathematically why it is more likely for multiple people to each win the lottery once than for one person to win multiple times.
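A quick numeric check of these figures (my own illustration, using the rounded 1-in-300-million odds from the text):

p = 1 / 300_000_000          # odds of a single ticket winning

p_twice = p * p              # one person winning two drawings in a row
p_either = 1 - (1 - p) ** 2  # at least one of two single-ticket players winning

print(f"1 in {1 / p_twice:.3g}")   # about 1 in 9e+16 (90 quadrillion)
print(f"1 in {1 / p_either:.3g}")  # about 1 in 1.5e+08 (150 million)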
Can this apply to real world situations beyond the lottery?
Yes, the principles at work in the lottery winner paradox can be generalized to many real world situations involving probability. Any time there is a very unlikely event that can happen repeatedly,
it will almost always be more likely for that event to be distributed across different people or examples versus concentrated in just one. Some examples include:
• DNA matches – It is extremely unlikely for one person’s DNA to match multiple crime scenes, but more probable for different people’s DNA to each match once.
• Viral social media posts – A single post going viral repeatedly is less likely than different posts each going viral once.
• Catastrophic mechanical failures – One plane having multiple failures is less likely than different planes each having one failure.
In general, whenever multiple instances of a highly unlikely event are under consideration, probability theory states we should expect a distribution across different people/objects rather than
multiple recurrences in one example.
How can this information be useful?
Understanding the lottery winner paradox can help overcome some common cognitive biases and improve decision making. Some of the key benefits include:
• Avoiding the gambler’s fallacy – Knowing consecutive unlikely events don’t become more likely can prevent risky behavior.
• Improving risk assessments – Properly accounting for aggregations of probability leads to better risk analysis.
• Counteracting coincidence bias – Makes us less likely to interpret common coincidences as significant.
• Understanding regression to the mean – Unlikely events are usually partially due to random chance and less likely to recur.
In summary, appreciating the counterintuitive nature of the lottery winner paradox can help us think more critically about probability and make better forecasts about what random events are likely to
occur in the real world.
The lottery winner paradox illustrates a fascinating quirk of probability theory – that multiple people each winning the lottery once is actually more likely than one person winning multiple times
consecutively. This highly counterintuitive result is due to the incredibly long odds of winning the lottery, making back-to-back wins astronomically unlikely compared to dispersed wins. While not
intuitive, the principles governing this paradox apply to many real world situations involving aggregations of unlikely events. Appreciating and properly applying the logic of the lottery winner
paradox can help us overcome cognitive biases and make better, more rational predictions about the probability of rare events occurring. | {"url":"https://www.shopsmartguides.com/what-is-the-lottery-winner-paradox/","timestamp":"2024-11-06T08:14:30Z","content_type":"text/html","content_length":"96932","record_id":"<urn:uuid:a7ce98f6-c4e1-4a10-88b6-8c731874d67b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00185.warc.gz"} |
In category theory, a coequalizer (or coequaliser) is a generalization of a quotient by an equivalence relation to objects in an arbitrary category. It is the categorical construction dual to the equalizer.
A coequalizer is a colimit of the diagram consisting of two objects X and Y and two parallel morphisms f, g : X → Y.
More explicitly, a coequalizer of the parallel morphisms f and g can be defined as an object Q together with a morphism q : Y → Q such that q ∘ f = q ∘ g. Moreover, the pair (Q, q) must be universal
in the sense that given any other such pair (Q′, q′) there exists a unique morphism u : Q → Q′ such that u ∘ q = q′. This information can be captured by the following commutative diagram:
As with all universal constructions, a coequalizer, if it exists, is unique up to a unique isomorphism (this is why, by abuse of language, one sometimes speaks of "the" coequalizer of two parallel morphisms).
It can be shown that a coequalizing arrow q is an epimorphism in any category.
• In the category of sets, the coequalizer of two functions f, g : X → Y is the quotient of Y by the smallest equivalence relation ~ such that for every x ∈ X, we have f(x) ~ g(x).^[1] In
particular, if R is an equivalence relation on a set Y, and r[1], r[2] are the natural projections (R ⊂ Y × Y) → Y then the coequalizer of r[1] and r[2] is the quotient set Y / R. (See also:
quotient by an equivalence relation.) A short computational sketch of this construction for finite sets is given after the list below.
• The coequalizer in the category of groups is very similar. Here if f, g : X → Y are group homomorphisms, their coequalizer is the quotient of Y by the normal closure of the set
$S = \{\, f(x)g(x)^{-1} \mid x \in X \,\}$
• For abelian groups the coequalizer is particularly simple. It is just the factor group Y / im(f – g). (This is the cokernel of the morphism f – g; see the next section).
• In the category of topological spaces, the circle object S^1 can be viewed as the coequalizer of the two inclusion maps from the standard 0-simplex to the standard 1-simplex.
• Coequalizers can be large: There are exactly two functors from the category 1 having one object and one identity arrow, to the category 2 with two objects and one non-identity arrow going between
them. The coequalizer of these two functors is the monoid of natural numbers under addition, considered as a one-object category. In particular, this shows that while every coequalizing arrow is
epic, it is not necessarily surjective.
• Every coequalizer is an epimorphism.
• In a topos, every epimorphism is the coequalizer of its kernel pair.
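As a concrete illustration of the first example above (coequalizers of finite sets): the coequalizer of f, g : X → Y is the quotient of Y by the smallest equivalence relation identifying f(x) with g(x), which a union–find structure computes directly. The Python function below is our own illustration, not part of any standard library.
def coequalizer(X, Y, f, g):
    parent = {y: y for y in Y}              # union-find: each element of Y starts in its own class

    def find(y):
        while parent[y] != y:
            parent[y] = parent[parent[y]]   # path compression
            y = parent[y]
        return y

    for x in X:                             # impose the generating relations f(x) ~ g(x)
        a, b = find(f(x)), find(g(x))
        if a != b:
            parent[a] = b

    classes = {}                            # group Y into equivalence classes
    for y in Y:
        classes.setdefault(find(y), set()).add(y)
    Q = [frozenset(c) for c in classes.values()]
    q = {y: next(c for c in Q if y in c) for y in Y}   # the coequalizing map q : Y -> Q
    return Q, q

# Example: X = {0, 1}, Y = {1, ..., 5}; f and g force 1 ~ 2 and 3 ~ 4.
Q, q = coequalizer([0, 1], {1, 2, 3, 4, 5}, {0: 1, 1: 3}.get, {0: 2, 1: 4}.get)
print(Q)   # three classes: {1, 2}, {3, 4}, {5}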
Special cases
In categories with zero morphisms, one can define a cokernel of a morphism f as the coequalizer of f and the parallel zero morphism.
In preadditive categories it makes sense to add and subtract morphisms (the hom-sets actually form abelian groups). In such categories, one can define the coequalizer of two morphisms f and g as the
cokernel of their difference:
coeq(f, g) = coker(g – f).
A stronger notion is that of an absolute coequalizer, this is a coequalizer that is preserved under all functors. Formally, an absolute coequalizer of a pair of parallel arrows f, g : X → Y in a
category C is a coequalizer as defined above, but with the added property that given any functor F : C → D, F(Q) together with F(q) is the coequalizer of F(f) and F(g) in the category D. Split
coequalizers are examples of absolute coequalizers.
External links
• Interactive Web page, which generates examples of coequalizers in the category of finite sets. Written by Jocelyn Paine. | {"url":"https://www.knowpia.com/knowpedia/Coequalizer","timestamp":"2024-11-05T13:57:35Z","content_type":"text/html","content_length":"83417","record_id":"<urn:uuid:ddd3bf5b-caaf-4ba8-baf8-fd91b13cb8cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00405.warc.gz"} |
Like everybody else, you too can be unique. Just keep shuffling
The first take-home lesson of this note is that you too can be unique. You’ll have to keep shuffling to get there, but it is an attainable goal.
Several years ago it dawned on me that the number of possible ways to order or permute the cards in a standard deck of size $52$ was inconceivably large. Of course it was — and still is — $52!$.
That’s easy enough to scribble down (or even surpass spectacularly) without understanding just how far we are from familiar territory.
Let’s start with something smaller: the number of possible ways to order or permute just the hearts is $13! = 6,\!227,\!020,\!800$. That’s about what the world population was in 2002. So back then if
somebody could have made a list of all possible ways to arrange those $13$ cards in a row, there would have been enough people on the planet for everyone to get one such permutation.
Had a joker been thrown in too, it wouldn’t have worked out so well. Even today, with the population of the planet presumed to hover around 7 billion, there would have to be some sharing of the
permutations on the list. In fact, since $14!$ is about 87 billion, it seems safe to predict that it will be a very long time indeed before the world’s population is large enough so that everyone
gets just one such ordering.
Let’s put this in a musical context. It was also a decade ago, in the spring of 2002, that the Queens of the Stone Age recorded their Songs for the Deaf album. The standard release lists 13 tracks,
but there is also a hidden 14th track at the end. They could have issued each person on earth their own personal copy, with their own personal track order, and also exhausted all of the possibilities
in the process, assuming they still finished with that hidden track.
Adele’s recent 21 album, however, only has 11 tracks so, noting that $11! = 39,\!916,\!800$, she’d have had to settle for Poland or California if she wanted to achieve the same effect on both counts.
That’s something to think about it the next time you hit shuffle on your favourite music player.
The number of possible ways to order all the red cards in a deck is $26!$ which is about $4 \times 10^{26}$. How big is that? It’s certainly bigger than the number of grains of sand in Brighton, or
Britain, or all of the beaches on earth for that matter.
You can be 100% sure that the compilers of the 26-track early Kinks set didn’t actually consider all possible track orders. To do so would have required making a list four times as long as a list of
$10^{26}$ items. There simply isn’t enough paper, or computer memory. As anyone who has even compiled such a songlist knows, they probably decided on openers and closers and used something like
chronological order for the rest.
Now consider taking out the four Aces from a deck, and putting the remaining hearts together in some order on the left followed by the other 26 cards in some order on the right. That can be done in
over $10^{50}$ ways which exceeds the number of atoms on Earth.
As for playing with the full deck, note that $52!$ is about $8 \times 10^{67}$, which in the great scheme of things isn’t so far from $10^{80}$, the current estimate for the number of atoms in the universe.
Needless to say, nobody’s ever explicitly considered all of the options here either. Likewise, “the people in the office on the afternoon of Friday, September 3rd” over at Sub Pop, when coming up
with the precise order for The Sub Pop List of the Top 52 Tracks Sub Pop Released in the ’90s.
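If you want to check these magnitudes for yourself, a few lines of Python will reproduce them exactly (the comparison with $10^{80}$ atoms is, of course, only as good as that rough estimate):
import math
for n in (13, 14, 26, 52):
    print(f"{n}! = {math.factorial(n):.3e}")
print("52! / 10^80 =", math.factorial(52) / 10**80)   # about 8e-13, i.e. 52! is roughly 8 x 10^67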
What does this all mean? Well, if you were to hit “shuffle” (not allowing repeats) with a playlist consisting of those 52 tracks, it could be argued that you’d almost certainly hear a set of music
that nobody has ever heard before. The same applies to the 52-track version of The World of Nat King Cole.
For a deck of cards, it means that there are far more shuffled states than have ever been written down. Likewise, the totality of all deck orders that have ever been achieved with actual decks in the
history of the world is a very thin set within the set of all possible deck orders.
A well-shuffled deck, such as the one displayed here, which is far from being in any “obvious” or recognisable order, is probably unique in the sense that nobody else has ever come up with it before.
((I recall just using an order that resulted from numerous shuffles of an already seemingly well-jumbled deck))
As we’ve been saying all along, you too can be unique. Just keep shuffling. You’ll get there.
The other take-home lesson today is that you can shuffle till the cows come home, but you’ll still miss the vast majority of the possibilities. Or as Hamlet once said, “There are more things in
heaven and earth, Horatio, than are dreamt of in your philosophy.”
5 Responses to “Like everybody else, you too can be unique. Just keep shuffling”
1. Neil Calkin
I like to point out to my discrete maths students that when I ask them to toss a coin a hundred times, and record the sequence of heads and tails, and then reduce it to just the number of heads
and tails, they are taking it from an outcome which has most likely never been seen before, and never will be again, to a situation in which the class will almost certainly have at least two
students with the same outcome.
I find that it helps make the distinction between a discrete sample space and events.
□ Rich
I’m afraid I take issue with “As for playing with the full deck, note that 52! is about 8×10^67 , which in the great scheme of things isn’t so far from 10^80 , the current estimate for the
number of atoms in the universe.”
To my mind, 10^67 is massively massively massively far from 10^80.
☆ Christian Perfect
Logarithmically they’re fairly close… what’s a factor of $1 \frac{1}{4}$ trillion between friends? | {"url":"https://aperiodical.com/2012/05/like-everybody-else-you-too-can-be-unique-just-keep-shuffling/","timestamp":"2024-11-05T23:36:19Z","content_type":"text/html","content_length":"46865","record_id":"<urn:uuid:2a91fa09-a557-4d85-be0f-a4a65096c2a4>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00867.warc.gz"} |
Algebra 2 Online Credit Recovery
The Algebra 2 Credit Recovery course builds on the mathematical proficiency and reasoning skills developed in Algebra 1 and Geometry to lead students into advanced algebraic work. The course
emphasizes the concept of functions throughout. Sandwiched between short forays into probability and statistics is a thorough treatment of linear, quadratic, higher-degree polynomial, exponential,
logarithmic, and trigonometric functions, with emphasis on analysis, problem solving, and graphing. Toward the end of the course, an introduction to sequences and series is presented in preparation
for future work in mathematics.
Semester 1
Unit 1: Probability Distributions
Unit 2: Data Gathering and Analysis
Unit 3: Systems of Linear Equations
Unit 4: Systems of Linear Inequalities
Unit 5: Radical Expressions
Unit 6: Complex Numbers
Unit 7: Polynomials and Factoring
Unit 8: Solving Polynomial Equations
Unit 9: Polynomial Functions
Unit 10: Rational Expressions
Semester 2
Unit 1: Exponential Functions
Unit 2: Logarithms
Unit 3: Basics of Trigonometry
Unit 4: Trigonometric Functions and Identities
Unit 5: Graphs of Sinusoidal Functions
Unit 6: More Function Types
Unit 7: Linear-Quadratic Systems
Unit 8: Sequences
Unit 9: Series
Unit 10: Counting and Probability | {"url":"https://courses.keystoneschoolonline.com/Algebra-2-Online-Credit-Recovery_3?offering=10","timestamp":"2024-11-14T20:21:55Z","content_type":"text/html","content_length":"33486","record_id":"<urn:uuid:987a0f9c-57e9-426a-a018-ae5e555bb326>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00379.warc.gz"} |
How to Generate Random Numbers in Ruby
Ruby has a Random class that can generate pseudo-random numbers. The reason for pseudo is that it produces a deterministic sequence of bits which approximate true randomness. You can generate
integers, floats, and even binary strings.
I've always found myself referring to the docs to understand various ways Ruby lets you generate random numbers and thought it'd be nice to write a cookbook-style post listing the important recipes.
So here it goes.
Random Number Generation in Ruby
The simplest way to generate random numbers in Ruby is to use the Kernel#rand method.
rand # 0.47578359467975184
rand 10 # 2
rand 4.5 # 3
rand 1.2..3.5 # 3.3037237197517415
rand 5...8 # 6
What do all these arguments mean? Let's study the Random class for a deeper investigation, since the Random::rand class method works similarly to the Kernel#rand method, returning a random number
based on the argument.
There are three different versions of this method.
• Calling Random.rand without providing any arguments will return a random float number between 0 and 1 (not including 1).
Random.rand # 0.8722060064922107
• Calling Random.rand(max) by providing the max value will generate a random number between 0 and max (but not including max).
Random.rand(10) # 7
Random.rand(10) # 3
Random.rand(14) # 2
Random.rand(1.5) # 0.008421002905374453
• Calling Random.rand(Range) with a Range will generate a random number within that range. If you use .., the range will include the end value, and if you use ..., it will exclude the end value.
# can generate 2, 3, or 4
Random.rand(2..4) # 3
# can generate 2 or 3
Random.rand(2...4) # 2
# A float range
Random.rand(2.1...4.4) # 2.35435574571588
The same method is also available on an instance of Random, so you could do Random.new.rand(5) and so on.
Should You Use Random or Random.new?
To be honest, I don't know. While researching for this post, I came across this excellent answer on Stack Overflow from Marc-André Lafortune:
In most cases, the simplest is to use rand or Random.rand.
Creating a new random generator each time you want a random number is a really bad idea. If you do this, you will get the random properties of the initial seeding algorithm which are atrocious
compared to the properties of the random generator itself.
- Stack Overflow
He goes on to suggest:
If you use Random.new, you should thus call it as rarely as possible, for example once as MyApp::Random = Random.new and use it everywhere else.
Sound advice.
The next day, right before I was going to publish this post, I stumbled across this post on Stack Overflow, where Schwern suggests the exact opposite approach:
Random.new.rand is better than rand because each separate instance of Random will be using a different seed (and thus a different psuedo-random pattern), whereas every call to rand is using the
same seed.
If your code is using rand to generate random numbers, the attacker can assume that any random number is using the same seed and the same sequence. Then they can more quickly gather observations
to guess at the seed.
If each part of your code is using its own Random.new, now the attacker has to figure out which random number goes with which seed. This makes it harder to build a sequence of random numbers, and
gives the attacker less numbers to work with.
- Stack Overflow
Which also sounds like a good advice. From the profiles of both Marc and Schwern, it seems like they both know what they're talking about. So I'm confused. Is there a consensus or best practice to
choose one or the other?
To get some clarity, I posted a question on the Ruby Subreddit, but the answers were... unsatisfactory?
If you're reading this post and know the definitive best practice, please let me know in the comments or by emailing me, and I'll update the post.
Anyway, let's continue!
How to Generate Secure Random Numbers?
Often you may need to generate secure random numbers which are useful for generating session keys in HTTP cookies.
While the Random class is suitable for many scenarios, it is not recommended for situations where cryptographic security is a concern, as the predictability of pseudo-random number generators makes
them vulnerable to certain types of attacks.
To generate secure random numbers, use the `securerandom` gem (make sure to install the gem first with gem install securerandom).
require "securerandom"
SecureRandom.alphanumeric # "tpEnoWgScSJRU3YB"
SecureRandom.base64 # "0jHnJ7Yx5oTW0OY+YKgUog=="
SecureRandom.hex # "b51372ee8b93eb3e1f0035d9300c3e97"
SecureRandom.rand # 0.6053942880507039
SecureRandom.urlsafe_base64 # "OTHNscnomrNjjT0g_dzpdw"
SecureRandom.uuid # "f6f54bd8-fc5a-483f-8909-05428dea2290"
The numbers generated by SecureRandom are not easily predictable, even if an attacker knows previous values generated.
Check out the documentation to learn more.
How to Generate a Random Binary String in Ruby?
The bytes(size) class method returns a random binary string. To control the length of the generated string, use the size argument.
Random.bytes(5) # "\xFFT\f\xE0\x0F"
Random.bytes(10) # ":\x84q>\xD1\x15G\xBA\xAA\xF4"
If you don't provide the size, Ruby will throw an error.
bytes is also available as an instance method.
Random.new.bytes(8) # "\xF0j\xFBa\xCC\x1C\xCF\x12"
How to Get the Seed Value Used by a Random Generator Instance?
Use the seed class method to get the seed value used to initialize the random generator. This is useful if you want to initialize another random number generator with the same state as the first, at
a later time.
Random.seed #=> 1234
prng1 = Random.new(Random.seed)
prng1.seed #=> 1234
prng1.rand(100) #=> 47
Random.rand(100) #=> 47
The second generator will provide the same sequence of numbers.
How to Generate Random Numbers in Pre-Defined Formats?
The Random::Formatter module formates generated random numbers in many manners. To use it, simply require it. It makes several methods available as Random's instance as well as module methods.
require "random/formatter"
Random.hex # "1aed0c631e41be7f77365415541052ee"
Random.base64(4) # "bsQ3fQ=="
Random.alphanumeric # "TmP9OsJHJLtaZYhP"
Random.uuid # "f14e0271-de96-45cc-8911-8910292a42cd"
For a complete reference, check out the documentation of the Random::Formatter.
How to Initialize Random Class with Seed Values?
To initialize the Random class, you can provide a seed value. You can use the Kernel#srand method to generate this seed, or can manually supply one.
srand may be used to ensure repeatable sequences of pseudo-random numbers between different runs of the program. This is helpful in testing. You can make your tests deterministic by setting the seed
to a known value.
srand 1234 # seed with 1234
[ rand, rand ] # => [0.1915194503788923, 0.6221087710398319]
srand 1234 # seed with 1234
[ rand, rand ] # => [0.1915194503788923, 0.6221087710398319]
srand 5678 # seed with 5678
[ rand, rand ] # => [0.4893269800108977, 0.05933244265166393]
An important thing to keep in mind is that the random number generator becomes deterministic if you use the same seed value. That is, different instances will return the same values.
r1 = Random.new(1234)
r1.rand # 0.1915194503788923
r1.rand # 0.6221087710398319
r2 = Random.new(1234)
r2.rand # 0.1915194503788923
r2.rand # 0.6221087710398319
When you don't provide a seed value, Ruby uses an arbitrary seed value generated by the new_seed class method.
Random.new_seed # 281953773930427386731290692710425966835
Random.new_seed # 105614143647114073473001171940625552466
Since Ruby takes care of using a new seed value each time, I suggest you initialize the Random instances without providing an explicit seed value.
That's a wrap. I hope you found this article helpful and you learned something new.
As always, if you have any questions or feedback, didn't understand something, or found a mistake, please leave a comment below or send me an email. I reply to all emails I get from developers, and I
look forward to hearing from you.
If you'd like to receive future articles directly in your email, please subscribe to my blog. If you're already a subscriber, thank you. | {"url":"https://www.writesoftwarewell.com/how-to-generate-random-numbers-in-ruby/","timestamp":"2024-11-13T19:40:43Z","content_type":"text/html","content_length":"63089","record_id":"<urn:uuid:d83ecfe3-10d0-4cf5-81b2-57729fde8a76>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00680.warc.gz"} |
Landmark Events — History Highlight - Johannes Kepler Is Born, 1571
“The heavens declare the glory of God; and the firmament sheweth his handywork.” —Psalm 19:1
Johannes Kepler Is Born,
December 27, 1571
Johannes Kepler (1571-1630)
German mathematician, astronomer and astrologer
Kepler’s birthplace in Weil der Stadt
Kepler was born in Weil der Stadt, a free imperial city in Swabia, a southwest German state in the Holy Roman Empire. The region had divided over the Reformation, between Protestant and Catholic.
Kepler was raised by poor but devout Lutheran parents. His father was a mercenary and left when Johannes was only five years old. He likely died in the eighty-year war for Dutch independence. As a
young boy, he contracted smallpox which left him with crippled hands and weak eyesight. His love of mathematics and interest in the heavens was encouraged by his mother and grandfather. Based on his
superior work in local schools, Kepler was awarded a scholarship to Tubingen University to study for the Lutheran ministry, but that course was providentially altered.
The Holy Roman Empire at its zenith overlaid on modern European national borders
Nickolaus Copernicus (1473-1543)
Polish mathematician and astronomer known for his heliocentric model of the universe
While a college/seminary student, Kepler was introduced to the work of Nicolaus Copernicus who had written that planets orbit the sun rather than the Earth, a revolutionary break from Church dogma
and astronomic belief. With his brilliant mathematical skills and astrological acumen, Kepler was invited to become, at the age of twenty-three, professor of mathematics at the Protestant seminary in
Graz, Austria. In his spare time he studied astronomy and astrology (considered nearly the same discipline in that era). He took a public stand for the heliocentric Copernican theories in his first
published work, in opposition to both Luther’s and the Catholic Church’s belief. So dangerous was his theory considered that he and his first wife, Barbara, wrote to one another in code to minimize
the chance of persecution.
Johannes Kepler (1571-1630) Barbara Kepler (née Müller) (1574-1611)
Tycho Brahe (1546-1601)
Danish nobleman, astronomer, and writer known for his accurate and comprehensive astronomical and planetary observations
Kepler was skilled at ingratiating himself with men who could promote his career, and he received the benefits of several patrons. He also married an heiress, twice widowed, and together they had
five children. Kepler corresponded with other prominent astronomers and ran afoul of Tycho Brahe, one of the foremost scientists of his day, a wealthy Danish nobleman-astronomer. They debated
Copernican theory and eventually came together at Tycho’s new observatory near Prague, after Kepler fled Graz for refusing to convert to Roman Catholicism. Brahe added Kepler to his growing cadre of
assistants, but was careful not to reveal to him all that he had discovered about planetary motion. Upon Tycho’s death that very year, Kepler was appointed Imperial Mathematician and given the task
of completing Tycho’s work.
Emperor Rudolf II of the Holy Roman Empire (1552-1612)
Over the following eleven years Kepler came into his own using Brahe’s notes and his own observations. Kepler was so respected that he was permitted his Lutheran convictions though the emperor’s
court was Catholic. His duties included “reading the stars” and giving astrological advice to the Emperor, as he continued serious astronomical calculations and collegial contact with the other
mathematicians of Europe. It took him eight years to solve the problem of explaining the unusual Mars orbit. He discovered the planets travelled in ellipses rather than circles, now known as Kepler’s
First Law. He figured out that planets further from the sun moved more slowly, Kepler’s second law (1609). In comparing the orbits of two planets he found that “The square of the ratio of the period
of two planets is equal to the cube of the ratio of their radius” (Third Law, 1619).
Kepler’s First Law of Planetary Motion states that the orbits of planetary bodies are ellipses with the sun at one of the two foci of the ellipse (exaggerated for illustration purposes)
Albrecht von Wallenstein (1583-1634)
He wrote many other works, some of which still hold up. Kepler’s contributions to astronomy accorded him the title of “Father of Astronomy” by historians. Tubingen would not let him return because
his Calvinist beliefs conflicted with strict Lutheranism and eventually he had to move to Linz. During the Thirty Years War, Linz was laid siege to by Catholic armies and he had to move again.
Ironically, Kepler became an advisor to the Catholic General Wallenstein but failed to predict Wallenstein’s assassination. Kepler’s gravesite was lost when the Swedish army destroyed Regensburg,
where he had died in 1630. It was this great Christian astronomer who is said to have remarked that “I think God’s thoughts after him,” a central tenet of twentieth century Reformed presuppositional apologetics.
Image Credits: 1 Johannes Kepler (Wikipedia.org) 2 Holy Roman Empire map (Wikipedia.org) 3 Kepler birth home (Wikipedia.org) 4 Nicolaus Copernicus (Wikipedia.org) 5 Johannes and Barbara Kepler
(Wikipedia.org) 6 Tycho Brahe (Wikipedia.org) 7 Emperor Rudolf II (Wikipedia.org) 8 Albrecht von Wallenstein (Wikipedia.org) | {"url":"https://landmarkevents.org/assets/email/2018/12-24-history-highlight/","timestamp":"2024-11-12T02:49:26Z","content_type":"text/html","content_length":"21332","record_id":"<urn:uuid:1dfe7ffa-2e0b-427c-85dd-0ee0715a0b17>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00410.warc.gz"} |
Z-Gauge Trains at Robert Leduc blog
Z-Gauge Trains. Z scale (1:220) is the smallest of the major manufactured model railway scales, with a track gauge of 6.5 mm (about 0.256 in), which works out to roughly 1.385 mm to one foot of the real thing. It offers a much narrower but rapidly growing variety of high-quality products, and connoisseurs appreciate Märklin Z as much as the most inveterate modeler does. If you’re itching to build a model railway but don’t have space, Z gauge is the answer: it is perfect for modelers who are limited in how much room they can devote to a layout. You can shop a huge collection of Z gauge and Z scale model railway products at Gaugemaster, or browse Plaza Japan’s wide range of 1:220 scale train sets. Here’s a quick primer on what you need to know before you bring home your favorite Z gauge trains. | {"url":"https://cebzewju.blob.core.windows.net/z-gauge-trains.html","timestamp":"2024-11-13T06:33:19Z","content_type":"text/html","content_length":"34485","record_id":"<urn:uuid:da3cd5af-bd1b-49ae-a069-b9b8b40fce55>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00349.warc.gz"}
The Galvanostatic Method: Analysis of Error and Computation of Parameters
Galvanostatic transients arising from multi-step processes of the type ν_O O + ne = ν_R R are treated mathematically for systems subjected to both activation and diffusion control. Analysis of error
based on the error matrix is presented. The galvanostatic method is characterized by the information contents [Ī(i0) and Ī(Cdl)] for a single-estimate analysis (i.e., estimate of i0 for known Cdl or
the reverse) and that for a two-estimate analysis (simultaneous determination of i0 and Cdl). The correlation between the estimates of i0 and Cdl is given as a function of the dimensionless scale T/
τc. Information contents for i0 and Cdl in single and two-estimate systems are given and this permits the determination of the accuracy and the limits of the method as well as the choice of optimal
experimental conditions. The optimal full time scale for estimation of i0 depends on the value of τc/τd. For τc/τd between 0.5–500, Topt/τc varies from 6 to 18 (single-estimate system) and from 4 to
25 (two-estimate system). The upper limit for i0 in a single-parameter estimation is 3–12 A cm−2 (for Cdl of 10–40 μF cm−2) and the upper limit for ks (αa = αc = 0.5) is 8 cm sec−1. This is three times
greater than ks for the coulostatic method. For a given value of ks, the accuracy of the galvanostatic method is better than that of the coulostatic for systems with τc/τd > 0.6, but is not as good
when τc/τd < 0.6. The upper limit of ks and the accuracy of the galvanostatic method for two-estimate analysis (i0 and Cdl) are considerably lower than those for single-estimate analysis. An iteration
method is developed with which a large improvement is achieved. Estimation of i0 and Cdl with a computerized curvilinear regression analysis is proposed.
• diffusion
• galvanostatic method
• regression analysis
Dive into the research topics of 'The Galvanostatic Method: Analysis of Error and Computation of Parameters'. Together they form a unique fingerprint. | {"url":"https://cris.tau.ac.il/en/publications/the-galvanostatic-method-analysis-of-error-and-computation-of-par","timestamp":"2024-11-12T09:07:50Z","content_type":"text/html","content_length":"52872","record_id":"<urn:uuid:1af97582-332f-400c-9e21-af1f2d0476d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00403.warc.gz"} |
What was the date the day before yesterday? - Calculator King 👑
What was the date the day before yesterday?
Are you wondering what the date was the day before yesterday? You don’t have to anymore. With this page, we provide the answer to your question: “What day was it the day before yesterday?” or “What
is the date of the day before yesterday?” The answer is simple:
The day before yesterday was two days before today's date.
Calculations involving dates can sometimes be complicated, but we have solved that problem for you: our systems make the calculation automatically.
The calculation is simple: today's date – 2 days = the date of the day before yesterday.
What was the date the day before yesterday? It is simply today's date minus two days.
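The same two-day subtraction is a one-liner in most programming languages; for instance, this short Python sketch prints the date of the day before yesterday:
from datetime import date, timedelta
print(date.today() - timedelta(days=2))   # e.g. 2024-11-01 if today is 2024-11-03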
What is the ordinal date of the day before yesterday?
But what if you want more than just the date? What if you want to understand more about time and dates? Don’t worry, we’ve got you covered. We have equipped this website with a wealth of information
about dates, calendars, and time management. | {"url":"https://calculking.com/en/what-was-the-date-the-day-before-yesterday/","timestamp":"2024-11-03T09:33:59Z","content_type":"text/html","content_length":"62909","record_id":"<urn:uuid:d5d9252a-ab52-429b-b314-d5b2f459033d>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00447.warc.gz"} |
Prophet: forecasting at scale - Meta Research | Meta Research
Today Facebook is open sourcing Prophet, a forecasting tool available in Python and R. Forecasting is a data science task that is central to many activities within an organization. For instance,
large organizations like Facebook must engage in capacity planning to efficiently allocate scarce resources and goal setting in order to measure performance relative to a baseline. Producing high
quality forecasts is not an easy problem for either machines or for most analysts. We have observed two main themes in the practice of creating a variety of business forecasts:
• Completely automatic forecasting techniques can be brittle and they are often too inflexible to incorporate useful assumptions or heuristics.
• Analysts who can produce high quality forecasts are quite rare because forecasting is a specialized data science skill requiring substantial experience.
The result of these themes is that the demand for high quality forecasts often far outstrips the pace at which analysts can produce them. This observation is the motivation for our work building
Prophet: we want to make it easier for experts and non-experts to make high quality forecasts that keep up with demand.
The typical considerations that “scale” implies, computation and storage, aren’t as much of a concern for forecasting. We have found the computational and infrastructure problems of forecasting a
large number of time series to be relatively straightforward — typically these fitting procedures parallelize quite easily and forecasts are not difficult to store in relational databases such as
MySQL or data warehouses such as Hive.
The problems of scale we have observed in practice involve the complexity introduced by the variety of forecasting problems and building trust in a large number of forecasts once they have been
produced. Prophet has been a key piece to improving Facebook’s ability to create a large number of trustworthy forecasts used for decision-making and even in product features.
Where Prophet shines
Not all forecasting problems can be solved by the same procedure. Prophet is optimized for the business forecast tasks we have encountered at Facebook, which typically have any of the following
• hourly, daily, or weekly observations with at least a few months (preferably a year) of history
• strong multiple “human-scale” seasonalities: day of week and time of year
• important holidays that occur at irregular intervals that are known in advance (e.g. the Super Bowl)
• a reasonable number of missing observations or large outliers
• historical trend changes, for instance due to product launches or logging changes
• trends that are non-linear growth curves, where a trend hits a natural limit or saturates
We have found Prophet’s default settings to produce forecasts that are often accurate as those produced by skilled forecasters, with much less effort. With Prophet, you are not stuck with the results
of a completely automatic procedure if the forecast is not satisfactory — an analyst with no training in time series methods can improve or tweak forecasts using a variety of easily-interpretable
parameters. We have found that by combining automatic forecasting with analyst-in-the-loop forecasts for special cases, it is possible to cover a wide variety of business use-cases. The following
diagram illustrates the forecasting process we have found to work at scale:
For the modeling phase of the forecasting process, there are currently only a limited number of tools available. Rob Hyndman’s excellent forecast package in R is probably the most popular option, and
Google and Twitter have both released packages with more specific time series functionality — CausalImpact and AnomalyDetection, respectively. As far as we can tell, there are few open source
software packages for forecasting in Python.
We have frequently used Prophet as a replacement for the forecast package in many settings because of two main advantages:
1. Prophet makes it much more straightforward to create a reasonable, accurate forecast. The forecast package includes many different forecasting techniques (ARIMA, exponential smoothing, etc), each
with their own strengths, weaknesses, and tuning parameters. We have found that choosing the wrong model or parameters can often yield poor results, and it is unlikely that even experienced
analysts can choose the correct model and parameters efficiently given this array of choices.
2. Prophet forecasts are customizable in ways that are intuitive to non-experts. There are smoothing parameters for seasonality that allow you to adjust how closely to fit historical cycles, as well
as smoothing parameters for trends that allow you to adjust how aggressively to follow historical trend changes. For growth curves, you can manually specify “capacities” or the upper limit of the
growth curve, allowing you to inject your own prior information about how your forecast will grow (or decline). Finally, you can specify irregular holidays to model like the dates of the Super
Bowl, Thanksgiving and Black Friday.
How Prophet works
At its core, the Prophet procedure is an additive regression model with four main components:
• A piecewise linear or logistic growth curve trend. Prophet automatically detects changes in trends by selecting changepoints from the data.
• A yearly seasonal component modeled using Fourier series.
• A weekly seasonal component using dummy variables.
• A user-provided list of important holidays.
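One convenient way to summarize the four components above is as an additive decomposition, roughly of the form $y(t) = g(t) + s(t) + h(t) + \varepsilon_t$, where $g(t)$ is the (piecewise linear or logistic) trend, $s(t)$ the periodic seasonal component, $h(t)$ the holiday effects, and $\varepsilon_t$ an error term. This is a schematic summary of the list above rather than Prophet's exact parameterization.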
As an example, here is a characteristic forecast: log-scale page views of Peyton Manning’s Wikipedia page that we downloaded using the wikipediatrend package. Since Peyton Manning is an American
football player, you can see that yearly seasonality plays an important role, while weekly periodicity is also clearly present. Finally, you can see that certain events (like playoff games he appears in) may
also be modeled.
Prophet will provide a components plot which graphically describes the model it has fit:
This plot more clearly shows the yearly seasonality associated with browsing to Peyton Manning’s page (football season and the playoffs), as well as the weekly seasonality: more visits on the day of
and after games (Sundays and Mondays). You can also notice the downward adjustment to the trend component since he has retired recently.
The important idea in Prophet is that by doing a better job of fitting the trend component very flexibly, we more accurately model seasonality and the result is a more accurate forecast. We prefer to
use a very flexible regression model (somewhat like curve-fitting) instead of a traditional time series model for this task because it gives us more modeling flexibility, makes it easier to fit the
model, and handles missing data or outliers more gracefully.
By default, Prophet will provide uncertainty intervals for the trend component by simulating future trend changes to your time series. If you wish to model uncertainty about future seasonality or
holiday effects, you can run a few hundred HMC iterations (which takes a few minutes) and your forecasts will include seasonal uncertainty estimates.
We fit the Prophet model using Stan, and have implemented the core of the Prophet procedure in Stan’s probabilistic programming language. Stan performs the MAP optimization for parameters extremely
quickly (<1 second), gives us the option to estimate parameter uncertainty using the Hamiltonian Monte Carlo algorithm, and allows us to re-use the fitting procedure across multiple interface
languages. Currently we provide implementations of Prophet in both Python and R. They have exactly the same features and by providing both implementations we hope to make our forecasting approach
more broadly useful in the data science communities.
How to use Prophet
The simplest way to use Prophet is to install the package from PyPI (Python) or CRAN (R). You can read our quick start guide and dive into our comprehensive documentation. If you’re looking for a fun
source of time series data, we recommend trying the wikipediatrend package which will download historical page views on Wikipedia pages.
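For orientation, a minimal end-to-end sketch in Python looks roughly like the following. The package import name has varied between releases (it was published as fbprophet around the time of this post and as prophet in later releases), and the CSV filename here is just a placeholder for any two-column file with a date column ds and a value column y.
import pandas as pd
from fbprophet import Prophet   # in newer releases: from prophet import Prophet

df = pd.read_csv("page_views.csv")   # placeholder: columns "ds" (date) and "y" (value)

m = Prophet()
m.fit(df)

future = m.make_future_dataframe(periods=365)   # extend one year past the history
forecast = m.predict(future)

print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
m.plot(forecast)              # forecast with uncertainty intervals
m.plot_components(forecast)   # trend, weekly and yearly seasonality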
Help us improve Prophet
There are two main ways to help us improve Prophet. First, you can try it yourself and tell us about your results. We’re always looking for more use cases in order to understand when Prophet performs
well and when it does not. Second, there are plenty of features that are left to build! We welcome pull requests with bugfixes and new features. Check out how to contribute, we look forward to
engaging with the community to make Prophet even more broadly useful. | {"url":"https://research.facebook.com/blog/2017/02/prophet-forecasting-at-scale/?ref=blog.cloudnua.io","timestamp":"2024-11-11T08:09:48Z","content_type":"text/html","content_length":"126816","record_id":"<urn:uuid:99e56eaa-990a-4e9a-bbac-c4704e238d8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00272.warc.gz"} |
How to Calculate GPM From Differential Pressure
••• BullpenAl/iStock/GettyImages
Pressure is the driving force behind volumetric liquid flows expressed in GPM (gallons-per-minute) as it is in any flowing system. This derives from the pioneering work on the relationships between
pressure and flow first conceptualized by Daniel Bernoulli over two hundred years ago. Today, the detailed analysis of flowing systems and much of flow instrumentation is based on this reliable
technology. Calculating instantaneous GPM from differential pressure readings is straightforward whether the application is a pipeline section or a specific differential pressure flow element such as
an orifice plate.
Calculating GPM from Differential Pressure in a Pipe Section
Define the flow measurement application. In this example, water is flowing downward through a 6-inch Schedule 40 steel pipe from an elevated water tank whose level is 156-feet above ground to a
distribution header at ground level where the pressure measures 54-psi. Since the water is driven purely by static head pressure, no pump is needed. You can calculate GPM from differential
pressure through this pipe.
Determine the differential pressure across the 156-feet of vertical pipe by dividing 156-feet elevation by 2.31-feet-per-psi (pounds-per-square-inch) to yield 67.53-psi at the start of the pipe.
Subtracting 54-psi from 67.53-psi results in a differential pressure of 13.53-psi across 156-feet of 6-inch Schedule 40-pipe. This results in a 100-feet/156-feet X 13.53-psi = 8.67-psi
differential pressure in 100-feet of pipe.
Look up the head-loss/flow data from the chart for 6-inch Schedule 40 steel pipe. Here 1744-GPM of flow results in a differential pressure of 8.5-psi.
Calculate actual GPM flow in your case by dividing 8.67-psi by the 8.5-psi listed and extracting the square root of the quotient, since the D'Arcy-Weisbach equation on which the tabular data is
based shows that pressure varies as the square of flow velocity (and thus GPM). 8.67/8.5 = 1.02. The square root of 1.02 = 1.00995. Multiply the 1.00995 flow proportion by the listed 1744-GPM to
yield 1761.35-GPM flowing through your 6-inch pipe.
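For reference, the same arithmetic can be scripted; the Python sketch below simply mirrors the steps above, with the 1744-GPM / 8.5-psi point read from the published pipe-friction table rather than computed:
import math

inlet_psi = 156.0 / 2.31                       # static head: ~67.53 psi
dp_per_100ft = (inlet_psi - 54.0) * 100 / 156  # ~8.67 psi per 100 ft of pipe

gpm_ref, dp_ref = 1744.0, 8.5                  # table point for 6-inch Schedule 40 pipe
gpm = gpm_ref * math.sqrt(dp_per_100ft / dp_ref)   # flow scales as the square root of pressure drop
print(round(gpm, 1))   # ~1761.8 GPM; the text's 1761.35 differs only because intermediate values were rounded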
GPM from Differential Pressure in an Orifice Plate
Define the application. For this example, a pre-defined orifice plate is installed in the 8-inch header fed by the pipe of Section 1. The orifice plate is sized to produce a differential pressure
of 150-inches of H2O (in H2O) differential pressure with a flow of 2500 gallons of water flowing through it. In this case, the orifice plate produces a differential pressure of 74.46-inches of
H2O differential pressure, which enables you to calculate the actual flow through the 8-inch header pipe.
Calculate the proportion of the full 2500-GPM flow at 150-in H2O when the orifice plate only produces 74.46-in H2O of differential pressure. 74.46/150 = 0.4964.
Extract the square root of 0.4964, since flow varies proportionately as the square root of pressure ratio. This results in a corrected proportion of 0.70456, which when multiplied by the 2500-GPM
full-range flow, equals 1761.39 GPM. This value is reasonable, since all the flow is coming from the feed pipe of the Section 1 calculation.
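The orifice-plate case uses exactly the same square-root scaling, so the check is equally short (values as given above):
import math
gpm = 2500.0 * math.sqrt(74.46 / 150.0)   # full-scale flow times the square root of the differential-pressure ratio
print(round(gpm, 1))                      # ~1761.4 GPM, matching the feed-pipe flow from Section 1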
Things You'll Need
□ Calculator or spreadsheet
□ Physical data for piping type
□ Application data for an orifice plate
• Using the lowest differential pressure range possible in an application will result in less permanent pressure loss and improve energy savings in pumped systems.
• Always have pressure applications checked by a professional to be sure piping systems won't rupture in high-pressure cases.
About the Author
Pauline Gill is a retired teacher with more than 25 years of experience teaching English to high school students. She holds a bachelor's degree in language arts and a Master of Education degree. Gill
is also an award-winning fiction author. | {"url":"https://sciencing.com/calculate-gpm-differential-pressure-6536540.html","timestamp":"2024-11-05T00:23:05Z","content_type":"text/html","content_length":"412547","record_id":"<urn:uuid:8bef6922-e28a-4bdf-bce1-556f7763583d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00315.warc.gz"} |
Excel Formula for Product Type Mismatch
In this tutorial, we will learn how to write an Excel formula to check if the product type in one column does not match the product type in another column. This formula can be useful when
working with data in Excel and you need to identify any mismatches between two columns. We will use the IF function in Excel to perform this comparison. Let's dive into the details of the formula and
how it works.
The formula uses the IF function to compare the values in cells I1 and L1. The <> operator is used to check if the values are not equal. If the values are not equal, the formula returns the string
'Mismatch'. Otherwise, it returns an empty string.
To apply this formula to other rows in the columns, you can simply copy the formula down. This will automatically update the cell references to match the corresponding rows.
Let's look at an example to better understand how the formula works. We have a dataset with product types in columns I and L. By applying the formula to each row, we can identify any mismatches
between the product types.
Remember to replace the cell references in the formula with the appropriate ones for your dataset. Now you have the knowledge to write an Excel formula to check for product type mismatches
in columns. Use this formula to efficiently analyze and compare data in Excel.
An Excel formula
=IF(I1<>L1, "Mismatch", "")
Formula Explanation
This formula uses the IF function to check if the product type in column I does not match the product type in column L.
Step-by-step explanation
1. The formula compares the values in cells I1 and L1 using the <> operator, which means "not equal to".
2. If the values in cells I1 and L1 are not equal, the formula returns the string "Mismatch". Otherwise, it returns an empty string.
3. The result of the formula is displayed in the cell where the formula is entered.
For example, if we have the following data in columns I and L:
| I | J | K | L |
|---|---|---|---|
| A | | | A |
| B | | | C |
| C | | | B |
| D | | | D |
The formula =IF(I1<>L1, "Mismatch", "") would return the following results:
• In cell J1, the formula would return an empty string because the product type in column I (A) matches the product type in column L (A).
• In cell J2, the formula would return the string "Mismatch" because the product type in column I (B) does not match the product type in column L (C).
• In cell J3, the formula would return the string "Mismatch" because the product type in column I (C) does not match the product type in column L (B).
• In cell J4, the formula would return an empty string because the product type in column I (D) matches the product type in column L (D).
The formula can be copied down to apply the same logic to other rows in the columns. | {"url":"https://codepal.ai/excel-formula-generator/query/3GoBMGeW/excel-formula-product-type-mismatch","timestamp":"2024-11-10T20:46:50Z","content_type":"text/html","content_length":"91438","record_id":"<urn:uuid:f83c1b62-1058-4401-824a-750cab9b7667>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00189.warc.gz"} |
circfit_control: Auxiliary Function for Controlling Circular Regression Tree... in circmax: Circular Regression with Maximum Likelihood Estimation and Regression Trees
Auxiliary function for circfit fitting. Specifies a list of values passed to optim.
circfit_control(solve_kappa = "Newton-Fourier", useC = FALSE, ncores = 1, ...)
solve_kappa Which kappa solver should be used for the starting values for kappa. By default a "Newton-Fourier" is used. Alternatively, a "Uniroot" provides a safe option and "Banerjee_et_al_2005"
provides a quick approximation.
useC Should score function and solver be calculated in C?
ncores If useC = TRUE, number of cores for parallelization with openMP.
... additional parameters passed to optim.
For more information on customizing the embed code, read Embedding Snippets. | {"url":"https://rdrr.io/rforge/circmax/man/circfit_control.html","timestamp":"2024-11-13T19:51:18Z","content_type":"text/html","content_length":"26172","record_id":"<urn:uuid:9fc40c15-ddbc-4cd3-a9bd-c8b3c10b5c0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00653.warc.gz"} |
What is Hypothesis Testing? | Quality Gurus
Hypothesis testing is a statistical procedure that allows us to test assumptions or beliefs about a population based on sample data. It is a statistical procedure that is used to determine whether a
hypothesis about a population parameter is supported by the evidence in a sample. It helps us determine the likelihood that our assumptions are true, given the collected data.
In hypothesis testing, the researcher first specifies a null hypothesis and an alternative hypothesis. The null hypothesis represents the default assumption that there is no significant difference or
relationship between the variables being studied. The alternative hypothesis represents the claim or hypothesis that the researcher is testing.
Next, the researcher collects a sample of data and uses statistical tests to determine whether the sample data provide sufficient evidence to reject the null hypothesis in favour of the alternative
hypothesis. The null hypothesis is retained if the sample data does not support the alternative hypothesis. If the sample data does support the alternative hypothesis, the null hypothesis is
The outcome of hypothesis tests comes in two forms:
The outcome of a hypothesis test is typically expressed in terms of a p-value, which represents the probability of obtaining the observed results by chance if the null hypothesis is true.
• If the p-value is below a predetermined threshold (usually 0.05), the null hypothesis is rejected, and the alternative hypothesis is accepted. (If the p is low, null must go)
• If the p-value is above the threshold, we fail to reject the null hypothesis, and the null hypothesis is retained (if the p is high, the null will fly).
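As a concrete, purely illustrative example, here is how that decision rule plays out for a one-sample t-test in Python with SciPy; the sample values, the hypothesized mean of 50, and the 0.05 threshold are all made up for the demonstration:
from scipy import stats

sample = [51.2, 49.8, 52.4, 50.9, 53.1, 48.7, 52.0, 51.5]    # illustrative data
t_stat, p_value = stats.ttest_1samp(sample, popmean=50)      # null hypothesis: population mean is 50

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null hypothesis")            # the p is low, the null must go
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis")    # the p is high, the null will fly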
Hypothesis testing is a commonly used statistical method for testing claims and hypotheses about a population based on sample data. It is a valuable tool for making inferences about a population
based on the evidence in the sample data. | {"url":"https://www.qualitygurus.com/what-is-hypothesis-testing/","timestamp":"2024-11-14T10:45:38Z","content_type":"text/html","content_length":"460408","record_id":"<urn:uuid:1fdc1425-eefa-4a47-bc2b-5aaa4358d2b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00488.warc.gz"} |
ACOTH Function: Definition, Formula Examples and Usage
ACOTH Function
The ACOTH formula in Google Sheets is a useful tool for performing calculations involving inverse hyperbolic cotangents. It is a part of the standard library of functions in Google Sheets and can be
accessed by typing “=ACOTH” into a cell in your spreadsheet. To use the ACOTH formula, you simply need to provide a numeric value as an input, and the formula will return the inverse hyperbolic
cotangent of that value. The ACOTH formula is a valuable tool for performing complex calculations, and can easily be incorporated into your Google Sheets spreadsheets. Give it a try today and see how
it can help you with your calculations.
Definition of ACOTH Function
The ACOTH function in Google Sheets is a mathematical function that returns the inverse hyperbolic cotangent of a given number. The inverse hyperbolic cotangent, also known as the hyperbolic
arccotangent, is the inverse function of the hyperbolic cotangent, which is defined as the hyperbolic function that is the reciprocal of the hyperbolic tangent. The ACOTH function is useful for
working with complex mathematical expressions and equations in Google Sheets. To use the function, simply enter the desired number as the argument, and the function will return the inverse hyperbolic
cotangent of that number.
Syntax of ACOTH Function
The syntax for the ACOTH function in Google Sheets is as follows: ACOTH(number) where “number” is the number for which you want to calculate the inverse hyperbolic cotangent. The function returns the
inverse hyperbolic cotangent of the given number. For example, if you want to calculate the inverse hyperbolic cotangent of 10, you would enter the formula =ACOTH(10) in a cell in your Google Sheets
spreadsheet. This would return the value 0.1003353477 (to ten decimal places) as the inverse hyperbolic cotangent of 10.
Examples of ACOTH Function
Here are three examples of how to use the ACOTH function in Google Sheets:
1. To calculate the inverse hyperbolic cotangent of 10, you would enter the formula =ACOTH(10) in a cell in your Google Sheets spreadsheet. This would return the value 0.1003353477 as the inverse
   hyperbolic cotangent of 10.
2. To calculate the inverse hyperbolic cotangent of the value in cell A1, you would enter the formula =ACOTH(A1) in a cell in your Google Sheets spreadsheet. This would return the inverse hyperbolic
cotangent of the value in cell A1.
3. To calculate the inverse hyperbolic cotangent of the result of another formula, you can nest the ACOTH function within another formula. For example, to calculate the inverse hyperbolic cotangent
of the sum of the values in cells A1 and B1, you would enter the formula =ACOTH(A1+B1) in a cell in your Google Sheets spreadsheet. This would return the inverse hyperbolic cotangent of the sum
of the values in cells A1 and B1.
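Outside of Google Sheets, the same values can be checked with a short Python sketch. The acoth helper below is an illustrative assumption (Python's math module has no acoth function); it is built from the identity acoth(x) = 0.5*ln((x+1)/(x-1)), which equals atanh(1/x) for |x| > 1.
import math

def acoth(x):
    # Inverse hyperbolic cotangent, defined only for |x| > 1.
    return 0.5 * math.log((x + 1) / (x - 1))

print(acoth(10))   # approximately 0.1003, matching =ACOTH(10)
print(acoth(2))    # approximately 0.5493
print(math.isclose(acoth(10), math.atanh(1 / 10)))  # True: acoth(x) = atanh(1/x)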
Use Case of ACOTH Function
The ACOTH function in Google Sheets can be used to calculate the inverse hyperbolic cotangent of a given value. The inverse hyperbolic cotangent is defined as the value that, when plugged into the
hyperbolic cotangent function, returns the original value.
Here are some examples of how the ACOTH function can be used in Google Sheets:
• Suppose you have a sheet with a column of values that represent the hyperbolic cotangents of various numbers, and you want to find the corresponding numbers themselves. You can use the ACOTH
function to calculate the inverse hyperbolic cotangent of each value in the column, which will give you the original numbers.
• You can also use the ACOTH function to solve equations that involve the hyperbolic cotangent. For example, if you want to solve the equation coth(x) = 2 for x, you can use the ACOTH function to
calculate ACOTH(2), which gives approximately 0.5493 as the value of x that satisfies the equation. (Note that an equation such as coth(x) = 0.5 has no real solution, because the hyperbolic cotangent only takes values whose absolute value exceeds 1.)
• In addition to solving equations, the ACOTH function can also be used in statistical analysis and data modeling. For example, if you have a dataset with values that follow a hyperbolic cotangent
distribution, you can use the ACOTH function to transform the data into a more manageable form for further analysis.
Overall, the ACOTH function in Google Sheets can be useful in a variety of situations where you need to work with the inverse hyperbolic cotangent of a value.
Limitations of ACOTH Function
The ACOTH function in Google Sheets is used to calculate the inverse hyperbolic cotangent of a given number. It is important to note that this function can only be used with real numbers, and not
complex numbers. Additionally, the absolute value of the input must be strictly greater than 1, because the hyperbolic cotangent only takes values whose absolute value exceeds 1, so its inverse is
undefined on the interval [-1, 1]. Finally, the function only returns the principal value of the inverse hyperbolic cotangent, and does not return any other possible values that may exist for a given input.
Commonly Used Functions Along With ACOTH
Some commonly used functions in Google Sheets that may be used along with the ACOTH function include the following:
• The COSH function, which calculates the hyperbolic cosine of a given number
• The SINH function, which calculates the hyperbolic sine of a given number
• The TANH function, which calculates the hyperbolic tangent of a given number
• The ATANH function, which calculates the inverse hyperbolic tangent of a given number
• The ACOS function, which calculates the inverse cosine of a given number
• The ASIN function, which calculates the inverse sine of a given number
• The ATAN function, which calculates the inverse tangent of a given number
These functions are often used in combination with one another to perform various mathematical operations in Google Sheets.
The ACOTH function in Google Sheets is a useful tool for calculating the inverse hyperbolic cotangent of a given number. It is important to note that this function can only be used with real numbers,
and not complex numbers. Additionally, the absolute value of the input must be strictly greater than 1, because the hyperbolic cotangent function only takes values whose absolute value exceeds 1. The
function only returns the principal value of the inverse hyperbolic cotangent, and does not return any other possible values that may exist for a given input. If you are working with inverse
hyperbolic cotangent calculations in Google Sheets, we encourage you to try using the ACOTH function to see how it can help you with your work.
Video: ACOTH Function
In this video, you will see how to use the ACOTH function. Be sure to watch the video to understand the usage of the ACOTH formula.
| {"url":"https://sheetsland.com/acoth-function/","timestamp":"2024-11-11T00:59:11Z","content_type":"text/html","content_length":"47823","record_id":"<urn:uuid:4e703535-0219-4934-ad1f-d624241a0d27>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00746.warc.gz"}
William and Emma, The Heart Chasers
The web is exploding with new information about science and sacred geometry, and it is indeed fascinating. Many are postulating that the universe is fractal and holographic, and that at every scale
from atom to solar system and beyond you see the same pattern. It's all about "As above, so below," meaning that the same pattern that is above is also below. There is a hierarchy of the same
fundamental patterns. Some are postulating that the universe is a torus and that the electron is also a torus. Some postulate that the origin of everything is a point, and that point is the center of
a torus, and a black hole is energy flowing in and a white hole is energy flowing out.
Pythagoras said, "All is number." The new field of vector based mathematics says that there are only nine numbers, and there is fascinating information about the structure of space as well.
I particularly like the view that at the center of every vibration, every wave, is the heart of God, and God is a twin flame and God creates other twin flames in his/her image. God is both masculine
and feminine. God is in the center of everything and from that center God energizes everything. You might say that Alpha and Omega are the highest pair of twin flames in our universe. Mathematically,
it might be described like the following (from rex research). You can skip the scientific stuff if you like, but for the scientifically minded:
The process of reducing a number down to one digit is practiced in mainstream science. Buckminster Fuller called these digits integrated digits, or "indig 9". This process is sometimes called casting
out nines, and it's similar to modular arithmetic or taking "mod 9" of a number, except that instead of 0
when there is no remainder you have a 9. I think of it as going around the circuit a certain number of times, just like a clock where there are only 12 numbers. It's not 15 o'clock, or 27 o'clock, or
39 o'clock, it's 3 o'clock. I think of each new time around the circuit as a new harmonic, a new whole number multiple. You can see that if you do take the single digits of the Fibonacci sequence you
get a repeated pattern every 24 numbers, and if you keep doubling starting with 1 you get 1, 2, 4, 8, 7, 5 as a repeating sequence.
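Both of those claims about digital roots are easy to check with a few lines of Python; the digital_root helper below is just an illustrative sketch of "casting out nines."
def digital_root(n):
    # For positive n, repeatedly summing digits is equivalent to n mod 9, with 9 in place of 0.
    return 9 if n % 9 == 0 else n % 9

fib = [1, 1]
while len(fib) < 48:
    fib.append(fib[-1] + fib[-2])
roots = [digital_root(f) for f in fib]
print(roots[:24])                  # the repeating pattern of 24 digital roots
print(roots[:24] == roots[24:48])  # True: the pattern repeats every 24 Fibonacci numbers
print([digital_root(2 ** k) for k in range(12)])  # 1, 2, 4, 8, 7, 5 repeating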
It is all quite fascinating, and twin flames are at the heart of it. | {"url":"http://william-moore.emma-moore.com/2016/12/the-web-is-exploding-with-new.html","timestamp":"2024-11-07T22:28:19Z","content_type":"text/html","content_length":"44655","record_id":"<urn:uuid:a0487b52-d285-465e-b632-815806064ba5>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00165.warc.gz"} |
Python Operators - Oixiesoft
In this tutorial, We will learn about Python Operators, their syntax, and how to use operators in Python.
What are operators in python?
Operators are special symbols in Python.
They carry out arithmetic or logical computation. The operand is the value that the operator operates on.
For example:
>>> 4+5
9
Here, + is the operator that performs addition. 4 and 5 are the operands and 9 is the output of the operation.
Arithmetic operators
Arithmetic operators are used to performing mathematical operations like addition, multiplication, subtraction, etc.
Operator Meaning Example
+ Add two operands or unary plus x + y, +2
– Subtract right operand from the left or unary minus x – y, -2
* Multiply two operands x * y
/ Divide left operand by the right one (always results in a float) x / y
% Modulus – remainder of the division of left operand by the right x % y (remainder of x/y)
// Floor division – division that results in a whole number adjusted to the left in the number line x // y
** Exponent – left operand raised to the power of right x**y (x to the power y)
Example #1: Arithmetic operators in Python
a = 10
b = 5
# Output: a + b = 15
print('a + b =',a+b)
# Output: a - b = 5
print('a - b =',a-b)
# Output: a * b = 50
print('a * b =',a*b)
# Output: a / b = 2.0
print('a / b =',a/b)
# Output: a // b = 2
print('a // b =',a//b)
# Output: a ** b = 100000
print('a ** b =',a**b) #same as 10*10*10*10*10
a + b = 15
a - b = 5
a * b = 50
a / b = 2.0
a // b = 2
a ** b = 100000
Comparison operators
We can compare values with the help of Comparison operators. It returns either True or False according to the condition.
Operator Meaning Example
> Greater than – True if left operand is greater than the right a > b
< Less than – True if left operand is less than the right a < b
== Equal to – True if both operands are equal a == b
!= Not equal to – True if operands are not equal a != b
>= Greater than or equal to – True if left operand is greater than or equal to the right a >= b
<= Less than or equal to – True if left operand is less than or equal to the right a <= b
Example #2: Comparison operators in Python
a = 5
b = 8
# Output: a > b is False
print('a > b is',a>b)
# Output: a < b is True
print('a < b is',a<b)
# Output: a == b is False
print('a == b is',a==b)
# Output: a != b is True
print('a != b is',a!=b)
# Output: a >= b is False
print('a >= b is',a>=b)
# Output: a <= b is True
print('a <= b is',a<=b)
a > b is False
a < b is True
a == b is False
a != b is True
a >= b is False
a <= b is True
Logical operators
Operator Meaning Example
and True if both the operands are true x and y
or True if either of the operands is true x or y
not True if operand is false (complements the operand) not x
Example 3: Logical Operators in Python
a = True
b = False
print('a and b is',a and b)
print('a or b is',a or b)
print('not a is',not a)
a and b is False
a or b is True
not a is False
Here is the truth table for these operators.
x y x and y x or y not x
True True True True False
True False False True False
False True False True True
False False False False True
Python Bitwise Operators
Bitwise operators are used to comparing (binary) numbers:
Operator Meaning Description
& Bitwise AND Sets each bit to 1 if both bits are 1
| Bitwise OR Sets each bit to 1 if one of two bits is 1
~ Bitwise NOT Inverts all the bits
^ Bitwise XOR Sets each bit to 1 if only one of two bits is 1
>> Bitwise right shift Shift right by pushing copies of the leftmost bit in from the left, and let the rightmost bits fall off
<< Bitwise left shift shift left by pushing zeros in from the right and let the leftmost bits fall off | {"url":"https://www.oixiesoft.com/article/python-operators/","timestamp":"2024-11-14T00:43:33Z","content_type":"text/html","content_length":"59754","record_id":"<urn:uuid:808b07c1-67b2-4eca-bb2e-600b37a387ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00409.warc.gz"} |
Derivative vs Integral (3 Key Things To Know & Comparison) | jdmeducational
Derivative vs Integral (3 Key Things To Know & Comparison)
In calculus, derivatives and integrals are two different concepts – but they are related in an important way. Both tell us something about a function and its behavior, but it helps to know exactly
how they are used.
So, what is the difference between a derivative and an integral? The derivative f’(x) of a function f(x) gives the slope of the tangent line at a point. The integral f(x) of a function f’(x) gives
the area between f’(x) & the x-axis on the interval [a, b]. The Fundamental Theorem of Calculus connects these two concepts with the equation ∫[a]^bf’(x)dx = f(b) – f(a).
To put it simply, derivatives and indefinite integrals (antiderivatives) act like inverses (opposites). That is, if we take the integral of a function and then take the derivative of the integral,
we get the same function back.
In this article, we will talk about derivatives, integrals, and the difference between the two concepts. We’ll also look at some examples to show how the ideas are connected.
Let’s get started.
Hey! If you need to learn more about derivatives, integrals, and other math concepts for physics, check out this course:
Advanced Math For Physics: A Complete Self-Study Course
Derivative vs Integral
A derivative tells us about the slope of a curve at a given point, while an integral tells us about area under a curve on a given interval. More specifically:
• The derivative of a function tells us the slope of a tangent line the function at a given point.
• The integral of a function tells us the area between the function and the x-axis on an interval.
The table below compares derivatives and integrals side-by-side.
Derivative: f'(x), the derivative of f(x). Integral: f(x), the antiderivative or integral of f(x).
Derivative: rate of change or slope of a function at a point. Integral: area under a curve on an interval [a, b].
Derivative: find velocity from position. Integral: find position from velocity.
Derivative: find acceleration from velocity. Integral: find velocity from acceleration.
This table summarizes derivatives and integrals, as well as how they are related.
Before we dive in deeper to connect derivatives and integrals, let’s define each one.
What Is A Derivative?
The derivative of a function tells us how a function changes at a given point. That is, a derivative tells us how fast a function is increasing or decreasing at a point (“slope” or “rate of change”).
For a function f(x), the derivative f’(x) is defined by:
• f’(x) = lim[h->0] [f(x+h) – f(x)] / h
This is called the limit definition of the derivative.
At a specific point x = a, the value f’(a) is the slope of the tangent line to f(x) at that point. You can see the graph of a function and the tangent line at a given point below.
This image shows the tangent line, L, to the function curve, C, at the point P. The slope of the line L is the derivative of the function, evaluated at the point P.
In other words, the tangent line for f(x) at x = a has the form y = mx + b with m = f’(a); that is, y = f’(a)x + b.
The best analogy to think about derivatives is a moving object, like a car or a plane:
• The function f(x) is the position of the car (where is the car at time x?)
• The derivative function f’(x) is the velocity of the car (how fast is the car going at time x?)
We can extend this analogy to the second derivative f’’(x), which corresponds to the acceleration of the car (how fast is the velocity changing at time x?)
What Is An Integral?
The integral of a function tells us the area under the curve on a given interval. That is, an integral tells us the area between a function and the x-axis (this area is positive if above the x-axis,
and negative if below the x-axis).
For a function f(x), the integral of f(x) on the interval [a, b] is denoted by ∫[a]^bf(x)dx.
On a specific interval [a, b], the value of the integral ∫[a]^bf(x)dx is the area between f(x) and the x-axis from x = a to x = b. You can see the graph of a function and the area under the curve
(integral) below.
The integral of f(x) on the interval [a, b] is the area under the curve f(x), or the area between f(x) and the x-axis.
The best analogy to think about integrals is a moving object, like a car or a plane:
• The function v(x) is the velocity (how fast is the car going at time x?)
• The function f(x) is the position of the car (where is the car at time x?)
If we take the integral of the velocity function on a time interval [a, b], we get the change in position over that time period. In other words:
• Change in position from time a to time b = ∫[a]^bv(x)dx
We can extend this analogy to the acceleration function a(x) which corresponds to the acceleration of the car (how fast is the velocity changing at time x?). We can also write:
• Change in velocity from time a to time b = ∫[a]^ba(x)dx
Keep in mind that there are two types of integrals: definite integrals and indefinite integrals.
Definite Integral
A definite integral has arguments a and b (lower and upper bounds) indicated below and above the integral sign ∫. For example, for the integral ∫[a]^bf(x)dx, we have:
• A lower bound of x = a
• An upper bound of x = b
This indicates that we want the area between f(x) and the x-axis from x = a to x = b, as shown below.
The definite integral of the function f(x) with lower bound x = a and upper bound x = b gives the area between the curve f(x) and the x-axis. Any area above the x-axis is positive, and any area below
the x-axis is negative.
A definite integral has a specific value, based on the arguments (upper and lower bounds), or we can find a specific function based on initial conditions (an initial value problem).
Indefinite Integral
An indefinite integral has no arguments (no upper and lower bound). Instead, we take a general antiderivative, which will include an unknown constant term C.
So, the indefinite integral of the function f’(x) is f(x) + C, where C is a constant. As mentioned above, we can find the value of C if we are given specific values of x and y for the function.
The notation for the indefinite integral of f(x) is ∫f(x)dx.
Fundamental Theorem Of Calculus
The Fundamental Theorem of Calculus shows us the connection between the concepts of derivatives and integrals.
If f(x) is a real-valued function and continuous on the interval [a, b], then define F(x) by:
F(x) = ∫[a]^xf(t)dt
Then the function F(x) is an antiderivative of f(x). In other words, F’(x) = f(x).
Furthermore, F(x) is continuous on [a, b] and differentiable on (a, b).
A corollary, which we mentioned earlier, says that:
∫[a]^bf(x)dx = F(b) – F(a)
In other words, the area under the curve f(x) on the interval [a, b] tells us how the function F(x) changes from a to b.
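The corollary can be checked numerically with a short Python sketch; the function, interval and midpoint Riemann sum below are illustrative choices, not taken from this article.
def f(x):
    return x ** 2          # f(x) = x^2, with antiderivative F(x) = x^3 / 3

a, b, n = 0.0, 3.0, 100000
dx = (b - a) / n
area = sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx  # midpoint Riemann sum

print(area)                      # approximately 9.0
print(b ** 3 / 3 - a ** 3 / 3)   # F(b) - F(a) = 9.0, matching the area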
Examples Of Derivatives & Integrals
It will be much easier to see how derivatives and integrals work (and how they are connected) with some examples.
Example 1: Finding Velocity From Position
Let the function f(x) = 30x give the position of a car (in miles north of Boston) at time x >= 0. Since f(x) is the position function, we can take the derivative to get the velocity function, v(x) =
30. That is, f’(x) = v(x).
This tells us that the car is traveling at a constant speed of 30 miles per hour. We can see the graph of v(x), the car’s velocity, below:
This is the graph of the velocity curve v(x) = 30.
If we find the area under the velocity curve on a given time interval, we can find out how far the car has traveled during that time.
For example, from time x = 0 to time x = 2, the area is 2*30 = 60 miles. This makes sense, since the car traveled at 30 miles per hour for 2 hours.
The area under the velocity curve v(x) = 30 from x = 0 to x = 2 is 60 (2 hours times 30 miles per hour equals 60 miles).
Likewise, from time x = 0 to x = 5, the area is 5*30 = 150 miles, which you can see graphed below.
The area under the velocity curve v(x) = 30 from x = 0 to x = 5 is 150 (5 hours times 30 miles per hour equals 150 miles).
The table below shows the cumulative area under the velocity curve from time x = 0.
Time Area
x = 0 0
x = 1 30
x = 2 60
x = 3 90
x = 4 120
x = 5 150
This table shows the area
under the velocity curve
v(x) = 30 for various times t.
If we graph these distances, we get the position function, shown below.
This is the graph of the position function f(x) = 30x. Its derivative is the velocity function v(x) = 30.
Note: if the car had started 10 miles north of Boston, then the position function would be f(x) = 30x + 10, while the velocity function would still be v(x) = 30.
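The cumulative-area table above is easy to reproduce in Python; this is just an illustrative loop, not code from the article.
v = 30   # constant velocity, miles per hour
for t in range(6):
    print(t, v * t)   # prints 0 0, 1 30, 2 60, ..., 5 150, matching the position f(x) = 30x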
Example 2: Finding Position From Velocity
Let the function g(x) = 200 – 10x be the velocity of a car in meters per second from time x = 0 to time x = 20. The graph of the velocity function is shown below:
The graph of the velocity function g(x) = 200 – 10x.
If we take the area under the velocity curve from time x = 0 to time x = b, we get the sum of the areas of two shapes:
• A rectangle with area bg(b) = b(200 – 10b) = 200b – 10b^2
• A triangle with area b(200 – g(b))/2 = b(200 – (200 – 10b))/2 = b(10b)/2 = 5b^2
The area under the curve has two parts: the rectangle (dark gray) and the triangle (light gray). Their sum is the integral of the velocity function g(x) from x = 0 to x = 10.
If we sum the areas of the individual shapes (rectangle and triangle) we get a total area of:
• Total Area = (200b – 10b^2) + (5b^2)
• Total Area = 200b – 5b^2
The table below shows the value for the total area at times between x = 0 and x = 20.
Time Area
x = 0 0
x = 4 720
x = 8 1280
x = 12 1680
x = 16 1920
x = 20 2000
This table shows the cumulative
area under the velocity curve
g(x) from x = 0 at various times.
If we graph these values, we get the position function G(x) for the car, which you can see below.
This is the position function G(x) from x = 0 to x = 20. As time passes, the car slows down to a stop.
*Note: G’(x) = g(x), since velocity is the derivative of position.
Now you know the difference between a derivative and an integral, as well as how the two concepts are related.
You can learn how to graph a function from its derivative here.
When taking derivatives, you might need to differentiate square roots – you can learn how to do it here.
You can learn about how to use derivatives and graphs to find function maximums here. | {"url":"https://jdmeducational.com/derivative-vs-integral-3-key-things-to-know-comparison/","timestamp":"2024-11-02T09:11:12Z","content_type":"text/html","content_length":"94160","record_id":"<urn:uuid:579bb473-0dac-4c03-8152-6a72ae5c70b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00037.warc.gz"} |
Intersection Calculator
Intersection Calculator
Calculate the intersection of two sets
This function returns the intersection of two series of numbers. The intersection contains the numbers that are in both sets.
To calculate, enter the two sequences of numbers. The individual numbers are separated by semicolons or spaces. Then click on the 'Calculate' button.
Intersection is \(\{4, 5\}\)
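For readers who want to reproduce the same result outside the calculator, here is a minimal Python sketch using the built-in set type.
a = {1, 2, 3, 4, 5}
b = {4, 5, 6, 7, 8, 9}
print(a & b)              # {4, 5}
print(a.intersection(b))  # same result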
Intersection example
The intersection \(\displaystyle A\cap B\) consists of elements that are contained in both A and B.
The intersection of \(\{1, 2, 3, 4, 5\}\) and \(\{4, 5, 6, 7, 8, 9\}\) is \(\{4, 5\} \), because only the \(4\) and the \(5\) are contained in both sets. | {"url":"https://www.redcrab-software.com/en/Calculator/Sets/intersection","timestamp":"2024-11-03T05:45:06Z","content_type":"text/html","content_length":"20553","record_id":"<urn:uuid:a1bfc8a6-9558-4523-a7fa-4b347aa03b3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00430.warc.gz"} |
Basics | Deeplearning4j
Elementwise Operations And Basic Usage
The basic operations of linear algebra are matrix creation, addition and multiplication. This guide will show you how to perform those operations with ND4J, as well as various advanced transforms.
The Java code below will create a simple 2 x 2 matrix, populate it with integers, and place it in the nd-array variable nd:
INDArray nd = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});
If you print out this array, you’ll see this:
[[1.00, 3.00],
[2.00, 4.00]]
A matrix with two rows and two columns, which orders its elements by column and which we’ll call matrix nd.
A matrix that ordered its elements by row would look like this:
[[1.00, 2.00],
[3.00, 4.00]]
Elementwise scalar operations
The simplest operations you can perform on a matrix are elementwise scalar operations; for example, adding the scalar 1 to each element of the matrix, or multiplying each element by the scalar 5.
Let’s try it.
nd.addi(1);
This line of code represents this operation:
[[1.0 + 1 ,3.0 + 1]
[2.0 + 1,4.0 + 1]]
and here is the result
[[2.0 ,4.0]
[3.0 ,5.0]]
There are two ways to perform any operation in ND4J, destructive and nondestructive; i.e. operations that change the underlying data, or operations that simply work with a copy of the data.
Destructive operations will have an “i” at the end – addi, subi, muli, divi. The “i” means the operation is performed “in place,” directly on the data rather than a copy, while nd.add() leaves the
original untouched.
Elementwise scalar multiplication looks like this:
nd.muli(5);
And produces this:
[[10.0 ,20.0]
[15.0 ,25.0]
Subtraction and division follow a similar pattern:
nd.subi(3);
nd.divi(2);
If you perform all these operations on your initial 2 x 2 matrix, you should end up with this matrix:
[[3.5 ,8.5]
[6.0 ,11.0]
Elementwise vector operations
When performed with simple units like scalars, the operations of arithmetic are unambiguous. But working with matrices, addition and multiplication can mean several things. With vector-on-matrix
operations, you have to know what kind of addition or multiplication you’re performing in each case.
First, we’ll create a 2 x 2 matrix, a column vector and a row vector.
INDArray nd = Nd4j.create(new float[]{1,2,3,4},new int[]{2,2});
INDArray nd2 = Nd4j.create(new float[]{5,6},new int[]{2,1}); //vector as column
INDArray nd3 = Nd4j.create(new float[]{5,6},new int[]{2}); //vector as row
Notice that the shape of the two vectors is specified with their final parameters. {2,1} means the vector is vertical, with elements populating two rows and one column. A simple {2} means the vector
populates along a single row that spans two columns – horizontal. Your first matrix will look like this
[[1.00, 2.00],
[3.00, 4.00]]
Here’s how you add a column vector to a matrix:
nd.addColumnVector(nd2);
And here’s the best way to visualize what’s happening. The top element of the column vector combines with the top elements of each column in the matrix, and so forth. The sum matrix represents the
march of that column vector across the matrix from left to right, adding itself along the way.
[1.0 ,2.0] [5.0] [6.0 ,7.0]
[3.0 ,4.0] + [6.0] = [9.0 ,10.0]
But let’s say you preserved the initial matrix and instead added a row vector.
nd.addRowVector(nd3);
Then your equation is best visualized like this:
[1.0 ,2.0] [6.0 ,8.0]
[3.0 ,4.0] + [5.0 ,6.0] = [8.0 ,10.0]
In this case, the leftmost element of the row vector combines with the leftmost elements of each row in the matrix, and so forth. The sum matrix represents that row vector falling down the matrix
from top to bottom, adding itself at each level.
So vector addition can lead to different results depending on the orientation of your vector. The same is true for multiplication, subtraction and division and every other vector operation.
In ND4J, row vectors and column vectors look the same when you print them out with System.out.println(). They will appear like this:
[5.00, 6.00]
Don’t be fooled. Getting the parameters right at the beginning is crucial. addRowVector and addColumnVector will not produce different results when using the same initial vector, because they do not
change a vector’s orientation as row or column.
Elementwise matrix operations
To carry out scalar and vector elementwise operations, we basically pretend we have two matrices of equal shape. Elementwise scalar multiplication can be represented several ways.
[1.0 ,3.0] [c , c] [1.0 ,3.0] [1c ,3c]
c * [2.0 ,4.0] = [c , c] * [2.0 ,4.0] = [2c ,4c]
So you see, elementwise operations match the elements of one matrix with their precise counterparts in another matrix. The element in row 1, column 1 of matrix nd will only be added to the element in
row one column one of matrix c.
This is clearer when we start elementwise vector operations. We imagine the vector, like the scalar, as populating a matrix of equal dimensions to matrix nd. Below, you can see why row and column
vectors lead to different sums.
Column vector:
[1.0 ,3.0] [5.0] [1.0 ,3.0] [5.0 ,5.0] [6.0 ,8.0]
[2.0 ,4.0] + [6.0] = [2.0 ,4.0] + [6.0 ,6.0] = [8.0 ,10.0]
Row vector:
[1.0 ,3.0] [1.0 ,3.0] [5.0 ,6.0] [6.0 ,9.0]
[2.0 ,4.0] + [5.0 ,6.0] = [2.0 ,4.0] + [5.0 ,6.0] = [7.0 ,10.0]
Now you can see why row vectors and column vectors produce different results. They are simply shorthand for different matrices.
Given that we’ve already been doing elementwise matrix operations implicitly with scalars and vectors, it’s a short hop to do them with more varied matrices:
INDArray nd4 = Nd4j.create(new float[]{5,6,7,8},new int[]{2,2});
nd.add(nd4);
Here’s how you can visualize that command:
[1.0 ,3.0] [5.0 ,7.0] [6.0 ,10.0]
[2.0 ,4.0] + [6.0 ,8.0] = [8.0 ,12.0]
Multiplying the initial matrix nd with matrix nd4 works the same way:
nd.mul(nd4);
[1.0 ,3.0] [5.0 ,7.0] [5.0 ,21.0]
[2.0 ,4.0] * [6.0 ,8.0] = [12.0 ,32.0]
The term of art for this particular matrix manipulation is a Hadamard product.
These toy matrices are a useful heuristic to introduce the ND4J interface as well as basic ideas in linear algebra. This framework, however, is built to handle billions of parameters in n dimensions
(and beyond…). | {"url":"https://deeplearning4j.konduit.ai/1.0.0-m2/nd4j/how-to-guides/basics","timestamp":"2024-11-15T03:01:33Z","content_type":"text/html","content_length":"581745","record_id":"<urn:uuid:76cf997a-6f49-40ce-abe2-b8481ceae262>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00886.warc.gz"} |
Research on Algorithm of Airborne Dual-Antenna GNSS/MINS Integrated Navigation System
Department of Intelligent Manufacturing and Industrial Security, Chongqing Vocational Institute of Safety & Technology, Chongqing 404020, China
College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China
Author to whom correspondence should be addressed.
Submission received: 22 December 2022 / Revised: 25 January 2023 / Accepted: 27 January 2023 / Published: 3 February 2023
In view of the difficulties that airborne navigation equipment relies on imports and that domestic high-precision navigation equipment is expensive in the manufacturing field of Chinese navigable
aircraft, a dual-antenna GNSS (global navigation satellite system)/MINS (micro-inertial navigation system) integrated navigation system was developed to implement high-precision and high-reliability
airborne integrated navigation equipment. First, the state equation and measurement equation of the system were established based on the classical discrete Kalman filter principle. Second, because
the MEMS (micro-electric-mechanical system) IMU (inertial measurement unit) is not sensitive enough to Earth's rotation to realize self-alignment, the magnetometer, accelerometer and dual-antenna
GNSS are utilized for a reliable initial attitude alignment. Finally, flight status identification was implemented using the satellite data, accelerometer and gyroscope parameters of the aircraft in
different states. The test results show that the RMS (root mean square) errors of the pitch angle and roll angle of the testing system are less than 0.05° and the heading angle error RMS is less
than 0.15° under the indoor static condition. A UAV flight test was carried out to test the navigation effect of the equipment during aircraft take-off, climbing, turning, cruising and other states,
and to verify the effectiveness of the system algorithm.
1. Introduction
Integrated navigation system research is a hot spot in the fields of aviation and aerospace [
]. At present, the most widely used integrated navigation method in the aviation field is the integration of GNSS (global navigation satellite system) and INS (inertial navigation system) [
]. GNSS is a system capable of global positioning and time synchronization. It can provide continuous, full-time, high-precision localization services. It has now been widely used in military,
aviation, navigation, automobile, agriculture, consumer electronics and numerous other fields and has become an indispensable and significant navigation mode in people’s daily lives [
]. For integration of GNSS and INS, currently, the commonly used classification standard is divided into loose integration, tight integration and deep integration according to different coupling
degrees [
]. In a loosely coupled system, the GNSS and INS work independently and the GNSS output navigation results are then used to filter and correct the INS solution results [
]. This approach has the advantages of small system computation, high real-time performance, easy engineering implementation and high product reliability. Currently, most integrated navigation
products on the market use a loose combination scheme. A loose combination, however, has its flaws. When GNSS can receive less than four satellite signals, satellite positioning fails and the loose
combination does not function properly at this time. Therefore, other sensors are needed to ensure short-term navigation accuracy of the system. Currently, many loosely integrated navigation products
have been applied in practical projects in China. Among them, loosely integrated navigation products are mostly used in military weapons and equipment, such as various missiles, Long March launch
vehicles and aircraft. Xi’an Precision Measurement and Control has accumulated a great deal of technology in loosely integrated products and has successively launched several low-cost GNSS/INS
integrated navigation systems, which have been widely used in vehicle navigation.
At present, the integrated navigation equipment widely used in the fields of military, commercial aircraft and shipping in China is mostly based on the products of laser gyro and fiber optic gyro.
The equipment offers high precision and reliability, but its price is also extremely expensive. However, due to the low cost of navigable aircraft and sensitivity to the price of
airborne equipment, it is not possible to use expensive fiber optic or laser gyro integrated navigation equipment, which causes most domestic navigable aircraft manufacturers to rely on imported
integrated navigation equipment based on MEMS (micro-electric-mechanical system) gyro and accelerometer. Navigation devices based on MEMS devices are tiny, light-weight, low-power consumption,
low-cost and convenient in later maintenance. In addition, with in-depth research of MEMS-related technologies in countries around the world in recent years, accuracy and reliability of navigation
devices based on MEMS have been considerably improved [
] and their performance could meet the requirements of integrated navigation devices for navigable aircraft. However, there is still a gap in popularity and reliability compared to foreign products [
In this paper, we describe the details of the mathematical model and algorithm design of the dual-antenna GNSS/MINS integrated navigation system. First, the coordinate system commonly used in
navigation systems is described and the Euler angle representation method and variation range of vehicular attitude are explained. Then, the mathematical model of the dual-antenna integrated
navigation system is proposed, and the core algorithms of attitude, velocity and position update are analyzed and the error equation is derived. After that, the initial attitude alignment via
accelerometer and magnetometer and pitch and heading angle alignment using dual-antenna GNSS are studied. Moreover, in the case of satellite failure, the flight state of the aircraft is judged by
accelerometer and gyroscope measurements, and then the attitude angle is corrected by the accelerometer in the low dynamic flight state of the aircraft.
2. Introduction to the Main Index Requirements and Reference Coordinate System for Integrated Navigation Systems
Navigational equipment on a navigable aircraft provides high precision and stable attitude, position, velocity and additional information during normal flight of the aircraft to assist the aircraft
system in attitude control, enabling it to move along the planned route and avoid track deviations or intersections with other aircraft track routes.
In accordance with the national aerospace standards SAE AS 8013A-2008 Minimum Performance Standard for Magnetic (Gyro Stabilized) Heading Instruments [
], SAE AS 396B-2008 Tilt Pitometer (Indicative Gyro Stabilized) (Gyro Horizon, Attitude Gyro) [
], DO-160F Airborne Equipment Environment and Test Conditions [
] and referring to the performance indicators of GNSS/INS integrated navigation systems equipped on foreign navigable aircraft, the main index requirements of the proposed integrated navigation system are shown in Table 1.
Integrated navigation requires a set of unified coordinate systems to accurately represent the state of the vehicle, taking different reference objects under different circumstances. The common
coordinate systems are the inertial coordinate system ($i$ system, $O x_i y_i z_i$), the Earth coordinate system ($e$ system, $O x_e y_e z_e$), the geographic coordinate system ($g$ system,
$O x_g y_g z_g$), the navigation coordinate system ($n$ system, $O x_n y_n z_n$), the vehicular coordinate system ($b$ system, $O x_b y_b z_b$), etc. [ ]. The relationship between each coordinate
system is shown in Figure 1.
The $n$ system in this paper is the same as the local $g$ system, with the $x$, $y$ and $z$ axes pointing east, north and up, respectively. The vehicular attitude is represented by the angles between
the $b$ system and the $n$ system; $\psi$ is the heading angle, $\theta$ is the pitch angle and $\gamma$ is the roll angle.
3. Establishing the Mathematical Model of Dual-Antenna GNSS/INS Integrated Navigation System
Due to the requirement of real-time performance and low cost of the system, this paper chooses the loosely integrated navigation method for data fusion. Meanwhile, in order to improve the accuracy
and reliability, the indirect method is used to select the state quantity [
], which refers to the output navigation parameter error of the two navigation systems as the state quantity.
3.1. Discrete Kalman Filter
A solo navigation mode has its own limitations. Filtering and fusion of data from various sensors can make up for the disadvantages of solo navigation method and obtain a navigation system with
higher accuracy and reliability. At present, the most widely used integrated navigation filtering method is Kalman and its improved algorithms [
First, the system state space model is provided.
$\begin{cases} X_k = \Phi_{k/k-1} X_{k-1} + \Gamma_{k-1} W_{k-1} \\ Z_k = H_k X_k + V_k \end{cases}$ (1)
where $X_k$ is the state vector at time $t_k$; $Z_k$ is the measurement vector at time $t_k$; $\Phi_{k/k-1}$ is the state transition matrix from time $t_{k-1}$ to time $t_k$; $\Gamma_{k-1}$ is the
noise distribution matrix; $H_k$ is the measurement matrix; $W_{k-1}$ is the system noise vector and $V_k$ is the measurement noise vector. Assuming that they are white noises subject to zero-mean
Gaussian distribution, the following equation is satisfied.
$\begin{cases} E[W_k] = 0, \; E[W_k W_j^T] = Q_k \delta_{kj} \\ E[V_k] = 0, \; E[V_k V_j^T] = R_k \delta_{kj} \\ E[W_k V_j^T] = 0 \end{cases}$ (2)
where $Q_k$ is the system (process) noise covariance matrix, $R_k$ is the measurement noise covariance matrix and $\delta_{kj}$ is the Kronecker delta function. $Q_k$ is generally required to be
non-negative definite and $R_k$ positive definite, that is, $Q_k \ge 0$ and $R_k > 0$.
KF (Kalman filter) is mainly divided into prediction and correction. The prediction part consists of the state one-step prediction and the MSE (mean square error) matrix prediction. The formulas are
as follows:
$\hat{X}_{k/k-1} = \Phi_{k/k-1} \hat{X}_{k-1}$ (3)
$P_{k/k-1} = \Phi_{k/k-1} P_{k-1} \Phi_{k/k-1}^T + \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T$ (4)
In the correction step, the Kalman gain is calculated first, as shown in (5).
$K_k = P_{k/k-1} H_k^T (H_k P_{k/k-1} H_k^T + R_k)^{-1}$ (5)
Then, state estimation and MSE matrix estimation are performed, as shown in (6) and (7).
$\hat{X}_k = \hat{X}_{k/k-1} + K_k (Z_k - H_k \hat{X}_{k/k-1})$ (6)
$P_k = (I - K_k H_k) P_{k/k-1}$ (7)
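To make the recursion in Equations (3)-(7) concrete, here is a minimal one-dimensional Kalman filter sketch in Python; the matrices and measurements are illustrative toy values only, not the 15-dimensional filter designed in this paper.
import numpy as np

Phi = np.array([[1.0]])   # state transition
H = np.array([[1.0]])     # measurement matrix
Q = np.array([[0.01]])    # process noise covariance
R = np.array([[0.1]])     # measurement noise covariance

x = np.array([[0.0]])     # state estimate
P = np.array([[1.0]])     # MSE matrix

for z in [0.9, 1.1, 1.0]:                          # hypothetical measurements
    x = Phi @ x                                    # Equation (3)
    P = Phi @ P @ Phi.T + Q                        # Equation (4), with Gamma taken as identity
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Equation (5)
    x = x + K @ (np.array([[z]]) - H @ x)          # Equation (6)
    P = (np.eye(1) - K @ H) @ P                    # Equation (7)

print(x, P)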
3.2. State Equation, Measurement Equation and Parameter Selection of Design Navigation System
The state equation and measurement equation of the system are used to reflect the state characteristics of the system and the relationship between measurement information and state, respectively. The
simplified navigation system equation of state and measurement equation are designed separately, and the parameter selection method for the system equation of state and measurement equation is
The equation of state of the integrated navigation system is as follows.
$\dot{X}(t)_{15 \times 1} = F(t)_{15 \times 15} X(t)_{15 \times 1} + G(t)_{15 \times 12} W(t)_{12 \times 1}$ (8)
where $F(t)$ is the state transition matrix of the system; $X(t)$ is the state vector; $G(t)$ is the system noise distribution matrix; $W(t)$ is the system noise matrix.
This paper selects the MINS attitude misalignment angles $[\phi_E \; \phi_N \; \phi_U]^T$, the east-north-up velocity errors $[\delta v_E \; \delta v_N \; \delta v_U]^T$, the latitude, longitude and
height position errors $[\delta L \; \delta\lambda \; \delta h]^T$, the gyroscope correlated drifts $[\varepsilon_{gx}^b \; \varepsilon_{gy}^b \; \varepsilon_{gz}^b]^T$ and the accelerometer
correlated drifts $[\nabla_{ax}^b \; \nabla_{ay}^b \; \nabla_{az}^b]^T$ as the state vector (15 dimensions in total), as follows.
$X(t) = [\phi_E \; \phi_N \; \phi_U \;\; \delta v_E \; \delta v_N \; \delta v_U \;\; \delta L \; \delta\lambda \; \delta h \;\; \varepsilon_{gx}^b \; \varepsilon_{gy}^b \; \varepsilon_{gz}^b \;\; \nabla_{ax}^b \; \nabla_{ay}^b \; \nabla_{az}^b]^T$ (9)
The state transition matrix of the system at epoch $t$ is as follows.
$F(t) = \begin{bmatrix} F_I(t)_{9 \times 9} & F_C(t)_{9 \times 6} \\ 0_{6 \times 9} & F_\varepsilon(t)_{6 \times 6} \end{bmatrix}_{15 \times 15}$ (10)
where $F_I(t)$ is the error matrix of the strapdown inertial navigation system, which can be expressed in the following form.
$F_I(t) = \begin{bmatrix} 0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 3} \\ f_{sf}^n \times & 0_{3 \times 3} & 0_{3 \times 3} \\ 0_{3 \times 3} & F_{3 \times 3}^p & 0_{3 \times 3} \end{bmatrix}_{9 \times 9}$ (11)
$F_C(t) = \begin{bmatrix} -C_b^n & 0_{3 \times 3} \\ 0_{3 \times 3} & C_b^n \\ 0_{3 \times 3} & 0_{3 \times 3} \end{bmatrix}_{9 \times 6}$ (12)
$F_\varepsilon(t) = \begin{bmatrix} -\alpha_g & 0_{3 \times 3} \\ 0_{3 \times 3} & -\alpha_a \end{bmatrix}_{6 \times 6}$ (13)
where $C_b^n$ is the ideal direction cosine matrix, $f_{sf}^n$ is the measurement of the accelerometer in the navigation frame, $\alpha_s = \mathrm{diag}(1/\tau_{sx} \;\; 1/\tau_{sy} \;\; 1/\tau_{sz}) \; (s = g, a)$,
and $1/\tau_{si} \; (s = g, a; \; i = x, y, z)$ are the Markov time correlation constants.
In this paper, the random white noise of the gyroscope $[w_{g\varepsilon x} \; w_{g\varepsilon y} \; w_{g\varepsilon z}]^T$, the random white noise of the accelerometer
$[w_{a\varepsilon x} \; w_{a\varepsilon y} \; w_{a\varepsilon z}]^T$, the first-order Markov driving white noise of the gyroscope $[\eta_{gx} \; \eta_{gy} \; \eta_{gz}]^T$ and the first-order Markov
driving white noise of the accelerometer $[\eta_{ax} \; \eta_{ay} \; \eta_{az}]^T$ are taken as the noise of the system, so the system noise matrix is shown in the following equation.
$W(t) = [w_{g\varepsilon x} \; w_{g\varepsilon y} \; w_{g\varepsilon z} \;\; w_{a\varepsilon x} \; w_{a\varepsilon y} \; w_{a\varepsilon z} \;\; \eta_{gx} \; \eta_{gy} \; \eta_{gz} \;\; \eta_{ax} \; \eta_{ay} \; \eta_{az}]$ (14)
The covariance matrix corresponding to the noise matrix $W(t)$ is shown in the following equation.
$P(t) = \mathrm{diag}[\sigma_{gx}^2 \; \sigma_{gy}^2 \; \sigma_{gz}^2 \;\; \sigma_{ax}^2 \; \sigma_{ay}^2 \; \sigma_{az}^2 \;\; 2\alpha\sigma_{g\varepsilon x}^2 \; 2\alpha\sigma_{g\varepsilon y}^2 \; 2\alpha\sigma_{g\varepsilon z}^2 \;\; 2\beta\sigma_{a\varepsilon x}^2 \; 2\beta\sigma_{a\varepsilon y}^2 \; 2\beta\sigma_{a\varepsilon z}^2]$ (15)
The system noise distribution matrix is shown in Equation (16) below.
$G(t) = \begin{bmatrix} -C_b^n & 0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 3} \\ 0_{3 \times 3} & C_b^n & 0_{3 \times 3} & 0_{3 \times 3} \\ 0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 3} \\ 0_{3 \times 3} & 0_{3 \times 3} & I_{3 \times 3} & 0_{3 \times 3} \\ 0_{3 \times 3} & 0_{3 \times 3} & 0_{3 \times 3} & I_{3 \times 3} \end{bmatrix}_{15 \times 12}$ (16)
The designed dual-antenna GNSS/MINS loosely integrated navigation system uses nine dimensions of position, velocity and attitude data as observed measurement. The measurement equation is shown as
$Z(t)_{9 \times 1} = H(t)_{9 \times 15} X(t)_{15 \times 1} + R(t)_{9 \times 1}$ (17)
$Z(t) = \left[\vec{\Phi}_I^n - \vec{\Phi}_{G/M}^n \;\; \vec{v}_I^n - \vec{v}_G^n \;\; \vec{p}_I^n - \vec{p}_G^n\right]^T$
$Φ I n$
are the attitude calculated by INS and
$Φ G / M n$
is the attitude angle provided by GNSS and magnetometer. Since GNSS can only provide heading and pitch angle, when the system is carrying out flight attitude recognition and confirming that the
current state can use the accelerometer for error compensation, it will use the accelerometer to obtain the roll angle measurement value; otherwise, the roll angle measurement error will be set to
$v I n$
$v G n$
are the velocity output information of INS and GNSS, and
$p I n$
$p G n$
are the latitude, longitude and height output information of INS and GNSS, respectively.
$H(t) = \left[H_\Phi(t)_{3 \times 15} \;\; H_v(t)_{3 \times 15} \;\; H_p(t)_{3 \times 15}\right]^T$
$H Φ = [ I 3 × 3 0 3 × 12 ]$
$H v = [ 0 3 × 3 I 3 × 3 0 3 × 9 ]$
$H p = [ 0 3 × 6 I 3 × 3 0 3 × 6 ]$
The measurement noise matrix is $R ( t ) = [ R Φ ( t ) R v ( t ) R p ( t ) ] T$, where $R Φ ( t )$ is the white noise of GNSS and magnetometer, $R v ( t )$ is the white noise of GNSS receiver
velocity and $R p ( t )$ is position measurement white noise.
Under the condition of the satellite signal being effective, it is preferred to use the heading and pitch signals of the dual-antenna GNSS as the heading reference information and the MINS heading
calculation difference as the measurement value. Magnetometer data validity judgments are completed when the dual-antenna GNSS signal is not valid or when the accuracy factor level does not meet the
requirements. When its validity meets the requirements, it is taken as the head reference information, the MINS solution heading difference is taken as the measurement value and the pitch error
information is taken as 0. When neither of them meet the requirements, the integrated filtering of attitude angle will not be performed.
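The source-selection logic described in this paragraph can be sketched as follows; the function name, flags and printed strings are hypothetical illustrations, not identifiers from the actual system.
def select_heading_reference(gnss_valid, gnss_dop_ok, mag_valid):
    # Prefer the dual-antenna GNSS heading/pitch, fall back to the magnetometer,
    # otherwise skip the attitude measurement update entirely.
    if gnss_valid and gnss_dop_ok:
        return "dual-antenna GNSS heading and pitch"
    if mag_valid:
        return "magnetometer heading (pitch error set to 0)"
    return "no attitude filtering this epoch"

print(select_heading_reference(True, True, True))    # GNSS preferred
print(select_heading_reference(False, False, True))  # magnetometer fallback
print(select_heading_reference(False, False, False)) # skip the attitude update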
4. Calculation Error of Dual-Antenna GNSS System and Design of Updating Algorithm for Strapdown Inertial Navigation System
4.1. Dual-Antenna GNSS System and Precision Factor
GNSS consists of space satellite, ground console and user receiving terminal. GNSS space satellite systems include GPS (Global Positioning System), GLONASS, GALILEO, Beidou, etc. Each system consists
of several satellites in different orbits. Each satellite continuously sends signals to Earth to help complete navigation and positioning, timing and short message communication and other functions.
From the perspective of positioning methods, GNSS can be divided into absolute positioning and differential positioning. Based on single-antenna GNSS, with antenna 1 as the reference station and
antenna 2 as the terminal station, the coordinates of the two antennas are obtained by differential positioning method, and the baseline vector between the two antennas is solved so as to complete
the heading and pitch angle measurement.
The calculation error of GNSS can be judged by three precision factors, among which
$P D O P$
(position dilution of precision) represents the square and square root of standard deviation between longitude, latitude and height,
$V D O P$
(vertical dilution of precision) represents the standard deviation of the elevation and
$H D O P$
(horizontal dilution of precision) represents the square root of the sum of squared standard deviations of longitude and latitude [
].The relationship between them can be expressed as
$P D O P 2 = H D O P 2 + V D O P 2$
]. By discriminating the value of the three accuracy factors, the quality of the current satellite calculation can be identified. The better the quality of the solution, the smaller the error and the
corresponding precision factor value will be. Combine with other methods to compensate.
4.2. Updating Algorithm of Strapdown Inertial Navigation System
The strapdown inertial navigation system is firmly connected to the carrier, and the angular velocity and specific force of the carrier are measured by collecting the data of the internal inertial
measurement unit. Then, the attitude, velocity and position information of the carrier are calculated by the inertial navigation updating equation. MINS uses MEMS gyro and MEMS accelerometer devices,
making it smaller and cheaper than other inertial navigation systems and allowing it to play a more important role in more areas.
The MINS obtains carrier-specific force information from the MEMS accelerometer inside the IMU, and the MEMS gyroscope obtains carrier angular rate information. After error compensation, coordinate
system change, updating equation computation and other steps, the latest navigation information of the carrier is obtained. The inertial navigation solution graph is shown in
Figure 2
First, the original carrier-specific force information obtained by the accelerometer is compensated for error to eliminate the accelerometer bias and installation error, and then the vector value $f
→ s f n$ of the specific force in the navigation coordinate system is obtained by multiplying the compensated specific force vector $f → s f b$ by the directional cosine matrix $C b n$.
Next, the velocity vector $v → n$ under the navigation coordinate system is obtained by substituting $f → s f n$ into the velocity update equation, and then the position coordinate quantity under the
updated navigation coordinate system is obtained by substituting $v → n$ into the position update equation.
The angular rate $\vec{\omega}_{ib}^b$ is obtained from the gyro data after the effects of its bias and mounting errors are removed by error compensation. Due to the motion of the carrier, the
carrier will generate an angular rate $\vec{\omega}_{en}^n$ of the $n$ coordinate system with respect to the $e$ coordinate system, expressed in the $n$ coordinate system, which is given by:
$\vec{\omega}_{en}^n = \left[\, -\dfrac{v_N}{R_M + h} \;\;\; \dfrac{v_E}{R_N + h} \;\;\; \dfrac{v_E}{R_N + h} \tan L \,\right]^T$
where $R_M$ is the radius of curvature of the meridian circle, $R_N$ is the radius of curvature of the prime vertical circle, $h$ is the altitude and $L$ is the local latitude. When the curvature of
Earth is neglected and it is treated as a sphere, the radii can be approximated as $R_M = R_N = \bar{R} = 6{,}371{,}001 \; \mathrm{m}$.
The rotation rate of Earth is $ω i e$. Using Equation $ω → i e n = [ 0 ω i e cos L ω i e sin L ] T$, we obtain its projection under the $n$ coordinate system. Add $ω → e n n$ and $ω → i e n$ to
obtain the rotation angular rate $ω → i n n$ of the $n$ coordinate system with respect to the I-system, and then transform the orientation cosine matrix $C n b$ to obtain the angular rate projection
$ω → i n b$ of the B-system.
Finally, the angular rate projection $ω → n b b$ of the B-system with respect to the N-system in the B-system is obtained by subtracting $ω → i n b$ from $ω → i b b$ and substituting it into the
attitude differential equation $C ˙ b n = C b n ( ω → n b b × )$ to update the equivalent cosine matrix and the attitude angle.
Due to the large noise of the MEMS gyroscope, it is difficult to perceive the angular rate variations of the system with respect to the I-system caused by the rotation of Earth and the motion of the
aircraft. To simplify the calculation, the angular rate $ω → i n b$ can be taken to be zero.
Inertial navigation update algorithm includes attitude update algorithm, velocity update algorithm and position update algorithm, among which accuracy of attitude update algorithm has a decisive
influence on accuracy of the whole inertial navigation system.
Attitude updating algorithm
There are three common ways to represent the attitude, which are Euler angle, direction cosine matrix and Quaternion. Euler angle is the most intuitive representation, which directly reflects the
current vehicular attitude through the size of heading angle, pitch angle and roll angle. However, when the pitch angle is close to 90°, the Euler angle will be used to calculate the singular value,
resulting in the phenomenon of universal joint deadlock [
], and it is impossible to measure the full attitude. The direction cosine matrix has 9 differential equations, which requires a large amount of computation and is not conducive to the real-time
performance of navigation computer. Quaternion has been widely used because it has only four elements, a small amount of computation and can overcome the influence of singular value on attitude
measurement. In this paper, Quaternion will be used for attitude representation, and its expression method is as follows.
$Q = q_0 + q_1 \vec{i} + q_2 \vec{j} + q_3 \vec{k}$
Quaternion is used to represent the relative conversion relationship between the vehicular system and the navigation system, and the Quaternion expression of the direction cosine matrix can be
obtained in Equation (23).
$C_b^n = \begin{bmatrix} q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 - q_0 q_3) & 2(q_1 q_3 + q_0 q_2) \\ 2(q_1 q_2 + q_0 q_3) & 1 - 2(q_1^2 + q_3^2) & 2(q_2 q_3 - q_0 q_1) \\ 2(q_1 q_3 - q_0 q_2) & 2(q_2 q_3 + q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2 \end{bmatrix}$ (23)
Using Euler angles to represent the direction cosine matrix is shown below.
$C_b^n = \begin{bmatrix} \cos\gamma\cos\psi + \sin\theta\sin\gamma\sin\psi & \cos\theta\sin\psi & \sin\gamma\cos\psi - \sin\theta\cos\gamma\sin\psi \\ -\cos\gamma\sin\psi + \sin\theta\sin\gamma\cos\psi & \cos\theta\cos\psi & -\sin\gamma\sin\psi - \sin\theta\cos\gamma\cos\psi \\ -\cos\theta\sin\gamma & \sin\theta & \cos\gamma\cos\theta \end{bmatrix}$ (24)
$\begin{cases} \theta = \arcsin(2 q_2 q_3 + 2 q_0 q_1) \\ \gamma = -\arctan\left((2 q_1 q_3 - 2 q_0 q_2)/(q_0^2 - q_1^2 - q_2^2 + q_3^2)\right) \\ \psi = \arctan\left((2 q_1 q_2 - 2 q_0 q_3)/(q_0^2 - q_1^2 + q_2^2 - q_3^2)\right) \end{cases}$ (25)
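As a quick numerical check of Equation (25), the Python sketch below transcribes the three formulas directly; atan2 is used in place of arctan purely to handle quadrants, which is an implementation choice rather than something stated in the paper, and the quaternion is assumed to be normalized.
import math

def quaternion_to_euler(q0, q1, q2, q3):
    theta = math.asin(2 * q2 * q3 + 2 * q0 * q1)                                    # pitch
    gamma = -math.atan2(2 * q1 * q3 - 2 * q0 * q2, q0**2 - q1**2 - q2**2 + q3**2)   # roll
    psi = math.atan2(2 * q1 * q2 - 2 * q0 * q3, q0**2 - q1**2 + q2**2 - q3**2)      # heading
    return theta, gamma, psi

print(quaternion_to_euler(1.0, 0.0, 0.0, 0.0))  # identity attitude gives (0.0, -0.0, 0.0)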
The Quaternion differential equation is as follows.
$\begin{bmatrix} \dot{q}_0 \\ \dot{q}_1 \\ \dot{q}_2 \\ \dot{q}_3 \end{bmatrix} = \dfrac{1}{2} \begin{bmatrix} 0 & -\omega_x^b & -\omega_y^b & -\omega_z^b \\ \omega_x^b & 0 & \omega_z^b & -\omega_y^b \\ \omega_y^b & -\omega_z^b & 0 & \omega_x^b \\ \omega_z^b & \omega_y^b & -\omega_x^b & 0 \end{bmatrix} \begin{bmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{bmatrix}$ (26)
where $\omega_x^b, \omega_y^b, \omega_z^b$ are the angular velocities of the vehicle around the $x$, $y$ and $z$ axes of the vehicular coordinate system.
When non-fixed axis rotation occurs, direct Euler angle solution will introduce rotational non-exchangeable error to affect the accuracy of solution. Therefore, an equivalent rotation vector should
be introduced to reduce the impact of error on the accuracy. The differential equation of the Equivalent rotation vector is shown in Equation (27).
$\dot{\vec{\phi}}_t = \vec{\omega}_t^b + \dfrac{1}{2} \Delta\vec{\theta}_t^b \times \vec{\omega}_t^b$ (27)
In order to ensure the accuracy and real-time performance of the solution, this paper adopts the algorithm of “monad sample + previous period” to calculate the equivalent rotation vector, and its
formula is shown in Equation (28).
$\vec{\phi}_t = \Delta\vec{\theta}_1^b + \dfrac{1}{12} \Delta\vec{\theta}_0^b \times \Delta\vec{\theta}_1^b$ (28)
where the angle increment at the previous time is
$Δ θ → 0 b = ∫ - T 0 ω → t b d t$
and the angle increment at the current time is
$Δ θ → 1 b = ∫ 0 T ω → t b d t$
By solving the above formula, the Quaternion recursive calculation formula is obtained.
$Q b ( t ) n = Q b ( t − 1 ) n ∘ Q b ( t ) b ( t − 1 )$
$Q b ( t ) b ( t − 1 ) = [ cos Δ ϕ t 2 Δ ϕ → t Δ ϕ t sin Δ ϕ t 2 ]$
$Q b ( t ) n$
is the updated Quaternion at time
$Q b ( t ) b ( t − 1 )$
is the change value of the Quaternion at time
$t − 1$
Velocity update algorithm
To calculate Earth’s rotation velocity
$ω i e e ≈ 0.025 ° / s$
, assume that the cruising velocity of a navigable aircraft is
$v = 230 km / h$
and its altitude is
$h = 2000 m$
; its maximum angular velocity of
series relative to
series is
$v / ( R ¯ + h ) ≈ 0.036 ° / h$
, both of which will be submerged in the noise of MEMS gyroscope. Therefore, the simplified inertial navigation specific force equation is directly used in this paper.
$\dot{\vec{v}}_{en}^n = C_b^n \vec{f}_{sf}^b + \vec{g}^n$
The velocity renewal equation is as follows.
$\vec{v}_t^n = \vec{v}_{t-1}^n + C_{b(t-1)}^{n(t-1)} \left(\Delta\vec{v}_t^b + \dfrac{1}{2} \Delta\vec{\theta}_t^b \times \Delta\vec{v}_t^b\right) + \vec{g}^n T$
$v → k n$
is the velocity of the vehicular under the
system at time
$t k$
$Δ v → k b$
is the specific force increment output by the accelerometer during the period
$t k − 1$
$t k$
$θ → k b$
is the angular velocity increment of the vehicular during the period
$t k − 1$
$t k$
is the sampling period.
Location update algorithm
Differential equations with updated positions are as follows.
$\begin{bmatrix} \dot{L} \\ \dot{\lambda} \\ \dot{h} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{R_M + h} & 0 \\ \dfrac{\sec L}{R_N + h} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} v_E \\ v_N \\ v_U \end{bmatrix}$ (33)
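As a rough illustration of how Equation (33) can be integrated one step at a time, the Python sketch below performs a single Euler step using the spherical-Earth approximation $R_M = R_N = \bar{R}$ introduced earlier; the initial values and the step size are illustrative only, not taken from the paper.
import math

R = 6371001.0   # mean Earth radius in metres, the spherical approximation used above

def update_position(lat, lon, h, v_e, v_n, v_u, dt):
    # lat and lon are in radians; velocities are east, north, up components in m/s.
    lat_dot = v_n / (R + h)
    lon_dot = v_e / ((R + h) * math.cos(lat))
    h_dot = v_u
    return lat + lat_dot * dt, lon + lon_dot * dt, h + h_dot * dt

print(update_position(math.radians(45.0), math.radians(126.0), 2000.0, 60.0, 10.0, 0.0, 0.01))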
4.3. Error Analysis of Strapdown Inertial Navigation System and Establishment of Error Equation and Model
The updating equations of the strapdown inertial navigation system are assumed to operate without errors under ideal conditions. However, in actual application, measurement and operation of
the inertial devices can lead to errors, which are further accumulated by the navigation algorithm, and the overall accuracy of the system will gradually decrease. For this reason, error analysis and
modeling of the system are needed to reduce the impact of errors.
Attitude error Equation
Under the assumption that there is no error in the system, the ideal direction cosine matrix is
$C b n$
. However, realistic rotation will have errors added to it, resulting in the actual calculated direction cosine matrix being
$C b n ′$
. That means that there is a deviation between the calculated navigation coordinate system
$n ′$
and the ideal navigation coordinate system
obtained from the calculation, and according to the chain multiplication rule, there is:
$C b n ′ = C n n ′ C b n$
The equivalent rotation vector from the
system to the
$n ′$
system is denoted as
$ϕ →$
and referred to as the misalignment angle error. With short sampling time,
$ϕ →$
is a very small quantity and can be approximated using the equation for the relationship between equivalent rotation vector and direction cosine [
], which is obtained as follows:
$C b n ′ ≈ [ I 3 × 3 − ( ϕ → × ) ] C b n$
Without considering its angular rate with respect to the inertial coordinate system caused by the motion of the navigation coordinate system, there is equation
$C ˙ b n = C b n ( ω → i b b × )$
, according to which the differential equation for the direction cosine matrix in the presence of error is obtained as:
$C ˙ b n ′ = C b n ′ ( ω ˜ → i b b × )$
$ω ˜ → i b b$
is the vehicular angular rate containing the error; the equation is as follows:
$ω ˜ → i b b = ω → i b b + ε → g b + w → g ε$
$ε → g b$
is the first-order Markov process drift of the gyroscope and
$w → g ε$
is the white noise of the gyroscope.
Differentiating both sides of Equation (35) yields:
$\dot{C}^{n'}_{b} = \dfrac{d\{[I_{3\times3} - (\vec{\phi}\times)]C^{n}_{b}\}}{dt} = -(\dot{\vec{\phi}}\times)C^{n}_{b} + [I_{3\times3} - (\vec{\phi}\times)]\dot{C}^{n}_{b}$
Combining Equations (36) and (37), the following equation can be obtained:
$-(\dot{\vec{\phi}}\times)C^{n}_{b} + [I_{3\times3} - (\vec{\phi}\times)]\dot{C}^{n}_{b} = C^{n'}_{b}(\tilde{\vec{\omega}}^{b}_{ib}\times)$
Substituting Equations (35)–(37) into (39), it can be obtained that:
$-(\dot{\vec{\phi}}\times)C^{n}_{b} + [I_{3\times3} - (\vec{\phi}\times)]C^{n}_{b}(\vec{\omega}^{b}_{ib}\times) = [I_{3\times3} - (\vec{\phi}\times)]C^{n}_{b}[(\vec{\omega}^{b}_{ib} + \vec{\varepsilon}^{b}_{g} + \vec{w}_{g\varepsilon})\times]$
Rearranging and neglecting the second-order small quantities, the attitude error Equation (41) is obtained:
$\dot{\vec{\phi}} = -C^{n}_{b}(\vec{\varepsilon}^{b}_{g} + \vec{w}_{g\varepsilon})$
Velocity error Equation
As with the attitude error, the various errors included in the actual navigation calculation lead to a deviation between the velocity $\tilde{\vec{v}}^{n'}$ obtained from the navigation calculation and the ideal velocity $\vec{v}^{n}$. The velocity error is defined as:
$\delta\vec{v}^{n} = \tilde{\vec{v}}^{n'} - \vec{v}^{n}$
Differentiating both sides of Equation (42) gives:
$\delta\dot{\vec{v}}^{n} = \dot{\tilde{\vec{v}}}^{n'} - \dot{\vec{v}}^{n}$
For convenience, the simplified inertial navigation specific force differential equation (43) is rewritten as follows.
$\dot{\vec{v}}^{n} = C^{n}_{b}\vec{f}^{b}_{sf} + \vec{g}^{n}$
Without considering the gravity error, the actually computed specific force differential equation is obtained as follows.
$\dot{\tilde{\vec{v}}}^{n'} = \tilde{C}^{n'}_{b}\tilde{\vec{f}}^{b}_{sf} + \vec{g}^{n}$
$\tilde{\vec{f}}^{b}_{sf} = \vec{f}^{b}_{sf} + \vec{\nabla}^{b}_{a} + \vec{w}_{a\varepsilon}$
where $\vec{\nabla}^{b}_{a}$ is the first-order Markov process drift of the accelerometer and $\vec{w}_{a\varepsilon}$ is the white noise of the accelerometer.
Subtracting the ideal specific force differential equation from the actually computed one, i.e., Equation (45) minus (44), the error differential equation is obtained as:
$\delta\dot{\vec{v}}^{n} = \dot{\tilde{\vec{v}}}^{n'} - \dot{\vec{v}}^{n} = \tilde{C}^{n'}_{b}\tilde{\vec{f}}^{b}_{sf} - C^{n}_{b}\vec{f}^{b}_{sf}$
Then, substituting Equations (35) and (46) into (47), expanding, and neglecting the second-order small error terms, the velocity error equation is obtained:
$\delta\dot{\vec{v}}^{n} = C^{n}_{b}\vec{f}^{b}_{sf}\times\vec{\phi} + C^{n}_{b}(\vec{\nabla}^{b}_{a} + \vec{w}_{a\varepsilon})$
Position error Equation
Perturbing the latitude, longitude and altitude differential equations of Equation (33), the following can be obtained.
$\begin{cases} \delta\dot{L} = \delta v_{N}/(R_{M}+h) - \delta h\, v_{N}/(R_{M}+h)^{2} \\ \delta\dot{\lambda} = \delta v_{E}\sec L/(R_{N}+h) + \delta L\, v_{E}\sec L\tan L/(R_{N}+h) - \delta h\, v_{E}\sec L/(R_{N}+h)^{2} \\ \delta\dot{h} = \delta v_{U} \end{cases}$
Neglecting the effect of the second-order small quantities, this is written in matrix form as follows.
$\begin{bmatrix} \delta\dot{L} \\ \delta\dot{\lambda} \\ \delta\dot{h} \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{R_{M}+h} & 0 \\ \dfrac{\sec L}{R_{N}+h} & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \delta v_{E} \\ \delta v_{N} \\ \delta v_{U} \end{bmatrix} = F^{p}_{3\times3} \begin{bmatrix} \delta v_{E} \\ \delta v_{N} \\ \delta v_{U} \end{bmatrix}$
Mathematical model of gyroscope error
The main error of the gyroscope can be obtained according to Equation (37); it contains the first-order Markov process drift error of the gyroscope and the white-noise random drift of the gyroscope, and the MSE of the white noise is $\sigma_{g}$ [ ].
The mathematical model of the first-order Markov process drift error $\vec{\varepsilon}^{b}_{g}$ is as follows.
$\dot{\vec{\varepsilon}}^{b}_{g} = -\alpha_{g}\vec{\varepsilon}^{b}_{g} + \vec{\eta}_{g}$
where $\alpha_{g}$ is the inverse of the Markov correlation time and $\vec{\eta}_{g}$ is the Markov driving white noise with mean square deviation $\sigma_{g\varepsilon}$. The drift error is converted to the navigation coordinate system using the equation $\vec{\varepsilon}^{n}_{g} = C^{n}_{b}\vec{\varepsilon}^{b}_{g}$.
Mathematical model of accelerometer error
According to Equation (46), the main error of the accelerometer is $\vec{e}_{a} = \vec{\nabla}^{b}_{a} + \vec{w}_{a\varepsilon}$, which contains the first-order Markov process drift error $\vec{\nabla}^{b}_{a}$ of the accelerometer and the white-noise random drift $\vec{w}_{a\varepsilon}$ of the accelerometer, whose MSE is $\sigma_{a}$.
The mathematical model of the first-order Markov process drift error $\vec{\nabla}^{b}_{a}$ is as follows.
$\dot{\vec{\nabla}}^{b}_{a} = -\alpha_{a}\vec{\nabla}^{b}_{a} + \vec{\eta}_{a}$
As in the gyroscope error model, $\alpha_{a}$ in the above equation is the inverse of the Markov correlation time and $\vec{\eta}_{a}$ is the Markov driving white noise with mean square deviation $\sigma_{a\varepsilon}$. The drift error is converted to the navigation coordinate system using the equation $\vec{\nabla}^{n}_{a} = C^{n}_{b}\vec{\nabla}^{b}_{a}$.
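For intuition only, a discretized simulation of the first-order Markov drift model above could look like the following sketch; the Euler discretization, the function name and the way the driving noise is scaled are assumptions made for illustration, not the paper's implementation.

#include <random>

// Simple Euler discretization of d(eps)/dt = -alpha * eps + eta:
// eps_{k+1} = (1 - alpha * dt) * eps_k + eta_k * dt, with eta_k ~ N(0, sigma_eta^2)
double stepMarkovDrift(double eps, double alpha, double dt, double sigma_eta,
                       std::mt19937 &rng) {
    std::normal_distribution<double> eta(0.0, sigma_eta);
    return (1.0 - alpha * dt) * eps + eta(rng) * dt;
}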
4.4. An Initial Alignment Algorithm of Dual-Antenna GNSS and Magnetometer-Assisted MINS
Before normal operation of the strapdown inertial navigation system, an initial alignment is first required to determine the initial position, velocity, attitude and other information of the vehicle in its current state. Among these, the initial alignment of the vehicle heading is particularly important and is the most difficult part of the alignment phase [ ]. A high-accuracy inertial navigation system can sense the rotation of the Earth and achieve heading alignment without other external auxiliary information. However, the MEMS IMU is noisy and insensitive and cannot measure the angular rate of the Earth's rotation; for this reason, this paper uses the magnetometer and dual-antenna GNSS for the initial heading angle alignment. The initial velocity is set to zero by default, and the positioning output of GNSS after stable operation is used as the initial position.
Initial attitude alignment based on the accelerometer and magnetometer
The vector measured by the accelerometer in the vehicle coordinate system is $\vec{f}^{b} = [f^{b}_{x}\ f^{b}_{y}\ f^{b}_{z}]^{T}$, and the gravity vector in the navigation coordinate system is $\vec{g}^{n} = [0\ 0\ -g]^{T}$. In the static case, the specific force equation $(C^{n}_{b})^{T}\vec{g}^{n} + \vec{f}^{b} = \vec{0}$ holds, i.e., $\vec{f}^{b} = -(C^{n}_{b})^{T}\vec{g}^{n}$, which is expanded into the following component form.
$\begin{bmatrix} f^{b}_{x} \\ f^{b}_{y} \\ f^{b}_{z} \end{bmatrix} = -\begin{bmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{bmatrix}^{T} \begin{bmatrix} 0 \\ 0 \\ -g \end{bmatrix} = \begin{bmatrix} C_{31}g \\ C_{32}g \\ C_{33}g \end{bmatrix}$
By solving the above equation, it can be obtained that:
$\begin{bmatrix} C_{31} & C_{32} & C_{33} \end{bmatrix} = \begin{bmatrix} f^{b}_{x}/g & f^{b}_{y}/g & f^{b}_{z}/g \end{bmatrix}$
The pitch angle $\theta$ and the cross-roll angle $\gamma$ can then be obtained by combining Equation (20) as [ ]:
$\begin{cases} \theta = \arcsin(f^{b}_{y}/g) \\ \gamma = -\arctan(f^{b}_{x}/f^{b}_{z}) \end{cases}$
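A minimal C++ sketch of these two formulas follows; it assumes a static accelerometer reading with the axis convention used in the equation above, and the function name coarseLevel is an illustrative choice rather than anything from the paper.

#include <cmath>

// Coarse leveling: pitch and roll from a static accelerometer measurement
// f^b = (fx, fy, fz), following theta = asin(fy/g) and gamma = -atan(fx/fz).
void coarseLevel(double fx, double fy, double fz, double g,
                 double &pitch, double &roll) {
    pitch = std::asin(fy / g);    // theta
    roll = -std::atan2(fx, fz);   // gamma; atan2 keeps the correct quadrant
}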
The magnetometer is rigidly mounted on the vehicle, and its coordinate system is the vehicle coordinate system $b$. The output of the magnetometer in the $b$ system is $M^{b} = [m^{b}_{x}\ m^{b}_{y}\ m^{b}_{z}]^{T}$. When the vehicle coordinate system $b$ coincides with the magnetic geographic coordinate system $m$, the magnetic field vector is $M^{m} = [0\ m_{N}\ m_{U}]^{T}$, and the magnetic field measured by the magnetometer at this time is $M^{b} = C^{b}_{n}|_{\psi=\psi_{m}} M^{n}$. According to this formula and Equation (20), the following can be obtained:
$\begin{bmatrix} m^{b}_{x} \\ m^{b}_{y} \\ m^{b}_{z} \end{bmatrix} = \begin{bmatrix} \cos\theta\sin\psi_{m}\, m_{N} + (\sin\gamma\cos\psi_{m} - \sin\theta\cos\gamma\sin\psi_{m})\, m_{U} \\ \cos\theta\cos\psi_{m}\, m_{N} - (\sin\gamma\sin\psi_{m} + \sin\theta\cos\gamma\cos\psi_{m})\, m_{U} \\ \sin\theta\, m_{N} + \cos\gamma\cos\theta\, m_{U} \end{bmatrix}$
Further, the heading angle can be obtained as:
$\psi = \psi_{m} + \mu = \arctan\left(\dfrac{m^{b}_{x}\cos\gamma + m^{b}_{z}\sin\gamma}{m^{b}_{y}\cos\theta + m^{b}_{x}\sin\gamma\sin\theta - m^{b}_{z}\cos\gamma\sin\theta}\right) + \mu$
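A hedged sketch of this tilt-compensated heading computation is shown below; atan2 is used instead of arctan to preserve the quadrant, the declination mu is assumed to be supplied by the caller, and the names are illustrative.

#include <cmath>

// Tilt-compensated magnetic heading from magnetometer output (mx, my, mz)
// and the previously computed pitch (theta) and roll (gamma); mu is the
// local magnetic declination.
double magneticHeading(double mx, double my, double mz,
                       double theta, double gamma, double mu) {
    double num = mx * std::cos(gamma) + mz * std::sin(gamma);
    double den = my * std::cos(theta) + mx * std::sin(gamma) * std::sin(theta)
               - mz * std::cos(gamma) * std::sin(theta);
    return std::atan2(num, den) + mu;  // psi = psi_m + mu
}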
Alignment algorithm of initial heading angle and pitch angle based on dual-antenna GNSS
GNSS provides a global positioning function and can realize all-weather accurate position navigation. The positioning data provided by GNSS are also the main basis for the initial position alignment of the system. Dual-antenna GNSS can additionally assist in determining the heading angle and pitch angle. In this paper, dual-antenna GNSS is mainly used to measure the heading angle and pitch angle when the satellite signal is excellent.
Dual-antenna GNSS has two satellite signal receiving antennas, the main antenna ANT1 and the secondary antenna ANT2, which are mounted on the longitudinal axis of the aircraft, and the baseline
distance between the two antennas is $D$.
First, obtain the position coordinates $P_{ANT1} = [\lambda_{1}\ L_{1}\ h_{1}]^{T}$ and $P_{ANT2} = [\lambda_{2}\ L_{2}\ h_{2}]^{T}$ of the ANT1 and ANT2 antennas, respectively, and then subtract them to obtain the position difference $\Delta E = [\Delta e\ \Delta n\ \Delta u]^{T}$ of the two antennas, where $\Delta e$, $\Delta n$, $\Delta u$ are the distance differences between the two antennas in the east, north and up directions, respectively. Substituting these values into the following formula gives the heading angle $\psi$ and pitch angle $\theta$:
$\begin{cases} \psi = \arctan(\Delta e/\Delta n) \\ \theta = \arctan\left(\Delta u\big/\sqrt{\Delta e^{2} + \Delta n^{2}}\right) \end{cases}$
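These baseline formulas translate directly into code; the short sketch below assumes the east/north/up coordinate differences are already available in metres and uses illustrative names.

#include <cmath>

// Heading and pitch from the dual-antenna baseline expressed as ENU differences.
void baselineAttitude(double de, double dn, double du,
                      double &heading, double &pitch) {
    heading = std::atan2(de, dn);                          // psi = arctan(de/dn)
    pitch = std::atan2(du, std::sqrt(de * de + dn * dn));  // theta
}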
At the time of antenna installation, the distance $D$ between the two antennas can be measured exactly; at the same time, based on the position coordinates provided by the two antennas, the calculated positioning distance $\tilde{D} = \sqrt{(L_{1}-L_{2})^{2} + (\lambda_{1}-\lambda_{2})^{2} + (h_{1}-h_{2})^{2}}$ of the antennas can be obtained. By calculating the difference between $D$ and $\tilde{D}$, the accuracy of the satellite positioning signal, and thus its credibility, can be judged. The coordinates of the midpoint of the line connecting the two antennas are used as the vehicle coordinates of the aircraft and serve as the data for the initial alignment of the position.
4.5. Aircraft Flight State Recognition and Attitude Compensation
In different terrain environments, the navigable aircraft will face various environmental disturbances, and there may also be a satellite signal failure; when the integrated navigation system loses
the update of the satellite measurement signal, the attitude angle may gradually drift, resulting in a gradual increase in the error angle, affecting the attitude accuracy of the navigation system.
For this reason, an aircraft flight state identification and compensation procedure is added: when the satellite signal fails, the raw IMU data are used to identify the flight state, and in the low-maneuvering flight state the roll and pitch obtained from the accelerometer and the heading angle obtained from the magnetometer are used for the measurement update, improving the attitude accuracy of the system and ensuring the navigation safety of the aircraft [ ].
Figure 3 below shows a flowchart of the aircraft flight state identification and navigation correction data selection method used by the navigation system.
In the static alignment phase of the aircraft, the local reference gravitational acceleration $\tilde{g}$ is found using Equations (59) and (60).
$|f^{b}_{sf}|_{k} = \sqrt{(f^{b}_{sfx})^{2} + (f^{b}_{sfy})^{2} + (f^{b}_{sfz})^{2}}$
$\tilde{g} = |\tilde{f}^{b}_{sf}| = \dfrac{|f^{b}_{sf}|_{1} + |f^{b}_{sf}|_{2} + \cdots + |f^{b}_{sf}|_{n}}{n}, \quad (n = 3000)$
As the GNSS system updates at 1 Hz, the measurement update frequency of the system is 1 Hz. When the data sent by GNSS are received, they are parsed to judge the validity of their content; if the GNSS data are valid and available, the velocity decoded from the satellite data is used as the basis for judging the flight status of the aircraft.
When $|\vec{v}_{G}| < v_{\varepsilon}$ (where $v_{\varepsilon}$ is a velocity threshold parameter whose value is chosen based on the velocity measurement noise of GNSS), the aircraft is judged to be in the ground-ready state. At this time, the heading, pitch, velocity and position information provided by GNSS are used for the filtering correction, and the accelerometer is used to correct the cross-roll angle in combination with Equation (36). When $|\vec{v}_{G}| \geq v_{\varepsilon}$, the aircraft is judged to be in the motion state; at this time, the filtering correction uses only the heading, pitch, velocity and position information provided by GNSS, and no correction is made for the cross-roll angle.
When the GNSS data are invalid, the accelerometer is used for flight mode discrimination. According to Equation (60), the smoothed accelerometer output $|\hat{f}^{b}_{sf}|_{k}$ at moment $k$ is obtained. When $\big||\hat{f}^{b}_{sf}|_{k} - \tilde{g}\big| < a_{1m}$ (where $a_{1m}$ is a threshold parameter whose value depends on the noise level of the environment in which the navigation system operates), the aircraft is judged to be in the steady state; at this time, the aircraft is in the uniform cruise state or the ground stationary state, and this stage can use the accelerometer for attitude measurement correction.
$|\hat{f}^{b}_{sf}|_{k} = \dfrac{|f^{b}_{sf}|_{k} + |f^{b}_{sf}|_{k-1} + \cdots + |f^{b}_{sf}|_{k-m+1}}{m}, \quad (m = 100)$
When $\big||\hat{f}^{b}_{sf}|_{k} - \tilde{g}\big| \geq a_{1m}$, the output $|\delta\hat{f}^{h}_{sb}|$ of the horizontal biaxial accelerometer is calculated to determine the specific flight status, as follows.
When $|\delta\hat{f}^{h}_{sb}| < a_{2m}$ (where $a_{2m}$ is another threshold parameter based on the horizontal accelerometer noise), the gyroscope data are used together with this condition to decide whether the aircraft is in a turning or circling state.
When $|\delta\hat{f}^{h}_{sb}| \geq a_{2m}$, the aircraft is in a large maneuvering state, i.e., the takeoff or landing phase, and the accelerometer can be used to compensate for the error and reduce the attitude angle error in this phase.
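The threshold logic of this subsection can be summarized by a small decision routine such as the sketch below. The enum, the function name and the idea of returning a discrete state are assumptions made for illustration; the thresholds vEps, a1m and a2m must be tuned to the actual sensor noise, as the paper notes.

#include <cmath>

enum class FlightState { GroundReady, Moving, Steady, TurnOrCircle, LargeManeuver };

// gnssValid, gnssSpeed: satellite validity and the decoded speed (m/s)
// fMean: smoothed |f_sf^b| over the last m samples; gRef: reference gravity
// fHoriz: magnitude of the horizontal biaxial accelerometer output
FlightState classifyFlightState(bool gnssValid, double gnssSpeed, double vEps,
                                double fMean, double gRef, double a1m,
                                double fHoriz, double a2m) {
    if (gnssValid)
        return (gnssSpeed < vEps) ? FlightState::GroundReady : FlightState::Moving;
    if (std::fabs(fMean - gRef) < a1m)
        return FlightState::Steady;        // uniform cruise or ground stationary
    if (fHoriz < a2m)
        return FlightState::TurnOrCircle;  // confirm further with gyroscope data
    return FlightState::LargeManeuver;     // takeoff or landing phase
}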
5. Experimental Verification and Analysis
To validate the actual performance as well as the reliability of this integrated navigation system, in this chapter, we detail an indoor static accuracy test of the equipment and UAV flight test to
check the performance of the system under different environments and to verify the correctness of the system model and the feasibility of the system scheme.
5.1. Carry Out Indoor Static Experiment
The first test is to test the static performance parameters of the system in an indoor environment. The main components of the indoor static experiment are an integrated navigation device body and a
data recorder. The device was placed on a marble table in the laboratory, the power was started and, after the initialization of the device was completed, data were sent to the serial data logger and
30 min data were collected.
Figure 4
shows the field experiment for static testing.
The experimental system installation diagram is shown in
Figure 5
The entire system is powered by an outdoor mobile 220 V AC supply. The tested dual-antenna GNSS/MINS integrated navigation system, reference fiber inertial navigation system and serial port data
recorder use 24 V DC power supply provided by an AC/DC power adapter, while the laptop and reference satellite receiver use 220 V AC power supply.
The tested integrated navigation system receives satellite data through two satellite antennas and sends the final navigation solution data to a serial port data recorder for storage. The distance
between the two satellite antennas is 1.5 m. The reference inertial navigation device reads the satellite data sent by the satellite receiver through a Moxa Upload multi-channel serial port USB to
serial port converter and performs an integrated navigation solution that then sends the navigation data through the converter to the uplink computer for saving.
In the indoor static test situation, the system first detects that the satellite signal is not available, so it performs flight status identification by accelerometer and determines that the current
device is in ground-ready state. During the ground-ready state without satellite signal, the system will use an accelerometer to correct the traverse and pitch angles of the attitude and a
magnetometer to correct the heading angle. From the results in
Table 2
, the errors of the pitch and roll angles in the static condition are extremely small, with an RMS within 0.75°. The heading angle error, which is corrected by fusing the magnetometer, is larger than the accelerometer-corrected pitch and roll errors, but its RMS is also within 1°.
5.2. Conduct UAV Flight Test
In this flight test, the system is carried inside the aircraft near the propeller. The on-board 28 V DC supply powers the test equipment and the on-board 12 V battery powers the industrial serial data logger; the reference equipment is the aircraft's main fiber-optic inertial navigation system, and the GPS signal and heading are provided by the aircraft. Since the aircraft does not provide velocity information, the system does not perform a velocity measurement update and comparison in this subsection. The flight process is divided into several phases, such as pre-takeoff preparation, takeoff, turn, cruise and landing. This flight test will focus on analyzing the data of the first 2190 s, which contain the four phases of the flight process other than landing.
From Figure 7, it can be seen that the aircraft data contain a large amount of noise overall due to the vibration of the propeller. Even when the aircraft is in the ground preparation phase between 0 and
190 s, the gyroscope and accelerometer have a large amount of noise, but it can be considered as zero-mean noise, which has minor effect on later data solving. Compared with the sports car test, when
the aircraft in the flight test undergoes dynamic changes, the amount of data change of its accelerometer and gyroscope is relatively small, the dynamic impact is smaller and the navigation effect
can be expected to be better.
As shown in
Figure 8
, the aircraft is in the ground preparation state during 0–190 s, and the errors of the three attitude angles are relatively small. During the period of 191–380 s, the aircraft is in the takeoff phase; the aircraft pitches up and the pitch angle error increases but remains within 2°. The cross-roll angle error also shows some fluctuations but remains within 1.5° overall, and the heading angle error is small. During 500–610 s, the aircraft makes its first large-angle turn; because the cross-roll angle is not corrected at this time, its error increases and exceeds 2°, which also causes a brief large fluctuation of more than 2° in the pitch angle error; this error was quickly corrected. Because the heading angle estimate favors stability, it follows slightly more slowly, producing a larger error of about 4°; about 1–2 s later, the system locked back onto the true heading angle and the error returned to normal. After that, the aircraft made five climbs and one large maneuvering turn, during which the attitude angle errors increased, but the maximum error remained within 5° and was corrected quickly.
The following table shows the quantitative analysis of the flight test attitude angle errors. In
Table 3
, the heading angle error is the smallest and the cross-roll angle error is a bit larger than the pitch angle error, but the root mean square of the error is within the allowable range of the index.
As shown in Figure 9, the error in longitude and latitude of the system is relatively small, and slightly larger errors appear during the takeoff, climb and turn phases. Regarding altitude, because
the system depends on the correction of the satellite, there is a certain delay in the satellite signal, which will produce slightly larger error in the climb phase, about 4 m. Although the altitude
error will be reduced if the measurement noise of the altitude is reduced at this time, the northward and skyward position error will increase, so a compromise is needed. When the aircraft is flying
smoothly, the altitude error can be corrected quickly.
The following table shows the quantitative analysis of the position error of the flight test. From
Table 4
, the root mean square errors of the position in the east and north directions are around 0.45 m, and the RMS error in the sky (up) direction is larger but still meets the requirement of less than 1.5 m.
The attitude solution performance of the system in the absence of satellite signals was also verified. After the 30 s initial alignment of the system, the GNSS signal was disconnected so that the system used only the raw IMU data for attitude and heading angle solution and filtering; the experimental result is shown in Figure 10. When there is no GNSS signal, the system uses the IMU data for flight state discrimination and then uses the accelerometer for attitude angle correction in the low-acceleration phases of flight. From Figure 8, the attitude angles solved using the accelerometer show a larger error than the attitude obtained with GNSS correction, but the error RMS remains within 1°, which allows the inertial navigation to provide relatively accurate attitude information when the aircraft has no satellite signal.
Quantitative analysis of the attitude error is shown in
Table 5
. The flight test verifies the effectiveness of the algorithm designed in this paper: the angle, velocity and position information provided by GNSS and the MEMS inertial navigation system are fused with high precision, and when the satellite signal fails, the IMU data alone can still be used to obtain good attitude solution accuracy to ensure the flight safety of the aircraft.
6. Conclusions
By studying the discrete Kalman filter algorithm, the state-space model of the system is established from the system error equations. The mathematical model of the integrated dual-antenna GNSS/MINS navigation system is established and the strapdown integrated navigation algorithm is designed. The system is tested in indoor static tests and UAV flight experiments to verify its attitude performance, especially whether it meets the index requirements without satellite signals. The experiments verified that the attitude, velocity and position errors of the system increase during take-off, climbing and turning but remain within the index range. The attitude accuracy and position accuracy of the system meet the system index requirements when a GNSS signal is available. When there is no GNSS signal correction, the accelerometer can also be used to obtain a good attitude solution to ensure the safe return and landing of the aircraft in a GNSS-denied environment.
Future research will investigate the method to maintain the accuracy of MEMS inertial guidance solution for a longer duration in the absence of a satellite signal. Combined with the current situation
of poor satellite signal due to complex terrain and mountainous plateaus in China, an airborne tight integration navigation algorithm will be studied. Using the existing hardware equipment, a more
efficient and feasible algorithm will be explored to realize an algorithm solution of superior accuracy using low-cost computing units.
Author Contributions
Conceptualization and methodology, M.X. and L.G.; software, P.S.; validation, M.X. and P.S.; writing—original draft preparation, M.X. and P.S.; writing—review and editing, L.G. and Z.Z. All authors
have read and agreed to the published version of the manuscript.
This work was supported by the Project of Science and Technology Research Program of Chongqing Education Commission of China (No. KJZD-K202104701).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
1. Hong, Y.; Zhao, Z.; Wang, H.; Hu, W.; Qin, G. A time-controllable Allan variance method for MEMS IMU. Ind. Robot. Int. J. 2013, 40, 111–120. [Google Scholar]
2. Gustafson, D.; Dowdle, J. Deeply integrated code tracking: Comparative performance analysis. In Proceedings of the International Technical Meeting of the Satellite Division of The Institute of
Navigation, Portland, OR, USA, 9–12 September 2003; pp. 2553–2561. [Google Scholar]
3. Zhang, Q.; Guan, L.; Xu, D. Odometer Velocity and Acceleration Estimation Based on Tracking Differentiator Filter for 3D-Reduced Inertial Sensor System. Sensors 2019, 19, 4501–4517. [Google
Scholar] [CrossRef] [PubMed]
4. Ding, W.; Sun, W.; Gao, Y.; Wu, J. Vehicular Phase-Based Precise Heading and Pitch Estimation Using a Low-Cost GNSS Receiver. Remote Sens. 2021, 13, 3642. [Google Scholar] [CrossRef]
5. Niu, X.; Ban, Y.; Zhang, T.; Liu, J. Research progress and prospects of GNSS/INS deep integration. Acta Aeronaut. Astronaut. Sin. 2016, 37, 2895–2908. [Google Scholar]
6. Hwang, D.; Lim, D.W.; Cho, S.L.; Lee, S.J. Unified approach to ultra-tightly-coupled GPS/INS integrated navigation system. IEEE Aerosp. Electron. Syst. Mag. 2011, 26, 30–38. [Google Scholar] [
7. Li, Z.; Liu, Q.; Wang, X.Y. Design of GPS/MEMS-IMU loose coupling navigation system. Ship Electron. Eng. 2018, 38, 52–56+59. [Google Scholar]
8. Ayazi, F. Multi-DOF inertial MEMS: From gaming to dead reckoning. In Proceedings of the Solid-State Sensors, Actuators & Microsystems Conference, Beijing, China, 5–9 June 2011; IEEE: Piscataway,
NJ, USA, 2011; pp. 2805–2808. [Google Scholar]
9. Zhao, Z.P.; Wang, X.L. MEMS-SINS/GNSS deep integrated navigation system based on vector tracking. Navig. Control. 2019, 6, 7. [Google Scholar]
10. Zhang, X.; Feng, S.G. Development status of airborne INS/GNSS deep integrated navigation system. Opt. Optoelectron. Technol. 2021, 19, 88–96. [Google Scholar]
11. SAE AS8013A; Minimum Performance Standard of the Magnetic (Gyro Stabilized Type) Heading Instrument. International Society of Automation Engineers Aerospace Standards: Warrendale, PA, USA, 2008;
pp. 4–9.
12. SAE AS396B; Inclinometer (Indicating Gyro Stabilized Type) (Gyro Horizon, Gyro Attitude Meter). International Society of Automation Engineers Aerospace Standards: Warrendale, PA, USA, 2008; pp.
13. RTCA DO-160E/EUROCAE ED-14F; American Aeronautical Radio Technical Commission (RTCA). RTCA: Washington, DC, USA, 2007; pp. 11–139.
14. Lin, Y.; Zhang, W.; Xiong, J. Specific force integration algorithm with high accuracy for strapdown inertial navigation system. Aerosp. Sci. Technol. 2015, 42, 25–30. [Google Scholar] [CrossRef]
15. Wang, J.; Sun, R.; Cheng, Q.; Zhang, W. Comparison of direct and indirect filtering modes for UAV integrated navigation. J. Beijing Univ. Aeronaut. Astronaut. 2020, 46, 2156–2167. [Google Scholar
16. Gaoge, H.; Wei, W.; Yong, M. A new direct filtering approach to INS/GNSS integration. Aerosp. Sci. Technol. 2018, 77, 755–764. [Google Scholar]
17. Li, X.; Zhang, X.; Ren, X.; Fritsche, M.; Wickert, J.; Schuh, H. Precise positioning with current multi-constellation Global Navigation Satellite Systems: GPS, GLONASS, Galileo and BeiDou. Sci.
Rep. 2015, 5, 8328. [Google Scholar] [CrossRef] [PubMed]
18. Santerre, R.; Geiger, A.; Banville, S. Geometry of GPS dilution of precision: Revisited. GPS Solut. 2017, 21, 1747–1763. [Google Scholar] [CrossRef]
19. Bacon, B. Quaternion-Based Control Architecture for Determining Controllability Maneuverability Limits. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Minneapolis, MN,
USA, 13–16 August 2012. [Google Scholar]
20. Wang, Q.; Cheng, M.; Aboelmagd, N. Research on the improved method for dual foot-mounted Inertial/Magnetometer pedestrian positioning based on adaptive inequality constraints Kalman Filter
algorithm. Measurement 2019, 135, 189–198. [Google Scholar] [CrossRef]
21. Chan, A.; Su, L.; Chu, K. Sensor data fusion for attitude stabilization in a low cost Quadrotor system. In Proceedings of the 2011 IEEE 15th International Symposium on Consumer Electronics
(ISCE), Singapore, 14–17 June 2011; IEEE: Piscataway, NJ, USA, 2011. [Google Scholar]
22. Han, H.; Wang, J.; Du, M. A Fast SINS Initial Alignment Method Based on RTS Forward and Backward Resolution. J. Sens. 2017, 2017, 7161858. [Google Scholar] [CrossRef]
23. Cong, L.; Li, E.; Qin, H. A Performance Improvement Method for Low-Cost Land Vehicle GPS/MEMS-INS Attitude Determination. Sensors 2015, 15, 5722–5746. [Google Scholar] [CrossRef] [PubMed]
24. Guan, L.; Sun, P.; Xu, X.; Zeng, J.; Rong, H.; Gao, Y. Low-cost MIMU based AMS of highly dynamic fixed-wing UAV by maneuvering acceleration compensation and AMCF. Aerosp. Sci. Technol. 2021, 117,
106975. [Google Scholar] [CrossRef]
Figure 3. Schematic diagram of flight status identification and navigation correction data selection process.
Entry | Name | Indicator Parameters
Heading accuracy | Double GNSS (2 m baseline) | 1° (RMS)
 | Heading holding (GNSS failure) | 1.5 °/min (RMS)
Attitude accuracy | GNSS valid (single point L1/L2) | 0.75° (RMS)
Horizontal positioning accuracy | GNSS valid, single point L1/L2 | 1.5 m (RMS)
Horizontal velocity accuracy | GNSS valid, single point L1/L2 | 0.2 m/s (RMS)
Gyroscope | Measuring range | ±450 °/s
 | Bias stability | 6 °/h
Accelerometer | Measuring range | ±16 g
 | Bias stability | 0.3 mg
Working temperature | | −40 °C~+60 °C
Environmental indicators | Vibration | 5~2000 Hz, 2 g
 | Shock | 30 g, 11 ms
Parameter | Pitch Angle Error (°) | Cross Roll Angle Error (°) | Heading Angle Error (°)
Means | 0.01493 | −0.01472 | −0.2335
Maximum value | 0.0329 | 0.001092 | 0.1458
Minimum value | −0.00667 | −0.03147 | −0.7275
RMS | 0.004792 | 0.003721 | 0.1307

Parameter | Pitch Angle Error (°) | Cross Roll Angle Error (°) | Heading Angle Error (°)
Means | 0.0522 | −0.1148 | 0.02089
Maximum value | 3.01 | 5.771 | 4.312
Minimum value | −2.784 | −5.935 | −2.123
RMS | 0.4574 | 0.7051 | 0.2251

Parameter | Eastward Position Error (m) | Northward Position Error (m) | Skyward Position Error (m)
Means | 0.03407 | 0.0341 | −1.035
Maximum value | 3.601 | 2.821 | 0.5304
Minimum value | −3.467 | −3.299 | −5.723
RMS | 0.4995 | 0.4323 | 1.445

Parameter | Pitch Angle Error (°) | Cross Roll Angle Error (°) | Heading Angle Error (°)
Means | −0.139 | −0.00824 | 0.03385
Maximum value | 5.352 | 6.955 | 4.987
Minimum value | −5.717 | −4.633 | −2.936
RMS | 0.8448 | 0.9444 | 0.4876
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Xia, M.; Sun, P.; Guan, L.; Zhang, Z. Research on Algorithm of Airborne Dual-Antenna GNSS/MINS Integrated Navigation System. Sensors 2023, 23, 1691. https://doi.org/10.3390/s23031691
AMA Style
Xia M, Sun P, Guan L, Zhang Z. Research on Algorithm of Airborne Dual-Antenna GNSS/MINS Integrated Navigation System. Sensors. 2023; 23(3):1691. https://doi.org/10.3390/s23031691
Chicago/Turabian Style
Xia, Ming, Pengfei Sun, Lianwu Guan, and Zhonghua Zhang. 2023. "Research on Algorithm of Airborne Dual-Antenna GNSS/MINS Integrated Navigation System" Sensors 23, no. 3: 1691. https://doi.org/10.3390
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/1424-8220/23/3/1691","timestamp":"2024-11-12T23:39:41Z","content_type":"text/html","content_length":"627681","record_id":"<urn:uuid:c34f670e-d725-4f81-a592-c6684f7c7bda>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00135.warc.gz"} |
Segment tree with lazy propagation - Scaler Blog
Segment tree with lazy propagation
A Segment Tree is an efficient and flexible data structure that is used to solve range queries while handling updates at the same time. The lazy Propagation technique is used in a segment tree to
solve range queries in O(log n) time complexity. In lazy propagation, we make copy nodes for each node in the Segment Tree and use these copy nodes to store the updates. Lazy Propagation is a
technique where we postpone the updates for the future and use them only when these updates are required.
Scope of article
• This article defines Lazy Propagation in a Segment Tree and explains the intuitive logic of this algorithm. We also learn two measures of its efficiency: Time and Space Complexity.
• The article shows how we can find the minimum in a given range while handling range updates efficiently with the lazy propagation technique in a Segment Tree.
• The article also shows the C++ implementation for Range Minimum Queries with Range Updates in a Segment Tree.
As discussed above a segment tree is a highly versatile and efficient data structure used to solve problems involving range queries and is flexible enough to handle update queries over a range as
well as point update queries in O(log n) time complexity. We would recommend everyone to refer to the article Segment Tree, as a prerequisite to this article where we discussed how we can use a
Segment Tree to handle point updates. In this post, we will see how we can use a Segment Tree to handle range update queries.
Let’s consider that you have an array as follows, array= [2,5,4,3]. Now you need to solve range minimum queries or the minimum in a given range for the given array while handling range updates at the
same time. Now if we need to modify only a single element in the array, we can do it using a simple Segment Tree. However, suppose an update query needs to modify an entire range, say, increasing every element of the array in the range [1,3] (1-based indexing) by 5. Then we will not be able to use a simple Segment Tree to solve the above problem efficiently. We will see why in the latter part of this article.
Another way to solve the above problem can be to traverse the given range in the array and update each element for every update query. Then find the minimum for the given range by again traversing
the entire range in the array. This will take a time complexity of O(N) as we are traversing the array for each update query. However, if there are Q such queries the time-complexity will be O(Q*N).
Can we solve the above problem in a better complexity?
Yes, we can! We use the lazy propagation technique to solve the range update queries in O(log N) time complexity. Let us first discuss the structure of a segment tree and then understand the lazy
propagation technique in a segment tree.
Structure of a Segment Tree.
A segment tree is a binary tree in which every node is associated with a certain range. The interval associated with the child nodes is approximately half the size of the interval associated with the
parent node.
We can visualize the structure of the segment tree as follows-
• Every node is associated with some intervals of the parent array. Parent Array refers to the array from which the segment tree is built.
• The root of a segment tree represents the entire array. i.e [0,n-1] (0 based indexing).
• Every leaf node is associated with a single element which is an element of the array.
• The intervals associated with any two child nodes of a given node are disjoint.
• The union of two child intervals gives us the interval associated with the parent node.
• If we consider the root node to be indexed at 0, the left and right children of a node are given by 2*parent_id + 1 and 2*parent_id + 2. We can also consider the root node to be indexed at 1; then the indices of the left and right children of a node are 2*parent_id and 2*parent_id + 1. Here parent_id is the index of the parent node (both conventions are sketched in the small helper functions below).
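As a quick illustration of the two indexing conventions mentioned above, the helper functions below compute child indices for a 0-based and a 1-based root; this is only a sketch, and the article's own code later uses the 1-based convention.

// 0-based root: children of node i are 2*i + 1 and 2*i + 2
int leftChild0(int i) { return 2 * i + 1; }
int rightChild0(int i) { return 2 * i + 2; }

// 1-based root: children of node i are 2*i and 2*i + 1
int leftChild1(int i) { return 2 * i; }
int rightChild1(int i) { return 2 * i + 1; }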
Below is a visual representation of a segment tree used for minimum range queries.
In the Segment Tree shown above, we should note that we have used 1 based indexing for the root node. Each node’s associated range is taken as left range inclusive and right range exclusive. That is
Range [Left_Range, Right_Range) means [Left_Range, Right_Range-1]. The leaf nodes of the tree represent the array element from which the tree is made. Whenever the number of elements in the array is
not a power of 2 some underlying subintervals may not be completely filled.
Why a simple segment tree can’t be used for handling Range Updates efficiently?
Let us recall how a segment tree for point update works!
• To update a single element in a segment tree we need to move to the leaf node corresponding to that element in the original array.
• After updating that particular element in the parent array and in the segment tree we backtrack.
• While backtracking, we update the values stored in all the ancestor nodes of that leaf in the segment tree.
However, in range updates, we need to update multiple elements at a time. So this means that we need to move to the leaf node associated with each element to be updated in the range. Let’s suppose
that the range to be updated consists of all the elements present in the array. Thus updating all the elements one by one would take a time complexity of O(N * log N). This is because one updating
query for a point update using the Segment Tree takes O(log N) time. Handling N such elements in a single query would make the time complexity O(N * log N ) for a range update. This time complexity
is even less efficient than the brute force approach discussed above which takes O(N) time complexity for an updating query.
Thus simply using a segment tree will not help us to solve range updates efficiently. Let’s see how can we handle range updates in a more efficient manner.
Solving Range Updates with Lazy Propagation Technique
The conditions of No Overlap, Partial Overlap and Complete Overlap will be the same as those discussed for a Simple Segment Tree.
• No Overlap: The range associated with node falls completely outside the range asked in the query. i.e RightRange_node <= query_LeftRange or LeftRange_node >= query_RightRange.
Note that we are taking equal to as we are taking the ranges as right range exclusive and left range inclusive manner.
• Complete Overlap: – The range associated with node falls completely inside the range asked in the query. i.e LeftRange_node >= query_LeftRange and RightRange_node <= query_RightRange.
• Partial Overlap: The range associated with the node falls partially inside and partially outside the range asked in the query. If both the above conditions fail, it is a case of partial overlap.
Let us look at the approach used to solve Range Updates using Lazy Propagation. Consider the array array = [3,7,6,4]. Let us build a minimum segment tree for this array. Refer to the segment tree
shown below.
To use Lazy Propagation in a segment tree we make copies of each node. The copy nodes are represented by red circles beside every node in the segment tree. The range and the id associated with the
copy nodes are the same as that of the nodes in the Segment Tree. These copy nodes will store the information about the updates that we need to perform in a given range. We will use these copy nodes
to suspend the updates for the future and use them whenever they are required in the query. This is what lazy actually means. We suspend the updates for the future by being lazy at the present. We
use an additional vector or an array to store the copy nodes. Let’s name this vector as LazyPropagationVector. Now let’s understand how we can apply Range Updates using Lazy Propagation.
Implementing Range Update function
Let’s define a function LazySegmentTreeRangeUpdate(updateValue, LeftRange,RightRange), which means that we need to increase every element present in the range [LeftRange, RightRange-1] by updateValue
. Note that in this function we are taking left range inclusive and right range exclusive.
Now let us implement the function LazySegmentTreeRangeUpdate(5, 0, 2). That is, we need to increase every element present in the range 0 to 1 by the value 5 (0-based indexing). Refer to the diagram shown below.
LazySegmentTreeRangeUpdate(5, 0, 2)
• The blue arrows show the direction of the dfs traversal.
• The cross represents the condition of no overlap from where we return in our dfs call.
• The green dot represents the condition of complete overlap. We return from our dfs call in the case of complete overlap and no overlap of intervals.
• We start our dfs traversal from the root of the segment tree, i.e node with id equal to 1.
• From the root node, say we move to its left that is the node with id=2.
• Since we get a condition of total overlap we increase that node of the segment tree by 5.
• Here we use the distributive property of minimum function over addition. That is we can write minimum(3+5,7+5) as minimum(3,7)+5.
• We also update the copy node associated with that node of the segment tree. By doing this we are preventing the need of moving to every leaf node associated in the range to be updated. As storing
5 in the copy node associated with the node of id=2 will also mean that we need to add 5 in the range [0,1].
• While backtracking in our dfs call we update the non-leaf nodes with the minimum of both left and the right child to get the updated segment tree. We update the node with id=1 with the minimum of
its left and right child.
• In the future, if we need the value of the node associated with the range [1,2), i.e., the element at index 1 of the parent array, we would first need to increase that value by 5 and then return the updated value.
LazySegmentTreeRangeUpdate(6, 0, 1)
Let’s implement the function LazySegmentTreeRangeUpdate(6, 0, 1) in the segment tree obtained after the above operation. LazySegmentTreeRangeUpdate(6, 0, 1) means that we need to increase the element
present in the range [0,0] or the element present at the 0th index of the array by 6 (0 based indexing). Refer to the diagram shown below.
• We start the dfs call from the root of our segment tree. Say we move to its right, i.e node with id=3. We get a condition of no overlap. So we stop the dfs call and return.
• Then we move to its left child, i.e the node with id=2. Here we get a condition of partial overlap. Therefore we should move to its left and right child.
• However, before moving further in the dfs call we can see that an update is pending from the past. We can see this from the value present in the copy node associated with the node of id=2.
• Since the copy node is not empty we need to apply these updates to both the children nodes before moving any further in the dfs call. That is we need to increase the nodes with id=4 and id=5 by
the value present in the copy node, i.e 5, and also update the copy nodes associated with them.
• After updating the nodes the value present at the node with id=4 becomes 3+5=8, and the value present in the node with id=5 becomes 7+5=12. Both the copy nodes associated with them will store the
value 5.
• After updating the values of the nodes and the copy nodes, we move to the left and right children of the node with id=2. When moving right, we get a condition of no overlap at the node with id=5, shown by a cross in the diagram. So we return.
• Then we move to the left where we get a condition of total overlap. So we increase that node of the segment tree by 6 and also update the value of the copy node. That is the value present in the
node with id=4 becomes (3+5)+6=14 and the value present in the copy node becomes 5+6=11.
• Note that the value present in the copy nodes of the leaf nodes will never be used. This is because we propagate the value present in the copy nodes only when we move to the child nodes. However,
since leaf nodes don’t contain child nodes its value will never be used.
Pseudocode for LazySegmentTreeRangeUpdate(updateValue,LeftRange,RightRange)
Let us understand the algorithm we will use to solve range updates with the help of its pseudocode. Note that entire implementation of the algorithm is discussed below in this article.
Start the dfs call from the root node.
void LazySegmentTreeRangeUpdate(updateValue,query_LeftRange,query_RightRange)
if(no overlap condition)
return from the dfs call.
else if(total overlap condition)
update the node of the segment tree with updateValue.
update the copy node associated with it with updateValue.
return from the dfs call.
//Here we will handle the condition of partial overlap.
if(value present in the copy node is not empty)
propagate its value to the left and right child.
update the copy nodes associated with the child nodes.
Move to the left and right child of the current node.
While backtracking update the value of the parent node
with value present in its child node after the update.
Solving Minimum Range Queries with Lazy Propagation
Solving range minimum queries for a segment tree with lazy propagation is the same as that of a simple segment tree. However, when moving to the child nodes of a particular node we need to make sure
that past updates are not pending for the child nodes. We can do this by checking the value present in the copy nodes. If they are not empty we propagate its value to the child nodes and then solve
the query. Let us look at an example for a better understanding.
Range Minimum Query using Segment Tree
Let’s define a function LazySegmentTreeRangeMinimum(LeftRange,RightRange) as the minimum in the range from LeftRange to RightRange-1 ,i.e [LeftRange,RightRange-1] in the parent array (0 based
indexing). Let’s consider the segment tree shown below.
It is the same segment tree obtained after the operation LazySegmentTreeRangeUpdate(5, 0, 2) on the original segment tree. Now lets implement LazySegmentTreeRangeMinimum(0,2) on the given segment
tree. That is finding the minimum in the range [0,2) after all the elements in the range [0,2) were increased by 5.
In the segment tree shown above, dotted lines represented the value returned from the dfs call. We will start our dfs call from the root node. Since the copy node associated with the root node is
empty we can move towards its child nodes. Say we move towards the left child, that is the node with id=2. Here we get a condition of total overlap so we return the value present in the segment tree,
i.e 8.
Next, we move towards the right child, that is the node with id=3. Here we get a condition of no overlap. So we return a value such that it will never affect our desired result. In the case of the
minimum function, we return Infinty represented by INF in the diagram shown above.
Finally, we compute the minimum of 8 and INFINITY and return the result. That is minimum(8, INFINITY) = 8. So we return 8 as our answer which is the correct result.
We can see that the lazy propagation approach can be used to solve the minimum range queries much more efficiently without affecting the correctness of the result. It is more efficient than the brute force approach because we are not updating every leaf node in the range update queries; instead, we are suspending the updates for the future by storing them in the copy nodes.
Pseudocode for LazySegmentTreeRangeMinimum(LeftRange,RightRange)
Below, we have shown the pseudocode for LazySegmentTreeRangeMinimum(LeftRange,RightRange) function. Note that we will discuss the entire implementation of the algorithm after this section.
start the dfs from the root node.
int LazySegmentTreeRangeMinimum(LeftRange,RightRange)
if(condition of no overlap)
return Infinity;
else if(condition of total overlap)
return the value present in the segment tree.
// If condition of partial overlap.
Get the answer from its left child.
Get the answer from its right child.
return the combined answer or
the minimum of the value obtained from left and right child.
Implementation of Range Minimum Query in Segment Tree using Lazy Propagation
Let us see how we can solve range minimum queries with range updates efficiently using lazy propagation.
• At first we build our minimum range segment tree. The build function is the same as that used for a simple segment tree with point updates. We will use buildSegmentTree function to build the
minimum range segment tree. Whenever we get a node with RightRange-LeftRange=1 we get a leaf node. In such a case we update that node with its corresponding value in the parent array, i.e element
indicated by LeftRange.
• Whenever the query string is RangeUpdate we will use the LazySegmentTreeRangeUpdate function to update the range query_LeftRange to query_RightRange-1 with updateValue.
• Whenever the query string is RangeMinimum we will use the LazySegmentTreeRangeMinimum function to return the minimum in the range of query_LeftRange to query_RightRange-1.
• Note that we will have to use long long data type for larger inputs and also update the value of infinity with a larger value.
C++ Implementation-
#include <bits/stdc++.h>
using namespace std;

int infinity = 1e9 + 5;

void PropagatePastUpdates(vector<int> &segmentTree, vector<int> &lazyPropagationVector, int node_id)
{
    // check if an update is pending; if not, there is nothing to propagate
    if (lazyPropagationVector[node_id] == 0)
        return;
    int left_child = node_id * 2, right_child = node_id * 2 + 1;
    // propagate the update to the left child
    segmentTree[left_child] += lazyPropagationVector[node_id];
    lazyPropagationVector[left_child] += lazyPropagationVector[node_id];
    // propagate the update to the right child
    segmentTree[right_child] += lazyPropagationVector[node_id];
    lazyPropagationVector[right_child] += lazyPropagationVector[node_id];
    // after propagating the updates, empty the copy node of the parent
    lazyPropagationVector[node_id] = 0;
}

void LazySegmentTreeRangeUpdate(vector<int> &segmentTree, vector<int> &lazyPropagationVector, int query_LeftRange, int query_RightRange, int updateValue, int node_id, int LeftRange, int RightRange)
{
    if (RightRange <= query_LeftRange || LeftRange >= query_RightRange)
    {
        // no overlap
        return;
    }
    else if (LeftRange >= query_LeftRange && RightRange <= query_RightRange)
    {
        // total overlap
        segmentTree[node_id] += updateValue;
        lazyPropagationVector[node_id] += updateValue;
        return;
    }
    // partial overlap: check for past updates before moving to child nodes
    PropagatePastUpdates(segmentTree, lazyPropagationVector, node_id);
    int mid = (LeftRange + RightRange) / 2;
    // update left child
    LazySegmentTreeRangeUpdate(segmentTree, lazyPropagationVector, query_LeftRange, query_RightRange, updateValue, node_id * 2, LeftRange, mid);
    // update right child
    LazySegmentTreeRangeUpdate(segmentTree, lazyPropagationVector, query_LeftRange, query_RightRange, updateValue, node_id * 2 + 1, mid, RightRange);
    // store the updated result in the non-leaf node
    segmentTree[node_id] = min(segmentTree[node_id * 2], segmentTree[node_id * 2 + 1]);
}

int LazySegmentTreeRangeMinimum(vector<int> &segmentTree, vector<int> &lazyPropagationVector, int query_LeftRange, int query_RightRange, int node_id, int LeftRange, int RightRange)
{
    if (RightRange <= query_LeftRange || LeftRange >= query_RightRange)
    {
        // no overlap
        return infinity;
    }
    else if (LeftRange >= query_LeftRange && RightRange <= query_RightRange)
    {
        // total overlap
        return segmentTree[node_id];
    }
    // partial overlap: check for past updates
    PropagatePastUpdates(segmentTree, lazyPropagationVector, node_id);
    int mid = (LeftRange + RightRange) / 2;
    // get the minimum from the left segment
    int LeftSegmentMinimum = LazySegmentTreeRangeMinimum(segmentTree, lazyPropagationVector, query_LeftRange, query_RightRange, node_id * 2, LeftRange, mid);
    // get the minimum from the right segment
    int RightSegmentMinimum = LazySegmentTreeRangeMinimum(segmentTree, lazyPropagationVector, query_LeftRange, query_RightRange, node_id * 2 + 1, mid, RightRange);
    // return the minimum of the left and right segments
    return min(LeftSegmentMinimum, RightSegmentMinimum);
}

void buildSegmentTree(vector<int> &segmentTree, vector<int> &parentArray, int node_id, int LeftRange, int RightRange)
{
    if (RightRange - LeftRange == 1)
    {
        // leaf node: store the value indicated by LeftRange in the node
        segmentTree[node_id] = parentArray[LeftRange];
        return;
    }
    int mid = (LeftRange + RightRange) / 2;
    buildSegmentTree(segmentTree, parentArray, node_id * 2, LeftRange, mid);
    buildSegmentTree(segmentTree, parentArray, node_id * 2 + 1, mid, RightRange);
    segmentTree[node_id] = min(segmentTree[node_id * 2], segmentTree[node_id * 2 + 1]);
}

int main()
{
    int n;
    cin >> n;
    vector<int> parentVector(n);
    for (int i = 0; i < n; i++)
        cin >> parentVector[i];
    vector<int> segmentTree(4 * n + 1);
    // initialise the lazyPropagationVector with 0 as we need to update with addition
    vector<int> lazyPropagationVector(4 * n + 1, 0);
    // build the minimum range segment tree from the parentVector
    buildSegmentTree(segmentTree, parentVector, 1, 0, n);
    int queries;
    // take the number of queries
    cin >> queries;
    while (queries--)
    {
        string queryString;
        cin >> queryString;
        if (queryString == "RangeUpdate")
        {
            int query_LeftRange, query_RightRange, updateValue;
            cin >> query_LeftRange >> query_RightRange >> updateValue;
            LazySegmentTreeRangeUpdate(segmentTree, lazyPropagationVector, query_LeftRange, query_RightRange, updateValue, 1, 0, n);
        }
        else if (queryString == "RangeMinimum")
        {
            int query_LeftRange, query_RightRange;
            cin >> query_LeftRange >> query_RightRange;
            int ans = LazySegmentTreeRangeMinimum(segmentTree, lazyPropagationVector, query_LeftRange, query_RightRange, 1, 0, n);
            cout << ans << "\n";
        }
    }
    return 0;
}
Complexity Analysis in a Segment Tree
Space Complexity
What is the maximum number of nodes that can be associated with a Segment Tree?
Let’s think a maximum of how many nodes will be required in a Segment Tree. The root node will consist of two children nodes let’s name this as layer 1. The two nodes of layer 1 will again have their
own children nodes. Thus total nodes in layer 2 will be 4 nodes. This will go on until we reach the leaf nodes which will represent the array elements. So in the worst case, the number of nodes in
the segment tree can be represented as the sum of
$$1 + 2 + 4 + 8 + \cdots + 2^{\lceil \log_2 n \rceil} = 2^{\lceil \log_2 n \rceil + 1} - 1 < 4n$$
So we have shown that the number of nodes in a segment tree never exceeds 4*n, where n is the number of elements of the parent array. We use two vectors, segmentTree and lazyPropagationVector, to store the nodes of the tree and the pending updates, so the space complexity of the array representation of a Segment Tree is O(n).
Time Complexity
Build Query –
A Segment tree can contain a maximum of 4*n+1 nodes (1 based indexing). As we visit every node once while building the Segment Tree. Hence the time complexity of the build function is O(n).
Range Minimum Query –
The time complexity for a range minimum query is the same as that in a simple segment tree. If propagating the past updates takes O(1) time (addition in this case), a Range Minimum Query takes O(log n) time.
Range Update Query –
We are not updating every leaf node present in the query range; instead, we update only the nodes whose associated ranges are fully covered by the query, and there can be O(log n) such nodes. If the operation we apply takes O(1) time (addition in this case), a Range Update Query takes O(log n) time.
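As a brief illustration of how the same lazy machinery carries over to other operations (summarized in the points below), here is a hedged sketch of the two helper functions that change when the query is a range sum with range addition instead of a range minimum: a pending addition must now be scaled by the length of the segment it covers. The names mirror the article's code, but this sketch is not part of the original implementation.

#include <vector>
using std::vector;

// Apply a pending addition 'value' to a node covering [LeftRange, RightRange):
// a SUM node stores the total of its segment, so the addition is scaled by length.
void applyAdd(vector<long long> &sumTree, vector<long long> &lazy,
              int node_id, int LeftRange, int RightRange, long long value) {
    sumTree[node_id] += value * (RightRange - LeftRange);
    lazy[node_id] += value;
}

// Push a pending addition down to both children before descending further.
void pushDown(vector<long long> &sumTree, vector<long long> &lazy,
              int node_id, int LeftRange, int RightRange) {
    if (lazy[node_id] == 0) return;
    int mid = (LeftRange + RightRange) / 2;
    applyAdd(sumTree, lazy, node_id * 2, LeftRange, mid, lazy[node_id]);
    applyAdd(sumTree, lazy, node_id * 2 + 1, mid, RightRange, lazy[node_id]);
    lazy[node_id] = 0;
}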
• Lazy Propagation Technique can be used to solve Range Update Queries in O(log n) time complexity.
• The same logic can be used for other associative and distributive functions as well.
• This includes addition over a range and finding the minimum, range multiplication and sum of a range, range addition and sum of a range, range bitwise OR and bitwise AND of a range, and many such combinations.
• A segment tree is a very flexible data structure and allows different modifications and extensions to solve a variety of functions in an efficient way. | {"url":"https://www.scaler.in/segment-tree-with-lazy-propagation/","timestamp":"2024-11-15T03:49:38Z","content_type":"text/html","content_length":"111419","record_id":"<urn:uuid:a974a727-d4af-4c15-b065-36f9d53c4c82>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00123.warc.gz"} |
8 Easy Strategies for Teaching Math at Home - Corwin Connect
Each fall teachers struggle with how to support parents who are helping their children with math homework. When conducting back-to-school meetings with parents, teachers typically summarize the math
curriculum by course content. Sometimes, teachers will hand out a list of resources, study aids, and games that parents can share with their children to develop number sense and reasoning. Some
schools offer parent education nights that provide parents with detailed information about academic standards. Other schools host family math nights where parents can participate with their children
and play various math games that support the curricular content. At these events, parents can make math game kits to use at home with their children. Other schools provide after-school math tutoring
for parents and children through after-school literacy program funding or through federally funded school improvement funding.
When guiding parents on how they can help their children with mathematic reasoning at home, consider the following examples and activities that align to the math standards:
1. Define problems and persevere in solving them.
Students should first describe a math problem and identify what they are being asked in order to help them determine the solution and show their work. Word problems are best solved by this approach.
For example, if a student has $50 to spend on school clothes where pants cost $20 and shirts cost $10, how many shirts and pants can be purchased? A primary grade student could subtract $20 from $50
and find that they have $30 left for three shirts or one more pair of pants and one shirt. A middle school student might set up the algebraic equation $20x + $10y = $50 and create a table of values
for x and y that satisfy the equation. Content specific math games can help students learn concepts and properties of math.
2. Reason abstractly and quantitatively.
Students need the ability to justify how they solve a particular math problem. One application might be to set up a budget for school supplies. The student must research the best prices for school
paper, writing tools, a binder, a pencil holder, and art supplies. By identifying what the student needs to purchase to supplement what the student already has at home, the student must problem solve
abstractly by prioritizing needs and quantitatively by using the budget for needed supplies.
3. Construct viable arguments and critique the reasoning of others.
When working with a classroom of students, teachers often facilitate a discussion on how or why a student approached a particular math problem with a specific solution. At home, parents can have a
discussion with their child about options to consider when planning a budget or deciding on a specific purchase. For example, a parent can ask their child “what if” questions or ask if there are
other means to obtain the solution. If buying a Lego set with an allowance, the child might want to consider what set might be interchangeable with other sets that the child already owns. When
shopping for a cell phone, the child should be asked to consider which plan and type of phone best supports individual needs and overall family cost.
4. Group and sort with math.
At the elementary levels of mathematic reasoning, parents can apply mathematics in a variety of ways. For example, at the primary level, parents can help their children count and sort collections of
leaves, coins, stamps, buttons, or other collections. The children can chart groups of leaves, coins, stamps, or buttons. At a higher level of math, children can calculate the fractional quantities
of different groups, convert to percentages, graph on a computer, or create a chart on graph paper. Food recipes can be increased or reduced for different numbers of servings. Children can calculate
unit costs of various food selections when they shop and show their work to parents.
5. Use measuring tools.
Teachers can guide parents in helping their children consider the use of tools when solving a math problem. Young children love using tape measures, yard sticks, and rulers. Favorite home projects
include designing container gardens by using graph paper and a pencil to create scale drawings. Parents can help their children calculate the area required when planting different types of plants and
vegetables. Children can also compute the area of home features with measurements for carpeting, window covers, surface painting, and landscaping.
6. Use family activities to give context to problem solving.
Parents can encourage children to apply their math skills when planning for the family vacation to a national park, a local campsite, or a nearby beach. Children can accurately calculate the driving
distance to each stopping point during the trip and predict the amount of gas that must be purchased during the trip. Children can help parents plan the vacation’s budget by determining where they
will stay and if they qualify for travel discounts. Research can be conducted on statistical information on points of interest, anticipated weather forecasts, and costs for supplies required for the trip.
7. Look for and make use of structure.
Teachers can encourage parents to talk to their children about observable patterns in solving problems. When teaching children about money conversions, such as how many nickels are in a dollar, the
parent can reinforce the discussion by helping the child count by five, ten, or twenty-five to identify the patterns of conversion options in a dollar.
8. Look for and express regularity in repeated reasoning.
As parents become proficient working with their children in mathematical problem-solving, they can help their children notice that calculations are repeated. When multiplying a two-digit number by a
single digit number, the child can be shown how to deconstruct the two-digit number into the sum of easy to multiply numbers. Then the child can multiply each of the easy to multiply numbers by the
single digit number creating two products. The two products can be added to give the answer to the original multiplication problem. For instance, the product of 5 times 27 can be found by multiplying
5 times 20 and adding the product to the product of 5 times 7. Thus, 5 times 27 will equal the sum of 100 + 35 = 135. In this example, the ability to deconstruct and multiply also provides practice
in applying the distributive property; 5(20 + 7) = 100 + 35 = 135.
Teachers can feel overwhelmed with the added job of helping parents use mathematical reasoning with their children at home. The benefits of this guidance are bittersweet. When parents understand that
the mathematical standards are internationally benchmarked, they are assured that all students have access to quality content when transferring between schools and applying for colleges out of state.
In next month’s blog we will discuss how to expand on the use of these standards when helping parents integrate technology into mathematical reasoning with their children at home. | {"url":"https://corwin-connect.com/2016/09/8-easy-strategies-teaching-math-home/","timestamp":"2024-11-09T10:20:13Z","content_type":"text/html","content_length":"168991","record_id":"<urn:uuid:a4b86ad3-33a3-49c1-b08b-0ca11dbe0f16>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00326.warc.gz"} |
Hermann Minkowski
1864 - 1909
23 October 2024 - A Call for Participation was sent to colleagues whose research interests are related to the foundations and nature of spacetime.
20-22 March 2023 - An invitation was sent to colleagues whose research interests are in the foundations and philosophy of spacetime:
Minkowski Institute is inviting colleagues who are interested in joining (as affiliated members of the Institute) its most ambitious research project - to examine rigorously whether gravitational phenomena are nothing more than manifestations of the non-Euclidean geometry of spacetime, which would mean that gravitation is not a physical interaction.
As is well known, in the present situation in physics it is difficult to secure funds for projects in fundamental physics that do not fall into the area of big projects such as the LHC and LIGO, for example. Although the research at the Minkowski Institute does overlap with the research at the LHC and LIGO, our approach and particularly our research strategy are significantly different.
Due to its specific nature and goals, in the present situation the Minkowski Institute tries to become as independent of uncertain funds and donations as possible; now the only steady but small
source of funding is the Minkowski Institute Press and potentially teaching as well once the Institute is granted the right to offer courses. Such an independence ensures that it is solely the
Institute which determines its research projects and the number of colleagues invited to work on them, as outlined in its Mission. Due to the limited funds at this stage, the Minkowski Institute acts
as a "crystallization centre" of an innovative research strategy (extracted from the methods behind the greatest discoveries in physics), which is now being further developed by testing its potential
for accelerating and focusing the research process in different areas of fundamental physics. Once proven to be more productive, it will allow the Institute to act as a "catalyst" in the research in
fundamental physics. Presently the Minkowski Institute is guiding and coordinating the work of researchers who are invited as members of its research teams (and who can afford doing this while
holding their University or other positions).
Now the research at the Minkowski Institute is on its first and major research project as outlined below and the Institute continues to invite researchers who are interested in working on this
project and its implications as affiliated members. Inquiries by interested researchers are given careful consideration.
Recent graduates (Physics or Mathematics) interested in the research at the Minkowski Institute are especially encouraged to contact Vesselin Petkov at vpetkov@minkowskiinstitute.com for specific
information on how to join the research on the present project.
Minkowski Institute's Major Research Project - Summary
Exploring Minkowski's program of regarding
four-dimensional physics as spacetime geometry
In his groundbreaking talk "Space and Time" in 1908 Hermann Minkowski announced the revolutionary view of the unification of space and time into an inseparable four-dimensional world (with time as
the fourth dimension), which we now call spacetime. Almost certainly Minkowski arrived independently at what Einstein called special relativity and at the notion of spacetime (but Einstein and
Poincaré published first) by successfully decoding the profound message (that the world is four-dimensional) hidden in the failed experiments to detect absolute uniform motion, that is, he revealed
the deep physical meaning of the principle of relativity (postulated by Einstein and explained by Minkowski) - physical laws are the same in all inertial reference frames (otherwise absolute motion can be discovered) because the reference frames in relative motion have different proper spaces and times and the physical laws are expressed in the same way in each frame in terms of its proper space and time; but many spaces and times imply a four-dimensional world (Minkowski remarked that it was Einstein who regarded on equal footing the different times of observers in relative motion,
formally introduced by Lorentz, but pointed out that neither Lorentz nor Einstein examined the notion of space, after which he explicitly stated the essence of the physical meaning of the principle
of relativity: "Hereafter we would then have in the world no more the space, but an infinite number of spaces analogously as there is an infinite number of planes in three-dimensional space.
Three-dimensional geometry becomes a chapter in four-dimensional physics"). Minkowski had clearly realized that four-dimensional physics was in fact spacetime geometry since all particles which
appear to move in space and last in time are in reality a forever given web of the particles' worldlines in spacetime. Then Minkowski outlined his program of geometrization of physics:
The whole world presents itself as resolved into such worldlines, and I want to say in advance, that in my understanding the laws of physics can find their most complete expression as interrelations
between these worldlines.
Unfortunately, this program has not been fully explored (even in general relativity) mainly due to two reasons:
1. The reality of the Minkowski four-dimensional world (spacetime) is not an accepted fact in physics since some physicists think that theories are just descriptions of physical phenomena and
therefore physics cannot say which theoretical entities have counterparts in the physical world. This metatheoretical issue (whether theoretical entities adequately represent elements of the
physical world) might have been hampering the advancement of fundamental physics in the last 80 years. Addressing and resolving it can be done by recalling that part of the art of doing physics
is to determine whether different theories are indeed simply different descriptions of the same physical phenomena (as is the case with the three representations of classical mechanics -
Newtonian, Lagrangian, and Hamiltonian), or only one of the theories representing given physical phenomena is the correct one (as is the case with general relativity, which identifies gravity
with the non-Euclidean geometry of spacetime, and other theories, which regard gravity as a force).
2. It is often stated that the notion of spacetime cannot accommodate the probabilistic behavior of quantum objects. However, the fact that quantum objects are not worldlines in spacetime only
indicates what they are not, and in no way questions the reality of spacetime. Quantum objects might simply be more complex structures in spacetime than worldlines. ^[1]
Often the issue of the reality of spacetime is ignored by physicists since they regard it as belonging to philosophy. Hardly anything could be more wrong than such a position, which has far-reaching negative implications for fundamental physics. The dimensionality of the world is a purely physical question that should be answered by physics alone. Moreover, Minkowski himself forcefully stressed that the new view of spacetime arose from the domain of experimental physics.
This research project has two main stages:
• Rigorously examining the relativistic experimental evidence to determine whether it would be at all possible if the world were not four-dimensional.
• If the reality of spacetime is confirmed, which will confirm Minkowski's discovery that four-dimensional physics is merely spacetime geometry, all possible implications for fundamental physics
will be studied, particularly for quantum gravity,^[2] gravitational wave physics, and the nature of elementary particles.
1. ''As an illustration that spacetime can accommodate probability perfectly well, imagine that the probabilistic behavior of the quantum object is merely a manifestation of a probabilistic
distribution of the quantum object itself in the forever given spacetime - an electron, for instance, can be thought of (for the sake of the argument that spacetime structures can be
probabilistic as well) as an ensemble of the points of its disintegrated worldline which are scattered in the spacetime region where the electron wavefunction is different from zero. Had
Minkowski lived longer he might have described such a probabilistic spacetime structure by the mystical expression predetermined probabilistic phenomena.'' V. Petkov, Physics as Spacetime
Geometry. In: Springer Handbook of Spacetime (Springer, Heidelberg 2014), Chapter 8.
2. A comprehensive exploration of the implications of Minkowski's profound idea of regarding four-dimensional physics as spacetime geometry leads to an interesting and potentially important result -
that gravity might not be a physical interaction. This comprehensive exploration forms the basis of Minkowski Institute's most ambitious research project - the nature of gravitation. Such a
stunning explanation of the unsuccessful attempts to create a theory of quantum gravity does not appear to have been examined so far. If Einstein had examined Minkowski's idea thoroughly he would
have most probably considered and carefully analyzed this possibility. Had he lived longer, Minkowski himself might have arrived at this radical possibility. In 1921 Eddington even mentioned it
explicitly - "gravitation as a separate agency becomes unnecessary" [A. S. Eddington, The Relativity of Time, Nature 106, 802-804 (17 February 1921); reprinted in: A. S. Eddington, The Theory of
Relativity and its Influence on Scientific Thought: Selected Works on the Implications of Relativity (Minkowski Institute Press, Montreal 2015)].
This possible explanation of the failure of quantum gravity does not look so shocking when gravity is consistently and rigorously regarded as a manifestation of the non-Euclidean geometry of
spacetime. Then it becomes evident that general relativity does imply that gravitational phenomena are not caused by gravitational interaction since they are fully explained in the theory without
the need to assume the existence of gravitational interaction: what has the appearance of gravitational attraction involves only inertial (interaction-free) motion and is indeed nothing more than
a mere result of the curvature of spacetime. The actual open question in gravitational physics seems to be how matter curves spacetime, not how to quantize the apparent gravitational interaction
(Chapter 8 of the Springer Handbook of Spacetime and Chapter 6 and Appendix C of Inertia and Gravitation). | {"url":"https://www.minkowskiinstitute.com/research.html","timestamp":"2024-11-14T02:11:54Z","content_type":"text/html","content_length":"19038","record_id":"<urn:uuid:7e7ec201-09ea-4d88-80c8-2e39578bba18>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00142.warc.gz"}
Changes in Water Surface Area of the Lake in the Steppe Region of Mongolia: A Case Study of Ugii Nuur Lake, Central Mongolia
Department of Geography, Division of Natural Sciences, School of Arts & Sciences, National University of Mongolia, Ulaanbaatar 210646, Mongolia
Department of Environment and Forest Engineering, School of Engineering and Applied Sciences, National University of Mongolia, Ulaanbaatar 210646, Mongolia
State Key Laboratory of Simulation and Regulation of Water Cycle in River Basin, Water Resources Department, China Institute of Water Resources and Hydropower Research (IWHR), Beijing 100038, China
Surface Water Research Department, Information and Research Institute of Meteorology, Hydrology, and Environment, Ulaanbaatar 210646, Mongolia
College of Hydrology and Water Resources, Hohai University, Nanjing 210098, China
College of Environmental Science and Engineering, Donghua University, Shanghai 201620, China
Department of Natural Resource Management, University of Gondar, Gondar 196, Ethiopia
Authors to whom correspondence should be addressed.
Submission received: 27 February 2020 / Revised: 20 April 2020 / Accepted: 11 May 2020 / Published: 21 May 2020
The Ugii Nuur Lake is not only one of the small hydrologically closed lakes located in the Orkhon River Basin in Central Mongolia but also the most vulnerable area for global climate change.
Therefore, this study aims to investigate the impacts of recent global climate change on the water surface area. The data we analyzed were various measured hydro-meteorological variables of the lake
basin and the lake surface area, which was estimated from Landsat series satellite data from 1986 to 2018. The methods we used were Mann-Kendall (MK), Innovative trend analysis method (ITAM), Sen’s
slope estimator test, correlation, and regression analysis. The variation of lake water surface area has a strong positive correlation with the change of the lake water level (r = 0.95). The
Mann-Kendall trend analysis has indicated that under a significant decrease in total annual precipitation ($Z$ = −0.902) and inflow river discharge ($Z$ = −5.392) and a considerable increase in total
annual evaporation ($Z$ = 4.385) and annual average air temperature ($Z$ = 4.595), the surface area of the Ugii Nuur Lake has decreased sharply ($Z$ = −6.021). The total annual evaporation (r =
−0.64) and inflow river discharge (r = 0.67) were the essential hydro-meteorological factors affecting the surface area of the Ugii Nuur Lake. The lake surface area decreased by 13.5% in 2018
compared with 1986. In the near future, it is vital to conduct scientific studies considering the volume of lake water, groundwater, and the anthropogenic impact.
1. Introduction
Global climate change has apparent impacts on the environmental and hydrological systems [
]. It is possible to determine the impacts by temporal variability in the surface water resources at the center of the mainland [
] especially over the semi-arid steppe, Gobi, and desert region of Central Asia [
]. Thus, it has become an interesting issue to study the change of water resources in those regions since there is low total annual precipitation and few surface water distributions [
]. Those regions include Mongolia, where the annual average air temperature has increased by 2.2 °C since the 1940s, which is three times over-heated than global warming speed. The territory of
Mongolia is characterized as semi-arid to arid. Total annual precipitation is 300–400 mm in the Northern mountainous regions of Mongolia; 150–250 mm in the steppe; 100–150 mm in the steppe desert;
and 50–100 mm in the Gobi Desert. About 80% to 90% of total annual precipitation falls in the warm season, which is from April to September [
]. Thus, the Gobi, desert, and steppe regions of Mongolia are considered vulnerable for global climate change [
]. Monitoring surface water ecosystems in this region may be necessary for detecting the effects of global climate change. One of these regions is the Ugii Nuur Lake basin, which is a representative
location for the dry steppe region of Mongolia and the natural ecosystem influenced by desertification and dryness (
Figure 1
) [
]. The Ugii Nuur Lake has been registered under the Ramsar Convention for the Protection of Wetlands of International Importance since 1998; also, it is a famous tourist destination in Mongolia.
Meanwhile, the Ugii Nuur Lake basin is an ideal location to study Holocene climate evolution in the continental area of the steppe region in Mongolia [
Scientists have implemented several local surveys in the fields of soil, vegetation, and ecology surrounding the Ugii Nuur Lake [
], and have also studied lake morphometric parameters [
], geological parameters such as the age of lake sediments, paleoclimate, environmental change, and climate evolution, etc. [
]. The Ugii Nuur Lake Basin belongs to the steppe region of Mongolia with brown steppe soil as a dominant soil type. As for vegetation cover, Stipa, Bidle grass (Cleistogenes), and Agropyron types of
grass are the dominant types in the lake basin area [
]. Jimee [
] considered that the lake formed as a result of the channel meandering processes of the Orkhon river. He also estimated that the area of the lake basin was 5020 km². The main inflow of the lake is the Khogshin Orkhon river, and the water regime of the lake was influenced by the flow-regulating capacity of the Orkhon river. Approximately 50% of the lake water surface area has a depth of less than 3 m [ ].
]. Several studies focused on lake sediments. Pagma et al. [
] determined the interrelation between lake physics and the surrounding climate by taking samples from the borehole drilled with a depth of 0.38 m at the Ugii Nuur Lake bottom. The analysis of the
radio-carbon test, which is used for determination of sediment age, shows that the age of the Ugii Nuur Lake would be 2880 ± 50 years at a depth of 1.6–2.0 m [
The average elevation above sea level of the lake is not only a vital point of water level measurement but also a tracer of water balance and water surface area of the lake. Different studies present
several elevation values of the mean water level of the lake in different periods. These differences can be attributed to the methods used, the time of determination, and climate conditions.
According to Jimee [
], the elevation of the lake water level was around 1332 m and 1337 m above sea level in 1971 and 2000, respectively. Walther, Gegeensuvd [
], Schwanghart, Schütt [
], and Wang et al. [
] stated that elevations of the lake water levels were 1332 m, 1328 m, and 1332 m, respectively. Studies done by the Information and Research Institute of Meteorology, Hydrology, and Environment of
Mongolia showed that the mean water level of the lake corresponded to an elevation of 1335 m above sea level in 2018 [ ].
The Ugii Nuur Lake is relatively shallow compared to other lakes in Mongolia, and the Khogshin Orkhon river is the only surface inflow into the lake. There are few studies related to groundwater and subsurface flow, so their contribution to the lake water supply remains poorly understood [
]. Satellite images are widely used to investigate water surface area changes [
]. Furthermore, a simple and effective way to study the lake water supply and regime is to estimate the components of the water balance and then compare them with the water surface area of the lake [
]. The use of non-parametric Mann-Kendall (MK) testing, correlation, and regression analysis is also suitable for the expression of long-term changes in hydro-meteorological variables [
]. Thus, this study aims to determine the changes in the water surface area of the Ugii Nuur Lake due to recent global climate change based on the water balance components with data obtained from
satellite and hydro-weather stations.
2. Materials and Methods
2.1. Study Area
The Ugii Nuur Lake (47°46’ N, 102°44’ E) is located in the Orkhon river basin. The total area of the Orkhon river basin is 132,835 km², 38.2% of which belongs to the Arkhangai province of Central Mongolia. The lake lies in the Arkhangai province, about 350 km to the west of the capital city Ulaanbaatar (Figure 2) [
]. The lake, one of the biggest steppe lakes in Mongolia, has fresh water with a mineralization of 0.499 g∙l⁻¹. The main surface inflow is the Khogshin Orkhon river, which is one of the tributaries of the Orkhon river. The lake has a width of 7.9 km from west to east and 5.3 km from north to south, and the shoreline length is 24.7 km. The surface area of the lake varies between 25.3 and 25.7 km², with a mean depth of 6.6 m and a corresponding mean volume of 0.17 km³ [
]. The maximum depth of the lake, 15.3 m, is observed in the center of the lake. The basin area of the lake is 5020 km².
2.2. Data Sources
The long-term climate change survey covers data for longer than 30 years, which is recommended by the World Meteorological Organization (WMO). Various types of observations and satellite data from
1986 to 2018 were collected and statistically analyzed. The climate and hydrological data of the Ugii Nuur Lake basin in central Mongolia during 1986 to 2018 were supplied by the Archive and Database
center at the National Agency of Meteorology and Environment of Mongolia [
]. The daily and monthly air temperature, wind speed, and precipitation data measured at the Ugii Nuur Weather Station were used for estimating the impacts of climate factors on the lake surface area. Daily river discharge data, measured at the Khogshin Orkhon river station as one of the income components of the water balance, have been summarized into total monthly and annual amounts (mm). Then, variations in the components of the water balance were statistically analyzed and compared with the lake surface area. Water level data of the Ugii Nuur Lake were applied for validating the lake surface area estimated
from satellite data. The water surface area of the Ugii Nuur Lake in all Augusts between 1986 and 2012 was calculated by using Normalized Difference Water Index (NDWI) products from Landsat 4 and 5
TM satellite, while data between 2013 and 2018 were obtained by the NDWI products of Landsat 8 OLI satellite [
]. NDWI is an approach to determine water bodies using remotely sensed digital data based on reflected near-infrared radiation and visible green light [
2.3. Methods
The lake water surface images were obtained and calculated by different multi-spectral bands of Landsat satellites. The lake area was calculated by using NDWI in the downloaded images. Analysis of
long-term trends in both the observed and adjusted data was done by using the Mann-Kendall test, with linear changes in the data represented by Kendall-Theil Robust Lines. The non-parametric
Mann-Kendall (MK) test has been applied in studies to detect the trends in hydro-meteorological observations that do not need the normal distribution of data points. This study used the MK test
method to detect the trends in climate and components of water balance time-series data. For evaluating the reliability of MK, the results were compared with the Innovative trend analysis method
(ITAM) and Sen’s slope estimator test. Besides, time-series data for annual average air temperature, total annual precipitation, total annual river discharge, total annual evaporation, and water
surface area were investigated by statistical analysis. Significance levels at 10%, 5%, and 1% were taken to assess the time-series data for climate and river discharge by MK, ITAM, and Sen’s slope
estimator method.
2.3.1. Lake Surface Area Calculation
The mapping of the lake water surface with multi-spectral remote-sensing images is based on the difference in the absorption and reflection of light between water and other features in different frequency bands. As reflections from water gradually weaken from the visible to the infrared bands, the surface water on an image can be delineated with the NDWI index by contrasting the visible (green) wavelength with the near-infrared and short-wave infrared wavelengths [ ]:
$NDWI = \frac{P_{Green} - P_{NIR}}{P_{Green} + P_{NIR}}$

where $P_{Green}$ is the green band and $P_{NIR}$ is the near-infrared band of Landsat 4–5 TM and Landsat 8 OLI satellites. Using NDWI, lake surface water was mapped on the 33 images taken at different times by extraction using the Band math tool of
ENVI 5.5.3 and ArcGIS. For validating lake water surface area calculated by satellite image, the measured lake water level has been correlated with lake water surface area obtained by NDWI.
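A minimal sketch of this step, assuming the green and near-infrared bands have already been read into NumPy arrays; the zero threshold on NDWI and the 30 m Landsat pixel size are illustrative assumptions rather than values stated in the paper:

```python
import numpy as np

def ndwi_lake_area(green, nir, pixel_size_m=30.0, threshold=0.0):
    """Compute NDWI and an approximate open-water area (km^2) from two bands."""
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    ndwi = (green - nir) / (green + nir + 1e-12)  # small term avoids division by zero
    water_mask = ndwi > threshold                 # water reflects more green than NIR
    area_km2 = water_mask.sum() * (pixel_size_m ** 2) / 1e6
    return ndwi, area_km2
```

Applied to each of the 33 August scenes, the returned areas give the kind of time series analyzed in Section 3.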
2.3.2. Mann-Kendall (MK) Analytical Method
The Mann-Kendall (MK) test is a non-parametric statistical test used to detect trends of hydroclimatic variables. The MK test also shows whether the trend is increasing or decreasing. The strength of the trend depends on the magnitude, sample size, and variation of the data series. The trends in the MK test are not significantly affected by outliers occurring in the data series, since the MK test statistic depends only on the positive or negative signs of the differences. The Mann-Kendall test statistic $S$ is computed as [ ]:
$S = \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \mathrm{sgn}(x_j - x_i)$

The trend test is applied to the data values $x_i$ $(i = 1, 2, \ldots, n-1)$ and $x_j$ $(j = i+1, \ldots, n)$. The data value of each $x_i$ is used as a reference point to compare with the data value of $x_j$, which is given as:

$\mathrm{sgn}(x_j - x_i) = \begin{cases} +1 & \text{if } (x_j - x_i) > 0 \\ 0 & \text{if } (x_j - x_i) = 0 \\ -1 & \text{if } (x_j - x_i) < 0 \end{cases}$
where $x_j$ and $x_i$ are the values in periods $j$ and $i$, respectively. When the number of data points is greater than or equal to ten $(n \geq 10)$, the MK test is then characterized by a normal distribution with the mean $E(S) = 0$ and the variance $Var(S)$ given as:

$Var(S) = \frac{n(n-1)(2n+5) - \sum_{k=1}^{m} t_k (t_k - 1)(2 t_k + 5)}{18}$

where $m$ is the number of tied groups in the time series and $t_k$ is the number of ties in the $k$-th tied group. The test statistic $Z$ is then given as [ ]:

$Z = \begin{cases} \dfrac{S - 1}{\sqrt{Var(S)}} & \text{if } S > 0 \\ 0 & \text{if } S = 0 \\ \dfrac{S + 1}{\sqrt{Var(S)}} & \text{if } S < 0 \end{cases}$
When $Z$ is greater than zero, it indicates an increasing trend, and when $Z$ is less than zero, it is a decreasing trend.
In time sequence, the statistics are defined independently:

$UF_k = \frac{d_k - E(d_k)}{\sqrt{Var(d_k)}} \quad (k = 1, 2, \ldots, n)$

Firstly, given the confidence level α, if $UF_k > UF_{\alpha/2}$, the sequence has a significant trend. Then, the time sequence is arranged in reverse order and, using the same calculation, the statistic $UB_k$ is obtained; $UF_k$ and $UB_k$ are drawn as curves, where $UF_1 = 0$ and $UF_k$ follows a standard normal distribution. There is an upward trend when $UF_k$ is greater than 0, while there is a downward trend when it is less than 0. If a significance level α is given, the critical value $t$ is obtained from the normal distribution table and, if $UF_k > t$, this indicates that there is an obvious change of trend in the sequence. If the result is multiplied by −1, then we get the reverse sequence's statistic $UB_k$, where $UB_1 = 0$. If there is an intersection between the two curves, the intersection is the beginning of the mutation [ ]. When the two curves $UF_k$ and $UB_k$ intersect and the intersection point is between the confidence lines, then the intersection point is the abrupt change point. According to the test, if the value zero lies between the confidence lines defined from the processing of the data, then the corresponding mean values are statistically equal. Thus, the confidence lines correspond to the chosen significance levels [ ].
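A minimal sketch of the basic MK statistic, variance (with the tie correction), and Z score defined above; the SciPy p-value is an added convenience and not part of the paper's description:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Return S, Var(S), the standardized Z score, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S: signs of all pairwise differences x_j - x_i for j > i
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    # Variance with the correction for tied groups
    _, counts = np.unique(x, return_counts=True)
    tie_term = (counts * (counts - 1) * (2 * counts + 5)).sum()
    var_s = (n * (n - 1) * (2 * n + 5) - tie_term) / 18.0
    # Standardized test statistic
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2.0 * (1.0 - norm.cdf(abs(z)))
    return s, var_s, z, p
```

Passing, for instance, the 1986–2018 series of annual evaporation totals should yield a positive Z of the same sign as the values reported in Section 3.2.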
2.3.3. Innovative Trend Analysis Method (ITAM)
ITAM has been used in many studies to detect trends in hydro-meteorological observations, and its accuracy has been compared with the results of the MK method. The trend indicator is given as:

$φ = \frac{1}{n} \sum_{i=1}^{n} \frac{10 (x_j - x_i)}{\mu}$

where $φ$ is the trend indicator, $n$ is the number of observations in the subseries, $x_i$ is the data series in the first half subseries class, $x_j$ is the data series in the second half subseries class, and $\mu$ is the mean of the data series in the first half subseries class [ ].
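A minimal sketch of this indicator, assuming (as in the usual innovative trend analysis) that the record is split into two equal halves that are each sorted before the paired differences are formed; that splitting rule is an assumption, since the paper only states the formula:

```python
import numpy as np

def itam_indicator(x):
    """Innovative trend indicator: mean scaled difference between the two half-series."""
    x = np.asarray(x, dtype=float)
    half = len(x) // 2
    first = np.sort(x[:half])            # x_i: sorted first half of the record
    second = np.sort(x[half:2 * half])   # x_j: sorted second half of the record
    mu = first.mean()                    # mean of the first half
    return np.mean(10.0 * (second - first) / mu)
```

A positive value indicates an increasing trend and a negative value a decreasing one, mirroring the sign convention of the MK Z statistic.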
2.3.4. Sen’s Slope Estimator Test
Sen’s slope estimator can be used to discover trends in a univariate time series of hydro-meteorological variables such as runoff series. The slope $Q_i$ between two data points is given by the equation:

$Q_i = \frac{x_j - x_k}{j - k}, \quad \text{for } i = 1, 2, \ldots, N$

where $x_j$ and $x_k$ are data points at times $j$ and $k$ $(j > k)$, respectively. When there is only a single datum in each time period, then $N = \frac{n(n-1)}{2}$, where $n$ is the number of time periods. However, if the number of data points in each year is more than one, then $N < \frac{n(n-1)}{2}$, where $n$ is the total number of observations. The $N$ values of the slope estimator are arranged from smallest to largest. Then, the median slope ($\beta$) is computed as:

$\beta = \begin{cases} Q_{[(N+1)/2]} & \text{when } N \text{ is odd} \\ \frac{1}{2} \left( Q_{[N/2]} + Q_{[(N+2)/2]} \right) & \text{when } N \text{ is even} \end{cases}$

The sign of $\beta$ shows whether the trend is increasing or decreasing [ ].
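A minimal sketch for the single-value-per-year case, with the array index serving as the time coordinate:

```python
import numpy as np

def sens_slope(x):
    """Median of all pairwise slopes (x_j - x_k) / (j - k) for j > k."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[k]) / (j - k)
              for k in range(n - 1) for j in range(k + 1, n)]
    return np.median(slopes)   # the sign gives the trend direction
```

`np.median` already handles the odd and even cases of the beta definition above.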
2.3.5. Analysis and Estimation of Climate and Hydrological Variables Influencing the Water Surface Area of the Lake
To achieve the aim of this study, various types of climate and hydrological measurement data were taken for statistical calculations, for example, hydro-meteorological variables were statistically
analyzed with the change of lake surface area by correlation and regression analysis. Since the temporal change of water balance of the lake is directly related to the change of lake surface area [
], one of the components of the water balance, namely evaporation, has been estimated for this analysis.
The primary method for estimating the water balance of the lake is based on the difference between the incoming and outgoing water fluxes of the lake [ ]:
$\frac{dV}{dt} = P - E + Q - Q'$

where $dV$ is the change of lake volume within a given time period $(dt)$, $P$ is the precipitation amount, $E$ is the evaporation from the open water surface of the lake, $Q$ is the surface inflow to the lake, and $Q'$ is the surface outflow from the lake. Notably, the Ugii Nuur Lake does not have an outflowing river.
The supply from groundwater and subsurface flow is an essential component of the water balance of the lake. However, hydro-geological studies of the Ugii Nuur Lake have rarely been carried out. For detecting the relationship between recent global climate change and variations in the lake water surface area, we consider the precipitation and river discharge as the income terms of the water balance equation and the evaporation as the outcome term. Since both the precipitation and river inflow were measured at the hydro-weather station, we can estimate the evaporation from the lake surface.
Dalton [
] proposed the physical law for estimating evaporation based on the relation among average daily humidity, air temperature, and wind speed. The equation was updated several times, and the widely used
equation was proposed by Penman [
]. Gombo [
] divided major Mongolian lakes into three groups by elevation above mean sea level. Based on the equations mentioned above, the researcher proposed empirical equations for evaporation from the lake surface for each lake group. Since the Ugii Nuur Lake has an elevation of 1335 m above mean sea level, it is suggested to calculate its evaporation by the following equation [ ]:
$E = 0.32 (1 + 0.38 v_2)(e_0 - e_2)$

where $E$ is the evaporation (mm), $v_2$ is the wind speed at 2.0 m above the ground surface (m∙s⁻¹), $e_0$ is the water vapor pressure computed from the water surface temperature (hPa), and $e_2$ is the water vapor pressure at 2.0 m above the ground surface (hPa).
Magnus [
] created the first version for estimating saturated vapor pressure by an empirical method. Several scientists developed the Magnus formula further. The equation by Matveev [
] was used for this research (Equation (14)):

$e = 6.108 \cdot 10^{a t / (b + t)}$

where, if $t < 0$ °C, then $a = 9.5$ and $b = 265.5$; if $t > 0$ °C, then $a = 7.63$ and $b = 241.9$; and $t$ is the temperature (°C).
Long-term measurements of lake surface temperature are lacking in Mongolia. Thus, to calculate the saturated vapor pressure at the surface, the lake mixed-layer temperature and lake ice temperature have been
taken from the Fifth Generation of European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis of the global climate (ERA5) [
] for estimating evaporation from the water surface area of the Ugii Nuur Lake.
Wind speed is measured at a height of 10 m above the ground at each meteorological observation site [
]. The version of the empirical equation suggested by the WMO, based on the law by Hellman [
], was used for estimating the wind speed at 2 m above the ground surface (Equation (15)) [ ]:
$v_h = v_{10} \left[ 0.233 + 0.656 \lg (h + 4.75) \right]$

where $v_h$ is the estimated wind speed at a given height $h$ (m) and $v_{10}$ is the measured wind speed at the 10 m level.
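A minimal sketch chaining the Magnus-type vapor pressure formula, the 2 m wind conversion, and the lake evaporation equation above; the use of relative humidity to obtain e2 and the illustrative input values are assumptions, not prescriptions from the paper:

```python
import math

def saturation_vapor_pressure(t_c):
    """Saturated vapor pressure (hPa) at temperature t_c (deg C), Magnus-type formula."""
    a, b = (9.5, 265.5) if t_c < 0 else (7.63, 241.9)
    return 6.108 * 10 ** (a * t_c / (b + t_c))

def wind_at_2m(v10):
    """Convert wind speed measured at 10 m to the 2 m level (Hellman-type profile)."""
    h = 2.0
    return v10 * (0.233 + 0.656 * math.log10(h + 4.75))

def lake_evaporation(surface_temp_c, air_temp_c, rel_humidity, v10):
    """Evaporation (mm) from the lake surface for one time step.

    e0 uses the water surface temperature; e2 is approximated here as the
    saturation pressure at air temperature scaled by relative humidity.
    """
    e0 = saturation_vapor_pressure(surface_temp_c)
    e2 = rel_humidity * saturation_vapor_pressure(air_temp_c)
    v2 = wind_at_2m(v10)
    return 0.32 * (1.0 + 0.38 * v2) * (e0 - e2)

# Illustrative mid-summer values: 18 C surface, 15 C air, 55% humidity, 4 m/s wind at 10 m
print(lake_evaporation(18.0, 15.0, 0.55, 4.0))
```

Summing such estimates over the year gives a total annual evaporation series of the kind analyzed in Section 3.2.4.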
3. Results and Discussion
3.1. Changes in Lake Water Surface Area
The water surface area of the Ugii Nuur Lake was estimated by using ArcGIS software with Landsat satellite data. Then, the data were analyzed as a time series.
Table 1
shows the water surface area of the lake in each August between 1986 and 2018.
Figure 3
illustrates the inter-annual variation of the water surface area of the Ugii Nuur Lake. Overall, the area of the lake decreased between 1986 and 2018. However, within this period, the water surface area of the Ugii Nuur Lake showed several increases and decreases. For instance, the water surface area of the lake decreased from 1986 to the early 1990s, then gradually increased until the 2000s. The area of the lake then decreased rapidly until 2011, after which it grew again until 2018.
Figure 4
shows the water surface area of the lake and its spatial change in August 1986 and 2018 by satellite image. According to the changes in the lake surface area, the largest lake area was observed in
2000 with an area of 27.11 km², while the smallest area was 21.80 km² in 2011. The average water surface area of the Ugii Nuur Lake is about 24.90 km². In 1986, the lake area was 26.04 km², and it decreased to 24.40 km² in 2018. Long-term observations of the water surface area of the lake revealed that the lake area has decreased by 13.5% within the last 33 years (Figure 5).
To illustrate the decline in the time series of the water surface areas of the Ugii Nuur Lake calculated with the NDWI index from Landsat satellite data, the imagery of the lake for every four years from 1986 to
2018 is shown in
Figure 5
. The shape and surface area of the lake have considerably changed over the last 30 years.
The water resources and morphometric parameters of the lake were obtained by satellite remote sensing data [
]. The water level data measured at the Ugii Nuur Lake from 2002 to 2018 were compared with the water surface area of the lake obtained from remote sensing data (
Figure 6).
Figure 6
shows that the variation of lake water surface area has a strong positive correlation with the change of lake water level (r = 0.95). A strong relationship between those lake morphometric parameters
matches with the study results of Shang [
] and Fang [
]. Thus, data for the water surface area of the lake estimated by the NDWI products from Landsat satellite can be used for further studies.
3.2. Trend Analysis
The trend analysis was carried out by using MK, ITAM, and Sen’s estimator Test methods with the data series of climate and hydrological measurements at the Ugii Nuur Lake Basin from 1986 to 2018.
3.2.1. Air Temperature Analysis
The annual average air temperature for the Ugii Nuur weather station was 1.2 °C between 1986 and 2018. It was a maximum of 3.3 °C in 2017 and a minimum of −1.2 °C in 1986. Analysis of annual average
air temperature (change of parameter) by the MK trend method shows a sharp increase at the Ugii Nuur Weather Station. Such a sharp statistical increase started in 1986 and has been observed in 2008
and 2018 ($Z$ = 4.595). The rise in annual average air temperature was also confirmed by the results provided by the Innovative Trend (φ = 16.076) and Sen’s Slope Estimator Test (β = 0.065) methods (Figure 7).
3.2.2. Precipitation Analysis
The total annual precipitation at the Ugii Nuur weather station was 256.6 mm as the average between 1986 and 2018. It was a maximum of 380.5 mm in 1994 and a minimum of 170.8 mm in 2002. Analysis of
total annual precipitation series shows a slight decline from 1994 to the 2010s. Then, the annual precipitation exhibited an increasing trend from 2010 to 2018. A statistical decline was observed in 2010 ($Z$ = −0.902) at the Ugii Nuur Weather Station based on the Mann-Kendall analysis approach. The change of trends was consistent with the other two methods, Innovative Trend (φ = −0.542) and Sen’s Slope Estimator Test (β = −0.888) (Figure 8).
3.2.3. River Discharge Analysis
The total annual inflow river discharge at the Khogshin Orkhon river gauge station was 36.2 mm as the average between 1986 and 2018. It was a maximum of 147.3 mm in 1994 and a minimum of 0.6 mm in 2010. Analysis of the total annual discharge series of the Khogshin Orkhon River shows a rapid decrease of river discharge from 1988 to 2018. A statistically significant drop in river discharge was observed from 2005 to 2018 ($Z$ = −5.392) based on the Mann-Kendall analysis approach. The change of trends was also consistent with the other two methods, Innovative Trend (φ = −6.511) and Sen’s Slope Estimator Test (β = −0.015) (Figure 9).
3.2.4. Evaporation Analysis
The total annual evaporation from the Ugii Nuur Lake was 110.5 mm as the average between 1986 and 2018. It was a maximum of 222.2 mm in 2017 and a minimum of 50.8 mm in 1990. Analysis of total annual
evaporation (parameter change) by the Mann-Kendall trend method shows a sharp increase at the Ugii Nuur Lake. A statistically sharp increase started in the 1990s and has also been observed from 2013
to 2018 ($Z$ = 4.385). This increase in total annual evaporation was also confirmed by the results provided by the Innovative Trend (φ = 4.328) and Sen’s Slope Estimator Test (β = 2.256) methods (Figure 10).
3.2.5. Analysis of the Water Surface Area of the Lake
The water surface area has a decreasing trend ($Z$ = −6.021) from 1986 to 2018 by the Mann-Kendall trend method. During the period from 1986 to 2000, the lake water surface area showed a gradually increasing trend. From 2000 to 2018, the water surface area of the lake had a statistically sharp decreasing trend according to the Mann-Kendall trend analysis. The above results on the lake water surface area by the Mann-Kendall method were the same as the outputs by Innovative Trend (φ = −0.896) and Sen’s Slope Estimator Test (β = −0.102) (Figure 11).
MK, ITAM, and Sen’s Slope Estimator approaches have been used for trend analysis of several parameters, such as long-term average annual air temperature, total annual precipitation, total annual
inflow river discharge, total annual evaporation at the Ugii Nuur Lake Basin, and the water surface area of the Ugii Nuur Lake (
Table 2).
Table 2
presents the annual trend analysis of the hydro-meteorological variables and the water surface area of the lake at the station by the MK test, ITAM, and Sen’s slope estimator test. Hence, the larger the increase or decrease in the innovative trend analysis (φ) test value, the stronger the magnitude of the trend.
3.3. Impact of Climate Parameters and Water Balance Components on the Water Surface Area of the Ugii Nuur Lake
According to the results of the MK, ITAM, and Sen’s Slope Estimator test methods for the time series of climate and hydrological variables shown in Section 3.2, a decrease was observed in the water surface area of the lake estimated from the satellite data; this decrease showed a negative correlation with the sharp increase in annual average air temperature and total annual evaporation, and a positive correlation with the decrease in total annual river discharge and total annual precipitation. The results were consistent with the studies conducted by Chebud and Melesse [ ].
Statistical analysis has been done between different components of the lake water balance and climatic factors [
] by using hydro-meteorological data measured at the Ugii Nuur Lake Basin and water surface area of the lake obtained from Landsat satellite from 1986 to 2018.
Figure 12
illustrates the results of the analysis mentioned above.
The scatter chart in
Figure 12
shows that the water surface area of the Ugii Nuur Lake had an inverse correlation with annual average air temperature and total annual evaporation. In contrast, it had a direct correlation with the total annual river discharge (inflow) and total annual precipitation [ ]. The correlation coefficient between the annual average air temperature and lake surface area was −0.26. In other words, the increase of annual average air temperature led to an increase of total annual evaporation, and this caused a decrease in the water surface area of the lake. Thus, the effect of the regime of annual average air temperature on the lake surface area was weak (Figure 12a).
The surface area of the lake is directly dependent on the volume of the lake [
]. Therefore, it is necessary to investigate the relationship between lake surface area and components of the water balance, such as total annual precipitation, river discharge, and evaporation. The
correlation coefficient between the total annual precipitation and lake surface area was 0.08 (
Figure 12
b). If the precipitation amount increases, the lake surface area will expand. However, total annual precipitation has decreased by 7.1% for the 33 years of the period from 1986 to 2018. The decrease
rate in precipitation was about two times less than the decrease in the lake surface area mentioned in
Section 3.1
. In other words, small decreases in total annual precipitation led to a weak correlation with the lake surface area. The weak relationship was confirmed by the result of a slight decrease (Z =
−0.902) calculated using the MK analysis in
Section 3.2.2
. The correlation coefficient between total annual river discharge and the lake water surface area was 0.67, which indicates that it had a positive and moderate effect on the water surface area of
the lake (
Figure 12
c). A decrease in the total annual inflow river discharge brought a decline in the lake water surface area. It was confirmed by the result of a statistically significant decrease (Z = −5.392)
calculated by the MK analysis in Section 3.2.3. The correlation coefficient between total annual evaporation and lake water surface area was −0.64, i.e., negative and moderate (
Figure 12
d). That is to say, if the total annual evaporation rate grows, the lake surface area will decrease. The moderate relationship was confirmed by the result of a statistically sharp increase ($Z$ = 4.385) calculated by the MK analysis in Section 3.2.4.
From the above results, the average annual air temperature and the total annual precipitation were not strongly related to the surface area of the lake. However, the total annual river flow and the
total annual evaporation had a strong relationship with the surface area of the lake. In other words, the total annual evaporation and inflow river discharge were the essential hydro-meteorological
factors affecting the surface area of the Ugii Nuur Lake. Therefore, the multiple regression equation of the lake surface area depending on the total annual evaporation and the total annual river
discharge was obtained (Equation (17)).
$A = 25.82396 + 0.025622\,Q - 0.01679\,E$

where $A$ is the lake water surface area (km²), $Q$ is the total annual river discharge (mm), and $E$ is the total annual evaporation (mm).
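A minimal sketch of applying this fitted relationship; the function name is illustrative and the sample inputs are simply the long-term mean discharge and evaporation reported earlier:

```python
import numpy as np

def lake_area_from_regression(q_mm, e_mm):
    """Predict lake surface area (km^2) from Equation (17).

    q_mm : total annual river discharge (mm)
    e_mm : total annual evaporation (mm)
    """
    return 25.82396 + 0.025622 * np.asarray(q_mm) - 0.01679 * np.asarray(e_mm)

# Long-term means from Sections 3.2.3 and 3.2.4: 36.2 mm discharge, 110.5 mm evaporation
print(lake_area_from_regression(36.2, 110.5))  # roughly 24.9 km^2, the reported mean area
```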
The determination coefficient and statistical significance of Equation (17) were r² = 0.64 and p < 0.0001, respectively. This means that 64% of the variations in the water surface area of the Ugii Nuur Lake are related to the variability of the total annual evaporation and the total
annual river discharge by the equation. The remaining 36% can be explained by other factors that were not chosen for the regression model. The surface area of the lake was found using Equation (17)
and then compared with the values calculated from the satellite imagery (
Figure 13).
The correlation coefficient between satellite and model values of the lake water surface area was r = 0.80 (p < 0.0001), and they had a positive and strong relationship with each other. It validated
the results of
Figure 12.
The results of several studies showed that recent global climate change influenced the surface area of the lake [
]. It indicated that a decrease in the total annual inflow river discharge and an increase in total annual evaporation were crucial hydro-meteorological factors in the reduction of the water surface area of the Ugii Nuur Lake. The study serves as a baseline for lakes in Mongolia’s steppe region and as a benchmark for determining how much of the lake area is affected by recent global climate change.
It was consistent with the results of Tao et al.’s [
] study on Mongolian plateau lakes.
The Ugii Nuur Lake surface area has sharply decreased since 1995 (
Figure 11
) [
]. It may be related to the impact of groundwater and also the socio-economic activities, including fishing, agriculture, and tourism in the lake basin.
4. Conclusions
In this study, Landsat satellite NDWI multi-channel data were used to calculate the values of the Ugii Nuur Lake surface area from 1986 to 2018. The satellite data was validated by the measurement of
the lake water level (r = 0.95). The lake surface area has decreased by 13.5% for the period from 1986 to 2018.
Non-parametric trend analysis of hydro-climatic variables was used to determine the reasons for the changes in the lake water surface area. The Mann-Kendall trend analysis indicated a slight decrease in total annual precipitation (Z = −0.902) and a significant decrease in inflow river discharge (Z = −5.392). In contrast, a statistically significant increasing trend was seen in total annual evaporation (Z = 4.385) and annual average air temperature (Z = 4.595). The combined impact of the changes in the above variables has led to a dramatic change in the water surface area of the Ugii Nuur Lake. All results were confirmed by ITAM and the Sen’s Slope Estimator Test approach.
Correlation and regression analyses related several hydro-meteorological variables representing global climate change to the lake surface area. The analyses showed that the most critical
factors affecting the water surface area of the Ugii Nuur Lake were the total annual evaporation (r = −0.64) and the total annual inflow river discharge (r = 0.67).
In the near future, it is vital to conduct scientific studies considering the volume of lake water, groundwater, and the anthropogenic impact.
Author Contributions
Conceptualization, E.S., S.D., and A.E.; methodology, B.D., and M.G.; software, T.G., and K.W.; validation, D.Y., H.W., and B.W.; formal analysis, B.D.; investigation, S.D.; resources, O.D.;
writing—original draft preparation, E.S., and A.E.; writing—review and editing, O.D., A.A., A.G., and W.B.; visualization, Y.Y. and B.G.; supervision, T.Q., and D.Y.; project administration, D.Y.;
funding acquisition, H.W., D.Y., and A.E. All authors have read and agreed to the published version of the manuscript.
Funding
This study was funded by the National Key Research and Development Project of China (grant 2016YFA0601503). Moreover, financial support for this study was provided by the Young Scientist Grant
(P2018-3568) at the National University of Mongolia.
Acknowledgments
The authors would like to thank the Information and Research Institute of Meteorology, Hydrology, and Environment of Mongolia for providing the raw hydro-meteorological data. We also thank the China
Institute of Water Resources and Hydropower Research for financing this research.
Conflicts of Interest
The authors declare no conflict of interest.
1. Dorjsuren, B.; Yan, D.; Wang, H.; Chonokhuu, S.; Enkhbold, A.; Davaasuren, D.; Girma, A.; Abiyu, A.; Jing, L.; Gedefaw, M. Observed trends of climate and land cover changes in Lake Baikal basin.
Environ. Earth Sci. 2018, 77, 725. [Google Scholar] [CrossRef]
2. Lioubimtseva, E.; Cole, R.; Adams, J.M.; Kapustin, G. Impacts of climate and land-cover changes in arid lands of Central Asia. J. Arid Environ. 2005, 62, 285–308. [Google Scholar] [CrossRef]
3. Lioubimtseva, E.; Simon, B.; Faure, H.; Faure-Denard, L.; Adams, J.M. Impacts of climatic change on carbon storage in the Sahara–Gobi desert belt since the Last Glacial Maximum. Global Planetary
Change 1998, 16–17, 95–105. [Google Scholar] [CrossRef]
4. Malsy, M.; Aus der Beek, T.; Eisner, S.; Flörke, M. Climate change impacts on Central Asian water resources. Adv. Geosci. 2012, 32, 77–83. [Google Scholar] [CrossRef] [Green Version]
5. Lioubimtseva, E.; Henebry, G.M. Climate and environmental change in arid Central Asia: Impacts, vulnerability, and adaptations. J. Arid Environ. 2009, 73, 963–977. [Google Scholar] [CrossRef]
6. Nandintsetseg, B.; Greene, J.S.; Goulden, C.E. Trends in extreme daily precipitation and temperature near lake Hövsgöl, Mongolia. Internat. J. Climatol. 2007, 27, 341–347. [Google Scholar] [
7. Liancourt, P.; Boldgiv, B.; Song, D.S.; Spence, L.A.; Helliker, B.R.; Petraitis, P.S.; Casper, B.B. Leaf-trait plasticity and species vulnerability to climate change in a Mongolian steppe. Glob.
Change Biol. 2015, 21, 3489–3498. [Google Scholar] [CrossRef]
Figure 1. The Ugii Nuur Lake, northwest-facing view of the landscape in 2018. Photo by Uuganbat Tumur.
Figure 6. Linear regression for a relationship between water level and surface area of the Ugii Nuur Lake.
Figure 7. The trend of annual average air temperature at the Ugii Nuur Lake site (where UF and UB are parameters of the change, UB = −UF).
Figure 8. The trend of total annual precipitation at the Ugii Nuur Lake site (where UF and UB are parameters of the change, UB = −UF).
Figure 9. The trend of the total annual discharge of the Khogshin Orkhon River, which is an inflow of the Ugii Nuur Lake (where UF and UB are parameters of the change, UB = −UF).
Figure 10. The trend of total annual evaporation at the Ugii Nuur Lake (where UF and UB are parameters of the change, UB = −UF).
Figure 11. The trend of the annual mean lake area of the Ugii Nuur Lake (where UF and UB are parameters of the change, UB = −UF).
Figure 12. Linear regression for a relationship between the water surface area of the lake and variables such as (a) air temperature, (b) precipitation, (c) river flow, and (d) evaporation.
Figure 13. A relationship between lake surface areas estimated from satellite and multiple regression equation.
Years 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996
Lake area 26.04 25.96 25.84 24.45 25.98 26.74 25.74 26.43 26.48 26.78 25.88
Years 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007
Lake area 25.91 26.17 25.89 27.11 25.69 24.81 24.95 25.30 25.43 24.80 24.35
Years 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018
Lake area 23.68 22.79 22.24 21.80 22.16 22.52 23.37 23.13 24.19 24.38 24.60
Table 2. Comparison of results by Mann-Kendall (MK), Innovative Trend (ITAM), and Sen's Slope Estimator Test.
No Parameters Mann-Kendall Trend Analysis Innovative Trend Analysis Sen's Slope Estimator Test Approach
1 Trend of annual average air temperature 4.595 *** 16.076 *** 0.065
2 Trend of total annual precipitation −0.902 −0.542 −0.888
3 Trend of total annual river discharge −5.392 *** −6.511 *** −0.015
4 Trend of total annual evaporation 4.385 *** 4.328 *** 2.256 **
5 Trend of water surface area −6.021 *** −0.896 −0.102
* 0.1 trend with low variation; ** 0.05 trend with relative changes; *** 0.01 trend with high variation.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:/
Share and Cite
MDPI and ACS Style
Sumiya, E.; Dorjsuren, B.; Yan, D.; Dorligjav, S.; Wang, H.; Enkhbold, A.; Weng, B.; Qin, T.; Wang, K.; Gerelmaa, T.; et al. Changes in Water Surface Area of the Lake in the Steppe Region of
Mongolia: A Case Study of Ugii Nuur Lake, Central Mongolia. Water 2020, 12, 1470. https://doi.org/10.3390/w12051470
AMA Style
Sumiya E, Dorjsuren B, Yan D, Dorligjav S, Wang H, Enkhbold A, Weng B, Qin T, Wang K, Gerelmaa T, et al. Changes in Water Surface Area of the Lake in the Steppe Region of Mongolia: A Case Study of
Ugii Nuur Lake, Central Mongolia. Water. 2020; 12(5):1470. https://doi.org/10.3390/w12051470
Chicago/Turabian Style
Sumiya, Erdenesukh, Batsuren Dorjsuren, Denghua Yan, Sandelger Dorligjav, Hao Wang, Altanbold Enkhbold, Baisha Weng, Tianlin Qin, Kun Wang, Tuvshin Gerelmaa, and et al. 2020. "Changes in Water
Surface Area of the Lake in the Steppe Region of Mongolia: A Case Study of Ugii Nuur Lake, Central Mongolia" Water 12, no. 5: 1470. https://doi.org/10.3390/w12051470
Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details
Article Metrics | {"url":"https://www.mdpi.com/2073-4441/12/5/1470?utm_source=releaseissue&utm_medium=email&utm_campaign=releaseissue_water&utm_term=doilink135","timestamp":"2024-11-01T23:04:23Z","content_type":"text/html","content_length":"539160","record_id":"<urn:uuid:e647b8cb-3855-4dc1-a60b-7d26e88cece4>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00072.warc.gz"} |
Research On Bi-level Programming Problem Based On Improved Particle Swarm Optimization Algorithm
Posted on:2017-05-14 Degree:Master Type:Thesis
Country:China Candidate:Y R Zhang Full Text:PDF
GTID:2348330512970516 Subject:Computer technology
Bi-level programming, with its two layers of hierarchical structure, is usually used to solve system optimization problems. As an important branch of operations research, it has been frequently applied to various fields, such as resource allocation, pricing, supply chain management, and environmental protection. At the same time, bi-level programming problems are extremely difficult to solve: only when the structure of the bi-level program satisfies certain requirements can it be solved efficiently; otherwise the solution of the bi-level programming problem is very difficult. For some models based on actual design problems, such as nonlinear, non-smooth bi-level programming models, it is difficult for the above methods to obtain a global optimal solution. Swarm intelligence optimization algorithms have strong global search ability and place no special requirements on the objective function of the model to be optimized. Intelligent optimization algorithms such as GA, PSO, and simulated annealing have therefore become effective algorithms for solving bi-level programming problems. In this paper, building on the research results of the related literature, we propose an adaptive particle swarm optimization algorithm based on disturbances (ADPSO) to solve the bi-level programming model. First, the basic PSO algorithm is improved in three respects; then the optimized particle swarm algorithm is used to solve the bi-level programming model. Finally, the algorithm proposed in this paper is validated by comparison with other algorithms. The contents of this paper are as follows: (1) An adaptive particle swarm optimization algorithm based on disturbances (ADPSO) is proposed. The main improvement strategies are: 1) a disturbance factor is added to the velocity-updating formula, so that the population search range is expanded; 2) the adaptive inertia weight decreases exponentially in order to balance global and local optimization; 3) an adaptive Cauchy mutation is applied to the best particle to expand the search space, reduce the possibility of getting trapped in a local optimum, and avoid premature convergence. The proposed algorithm enhances global search capability and has higher optimization performance, so that the convergence precision and convergence speed of PSO are improved markedly. (2) An algorithm that uses the improved PSO to solve the bi-level programming problem is proposed. The interactive iteration between two improved particle swarm optimization algorithms reflects the decision-making process of the bi-level programming problem. Comparison with other examples shows that this is an effective algorithm for the bi-level programming problem. Finally, a bi-level programming model of urban rail passenger transport pricing is established, and the proposed bi-level programming algorithm is used to solve the model, which verifies the feasibility of the model and algorithm. In the end, the research results and methods in this paper are summarized, and future work is outlined.
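The abstract does not give the exact update equations, so the following Python fragment is only a sketch of the kind of velocity update ADPSO describes (disturbance term, exponentially decreasing inertia weight, Cauchy mutation of the global best); the parameter names and the specific forms of the disturbance and weight schedule are assumptions, not taken from the thesis.

```python
import numpy as np

def adpso_step(x, v, pbest, gbest, it, max_it,
               c1=2.0, c2=2.0, w_start=0.9, w_end=0.4, disturb=0.05):
    """One illustrative ADPSO-style update (hypothetical parameterization)."""
    n, dim = x.shape
    # exponentially decreasing adaptive inertia weight
    w = w_end + (w_start - w_end) * np.exp(-5.0 * it / max_it)
    r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
    # disturbance factor added to the velocity update to widen the search range
    noise = disturb * (np.random.rand(n, dim) - 0.5)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x) + noise
    x = x + v
    # adaptive Cauchy mutation applied to the best particle
    gbest_mut = gbest + np.random.standard_cauchy(dim) * (1.0 - it / max_it)
    return x, v, gbest_mut
```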
Keywords/Search Tags: Bi-level programming problem(BLPP), Particle swarm optimization algorithm(PSO), Disturbance factors, Inertia weight, Cauchy mutation | {"url":"https://globethesis.com/?t=2348330512970516","timestamp":"2024-11-10T02:39:28Z","content_type":"application/xhtml+xml","content_length":"9106","record_id":"<urn:uuid:871874d2-7b42-40a2-96b9-f7ea7110dc05>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00032.warc.gz"} |
MD5 vs CRC - Detailed Comparison » Network Interview
MD5 vs CRC – Detailed Comparison
Difference Between MD5 and CRC
MD5 and CRC are two of the most commonly used hashing algorithms, for example when comparing files, among other use cases. In this article we will look at both concepts and at how one scores over the other.
Errors and Error detection
When bits are transmitted over the physical medium, they can get corrupted due to interference and network problems. A corrupted bit changes from 0 to 1 or from 1 to 0, and the altered data received by the receiver is said to contain errors.
Error detection techniques are responsible for checking whether any error has occurred in the frame that has been transmitted.
In this error detection technique the sender sends some additional bits along with the data bits. The receiver checks the data based on these additional redundant bits. If it finds the data intact, it removes the redundant bits and passes the message to the upper layers.
There are three main categories for detecting errors:
Cyclic redundancy check (CRC)
• CRC is based on binary division.
• In CRC, a sequence of redundant bits, known as cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
• At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be correct and is therefore accepted.
• A remainder indicates that the data unit has been damaged in transit and must therefore be rejected.
Steps Involved
Error detection using the CRC technique involves the steps listed below.
Step-01: Calculation of CRC at Sender Side
At sender side,
• A string of n 0s is appended to the data unit to be transmitted.
• Here, n is one less than the number of bits in the CRC generator (the divisor).
• After division, the remainder so obtained is called the CRC.
• It may be noted that the CRC also consists of n bits.
Step-02: Appending CRC to Data Unit
At sender side,
• The CRC is obtained after the binary division.
• The string of n 0s appended to the data unit earlier is replaced by the CRC remainder.
Step-03: Transmission to Receiver
The newly formed code word (original data followed by the CRC) is transmitted to the receiver.
Step-04: Checking at Receiver Side
At receiver side,
• The received code word is divided by the same CRC generator.
• After the division, the remainder is checked.
The following two cases are possible:
If the remainder is zero, no error occurred and the receiver accepts the data. If the remainder is nonzero, some error occurred during transmission and the receiver rejects the data.
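As a rough illustration of the binary division described above (the generator value is an arbitrary example, not a standard CRC polynomial), a bit-level sketch in Python:

```python
def crc_remainder(data_bits: str, generator: str) -> str:
    """Compute the CRC remainder of a bit string by binary (XOR) long division."""
    n = len(generator) - 1              # number of CRC bits
    padded = list(data_bits + "0" * n)  # append n zeros to the data unit
    for i in range(len(data_bits)):
        if padded[i] == "1":            # only divide when the leading bit is 1
            for j, g in enumerate(generator):
                padded[i + j] = str(int(padded[i + j]) ^ int(g))
    return "".join(padded[-n:])         # the last n bits are the remainder (CRC)

data = "11010011101100"
gen = "1011"                            # example generator (divisor)
crc = crc_remainder(data, gen)
print(crc)                              # CRC appended to the data before transmission
# At the receiver, dividing data + crc by the same generator leaves a zero remainder.
```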
Introduction to MD5 Algorithm
The MD5 hash function was originally designed for use as a secure cryptographic hash algorithm for authenticating digital signatures, checking data integrity, and detecting unintentional data corruption.
The MD5 message digest algorithm is the fifth version of the Message Digest algorithm, developed by Ron Rivest to produce a 128-bit message digest. MD5 produces the digest through a series of steps: appending padding bits, appending the length, initializing a buffer, and processing the input in 512-bit blocks.
How does the MD5 Algorithm work?
MD5 produces a 128-bit hash as output. This encoding of an input of any size into a hash value goes through the following steps, and every step has a predefined task.
Step1: Append Padding Bits
• Padding means adding extra bits to the original message. In MD5 the original message is padded so that its length in bits is congruent to 448 modulo 512, that is, 64 bits short of a multiple of 512.
• Padding is done even if the length of the original message is already congruent to 448 modulo 512. In the padding bits, only the first bit is 1 and the rest of the bits are 0.
Step 2: Append Length
After padding, 64 bits are appended at the end to record the length of the original message, modulo 2^64.
Step 3: Initialize MD buffer
A four-word buffer (A, B, C, D) is used to compute the values for the message digest.
Here A, B, C, and D are 32-bit registers, initialized to the following fixed values: A = 0x67452301, B = 0xEFCDAB89, C = 0x98BADCFE, D = 0x10325476.
Step 4: Processing message in 16-word block
MD5 uses four auxiliary functions that each take three 32-bit words as input and produce a 32-bit output; these functions use logical operators such as AND, OR, NOT, and XOR.
The contents of the four buffers are mixed with the words of the input block, and four rounds of sixteen operations each (64 operations in total) are performed.
After the last block has been processed, buffers A, B, C, and D contain the MD5 output, beginning with the low-order byte of A and ending with the high-order byte of D.
Input: This is an article about the cryptography algorithm
Output: e4d909c290dfb1ca068ffaddd22cbb0
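In practice you would not carry out these steps by hand; Python's standard library exposes MD5 directly, which is an easy way to produce and check digests such as the example above:

```python
import hashlib

message = "This is an article about the cryptography algorithm"
digest = hashlib.md5(message.encode("utf-8")).hexdigest()
print(digest)  # 32 hexadecimal characters, i.e., a 128-bit digest
```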
Key Highlights CRC Vs MD5
• CRC is performed at the data link layer (and sometimes the transport layer) of the OSI model. It checks a sequence of redundant bits at the sender and receiver sides: if the remainder is zero there is no error; if it is nonzero, some error has occurred.
• MD5 is 128 bit hash algorithm to generate password. It is easy to generate and compare hash value. | {"url":"https://networkinterview.com/md5-vs-crc-detailed-comparison/","timestamp":"2024-11-03T15:39:07Z","content_type":"text/html","content_length":"171697","record_id":"<urn:uuid:ca427c98-e4e6-48d4-8ab6-31d618f81080>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00818.warc.gz"} |
Solving $\int_0^\infty \left(1+\frac{y_1^2+y_2^2+\cdots+y_n^2}{\nu}\right)^{-\alpha} \mathrm{d}y_1\,\mathrm{d}y_2\cdots\mathrm{d}y_n$
Answered question
Solving $\int_{0}^{\infty}\left(1+\frac{y_{1}^{2}+y_{2}^{2}+\cdots+y_{n}^{2}}{\nu}\right)^{-\alpha}\mathrm{d}y_{1}\,\mathrm{d}y_{2}\cdots\mathrm{d}y_{n}$
Answer & Explanation
As you have noticed, the integral can be transformed into
$\int_{0}^{\infty} \mathrm{d}^{n}y\,\frac{1}{\left(1+y^{T}y\right)^{\alpha}}.$
Going to the spherical coordinates leads to
$\frac{1}{2^{n}}\int \mathrm{d}\Omega_{n}\int_{0}^{\infty}\mathrm{d}r\,\frac{r^{n-1}}{\left(1+r^{2}\right)^{\alpha}},$
where the factor of $1/2^{n}$ is due to integrating over this fraction of the whole space and $\int \mathrm{d}\Omega_{n}$ is the volume of $S^{n-1}$, which is equal to
$\int \mathrm{d}\Omega_{n}=\frac{2\pi^{n/2}}{\Gamma\left(\frac{n}{2}\right)},$
as can be shown for example by evaluating the integral $\int \mathrm{d}^{n}y\, e^{-y^{T}y}$ both in Cartesian and spherical coordinates and comparing the two results.
To evaluate the simple integral over r, make the change of variables $x=\left(1+r^{2}\right)^{-1}$, which leads to
$\int_{0}^{\infty}\mathrm{d}r\,\frac{r^{n-1}}{\left(1+r^{2}\right)^{\alpha}}=\frac{1}{2}\int_{0}^{1}x^{\alpha-n/2-1}\left(1-x\right)^{n/2-1}\mathrm{d}x=\frac{\Gamma\left(\alpha-n/2\right)\Gamma\left(n/2\right)}{2\,\Gamma\left(\alpha\right)},$
where the expression for the Beta function was used in the last step. Putting the pieces together, you arrive at
$\int_{0}^{\infty}\mathrm{d}^{n}y\,\frac{1}{\left(1+y^{T}y\right)^{\alpha}}=\left(\frac{\pi}{4}\right)^{n/2}\frac{\Gamma\left(\alpha-n/2\right)}{\Gamma\left(\alpha\right)}.$ | {"url":"https://plainmath.org/calculus-1/93171-solving-int-0-infty-1-y-1-2-y-2-2","timestamp":"2024-11-09T19:29:53Z","content_type":"text/html","content_length":"171557","record_id":"<urn:uuid:818aa9f9-9a66-42f9-9c4e-d81cf642ed60>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00411.warc.gz"}
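A quick numerical sanity check of this closed form (a sketch assuming SciPy is available) compares it with direct quadrature over the positive quadrant for n = 2 and α = 3, where both sides should equal π/8 ≈ 0.3927:

```python
import numpy as np
from scipy import integrate
from scipy.special import gamma

def closed_form(n, alpha):
    # (pi/4)^(n/2) * Gamma(alpha - n/2) / Gamma(alpha), valid for alpha > n/2
    return (np.pi / 4) ** (n / 2) * gamma(alpha - n / 2) / gamma(alpha)

alpha = 3.0
# numerical check for n = 2: integrate over the positive quadrant
num, _ = integrate.dblquad(
    lambda y2, y1: (1.0 + y1**2 + y2**2) ** (-alpha),
    0, np.inf, lambda y1: 0, lambda y1: np.inf,
)
print(num, closed_form(2, alpha))  # both should be close to pi/8
```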
Analysis of Transient Permeation and Conduction in Composites with External Transport Resistance
Tuesday, November 10, 2015: 4:45 PM
155F (Salt Palace Convention Center)
Laplace transform methods are known to provide "Early-Time" series solutions (with fewer significant terms as t → 0) to linear partial differential equations governing transient permeation and heat
conduction. These solutions tend to be particularly convenient because, unlike some “Long-Time” solutions, Early-Time solutions do not involve eigenvalues defined by transcendental equations.
The Early-Time approach is applied here to the analysis of permeation in two-layer composite membranes with external mass transfer resistance. The governing equations are then :
subject to:
The goal is an expression for M(t), the time course of the cumulative mass permeated per unit area, i.e.:
or, in dimensionless terms:
Notably, Sakai (1922) derived a “Long-Time” series solution to Eqs. 1 in the general case of an arbitrary number of layers, but with negligible external mass transfer resistance.
For purposes of deriving the Early-Time solution, Laplace-transform operator is defined as usual by:
Eqs. 1 are thereby transformed to easily solved ordinary differential equations. The end result is the following expression for the transform of m:
Recovery of and, in turn, requires inverse transformation of Eq. 5. Scientist® numerical inversion software (Micromath Inc.) provides essentially exact results with which those based on truncated Early-Time ("ET") series will be compared.
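As a generic illustration of the numerical-inversion step (not the composite-membrane transform itself), the following sketch assumes the mpmath package and inverts a transform with a known inverse, 1/(s+1), whose exact inverse is exp(-t):

```python
import mpmath as mp

mp.mp.dps = 25  # working precision

def fbar(s):
    # stand-in transform with a known inverse; replace with the transform of interest
    return 1 / (s + 1)

for t in (0.1, 0.5, 1.0, 2.0):
    approx = mp.invertlaplace(fbar, t, method="talbot")
    print(t, approx, mp.exp(-t))  # numerical inverse versus exact exp(-t)
```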
The latter series emerge from inverse transformation of the expression to which simplifies when s is large. When only the lead terms are retained, the result is:
Retaining additional terms extends the time over which the Early-Time solution is accurate. In many cases of practical interest, only the lead terms are necessary to accurately model essentially the
entire non-steady-state region.
Sakai, S. (1922) “Linear conduction of heat through a series of connected rods,” Sci. Rep. Tohoku Imperial Univ., Ser. I (Math, Phys., Chem.), 11, 351- 378.
Extended Abstract: File Uploaded | {"url":"https://aiche.confex.com/aiche/2015/webprogram/Paper429960.html","timestamp":"2024-11-05T07:08:18Z","content_type":"text/html","content_length":"10940","record_id":"<urn:uuid:cea4a879-1500-46e6-8215-740458480428>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00577.warc.gz"} |
Expand (A - B)^2 to Solve for (A - B) Whole Square in Algebra. - Blog Feed Letters
When working with algebraic equations, the process of expanding and simplifying expressions is crucial for solving problems efficiently. One common operation in algebra is squaring a binomial, such
as (A – B)^2, which involves multiplying the binomial by itself.
Expanding (A – B)^2 allows us to determine the product and simplify the resulting expression. To do this, we can use the concept of FOIL, which stands for First, Outer, Inner, Last. When we square a
binomial, we are essentially multiplying it by itself, following this pattern.
Let’s break down the process step by step:
1. First: Multiply the first terms in each binomial. In this case, we have A * A = A^2.
2. Outer: Multiply the outer terms of the binomials. This means multiplying the first term of the first binomial by the second term of the second binomial, which gives us A * -B = -AB.
3. Inner: Multiply the inner terms of the binomials. Multiply the second term of the first binomial by the first term of the second binomial, resulting in -B * A = -BA.
4. Last: Finally, multiply the last terms in each binomial. This gives us -B * -B = B^2.
Now, we put these results together:
(A – B)^2 = A^2 + (-AB) + (-BA) + B^2
Simplifying further:
(A – B)^2 = A^2 – AB – BA + B^2
(A – B)^2 = A^2 – 2AB + B^2
Therefore, the square of the binomial (A – B)^2 simplifies to A^2 – 2AB + B^2. This expression is useful in various algebraic manipulations and problem-solving situations.
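For readers who want to check expansions like this mechanically, a computer algebra system reproduces the result; a small sketch using the SymPy library:

```python
from sympy import symbols, expand

A, B = symbols("A B")
print(expand((A - B)**2))  # A**2 - 2*A*B + B**2
print(expand((A + B)**2))  # A**2 + 2*A*B + B**2
```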
Applications of (A – B)^2:
Understanding how to expand and simplify expressions like (A – B)^2 is essential in algebra and mathematics in general. Here are some applications of this concept:
1. Quadratic Equations:
• The expression (A – B)^2 frequently appears when dealing with quadratic equations and completing the square. It helps in factoring and solving equations efficiently.
2. Geometric Formulas:
• In geometry, the expression (A – B)^2 can be used in formulas related to area, volume, and perimeter calculations of various shapes.
3. Physics Calculations:
• Physics problems often involve manipulating expressions like (A – B)^2 to analyze relationships between variables, such as in equations of motion or energy calculations.
FAQs (Frequently Asked Questions):
1. What is the difference between (A – B)^2 and A^2 – B^2?
• The expression (A – B)^2 represents squaring the entire binomial (A – B), while A^2 – B^2 is the result of squaring each term individually and subtracting them.
2. How do you expand (A + B)^2?
• To expand (A + B)^2, follow the same FOIL method but with plus signs: (A + B)^2 = A^2 + 2AB + B^2.
3. Why do we use FOIL when expanding binomials?
• FOIL is a mnemonic for a specific order of multiplying binomials to ensure all terms are considered. It helps simplify the expansion process.
4. Can you expand more complex expressions than (A – B)^2 using similar methods?
• Yes, the same principles of distributing and combining like terms apply to expanding higher-degree polynomials and more complex algebraic expressions.
5. How can expanding (A – B)^2 be applied in real-life scenarios?
• Real-life applications of expanding (A – B)^2 can be found in fields like engineering, finance, and computer science, where algebraic manipulations are used to model and solve practical problems.
By mastering the expansion of expressions like (A – B)^2, you’ll enhance your algebraic skills and be better equipped to tackle more advanced mathematical concepts and problem-solving challenges. | {"url":"https://blogfeedletters.com/expand-a-b2-to-solve-for-a-b-whole-square-in-algebra/","timestamp":"2024-11-11T03:16:02Z","content_type":"text/html","content_length":"134689","record_id":"<urn:uuid:c1f4208c-7b76-42c6-a124-9fcd70b0dd09>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00739.warc.gz"} |
BitVMX | Optimizing Algorithms for Bitcoin Script (part 3)
This is the third and final part in a series of articles. The first article detailed how I implemented an optimized version of SHA256 using lookup tables, and the second demonstrated the use of the
Stack Tracker library. I encourage you to read both articles before this one. In this article, I’ll explain how the BLAKE3 algorithm works and how I used the Stack Tracker to optimize the on-chain
implementation of the BLAKE3 cryptographic hash function.
BLAKE3 Algorithm
BLAKE3 is a cryptographic hash function, similar in purpose to SHA256, but it was designed to be much faster due to its ability to parallelize part of its computations and its choice of internal
operations. In the “Right Rotate” sub-section we will look at how the choice of the number of bits to shift a number impacts on the performance.
Why use it? The main reason to use BLAKE3 instead of SHA is its much smaller implementation size on Bitcoin Script. The version of SHA256 using StackTracker consumes 297K opcodes, while BLAKE3 only
uses 75K opcodes per round.
Unlike SHA256, for which there are numerous resources to help understand the algorithm, BLAKE3 has fewer readily available resources. I relied on the Rust and Python reference implementations, as
well as Bitcoin Script created by the BitVM team (now, while writing this article, I’m reading the BLAKE3 paper itself and it’s quite clear).
How it works
BLAKE3 achieves its high performance through parallelization by creating a tree structure. The hash function splits the message in chunks of 1024 bytes. However, in the context of using BLAKE3
on-chain, we would never have more than 1024 bytes (not even close) due to the stack size limit, so this implementation does not handle the tree structure.
The algorithm supports three different hashing modes. One is used for regular hashing, and the other two are designed to support an extra key, but the last two modes were not implemented in this
version either because they will probably not be needed.
Each 1024-byte chunk is divided into 64-byte blocks that are processed sequentially in a chain. These 64 bytes are handled as 16 words of 32-bit unsigned integers. If the last block contains less
than 64 bytes, it is zero-padded to fill the block.
Let’s take a look at BLAKE3 pseudocode:
(The "..." in this code indicates the presence of certain constant values that have been removed to improve readability.)
The algorithm is quite simple, as it essentially loops through the call of a function that, in turn, calls a sub-function in a loop.
• The blake function splits the message into 64-byte blocks and calls compress for each block.
• The compress function initializes a state, calls the round function 7 times, and calculates the output hash, which is used as chain input for the next block or as the resulting hash after the last block.
• The round function calls the G function 8 times, processing different parts of the message block with the state.
• Finally, the G function performs the actual operations over the data, using three key operations: XOR, 32-bit addition, and right rotation.
The Bitcoin Script implementation is provided here and it follows the structure and function naming of the pseudocode.
The operations performed by the G functions are described in this image taken from the paper:
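For reference, the G function defined in the BLAKE3 specification applies exactly these operations: 32-bit addition, XOR, and right rotations by 16, 12, 8, and 7 bits. Below is a minimal Python sketch of the reference G function (the specification version, not the Bitcoin Script implementation discussed in this article):

```python
MASK32 = 0xFFFFFFFF

def rotr32(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK32

def g(state, a, b, c, d, mx, my):
    # One G call: mixes two message words (mx, my) into four state words.
    state[a] = (state[a] + state[b] + mx) & MASK32
    state[d] = rotr32(state[d] ^ state[a], 16)
    state[c] = (state[c] + state[d]) & MASK32
    state[b] = rotr32(state[b] ^ state[c], 12)
    state[a] = (state[a] + state[b] + my) & MASK32
    state[d] = rotr32(state[d] ^ state[a], 8)
    state[c] = (state[c] + state[d]) & MASK32
    state[b] = rotr32(state[b] ^ state[c], 7)
```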
All of these operations are performed using lookup tables and operate on nibbles.
Lookup Table Management
To perform operations using lookup tables, the table data needs to be loaded into the stack and unloaded after use. Since loading and unloading tables is costly in terms of operations, it’s a good
idea to load all the necessary tables at the beginning and unload them once they are no longer needed.
It's important to note that dropping elements from the stack is only possible when the element is at the top, so keeping the stack organized is crucial for efficient element removal.
In some cases, the 1,000-element stack limit makes it impossible to keep all the tables in memory. When this happens, but the lookup table is used frequently, it might still be worthwhile to pay the
cost of loading and unloading the table to reduce the overall script size.
As you can see in this snippet, there is a struct that manages all the lookup tables used in the BLAKE3 implementation:
In this implementation, the u4_push_*** functions contain the actual values for the tables, which were generated using Python. However, as an example of how to generate one of these tables directly
in Rust, take a look at this example function that generates left shift tables:
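As a language-agnostic illustration of the same idea (in Python rather than Rust, with purely illustrative names and layout), generating left-shift nibble tables can look like this:

```python
def u4_lshift_tables(n: int):
    """Illustrative nibble tables for a left shift by n bits (0 < n < 4)."""
    low = [(x << n) & 0xF for x in range(16)]    # bits that stay in the nibble
    carry = [(x << n) >> 4 for x in range(16)]   # bits that spill into the next nibble
    return low, carry

low, carry = u4_lshift_tables(3)
# In the script these values would be pushed onto the stack as a lookup table
# and read back with OP_PICK-style operations, indexed by the nibble value.
```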
To solve addition, two tables are used: one for modulo and one for quotient. The table supports adding three numbers simultaneously (as opposed to adding two numbers, and then adding the third to the
result). This approach reduces the number of times the lookup table needs to be used and minimizes the movement of elements in the stack.
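A rough sketch of what such modulo and quotient tables contain for three-nibble addition (illustrative Python, not the actual script encoding):

```python
# The sum of three nibbles ranges from 0 to 45, so the carry is 0, 1, or 2.
MAX_SUM = 3 * 15
add_modulo = [s % 16 for s in range(MAX_SUM + 1)]     # low nibble of the sum
add_quotient = [s // 16 for s in range(MAX_SUM + 1)]  # carry into the next nibble

a, b, c = 0xB, 0x9, 0xE                 # example nibbles; sum is 34 = 2*16 + 2
s = a + b + c
print(add_modulo[s], add_quotient[s])   # result nibble 2 with carry 2
```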
Right Rotate
Unlike SHA256, where the bit rotation counts are odd, three of the four rotation counts in BLAKE3 are multiples of 4. Since we are working with nibbles (4 bits), these bit rotations can be achieved
by simply rearranging the elements of the u32 number, without the need for any additional operations or lookup tables.
For the remaining rotations of 7 bits, we only need to rotate by 3 bits, since we are working with 4-bit integers ( 7 mod 4 = 3 ). To perform 3-bit right rotation, two tables are used: one to shift
right by 3 bits, and another to rotate left by 1 bit.
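To see why rotation counts that are multiples of 4 are essentially free, a quick Python check (purely illustrative; the script itself never rebuilds the u32, it only relabels nibble positions on the stack) compares a 32-bit rotation with a plain nibble reordering:

```python
def rotr32(x, n):
    return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

def to_nibbles(x):
    return [(x >> (4 * (7 - i))) & 0xF for i in range(8)]  # most significant first

x = 0x12345678
k = 8  # rotate right by 8 bits = move the last two nibbles to the front
rotated = to_nibbles(rotr32(x, k))
rearranged = to_nibbles(x)[-2:] + to_nibbles(x)[:-2]
print(rotated == rearranged)  # True
```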
To calculate XOR two modes are used depending on the length of the message to be hashed and the available room in the stack. If the message is short, the “full” XOR table is used. For long messages,
the “half-table” is applied.
The full table consumes more stack space but requires fewer opcodes to compute the result. Conversely, the half-table uses less stack space but requires more Bitcoin opcodes, which increases the
overall script size.
Combining Operations
If you explore the code, you will notice functions named right_rotate_xored and right_rotate7_xored. The purpose of these functions is to combine the XOR and rotation operations simultaneously,
reducing the cost of moving and rearranging the nibbles twice.
One step in the algorithm requires permuting the words of the message. Since moving variables around the stack is costly, our code does not actually move any part of the message. Instead, it modifies
a map that points each variable to the correct position. This creates an indirection between the permuted indices and the actual variables on the stack, allowing for efficient handling without the
overhead of rearranging the stack.
In this series of articles, I demonstrated how using nibbles and lookup tables to optimize functions in Bitcoin Script resulted in a significant reduction in script size—around 30% for both SHA256
and BLAKE3. I also introduced the Stack Tracker library, which simplifies the process of writing and debugging Bitcoin Scripts by providing tools for managing stack operations more efficiently.
I hope you enjoyed this series and that it has inspired you to think about new and creative approaches to writing Bitcoin Scripts. | {"url":"https://bitvmx.org/knowledge/optimizing-algorithms-for-bitcoin-script-part-3","timestamp":"2024-11-02T16:54:51Z","content_type":"text/html","content_length":"52666","record_id":"<urn:uuid:1c12e200-1367-477a-8852-9c9ea26b4e85>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00452.warc.gz"} |
Calculating Consumer Price Indices in Practice 8
8.1 The purpose of this chapter is to provide a general description of the ways in which consumer price indices (CPIs) are calculated in practice. The methods used in different countries are not
exactly the same, but they have much in common. There is clearly interest from both compilers and users of CPIs in knowing how most national statistical offices (NSOs) actually calculate their CPIs.
8.2 As a result of the greater insights into the properties and behavior of price indices that have been achieved in recent years, it is now recognized that some traditional methods may not
necessarily be optimal from a conceptual and theoretical viewpoint. Concerns have also been voiced in a number of countries about possible biases that may be affecting CPIs. These biases and concerns
are considered in Chapter 13. The methods used to compile CPIs are inevitably constrained by the resources available, not merely for collecting and processing prices, but also for gathering the
expenditure data needed for weighting purposes. In some countries, the methods used may be severely constrained by lack of resources. Nonetheless, there are still methods that should be avoided at
all costs because they result in severe bias in the indices.
8.3 The calculation of CPIs usually proceeds in two stages. First, price indices are estimated for the elementary aggregates. These are referred to as the elementary price indices. The elementary
aggregate is the lowest level of groups of goods or services for which expenditure weights are assigned and kept constant for a period of one year or more. An elementary aggregate should consist of a
relatively homogeneous set of goods or services, with similar end uses and similar expected price movements. More detailed weights to reflect the relative importance of individual price observations
within elementary aggregates may be applied and updated more frequently. In the second stage, these elementary price indices are aggregated to obtain higher-level price indices using the expenditure
shares of the elementary aggregates as weights. This chapter starts by explaining how the elementary aggregates are constructed, and what economic and statistical criteria need to be taken into
consideration in defining the aggregates. The index number formulas most commonly used to calculate the elementary indices are then presented, and their properties and behavior illustrated using
numerical examples. The advantages and disadvantages of the various formulas are considered, together with some alternative formulas that might be used instead. The problems created by disappearing
and new varieties (that is, one variety with another of similar or different quality) are also explained, as well as the different ways of imputing values for missing prices.
8.4 The chapter also discusses the calculation of higher-level indices. The focus is on the ongoing production of a monthly price index in which the elementary price indices are averaged, or
aggregated, to obtain higher-level indices. Price updating of weights, chain linking, and reweighting are discussed in Chapter 9. Data editing procedures are discussed in Chapter 5 on price
collection. Statistical tools and methods for index analysis such as contributions to price change appear in Chapters 9 and 14.
8.5 While the purpose of this chapter is the compilation of CPIs at the various levels of aggregation, NSOs must keep in mind that the end goal of producing the indices is to disseminate and publish
CPIs of high quality. To this end, the sampling process for selecting the items that are included in the indices and the price observations that are representative of the product varieties in the
consumer markets are critically important in determining the quality of the indices at the elementary and aggregate levels. In this regard, the sampling procedures presented in Chapter 4 are very
important to attain this end goal.
The Calculation of Price Indices for Elementary Aggregates
8.6 CPIs are typically calculated in two steps. In the first step, the elementary price indices for each of the elementary aggregates are calculated. In the second step, higher-level indices are
calculated by taking weighted averages of the elementary price indices. The elementary aggregates and their price indices are the basic building blocks of the CPI.
Construction of Elementary Aggregates
8.7 Elementary aggregates are groups of relatively homogeneous goods and services (that is, similar in characteristics, content, price, or price change). They may cover the whole country or separate
regions within the country. Likewise, elementary aggregates may be distinguished for different types of outlets. The nature of the elementary aggregates depends on circumstances and the availability
of information, such as detailed expenditure data. Elementary aggregates may therefore be defined differently in different countries. Some key points, however, should be noted:
• Elementary aggregates should consist of groups of goods or services that are as similar as possible, and preferably fairly homogeneous in construction and content.
• Elementary aggregates should consist of varieties that may be expected to have similar price movements. The objective should be to try to minimize the dispersion of price movements within the
• Elementary aggregates should be appropriate to serve as strata for sampling purposes in the light of the sampling regime planned for the data collection.
8.8 Each elementary aggregate, whether relating to the whole country or to an individual region or group of outlets, will typically contain a very large number of individual goods or services, or
varieties. In practice, only a small number can be selected for pricing. When selecting the varieties, the following considerations need to be made:
• The varieties selected should be ones for which price movements are believed to be representative of most of the products within the elementary aggregate.
• The number of varieties within each elementary aggregate for which prices are collected should be large enough for the estimated price index to be statistically reliable. The minimum number
required will vary between elementary aggregates depending on the nature of the products and their price behavior. However, there should be at least eight to ten observations for calculating the
elementary index as discussed in Chapter 4.
• The objective is to try to track the price of the same variety over time for as long as the variety continues to be representative. The varieties selected should therefore be ones that are
expected to remain on the market for some time, so that like can be compared with like, and problems associated with replacement of varieties be reduced.
The Aggregation Structure
8.9 The aggregation structure for a CPI is illustrated in Figure 8.1. Using a classification of consumers’ expenditure such as the Classification of Individual Consumption According to Purpose
(COICOP), the entire set of consumption goods and services covered by the overall CPI can be divided into divisions, such as “Food and Nonalcoholic Beverages.” Each division is further divided into
groups, such as “Food.” Groups are divided into classes, such as “Cereals and Cereal Products.” Classes are further divided into subclasses, such as “Cereals.” Many countries use an even finer
classification by further disaggregating below the level of the subclasses. For CPI purposes, each subclass can then be further divided into more homogeneous microclasses,^1 such as “Basmati Rice.”
The microclass could be the equivalent of the basic headings used in the International Comparison Program,^2 which calculates purchasing power parities between countries. Finally, the microclass may
be further subdivided by dividing according to region or type of outlet, as in Figure 8.1. In some cases, a particular microclass cannot be, or does not need to be, further subdivided, in which case
the microclass becomes the elementary aggregate. Within each elementary aggregate, one or more products are selected to represent all the products in the elementary aggregate. For example, the
elementary aggregate consisting of “Bread” sold in supermarkets in the Northern region covers all types of bread, from which “Wheat Bread” and “Whole Grain Bread” are selected as representative
products. Of course, more representative products might be selected in practice. Finally, for each representative product, several specific varieties should be selected for price collection, such as
particular brands of wheat bread. Again, the number of sampled varieties selected may vary depending on the nature of the representative product.
8.10 Methods used to calculate the elementary indices from the individual price observations are discussed in paragraphs 8.15–8.88. Working upward from the elementary price indices, all indices above
the elementary aggregate level are higher-level indices that can be calculated from the elementary price indices using the elementary aggregate expenditure as weights. The aggregation structure
should be consistent, so that the weight at each level above the elementary aggregate is always equal to the sum of its components. The price index at each higher level of aggregation can be
calculated based on the weights and price indices for its components, that is, the lower-level or elementary indices. This applies for indices with fixed weights. If the weight structure is updated
and the index series based on the new weights is chain linked, the linked index for the previous year is not consistent in aggregation. The individual elementary price indices not only should be
designed to be sufficiently reliable to be published separately, but they should also remain the basic building blocks of all higher-level indices.
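To make the two-stage structure concrete, a small illustrative sketch (with made-up weights and elementary index values, not figures from this chapter) shows a higher-level index as the weighted average of its component elementary indices:

```python
# Hypothetical elementary indices (price reference period = 100) and expenditure weights.
elementary_indices = {"wheat bread": 104.2, "whole grain bread": 101.5, "rice": 98.7}
weights = {"wheat bread": 0.45, "whole grain bread": 0.25, "rice": 0.30}

higher_level = sum(weights[k] * elementary_indices[k] for k in elementary_indices)
higher_level /= sum(weights.values())
print(round(higher_level, 1))  # weighted arithmetic average of the elementary indices
```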
Weights within Elementary Aggregates
8.11 The ideal index number formula to use for CPI calculations would have weights for each price observation used to compile the elementary price indices, as well as weights for aggregating
elementary indices to higher-level price indices. In some countries, this approach has been achieved through comprehensive sampling procedures or the use of scanner data for select item groups (for
example, food). Those countries that have weights at this level use fixed-basket indices that are discussed in paragraphs 8.89–8.136. Also, having weights for both the weight reference period and the
current period would be ideal to produce one of the preferred target indices for CPI compilation (Fisher, Törnqvist, or Walsh price indices). Several countries that have access to scanner data use
the prices and quantities for the individual observations to derive elementary aggregate indices of their CPI.
8.12 In most cases, the price indices for elementary aggregates are calculated without the use of explicit weights. The elementary aggregate is simply the lowest level at which reliable expenditure
weighting information is available. In this case, the elementary index must be calculated as an unweighted average of the prices of which it consists. However, even in this case, it should be noted
that when the varieties are selected with probabilities proportional to the size of some relevant variables such as sales (as described in Chapter 4), weights are implicitly introduced by the
sampling procedure.
8.13 For certain elementary aggregates information about sales of particular varieties, market shares, and regional weights may be used as explicit weights within an elementary aggregate. When
possible, weights that reflect the relative importance of the sampled varieties should be used, even if the weights are only approximate.
8.14 For example, assume that the number of suppliers of a certain product such as fuel for cars is limited. The market shares of the suppliers may be known from business survey statistics and can be
used as weights in the calculation of an elementary aggregate price index for car fuel. Alternatively, prices for water may be collected from a number of local water supply services where the
population in each local region is known. The expenditure weights for each region may then be used to weight the price in each region in order to obtain the elementary aggregate price index for
water. The calculation of weighted elementary indices is discussed in more detail in paragraphs 8.75–8.88.
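Continuing the water example, one illustrative way (hypothetical numbers) to weight regional price relatives within the elementary aggregate is a weighted arithmetic mean of the relatives; a weighted geometric mean could equally be used:

```python
# Hypothetical water price relatives (current / reference price) by region,
# weighted by each region's share of expenditure (or population served).
relatives = {"north": 1.032, "center": 1.010, "south": 1.055}
weights = {"north": 0.20, "center": 0.50, "south": 0.30}

index = 100 * sum(weights[r] * relatives[r] for r in relatives) / sum(weights.values())
print(round(index, 1))
```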
Calculation of Elementary Price Indices
8.15 Various methods and formulas may be used to calculate elementary price indices. This section provides a summary of the methods that have been most commonly used and the advantages and
disadvantages that NSOs must evaluate when choosing a formula at the elementary level. Chapter 6 of Consumer Price Index Theory provides a more detailed discussion.
8.16 The methods most commonly used are illustrated in a numerical example in Tables 8.1–8.3. In these examples, an elementary aggregate consists of seven varieties of an item that could be collected
from several outlets, and it is assumed that prices are collected for all seven varieties in all months, so that there is a complete set of prices. There are no disappearing varieties, no missing
prices, and no replacement varieties. This is quite a strong assumption since many of the problems encountered in practice are attributable to breaks in the continuity of the price series for the
individual varieties for one reason or another. The treatment of disappearing and replacement varieties is taken up in paragraphs 8.51–8.74. It is also assumed that there are no explicit weights
Table 8.1
Jevons and Dutot Price Indices Using Averages of Prices
Base Jan. Feb. Mar. Apr. May Jun. Jul
Item A Prices
Variety 1 2.36 2.09 1.93 2.59 2.05 2.85 2.59 2.36
Variety 2 5.02 5.38 5.12 5.52 4.08 4.08 5.52 5.02
Variety 3 5.34 5.07 5.09 5.88 6.29 5.86 5.88 5.34
Variety 4 6.00 5.73 4.27 6.00 4.75 5.27 6.60 6.00
Variety 5 6.12 6.39 5.50 6.12 5.86 6.29 6.74 6.12
Variety 6 2.80 2.72 2.82 3.08 2.85 2.05 3.08 2.80
Variety 7 6.21 5.45 6.95 6.21 5.27 4.75 6.84 6.21
Geometric Mean Price 4.55 4.38 4.20 4.81 4.17 4.17 5.01 4.55
L-T Price Relative 1.000 0.963 0.923 1.056 0.917 0.917 1.100 1.000
S-T Price Relative 0.963 0.959 1.143 0.868 1.000 1.200 0.909
Arithmetic Mean Price 4.84 4.69 4.53 5.06 4.45 4.45 5.32 4.84
L-T Price Relative 1.000 0.970 0.936 1.046 0.920 0.920 1.100 1.000
S-T Price Relative 0.970 0.965 1.117 0.880 1.000 1.196 0.909
Jevons Index (L-T ratio of geometric mean prices) 100.0 96.3 92.4 105.6 91.7 91.7 110.0 100.0
Dutot Index (L-T ratio of arithmetic mean prices) 100.0 97.0 93.6 104.6 92.0 92.0 110.0 100.0
Jevons Index (chained S-T ratio of geometric mean prices) 100.0 96.3 92.4 105.6 91.7 91.7 110.0 100.0
Dutot Index (chained S-T ratio of arithmetic mean prices) 100.0 97.0 93.6 104.6 92.0 92.0 110.0 100.0
Table 8.2
Jevons and Carli Price Indices Using Averages of Long-Term Price Relatives
Base Jan. Feb. Mar. Apr. May Jun. Jul
Item A Long-Term (L-T) Price Relatives
Variety 1 1.000 0.886 0.818 1.097 0.869 1.208 1.100 1.000
Variety 2 1.000 1.072 1.020 1.100 0.813 0.813 1.100 1.000
Variety 3 1.000 0.949 0.953 1.101 1.178 1.097 1.100 1.000
Variety 4 1.000 0.955 0.712 1.000 0.792 0.878 1.100 1.000
Variety 5 1.000 1.044 0.899 1.000 0.958 1.028 1.100 1.000
Variety 6 1.000 0.971 1.007 1.100 1.018 0.732 1.100 1.000
Variety 7 1.000 0.878 1.119 1.000 0.849 0.765 1.100 1.000
Geometric Mean of L-T Price Relatives 1.000 0.963 0.924 1.056 0.917 0.917 1.100 1.000
Jevons Index (L-T geometric changes) 100.0 96.3 92.4 105.6 91.7 91.7 110.0 100.0
Arithmetic Mean of L-T Price Relatives 1.000 0.965 0.933 1.057 0.925 0.932 1.100 1.000
Carli Index (L-T arithmetic changes) 100.0 96.5 93.3 105.7 92.5 93.2 110.0 100.0
Table 8.3
Jevons and Carli Price Indices Using Chained Short-Term Price Relatives
Jan. Feb. Mar. Apr. May Jun. Jul.
Elementary Aggregate A
Variety 1 0.886 0.923 1.342 0.792 1.390 0.911 0.909
Variety 2 1.072 0.952 1.078 0.739 1.000 1.353 0.909
Variety 3 0.949 1.004 1.155 1.070 0.932 1.003 0.909
Variety 4 0.955 0.745 1.405 0.792 1.109 1.253 0.909
Variety 5 1.044 0.861 1.113 0.958 1.073 1.070 0.909
Variety 6 0.971 1.037 1.092 0.925 0.719 1.501 0.909
Variety 7 0.878 1.275 0.894 0.849 0.901 1.438 0.909
Geometric Mean of S-T Price Relatives 0.963 0.959 1.143 0.868 1.000 1.200 0.909
Jevons Index (chained S-T geometric changes) 96.3 92.4 105.6 91.7 91.7 110.0 100.0
Arithmetic Mean of S-T Aggregate Relatives 0.965 0.971 1.154 0.875 1.018 1.219 0.909
Carli Index (chained S-T arithmetic changes) 96.5 93.7 108.1 94.6 96.3 117.4 106.7
8.17 The properties of the three indices used to compile elementary aggregates (Jevons, Dutot, and Carli) are examined and explained in some detail in Chapter 6 of Consumer Price Index Theory where
it is shown that the Jevons is preferred in most circumstances when weights are not available. Here, the purpose is to illustrate how they perform in practice, to compare the results obtained by
using the different formulas, and to summarize their strengths and weaknesses. These widely used formulas that have been, or still are, in use by NSOs to calculate elementary price indices are
illustrated in Tables 8.1–8.3 by using average prices, averages of price relatives, and long-term versus short-term price relative methods. It should be noted, however, that these are not the only
possibilities and some alternative formulas are considered later. The first is the Jevons index for i = 1 … n varieties. It is defined as the unweighted geometric mean of the price relatives, which
is identical to the ratio of the unweighted geometric mean prices, for the two periods, 0 and t, to be compared:
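In symbols, with \(p_i^0\) and \(p_i^t\) denoting the price of variety \(i\) in the price reference period 0 and in period \(t\), this definition reads

\[
P_J^{0:t} \;=\; \prod_{i=1}^{n}\left(\frac{p_i^t}{p_i^0}\right)^{1/n}
\;=\; \frac{\left(\prod_{i=1}^{n} p_i^t\right)^{1/n}}{\left(\prod_{i=1}^{n} p_i^0\right)^{1/n}} .
\qquad (8.1)
\]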
The Jevons price index in 8.1 is calculated by comparing directly the prices of the two periods, 0 and t. Indices that are calculated by comparing the price of the reference period and the current
period directly are referred to as direct indices.
8.18 Assuming the span from 0 to t consists of a number of periods 0, 1, 2, …, t – 1, t, it is possible to calculate the index by first calculating the price indices from period to period, and then
multiplying, or chaining, these together to obtain the price index from 0 to t:
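That is, in the same notation,

\[
P_J^{0:t} \;=\; P_J^{0:1}\times P_J^{1:2}\times\cdots\times P_J^{t-1:t},
\qquad (8.2)
\]

where each link \(P_J^{\tau-1:\tau}\) is a Jevons index of the form 8.1 comparing period \(\tau\) with period \(\tau-1\).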
A price index calculated by multiplying the period-to-period, or short-term, price indices, is referred to as a chained or chain-linked price index. When calculating the Jevons index in 8.2 the
numerators and denominators of periods 1, 2, …, t – 1 cancel out, leaving only the prices of periods 0 and t, so that the resulting chained index is identical to the direct version of the index in 8.1.
8.19 The second elementary index formula is the Dutot index, defined as the ratio of unweighted arithmetic mean prices:
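(In the same notation as 8.1:)

\[
P_D^{0:t} \;=\; \frac{\frac{1}{n}\sum_{i=1}^{n} p_i^t}{\frac{1}{n}\sum_{i=1}^{n} p_i^0} .
\qquad (8.3)
\]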
A chained Dutot price index is calculated as
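(As for the Jevons, by multiplying the period-to-period Dutot indices:)

\[
P_D^{0:t} \;=\; P_D^{0:1}\times P_D^{1:2}\times\cdots\times P_D^{t-1:t} .
\]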
The third is the Carli index, defined as the unweighted arithmetic mean of the price relatives, or price ratios. The direct Carli and the chained Carli, respectively, are calculated as
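(In the same notation, the direct Carli is the arithmetic-mean counterpart of 8.1, and the chained Carli multiplies the period-to-period Carli links:)

\[
P_C^{0:t} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{p_i^t}{p_i^0},
\qquad
P_{C,\text{chained}}^{0:t} \;=\; \prod_{\tau=1}^{t}\left(\frac{1}{n}\sum_{i=1}^{n}\frac{p_i^{\tau}}{p_i^{\tau-1}}\right).
\]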
The chained Carli should be avoided because it has a known, and potentially substantial, upward bias.^4
8.20 Table 8.1 shows the comparison of the Dutot and Jevons indices using the monthly average prices. The first calculation for the Dutot index uses the average prices in the long-term formula
(direct approach) where each month’s (t) average is compared to the initial base price (0) (that is, the base price reference period). The Dutot index is also calculated using the short-term
relatives (chained approach) where the month-to-month changes in average prices are used to move forward the previous month’s index level as shown in Table 8.3. The results are the same for both the
direct and chained approaches in the Dutot calculations. Similarly, in Table 8.1 the Jevons index uses the geometric average prices in the long-term and short-term formulas to derive the price index
levels that are the same for both the long-term and short-term method. The Jevons indices do, however, differ from those calculated using the Dutot formula.
8.21 In Table 8.2, the Jevons and Carli indices are calculated using the averages of long-term price relatives from the price reference period (base price). The results for the Carli indices are
different from those of both the Jevons and Dutot indices. The Jevons indices are exactly the same whether calculated as the ratio of geometric average prices or as the geometric average of the price relatives.
8.22 The properties and behavior of the different index formulas are summarized in paragraphs 8.21–8.48 (see also Chapter 6 of Consumer Price Index Theory). First, the differences between the results
obtained by using the different formulas tend to increase as the variance of the price relatives, or ratios, increases. The greater the dispersion of the price movements, the more critical the choice
of index formula, and method, becomes. If the elementary aggregates are defined in such a way that the price variations within the aggregate are minimized, the results obtained become less sensitive
to the choice of formula.
8.23 Certain features displayed by the data in Tables 8.1 and 8.2 are systematic and predictable; they follow from the mathematical properties of the indices. For example, it is well known that an
arithmetic mean is always greater than, or equal to, the corresponding geometric mean, the equality holding only in the trivial case in which the numbers being averaged are all the same. The direct
Carli indices are therefore all greater than the Jevons indices, except in the price reference period, in June when all prices increased by 10 percent above their base prices, and the end period
(July) when all prices return to their base price values. In general, the Dutot may be greater or less than the Jevons but tends to be less than the Carli.
8.24 The Carli and Jevons indices depend only on the price relatives and are unaffected by the price level. The Dutot index, in contrast, is influenced by the price level. In the Dutot index, price
changes are implicitly weighted by the price in the base (price reference) period, so that price changes on more expensive products are assigned a higher importance than similar price changes for
cheaper products (this can be seen from equation 8.3). In Tables 8.1 and 8.3, this is illustrated in the development of the March index where prices for varieties 4, 5, and 7, which have the highest
base prices, are the same as in the price reference month and mitigate the 10 percent price increases of varieties 1, 2, 3, and 6 from the price reference month. The monthly Dutot price index is
104.6 versus 105.6 in the Jevons, and 105.7 in the Carli. The relatively high base prices of varieties 4, 5, and 7 thus pull the Dutot index down to a lower level.
8.25 Another important property of the indices is that the Jevons and the Dutot indices are transitive, whereas the Carli is not. Transitivity means that the chained monthly indices are identical to
the corresponding direct indices. This property is important in practice, because many elementary price indices are in fact calculated as chained indices that link together the month-on-month
indices. The intransitivity of the Carli index is illustrated dramatically in Table 8.3, where each of the individual prices in the final month (July) returns to the same level as in the base month
(as observed in Table 8.1), but the chained Carli registers an increase of 6.7 percent over the base month. Similarly, in June, although each individual price is exactly 10 percent higher than the
base month, the chained Carli registers an increase of 17.4 percent. These results would be regarded as problematic in the case of a direct index, but even in the case of a chained index, the results
seem so intuitively unreasonable as to undermine the credibility of the chained Carli. The price movements between April and May illustrate the effects of “price bouncing” in which the same seven
prices are observed in both periods but they are switched between the different varieties. The monthly Carli index (short-term and long-term) increases from April to May whereas both the Dutot and
the Jevons indices are unchanged.
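The drift of the chained Carli can be checked directly from the short-term relatives in Table 8.3. The following sketch is illustrative only (it is not part of the Manual's prescribed compilation methods); it chains the monthly geometric and arithmetic means of the price relatives and reproduces the 100.0 and 106.7 reported in the table for July:

```python
import math

# Month-to-month (short-term) price relatives for the seven varieties
# in elementary aggregate A, taken from Table 8.3 (Jan. ... Jul.).
relatives = [
    [0.886, 0.923, 1.342, 0.792, 1.390, 0.911, 0.909],  # Variety 1
    [1.072, 0.952, 1.078, 0.739, 1.000, 1.353, 0.909],  # Variety 2
    [0.949, 1.004, 1.155, 1.070, 0.932, 1.003, 0.909],  # Variety 3
    [0.955, 0.745, 1.405, 0.792, 1.109, 1.253, 0.909],  # Variety 4
    [1.044, 0.861, 1.113, 0.958, 1.073, 1.070, 0.909],  # Variety 5
    [0.971, 1.037, 1.092, 0.925, 0.719, 1.501, 0.909],  # Variety 6
    [0.878, 1.275, 0.894, 0.849, 0.901, 1.438, 0.909],  # Variety 7
]

months = list(zip(*relatives))  # one tuple of seven relatives per month

jevons = carli = 100.0
for month in months:
    geo = math.prod(month) ** (1.0 / len(month))  # geometric mean (Jevons link)
    ari = sum(month) / len(month)                 # arithmetic mean (Carli link)
    jevons *= geo
    carli *= ari

print(f"Chained Jevons after July: {jevons:.1f}")  # about 100.0, back at the base level
print(f"Chained Carli after July:  {carli:.1f}")   # about 106.7, upward drift
```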
8.26 One general property of geometric means should be noted when using the Jevons index. If any single observation out of a set of observations is zero, their geometric mean is undefined, whatever
the values of the other observations. The Jevons index is sensitive to extreme falls in prices and it may be necessary to impose upper and lower bounds on the individual price ratios of, for example,
10 and 0.1, respectively, when using the Jevons index. This range should be determined after assessing the typical size of price movements and may vary across different product groups. Of course,
extreme observations often result from errors, so extreme price movements should be carefully checked. It is not recommended to replace a zero price by an arbitrary low value in the Jevons index as
this could lead to unstable results. If the Jevons index is used and the price moves from positive to zero, a practical solution is to split the aggregate into two and estimate weights for each part.
The zero subindex multiplied by the positive weight plus the nonzero Jevons subindex multiplied by the remaining weight is well defined, and the price change is taken into account.
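As an illustration of the safeguards described in the previous paragraph, the sketch below applies the example bounds of 10 and 0.1 to the individual price relatives before computing a Jevons link, and rejects zero prices so that they can be routed to review or to the split-aggregate treatment. The bounds and the function name are chosen only for illustration:

```python
import math

def jevons_with_bounds(prev_prices, curr_prices, upper=10.0, lower=0.1):
    """Jevons link with clipped price relatives; zero prices are handled separately."""
    relatives = []
    for p0, pt in zip(prev_prices, curr_prices):
        if p0 <= 0 or pt <= 0:
            # A zero price makes the geometric mean undefined; flag it for
            # review or for the split-aggregate treatment described above.
            raise ValueError("zero price observed - treat outside the Jevons formula")
        r = pt / p0
        relatives.append(min(max(r, lower), upper))  # clip extreme movements
    return math.prod(relatives) ** (1.0 / len(relatives))
```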
8.27 The message emerging from this brief illustration of the behavior of just three possible formulas is that different index numbers and methods can deliver very different results. With the
knowledge of these interrelationships, one can infer that the chained Carli formula is not recommended. However, this information in itself is not sufficient to determine which formula should be
used, even though it makes it possible to make a more informed and reasoned choice. It is necessary to appeal to other criteria to settle the choice of formula. There are two main approaches that may
be used, the axiomatic and the economic approaches, which are presented in paragraphs 8.28–8.41. First, however, it is useful to consider the sampling properties of the elementary indices.
Sampling Properties of Elementary Price Indices
8.28 The interpretation of the elementary price indices is related to the way in which the sample of goods and services is drawn. Hence, if goods and services in the sample are selected with
probabilities proportional to the population expenditure shares in the price reference period, then:
• The sample (unweighted) Carli index provides an unbiased estimate of the population Laspeyres price index (see equation 8.11).
• The sample (unweighted) Jevons index provides an unbiased estimate of the population geometric Laspeyres price index (see equation 8.14).
8.29 If goods and services are sampled with probabilities proportional to population quantity shares in the price reference period, the sample (unweighted) Dutot index would provide an estimate of
the population Laspeyres price index. However, if the basket for the Laspeyres index contains different kinds of products whose quantities are not additive, the quantity shares, and hence the
probabilities, are undefined.
Axiomatic Approach to Elementary Price Indices
8.30 As explained in Chapters 3 and 6 in Consumer Price Index Theory, one way to decide upon an appropriate index formula is to require it to satisfy certain specified axioms or tests. The tests
throw light on the properties that different kinds of indices have, some of which may not be intuitively obvious. Four basic tests are cited to illustrate the axiomatic approach:
• Proportionality test. If all prices are λ times the prices in the price reference period, the index should equal λ. The data for June in Tables 8.1–8.3, when every price is 10 percent higher than
in the price reference period, show that all three direct indices satisfy this test. A special case of this test is the identity test, which requires that if the price of every variety is the
same as in the reference period, the index should be equal to unity, as in the last month (July) in the example.
• Changes in the units of measurement test (commensurability test). The price index should not change if the quantity units in which the products are measured are changed, for example, if the
prices are expressed per liter rather than per pint. The Dutot index fails this test, as explained in paragraphs 8.29–8.33, but the Carli and Jevons indices satisfy the test.
• Time reversal test. If all the data for the two periods are interchanged, the resulting price index should equal the reciprocal of the original price index. The chained Carli index fails this
test, but the Dutot and the Jevons indices both satisfy the test. The failure of the chained Carli to satisfy the test is not immediately obvious from the example but can easily be verified by
calculating the index backward from June to the index reference period. In this case, the chained Carli from June backward is 97.0 whereas the reciprocal of the forward chained Carli is (1/1.174)
× 100 = 85.2.
• Transitivity test. The chained index between two periods should equal the direct index between the same two periods. It can be seen from the example in Tables 8.1–8.3 that the Jevons and the
Dutot indices both satisfy this test, whereas the Carli index does not. For example, although the prices in July have returned to the same levels as the index reference period, the chained Carli
registers 106.7. This illustrates the fact that the chained Carli may have a significant built-in upward bias.
8.31 Many other axioms or tests can be devised, but the previous ones^5 illustrate the approach, throw light on some important features of the elementary indices under consideration in this Manual, and provide evidence of the preference for the Jevons index.
8.32 The sets of products covered by elementary aggregates are meant to be as homogeneous as possible. If they are not fairly homogeneous, the failure of the Dutot index to satisfy the units of
measurement or commensurability test can be a serious disadvantage. Although defined as the ratio of the unweighted arithmetic average prices, the Dutot index may also be interpreted as a weighted
arithmetic average of the price relatives in which each relative is weighted by its price in the base price period.^6 This can be seen by rewriting formula 8.3 as
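(Multiplying and dividing each term of the numerator of 8.3 by the corresponding base price gives:)

\[
P_D^{0:t} \;=\; \frac{\sum_{i=1}^{n} p_i^t}{\sum_{i=1}^{n} p_i^0}
\;=\; \sum_{i=1}^{n}\left(\frac{p_i^0}{\sum_{j=1}^{n} p_j^0}\right)\frac{p_i^t}{p_i^0} .
\]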
However, if the products are not homogeneous, the relative prices of the different varieties may depend quite arbitrarily on the quantity units in which they are measured.
8.33 Consider, for example, salt and pepper, which are found within the same class of Classification of Individual Consumption According to Purpose. Suppose the unit of measurement for pepper is
changed from grams to ounces, while leaving the units in which salt is measured (for example, kilos) unchanged. As an ounce of pepper is equal to 28.35 grams, the “price” of pepper increases by over
28 times, which effectively increases the weight given to pepper in the Dutot index by over 28 times. The price of pepper relative to salt is inherently arbitrary, depending entirely on the choice of
units in which to measure the two goods. In general, when there are different kinds of products within the elementary aggregate, the use of the Dutot index is not acceptable.
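The effect of the change in units can be verified numerically. The sketch below uses hypothetical salt and pepper prices (chosen only for illustration) and shows that re-expressing the pepper price per ounce instead of per gram changes the Dutot index but leaves the Jevons index untouched:

```python
import math

def dutot(p0, pt):
    return sum(pt) / sum(p0)

def jevons(p0, pt):
    return math.prod(pt_i / p0_i for p0_i, pt_i in zip(p0, pt)) ** (1.0 / len(p0))

# Hypothetical prices: salt per kilo and pepper per gram, in two periods.
salt0, salt1 = 2.00, 2.10         # salt rises 5 percent
pepper0, pepper1 = 0.04, 0.05     # pepper rises 25 percent, quoted per gram

for unit, factor in [("pepper per gram", 1.0), ("pepper per ounce", 28.35)]:
    p0 = [salt0, pepper0 * factor]
    pt = [salt1, pepper1 * factor]
    print(unit, "Dutot:", round(dutot(p0, pt), 3), "Jevons:", round(jevons(p0, pt), 3))

# The Jevons result is identical in both runs; the Dutot result shifts toward the
# pepper price change once pepper is quoted per ounce, because its "price" and hence
# its implicit Dutot weight rise roughly 28-fold.
```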
8.34 The use of the Dutot index is acceptable only when the set of varieties covered is homogeneous, or at least nearly homogeneous. For example, it may be acceptable for a set of apple prices even
though the apples may be of different varieties, but not for the prices of several different kinds of fruits, such as apples, pineapples, and bananas, some of which may be much more expensive per
variety or per kilo than others. Even when the varieties are fairly homogeneous and measured in the same units, the Dutot’s implicit weights may still not be satisfactory. More weight is given to the
price changes for the more expensive varieties, but in practice, they may well account for only small shares of the total expenditure within the aggregate. Consumers are unlikely to buy varieties at
high prices if the same varieties are available at lower prices.
8.35 It may be concluded that from an axiomatic viewpoint, both the Carli and the Dutot indices, although they have been, and still are, used by some NSOs, have disadvantages. The Carli index fails
the time reversal and transitivity tests. In principle, it should not matter whether it is chosen to measure price changes forward or backward in time. It would be expected to give the same answer,
but this is not the case for the chained Carli indices that may be subject to a significant upward bias. The Dutot index is meaningful for a set of homogeneous varieties but becomes increasingly
arbitrary as the set of products becomes more diverse. On the other hand, the Jevons index satisfies all the tests listed in paragraph 8.28 and emerges as the preferred index when the set of tests is
enlarged, as shown previously in paragraphs 8.28–8.29. From an axiomatic point of view, the Jevons index is clearly the index with the best properties.
Economic Approach to Elementary Price Indices
8.36 In the economic approach, the objective is to estimate an economic index, that is, a cost of living index (COLI) for the elementary aggregate (see Chapter 6 of Consumer Price Index Theory). The
varieties for which prices are collected are treated as if they constituted a basket of goods and services purchased by consumers, from which the consumers derive utility. A COLI measures the minimum
amount by which consumers would have to change their expenditures in order to keep their utility level unchanged, allowing consumers to make substitutions between the varieties in response to changes
in the relative prices of varieties.
8.37 The economic approach is based on several assumptions about consumer behavior, market conditions, and the representativity of the sample. These assumptions do not always hold in reality. At the
detailed level of elementary aggregates, special conditions will often prevail and change over time, and the information available about outlets, products, and market conditions may be incomplete.
Thus, although the economic approach may be useful in providing a possible economic interpretation of the index, conclusions should be made with caution. In general, in the decision of how to
calculate the elementary indices one should be careful not to put too much weight on a strict economic interpretation of the index formula at the expense of the statistical considerations.
8.38 In the absence of information about quantities or expenditure within an elementary aggregate, an economic index can only be estimated when certain special conditions are assumed to prevail.
There are two special cases of some interest. The first case is when consumers continue to consume the same relative quantities whatever the relative prices. Consumers prefer not to make any
substitutions in response to changes in relative prices. The cross-elasticities of demand are zero. The underlying preferences are described in the economics literature as “Leontief.” In this first
case, the Carli index calculated for a random sample would provide an estimate of the COLI if the varieties are selected with probabilities proportional to the population expenditure shares. If the
varieties were selected with probabilities proportional to the population quantity shares (assuming the quantities are additive), the sample Dutot would provide an estimate of the underlying COLI.
8.39 The second case occurs when consumers are assumed to vary the quantities they consume in inverse proportion to the changes in relative prices. The cross-elasticities of demand between the
different varieties are all unity, the expenditure shares being the same in both periods. The underlying preferences are described by a “Cobb–Douglas” utility function. With these preferences, the
Jevons index calculated for a random sample would provide an unbiased estimate of the COLI, provided that the varieties are selected with probabilities proportional to the population expenditure shares.
8.40 On the basis of the economic approach, the choice between the sample Jevons and the sample Carli rests on which is likely to approximate more closely the underlying COLI: in other words, on
whether the (unknown) cross-elasticities of demand are likely to be closer to unity or zero, on average. In practice, the cross-elasticities of demand could take on any value ranging up to plus
infinity for an elementary aggregate consisting of a set of strictly homogeneous varieties (that is, perfect substitutes). It should be noted that in the limit when the products really are
homogeneous, there is no index number problem, and the price “index” is given by the ratio of the unit values in the two periods. It may be conjectured that the average cross-elasticity is likely to
be closer to unity than zero for most elementary aggregates, especially since these should be constructed in such a way as to group together similar varieties that are close substitutes for each
other. Thus, in general, the Jevons index is likely to provide a closer approximation to the COLI than the Carli. In this case, the Carli index must be viewed as having an upward bias.
8.41 The use of the Jevons index in the context of the economic approach implies that the quantities are assumed to vary over time in response to changes in relative prices. As a result of the
inverse relation of movements in prices and quantities, the expenditure shares are constant over time. Carli and Dutot, on the other hand, keep the quantities fixed while the expenditure shares vary
in response to change in relative prices.
8.42 The Jevons index does not imply that expenditure shares remain constant. Obviously, the Jevons index can be calculated whatever changes do or do not occur in the expenditure shares in practice.
What the economic approach shows is that if the expenditure shares remain constant (or roughly constant), then the Jevons index can be expected to provide a good estimate of the underlying COLI.
Similarly, if the relative quantities remain constant, then the Carli index can be expected to provide a good estimate, but the Carli index does not actually imply that quantities remain fixed.
8.43 It may be concluded that, on the basis of the economic approach as well as the axiomatic approach, the Jevons emerges as the preferred index, although there may be cases in which little or no
substitution takes place within the elementary aggregate and the direct Carli or Dutot indices might be used. The chained Carli should be avoided altogether. The Dutot index may be used provided the
elementary aggregate consists of homogeneous products. In general, the index compiler should use the Jevons index for the elementary aggregates.
Chained versus Direct Indices for Elementary Aggregates
8.44 In a direct elementary index, the prices of the current period are compared directly with those of the price reference period. In a chained index, prices in each period are compared with those
in the previous period, the resulting short-term indices being chained together to obtain the long-term index, as illustrated in Tables 8.1–8.3.
8.45 Provided that prices are recorded for the same set of varieties in every period, as in Table 8.1, any index formula defined as the ratio of the average prices will be transitive: that is, the
same result is obtained whether the index is calculated as a direct index or as a chained index. In a chained index, successive numerators and denominators will cancel out, leaving only the average
price in the last period divided by the average price in the reference period, which is the same as the direct index. Both the Dutot and the Jevons indices are therefore transitive. As already noted,
however, a chained Carli index is not transitive and should not be used because of its upward bias.
8.46 Although the chained and direct versions of the Jevons and Dutot indices are identical when there are no breaks in the series for the individual varieties, they offer different ways of dealing
with new and disappearing varieties, missing prices, and quality adjustments. In practice, products continually need to be excluded from the index and new ones included, in which case the direct and
the chained indices may differ if the imputations for missing prices are made differently.
8.47 When a replacement variety must be included in a direct index, it will often be necessary to estimate the price of the new variety in the price reference (base) period, which may be some time in
the past. The same happens if, as a result of an update of the sample, new varieties are linked into the index. If no information exists on the price of the replacement variety in the price reference
period, it will be necessary to estimate it using price ratios calculated for the varieties that remain in the elementary aggregate, a subset of these varieties, or some other indicator. However, the
direct approach should only be used for a limited period of time. Otherwise, most of the reference prices would end up being imputed, which would be an undesirable outcome. This effectively rules out
the use of the Carli index over a long period of time, as the Carli should only be used in its direct form and not in chained form as previously discussed. This implies that, in practice, the direct
Carli may be used only if the overall index is chain linked annually, or biannually.
8.48 In a chained index, if a variety becomes permanently missing, a replacement variety can be linked into the index as part of the ongoing index calculation by including the variety in the monthly
index as soon as prices for two successive months are obtained. Similarly, if the sample is updated and new products must be linked into the index, this will require successive old and new prices for
the present and the preceding months. For a chained index, the replacement variety for a missing observation would also need to have prices for the current and previous period. However, if the
previous price is not available, it will have an impact on the index for two months, since the substitute observation cannot be used until the subsequent month. It also is possible to impute the
price of the missing variety in the first missing month so that the next period price can be compared to the imputed price.
8.49 A missing price does not present such a problem in the case of a direct index. In a direct index a single, non-estimated missing observation will only have an impact on the index in the current
period. For example, for a comparison between periods 0 and 3, a missing price of the replacement in period 2 means that the chained index excludes the variety for the last link of the index in
periods 2 and 3. By comparison, the direct index includes it in period 3 since a direct index will be based on varieties whose prices are available in periods 0 and 3 (unless an imputation is made).
In general, however, the use of a chained index can make the estimation of missing prices and the introduction of replacements easier from a computational point of view, whereas it may be inferred
that a direct index will limit the usefulness of overlap methods for dealing with missing observations.
8.50 The direct and the chained approaches also produce different by-products that may be used for monitoring price data. For each elementary aggregate, a chained index approach gives the latest
monthly price change, which can be useful for both data editing and imputation of missing prices. By the same token, however, a direct index derives average price levels for each elementary aggregate
in each period, and this information may be a useful by-product. Nevertheless, because the availability of computing power at a low cost and of spreadsheets allows such by-products to be calculated
whether a direct or a chained approach is applied, the choice of formula should not be dictated by considerations regarding by-products.
Consistency in Aggregation
8.51 Consistency in aggregation means that if an index is calculated stepwise by aggregating lower-level indices to obtain indices at progressively higher levels of aggregation, the same overall
result should be obtained as if the calculation had been made in one step. For example, aggregating the elementary aggregate indices to the all-items index gives the same result as aggregating the
group-level indices to the all-items index. For presentational purposes, this is an advantage. If the elementary aggregates are calculated using one formula and the elementary aggregates are averaged
to obtain the higher-level indices using another formula, the resulting CPI is not consistent in aggregation. However, consistency in aggregation is not necessarily the most important criterion, and
it is unachievable when the amount of information available on quantities and expenditure is not the same at the different levels of aggregation. In addition, there may be different degrees of
substitution within elementary aggregates as compared to the degree of substitution between products in different elementary aggregates.
8.52 The Carli index would be consistent in aggregation with the Laspeyres index if the varieties were to be selected with probabilities proportional to expenditures in the reference period. This is
typically not the case. The Dutot and the Jevons indices are not consistent in aggregation with a higher-level Laspeyres. As explained in paragraphs 8.88–8.94, however, the CPIs actually calculated
by NSOs are usually not true Laspeyres indices, even though they may be based on fixed baskets of goods and services. If the higher-level index were to be defined as a geometric Laspeyres,
consistency in aggregation could be achieved by using the Jevons index for the elementary indices at the lower level, provided that the individual varieties are sampled with probabilities
proportional to expenditure. Although unfamiliar, a geometric Laspeyres has desirable properties from an economic point of view and is considered again later.
Missing Price Observations
8.53 The price of a variety may fail to be collected in some period either because the variety is missing temporarily or because it has permanently disappeared. The two classes of missing prices
require different treatment as noted in Chapter 6. Temporary unavailability may occur for seasonal varieties (particularly for fruit, vegetables, and clothing), because of supply shortages, or
possibly because of some collection difficulty (for example, an outlet was closed or a price collector was ill). The treatment of seasonal varieties raises several particular problems. These are
dealt with in Chapter 11.
Treatment of Temporarily Missing Prices
8.54 In the case of temporarily missing observations for nonseasonal varieties, one of four actions may be taken:
• Omit the variety for which the price is missing so that a matched sample is maintained (like is compared with like) even though the sample is depleted
• Carry forward the last observed price
• Impute the missing price by the average price change for the prices that are available in the elementary aggregate
• Impute the missing price by the price change for a particular comparable variety from another similar outlet
8.55 Omitting an observation from the calculation of an elementary index is equivalent to assuming that the price would have moved in the same way as the average change in the prices of the varieties
that remain included in the index. Omitting an observation changes the implicit weights attached to the other prices in the elementary aggregate.
8.56 Carrying forward the last observed price is not recommended, except in the case of fixed or regulated prices. Special care needs to be taken in periods of high inflation or when markets are
changing rapidly as a result of a high rate of innovation and product turnover. While simple to apply, carrying forward the last observed price biases the resulting index toward zero change. In
addition, when the price of the missing variety is recorded again, there is likely to be a compensating step change in the index to return to its proper value. The adverse effect on the index will be
increasingly severe if the variety remains unpriced for some length of time. In general, carrying forward is not an acceptable solution to the problem of missing prices.
8.57 Imputation of the missing price by the average change of the available prices may be applied for elementary aggregates where the prices can be expected to move in the same direction. The
imputation can be made using all the remaining prices in the elementary aggregate. As already noted, this is numerically equivalent to omitting the variety for the immediate period, but it is useful
to make the imputation so that if the price becomes available again in a later period the sample size is not reduced in that period. In some cases, depending on the homogeneity of the elementary
aggregate, it may be preferable to use only a subset of varieties from the elementary aggregate to estimate the missing price. In some instances, this may even be a single comparable variety from a
similar type of outlet whose price change can be expected to be similar to the missing one. See Chapter 6 on imputation methods.
8.58 Tables 8.4A and 8.4B illustrate the calculation of the price index for the elementary aggregate where the price for variety 6 is missing in March. The long-term (direct) indices are therefore
calculated based on the six varieties with reported prices. The short-term (chained) indices are calculated based on all seven prices from January to February and from April to July. From February to
March and from March to April the monthly indices are calculated based on six varieties only.
^Table 8.4A
Jevons and Dutot Elementary Price Indices Using Averages with Missing Prices
Base Match Jan. Feb. Mar. Apr. May Jun. Jul
Elementary Aggregate A
Variety 1 2.36 2.36 2.09 1.93 2.59 2.05 2.85 2.59 2.36
Variety 2 5.02 5.02 5.38 5.12 5.52 4.08 4.08 5.52 5.02
Variety 3 5.34 5.34 5.07 5.09 5.88 6.29 5.86 5.88 5.34
Variety 4 6.00 6.00 5.73 4.27 6.00 4.75 5.27 6.60 6.00
Variety 5 6.12 6.12 6.39 5.50 6.12 5.86 6.29 6.74 6.12
Variety 6 2.80 – 2.72 2.82 – 2.85 2.05 3.08 2.80
Variety 7 6.21 6.21 5.45 6.95 6.21 5.27 4.75 6.84 6.21
Geometric Mean Price (seven observations) 4.55 4.38 4.20 4.17 4.17 5.01 4.55
Geometric Mean Price (six matched observations) 4.93 4.49 5.17 4.45
L-T Aggregate Relative 1.000 0.963 0.924 1.049 0.917 0.917 1.100 1.000
Jevons Index (direct) 100.0 96.3 92.4 104.9 91.7 91.7 110.0 100.0
Geometric Mean S-T Aggregate Relatives 0.963 0.959 1.152 0.859 1.000 1.200 0.909
Jevons Index (chained averages) 100.0 96.3 92.4 106.4 91.4 91.4 109.7 99.7
Arithmetic Mean Price (seven observations) 4.84 4.69 4.53 4.45 4.45 5.32 4.84
Arithmetic Mean Price (six matched observations) 5.18 4.81 5.39 4.72
L-T Aggregate Relative 1.000 0.970 0.936 1.041 0.920 0.920 1.100 1.000
Dutot Index (direct) 100.0 97.0 93.6 104.1 92.0 92.0 110.0 100.0
S-T Aggregate Relatives 0.970 0.965 1.120 0.876 1.000 1.196 0.909
Dutot Index (chained averages) 100.0 97.0 93.6 104.8 91.8 91.8 109.7 99.7
Note: The text in gray refers to six matched observations whereas the text in bold refers to seven matched observations.
^Table 8.4B
Jevons and Carli Elementary Price Indices Using Relatives with Missing Prices
Jan. Feb. Mar. Apr. May Jun. Jul
Elementary Aggregate A
Variety 1 0.886 0.818 1.097 0.869 1.208 1.100 1.000
Variety 2 1.072 1.020 1.100 0.813 0.813 1.100 1.000
Variety 3 0.949 0.953 1.101 1.178 1.097 1.100 1.000
Variety 4 0.955 0.712 1.000 0.792 0.878 1.100 1.000
Variety 5 1.044 0.899 1.000 0.958 1.028 1.100 1.000
Variety 6 0.971 1.007 – 1.018 0.732 1.100 1.000
Variety 7 0.878 1.119 1.000 0.849 0.765 1.100 1.000
Geometric Mean Price Relatives (seven observations) 0.963 0.924 0.917 0.917 1.100 1.000
Geometric Average Price Relatives (six observations) 1.049
Jevons Index (mean L-T price relative) 96.3 92.4 104.9 91.7 91.7 110.0 100.0
Arithmetic Mean Price Relative (seven observations) 0.965 0.933 0.925 0.932 1.100 1.000
Arithmetic Mean Price Relative (six observations) 1.050
Carli Index (average L-T price relative) 96.5 93.3 105.0 92.5 93.2 110.0 100.0
Elementary Aggregate A S-T Price Relatives
Variety 1 0.886 0.923 1.342 0.792 1.390 0.911 0.909
Variety 2 1.072 0.952 1.078 0.739 1.000 1.353 0.909
Variety 3 0.949 1.004 1.155 1.070 0.932 1.003 0.909
Variety 4 0.955 0.745 1.405 0.792 1.109 1.253 0.909
Variety 5 1.044 0.861 1.113 0.958 1.073 1.070 0.909
Variety 6 0.971 1.037 – – 0.719 1.501 0.909
Variety 7 0.878 1.275 0.894 0.849 0.901 1.438 0.909
Geometric Mean Aggregate Relatives (seven observations) 0.963 0.959 1.000 1.200 0.909
Geometric Mean Aggregate Relatives (six matched observations) 1.153 0.859
Jevons Index (chained S-T price relatives) 96.3 92.4 106.4 91.4 91.4 109.7 99.7
Arithmetic Mean Aggregate Relatives (seven observations) 0.965 0.971 1.018 1.219 0.909
Arithmetic Mean Aggregate Relatives (six matched observations) 1.164 0.866
Carli Index (chained S-T aggregate relatives) 96.5 93.7 109.1 94.5 96.2 117.3 106.6
Note: The text in gray refers to six matched observations whereas the text in bold refers to seven matched observations.
8.59 The average prices (both arithmetic and geometric) are calculated using the six available prices for the base period, February, March, and April in Table 8.4A. The direct Jevons and Dutot
indices use the average of the six prices in March and the base period to derive the March index (104.9 and 104.1, respectively). This calculation uses a matched sample for the prices available in
each period (March and the base period) to derive the averages. In April, all seven prices are again available so the direct indices are derived by comparing the averages of the seven prices to their
average in the base period.
8.60 For the chained Jevons and Dutot indices that use the short-term price relatives, the average prices for the six varieties available in March are compared to the average prices of the six
available varieties in February. The resulting price relatives are multiplied by the February indices to derive the March indices (106.4 for the Jevons and 104.8 for the Dutot). The same holds true
for April’s compilation—the average of the six prices that were available in both March and April are used to derive the April indices (91.4 for the Jevons and 91.8 for the Dutot).
8.61 For both the Jevons and the Dutot indices, the direct and chained indices now differ from March onward. The first link in the chained index (January to February) is the same as the direct index,
so the two indices are identical numerically. The direct index for March completely ignores the price increase of variety 6 between January and February, while this is counted in the chained index.
As a result, the direct index is lower than the chained index for March. On the other hand, in April, when all prices are again available, the direct index captures the price development for the full
sample, whereas the chained index only tracks the long-term development in the six-price sample.
8.62 Table 8.4B shows the compilation of the Jevons and Carli indices using the long-term and short-term average of price relative methods. The long-term Carli index shows similar effects in March
and April as those for the Jevons index in missing the long-term price change for variety 6. The short-term Carli, however, shows a significant upward bias as it increased to 106.6 when all the
prices return to their base period levels in July.
8.63 As Tables 8.4A and 8.4B demonstrate, the Jevons, Dutot, and Carli direct indices return to 100.0 in the final period when all prices return to their base period levels. The chained versions do
not, with the Carli showing a large upward drift by the end month and the Jevons and Dutot with a slight downward drift.
8.64 The problem with the chained index will be resolved if the missing price is imputed using the average short-term change of the other observations in the elementary aggregate. In Table 8.5A, the
missing price for variety 6 in March is imputed using the geometric average of the price changes of the remaining varieties from February to March. While the imputation might be calculated using
long-term relatives (that is, comparing the prices of the present period with the base period prices), the imputation of missing prices should be made based on the price change from the preceding to
the present period, as shown in the table. Imputation based on the average price change from the base period to the present period should not be used as it ignores the information about the price
change of the missing variety that has already been included in the index. The treatment of imputations is discussed in more detail in Chapter 6.
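The short-term imputation just described can be reproduced from the prices in Table 8.4A. The sketch below (illustrative only) moves the last observed price of variety 6 forward by the geometric mean of the February-to-March changes of the six varieties observed in both months, giving the 3.25 shown in Table 8.5A:

```python
import math

# February and March prices of the six varieties observed in both months (Table 8.4A).
feb = [1.93, 5.12, 5.09, 4.27, 5.50, 6.95]
mar = [2.59, 5.52, 5.88, 6.00, 6.12, 6.21]

# Geometric mean of the short-term (Feb-to-Mar) changes of the available varieties.
mean_change = math.prod(m / f for f, m in zip(feb, mar)) ** (1.0 / len(feb))

last_observed = 2.82                    # variety 6, February
imputed_march = last_observed * mean_change
print(round(imputed_march, 2))          # 3.25, as in Table 8.5A (Jevons imputation)
```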
^Table 8.5A
Jevons and Dutot Elementary Price Indices Using Averages with Imputed Prices
Base Match Jan. Feb. Mar. Apr. May Jun. Jul
Elementary Aggregate A Prices
Variety 1 2.36 2.36 2.09 1.93 2.59 2.05 2.85 2.59 2.36
Variety 2 5.02 5.02 5.38 5.12 5.52 4.08 4.08 5.52 5.02
Variety 3 5.34 5.34 5.07 5.09 5.88 6.29 5.86 5.88 5.34
Variety 4 6.00 6.00 5.73 4.27 6.00 4.75 5.27 6.60 6.00
Variety 5 6.12 6.12 6.39 5.50 6.12 5.86 6.29 6.74 6.12
Variety 6 2.80 2.72 2.82 3.25 2.85 2.05 3.08 2.80
Variety 7 6.21 6.21 5.45 6.95 6.21 5.27 4.75 6.84 6.21
Geometric Mean Price (seven observations) 4.55 4.38 4.20 4.84 4.17 4.17 5.01 4.55
Geometric Mean Price (six observations) 4.93 4.49 5.17
L-T Aggregate Relative 1.000 0.963 0.924 1.064 0.917 0.917 1.100 1.000
Jevons Index (direct) 100.0 96.3 92.4 106.4 91.7 91.7 110.0 100.0
Geometric Mean S-T Aggregate Relative 0.963 0.959 1.152 0.862 1.000 1.200 0.909
Jevons Index (chained averages) 100.0 96.3 92.4 106.4 91.7 91.7 110.0 100.0
Variety 6 (imputed price) 2.80 2.72 2.82 3.16 2.85 2.05 3.08 2.80
Arithmetic Mean Price (seven observations) 4.84 4.69 4.53 5.07 4.45 4.45 5.32 4.84
Arithmetic Mean Price (six observations) 5.18 4.81 5.39
L-T Aggregate Relative 1.000 0.970 0.936 1.048 0.920 0.920 1.100 1.000
Dutot Index (direct) 100.0 97.0 93.6 104.8 92.0 92.0 110.0 100.0
S-T Aggregate Relatives 0.970 0.965 1.120 0.878 1.000 1.196 0.909
Dutot Index (chained averages) 100.0 97.0 93.6 104.8 92.0 92.0 110.0 100.0
Note: The text in gray refers to six matched observations whereas the text in bold refers to seven matched observations.
8.65 The calculations in Tables 8.5A and 8.5B show that when the missing price for variety 6 is imputed using the short-term price change of the other varieties, the Jevons, Dutot, and Carli indices reflect the changes for all the observations under the direct and long-term relative methods. For the Jevons and Dutot indices, the chained method gives the same results as the direct method. However, the chained Carli is significantly upward biased, demonstrating again that this method should not be used for index compilation.
^Table 8.5B
Jevons and Carli Elementary Price Indices Using Relatives with Imputed Prices
Jan. Feb. Mar. Apr. May Jun. Jul
Elementary Aggregate A
Variety 1 0.888 0.816 1.100 0.869 1.207 1.100 1.000
Variety 2 1.072 1.019 1.100 0.813 0.813 1.100 1.000
Variety 3 0.949 0.953 1.100 1.178 1.097 1.100 1.000
Variety 4 0.955 0.712 1.000 0.792 0.878 1.100 1.000
Variety 5 1.044 0.898 1.000 0.957 1.028 1.100 1.000
Variety 6 0.974 1.008 1.160 1.018 0.733 1.100 1.000
Variety 7 0.877 1.118 1.000 0.848 0.765 1.100 1.000
Geometric Mean Price Relatives (seven observations) 0.963 0.924 1.064 0.917 0.917 1.100 1.000
Jevons Index (average L-T price relatives) 96.3 92.4 106.4 91.7 91.7 110.0 100.0
Variety 6 (imputed L-T price relative) 0.963 0.924 1.173 0.917 0.917 1.100 1.000
Arithmetic Mean Price Relatives (seven observations) 0.965 0.933 1.067 0.925 0.932 1.100 1.000
Carli Index (average L-T price relatives) 96.5 93.3 106.7 92.5 93.2 110.0 100.0
Elementary Aggregate A
Variety 1 0.886 0.923 1.342 0.792 1.390 0.909 0.911
Variety 2 1.072 0.952 1.078 0.739 1.000 1.353 0.909
Variety 3 0.949 1.004 1.155 1.070 0.932 1.003 0.908
Variety 4 0.955 0.745 1.405 0.792 1.109 1.252 0.909
Variety 5 1.044 0.861 1.113 0.958 1.073 1.072 0.908
Variety 6 0.971 1.037 1.152 0.877 0.719 1.502 0.909
Variety 7 0.878 1.275 0.894 0.849 0.901 1.440 0.908
Geometric Mean Price Relatives (seven observations) 0.963 0.959 1.152 0.862 1.000 1.200 0.909
Jevons Index (chained S-T price relatives) 96.3 92.4 106.4 91.7 91.7 110.0 100.0
Variety 6 (imputed S-T price relative) 0.963 0.959 1.164 0.782 1.000 1.200 0.909
Arithmetic Mean Aggregate Price Relatives (seven observations) 0.965 0.971 1.164 0.868 1.018 1.219 0.909
Carli Index (chained S-T price relatives) 96.5 93.7 109.2 94.7 96.4 117.5 106.8
Note: The text in gray refers to six matched observations whereas the text in bold refers to seven matched observations.
Treatment of Permanently Disappeared Varieties
8.66 Varieties may disappear permanently for a number of reasons. The variety may disappear from the market because new varieties have been introduced or the outlets from which the price has been
collected have stopped selling the product. Where varieties disappear permanently, a replacement variety must be sampled and included in the index. The replacement variety should ideally be one that
accounts for a significant proportion of sales, is likely to continue to be sold for some time, and is likely to be representative of the sampled price changes of the market that the old variety
covered. In practice when selecting replacement varieties, compromises must be found between representativity, comparability over time, and similarity.
8.67 The timing of the introduction of replacement varieties is important. Many new products are initially sold at high prices that then gradually drop over time, especially as the volume of sales
increases. Alternatively, some products may be introduced at artificially low prices to stimulate demand. In such cases, delaying the introduction of a new or replacement variety until a large volume
of sales is achieved may miss some systematic price changes that ought to be captured by CPIs. It is desirable to avoid making replacements when sales of the varieties they replace are significantly
discounted in order to clear out inventory. In such cases, the disappearing variety’s price should be returned to its last nondiscounted price as the new variety is introduced.
8.68 To include the new variety in the index, an imputed price needs to be calculated. The imputation will differ based on the formula used. For the Jevons index, the geometric average of short-term
relatives is used, while for the Carli index, the arithmetic average of short-term relatives is used. For the Dutot index, the short-term relative of average prices is used. If a direct index is
being calculated from average prices, the imputed price must be included in calculating the average prices in the current month. For the Jevons and Carli indices, the base price can be estimated by
using the price ratio of the new variety price to the imputed price of the old variety as the relative quality difference. This ratio is then applied to the base price of the old variety. A different
method must be used for estimating the Dutot base price that involves estimating the average base price using the long-term price change of the elementary aggregate.
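For the Jevons case, the base-price estimation can be written out as a short sketch. The figures below correspond to the worked example in Table 8.6, where variety A disappears after March and variety D is introduced in April; the code is illustrative only:

```python
import math

# April imputation for the disappearing variety A, using the short-term relative
# of the geometric average prices of the continuing varieties B and C (Table 8.6).
b_mar, c_mar = 4.00, 9.00
b_apr, c_apr = 5.00, 10.00
a_mar = 5.00

st_relative = math.sqrt((b_apr * c_apr) / (b_mar * c_mar))    # [(5 x 10)/(4 x 9)]^0.5
a_apr_imputed = round(a_mar * st_relative, 2)                 # 5.89, as in Table 8.6

# Estimated January base price for replacement variety D: the base price of A scaled
# by the April price ratio of D to the imputed price of A (the quality adjustment).
a_base = 6.00
d_apr = 9.00
d_base = a_base * (d_apr / a_apr_imputed)
print(a_apr_imputed, round(d_base, 2))                        # 5.89  9.17
```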
8.69 Table 8.6 shows an example where variety A disappears after March and variety D is included as a replacement from April onward. Varieties A and D are not available on the market at the same time
and their price series do not overlap. The base price estimation in the examples applies to the Jevons and Carli price indices. The methods for the Dutot price index are shown in Table 8.7.
^Table 8.6
Replacing Varieties with No Overlapping Prices: Jevons and Carli Price Indices
January February March April May
Elementary Aggregate B Prices
Variety A 6.00 7.00 5.00
Variety B 3.00 2.00 4.00 5.00 6.00
Variety C 7.00 8.00 9.00 10.00 9.00
Variety D 9.00 8.00
Geometric Mean 5.01 4.82 5.65 7.66 7.56
Average of L-T Price Relatives 0.992 1.151 1.360 1.540
(a) No Imputations for Missing Prices (price indices calculated directly from monthly averages)
Jevons Index—The Ratio of Geometric Mean Prices = Geometric Mean of Price Relatives
Direct Index 100.0 96.1 112.6 152.9 150.8
Month-to-Month Change 0.961 1.171 1.357 0.986
Chained m/m Index 100.0 96.1 112.6 152.9 150.8
Carli Index—The Arithmetic Average of Price Relatives
Direct Index 100.0 99.2 115.1 136.0 154.0
Month-to-Month Change 0.992 1.278 1.181 0.996
Chained m/m Index 100.0 99.2 127.0 149.9 149.3
(b) Imputation for Missing Prices
Jevons Index—The Ratio of Geometric Mean Prices = Geometric Mean of Price Relatives
Impute the Price of Variety A in April Using the S-T Relative of Average Prices: 5.00 × [(5 × 10)/(4 × 9)]^0.5 = 5.89
The April Average Price Is Derived as (5.89 × 5 × 10)^(1/3) = 6.65
The April Index Is Derived Using the January Geometric Average Price (6.65/5.01) = 1.327 × 100 = 132.7
A January Base Price Is Set for Variety D to Be Equal to the Base Price of Variety A Adjusted for the Quality Difference = Relative Price
Difference Between Varieties D and A in April: 6 × (9/5.89) = 9.17. By Taking the Geometric Average of the Base Prices of B, C, and D,
One Then Obtains the Adjusted Average of 5.77. The May Average Price Is (6 × 9 × 8)^(1/3) = 7.56
The May Index Is Then Calculated as (7.56/5.77) × 100 = 130.9
January February March April May
Elementary Aggregate B Prices
Variety A 6.00 7.00 5.00 5.89
Variety B 3.00 2.00 4.00 5.00 6.00
Variety C 7.00 8.00 9.00 10.00 9.00
Variety D 9.17 9.00 8.00
Geometric Mean 5.01 4.82 5.65 6.65
Adjusted Average 5.77 7.66 7.56
Direct Index 100.0 96.1 112.6 132.7 130.9
The Month-to-Month Changes Are Calculated from the Geometric Mean of Price Changes of Varieties A, B, C from January through April.
The Monthly Change in May Is Calculated Using the Geometric Mean of Price Changes for Varieties B, C, D from April to May
Month-to-Month Change 0.961 1.171 1.178 0.987
Chained m/m Index 100.00 96.1 112.6 132.7 130.9
Carli Index—The Arithmetic Mean of Price Relatives
The April Price of Variety A Is Missing, and the Average S-T Price Relative in April Is Derived from Varieties B and C: (5/4 + 10/9) × 0.5 = 1.181
Impute the Price of Variety A in April as 5.00 × 1.181 = 5.90, So That Its L-T Relative Is (5.90/6) = 0.984
The Average L-T Relative in April Is (0.984 + 1.667 + 1.429)/3 = 1.360 × 100 = 136.0, the April Index
A January Base Price Is Set for Variety D Equal to the Base Price of Variety A Adjusted for the Quality Difference, That Is, the Relative Price
Difference Between Varieties D and A in April: 6 × (9/5.90) = 9.15. The L-T Relative for Variety D in May Is (8/9.15) = 0.8745. The May
Index Is (1/3) × (2.000 + 1.2857 + 0.8745) × 100 = 138.67
Prices, Elementary Aggregate B        January   February   March    April     May
Variety A                                6.00       7.00    5.00     5.90       …
Variety B                                3.00       2.00    4.00     5.00    6.00
Variety C                                7.00       8.00    9.00    10.00    9.00
Variety D                                9.15          …       …     9.00    8.00
Price Relatives (base = January)
Variety A                                   …      1.167   0.833    0.984       …
Variety B                                   …      0.667   1.333    1.667   2.000
Variety C                                   …      1.143   1.286    1.429   1.285
Variety D                                   …          …       …        …   0.874
Direct Index                           100.00       99.2   115.1    136.0   138.7
Table 8.7
Replacing Varieties with No Overlapping Prices: Dutot Index
Prices, Elementary Aggregate B        January   February   March    April     May
Variety A                                6.00       7.00    5.00        …       …
Variety B                                3.00       2.00    4.00     5.00    6.00
Variety C                                7.00       8.00    9.00    10.00    9.00
Variety D                                   …          …       …     9.00    8.00
Arithmetic Average                       5.33       5.67    6.00     8.00    7.67
(a) No Imputations for Missing Prices
Dutot Index—The Ratio of Arithmetic Mean Prices
Direct Index 100.00 106.25 112.50 150.00 143.75
Month-to-Month Change 1.0625 1.0588 1.3333 0.9583
Chained m/m Index 100.00 106.25 112.50 150.00 143.75
(b) Imputation for Missing Prices
Dutot Index—The Ratio of Arithmetic Mean Prices
Impute the Price of Variety A in April Using the S-T Relative of Average Prices: 5.00 × (5 + 10)/(4 + 9) = 5.77
The April Average Price Is Derived as (5.77 + 5 + 10)/3 = 6.92
The April Index Is Derived Using the January Average Price (6.92/5.33) = 1.2981 × 100 = 129.81
A New Imputed Average Price Is Calculated for January by Taking the April Arithmetic Mean Price of Varieties B, C, and D, (5 + 10 + 9)/3 = 8, and Deflating the Value Using the April L-T Price Change: (8/1.2981) = 6.16
The May Index Is Then Calculated as (7.67/6.16) × 100 = 124.40
Prices, Elementary Aggregate B        January   February   March    April     May
Variety A                                6.00       7.00    5.00     5.77       …
Variety B                                3.00       2.00    4.00     5.00    6.00
Variety C                                7.00       8.00    9.00    10.00    9.00
Variety D                                   …          …       …     9.00    8.00
Arithmetic Mean (A, B, C)                5.33       5.67    6.00     6.92       …
Adjusted Average (B, C, D)               6.16          …       …     8.00    7.67
Direct Index 100.00 106.25 112.50 129.81 124.40
The Month-to-Month Changes Are Calculated from the Average Price for Varieties A, B, C from January through April. The Monthly
Change in May Is Calculated on the Average Price for Varieties B, C, D in April and May
Month-to-Month Change 1.0625 1.0588 1.1538 0.9583
Chained m/m Index 100.00 106.25 112.50 129.81 124.40
8.70 If a chained index is calculated, the imputation method ensures that the inclusion of the new variety does not, in itself, affect the index and an adjustment of the base price is not necessary.
In the case of a chained index, imputing the missing price by the average change of the available prices gives the same result as if the variety is simply omitted from the index calculation. However,
by storing the imputed price as an observation, it can be used with a reported price for index calculation in the subsequent month as previously demonstrated in Table 8.5A. Thus, the chained index is
compiled by simply chaining the month-to-month price movement between periods t - 1 and t, based on the matched set of prices in those two periods, onto the value of the chained index for period t -
1. In the example, no further imputation is required after April, and the subsequent movement of the index is unaffected by the imputed price change between March and April.
8.71 For the Dutot index, the short-term relative of average prices is used to make imputations. In the Dutot example in Table 8.7, the average base price used in the direct calculation must be
adjusted for the relative difference between the old sample's average price and the new sample's average price. When using the long-term Dutot index based on an arithmetic mean of prices, the base
price imputation is made by using the new sample's average price and the long-term elementary index to estimate the average base price. The trend of the Dutot index is affected by the level of the
base prices: the observation with the largest base price has the greatest influence on the movement of the elementary index. In the Jevons and Carli indices, each observation is equally important,
and the estimation of the base prices is not affected by the level of the other observations in the sample.
8.72 The adjusted base price in this example is derived by dividing the new average price level by the long-term price change of the elementary index. From another perspective, the adjusted base
price is estimated by applying the ratio of the new sample’s average price to the old sample’s average price to the old base price. This implicitly assumes that the difference in the average prices
reflects the difference in quality.
8.73 The situation is somewhat simpler when there is an overlap month in which prices are collected for both the disappearing and the replacement variety. In that case, it is possible to link the
price series for the new variety to the price series for the old variety that it replaces. Linking with overlapping prices involves making an implicit adjustment for the difference in quality between
the two varieties, as it assumes again that the relative prices of the new and old varieties reflect their relative qualities. For perfect or nearly perfect markets, this may be a valid assumption,
but for certain markets and products, it may not be so reasonable. The question of when to use overlapping prices is dealt with in detail in Chapter 6. The overlap method is illustrated in Table 8.8.
Table 8.8
Disappearing and Replacement Varieties with Overlapping Prices
Prices, Elementary Aggregate B        January   February   March    April     May
Variety A                                6.00       7.00    5.00        …       …
Variety B                                3.00       2.00    4.00     5.00    6.00
Variety C                                7.00       8.00    9.00    10.00    9.00
Variety D                                   …          …   10.00     9.00    8.00
Geometric Average Price, A, B, C         5.01       4.82    5.65        …       …
Geometric Average Price, B, C, D            …          …   (7.11)   (7.66)  (7.56)
Arithmetic Average Price, A, B, C        5.33       5.67    6.00        …       …
Arithmetic Average Price, B, C, D           …          …   (7.67)   (8.00)  (7.67)
Jevons Index—The Ratio of Geometric Mean Prices = Geometric Mean of Price Ratios
Chain the Monthly Indices Based on Matched Prices
Month-to-Month Change 1.0000 0.9615 1.1713 1.0774 0.9869
Chained m/m Index 100.00 96.15 112.62 121.33 119.75
For the Direct Index, a New Imputed Average Price Is Calculated for January by Taking the Geometric Average Price of Varieties B, C, and D in March, (4 × 9 × 10)^(1/3) = 7.11, and Deflating by the March L-T Index
(1.1262) to Derive the Adjusted Base Price (6.31). This Calculation Maintains the Level of the March Index
The Adjusted Base Price Is Used to Compile the April and May Indices
Direct Index 100.00 96.15 112.62 121.32 119.68
Dutot Index—The Ratio of Arithmetic Mean Prices
Chain the Monthly Indices Based on Matched Prices
Month-to-Month Change 1.0000 1.0625 1.0588 1.0435 0.9583
Chained m/m Index 100.00 106.25 112.50 117.39 112.50
For the Direct Index, a New Imputed Average Price Is Calculated for January by Taking Average Price of Varieties B, C, and D in March (4 + 9 + 10)/3 = 7.67 and Deflating by the March L-T Relative
(1.1250) to Derive the Adjusted Base Price (6.81). This Calculation Maintains the Level of the March Index. This Adjusted Base Price Is Used to Compile the April and May Indices
Direct index 100.00 106.25 112.50 117.39 112.50
Carli Index—The Arithmetic Mean of Price Relatives
Price Relatives (base = January)      January   February   March    April     May
Variety A                              1.0000     1.1667  0.8333        …       …
Variety B                              1.0000     0.6667  1.3333   1.6667  2.0000
Variety C                              1.0000     1.1429  1.2857   1.4286  1.2857
Variety D                              1.0000          …       …   1.0357  0.9206
Average L-T Relative                        …     0.9921  1.1508   1.3770  1.4021
The Average L-T Relative for the Elementary Index in March Is (0.8333 + 1.3333 + 1.2857)/3 = 1.1508 × 100 = 115.08, the March Index
Impute the Price of Variety D in January as 10.00/1.1508 = 8.69, Keeping the L-T Relative at 1.1508 So That the Introduction of Variety D
Does Not Affect the March Index Level. The New L-T Relatives for Variety D in April and May Are 1.0357 (9.00/8.69) and 0.9206 (8.00/8.69)
The Average L-T Relatives for Varieties B, C, and D Are Used to Calculate the April and May Indices
Direct Index 100.00 99.21 115.08 137.70 140.21
8.74 In the example in Table 8.8 overlapping prices are obtained for varieties A and D in March. There is now an overlapping sample for March—one using varieties A, B, C, and the other using
varieties B, C, and D. A monthly chain Jevons index of geometric mean prices will be based on the prices of varieties A, B, and C until March, and from April onward on the prices of varieties B, C,
and D. The replacement variety is not included until prices for two successive periods are obtained. Thus, the monthly chain index has the advantage that it is not necessary to carry out any explicit
imputation of a reference (base) price for the new variety. The same approach applies to the Dutot chain index.
8.75 If a direct index is calculated as the ratio of the arithmetic (geometric) mean prices, the price in the price reference period needs to be adjusted by deflation of the new average in March by
the long-term index so that the March index level is maintained and the new sample does not affect the long-term price change through March. If a new reference price of variety D for January was
imputed, different results would be obtained because the price changes are implicitly weighted by the relative reference period prices in the Dutot index, which is not the case for the Carli or the
Jevons indices. The April and May index change in the Dutot index is lower than the Jevons because the declines in price of varieties C and D have larger implicit weights in the Dutot (39 and 43
percent) versus the Jevons (33 and 33 percent).^7
8.76 If the index is calculated as a direct Carli, the January base period price for variety D must be imputed by dividing the price of variety D in March (10.00) by the long-term index change for
March (1.1508). This deflation of the variety D price maintains the index level in March. The long-term relative for replacement variety D in April and May is calculated by dividing the prices by the
estimated base price (8.69) of variety D in January.
Calculation of Elementary Price Indices Using Weights
8.77 The Jevons, Dutot, and Carli indices are all calculated without the use of explicit weights. However, as already mentioned, in certain cases weighting information may be available that could be
exploited in the calculation of the elementary price indices. Weights within elementary aggregates may be updated independently and possibly more often than the elementary aggregate weights.
8.78 Sources of weights include scanner data for selected divisions, such as food and beverages. Because scanner data include quantities, the relative importance of sampled varieties can be
calculated. Weights can also be developed by outlet or outlet type within an elementary aggregate. For example, for bread and bakery products, scanner data could provide data to develop weights for
different grocery stores. In some countries, the household budget survey (HBS) includes a question asking respondents to identify the type of outlet where an expenditure on a particular item was
made. These data could be used to develop weights for the different outlet types identified. Another potential source of data for developing weights for outlets or outlet type would be estimates of
market shares obtained from business or trade groups and marketing firms. A special situation occurs in the case of tariff prices. A tariff is a list of prices for the purchase of a particular kind
of good or service under different terms and conditions. One example is electricity, where one price is charged during the daytime while a lower price is charged at night. Similarly, a telephone
company may charge a lower price for a call at the weekend than in the rest of the week. Another example may be bus tickets sold at one price to ordinary passengers and at lower prices to children or
pensioners. In such cases, one option, depending upon the availability of data, would be to assign weights to the different tariffs or prices in order to calculate the price index for the elementary
aggregate. Another option would be to calculate a unit value, as described in paragraph 8.85. However, changes in the tariff structure can be more difficult to capture. The treatment of tariffs is
further discussed in Chapter 11.
8.79 The increasing use of electronic points of sale in many countries, in which both prices and quantities are scanned as the purchases are made, means that valuable new sources of information are
increasingly available to NSOs. This could lead to significant changes in the ways in which price data are collected and processed for CPI purposes. The treatment of scanner data is examined in
Chapter 11.
8.80 If the weight reference period expenditure for all the individual varieties within an elementary aggregate, or estimates thereof, were to be available, the elementary price index could itself be
calculated as a fixed-basket price index, or as a geometric price index. The arithmetic index is calculated using a weighted arithmetic average of the price observations:
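With n denoting the number of varieties in the elementary aggregate (a symbol introduced here for convenience), a standard way of writing this is:
$$I_{EA}^{0:t} = \sum_{i=1}^{n} w_i^b \left(\frac{p_i^t}{p_i^0}\right), \qquad \sum_{i=1}^{n} w_i^b = 1$$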
where $I_{EA}^{0:t}$ is the elementary aggregate price index from period 0 to t, $p_i^0$ is the base price observed for variety i, $p_i^t$ is its current-period price, and $w_i^b$ is the weight for the variety in the weight reference period.
8.81 The geometric index is calculated using a weighted geometric average of the price observations:
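The corresponding weighted geometric form, using the same symbols, is:
$$I_{EA}^{0:t} = \prod_{i=1}^{n} \left(\frac{p_i^t}{p_i^0}\right)^{w_i^b}$$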
8.82 Table 8.9 provides an example of calculations of fixed-base elementary aggregate indices. The group consists of three varieties for which prices are collected monthly. The expenditure shares are
estimated to be 0.80, 0.17, and 0.03.
Table 8.9
Calculation of a Weighted Elementary Index
            Weight   December   January   February   Price Relative Dec.–Feb.
Variety A     0.80          7         7          9                     1.2857
Variety B     0.17         20        20         10                     0.5000
Variety C     0.03         28        28         12                     0.4286
Weighted Arithmetic Mean of Price Relatives Index
((9/7) × 0.8 + (10/20) × 0.17 + (12/28) × 0.03) × 100 = 112.64
Weighted Geometric Mean of Price Relatives
((9/7)^0.8 × (10/20)^0.17 × (12/28)^0.03) × 100 = 105.95
8.83 One option is to calculate the index as the weighted arithmetic mean of the price relatives, which gives an index of 112.64. The individual price changes are weighted according to their explicit
weights, irrespective of the price levels. The index may also be calculated as the weighted geometric mean of the price relatives, which yields an index of 105.95.
Other Formulas for Elementary Price Indices
8.84 Another type of average is the harmonic mean. In the present context, there are two possible versions: either the harmonic mean of price relatives or the ratio of harmonic mean prices. The
harmonic mean of price relatives is defined as
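In standard form, using the same symbols as for the other elementary indices, the (unweighted) harmonic mean of price relatives is:
$$I^{0:t} = \left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{p_i^t}{p_i^0}\right)^{-1}\right]^{-1}$$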
The ratio of harmonic mean prices is defined as
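That is, the harmonic mean of current-period prices divided by the harmonic mean of price reference period prices:
$$I^{0:t} = \frac{n\,/\sum_{i=1}^{n}(1/p_i^t)}{n\,/\sum_{i=1}^{n}(1/p_i^0)} = \frac{\sum_{i=1}^{n} 1/p_i^0}{\sum_{i=1}^{n} 1/p_i^t}$$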
Formula 8.11, like the Dutot index, fails the commensurability test and would only be an acceptable possibility when the varieties are all fairly homogeneous. Neither formula appears to be used much
in practice, perhaps because the harmonic mean is not a familiar concept and would not be easy to explain to users. Nevertheless, at an aggregate level, the widely used Paasche index is a weighted
harmonic average.
8.85 The ranking of the three common types of mean is always arithmetic ≥ geometric ≥ harmonic. It is shown in Chapter 6 of Consumer Price Index Theory that, in practice, the Carli index (the
arithmetic mean of the price ratios) is likely to exceed the Jevons index (the geometric mean) by roughly the same amount that the Jevons exceeds the harmonic mean. The harmonic mean of the price
relatives has the same kind of axiomatic properties as the Carli index, but with opposite tendencies and biases. It fails the transitivity, time reversal, and price bouncing tests.
8.86 In recent years, attention has focused on formulas that can take account of the substitution that may take place within an elementary aggregate. As already explained, the Carli and the Jevons
indices may be expected to approximate a COLI if the cross-elasticities of substitution are close to 0 and 1, respectively, on average. A more flexible formula that allows for different elasticities
of substitution is the unweighted Lloyd–Moulton index:
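A standard statement of the unweighted Lloyd–Moulton formula is:
$$I_{LM}^{0:t} = \left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{p_i^t}{p_i^0}\right)^{1-\sigma}\right]^{\frac{1}{1-\sigma}}, \qquad \sigma \neq 1$$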
where σ is the elasticity of substitution. The Carli and the Jevons indices can be viewed as special cases of the Lloyd–Moulton in which σ = 0 and σ = 1. The advantage of the Lloyd–Moulton formula is
that σ is unrestricted. Provided a satisfactory estimate can be made of σ, the resulting elementary price index is likely to approximate the underlying COLI. The Lloyd–Moulton index reduces
“substitution bias” when the objective is to estimate the COLI. The difficulty is the need to estimate elasticities of substitution, a task that will require substantial development and maintenance
work. The formula is described in more detail in Chapter 4 of Consumer Price Index Theory.
Unit Value Indices
8.87 The unit value index is simple in form. The unit value in each period is calculated by dividing total expenditure on some product by the related total quantity. The quantities must be strictly
additive in an economic sense, which implies that they should relate to a single homogeneous product. The unit value index is then defined as the ratio of unit values in the current period to that in
the reference period. It is not a price index as normally understood, as it is essentially a measure of the change in the average price of a single product when that product is sold at different
prices to different consumers, perhaps at different times within the same period. Unit values, and unit value indices, should not be calculated for sets of heterogeneous products. Unit value methods
are discussed in more detail in Chapters 10 and 11.
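For a single homogeneous product with transaction prices $p_i^t$ and quantities $q_i^t$ in period t, this can be written as:
$$UV^t = \frac{\sum_i p_i^t q_i^t}{\sum_i q_i^t}, \qquad I_{UV}^{0:t} = \frac{UV^t}{UV^0}$$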
Formulas Applicable to Scanner Data
8.88 As noted at the beginning of this chapter, it is preferable to introduce weighting information as it becomes available rather than continuing to rely on simple unweighted indices such as Carli
and Jevons. Advances in technology, both in the retail outlets and in the computing power available to NSOs, suggest that traditional elementary price indices may eventually be replaced by
superlative indices, at least for some elementary aggregates in some countries. A superlative index is a type of index formula that can be expected to approximate the COLI. An index is said to be
exact when it equals the true COLI for consumers whose preferences can be represented by a particular functional form. A superlative index is then defined as an index that is exact for a flexible
functional form that can provide a second-order approximation to other twice-differentiable functions around the same point. The Fisher, the Törnqvist, and the Walsh price indices are examples of
superlative indices. Superlative indices are generally symmetric indices. The methodology must be kept under review in the light of the resources available.
8.89 Scanner data obtained from electronic points of sale have become an increasingly important source of data for CPI compilation. Their main advantage is that the number of price observations can
be enormously increased and that both price and quantity information is available in real time. There are, however, many practical considerations to be taken into account, which are discussed in
other chapters of this Manual, particularly in Chapter 10. To date, scanner data has been used for selected components of the CPI, primarily for goods.
8.90 Access to the detailed and comprehensive quantity and expenditure information within an elementary aggregate means that there are no constraints on the type of index number that may be employed.
Not only Laspeyres and Paasche but superlative indices such as Fisher and Törnqvist can be calculated. However, the frequent weight and price changes that are prevalent in the scanner data cause
several problems with index estimation. Scanner data application and formulas are discussed in more detail in Chapter 10.
The Calculation of Higher-Level Indices
8.91 As shown in Figure 8.1, the elementary indices are the starting point (building blocks) for calculating the CPI. These indices are then aggregated to successively higher levels (for example,
city, region, class, or group), to derive the national all-items index. These higher-level indices are derived by aggregations using weights that are generally derived from an HBS, although other
sources are presented in Chapter 3. The aggregation formulas can take several forms such as arithmetic and geometric depending on the target index. Fixed-basket indices tend to use arithmetic
aggregations, while for the superlative indices, the Törnqvist index uses geometric aggregations, the Walsh uses an arithmetic aggregation, and the Fisher is the geometric average of two arithmetic
average price indices (Paasche and Laspeyres).
8.92 An NSO must decide on the target index at which to aim. The target index takes into consideration the observable price, quantity, and expenditure data that can be used to calculate the index in
practice. The advantages of having a target index are the following:
• Providing reference and guidance for compilation of the CPI
• Ability to quantify the size of any bias, the differences between what is actually measured and what should be measured
• Use in identifying and making improvements to the CPI
• Being able to document the sources and methods used in the CPI and how they approximate the target index compilation
8.93 NSOs must consider what kind of index they would choose to calculate in the ideal hypothetical situation in which they had complete information about prices and quantities in both time periods
compared. What is the purpose of the index? Should it measure both inflation and maintaining economic welfare? If so, the best CPI measure could be a COLI, and a superlative index such as a Fisher,
Törnqvist, or Walsh would have to serve as the theoretical target. A superlative index may be expected to approximate the underlying COLI.
8.94 In many countries the purpose of the CPI is to measure inflation and to adjust wages, income, or social payments for changes in a fixed basket of goods and services, as discussed in Chapter 2.
Thus, the concept of a basket index might be preferred, sometimes also referred to as a cost of goods index. A basket index is one that measures the change in the total value of a given basket of
goods and services between two time periods. This general category of index is described here as a Lowe index (see Chapter 2 of Consumer Price Index Theory). It should be noted that, in general,
there is no necessity for the basket to be the actual basket in one or other of the two periods compared. If the target index is to be a basket index, the preferred basket might be one that attaches
equal importance to the baskets in both periods; for example, the Walsh index. Thus, the same index may emerge as the preferred target in both the cost of goods and the cost of living approaches.
8.95 In Chapters 2–4 of Consumer Price Index Theory the superlative indices Fisher, Törnqvist, and Walsh show up as being “best” in all the approaches to index number theory. These three indices, and
the Marshall–Edgeworth price index, while not superlative, give very similar results so that for any practical reason it will not make any difference which one is chosen as the preferred target
index. In practice, an NSO may prefer to designate a basket index that uses the actual basket in the earlier of the two periods as its target index on grounds of simplicity and practicality. In other
words, the Laspeyres index may be the preferred target index. Similarly, if the quantities in both periods are available, the Walsh index, which is also a fixed-basket index, might be the target. The
target indices, whether Fisher, Törnqvist, or Walsh, can be calculated retrospectively as additional expenditure estimates become available. The retrospective indices can then be used to assess the
performance of the CPI and estimate the potential bias from the target index.
8.96 The theoretical target index is a matter of choice. In practice, it is likely to be either a Laspeyres or some superlative index. Even when the target index is the Laspeyres, there may be a
considerable gap between what is actually calculated and what the NSO considers to be its target. Chapters 2–4 of Consumer Price Index Theory present the alternatives from a theoretical point of
view. It is also shown that some combination of an arithmetic index such as the Laspeyres index and a geometric index such as geometric Laspeyres index may approximate the superlative Fisher and
Törnqvist indices.^8 Such an approach may be the ideal solution when both of these indices or their proxies can be produced in real time. What many NSOs tend to do in practice varies; many use the
Laspeyres index as their target, but a few have chosen the Fisher, Törnqvist, or Walsh as their targets.
Reference Periods
8.97 It is useful to recall that three kinds of reference periods may be distinguished:
• Weight reference period. The period covered by the expenditure data used to calculate the weights. Usually, the weight reference period is a period of 12 consecutive months.
• Price reference period. The period whose prices are used as denominators in the index calculation.
• Index reference period. The period for which the index is set to 100.
8.98 The three periods are generally different. For example, a CPI might have 2016 as the weight reference year, December 2018 as the price reference month, and the year 2015 as the index reference
period. The weights typically refer to a whole year, or even two or three years, whereas the periods for which prices are compared are typically months, quarters, or a year. The weights are usually
estimated on the basis of an expenditure survey that was conducted some time before the price reference period. For these reasons, the weight reference period and the price reference period are
invariably different periods in practice. The price reference period should immediately precede the introduction of the updated index. For example, if January 2019 is the first month for the updated
CPI, the price reference period would be December 2018 or the year 2018, depending on whether December or the full year is used as the price reference period.
8.99 The index reference period is often a year, but it could be a month or some other period. An index series may also be re-referenced to another period by simply dividing the series by the value
of the index in that period, without changing the rate of change of the index. For the CPI, the expression “base period” can mean any of the three reference periods and is ambiguous. The expression
“base period” should only be used when it is absolutely clear in context exactly which period is referred to.
Higher-Level Price Indices as Weighted Averages of Elementary Price Indices
8.100 A higher-level index is an index for some expenditure aggregate above the level of an elementary aggregate, including the overall CPI. The inputs into the calculation of the higher-level
indices are the following:
• The elementary aggregate price indices
• The expenditure shares of the elementary aggregates
8.101 The higher-level indices are calculated simply as weighted averages of the elementary price indices. The weights typically remain fixed for a sequence of at least 12 months. Some countries
revise the weights at the beginning of each year to try to approximate as closely as possible to current consumption patterns, but many countries continue to use the same weights for several years
and the weights may be changed only every five years or so. The use of fixed weights has a considerable practical advantage that the index can make repeated use of the same weights. This saves both
time and resources. Revising the weights can be both time-consuming and costly, especially if it requires a new HBS to be carried out. The disadvantage is that as the weights become older, they are
less representative of consumer purchasing patterns and usually result in substitution bias in the index.
Examples of Fixed-Basket Price Indices
8.102 When describing their calculation methods, some NSOs note that the Laspeyres price index is used for the calculation of higher-level aggregate indices; however, it is not possible to calculate
a Laspeyres index in practice. The Laspeyres price index is defined as
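In standard notation, with $q_i^0$ the quantities of the price reference period, the Laspeyres index can be written as:
$$I_{L}^{0:t} = \frac{\sum_i p_i^t q_i^0}{\sum_i p_i^0 q_i^0} = \sum_i w_i^0\left(\frac{p_i^t}{p_i^0}\right), \qquad w_i^0 = \frac{p_i^0 q_i^0}{\sum_j p_j^0 q_j^0}$$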
where $wi0$ indicates the expenditure shares for the individual varieties in the reference period. As the quantities are often unknown, the index usually will have to be calculated by weighting
together the individual price relatives by their expenditure share in the price reference period, $wi0$. The expenditure shares are usually derived by consumption expenditure estimates from an HBS.
The available weighting data may refer to an earlier period than the price reference period but may still provide a reasonable estimate.
8.103 While NSOs often refer to the Laspeyres formula for compiling their CPI, this is not the case. In fact, the most commonly used formulas for compiling the CPI are either the Lowe or Young
formulas.^9 If the weights are derived from expenditure in period 0, the price reference period, as in equation 8.11, the index is a Laspeyres price index. If the weights from an early weight
reference period b (that is, b < 0) are used directly in the index as expenditure shares in period 0, the index is known as a Young index. If the weights are updated for price change from b to 0,
which keeps the quantity shares fixed, the index is called a Lowe index. This is similar to the attribution given to the better-known Laspeyres index, where b = 0, and the Paasche index, where period
t weights are used in a harmonic mean formula. Whether the Lowe or Young index should be used depends on how much price change occurs between the weight and price reference period as well as the
target index. This is discussed in more detail in Chapter 9.
8.104 As noted, a more frequent version of equation 8.11 would be that of a Young or a Lowe index, where the weights are derived from an earlier period, b < 0. This is often the case because it may
take a year or longer to compile the expenditure weights from the HBS before they are available for use in the CPI. The Young index is
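Using the period b expenditure shares directly, a standard way of writing the Young index is:
$$I_{Y}^{0:t} = \sum_i w_i^b\left(\frac{p_i^t}{p_i^0}\right), \qquad w_i^b = \frac{p_i^b q_i^b}{\sum_j p_j^b q_j^b}$$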
8.105 The Young index is general in the sense that the shares are not restricted to refer to any particular period but may refer to any period or an average of different periods, for example. The
Young index is a fixed-weight index where the focus is that the weights should be as representative as possible for the average value shares of the period covered by the index. A fixed-weight index
is not necessarily a fixed-basket index (that is, it does not necessarily measure the change in the value of an actual basket such as the Lowe index). The Young index measures the development in the
cost of a period 0 set of purchases with period b value proportions between the expenditure components. This does not correspond to the changing value of any actual basket, unless the expenditure
shares have remained unchanged from b to 0. In the special case where b equals 0, it reduces to the Laspeyres.
8.106 In the case of the Lowe index the weights from period b are updated for price change between b and 0. The Lowe index is
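With the period b quantities valued at price reference period prices (that is, the price-updated, or hybrid, shares $w_i^{b:0}$), a standard formulation is:
$$I_{Lo}^{0:t} = \frac{\sum_i p_i^t q_i^b}{\sum_i p_i^0 q_i^b} = \sum_i w_i^{b:0}\left(\frac{p_i^t}{p_i^0}\right), \qquad w_i^{b:0} = \frac{p_i^0 q_i^b}{\sum_j p_j^0 q_j^b}$$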
8.107 By price updating, the weights are aligned to the same reference period as the prices. If the NSO decides to price update the weights, the resulting index will be a Lowe index. The Lowe index
is a fixed-basket index, which from period to period measures the value of the same (annual) basket of goods and services. Because it uses the fixed basket of an earlier period, the Lowe index is
sometimes loosely described as a “Laspeyres-type” index, but this description is unwarranted. A true Laspeyres index requires the basket to be that purchased in the price reference month, whereas in
most CPIs the basket refers to a period different from the price reference month. When the weights are annual and the prices are monthly, it is not possible, even retrospectively, to calculate a
monthly Laspeyres price index.
Typical Calculation Methods for Higher-Level Indices
8.108 The most common method for calculating higher-level indices in the CPI is not done using individual prices or quantities. Instead, a higher-level index is calculated by averaging the elementary
price indices by their predetermined weights. Using weights instead of quantities, equation (8.11) can be expressed as the following formula for aggregating price indices:
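In a standard formulation this reads:
$$I^{0:t} = \sum_j w_j^b \, I_j^{0:t}, \qquad \sum_j w_j^b = 1$$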
where $IL0:t$ denotes the all-items CPI, or any higher-level index, from period 0 to t, and $wjb$ is the weight attached to each of the elementary price indices where the weights sum to 1. $Ij0:t$ is
the corresponding elementary price index. The elementary indices are identified by the subscript j.
8.109 The weight reference period (b) usually will refer to a year, or an average of several years, that precedes the price reference period (0). If the weights are used as they stand, without
price-updating, the resulting index will correspond to a Young index. If the weights are price-updated from period b to period 0, the resulting index will correspond to a Lowe price index.
8.110 Provided the elementary aggregate indices are calculated using a transitive formula such as Jevons or Dutot, but not Carli, and that there are no new or disappearing varieties from period 0 to
t, equation 8.16 is equivalent to
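A standard way of writing this chained (short-term) formulation is:
$$I^{0:t} = \sum_j w_j^b \, I_j^{0:t-1} \, I_j^{t-1:t}$$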
8.111 The difference is that equation 8.16 is based on the direct elementary indices from 0 to t, while (8.17) uses the chained elementary indices. $Ijt−1:t$ is the short-term price index for the
elementary aggregate between t - 1 and t. A CPI calculated according to (8.17) in this manual is referred to as a modified Lowe index if the weights are price-updated from the weight reference period
to the price reference period. If the weights are used as they stand, the index is referred to as a modified Young index.
8.112 The recommended procedure is to use the short-term price index formulation, instead of basing the aggregation on long-term elementary price indices compiled in a single stage.
8.113 There are two ways that the modified index can be compiled. First, chaining the monthly elementary indices into long-term price indices and compiling the higher-level indices by aggregating the
elementary indices using the expenditure shares as weights. Alternatively, the modified index can be compiled by each month multiplying the expenditure weights with the elementary indices to form the
long-term weighted relatives up to period t – 1. These can then be multiplied with the elementary price indices from t – 1 to t and the resulting series are aggregated into higher-level price
indices. The two methods give identical results and it is up to countries to decide which to apply.
Some Alternatives to Fixed-Weight Indices
8.114 Monthly CPIs are typically arithmetic weighted averages of the price indices for the elementary aggregates, in which the weights are kept fixed over a number of periods that may range from 12
months to many years (but no more than five). The repeated use of the same weights relating to some past period b simplifies calculation procedures and reduces data collection requirements. It is
also cheaper to keep using the results from an old HBS than to conduct an expensive new one. Moreover, when the weights are known in advance of the price collection, the index can be calculated
immediately after the prices have been collected and processed.
8.115 The longer the same weights are used, however, the less representative they become of current consumption patterns, especially in periods of rapid technological changes when new kinds of goods
and services are continually appearing on the market and old ones disappearing. This may undermine the credibility of an index that purports to measure the rate of change in the total cost of a
basket of goods and services typically consumed by households. Such a basket needs to be representative not only of the households covered by the index but also of expenditure patterns at the time
the price changes occur.
8.116 Similarly, if the objective is to compile a COLI, the continuing use of the same fixed basket is likely to become increasingly unsatisfactory the longer the same basket is used. The longer the
same basket is used, the greater the upward bias in the index is likely to become. It is well known that the Laspeyres index has an upward bias compared with a COLI. However, a Lowe index between
periods 0 and t with weights from an earlier period b will tend to exceed the Laspeyres between 0 and t by an amount that increases the further back in time period b is (see Chapter 2 of Consumer
Price Index Theory).
8.117 There are several possible ways of minimizing or avoiding the potential biases from the use of fixed-weight indices. These are outlined in the following sections.
Annual Chaining
8.118 One way in which to minimize the potential biases from the use of fixed-weight indices is obviously to keep the weights and the index reference period as up to date as possible by frequent
rebasing and chaining. Quite a few countries have adopted this strategy and revise their weights annually. In any case, as noted earlier, it would be impossible to deal with the changing universe of
products without some chaining of the price series within the elementary aggregates, even if the weights attached to the elementary aggregates remain fixed. Annual chaining eliminates the need to
choose a weight reference period, as the weight reference period is always the previous year (t - 1), or possibly the preceding year (t - 2).
8.119 Annual chaining with current weights. When the weights are changed annually, it is possible to replace the original weights based on the previous year, or years, by those of the current year,
if the index is revised retrospectively as soon as information on the current year’s expenditures becomes available. The long-term movements in the CPI are then based on the revised series. This
method could provide unbiased or less-biased results.
Other Index Formulas
8.120 When the weights are revised less frequently, say every five years, another possibility would be to use a different index formula for the higher-level indices instead of an arithmetic average
of the elementary price indices. An alternative method for aggregating elementary indices would be geometric aggregation. Geometric aggregation is similar to arithmetic aggregation but involves
weighting each elementary index by the exponent of its share weight.
8.121 The geometric version of the Laspeyres index is defined as:
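In the notation above, with $w_i^0$ the price reference period expenditure shares, a standard statement is:
$$I_{GL}^{0:t} = \prod_i \left(\frac{p_i^t}{p_i^0}\right)^{w_i^0}$$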
where the weights, $wi0$, are again the expenditure shares in the price reference period. When the weights are all equal, equation (8.18) reduces to the Jevons index. If the expenditure shares do not
change much between the weight reference period and the current period, then the geometric Laspeyres index approximates a Törnqvist index. Using equation (8.18) the Geometric Young index, $IGY0:t$,
can be derived using the weights $wjb$, and the Geometric Lowe index, $IGLo0:t$, can be derived using the weights $wjb:0$.
8.122 The geometric version of the Young index is defined as:
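That is, with the period b shares used directly as exponents:
$$I_{GY}^{0:t} = \prod_i \left(\frac{p_i^t}{p_i^0}\right)^{w_i^b}$$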
8.123 The geometric version of the Lowe index is defined as:
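That is, with the price-updated shares $w_i^{b:0}$ as exponents:
$$I_{GLo}^{0:t} = \prod_i \left(\frac{p_i^t}{p_i^0}\right)^{w_i^{b:0}}$$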
8.124 Another form of aggregation that yields the same result as equation (8.18) is to convert the elementary indices to natural logarithms and use linear weighting of the logarithms. In this case,
the result of the aggregation must be converted from natural logarithm to a real number (the antilog or exponential function). This formula is most suited for compilation purposes using spreadsheets
or other similar software.
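In symbols, taking the geometric Laspeyres as an example:
$$\ln I_{GL}^{0:t} = \sum_i w_i^0 \ln\left(\frac{p_i^t}{p_i^0}\right), \qquad I_{GL}^{0:t} = \exp\left[\sum_i w_i^0 \ln\left(\frac{p_i^t}{p_i^0}\right)\right]$$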
Again, note that if the weight reference period refers to period b, the index is a geometric Young index; if the reference period is 0, the index is a geometric Laspeyres index, and if the reference
period is the average of periods 0 and t, it is a Törnqvist index. Recent empirical research discussed in Chapter 2 of Consumer Price Index Theory has indicated that taking a geometric average of an
upward biased fixed-weight arithmetic index and a downward biased fixed-weight geometric index may closely approximate the Fisher index. The reason for this close fit is that the possible upward bias
in the arithmetic index is offset by a possible downward bias in the geometric index.
Arithmetic versus Geometric Aggregation for Higher-Level Indices
8.125 Given that both arithmetic and geometric aggregation can be used for compiling higher-level indices in the CPI, the question arises as to which is the most appropriate. An index using
arithmetic aggregation wi | {"url":"https://www.elibrary.imf.org/view/book/9781484354841/ch08.xml","timestamp":"2024-11-07T04:04:16Z","content_type":"text/html","content_length":"1048981","record_id":"<urn:uuid:f225b5d2-4f57-4791-9231-558a4849fb48>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00085.warc.gz"} |
Empty String vs Empty Set
A set is a collection of objects. It can be visualized as a container holding some elements. If the container happens to be empty it is equivalent to having an empty set. Strings are defined over an
alphabet as a finite sequence of symbols. A string of length zero is equivalent to an empty string.
Empty String
The empty string is a string of zero length. If its length is zero there should be a special way to express it. In programming, the following syntax defines an empty string
empty = ''
However, this terminology is not used while defining strings over an alphabet. A cleaner approach is to define the empty string using the symbol ε (epsilon).
An easy trick to clear this concept is to read
The last term is sometimes abbreviated as
Empty Set
The empty set is a collection of zero elements. It is represented by the symbol ∅.
Taking the union of a set with the empty set results in the set itself (A ∪ ∅ = A), because the empty set does not add any new elements to the result.
However, concatenating a set with the empty set results in the empty set (A∅ = ∅), because one half of every string in the result would have to come from an element of the empty set, of which there are none.
Kleene Star
The star operator gives the set of all strings obtained by concatenating zero or more occurrences of the operand. Zero occurrences of anything is just the empty string. Thus, ∅* = {ε}, and likewise ε* = ε, since starring the empty string yields nothing but the empty string.
2 Comments
1. Matteo
Thank you so so much, dear TheBeard.
I really needed to have confirm for the equivalence between epsilon and epsilon star.
Unfortunately for Informatics Fundamentals there are no topics in internet, you are the only one who posted something about it. Thank you.
Reply ↓ | {"url":"http://thebeardsage.com/empty-string-vs-empty-set/","timestamp":"2024-11-02T09:28:36Z","content_type":"text/html","content_length":"52353","record_id":"<urn:uuid:3c86000f-7e2f-4f88-92fd-aa6d1608a6ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00301.warc.gz"} |
Table of Contents
1. Introduction
2. Installation
3. Additional Resources
4. References
1. Introduction
“Exploratory data analysis is detective work” [Tukey, 1977, p.2]. This package enables the user to use graphical tools to find ‘quantitative indications’ enabling a better understanding of the data
at hand. “As all detective stories remind us, many of the circumstances surrounding a crime are accidental or misleading. Equally, many of the indications to be discerned in bodies of data are
accidental or misleading [Tukey, 1977, p.3].” The solution is to compare many different graphical tools with the goal to find an agreement or to generate an hypothesis and then to confirm it with
statistical methods. This package serves as a starting point.
The DataVisualizations package offers various visualization methods and graphical tools for data analysis, including:
- Synoptic visualizations of data: Synoptic visualization methods such as Pixelmatrices.
- Distribution analysis and visualization: Visual distribution analysis for one- or higher-dimensional data, including MD Plots and PDE (Pareto Density Estimation).
- Spatial visualizations: Spatial visualizations such as choropleth maps.
- Visual analysis of Clusters, Correlation, Distances and Projections: Visual analysis of clusters such as Silhouette plots, or visual projection analysis with the Shepard diagrams.
- Other visualizations: For example ABC-Barplots, Errorplots and more.
Examples of synoptic visualizations:
Get synoptic view of the data, with a pixelmatrix
The Pixelmatrix can be used as a shortcut in visualizing correlations between many variables
Pixelmatrix(cc,YNames = Header,XNames = Header,main = 'Spearman Coeffs')
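The objects cc and Header are assumed to exist already; one plausible way to build them from a numeric data frame (here a hypothetical placeholder Data) before the call above is:
# Spearman rank correlations between all pairs of variables (sketch, not the package's own example code)
cc     <- cor(Data, method = "spearman", use = "pairwise.complete.obs")
Header <- colnames(Data)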
Examples of distribution analysis:
InspectVariables provides a summary of the most important plots for one dimensional distribution analysis such as histogram, continuous data density estimation, QQ-Plot, and Boxplot:
The MD Plot can be used for visualizing the densities of several variables, the MD Plot combines the syntax of ggplot2 with the Pareto density estimation and additional functionality useful from the
Data Scientist’s point of view:
MDplot(Data)+ylim(0,6000)+ggtitle('Two Features with MTY Capped')
Create density scatter plots in 2D:
DensityScatter(ITS, MTY, xlab = 'ITS in EUR', ylab ='MTY in EUR', xlim = c(0,1200), ylim = c(0,15000), main='Pareto Density Estimation indicates Bimodality')
Examples of visual cluster analysis:
The heatmap of the distances, ordered by clusters allows to get a synoptic view over the intra- and intercluster distances. Examples and interpretations of Heatmaps and Silhouette plots are presented
in [Thrun 2018A, 2018B].
Heatmap(Lsun3D$Data,Lsun3D$Cls,method = 'euclidean')
Plot Silhuoette plot of clustering:
Silhouetteplot(Lsun3D$Data,Lsun3D$Cls,PlotIt = T)
InputDistances shows the most important plots of the distribution of distances of the data. The distance distribution in the input space can be bimodal, indicating a distinction between the inter-
versus intracluster distances. This can serve as an indication of distance-based cluster structures (see [Thrun, 2018A, 2018B]).
2. Installation
Installation using CRAN
Install automatically with all dependencies via
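For example, from an R session:
install.packages("DataVisualizations", dependencies = TRUE)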
Installation using Github
Please note that dependencies have to be installed manually.
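A typical call, assuming the development version is hosted in the author's GitHub repository (assumed here to be Mthrun/DataVisualizations), is:
# requires the remotes (or devtools) package
remotes::install_github("Mthrun/DataVisualizations")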
Installation using R Studio
Please note that dependencies have to be installed manually.
Tools -> Install Packages -> Repository (CRAN) -> DataVisualizations
Tutorial Examples
The tutorial with several examples can be found in the package vignette on CRAN.
4. References
[Thrun, 2018A] Thrun, M. C.: Projection Based Clustering through Self-Organization and Swarm Intelligence, doctoral dissertation 2017, Springer, Heidelberg, ISBN: 978-3-658-20539-3, https://doi.org/
10.1007/978-3-658-20540-9, 2018.
[Thrun, 2018B] Thrun, M. C.: Cluster Analysis of Per Capita Gross Domestic Products, Entrepreneurial Business and Economics Review (EBER), Vol. 7(1), pp. 217-231, https://doi.org/10.15678/
EBER.2019.070113, 2019.
[Thrun/Ultsch, 2018] Thrun, M. C., & Ultsch, A.: Effects of the payout system of income taxes to municipalities in Germany, in Papiez, M. & Smiech,, S. (eds.), Proc. 12th Professor Aleksander Zelias
International Conference on Modelling and Forecasting of Socio-Economic Phenomena, pp. 533-542, Cracow: Foundation of the Cracow University of Economics, Cracow, Poland, 2018.
[Thrun et al., 2020] Thrun, M. C., Gehlert, T. & Ultsch, A.: Analyzing the Fine Structure of Distributions, PLoS ONE, Vol. 15(10), pp. 1-66, DOI 10.1371/journal.pone.0238835, 2020.
[Tukey, 1977] Tukey, J. W.: Exploratory data analysis, United States Addison-Wesley Publishing Company, ISBN: 0-201-07616-0, 1977. | {"url":"https://archive.linux.duke.edu/cran/web/packages/DataVisualizations/readme/README.html","timestamp":"2024-11-03T10:41:41Z","content_type":"application/xhtml+xml","content_length":"12458","record_id":"<urn:uuid:d47c19bb-ca24-43e1-bde1-95df08f007f8>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00626.warc.gz"} |
Tidy Time Series Analysis, Part 3: The Rolling Correlation
[This article was first published on business-science.io - Articles, and kindly contributed to R-bloggers.]
In the third part in a series on Tidy Time Series Analysis, we’ll use the runCor function from TTR to investigate rolling (dynamic) correlations. We’ll again use tidyquant to investigate CRAN
downloads. This time we’ll also get some help from the corrr package to investigate correlations over specific timespans, and the cowplot package for multi-plot visualizations. We’ll end by reviewing
the changes in rolling correlations to show how to detect events and shifts in trend. If you like what you read, please follow us on social media to stay up on the latest Business Science news,
events and information! As always, we are interested in both expanding our network of data scientists and seeking new clients interested in applying data science to business and finance. If
interested, contact us.
If you haven’t checked out the previous two tidy time series posts, you may want to review them to get up to speed.
An example of the visualization we can create using the runCor function with tq_mutate_xy() in combination with the corrr and cowplot packages:
Libraries Needed
We’ll need to load four libraries today.
library(tidyquant) # Loads tidyverse, tidyquant, financial pkgs, xts/zoo
library(cranlogs) # For inspecting package downloads over time
library(corrr) # Tidy correlation tables and correlation plotting
library(cowplot) # Multiple plots with plot_grid()
CRAN tidyverse Downloads
We’ll be using the same “tidyverse” dataset as the last two posts. The script below gets the package downloads for the first half of 2017.
# tidyverse packages (see my laptop stickers from first post) ;)
pkgs <- c(
    "tidyr", "lubridate", "dplyr",
    "broom", "tidyquant", "ggplot2", "purrr",
    "stringr", "knitr"
)
# Get the downloads for the individual packages
tidyverse_downloads <- cran_downloads(
    packages = pkgs,
    from = "2017-01-01",
    to = "2017-06-30") %>%
    tibble::as_tibble() %>%
    group_by(package)
# Visualize the package downloads
tidyverse_downloads %>%
ggplot(aes(x = date, y = count, color = package)) +
# Data
geom_point(alpha = 0.5) +
facet_wrap(~ package, ncol = 3, scale = "free_y") +
# Aesthetics
labs(title = "tidyverse packages: Daily downloads", x = "",
subtitle = "2017-01-01 through 2017-06-30",
caption = "Downloads data courtesy of cranlogs package") +
scale_color_tq() +
theme_tq()
We’ll also investigate correlations to the “broader market” meaning the total CRAN dowloads over time. To do this, we need to get the total downloads using cran_downloads() and leaving the package
argument NULL, which is the default.
# Get data for total CRAN downloads
all_downloads <- cran_downloads(from = "2017-01-01", to = "2017-06-30") %>%
    tibble::as_tibble()
# Visualize the downloads
all_downloads %>%
ggplot(aes(x = date, y = count)) +
# Data
geom_point(alpha = 0.5, color = palette_light()[[1]], size = 2) +
# Aesthetics
labs(title = "Total CRAN Packages: Daily downloads", x = "",
subtitle = "2017-01-01 through 2017-06-30",
caption = "Downloads data courtesy of cranlogs package") +
scale_y_continuous(labels = scales::comma) +
theme_tq()
Rolling Correlations
Correlations in time series are very useful because if a relationship exists, you can actually model/predict/forecast using the correlation. However, there’s one issue: a correlation is NOT static!
It changes over time. Even the best models can be rendered useless during periods when correlation is low.
One of the most important calculations in time series analysis is the rolling correlation. Rolling correlations are simply applying a correlation between two time series (say sales of product x and
product y) as a rolling window calculation.
One major benefit of a rolling correlation is that we can visualize the change in correlation over time. The sample data (above) is charted (below). As shown, there’s a relatively high correlation
between Sales of Product X and Y until a big shift in December. The question is, “What happened in December?” Just being able to ask this question can be critical to an organization.
In addition to visualizations, the rolling correlation is great for a number of reasons. First, changes in correlation can signal events that have occurred causing two correlated time series to
deviate from each other. Second, when modeling, timespans of low correlation can help in determining whether or not to trust a forecast model. Third, you can detect shifts in trend as time series
become more or less correlated over time.
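(As a quick, language-agnostic aside that is not part of the original R analysis: the same rolling-correlation idea can be sketched in a few lines of Python/pandas. The data layout and column names below are assumptions for illustration only.)
import pandas as pd

def rolling_corr(df: pd.DataFrame, package: str, window: int = 30) -> pd.Series:
    # df is assumed to have one row per date, one download-count column per package,
    # and an "all_cran" column holding the total daily CRAN downloads
    return df[package].rolling(window).corr(df["all_cran"])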
Time Series Functions
The xts, zoo, and TTR packages have some great functions that enable working with time series. Today, we’ll take a look at the runCor() function from the TTR package. You can see which TTR functions
are integrated into tidyquant package below:
# "run" functions from TTR
tq_mutate_fun_options()$TTR %>%
    stringr::str_subset("^run")
## [1] "runCor" "runCov" "runMAD"
## [4] "runMax" "runMean" "runMedian"
## [7] "runMin" "runPercentRank" "runSD"
## [10] "runSum" "runVar"
Tidy Implementation of Time Series Functions
We’ll use the tq_mutate_xy() function to apply time series functions in a “tidy” way. Similar to tq_mutate() used in the last post, the tq_mutate_xy() function always adds columns to the existing
data frame (rather than returning a new data frame like tq_transmute()). It’s well suited for tasks that result in column-wise dimension changes (not row-wise such as periodicity changes, use
tq_transmute for those!).
Most running statistic functions only take one data argument, x. In these cases you can use tq_mutate(), which has an argument, select. See how runSD only takes x.
# If first arg is x (and no y) --> use tq_mutate()
args(runSD)
## function (x, n = 10, sample = TRUE, cumulative = FALSE)
## NULL
However, functions like runCor and runCov are set up to take in two data arguments, x and y. In these cases, use tq_mutate_xy(), which takes two arguments, x and y (as opposed to select from tq_mutate
()). This makes it well suited for functions that have the first two arguments being x and y. See how runCor has two arguments x and y.
# If first two arguments are x and y --> use tq_mutate_xy()
args(runCor)
## function (x, y, n = 10, use = "all.obs", sample = TRUE, cumulative = FALSE)
## NULL
Static Correlations
Before we jump into rolling correlations, let's examine the static correlations of our package downloads. This gives us an idea of how in sync the various packages are with each other over the entire time span.
We’ll use the correlate() and shave() functions from the corrr package to output a tidy correlation table. We’ll hone in on the last column “all_cran”, which measures the correlation between
individual packages and the broader market (i.e. total CRAN downloads).
# Correlation table
tidyverse_static_correlations <- tidyverse_downloads %>%
# Data wrangling
spread(key = package, value = count) %>%
left_join(all_downloads, by = "date") %>%
rename(all_cran = count) %>%
select(-date) %>%
# Correlation and formatting
correlate()
# Pretty printing
tidyverse_static_correlations %>%
shave(upper = F)
| rowname | broom | dplyr | ggplot2 | knitr | lubridate | purrr | stringr | tidyquant | tidyr | all_cran |
|---|---|---|---|---|---|---|---|---|---|---|
| broom | | 0.63 | 0.78 | 0.67 | 0.52 | 0.40 | 0.81 | 0.17 | 0.53 | 0.74 |
| dplyr | | | 0.73 | 0.71 | 0.59 | 0.58 | 0.71 | 0.14 | 0.64 | 0.76 |
| ggplot2 | | | | 0.91 | 0.82 | 0.67 | 0.91 | 0.20 | 0.82 | 0.94 |
| knitr | | | | | 0.72 | 0.74 | 0.88 | 0.21 | 0.89 | 0.92 |
| lubridate | | | | | | 0.79 | 0.72 | 0.29 | 0.73 | 0.82 |
| purrr | | | | | | | 0.66 | 0.35 | 0.82 | 0.80 |
| stringr | | | | | | | | 0.23 | 0.81 | 0.91 |
| tidyquant | | | | | | | | | 0.26 | 0.31 |
| tidyr | | | | | | | | | | 0.87 |
The correlation table is nice, but the outliers don't exactly jump out. For instance, it's difficult to see that tidyquant is low compared to the other packages within the "all_cran" column.
Fortunately, the corrr package has a nice visualization called a network_plot(). It helps to identify strength of correlation. Similar to a “kmeans” analysis, we are looking for association by
distance (or in this case by correlation). How well the packages correlate with each other is akin to how associated they are with each other. The network plot shows us exactly this association!
# Network plot
gg_all <- tidyverse_static_correlations %>%
network_plot(colours = c(palette_light()[[2]], "white", palette_light()[[4]]), legend = TRUE) +
title = "Correlations of tidyverse Package Downloads to Total CRAN Downloads",
subtitle = "Looking at January through June, tidyquant is a clear outlier"
) +
expand_limits(x = c(-0.75, 0.25), y = c(-0.4, 0.4)) +
theme_tq() +
theme(legend.position = "bottom")
We can see that tidyquant has a very low correlation to “all_cran” and the rest of the “tidyverse” packages. This would lead us to believe that tidyquant is trending abnormally with respect to the
rest, and thus is possibly not as associated as we think. Is this really the case?
Rolling Correlations
Let’s see what happens when we incorporate time using a rolling correlation. The script below uses the runCor function from the TTR package. We apply it using tq_mutate_xy(), which is useful for
applying functions such has runCor that have both an x and y input.
# Get rolling correlations
tidyverse_rolling_corr <- tidyverse_downloads %>%
# Data wrangling
left_join(all_downloads, by = "date") %>%
select(date, package, count.x, count.y) %>%
# Mutation
tq_mutate_xy(
x = count.x,
y = count.y,
mutate_fun = runCor,
# runCor args
n = 30,
use = "pairwise.complete.obs",
# tq_mutate args
col_rename = "rolling_corr"
# Join static correlations with rolling correlations
tidyverse_static_correlations <- tidyverse_static_correlations %>%
select(rowname, all_cran) %>%
rename(package = rowname)
tidyverse_rolling_corr <- tidyverse_rolling_corr %>%
left_join(tidyverse_static_correlations, by = "package") %>%
rename(static_corr = all_cran)
# Plot
tidyverse_rolling_corr %>%
ggplot(aes(x = date, color = package)) +
# Data
geom_line(aes(y = static_corr), color = "red") +
geom_point(aes(y = rolling_corr), alpha = 0.5) +
facet_wrap(~ package, ncol = 3, scales = "free_y") +
# Aesthetics
scale_color_tq() +
title = "tidyverse: 30-Day Rolling Download Correlations, Package vs Total CRAN",
subtitle = "Relationships are dynamic vs static correlation (red line)",
x = "", y = "Correlation"
) +
theme_tq()
The rolling correlation shows the dynamic nature of the relationship. If we just went by the static correlation over the full timespan (red line), we’d be misled about the dynamic nature of these
time series. Further, we can see that most packages are highly correlated with the broader market (total CRAN downloads) with the exception of various periods where the correlations dropped. The
drops could indicate events or changes in user behavior that resulted in shocks to the download patterns.
Focusing on the main outlier tidyquant, we can see that once April hit, tidyquant is trending closer to a 0.60 correlation, meaning that the 0.31 static relationship (red line) is likely too low going forward.
Last, we can redraw the network plot from April through June to investigate the shift in relationship. We can use the cowplot package to plot two ggplots (or corrr network plots) side-by-side.
# Redrawing Network Plot from April through June
gg_subset <- tidyverse_downloads %>%
# Filter by date >= April 1, 2017
filter(date >= ymd("2017-04-01")) %>%
# Data wrangling
spread(key = package, value = count) %>%
left_join(all_downloads, by = "date") %>%
rename(all_cran = count) %>%
select(-date) %>%
# Correlation and formatting
correlate() %>%
# Network Plot
network_plot(colours = c(palette_light()[[2]], "white", palette_light()[[4]]), legend = TRUE) +
title = "April through June (Last 3 Months)",
subtitle = "tidyquant correlation is increasing"
) +
expand_limits(x = c(-0.75, 0.25), y = c(-0.4, 0.4)) +
theme_tq() +
theme(legend.position = "bottom")
# Modify the January through June network plot (previous plot)
gg_all <- gg_all +
labs(
title = "January through June (Last 6 months)",
subtitle = "tidyquant is an outlier"
)
# Format cowplot
cow_net_plots <- plot_grid(gg_all, gg_subset, ncol = 2)
title <- ggdraw() +
draw_label(label = 'tidyquant is getting "tidy"-er',
fontface = 'bold', size = 18)
cow_out <- plot_grid(title, cow_net_plots, ncol=1, rel_heights=c(0.1, 1))
The tq_mutate_xy() function from tidyquant enables efficient and “tidy” application of TTR::runCor() and other functions with x and y arguments. The corrr package is useful for computing the
correlations and visualizing relationships, and it fits nicely into the "tidy" framework. The cowplot package helps with arranging multiple ggplots to create compelling stories. In this case, it
appears that tidyquant is becoming “tidy”-er, not to be confused with the package tidyr. 😉
About Business Science
We have a full suite of data science services to supercharge your organization's financial and business performance! For example, our experienced data scientists reduced a manufacturer's sales
forecasting error by 50%, which led to improved personnel planning, material purchasing and inventory management.
How do we do it? With team-based data science: Using our network of data science consultants with expertise in Marketing, Forecasting, Finance and more, we pull together the right team to get custom
projects done on time, within budget, and of the highest quality. Learn about our data science services or contact us!
We are growing! Let us know if you are interested in joining our network of data scientist consultants. If you have expertise in Marketing Analytics, Data Science for Business, Financial Analytics,
Forecasting or data science in general, we’d love to talk. Contact us!
Follow Business Science on Social Media | {"url":"https://www.r-bloggers.com/2017/07/tidy-time-series-analysis-part-3-the-rolling-correlation/","timestamp":"2024-11-09T23:56:34Z","content_type":"text/html","content_length":"131285","record_id":"<urn:uuid:d4eee0f6-5183-44c7-9926-2c87af0a2da8>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00055.warc.gz"} |
Plane Geometry
B.H. Sanborn & Company, 1916 - 278 pages
From inside the book
Results 1-5 of 14
Page 146 ... point is called a tangent . The point is called the point of contact . Tangent Secant Diameter Chord A straight line which intersects a circle at two points is called a secant . A line -
segment whose ends are on a circle is called a ...
Page 158 ... point of contact . Suggestions . Let AB be tangent at C and let OC be a radius . Take D any point of AB except C. Show that OD > OC by § 151 , ( 8 ) , etc. Write the proof in full .
EXERCISES 1. The perpendicular to a tangent at the ...
Page 160 ... point to a circle , the line joining the point to the center of the circle bisects the angle between the tangents ... points of contact of the tangents . 4. The instrument called a
center square is used for locating the cen- ters ...
Page 161 ... point of contact . 11. If a circle is inscribed in an equilateral triangle , the three sides are bisected at the points of contact . 12. In any circumscribed quadrilateral , the sum of
two opposite sides equals the sum of the other two ...
Page 163 ... point of contact of the circles and line is called the point of contact of the circles . TANGENT INTERNALLY TANGENT EXTERNALLY Two circles are said to be tangent internally when they lie
on the same side of the common tangent line ...
CHAPTER I
PROOF BY SUPERPOSITION 46
VI 94
VII 130
VIII 146
IX 190
AREAS OF POLYGONS 213
Popular passages
... they have an angle of one equal to an angle of the other and the including sides are proportional; (c) their sides are respectively proportional.
The straight line joining the middle points of two sides of a triangle is parallel to the third side and equal to half of it 46 INTERCEPTS BY PARALLEL LINES.
If two triangles have two sides of one equal respectively to two sides of the other, but the included angle of the first greater than the included angle of the second, then the third side of the
first is greater than the third side of the second.
Two triangles are congruent if two sides and the included angle of one are equal respectively to two sides and the included angle of the other. 2. Two triangles are congruent if two angles and the
included side of one are equal respectively to two angles and the included side of the other.
PERIPHERY of a circle is its entire bounding line ; or it is a curved line, all points of which are equally distant from a point within called the center.
There are three important theorems in geometry stating the conditions under which two triangles are congruent: 1. Two triangles are congruent if two sides and the included angle of one are equal
respectively to two sides and the included angle of the other.
S and S' denote the areas of two circles, R and R' their radii, and D and D' their diameters. Then S : S' = R^2 : R'^2 = D^2 : D'^2. That is, the areas of two circles are to each other as the squares of their radii, or as
the squares of their diameters.
In any triangle, the square of the side opposite an acute angle is equal to the sum of the squares of the other two sides, minus twice the product of one of these sides and the projection of the
other side upon it.
In any quadrilateral the sum of the squares of the four sides is equal to the sum of the squares of the diagonals, plus four times the square of the line joining the middle points of the diagonals.
In a right triangle, the side opposite the right angle is called the hypotenuse and is the longest side.
Bibliographic information | {"url":"https://books.google.com.jm/books?id=BPhCAAAAIAAJ&q=point+of+contact&dq=editions:UOM39015043534356&output=html_text&source=gbs_word_cloud_r&cad=5","timestamp":"2024-11-11T10:17:10Z","content_type":"text/html","content_length":"66081","record_id":"<urn:uuid:51c8d67a-20fb-4f2d-bed4-a102698e2f3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00586.warc.gz"} |
Recall from Functions that a function is an object that maps a tuple of arguments to a return value, or throws an exception if no appropriate value can be returned. It is very common for the same
conceptual function or operation to be implemented quite differently for different types of arguments: adding two integers is very different from adding two floating-point numbers, both of which are
distinct from adding an integer to a floating-point number. Despite their implementation differences, these operations all fall under the general concept of “addition”. Accordingly, in Julia, these
behaviors all belong to a single object: the + function.
To facilitate using many different implementations of the same concept smoothly, functions need not be defined all at once, but can rather be defined piecewise by providing specific behaviors for
certain combinations of argument types and counts. A definition of one possible behavior for a function is called a method. Thus far, we have presented only examples of functions defined with a
single method, applicable to all types of arguments. However, the signatures of method definitions can be annotated to indicate the types of arguments in addition to their number, and more than a
single method definition may be provided. When a function is applied to a particular tuple of arguments, the most specific method applicable to those arguments is applied. Thus, the overall behavior
of a function is a patchwork of the behaviors of its various method definitions. If the patchwork is well designed, even though the implementations of the methods may be quite different, the outward
behavior of the function will appear seamless and consistent.
The choice of which method to execute when a function is applied is called dispatch. Julia allows the dispatch process to choose which of a function’s methods to call based on the number of arguments
given, and on the types of all of the function’s arguments. This is different than traditional object-oriented languages, where dispatch occurs based only on the first argument, which often has a
special argument syntax, and is sometimes implied rather than explicitly written as an argument [1]. Using all of a function's arguments to choose which method should be invoked, rather than just the
first, is known as *multiple dispatch*. Multiple dispatch is particularly useful for mathematical code, where it makes little sense to artificially deem the operations to “belong” to one argument
more than any of the others: does the addition operation in x + y belong to x any more than it does to y? The implementation of a mathematical operator generally depends on the types of all of its
arguments. Even beyond mathematical operations, however, multiple dispatch ends up being a very powerful and convenient paradigm for structuring and organizing programs.
Defining Methods¶
Until now, we have, in our examples, defined only functions with a single method having unconstrained argument types. Such functions behave just like they would in traditional dynamically typed
languages. Nevertheless, we have used multiple dispatch and methods almost continually without being aware of it: all of Julia’s standard functions and operators, like the aforementioned + function,
have many methods defining their behavior over various possible combinations of argument type and count.
When defining a function, one can optionally constrain the types of parameters it is applicable to, using the :: type-assertion operator, introduced in the section on Composite Types:
f(x::Float64, y::Float64) = 2x + y
This function definition applies only to calls where x and y are both values of type Float64:
julia> f(2.0, 3.0)
7.0
Applying it to any other types of arguments will result in a “no method” error:
julia> f(2.0, 3)
no method f(Float64,Int64)
julia> f(float32(2.0), 3.0)
no method f(Float32,Float64)
julia> f(2.0, "3.0")
no method f(Float64,ASCIIString)
julia> f("2.0", "3.0")
no method f(ASCIIString,ASCIIString)
As you can see, the arguments must be precisely of type Float64. Other numeric types, such as integers or 32-bit floating-point values, are not automatically converted to 64-bit floating-point, nor
are strings parsed as numbers. Because Float64 is a concrete type and concrete types cannot be subclassed in Julia, such a definition can only be applied to arguments that are exactly of type Float64
. It may often be useful, however, to write more general methods where the declared parameter types are abstract:
f(x::Number, y::Number) = 2x - y
julia> f(2.0, 3)
1.0
This method definition applies to any pair of arguments that are instances of Number. They need not be of the same type, so long as they are each numeric values. The problem of handling disparate
numeric types is delegated to the arithmetic operations in the expression 2x - y.
To define a function with multiple methods, one simply defines the function multiple times, with different numbers and types of arguments. The first method definition for a function creates the
function object, and subsequent method definitions add new methods to the existing function object. The most specific method definition matching the number and types of the arguments will be executed
when the function is applied. Thus, the two method definitions above, taken together, define the behavior for f over all pairs of instances of the abstract type Number — but with a different behavior
specific to pairs of Float64 values. If one of the arguments is a 64-bit float but the other one is not, then the f(Float64,Float64) method cannot be called and the more general f(Number,Number)
method must be used:
julia> f(2.0, 3.0)
7.0
julia> f(2, 3.0)
1.0
julia> f(2.0, 3)
1.0
julia> f(2, 3)
1
The 2x + y definition is only used in the first case, while the 2x - y definition is used in the others. No automatic casting or conversion of function arguments is ever performed: all conversion in
Julia is non-magical and completely explicit. Conversion and Promotion, however, shows how clever application of sufficiently advanced technology can be indistinguishable from magic. [2]
For non-numeric values, and for fewer or more than two arguments, the function f remains undefined, and applying it will still result in a “no method” error:
julia> f("foo", 3)
no method f(ASCIIString,Int64)
julia> f()
no method f()
You can easily see which methods exist for a function by entering the function object itself in an interactive session:
julia> f
Methods for generic function f
This output tells us that f is a function object with two methods: one taking two Float64 arguments and one taking arguments of type Number.
In the absence of a type declaration with ::, the type of a method parameter is Any by default, meaning that it is unconstrained since all values in Julia are instances of the abstract type Any.
Thus, we can define a catch-all method for f like so:
julia> f(x,y) = println("Whoa there, Nelly.")
julia> f("foo", 1)
Whoa there, Nelly.
This catch-all is less specific than any other possible method definition for a pair of parameter values, so it is only called on pairs of arguments to which no other method definition applies.
Although it seems a simple concept, multiple dispatch on the types of values is perhaps the single most powerful and central feature of the Julia language. Core operations typically have dozens of methods:
julia> +
Methods for generic function +
+(Real,Range{T<:Real}) at range.jl:136
+(Real,Range1{T<:Real}) at range.jl:137
+(Ranges{T<:Real},Real) at range.jl:138
+(Ranges{T<:Real},Ranges{T<:Real}) at range.jl:150
+(Bool,) at bool.jl:45
+(Bool,Bool) at bool.jl:48
+(Int64,Int64) at int.jl:224
+(Int128,Int128) at int.jl:226
+(Union(Array{Bool,N},SubArray{Bool,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)}),Union(Array{Bool,N},SubArray{Bool,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)})) at array.jl:902
+{T<:Signed}(T<:Signed,T<:Signed) at int.jl:207
+(Uint64,Uint64) at int.jl:225
+(Uint128,Uint128) at int.jl:227
+{T<:Unsigned}(T<:Unsigned,T<:Unsigned) at int.jl:211
+(Float32,Float32) at float.jl:113
+(Float64,Float64) at float.jl:114
+(Complex{T<:Real},Complex{T<:Real}) at complex.jl:207
+(Rational{T<:Integer},Rational{T<:Integer}) at rational.jl:101
+(Bool,Union(Array{Bool,N},SubArray{Bool,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)})) at array.jl:896
+(Union(Array{Bool,N},SubArray{Bool,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)}),Bool) at array.jl:899
+(Char,Char) at char.jl:46
+(Char,Int64) at char.jl:47
+(Int64,Char) at char.jl:48
+{T<:Number}(T<:Number,T<:Number) at promotion.jl:68
+(Number,Number) at promotion.jl:40
+() at operators.jl:30
+(Number,) at operators.jl:36
+(Any,Any,Any) at operators.jl:44
+(Any,Any,Any,Any) at operators.jl:45
+(Any,Any,Any,Any,Any) at operators.jl:46
+(Any,Any,Any,Any...) at operators.jl:48
+{T}(Ptr{T},Integer) at pointer.jl:52
+(Integer,Ptr{T}) at pointer.jl:54
+{T<:Number}(AbstractArray{T<:Number,N},) at abstractarray.jl:232
+{S,T}(Union(Array{S,N},SubArray{S,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)}),Union(Array{T,N},SubArray{T,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)})) at array.jl:850
+{T}(Number,Union(Array{T,N},SubArray{T,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)})) at array.jl:857
+{T}(Union(Array{T,N},SubArray{T,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)}),Number) at array.jl:864
+{S,T<:Real}(Union(Array{S,N},SubArray{S,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)}),Ranges{T<:Real}) at array.jl:872
+{S<:Real,T}(Ranges{S<:Real},Union(Array{T,N},SubArray{T,N,A<:Array{T,N},I<:(Union(Int64,Range1{Int64},Range{Int64})...,)})) at array.jl:881
+(BitArray{N},BitArray{N}) at bitarray.jl:922
+(BitArray{N},Number) at bitarray.jl:923
+(Number,BitArray{N}) at bitarray.jl:924
+(BitArray{N},AbstractArray{T,N}) at bitarray.jl:986
+(AbstractArray{T,N},BitArray{N}) at bitarray.jl:987
+{Tv,Ti}(SparseMatrixCSC{Tv,Ti},SparseMatrixCSC{Tv,Ti}) at sparse.jl:536
+(SparseMatrixCSC{Tv,Ti<:Integer},Union(Array{T,N},Number)) at sparse.jl:626
+(Union(Array{T,N},Number),SparseMatrixCSC{Tv,Ti<:Integer}) at sparse.jl:627
Multiple dispatch together with the flexible parametric type system give Julia its ability to abstractly express high-level algorithms decoupled from implementation details, yet generate efficient,
specialized code to handle each case at run time.
Method Ambiguities¶
It is possible to define a set of function methods such that there is no unique most specific method applicable to some combinations of arguments:
julia> g(x::Float64, y) = 2x + y
julia> g(x, y::Float64) = x + 2y
Warning: New definition g(Any,Float64) is ambiguous with g(Float64,Any).
Make sure g(Float64,Float64) is defined first.
julia> g(2.0, 3)
julia> g(2, 3.0)
julia> g(2.0, 3.0)
Here the call g(2.0, 3.0) could be handled by either the g(Float64, Any) or the g(Any, Float64) method, and neither is more specific than the other. In such cases, Julia warns you about this
ambiguity, but allows you to proceed, arbitrarily picking a method. You should avoid method ambiguities by specifying an appropriate method for the intersection case:
julia> g(x::Float64, y::Float64) = 2x + 2y
julia> g(x::Float64, y) = 2x + y
julia> g(x, y::Float64) = x + 2y
julia> g(2.0, 3)
7.0
julia> g(2, 3.0)
8.0
julia> g(2.0, 3.0)
10.0
To suppress Julia’s warning, the disambiguating method must be defined first, since otherwise the ambiguity exists, if transiently, until the more specific method is defined.
Parametric Methods¶
Method definitions can optionally have type parameters immediately after the method name and before the parameter tuple:
same_type{T}(x::T, y::T) = true
same_type(x,y) = false
The first method applies whenever both arguments are of the same concrete type, regardless of what type that is, while the second method acts as a catch-all, covering all other cases. Thus, overall,
this defines a boolean function that checks whether its two arguments are of the same type:
julia> same_type(1, 2)
true
julia> same_type(1, 2.0)
false
julia> same_type(1.0, 2.0)
true
julia> same_type("foo", 2.0)
false
julia> same_type("foo", "bar")
true
julia> same_type(int32(1), int64(2))
false
This kind of definition of function behavior by dispatch is quite common — idiomatic, even — in Julia. Method type parameters are not restricted to being used as the types of parameters: they can be
used anywhere a value would be in the signature of the function or body of the function. Here’s an example where the method type parameter T is used as the type parameter to the parametric type
Vector{T} in the method signature:
julia> myappend{T}(v::Vector{T}, x::T) = [v..., x]
julia> myappend([1,2,3],4)
4-element Int64 Array:
 1
 2
 3
 4
julia> myappend([1,2,3],2.5)
no method myappend(Array{Int64,1},Float64)
julia> myappend([1.0,2.0,3.0],4.0)
4-element Float64 Array:
 1.0
 2.0
 3.0
 4.0
julia> myappend([1.0,2.0,3.0],4)
no method myappend(Array{Float64,1},Int64)
As you can see, the type of the appended element must match the element type of the vector it is appended to, or a “no method” error is raised. In the following example, the method type parameter T
is used as the return value:
julia> mytypeof{T}(x::T) = T
julia> mytypeof(1)
Int64
julia> mytypeof(1.0)
Float64
Just as you can put subtype constraints on type parameters in type declarations (see Parametric Types), you can also constrain type parameters of methods:
same_type_numeric{T<:Number}(x::T, y::T) = true
same_type_numeric(x::Number, y::Number) = false
julia> same_type_numeric(1, 2)
true
julia> same_type_numeric(1, 2.0)
false
julia> same_type_numeric(1.0, 2.0)
true
julia> same_type_numeric("foo", 2.0)
no method same_type_numeric(ASCIIString,Float64)
julia> same_type_numeric("foo", "bar")
no method same_type_numeric(ASCIIString,ASCIIString)
julia> same_type_numeric(int32(1), int64(2))
false
The same_type_numeric function behaves much like the same_type function defined above, but is only defined for pairs of numbers.
Note on Optional and Named Arguments¶
As mentioned briefly in Functions, optional arguments are implemented as syntax for multiple method definitions. For example, this definition:
f(a=1, b=2) = a + 2b
translates to the following three methods:
f(a,b) = a+2b
f(a) = f(a,2)
f() = f(1,2)
Named arguments behave quite differently from ordinary positional arguments. In particular, they do not participate in method dispatch. Methods are dispatched based only on positional arguments, with
named arguments processed after the matching method is identified.
[2] Arthur C. Clarke, Profiles of the Future (1961): Clarke’s Third Law. | {"url":"https://julia-cn.readthedocs.io/pt-br/latest/manual/methods.html","timestamp":"2024-11-13T22:40:57Z","content_type":"text/html","content_length":"90399","record_id":"<urn:uuid:8d2a9656-04c7-4459-8fa0-3d3808ef4fbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00104.warc.gz"} |
brake power calculation calculation for Calculations
17 Feb 2024
Popularity: ⭐⭐⭐
Brake Power Calculation
This calculator provides the calculation of brake power for an engine.
Calculation Example: Brake power is the measure of the power produced by an engine at its output shaft. It is calculated using the formula: BP = (2 * π * N * T) / 60, where BP is the brake power in watts, N is the engine speed in revolutions per minute, and T is the torque produced by the engine in newton-meters.
Related Questions
Q: What is the difference between brake power and indicated power?
A: Brake power is the power output of an engine at its output shaft, while indicated power is the power developed inside the engine cylinders. Brake power is always less than indicated power due to
losses such as friction and heat.
Q: How does brake power affect the performance of a vehicle?
A: Brake power is a key factor in determining the performance of a vehicle. Higher brake power allows a vehicle to accelerate faster, climb hills more easily, and tow heavier loads.
Calculation Expression
Brake Power Formula: Brake power is calculated using the formula: BP = (2 * π * N * T) / 60
Calculated values
Considering these as variable values: P=100.0, T=500.0, N=2000.0, the calculated value(s) are given in table below
| Derived Variable | Value |
| Brake Power Formula | 52.35987755982988*N |
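As a quick sanity check (not part of the original calculator page), the formula is easy to evaluate in Python; the first call below reproduces the 52.35987755982988 coefficient shown in the table for T = 500 N·m:
import math

def brake_power_watts(n_rpm, torque_nm):
    # BP = 2*pi*N*T/60, with N in rev/min and T in N*m, gives power in watts
    return 2 * math.pi * n_rpm * torque_nm / 60

print(brake_power_watts(1, 500))      # ~52.36 W per rpm, the coefficient in the table above
print(brake_power_watts(2000, 500))   # ~104719.8 W, i.e. about 104.7 kW at N = 2000 rpm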
Similar Calculators
Calculator Apps
Matching 3D parts for brake power calculation calculation for Calculations
App in action
The video below shows the app in action. | {"url":"https://blog.truegeometry.com/calculators/brake_power_calculation_calculation_for_Calculations.html","timestamp":"2024-11-04T01:02:13Z","content_type":"text/html","content_length":"25028","record_id":"<urn:uuid:0c3db5de-3858-4d2b-af64-1d0daaa1bcd7>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00257.warc.gz"} |
ANSYS 12 - Beam
Under Construction
The following ANSYS tutorial is under construction.
Problem Specification
Consider the beam in the figure below. There are two point forces acting on the beam in the negative y direction as shown. Note the dimensions of the beam. The Young's modulus of the material is 73
GPa and the Poisson ratio is 0.3. We'll assume that plane stress conditions apply.
Step 1: Pre-Analysis & Start-Up
Start ANSYS Workbench
We start our simulation by first starting the ANSYS workbench.
Start > All Programs > ANSYS 12.1 > Workbench
Following figure shows the workbench window.
At the left hand side of the workbench window, you will see a toolbox full of various analysis systems. In the middle, you see an empty work space. This is the place where you will organize your
project. At the bottom of the window, you see messages from ANSYS.
Select Analysis Systems
Since our problem involves static analysis, we will select the Static Structural (ANSYS) component on the left panel.
Left click (and hold) on Static Structural (ANSYS), and drag the icon to the empty space in the Project Schematic.
Since we selected Static Structural (ANSYS), each cell of the system corresponds to a step in the process of performing the ANSYS Structural analysis. Right click on Static Structural ANSYS and
Rename the project to Beam.
Now, we just need to work out each step from top down to get to the results for our solution.
• We start by preparing our geometry
• We use geometry to generate a mesh
• We setup the physics of the problem
• We run the problem in the solver to generate a solution
• Finally, we post process the solution to gain insight into the results
Specify Material Properties
We will first specify the material properties of the beam. From the problem specification, the material has a Young's modulus E = 73 GPa and a Poisson's ratio ν = 0.3.
In the Beam cell, double click on Engineering Data. This will bring you to a new page. The default material given is Structural Steel. We will use this material and change the Young's modulus and Poisson's ratio.
Left click on Structural Steel once and you will see the details of Structural Steel material properties under Properties of Outline Row 3: Structural Steel. Expand Isotropic Elasticity, change Young's Modulus and Poisson's Ratio to E = 7.3e10 Pa and ν = 0.3. Remember to check that you use the correct unit.
Press the Return to Project to return to Workbench Project Schematic window.
Step 2: Geometry
At Workbench, in the Beam cell, right click on Geometry, and select Properties. You will see the properties menu on the right of the Workbench window. Under Basic Geometry Options, select Line Bodies
. This is because we are going to create a line geometry.
In the Project Schematic, double left click on Geometry to start preparing the geometry.
At this point, a new window, ANSYS Design Modeler will be opened. You will be asked to select desired length unit. Use the default meter unit and click OK.
Creating a Sketch
Like any other common CAD modeling practice, we start by creating a sketch.
Start by creating a sketch on the XYPlane. Under Tree Outline, select XYPlane, then click on Sketching next to Modeling tab. This will bring up the Sketching Toolboxes.
Note: In sketching mode, there is Undo features that you can use if you make any mistake.
On the right, there is a Graphic window. At the lower right hand corner of the Graphic window, click on the +Z axis to have a normal look of the XY Plane.
In the Sketching Toolboxes, select Line. In the Graphics window, create three rough lines starting from the origin in the positive XY direction. (Make sure that you see a letter P at the origin and at each connection between the lines. The letter P indicates that the geometry is constrained at that point.)
You should have something like this:
Note: You do not have to worry about dimension for now, we can dimension them properly in the later step.
Under Sketching Toolboxes, select Dimensions tab, use the default dimensioning tools. Dimension the geometry as shown:
Under Details View on the lower left corner, input the value for dimension appropriately.
H1: 0.1 m
H2: 0.2 m
H3: 0.1 m
We are done with sketching.
Create Line Body
Now that we have the sketch done, we can create a line body for this sketch.
Concept > Lines From Sketches
This will create a new line, Line1. Under Details View, select Sketch1 as Base Objects and click Apply. Finally, click Generate to generate the line body. This is what you should see under your Tree Outline.
Create Cross Section
We will now add a cross section to the line body.
Concept > Cross Section > Rectangular
Under Details View, input value as follow:
B - 0.05m
H - 1m
Finally, expand the Line Body:
Outline > 1 Part, 1 Body > Line Body
And attach Rect1 to Cross Section under Details View.
We are done with geometry. You can close the Design Modeler and go back to Workbench (Don't worry, it will auto save).
Step 3: Mesh
Save your work in Workbench window. In the Workbench window, right click on Mesh, and click Edit. A new ANSYS Mesher window will open.
Use the default mesh. Under Outline, right click on Mesh and click Generate Mesh. This is the mesh that should appear in the Graphics window.
Step 4: Setup (Physics)
We need to specify point BC's at A, B, C and D.
Let's start with setting up boundary condition at A.
Outline > Static Structural (A5) > Insert > Remote Displacement
Select point A in the Graphics window and click Apply next to Geometry under Details of "Remote Displacement". Enter 0 for UX, UY, UZ, ROTX and ROTY. Let ROTZ be free.
Let's move on to setting up boundary condition B.
Outline > Static Structural (A5) > Insert > Remote Displacement
Select point B in the Graphics window and click Apply next to Geometry under Details of "Displacement 2". Enter 0 for UY, UZ, ROTX and ROTY. Let UX and ROTZ be free.
We can move on to setting up point force at point C and D.
Outline > Static Structural (A5) > Insert > Force
Select point C in the Graphics window and click Apply next to Geometry under Details of "Force". Next to Define By, change Vector to Components. Enter -4000 for Y Component.
Do the same for point D.
Check that you have all the boundary conditions set up. Click on Static Structural (A5) to view them in the Graphics window.
Step 5: Solution
Now that we have set up the boundary conditions, we can actually solve for a solution. Before we do that, let's take a minute to think about what post-processing we are interested in. We are interested in the deflection and bending stress on the beam. We would also like to look at the force and moment reactions at our supports A and B. Let's set up those post-processing parameters before we click the Solve button.
Let's start with inserting Total Deformation.
Outline > Solution (A6) > Insert > Total Deformation
Next, let's insert the Beam Tool, which will enable us to look at the stresses on the beam.
Outline > Solution (A6) > Insert > Beam Tool > Beam Tool
We would also like to look at the Force Reaction at point A and B.
Outline > Solution (A6) > Insert > Probe > Force Reaction
Select Remote Displacement (which is point A) next to Boundary Condition under Details of "Force Reaction".
Do the same step for Remote Displacement 2 (point B).
Next, we would like to check that the moments at points A and B are zero.
Outline > Solution (A6) > Insert > Probe > Moment Reaction
Select Remote Displacement (which is point A) next to Boundary Condition under Details of "Moment Reaction".
Do the same step for Remote Displacement 2 (point B).
We are done setting up all the results. Click Solve at the top menu to obtain a solution. Wait for a minute for the solution.
See and rate the complete Learning Module | {"url":"https://confluence.cornell.edu/display/SIMULATION/ANSYS+12+-+Beam","timestamp":"2024-11-05T06:16:37Z","content_type":"text/html","content_length":"101251","record_id":"<urn:uuid:07b8702c-642b-4e45-9c23-7997702a641a>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00341.warc.gz"} |
Carlsberg research grants to number theory and geometric analysis
17 December 2021
Research grants
Associate Professors Jasmin Matz and Niels Martin Møller both receive the Carlsberg Foundation Young Researcher Fellowships.
The Carlsberg Foundation this year awards record grants for Danish basic research. 292 of Denmark's most imaginative and visionary researchers are receiving a research grant - a total of DKK 456
million is being awarded.
The burgeoning associate professor level has also been given priority. Around DKK 209 million has been awarded to 45 young newly appointed associate professors, who are each receiving a Young
Researcher Fellowship.
The purpose of this instrument is to promote the opportunities for researchers to make their way by establishing their own independent research groups. The two MATH researchers will employ a postdoc
and a PhD student for each project.
Focus on automorphic forms
Associate Professor Jasmin Matz has worked in the Algebra & Geometry Section at the Department of Mathematical Sciences since October 2019. She was previously a senior lecturer at the Hebrew University of
Jerusalem. Her research project is called “Density and Approximation in Automorphic Families”.
“We want to combine methods from various areas of mathematics to push beyond the current state of knowledge”, Jasmin states.
“Our goal with this project is to uncover fundamental results at the intersection of various mathematical areas, all centered around number theory”, Jasmin explains.
Number theory is one of the oldest branches of mathematics, rooted in our quest to understand the integers and related objects. The area has developed immensely over time, influenced and influencing
other areas of not only mathematics, but also physics, computer science, and engineering.
“The focus of my project lies in the area of automorphic forms and their relation to spectral theory which has been a particularly fruitful and exciting area of number theory, and lies right at the
intersection of not only various mathematical fields but also mathematical physics”, says Jasmin.
“With research in pure mathematics it is often hard to tell whether practical applications lie ahead in the future. Historically, research in number theory has been quite fruitful in this regard, for
example, without research in number theory, we would not be able to communicate safely online, and currently there are indications that our research might have some applications in the area of
quantum computing, for example.
“It is thus of huge importance to continue research in such abstract areas even without concrete applications in mind as those might emerge at a later and often unexpected point”.
Read more about Jasmin Matz’s project on the Carlsberg Foundation’s homepage.
Soap bubbles, chalk and blackboards
Niels Martin Møller joined the department in 2016 as an assistant professor and was appointed associate professor in September 2020. He too is a member of the Algebra & Geometry Section. He has
played a pivotal role in the foundation of the Copenhagen Centre for Geometry and Topology.
Niels Martin Møller calls his project “Geometric Analysis of Optimal Shapes”. The following are extracts from his project application:
“Soap bubbles are useful even when they live only in our imagination, as theoretical tools to study all possible topological shapes of space - such as the very universe we inhabit. They also have
deep applications in defining the centre of mass of a physical universe.
“You might think that you could do this with just your school physics formulas. However, in modern physics, the world is not so easy to cut into pieces and add back up. Ever since Einstein, we are
dealing with what mathematically speaking is called a nonlinear problem with boundary conditions”, explains Niels Martin.
New geometric descriptions of spaces and shapes
“Geometry is quite literally everything we see around us. However, there's also lots of hidden geometry in the shape of spaces, which may not at first glance seems like spaces at all - such as sets
of coordinates in a data set, carved out by constraint equations. The by far most successful models of anything in our observable physical world are also cooked up in terms of differential geometry
and concepts of energy, which lead to (partial) differential equations.
“This is in a precise sense witnessed by physical laws ranging from Galilei and Newton over Maxwell's electromagnetic theory of light to Einstein's general relativity, quantum field theory and The
Standard Model of particle physics. Even the famous Higgs boson was discovered theoretically first, via advanced tools in geometry and differential equations known as principal fibre bundles and
gauge theory.
“This is ultimately why we are interested in developing such new geometric descriptions of spaces and optimal shapes. To study their structures. Understanding our world much more in-depth, which down
the long road will allow us to interact in this world in brand new ways. For great, as of yet unknown, benefits to humankind and society,” Niels Martin promises.
Read more about Niels Martin Møller’s project on the Carlsberg Foundation’s homepage.
Project details
Density and Approximation in Automorphic Families
Project period:
01-09-2022 - 31-08-2025
DKK 3.4 million from the Carlsberg Foundation Young Researcher Fellowships
Jasmin Matz
Associate Professor
Geometric Analysis of Optimal Shapes
Project period:
2021 -
DKK 3.3 million from the Carlsberg Foundation Young Researcher Fellowships
Niels Martin Møller
Associate Professor | {"url":"https://www.math.ku.dk/english/about/news/carlsberg-research-grants-to-number-theory-and-geometric-analysis/","timestamp":"2024-11-03T03:26:42Z","content_type":"text/html","content_length":"43851","record_id":"<urn:uuid:7db346c0-7c76-4ef3-944d-94b9adabadf7>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00381.warc.gz"} |
The future earnings, dividends, and common stock price of Callahan Technologies Inc. are expected to grow 5% per year. Callahan's common stock currently sells for $21.75 per share; its last dividend
was $1.80; and it will pay a $1.89 dividend at the end of the current year.
Using the DCF approach, what is its cost of common equity? Round your answer to two decimal places. Do not round your intermediate calculations.
If the firm's beta is 1.20, the risk-free rate is 6%, and the average return on the market is 12%, what will be the firm's cost of common equity using the CAPM approach? Round your answer to two
decimal places.
If the firm's bonds earn a return of 9%, based on the bond-yield-plus-risk-premium approach, what will be rs? Use the midpoint of the risk premium range discussed in Section 10-5 in your
calculations. Round your answer to two decimal places.
If you have equal confidence in the inputs used for the three approaches, what is your estimate of Callahan's cost of common equity? Round your answer to two decimal places. Do not round your
intermediate calculations.
Answer a.
Cost of Common Equity = Expected Dividend / Current Price + Growth Rate
Cost of Common Equity = $1.89 / $21.75 + 0.05
Cost of Common Equity = 0.0869 + 0.0500
Cost of Common Equity = 0.1369 or 13.69%
Answer b.
Cost of Common Equity = Risk-free Rate + Beta * (Market Return - Risk-free Rate)
Cost of Common Equity = 6.00% + 1.20 * (12.00% - 6.00%)
Cost of Common Equity = 6.00% + 1.20 * 6.00%
Cost of Common Equity = 13.20%
Answer c.
Cost of Common Equity = Bond Yield + Risk Premium
Cost of Common Equity = 9.00% + 4.00%
Cost of Common Equity = 13.00%
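As an illustrative cross-check (not part of the original solution), the three component estimates above can be reproduced in a few lines of Python before they are averaged in part d:
d1, p0, g = 1.89, 21.75, 0.05           # expected dividend, current price, growth rate
rf, beta, rm = 0.06, 1.20, 0.12         # risk-free rate, beta, market return
bond_yield, risk_premium = 0.09, 0.04   # 4% midpoint risk premium, as used above

r_dcf = d1 / p0 + g                     # DCF approach
r_capm = rf + beta * (rm - rf)          # CAPM approach
r_byrp = bond_yield + risk_premium      # bond-yield-plus-risk-premium approach

print(round(r_dcf * 100, 2), round(r_capm * 100, 2), round(r_byrp * 100, 2))
# 13.69 13.2 13.0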
Answer d.
Estimated Cost of Common Equity = (13.69% + 13.20% + 13.00%) / 3
Estimated Cost of Common Equity = 39.89% / 3
Estimated Cost of Common Equity = 13.30% | {"url":"https://justaaa.com/finance/514465-the-future-earnings-dividends-and-common-stock","timestamp":"2024-11-05T23:07:12Z","content_type":"text/html","content_length":"43796","record_id":"<urn:uuid:5f1edfbd-2b4f-4d45-ade9-fdcd4e7e0e5f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00863.warc.gz"} |
Lessons from the Auditing Trenches: “What do ZK Developers get Wrong?”
Zero-knowledge security is not easy. Our findings from our last 100 security audits show that our ZK audits have a 2x higher chance of containing a critical issue compared to the rest of the audits
(mostly smart contracts).
Our CTO Kostas Ferles recently spoke at L2Con during EthCC, where he demonstrated examples of ZK bugs from our previous audits. He delivered a presentation titled:
Lessons from the Auditing Trenches: What do ZK Developers get Wrong?
Three ZK vulnerability examples
We have summarized the three ZK vulnerability examples Kostas covered in the presentation. Let’s go through the three bugs Kostas discussed:
Chapter I: The missing constraint bug
— Example 1. The business logic bug
— Example 2. The novice mathematician’s bug
Chapter II: ZK and DeFi are out of sync
—Example 3. When users decide their balance bug
Chapter I: The missing constraint bug
The missing constraint bug is one of the most typical bugs in ZK systems.
It means that a ZK developer forgets to introduce some constraint in the arithmetic circuits. An attacker can exploit this by creating a malicious proof, posting it on-chain, and stealing funds.
During our analysis of the last 100 audits, we found that missing constraint bugs (or “underconstrained circuits”) had the highest rate of being critical or high in severity across all bug types. 90%
of our missing constraint bugs across all ZK audits were critical or high in severity (!).
Let’s have a look at example of missing constraint bug.
Example 1: The business logic bug
Our first example is from an EVM privacy layer application built using Circom and Solidity. It uses UTXO-based infrastructure.
In the application, users own several private keys, and one of these keys is used to create nullifiers for UTXOs. A nullifier is a private value that, once revealed, invalidates the associated UTXO.
A nullifier is intended to be a function of a UTXO and a public nullifying key. Naturally, we only want to have one nullifier per UTXO. If multiple nullifiers can be created for the same UTXO,
double-spending becomes possible.
The bug in this specific application was that one of the circuits did not check that the input public nullifying key was the one corresponding to the user’s private key. Since this constraint was
missing, a malicious user could pass an arbitrary public key for a private key and create multiple nullifiers for the same UTXO, allowing the attacker to spend the same UTXO multiple times.
How can we prevent bugs like these?
We can avoid such bugs by writing negative tests. This means testing the “bad case.” We pass a random public key (along with a private key and UTXO) and expect the test to fail. If the test doesn’t
fail, we know a constraint is missing.
Example 2: The Novice Mathematician’s Bug
The next bug is from one of our audits where we audited a ZK verifier implemented in gnark. It is a bit trickier and involves some math.
At the core of this application was a library for arithmetic operations over the Goldilocks field. The bug was a subtle one: there was a missing constraint when calculating the inverse of a field
So, what is an inverse? It’s simple: the inverse of field element x is another field element y such that the following equation holds: x*y = 1. However, there is a small caveat. If x and y fall
outside of the Goldilocks range, equation x*y = 1 can have multiple satisfying assignments. This means that for the same input, you can prove it has multiple inverses, which shouldn’t be happening.
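To make the non-determinism concrete, here is a small Python illustration (our own simplified sketch, not the audited gnark code): if a circuit only enforces x*y ≡ 1 modulo the Goldilocks prime without range-checking the witness, then y and y + p are both acceptable "inverses" for the same x.
p = 2**64 - 2**32 + 1          # the Goldilocks prime
x = 2
y = pow(x, -1, p)              # the canonical inverse, with 0 <= y < p

assert (x * y) % p == 1        # the intended witness satisfies the constraint
assert (x * (y + p)) % p == 1  # ...but so does y + p, which lies outside the field range
# A range check (y < p) is what pins the inverse down to a single value.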
Because this library was central to the whole application, it could introduce multiple critical bugs since it was widely used.
How can we avoid these tricky bugs?
The answer is not as simple as in the previous example.
What you can do is document the assumptions of your modules. This way, if someone is building on top of your modules, they can see what necessary checks they need to perform.
“Pencil and paper” proofs can also help. You essentially prove a mathematical theorem that your constraints are correct.
One positive note is that part of these proofs can be automated. At Veridise, we have developed a verifier called Picus that can verify part of these proofs. It essentially ensures that your proofs
are deterministic, checking that for the same input, you cannot have two outputs that satisfy the constraints.
Chapter II: ZK and DeFi are out of Sync
Generally, you want your ZK and DeFi components to be in sync. However, they can sometimes fall out of sync by forgetting to validate states.
Off-chain components don’t have access to on-chain data. Whenever you need a circuit to operate with on-chain data, you must pass the current data to the circuit, obtain the output, generate the
proof, and post the overall proof on-chain. The on-chain component is then responsible for validating that the proof generated by the circuit is consistent with the on-chain state.
One way these out-of-sync errors can occur is if the on-chain component forgets to validate that the proof was generated using the current on-chain state.
Example 3: When Users Decide their Balance
In the third example we examine an application written in Circom and Solidity. This was a recommendation platform where users could interact privately.
Whenever users interacted with the system, they had to encrypt their balances, either on deposit or withdrawal. Users would encrypt their current balance, encrypt their new balance, and then post the
encrypted data on-chain.
The application had a smart contract with multiple routes for depositing and withdrawing. However, some of these routes were missing a validation check to ensure that the proof was generated with the
current balance.
For example, let’s assume a user has a balance of zero. The user informs the dApp of this zero balance and requests to deposit 1 token. The dApp acknowledges this, and the user’s balance is updated
to 1. This makes sense.
However, a user could also claim a fraudulent balance. For instance, if the user had a balance of zero, they could tell the app that they wanted to deposit 1 token on top of a current balance of e.g.
1000 tokens. The user would then receive confirmation from the dApp that their new balance was 1001 tokens, even though they initially had a zero balance.
Avoiding these bugs:
If DeFi systems don’t check all necessary conditions, things can go wrong. To avoid these bugs, we can rely on our good old friend: the negative test. However, in this case, we need to write a
negative test that exercises both the DeFi and ZK components (both on-chain and off-chain logic).
At Veridise, we have developed novel static analysis tools (like Vanguard) that thoroughly examine your on-chain logic and check if your contracts are missing any necessary checks.
In general, syncing DeFi and ZK components requires a global understanding of your system. These components can get out of sync in many ways, such as through proof replays (except for Aleo), arithmetic DoS, finite field overflows, and more.
Generally speaking, these types of bugs affect systems where on-chain and off-chain components are written in different languages. We've noticed that if your developers are out of sync, your system has a high risk of being out of sync as well.
Full presentation recording
Watch Kostas’ full presentation here:
You can download PDF slides of the presentation here.
Finally, it’s worth investing in security early on
If you’re developing zero-knowledge solutions, it’s worth investing in security early on. That means creating a realistic threat model, understanding what attacks are feasible in the framework you
are using, and making sure your design prevents them.
We recommend documenting important invariants and enforcing them system-wide, writing negative tests, and integrating automated security checks into your CI/CD.
Finally, we encourage you to partner with experienced security auditors who have demonstrated experience specifically in ZK audits.
Want to learn more about Veridise?
Twitter | Lens | LinkedIn | Github | Request Audit | {"url":"https://medium.com/veridise/lessons-from-the-auditing-trenches-what-do-zk-developers-get-wrong-c26199bb4b3f","timestamp":"2024-11-13T15:05:44Z","content_type":"text/html","content_length":"144920","record_id":"<urn:uuid:f0b2c7c4-55c7-4770-8dbb-fe2a8ce74af6>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00750.warc.gz"} |
Assignment 1: C++
Completion requirements
Opened: Thursday, 18 January 2024, 11:30 AM
Due: Thursday, 25 January 2024, 11:59 PM
Write a program to compute the optimal angle at which to kick a ball from the ground with a fixed speed such that it travels the farthest distance before hitting the ground.
Assume that there is no friction, so that the trajectory of the ball will be given by
\[ x(t) = v t \cos(\theta) \]
\[ y(t) = v t \sin(\theta) - gt^2/2\]
Here, x and y are the horizontal and vertical position of the ball, t is time, g is the gravitational acceleration, v is the initial speed of the ball, and \(\theta\) is the initial angle of the kick.
Your program should take the value of v as input from cin, find the angle for which the distance is maximal, and print out the value of the angle and the distance.
Although this problem can actually be solved exactly, your code should find the distance traversed by computing the trajectory of the ball at regular small time intervals.
In your code, use best practices such as using several functions with interface and implementation separated, good commenting and checking for errors.
In addition to your code, your submission should also contain a text file with a brief explanation of your implementation, how you compiled the code, and how you ran it.
The deadline for this assignment is January 25th, 2024. Late assignments are possible up to one week later, at a penalty of 5% of your mark per day late.
Note about using ChatGPT: Beyond using Large Language Models like ChatGPT as a better search engine, you should not rely on them to do your assignments. While LLMs might give a near-solution, their code would have problems that you would have to fix anyway. Furthermore, they tend to only generate a few different variants of the same solution, and that would get your submission flagged as being a
copy of another student's submission, which is not allowed. Also, future assignment will increasingly get more specialized, and ChatGPT's solutions would get even further from the correct answer. | {"url":"https://education.scinet.utoronto.ca/mod/assign/view.php?id=3112","timestamp":"2024-11-09T01:25:54Z","content_type":"text/html","content_length":"49518","record_id":"<urn:uuid:7ff854b9-cffd-47de-81f2-6ad07540bd6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00385.warc.gz"} |
How to Show Your Work in Math: A Step-by-Step Guide for Clear and Effective Problem Solving
When solving math problems, it is crucial to show your work to communicate your thought process clearly. Displaying your steps serves as a roadmap for others to understand how you arrived at your
solution and allows for easy verification. To demonstrate your work effectively, start by stating the problem or question explicitly. Then, break down the problem into smaller, manageable steps,
clearly showing the intermediate calculations and formulas used. Use proper mathematical notation and concise explanations to ensure clarity. Additionally, organize your work neatly and sequentially,
making it easier for others to follow. Providing a comprehensive solution not only helps others comprehend your methodology but also showcases your mathematical skills and logical reasoning.
Using Diagrams to Illustrate Mathematical Concepts
When it comes to understanding and explaining mathematical concepts, diagrams can be incredibly useful tools. They provide visual representations that make abstract ideas more concrete and easier to
grasp. Whether you're a student trying to learn a new concept or a teacher looking for effective teaching strategies, incorporating diagrams into your math work can greatly enhance understanding.
Here are some ways you can use diagrams to illustrate mathematical concepts:
• Visualizing Relationships: Diagrams are particularly helpful for visualizing the relationships between different elements in a mathematical problem or concept. For example, you can use a Venn
diagram to show the intersection and union of sets, or a flowchart to demonstrate the steps in a mathematical algorithm.
• Representing Geometric Shapes: In geometry, diagrams are essential for representing and understanding shapes. Whether you’re working with triangles, circles, or complex polygons, a diagram can
help you accurately visualize the dimensions and relationships of the shapes involved.
• Illustrating Proportions and Ratios: When dealing with proportions and ratios, diagrams can effectively illustrate the relative sizes and quantities involved. For example, a bar graph can be used
to visually represent the distribution of data and highlight comparisons between different categories.
• Showcasing Patterns and Trends: Diagrams can also be used to identify patterns and trends in mathematical data. A line graph, for instance, can effectively depict how a variable changes over time
or in relation to another variable, allowing you to easily identify upward or downward trends.
• Simplifying Complex Problems: Complex math problems often involve multiple steps and interconnected relationships. Diagrams can simplify these problems by breaking them down into smaller, more
manageable components. By representing each step and connection visually, you can better understand the problem and devise a clear solution strategy.
Overall, using diagrams to illustrate mathematical concepts can greatly enhance understanding, facilitate communication, and simplify complex problems. Whether you’re a student or a teacher,
incorporating visual representations in your math work can make the learning process more engaging and effective.
Presenting step-by-step solutions to math problems
When it comes to math problems, it is crucial to present your solutions in a clear and concise manner. This not only helps others understand your thought process but also allows you to check your
work for any errors. By following a step-by-step approach, you can break down complex problems into manageable parts. Here are some effective ways to present your step-by-step solutions:
1. Clearly state your problem and objective
• Begin by stating the problem you are trying to solve. Make sure to include all relevant details and any constraints that may be given.
• State your objective, which is the specific question or outcome you are working towards.
2. List all given information
• Identify and list all the information given in the problem, such as numbers, variables, formulas, or equations.
• Clearly label each piece of information to avoid confusion.
3. Break down the problem into steps
• Divide the problem into smaller, manageable steps that follow a logical order.
• For each step, clearly explain the mathematical operation or concept you are using. This helps readers or reviewers understand your thought process and ensures you don’t miss any essential steps.
4. Show your working and calculations
• Write out each calculation or equation, showing all the steps taken to arrive at the solution.
• Use proper mathematical notation and symbols to present your work neatly.
• If you use formulas or equations, clearly label them and explain the reasoning behind their use.
5. Include appropriate units and labels
• When presenting numerical answers, always include the appropriate units of measurement if applicable.
• Label any graphs, diagrams, or tables used to illustrate your work. Make sure they are clear, legible, and well-organized.
6. Check and verify your solution
• After completing your solution, review your work and ensure that all steps are correct and coherent.
• Double-check your calculations, units, and labels to avoid any mistakes.
By following these steps and presenting your work in a clear and organized manner, you can effectively communicate your math solutions to others and ensure accuracy in your own work.
3. Incorporating real-life examples to demonstrate mathematical principles
One powerful way to show your work in math is by incorporating real-life examples that demonstrate mathematical principles. By using examples from everyday life, you can make abstract concepts more
relatable and easier to understand.
• Example 1: Let’s say you’re teaching the concept of fractions to a group of students. Instead of just giving them generic fraction problems to solve, you could use a real-life scenario involving
food. For instance, you could ask the students to imagine that they are baking cookies and need to divide the dough into equal parts. By visually representing the dough and explaining how
fractions can be used to divide it, you make the concept more tangible and engaging.
• Example 2: Another way to incorporate real-life examples is by using sports. For instance, if you’re teaching probability, you could use the example of a basketball player shooting free throws.
You can explain how probability can be used to calculate the likelihood of the player making or missing a shot. By using a relatable scenario like sports, students can see the practical
application of mathematical concepts and understand their relevance in real life.
• Example 3: Finance and budgeting can also be great areas to incorporate real-life examples. For instance, when teaching about interest rates and loans, you can use the example of buying a car or
a house. By showing students how interest rates can affect the total cost of a loan, you help them understand the importance of making informed financial decisions. Real-life examples like these
not only make math more interesting but also help students develop practical skills for managing their finances.
By incorporating real-life examples into your math lessons, you can bridge the gap between abstract concepts and practical applications. This approach not only enhances student engagement but also
strengthens their understanding of mathematical principles.
Utilizing technology to visually display mathematical equations and formulas
Number 4: Utilizing Graphing Calculators
Graphing calculators are powerful tools that can visually represent mathematical equations and formulas. They are commonly used in schools and colleges to enhance the learning experience and provide
a better understanding of mathematical concepts.
Here are some ways graphing calculators can be used to show your work in math:
• Graphing Equations: One of the main features of a graphing calculator is its ability to graph equations. By inputting an equation, the calculator can plot it on a coordinate plane, allowing you
to visualize the relationship between variables. This can be particularly helpful when dealing with functions, as it allows you to see how the graph changes with different inputs.
• Finding Intersections: Graphing calculators can also help find the intersections of two or more equations. This is useful for solving systems of equations, where you need to find the values of
variables that satisfy multiple equations simultaneously. The calculator can display these intersection points, making it easier to solve such problems.
• Calculating Derivatives and Integrals: Another useful feature of graphing calculators is their ability to calculate derivatives and integrals of functions. This can be extremely helpful when
studying calculus, as it allows you to quickly find the rate of change or the area under a curve.
• Storing Equations: Many graphing calculators allow you to store equations for later use. This means that you can save commonly used equations or functions and retrieve them easily whenever you
need them. This saves you time and reduces the chances of making errors during calculations.
• Analyzing Data: In addition to graphing equations, graphing calculators can also help analyze data. You can input a set of data points and the calculator can generate a scatter plot, allowing you
to visualize the relationship between the variables. This can be used to identify trends, patterns, or correlations in the data.
Overall, graphing calculators are powerful tools for visually displaying mathematical equations and formulas. They not only make it easier to understand and analyze mathematical concepts, but also
provide valuable assistance in solving complex problems. By utilizing graphing calculators, you can show your work in math in a visually appealing and comprehensive manner.
Explaining the reasoning and logic behind each mathematical step
Mathematical reasoning and logic are essential when it comes to showing your work in math. By explaining the reasoning and logic behind each mathematical step, you not only demonstrate your
understanding of the concepts, but also make your work more transparent and accessible to others. Let's take a closer look at how you can effectively explain the reasoning and logic in your math work.
Number 5
For this example, let’s say we have the following problem: 5 + 3 = ?
• Step 1: Start by recognizing the problem at hand: adding two numbers, 5 and 3.
• Step 2: Next, let’s write down the problem and the given numbers: 5 + 3 = ?
• Step 3: Now, we can apply the logic behind addition. The “+” symbol represents combining or adding two quantities together. In this case, we have 5 and we want to add 3 to it.
• Step 4: To find the sum, we count upward from 5, three times because we want to add 3. We can write down the counting process as follows:
Count Number
1 6 (5 + 1)
2 7 (6 + 1)
3 8 (7 + 1)
As you can see, we count from 5 to 8 by adding 1 each time, because that's what the problem requires us to do. This step-by-step process showcases the reasoning behind the addition and how we arrive
at the answer.
In conclusion, by explaining the reasoning and logic behind each mathematical step, we make our work more understandable and transparent. This approach not only helps others follow our thought
process, but it also reinforces our own understanding of the concept. So, next time you tackle a math problem, remember to show your work and explain the reasoning behind each step!
Number 6: Clear and Concise Explanations of Mathematical Symbols and Notation Used
In mathematics, symbols and notation are essential tools for communicating ideas and concepts effectively. To ensure clarity in your work, it is important to provide clear and concise explanations of
the mathematical symbols and notation you use. Here are a few key points to consider:
• Define symbols and notation: Begin by introducing any symbols or notation that may be unfamiliar to your readers. Clearly explain what each symbol represents and how it is used in your work. This
helps to prevent confusion or misinterpretation.
• Use consistent notation: Consistency is crucial in mathematics. Make sure to use the same symbols and notation throughout your work for the same concepts or variables. This helps to avoid
ambiguity and makes it easier for readers to follow along.
• Provide explanations within the text: When using symbols or notation, include brief explanations within the text to ensure clarity. For example, if you are using the symbol for “greater than” (>)
in an equation, explicitly state that it represents a comparison of values.
• Add a glossary or key: If your work involves a significant amount of symbols and notation, consider including a glossary or key at the beginning or end of your article. This provides readers with
a quick reference guide to the symbols and notation used throughout.
By following these guidelines, you can ensure that your work in math is not only accurate, but also easily understandable for your audience. Remember, the goal is to provide clear and concise
explanations of the mathematical symbols and notation you use, so that your readers can follow your reasoning and grasp the concepts being presented.
Showing Your Work in Math: Organizing and Structuring Mathematical Work to Enhance Readability and Comprehension
Number 7: Utilize Clear and Concise Explanations
A crucial aspect of showing your work in math is ensuring that your explanations are clear and concise. Mathematics inherently involves complex concepts and procedures, so it’s essential to present
them in a way that is understandable to your audience. Here are some tips to help you achieve clarity and conciseness:
• Avoid unnecessary jargon: While mathematical terminology is important, try to strike a balance by using clear and familiar language whenever possible. Introduce new terms gradually, ensuring you
define them clearly to avoid confusion.
• Break it down step-by-step: When presenting a solution or a proof, break it down into smaller, manageable steps. This allows your readers to follow your thought process and understand the logic
behind each step. Use bullet points or numbered lists to clearly outline each step.
• Provide explanations for your calculations: Don’t just present the final answer or equation; explain how you arrived at it. This can involve showing the intermediate steps, discussing the
reasoning behind each calculation, or providing relevant formulas or theorems. The goal is to give your readers insight into your mathematical thinking.
• Use visual aids when appropriate: Visual aids, such as diagrams, graphs, or charts, can help clarify complex concepts or illustrate the connections between different elements of your work.
Visuals can often convey information more efficiently than words alone.
Frequently Asked Questions
What is the importance of showing your work in math?
Showcasing your work in math is crucial as it helps you understand the concepts better and allows others to follow your thought process. By showing your work, you can identify and correct any
mistakes made along the way, making it easier to learn from them.
How can I show my work in an organized manner?
To show your work neatly and coherently, you can use a step-by-step approach. Start by writing down the given information, identify the applicable formulas, and outline your calculations clearly.
Properly labeling each step and providing explanations whenever necessary will ensure clarity and ease of understanding.
Should I only show the final answer, or every step of the problem?
It is highly recommended to show every step of the problem-solving process. Simply providing the final answer might not demonstrate your understanding of the concept or allow others, such as teachers
or peers, to assess your approach. Showing each step helps to clarify your thought process and enables others to provide guidance if needed.
Can I use abbreviations or symbols in my work?
While using abbreviations or symbols can save time and space, it’s important to ensure clarity. Make sure that your abbreviations and symbols are well-known and commonly used in the math field.
Additionally, always provide an explanation or key for any non-standard abbreviations or symbols you use.
What if I make a mistake while showing my work?
Mistakes are a natural part of the learning process. If you make a mistake while showing your work, don’t worry! Correct the error and clearly indicate the corrected version. This way, you not only
demonstrate your ability to identify mistakes, but also emphasize the importance of accuracy in mathematics.
Thanks for Reading!
We hope these FAQs have helped you understand the importance of showing your work in math and provided useful tips. Remember, showing your work not only helps you learn and improve, but also enables
others to assist you. Practice consistently and never be afraid to ask for help. Keep up the good work, and we look forward to seeing you again soon! | {"url":"https://wallpaperkerenhd.com/faq/how-to-show-your-work-in-math/","timestamp":"2024-11-07T03:22:57Z","content_type":"text/html","content_length":"43635","record_id":"<urn:uuid:b73888f6-0dd6-4b45-a0fb-0366ac06b3bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00486.warc.gz"} |
Problem Solving Strategies
This lecture will focus on how to solve the problems introduced in the previous lecture. Today will be an opportunity to flex problem-solving muscles: a major reason for taking physics and a skill to
apply to any field of study. Ask questions and make as many mistakes as possible, as this is as low-stakes as it gets (compared to homework, exams, and your future job).
Flipping Physics covers 1D Kinematics with some examples (refers to the subject as Uniformly accelerating motion).
Pre-lecture Study Resources
Read the BoxSand Introduction and watch the pre-lecture videos before doing the pre-lecture homework or attending class. If you have time, or would like more preparation, please read the OpenStax
textbook and/or try the fundamental examples provided below.
BoxSand Introduction
1-D Kinematics | Problem Solving Strategies
This module does not have new material but rather is focused on application of problem solving in kinematics. If you have not checked out the Problem Solving Guide, located further down on this page,
I suggest you take a look. Here is the Checklist of things to consider when analyzing kinematics situations from the Problem Solving Guide.
1. Read and re-read the whole problem carefully.
2. Visualize the scenario. Mentally try to understand what the object is doing.
a. Motion diagrams are a great tool here for visual cues as to what the motion of an object looks like.
3. Draw a physical representation of the scenario; include initial and final velocity vectors, acceleration vectors, position vectors, and displacement vectors.
4. Define a coordinate system; place the origin on the physical representation where you want the zero location of the x and y components of position.
5. Identify and write down the knowns and unknowns.
6. Identify and write down any connecting pieces of information.
7. Determine which kinematic equation(s) will provide you with the proper ratio of equations to number of unknowns; you need at least the same number of unique equations as unknowns to be able to
solve for an unknown.
8. Carry out the algebraic process of solving the equation(s).
a. If simple, desired unknown can be directly solved for.
b. May have to solve for intermediate unknown to solve for desired known.
c. May have to solve multiple equations and multiple unknowns.
d. May have to refer to the geometry to create another equation.
e. If multiple objects or constant acceleration stages or dimensions, there is a set of kinematic equations for each. Something will connect them.
9. Evaluate your answer, make sure units are correct and the results are within reason.
Key Equations and Infographics
Kinematic Equation 1 (KEq1)
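For reference, the standard constant-acceleration kinematic equations are the following (the numbering is generic and may not match BoxSand's own KEq labels):

$v = v_0 + a t$

$\Delta x = v_0 t + \frac{1}{2} a t^2$

$v^2 = v_0^2 + 2 a \Delta x$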
BoxSand Videos
Required Videos
Note how every problem is set up the same way:
Suggested Supplemental Videos
OpenStax Reading
The reading below is the same as the previous lecture.
OpenStax section 2.5 covers Motion Equations for Constant Acceleration in 1-D.
OpenStax section 2.6 covers Problem-Solving basics for 1D Kinematics.
OpenStax section 2.7 covers Falling Objects.
Fundamental examples
1. A ball is set on the end of a table which is 6 meters long. The ball rolls to the other end of the table in 1 minute. What is the average velocity of the ball, in meters per second?
2. A ball is thrown straight down off a cliff with an initial downward velocity of $5 \frac{m}{s}$. It falls for 2 seconds, at which point the downward velocity is recorded as $24.6 \frac{m}{s}$. What is the ball's average acceleration ($\frac{m}{s^2}$) during that time?
3. An object moving in a straight line experiences a constant acceleration of $10 \frac{m}{s^2}$ in the same direction for 3 seconds when the velocity is recorded to be $45 \frac{m}{s}$. What was the
initial velocity of the object?
CLICK HERE for solutions
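Once you have attempted the three examples, a short script like the one below (not part of the official solutions) can be used to check the arithmetic:

```python
# 1. Average velocity: 6 m covered in 1 minute (60 s).
v_avg = 6.0 / 60.0
print(f"1) average velocity = {v_avg:.2f} m/s")          # 0.10 m/s

# 2. Average acceleration: downward speed goes from 5 m/s to 24.6 m/s in 2 s.
a_avg = (24.6 - 5.0) / 2.0
print(f"2) average acceleration = {a_avg:.2f} m/s^2")     # 9.80 m/s^2

# 3. Initial velocity from v = v0 + a*t, with a = 10 m/s^2, t = 3 s, v = 45 m/s.
v0 = 45.0 - 10.0 * 3.0
print(f"3) initial velocity = {v0:.2f} m/s")              # 15.00 m/s
```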
Post-Lecture Study Resources
Use the supplemental resources below to support your post-lecture study.
Practice Problems
BoxSand's Quantitative Practice Problems
BoxSand's Multiple Select Problems
Practice Problems: Multiple Select and Quantitative.
Recommended example practice problems
• Set 1: 8 Problem set with solutions following each question. Be sure to try to question before looking at the solution! Your test scores will thank you. Website
• Set 2: Problems 1 through 6. Website
For additional practice problems and worked examples, visit the link below. If you've found example problems that you've used, please help us out and submit them to the student contributed content page.
Additional Boxsand Study Resources
Additional BoxSand Study Resources
Learning Objectives
The objective is to have students solve story problems involving motion along a line that require the use of multiple representations. Particular focus is on mathematical representations.
Atomistic Goals
Students will be able to...
1. Identify that the motion all occurs along a line and can be treated with a 1-dimensional analysis.
2. Define free-fall and identify when a free-fall analysis is appropriate.
3. Denote that objects speeding up have an acceleration that points in the same direction as their velocity, while those slowing down have an acceleration that points in the opposite direction of
their velocity.
4. Translate a descriptive representation to an appropriate physical representation that includes a displacement vector, initial and final velocity vectors, an acceleration vector, and a coordinate system.
5. Draw an appropriate physical representation for a system that includes multiple stages or objects by including vector representations for each.
6. Identify known and unknown quantities for each stage or object.
7. Translate from the mathematical, physical, or descriptive representation to the graphical representation.
8. Translate to the mathematical representation with the help of the descriptive, physical, and graphical representations.
9. Identify the appropriate kinematic equation for constant acceleration to use when analyzing the problem.
10. Use one of the kinematic equations to find the value of an unknown, then use that value and another kinematic equation to solve for desired unknown.
11. Solve simultaneous equations when there are two or more equations with the same two unknowns.
12. Solve problems that involve multiple objects or multiple stages.
13. Apply any connections between stages or objects when appropriate, e.g. the geometric connection between two runners when they do not start at the same location.
14. Apply sign sense-making procedures to check their solutions.
15. Apply order of magnitude sense-making procedures to check their solutions.
16. Apply dimensional analysis sense-making procedures to check their solutions.
YouTube Videos
This Khan Academy video helps show how to pick which kinematic equation to use in different situations
Flipping Physics video reviewing kinematics with some problem solving strategies.
This interactive quiz from The Physics Classroom asks you to simply name that motion. This simulation should solidify the difference between average velocity and instantaneous velocity as well as the effect of acceleration on a moving object. *Note: the frame of the animation is a bit small when you first run it; you can click on the lower right corner and drag to make the animation frame larger.
For additional simulations on this subject, visit the simulations repository.
Here are some solved problems and walkthroughs
For additional demos involving this subject, visit the demo repository
Oh no, we haven't been able to write up a history overview for this topic. If you'd like to contribute, contact the director of BoxSand, KC Walsh (walshke@oregonstate.edu).
Physics Fun
Oh no, we haven't been able to post any fun stuff for this topic yet. If you have any fun physics videos or webpages for this topic, send them to the director of BoxSand, KC Walsh (walshke@oregonstate.edu).
Other Resources
This page from The Physics Classroom will help in determining the difference between average and instantaneous quantities, representing vector quantities, and calculated average quantities.
Another page from The Physics Classroom that helps with the same as the above, but this time focusing on Acceleration.
Hyper Physics is also another resource that will be on almost every page. Each page on a subject is made to be a quick little note with the relevant information, making Hyper Physics a great
reference page. This page for instance gives a concise reference for velocity, and average velocity.
Resource Repository
This link will take you to the repository of other content on this topic.
Problem Solving Guide
Use the Tips and Tricks below to support your post-lecture study.
Kinematics requires the assumption that acceleration is constant; if that isn't true, none of our equations or analysis techniques are necessarily valid for the motion anymore. This is often glossed over. However, as we move on to different types of analysis in the future and have multiple ways to solve problems (say, on your final exam), the thing to remember about when you can or can't use kinematics is that the acceleration MUST be constant throughout the entire motion if you wish to analyze it with kinematics.
1. Read and re-read the whole problem carefully.
2. Visualize the scenario. Mentally try to understand what the object is doing.
a. Motion diagrams are a great tool here for visual cues as to what the motion of an object looks like.
3. Draw a physical representation of the scenario; include initial and final velocity vectors, acceleration vectors, position vectors, and displacement vectors.
4. Define a coordinate system; place the origin on the physical representation where you want the zero location of the x and y components of position.
5. Identify and write down the knowns and unknowns.
6. Identify and write down any connecting pieces of information.
7. Determine which kinematic equation(s) will provide you with the proper ratio of equations to number of unknowns; you need at least the same number of unique equations as unknowns to be able to
solve for an unknown.
8. Carry out the algebraic process of solving the equation(s).
a. If simple, desired unknown can be directly solved for.
b. May have to solve for intermediate unknown to solve for desired known.
c. May have to solve multiple equations and multiple unknowns.
d. May have to refer to the geometry to create another equation.
e. If multiple objects or constant acceleration stages or dimensions, there is a set of kinematic equations for each. Something will connect them.
9. Evaluate your answer, make sure units are correct and the results are within reason.
Misconceptions & Mistakes
• When an object is thrown straight upwards near the surface of the earth, the velocity is zero at the maximum height that the object reaches, but the acceleration is not zero.
• An object's acceleration does not determine the direction of motion of the object.
• The final velocity of an object when dropped from some height above ground is not zero. The kinematic equations do not know that the ground is there, thus the final velocity is the velocity of
the object just before it actually hits the ground.
• Consider the sign of the velocity when deciding whether an object is speeding up or slowing down. If the acceleration is positive and the velocity is also positive, then the object is speeding up in the positive direction. Likewise, if the acceleration is negative and the velocity is also negative, the object is speeding up in the negative direction. On the contrary, if the velocity and acceleration have opposite signs (acceleration positive and velocity negative, or vice versa), the object is slowing down.
• A high magnitude of velocity does not imply anything about the acceleration. There can be a low acceleration magnitude or a high acceleration magnitude regardless of the magnitude of velocity.
• The displacement of an object does not depend on the location of your coordinate system.
Pro Tips
• Draw a physical representation of the scenario in the problem. Include initial and final velocity vectors, displacement vector, and acceleration vector.
• Place a coordinate system on the physical representation. Location of the origin does not matter, but some locations make the math easier than others (i.e. setting the origin at the initial or
final location sets the initial or final position to zero).
• Always identify and write down the knowns and unknowns.
• Be very strict with labeling the kinematic variables; include subscripts indicating which object they are associated with, the coordinate, information about what stage if a multi-stage problem,
and initial/final identifications. Doing this early on will help avoid the common mistake of misinterpreting the meanings of the variables when in the stage of algebraically solving the
kinematic equations.
Multiple Representations
Multiple Representations is the concept that a physical phenomena can be expressed in different ways.
Physical Representations describes the physical phenomena of the situation in a visual way.
1D motion diagrams are utilized to describe the speed of an object and whether or not the object has any acceleration.
Mathematical Representation uses equation(s) to describe and analyze the situation.
Watch Professor Matt Anderson explain 1D Kinematics equations using the mathematical representation.
Kinematics equations in 1D
Graphical Representation describes the situation through use of plots and graphs.
The Physics Classroom introduces the relationship between kinematics equations and their graphical representation. In addition, for a more in-depth discussion please refer to the graphical analysis section.
Descriptive Representation describes the physical phenomena with words and annotations.
Example: An object undergoing parabolic (y = x^2) positional motion will have a linear (y= mx+b) velocity curve, and an acceleration curve of a constant value (y = constant).
(this example illustrates all of the most complex cases of motion studied in Kinematics, which requires constant acceleration)
Experimental Representation examines a physical phenomena through observations and data measurement.
For example, a person throws a ball upward into the air with an initial velocity of $15.0 \frac{m}{s}$ . You can calculate how high it goes and how long the ball is in the air before it comes back to
your hand. Ignore air resistance. | {"url":"https://boxsand.physics.oregonstate.edu/1-d-kinematics-lecture-2-1-d-kinematics-problem-solving-strategies","timestamp":"2024-11-05T04:26:24Z","content_type":"text/html","content_length":"76841","record_id":"<urn:uuid:f963514f-314d-45b0-a21a-9ce6aa1a406c>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00190.warc.gz"} |
Embedding into free topological vector spaces on compact metrizable spaces
For a Tychonoff space X, let V(X) be the free topological vector space over X. Denote by I, G, Q and S^k the closed unit interval, the Cantor space, the Hilbert cube Q=I^N and the k-dimensional unit
sphere for k∈N, respectively. The main result is that V(R) can be embedded as a topological vector space in V(I). It is also shown that for a compact Hausdorff space K: (1) V(K) can be embedded in V
(G) if and only if K is zero-dimensional and metrizable; (2) V(K) can be embedded in V(Q) if and only if K is metrizable; (3) V(S^k) can be embedded in V(I^k); (4) V(K) can be embedded in V(I)
implies that K is finite-dimensional and metrizable.
• Cantor space
• Compact
• Embedding
• Finite-dimensional
• Free locally convex space
• Free topological vector space
• Hilbert cube
• Zero-dimensional
| {"url":"https://cris.bgu.ac.il/en/publications/embedding-into-free-topological-vector-spaces-on-compact-metrizab","timestamp":"2024-11-08T04:05:02Z","content_type":"text/html","content_length":"56347","record_id":"<urn:uuid:c914be0d-5af9-4164-9340-04e0b9d16e74>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00781.warc.gz"}
How to sum subtotals in Excel
You can watch a video tutorial here.
In Excel, it is possible to group rows to arrange the data into categories. Once rows are grouped, sub-totals can be added to calculate the sum of a particular column within each category. When that
has been done, you may want to sum up all the sub-totals to get the total value for the data. If you use the SUM() function on the entire column, the individual values and the sub-totals will be
added together.
Here we will see 2 ways to sum only the sub-totals, one using the SUM() function and the second using the SUBTOTAL() function.
1. SUM() function: this returns the sum of a range of numbers or individual numbers
1. Syntax: SUM(numbers)
1. numbers: this is a range of numbers or individual ranges/numbers separated by a comma
2. SUBTOTAL() function: this returns the subtotal for a column using a specified aggregation function
1. Syntax: SUBTOTAL(function_code, ranges)
1. function_code: this is a number that corresponds to an aggregation function e.g. 1 is AVERAGE, 2 is COUNT, 9 is SUM etc.
2. ranges: the range of the cells on which the function is to be applied
Option 1 – Sum the sub-totals individually
Step 1 – Create the formula
• Select the cell where the result is to be displayed
• Type the formula using cell references:
=SUM(Sub-total1, Sub-total2, Sub-total3)
Step 2 – Copy the formula
• Using the fill handle from the first cell, drag the formula to the remaining cells
1. Select the cell with the formula and press Ctrl+C or choose Copy from the context menu (right-click)
2. Select the rest of the cells in the row and press Ctrl+V or choose Paste from the context menu (right-click)
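For a concrete (hypothetical) layout, suppose the three sub-totals computed for your groups sit in cells C6, C11, and C16. The grand-total formula from Step 1 would then be:

=SUM(C6,C11,C16)

Adjust the cell references to wherever your own sub-total rows actually are.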
Option 2 – Use the SUBTOTAL() function
Step 1 – Create the formula for each sub-total
• Select the cell where the first sub-total is to be displayed
• Type the formula using cell references:
=SUBTOTAL(9,range of the first group)
• Press Enter
• Repeat the steps above for each sub-total
Step 2 – Copy the formula
• Using the fill handle from the first sub-total cell, drag the formula to the remaining cells
1. Select the cell with the formula and press Ctrl+C or choose Copy from the context menu (right-click)
2. Select the rest of the cells in the row and press Ctrl+V or choose Paste from the context menu (right-click)
• Repeat the above step for each row of sub-totals
Step 3 – Create the total
• Select the cell where the result is to be displayed
• Type the formula using cell references:
=SUM(range of the column)
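Tip: the SUBTOTAL() function ignores any cells in its range that were themselves calculated with SUBTOTAL(), so you can also point the grand total at the entire column of data, sub-total rows included, without double-counting. With a hypothetical layout where the data and sub-totals occupy C2:C16, that would be:

=SUBTOTAL(9,C2:C16)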
Step 4 – Copy the formula
• Using the fill handle from the first cell, drag the formula to the remaining cells
1. Select the cell with the formula and press Ctrl+C or choose Copy from the context menu (right-click)
2. Select the rest of the cells in the row and press Ctrl+V or choose Paste from the context menu (right-click) | {"url":"https://spreadcheaters.com/how-to-sum-subtotals-in-excel/","timestamp":"2024-11-14T17:31:14Z","content_type":"text/html","content_length":"60961","record_id":"<urn:uuid:01694405-c924-446c-b1b3-adafdbb49bb8>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00313.warc.gz"} |
An Actually Intuitive Explanation of the Oberth Effect
Like anyone with a passing interest in ~~Kerbal Space Program~~ physics and spaceflight, I eventually came across the Oberth Effect. It's a very important effect, crucial to designing efficient
trajectories for any rocket ship. And yet, I couldn't understand it.
Wikipedia's explanation focuses on how kinetic energy is proportional to the square of the speed, and therefore more energy is gained from a change in speed at a higher speed. I'm sure this is true,
but it's not particularly helpful; simply memorizing formulae is not what leads to understanding of a phenomenon. You have to know what the numbers mean, how they correspond to the actual atoms
moving around in the real universe.
This explanation was particularly galling as it seemed to violate relativity; how could a rocket's behavior change depending on its speed? What does that even mean; its speed relative to what?
Whether a rocket is traveling at 1 m/s or 10000000 m/s relative to the Earth, the people on board the rocket should observe the exact same behavior when they fire their engine, right?
So I turned to the internet; Stack Overflow, Quora, Reddit, random physicists' blogs. But they all had the same problem. Every single resource I could find would "explain" the effect with a bunch of
math, either focusing on the quadratic nature of kinetic energy, or some even more confusing derivation in terms of work.
A few at least tried to link the math up to the real world. Accelerating the rocket stores kinetic energy in the propellant, and this energy is then "reclaimed" when it's burned, leading to more
energy coming out of the propellant at higher speeds. But this seemed unphysical; kinetic energy is not a property of the propellant itself, it depends on the reference frame of the observer! So this
explanation still didn't provide me with an intuition for why it worked this way, and still seemed to violate relativity.
It took me years to find someone who could explain it to me in better terms.
Asymmetric gravitational effects
Say your spacecraft starts 1 AU away from a planet, on an inertial trajectory that will bring it close to the planet but not hit it. It takes a year to reach periapsis going faster and faster the
whole way. Then it takes another year to reach 1 AU again, slowing down the whole time.
Two things to note here: The coordinate acceleration experienced by the spacecraft (relative to the planet) is higher the closer it gets, because that's where gravity is strongest. Way out at 1AU,
the gravitational field is very weak, and there's barely any effect on the ship. Secondly, note that the trajectory is symmetric, because orbital mechanics is time-reversible. That's how we know that
if it takes 1 year to fall in it will also take 1 year to get back out, and you'll be traveling at the same speed as you were at the beginning.
Now imagine that you burn prograde at periapsis. Now you'll be traveling faster as you leave than you were as you came in. This means that gravity has less time to act on you on the way out than it
did on the way in. Of course the gravitational field extends all the way out to 1 AU, but if we take just a subregion of it, like the region within which the acceleration is at least 1 m/s^2, you'll
spend less time subject to that level of acceleration.
So the Oberth effect is just a consequence of you maximizing the amount of time gravity works on you in the desired direction, and minimizing it in the other direction. (And of course you'd get the
inverse effect if you burned retrograde; a more efficient way to slow down.)
This has nothing to do with propellant. Maybe instead of thrusters, there's a giant automatic baseball bat machine that will smack your rocket ship as you pass by, adding 1 m/s to whatever your
current speed is. It would be more efficient to put the machine on the surface of a planet (one with no atmosphere) than to put it out in space.
It really is gravity, not speed
The confusing explanations of the Oberth effect focus on speed being the relevant factor, but it's really not. Being inside a gravitational well is what matters.
Think about the baseball bat machine placed on a planet. Why does the inbound part of the trajectory matter? Couldn't we just imagine the spaceship getting launched directly from that planet, and it
would have to behave the same way? Yeah, it would. And indeed, the equations for escape speed show this is the case.
Let's say you have a gun that shoots a bullet at 1 m/s. You shoot it in deep space, and wait until the bullet has traveled 1AU. It will still be going 1 m/s, obviously. If you increase the muzzle
speed to 2 m/s, the bullet will be going at 2 m/s. But what if you fire it on the surface of a planet that has an escape speed of 1 m/s? If the muzzle speed is 1 m/s, then the bullet will be
asymptotically slowing down towards 0; we can say that its speed "at infinity" is 0. But if the bullet speed at launch is 2 m/s, its speed at infinity will be sqrt((initial speed^2) - (escape speed^
2)), or about 1.7 m/s.
Look at that! Increasing the efficiency of your space gun has more of an impact if you're standing on the surface of a planet than if you're in deep space. Making the gun shoot faster has compounding
returns, because the faster the bullet leaves the gravity well of the planet, the less of its energy gravity is able to "steal" as it leaves.
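Here is a tiny numerical illustration of that compounding; it is just the escape-speed algebra from the paragraph above, wrapped in Python, using the 1 m/s escape speed of the example planet.

```python
import math

V_ESC = 1.0  # escape speed at the launch site (m/s), as in the example above

def speed_at_infinity(muzzle_speed: float) -> float:
    """Residual speed after climbing out of the gravity well (0 if it never escapes)."""
    if muzzle_speed <= V_ESC:
        return 0.0
    return math.sqrt(muzzle_speed**2 - V_ESC**2)

for v in (1.0, 1.5, 2.0, 3.0):
    print(f"muzzle speed {v:.1f} m/s  ->  {speed_at_infinity(v):.2f} m/s at infinity")

# Prints 0.00, 1.12, 1.73, 2.83. Upgrading the gun from 2 m/s to 3 m/s buys about
# 1.10 m/s at infinity, versus exactly 1.00 m/s if the same gun sat in deep space:
# the faster the bullet leaves, the less of its energy gravity "steals" on the way out.
```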
Bringing propellant back into it
Of course 1.7 m/s is still less than 2 m/s, so to maximize projectile speed it's still best to put the gun in deep space, away from the planet. Why is this? With the traditional Oberth effect using
the giant baseball bat machine, it was more efficient to put it on the planet. But with our space gun, it's better to put it out in deep space. What's the difference?
The difference is that the starting position of the whole system is not the same between the two scenarios, so it's not a symmetric comparison. With the baseball bat machine, the ship starts in outer
space, and then goes back to outer space. With the space gun, the bullet starts on the surface of the planet. It costs energy to bring your space gun up into space!
In other words, the spaceship got to cheat. It started already having some potential energy for being outside of a gravity well, whereas the bullet didn't start with that energy. (It feels kind of weird to define objects floating in space to have positive potential energy just because there happens to be a planet somewhere else in the universe, which is why we traditionally define the gravitational potential as negative and let the objects be at zero. But in this context I think the positive energy framing is clearer, so that's what I'm using.) Taking everything into account, it actually is more efficient to just launch the bullet from the surface at a higher speed, rather than spend a bunch of extra energy lugging the whole gun into space too.
Could the gun in space cheat too? Sure; let it fall into the gravity well and then fire the bullet, and you'll get the same positive effect on the bullet's total speed. Oh look, we've reinvented a rocket.
So this is another framing of the Oberth effect: If the rocket thrusts in deep space, it only gains the chemical energy of the propellant. All that propellant's potential energy from being far above
the surface of the planet is wasted, with the propellant flying off to infinity, never going anywhere near the planet. If the rocket instead falls into the gravity well, the propellant's potential
energy turns into kinetic energy. Then when the rocket fires its engine at periapsis, that propellant has to escape the gravity well, meaning it will be traveling slower at infinity that it would
have been from an engine firing in deep space. That extra kinetic energy has to go somewhere, and it goes into the rocket. The rocket gets to make use of the potential energy of the propellant in
addition to its chemical energy. (One question this raises is why this effect doesn't compound with the first framing of "spends less time being decelerated by gravity". It almost seems like a 1 m/s burn should have more of an impact on the rocket than a 1 m/s strike by a baseball bat, since both situations have the differential time spent in the gravity well, but in the former case there's also the additional transfer of kinetic energy from the propellant to the rocket. But of course this kinetic energy transfer is non-physical, and since the baseball bat experiences a backwards force from hitting the rocket, the bat case is really just equivalent to an engine burn where the propellant is fired at below escape speed and falls back to the planet. Thinking about this gets me confused about reference frame invariance again.)
(The seeming asymmetry is due to the direction of travel. The propellant is itself "thrusting" retrograde by ejecting a rocket behind it, and the Oberth effect causes it to decelerate more
efficiently than it would have in deep space. The rocket is thrusting prograde, and gains that energy. The Oberth effect depends on the direction of travel relative to the gravity source at the time
of the burn. One interesting question is: at what angle of thrust does the effect on the propellant go from negative to positive? I didn't do the math to check, but I'm pretty sure it's just the angle
at which the speed of the propellant in the planet's reference frame is the exact same as the rocket's speed.)
Ok, maybe it's about speed after all
Hmm, my observation that the Oberth effect depends on the direction of travel and not just "being in a gravity well" reminds me of the original claim that the Oberth effect is just a general
consequence of being in motion. And the typical descriptions of the Oberth effect claim that it applies to speed anywhere, even with no planets in sight.
I think this is just a mathematical curiosity; a natural consequence of the fact that kinetic energy scales with the square of the speed. The K = 1/2 mv^2 formula itself is already the thing that
seems to violate relativity, and the Oberth framing just makes this more obvious. Given that kinetic energy is proportional to the square of the speed, it must be the case that a faster speed results
in a larger gain to kinetic energy.
The solution to the seeming relativity-violation in the formula for kinetic energy is that energy is only conserved between two measurements from the same reference frame. If you measure the energy
of the universe, then accelerate, then measure it again, you will indeed get a higher number, and that's fine, because you've changed reference frames. The actual numerical value of energy is
arbitrary, and all that matters are changes and relative quantities. (Just like how we can either define an object on a planet to have 0 potential energy and it to have positive energy at infinity,
or for it to have 0 energy at infinity and negative energy on the planet's surface. Both are equally valid as long as you use them consistently.) So sure, from a stationary perspective an object
seems to gain more energy when it accelerates at higher speeds, while from a different stationary perspective where it's going a lower speed it would seem to gain less energy, but that's fine, since
those two reference frames are not comparable. (They'd also measure a passing planet as having a different kinetic energy.)
So this aspect of the Oberth effect is unphysical and of no practical consequence when there's no gravity well nearby. The Oberth effect only affects your rocket ship trajectory planning when there's
a gravity well that lets you convert kinetic energy into speed at a better-than-normal rate (I don't get exactly how this works), with the rate depending on the reference frame of the planet. But when
you're in empty space, the Oberth effect just cancels itself out.
After posting this article, someone mentioned another intuition pump: Alice is in an orbit of 10 m/s around a tiny black hole, and Bob is in a lower orbit of 100,000 m/s. They then both get an
extremely high impulse of 0.1c, bringing them far away from the black hole before gravity has any significant time to act on them. Alice's ship will be traveling at about 0.1c + 10 m/s, while Bob's
will be traveling about 0.1c + 100,000 m/s.
An interesting consequence of this is that it means the upper limit of the Oberth effect, as the newly delivered impulse goes to infinity, is the difference between their original orbital speeds. | {"url":"https://outsidetheasylum.blog/an-actually-intuitive-explanation-of-the-oberth-effect/","timestamp":"2024-11-03T03:44:16Z","content_type":"text/html","content_length":"17043","record_id":"<urn:uuid:52d8d418-1db2-4574-ab06-75ad46680663>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00631.warc.gz"} |
3.2 EXPRESSIONS
Last modified: Wed, 06/24/2020 - 11:59
Throughout this manual, Ferret commands that require and manipulate data are informally called "action" commands. These commands are:
FILL (alias for CONTOUR/FILL)
Action commands may use any valid algebraic expression involving constants, operators (+,–,*,...), functions (SIN, MIN, INT,...), pseudo-variables (X, TBOX, ...) and other variables.
A variable name may optionally be followed by square brackets containing region, transformation, data set, and regridding qualifiers. For example, "temp", "salt[D=2]", "u[G=temp]", "u[Z=0:200@AVE]", and so on.
The expressions may also contain a syntax of:
IF condition THEN expression_1 ELSE expression_2
Examples: Expressions
i) temp ^ 2
temperature squared
ii) temp - temp[Z=@AVE]
for the range of Z in the current context, the temperature deviations from the vertical average
iii) COS(Y)
the cosine of the Y coordinate of the underlying grid (by default, the y-axis is implied by the other variables in the expression)
iv) IF (vwnd GT vwnd[D=monthly_navy_winds]) THEN vwnd ELSE 0
use the meridional velocity from the current data set wherever it exceeds the value in data set monthly_navy_winds, zero elsewhere.
Among the available operators is ^ (exponentiate); for instance, the exponentiate operator can compute the square root of a variable as var^0.5.
Ch3 Sec2.2. Multi-dimensional expressions
Operators and functions (discussed in the next section, Functions) may combine variables of like dimensions or differing dimensions.
If the variables are of like dimension then the result of the combination is of the same dimensionality as inputs. For example, suppose there are two time series that have data on the same time axis;
the result of a combination will be a time series on the same time axis.
If the variables are of unlike dimensionality, then the following rules apply:
1) To combine variables together in an expression they must be "conformable" along each axis.
2) Two variables are conformable along an axis if the number of points along the axis is the same, or if one of the variables has only a single point along the axis (or, equivalently, is normal to
the axis).
3) When a variable of size 1 (a single point) is combined with a variable of larger size, the variable of size 1 is "promoted" by replicating its value to the size of the other variable.
4) If variables are the same size but have different coordinates, they are conformable, but Ferret will issue a message that the coordinates on the axis are ambiguous. The result of the combination
inherits the coordinates of the FIRST variable encountered that has more than a single point on the axis.
Assume a region J=50/K=1/L=1 for examples 1 and 2. Further assume that variables v1 and v2 share the same x-axis.
1) yes? LET newv = v1[I=1:10] + v2[I=1:10] !same dimension (10)
2) yes? LET newv = v1[I=1:10] + v2[I=5] !newv has length of v1 (10)
3) We want to compare the salt values during the first half of the year with the values for the second half. Salt_diff will be placed on the time coordinates of the first variable—L=1:6. Ferret will
issue a warning about ambiguous coordinates.
yes? LET salt_diff = salt[L=1:6] - salt[L=7:12]
4) In this example the variable zero will be promoted along each axis.
yes? LET zero = 0 * (i+j)
yes? LIST/I=1:5/J=1:5 zero !5X5 matrix of 0's
5) Here we calculate in-situ density; salt and temp are on the same grid. This expression is an XYZ volume of points (100×100×10) of density at 10 depths based on temperature and salinity values at
the top layer (K=1).
yes? SET REGION/I=1:100/J=1:100
yes? LET t68 = 1.00024 * temp
yes? LET dens = rho_un (salt[K=1], t68[K=1], Z[G=temp,K=1:10])
Functions are utilized with standard mathematical notation in Ferret. The arguments to functions are constants, constant arrays, pseudo-variables, and variables, possibly with associated qualifiers
in square brackets, and expressions. Thus, all of these are valid function references:
• EXP(-1)
• MAX(a,b)
• TAN(a/b)
• SIN(Y[g=my_sst])
• DAYS1900(1989,{3,6,9},1)
A few functions also take strings as arguments. String arguments must be enclosed in double quotes. For example, to test whether a dataset URL is available,
IF `TEST_OPENDAP("http://the_address/path/filename.nc") NE 0` THEN ...
You can list function names and argument lists with:
yes? SHOW FUNCTIONS ! List all functions
yes? SHOW FUNCTIONS *TAN ! List all functions containing string
Valid functions are described in the sections below and in Appendix A.
See also the section on string functions
It is generally advisable to include explicit limits when working with functions that replace axes. For example, consider the function SORTL(v). The expression
LIST/L=6:10 SORTL(v)
is not equivalent to
LIST SORTL(v[L=6:10])
The former will list the 6th through 10th sorted indices from the entire l range of variable v. The latter will list all of the indices that result from sorting v[l=6:10].
These functions in Ferret, including XSEQUENCE, SAMPLEXY, and so on, are "grid-changing" functions. This means that the axes of the result may differ from the axes of the arguments. In the case of XSEQUENCE(sst), for example, the input grid for SST is its native longitude-latitude-time grid, whereas the output grid is a one-dimensional ABSTRACT axis, so all axes of the input are replaced.
Grid-changing functions create a potential ambiguity about region specifications. Suppose that the result of XSEQUENCE(sst[L=1]) is a list of 50 points along the ABSTRACT X axis. Then it is natural to assume that
LIST/I=10:20 XSEQUENCE(sst[L=1])
should give elements 10 through 20 taken from that list of 50 points (and it does). However, one might think that "I=10:20" referred to a subset of the longitude axis of SST. Therein lies the
ambiguity: one region was specified, but there are 2 axes to which the region might apply.
It gets a degree more complicated if the grid-changing function takes more than one argument. Since the input arguments need not be on identical grids, a result axis (X,Y,Z, or T) may be replaced
with respect to one argument, but actually taken from another (consider ZAXREPLACE, for example.) Ferret resolves the ambiguities thusly:
If in the result of a grid-changing function, an axis (X, Y, Z, or T) has been replaced relative to some argument, then region information which applies to the result of the function on that axis
will NOT be passed to that argument.
So, when you issue commands like
SET REGION/X=20E:30E/Y=0N:20N/L=1
LIST XSEQUENCE(sst)
the X axis region ("20E:30E") applies to the result ABSTRACT axis -- it is not passed along to the argument, SST. The Y axis region is, in fact, ignored altogether, since it is not relevant to the
result of XSEQUENCE, and is not passed along to the argument.
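In practice, the least ambiguous style is to write region limits directly on the argument of a grid-changing function. A minimal sketch (the COADS sst variable is an assumption here):
yes? ! subset applied explicitly to the argument; the result lies on an ABSTRACT X axis
yes? LIST/I=1:5 XSEQUENCE( sst[X=150E:160E,Y=0,L=1] )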
The command SHOW FUNCTION/DETAILS lists the dependence of the result grid on the axes of the arguments:
yes? show func/details xsequence
unravel grid to a line in X
Axes of result:
X: ABSTRACT (result will occupy indices 1...N)
Y: NORMAL (no axis)
Z: NORMAL (no axis)
T: NORMAL (no axis)
E: NORMAL (no axis)
F: NORMAL (no axis)
VAR: (FLOAT)
Influence on output axes:
X: no influence (indicate argument limits with "[]")
Y: no influence (indicate argument limits with "[]")
Z: no influence (indicate argument limits with "[]")
T: no influence (indicate argument limits with "[]")
E: no influence (indicate argument limits with "[]")
F: no influence (indicate argument limits with "[]")
Following is a partial list of Ferret Functions. More functions are documented in Chapter 7 (string functions), and in Appendix A.
MAX(A, B) Compares two fields and selects the point by point maximum.
MAX( temp[K=1], temp[K=2] ) returns the maximum temperature comparing the first 2 z-axis levels.
MIN(A, B) Compares two fields and selects the point by point minimum.
MIN( airt[L=10], airt[L=9] ) gives the minimum air temperature comparing two timesteps.
INT (X) Truncates values to integers.
INT( salt ) returns the integer portion of variable "salt" for all values in the current region.
ABS(X) absolute value.
ABS( U ) takes the absolute value of U for all points within the current region
EXP(X) exponential e^x; argument is real.
EXP( X ) raises e to the power X for all points within the current region
LN(X) Natural logarithm log[e]X; argument is real.
LN( X ) takes the natural logarithm of X for all points within the current region
LOG(X) Common logarithm log[10]X; argument is real.
LOG( X ) takes the common logarithm of X for all points within the current region
SIN(THETA) Trigonometric sine; argument is in radians and is treated modulo 2*pi.
SIN( X ) computes the sine of X for all points within the current region.
COS(THETA ) Trigonometric cosine; argument is in radians and is treated modulo 2*pi.
COS( Y ) computes the cosine of Y for all points within the current region
TAN(THETA) Trigonometric tangent; argument is in radians and is treated modulo 2*pi.
TAN( theta ) computes the tangent of theta for all points within the current region
ASIN(X) Trigonometric arcsine (-pi/2,pi/2). The result will be flagged as missing if the absolute value of the argument is greater than 1; result is in radians.
ASIN( value ) computes the arcsine of "value" for all points within the current region
ACOS(X) Trigonometric arccosine (0,pi). The result will be flagged as missing if the absolute value of the argument is greater than 1; result is in radians.
ACOS ( value ) computes the arccosine of "value" for all points within the current region
ATAN(X) Trigonometric arctangent (-pi/2,pi/2); result is in radians.
ATAN( value ) computes the arctangent of "value" for all points within the current region
ATAN2(Y,X) 2-argument trigonometric arctangent of Y/X (-pi,pi); discontinuous at Y=0.
ATAN2(Y,X ) computes the 2-argument arctangent of Y/X for all points within the current region.
MOD(A,B) Modulo operation ( arg1 – arg2*[arg1/arg2] ). Returns the remainder when the first argument is divided by the second.
MOD( X,2 ) computes the remainder of X/2 for all points within the current region
DAYS1900(year,month,day) computes the number of days since 1 Jan 1900, using the default Proleptic Gregorian calendar. This function is useful in converting dates to Julian days on the standard
Gregorian calendar. If the year is prior to 1900 a negative number is returned. This means that it is possible to compute Julian days relative to, say, 1800 with the expression
LET jday1800 = DAYS1900 ( year, month, day) - DAYS1900( 1800,1,1)
MISSING(A,B) Replaces missing values in the first argument (multi-dimensional variable) with the second argument; the second argument may be any conformable variable.
MISSING( temp, -999 ) replaces missing values in temp with –999
MISSING( sst, temp[D=coads_climatology] ) replaces missing sst values with temperature from the COADS climatology
IGNORE0(VAR) Replaces zeros in a variable with the missing value flag for that variable.
IGNORE0( salt ) replaces zeros in salt with the missing value flag
Ch3 Sec2.3.19. RANDU and RANDU2
RANDU(A) Generates a grid of uniformly distributed [0,1] pseudo-random values. The first valid value in the field is used as the random number seed. Values that are flagged as bad remain flagged as
bad in the random number field.
RANDU( temp[I=105:135,K=1:5] ) generates a field of uniformly distributed random values of the same size and shape as the field "temp[I=105:135,K=1:5]" using temp[I=105,k=1] as the pseudo-random
number seed.
RANDU2(A, iseed) Generates a grid of uniformly distributed [0,1] pseudo-random values, using an alternate algorithm and allowing control over the random generator seed. Values that are flagged as bad
remain flagged as bad in the random number field.
RANDU2( temp[I=105:135,K=1:5],iseed) generates a field of uniformly distributed random values of the same size and shape as the field "temp[I=105:135,K=1:5]" iseed is set as follows:
ISEED: -1=sys clock, 0=continue w/ previous seed, N>0 enter an integer value for a user-defined seed.
Ch3 Sec2.3.20. RANDN and RANDN2
RANDN(A) Generates a grid of normally distributed pseudo-random values. As above, but normally distributed rather than uniformly distributed.
A: field of random values will have shape of A
RANDN2(A, iseed) Generates a grid of normally distributed pseudo-random values. As above, but normally distributed rather than uniformly distributed.
A: field of random values will have shape of A
ISEED: -1=sys clock, 0=continue w/ previous seed, N>0 user-defined seed
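As a quick illustration (a sketch only; the variable sst and the index ranges are assumptions, e.g. from coads_climatology), a field of Gaussian noise with the shape of a data subset can be built and added to it:
yes? LET noise = RANDN2( sst[I=50:60,J=40:45,L=1], -1 )    ! -1: seed from the system clock
yes? LET noisy_sst = sst[I=50:60,J=40:45,L=1] + 0.5*noise  ! points flagged bad stay bad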
RHO_UN(SALT, TEMP, P) Calculates the mass density rho (kg/m^3) of seawater from salinity SALT(salt, psu), temperature TEMP(deg C) and reference pressure P(dbar) using the 1980 UNESCO International
Equation of State (IES80). Either in-situ or potential density may be computed depending upon whether the user supplies in-situ or potential temperature.
Note that to maintain accuracy, temperature must be converted to the IPTS-68 standard before applying these algorithms. For typical seawater values, the IPTS-68 and ITS-90 temperature scales are
related by T_68 = 1.00024 T_90 (P. M. Saunders, 1990, WOCE Newsletter 10). The routine uses the high pressure equation of state from Millero et al. (1980) and the one-atmosphere equation of state
from Millero and Poisson (1981) as reported in Gill (1982). The notation follows Millero et al. (1980) and Millero and Poisson (1981).
RHO_UN( salt, temp, P )
For example:
! Define input variables:
let temp=2 ; let salt=35 ; let p=5000
! Convert from IT90 to ITS68 temperature scale
let t68 = 1.00024 * temp
! Define potential temperature
! at two reference pressure values
let potemp0 = THETA_FO(salt, t68, P, 0.)
let potemp2 = THETA_FO(salt, t68, P, 2000.)
! For sigma0 (sigma-theta, i.e., potential density)
let rho0 = rho_un(salt, potemp0, 0.)
let sigma0 = rho0 - 1000.
! For sigma2
let rho2 = rho_un(salt, potemp2, 2000.)
let sigma2 = rho2 - 1000.
! For sigma-t (outdated method, sigma-theta is preferred)
let rhot = rho_un(salt, t68, 0.)
let sigmat = rhot - 1000.
! For sigma (in-situ density)
let rho = rho_un(salt, t68, p)
let sigma = rho - 1000.
THETA_FO(SALT, TEMP, P, REF) Calculates the potential temperature of a seawater parcel at a given salinity SALT(psu), temperature TEMP(deg. C) and pressure P(dbar), moved adiabatically to a reference
pressure REF(dbar).
This calculation uses Bryden (1973) polynomial for adiabatic lapse rate and Runge-Kutta 4th order integration algorithm. References: Bryden, H., 1973, Deep-Sea Res., 20, 401–408; Fofonoff, N.M, 1977,
Deep-Sea Res., 24, 489–491.
THETA_FO( salt, temp, P, P_reference )
NOTE: To do the reverse calculation, that is, to convert from potential temperature back to in-situ temperature, simply interchange the 3rd and 4th arguments (the pressure and the reference pressure).
For example:
yes? LET Tpot = THETA_FO(35,20,5000,0)
yes? LET Tinsitu = THETA_FO(35,19,0,5000)
yes? LIST Tpot, Tinsitu
Column 1: TPOT is THETA_FO(35,20,5000,0)
Column 2: TINSITU is THETA_FO(35,19,0,5000)
TPOT TINSITU
I / *: 19.00 20.00
RESHAPE(A, B) The result of the RESHAPE function will be argument A "wrapped" on the grid of argument B. The limits given on argument 2 are used to specify subregions within the grid into which
values should be reshaped.
Two common uses of this function are to view multi-year time series data as a 2-dimensional field of 12 months vs. year and to map ABSTRACT axes onto real world coordinates. An example of the former:
yes? define axis/t=15-jan-1982:15-dec-1985/npoints=48/units=days tcal
yes? let my_time_series = sin(t[gt=tcal]/100)
yes? define axis/t=1982:1986:1 tyear
yes? define axis/z=1:12:1 zmonth
yes? let out_grid = z[gz=zmonth] + t[gt=tyear]
yes? let my_reshaped = reshape(my_time_series, out_grid)
yes? sh grid my_reshaped
GRID (G001)
name axis # pts start end
normal X
normal Y
ZMONTH Z 12 r 1 12
TYEAR T 5 r 1982 1986
For any axis X, Y, Z, or T, if the axis differs between the input and output grids, then limits placed upon the region of the axis in argument two (the output grid) can be used to restrict the geometry into
which the RESHAPE is performed. Continuing with the preceding example:
! Now restrict the output region to obtain a 6 month by 8 year matrix
yes? list reshape(my_time_series, out_grid[k=1:6])
VARIABLE : RESHAPE(MY_TIME_SERIES, OUT_GRID[K=1:6])
SUBSET : 6 by 5 points (Z-T)
1982 / 1: 0.5144 0.7477 0.9123 0.9931 0.9827 0.8820
1983 / 2: 0.7003 0.4542 0.1665 -0.1366 -0.4271 -0.6783
1984 / 3: -0.8673 -0.9766 -0.9962 -0.9243 -0.7674 -0.5401
1985 / 4: -0.2632 0.0380 0.3356 0.6024 0.8138 0.9505
1986 / 5: 0.9999 0.9575 0.8270 0.6207 0.3573 0.0610
For any axis X,Y,Z, or T if the axis is the same in the input and output grids then the region from argument 1 will be preserved in the output. This implies that when the above technique is used on
multi-dimensional input, only the axes which differ between the input and output grids are affected by the RESHAPE operation. However RESHAPE can only be applied if the reshape operation preserves
the ordering of data on the axes in four dimensions. The RESHAPE function only "wraps" the variable to the new grid, keeping the data ordered as it exists in memory, that is, ordered by X (varying fastest), then Y, Z, and T (varying slowest). It is an operation like @ASN regridding. Subsetting is done if requested by region specifiers, but the function does not reorder the data as it is put on the new
axes. For instance, if your data is in Z and T:
yes? show grid my_reshaped
GRID (G001)
name axis # pts start end
normal X
normal Y
ZMONTH Z 12 r 1 12
TYEAR T 5 r 1982 1986
and you wish to put it on a new grid, GRIDYZ
yes? SHOW GRID gridyz
GRID (GRIDYZ)
name axis # pts start end
normal X
YAX LATITUDE 5 r 15N 19N
ZMONTH Z 12 r 1 12
normal T
then the RESHAPE function would NOT correctly wrap the data from G001 to GRIDYZ, because the data is ordered with its Z coordinates changing faster than its T coordinates, and on output the data
would need to be reordered with the Y coordinates changing faster than the Z coordinates.
The following filled contour plot of longitude by year number illustrates the use of RESHAPE in multiple dimensions by expanding on the previous example: (Figure 3_2)
! The year-by-year progression January winds for a longitudinal patch
! averaged from 5s to 5n across the eastern Pacific Ocean. Note that
! k=1 specifies January, since the Z axis is month
yes? USE coads
yes? LET out_grid = Z[GZ=zmonth]+T[GT=tyear]+X[GX=uwnd]+Y[GY=uwnd]
yes? LET uwnd_mnth_ty = RESHAPE(uwnd, out_grid)
yes? FILL uwnd_mnth_ty[X=130W:80W,Y=5S:5N@AVE,K=1]
In the second usage mentioned, to map ABSTRACT axes onto real world coordinates, suppose xpts and ypts contain time series of length NT points representing longitude and latitude points along an
oceanographic ship track and the variable global_sst contains global sea surface temperature data. Then the result of
LET sampled_sst = SAMPLEXY(global_sst, xpts, ypts)
will be a 1-dimensional grid: NT points along the XABSTRACT axis. The RESHAPE function can be used to remap this data to the original time axis using RESHAPE(sampled_sst, xpts)
yes? LET sampled_sst = SAMPLEXY(global_sst,\
...? xpts[t=1-jan-1980:15-jan-1980],\
...? ypts[t=1-jan-1980:15-jan-1980])
yes? LIST RESHAPE(sampled_sst, xpts[t=1-jan-1980:15-jan-1980])
When the input and output grids share any of the same axes, then the specified sub-region along those axes will be preserved in the RESHAPE operation. In the example "RESHAPE(myTseries,myMonthYearGrid)" this means that if myTseries and myMonthYearGrid were each multidimensional variables with the same latitude and longitude grids, then a sub-region given on the input, say RESHAPE(myTseries[X=130E:80W,Y=5S:5N], myMonthYearGrid), would map onto the X=130E:80W,Y=5S:5N sub-region of the grid of myMonthYearGrid. When the input and output axes differ, the sub-region of the output that is utilized may be controlled by inserting explicit limit qualifiers on the second argument.
ZAXREPLACE(V,ZVALS,ZAX) Convert between alternative monotonic Z axes, where the mapping between the source and destination Z axes is a function of X, Y, and/or T. The function regrids between the Z
axes using linear interpolation between values of V. See also the related functions ZAXREPLACE_BIN and ZAXREPLACE_AVG which use binning and averaging to interpolate the values.
Typical applications in the field of oceanography include converting from a Z axis of layer number to a Z axis in units of depth (e.g., for sigma coordinate fields) and converting from a Z axes of
depth to one of density (for a stably stratified fluid).
Argument 1, V, is the field of data values, say temperature on the "source" Z-axis, say, layer number. The second argument, ZVALS, contains values in units of the desired destination Z axis (ZAX) on
the same Z axis as V — for example, depth values associated with each vertical layer. The third argument, ZAX, is any variable defined on the destination Z axis, often "Z[gz=zaxis_name]" is used.
The ZAXREPLACE function takes three arguments. The first argument, V, is the field of data values, say temperature or salinity. This variable is available on what we will refer to as the "source"
Z-axis -- say in terms of layer number. The second argument, ZVALS, contains the values of the desired destination Z axis defined on the source Z axis -- for example, it may contain the depth values
associated with each vertical layer. It should always share the Z axis from the first argument. The third argument, ZAX, is defined on the destination Z axis. Only the Z axis of this variable is
relevant -- the values of the variable, itself, and its structure in X, Y, and T are ignored. Often "Z[gz=zaxis_name]" is used for the third argument.
ZAXREPLACE is a "grid-changing" function; its output grid is different from the input arguments. Therefore it is best to use explicit limits on the arguments rather than a SET REGION command. See the
discussion of grid-changing functions.
An example of the use of ZAXREPLACE for sigma coordinates is outlined in the FAQ on Using Sigma Coordinates.
Another example:
Contour salt as a function of density:
yes? set dat levitus_climatology
! Define density sigma, then density axis axden
yes? let sigma=rho_un(salt,temp,0)-1000
yes? define axis/z=21:28:.05/depth axden
! Regrid to density
yes? let saltonsigma= ZAXREPLACE( salt, sigma, z[gz=axden])
! Make Pacific plot
yes? fill/y=0/x=120e:75w saltonsigma
Note that one could regrid the variable in the third argument to the destination Z axis using whichever of the regridding transformations is best for the analysis, e.g. z[gz=axden@AVE]
Ch3 Sec2.3.25. XSEQUENCE, YSEQUENCE, ZSEQUENCE, TSEQUENCE, ESEQUENCE, FSEQUENCE
XSEQUENCE(A), YSEQUENCE(A), ZSEQUENCE(A), TSEQUENCE(A), ESEQUENCE(A), FSEQUENCE(A) Unravels the data from the argument into a 1-dimensional line of data on an ABSTRACT axis.
This family of functions are "grid-changing" functions; the output grid is different from the input arguments. Therefore it is best to use explicit limits on the argument rather than a SET REGION
command. See the discussion of grid-changing functions.
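A minimal sketch (the coads_climatology dataset and the chosen region are assumptions): unravel a small SST patch into a single list on the ABSTRACT X axis.
yes? USE coads_climatology
yes? LET sst_line = XSEQUENCE( sst[X=180:170W,Y=5S:5N,L=1] )
yes? LIST/I=1:10 sst_line   ! first 10 elements of the unraveled list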
FFTA(A) Computes Fast Fourier Transform amplitude spectra, normalized by 1/N
Arguments: A Variable with regular time axis.
Result Axes: X Inherited from A
Y Inherited from A
Z Inherited from A
T Generated by the function: frequency in cyc/(time units from A)
See the demonstration script ef_fft_demo.jnl for an example using this function. Also see the external functions fft_re, fft_im, and fft_inverse for more options using FFT's
FFTA returns a(j) in
f(t) = S[(j=1 to N/2)][a(j) cos(jwt + F(j))]
where [ ] means "integer part of", w=2 pi/T is the fundamental frequency, and T=N*Dt is the time span of the data input to FFTA. F is the phase (returned by FFTP, see next section)
The units of the returned time axis are "cycles/Dt" where Dt is the time unit of the input axis. The Nyquist frequency is yquist = 1./(2.*boxsize), and the frequency axis runs from freq1 = yquist/
float(nfreq) to freqn = yquist
Even and odd N's are allowed. N need not be a power of 2. FFTA and FFTP assume f(1)=f(N+1), and the user gives the routines the first N pts.
Specifying the context of the input variable explicitly e.g.
LIST FFTA(A[l=1:58])
will prevent any confusion about the region. See the note in chapter 3 above, on the context of variables passed to functions.
The code is based on the FFT routines in Swarztrauber's FFTPACK available at www.netlib.org. For further discussion of the FFTPACK code, please see the document, Notes on FFTPACK - A Package of Fast
Fourier Transform Programs.
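A minimal amplitude-spectrum sketch (the dataset, variable, and time range here are assumptions; any variable on a regular time axis will do):
yes? USE monthly_navy_winds
yes? LET amp = FFTA( uwnd[X=180,Y=0,L=1:120] )
yes? PLOT amp    ! amplitude versus frequency, in cycles per month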
FFTP(A) Computes Fast Fourier Transform phase
Arguments: A Variable with regular time axis.
Result Axes: X Inherited from A
Y Inherited from A
Z Inherited from A
T Generated by the function: frequency in cyc/(time units from A)
See the demonstration script ef_fft_demo.jnl for an example using this function.
FFTP returns F(j) in
f(t) = S[(j=1 to N/2)][a(j) cos(jwt + F(j))]
where [ ] means "integer part of", w=2 pi/T is the fundamental frequency, and T=N*Dt is the time span of the data input to FFTA.
The units of the returned time axis are "cycles/Dt" where Dt is the time increment. The Nyquist frequency is yquist = 1./(2.*boxsize), and the frequency axis runs from freq1 = yquist/ float(nfreq) to
freqn = yquist
Even and odd N's are allowed. Power of 2 not required. FFTA and FFTP assume f(1)=f(N+1), and the user gives the routines the first N pts.
Specifying the context of the input variable explicitly e.g.
LIST FFTP(A[l=1:58])
will prevent any confusion about the region. See the note in chapter 3 above, on the context of variables passed to functions.
The code is based on the FFT routines in Swarztrauber's FFTPACK available at www.netlib.org. See the section on Function FFTA for more discussion. For further discussion of the FFTPACK code, please
see the document, Notes on FFTPACK - A Package of Fast Fourier Transform Programs .
SAMPLEI(TO_BE_SAMPLED,X_INDICES) samples a field at a list of X indices, which are a subset of its X axis
Arguments: TO_BE_SAMPLED Data to sample
X_INDICES list of indices of the variable TO_BE_SAMPLED
Result Axes: X ABSTRACT; length same as X_INDICES
Y Inherited from TO_BE_SAMPLED
Z Inherited from TO_BE_SAMPLED
T Inherited from TO_BE_SAMPLED
See the demonstration ef_sort_demo.jnl for a common usage of this function. As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the
function call. See the discussion of grid-changing functions. For instance
yes? LET sampled_data = samplei(airt[X=160E:180E], xindices)
SAMPLEJ(TO_BE_SAMPLED,Y_INDICES) samples a field at a list of Y indices, which are a subset of its Y axis
Arguments: TO_BE_SAMPLED Data to sample
Y_INDICES list of indices of the variable TO_BE_SAMPLED
Result Axes: X Inherited from TO_BE_SAMPLED
Y ABSTRACT; length same as Y_INDICES
Z Inherited from TO_BE_SAMPLED
T Inherited from TO_BE_SAMPLED
See the demonstration ef_sort_demo.jnl for a common usage of this function. As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the
function call. See the discussion of grid-changing functions.
SAMPLEK(TO_BE_SAMPLED, Z_INDICES) samples a field at a list of Z indices, which are a subset of its Z axis
Arguments: TO_BE_SAMPLED Data to sample
Z_INDICES list of indices of the variable TO_BE_SAMPLED
Result Axes: X Inherited from TO_BE_SAMPLED
Y Inherited from TO_BE_SAMPLED
Z ABSTRACT; length same as Z_INDICES
T Inherited from TO_BE_SAMPLED
See the demonstration ef_sort_demo.jnl for a common usage of this function. As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the
function call. See the discussion of grid-changing functions.
SAMPLEL(TO_BE_SAMPLED, T_INDICES) samples a field at a list of T indices, a subset of its T axis
Arguments: TO_BE_SAMPLED Data to sample
T_INDICES list of indices of the variable TO_BE_SAMPLED
Result Axes: X Inherited from TO_BE_SAMPLED
Y Inherited from TO_BE_SAMPLED
Z Inherited from TO_BE_SAMPLED
T ABSTRACT; length same as T_INDICES
See the demonstration ef_sort_demo.jnl for a common usage of this function. As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the
function call. See the discussion of grid-changing functions.
SAMPLEM(TO_BE_SAMPLED, E_INDICES) samples a field at a list of E indices, a subset of its E axis
Arguments: TO_BE_SAMPLED Data to sample
E_INDICES list of indices of the variable TO_BE_SAMPLED
Result Axes: X Inherited from TO_BE_SAMPLED
Y Inherited from TO_BE_SAMPLED
Z Inherited from TO_BE_SAMPLED
T Inherited from TO_BE_SAMPLED
E ABSTRACT; length same as E_INDICES
F Inherited from TO_BE_SAMPLED
See the demonstration ef_sort_demo.jnl for a common usage of this function. As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the
function call. See the discussion of grid-changing functions.
SAMPLEN(TO_BE_SAMPLED, F_INDICES) samples a field at a list of F indices, a subset of its F axis
Arguments: TO_BE_SAMPLED Data to sample
F_INDICES list of indices of the variable TO_BE_SAMPLED
Result Axes: X Inherited from TO_BE_SAMPLED
Y Inherited from TO_BE_SAMPLED
Z Inherited from TO_BE_SAMPLED
T Inherited from TO_BE_SAMPLED
E Inherited from TO_BE_SAMPLED
F ABSTRACT; length same as F_INDICES
See the demonstration ef_sort_demo.jnl for a common usage of this function. As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the
function call. See the discussion of grid-changing functions.
SAMPLEIJ(DAT_TO_SAMPLE,XPTS,YPTS) Returns data sampled at a subset of its grid points, defined by (XPTS, YPTS)
Arguments: DAT_TO_SAMPLE Data to sample, field of x, y, and perhaps z and t
XPTS X coordinates of grid points to sample
YPTS Y coordinates of grid points to sample
Result Axes: X ABSTRACT, length of list (xpts,ypts)
Y NORMAL (no axis)
Z Inherited from DAT_TO_SAMPLE
T Inherited from DAT_TO_SAMPLE
This is a discrete version of SAMPLEXY. The points defined in arguments 2 and 3 are coordinates, but a result is returned only if those arguments match coordinates of the grid of the data being sampled.
As with other functions which change axes, specify any region information for the variable TO_BE_SAMPLED explicitly in the function call. See the discussion of grid-changing functions.
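A short sketch, assuming the coads_climatology grid so that the listed points coincide exactly with grid coordinates of sst (SAMPLEIJ returns a result only where they match):
yes? USE coads_climatology
yes? LET xpts = {179, 181, 183}
yes? LET ypts = { -1,   1,   3}
yes? LIST SAMPLEIJ( sst[L=1], xpts, ypts )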
SAMPLET_DATE (DAT_TO_SAMPLE, YR, MO, DAY, HR, MIN, SEC) Returns data sampled by interpolating to one or more times
Arguments: DAT_TO_SAMPLE Data to sample, field of x, y, z and t
YR Year(s), integer YYYY
MO Month(s), integer month number MM
DAY Day(s) of month, integer DD
HR Hour(s) integer HH
MIN Minute(s), integer MM
SEC Second(s), integer SS
Result Axes: X Inherited from DAT_TO_SAMPLE
Y Inherited from DAT_TO_SAMPLE
Z Inherited from DAT_TO_SAMPLE
T ABSTRACT; length is # times sampled. The length is determined from the length of argument 2.
As with other functions which change axes, specify any region information for the variable DAT_TO_SAMPLE explicitly in the function call. See the discussion of grid-changing functions.
List wind speed at a subset of points from the monthly navy winds data set. To choose times all within the single year 1985, we define a variable year = 1985 + 0*month, which has the same length as month but the value 1985 at every point.
yes? use monthly_navy_winds
yes? set reg/x=131:139/y=29
yes? let month = {1,5,11}
yes? let day = {20,20,20}
yes? let year = 1985 + 0*month
yes? let zero = 0*month
yes? list samplet_date (uwnd, year, month, day, zero, zero, zero)
VARIABLE : SAMPLET_DATE (UWND, YEAR, MONTH, DAY, ZERO, ZERO, ZERO)
FILENAME : monthly_navy_winds.cdf
FILEPATH : /home/porter/tmap/ferret/linux/fer_dsets/data/
SUBSET : 5 by 3 points (LONGITUDE-T)
LATITUDE : 20N
130E 132.5E 135E 137.5E 140E
1 / 1: -7.563 -7.546 -7.572 -7.354 -6.743
2 / 2: -3.099 -3.256 -3.331 -3.577 -4.127
3 / 3: -7.717 -7.244 -6.644 -6.255 -6.099
SAMPLEXY(DAT_TO_SAMPLE,XPTS,YPTS) Returns data sampled at a set of (X,Y) points, using linear interpolation.
Arguments: DAT_TO_SAMPLE Data to sample
XPTS X values of sample points
YPTS Y values of sample points
Result Axes: X ABSTRACT; length same as XPTS and YPTS
Y NORMAL (no axis)
Z Inherited from DAT_TO_SAMPLE
T Inherited from DAT_TO_SAMPLE
SAMPLEXY is a "grid-changing" function; its output grid is different from the input arguments. Therefore it is best to use explicit limits on the first argument rather than a a SET REGION command.
See the discussion of grid-changing functions.
If the X axis represents longitude and the axis is marked as a modulo axis, then the xpts will be translated into the longitude range of the axis, and the sampled variable is returned at those locations.
1) See the script vertical_section.jnl to create a section along a line between two (lon,lat) locations; this script calls SAMPLEXY.
2) Use SAMPLEXY to extract a section of data taken along a slanted line in the Pacific.
First we generate the locations xlon, ylat (Figure3_3a). One could use a ship track, specifying its coordinates as xlon, ylat.
yes? USE levitus_climatology
! define the slant line through (234.5,24.5)
! and with slope -24./49
yes? LET xlon = 234.5 + (I[I=1:50]-1)
yes? LET slope = -1*24./49
yes? LET ylat = 24.5 + slope*(i[i=1:50] -1)
yes? PLOT/VS/LINE/SYM=27 xlon,ylat ! line off Central America
yes? GO land
Now sample the field "salt" along this track and make a filled contour plot. The horizontal axis is abstract; it is a count of the number of points along the track. To speed the calculation, or if we
otherwise want to restrict the region used on the variable salt, put that information in explicit limits on the first argument. (Figure3_3b)
yes? LET slantsalt = samplexy(salt[x=200:300,y=0:30],xlon,ylat)
yes? FILL/LEVELS=(33.2,35.2,0.1)/VLIMITS=0:4000 slantsalt
Ch3 Sec2.3.35. SAMPLEXY_CLOSEST
SAMPLEXY_CLOSEST(DAT_TO_SAMPLE,XPTS,YPTS) Returns data sampled at a set of (X,Y) points without interpolation, using nearest grid intersection.
Arguments: DAT_TO_SAMPLE Data to sample
XPTS X values of sample points
YPTS Y values of sample points
Result Axes: X ABSTRACT; length same as XPTS and YPTS
Y NORMAL (no axis)
Z Inherited from DAT_TO_SAMPLE
T Inherited from DAT_TO_SAMPLE
Note: SAMPLEXY_CLOSEST is a "grid-changing" function; its output grid is different from the input arguments. Therefore it is best to use explicit limits on the first argument rather than a SET REGION
command. See the discussion of grid-changing functions.
This function is a quick-and-dirty substitute for the SAMPLEXY function. It runs much faster than SAMPLEXY, since it does no interpolation. It returns the function value at the grid point closest to
each point in XPTS, YPTS. It is less accurate than SAMPLEXY, but may give adequate results in a much shorter time for large samples.
Example: compare with SAMPLEXY output
yes? USE levitus_climatology
yes? LET xlon = 234.5 + I[I=1:20]
yes? LET dely = 24./19
yes? LET ylat = 24.5 - dely*i[i=1:20] + dely
yes? LET a = samplexy(salt[X=200:300,Y=0:30,K=1], xlon, ylat)
yes? LET b = samplexy_closest(salt[X=200:300,Y=0:30,K=1], xlon, ylat)
yes? LIST a, b
DATA SET: /home/porter/tmap/ferret/linux/fer_dsets/data/levitus_climatology.cdf
X: 0.5 to 20.5
DEPTH (m): 0
Column 1: A is SAMPLEXY(SALT[X=200:300,Y=0:30,K=1], XLON, YLAT)
Column 2: B is SAMPLEXY_CLOSEST(SALT[X=200:300,Y=0:30,K=1], XLON, YLAT)
A B
1 / 1: 34.22 34.22
2 / 2: 34.28 34.26
3 / 3: 34.35 34.39
4 / 4: 34.41 34.43
5 / 5: 34.44 34.44
6 / 6: 34.38 34.40
7 / 7: 34.26 34.22
8 / 8: 34.09 34.07
9 / 9: 33.90 33.92
10 / 10: 33.74 33.78
11 / 11: 33.64 33.62
12 / 12: 33.63 33.62
13 / 13: 33.69 33.67
14 / 14: 33.81 33.75
15 / 15: 33.95 34.00
16 / 16: 34.11 34.11
17 / 17: 34.25 34.22
18 / 18: 34.39 34.33
19 / 19: 34.53 34.56
20 / 20: 34.65 34.65
SAMPLEXY_CURV Returns data which is on a curvilinear grid, sampled at a set of (X,Y) points, using interpolation.
Arguments: DAT_TO_SAMPLE Data to sample
(2D curvilinear data field)
DAT_LON Longitude coordinates of the curvilinear grid
(2D curvilinear longitudes)
DAT_LAT Latitude coordinates of the curvilinear grid
(2D curvilinear latitudes)
XPTS X values of sample points
(1D list of points)
YPTS Y values of sample points
(1D list of points, same length as XPTS)
Result Axes: X ABSTRACT; length same as XPTS and YPTS
Y NORMAL (no axis)
Z Inherited from DAT_TO_SAMPLE
T Inherited from DAT_TO_SAMPLE
Note: SAMPLEXY_CURV is a "grid-changing" function; its output grid is different from the input arguments. Therefore it is best to use explicit limits on the first argument rather than a SET REGION
command. See the discussion of grid-changing functions.
Example 1:
yes? USE curvi_data.nc
yes? SHADE u, geolon, geolat
! Define a set of locations. Note that the X values
! must be in the range of x coordinate values from the
! variable geolon. The function does not wrap values
! around to other modulo longitude values.
yes? LET xpts = {-111, -110, -6, 0, 43, 51}
yes? LET ypts = { -9, -8, -6,-2, -3, -4}
! check how the points overlay on the original grid
yes? PLOT/VS/OVER xpts, ypts
! this will list the U values at the (xpts,ypts) locations
yes? LIST SAMPLEXY_CURV(u, geolon, geolat, xpts, ypts)
Example 2:
! This method can be used to sample data on a grid of values
! Note it can be slow if you choose to sample at a lot of points.
! The CURV_TO_RECT functions may be a better choice in that case.
yes? DEFINE AXIS/X=-200:-100:5 newx
yes? DEFINE AXIS/Y=-20:20:1 newy
! Define variables containing all the x points at each y level,
! and likewise for y at each x
yes? LET xpts = x[gx=newx] + 0*y[gy=newy]
yes? LET ypts = 0*x[gx=newx] + y[gy=newy]
! The last 2 arguments to SAMPLEXY_CURV are 1D lists
yes? LET xsample = xsequence(xpts)
yes? LET ysample = xsequence(ypts)
! Check how the points overlay on the curvilinear data
yes? PLOT/VS/OVER xsample, ysample
! upts is a list of values at the (xsample,ysample) locations
yes? LET upts = SAMPLEXY_CURV(u, geolon, geolat, xsample, ysample)
! Put this onto the 2D grid defined by the new axes.
yes? LET/TITLE="U sampled to new grid"/UNITS="`u,return=units`" \
u_on_newgrid = RESHAPE(upts, xpts)
yes? SET WIN/NEW
yes? SHADE u_on_newgrid
Ferret includes a number of SCAT2GRID functions which take a set of scattered locations in 2 dimensions, a grid definition, and interpolate data at the scattered locations to a grid. See below for
the rest of the SCAT2GRIDGAUSS functions, followed by the SCAT2GRIDLAPLACE functions. Also see Appendix A for functions that use binning to put data onto a grid: SCAT2GRID_BIN_XY, and the count of
scattered data used in a gridding operation: SCAT2GRID_NBIN_XY and SCAT2GRID_NOBS_XY. These binning functions also have XYT versions.
Discrete Sampling Geometries (DSG) data:
Observational data from DSG datasets consist of collections of data which, while not truly scattered, have lists of coordinate information that can feed into these functions for gridding onto a multi-dimensional grid. However, with the exception of trajectory data, the coordinates do not all have the same dimensionality. Timeseries data, for instance, has one location per station but multiple observations in time. Profile data has a station location and time, and multiple heights/depths in each profile. For the SCAT2GRID functions we need all of the first arguments, the "scattered" locations and the observations, to be of the same size. We need to expand the station data, repeating the station location for each time in that station's time series (a short sketch of this expansion is given below).
The FAQ, Calling SCAT2GRID functions for DSG data, explains how to work with these datasets.
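Using the conformability rules from earlier in this chapter, the expansion can be a one-liner: adding zero times the observed variable promotes the per-station coordinate to the full shape of the observations. A sketch, where station_lon, station_lat, and obs are assumed names from a DSG timeseries dataset:
yes? LET lon_exp = station_lon + 0*obs   ! repeats each station longitude for every time
yes? LET lat_exp = station_lat + 0*obs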
SCAT2GRIDGAUSS_XY(XPTS, YPTS, F, XAXPTS, YAXPTS, XSCALE, YSCALE, CUTOFF, 0) Use Gaussian weighting to grid scattered data to an XY grid
Arguments: XPTS x-locations of scattered input triples, organized as a 1D list on any axis.
YPTS y-locations of scattered input triples, organized as a 1D list on any axis.
F F-data: 3rd component of scattered input triples. This is the variable we are putting onto the grid. It must have the scattered-data direction along an X or Y axis. May be fcn of Z,
time, E, or F.
XAXPTS coordinates of X-axis of output grid. Must be regularly spaced.
YAXPTS coordinates of Y-axis of output grid. Must be regularly spaced.
XSCALE Mapping scale for Gaussian weights in X direction, in data units (e.g. lon or m). See the discussion below.
YSCALE Mapping scale for Gaussian weights in Y direction, in data units (e.g. lat or m)
CUTOFF Cutoff for weight function. Only scattered points within CUTOFF*XSCALE and CUTOFF*YSCALE of the grid box center are included in the sum for the grid box.
0 An unused argument: previously there had been a second "cutoff" argument but it was redundant. There is only one cutoff in the algorithm once the XSCALE and YSCALE mapping is
applied. In future versions of Ferret this argument will be removed.
Result X Inherited from XAXPTS
Y Inherited from YAXPTS
Z Inherited from F
T Inherited from F
The SCAT2GRIDGAUSS functions are "grid-changing" functions; the output grid is different from the input arguments. Therefore it is best to use explicit limits on any of the arguments rather than a
SET REGION command. See the discussion of grid-changing functions.
Quick example:
yes? DEFINE AXIS/X=180:221:1 xax
yes? DEFINE AXIS/Y=-30:10:1 yax
yes? ! read some data
yes? SET DATA/EZ/VARIABLES="times,lats,lons,var" myfile.dat
yes? LET my_out = SCAT2GRIDGAUSS_XY(lons, lats, var, x[gx=xax], y[gy=yax], 2, 2, 2, 0)
If the output X axis is a modulo longitude axis, then the scattered X values should lie within the range of the actual coordinates of the axis. That is, if the scattered X values are xpts={355, 358,
2, 1, 352, 12} and the coordinates of the X axis you wish to grid to are longitudes of x=20,23,25,...,379 then you should apply a shift to your scattered points:
yes? USE levitus_climatology ! output will be on the grid of SALT
yes? LET xx = x[gx=salt]
yes? LET x1 = if lons lt `xx[x=@min]` then lons+360 else lons
yes? LET xnew = if x1 gt `xx[x=@max]` then x1-360 else x1
yes? LET my_out = SCAT2GRIDGAUSS_XY(xnew, lats, var, x[gx=xax], y[gy=yax], 2, 2, 2, 0)
The SCAT2GRIDGAUSS* functions use a Gaussian interpolation method to map irregular locations (x[n], y[n]) to a regular grid (x[0], y[0]). The output grid must have equally-spaced gridpoints in both
the x and y directions. For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
In addition, see the FAQ, Calling SCAT2GRID functions for DSG data, which explains how to work with these datasets.
Parameters for a square grid and a fairly dense distribution of scattered points relative to the grid might be XSCALE=YSCALE = 0.5, and CUTOFF = 2. To get better coverage, use a coarser grid or
increase XSCALE, YSCALE and/or CUTOFF.
The value of the gridded function F at each grid point (x[0], y[0]) is computed by:
F(x[0],y[0]) = S[(n=1 to Np)]F(x[n],y[n])W(x[n],y[n]) / S[(n=1 to Np)]W(x[n],y[n])
where Np is the total number of irregular points within the "influence region" of a particular grid point (determined by the CUTOFF parameter, defined below). The Gaussian weight function W[n] is
given by
W[n](x[n],y[n]) = exp{-[(x[n]-x[0])^2/(X)^2 + (y[n]-y[0])^2/(Y)^2]}
X and Y in the denominators on the right hand side are the mapping scales, arguments XSCALE and YSCALE.
The weight function has a nonzero value everywhere, so in theory all of the scattered points could be part of the sum for each grid point. To cut computation, the parameter CUTOFF is employed. If a cutoff of 2 is used (CUTOFF=2), then the weight function is set to zero when W[n] < e^-4, i.e., where the distance from the grid point is more than 2 times the mapping scale X or Y.
(Reference for this method: Kessler and McCreary, 1993: The Annual Wind-driven Rossby Wave in the Subthermocline Equatorial Pacific, Journal of Physical Oceanography 23, 1192 -1207)
SCAT2GRIDGAUSS_XZ(XPTS, ZPTS, F, XAXPTS, ZAXPTS, XSCALE, ZSCALE, CUTOFF, 0) Use Gaussian weighting to grid scattered data to an XZ grid
See the description under SCAT2GRIDGAUSS_XY. Note that the output grid must have equally-spaced gridpoints in both the x and z directions.
SCAT2GRIDGAUSS_XT(XPTS, TPTS, F, XAXPTS, TAXPTS, XSCALE, TSCALE, CUTOFF, 0) Use Gaussian weighting to grid scattered data to an XT grid
See the description under SCAT2GRIDGAUSS_XY. Note that the output grid must have equally-spaced gridpoints in both the x and t directions.
SCAT2GRIDGAUSS_YZ(YPTS, ZPTS, F, YAXPTS, ZAXPTS, YSCALE, ZSCALE, CUTOFF, 0) Use Gaussian weighting to grid scattered data to a YZ grid
See the description under SCAT2GRIDGAUSS_XY. Note that the output grid must have equally-spaced gridpoints in both the y and z directions.
SCAT2GRIDGAUSS_YT(YPTS, TPTS, F, YAXPTS, TAXPTS, YSCALE, TSCALE, CUTOFF, 0) Use Gaussian weighting to grid scattered data to a YT grid
See the description under SCAT2GRIDGAUSS_XY. Note that the output grid must have equally-spaced gridpoints in both the y and t directions.
SCAT2GRIDGAUSS_ZT(ZPTS, TPTS, F, ZAXPTS, TAXPTS, ZSCALE, TSCALE, CUTOFF, 0) Use Gaussian weighting to grid scattered data to a ZT grid
See the description under SCAT2GRIDGAUSS_XY. Note that the output grid must have equally-spaced gridpoints in both the z and t directions.
Ch3 Sec2.3. SCAT2GRIDLAPLACE_XY
SCAT2GRIDLAPLACE_XY(XPTS, YPTS, F, XAXPTS, YAXPTS, CAY, NRNG) Use Laplace/ Spline interpolation to grid scattered data to an XY grid.
Arguments: XPTS x-locations of scattered input triples, organized as a 1D list on any axis.
YPTS y-locations of scattered input triples, organized as a 1D list on any axis.
F F-data: 3rd component of scattered input triples. This is the variable we are putting onto the grid. It must have the scattered-data direction along an X or Y axis. May be fcn of
Z, time, E, or F.
XAXPTS coordinates of X-axis of output grid. Must be regularly spaced.
YAXPTS coordinates of Y-axis of output grid. Must be regularly spaced.
CAY Amount of spline equation (between 0 and inf.) vs Laplace interpolation
NRNG Grid points more than NRNG grid spaces from the nearest data point are set to undefined.
Result X Inherited from XAXPTS
Y Inherited from YAXPTS
Z Inherited from F
T Inherited from F
The SCAT2GRIDLAPLACE functions are "grid-changing" functions; the output grid is different from the input arguments. Therefore it is best to use explicit limits on any of the arguments rather than a
SET REGION command. See the discussion of grid-changing functions.
Quick example:
yes? DEFINE AXIS/X=180:221:1 xax
yes? DEFINE AXIS/Y=-30:10:1 yax
yes? ! read some data
yes? SET DATA/EZ/VARIABLES="times,lats,lons,var" myfile.dat
yes? LET my_out = SCAT2GRIDLAPLACE_XY(lons, lats, var, x[gx=xax], y[gy=yax], 2., 5)
yes? SHADE my_out
If the output X axis is a modulo longitude axis, then the scattered X values should lie within the range of the actual coordinates of the axis. That is, if the scattered X values are xpts={355, 358,
2, 1, 352, 12} and the coordinates of the X axis you wish to grid to are longitudes of x=20,23,25,...,379 then you should apply a shift to your scattered points:
yes? USE levitus_climatology ! output will be on the grid of SALT
yes? LET xx = x[gx=salt]
yes? LET x1 = if lons lt `xx[x=@min]` then lons+360 else lons
yes? LET xnew = if x1 gt `xx[x=@max]` then x1-360 else x1
yes? LET my_out = SCAT2GRIDLAPLACE_XY(xnew, lats, var, x[gx=xax], y[gy=yax], 2., 5)
For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
In addition, see the FAQ, Calling SCAT2GRID functions for DSG data, which explains how to work with these datasets.
The SCAT2GRIDLAPLACE* functions employ the same interpolation method as is used by PPLUS, which appears elsewhere in Ferret, e.g. in contouring. The parameters are used as follows (quoted from the PPLUS Users Guide; see "Plot Plus, a Scientific Graphics System", Donald W. Denbo, April 8, 1987):
If CAY=0.0, Laplacian interpolation is used. The resulting surface tends to have rather sharp peaks and dips at the data points (like a tent with poles pushed up into it). There is no chance of
spurious peaks appearing. As CAY is increased, Spline interpolation predominates over the Laplacian, and the surface passes through the data points more smoothly. The possibility of spurious peaks
increases with CAY. CAY=infinity is pure Spline interpolation. An over-relaxation process is used to perform the interpolation. A value of CAY=5 often gives a good surface.
Any grid points farther than NRNG away from the nearest data point will be set to "undefined". The default used by PPLUS is NRNG = 5.
Ch3 Sec2.3. SCAT2GRIDLAPLACE_XZ
SCAT2GRIDLAPLACE_XZ(XPTS, ZPTS, F, XAXPTS, ZAXPTS, CAY, NRNG) Use Laplace/ Spline interpolation to grid scattered data to an XZ grid.
The gridding algorithm is discussed under SCAT2GRIDLAPLACE_XY. For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
Ch3 Sec2.3 SCAT2GRIDLAPLACE_XT
SCAT2GRIDLAPLACE_XT(XPTS, TPTS, F, XAXPTS, TAXPTS, CAY, NRNG) Use Laplace/ Spline interpolation to grid scattered data to an XT grid.
The gridding algorithm is discussed under SCAT2GRIDLAPLACE_XY. For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
Ch3 Sec2.3 SCAT2GRIDLAPLACE_YT
SCAT2GRIDLAPLACE_YT(YPTS, TPTS, F, YAXPTS, TAXPTS, CAY, NRNG) Use Laplace/ Spline interpolation to grid scattered data to a YT grid.
The gridding algorithm is discussed under SCAT2GRIDLAPLACE_XY. For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
Ch3 Sec2.3 SCAT2GRIDLAPLACE_YZ
SCAT2GRIDLAPLACE_YZ(YPTS, ZPTS, F, YAXPTS, ZAXPTS, CAY, NRNG) Use Laplace/ Spline interpolation to grid scattered data to a YZ grid.
The gridding algorithm is discussed under SCAT2GRIDLAPLACE_XY. For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
Ch3 Sec2.3 SCAT2GRIDLAPLACE_ZT
SCAT2GRIDLAPLACE_ZT(ZPTS, TPTS, F, ZAXPTS, TAXPTS, CAY, NRNG) Use Laplace/ Spline interpolation to grid scattered data to a ZT grid.
The gridding algorithm is discussed under SCAT2GRIDLAPLACE_XY. For examples of the gridding functions, run the script objective_analysis_demo, or see the on-line demonstration
How to use Ferret External Functions for gridding scattered data points
Discrete Sampling Geometries (DSG) data:
Observational data from DSG datasets consist of collections of data which, while not truly scattered, have lists of coordinate information that can feed into these functions for gridding onto a multi-dimensional grid. However, with the exception of trajectory data, the coordinates do not all have the same dimensionality. Timeseries data, for instance, has one location per station but multiple observations in time. Profile data has a station location and time, and multiple heights/depths in each profile. For the SCAT2GRID functions we need all of the first arguments, the "scattered" locations and the observations, to be of the same size. We need to expand the station data, repeating the station location for each time in that station's time series.
The FAQ, Calling SCAT2GRID functions for DSG data, explains how to work with these datasets.
SORTI(DAT): Returns indices of data, sorted on the I axis in increasing order
SORTI_STR(DAT): returns indices of data, sorted on the I axis in increasing order. If SORTI is called with a string variable as the argument, SORTI_STR is run by Ferret.
Arguments: DAT variable to sort
Result Axes: X ABSTRACT, same length as DAT x-axis
Y Inherited from DAT
Z Inherited from DAT
T Inherited from DAT
SORTI, SORTJ, SORTK, SORTL, SORTM, and SORTN return the indices of the data after it has been sorted. These functions are used in conjunction with functions such as the SAMPLE functions to do sorting
and sampling. See the demonstration ef_sort_demo.jnl for common usage of these functions.
As with other functions which change axes, specify any region information for the variable DAT explicitly in the function call. See the discussion of grid-changing functions.
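A typical pairing of SORTI with SAMPLEI (a sketch; the variable airt and the region are assumptions):
yes? LET xindices    = SORTI( airt[X=160E:180E,Y=0,L=1] )
yes? LET airt_sorted = SAMPLEI( airt[X=160E:180E,Y=0,L=1], xindices )
yes? LIST airt_sorted   ! values in increasing order along an ABSTRACT X axis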
SORTJ(DAT) Returns indices of data, sorted on the J axis in increasing order
SORTJ_STR(DAT): returns indices of data, sorted on the J axis in increasing order. If SORTJ is called with a string variable as the argument, SORTJ_STR is run by Ferret.
Arguments: DAT variable to sort
Result Axes: X Inherited from DAT
Y ABSTRACT, same length as DAT y-axis
Z Inherited from DAT
T Inherited from DAT
See the discussion under SORTI
SORTK(DAT) Returns indices of data, sorted on the K axis in increasing order
SORTK_STR(DAT): returns indices of data, sorted on the K axis in increasing order. If SORTK is called with a string variable as the argument, SORTK_STR is run by Ferret.
Arguments: DAT variable to sort
Result Axes: X Inherited from DAT
Y Inherited from DAT
Z ABSTRACT, same length as DAT z-axis
T Inherited from DAT
See the discussion under SORTI
SORTL(DAT) Returns indices of data, sorted on the L axis in increasing order
SORTL_STR(DAT): returns indices of data, sorted on the L axis in increasing order. If SORTL is called with a string variable as the argument, SORTL_STR is run by Ferret.
Arguments: DAT variable to sort
Result Axes: X Inherited from DAT
Y Inherited from DAT
Z Inherited from DAT
T ABSTRACT, same length as DAT t-axis
See the discussion under SORTI
SORTM(DAT) Returns indices of data, sorted on the M axis in increasing order
SORTM_STR(DAT): returns indices of data, sorted on the M axis in increasing order. If SORTM is called with a string variable as the argument, SORTM_STR is run by Ferret.
Arguments: DAT variable to sort
Result Axes: X Inherited from DAT
Y Inherited from DAT
Z Inherited from DAT
T Inherited from DAT
E ABSTRACT, same length as DAT e-axis
F Inherited from DAT
See the discussion under SORTI
SORTN(DAT) Returns indices of data, sorted on the N axis in increasing order
SORTN_STR(DAT): returns indices of data, sorted on the N axis in increasing order. If SORTN is called with a string variable as the argument, SORTN_STR is run by Ferret.
Arguments: DAT variable to sort
Result Axes: X Inherited from DAT
Y Inherited from DAT
Z Inherited from DAT
T Inherited from DAT
E Inherited from DAT
F ABSTRACT, same length as DAT f-axis
See the discussion under SORTI
TAUTO_COR(A): Compute autocorrelation function (ACF) of time series, lags of 0,...,N-1, where N is the length of the time axis.
Arguments: A A function of time, and perhaps x,y,z
Result Axes: X Inherited from A
Y Inherited from A
Z Inherited from A
T ABSTRACT, same length as A time axis (lags)
TAUTO_COR is a "grid-changing" function; its output grid is different from the input arguments. Therefore it is best to use explicit limits on the first argument rather than a SET REGION command. See
the discussion of grid-changing functions.
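A quick sketch (dataset, variable, and time range are assumptions):
yes? USE monthly_navy_winds
yes? LET acf = TAUTO_COR( uwnd[X=180,Y=0,L=1:120] )
yes? PLOT acf    ! autocorrelation as a function of lag; the lag-0 value is 1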
XAUTO_COR(A): Compute autocorrelation function (ACF) of a series in X, lags of 0,...,N-1, where N is the length of the x axis.
Arguments: A A function of x, and perhaps y,z,t
Result Axes: X ABSTRACT, same length as X axis of A (lags)
Y Inherited from A
Z Inherited from A
T Inherited from A
XAUTO_COR is a "grid-changing" function; its output grid is different from the input arguments. Therefore it is best to use explicit limits on the first argument rather than a SET REGION command. See
the discussion of grid-changing functions.
TAX_DAY(A, B): Returns days of month of time axis coordinate values
Arguments: A time steps to convert
B variable with reference time axis; use region settings to restrict this to a small region (see below)
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE: The variable given for argument 2 is loaded into memory by this function, so restrict its range in X, Y, and/or Z so that it is not too large. Or, if you are getting the time steps from the
variable, specify the time steps for both arguments. See the examples in the discussion of the TAX_DATESTRING function
[Caution: Previous to Ferret v6.8 only: Although Ferret stores and computes with coordinate values in double-precision, argument 1 of this function is a single-precision variable. This function
translates the values in argument 1 to the nearest double-precision coordinate value of the time axis of argument 2 when possible. If the truncation to single precision in an expression t[gt=var] has
resulted in duplicate single-precision values the function will return an error message. See the discussion under TAX_DATESTRING.] With Ferret v6.8 and after, all calculations are done in double
precision and this is not a problem.
yes? use "http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/reynolds_sst_wk.nc"
yes? list/t=1-feb-1982:15-mar-1982 tax_day(t[gt=wsst], wsst[i=1,j=1])
VARIABLE : TAX_DAY (T[GT=WSST], WSST[I=1,J=1])
DATA SET : Reynolds Optimum Interpolation Weekly SST Analysis
FILENAME : reynolds_sst_wk.nc
FILEPATH : http://ferret.pmel.noaa.gov/thredds/dodsC/data/PMEL/
SUBSET : 7 points (TIME)
02-FEB-1982 00 / 5: 2.00
09-FEB-1982 00 / 6: 9.00
16-FEB-1982 00 / 7: 16.00
23-FEB-1982 00 / 8: 23.00
02-MAR-1982 00 / 9: 2.00
09-MAR-1982 00 / 10: 9.00
16-MAR-1982 00 / 11: 16.00
TAX_DAYFRAC(A, B): Returns fraction of day from time axis coordinate values
Arguments: A time steps to convert
B variable with reference time axis; use region settings to restrict this to a small region (see below)
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE: The variable given for argument 2 is loaded into memory by this function, so restrict its range in X, Y, and/or Z so that it is not too large. Or, if you are getting the time steps from the
variable, specify the time steps for both arguments. See the examples in the discussion of the TAX_DATESTRING function.
[Caution: Previous to Ferret v6.8 only: Although Ferret stores and computes with coordinate values in double-precision, argument 1 of this function is a single-precision variable. This function
translates the values in argument 1 to the nearest double-precision coordinate value of the time axis of argument 2 when possible. If the truncation to single precision in an expression t[gt=var] has
resulted in duplicate single-precision values the function will return an error message. See the discussion under TAX_DATESTRING.] With Ferret v6.8 and after, all calculations are done in double
precision and this is not a problem.
Example: Show both the TAX_DATESTRING and TAX_DAYFRAC output. Because this is a climatological axis, there are no years in the dates.
yes? use coads_climatology
yes? let tt = t[gt=sst, L=1:5]
yes? let tvar = sst[x=180,y=0]
yes? list tax_datestring(tt, tvar, "hour"), tax_dayfrac(tt,tvar)
DATA SET: /home/users/tmap/ferret/linux/fer_dsets/data/coads_climatology.cdf
TIME: 01-JAN 00:45 to 01-JUN 05:10
Column 1: EX#1 is TAX_DATESTRING(TT, TVAR, "hour")
Column 2: EX#2 is TAX_DAYFRAC(TT,TVAR)
EX#1 EX#2
16-JAN / 1: "16-JAN 06" 0.2500
15-FEB / 2: "15-FEB 16" 0.6869
17-MAR / 3: "17-MAR 02" 0.1238
16-APR / 4: "16-APR 13" 0.5606
16-MAY / 5: "16-MAY 23" 0.9975
TAX_JDAY(A, B): Returns day of year from time axis coordinate values
Arguments: A time steps to convert
B variable with reference time axis; use region settings to restrict this to a small region (see below)
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE 1:The variable given for argument 2 is loaded into memory by this function, so restrict its range in X, Y, and/or Z so that it is not too large. Or, if you are getting the time steps from the
variable, specify the time steps for both arguments. See also the examples in the discussion of the TAX_DATESTRING function.
NOTE 2: This function currently returns the date relative to a Standard (Gregorian) calendar. If the time axis is on a different calendar this is NOT taken into account. This will be fixed in a
future Ferret release.
[Caution: Previous to Ferret v6.8 only: Although Ferret stores and computes with coordinate values in double-precision, argument 1 of this function is a single-precision variable. This function
translates the values in argument 1 to the nearest double-precision coordinate value of the time axis of argument 2 when possible. If the truncation to single precision in an expression t[gt=var] has
resulted in duplicate single-precision values the function will return an error message. See the discussion under TAX_DATESTRING.] With Ferret v6.8 and after, all calculations are done in double
precision and this is not a problem.
yes? use coads_climatology
yes? list tax_jday(t[gt=sst], sst[i=1,j=1])
VARIABLE : TAX_JDAY(T[GT=SST], SST[I=1,J=1])
FILENAME : coads_climatology.cdf
FILEPATH : /home/porter/tmap/ferret/linux/fer_dsets/data/
SUBSET : 12 points (TIME)
16-JAN / 1: 16.0
15-FEB / 2: 46.0
17-MAR / 3: 77.0
16-APR / 4: 107.0
16-MAY / 5: 137.0
16-JUN / 6: 168.0
16-JUL / 7: 198.0
16-AUG / 8: 229.0
15-SEP / 9: 259.0
16-OCT / 10: 290.0
15-NOV / 11: 320.0
16-DEC / 12: 351.0
TAX_MONTH(A, B): Returns months from time axis coordinate values
Arguments: A time steps to convert
B variable with reference time axis; use region settings to restrict this to a small region (see below)
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE: The variable given for argument 2 is loaded into memory by this function, so restrict its range in X, Y, and/or Z so that it is not too large. Or, if you are getting the time steps from the
variable, specify the time steps for both arguments. See also the examples in the discussion of the TAX_DATESTRING function.
[Caution: Previous to Ferret v6.8 only: Although Ferret stores and computes with coordinate values in double-precision, argument 1 of this function is a single-precision variable. This function
translates the values in argument 1 to the nearest double-precision coordinate value of the time axis of argument 2 when possible. If the truncation to single precision in an expression t[gt=var] has
resulted in duplicate single-precision values the function will return an error message. See the discussion under TAX_DATESTRING.] With Ferret v6.8 and after, all calculations are done in double
precision and this is not a problem.
yes? use "http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/reynolds_sst_wk.nc"
yes? list/t=18-jan-1982:15-mar-1982 tax_month(t[gt=wsst], wsst[i=1,j=1])
VARIABLE : TAX_MONTH(T[GT=WSST], WSST[I=1,J=1])
DATA SET : Reynolds Optimum Interpolation Weekly SST Analysis
FILENAME : reynolds_sst_wk.nc
FILEPATH : http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/
SUBSET : 9 points (TIME)
19-JAN-1982 00 / 3: 1.000
26-JAN-1982 00 / 4: 1.000
02-FEB-1982 00 / 5: 2.000
09-FEB-1982 00 / 6: 2.000
16-FEB-1982 00 / 7: 2.000
23-FEB-1982 00 / 8: 2.000
02-MAR-1982 00 / 9: 3.000
09-MAR-1982 00 / 10: 3.000
16-MAR-1982 00 / 11: 3.000
TAX_UNITS(A): Returns units from time axis coordinate values, in seconds
Arguments: A variable with reference time axis
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE: to get the time units as a string use the RETURN=tunits keyword
yes? use "http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/reynolds_sst_wk.nc"
yes? say `wsst,return=tunits`
!-> MESSAGE/CONTINUE DAYS
yes? list tax_units(t[gt=wsst])
VARIABLE : TAX_UNITS(T[GT=WSST])
DATA SET : Reynolds Optimum Interpolation Weekly SST Analysis
FILENAME : reynolds_sst_wk.nc
FILEPATH : http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/
TAX_YEAR(A, B): Returns years of time axis coordinate values
Arguments: A time steps to convert
B variable with reference time axis; use region settings to restrict this to a small region (see below)
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE :The variable given for argument 2 is loaded into memory by this function, so restrict its range in X, Y, and/or Z so that it is not too large. Or, if you are getting the time steps from the
variable, specify the time steps for both arguments. See also the examples in the discussion of the TAX_DATESTRING function.
[Caution: Previous to Ferret v6.8 only: Although Ferret stores and computes with coordinate values in double-precision, argument 1 of this function is a single-precision variable. This function
translates the values in argument 1 to the nearest double-precision coordinate value of the time axis of argument 2 when possible. If the truncation to single precision in an expression t[gt=var] has
resulted in duplicate single-precision values the function will return an error message. See the discussion under TAX_DATESTRING.] With Ferret v6.8 and after, all calculations are done in double
precision and this is not a problem.
yes? use "http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/COADS/coads_sst.cdf"
yes? list/L=1403:1408 tax_year(t[gt=sst], sst[i=1,j=1])
VARIABLE : TAX_YEAR(T[GT=SST], SST[I=1,J=1])
DATA SET : COADS Surface Marine Observations (1854-1993)
FILENAME : coads_sst.cdf
FILEPATH : http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/COADS/
SUBSET : 6 points (TIME)
15-NOV-1970 00 / 1403: 1970.
15-DEC-1970 00 / 1404: 1970.
15-JAN-1971 00 / 1405: 1971.
15-FEB-1971 00 / 1406: 1971.
15-MAR-1971 00 / 1407: 1971.
15-APR-1971 00 / 1408: 1971.
TAX_YEARFRAC(A, B): Returns fraction of year of time axis coordinate values
Arguments: A time steps to convert
B variable with reference time axis; use region settings to restrict this to a small region (see below)
Result Axes: X normal
Y normal
Z normal
T Inherited from A
NOTE: The variable given for argument 2 is loaded into memory by this function, so restrict its range in X, Y, and/or Z so that it is not too large. Or, if you are getting the time steps from the
variable, specify the time steps for both arguments. See also the examples in the discussion of the TAX_DATESTRING function.
[Caution: Previous to Ferret v6.8 only: Although Ferret stores and computes with coordinate values in double-precision, argument 1 of this function is a single-precision variable. This function
translates the values in argument 1 to the nearest double-precision coordinate value of the time axis of argument 2 when possible. If the truncation to single precision in an expression t[gt=var] has
resulted in duplicate single-precision values the function will return an error message. See the discussion under TAX_DATESTRING.] With Ferret v6.8 and after, all calculations are done in double
precision and this is not a problem.
yes? use "http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/COADS/coads_sst.cdf"
yes? list/L=1403:1408 tax_yearfrac(t[gt=sst], sst[i=1,j=1])
VARIABLE : TAX_YEARFRAC(T[GT=SST], SST[I=1,J=1])
DATA SET : COADS Surface Marine Observations (1854-1993)
FILENAME : coads_sst.cdf
FILEPATH : http://ferret.pmel.noaa.gov/pmel/thredds/dodsC/data/PMEL/COADS/
SUBSET : 6 points (TIME)
15-NOV-1970 00 / 1403: 0.8743
15-DEC-1970 00 / 1404: 0.9563
15-JAN-1971 00 / 1405: 0.0410
15-FEB-1971 00 / 1406: 0.1257
15-MAR-1971 00 / 1407: 0.2049
15-APR-1971 00 / 1408: 0.2896
Chapter 3, Section 2.3 MINMAX
MINMAX(A): Returns the minimum and maximum of the argument
Arguments: A variable to find the extrema
Result Axes: X abstract: length 2
Y normal
Z normal
T normal
The result is returned on a 2-point abstract axis, with result[i=1]=min, result[i=2]=max
yes? let v = {1,4,56,8,12,14,,-0.5}
yes? list MINMAX(v)
VARIABLE : MINMAX(V)
SUBSET : 2 points (X)
1 / 1: -0.50
2 / 2: 56.00
Chapter 3, Section 2.4 TRANSFORMATIONS
Transformations (e.g., averaging, integrating, etc.) may be specified along the axes of a variable. Some transformations (e.g., averaging, minimum, maximum) reduce a range of data to a point; others
(e.g., differentiating or smoothers) retain the range.
When transformations are specified along more than one axis of a single variable the order of execution is X axis first, then Y, Z, T, E, and F.
The regridding transformations are described in the chapter "Grids and Regions".
Example syntax: TEMP[Z=0:100@LOC:20] (depth at which temp has value 20)
Valid transformations are
Transform Default Description
@DIN definite integral (weighted sum)
@IIN indefinite integral (weighted running sum)
@ITP linear interpolation
@AVE average
@VAR unweighted variance
@STD standard deviation
@MIN minimum
@MAX maximum
@MED 3 pt median smoothed
@SMN 3 pt minimum smoothed, using min
@SMX 3 pt maximum smoothed, using max
@SHF 1 pt shift
@SBX 3 pt boxcar smoothed
@SBN 3 pt binomial smoothed
@SHN 3 pt Hanning smoothed
@SPZ 3 pt Parzen smoothed
@SWL 3 pt Welch smoothed
@DDC centered derivative
@DDF forward derivative
@DDB backward derivative
@NGD number of valid points
@NBD number of bad (invalid) points flagged
@SUM unweighted sum
@RSUM running unweighted sum
@FAV 3 pt fill missing values with average
@FLN 1 pt fill missing values by linear interpolation
@FNR 1 pt fill missing values with nearest point
@LOC 0 coordinate of ... (e.g., depth of 20 degrees)
@WEQ "weighted equal" (integrating kernel)
@CDA closest distance above
@CDB closest distance below
@CIA closest index above
@CIB closest index below
The command SHOW TRANSFORM will produce a list of currently available transformations.
U[Z=0:100@AVE] – average of u between 0 and 100 in Z
sst[T=@SBX:10] – box-car smooths sst with a 10 time point filter
tau[L=1:25@DDC] – centered time derivative of tau
v[L=@IIN] – indefinite (accumulated) integral of v
qflux[X=@AVE,Y=@AVE] – XY area-averaged qflux
General information about transformations
Transformations are normally computed axis by axis; if multiple axes have transformations specified simultaneously (e.g., U[Z=@AVE,L=@SBX:10]) the transformations will be applied sequentially in the
order X then Y then Z then T, E, and F. There are a few exceptions to this: if @DIN is applied simultaneously to both the X and Y axes (in units of degrees of longitude and latitude, respectively)
the calculation will be carried out on a per-unit-area basis (as a true double integral) instead of a per-unit-length basis, sequentially. This ensures that the COSINE(latitude) factors will be
applied correctly. The same applies to @AVE, @VAR, and @STD simultaneously on X and Y as well as @NGD and @NBD for any combination of directions on the variable's grid.
If one of the multiple-direction transformations (AVE, DIN, VAR, STD, NGD, NBD) is specified, but the grid contains only one of those directions, the 1-dimensional transformation is computed instead.
(Prior to Ferret v7.41, when one of the multi-direction transformations was requested, say an XY average, Ferret required that the grid have an x and a y axis.) For all transformations that operate
axis by axis, a transformation specified for any axis not on the grid is ignored.
Data that are flagged as invalid are excluded from calculations.
All integrations and averaging are accomplished by multiplying the width of each grid box or portion of the box by the value of the variable in that grid box—then summing and dividing as appropriate
for the particular transformation.
If integration or averaging limits are given as world coordinates, the grid boxes at the edges of the region specified are weighted according to the fraction of grid box that actually lies within the
specified region. If the transformation limits are given as subscripts, the full box size of each grid point along the axis is used—including the first and last subscript given. The region
information that is listed with the output reflects this.
Some transformations (derivatives, shifts, smoothers) require data points from beyond the edges of the indicated region in order to perform the calculation. Ferret automatically accesses this data as
needed. It flags edge points as missing values if the required beyond-edge points are unavailable (e.g., @DDC applied on the X axis at I=1).
When calculating integrals and derivatives (@IIN, @DIN, @DDC, @DDF, and @DDB) Ferret attempts to use standardized units for the grid coordinates. If the underlying axis is in a known unit of length
Ferret converts grid box lengths to meters. If the underlying axis is in a known unit of time Ferret converts grid box lengths to seconds, using the definition for the calendar of that axis. If the
underlying axis is degrees of longitude a factor of COSINE (latitude) is applied to the grid box lengths in meters.
If the underlying axis units are unknown Ferret uses those unknown units for the grid box lengths. (If Ferret does not recognize the units of an axis it displays a message to that effect when the
DEFINE AXIS or SET DATA command defines the axis.) See command DEFINE AXIS/UNITS in the Commands Reference in this manual for a list of recognized units.
Version 7.6 includes support for Discrete Sampling Geometries (DSG) datasets, containing collections of Timeseries, Profiles, Trajectories, and Points. When transformations are applied to data from
a DSG file the transformation applies only within each feature. That is, for a collection of profiles the Maximum value or Indefinite Integral or Smoothing operation or Number of valid points is
computed for one profile at a time with the calculation restarting on the next feature.
Chapter 3, Section 2.4.2
Transformations applied to irregular regions
Since transformations are applied along the orthogonal axes of a grid they lend themselves naturally to application over "rectangular" regions (possibly in 3 or 4 dimensions). Ferret has sufficient
flexibility, however, to perform transformations over irregular regions.
Suppose, for example, that we wish to determine the average wind speed within an irregularly shaped region of the globe defined by a threshold sea surface temperature value. We can do this through
the creation of a mask, as in this example:
yes? SET DATA coads_climatology
yes? SET REGION/l=1/@t ! January in the Tropical Pacific
yes? LET sst28_mask = IF sst GT 28 THEN 1
yes? LET masked_wind_speed = wspd * sst28_mask
yes? LIST masked_wind_speed[X=@AVE,Y=@AVE]
The variable sst28_mask is a collection of 1's and missing values. Using it as a multiplier on the wind speed field produces a new result that is undefined except in the domain of interest.
When using masking be aware of these considerations:
• Use undefined values rather than zeros to avoid contaminating the calculation with zero values. The above command, where sst28_mask is defined with no ELSE clause, automatically implies "ELSE bad-value".
• If the data contains values that are exactly Zero, then "IF var" (comparison to the missing-flag for the variable) will be false when the data is missing and ALSO when its value is zero. This is
because Ferret does not have a logical data type, and so zero is equivalent to FALSE. See the documentation on the IF syntax. To handle this case, you can get the missing-value for the variable
and use it to define the expression.
yes? LET mask = IF var NE `var,RETURN=bad` THEN 1
• The masked region is composed of rectangles at the level of resolution of the gridded variables; the mask does NOT follow smooth contour lines. To obtain a smoother mask it may be desirable to
regrid the calculation to a finer grid.
• Variables from different data sets can be used to mask one another. For example, the ETOPO60 bathymetry data set can be used to mask regions of land and sea.
General information about smoothing transformations
Ferret provides several transformations for smoothing variables (removing high frequency variability). These transformations replace each value on the grid to which they are applied with a weighted
average of the surrounding data values along the axis specified. For example, the expression u[T=@SPZ:3] replaces the value at each (I,J,K,L) grid point of the variable "u" with the weighted average
u at t = 0.25*(u at t-1) + 0.5*(u at t) + 0.25*(u at t+1)
The various choices of smoothing transformations (@SBX, @SBN, @SPZ, @SHN, @SWL) represent different shapes of weighting functions or "windows" with which the original variable is convolved. New
window functions can be obtained by nesting the simple ones provided. For example, using the definitions
yes? LET ubox = u[L=@SBX:15]
yes? LET utaper = ubox[L=@SHN:7]
produces a 21-point window whose shape is a boxcar (constant weight) with COSINE (Hanning) tapers at each end.
Ferret may be used to directly examine the shape of any smoothing window: Mathematically, the shape of the smoothing window can be recovered as a variable by convolving it with a delta function. In
the example below we examine (PLOT) the shape of a 15-point Welch window (Figure 3_4).
! define X axis as [-1,1] by 0.2
yes? GO unit_square
yes? SET REGION/X=-1:1
yes? LET delta = IF X EQ 0 THEN 1 ELSE 0
! convolve delta with Welch window
yes? PLOT delta[I=@SWL:15]
Chapter 3, Section 2.4.4
@DIN—definite integral
The transformation @DIN computes the definite integral—a single value that is the integral between two points along an axis (compare with @IIN). It is obtained as the sum of the grid_box*variable
product at each grid point. Grid points at the ends of the indicated range are weighted by the fraction of the grid box that falls within the integration interval.
If @DIN is specified simultaneously on multiple axes the calculation will be performed as a multiple integration rather than as sequential single integrations. The output will document this fact by
indicating a transformation of "@IN4" or "XY integ." See the General Information on transformations for important details about this transformation. (In particular note that when the limits are given
in index values, the transformation includes the entire interval of both endpoints; if it is given in world coordinates, it uses partial grid cells out to those world limits.)
yes? CONTOUR/X=160E:160W/Y=5S:5N u[Z=0:50@DIN]
In a latitude/longitude coordinate system X=@DIN is sensitive to the COS(latitude) correction.
Integration over complex regions in space may be achieved by masking the multi-dimensional variable in question and using the multi-dimensional form of @DIN. For example
yes? LET salinity_where_temp_gt_15 = IF temp GT 15 THEN salt
yes? LIST salinity_where_temp_gt_15[X=@DIN,Y=@DIN,Z=@DIN]
Chapter 3, Section 2.4.5
@IIN—indefinite integral
The transformation @IIN computes the indefinite integral—at each subscript of the result it is the value of the integral from the start value to the upper edge of that grid box. It is obtained as a
running sum of the grid_box*variable product at each grid point. Grid points at the ends of the indicated range are weighted by the fraction of the grid box that falls within the integration
interval. See the General Information on transformations for important details about this transformation.
In particular, it's important to pay attention to how the limits are specified. If the limits of integration are given as world coordinates (var[X=10:22@IIN]), the grid boxes at the edges of the
region specified are weighted according to the fraction of grid box that actually lies within the specified region. If the transformation limits are given as subscripts (var[I=1:13@IIN]), then the
full box size of each grid point along the axis is used—including the first and last subscript given.
yes? CONTOUR/X=160E:160W/Z=0 u[Y=5S:5N@IIN]
Note 1: The indefinite integral is always computed in the increasing coordinate direction. To compute the indefinite integral in the reverse direction use
LET reverse_integral = my_var[X=lo:hi@DIN] - my_var[X=lo:hi@IIN]
Note 2: In a latitude/longitude coordinate system X=@IIN is sensitive to the COS(latitude) correction.
Note 3: The result of the indefinite integral is shifted by 1/2 of a grid cell from its "proper" location. This is because the result at each grid cell includes the integral computed to the upper end
of that cell. (This was necessary in order that var[I=lo:hi@DIN] and var[I=lo:hi@IIN] produce consistent results.)
To illustrate, consider these commands
yes? LET one = x-x+1
yes? LIST/I=1:3 one[I=@din]
X: 0.5 to 3.5 (integrated)
yes? LIST/I=1:3 one[I=@iin]
indef. integ. on X
1 / 1: 1.000
2 / 2: 2.000
3 / 3: 3.000
The grid cell at I=1 extends from 0.5 to 1.5. The value of the integral at 1.5 is 1.000 as reported but the coordinate listed for this value is 1 rather than 1.5. Two methods are available to correct
for this 1/2 grid cell shift.
Method 1: correct the result by subtracting the 1/2 grid cell error
yes? LIST/I=1:3 one[I=@iin] - one/2
ONE[I=@IIN] - ONE/2
1 / 1: 0.500
2 / 2: 1.500
3 / 3: 2.500
Method 2: correct the coordinates by shifting the axis 1/2 of a grid cell
yes? DEFINE AXIS/X=1.5:3.5:1 xshift
yes? LET SHIFTED_INTEGRAL = one[I=@IIN]
yes? LET corrected_integral = shifted_integral[GX=xshift@ASN]
yes? LIST/I=1:3 corrected_integral
1.5 / 1: 1.000
2.5 / 2: 2.000
3.5 / 3: 3.000
Method 3: Use the @IIN regridding transformation (new in Ferret v7.4.2). When the destination axis for the regridding is the same as the source axis, the result of that transformation is 0 at the
bottom of the first grid cell, and is equal to the definite integral at the top of the uppermost grid cell.
yes? define axis/x=1:3:1 x3ax
yes? let ones = one[gx=x3ax@asn]
! the transformation @IIN
yes? LIST ones[I=@iin]
VARIABLE : ONE[GX=X3AX@ASN]
indef. integ. on X
SUBSET : 3 points (X)
1 / 1: 1.000
2 / 2: 2.000
3 / 3: 3.000
! The regridding transformation, regrid to the source axis.
yes? LIST ones[gx=x3ax@iin]
VARIABLE : ONE[GX=X3AX@ASN]
regrid: 1 delta on X@IIN
SUBSET : 3 points (X)
1 / 1: 0.500
2 / 2: 1.500
3 / 3: 2.500
Chapter 3, Section 2.4.6
@AVE—average
The transformation @AVE computes the average weighted by grid box size—a single number representing the average of the variable between two endpoints.
If @AVE is specified simultaneously on multiple axes the calculation will be performed as a multiple integration rather than as sequential single integrations. The output will document this fact by
showing @AV4 or "XY ave" as the transformation.
See the General Information on transformations for important details about this transformation. In particular, note the discussion about specifying the averaging interval in world coordinates, e.g.
longitude, meters, time as in var[x=3.4:4.6@AVE] versus specifying the interval using indices, as in var[I=4:12@AVE]. When the interval is expressed in world coordinates, the weighting is done using
partial grid boxes at the edges of the interval. If the interval is expressed using indices, the entire grid cells contribute to the weights.
The @WGT transformation returns the weights used by @AVE
yes? CONTOUR/X=160E:160W/Y=5S:5N u[Z=0:50@AVE]
Note that the unweighted mean can be calculated using the @SUM and @NGD transformations.
Averaging over complex regions in space may be achieved by masking the multi-dimensional variable in question and using the multi-dimensional form of @AVE. For example
yes? LET salinity_where_temp_gt_15 = IF temp GT 15 THEN salt
yes? LIST salinity_where_temp_gt_15[X=@AVE,Y=@AVE,Z=@AVE]
When we use var[x=@AVE] Ferret averages over the grid points of the variable along the X axis, using any region in X that is in place. If a specific range is given, as in X=x1:x2@AVE, then Ferret uses
portions of grid cells to average over that exact region.
yes? USE coads_climatology
yes? LIST/L=1/Y=45 sst[x=301:305@AVE]
VARIABLE : SEA SURFACE TEMPERATURE (Deg C)
LONGITUDE: 59W to 55W (averaged)
LATITUDE : 45N
TIME : 16-JAN 06:00
yes? LET var = sst[x=301:305]
yes? LIST/L=1/Y=45 var
VARIABLE : SST[X=301:305]
SUBSET : 3 points (LONGITUDE)
LATITUDE : 45N
TIME : 16-JAN 06:00
59W / 141: 2.231
57W / 142: 2.604
55W / 143: 3.183
yes? LIST/L=1/Y=45 var[x=@AVE]
VARIABLE : SST[X=301:305]
LONGITUDE: 60W to 54W (averaged)
LATITUDE : 45N
TIME : 16-JAN 06:00
The last average is taken not from a specific X to another specific X, but over all grid cells in the range where the variable var is defined. Note in each listing the LONGITUDE range of the average.
Chapter 3, Section 2.4.7
@VAR—weighted variance
The transformation @VAR computes the weighted variance of the variable with respect to the indicated region (ref. Numerical Recipes, The Art of Scientific Computing, by William H. Press et al., 1986).
As with @AVE, if @VAR is applied simultaneously to multiple axes the calculation is performed as the variance of a block of data rather than as nested 1-dimensional variances. See the General
Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.7A
@STD—weighted standard deviation
The transformation @STD computes the weighted standard deviation of the variable with respect to the indicated region (ref. Numerical Recipes, The Art of Scientific Computing, by William H. Press et
al., 1986).
As with @AVE, if @STD is applied simultaneously to multiple axes the calculation is performed as the standard deviation of a block of data rather than as nested 1-dimensional standard deviation
computations. See the General Information on transformations for important details about this transformation.
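For example, a minimal sketch using the coads_climatology data set (the region chosen here is illustrative):
yes? USE coads_climatology
yes? LIST/L=1 sst[X=160E:160W@VAR,Y=5S:5N@VAR]   ! variance over the XY block
yes? LIST/L=1 sst[X=160E:160W@STD,Y=5S:5N@STD]   ! standard deviation over the same block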
Chapter 3, Section 2.4.8
@MIN—minimum
The transformation @MIN finds the minimum value of the variable within the specified axis range. See the General Information on transformations for important details about this transformation.
For fixed Z and Y
yes? PLOT/T="1-JAN-1982":"1-JAN-1983" temp[X=160E:160W@MIN]
plots a time series of the minimum temperature found between longitudes 160 east and 160 west.
Chapter 3, Section 2.4.9
@MAX—maximum
The transformation @MAX finds the maximum value of the variable within the specified axis range. See also @MIN. See the General Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.10
@SHF:n—shift
The transformation @SHF shifts the data up or down in subscript by the number of points given as the argument. The default is to shift by 1 point. See the General Information on transformations for
important details about this transformation.
U[L=@SHF:2] associates the value of U[L=3] with the subscript L=1.
U[L=@SHF:1] - U gives the forward difference of the variable U along the L axis.
Chapter 3, Section 2.4.11
@SBX:n—boxcar smoother
The transformation @SBX applies a boxcar window (running mean) to smooth the variable along the indicated axis. The width of the boxcar is the number of points given as an argument to the
transformation. The default width is 3 points. All points are weighted equally, regardless of the sizes of the grid boxes, making this transformation best suited to axes with equally spaced points.
If the number of points specified is even, however, @SBX weights the end points of the boxcar smoother as ½. See the General Information on transformations for important details about this transformation.
yes? PLOT/X=160W/Y=0 u[L=1:120@SBX:5]
The transformation @SBX does not reduce the number of points along the axis; it replaces each of the original values with the average of its surrounding points. Regridding can be used to reduce the
number of points.
Chapter 3, Section 2.4.12
@SBN:n—binomial smoother
The transformation @SBN applies a binomial window to smooth the variable along the indicated axis. The width of the smoother is the number of points given as an argument to the transformation. The
default width is 3 points. The weights are applied without regard to the widths of the grid boxes, making this transformation best suited to axes with equally spaced points. See the General
Information on transformations for important details about this transformation.
yes? PLOT/X=160W/Y=0/Z=0 u[L=1:120@SBN:15]
The transformation @SBN does not reduce the number of points along the axis; it replaces each of the original values with a weighted sum of its surrounding points. Regridding can be used to reduce
the number of points. The argument specified with @SBN, the number of points in the smoothing window, must be an odd value; an even value would result in an effective shift of the data along its axis.
Chapter 3, Section 2.4.13
@SHN:n—Hanning smoother
Transformation @SHN applies a Hanning window to smooth the variable along the indicated axis (ref. Numerical Recipes, The Art of Scientific Computing, by William H. Press et al., 1986). In other
respects it is identical in function to the @SBN transformation. Note that the Hanning window used by Ferret contains only non-zero weight values within the window width. The default width is 3 points.
Some interpretations of this window function include zero weights at the end points. Use an argument of N-2 to achieve this effect (e.g., @SHN:5 is equivalent to a 7-point Hanning window which has
zeros as its first and last weights). See the General Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.14
@SPZ:n—Parzen smoother
Transformation @SPZ applies a Parzen window to smooth the variable along the indicated axis (ref. Numerical Recipes, The Art of Scientific Computing, by William H. Press et al., 1986). In other
respects it is identical in function to the @SBN transformation. The default window width is 3 points. See the General Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.15
@SWL:n—Welch smoother
Transformation @SWL applies a Welch window to smooth the variable along the indicated axis (ref. Numerical Recipes, The Art of Scientific Computing, by William H. Press et al., 1986). In other
respects it is identical in function to the @SBN transformation. The default window width is 3 points. See the General Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.16
@DDC—centered derivative
The transformation @DDC computes the derivative with respect to the indicated axis using a centered differencing scheme. The units of the underlying axis are treated as they are with integrations. If
the points of the axis are unequally spaced, note that the calculation used is still (F[i+1] – F[i–1]) / (X[i+1] – X[i–1]) . See the General Information on transformations for important details about
this transformation.
yes? PLOT/X=160W/Y=0/Z=0 u[L=1:120@DDC]
Chapter 3, Section 2.4.17
@DDF—forward derivative
The transformation @DDF computes the derivative with respect to the indicated axis. A forward differencing scheme is used. The units of the underlying axis are treated as they are with integrations.
See the General Information on transformations for important details about this transformation.
yes? PLOT/X=160W/Y=0/Z=0 u[L=1:120@DDF]
Chapter 3, Section 2.4.18
@DDB—backward derivative
The transformation @DDB computes the derivative with respect to the indicated axis. A backward differencing scheme is used. The units of the underlying axis are treated as they are with integrations.
See the General Information on transformations for important details about this transformation.
yes? PLOT/X=160W/Y=0/Z=0 u[L=1:120@DDB]
Chapter 3, Section 2.4.19
@NGD—number of good points
The transformation @NGD computes the number of good (valid) points of the variable with respect to the indicated axis. Use @NGD in combination with @SUM to determine the number of good points in a
multi-dimensional region.
This transformation may be applied to string variables; if there are null strings in the variable those are treated as missing.
Note that, as with @AVE, when @NGD is applied simultaneously to multiple axes the calculation is applied to the entire block of values rather than to the individual axes. See the General Information
on transformations for important details about this transformation.
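For example, a minimal sketch counting the valid SST points in an XY plane of coads_climatology:
yes? USE coads_climatology
yes? LIST/L=1 sst[X=@NGD,Y=@NGD]   ! number of valid points in the XY block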
Chapter 3, Section 2.4.20
@NBD—number of bad points
The transformation @NBD computes the number of bad (invalid) points of the variable with respect to the indicated axis. Use @NBD in combination with @SUM to determine the number of bad points in a
multi-dimensional region.
This transformation may be applied to string variables; if there are null strings in the variable those are treated as missing.
Note that, as with @AVE, when @NBD is applied simultaneously to multiple axes the calculation is applied to the entire block of values rather than to the individual axes. See the General Information
on transformations for important details about this transformation.
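Continuing the @NGD sketch above, the corresponding count of missing points is:
yes? LIST/L=1 sst[X=@NBD,Y=@NBD]   ! number of missing points in the XY block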
Chapter 3, Section 2.4.21
@SUM—unweighted sum
The transformation @SUM computes the unweighted sum (arithmetic sum) of the variable with respect to the indicated axis. This transformation is most appropriate for regions specified by subscript. If
the region is specified in world coordinates, the edge points are not weighted—they are wholly included in or excluded from the calculation, depending on the location of the grid points with respect
to the specified limits. See the General Information on transformations for important details about this transformation.
Note that when @SUM is applied simultaneously to multiple axes the calculation is applied to the entire block of values rather than to the individual axes. The output will document this fact by
showing "XY sum" as the transformation.
Chapter 3, Section 2.4.22
@RSUM—running unweighted sum
The transformation @RSUM computes the running unweighted sum of the variable with respect to the indicated axis. @RSUM is to @IIN as @SUM is to @DIN. The treatment of edge points is identical to
@SUM. See the General Information on transformations for important details about this transformation.
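A minimal sketch on a small constant array:
yes? LET v = {2,4,6,8}
yes? LIST v, v[I=@RSUM]   ! running sums are 2, 6, 12, 20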
Chapter 3, Section 2.4.23
@FAV:n—averaging filler
The transformation @FAV fills holes (values flagged as invalid) in variables with the average value of the surrounding grid points along the indicated axis. The width of the averaging window is the
number of points given as an argument to the transformation. The default is n=3. If an even value of n is specified, Ferret uses n+1 so that the average is centered. All of the surrounding points are
weighted equally, regardless of the sizes of the grid boxes, making this transformation best suited to axes with equally spaced points. If any of the surrounding points are invalid they are omitted
from the calculation. If all of the surrounding points are invalid the hole is not filled. See the General Information on transformations for important details about this transformation.
yes? CONTOUR/X=160W:160E/Y=5S:0 u[X=@FAV:5]
Chapter 3, Section 2.4.24
@FLN:n—linear interpolation filler
The transformation @FLN:n fills holes in variables with a linear interpolation from the nearest non-missing surrounding point. n specifies the number of points beyond the edge of the indicated axis
limits to include in the search for interpolants (default n = 1). Note that this does not mean it looks only n points beyond the edges of interior gaps, but only that it looks n points beyond the
limits given in the expression. Unlike @FAV, @FLN is sensitive to unevenly spaced points and computes its linear interpolation based on the world coordinate locations of grid points.
Any gap of missing values that has a valid data point on each end will be filled, regardless of the length of the gap. However, when a sub-region from the full span of the data is requested sometimes
a fillable gap crosses the border of the requested region. In this case the valid data point from which interpolation should be computed is not available. The parameter n tells Ferret how far beyond
the border of the requested region to look for a valid data point. See the General Information on transformations for important details about this transformation.
Example: To allow data to be filled only when gaps in i are less than 15 points, use the @CIA and @CIB transformations which return the distance from the nearest valid point.
yes? USE my_data
yes? LET allowed_gap = 15
yes? LET gap_size = my_var[i=@cia] + my_var[i=@cib]
yes? LET gap_mask = IF gap_size LE allowed_gap THEN 1
yes? LET my_answer = my_var[i=@fln] * gap_mask
Example: Showing the effect of the argument n
yes? let var = {1,,,,2,,,,6}
yes? list var
VARIABLE : {1,,,,2,,,,6}
SUBSET : 9 points (X)
1 / 1: 1.000
2 / 2: ....
3 / 3: ....
4 / 4: ....
5 / 5: 2.000
6 / 6: ....
7 / 7: ....
8 / 8: ....
9 / 9: 6.000
yes? ! x=3:9@fln:1 says look only one point beyond the requested limits of x=3:9 when filling
yes? list var[x=3:9@fln:1]
VARIABLE : {1,,,,2,,,,6}
linear-filled by 1 pts on X
SUBSET : 7 points (X)
3 / 3: ....
4 / 4: ....
5 / 5: 2.000
6 / 6: 3.000
7 / 7: 4.000
8 / 8: 5.000
9 / 9: 6.000
yes? ! x=3:9@fln:5 says look within 5 grid cells beyond the requested limits of x=3:9 when filling
yes? list var[x=3:9@fln:5]
VARIABLE : {1,,,,2,,,,6}
linear-filled by 5 pts on X
SUBSET : 7 points (X)
3 / 3: 1.500
4 / 4: 1.750
5 / 5: 2.000
6 / 6: 3.000
7 / 7: 4.000
8 / 8: 5.000
9 / 9: 6.000
Chapter 3, Section 2.4.25
@FNR—nearest neighbor filler
The transformation @FNR is similar to @FLN, except that it replicates the nearest point to the missing value. In the case of points being equally spaced around the missing point, the mean value is
used. See the General Information on transformations for important details about this transformation.
If an argument is given, var[X=30:34@FNR:2] then the transformation will return data from only within that index range around the given location, here 2 index values below x=30 to 2 index values
above x=34. However if the expression is given without a range of coordinates or indices to return, e.g. LET filled_var = var[X=@FNR] then any argument after @FNR will be ignored. The result will be
filled with the nearest value no matter what distance it is from the missing data.
Note that as is the case with most transformations, @FNR operates in one direction at a time, so that when var[x=@FNR,y=@FNR] is evaluated, var is filled first in X and then that result is filled
in Y. The function FILL_XY will often be a better option when doing a 2-D fill operation in X and Y.
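A minimal sketch using the same gappy constant array as in the @FLN example:
yes? LET var = {1,,,,2,,,,6}
yes? LIST var[X=@FNR]   ! interior gaps are filled with the nearest valid value; points equidistant from two valid values get their mean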
Chapter 3, Section 2.4.26
@LOC—location of
The transformation @LOC accepts an argument value—the default value is zero if no argument is specified. The transformation @LOC finds the single location at which the variable first assumes the
value of the argument. The result is in units of the underlying axis. Linear interpolation is used to compute locations between grid points. If the variable does not assume the value of the argument
within the specified region the @LOC transformation returns an invalid data flag. See also the discussion of @EVNT, the "event mask" transformation.
For example, temp[Z=0:200@LOC:18] finds the location along the Z axis (often depth in meters) at which the variable "temp" (often temperature) first assumes the value 18, starting at Z=0 and
searching to Z=200. See the General Information on transformations for important details about this transformation.
yes? CONTOUR/X=160E:160W/Y=10S:10N temp[Z=0:200@LOC:18]
produces a map of the depth of the 18-degree isotherm.
Note that the transformation @LOC can be used to locate non-constant values, too, as the following example illustrates:
Example: locating non-constant values
Determine the depth of maximum salinity.
yes? LET max_salt = salt[Z=@MAX]
yes? LET zero_at_max = salt - max_salt
yes? LET depth_of_max = zero_at_max[Z=@LOC:0]
Chapter 3, Section 2.4.27
@WEQ—weighted equal; integration kernel
The @WEQ ("weighted equal") transformation is the subtlest and arguably the most powerful transformation within Ferret. It is a generalized version of @LOC; @LOC always determines the value of the
axis coordinate (the variable X, Y, Z, or T) at the points where the gridded field has a particular value. More generally, @WEQ can be used to determine the value of any variable at those points. See
also the discussion of @EVNT, the "event mask" transformation. See the General Information on transformations for important details about this transformation.
Like @LOC, the transformation @WEQ finds the location along a given axis at which the variable is equal to the given (or default) argument. For example, V1[Z=@WEQ:5] finds the Z locations at which V1
equals "5". But whereas @LOC returns a single value (the linearly interpolated axis coordinate values at the locations of equality) @WEQ returns instead a field of the same size as the original
variable. For those two grid points that immediately bracket the location of the argument, @WEQ returns interpolation coefficients. For all other points it returns missing value flags. If the value
is found to lie identically on top of a grid point an interpolation coefficient of 1 is returned for that point alone. The default argument value is 0.0 if no argument is specified.
Example 1
yes? LET v1 = X/4
yes? LIST/X=1:6 v1, v1[X=@WEQ:1], v1[X=@WEQ:1.2]
X: 1 to 6
Column 1: V1 is X/4
Column 2: V1[X=@WEQ:1] is X/4 (weighted equal of 1 on X)
Column 3: V1[X=@WEQ: 1.200] is X/4 (weighted equal of 1.2 on X)
V1 V1 V1
1 / 1: 0.250 .... ....
2 / 2: 0.500 .... ....
3 / 3: 0.750 .... ....
4 / 4: 1.000 1.000 0.2000
5 / 5: 1.250 .... 0.8000
6 / 6: 1.500 .... ....
The resulting field can be used as an "integrating kernel," a weighting function that when multiplied by another field and summed will give the value of that new field at the desired location.
Example 2
Using variable v1 from the previous example, suppose we wish to know the value of the function X^2 (X squared) at the location where variable v1 has the value 1.2. We can determine it as follows:
yes? LET x_squared = X^2
yes? LET integrand = x_squared * v1[X=@WEQ:1.2]
yes? LIST/X=1:6 integrand[X=@SUM] !Ferret output below
VARIABLE : X_SQUARED * V1[X=@WEQ:1.2]
X : 1 to 6 (summed)
Notice that 23.20 = 0.8 * (5^2) + 0.2 * (4^2)
Below are two "real world" examples that produce fully labeled plots.
Example 3: salinity on an isotherm
Use the Levitus climatology to contour the salinity of the Pacific Ocean along the 20-degree isotherm (Figure 3_5).
yes? SET DATA levitus_climatology ! annual sub-surface climatology
yes? SET REGION/X=100E:50W/Y=45S:45N ! Pacific Ocean
yes? LET isotherm_20 = temp[Z=@WEQ:20] ! depth kernel for 20 degrees
yes? LET integrand_20 = salt * isotherm_20
yes? SET VARIABLE/TITLE="Salinity on the 20 degree isotherm" integrand_20
yes? CONTOUR/LEV=(33,37,.2)/SIZE=0.12 integrand_20[Z=@SUM]
yes? GO fland !continental fill
Example 4: month with warmest sea surface temperatures
Use the COADS data set to determine the month in which the SST is warmest across the Pacific Ocean. In this example we use the same principles as above to create an integrating kernel on the time
axis. Using this kernel we determine the value of the time step index (which is also the month number, 1–12) at the time of maximum SST (Figure 3_6).
yes? SET DATA coads_climatology ! monthly surface climatology
yes? SET REGION/X=100E:50W/Y=45S:45N ! Pacific Ocean
yes? SET MODE CAL:MONTH
yes? LET zero_at_warmest = sst - sst[l=@max]
yes? LET integrand = L[G=sst] * zero_at_warmest[L=@WEQ] ! "L" is 1 to 12
yes? SET VARIABLE/TITLE="Month of warmest SST" integrand
yes? SHADE/L=1:12/PAL=inverse_grayscale integrand[L=@SUM]
Example 5: values of variable at depths of a second variable:
Suppose I have V1(x,y,z) and MY_ZEES(x,y), and I want to find the values of V1 at depths MY_ZEES. The following will do that using @WEQ:
yes? LET zero_at_my_zees = Z[g=v1]-my_zees
yes? LET kernel = zero_at_my_zees[Z=@WEQ:0]
yes? LET integrand = kernel*v1
yes? LET v1_on_my_zees = integrand[Z=@SUM]
Chapter 3, Section 2.4.28
@ITP—linear interpolation
The @ITP transformation provides the same linear interpolation calculation that is turned on modally with SET MODE INTERPOLATE but with a higher level of control, as @ITP can be applied selectively
to each axis. @ITP may be applied only to point locations along an axis. The result is the linear interpolation based on the adjoining values. Interpolation can be applied on an axis by axis and
variable by variable basis like any other transformation. To apply interpolation use the transformation "@ITP" in the same way as, say, @SBX, specifying the desired location to which to interpolate.
For example, on a Z axis with grid points at Z=10 and Z=20, the syntax my_var[Z=14@ITP] would interpolate to Z=14 with the computation 0.6*my_var[Z=10] + 0.4*my_var[Z=20].
The example which follows illustrates the interpolation control that is possible using @ITP:
! with modal interpolation
yes? SET DATA coads_climatology
yes? SET MODE INTERPOLATE
yes? LIST/L=1/X=180/Y=0 sst ! interpolates both lat and long
VARIABLE : SEA SURFACE TEMPERATURE (Deg C)
FILENAME : coads_climatology.cdf
FILEPATH : /home/porter/tmap/ferret/linux/fer_dsets/data/
LONGITUDE: 180E (interpolated)
LATITUDE : 0 (interpolated)
TIME : 16-JAN 06:00
! with no interpolation
yes? CANCEL MODE INTERPOLATE
yes? LIST/L=1/X=180/Y=0 sst ! gives value at 179E, 1S
VARIABLE : SEA SURFACE TEMPERATURE (Deg C)
FILENAME : coads_climatology.cdf
FILEPATH : /home/porter/tmap/ferret/linux/fer_dsets/data/
LONGITUDE: 179E
LATITUDE : 1S
TIME : 16-JAN 06:00
! using @ITP to interpolate in longitude, only
yes? LIST/L=1/Y=0 sst[X=180@ITP] ! latitude remains 1S
VARIABLE : SEA SURFACE TEMPERATURE (Deg C)
FILENAME : coads_climatology.cdf
FILEPATH : /home/porter/tmap/ferret/linux/fer_dsets/data/
LONGITUDE: 180E (interpolated)
LATITUDE : 1S
TIME : 16-JAN 06:00
See the General Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.29
@CDA—closest distance above
Syntax options:
@CDA Distance to closest valid point above each point along the indicated axis
@CDA:n Closest distance to a valid point above the indicated points, searching only within n points
The transformation @CDA will compute at each grid point how far it is to the closest valid point above this coordinate position on the indicated axis. The optional argument n gives a maximum distance
to search for valid data. N is in integer units: how far to search forward in index space. The distance will be reported in the units of the axis. If a given grid point is valid (not missing) then
the result of @CDA for that point will be 0.0. See the example for @CDB below. The result's units are now axis units, e.g., degrees of longitude to the next valid point above. See the General
Information on transformations for important details about this transformation, and see the example under @CDB below.
Chapter 3, Section 2.4.30
@CDB—closest distance below
Syntax options:
@CDB Distance to closest valid point below each point along the indicated axis
@CDB:n Closest distance to a valid point below the indicated points, searching only within n points
The transformation @CDB will compute at each grid point how far it is to the closest valid point below this coordinate position on the indicated axis. The optional argument n gives a maximum distance
to search for valid data. N is in integer units: how far to search backward in index space. The distance will be reported in the units of the axis. If a given grid point is valid (not missing) then
the result of @CDB for that point will be 0.0. The result's units are now axis units, e.g., degrees of longitude to the next valid point below.
See the General Information on transformations for important details about this transformation.
yes? USE coads_climatology
yes? SET REGION/x=125w:109w/y=55s/l=1
yes? LIST sst, sst[x=@cda], sst[x=@cdb] ! results below
Column 1: SST is SEA SURFACE TEMPERATURE (Deg C)
Column 2: SST[X=@CDA:1] is SEA SURFACE TEMPERATURE (Deg C)(closest dist above on X)
Column 3: SST[X=@CDB:1] is SEA SURFACE TEMPERATURE (Deg C)(closest dist below on X)
SST SST SST
125W / 108: 6.700 0.000 0.000
123W / 109: .... 8.000 2.000
121W / 110: .... 6.000 4.000
119W / 111: .... 4.000 6.000
117W / 112: .... 2.000 8.000
115W / 113: 7.800 0.000 0.000
113W / 114: 7.800 0.000 0.000
111W / 115: .... 2.000 2.000
109W / 116: 8.300 0.000 0.000
yes? list sst[x=121w], sst[x=121w@cda:2], sst[x=121w@cda:5], sst[x=121w@cdb:5]
Chapter 3, Section 2.4.31
@CIA—closest index above
Syntax options:
@CIA Index of closest valid point above each point along the indicated axis
@CIA:n Closest distance in index space to a valid point above the indicated points, searching only within n points
The transformation @CIA will compute at each grid point how far it is to the closest valid point above this coordinate position on the indicated axis. The optional argument n gives a maximum distance
to search for valid data. N is in integer units: how far to search forward in index space. The distance will be reported in terms of the number of points
(distance in index space). If a given grid point is valid (not missing) then the result of @CIA for that point will be 0.0. See the example for @CIB below. The units of the result are grid indices;
integer number of grid units to the next valid point above. See the General Information on transformations for important details about this transformation, and see the example under @CIB below.
Chapter 3, Section 2.4.32
@CIB—closest index below
Syntax options:
@CIB Index of closest valid point below each point along the indicated axis
@CIB:n Closest distance in index space to a valid point below the indicated points, searching only within n points
The transformation @CIB will compute at each grid point how far it is to the closest valid point below this coordinate position on the indicated axis. The optional argument n gives a maximum distance
to search for valid data. N is in integer units: how far to search backward in index space. The distance will be reported in terms of the number of points (distance in index space). If a given grid
point is valid (not missing) then the result of @CIB for that point will be 0.0. The units of the result are grid indices, integer number of grid units to the next valid point below. See the General
Information on transformations for important details about this transformation.
yes? USE coads_climatology
yes? SET REGION/x=125w:109w/y=55s/l=1
yes? LIST sst, sst[x=@cia], sst[x=@cib] ! results below
Column 1: SST is SEA SURFACE TEMPERATURE (Deg C)
Column 2: SST[X=@CIA:1] is SEA SURFACE TEMPERATURE (Deg C) (closest dist above on X ...)
Column 3: SST[X=@CIB:1] is SEA SURFACE TEMPERATURE (Deg C) (closest dist below on X ...)
125W / 108: 6.700 0.000 0.000
123W / 109: .... 4.000 1.000
121W / 110: .... 3.000 2.000
119W / 111: .... 2.000 3.000
117W / 112: .... 1.000 4.000
115W / 113: 7.800 0.000 0.000
113W / 114: 7.800 0.000 0.000
111W / 115: .... 1.000 1.000
109W / 116: 8.300 0.000 0.000
yes? list sst[x=121w], sst[x=121w@cia:2], sst[x=121w@cia:5], sst[x=121w@cib:5]
DATA SET: /home/ja9/tmap/fer_dsets/data/coads_climatology.cdf
LONGITUDE: 121W
LATITUDE: 55S
TIME: 16-JAN 06:00
Column 1: SST is SEA SURFACE TEMPERATURE (Deg C)
Column 2: SST[X=@CIA:2] is SEA SURFACE TEMPERATURE (Deg C)
Column 3: SST[X=@CIA:5] is SEA SURFACE TEMPERATURE (Deg C)
Column 4: SST[X=@CIB:5] is SEA SURFACE TEMPERATURE (Deg C)
I / *: .... .... 3.000 2.00
@EVNT—event mask
This transformation locates "events" in data. An event is the occurrence of a particular value. The output steps up by a value of 1 for each event, starting from a value of zero. (If the variable
takes on exactly the value of the event trigger the +1 step occurs on that point. If it crosses the value, either ascending or descending, the step occurs on the first point after the crossing.)
For example, if you wanted to know the maximum value of the second wave, where (say) rising above a magnitude of 0.1 in variable "ht" represented the onset of a wave, then
yes? LET wave2_mask = IF ht[T=@evnt:0.1] EQ 2 THEN 1
is a mask for the second wave, only. The maximum waveheight may be found with
yes? LET wave2_ht = wave2_mask * ht
yes? LET wave2_max_ht = wave2_ht[T=@max]
Note that @EVNT can be used together with @LOC and @WEQ to determine the location when an event occurs and the value of other variables as the event occurs, respectively. Since there may be missing
values in the data, and since the instant at which the event occurs may lie immediately before the step in value for the event mask, the following expression is a general solution.
yes? LET event_mask = my_var[t=@evnt:<value>]
yes? LET event_n = IF ABS(MISSING(event_mask[L=@SBX],event_mask)-n) LE 0.67 THEN my_var
So that
is the time at which event "n" occurs, and
is the integrating kernel (see @WEQ). See the General Information on transformations for important details about this transformation.
@MED—median transform smoother
@MED:n Replace each value with its median based on the surrounding n points, where n must be odd.
The transformation @MED applies a median window, replacing each value with its median, to smooth the variable along the indicated axis. The width of the median window is the number of points given as
an argument to the transformation. The default width is 3 points. No weighting is applied, regardless of the sizes of the grid boxes, making this transformation best suited to axes with equally
spaced points. If there is missing data within the window, the median is based on the points that are present. See the General Information on transformations for important details about this transformation.
yes? PLOT/X=160W/Y=0 u[L=1:120@MED:5]
The transformation @MED does not reduce the number of points along the axis; it replaces each of the original values with the median of its surrounding points. Regridding can be used to reduce the
number of points.
Chapter 3, Section 2.4.34
@SMN - Smoothing with minimum
@SMN:n Replace each value with its minimum based on the surrounding n points, where n must be odd.
The transformation @SMN applies a minimum window, replacing each value with its minimum, to smooth the variable along the indicated axis. The width of the minimum window is the number of points given
as an argument to the transformation. The default width is 3 points. No weighting is applied, regardless of the sizes of the grid boxes, making this transformation best suited to axes with equally
spaced points. If there is missing data within the window, the minimum is based on the points that are present. See the General Information on transformations for important details about this transformation.
Chapter 3, Section 2.4.35
@SMX - Smoothing with maximum
Syntax:
@SMX:n Replace each value with its maximum based on the surrounding n points, where n must be odd.
The transformation @SMX applies a maximum window, replacing each value with its maximum, to smooth the variable along the indicated axis. The width of the maximum window is the number of points given
as an argument to the transformation. The default width is 3 points. No weighting is applied, regardless of the sizes of the grid boxes, making this transformation best suited to axes with equally
spaced points. If there is missing data within the window, the maximum is based on the points that are present. See the General Information on transformations for important details about this transformation.
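A minimal sketch showing both running-window smoothers on a small constant array (the values are illustrative):
yes? LET v = {3,1,4,1,5,9,2,6}
yes? LIST v[X=@SMN:3], v[X=@SMX:3]   ! 3-point running minimum and maximum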
Chapter 3, Section 2.4.36
@WGT - return weights
@WGT Replace each value with the weights used in an averaging or integral on the same grid.
The transformation @WGT returns the weights that would be used when averaging or integrating on the same grid. The results are expressed in the same standard units as integrals: meters
for degrees longitude or latitude on the surface of the earth and seconds in time; for axes with other units the results are in the units of the axis. See the General Information on transformations
for important details about this transformation.
yes? use levitus_climatology
yes? list/y=0/x=140w/z=0:90 temp[z=@wgt]
VARIABLE : TEMPERATURE (DEG C)
weights for avg,int on Z
FILENAME : levitus_climatology.cdf
FILEPATH : /home/users/tmap/ferret/linux/fer_dsets/data/
SUBSET : 7 points (DEPTH (m))
LONGITUDE: 140.5W
LATITUDE : 0.5S
0 / 1: 5.00
10 / 2: 10.00
20 / 3: 10.00
30 / 4: 15.00
50 / 5: 22.50
75 / 6: 25.00
100 / 7: 2.50
Ch3 Sec2.5. IF-THEN logic ("masking") and IFV-THEN
Ferret expressions can contain embedded IF-THEN-ELSE logic. The syntax of the IF-THEN logic is simply (by example)
LET a = IF a1 GT b THEN a1 ELSE a2
LET a = IFV a1 GT b THEN a1 ELSE a2
(read as "if a1 is greater than b then a1 else a2").
The IFV syntax differs from IF in that the result of the IF is false wherever the value is missing or if it is zero. IFV returns false only if the value is missing - valid zero values are treated as
true. All of the syntax for IF in this section is also valid using IFV. IFV was introduced with Ferret version 6.71. Note that there is no IFV syntax for conditional execution. There is a further
example using IFV at the end of this section.
This syntax is especially useful in creating masks that can be used to perform calculations over regions of arbitrary shape. For example, we can compute the average air-sea temperature difference in
regions of high wind speed using this logic:
SET DATA coads_climatology
SET REGION/X=100W:0/Y=0:80N/T=15-JAN
LET fast_wind = IF wspd GT 10 THEN 1
LET tdiff = airt - sst
LET fast_tdiff = tdiff * fast_wind
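To obtain the average difference suggested above, one could then list the area mean of the masked field (a sketch; the averaging simply reuses the region that was just set):
LIST fast_tdiff[X=@AVE,Y=@AVE]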
When defining a mask, if no ELSE statement is used, there is an implied "ELSE bad-value".
We can also make compound IF-THEN statements. The parentheses are included here for clarity, but are not necessary.
LET a = IF (b GT c AND b LT d) THEN e
LET a = IF (b GT c OR b LT d) THEN e
LET a = IF (b GT c AND b LT d) THEN e ELSE q
The user may find it clearer to think of this logic as WHERE-THEN-ELSE to avoid confusion with the IF used to control conditional execution of commands. Again, if there is no ELSE, there is an
implied "ELSE bad-value". Compound and multi-line IF-THEN-ELSE constructs are not allowed in embedded logic.
Note that if the data contains values that are exactly Zero, then "IF var" (comparison to the missing-flag for the variable) will be false when the data is missing and ALSO when its value is zero.
This is because Ferret does not have a logical data type, and so zero is equivalent to FALSE. See the documentation on the IF syntax. To handle this case, you can use the IFV keyword, or with classic
"IF", get the missing-value for the variable and use it to define the expression.
yes? LET mask = IF var NE `var,RETURN=bad` THEN 1
yes? LET mask = IFV var THEN 1
Example demonstrating IFV: Define and plot a variable that has integer values: 0, 1,... 6. Notice the missing data in the southern ocean. A mask using IF will mask out all the zero values, whereas a
mask with IFV treats those zeros as valid.
! Define the variable with values 0 through 6.
yes? USE coads_climatology; SET REG/L=1
yes? LET intvar = INT(sst/5)
! Define a mask using this variable and IF
yes? LET mask = IF intvar THEN 1
! Define a mask using this variable
yes? LET maskV = IFV intvar THEN 1
yes? set view ul; shade/title="Variable with a few integer values"/lev=(0,6,1) intvar
yes? go land
yes? set view ll; shade/title="Mask with classic IF: IF var then 1" mask
yes? go land
yes? set view lr
yes? shade/title="Mask with ifValid IFV: IFV var then 1" maskV
yes? go land
Lists of constants ("constant arrays")
The syntax {val1, val2, val3} is a quick way to enter a list of constants. For example
yes? LIST {1,3,5}, {1,,5}
X: 0.5 to 3.5
Column 1: {1,3,5}
Column 2: {1,,5}
{1,3,5} {1,,5}
1 / 1: 1.000 1.000
2 / 2: 3.000 ....
3 / 3: 5.000 5.000
Note that a constant array is always oriented in the X direction. To create a constant array oriented in, say, the Y direction, use YSEQUENCE:
yes? STAT/BRIEF YSEQUENCE({1,3,5})
Total # of data points: 3 (1*3*1*1)
# flagged as bad data: 0
Minimum value: 1
Maximum value: 5
Mean value: 3 (unweighted average)
Below are two examples illustrating uses of constant arrays. Try running the constant_array_demo journal script.
Ex. 1) plot a triangle (Figure 3_7)
LET xtriangle = {0,.5,1}
LET ytriangle = {0,1,0}
POLYGON/LINE=8 xtriangle, ytriangle, 0
Or multiple triangles (Figure 3_8). See polymark.jnl regarding this figure.
Ex. 2) Sample Jan, June, and December from sst in coads_climatology
yes? USE coads_climatology
yes? LET my_sst_months = SAMPLEL(sst, {1,6,12})
yes? STAT/BRIEF my_sst_months
Total # of data points: 48600 (180*90*1*3)
# flagged as bad data: 21831
Minimum value: -2.6
Maximum value: 31.637
Mean value: 17.571 (unweighted average) | {"url":"https://ferret.pmel.noaa.gov/Ferret/documentation/users-guide/variables-xpressions/XPRESSIONS","timestamp":"2024-11-05T22:40:40Z","content_type":"text/html","content_length":"284502","record_id":"<urn:uuid:3570d57a-3498-459c-ab02-c2a5b20111b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00035.warc.gz"} |
A certain farmer pays $30 per acre per month to rent farmland. How much does the farmer pay per month to rent a rectangular plot of farmland that is 360 feet by 605 feet? (43560 square feet = 1 acre) - Crackverbal
Get detailed explanations to advanced GMAT questions.
A certain farmer pays $30 per acre per month to rent farmland. How much does the farmer pay per month to rent a rectangular plot of farmland that is 360 feet by 605 feet? (43560 square feet = 1 acre)
Difficulty Level
Option E is the right answer.
Option Analysis
Given, 43,560 square feet = 1 acre.
So, total area rented (in acres) = 360*605/43,560 = 217,800/43,560 = 5 acres
Hence, he pays 5 * 30= $150 per month
So, the correct answer is E. | {"url":"https://www.crackverbal.com/solutions/a-certain-farmer-pays-30-per-acre-per-month-to-rent-farmland-how-much-does-the-farmer-pay-per-month-to-rent-a-rectangular-plot-of-farmland-that-is-360-feet-by-605-feet-43560-square-feet-1-acre/","timestamp":"2024-11-09T19:10:25Z","content_type":"text/html","content_length":"100267","record_id":"<urn:uuid:824a0a3f-c7a2-4b28-905a-ca0c4fe59827>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00477.warc.gz"} |
Math module - various mathematical functions
Before starting this tutorial, there are a few requirements that you should have in place.
Math in Python
Mathematics is a fundamental aspect of programming, and Python provides a range of tools and resources for working with math. Python provides a robust set of tools for working with math, including:
• built-in operators, for example: +, -, *, /, %, //, **
• built-in functions, for example: sum(), min(), max(), round()
• built-in modules like math, and
• the ability to import external libraries for more specialized tasks, for example: NumPy, SciPy, and Pandas
The math module is a built-in module in Python that provides access to many mathematical functions. It includes functions that provide more advanced mathematical operations such as:
trigonometric logarithmic exponential statistical
In addition to built-in modules, Python also provides a way to import external libraries, such as NumPy, SciPy, and Pandas. These libraries contain a range of specialized functions and tools for
working with specific areas of mathematics and data analysis.
When use math module?
Using the math module in Python can help you perform complex mathematical operations with ease. This module is commonly used in scientific and mathematical computations and can be a useful tool for
data analysts, engineers, and scientists who work with numerical data. In this module, you can perform various mathematical operations, such as finding the square root of a number, finding the value
of a mathematical constant such as pi, and much more.
Getting Started
You can try out the math module on your local machine if you have Python installed.
If you still need to complete the installation step, you can follow the installation guide provided in the tutorial.
Alternatively, if you prefer to work online, you can use various online Python interpreters such as Repl.it ⤴, PythonAnywhere ⤴, or Colab ⤴. These online interpreters provide a Python shell and allow
you to run Python code without installing Python on your local machine.
For example, you can use online Python shell immediately available at https://www.python.org/shell/ ⤴ to experiment with the math module.
Simply type in your Python code in the shell and hit enter to see the output.
Built-in Operators
Python has several built-in operators for basic arithmetic operations, such as:
operator function description
+ addition
- subtraction
* multiplication
/ division
In addition, Python also provides operators for more advanced mathematical operations, such as:
% modulus for finding the remainder of a division
// integer division for performing integer division
** exponentiation for raising a number to a power
x = 10
y = 3
# Addition
z = x + y
print(z) # Output: 13
# Subtraction
z = x - y
print(z) # Output: 7
# Multiplication
z = x * y
print(z) # Output: 30
# Division
z = x / y
print(z) # Output: 3.3333333333333335
Python has an exponentiation operator ** that raises a number to a power.
x = 2
y = 3
# Exponentiation
z = x ** y
print(z) # Output: 8
Integer Division
In Python, integer division is performed using the double slash // operator
x = 23
y = 5
# Integer division
z = x // y
print(z) # Output: 4
In Python, the modulus operation is performed using the percent % operator.
x = 23
y = 5
# Modulus operation
z = x % y
print(z) # Output: 3
Built-in Functions
Python also has many built-in functions for performing mathematical operations.
function example description
abs(x) abs(-10) returns the absolute value of a number
returns: 10
round(x, y) round(3.14159, 2) rounds a number (x) to a specified number (y) of decimal places
returns: 3.14
sum(list) sum([1,2,3]) return the sum of the values in a list
returns: 6
min(list) min([1,2,3]) return the minimum value in a list
returns: 1
max(list) max([1,2,3]) return the maximum value in a list
returns: 3
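A quick, illustrative run of these built-ins (the list values are arbitrary sample data):
values = [3.75, -2.5, 8.125, 0.5]  # arbitrary sample data
print(abs(-10))           # 10
print(round(3.14159, 2))  # 3.14
print(sum(values))        # 9.875
print(min(values))        # -2.5
print(max(values))        # 8.125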
Built-in Module: math
MATH is a built-in module (no installation required) that contains various mathematical constants and more advanced mathematical operations such as trigonometric functions, logarithmic functions, and
statistical functions, among others.
The math module does not contain functions for basic arithmetic operations such as addition, subtraction, multiplication, and division. These are basic mathematical operations that can be performed
using the standard arithmetic operators in Python, such as +, -, *, /.
CheatSheet: math
In the tabs below you can find the corresponding tables with math module functions along with function description and usage examples.
Basic operations:
Function Example Description
math.sqrt(x) math.sqrt(25) Returns the square root of x.
returns: 5.0
math.factorial(x) math.factorial(5) Returns the factorial of x.
returns: 120
math.ceil(x) math.ceil(2.7) Rounds up a number to the nearest integer.
returns: 3
math.floor(x) math.floor(2.7) Rounds down a number to the nearest integer.
returns: 2
math.trunc(x) math.trunc(3.5) Returns the integer value of x (truncates towards zero).
returns: 3
abs(x) abs(-10) Note: the math module has no abs(); abs() is a built-in that returns the absolute value of the provided number in the same type.
returns: 10
math.fabs(x) math.fabs(-10) Returns the absolute value of provided number as a float-point number.
returns: 10.0
math.log(x) math.log(10) Returns the natural logarithm (base e) of x.
returns: 2.30258509
math.log10(x) math.log10(100) Returns the base-10 logarithm of x.
returns: 2.0
math.exp(x) math.exp(2) Returns the exponential value of x.
returns: 7.3890560
math.pow(x, y) math.pow(2, 3) Returns the value of x raised to the power of y.
returns: 8.0
math.copysign(x, y) math.copysign(10, -1) Returns a value with the magnitude of x and the sign of y.
returns: -10.0
math.fmod(x, y) math.fmod(10, 3) Returns the remainder of x/y as a float.
returns: 1.0
math.gcd(x, y) math.gcd(24, 36) Returns the greatest common divisor of x and y.
returns: 12
math.hypot(x, y) math.hypot(3, 4) Returns the Euclidean norm, sqrt(x2 + y2).
returns: 5.0
Function Example Description
math.e math.e A mathematical constant that represents the value of e.
math.inf math.inf Represents positive infinity, which is a value greater than any finite number.
math.nan math.nan Represents a value that is not a number (NaN).
math.pi math.pi A mathematical constant that represents the value of pi.
math.tau math.tau Represents the mathematical constant tau, which is equal to 2*pi.
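A minimal sketch showing these constants in use:
import math
print(math.pi)               # 3.141592653589793
print(math.e)                # 2.718281828459045
print(math.tau)              # 6.283185307179586
print(math.inf > 1e308)      # True
print(math.nan == math.nan)  # False - NaN never compares equal to itself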
Function Example Description
math.sin(x) math.sin(math.pi/2) Returns the sine of an angle in radians.
math.cos(x) math.cos(math.pi) Returns the cosine of an angle in radians.
math.tan(x) math.tan(math.pi/4) Returns the tangent of an angle in radians.
math.radians(x) math.radians(180) Converts an angle from degrees to radians.
returns: 3.14159265
math.degrees(x) math.degrees(math.pi) Converts an angle from radians to degrees.
returns: 180.0
math.asin(x) math.asin(0.5) Returns the arc sine (in radians) of x.
returns: 0.5235987
math.acos(x) math.acos(0.5) Returns the arc cosine (in radians) of x.
returns: 1.0471975
math.atan(x) math.atan(1) Returns the arc tangent (in radians) of x.
returns: 0.7853981
math.atan2(y, x) math.atan2(1, 1) Returns the arc tangent (in radians) of y/x, in the range [-pi, pi].
returns: 0.7853981
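For example, converting degrees to radians before calling a trigonometric function (and back again afterwards):
import math
angle_rad = math.radians(30)
print(math.sin(angle_rad))           # 0.49999999999999994 (approximately 0.5)
print(math.degrees(math.asin(0.5)))  # 29.999999999999996 (approximately 30)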
Function Example Description
math.isnan(x) math.isnan(10) Returns True if x is NaN (not a number), False otherwise.
returns: False
math.isinf(x) math.isinf(10) Returns True if x is positive or negative infinity, and False otherwise.
returns: False
math.isfinite(x) math.isfinite(10) Returns True if x is a finite number (i.e., not infinity or NaN), and False otherwise.
returns: True
math.isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0) math.isclose(3.5, 3.6, rel_tol=0.1) Returns True if a is close in value to b, False otherwise.
returns: True
Import MATH module
The math module is built into Python and does NOT require any installation.
To use the math module, you need to import it first using the import statement:
Once the math module is imported, you can access all its functions by prefixing them with math, followed by a dot and the function name, math.function_name().
For example, to find the square root of a number, you can use a math.sqrt() function:
my_variable = math.sqrt(5) | {"url":"https://datascience.101workbook.org/05-programming/03-python/05-round-abs-data-math-module/","timestamp":"2024-11-12T17:07:35Z","content_type":"text/html","content_length":"70491","record_id":"<urn:uuid:7c999ed4-fa72-4473-af6c-0bef6cafeb2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00721.warc.gz"} |
Lost type information in nested tuples?
Zips of several items result in iterator states which are nested tuples. I have a line in my code_warntype which looks like this:
s@_7::Tuple{Int64,Tuple{Int64,Tuple{Int64,Tuple{Int64,Tuple}}}} = (Base.Iterators.tuple)(1, (Base.Iterators.tuple)(1, (Base.Iterators.tuple)(1, (Core.tuple)(1, (Core.tuple)(itr@_8::Base.OneTo{Int64}, 1)::Tuple{Base.OneTo{Int64},Int64})::Tuple{Int64,Tuple{Base.OneTo{Int64},Int64
What seems to be going on is that the type is so nested that inference gives up? Is that what's actually happening? If that is the case, it means that zips of more than 4 or 5 things will be type
unstable. Does that mean that we need a new version of zip that doesn’t need so many nested types? Or will the new optimizer take care of it?
Note this is on 0.6.
PS why are zips implemented as a nested tuple of iterators rather than just a tuple of iterators? | {"url":"https://discourse.julialang.org/t/lost-type-information-in-nested-tuples/10253","timestamp":"2024-11-12T08:46:29Z","content_type":"text/html","content_length":"21098","record_id":"<urn:uuid:4e362d16-22be-406d-9163-ed6c720a8d34>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00300.warc.gz"} |
A moment approach to analyze zeros of triangular polynomial sets
Let $I=(g_1,\dots,g_n)$ be a zero-dimensional ideal of $\mathbb{R}[x_1,\dots,x_n]$ such that its associated set $G$ of polynomial equations $g_i(x)=0$ for all $i=1,\dots,n$ is in triangular form. By introducing multivariate Newton sums we provide a numerical characterization of polynomials in the radical ideal of $I$. We also provide a necessary and sufficient (numerical) condition for all zeros of $G$ to be in a given set $K\subset \mathbb{C}^n$, without computing the zeros explicitly. The technique that we use relies on a deep result of Curto and Fialkow on the $K$-moment problem, and the conditions we provide are given in terms of positive definiteness of some related moment and localizing matrices depending on the $g_i$'s via the Newton sums of $G$.
Trans. Amer. Math. Soc. 358 (2005), 1403--1420. | {"url":"https://optimization-online.org/2004/03/840/","timestamp":"2024-11-10T08:58:17Z","content_type":"text/html","content_length":"82318","record_id":"<urn:uuid:319cfcf8-84cf-4bf2-b2a9-d4996375cab2>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00808.warc.gz"} |
[SOLVED] Unitarity and amplitudes ~ Physics ~ TransWikia.com
Unitarity ($SS^\dagger = 1$) dictates that
$$T - T^\dagger = i\,TT^\dagger$$
For tree-level $2\rightarrow 2$ scattering, we have then that
$$\langle p_1,p_2|T|p_3,p_4\rangle - \langle p_1,p_2|T|p_3,p_4\rangle^* = i\langle p_1,p_2|TT^\dagger|p_3,p_4\rangle.$$
Using $\langle p_1,p_2|T|p_3,p_4\rangle = (2\pi)^4\delta^{(4)}(p_1 + p_2 - p_3 - p_4)\,\mathcal{A}[12\rightarrow 34]$, and inserting a complete set of one-particle states (to stay at tree level), we can write this as
$$2\,\mathrm{Im}(\mathcal{A}[12\rightarrow 34]) = 2\pi\sum_k\int \frac{d^3k}{2E_k}\,\delta^{(4)}(p_1+p_2 - k)\,\mathcal{A}[12\rightarrow k]\,\mathcal{A}^*[k\rightarrow 34] = 2\pi\sum_k\int d^4k\,\delta(k^2)\,\delta^{(4)}(p_1+p_2 - k)\,\mathcal{A}[12\rightarrow k]\,\mathcal{A}^*[k\rightarrow 34]$$
Now, the left-hand side is the imaginary part of the 4-point amplitude, which we will take to have numerators $n_i$ and propagators $p_i^2 + i\epsilon$, where $i$ labels the ways we can arrange a particle exchange (the $s,t,u$ channels). Thus, we have
$$2\,\mathrm{Im}\left(\sum_k\frac{n_k}{k^2+i\epsilon} + \text{contact}\right) = 2\pi\sum_k\int d^4k\,\delta(k^2)\,\delta^{(4)}(p_1+p_2 - k)\,\mathcal{A}[12\rightarrow k]\,\mathcal{A}^*[k\rightarrow 34].$$
In a local theory of massless scalars, we can write the imaginary part of the propagator as
$$\mathrm{Im}\left(\frac{1}{p^2 + i\epsilon}\right) = \frac{1}{2i}\left(\frac{1}{p^2 + i\epsilon} - \frac{1}{p^2 - i\epsilon}\right) = \frac{-\epsilon}{p^4 + \epsilon^2}.$$
This seems like it vanishes for $\epsilon \rightarrow 0$, which, by the optical theorem, means that your amplitude must be zero for real external momenta. However, this is misleading and only true when the propagator is off-shell. Recognising the fact that the last term above is the nascent Dirac delta function, we learn that
$$\lim_{\epsilon\rightarrow 0}\frac{-\epsilon}{p^4 + \epsilon^2} = \pi\delta(p^2).$$
Plugging this in, we find that, as the propagator goes on shell, we have
$$2\pi\sum_k n_k\,\delta(k^2) = 2\pi\sum_k\int d^4k\,\delta(k^2)\,\delta^{(4)}(p_1+p_2 - k)\,\mathcal{A}[12\rightarrow k]\,\mathcal{A}^*[k\rightarrow 34].$$
Or, in other words, the numerators of the tree-level amplitudes factorize into two lower-point amplitudes (the residues) as the propagator goes on-shell:
$$n_k = \int d^4k\,\delta^{(4)}(p_1+p_2 - k)\,\mathcal{A}[12\rightarrow k]\,\mathcal{A}^*[k\rightarrow 34]$$
We note now a major problem: the right-hand side is actually zero due to momentum conservation, and this is probably the reason most books don't discuss the optical theorem at tree level. This is due to the fact that Lorentz-invariant three-particle amplitudes vanish on-shell by virtue of the fact that $p_i\cdot p_j = 0$ for all $i,j$ due to momentum conservation. However, this is not true if we use spinor-helicity variables and assume complex momentum, which is exactly what the bootstrapping amplitudes program does.
Correct answer by Akoben on December 15, 2020 | {"url":"https://transwikia.com/physics/unitarity-and-amplitudes/","timestamp":"2024-11-07T03:02:44Z","content_type":"text/html","content_length":"48416","record_id":"<urn:uuid:73cff17d-deec-421d-ac37-186ae058ce4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00890.warc.gz"} |
HDU 4902 Nice boat (line segment tree) | {"url":"https://topic.alibabacloud.com/a/hdu-4902-nice-boat-line-segment-tree_1_31_32699473.html","timestamp":"2024-11-01T19:05:43Z","content_type":"text/html","content_length":"79317","record_id":"<urn:uuid:46f20cab-844b-4a4c-aea8-2604f24bd866>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00428.warc.gz"}
September 2016 LSAT Question 13 Explanation
Which one of the following could be the complete assignment of assistants to Pricing?
Help with Game Setup
I feel like I could make barely any deductions on this game. Can I please see a game setup for this set of questions?
Please help with game setup
Let's look at the setup. The game involves an economics department assigning 6 TAs - R S T V Y Z - to 3 courses - L M P. Each TA is assigned to exactly one course, and each course will have at least one assistant assigned to it. This rule tells us that every assistant must be assigned and there must be at least one assistant per course, but there is no upper limit, so we could have a 1-1-4 scenario, a 1-2-3, or a 2-2-2.
Let's look at the rules:
(1) M must have exactly 2 assistants assigned to it.
M: 2
This rule tells us that 1-1-4 scenario is out since at least M must have exactly 2 assistants, we are down to 1-2-3 or 2-2-2 scenarios.
(2) S & T must be assigned to the same course as each other.
Let's remember this rule for now.
(3) V & Y cannot be assigned to the same course as each other.
This rule tells us if V is assigned to a course, then Y cannot and vice versa.
V -> ~Y
Y -> ~V
(4) Y & Z must both be assigned to pricing if either one of them is.
This rule tells us that either both YZ must be assigned to P or none of them.
We can also infer that if either Y or Z are assigned to pricing, then neither S nor T could be assigned to pricing because we cannot have more than 3 TAs per class, and we can infer that V cannot be
assigned to pricing per rule (3).
Y v Z (P) -> ~(S/T/V) (P)
Considering there a fair number of restrictions on who might be assigned to pricing, let's consider a couple of possible scenarios:
(1) Y &Z assigned to pricing.
1-2-3 scenario
L: V
M: S T
P: Y Z R
2-2-2 scenario
L: ST/ VR
M: VR/ ST
P: Y Z
(2) Y & Z are not assigned to pricing.
In a 1-2-3 scenario, S &T must be assigned to pricing because either Y or Z or both must be assigned to M since there is only one spot in L and neither of them can be assigned to P. Also, either V or
R must be assigned to P in this scenario along with S & T.
L: 1 of Y/Z/V/R
M: 2 of Y/Z/V/R
P: S T + V/R
One possible option could be:
L: Y
M: Z V
P: STR
In a 2-2-2 scenario, S & T must still be assigned as a pair to one of the classes; Y & Z can either be split across L and M (as long as Y is not placed in the same class as V) or assigned as a pair to L or M.
For example:
L: ST
M: YZ
P: VR
L: YR
M: ZV
P: ST
The point is not to list out every possible scenario but to see the impact of the most restrictive rule on the setup.
The question asks us which of the following could be the complete assignment of assistants to P?
We can see right away that all the options involve 3+ TAs, so we know that it is a 1 -2-3 scenario and we can right away eliminate (C) because we know that 1 -1 -4 is impossible. Let's look at our
1-2-3 scenarios above -
If Y Z are assigned to pricing, then R must be assigned with them.
If Y Z are not assigned to pricing, then ST must be assigned to pricing along with V or R.
Let's look at the answer choices:
(A) R Y Z
Correct. This is one of our scenarios above.
(B) S T Y
Incorrect. Z&Y are a package deal for this class per rule (4), so we cannot have only one of them assigned to P.
(C) S T Y Z
Incorrect. We have already eliminated (C) because it is impossible to have more than 3 assistants per class.
(D) T Y Z
Incorrect. ST must be assigned to the same class per rule (2).
(E) V Y Z
Incorrect. V & Y cannot be assigned to the same class per rule (3).
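If you would like to verify the deduction mechanically, here is a small brute-force sketch in Python (purely illustrative and not part of the official explanation); it enumerates every legal assignment and collects every complete group that can be assigned to Pricing.
from itertools import product

assistants = "RSTVYZ"
courses = "LMP"

def valid(assign):
    counts = {c: sum(1 for a in assistants if assign[a] == c) for c in courses}
    if min(counts.values()) < 1:                      # every course gets at least one TA
        return False
    if counts["M"] != 2:                              # rule 1: exactly two assigned to M
        return False
    if assign["S"] != assign["T"]:                    # rule 2: S and T together
        return False
    if assign["V"] == assign["Y"]:                    # rule 3: V and Y apart
        return False
    if (assign["Y"] == "P") != (assign["Z"] == "P"):  # rule 4: Y and Z both on P or neither
        return False
    return True

pricing_groups = set()
for combo in product(courses, repeat=len(assistants)):
    assign = dict(zip(assistants, combo))
    if valid(assign):
        pricing_groups.add("".join(sorted(a for a in assistants if assign[a] == "P")))

print(sorted(pricing_groups))  # 'RYZ' appears; 'STY', 'STYZ', 'TYZ' and 'VYZ' do not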
Let me know if this makes sense and if you have any further questions. | {"url":"https://testmaxprep.com/lsat/community/100002175-help-with-game-setup","timestamp":"2024-11-05T22:57:19Z","content_type":"text/html","content_length":"69612","record_id":"<urn:uuid:503acac3-86c0-4554-b516-e9d3875d6074>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00694.warc.gz"} |
Exploring numpy.stack() function In Python
The numpy.stack() function is an invaluable tool for working with arrays in Python. This comprehensive guide will take you through everything you need to know to fully leverage the capabilities of numpy.stack().
Introduction to numpy.stack()
The numpy library is the core package for scientific computing in Python. It provides powerful array and matrix manipulation capabilities.
The numpy.stack() function allows you to join, or stack, multiple NumPy arrays along a new axis. This differs from numpy.concatenate() which joins arrays along an existing axis.
Some key features of numpy.stack():
• Stacks arrays along a new axis, increasing the dimension of the result by 1
• Ability to stack more than 2 arrays at once
• Flexible control over the axis along which stacking occurs
• Handy for combining array data from different sources
• Returns a new stacked array, leaving inputs untouched
In this guide, you’ll learn:
• The syntax and parameters for using numpy.stack()
• How to stack 1D, 2D, and 3D arrays along different axes
• Use cases and examples demonstrating when to use stack()
Let’s dive in!
numpy.stack() Syntax and Parameters
The basic syntax for numpy.stack() is:
numpy.stack(arrays, axis=0, out=None)
It accepts these parameters:
• arrays – Sequence of arrays to stack. This is a required positional argument.
• axis – The axis along which the input arrays are stacked. The default is 0.
• out – Optional output array to place the result. By default a new array is created.
The key thing to understand is that the axis parameter controls the direction in which the stacking occurs.
Let’s look at examples of stacking arrays with different dimensions to see how this works.
Stacking two 1D arrays with numpy.stack() will result in a 2D array:
import numpy as np
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
stacked = np.stack((a, b)) # Defaults to axis=0
[[1 2 3]
[4 5 6]]
The two input arrays have shape (3,), and the resulting array has shape (2, 3) – they’ve been stacked vertically to form the rows.
We can also specify the axis explicitly:
np.stack((a,b), axis=0) # Vertical stack
np.stack((a,b), axis=1) # Horizontal stack
Stacking along axis=1 will join the arrays horizontally, resulting in a (3, 2) shaped output.
When stacking 2D arrays, the dimension of the result will be 3D.
For example:
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
np.stack((x, y)) # Stack along axis 0
# [[[1 2]
# [3 4]]
# [[5 6]
# [7 8]]]
np.stack((x, y), axis=1)
# [[[1 2]
# [5 6]]
# [[3 4]
# [7 8]]]
np.stack((x, y), axis=2)
# [[[1 5]
# [2 6]]
# [[3 7]
# [4 8]]]
Notice how the axis parameter controls the direction of stacking.
• Axis 0 (default): Stack the arrays along the first dimension
• Axis 1: Stack along the second dimension (row-wise)
• Axis 2: Stack along the third dimension (column-wise)
You can stack more than 2 arrays at once too:
z = np.array([[9,10],[11,12]])
np.stack((x, y, z)) # Shape (3, 2, 2)
With 3D arrays, the result will be a 4D array after stacking.
This allows stacking along the 4th dimension as well.
a = np.zeros((2,2,2))
b = np.ones((2,2,2))
np.stack((a,b)) # Shape (2, 2, 2, 2)
np.stack((a,b), axis=3) # Shape (2, 2, 2, 2)
So in summary, numpy.stack() is very useful for combining array data from different sources or adding new dimensions. The flexibility to control the stacking axis makes it applicable to many situations.
Generally, stacking higher dimensional arrays works the same way – the inputs are stacked along a new axis, increasing the dimension of the result.
When to Use numpy.stack() vs numpy.concatenate()
The numpy.concatenate() function also joins arrays, but it does so along an existing axis, so the result has the same number of dimensions as the inputs rather than gaining a new one.
So when should you use stack vs concatenate?
• Use stack() when you want to join arrays along a new axis. This preserves the shape of the inputs.
• Use concatenate() when you want to join arrays along an existing axis. The result keeps the same number of dimensions as the inputs.
Some examples help illustrate the difference:
a = np.array([[1,2],[3,4]])
b = np.array([[5,6],[7,8]])
np.stack((a,b), axis=0) # Shape (2, 2, 2)
np.concatenate((a,b), axis=0) # Shape (4, 2)
np.stack((a,b), axis=1) # Shape (2, 2, 2)
np.concatenate((a,b), axis=1) # Shape (2, 4)
So in summary:
• stack() – adds a new dimension to join arrays
• concatenate() – joins along an existing dimension
Common Use Cases for numpy.stack()
Some common use cases where numpy.stack() can be helpful:
Joining arrays from different sources
import numpy as np
import pandas as pd
arr1 = pd.DataFrame({'x':[1,2], 'y':[3,4]}).values
arr2 = np.array([[5,6],[7,8]])
stacked = np.stack((arr1, arr2)) # Shape (2, 2, 2)
Creating color channels for images
r = img[:, :, 0]
g = img[:, :, 1]
b = img[:, :, 2]
img_stacked = np.stack((r,g,b), axis=-1)
Adding a time sequence dimension to array data
arr1 = getData('2022-01-01')
arr2 = getData('2022-01-02')
stacked = np.stack((arr1, arr2), axis=0)
Combining training data from different distributions
X_train1 = getData('distribution1')
X_train2 = getData('distribution2')
X_train = np.stack((X_train1, X_train2))
So in summary, numpy.stack() is very useful for combining array data from different sources or adding new dimensions. The flexibility to control the stacking axis makes it applicable to many use cases.
Tips for Effective Use of numpy.stack()
Here are some tips for working with numpy.stack() effectively:
• The input arrays must have the same shape and number of dimensions. Otherwise you’ll get an error.
• Be mindful of the output shape based on the input shapes and stacking axis.
• Consider using stack() instead of repeated concatenations for better efficiency and clearer code.
• Use stack() to add new dimensions to arrays when needed for algorithms or other operations.
• Specify the output array if you want to avoid an extra copy and save memory (see the sketch after this list).
• Review the stack() documentation for advanced features like sub-class support.
Mastering tools like numpy.stack() will give you more flexibility in combining, manipulating, and transforming array data in Python.
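As a minimal sketch of the output-array tip above, the out parameter accepts a preallocated array whose shape and dtype must match the stacked result (the arrays here are illustrative):
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Preallocate the output: stacking two length-3 arrays along axis 0 gives shape (2, 3).
result = np.empty((2, 3), dtype=a.dtype)
np.stack((a, b), axis=0, out=result)  # fills `result` in place instead of allocating a new array
print(result)
# [[1 2 3]
#  [4 5 6]]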
Common Questions about numpy.stack() (FAQ)
Here are answers to some frequently asked questions about the numpy.stack() function:
How is stack() different from concatenate()?
The key difference is that stack() joins arrays along a new axis, while concatenate() joins along an existing axis. Stack adds a new dimension to the result, while concatenate keeps the same number of dimensions as the inputs.
What happens if the input arrays have different shapes?
You’ll get a ValueError saying all input arrays must have the same shape. The input arrays must have identical shapes and number of dimensions for stack() to work properly.
How do you control the direction of stacking?
Use the axis parameter to specify the axis along which to stack the input arrays. This controls whether they are stacked horizontally, vertically, or along other dimensions.
Can I stack more than 2 arrays together?
Yes, numpy.stack() accepts any number of input arrays. They will be stacked along the specified axis in the order provided.
What are good use cases for stack() vs concatenate()?
Use stack() when you want to join arrays along a new axis to increase dimensions. Use concatenate() when you want to join along an existing axis, flattening the arrays.
What is the benefit of specifying the output array?
If you specify the output array, it avoids creating an extra copy of the data. This can help save memory in some cases when working with large arrays.
Can stack() be used with array subclasses instead of just ndarrays?
Yes, numpy.stack() supports subclasses like masked arrays and matrix. The outputs will be instances of the subclass.
In this comprehensive guide, you learned:
• numpy.stack() joins arrays along a new axis, increasing dimensions
• How to control stacking directions with the axis parameter
• Examples of stacking 1D, 2D, and 3D arrays
• Common use cases for adding dimensions or combining array data
• Tips for using stack() effectively such as controlling output shapes
• Answers to frequently asked questions about numpy.stack()
Mastering numpy.stack() gives you a powerful multi-dimensional array joining tool for your Python data science and scientific computing work. The ability to precisely control the stacking direction
increases the utility of this function across many applications.
Leave a Comment | {"url":"https://happyprogrammingguide.com/exploring-np-stack-in-numpy-and-python/","timestamp":"2024-11-12T23:54:46Z","content_type":"text/html","content_length":"187440","record_id":"<urn:uuid:8bf8efb0-70a5-4100-8b4d-90c9bd9b3388>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00333.warc.gz"} |
What is Logistic Regression
As you may be knowing Logistic Regression is a Machine Learning algorithm. It is used for binary classification problems. We also have multiclass logistic regression, where we basically re-run the
binary classification multiple times.
It is a linear algorithm with a non-linear transform at the output. The input values are combined linearly using weights to predict the output value. The output value which is being modelled is a
binary value rather than a numeric value.
Suppose we have the results of a set of students, where the criterion for passing is that the student should score 50% or more; otherwise the student is classified as failed.
We could try to classify this problem using linear regression, but if our data contains some outliers, they will affect the orientation of the best-fit line.
Outliers are the points that are apart from the usual group of points. These points can have a strong influence on the orientation of the best fit line.
So we go on with Logistic regression for such use cases.
Sigmoid Function
A sigmoid function is a function that limits the value of the output to the range between 0 and 1. Its equation is sigmoid(x) = 1 / (1 + e^(-x)).
The value x in the equation is the score we compute from the inputs and the weights. The sigmoid function makes sure that the output stays between 0 and 1, no matter how large the value of x is.
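As a minimal illustrative sketch (not from the original post), the sigmoid can be written in a couple of lines of Python:
import numpy as np

def sigmoid(x):
    # Squashes any real-valued input into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(0))    # 0.5
print(sigmoid(10))   # ~0.9999546
print(sigmoid(-10))  # ~0.0000454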
Cost Function
The cost function in the case of logistic regression is given by
It should be maximum for correct classification to happen.
• Suppose passing students are treated as the positive class and failed students as the negative class. Likewise, points classified above the separating plane are taken as positive and those below it as negative, i.e., according to the sign of the score computed from the weights Wi.
• The cost function is determined by the product of these two, so if a point is correctly classified the cost function will be positive, and if it is incorrectly classified the cost function will become negative.
• So we have to make sure that we have chosen suitable weight values, to make sure that the points are correctly classified.
• Hence, by looking at the cost function, we can tell whether a particular entry is correctly classified or not.
Hope you found this post useful. 😊😊
You can find me on Twitter | {"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/ashwinsharmap/what-is-logistic-regression-3oog","timestamp":"2024-11-13T23:04:22Z","content_type":"text/html","content_length":"66523","record_id":"<urn:uuid:150736b3-c72d-49d2-8c8e-a5f48a6a9183>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00240.warc.gz"}
Table of Contents
Chapter 1: Limits
Chapter 1 - Overview
Section 1.1 Naive Limits
Section 1.2 Precise Definition of a Limit
Section 1.3 Limit Laws
Section 1.4 Limits for Trig Functions
Section 1.5 Limits at Infinity and Infinite Limits
Section 1.6 Continuity
Section 1.7 Intermediate Value Theorem
Chapter 2: Differentiation
Chapter 2 - Overview
Section 2.1 What is a Derivative?
Section 2.2 Precise Definition of the Derivative
Section 2.3 Differentiation Rules
Section 2.4 The Chain Rule
Section 2.5 Implicit Differentiation
Section 2.6 Derivatives of the Exponential and Logarithmic Functions
Section 2.7 Derivatives of the Trig Functions
Section 2.8 The Inverse Trig Functions and Their Derivatives
Section 2.9 The Hyperbolic Functions and Their Derivatives
Section 2.10 The Inverse Hyperbolic Functions and Their Derivatives
Chapter 3: Applications of Differentiation
Chapter 3 - Overview
Section 3.1 Tangent and Normal Lines
Section 3.2 Newton's Method
Section 3.3 Taylor Polynomials
Section 3.4 Differentials and the Linear Approximation
Section 3.5 Curvature of a Plane Curve
Section 3.6 Related Rates
Section 3.7 What Derivatives Reveal about Graphs
Section 3.8 Optimization
Section 3.9 Indeterminate Forms and L'Hôpital's Rule
Section 3.10 Antiderivatives
Chapter 4: Integration
Chapter 4 - Overview
Section 4.1 Area by Riemann Sums
Section 4.2 The Definite Integral
Section 4.3 Fundamental Theorem of Calculus and the Indefinite Integral
Section 4.4 Integration by Substitution
Section 4.5 Improper Integrals
Section 4.6 Average Value and the Mean Value Theorem
Chapter 5: Applications of Integration
Chapter 5 - Overview
Section 5.1 Area of a Plane Region
Section 5.2 Volume of a Solid of Revolution
Section 5.3 Volume by Slicing
Section 5.4 Arc Length
Section 5.5 Surface Area of a Surface of Revolution
Section 5.6 Differential Equations
Section 5.7 Centroids
Section 5.8 Work
Section 5.9 Hydrostatic Force
Chapter 6: Techniques of Integration
Chapter 6 - Overview
Section 6.1 Integration by Parts
Section 6.2 Trigonometric Integrals
Section 6.3 Trig Substitution
Section 6.4 The Algebra of Partial Fractions
Section 6.5 Integrating the Fractions in a Partial-Fraction Decomposition
Section 6.6 Rationalizing Substitutions
Section 6.7 Numeric Methods
Chapter 7: Additional Applications of Integration
Chapter 7 - Overview
Section 7.1 Polar Coordinates
Section 7.2 Integration in Polar Coordinates
Section 7.3 The Theorems of Pappus
Chapter 8: Infinite Sequences and Series
Chapter 8 - Overview
Section 8.1 Sequences
Section 8.2 Series
Section 8.3 Convergence Tests
Section 8.4 Power Series
Section 8.5 Taylor Series
Appendix: All about Maple
Appendix - Overview
Section A-1 The Working Environment
Section A-2 Arithmetic Calculations
Section A-3 Referencing
Section A-4 Algebraic Expressions and Operations
Section A-5 Evaluation
Section A-6 Coordinate Geometry
Section A-7 Trigonometry
Section A-8 Functions
Section A-9 Graphing
Section A-10 Solving Equations
Section A-11 Factoring and Collecting Terms
Section A-12 Additional Resources
© Maplesoft, a division of Waterloo Maple Inc., 2024. All rights reserved. This product is protected by
copyright and distributed under licenses restricting its use, copying, distribution, and decompilation.
For more information on Maplesoft products and services, visit www.maplesoft.com | {"url":"https://www.maplesoft.com/support/help/maplesim/view.aspx?path=StudyGuides%2FCalculus%2FTableOfContents","timestamp":"2024-11-13T19:32:24Z","content_type":"text/html","content_length":"203493","record_id":"<urn:uuid:a27110c2-26af-4141-b5cb-a3b0d36c44d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00626.warc.gz"}
Can u find the bug ? Plz run the code in else statement leap year program plz. | Sololearn: Learn to code for FREE!
year = int(input())
if year%4!=0:
    print("Not a leap year")
elif year%100!=0:
    print("Not a Leap year")
elif year%400==0:
    print("Leap year")
else:
    print("Not a leap year")
Mohammad Saad
not every year that gets divided by 4 is a leap year.
Mohammad Saad
year 1700,1800 and 1900 aren't leap year but 4 divides each of them
Abhay Bro can you explain how ?pls give some example
I dont know about it thanks bro
year=int(input())
if(year%4==0 and year%100!=0 or year%400==0):
    print("leap year ")
else:
    print("not a leap year ")
#Use this code | {"url":"https://www.sololearn.com/en/Discuss/2699475/can-u-find-the-bug-plz-run-the-code-in-else-statement-leap-year-program-plz","timestamp":"2024-11-03T00:30:13Z","content_type":"text/html","content_length":"924280","record_id":"<urn:uuid:1d5bc6bf-d659-4bf9-a738-e3538ccc4eb1>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00665.warc.gz"}
Metric Cooking: So Far off the Mark
I’ve spent a little time in the kitchen as of late. I’ve been pulling recipes from old and new cookbooks, and of course from the internet. One thing I’ve come to realise is that precisely nobody in
the industry uses the metric system correctly.
Measuring Utensils
The measuring utensils found in an everyday kitchen are, at best, clumsy conversions of imperial standards. Here’s an example of a typical set of measuring spoons:
The sizes of a typical set are 15ml, 7.5ml, 5ml, 2.5ml, 1.25ml and 0.625ml. If you haven't spotted it already, these don't scale in a decimal fashion. They borrow one of the worst features of the
imperial system – fractions. Things are in halves, thirds and quarters. The best example of decimal denominations can be found in currency. Let’s take a look at Australia’s coins and bank notes:
These denominations didn’t fall in place by chance. A considerable amount of thought went into selecting these. Take a look at the pattern:
1c, 2c, 5c
10c, 20c, 50c
$1, $2, $5
$10, $20, $50
The system comprehensively includes every multiple of 1, 2, 5 and 10, spanning five orders of magnitude. Why do such a thing? Because it’s dead simple to count by 1s, 2s, 5s, and 10s, and you can get
to any sum by combining just a handful of pieces. This is the heart of decimal currency, and the foundation of how children are taught to perform fundamental arithmetic. The Americans aren’t quite
there yet, because they still have quarter dollars – a fraction which breaks the convention.
Now, back to the kitchen. An ideal set of decimal kitchen measures would need to follow this convention:
Spoons: 1ml, 2ml, 5ml, 10ml, 20ml, 50ml
Cups: 100ml, 200ml, 500ml
I’ve shopped around on the internet, and of course, a set of kitchen measures following such a convention does not exist. It seems the industry has only moved half way to metric, and every single
product is just a sloppy conversion of imperial measures. Without the availability of a properly thought-out set of kitchen measures, the rest of this article is essentially hypothetical.
Alternately, it’s easy to obtain a beaker with 100ml, 200ml and 500ml markings these days. However, a 10ml spoon appears to be non-existent.
Optimising a Recipe for Metric Measures
There are a few golden rules to follow when determining the measures for a recipe:
Use the metric system
If you find yourself specifying things in tablespoons, quarts, cups, ounces or furlongs, you’re doing it wrong. Liquids should be in ml, solids (by volume) should be in cc (which is conveniently the
same as ml, but doesn’t imply liquid), and all weights should be in grams. Temperatures should always be in Celsius, and always correctly labelled. 180°C is a good temperature for an oven. 180° is
the arc of a semicircle. Don’t confuse the two.
Target the denominations of your measuring utensils
If the main ingredient were flour, an ideal amount would be 200cc. This is because it’s a single measure from a single utensil in the (above) set. If that’s not possible, aim for two measures of a
single utensil (e.g. 400cc is 2 x 200cc measures). Each measure loses time and accuracy, so avoid measures like 80cc, which would require four 20cc measures. You should absolutely avoid specifying quantities that require measures from multiple utensils. Adding 85ml of oil to a cake would be terrible, because you'd either have to take 17 measures with a 5ml measure, or take 4 x 20ml measures plus an additional 5ml (getting two utensils dirty). The absolute worst thing you could do is blindly convert from imperial to 6 significant figures: specifying 14.7868ml of honey is just absurd.
Measure by volume (not weight) if possible
Anything that’s a liquid or powder/granular should be measured by volume. While most people have kitchen scales, they are terribly cumbersome and not as accurate. It’s far easier (and more accurate)
to measure a 500cc scoop of flour than it is to measure 250g on the scales. In contrast, don’t measure things like apples and beef in cubic centimetres. It’s impossible to measure those things in
that way. The lighter the weight, the less practical it is to measure. Only drug dealers keep jeweller's scales in their kitchen, so don't specify one solitary gram of salt in a recipe. If you really have to, you can include both volume and weight, with the weight in brackets (e.g. 500cc (256g) flour). If doing so, ensure that the volume is a nice round value, as scales are inherently better suited to unconventional measures.
Ditching all Remnants of the Imperial System
Even in a modern kitchen with modern utensils and the latest cookbook, you’ll find imperial measurement strewn all the way through it. I’m living in a society that has been metric for the better half
of a century, and things like this are still prevalent. The only way to fully remove any trace from imperial measurements is to start from scratch and not carry anything forward. To properly convert
would require throwing away your measuring utensils and burning your recipe books, so I don’t have high hopes this happening any time soon. | {"url":"https://www.ubermotive.com/?p=357","timestamp":"2024-11-06T11:04:45Z","content_type":"text/html","content_length":"71886","record_id":"<urn:uuid:2c858bb3-e4af-4b91-9160-2e4bdba0fb2d>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00578.warc.gz"} |
Sales Growth. Bar Graphs Example | Bar Chart Template for Word | Rainfall Bar Chart | A Sample Diagram Of A Bar Graph
This sample was created in ConceptDraw DIAGRAM diagramming and vector drawing software using the Bar Graphs Solution from Graphs and Charts area of ConceptDraw Solution Park. It is Bar Graphs
example, Sales Growth example, Column Chart Example - Sales Report.
All these bar chart templates are included in the Bar Graphs solution. You can quickly rework these bar graph templates into your own charts by simply changing displayed data, title and legend texts.
This sample shows the Horizontal Bar Chart of the average monthly rainfalls. This sample was created in ConceptDraw DIAGRAM diagramming and vector drawing software using the Bar Graphs Solution from
the Graphs and Charts area of ConceptDraw Solution Park.
Easy charting software comes with beautiful chart templates and examples. This makes it easy to create professional charts without prior experience.
This sample shows the Bar Chart of the leverage ratios for two major investment banks. The leverage ratio is the ratio of the total debt to the total equity; it is a measure of the risk taken by the
bank. The higher of the leverage ratio denotes the more risk, the greater risks can lead to the subprime crisis.
Create bar graphs for visualizing economics problem solving and financial data comparison using the ConceptDraw DIAGRAM diagramming and vector drawing software extended with the Bar Graphs Solution
from the Graphs and Charts area of ConceptDraw Solution Park.
ConceptDraw DIAGRAM diagramming and vector drawing software extended with Bar Graphs solution from the Graphs and Charts area of ConceptDraw Solution Park is ideal for drawing the Bar Charts fast and
Complete set of bar chart examples is produced using ConceptDraw software. Surfing bar chart examples you can find an example that is the best for your case.
ConceptDraw DIAGRAM extended with Bar Graphs solution from Graphs and Charts area of ConceptDraw Solution Park is ideal software for quick and simple drawing bar chart of any complexity graph.
A waterfall chart shows how a value changes from one state to another through a series of intermediate changes. Waterfall diagrams are widely used in business. They represent a set of figures and link the individual values to the whole.
The best bar chart software ever is ConceptDraw. ConceptDraw bar chart software provides an interactive bar charting tool and complete set of predesigned bar chart objects.
The answer how to create a bar chart can be found in ConceptDraw software. The simple tips guide you through the software to quickly learn how to create a bar chart.
Create bar charts for visualizing problem solving in manufacturing and economics using the ConceptDraw DIAGRAM diagramming and vector drawing software extended with the Bar Graphs Solution from the
Graphs and Charts area of ConceptDraw Solition Park.
A divided bar graph is a rectangle divided into smaller rectangles along its length in proportion to the data. Segments in a divided bar represent a set of quantities according to the different
proportion of the total amount. A divided bar diagram is created using rectangular bars to depict proportionally the size of each category. The bars in a divided bar graph can be vertical or
horizontal. The size of each rectangle displays the part that each category represents. The value of the exact size of the whole must be known because each section of the bar displays a piece of that
value. A divided bar diagram is rather similar to a sector diagram in that the bar shows the entire data amount and the bar is divided into several parts to represent the proportional size of each
category. ConceptDraw DIAGRAM in conjunction with Divided Bar Diagrams solution provides tools to create stylish divided bar charts for your presentations.
The Bar Graphs solution enhances ConceptDraw DIAGRAM functionality with templates, numerous professional-looking samples, and a library of vector stencils for drawing different types of Bar Graphs,
such as Simple Bar Graph, Double Bar Graph, Divided Bar Graph, Horizontal Bar Graph, Vertical Bar Graph, and Column Bar Chart.
The Pie Chart visualizes data as proportional parts of a whole and looks like a disk divided into sectors. A pie chart is a type of graph in which a circle is divided into sectors. Pie
Charts are widely used in business, statistics, analytics, and mass media. They are a very effective way of displaying the relative sizes of parts and their proportion of the whole.
Create bar charts for event management problem solving and visual data comparison using the ConceptDraw DIAGRAM diagramming and vector drawing software extended with the Bar Graphs Solution from the
Graphs and Charts area of ConceptDraw Solution Park.
Bar charts represent data in different categories or groups. Create bar graphs for visually solving your scientific problems and for data comparison using the ConceptDraw DIAGRAM diagramming and vector
drawing software extended with the Bar Graphs Solution from the Graphs and Charts area of ConceptDraw Solution Park.
These bar chart templates was designed using ConceptDraw DIAGRAM diagramming and vector drawing software extended with Bar Graphs solution from Graphs and Charts area of ConceptDraw Solution Park. | {"url":"https://www.conceptdraw.com/examples/a-sample-diagram-of-a-bar-graph","timestamp":"2024-11-12T21:59:02Z","content_type":"text/html","content_length":"56908","record_id":"<urn:uuid:f8797816-a732-4a8e-b36d-069c515ba755>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00832.warc.gz"} |
Topics: Exterior Algebra and Calculus
Exterior Algebra and Calculus
Exterior Algebra / Product > s.a. forms.
$ Def: The associative, bilinear composition law for differential forms on a manifold ∧ : Ω^p(M) × Ω^q(M) → Ω^{p+q}(M) given by
\[ (\omega\wedge\theta)_{a...bc...d} = {(p+q)!\over p!\,q!}\,\omega_{[a...b}\,\theta_{c...d]}\;.\]
* Properties:
– Under permutation, ω ∧ θ = (−1)^pq θ ∧ ω.
– Contraction with a vector field, v · (ω ∧ θ) = (v · ω) ∧ θ + (−1)^p ω ∧ (v · θ).
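* Example (added illustration, not from the source): for two 1-forms α and β the combinatorial factor in the definition above is (1+1)!/(1!·1!) = 2, so
\[ (\alpha\wedge\beta)_{ab} = 2\,\alpha_{[a}\beta_{b]} = \alpha_a\,\beta_b - \alpha_b\,\beta_a\;, \]
which makes the antisymmetry α ∧ β = −β ∧ α explicit.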
> Online resources: see MathWorld page; Wikipedia page.
Exterior Calculus / Derivatives > s.a. cohomology; differential forms; lie derivatives.
$ Def: An operator d : Ω^p(M) → Ω^{p+1}(M) on the graded algebra of differential forms on a manifold, defined by
(1) Action on scalars, df(X):= X(f), for all 0-forms f and vector fields X;
(2) Linearity, d(αω + βη) = α dω + β dη, for all p-forms ω, η and numbers α, β;
(3) Relation with exterior product, d(ω ∧ θ):= dω ∧ θ + (−1)^p ω ∧ dθ, for all p-forms ω and q-forms θ;
(4) Square, d^2ω = d(dω) = 0 for all p-forms ω.
* Remark: It does not need a metric to be defined (it is a concomitant).
* Notation: In abstract index and coordinate notation, respectively, for a p-form ω = ω_{i...j} dx^i ∧ ... ∧ dx^j,
\[ (d\omega)_{ma...b} = (p+1)\,\nabla_{[m}\omega_{a...b]}\;, \qquad d\omega = \partial_k\,\omega_{i...j}\; dx^k \wedge dx^i \wedge ... \wedge dx^j\;. \]
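* Worked check (added illustration, not part of the original entry): for a 0-form f the coordinate formula gives df = ∂_i f dx^i, and applying d again,
\[ (d\,df)_{ab} = 2\,\partial_{[a}\partial_{b]}f = \partial_a\partial_b f - \partial_b\partial_a f = 0\;, \]
so the nilpotency d² = 0 is just the symmetry of mixed partial derivatives.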
* Properties: It commutes with taking the Lie derivative with respect to some vector field v^a, \(d(\mathcal{L}_v\,\omega) = \mathcal{L}_v(d\omega)\).
@ General references: Colombaro et al Math(19)-a2002 [introduction].
@ Discrete: Harrison mp/06 [unified with continuum]; Arnold et al BAMS(10) [finite-element exterior calculus, cohomology and Hodge theory]; Boom et al a2104 [applied to linear elasticity].
@ Other generalized: Okumura PTP(96)ht [in non-commutative geometry]; Gozzi & Reuter IJMPA(94)ht/03 [quantum deformed, on phase space]; Tarasov JPA(05) [of fractional order]; Yang a1507-wd [in
non-commutative geometry, nilpotent matrix representation].
> Online resources: see Wikipedia page.
send feedback and suggestions to bombelli at olemiss.edu – modified 15 apr 2021 | {"url":"https://www.phy.olemiss.edu/~luca/Topics/top/exterior.html","timestamp":"2024-11-12T02:49:04Z","content_type":"text/html","content_length":"7740","record_id":"<urn:uuid:2eb494a5-2546-47ad-91da-b05a2d1185d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00539.warc.gz"} |
Microgravity: living on the International Space Station
5 Forming planets: using models
In the previous section, you met the ‘coefficient of restitution’, ‘translational energy’ (or kinetic energy) and ‘rotational energy’.
The ‘coefficient of restitution’ can take values between 0 and 1 where:
• 0 means a perfectly inelastic collision where the particles have no relative velocity after the collision and they stick together
• 1 means a perfectly elastic collision, where the particles move away from each other after the collision as fast as they were moving towards each other before it.
Equation 1 shows the ratio of final relative velocity to initial relative velocity as the coefficient of restitution.
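(Equation 1 itself is not reproduced here; in its usual form it states that the coefficient of restitution equals the relative velocity after the collision divided by the relative velocity before the collision, so rearranging gives: final relative velocity = coefficient of restitution × initial relative velocity.)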
In this case, highly inelastic collisions are key to understanding planet formation. As you saw in Video 4, these collisions show that the particles behave more like crashing cars than snooker balls!
Now look at Table 3, which lists the data obtained from a collision experiment (values quoted to up to 6 significant figures).
Table 3 Data obtained from a collision experiment
Velocity before collision (m/s) Coefficient of restitution
0.394340 +/- 0.005249 0.312340 +/- 0.007026
0.404975 +/- 0.005254 0.430552 +/- 0.008426
0.417616 +/- 0.005249 0.472011 +/- 0.010388
0.418179 +/- 0.006828 0.527789 +/- 0.013723
0.335328 +/- 0.004141 0.354709 +/- 0.006535
0.418685 +/- 0.004471 0.870707 +/- 0.012637
You can see that, to 2 significant figures, the velocities before the collisions range between 0.34 m/s and 0.42 m/s.
The coefficients of restitution were calculated from the measured final velocities using Equation 1 and the data from the experiment; to 2 significant figures, they range between 0.31 and 0.87. Figure 6 shows the data from Table 3 as a graph.
Note that the horizontal lines are the ‘error bars’. These correspond to the +/- values in Table 3.
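A minimal plotting sketch (added for illustration, not part of the course materials) that reproduces a Figure-6-style graph from the Table 3 data, with error bars taken from the +/- values; the variable names are ours:

```python
# Plot coefficient of restitution against velocity before the collision,
# with error bars from the +/- values in Table 3.
import matplotlib.pyplot as plt

v_before = [0.394340, 0.404975, 0.417616, 0.418179, 0.335328, 0.418685]
v_err    = [0.005249, 0.005254, 0.005249, 0.006828, 0.004141, 0.004471]
e        = [0.312340, 0.430552, 0.472011, 0.527789, 0.354709, 0.870707]
e_err    = [0.007026, 0.008426, 0.010388, 0.013723, 0.006535, 0.012637]

plt.errorbar(v_before, e, xerr=v_err, yerr=e_err, fmt="o", capsize=3)
plt.xlabel("Velocity before collision (m/s)")
plt.ylabel("Coefficient of restitution")
plt.show()
```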
• Can you deduce a relationship between the relative velocity before collision and the coefficient of restitution from the graph?
• It seems that, for lower values of relative velocity before the collision (0.34 m/s), the coefficient of restitution also takes lower values (0.3).
This may be roughly constant for values of relative velocity before the collision between 0.34 m/s and 0.40 m/s.
However, as the values of relative velocity before the collision increase beyond 0.40 m/s, the values of the coefficient of restitution increase quickly, up to 0.90.
Now complete Activity 7.
Activity 7 Calculating the final velocities of the particles
Timing: Allow approximately 10 minutes
Calculate the values of the final velocities to 3 significant figures in the table below. (Hint: look at Equation 1 and rearrange it in terms of the final velocity.)
Table 4 Final velocities
Initial velocity (m/s) Coefficient of restitution Final velocity (m/s)
0.394 0.312
0.405 0.431
0.418 0.472
0.418 0.528
0.335 0.355
0.419 0.871
Table 4 Final velocities completed
Initial velocity (m/s) Coefficient of restitution Final velocity (m/s)
0.394 0.312 0.123
0.405 0.431 0.174
0.418 0.472 0.197
0.418 0.528 0.221
0.335 0.355 0.119
0.419 0.871 0.365
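A minimal Python check of these answers (added here for illustration, not part of the course): rearranging Equation 1 gives final velocity = coefficient of restitution × initial velocity. Using the full-precision values from Table 3 reproduces the table above; note that starting from the rounded 3-significant-figure inputs can shift the last digit (e.g. 0.405 × 0.431 rounds to 0.175 rather than 0.174).

```python
# Recompute the final velocities from the Table 3 measurements (m/s).
data = [
    (0.394340, 0.312340),
    (0.404975, 0.430552),
    (0.417616, 0.472011),
    (0.418179, 0.527789),
    (0.335328, 0.354709),
    (0.418685, 0.870707),
]

for v_initial, e in data:
    v_final = e * v_initial   # Equation 1 rearranged for the final velocity
    print(f"{v_initial:.3f}  {e:.3f}  {v_final:.3f}")
```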
You will have found that the final velocity of the colliding objects is less than the initial velocity. The coefficient of restitution is also greater than zero which means that these particles don’t
stick together.
Next you will complete the end-of-week quiz. | {"url":"https://www.open.edu/openlearn/mod/oucontent/view.php?id=77559§ion=6","timestamp":"2024-11-02T00:14:33Z","content_type":"text/html","content_length":"152896","record_id":"<urn:uuid:a10ec577-f89b-4ae2-9b9d-31b13976505a>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00771.warc.gz"} |
Separated Twins
26325734615147 (found by Archie)
56171354632742 (found by Jamie)
and of course they both work in reverse!
It was a fab experience!
35743625427161 Ross Scrivener and Charlotte Bailey
73625324765141 Antoine Parry
these all work backwards too!
Clue 1
The combination is a six digit number.
Clue 2
There are two ones separated by one other digit.
Clue 3
There are two twos separated by two other digits.
Clue 4
There are two threes separated by three other digits.
That was easy! Here's the real challenge!
The combination of the Transum safe is a 14 digit number.
The two 7s are separated by seven other digits;
The two 6s are separated by six other digits;
The two 5s are separated by five other digits;
The two 4s are separated by four other digits;
The two 3s are separated by three other digits;
The two 2s are separated by two other digits;
The two 1s are separated by one other digit.
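If you would rather let a computer hunt for combinations, here is a short backtracking sketch (added for illustration, not part of the Transum page; the function name is ours). It places the pairs of digits from 7 down to 1, requiring the two copies of each digit k to be separated by exactly k other digits:

```python
# Backtracking search for 14-digit strings in which the two copies of each
# digit k (k = 1..7) are separated by exactly k other digits.
def langford(n=7):
    slots = [0] * (2 * n)
    solutions = []

    def place(k):
        if k == 0:
            solutions.append("".join(map(str, slots)))
            return
        for i in range(2 * n - k - 1):
            j = i + k + 1  # the matching copy sits k digits further along
            if slots[i] == 0 and slots[j] == 0:
                slots[i] = slots[j] = k
                place(k - 1)
                slots[i] = slots[j] = 0  # undo and try the next position

    place(n)
    return solutions

found = langford()
print(len(found), "combinations found; the first is", found[0])
```

Each solution's reverse also appears in the search results, matching the note above that the combinations work backwards too.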
Teacher, do your students have access to computers such as tablets, iPads or Laptops? This page was really designed for projection on a whiteboard but if you really want the students to have access
to it here is a concise URL for a version of this page without the comments:
However it would be better to assign one of the student interactive activities below.
Here is the URL which will take them to a student version of this activity. | {"url":"https://transum.org/Software/SW/Starter_of_the_day/starter_September18.ASP","timestamp":"2024-11-14T18:52:08Z","content_type":"text/html","content_length":"32435","record_id":"<urn:uuid:5e471879-888f-4d4c-a25c-21abc28323c7>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00471.warc.gz"} |
From Encyclopedia of Mathematics
of the first and second kinds
The eigen values of the monodromy operator of a canonical equation.
In a complex Hilbert space, equations of the form $\dot x = i J H(t) x$, where $J$ and $H(t)$ are self-adjoint operators, $J^2 = I$ and $H(t)$ is periodic, are called canonical. In the
finite-dimensional case the eigen values of the monodromy operator $U(t)$ of this equation are called multipliers. If all solutions of a canonical equation are bounded on the entire real axis (the
equation is stable), then the multipliers lie on the unit circle. Consider a canonical equation $\dot x = i \lambda J H(t) x$ with a real parameter $\lambda$; then all multipliers can be divided into
two groups: multipliers of the first (second) kind, which move counter-clockwise (clockwise) as $\lambda$ increases.
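As an added numerical illustration (not part of the original entry): for a finite-dimensional periodic linear system the multipliers can be computed directly as the eigenvalues of the monodromy matrix, obtained by integrating the system over one period starting from each basis vector. The sketch below uses the Mathieu equation, a standard Hill-type example; the parameter values a and q are arbitrary choices.

```python
# Multipliers of a periodic linear system x' = A(t) x with period T:
# the eigenvalues of the monodromy matrix U(T).
import numpy as np
from scipy.integrate import solve_ivp

a, q = 2.0, 0.3            # illustrative Mathieu parameters
T = np.pi                  # period of the coefficient a + 2 q cos(2t)

def rhs(t, x):
    u, v = x
    return [v, -(a + 2.0 * q * np.cos(2.0 * t)) * u]

# Build U(T) column by column: column i is the solution at time T
# starting from the i-th standard basis vector.
columns = []
for e in np.eye(2):
    sol = solve_ivp(rhs, (0.0, T), e, rtol=1e-10, atol=1e-12)
    columns.append(sol.y[:, -1])
U = np.column_stack(columns)

multipliers = np.linalg.eigvals(U)
print("multipliers:", multipliers)
print("|multipliers|:", np.abs(multipliers))   # all equal to 1 for a stable equation
```

Since this system is trace-free, the product of the multipliers is 1; whether they actually lie on the unit circle depends on the chosen (a, q).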
A canonical equation is called strongly stable if it is stable and remains stable under small variations of $H(t)$. For strong stability it is necessary and sufficient that all multipliers be on the
unit circle and that there be no coincident multipliers of different kinds.
The theory of multipliers of the first and second kinds allows one to obtain a number of delicate tests for stability and estimates of the zone of stability for canonical equations. The homotopy
classification of stable and unstable canonical equations has been given in terms of multipliers.
[1] Yu.L. Daletskii, M.G. Krein, "Stability of solutions of differential equations in Banach space" , Amer. Math. Soc. (1974) (Translated from Russian)
[2] V.A. Yakubovich, V.M. Starzhinskii, "Linear differential equations with periodic coefficients" , Wiley (1975) (Translated from Russian)
[a1] M.G. Krein, "Topics in differential and integral equations and operator theory" , Birkhäuser (1983) (Translated from Russian)
[a2] I. [I. Gokhberg] Gohberg, P. Lancaster, L. Rodman, "Matrices and indefinite scalar products" , Birkhäuser (1983)
How to Cite This Entry:
Multipliers. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Multipliers&oldid=39828
This article was adapted from an original article by S.G. Krein (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/wiki/Multipliers","timestamp":"2024-11-03T02:48:00Z","content_type":"text/html","content_length":"15632","record_id":"<urn:uuid:8ba22bec-4157-429c-9595-10749de448dd>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00161.warc.gz"} |