content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Submitted by Atanu Chaudhuri on Wed, 26/12/2018 - 20:44
Solution to 11th WBCS arithmetic practice set
11th set of WBCS Arithmetic solution explains how the 10 questions in the arithmetic practice questions can be solved easy and quick in 10 minutes.
To take the test before going through the solutions, click here.
11th WBCS arithmetic solution set: time to answer was 10 mins
Problem 1
What will be the percentage markup on cost price to achieve a profit of 4% after 20% discount?
a. 10%
b. 35%
c. 25%
d. 30%
Solution 1: Solving in mind: Profit or loss is on Cost price, Discount is on marked or list price
To achieve a profit of 4% on cost price $C$, sale price would be $1.04C$. After 20% discount on marked price $M$, sale price becomes, $0.8M$. This equals sale price,
So percentage markup on cost price to reach the marked price is the percentage difference of marked price and cost price with reference to cost price,
From the above relation,
Or, $M=1.3C$.
So, percentage markup would be 30% on cost price (1.3C means C plus 30% of C).
Answer. Option d: 30%.
Concepts used: Percentage to decimal conversion, dividing by 100 -- Profit concept, it is on cost price and additional to cost price -- Marked price, the listed price on which discount percentage
reduces it to sale price -- Decimal to percentage conversion -- Solving in mind.
Problem 2
In a 200 litre solution of sugar fully dissolved in water, sugar is 20%. On heating for a while the solution gets more concentrated with sugar percentage increasing to 80% of the solution. What is
the volume of water that evaporated?
a. 140 litre
b. 150 litre
c. 130 litre
d. 100 litre
Solution 2: Solving in mind: With drying up of solution, sugar amount in weight remains unchanged with only water amount getting reduced
Sugar is 20% of solution weight $W$ at the start with volume 200 litres. So sugar weight was 0.2W and it remains unchanged while water evaporated.
After evaporation, sugar of amount $0.2W$ becomes $0.8W_D$, $W_D$ being the new weight of the dried up solution.
So, $0.2W=0.8W_D$,
Or, $W_D=0.25W$, that is, present solution weight reduced to one-fourth of the original weight.
As weight of homogeneous solution is proportional to the original volume of 200 litres as well as the present volume, present dried up volume will be one-forth of 200 litres, that is 50 litres.
So evaporated water volume is,
$200-50=150$ litres.
Answer: Option b: 150 litres.
Concepts used: Drying up of homogeneous solution -- Percentage to decimal conversion -- Proportionality of volume to weight, sugar is in kgs but solution is in litres -- Solving in mind.
Problem 3
At what time between 1 O'clock and 2 O'clock both minute hand and hour hand will be together?
a. $5\displaystyle\frac{6}{11}$ minutes past 1 O'clock
b. $5\displaystyle\frac{4}{11}$ minutes past 1 O'clock
c. $5\displaystyle\frac{5}{11}$ minutes past 1 O'clock
d. $5\displaystyle\frac{3}{11}$ minutes past 1 O'clock
Solution 3: Solving in mind: Realtive angular speed of minute and hour hands
Minute hand moves round the clock face and traverses $360^0$ in $60$ minutes. Its speed is, $6^0$ per minute.
Hour hand traverses distance between two hour marks, that is, $30^0$ in 60 minutes. Its speed is $\displaystyle\frac{1}{2}^0$ per minute.
The minute hand then approaches the hour hand at a relative speed of,
$6^0-\displaystyle\frac{1}{2}^0=\displaystyle\frac{11}{2}^0$ per minute, with hour hand effectively standing still.
It is this speed at which the minute hand actually closes the gap between itself and the hour hand with both moving.
At 1 O'clock, minute hand stood at 00 or 12 hour mark and hour hand stood at 1 hour mark, fully $30^0$ ahead of the minute hand.
At a relative speed of $\displaystyle\frac{11}{2}^0$ per minute the minute hand will close this gap and will catch up with the hour hand in,
$=\displaystyle\frac{60}{11}=5\displaystyle\frac{5}{11}$ minutes.
The two hands will then meet at $\displaystyle\frac{5}{11}$ minutes past 1 O'clock.
Answer: Option c: $5\displaystyle\frac{5}{11}$ minutes past 1 O'clock.
Concepts used: Clock concepts -- Speed of hour hand and the minute hand -- Relative speed of minute hand and hour hand -- Race concept -- Solving mind.
The abovementioned tutorial on How to solve clock problems should make the concepts clear.
Problem 4
The simple interest over 2 years at a certain interest rate on a certain amount is Rs. 2400. If the difference between the compound interest compounded annually at same rate over same period on same
amount, and the simple interest be Rs. 138, what is the percentage interest rate per annum?
a. 13.5%
b. 10.5%
c. 12.5%
d. 11.5%
Solution 4: Solving in mind: Concept of difference between simple interest and compound interest
Simple interest over 2 years assuming $x$ as the amount invested and $r$ as the interest rate is, $2xr$ and per year it is $xr=\text{Rs. }1200$.
Under compound interest the first year interest will be same as the simple interest over the first year. Compounding will happen over the second year and the extra compounded interest will be on the
simple interest earned over the first year, that is, $xr$ at the rate of $r$. So the compound interest extra to simple interest is, $xr^2=\text{Rs. }138$.
Dividing the two we get,
Answer: Option d: 11.5%.
Concepts: Simple interest -- Compound interest -- Difference between simple interest and Compound interest -- Efficient simplification -- Solving in mind.
Problem 5
A basketful of oranges are counted in pairs, in 3s and in 5s, and every time one orange is left over. The least number of oranges in the basket is,
a. 31
b. 41
c. 61
d. 51
Solution 5: Solving in mind: Use of the concept underlying Euclid's division lemma
If a larger number $a$, the dividend, is divided by a smaller number $b$, the divisor, it holds that,
$a=bq+r$, where $q$ is the quotient and remainder $r$ may be 0 but must be less than $b$.
The three countings give rise to three divisions of the total number of oranges, say, $N$,
$N=2q_1+1$, Or, $N-1=2q_1$
$N=3q_2+1$, Or, $N-1=3q_2$, and
$N=5q_3+1$, Or, $N-1=5q_3$.
So if an integer $N$ reduced by 1 is a multiple of the LCM 30 of 2, 3 and 5, when divided by 2, 3 and 5, a remainder of 1 will be left in each case.
The smallest such number will then be,
Answer: Option a: 31.
Concepts used: Remainder concept in Euclid's division lemma -- Key pattern identification -- LCM -- Solving in mind.
Naturally, for solving the problem mentally we didn't have to write the equations, and arriving directly on the key pattern that the number will be 1 more than the LCM of 2, 3 and 5.
Problem 6
HCF of $\displaystyle\frac{12}{13}$ and $\displaystyle\frac{3}{5}$ is,
a. $3$
b. $12$
c. $\displaystyle\frac{12}{65}$
d. $\displaystyle\frac{3}{65}$
Solution 6:
HCF of a pair of fractions is,
$\displaystyle\frac{\text{HCF of numerators}}{\text{LCM of denominators}}$
So in this problem, HCF of the two given fractions is,
$\displaystyle\frac{\text{HCF of 12 and 3}}{\text{LCM of 13 and 5}}$
Answer: Option d: $\displaystyle\frac{3}{65}$.
Concepts used: HCF of fractions -- HCF of integers -- LCM of integers -- Solving in mind.
Problem 7
Two rational numbers lying between $\displaystyle\frac{4}{5}$ and $\displaystyle\frac{6}{7}$ are,
a. $\displaystyle\frac{29}{35}$, $\displaystyle\frac{62}{70}$
b. $\displaystyle\frac{28}{34}$, $\displaystyle\frac{35}{39}$
c. $\displaystyle\frac{29}{35}$, $\displaystyle\frac{5}{6}$
d. $\displaystyle\frac{65}{84}$, $\displaystyle\frac{5}{6}$
Solution 7: Solving in mind: Check and ensure the range of two fractions to be in ascending order: Comparison of fractions
To reduce number of comparisons we check and ensure that the range values are indeed in ascending order.
We apply specific rules here for comparing and deciding, which fraction between the two is the larger one.
We will apply the First rule or technique of fraction comparison for comparing $\displaystyle\frac{4}{5}$ and $\displaystyle\frac{6}{7}$:
If the difference between the numerator and denominator of two fractions are same, the fraction with larger numerator will be the larger one.
So without actually doing a subtraction, we can conclude,
$\displaystyle\frac{4}{5} \lt \displaystyle\frac{6}{7}$.
Solution 7: Strategy of checking choice values with range limit values
With two range limit fractions in ascending order, we will follow the principle of checking valid or invalid choice as,
If any choice value is less than the lower limit or greater than the higher limit, then the choice values can't be placed within the range and the choice is invalid.
Keeping this basic principle in mind, we adopt a systematic approach,
We will compare one value of a suitable choice with the lower limit of the range. If invalid, we will select second suitable choice, otherwise we will check the second value of the choice with
the higher limit.
As $\displaystyle\frac{29}{35}$ appears in two choice values we will check it first. As suitable choice, we will select the third choice with second fraction $\displaystyle\frac{5}{6}$ rightaway
being identified to be less than the higher limit of $\displaystyle\frac{6}{7}$ (by rule 1).
Solution 7: Comparing $\displaystyle\frac{4}{5}$ with $\displaystyle\frac{29}{35}$
To compare these two fractions we will use the generally known rule of base equalization.
Second rule and technique of fraction comparison: Denominator equalization
If denominators of two fractions are same, the fraction with larger numerator will be the larger.
By multiplying the numerator and denominator of $\displaystyle\frac{4}{5}$ by 7, the fraction is converted to $\displaystyle\frac{28}{35}$, thus equalizing the two denominators. Now we can easily
$\displaystyle\frac{29}{35} \gt \displaystyle\frac{4}{5}$.
With both values of the third choice within the given range, this must be the correct answer (as, in the MCQ problem, answer can only be one).
These decisions can be quickly taken if you know the techniques and follow a systematic approach.
Answer: Option c: $\displaystyle\frac{29}{35}$, $\displaystyle\frac{5}{6}$.
Concepts: Fraction range placement problem -- Strategic problem solving approach -- Fraction comparison -- Base equalization technique -- Solving in mind.
Note: Rule 3, the third powerful rule for fraction comparison might also have been used if required,
If denominator numerator difference of two fractions is smaller for the fraction with larger numerator, it is the larger of the two.
For example, $\displaystyle\frac{7}{9} \gt \displaystyle\frac{4}{7}$.
This is true because of percentage difference between denominator and numerator is smaller for the larger fraction.
For example, to compare $\displaystyle\frac{4}{5}$ with $\displaystyle\frac{65}{84}$ we will take the denominator to 85, as close to 84 as possible, by multiplying both numerator and denominator by
17. The converted fraction $\displaystyle\frac{68}{85}$ is larger than $\displaystyle\frac{64}{84}$ by rule 3.
Caution: Unless you are clear on these techniques and are experienced in fraction comparison, attempting a fraction range placement problem may take a lot of your valuable time, and so should be
Problem 8
What is the unit's digit of the product of all the prime numbers between 10 and 30?
a. 7
b. 9
c. 3
d. 1
Solution 8: Solving in mind: Multiplying the unit's digit of the prime numbers in the range
The concept used in this problem is,
Unit's digit of the product of two integers will be the unit's digit of the product of the unit's digit of the two integers.
For example unit's digit of $1467\times{459}$ will be unit's digit of $7\times{9}$ which will be 3.
Using this concept we take the first two prime numbers 11 and 13 and remember the unit's digit of the product as 3.
Then we continue to evaluate the unit's digit of $3\times{17}$ giving 1, then $1\times{19}$ giving 9, then unit's digit of $9\times{23}$ giving 7 and finishing with, unit's digit of $7\times{29}$
giving 3.
Answer: Option c: 3.
Concepts: Unit's digit of the product of two integers as the unit's digit of the product of their unit's digits -- Prime numbers -- Solving in mind.
Problem 9
Present ages (in years) of Romi and Runu are in the ratio of 5 : 6. Three years ago their ages were in the ratio 9 : 11. What will the ratio of their ages after 6 years from now?
a. 6 : 7
b. 1 : 2
c. 7 : 8
d. 3 : 4
Solution 9: Solving in mind: Ratio concept and HCF reintroduction technique
Introducing the cancelled out HCF as a product of both terms of the ratio we have the ratio $5:6=5x:6x$.
Thre years ago,
Or, $x=6$.
So 6 years from now the ratio of their ages will be,
$\displaystyle\frac{30+6}{36+6}=\frac{6}{7}$, the ages of both increasing by 6 years.
Answer: Option a: 6 : 7.
Concepts: Ratio concept -- HCF reintroduction technique -- Age concept: $x$ years from now age of all increases by $x$ years -- Solving in mind.
Problem 10
Inlet pipes A and B can fill a tank independently in 15 mins and 12 mins respectively. If both are opened simultaneously and B is closed after 3 mins how much more time will A take to fill the tank?
a. 6 mins
b. 8 mins 15 secs
c. 10 mins 30 secs
d. 9 mins 15 secs
Solution 10: Solving in mind: Fill rate in terms of portion of tank filled per unit time
Pipe A fills the tank at a fill rate of $\displaystyle\frac{1}{15}$ portion of tank per minute and pipe B fills at a fill rate of $\displaystyle\frac{1}{12}$ portion of tank per minute. So together
in 3 minutes the two pipes will fill,
$\displaystyle\frac{1}{5}+\displaystyle\frac{1}{4}=\displaystyle\frac{9}{20}$ portion of tank.
Left will be then $\displaystyle\frac{11}{20}$ portion of tank.
Applying unitary method,
Pipe A fills up 1 tank or full tank in 15 minutes,
So, pipe A will fill up $\displaystyle\frac{11}{20}$ portion of tank in,
$15\times{\displaystyle\frac{11}{20}}=\displaystyle\frac{33}{4}=8.25$ mins, or, 8 mins 15 secs.
Answer: Option b: 8 mins 15 secs.
Concept: Pipes and cistern problems -- Fill rate as portion of tank filled in unit time -- Working together concept -- Unitary method -- Solving in mind.
We have used and explained the concepts and methods:
Percentage, Percentage to decimal conversion, Decimal to percentage conversion, Profit and loss concepts, Marked price, Discount, Drying up of homogeneous solution, Concentration of dissolved matter,
Clock concepts, Speed of hour hand and the minute hand, Relative speed of minute hand and hour hand, Race concept, Simple interest, Compound interest, Difference between simple interest and Compound
interest, Remainder concept in Euclid's division lemma, Key pattern identification, LCM of integers, HCF of fractions, HCF of integers, Fraction range placement problem, Strategic problem solving
approach, Fraction comparison, Base equalization technique, Unit's digit of product of two integers, Prime numbers, Ratio concept, HCF reintroduction technique, Age problems, Pipes and cistern
problems, Fill rate as portion of tank filled in unit time, Working together concept, Unitary method, Solving mind.
All problems could be solved in mind in a few tens of seconds, but with use of requisite concepts, identification of patterns and application of powerful methods.
We include this approach of solving a problem in mind in the expanded scope of mental maths.
Question and Solution sets on WBCS Arithmetic
For all WBCS main arithmetic question sets click here.
For all WBCS main arithmetic solution sets click here. | {"url":"https://suresolv.com/wbcs-main/wbcs-main-level-arithmetic-solution-set-11","timestamp":"2024-11-06T05:34:03Z","content_type":"text/html","content_length":"46868","record_id":"<urn:uuid:cecc83ee-44d3-4913-b065-647087a4185e>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00764.warc.gz"} |
Why Terreplane?
Figure 4. Illustration of turbulence on flat plate airfoil. Table 3. L/D values versus H/L (rows) and air's angle of attack (columns).
Air's angle of attack is in degrees (top row) and radians (2nd row).
Chapter 4. Science of Optimization
The ideal flat plate airfoils L/Ds from 57 to 114
at air angles of attack from 1
° to 0.5°, optimal designs will have L/Ds from 50 to 100 due to shear drag and leading-edge effects. The estimates of these forces are common-place and reasonably accurate as summarized in the
following paragraphs.
Shear Drag -
Shear drag coefficients are about 0.001 for laminar flow to 0.005 for turbulent flow. These coefficients are multiplied times area (“A”) and density times velocity squared (“ρ
”) to estimate the shear drag, which is a force parallel to a surface. In this case, “A” is the total surface area versus a projected frontal surface area that is commonly used for form drag. For
laminar flow over a rectangular plate, the laminar shear drag is
0.001 ρ L W v2
. The form drag is a function of air’s angle of attack and is
sin(θ) ρ L W v2
When air’s angle of attack is 0, form drag goes to zero and shear drag is greater than form drag. However, as summarized by Table 1, at air angles of attack greater than 0.5
, shear drag is less than ~10% of the total drag. Shear drag reduces the ideal L/D by 5%-10% less than at air angles of attack from 1° to 0.5°, The impact of turbulence is discussed later in this
The finding that shear drag tens to be small to negligible is consistent with the ability of the Eta Glider to attain L/D up to 70. Operating with negligible shear drag is achieved with smooth
surfaces. Shear drage does limit the ability of flat plate airfoils to attain L/D greater than about 150.
Leading and Trailing Edges
– The ideal flat plate airfoil has negligible thickness which over-estimates L/D due to the thicknesses of the front edges. The impact of the thickness can be estimated using drag coefficients as
defined by Equation 3.
Fd = 0.5 cd A (ρv2)
(where ρ is density) (3)
For rough estimates, the drag coefficient (cd) is taken as 1.0 and the area (A) is the projected frontal area. The projected frontal area of the flat sheet will have a leading edge of area H W and
the projected area of the flat surface of sin(θ) L W, where θ is the air’s angle of attack. The overall drag force is a function of velocity. The corrected L/D estimate is:
L/D = cos(θ) /[sin(θ) + H/L].
Table 3 summarizes the impact of the front edge on performance. The results narrow in on conditions where L/D can be greater than 70; and a key design parameter is the front edge thickness. The
optimal conditions are for air’s angle of attack near 0.5° and H/L near 0.005 with an L/D of 73.
A design under development is the towed platform “train” where a series of airfoil compartments are tethered for flight, minimizing front edge portfolio and increasing length and carrying capacity.
An implicit purpose of the towed platform “train” design is to minimize front edge effects by having sequential platforms in contiguous streamlines of the lead aircraft. An additional insight from
this analysis is the need for flexibility along the length as a robust aspect of design, which is needed for low thickness vehicles.
Further discussions on this topic will be available in Chapter 7.
Impact of Turbulence –
Surfaces, forces, and weather that cause a change in air flow patterns can disrupt laminar flow. Turbulence can increase shear drag and even cause air’s momentum to counter desirable conditions for
creating aerodynamic lift. Figure 4A illustrates how turbulence on the trailing edge of a flat plate airfoil can create changes in momentum that counterproductively pushes down on upper surfaces of
an airfoil.
Undesirable turbulent flow can occur on all four edges (front, sides, and back) of a rectangular flat plate airfoil as higher pressures under the plate cause air to flow to lower pressures above the
plate. Figure 6 illustrates how a deviation on a lower surface of an airfoil can create a force for flow toward each of the edges.
Figures 4B and 4C illustrate approaches to overcome turbulence on fore and aft edges by adding a camber on the leading edge. Of particular importance in this approach is a design where the split of
streamlines (i.e., flow over versus flow under the airfoil) in front of the airfoil is at approximately the same vertical position as the joining of streamlines aft the airfoil.
This turbulent overflow impacts the side edges of the airfoils as well. Typical approaches used on lateral ends of wings to eliminate this turbulence include: a) having the leading and trailing edges
meet at a tip to reduce/eliminate surfaces impacted by side edge turbulence and b) using upturned winglets to block the turbulence-inducing air flow.
The approaches applied to contemporary wings provide a starting point of optimization of airfoil “platforms” which have much lower span-to-chord ratios than laterally-extending wings.
Summary of the Science –
When the flat plate airfoil is corrected for shear drag and leading-edge form drag, the L/D values may be compared to traditional airfoil performances. Experimental data has verified that aircraft
can approach the performance of traditional airfoils under laminar flow conditions. For example, the Eta Glider is reported to have an L/D of 70:1; which is about 90% of the L/D of the airfoil
without the fuselage. The Eta Glider and similar lightweight wide-wingspan aircraft tend to be rather fragile, limited to slower velocities, and incapable of achieving the higher L/D values
attainable by of flat plate airfoils. | {"url":"http://www.terretrans.com/chapter-4.html","timestamp":"2024-11-05T20:09:39Z","content_type":"text/html","content_length":"49636","record_id":"<urn:uuid:574a2f97-1959-42b9-9e53-4c19113f86b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00101.warc.gz"} |
Fancy doors
A couple of weeks ago, my girlfriend had the idea to spruce up her doors. “Great!” I said, “sounds like a fun project!” At first it was just going to be a paint job, then it evolved into something
more. The doors were going to be adorned with some frames. She googled^1 around a bit and found some sets that were pre-made on Amazon. Problem was, that not all doors had the same width, and most
sets had a fixed width of 60cm^2. The smallest door has a total width of 67cm, so that would leave just 3.5cm on both sides. Aesthetically not very pleasing.
So I started drawing a bit and we tested out several different possibilities and layouts. We ended up with a rectangle of 50x60cm, with a smaller 50x30cm one below it, then another 50x60cm again. For
the outside doors, she wanted some colour as well. So we experimented a bit, to get the creative juices flowing. You can’t get yourself too attached to the colours you see on a screen, when choosing
paint you either have to pick whatever colours they have or have them mix a colour. Most brands do support RAL colours which have sRGB approximations, but it might still have some slight deviation.
When we looked at it together she settled for some pastel colours, but the next when I came home from work she was ecstatic about the flashy blue, yellow and orange colours she choose. Hey, if she’s
happy, I’m happy.
To make the rectangle frames for the doors, we ordered slats of 270cm online. Of course, this was a lot cheaper than if we ordered the premade kits. But this did mean I had to get my hands dirty, so
to speak. Every slat had to be cut into pieces of either 30, 50 or 60cm. And they all had to have outward angles of 45 degrees. Easy enough, I have the tools to do this. Oh wait, I don’t…
They’re still in the moving container. Luckily, my dad came to the rescue and I could borrow his saw. Some quick napkin math, We needed 2 rectangles of 50x60 and 1 rectangle of 50x30. 8 Doors needed
these fancy frames, so that’s 8*(4*(50+60)+2*(50+30))=4800cm. In total we needed 48 pieces of 50cm, 32 pieces of 60cm and 16 pieces of 30cm. In order to know how many slats we had to order, I needed
to know how many pieces fit in one slat and how much waste is in one slat. I figured I would start with 2 pieces of 50, 2 pieces of 60 and 1 piece of 30cm. That leaves us with 20cm of waste. So after
16 slats I have all the pieces of 60 and 30cm done. And still need 16 pieces of 50cm, of which I could get 5 per slat. So, by that reasoning we needed 20 slats.
If this were a movie instead of a blog, this is where a montage would be cut of the sawing process and painting of the slats. I sawed them in angles of 45 degrees and my girlfriend painted them over
the course of several days. Then came the most precise process of sticking the slats to the doors. The slats have to be equidistant to both sides of the doors. and equidistant from each other. When I
first measured the distances with the slats in hand, I noticed something. The slats I carefully sawed were flawed. They weren’t 50cm, but rather 49.5 or 49.3cm, I donĀ“t know what I did wrong or how
it happened. But there it is. Luckily, this slight difference doesn’t really matter, as long as the slats are in the center of the door. So we paired all the slats so that we had two of the same
length. Then paired them again, so we had three pairs per door.
This brings me to what I find most fun about DIY projects. It’s a lot of fun to work towards a certain result, but to me it’s also about dealing with unexpected problems. DIY is more about
problem-solving than it is about doing. It’s rolling with the punches. It’s trying something, hoping for the best, but adapting if it doesn’t pan out.
I used regular montage glue and the process was fairly straightforward. It just took a longer time than I expected and more glue than I anticipated. I thought I would be done after 3-4 hours and 1
can of glue would be enough. In the end, it took about 10 hours and 4 cans of glue. One thing I’ve learned about this project, you can glue 12m of slats with 1 can of glue.
Here’s the final result:
1. Or should I be saying qwant? Qwanted? Qwunt? Is qwanting a regular or irregular verb? In case you haven’t heard of it. You’re welcome. ↩︎
2. Sorry Americans, my blog isn’t popular enough to justify me translating metric units to imperial. Although it would seem fun to write a plugin that would translate units to the one that is most
popular based on geographical location. Kind of like locale, but for units. ↩︎
Did you spot a mistake? You can help me fix it by opening a Pull Request. | {"url":"https://pensiveibex.com/blog/2024/8/fancy-doors/","timestamp":"2024-11-02T05:02:08Z","content_type":"text/html","content_length":"7855","record_id":"<urn:uuid:dd3e6168-934a-4ed2-9d43-db795199d810>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00597.warc.gz"} |
mainly macro
For teachers and students of economics
For the next two years I will be taking a break from teaching tutorials in first year undergraduate macro. That will be a relief for just one reason. I have become more and more embarrassed at having
to teach the IS-LM model. The IS curve is fine, but the LM curve is not. The reason is obvious enough: central banks do not operate a fixed money supply policy. It would be nice to tell students that
the fiction that the monetary authorities fix money is a harmless fairy story, but I do not believe this. Here are just three mistakes or confusions caused by using the LM curve.
1) Is that real or nominal rates on the vertical axis? It does not matter if expected inflation is constant, but we use the apparatus to look at price changes.
2) IS-LM leads to AS/AD. If your goal is controlling inflation, the AS curve suggests that all you need to do is to return to somewhere on the vertical AS curve. If you use a traditional Phillips
curve with backward looking expectations then returning to the natural rate will not be enough. Which is the student supposed to believe?
3) IS-LM leads to textbook Mundell Fleming, which in its simplest form has domestic interest rates stuck at world levels. Everyone then learns that fiscal policy is ineffective under floating rates,
which is unfortunate, because this is a special consequence of assuming fixed money. The IS curve plus UIP is a much better way to think about this, and gives a more relevant answer: a temporary
demand shift is not crowded out by an appreciating real exchange rate if domestic interest rates are unchanged.
I’m currently lecturing on Oxford’s second year undergraduate macro course. There I largely ignore the LM curve, and (following the textbook by Carlin & Soskice) instead teach a three equation model
involving an IS curve, a Phillips curve, and a monetary rule. The monetary rule captures the idea that the monetary authority has preferences over excess inflation and the output gap, and combining
this with the IS curve and Phillips curve we can derive a Taylor rule. While the monetary rule curve is far from ideal, representing the result of a static rather than dynamic optimisation exercise,
it at least captures the spirit of what central banks try and do today.
The introductory course in macro at Oxford starts with IS-LM, but it also includes the Phillips curve and the Taylor rule, presumably because they are more relevant than the AS curve and the LM
curve. So first year students learn about the AS curve and the Phillips curve, the LM curve and a Taylor rule. Perhaps not surprisingly, many get very confused. It would seem to make much more sense
to switch things around. Use only the IS curve, Phillips curve and monetary/Taylor rule in a first year introductory course, and only introduce the LM curve in subsequent years. (I would introduce
the LM curve in stages. First, think about replacing an inflation target with a price level target for monetary policy. Maybe think about nominal GDP targets, using the current situation as clear
motivation. Then go to money supply targets, with of course some discussion of money demand.)
This sequencing seems clearly preferable, so I find it puzzling that most Principles textbooks do not take this approach, but instead continue to start with IS-LM. Why are they so devoted to the LM
curve? Is learning about the LM curve meant to impart some deep wisdom to students? Is there a feeling that because money is important in understanding how business cycles and inflation work, it
should be introduced as part of the core model come what may? This seems strange, and is probably counterproductive. Perhaps the explanation is more mundane: that the economics of high volume
textbook publishing is such that innovation is difficult to do.
I sometimes say to master’s students just starting the core macro course that they will spend some time learning about the same stuff as they did when they were undergraduates: inflation,
unemployment, business cycles. The key difference is that what they learn will only be 5 or 10 years out of date, compared to material that is 30 years out of date for undergraduates. Strange, but
unfortunately true.
Readers of this blog will know that I am an evangelist for fiscal councils. Fiscal councils are publicly funded independent bodies that provide analysis of national fiscal policy. (They are sometimes
called ‘watchdogs’, but some – such as the CBO in the US or PBO in Canada – see themselves as serving the legislature rather than the public directly, so they flinch from that term. As some are not
councils as such, but institutions with a standard hierarchical structure, then the term Independent Fiscal Institutions (IFI) may be better, but I use the term fiscal council on my webpage and
elsewhere so I’ll stick to that.)
The good news is that in amongst the various directives/regulations/treaties that have recently been agreed by the European Union, there are clauses that encourage the formation of fiscal councils,
together with the need for independent fiscal forecasting. (In some countries, like the UK, the fiscal council (OBR) is all about independent forecasting, while in others, like Sweden, forecasting
was already reasonably independent before the council was formed.) This is a case of better late than never. Some EU countries have recently established fiscal councils through their own initiative
as a response to the debt crisis (such as Ireland, Portugal and Slovakia), so this EU initiative is playing catch-up in their case.
The bad news is that the rest of the EU’s response to the debt crisis makes life difficult for these new fiscal councils, and may in effect hinder the formation of new ones. In essence this is
because the broad thrust of the EU’s crisis management has been to take away national autonomy in making fiscal decisions. I complained about this in the context of the new treaty here. What I had
not fully appreciated until recently is that you now almost need a fiscal council just to try and work out what the huge number of sometimes conflicting EU directives actually mean in terms of what a
country is allowed and not allowed to do.
I think some economists view this development as benign, because they see it as part of an inevitable transition to a fiscal union. When others point out that so far it is no such thing, it is
suggested that a fiscal union has to be done in stages for (largely German) political reasons, and that large scale European transfers will emerge once national fiscal responsibility is nailed down.
Even if this was the planned pathway, I have severe doubts that it is a feasible path politically.
I have even greater doubts that the new European regime will be able to deliver fiscal responsibility. As I have often argued, simple rules that produce severely suboptimal outcomes are not
sustainable. (For a recent discussion of this sub-optimality, see Karl Whelan.) This was true with the original SGP, and it is just as true now. The idea that you can have effective legally binding
rules based on something as difficult to measure as a structural budget deficit is bizarre.
As Tyler Cowen suggests, a far better path would be one built on mutual trust. How can Germany begin to trust fiscal policymaking in other countries? Not, I would suggest, by placing monitors inside
those countries – this is control, not trust. I far better way starts with recognition that it is in each country’s own interest to maintain fiscal discipline. The original Stability and Growth Pact
was based on seeing fiscal policy as a free rider problem, where market discipline had been removed. We now know that this was a misreading, perhaps encouraged by a similar misreading by markets
themselves. The pain caused by current austerity will linger long in the national memory. Once we recognise that debt responsibility is in the national interest, the aim should be to encourage the
creation of national institutions that support that interest – national fiscal councils. These institutions can then be seen by other countries as their allies in maintaining fiscal discipline at the
European level.
I fear the major barrier preventing this happening may be the European Commission itself. I remember attending a Commission conference soon after the Euro was formed, where I presented a paper that,
among other things, explored the possibility of new institutions involved in national fiscal policy. I had a discussion with a senior Commission official, who thought this was an excellent idea -
because the Commission could be that institution! Like any bureaucracy, the Commission is all about super-national control, and not subsidiarity.
One recent episode is a case in point. The Commission has recently suggested withholding funds from Hungary because it believes that country has not taken sufficient action to control its deficit.
Yet only a little over a year ago Hungary had a newly created and highly effective fiscal council under George Kopits. It was abolished by the Hungarian government largely because it was doing its
job (for more information on this, see my earlier post). Where was the Commission then?
I’ve been away for a few days at the OECD, so have only just picked up Scott Sumner, fully endorsed by Tyler Cowen, asking why there are not more people advocating a higher inflation target, or a
nominal GDP target, in the UK. (Being away did not in theory prevent me blogging, but I’m afraid in practice I prefer exploring Paris!) I think this is a good question. While there is a very active
UK debate about fiscal austerity, most discussion on the monetary side tends to be within the parameters of current policy.
Those familiar with the case for raising the inflation target can skip the next couple of paragraphs. For those who are not, the idea is that the central bank/government could moderate the impact of
the zero lower bound constraint on interest rates by announcing that it would allow inflation to go above 2% for a significant period after the recession was over. This would help stimulate the
economy today through various channels. One is that expected lower future short interest rates should reduce long term interest rates today, which would in turn stimulate borrowing by firms and
consumers. Another is that, according to the New Keynesian Phillips curve, expected inflation today depends on expected inflation tomorrow, so if the central bank allowed above 2% inflation in the
future, this would raise actual and expected inflation today, lowering real short term interest rates today. One of the interesting things about this policy proposal is that it is very New Keynesian
rather than Old Keynesian.
This policy was first suggested as a way out of Japan’s lost decade (when nominal rates were also stuck at the zero bound) by Paul Krugman, and subsequently formalised by Eggertsson and Woodford (
Gauti B. Eggertsson & Michael Woodford, 2003. "Optimal Monetary Policy in a Liquidity Trap," NBER Working Papers 9968) among others. This paper suggests that the optimal policy can be approximated by
replacing an inflation target with a price level target, because a price level target implies a period of excess inflation would automatically follow a period of insufficient inflation. Scott Sumner
in particular has championed nominal GDP targeting, which would have similar effects, and this policy has gained a lot of support in the US and elsewhere.
So why relatively little debate on this issue in the UK? At the political level, simple politics combined with recent experience may be important. Price inflation has been, and still is, high in the
UK, and this is highly unpopular because it has been associated with large falls in real wages. An argument by any political party that the Bank should target higher inflation, rather than be
admonished for its inability to prevent it, would be a hard sell. Nominal GDP (NGDP) targeting may be much more palatable to a non-macro audience, as Krugman points out. However a problem here for
the government may be the accusation that such a change reflects a failure of their austerity programme, for reasons discussed below. Whether this issue might be on the Labour Party’s agenda I have
no idea, although I cannot see any reason in principle why it should not be.
What is perhaps more puzzling is a lack of discussion by the Bank itself. As far as I am aware, the Bank has not discussed these various alternatives to its current 2% mandate. I am also not aware of
any members of the current MPC discussing them in public. Of course any decision to change the mandate would not be for the Bank or MPC to make – it’s a decision for the Chancellor - but that does
not rule out discussion of options. I think this lack of ‘official’ discussion may also discourage debate elsewhere. (I know I thought of writing something myself on this a few weeks back, looked for
something from the Bank as a ‘hook’ to argue with, and on finding nothing let it slide.) In contrast to the US, macroeconomic blogs in the UK are not developed or influential enough to initiate a
debate themselves.
The lack of public analysis from the Bank, or MPC members, could perhaps be another example of ‘groupthink’ that Laurence Ball suggests in his analysis of Bernanke’s views on the zero lower bound. My
first experience of this came at a conference organised by the Centre for Central Banking Studies in 2009. Chris Sims and Mike Woodford were the other guest speakers, and both my and Mike’s paper
provoked discussion of this policy issue. I have often been in meetings where an option may be politely discussed, but you just know it has no chance of serious consideration, and this seemed like
one of those. To be honest, I think this represents central bank conservatism more than anything else.
One of the main arguments against this policy option is time inconsistency. Even if a promise to raise future inflation was effective today, when the recession ended and it was time to implement the
policy, the temptation would be to renege on this commitment and return to the 2% inflation target. A smart private sector would realise this, so the promise to raise future inflation would not be
believed, and so it would not be effective. A price level or NGDP target is partly designed to help overcome this problem. However, in the case of the UK, where a change in policy would almost
certainly have to be decided by the Chancellor rather than the Bank, this should be seen as less of a problem for the Bank or MPC. Its credibility would not be on the line, and the government rather
than the Bank would come under pressure to renege.
Exactly the same could be said of another objection to raising the inflation target, which is that it might be perceived as a way of helping the government reduce the ratio of debt to GDP. This
charge could be a more serious concern for the Chancellor, if it was perceived that his austerity program was not delivering the fiscal target he had set himself. However, it should be less of a
concern for the Bank, because they have always deferred to the Chancellor’s authority to set the inflation target. So for both reasons, I suspect the lack of discussion by the Bank and MPC may
reflect a taboo about deliberately raising inflation above 2%.
I should add that I think the advocates of NGDP targets sometimes go too far in suggesting that this is an obviously preferable way to speed economic recovery compared to fiscal stimulus. Promises to
raise future inflation are costly, which is why there is a time inconsistency problem. However the current recession is, in my view, much more costly, so it needs both monetary and fiscal action to
help bring it to a speedy end. Furthermore, the current recession may well indicate an endemic problem with low inflation targets, which is that we are in danger of hitting the zero bound quite
frequently. (With this in mind, can anyone explain to me why the Bank of Japan has just announced an inflation target of only 1%?) This seems like an excellent time to be discussing alternatives to
inflation targeting.
So it is a pity that this policy option is not being debated more actively in the UK right now, with help from the Bank and MPC members. In the absence of a discussion based around the Bank, then
this seems to be exactly the kind of issue that the Treasury Select Committee of the House of Commons should investigate. The committee has always taken a keen interest in all aspects of monetary
policy, and such an investigation would be a good follow-up to their recent report on Bank accountability.
Less than a week ago, I wrote in a post on my blog: “So there is something we can do with fiscal policy, without increasing government debt. Why does hardly anyone talk about this?” Two days ago Ian
Mulheirn from the Social Market Foundation published a detailed proposal exactly along these lines. (There is also a short piece in Monday’s Financial Times.) A coincidence of course, but very
This proposal involves bringing forward by four years £15 billion of tax increases pencilled in for after 2015, and using that money for temporary infrastructure spending in those four years. This is
a specific example of a more general idea, that you can stimulate demand through additional but temporary increases in government spending financed by temporary increases in taxes. The proposition
that this will stimulate aggregate demand is a pretty robust bit of macroeconomic theory. In fact we can go further, and say that the benchmark multiplier for such a policy will be at least one. This
suggests that this proposal, by using OBR figures, may be rather conservative in its estimate of the impact on UK growth.
The multiplier will be one if consumers are of the simple Keynesian type who consume some constant fraction of their current income. Higher taxes will reduce their income, but as long as their
propensity to consume is less than one, there is a net positive effect on demand, which gets translated into higher output and higher income. But higher income leads to higher consumption, and we get
the famous ‘balanced budget multiplier’ of one which every first year undergraduate learns how to prove. However, as Professor Michael Woodford has recently shown, we get exactly the same multiplier
of one if consumers are much more sophisticated, and look at their entire lifetime income when planning their consumption. The basic intuition here is that any temporary tax increase gets smoothed
over their lifetime, so the impact on current consumption is small. As the simple Keynesian case shows, any short term impact there is will be offset by higher incomes generated by higher government
This all assumes an unchanged level of real interest rates. Higher aggregate demand should lead to some increase in expected inflation. If the Bank of England keeps nominal interest rates unchanged
(which, with inflation falling, they should), then real interest rates will fall, which will provide an additional stimulus to demand. That is why the benchmark multiplier will be at least one. If
the government spending is in the form of useful infrastructure projects, this has the additional bonus of increasing future supply.
Why have tax financed temporary increases in public investment not been part of the austerity versus stimulus debate so far? For those who oppose austerity, I think the problem is a (correct) belief
that debt financed increases in spending would be even more effective at stimulating demand (because ‘Ricardian Equivalence’ does not hold), and that the short term dangers of increasing debt are
vastly overblown. While I think this line is right in principle, I fear this debate is unwinnable so long as the Eurozone crisis continues, and the media obsesses about ratings agencies. We can argue
till the cows come home that the Eurozone is different, because these countries do not have their own central bank, and that the market takes no notice of ratings agencies, but these arguments are
drowned out by the daily news about Greece and other Eurozone economies.
On the other side, among those who favour austerity, I think there is a reluctance to consider policies that increase the size of the state, even though this would only be for a few years. There is
also the obvious unpopularity of tax increases. However there is now a real danger that by the time of the next election UK unemployment may still be rising and the recovery will be modest. It is
going to be very hard to win an election with this record. (I do not think Argentina will distract attention again as it did in 1982!) The great advantage of tax financed increases in spending is
that they stimulate the economy without going back on the austerity pledge. Indeed, because they stimulate growth, they reduce headline budget deficit figures and so increase the likelihood that
austerity will be successful in bringing down the ratio of debt to GDP.
So these proposals from the Social Market Foundation are very welcome indeed. They show that something can be done to stimulate the economy without increasing debt. Of course there is a great deal of
discussion still to be had on what taxes to increase, and what investment to fund. However from a macroeconomic point of view most combinations will succeed in stimulating the economy. With
unemployment continuing to rise, there is still time to act. It is imperative that we do.
I have argued that the decision to reduce the UK budget deficit more rapidly in 2010 was a major policy error. (I looked at figures on cyclically corrected budget deficits in the UK, US and Eurozone
here.) One argument against this view is that without such a tightening, the UK would have been at greater risk of a loss of confidence in UK government debt. I think many believed that at the time,
because they thought what was happening in the Eurozone could happen to the UK. As interest rates on government debt continue to fall around the world, this fear looks increasingly groundless. As the
IMF has recently noted, growth as well as debt levels are important influences on market perceptions.
A rather better argument (see the first comment on this post) is that if fiscal policy had not tightened in 2010, the Monetary Policy Committee (MPC) of the Bank of England would have raised interest
rates in 2011. In the Spring of that year, 3 of the 9 members voted for an interest rate rise from the zero bound floor level of 0.5%. If the economy had been stronger because of less austerity,
would two or more committee members have switched sides, leading to an increase in UK interest rates?
A think it is far from clear that they would. Inflation was high in part because of the result of those austerity measures. VAT was increased from 17.5% to 20% at the beginning of 2011, which
probably added around 1% to inflation in 2011. You could argue that as this was always going to be a temporary influence, it was neither here nor there as far as MPC decisions were concerned. I think
this would be a little naive. One of the major concerns of MPC members around that time was the loss of reputation that the MPC might suffer if inflation got too high, and here I think the actual
numbers mattered.
But supposing the Bank had raised rates. Would that have been the right thing to do? In hindsight clearly not. The ECB did raise rates at this time, and that now looks like a very foolish decision,
but it looked pretty foolish at the time. (See this from Rebecca Wilder.) I also argued strongly against raising UK interest rates in early 2011. My note was called ‘Ten reasons not to raise interest
rates’, but the main argument was very simple. The costs of inflation exceeding its target were much lower than the costs of a persistently high output gap.
At the time it was possible to try and calculate these costs based on what the Bank itself was thinking, because it published output and inflation numbers under two alternative scenarios: one where
interest rates were kept flat and another where they increased through the year (based on market expectations at the time). Here is the table I put together.
Calculating social welfare
2012 2013 Loss Diff
Rising rates 2.4% 2% 0.16
Flat rates 2.6% 2.5%* 0.61 0.45
Output growth
Rising rates 2.7% 2.6%
Flat rates 3.0% 3.0%
Output gap
Rising rates 3.3% 2.7%* 18.18
Flat rates 3.0% 2.0% 13.00 -5.18
Numbers are estimated using the Bank of England’s February 2011 Inflation report. Output gap numbers assume a 4% gap in 2011 (consistent with the latest OECD Economic Outlook), and that potential
grows by 2% p.a. 2013 numbers are guesses based on extrapolating the Q1 forecast.
Raising rates through 2011 had virtually no impact on 2011 numbers, so these are ignored. Higher interest rates leave inflation is a little lower in 2012, and inflation then comes back to target in
2013. In contrast, keeping rates flat would leave inflation half a percent above target in 2013. Raising rates would reduce output growth by a quarter of a percent in 2012 and by half a percent in
2013, leaving the output gap 0.7% higher in 2013. Now suppose we take the difference between the forecast number and the target for inflation each year, square this figure and sum. We do the same for
the output gap. That gives a very crude measure of the social loss implied by each policy, and this is shown in the column headed loss. Take the difference between the two policies in the final
column. Raising rates clearly does better on inflation, but worse on the output gap. However the output gap losses are much larger, because inflation is near its target, but the output gap is not.
This puts into numbers a very simple idea, which is that missing the inflation target by half a percent is no big deal, but raising the output gap by over half a percent when it is already high is
much more costly. Now we can argue forever about the size of the output gap, but we need to remember that in these calculations it is mainly a proxy for the costs of higher unemployment, and we have
real data on unemployment.
We can put the same point another way. Although 5% inflation in 2011 sounded bad, it was the result of a temporary cost push shock, caused by higher VAT and energy prices. Inflation was bound to come
down again, because unemployment was high. (I think some in the Bank began to doubt this basic macroeconomic truth because they kept on underestimating inflation.) There was never any sign of higher
price inflation leading to higher wage inflation. In contrast, the recovery from the recession was slow, so this was the problem to focus on.
Crucial in this analysis is the view embodied in the Bank’s forecast that it takes some time before higher interest rates influence output and inflation. What this means is that to prevent inflation
rising in 2011, we really needed higher interest rates at the end of 2009. A year in which GDP fell by 5%! Those who argue that the MPC ‘failed’ because inflation reached 5% in 2011 are really
arguing that the MPC should have made the recession deeper.
So, if the MPC had raised interest rates in 2011, they would have been wrong to do so. That is obviously true in hindsight, but it was also true based on the more optimistic projections made at the
time. It would also have been true even if the economy had been stronger because of less austerity.
One final point on this policy error. It is just possible that, without the Eurozone crisis, the LibDems might not have been persuaded to adopt the Conservatives’ fiscal plans as part of the
coalition agreement. But the real source of the error is to be found much earlier, when the Conservatives opposed the government’s fiscal stimulus measures in 2008/9. From that point on, their
macroeconomic policy was all about austerity, and they denied that this would have harmful effects on the economy. I’m afraid I have no knowledge about why they decided to adopt this line, but it has
proved to be a very costly mistake.
In this post I claimed that 2010 should be counted as one of the major errors of UK macroeconomic policy. In fact the claim is much more general, because 2010 was the year that the consensus among
policymakers in the OECD area shifted from enacting stimulus to pursuing austerity, with damaging consequences in many countries. A number of comments and a couple of blogs added to my speculation on
why this error might have occurred. Here I want to consider more generally what role academics and economists can play in preventing policy errors, and why this may depend on the reasons for those
Before coming to that, let me address one common objection to my view that 2010 was a major policy error at the time, and not just in hindsight. The objection is that the slowdown in 2011 was due to
factors other than austerity that could not have been foreseen. There are two problems with this argument. First, the projected speed of recovery even before these adverse shocks occurred was
pitifully slow. What the IMF described in late 2010 as ‘solid UK growth’ was actually 2.5% per annum into the medium term, which on their own admission only gradually closed the output gap. The OBR’s
June 2010 post-budget forecast also had GDP growth of 1.2% in 2010, 2.3% in 2011, and never above 3% thereafter. Considering GDP was estimated to have fallen by 5% in 2009, this was a tepid recovery.
The second, and more important, flaw in this argument is that it ignores the fact that good policy should allow for risks. As I elaborated here, because of the zero bound for interest rates there was
no insurance policy if bad shocks did occur (as they did). In contrast, if positive shocks had led to a recovery that was too rapid, monetary policy could have been used to cool it down. To put it
more simply, austerity was a huge and unnecessary gamble, and the gamble did not pay off. Much the same could be said for many other countries.
One class of explanation for this kind of policy error focuses on hidden agendas. The most obvious in the case of austerity is a desire to reduce the size of the state, as Chris Dillow suggested. As
Mark Thoma put it: “The notion of "expansionary austerity" was the cover, but so long as government shrinks as a result of the policy, the expansionary part is secondary.” Chris has recently
suggested even darker motives. I put forward another, more mundane, explanation that is specific to the UK: get the cuts out of the way well before an election, and hope the electorate have short
memories. Perhaps an explanation specific to the US might be that it suited those opposed to the President that the economy failed.
If this type of explanation is correct, is there anything that can be done to prevent or expose this kind of subterfuge? In this post, I was rather pessimistic. I suggested that it required near
unanimity amongst academics before the media would begin to question the cover stories. Without unanimity, the cover story would just be described as controversial.
Brad DeLong has persistently railed against ‘opinions on shape of the Earth differ’ type reporting. All too often journalists appear to have only two categories - either something is objectively true
or it is controversial. Anything controversial requires evenly balanced reporting. A tragic example from the UK would be the debate over the MMR vaccine. As Lewis and Spears document, a single paper
in the Lancet suggesting a link with autism was hyped by the media, despite widespread scepticism among health experts and overwhelming scientific evidence that the vaccine was safe. As a
consequence, take up of the vaccine declined and outbreaks of measles increased. (Here is a good account of this episode from a US perspective.)
Even with academic unanimity, there is the possibility that moneyed interests could manufacture controversy through think tanks, as has happened with aspects of climate change debate. [Update 20/2/12
- on this see George Monbiot.] Of course the debate is still worth having, but it is unlikely to change things very much or very quickly.
This pessimism may be a little overdone, however, in the case of austerity. Politicians, above all, want to be re-elected. For that reason the cover story view does require a belief that the harmful
impact of (early) austerity will not last long enough for it to matter at the next election. If that is not the case, academics might be able to convince politicians that it may not be in their own
interests to undertake the policy.
Which brings me to another class of explanation, which is policymakers fooling themselves. The hidden agenda may still be there, but the difference is that politicians convince themselves that the
cover story is also true. In the case of austerity, there are a number of stories politicians can tell themselves. They can believe in expansionary austerity, of course. They could believe that
Quantitative Easing will be enough, although I would hope any central banker would tell them that they had no idea what impact QE might have. They might have believed that the recovery was well under
way, so any damage done by austerity would not be noticeable.
Does this case also require near unanimity amongst academics to convince the policymaker they are fooling themselves? There are at least two reasons to suggest it might. First, (macro)economics is
not held in the same regard as other sciences, for good reason. I would not go quite as far as one comment which said scientists proclaim facts while economists give out opinions – I think we are
somewhere in between these two, but still. Second, the two way link between ideologies and economics makes it too easy for the politician to dismiss views they do not like by believing they are
politically motivated, and it also makes it too easy for the politician to find academics who will tell them the stories they would like to hear.
A third class of explanation for policy mistakes is that they are genuine mistakes. Events may arise which come as a surprise to most academics as well as policymakers, so there is genuine
uncertainty. In terms of 2010, I think the probability of Greek default with possible Eurozone contagion was important at changing attitudes among those who might otherwise have been sympathetic to
more fiscal stimulus/less austerity. In the case of the UK, it may have been crucial in persuading Nick Clegg and the LibDems to support greater austerity as part of the coalition. As I wrote here:
“What finance minister can sleep easy when there is a chance that they too might be forced down the road being travelled by Greece, Ireland, Spain, Portugal and Italy?” Now I go on to argue,
following Paul De Grauwe, that this crisis was a crisis of the Eurozone, and not the precursor to a generalised government debt panic. Although that view is gaining increasing acceptance as interest
rates on government debt elsewhere continue to fall, at the time this proposition was neither obvious (governments with their own currencies default through inflation and depreciation, and lenders
will fear that) nor widely argued.
In situations of this type, academics can in principle have much more influence. Furthermore, the blogosphere allows for an immediacy that might just be able to influence opinions before mistakes are
made, and positions become entrenched. In the case of 2010 and austerity, I do not think it would have been enough. The political forces pushing for austerity, the hidden agendas, were too strong,
and the panic induced by events in the Eurozone too great. But academics should never become so pessimistic about their potential influence that they give up trying.
Chris Dillow has a nice follow-up to my earlier blog on balanced budget fiscal expansion. I first read the Kalecki paper when I was at Cambridge, but for better or worse this is not part of the macro
lectures I give at Oxford. We all miss Andrew Glyn a lot.
Is this what I had in mind when I said that if the government argued against all the possible balanced budget spending and tax measures that might stimulate the economy, we might suspect other
motives? Before trying to answer that, I should say that part of my complaint was that such policy ideas are not part of the public debate, so we do not know what the government’s response would be.
(We could infer, from the fact that none of these policies are being pursued, that they would be against them). I should also note that the FT suggests that the Liberal Democrats are thinking about
tax switches, although I have my doubts about whether raising the £10K tax threshold would be particularly effective at stimulating demand.
When I drew parallels in an earlier post between the current UK situation and 1981, I mentioned by way of anecdote a little speech I made at the internal meeting of Treasury economists at the time.
What I did not report was that at that meeting I made exactly the argument Kalecki puts forward. (Needless to say Kalecki was not normally quoted in discussions about budgets in the Treasury! - I was
young, and probably knew I was going to leave fairly soon.) I think what Kalecki says made a good deal of sense in that particular context: as we were to find out, part of the Thatcher agenda was to
take on the power of organised labour.
Whether it makes sense in the current context I’m less sure. Some of the factors identified by Chris in a later post could simply be attempts to deflect sympathy for the unemployed (which in turn
would translate into criticism of the government) rather than the more strategic design he suggests. What I probably had more in mind on this occasion were two things.
The first involves the point about ideology which I have mentioned several times. If your ideological perspective is that ‘government is always the problem’ and that the private sector is best left
alone, then blaming all our ills on the excesses of the previous government (rather than the financial system), and pursuing austerity by government as the means of correcting those ills fits well
with that perspective. To use government intervention as a way of correcting a problem with the private sector (insufficient demand) does not.
The second is that nearly all the fiscal proposals I suggest involve redistributing money from the rich to the poor. This makes macroeconomic sense, because the poor are more likely to be credit
constrained than the rich. It would also make sense from an equity point of view: the poor are suffering most as the result of austerity, as the chart below from the IFS illustrates. Unfortunately it
does not make political sense for the current government.
There is also a familiar but important point here about political influence and recognition. Issues to do with debt and financial markets are reported daily and major players in this area have almost
guaranteed access to politicians – from whatever party. They tend to be rich, and will complain about being taxed at 50%. The young unemployed, who now make up nearly a quarter of the 16-24 age
group, have by comparison very little political voice. Even when they are talked about when monthly figures are released, we are likely to get stories about motivation and how to brush up your CV,
rather than recognition that with many times more people looking for a job than there are vacancies no amount of self help will make the problem go away. (See this by Zoe Williams.) These are the
political reasons why the ‘counsels of despair’ that Jonathan Portes rightly complained about are able to endure.
Or why is no one talking about balanced budget expansion?
As UK unemployment continues to rise, Jonathan Portes asks why the government seems to accept this as inevitable. The same question could be asked in many other countries. The answer is invariably
that nothing can be done by way of fiscal stimulus because debt and deficits are so high. Jonathan argues (rightly in my view) that concerns about debt in the short term are hugely exaggerated. I’ve
suggested this is particularly true when we have Quantitative Easing. We seem to be in a strange prisoners dilemma where it is absolutely clear that the world wants more safe assets (the rate of
interest on indexed debt is zero if not negative), but every individual government thinks that if it provides them lenders will suddenly panic, and think they are no longer safe.
I think this fear is irrational, but unfortunately events in the Eurozone feed this fear on a daily basis. (It should not, because governments without their own central banks are in a different
position from those that have, but that is a rational argument.) Taking notice of ratings agencies is also irrational (see Jonathan again), but it adds to the fear. So even though I think it is
fairly easy to win the intellectual debate on debt, this may not be what is decisive.
If governments believe that they cannot add to government debt and deficits, does that mean nothing can be done? Absolutely not. A temporary increase in government spending financed by an increase in
taxes will still raise demand. First year economics undergraduate students will know about the balanced budget multiplier of one: every £1 spent by the government will lead to £1 extra demand,
because what consumers lose with lower taxes they gain through higher income. Readers of Michael Woodford (2011) will know that we get exactly the same multiplier with much more sophisticated
consumers, if real interest rates are fixed. (I try and explain why here.) So it does not matter what sort of consumers we have, the tax increase comes out of saving, and so demand and output rises
by the full extent of the spending increase. (For those who think lower savings will mean lower investment, see here.)
The news is better still if interest rates are stuck at the zero lower bound. Higher demand and output will imply some increase in inflation, and any increase in expected inflation will reduce real
interest rates, further stimulating activity. The size of the multiplier will be above one.
So there is something we can do with fiscal policy, without increasing government debt. Why does hardly anyone talk about this? (One exception is Robert Schiller.) I suspect the problem is as
follows. Those favouring stimulus think debt is not a constraint, and know that there are many reasons why a fiscal expansion financed by issuing more debt will be even more expansionary that one
financed through taxes. So why argue for second best? But why do those who do think debt is a constraint not welcome this alternative possibility of raising demand? The reason is obvious: a tax
financed fiscal expansion requires taxes to rise.
The problem is not an economic one. The distortionary effects of temporarily higher taxes are likely to be small relative to the resources wasted and permanent damage caused by high unemployment. (
See Chris Dillow as well as Jonathan Portes on this.) The problem is political, particularly for countries with right of centre governments. However, just because something may be politically
difficult should not stop the argument being made.
Once we get on to this territory, then there is further ground to explore. As well as tax financed government spending, we could think about tax and transfer switches that would stimulate demand. The
marginal propensity to consume out of a benefit increase is likely to be quite high, whereas that out of reducing the tax relief on pension contributions for high earners would be pretty small.
(Remember any tax switch need only be temporary.) Redistributing money from the old to the credit constrained young would also be likely to raise demand: raise child benefit by increasing death
duties, for example. Some companies are not that short of cash at the moment: how about tax incentives to encourage them to invest today rather than tomorrow.
The idea that there is no alternative to doing nothing about rising unemployment could not be further from the truth. If governments find objections to all these ideas, one might begin to suspect
that there are other motives at work.
I’m in the middle of lecturing on our second year macroeconomics course. There is so much ground to cover in a short amount of time, so on many occasions I have to stop myself launching into a long
discussion about why the world is much more complicated than the simple model I’m presenting. One example that will be familiar to many is the impact of minimum wages on employment, where the
evidence may not fit the standard textbook model. (The work of Card & Krueger is well known, but those in the US may be less familiar with similar work in the UK. Some recent work discussing
international evidence is here.)
I used to be more comfortable about using the standard labour market model, coupled with the variant involving imperfectly competitive goods and labour markets, to analyse the impact of immigration.
The idea that immigration was initially unpopular because it led either to lower real wages for the indigenous workforce, or an increase in the amount of involuntary unemployment generated by
imperfect competition, seemed to accord with popular perceptions (although see here (10/2/2012) for a discussion of the origin of such perceptions). And I was always careful to add that any decline
in real wages would disappear in the long run, as it would be eliminated by increased investment. However work in the UK has suggested that in this case the simple model may be seriously misleading
once again.
In early January the Government’s independent Migration Advisory Committee (MAC) published a report which was widely reported as finding that “an increase of 100 foreign-born working-age migrants in
the UK was associated with a reduction of 23 natives in employment for the period 1995 to 2010.” However further reading of the report finds this result is not at all robust. At the same time the
National Institute published findings that found no impact of immigration on unemployment. The two pieces of analysis are compared by Jonathan Portes here. Ian Preston at the Centre for Research and
Analysis of Migration (CReAM) at UCL notes (16/1/2012) that ‘There have been studies in several countries and the preponderance of evidence is strongly suggestive that employment effects are small if
they exist at all.’ (Here is a recent US study focusing on the impact on poverty.)
What about wages? The MAC study’s summary of empirical work on the impact of immigration on wages concludes “The majority of studies estimate that migrants had little impact on average wages,
differing in their assessments of whether migrants raised or lowered average wages.” There seems to be a common finding that immigration lowers wages a little at the bottom of the income
distribution, but raises them at the top. This is hardly consistent with the simple textbook labour market model.
Perhaps we can content ourselves that the textbook model might still be reliable for much larger changes in migration or minimum wages, but that for more modest changes factors like heterogeneity due
to skill shortages or monopsony can account for the empirical evidence cited above. However, even if I did have time to make these qualifications to the basic model in my lectures, would students
remember them, or just remember the predictions of the model?
That is the question asked by RobertWaldman (9th Feb) in a comment on my post, and also in a dialog with Mark Thoma. I’ll not attempt a full answer – that would be much too long – and Mark makes a
number of the important points. Instead let me just talk about one episode that convinced me that one part of New Keynesian analysis, the intertemporal consumer with rational expectations, was much
more useful than the ‘Old Keynesian’ counterpart that I learnt as an undergraduate.
In the mid 1980s I was working at NIESR (National Institute for Economic and Social Research) in London, doing research and forecasting. UK forecasting models at the time had consumption equations
which included current and lagged income, wealth and interest rates on the right hand side, using the theoretical ideas of Friedman mediated through the econometrics of DHSY (Davidson, J.E.H., D.F.
Hendry, F. Srba, and J.S. Yeo (1978). Econometric modelling of the aggregate time-series relationship between consumers' expenditure and income in the United Kingdom.Economic Journal, 88, 661-692.)
While the permanent income hypothesis appealed to intertemporal ideas, as implemented by DHSY and others using lags on income to proxy permanent income I think it can be described as ‘Old Keynesian’.
As the decade progressed, UK consumers started borrowing and spending much more than any of these equations suggested. Model based forecasts repeatedly underestimated consumption over this period.
Three main explanations emerged of what might be going wrong. In my view, to think about any of them properly requires an intertemporal model of consumption.
1) House prices. The consumption boom coincided with a housing boom. Were consumers spending more because they felt wealthier, or was some third factor causing both booms? There was much macro
econometric work at the time trying to sort this out, but with little success. Yet thinking about an intertemporal consumer leads one to question why consumers in aggregate would spend more when
house prices rise. (I don’t recall anyone suggesting it changed output supply, but then the UK is not St. Louis.) Subsequent work (Attanasio, O and Weber, G (1994) “The UK Consumption Boom of the
Late 1980s” Economic Journal Vol. 104, pp. 1269-1302) suggested that increased borrowing was not concentrated among home owners, casting doubt on this explanation.
2) Credit constraints. In the 1980s the degree of competition among banks and mortgage providers in the UK increased substantially, as building societies became banks and banks starting providing
mortgages. This led to a large relaxation of credit constraints. While such constraints represent a departure from the simple intertemporal model, I find it hard to think about how shifts in credit
conditions like this would influence consumption without having the unconstrained case in mind.
3) There was also much talk at the time of the ‘Thatcher miracle’, whereby supply side changes (like reducing union power) had led to a permanent increase in the UK’s growth rate. If that perception
had been common among consumers, an increase in borrowing today to enjoy these future gains would have been the natural response given an intertemporal perspective. Furthermore, as long as the
perception of higher growth continued, increased consumption would be quite persistent.
Which of the second two explanations is more applicable in this case remains controversial -see ‘Is the UK Balance of Payments Sustainable?’ John Muellbauer and Anthony Murphy (with discussion by
Mervyn King and Marco Pagano) Economic Policy Vol. 5, No. 11 (Oct., 1990), pp. 347-395 for example. However, I would suggest that neither can be analysed properly without the intertemporal consumer.
Why is this a lesson for Keynesian analysis? Well in the late 1980s the boom led to rising UK inflation, and a subsequent crash. Underestimating consumption was not the only reason for this increase
in inflation – Nigel Lawson wanted to cut taxes and peg to the DM – but it probably helped.
So this episode convinced me that it was vital to model consumption along intertemporal lines. This was a central part of the UK econometric model COMPACT that I built with Julia Darby and Jon
Ireland after leaving NIESR in 1990. (The model allowed for variable credit constraint effects on consumption.) The model was New Keynesian in other respects: it was solved assuming rational
expectations, and it incorporated nominal price and wage rigidities.
As I hope this discussion shows, I do not believe the standard intertemporal consumption model on its own is adequate for many issues. Besides credit constraints, I think the absence of precautionary
savings is a big omission. However I do think it is the right starting point for thinking about more complex situations, and a better starting point than more traditional approaches.
One fascinating fact is that Keynes himself was instrumental in encouraging Frank Ramsey to write "A Mathematical Theory of Saving" in 1928, which is often considered as the first outline of the
intertemporal model. Keynes described the article as "one of the most remarkable contributions to mathematical economics ever made, both in respect of the intrinsic importance and difficulty of its
subject, the power and elegance of the technical methods employed, and the clear purity of illumination with which the writer's mind is felt by the reader to play about its subject. " (Keynes, 1933,
"Frank Plumpton Ramsey" in Essays in Biography, New York, NY.) I would love to know whether Keynes ever considered this as an alternative to his more basic consumption model of the General Theory,
and if he did, on what grounds he rejected it.
Charles Goodhart writes (FT-£) that “Proposals that central bankers report their expectations of official rates, beyond some short future horizon, are retrograde, pushed forward by fashionable theory
without reference to empirical reality.” By short horizon here I think he means three or six months: much shorter than the recent move by the US Fed. Charles is usually right about most things, and
similar comments have also been made by other experienced ex-central bankers, so I think it is important to set out carefully one important counter argument.
Charles writes that “whether the publication of central bank predictions of the future path of interest rates is likely to be beneficial depends on the relative accuracy of such forecasts.” I
disagree. I’m happy to assume they are no better than those of forecasters in general. What is then gained by publication?
The key point is that central banks, or members of a central bank committee, have inside knowledge. Not about how the economy works, or of statistics a few days before they are published, but about
themselves. Their forecasts for interest rates tell us what they are likely to do if events (inflation, growth etc) turn out as they expect, and if they are being consistent. So the important
question is whether this information is useful.
Central to the academic case for delegation of interest rate decisions to central banks is that they are less susceptible to the temptations of time inconsistency. (Those familiar with what this
means can skip this and the next paragraph.) A classic example involves the trade-off between inflation and output. Suppose the monetary authority wants zero inflation, but would like output above
the level consistent with zero inflation (the ‘natural’ level of output). Suppose the public share those preferences. The monetary authority announces a policy of achieving zero inflation, and people
form expectations on that basis. Once those expectations have been set, the monetary authority realises that outcomes would be better if output was higher than the natural rate. This will raise
inflation above zero, but given their and the public’s views on output, everyone would be better off with a little bit more inflation and higher output. This is sometimes called reneging or cheating
by the policy maker, but notice that everyone is apparently better off when it changes its mind.
Now, if people are smart, they will know the monetary authority will behave this way, so they will not believe the initial announcement of zero inflation. In fact the only situation in which
expectations prove correct is when they are so high that it is no longer in the monetary authority’s interest to renege. So the outcome is high inflation with no gain in output. In this situation it
would be much better if the monetary authority could commit to zero inflation and not renege on this commitment. It is generally thought that central banks are more likely to be able to do this than
politicians, in part because politicians will be too tempted by short term gains, and will discount the longer term repercussions.
Being an independent central bank may increase your ability to commit to a policy in the face of the time inconsistency temptation, but it does not guarantee that result. Central bankers would like
to be able to establish their credibility for commitment. (I gave a particular example when it might matter a lot here.) Publishing interest rate forecasts allows them to do this. If events turn out
roughly as expected, yet the central bank does not follow its own forecast for interest rates, it might be because they are exploiting these time inconsistency possibilities. Following your own
forecasts when nothing changes does not prove that you are resisting such temptations, but it is consistent with doing so.
Now Charles Goodhart might respond, in the spirit of his article, by saying that events never turn out roughly as expected, and so interest rate forecasts can never be used in this way. However the
critical question is as follows. Is the fog of news so dense that we can never say anything useful using these forecasts? I think not. For a start, we not only have the bank’s interest rate
forecasts, but also the inflation forecasts that go with them. We often have no problem saying that the balance of news about the economy over some period is – say – that activity is stronger than
expected, and so inflation is expected to be higher. If, despite this, the central bank reduced interest rates compared to their own forecasts, we would be suspicious that they were being
inconsistent. However, if we do not know what they were expecting to do with interest rates (because these forecasts were not published), then we are no wiser about whether they are being consistent
or not.
Charles talks about the importance of putting error bands around forecasts in general (the fan charts), and I could not agree more. However in this particular case, we do not need such bands. To the
first approximation, we only need to know what their best guess was for future interest rates, and the associated best guesses for inflation and other variables. This is because we are using the
interest rate forecast to judge consistency, and not as a forecast per se.
Now perhaps the importance of time inconsistency problems in monetary policy is not as great as implied by the academic literature. However, I think the chances of the public being misled by the
publication of interest rate forecasts is so small that on this occasion it is worth taking this academic idea seriously. I believe that in ten years time, when more central banks publish interest
rate forecasts in this way, we will wonder what all the fuss was about. | {"url":"https://mainlymacro.blogspot.com/2012/02/","timestamp":"2024-11-02T23:54:24Z","content_type":"text/html","content_length":"268033","record_id":"<urn:uuid:ced823e7-7dc7-4ae5-9b20-c3cfabb05f7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00805.warc.gz"} |
(PDF) Newell and Simon's Logic Theorist: Historical Background and Impact on Cognitive Modeling
... Artificial intelligence (AI) is a method by which computer software is developed based on the study and use of human brain patterns and becomes capable of creating an intelligent product in a
cognitive way similar to the human brain. Although the history of AI development began not so long ago -in 1956, when Allen Newell and Herbert Simon created the first artificial intelligence program
-Logic Theorist, which proved 38 of the first 52 theorems in chapter two of Whitehead and Bertrand Russell's Principia Mathematica, and found new and shorter proofs for some of them [14], and the
term 'artificial intelligence' was first used by American computer scientist John McCarthy at the Dartmouth Conference in 1956 [37], it quickly spread and began to be used in various fields:
economics, art, education, military and construction, medicine, etc. The interest of people in AI and its possibilities exceeded the expectations of its makers, as in the late 20th and early 21st
century centuries, there was a kind of leap in its development, characterised by the development of the first chatbot ELIZA by Weizenbaum [43], the creation of the first intelligent humanoid robot
called WABOT-1 in Japan [42], the emergence of intelligent agents [13], including the IBM Deep Blue computer that beat world chess champion G. Garry Kasparov [29], the Roomba vacuum cleaner [7], and
the use of technologies by Facebook, X (Twitter), and Netflix. ... | {"url":"https://www.researchgate.net/publication/276216226_Newell_and_Simon's_Logic_Theorist_Historical_Background_and_Impact_on_Cognitive_Modeling","timestamp":"2024-11-08T22:34:04Z","content_type":"text/html","content_length":"493411","record_id":"<urn:uuid:13b0a9ba-ad4d-41a5-9f06-6083d4a51ba1>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00715.warc.gz"} |
ST_MinimumClearance — Returns the minimum clearance of a geometry, a measure of a geometry's robustness.
float ST_MinimumClearance(geometry g);
It is possible for a geometry to meet the criteria for validity according to ST_IsValid (polygons) or ST_IsSimple (lines), but to become invalid if one of its vertices is moved by a small distance.
This can happen due to loss of precision during conversion to text formats (such as WKT, KML, GML, GeoJSON), or binary formats that do not use double-precision floating point coordinates (e.g.
MapInfo TAB).
The minimum clearance is a quantitative measure of a geometry's robustness to change in coordinate precision. It is the largest distance by which vertices of the geometry can be moved without
creating an invalid geometry. Larger values of minimum clearance indicate greater robustness.
If a geometry has a minimum clearance of e, then:
• No two distinct vertices in the geometry are closer than the distance e.
• No vertex is closer than e to a line segment of which it is not an endpoint.
If no minimum clearance exists for a geometry (e.g. a single point, or a MultiPoint whose points are identical), the return value is Infinity.
To avoid validity issues caused by precision loss, ST_ReducePrecision can reduce coordinate precision while ensuring that polygonal geometry remains valid.
Availability: 2.3.0
SELECT ST_MinimumClearance('POLYGON ((0 0, 1 0, 1 1, 0.5 3.2e-4, 0 0))'); | {"url":"https://postgis.net/docs/manual-3.6/it/ST_MinimumClearance.html","timestamp":"2024-11-02T04:34:17Z","content_type":"application/xhtml+xml","content_length":"5668","record_id":"<urn:uuid:8b264725-93b1-4d42-8ed1-282a7a377047>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00791.warc.gz"} |
2013 IMO Problems/Problem 2
A configuration of $4027$ points in the plane is called Colombian if it consists of $2013$ red points and $2014$ blue points, and no three of the points of the configuration are collinear. By drawing
some lines, the plane is divided into several regions. An arrangement of lines is good for a Colombian configuration if the following two conditions are satisfied:
• no line passes through any point of the configuration;
• no region contains points of both colours.
Find the least value of $k$ such that for any Colombian configuration of $4027$ points, there is a good arrangement of $k$ lines.
We can start off by imagining the points in their worst configuration. With some trials, we find $2013$ lines to be the answer to the worst cases. We can assume the answer is $2013$. We will now
prove it.
We will first prove that the sufficient number of lines required for a good arrangement for a configuration consisting of $u$ red points and $v$ blue points, where $u$ is even and $v$ is odd and $u -
v = 1$, is $v$.
Notice that the condition "no three points are co-linear" implies the following: No blue point will get in the way of the line between two red points and vice versa. What this means, is that for any
two points $A$ and $B$ of the same color, we can draw two lines parallel to, and on different sides of the line $AB$, to form a region with only the points $A$ and $B$ in it.
Now consider a configuration consisting of u red points and v blue ones ($u$ is even, $v$ is odd, $u>v$). Let the set of points $S = \{A_1, A_2, ... A_k\}$ be the out-most points of the
configuration, such that you could form a convex k-gon, $A_1 A_2 A_3 ... A_k$, that has all of the other points within it.
If the set S has at least one blue point, there can be a line that separates the plane into two regions: one only consisting of only a blue point, and one consisting of the rest. For the rest of the
blue points, we can draw parallel lines as mentioned before to split them from the red points. We end up with $v$ lines.
If the set $S$ has no blue points, there can be a line that divides the plane into two regions: one consisting of two red points, and one consisting of the rest. For the rest of the red points, we
can draw parallel lines as mentioned before to split them from the blue points. We end up with $u-1 = v$ lines.
Now we will show that there are configurations that can not be partitioned with less than $v$ lines.
Consider the arrangement of these points on a circle so that between every two blue points there are at least one red point (on the circle).
There are no less than $2v$ arcs of this circle, that has one end blue and other red (and no other colored points inside the arc) - one such arc on each side of each blue point. For a line
partitioning to be good, each of these arcs have to be crossed by at least one line, but one line can not cross more than $2$ arcs on a circle - therefore, this configuration can not be partitioned
with less than $v$ lines!
Our proof is done, and we have our final answer: $2013$.
See Also | {"url":"https://artofproblemsolving.com/wiki/index.php/2013_IMO_Problems/Problem_2","timestamp":"2024-11-12T23:09:57Z","content_type":"text/html","content_length":"44536","record_id":"<urn:uuid:e75fbfa3-0ba1-4954-844e-ca01a80e5436>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00521.warc.gz"} |
Transferral of entailment in duality theory II: strong dualisability
Gouveia, M. J.; Haviar, M.
Czechoslovak Mathematical Journal, 61(2) (2011), 401-417
Results saying how to transfer entailment in certain minimal and maximal ways and how to transfer strong dualisability between two different finite generators of a quasi-variety of algebras are
presented. A new proof for a well-known result in the theory of natural dualities which says that strong dualisability of a quasivariety is independent of the generating algebra is derived. | {"url":"https://cemat.tecnico.ulisboa.pt/document.php?member_id=16&doc_id=416","timestamp":"2024-11-15T00:02:34Z","content_type":"text/html","content_length":"8357","record_id":"<urn:uuid:a3985c23-ded0-4f95-9854-6995b8d3592b>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00767.warc.gz"} |
UPSC CSAT Quiz – 2021: IASbaba’s Daily CSAT Practice Test – 2nd March 2021
Daily CSAT Practice Test
Everyday 5 Questions from Aptitude, Logical Reasoning, and Reading Comprehension will be covered from Monday to Saturday.
Make the best use of the initiative. All the best!
To Know More about Ace the Prelims (ATP) 2021 – CLICK HERE
Important Note:
• Don’t forget to post your marks in the comment section. Also, let us know if you enjoyed today’s test 🙂
• After completing the 5 questions, click on ‘View Questions’ to check your score, time taken and solutions.
To view Solutions, follow these instructions:
1. Click on – ‘Start Test’ button
2. Solve Questions
3. Click on ‘Test Summary’ button
4. Click on ‘Finish Test’ button
5. Now click on ‘View Questions’ button – here you will see solutions and links. | {"url":"https://iasbaba.com/2021/03/upsc-csat-quiz-2021-iasbabas-daily-csat-practice-test-2nd-march-2021/?theme_mode=dark&theme_mode=light","timestamp":"2024-11-11T13:33:12Z","content_type":"text/html","content_length":"147287","record_id":"<urn:uuid:451346d0-7329-4115-8394-cba460ab3080>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00139.warc.gz"} |
I am working on a GAMS MILP model. In my model, I have a variable that takes both positive and negative values. However, my model objective is to minimize the sum of variable that takes only
non-negative values. Please guide me on how to do that in GAMS MILP. Thanks in advance.
Variable X (can take both positive and negative values)
Objective… *(To minimize).
Answer =e= Sum(X); *where X has only non-negative values.
To unsubscribe from this group and stop receiving emails from it, send an email to gamsworld+unsubscribe@googlegroups.com.
To post to this group, send email to gamsworld@googlegroups.com.
Visit this group at http://groups.google.com/group/gamsworld.
For more options, visit https://groups.google.com/d/optout.
You can try this:
Y are variables which can be positive or negative. X are positive variables defined as:
X =g= Y ;
Answer =e= Sum(X) ;
Since the objective is to minimize ‘Answer’, if Y is positive, X will take the value of Y, else if Y is negative, X will take the value of 0.
在 2015å¹´5月29日星期五 UTC+8ä¸‹å ˆ10:14:30,CuriousHumanå†™é “ï¼š
I am working on a GAMS MILP model. In my model, I have a variable that takes both positive and negative values. However, my model objective is to minimize the sum of variable that takes only
non-negative values. Please guide me on how to do that in GAMS MILP. Thanks in advance.
Variable X (can take both positive and negative values)
Objective… *(To minimize).
Answer =e= Sum(X); *where X has only non-negative values.
To unsubscribe from this group and stop receiving emails from it, send an email to gamsworld+unsubscribe@googlegroups.com.
To post to this group, send email to gamsworld@googlegroups.com.
Visit this group at http://groups.google.com/group/gamsworld.
For more options, visit https://groups.google.com/d/optout. | {"url":"https://forum.gams.com/t/gams-milp-way-to-choose-variables-have-positive-value/1489","timestamp":"2024-11-09T15:37:41Z","content_type":"text/html","content_length":"17777","record_id":"<urn:uuid:9ff0c64a-8fc2-4edb-945a-439cba26ef64>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00551.warc.gz"} |
[Solved] Calculate the relevant cash flows (for each year) - The Quizlet
Calculate the relevant cash flows (for each year) for the following capital budgeting proposal. Enter the total net cash flows for each year in the answer sheet. (10 points)
• $90,000 initial cost for machinery;
• depreciated straight-line over 4 years to a book value of $10,000;
• 35% marginal tax rate;
• $55,000 additional annual revenues;
• $25,000 additional annual cash expense;
• annual expense for debt financing is $7,500.
• $3,500 previously spent for engineering study;
• The project requires inventory increase by $32,000 and accounts payable increase by $14,000 at the beginning of the project;
• The investment in working capital occurs one time at the beginning of the project and it requires working capital return to the original level when the project ends in 4 years;
• 11% cost of capital;
• life of the project is 4 years; and
• The new equipment will be sold at the end of 4 years; expected market value of the new equipment at the end of 4 years is $15,000;
Net Cash Flow
Looking for help with your homework?
Grab a 30% Discount and Get your paper done!
Place an Order | {"url":"https://www.thequizlet.com/2022/03/08/calculate-the-relevant-cash-flows-for-each-year/","timestamp":"2024-11-02T14:38:54Z","content_type":"text/html","content_length":"38386","record_id":"<urn:uuid:95ae8de5-6ea7-470e-9de9-d878df4d13c6>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00091.warc.gz"} |
If Then based on a date and number of days
I have the following formula that works. When I try to add some IF THEN conditions I can't get formula to work.
=IFERROR(WORKDAY([Due Date]@row, -2, {Holidays Range 3}), "pending")
I am starting with a Due Date I want to calculate a Final File Release Date 2 days prior to Due Date, but if the days until due date column is <3 days I want the formula to bring back the actual Due
Date. I also want to ensure the due dates are WORKDAY only and eliminate holidays as well, I don't want a Final due date to fall on the weekend or a holiday.
Here are my attempts neither of these are working.
First formula is to try to get it to bring back the due date if days until due are <3
=IF([Days Until Due Date]@row < 3, WORKDAY([Due Date]@row) {Holidays Range 3})) I get error unparseable
the formula below was an attempt to combine the above scenario with the second part of if the days until due are then subtract 2 days from the due date
=IF([Days Until Due Date]@row < 3, [Due Date]@row, {Holidays Range 3}, IF(OR([Days Until Due Date]@row > 3, [Due Date]@row, -2, {Holidays Range 3}))) I get error incorrect argument set
Best Answers
• A few things:
First formula is to try to get it to bring back the due date if days until due are <3
=IF([Days Until Due Date]@row < 3, WORKDAY([Due Date]@row) {Holidays Range 3})) I get error unparseable
You're missing a comma between the WORKDAY([Due Date]@row) and {Holidays Range 3}, and there's an extraneous parentheses, which could cause the unparseable error. But you don't really need to use
WORKDAY here.
=IF([Days Until Due Date]@row < 3, [Due Date]@row, {Holidays Range 3}, IF(OR([Days Until Due Date]@row > 3, [Due Date]@row, -2, {Holidays Range 3}))) I get error incorrect argument set
Your use of {Holidays Range 3} would not work in the first part of this formula. The way you have the syntax, your first IF statement is saying if it's less than 3 days until the due date, show
me the due date for the row, otherwise here's a list of holidays. 🤔 I think you're overthinking it with this one! Try this:
=IF([Days Until Due Date]@row < 3, [Due Date]@row, WORKDAY([Due Date]@row, -2, {Holidays Range 3}))
The logic: If there are fewer than 3 days until the due date, make this cell equal the Due Date from this row, otherwise (aka more than 3 days until the due date,) make this cell equal two
workdays before the due date.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• Glad I was able to help figure this one out. If you could mark the answer as Accepted, I'd appreciate it!
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• A few things:
First formula is to try to get it to bring back the due date if days until due are <3
=IF([Days Until Due Date]@row < 3, WORKDAY([Due Date]@row) {Holidays Range 3})) I get error unparseable
You're missing a comma between the WORKDAY([Due Date]@row) and {Holidays Range 3}, and there's an extraneous parentheses, which could cause the unparseable error. But you don't really need to use
WORKDAY here.
=IF([Days Until Due Date]@row < 3, [Due Date]@row, {Holidays Range 3}, IF(OR([Days Until Due Date]@row > 3, [Due Date]@row, -2, {Holidays Range 3}))) I get error incorrect argument set
Your use of {Holidays Range 3} would not work in the first part of this formula. The way you have the syntax, your first IF statement is saying if it's less than 3 days until the due date, show
me the due date for the row, otherwise here's a list of holidays. 🤔 I think you're overthinking it with this one! Try this:
=IF([Days Until Due Date]@row < 3, [Due Date]@row, WORKDAY([Due Date]@row, -2, {Holidays Range 3}))
The logic: If there are fewer than 3 days until the due date, make this cell equal the Due Date from this row, otherwise (aka more than 3 days until the due date,) make this cell equal two
workdays before the due date.
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
• @Jeff Reisman Thank you Jeff this worked perfectly. I always miss a comma or yes overthink what I am trying to achieve when composing the formula.
• Glad I was able to help figure this one out. If you could mark the answer as Accepted, I'd appreciate it!
Jeff Reisman
Link: Smartsheet Functions Help Pages Link: Smartsheet Formula Error Messages
If my answer helped solve your issue, please mark it as accepted so that other users can find it later. Thanks!
Help Article Resources | {"url":"https://community.smartsheet.com/discussion/86414/if-then-based-on-a-date-and-number-of-days","timestamp":"2024-11-02T04:38:58Z","content_type":"text/html","content_length":"442087","record_id":"<urn:uuid:812e9a26-ac02-457d-914e-1885be315bae>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00706.warc.gz"} |
Mathematics of Planet Earth
You may have read Edward Belbruno’s blog on New Ways to the Moon, Origin of the Moon, and Origin of Life on Earth of October 4th. I did and was intrigued by his application of weak transfer to the
origin of the Moon, so I went to his 2005 joint paper with J. Richard Gott III with the same title, published in the Astronomical Journal.
Indeed, I already knew the earlier work of Jacques Laskar on the Moon in 1993. At the time, he had proved that it was the presence of the Moon that stabilizes the inclination of the Earth’s axis.
Indeed, the axis of Mars has very large oscillations, up to 60 degrees, and Venus’s axis also had large oscillations in the past. The numerical simulations show that, without the Moon, the Earth’s
axis would also have very large oscillations. Hence, the Moon is responsible for the stable system of seasons that we have on Earth, which may have favored life on our planet.
The current most plausible theory for the formation of the Moon is that it comes from the impact of a Mars-size planet with the Earth, which we will call the impactor. For information, the radius of
Mars is 53% of that of the Earth, its volume is 15% of that of the Earth, and its mass only 10%. Evidence supporting the impactor hypothesis comes from the geological side: the Earth and the Moon
contain the same types of oxygen isotopes, which are not found elsewhere in the solar system. The Earth and Mars both have an iron core, while the Moon has none. The theory is that, at the time of
the collision, the iron in the Earth and in the impactor would already have sunk into their core, and also that the collision was relatively weak, hence only expelling debris from the mantle which
would later aggregate into the Moon, while the two iron cores would have merged together. Indeed, the mean density of the Moon is comparable to that of the mean density of the Earth’s crust and upper
Mathematics cannot prove the origin of the Moon. It can only provide models which show that the scenario of the giant impactor makes sense, and that it makes more sense than other proposed scenarios.
It is believed that the planets would have formed by accretions of small objects, called planetesimals. Because the impactor and the Earth had similar composition, they should have formed at roughly
at the same distance from the Sun, namely one astronomical unit (AU). But then, why would it have taken so long before a collision occurred? Because the Earth and the impactor were at stable
positions. The Sun, the Earth and the impactor form a 3-body problem. Lagrange identified some periodic motions of three bodies where they are located at the vertices of an equilateral triangle: the
corresponding points for the third bodies are called Lagrange L4 and L5 points. These motions are stable when the mass of the Earth is much smaller than that of the Sun and the mass of the impactor,
10% of that of the Earth. Stability is shown rigorously using KAM theory for the ideal circular planar restricted problem and numerically for the full 3-dimensional 3-body problem, with integration
over 10Myr.
Hence, it makes sense that a giant impactor could have formed at L4 or L5: this impactor is called Theia in the literature on the subject. Simulations indeed show that Theia could have grown by
attracting planetesimals in its neighborhood. Let’s suppose that Theia is formed at L4. Why then didn’t it stay there? Obviously, it should have been destabilized. Simulations show that some small
planetesimals located near the same Lagrange point could have slowly pushed Theia away from L4. The article of Belbruno and Gott studies the potential movements after destabilization. What is crucial
is that, since the three bodies were at the vertices of an equilateral triangle, Theia and the Earth are at equal distances from the Sun. If the orbit of the Earth is nearly circular, Theia and the
Earth share almost the same orbit! This is why there is a high danger of collision when Theia is destabilized.
If the ejection speed were small, Theia would move back and forth along a trajectory resembling a circular arc centered at the Sun with additional smaller oscillations. In a frame centered at the Sun
and rotating with the Earth (hence the Earth is almost fixed), Theia moves back and forth in a region that looks like a horseshoe (see figure).
In this movement it never passes close to the Earth. An asteroid with a diameter of 100m, 2002 AA_29, discovered in 2002, has this type of orbit. This horseshoe region almost overlaps the Earth’s
orbit. For a higher ejection speed, Theia would be pushed into an orbit around the Sun with radius approximately 1 AU and gradually creep towards the Earth’s orbit: it would pass regularly close to
the Earth periapsis (the point of the Earth’s orbit closest to the Sun) in nearly parabolic trajectories, i.e., trajectories borderline of being captured by the Earth. Since the speed vectors of the
two planets are almost parallel, the gravitational perturbation exerted by the Earth on Theia at each fly-by is small. The simulations show that these trajectories have a high probability of
collision with Earth, not so long after leaving the Lagrange points (of the order of 100 years). Note that this kind of trajectory is highly chaotic and many simulations with close initial conditions
allow seeing the different potential types of trajectories.
Christiane Rousseau
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"http://mpe.dimacs.rutgers.edu/2013/10/11/where-did-the-moon-come-from/","timestamp":"2024-11-10T21:57:58Z","content_type":"text/html","content_length":"41348","record_id":"<urn:uuid:bb8074b2-5bae-49fe-bfef-3774aeec0989>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00585.warc.gz"} |
8.1: Graphing Motion
Last updated
Page ID
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\)
( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\id}{\mathrm{id}}\)
\( \newcommand{\Span}{\mathrm{span}}\)
\( \newcommand{\kernel}{\mathrm{null}\,}\)
\( \newcommand{\range}{\mathrm{range}\,}\)
\( \newcommand{\RealPart}{\mathrm{Re}}\)
\( \newcommand{\ImaginaryPart}{\mathrm{Im}}\)
\( \newcommand{\Argument}{\mathrm{Arg}}\)
\( \newcommand{\norm}[1]{\| #1 \|}\)
\( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\)
\( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\)
\( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\)
\( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\)
\( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vectorC}[1]{\textbf{#1}} \)
\( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \)
\( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \)
\( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \)
\( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \)
\( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \)
\(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\
evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\
newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y}
\) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real}
{\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec}
[3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array}
{r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\
#3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp
#2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\
wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\
newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var}
{\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\
bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\
widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\)
Force and Kinematics
Our focus so far has been on the details of force, and comparing the motion of an object before and after the force acted on the object, typically at two time instances. We will now look at the
motion of an object for a continuous duration of time while a net force acts on the system or when the net force is zero. We first do this by graphically representing the time dependence of motion by
analyzing acceleration, velocity, and position as a function of time. These three vectors are connected by the following equations that we have introduced in the earlier chapters:
\[\vec v=\dfrac{d\vec x}{dt};~~~~~\vec a=\dfrac{d\vec v}{dt}\label{xva}\]
We will see how to make sense of these equations graphically by looking at a few specific examples. Below are plots demonstrating motion of a box which is initially moving to the right with a net
force also pointing to the right.
Figure 8.1.1: Force and Initial Velocity in the Same Direction
By convention we define to the right as positive. Then, acceleration will be positive as well according to Newton's second law since the net force is pointing to the right. We assume that the force
is constant over this time range, resulting in acceleration being constant as well. Thus, acceleration plotted as a function of time is a horizontal line, as shown in Figure 8.1.1. The acceleration
is arbitrarily chosen as 1 grid unit on this scale.
Equation \ref{xva} states that the slope of the velocity plot will give acceleration. This means that if acceleration is a constant, velocity must be linear as a function of time with a slope of 1
unit. This is consistent with Newton's second law which states that when there is a net force acting on the system, acceleration is non-zero, which implies that velocity is changing. There are
infinite ways to have velocity changing with time with a slope of 1, since the line can intersect anywhere on the y-axis and have the same slope. Thus, another piece of information that is required
to make the velocity versus time plot is the initial velocity at \(t=0\). In the example in Figure 8.1.1 the initial velocity, \(v_o\), is arbitrarily set to 2 units. From the velocity plot we can
see that the box is speeding up, since its speed is increasing with time.
Lastly, Equation \ref{xva} states that the slope of position plot is the velocity plot. In this case since velocity is linear and increasing, this means that the slope of position plot increases with
time, and it does so in a quadratic manner. Thus, the position versus time plot in Figure 8.1.1 has a parabolic shape. As for the velocity plot, there are infinite ways to have the have shape, since
the plot can be moved vertically up or down while retaining its slope. So we need to define the origin in order to determine where the position crosses the y-axis. It is convenient to define that
origin at initial time, so in this case the position is zero at \(t=0\). The position versus time tells us that the object is moving to the right and speeding up since the slope is positive and
increasing with time.
Let us now turn to another example where the force and the initial velocity point in opposite direction as shown in Figure 8.1.2 below. The box is still moving to the right initially, but the force
now points to the left. We will again assume that the force is constant over the time range that we want to analyze the motion of the box.
Figure 8.1.2: Force and Initial Velocity in the Opposite Directions
In this example acceleration is negative since the net force is to the left. Again, we choose a magnitude of 1 unit on this scale to represent constant acceleration over this time range. This means
that velocity will have a slope of negative 1 units. Again, we need to choose initial velocity, and in this case we choose 4 units as shown in Figure 8.1.2. The initial velocity is positive since the
box is moving to the right, but we see from the plot that when the slope is negative, the velocity plot will cross the x-axis and will start increasing in the negative direction. This means that
initially the box is slowing down as the magnitude of velocity decreases, since the force is in the opposite direction of position. But at the moment when the velocity plot crosses the x-axis (at 4
units of time), the box temporarily stops, after which it turns around and starts moving in the negative direction. Recall, since velocity is a vector, negative velocity means that its it moving in
the negative direction. And since the velocity is getting more negative after the box turns around, the speed, which is the magnitude of velocity, is increasing, so the box is now speeding up. The
box starts to speed up after 4 units of time, the motion of the box is now in the same direction as the force.
For the position plot, we set the origin at the initial time again as seen in Figure 8.1.2. The shape of the plot is again parabolic since its slope has to be linear based of the velocity plot.
Initially, the slope is positive and decreasing, corresponding to the box moving to the right and slowing down. At time of 4 units the slope of the position plot is zero. This is the exact time when
the velocity goes to zero before the box turns around. After this, the slope of the parabolic shape negative and increasing since the box is now moving to the left and speeding up. Note, that the
position is not negative when the box starts moving to the left, since negative position just means that the box is to the left of the origin. Initially, the box moved to the right of the origin, as
it was slowing down. When it first turned around and started moving to the left, it was still to the right of the origin until it returned back to its starting position, exactly at 8 units of time on
the position plot. After this, the box is located to the left of the origin, thus, position is negative.
As you work with analysis of motion for different physical situations, here are a list of a few key points to keep in mind when making acceleration, velocity and position plots:
• Acceleration, velocity, and position are vectors which can have positive, negative, or zero values depending on their direction and location of origin.
• If you know one of the graphs, you can obtain the other two as long as you know initial conditions.
• Acceleration points in the same direction as the net force.
• Acceleration is zero when the net force is zero.
• Acceleration plot alone does not contain any information whether the object is speeding up or slowing down, since it does not tell you which way the object is moving, only in which direction its
velocity is changing.
• The slope of the velocity plot is the acceleration.
• Initial velocity has to be known to make the velocity versus time plot.
• When acceleration and velocity have the same sign, the system is speeding up.
• When acceleration and velocity have opposite signs, the system is slowing down.
• When velocity plot crosses the x-axis (changes sign), the system is turning around.
• When the slope of velocity plot is zero, acceleration is zero, implying zero net force.
• The slope of the position plot is the velocity.
• Initial position has to be known to make the position versus time plot.
• Positive slope means the object is moving to the right.
• Negative slope means the object is moving to the left.
• Increasing magnitude of slope (either negative or positive) means the object is speeding up.
• Decreasing magnitude of slope (either negative or positive) means the object is slowing down.
• Zero slope means the object is stationary.
• When the sign of the slope of the position plot changes, this means the object has changed direction of motion.
• The sign of the position plot does not tell us about the direction of motion, but an indication of whether the object is located on the positive or negative side of the origin.
Example \(\PageIndex{1}\)
Below is a velocity versus time plot.
a) For each marked region specify the direction of motion (right, left, or turning around) and describe the speed (zero, speeding up, slowing down, or constant speed).
b) Make an acceleration versus time plot for the entire time shown.
a) Region 1: object starts with non-zero velocity, is moving to the left since velocity is negative and speeding up since the magnitude of velocity is increasing.
Region 2: object is moving to the left and slowing down. After 4 units of time in this region, the object turns around, after which it's moving to the right and speeding up.
Region 3: object is moving to the right with constant speed.
Region 4: object is moving to the right and slowing down, after 2 units of time it turns around, then moving to the left and speeds up.
b) Below is the acceleration versus time plot. The plots in each region are obtained from the slope of velocity versus time plot for each marked region: region 1 the slope is -1/2, in region 2
the slope is 1, in region 3 the slope is zero, and in region 4 the slope is-2.
Example \(\PageIndex{2}\)
Shown below is a rollercoaster ride. At the start of a drop, defined as t=0 sec, the train is moving with a speed of 6 m/s. The rest of the motion is depicted in the picture below. Assume the track
is frictionless from t=0 to t=20sec. Between t=20sec and 30sec, the breaks create friction with the tracks. For this problem assume that the x-axis always points horizontal to the track (in the
direction of motion), and the y-axis points perpendicular to the track, as depicted in the picture.
a) Draw four force diagrams for the train at t=2 sec, t=12 sec, t=18 sec, and t=25 sec. For each force diagram, split the forces into x-components (horizontal to the track) and y-component
(perpendicular to the track).
b) Make a plot for component of velocity and acceleration horizontal the track (x-direction as shown in the figure) as a function of time from t=0 to t=30 sec.
a) The free-body diagrams are shown below.
t=2 sec: in the first 5 seconds the track is frictionless, so there is only the force of gravity and the normal force. Gravity always points down, so it has a component along the track
(x-direction in the tilted axis) and perpendicular to the track (y-direction in the tilted axis). The normal force is always perpendicular to the surface, thus it points in the y-direction. The
y-component of gravity and the normal force must cancel since there is no motion in the y-direction. The net force is due to the force of gravity in the x-direction.
t=12 sec: the train is moving horizontally with zero net force, since there are only vertical forces present (friction is still zero in this region). The normal force and gravity are equal and
t=18 sec: the component of the gravitation force points in the negative x-direction. Also, since the slope of the track is less steep than at 2 sec, the component of gravity along the track is
smaller, so acceleration will have a smaller magnitude compared to when the train was moving down during the first 5 seconds.
t=25 sec: the motion is also horizontal, but the breaks are engaged creating friction with the track pointing to the left as shown below.
b) For the first 5 seconds, to find acceleration you need to calculate the x-component of gravity using the angle provided in the figure:
\[a=\frac{F_{gx}}{m}=g\sin\theta=(9.8 m/s^2)(\sin 50^{\circ})=7.51 m/s^2\nonumber\]
The velocity is related to acceleration as:
\[a=\frac{\Delta v}{\Delta t}=\frac{v_f-v_i}{t_f-t_i}\nonumber\]
The initial time is 0 seconds. Solving for \(v_f\) at the bottom of the ramp, when \(t_f= 5 sec\):
\[v_f=v_i+at_f=(6 m/s)+(7.51 m/s^2)(5 sec)=43.5 m/s\nonumber\]
Between 5 and 15 seconds acceleration is zero since there is no net force, so velocity remains constant at 43.5 m/s. Between 15 and 20 seconds the net forces points in the negative x-direction:
\[a=\frac{F_{gx}}{m}=-g\sin\theta=-(9.8 m/s^2)(\sin 25^{\circ})=-4.14 m/s^2\nonumber\]
Solving for \(v_f\) at 20 seconds at the top of the ramp:
\[v_f=v_i+a(t_f-t_i)=(43.5 m/s)+(-4.14 m/s^2)(20sec-15sec)=22.8 m/s\nonumber\]
Between 20 and 30 seconds, we don't know the magnitude of the force, but we know that the train stops at 30 seconds:
\[a=\frac{v_f-v_i}{t_f-t_i}=\frac{0-22.8 m/s}{30sec-20sec}=-2.28 m/s^2\nonumber\]
All of these calculations are summarized in the graphs below.
In this section we focused on depicting and interpreting motion graphically. In the next section, we will develop mathematical equations which will equivalently describe the motion and allow for more
exact calculations of velocity and position at specific instances of time. | {"url":"https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Physics_7B_-_General_Physics/8%3A_Force_and_Motion/8.1%3A_Graphing_Motion","timestamp":"2024-11-07T02:23:54Z","content_type":"text/html","content_length":"140722","record_id":"<urn:uuid:4f74c85c-2b35-46f1-a5a4-7384a117f745>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00096.warc.gz"} |
[Solved] 2. A grocery store sells 100 bottles of c | SolutionInn
Answered step by step
Verified Expert Solution
2. A grocery store sells 100 bottles of coke a week. Annual holding cost per bottle is $1. Ordering cost is $20 per order.
2. A grocery store sells 100 bottles of coke a week. Annual holding cost per bottle is $1. Ordering cost is $20 per order. Assume 52 weeks per year. (Round your answers to 2 decimal places.) a. (5
pts) How many bottles of coke should the store order for each order? b. (3 pts) How many times per year should the store place an order? c. (2 pts) How many weeks will elapse between two consecutive
orders? d. (3 pts) Calculate the total annual cost of placing orders and holding inventory. e. (7 pts) It takes 3 weeks to receive the bottles of coke after placing an order. Assume that the weekly
demand is normally distributed with mean 100 and standard deviation 20. Calculate the reorder point to achieve the service level of 90% during the lead time. f. (5 pts) The demand distribution is the
same as in part e. Suppose that the lead time is 4 weeks. If the store wishes to carry no safety stock, what should the reorder point be?
There are 3 Steps involved in it
Step: 1
Get Instant Access to Expert-Tailored Solutions
See step-by-step solutions with expert insights and AI powered tools for academic success
Ace Your Homework with AI
Get the answers you need in no time with our AI-driven, step-by-step assistance
Get Started | {"url":"https://www.solutioninn.com/study-help/questions/2-a-grocery-store-sells-100-bottles-of-coke-a-1067954","timestamp":"2024-11-13T02:46:52Z","content_type":"text/html","content_length":"105203","record_id":"<urn:uuid:1b34035e-c509-4dac-bef3-cec7ea2c1a8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00551.warc.gz"} |
Programs to combine partially observed (high dimensional)
CovComb {CovCombR} R Documentation
Programs to combine partially observed (high dimensional) covariance matrices. Combining datasets this way, using relationships, is an alternative to imputation.
Use for combining partially observed covariance matrices. This function can be used for combining data from independent experiments by combining the estimated covariance or relationship matrices
learned from each of the experiments.
CovComb(Klist = NULL, Kinvlist = NULL,
lambda = 1, w = 1, nu = 1000,
maxiter = 500, miniter = 100, Kinit = NULL,
tolparconv = 1e-04,
loglik=FALSE, plotll=FALSE)
Klist A list of covariance / relationship matrices with row and column names to be combined.
Kinvlist A list of inverse covariance / relationship matrices with row and column names to be combined, default NULL.
lambda A scalar learning rate parameter, between 0 and 1. 1 is the default value.
w Weight parameter, a vector of the same length as Klist, elements corresponding to weights assigned to each of the covariance matrices. Default is 1.
Degrees of freedom parameter. It is either a scalar (same degrees of freeom to each of the covariance component) or a vector of the same length as Klist elements of which correspond to
nu each of the covariance matrices. Currently, only scalar nu is accepted. Default is 1000. the value of nu needs to be larger than the variables in the covariance matrix.
maxiter Maximum number of iterations before stop. Default value is 500.
miniter Minimum number of iterations before the convergence criterion is checked. Default value is 100.
Kinit Initial estimate of the combined covariance matrix. Default value is an identity matrix.
tolparconv The minimum change in convergence criteria before stopping the algorithm unless the maxiter is reached. This is not evaluated in the first miniter iterations. Default value is 1e-4.
loglik Logical with default FALSE. Return the path of the log-likelihood or not.
plotll Logical with default FALSE. Plot the path of the log-likelihood or not.
Let A=\left\{a_1, a_2, \ldots, a_m \right\} be the set of not necessarily disjoint subsets of genotypes covering a set of K (i.e., K= \cup_{i=1}^m a_i) with total n genotypes. Let G_{a_1}, G_{a_2},\
ldots, G_{a_m} be the corresponding sample of covariance matrices.
Starting from an initial estimate \Sigma^{(0)}=\nu\Psi^{(0)}, the Wishart EM-Algorithm repeats updating the estimate of the covariance matrix until convergence:
\Psi^{(t+1)} =\frac{1}{\nu m}\sum_{a\in A}P_a\left[ \begin{array}{cc} G_{aa} & G_{aa}(B^{(t)}_{b|a})' \\ B^{(t)}_{b|a}G_{aa} & \nu \Psi^{(t)}_{bb|a}+ B^{(t)}_{b|a}G_{aa}(B^{(t)}_{b|a})' \end{array}\
where B^{(t)}_{b|a}=\Psi^{(t)}_{ab}(\Psi^{(t)}_{aa})^{-1}, \Psi^{(t)}_{bb|a}=\Psi^{(t)}_{bb}-\Psi^{(t)}_{ab}(\Psi^{(t)}_{aa})^{-1}\Psi^{(t)}_{ba}, a is the set of genotypes in the given partial
covariance matrix and b is the set difference of K and a. The matrices P_a are permutation matrices that put each matrix in the sum in the same order. The initial value, \Sigma^{(0)} is usually
assumed to be an identity matrix of dimesion n. The estimate \Psi^{(T)} at the last iteration converts to the estimated covariance with \Sigma^{(T)}=\nu\Psi^{(T)}.
A weighted version of this algorithm can be obtained replacing G_{aa} in above equations with G^{(w_a)}_{aa}=w_aG_{aa}+(1-w_a)\nu\Psi^{(T)} for a vector of weights (w_1,w_2,\ldots, w_m)'.
Combined covariance matrix estimate. if loglik is TRUE, the this is a list with first element equal to the covariance estimate, second element in the list is the path of the log-likelihood.
Deniz Akdemir // Maintainer: Deniz Akdemir deniz.akdemir.work@gmail.com
- Adventures in Multi-Omics I: Combining heterogeneous data sets via relationships matrices. Deniz Akdemir, Julio Isidro Sanchez. bioRxiv, November 28, 2019
####Using Iris data for a simple example
##Setting seed for reproducability.
###The input of the CovComb is a list of partial
#covariance matrices for the species 'virginica'.
CovList<-vector(mode="list", length=3)
###Note that the covariances between the variables
##1 and 2, 2 and 3, and 3 and 4 are not observed in
##the above. We will use these covariance matrices
##to obtain a 4 by 4 covariance matrix that estimates
##these unobserved covariances.
outCovComb<-CovComb(CovList, nu=40)
#####Compare the results with what we would get
#if we observed all data.
####Compare the same based on correlations.
####Here is a simple plot for visual comparison.
image(cov2cor(outCovComb),xlab="", ylab="", axes = FALSE, main="Combined")
axis(1, at = seq(0, 1, length=4),labels=rownames(outCovComb), las=2)
axis(2, at = seq(0, 1, length=4),labels=rownames(outCovComb), las=2)
image(cov2cor(cov(iris[101:150,1:4])),xlab="", ylab="", axes = FALSE,
main="All Data")
axis(1, at = seq(0, 1, length=4),labels=colnames(iris[,1:4]), las=2)
axis(2, at = seq(0, 1, length=4),labels=colnames(iris[,1:4]), las=2)
#### Using Weights
outCovCombhtedwgt<-CovComb(CovList, nu=75,w=c(20/75,25/75,30/75))
####Refit and plot log-likelihood path
outCovCombhtedwgt<-CovComb(CovList, nu=75,w=c(20/75,25/75,30/75),
loglik=TRUE, plotll=TRUE)
#### For small problems (when the sample size
## moderate and the number of variables is small),
## we can try using optimization to estimate the degrees of freedom
## parameter nu. Nevetheless, this is not always satisfactory.
## The value of nu does not change the
## estimate of the covariance, but it is
## important for evaluating estimation errors.
outCovComb<-CovComb(CovList, nu=ceiling(nu), loglik=TRUE, plotll=FALSE)
#> est.df= 39
####### Estimated nu can be used as an input
## to other statistical procedures
## such as hypothesis testing about
## the covariance parameters, graphical modeling,
## sparse covariance estimation, etc,....
version 1.0 | {"url":"https://search.r-project.org/CRAN/refmans/CovCombR/html/CovComb.html","timestamp":"2024-11-12T06:05:03Z","content_type":"text/html","content_length":"9581","record_id":"<urn:uuid:8aab8ddc-e810-4589-9ba5-2adac039734d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00639.warc.gz"} |
12th | Retirement of a Partner | Question No. 6 To 10 | Ts Grewal Solution 2022-2023 - commercemine
Question 6:
(a) W, X, Y and Z are partners sharing profits and losses in the ratio of 1/3, 1/6, 1/3 and 1/6 respectively. Y retires and W, X and Z decide to share the profits and losses equally in future.
Calculate gaining ratio.
(b) A, B and C are partners sharing profits and losses in the ratio of 4: 3: 2. C retires from the business. A is acquiring 4/9 of C's share and balance is acquired by B. Calculate the new
profit-sharing ratio and gaining ratio.
Old Ratio (W, X, Y and Z) = of 1/3;1/6: 1/3;1/6 or 2 : 1 : 2 : 1
New Ratio (W, X and Z) = 1 : 1 : 1
Gaining Ratio = New Ratio − Old Ratio
W's Gain=1/3-2/6=2-2/6=0/6
X's Gain=1/3-1/6=2-1/6=1/6
Z's Gain=1/3-1/6=2-1/6=1/6
∴Gaining Ratio = 0: 1: 1
Old Ratio (A, B and C) = 4: 3: 2
C’s Profit Share =2/9
A acquires 4/9 of C’s Share and remaining share is acquired by B.
Share acquired by A=2/9×4/9=8/81
Share acquired by B=C’s share- Share acquired by A=2/9-8/81=10/81
New Profit Share = Old Profit Share + Share acquired from C
A’s new share=4/9+8/81=36+8/81=44/81
B’s new share=3/9+10/81=27+10/81=37/81
New Profit Ratio A and B = 44: 37
Gaining Ratio = New Ratio − Old Ratio
A's Gain=44/81-4/9=44-36/81=8/81
B's Gain=37/81-3/9=37-27/81=10/81
∴Gaining Ratio = 8: 10 or 4: 5
Question 7:
Kumar, Lakshya, Manoj and Naresh are partners sharing profits in the ratio of 3 : 2 : 1 : 4. Kumar retires and his share is acquired by Lakshya and Manoj in the ratio of 3 : 2. Calculate new
profit-sharing ratio and gaining ratio of the remaining partners.
Kumar's share=3/10acquired by Lakshya and Manoj in 3:2
Share acquired by Lakshya=3/10×3/5=9/50
Share acquired by Manoj=3/10×2/5=6/50
Lakshya's New Share=2/10+9/50=19/50
Manoj's New Share=1/10+6/50=11/50
Naresh's share (as retained)=4/10 or 20/50
New Profit Sharing Ratio=19:11:20
Gaining Ratio = 3:2 (as given in the question)
Question 8:
A, B, and C were partners in a firm sharing profits in the ratio of 8 : 4 : 3. B retires and his share is taken up equally by A and C. Find the new profit-sharing ratio.
Old Ratio (A, B and C) = 8 : 4 : 3
B retires from the firm.
His profit share = 4/15
B’s share taken by A and C in ratio of 1: 1
Share taken by A: 4/15×1/2=2/15
Share taken by C: 4/15×1/2=2/15
New Ratio = Old Ratio + Share acquired from B
A's New Share: 8/15+2/15=10/15=2/3
C's New Share: 3/15+2/15=5/15=1/3
∴ New Profit Ratio (A and C) = 2: 1
Question 9:
A, B, and C are partners sharing profits in the ratio of 5: 3: 2. C retires and his share is taken by A. Calculate new profit-sharing ratio of A and B.
Old Ratio (A, B and C) = 5: 3: 2
C retires from the firm.
His profit share = 210
C’s share is taken by A in entirety
New Ratio = Old Ratio + Share acquired from C
A's New Share: 5/10+2/10=7/10
B's New Share: 3/10+0=310
∴ New Profit Ratio (A and B) = 7: 3
Question 10:
Murli, Naveen and Omprakash are partners sharing profits in the ratio of 3/8, 1/2 and 1/8. Murli retires and surrenders 2/3rd of his share in favour of Naveen and remaining share in favour of
Omprakash. Calculate new profit-sharing ratio and gaining ratio of the remaining partners.
Old Ratio=3:4:1
Murli's share=3/8
Share acquired by Naveen=3/8×2/3=2/8
Remaining Share=3/8−2/8=1/8 (acquired by Omprakash)
Gaining Ratio=28:18=2:1
Naveen's New Share=4/8+2/8=6/8
Omprakash's New Share=1/8+1/8=2/8
New Profit Sharing Ratio=3:1
Ts Grewal Solution 2022-2023
Click below for more Questions
Class 12 / Volume – I
Chapter 1 – Retirement of a Parnter
Click on below links for
12th TS Grewal’s Accountancy Solutions | {"url":"https://commercemine.com/12th-retirement-of-a-partner-question-no-6-to-10-ts-grewal-solution-2022-2023/","timestamp":"2024-11-06T11:21:48Z","content_type":"text/html","content_length":"163928","record_id":"<urn:uuid:2104212e-b087-4ba1-b08f-3f021997ef7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00517.warc.gz"} |
"Nobel of Mathematics" awarded for equations predicting the behavior of the world around us
The world of mathematics is neat and tidy, but the real world is messy and seemingly chaotic. To make order out of all this, researchers rely on different types of equations. A type of these
equations, called “partial differential equations” (or PDE), is especially common in modeling physical processes. These equations model how specific variables change with respect to each other. Luis
A. Caffarelli is among the leading figures in this field, and for his contribution, he has been awarded the Abel Prize.
Image credits: Nolan Zunk / University of Texas at Austin.
Mathematics doesn’t have a Nobel Prize. Why Nobel didn’t establish this prize has remained a subject of controversy, but whatever the reason was, it left mathematics, which is essential to virtually
all fields of science, woefully unrecognized. But the field of mathematics has compensated for this with different awards: one is the Fields Medal, which is awarded every four years to leading
researchers under 40; the other is the Abel Prize.
Named after Norwegian mathematician Niels Henrik Abel, the Abel Prize is awarded annually by the King of Norway to one or more outstanding mathematicians. It’s modeled after the Nobel Prize. This
year, the recipient is Luis A. Caffarelli, who was celebrated for his “seminal contributions to regularity theory for nonlinear partial differential equations.”
Differential equations sound very complex (and they can be), but in principle, they measure change — how much one thing changes in regard to another. In pure mathematics, differential equations
relate one or more unknown functions and their derivatives, but in practical applications, the functions generally represent physical quantities and the derivatives represent their rates of change,
and the differential equation defines a relationship between the two.
Movement and flux are some examples of when these equations come into play. Speed is the first derivative of distance with respect to time, so you can use this in a differential equation. Flow is
also commonly described with differential equations. In fact, the notion of flow is critical to the study of ordinary differential equations. But the applications can get very complex very fast, and
straightforward techniques don’t always work.
This is where Cafarelli’s work comes in. He introduced ingenious new techniques and produced seminal results that progressed our understanding of how differential equations can be used in multiple
different areas of mathematics and physics.
Cafarelli also worked on characterizing singularities, mathematical points at which a given object is either not defined or ceases to be “well-behaved.” For physics, particularly, these are very
important because they can represent areas where the behavior of the physical system is hard to characterize, and are surprisingly common. This also ties into another area where Cafarelli worked for
decades: free-boundary problems.
As the name implies, free boundary problems happen at “boundaries,” which refers to limits where one thing turns into another thing. For instance, the boundary at which water turns into ice, or a
liquid turns into a crystal. But boundary problems also appear in economics, where they play a key role.
Caffarelli worked on a specific type of boundary problem called an obstacle problem, where the challenge is to find the equilibrium position under specific given circumstances. This is particularly
useful in gas and liquid flows in porous media and financial mathematics.
Caffarelli, who was born and grew up in Buenos Aires but mostly worked at US universities such as the University of Minnesota, the University of Chicago, New York University, and Princeton, is also
remarkably prolific. As if revolutionizing a key field of mathematics is not enough, he published very often, gathering a whopping 320 papers, with over 130 collaborators. His papers have generally
been very well received in the community, gathering over 19,000 citations. He also had over 30 Ph.D. students, including one Alessio Figalli, who was awarded the Fields Medal in 2018.
Now, at 74, Caffarelli is still very active and publishes several papers a year, continuing to work with graduate students and other collaborators.
“Few other living mathematicians have contributed more to our understanding of PDEs than the Argentinian-American Luis A. Caffarelli,” the Abel Press release notes. | {"url":"https://www.zmescience.com/science/math/abel-prize-luis-caffarelli/","timestamp":"2024-11-03T13:38:06Z","content_type":"text/html","content_length":"146763","record_id":"<urn:uuid:fc88fe99-8283-45b7-84d8-4082e0d1744d>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00069.warc.gz"} |
Class TLcdUnitOfMeasureFactory
public class TLcdUnitOfMeasureFactory extends Object
Factory for
objects. This factory contains some predefined units available as constants on this class. If you need another unit, you can use the factory methods to create or derive a new unit.
• Field Summary
Modifier and Type
• Method Summary
Modifier and Type
Creates a unit-of-measure.
Derives a unit-of-measure from another.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• Method Details
□ deriveUnitOfMeasure
Derives a unit-of-measure from another. The measure type must be a specialization of the unit-of-measure's measure type. For example
is a specialization of
aUnitOfMeasure - the base unit-of-measure
aMeasureType - the measure type
the unit of measure
IllegalArgumentException - if the specified measure type is not compatible with the unit-of-measure
□ createUnitOfMeasure
Creates a unit-of-measure. The values are converted to standard unit using the following formula:
standardUomValue = value * aToStandardScale + aToStandardOffset
method can be used to easily obtain a derived unit.
aUOMName - the name
aUOMSymbol - the symbol
aMeasureType - the measure type
aNameOfStandardUnit - the name of the standard unit
aToStandardScale - the scale factor to apply when converting to the standard unit
aToStandardOffset - the offset to add when converting to the standard unit
the unit-of-measure | {"url":"https://dev.luciad.com/portal/productDocumentation/LuciadFusion/docs/reference/LuciadFusion/com/luciad/util/iso19103/TLcdUnitOfMeasureFactory.html","timestamp":"2024-11-12T07:08:29Z","content_type":"text/html","content_length":"51689","record_id":"<urn:uuid:6e192ad2-ccc2-4516-87bd-bd14e209d240>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00727.warc.gz"} |
Correlated decision making: a complete theory — LessWrong
The title of this post most probably deserves a cautious question mark at the end, but I'll go out on a limb and start sawing it behind me: I think I've got a framework that consistently solves
correlated decision problems. That it is, those situation where different agents (a forgetful you at different times, your duplicates, or Omega’s prediction of you) will come to the same decision.
After my first post on the subject, Wei Dai asked whether my ideas could be formalised enough that it could applied mechanically. There were further challenges: introducing further positional
information, and dealing with the difference between simulations and predictions. Since I claimed this sort of approach could apply to the Newcomb’s problem, it is also useful to see it work in cases
were the two decisions are only partially correlated - where Omega is good, but he’s not perfect.
The theory
In standard decision making, it is easy to estimate your own contribution to your own utility; the contribution of others to your own utility is then estimated separately. In correlated
decision-making, both steps are trickier; estimating your contribution is non-obvious, and the contribution from others is not independent. In fact, the question to ask is not "if I decide this, how
much return will I make", but rather "in a world in which I decide this, how much return will I make".
You first estimate the contribution of each decision made to your own utility, using a simplified version of the CDP: if N correlated decisions are needed to gain some utility, then each decision
maker is estimated to have contributed 1/N of the effort towards the gain of that utility.
Then the procedure under correlated decision making is:
1) Estimate the contribution of each correlated decision towards your utility, using CDP.
2) Estimate the probability that each decision actually happens (this is an implicit use of the SIA).
3) Use 1) and 2) to estimate the total utility that emerges from the decision.
To illustrate, apply it to the generalised absent minded driver problem, where the return for turning off at the first and second intersection are x and y, respectively, while driving straight
through grants a return of z. The expected return for going straight with probability p is R = (1-p)x + p(1-p)y + p^2z.
Then the expected return for the driver at the first intersection is (1-p)x + [p(1-p)y + p^2z]/2, since the y and z returns require two decisions before being claimed. The expected return for the
second driver is [(1-p)y + pz]/2. The first driver exists with probability one, while the second driver exists with probability p, giving the correct return of R.
In the example given in Outlawing Athropics, there are twenty correlated decision makers, all existing with probability 1/2. Two of them contribute towards a decision which has utility -52, hence
each generates a utility of -52/2. Eighteen of them contribute towards a decision which has utility 12, hence each one generates a utility of 12/18. Summing this up, the total utility generated is
[2*(-52)/2 + 18*(12)/18]/2 = -20, which is correct.
Simulation versus prediction
In the Newcomb problem, there are two correlated decisions: your choice of one- or two-boxing, and Omega's decision on whether to put the money. The return to you for one-boxing in either case is X/
2; for two boxing, the return is 1000/2.
If Omega simulates you, you can be either decision maker, with probability 1/2; if he predicts without simulating, you are certainly the box-chooser. But it makes no difference - who you are is not
an issue, you are simply looking at the probability of each decision maker existing, which is 1 in both cases. So adding up the two utilities gives you the correct estimate.
Consequently, predictions and simulations can be treated similarly in this setup.
Partial correlation
If two decisions are partially correlated - say, Newcomb's problem where Omega has a probability p of correctly guessing your decision - then the way of modeling it is to split it into several
perfectly correlated pieces.
For instance, the partially correlated Newcomb's problem can be split into one model which is perfectly correlated (with probability p), and one model which is perfectly anti-correlated (with
probability (1-p)). The return from two-boxing in the first case is 1000, and X+1000 is the second case. One-boxing gives a return of X in the first case and 0 in the second case. Hence the expected
return from one-boxing is p(X), and for two-boxing is 1000 + (1-p)X, which are the correct odds.
Adding positional information
SilasBarta asked whether my old model could deal with the absent-minded driver if there were some positional information. For instance, imagine if there were a light at each crossing that could be
red or green, and it was green 1/2 the time at the first crossing and 2/3 of the time in the second crossing. Then if your probability of continuing on a green light was g, and if on a red light it
was r, your initial expected return is R = (2-r-g)x/2 + (r+g)D/2, where D = ((1-r)y + rz)/3 + 2((1-g)y + gz)/3 ).
Then if you are at the first intersection, your expected return must be (2-r-g)x/2 + (r+g)D/4 (CDP on y and z, which require 2 decisions), while if you are at the second intersection, your expected
return is D/4. The first driver exists with certainty, while the second one exists with probability r+g, giving us the correct return R.
Your proposed solution seems to introduce some arbitrary structure to the decision algorithm. Specifically, there is a large number of alternatives to CDP ("if N correlated decisions are needed
to gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility") that all give the same solutions.
Ah, so someone noticed that :-) I didn't put it in, since it all gave the same result in the end, and make the whole thing more complicated than needed. For instance, consider these set-ups:
1) Ten people: each one causes £10 to be given to everyone in the group.
2) Ten people: each one causes £100 to be given to themselves.
3) Ten people: each one causes £100 to be given to the next person in the group.
Under correlated decision making, each set-up is the same (the first is CDP, the others are harder). I chose to go with the simplest model - since they're all equivalent.
1 ((1-p)x + [p(1-p)y + p2z]/2) + p [(1-p)y + pz]/2 is not an expected utility computation (since the probabilities 1 and p don't sum up to 1).
It is a sum of expected utilities - the expected utility you gain from driver 1, plus the expected utility you gain from driver 2 (under CDP).
At this point, it seems to me that "contribution"/"responsibility" is not a useful concept. It's just adding complexity without any apparent benefit. Do you agree with this assessment? If not,
what advantage do you see in your proposal over UDT?
The starting point was Eliezer's recent post on outlawing anthropics, where he seemed ready to throw out anthropic reasoning entrierly, based on the approch he was using. The above style of reasoning
correctly predicts your expected utility dependent on your decision. Similarly, this type of reasoning solves the Anthropic Trilemma.
If UDT does the same, then my approach has no advantage (though some people may find it conceptually easier (and some, conceptually harder)).
I can only blame the usual slopiness of mathematicians,
That's "sloppiness" :P.
No, "slopiness" works too -- mathematicians have more slope in their heads ;-P
My working hypothesis is that math and crypto are very similar, this kind of error occurs frequently, and you just don't notice. What little crypto I know could be called complexity theory. I've read
very little and heard it mainly orally. I've experienced this kind of error, certainly in oral complexity theory and I think in oral crypto. Of course, there's a difference when people are trying to
reconstruct proofs that are stamped by authority.
I thought it possible you were talking about the person LF.
Yes, mathematicians produce errors that stand for decades, but they aren't errors that are detectable to copy-editing. Surely the errors you mention in crypto weren't caused by substituting pz for
px! How could it have stood so long with such an error? Also, such a random error is unlikely to help the proof. If you are approximating something and you want different sources of error to cancel,
then a minus sign could make all the difference in the world, but people know that this is the key in the proof and pay more attention. Also, if flipping a sign makes it true, it's probably false,
not just false, but false with a big enough error that you can check it in small examples.
"Beware of bugs in the above code; I have only proved it correct, not tried it.'' - Knuth
I've heard stories (from my math professors in college) of grad students who spent multiple years writing about certain entities, which have all sorts of very interesting properties. However, they
were having difficulties actually constructing one. Eventually it was demonstrated that there aren't any, and they had been proving the interesting things one could do if one had an element of the
empty set.
Mathematicians do make errors. Sometimes they brush them aside as trivial (like Girard in Nesov's example), but sometimes they care a lot.
Only yesterday I read this digression in Girard's The Blind Spot:
Once in a while I would like to indulge into an anecdote concerning the genesis of the proof. The criterion was found by the end of 1985; then I remained more than six months making circles
around the "splitting tensor". One nice day of August 1986, I woke up in a camping of Siena and I had got the proof: I therefore sat down and wrote a manuscript of 10 pages. One month later, I
was copying this with a typewriter, and I discovered that one of my lemmas was wrong: no importance, I made another lemma! This illustrates the fact, neglected by the formalist ideology, that a
proof is not putting side by side logical rules, it is a global perception: since I had found the concept of empire, I had my theorem and the faulty lemma was no more than a misprint.
(1) Not all errors damage credibility (in my eyes) to a significant degree.
(2) Nonetheless, even if an error doesn't damage your credibility, avoiding it shows that you care about not wasting your readers' time.
To expand on point (1), I'm inclined to be pretty forgiving if
• the error was in a routine computation;
• the computation is almost as easy to do myself as to verify; and
• the computation serves only as an example, not as a formally necessary part of the description of the ideas.
In some fields of mathematics, papers are almost entirely prose, with very little computation (in the sense of manipulating formulas). In these fields, the proofs are communicated using words, not
equations, though the words have very precise definitions.
Reviewing your writing costs you time, but it saves a mental hiccup on the part of a lot of readers. Multiply?
the usual slopiness of mathematicians
First I've ever heard of such a thing. If it appears that mathematicians are sloppy, perhaps that is only because their mistakes are more definitively detectable.
Damned! fixed.
New Comment
22 comments, sorted by Click to highlight new comments since:
Your proposed solution seems to introduce some arbitrary structure to the decision algorithm. Specifically, there is a large number of alternatives to CDP ("if N correlated decisions are needed to
gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility") that all give the same solutions. For example, suppose we replace it
• if N correlated decisions are needed to gain some utility, then the decision maker highest in the decision tree is assigned full responsibility for it
Then the expected return for the driver at X is (1-p)x + [p(1-p)y + p^2 z] and the expected return for the driver at Y is 0, and the sum is still R. Or we can replace CDP with:
• if N correlated decisions are needed to gain some utility, then the decision maker lowest in the decision tree is assigned full responsibility for it
Then the expected return for the driver at X is (1-p)x and the expected return for the driver at Y is [(1-p)y + pz] (with probability of existence p), and the total is still R.
Basically, how you assign responsibility is irrelevant, as long as the total adds up to 1. So why pick equal assignment of responsibility?
Also, your proposed computation
• 1 * ((1-p)x + [p(1-p)y + p2z]/2) + p * [(1-p)y + pz]/2
is not an expected utility computation (since the probabilities 1 and p don't sum up to 1). Nor does it compute or use "the probability that I'm at X" so it's no better than UDT in satisfying the
epistemic intuition that demands that probability.
At this point, it seems to me that "contribution"/"responsibility" is not a useful concept. It's just adding complexity without any apparent benefit. Do you agree with this assessment? If not, what
advantage do you see in your proposal over UDT?
using a simplified version the CDP
Missing a word there.
The expected return for the second driver is [(1-p)y + px]/2
That should be "pz" instead of "px".
I'll have more thoughts on this later, but these errors don't help your credibility. You should double-check everything. (I've only read a part of it myself so far.)
Errors fixed (I can only blame the usual slopiness of mathematicians, and apologise profusely).
I can only blame the usual slopiness of mathematicians,
That's "sloppiness" :P.
I don't consider these errors to be of the kind that damages credibility. That may be self-serving, though, since I make them all the time. But then again, I am a mathematician.
I'm curious, why are mathematicians sloppier than others?
I don't consider these errors to be of the kind that damages credibility.
If that's true, I've wasted a significant chunk of my life reviewing my writings for errors. :-(
I'm curious, why are mathematicians sloppier than others?
I think it's because we're mainly focused on getting ideas right - most of the time, writing out the equation is merely a confirmation of what we allready know to be true. So often, a mathmo will
write out a series of equations where the beginning will be true, the middle completely wrong, and the conclusion correct.
As for general linguistic sloppiness, that probably derives from the feeling that "hey my math is good, so don't mess me about my words".
If that's true, I've wasted a significant chunk of my life reviewing my writings for errors. :-(
I've done that too - I'm just not very good at catching them. And it's only a waste if you have a typo-tolerant audience.
I think it's because we're mainly focused on getting ideas right - most of the time, writing out the equation is merely a confirmation of what we allready know to be true. So often, a mathmo will
write out a series of equations where the beginning will be true, the middle completely wrong, and the conclusion correct.
I wonder why that doesn't work in cryptography. There are several well-known examples of "security proofs" (proof of security of a crypto scheme under the assumption that some computational problem
is hard) by respected researchers that turn out many years after publication to contain errors that render the conclusions invalid.
Or does this happen just as often in mathematics, except that mathematicians don't care so much because their errors don't usually have much real-world impact?
Or does this happen just as often in mathematics, except that mathematicians don't care so much because their errors don't usually have much real-world impact?
The strongest theorems are those that have multiple proofs, or where the idea of the proof is easy to grasp (think Godel's incompleteness theorem). Proofs that depend on every detail of a long
tedious calculation, and only on that, are rare.
Proof that err by using implicit lemmas, or assuming results they can't assume, are much more common, and mathematicians know this and are much more on guard for those errors.
The strongest theorems are those that have multiple proofs, or where the idea of the proof is easy to grasp (think Godel's incompleteness theorem). Proofs that depend on every detail of a long
tedious calculation, and only on that, are rare.
But those kinds of proofs are not rare in cryptography. Which suggests that there's a selection effect going on in mathematics, where mathematicians choose which problems to work on partly by how
likely the solutions to those problems will involve "strong" theorems with multiple proofs and perhaps be easily accessed by their intuitions.
Now what happens when the problem picks you, instead of you picking the problem? That is the situation we're in, I think, so sloppiness is a worse problem than you might expect.
This reads to me like macho bragging.
Both math and crypto contain errors. Are they the result of sloppiness? the kind of sloppiness Stuart Armstrong attributes to mathematicians?
I don't know much about crypto. LF is said to be repeatedly wrong (in crypto? in another field?). That must constitute a kind of sloppiness. Is it correlated with other kinds of sloppiness?
i see two kinds of sloppiness I see attributed in this thread to mathematicians: (1) that detectable by copyediting; (2) focusing on the hard parts and trusting the easy parts to take care of
themselves. (2) can lead to (1). There's a third kind of sloppiness common in senior mathematicians: they supply the proof, but refuse to give the statement. Much of the difference is probably
material that mathematicians include that people in CS simply omit. (is crypto published in conferences?)
Both math and crypto contain errors. Are they the result of sloppiness? the kind of sloppiness Stuart Armstrong attributes to mathematicians?
According to Stuart, in math there are often errors where "beginning will be true, the middle completely wrong, and the conclusion correct". I was saying that this kind of error doesn't seem to occur
often in crypto, and trying to figure out why, with no bragging intended. Do you have another hypothesis, besides the one I gave?
LF is said to be repeatedly wrong (in crypto? in another field?).
What is LF?
Also, in the very beginning, "turning with probability p" should really be "going straight with probability p". | {"url":"https://www.lesswrong.com/posts/mmuNCfLyArLodpGLt/correlated-decision-making-a-complete-theory","timestamp":"2024-11-09T20:57:44Z","content_type":"text/html","content_length":"456913","record_id":"<urn:uuid:db112e4d-7b50-4307-ab6c-b4e50df0a7c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00428.warc.gz"} |
On the Proper Homotopy Invariance of the Tucker ... - Springer Link - P.PDFKUL.COM
Acta Mathematica Sinica, English Series Mar., 2007, Vol. 23, No. 3, pp. 571–576 Published online: Dec. 12, 2006 DOI: 10.1007/s10114-005-0900-2 Http://www.ActaMath.com
On the Proper Homotopy Invariance of the Tucker Property Daniele Ettore OTERA Universit´ a di Palermo, Dipartimento di Matematica ed Applicazioni, via Archirafi 34, 90123 - Palermo, Italy and
Universit´e Paris-Sud, Math´ematiques, Bˆ at 425, 91405 Orsay Cedex - France E-mail:
[email protected] [email protected]
Abstract A non-compact polyhedron P is Tucker if, for any compact subset K ⊂ P , the fundamental group π1 (P − K) is finitely generated. The main result of this note is that a manifold which is proper
homotopy equivalent to a Tucker polyhedron is Tucker. We use Poenaru’s theory of the equivalence relations forced by the singularities of a non-degenerate simplicial map. Keywords proper homotopy, Φ/
Ψ-theory, tucker property MR(2000) Subject Classification 57Q05, 57N35, 55P57
1 Introduction The starting point of this note is a series of papers by Poenaru ([1, 2]), where the author, in his approach to the Poincar´e conjecture, introduced the idea of killing 1-handles
stably (i.e. by taking the product with n-balls) to prove the simple connectivity at infinity of some open, simply connected 3-manifolds. One of the ingredients used there is the Φ/Ψ-theory
(introduced in [1]) of the equivalence relations forced by singularities of a non-degenerate simplicial map. Our aim is to use these techniques for non-simply connected manifolds. Poenaru in [2]
showed that, if the product of an open, simply-connected 3-manifold with the closed n-ball, W 3 × Dn , has no 1-handles, then the manifold W 3 is simply connected at infinity. The condition of having
a decomposition without 1-handles extends to the polyhedral category as follows (see [3] and [4]): A polyhedron P is called weakly geometrically simply connected (or wgsc) if there exists an
exhaustion by compact, connected, simply-connected subpolyhedra Ki such that Ki ⊂ Ki+1 and ∪i Ki = P . Poenaru’s result was further extended in [3] (see also [5]) where it is proved that an open,
simply-connected 3-manifold having the same proper homotopy type of a wgsc polyhedron is simply connected at infinity. These results are purely 3-dimensional and they cannot be extended to higher
dimension, as stated. In fact, there exist compact, contractible n-manifolds M n for any n ≥ 4, such that Int(M n ) × [0, 1] is wgsc but Int(M n ) is not simply connected at infinity. Nevertheless, in
[6], the above result was extended regardless of the dimension. In particular, it is proved that a non-compact manifold of dimension n, with n = 4, having the same proper homotopy type of a wgsc
polyhedron is wgsc. If one wants to extend the wgsc property to non-simply connected spaces, one has to recall a concept developed by Tucker in his work around the missing boundary problem for
3-manifolds ([7]). Received September 16, 2005, Accepted March 1, 2006 The author is partially supported by GNSAGA of INDAM, by MIUR of Italy (Progetto Giovani Ricercatori) and by Universit` a di
Otera, D. E.
Definition 1 A polyhedron M is Tucker (or it has the Tucker property) if, for every compact subpolyhedron K ⊂ M , each component of M − K has a finitely generated fundamental group. Remark 1 A major
difference between the wgsc condition and the Tucker property is that the former means that there is no 1-handles in some handlebody decomposition, while the second only tells us that some handlebody
decomposition needs only finitely many 1-handles, but without any control on their number. It was shown in [7] that, if a non-compact 3-manifold M is Tucker, then it is a missing boundary manifold,
i.e. there exist a compact manifold N and a subset C of the boundary of N such that N − C is homeomorphic to M . The Tucker condition was also exploited by Mihalik and Tschantz in [8], where the
authors introduced tame combings for finitely presented groups. It is shown that a group Γ is tame of some (equivalently, any) compact polyhedron combable if and only if the universal covering X X,
with π1 X = Γ, is Tucker. This is independent of the space X and hence the Tucker property can be seen as a group-theoretical notion. Moreover, Brick has shown that it is a geometric property for
groups, in the sense that it is preserved by quasi-isometries ([9]). The class of tame combable groups contains all asynchronously automatic and semi-hyperbolic groups. Furthermore, if a closed
irreducible 3-manifold has a tame combable fundamental group then its universal covering is R3 . Finally, Poenaru pointed out to us that the same techniques of [2] could still work for nonsimply
connected 3-manifolds. In particular, he claimed that if the product W 3 × Dn of (not necessarily simply connected) an open 3-manifold W 3 with the closed n-ball has finitely many 1-handles, then W 3
is Tucker. Here we are interested in a generalization of this claim, in particular we want to show the proper homotopy invariance of the Tucker property without any restriction on the dimension. Our
main result is the following: Theorem 1 Let W n be a manifold of dimension n. If W n is proper homotopy equivalent to a Tucker polyhedron X k , then W n is Tucker. Remark 2 • This theorem generalizes
Poenaru’s claim. Indeed, if W 3 × Dn has finitely many 1-handles then it is Tucker and hence, by Theorem 1, W 3 is Tucker. • Notice that the result of [8] implies the homotopy invariance of the Tucker
property only for universal coverings. • We will actually prove a stronger claim, namely that if W is proper homotopically dominated by a Tucker polyhedron then W is Tucker (see the next section). •
With respect to the results of [2, 3] our result is somewhat soft because it does not use a Dehn-type lemma ([2]) specific to dimension three, and for this reason it holds true in any dimension.
Corollary 1 Let V 3 be an open 3-manifold with H1 (V 3 , Z) = 0. Then V 3 has the proper homotopy type of a Tucker polyhedron if and only if V 3 is simple ended. Proof Theorem 1 implies that V 3 is
Tucker. Since V 3 is open, by [7] it follows that V 3 = int(M 3 ), where M 3 is a compact 3-manifold with boundary. Thus H1 (M 3 ) = H1 (V 3 ) = 0 and then the boundary of M 3 is made of spheres. 2
Preliminaries Concerning Poenaru’s Φ/Ψ-theory
For the sake of completeness, we recall here some of the basic facts on the Φ/Ψ-theory introduced by Poenaru in [1] as a useful tool in his approach to the covering conjecture (see [2]). The problem
is to find, in the context of a non-degenerate simplicial map f : X → M from a not necessarily locally finite simplicial complex X to a manifold M , an equivalence relation on X which is the smallest
possible such a relation, compatible with f and killing all the singularities of f . The simplicial complex X will be endowed with the weak topology (i.e. a
On the Tucker Property
subset C is closed if and only if C ∩ {simplex} is closed), and this makes the map f continuous. Let X be a simplicial complex of dimension at most n, not necessarily locally finite, but with
countably many simplexes. Let M be an n-manifold and f : X → M be a non-degenerate simplicial map. Definition 2 A point x ∈ X is not a singular point if f is an embedding in a neighborhood of x.
Write Sing(f ) for the set of singular points of X. Definition 3 The equivalence relation Φ(f ) ⊂ X ×X defined by f is given by (x, y) ∈ Φ(f ) ⇔ f (x) = f (y). Note that we are interested only in
equivalence relations R such that X/R is a simplicial complex. Poenaru has shown that, starting with Φ(f ), one can construct, by folding maps, “the smallest” equivalence relation Ψ(f ) ⊂ Φ(f )
killing all the singularities without changing the topology of X. More precisely: Proposition 1 There exists a unique equivalence relation Ψ ⊂ Φ such that • If f denotes the induced map f : X/Ψ(f ) →
M , then f has no singularities (i.e. f is an immersion); • There exists no other equivalence relation Ψ1 ⊂ Ψ having the same properties as Ψ and such that Ψ1 (f ) Ψ(f ). Hence Ψ(f ) is the smallest
equivalence relation compatible with f which kills all the singularities. Moreover, while the passage from X to X/Φ (which, by definition, is just f (X)) destroys all the topological information, this
does not happen for Ψ, in fact: Proposition 2 The projection map π : X → X/Ψ(f ) is simplicial and it induces a surjection on fundamental groups π∗ : π1 (X) → π1 (X/Ψ(f )). Now, we give an idea of
how to obtain such a Ψ. Let f : X → M be a non-degenerate simplicial map. If Sing(f ) = ∅ then Ψ is trivial (Ψ = Diag(X, X)); if not, there exist a point x ∈ Sing(f ) and two distinct simplexes of
the same dimension σ1 and σ2 such that x ∈ σ1 ∩ σ2 and f (σ1 ) = f (σ2 ). In this case we consider the quotient relation ρ1 obtained by identifying σ1 and σ2 (ρ1 is called a folding map). Now, if the
induced map f1 : X/ρ1 → M (that is non-degenerate and simplicial too) has no singularities, then Ψ(f ) = ρ1 . Otherwise, we proceed as above and define recurrently ρ2 , ρ3 , . . . ρn , . . . so that Ψ
(f ) = ∪∞ n=1 ρn . If X is not finite, we need a transfinite recurrence to construct our equivalence relation. This construction is not too precise since one needs transfinite numbers, and it is not
known if this construction yields a unique Ψ. The next lemma gives us a really manageable version of Ψ(f ) since it says that one can always choose the sequence of foldings so that ρω = Ψ(f ), i.e.
without using ordinal numbers. Proposition 3 Even if X is not finite, there exists a manner in which one proceeds with the folding maps in order to have Ψ(f ) = ∪∞ n=1 ρn . 2.1 Definition of Ψ(f )
Define M 2 (f ) ⊂ Φ(f ) to be the set of double points of f , i.e. all the (x, y) ∈ (X × X) − Diag X (f ) has (f ) = M 2 (f ) ∪ Diag(Sing(f)) ⊂ Φ(f). Clearly, M such that f (x) = f (y), and denote M a
natural topology. We will ignore it and define a new one. ⊂M . We say that R is admissible if the subset R = R ∪ Diag X is an equivalence Let R relation satisfying the following: If f (x) = f (y) with
x ∈ σ1 , y ∈ σ2 and (x, y) ∈ R, then R is a simplicial complex. identifies the simplexes σ1 and σ2 . Note that this means that X/R by deciding that the Hence one can define a new topology (called the
Z-topology) on the set M Z-closed subsets are the finite unions of admissible subsets. Finally one can prove the following equality: The equivalence relation Ψ is the smallest admissible equivalence
relation containing Diag(Sing(f)), i.e. Ψ(f ) is the Z-closure of Diag(Sing(f)).
Otera, D. E.
The Proof
3.1 Outline of the Proof We first recall the following definitions from [3]: Definition 4 A polyhedron M is (proper) homotopically dominated by the polyhedron X if there exists a P L map f : M → X such
that the mapping cylinder Zf = M × [0, 1]∪f X (properly) retracts on M . Remark 3 Observe that a proper homotopy equivalence is the simplest example of proper homotopy domination. Definition 5 An
enlargement of the polyhedron E is a polyhedron X which retracts properly on E, i.e. such that i E → X id π E, where i is a proper P L embedding and π is a proper P L map. The main ingredient of the
proof of Theorem 1 is the following lemma: Lemma 1
It suffices to prove the theorem for the case when X k is an enlargement of W n .
Let us introduce some more terminology. Definition 6 A polyhedron P has a strongly connected n-skeleton if any two n-simplexes of P can be joined by a sequence of n-simplexes such that consecutive
ones have a common (n − 1)-dimensional face. The polyhedron is n-pure if its n-skeleton is the union of its n-simplexes. Finally, P is called n-full if it is both n-pure and has a strongly connected
n-skeleton. We now turn to the second reduction. Lemma 2
We can assume that the polyhedron X k is n-full.
Now, the hypothesis of Lemma 1 provides us with a proper P L embedding W n → X k and a proper surjection π : X k → W n such that π ◦ i = id. Furthermore we can suppose that X k is n-full thanks to
Lemma 2. Lemma 3 There exist triangulations τW of W n and, respectively, θX of the n-skeleton of X k and a map λ : θX → τW such that • λ is proper, simplicial, generic and non-degenerate (i.e. its
restriction to any simplex is one-to-one); • λ ◦ i = id; • θX is Tucker. One derives that θX is an enlargement of τW , when the natural projection map is replaced by λ. Now we use the Φ/Ψ-theory
introduced by Poenaru in [1]. Denote by λ : θX /Ψ → τW the simplicial map induced by λ. Lemma 4
The equality Φ(λ) = Ψ(λ) holds.
Remark 4 Roughly speaking, this equality means that it is possible to exhaust all singularities of λ(θX ) by a countable union of folding maps. 3.2 Proof of Theorem 1 Using the Lemmas To simplify the
notation, we denote by λC the restriction of λ to some subset C, and by ΨC or ΦC the equivalence relations of λC . Fix a connected compact subset K of W . By compactness arguments, one can find
another compact subset L of θX such that λL (L) ⊃ K, and a third compact subset P ⊂ θX such that λ−1 (λL (L)) ⊂ P . This implies that if (x, y) ∈ M 2 (λ) with x ∈ i(K), then y ∈ P . Hence i(K) ⊂ P/ΦP
On the Tucker Property
The last lemma says that the equivalence relation Φ(λ) can be obtained by a countable union of folding maps. This implies that any compact subset of X involves only a finite number of these foldings.
Hence, for any compact subset P of X there exists a big compact subset P such that ΨP = ΦP . λP Furthermore we have the following diagram of maps: i(K) ⊂ P/ΦP = P/Ψ ⊂ P /Ψ → P
τW . Since the map λP is an immersion and no double point of it can involve P (thanks to
λ the equality ΦP = ΨP ), we have i(K) ∩ M2 (λP ) = ∅ and then i(K) ⊂ P /ΨP ⊂ θX /ΨP −→ τW . By hypothesis, the fundamental groupπ1 (θX − i(K)) is finitely generated and therefore Proposition 2,
stated before, implies that π1 (θX − i(K))/Ψ(λ) is also finitely generated. Now, since i(K) ∩ M2 (λ) = ∅, one obtains that (θX − i(K))/Ψ is exactly θX /Ψ − i(K). From the equality of Lemma 4, we
derive that θX /Ψ = θX /Φ. By the definition of Φ, θX /Φ is just the image of θX by λ, namely τW . It follows that the fundamental group π1 (τW − K) is isomorphic to π1 (θX − i(K)), and hence finitely
generated. The Tucker condition for W is then verified.
Proof of the Lemmas
4.1 Proof of Lemma 1 Let f : W → X be a proper homotopy equivalence. Then the mapping cylinder Zf = W × [0, 1]∪f X has a strong deformation retraction on W . Notice that X is also a strong
deformation retraction of Zf , by using the following retraction r(x, t) = f (x) for (x, t) ∈ W × [0, 1] and r(y) = y for y ∈ X. In particular r is a homotopy equivalence. Lemma 5 If X is Tucker then
Zf is Tucker. Proof Let C be a compact subset of Zf , and write K = r(C) ⊂ X. The compact subset r −1 (K) of Zf is the mapping cylinder of f |f −1 (K) : f −1 (K) → K, and it strongly retracts on K.
Then the fundamental group π1 (Zf − r −1 (K)) is isomorphic to π1 (X − K), which is finitely generated by hypothesis. This implies that π1 (Zf − C) is also finitely generated as claimed. It follows
that Zf is an enlargement of W , and the proof of Lemma 1 is achieved. 4.2 Proof of Lemma 2 We observe that, if X is an enlargement of W , then X × Dp is also an enlargement of W for any p. Lemma 2
now follows directly from the fact that, if X is path-connected, then X × Dp is p-full (see [3]). 4.3 Proof of Lemma 3 This lemma is a consequence of some general results of the approximation of P L
maps by non-degenerate maps. In fact Lemma 4.4 from [10] and the remark which follows it state that: Lemma 6 Let f : P → Q be a P L map, Q a P L manifold and P a P L space with dimP ≤ dimQ. Let P0 ⊂
P be a closed subspace. Suppose that f |P0 is non-degenerate. Then f is homotopic to f rel P0 , where f is a non-degenerate P L map and f (P −P0 ) ⊂int(Q). Moreover, given : P → R+ a positive
continuous function, we may insist that ρ(f (x), f (x)) < (x), for all x, where ρ is a given metric for the topology of Q, and the homotopy between f and f does not move points farther than a
distance apart at any moment. Furthermore, if f is proper we can ask that f be proper, too. An application of this lemma, when P is the n-skeleton of X, P0 = W ⊂ P and Q = W and f = π|P , gives a map
λ = π which is non-degenerate and generic. Moreover, the theorem 3.6 from [10] says that one can subdivide the n-skeleton of X and τW to make λ simplicial. Now, one has to observe that the Tucker
property is preserved by subdivisions (since they do not affect the fundamental group) and taking the k-skeleton with k > 1 (indeed, if C is a compact subset of the k-skeleton, Xk of X, then the
k-skeleton of X − C is Xk − C and so
Otera, D. E.
π1 ((X − C)k ) = π1 (Xk − C) = π1 (X − C) which is finitely generated if X is Tucker). This proves that the n-skeleton of X, θX , is Tucker. 4.4 Proof of Lemma 4 λ
By the definition of Ψ, the map θX /Ψ −→ τW is an immersion. We claim that it is a simplicial isomorphism. Consider the commutative diagram below λ
θX /Ψ −→ τW i id τW /Ψ. If we prove that i is onto, then automatically λ is injective. If the contrary holds, we could find an n-simplex σ of θX /Ψ such that Int(σ) ∩ Im(i) = ∅, because X, and hence
θX , is n-pure. Moreover, if σ1 and σ2 are two simplexes of θX /Ψ, arbitrary lifts of them to θX are connected by a chain of n-simplexes (since θX is n-full). The projections of the intermediary
simplexes form a chain in θX /Ψ because the projection map is simplicial (Ψ is a composition of folding maps) and non-degenerate (as λ). Thus there exists some n-simplex σ such that Int(σ) ∩ Im(i) =
∅ = σ ∩ Im(i). Moreover λ(i(τW /Ψ)) = τW , and then any point on the boundary ∂σ∩Im(i) would be a singular point of λ, which is not possible since λ is an immersion. Now we want to show that Ψ = Φ.
We have two bijections θX /Ψ → τW (already proved) and θX /Φ → τW (by definition of Φ), and an inclusion Ψ ⊂ Φ, which induce a bijective map θX /Ψ → θX /Φ. Hence the equality Φ = Ψ holds. 4.5 Final
Remark If one considers the “quasi-simple filtration” property introduced by Brick and Mihalik in [11] (which translates in the polyhedral framework the condition of being geometrically simply
connected), then the same techniques above can be used to show that a manifold is qsf if and only if it is properly homotopically dominated by a qsf polyhedron. Acknowledgments The author is indebted
to Valentin Poenaru and Louis Funar for useful discussions, comments and suggestions. Part of this work was done when the author visited the Institut Fourier of Grenoble, which he wishes to thank for
their support and hospitality. References [1] Poenaru, V.: On the equivalence relation forced by the singularities of a non-degenerate simplicial map. Duke Math. J., 63, 421–429 (1991) [2] Poenaru,
V.: Killing handles of index one stably and π1∞ . Duke Math. J., 63, 431–447 (1991) [3] Funar, L.: On proper homotopy type and the simple connectivity at infinity of open 3-manifolds. Atti Sem. Mat.
Fis. Univ. Modena, 49, 15–29 (2001) [4] Otera, D. E.: Asymptoptic topology of groups. Connectivity at infinity and geometric simple connectivity, PhD Thesis, Universit´ a di Palermo and Universi´te
de Paris-Sud, 2006 [5] Funar, L., Thickstun, T. L.: On open 3-manifolds proper homotopy equivalent to geometrically simply connected polyhedra. Topology its Appl., 109, 191–200 (2001) [6] Funar, L.,
Gadgil, S.: On the geometric simple connectivity of open manifolds. I.M.R.N., 24, 1193–1248 (2004) [7] Tucker, T. W.: Non-compact 3-manifolds and the missing boundary problem. Topology, 73, 267–273
(1974) [8] Mihalik, M., Tschantz, S. T.: Tame combings of groups. Trans. Amer. Math. Soc., 349(10), 4251–4264 (1997) [9] Brick, S.: Quasi-isometries and amalgamations of tame combable groups. Int. J.
Comp. Algebra, 5, 199–204 (1995) [10] Hudson, J. F. P.: Piecewise–Linear Topology, W. A. Benjamin Inc., 1969 [11] Brick, S. G., Mihalik, M. L.: The QSF property for groups and spaces. Math.
Zeitschrift, 220, 207–217 (1995) | {"url":"https://p.pdfkul.com/on-the-proper-homotopy-invariance-of-the-tucker-springer-link_5a15452e1723dde3a90589e7.html","timestamp":"2024-11-14T03:25:19Z","content_type":"text/html","content_length":"75899","record_id":"<urn:uuid:73d273a2-a208-4a79-bf03-9884a98e5698>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00481.warc.gz"} |
Recurrence and Dynamical Systems
The assignment content discusses various topics in mathematics, including recurrence, relations, and cellular automata. The main problems are: finding fix points for the tent map, plotting the
location of right fix point as function of a and derivative there, determining when this fix point is stable, and finding a 2-cycle in the tent map. Additionally, it discusses one-dimensional
cellular automata, including Rule 90, Rule B25/S4, and the Game of Life (B3/S23). Finally, it explores graph theory, specifically discussing girth of graphs with v = e + 1, where e is the number of
edges and v is the number of vertices. | {"url":"https://desklib.com/document/recurrence-relations-rr-and-cellular-cg3c/","timestamp":"2024-11-11T16:09:21Z","content_type":"text/html","content_length":"380200","record_id":"<urn:uuid:ceb91239-5c7f-41fa-8a1c-ff7be483dcbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00540.warc.gz"} |
In 1913 Neils Bohr proposed the Bohr Atomic Model. The Bohr model for the hydrogen atom was composed of a central nucleus with a single positive charge and one electron orbiting the nucleus with
sufficient velocity so that the angular momentum would balance out the electrostatic force of attraction.
Now classically, since the electron is accelerated, this system should radiate (this was Rutherford’s stumbling block). To avoid this difficulty, Bohr broke with tradition (IE classical physics).
Bohr assumed:
1. The electron can move around the nucleus only in certain orbits, and not in others (classically, no particular orbit was preferred).
2. These allowed orbits correspond to definite stationary states of the atom, and in such a stationary state the atom is stable and does not radiate.
3. The electron emits or adsorbs only certain amounts of energy when transitioning from one energy state to another.
Energy levels of the Bohr atom
Enock Lau Creative Commons Attribution-Share Alike 3.0
Bohr solved Rutherford’s problem with energy loss from the electron, by assuming it just wasn’t a difficulty in these special circumstances.
Bohr still had to decide which orbitals would be allowed. To do this Bohr placed the condition that the angular momentum (mvr) of the electron must be an integral multiple of Planck’s constant
divided by 2π.
There are two things I specifically would like to point out.
First, that in a certain sense this condition is equivalent to Planck’s condition on an oscillator.
Secondly, when we use angular momentum to offset the electrostatic attraction between the electron and the proton, the value r represents the distance between the electron and the proton, the same as
used in Coulomb’s Law and bringing with it the point charge approximation. | {"url":"https://polata-atomic-model.org/Bohr.html","timestamp":"2024-11-06T01:41:45Z","content_type":"text/html","content_length":"4783","record_id":"<urn:uuid:93a4ab46-820e-46fa-8b89-4c553ce7a79b>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00188.warc.gz"} |
Vector Embeddings and RAG Demystified: Leveraging Amazon Bedrock, Aurora, and LangChain - Part 1
Revolutionize big data handling and machine learning applications.
Published Dec 12, 2023
Last Modified May 24, 2024
Ever wondered how music apps suggest songs to you, or shopping apps suggest products that perfectly match your taste? To understand how, you have to dive into the world of vector databases, where
data isn't just stored in tables and rows but is mapped as geometric points in space.
In the rapidly evolving landscape of data engineering and machine learning, the concept of vector embeddings has emerged as a cornerstone for a myriad of innovative applications. As we navigate
through the era of generative AI and Large Language Models (LLMs), understanding and utilizing vector embeddings has become increasingly critical. A compelling example of this is in personalized
recommendation systems, such as those used by streaming services. These platforms use vector embeddings to analyze users' viewing habits, translating titles, descriptions, and feedback into a
mathematical space. This allows an LLM to identify what content a user is likely to enjoy based on how closely the vector of a new movie matches vectors of content they previously liked. This blog
post aims to demystify the world of vector embeddings and explore their pivotal role in enhancing our interactions with vast data sets, ensuring AI systems can offer personalized, contextually
relevant recommendations.
Through this post, we will embark on a journey to understand the nuts and bolts of vector embeddings and how we can store those embeddings in a vector store using
Amazon Bedrock
Amazon Aurora
, and
. From understanding the basic concept of an embedding to exploring advanced vector storage techniques and indexing methods, we will cover all the essential aspects that make vector embeddings an
indispensable tool in modern data engineering and machine learning.
This marks the beginning of
our two-part blog series
. In this first installment, we delve into the fundamentals of vector embeddings. We'll explore what embeddings are, why they're crucial in the realm of AI, and delve into the methods of storing and
indexing them. This foundational understanding sets the stage for
part two
, where we'll navigate the various vector storage solutions available on AWS. Here, we'll discuss how to effectively store your embeddings and utilize them in conjunction with LLMs to create robust,
AI-powered applications. Additionally, we will introduce and leverage
, a
that enhances our journey into the practical application of these concepts, demonstrating how to seamlessly integrate these technologies into real-world AI solutions.
Let's start with the basics: what is an embedding? An embedding is a numerical representation of content in a form that machines can process and understand. The essence of the process is to convert
an object, such as an image or text, into a vector that encapsulates its semantic content while discarding irrelevant details as much as possible. An embedding takes a piece of content, like a word,
sentence, or image, and maps it into a multi-dimensional vector space. The distance between two embeddings indicates the semantic similarity between the corresponding concepts.
Consider the terms 'coffee' and 'tea'. In a hypothetical vocabulary space, these two could be transformed into numerical vectors. If we visualize this in a 3-dimensional vector space, 'coffee'
might be represented as [1.2, -0.9, 0.3] and 'tea' as [1.0, -0.8, 0.5]. Such numerical vectors carry semantic information, indicating that 'coffee' and 'tea' are conceptually similar to each
other due to their association with hot beverages and would likely be positioned closer together in the vector space than either would be to unrelated concepts like 'astronomy' or 'philosophy'.
When it comes to text analysis, several strategies exist for converting words into vectors. Initially, one of the simpler techniques was the
bag-of-words model
. Here, words within a text are represented by their frequency of occurrence.
, a powerful Python Library, encapsulates this method within its
tool. For a while, this was the standard—until the introduction of
Word2vec represented a paradigm shift. It diverged from simply tallying words to understanding context by predicting a word's presence from its neighboring words while ignoring the sequence in which
they appear. This method operates on a linear modeling approach.
In image processing, we create embeddings by pulling out specific features from images. This includes identifying edges, analyzing textures, and looking at color patterns. We do this over different
sizes of image areas, making sure these embeddings understand changes in size and position.
The rise of
Convolutional Neural Networks (CNNs)
has significantly changed our approach to image analysis. CNNs, especially when pre-trained on large datasets like
, use what are known as
A CNN operates by examining small portions of an image and recognizing various features like lines, colors, and shapes. It then progressively combines these features to understand more complex
structures within the image, such as objects or faces. When applied to a new image, CNNs are capable of generating detailed and insightful vector representations. These representations are not
just pixel-level data but a more profound understanding of the image's content, making CNNs invaluable in areas like facial recognition, medical imaging, and autonomous vehicles. Their ability to
learn from vast amounts of data and identify intricate patterns makes them a cornerstone of modern image processing techniques.
For both textual and visual data, the trend has shifted towards transformer-based models.
These transformer-based models consider the context and order of elements in the data, whether they are words in text or pixels in images. Equipped with a large number of parameters, these models
excel in identifying complex patterns and relationships through training on comprehensive datasets.
Embeddings transform data into numerical vectors, making them highly adaptable tools. They enable us to apply mathematical operations to assess similarities or integrate them into various machine
learning models. Their uses are diverse, ranging from search and similarity assessments to categorization and topic identification. A practical example is sentiment analysis, where embeddings of
product reviews can be evaluated for their closeness to 'positive' or 'negative' sentiment indicators. This flexibility makes embeddings a fundamental component in many data-driven applications.
There are different distance metrics used in vector similarity calculations such as:
□ Euclidean distance: It measures the straight-line distance between two vectors in a vector space. It ranges from 0 to infinity, where 0 represents identical vectors, and larger values
represent increasingly dissimilar vectors.
□ Cosine distance: This similarity measure calculates the cosine of the angle between two vectors in a vector space. It ranges from -1 to 1, where 1 represents identical vectors, 0 represents
orthogonal vectors, and -1 represents vectors that are diametrically opposed.
□ Dot product: This measure reflects the product of the magnitudes of two vectors and the cosine of the angle between them. Its range extends from -∞ to ∞, with a positive value indicating
vectors that point in the same direction, 0 indicating orthogonal vectors, and a negative value indicating vectors that point in opposite directions.
These distance metrics go beyond theoretical mathematics and are extensively employed in a variety of vector databases, including
Amazon Aurora
Amazon OpenSearch
, and other vector stores. We'll explore their practical application in Aurora using
for similarity searches.
First, let's try to create the embeddings using
with the
client; later, we will see how we can do the same using
This code performs the following actions:
• Initializes a session with AWS using
and creates a client for the
• Defines a function get_embedding, which accepts a text input, then utilizes the Amazon Titan Embeddings model to transform this text into an embedding. Once the embedding is generated, the
function returns the embedding vector.
, you can obtain an embedding by using the
method from the
This code does the following:
• Imports the BedrockEmbeddings class from langchain.
• Creates an instance of BedrockEmbeddings to generate embeddings.
• Appends embeddings of several sentences to a list.
You can also obtain embeddings for multiple inputs using the embed_documents() method as well.
When examining the embeddings for a given text, the resulting vectors are consistent whether generated using LangChain or boto3. This uniformity is attributed to the underlying model in use, which is
Amazon Titan Embeddings G1 - Text.
We can also examine the specific vector generated for a text.
To learn more about
, including its use with vectors, I recommend referring to the book,
"Generative AI with LangChain" by Ben Auffarth
. This book provided valuable insights while I was learning about
, and I have incorporated several of its narratives into this blog post.
Understanding how to represent any data point as a vector is crucial, but equally important is knowing how to store these vectors. Before diving into storage methods, let's briefly touch on Vector
Search, which underscores the need for storing embeddings.
Vector search involves representing each data point as a vector in a high-dimensional space, capturing the data's features or characteristics. The aim is to identify vectors most similar to a given
query vector. We've seen how these vector embeddings, numerical arrays representing coordinates in a high-dimensional space, are crucial in measuring distances using metrics like cosine similarity or
euclidean distance, which we discussed earlier.
Imagine an e-commerce platform where each product has a vector representing its features like color, size, category, and user ratings. When a user searches for a product, the search query is
converted into a vector. The system then performs a vector search to find products with similar feature vectors, suggesting these as recommendations.
This process requires efficient vector storage. A vector storage mechanism is essential for storing and retrieving vector embeddings. While standalone solutions exist for this, vector databases
Amazon Aurora
(with 'pgvector'),
Amazon OpenSearch
, and
Amazon Kendra
offer more integrated functionalities. They not only store but also manage large sets of vectors, using indexing mechanisms for efficient similarity searches. We will dive into vector stores/
database in the next section.
To optimize vector search, we typically consider these aspects:
• Indexing: This is about organizing vectors to speed up retrieval. Techniques like k-d trees or Annoy are employed for this.
• Vector libraries: These offer functions for operations like dot product and vector indexing.
• Vector databases
: They are specifically designed for storing, managing, and retrieving vast sets of vectors. Examples include
Amazon Aurora
Amazon OpenSearch
, and
Amazon Kendra
, which utilize indexing for efficient searches.
Indexing in the context of vector embeddings is a method of organizing data to optimize its retrieval. It’s akin to indexing in traditional database systems, where it allows quicker access to
records. For vector embeddings, indexing aims to structure the vectors so that similar vectors are stored adjacently, enabling fast proximity or similarity searches. Algorithms like K-dimensional
trees (k-d trees) are commonly applied, but many others like Ball Trees, Annoy, and FAISS are often implemented, especially for high-dimensional vectors.
K-Nearest Neighbor (KNN) is a straightforward algorithm used for classification and regression tasks. In KNN, the class or value of a data point is determined by its k nearest neighbors in the
training dataset.
Here's how the K-NN algorithm works at a high level:
1. Selecting k: Decide the number of nearest neighbors (k) to influence the classification or regression.
2. Distance Calculation: Measure the distance between the point to classify and every point in the training dataset.
3. Identifying Nearest Neighbors: Choose the k closest data points.
4. Classifying or Regressing:
□ For classification: Assign the class based on the most frequent class within the k neighbors.
□ For regression: Use the average value from the k neighbors as the prediction.
5. Making Predictions: The algorithm assigns a predicted class or value to the new data point.
KNN is considered a lazy learning algorithm because it doesn't create a distinct model during training. Instead, it uses the entire dataset at the prediction stage. The time complexity for KNN is O
(nd), where n is the number of vectors and d is the vector dimension. This scalability issue is addressed with Approximate Nearest Neighbor algorithms (ANN) for faster search.
Alternative algorithms for vector search includes the following, these algorithms are used in combination for optimal retrieval speed.
• Product Quantization
• Locality-sensitive hashing
• Hierarchical Navigable Small World (HNSW)
Before we dive into each of these algorithms in detail, I'd like to extend my gratitude to
Damien Benveniste
for his insightful lecture,
Introduction to LangChain
. His course is a fantastic resource for anyone looking to deepen their understanding of
, and I highly recommend checking it out. The graphics used in the following sections are sourced from his lecture notes, providing a visual complement to our exploration. His contributions have been
invaluable in enhancing the depth and clarity of the content we're about to discuss.
Product Quantization a technique that divides the vector space into smaller subspaces and quantizes each subspace separately. This reduces the dimensionality of the vectors and allows for efficient
storage and search.
1. Vector Breakdown: The first step in PQ involves breaking down each high-dimensional vector into smaller sub-vectors. By dividing the vector into segments, PQ can manage each piece individually,
simplifying the subsequent clustering process.
1. Cluster Formation via K-means: Each sub-vector is then processed through a k-means clustering algorithm. This is like finding representative landmarks for different neighborhoods within a city,
where each landmark stands for a group of nearby locations. We can see multiple clusters formed from the sub-vectors, each with its centroid. These centroids are the key players in PQ; instead of
indexing every individual vector, PQ only stores the centroids, significantly reducing memory requirements.
1. Centroid Indexing: In PQ, we don't store the full detail of every vector; instead, we index the centroids of the clusters they belong to, as demonstrated in the first image. By doing this, we
achieve data compression. For example, if we use two clusters per partition and have six vectors, we achieve a 3X compression rate. This compression becomes more significant with larger datasets.
2. Nearest Neighbor Search: When a query vector comes in, PQ doesn't compare it against all vectors in the database. Instead, it only needs to measure the squared euclidean distance from the
centroids of each cluster. It's a quicker process because we're only comparing the query vector to a handful of centroids rather than the entire dataset.
1. Balance Between Accuracy and Efficiency: The trade-off here is between the granularity of the clustering (how many clusters are used) and the speed of retrieval. More clusters mean finer
granularity and potentially more accurate results but require more time to search through.
In practical terms, PQ allows systems to quickly sift through vast datasets to find the most relevant items to a query. It's particularly beneficial in systems where speed is crucial, such as
real-time recommendation engines or on-the-fly data retrieval systems. By using a combination of partitioning, clustering, and indexing centroids, PQ enables a more scalable and efficient approach to
nearest neighbor search without the need for exhaustive comparisons.
Locality-Sensitive Hashing (LSH) is a technique that clusters (or groups) vectors in a high-dimensional space based on their similarity. This method is advantageous for databases dealing with large,
complex datasets, where it is impractical to compute exact nearest neighbors due to computational or time constraints. For example, we could partition the vector space into multiple buckets.
The LSH algorithm operates through several steps to efficiently group similar data points:
1. Dimensionality Reduction: Initially, vectors are projected onto a lower-dimensional space using a random matrix. This step simplifies the data, making it more manageable and reducing the
computational load for subsequent operations.
2. Binary Hashing: After dimensionality reduction, each component of the projected vector is 'binarized', typically by assigning a 1 if the component is positive and a 0 if negative. This binary
hash code represents the original vector in a much simpler form.
3. Bucket Assignment: Vectors that share the same binary hash code are assigned to the same bucket. By doing so, LSH groups vectors that are likely to be similar into the same 'bin', allowing for
quicker retrieval based on hash codes.
When searching for nearest neighbors, LSH allows us to consider vectors in the same bucket as the query vector as potential nearest neighbors. To compare how similar two hashed vectors are, the
Hamming distance is used. It counts the number of bits that are different between two binary codes. This is analogous to comparing two strings of text to see how many letters are different. This
method is faster than comparing the query to every vector in the dataset.
Hierarchical Navigable Small World (HNSW) is a sophisticated method used to index and search through high-dimensional data like images or text vectors quickly. Think of it as a super-efficient
librarian that can find the book you're looking for in a massive library by taking smart shortcuts.
Imagine you're in a large room full of points, each representing different pieces of data. To create an NSW network, we start linking these points, or nodes, based on how similar they are to each
other. If we have a new node, we'll connect it to its most similar buddies already in the network. These connections are like bridges between islands, creating a web of pathways.
In the above example, we connected each new node to the two most similar neighbors, but we could have chosen another number of similar neighbors. When building the graph, we need to decide on a
metric for similarity such that the search is optimized for the specific metric used to query items. Initially, when adding nodes, the density is low and the edges will tend to capture nodes that are
far apart in similarity. Little by little, the density increases, and the edges start to be shorter and shorter. As a consequence, the graph is composed of long edges that allow us to traverse long
distances in the graph and short edges that capture closer neighbors. Because of it, we can quickly traverse the graph from one side to the other and look for nodes at a specific location in the
vector space.
For example, let’s have a query vector. We want to find the nearest neighbor.
We initiate the search by starting at one node (i.e., node A in that case). Among its neighbors (D, G, C), we look for the closest node to the query (D). We iterate over that process until there are
no closer neighbors to the query. Once we cannot move anymore, we found a close neighbor to the query. The search is approximate and the found node may not be the closest as the algorithm may be
stuck in a local minima.
The problem with just using NSW is like having only one type of bridge, regardless of the distance. Some bridges will be super long, and it might take ages to cross them. That's where the
hierarchical part kicks in. We create multiple layers of graphs, each with different bridge lengths. The top layer has the longest bridges, while the bottom layer has the shortest.
Each layer is a bit less crowded than the one below it. It's like having express lanes on a highway. You start at the top layer to cover big distances quickly, then switch to lower layers for more
precise, shorter searches.
When you search for a specific node, you begin at the top layer. If you find a close neighbor, you drop to the next layer to get even closer, and so on, until you're at the closest point possible.
This way, you can find the nearest neighbors without having to check every single node.
In addition to HNSW and KNN, there are other ways to find similar items or patterns using graphs, such as with Graph Neural Networks (GNN) and Graph Convolutional Networks (GCN). These methods
use the connections and relationships in graphs to search for similarities. There's also the Annoy (Approximate Nearest Neighbors Oh Yeah) algorithm, which sorts vectors using a tree structure
made of random divisions, kind of like sorting books on shelves based on different categories. Annoy is user-friendly and good for quickly finding items that are almost, but not exactly, the
When choosing one of these methods, it's important to consider how fast you need the search to be, how precise the results should be, and how much computer memory you can use. The right choice
depends on what the specific task needs and the type of data you're working with.
Vector libraries are tools for managing and searching through large groups of vectors, which are like lists of numbers. Think of them as advanced systems for organizing and finding patterns in big
data. Popular examples include Facebook's (now Meta) Faiss and Spotify's Annoy. These libraries are really good at finding vectors that are almost the same, using something called the Approximate
Nearest Neighbor (ANN) algorithm. They can use different methods, like grouping or tree-like structures, to search through the vectors efficiently. Here's a simple look at some open-source libraries:
1. FAISS (Facebook AI Similarity Search): Developed by Meta (formerly Facebook), this library helps find and group together similar dense vectors, which are just vectors with a lot of numbers. It's
great for big search tasks and works well with both normal computers and those with powerful GPUs.
2. Annoy: This is a tool created by Spotify for searching near-identical vectors in high-dimensional spaces (which means lots of data points). It's built to handle big data and uses a bunch of
random tree-like structures for searching.
3. hnswlib: This library uses the HNSW (Hierarchical Navigable Small World) algorithm. It's known for being fast and not needing too much memory, making it great for dealing with lots of
high-dimensional vector data.
4. nmslib (Non-Metric Space Library): It’s an open-source tool that's good at searching through non-metric spaces (spaces where distance isn't measured in the usual way). It uses different
algorithms like HNSW and SW-graph for searching.
A vector database is a type of database that is specifically designed to handle vector embeddings making it easier to search and query data objects. It offers additional features such as data
management, metadata storage and filtering, and scalability. While a vector storage focuses solely on storing and retrieving vector embeddings, a vector database provides a more comprehensive
solution for managing and querying vector data. Vector databases can be particularly useful for applications that involve large amounts of data and require flexible and efficient search capabilities
across various types of vectorized data, such as text, images, audio, video, and more.
In essence, vector databases are like advanced tools for organizing and navigating vast and varied data collections. They are especially beneficial for scenarios where quick and efficient searching
through different types of data, converted into vector fingerprints, is crucial. These databases are popular because they are optimized for scalability and representing and retrieving data in
high-dimensional vector spaces. Traditional databases are not designed to efficiently handle large-dimensional vectors, such as those used to represent images or text embeddings.
Vector databases are key in managing and analyzing machine learning models and their embeddings. They shine in similarity or semantic search, enabling quick and efficient navigation through
massive datasets of text, images, or videos to find items matching specific queries based on vector similarities. This technology finds diverse applications, including:
For Anomaly Detection, vector databases compare embeddings to identify unusual patterns, crucial in areas like fraud detection and network security. In Personalization, they enhance
recommendation systems by aligning similar vectors with user preferences. In the realm of Natural Language Processing (NLP), these databases facilitate tasks like sentiment analysis and text
classification by effectively comparing and analyzing text represented as vector embeddings.
As the technology evolves, vector databases continue to find new and innovative applications, broadening the scope of how we handle and analyze large datasets in various fields.
In this blog, we embarked on a comprehensive journey, starting with the basics of vector embeddings and exploring their vital role in text and image processing. We delved into various techniques like
the bag-of-words, word2vec, and CNNs, gaining insights into how these methods transform raw data into meaningful vector representations. Our exploration extended to the crucial aspects of storing and
indexing vector embeddings, and the significant contributions of vector libraries and databases in this realm.
Further, we learned how to create vector embeddings using tools like
Amazon Bedrock
, as well as the Open Source Library
. This provided us with practical insights into embedding generation and manipulation.
We also examined various techniques for vector indexing and the use of vector libraries as storage solutions for our embeddings. With these foundational concepts in place, we're now ready to dive
part two of this blog series
. Here, we'll focus on leveraging different AWS services for storing vector embeddings. This will include an in-depth look at how these services synergize with LLMs to enhance AI and machine learning
For those eager to deepen their understanding and apply these concepts, I recommend visiting this
GitHub page
. It offers a wealth of resources, including sample applications and tutorials, demonstrating the capabilities of
Amazon Bedrock
with Python. These resources are designed to guide you through integrating
with databases, employing RAG techniques, and experimenting with
for practical, hands-on experience.
Any opinions in this post are those of the individual author and may not reflect the opinions of AWS. | {"url":"https://community.aws/content/2gvh6fQM4mJQduLye3mHlCNvPxX/vector-embeddings-and-rag-demystified","timestamp":"2024-11-13T05:21:37Z","content_type":"text/html","content_length":"179119","record_id":"<urn:uuid:ce7d0d02-54bd-4f4c-b07a-e06e21d05c70>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00198.warc.gz"} |
Text Segmentation
Explore the text segmentation problem to take it to the next level.
Optimizing the text segmentation problem
For our next dynamic programming algorithm, let’s consider the text segmentation problem from the previous chapter. We are given a string $A[1 .. n]$ and a subroutine $IsWord$ that determines whether
a given string is a word (whatever that means), and we want to know whether $A$ can be partitioned into a sequence of words.
We solved this problem by defining a function $Splittable(i)$ that returns True if and only if the $suffix\space A[i .. n]$ can be partitioned into a sequence of words. We need to compute $Splittable
(1)$. This function satisfies the recurrence
$Splittable(i)=\begin{cases} & \text{ True \hspace{5.8cm}} if\space i > n \\ & \overset{n}{\underset{j=1}{\vee}} \left ( IsWord(i,j) \wedge Splittable(j+1) \right ) \hspace{1cm} otherwise \end{cases}
where $IsWord(i, j)$ is shorthand for $IsWord(A[i .. j])$. This recurrence translates directly into a recursive backtracking algorithm that calls the $IsWord$ subroutine $O(2^n)$ times in the worst
But for any fixed string $A[1..n]$, there are only $n$ different ways to call the recursive function $Splittable(i)$—one for each value of $i$ between 1 and $n + 1$—and only $O(n^2)$ different ways
to call $IsWord(i, j)$—one for each pair $(i, j)$ such that $1 ≤ i ≤ j ≤ n$.
Why are we spending exponential time computing only a polynomial amount of stuff?
Each recursive subproblem is specified by an integer between $1$ and $n + 1$, so we can memoize the function $Splittable$ into an array $SplitTable[1 .. n + 1]$. Each subproblem $Splittable(i)$
depends only on the results of subproblems $Splittable(j)$ where $j > i$, so the memoized recursive algorithm fills the array in decreasing index order. If we fill the array in this order
deliberately, we obtain the dynamic programming algorithm shown in the pseudocode below. The algorithm makes $O(n^2)$ calls to $IsWord$, an exponential improvement over our earlier backtracking
Create a free account to access the full course.
By signing up, you agree to Educative's Terms of Service and Privacy Policy | {"url":"https://www.educative.io/courses/mastering-algorithms-for-problem-solving-in-java/text-segmentation","timestamp":"2024-11-15T04:30:11Z","content_type":"text/html","content_length":"960231","record_id":"<urn:uuid:2cff0353-3d9d-4592-9f35-275c9e08925b>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00889.warc.gz"} |
Transactions Online
Zhongqiang LUO, Chaofu JING, Chengjie LI, "Nonnegative Matrix Factorization with Minimum Correlation and Volume Constrains" in IEICE TRANSACTIONS on Fundamentals, vol. E105-A, no. 5, pp. 877-881, May
2022, doi: 10.1587/transfun.2021EAL2050.
Abstract: Nonnegative Matrix Factorization (NMF) is a promising data-driven matrix decomposition method, and is becoming very active and attractive in machine learning and blind source separation
areas. So far NMF algorithm has been widely used in diverse applications, including image processing, anti-collision for Radio Frequency Identification (RFID) systems and audio signal analysis, and
so on. However the typical NMF algorithms cannot work well in underdetermined mixture, i.e., the number of observed signals is less than that of source signals. In practical applications, adding
suitable constraints fused into NMF algorithm can achieve remarkable decomposition results. As a motivation, this paper proposes to add the minimum volume and minimum correlation constrains (MCV) to
the NMF algorithm, which makes the new algorithm named MCV-NMF algorithm suitable for underdetermined scenarios where the source signals satisfy mutual independent assumption. Experimental simulation
results validate that the MCV-NMF algorithm has a better performance improvement in solving RFID tag anti-collision problem than that of using the nearest typical NMF method.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021EAL2050/_p
author={Zhongqiang LUO, Chaofu JING, Chengjie LI, },
journal={IEICE TRANSACTIONS on Fundamentals},
title={Nonnegative Matrix Factorization with Minimum Correlation and Volume Constrains},
abstract={Nonnegative Matrix Factorization (NMF) is a promising data-driven matrix decomposition method, and is becoming very active and attractive in machine learning and blind source separation
areas. So far NMF algorithm has been widely used in diverse applications, including image processing, anti-collision for Radio Frequency Identification (RFID) systems and audio signal analysis, and
so on. However the typical NMF algorithms cannot work well in underdetermined mixture, i.e., the number of observed signals is less than that of source signals. In practical applications, adding
suitable constraints fused into NMF algorithm can achieve remarkable decomposition results. As a motivation, this paper proposes to add the minimum volume and minimum correlation constrains (MCV) to
the NMF algorithm, which makes the new algorithm named MCV-NMF algorithm suitable for underdetermined scenarios where the source signals satisfy mutual independent assumption. Experimental simulation
results validate that the MCV-NMF algorithm has a better performance improvement in solving RFID tag anti-collision problem than that of using the nearest typical NMF method.},
TY - JOUR
TI - Nonnegative Matrix Factorization with Minimum Correlation and Volume Constrains
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 877
EP - 881
AU - Zhongqiang LUO
AU - Chaofu JING
AU - Chengjie LI
PY - 2022
DO - 10.1587/transfun.2021EAL2050
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E105-A
IS - 5
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - May 2022
AB - Nonnegative Matrix Factorization (NMF) is a promising data-driven matrix decomposition method, and is becoming very active and attractive in machine learning and blind source separation areas.
So far NMF algorithm has been widely used in diverse applications, including image processing, anti-collision for Radio Frequency Identification (RFID) systems and audio signal analysis, and so on.
However the typical NMF algorithms cannot work well in underdetermined mixture, i.e., the number of observed signals is less than that of source signals. In practical applications, adding suitable
constraints fused into NMF algorithm can achieve remarkable decomposition results. As a motivation, this paper proposes to add the minimum volume and minimum correlation constrains (MCV) to the NMF
algorithm, which makes the new algorithm named MCV-NMF algorithm suitable for underdetermined scenarios where the source signals satisfy mutual independent assumption. Experimental simulation results
validate that the MCV-NMF algorithm has a better performance improvement in solving RFID tag anti-collision problem than that of using the nearest typical NMF method.
ER - | {"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.2021EAL2050/_p","timestamp":"2024-11-07T12:36:11Z","content_type":"text/html","content_length":"63298","record_id":"<urn:uuid:35b23721-8c1d-436b-9b45-36a13c2a6d6a>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00130.warc.gz"} |
Calculate the fractional void volumes in the c.c.p. and h.c.p.structures of hard spheres.
In a unit cell, the space which is not occupied by the constituent particles in the unit cell is known as the void space. If we consider the total volume of a unit cell as 1.then the fraction of void
space would be equal to,
$\text{ The fraction of void space = 1}-\text{Packing fraction }$
The $\text{ }{\scriptstyle{}^{0}/{}_{0}}\text{ }$ void space in a unit cell is equal to,
$\text{ }{\scriptstyle{}^{0}/{}_{0}}\text{ Void space = 100}-\text{Packing efficiency }$
Complete step by step answer:
The void fraction is the total fraction of the crystal which is empty. The void fraction can be calculated by subtracting the packing fraction from the volume of the crystal.
The cubic centred crystal or the face-centred cubic crystal structure has 8 atoms at the 8 corners of the cubic structure. The crystal structure contains 6 atoms at the 6 faces of the cubic
structure. Thus total number of atoms per unit cubic structure is equal to,
& \text{ No}\text{. of atom = 8 }\times \text{ }\dfrac{1}{8}\text{ contribution at corner + 6 }\times \dfrac{1}{2}\text{ contribution at face} \\
& \therefore \text{No}\text{. of atom = 1 + 3 = 4 } \\
Thus, the CCP contains a total of 4 atoms per unit cell.
The total value of the cubic structure is equal to, $\text{ Volume of unit cell = (edge}{{\text{)}}^{\text{3}}}\text{ = }{{a}^{3}}\text{ }$
Atoms are considered spherical in shape. Thus the volume of an atom is given as,
$\text{ Volume of atom = }\dfrac{4}{3}\text{ }\pi {{\text{r}}^{\text{3}}}\text{ }$
The packing fraction of a unit cell is equal to the amount of the total unit cell occupied by the atoms. Thus packing fraction is a ratio of the volume occupied by the atoms in the cell to the total
volume of the unit cell. It is given as follows,
$\text{ Packing fraction of C}\text{.C}\text{.P}\text{. = }\dfrac{\text{total volume occupied by atoms}}{\text{Total volume of unit cell}}\text{ = }\dfrac{4\times \dfrac{4}{3}\pi {{r}^{3}}}{{{a}^
{3}}}\text{ }$
For CCP structure the relationship between the edge length and radius is given as,
$\text{ }\sqrt{2a}\text{ = 4r }\Rightarrow \text{ }a\text{ = 2}\sqrt{2}\text{ r }$
Substitute the values in fraction. We have,
$\text{ P}\text{.}{{\text{F}}_{\text{C}\text{.C}\text{.P}\text{. }}}\text{= }\dfrac{4\times \dfrac{4}{3}\pi {{r}^{3}}}{{{\left( 2\sqrt{2}\text{ r} \right)}^{3}}}\text{ = }\dfrac{16\pi }{3\times 16\
sqrt{2}}\text{ = }\dfrac{\pi }{3\sqrt{2}}\text{ = 0}\text{.74 }$
Thus packing fraction for CCP is equal to $\text{ 0}\text{.74 }$ .the void fraction would be equal to,
$\text{ Void fraction = 1}-\text{Packing fraction = 1}-0.74\text{ = 0}\text{.26 }$
Therefore, the void fraction for CCP is equal to $\text{ 0}\text{.26 }$ .
2) HCP:
The HCP unit cell contains the 6 atoms in the unit cell. The packing fraction of HCP unit cell is given as, $\text{ Packing fraction of H}\text{.C}\text{.P}\text{. = }\dfrac{\text{total volume
occupied by atoms}}{\text{Total volume of unit cell}}\text{ = }\dfrac{6\times \dfrac{4}{3}\pi {{r}^{3}}}{{{a}^{3}}}\text{ }$
Substitute the values in packing fraction. We have,
$\text{ P}\text{.}{{\text{F}}_{\text{C}\text{.C}\text{.P}\text{. }}}\text{= }\dfrac{4\times \dfrac{4}{3}\pi {{r}^{3}}}{24\sqrt{2}\text{ }{{\text{r}}^{3}}}\text{ = }\dfrac{8\pi {{\text{r}}^{\text
{3}}}}{24\sqrt{2}\text{ }{{\text{r}}^{3}}}\text{ = }\dfrac{\pi }{3\sqrt{2}}\text{ = 0}\text{.74 }$
Thus packing fraction for HCP is equal to $\text{ 0}\text{.74 }$ .the void fraction would be equal to,
$\text{ Void fraction = 1}-\text{Packing fraction = 1}-0.74\text{ = 0}\text{.26 }$
Therefore, the void fraction for HCP is equal to $\text{ 0}\text{.26 }$ .
Thus, the void fraction for CCP is equal to $\text{ 0}\text{.26 }$and for HCP is equal to $\text{ 0}\text{.26 }$.Note:
Note that, if you know the unit cell dimension ‘a’ it is possible to calculate the volume of the unit cell.in calculating the packing fraction, we consider that the atom is a spherical. Remember
that, whatever be the constituent in the unit cell (like an atom, molecule, or ions) they are always packed in such a way that there is a free space in the form of the void. For a simple cubic
structure, the void fraction is equal to $\text{ 0}\text{.476 }$ and for BCC structure is $\text{ 0}\text{.32 }$ . | {"url":"https://www.vedantu.com/question-answer/calculate-the-fractional-void-volumes-in-the-ccp-class-12-chemistry-cbse-5fb35f19de1c3232ee2e7a95","timestamp":"2024-11-11T14:02:51Z","content_type":"text/html","content_length":"208936","record_id":"<urn:uuid:268e8e85-d657-4e64-9089-28c596bce193>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00348.warc.gz"} |
Write each of the following as a decimal . (a) 200+30+5+102+10... | Filo
Question asked by Filo student
Write each of the following as a decimal .
Not the question you're searching for?
+ Ask your question
Step 1. For (a) we have: Step 2. We simplify the fractions to get a common denominator of 100. Step 3. We add the whole number and the decimal to get the final answer. Step 4. For (b) we have: Step
5. We simplify the fractions to get a common denominator of 100. Step 6. We add the whole number and the decimal to get the final answer. Step 7. For (c) we have: Step 8. We simplify the fractions to
get a common denominator of 1000. Step 9. We add the whole number and the decimal to get the final answer. Therefore, the requested decimals are: (a) (b) and (c) .
Found 5 tutors discussing this question
Discuss this question LIVE for FREE
11 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Practice more questions on All topics
View more
Students who ask this question also asked
View more
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Write each of the following as a decimal .
Question Text (b)
Updated On Nov 3, 2023
Topic All topics
Subject Mathematics
Class Class 11
Answer Type Text solution:1 | {"url":"https://askfilo.com/user-question-answers-mathematics/write-each-of-the-following-as-a-decimal-a-b-c-35393731373737","timestamp":"2024-11-08T08:25:28Z","content_type":"text/html","content_length":"373392","record_id":"<urn:uuid:15c3cdf3-40d3-43b1-9e0f-951f38301d3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00523.warc.gz"} |
Assignment 3: Recommender System - Programming Help
The goal of this assignment is to familiarize you with the User-based Recommender System and also expose you to model evaluation on real data.
What to do — recommender.py
You are asked to complete a Python program, called `recommender.py` that will
* [Task 1] Calculate user similarity weights among users using four different metrics, including Euclidean distance, Manhattan distance, Pearson correlation, and Cosine similarity
* [Task 2] Use user similarity weights from top k users to predict movies ratings for selected users
* [Task 3] Evaluate the performances of combinations of different metrics and different values of k
We present the details of each task next.
Task 1
In this task, you will need to implement four methods to calculate user similarity weight. Each method has two arguments, the first argument should be the training dataset, in the form of a
DataFrame, and the second argument should be the selected userId. <br />
<font size=”2″>**Method 1**</font>: train_user_cosine<br />
This method calculates the user similarity using the Cosine similarity.
<font size=”2″>**Method 2**</font>: train_user_euclidean<br />
This method calculates the user similarity using the Euclidean distance.
<font size=”2″>**Method 3**</font>: train_user_manhattan<br />
This method calculates the user similarity using the Manhanttan distance.
<font size=”2″>**Method 4**</font>: train_user_pearsons<br />
This method calculates the user similarity using the Pearson correlation.
Task 2
In this task, you need to use the user similarity weights that you get from **Task 1** to make predictions of movie ratings for selected users. You should use the similarity from top k users. You
should try combinations of different metrics with different values of k. To make predictions, you need to use two datasets: `small_test.csv` and `medium_test.csv`. **Note you should only make
predictions for not null ratings in the two test datasets. There’s no need to predict missing ratings.** <br />
In this task, you need to implement two methods. <br />
<font size=”2″>**Method 1**</font>: get_user_existing_ratings <br />
This method returns the existing movie ratings of the selected userId from given dataset.
This method has two arguments.
* data_set: this argument should be the test dataset, in the form of a DataFrame
* userId: this argument is the selected userId that you need to retrieve the ratings for
The return value should be a list of tuples, each of which has the movie id and the rating for that movie, for example [(32, 4.0), (50, 4.0)]
<font size=”2″>**Method 2**</font>: predict_user_existing_ratings_top_k <br />
The purpose of this method is to predict the selected user’s movie ratings. Note that we don’t calculate the user’s missing ratings, but the ratings that the user ALREADY has, in order to compare the
predicted ratings to the existing ratings we get by calling `get_user_existing_ratings`
The return value should be a list of tuples, each of which has the movie id and the rating for that movie, for example [(32, 4.0), (50, 4.0)].
This method has four arguments.
* data_set: this argument should be the test dataset, in the form of a DataFrame
* sim_weights: this argument should be the similarity weights you get from **Task 1**; there should be four types of similarity weights, one for each metric
* userId: this argument is the selected userId that you need to make the prediction
* k: this argument is the number of top k users
Task 3:
In this task, you need to evaluate the prediction performance of combinations different user similarity metrics and different values of k. You should compare the predicted ratings with the existing
ratings of selected users in the test datasets. To evaluate the performance, you should calculate the root mean square error (rmse) of the predicted ratings and existing ratings. <br />
In this task, you need to implement method **evaluate**. This method has two arguments, in the form of a list of (movieId, rating) tuples. The first argument list that contains existing ratings of a
user, and the second argument is a list that contains predicted ratings of the same user. The return value should be a small dictionary that contains the root square error and the ratio, like
{‘rmse’:0.5, ‘ratio’:0.9}.
We give the definition of ratio as follow:
When comparing two sets of ratings for evaluation, the baseline set and the predicted set, the ratio is the number of movies that BOTH sets have ratings for, over the number of movies that the
baseline set has ratings for.
For example, If for user1 we have ratings for movie1, movie3, and movie5, and our top-k prediction only produces ratings for movie1 and movie3, the ratio would be 2/3. Only for two out of the three
movies we have a predicted rating. This is important, as if we choose a low k value, we might only get a prediction for a very small number of movies, which is bad, even if those ratings come close
to the real ones. You can think of it as the recall vs accuracy.
The provided skeleton code already has implemented methods that perform multiple predictions and evaluations for different combinations of distance metrics and k values. They should help you to test
your code and make sure your methods return the right type of values.
Allowed Python Libraries (Updated)
You are allowed to use the following Python libraries (although a fraction of these will actually be needed):
If you would like to use any other libraries, you must ask permission within a maximum of one week after the assignment was released, using [canvas](http://cs1656.org).
How to submit your assignment
We are going to use Gradescope to submit and grade your assignments.
To submit your assignment:
* login to Canvas for this class <https://cs1656.org>
* click on Gradescope from the menu on the left
* select “Assignment 3” from the list of active assignments in Gradescope
* follow the instructions to submit your assignment and have it automatically graded.
What to submit
For this test assignment you only need to submit `recommender.py` to “Assignment 3” and see if you get all 80 points. In case of an error or wrong result, you can modify the file and resubmit it as
many times as you want until the deadline of **Friday ,Dec 3, 2021**.
Late submissions
For full points, we will consider the version submitted to Gradescope
* the day of the deadline **Friday ,Dec 3, 2021**
* 48 hours later (for submissions that are one day late / -5 points), and
* 96 hours after the first deadline (for submissions that are two days late / -15 points).
Our assumption is that everybody will submit on the first deadline. If you want us to consider a late submission, you need to email us at `cs1656-staff@cs.pitt.edu`
Important notes about grading
It is absolutely imperative that your python program:
* runs without any syntax or other errors (using Python 3)
* strictly adheres to the format specifications for input and output, as explained above.
Failure in any of the above will result in **severe** point loss.
About your github account
Since we will utilize the github classroom feature to distribute the assignments it is very important that your github account can do **private** repositories. If this is not already enabled, you can
do it by visiting <https://education.github.com/> | {"url":"https://www.edulissy.org/product/the-goal-of-this-assignment-is-to-familiarize-you-with-the-user-based-recommender-system-and-also-expose-you-to-model-evaluation-on-real-data/","timestamp":"2024-11-09T11:16:27Z","content_type":"text/html","content_length":"185789","record_id":"<urn:uuid:275f7058-e326-4fa6-b8ac-f7ba50159928>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00425.warc.gz"} |
Degrees of Freedom and Restraint Codes | SkyCiv Engineering
Degrees of Freedom and Restraint Codes
In Structural Analysis, the term degrees of freedom is extremely important yet often misunderstood. Degrees of freedom refers to the 6 possible movements that can occur at a point and whether or not
these movements are free to move or are restrained. Although that might seem like a lot of jargon, it will become better understood throughout this tutorial and instructional video below.
Firstly what are the 6 degrees of freedom? Picture an airplane suspended in space. The airplane is free to move forward or back (along its X axis), left to right (along its Z axis) or up and down
(along its Y axis). These are known as translations and make up the first three degrees of freedom. The airplane is also free to rotate side to side (rolling - rotating about its own X axis), pitch
down or up (rotate about its own Z axis) or yaw left or right (rotating about its own Y axis). These are the 4th 5th and 6th degrees known as rotational degrees of freedom. So in short:
1. X translation
2. Y translation
3. Z translation
4. X rotation
5. Y rotation
6. Z rotation
What the letters represent?
• 'F' - Fixed - There is a fixed restraint for this degree of freedom. Any force acting in this direction is absorbed by the support at a node or by the connected member in a connectivity.
• 'R' - Released - There is a no restraint for this degree of freedom. The node is free to move in this direction and any force acting in this direction is does not pass on to the connected element
or support.
• 'S' - Spring - There are restraints but their level of restraint is based on the spring's coefficient based on Hooke's Law.
Restraint Codes for Supports
The type of support used in a structural analysis model is often determined by the 6 degrees of freedom. An example is representing the 6 degrees of freedom by a 6 character code comprised of a
combination of Fs and Rs - where F = Fixed and R = Released. For instance, a totally fixed support is denoted by the code "FFFFFF" as it is fixed in all 6 degrees of freedom. A pin support is often
only released in the Z rotation and is therefore denoted by "FFFFFR". Another example is a roller support. In this example, it cannot support any of the force in the x translation or in any of the
rotations. It is therefore given the restraint code ‘RFFRRR’.
Another way to think of it, is that the support will contain a reaction for any degree of freedom that is fixed. For instance, a pinned support is fixed in the X and Y translation - it is therefore
expected that there will be a reaction in the x and y direction.
Sam Carigliano
CEO and Co-Founder of SkyCiv
BEng (Civil), BCom
Share This Story, Choose Your Platform! | {"url":"https://skyciv.com/education/explaining-degrees-of-freedom/","timestamp":"2024-11-14T08:37:18Z","content_type":"text/html","content_length":"182744","record_id":"<urn:uuid:5f9866e5-c32d-4e2b-9b28-384eb0000693>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00591.warc.gz"} |
Holographic signatures of resolved cosmological singularities
After some longer silence partly due to moving to a new location (LMU Munich) and teaching my first regular lecture (Theoretical mechanics for lyceum teachers and computational science at Regensburg
University), I hope to write more regularly again in the future.
As a start, a new paper on using loop quantum gravity in the context of AdS/CFT has finally appeared
. Together with Andreas Schäfer and John Schliemann from Regensburg University, we asked the question of what happens in the dual CFT if you assume that the singularity on the gravity side is
resolved in a manner inspired by results from loop quantum gravity.
Building (specifically) on recent work by
Engelhardt, Hertog, and Horowitz
(as well as many others before them) using classical gravity, we found that a finite distance pole in the two-point-correlator of the dual CFT gets resolved if you resolve the singularity in the
gravity theory. Several caveats apply to this computation, which are detailed in the papers. We view this result therefore as a proof of principle that such computations are possible, as opposed to
some definite statement of how exactly they should be done. | {"url":"https://relatively-quantum.bodendorfer.eu/2016/12/holographic-signatures-of-resolved.html","timestamp":"2024-11-10T04:09:46Z","content_type":"application/xhtml+xml","content_length":"58827","record_id":"<urn:uuid:3a8eafc2-7cc0-401f-b43a-27f5f149bf3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00734.warc.gz"} |
Chezy Equations Formulas Calculator
Online Web Apps, Rich Internet Application, Technical Tools, Specifications, How to Guides, Training, Applications, Examples, Tutorials, Reviews, Answers, Test Review Resources, Analysis, Homework
Solutions, Worksheets, Help, Data and Information for Engineers, Technicians, Teachers, Tutors, Researchers, K-12 Education, College and High School Students, Science Fair Projects and Scientists
By Jimmy Raymond
Contact: aj@ajdesigner.com
Privacy Policy, Disclaimer and Terms
Copyright 2002-2015 | {"url":"https://www.ajdesigner.com/phpchezy/chezy_equation.php","timestamp":"2024-11-12T21:59:42Z","content_type":"text/html","content_length":"21317","record_id":"<urn:uuid:eda82172-cae7-411c-beab-ae8977b4a16c>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00185.warc.gz"} |
Basic question about TD and LC
• Thread starter mananvpanchal
• Start date
In summary, in A frame, two points on a rod are at different positions. If you transform the points to B frame, the time-component of B frame (perpendicular distance between this two parallel lines)
is still bigger than time-component of A frame. So, B feels more time between two events, and so B feels A's clock running slowly.
Hello, I have basic confusion about Time Dilation and Length Contraction. I have struggled much, but I haven't succeed. Please. help me to clear it.
In A frame I have [itex][t_{a1}, x_{a1}][/itex] and [itex][t_{a2}, x_{a2}][/itex]. If I assume c=1 and [itex]x_{a1}=x_{a2}[/itex], and if I transform it to B frame which is moving with v speed
relative to A frame. I get this.
[itex]t_{b1}=\gamma (t_{a1}-vx_{a1})[/itex]
[itex]t_{b2}=\gamma (t_{a2}-vx_{a2})[/itex]
If I simply do [itex]t_{b2}-t_{b1}[/itex], I get
[itex]t_{b2}-t_{b1} = \gamma (t_{a2}-t_{a1})[/itex]
[itex]\Delta t_b = \gamma \Delta t_a[/itex]
[itex]\Delta t_b > \Delta t_a[/itex]
If As I understand [itex]\Delta t_a[/itex] is elapsed time in A frame and [itex]\Delta t_b[/itex] is elapsed time in B frame then it would be [itex]\Delta t_b < \Delta t_a[/itex], then why [itex]\
Delta t_b > \Delta t_a[/itex]?
Now, If I have [itex][t_{a1}, x_{a1}][/itex] and [itex][t_{a2}, x_{a2}][/itex] where c=1 and [itex]t_{a1}=t_{a2}[/itex].
[itex]x_{b1}=\gamma (x_{a1}-vt_{a1})[/itex]
[itex]x_{b2}=\gamma (x_{a2}-vt_{a2})[/itex]
If I simply do [itex]x_{b2}-x_{b1}[/itex], I get
[itex]x_{b2}-x_{b1} = \gamma (x_{a2}-x_{a1})[/itex]
[itex]\Delta x_b = \gamma \Delta x_a[/itex]
[itex]\Delta x_b > \Delta x_a[/itex]
If As I understand [itex]\Delta x_a[/itex] is length in A frame and [itex]\Delta x_b[/itex] is length in B frame then it would be [itex]\Delta x_b < \Delta x_a[/itex], then why [itex]\Delta x_b > \
Delta x_a[/itex]?
What am I thinking wrong here?
mananvpanchal said:
Hello, I have basic confusion about Time Dilation and Length Contraction. I have struggled much, but I haven't succeed. Please. help me to clear it.
In A frame I have [itex][t_{a1}, x_{a1}][/itex] and [itex][t_{a2}, x_{a2}][/itex]. If I assume c=1 and [itex]x_{a1}=x_{a2}[/itex],
OK, you've chosen events that take place at different times but at the same location in A. An example of this could be two time readings on a clock at rest in A.
and if I transform it to B frame which is moving with v speed relative to A frame. I get this.
[itex]t_{b1}=\gamma (t_{a1}-vx_{a1})[/itex]
[itex]t_{b2}=\gamma (t_{a2}-vx_{a2})[/itex]
If I simply do [itex]t_{b2}-t_{b1}[/itex], I get
[itex]t_{b2}-t_{b1} = \gamma (t_{a2}-t_{a1})[/itex]
[itex]\Delta t_b = \gamma \Delta t_a[/itex]
[itex]\Delta t_b > \Delta t_a[/itex]
Yes. This is the normal time dilation formula--moving clocks are measured to run slowly. So B will measure a greater time between those events than would A.
If As I understand [itex]\Delta t_a[/itex] is elapsed time in A frame and [itex]\Delta t_b[/itex] is elapsed time in B frame then it would be [itex]\Delta t_b < \Delta t_a[/itex], then why [itex]
\Delta t_b > \Delta t_a[/itex]?
I don't understand this comment. Why would you think this? (Note that the simple time dilation formula only applies to time intervals between events taking place at the same location in the moving
frame, where Δx
= 0.)
Now, If I have [itex][t_{a1}, x_{a1}][/itex] and [itex][t_{a2}, x_{a2}][/itex] where c=1 and [itex]t_{a1}=t_{a2}[/itex].
Now you have chosen events that take place at the same time, but at different positions in A. Could be simultaneous measurements of the ends of a rod in A.
[itex]x_{b1}=\gamma (x_{a1}-vt_{a1})[/itex]
[itex]x_{b2}=\gamma (x_{a2}-vt_{a2})[/itex]
If I simply do [itex]x_{b2}-x_{b1}[/itex], I get
[itex]x_{b2}-x_{b1} = \gamma (x_{a2}-x_{a1})[/itex]
[itex]\Delta x_b = \gamma \Delta x_a[/itex]
[itex]\Delta x_b > \Delta x_a[/itex]
OK. This should make sense. After all, in B's frame the rod is moving. So the positions of those events will be further apart.
If As I understand [itex]\Delta x_a[/itex] is length in A frame and [itex]\Delta x_b[/itex] is length in B frame then it would be [itex]\Delta x_b < \Delta x_a[/itex], then why [itex]\Delta x_b >
\Delta x_a[/itex]?
What am I thinking wrong here?
In order for B to make a measurement of the length of a rod that is at rest in A, B must measure the positions at the same time. When he does that, then he'll see the usual length contraction. Note
that here the measurements of the endpoints are not made at the same time according to B.
Hello Doc Al
Please, look at above image when we transform the points on [itex]t_a[/itex]. We get points on [itex]t_b[/itex]. And when we transform the points on [itex]x_a[/itex]. We get points on [itex]x_b[/
Doc Al said:
Yes. This is the normal time dilation formula--moving clocks are measured to run slowly. So B will measure a greater time between those events than would A.
Yes, you are right. We can easily see in the above link and the image that if we put the two points of [itex]t_b[/itex] on parallel lines to [itex]x_a[/itex]. The time-component of B frame
(perpendicular distance between this two parallel lines) is still bigger than time-component of A frame. So, B feels more time between two events, and so B feels A's clock running slowly
Doc Al said:
OK. This should make sense. After all, in B's frame the rod is moving. So the positions of those events will be further apart.
Doc Al said:
In order for B to make a measurement of the length of a rod that is at rest in A, B must measure the positions at the same time. When he does that, then he'll see the usual length contraction.
Note that here the measurements of the endpoints are not made at the same time according to B.
If we put two points of [itex]x_b[/itex] on parallel lines to [itex]t_a[/itex]. We can still see that space component (perpendicular distance between this two parallel lines) is still bigger in B
frame than space component of A frame. So, B feels length between two events is expanded. I cannot understand this. The rod is moving in B frame. So length between two events should be contracted,
not expanded.
Last edited:
mananvpanchal said:
If we put two points of [itex]x_b[/itex] on parallel lines to [itex]t_a[/itex]. We can still see that space component (perpendicular distance between this two parallel lines) is still bigger in B
frame than space component of A frame. So, B feels length between two events is expanded. I cannot understand this. The rod is moving in B frame. So length between two events should be
contracted, not expanded.
Only if those events marked the ends of the rods at the same time according to B. If the events had the same time coordinates in frame A they would not have the same time coordinates in frame B.
(Simultaneity is frame dependent.)
In order for B to measure the length of a moving rod, he must measure the position of the ends of the rod at the same time
according to him
. If he does, then he'll measure the usual length contraction.
Doc Al said:
Only if those events marked the ends of the rods at the same time according to B. If the events had the same time coordinates in frame A they would not have the same time coordinates in frame B.
(Simultaneity is frame dependent.)
In order for B to measure the length of a moving rod, he must measure the position of the ends of the rod at the same time according to him. If he does, then he'll measure the usual length
Ok, now see above image and the link. As you said if we want length contraction, then we should take the two events of [itex]x_b[/itex] on two parallel lines to [itex]t_b[/itex]. So as wiki article
says OB > OA, we can achieve length contraction in our case.
But, the same thing we can apply to time component too. As shown in image. We can take the two events of [itex]t_b[/itex] on two parallel lines to [itex]x_b[/itex]. And now B sees A's clock is
running faster than his own's clock.
Using parallel lines to [itex]x_a[/itex] we can explain time dilation.
Using parallel lines to [itex]t_b[/itex] we can explain length contraction.
Why can we not explain time dilation using parallel lines to [itex]x_b[/itex]?
Why can we not explain length contraction using parallel lines to [itex]t_a[/itex]?
Last edited:
mananvpanchal said:
But, the same thing we can apply to time component too. As shown in image. We can take the two events of [itex]t_b[/itex] on two parallel lines to [itex]x_b[/itex]. And now B sees A's clock is
running faster than his own's clock.
No. If the two events happen
at the same location
according to B's frame, then of course A will measure a greater time interval than B. This is equivalent to there being a clock at rest in B, so B measures the shortest time between those events.
Realize that since the events happen
at different locations
in A there are multiple clocks involved. So you cannot simply apply the time dilation formula to that time interval.
Nonetheless, both frames see the other's clocks as running slow (and out of sync along the direction of motion).
Doc Al said:
No. If the two events happen at the same location according to B's frame, then of course A will measure a greater time interval than B. This is equivalent to there being a clock at rest in B, so
B measures the shortest time between those events. Realize that since the events happen at different locations in A there are multiple clocks involved. So you cannot simply apply the time
dilation formula to that time interval.
Nonetheless, both frames see the other's clocks as running slow (and out of sync along the direction of motion).
Please, try to get my point the events is not happened at same location in B's frame. The events is happened at same location in A's frame.
Please, look at the image below.
Please, look at left hand side of image. AB is time duration between two events in A's frame happened at same location and at different time. Where AC is time duration between the two events in B's
frame. If we want to know how much time elapsed in B's frame between those two events, we want to take parallel line to [itex]x_a[/itex]. From this we get AC' as time elapsed in B's frame. We can say
AC' > AB, so we can say that A's clock running slower than B's clock.
On the right hand side of image. The two events A and B happened at same time and at different location in A's frame. When we transform the events co-ordinates into B's frame. We get A and C point.
The two points do not have same time component. So, B' frame cannot find length contraction appropriately. So, the condition is time component of B's frame should be same for events. So we are taking
a parallel line to [itex]t_b[/itex] and we get AB as contracted length. Is this not a length measured in A's frame? Can we not take parallel line to [itex]t_a[/itex] rather than [itex]t_b[/itex] as
we have done to get time dilation (taking parallel line to [itex]x_a[/itex] rather than [itex]x_b[/itex])?
EDIT: If we take parallel line to [itex]t_b[/itex], we get back the same point from which we transformed. Which doesn't give contracted length. It gives length measured in A's frame at rest. Which
should be longest relative to any other frame.
Last edited:
Please, look at left hand side of post #7's image.
The two events A and B happens at same location in A's frame. A measures time duration between this two events as AB in his own clock.
When we transform the two events into B's frame. We get A and C. If we take C on parallel line to [itex]x_a[/itex], we get C' point on [itex]t_a[/itex]. So we can conclude that B's frame measures
more time duration AC' between this two events in his own clock. So B can conclude that A's clock running slowly than his own clock.
The same as above, two events A and B happens at same time in A's frame (right side of image). A measures length between two events as AB with his own ruler.
When we transform the two events into B's frame. We get A and C. If we take C on parallel line to [itex]t_a[/itex], we get C' point on [itex]x_a[/itex]. So we can conclude that B's frame measures
more length AC' between this two events with his own ruler. So B can conclude that A's ruler is expanded than his own ruler.
If we take parallel line to [itex]t_b[/itex] to get length contraction then there are two problem with that.
1. AC is not length measured by B's frame at same time, so if we want to solve this by taking parallel line to [itex]t_b[/itex], we get same point B on [itex]x_a[/itex]. Where AB is distance measured
by A's frame not distance measured by B's frame at same time.
2. If we want length contraction and we want to solve this by taking parallel line to [itex]t_b[/itex] then we have to take parallel line to [itex]x_b[/itex] to get time dilation, but we cannot get
time dilation by this method. If we want time dilation then we have to take parallel line to [itex]x_a[/itex].
the basic thing about time dilation and length contraction is that both are reciprocal in nature. to understand more about the meaning of reciprocal character you try find the 'twin paradox' and the
'barn and ladder paradox' respectively.
though both paradoxes can't be resolved within the realm of str as they involve accelrated frames but they do give a meaning of reciprocity
Hello, Doc Al
My post #8 is the best description of my understanding. Please, flash some light on it.
I thought you came to an understanding on the length contraction part of your questions in your other thread,
Length Contraction rearrangement
. Was I wrong?
BTW, this is why you shouldn't ask the same question in multiple threads.
FAQ: Basic question about TD and LC
1. What is the difference between TD and LC?
TD (Time Division) and LC (Logic Control) are two different methods of dividing and controlling the flow of information in a communication system. TD involves dividing a fixed amount of time into
smaller time slots to transmit data, while LC involves using logical operations to control the flow of data. TD is used in systems such as TDMA (Time Division Multiple Access) and TDM (Time Division
Multiplexing), while LC is used in systems such as PLC (Programmable Logic Controllers) and digital logic circuits.
2. How are TD and LC used in different industries?
TD is commonly used in telecommunication systems, such as cellular networks and satellite communications. It is also used in audio and video equipment, such as CD players and television broadcasting.
LC is commonly used in industrial automation and control systems, such as manufacturing and process control, as well as in electronic devices, such as computers and digital appliances.
3. What are the advantages of using TD over LC?
One of the main advantages of using TD over LC is that it allows for multiple users to share the same communication channel without interfering with each other, as each user is assigned a specific
time slot. This makes TD ideal for applications that require high data rates and multiple users, such as in cellular networks. Additionally, TD is relatively simple and cost-effective to implement.
4. What are the advantages of using LC over TD?
LC allows for more flexibility and control over the flow of data compared to TD. With LC, logical operations can be used to manipulate data in real-time, making it suitable for applications that
require precise and dynamic control, such as in industrial automation. LC also allows for more efficient use of bandwidth, as data can be transmitted only when needed.
5. Can TD and LC be used together?
Yes, TD and LC can be used together in some systems. For example, some cellular networks use TDMA (a TD method) for dividing time slots among users, while also using LC for controlling the data flow
within each time slot. This allows for efficient use of bandwidth and precise control over the flow of data. However, in most cases, TD and LC are used separately depending on the specific needs of
the application. | {"url":"https://www.physicsforums.com/threads/basic-question-about-td-and-lc.585764/","timestamp":"2024-11-02T18:24:24Z","content_type":"text/html","content_length":"141716","record_id":"<urn:uuid:e434434a-31d6-4b6b-adef-ef1e35141144>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00580.warc.gz"} |
Category: SciTechTalk Articles
Published Date
Written by admin
Hits: 8053
The World's Most Beautiful Equations
Numerical beauty
Mathematical equations aren't just useful — many are quite beautiful. And many scientists admit they are often fond of particular formulas not just fortheir function, but for their form, and the
simple, poetic truths they contain.
While certain famous equations, such as Albert Einstein's E = mc^2, hog most of the public glory, many less familiar formulas have their champions among scientists. physicists, astronomers and
mathematicians show their favorite equations:
General relativity
The equation above was formulated by Einstein as part of his groundbreaking general theoryof relativity in 1915. The theory revolutionized how scientists understood gravity by describing the force as
a warping of the fabric of space and time.
"It is still amazing to me that one such mathematical equation can describe what space-time is all about," said Space Telescope Science Institute astrophysicist Mario Livio, who nominated the
equation as his favorite. "All of Einstein's true genius is embodied in this equation." [Einstein Quiz: Test Your Knowledge of the Genius]
"The right-hand side of this equation describes the energy contents of our universe (including the 'dark energy' that propels the current cosmic acceleration)," Livio explained. "The left-hand side
describes the geometry of space-time. The equality reflects the fact that in Einstein's general relativity, mass and energy determine the geometry, and concomitantly the curvature, which is a
manifestation of what we call gravity." [6 Weird Facts About Gravity]
"It's a very elegant equation," said Kyle Cranmer, a physicist at New York University, adding that the equation reveals the relationship between space-time and matter and energy. "This equation tells
you how they are related — how the presence of the sun warps space-time so that the Earth moves around it in orbit, etc. It also tells you how the universe evolved since the Big Bang and predicts
that there should be black holes."
Standard model
Another of physics' reigning theories, the standard model describes the collection of fundamental particles currently thought to make up our universe.
The theory can be encapsulated in a main equation called the standard model Lagrangian (named after the 18th-century French mathematician and astronomer Joseph Louis Lagrange), which was chosen by
theoretical physicist Lance Dixon of the SLAC National Accelerator Laboratory in California as his favorite formula.
"It has successfully described all elementary particles and forces that we've observed in the laboratory to date — except gravity," Dixon told. "That includes, of course, the recently discovered
Higgs(like) boson, phi in the formula. It is fully self-consistent with quantum mechanics and special relativity."
The standard model theory has not yet, however, been united with general relativity, which is why it cannot describe gravity. [Infographic: The Standard Model Explained]
While the first two equations describe particular aspects of our universe, another favorite equation can be applied to all manner of situations. The fundamentaltheorem of calculus forms the backbone
of the mathematical method known as calculus, and links its two main ideas, the concept of the integral and the concept of the derivative.
"In simple words, [it] says that the net change of a smooth and continuous quantity, such as a distance travelled, over a given time interval (i.e. the difference in the values of the quantity at the
end points of the time interval) is equal to the integral of the rate of change of that quantity, i.e. the integral of the velocity," said Melkana Brakalova-Trevithick, chair of the math department
at Fordham University, who chose this equation as her favorite. "The fundamental theorem of calculus (FTC) allows us to determine the net change over an interval based on the rate of change over the
entire interval."
The seeds of calculus began in ancient times, but much of it was put together in the 17th century by Isaac Newton, who used calculus to describe the motions of the planets around the sun.
Pythagorean theorem
An "oldie but goodie" equation is the famous Pythagorean theorem, which every beginning geometry student learns.
This formula describes how, for any right-angled triangle, the square of the length of the hypotenuse (the longest side of a right triangle) equals the sum of the squares of the lengths of the other
two sides.
"The very first mathematical fact that amazed me was Pythagorean theorem," said mathematician Daina Taimina of Cornell University. "I was a child then and it seemed to me so amazing that it works in
geometry and it works with numbers!" [5 Seriously Mind-Boggling Math Facts]
Euler's equation
This simple formula encapsulates something pure about the nature of spheres:
"It says that if you cut the surface of a sphere up into faces, edges and vertices, and let F be the number of faces, E the number of edges and V the number of vertices, you will always get V – E + F
= 2," said Colin Adams, a mathematician at Williams College in Massachusetts.
"So, for example, take a tetrahedron, consisting of four triangles, six edges and four vertices," Adams explained. "If you blew hard into a tetrahedron with flexible faces, you could round it off
into a sphere, so in that sense, a sphere can be cut into four faces, six edges and four vertices. And we see that V – E + F = 2. Same holds for a pyramid with five faces — four triangular, and one
square — eight edges and five vertices," and any other combination of faces, edges and vertices.
"A very cool fact! The combinatorics of the vertices, edges and faces is capturing something very fundamental about the shape of a sphere," Adams said.
Special relativity
Einstein makes the list again with his formulas for special relativity, which describes how time and space aren't absolute concepts, but ratherare relative depending on the speed of the observer. The
equation above shows how time dilates, or slows down, the faster a person is moving in any direction.
"The point is it's really very simple," said Bill Murray, a particle physicist at the CERN laboratory in Geneva. "There is nothing there an A-level student cannot do, no complex derivatives and trace
algebras. But what it embodies is a whole new way of looking at the world, a whole attitude to reality and our relationship to it. Suddenly, the rigid unchanging cosmos is swept away and replaced
with a personal world, related to what you observe. You move from being outside the universe, looking down, to one of the components inside it. But the concepts and the maths can be grasped by anyone
that wants to."
Murray said he preferred the special relativity equations to the more complicated formulas in Einstein's later theory. "I could never follow the maths of general relativity," he said.
1 = 0.999999999….
This simple equation, which states that the quantity 0.999, followed by an infinite string of nines, is equivalent to one, is the favorite of mathematicianSteven Strogatz of Cornell University.
"I love how simple it is — everyone understands what it says — yet how provocative it is," Strogatz said. "Many people don't believe it could be true. It's also beautifully balanced. The left side
represents the beginning of mathematics; the right side represents the mysteries of infinity."
Euler–Lagrange equations and Noether's theorem
"These are pretty abstract, but amazingly powerful," NYU's Cranmer said. "The cool thing is that this way of thinking about physicshas survived some major revolutions in physics, like quantum
mechanics, relativity, etc."
Here, L stands for the Lagrangian, which is a measure of energy in a physical system, such as springs, or levers or fundamental particles. "Solving this equation tells you how the system will evolve
with time," Cranmer said.
A spinoff of the Lagrangian equation is called Noether's theorem, after the 20th-century German mathematician Emmy Noether. "This theorem is really fundamental to physics and the role of symmetry,"
Cranmer said. "Informally, the theorem is that if your system has a symmetry, then there is a corresponding conservation law. For example, the idea that the fundamental laws of physics are the same
today as tomorrow (time symmetry) implies that energy is conserved. The idea that the laws of physics are the same here as they are in outer space implies that momentum is conserved. Symmetry is
perhaps the driving concept in fundamental physics, primarily due to [Noether's] contribution."
Callan-Symanzik Equation
"The Callan-Symanzik equation is a vital first-principles equation from 1970, essential for describing how naive expectations will fail in a quantum world," said theoretical physicist Matt Strassler
of Rutgers University.
The equation has numerous applications, including allowing physicists to estimate the mass and size of the proton and neutron, which make up the nuclei of atoms.
Basic physics tells us that the gravitational force, and the electrical force, between two objects is proportional to the inverse of the distance between them squared. On a simple level, the same is
true for the strong nuclear force that binds protons and neutrons together to form the nuclei of atoms, and that binds quarks together to form protons and neutrons. However, tiny quantum fluctuations
can slightly alter a force's dependence on distance, which has dramatic consequences for the strong nuclear force.
"It prevents this force from decreasing at long distances, and causes it to trap quarks and to combine them to form the protons and neutrons of our world," Strassler said. "What the Callan-Symanzik
equation does is relate this dramatic and difficult-to-calculate effect, important when [the distance] is roughly the size of a proton, to more subtle but easier-to-calculate effects that can be
measured when [the distance] is much smaller than a proton."
The minimal surface equation
"The minimal surface equation somehow encodes the beautiful soap films that form on wire boundaries when you dip them in soapy water," saidmathematician Frank Morgan of Williams College. "The fact
that the equation is 'nonlinear,' involving powers and products of derivatives, is the coded mathematical hint for the surprising behavior of soap films. This is in contrast with more familiar linear
partial differential equations, such as the heat equation, the wave equation, and the Schrödinger equation of quantum physics."
The Euler line
Glen Whitney, founder of the Museum of Math in New York, chose another geometrical theorem, this one having to do with the Euler line, named after 18th-centurySwiss mathematician and physicist
Leonhard Euler.
"Start with any triangle," Whitney explained. "Draw the smallest circle that contains the triangle and find its center. Find the center of mass of the triangle — the point where the triangle, if cut
out of a piece of paper, would balance on a pin. Draw the three altitudes of the triangle (the lines from each corner perpendicular to the opposite side), and find the point where they all meet. The
theorem is that all three of the points you just found always lie on a single straight line, called the 'Euler line' of the triangle."
Whitney said the theorem encapsulates the beauty and power of mathematics, which often reveals surprising patterns in simple, familiar shapes.
From Max Tegmarks's book "Our Mathematical Universe":
Tree of Theories (Max Tegmark in "Our Mathematical Universe"):
Scale of the Universe mapped to the branches of science: | {"url":"http://www.scitechtalk.org/scitechtalk-science-news-feeds/sciencedaily-feeds/77-scitechtalk-articles.html","timestamp":"2024-11-07T12:35:37Z","content_type":"application/xhtml+xml","content_length":"171086","record_id":"<urn:uuid:5efde5d2-0ec4-4ff1-ac7e-9e9401508d86>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00604.warc.gz"} |
Test Prep for AP® Courses
16.1 Hooke’s Law: Stress and Strain Revisited
Which of the following represents the distance (how much ground the particle covers) moved by a particle in a simple harmonic motion in one time period? (Here, A represents the amplitude of the
a. 0 cm
b. A cm
c. 2A cm
d. 4A cm
A spring has a spring constant of 80 N∙m^−1. What is the force required to (a) compress the spring by 5 cm and (b) expand the spring by 15 cm?
In the formula $F = −kx F = −kx$, what does the minus sign indicate?
a. It indicates that the restoring force is in the direction of the displacement.
b. It indicates that the restoring force is in the direction opposite the displacement.
c. It indicates that mechanical energy in the system decreases when a system undergoes oscillation.
d. None of the above
The splashing of a liquid resembles an oscillation. The restoring force in this scenario will be due to which of the following?
a. Potential energy
b. Kinetic energy
c. Gravity
d. Mechanical energy
16.2 Period and Frequency in Oscillations
A mass attached to a spring oscillates and completes 50 full cycles in 30 s. What is the time period and frequency of this system?
16.3 Simple Harmonic Motion: A Special Periodic Motion
Use these figures to answer the following questions.
a. Which of the two pendulums oscillates with larger amplitude?
b. Which of the two pendulums oscillates at a higher frequency?
A particle of mass 100 g undergoes a simple harmonic motion. The restoring force is provided by a spring with a spring constant of 40 N∙m^−1. What is the period of oscillation?
a. 10 s
b. 0.5 s
c. 0.1 s
d. 1
The graph shows the simple harmonic motion of a mass m attached to a spring with spring constant k.
What is the displacement at time 8π?
a. 1 m
b. 0 m
c. Not defined
d. −1 m
A pendulum of mass 200 g undergoes simple harmonic motion when acted upon by a force of 15 N. The pendulum crosses the point of equilibrium at a speed of 5 m∙s^−1. What is the energy of the pendulum
at the center of the oscillation?
16.4 The Simple Pendulum
A ball is attached to a string of length 4 m to make a pendulum. The pendulum is placed at a location that is away from Earth’s surface by twice the radius of Earth. What is the acceleration due to
gravity at that height and what is the period of the oscillations?
Which of the following gives the correct relation between the acceleration due to gravity and period of a pendulum?
a. $g = 2πL T 2 g = 2πL T 2$
b. $g = 4 π 2 L T 2 g = 4 π 2 L T 2$
c. $g = 2πL T g = 2πL T$
d. $g = 2 π 2 L T g = 2 π 2 L T$
Tom has two pendulums with him. Pendulum 1 has a ball of mass 0.1 kg attached to it and has a length of 5 m. Pendulum 2 has a ball of mass 0.5 kg attached to a string of length 1 m. How does mass of
the ball affect the frequency of the pendulum? Which pendulum will have a higher frequency and why?
16.5 Energy and the Simple Harmonic Oscillator
A mass of 1 kg undergoes simple harmonic motion with amplitude of 1 m. If the period of the oscillation is 1 s, calculate the internal energy of the system.
16.6 Uniform Circular Motion and Simple Harmonic Motion
In the equation $x = A sin wt, x = A sin wt,$ what values can the position $x x$ take?
a. −1 to +1
b. –A to +A
c. 0
d. –t to t
16.7 Damped Harmonic Motion
The non-conservative damping force removes energy from a system in which form?
a. Mechanical energy
b. Electrical energy
c. Thermal energy
d. None of the above
The time rate of change of mechanical energy for a damped oscillator is always:
a. 0
b. Negative
c. Positive
d. Undefined
A 0.5-kg object is connected to a spring that undergoes oscillatory motion. There is friction between the object and the surface it is kept on given by coefficient of friction $μ k = 0.06 μ k = 0.06$
. If the object is released 0.2 m from equilibrium, what is the distance that the object travels? Given that the force constant of the spring is 50 N m^-1 and the frictional force between the objects
is 0.294 N.
16.8 Forced Oscillations and Resonance
How is constant amplitude sustained in forced oscillations?
16.9 Waves
What is the difference between the waves coming from a tuning fork and electromagnetic waves?
Represent longitudinal and transverse waves in a graphical form.
Why is the sound produced by a tambourine different from that produced by drums?
A transverse wave is traveling left to right. Which of the following is correct about the motion of particles in the wave?
a. The particles move up and down when the wave travels in a vacuum.
b. The particles move left and right when the wave travels in a medium.
c. The particles move up and down when the wave travels in a medium.
d. The particles move right and left when the wave travels in a vacuum.
The graph shows propagation of a mechanical wave. What is the wavelength of this wave?
16.10 Superposition and Interference
A guitar string has a number of frequencies at which it vibrates naturally. Which of the following is true in this context?
a. The resonant frequencies of the string are integer multiples of fundamental frequencies.
b. The resonant frequencies of the string are not integer multiples of fundamental frequencies.
c. They have harmonic overtones.
d. None of the above
Explain the principle of superposition with figures that show the changes in the wave amplitude.
In this figure which points represent the points of constructive interference?
a. A, B, F
b. A, B, C, D, E, F
c. A, C, D, E
d. A, B, D
A string is fixed on both sides. It is snapped from both ends at the same time by applying an equal force. What happens to the shape of the waves generated in the string? Also, will you observe an
overlap of waves?
In the preceding question, what would happen to the amplitude of the waves generated in this way? Also, consider another scenario where the string is snapped up from one end and down from the other
end. What will happen in this situation?
Two sine waves travel in the same direction in a medium. The amplitude of each wave is A, and the phase difference between the two is 180°. What is the resultant amplitude?
a. 2A
b. 3A
c. 0
d. 9A
Standing wave patterns consist of nodes and antinodes formed by repeated interference between two waves of the same frequency traveling in opposite directions. What are nodes and antinodes and how
are they produced? | {"url":"https://texasgateway.org/resource/test-prep-apr-courses-51?book=79096&binder_id=78586","timestamp":"2024-11-04T18:01:32Z","content_type":"text/html","content_length":"125238","record_id":"<urn:uuid:ed1b6d5f-db5f-4a82-81b5-3ac2171f63c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00666.warc.gz"} |
D. J. Bernstein
Integer factorization
Circuits for integer factorization
Background: RSA and the number field sieve
Large integers
Background: What o(1) means
The cost of NFS for very large integers
The balance between sieving and linear algebra
Calculating the cost exponents
Historical notes on the cost-effectiveness of circuit NFS
Historical notes on circuits for linear algebra
3 versus 1.17
Small integers
Figuring out the exact cost of NFS
The most serious Lenstra-Shamir-Tomlinson-Tromer error
Historical notes on mesh routing in NFS
General concepts
The notion of computation cost
Cost versus operations | {"url":"https://cr-yp-to.viacache.net/nfscircuit.html","timestamp":"2024-11-08T19:07:55Z","content_type":"text/html","content_length":"1645","record_id":"<urn:uuid:bd69252e-b870-42ea-92bf-8606463b2503>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00750.warc.gz"} |
CCWI2017: F10 'Online Demand Estimation of Geographical and Non-Geographical Distributed Demand Pattern in Water Distribution Networks'
journal contribution
posted on 2017-09-01, 15:19 authored by Nhu Cuong Do, Angus R. Simpson, Jochen W. Deurelein, Olivier Piller
The issue of demand calibration and estimation under uncertainty is known to be an exceptionally difficult problem in water distribution system modelling. In the context of real-time event modelling,
the stochastic behaviour of the water demands and non-geographical distribution of the demand patterns makes it even more complicated. This paper considers a predictor – corrector approach,
implemented by a particle filter model, for solving the problem of demand multiplier factor estimation. A demand forecasting model is used to predict the water demand multiplier factors. The EPANET
hydraulic solver is applied to simulate the hydraulic behaviour of a water network. Real time observations are integrated via a formulation of the particle filter model to correct the demand
predictions. A water distribution network of realistic size with two configurations of demand patterns (geographically distributed demand patterns and non-geographically distributed demand patterns)
are used to evaluate the particle filter model. Results show that the model is able to provide good estimation of the demand multiplier factors in a near real-time context if the measurement errors
are small. Large measurement errors may result in inaccurate estimates of the demand values. | {"url":"https://orda.shef.ac.uk/articles/journal_contribution/CCWI2017_F10_Online_Demand_Estimation_of_Geographical_and_Non-Geographical_Distributed_Demand_Pattern_in_Water_Distribution_Networks_/5364133/1","timestamp":"2024-11-14T11:34:29Z","content_type":"text/html","content_length":"131158","record_id":"<urn:uuid:12b00dbc-fc1e-4bd8-aa09-1069cfc040ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00741.warc.gz"} |
Inertial vs. Gravitational Mass - Knowunity
Inertial vs. Gravitational Mass: AP Physics 1 Study Guide
Greetings, budding physicists! Today, we’ll dive into the gravity-defying (and sometimes head-scratching) world of inertial and gravitational mass. We promise there will be no heavy lifting
involved—just a lighthearted journey through some fascinating concepts. Grab your imaginary lab coats and let’s embark on an adventure through space and physics! 🌌
The Lowdown on Mass: Inertial vs. Gravitational
Not all "mass" is the same! In physics, we encounter the terms 'inertial mass' and 'gravitational mass,' and while they are equal, they measure different properties. Think of them as the Marvel
superheroes of the physics universe—each with unique powers but still closely related (no Infinity Stones needed here)!
Gravitational Mass
Gravitational mass is the property that determines how much gravitational force an object can exert or experience. You can think of it as how strong an object’s "gravitational handshake" is with
another object. The greater the gravitational mass, the stronger the handshake (and the messier the paperwork with gravitation lawyers).
Near Earth's surface, all objects fall with the same acceleration in a vacuum. This might sound counterintuitive, but whether it's a penny or a piano, they both drop as if gravity is offering them an
express elevator ride down! 🎢
Inertial Mass
Inertial mass is the star pupil of Newton's second law of motion (F = ma). It measures how much force is needed to change the state of motion of an object. Imagine trying to drag your dog to the vet
versus a cat—your dog’s resistance gives you a real workout! Similarly, a bowling ball (just nod and imagine) has more inertial mass than a feather, so it takes more force to get it moving. 🏋️♂️
Newton Might Drop the Mic Here: Gravitational Force
Gravitational mass isn't about how easily an object can move but about how strongly it's pulled or pulls by gravity. Newton’s Universal Law of Gravitation tells us that the gravitational force
between two objects depends on their gravitational masses and the distance between them—like long-distance relationships but with a mathematical twist! 🌍💫
Da Vinci? Try David Scott!
To illustrate that inertial and gravitational masses are equal, we remember the iconic Apollo 15 experiment by astronaut David Scott, who dropped a feather and a hammer simultaneously on the Moon.
Without an annoying atmosphere messing things up, both objects hit the lunar surface at the same time. Even space hammers prefer company on the way down! 👨🚀🔨
Equivalence Principle: A Physics Bromance
Despite measuring different properties, inertial and gravitational masses always show up with the same value. This surprising fact led to some big-name theories, like Einstein’s General Relativity.
So, whether you’re calculating your physics homework or puzzling over cosmological conundrums, knowing these masses are equivalent is pretty handy.
The Gravitational Force Conundrum
In a vacuum, all objects fall at the same rate due to uniform acceleration by gravity. However, gravitational force (F = Gm1m2/r^2) doesn't stay cozy and constant; it depends on the masses and the
distances involved. Just remember, while the acceleration due to gravity (g) depends only on the planet’s mass (and stays about 9.8 m/s² near the Earth), the force felt by your gym weights is a
different beast altogether!
Conservation of Mass and Beyond
In a closed system, the total matter (inertial or gravitational) remains unchanged. This means no sneaky molecule escapes or appears out of nowhere—except maybe in science fiction movies. 🌌🔭 This is
known as the Conservation of Mass, which is as reassuring as having a constant supply of pizza during an all-nighter.
Key Terms to Review
• Acceleration due to Gravity: The magical number (9.8 m/s² near Earth) that makes any dropped object enthusiastically head for the ground.
• Apollo 15 Experiment: Astrophysicist David Scott’s moon-drop bonanza proving all objects fall at the same rate in a vacuum.
• F=ma: Newton’s blockbuster sequel that explains how much force is needed to move that lazy bowling ball (or cat).
• Gravitational Mass: Measures the gravity handshake—how strongly gravitational forces are felt or exerted by an object.
• Inertial Mass: The stalwart of resistance: how much force is needed to change an object’s motion.
• Newton's Laws: Rulebooks laying down the law on motion and gravity.
• Vacuum Chamber: All air sucked out, just pure physics playground—ideal for no-atmosphere experiments.
There you have it! A journey through gravitational and inertial masses without the need for a space suit. We’ve navigated the physics universe and proven that, whether on Earth, the Moon, or in the
theoretical realms, the principles of mass and gravity remain our trusty guides. Stay curious, keep questioning, and let physics light up your path (or at least your next exam)! 🌟
Good luck, future physicists! May the force (inertial and gravitational) be with you! 🚀 | {"url":"https://knowunity.com/subjects/study-guide/inertial-vs-gravitational-mass","timestamp":"2024-11-12T12:37:12Z","content_type":"text/html","content_length":"227935","record_id":"<urn:uuid:40681def-3edf-4e74-84f7-bce1685eb787>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00162.warc.gz"} |
Plotting Graph 9750Gii - Casio CFX/AFX/FX/Prizm
I have a fx-9750GII calculator and I can't make graph ("syntax error") if I use suma or integral operation in syntax.
for example syntax: ∑(K*X,K,0,10,1)
But this syntax works in RUN.MAT mode (variable X replace by a number). Is there a any solution? Thanks | {"url":"https://community.casiocalc.org/topic/7509-plotting-graph-9750gii/","timestamp":"2024-11-12T16:10:43Z","content_type":"text/html","content_length":"152498","record_id":"<urn:uuid:adc1316b-e62b-46a4-a83b-3835db3742c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00178.warc.gz"} |
300 meters to feet to yards
Calculate Volume of Square Slab Calculator Use. 300m in yd? 0.186 miles The difference is in the .1, which is easy to calculate. Since 1959, a yard has been defined as exactly 0.9144 meters. Simply
use our calculator above, or apply the formula to change the weight 300 m to ft. So, if you want to calculate how many yards are 300 meters … The United States is one notable exception in that it
largely uses US customary units such as yards, inches, feet, and miles instead of meters in everyday use. How tall? Meters and yards are almost the exact same length. 1 decade ago. It is also equal
to 3 feet, or 36 inches. 1 0. How many yd are there in 300 m? 300 Meters to Yards Conversion breakdown and explanation 300 m to yd conversion result above is displayed in three different forms: as a
decimal (which could be rounded), in scientific notation (scientific form, standard index form or standard form in the United Kingdom) and as a fraction (exact result). How far is 300 feet in yards?
Since 1983, the metre has been officially defined as the length of the path travelled by light in a vacuum during a time interval of 1/299,792,458 of a second. 0.186 miles. Brodie. 300 meters is
approximately the same as: 984 feet. History/origin: The origin of the yard as a unit is unclear. 900 feet about 3/4 of the way around a track. swap units ↺ Amount. 1 meter is equal to 3.28084 feet:
1 m = (1/0.3048) ft = 3.28084 ft How far is 300 meters in yards? Miiguel. Definition: A yard (symbol: yd) is a unit of length in both the imperial and US customary systems of measurement. Definition:
A yard (symbol: yd) is a unit of length in both the imperial and US customary systems of measurement. Yard. Yard. 0 0. Since 1959, a yard has been defined as exactly 0.9144 meters. To convert yards
and feet to meters, convert yard and feet values into meters separately and add them. How long? 328 yards. Converting 300 m to ft is easy. How to convert meters to feet. How to convert 300 meters to
yards To convert 300 m to yards you have to multiply 300 x 1.09361, since 1 m is 1.09361 yds . Meters. 300 Feet = 100 Yards (exact result) Display result as. Calculate volumes for concrete slabs,
walls, footers, columns, steps, curbs and gutters. you would do 3/4 of one lap. To. What's the length of 300 meters in yards? 300 ft to yd conversion. 1 decade ago. 300 meters is approximately the
same as: 984 feet. 15 meters = 15 yards … 1 m is equivalent to 1.0936 yards, or 39.370 inches. 328 yards. Calculate between meters and yards. What is 300 meters in yards? Enter dimensions in US units
(inches or feet) or metric units (centimeters or meters) of your concrete structure to get the cubic yards value of the amount of concrete you will need to make this structure. For example, estimate
the number of yards in 15 meters. From. Meters (m) to feet (ft) conversion calculator and how to convert. 300 meters equal 984.251968504 feet (300m = 984.251968504ft). Anonymous. For example, to
convert 2 yards and 5 feet into meters, multiply 2 by 0.9144 and 5 by 0.3048 and add the results, that makes 2 yards and 5 feet equal to 3.352 meters. Meters to Feet formula Since anything multiplied
by 1 is itself (∗ =), all you need to do is find the .1 portion and add it to the number of meters. 1 0. sooo... 3/4 of the way around the track. ... 3/4 of the way around the track 15 meters of
length in both imperial. ( ft ) conversion calculator and how to convert yards and feet to meters, convert yard and values... Symbol: yd ) is a unit of length in both the imperial and customary... In
yards meters ( m ) to feet ( 300m = 984.251968504ft.... To 3 feet, or 39.370 inches formula meters ( m ) to feet meters... 0.186 miles 300 meters equal 984.251968504 feet ( 300m = 984.251968504ft )
equivalent to 1.0936 yards, 36. Of 300 meters is approximately the same as: 984 feet yards in 15 meters imperial US! 300 feet = 100 yards ( exact result ) Display result as ( symbol: yd ) is unit! A
track steps, curbs and gutters to meters, convert yard and feet values into meters separately add! Is in the.1, which is easy to calculate m ) to feet formula meters ( m to. To meters, convert yard
and feet to meters, convert yard and feet to meters, convert and... In yards to feet formula meters ( m ) to feet ( )... Display result as to 3 feet, or 39.370 inches 984 feet origin of way.: a yard
( symbol: yd ) is a unit is.! Since 1959, a yard has been defined as exactly 0.9144 meters and yards are almost the same. Yards are almost the exact same length also equal to 3 feet, or inches. Has
been defined as exactly 0.9144 meters 984 feet, which is easy to.... The origin of the way around a track the.1, which is easy to calculate 1959..., columns, steps, curbs and gutters feet = 100 yards
( exact result Display... Same as: 984 feet the.1, which is easy to calculate feet to meters, yard! Meters separately and add them to 3 feet, or 36 inches 1959, yard... 100 yards ( exact result )
Display result as is also equal to 3 feet, or 36.! And feet values into meters separately and add them the exact same.. Into meters separately and add them since 1959, a yard has been as! In 15
meters exact same length = 100 yards ( exact result ) Display result as m ) to (. 3/4 of the yard as a unit of length in both the imperial and US customary systems of.! Of yards in 15 meters columns,
steps, curbs and gutters of 300 meters in yards is equivalent 1.0936... 39.370 inches 36 inches what 's the length of 300 meters is approximately same. 984 feet example, estimate the number of yards
in 15 meters to calculate defined exactly! The yard as a unit of length in both the imperial and US customary of! 984 feet of the way around the track is unclear 's the length of meters! 3/4 of the
yard as a unit of length in both the imperial US... 100 yards ( exact result ) Display result as origin of the way around track. To meters, convert yard and feet to meters, convert yard and feet to
meters convert.: the origin of the yard as a unit is unclear curbs and gutters how to convert and! In the.1, which is easy to calculate is also equal to 3 feet, or 39.370.. Yard as a unit of length
in both the imperial and US customary systems of measurement feet into... M ) to feet formula meters ( m ) to feet ( 300m = 984.251968504ft.! Meters is approximately the same as: 984 feet formula
meters ( m ) to feet formula meters m... Miles 300 meters is approximately the same as: 984 feet almost the same... Meters to feet formula meters ( m ) to feet ( 300m = 984.251968504ft ) to... Or 36
inches equal to 3 feet, or 39.370 inches footers, columns, steps, and! Feet values into meters separately and add them.1, which is easy calculate... The yard as a unit of length in both the imperial
and US customary systems of measurement the is... Since 1959, a yard ( symbol: yd ) is a unit of length in both the imperial US... As exactly 0.9144 meters, curbs and gutters feet to meters, convert
yard and feet values into separately! And how to convert yards and feet to meters, convert yard feet. Defined as exactly 0.9144 meters meters, convert yard and feet values into meters separately and
them! A unit of length in both the imperial and US customary systems of measurement 984.251968504ft ) customary of! 'S the length of 300 meters is approximately the same as: 984 feet as! Yard as a
300 meters to feet to yards is unclear Display result as, estimate the number yards. Same length equivalent to 1.0936 yards, or 36 inches imperial and US customary systems of measurement defined!
Result as difference is in the.1, which is easy to.! Feet about 3/4 of the yard as a unit of length in the... Customary systems of measurement slabs, walls, footers, columns, steps, curbs and gutters
difference in. Or 39.370 inches as a unit of length in both the imperial and US systems... M ) to feet ( 300m = 984.251968504ft ) feet to meters, convert yard feet! To feet formula meters ( m ) to
feet ( ft ) conversion calculator how..., convert yard and feet values into meters separately and add them slabs, walls,,... ) Display result as definition: a yard has been defined as exactly 0.9144
meters as a is... 'S the length of 300 meters equal 984.251968504 feet ( ft ) conversion calculator and how to convert how convert... = 984.251968504ft ) of measurement origin of the yard as a unit
length. ( ft ) conversion calculator and how to convert yards and feet to meters convert. Sooo... 3/4 of the way around the track estimate the number yards...... 3/4 of the way around the track feet
( 300m = 984.251968504ft ), footers, columns steps. Yard and feet to meters, convert yard and feet values into meters separately add. 900 feet about 3/4 of the way around a track footers,
columns,,... Columns, steps, curbs and gutters to 1.0936 yards, or 39.370.... Example, estimate the number of yards in 15 meters as: 984 feet calculate volumes for slabs... And US customary systems
of measurement 's the length of 300 meters approximately. ( symbol: yd ) is a unit of length in both the imperial US... In 15 meters, or 39.370 inches the origin of the yard as unit! Easy to
calculate and yards are almost the exact same length to 3 300 meters to feet to yards, or inches. Systems of measurement and yards are almost the exact same length of the yard as a unit length!
Footers, columns, steps, curbs and gutters 300 feet = 100 yards ( exact )... To 3 feet, or 36 inches 39.370 inches result as origin of the around... A track 3/4 of the yard as a unit of 300 meters to
feet to yards in both the imperial and US customary of... Also equal to 3 feet, or 36 inches, steps, curbs and gutters... of!, curbs and gutters feet ( 300m = 984.251968504ft ) is also equal to 3
feet, or 39.370.. Conversion calculator and how to convert result ) Display result as to meters, convert and!, which is easy to calculate, curbs and gutters walls, footers, columns steps... Way
around the track = 984.251968504ft ) 1 m is equivalent to yards! In the.1, which is easy to calculate = 100 yards ( exact )..., curbs and gutters example, estimate the number of yards in 15 meters
walls,,... ) is a unit is unclear in both the imperial and US customary systems measurement! Yards and feet to meters, convert yard and feet values into meters separately and add them steps... ( 300m
= 984.251968504ft ) in both the imperial and 300 meters to feet to yards customary systems of measurement 300 meters equal 984.251968504 (... How to convert 39.370 inches equal 984.251968504 feet (
300m = 984.251968504ft ) what 's the length 300... Origin of the way around a track, steps, curbs and gutters also equal 300 meters to feet to yards! And US customary systems of measurement feet, or
36 inches meters and... As a unit of length in both the imperial and US customary systems of measurement a..., walls, footers, columns, steps, curbs and gutters about 3/4 of way!, footers, columns,
steps, curbs and gutters feet ( ft conversion. 300 meters equal 984.251968504 feet ( 300m = 984.251968504ft ) or 36 inches conversion and! And US customary systems of measurement difference is in
the.1, which is easy to calculate conversion calculator how. Are almost the exact same length Display result as the difference is the... In the.1, which is easy to calculate into meters separately
and add.... A unit of length in both the imperial and US customary systems of measurement way... Conversion calculator and how to convert into meters separately and add them customary of... Yards in
15 meters, convert yard and feet to meters, convert yard and feet to,! | {"url":"http://www.mamie-vintage.com/9b4en/97b7ea-300-meters-to-feet-to-yards","timestamp":"2024-11-05T06:18:53Z","content_type":"text/html","content_length":"19025","record_id":"<urn:uuid:36dbbefd-1b7f-41ed-b5e6-8331c7c90053>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00736.warc.gz"} |
Pi Photo: Mind blown
When I saw this my mind was seriously blown away!
This Pi bức ảnh might contain anime, truyện tranh, phim hoạt hình, and manga.
Long irritating video with the digits of pi scrolling on the screen.
Yeah, it's poorly put together, but it's so cute... right?
Old Pi-related video from way back when. Slightly frightening. | {"url":"https://vi.fanpop.com/clubs/pi/images/30445372/title/mind-blown-photo","timestamp":"2024-11-01T23:35:54Z","content_type":"text/html","content_length":"90100","record_id":"<urn:uuid:629c295d-955d-4d16-b204-33cee468cb89>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00724.warc.gz"} |
A simple and effective inverse projection scheme for void distribution control in topology optimization
The ability to control both the minimum size of holes and the minimum size of structural members are essential requirements in the topology optimization design process for manufacturing. This paper
addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size while a standard
direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme combining both contrasting requirements simultaneously is discussed. Two topology
optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration; and the other in which a two-phase scheme is explored. In the first
phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained while
satisfying either the minimum member size or minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
All Science Journal Classification (ASJC) codes
• Software
• Control and Systems Engineering
• Computer Science Applications
• Computer Graphics and Computer-Aided Design
• Control and Optimization
• Combined projection
• Direct projection
• Inverse projection
• Projection functions
• Topology optimization
• Two-phase optimization
Dive into the research topics of 'A simple and effective inverse projection scheme for void distribution control in topology optimization'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/a-simple-and-effective-inverse-projection-scheme-for-void-distrib","timestamp":"2024-11-13T00:20:20Z","content_type":"text/html","content_length":"51109","record_id":"<urn:uuid:b3ea7f3e-415d-4f7e-9ef3-b9e9e2236091>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00155.warc.gz"} |
More than numbers: WikiProject Mathematics
WikiProject report
More than numbers: WikiProject Mathematics
The French mathematician Blaise Pascal, well known for Pascal's triangle, is a Featured article of the project
A 3D projection of a 4D-cube performing a simple rotation about a plane which bisects the figure, is one of 29 Featured pictures of WikiProject Mathematics
This photo of the German Lorenz cipher machine, used in World War II to encrypt very high-level general staff messages, is used to illustrate the cryptography Featured article
The Fields Medal, awarded by the International Mathematical Union, is a Featured picture which shows a bas relief of Archimedes, another Featured article
This week, we take our first in-depth look at WikiProject Mathematics. Started in November 2002 by Chas zzz brown, it is one of the top 15 most active projects, and has 331 members. The project is
home to 24 Featured articles, 3 Featured lists, 35 Good articles, 17 Featured pictures and a Featured portal – with a total of 8,850 assessed articles. The Signpost interviewed seven project members.
Charles Matthews has been on Wikipedia since 2003, and has a doctorate in mathematics; Jakob.scholbach joined around 2005, working mainly on frequently viewed mathematics articles; Geometry guy was
invited to join the project in 2007, and became involved in assessment and review; Ozob is a professional mathematician who joined in 2008 and has a soft spot for calculus articles; computer
scientist and mathematician David Eppstein, who has his own article in Wikipedia, joined in 2006; Kiefer.Wolfowitz is a statistician who joined in 2009; while CBM became involved because he "found
the idea of a public, comprehensive, free reference of that sort, very exciting".
Although the project has some 8,850 assessed articles, there are actually over 25,400 articles associated with it. We asked how project members keep up with all these, and if there are any plans to
assess the other articles. According to CBM, keeping up with such a large number of articles is a daunting task, "Fortunately, some of the original members of the project set up useful bots that
index our articles automatically. The List of mathematics articles and List of mathematicians are maintained by a bot, and these lists are used to create Wikipedia:WikiProject Mathematics/Current
activity. These tools have allowed us to track the huge number of math articles without relying on manual effort or talk page tags. The main limitation in going beyond tracking to actual editing is
the ratio of editor time to the number of articles needing improvement." Project members, including Ozob and David, watch the Current activity list, while Kiefer works to provide references and short
improvements to core articles, while developing a few articles.
Geometry guy contributed to the assessment of around 2,000 mathematics articles in 2007, but believes that "doing more than this would require substantial concerted editorial efforts that could be
better applied to other goals." David sees assessment as "most useful at the top end, where GA and FA status provide recognition to excellent articles and at the bottom end, where the lists of stubs
and User:Mathbot/Most wanted redlinks are a good hunting ground for articles in severe need of improvement. For the rest, I'd rather spend my editing time improving article space rather than trying
to decide whether an article is really B or C class and exactly how important it is".
Of the 8,850 assessed articles, there are some 4,053 Start-class articles and 3,401 stubs. What is the project doing to advance these articles? For Jakob, many of these short articles just contain a
definition of some specific concept, "in which case, it is unlikely that they will get much longer anytime soon or at all. The main task for this type of articles is to provide accurate references;
here steady progress is made. From my personal experience, such highly specific topics actually tend to be more satisfactory than articles on broader mathematical subjects which require much more
expertise to write".
WikiProject Mathematics has numerous Featured content. How did the project achieve this and how can other projects work toward this? For Jakob, the project as a whole has an enjoyable, friendly
atmosphere: "I found that the usual procedures ensuring a sound article quality such as Peer review and Good article nomination, work well. On the other hand, most members of WikiProject Mathematics
don't seem to focus on working on recognized content. For example, Riemann hypothesis (worth $1.000.000) has been pushed to an FA-ish level by a group of editors, but was not nominated. Reviewers'
expectations at the FA candidacy tend to be quite high as far as the accessibility of scientific articles to the "general public" is concerned. This is often the most challenging bit in having a
successful GA or FA candidacy, given that most mathematical subjects rely on a rigorous, abstract language that is not part of usual daily life. One way to deal with this issue in a more systematic
way might be a "non-peer review", or just a forum for editors to meet mathematically untrained editors willing to work together on the accessibility of advanced scientific articles, outside the
rather hasty FA process."
According to Kiefer, "On Wikipedia, mathematical topics present few temptations for editors to engage in point-of-view editing or to make personal attacks; goodwill flows in discussions on exposition
(focusing on the public's needs) and scope (applications and generalizations). The cooperative atmosphere in Wikipedia is similar to that in the world of mathematics, and more generally, in
mathematical sciences such as computer science and statistics. At the end of a day of research or teaching, editing Wikipedia is a relaxing hobby for mathematical scientists."
"Articles on mathematics are under-represented in the GA and FA categories, which do have biographies of mathematicians. Improving mathematical articles to GA and FA status is especially challenging,
because of the demand that articles be accessible to the reading public. Perhaps Wikipedia should feature more good articles on important topics rather than excellent articles on minor topics and
trivia? Mathematical scientists worry that Wikipedia indulges in "slumming"—dumbing down its content and showing contempt for the public's intelligence and attention—neglecting the mission of true
encyclopedias, which has been and should remain enlightenment. Wikipedia should inform the populace rather than popularize infamy and so oppose the commercialism of the mass media," Kiefer added. He
further suggests that Wikipedia should highlight mathematics and science on its main page, "and refrain from promoting Pokémon, pornography, and professional wrestling on Did you know?."
CBM does not think the lack of FA nominations as necessarily negative: "It has been said that Wikipedia is not a unified work, it's a collection of mostly-independent specialist encyclopedias that
share goals and build on each other. Only a few members of the mathematics project have been active in nominating articles for FA status; the most recent promotion was Euclidean algorithm in 2009. I
don't view this as entirely a bad thing. On a site the size of Wikipedia, there will naturally be differing visions of what an ideal article should be. One of the strengths of Wikipedia is we
accommodate such a wide range of topics and writing styles." Charles believes that mathematics is different when it comes to featured content, "the FA criteria don't fit that well with what survey
articles in the subject typically try to do, and we are still wrestling with the consequences for exposition. It isn't easy".
We asked what the most pressing needs for WikiProject Mathematics are, and how a new contributor can help. Charles identifies four areas, "better biographies, particularly for mathematicians outside
the English-speaking world; connected historical coverage; referencing; and expository work, for topics up to first-year graduate level, through gradual expansion of material which although accurate,
may be "impacted" and short of standard motivating remarks and heuristics". Ozob says that the most important way someone can help is by notifying the project of what is not clear, and feels that
good exposition is their biggest problem, not just as a WikiProject but as a profession: "There's hardly any mathematical exposition for the layman anywhere. And that's despite there being some
fascinating stuff which can be explained in very elementary terms. ... The WikiProject has an especially big problem with math articles that have applications to the sciences and engineering.
Non-mathematicians frequently try to read those articles and end up stymied by thickets of abstraction, because we often discuss modern mathematical methods that are very abstract but very powerful.
But sometimes it's our fault; we think like mathematicians, our reliable sources are written by mathematicians, and without meaning to, we sometimes end up writing for mathematicians."
Kiefer suggests that mathematics instructors should consider donating lecture-notes to appropriate articles. "More generally, graphical donations enliven articles and increase their appeal.
Instructors in computational geometry and computer graphics should encourage their students to contribute, perhaps for course projects. Maybe the Wikipedia Foundation should give a special award
recognizing the graphical donations by David Eppstein, Oleg Alexandrov, and others?" he added. "WikiProject Statistics has hard-working cadre like Qwfp and Melcombe, and we all would welcome new
members. Personal invitations have recruited some great editors, but I am embarrassed at not having invited a professional colleague to join the project, yet! Perhaps the Wikipedia Foundation should
hire staff for the mathematics project, or project members could write a letter appealing for volunteers in the Notices of the American Mathematical Society?"
CBM's recommendation is for new editors to pick a stub article on a topic that they have some knowledge about, and expand it into something a little longer. For Geometry guy, "The project has done
really well in providing references for the expert, but we need to draw more readers in, while also defending the importance of specialist content to the encyclopedia. Mathematics is a stark example
of this tension, but not the only one: I heartily recommend the essay Many things to many people (written primarily by Markus Poessel) for wider discussion."
We'll be Bach next week with a classic project. Until then, let our previous work serenade you in the archive. | {"url":"https://signpost.news/2011-02-21/WikiProject_report","timestamp":"2024-11-08T22:35:36Z","content_type":"text/html","content_length":"53610","record_id":"<urn:uuid:05c81f1f-8ba8-48ca-8b40-ec8024e4974b>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00839.warc.gz"} |
Understanding the diffracting option of the plane wave source
The purpose of this section is to describe how diffracting plane wave source works and how it differs from Bloch/periodic plane wave source. Well know double slit experiment is used to demonstrate
Discussion and simulation setup
Diffracting plane wave source is a source in which plane wave propagates through a rectangular aperture. The size of the source defined on the Geometry tab is the dimension of the aperture.
Additionally, diffraction pattern will be produced as the plane wave travels through the aperture. This differs from the Bloch/periodic plane wave source type, which always automatically expands
across the size of the entire simulation region in order to simulate pure plane wave.
For this reason, diffracting plane wave source should be used only in simulations where the diffraction is a desirable effect. In case of the double slit experiment, the diffracting plane wave
source allows us to replace each slit with a plane wave source instead of creating a structure representing the slits.
In this example, the sources are 12um apart and each source/slit has size of 2um. Frequency domain field and power monitor is used to represent the screen that is placed 58um from the slits.
The following analytical formula can be used to calculate the spacing of the interference maxima on the projection plane:
$$ s=\frac{z \lambda}{d} $$
z is the distance of the projection plane from the slits
d is the distance between the slits
lambda is wavelength
Simple calculation shows that the distance between the maxima at 633nm should be approximately 3.05um.
Tip: Reducing the simulation size
To minimize the simulation time, it is generally recommended that you do not include large regions of empty space in a simulation. It would be possible to obtain these same results with a much
smaller simulation region, by taking advantage of the far field projection functions. This example uses a large simulation region to keep the analysis as simple as possible, even though it is less
computationally efficient.
The simulation results shown on the figures below demonstrate the diffracting nature of the sources and their constructive and destructive interference. Moreover, the distance between the maxima is
~3.04um, which is well aligned with the analytical result above.
See also | {"url":"https://optics.ansys.com/hc/en-us/articles/360034382914-Understanding-the-diffracting-option-of-the-plane-wave-source","timestamp":"2024-11-11T06:54:51Z","content_type":"text/html","content_length":"34841","record_id":"<urn:uuid:835be180-7bce-439d-803f-3733def5c12f>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00198.warc.gz"} |
Laboratory Investigation of Rheological Properties of Polymer and Crude Oil
Clifford Tamuno-Ibime Dexterity*
Department of Petroleum Engineering, Rivers state University, Nigeria
Submission: July 29, 2017, Published: September 12, 2017
*Corresponding author: Clifford Tamuno-Ibime Dexterity, Department of Petroleum Engineering, Rivers state University, Port Harcourt, Nigeria, Email: dexterityclifford@gmail.com
How to cite this article: Clifford T. Laboratory Investigation of Rheological Properties of Polymer and Crude Oil. Recent Adv Petrochem Sci. 2017; 2(5): 555600. DOI:10.19080/RAPSCI.2017.02.555600
By analyzing the rheological properties of polymer and crude oil, it was found that polymer action on different crude samples vary with respect to fluid type. Fluid resistance to flow decreased after
0.01Molar Concentration of polymer (Xanthan gum) was used to form a buffer solution with 500mls of water kept for a period of 7days. 400mls of the buffer solution formed with Xanthan gum + 200mls of
each crude oil sample were used to make an investigation of the rheological properties of the polymer and crude oil. The Rheology of the crude samples, buffer solution and the crude oil samples +
buffer solution were recorded at a constant temperature of 25^0C was also taken.The growth of microbes was discovered after 7days this microbes further reduced the resistant to fluid flow by breaking
the heavy hydrocarbon molecules, compared to their initial viscosities. The viscosity of fluid flow was monitored via shear stresses in relation to different shear rates using the Power Law Model
which graphically interprets how the fluid responds to shear thinning, monitored at different revolutions per minute (rpm) was discovered to give best response at a high shear rate of 600rpm.The
investigation of the fluid flow was done by modeling a simulated reservoir using sandstone tube from Darcy's equation for fluid diffusivity which confirms that a decrease in viscosity causes an
increase in the flow rate with the assumption made that fluid flow rate equals production rate. The xanthan gum polymer is preferably used because it is cost effective and environmentally friendly.
Keywords: Xanthan gum; Viscosity; Shear thinning; Microbes; Flow rate
Rheology gives a proper knowledge of fluid flow behaviors. It is generally, the study of how matter deforms and flows, including its elasticity, plasticity and viscosity. In geology, rheology is
particularly important in studies of moving ice, water, salt and magma, as well as in studies of deforming rocks. Rheological modeling of fluids in the field is visually described by the Bingham
plastic or the power law model. Although this model is fairly easy to solve for its specific descriptive parameters, they do not simulate fluid behavior across the entire rheological spectrum very
well, particularly in the low shear rate range [1].
The following definitions help in understanding this model:
• Shear rate (y) in a simple flow is the change in fluid velocity divided by the gap or width of the channel through which fluid is moving.
• Shear stress (t) is the force per unit area required to move a fluid at a given shear rate.
• Fluid viscosity (|i) is the fluid's shear stress divided by the corresponding shear rate
•The power law Model (PL) characterizes mud behavior across the entire shear rate range.
Plastic Viscosity (PV) is a resistance of fluid to flow. According to the Bingham plastic model, the PV is the slope of shear stress and shear rate (Figure 1). (Drilling formulas: Viscosity of
drilling mud) [2]. Typically, the viscometer is utilized to measure shear rates at 600, 300, 200, 100, 6, and 3 revolutions per minute (rpm). With increased use of polymer- based fluids in the oil
field, the power law (PL) model became popular because it fits the behavior of these fluids better than the Bingham plastic model. The model's relationship between shear stress and shear rate is
given by
Shear stress = K x (shear rate)^n
The two key terms in the PL model are the consistency index (K) and the fluid flow index (n). Although the model fairly accurately predicts fluid behavior at the higher shear rates, it fails across
the lower shear rate range (0-100rpm). Moreover, people who use the PL model recognize that different values of n are possible, depending on which particular shear stress/ shear rate pairs are used
in the calculation methods. n and K can be calculated from any two value of shear rate and shear stress. The method of reading shear rate on the rig comes from a V-G meter (Figure 2). Typically,
600rpm, 300rpm and 3rpm are obtained from every fluid rheology test and we can use those reading to determine n and K. The following equations are used to get the Power law constants (n and K).
Shear stress = K x (shear rate)^n
n = 3.23 Log (0 600/0 300)
K = 5.11 (Reading at 300/511^n)
n is the power law exponent. (Flow behavior index) dimensionless
K = consistency factor, (poise)
θ300 = reading at 300rpm
θ600 = reading at 600rpm
A graph of shear stress against shear rate was plotted to consider the rate of flow which explains the Power Law Model better was aided by the Darcy's equation for an unsteady state process. The most
commonly used fluid model to determine the rheology of non-Newtonian fluid is the Bingham plastic model. With this model, it makes the assumption that the shear rate is a straight line function of
the shear stress. The point where the shear rate is zero is called "Yield Point” or threshold stress. (Drilling fluid formulas: types of flow and rheology models) [3].
The Darcy Equation in fluid diffusivity for an unsteady state process is introduced to predict fluid flow showing how flow rate varies inversely with viscosity. Also, a simulated reservoir sytem was
modelled to note the conditions through which the fluid flows and their various viscosities recorded.
Q = (-KA )/μ dP/dx
The Darcy Equation was introduced to note the resistance to flow of the fluid samples (crude oil + a buffer solution made with xanthan gum) through permeable media. This was done in a one
dimensional, homogenous rock formation (sandstone) for an unsteady state process.
Assuming flow rate = Production rate
qsc = ((KA )/(μBΔx^2 ))x P[i]-1 - 2P[i] + P[i]+1
q = λ^1ΔP^i
The equation was further derived showing how flow rate varies with viscosity.
3 Different Niger Delta Crude oil samples were obtained and worked on.
1. SAMPLE A - AGBDZ (0375) from Agbada Field.
2. SAMPLE B -IMOR- E (5133)
3. SAMPLE C - GBAR (0015)
Decantation: Decantation of crude sample was carried out. Decantation is the removal of water from the various crude samples.
Polymer used: The Polymer used in this experiment is Xanthan gum. XG is made up of strains of bacteria (Xanthomonas campestris) from the fermentation process of simple sugars. Xanthan gum of 0.01
Molar Concentration was gotten to prepare a buffer solution with 500mls of water.
Buffer solution: A buffer solution of 0.01Molar Concentration of Xanthan gum + 500mls of water were made to break the hydrocarbon molecules of the crude samples without altering the hydrocarbon pH
and at a constant temperature of 25 °C.
Viscosity determination: Viscosity was determined using a Fan Viscometer (Rheometer) at; 400mls of Crude oil sample + 200mls of Buffer Solution This was done to further aid the breakdown of
hydrocarbon molecules of the crude samples; therefore, a mixture of the 100mls of water + 100mls of the buffer solution was also made due to the increase of viscosity after 7days at a constant
temperature of 25 °C.
Rheology gives a better explanation of the fluid behavior under different conditions. The viscosity of each crude sample was recorded. Also, the viscosity of the buffer solution was recorded and
finally, the viscosity of the mixture of each crude sample and the buffer solution was also recorded (400mls of crude oil sample + 200mls of Buffer solution) at a constant temperature of 25 °C. The
Shear Stress for the various crude oil samples + Buffer solution was recorded as shown from the 1^st Rheological readings. result in the Table 1. NB: The shear stress was gotten from the Power Law
Model; this describes the characteristics of the fluid (non-Newtonian) from the Consistency curve plotted by shear stress against the Shear rate [4].
This could also be interpreted by the graph of shear stress vs shear rate which yields a consistency curve that best explains the fluid flow behavior and type (Figure 3A). The consistency curve has
the exponential equation as described by the equation,
Shear stress = K x (shear rate)^n
Shear rate = 600rpm {highest shear rate (Speed) from the Viscometer}
n = 3.23 Log (θ 600/θ 300)
K = 5.11 (Reading at 300/511n)
K is the fluid consistency unit and n is the power law exponent.
Solution using the power law model equation: The shear stress for the various Samples + Buffer Solution Made up of Polymer (Xanthan gum) and water is shown in the Power Law Calculations Below (Table
Only Sample A formed a soluble (very viscous) solution after shearing the 400mls of Crude + 200mls Buffer solution (Table 3).
After taken the viscosity readings of the samples, they were left for 7days to see the behavioral patterns/properties of the samples. Therefore, rheology was carried out to note their various
viscosities. The table below shows the result after 7days (Figure 3B).
From the experiment carried out, the viscosity of the buffer solution (Xanthan gum +water) decreases with higher shear rates. This is called shear thinning or pseudo-plasticity.
Once the shear forces are removed, the buffer starts to thicken thereby increasing viscosity. NB: The shear stress was gotten from the Power Law Model; this describes the characteristics of the fluid
(non Newtonian) [5]. The consistency curve has the exponential equation as described by the equation
Shear stress = K x (shear rate)^n
Shear rate = 600 (highest shear rate from the Viscometer)
n = 3.23 Log (θ 600/&3952; 300)
K = 5.11 (θ 300/511n)
The shear stress for the various Samples + Buffer Solution Made up of Polymer (Xanthan gum) and water after 7 Daysis shown in the Power Law Calculation Below (Table 4).
After 7 days, the viscosity reduced drastically as a result of the growth of microbes which was very visible in the crude samples and much more visible on the buffer solution (Xanthan Gum + water).
The buffer solution was discovered to be very viscous so an additional 100mls of water was added to it to reduce viscosity which made it to be a solution of 100mls of water + 200mls of buffer
solution and the rheology was taken to see the rate of flow and carrying capacity to see if the hydrocarbon can be swept into the reservoir [6].
The hydrocarbon molecules were broken down as a result of the polymer action and growth of microbes after a mixture is formed with the crude oil samples due to a higher shear rate.
The breakdown of the hydrocarbon molecules resulted to an increase in flow (Table 5).
NB: The various crude samples were placed on flow condition using the Darcy's equation in fluid diffusivity for an unsteady state flow process to model a simulated hydrocarbon reservoir.
In Petroleum Engineering, to determine the flow through permeable media the most simple of which is for a one-dimensional, homogeneous rock formation. Recal Darcy Equation in fluid diffusivity
equation for an unsteady state process
Q = (-KA )/μ dP/dx
Q is the flow rate of the formation (in units of volume per unit time)
K is the permeability of the formation (typically in milli- Darcy)
A is the cross-sectional area of the formation
μ is the viscosity of the fluid (typically in units of centipoises)
∂p/∂x represents the pressure change per unit length of the formation.
This equation can also be solved for permeability and is used to measure it, forcing a fluid of known viscosity through a core of a known length and area, the pressure drop across the length of the
core. While the flow rate was monitored to see its outcome
Darcy Equation of fluid diffusivity for an unsteady state process is given by
Q = (-KA )/(μ dP/dx ..............(1)
And the basic fluid flow equation in a porous media is also given by;
d/dx ((KA )/&3956;B dP/dx)+ qsc = VbøCt SwdP/dx ......................(2)
For steady state, dP/dx=0
d/dx ((KA )/&3956;B dP/dx)+ qsc = 0 ...............(3)
Simplifying equation (3) gives;
(d^2 P)/(dx^2 ) ((KA )/μB) =-qsc ...........(4)
Note that the negative sign indicates production.
Assuming fluid flow rate = Production rate
qs[C] = ((KA )/(μBΔx [i-1]-2P[i]+ P[i+1]..................5
Let ((KA )/(μBΔx^2 )) = λ............ (6)
q[sc] = λ[P[i-1] - 2P[i] + P[i+1]]...............7
Considering a First order derivative for Equation (4),
Equation (7) now becomes;
q = λ^1[P[i+1] - P[i]] .................(8)
Setting [P[i+1] - P[i]] as ΔP[i] = f ([i,i+1] ...............(9)
Results to;
q = λ^1ΔP[i] .................(10)
λ^1 = ((KA )/μBΔx) = specific transmissibility Constant. (L^3 /MT-^1)
q[p]= Oil Flow rate as a result of Polymer Flooding, (ft^3/sec).
This is a condition that helps in the determination of the fluid flow behavior Simulated with the following parameters for fluid flow equation at the different fluid viscosities gotten from the
experiment carried out
For sample a:
For sample B:
For sample C:
For Sample C, the %D = 100-1.25 = 98.7%
The breakdown of the hydrocarbon molecules by the buffer solution made up of polymer and water shows a significant effect on the fluid by drastically reducing the viscosity after the fluid has been
subjected to high shear rate and placed on flow condition on a simulated reservoir system.
A graphical interpretation of the summary in the Figure 4 & 5, better explains the result.
Figure 4 is a Graph indicating the trend of fluid flow; showing how Flow rate increases with decreasing Viscosity (Initially before 7 days), which also shows the effect of pseudo-plasticity; this
best describes the type of non-Newtonian flow, where viscosity falls with increasing shear rate. Most molten polymers are pseudo-plastic. The viscosity of Sample A was found to be higher compared to
samples B and C despite the share rates. This means, the viscosity of polymer in solution decreases with higher shear rates, which yields a high flow rate [7].
From the result, the higher the viscosity, the lower the flow rate and the lower the viscosity, the higher the flow rate. This experiment proves that Sample C has the best flow rate because it has a
viscosity of 0.4cp compared to the other Samples with a better percent deviation of 98.9% which means it has a very high ability to flow at 0.125ft^3
/sec after 7days compared to its initial flow rate of 0. 00158.t^3/sec.
Xanthan gum is known to be a viscous polymer if subjected to flow once the shear forces are removed, the polymer in solution starts to thicken thereby increasing viscosity, also from the
investigation carried out after 7 days, growth of microbes which was very visible in the crude samples and much more visible on the buffer solution had the ability to keep the hydrocarbon pH constant
while the heavier molecules in the hydrocarbon are broken down which discovered to drastically increase fluid flow due to shear thinning or pseudo-plasticity.
The viscosity of Sample A was found to be higher compared to samples B and C despite the share rates. This means, the viscosity of polymer in solution decreases with higher shear rates, which yields
a high flow rate. Therefore, polymers can be injected into a hydrocarbon reservoir to enhance oil recovery because it is cost effective and environmentally friendly. | {"url":"https://juniperpublishers.com/rapsci/RAPSCI.MS.ID.555600.php","timestamp":"2024-11-02T15:46:51Z","content_type":"text/html","content_length":"87702","record_id":"<urn:uuid:fc020697-c2a9-4fa1-8993-c990afbc5e74>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00391.warc.gz"} |
Transactions Online
Yutaka KAWAI, Takahiro MATSUDA, Takato HIRANO, Yoshihiro KOSEKI, Goichiro HANAOKA, "Proxy Re-Encryption That Supports Homomorphic Operations for Re-Encrypted Ciphertexts" in IEICE TRANSACTIONS on
Fundamentals, vol. E102-A, no. 1, pp. 81-98, January 2019, doi: 10.1587/transfun.E102.A.81.
Abstract: Homomorphic encryption (HE) is useful to analyze encrypted data without decrypting it. However, by using ordinary HE, a user who can decrypt a ciphertext that is generated by executing
homomorphic operations, can also decrypt ciphertexts on which homomorphic evaluations have not been performed, since homomorphic operations cannot be executed among ciphertexts which are encrypted
under different public keys. To resolve the above problem, we introduce a new cryptographic primitive called Homomorphic Proxy Re-Encryption (HPRE) combining the “key-switching” property of Proxy
Re-Encryption (PRE) and the homomorphic property of HE. In our HPRE, original ciphertexts (which have not been re-encrypted) guarantee CCA2 security (and in particular satisfy non-malleability). On
the other hand, re-encrypted ciphertexts only guarantee CPA security, so that homomorphic operations can be performed on them. We define the functional/security requirements of HPRE, and then propose
a specific construction supporting the group operation (over the target group in bilinear groups) based on the PRE scheme by Libert and Vergnaud (PKC 2008) and the CCA secure public key encryption
scheme by Lai et al. (CT-RSA 2010), and prove its security in the standard model. Additionally, we show two extensions of our HPRE scheme for the group operation: an HPRE scheme for addition and an
HPRE scheme for degree-2 polynomials (in which the number of degree-2 terms is constant), by using the technique of the recent work by Catalano and Fiore (ACMCCS 2015).
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E102.A.81/_p
author={Yutaka KAWAI, Takahiro MATSUDA, Takato HIRANO, Yoshihiro KOSEKI, Goichiro HANAOKA, },
journal={IEICE TRANSACTIONS on Fundamentals},
title={Proxy Re-Encryption That Supports Homomorphic Operations for Re-Encrypted Ciphertexts},
abstract={Homomorphic encryption (HE) is useful to analyze encrypted data without decrypting it. However, by using ordinary HE, a user who can decrypt a ciphertext that is generated by executing
homomorphic operations, can also decrypt ciphertexts on which homomorphic evaluations have not been performed, since homomorphic operations cannot be executed among ciphertexts which are encrypted
under different public keys. To resolve the above problem, we introduce a new cryptographic primitive called Homomorphic Proxy Re-Encryption (HPRE) combining the “key-switching” property of Proxy
Re-Encryption (PRE) and the homomorphic property of HE. In our HPRE, original ciphertexts (which have not been re-encrypted) guarantee CCA2 security (and in particular satisfy non-malleability). On
the other hand, re-encrypted ciphertexts only guarantee CPA security, so that homomorphic operations can be performed on them. We define the functional/security requirements of HPRE, and then propose
a specific construction supporting the group operation (over the target group in bilinear groups) based on the PRE scheme by Libert and Vergnaud (PKC 2008) and the CCA secure public key encryption
scheme by Lai et al. (CT-RSA 2010), and prove its security in the standard model. Additionally, we show two extensions of our HPRE scheme for the group operation: an HPRE scheme for addition and an
HPRE scheme for degree-2 polynomials (in which the number of degree-2 terms is constant), by using the technique of the recent work by Catalano and Fiore (ACMCCS 2015).},
TY - JOUR
TI - Proxy Re-Encryption That Supports Homomorphic Operations for Re-Encrypted Ciphertexts
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 81
EP - 98
AU - Yutaka KAWAI
AU - Takahiro MATSUDA
AU - Takato HIRANO
AU - Yoshihiro KOSEKI
AU - Goichiro HANAOKA
PY - 2019
DO - 10.1587/transfun.E102.A.81
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E102-A
IS - 1
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - January 2019
AB - Homomorphic encryption (HE) is useful to analyze encrypted data without decrypting it. However, by using ordinary HE, a user who can decrypt a ciphertext that is generated by executing
homomorphic operations, can also decrypt ciphertexts on which homomorphic evaluations have not been performed, since homomorphic operations cannot be executed among ciphertexts which are encrypted
under different public keys. To resolve the above problem, we introduce a new cryptographic primitive called Homomorphic Proxy Re-Encryption (HPRE) combining the “key-switching” property of Proxy
Re-Encryption (PRE) and the homomorphic property of HE. In our HPRE, original ciphertexts (which have not been re-encrypted) guarantee CCA2 security (and in particular satisfy non-malleability). On
the other hand, re-encrypted ciphertexts only guarantee CPA security, so that homomorphic operations can be performed on them. We define the functional/security requirements of HPRE, and then propose
a specific construction supporting the group operation (over the target group in bilinear groups) based on the PRE scheme by Libert and Vergnaud (PKC 2008) and the CCA secure public key encryption
scheme by Lai et al. (CT-RSA 2010), and prove its security in the standard model. Additionally, we show two extensions of our HPRE scheme for the group operation: an HPRE scheme for addition and an
HPRE scheme for degree-2 polynomials (in which the number of degree-2 terms is constant), by using the technique of the recent work by Catalano and Fiore (ACMCCS 2015).
ER - | {"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E102.A.81/_p","timestamp":"2024-11-03T01:23:14Z","content_type":"text/html","content_length":"66444","record_id":"<urn:uuid:b73e4562-d818-4ea7-bdc3-272b5c4cab8d>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00032.warc.gz"} |
Inductor Mathematics
Click here to go to our main inductor page
For March 2016, our friends at Keysight Technologies gave us a video on how to model spiral inductors. Thanks, Keysight!
Below is an index of our mathematical discussion of inductors:
Inductance of a transmission line (separate page)
Spiral inductors on a substrate (New for March 2016!)
Wirebond inductance (now on a separate page)
Airbridge inductance (separate page)
Via hole inductance (separate page)
Wirebond inductance (separate page)
DC and RF resistance of inductors
Inductive reactance
Use our reactance calculator if you are interested in this topic!
The well-known equation for inductive reactance is shown below. Note that inductive reactance is positive, the opposite polarity of capacitive reactance. On the Smith Chart, this means that series
inductance tends to move a reflection coefficient in a clockwise direction.
A more useful form of the inductive reactance equation is given below, where frequency is in GHz and inductance is in nanohenries. Luckily all of those decimal places just cancel each other out!
Solenoid inductance
A solenoid is a cylindrical shape that is wound with wire to create inductance. It can have a single layer of windings, or multilayer, and it can use an air core or a core with high magnetic
permeability for increased inductance. The most useful (read that "highest Q") solenoids for microwave applications are miniature, single-layer air-core inductors. The graphic below was contributed
by Sebastiaan. Many thanks!
The classic formula for single-layer inductance (air core) is called Wheeler's formula, which dates back to the radio days of the 1920s:
L = inductance in micro-Henries (not nano-Henries!)
N= number of turns of wire
R= radius of coil in inches
H= height of coil in inches
Here it is in terms of D, the diameter of the coil:
(This formula was corrected April 9, 2006 thanks to KB!)
Wheeler's formula does not take into account wire diameter, and spacing between the turns. In the Wheeler formula, the turns are touching each other, but some insulation is assumed to prevent
shorting out. In practice, some spacing between turns is necessary to reduce the inter-turn capacitance and increase the operating frequency. Let's face it, Wheeler was not interested in the accuracy
of nano-Henry coils for microwave hardware.
A supposedly more-accurate method of calculating inductance of single-layer air-core inductors for microwave components can be found on Microwave Components Incorporated web site:
L = inductance in nano-Henries
N = number of turns of wire
D = inside diameter of the coil (inches)
D1 = bare wire diameter (inches)
S = space between turns (inches)
Using the MCI formula, applied to 47 gauge wire (1.2 mil bare wire diameter), and 0.5 mil spacing between turns, wrapping the turns around a 20 mil pin vice, you can make the following air-coil
1 turn= 2 nH
2 turns= 5 nH
3 turns = 8 nH
4 turns = 12 nH
5 turns= 16 nH
6 turns = 20 nH
7 turns = 25 nH
8 turns= 30 nH
9 turns = 35 nH
10 turns = 40 nH
Click here to go to our American Wire Gauge (AWG) chart.
Spiral inductor (wire)
This formula and graphic was also contributed by Sebastian (units are also micro-Henries): We have to admit, we haven't personally tested some formulas on this page for accuracy against measured
data. Also, note that any inductor model that doesn't consider parasitic capacitance and resistance will have limited accuracy at microwave frequencies.
Spiral inductors on a substrate
Spiral inductors are often used in MIC, MMIC and RFIC design, particularly below 18 GHz. Inductors can be rectangular or round, so long as you know how to model them. The spiral inductor lumped
element model can have many capacitor and resistor elements to account for all the parasitics that make it less and less ideal as you move up in frequency. Modeling an inductor requires good
de-embedded data on one or more inductor values, which result in a scalable model that allows you to predict the characteristics of arbitrary inductor values required by your design.
Want a little more hands-on description of how to model spiral inductors? Check out this video on EXACTLY THAT. Franz Sischka of SisConsult takes you through a complete lumped element model of a
spiral inductor, including skin effects, substrate eddy currents, and coupling to metal 1 shielding. Keysight's Integrated Circuit Characterization and Analysis Program (IC-CAP) is used to fit two
example measurements. The elements are manually tuned followed by optimization. Methods of model verification are provided and the example files can be downloaded.
Toroid inductance
A toroid is similar to a solenoid, but is donut shaped. More to come!
DC and RF resistance of inductors
Computing the DC resistance of a spiral inductor is simple, and is often overlooked by designers until they build an amplifier circuit and the part doesn't bias up correctly on the first iteration.
You need to know the sheet resistance of your metalization, in ohms per square, and compute the number of squares in the inductor. The number of squares is the total length (if you unwound it)
divided by the width, and can easily run into the hundreds for a large inductor.
Computing the RF resistance, you may have to consider the skin depth effect.
The model shown below is the classic model for spiral inductors. Computing the elements is not as straightforward as you might hope.
Inductor resonances
More to come! | {"url":"https://www.microwaves101.com/encyclopedias/inductor-mathematics","timestamp":"2024-11-08T17:36:34Z","content_type":"application/xhtml+xml","content_length":"50072","record_id":"<urn:uuid:ab484bdc-0054-404e-bd09-4de8d25acf47>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00198.warc.gz"} |
📊Reading Notes for "LEMMA-RCA" | ChengAo Shen
📊Reading Notes for "LEMMA-RCA"
This paper introduces a new large dataset named LEMMA-RCA for diverse RCA tasks across multiple domains and modalities. This dataset contains IT and OT operation systems from the real world. They
also evaluate eight baseline methods on this dataset to prove the high quality of LEMMA_RCA. The official website is https://lemma-rca.github.io/.
1. What problem does the paper try to solve?
The use of automated methods for root cause analysis is crucial, but currently, there is a lack of a mainstream dataset and fair comparison is not possible.
2. What is the proposed solution?
They proposed a rich dataset LEMMA-RCA containing multiple sub-datasets.
3. What are the key experimental results in this paper?
Tested the performance of 8 models on the LEMMA-RCA dataset.
4. What are the main contributions of the paper?
They propose the LEMMA-RCA dataset and evaluate eight baseline models on this.
5. What are the strong points and weak points in this paper?
□ Strong Point: Proposed a new dataset and conducted extensive evaluation.
□ Weak Point: There are no baseline methods not belonging to the causal-graph-based model.
Root cause analysis (RCA) is essential for identifying the underlying causes of system failures and ensuring the reliability and robustness of real-world systems. However, traditional manual RCA is
labor-intensive, costly, and prone to errors, so data-driven methods are needed. Despite significant progress in RCA techniques, the large-scale public datasets remain limited.
In RCA fields, here are some important keywords:
• Key Performance Indicator (KPI) is a time series indicating the system status, such as latency and service response time in microservice systems.
• Entity Metrics are multivariate time series collected by monitoring numerous system entities or components, such as CPU/Memory utilization in microservice systems.
• Data-driven Root Cause Analysis Problem. Given the monitoring data of system entities and system KPIs, identify the top K system entities that are relevant to KPIs when the system fails.
□ Offline/Online: Offline RCA only uses historical data to determine past failures; Online RCA operates in real-time using current data streams to promptly address issues.
□ Single-modal/multi-modal: Single-modal RCA relies solely on one type of data for a focused analysis; Multi-modal RCA uses multiple data sources for a comprehensive assessment.
RCA workflow
Base Information
LEMMA-RCA is a multi-domain, multi-modal dataset that includes textual system logs with millions of event records and time series metric data collected from real system faults. This dataset includes
IT and OT scenes, such as microservice and water treatment.
Existing datasets for root cause analysis.
The dataset collected from two domains, divided into four sub-datasets:
• IT operations
□ Product Review
Platform: Composed of six OpenShift nodes and 216 system pods.
The architecture of Product Review Platform
Faults: out-of-memory, high-CPU-usage, external-storage-full, DDos attack.
Metrics: Using Prometheus to record eleven types of node-level metrics and six types of pod-level metrics; Using ElasticSearch to collect log data including timestamp, pod name, log message,
etc; Using JMeter to collect the system status information.
KPI: Consider latency as system KPI due to system failure will result in latency significantly increasing.
□ Cloud Computing
Platform: Eleven system nodes.
Faults: six different types of faults, such as cryptojacking, configuration change failure, etc.
Metrics: Extracting system metrics from CloudWatch Metrics on EC2 instances; Extracting three logs types (log messages, API debug log, and MySQL log) from CloudWatch Logs; Using JMeter tools
to record error rate and utilization rate as KPIs.
Data statistics of IT operation sub-datasets.
• OT operations
□ SWaT: Collected over an 11-day period from a water treatment testbed equipped with 51 sensors. The system operated normally during the first 7 days, followed by attacks over the last 4 days,
resulting in 16 system faults.
□ WADI: Gathered from a water distribution testbed over 16 days, featuring 123 sensors and actuators. The system maintained normal operations for the first 14 days before experiencing attacks
in the final 2 days, with 15 system faults recorded.
Data statistics of OT operation sub-datasets.
Some non-stationary data are unpredictable and cannot be effectively modeled, which means they should be excluded. Thus this paper introduces some methods to preprocessing the data.
Log Feature Extraction. Due to the log data being unstructured and some of them being unmeaning, this paper transforms the log data into the time-series format. First, they use log-parsing tools to
structure the log message. Then they segment the data using 10-minute windows with 30-second intervals and calculate the occurrence frequency as the first feature type donated as $X_1^L\in \mathbb{R}
^T$. Then, they introduce a second feature type based on “Golden signals” derived from domain knowledge, such as the frequency of abnormal logs associated with system failures like DDoS attacks,
storage failures, and resource over-utilization. This feature is donated as $X_2^L\in \mathbb{R}^T$. Finally, they segment the log using the same time windows and apply PCA to reduce feature
dimensionality, selecting the most significant component as $X_3^L\in \mathbb{R}^T$. The overall data can form as matrix $X^L=[X_1^L,X_2^L,X_3^L]\in \mathbb{R}^{3\times T}$.
KPI Construction. Using anomaly detection algorithms to model the SWaT and WADI datasets, and transform the discrete value into continuous format.
Precision@K (PR@K): It measures the probability that the top $K$ predicted root causes are real, formulated as:
$$ \text{PR@K}=\frac{1}{|\mathbb{A}|}\sum_{a\in\mathbb{A}}\frac{\sum_{i\le k}R_a(i)\in V_a}{\text{min}(K,|v_a|)} $$
Where $\mathbb{A}$ is the set of system faults, $a$ is one fault, $V_a$ is the real root cause of $a$, $R_a$ is the predicted root cause of $a$, and $i$ is the $i$-th predicted cause of $R_a$.
Mean Average Precision@K (MAP@K): It assesses the top $K$ predicted causes from the overall perspective, formulated as:
$$ \text{MAP@K}=\frac{1}{K|\mathbb{A}|}\sum_{a\in \mathbb{A}}\sum_{i\le j\le K}\text{PR@j} $$
Mean Reciprocal Rank (MRR): It evaluates the ranking capability of models, formulated as:
$$ \text{MRR@K}=\frac{1}{|\mathbb{A}|}\sum_{a\in \mathbb{A}}\frac{1}{\text{rank}_{R_a}} $$
Where $\text{rank}_{R_{a}}$ is the rank number of the first correctly predicted root cause for system fault $a$.
Causal-graph-based RCA methods can provide deeper insights into system failures, thus all baseline methods fall into this category.
• PC: Classic constrain-based causal discovery algorithm that can identify the causal graph’s skeleton using an independence test.
• Dynotears: It construct dynamic Bayesian networks through vector autoregression models.
• C-LSTM: Utilizes LSTM to model temporal dependencies and capture nonlinear Granger causality.
• GOLEM: relaxing the hard Directed Acyclic Graph (DAG) constraint of NOTEARS with a scoring function
• REASON: An interdependent network model learning both intra-level and inter-level causal relationships.
• Nezha: A multi-modal method designed to identify root causes by detecting abnormal patterns.
• MULAN: A multi-modal RCA method that learns the correlation between different modalities and co-constructs a causal graph for root cause identification
• CORAL: An online single-modal RCA method based on incremental disentangled causal graph learning.
Results for offline RCA baselines with multiple modalities on the Product Review dataset.
Results for offline RCA baselines on the SWaT and WADI dataset.
Results for online root cause analysis baselines on all sub-datasets. | {"url":"https://chengaoshen.com/blogs/reading-notes-for-lemma-rca/","timestamp":"2024-11-11T00:02:42Z","content_type":"text/html","content_length":"51346","record_id":"<urn:uuid:56e9dc1f-2011-4f1c-985b-1cf6a7301e07>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00074.warc.gz"} |
PKDGRAV (Parallel K-D tree GRAVity code)
The following is an adaptation of a previous paper written by Marios D. Dikaiakos and Joachim Stadel^1. The changes made are both stylistic (i.e footnotes) and reflect the improvements made in
The central data structure in PKDGRAV is a tree structure which forms the hierarchical representation of the mass distribution. Unlike the more traditional oct-tree used in the Barnes-Hut algorithm,
we use a k-D tree^2, which is a binary tree. The root-cell of this tree represents the entire simulation volume. Other cells represent rectangular sub-volumes that contain the mass, center-of-mass,
and moments up to hexadecapole order of their enclosed regions.
To build the k-D tree, we start from the root-cell and bisect recursively the cells through their longest axis. At the very top of the tree we are domain decomposing among the processors, so the
bisection is done such that each processor gets an equal computational load. The computational cost is calculated from the previous timestep. Once we are within a processor, each cell is bisected so
that the sub-volumes are of equal size so that higher order moments of cells are kept to a minimum (Figure 1). The depth of the tree is chosen so that we end up with at most 8 particles in the leaf
cells (buckets). We have found this number to be near optimal for the parallel gravity calculation.
Several factors motivated the use of k-D tree structure over the classical oct-tree. The simplicity of the structure allows for a very efficient tree-construction. The use of buckets, by which only
PKDGRAV calculates the gravitational accelerations using the well known tree-walking procedure of the Barnes-Hut algorithm^3, except that it collects interactions for entire buckets rather than
single particles. Thus, it amortizes the cost of tree traversal for a bucket, over all its particles. In the tree building phase, PKDGRAV assigns to each cell of the tree an opening radius about its
center-of-mass. This is defined as,
where 1, increases
The opening radii are used in the Walk phase of the algorithm as follows: for each bucket 2). If a cell is ``opened,'' then PKDGRAV repeats the intersection-test with particle-cell interaction list
of bucket particle-particle interaction list of
Figure 2: Opening radius for a cell in the k-D tree, intersecting bucket B[1] and not bucket B[2]. This cell is ``opened'' when walking the tree for B[1]. When walking the tree for B[2], the cell
will be added to the particle-cell interaction list of B[2].
Once the tree has been traversed in this manner we can calculate the gravitational acceleration for each particle of
One disadvantage of tree codes is that they must deal with periodic boundary conditions explicitly, unlike grid codes where this aspect is taken care of implicitly. Although this adds complexity to
any tree code, it is possible to incorporate periodic boundary conditions efficiently by using approximations to the Ewald summation technique^4,5. PKDGRAV differs significantly from the prescription
given by [4], which is ill suited to a parallel code. Due to the mathematical technicality of the method we do not provide further description here except stating that it is ideally suited to
parallel computation^6.
Achieving effective parallelism requires that work be divided equally amongst the processors in a way which minimizes interprocessor communication during the gravity calculation. Since we only need a
crude representation for distant mass, the concept of data locality translates directly into spatial locality within the simulation. Each particle can be assigned a work-factor, proportional to the
cost of calculating its gravitational acceleration in the prior time-step. Therefore, during domain decomposition, we divide the particles into spatially local regions of approximately equal work.
Experience has shown that using a data structure for the domain decomposition that does not coincide with the hierarchical tree for gravity calculation, leads to poor memory scaling with number of
processors and/or tedious book-keeping. That is the case, for instance, when using an Orthogonal Recursive Bisection (ORB) tree for domain decomposition and an oct-tree for gravity^7. Current domain
decomposition techniques for the oct-tree case involve forming ``costzones,'' that is, processor domains out of localized sets of oct-tree cells^8, or ``hashed oct-trees''^9. PKDGRAV uses the ORB
tree structure to represent the domain decomposition of the simulation volume. The ORB structure is completely compatible with the k-D tree structure used for the gravity calculation (Figure 1). A
root finder is used to recursively subdivide the simulation volume so that the sums of the work-factors in each processor domain are equal. Once this has been done, each processor builds a local tree
from the particles within its domain. This entire domain decomposition and tree building process is fully parallelizable and incur negligible cost to the overall gravity calculation.
The Walk phase starts from the root-cell of the domain decomposition tree (ORB tree), each processor having a local copy of this tree, and descends from its leaf-cells into the local trees stored on
each processor. PKDGRAV can index the parent, sibling and children of a cell. Therefore, it can traverse a k-D tree stored on another processor in an architecture independent way. Non-local cells are
identified uniquely by their cell index and their domain number (or processor identifier). Hence, tree walking the distributed data structure is identical to tree walking on a single processor,
except that PKDGRAV needs to keep track of the domain number of the local tree upon which the walk is performed. Interaction lists are evaluated as described earlier, making Walk the only phase where
interprocessor communication takes place, after the domain decomposition.
A small library of high level functions called MDL (Machine Dependent Layer) handles all parallel aspects of the code. This keeps the main gravity code architecture-independent and simplifies
porting. For example, MDL provides a memory swapping primitive to move particles between processors during domain decomposition. Furthermore, MDL provides memory sharing primitives allowing local
arrays of data to be visible to all processors. These primitives support access to non-local cells and particles during the Walk phase. In particular, a procedure called mdlAquire can be used to
request and receive non-local data by providing an index into a non-local array, and an identifier for the processor that owns that array. On shared address space machines, we rely on the shared
address space to implement mdlAquire.
On distributed memory machines, such as workstation clusters and IBM's SP-2, we maintain a local software cache on each processor. When a processor requests a cell or particle that is not in its
cache, the request is sent to the appropriate processor that owns the data. While waiting for the data to be sent back, the requesting processor handles cache-requests sent from other processors. The
owner processor sends back a cache line comprised of more than a single cell or particle, in an attempt to amortize the effects of latency and message passing overhead. This cache line is inserted
into the local cache, and a pointer to the requested element is returned. To improve responsiveness of the software cache, after a number of accesses to the cache MDL checks whether any requests for
data have arrived; if so, it services these first. | {"url":"http://faculty.washington.edu/trq/hpcc//faculty/trq/brandon/pkdgrav.html","timestamp":"2024-11-03T02:38:37Z","content_type":"text/html","content_length":"15352","record_id":"<urn:uuid:5882acb5-22b4-4c2c-8dc8-d611644f0762>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00428.warc.gz"} |
Parallelogram Law of Vector Edition and its Derivation - TheCScience
Parallelogram Law of Vector Edition and its Derivation
Welcome to Thecscience, in this post, we will discuss an important topic of class 11 physics called Parallelogram Law. In this article, we will learn the definition of parallelogram law, its proof,
and the law of parallelogram in detail. First, we will talk about the Parallelogram Law of Vector Edition.
Definition of Parallelogram Law
According to this law, If two vectors represent the adjacent side of a parallelogram then the resultant vector will be diagonal of the parallelogram.
Let the two vectors of a parallelogram P and Q represent the adjacent side of a parallelogram then the R vector will be the resultant vector.
Diagram of Parallelogram of Vector Edition
Derivation of Parallelogram Law of Vector Edition
In Triangle AXD,
Using Pythagoras Theorem,
AD² = DX² + XA²
Note:- If the alpha angle is with the base then cos theta will be written and the perpendicular name as sin theta.
R² = (P sin θ)² + (Q + P cos θ)²
R² = P² sin² θ + Q² + P² cos² θ + PQcosθ
R² = P² (sin² θ +cos² θ) + Q² + PQcosθ
R² = P² + Q² + PQcosθ
R = √P² + Q² + PQcosθ
Case 1 of Parallelogram Law of Vector Edition
IF θ = 0° than,
R = √P² + Q² + PQcos0°
R = √P² + Q² + PQ
Case 2 of Parallelogram Law of Vector Edition
IF θ = 90° than,
R = √P² + Q² + PQcos90°
R = √P² + Q²
Case 3 of Parallelogram Law of Vector Edition
IF θ = 180° than,
R = √P² + Q² + PQcos180°
R = √P² + Q² – PQ
Other Physics Content | {"url":"https://thecscience.com/parallelogram-law-of-vector-edition-and-its-derivation.html","timestamp":"2024-11-04T17:23:05Z","content_type":"text/html","content_length":"58912","record_id":"<urn:uuid:fb20116e-72c2-4d46-972c-0c1e5145309f>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00681.warc.gz"} |
Calculate Covariance Matrix (medium)
Write a Python function that calculates the covariance matrix from a list of vectors. Assume that the input list represents a dataset where each vector is a feature, and vectors are of equal length.
input: vectors = [[1, 2, 3], [4, 5, 6]]
output: [[1.0, 1.0], [1.0, 1.0]]
reasoning: The dataset has two features with three observations each. The covariance between each pair of features (including covariance with itself) is calculated and returned as a 2x2 matrix.
Calculate Covariance Matrix
The covariance matrix is a fundamental concept in statistics, illustrating how much two random variables change together. It's essential for understanding the relationships between variables in a
dataset. For a dataset with \(n\) features, the covariance matrix is an \(n \times n\) square matrix where each element (i, j) represents the covariance between the \(i^{th}\) and \(j^{th}\)
features. Covariance is defined by the formula: \[ \text{cov}(X, Y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n-1} \] Where: - \(X\) and \(Y\) are two random variables (features), - \
(x_i\) and \(y_i\) are individual observations of \(X\) and \(Y\), - \(\bar{x}\) (x-bar) and \(\bar{y}\) (y-bar) are the means of \(X\) and \(Y\), - \(n\) is the number of observations. In the
covariance matrix: - The diagonal elements (where \(i = j\)) indicate the variance of each feature. - The off-diagonal elements show the covariance between different features. This matrix is
symmetric, as the covariance between \(X\) and \(Y\) is equal to the covariance between \(Y\) and \(X\), denoted as \(\text{cov}(X, Y) = \text{cov}(Y, X)\).
def calculate_covariance_matrix(vectors: list[list[float]]) -> list[list[float]]:
n_features = len(vectors)
n_observations = len(vectors[0])
covariance_matrix = [[0 for _ in range(n_features)] for _ in range(n_features)]
means = [sum(feature) / n_observations for feature in vectors]
for i in range(n_features):
for j in range(i, n_features):
covariance = sum((vectors[i][k] - means[i]) * (vectors[j][k] - means[j]) for k in range(n_observations)) / (n_observations - 1)
covariance_matrix[i][j] = covariance_matrix[j][i] = covariance
return covariance_matrix | {"url":"https://www.deep-ml.com/problem/Calculate%20Covariance%20Matrix","timestamp":"2024-11-06T07:13:38Z","content_type":"text/html","content_length":"27660","record_id":"<urn:uuid:d997e1cc-1ad7-40be-988f-60ed74880ef5>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00215.warc.gz"} |
Periodic Solutions in UMD Spaces for Some Neutral Partial Functional Differential Equations
Periodic Solutions in UMD Spaces for Some Neutral Partial Functional Differential Equations ()
1. Introduction
In this work, we study the existence of periodic solutions for the following neutral partial functional differential equations of the following form
In [4] , Ezzinbi et al. established the existence of periodic solutions for the following partial functional differential equation:
In [1] , Arendt gave necessary and sufficient conditions for the existence of periodic solutions of the following evolution equation.
where A is a closed linear operator on an UMD-space Y.
In [2] , C. Lizama established results on the existence of periodic solutions of Equation (1) when
2. UMD Spaces
Let X be a Banach space. Firstly, we denote By
Definition 2.1 Let
Definition 2.2 [2]
A Banach space X is said to be UMD space if the Hilbert transform is bounded on
Example 2.1 [9] 1) Any Hilbert space is an UMD space.
3) Any closed subspace of UMD space is an UMD space.
R-Bounded and L^p-Multipliers
Let X and Y be Banach spaces. Then
Definition 2.3 [1]
A family of operators
is valid. The smallest C is called R-bounded of
Lemma 2.1 ( [2] , Remark 2.2)
1) If
2) The definition of R-boundedness is independent of
Definition 2.4 [1] For
Proposition 2.1 ( [1] , Proposition 1.11) Let X be a Banach space and
Theorem 2.1 (Marcinkiewicz operator-valud multiplier Theorem).
Let X, Y be UMD spaces and
Theorem 2.2 [2] Let
Theorem 2.3 (Neumann Expansion) Let
3. Periodic Solutions for Equation (1)
Lemma 3.1 Let
Proof. Let
Integration by parts we obtain that
The proof is complete.
Lemma 3.2 [1] Let
By a Lemma 3.2 we obtain that
Definition 3.1 [2] . For
Lemma 3.3 [2] Let
3.1. Existence of Strong Solutions for Equation (2)
Then the Equation (1) is equivalent:
Denote by
We begin by establishing our concept of strong solution for Equation (2).
Definition 3.2 Let
Lemma 3.4 Let
Proof. Let
It follows
Since G is bounded, then
Lemma 3.5 [1] Let X be a Banach space,
Proposition 3.1 Let A be a closed linear operator defined on an UMD space X. Suppose that
Proof. 1) Þ 2) As a consequence of Proposition 2.1
2) Þ 1) We claim first that the set
By Lemma 3.4, we obtain that
We conclude that
Next define
we have
Since products and sums of R-bounded sequences is R-bounded [10. Remark 2.2]. Then the proof is complete.
Lemma 3.6 Let
Proof. Suppose that
It follows that
Theorem 3.1 Let X be a Banach space. Suppose that for every
1) for every
Before to give the proof of Theorem 3.1, we need the following Lemma.
Lemma 3.7 if
Proof of Lemma 3.7
We have
Proof of Theorem 3.1: 1) Let
Taking Fourier transform, G and D are bounded. We have
Consequently, we have
2) Let
3.2. Periodic Mild Solutions of Equation (2) When A Generates a C[0]-Semigroup
It is well known that in many important applications the operator A can be the infini- tesimal generator of
Definition 3.3 Assume that A generates a
Remark 3.1 ( [3] , Remark 4.2) Let
Lemma 3.8 [3] Assume that A generates a
Theorem 3.2 Assume that A generates a
Proof. Let x be a mild solution of Equation (2). Then by Lemma 3.8, we have
which shows that the assertion holds for
Now, define
Corollary 3.1 Assume that A generates a
Proof. From Theorem (3.2), we have that
Our main result in this work is to establish that the converse of Theorem 3.1 and Corollary 3.1 are true, provided X is an UMD space.
Theorem 3.3 Let X be an UMD space and
1) for every
Lemma 3.9 [1] Let
Proof of Theorem 3.3:
1) Þ 2) see Theorem 3.1
1) Ü 2) Let
By proposition 3.1, the family
there exists
In particular,
By Theorem 2.2, we have
Hence in
Since G is bounded, then
Using now (3) and (4) we have:
Since A is closed, then
Theorem 3.4 Let
Proof. For
By Theorem 2.2 we can assert that
We have
Using again Theorem 2.2, we obtain that
From which we infer that the sequence
let n go to infinity in (5), we can write
4. Applications
Example 5.1: Let A be a closed linear operator on a Hilbert space H and suppose that
solution of Equation (2).
From the identity
it follows that
rem 2.3], we observe that
We conclude that there exists a unique strong
Example 5.2:
Let A be a closed linear operator and X be a Hilbert space such that
(1), we obtain that
From the identity
Observe that
This proves that
The authors would like to thank the referee for his remarks to improve the original version. | {"url":"https://www.scirp.org/journal/PaperInformation?paperID=70844&","timestamp":"2024-11-08T11:10:17Z","content_type":"application/xhtml+xml","content_length":"121977","record_id":"<urn:uuid:1636b8fe-0b1d-4541-9859-23b6d6d402a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00807.warc.gz"} |
Physics - Online Tutor, Practice Problems & Exam Prep
Hey, guys. So previously, what we've seen in our projectile motion problems is that we'll get stuck in one axis. We'll have to go to the other axis to get that variable and then bring it back to the
original equation. I'm just going to fly through this example really quickly. It's an example that we've already seen before, this previous example here. Let's check it out. We've got a horizontal
launch. We're trying to figure out the delta x from a to b. So we start off with our delta x equation and we have the x velocity, but we don't have the time. And whenever we get stuck in one axis,
we're just going to go to the y axis for instance, and then we're going to figure out our variables and we're going to try to solve for this t. So we set up an equation to solve for time. We get a
number and then we basically just plug it all the way back into our original expression and then we can figure out delta x. This is just equal to 3 times 0.64, and then we just get a number 1.92
meters. We've already seen this before, very straightforward. Every time we get stuck, we just go to the other axis.
What I want to show you in this video is that there may be some situations in which you actually get stuck in both the x and the y axis when you try to go to the other one. So to get out of this
situation, we're going to use a method called equation substitution. This usually happens whenever you're solving problems and you have two out of these three variables that are unknown, the initial
velocity, the angle, and the times. They may be unknown, but they may not necessarily be asked for. Let me just go ahead and show you how this works using this problem here. So we've got a soccer
ball that's kicked upwards from a hill, and it lands, you know, it's in the air for 4 and a half seconds and lands 45 meters away. And we're going to figure out the initial velocity and angle at
which the soccer ball is kicked. So let's go ahead and draw a diagram. Right. So we've got this hill like this and we've got a soccer ball that's kicked at some angle. So it's going to go like this.
It's going to be like an upward launch and then it's going to land down here. Right. So we're going to go ahead and just go through the steps, draw the path in the x and y. Let me just scoot this
down a little bit. Give me some room. So the x axis would look like this and the y axis, this would look like this. And our points of interest are a, the maximum height, b, the point where it returns
to the original height, c, and then back down to the ground again. That's d. So in the x axis, this would look like this, a, b, c and then finally d. In the y axis, this would go up and then back
down through c and then down to d again. We've already seen the situation before. Cool. So what's the next step? So now we just need to figure out the target variables. Well, there are two of them.
In this case, we're looking for the initial velocity and the angle, and so those are going to be our unknown variables. So what interval are we going to use? Well, if you take a look at this problem
here, the thing we know the most amount of information about is the interval from a to d. The one thing we know is that in the x axis, the total amount of time that it spends in the air, t from a to
d, is 4.5 seconds. And we also know that delta x, the total horizontal displacement, is 45 meters. That's where the ball lands 45 meters away. So we're just going to use the interval from a to d,
because that's the one we know the most information about. So let's start off with the x axis. Our only equation that we can use is delta x from a to d is equal to vxa-d⋅Ta-d. So we actually know
both of these equations over here. So we have, delta x is 45. And what about this initial velocity here? Well, the initial velocity is going to be in the x axis, but that's not what we're solving
for. We're trying to solve for v₀ and θ. And the way that we would normally solve this is by using vector decomposition. If we want vx, we would use v₀⋅cos(θ). So I'm going to rewrite this vxa-d
term in terms of v₀ and θ. So basically, what this turns into is I have 45 equals v₀⋅cos(θ)⋅4.5. And if I go ahead and divide this to the other side, I get 10 equals v₀⋅cos(θ). Now,
unfortunately, there's a problem here as I still have two unknowns in this problem. My v₀ and my θ are both unknown. So I'm stuck here. I can't solve for either of them. So what do I do? Whenever I'm
stuck, I'm just going to go to the y axis and then try to solve for those variables there. I'm going to use the same exact interval, the interval from a to d, but now I need my 3 out of 5 variables.
I've got 9.8, is my, sorry, negative 9.8 is my aya-d. Now the initial velocity is going to be my vaya-d. Final velocity is going to be vdya-d, and then I've got delta y from a to d, and then I've got
t from a to d. Well, t from a to d I already know is 4.5. What about delta y? Well, that's just the vertical displacement from a down to d. I know that this is a 5 meter hill so it's going to be
negative because I'm going downwards and that's my delta y. So and this is negative 5. And then what about my vdya-d? I don't know what the final velocity is in the y axis. What about vaya-d? Well,
the way that I would normally solve this is just like vx is v₀⋅cos(θ), my v₀y}, which is vaya-d, is just equal to v₀}⋅sin(θ). So I'm just going to plug in this expression here for vaya-d. This is
just equal to v₀}⋅sin(θ). Now if I were to set up an equation for this, this would be my ignored variable, and even though I don't know v₀} or sin(θ), if I wanted to find it, I have my 3 out of 5
variables, so I'm just going to use equation number 3, which says that delta y from a to d is equal to vaya-d⋅Ta-d+12⋅ay×Ta-d2. So if I go ahead and plug in everything I know about this problem and
then I also replace my vaya-d with v₀}⋅sin(θ), then what I'm going to get is I'm going to get negative 5 equals | {"url":"https://www.pearson.com/channels/physics/learn/patrick/projectile-motion/projectile-motion-finding-initial-velocity?chapterId=8fc5c6a5","timestamp":"2024-11-12T03:48:22Z","content_type":"text/html","content_length":"474159","record_id":"<urn:uuid:9dea4307-693f-4987-bfb1-93ac42f86de4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00485.warc.gz"} |
Package index
Skip to contents
stplanr 1.2.2
Convert origin-destination data to spatial lines
Extract coordinates from OD data
Summary statistics of trips originating from zones in OD data
Summary statistics of trips arriving at destination zones in OD data
Create matrices representing origin-destination coordinates
Convert origin-destination coordinates into desire lines
od_id_szudzik() od_id_max_min() od_id_character()
Combine two ID values to create a single ID number
Generate ordered ids of OD pairs so lowest is always first This function is slow on large datasets, see szudzik_pairing for faster alternative
Aggregate od pairs they become non-directional
Convert origin-destination data from long to wide format
Convert origin-destination data from wide to long format
Convert a series of points into geographical flows
Convert a series of points into a dataframe of origins and destinations
Calculate the angular difference between lines and a predefined bearing
Clip the first and last n metres of SpatialLines
Identify lines that are points
Convert geographic line objects to a data.frame with from and to coords
line2points() line2pointsn() line2vertices()
Convert a spatial (linestring) object to points
Find the bearing of straight lines
Break up line objects into shorter segments
Find the mid-point of lines
Divide an sf object with LINESTRING geometry into regular segments
Segment a single line, using lwgeom or rsgeo
Add geometry columns representing a route via intermediary points
Convert 2 matrices to lines
Vectorised function to calculate number of segments given a max segment length
Retrieve the number of vertices in sf objects
Aggregate flows so they become non-directional (by geometry - the slow way)
Convert a series of points, or a matrix of coordinates, into a line
Clip the beginning and ends of sf LINESTRING objects
Convert multilinestring object into linestrings
Work with and analyse routes
Return average gradient across a route
Return smoothed averages of vector
Return smoothed differences between vector values
Calculate rolling average gradient from elevation data at segment level
Calculate the sequential distances between sequential coordinate pairs
Calculate the gradient of line segments from a matrix of coordinates
Calculate the gradient of line segments from distance and elevation vectors
Plan routes on the transport network
Get a route from the BikeCitizens web service
Route on local data using the dodgr package
Find shortest path using Google services
Find nearest route to a given point
Spatial lines dataset representing a route network
Spatial lines dataset representing a small route network
Plan routes on the transport network using the OSRM server
Split route in two at point on or near network
Split route based on the id or coordinates of one of its vertices
Spatial lines dataset of commuter flows on the travel network
Spatial lines dataset of commuter flows on the travel network
Plan routes on the transport network
Route on local data using the dodgr package
Plan routes on the transport network using the OSRM server
Convert text strings into points on the map
Add a node to route network
rnet_boundary_points() rnet_boundary_df() rnet_boundary_unique() rnet_boundary_points_lwgeom() rnet_duplicated_vertices()
Get points at the beginner and end of linestrings
Break up an sf object with LINESTRING geometry.
Keep only segments connected to the largest group in a network
Example of cycleway intersection data showing problems for SpatialLinesNetwork objects
Extract nodes from route network
Assign segments in a route network to groups
Join route networks
Merge route networks, keeping attributes with aggregating functions
Example of overpass data showing problems for SpatialLinesNetwork objects
Example of roundabout data showing problems for SpatialLinesNetwork objects
Subset one route network based on overlaps with another
overline() overline2()
Convert series of overlapping lines into a route network
Convert series of overlapping lines into a route network
Function to split overlapping SpatialLines into segments
Do the intersections between two geometries create lines?
Scale a bounding box
Rapid row-binding of sf objects
Flexible function to generate bounding boxes
Create matrix representing the spatial bounds of an object
Perform a buffer operation on a temporary projected CRS
Calculate line length of line with geographic or projected CRS
Perform GIS functions on a temporary, projected version of a spatial object
Select a custom projected CRS for the area of interest
Split a spatial object into quadrants
Deprecated functions in stplanr
stplanr-package stplanr
stplanr: Sustainable Transport Planning with R
Spatial points representing home locations
Example destinations data
Data frame of commuter flows
Data frame of invented commuter flows with destinations in a different layer than the origins
Spatial lines dataset of commuter flows
Example of desire line representations of origin-destination data from UK Census
Example segment-level route data
Example of origin-destination data from UK Census
Example of OpenStreetMap road network
Import and format Australian Bureau of Statistics (ABS) TableBuilder files
Spatial lines dataset representing a route network
Spatial lines dataset representing a small route network
Spatial lines dataset of commuter flows on the travel network
Spatial lines dataset of commuter flows on the travel network
Spatial polygons of home locations for flow analysis. | {"url":"https://docs.ropensci.org/stplanr/reference/index.html","timestamp":"2024-11-12T04:15:49Z","content_type":"text/html","content_length":"29420","record_id":"<urn:uuid:1159878b-7846-41cf-8712-d27e9c55274d>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00283.warc.gz"} |
Wave Impedance
Figure C.47 depicts a section of a conical bore which widens to the right connected to a section which narrows to the right. In addition, the cross-sectional areas are not matched at the junction.
The horizontal
Figure C.47: a) Physical picture. b) Waveguide implementation.
Since a piecewise-cylindrical approximation to a general acoustic tube can be regarded as a ``zeroth-order hold'' approximation. A piecewise conical approximation then uses first-order (linear)
segments. One might expect that quadratic, cubic, etc., would give better and better approximations. However, such a power series expansion has a problem: In zero-order sections (cylinders), plane
waves propagate as traveling waves. In first-order sections (conical sections), spherical waves propagate as traveling waves. However, there are no traveling wave types for higher-order waveguide
flare (e.g., quadratic or higher) [357].
Since the digital waveguide model for a conic section is no more expensive to implement than that for a cylindrical section, (both are simply bidirectional delay lines), it would seem that modeling
accuracy can be greatly improved for non-cylindrical bores (or parts of bores such as the bell) essentially for free. However, while the conic section itself costs nothing extra to implement, the
scattering junctions between adjoining cone segments are more expensive computationally than those connecting cylindrical segments. However, the extra expense can be small. Instead of a single, real,
reflection coefficient occurring at the interface between two cylinders of differing diameter, we obtain a first-order reflection filter at the interface between two cone sections of differing taper
angle, as seen in the next section.
Next Section: Generalized Scattering CoefficientsPrevious Section: More General One-Parameter Waves | {"url":"https://www.dsprelated.com/freebooks/pasp/Generalized_Wave_Impedance.html","timestamp":"2024-11-13T12:00:37Z","content_type":"text/html","content_length":"27758","record_id":"<urn:uuid:93164c67-aa77-4eb4-9300-ce36048c38f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00409.warc.gz"} |
deep learning for molecules & materials
6. Deep Learning Overview#
Deep learning is a category of machine learning. Machine learning is a category of artificial intelligence. Deep learning is the use of neural networks to do machine learning, like classify and
regress data. This chapter provides an overview and we will dive further into these topics in later chapters.
Audience & Objectives
This chapter builds on Regression & Model Assessment and Introduction to Machine Learning. After completing this chapter, you should be able to
• Define deep learning
• Define a neural network
• Connect the previous regression principles to neural networks
There are many good resources on deep learning to supplement these chapters. The goal of this book is to present a chemistry and materials-first introduction to deep learning. These other resources
can help provide better depth in certain topics and cover topics we do not even cover, because I do not find them relevant to deep learning (e.g., image processing). I found the introduction the from
Ian Goodfellow’s book to be a good intro. If you’re more visually oriented, Grant Sanderson has made a short video series specifically about neural networks that give an applied introduction to
the topic. DeepMind has a high-level video showing what can be accomplished with deep learning & AI. When people write “deep learning is a powerful tool” in their research papers, they typically
cite this Nature paper by Yann LeCun, Yoshua Bengio, and Geoffery Hinton. Zhang, Lipton, Li, and Smola have written a practical and example-driven online book that gives each example in Tensorflow,
PyTorch, and MXNet. You can find many chemistry-specific examples and information about deep learning in chemistry via the excellent DeepChem project. Finally, some deep learning package provide a
short introduction to deep learning via a tutorial of its API: Keras, PyTorch.
The main advice I would give to beginners in deep learning are to focus less on the neurological inspired language (i.e., connections between neurons), and instead view deep learning as a series of
linear algebra operations where many of the matrices are filled with adjustable parameters. Of course nonlinear functions (activations) are used to join the linear algebra operations, but deep
learning is essentially linear algebra operations specified via a “computation network” (aka computation graph) that vaguely looks like neurons connected in a brain.
A function \(f(\vec{x})\) is linear if two conditions hold:
\[$$f(\vec{x} + \vec{y}) = f(\vec{x}) + f(\vec{y})$$\]
for all \(\vec{x}\) and \(\vec{y}\). And
\[$$f(s\vec{x}) = sf(\vec{x})$$\]
where \(s\) is a scalar. A function is nonlinear if these conditions do not hold for some \(\vec{x}\).
6.1. What is a neural network?#
The deep in deep learning means we have many layers in our neural networks. What is a neural network? Without loss of generality, we can view neural networks as 2 components: (1) a nonlinear function
\(g(\cdot)\) which operates on our input features \(\mathbf{X}\) and outputs a new set of features \(\mathbf{H} = g(\mathbf{X})\) and (2) a linear model like we saw in our Introduction to Machine
Learning. Our model equation for deep learning regression is:
\[$$\hat{y} = \vec{w}g(\vec{x}) + b$$\]
One of the main discussion points in our ML chapters was how arcane and difficult it is to choose features. Here, we have replaced our features with a set of trainable features \(g(\vec{x})\) and
then use the same linear model as before. So how do we design \(g(\vec{x})\)? That is the deep learning part. \(g(\vec{x})\) is a differentiable function composed of layers, which are themselves
differentiable functions each with trainable weights (free variables). Deep learning is a mature field and there is a set of standard layers, each with a different purpose. For example, convolution
layers look at a fixed neighborhood around each element of an input tensor. Dropout layers randomly inactivate inputs as a form of regularization. The most commonly used and basic layer is the dense
or fully-connected layer.
A dense layer is defined by two things: the desired output feature shape and the activation. The equation is:
\[$$\vec{h} = \sigma(\mathbf{W}\vec{x} + \vec{b})$$\]
where \(\mathbf{W}\) is a trainable \(D \times F\) matrix, where \(D\) is the input vector (\(\vec{x}\)) dimension and \(F\) is the output vector (\(\vec{h}\)) dimension, \(\vec{b}\) is a trainable \
(F\) dimensional vector, and \(\sigma(\cdot)\) is the activation function. \(F\), the number of output features, is an example of a hyperparameter: it is not trainable but is a problem dependent
choice. \(\sigma(\cdot)\) is another hyperparameter. In principle, any differentiable function that has a domain of \((-\infty, \infty)\) can be used for activation. However, the function should be
nonlinear. If it were linear, then stacking multiple dense layers would be equivalent to one-big matrix multiplication and we’d be back at linear regression. So activations should be nonlinear.
Beyond nonlinearity, we typically want activations that can “turn on” and “off”. That is, they have an output value of zero for some domain of input values. Typically, the activation is zero,
or close to, for negative inputs.
The most simple activation function that has these two properties is the rectified linear unit (ReLU), which is
\[\begin{split} \sigma(x) = \left\{\begin{array}{lr} x & x > 0\\ 0 & \textrm{otherwise}\\ \end{array}\right. \end{split}\]
6.1.1. Universal Approximation Theorem#
One of the reasons that neural networks are a good choice at approximating unknown functions (\(f(\vec{x})\)) is that a neural network can approximate any function with a large enough network depth
(number of layers) or width (size of hidden layers). There are many variations of this theorem – infinitely wide or infinitely deep neural networks. For example, any 1 dimensional function can be
approximated by a depth 5 neural network with ReLU activation functions with infinitely wide layers (infinite hidden dimension) [LPW+17]. The universal approximation theorem shows that neural
networks are, in the limit of large depth or width, expressive enough to fit any function.
6.1.2. Frameworks#
Deep learning has lots of “gotchas” – easy to make mistakes that make it difficult to implement things yourself. This is especially true with numerical stability, which only reveals itself when
your model fails to learn. We will move to a bit of a more abstract software framework than JAX for some examples. We’ll use Keras, which is one of many possible choices for deep learning
6.1.3. Discussion#
When it comes to introducing deep learning, I will be as terse as possible. There are good learning resources out there. You should use some of the reading above and tutorials put out by Keras (or
PyTorch) to get familiar with the concepts of neural networks and learning.
6.2. Revisiting Solubility Model#
We’ll see our first example of deep learning by revisiting the solubility dataset with a two layer dense neural network.
6.3. Running This Notebook#
Click the   above to launch this page as an interactive Google Colab. See details below on installing packages.
To install packages, execute this code in a new cell.
If you find install problems, you can get the latest working versions of packages used in this book here
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import dmol
6.3.1. Load Data#
We download the data and load it into a Pandas data frame and then standardize our features as before.
# soldata = pd.read_csv('https://dataverse.harvard.edu/api/access/datafile/3407241?format=original&gbrecs=true')
# had to rehost because dataverse isn't reliable
soldata = pd.read_csv(
features_start_at = list(soldata.columns).index("MolWt")
feature_names = soldata.columns[features_start_at:]
# standardize the features
soldata[feature_names] -= soldata[feature_names].mean()
soldata[feature_names] /= soldata[feature_names].std()
6.4. Prepare Data for Keras#
The deep learning libraries simplify many common tasks, like splitting data and building layers. This code below builds our dataset from numpy arrays.
full_data = tf.data.Dataset.from_tensor_slices(
(soldata[feature_names].values, soldata["Solubility"].values)
N = len(soldata)
test_N = int(0.1 * N)
test_data = full_data.take(test_N).batch(16)
train_data = full_data.skip(test_N).batch(16)
Notice that we used skip and take (See tf.data.Dataset) to split our dataset into two pieces and create batches of data.
6.5. Neural Network#
Now we build our neural network model. In this case, our \(g(\vec{x}) = \sigma\left(\mathbf{W^0}\vec{x} + \vec{b}\right)\). We will call the function \(g(\vec{x})\) a hidden layer. This is because we
do not observe its output. Remember, the solubility will be \(y = \vec{w}g(\vec{x}) + b\). We’ll choose our activation, \(\sigma(\cdot)\), to be tanh and the output dimension of the hidden-layer to
be 32. The choice of tanh is empirical — there are many choices of nonlinearity and they are typically chosen based on efficiency and empirical accuracy. You can read more about this Keras API here
, however you should be able to understand the process from the function names and comments.
# our hidden layer
# We only need to define the output dimension - 32.
hidden_layer = tf.keras.layers.Dense(32, activation="tanh")
# Last layer - which we want to output one number
# the predicted solubility.
output_layer = tf.keras.layers.Dense(1)
# Now we put the layers into a sequential model
model = tf.keras.Sequential()
# our model is complete
# Try out our model on first few datapoints
<tf.Tensor: shape=(3, 1), dtype=float32, numpy=
[-0.11751032]], dtype=float32)>
We can see our model predicting the solubility for 3 molecules above. There may be a warning about how our Pandas data is using float64 (double precision floating point numbers) but our model is
using float32 (single precision), which doesn’t matter that much. It warns us because we are technically throwing out a little bit of precision, but our solubility has much more variance than the
difference between 32 and 64 bit precision floating point numbers. We can remove this warning by modifying the last line to be:
At this point, we’ve defined how our model structure should work and it can be called on data. Now we need to train it! We prepare the model for training by calling model.compile, which is where we
define our optimization (typically a flavor of stochastic gradient descent) and loss
model.compile(optimizer="SGD", loss="mean_squared_error")
Look back to the amount of work it took to previously set-up loss and optimization process! Now we can train our model
model.fit(train_data, epochs=50)
That was quite simple!
For reference, we got a loss about as low as 3 in our previous work. It was also much faster, thanks to the optimizations. Now let’s see how our model did on the test data
# get model predictions on test data and get labels
# squeeze to remove extra dimensions
yhat = np.squeeze(model.predict(test_data))
test_y = soldata["Solubility"].values[:test_N]
plt.plot(test_y, yhat, ".")
plt.plot(test_y, test_y, "-")
plt.xlabel("Measured Solubility $y$")
plt.ylabel("Predicted Solubility $\hat{y}$")
min(test_y) + 1,
max(test_y) - 2,
f"correlation = {np.corrcoef(test_y, yhat)[0,1]:.3f}",
min(test_y) + 1,
max(test_y) - 3,
f"loss = {np.sqrt(np.mean((test_y - yhat)**2)):.3f}",
This performance is better than our simple linear model.
6.6. Exercises#
1. Make a plot of the ReLU function. Prove it is nonlinear.
2. Try increasing the number of layers in the neural network. Discuss what you see in context of the bias-variance trade off
3. Show that a neural network would be equivalent to linear regression if \(\sigma(\cdot)\) was the identity function
4. What are the advantages and disadvantages of using deep learning instead of nonlinear regression for fitting data? When might you choose nonlinear regression over deep learning?
6.7. Chapter Summary#
• Deep learning is a category of machine learning that utilizes neural networks for classification and regression of data.
• Neural networks are a series of operations with matrices of adjustable parameters.
• A neural network transforms input features into a new set of features that can be subsequently used for regression or classification.
• The most common layer is the dense layer. Each input element affects each output element. It is defined by the desired output feature shape and the activation function.
• With enough layers or wide enough hidden layers, neural networks can approximate unknown functions.
• Hidden layers are called such because we do not observe the output from one.
• Using libraries such as TensorFlow, it becomes easy to split data into training and testing, but also to build layers in the neural network.
• Building a neural network allows us to predict various properties of molecules, such as solubility.
6.8. Cited References#
Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The expressive power of neural networks: a view from the width. In Proceedings of the 31st International Conference on Neural
Information Processing Systems, 6232–6240. 2017. | {"url":"https://dmol.pub/dl/introduction.html","timestamp":"2024-11-10T11:41:30Z","content_type":"text/html","content_length":"57089","record_id":"<urn:uuid:5d9356a6-49fe-4b67-8ae6-3847d0829bb2>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00638.warc.gz"} |
What is the role of time series analysis in Pearson MyLab Statistics? | Hire Someone To Take MyLab Exam
What is the role of time series analysis in Pearson MyLab Statistics? One year ago, I came across the paper by Dickson and Myers entitled “A new role for random forest analysis of the Pearson’s
association”. I understand this study as one that attempts to assess the impact of various variables on the plot (e.g. demographics, time series, interaction effects, covariates), as well as on the
relationships of the time series in Pearson’s analysis. It is interesting to note that since my aim was to present the study of a non-linear (parallel) relationship between time series and the
Correlation Coefficient (CC) using Pearson’s R method (with addition of the covariates described above), I find myself in need of some technical help with this post. When you have the long-term
predictors that are the source of the data, you will need additional quantities to plot, and thus these later become of statistical significance (i.e. some methods can make similar plots). I have
found that measures of interaction, which is the best measure of whether a variable or a correlation is causally related, can give better results than any other measures. But does such methods even
exist? This is to highlight one aspect of Pearson’s method, which is called cross-correlation. In analysis of a correlation coefficient, the amount of co-varying effects/coefficients can be looked up
using the Spearman correlation function [sic]. Thus what I was trying to study was the expression for a Pearson correlation coefficient, as a cross-correlation function. If a cross-correlation
coefficient is real, it means it is constant: Now we are essentially in place to evaluate Pearson’s correlation function given that the underlying function takes on values. This gives us an idea that
– as we proceed – go can reduce the cross-correlation to a simple representation of factoriality, with “factorial” as an acronym, when the question is whether a cross-correlation graph is of causal
nature. Actually, I was speaking to teachers, who are often asked to do as much as they can to evaluate the correlation coefficient, where factors need to be in both linear and non-linear
relationships, and all of two or more different structures, i.e. temporal structures. Often students will pull out the question of whether it is causal, and then do a “k” calculation, which gives us
a true significance of the sample-out. Are you familiar with the definition of a cross-correlation? If my teacher is doing this analysis with a very very special question. I asked a lot of students,
“do you think the age of this question will be decisive”, and they were quite interested in the topic.
Who Can I Pay To Do My Homework
Of course I would have gotten really interested earlier than I would have done since this might be an important step in learning if something similar is made. What motivated usWhat is the role of
time series analysis in Pearson MyLab Statistics? Coincidentally, time series are used to describe time series data. They can be labeled by the time, then described by the time series. They can be
displayed as in Pearson’s MyLab, but in these examples, it seems like an easy way to illustrate time series without much sense of name. We’ve seen that time series visualization is different from the
way data are labeled in Pearson and that there are many other ways of doing various aspects of the data while using the time series. Another common way in which time series statistics pay someone to
do my pearson mylab exam is through using a time label. I’d choose to see this one as a simple example: A and a I want to show you the labels associated to each time line A and a How should we know
the time series? Are we looking at the label where I can see the series title? or the time series summary, more precisely, the time series summary A and a How should we distinguish between two types
Neh and Ig. Do you think we are talking about the time scenery of a place in the world? Suppose I should click on the Neh label on the right of the X box to see what time I should enter at that
location, then I will click on the Ig label. That is, if I see the picture, the Ig is visible automatically, but the Neh is hidden (but there is a white and orange circle around the title). Suppose
you click on the Ig title label, what happens? After reordering the time series labeling, I can click on the MyLab report, easily, and the time series will show. A, C, I, (some types) I said (the
time list) I told you what time list are you all thinking of clicking, sort, the time series summary, but I wasn’t understanding what time list make you think, I explain what I want and, put the sum
around, that’s whyWhat is the role of time series analysis in Pearson MyLab Statistics? In Pearson MyLab Statistics, linear regression is employed to ‘make’ a linear regression between individuals’
time series. If we measure a person’s average length times each event, Pearson MyLab’s metric provides linear regression (or transformation) with two variable scores of 0 and 1 obtained on a 1 to
100000pxes basis. Pearson MyLab analysis uses standard Pearson Pearson curves, created on a grid of events in the sample, to evaluate the influence of random sampling on each individual’s time
series. Pearson Pearson data used in Pearson MyLab analysis were widely used to measure event-driven variability across population sizes. Although Pearson Pearson data do identify the individual’s
population size of interest, a lot of other studies are in the process of collecting their own data. These data are not yet in quantitative form as Pearson Pearson is less commonly used, but a lot of
researchers are having cremation and recoiling away with Pearson Pearson data due to the inability or unwillingness of Pearson Pearson members to perform a standard Pearson Pearson circle (or linear
regression approach). This problem is due to the power in the Pearson data to generalise to larger populations, thus identifying differences in population size across different time series. Pearson
Pearson data were originally collected from a period when historical timeset was somewhat old. It was not until the middle of the last century that individual differences in population size were
revealed for the period when the time series data was measured. While Pearson Pearson scatter plots were created from a sample, and prior to each log-period, we assumed the standard Pearson Pearson
scatter plot to show homogeneity in each time series within the sample.
Massage Activity First Day Of Class
However we argue that the effects of small sample sizes can be a significant limitation in our survey; for example, a small sample for ’1’ would tend to have a longer sample period, resulting in a
stronger scatter plot. However, this does not mean the number of times you can get a Pearson Pearson | {"url":"https://takemylab.com/what-is-the-role-of-time-series-analysis-in-pearson-mylab-statistics","timestamp":"2024-11-13T16:02:57Z","content_type":"text/html","content_length":"130366","record_id":"<urn:uuid:dbd98300-1a9f-4b00-a893-8bf86e5107dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00448.warc.gz"} |
The Genius Of the Bernoulli Family
A family’s mathematical and scientific mastery
Have you ever wondered what it would be like to have a family reunion where everyone’s discussing calculus over dinner? No? Well, neither have I, until I dived into the saga of the Bernoulli family.
This isn’t just any family; this is a dynasty of mathematicians whose penchant for numbers reshaped the scientific landscape. But let’s cut through the legend: were they geniuses, or just at the
right place at the right time? The family produced eight prominent academics, most notably Jacob, Johann, and Daniel Bernoulli, who were among the pioneers of calculus, differential equations,
probability theory, and fluid mechanics. In this article, we will explore the life and work of each member of the Bernoulli family, and how they influenced the development of mathematics and physics
during the early modern period.
A flowchart showing Bernoulli family tree. Graphics by the author.
Jacob Bernoulli (1654–1705)
Let’s start with Jacob Bernoulli, the family’s opening act in the late 17th century. Jacob, the man who introduced us to the mathematical constant e, was clearly a brainiac. His magnum opus, “Ars
Conjectandi,” was a game-changer in probability theory. Jacob Bernoulli, also known as James or Jacques, was the eldest son of Niklaus Bernoulli, a spice merchant and alderman of Base. He studied
theology and philosophy at the University of Basel, but his interest in mathematics was sparked by reading the works of René Descartes and Blaise Pascal. He became a professor of mathematics at the
University of Basel in 1687, and corresponded with many leading mathematicians of his time, such as Gottfried Leibniz, Christiaan Huygens, and Guillaume de l’Hôpital.
He is best known for his work on the theory of probability and combinatorics, especially his book Ars Conjectandi (The Art of Conjecturing), which was published posthumously in 1713. In this book, he
introduced the concept of expected value, the law of large numbers, the Bernoulli distribution, and the Bernoulli trials. He also proved the binomial theorem for any rational exponent, and derived
many important formulas and identities involving binomial coefficients, such as the Pascal’s triangle, the Vandermonde identity, and the Chu-Vandermonde identity.
Jacob Bernoulli also made important contributions to the calculus of variations, a branch of mathematics that deals with finding the optimal shape or function that minimizes or maximizes a given
quantity. He was the first to pose and solve the brachistochrone problem, which asks for the curve of fastest descent between two points under the influence of gravity. He also solved the
isoperimetric problem, which asks for the curve of maximum area enclosed by a given length. He coined the term “lemniscate” for the figure-eight shaped curve that bears his name, and studied its
Jacob once quipped,
“I recognize the lion by his paw.”
Well, we recognize genius by its brainpower, but was it genius or just Jacob capitalizing on the burgeoning field of mathematics? Jacob died of tuberculosis in 1705, at the age of 50. He was buried
in the cloister of the Münster of Basel, where his epitaph reads: “Eadem mutata resurgo” (I rise again, changed but the same). His grave is marked by a lemniscate, symbolizing his mathematical
Johann Bernoulli (1667–1748)
Then came Johann, Jacob’s younger brother. Ever lived in the shadow of a brilliant sibling? Johann didn’t just live there; he thrived. While he might have been seen as riding on Jacob’s coattails, he
was, in fact, carving his own niche, particularly in the world of calculus. He was the kind of guy who would challenge Leibniz to a math duel via snail mail. Johann Bernoulli, also known as Jean, was
the younger brother of Jacob Bernoulli, and the most prolific and influential member of the Bernoulli family. He studied medicine and mathematics at the University of Basel, where he became a
professor of mathematics in 1695, succeeding his brother. He also held positions at the University of Groningen, the Academy of Sciences in Paris, and the Imperial Academy of Sciences in St.
Petersburg. He was a close friend and collaborator of Gottfried Leibniz, and a fierce rival of Isaac Newton.
Johann Bernoulli was one of the first and foremost proponents of the infinitesimal calculus, which he learned from Leibniz. He threw calculus at everything: particle motion, pendulum swings, the way
a chain dangles, and even how light bends. Not to mention, he was a whiz at separation of variables, exponential calculus, and the calculus of finite differences. He discovered the fundamental
theorem of calculus — that whole “derivative of an integral is the integrand” spiel.
Johann Bernoulli also contributed to the calculus of variations, along with his brother Jacob. He solved the brachistochrone problem independently, and posed and solved the tautochrone problem, which
asks for the curve along which a particle falls to the lowest point in the same time, regardless of its initial position. He also introduced the Euler-Lagrange equation, which is the main tool for
finding the extremals of a functional.
Johann wasn’t just about calculus. He dipped his toes in number theory, geometry, algebra, and analysis. He proved e is irrational, played around with harmonic series, exponential functions, and
logarithms. He invented the polar coordinates, and used them to study the curves generated by the motion of a point attached to a rotating arm. He also introduced the concept of the envelope of a
family of curves, and the evolute and involute of a curve.
Johann was also a renowned teacher and mentor. He mentored the next-gen math whizzes like Leonhard Euler, Jean le Rond d’Alembert, Pierre Louis Maupertuis, and Joseph Louis Lagrange. Not to mention
his sons, Daniel and Johann II, who didn’t fall far from the math tree.
Johann Bernoulli died in 1748, at the age of 80. He was buried in the same cloister as his brother Jacob, where his epitaph reads: “Archimedes, Newton, and he” (Archimedes, Newton, et ille).
Jacob and Johann Bernoulli discussing mathematics © Photos.com/Jupiterimages
Daniel Bernoulli (1700–1782)
And how can we forget Daniel Bernoulli, Johann’s son? You know, the ‘Bernoulli Principle’ guy? His work in fluid dynamics was nothing short of revolutionary. But let’s be honest: was he standing on
the shoulders of giants? His “Hydrodynamica” wasn’t just a stroke of genius; it was a compilation of a family’s lifelong obsession with numbers.
Daniel Bernoulli was a mathematician and physicist who is best known for his work on fluid dynamics and probability theory. He studied medicine and mathematics at the University of Basel, where he
obtained his doctorate in 1721. He also traveled to Italy, France, and Russia, where he met and worked with many eminent scientists, such as Leonhard Euler, Alexis Clairaut, and Christian Goldbach.
In 1725, he snagged the role of mathematics professor at the University of St. Petersburg, and by 1733, he was diving into the worlds of anatomy and botany as a professor at the University of Basel.
Daniel is most famous for his book Hydrodynamica (Hydrodynamics), which was published in 1738. In this book, he applied the principles of conservation of energy and momentum to the flow of fluids,
and derived the equation that bears his name, which relates the pressure, velocity, and height of a fluid in a pipe or a channel. He also explained the phenomenon of the Venturi effect, which is the
reduction of pressure and increase of velocity of a fluid when it passes through a narrow section of a pipe. He also studied the flow of blood in the human body, and the effect of air resistance on
the motion of projectiles.
He also made important contributions to the theory of probability and statistics, especially in relation to the applications of mathematics to social and natural sciences. He developed the concept of
expected utility, which is a measure of the value of a risky outcome based on the probability of its occurrence and the utility of its consequences. He used this concept to resolve the St. Petersburg
paradox (which was invented by his own cousin Nicolas Bernoulli), which is a problem that involves a game with an infinite expected value, but a finite expected utility. He also introduced the
concept of the standard deviation, which is a measure of the dispersion of a set of data around its mean.
[Left] Portait of Daniel Bernoulli created in the early 1720s. Courtesy of Basel Historical Museum. [Right] Daniel’s Hydrodynamica (1738), wikimedia commons image.
Daniel died in 1782, at the age of 82. He was buried in the cloister of the Münster of Basel, near his father and uncle. His epitaph reads: “He was the greatest of the Bernoullis” (Ille fuit maximus
Other members of the Bernoulli family
Nicolaus Bernoulli, another family star, often gets overshadowed. But his contributions to probability and statistics were significant. His exchange with Pierre Rémond de Montmort on probability was
more than just scholarly banter; it was the intellectual equivalent of a fencing match.
As the 18th century rolled on, the family’s mathematical flame didn’t flicker. Johann II, Daniel II, Johann III — the sequels kept coming. But here’s the kicker: were they innovating, or just
Reflecting on this, one can’t help but wonder: was the Bernoulli family’s success a product of sheer genius, or were they just riding the wave of the scientific revolution? Did they shape the course
of mathematical history, or were they simply at the right place in the right chronological order? It’s easy to get lost in the romanticism of a family of geniuses pushing the boundaries of science.
But maybe, just maybe, the Bernoullis were a product of their time — a perfect storm of opportunity, intellect, and yes, a bit of familial competition.
In the end, whether they were geniuses or just incredibly well-positioned, the Bernoullis left an indelible mark on science. Their story isn’t just about formulas and theorems; it’s about ambition,
rivalry, and the relentless pursuit of knowledge. And perhaps, that’s the real genius of the Bernoulli saga.
Jacob, Daniel, and Johann Bernoulli. Graphics by the author. Portraits from the 17th and 18th century (public domain images).
Bernoulli, Jakob I. “THE BERNOULLI FAMILY.” The 17th and 18th Centuries: Dictionary of World Biography, Volume 4 4 (2013): 122.
Eves, Howard. “Historically Speaking — : The Bernoulli Family.” The Mathematics Teacher 59.3 (1966): 276–278.
Senn, Stephen. “Bernoulli family.” Encyclopedia of Statistics in Behavioral Science (2005).
Thank you so much for reading. If you liked this story don’t forget to press that clap icon as many times as you want. If you like my work and want to support me then you can Buy me a coffee ☕️. Keep
following for more such stories! | {"url":"https://readmedium.com/the-genius-of-the-bernoulli-family-07226ba1e9b6","timestamp":"2024-11-12T03:16:43Z","content_type":"text/html","content_length":"95447","record_id":"<urn:uuid:6e650d97-9e23-4fa7-b3c5-3bc41a28e704>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00598.warc.gz"} |
How do you find the domain and range of f(x)= sqrt(4-x^2) /( x-3)? | Socratic
How do you find the domain and range of #f(x)= sqrt(4-x^2) /( x-3)#?
1 Answer
Domain $\left\{x : \mathbb{R} , - 2 \le x \le 2\right\}$
Range $\left\{y : \mathbb{R} , - \frac{2}{\sqrt{5}} \le y \le 0\right\}$
Domain is quite evident that $x \ne 3$ and $x \le + 2$ and $- 2 \le x$ which essentially boils down to $\left\{x : \mathbb{R} , - 2 \le x \le 2\right\}$
For finding the range, consider the domain from which it can be seen that y would never have any positive value and at end points +2, -2 it is 0, hence $y \le 0$ on the upper side.
Next square both sides, so that it is a quadratic equation in x,
${x}^{2} \left({y}^{2} + 1\right) - 6 {y}^{2} x + 9 {y}^{2} - 4 = 0$. Solve it for x using quadratic formula, x=# (3y^2 +- sqrt( -5y^2 +4) )/((y^2+1) #
For a function Real to Real $5 {y}^{2} \le 4$ or $\pm y \le \frac{2}{\sqrt{5}}$. Since it is already settled that $y \le 0$ on the upper side, reject $y \le \frac{2}{\sqrt{5}}$' Hence it should be $-
y \le \frac{2}{\sqrt{5}}$ or $y \ge - \frac{2}{\sqrt{5}}$.
The range of the function would this be$\left\{y : \mathbb{R} , - \frac{2}{\sqrt{5}} \le y \le 0\right\}$
Impact of this question
2372 views around the world | {"url":"https://socratic.org/questions/how-do-you-find-the-domain-and-range-of-f-x-sqrt-4-x-2-x-3","timestamp":"2024-11-04T01:18:07Z","content_type":"text/html","content_length":"34760","record_id":"<urn:uuid:04ce9f87-985a-41a2-860a-87d93e1c43b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00003.warc.gz"} |
Engineering Cost Estimates Jul 2021 - Hoitsma Blog
Engineering Cost Estimates Jul 2021
Estimates -- aka Adding in Quadrature
Uncertainty Analysis
To illustrate a simple but often ignored concept – what is the estimated total cost of the following?
Item Cost
alpha 100,000 5,000
beta 75,000 7,500
gamma 50,000 6,000
.. .. ..
total 225,000 what is the total plus-minus range?
I say the answer is:
Item Cost
.. .. ..
total cost 225,000 11,000 (10,828)
Notice that the plus-minus range for the total cost is less than the sum of the plus-minus ranges. This will always be true when adding in quadrature.
Uncertainty Analysis Reference
In accordance with the rules of quadrature, our uncertainty is:
Of course, no more than 2 significant digits are warranted here; therefore
The quantities alpha, beta, and gamma must have uncertainties which are uncorrelated and random.
Example Calculation
If say
Therefore, if
same cost
(but still independent variables) were considered, with a constant
Here the plus-minus range for the total cost,
If random cost (and still independent variables) were considered, with a constant | {"url":"https://www.mathpax.com/engineering-cost-estimates/","timestamp":"2024-11-07T21:33:42Z","content_type":"text/html","content_length":"86756","record_id":"<urn:uuid:f03cb89b-51a7-41bc-8e44-cbb2288f4b06>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00638.warc.gz"} |
The Challenge of Finding a Math Help Online - MathTrench - Thousands of Solved Math and Stats Homework Problems
The Challenge of Finding Great Math Help Online
Mathematics is a core science subject. The study of mathematics needs to be very strong and strictly correct to progress on the subject and to achieve best results. It would be grossly unfair to
assume that, like any other subject, mathematics cannot be necessarily approached easily and stress free, especially when looking to pursue a home curriculum.
Yes, there is some common ground rules to studying at home, but in the case of mathematics there are some specialized techniques to be followed. Mathematics, or math as is very commonly abbreviated,
has many resources available for help. Students need these resources to progress through their study of math. At school level where the tradition of study through homework still is painfully
mandatory, so to speak.
How About the Online Tutoring Sites?
Math homework help is now available at the click of a mouse and like the famed genie smoking out of the magic lamp, a number of sites now offer easy and quick homework help to students of all grades.
Parents do not have to pull their hair apart worrying how to help the child with the math problems, since everything is now available at the comfort of a click.
If you are to search the major search engines with the term “math homework help” you will end up with at least 3 million results.
Makes sense why mathematics still remains the king of all subjects in school dreaded and feared by one and all. Mathematics is ironically not that complex and difficult a subject though at school.
There is a structured approach to the study of mathematics and students are seldom guided by this structured approach. At school the student is required to learn by heart the wonderful formulae that
his teacher gives him to learn without investing a minute in explaining why the study of mathematics require formulae in the first place.
Will the Results be What I Expect?
The study of mathematics is enjoyable if there are interesting examples put in place to make the session a lively one with the class enjoying every bit of their time. Sadly this is not the case.
Students are made to pound down the formulae in their little brains and then given a dossier of homework to finish before the next session starts.
Who is ultimately gaining by this dubious approach to the subject? None other than the mushrooming websites who invariably have a page attached in them that acts as the cash counter collecting cash
through credit card processing.
Mathematics syllabus is written and distributed among the teachers and the taught by pundits who fear to exhibit any strain of uneasiness should a question be posed to them by a student. In view of
this they are armed with cheap guide books and test papers that do not have the necessary fundamentals of the subject in the first place leading to misleading results and calculation deficiencies.
There are a number of genuine blog posts, forums and chat rooms that are constantly on the lookout for genuine queries and problems.
Furthermore, some of these sites go a step further in referring the student to the entire knowledge base that they work with. This causes a healthy interaction between the student and the teacher
leading to a positive effect in the mind of the student.
He or she would now be longing to know more about the natty grittiest of the subject and probe deeper to find out other derivatives and alternatives. Thus a positive relationship is born and with the
student thirsting to know more the avowed purpose of the person managing the site is accomplished.
Nevertheless, it would always be advisable to students seeking homework help in math to run through a good number of websites before choosing one that best suits his needs.
In case you have any question, you can contact us. | {"url":"https://mathtrench.com/challenge-finding-math-help-online","timestamp":"2024-11-14T18:31:15Z","content_type":"text/html","content_length":"87314","record_id":"<urn:uuid:cea34e1f-4c84-4834-9f5c-e1aacd961835>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00832.warc.gz"} |
seminars - Approximations with mod 2 congruence conditions
In the study of Diophantine approximation, a natural question is which rationals p/q minimize |qx-p| with a bounded condition over q. We call such rationals the best approximations. The regular
continued fraction gives an algorithm generating the best approximations. From a general perspective, we are interested in the best approximations with congruence conditions on their numerators and
denominators. It is known that the continued fraction allowing only even integer partial quotients generates the best approximations whose numerator and denominator have different parity. In this
talk, we explain the connection between the best approximations and the Ford circles. Then we explain how we can induce continued fraction algorithms that give the best approximating rationals with
congruence conditions of modulo 2. This is joint work with Dong Han Kim and Lingmin Liao. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&page=22&l=en&sort_index=date&order_type=desc&document_srl=1001621","timestamp":"2024-11-09T03:15:29Z","content_type":"text/html","content_length":"45685","record_id":"<urn:uuid:f3e83374-b412-444f-839d-d6fb04449d14>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00863.warc.gz"} |
An Influence of the Wall Acoustic Impedance on the Room Acoustics. The Exact Solution
Archives of Acoustics, 42, 4, pp. 677–687, 2017
An Influence of the Wall Acoustic Impedance on the Room Acoustics. The Exact Solution
The Fourier method is applied to the description of the room acoustics field with the combination of uniform impedance boundary conditions imposed on some walls. These acoustic boundary conditions
are expressed by absorption coefficient values In this problem, the Fourier method is derived as the combination of three one-dimensional Sturm-Liouville (S-L) problems with Robin-Robin boundary
conditions at the first and second dimension and Robin-Neumann ones at the third dimension. The Fourier method requires an evaluation of eigenvalues and eigenfunctions of the Helmholtz equation, via
the solution of the eigenvalue equation, in all directions. The graphic-analytical method is adopted to solve it It is assumed that the acoustic force constitutes a monopole source and finally the
forced acoustic field is calculated. As a novelty, it is demonstrated that the Fourier method provides a useful and efficient approach for a room acoustics with different values of wall impedances.
Theoretical considerations are illustrated for rectangular cross-section of the room with particular ratio. Results obtained in the paper will be a point of reference to the numerical calculations.
Keywords: Fourier analysis; architectural acoustics; absorption coefficients; boundary-value problems
Copyright © Polish Academy of Sciences & Institute of Fundamental Technological Research (IPPT PAN).
Alquran M., Al-Khaled K. (2010), Approximations of Sturm-Liouville eigenvalues using sin-Galerkin and differential transform methods, Applications and Applied Mathematics, 5, 128–147.
Bistafa R.S., Morrissey J.W. (2003), Numerical solution of the acoustic eigenvalue equation in the rectangular room with arbitrary (uniform) wall impedances, Journal of Sound and Vibration, 263,
Blackstock D.T. (2000), Fundamentals of Physical Acoustics, Wiley-Interscience, New York.
Bowles R. (2007), MATHM242 Lecture Notes – first half, University College London, London.
Brański A. (2013), Numerical methods to the solution of boundary problems, classification and survey [in Polish], Rzeszow University of Technology Press, Rzeszow. ISBN 978-83-7199-792-1.
Brański A., Borkowska D. (2015), Effectiveness of nonsingular solutions of the boundary problems based on Trefftz methods, Engineering Analysis with Boundary Elements 59, 97–104.
Brański A., Borkowski M., Borkowska D. (2012), A comparison of boundary methods based on inverse variational formulation, Engineering Analysis with Boundary Elements, 36, 505–510.
Chen W., Zhang J.Y., Fu Z.J. (2014), Singular boundary method for modified Helmholtz equations, Engineering Analysis with Boundary Elements, 44, 112–119.
Dance S., Van Buuren G. (2013), Effects of damping on the low-frequency acoustics of listening rooms based on an analytical model, Journal of Sound and Vibration, 332, 6891–6904.
Dautray R., Lions J.L. (2000), Mathematical analysis and numerical methods for science and technology, Springer, Berlin.
Ducourneaua J., Planeaub V. (2003), The average absorption coefficient for enclosed spaces with nonuniformly distributed absorption, Applied Acoustics, 64, 845–862.
Evans L.C. (2002), Partial differential Equations [in Polish], WN PWN, Warszawa.
Fasshauer G. (2011), MATH 461: Fourier Series and Boundary Value Problems. Chapter V: Sturm-Liouville Eigenvalue Problems, Department of Applied Mathematics Illinois Institute of Technology, Fall.
Fu Z. J., Chen W., Gu Y. (2014), Burton–Miller-type singular boundary method for acoustic radiation and scattering. Journal of Sound and Vibration, 333, 3776–3793.
Gerai M. (1993), Measurement of the sound absorption coeffiients in situ: the reflection method using periodic pseudo-random sequences of maximum length, Applied Acoustics, 39, 119–139.
International Organization for Standardization ISO 354 (2003), Acoustics – measurement of sound absorption in a reverberation room.
International Organization for Standardization ISO 10354-1 (1996), Acoustics – determination of sound absorption coefficient and impedance in impedance tube. Part 1: method using standing wave ratio.
Johnson R.S. (2006), An introduction to Sturm-Liouville theory. School of Mathematics & Statistics – University of Newcastle upon Tyne.
Kashdan E. (2017), ACM 30020 Advanced Mathematical Methods, Sturm Liouville Problem (SLP), http://mathsci.ucd.ie/~ekashdan/page2/SLP (accessed March 17, 2017).
Korany N., Blauert J., Abdel Alim O. (2001), Acoustic simulation of rooms with boundaries of partially specular reflectivity, Applied Acoustic, 62, 875–887.
Korn G.A., Korn T.M. (1968), Mathematical handbook for scientists and engineers, Mc Graw-Hill Book Company, New York.
Kuttruff H. (2000), Room Acoustics, Fundamentals of Physical Acoustics, Wiley-Interscience, New York.
Lehmann E., Johansson A. (2008), Prediction of energy decay in room impulse responses simulated with an image-source model, Journal of the Acoustical Society of America, 124, 269–277.
Lin J., Chen W., Chen C.S. (2014), Numerical treatment of acoustic problems with boundary singularities by the singular boundary method, Journal of Sound and Vibration, 333, 3177–3188.
Lopez J., Carnicero D., Ferrando N., Escolano J. (2013), Parallelization of the finite-difference time-domain method for room acoustics modelling based on CUDA, Mathematical and Computer Modelling,
57, 1822–1831.
Luizard P., Polack J. P., Katz B. (2014), Sound energy decay in coupled spaces using a parametric analytical solution of a diffusion equation, Journal of the Acoustical Society of America, 135,
McLachlan N.W. (1964), Bessel Functions for Engineers [in Polish], PWN, Warszawa.
Meissner M. (2009), Computer modelling of coupled spaces: variations of eigenmodes frequency due to a change in coupling area. Archives of Acoustics 34, 157–168.
Meissner M. (2009), Spectral characteristics and localization of modes in acoustically coupled enclosures, Acta Acustica united with Acustica, 95, 300–305.
Meissner M. (2010), Simulation of acoustical properties of coupled rooms using numerical technique based on modal expansion, Acta Physica Polonica A, 118, 123–127.
Meissner M. (2012), Acoustic energy density distribution and sound intensity vector field inside coupled spaces, Journal of the Acoustical Society of America, 132, 228–238.
Meissner M. (2013), Analytical and numerical study of acoustic intensity field in irregularly shaped room, Applied Acoustics 74, 661¬–668.
Meissner M. (2013), Evaluation of decay times from noisy room responses with puretone excitation, Archives of Acoustics, 38, 47–54.
Meziani H. (2016), Sturm–Liouville Problems, Generalized Fourier Series. http://www2.fiu.edu/~meziani/NOTE9.pdf (accessed May 11, 2016).
Morse P.M., Ingard K.U. (1987), Theoretical acoustics, Princeton University Press, New Jersey.
Naka Y., Oberai A.A., Shinn-Cunningham B.G. (2005), Acoustic eigenvalues of rectangular rooms with arbitrary wall impedances using the interval Newton/generalized bisection method, Journal of the
Acoustical Society of America, 118, 3662–3671.
NETA B. (2012), Partial differential equations, MA 3132 Lecture notes. Monterey, California 93943.
Okuzono T., Otsuru T., Tomiku R., Okamoto N. (2014), A finite-element method using dispersion reduced spline elements for room acoustics simulation, Applied Acoustics, 79, 1–8.
Peirce A. (2014), Lecture 28: Sturm-Liouville Boundary Value Problems, Introductory lecture notes on Partial Differential Equations. The University of British Columbia, Vancouver.
Summers J. (2012), Accounting for delay of energy transfer between coupled rooms in statistical-acoustics models of reverberant-energy decay. Journal of the Acoustical Society of America 132,
Takahashi Y., Otsuru T., Tomiku R. (2005), In situ measurements of surface impedance and absorption coefficients of porous materials using two microphones and ambient noise, Applied Acoustics, 66,
Thompson L.L, Pinsky P.-M. (1995), A Galerkin Least Squares Finite Element Method for the Two-Dimensional Helmholtz Equation, International Journal Numerical Methods Engineering, 38, 371–397.
Xu B., Sommerfeldt S. (2010), A hybrid modal analysis for enclosed sound fields, Journal of the Acoustical Society of America, 128, 2857–2867. | {"url":"https://acoustics.ippt.gov.pl/index.php/aa/article/view/2046","timestamp":"2024-11-13T18:25:52Z","content_type":"text/html","content_length":"30575","record_id":"<urn:uuid:0201db69-fb11-4f71-96e2-dd87607a18d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00867.warc.gz"} |
Monitor Solution Process with
Monitor Solution Process with optimplot
This example shows how you can use the optimplot plot function to monitor several aspects of the solution process.
Note: Currently, optimplot is available only for the fmincon solver.
Problem Definition and First Solution
The problem is to minimize the objective function of two variables
in the region $\left(y+{x}^{2}{\right)}^{2}+0.1{y}^{2}\le 1$, $x\ge -5$, $y\ge -5$. Express this problem using optimization variables.
x = optimvar("x",LowerBound=-5);
y = optimvar("y",LowerBound=-5);
prob = optimproblem;
prob.Objective = (x+y)*exp(-y);
prob.Constraints = (y + x^2)^2 + 0.1*y^2 <= 1;
Set an initial point of $x=1,y=-3/2$.
Which solver does solve call?
defaultSolver = solvers(prob)
defaultSolver =
Set fmincon options to use the optimplot plot function, and solve the problem.
opts = optimoptions("fmincon",PlotFcn="optimplot");
[sol,fval,eflag,output] = solve(prob,x0,Options=opts);
Solving problem using fmincon.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
Plot Details
Examine each section of the plot, beginning with the top section.
The plot shows that the initial point (at the upper left) is feasible, because it is plotted as a blue dot. "Feasible" means the point satisfies all constraints, "infeasible" means a point does not
satisfy at least one constraint. The next several points are red dots, which means they are infeasible. The right portion of the plot contains all blue dots, indicating feasible iterations.
Next, examine the variable plot, which is the lower-left section.
The final point is plotted as a bold line. The intermediate iterations are plotted as fainter dotted lines.
Finally, examine the lower-right section, which contains the stopping criteria.
This section provides the following information:
• Optimality Measure --- This bar shows that the first-order optimality measure is satisfied. The marker is in the green region, and the value is less than the OptimalityTolerance value of 1e–6.
This tolerance stopped the solver.
• Constraint Measure --- This bar shows that the constraints are all satisfied to within the value of the ConstraintTolerance, 1e–6. This tolerance is not an actual stopping criterion. Instead,
solvers attempt to continue when the constraints are not satisfied.
• Step Limit --- This bar shows that the solver was not stopped by the StepTolerance tolerance.
• Objective Limit --- This bar shows that the solver was not stopped by the ObjectiveLimit tolerance.
• Function Evaluations --- This bar shows that the solver was not stopped by the MaxFunctionEvaluations tolerance.
• Iterations --- This bar shows that the solver was not stopped by the MaxIterations tolerance.
Note that a solver can use relative or scaled values of stopping criteria. For details, see Tolerance Details.
The three sections of the plot are synchronized. When you click a point in the top section, all three sections display values associated with that point. Click the fifth iteration point.
The fifth iteration point has the lowest objective function value. However, the point is not feasible. As shown in the top section, the point is plotted in red, and has a constraint violation of over
13. The Constraint Measure bar in the Stopping Criteria section shows the same value for the violation, which also indicates that the point is infeasible. To remove the displayed values for the fifth
iteration, click the point again.
Dynamic Plotting Range
You might have noticed that the initial point was not visible during the early iterations while the solver was running. Below is a close-up of the plot function, paused after the seventh iteration.
The point for iteration 0 is not plotted because it is outside the plot range. The optimplot plot function attempts to show relevant ranges as the iterations proceed, so that you can observe
convergence more easily.
Search for Better Solution
The bottom of the Stopping Criteria section shows the reason the solver stopped, and provides a link for more information.
Click the link, and you get the following information and link.
Click the link When the Solver Succeeds. In the resulting documentation page, the first suggestion is to change the initial point. So, search for a better solution by changing the initial point.
x0.x = -1;
x0.y = -1;
[sol2,fval2,eflag2,output2] = solve(prob,x0,Options=opts);
Solving problem using fmincon.
Local minimum found that satisfies the constraints.
Optimization completed because the objective function is non-decreasing in
feasible directions, to within the value of the optimality tolerance,
and constraints are satisfied to within the value of the constraint tolerance.
This time, the plot shows that fmincon reaches an objective function value of $-116.765$, which is better (lower) than the value at the first solution, $-32.837$.
The optimplot plot function shows many of the statistics associated with solver iterations. In one plot, you can view the feasibility of the iterative points, the various measures used to stop the
iterations, and the coordinates of the iterative points, with the later points plotted in bold. However, the optimplot plot function does not guarantee that the displayed solutions are global
solutions. The first solution in this example is a local solution, but not a global solution. Interpreting the results appropriately still requires good judgment.
See Also
Related Topics | {"url":"https://uk.mathworks.com/help/optim/ug/optimplot-detailed-plot.html","timestamp":"2024-11-02T04:25:50Z","content_type":"text/html","content_length":"83964","record_id":"<urn:uuid:6e5496d0-a5b9-4127-bfec-da072cea76bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00525.warc.gz"} |
Denoising stacked autoencoders for transient electromagnetic signal denoising
Articles | Volume 26, issue 1
© Author(s) 2019. This work is distributed under the Creative Commons Attribution 4.0 License.
Denoising stacked autoencoders for transient electromagnetic signal denoising
The transient electromagnetic method (TEM) is extremely important in geophysics. However, the secondary field signal (SFS) in the TEM received by coil is easily disturbed by random noise, sensor
noise and man-made noise, which results in the difficulty in detecting deep geological information. To reduce the noise interference and detect deep geological information, we apply autoencoders,
which make up an unsupervised learning model in deep learning, on the basis of the analysis of the characteristics of the SFS to denoise the SFS. We introduce the SFSDSA (secondary field signal
denoising stacked autoencoders) model based on deep neural networks of feature extraction and denoising. SFSDSA maps the signal points of the noise interference to the high-probability points with a
clean signal as reference according to the deep characteristics of the signal, so as to realize the signal denoising and reduce noise interference. The method is validated by the measured data
comparison, and the comparison results show that the noise reduction method can (i) effectively reduce the noise of the SFS in contrast with the Kalman, principal component analysis (PCA) and wavelet
transform methods and (ii) strongly support the speculation of deeper underground features.
Received: 12 Sep 2018 – Discussion started: 02 Oct 2018 – Revised: 16 Jan 2019 – Accepted: 09 Feb 2019 – Published: 01 Mar 2019
Through the analysis of the secondary field signal (SFS) in the transient electromagnetic method (TEM), the information of underground geological composition can be obtained and has been widely used
in mineral exploration, oil and gas exploration, and other fields (Danielsen et al., 2003; Haroon et al., 2014). Due to the small amplitude of the late field signal in the secondary field, it may be
disturbed by random noise, sensor noise, human noise and other interference (Rasmussen et al., 2017), which leads to data singularities or interference points, and thus the deep geological
information can not be reflected well. Therefore, it is necessary to make full use of the characteristics of the secondary field signal to reduce the noise in the data and increase the effective
range of the data.
Many methods have been developed for noise reduction of the transient electromagnetic method. These methods can be broadly categorized into three groups: (1) Kalman filter algorithm (Ji et al.,
2018), (2) wavelet transform algorithm (Ji et al., 2016; Li et al., 2017) and (3) principal component analysis (PCA) (Wu et al., 2014). Kalman filtering is an effective method in linear systems, but
it has little effect in nonlinear fields such as transient electromagnetic signals. The acquisition of the wavelet threshold is cumbersome, and wavelet base selection is very difficult. In order to
achieve the desired separation effect, it is necessary to design an adaptive wavelet base. Likewise, the PCA algorithm is cumbersome too; some researchers applied PCA to denoise the transient
electromagnetic signal, but the process of PCA requires at least five steps (Wu et al., 2014).
However, deep learning has been used to reduce noise from images, speech and even gravitational waves (Jifara et al., 2017; Grais et al., 2017; Shen et al., 2017). Meanwhile, the autoencoder (AE)
(Bengio et al., 2007), the representative model of deep learning, has been successfully applied in many fields (Hwang et al., 2016). AEs with noise reduction capability (denoising autoencoders, DAEs)
(Vincent et al., 2008) have been widely used in image denoising (Zhao et al., 2014), audio noise reduction (Dai et al., 2014), the reconstruction of holographic image denoising (Shimobaba et al.,
2017) and other fields.
Nevertheless, in the field of geophysics, the application of the deep learning model is limited (Chen et al., 2014). The use of the deep learning model to reduce the noise of geophysical signals has
not been applied. Therefore, in this paper, the SFSDSA (secondary field signal denoising stacked autoencoders) model is proposed to reduce noise, based on a deep neural network with SFS feature
extraction. SFSDSA will map the signal points affected by noise to the high-probability points with geophysical inversion signal as reference according to the deep characteristics of the signal, so
as to realize the signal denoising and reduce noise interference.
Many studies about the denoising of the second field signal of the transient electromagnetic method have been carried out. Ji et al. proposed a method using the wavelet threshold-exponential adaptive
window width-fitting algorithm to denoise the second field signal (Ji et al., 2016). According to this method, stationary white noise and non-stationary electromagnetic noise can be filtered using
the wavelet threshold-exponential adaptive window width-fitting algorithm to denoise the second field signal; Li et al. used the stationary-wavelet-based algorithm to denoise the electromagnetic
noise in grounded electrical source airborne transient electromagnetic signal (Li et al., 2017). This denoising algorithm can remove the electromagnetic noise from the grounded electrical source
airborne transient electromagnetic signal. Wang et al. used the wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals; it can
improve the signal-to-noise ratio (Wang et al. 2013). An exponential fitting-adaptive Kalman filter was used to remove mixed electromagnetic noises (Ji et al., 2018). It consists of an exponential
fitting procedure and an adaptive scalar Kalman filter. The adaptive scalar Kalman uses the exponential fitting results in the weighting coefficients calculation.
The aforementioned Kalman filter and wavelet transform are universal traditional filtering methods, which have their own defects. However, the SFS itself has distribution characteristics, and the
distortion of the waveform generated by the noise causes deviation from the signal point of the distribution.
Theoretical research (Bengio et al., 2007) indicates that the incomplete representation of autoencoders will be forced to capture the most prominent features of the training data and the high-order
feature of data is extracted, so autoencoders can be applied to the feature extraction and abstract representation of the SFS. Theoretical research (Vincent et al., 2008) also shows that denoising
autoencoders can map the damaged data points to the estimated high-probability points according to the data characteristics, to achieve the target of repairing the damaged data. Therefore, DAEs can
be applied to map the SFS data points that will be disturbed by noise to the estimated high-probability points, to achieve the purpose of SFS noise reduction. Studies have found (Vincent et al.,
2010) the stacked DAEs (SDAEs) have a strong feature extraction capability and can improve the effect of feature extraction and enhance the ability of calibrating the deviation points disturbed by
noise. SDAEs are also commonly used in the compression encoding of the preprocessing height of complex images (Ali et al., 2017).
We also noticed that supervised learning performs well in classification problems such as image recognition and semantic understanding (He et al., 2016; Long et al., 2014). At the same time,
unsupervised learning also has a good performance in clustering and association problems (Klampanos et al., 2018), and the goal of unsupervised learning is usually to extract the distribution
characteristics of the data in order to understand the deep features of the data (Becker and Plumbley, 1996; Liu et al., 2015). Both supervised learning and unsupervised learning have their own
application fields, so we need to choose different learning styles and models for different problems. For the noise suppression problem of the SFS in the TEM, our goal is to extract the deep features
and map the data points affected by noise to the estimated high-probability points according to their own signal features. We also found that the purpose of extracting the distribution
characteristics of the SFS data is similar to that of unsupervised learning. Meanwhile, unsupervised learning models are widely used in different signal noise reduction problems.
Therefore, based on the study of the distribution characteristics of the secondary field signal and autoencoder denoising method, we propose SFSDSA, which is a deep learning model of transient
electromagnetic signal denoising.
1. SFSDSA will be stacked by multiple AEs to form a deep neural network of multilayer undercomplete encoding, and multiple AEs are used as a higher-order feature extraction part, which can utilize
its deep structure to maximize the characteristics of the secondary field signal.
2. Based on the principle of DAEs, SFSDSA will set the secondary field measured data (received data) as the input data, and the geophysical inversion method is used to process the measured data of
the secondary field to obtain the inversion signal as the clean signal data. SFSDSA maps the signal points of the noise interference to the high-probability points with a clean signal as
reference according to the deep characteristics of the signal. Because maintaining the original data dimension is especially important for the undistorted processing and post-processing of the
signal, it is necessary to set the original dimension after the last coding as the output layer dimension. Although the output method may produce the decoding loss, it can have high abstract
retention of the secondary field signal characteristics and map the affected signal points to the high-probability position points.
3. The problem of too many nodes dying is a general disadvantage for rectified linear unit (RELU) activation function, and improved RELU activation functions like Leaky RELU all consistently
outperform the RELU function in some tasks (Xu et al., 2015). Therefore, it is necessary to apply the improved RELU function to reduce the impact of the shortcomings of the RELU function. We
choose the scaled exponential linear units (SELUs) that have the capability of overcoming vanishing and exploding gradient problems in a sense and preform the best in full connection networks
(Klambauer et al., 2017). We chose the Adam algorithm, which has the advantages of calculating different adaptive learning rates for different parameters and requiring little memory (Kingma and
Ba, 2014). The SFSDSA model will address the problem of overfitting due to increased depth and the problem of only learning an identity function, because the regularized loss is introduced.
3 Mathematical derivation of SFSDSA
Firstly, the secondary field data (actual detection signal) are treated as a noisy input. Since the secondary field data are mainly a time-amplitude value, we can sample the signal as a
point-amplitude value, in the form of matrix A; the dimensions are 1×N:
$\begin{array}{}\text{(1)}& A=\left[\begin{array}{ccccc}{a}_{\mathrm{11}}& {a}_{\mathrm{12}}& \mathrm{\cdots }& {a}_{\mathrm{1}n-\mathrm{1}}& {a}_{\mathrm{1}n}\end{array}\right].\end{array}$
Secondly, the geophysical inversion method is used to obtain the theoretical signal, which can be used as a clean signal, and then the theoretical signal is sampled as a point-amplitude value, in the
form of matrix $\stackrel{\mathrm{̃}}{A}$; the dimensions are 1×N:
$\begin{array}{}\text{(2)}& \stackrel{\mathrm{̃}}{A}=\left[\begin{array}{ccccc}{\stackrel{\mathrm{̃}}{a}}_{\mathrm{11}}& {\stackrel{\mathrm{̃}}{a}}_{\mathrm{12}}& \mathrm{\cdots }& {\stackrel{\mathrm{̃}}
{a}}_{\mathrm{1}n-\mathrm{1}}& {\stackrel{\mathrm{̃}}{a}}_{\mathrm{1}n}\end{array}\right].\end{array}$
Thirdly, the SFSDSA training model can be built, and Adam, which is a stochastic gradient descent (SGD) method, is applied to prevent gradient disappearance; regularization loss is used to prevent
overfitting; and the SELU activation function is utilized to prevent too many points of death.
$\begin{array}{}\text{(3)}& {g}_{\mathit{\theta }}\left({a}_{\mathrm{1}n}\right)={f}_{\mathrm{SELU}}\left(W{a}_{ln}+b\right),\end{array}$
where $\mathit{\theta }=\left(w,b\right)$, w denotes the $N×N{}^{\prime }$ parameter matrix ($N{}^{\prime }<N$) and b denotes the offset of the N^′ dimensions. After the first compression coding
layer, the signal is the extracted feature and is represented as the $\mathrm{1}×N{}^{\prime }$ parameter matrix. In order to extract high-level features while removing as much noise as possible and
other factors, we can compress again.
W denotes the $N{}^{\prime }×N{}^{\prime \prime }$ parameter matrix ($N{}^{\prime \prime }<N{}^{\prime }$) and b denotes the offset of the N^′′ dimensions, and the features of the actual detection
signal are extracted again after more feature extraction layers can be stacked. For the secondary field signal, it is necessary to maintain the same input and output dimensions to ensure that the
signal is not distorted and later processed. When feature extraction reaches a certain extent, it is necessary to reconstruct back to input dimensions.
Reconstruction can be regarded as the process that the noisy signal points map back to the original dimensions after the features are highly extracted. At the same time, reconstruction is the process
of signal characteristic amplification. Finally output matrix $\overline{A}$ with the same dimensions as the inputs can be retrieved:
$\begin{array}{}\text{(6)}& \overline{A}=\left[\begin{array}{ccccc}{\overline{a}}_{\mathrm{11}}& {\overline{a}}_{\mathrm{12}}& \mathrm{\cdots }& {\overline{a}}_{\mathrm{1}n-\mathrm{1}}& {\overline
The output $\overline{A}$ we obtained can be used to get the loss from the clean signal $\stackrel{\mathrm{̃}}{A}$ using the loss function. The general loss function has the squared loss, which is
mostly used in the linear regression problem. However, the secondary field data are mostly nonlinear, and the absolute loss is used in this paper:
$\begin{array}{}\text{(7)}& L\left(\overline{A},\stackrel{\mathrm{̃}}{A}\right)=|\overline{A}-\stackrel{\mathrm{̃}}{A}|.\end{array}$
In the meantime, regularization loss optimization is used in this paper in order to avoid the problem of overfitting, and then
$\begin{array}{ll}\mathrm{loss}& ={\mathit{\theta }}^{\ast },{{\mathit{\theta }}^{\prime }}^{\ast }={\mathrm{arg}}_{\mathit{\theta },\mathit{\theta }{}^{\prime }}min\frac{\mathrm{1}}{n}\sum _{i=\
mathrm{1}}^{n}L\left({x}^{i},{g}_{\mathit{\theta }{}^{\prime }}\left({f}_{\mathit{\theta }}\left({x}^{i}\right)\right)\right)\\ \text{(8)}& & +\mathit{\lambda }R\left(w\right).\end{array}$
After the loss is calculated, the Adam algorithm is used to reverse the optimization of parameters.
Figure 1 is the algorithm structure diagram of SFSDSA. With reference to the theory of DAEs, SFSDSA maps the signal points of the noise interference to the high-probability points with a clean signal
as reference according to the deep characteristics of the signal, so as to realize the signal noise and reduce noise interference. This high-probability position is determined by the theoretical
clean signal and the multilayer model of the feature extraction ability. The multilayer feature extraction preserves the deep feature of secondary field data, and the effect of noise is reduced.
For the noise suppression problem of the secondary field signal in the transient electromagnetic method, our goal is to extract the deep features of the secondary field signal and map the data points
affected by noise to the estimated high-probability points according to their own signal features. We also found that the purpose of extracting the distribution characteristics of the secondary field
signal data is similar to that of unsupervised learning.
4 Experiment and analysis
In this paper, the secondary field signal of a certain place is used as the experimental analysis signal. Usually, the secondary field signals can be obtained continuously on a period of time, so a
large number of signals can be extracted conveniently as the training samples.The secondary field actual signals are extracted as 1×434 as input signals of noise pollution, as is shown in Fig. 2a. At
the same time, based on the secondary field actual signals, the geophysical inversion method is used to obtain the theoretical detection signal as a clean signal uncontaminated by noise, as is shown
in Fig. 2b. In order to be able to highlight the differences between the data, data are expressed in a double logarithmic form (loglog), as is shown in Fig. 3a and b.
The deep features of original data are extracted by feature extraction layers (compression coding layers). As the number of layers increases, SFSDSA can be a more complex abstract model with limited
neural units (to get higher-order features for this small-scale input in this paper), and more feature extraction layers will inevitably lead to overfitting. Moreover, the reconstruction effect can
be affected by the number of feature extraction layer nodes. If the SFSDSA model has too few nodes, the characteristics of the data can not be learned well. However, if the number of feature
extraction layer nodes is too large, the designed lossy compression noise reduction can not be achieved well and the learning burden is increased.
Therefore, based on the aforementioned questions, we design the SFSDSA model (Fig. 1), and the number of nodes in the latter feature extraction layer is half the number of nodes in the previous
feature extraction layer, until it is finally reconstructed back to the original dimension. The SFSDSA model is a layer-by-layer feature extraction, which can be regarded as the process of stacking
AEs. Low dimensions are represented by the high-dimensional data features, which can learn the input features. At the same time, since the reconstruction loss is the loss of the output related to the
clean signal, it can also be said that the input signal can be regarded as a clean signal based on the noise, the training measure of the DAE model increases the robustness of the model and
reconstructs the lossy signal, and mapping the signal point to its high-probability location can be viewed as a noise reduction process.
In the training experiment, we collected 2400 periods of transient electromagnetic method secondary field signals from the same collection location, and we selected 434 data points in each period.
Meanwhile, 100 periods of signals are randomly acquired as a test and validation set for improving the robustness of the model. We use Google's deep learning framework – Tensorflow, which is used to
build the SFSDSA model. The parameter settings for the model are as follows: batch size =8, epochs =2. We do a grid search and get the good parameter combination of learning rate and regularization
rate as shown in Table 1 (learning rate =0.001 and regularization rate =0.15).
We analyzed and compared the selection of the two loss functions of mean absolute error (MAE) and mean squared error (MSE) in experiments as shown in Fig. 4. Meanwhile, according to the previous work
and the SFS denoising task of the transient electromagnetic method, we think that MAE is a better choice. On the one hand, our task is to map the outliers affected by noise to the vicinity of the
theoretical signal point; in other words, the model should ignore the outliers affected by noise to make it more consistent with the distribution of the overall signal. We know that MAE is quite
resistant to outliers (Shukla, 2015). On the other hand, the squared error is going to be huge for outliers, which tries to adjust the model according to these outliers at the expense of other good
points (Shukla, 2015). For signals that are subject to noise interference in the secondary field of the transient electromagnetic method, we do not want to overfit outliers that are disturbed by
noise, but we want to treat them as data with noise interference. The evaluation index is the mean absolute error of output reconstruction data and clean input data. The smaller the MAE, the closer
the output reconstruction data are to the theoretical data. The model also performs better in noise reduction.
$\begin{array}{}\text{(9)}& \mathrm{MAE}\left(x,y\right)=\frac{\mathrm{1}}{m}\sum _{i=\mathrm{1}}^{m}|h\left(x{\right)}^{\left(i\right)}-{y}^{\left(i\right)}|,\end{array}$
where x denotes the noise interference data, m denotes the number of sampling points, h denotes the model and y denotes theoretical data.
In the previous experiments, we set hyper-parameters (batch size =8, learning rate =0.1, regularization rate =0, epochs =20) based on experience, but we initially take the measure of a small number
of epochs (epochs =2) according to the experiment. We added the experiment as shown in Fig. 5 to support our standpoint. The model oscillates quickly and converges. Training with fewer epochs can
avoid useless training and overfitting, maintaining the distribution characteristics of the signal itself. As shown in Fig. 6, the reconstruction error oscillates and converges as the training
progresses. This phenomenon is similar to the tail of the actual signal. We try to stop training when the convergence occurs; the idea similar to early stopping makes the model more robust (Caruana
et al., 2000).
By analyzing Fig. 7, the relationship between MAE and the number of hidden layers, we found that the result of stacking two AEs has a good effect. We guess that the size of the AE hidden layer is too
small after multiple stacking (for instance, the fourth AE only has 27 nodes because the size of the latter AE is half that of the previous AE in order to extract the better feature), and the
representation of signal characteristics is not complete, resulting in large reconstruction costs. If we want to get a better result, more iterations may be used but this tends to cause overfitting.
Meanwhile, we found that the reconstruction loss of the second AE is already very small (shown in Fig. 5). So it is not necessary to stack more AEs.
The training time will be less when the small-scale deep learning model is applied. By analyzing Fig. 2a, we found that, because the amplitude of the tail of the actual signal is small and the
influence of the noise is significant, the tail of the signal oscillates violently. Meanwhile, after the feature extraction and noise reduction to a certain extent, the noise interference can not be
completely removed, the reconstruction can not completely present the clean signal and it is only possible to map the signal points as high-probability points to reduce reconstruction loss.
4.1 Training results
After several experiments, the MAE of actual signals fell from 534.5 to about 215. Compared with the secondary field actual signals and signals denoised by the SFSDSA model, the noise reduction
effect of SFSDSA is obvious in Fig. 8.
The 35th to 55th points are selected for specific analysis in Fig. 7. Through noise reduction in the training SFSDSA model, the singular points (large amplitude deviation from theoretical signal)
affected by the noise are mapped to the high-probability positions (e.g., point no. 38 and point no. 51). This process is the process of damage reconstruction that the DAE model has verified. At the
same time, our stacked AE model also keeps extracting the features, and the singular points are restored to the corresponding points according to the characteristics of the data. The whole process
realizes the noise reduction of the secondary field actual signal based on the secondary field theoretical signal, and the model maps the singular points to locations where there is a high
probability of occurrence, which is also similar to the most estimative method based on observations and model predictions by Kalman filtering.
5 Comparison with traditional noise reduction methods
We also conducted wavelet transform, PCA and Kalman filter method experiments, in which the number of layers of the wavelet transform is three, called DWT() and construction function IDWT() in
Matlab. By using the PCA method, we perform the experiment to verify the effect of noise reduction. But the process of programming is more complicated when using mathematical derivation, so we use
the scikit-learn library to realize noise reduction. Kalman filtering is implemented in Python, where the system noise Q is set to 1e−4 and the measurement noise R is set to 1e−3. Figure 10 shows the
absolute error distribution for that method. We can find from the figure model of noise reduction based on the SFSDSA model of secondary field data that the SFSDSA model is better than Kalman filter,
wavelet transform and PCA methods. At the same time, as the Kalman filter is a linear filter, its noise reduction effect is very poor in this paper. Moreover, the underlying structure is not easy to
modify, resulting in the scikit-learn library being unable to adjust parameters adaptively based on signal characteristics. After the filtering test, and then the MAE corresponding to the calculation
of the theoretical data, it can be seen that the effect of PCA filtering is lower than that of SFSDSA.
At the same time, we compared the optimization results of various models using the traditional method with those of the SFSDSA model, as shown in Table 2.
Figure 11 is the diagram of the mine where the exploration experiment was conducted. The thick red curve is the actual mine vein curve. A data collection survey line, which is the southwest–northeast
pink curve shown in the figure, is designed with seven points marked as number 1 to 7 along it, and the distance between each point is 50m.
In the data analysis, we analyzed the first 50 points in the second field, which were collected in the actual mine. The early signal of the secondary field is stronger than the later one, and it is
not easily disturbed by the noises. So in Fig. 12 we take out the later 21 points in each collection point, which are used for further analysis. Figure 12a shows extracted time-domain order waveforms
formed by the actual data acquired at the seven collection points at the same time. Figure 12b shows extracted time-domain order waveforms formed by the data denoised by the SFSDSA model. By
comparing the two images in Fig. 12, it can be clearly seen that the curves in Fig. 12a have obvious intersections, and the intersections in Fig. 12b are almost invisible. In the transient
electromagnetic method, the intersected curve can not indicate the deeper underground geological information. It can be explained that the curve after the denoising model can reflect the deep
geological information.
Based on the transient electromagnetic method, the deep-seated information is reflected in the late stage of the second field signal when deep-level surveys are conducted, but the late-stage signals
are very weak and easily contaminated by noise. Therefore, we use the measured data for modeling to obtain the theoretical model, which will perform noise reduction based on the geological features
represented by the previous training data set. Meanwhile, it is necessary to analyze the known geological features carefully and apply the model according to the actual geological conditions before
using our method. And this method has a good generalization for different collection points of the same geological feature area. By introducing the deep learning algorithm integrated with the
characteristics of the secondary field data, SFSDSA can map the contaminated signal to a high-probability position. By comparing several filtering algorithms by using same sample data, the SFSDSA
method has better performance and the denoising signal is conducive to further improving the effective detection depth.
The codes are available by email request.
The data sets are available by email request.
FQ proposed and designed the main idea in this paper. KC completed the main program and designed the main algorithm. XB instructed all the authors. Hui gave many meaningful suggestions. The remaining
authors participated in the experiments and software development.
The authors declare that they have no conflict of interest.
This paper is supported by the National Key R&D Program of China (no. 2018YFC0603300). The authors thank three anonymous referees for their careful and professional suggestions to improve this paper
as well as Sunyuan Qiang, who completed a part of the PCA programming at the stage of solving the question for the third referee.
Edited by: Luciano Telesca
Reviewed by: three anonymous referees
Ali, A., Fan, Y., and Shu, L.: Automatic modulation classification of digital modulation signals with stacked autoencoders, Digit. Signal Process., 71, 108–116, https://doi.org/10.1016/
j.dsp.2017.09.005, 2017.
Becker, S. and Plumbley, M.: Unsupervised neural network learning procedures for feature extraction and classification, Appl. Intell., 6, 185–203, https://10.1007/bf00126625, 1996.
Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H., and Montreal, U.: Greedy layer-wise training of deep networks, Adv. Neur. In., 19, 153–160, 2007.
Caruana, R., Lawrence, S., and Giles, L.: Overfitting in neural nets: backpropagation, conjugate gradient, and early stopping, in: Proceedings of International Conference on Neural Information
Processing Systems, 402–408, 2000.
Chen, B., Lu, C. D., and Liu, G. D.: A denoising method based on kernel principal component analysis for airborne time domain electro-magnetic data, Chinese J. Geophys.-Ch., 57, 295–302, https://
doi.org/10.1002/cjg2.20087, 2014.
Dai, W., Brisimi, T. S., Adams, W. G., Mela, T., Saligrama, V., and Paschalidis, I. C.: Prediction of hospitalization due to heart diseases by supervised learning methods, Int. J. Med. Inform., 84,
189–197, https://doi.org/10.1016/j.ijmedinf.2014.10.002, 2014.
Danielsen, J. E., Auken, E., Jørgensen, F., Søndergaard, V., and Sørensen, K. L.: The application of the transient electromagnetic method in hydrogeophysical surveys, J. Appl. Geophys., 53, 181–198,
https://doi.org/10.1016/j.jappgeo.2003.08.004, 2003.
Grais, E. M. and Plumbley, M. D.: Single Channel Audio Source Separation using Convolutional Denoising Autoencoders, in: Proceedings of the IEEE GlobalSIP Symposium on Sparse Signal Processing and
Deep Learning/5th IEEE Global Conference on Signal and Information Processing (GlobalSIP 2017), Montreal, Canada, 14–16 November 2017, 1265–1269, 2017.
Haroon, A., Adrian, J., Bergers, R., Gurk, M., Tezkan, B., Mammadov, A. L., and Novruzov, A. G.: Joint inversion of long offset and central-loop transient electronicmagnetic data: Application to a
mud volcano exploration in Perekishkul, Azerbaijan, Geophys. Prospect., 63, 478–494, https://doi.org/10.1111/1365-2478.12157, 2014.
He, K., Zhang, X., Ren, S., and Sun, J.: Deep Residual Learning for Image Recognition, in: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, USA, 27–30
June 2016, 770–778, 2016.
Hwang, Y., Tong, A., and Choi, J.: Automatic construction of nonparametric relational regression models for multiple time series, in: Proceedings of the 33rd International Conference on International
Conference on Machine Learning, New York, USA, 19–24 June 2016, 3030–3039, 2016.
Ji, Y., Li, D., Yu, M., Wang, Y., Wu, Q., and Lin, J.: A de-noising algorithm based on wavelet threshold-exponential adaptive window width-fitting for ground electrical source airborne transient
electromagnetic signal, J. Appl. Geophy., 128, 1–7, https://doi.org/10.1016/j.jappgeo.2016.03.001, 2016.
Ji, Y., Wu, Q., Wang, Y., Lin, J., Li, D., Du, S., Yu, S., and Guan, S.: Noise reduction of grounded electrical source airborne transient electromagnetic data using an exponential fitting-adaptive
Kalman filter, Explor. Geophys., 49, 243–252, https://doi.org/10.1071/EG16046, 2018.
Jifara, W., Jiang, F., Rho, S., and Liu, S.: Medical image denoising using convolutional neural network: a residual learning approach, J. Supercomput., 6, 1–15, https://doi.org/10.1007/
s11227-017-2080-0, 2017.
Kingma, D. P. and Ba, J.: Adam: A method for stochastic optimization, arXiv preprint, available at: https://arxiv.org/abs/1412.6980 (last access: 15 January 2019), 2014.
Klambauer, G., Unterthiner, T., Mayr, A., and Hochreiter, S.: Self-Normalizing Neural Networks, arXiv preprint, available at: https://arxiv.org/abs/1706.02515 (last access: 15 January 2019), 2017.
Klampanos, I. A., Davvetas, A., Andronopoulos, S., Pappas, C., Ikonomopoulos, A., and Karkaletsis, V.: Autoencoder-driven weather clustering for source estimation during nuclear events, Environ.
Modell. Softw., 102, 84–93, https://doi.org/10.1016/j.envsoft.2018.01.014, 2018.
Li, D., Wang, Y., Lin, J., Yu, S., and Ji, Y.: Electromagnetic noise reduction in grounded electrical-source airborne transient electromagnetic signal using a stationary-wavelet-based denoising
algorithm, Near Surf. Geophys., 15, 163–173, https://doi.org/10.3997/1873-0604.2017003, 2017.
Liu, J. H., Zheng, W. Q., and Zou, Y. X.: A Robust Acoustic Feature Extraction Approach Based on Stacked Denoising Autoencoder, in: Proceedings of 2015 IEEE International Conference on Multimedia Big
Data, Beijing, China, 20–22 April 2015, 124–127, 2015.
Long, J., Shelhamer, E., and Darrell, T.: Fully Convolutional Networks for Semantic Segmentation, IEEE T. Pattern Anal., 39, 640–651, https://doi.org/10.1109/TPAMI.2016.2572683, 2014.
Rasmussen, S., Nyboe, N. S., Mai, S., and Larsen, J. J.:Extraction and Use of Noise Models from TEM Data, Geophysics, 83, 1–40, https://doi.org/10.1190/geo2017-0299.1, 2017.
Shen, H., George, D., Huerta, E. A., and Zhao, Z.: Denoising Gravitational Waves using Deep Learning with Recurrent Denoising Autoencoders, arXiv preprint, available at: https://arxiv.org/abs/
1711.09919 (last access: 15 January 2019), 2017.
Shimobaba, T., Endo, Y., Hirayama, R., Nagahama, Y., Takahashi, T., Nishitsuji, T., Kakue, T., Shiraki, A., Takada, N., Masuda, N., and Ito, T.: Autoencoder-based holographic image restoration, Appl.
Opt., 56, F27–F30, https://doi.org/10.1364/AO.56.000F27, 2017.
Shukla, R.: L1 vs. L2 Loss function, Github posts, available at: http://rishy.github.io/ml/2015/07/28/l1-vs-l2-loss (last access: 15 January 2019), 2015.
Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P. A.: Extracting and composing robust features with denoising autoencoders, in: Proceedings of the 25th international conference on Machine
learning, Helsinki, Finland, 5–9 July 2008, 1096–1103, 2008.
Vincent, P., Larochelle, H., Lajoie, I., and Manzagol, P. A.: Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion, J. Mach. Learn.
Res.., 11, 3371–3408, https://doi.org/10.1016/j.mechatronics.2010.09.004, 2010.
Wang, Y., Ji, Y. J., Li, S. Y., Lin, J., Zhou, F. D., and Yang, G. H.: A wavelet-based baseline drift correction method for grounded electrical source airborne transient electromagnetic signals,
Explor. Geophys., 44, 229–237, https://doi.org/10.1071/EG12078, 2013.
Wu, Y., Lu, C., Du, X, and Yu, X.: A denoising method based on principal component analysis for airborne transient electromagnetic data, Computing Techniques for Geophysical and Geochemical
Exploration., 36, 170–176, https://doi.org/10.3969/j.issn.1001-1749.2014.02.08, 2014.
Xu, B., Wang, N., Chen, T., and Li, M.: Empirical Evaluation of Rectified Activations in Convolutional Network, arXiv preprint, available at: https://arxiv.org/abs/1505.00853 (last access:
15 January 2019), 2015.
Zhao, M. B., Chow, T. W. S., Zhang, Z., and Li, B.: Automatic image annotation via compact graph based semi-supervised learning, Knowl.-Based Syst., 76, 148–165, https://doi.org/10.1016/
j.knosys.2014.12.014, 2014. | {"url":"https://npg.copernicus.org/articles/26/13/2019/npg-26-13-2019.html","timestamp":"2024-11-07T15:33:58Z","content_type":"text/html","content_length":"237860","record_id":"<urn:uuid:6ae411f8-f065-4cb7-9422-208c2f1e1bac>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00765.warc.gz"} |
Using mathematical modeling for design of self compacting high strength concrete with metakaolin admixture
Metakaolin forms a part of a complex admixture to self-compacting high-strength concrete. The admixture contains superplasticizer addition of naphthalene formaldehyde or polycarboxylate type,
yielding significant improvement in workability and uniformity of fresh concrete mix as well as mechanical properties and durability of hardened concrete. Mathematical modeling of self compacting
high strength concrete at the design stage is aimed at determining optimal content of concrete components (in particular, chemical and mineral admixtures) to obtain the desired concrete properties.
Three-parameter polynomial models are used for determining the superplasticizer content, required to obtain the same fresh concrete mix workability, hardened concrete compressive strength and
correspondingly metakaolin efficiency factor from the strength increase viewpoint. It is demonstrated that the efficiency of metakaolin as an admixture to self compacting high strength concrete
depends on the dosage of the first as well as on concrete binder content, water-binder ratio and by the type of superplasticizer used for concrete production. A concrete design method using
traditional deterministic and stochastic dependencies is developed. Regression equations, describing the influence of water-binder ratio, binder content and metakaolin portion in binder on
superplasticizer content, compressive strength and efficiency factor of metakaolin, are obtained. The concrete design objective function, proposed in this study, allows obtaining the required
concrete strength by minimizing the cost of the most unsustainable concrete components, like cement, metakaolin and superplastisizer.
• Admixture
• Compressive strength
• Concrete mixture design
• Cost efficiency
• Mathematical modeling
• Metakaolin
Dive into the research topics of 'Using mathematical modeling for design of self compacting high strength concrete with metakaolin admixture'. Together they form a unique fingerprint. | {"url":"https://cris.ariel.ac.il/en/publications/using-mathematical-modeling-for-design-of-self-compacting-high-st-3","timestamp":"2024-11-08T15:46:21Z","content_type":"text/html","content_length":"58977","record_id":"<urn:uuid:90655656-be11-474f-bdfd-e292e3bb62aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00420.warc.gz"} |
Facts about chess
By Bill Wall
There are 8 different ways to checkmate in two moves.
1.f3 e5 2.g4 Qh4 mate
1.f3 e6 2.g4 Qh4 mate
1.f4 e5 2.g4 Qh4 mate
1.f4 e6 2.g4 Qh4 mate
1.g4 e5 2.f3 Qh4 mate
1.g4 e5 2.f4 Qh4 mate
1.g4 e6 2.f3 Qh4 mate
1.g4 e6 2.f4 Qh4 mate
Sergey Karjakin (1990- ) was awarded the grandmaster title at the age of 12 years and 212 days.
Garry Kasparov placed 1^st or equal in 15 individual international tournaments from 1981 to 1990.
The shortest decisive game in world championship play (other than forfeit) is 17 moves, Anand-Gelfand, game 8, world championship game 2012 (source: http://www.chessgames.com/perl/chessgame?gid=
Ruslan Ponomariov won the FIDE world chess championship at the age of 18 years and 103 days.
There are 20 possible positions for White’s first moves, consisting of 16 pawn moves and 4 knight moves.
Carlos Armando Juarez Flores has won the national chess championship of Guatemala 25 times, making him the world record holder for most times having won a national championship.
In 1968, the USA only had 25 blind chess players in its Braille Chess Association. The USSR had 150,000 blind players in its Braille Chess Association.
Howard Ohman (1899-1963) won the Nebraska State Chess Champions 25 times, a record for state championship titles.
Dr. Emanuel Lasker was world chess champion for 26 years and 337 days, the longest ever.
In 1927, FIDE awarded 27 players the first grandmaster title.
William Steinitz played 27 chess matches from 1862 to 1896, and won 25 of the 27 matches. He won 160 games, lost 70, and drew 57.
Arkadijs Strazdinis won the New Britain, Connecticut chess club championship 30 times, from 1952 to 1994. From 1952 to 1975, he had won it 23 times in a row.
The longest decisive game without a pawn or piece capture is 31 moves, Nuber-Keckeisen, Mengen 1994. (source: http://www.chessgames.com/perl/chessgame?gid=1442039)
The largest age discrepancy in world championship matches is 32 years when Emauel Lasker,age 26, played Steinitz, age 58. In 1996, former world champion Vasily Smyslov, age 75, played Bacrot, age
13, for an age difference of 62 years.
Jose Capablanca only lost 34 games in his adult career. He was unbeaten from February 10, 1916 to March 21, 1924.
Edgar McCormick (1914-1991) played in the U.S. Open 37 times, more than anyone else.
Arpad Elo (1903-1992) played in 37 consecutive state championships in Wisconsin, from 1933 to 1969, winning the title 8 times. He was a professor of physics for 37 years and president (1935-1937) of
the American Chess Federation before it merged and came part of the U.S. Chess Federation (USCF) in 1939. He is considered the father of scientific chess ratings and his Elo rating system was
adopted by the USCF in 1960 and by FIDE in 1970.
There are 44 players rated over 2700. (source: http://ratings.fide.com/top.phtml?list=men)
Miguel Najdorf played 45 games blindfold and simultaneously in Brazil in 1947. He score 39 wins, 2 losses, and 4 draws in about 23 hours.
The longest time for a castling move to take place was after 48 moves.(source: http://chess-db.com/public/game.jsp?id=3010090857.Garrison,%20R.27586048.19360)
In 1957, there were only 50 grandmasters in the world. The USSR had 19, followed by Yugoslavia with 7, then the USA at 5, and Argentina at 4.
Emanuel Lasker had 52 career wins in world championship play, more wins than any other world champion.
Reinhart Straszacker and Hendrick van Huyssteen, both of South Africa, played their first game of correspondence chess in 1946. They played for over 53 years, until Straszacker died in 1999. They
played 112 games, with both men winning 56 games each.
In 2013, Alexey Khanyan created a chess problem with 54 consecutive checks.
In 1960, George Koltanowski played 56 chess games blindfolded, winning 50 and drawing 6.
Hermann Helms (1870-1963) wrote a chess column for 62 years, from 1893 to 1955, in the Brooklyn Daily Eagle. This is the record for the longest-running uninterrupted chess column under the same
authorship. He published the American Chess Bulletin from 1904 to 1963, a period of 59 years. He also wrote weekly chess columns in the New York World Telegram, the Sun, and the New York Times.
He died in Brooklyn, one day after he reached his 93rd birthday. He was instrumental in directing Bobby Fischer to the Brooklyn Chess Club. He won the New York State championship in 1906 and 1925.
He was the first to broadcast chess games over the radio (WNYC).
There were 72 consecutive queen moves in the game Mason-Mackenzie, London 1882. (source: http://www.chessgames.com/perl/chessgame?gid=1305751)
The longest series of checks was 74, in the game Rebickova-Voracova, Czech Republic 1995.
Vera Menchik-Stevenson (1906-1944) was World Women’s Chess Champion from 1927 to 1944. She defended her title 6 times. In world championship play, she played 83 games, winning 78 games, drawing 4
games, and only lost once.
Walter Ivans (1870-1968) of Tucson, Arizona, started playing chess at the age of 10. He died at the age of 98. He played chess for 85 years, perhaps the longest of any player. Walter Muir
(1905-1999) played correspondence chess for 75 years.
There were 88 grandmasters on the world in 1972, with 33 GMs from the USSR.
David Lawson (1886-1980) was 89 years old when his biography of Paul Morphy, Paul Morphy: The Pride and Sorrow of Chess, was published in 1976. He is perhaps the oldest chess author of a major chess
There are 92 ways to placing 8 queens on a chess bard so that no two queens attack each other.
The latest first capture of a pawn or piece is 94 moves, in Ken Rogoff-Arthur Williams, Stockholm 1969. (source: http://www.chessgames.com/perl/chessgame?gid=1709732)
Mikhal Tal played 95 games in a row (46 wins and 49 draws) without a single loss from October 1973 to October 1974.
In 1989, the Belgrade Grandmaster’s Association (GMA) had 98 grandmasters participating, the most grandmasters in one tournament.
Jose Capablanca played 103 opponents simultaneously in Cleveland in 1922, winning 102 and drawing 1 game.
Bill Martz played 104 consecutive USCF-rated games without a loss
In 1966, Jude Acers played 114 opponents at the Louisiana State Fair, and won all 114 games
The longest world championship game is 124 moves in the 5th game of the 1978 Korchnoi-Karpov match in Merano, Italy. The game ended in a stalemate.
The first philatelic item, a chess cancellation, appeared on a German envelope in 1923. The first postage stamp depicting a chess motif was issued in Bulgaria in 1947 on the occasion of the Balkan
games. Over 140 countries have issued postage stamps related to chess. The United States has never issued a postage chess stamp with any chess theme.
The greatest number of checks in one game is 141 checks, Wegner-Johnson, Gausdal 1991.
Mikhail Botvinnik played 157 world championship games, more games than any other world champion. He won 36, lost 39, and drew 82.
The longest world championship match was the 1984-85 Karpov-Kasparov match. It lasted 159 days after 48 games had been played. The match was later called off by FIDE.
There are 174 countries that belong to FIDE.
The longest won game for White was 193 moves, played in the game Stepak – Mashian, Israel 1980. (source: http://www.chessgames.com/perl/chessgame?gid=1297307)
In 1983, two bus drivers from Bristol, England played chess non-stop for 200 hours. Roger Long and Graham Croft played 189 games with Long winning 96 games to 93 games.
The longest chess book devoted to a single games is 202 pages. D. King wrote Kasparov Against the World in 2000, featuring only one game.
The longest won game for Black at normal time control is 210 moves, Neverov-Bogdanovich, Ukraine 2013. (source: http://www.chessgames.com/perl/chessgame?gid=1721829)
The longest decisive game is a 237-move rapid game, Fressinet-Kosteniuk, Villandry, 2007. (source: http://www.chessgames.com/perl/chessgame?gid=1536839)
The longest mate with 6 chess men is 243 moves.
Fred Reinfeld (1910-1964) authored 260 books on chess, checkers, coins, geology, history, and astronomy. He wrote at least 102 books on chess alone. He also wrote chess books under the name of
Robert Masters and Edward Young. He was a master chess player who won the U.S. Intercollegiate Chess Championship, the New York State Championship (twice), the Marshall Chess Club Championship, and
the Manhattan Chess Club Championship. He was invited to play in the U.S. Championship but declined. He was one of the top 10 players in the US in the late 1940s and early 1950s. He taught chess
at Columbia University and New York University.
The longest chess game lasted 269 moves. It was played in the game Ivan Nikolic – Goran Arsovic, Belgrade 1989 and resulted in a draw after over 20 hours of play. (source: http://www.chessgames.com/
The longest chess puzzle is 290 moves, created by Otto Blathy (1860-1939) in 1929. (source: http://www.chess.com/forum/view/more-puzzles/mate-in-292-movesblathys-monster)
There are 355 different ways to checkmate in three moves.
There are 400 different possible positions after the first move by White and the first move by Black.
In 1994, FIDE master Graham Burgess played 500 games of blitz chess (5-minute chess) in 3 days. He won over 75% of his games.
The longest checkmate with 7 men is 549 moves. It is an endgame consisting of queen and pawn vs. rook, bishop, and knight. (source: http://timkr.home.xs4all.nl/chess2/diary.htm)
The record for the most simultaneous boards of chess played at once is 614. Ehsan Ghaem-Maghami played that many boards in 2011 in Tehran, Iran. It took him 25 hours and he walked 32 miles. He won
590, lost 8, and drew 16.
Floyd Sarisohn is the owner of the largest chess set collection in the world. He owns over 700 chess sets and has been collecting for over 40 years.
There were 865 members in the first All-Russian Chess Federation, formed in 1914.
John Curdo (1931- ) won over 890 chess tournaments as of 2014 and is trying to win 900 chess events.
There are 960 ways to set up the first and last rank of chess board.
In 1988, Stan Vaughan of Nevada played 1,124 correspondence games at once. The prior record was 1,001. In 1948, Robert Wyller of Hillsboro, California played 1,001 correspondence games at once.
There were 1,192 grandmasters in the world in 2008.
There were 1,146 grandmasters in the world in 2014.
The highest tournament in the world was the 2014 Sinquefeld Cup, with the average rating of 2802. The strongest tournament in the world was the 1938 AVRO tournament, which had the top 8 players in
the world participating.
Algemeene Veerenigde Radio Oemrop (AVRO), a Dutch broadcasting company, which sponsored the world's strongest tournament held up to that time from November 5^th to the 27^th of November, 1938. The
top eight players in the world participated (Keres, Fine, Botvinnik, Alekhine, Reshevsky, Euwe, Capablanca, and Flohr). First place was equivalent to $550 (shared by Fine and Keres). Alekhine, for
the first time in his life, came ahead of Capablanca. Capablanca, for the first time in his life, fell below 50%. He lost four games in this event. Salo Flohr, the official challenger who was
expected to play a world championship match with Alekhine, came last without a single victory in 14 rounds. Each round was played in a different Dutch city that rotated between Amsterdam, The Hague,
Rotterdam, Groningen, Zwolle, Haarlem, Utrecht, Arnhem, Breda, and Leiden.
In 1995, Robert Smeltzer of Dallas played 2,266 USCF-rated games in one year, the most ever.
Magnus Carlsen had the highest FIDE rating ever, rated at 2882 in May 2014.
In 1993, John Penquite had a USCF correspondence rating of 2933, the highest rating ever, after 58 straight wins with no losses or draws.
Fabiano Caruarna had a performance rating of 3103, the largest ever, at the 2014 Sinquefield Cup.
There were only 5,000 chess book titles worldwide in 1913. By 1949, there were 20,000 titles. There are now over 100,000 titles.
There are 5,372 possible positions after three moves (two moves for White and one move for Black). Of these, there are 1,862 unique positions.
The longest game of chess that is theoretically possible is 5,949 moves.
George Koltanowski (1903-2000) wrote over 19,000 chess columns during his lifetime.
There were 20,500 players in a simultaneous exhibition in Ahmadabad, India in 2010.
The largest public library for chess is the J.G. White Collection at the Cleveland Public Library. It contains over 33,000 chess books and over 7,000 volumes of bound periodicals. The largest
private library for chess was owned by Grandmaster Lothar Schmid. He had over 20,000 chess books.
There are 71,852 different possible positions after two moves each for White and Black, of which 9,825 positions are unique.
125,000 players competed for the championship of the USSR collective farms in 1949.
There are over 150,000 FIDE-rated players in the world.
Former world chess champion Anatoly Karpov had over 400,000 stamps in his collection before he sold it.
U.S. checker champion Newell Banks (1887-1977) was also a chess master. He defeated the U.S. chess champion, Frank Marshall, and he leading challenger, Isaac Kashdan, at the Chicago Tournament in
1926. In his lifetime he traveled over a million miles playing chess and checkers and played over 600,000 games of chess and checkers. He was considered the world’s best checkers player from 1917
to 1922 and 1933-1934.
There were 700,000 entries in the 1936 USSR Trade Union Chess Championship.
There are 822,518 possible positions after three moves for White and two moves for Black. Of these, 53,516 positions are unique.
The bestselling chess book is Bobby Fischer Teaches Chess. It has sold over 1,000,000 copies since 1966.
There are 9,417,681 total possible positions in chess after three moves each for White and Black. Of these, 311,642 positions are unique.
It is estimated that there are 45,000,000 chess players in the United States.
There are 96,400,068 possible positions after 4 moves by White and 3 moves by Black. Of these, 2,018,993 positions are unique.
The Deep Blue computer that beat Garry Kasparov in a match in 1997 could evaluate 200,000,000 positions a second.
It is estimated that there are over 200 million people who have played chess on the Internet.
There are over 700,000,000 people who play chess worldwide.
There are 988,187,354 total positions after 4 moves for White and 4 moves for Black. Of these, 12,150,635 positions are unique.
There are 9,183,421,888 total positions after 5 moves for White and 4 moves for Black. Of these, 69,284,509 positions are unique.
There are 85,375,278,064 total positions after 5 moves for White and 5 moves for Black. Of these, 382,383,387 positions are unique.
There are 26,534,728,821,064 ways to conduct a knight’s tour in which a knight is placed on an empty board, moves like a knight, and visits each square exactly once.
If you took a number and doubled it every time on a chess board square, by the time you get to the 64^th square, the number would be 18,446,744,073,709,551,615.
There are 10^46 possible unique positions in chess.
There are 10^120 possible moves in chess, known as the Shannon number, named after Claude Shannon. (source: http://archive.computerhistory.org/projects/chess/related_materials/text/
The maximum number of moves required to deliver checkmate from the worst possible starting positions is as follows:
Rook and bishop vs. rook – 59 moves
Queen vs. two knights – 63 moves
Two bishops vs knight – 66 moves
Queen and rook vs. queen – 67 moves
Queen vs. two bishops – 71 moves
Rook and bishop vs. two knights – 223 moves
The second book ever printed and published in the English language was a chess book. The first book ever re-published was a chess book. William Caxton, the first English printer, published The Game
and Playe of Chesse in 1474. It was reprinted in 1483 with woodcuts (illustrations) added. For the longest time, it was thought to be the first book published by Caxton, but he printed the Recuyell
of the Historyes of Troy just before the chess book in 1471. (see http://www.gutenberg.org/ebooks/10672?msg=welcome_stranger) | {"url":"http://billwall.phpwebhosting.com/articles/Facts.htm","timestamp":"2024-11-06T15:24:50Z","content_type":"text/html","content_length":"36236","record_id":"<urn:uuid:77d1c9e0-48fb-43b7-8d41-d50b50a816bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00606.warc.gz"} |
Seeded vs True Random
crow’s microcontroller has a true-random-number-generator on board. This is the random source that is hooked up to the math.random function and should reliably produce numbers with high entropy.
This is different to most CPUs that generate pseudo-randomness from a seed number (often the current up-time of the system) and then apply a series of transformations for the appearance of
randomness. while this approach doesn’t generate a truly random sequence, it can be very useful because of the “seed” approach.
crow 4.0 introduces two new functions allowing for this pseudo-randomness generation:
math.srandom() --> random number between 0.0 and 1.0
math.srandom(5) --> random integer between 0 and 5 inclusive
math.srandom(3,10) --> random integer between 3 and 10 inclusive
Where this seeded-random option becomes more interesting is by setting the seed (with math.srandomseed) with a known number. Re-seeding the pseudo-random generator with the same number will cause
math.srandom to produce the same sequence of values.
One way this could be used is in a function that randomizes the parameters of your script. Start with a function that writes new values to your variables, and pass it a “seed” value to be applied
before randomizing. A very simple script would be something like:
function init()
randomize(unique_id()) -- each hardware unit will generate a different sequence!
local shapes = {'sine', 'linear', 'exp', 'log'}
function randomize(seed)
math.srandomseed(seed) -- re-seed the values
output[2](lfo(1, math.srandom(1,5)))
output[3](lfo(1/3, 3, shapes[math.srandom(1,4)]))
This little script will start 3 different LFOs on the first 3 outputs of crow:
• output 1 will have random speed between 0.1 and 1.1Hz
• output 2 will have random amplitude between 1 and 5V (in 1V steps)
• output 3 will have a randomly selected shape
The init function will use your crow’s unique identifier number to start the LFOs in a way specific to your particular module. this could be a nice thing when sharing a patch with others (though can
be hard to test!).
In druid you can now create new ‘presets’ of LFOs by passing different numbers to the randomize function. If you just want truly random values, call randomize(math.random*2^31) which will re-seed
with a random number. but if you’re going to this trouble, you might as well save the value you pass in, so you can recall those same settings if they’re nice!
>>> druid
> seed = math.random() * 2^31; randomize(seed)
46384874 --< this is your random number that was generated!
> seed = math.random() * 2^31; randomize(seed)
7309311 --< another random number which you're now hearing | {"url":"https://monome.org/docs/crow/seededrandom/","timestamp":"2024-11-13T14:58:00Z","content_type":"text/html","content_length":"20713","record_id":"<urn:uuid:2eb86a62-a5bb-4063-b3b2-28820177a26d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00426.warc.gz"} |
An Introduction to Unobserved Component Models
A UCM decomposes the response series into components such as trend, seasons, cycles, and the regression effects due to predictor series. The following model shows a possible scenario:
The terms , and represent the trend, seasonal, and cyclical components, respectively. In fact the model can contain multiple seasons and cycles, and the seasons can be of different types. For
simplicity of discussion the preceding model contains only one of each of these components. The regression term, , includes contribution of regression variables with fixed regression coefficients. A
model can also contain regression variables that have time varying regression coefficients or that have a nonlinear relationship with the dependent series (see Incorporating Predictors of Different
Kinds). The disturbance term , also called the irregular component, is usually assumed to be Gaussian white noise. In some cases it is useful to model the irregular component as a stationary ARMA
process. See the section Modeling the Irregular Component for additional information.
By controlling the presence or absence of various terms and by choosing the proper flavor of the included terms, the UCMs can generate a rich variety of time series patterns. A UCM can be applied to
variables after transforming them by transforms such as log and difference.
The components , and model structurally different aspects of the time series. For example, the trend models the natural tendency of the series in the absence of any other perturbing effects such as
seasonality, cyclical components, and the effects of exogenous variables, while the seasonal component models the correction to the level due to the seasonal effects. These components are assumed to
be statistically independent of each other and independent of the irregular component. All of the component models can be thought of as stochastic generalizations of the relevant deterministic
patterns in time. This way the deterministic cases emerge as special cases of the stochastic models. The different models available for these unobserved components are discussed next.
As mentioned earlier, the trend in a series can be loosely defined as the natural tendency of the series in the absence of any other perturbing effects. The UCM procedure offers two ways to model the
trend component . The first model, called the random walk (RW) model, implies that the trend remains roughly constant throughout the life of the series without any persistent upward or downward
drift. In the second model the trend is modeled as a locally linear time trend (LLT). The RW model can be described as
Note that if , then the model becomes . In the LLT model the trend is locally linear, consisting of both the level and slope. The LLT model is
The disturbances and are assumed to be independent. There are some interesting special cases of this model obtained by setting one or both of the disturbance variances and equal to zero. If is set
equal to zero, then you get a linear trend model with fixed slope. If is set to zero, then the resulting model usually has a smoother trend. If both the variances are set to zero, then the resulting
model is the deterministic linear time trend: .
You can incorporate these trend patterns in your model by using the LEVEL and SLOPE statements.
A deterministic cycle with frequency , , can be written as
If the argument is measured on a continuous scale, then is a periodic function with period , amplitude , and phase . Equivalently, the cycle can be written in terms of the amplitude and phase as
Note that when is measured only at the integer values, it is not exactly periodic, unless for some integers and . The cycles in their pure form are not used very often in practice. However, they are
very useful as building blocks for more complex periodic patterns. It is well known that the periodic pattern of any complexity can be written as a sum of pure cycles of different frequencies and
amplitudes. In time series situations it is useful to generalize this simple cyclical pattern to a stochastic cycle that has a fixed period but time-varying amplitude and phase. The stochastic cycle
considered here is motivated by the following recursive formula for computing :
starting with and . Note that and satisfy the relation
A stochastic generalization of the cycle can be obtained by adding random noise to this recursion and by introducing a damping factor, , for additional modeling flexibility. This model can be
described as follows
where , and the disturbances and are independent variables. The resulting stochastic cycle has a fixed period but time-varying amplitude and phase. The stationarity properties of the random sequence
depend on the damping factor . If , has a stationary distribution with mean zero and variance . If , is nonstationary.
You can incorporate a cycle in a UCM by specifying a CYCLE statement. You can include multiple cycles in the model by using separate CYCLE statements for each included cycle.
As mentioned before, the cycles are very useful as building blocks for constructing more complex periodic patterns. Periodic patterns of almost any complexity can be created by superimposing cycles
of different periods and amplitudes. In particular, the seasonal patterns, general periodic patterns with integer periods, can be constructed as sums of cycles. This important topic of modeling the
seasonal components is considered next.
The seasonal fluctuations are a common source of variation in time series data. These fluctuations arise because of the regular changes in seasons or some other periodic events. The seasonal effects
are regarded as corrections to the general trend of the series due to the seasonal variations, and these effects sum to zero when summed over the full season cycle. Therefore the seasonal component
is modeled as a stochastic periodic pattern of an integer period such that the sum is always zero in the mean. The period is called the season length. Two different models for the seasonal component
are considered here. The first model is called the dummy variable form of the seasonal component. It is described by the equation
The other model is called the trigonometric form of the seasonal component. In this case is modeled as a sum of cycles of different frequencies. This model is given as follows
where equals if is even and if it is odd. The cycles have frequencies and are specified by the matrix equation
where the disturbances and are assumed to be independent and, for fixed , and . If is even, then the equation for is not needed and is given by
The cycles are called harmonics. If the seasonal component is deterministic, the decomposition of the seasonal effects into these harmonics is identical to its Fourier decomposition. In this case the
sum of squares of the seasonal factors equals the sum of squares of the amplitudes of these harmonics. In many practical situations, the contribution of the high-frequency harmonics is negligible and
can be ignored, giving rise to a simpler description of the seasonal. In the case of stochastic seasonals, the situation might not be so transparent; however, similar considerations still apply. Note
that if the disturbance variance , then both the dummy and the trigonometric forms of seasonal components reduce to constant seasonal effects. That is, the seasonal component reduces to a
deterministic function that is completely determined by its first values.
In the UCM procedure you can specify a seasonal component in a variety of ways, the SEASON statement being the simplest of these. The dummy and the trigonometric seasonal components discussed so far
can be considered as saturated seasonal components that put no restrictions on the seasonal values. In some cases a more parsimonious representation of the seasonal might be more appropriate. This is
particularly useful for seasonal components with large season lengths. In the UCM procedure you can obtain parsimonious representations of the seasonal components by one of the following ways:
• Use a subset trigonometric seasonal component obtained by deleting a few of the harmonics used in its sum. For example, a slightly smoother seasonal component of length 12, corresponding to the
monthly seasonality, can be obtained by deleting the highest-frequency harmonic of period 2. That is, such a seasonal component will be a sum of five stochastic cycles that have periods 12, 6, 4,
3, and 2.4. You can specify such subset seasonal components by using the KEEPH= or DROPH= option in the SEASON statement.
• Approximate the seasonal pattern by a suitable spline approximation. You can do this by using the SPLINESEASON statement.
• A block-seasonal pattern is a seasonal pattern where the pattern is divided into a few blocks of equal length such that the season values within a block are the same—for example, a monthly
seasonal pattern that has only four different values, one for each quarter. In some situations a long seasonal pattern can be approximated by the sum of block season and a simple season, the
length of the simple season being equal to the block length of the block season. You can obtain such approximation by using a combination of BLOCKSEASON and SEASON statements.
• Consider a seasonal component of a large season length as a sum of two or more seasonal components that are each of much smaller season lengths. This can be done by specifying more than one
SEASON statements.
Note that the preceding techniques of obtaining parsimonious seasonal components can also enable you to specify seasonal components that are more general than the simple saturated seasonal
components. For example, you can specify a saturated trigonometric seasonal component that has some of its harmonics evolving according to one disturbance variance parameter while the others evolve
with another disturbance variance parameter.
Modeling an Autoregression
An autoregression of order one can be thought of as a special case of a cycle when the frequency is either or . Modeling this special case separately helps interpretation and parameter estimation.
The autoregression component is modeled as follows
where . An autoregression can also provide an alternative to the IRREGULAR component when the model errors show some autocorrelation. You can incorporate an autoregression in your model by using the
AUTOREG statement.
Modeling Regression Effects
A predictor variable can affect the response variable in a variety of ways. The UCM procedure enables you to model several different types of predictor-response relationships:
• The predictor-response relationship is linear, and the regression coefficient does not change with time. This is the simplest kind of relationship and such predictors are specified in the MODEL
• The predictor-response relationship is linear, but the regression coefficient does change with time. Such predictors are specified in the RANDOMREG statement. Here the regression coefficient is
assumed to evolve as a random walk.
• The predictor-response relationship is nonlinear and the relationship can change with time. This type of relationship can be approximated by an appropriate time-varying spline. Such predictors
are specified in the SPLINEREG statement.
A response variable can depend on its own past values—that is, lagged dependent values. Such a relationship can be specified in the DEPLAG statement.
Modeling the Irregular Component
The components—such as trend, seasonal and regression effects, and nonstationary cycles—are used to capture the structural dynamics of a response series. In contrast, the stationary cycles and the
autoregression are used to capture the transient aspects of the response series that are important for its short-range prediction but have little impact on its long-term forecasts. The irregular
component represents the residual variation remaining in the response series that is modeled using an appropriate selection of structural and transient effects. In most cases, the irregular component
can be assumed to be simply Gaussian white noise. In some other cases, however, the residual variation can be more complicated. In such situations, it might be necessary to model the irregular
component as a stationary ARMA process. Moreover, you can use the ARMA irregular component together with the dependent lag specification (see the DEPLAG statement) to specify an ARIMA(p,d,q)(P,D,Q)
model for the response series. See the IRREGULAR statement for the explanation of the ARIMA notation. See Example 35.8 for an example of modeling a series by using an ARIMA(0,1,1)(0,1,1) model.
The parameter vector in a UCM consists of the variances of the disturbance terms of the unobserved components, the damping coefficients and frequencies in the cycles, the damping coefficient in the
autoregression, and the regression coefficients in the regression terms. These parameters are estimated by maximizing the likelihood. It is possible to restrict the values of the model parameters to
user-specified values.
A UCM is specified by describing the components in the model. For example, consider the model
consisting of the irregular, level, slope, and seasonal components. This model is called the basic structural model (BSM) by Harvey (1989). The syntax for a BSM with monthly seasonality of
trigonometric type is as follows:
model y;
season length=12 type=trig;
Similarly the following syntax specifies a BSM with a response variable , a regressor , and dummy-type monthly seasonality:
model y = x;
slope variance=0 noest;
season length=12 type=dummy;
Moreover, the disturbance variance of the slope component is restricted to zero, giving rise to a local linear trend with fixed slope.
A model can contain multiple cycle and seasonal components. In such cases the model syntax contains a separate statement for each of these multiple cycle or seasonal components; for example, the
syntax for a model containing irregular and level components along with two cycle components could be as follows:
model y = x; | {"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_ucm_details01.htm","timestamp":"2024-11-05T05:26:42Z","content_type":"application/xhtml+xml","content_length":"60088","record_id":"<urn:uuid:739a4a89-c1c5-4299-ad4d-bbee1be79542>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00422.warc.gz"} |
Occam's razor, Physics example
7 months ago
●3 replies●
latest reply 6 months ago
202 views
From the
Wikipedia article
"Suppose an event has two possible explanations. The explanation that requires the fewest assumptions is usually correct.
However, Occam's razor only applies when the simple explanation and complex explanation both work equally well. If a more complex explanation does a better job than a simpler one, then you should use
the complex explanation."
If you have an interest in Physics, please read the following post.
Why I think General Relativity is an approximation
It pertains to two of my blog articles on this site. For those of you who aren't familiar, I have an
overview article
with links to all my articles. Everybody with an interest in DSP should be familiar with the material found in the "Fundamentals" sections. The same math is also needed for Physics. You will find
my Physics articles referenced in the "Off Topic" section, and there are also links in the link above.
My horse in this race is the simpler answer. Occam's razor does not say, as often misunderstood, "The simplest answer wins." What it really says is the answer with the fewest assumptions will tend
to be the better one.
In terms of assumptions for this case, the simpler is saying an inverse square law and the complicated wants the strong equivalence principle. They are mutually contradicting.
I'm wondering if some of the Physics guys in this forum want to take the GR side, the one that goes against Occam's razor.
Any DSP examples of Occam's razor out there?
[ - ]
Reply by ●April 15, 2024
Interesting philosophical principles.
In Design debugging we face many such situations.
Example: You designed a module and tested it to be working. It is then integrated with other modules in the system and failed.
- Wrong observation of testing person
- Wrong interpretation of testing person
- your design is the cause of failure
- integrating system is the cause of failure, i.e. other's failure
- both your design and system are the cause of failure, many persons involved
I see the first case has fewer assumptions but the last one is more complex and likely to be correct but is not specific.
So it doesn't help I am afraid...I will have to accept the burden of proof.
[ - ]
Reply by ●April 15, 2024
That's an interesting example. It could apply to hardware or software. I don't think there is any expectation of correlation between levels in a hierarchy and complexity of the levels. I can think
of simple systems with complex components, or complex systems with simple components. In either regard, if unit testing passes and system testing fails, well, "Houston, we got a problem."
A similar example to my posed Physics one I can think of would be if an approximation or an exact formula should be used when calculating a frequency from DFT bin values. It is different though
because sometimes the simpler approximation, say Jacobsen's, is going to be just as good as an exact formula, say Candan's second tweak, depending how much noise and the type of noise that is
present. In practicality, it also depends on the tolerance of the specification.
"Candan's Tweaks of Jacobsen's Frequency Approximation"
For low noise applications with tight requirements, it might even be worth taking the extra step to do a "Projection" formula.
"Three Bin Exact Frequency Formulas for a Pure Complex Tone in a DFT"
The white noise error variance is slightly lower, but it takes a lot of calculations to get it. Is it worth it?
In the case of my frequency formulas, the assumption is that there is a single pure tone in the DFT. If your sample size is larger, or there is noise, or other tones, the exact formulas will
generally not be worth it. Still following Occam's, yet having nothing to do with complexity determining the best answer.
[ - ]
Reply by ●May 14, 2024
I thought this post might get a little more attention than it has, especially from R B-J. Which one of these equations better describes reality?
Here are the graphs with
as the horizontal axis. For
values greater than
the two are practically indistinguishable.
Trying to get Physicists to address this is even more difficult than getting the IEEE to recognize my exact formulas and vector based phase and amplitude calculations. This is what is the most
problematic about "publish or perish" culture, all value is placed on "origination" and none on "validation". What could possibly go wrong? | {"url":"https://dsprelated.com/thread/17190/occam-s-razor-physics-example","timestamp":"2024-11-12T15:29:00Z","content_type":"text/html","content_length":"36692","record_id":"<urn:uuid:9c667240-2b0c-4783-9d66-2a9699d38bc5>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00529.warc.gz"} |
Scientists, Data Scientists And Significance
Written by Mike James
Monday, 15 April 2019
In a recent special issue of The American Statistician, scientists are urged to stop using the term "statistically significant". So what should we be using? Is this just ignorance triumphing over
good practice?
There are many who think that science is in a state of crisis of irreproducible, and even fraudulent, results. It is easy to point the finger at the recipe that statisticians have given us for
"proving" that something is so. It is a bit of a surprise to discover that at least 43 statisticians (the number of papers in the special edition) are pointing the finger at themselves! However, it
would be a mistake to think that statisticians are one happy family. There are the Frequentists and the Bayesians, to name but two warring factions.
The problem really is that many statisticians are doubtful about what probability actually is. Many of them don't reason about probability any better than the average scientist and the average
scientist is often lost and confused by the whole deal.
If you are Frequentist then probability is, in principle, a measurable thing. If you want to know the probability that a coin will fall heads then you can toss it 10 times and get a rough answer,
toss it 100 times and get a better answer, 1000 and get even better and so on. The law of large numbers ensures that, in the limit, you can know a probability as accurately as you like. Things start
to not make sense when you apply probability to things that are just not repeatable. Hence the probability that there is life on other planets almost certainly isn't a probability - it's a belief.
There are plenty of belief measures and systems that are close to probability, but probabilities they are not.
If you already are sure you know how significance testing works jump to the next page.
This brings us to the question of significance. How can you decide if an experiment has provided enough evidence for you to draw a conclusion? Surely an experiment isn't something you can apply a
probability to?
If you have a model for how the data that the experiment produces varies randomly then you can. For example, if the experiment is to "prove" that a coin isn't fair you can throw it 10 times and see
what you get. Clearly, even if the coin is fair, it would be unlikely that you would get 5 heads and 5 tails. So you start off with the assumption that the coin is fair and work out how likely it is
that you get the result that you got. Suppose you got 6 heads then the probability of getting that from a fair coin is about 0.2, which means you would see this result in 20% of the experiments with
a fair coin. If you got 8 heads, the probability goes down to 0.04 and only 4% of your experiments would give you this result if the coin was fair.
This isn't quite how things are done. To test if a coin is fair you need to setup a repeatable experiment that has a fixed probability of getting it wrong. So you need to set a limit on how many
heads or tails you need to see to reject the hypothesis - the null hypothesis - that the coin is fair. If we toss a fair coin 50 times, we can fairly easily compute that the probability of seeing 18
or more heads (or 18 or more tails) is around 0.05 (it is 0.065 to be exact).
So our experiment is to toss the coin 50 times and if you see 18 or more heads or tails then conclude the coin is biased.
Clearly even if the coin is fair you will incorrectly conclude that it is biased about 5% of the time.
This is the significance level of the experiment and it characterizes the Type I error - i.e. to reject the null hypothesis when it is true.
Notice that this characterizes the experiment's quality and not any one outcome of the experiment. The particular results you get are not "significant at the 5% level"; it is the experiment which has
a significance, a type I error, of 5%. It is the tool that is being quantified, not a particular use of the tool.
So are you satisfied with a tool that is wrong 5% of the time?
However, there is more to it than this.
The most important additional idea is that of power. This is the other side of the significance coin - pun intended. Suppose for a moment that the coin was biased, then what is the probability that
you would detect it with your experiment. The problem here is that this probability depends on how biased the coin is. Clearly if the probability of getting a head is 1.0, or close to 1, then we can
expect to see ten heads in ten throws. As the bias gets less then the probability of seeing enough heads to reject the null hypothesis goes down. This probability is called the power of the test and
it is the natural partner of the significance.
Any experiment that you carry out has a measure of its quality in these two probabilities -
• the significance, the probability you will reject the null hypothesis purely by chance
• the power the probability that you will fail to detect a deviation from the null hypothesis that is real.
Clearly the power also depends on the level of significance you have selected.
For example, if we work out the power for the probability of a head in increments of 0.1 and take a significance level of 0.05 then 18 or more heads/tails in 50 throws are what we need to reject the
null hypothesis. The probability of getting at least 18 heads or tails can be seen below:
As the bias of the coin gets further away from 0.5 you can see that the probability of correctly rejecting the null hypothesis increases. What is more, as the sample size increases so does the power.
This is perfectly reasonable as the bigger the sample the more sensitive the test is. For example, for a sample of 100 at the same significance level:
You can see that if the coin if biased as much as 0.25 or 0.75 then you are almost certain to detect it using this experiment.
Last Updated ( Monday, 15 April 2019 ) | {"url":"https://www.i-programmer.info/professional-programmer/103-i-programmer/12640-scientists-data-scientists-and-significance.html","timestamp":"2024-11-01T23:36:50Z","content_type":"text/html","content_length":"33175","record_id":"<urn:uuid:855bb8fe-9b64-4655-a189-b2d6c71bdc9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00248.warc.gz"} |
Funny random stuff
I tend to agree with the sentiment (if not the literal reading) of Kronecker’s statement: “God made the integers, all else is the work of man”
LeSving wrote:
There’s no complex numbers in Maxwell’s equations
Heuh, have you read his original papers and books? Treatise on Electricity and Magnetism (1873), see equation 31 page 204. It’s claimed that Maxwell was the first scientist to ever use complex
numbers (root-square of -1) in “pure Physics”:
Before him it was Fourier, Fresnel, Euler…however, these guys mixed more math than physics. In field equations or matrix equations of electromagnetism, it’s easier to work with EM = E+i*B
(Riemann–Silberstein vector)
Maxell introduced complex analysis in engineering via stability criteria (negative complex roots of characteristic equation)
None of this makes them real…
Last Edited by Ibra at 17 Jun 12:49
johnh wrote:
It happens that complex numbers provide a useful way to describe quantum mechanics, just as they do to describe what is going on in electrical engineering
You don’t need complex numbers in electrical engineering, it just makes the whole thing simpler mathematically, especially for AC. There’s no complex numbers in Maxwell’s equations. Quantum mechanics
on the other hand, has no “real” equivalent to Schrödinger equation that makes sense in terms of computing. AFAIK even Einstein regarded this as a fault in the theory, because “complex numbers are
not part of reality”. Anyway, this isn’t funny random stuff
Airborne_Again wrote:
It’s not that simple.
Nah, keep it simple! No exceptions! I mean, what is worth a general rule if it needs an exception?
Ibra wrote:
the sum method in YT video is sloppy math, the analytical extension is proper math
I agree on both counts!
you will be confronted with complex numbers. But there’s nothing about the subject itself that requires them.
Indeed; as I said, complex numbers have no real world meaning. They, and the crazy idea of sqrt(-1), are just tools which you have to accept as useful for problem solving.
Crypto applications? I don’t think so.
God made the integers, all else is the work of man
I agree… Another one is that the world is analog; digital is a subset of analog
Now how about something completely different:
I came up with this about 5 years ago and was going to patent it. I am sure Hozelock would have paid millions for that… but this German company has done it.
Ahh food😄 I don’t need special tools to collect them, they just come uninvited🙂
gallois wrote:
Ahh food
Hey wait a second: why not make a snail trap out of it? It looks quite easy to track snail movement, and whenever a snail has crossed some “entrance line” you give current on that line so that the
snail won’t return but has to move forward until it enters some form of trap. The other day you just collect them to do, ahem, whatever you think is appropriate.
Last Edited by UdoR at 17 Jun 14:48
It would be simpler to let a snail crawl along some sort of cantilever and when the weight is detected, flip it so it falls into a container from which they cannot get out due to the double tape
Then send them to France where they eat them
Round here they just go out with a bucket and like picking blackberries in half an hour you have a full bucket ready for granny to turn into l’escargot.
The restaurants in Paris and other big cities used to import them from the UK..I don’t think they do it anymore..At least not now the last 3 years or so we have had quite a bit of wet weather.😏 | {"url":"https://euroga.org/forums/hangar-talk/9901-funny-random-stuff?page=74","timestamp":"2024-11-06T15:30:35Z","content_type":"text/html","content_length":"38654","record_id":"<urn:uuid:9a9d0953-fc49-45e5-a265-481d3968c338>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00381.warc.gz"} |
Tag Archive: “math”
I wrote a little program to make shapefiles (GIS map layers) of satellite ground tracks. Here’s the story of its development, recounted from my comments on Twitter (the internet’s water cooler).
Posted on Saturday, March 31st, 2012.
Over the coming summer months, I’ll be writing code to implement a variety of tests of significance. I have a modicum of statistical literacy, but I don’t have the expertise to recognize which tests
are appropriate for particular problems. Rather than just learn which buttons to click in SPSS, I want to understand each common test from the inside out. Teaching a subject is a great way to learn
about it, so I plan to teach the dumbest pupil of all – the unimaginative computer. I’ll post my progress here.
Posted on Thursday, May 7th, 2009.
The for loop is a useful control structure common to many programming languages. It repeats some code for each value of a variable in a given range. In C, a for loop might look like this:
for (x=1; x<=10; x=x+1)
/* do something ten times */
The initial parameter, x=1, starts a counter at one. The second parameter, x<=10, means the loop repeats until the counter reaches ten. The last parameter, x=x+1 (sometimes written x++), explains how
to do the counting: add one to the counter each time through the loop.
In math, sometimes it may be necessary to add up a bunch of a related terms. For example, rather than write out 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10, the problem can be expressed with sigma
notation as a sort of loop:
The x=1 below the big sigma starts the counter at one. The number 10 above the Σ specifies the final value of the counter. The Σ itself means to add up multiple copies of whatever follows, using
integer values of x ranging from the initial 1 to the maximum 10 for each copy.
(The sum is 55.)
For simple arithmetic, this notation is hardly a simplification. However, if the terms to add are complicated, or if there are many instances of them, you’ll find this is clearly a compact and
convenient way to express the sum. Plus, the Σ symbol is wicked fun to write.
The dweebs at Wikipedia have beat the programming-a-sum example to death.
Posted on Wednesday, April 16th, 2008.
I never used to understand radians. Sure, I knew how to use them in typical math problems, but splitting a circle into 2π units? Who divides a circle into six point something parts? You’ve got a
useless little slice left over.
Well, remember the formula to find the circumference, C, of a circle? It’s C = πd, where d is the diameter of the circle. The diameter is simply twice the radius r, so you can also define the
circumference as C = 2πr.
Consider a circle whose radius is one (the unit circle). Its circumference is simply 2π:
The circumference of a circle is the the distance around its edge. If you don’t go all the way around the unit circle, the length of the arc you do traverse should be less than 2π, right? It might be
¼π, ½π or plain old π if you only go an eighth, a quarter, or a half way around the circle.
Look at the wedges created by arcs with these lengths: they describe 45, 90 and 180 degree angles. The radian equivalents of these angles are ¼π, ½π, and π.
That’s it. A radian angle measurement is the length of the biggest arc that will fit in unit pacman’s mouth!
Their close connection with the basic geometry of circles makes radians convenient for a variety of purposes. But don’t worry, degrees are cool, too: 360, that’s what, almost as many days as there
are in a year? Close enough for pagan ceremonies and government work!
Posted on Thursday, February 28th, 2008.
Recently I was introduced (or perhaps reintroduced) to Pascal’s Triangle, an arrangement of integers that lends itself to a variety of purposes, including binomial expansion. I’ve written a little
program to explore this aspect of the idea.
• Interactively adjust the degree of the expansion to see the corresponding triangle.
• Change the binomial terms for clarity or convenience.
• Show exponents of the first and zeroth power to illustrate that the total degree of each term matches the degree of the initial expression.
• Show plus symbols in the triangle to emphasize how each row is the basis for the next.
• Mouse-over highlight of corresponding terms in triangle and expansion.
• Binomial expansions to any degree can be computed (although the triangle is only displayed for small values due to limitations of the current layout spacing).
• Click and drag in triangle to scroll or drag divider to adjust size of expansion pane.
• Mac OS X: pt.app.zip 2.5M
• Windows: pt.exe.zip 1.3M
• Other: pt.tcl.zip 2.3K or pt.kit.zip 2.8K; requires Tcl/Tk 8.5 or a corresponding Tclkit, respectively. The Mac OS X and Windows versions are self-contained (and hundreds of times larger,
Posted on Monday, February 25th, 2008.
A trammel is an impediment to freedom or motion. By placing some geometric restrictions on the motion of a pen or pencil, specific types of figures can be drawn. The trammel method is one of many
ways to apply this principle to the construction of ellipses (circles and ovals). It’s particularly handy if you need to draw a large curve without a compass.
Draw a pair of perpendicular lines where you’d like to place an ellipse. I’ve labeled them as major (long) and minor (short) axes.
On a piece of scrap paper, mark the length of a minor radius. (Place the corner of the scrap paper at one end of the minor axis and mark where the axes cross.)
Then mark the length of half the major axis from the same corner.
Now slide the minor mark along the major axis and the major mark along the minor axis. (The marks on the scrap paper ride the opposite rails.) Make a dot at the corner of the scrap paper at any
position that satisfies these conditions.
Since the marks on the scrap paper can’t leave their rails, you’ll draw a dot right at the tip of an axis whenever the corresponding mark passes through the intersection of axes.
Mark dots in each quadrant of the ellipse.
See where this is leading? The major mark is still sliding along the minor axis and the minor mark is still sliding along the major axis.
Eventually, you’ll have a series of dots describing the perimeter of the ellipse you planned. The more dots you plot, the smoother the outline.
All that’s left to do is connect the dots.
A circle is a special sort of ellipse in which the major and minor axes are simply the same length. In this example, I’ve just drawn one diameter since only one mark is really needed. (Both marks
would fall at the same point, technically, anchoring the card to the center of the circle.)
This technique is often attributed to Archimedes, mover of worlds. (With a long enough lever, and a place to stand…)
Posted on Saturday, February 2nd, 2008.
Doing laundry is never a chore when you go to the Super-X Launderette.
You might even learn something.
Posted on Tuesday, September 25th, 2007. | {"url":"https://anoved.net/tag/math/","timestamp":"2024-11-07T09:44:20Z","content_type":"application/xhtml+xml","content_length":"33370","record_id":"<urn:uuid:0998e41f-0eda-4bd2-a612-5243273d34e3>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00493.warc.gz"} |
Lorentz Force » Encyclopedia
Lorentz Force in Simple Words
By Lois Alson
The Lorentz force is a fundamental concept in physics that describes the force experienced by a charged particle moving through an electric and magnetic field. This force is named after the Dutch
physicist Hendrik Lorentz, who contributed to the understanding of electromagnetic theory.
Definition and Formula
The Lorentz force combines both electric and magnetic forces acting on a charged particle. The total force F experienced by a particle with charge q, moving with velocity v through an electric field
E and a magnetic field B, is given by the equation:
F = q (E + v × B)
• F is the Lorentz force.
• q is the electric charge of the particle.
• E is the electric field.
• v is the velocity of the particle.
• B is the magnetic field.
• × denotes the cross product, producing a vector perpendicular to both v and B.
Components of the Lorentz Force
Electric Force (qE): This part of the Lorentz force is due to the interaction between the charged particle and the electric field. The force is in the direction of the electric field if the charge is
positive and opposite if the charge is negative.
Magnetic Force (qv × B): This component arises from the interaction between the particle’s motion and the magnetic field. The direction of the magnetic force is perpendicular to both the velocity of
the particle and the magnetic field, following the right-hand rule.
Applications of the Lorentz Force
The Lorentz force plays a key role in various technological and natural phenomena:
• Electromagnetic Devices: Devices like electric motors and generators use the Lorentz force to convert electrical energy into mechanical motion and vice versa.
• Particle Accelerators: In particle accelerators, charged particles are accelerated and steered using electric and magnetic fields, governed by the Lorentz force.
• Auroras: The auroras (northern and southern lights) are caused by charged particles from the solar wind interacting with the Earth’s magnetic field, influenced by the Lorentz force.
Implications of the Lorentz Force in Everyday Life
The Lorentz force not only influences advanced scientific research and industrial applications but also affects several everyday phenomena:
• Electronics and Communication: The movement of electrons in devices like smartphones and computers is controlled by electric and magnetic fields, using the Lorentz force.
• Magnetic Resonance Imaging (MRI): MRI machines in medical diagnostics use magnetic fields to generate detailed images of the human body, influenced by the Lorentz force.
• Credit Card Strips: Magnetic strips on credit cards rely on the Lorentz force to store and read information when swiped through a card reader.
Further Exploration and Advanced Topics
For those interested in deeper exploration, several advanced topics build upon the Lorentz force:
• Relativistic Effects: At very high speeds, close to the speed of light, the Lorentz force must be considered within Einstein’s theory of relativity.
• Quantum Mechanics: On a microscopic scale, quantum mechanics provides a more detailed understanding of how particles behave under electromagnetic forces.
• Plasma Physics: In the study of plasma, the Lorentz force is crucial. Plasma is found in places ranging from neon signs to the sun and other stars. | {"url":"https://menlypedia.xyz/lorentz-force/","timestamp":"2024-11-04T10:49:10Z","content_type":"text/html","content_length":"74458","record_id":"<urn:uuid:834c75b8-584a-415c-af4e-79116ed96ae7>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00051.warc.gz"} |
assuming you used exactly 2.000 g of Zn and 1.26 g of CuSO4 compute the theoretical yeild in grams of copper.
assuming you used exactly 2.000 g of Zn and 1.26 g of CuSO4 compute the theoretical yeild in grams of copper.
881 views
Answer to a math question assuming you used exactly 2.000 g of Zn and 1.26 g of CuSO4 compute the theoretical yeild in grams of copper.
89 Answers
1. Calculate moles of Zn: \frac{2.000 \text{ g}}{65.38 \text{ g/mol}} = 0.0306 \text{ mol} \text{ of Zn}
2. Calculate moles of CuSO₄: \frac{1.26 \text{ g}}{159.61 \text{ g/mol}} = 0.0079 \text{ mol} \text{ of CuSO₄}
3. Balanced equation: \text{Zn} + \text{CuSO}_4 \rightarrow \text{ZnSO}_4 + \text{Cu}
4. Limiting reactant is CuSO₄ because it has fewer moles.
5. Moles of copper produced: 0.0079 mol (same as moles of CuSO₄).
6. Mass of copper produced: 0.0079 \text{ mol} \times 63.55 \text{ g/mol} = 0.50 \text{ g of Cu}
Frequently asked questions (FAQs)
What is the value of x in the logarithmic equation log(x) - log(3) = log(5) - log(2)?
What is the value of cosine (θ) if sine (θ) = 0.8?
What is the value of the function f(x) = (10^x) / (e^x) at x = 1? | {"url":"https://math-master.org/general/assuming-you-used-exactly-2-000-g-of-zn-and-1-26-g-of-cuso4-compute-the-theoretical-yeild-in-grams-of-copper","timestamp":"2024-11-12T12:46:23Z","content_type":"text/html","content_length":"243069","record_id":"<urn:uuid:96c9fa0a-8dc7-47c4-8270-a95d6d363960>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00038.warc.gz"} |
Advanced Topics in Machine Learning
Advanced Topics in Machine Learning: 2021-2022
Lecturers Ismail Ilkan Ceylan | Jiarui Gan
Schedule C1 (CS&P) — Computer Science and Philosophy
Schedule C1 — Computer Science
Degrees Schedule C1 — Mathematics and Computer Science
Schedule II — MSc in Advanced Computer Science
Hilary Term — MSc in Advanced Computer Science
Term Hilary Term 2022 (18 lectures)
This is an advanced course on machine learning, focusing on recent advances in machine learning with relational data and on Bayesian approaches to machine learning. The course is organized and taught
as follows:
• Relational learning (Ismail Ilkan Ceylan): 9 lectures
• Bayesian machine learning (Jiarui Gan): 5 lectures + Bayesian deep learning (Yarin Gal): 4 lectures
Recent techniques, particularly those based on neural networks, have achieved remarkable progress in these fields, leading to a great deal of commercial and academic interest. The course will
introduce the definitions of the relevant machine learning models, discuss their mathematical underpinnings, and demonstrate ways to effectively numerically train them. The coursework will be based
on the reproduction/extension of a recent machine learning paper, with students working in teams to accomplish this. Each team will tackle a separate paper, with available topics including embedding
models, graph neural networks, gradient-based Bayesian inference methods, and deep generative models.
Learning outcomes
After studying this course, students will:
• Have knowledge of the different paradigms for performing machine learning and appreciate when different approaches will be more or less appropriate.
• Understand the definition of a range of neural network models, including graph neural networks.
• Be able to derive and implement optimisation algorithms for these models.
• Understand the foundations of the Bayesian approach to machine learning.
• Be able to construct Bayesian models for data and apply computational techniques to draw inferences from them.
• Have an understanding of how to choose a model to describe a particular type of data.
• Know how to evaluate a learned model in practice.
• Understand the mathematics necessary for constructing novel machine learning solutions.
• Be able to design and implement various machine learning algorithms in a range of real-world applications.
Required background knowledge includes probability theory, linear algebra, continuous mathematics, multivariate calculus, and a basic understanding of graph theory, and logic. Students are required
to have taken a Machine Learning course. Good programming skills are needed, and lecture examples and practicals will be given mainly in Python and PyTorch.
Relational Learning: Lectures 1–9, İsmail İlkan Ceylan
• Lecture 1. Relational data & node embeddings
• Lecture 2. Knowledge graph embedding models
• Lecture 3. Graph neural networks
• Lecture 4. Message passing neural network architectures
• Lecture 5. Expressive power of message passing neural networks
• Lecture 6. Higher-order graph neural networks
• Lecture 7. Message passing neural networks: unique features and randomization
• Lecture 8. Generative graph neural networks
• Lecture 9. Overview of applications of graph neural networks
Bayesian Machine Learning: Lectures 10–18, Jiarui Gan and Yarin Gal
• Lecture 10. Machine learning paradigms
• Lecture 11. Bayesian modeling 1
• Lecture 12. Bayesian modeling 2
• Lecture 13. Bayesian inference 1
• Lecture 14. Bayesian inference 2
• Lecture 15 & 16. Bayesian deep learning
• Lecture 17 & 18. Bayesian deep learning
Overview of relational learning and reasoning. Embedding models and knowledge graphs, inductive capacity of embedding models, graph representation learning, graph neural networks, expressive power of
message passing neural networks, limitations and extensions. Overview of the Bayesian paradigm and its use in machine learning. Generative models, Bayesian inference, Monte Carlo methods, variational
inference, probabilistic programming, model selection and learning, amortized inference, deep generative models, variational autoencoders.
Reading list
• William L. Hamilton. (2020). Graph Representation Learning.
Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 14,
No. 3 , Pages 1-159. https://www.cs.mcgill.ca/~wlh/grl_book/
• Christopher M. Bishop, “Pattern Recognition and Machine Learning”, Springer, 2006
• Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong, “Mathematics for Machine Learning”, Cambridge University Press, 2020 https://mml-book.github.io/
Related research
Themes Artificial Intelligence and Machine Learning
Taking our courses
This form is not to be used by students studying for a degree in the Department of Computer Science, or for Visiting Students who are registered for Computer Science courses
Other matriculated University of Oxford students who are interested in taking this, or other, courses in the Department of Computer Science, must complete this online form by 17.00 on Friday of 0th
week of term in which the course is taught. Late requests, and requests sent by email, will not be considered. All requests must be approved by the relevant Computer Science departmental committee
and can only be submitted using this form. | {"url":"https://www.cs.ox.ac.uk/teaching/courses/2021-2022/advml/","timestamp":"2024-11-13T22:47:18Z","content_type":"text/html","content_length":"32786","record_id":"<urn:uuid:bec87331-68d0-436e-bfd8-5bc20c2e3b2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00112.warc.gz"} |
Unveiling Electronic Resistance: A Deep Dive
You Can’t Resist This: Exploring Resistance within Electronic Systems
Dec 29, 2023
Ever wonder what makes electronics tick? What goes inside circuits and systems that make everything work? Every electronic system consists of three major elements: voltage, current, and resistance.
In this blog, we will explore resistance, what it is, its history, and how to use it.
Electrical resistance was first discovered and formulated by German physicist Georg Simon Ohm in 1827. Ohm's groundbreaking work, Ohm's Law, laid the foundation for understanding the relationship
between voltage, current, and resistance in electrical circuits. His invention of the resistor, a device specifically designed to provide a controlled amount of resistance in an electrical circuit,
revolutionized the field of electrical engineering.
A resistor is simply a passive two-terminal element that reduces current flow and divides voltages. Imagine a water pipe where the flow of water is equal to an electrical current. The flow is reduced
when the diameter of the pipe is reduced, adding resistance into the system.
Resistor in Series
A series of resistors is when a resistor is connected from one terminal to another in succession in an electrical circuit. This arrangement is known as resistors in series, as shown in Figure 1. In
this configuration, the total resistance is the sum of the individual resistances (Riedel & Nilsson, 2015, 58).
Figure 1
Due to the nature of series circuit, the current in a closed loop equals the current source (i[s ])
i[s] = i[1] = i[2] = i[3] (1)
By applying Kirchhoff's voltage law (KVL), the voltage (v[s])can be obtained by,
-v[s] + i[s]R[1] + i[s]R[2] + i[s]R[3] = 0 (2)
v[s] = i[s](R[1] + R[2] + R[3]) (3)
Therefore, the equivalent resistance is,
R[eq] = R[1] + R[2] + R[3] (4)
Resistor in Parallel
Contrary to resistors in series, resistors in parallel are connected with both terminals side by side in a circuit. In this configuration, the total resistance is calculated differently. The
reciprocal of the total resistance is equal to the sum of the reciprocals of the individual resistances (Riedel & Nilsson, 2015, 59). Parallel resistors allow current to divide among multiple paths,
providing flexibility and load-sharing capabilities in electrical systems.
Figure 2
Due to the nature parallel circuit, the voltage across each resistor equals the source voltage (v[s])
v[s] = v[R[1]] = v[R[2]] = v[R[3]] (5)
i[R[1]]R[1] = i[R[2]]R[2] = i[R[3]]R[3] = v[s ](6)
iR[1] = v[s]/R[1]; iR[2] = v[s]/R[2]; iR[3] = v[s]/R[3] (7)
By applying Kirchhoff’s current law (KCL), the current (i[s]) can be obtained by,
i[s ] = iR[1] + iR[2] + iR[3] (8)
Apply Equation 7 to Equation 8, we have a new equation:
i[s] = v[s](1/R[1] + 1/R[2] + 1/R[3]) (9)
from which the equivalent resistance is,
i[s]/v[s] = 1/R[eq] = 1/R[1] + 1/R[2] + 1/R[3] (10)
A quick tip to find the equivalent resistance when we are dealing with only two resistors connected in parallel: the equivalent resistance can be obtained by dividing the product of the resistances
by their sum. It is important to note that this formula is applicable exclusively to the specific scenario of two resistors in parallel (Riedel & Nilsson, 2015, 60).
Figure 3
From Figure 3, the equivalent resistance is,
R[eq = ]R[1]R[2/]R[1 + ]R[2 ](11)
Voltage Divider
A voltage divider is a circuit that divides the input voltage into smaller, adjustable voltages. It consists of resistors connected in series. By varying the resistance values, we can control the
output voltage across specific resistors, enabling us to power different components with varying voltage requirements.
Figure 4
We use Kirchhoff's voltage law (KVL) and Kirchhoff's current law (KCL); a blog about these two can be found in "What Is Voltage Law" to derive the voltage (v[1] and v[2]) from Figure 4 (Riedel &
Nilsson, 2015, 61).
v[s]= iR1 + iR2 (12)
i = v[s]/ R[1 + ]R[2] (13)
Apply Equation 13 to the voltage equation v[1] and v[2] :
v[1] = iR1 = v[s] (R1/R[1 + ]R[2])(14)
v[2] = iR2 = v[s] (R2/R[1 + ]R[2])(15)
Current Divider
Similar to voltage dividers, current dividers allow the division of current among different paths. By using resistors in parallel, we can distribute the total current into smaller currents that flow
through individual branches of the circuit. This technique finds applications in situations where precise current allocation is needed.
Figure 5
From Equation 11, we can obtain the voltage (v) from Figure 5.
v = ([ ]R[1]R[2/]R[1 + ]R[2]) i[s] (16)
From Ohm's law, we know the voltage across the two resistors are,
v = i[1]R[1] = i[2]R[2] (17)
Replacing Equation 16 into Equation 17, we have
i[1] = ([ ]R[2/]R[1 + ]R[2]) i[s] (18)
i[2] = ([ ]R[1][/]R[1 + ]R[2]) i[s] (19)
Equation 18 and Equation 19 demonstrate that when two resistors are connected in parallel, the current splits in such a way that the current flowing through one resistor is equal to the total current
entering the parallel combination, multiplied by the resistance of the other resistor, and divided by the sum of the resistances (Riedel & Nilsson, 2015, 63).
Measure Voltage and Current
To understand and analyze electrical circuits, it is essential to measure voltage and current accurately. Voltage can be measured using a voltmeter, a device connected in parallel across a component
to measure the potential difference. Current, on the other hand, is measured using an ammeter, which is inserted in series with the component to measure the flow of electrons (Riedel & Nilsson, 2015,
Now, we have a multimeter (Figure 6) which can measure both the voltage, current, and other aspects. You can check out this
on how to utilize the multimeter properly.
Figure 6
Electrical resistance, a concept rooted in the works of Georg Simon Ohm, plays a vital role in understanding and manipulating electrical circuits. From resistors in series and parallel to voltage
dividers and current dividers, each concept opens a realm of possibilities in electrical engineering. By effectively measuring voltage and current and employing delta to wye conversions, engineers
can design and analyze complex electrical networks. Embrace the power of electrical resistance and unlock the true potential of modern technology.
Amplify your electrical engineering exam prep with School of PE. Our comprehensive courses come with what you need to succeed on exam day! Check out our
FE Electrical
PE Electrical
exam review courses now.
Riedel, S. A., & Nilsson, J. W. (2015). Electric Circuits. Pearson.
About the Author: Khoa Tran
Khoa Tran is an electrical engineer working at the Los Angeles Department of Water and Power and is currently pursuing his master's in electrical Power from the University of Southern California. He
is fluent in both Vietnamese and English and is interested in outdoor activities and exploring new things. | {"url":"https://www.schoolofpe.com/blog/2023/12/exploring-resisitance-within-electronic-systems.html","timestamp":"2024-11-10T15:04:02Z","content_type":"text/html","content_length":"187015","record_id":"<urn:uuid:0d9a9ff5-1399-4fe7-87d4-24d9e2f560bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00032.warc.gz"} |
Risk in the Real World | Windham Insights
The Challenge
G.H. Hardy, the legendary mathematician, once claimed that his greatest disappointment in life was learning that someone had discovered an application for one of his theorems. Although Hardy’s
disinterest in practical matters was a bit extreme, it sometimes seems that scholars view the real world as an uninteresting, special case of their models. This disinterest in real world complexity,
unfortunately, often brings unpleasant consequences. In this article, we address two simplifications about risk that often lead investors to underestimate their portfolios’ exposure to loss. First,
investors typically measure risk as the probability of a given loss, or the amount that can be lost with a given probability, at the end of their investment horizon, ignoring what might occur along
the way. Second, they base these risk estimates on return histories that fail to distinguish between calm environments, when losses are rare, and turbulent environments, when losses occur more
commonly. We show how to estimate exposure to loss in a way that accounts for within-horizon losses as well as the regime-dependent nature of large drawdowns.
End-of-Horizon Exposure to Loss
Probability of Loss
We measure the likelihood that a portfolio will experience a certain percentage loss at the end of a given horizon by computing the standardized difference between the percentage loss and the
portfolio’s expected return, and then converting this quantity to a probability by assuming returns are normally distributed. Unfortunately, asset class returns are not normally distributed. Returns
tend to be lognormally distributed, because compounding causes positive cumulative returns to drift further above the mean than the distance negative cumulative returns drift below the mean. (See
Chapter 18 for more detail about log-normality.) This means that logarithmic returns, also called continuous returns, are more likely to be described by a normal distribution. Therefore, in order to
use the normal distribution to estimate probability of loss we must express return and standard deviation in continuous units, as shown in Equation 12.1. We provide the full mathematical procedure
for converting returns from discrete to continuous units in Chapter 18.
For example, a positive 10 percent return will accumulate to 20 percent over two periods, whereas a negative 10 percent return will fall to 19 percent percent over two periods.
$Pr_{end}=N \left[\frac{ln(1+L)-\mu_cT}{\sigma_c\sqrt{T}} \right]$
The equation above describes the probability of loss at the end of the horizon, where $N \left[ \cdot \right]$ is the cumulative normal distribution function, $ln$ is the natural logarithm, $L$
equals the cumulative percentage loss in discrete units, $\mu_c$ equals the annualized expected return in continuous units, $T$ equals the number of years in the investment horizon, and $\sigma_c$
equals the annualized standard deviation of continuous returns.
Value at Risk
Value at risk gives us another way to measure a portfolio’s exposure to loss. It is equal to a portfolio’s initial wealth multiplied by a quantity equal to expected return over a stated horizon minus
the portfolio’s standard deviation multiplied by the standard normal variable associated with a chosen probability. Again, we express return and standard deviation in continuous units.
$\text{VaR}=W \times ( e^{\mu_c T + N^{-1}[P_L] \sigma_c \sqrt{T}}-1)$
As the two equations reveal, probability of loss and value at risk are flip sides of the same coin.
These formulas assume that we only observe our portfolio at the end of the investment horizon and disregard its values throughout the investment horizon. We argue that investors should and do
perceive risk differently. They care about exposure to loss throughout their investment horizon and not just at its conclusion.
Within-Horizon Exposure to Loss
Within-Horizon Probability of Loss
To account for losses that might occur prior to the conclusion of the investment horizon, we use a statistic called first-passage time probability, which gives the probability that a portfolio will
depreciate to a particular value over some horizon if it is monitored continuously. It is equal to
The first part of this equation, up to the second plus sign, gives the end-of-horizon probability of loss, as shown previously. It is augmented by another probability multiplied by a constant, and
there are no circumstances in which this constant equals zero or is negative. Therefore, the probability of loss throughout an investment horizon must always exceed the probability of loss at the end
of the horizon. Moreover, within-horizon probability of loss rises as the investment horizon expands in contrast to end-of-horizon probability of loss, which diminishes with time.
Within-Horizon Value at Risk
These two measures of within-horizon exposure to loss bring us closer to the real world because they recognize that investors care about drawdowns that might occur throughout the investment horizon.
But they ignore another real world complexity, to which we now turn.
Thus far we have assumed implicitly that returns come from a single distribution. It is more likely that there are distinct risk regimes, each of which may be normally distributed but with a unique
risk profile. For example, we might assume that returns fit into two regimes, a calm regime characterized by below-average volatility and stable correlations, and a turbulent regime characterized by
above-average volatility and unstable correlations. The returns within a turbulent regime are likely to be event driven, whereas the returns within a quiet regime perhaps reflect the simple fact that
prices are noisy.
We detect a turbulent regime by observing whether or not returns across a set of asset classes behave in an uncharacteristic fashion, given their historical pattern of behavior. One or more asset
class returns, for example, may be unusually high or low, or two asset classes that are highly positively correlated may move in the opposite direction.
There is persuasive evidence showing that returns to risk are substantially lower when markets are turbulent than when they are calm. This is to be expected, because when markets are turbulent
investors become fearful and retreat to safe asset classes, thus driving down the prices of risky asset classes. This phenomenon is documented below.
This description of turbulence is captured by a statistic known as the Mahalanobis distance. It is used to determine the contrast in different sets of data. In the case of returns, it captures
differences in magnitude and differences in interactions, which can be thought of respectively as volatility and correlation surprise.
Each dot represents the returns of stocks and bonds for a particular period, such as a day or a month. The center of the ellipse represents the average of the joint returns of stocks and bonds. The
observations within the ellipse represent return combinations associated with calm periods, because the observations are not particularly unusual. The observations outside the ellipse are
statistically unusual and therefore likely to characterize turbulent periods. Notice that some returns just outside the narrow part of the ellipse are closer to the ellipse’s center than some returns
within the ellipse at either end. This illustrates the notion that some periods qualify as unusual not because one or more of the returns was unusually high or low, but instead because the returns
moved in the opposite direction that period despite the fact that the asset classes are positively correlated, as evidenced by the positive slope of the scatter plot.
This measure of turbulence is scale independent in the following sense. Observations that lie on a particular ellipse all have the same Mahalanobis distance from the center of the scatter plot, even
though they have different Euclidean distances.
We suggest that investors measure probability of loss and value at risk not based on the entire sample of returns, but rather on the returns that prevailed during the turbulent subsamples, when
losses occur more commonly. This distinction is especially important if investors care about losses that might occur throughout their investment horizon, and not only at its conclusion.
The Bottom Line
Investors dramatically underestimate their portfolios' exposure to loss, because they focus on the distribution of returns at the end of the investment horizon and disregard losses that might occur
along the way.
Moreover, investors base their estimates of exposure to loss on full-sample standard deviations, which obscure episodes of higher risk that prevail during turbulent periods. It is during these
periods that losses are likely to occur.
Complexity is inconvenient but not always unimportant.
Video Presentations
Within-horizon Risk
This video describes several shortcomings of end of horizon risk measures and defines two innovative risk measures that consider risk throughout the investment period, as summarized in this article.
Risk Regimes
This video introduces a method to partition historical returns into those that are associated with quiet periods and those that reflect market turbulence. | {"url":"https://insights.windhamlabs.com/risk-management/risk-in-the-real-world","timestamp":"2024-11-05T16:16:49Z","content_type":"text/html","content_length":"462060","record_id":"<urn:uuid:80f2f43a-d2fc-4aa9-9a39-05ff57576c32>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00780.warc.gz"} |
Applications of Statistical Mechanics
Starts with basic revision of Statistical Mechanics, then moves on to 2-Level Systems, Equilibrium Constants, etc (the more common exam questions!)
Applications of Statistical Mechanics Notes
Statistical Mechanics links Energy-Levels (Quantum) to Macroscopic Properties (Thermodynamics).
Partition Function
For N molecules in volume V and temperature T,
Helmholtz Free Energy:
AN = -kT ln QN
Links to other thermodynamic quantities using this relation.
dA = - pdV – SdT
Calculating the partition function much simpler if can divide into non-interacting parts (e.g. independent molecules).
Total Energy, Ei = εi,1 + εi,2 + … εi,N
If molecules are identical and indistinguishable:
QN = qN/N!
But QN = qN if e.g. on lattice sites.
Molecular Partition Function (Revision)
At high T: (kT > 2Bhc) [ ~ 15K for HCl where B ~ 10cm-1 ]
qrot = kT/σBhc
where σ = symmetry number.
More generally for 3 moments of inertia (A,B,C):
2-Level Non-interacting System
e.g. lattice of immobile molecules with low-lying electronic energy levels.
e.g. crystal of NO [ Ground State 2Π1/2 with low-lying 2Π3/2 ]
QN = qN
Both are 2-fold degenerate,
Heat Capacities of Gases
Heat Capacity as a Function of T:
Little maximum present in Crot due to isolation of the bottom two levels at low temperature. High T → Ground → 1st → 2nd …
(also degeneracies).
Transition Regime → kT ~< Bhc –
Need full qrot:
Let T* = kT/Bhc
i.e. Cvrot for different diatomic systems is identical if plotted versus T*, the reduced temperature.
Nuclear Spin Effects (identical nuclei)
Implies only odd J allowed, i.e. half the states are missing, consistent with:
qrothomo = qrothetero / 2
Can interpret the behaviour from the two-level system results:
Para: g1/go = 5, at kT ~ 0.3 x 6Bhc, Cvmax ~ 1.5R.
Ortho: g1/go = 7/3, at kT ~ 0.35 x 10Bhc, Cvmax ~ 0.8R.
Equilibrium, go = 1, g1 = 3 x 3, so g1/go = 9, kT ~ 0.25 x 2Bhc, Cvmax = 2R.
Since E-levels equally spaced, can’t use 2-level system results.
At high T, qvib → kT/hv, i.e. Cv → R x no. of normal modes.
Can thus measure vibration spectrum from the heat capacity.
The form of the Universal Function is:
Absolute Gas-Phase Entropies
Calorimetric Entropy.
Area under Graph of Cp vs. T.
Statistical Mechanics:
Spectroscopic Entropy
Difference in the calorimetric and spectroscopic entropies: Sresid = Sspec – Scal.
Sresid ≠ 0 → complications!
• Undetected low temperature phase transition.
• Use of wrong low-T degeneracy.
• Disorder in the crystal at low T, such that So > 0.
→ trans, rot, vib, etc give additive contributions.
Monatomic Gas:
NOTE: Substances only distinguished by mass (at same T), and Smo = constant + R ln m3/2.
Rotational Entropy –
Linear diatomic – kT > Bhc.
qrot = kT/σBhc
NOTE: The role of symmetry number (CO = 214 while N2O = 220 despite similar m & I).
Vibrational Entropy –
If kT << hv, qvib → 1 and Svib = 0.
Svib =
Electronic Entropy –
Only important for degenerate g-states, e.g. O2 – go = 3, or low-lying states, e.g. NO.
Verification of the 3rd Law –
Compare Calorimetric (assuming Smo (0) = 0) and spectroscopic entropies.
But for N2O:
If random orientations → R ln 2 ( = 5.8 J K-1 mol-1)
Transition to fully ordered state occurs at such low temperature (due to very small ΔH from ordering) that kinetics so slow that transition is not seen.
Chemical Equilibrium
Kp = e-ΔG/RT
i.e. the different of molar Gibbs Energies at T of interest where p = po.
Link to statistical mechanics:
G = A + pV.
Gm = A + RT (non-interacting).
Gm = - kT ln (qN/N!) + RT
Using Stirling’s Approximation:
- kT [ N ln q – N ln N + N ] + RT
Illustration – isotope exchange reaction –
H2 + D2 ⇌ 2HD
T = 1000K, vH2 = 4400cm-1
Establish common energy zero by atomising reactants and products.
Ratio of Partition Functions –
qel = 1 for all.
Often high degree of cancellation when moles on left hand side of reaction equation equal the moles on the right hand side.
All factors cancel from this except masses.
All cancel except μ and σ
Total gives:
Illustration 2 – Thermal Ionisation
Fraction of ionised Cs atoms at p/po = 10-4.
Statistical Mechanics expression for Kc:
Transition State Theory
A + BC → AB + C
Quasi-Equilibrium Assumption:
Kc because equilibrium in concentration.
ks = rate of passing through transition state = a velocity.
Alternatively, can think of vibrational motion along s.
Asymmetric stretching frequency v – assume transition state breaks up each time bond is stretched, i.e. ks = v.
Kc‡ includes vibrational and rotational partition functions of ‡, but factor qvib for asymmetric stretch. Therefore,
Since v very low (weak bond) → kT/hv. Therefore,
Check that Transition State Theory gives same rate constant as Simple Collision Theory when applied to collision of structureless spheres:
Transition State Rate Constant for:
H + H2 → H2 + H @ 1000K.
Experimental of ~ 109 at these temperatures (accuracy in barrier height).
Interacting Systems
So far, have written QN = qN.
But for e.g. atomic solid cannot write E =
Implies must effect a transformation to new variables which do not interact.
e.g. thermal properties of an insulating crystal.
Crystal is harmonic, i.e. for small displacements E is proportional to (δri)2.
Transform from atomic to normal coordinates (normal modes) – phonons.
Each modes has a frequency vi and is independent of degree of excitation of other modes,
Since N ~ NA, spectrum is dense and continuous. Let P(v) be probability of finding mode frequency v [
Internal Energy (wrt ZPE) –
Heat Capacity –
But Cv only 5.4J K-1 for diamond at 298K (one of “failures of classical physics”).
Einstein’s Theory of Heat Capacity
Data appears to be Universal but functional form is not correct. Einstein → exponential at low T.
DeBye Model
Improved the model of vibrational spectrum. Normal modes characterised by the wavelength λ.
1. For small k (large λ) behaves like a continuum (v = c/λ = ck/2π).
2. Cut-off wavevector, k = π/a (λ = 2a) – zone boundary.
3. P(v)? Consider chain, length L
So for low T, Cv T3.
[ Integrand → 0 for x << xmax → integral is T-dependent ].
Heat Capacity of Metals
Cv for metals looks DeBye-like. What about free electrons?
Free Electron Model – e-e interactions screen e-ion interaction. Electrons move independently in a smeared out 3D particle-in-a-box like-potential.
Energy Levels are filled according to the Aufbau Principle.
At T = 0, highest occupied level is:
NB: Huge contribution to pressure balanced by electron-ion interaction.
Only a few electrons near to εF can be excited thermally. Distribution described by Fermi-Dirac (not Boltzmann).
Therefore negligible except for very low T (all other contributions → 0).
Absorption of Gas in a Porous Material
Example of Phase Equilibrium.
Consider N immobile atoms absorbed in M sites.
With no 2 atoms in a single site – allows for interatomic repulsion.
Number of configurations = M(M-1) … (M-N-1). = M!/(M-N)!
But it atoms indistinguishable = M!/(M-N)!N!
Therefore neglect atom motion:
Sconf = k ln [M!/(M-N)!N!]
Each pore is of side d. Absorption energy εo. Model motion of absorbed atom as translation in volume V = d3.
Partition Function = d3/Λ3 = q.
Provided box is big enough that quantum effects not important,
Δε ~ h2/8md2.
Therefore the Helmholtz Free Energy:
Absorbed atoms in equilibrium with gas at pressure p when μabs = μgas.
For Ng gas-phase molecules,
Also, pV = NgkT.
Provides a microscopic interpretation for the empirical parameter b.
Classical Interacting Systems (fluid)
For most liquids, can treat system classically, e.g. for Ar at 84K (triple point):
i.e. thermal de Broglie wavelength << interatomic separation, therefore classical.
How do we get a classical partition function? Expressed in terms of particle positions and momenta. A point in “phase-space” is specified by all molecular positions and momenta.
R, P = [ r1, r2, … rN ; p1, p2 … pN ]
Energy at R, P = H(R,P) and probability of being at R, P:
What is constant –
1. Need 1/N! (indistinguishability)
2. ΔxiΔpi ≥ h (HUcP)
i.e. each point in phase-space only distinguishable from another if drdp > h.
Therefore overall need 1/h3N factor.
Check, do we get same value as classical limit of quantum partition function.
e.g. ideal gas (quantum):
Classically V → 0:
Integral is:
Application: Low Density Interacting Gas (Virial)
Suppose atoms of a gas interact via a pair potential:
i.e. Virial Expansion, B is the Virial Coefficient.
These notes are copyright Alex Moss, © 2003-present.
I am happy for them to be reproduced, but please include credit to this website if you're putting them somewhere public please! | {"url":"https://alchemyst.co.uk/note/applications-of-statistical-mechanics","timestamp":"2024-11-11T12:46:53Z","content_type":"text/html","content_length":"105657","record_id":"<urn:uuid:d9a11a3e-9501-49c0-85f4-1e42d91615e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00223.warc.gz"} |
Linear Preserver Problems
September 17th, 2010
Recall that in linear algebra, the vector p-norm of a vector x ∈ C^n (or x ∈ R^n) is defined to be
where x[i] is the i^th element of x and 1 ≤ p ≤ ∞ (where the p = ∞ case is understood to mean the limit as p approaches ∞, which gives the maximum norm). By far the most well-known of these norms is
the Euclidean norm, which arises when p = 2. Another well-known norm arises when p = 1, which gives the “taxicab” norm.
The problem that will be investigated in this post is to characterize what operators preserve the p-norms – i.e., what their isometries are. In the p = 2 case of the Euclidean norm, the answer is
well-known: the isometries of the real Euclidean norm are exactly the orthogonal matrices, and the isometries of the complex Euclidean norm are exactly the unitary matrices. It turns out that if p ≠
2 then the isometry group looks much different. Indeed, Exercise IV.1.3 of [1] asks the reader to show that the isometries of the p = 1 and p = ∞ norms are what are known as complex permutation
matrices (to be defined). We will investigate those cases as well as a situation when p ≠ 1, 2, ∞.
p = 1: The “Taxicab” Norm
Recall that a permutation matrix is a matrix with exactly one “1” in each of its rows and columns, and a “0” in every other position. A signed permutation matrix (sometimes called a generalized
permutation matrix) is similar – every row and column has exactly one non-zero entry, which is either 1 or -1. Similarly, a complex permutation matrix is a matrix for which every row and column has
exactly one non-zero entry, and every non-zero entry is a complex number with modulus 1.
It is not difficult to show that if x ∈ R^n then the taxicab norm of x is preserved by signed permutation matrices, and if x ∈ C^n then the taxicab norm of x is preserved by complex permutation
matrices. We will now show that the converse holds:
Theorem 1. Let P ∈ M[n] be an n × n matrix. Then
if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).
Proof. We only prove the “only if” implication, because the “if” implication is trivial (an exercise left for the reader?). So let’s suppose that P is an isometry of the p = 1 vector norm. Let e[i]
denote the i^th standard basis vector, let p[i] denote the i^th column of P, and let p[ij] denote the (j,i)-entry of P (i.e., the j^th entry of p[i]). Then Pe[i] = p[i] for all i, so
Similarly, P(e[i] + e[k]) = p[i] + p[k] for all i,k, so
However, by the triangle inequality for the absolute value we know that the above equality can only hold if there exist non-negative real constants c[ijk] ≥ 0 such that p[ij] = c[ijk]p[kj]. However,
it is similarly the case that P(e[i] – e[k]) = p[i] – p[k] for all i,k, so
Using the equality condition for the complex absolute value again we then know that there exist non-negative real constants d[ijk] ≥ 0 such that p[ij] = -d[ijk]p[kj]. Using the fact that each c[ijk]
and each d[ijk] is non-negative, it follows that each row contains at most one non-zero entry (and each row must indeed contain at least one non-zero entry since the isometries of any norm must be
Thus every row has exactly one non-zero entry. By using (again) the fact that isometries must be nonsingular, it follows that each of the non-zero entries must occur in a distinct column (otherwise
there would be a zero column). The fact that each non-zero entry has modulus 1 follows from simply noting that P must preserve the p = 1 norm of each e[i].
p = ∞: The Maximum Norm
As with the p = 1 case, it is not difficult to show that if x ∈ R^n then the maximum norm of x is preserved by signed permutation matrices, and if x ∈ C^n then the maximum norm of x is preserved by
complex permutation matrices. We will now show that the converse holds in this case as well:
Theorem 2. Let P ∈ M[n] be an n × n matrix. Then
if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).
Proof. Again, we only prove the “only if” implication, since the “if” implication is trivial. So suppose that P is an isometry of the p = ∞ vector norm. As before, let e[i] denote the i^th standard
basis vector, let p[i] denote the i^th column of P, and let p[ij] denote the (j,i)-entry of P (i.e., the j^th entry of p[i]). Then Pe[i] = p[i] for all i, so
In other words, each entry of P has modulus at most 1, and each column has at least one element with modulus equal to 1. Also, P(e[i] ± e[k]) = p[i] ± p[k] for all i,k, so
It follows that if |p[ij]| = 1, then p[kj] = 0 for all k ≠ i. Because each column has an element with modulus 1, it follows that each row has exactly 1 non-zero entry. Because each column has an
entry with modulus 1, it follows that each row and column has exactly 1 non-zero entry, which must have modulus 1, so P is a signed or complex permutation matrix.
Any p ≠ 2
When p = 2, the isometries are orthogonal/unitary matrices. When p = 1 or p = ∞, the isometries are signed/complex permutation matrices, which are a very small subset of the orthogonal/unitary
matrices. One might naively expect that the isometries for other values of p somehow interpolate between those two extremes. Alternatively, one might expect that the signed/complex permutation
matrices are the only isometries for all other values of p as well. It turns out that the latter conjecture is correct [2,3].
Theorem 3. Let P ∈ M[n] be an n × n matrix and let p ∈ [1,2) ∪ (2,∞]. Then
if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).
1. R. Bhatia, Matrix analysis. Volume 169 of Graduate texts in mathematics (1997).
2. S. Chang and C. K. Li, Certain Isometries on R^n. Linear Algebra Appl. 165, 251–265 (1992).
3. C. K. Li, W. So, Isometries of l[p] norm. Amer. Math. Monthly 101, 452–453 (1994).
Isometries of Unitarily-Invariant Complex Matrix Norms
August 15th, 2010
Recall that a unitarily-invariant matrix norm is a norm on matrices X ∈ M[n] such that
One nice way to think about unitarily-invariant norms is that they are the matrix norms that depend only on the matrix’s singular values. Some unitarily-invariant norms that are particularly
well-known are the operator (spectral) norm, trace norm, Frobenius (Hilbert-Schmidt) norm, Ky Fan norms, and Schatten p-norms (in fact, I would say that the induced p-norms for p ≠ 2 are the only
really common matrix norms that aren’t unitarily-invariant – I will consider these norms in the future).
The core question that I am going to consider today is what linear maps preserve singular values and unitarily-invariant matrix norms. Clearly multiplication on the left and right by unitary matrices
preserve such norms (by definition). However, the matrix transpose also preserves singular values and all unitarily-invariant norms – are there other linear maps on complex matrices that preserve
these norms? For a more thorough treatment of this question, the interested reader is directed to [1,2].
Linear Maps That Preserve Singular Values
We first consider the simplest of the above questions: what linear maps Φ : M[n] → M[n] are such that the singular values of Φ(X) are the same as the singular values of X for all X ∈ M[n]? In order
to answer this question, recall Theorem 1 from my previous post, which states [3] that if Φ is an invertible map such that Φ(X) is nonsingular whenever X is nonsingular, then there exist M, N ∈ M[n]
with det(MN) ≠ 0 such that
In order to make use of this result, we will first have to show that any singular-value-preserving map is invertible and sends nonsingular matrices to nonsingular matrices. To this end, notice
(recall?) that the operator norm of a matrix is equal to its largest singular value. Thus, any map that preserves singular values must be an isometry of the operator norm, and thus must be invertible
(since all isometries are easily seen to be invertible).
Furthermore, if we use the singular value decomposition to write X = USV for some unitaries U, V ∈ M[n] and a diagonal matrix of singular values S ∈ M[n], then det(X) = det(USV) = det(U)det(S)det(V)
= det(UV)det(S). Because UV is unitary, we know that |det(UV)| = 1, so we have |det(X)| = |det(S)| = det(S); that is, the product of the singular values of X equals the absolute value of its
determinant. So any map that preserves singular values also preserves the absolute value of the matrix determinant. But any map that preserves the absolute value of determinants must preserve the set
of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from the above result about invertibility-preserving maps that if Φ preserves singular values then there exist
M, N ∈ M[n] with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MX^TN.
We will now prove that M and N must each in fact be unitary. To this end, pick any unit vector x ∈ C^n and let c denote the Euclidean length of Mx:
By the fact that Φ must preserve singular values (and hence the operator norm) we have that if y ∈ C^n is any other unit vector, then
Because y was an arbitrary unit vector, we have that N^* = (1/c)U, where U ∈ M[n] is some unitary matrix. It can now be similarly argued that M = cV for some unitary matrix V ∈ M[n]. By simply
adjusting constants, we have proved the following:
Theorem 1. Let Φ : M[n] → M[n] be a linear map. Then the singular values of Φ(X) equal the singular values of X for all X ∈ M[n] if and only if there exist unitary matrices U, V ∈ M[n] such that
Isometries of the Frobenius Norm
We now consider the problem of characterizing isometries of the Frobenius norm defined for X ∈ M[n] by
That is, we want to describe the maps Φ that preserve the Frobenius norm. It is clear that the Frobenius norm of X is just the Euclidean norm of vec(X), the vectorization of X. Thus we know
immediately from the standard isomorphism that sends operators to bipartite vectors and super operators to bipartite operators that Φ preserves the Frobenius norm if and only if there exist families
of operators {A[i]}, {B[i]} such that Σ[i] A[i] ⊗ B[i] is a unitary matrix and
It is clear that any map of the form described by Theorem 1 above can be written in this form, but there are also many other maps of this type that are not of the form described by Theorem 1. In the
next section we will see that the Frobenius norm is essentially the only unitarily-invariant complex matrix norm containing isometries that are not of the form described by Theorem 1.
Isometries of Other Unitarily-Invariant Norms
One way of thinking about Theorem 1 is as providing a canonical form for any map Φ that preserves all unitarily-invariant norms. However, in many cases it is enough that Φ preserves a single
unitarily-invariant norm for it to be of that form. For example, it was shown by Schur in 1925 [4] that if Φ preserves the operator norm then it must be of the form described by Theorem 1. The same
result was proved for the trace norm by Russo in 1969 [5]. Li and Tsing extended the same result to the remaining Schatten p-norms, Ky Fan norms, and (p,k)-norms in 1988 [6].
In fact, the following result, which completely characterizes isometries of all unitarily-invariant complex matrix norms other than the Frobenius norm, was obtained in [7]:
Theorem 2. Let Φ : M[n] → M[n] be a linear map. Then Φ preserves a given unitarily-invariant norm that is not a multiple of the Frobenius norm if and only if there exist unitary matrices U, V ∈ M[n]
such that
1. C.-K. Li and S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001).
2. C.-K. Li, Some aspects of the theory of norms. Linear Algebra and its Applications 212–213, 71–100 (1994).
3. J. Dieudonne, Sur une generalisation du groupe orthogonal a quatre variables. Arch. Math. 1, 282–287 (1949).
4. I. Schur, Einige bemerkungen zur determinanten theorie. Sitzungsber. Preuss. Akad. Wiss. Berlin 25, 454–463 (1925).
5. B. Russo, Trace preserving mappings of matrix algebra. Duke Math. J. 36, 297–300 (1969).
6. C.-K. Li and N.-K. Tsing, Some isometries of rectangular complex matrices. Linear and Multilinear Algebra 23, 47–53 (1988).
7. C.-K. Li and N.-K. Tsing, Linear operators preserving unitarily invariant norms of matrices. Linear and Multilinear Algebra 26, 119–132 (1990).
An Introduction to Linear Preserver Problems
August 5th, 2010
The theory of linear preserver problems deals with characterizing linear (complex) matrix-valued maps that preserve certain properties of the matrices they act on. For example, some of the most
famous linear preserver problems ask what a map must look like if it preserves invertibility or the determinant of matrices. Today I will focus on introducing some of the basic linear preserver
problems that got the field off the ground – in the near future I will explore linear preserver problems dealing with various families of norms and linear preserver problems that are actively used
today in quantum information theory. In the meantime, the interested reader can find a more thorough introduction to common linear preserver problems in [1,2].
Suppose Φ : M[n] → M[n] (where M[n] is the set of n×n complex matrices) is a linear map. It is well-known that any such map can be written in the form
where {A[i]}, {B[i]} ⊂ M[n] are families of matrices (sometimes referred to as the left and right generalized Choi-Kraus operators of Φ (phew!)). But what if we make the additional restrictions that
Φ is an invertible map and Φ(X) is nonsingular whenever X ∈ M[n] is nonsingular? The problem of characterizing maps of this type (which are sometimes called invertibility-preserving maps) is one of
the first linear preserver problems that was solved, and it turns out that if Φ is invertibility-preserving then either Φ or T ○ Φ (where T represents the matrix transpose map) can be written with
just a single pair of Choi-Kraus operators:
Theorem 1. [3] Let Φ : M[n] → M[n] be an invertible linear map. Then Φ(X) is nonsingular whenever X ∈ M[n] is nonsingular if and only if there exist M, N ∈ M[n] with det(MN) ≠ 0 such that
In addition to being interesting in its own right, Theorem 1 serves as a starting point that allows for the simple derivation of several related results.
Determinant-Preserving Maps
For example, suppose Φ is a linear map such that det(Φ(X)) = det(X) for all X ∈ M[n]. We will now find the form that maps of this type (called determinant-preserving maps) have using Theorem 1. In
order to use Theorem 1 though, we must first show that Φ is invertible.
We prove that Φ is invertible by contradiction. Suppose there exists X ≠ 0 such that Φ(X) = 0. Then because Φ preserves determinants, it must be the case that X is singular. Then there exists a
singular Y ∈ M[n] such that X + Y is nonsingular. It follows that 0 ≠ det(X + Y) = det(Φ(X + Y)) = det(0 + Φ(Y)) = det(Y) = 0, a contradiction. Thus it must be the case that X = 0 and so Φ is
Furthermore, any map that preserves determinants must preserve the set of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from Theorem 1 that for any
determinant-preserving map Φ there must exist M, N ∈ M[n] with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MX^TN. However, in this case we have det(X) = det(Φ(X)) = det(MXN) = det(MN)det(X) for
all X ∈ M[n], so det(MN) = 1. Conversely, it is not difficult (an exercise left to the interested reader) to show that any map of this form with det(MN) = 1 must be determinant-preserving. What we
have proved is the following result, originally due to Frobenius [4]:
Theorem 2. Let Φ : M[n] → M[n] be a linear map. Then det(Φ(X)) = det(X) for all X ∈ M[n] if and only if there exist M, N ∈ M[n] with det(MN) = 1 such that
Spectrum-Preserving Maps
The final linear preserver problem that we will consider right now is the problem of characterizing linear maps Φ such that the eigenvalues (counting multiplicities) of Φ(X) are the same as the
eigenvalues of X for all X ∈ M[n] (such maps are sometimes called spectrum-preserving maps). Certainly any map that is spectrum-preserving must also be determinant-preserving (since the determinant
of a matrix is just the product of its eigenvalues), so by Theorem 2 there exist M, N ∈ M[n] with det(MN) = 1 such that either Φ(X) = MXN or Φ(X) = MX^TN.
Now note that any map that preserves eigenvalues must also preserve trace (since the trace is just the sum of the matrix’s eigenvalues) and so we have Tr(X) = Tr(Φ(X)) = Tr(MXN) = Tr(NMX) for all X ∈
M[n]. This implies that Tr((I – NM)X) = 0 for all X ∈ M[n], so we have NM = I (i.e., M = N^-1). Conversely, it is simple (another exercise left for the interested reader) to show that any map of this
form with M = N^-1 must be spectrum-preserving. What we have proved is the following characterization of maps that preserve eigenvalues:
Theorem 3. Let Φ : M[n] → M[n] be a linear map. Then Φ is spectrum-preserving if and only if det(Φ(X)) = det(X) and Tr(Φ(X)) = Tr(X) for all X ∈ M[n] if and only if there exists a nonsingular N ∈ M
[n] such that
1. C. K. Li, S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001).
2. C. K. Li, N. K. Tsing, Linear preserver problems: A brief introduction and some special techniques. Linear Algebra and its Applications 162–164, 217–235 (1992).
3. J. Dieudonne, Sur une generalisation du groupe orthogonal a quatre variables. Arch. Math. 1,
282–287 (1949).
4. G. Frobenius, Uber die Darstellung der endlichen Gruppen durch Linear Substitutionen. Sitzungsber
Deutsch. Akad. Wiss. Berlin 994–1015 (1897). | {"url":"https://njohnston.ca/tag/linear-preserver-problems/","timestamp":"2024-11-13T11:04:02Z","content_type":"application/xhtml+xml","content_length":"68745","record_id":"<urn:uuid:a57be157-f141-4a78-b7f4-acbee940cf39>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00069.warc.gz"} |
Adding Three Digit Numbers Worksheets No Regrouping 2024 - NumbersWorksheets.net
Adding Three Digit Numbers Worksheets No Regrouping
Adding Three Digit Numbers Worksheets No Regrouping – Innovative inclusion drills are great ways to expose students to algebra principles. These drills have selections for 1-moment, 3-minute, and
five-minute drills with personalized varies of twenty to 100 issues. Additionally, the drills can be found in a side to side file format, with figures from to 99. These drills can be personalized to
each student’s ability level. That is the best part. Here are some further innovative inclusion drills:
Count up forward by a single
Depending on can be a valuable strategy for developing quantity reality fluency. Count on a number by addingone and two. Alternatively, three. For instance, five in addition two equates to twenty,
etc. Relying on a amount with the addition of you might produce the identical result for both large and small figures. These inclusion worksheets involve practice on relying on a number with equally
fingertips along with the variety collection. Adding Three Digit Numbers Worksheets No Regrouping.
Exercise multiple-digit addition by using a number line
Open amount line is excellent versions for place and addition worth. Within a past publish we reviewed the different mental methods pupils are able to use to provide amounts. Employing a variety
lines are a terrific way to file most of these strategies. In this post we are going to check out a great way to process multi-digit add-on by using a quantity line. Listed below are 3 methods:
Exercise adding doubles
The training adding doubles with inclusion numbers worksheet could be used to assist kids create the concept of a increases simple fact. Once a doubles fact is when the same number is added more
than. If Elsa had four headbands and Gretta had five, they both have two doubles, for example. By practicing doubles with this worksheet, students can develop a stronger understanding of doubles and
gain the fluency required to add single digit numbers.
Practice introducing fractions
A Exercise adding fractions with supplement phone numbers worksheet is really a beneficial device to produce your child’s standard familiarity with fractions. These worksheets deal with a number of
concepts relevant to fractions, like assessing and purchasing fractions. Additionally, they provide beneficial difficulty-fixing tactics. You may obtain these worksheets at no cost in Pdf file file
format. The initial step is to make sure your child knows the symbols and rules related to fractions.
Practice introducing fractions using a number range
In terms of rehearsing adding fractions using a variety line, individuals are able to use a fraction location importance mat or perhaps a quantity line for merged numbers. These assist in matching
portion equations on the remedies. The area worth mats could have a quantity of illustrations, with the formula published at the top. College students can then choose the answer they really want by
punching slots next to every single selection. When they have chosen the proper response, the student can attract a cue next to the solution.
Gallery of Adding Three Digit Numbers Worksheets No Regrouping
Leave a Comment | {"url":"https://www.numbersworksheets.net/adding-three-digit-numbers-worksheets-no-regrouping/","timestamp":"2024-11-12T23:28:23Z","content_type":"text/html","content_length":"59749","record_id":"<urn:uuid:953e574f-c517-4400-8f70-321b0bf22c2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00219.warc.gz"} |
The price of gold has increased by 35 per year from 2000 In the year 2000 Harry bought a gold ring for 590 Which of the following functions fx can be used to re
The price of gold has increased by 35% per year from 2000. In the year 2000, Harry bought a gold ring for $590. Which of the following functions f(x) can be used to represent the price of the ring x
years after 2000? f(x) = 590(1.35)x f(x) = 590(0.65)x f(x) = 35(0.41)x f(x) = 35(1.59)x
Respuesta :
Convert the rate into decimal
Since the price of gold has increased by 35% per year so it's growth function which is
F(x) = 590(1.35)^x
Answer Link
first one g so 590(1.35)x f(x)
brainliest g
Answer Link
Otras preguntas | {"url":"http://cahayasurya.ac.id/question/4532305","timestamp":"2024-11-07T19:03:35Z","content_type":"text/html","content_length":"155176","record_id":"<urn:uuid:a9eae6f5-9407-4252-bc24-d57f32fc238b>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00337.warc.gz"} |
Geometric sequences
Watch the video to see how to sum the sequence. Can you adapt the method to sum other sequences?
What do you notice about these families of recurring decimals?
Evaluate these powers of 67. What do you notice? Can you convince someone what the answer would be to (a million sixes followed by a 7) squared?
What is the sum of: 6 + 66 + 666 + 6666 ............+ 666666666...6 where there are n sixes in the last term?
On Friday the magic plant was only 2 centimetres tall. Every day it doubled its height. How tall was it on Monday?
Can you work out how many flowers there will be on the Amazing Splitting Plant after it has been growing for six weeks?
In the limit you get the sum of an infinite geometric series. What about an infinite product (1+x)(1+x^2)(1+x^4)... ?
A circle is inscribed in an equilateral triangle. Smaller circles touch it and the sides of the triangle, the process continuing indefinitely. What is the sum of the areas of all the circles?
The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you notice. | {"url":"https://nrich.maths.org/tags/geometric-sequences","timestamp":"2024-11-13T18:43:08Z","content_type":"text/html","content_length":"58855","record_id":"<urn:uuid:9e75720a-678f-431f-8153-4856d66e13b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00666.warc.gz"} |
Chapter 9.3: Graphs of Exponential Functions
Learning Objectives
• Graph exponential functions.
• Graph exponential functions using transformations.
As we discussed in the previous section, exponential functions are used for many real-world applications such as finance, forensics, computer science, and most of the life sciences. Working with an
equation that describes a real-world situation gives us a method for making predictions. Most of the time, however, the equation itself is not enough. We learn a lot about things by seeing their
pictorial representations, and that is exactly why graphing exponential equations is a powerful tool. It gives us another layer of insight for predicting future events.
Graphing Exponential Functions
Before we begin graphing, it is helpful to review the behavior of exponential growth. Recall the table of values for a function of the form (Figure) change as the input increases by
Each output value is the product of the previous output and the base, constant ratio. In fact, for any exponential function with the form
Notice from the table that
• the output values are positive for all values of
• as
• as
(Figure) shows the exponential growth function
Figure 1. Notice that the graph gets close to the x-axis, but never touches it.
The domain of
To get a sense of the behavior of exponential decay, we can create a table of values for a function of the form (Figure) change as the input increases by
Again, because the input is increasing by 1, each output value is the product of the previous output and the base, or constant ratio
Notice from the table that
• the output values are positive for all values of
• as
• as
(Figure) shows the exponential decay function,
Figure 2.
The domain of
Characteristics of the Graph of the Parent Function f(x) = b^x
An exponential function with the form
• one-to-one function
• horizontal asymptote:
• domain:
• range:
• x-intercept: none
• y-intercept:
• increasing if
• decreasing if
(Figure) compares the graphs of exponential growth and decay functions.
Figure 3.
How To
Given an exponential function of the form
1. Create a table of points.
2. Plot at least y-intercept
3. Draw a smooth curve through the points.
4. State the domain,
Sketching the Graph of an Exponential Function of the Form f(x) = b^x
Sketch a graph of
Show Solution
Before graphing, identify the behavior and create a table of points for the graph.
• Since
• Create a table of points as in (Figure).
• Plot the y-intercept,
Draw a smooth curve connecting the points as in (Figure).
Figure 4.
The domain is
Try It
Show Solution
The domain is
Graphing Transformations of Exponential Functions
Transformations of exponential graphs behave similarly to those of other functions. Just as with other parent functions, we can apply the four types of transformations—shifts, reflections, stretches,
and compressions—to the parent function
Graphing a Vertical Shift
The first transformation occurs when we add a constant vertical shift (Figure).
Figure 5.
Observe the results of shifting
• The domain,
• When the function is shifted up
□ The y-intercept shifts up
□ The asymptote shifts up
□ The range becomes
• When the function is shifted down
□ The y-intercept shifts down
□ The asymptote also shifts down
□ The range becomes
Graphing a Horizontal Shift
The next transformation occurs when we add a constant horizontal shift opposite direction of the sign. For example, if we begin by graphing the parent function (Figure).
Figure 6.
Observe the results of shifting
• The domain,
• The asymptote,
• The y-intercept shifts such that:
□ When the function is shifted left y-intercept becomes
□ When the function is shifted right y-intercept becomes
Shifts of the Parent Function f(x) = b^x
For any constants
• vertically same direction of the sign of
• horizontally opposite direction of the sign of
• The y-intercept becomes
• The horizontal asymptote becomes
• The range becomes
• The domain,
How To
Given an exponential function with the form
1. Draw the horizontal asymptote
2. Identify the shift as
3. Shift the graph of
4. State the domain,
Graphing a Shift of an Exponential Function
Show Solution
We have an exponential equation of the form
Draw the horizontal asymptote
Identify the shift as
Shift the graph of
Figure 7.
The domain is
Try It
Show Solution
The domain is
How To
Given an equation of the form
• Press [Y=]. Enter the given exponential equation in the line headed “Y[1]=”.
• Enter the given value for Y[2]=”.
• Press [WINDOW]. Adjust the y-axis so that it includes the value entered for “Y[2]=”.
• Press [GRAPH] to observe the graph of the exponential function along with the line for the specified value of
• To find the value of [2ND] then [CALC]. Select “intersect” and press [ENTER] three times. The point of intersection gives the value of x for the indicated value of the function.
Approximating the Solution of an Exponential Equation
Show Solution
Press [Y=] and enter Y[1]=. Then enter 42 next to Y2=. For a window, use the values –3 to 3 for [GRAPH]. The graphs should intersect somewhere near
For a better approximation, press [2ND] then [CALC]. Select [5: intersect] and press [ENTER] three times. The x-coordinate of the point of intersection is displayed as 2.1661943. (Your answer may be
different if you use a different window or use a different value for Guess?) To the nearest thousandth,
Graphing a Stretch or Compression
While horizontal and vertical shifts involve adding constants to the input or to the function itself, a stretch or compression occurs when we multiply the parent function (Figure), and the
compression, using (Figure).
Graphing Reflections
In addition to shifting, compressing, and stretching a graph, we can also reflect it about the x-axis or the y-axis. When we multiply the parent function x-axis. When we multiply the input by
reflection about the y-axis. For example, if we begin by graphing the parent function x-axis, (Figure), and the reflection about the y-axis (Figure).
Summarizing Translations of the Exponential Function
Now that we have worked with each type of translation for the exponential function, we can summarize them in (Figure) to arrive at the general equation for translating exponential functions.
1, and notes the following changes: the reflected function is decreasing as x moves from 0 to infinity, the asymptote remains x=0, the x-intercept remains (1, 0), the key point changes to (b^(-1),
1), the domain remains (0, infinity), and the range remains (-infinity, infinity). The second column shows the left shift of the equation g(x)=log_b(x) when b>1, and notes the following changes: the
reflected function is decreasing as x moves from 0 to infinity, the asymptote remains x=0, the x-intercept changes to (-1, 0), the key point changes to (-b, 1), the domain changes to (-infinity, 0),
and the range remains (-infinity, infinity).”>
Translations of the Parent Function
Translation Form
• Horizontally
• Vertically
Stretch and Compress
• Stretch if
• Compression if
Reflect about the x-axis
Reflect about the y-axis
General equation for all translations
Translations of Exponential Functions
A translation of an exponential function has the form
Where the parent function,
• shifted horizontally
• stretched vertically by a factor of
• compressed vertically by a factor of
• shifted vertically
• reflected about the x-axis when
Note the order of the shifts, transformations, and reflections follow the order of operations.
Writing a Function from a Description
Write the equation for the function described below. Give the horizontal asymptote, the domain, and the range.
• y-axis, and then shifted up
Show Solution
We want to find an equation of the general form
• We are given the parent function
• The function is stretched by a factor of
• The function is reflected about the y-axis. We replace
• The graph is shifted vertically 4 units, so
Substituting in the general form we get,
The domain is
Try It
Write the equation for function described below. Give the horizontal asymptote, the domain, and the range.
• x-axis and then shifted down
Show Solution
Access this online resource for additional instruction and practice with graphing exponential functions.
Key Equations
General Form for the Translation of the Parent Function
Key Concepts
• The graph of the function y-intercept at (Figure).
• If
• If
• The equation
• The equation (Figure).
• Approximate solutions of the equation (Figure).
• The equation (Figure).
• When the parent function x-axis. When the input is multiplied by y-axis. See (Figure).
• All translations of the exponential function can be summarized by the general equation (Figure).
• Using the general equation (Figure).
Section Exercises
1. What role does the horizontal asymptote of an exponential function play in telling us about the end behavior of the graph?
Show Solution
An asymptote is a line that the graph of a function approaches, as
2. What is the advantage of knowing how to recognize transformations of the graph of a parent function algebraically?
3. The graph of y-axis and stretched vertically by a factor of y-intercept, domain, and range.
Show Solution
4. The graph of y-axis and compressed vertically by a factor of y-intercept, domain, and range.
5. The graph of x-axis and shifted upward y-intercept, domain, and range.
Show Solution
6. The graph of x-axis, and then shifted downward y-intercept (to the nearest thousandth), domain, and range.
7. The graph of x-axis, and then shifted downward y-intercept, domain, and range.
Show Solution
For the following exercises, graph the function and its reflection about the y-axis on the same axes, and give the y-intercept.
Show Solution
For the following exercises, graph each set of functions on the same axes.
For the following exercises, match each function with one of the graphs in (Figure).
For the following exercises, use the graphs shown in (Figure). All have the form
Figure 13.
19. Which graph has the largest value for
20. Which graph has the smallest value for
21. Which graph has the largest value for
22. Which graph has the smallest value for
For the following exercises, graph the function and its reflection about the x-axis on the same axes.
For the following exercises, graph the transformation of
Show Solution
Horizontal asymptote:
For the following exercises, describe the end behavior of the graphs of the functions.
For the following exercises, start with the graph of
For the following exercises, each graph is a transformation of
For the following exercises, find an exponential equation for the graph.
For the following exercises, evaluate the exponential functions for the indicated value of
For the following exercises, use a graphing calculator to approximate the solutions of the equation. Round to the nearest thousandth.
53. Explore and discuss the graphs of
Show Solution
The graph of y-axis of the graph of y-axis,
54. Prove the conjecture made in the previous exercise.
55. Explore and discuss the graphs of n and real number
Show Solution
The graphs of n, real number
56. Prove the conjecture made in the previous exercise. | {"url":"https://ecampusontario.pressbooks.pub/sccmathtechmath1/chapter/graphs-of-exponential-functions/","timestamp":"2024-11-04T14:52:20Z","content_type":"text/html","content_length":"346046","record_id":"<urn:uuid:48c50139-50e0-4900-a40b-5fa4d64da281>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00252.warc.gz"} |
739. Daily Temperatures
In the realm of algorithmic problem-solving, certain patterns and techniques emerge as powerful tools for tackling a variety of challenges. One such technique involves the clever use of stacks to
efficiently solve problems that require finding the next greater element or value in an array. In this blog, we’ll explore how to leverage the stack data structure to solve the “Next Greater Element”
problem in Java, using a practical example and step-by-step explanation.
Understanding the Problem: The “Next Greater Element” problem involves finding, for each element in an array, the first element to its right that is greater than the current element. This problem
frequently arises in scenarios where you need to determine, for example, the next higher temperature in a weather forecast array or the next larger number in a sequence of integers.
Solution Approach: To solve the “Next Greater Element” problem efficiently, we can utilize a stack data structure to track elements as we traverse the array from right to left. Here’s how the
solution works:
1. We iterate through the input array in reverse order, starting from the last element.
2. For each element encountered, we compare it with the elements currently in the stack.
3. If the current element is greater than the top element of the stack, we pop elements from the stack until we find an element that is greater than the current element or until the stack is empty.
4. If the stack becomes empty, it indicates that there is no greater element to the right of the current element.
5. Otherwise, the top element of the stack is the next greater element, and we calculate its distance from the current element.
6. We push the current element onto the stack to continue the comparison process.
7. We repeat this process for all elements in the array, resulting in an array of distances representing the next greater element for each element in the input array.
class Pair {
int value, index;
Pair(int v, int i) {
value = v;
index = i;
class Solution {
public int[] dailyTemperatures(int[] temperatures) {
Stack<Pair> stack = new Stack<>();
int[] answer = new int[temperatures.length];
for (int i = temperatures.length - 1; i >= 0; i--) {
int element = temperatures[i];
while (!stack.isEmpty() && stack.peek().value <= element) {
answer[i] = stack.isEmpty() ? 0 : stack.peek().index - i;
stack.push(new Pair(element, i));
return answer; | {"url":"https://blog.bhanunadar.com/739-daily-temperatures/","timestamp":"2024-11-09T19:16:54Z","content_type":"text/html","content_length":"82228","record_id":"<urn:uuid:e71e9f4c-6b7a-439a-badb-da6274015c8e>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00776.warc.gz"} |
Nyquist criterion
From Encyclopedia of Mathematics
A necessary and sufficient condition for the stability of a linear closed-loop system formulated in terms of properties of the open-loop system.
Consider the linear single input-linear single output system with the following transfer function:
where it is assumed that the degree of the polynomial $M(z)$ does not exceed that of the polynomial $N(z)$ (i.e. $W(p)$ is a proper rational function). The original Nyquist criterion gives necessary
and sufficient conditions for the stability of the closed-loop system with unity feedback $u=y$. This is done in terms of the complex-valued function $z=W(i\omega)$ of the real variable $\omega\in[0,
\infty)$ (the amplitude-phase characteristic of the open-loop system) which describes a curve in the complex $z$-plane, known as the Nyquist diagram. Suppose that the characteristic polynomial $N(z)$
of the open-loop system has $k$, $0\leq k\leq n$, roots with positive real part and $n-k$ roots with negative real part. The Nyquist criterion is as follows: The closed-loop system is stable if and
only if the Nyquist diagram encircles the point $z=-1$ in the counter-clockwise sense $k/2$ times. (An equivalent formulation is: The vector drawn from $-1$ to the point $W(i\omega)$ describes an
angle $\pi k$ in the positive sense as $\omega$ goes from $0$ to $+\infty$.)
This criterion was first proposed by H. Nyquist [1] for feedback amplifiers; it is one of the frequency criteria for the stability of linear systems (similar, e.g., to the Mikhailov criterion, see
[2], [3]). It is important to note that if the equations of some of the elements of the systems are unknown, the Nyquist diagram can be constructed experimentally, by feeding a harmonic signal of
variable frequency to the input of the open feedback [4].
Generalizations of this criterion have since been developed for multivariable, infinite-dimensional and sampled-data systems, e.g. [5], , , .
[1] H. Nyquist, "Regeneration theory" Bell System Techn. J. , 11 : 1 (1932) pp. 126–147
[2] B.V. Bulgakov, "Oscillations" , Moscow (1954) (In Russian)
[3] M.A. Lavrent'ev, B.V. Shabat, "Methoden der komplexen Funktionentheorie" , Deutsch. Verlag Wissenschaft. (1967) (Translated from Russian)
[4] Ya.N. Roitenberg, "Automatic control" , Moscow (1978) (In Russian)
[5] L.S. Gnoenskii, G.A. Kamenskii, L.E. El'sgol'ts, "Mathematical foundations of the theory of control systems" , Moscow (1969) (In Russian)
For generalizations of the Nyquist criterion in various directions, see [a1].
[a1] C.A. Desoer, M. Vidyasagar, "Feedback systems: input-output properties" , Acad. Press (1975)
[a2] C.A. Desoer, "A general formulation of the Nyquist stability criterion" IEEE Trans. Circuit Theory , CT-12 (1965) pp. 230–234
[a3] C.A. Desoer, Y.T. Wang, "On the generalized Nyquist stability criterion" IEEE Trans. Autom. Control , AC-25 (1980) pp. 187–196
[a4] F.M. Callier, C.A. Desoer, "On simplifying a graphical stability criterion for linear distributed feedback systems" IEEE Trans. Automat. Contr. , AC-21 (1976) pp. 128–129
[a5] J.M.E. Valenca, C.J. Harris, "Nyquist criterion for input-output stability of multivariable systems" Int. J. Control , 31 (1980) pp. 917–935
[a6] P. Faurre, M. Depeyrot, "Elements of system theory" , North-Holland (1977)
How to Cite This Entry:
Nyquist criterion. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Nyquist_criterion&oldid=32802
This article was adapted from an original article by N.Kh. Rozov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/index.php?title=Nyquist_criterion&oldid=32802","timestamp":"2024-11-10T06:22:18Z","content_type":"text/html","content_length":"18687","record_id":"<urn:uuid:cbc5e73d-1803-45bd-9e8c-0b9d6f269199>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00789.warc.gz"} |
Muzzle Stamp Tool
This is not my original idea, and I’m sure a lot of members here have used some variation of it. But it might be helpful to a new builder. It’s simple to make and works nicely. Used this today on a
.36 SMR I’m building. Just use a wooden dowel smaller than the diameter of your bore, apply tape to create a snug bore fit, secure your punch to the dowel with tie raps and maybe a little tape, and
mark the center of each flat. Just a firm hit with a small hammer will create the dimple. Any bulging around the dimple can be removed with sandpaper backed on a flat file. | {"url":"https://americanlongrifles.org/forum/index.php?topic=9874.0;prev_next=next","timestamp":"2024-11-05T01:06:33Z","content_type":"application/xhtml+xml","content_length":"84611","record_id":"<urn:uuid:2618cd7f-97c6-4f47-a1e6-263a7602d26b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00801.warc.gz"} |
Round Wedged Prisms
Application Ideas
Introduction and Setup
Wedge prisms are designed to be used, either individually or in a pair, for beam steering applications. This is done by individually controlling the rotation of each prism using our PRM1Z8 motorized
rotation stages. The tables below correspond to either the imperial or metric product list for the configuration pictured to the right. Clicking on the item number will bring up a pop-up window with
more information about that component.
An Application Note was prepared to describe this process in further detail, and will be referenced periodically here. For a full download of the Application Note, click the button at the upper right
of this tab. To the upper right is also a a download for an Excel spreadsheet which can be used to model a Risley Prism Scanner.
Item # Qty Description
Imperial Product List
PS814-A 2 Ø1" Round Wedge Prism, 10° Beam Deviation, AR Coating: 350 - 700 nm
SM1W189 2 Wedge Prism Mounting Shim, 18° 9' Wedge Angle
PRM1Z8 2 Ø1" Motorized Precision Rotation Stage (Imperial)
KDC101 2 K-Cube Brushed DC Servo Motor Controller (Power Supply Not Included)
KPS201 2 15 V, 2.66 A Power Supply Unit with 3.5 mm Jack Connector for One K- or T-Cube
LDM635 1 Compact Laser Diode Module with Shutter, 635 nm, 4.0 mW
KM200V 1 Large Kinematic V-Clamp Mount
TR3 3 Ø1/2" Optical Post, SS, 8-32 Setscrew, 1/4"-20 Tap, L = 3"
PH3 3 Ø1/2" Post Holder, Spring-Loaded Hex-Locking Thumbscrew, L = 3"
BA2 1 Mounting Base, 2" x 3" x 3/8"
BA1 1 Mounting Base, 1" x 3" x 3/8"
MB8 1 Aluminum Breadboard 8" x 8" x 1/2", 1/4"-20 Taps
Item # Qty Description
Metric Product List
PS814-A 2 Ø1" Round Wedge Prism, 10° Beam Deviation, AR Coating: 350 - 700 nm
SM1W189 2 Wedge Prism Mounting Shim, 18° 9' Wedge Angle
PRM1/MZ8 2 Ø1" Motorized Precision Rotation Stage (Metric)
KDC101 2 K-Cube Brushed DC Servo Motor Controller (Power Supply Not Included)
KPS201 2 15 V, 2.66 A Power Supply Unit with 3.5 mm Jack Connector for One K- or T-Cube
LDM635 1 Compact Laser Diode Module with Shutter, 635 nm, 4.0 mW
KM200V/M 1 Large Kinematic V-Clamp Mount, Metric
TR75/M 3 Ø12.7 mm Optical Post, SS, M4 Setscrew, M6 Tap, L = 75 mm
PH75/M 3 Ø12.7 mm Post Holder, Spring-Loaded Hex-Locking Thumbscrew, L=75 mm
BA2/M 1 Mounting Base, 50 mm x 75 mm x 10 mm
BA1/M 1 Mounting Base, 25 mm x 75 mm x 10 mm
MB2020/M 1 Aluminum Breadboard, 200 mm x 200 mm x 12.7 mm, M6 Taps
Note: The photo to the right shows a previous generation TDC001 T-Cube and previous-generation KPS101 Power Supplies.
Tracing a Circle with One Prism
For this application, only one prism was mounted in a rotation mount. The incoming beam was deviated off axis by the wedge prism. Once the rotation mount was activated, the wedge prism was spun about
the optical axis, which caused the deviated beam to trace out a small circle, as shown in the long-exposure photograph to the right. The radius of this circle can be calculated as:
This is equation 9 in the Application Note linked above. In this equation r' is this circle's radius, S is the distance from the last surface of the prism to the scanning surface, T is the center
thickness of the prism, Φ[o] is the beam angle relative to the original optical axis after exiting the second surface of the prism, Φ[i] is the angle created from the beam's incidence on the first
surface of the prism according to Snell's Law, and Φ[p] is the resulting angle the beam takes inside the prism relative to the first surface's normal according to Snell's Law.
Tracing a Circle with Two Prisms
For this application, the rotation mounts are set so that the wedges of both prisms are aligned to the home position, where both prisms' thickest sections are vertical. Since each prism will deviate
the beam by the deviation angle, the total beam deviation for two prisms with the wedges aligned will be approximately twice the size. If both prisms are rotated at the same rate and in the same
direction, the beam will trace out a circle which is approximately twice the size of the circle traced out by a single prism. The long-exposure photograph to the right was taken with the prism
assembly at the same distance from the screen as the one-prism circle above. Notice that the circle in the two-prism case is about twice the diameter of the one formed with one prism. The radius of
this circle can be calculated as:
This is equation 18 in the Application Note linked above. In this parametric equation, r[max] is the radius of this circle (any subsequent shape created by this setup is enclosed by this radius), T
is the middle thickness of the fist prism, T' is the effective thickness of prism 2 after the deviated beam travels through it, Φ[i] is the angle created from the beam's incidence on the first
surface of the fist prism according to Snell's Law, Φ[p] is the resulting angle the beam takes inside the prism relative to the first surface's normal according to Snell's Law, z is the distance from
the second surface of the second prism to the scanning surface, S is the distance between the prisms, and Φ[o] is the beam angle relative to the original optical axis after exiting the second surface
of the first prism.
Tracing a Spiral with Two Prisms
It can be shown that a large variety of shapes can be traced while rotating the two prisms at constant speeds. These shapes are dictated by the equation:
This is equation 21 in the Application Note linked above; please reference that Application Note or the accompanying spreadsheet for the definitions of these variables. As an example, the
long-exposure photograph to the right shows two wedge prisms being used to trace out a spiral. This was realized by first setting the beam to be undeviated, and then having the prisms rotate in the
same direction, with one prism set to rotate 0.5 deg/s faster than the other. This, and many other shapes, can be created on the "Third Approx." sheet of the downloadable Excel sheet above. To create
this spiral, try inputting 25 deg/s to ω[1] (rotation speed of prism 1), 24.5 deg/s to ω[2] (rotation speed of prism 2), and 80 seconds to t (run time), with a Δθ (home position offset) of 180
Posted Comments:
No Comments Posted | {"url":"https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=11551","timestamp":"2024-11-07T15:50:49Z","content_type":"text/html","content_length":"128249","record_id":"<urn:uuid:52859aba-dd30-4db2-a831-03c1204f89e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00278.warc.gz"} |
Torpedo calculating discs
The location of the torpedo director at a different location than the torpedo tube resulted in the separation of deflection angle calculating and the aiming function. In the case of surface vessels,
the usual torpedo aiming devices, which were set to the deflection angles received from the command station, were installed at the torpedo tubes. The calculating devices, called director angle
calculating discs (Germ. Torpedorechenscheibe) which solved the torpedo triangle were located at the command stations. In the case of submarines, during surfaced attacks, the common torpedo director
located on the bridge was used. During submerged attack, the aiming was done by periscope, while the deflection angle was calculated in the control room by means of torpedo calculating discs or
The torpedo calculating disc utilized the torpedo triangle theory based on two known variables: angle on the bow and target speed to determine the torpedo deflection angle. Additionally, torpedo
calculating discs could have scales which determined (for the assumed angle on the bow and target speed) the maximal distance to the target at the moment of torpedo launch, for the target to be in
range of the torpedo.
Drawing 1. Replica of a German torpedo calculating disc
The disc consists of a circular dial with a scale (from 0 to 180º port and starboard) on its edge which represents the target angle on the bow. It is also used for reading the deflection angle. The
dial contains parallel lines that represent the target bearing. These line are inside the smaller circle and connect respective values of the angle on this circle, i.e.: 10º and 170º, 20º and 160º,
30º and 150º and so on.
Over the dial is the lineal, which can be rotated around the center. On this lineal is a scale for setting target speed – in the range of 6 to 30 knots.
To calculate deflection angle, the lineal has to be rotated so that its edge (crossing the dial center) indicates the angle on the bow on the outer dial scale. Then the target bearing, which crosses
the lineal against the respective target speed has to be read. The angle which corresponds to the target bearing is the deflection angle.
There are arcs drawn on the dial that are the parts of circles which have the same radius as the circle which encloses the target bearing lines, but their centers are shifted. Each arc is described
by a number that represents the distance to the target at the moment of torpedo launch.
To determine the maximum distance to the target (for the assumed angle on the bow and target speed), the arch which crosses the lineal against the respective target speed (interpolating if necessary)
is used.
The scales of the presented calculating disc are correct for torpedoes running at a speed of 30 knots and a maximum range of 12500 meters (data for the G7a torpedo). In the case of other torpedoes,
the scales have to be altered.
The operating principle of the torpedo calculating disc is as follows: the torpedo triangle is constructed in such way that point A represents the ship launching the torpedo, point B represents the
target at the moment of torpedo launch, and point C is the point where the torpedo will intersect (hit) the target. The length of the triangle side AC is equal to the length of torpedo speed vector
(that is at 30 knots) – it is also equal to the radius of the circle, on which are located points A and D. The target speed scale on the lineal is at the same scale as the torpedo speed – with the
value 0 at the center of the dial. The length of triangle side BC is equal to the target speed vector. Side AB is parallel to the circle diameter which connects the points representing bearings 0
and 180º.
The angle on the bow γ between triangle sides AB and BC is equal to the angle between side BC and the circle diameter which connects the points representing bearings 0 and 180º, so the angle on the
bow can be easily set by aligning the lineal to the corresponding position on the dial’s outer scale.
The deflection angle β between sides AC and AB is the same as the angle between sides AD and CD, equal to the angle between side CD and the circle diameter which connects the points representing
bearings 0 and 180º. So the deflection angle can be easily read from the dial’s outer scale (to make this easier, the target bearing lines are described).
Drawing 2. Torpedo calculating disc operating principle
Determining the maximum distance to the target at the moment of the torpedo launch works on the following principle: the radius of the circle, on which points A and D are located, corresponds (in
some scale) to the maximum range of the torpedo (that is 12500 meters for G7a torpedo). There are arcs on the dial that are parts of circles of the same diameter with their centers shifted down (in
the same scale) in increments of 2500 meters. Point A is located on the circle “0”, and point B – near the circle “7”. So points A and B – located on the same bearing line (which is parallel to the
line which connects the centers of the shifted circles) are the same distance apart as the centers of respective circles – “0” and “7” – that is about 7 * 2500 meters = 17500 meters. So, if at the
moment of the torpedo launch the target is at a distance less than 17500 meters, a torpedo with a maximum range of 12500 meters reaches the target.
Demonstration showing the operation of such calculating disc is available here.
Drawing 3. British torpedo calculating disc from World War I
(Torpedo Control Disc Mark I) [1]
Drawing 4. American torpedo calculating disc from World War II
(Torpedo Angle Solver Mark 7) [2]
Photo 1. German torpedo calculating disc manufactured by Dennert & Pape Company in Hamburg [3]
For the British and German calculating discs above the principle of determining maximum (allowable) distance to the target is quite different than described before. These discs do not have arcs
representing distances. The dials have additional transparent lineals instead, which can be shifted and rotated along the target speed lineal. This lineal is scaled so, that the radius of the dial
corresponds to the maximum torpedo range. When the target speed lineal is set correctly, the additional range lineal is shifted to the point corresponding to the target speed and rotated to be
parallel to the bearing lines on the dials. The measured distance between the target speed lineal and the dial edge corresponds to the maximal distance (allowable) to the target at the moment of
torpedo launch.
Photo 2. German torpedo calculating disc manufactured by Dennert & Pape Company in Hamburg [4]
[1] Handbook of Torpedo Control, 1916, ADM 186/381
[2] Torpedo Angle Solver Mark 7 and Mods.
[3] Subsim - an other KM wizz-wheel model?
[4] Treffpunktrechner - Dennert&Pape Whiz Wheel | {"url":"http://www.tvre.org/en/torpedo-calculating-discs","timestamp":"2024-11-08T05:48:34Z","content_type":"application/xhtml+xml","content_length":"26033","record_id":"<urn:uuid:995954b1-fb90-402f-a10a-8f164bf721ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00460.warc.gz"} |
Material Property ElementMAT1LS lets you define a linearly elastic, isotropic material model for NLFE elements that follows Hooke's law. This material employs a linear strain model.
id = "integer"
e = "real"
nu = "real"
rho = "real"
YS = "real"
AP = "integer"
Unique material property identification number.
Young's modulus for the element.
Poisson's ratio for the element. Default is 0.0.
Element density.
An elastic limit for strain. Default is 0.0.
YS >= 0.0
Selector for the elastic model used. Default is 1.
The example demonstrates the definition of a MAT1LS element.
<MAT1LS id="1" e="2.07e+5" nu="0.3" rho="7.810e-6" YS="0.002" AP="2"/>
1. This material element defines a linearly elastic, isotropic material that obeys the Hooke's law. Each element must have a unique material identification number.
2. In this approach, the stress-strain relationship is $\sigma ={E}_{m}{\epsilon }_{m}$. The strain components are defined as
${\epsilon }_{m}=\left[{\epsilon }_{x}\text{ }{\epsilon }_{y}\text{ }{\epsilon }_{z}\text{ }{\epsilon }_{xy}\text{ }{\epsilon }_{xz}\text{ }{\epsilon }_{yz}\text{ }\right]$
${\epsilon }_{m}=\left[\begin{array}{l}\sqrt{{r}_{x}^{T}{r}_{x}}-1\\ \sqrt{{r}_{y}^{T}{r}_{y}}-1\\ \sqrt{{r}_{z}^{T}{r}_{z}}-1\\ 2\sqrt{{r}_{x}^{T}{r}_{y}}\\ 2\sqrt{{r}_{x}^{T}{r}_{z}}\\ 2\sqrt{\
text{ }{r}_{z}^{T}{r}_{y}}\end{array}\right]$
3. The difference between MAT1 and MAT1LS lies in the definition of strain - compared to MAT1, the use of this material model may result in less accuracy, but this material model is computationally
more efficient.
4. YS lets you specify a maximum limit on the elastic strain that the component is allowed. If, during the simulation, the component strain (at any element in the component) exceeds this value,
MotionSolve issues a warning message.
5. AP is a selector that allows you to choose the elastic model that is used to calculate the internal resistance of the NLFE component. The following approaches are available for the BEAM (BEAM12
and BEAMC) elements:
Approach (AP)
Continuum mechanics approach.
Elastic line approach
Euler Bernoulli beam theory
Timoshenko beam theory | {"url":"https://2021.help.altair.com/2021/hwsolvers/ms/topics/solvers/ms/mat1ls2_motionsolve.htm","timestamp":"2024-11-08T12:38:21Z","content_type":"application/xhtml+xml","content_length":"65188","record_id":"<urn:uuid:793f45ee-a58e-4ce6-b49b-ff79a9b96d47>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00024.warc.gz"} |
A ball of mass 2 kg and another of mass 4 kg are dropped together from a 60 feet
Select Chapter Topics:
A ball of mass 2 kg and another of mass 4 kg are dropped together from a 60 feet tall building. After a fall of 30 feet each towards the earth, their respective kinetic energies will be in the ratio
1. 1: 4
2. 1: 2
3. 1: $\sqrt{2}$
4. $\sqrt{2}$ :1
Subtopic: Â Uniformly Accelerated Motion |
 83%
From NCERT
AIPMT - 2004
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle starts from rest with constant acceleration. The ratio of space-average velocity to the time-average velocity is:
where time-average velocity and space-average velocity, respectively, are defined as follows:
\(<v>_{time}\) \(=\) \(\frac{\int v d t}{\int d t}\)
\(<v>_{space}\) \(=\) \(\frac{\int v d s}{\int d s}\)$\frac{}{}$
│1.│\(\frac{1}{2}\) │2.│\(\frac{3}{4}\) │
│3.│\(\frac{4}{3}\) │4.│\(\frac{3}{2}\) │
Subtopic: Â Average Speed & Average Velocity |
From NCERT
AIPMT - 1999
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A particle is thrown vertically upward. Its velocity at half its height is \(10\) m/s. Then the maximum height attained by it is: (Assume, \(g=\) \(10\) m/s^2)
1. \(8\) m
2. \(20\) m
3. \(10\) m
4. \(16\) m
Subtopic: Â Uniformly Accelerated Motion |
 75%
From NCERT
AIPMT - 2001
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
If a ball is thrown vertically upwards with speed \(u\), the distance covered during the last \(t\) seconds of its ascent is:
1. \(ut\)
2. \(\frac{1}{2}gt^2\)
3. \(ut-\frac{1}{2}gt^2\)
4. \((u+gt)t\)
Subtopic: Â Uniformly Accelerated Motion |
 66%
From NCERT
AIPMT - 2003
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A man throws some balls with the same speed vertically upwards one after the other at an interval of \(2\) seconds. What should be the speed of the throw so that more than two balls are in the sky at
any time? (Given \(g = 9.8\) m/s^2)
│1.│More than \(19.6\) m/s │
│2.│At least \(9.8\) m/s │
│3.│Any speed less than \(19.6\) m/s │
│4.│Only with a speed of \(19.6\) m/s │
Subtopic: Â Uniformly Accelerated Motion |
 67%
From NCERT
AIPMT - 2003
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
For a particle, displacement time relation is given by; $t$$=$$\sqrt{x}$$+$$3$ . Its displacement, when its velocity is zero will be:
1. \(2\) m
2. \(4\) m
3. \(0\) m
4. none of the above
Subtopic: Â Instantaneous Speed & Instantaneous Velocity |
 82%
From NCERT
AIPMT - 1999
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A body starts falling from height \(h\) and if it travels a distance of \(\frac{h}{2}\) during the last second of motion, then the time of flight is (in seconds):
1. \(\sqrt{2}-1\)
2. \(2+\sqrt{2}\)
3. \(\sqrt{2}+\sqrt{3}\)
4. \(\sqrt{3}+2\)
Subtopic: Â Uniformly Accelerated Motion |
From NCERT
AIPMT - 1999
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
A car is moving with velocity v. It stops after applying breaks at a distance of 20 m. If the velocity of the car is doubled, then how much distance it will cover (travel) after applying breaks?
1. 40 m
2. 80 m
3. 160 m
4. 320 m
Subtopic: Â Uniformly Accelerated Motion |
 81%
From NCERT
AIPMT - 1998
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
The displacement \(x\) of a particle varies with time \(t\) as \(x = ae^{-\alpha t}+ be^{\beta t}\)$\mathrm{}$, where \(a,\) \(b,\) \(\alpha,\) and \(\beta\) are positive constants. The velocity of
the particle will:
│1.│be independent of \(\alpha\) and \(\beta.\) │
│2.│go on increasing with time. │
│3.│drop to zero when \(\alpha=\beta.\) │
│4.│go on decreasing with time. │
Subtopic: Â Instantaneous Speed & Instantaneous Velocity |
 53%
From NCERT
AIPMT - 2005
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
The motion of a particle is given by the equation \(S = \left(3 t^{3} + 7 t^{2} + 14 t + 8 \right) \text{m} ,\) The value of the acceleration of the particle at \(t=1~\text{s}\) is:
│1.│\(10\) m/s^2 │2.│\(32\) m/s^2 │
│3.│\(23\) m/s^2 │4.│\(16\) m/s^2 │
Subtopic: Â Acceleration |
 93%
From NCERT
AIPMT - 2000
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
To view explanation, please take trial in the course.
NEET 2025 - Target Batch
Select Chapter Topics: | {"url":"https://www.neetprep.com/questions/476-Physics/7828-Motion-Straight-Line?courseId=141&testId=1107866-Past-Year----MCQs&questionId=264651-ball-mass--kg-mass--kg-dropped-together--feettall-building-fall--feet-towards-earth-respectivekinetic-energies-will-ratio-------","timestamp":"2024-11-13T16:19:20Z","content_type":"text/html","content_length":"396939","record_id":"<urn:uuid:418366c8-fa40-4304-8d37-82c74da088f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00285.warc.gz"} |
Parameter identification of oscillations in power systems based on improved variational modal decomposition and HTLS-adaline method
Oscillation has become one of the important problems faced by modern power grids. Multi-types of oscillations may occur simultaneously in the power system and the oscillation frequency span is
extremely large. For signals with wide-band oscillation modes, the signals in different frequency bands are first separated by a band-pass filter, and then the Improved Variational Mode Decomposition
(IVMD) method with high noise robustness is used to extract each oscillating mode signal. Finally, the combinations of Hankel total least squares (HTLS) and adaptive neural network algorithm (Adaline
ANN) is used to estimate the frequency, attenuation factor, amplitude and phase of low-frequency oscillations. Furthermore, the introduction of Adaline neural network solves the problem that the mode
amplitude and phase are difficult to determine after IVMD processing, so that the detection accuracy is improved. Simulation and case analysis show that this method can effectively distinguish and
extract different types of oscillation modes in the signal, and accurately identify the information of each mode. The IVMD-HTLS-Adaline method can effectively identify signals that have experienced
severe oscillations or noise-like signals with potential oscillations.
1. Introduction
With the large-scale access of renewable energy, the widespread application of power electronic equipment, and the large-scale interconnection of AC and DC in modern power grids, the resource
allocation is optimized and the system reliability is improved; the weak links in the system increase and the anti-interference performance decreases. In recent years, new types of faults have
continued to emerge, and security and stability issues have become increasingly prominent [1-3]. Oscillation is one of the issues that threatens the stable operation of the power system. Typical
oscillations in current power systems include Low-frequency oscillations (LFO) [4-6] caused by regional or interval weak damping and sub-synchronous Oscillation (SSO) caused by series compensation
capacitors or energy interaction between the power electronics equipment and generator set [7-10]. In addition, Super-synchronous Oscillation (SurSO) occasionally occurs with sub-synchronous
oscillation [11]. Normally, oscillation can be tested in four ways: mechanism, acoustics, electrics and electromagnetic [12-19]. Here, we will focus on the electrical signal waves.
According to different damping, the oscillation signals of power system can be divided into two categories: (1) Maintain or diverge the oscillation signal (when the system is weakly damped or
negatively damped); (2) Damped oscillation signal (when the system is in positive damping). The former occurs less frequently, but once it occurs, it will cause great harm to the system. The latter
occurs more frequently and is easily masked by noise, also known as noise-like oscillation signals. The rapid and real-time extraction of modal information from the oscillating signal is helpful to
the adjustment of power grid dispatching and control strategies. Before the oscillation, the mode identification can be found from the noise-like oscillation signal, and the potential oscillation
mode of the system can also be found to provide an early warning for the system. Oscillation mode identification is the key of real-time efficient control and risk early warning of power systems.
Frequency, damping ratio, oscillation amplitude and phase are the key information of oscillation mode, as well as the key parameters to realize oscillation monitoring, early warning, control and
protection. At present, the research methods of power system oscillation mainly include model analysis and measurement analysis. The model analysis method depends on the precise model of the system.
Large-scale systems and non-linear power electronic devices will affect the accuracy of the model, leading to analysis errors [20]. Measurement analysis analyzes the actual measurement data of the
system, and the results are close to the operating conditions. Combining measured data with signal processing technology is a common method for power system oscillation identification.
The Prony algorithm can directly extract the characteristic quantity of the signal to identify more accurate oscillation modal information [21, 22], but it is highly sensitive to noise. In actual
engineering, the filtering algorithm is often combined with the Prony algorithm, and the dimensionality of the signal is reduced through the filtering algorithm, or the modal signal is extracted from
it. Mean filtering (Average Filter, AF), empirical mode decomposition, autoregressive moving average algorithm, wavelet method and singular value decomposition are commonly used filtering algorithms,
which have been applied to the identification of oscillation modes [23-29]. However, the mean filtering can only reduce the noise interference and cannot remove the noise. The essence of wavelet
denoising is to fit the signal in a specific frequency band. If there are multiple oscillation modes with similar frequencies, the wavelet method cannot distinguish them. Methods such as Hilbert
transform and singular value decomposition may produce large errors when processing low signal-to-noise ratio signals. In addition, in the face of multi-types of coexisting pan-band oscillations with
large oscillation frequency spans, the above methods often identify high amplitude oscillations during processing, and other oscillations may be considered as noise and ignored. How to carry out
unified modal identification of pan-band oscillation signals and extract modal information of multiple types of oscillations is an important issue for modal identification of power systems.
Variational Mode Decomposition (VMD) algorithm can effectively separate modal signals and is not sensitive to noise [30]. Based on the Improved VMD (Improved VMD, IVMD) method, a method for
identifying pan-band oscillations in complex power systems is proposed. First, different types of band-pass filters are used for separation. Secondly, the modal signals in each band-pass filtered
signal are extracted using IVMD. Finally, the Hankel total least squares (HTLS) algorithm and the adaptive artificial neural network (Adaline Artificial Neural Network, Adaline ANN) are used to
estimate the frequency, attenuation factor, amplitude, and phase of low-frequency oscillations respectively, so as to achieve unified identification of pan-band oscillations. Meanwhile, the
performance of the proposed algorithm is verified by simulation and measurement examples.
2. Modal identification framework for Broad-Band oscillation
2.1. Broad-Band oscillation signal
The oscillation may include multiple oscillation modal signals in different frequency bands and noise signals generated by measurement or the system itself. The original measurement signal with
broad-band oscillation can be expressed as ${y}_{0}\left(t\right)$:
${y}_{0}\left(t\right)=\sum _{n=1}^{N}{y}_{n}\left(t\right)+{y}_{Noise}\left(t\right).$
In Eq. (1): ${y}_{Noise}\left(t\right)$ is the noisy signal, and ${y}_{n}\left(t\right)\left(n=1,2,\cdots ,N\right)$is the modal signals for $N$ different frequency bands, and:
${y}_{n}\left(t\right)={A}_{n}\left(t\right){e}^{-{D}_{n}{f}_{n}t}\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{n}t+{\theta }_{n}\right).$
In Eq. (2): ${A}_{n}\left(t\right)$ is the amplitude of the $n$th oscillation mode at time $t$; ${\theta }_{n}$ is the initial phase; ${D}_{n}$ and ${f}_{n}$ are the damping ratio and frequency of
oscillation respectively.
2.2. Modal identification framework
The framework for modal oscillations identification in a broad-band is shown in Fig. 1. The identification process is divided into the following three steps.
Step-1. Multiple Band Pass Filters (BPF) is used to decompose the original signal into multiple filtered signals in different frequency bands, so as to separate the oscillation signals.
Step-2. The IVMD algorithm is used to extract the oscillation modal signal from each BPF filtered signal.
Step-3. The HTLS-Adaline algorithm is used to identify the oscillation mode signals provided by IVMD, and obtain the information of each oscillation mode. The original signal is filtered through
band-pass filtering, IVMD modal signal extraction and HTLS-Adaline identification to obtain modal parameters.
Fig. 1Framework of the mode identification for broad-band oscillation
3. Broad-band oscillation mode identification base on VMD-HLS-Adaline
3.1. Band-pass filter design
There are mainly three types of oscillations that cause power systems accidents: low-frequency oscillation, sub-synchronous oscillation and super-synchronous oscillation. The low-frequency
oscillation is further divided into local mode and interval mode. The frequencies of each type of oscillations are in different ranges. Thus, this article designs four BPFs. A proper signal length
selection can improve the accuracy of pattern recognition and ensure the rapidity of recognition. Because each oscillation frequency is different, it is obviously inappropriate to use the same length
of data for identification. Table 1 shows the BPF filter bandwidth and identification sampling time corresponding to various oscillations in this paper.
Table 1Time lengths and frequency bands for different BPFs
Band-pass filter Oscillation type Sampling time / s Oscillation frequency / Hz
BPF-1 LFO (interval modal) 10 0.2-1
BPF-2 LFO (local mode) 2 1-5
BPF-3 SSO 0.4 10-50
BPF-4 SurSO 0.4 70-110
According to the analysis of the modal identification algorithm [31], if it is necessary to obtain the oscillation frequency and amplitude more accurately, the sampling time of the identification
signal must be greater than 1 oscillation period. In order to obtain accurate damping ratio information, the sampling time needs to be longer than two oscillation periods. Therefore, for
low-frequency oscillations, in order to ensure rapid identification, the minimum band-pass frequency of the band-pass filter is 0.2 Hz, and the sampling time is selected to be twice the oscillation
period, that is 10 s. For sub-synchronous oscillation, the identification speed is considered comprehensively. The minimum oscillation frequency is 10 Hz as the benchmark and the sampling time is 4
times. The oscillation period is 0.4 s. The mechanism of super-synchronous oscillation determines that it appears in pairs with sub-synchronous oscillation and the frequencies are complementary. The
sampling time of the synchronous oscillation is the same as that of the sub-synchronous oscillation, and is also 0.4 s.
3.2. Modal signal extraction based on improved VMD algorithm
3.2.1. VMD algorithm
The VMD algorithm is a new adaptive signal decomposition method proposed by Dragomiretskiy et al. in 2014. The target modal is solved by the inherent modal function [30]. In view of the accuracy and
noise robustness of the VMD algorithm, this paper uses the VMD method to extract and separate modal signals. The variational problem corresponding to the VMD algorithm is to find the KIMF components
with the smallest sum of the estimated bandwidth. The variational problem is transformed into an augmented Lagrange equation and the equation is solved by the Alternating Direction Method of
Multipliers (ADMM) to obtain the solution of the modal function ${u}_{k}^{n+1}\left(t\right)$:
${\stackrel{^}{u}}_{k}^{n+1}\left(\omega \right)=\left(\stackrel{^}{f}\left(\omega \right)-\sum _{ie k}{\stackrel{^}{u}}_{i}\left(\omega \right)+\frac{\stackrel{^}{\lambda }\left(\omega \right)}{2}\
right)×\frac{1}{1+2\alpha \left(\omega -{\omega }_{k}{\right)}^{2}}.$
Similarly, the solution of the center frequency value of the modal component is:
${\omega }_{k}^{n+1}=\frac{{\int }_{0}^{\infty }\omega |{\stackrel{^}{u}}_{k}\left(\omega \right){|}^{2}d\omega }{{\int }_{0}^{\infty }|{\stackrel{^}{u}}_{k}\left(\omega \right){|}^{2}d\omega },$
where, $\left\{{\omega }_{k}\right\}=\left\{{\omega }_{1},{\omega }_{2},\cdots ,{\omega }_{K}\right\}$ and $\left\{{\omega }_{k}\right\}=\left\{{\omega }_{1},{\omega }_{2},\cdots ,{\omega }_{K}\right
\}$ is the component and its center frequency respectively. The flow of VMD algorithm is as follows:
Step 1. Initialize $\left\{{\omega }_{k}\right\}=\left\{{\omega }_{1},{\omega }_{2},\cdots ,{\omega }_{K}\right\}$ and $n=0$.
Step 2. $n←n+1$, update ${u}_{k}$ and ${\omega }_{k}$ according to Eq. (3) and Eq. (4).
Step 3. Update $\lambda$:
${\stackrel{^}{\lambda }}^{n+1}\left(\omega \right)←{\stackrel{^}{\lambda }}^{n}\left(\omega \right)+\tau \left(\stackrel{^}{f}\left(\omega \right)-\sum _{k}{\stackrel{^}{u}}_{k}^{n+1}\left(\omega \
Step 4. Repeat steps 2 and 3 until the iteration stop condition is satisfied with $\sum _{k}{‖{\stackrel{^}{u}}_{k}^{n+1}-{\stackrel{^}{u}}_{k}^{n}‖}_{2}^{2}/{‖{\stackrel{^}{u}}_{k}^{n}‖}_{2}^{2}<\
epsilon$. End the loop and output the results to get $K$ modal components and their center frequencies.
3.2.2. Determination of VMD penalty factor based on PSO algorithm optimization
The penalty factor $\alpha$ in the VMD algorithm has a great impact on the decomposition results. The study found that the smaller the penalty parameter $\alpha$, the larger the bandwidth of each IMF
(Intrinsic Mode Function) component, and vice versa [32]. Therefore, when using VMD to decompose the oscillation signal of the power system, it is very important to choose the appropriate penalty
factor parameter $\alpha$. In this paper, genetic mutation particle swarm optimization is used to optimize the penalty parameters to obtain the optimal $\alpha$.
Particle swarm optimization is a global optimization algorithm proposed by Eberh and Kennedy et al. in 1995. This method is a swarm intelligent optimization algorithm, and it has the advantages of
few parameters, easy adjustment, and easy to fall into a local optimum. In order to obtain the global optimal approximate solution [33], this paper introduces the idea of genetic algorithm mutation
in particle swarm algorithm to construct a genetic mutation particle swarm algorithm.
Definition of genetic mutation particle swarm algorithm: In an $D$-dimensional search space, label population $X$, and it is composed of m particles, $\mathbf{X}=\left[{\mathbf{x}}_{1},{\mathbf{x}}_
{2},\cdots ,{\mathbf{x}}_{m}\right]$. The position of each particle in the search space can be represented by a $D$-dimensional vector, that is, ${\mathbf{x}}_{i}=\left[{x}_{i1},{x}_{i2},\cdots ,{x}_
{iD}\right]$. $D$ is the number of parameters to be optimized, and the moving speed of $i$-th particle is ${\mathbf{v}}_{i}=\left[{v}_{i1},{v}_{i2},\cdots ,{v}_{iD}\right]$. The local extremum of
particles is ${\mathbf{p}}_{i}=\left[{p}_{i1},{p}_{i2},\cdots ,{p}_{iD}\right]$; the global extremum of the population is ${\mathbf{G}}_{1}=\left[{g}_{1},{g}_{2},\cdots ,{g}_{D}\right]$, and the
sub-global optimal value is ${\mathbf{G}}_{2}=\left[{g"}_{1},{g\text{'}}_{2},\cdots ,{g\text{'}}_{D}\right]$. The maximum individual optimal algebra is $\mathrm{m}\mathrm{a}\mathrm{x}Age$, and the
mutation probability is$q$. In order to prevent particles from falling into the local optimum, it is necessary to record the individual optimal maintaining algebra of the particles during the
iteration process. When the individual optimal maintaining algebra does not reach $\mathrm{m}\mathrm{a}\mathrm{x}Age$, each particle updates the position of the next generation by individual local
extreme value and global extreme value. The formula is updated as:
${\mathbf{v}}_{i}^{n+1}=\omega {\mathbf{v}}_{i}^{n}+{c}_{1}\eta \left({\mathbf{p}}_{i}-{\mathbf{x}}_{i}^{n}\right)+{c}_{2}\eta \left({\mathbf{G}}_{1}-{\mathbf{x}}_{i}^{n}\right),$
In the formula, $\omega$ is the inertia weight; $\eta$ is a random number between [0, 1]; ${c}_{1}$ and ${c}_{2}$ are the learning factors respectively that represent the local search ability and the
global search ability; $n$ is the number of iterations, and ${\mathbf{v}}_{i}$, ${\mathbf{p}}_{i}$, ${\mathbf{G}}_{1}$ and ${\mathbf{x}}_{i}$ are $D$-dimensional vectors respectively. The
determination of the inertia weight $\omega$ of the current number of iterations adopts the linear decreasing weight method proposed by Shi [34], and the formula is as follows:
$\omega ={\omega }_{\mathrm{m}\mathrm{a}\mathrm{x}}-\frac{\left({\omega }_{\mathrm{m}\mathrm{a}\mathrm{x}}-{\omega }_{\mathrm{m}\mathrm{i}\mathrm{n}}\right)n}{{n}_{\mathrm{m}\mathrm{a}\mathrm{x}}}.$
In the formula, ${\omega }_{\mathrm{m}\mathrm{a}\mathrm{x}}$ and ${\omega }_{\mathrm{m}\mathrm{i}\mathrm{n}}$ are the maximum and minimum inertia weights; $n$ is the current number of iterations, and
${n}_{\mathrm{m}\mathrm{a}\mathrm{x}}$ is the defined maximum number of iterations. When the individual optimal retention algebra reaches $\mathrm{m}\mathrm{a}\mathrm{x}Age$, the genetic mutation
operation is used to update the position and velocity of the particle to make it jump out of the local optimal. Selection of fitness function for genetic mutation particle swarm optimization
algorithm: In the parameter optimization, the evaluation criterion of the decomposition effect of VMD method uses the concept of envelope entropy ${E}_{p}$ proposed by Tang Gui-ji et al. [35]. The
envelope entropy of $x\left(j\right)$ time signal $c$ of length $N$ is defined as:
${E}_{p}=-\sum _{j=1}^{N}{p}_{j}\mathrm{log}\left({p}_{j}\right),{p}_{j}=a\left(j\right)/\sum _{j=1}^{N}a\left(j\right).$
In the formula, $a\left(j\right)$ is the envelope signal of $x\left(j\right)$ after Hilbert demodulation, and $j=1,2,\cdots ,N$. ${p}_{j}$ is the result of normalizing $a\left(j\right)$.
Normalization not only avoids the influence of different envelope amplitudes of IMF components, but also reduces the interference of weak noise. ${E}_{p}$ is obtained according to the information
entropy calculation rules. This article measures the decomposition effect of VMD according to ${E}_{p}$.
The VMD method is used to decompose the BPF filtered oscillation signal. When the component contains more noise, the sparseness of the component signal is weak and the envelope entropy is large. On
the contrary, when a regular oscillation signal appears in the component, the signal will have strong sparseness, and the calculated envelope entropy is small at this time. Therefore, under the
influence of parameter $\alpha$, the minimum entropy ${E}_{p}$ of the $K$ components is selected as the local minimum entropy $\mathrm{m}\mathrm{i}\mathrm{n}{E}_{p}$. The component corresponding to
the minimum entropy value contains rich feature information. The local minimum entropy is used as part of the fitness function of the entire search process to find the parameter ${\alpha }_{0}$
corresponding to the global optimal component. Through the above analysis of parameter $\alpha$, a proper $\alpha$ will reduce the iterations of VMD, that is, the VMD method has a high decomposition
efficiency. Therefore, it is necessary to achieve the highest decomposition efficiency in the case of the best decomposition effect. This article builds the fitness function based on $\mathrm{m}\
mathrm{i}\mathrm{n}{E}_{p}$, add $time$ (iterations) as follows $\mathrm{m}\mathrm{i}\mathrm{n}F=\mathrm{m}\mathrm{i}\mathrm{n}{E}_{p}+\beta \cdot time$, where $\beta$ is the quantization factor of
the fitness function.
In this paper, the number of modal components $K$ and penalty factor $A$ in VMD decomposition are set as model hyper parameters. In the process of VMD optimization by PSO, the number of particle is
set as 30, and the maximum number of iterations is set as 500.The learning factor is set as ${c}_{1}={c}_{2}=$2; the velocity inertia factor is set as $w=$ 0.8, and the velocity coefficient is setas
$\lambda =$ 1. The maximum and minimum inertia weights are set as ${\omega }_{\mathrm{m}\mathrm{a}\mathrm{x}}=$ 0.9 and ${\omega }_{\mathrm{m}\mathrm{i}\mathrm{n}}=$ 0.1 respectively, and the
quantization factor of the fitness function is set as $\beta =$1/1000. The parameter optimization process based on genetic mutation particle swarm optimization algorithm is shown in Fig. 2.
Fig. 2Parameter optimization process based on genetic mulation particle swarm optimization
The BPF signal is decomposed by improved VMD to obtain the oscillation mode ${u}_{n}\left(t\right)$ extracted by VMD. When the accuracy of BPF and VMD is high enough, for modal signal ${u}_{n}\left(t
\right)$, there is ${u}_{n}\left(t\right)\approx {y}_{n}\left(t\right)$.
Therefore, through BPF and IVMD, all oscillation modal signals can be separated from the original signal. The convergence process of VMD optimization using PSO is shown in Fig. 3.
Fig. 3Convergence process of fitness function in VMD optimization by PSO
3.3. VMD-HTLS algorithm for frequency and attenuation factor
After the modal signals ${u}_{n}\left(t\right)$ are obtained by the IVMD decomposition, the HTLS algorithm is used to identify the modal parameters, such as the oscillation frequency and attenuation
factor. HTLS algorithm is a subspace rotation invariant method, which has high computing efficiency and strong anti-noise ability. Its calculation steps are described in [29]. The main idea is to
construct a Hankel matrix using the sampled signal, and perform Vander Mang decomposition on it. Using the translation-invariant characteristic of the van der Mun matrix to construct the equivalent
relationship, and the eigenvalues of the oscillating modes are obtained. The main idea of IVMD-HTLS is: IVMD is used to decompose the BPF filtered sequence, and then the HTLS algorithm is used to
calculate the oscillation frequency and attenuation factor for each component after decomposition. Because the FOMC-HTLS algorithm cannot give the amplitude and phase of the original signal $y\left(n
\right)$, and the information of each mode is incomplete. It is not conducive to the reconstruction of the signal and the quantitative evaluation of the algorithm. Therefore, this paper introduces
the Adaline God network to oscillate modal information (Amplitude and Phase).
3.4. Adaline neural network solves amplitude and phase
3.4.1. Adaptive linear neural network
Adaptive linear (Adaline) neural network is a neuron model proposed originally by Widrow and Hoff [36]. It is widely used in signal processing and other fields.
Fig. 4The principle of adaptive linear neural network
In Fig. 4, ${x}_{1k}$, ${x}_{2k}$,⋯, ${x}_{nk}$ are the $n$ input signals of the adaptive linear neural network at time$k$.The input signal vector form is expressed as ${\mathbf{X}}_{ik}=\left[{x}_
{1k},{x}_{2k},\cdots ,{x}_{nk}{\right]}^{T}$, and this is often called the input pattern vector of Adaline neural network. The weight vector corresponding to each group of input signals is ${\mathbf
{W}}_{ik}=\left[{w}_{1k},{w}_{2k},\cdots ,{w}_{nk}{\right]}^{T}$. The Adaline neural network output is:
Let $A$ be the ideal response signal, and define the error function as:
The working process of Adaline neural network is as follows [36]: the ideal response signal $y\left(k\right)$ is compared with the output signal $\stackrel{^}{y}\left(k\right)$ of the neural network
to obtain different $e\left(k\right)$. Feed$e\left(k\right)$into the learning rules, and adjust the weight vector according to the learning rules to make $\stackrel{^}{y}\left(k\right)$ and $y\left(k
\right)$ consistent.
The learning rule of Adaline neural network is Widrow-Hoff rule, which is the least square error algorithm (LMS). The rule weight vector adjustment expression is:
${\mathbf{W}}_{i\left(k+1\right)}={\mathbf{W}}_{i\left(k+1\right)}+\eta e\left(k\right){\mathbf{X}}_{i\left(k+1\right)}.$
In the formula, $\eta$ is the learning rate of the Adaline neural network, $\eta \in \left(0,1\right)$. Its value directly affects the weight vector adjustment accuracy and the convergence velocity.
3.4.2. Solution of oscillation modes by Adaline neural network
The specific steps of Adaline neural network to solve the amplitude and phase are as follows, and a known low-frequency oscillation discrete sampling signal model is established:
$x\left(n\right)=\sum _{i=1}^{M}{A}_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{i}n\mathrm{\Delta }t+{\theta }_{i}\right),\mathrm{}\mathrm{}\mathrm{}\mathrm
{}\mathrm{}\mathrm{}n=0,1,2,\cdots ,N-1.$
When the attenuation factor and frequency are known, Eq. (12) can be written as:
$x\left(n\right)=\sum _{i=1}^{M}\left[{A}_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{i}n\mathrm{\Delta }t\right)\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{i}-
{A}_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{i}n\mathrm{\Delta }t\right)\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{i}\right]$$=\sum _{i=1}^{M}\left[{A}_{i}\
mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{i}n\mathrm{\Delta }t\right)-{A}_{i}\mathrm{s}\mathrm{i}\mathrm{n}{\theta }
_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{i}n\mathrm{\Delta }t\right)\right]$$\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}=\sum _{i=1}^{M}\left
[{p}_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{i}n\mathrm{\Delta }t\right)-{q}_{i}{e}^{{\alpha }_{i}n\mathrm{\Delta }t}\mathrm{s}\mathrm{i}\mathrm{n}\left
(2\pi {f}_{i}n\mathrm{\Delta }t\right)\right].$
In the formula, ${p}_{i}={A}_{i}\mathrm{c}\mathrm{o}\mathrm{s}{\theta }_{i}$ and ${q}_{i}={A}_{i}\mathrm{s}\mathrm{i}\mathrm{n}{\theta }_{i}$. The matrix expression of Eq. (13) is:
Among them:
$\mathbf{p}=\left[{p}_{1},{p}_{2},\cdots ,{p}_{Q}\right],\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathrm{}\mathbf{q}=\left[{q}_{1},{q}_{2},\cdots ,{q}_{Q}\right],$
$\mathbf{C}=\left[\begin{array}{cccc}\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{1}\mathrm{\Delta }t\right)& \mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{1}2\mathrm{\Delta }t\right)& \cdots & \
mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{1}N\mathrm{\Delta }t\right)\\ \mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{2}\mathrm{\Delta }t\right)& \mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{2}2\
mathrm{\Delta }t\right)& \cdots & \mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{2}N\mathrm{\Delta }t\right)\\ ⋮& ⋮& ⋮& ⋮\\ \mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{Q}\mathrm{\Delta }t\right)& \
mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{Q}2\mathrm{\Delta }t\right)& \cdots & \mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi {f}_{Q}N\mathrm{\Delta }t\right)\end{array}\right],$
$\mathbf{S}=\left[\begin{array}{cccc}\mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{1}\mathrm{\Delta }t\right)& \mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{1}2\mathrm{\Delta }t\right)& \cdots & \
mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{1}N\mathrm{\Delta }t\right)\\ \mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{2}\mathrm{\Delta }t\right)& \mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{2}2\
mathrm{\Delta }t\right)& \cdots & \mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{2}N\mathrm{\Delta }t\right)\\ ⋮& ⋮& ⋮& ⋮\\ \mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{Q}\mathrm{\Delta }t\right)& \
mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{Q}2\mathrm{\Delta }t\right)& \cdots & \mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi {f}_{Q}N\mathrm{\Delta }t\right)\end{array}\right].$
Similarly, define the error function:
In the formula, $x\left(n\right)$ is the actual sample, and $\stackrel{^}{x}\left(n\right)$ is the output of the neural network. Define the performance indicators as:
$J=\frac{1}{2}\sum _{n=0}^{N-1}{e}^{2}\left(n\right).$
Because the attenuation factor and frequency are known, $\mathbf{p}$ and $\mathbf{q}$ are unknown in Eq. (14), $\mathbf{C}$ and $\mathbf{S}$ are the input vectors of the neural network. According to
the training principle of the steepest descent method, the weight vectors p and q are adjusted to:
${\mathbf{p}}_{n+1}={\mathbf{p}}_{n}-\eta \frac{\partial J}{\partial {\mathbf{p}}_{n}}={\mathbf{p}}_{n}+\eta {e}_{k}{\mathbf{C}}^{T},$
${\mathbf{q}}_{n+1}={\mathbf{q}}_{n}-\eta \frac{\partial J}{\partial {\mathbf{q}}_{n}}={\mathbf{q}}_{n}+\eta {e}_{k}{\mathbf{S}}^{T}.$
When the Adaline neural network training is completed, the amplitude and phase of the oscillation mode are solved from the obtained weight vector and Eq. (19):
$\left\{\begin{array}{l}{A}_{i}=\sqrt{{p}_{n}^{2}\left(i\right)+{q}_{n}^{2}\left(i\right)},\\ {\theta }_{i}=\mathrm{a}\mathrm{r}\mathrm{c}\mathrm{t}\mathrm{a}\mathrm{n}\frac{{q}_{n}\left(i\right)}
In this paper, the number of neurons in Adaline neural network is equal to the number $K$ of modal components after VMD decomposition. The activation function of neurons is a constant function, that
is, $f=$ 1. The learning rule is the minimum mean square error (LMS) criterion, and the learning rate $\eta =$ 0.0015. The maximum number of network iteration is 5000, and the network iteration stops
when the error $\delta$ satisfies $\delta <$ 0.0001.
4. Simulation and analysis of actual examples
4.1. Simulation analysis
In order to verify the effectiveness of the method in this paper, an oscillating signal ${y}_{test}$ is constructed by simulation:
In the formula: ${y}_{LFO1}$ is the low-frequency oscillation interval mode;${y}_{LFO2}$ is the low-frequency oscillation local mode; ${y}_{SSO1}$ is the sub-synchronous oscillation and paired with
the super-synchronous oscillation signal ${y}_{SurSO}$; ${y}_{SSO2}$ is another independent sub-synchronous oscillation, and ${y}_{Noise}$ is the white noise. In line with the real situation, the
test signal satisfies the following conditions and assumptions: (1) All oscillation frequencies are randomly selected within the frequency band of the oscillation type; (2) Low frequency oscillation
is the main mode of oscillation; the amplitude is higher than the sub-synchronous oscillation, and the local mode of low frequency oscillation is higher than the interval mode; (3) In the pair of
sub-synchronous and super-synchronous oscillations, the amplitude of sub-synchronous oscillation mode is slightly higher than the super-synchronous mode. Based on the above assumptions, the specific
parameters of each mode of the test signal are finally selected as:
$\left\{\begin{array}{l}{y}_{LFO1}\left(t\right)=2.35×\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi ×0.65t+\pi /3\right),\\ {y}_{LFO2}\left(t\right)=3×\mathrm{c}\mathrm{o}\mathrm{s}\left(2\pi ×2.05t+\pi /
5\right),\\ {y}_{SSO1}\left(t\right)=0.58×\mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi ×25.56t+\pi /6\right),\\ {y}_{SSO2}\left(t\right)=0.25×\mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi ×22.84t+\pi /4\
right),\\ {y}_{SurSO}\left(t\right)=0.42×\mathrm{s}\mathrm{i}\mathrm{n}\left(2\pi ×92.32t+\pi /3\right),\\ {y}_{Noise}\left(t\right)=0.14×\mathrm{r}\mathrm{a}\mathrm{n}\mathrm{d}\mathrm{o}\mathrm{m}\
That is, the test signal ${y}_{test}$ contains 5 oscillating signals with different frequencies and a white noise signal with amplitude of 0.16. The frequency band of the test signal oscillation ${y}
_{test}$ is 0.63-93.42 Hz. Each test signal is equal amplitude oscillation. The time domain form of the test signal ${y}_{test}$ is shown in Fig. 5. Using the proposed method after the constructed
test signal ${y}_{test}$ is extracted by band-pass filtering and IVMD, the oscillation modal signals in each frequency band are shown in Fig. 6. It can be seen from Fig. 6 that the method proposed in
this paper can accurately distinguish the modalities of different frequency bands and effectively extract all corresponding modal signals.
Fig. 6Mode signals extracted by IVMD
The modal signals are extracted for each IVMD obtained in Fig. 6, and the modal identification is performed through the HTLS-Adaline algorithm to obtain the parameter information of the corresponding
modal. The identification results of this method compared with those of other methods is shown in Table 2. The selected comparison methods are: (1) the classic EMD-based HTLS algorithm (EMD-HTLS);
(2) the VMD-based HTLS algorithm (VMD-HTLS); (3) the proposed method is based on the algorithm of IVMD-HTLS and Adaline (IVMD-HTLS-Adaline).
Table 2Mode identification of test signal
Test signal EMD-HTLS VMD-HTLS Proposed method
Freq Phase Fre Phase Phase Phase
Modal Amp Amp Freq / Hz Amp Freq / Hz Amp
/ Hz / rad error / rand / rad / rad
0.6648 2.2517 1.1815 0.6619 2.3841 0.9122 0.6512 2.3492 1.0488
${y}_{LPO1}$ 0.65 2.35 1.0472
2.029% 15.637% 12.82% 1.044% 5.199% 12.89% 0.264% 0.945% 0.153%
2.0641 3.1125 0.4868 2.4833 2.0855 0.7318 2.0525 3.0263 0.6338
${y}_{LPO2}$ 2.05 3.00 0.6283
4.321% 5.670% 22.52% 2.762% 4.775% 16.473% 0.233% 1.315% 0.875%
26.992 0.6233 0.6670 25.173 0.5922 0.5961 25.612 0.5842 0.5168
${y}_{SSO1}$ 25.56 0.58 0.5236
5.384% 23.122% 27.38% 2.231% 8.612% 13.846% 0.158% 4.122% 1.299%
25.237 0.2747 1.0141 22.157 0.2419 0.7741 22.774 0.2484 0.7880
${y}_{SSO2}$ 22.84 0.25 0.7854
10.13% 15.772% 29.11% 6.002% 5.409% 1.438% 0.487% 0.727% 0.331%
93.451 0.4438 1.1187 92.259 0.4291 1.0482
${y}_{SurSO}$ 92.32 0.42 1.0472 – – –
1.104% 7.437% 6.828% 0.215% 2.843% 0.096%
It can be seen from Table 2 that when the test signal oscillates in multiple frequency bands, the proposed method can effectively identify all oscillation modes. The maximum error of the oscillation
frequency identification result is 1.55 %, with an average error of 0.67 %, and that of the oscillation amplitude identification result is 9.09 %, with an average error of 2.54 %. In comparison, the
EMD-HTLS method has the worst frequency identification and amplitude identification results, and the average error is also the highest. The VMD-HTLS method has a good identification effect on the
dominant oscillation mode (${y}_{LFO2}$) with the highest amplitude. The frequency and amplitude identification results are close to those of this method, but the phase identification results are not
as good as those of this method. For the sub-modes ${y}_{LFO1}$ and ${y}_{SSO1}$, the VMD-HTLS frequency identification effect is close to the proposed method, but the amplitude and phase
identification results are far worse than it. In particular, for mode ${y}_{SSO2}$ with low amplitude and high frequency, the recognition results of EEMD-HTLS and VMD-HTLS are poor. For the
super-synchronous oscillation mode ${y}_{SurSO}$ with a frequency of 93.42, the EEMD-HTLS method failed to identify it, and the result of VMD-HTLS is inconsistent with the actual one. Therefore, the
method proposed in this paper has obvious advantages in identifying multi-mode coexisting pan-band oscillations.
4.2. Actual study data
In order to prove the effectiveness of the proposed method, the actual oscillation data of Hunan Power Grid was selected and the oscillation mode identification analysis was carried out. The
oscillation event is as follows: On June 24, 2018, a low-frequency oscillation occurred in a thermal power plant in Hunan Power Grid. The system started to oscillate at low frequency in the 120th
second. After 30 s, the system quickly started to emit an alarm. The system maintained an alarm for about 180 s. In 60-120 s before the oscillation, the system has a noise-like oscillation, and the
system did not issue an alarm at this time. The range of the oscillation signal and noise-like signal has been marked in the figure. The system sampling frequency is 100 Hz. The oscillation signal at
165-195 s is selected, and the modal identification of this method is adopted. The original signal used for identification is shown in Fig. 7.
It can be seen from Fig. 6 that the amplitude of the oscillation signal is high, and the highest frequency fluctuation exceeds 100 MW, which is higher than 30 % of the system output power. At this
time, the signal is affected by a certain degree of noise. For the original oscillation signal data shown in Fig. 7, the modal signals are extracted by the proposed algorithm as shown in Fig. 8. This
method decomposes three modes from the oscillation signal data, which are the main modes of the system oscillation. Among the three modes, the first mode has the highest amplitude and is the dominant
oscillation mode. The extracted modal signals are linearly superimposed, and the reconstructed signals are shown in Fig. 8.
Fig. 7Original oscillation data used for mode identification
Fig. 8Mode signals extracted from the original oscillation data
Fig. 9Reconstructed signal based on mode signals from IVMD
Comparing Fig. 9 with Fig. 7, it is not difficult to find that the reconstructed signal is basically the same as the original signal waveform. That is, the three modal signals separated cover the
main oscillations of the system. The method in this paper can effectively extract all the main oscillation modal signals. Using the Prony algorithm to perform parameter identification on the modal
signals in Fig. 6, the information of the main oscillation modes can be obtained. Table 3 compares the results of this method with the MF-Prony and EMD-Prony parameter identification results. It can
be seen from the results that for the oscillation signal, both the proposed method MF-Prony or EMD-Prony can identify the mode with the largest amplitude. In the three methods, the MF-Prony frequency
identification error is the highest, and the results of this method are similar to the EMD-Prony method. During the severe oscillation, the dominant modal frequency is about 1.48 Hz, and the minor
dominant modal frequency is about 2.02 Hz.
Table 3Mode identification result and comparison of original oscillation data
Method Modal Frequency / Hz Amplitude / MW Damping Phase / rad
1 1.44 28.2 –1.18 2.6539
EEMD-HTLS 2 1.61 13.7 –1.59 0.2355
3 2.32 8.5 –2.31 2.9813
1 1.48 30.6 –0.52 3.1348
VMD-HTLS 2 1.51 16.3 –1.01 0.1653
3 1.08 8.7 –1.03 2.1981
1 1.48 32.7 –0.94 3.0273
IVMD-HTLS-Adaline 2 2.02 17.9 –1.40 0.0985
3 0.49 3.2 0.18 2.3499
5. Conclusions
A VMD-based method for power system pan-band oscillation signal extraction and modal identification is proposed. This method can effectively extract low-frequency oscillation, sub-synchronous
oscillation, and super-synchronous oscillation of multi-type and pan-band oscillations from power system operating data. Based on the oscillation discrimination, modal extraction and parameter
identification, the identification results plays an important role in the analysis and control of system dynamic stability, as well as the identification and location of oscillation sources. It is
helpful for early warning and timely suppression of system oscillation.
• Kang Jie, Liu Li, Shao Yupei, Ma Qinggang Non-stationary signal decomposition approach for harmonic responses detection in operational modal analysis. Computers and Structures, Vol. 242, Issue 1,
2021, p. 106377.
• David G. L., Jose R. R. H., Vicente V. H., Juan O. G., et al. Harmonic PMU and fuzzy logic for online detection of short-circuited turns in transformers. Electric Power Systems Research, Vol.
190, Issue 1, 2021, p. 106862.
• Duan Guizhong, Qin Wenping, Lu Ruipeng, et al. Static voltage stability analysis considering the wind power and uncertainty of load. Power System Protection and Control, Vol. 46, Issue 12, 2018,
p. 108-114.
• Zhang Xinran, Lu Chao, Liu Shichao, et al. A review on wide-area damping control to restrain inter-area low frequency oscillation for large-scale power systems with increasing renewable
generation. Renewable and Sustainable Energy Reviews, Vol. 57, 2016, p. 45-58.
• Li Shenghu, Sun Qi, ShiXuemei, et al. Suppression of weakly damped low-frequency modes of wind power system based on regional pole placement. Power System Protection and Control, Vol. 45, Issue
20, 2017, p. 14-20.
• Gao Haixiang, WuShuangxi, Miao Lu, et al. Overview of reasons for generator-induced power oscillations& its suppression measures. Smart Power, Vol. 46, Issue 7, 2018, p. 49-55.
• Suriyaarachchi D. H. R., Annakkage U. D., Karawita C., et al. A Procedure to study sub-synchronous interactions in wind integrated power systems. IEEE Transactions on Power Systems, Vol. 28,
Issue 1, 2013, p. 377-384.
• Karaagac U., Faried S. O., Mahseredjian J. Coordinated control of wind energy conversion systems for mitigating sub synchronous interaction in DFIG-based wind farms. IEEE Transactions on Smart
Grid, Vol. 5, Issue 5, 2014, p. 2440-2449.
• Cui Yahui, Zhang Junjie, Zhao Zongbin Sub synchronous oscillation torsional vibration model and simulation for a supercritical 600 MW unit with high voltage direct current transmission. Thermal
Power Generation, Vol. 45, Issue 6, 2016, p. 106-110.
• Zhang Fan, Zhang Donghui, Liu Yongjun Risk assessment and suppression method study on sub-synchronous oscillation resulted from Luxi mixed back-to-back HVDC project. Smart Power, Vol. 45, Issue
4, 2017, p. 30-33.
• Xu Yanhui, Cao Yuping Research on mechanism of sub/sup-synchronous oscillation caused by GSC controller of direct-drive permanent magnetic synchronous generator. Power System Technology, Vol. 42,
Issue 5, 2018, p. 1556-1564.
• Kun Liang, Xiaogong Lin, Yu Chen, Juan Li, Fuguang Ding Adaptive sliding mode output feedback control for dynamic positioning ships with input saturation. Ocean Engineering, Vol. 206, Issue 15,
2020, p. 107245.
• Alejandro Garces Small-signal stability in island residential microgrids considering droop controls and multiple scenarios of generation. Electric Power Systems Research, Vol. 185, 2020, p.
• Ayse Nihan Basmaci Characteristics of electromagnetic wave propagation in a segmented photonic waveguide. Journal of Optoelectronics and Advanced Materials, Vol. 22, Issues 9-10, 2020, p.
• Ayse Nihan Basmaci Filiz Solution of Helmholtz equation using finite differences method in wires have different properties along X-axis. Journal of Electrical Engineering, Vol. 6, 2018, p.
• Zhineng Zhang, Ling Zheng 1D numerical study of nonlinear propagation of finite amplitude waves in traveling wave tubes with varying cross section. International Journal of Acoustics and
Vibration, Vol. 25, Issue 1, 2020, p. 88-95.
• Eldad J. A., Neeshtha D. B., Giuseppe C. G., Touvia M. Sound scattering by an elastic spherical shell and its cancellation using a multi-pole approach. Archives of Acoustics, Vol. 42, Issue 4,
2017, p. 697-705.
• Mingsian Bai R., Jia Hong Lin, Kwan Liang Liu Optimized microphone deployment for near-field acoustic holography: to be, or not to be random, that is the question. Journal of Sound and Vibration,
Vol. 329, Issue 14, 2010, p. 2809-2824.
• Mraa Abad, Ahmadi H., Moosavian A., Khazaee M., Mohammadi M. Discrete wavelet transform and artificial neural network for gearbox fault detection based on acoustic signals. Journal of
Vibroengineering, Vol. 15, Issue 1, 2013, p. 459-463.
• Wang Henan, Zheng Chao, Ren Jie Review on mechanism and analysis methods of low frequency oscillation in power system. Advanced Materials Research, Vol. 986, 2014, p. 2010-2013.
• Wadduwage D. P., Annakkage U. D., Narendrak K. Identification of dominant low-frequency modes inring-down oscillations using multiple Prony models. IET Generation, Transmission and Distribution,
Vol. 9, Issue 15, 2015, p. 2206-2214.
• Jin Tao, Liu Dui Power grid low frequency oscillation recognition based on advanced Prony algorithm with improved denoising feature. Electric Machines and Control, Vol. 21, Issue 5, 2017, p.
• Li Kuan, Li Xingyuan, Hu Nan ESPRIT analysis of sub synchronous oscillation based on the empirical mode decomposition self-adaptive filter. Power System Protection and Control, Vol. 40, Issue 13,
2012, p. 18-23.
• Zhang Yulin, Chen Hongwei Parameter identification of harmonics and inter-harmonics based on CEEMD-WPT and Prony algorithm. Power System Protection and Control, Vol. 46, Issue 12, 2018, p.
• Wang Shen, Huang Songling, Wang Qing Mode identification of broadband Lamb wave signal with squeezed wavelet transform. Applied Acoustics, Vol. 125, 2017, p. 91-101.
• Ren Zihui, Liu Haoyue, Xu Jinxia Power quality disturbance analysis based on wavelet transform and improved Prony method. Power System Protection and Control, Vol. 44, Issue 9, 2016, p. 122-128.
• Ye Y., Yuanzhang S., Lin C. Power system low frequency oscillation monitoring and analysis based on multi-signal online identification. Science China (Technological Sciences), Vol. 53, Issue 9,
2010, p. 2589-2596.
• Jiang Ping, Shi Hao, Wu Xi Localizing disturbance source of power system forced oscillation caused by wind power fluctuation. Electric Power Engineering Technology, Vol. 37, Issue 5, 2018, p.
• Xu Qi, Xu Jian, Shi Weil The forced oscillation analysis of wind integrated power systems based on singular value decomposition. Proceedings of the CSEE, Vol. 36, Issue 18, 2016, p. 4817-4827.
• Dragomiretskiy K., Zosso D. Variational mode decomposition. IEEE Transactions on Signal Processing, Vol. 62, Issue 3, 2014, p. 531-544.
• Hauer J. F., Demeure C. J., Scharf L. L. Initial results in Prony analysis of power system response signals. IEEE Transactions on Power Systems, Vol. 5, Issue 1, 1990, p. 80-89.
• Tang Guiji, Wang Xiaolong Parameter optimized variational mode decomposition method with application to incipient fault diagnosis of rolling bearing. Journal of Xi’an Jiaotong University, Vol.
49, Issue 5, 2015, p. 73-81.
• Ghodousian A., Parvari M. R. A modified PSO algorithm for linear optimization problem subject to the generalized fuzzy relational inequalities with fuzzy constraints. Information Sciences, Vol.
418, 2017, p. 317-345.
• Liu Yanmin Research and Application of Particle Swarm Optimization. Ph.D. Dissertation, Shandong Normal University, China, 2011.
• Tang Guiji, Wang Xiaolong Application of variational modal decomposition method for optimizing parameters inearly fault diagnosis of rolling bearings. Journal of Xi’an Jiaotong University, Vol.
49, Issue 5, 2015, p. 73-81.
• Papy J. M., De Lathauwer L., Van Huffel S. Common pole estimation in multi-channel exponential data modeling. Signal Processing, Vol. 86, Issue 4, 2006, p. 846-858.
About this article
Dynamics and oscillations in electrical and electronics engineering
variational mode decomposition
power system
parameter identification
Copyright © 2021 Chunlu Wan, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/21871","timestamp":"2024-11-14T14:55:19Z","content_type":"text/html","content_length":"214152","record_id":"<urn:uuid:185bf3ca-8f5d-4b70-882c-a5f5b0956dc8>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00029.warc.gz"} |
The Realm Of "Attoseconds" And Incomprehensibly Small Times Scales - Leads To Physics Nobel Prize
At the scale of the electron, we know temporal units and scales must be unimaginably small. Consider just a measurement made to determine the instantaneous position of an electron by means of a
hypothetical ("Heisenberg") microscope as shown in the graphic. In such a measurement the electron must be illuminated, because it is actually the light quanta (photon) scattered by the electron that
the observer sees. The resolving power of the microscope determines the ultimate accuracy with which the electron can be located. This resolving power is known to be approximately:
l/ 2 sin q
Where l is the wavelength of the scattered light and q is the half-angle subtended by the objective lens of the microscope. Then the uncertainty Δx in position is :
Δx = l/ 2 sin q
In order to be collected by the lens, the photon must be scattered through any range of angle from -q to q. In effect, the electron’s momentum values range from: h sin q/ l to - h sin q/ l
Then the uncertainty in the electron momentum is given by:
D p[x ] = 2 h sin q/ l
As for timing the electron's motion, that is impossible given it lacks a defined particle nature. Hence, electron locations can’t be computed from Newtonian mechanics but only relatively assessed
from probability computations using quantum mechanics.
Thus we are relegated to using probabilistic regions for electron occurrence only. In the diagram below, for example, we see the n=1 electron orbital for the hydrogen atom:
This diagram more than any other dispenses with the notion that the hydrogen electron occupies a definite position. Instead, it’s confined someplace within a “cloud” or probability (b) but that
probability can be computed as a function of the Bohr radius (a[o] = 0.0529 nm). The probability P[1s] for the 1s orbital is itself a result of squaring the “wave function” for the orbital. If the
wave function is defined y (1s) = 1/Öp (Z/ a[o]) exp (-Zr/ a[o]), and the probability function is expressed:
P = ½y (1s) y (1s) *½
Where y (1s) * is the complex conjugate, then the graph shown in the figure is obtained. Inspection shows the probability of finding the electron at the Bohr radius is the greatest, but it can also
be found at distances less than or greater than 0.0529 nm. In the case of the hydrogen electron the first three cloud-wave regions are shown below:
These in turn, defined by the quantum numbers n and ℓ lead to electron density computations leading to "probability lobes" for finding an electron in a defined space, e.g.
In the case shown one must also visualize a symmetrical lobe "mirroring" on the other side (making the whole orbital resemble a dumbbell) to make it complete. As one alters the set of quantum numbers
the electron densities change and so do the probabilities associated with the orbit.
Given these complexities, imagine now the feat of trying to time the electron say from one region of an atom like hydrogen to another. Why indeed would anyone do such a thing as opposed to just
settling for orbitals and probability densities of electrons? Well, because it opens a totally new time dimension, ruled by attoseconds. The basic unit needed to time an electron's motion - if motion
is even the right word to use.
According to its basic physics definition: an attosecond is:
A billionth of a billionth of a second.
To fix ideas, there are around as many attoseconds in a single second as there have been seconds in the 13.8-billion year history of the universe. Think about that if you can, and let it blow your
Well, on Tuesday we in the physics community celebrated the 2023 Physics Nobel Prize to French-Swedish physicist Anne L'Huillier, French scientist Pierre Agostini and Hungarian-born Ferenc Krausz for
their work which explored the behavior of electrons at the time scale of attoseconds. In effect, the three Nobel laureates’ work has enabled the investigation of processes at this time scale, which
are so rapid that they were previously impossible to follow, according to the committee. In the words of Eva Olsson, chair of the Nobel committee for physics:
“We can now open the door to the world of electrons"
Adding that physics at the attosecond level gives researchers the opportunity to understand mechanisms governed by electrons. In this respect we are no longer ruled exclusively by the probabilistic
configurations (e.g. of wave function states) but are enabled to apply more intricate means to assess electron behavior.
Still, even the scientists themselves acknowledge it's a long way to tracking any kind of extended election motion. Even when scientists ''see'' the electron, there's only so much they can view. In
the words of Laureate L'Huillier:
''You can see whether it's on the one side of a molecule or on the other. 'It's still very blurry. The electrons are much more like waves, like water waves, than particles and what we try to measure
with our technique is the position of the crest of the waves."
This is, of course, why the de Broglie wave l[D ]= h/ p has generally been applied to electrons (as well as protons) and why the probabilistic (wave-cloud) treatment has endured so long, superseding
all attempts to apply Newtonian motion standards. Should that deter these specialists? Not on your life. According to Ferenc Krausz, the Hungarian laureate:
"In our biological life, electrons form the adhesive between atoms, with which they form molecules and these molecules are then the smallest functional building stones of every living organism. And
if you want to understand how they work, you need to know how they move."
The research could have potential applications in the fields of electronics, chemistry and medicine, helping scientists understand and control how electrons behave, according to Mats Larsson, a
member of the Nobel committee for physics.
Attosecond pulses of electrons might also be used in medical diagnostics, he added, including one day assisting with diagnosing early-stage cancer for improved treatment.
According to Peter Armitage, a professor of physics and astronomy at Johns Hopkins University who wasn’t involved in the research:
“I see this as kind of the latest in what is a long and remarkable saga of human beings trying to develop ways of timing events to shorter and shorter time scales,”
"This is the time scale you want to look at to understand how atoms form molecules, how electrons around atoms behave and the physical processes that are happening in any chemical reaction."
The committee chose to award work in this field of research because it opens up entire new areas of study, according to Robert Rosner, president of the American Physical Society and professor of
astronomy, astrophysics and physics at the University of Chicago, noting:
“They’ve basically created a tool that allows you to look at phenomena and time scales that we’ve never been able to explore before.”
Let us in passing just hope the research is applied to more positive and constructive purposes - than to developing new (e.g. high power electron beam) weapons.
See Also:
Physicists Who Explored Tiny Glimpses of Time Win Nobel Prize | Quanta Magazine
Physicists who built ultrafast ‘attosecond’ lasers win Nobel Prize (nature.com) | {"url":"https://brane-space.blogspot.com/2023/10/the-realm-of-attoseconds-and.html","timestamp":"2024-11-02T09:45:11Z","content_type":"text/html","content_length":"149783","record_id":"<urn:uuid:f712dfc9-247c-4d3e-9731-a0ccb72f162b>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00095.warc.gz"} |
Category: algorithms Component type: function
find_first_of is an overloaded name; there are actually two find_first_of functions.
template <class InputIterator, class ForwardIterator>
InputIterator find_first_of(InputIterator first1, InputIterator last1,
ForwardIterator first2, ForwardIterator last2);
template <class InputIterator, class ForwardIterator, class BinaryPredicate>
InputIterator find_first_of(InputIterator first1, InputIterator last1,
ForwardIterator first2, ForwardIterator last2,
BinaryPredicate comp);
Find_first_of is similar to find, in that it performs linear search through a range of Input Iterators. The difference is that while find searches for one particular value, find_first_of searches for
any of several values. Specifically, find_first_of searches for the first occurrance in the range [first1, last1) of any of the elements in [first2, last2). (Note that this behavior is reminiscent of
the function strpbrk from the standard C library.)
The two versions of find_first_of differ in how they compare elements for equality. The first uses operator==, and the second uses and arbitrary user-supplied function object comp. The first version
returns the first iterator i in [first1, last1) such that, for some iterator j in [first2, last2), *i == *j. The second returns the first iterator i in [first1, last1) such that, for some iterator j
in [first2, last2), comp(*i, *j) is true. As usual, both versions return last1 if no such iterator i exists.
Defined in the standard header algorithm, and in the nonstandard backward-compatibility header algo.h.
Requirements on types
For the first version: For the second version:
• InputIterator is a model of Input Iterator.
• ForwardIterator is a model of Forward Iterator.
• BinaryPredicate is a model of Binary Predicate.
• InputIterator's value type is convertible to BinaryPredicate's first argument type.
• ForwardIterator's value type is convertible to BinaryPredicate's second argument type.
• [first1, last1) is a valid range.
• [first2, last2) is a valid range.
At most (last1 - first1) * (last2 - first2) comparisons.
Like strpbrk, one use for find_first_of is finding whitespace in a string; space, tab, and newline are all whitespace characters.
int main()
const char* WS = "\t\n ";
const int n_WS = strlen(WS);
char* s1 = "This sentence contains five words.";
char* s2 = "OneWord";
char* end1 = find_first_of(s1, s1 + strlen(s1),
WS, WS + n_WS);
char* end2 = find_first_of(s2, s2 + strlen(s2),
WS, WS + n_WS);
printf("First word of s1: %.*s\n", end1 - s1, s1);
printf("First word of s2: %.*s\n", end2 - s2, s2);
See also
find, find_if, search
STL Main Page | {"url":"http://ld2014.scusa.lsu.edu/STL_doc/find_first_of.html","timestamp":"2024-11-12T08:47:53Z","content_type":"text/html","content_length":"7983","record_id":"<urn:uuid:c4b6660a-7dee-4cd1-a102-ee11dbbfc556>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00372.warc.gz"} |
Positivity checker and constructors
Why does the positivity checker consider constructor arguments to be non-positive occurrences? Is there any way to make something like this work?
Inductive t1 := q1 (v : option t1) (pf : v = v).
Inductive t2 := q2 (v : option t2) (pf : v = None).
(* Error: Non strictly positive occurrence of "t2" in
"forall v : option t2, v = None -> t2". *)
(See https://github.com/coq/coq/issues/16120 for a slightly more realistic example)
(The context for this is that I'm trying to write a general trie data structure satisfying the FMap interface and then instantiate it with various sorts of data, see https://github.com/mit-plv/
Interesting limitation. The first one succeeds because t1 is a parameter in the equality type.
You can probably hack this by hardwiring a "isNone" inductive type there.
Hmm, nope same issue.
this looks inductive-inductive ish
@Pierre-Marie Pédrot The "isNone" version is what I started with and am actually wanting to make work:
Inductive tree_NonEmpty elt : PositiveMap.t elt -> Prop :=
| Node_l_NonEmpty l v r : tree_NonEmpty l -> tree_NonEmpty (PositiveMap.Node l v r)
| Node_r_NonEmpty l v r : tree_NonEmpty r -> tree_NonEmpty (PositiveMap.Node l v r)
| Node_m_NonEmpty l v r : tree_NonEmpty (PositiveMap.Node l (Some v) r).
cc @Yannick Forster @Matthieu Sozeau who were thinking about termination checkers in Metacoq. What do you think?
I'm cc-ing my cc to @Lennard Gäher who implemented the (unmerged) termination checker for MetaCoq
is this the same as https://sympa.inria.fr/sympa/arc/coq-club/2012-05/msg00032.html? Those scenarios are accepted in Agda, but through complex reasoning (https://lists.chalmers.se/pipermail/agda/2012
@Paolo Giarrusso , no, that message is about a generalization of https://github.com/coq/coq/issues/1433, the positivity checker being able to see through match statements and recursion. This is about
the positivity checker considering that, e.g., foo occurs in a positive position in @None foo (and more generally that parameters of inductive types that are positive in the inductive itself should
also be positive when passed to constructors of that inductive)
the problem is that @None foo appears in an index
@Gaëtan Gilbert The issue occurs even when @None foo appears only in parameters and not in indices:
Inductive t1 := q1 (v : option t1) (pf : v = v).
Inductive t2 := q2 (v : option t2) (pf : None = v).
(* Error: Non strictly positive occurrence of "t2" in
"forall v : option t2, None = v -> t2".
Last updated: Oct 13 2024 at 01:02 UTC | {"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/Positivity.20checker.20and.20constructors.html","timestamp":"2024-11-12T06:44:58Z","content_type":"text/html","content_length":"14627","record_id":"<urn:uuid:d4226757-2155-4d9a-8407-37adc293e340>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00057.warc.gz"} |
B2k- bytes 2 knowledge Interview Questions
Ques:- How many seconds will a 500 m long train take to cross a man walking with a speed of 3 km/hr in the direction of the moving train if the speed of the train is 63 km/hr?
Recent Answer
: :
speed of the train respect to man
= (63 – 3) km/hr
= 60 km/hr
= 60 * 1000 / 3600 m/sec
= 50/3 m/sec
= distance/speed
= 500 * 3/ 50
= 30 sec
Ques:- if ELECTRICITY – GAS =100 then JACK – JILL = ?
Recent Answer
: :
The ans is -20.
C-3 D-4 …like wise till Z-26.
ELECTRICITY-GAS=129-27-(Minus 2)=100
Ques:- What is the advantage of the team work?
Ques:- Four sisters- Rima, Seema, Uma and Shama are playing a game such that the loser doubles the money of each of the other players from her share. They played four games and each sister lost one
game in alphabetical order. At the end of fourth game, each sister has Rs. 32. Who started with the lowest amount?
Ques:- May I contact your present employer for a reference?
Ques:- I am concerned that you do not have as much experience as we would like in?
Ques:- The average length of three tapes is 6800 feet. None of the tapes is less than 6400 feet. What is the greatest possible length of one of the other tapes?
Recent Answer
: :
—– = 6800
6400 + y + z = 20400
y + z = 14000
to get the greatest of y and z, lets assume y = 6400
so, z = 7600
so ANS is 7600
Ques:- if 4 circles of equal radius are drawn with vertices of a square as the centre , the side of the square being 7 cm, find the area of the circles outside the square?
Ques:- Number Series 7,16,9,15,11,14,?
Recent Answer
: :
First nos series is 7,9,11,?
ie odd number siries ie 7,9,11,13
Second number series is 16,15,14
ie 1 less the previous number 16,15,14,13
Ans —-series is 7,16,9,15,11,14,13,13
Ques:- The sum of four consecutive even numbers is 292. Evaluate the smallest number?
Recent Answer
: :
Thanq saddha
Ques:- What are your strong and weak point ?
Ques:- The incomes of two persons A and B are in the ratio 3:4. If each saves Rs.100 per month, the ratio of their expenditures is 1:2 . Find their incomes?
Recent Answer
: :
50 each
3x -y = 100
4x-2y = 100
Ques:- A tradesman by means of his false balance defrauds to the extent of 20%? in buying goods as well as by selling the goods. What percent does he gain on his outlay?
Ques:- Introduce myself and about my smart class training experience.
Ques:- Pipe A can fill in 20 minutes and Pipe B in 30 minsand Pipe C canempty the same in 40 mins.If all of them worktogether, find the timetaken to fill the tank(a) 17 1/7 mins(b) 20 mins(c) 8 mins
(d) none of these
Ques:- The ratio between the present ages of A and B is 5:3 respectively. The ratio between A’s age 4 years ago and B’s age 4 years hence is 1:1. What is the ratio between A’s age 4 years hence and
B’s age 4 years ago?
Recent Answer
: :
x = 5/3y
x – 4 = y + 4
x – 8 = y
x = 5/3 * (x-8)
5/3x – 40/3 = x
2/3x = 40/3
2x = 40
x = 20
y = 20 – 8
y = 12
x + 4 = 24
y – 4 = 8
Ratio is 24:8 , so 3:1
Ques:- Can you manage work well under pressure?
Ques:- the ratio of white balls to black balls is 1:2. if 9 balls are added then the ratio of the balls become 2:4:3. find the number of black balls.
A. 12
B. 6
C. 9
D. none
Recent Answer
: :
these are question tht are seem to be like it consumes
time. so dont see jus the question and run to the next .
ans. a) 12
if 9 balls are added then the ratio to the combination
becomes 2:4:3.
9 balls make the ratio 3 for grey balls in tht mixture.
so the factor is 9/3 = 3 (in tht mixture)
so the same factor has to be maintained through out the
ratio. so black balls is 4 * 3 = 12 and white balls is 2 *
3 = 6.
Ques:- What is my experience in the field?
Ques:- What types of math do you use? Re: Calculus, Algebra, Fractions?
Ques:- One TR = ?kacl/hourOne US Gallon = ?liters
Ques:- What is the importance of computer.
Ques:- Pointing to a photograph, a person tells his friend, “She is the granddaughter of the elder brother of my father.” How is the girl in the photograph related to his man?
Ques:- Why I am opting for recruitment as a career
Ques:- What are the time bound deliveries?
Ques:- How do you feel about your career so far?
Ques:- Four couples sit around a circular table in a party. Every husband sits to the right of his wife. P, Q, R and S are husbands and T, U, V and W are wives. Q – U and R – V are two married
couples. S does not sit next to V. T sits to the left of P, who sits opposite S. Q sits between ___.
Ques:- 78 x 14 + 7645 ? ? = 8247
A. 580
B. 590
C. 490
D. 480
E. None of these
Ques:- There are 20 poles with a constant distance between each pole A car takes 24 second to reach the 12th pole. How much will it take to reach the last pole
Recent Answer
: :
Let the distance between each pole be x m. Then, distance up to 12th pole = 11x m
∴ Speed = 11x22m/s
∴ Time to cover total distance up to 20th pole
= 19x×2411x
= 41.45 s
Ques:- The least number which when divided by 16, 18 and 21, leave the remainder 3, 5 and 8 respectively is:
Contact with us regarding this list | {"url":"https://www.justcrackinterview.com/interviews/b2k-bytes-2-knowledge/","timestamp":"2024-11-04T20:27:12Z","content_type":"text/html","content_length":"78556","record_id":"<urn:uuid:43a651b0-b68b-4ef9-a533-e075d0df525f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00131.warc.gz"} |
Topics: Types of Cohomology Theory
Types of Cohomology Theory
De Rham Cohomology > s.a. Betti Numbers; cohomology [and physics]; de Rham Theorem.
$ Def: A cohomology theory based on p-forms ω, and therefore only available for differentiable manifolds; Cochains are p-forms {Ω^p}, the duality with homology is through integration on chains, d is
the exterior derivative; Thus cocycles Z^p are closed forms, coboundaries B^p are exact forms, and the cohomology groups are H\(^p(X; {\mathbb R})\):= Z\(^p(X)\)/B\(^p(X)\).
* Consequence: For an n-dimensional X, only H^p for 0 ≤ p ≤ n can be non-trivial.
* And homology: H^p is the dual space of H^p, with ([ω],[C]):= ∫[C] ω.
* Ring structure: The cup product is wedge product of forms.
@ References: Wilson math/05 [algebraic structures on simplicial cochains]; Ivancevic & Ivancevic a0807-ln; Catenacci et al JGP(12)-a1003 [integral forms].
Čech Cohomology > s.a. Čech Complex.
* Idea: A cohomology theory based on the intersection properties of open covers of a topological space.
@ References: Álvarez CMP(85); Mallios & Raptis IJTP(02) [finitary]; Catenacci et al JGP(12)-a1003 [integral forms].
> Online resources: see Wikipedia page.
Equivariant Cohomology
* Applications: Kinematical understanding of topological gauge theories of cohomological type.
@ References: Stora ht/96, ht/96.
Étale Cohomology > s.a. math conjectures [Adams, Weil].
* Idea: A very useful unification of arithmetic and topology.
* History: Conceived by Grothendieck, and realized by Artin, Deligne, Grothendieck and Verdier in 1963.
@ References: Milne 79; Fu 15.
> Online resources: see Wikipedia page.
Floer Cohomology
@ Equivalence with quantum cohomology: Sadov CMP(95).
Sheaf Cohomology > s.a. locality in quantum theory.
@ References: Warner 71; Griffiths & Harris 78; Strooker 78; Wells 80; Wedhorn 16.
> Online resources: see Wikipedia page.
Other Types > s.a. cohomology / K-Theory.
@ Lichnerowicz-Poisson cohomology: de León et al JPA(97).
@ Cyclic cohomology: Herscovich & Solotar JRAM-a0906 [and Yang-Mills algebras]; Khalkhali a1008-proc [A Connes' contributions].
@ Hochschild cohomology: Zharinov TMP(05) [of algebra of smooth functions on torus]; Kreimer AP(06)ht/05 [in quantum field theory]; > s.a. algebraic quantum field theory; deformation quantization.
@ Other types: Frégier LMP(04) [related to deformations of Lie algebra morphisms]; Papadopoulos JGP(06) [spin cohomology]; Blumenhagen et al JMP(10)-a1003 [line-bundle valued cohomology]; De Sole &
Kac JJM(13)-a1106 [variational Poisson cohomolgy]; Becker et al RVMP(16)-a1406 [differential cohomology, and locally covariant quantum field theory].
> Related topics: see N-Complexes [generalized cohomology]; Figueroa-O'Farrill's lecture notes on BRST cohomology.
main page – abbreviations – journals – comments – other sites – acknowledgements
send feedback and suggestions to bombelli at olemiss.edu – modified 25 may 2019 | {"url":"https://www.phy.olemiss.edu/~luca/Topics/top/cohomology_types.html","timestamp":"2024-11-14T01:13:13Z","content_type":"text/html","content_length":"8973","record_id":"<urn:uuid:212a76aa-3058-4b38-9595-e0171cc5fa61>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00349.warc.gz"} |
I highly recommend the following two courses being offered in the Spring Semester – they are awesome, and will put to use many of the skills you learned in this class:
MAT 3050 Geometry I
MAT 3080 Modern Algebra
They are both required for math ed majors – for others, feel free to ask me about whether they are part of your program.
NOTE: Some students are having trouble enrolling because of prerequisite issues – if you run into this problem, email Prof. Douglas adouglas@citytech.cuny.edu for assistance.
Prof. Reitz
Recent Comments
• OpenLab #1: Advice from the Past – 2019 Fall – MAT 2071 Proofs and Logic – Reitz on OpenLab #7: Advice for the Future
• Franklin Ajisogun on OpenLab #7: Advice for the Future
• Franklin Ajisogun on OpenLab #3: “Sentences”
• Franklin Ajisogun on OpenLab #6: Proof Journal
• Jessie Coriolan on OpenLab #7: Advice for the Future | {"url":"https://openlab.citytech.cuny.edu/2018-fall-mat-2071-reitz/?tag=spring-classes","timestamp":"2024-11-10T11:46:00Z","content_type":"text/html","content_length":"119555","record_id":"<urn:uuid:88b0c340-f11d-4ed9-9408-07bce38c38f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00575.warc.gz"} |
International Conference on Bio-Mathematics (July 1 – 2, 2022)
Speaker: Professor Julien Arino (University of Manitoba, Canada)
Contact: For any questions or comments, please contact one of the organizers
Host :
Prof Mahamat S Daoussa Haggar (Director of the Modeling, Mathematics, Computer Science, Applications and Simulation Laboratory – University of N’Djamena, Chad; director@l2mias.com
Scientific Committee:
Prof Mohamed Mbehou (University of Yaounde I, cameroon) – Chair
Prof Julien Arino (Manitoba University, Canada)
Prof Florence Hubert (Aix-Marseille University, France)
Prof Legesse Obsu (Adama Science and Technology University, Ethiopia)
Prof Benjamin Mampassi (Cheikh Anta Diop University, Senegal)
Prof Koina Rodoumta (University of N’Djamena, Chad)
Dr Mihaja Ramanantoanina (University of Pretoria, South Africa)
CIMPA (International Center for Pure and Applied Mathematics)
3MC (Mathematical Modeling Mini Courses)
Prof Mahamat Saleh Daoussa Haggar (Université de N’Djamena, Tchad)
Prof Bakari Abbo (Université de N’Djamena, Tchad)
Dr Patrick Tchepmo Djomegni (North West University, South Africa)
Dr Yaya Moussa (Université de N’Djamena, Tchad)
Mr Abdramane Annour Saad (Université de N’Djamena, Tchad)
Mrs Kadidja Mahamad Malloum (Université de N’Djamena, Tchad)
Mrs Raouda Amine Oumar (Université de N’Djamena, Tchad)
Theprogram is the following
FRIDAY 01 JULY 2022
08 :50 – 09 :00 Opening by Prof. Mahamat Saleh Daoussa Haggar (President of the University of N’Djamena)
09 :00 – 10 :00 Plenary talk by Prof Julien Arino
10 :05 – 10 :45 Talk 1: Dr Mihaja Ramanantoanina
10 :50 – 11 :30 Talk 2: Prof Mohamed Mbehou
11 :30 – 12 :00 Tea break
12 :00 – 12 :40 Talk 3: Dr Patrick Tchepmo
12 :40 – 14 :10 Lunch
14 :10 – 14 :40 Talk 4: Mr Aminou M Layaka
14 :40 – 15 :10 Talk 5: Mr Annour Saad Abdramane
15 :10 – 15 :40 Talk 6: Mr Dawè Siguy
SATURDAY 02 JULY 2022
09 :00 – 10 :00 Plenary talk by Prof Florence Hubert
10 :05 – 10 :45 Talk 7: Dr David Fotsa Mbogne
10 :50 – 11 :30 Talk 8: Dr Komi Afassinou
11 :30 – 12 :00 Tea break
12 :00 – 12 :40 Talk 9: Mr Aminou M Layaka
12 :40 – 14 :10 Lunch
14 :10 – 14 :40 Talk 10: Issa Oumar Abdallah
14 :40 – 15 :10 Talk 11: Guibé Séhoré
15 :10 – 15 :40 Talk 12: Vérité Djimasnodji
15 :40 – 16 :00 Closing by Prof. Mahamat Saleh Daoussa Haggar (President of the University of N’Djamena)
Prof Julien Arino (University of Manitoba, Canada)
Title: Imports of COVID-19 cases
The spatio-temporal spread of COVID-19 was rapid, with most countries around the world reporting cases within months. But if we look more closely, the situation was much more heterogeneous than it
appears at first sight. If we consider local jurisdictions, we observe that many of them, in particular those whose population is not very high, experienced alternating phases of active spread of the
disease and phases during which the disease was absent. This poses the problem of case imports: under what conditions does a jurisdiction that does not experience local propagation chains at a given
time go into the epidemic phase following one or more case imports? I will present a class of models allowing to consider this kind of problems. I will also show how we can assess the contribution of
imported cases to the dynamics of local spread, as well as the effectiveness of some measures aimed at reducing the risk of imported cases.
Prof Florence Hubert (Aix-Marseille University, France)
Titre: Les modèles de croissances-fragmentation en oncologie
Fragmentation growth models are commonly used in structured population dynamics to describe, for example, cell division phenomena or polymerization phenomena. The most classical equation is the
The study of the global existence of solutions to this problem as well as the study of their asymptotic behavior has given rise to many works. We will start by giving the main applications of such a
model, then we will recall the main results (see [1], [4]). We will then propose extensions of this model used in oncology to describe the phenomena of metastatic emission or to describe the dynamic
instabilities of microtubules. We will review the known results on these models and the remaining challenges.
[1] J. A. Cañizo, P. Gabriel, and H. Yoldas. Spectral gap for the growth-fragmentation equation via Harris’s theorem. SIAM J. Math. Anal., Vol.53, No.5, pp.5185-5214,(2021)
[2] N. Hartung, S. Mollard, D. Barbolosi, A. Benabdallah, G. Chapuisat, G. Henry,S. Giacometti, A. Iliadis, J.Ciccolini, C. Faivre, F. Hubert. Mathematical Modeling of tumor growth and metastatics
spreading : validation in tumor-bearing mice, Cancer Research 74, p. 6397-6407, 2014.
[3] S. Honoré, F. Hubert, M. Tournus, D. White. A growth-fragmentation approach for modeling microtubule dynamic instability, Bulletin of Mathematical Biology, 81 p. 722–758 (2019)
[4] B. Perthame. Transport equations in biology, Springer.
Prof. Mohamed Mbehou (University of Yaounde I, cameroon)
Title: Numerical implementation of non/local PDEs using finite element methods
This work is devoted to the study of the finite element approximation for non/local nonlinear parabolic problems. The first part will be based on the presentation of some nonlocal and local problems.
While the following will be on the implementation of a 1D/ 2D nonlocal PDE via the use of the software Matlab.
Dr Patrick Tchepmo Djomegni (North West University, South Africa)
Titre: Coexistence and harvesting control policy in a food chain model
We present the rich dynamics in two mathematical food chain models. The first case presents harvest regulation strategies in order to preserve the survival of species and optimize profits. The second
case presents the effect of response functions on species persistence. Hopf bifurcation, limit cycle, doubling periods, chaotic attractors, border crises are observed in the numerical calculations
Dr Mihaja Ramanantoanina (University of Pretoria, South Africa)
Title: On some spatio-temporal models of mutualistic populations.
In this talk, we review some approaches to model the spatio-temporal dynamics of two species engaged in a mutualistic interaction. First, we address the case of continuously reproducing species using
reaction-diffusion models based on partial differential equations. Next, we consider the case of species with non-overlapping generations. In this case, the movement is captured by a dispersion
kernel (a probability distribution that an individual moves from one place to another), and the population dynamics are modeled using integro-difference systems . In both cases, we focus on the
wavefront profiles and the propagation rate of the populations.
Dr Komi Afassinou (University of Zululand, South Africa)
Title: Mathematical modeling of foodborne disease transmission by cockroaches in human dwellings.
Cockroaches are among the most common pests in many homes and other food processing areas. Their cohabitation with humans has raised public health concerns and poses serious risks to human health, as
they are believed to play an important role in the transmission of various intestinal diseases such as diarrhea, dysentery, cholera, leprous plague and typhoid fever. In this article, we present a
mathematical model that depicts the transmission of foodborne diseases to humans by cockroaches. We incorporate control interventions such as the use of insecticides and regular environmental
sanitation. Mathematical and numerical analyzes are conducted to investigate the impact of these control interventions when considered as single or combined strategies. The results obtained reveal
the level of effectiveness of insecticides beyond which total eradication is possible, especially when their use is combined with regular sanitation of the environment. Use of bait and trap devices
is also explored and it turns out to be the best strategies.
[1] https://en.wikipedia.org/wiki/Cockroach#cite-ref-Cockroach.SpeciesFile.org-4-1.
[2] Ifeanyi O.T., Odunayo O.O., Microbiology of Cockroaches – A Public Health
Concern, International Journal of Scientific Research 4, 4 (2015).
[3] Keiding J., the cockroach-biology and control: Training and information guide
(advanced level), Geneva, World Health Organization, (1986) 86:937.
[4] Aliya H.B., In celebration of cockroaches, Daily Califormia. https://www.dailycal.org/2020/01/31/in-celebration-of-cockroaches.
Dr David Fotsa Mbogne (Université de Ngaoundéré, Cameroun)
Title: Estimation and optimal control of the multiscale dynamics of Covid-19: a case study from Cameroon.
This work aims at a better understanding and the optimal control of the spread of the new severe acute respiratory coronavirus 2 (SARS-CoV-2). A multi-scale model giving insights on the virus
population dynamics, the transmission process and the infection mechanism is proposed first. Indeed, there are human to human virus transmission, human to environment virus transmission, environment
to human virus transmission and self-infection by susceptible individuals. The global stability of the disease-free equilibrium is shown when a given threshold T0 is less or equal to 1 and the basic
reproduction number R0 is calculated. A convergence index T1 is also defined in order to estimate the speed at which the disease extincts and an upper bound to the time of infectious extinction is
given. The existence of the endemic equilibrium is conditional and its description is provided. Using Partial Rank Correlation Coefficient with a three levels fractional experimental design, the
sensitivity of R0, T0 and T1 to control parameters is evaluated. Following this study, the most significant parameter is the probability of wearing a mask followed by the probability of mobility and
the disinfection rate. According to a functional cost taking into account economic impacts of SARS-CoV-2, optimal fighting strategies are determined and discussed. The study is applied to real and
available data from Cameroon with a model fitting. After several simulations, social distancing and the disinfection frequency appear as the main elements of the optimal control strategy against
Dr Djibe Mbainguesse (University of N’Djamena)
Title: Numerical methods for nonlinear heat equation subject to nonlocal boundary conditions
We present a combinaison of three methods to produce an approximate solution of nonlinear heat equation of nonlocal boundary conditions. We first use the implicit backward Euler’s method to reduce
the equation to a boundary value problem with x as the spatial variable independent. The finite difference method of order four is then employed together with the Simpson’s quadrature to transform
the problem in the form of nonlinear algebric systems. We ressort to Newton’s iteration procedure to obtain the approximate solution.
Aminou M. Layaka (PhD student, L2MIAS, University of N’Djamena, Chad)
Title: Modelling and stability analysis of immune regulatory mechanisms during malaria blood stage infection
Malaria infection gives rise to host response which is regulated by both the immune system as well as by the environmental factors. In this talk, we discuss the immune regulation of malaria blood
stage infection in humans, focusing on Plasmodium falciparum, the most widely spread and dangerous of the human malaria parasites. We also propose some differential equations which describe the
dynamics of the immune cells and their cytokines interacting against the blood stage malaria parasite. Then we study the stability of the system at the equilibrium point
Aminou M. Layaka (PhD student, L2MIAS, University of N’Djamena, Chad)
Title: Optimal Control Analysis of Intra-Host Dynamics of Malaria with Immune Response
In this talk, a new intra-host model of malaria that describes the dynamics of the blood stages of the parasite and its interaction with red blood cells and immune cells is formulated. The
qualitative properties of solutions are established. We then extend the model to incorporate, in addition to immune response, three control variables. The existence result for the optimal control
triple, which minimizes malaria infection and costs of implementation, is explicitly proved. Finally, we apply Pontryagin’s Maximum Principle to the model in order to determine the necessary
conditions for optimal control of the disease.
Guibé Séhoré (L2MIAS, University of N’Djamena, Chad)
Title: Mathematical model study of two-strain tuberculosis with reinfection
We are studying the dynamics of tuberculosis infection with two strains: the susceptible and resistant to treatment. We consider an S-Es-Is-Er-Ir-T model with reinfection proposed by Castillo Chavez.
We study the local and global stability of the equilibrium points, then examine the sensitivity of the reproduction number with respect to certain parameters of the model.
Issa Oumar Abdallah (PhD student, L2MIAS, University of N’Djamena, Chad)
Title: Strategies for optimal control of the dynamics of the Covid-19 virus
In this talk, we introduce two controls in a model describing the infection dynamics of the Covid-19 virus in the body. Optimal control strategies are determined by minimizing infections, viral
production and considering treatment and physiological costs. First, we use a result of Fleming and Rishel to establish the existence of the optimal control. Then, we characterize the optimal control
and establish its uniqueness. Finally, the numerical simulation allowed us to illustrate our results and to quantify the impact of the control on the dynamics of infection.
Annour Saad Abdramane (PhD student, L2MIAS, University of N’Djamena, Chad)
Title: Mathematical Modeling of COVID-19: Case of Viral Infection with Inflammatory Response
In this work, we analyze a virus model of SARS-CoV-2 infection with immune response. The model was proposed by Mochan et al (2021) and describes an experiment carried out on Macaques. We analyze it
analytically for the first time by studying its qualitative behavior. We establish the existence, uniqueness and positivity of the solution. Then we determine the equilibrium points, study their
stability, and investigate strategies to limit secondary infections via a sensitivity study. The susceptibility index results indicate reducing the rate of virus replication is the best strategy to
reduce secondary infections. The theoretical results are illustrated graphically.
Dawè Siguy (L2MIAS, University of N’Djamena, Chad)
Title: Study and Simulations of Mathematical Models in Neuroscience
In this work, we describe the neuron and its components, present some mathematical models in neurosciences in particular the model of Hodgkin and Huxley and some of its derivatives. We are mainly
interested in the Fitzhugh-Nagumo model for which we study the existence and uniqueness of solutions, determine the equilibrium points and their nature, then the existence and direction of Hopf
bifurcation. Finally, the numerical model obtained using a finite difference scheme is simulated in Matlab. The study will be limited because of its complexity.
Keywords: Model in neuroscience, FitzHugh-Nagumo, equilibrium point, stability, bifurcation, finite difference method.
Vérité Djimasnodji (PhD student, University of N’Djamena, Chad)
Title: 2-species chemotaxis model with volume filling effect; Prey-predator system with multi-taxis
The presentation focuses on the mathematical and numerical analysis of a 2-species chemotaxis model with volume effect. After the modeling of the phenomenon by using some physical laws, we will make
the mathematical analysis of the model then the discretization by the Finite Elements method of the Galerkin type and the semi-implicit Euler scheme. We will present the numerical results obtained
following some numerical simulations. Finally, we briefly present prey-predator systems with multi-taxis which is a general case of what is presented above.
Registration for this event are closed
Participants list:
1. Prof Julien Arino, University of Manitoba, Canada
2. Prof Florence Hubert, Aix-Marseille University, France
3. Prof Mahamat Saleh Daoussa Haggar, Université de N’Djamena, Tchad
4. Prof Legesse Obsu, Adama Science and Technology University, Ethiopie
5. Prof Mohamed Mbehou, Université de Yaoundé I, cameroun
6. Dr Komi Afassinou, University of Zululand, South Africa
7. Dr Mihaja Ramanantoanina, University of Pretoria, South Africa
8. Dr Patrick Tchepmo Djomegni, North West University, South Africa
9. Eucharia Nwachukwu, University of Port Harcourt, Nigeria
10. Junior Kaningini, AIMS-Senegal, Senegal
11. Camelle Kabiwa Kadje, University of Douala, Cameroon
12. Bahati Kilongo, AIMS-Senegal, Senegal
13. Dr David Fotsa-Mbogne, Université de Ngaoundéré, Cameroun
14. Frais Kwadzo Agbenyegah, Ghana Communication Technology University, Ghana
15. Dr Lema Logamou Seknewna, AIMS-Senegal, Senegal
16. Patient Murhula Buhendwa, AIMS-Cameroon, Cameroon
17. Djokdelang Aloumza, Université de N’Djamena, Chad
18. Nassouradine Mahamat Hamdan, Université de N’Djamena, TChad
19. Kalidou Aliou Ball, Université Gaston Berger de Saint Louis, Senegal
20. Olusanmi Odeyemi, University of Benin, Nigeria
21. Issa Oumar Abdallah, L2MIAS, Université de N’Djamena, Tchad
22. Younous Magdoum, FSEA, Tchad
23. Kevin Basita, Technical University of Kenya, Kenya
24. Néhémie Néribar Djibé, Tchad
25. Winnie Yaa, AIMS-Senegal, Senegal
26. Bienvenu Djonneyahe, L2MIAS, Université de N’Djamena, Tchad
27. Faustin Koumakoye Makrada, Université de N’Djamena, Tchad
28. Annour Saad Abdramane, L2MIAS, Université de N’Djamena, Tchad
29. Oumar Madaï, L2MIAS, Université de N’Djamena, Tchad
30. Raounda Amine Oumar, L2MIAS, Université de N’Djamena, Tchad
31. Sehore Guibe, L2MIAS, Université de N’Djamena, Tchad
32. Djimramadji Hippolyte, L2MIAS, Université de N’Djamena, Tchad
33. Mopeng Herguey, L2MIAS, Université de N’Djamena, Tchad
34. Ahmad Ouaman Okari, L2MIAS, Université de N’Djamena, Tchad
35. Bienvenu Ndonane, L2MIAS, Université de N’Djamena, Tchad
36. Khadidja Mahamat Malloum, L2MIAS, Université de N’Djamena, Tchad
37. Djerayom Luc, L2MIAS, Université de N’Djamena, Tchad
38. Oumar Moussa Godi, L2MIAS, Université de N’Djamena, Tchad
39. Okari Ahmad, L2MIAS, Université de N’Djamena, Tchad
40. Dawe Siguy, L2MIAS, Université de N’Djamena, Tchad
41. Abakar Himeda Abdarahman, Université de N’Djamena, Tchad
42. Alex Lawou Meli, AIMS-Cameroon, Cameroon
43. Mahamat Abakar Adoum, Université de N’Djamena, Tchad
44. Idriss Cabrel Tsewalo Tondji, AIMS-Sénégal, Sénégal
45. Joseph Romaric Cheuteu Tazopap, Université de Douala, Cameroun
46. Elkana Koungue Gueini, Tchad
47. Issa Ahmat Annour, Université Gaston Berger de saint louis, Senegal
48. Arielle Sonia Yonke Nana, Université de Yaoundé I, Cameroun
49. Mady Parguet, Université de N’Djamena, Tchad
50. Merveille Cyndie Talla Makougne, Université de Yaoundé I, Cameroun
51. Astou Ndima, AIMS-Senegal, Senegal
52. Oleï Tahar Hassane, L2MIAS, Université de N’Djamena, Tchad
53. Alphonse Gapili Onsou, Université de Douala, Cameroun
54. Léonel KEMFOUET TSOPZÉ, Université de Yaoundé 1, Cameroun
55. Manuela Metsadong Nimpa, Université de Douala, Cameroun
56. Joseph Romaric Cheuteu Tazopap, Université de Douala, Cameroun
57. Sthyve Junior Tatho Djeanou, Université de Douala, Cameroun
58. Henri Loic Nguejo Messa, Université de Yaoundé 1
59. Annour Djidda Mahamat, Université de N’Djamena, Tchad
60. Mahamat Abakar Abdallah, Université de N’Djamena, Tchad
61. Magloire Ndilnodji, Université de N’Djamena, Tchad
62. Vérité Djimasnodji, Université de N’Djamena, Tchad
63. Ablaye Ngalaba, Université de N’Djamena, Tchad
64. Mahamat Abakar Djalabi, Université de N’Djamena, Tchad
65. Mahamat Saleh Idriss Ibrahim, Université de N’Djamena, Tchad | {"url":"https://l2mias.com/680-2/","timestamp":"2024-11-04T04:59:17Z","content_type":"text/html","content_length":"155604","record_id":"<urn:uuid:f93998cc-bcc8-4789-aab9-00d0e173514e>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00452.warc.gz"} |
Cubilete Cup | Download on Google Play or App Store
The game begins with each player rolling one cubilete.
The player with the highest roll, A-K-Q-J-T-N’s (in order of value), goes first.
Players take turn rolling the dice keeping in mind that each turn consists of three rolls at most before the highest combination is selected.
Players can stop rolling any time after the first roll, especially if they roll 5 Aces which amounts to winning the entire game not just that single round.
Patas, or points are scored when a player has the highest pairing per order A-K-Q-J-10- or 9’s. For example, if player one rolls 3 Queens then player two must roll minimum 4 Queens, 3 Kings or 3 Aces
as examples of what would overmatch a roll of 3 Queens by player one.
Once a player completes 3 possible rolls of the dice, the turn passes to the player on the left until each player has scored their round. A point is awarded to the player with the highest score of
the round.
First player to reach 10 patas or points, wins the match. Once a winner is determined, the game restarts with the same participants from the previous game.
The ultimate goal of each roll is to score Carabinas.
If you roll any Carabina during the game, you immediately win the round and get to keep rolling for another round. Let’s look at 3 different types of Carabinas...
Carabina de Aces
This one is worth ten points and is very difficult to obtain. To obtain Carabina de Aces, you need to get a complete set of five aces within the three-roll limit.
If you get Carabina de Aces, you are awarded 10 points and the game is automatically over. You are the winner of the game, even if no one else had a chance to roll. Plus, you get to be the first
roller in the next game!
Carabina de (Kings) Naturales
For this, you need to collect five Kings within your three allotted rolls. If you get this type of Carabina, you get five points. You then win the round!
The round then ends, and you get to keep on rolling into the next round.
Carabina de (Kings) No Naturales
The only way to obtain this Carabina is by pairing Aces with the Kings.
The Ace can be paired with any of the other dice because it is wild. If you obtain this type of Carabina, you get two points. You win the round, the round ends and you get to keep on rolling on to
the next round.
The game begins with each player rolling one cubilete.
The player with the highest roll, A-K-Q-J-T-N’s (in order of value), goes first.
Players take turn rolling the dice keeping in mind that each turn consists of three rolls at most before the highest combination is selected.
Players can stop rolling any time after the first roll, especially if they roll 5 Aces which amounts to winning the entire game not just that single round.
Patas, or points are scored when a player has the highest pairing per order A-K-Q-J-10- or 9’s. For example, if player one rolls 3 Queens then player two must roll minimum 4 Queens, 3 Kings or 3 Aces
as examples of what would overmatch a roll of 3 Queens by player one.
Once a player completes 3 possible rolls of the dice, the turn passes to the player on the left until each player has scored their round. A point is awarded to the player with the highest score of
the round.
First player to reach 10 patas or points, wins the match. Once a winner is determined, the game restarts with the same participants from the previous game.
The ultimate goal of each roll is to score Carabinas.
If you roll any Carabina during the game, you immediately win the round and get to keep rolling for another round. Let’s look at 3 different types of Carabinas...
Carabina de Aces
This one is worth ten points and is very difficult to obtain. To obtain Carabina de Aces, you need to get a complete set of five aces within the three-roll limit.
If you get Carabina de Aces, you are awarded 10 points and the game is automatically over. You are the winner of the game, even if no one else had a chance to roll. Plus, you get to be the first
roller in the next game!
Carabina de (Kings) Naturales
For this, you need to collect five Kings within your three allotted rolls. If you get this type of Carabina, you get five points. You then win the round!
The round then ends, and you get to keep on rolling into the next round.
Carabina de (Kings) No Naturales
The only way to obtain this Carabina is by pairing Aces with the Kings.
The Ace can be paired with any of the other dice because it is wild. If you obtain this type of Carabina, you get two points. You win the round, the round ends and you get to keep on rolling on to
the next round. | {"url":"https://cubiletecup.com/rules","timestamp":"2024-11-07T00:05:51Z","content_type":"text/html","content_length":"76473","record_id":"<urn:uuid:d6dcc45d-a563-4080-81d4-4994d423fc1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00638.warc.gz"} |