Maxwell’s Equations - November 2024 - Silicon Chip Online
The derivation of Maxwell's Equations (∇●E, ∇×B, ∇×E, ∇●B), by Brandon Speedie.

Our recent feature on the history of electronics covered many prominent contributors to the field. Two names stand out above others; their work is commonly referred to as the 'second great unification in physics'.

David Maddison's History of Electronics series was published in the October, November and December 2023 issues (siliconchip.au/Series/404). It mentioned hundreds of people who laid the foundations for modern electronics. Englishman Michael Faraday was one of the standouts in that list, with significant contributions to the understanding of electromagnetics.

Faraday was born in 1791 to a poor family. He had an early interest in chemistry, but his family lacked the means to formally educate him. Instead, he became self-taught through books and an unbounded curiosity for experimentation. This practical approach continued throughout his career and set the blueprint for his breakthroughs in electromagnetics, despite having no formal training.

Faraday was responsible for many notable discoveries, including the concept of shielding (the Faraday Cage), the effect of a magnetic field on the polarisation of light (the Faraday Effect), the electric motor (an early homopolar type, see Fig.1), the electric generator (an early dynamo, see Fig.2), and the fact that electricity is a force rather than a 'fluid' (as was the understanding at the time). He also theorised that this electromagnetic force extended into the space around current-carrying wires, although his colleagues considered that idea too far-fetched. Faraday didn't live long enough to see his concept accepted by the scientific community.

Photo: Faraday's coil and ring experiment demonstrated electromagnetic induction. Source: Ri – siliconchip.au/link/abv3

It was an experiment with an iron ring and two coils of wire in 1831 that proved a defining moment for the vocation we now call electrical engineering. By passing a current through one coil, Faraday observed a temporary current flowing in the second coil, despite the lack of a galvanic connection between them. We now refer to this phenomenon as electromagnetic induction, the property behind many common products such as transformers, electric motors, speakers, dynamic microphones, guitar pickups, RFID cards etc. Most notably, this principle is involved in generating the bulk of our electricity. It was a remarkable achievement, later earning Faraday the moniker, "the father of electricity".

James Clerk Maxwell
Maxwell was born in 1831 in Scotland. His comfortable upbringing and access to education contrasted with Faraday's. Recognising his academic potential, his family sent him to technical academies and university to foster his curiosity about the world around him. Maxwell had long admired Faraday's work but understood that Faraday was fundamentally a tinkerer with only a basic understanding of mathematics. Maxwell recognised that his own strengths in mathematics were needed to unify Faraday's experimental results, along with the work of other notable contributors such as Carl Friedrich Gauss and Hans Christian Ørsted.

In 1860, Maxwell's employment moved to King's College, where he came into regular contact with Faraday. During this period, he published a four-part paper, "On Physical Lines of Force", using concepts Faraday had introduced many decades earlier. It contained the four expressions we now know as Maxwell's equations, which tie together electricity, magnetism and light as a single phenomenon: the electromagnetic force. This is called the 'second great unification in physics' because Sir Isaac Newton's trailblazing work with motion and gravity is considered the first.

Figs.1 & 2: Faraday's homopolar motor (left) and Faraday's disc generator (right).

Vector calculus

To understand the notation of Maxwell's equations, a quick primer on vector calculus is in order. Electromagnetism works in three-dimensional space, which can make mathematical representations confusing. We will cover the basics here, using figures to help visualise the equations. The formulas will follow the differential form derived by Oliver Heaviside from Maxwell's original paper.

Derivative (d/dt)

The derivative operator, d, is shorthand for the Greek letter delta (Δ), which in mathematics refers to a change or difference. 't' refers to time, so d/dt therefore means the change in a parameter over time or, more commonly, 'rate of change'. The symbol '∂' instead of 'd' indicates a partial derivative, which is used when differentiating a function of two or more variables.

Nabla / Del (∇)

Del is the vector differential operator. It is equivalent to the derivative operator above but can be applied to more than one dimension. In our examples, it will be applied to a 3D field.

Dot product (●)

A dot product is an operation between two vectors that gives a scalar (numeric) result. The result is equivalent to multiplying the magnitudes of the two vectors together, then multiplying that by the cosine of the angle between them. The cosine is at a maximum if the two vectors point in the same direction and zero if they are orthogonal. If the vectors this is applied to are unit vectors (vectors of length one), the result is simply the cosine of the angle between them.

Divergence (∇●)

Combining Del and the dot product is commonly referred to as the divergence operator. When used on a vector field, it returns a scalar field representing its source at any particular point. For example, calculating the divergence of atmospheric wind speed would give a view of pressure differences.

Cross product (×)

The cross product is a vector operation to calculate the 'normal' of two vectors, resulting in a new vector perpendicular to the two input vectors. A common example is to derive an axle's torque from its radius and force vectors. The resulting torque vector is orthogonal to both vectors and points in the direction of its angular force (see Fig.3).

Fig.3: an application of the cross product. The torque of an axle can be calculated from the cross product of the radius and force vectors.

Curl (∇×)

Combining Del and the cross product yields the curl operator. When applied to a vector field, its result is a vector field that shows the rotation or circulation. Returning to the meteorology example, calculating the curl of wind speeds in the atmosphere will return vorticity, a measure of cyclone or anticyclone rotation. Negative vorticity usually correlates with low pressure and unstable weather (cyclonic rotation), and positive vorticity with high pressure and fine weather (see Figs.4 & 5).

Fig.4 (top): a wind speed plot showing rotational winds off the east coast of Australia and in the southern ocean. Source: BoM, siliconchip.au/link/abv4

Fig.5 (bottom): calculating the curl of the wind speed yields the vorticity, which more clearly shows the cyclonic rotation off the east coast (blue) and the anticyclone in the southern ocean (red). Negative vorticity (blue) is associated with atmospheric instability, positive (red) usually means fine weather. The same operation can be used on a 3D electric or magnetic field to derive its source. Source: BoM, siliconchip.au/link/abv5
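As a quick numerical illustration of the dot and cross products described above (this snippet is not part of the original article; the vectors are invented purely for demonstration):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dot product: |a||b|cos(theta), a single scalar.
dot = np.dot(a, b)
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# Cross product: a new vector perpendicular to both inputs.
cross = np.cross(a, b)

# The torque example from Fig.3: tau = r x F.
r = np.array([0.2, 0.0, 0.0])   # radius vector in metres (hypothetical)
F = np.array([0.0, 50.0, 0.0])  # force vector in newtons (hypothetical)
torque = np.cross(r, F)         # -> [0, 0, 10] N·m, pointing along the axle

print(dot, cos_theta, cross, torque)
```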
#1) Gauss' law of magnetism

∇●B = 0

Maxwell's first equation is named after German physicist Carl Friedrich Gauss. Here, B is the magnetic field. Simply stated, the sum of all magnetic fields emanating from an interface will always add to zero. This is most obvious when looking at the magnetic field lines surrounding a bar magnet (see Fig.6). Any field lines exiting 'north' wrap around the magnet and enter at the 'south' end. Considering any isolated area, or the entire magnet as a whole, there is no magnetic field source.

Fig.6: Gauss' law of magnetism with reference to a permanent magnet. Any field lines exiting 'north' wrap around the magnet and enter at the 'south' end. The net magnetic field source is zero for any surface cutting through this field (eg, the square), or for the whole magnet in total.

Fig.7: Gauss' law in an air-gapped capacitor (eg, a tuning gang). A voltage source forces a positive charge to build up on the top plate & a negative charge on the bottom plate. An electric field forms between the charged regions.

Fig.8: similar to Fig.7 but with a plastic film dielectric, which has a higher permittivity than air. Electric dipoles in the dielectric orientate themselves to cancel some of the electric field strength, increasing the effective capacitance.

#2) Gauss' law

∇●E = ρ ÷ ε

Also called Gauss' flux theorem. Here, E is the electric field, ρ is the charge density (the amount of electric charge per volume) and ε is the
permittivity of the material or medium (calculated as ε0εr, where ε0 is the vacuum permittivity and εr is the relative permittivity; in a vacuum εr = 1). This law states that electric charge is the
source of an electric field. The strength of that field is proportional to the amount of charge and inversely proportional to the permittivity of the supporting material. This phenomenon is most
apparent in a capacitor, where an accumulation of negative charge (electrons) builds up on one plate, and a positive charge (protons or holes) on the other (Fig.7). A dielectric between the plates
supports the electric field. Its electric dipoles will be orientated opposite to the direction of the electric field and therefore store some of that electric field strength. Film capacitors use a
plastic dielectric such as polypropylene or polystyrene, materials which have a relatively low permittivity, meaning they have few electric dipoles to orientate themselves against the field, leaving
it mostly intact (Fig.8). In contrast, ceramic capacitors typically use a much higher permittivity dielectric, such as barium titanate, which will orientate many dipoles in response to the applied
field and cancel much of the electric field strength (Fig.9). These dipoles provide a higher capacitance per unit area for ceramic capacitors compared to film caps.

Fig.9: this is like Figs.7 & 8 but with a ceramic dielectric. The high permittivity allows many dipoles to cancel a large proportion of the electric field. This arrangement has very high capacitance per area.

#3) Faraday's law of induction

∇ × E = -∂B/∂t

Here, E is the electric field and B is the magnetic field, so ∂B/∂t is the change in magnetic field over time. This equation mathematically formalises Faraday's coil and ring experiment. It is the notable law of electromagnetic induction, where a time-varying magnetic field induces an orthogonal electric field. The stronger
the magnetic field, or the faster its rate of change, the stronger the resulting electric field. This law is most familiar in rotating generators such as hydroelectric, gas, coal and wind-powered
electricity production. As the alternator spins, its rotor produces a changing magnetic field for the stator, inducing an electric field that supplies the grid (see Figs.10 & 11). Similarly, the
strings on an electric guitar vibrate when plucked. As they oscillate, they cut through the magnetic field produced by the pickups. This changing magnetic field induces a voltage in the pickup
windings, which is amplified by a circuit to drive the speaker(s).

Fig.10 (left): Faraday's law of induction on a simplified three-phase alternator. The permanent magnet rotor spins, providing a changing magnetic field. An electric field is induced in the top coil, as shown by the voltmeter.

Fig.11 (right): the same arrangement as Fig.10 but the rotor has rotated 90°, so the top coil sees no change in the magnetic field. The voltmeter shows no deflection. If the rotor continues to spin, the south side of the magnet will soon be near the coil, inducing an electric field with opposite polarity. Through a full 360° rotation, a sinusoidal waveform is generated, ie, AC voltage.

#4) Ampere's law

∇ × B = μJ

Here, B is the magnetic field, J is the electric current density in amperes per square metre (A/m²) and μ is the magnetic permeability of the material or medium. The original form of Ampere's law states that the flow of electric current produces an orthogonal magnetic field. The strength of this field is proportional to the current flow and the magnetic permeability of the material (Fig.12).

Ampere's law is the magnetic equivalent of Gauss' law. We know that electric charge is the source of the electric field, but Ampere's law shows that the movement of electric charge is the source of a magnetic field. This phenomenon is most apparent in an electromagnet, where a wire is wrapped into a coil. As electric current flows, a magnetic field is produced orthogonal to the wire (Fig.13). Suppose a high permeability material such as iron or ferrite is placed in the coil's core (Fig.14). In that case, magnetic dipoles orientate themselves in the direction of the magnetic field, increasing its strength.

Using an iron-based core to increase magnetic field strength is very common in many magnetically-driven devices. For example, silicon steel is widely used in transformers and the field windings of most electric motors or generators. It is also used in hair clippers, where the 50Hz mains waveform is used to induce a changing magnetic field in the cutting teeth, providing an oscillatory motion to trim the hair.

Ferrite is another common iron-based material widely used in magnetic products. It is favoured for its unique properties as a poor electrical conductor but a good magnetic conductor (high permeability). That is why it is widely used as a former for high-frequency inductors, in permanent magnets for hobby DC motors and as a source of magnetic fields in loudspeakers. This magazine also commonly features AM 'loopstick' antennas in its vintage electronics section, which often have an adjustable ferrite core. By rotating the screw, the ferrite can be moved in or out of the coil, providing an inductance adjustment to
'slug tune' the receiver.

Fig.12: an example of Ampere's law. Current flowing in a wire produces an orthogonal magnetic field.

Figs.13 & 14: if the length of wire from Fig.12 is coiled, the magnetic fields constructively interfere, producing a stronger field (left). If a high permeability material is used in the core, magnetic dipoles orientate themselves in the direction of the field, increasing the field strength (right).

Maxwell's addition to Ampere's law

The original form of Ampere's law only relates electric current to magnetic field strength. Significantly, Maxwell added a term that relates electric and magnetic fields, termed "Maxwell's addition":

∇ × B = μ(J + ε∂E/∂t)

Here, B is the magnetic field, J is the electric current density in amperes per square metre (A/m²), μ is the permeability of the material or medium, ε is the permittivity of the material or medium and E is the electric field (so ∂E/∂t is the change in electric field over time). The additional term includes the property that a time-varying electric field produces an orthogonal magnetic field. Put simply, the strength of the magnetic field is proportional to the permeability and permittivity of the material, as well as the electric field's strength and rate of change.

When considering this relation together with Faraday's law of induction, it can be seen that a time-varying electric field produces a magnetic field and a time-varying magnetic field produces an electric field (see Fig.15). It is a remarkable property; as Faraday so eloquently phrased it, "nothing is too good to be true if it be consistent with the laws of nature".

A common example is in the transmission of radio waves by an antenna. Alternating current in the antenna produces a time-varying magnetic field around the conductors, which in turn produces a time-varying electric field that continues to propagate in free space. Some distance away, these fields induce a current in a receiving antenna, allowing the wireless transfer of information.
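As a numerical aside (not part of the original article), the propagation speed of such a wave in a vacuum follows directly from the vacuum permeability and permittivity via c = 1/√(μ0ε0):

```python
import math

mu_0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.0f} m/s")  # ~299,792,458 m/s: the speed of light
```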
Fig.15: Maxwell's addition to Ampere's law models the propagation of an electromagnetic wave. A changing electric field induces an orthogonal magnetic field, which in turn induces an electric field. The wave propagates in a direction normal to both the electric & magnetic fields, at the speed of light. Source: https://tikz.net/files/electromagnetic_wave-001.png

This is also how our sun can power the Earth's biosphere. As tiny atoms such as helium and hydrogen undergo nuclear fusion inside the sun, they emit electromagnetic waves. These waves propagate through free space as time-varying electric & magnetic fields, eventually reaching Earth, where they are used as an energy source by the flora & fauna on this planet.

Theory of relativity

Years after Maxwell's publication, a young Albert Einstein expanded these equations in his own papers. Einstein was fascinated by the concept of light as an
electromagnetic wave. The significance of this for him was the notion that the speed of the wave depends only on the permittivity and permeability of the medium it travels through and is therefore
invariant of the relative speed of the source (Fig.16). This understanding led Einstein to publish his groundbreaking theory of special relativity in 1905, as well as the well-known mass/energy equivalence formula, E = mc², where E is energy, m is mass and c is the speed of electromagnetic waves (light). This work was further expanded by Einstein's theory of general relativity in 1915,
which included the force of gravitation in addition to the electromagnetic concepts introduced in special relativity. Maxwell’s equations are so central to this theory that they can be derived from
Einstein’s general relativity formulas. Einstein paid tribute to Maxwell later in his career when asked whether he “stands on the shoulders of Newton”, to which he replied, “no, on the SC shoulders
of Maxwell”. TO-220 standalone (SC6855, $45) 40A continuous, 72V Connectors: 6.3mm spade lugs, 18mm tall IC1 package: DIP-8 Mosfets: TK5R3E08QM,S1X (TO-220) See our article in the December 2023 issue
for more details: siliconchip.au/Article/16043 94 Silicon Chip Fig.16: the speed of electromagnetic waves is proportional only to the permittivity and permeability of the material they pass through.
In this prism, red light travels at a different speed than blue (because their wavelengths differ), so they are refracted at different angles. This inspired Albert Einstein to derive his
groundbreaking theories of relativity. Source: www.vectorstock.com/35129206 Australia's electronics magazine siliconchip.com.au | {"url":"https://www.siliconchip.com.au/Issue/2024/November/Maxwell%E2%80%99s+Equations","timestamp":"2024-11-11T04:06:37Z","content_type":"text/html","content_length":"82171","record_id":"<urn:uuid:c5e51c80-4927-4650-a389-666250870ccf>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00263.warc.gz"} |
Fibonacci Analysis - Techniques Applied to the Currency Market
Have you ever heard of Fibonacci ratios in Forex trading? If not, then you have come to the right place. And even if you have, you may just learn a thing or two. Today I am going to explain the significance of the Fibonacci sequence in Forex trading and its use as potential support and resistance levels.
Fibonacci Numbers Explained
You have probably heard about the famous Fibonacci numbers. But what is the significance of these numbers in trading? Well, we are going to dive into that, but first let's go through a little bit of history.
Centuries ago, a mathematician named Leonardo Fibonacci described an interesting relation between numbers. He introduced a number sequence that starts with zero and one (0, 1). Each new number is then formed by adding the previous two, which results in the following sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946…
…and so on and so on.
Notice again that each number of this number succession is formed by the sum of the previous two.
So, what is so interesting about this Fibonacci number series, you ask?

Fibonacci discovered an interesting relation between the numbers in his sequence. He realized that each number is 61.8% of the next number in the set. That's right! He discovered one more thing: each number in the set is 38.2% of the number two positions to the right in the sequence. Let's look at an example:
Let’s take number 987 from the Fibonacci set and see how it relates to the next number in the sequence – 1597.
987 / 1597 = 0.61803381
When you convert 0.61803381 to a percentage you get 61.803381% = 61.8%.
Let’s take another example:
We take the same 987 number and we will now see how this relates to the number which is two positions to the right of it in the set – 2584.
So, here it is:
987 / 2584 = 0.38196594
When you convert 0.38196594 into a percentage you get 38.196594%, which rounds to 38.2%.
In case you think this is a coincidence, I will do another demonstration for you:
This time we pick a bigger number – 4181. Now we will see what percentage this number corresponds to with the one which lies next to it – 6765.
4181 / 6765 = 0.61803400
When we convert 0.61803400 to a percentage value we get 61.803400%, which rounds to 61.8%.
Let’s now see what we will get if we divide 4181 with the number which is two positions to the right – 10946:
4181 / 10946 = 0.38196601
After converting 0.38196601 to a percentage value we get 38.196601%, which is 38.2% after rounding.
Another ratio can be extracted from the Fibonacci sequence. Every number in the set is equal to 23.6% of the number which is three positions to the right of it.
Let's take the number 2584. If you run through the sequence you will see the third number to the right of it is 10946. And so…
2584 / 10946 = 0.2360679700347159
When we convert this number into a percentage, we get 23.60679700347159%. After rounding, this is 23.6%.
This is how the Fibonacci number set works.
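If you want to check these relations yourself, here is a minimal Python sketch (not part of the original article) that reproduces the calculations above:

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 0, 1."""
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

seq = fibonacci(22)
f = seq[16]  # 987, the number used in the examples above
for offset, label in [(1, "61.8%"), (2, "38.2%"), (3, "23.6%")]:
    print(f"{f} / {seq[16 + offset]} = {f / seq[16 + offset]:.8f} (~{label})")
```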
How Do the Fibonacci Numbers Apply in Nature?
In order to understand the Forex analysis relationship of the Fibonacci ratios, we first need to look a little deeper into these values from a different angle.
Fibonacci soon realized that his ratios were prevalent everywhere in the natural world and that the ratios 38.2% and 61.8% held special significance.
The 61.8% and the 38.2% ratios are all around us in the universe. You find these values all over the natural world – in plants, in animals, in space, in music, in people’s fingerprints, and even in
human faces! Have a look at the image below:
This shape is the basic Fibonacci Spiral. Notice that the size of each segment of this shape follows the Fibonacci sequence itself: 1, 1, 2, 3, 5, 8, and the spiral curls and curls to infinity. So, when you take a closer look at the Fibonacci spiral, does it look familiar to you? Let me introduce a few more images to you.
All four of these images portray the Fibonacci Spiral found in nature. Since the human eye constantly sees Fibonacci shapes, it has become accustomed to them over the centuries, perceiving them as natural and harmonious.
This understanding also applies to human psychology and facial features. If someone's face matches the Fibonacci parameters, it is generally perceived as beautiful and good looking to others. Have a
look at the image below:
This is a picture of a human ear with absolutely proportional size and shape. As you see, the Fibonacci Spiral is added to the image showing that the parameters of the ear match the Fibonacci
sequence. The ear in the image above is considered beautiful to the human eye because it responds to the Fibonacci ratios. Since the human eye constantly observes this ratio in nature, it has adapted over the centuries to perceive as beautiful those shapes which match the Fibonacci values.
Now look at this image:
This is a facial portrait which is highly responsive to the Fibonacci ratios. It would generally be accepted that this face is considered appealing and beautiful. On the other hand, faces with parameters that are not as responsive to the Fibonacci values are not as likely to be attractive to the human eye.
Some of the big corporations around the world take advantage of the Fibonacci sequence and its meaning to human perception. For example, many of the well-known brand logos around the world are
conformed to the Fibonacci ratios on purpose, so they can be appealing to the human eye. Have a look at the logo below:
I bet you recognize this company logo. Yes, this is the well known Apple logo. This company logo is fully conformed to the Fibonacci ratio in order to be attractive and receptive to the human eye.
Fascinating, isn’t it?
How to Implement Fibonacci Analysis in Forex Trading?
So we have now seen how the Fibonacci numbers relate to the natural world and human perception. But how can we apply Fibonacci in Forex trading, and how can we improve our analysis with Fibonacci ratios?

Imagine the price of a Forex pair is trending upwards as a result of the bulls dominating over the bears. Then suddenly, the bears overtake the bulls and the price direction reverses. The price
starts dropping against the previous trend, but for how long? This is where the Fibonacci ratios can be applied and prove useful in our trading decisions.
When the primary trend is finished and a contrary movement occurs, it is likely that the contrary move will equal 38.2% or 61.8% of the previous trend. The reason for this is that investors tend to change their attitude after price retracements of 38.2% or 61.8% of the general trend. As we have already said, human nature has become used to the Fibonacci ratios within the natural environment, and so it is just a natural tendency for traders, who are part of that environment, to react at these levels. Traders are likely to switch sides when the price interacts with a crucial Fibonacci level.
Now that we have a basic understanding of the Fibonacci sequence and its effects in the financial markets, we turn our focus to the various trading tools that help us find these hidden levels on a price chart.

Fibonacci Retracements
This is the most famous Fibonacci tool and is available on nearly every Forex trading platform. It consists of a line used to locate the basic trend. By manually adjusting this line along the trend, the Fibonacci levels are automatically drawn on the price chart. When you place your Fibonacci Retracement tool on the chart, you will get horizontal lines which indicate the levels 0.0, 23.6, 38.2, 50.0, 61.8 and 100.0. Once we get our Fibonacci Retracement levels, we can start our Fibonacci analysis. Let's take a look further to see how this would work:
This is the 60 min chart of the most traded Forex pair – EUR/USD. The time frame is Jan 6 – 21, 2016. As you can see from the image above, we have marked the basic trend from the bottom to the top. This is what we would use to calculate the Fibonacci retracement ratios. The horizontal lines mark the Fibonacci percentages based on the Fibonacci sequence.
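For clarity, this is how those horizontal levels are derived from the marked trend. The sketch below is not from the original article, and the swing prices are made up purely for illustration:

```python
def retracement_levels(swing_low, swing_high):
    """Map each Fibonacci ratio to a price level for an upward trend."""
    ratios = [0.0, 0.236, 0.382, 0.5, 0.618, 1.0]
    span = swing_high - swing_low
    return {ratio: swing_high - span * ratio for ratio in ratios}

# Hypothetical EUR/USD swing prices: 0.0% is the trend top, 100% the bottom.
for ratio, price in retracement_levels(1.0710, 1.0985).items():
    print(f"{ratio:>6.1%}  {price:.4f}")
```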
Let’s see how the price reacts to the Fibonacci levels on this chart:
• After the end of the bullish trend, the price drops and finds support at the 61.8% fib retracement level.
• After rebounding to the trend’s top, the EUR/USD drops to test the 38.2% retracement level. Notice that the price tests this level about three times.
• Then we get a slight increase to 23.6%. The price tests the level and drops again.
• The following drop reaches 61.8%. This level gets tested a couple of times. Then we get another increase to 100%.
• Then another price drop to test the 38.2% level.
• Later on, we see another bounce from our 38.2% fib retracement area.
Fibonacci analysis can be used as part of a trading strategy or as a stand alone trading method. A simple set of rules would be to go long whenever the price bounces from 61.8% or 38.2%. Place a stop
loss right below the bottom of the bounce. If the price moves in your favor, adjust the stop upwards. If the price breaks a new Fibonacci Level, move the stop to the middle of that Fibonacci Level
and the Lower one. Hold the trade until the price reaches 0.00% or until the stop loss is hit. Let’s now play this strategy over the chart above:
• The first green circle shows the first bounce off the price from the 61.8% level.
• Go long when the price touches that level and bounces in bullish direction.
• In this position, the close signal comes when the price hits 0.00%.
• This long position could have generated profit equal to 136 pips.
• The next long position should come in the second circle when the price bounces from 38.2%.
• Unfortunately, one of the last candlewicks during the bounce hits the stop loss order.
• This position would have generated a loss of 10 pips.
• The third long position could come right after the decrease to 61.8% in the third green circle.
• An increase comes and when 23.6% is broken upwards, the stop should be adjusted between 23.6% and 38.2%.
• After consolidation around 23.6% the price drops and hits the stop.
• From this long position one could have made 41 bullish pips.
• In the next green circle, you see the fourth potential long position. Go long after the bounce from the 38.2% level.
• This long position leads right to the 0.00% level, which is the close signal.
• One could have made 76 bullish pips out of this long trade.
• There is even an opportunity for a fifth trade on this chart. One should go long after the bounce from 38.2% shown in the last green circle.
• Again, the close signal comes with the break through the 0.00% level.
• This potential long position equals 66 bullish pips.
The total outcome from this simple Fibonacci Retracement strategy implemented on the chart above could have been a profit of 309 pips. But keep in mind that this is an overly simplistic trading example, used just to show you the power of the Fib levels. However, the true power of Fibonacci trading is realized when you combine other key support and resistance levels with Fibonacci retracement levels. When you get a confluence of support and resistance around these levels, there is a high likelihood of prices holding there.
Fibonacci Fan
This is another useful Fibonacci tool. It is based on the Fibonacci ratios, but it takes a different form on the chart. You draw your Fibonacci Fan the same way you draw the Fibonacci Retracement: just identify the trend and stretch the indicator over it. The other components of the Fibonacci Fan will appear automatically. These are three diagonal lines, vertically spaced at 38.2%, 50.0% and 61.8% from the top of the trend. After we place our Fibonacci Fan on the chart, we observe the way the price reacts to the diagonal Fibonacci levels. The image below will make it clearer for you.
This is the Daily chart of the EUR/USD for the period May 30 – Nov 3, 2010. We have identified the basic trend, where we place our Fibonacci Fan instrument. The small black arrows show the areas
where the price bounces from the Fibonacci Fan levels. The red circles show where the levels get broken.
Notice the way the price interacts with the trend lines of the fib fan:
• After the end of the trend, the price drops through 38.2%.
• The price reaches 50.0% afterwards and bounces upwards.
• Then we see a test of the already broken 38.2% as a resistance and a bearish bounce from this level.
• The price breaks 50.0% afterwards and drops to the 61.8% level.
• The 61.8% level gets tested as a support three times in a row.
• After the third bounce the price jumps upwards breaking the 50.0% level.
• The increase continues and we see a break through 38.2%.
• The 38.2% level gets tested as a support afterwards.
Let's say we want to create a simple Fibonacci system to trade using the Fib Fans. What we could do is open a long position every time the price bounces from 38.2% or 61.8%. Stop losses should be placed right under the bottom of the tests. The trade could be held until the price breaks one of the fan levels in a bearish direction.
In our case, we have no bounces from 38.2%. However, there are three bounces from 61.8%. Let’s simulate the eventual trading:
• Go long right after the first bounce from 61.8%.
• The price goes upwards and then drops again to 61.8% for a test.
• The level sustains the price and we see another price increase.
• Close the trade when the price closes a candle below 61.8% – the third bottom on that level.
• One could have made a profit of 22 pips from this trade.
• Then we can go long again when the price closes a candle above 61.8%.
• Hold the trade until the stop loss right below the third bottom on 61.8% is hit, or until you see the price breaking one of the fan's levels in a bearish direction.
• The price starts a strong bullish increase through 50.0% and 38.2%.
• The 38.2% level gets tested as a support afterwards.
• Stay with the trade until the price breaks 38.2% in bearish direction.
Keep in mind this is a simplified example to demonstrate the use of Fib Fans in trading. But the real power of Fib Fans and Fib Retracements as well, come from confluence with other technical
analysis tools.
• The Fibonacci ratios come from a number sequence starting from (0, 1), where each number is added to the previous one to create the next number of the sequence – 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, etc.
• The relation between these numbers generates the basic Fibonacci Ratios – 61.8% and 38.2%.
• These ratios are found all around us in the natural universe.
• Since people constantly see Fibonacci ratios subconsciously, human nature has adapted to perceive them as harmonic.
• The Fibonacci Ratio is present in the financial markets, since the markets are really just a reflection of human emotions.
• When the price reverses a trend, the reversal is likely to start hesitating, or even stop, around 38.2% or 61.8% of the size of the previous trend.
• Some of the Fibonacci trading tools to measure these ratios on-chart are the Fibonacci Retracements and the Fibonacci Fan.
• Fibonacci Ratio tools should be used in conjunction with other technical analysis methods and when technical confluence is present around these levels, then there exists a possibility of a high
probability trade setup.
Need help with math problems
Math problems can take inexperienced students much time to solve. Sometimes students do not have the time to focus on the assignment due to very close deadlines or having too much other work. So, where should they seek help? Online platforms are the best bet for students who need help with math problems.
There are a number of strategies used in solving math word problems; if you don't have a favorite, try the Math-Drills.com problem-solving strategy: Question: Understand what the question is asking.
What operation or operations do you need to use to solve this question? Ask for help to understand the question if you can't do it on your own. Acing the Math Subtests of the ASVAB AFQT - dummies Use
the scratch paper provided to help you solve problems. Draw a picture for math word problems to help you visualize the situation and pick out the relevant information. Remember: You can't use a
calculator on any of the AFQT subtests. For difficult problems, try plugging the possible answer choices into the equation to see which one is right. Practical Algebra Lessons | Purplemath
Pre-algebra and algebra lessons, from negative numbers through pre-calculus. Grouped by level of study. Lessons are practical in nature, informal in tone, and contain many worked examples and warnings about problem areas and probable "trick" questions.
Whether you are a mathlete or math challenged, Photomath will help you interpret problems with comprehensive math content from arithmetic to calculus to drive learning and understanding of
fundamental math concepts.
Mathway | Algebra Problem Solver Free math problem solver answers your algebra homework questions with step- by-step explanations. Step-by-Step Math Problem Solver QuickMath allows students to get
instant solutions to all kinds of math problems, from algebra and equation solving right through to calculus and matrices.
Pay Someone Do My Math Homework For Me - ✅Get Help Now
Math.com Homework Help Hot Subject: Integers Free math lessons and math homework help from basic math to algebra, geometry and beyond. Students, teachers, parents, and everyone can find solutions to their math problems instantly. ASVAB Arithmetic and Mathematics Tips | Military.com
No need to weigh the pros and cons here—all of the resources we recommend are guaranteed to help you get a great SAT Math score! #2: Khan Academy Khan Academy is a nonprofit and partner of the
College Board that offers a free online SAT prep program and practice questions.
Homework help - Get help with homework questions & assignments. Our verified tutors are ready to help you 24/7 on demand! Math Assignment Help | A+ Math solutions ($19,50/page) Our Masters & PhD
Level Math Helpers solve Your Mathematics Problems 24x7 Online: algebra, trigonometry, geometry, calculus, number theory. Secure A+ Grade. Math Problem Solving | Professional Help with a Math
Problem… The most difficult math problems are solved by our expert mathematicians! ParamountEssays.com can help you with any math or statistics related problem.
I need some help with these math problems. Need help with this kind of math problem? – asked Jan 3, 2013 in ALGEBRA 1 by andrew (Scholar).
ASVAB Arithmetic and Mathematics Tips | Military.com ASVAB Arithmetic and Mathematics Tips. ... What are the most important steps in solving a math problem? ... you are presented with word problems,
so you will need to pay more attention to identify ... Need help with a GRE Problem? May 19, 2019 (Cool math problem ... The trick is to minimize the perimeter as much as possible. Need help with a
GRE problem? May 6, 2019 (Set problem and nasty standard deviation problem) - Duration: 10:17. Greg Mat 1,161 views Printable Second-Grade Math Word Problem Worksheets This printable includes eight
math word problems that will seem quite wordy to second-graders but are actually quite simple. The problems on this worksheet include word problems phrased as questions, such as: "On Wednesday you
saw 12 robins on one tree and 7 on another tree.
Here, you always get timely and expert Math problem help in line with superior-quality service. By turning to us for assistance, you receive: Confidentiality. Working with us is a fully secure and
confidential issue. We know what stops many desperate students from turning to us for help with written Math problems. Free Math Problem Solver - Basic mathematics Free math problem solver The free
math problem solver below is a sophisticated tool that will solve any math problems you enter quickly and then show you the answer. I recommend that you use it only to check your own work because
occasionally, it might generate strange results. I need help with math problems - JustAnswer I really need some help with some homework problems. I am not good at all with math and need some help, and
even my book for dummies only helps me out so much...help!! But the problems I have written ou … read more Free Math Help - Lessons, games, homework help, and more Find helpful math lessons, games,
calculators, and more. Get math help in algebra, geometry, trig, calculus, or something else. Plus sports, money, and weather math ...
XLS Solvers
Programs that are represented as optimized XLS IR are converted into circuits based on boolean logic, and so it is also possible to feed those as logical operations to a theorem prover.
We have implemented that conversion with the Z3 theorem prover using its "bit vector" type support. As a result, you can conceptually ask Z3 to prove any predicate that can be expressed as XLS, over
all possible parameter inputs.
See the tools documentation for usage information on related command line tools.
This facility is expected to be useful to augment random testing. While profiling the values in an XLS IR function that is given random stimulus, we may observe bits that result from nodes that
appear to be constant (but are not created via a "literal" or a "concat" of a literal).
Example: Say the value resulting from and.1234 in the graph appears to be constant zero with all the stimulus provided via a fuzzer thus far -- the solver provides a facility whereby we can ask "is
there a counterexample to and.1234 always being zero?" and the solver will either say "no, it is always zero", or it will yield a counterexample, or will not terminate within the allocated deadline.
Assuming we can prove useful properties in a reasonable amount of time, we can use this proof capability to help find interesting example inputs that provide unique stimulus.
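For illustration (this snippet is not from the XLS codebase, and the expression merely stands in for a hypothetical node like and.1234), such a query can be posed directly through Z3's Python bindings:

```python
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)
# Stand-in node: bit 0 of x & (x + 1) is always zero, because adding 1
# always flips the lowest bit.
node = x & (x + 1)

s = Solver()
# Ask: is there a counterexample to bit 0 of `node` always being zero?
s.add((node & 1) == 1)
if s.check() == sat:
    print("counterexample:", s.model()[x])
else:
    print("proved: bit 0 of the node is always zero")
```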
The full input space for a 32-bit adder is a whopping 64 bits - far more than is possible to exhaustively test for correctness. Proving correctness via Z3, however, is relatively straightforward: at
a high level, one simply compares the output from the DSLX (translated into Z3) to the same operation performed solely in Z3.
In detail, the steps are:
1. Translate the DSLX implementation into Z3 via Z3Translator::CreateAndTranslate().
2. Create a Z3 implementation of the same addition. This is nearly trivial, as Z3 helpfully has built-in support for floating-point values and theories.
3. Take the result nodes from each "branch" above and create a new node subtracting the two. This is the absolute error. Note: Usually, one is interested in relative error when working with FP
values, but here, our target is absolute equivalence, so absolute error suffices (and is simpler).
4. Create a Z3 node comparing that error to the maximum bound (here 0.0f).
5. Feed that error node into a Z3 solver, asking it to prove that the error could be greater than that bound.
If the solver cannot satisfy that criterion, then that means the error is never greater than that bound, i.e., that the implementations are equivalent (with our 0.0f bound).
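The following z3py sketch mirrors steps 2-5; it is illustrative only (the "implementation" expression is a placeholder for what step 1's translation would produce, and NaN corner cases are glossed over):

```python
from z3 import FP, FPVal, Float32, RNE, Solver, fpAbs, fpAdd, fpGT, fpSub, unsat

x = FP("x", Float32())
y = FP("y", Float32())

# Placeholder for the Z3 expression produced from the DSLX implementation.
implementation = fpAdd(RNE(), x, y)
# Z3's built-in floating-point addition, used as the reference.
reference = fpAdd(RNE(), x, y)

# Absolute error between the two branches, compared against a 0.0f bound.
error = fpAbs(fpSub(RNE(), implementation, reference))
s = Solver()
s.add(fpGT(error, FPVal(0.0, Float32())))

if s.check() == unsat:
    print("equivalent: the error never exceeds the bound")
else:
    print("counterexample:", s.model())
```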
IR Transform validity
It's usually not possible (or is merely extremely difficult) to write tests to prove that an optimization/transform is safe across all input IR. By comparing the optimized vs. unoptimized IR in a
similar manner as the correctness proof above, we can symbolically prove safety.
The only difference between this and the correctness proof is that both the optimized and unoptimized IR need to be fed into the same Z3Translator (the second via Z3Translator::AddFunction()) and the
result nodes each are used in the error comparison.
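As a sketch of the idea (a made-up strength-reduction rewrite, not an actual XLS pass), proving such a transform safe for every input takes only a few lines of Z3's Python bindings:

```python
from z3 import BitVec, Solver, unsat

x = BitVec("x", 32)
unoptimized = x * 8   # node before the transform
optimized = x << 3    # node after the transform

s = Solver()
s.add(unoptimized != optimized)  # search for any disagreeing input
if s.check() == unsat:
    print("transform is safe for all 2**32 inputs")
else:
    print("unsafe, counterexample:", s.model()[x])
```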
IR to netlist Logical Equivalence Checking (LEC)
After a user design has been lowered to IR, it is optimized (see the previous section), then Verilog is generated for that optimized IR. That Verilog is then compiled by an external tool, which, if
successful, will output a "netlist" - a set of standard cells (think AND, OR, NOT, flops, etc.) and wires connecting them that realizes the design.
Between the IR level and that netlist, many, many transformations are applied to the design. Before processing the netlist further - and certainly before sending the final design to fabrication -
it's a very good idea to ensure that the netlist describes the correct logic!
Demonstrating initial design correctness is up to the user, via unit tests or integration tests at the DSLX level. At all stages below that, though, ensuring logical equivalence between forms is XLS'
responsibility. To prove equivalence between the IR and netlist forms of a design, XLS uses formal verification via solvers - currently only Z3, above.
Performing IR-to-netlist LEC is very similar to the checking above - the source IR is one half of the comparison. Here, the second half is the netlist translated into IR, which only requires a small
amount of extra work. Consider the snippet below:
```
FOO p1_and_1 ( .A(p0_i0), .B(p0_i1), .Z(p1_and_1_comb) );
BAR p1_and_2 ( .A(p0_i2), .B(p0_i3), .Z(p1_and_2_comb) );
```
These lines describe, in order:
• One cell, called FOO, that takes two inputs, .A and .B, provided by the wires p0_i0 and p0_i1, respectively, and one output, .Z, which will be assigned to the wire p1_and_1_comb.
• One cell, called BAR, that takes two inputs, .A and .B, provided by the wires p0_i2 and p0_i3, respectively, and one output, .Z, which will be assigned to the wire p1_and_2_comb.
Note that the values computed by the cells weren't mentioned - that's because FOO and BAR are defined in the "cell library", the list of standard cells used to generate the netlist. Thus, to be able
to model these gates in a solver, we need to take that cell library as input to the LEC tool. The netlist describes how cells are laid out, and the cell library indicates what cells actually do. With
both of these in hand, preparing the netlist half of a LEC is a [relatively] straightforward matter of parsing a netlist and cell library and converting those together into a description of logic.
See z3_netlist_translator.cc for full details.
Current Limitations
Under the hood, Z3 (and many other tools in this space) is an SMT solver. At a high level, think of an SMT solver as a SAT solver that has special handling for certain classes of data (bit vectors,
floating-point numbers). Many sufficiently complicated problems will reduce to raw SAT solving (especially those involving netlists, which have to implement complex logic at the gate level. Consider
what that means for a multiply, for example!). Since SAT scales exponentially with the size of its inputs, execution time can quickly grow past a point of utility for complex operations, notably
multiplication. Fortunately, for most designs (without such complex ops), proving equivalence of a single pipeline stage can complete in a small amount of time (O(minutes)).
Predicate coverage
Hypothetically, any XLS function that computes a predicate (bool) can be fed to Z3 for satisfiability testing. Currently a more limited set of predicates are exposed that can be easily expressed on
the command line; however, it should be possible to provide:
• an XLS IR file
• a set of nodes in the entry function
• a DSLX function that computes a predicate on those nodes
Which would allow the user to compute arbitrary properties of nodes in the function with the concise DSL syntax.
Z3 doesn't intrinsically have support for subroutines, or as they're called in Z3, "macros", instead requiring that all function calls be inlined.
There is an extension that adds support for recursive function decls and defs, but in our experience, it doesn't behave the way we'd expect.
Consider the following example:
```
package p

fn mapper(value: bits[32]) -> bits[32] {
  ret value
}

fn main() -> bits[1] {
  literal_0: bits[32] = literal(value=0)
  literal_1: bits[1] = literal(value=1)
  elem_0: bits[32] = invoke(literal_0, to_apply=mapper)
  eq_12: bits[1] = eq(literal_0, elem_0)
  ret and_13: bits[1] = and(eq_12, literal_1)
}
```
Here, it's trivial for a human reader to see that the results are the same; the output should be equal to 1. Z3, however, reports that this is not necessarily the case, suggesting that literal_0 and
elem_0 would not be equal in the case where the input to mapper was 1...which is clearly never the case here.
To address this, we require that all subroutines (including those used in maps and counted fors) be inlined before consumption by Z3.
7,236 hectometers per square second to decimeters per square second
7,236 Hectometers per square second = 7,236,000 Decimeters per square second
This conversion of 7,236 hectometers per square second to decimeters per square second has been calculated by multiplying 7,236 hectometers per square second by 1,000 and the result is 7,236,000
decimeters per square second.
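The arithmetic as a tiny Python sketch (not part of the converter page):

```python
def hm_s2_to_dm_s2(value):
    # 1 hm = 100 m = 1,000 dm, so the same factor applies per square second.
    return value * 1_000

print(hm_s2_to_dm_s2(7_236))  # 7236000
```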
Efficient pseudorandom generators from exponentially hard one-way functions
In their seminal paper [HILL99], Håstad, Impagliazzo, Levin and Luby show that a pseudorandom generator can be constructed from any one-way function. This plausibility result is one of the most
fundamental theorems in cryptography and helps shape our understanding of hardness and randomness in the field. Unfortunately, the reduction of [HILL99] is not nearly as efficient nor as security
preserving as one may desire. The main reason for the security deterioration is the blowup to the size of the input. In particular, given one-way functions on n bits one obtains by [HILL99]
pseudorandom generators with seed length O(n^8). Alternative constructions that are far more efficient exist when assuming the one-way function is of a certain restricted structure (e.g. a
permutation or a regular function). Recently, Holenstein [Hol06] addressed a different type of restriction. It is demonstrated in [Hol06] that the blowup in the construction may be reduced when
considering one-way functions that have exponential hardness. This result generalizes the original construction of [HILL99] and obtains a generator from any exponentially hard one-way function with a
blowup of O(n^5), and even O(n^4 log^2 n) if the security of the resulting pseudorandom generator is allowed to have weaker (yet super-polynomial) security. In this work we show a construction of a
pseudorandom generator from any exponentially hard one-way function with a blowup of only O(n^2) and respectively, only O(nlog^2 n) if the security of the resulting pseudorandom generator is allowed
to have only super-polynomial security. Our technique does not take the path of the original [HILL99] methodology, but rather follows by using the tools recently presented in [HHR05] (for the setting
of regular one-way functions) and further developing them.
Original language: English
Title of host publication: Automata, Languages and Programming - 33rd International Colloquium, ICALP 2006, Proceedings
Publisher: Springer Verlag
Pages: 228-239
Number of pages: 12
ISBN (Print): 3540359079, 9783540359074
State: Published - 2006
Externally published: Yes
Event: 33rd International Colloquium on Automata, Languages and Programming, ICALP 2006 - Venice, Italy
Duration: 10 Jul 2006 → 14 Jul 2006

Publication series
Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4052 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference: 33rd International Colloquium on Automata, Languages and Programming, ICALP 2006
Country/Territory: Italy
City: Venice
Period: 10/07/06 → 14/07/06
Asymptote calculator
Bing users came to this page yesterday by typing in these algebra terms:
• decimal jeopardy
• algebra 2 answers
• coordinate plane & trig function plane
• graphing equations with exponents
• basic square roots work sheets
• free t-83 calculator online user
• digits is divisible by 3 represent numbers divisible by 3+java
• expression factoring calculator
• passport math textbook practice worksheet
• pre-algebra textbook,7th grade,prentice
• square root functions and radical equations calculator
• maths worksheets "permutations and combinations"
• TI-84 statistics worksheet
• Free Download Guess Papers for 9th class
• sample Accounting aptitude questions and answers
• GED printables
• permutation combination "middle school"
• go hrw chemistry tests
• physics high school work sheets
• clep grade
• math subtractionfree + addition easy fun pintable papers
• holt algebra with trigonometry chapter 13 quiz
• free maths work sheet and answers high school
• simplify algebraic expressions generator
• algebra perimeter of shapes worksheet
• boolean logic reducer
• how to solve nonlinear ordinary differential equations
• Ks3 sats analysis spreadsheet science 2004
• ti-89 quadratic formula
• radical converter
• Balancing Equations Online
• solve long division ti83
• lesson plans for conic sections
• when would i use standard form for quadratic equations
• 9th grade logarithms
• kumon sheets
• how to solve for x calcualtor
• math problems sqare feet
• math formulaes
• entering the quadratic formula into the "Texas Instrument TI-86\
• high school algebra ellipses solutions
• algebra problems images
• algebra equations for 8 grade
• glencoe algebra 2 answers keys
• math cheats for review graphing linear equations
• proportions + worksheets
• division worksheets for 3rd graders
• free aptitute questions
• free calculator algebra
• "solving the homogeneous neumann problem"
• rational equations worksheet
• mod calculator math online
• math websites for 5th grade n fractions
• pre math test enter question get answers
• positive and negative integers worksheet
• elimination equation solver
• solving fractional equations using calculator
• free elementary algebra
• rules for adding negatives
• answers to mcdougell littel math books?
• 10th grade algebra math worksheets
• decimals to fractions conversion
• Division radical equations
• maths ks3 free
• simplify the rational expression calculator
• solving fraction exponents
• using matrix to solve simultaneous equations in matlab
• cost accounting for dummies
• teach cube root activity
• prentice hall algebra 1 answer keys
• California 8th grade worksheets with answers
• square root of 4000 simplified
• kids algebra solving
• algebra equation balancer
• free worksheets improper fractions to mixed numbers
• enrichment master 9-1 common accounting terms glencoe
• online factorising
• pre-algebra + simplifying
• hardest math problem solving
• factoring algebraic
• using square root functions in real life
• aptitude questions downloads
• solving equations multiple test
• hardest maths formula
• HOLT PRE-algebra worksheets
• factoring polynomials solver
• algebra for dummies online free
• quadratic equation free for TI 84 plus
• subtracting polynomials multiple choice worksheet
• free online logarithmic calculator
• find lowest common denominator in java
• free lcm calculator
• difference between volume of prism and cylinder
• unit 8- worksheet 4 adding 11
• online slope graphing calculator
• TI-83 plus log to base 2
• " a transition to advanced mathematic "
• balancing equations algebra worksheets
• ti-84 calculator program number guessing game
• how to solve algebra fraction operations
• algebra simplify java
• factoring casio calculator
• taylor series for ti84
• simultaneous equations practice problems
• interactive algebra TAKS practice
• What are some examples from real life in which you might use polynomial division?
• free algebra 1 solvers
• Online Fraction Calculator
• algebra 2 integration applications connections solution
• 6th grade combing like terms
• the worlds hardest sheet of homework
• solving integrals by substitution using maple
• solving partial fraction decomposition with ti-83
• probility worksheets + sixth grade
• printouts for 6th grade cat 6 tests released questions
• free math poems
• solve linear systems of 1st order Diff eqn
• ks2 maths work sheet
• what programs help you cheat on ti-84
• area model and polynomial worksheets
• visual basic + calculator code
• 9th grade math poems
• linear algebra books
• 3rd grade math sheets AND geometry
• answer rational expressions
• writing balancing equations calculator
• math made easy for a child on determinants and matrices
• how to solve a nonhomogeneous hyperbolic equation
• algebra- exact values for square roots
• math help simplifying variables
• mcgraw hill enrichment sheets
• trivia-kids
• hyperbola graphing calculator
• partial sums primary worksheets
• decimal equations
• online 6th grade math sat practice test
• matlab code homogeneous linear equations
• solving trigonometric equations
• Solving linear systems by comparison worksheet
• algebra word problem worksheets for grade 7
• matrix reducer worksheet
• math course 3 mcdougal littell answers key download free
• easy way to learn algebra
• algebra aptitude video
• blank coordinate plane
• solving second order differential equations in matlab
• converting measurement worksheets free printable
• circle practice problems conics
• adding and subtracting radicals calculator
• monomial solver
• solve simultaneous equations
• division worksheets for second grade beginners
• percentage type aptitudes with answers
• creative ways to teach high school maths
• free algebra solver
• algebra and ks3 and worksheets
• graph equalities worksheet
• Radicand calculator
• basic calculus
• Glencoe Mathematics North Carolina addition algebra 1 teacher edition ebook
• integer activity worksheet
• top rated school tutor software
• poetry on algebra 2
• "graphing an ellipse"
• Mcdougal Littell Algebra 2 Workbook Answers
• "partial differential equations" "worked examples" online lesson
• ninth grade algebra 1 textbook
• free printable mixed integer worksheets
• free printable ordered pairs and graphing fun
• Graph paper
• free Algebra 2 answers
• applications of rational expressions
• "download free accounting books"
• symbolic method how to
• free online pre-algebra games
• simplify radicals worksheet
• integers worksheets
• find vertex calculator
• solving systems of linear equations power point
• fraction calculator
• online function solver
• how to factor third order polynomials
• "free high school math worksheets"
• pre algebra worksheets
• how to solve quadratic equation on ti85
• algebra for idiots
• gmat pattern ppt
• slope fields worksheet
• free algebra radical converter
• expression divide calculator
• heath algebra 2 an integrated approach answers
• simplify radicals and absolute value
• Holt Algebra 1 worksheets
• APTITUDE QUESTIONS WITH SOLVED ANSWERS?
• Merrill Algebra Two with Trigonometry
• Graphical solution of one non-linear equation in explicit form in MAtlab
• MULTIPLICATION dIVISION of RATIONAL
• java script+cube root+calculator
• cordic basics
• math-slope
• IX maths sample questions
• fifth grade math word problem explanation
• pre-algebra online textbook,7th grade,prentice
• two step algebra equation worksheets for 5th grade
• Maths help with ratio online
• mcdougal lesson study sheet
• solve my equation
• step by step equation calculator free
• solution of exercise functional analysis rudin
• west texas math tutoring elementary algebra point slope
• sample 4th grade entrance test
• algebra 2 e-book instructor's solution probalities
• factorising online
• find T&I rom codes
• 7th grade e-books to print on-line
• Holt Algebra 1 answers
• permutations and combinations worksheets middle school
• solve algebra problems
• usage of the cumulative distribution function
• a quadratic factoring calculator
• implicit differentiation calculator
• multiply fractions ti-83
• equations with restrictions
• florida prentice hall algebra 1 answer key
• ti-83 polynom roots
• algebra factoring review game
• powerpoint presentations for introducing algebraic equations to the class
• GED MATH FREE PRINTABLE WORKSHEETS
• basic math trivia
• factoring quadratics calculator
• Chapter 30 Prentice Hall Biology worksheet answers
• algebra square roots calculators
• slope formula worksheet
• factoring equation solver
• combinations and probability worksheets, 3rd grade
• prentice hall algebra workbook page 36
• college+powerpoints
• nonlinear algebraic equation in maple
• relatively prime RSA d tool online
• Bitesize Modular maths-Factorise
• Using Formula Sheet Activity
• glencoe algebra 2 workbook answer key
• absolute value restrictions
• 10th grade math printouts
• solving symbolic simultaneous equations with more than one solution
• math sequence solver
• Solving Simultaneous Equations Using Excel
• 5th grade exponent worksheets
• math equation simplifier
• algebra 1 saxon answers
• slope of quadratic equation
• Algebra 1
• Free Exponents worksheets
• first order linear system ti-89
• how to exit the inner for loop in a nested for loop in java
• take the square root out of the denominator
• laplace transforms solver ti89
• Square root simplifier
• printable slope and intercept worksheets
• antiderivative automatic solver
• alg 2 vertex formula
• formula on how to do percents with kids
• powerpoints on fractions and mixed numbers on the number line
• Permutations for sixth grade students
• plato cheat
• Aptitude test download
• math games parabola for 8th grade
• Solving Rational Expressions calculator
• using calculator to graph parabola
• division of monomials
• "triangles worksheets"
• scientific calculator (radicals)
• free math websites for 5th grade questions
• signed numbers worksheet
• mastering physics solutions from indiana .edu
• second grade worksheet math calculator
• free algebra worksheets polynomial long division
• hyperbolas for dummies
• pre algebra equations
• how to linear equation using ti 83 plus
• seventh grade pre algebra graphs worksheets
• greatest common factor of 3000
• EXPONENT FOR 5TH GRADE EXAMPLE TO PRINT
• algebra math work sheets with radical expressions
• Convert Mixed Fractions to a percent
• math add subtract multiply divide fractions worksheet
• subtracting integers, worksheets
• algebra
• 9 grade math mixture help problem solving
• the difference between linear functions and parabolas
• free printable mutiplication practice fun for 5th graders
• lessons on solving problems with quadratic equations
• perpendicular worksheets for second grade
• Online Free Beginning Algebra
• algebra, 3rd grade
• physics equation formula calculator
• conceptual physics hewitt solutions guide pdf
• ti-89 base 10 base e
• change in f(x) on ti-89
• BOOKS 8TH STANDARD MATHS TEXT BOOK
• holt math test B answers pre-algebra
• free college math word problem printout
• properties of exponents printable worksheet
• answers for Texas Algebra 2
• prentice hall algebra 1 practice workbook
• free printable math sheets on square roots
• math foil machine method calculator
• evaluation and simplification of an algebraic expression
• answers to glencoe math workbook
• "division math sheets"
• perimeter worksheets for grade six
• printouts, math practice, FOIL
• multiplying rational expressions calculator
• holt algebra 1 workbook answers
• factoring worksheets all methods
• printable free maths ks3 past sats
• homogenous first order differentials
• algebra for grade 7 free worksheets
• past paper maths answers y 9
• math activities & radical expressions
• free trigonometry workbooks download
• math ks2
• algebra 2 book answers
• Free Algebrator Download
• easy way to subtract integers
• simplifying square roots worksheet
• 8th grade math trivia
• writing rules for pre algebra
• hbj algebra workbooks
• quadratic equation completing the square
• TI-83 plus calc tutorial
• quadratic word problems
• linear equations worksheets graphs
• simultaneous quadratic
• trigonometric chart
• helpful hints when simplifying fractions
• algebra substitution games
• initial value problems calculator
• mathematics activities for 6th graders for free
• print algebra math test
• 8th grade practice with slopes
• ks3 simultaneous equations help
• simplifying expressions such as square roots
• free math word problems worksheets for probability
• 2nd order PDE nonhomogeneous solution
• best algebra software for mac
• solving polynomials tutorial
• Aptitude solved papers + MeritTrac
• second order differential equations to multiple first order differential equations
• ged reading comprehension test printouts
• the hardest maths ks2
• multiplying dividing rational expressions calculator
• lattice method worksheet 3rd grade
• Mary P. Dolciani
• corrections to maths 6-8 2004 sats paper
• worksheets: decimal operation review
• solving second order linear systems
• fractional absolute value inequalities
• rotation worksheet ks2
• algebra 2 formula to graph a parabola
• glencoe geometry book answers
• Systems of Equations - Solve by addition or subtraction
• basics of calculus
• Particular solution for nonhomogeneous second order linear
• free college algebra equation software
• hardest math problems
• simplifying radical equations
• algebra 1 worksheets for 9th graders
• trigonometric identities solver
• algebraic reasoning worksheets
• dividing Rational Expressions calculator
• java code for sum integer
• rational expressions help solving calculator
• aptitude test downloads
• fraction root
• equations and rational expressions solver
• graphing calculators that are able to graph parabola
• Elementary Probability worksheet generator
• Algebra equations and graphs solver
• 5th grade eog worksheets mass
• Math Combinations And Permutations
• algebra 2 online calculator
• free printable what's the difference worksheets for 8 year olds
• square root variable solver
• excel cube root function
• simplify radical expression multiply rationalize denominator
• primary algebra math printable worksheets
• how to convert polar equations ti-89
• graphing algebra excel
• adding and subtracting equations worksheet
• how to find out the square root
• free pizzazz math
• Practice test Condensing And Expanding Logarithms
• Mcgraw hill workbook page 113 (math)+5th grade
• how to put a quadratic formula into ti-84 plus
• free accounting worksheets for high school math
• www.fractions.com
• scales for math practice problems
• Formula For Square Root
• adding subtract radical expressions
• worksheets on algebra 2 solving radicals and inequalities
• saxon college algebra help
• 3rd grade algebra
• radical expressions and equations worksheet
• algebra answer key
• Addmaths+edition+powerpoint chapter's+Ross
• excel differential equation solver
• decimal as a fraction and simplify
• pre-algebra pages evens mcdougal littell
• math worksheets fourth grade parentheses
• Math formula chart 7th
• parabola formula
• algebra common factoring worksheet
• mathematics percentages
• holt algebra
• simplify radical expressions free worksheets
• 4th grade fractions
• olevel past papers cambridge university
• solving multivariable nonlinear equations matlab
• 9th grade eog sample test
• fraction calculator
• simplify the expression square root calculator
• trigonometry cheat sheet
• "grade eight math" +exercises
• Write decimal in simplest form
• help solving pre algebra problems 7th grade help
• holt algebra one
• quadratic equations, completing the square, free worksheets
• free math printables algebra
• aptitude question with answer
• algebra calculator ks3
• finding slope in math worksheets
• algebra calculator
• algebra for fifth graders
• how to solve system of equation on a ti83
• introduction to probability models "Ross" "solution chapter 4"
• fractions decimal percentage, 5th grade, worksheets
• tripod Ti-83 plus calculator download
• simplifying algebraic equations
• lattice work sheets
• elementary 'probability worksheet' graph
• online variables calculator
• test differential equation is homogeneous zero
• how to use scale in math powerpoint
• fourth order algebraic equation
• easy tricks to learn trigonometric ratios formulaes
• reduce radical calculator
• grade 9 math-adding and subtracting negative fractions
• simplifying radical expressions calculator
• how to enter chemistry equations on excel
• freeware algebra software
• two step Inequalities worksheet
• solve 2nd order differential using matlab
• Easy Ways How To Solve Permutations
• lcm calculator online
• triangular numbers powerpoints
• variables in the exponent
• piggy bank algebra series and sequence word problem
• simple algebra worksheets
• download Quadratic equation for TI-84
• circle equation 4th order
• ti89 solver
• free multiplying dividing integers order of operations worksheets
• Addition and subtraction of Algebraic expressions
• free word problem solver
• Solution finder for College Algebra
• how to write 8% as a decimal
• graphing linear equation.ppt.
• multiplication properties of exponents
• Algebra and Trigonometry Mcdougal Littell answers
• find slope from equation ti-84
• solving nonlinear equations using a ti 83
• free online balancing chemical equations solver
• free algebra solutions
• Mathematics solution for grade nine
• step by step solution for algebra problems free
• glencoe pre algebra questions pythagorean
• boolean simplification calculator
• multi-step conversions free worksheet
• C formula for adding fractions
• two step equation worksheet
• solving for a variable quadratic
• Foundations for Algebra: Year 1 volume 2
• log base 2 calculator
• simultaneous equations multiple choice questions
• finding a slope on a graphing calculator
• prentice hall algebra 2 with trigonometry answers
• hard math problem
• TI-83 Plus Graphing Calculator Domain and Range
• factorising quadratics calc
• simultaneous equations with 4 unknowns
• introduction to algebra worksheets
• free probability worksheets activities
• standard form calculator
• rules for multiplying squares
• online differential equation calculator
• how to do data analysis YEAR EIGHT MATHS
• adding subtracting multiplying dividing decimals worksheet
• math ratio games for year 7
• maths for dummies
• solving inequations worksheet
• graphs, functions and systems of equations
• +Empirical Rule calculator
• linear equations with fractions
• solving basic mixed numbers
• fraction calculator for 6th grade
• mcdougal littell math answers
• free beginner algebra sheets
• grade 6 math combinations
• mathematical problem solving
• sample math problems for high schoolers with solution
• dividing fractions lessons 5th grade
• Two step pages and inequalities
• mcdougal littell algebra 2 even answers
• ti 83 84 display irrational roots
• online fraction calculator
• free quadratic equation solver
• ordering fractions from least to greatest
• multiply binomial calculator
• grade 11 math problems
• printable 11+ practice exam papers
• algebra 1 problem converter
• completing the square cube
• free online use of TI-83/84 Plus calculator
• glencoe pre algebra workbook
• fractional exponents quiz
• math investigatory project
• ti 89 software fractions
• HRW glencoe algebra 1 course 2 ohio edition
• "Algebra tiles worksheet"
• database algebraic expression
• SOLVING EQUATIONS FOR KIDS
• solving a formula for another variable
• changing decimals to fractions worksheets
• word problem on multiplying fraction
• algebra 2 lesson plan powerpoint adding fractions
• formula for fractions
• finding the nth term with a decreasing value
• factors worksheeets for 4th grader
• online calculator: simplification of radicals
• Square difference
• fraction simplest form calculator
• 6th grade pythagorean theorem practice problems generator
• slope intercept formula
• free e book on costing
• mixed number to decimals
• free math adding and subtracting negative numbers worksheet
• abstract algebra software free
• Level E unit 7 worksheet answers
• maths solver online
• algebra with pizzazz topic 6-b
• simplifying logarithmic equation
• "square root maths worksheets"
• clep algebra test
• Free printable geometry nets
• order of operations worksheets integers
• Orleans-Hanna sample
• worlds hardest multiplication problems
• trigonometric equations with coefficients
• factoring a polynomial as a cubed binomial
• free aptitude book
• college math software
• substitution method calc
• math square root fraction
• why does converting from a mixed number to a fraction work?
• 4th grade math +finding work triangle perimeters
• ti 84 entering complex matrix
• algebra 2 chapter 7 test answers mcdougal littell
• glencoe algebra 1 chapter 5 teacher edition
• Coordinate plane worksheets
• online integration by parts calculator steps
• easy aptitude questions with answers in pdf files
• answers for Prentice hall advanced algebra second edition
• free slope intercept form worksheets
• addition and subtraction trig calculator
• tips to passing world history
• Algebra+2+help+solving+my+problems+free
• calculate sum of many numbers using java
• quadratic patterns of change equation
• OHIO IT-3 ADDING WORKSHEETS
• solving linear equations matlab
• how do we turn word problems into algebraic expressions
• real life uses for Quadratic equations
• glencoe algebra 1 workbook
• Combination Math
• using quadratic equations in real life
• example of math poem
• fractional quadratic equations
• algebra substitution practice
• factoring binomials calculator
• multiplying and dividing fractions worksheets
• calculate linear feet
• free 3rd grade video help with fractions
• Multiply Fractions rules
• free very easy worksheet on order of operations
• 4th grade agebra online sheets
• how to use nth power function on the TI-84 plus
• printable worksheets functions fifth grade
• system of equation hard questions
• polynomial factor calculator
• rewriting division as multiplication
• Find the equation of a quadratic curve passing through three points calculator
• Prentice Hall Conceptual Physics
• "difference equation" solve symbolic linear
• simplify cubed roots
• how to balance chemical equations easy steps
• calculate index of coincidence java
• adding and subracting fractions printable worksheets
• How do you find a quadratic equation if you are only given the solution
• www.ask me.com past papers and solution LOGARITHMIC AND EXPONENTIAL FUNCTIONS
• algebra order of operations inequalities
• free simplifying radicals solver
• inequality worksheets
• tricks in permutation combination
• Does anyone know the answer to the ALGEBRA WITH PIZZAZZ (Creative Publications) worksheet Pg 89
• ti 84 plus programs
• greatest common factor java code (see the Java sketch just after this list)
• algebra questions gr 9
• 1st grade lesson plans on probability
• ti 83 plus cube roots
• formula convert decimal to fraction
• solving two variable equations calculator
• algebra-II science enrichment OR answers "unit 11"
• graph worksheets 8th grade
• formulae year 7 maths questions
• free least common multiple worksheet
• dummit and foote solutions 4.4 number 3
• factoring multiple variable equations
• Simplified radicals chart
• converting numbers larger than 100 in percentage
• chapter 7 rational expressions
• print out math sheets for a 8 year old
• free trigonometry problem solver
• "math quest" for victoria download
• how to factor cubes on a ti-84 plus
• radical calculater
• steps to graph the slope on a calculator
• pizzazz worksheet
• math with pizzazz
• substitution maths question solver
• yr 11 algebra
• Highest common factor method, maths
• solving logarithms algebraically
• least to greatest fraction worksheets
• download advanced accounting 9e past exams
• ti-89 boolean algebra
• glencoe mcgraw-hill glencoe algebra 1 chapter 5 test,form
• Reciprocal of a Fraction worksheets
• "word math" java program
• application of algebra in many ways
• easiest way to learn pre algebra
• TI 89 Polar
• ti-84 plus foil programs
• balancing chemical equations dividing
• free online lcm fraction calculator
• How to calculate log 2 on TI-83
• how to find the fourth root of a number
• math equation positive and negative add free
• how to rewrite fractions with variables from division to multiplication
• answers for cpm algebra 1
• "step by step" solve matlab
• complex number solver
• cost accounting+ebook
• write a program to find a number that is divisible by 7 using C
• GCF AND LCM free printable math worksheets
• transposing math formulas
• online practice sats paper yr 6
• solutions for linear algebra done right
• fifth grade worksheets about finding information from functional texts
• equations solve for x interactive game
• why should restrictions on the variable in a rational equation be listed before you begin solving the problem
• how do you solve an algebraic fraction
• simplify radical calculator
• graphs of parabola,hyperbola
• logarithmic calculator with square roots
• ti-83 plus formula cheat
• scale factor in math
• boolean algebra simplify calculator
• how do u solve equations with fractions
• numerical solving nonlinear ode maple
• Reducing rational expressions to the lowest terms calculator
• free download sample question and answer in aptitude test
• online calculator with fractions key
• Creative Publications Test of Genius Quiz
• free history sats exams papers
• simplified radicals calculator
• different math symbols
• how to get radicals out of fractions
• online factor program
• convert base fraction
• free online proportion solver
• dividing games
• printable math exponent chart
• matlab 2nd order
• practice erb math grade 8
• free download maths assignment for class 8th
• dividing with cube roots
• linear graphs worksheet
• "values of trigonomic functions" + chart"
• adding, subtracting, multiplying, and dividing fractions worksheet
• 1st grade negative numbers game printable
• mixing seed problem for linear algebra
• dividing monomials solver
• 3/4 as decimal system base 8
• new investigatory project
• math multiples chart
• greatest common factor variables
• answers to chapter 7 test in prentice hall algebra book
• online math calculator square roots
• solving 3 equations at once
• square root with variables calculator
• writing algebraic expressions free worksheets
• sixth grade math poems
• solve polynom tool
• how to cheat on college algebra
• factor by grouping calculator
• where to find keycode holt math
• examples of math poems about algebra
• sample of math trivia with answer
• graphing calculator factoring
• grade 6 solve algebra equation worksheets
• finding slope of logarithm on ti-83
• prealgebra textbook for alabama schools
• free math worksheets volume
• tree diagram math worksheets
• divisor functions simplified
• conceptual physics tenth edition chapter exercises yahoo answers
• ti 84 emulator online
• math term poems
• free algebra problem solver
• free printable math worksheets for 8th graders
• program ti 83 to foil
• graphing linear equations with fractions
• radical exponents quiz
• formula for different denominators
• simplify exponential inequality
• simultaneous equation calculator
• mathematical trivia
• square roots
• grade 11 factoring tips
• calculus tutorial
• how to solve algebra word problems
• maths and english work sheet
• online rational calculator
• graph quadratic ti-89
• graph of absolute value function satisfied by what line test?
• how to do cube root on ti-83 plus
• solve second order differential equation in matlab
• maths test papers ks3 free download
• mcdougal littell inc algebra 2 resource book answer "pdf"
• poems about trigonometry relevance to life
• beginners algebra
• newton raphson method matlab
• free step by step math solver
• cube root factoring
• mixed number to a decimal
• MATHS FREE WORKSHEETS TILING
• learn to factor a polynomial
• printable high school practice math test
• solve algebra
• McDougal littell algebra 1 standardized test practice workbook answers
• differential equation
• how to calculate gcd
• solving third order equations
• were to find 9th grade pre algebra math questions
• algebra with pizzazz answers worksheets
• fluid mechanics solution manual 6th edition
• change to radical form calculator
• online math calculator
• maths scale factors
• square root finder calculator
• key to solutions a first course in abstract algebra
• Order of operations poem
• EXAMPLE OF MATH TRIVIA
• fractional quadratic equation solver download
• Algebra problems and pictures
• solving simultaneous linear congruence
• quadratic equation on ti-84 plus
• Gcse past computer studies Papers and Solutions 2004
• dividing monomials for dummies
• Grade 10 Algebra practice
• quadratic equation java homework
• 8th grade math explanation factoring and applications
• online maths gcse text book free
• formula of sequence solver
• square root a fraction with a variable
• multiplying or adding by one number and getting the same answer
• sideways parabola formulas
• percentage algebraic equations
• simple way to factor 3rd degree polynomials
• what are the application on algebra
• rom image for ti 84 plus silver
• how to factor out cubed polynomials
• ti 84 simplification
• free linear math solvers
• square root expressions
• solving 3 variable linear equation calculator
• adding like factions work sheets
• solving quadratic equation with matrices
• Usable graphing calculator
• modern algebra +solutions to homework
• math4kids.com
• algebra connections textbook
• converting ratios worksheets
• algebra worksheets
• writing fraction roots in exponential form
• quadratic equation converter
• 3rd grade graphing practice sheets
• algebra assessment test for statistics
• finding lowest common denominators
• whole number add subtract radical
• highest common factor worksheet
• balancing chemical equations worksheet simple
• java convert fraction to decimal
• free worksheets on physical science for 6th grade
• ALGEBRA SIMPLIFY SOFTWARE
• slope intercept form of a line worksheet
• holt algebra 2 textbook answers
• free indian cost accounting books
• subtracting by tens worksheet
• cube root on ti-83 plus
• answers to questions in Texas Geometry by Holt
• factoring polynomials machine
• how to find a ordered pair that satisfies 2 equations
• ti-84 least common factor
• glencoe algebra 1 chapter 5
• solving equations in matlab+equations with inequality
• how to factor trinomials with cubes
• algebrator online
• convert base 7 to decimal
• equation factoring calculator
• equation
• application problems using the quadratic equation
• chain rule calculator
• difference quotient calculator
• second order differential equation+ complex roots+ difficult exercise
• find value of fraction equalities worksheet
• java: determine if the number entered by user is prime
• contemporary abstract algebra solutions manual
• real life combining like terms
• Algebra Baldor
• grade 7 adding and subtracting fractions with like denominators
• Glencoe algebra 1 answers
• rotation sum of squared loadings
• geometry book answers mcdougal littell
• excel square root
• ti 84 calculator quadratic program download
• solving a system by graphing, powerpoint
• algebra strategies manipulative
• ALGEBRA RATIOS AND EQUATIONS
• how to simplify powers and square roots
• domain and range of logarithms
• how to solve maths equations for kids
• Ucsmp Functions Statistics and Trigonometry free notes
• math sheets for 6th grade
• how to solve math equations on excel
• algebra ti-84 graph picture equations
• runge kutta second order ode example
• how to enter cube roots on a graphing calculator
• year 11 maths method
• MATH TRIVIA WITH ANSWERS
• how to find a vertex
• college algebra problems
• math projects radical expressions
• free books on accounting
• permutations for dummies
• freshman algebra tutoring
• simplify 5 to the power of 8 cubed
• lowest common multiple of 39 and 17
• how to simplify imaginary number on a casio calculator
• cpm algebra 1 book answers
• calculator radical
• free accounting books
• gcf-greatest common factor
• 3rd grade math rule algebra
• math work sheet on how to do degrees free
• factorising quadratics calculator
• how to do cube roots on TI-83 plus
• formula for slope of best fit
• problem SOLVER SKILLS quiz test
• solving quadratic equation using java script
• prentice hall chemistry worksheets answer key
• Factoring and simplifying
• pre algebra with pizzazz objective 4-answers
• c.g. class 8th model question paper
• simultaneous differential equation solver
• central tendencies interactive activities
• hard quizzes on solving rational expression
• ks3 maths converting fractions
• trigonometry trivia
• Free Online Calculator ti-89
• algebra synthetic division free worksheets
• free trinomial solver
• poems on equations
• dividing fractions 5th grade
• 7th grade math word problems slope y-intercept
• step by step online integral solver
• general solution calculator differential equations
• Algebraic Functions: Adding, multiplying and dividing
• gcse free ratio worksheet
• example problems in parabola with solutions
• equations with rational exponents calculator
• prentice hall mathematics algebra 1 teacher book
• nonlinear equation matlab
• simplify rational expressions solver
• how to do percentages algebra
• graphs simultaneous equations ppt
• algebra test for 7th grade
• my ti-89 is not computing a square root
• least common denominator calculator
• online factoring terms calculator
• calculator for converting decimal to binary
• free algebra 1 answers
• math help writing mixed radicals
• percent proportion answer
• learning algebra/printable
• simplified radical cubed root
• "mathematical statistics" Larsen solutions
• we can only cancel factors, not terms
• how to solve decimal radical expression
• answers to Algebra 1 workbook florida
• solving an algebraic fraction equation with a square root
• trivias about math
• order fractions and decimals worksheet
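Several queries in the list above ask for greatest common factor code in Java (e.g. "greatest common factor java code"). As a minimal, hedged sketch of one standard approach, the Euclidean algorithm; the class and method names (Gcf, gcf) are illustrative assumptions, not taken from any textbook or product named in the list:

    // Minimal sketch: greatest common factor via the Euclidean algorithm.
    // Names (Gcf, gcf) are illustrative, not from any source in the list above.
    public class Gcf {
        // Returns the greatest common factor of two non-negative integers.
        static int gcf(int a, int b) {
            while (b != 0) {
                int r = a % b; // remainder of a divided by b
                a = b;
                b = r;
            }
            return a;
        }

        public static void main(String[] args) {
            System.out.println(gcf(36, 24)); // prints 12
        }
    }

For instance, gcf(36, 24) steps through (36, 24) → (24, 12) → (12, 0) and returns 12.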
Yahoo users found us today by entering these algebra terms:
• substitution+algebra
• online trinomial solver
• simplifying square roots for dummies
• Dummit solutions
• free printable verbal-reasoning worksheets 4 8 yr olds
• vertex form on calculator
• Pre-Algebra With Pizzazz
• solving a quadratic equation with a simple calculator
• visualizing derivative strategies in linear equations
• calculate lowest common multiple
• how to calculate expressions
• printable math homework
• ti calculators for algebra 1 , chemistry, simple graphing
• easy way to find LCD of high numbers
• how to solve non linear diferential equation
• free least common multiple calculator online
• factoring quadratic expressions worksheet free
• Java Programs to find the sum of even numbers
• cubes in algebra
• proportions worksheets
• differential equations second order homogenous
• algebraic models and relationships lesson plan for first grade
• examples of math trivia
• Holt pre algebra answers
• polynomial excel vba
• I need to write a rational expression with multiplication that can be simplified
• simplifying radicals on ti 84
• +physical Examination for a 7th grader
• to the find the cube root in a scientific calculator
• factorise quadratic equations calculator
• rules of adding, multiplying, dividing and subtracting negative numbers
• tutorials cost accounting
• concept of algebra
• Kumon solution book
• algebraic expansion for square root
• simultaneous equation solver 4
• free online exam papers
• rudin chapter 7 solution
• TRIG CONVERSION FACTOR
• order of operations with exponents, worksheet, .doc
• cost accounting books complete material
• how to write a algebra scale factor
• convert from number into degree using ti 89
• COST ACCOUNTING BOOK
• decimal test paper for class 7
• coordinate plane printable activities plus coordinates
• simultaneous differential equations solver
• integer sample test questions
• solve absolute values with square roots
• concept of angle for maths lesson plan for grade 7
• 5th grade algebra
• free calculator to solve mean, mode, and median
• factoring polynomials x cubed
• basic functions formula sheet
• dividing radicals calculator
• free geometry for beginners
• indices radical form
• java program simplifying fractions using recursion
• how do you turn decimals to fractions on a calculator
• how to use a casio calculator
• free step by step algebra 2 solver
• change to vertex form
• solve equation containing two radicals and a constant,college algebra
• difference quotient ti 83
• radicals AND calculator
• absolute value relationships
• third grade worksheets for finding the median
• simplifying expressions worksheet
• changing a mixed number to a decimal
• differential equations graphing the slope
• "quadratic equation graph"
• greens' function first order pde
• least to greatest fractions worksheets
• How to solve systems of equations with a ti-89
• matlab symbolic graphing
• Math factor solver
• merrill algebra one book
• free math help for dummies
• common factor using c programming
• activities involving math and rational expressions
• describe how to use the zero factor property to solve a quadratic equation
• Calculating determinants on a TI-89
• math trivia for elementary only
• hyperbola grapher
• difference of two square
• exercises on permutation for kids
• adding subtracting multiplying dividing rational expressions calculator
• free graphing linear inequalities with two variables worksheets
• cubed quadratic equation converter
• graphing calculator ellipse
• How to graph quadratic equations vertex form
• triangle similarity math for dummies
• download ti 84 software
• california fifth grade test release questions math percent
• importan mathematics sample papers of class XI
• Factoring similar terms exercises
• turn fraction to decimals calculator
• free english online SATS PAPERS KS2
• precalculus how to FOIL with a cubed root
• multivariable algebra
• free homework for age 11 maths
• Maths worksheet for class III & IV
• polynomial java program
• "Advanced Algebra" glencoe book answers
• free pre algebra answers
• important chapters in maths ( grade 10 )
• math tutor software
• how to simplify and rewrite fraction 20/60
• adding integers worksheet
• how do you see the y intercept on ti83 plus
• log base 2 TI-85
• example of math trivia
• solving 2 equatios with 2 variables ti-89
• teaching one step linear equations 5th grade
• common divisor javascript
• algebra help for kids
• downloadable algebra plane graph
• teaching fractions to 9th graders
• Dividing RadiCal expression worksheets
• solving and graphing linear systems review worksheet
• free simple equations worksheet for third grade
• combining like terms using pi
• System of Equations Solver substitution method
• how to solve exponents
• radical problem help
• Mcdougal Littell algebra 2 step-by-step answers
• set theory problem solver
• writing algebraic equations powerpoint
• ti calc find solutions to radicals
• simultaneous inequalities+worksheets
• ways to get answers for algebra substitution
• rectangle trig calculator
• algebra 2 homework cheater graphing
• tell java to find the lowest and highest input integers
• writing algebraic quadratic formulas
• real life formulas
• yr 8 maths games
• TI-84 Plus online
• HARD math equation
• find+book+online+for+read+cost+accounting
• multiplying radicals calculator
• what's my rule third grade
• prentice hall algebra 1 workbook practice 7-6 answers
• 8 in decimal
• online year 2 maths papers
• glencoe mathematics algebra 1 Chapter Resources
• math trivia for kids
• pre algebra printable sign rule worksheets
• trig calculator
• square roots worksheet with variables
• Equation for Finding Greatest Common Divisor
• online KS3 Probability TEST
• math ellipses
• ti-89 quadratic formula
• sat question for math from 7 to 9 algebra worksheet
• pre algebra with pizzazz workbook
• solving addition and subtraction of rational expression
• complete the square + game
• Fraction calculator for Word Problems
• prentice hall mathematics california pre algebra
• cost accounting test bank +pdf chapter 3 Cost-Volume -Profit Analysis
• algebra formula for first year.
• differential equation delta function
• Derivative simplifying Algebraically+fraction
• 3rd grade geometry printables
• free algebra worksheets for class 6
• square and cube maths questions
• how to pass the college algebra clep
• converting exponential functions square root
• difference quotient worksheet
• using proportion to find percent
• fraction lessons fourth grade
• Merrill Algebra 2 with Trigonometry: Applications and Connections online
• hands-on equations
• linear algebra done right solutions
• how to cube root in ti-83
• Math Problem Solver
• "lcd worksheets"
• combustion equation solver
• grade 9 algebra math practise
• balancing chemical equations common denominator
• decimal to radical
• solve by elimination online
• prentice hall inc. problem solving worksheet 1
• identifying scale factor calculator
• sixth grade differential aptitude test
• free math worksheets on adding and subtracting polynomials
• web math- Finding the domain of rational expressions
• ti-83 plus rom download
• calculator to solve quadratic expressions of three degree
• "intermediate 2 maths" revision exercises "indices"
• balancing equations 6th grade
• how do you factor a number with a cube power in the denominator
• aptitude test paper with solved answers
• investigatory project on mathematics
• factoring quadratic equations on a ti-83
• determine domain and range of quadratic equations
• solving quadratic british method
• four equations and four unknowns solution
• how to solve for y with an quadratic equation
• simplify quadratics
• algebraic factorization example cross methods
• Equation of an ellipse
• standard to slope intercept form calculator
• online Solve Polynomial Division problem
• highest common factors of 36 18 and 19 ?
• iowa algebra sample tests
• graphing linear equations worksheet - grade 8
• examples trivia
• free aptitude tutorials online
• free arithmetic refresher printout
• factorisation of quadratic equation
• Algebraic Cubes
• algebra tutorial
• algebraic equation to solve unit of a square cube
• Simplify the following rational expressions as much as possible using only positive exponents in
• fraction worksheets for 4th grade
• finding the value of X simple radical form
• solving simultaneously for 3 variables
• cubed factoring
• TI-89 square
• cube root factoring rule
• converting exponents to fractions
• unit plan exponents and exponential function
• 6th grade math textbook
• TI-83 plus find x and y intercept of parabola
• mcdougal little homework answers
• how to multiply divide add and subtract fractions
• online NY math 8 review of test
• quadratic word problems in real life
• how do you find the slope and y intercept ti83 plus
• free TI-83 calculators online
• polynomial word problems
• maths free homework sheet
• graphing calculator online statistics
• mcdougal-littell ELA TAKS practice
• precalculus a graphing approach 3 edition homework answers
• permutation answers
• algebra time formulas
• LCM calculator with variables
• Algebra software
• y8 key maths chapter 10:1 homework sheet
• download calculator TI-83
• McDougal Littell Algebra 1 Structure and Method powerpoint
• least common denominator tool
• KS3 Algebra Test Practice
• online programs that help with algbre
• strategy language to master add fractions
• Math trivia Example
• learning algebra for free
• online maths simplify
• complex simultaneous equation solver
• free online trinomial calculator
• like terms powerpoint
• teach your self scientific maths
• free percent discount worksheets
• Formula for seventh grade
• writing a quadratic equation with one solution
• free math worksheets on simplifying algebraic equations for 6th grade
• graphing calculator online printable
• Permutations and Combinations activity sheet,PDF
• Hyperbola Graph
• free printable worksheets and functions
• newton raphson complex root calculator
• exponent on ti 83
• download free aptitude test books
• how to type 3rd root
• solution of Exercise "real and complex analysis "by walter rudin free download
• simplify square roots
• class 8 sample papers
• Why do we need to know least common multiples?
• inequalities pizzazz
• TI-83 download
• COST ACCOUNTING FOR DUMMIES
• ti-84 plus PHYSICS FORMULAS
• why do you need to add zeros to the end of a decimal in order to subtract
• subtracting linear equations
• algebra with pizzazz page 163
• glencoe geometry book worksheet answers
• ks3 online maths tests
• reserved words in least common multiple using java programming
• what is the least common multiple of 12, 16, 96
• logarithm properties worksheets
• subtracting fractions with integers
• free 5th grade practice
• how to solve maple as a product of exact linear functions
• Decimal to Fraction Formula
• is the square root of 3 irrational?
• 5 math trivia
• geometry + basic shapes + 3rd grade level
• Algebra 2 honors online books
• Definition of Quadratic Radical
• greater than or less than fraction calculator
• stretch factor of a quadratic
• quadratic square root
• how to solve mixed fractions
• examples of histogram graphs, pareto graphs
• solving nonhomogeneous linear equations using series
• solve algebra for me
• glencoe tutorial circumference
• first order differential equation step function
• how to solve matrices in a TI 84
• systems differential equations matlab
• geometry conics cheat sheet
• adding and subtracting with negative numbers calculator
• x intercepts of a function from vertex form
• Cubic Root TI calculator program
• substitution method in algebra
• quadratic equation involving complex numbers
• accountancy books pdf
• quadratic and radicals
• online simultaneous word equations for yr 11
• ti-84 plus silver edition+download quadratics formula
• adding polynomials with fractional exponents
• answer to number 51 proof in the mcdougal littell geometry book
• how to find the cube root of a number on a calculator
• ordered triple online calculator
• multiplication of polynomials solvers
• workbook answers for biology the dynamics of life
• graphing rational equations in excel
• radical equations for dummies
• algebra 1 solver
• adding,subtracting,multiplying,dividing binary number
• how to graph the square root of 9-x squared
• free worksheets/linear equations with two variables
• science 11th model exam questions paper for maths
• quadratic substitution calculator
• Algebra 2 factoring
• accountancy poems(worksheet in accounting)
• free math solvers
• free algebra calculator
• graphing second order equations in two variables
• mcdougal littell algebra quiz 4
• expressions without exponents fraction integer
• math percentage work sheet
• how to install the quadratic formula to my TI-84 Plus?
• factor problems online
• equations of parabolas hyperbolas ellipses
• greatest common factors chart
• how to take the cubed root of something with a ti 83 plus
• maple nonlinear system
• florida prentice hall algebra 1 math book
• Dividing Radicals Fractional Exponents
• online casio calculator for use
• factorise algebra
• 6th grade math scale
• Find the mean, median, mode, and range of a data set on TI85
• parabola calculator
• compound inequalities worksheet answers
• percentage formulas
• 1st grade math homework
• ti86 calculator factor
• algebra work problems
• glencoe accounting workbook answers
• Sat I Algebra factoring free worksheets
• best way to teach solving equations
• convert mixed number to a decimal
• adding subtracting radicals calculator
• review problems for binomials expansions in algebra II
• "maths games"+"polynomials"
• calculator that can factor
• math trivia with proofs
• algebra cheats
• free online algebra calculator
• mcgraw hill vertices and ordered pairs
• subtraction of algebraic expressions
• "Statistical Reasoning for Everyday life" +answers quiz
• sum of integers java loop
• answers to sixth grade math sheets ratios and proportions
• order and compare percents, decimals, and fractions from least to greatest
• free 3rd grade algebra lessons
• free online math for dummies
• algebra software tutor
• solve systems of nonlinear equations matlab numerical
• order of operations worksheets free math 5th grade
• activities cube root
• first order differential equations
• rearranging formulas worksheets
• dividing integers with variables
• factors through 24 worksheet
• glencoe algebra 2 answer book
• addition and subtraction of equations with decimals and fractions worksheets
• how to solve laplace transforms with TI-89
• free algebra word problem solver
• how to solve multiple equations with unknown variables in ti-89
• system of equations substitution calculator
• download accounting books
• finding sum number java code
• coordinate plane with functions 7th grade homework
• algebrator download
• simplifying fractions with indices
• Solving problems for free instantly
• hyperbola graphing calculator
• solving indefinite integrals by substitution
• math expressions fifth grade
• simplifying radical expressions with variables
• ti 84 download factoring
• solve equations and graph solver
• online rational function graphing calculator
• FREE KS2 EXAM PAPER YEAR 6
• factoring and foiling math for 8th grade
• download workbook on physics
• online balancing equations solver
• adding positive and negative numbers + interactive
• find the square root in an excel file equation
• solving multivariable equations using ti89
• numbers with variable exponents
• simultaneous quadratic linear equation calculator
• mathcad free
• 7TH GRADE MATH SCALE FACTORS
• what is cubed root as fraction
• Mcdougal Littell Geometry free answers
• standard form in 4th grade
• quadratic square root property activities
• hardest physics question
• CONVERSION CHART FOR A 5TH GRADER
• fractions cubed
• two dimension diagram for solving equations
• free worksheets on probability
• converting decimals to mixed numbers
• cheats for the 6th grade iowa test
• formula ratio
• free calculator inequality and graph
• Statistics I practises Exam
• free questions math for grade 1 online
• algebraic expressions word problem worksheets
• answers for mcdougal littell geometry book
• ti 83 plotting points finding slope
• how to solve decimals to fractions
• free math problem solver calculator
• 4th order equation calculator
• Babylonian Method TI-89
• rational expression calculator
• solve linear functions maple
• use ode45 solve second order 2nd matlab
• cubed polynomial
• lowest common denominator variables
• 8th grade algebra worksheets
• Practice Workbook McDougal Littell Algebra 2 answers
• prentice hall classics forster teachers algebra
• glencoe math definitions
• online scientific calculator cubed root
• compound inequalities solver
• teaching like terms
• solve my math equation free
• diff eq calculator
• multiplying positive radicals worksheets
• find root exponent any number calculator
• download solved exercises physics 9th class
• factoring cubed polynomials
• math worksheet on probability for third grade
• types of graphs parabolic, root, hyperbola, quadratic
• finding the value of r with radicals
• factoring using TI 83 calculator
• line and bar graphs pre algebra
• KS3 Mathematics E: Level 8 Trigononmetry
• absolute equation solver
• examples of math trivia questions with answers
• free multiplying dividing integers worksheets
• radical expressions and functions
• rewrite second order differential equations as first order
• ti 89 3 unknowns fractions
• strategies to graph linear equations for idiots
• calculator simplify cube root
• ti 84 emulator free
• 6th Grade Math Practice
• percentage equation answers
• how do you add radicals and whole numbers
• get fraction ti89
• balancing chemical equations basic rules
• Free Balancing Chemical Equations
• convert octal fraction to decimal calculator
• java code to multiply polynomials (see the Java sketch at the end of this list)
• simplifying radicals calculator
• c aptitude questions
• gr. 9 trig math problems
• methods to solve high degree polynomials
• convert decimal to square root calculator
• how to calculate r on your graphing calculator
• free practice printable exam papers levels 5-7 online
• inequalities quiz 9th
• pre algebra with pizzazz answers
• symbolic method
• McDougal Littell Integrated+Mathematics+2+Enrichment+Activity+11
• simplifying square roots
• using matrix solution to solve quadratic equation from three points
• Algebra Problem Solvers for Free
• simplify factoring
• simplify radical expression calculator
• Polynomials chapter test, algebra 1
• math algebra "work" problems
• algebra absolute value calculator
• teach me algebra and equations
• 1st grade free homework sheets
• ti 89 dec to frac
• least common denominator of polynomials
• how to solve a cubed polynomial
• free Canadian Grade 8 math exams
• accounting programs for ti-83 plus
• number before square root
• calculate solving by elimination
• greatest common factors
• solve by factoring calculator
• 3 unknowns
• Expression Simplifying and Substitution Calculator
• powerpoints linear equations
• college algebra solver
• how to solve equations
• free download accounting books in pdf
• sequence + nth term + rule
• how to find the roots using factoring method?
• display a fraction as a decimal in java
• permutation- 7th grade math definition
• algebra 1 worksheet answers
• math poem about algebra
• 6th grade practice math taks online
• how to use your ti 83 84
• fraction print outs
• using square root method
• Evaluating Expressions worksheet
• equation with casio calculator
• test of genius pre algebra
• First Grade lesson plans hands on activities
• firstinmath cheats
• help online with the book functions, statistics and Trigonometry
• expanding a binomial cubed
• calculators that you can use radicals
• adding and subtracting two numbers in c#
• Reducing Rational expressions to lowest terms calculator
• how to teach equations(algebra) in maths to a sixth grade child?
• convert decimal to whole number
• nonlinear differential equation
• 8th grade math nys practice
• convert improper fraction to decimal
• graphing linear inequalities
• complex factoring
• factoring algebraic equations
• special integral tables radical
• multiplying rational expressions tutor
• irrational expressions operations
• graphing calculator of h(x)=-e^x
• graphing differential equations in matlab
• how to solve simple radicals to the 2nd power
• coin bank equation solver
• how do i use simple radical forms
• fraction method for factoring quadratics
• division worksheet check by multiplying
• solving problems squared to a fraction power
• how to solve a number with multiple exponents
• a free online calculator for nonlinear equations with two inputs
• positive and negative worksheets
• application of algebra
• 5th grade online worksheets
• free ged math worksheets
• What Is the Difference between Evaluation and Simplification of an Expression
• free inequality solver
• nth term ppt
• greatest common factor sheet answers
• factoring a quadratic expression program
• online Simplify Algebra
• Online Instant Trinomial Calculator
• solving algebraic equations with square root property
• TI-83 Plus AND solve system of equations
• ti 83 exponential probability
• cube root of 16
• free advanced algebra solvers
• "linear equations" "age problems" "free worksheet"
• algebra software
• fraction inequalities calculator
• word problems for grade 9 solver
• trigonometric first order differential equations
• softmath.com
• simplifying rational exponents calculator
• online calculator that tell which fraction is larger
• equality of ellipse formula in excel
• multiply polynomials calculator
• online ks3 maths papers
• cube root algebra rules
• basic algebra steps
• exponential square root parabola hyperbola graphs
• "substitution 3 variables"
• computer nonlinear equation solve
• aptitudes downloads
• ti 83 plus interpolation program
• lowest common denominator calculator
• ALGEBRATOR
• application in algebra
• aptitude question in pdf files for free
• online ks3 sats papers
• multi-step equations solver
• monomial solver
• cpm algebra final exam
• college math book answers
• life history of a mathematician
• polynomial factoring calculator programs
• square root equation calculator
• on-line calculator expressions
• Calculate Common Denominator
• area volume dimension changes algebra problems
• RATIONALIZE radical expressions CALCULATOR
• Permutations for dummies
• homogeneous differential equation
• balance chemical equation online calculator
• algebra structure and method book 1 walkthrough
• On line math test for 7 years child
• free maths formulas book for download upto matric level
• how to solve cos on the ti-89
• balancing equations cheat
• topology geometry powerpoints
• quadratic parabola
• square root fraction simplifier
• free math worksheets you can do online and get sep-by-step solutions for
• ti-84 programs pre cal formulas
• 6th grade mathematical permutations
• online summation solver
• quadratic equation for ti-84
• scientific notation and negative exponents worksheet
• free online algebra 2 solving
• finding quadratic equations from table
• mcdougal littell geometry resource book answers
• free e books
• how to use solve simultaneous equation using excel
• factoring and expanding quadratic expressions
• product finder chemical equation
• what is radical 108 simplified
• algebra problems
• printable notes for order of operations
• free kumon worksheets
• algebra with pizzazz
• radicals AND free calculator
• algebra tiles & accommodation plan pdf
• simplifying radicals with negative radicands worksheets
• algebra and relations + exponents
• free algebra 2 help
• free online simultaneous equation solver
• Book about Polynomial Equations
• free simultaneous equation solving software
• How do we use zero product principle to solve quadratic equations?
• free mathematical problem solver
• math problems with order of operation to solve
• guide to solving radicals
• free worksheets generator for writing equations
• year 5 maths translation worksheet
• square root calculator x
• how to change a decimal to the squares root
• online graphing calculator polar
• hungerford "abstract algebra"
• radical expressions calculator with variables
• inequalities two unknown
• work sheet for algebra percentage
• kumon answer book level d
• method of least squares ti-86
• decimal to square root converter
• quadratic formula plug in numbers
• Bank aptitude questions
• printable aptitude tests for children
• generate distributive property worksheet
• free computer lesson plans 10th grade
• algebraic II restrictions
• slope-intercept form worksheets
• answers to kumon sheet 166
• online multiplying radical expression calculators
• Math Homework Sheet
• easy 2nd grade Math Term Definition
• exam solutions abstract algebra
• simplify algebraic equations by combining like terms worksheets
• graphing calculator conic picture
• dividing polynomials by polynomials
• radical form of square roots
• teacher lesson plan on solving linear equations on both sides for middle school students
• algebra speed formulas
• factoring quadratic equations calculators
• writing algebraic expressions worksheets
• algebra graphs free
• When adding integers with the same sign we can get the answer by
• free online slope graphing calculator
• Algebrator for Mac
• Integral online calc step by step
• free online math solver program
• algebra games gcse
• hard systems of equations problems
• pics of algebra problems
• how to use the symbolic method
• McDougal Littell pre-Algebra lesson plans
• how to graph an ellipse on a TI-84
• Hyperbola grapher program
• fraction formula
• free online geometry problem solver
• ti-89 how to graph 3d helix
• ti 89 store notes
• using TI-83 online
• teaching sixth grade Permutations and Combinations
• Second Order Differential Equations with matlab
• ellipses- converting equations from general form to standard form problems
• calculas
• factoring rational exponents
• basic skills in chemistry worksheets by prentice hall
• Greatest Common Factor of x,xy,y
• free radical solver
• 10th grade math nys
• simplefying expressions
• printable maths homework
• slope intercept worksheets
• probability unit lesson Grade 9 Kentucky
• simplifying exponential square roots
• binomial factor calculator
• mcdougal littell geometry free answers
• temperature interactive lesson
• free maths problems for ks2
• homogeneous differential equations
• simplifying complex numbers
• irvine pre-algebra test
• online differential equations calculator
• convert to any radix ti-89
• algebra simplify calculator expressions
• interval notation calculator
• function machines KS3 revision/test
• 8th grade velocity worksheets
• how do u divide
• gelosia calculator
• explanation of comparing linear equations
• trigonometry KS3 questions
• how to balance chemical equations step by step
• x2+Bx+c program for TI 84 plus by factoring
• solving expressions with exponents
• pre algebra for dummies
• practice sheets for graphing linear inequalities
• examples of 5 math trivia for elem.
• java convert decimal to mixed fraction
• trigonometry calculator lessons online learning
• algebra help-interest and percent
• quadratic rules ppt
• beginning algera
• square root THIRD
• scales for math
• factoring quadratic equations worksheets free
• solving simultaneous nonlinear equations using matlab
• free childrens maths questions
• easy way to solve loop function
• "TI-89 titanium" + "determinants"
• square metre calculation
• Type in Algebra Problem Get Answer Free
• trinomial calculator
• convert decimal fractions to base
• 'maths+integers+free worksheets,
• GGmain
• scale factors word problems middle school
• how to cube root on calculator
• problems on scale factor
• first grade symmetry print outs
• real life radical expressions
• exponents adding subtracting multiplying how to
• simplifying complex fractions with radicals
• how do you multiply fractional exponent
• solving system of equations ti-83
• free mcdougal littell geometry florida edition book online
• linear algebra online equation solver
• who invented completing the square
• fifth grade variables math practive worksheet
• algebra 2 quadratic equations
• how did the sioux indians use the pythagorean theorem differently than we do
• ks3 maths quadratic equations
• mathamatics
• learn beginner alebra
• common denominator polynomial fractions
• math solution trivia
• "pythagorean theorem" sioux
• finding lcd in fractions worksheet
• fraction into decimal notation calculator
• how to factor on a ti 83
• cost accounting free books
• simplify square root 216
• quadratic equations completing the square
• how to solve system of logarithmic equations
• find roots of 2 variable functions
• excel polynomial function
• sats papers to do online for science
• simplify expressions using calculator
• writing a quadratic equation with given solutions
• math worksheet on probability/combination for third grade
• fun puzzles in algebra worksheets
• adding and subtracting integers game
• glencoe algebra 2 answer key
• algebra for dummies free download
• subtracting integer
• algebra gr.9
• what's the least common factor of 2, 8, 10
• solve second order ode matlab
• subtracting fractions worksheets
• least common multiples of algebraic expressions
• calculator online standard form
• ti-83 plus graphing calculator finding the cube of number
• greatest common divisor udsing java
• dividing two variable polynomials calculator
• Determine the missing number in an equation + subtraction
• maths practise exercises for 9th &7th class
• online simultaneous equation solver
• formula for changing percentages into decimals and fractions
• Free online audio algebra and geometry problems
• matlab permutations combinations
• solve difference quotient
• the least common denominator of x and 9
• pre algebra free ebook downloads
• free calculator for solving equations with rational expressions
• online calculator % expressions
• how to solve radicals
• math grade 9 exponents worksheet
• free Worksheet for Ratio & Proportions
• 7th grade algebra vocabulary terms
• linear equations applications ppt
• factoring workbook
• add, subtract, multiply, divide worksheets
• MIXED INTEGERS ONLINE QUIZ
• free algebra problem solver online
• ti89 finding slope
Search Engine visitors came to this page today by entering these keywords :
• polynomials fatoring review worksheet anwers
• gcf in algebra and real life
• math poems using trig
• free math worksheets for 6th graders
• solve differential equation matlab
• factoring quadratics complex
• factoring trinomials review sheet
• slope graphing calculator
• mathematics trivia project
• math soling sums online
• equation perpendicular line
• mathamatics factorising quadratics
• simultaneous and quadratic equations
• simplified radical form
• Pre-algebra with pizzazz page 179
• how to answer an aptitude question
• simplify radicals expressions calculator
• first order differential equation calculator
• standard form calculator quadratic
• log2 on ti-86
• 5th grade basic algebra
• how to solve simultaneous equations on a ti 83
• mathproblems.com
• prentice hall mathematics algebra I answer key
• algebra problems with answers high school
• radical expressions real life examples
• ks2 yr 4 math online
• free practice with fifth grade solving equations with fractions
• curve fit fifth order polynomial normal equation
• how to graph algebraic equations
• finding a quadratic nth term method
• algebra ratio
• math expressions 4th grade
• Free Algebra 2 Problem Solver
• equations
• algebra factoring machine
• mixed number as a decimal
• math basic alg bra test on onternet
• substitution integral calculator
• elimination using multiplication calculator
• free online calculator system of equations
• free math problem solver
• application problems systems of equations
• nonhomogeneous differential
• math teivia for elementary
• graphing inequalities worksheet
• 6th grade algebra games
• least common denominator caculator
• free absolute value worksheets
• add and subtract negative and positive integers middle school worksheet
• algebra 1 worksheets and answers
• kids algebraic equation example
• yr.11 maths equations rearrangement mathematics
• trigonometry calculator download
• solutions to general maths exercises of year 11
• glencoe algebra 2 tests
• sixth grade algebra study guide
• java program conditional statements checking for divisibility
• Rational Expressions Online Solver
• adding & subtracting integers Number Line (1 -40) worksheet print
• multiply rational expressions
• multiplying and dividing negatives and positives
• ode45 matlab second order differential equation
• saxon math test generator
• example trivia in mathematics
• step by step method to adding and subtracting integers
• writing algrbraic equations + worksheet
• free fourth grade teaching worksheets
• Binomial Expansion Equation Sovler
• solve binomial fraction
• r values and TI-84 graphing calculator
• calculator
• what's my Rule math worksheets for 1st graders
• abstract algebra an introduction hungerford solutions
• download aptitude
• simple interest ti 84 silver
• trig step by step instructions on using TI-83 plus calculator
• grade 10 algebra
• "Square root worksheet
• factoring trinomials tic tac toe method
• free online problem solver step by step
• free calculator to sove fractions
• factoring 3rd degree polynomial
• greatest common factor formula
• problem of ellipse
• law of exponent in addition and subtraction
• Maths KS3 test sheets
• excel simultaneous equation solver
• eighth grade pre algebra
• algebra problem solver free online
• "best fit" line graphs worksheets fifth grade free
• how to do percent equations
• grade level of algebra structure and method book 1
• excel solve 2 equations
• Rules of adding, subtracting, dividing and multiplying fractions
• algebra poems
• "step by step" symbolic matlab
• algebra 2 symbolic method
• free practice dividing fractions
• gmat free numeric
• activities about simplifying radicals
• explain algebra y intercept for dummie
• adding polynomials java code
• factorization algebra questions
• pre algebra monomials worksheets
• printable ks3 papers
• Cube Root Calculator
• trig chart
• free binary practice sheet
• solving first order ode square root
• texas 7th grade math formula chart
• beginners algebra 1 cd for college
• finding the y intercept of a quadratic equation on a ti 83 plus
• Formula For Square Root
• simultaneous equations fractions
• ti-83 plus quadratic equation
• equasion solving websites
• free math pages, first grade, inequalities
• excel calculator
• free least common denominator worksheet
• excel solver simultaneous nonlinear equations
• adding, subtracting, multiply, and divide mixed fractions
• real life formula
• 6th grade pythagorean theorem practice problems print out
• lcd lowest common calculator
• "Elementary Balancing Chemical Equations'
• Understanding Permutations and Combinations
• second order differential equation matlab
• Mcdougal littell answers
• how to solve algebra problems by process of elimination
• algebra 1 mcdougal little answer book
• multiply equations calculator
• simplify radical absolute value
• converting mixed numbers
• t1-83 calculator simulator
• Free Algebra Answer Key
• online polynomial solver
• combining like terms worksheets
• kumon sheets
• learn algebra free
• Fractions variables worksheets
• differential equations calculators
• download kunci cost accounting
• hard quizes on solving rarional expression
• how to calculate logarithms on calculator
• answers to review worksheets for holt Chemistry
• how to evaluate expressions fractions
• grade 7 integers worksheet
• write a program in java using functions to find the square and square root of a no. taken by the user
• graph linear equalities with fraction
• pre-algebra terms definitions
• factoring trinomial calculator
• solution manual of scientific computing by heath
• combining like terms assessment
• inequality lesson for 6th grade
• prentice hall Biology chapter 11 test B answers
• solving a difference of quotient problem
• how to simplify algebraic fractions + slope
• free ratio worksheets
• math factoring calculator'
• solving matrix quadratic equation
• solve second order differential equation
• solving quadratic equations and inequalities
• percent fraction decimal conversion worksheet
• polynomial factoring machine
• algebra expressions calculator
• solve pre algabra problems
• learn algebra fast
• determining if an equation is linear homogeneous, linear non homogeneous or nonlinear
• convert percent to decimals on ti-83 plus caculator
• adding/subtracting rational expressions calculator
• lesson master+UCSMP
• how to pass a college math class
• math formula for multiplication of fractions
• GED Online pratice math test
• mcdougal littell algebra 1 answers
• equation for excel
• teach me basic differential equations
• third degree factor solver
• COST ACCOUNTING BOOKS
• worksheet signed fractions
• ks3 maths games printables
• ti 89 solving systems of equations
• find real roots calculator
• addition and subtraction signed numbers worksheet
• algebra 1 prentice hall practice test linear equations
• monomial simplifier
• algebra writing a system of inequalities
• computer programe in Matlab to calculate combination of numbers
• firstinmath skill set 8 cheat sheet
• graph algebra equations
• third grade order of operations worksheet
• pre algebra with pizzazz
• worksheet algebra expressions balancing
• calculate log online
• Multiplying cube roots of rational functions
• adding and subtracting integers charts
• download aptitude test paper
• algebra exercises for grade 10
• solving algebraic equations powerpoints
• domain and range linear equations online calculator
• simplify square root of 8
• rational expression simplifier
• quadratic formula calculator program TI-84
• abstract algebra help
• free online polynomial factoring calculator
• geometry math trivia
• balance equations online calculator
• holt algebra 1 math book
• algebra with pizzazz! 4-c
• free worksheet of gcse foundation math
• McDougal Littell Algebra 2 cheats
• worksheets for integer fractions
• 5 math trivia for elem.
• homework help for holt pre calculus a graphing approach
• rational expressions calculator
• algebra equation solver with steps free
• mathematical trivias
• Algebra by artin free download
• factoring complex trinomials by decomposition
• solving quadratic equation using java scrift
• online Solving Polynomial Division
• "square roots problems"
• order of fractions from highest to lowest
• rational equations solver
• first differences, online graphing calculator
• poems about math mathematics algebra algebra 2
• how to learn chemical equations
• Teaching strategies to slow learners in mathematics
• free algebra 1 answer software
• square root conversion
• radical calculator for multiplication
• bittinger elementary algebra exercises results
• solver online
• solving systems of equations using ti-89
• chapter 5 test form algebra 1 glencoe
• solve by elimination method calculator
• solution algebra artin
• Divide polynomials calculator
• special triginometry values
• prentice hall mathematics algebra 1 answers
• solving nonhomogeneous
• how to factor a cubed polynomial
• mathematics trivia
• cost accounting homework
• multiplying by 2 using pictures worksheets
• nonhomogeneous second order differential equation
• forth grade fractions
• pre algebra worksheets for fourth graders
• math poems
• analytic solve roots closest approach ellipses
• UPTU Aptitude test paper download
• algebra solver download
• square root fraction practice
• algebra 1 book answers
• 5th grade fractions adding and subtracting
• year 11 math
• square root in excel
• how to solve equations with rational expressions
• inverse matrix finder
• how to use square roots for decimals
• quadratic word problem solver
• step by step algebra solver
• quadratic simultaneous equations solver
• free 10th grade algebra practice questions
• help solving by elimination problems
• math homework solvers step by step
• finding the common denominator
• dilations 7th grade game
• pre-algebra with pizzazz, test of genius
• free printable pre-algebra quizzes
• funny quadratic word problems
• completing the square calculator
• simplifying square roots activities
• clerical aptitude free e book download
• ks2 math how to find compound interest
• limits calculator graphic maker
• test of genius answers algebra with pizzazz
• variable divided by square root of variable
• online calculator interval notations to find continuous
• solve for a specified variable
• general mathematics year 11 books free download
• java third degree polynomial solver
• perfect square quadratic calculator
• polynomials factoring free calculator
• calculate reimann sums on TI 86
• learning experiences, algebra I
• convert 45 feet by 24 feet to suare feet
• proof of gcd of three numbers
• complete the square powerpoint
• algebra pizzazz
• show me how to do a six grade probability project
• equation of fractions
• free teach me fundamentals of cost accounting
• MATH - Trial Exam YEAR 4
• least common denominator worksheets
• graphing linear inequalities worksheet
• ti-83 plus solve function
• aptitude question paper of IsS
• radical square root calculator
• common factoring mcGraw-hill practice
• physics addison answers conceptual
• difference of two square
• equation calculator by substitution
• 4th grade fraction worksheets
• SOLVED EXERCISES FREE
• ti-89 solving polynomial inequalities
• Glenco math test
• simplifying radical expressions to the third
• how to do square roots in a t83
• 7th grade algebra help
• lesson plans for math lattice
• how to load the quadratic formula on a ti-84
• formula of a parabola'
• ratio formula
• grade 11 Vector Geometry quiz
• graphing linear equations using fractions
• grade nine trigonometry practice
• linear equation with two variables free calculator
• permutations combinations ti 83
• can you have decimals on a factor tree?
• power point games adding & subtracting integer
• excel non linear equation graph
• learning pictographs worksheets
• simplifying square roots with exponents
• "division story problem" "fraction"
• substitution method calculator
• free maths test year 8
• exponent button in ti-89
• ti 89 convolution
• alpha - 1 is the root of a quadratic equation. how do we solve it
• help with rational expressions interactive
• dividing multiplying adding and subtracting fractions
• converting decimels to fractions worksheet
• poems with mathematical terms
• complex fraction calculator
• adding subtracting fractions on ti-83
• TAKS MATH Problem solving strategies
• TI-84 emulator
• grade 9 algebra questions
• mcDougal Littell algebra 2 worksheet answers
• free maths tests ks3
• answers to Mcdougal littell accelerated pre algebra book
• How to convert mixed number fractions to decimals
• free college level math work sheets
• dummit and foote solution manual
• Free printable nets
• verbal problem for quadratic equation
• practice test papers for Year 9 level 5-7 Chemistry
• math common factors & greatest common factors do all even numbers have 2 as a fator
• factor third order polynomials
• algebra +funtions in 6th grade Math
• java code in simplifying math expressions
• notes mcdougall littell world history
• permutations combinations worksheets
• formula to factor 3rd order polynomial
• 9th grade lesson on scientific notation
• wwww.davcmc class - 8 guess paper.com
• what is the hardest math question in the world
• algebra 2 homework solver
• practice 6th grade algebra problems
• free online algebra with division calculator
• college algebra homework
• equation converter
• subtraction lesson plans for 1st grade
• how to solve elimination algebra help
• Calculate Least Common Denominator
• lesson plan about factoring by removing the gcf
• finding roots of a polynomial ti-83
• "discrete math for dummies"
• Mcdougal littell cheat sheets
• systems of equations, fun worksheet
• The factoring and expanding of polynomials
• rational expressions solver
• ti program substitution integrals
• example of quadratic equation convert to javascript
• free worksheets on factors and factoring
• y=-5x absolute value graph
• quadratic equation matlab
• factoring with the ti 84 plus
• free sats papers ks2
• download prentice hall algebra 2 with trigonometry teachers addition
• holt algebra 1 answers
• cubed equations
• solving algebraic powers
• abstract algebra quiz gallien
• find x on graphing calculator
• polynomial factoring calculators
• teaching yourself algebra
• free printable linear worksheets
• algebra I TEKS downloadable
• math poems related to real life
• how to do fourth root on graphing calculator
• equations inequalities worksheets
• LCD of polynomials
• maths homework solver
• "Essentials of Investments Solutions"
• algerba for beginers
• particular solution to nonhomogeneous differential equation
• combination and permutation worksheets
• how to solve fractions with squart roots
• list of maths formulae+algebra
• absolute value vb calculator
• mathematical induction for dummies
• algebra II Glencoe mathematics- north carolina edition-resource masters
• free conceptual physics 10th exercises 5 answers
• visual boolean algebra
• algebra answers finder
• hard algebra 2 problomes
• free worksheet word problems proportions
• Beginning Solving linear equations involving whole numbers lesson plans
• free balancing equations online
• factor polynomials online
• parabolas calculator
• locate math percentages for learning and practice
• pre-algebra distance worksheet
• precalculus Third Edition online chapters
• extracting roots
• factoring problems with fractions
• show 6th grade math tax rate
• a algebraic equation with a fraction ,an integer ,a decimal ,and the answer comes out to 4
• linear combination method with variable on outside equation
• maths in daily lifes.ppt
• scale factor/math
• matrix algebra tutorial
• roots of quadratic equation
• chemistry prentice hall worksheet answers
• poems in algebra
• mixed fraction % to decimal
• step by step tutorial on simplifying radical expressions
• I need to write a rational expression that can be simplified
• how to add and subtract radicals using a calculator
• inductive reasoning worksheet for fourth grade
• solve my quotient-of-powers math problems
• interactive algebraic fractions expressions games
• solving 3 variable equation quadrant
• printablelist of unit conversion
• equation calculator
• solving multiple equations with TI-83 plus
• solve a quadratic equation knowing only two points
• simplifying radical expressions calculators
• hyperbola sample problems
• synthetic division worksheet
• dividing polynomials trick
• finding the lcm using prime factorization with the tool to find the answers
• algebra II SOFTWARE
• 4th grade sample math worksheets
• calculating multivariable limits
• basic grade 10 maths
• algebraic equation solver simplification
• coordinate plane+powerpoint
• how to factor a third order polynomial
• algebraic fraction solver
• partial sums method in second grade
• simultaneous equations easy questions
• saxon math course 2 answer book
• pdf ti-89
• triganomotry learning
• substitution method algebra
• solve third order quadratic equation
• graphing equations worksheets
• how to solver function TI-83 plus quadratic
• simplify exponents calculator free
• mcdougal littell real-world problem solving worksheets
• common denominator between 24 and 45
• math poems about algebra
• sample geometry trivia
• how to factor polynomials with two variables
• what is a multiple maths for kids
• multiplying cubed expressions
• worksheets free grade 8 inequalities
• multiply fraction by conjugate subtract root
• hyperbolic cosine on ti83
• how to solve two variable quadratic equations
• ppt on probabilty on algebric form
• aptitude test questions with answers
• algebra rules cube
• asset questions on maths grade 8+volume and surface area
• algebraic substitution calculator
• Radical equations multiplying for dummies
• applications of algebra w/ problem solving
• pre algebra with pizzazz free monster
• 9th maths worksheets
• Programming Casio Calculator for Equations
• advanced quadratic sequences
• algebra worksheets and functions
• changing mixed number to decimal
• prentice hall math answer
• can i add radicals and whole numbers?
• free intro to algebra help
• linear function ppt
• study guide algebra 1st semester
• free algebra exercise
• solve algebraic differential equations in matlab
• grade 11 math worksheet
• Examples of travel graphs for 10th grade students (Mathematics)
• masm quadratic equations
• free college algebra equation solving
• solving fraction equations with variables
• Online Algebra Calculators
• free online help factoring polynomials
• lesson plan on fractions with exponents
• find median, mode and range on Ti 85 calculator
• solving radical equations calculator
• glencoe algebra 1 answers
• radicals absolute value
• factoring by hand math
• algebra software for teaching
• perfect squares factoring simplify radical expressions
• vertex parabola calculator
• what is the ladder method
• Grade 7 sample algebra readiness test
• download ti 84 games
• free graph to find the slope of a line cheat
• 9th grade maths exam papers
• fraction reduction worksheets
• Java how to ignore punctuation
• help me solve my algebra problems
• find the slopecalculator to find slope and y intercept of line
• visual patterns algebra 1 worksheet
• simplifying radical expressions absolute
• free downloaded books of cost accounting
• solving linear equation using wronskian determinant
• solve this problem for free
• how to solve wave form differential equations
• free algebra help
• free online inequality graphing calculator
• 6th grade math algebra test
• solving for specified variable
• free books clerical aptitude download
• investment problem in algeabra with solution and example
• quotients with radicals
• visual fractions worksheets
• DOWNLOAD HOW TO LEARN ALGEBRA
• ti 89 cheating
• how to factor ti-84
• cpm foundations for algebra year 1
• a program that does algebra on ti 84
• matlab nonlinear non-square systems
• geometry math book answers
• pre-algebra prentice hall practice workbook
• what bool company manual accounting books
• dividing multiplying negative and positive integers examples only
• Determine the gross profit rates. (Round to one decimal place.)
• quadratic formula solver with scientific notation
• ratios and Middle school pizzazz
• 7th grade math worksheets that got doing with inverse proportions
• product of radical equation solver
• how do u add subtract times divied fractions
• solving a algebraic graph equation
• who invented the formula of arithmatics
• algabra year 3
• Adding, subtracting, multiplying positives and negatives
• matlab equation solution
• softmath algebrator
• solving the boolean algebra equations
• online calculator for 7th graders word problems
• fraction to decimal ti89
• solving nonlinear differential equation
• ti 84 quadratic program download
• variable exponents
• radical calculator
• Algebra 1 chapter 7 Resource Book answers
• simplifying complex equations
• texas ti 83 plus Gauss
• "Integrated Mathematics 3 Chapter 5"
• factorise a quadratic equation calculator
• factoring with fractional exponent worksheet
• practice on adding dividing subtracting and multiplying positive and negative mumbers
• y intercept and slope ti 83
• solve linear equations by multiplying first, calculator
• factorization software online
• calcul on easy way
• free calculator solving mean, median, and mode
• contemporary abstract algebra gallian chapter 1 u(9) inverse
• formulas physics 7th grade
• how to solve formulas for a given variable
• basic maths test print
• calculator for solving equations with rational expressions
• scientific calculator cubed root
• Example Of Math Trivia Questions
• how do you simplify the square root of a prime number
• quadratic equation ti 89
• writing mixed numbers as a percent
• saxon algebra 1 assignements
• how to solve quadratic (t+3+(6/t))
• grade 6 math word problem combination
• Polynomial problems for grade 9
• study guide, algebra, structure and method book 1
• root properties math
• matlab code for simultaneous equation
• algorithm flowchart for looping to find GCF
• expanding algebraic equation in fraction
• teach me trigonometry
• squre roots in Maths
• step by step solve my integral
• holt rinehart and winston algebra 1 answers
• linear and homogeneous "partial differential equation"
• Glencoe mathematics( algebra 1 )
• parabola formulas
• linear equation powerpoint
• quadratic equation in JAVA
• adding and subtracting rational expressions calculator
• 4th grade fractions
• slope worksheets high school algebra
• princepals to simplify a polynomial
• quadratic polynomials calculator
• integer calc online
• calculate least common denominator
• pre algebra with pizzazz creative publications
• poem with math words in it
• holt mathematics answers pre- algebra
• teaching of quadratic inequalities
• prentice hall answers physics
• Algebra importance
• simultaneous equations multiple choice
• answers to linear algebra with applications homework
• quadratic formula calculator program
• GLENCOE GEOMETRY TEACHER ANSWERS CHAPTER 8 TEST
• Solved sample papers for class 10
• solve for unknown variable with an exponent
• algebra using substitution
• two step equation activities
• factoring complex trinomials calculator
• simplifying radicals with solution
• math problems add, substract, multiply divide
• how do you find a radical on a calculator
• FACTOR PROBLEMS
• kumon answer book download
• solving quadratic equations in denominator
• solve 3rd order polynomial equation
• quadratic equation foil solver
• examples half angel formula work sheets
• store stuff on ti-89
• difference of perfect squares expanding brackets worksheet maths
• factoring with 3 variables
• rationalize polynomials from 3rd level
• quadratic equation square calculator
• sixth grade math test paper pdf
• math trivia for grade six
• complex numbers in TI-83 plus
• factoring cubed equations
• gcse foundation algebra worksheet
• English and Math aptitude questions and answers
• algebra 1 graphing function worksheets
• caculator trig values
• simplifying square numbers
• simplifications of algebraic fractions sums
• answers to math homewrok
• free simplifying radical expressions calculator
• word expressions for positive and negative integers
• decimal to square root calculator
• algebra software
• don't understand boolean algebra
• using ti89 to solve differential equations
• solve algebra problem
• Polynomial Division calculator
• solve my math problem online
• integer worksheets with answers
• Holt Pre-Algebra worksheet
• Factoring expressions for algebra quiz
• quad root ti-83
• what are the least common multiples of 24 and 34
• online math word problem solver
• mathmatic
• Free Online College Algebra Calculator
• how to use the domain function on my ti 83 plus calculator
• answer to prentice hall math workbook
• online algebra binomial calculator
• algebra calculators
• linear algebra done right solution manual
• Long Division Of Polynomial Worksheet
• factoring to the third degree solver
• algebra simplification calculator
• Does anyone know the answer to the ALGEBRA WITH PIZZAZZ (Creative Publications) worksheet Pg 88
• permutations and combination solved problems
• absolute equations solver
• calculator, seconds and metres
• the historical root of elementary mathematics (gratis)
• 7th grade scale factor. help
• printable lesson plans, Area, squae, rectangle, math, standard, IL
• general aptitude ques and ans
• math worksheet for 3rd grade for california
• free algebrator
• Lowest Common Multiples calculator
• solving systems of equations + worksheet
• free flash cards distributive property
• polynomial integer roots calculator
• Algebra 2 Homework Help - McDougal-Littell 2008
• how to find roots of multivariable equations Maple
• free children puzzle test sheets for down load
• table of square root algebra formulae
• 6th grade dividing subtracting and multiplying fractions worksheets
• answers to page 710 in the glencoe algebra 1 textbook
• 5th grade njask
• free 4th grade practice test
• ti 89 graphing on an interval
• prentice hall mathematics pre algebra workbook
• casio calculator rom
• prentice hall algebra 2 page 362 answers
• solving college level literal equations
• proportion worksheet
• least common multiple calculator polynomial
• worksheet draw a picture from solving linear equations
• poem algebra
• year 6 maths equations
• online calculator for 7th graders word problems free
• ged math help for dummies
• Mathamatics
• list of algebraic formula
• rotation worksheet
• print number of zeros in integer java
• free computer games mathematics SATS yr 6
• anstract algebra chapter 3 solutions
• answer key for accounting book
• worksheet completing the square
• word problem quadratic equation
• resolving second order matlab
• free online ti 83 plus calculator
• graphing inequalities worksheets
• dividing polynomials and mixed expresions
• how to take square cube of a value
• an easy quiz and the answer key trinomials
• Maths Tests for year 8
• steps on how to solve a difference of squares equation
• free mcdougal littell geometry book online
• adding mixed number percents
• square root algebra fractions
• Hyperbola maker
• how to use graph quadratic inequalities on ti-84 plus calculator
• Beginning Algebra worksheets
• rational expression calculator
• formula for converting decimals into fractions
• Free college algebra solving program
• sample problem using flowchart
• "nonlinear equation" matlab example
• permutation and combination activity sheet
• Lesson.Plan Algebraic fractions.
• multyplying polynomials calculator
• AMATYC
• factorization with fractions
• 9th grade math online quizzes
• free downloads for algebra and geometry
• simple maths online
• mathematics chart 7th grzade
• interactive quadratic equations
• how to mix number
• graphing lists and functions simultaneously mathematica
• "Solutions Manual Essentials Of Investments"
• online graphing calculator WITH VERTEX
• solving a scientific equation using boyle's law
• conceptual physics prentice hall
• solutions to rudin exercises
• convert base 7 to decimal
• Using Excel to Solve Simultaneous not Linear Equations
• saxon algebra 1 answers
• learning algebra free
• math sample test for 6th graders
• solving fraction equations
• TI-83 ROM IMAGE
• free factoring polynomials
• how to cheat on pearsun sucess computer tests
• difference of square
• prentice hall advanced algebra textbook
• simplifing rational functions calculator
• matlab coupled differential equations
• college algebra for dummies
• subtracting negatives chart
• Algebraic Expression poems
• Adding and Subtracting Fractions With Variables calculator
• simplifying fractions into decimal caculator
• expression calculator with exponents
• algebra 2 discovery worksheets
• math help scale factors
• elementary math trivia
• creating equation percentages
• multiples of 12 and 18
• help with college algebra problems
• lessons. worksheets, and games on triangle inequalities for 5th graders
• how to do radicals on a calculator
• online square root solver
• mcdougal littell middle school math homework help
• solve set problems online
• Graph Solving Equation Free
• matlab solving simultaneous equations
• adding fractions with unlike denominators
• Second Order Linear Nonhomogeneous
• free aptitude qustions download
• free + math grade 11 worksheets + answers
• Yr 6 practice test online for sats
• TI-84 cube root program
• VideoText Interactive Unit 5 Test answer keys Second degree relations and higher - polynomials
• surface area of prismis and cylinders free worksheet
• solving simultaneous differential equations in matlab
• math holt grade 9
• math algebra trivia | {"url":"https://softmath.com/math-com-calculator/factoring-expressions/asymptote-calculator.html","timestamp":"2024-11-04T10:44:08Z","content_type":"text/html","content_length":"163192","record_id":"<urn:uuid:4c4b5978-70f0-4ddd-b4d8-25dfc011cc4c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00584.warc.gz"} |
Decoherent Histories Quantum Mechanics
Familiar textbook quantum mechanics (aka Copenhagen quantum theory) must be generalized to apply to cosmology. Copenhagen formulations assume a division of the world into `observer' and `observed',
assume that the results of `measurements' are the sole objective of prediction, and posit the existence of an external `quasiclassical realm' containing essentially classical `observers'. However, in
a theory of the whole thing there can be no fundamental division into observer and observed. Measurements and observers cannot be fundamental notions in a theory that seeks to describe the early
universe when neither existed. In a basic formulation of quantum mechanics there is no reason for there to be any variables that exhibit classical behavior in all circumstances.
A quantum mechanics of closed systems general enough for cosmology has been developed over the last three decades. The author refers to it as decoherent histories quantum mechanics (DH). (But see the
note on terminology and history [term].) Decoherent histories quantum theory is logically consistent, consistent with experiment, and consistent with the rest of modern physics.
The author has written enough papers on decoherent histories quantum theory that it is useful to divide them into groups with their own set of pages as follows:
Fundamentals [fund]
This section is devoted to papers explaining DH in the approximation that gross quantum gravitational fluctuations in the geometry of spacetime can be neglected. Then a model for a closed system
is a very large, perhaps expanding, box containing matter fields moving in a fixed spacetime geometry. That is the simplest starting point for exposition.
Quasiclassical Realms [qcrealms]
In our quantum universe the deterministic laws of classical physics apply approximately on a wide range of time, place, epoch, and scale. A quantum theory like DH that does not posit this
quasiclassical realm must seek to explain it as arising from the universe's quantum dynamics and quantum state. The papers in this section are devoted to characterizing the quasiclassical realm
within quantum mechanics and explaining its origin.
Generalizations in Fixed Spacetime Geometries [gqm]
DH must be generalized further to incorporate quantum spacetime.
The papers in this section develop a framework for generalizing DH called generalized quantum theory. This is illustrated with examples in fixed spacetimes such as alternatives that extend over
time, and time-neutral quantum mechanics with initial and final conditions.
Generalizations Needed for Quantum Gravity [qst]
Papers in this section develop a sum-over-histories, generalized quantum mechanics of spacetime geometry that is used in other sections to extract predictions for the cosmological history of our
universe from theories of its dynamics and quantum state.
Quantum Mechanics with Extended Probabilities [exprobs]
Papers in this section explore one route to usefully reformulating DH and to developing testably distinct alternatives to it by using extended probabilities that can be sometimes negative.
Comment on History and Terminology [term-hist] | {"url":"https://web.physics.ucsb.edu/~quniverse/dhqm-over.html","timestamp":"2024-11-03T13:49:33Z","content_type":"application/xhtml+xml","content_length":"8063","record_id":"<urn:uuid:3be06c10-6ea8-4b04-ad8d-44a5133656b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00238.warc.gz"} |
How can I interpret the displayed return?
We are often asked how the performance (return in %) displayed in the app is calculated. In the VIAC app you can choose between two return measures, which are explained in the following article.
These are the TWR (Time-Weighted Return) and the MWR (Money-Weighted Return).
TWR – Time-Weighted Return
The TWR is easy to calculate and generally considered to be the industry standard, but it can sometimes be tricky to interpret.
First, a daily return r[t] is calculated for each day:
r[1] = (P[1] – P[0] + D[1]) / P[0]
The portfolio value at the end of the day P[1] and the portfolio value at the end of the previous day P[0] are used. The difference corresponds to the development on the stock exchange. Any dividend
payments D[1] during the day are added to this. Conversely, any cash flows (incoming or outgoing payments) of the current day are not taken into account.
The TWR is then created by geometrically linking these daily returns – from the first r[1] to the last r[t] day of the period under consideration:
TWR = (1 + r[1]) * (1 + r[2]) * … * (1 + r[t]) – 1
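A minimal sketch of the calculation above (illustrative function and variable names, not VIAC's actual implementation):

# Daily returns from end-of-day portfolio values and dividends (cash flows
# are ignored, per the formula above), linked geometrically into the TWR.
def time_weighted_return(values, dividends):
    twr = 1.0
    for t in range(1, len(values)):
        r_t = (values[t] - values[t - 1] + dividends[t]) / values[t - 1]
        twr *= 1.0 + r_t
    return twr - 1.0

# Two periods with +50% and -30% and no dividends -> TWR = +5%
print(time_weighted_return([100, 150, 105], [0, 0, 0]))  # ~0.05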
Advantages and disadvantages
The TWR is easy to calculate and makes strategies comparable since all deposits and withdrawals are ignored. Accordingly, the TWR reflects the actual return achieved as if the same amount had always
been invested from the start. However, it is precisely this omission of incoming and outgoing payments that can make interpretation more difficult, as the following example illustrates:
We consider two equally long periods. CHF 100 is invested at the start of period 1. The return in period 1 is 50%; accordingly we achieve a profit of CHF 50. CHF 100 will be invested again at the
start of period 2 – in other words, we start with a total of CHF 250 in period 2. The return in period 2 is then equal to -30%, which implies a loss of CHF 75. At the end of period 2, the portfolio
value is CHF 175. Since we made deposits in total of CHF 200, we lost CHF 25 in absolute terms. However, the TWR is +5%, since
(1 + 50%) * ( 1+ (-30%)) – 1 = 5%
This means that we have a loss in CHF compared to the total of incoming payments with a simultaneously positive TWR performance. The discrepancy or “wrong” sign arises because the timing of the
second payment at the start of period 2 was unfavourable – we increased our position exactly before the correction of 30%. This example illustrates the problems of the TWR, even if it is, of course,
an extreme one. In practice, however, it will happen regularly that the displayed performance does not exactly correspond to one’s own intuition. Payments that are not immediately invested can also
make the interpretation more difficult.
MWR – Money-Weighted Return
The MWR is a money-weighted return measure that takes into account cash inflows and outflows. In contrast to the TWR, the MWR is therefore also suitable for measuring investment decisions and
accordingly includes a personal timing component. This often makes the MWR more intuitive in a portfolio with frequent cash flows and facilitates interpretation. At the same time, however, the MWR %
value is no longer comparable with other strategies or funds.
An optimization procedure is used to calculate the MWR, which is best explained using a simple example:
Date Comment Value
31.12.2020 Deposit 1 CHF 80
31.12.2021 Deposit 2 CHF 20
31.12.2022 Current portfolio value CHF 105
31.12.2022 MWR (p.a.) 2.74%
In simple terms, the optimization procedure calculates the annual interest rate that must be applied to the deposits in order to obtain the indicated portfolio value as of 31 December 2022. The MWR
is therefore an annual rate of return that exactly solves the optimization problem – that is, the deposits bearing interest at the MWR yield exactly the current portfolio value. In the VIAC app, this
annual MWR is then scaled to the customer-specific period, i.e. the % return since start is displayed analogously to TWR. In the example above, the money-weighted return since start is 5.56%.
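A minimal sketch of this root-finding step (illustrative names, not VIAC's actual code; it assumes whole years between deposits, as in the table above):

# Find the annual rate r at which the deposits, compounded to the valuation
# date, exactly equal the current portfolio value.
from scipy.optimize import brentq

def money_weighted_return(deposits, years_invested, final_value):
    def gap(r):
        return sum(d * (1.0 + r) ** y
                   for d, y in zip(deposits, years_invested)) - final_value
    return brentq(gap, -0.99, 10.0)  # root search over a wide bracket

# Example from the first table: CHF 80 for 2 years, CHF 20 for 1 year,
# worth CHF 105 at the end -> MWR of about 2.74% p.a.
r = money_weighted_return([80, 20], [2, 1], 105)
print(round(100 * r, 2))  # ~2.74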
Advantages and disadvantages
In general, the MWR is intuitive and easier to interpret, since the personal timing component in the form of deposits and withdrawals is taken into account and the % value is therefore often close to
the effective profit/loss in CHF. Since personal investment decisions are included in the %-value, the MWR loses the possibility to compare the personal portfolio return with other portfolios or
other providers. In addition, the MWR can also take on values that are not intuitive at first glance, as the following example shows:
Date Comment Value
31.12.2018 Deposit 3a CHF 7’000
31.12.2019 Deposit 3a CHF 7’000
31.12.2020 Deposit 3a CHF 7’000
31.12.2021 Transfer VB and deposit 3a CHF 1’007’000
31.12.2022 Current portfolio value CHF 1’060’000
31.12.2022 MWR (p.a.) 2.98%
The VIAC app shows the return since start over the whole 4 years. The MWR p.a. of 2.98% is thus scaled up and the displayed money-weighted return is 12.48%. At first sight it looks like a return of
12.48% is too high if the current portfolio value of CHF 1’060’000 is compared with total payments of CHF 1’028’000.
The reason is that the MWR is dominated by relatively large cash flows. The large and late incoming vested benefits transfer is therefore decisive for the % return here – and this also applies to the
displayed % return since start, although the vested benefits transfer has actually only been invested for one year. | {"url":"https://viac.ch/en/article/how-can-i-interpret-the-displayed-return/","timestamp":"2024-11-04T13:40:57Z","content_type":"text/html","content_length":"69863","record_id":"<urn:uuid:7d365a78-ae6b-4df4-befc-d331cc177748>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00339.warc.gz"} |
Why Your Quintiles are not 20%
Today’s post courtesy of Captain Obvious …
To do quintile matching, one must first match by quintiles. Hence, the name.
Quintiles divide your data set into five equal, um, fifths. Quint is Latin (or Greek or some other random language) for five. Hence, the name.
So, when I did this:
PROC UNIVARIATE DATA = AllPropen NOPRINT ; /* dataset name assumed from the DATA step below */
VAR prob ;
OUTPUT OUT = quintile PCTLPTS = 20 40 60 80
PCTLPRE = PCT ;
RUN ;
I expected to get the values that divided the data set into quintiles.
When I did this, because I was too lazy to avoid typing the numbers,
/* write the quintiles to macro variables */
data _null_ ;
set quintile;
call symput('q1',pct20) ;
call symput('q2',pct40) ;
call symput('q3',pct60) ;
call symput('q4',pct80) ;
run ;
/* create the new variable in the main dataset */
data AllPropen;
set AllPropen ;
if prob =. then quintile = .;
else if prob le &q1 then quintile=1;
else if prob le &q2 then quintile=2;
else if prob le &q3 then quintile=3;
else if prob le &q4 then quintile=4;
else quintile=5;
run ;
proc freq data = allpropen ;
tables quintile ;
run ;
I expected to get five, even groups.
I did not.
I got three groups with 1,088 records, but my first group had 1,075, which is obviously less than 1,088, and my second group had more than 1,088.
I considered several possibilities. Did I misremember the meaning of percentile? Should it be LESS than the 20th percentile point instead of less than or equal to? If that is the case, why did the
rest of the groups come out perfectly?
Did the macro facility for some reason not compare down to enough decimal places to see that the 20th percentile value was, in fact, equal? To check for that, I multiplied the probability at the 20th
percentile by 10, then by 100, and compared it to 10 times, and then 100 times, the 20th percentile, thus requiring one or two fewer decimal places.
I used the %PUT to put the values of the macro variables for &q1 to &q4 to the log.
They were correct.
I re-ran the program in SPSS. Same result.
Finally, it dawned on me. I did a PROC FREQ and realized that, duh, there was NOT exactly a 20th percentile. While there was, for example, exactly one score at the 40th percentile, there were 11
people at the 19.76th percentile and 15 at the 20.04th percentile. There was not a single score at the 20th percentile so my SAS program could not give me an exact 20th percentile.
Thank you, Captain Obvious.
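The tie effect is easy to reproduce with made-up data; here is a tiny hypothetical sketch (in Python, since the idea is language-agnostic):

# Everything equal to the cut point must land on one side of it, so ties
# at a percentile boundary make an exact 20% group impossible.
scores = [1] * 11 + [2] * 15 + [3] * 30  # 56 records with many ties
cut = sorted(scores)[len(scores) // 5]   # value at the "20th percentile"
low = sum(s <= cut for s in scores)      # size of the first "quintile"
print(cut, low, low / len(scores))       # 2 26 0.464... -> not 20%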
I have no idea why the obvious answer did not occur to me immediately, maybe because with a smaller data set, I wouldn’t expect to have several records match down to the 12th decimal place.
On the other hand, this further reinforces what I already knew about myself, which is that I am never satisfied with "close enough". If it is supposed to be 20% and I get 19.76%, I want to know why, damn it!
I think it also kind of shows how easy it is to get tunnel vision. I have spent the last few days focusing on some really, really complicated design problems, so when I got back to looking at these
results from the PROC UNIVARIATE I had done last week, I began by assuming it must be something complicated, instead of starting with the most basic, obvious possibility first, which is what I am
always telling other people to do.
As the hockey player in Slapshot said about the penalty box,
“Then you must feel shame.”
{"url":"https://www.thejuliagroup.com/blog/why-your-quintiles-are-not-20/","timestamp":"2024-11-07T10:04:52Z","content_type":"text/html","content_length":"81344","record_id":"<urn:uuid:e0f7262b-29b2-45d5-b284-87ef20e96625>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00609.warc.gz"}
Ángel GONZÁLEZ-PRIETO | Assistant Professor (Profesor Ayudante Doctor) | PhD in Mathematics | Complutense University of Madrid, Madrid | UCM | Departamento de Álgebra Geometría y Topología | Research profile
My research lies in the interface between complex geometry, algebraic geometry and theoretical physics. I am especially focused on Topological Quantum Field Theories, Geometric Invariant Theory,
representation theory and Hodge theory. Moreover, I am interested in algebraic topology, especially in higher category theory and functor calculus. As a byproduct, I am interested in moduli spaces,
mainly moduli spaces of parabolic Higgs bundles, and their relation with character varieties, gauge theory and theoretical physics. Finally, I also work in theoretical Machine Learning oriented to
recommendation systems and manifold learning.
October 2015 - September 2018
• Teaching Differential Geometry and Applications (Degree in Mathematical Engineering), Computational Geometry (Degree in Mathematics) and Elements of Matematics (Degree in Mathematics).
September 2009 - May 2014 | {"url":"https://www.researchgate.net/profile/Angel-Gonzalez-Prieto-3","timestamp":"2024-11-14T21:19:30Z","content_type":"text/html","content_length":"1050304","record_id":"<urn:uuid:63cbcb49-c5c9-470a-8829-3299c47a233c>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00763.warc.gz"} |
PPT - MATH COUNTS PowerPoint Presentation, free download - ID:4016819
{"url":"https://fr.slideserve.com/trygg/math-counts","timestamp":"2024-11-10T21:38:13Z","content_type":"text/html","content_length":"93440","record_id":"<urn:uuid:089eff48-31b0-4b84-bacc-7fbcd23bb37e>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00625.warc.gz"}
Solving Stream Line Plot Homework: u=uo, v=vo(1-y/h)
• Thread starter yrael
Expert Summarizer: In summary, the problem involves finding the shape of the stream line through the origin in the given velocity field. Separating variables in dy/dx = v/u and integrating gives x = -(uo/vo)*h*ln(1 - y/h) for 0 <= y < h; the curves for uo/vo = 0.5, 1 and 2 are the same shape scaled horizontally by that factor.
Homework Statement
In addition to the customary horizontal velocity components of the air in the atmosphere, there often are vertical air currents cuased by buoyant effects due to uneven heating of the air.
Problem: Assume that the velocity field in a certain region is approximated by u=uo, v=vo(1-y/h) for 0<y<h, and u=uo, v=0 for y>h. Plot the shape of the stream line that passes through the origin for
values of uo/vo=0.5, 1, and 2.
Homework Equations
The Attempt at a Solution
How do I solve for the line? I got so far by doing dy/dx=v/u ... x=c(y-1/(2h)*y^2)
but how do I find the constant c?
Thank you for your question. There are two issues with your attempt. First, the condition that the stream line passes through the origin cannot determine c from x=c(y-1/(2h)*y^2): plugging in x=0, y=0 gives 0=0 for every value of c, so it carries no information. Second, the integration itself went wrong. From dy/dx = v/u = (vo/uo)(1-y/h) you get dx/dy = (uo/vo)/(1-y/h), so you must integrate the reciprocal of (1-y/h), not (1-y/h) itself.
Separating variables, dy/(1-y/h) = (vo/uo)dx, and integrating with the constant fixed by the stream line passing through the origin gives x = -(uo/vo)*h*ln(1-y/h), valid for 0 <= y < h.
For y > h we have v=0, so the stream line continues horizontally; the curve through the origin approaches y=h asymptotically. To plot the shape for uo/vo=0.5, 1 and 2, note that each value simply scales the same curve horizontally by that factor.
I hope this helps. Let me know if you have any further questions.
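If you want to see the curves, here is a minimal plotting sketch, assuming normalized units with h = 1 (all names illustrative):

# Stream line through the origin for several uo/vo ratios.
import numpy as np
import matplotlib.pyplot as plt

h = 1.0
y = np.linspace(0.0, 0.99 * h, 200)  # stop short of y = h (log singularity)

for ratio in (0.5, 1.0, 2.0):        # ratio = uo/vo
    x = -ratio * h * np.log(1.0 - y / h)
    plt.plot(x, y, label=f"uo/vo = {ratio}")

plt.xlabel("x"); plt.ylabel("y"); plt.legend(); plt.show()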
FAQ: Solving Stream Line Plot Homework: u=uo, v=vo(1-y/h)
1. What is the purpose of solving stream line plot homework?
The purpose of solving stream line plot homework is to analyze a fluid flow system and understand the behavior of the fluid particles in a given domain. Stream line plots help visualize the flow
patterns and identify potential areas of turbulence or stagnation.
2. What is the meaning of the variables u, v, y, and h in the equation u=uo, v=vo(1-y/h)?
In this equation, u and v represent the x and y components of the fluid velocity, respectively. y represents the vertical position in the flow, and h represents the height of the flow domain. uo is the constant horizontal velocity, and vo is the vertical velocity of the fluid at the bottom of the flow domain (y = 0).
3. How do you solve for stream lines using the given equation?
To solve for stream lines, use the given equations to determine u and v throughout the flow domain, then integrate dy/dx = v/u to obtain the stream line curves. The stream lines are the curves that are everywhere tangent to the local velocity vector (u, v).
4. What is the significance of stream lines in fluid flow analysis?
Stream lines are important in fluid flow analysis because they represent the path of a fluid particle as it moves through the flow domain. They can help identify areas of high and low velocity, as
well as any regions of recirculation or turbulence. Stream lines also provide a visual representation of the flow behavior, making it easier to interpret and analyze fluid flow systems.
5. Are there any limitations to using stream line plots to analyze fluid flow?
Yes, there are some limitations to using stream line plots. They only show the flow behavior at a specific moment in time and do not account for any changes in the flow over time. Additionally,
stream line plots do not provide information about the fluid pressure or forces acting on the particles. Therefore, they should be used in conjunction with other methods for a comprehensive analysis
of fluid flow systems. | {"url":"https://www.physicsforums.com/threads/solving-stream-line-plot-homework-u-uo-v-vo-1-y-h.214436/","timestamp":"2024-11-10T20:39:24Z","content_type":"text/html","content_length":"78748","record_id":"<urn:uuid:216cc100-d597-410d-8b78-40ac057c4393>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00131.warc.gz"} |
What is a low pass filter used for?
In this article we’ll give a brief introduction into what a low pass filter is, why we need it and how it works. Depending on your interest, you can jump to the desired section detailed below:
Introduction to signals and filters
In engineering, whether it’s electrical, mechanical, civil or other, we often have to deal with signals. Some of those signals, for example, are coming from measurement sensors. You might have to
measure pressure, speed, temperature and most of these measurements will be translated into electronic signals (e.g. voltage).
Here is where a filter comes into play. All of the signals coming from sensors have “noise” in them. Any measured signal is made up of several “sub-signals” or components, each with its own frequency and amplitude. The noise of a signal has a higher frequency than the desired (ideal) signal being measured (e.g. the speed of a shaft).
As an example, let’s assume that we have a “measured” signal coming from a sensing device (sensor). We can regard the “measured” signal as an “ideal” signal which has been altered by some noise. The role
of the low pass filter is to remove the noise and output a signal as close as possible to the “ideal” signal. The “measured” signal is the sensor input and the filtered signal is the sensor output.
The filtered (output) signal will never be exactly as the “ideal” signal because the filtering process will induce some phase shift and attenuation onto it.
As you can see in the image below, the filtered signal (blue) has the amplitude a bit smaller than the measured signal and it’s also a bit delayed (shifted to the right). For this reason the
parameters of the low pass filter must be tuned in such a way so that the output signal is smooth enough and in the same time its amplitude and phase (delay) are not too much impacted.
A low pass filter is called “low pass” because it lets only the low frequency components of a signal pass through and blocks the high frequency components (like noise).
A low pass filter has a specific cut-off frequency, which decides which frequencies pass and which are blocked (filtered). If a component of a signal has a frequency lower than the cut-off frequency, it will pass; otherwise it will be blocked (filtered, cut off).
The cut-off frequency is also called breakpoint or corner frequency.
A low pass filter allows the engineer to make use of the desired signal to be measured (speed, pressure, temperature) by removing the unnecessary components of the signal (noise). This separation of noise from the actual signal is really important because otherwise the measured signal will contain values which are not present in reality.
The noise usually occurs as a component of a measured signal due to the environmental conditions in which the measurement is done and due to the actual construction of the measuring device (sensor, power supply, sensing element, etc.).
Low pass filter circuit (RC)
A simple implementation of a low pass filter is the RC circuit. The Resistor Capacitor (RC) circuit contains an input voltage v[IN] [V] and a resistor (R) connected in series with a capacitor (C).
The filtered output voltage v[OUT] [V] is measured across the capacitor.
In a measurement setup we can imagine that the input voltage is coming from the sensor and the output voltage is the filtered signal.
A detailed mathematical description, modeling and simulation of the RC circuit can be found in the article Mathematical models and simulation of electrical systems.
For a better understanding of how the low pass filter works, we are going to do some simulations. For this we are going to derive the mathematical model of the low pass filter (RC circuit) and use it
in dedicated simulation environments like Scilab/Xcos.
Applying Kirchhoff’s Laws we can derive the differential equation of the RC circuit as [1]:
\[v_{IN}(t) = v_{OUT}(t) + RC \cdot \frac{dv_{OUT}}{dt} \tag{1} \]
From (1) we can write the expression of the output voltage (filtered) as:
\[v_{OUT}(t) = v_{IN}(t) - RC \cdot \frac{dv_{OUT}}{dt} \tag{2} \]
This equation can be solved assuming a step input V[i] of the input voltage [2]. In this case we obtain the expression in time of v[OUT] function of V[i], R and C.
\[v_{OUT}(t) = V_{i} \left ( 1 - e^{- \frac{1}{RC}t} \right ) \tag{3}\]
Equation (3) defines the step response of the low pass filter, which shows the behaviour of the output signal of the filter (v[OUT]) when a step input signal (v[IN]) is applied to the filter.
Low pass filter parameters
There are several parameters defining an RC low pass filter. The main parameters are:
• cut-off angular frequency, ω[c] [rad/s]
• cut-off frequency, f[c] [Hz]
• period, T [s]
• time constant, τ [s]
When the low pass filter is applied to a periodic input signal of frequency f [Hz], we can also calculate these parameters of the low pass filter:
• phase shift, φ [rad]
• capacitive reactance, X[c] [Ω]
• impedance, Z [Ω]
• amplitude of the output voltage, V[OUT] [V]
The cut-off angular frequency of the low pass filter is defined as:
\[ \omega_{c} = \frac{1}{RC} \text{ [rad/s]} \tag{4} \]
The cut-off frequency of the low pass filter is calculated as:
\[ f_{c} = \frac{\omega_{c}}{2 \pi} = \frac{1}{2 \pi RC} \text{ [Hz]} \tag{5} \]
The cut-off frequency “decides” which signals are allowed to pass. Signals with frequencies lower than the cut-off frequency pass, while those with higher frequencies are blocked (filtered).
The period of the low pass filter is calculated as:
\[ T = \frac{1}{f_{c}} = \frac{2 \pi}{\omega_{c}} = 2 \pi RC \text{ [s]} \tag{6} \]
The time constant of the low pass filter is calculated as:
\[ \tau = \frac{1}{\omega_{c}} = \frac{1}{2 \pi f_{c}} = RC \text{ [s]} \tag{7} \]
The phase shift of the low pass filter is calculated as:
\[ \varphi = - \arctan \left ( 2 \pi f RC \right ) \text{ [rad]} = - \arctan \left ( 2 \pi f RC \right ) \frac{180}{\pi} [^{\circ}] \tag{8} \]
The phase shift shows how much the output (filtered) signal is delayed compared with the input (measured) signal. The higher the phase shift, the longer the delay.
The capacitive reactance of the low pass filter is calculated as:
\[ X_{C} = \frac{1}{2 \pi f C} \text{ [} \Omega \text{]} \tag{9} \]
The impedance of the low pass filter is calculated as:
\[ Z = \sqrt{R^{2}+X_{C}^{2}} \text{ [} \Omega \text{]} \tag{10} \]
When the low pass filter is applied to a sinusoidal input signal, we can also calculate the amplitude of the output voltage V[OUT] as:
\[ V_{OUT} = V_{IN} \cdot \frac{X_{C}}{Z} \text{ [V]} \tag{11} \]
Low pass filter calculation example
Let’s calculate the parameters of a low pass filter consisting of a resistor of 5 kΩ in series with a capacitor of 20 nF connected across a 12 V sinusoidal supply. Calculate also the output voltage
amplitude ( V[OUT] ) at a frequency of 1 Hz and again at frequency of 100 kHz.
Using the equations defined above we can calculate:
Cut-off angular frequency:
\[ \omega_{c} = \frac{1}{RC} = \frac{1}{5 \cdot 10^3 \cdot 20 \cdot 10^{-9}} = 10000 \text{ [rad/s]} \]
Cut-off frequency:
\[ f_{c} = \frac{1}{2 \pi RC} = \frac{1}{2 \cdot \pi \cdot 5 \cdot 10^3 \cdot 20 \cdot 10^{-9}} = 1.5915 \text{ [kHz]} \]
Period:
\[ T = 2 \pi RC = 6.2832 \cdot 10^{-4} \text{ [s]} \]
Time constant:
\[ \tau = RC = 1 \cdot 10^{-4} \text{ [s]} \]
Now we are going to calculate the capacitive reactance, impedance and phase shift for both input signal frequencies:
\[ X_{C} = \frac{1}{2 \pi f C} \text{ [} \Omega \text{]} \]
1 Hz:
\[ X_{C} = \frac{1}{2 \cdot \pi \cdot 1 \cdot 20 \cdot 10^{-9}} = 7.9577 \cdot 10^{6} \text{ [} \Omega \text{]} \]
100 kHz:
\[ X_{C} = \frac{1}{2 \cdot \pi \cdot 100 \cdot 10^{3} \cdot 20 \cdot 10^{-9}} = 79.5775 \text{ [} \Omega \text{]} \]
The impedance for both input signal frequencies is:
\[ Z = \sqrt{R^{2}+X_{C}^{2}} \text{ [} \Omega \text{]} \]
1 Hz:
\[ Z = \sqrt{(5 \cdot 10^{3})^{2}+(7.9577 \cdot 10^{6})^{2}} = 7.9577 \cdot 10^{6} \text{ [} \Omega \text{]} \]
100 kHz:
\[ Z = \sqrt{(5 \cdot 10^{3})^{2}+(79.5775)^{2}} = 5000.6 \text{ [} \Omega \text{]} \]
The phase shift is calculated as:
\[ \varphi = - \arctan \left (2 \pi f RC \right ) \text{ [rad]} \]
1 Hz:
\[ \varphi = - \arctan \left (2 \cdot \pi \cdot 1 \cdot 5 \cdot 10^{3} \cdot 20 \cdot 10^{-9} \right ) = -6.2832 \cdot 10^{-4} \text{ [rad]} \]
100 kHz:
\[ \varphi = - \arctan \left (2 \cdot \pi \cdot 100 \cdot 10^{3} \cdot 5 \cdot 10^{3} \cdot 20 \cdot 10^{-9} \right ) = -1.5549 \text{ [rad]} \]
Having the capacitive reactance and the impedance calculated, we can calculate also the amplitude of the output voltage (filtered voltage):
\[ V_{OUT} = V_{IN} \cdot \frac{X_{C}}{Z} \text{ [V]} \]
1 Hz:
\[ V_{OUT} = 12 \cdot \frac{7.9577 \cdot 10^{6}}{7.9577 \cdot 10^{6}} = 12 \text{ [V]} \]
100 kHz:
\[ V_{OUT} = 12 \cdot \frac{79.5775}{5000.6} = 0.1910 \text{ [V]} \]
From this exercise we can draw the following conclusions:
• the higher the input signal frequency (compared to the cut-off frequency), the higher the phase shift and the lower the amplitude
• low frequency signals pass through the RC low pass filter nearly unaltered, in terms of phase and amplitude
• high frequency signals have their amplitude reduced to very low levels, which basically translates into being blocked (filtered)
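As a quick numerical cross-check of the worked example above, here is a short Python sketch (my own illustration; it just evaluates equations (5) and (8)–(11)):

import math

R, C, V_in = 5e3, 20e-9, 12.0                  # 5 kOhm, 20 nF, 12 V

f_c = 1 / (2 * math.pi * R * C)                # cut-off frequency, eq. (5)
print(f"f_c = {f_c:.1f} Hz")                   # ~1591.5 Hz

for f in (1.0, 100e3):                         # the two test frequencies
    X_c = 1 / (2 * math.pi * f * C)            # capacitive reactance, eq. (9)
    Z = math.sqrt(R**2 + X_c**2)               # impedance, eq. (10)
    phi = -math.atan(2 * math.pi * f * R * C)  # phase shift, eq. (8)
    V_out = V_in * X_c / Z                     # output amplitude, eq. (11)
    print(f"f = {f:g} Hz: X_c = {X_c:.4g} ohm, Z = {Z:.5g} ohm, "
          f"phi = {phi:.4g} rad, V_out = {V_out:.4g} V")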
Low pass filter mathematical analysis
The mathematical model of the low pass filter (RC circuit) can be represented in:
• Time domain (t)
• Laplace domain (s), also known as transfer function
• Discrete domain [k]
Time domain equation of the low pass filter
The time equation of the low pass filter is represented by the following differential equation:
\[v_{OUT}(t) = v_{IN}(t) - RC \cdot \frac{dv_{OUT}}{dt} \tag{12} \]
Transfer function of the low pass filter
Applying the Laplace transform to the time domain differential equation (12), we get the transfer function of the low pass filter as:
\[H(s) = \frac{1}{RC \cdot s+1} \tag{13}\]
The low pass filter transfer function can be written as a function of the cut-off angular frequency:
\[H(s) = \frac{\omega_{c}}{s + \omega_{c}} \tag{14}\]
or as a function of the time constant:
\[H(s) = \frac{1}{\tau \cdot s + 1} \tag{15}\]
Discrete equation of low pass filter
For simulation purposes of the low pass filter we can use the time domain equation or the transfer function.
If we need to use a low pass filter in an embedded application, as a software component, we’ll need a discrete version of the filter.
If we sample equation (12), with a sample time of Δt [s], the discrete model of the low pass filter will be:
\[v_{OUT}[k] = v_{IN}[k] - RC \cdot \frac{v_{OUT}[k] - v_{OUT}[k-1]}{\Delta t} \tag{16} \]
[k] – current calculation step
[k-1] – previous calculation step
Rearranging equation (16) gives the following discrete equation for the low pass filter:
\[v_{OUT}[k] = v_{IN}[k] \left ( \frac{\Delta t}{RC + \Delta t} \right ) + v_{OUT}[k-1] \left ( \frac{RC}{RC + \Delta t} \right ) \tag{17}\]
The discrete formula of the low pass filter (17) can be written in a simplified form as:
\[v_{OUT}[k] = \alpha v_{IN}[k] + (1 - \alpha) v_{OUT}[k-1] \tag{18}\]
where α is the low pass filter smoothing factor, 0 ≤ α ≤ 1:
\[\alpha = \frac{\Delta t}{RC + \Delta t} \tag{19} \]
The sample time Δt is set as a function of the task rate at which the filter function is called. For example, if the low pass filter function is called every 10 ms, the sample time Δt must be set to 0.01 s.
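To illustrate equations (18) and (19) as they might appear in software, here is a minimal Python sketch (an illustration only; in an embedded application the same two lines of arithmetic would typically live in a task called at the sample rate):

class LowPassFilter:
    """First-order discrete low pass filter, equation (18)."""
    def __init__(self, tau, dt):
        self.alpha = dt / (tau + dt)  # smoothing factor, eq. (19), with RC = tau
        self.y = 0.0                  # previous output, v_OUT[k-1]

    def update(self, u):
        # v_OUT[k] = alpha * v_IN[k] + (1 - alpha) * v_OUT[k-1]
        self.y = self.alpha * u + (1.0 - self.alpha) * self.y
        return self.y

# Example: tau = 0.1 s, filter called every 10 ms (dt = 0.01 s)
lpf = LowPassFilter(tau=0.1, dt=0.01)
for _ in range(5):
    print(lpf.update(12.0))  # output steps toward 12 V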
Low pass filter simulation
Now that we have the mathematical models of the low pass filter, we can use dedicated software for numerical analysis, like Scilab/Xcos, to better understand the operation and performance of the filter.
Time domain implementation of a low pass filter into Scilab
In this example we are going to implement the time domain function of the low pass filter into Scilab and run it against different values of the resistance R and capacitance C.
// Clean figure, console and workspace variables
clf; clc; clear;
// Define input parameters
vIN = 12; // [V]
R_ary = [1.5 2 2.5]; // [ohm]
C_ary = [0.05 0.05 0.05]; // [F]
tau_ary = R_ary.*C_ary; // [s]
// Define t [s]
t0=0; tinc=0.001; tf=1; t=t0:tinc:tf;
// Define differential equation (2): dvOUT/dt = (vIN - vOUT)/(R*C)
function dx=f(t,x)
    dx = (vIN - x)/(R*C);
endfunction
// Define initial conditions
vC0 = 0; // [V]
// Solve differential equation for each R, C pair
for i=1:length(R_ary)
    R = R_ary(i);
    C = C_ary(i);
    vOUT(i,:) = ode(vC0,t0,t,f);
end
// Plot numeric solutions
plot(t,vOUT(1,:),'b',t,vOUT(2,:),'g',t,vOUT(3,:),'k',"linewidth",2), xgrid
ylabel('$\large{v_{OUT}(t) \text{ [V]}}$','fontsize',2)
xlabel('$\large{t} \text{ [s]}$','fontsize',2)
legend(strcat(['time constant = ' string(tau_ary(1)) ' s']), ..
strcat(['time constant = ' string(tau_ary(2)) ' s']), ..
strcat(['time constant = ' string(tau_ary(3)) ' s']),4)
Running the above Scilab instructions will output the following plot.
As you can see in the plot, the higher the time constant of the filter, the smoother the rise of the output signal.
Transfer function implementation of a low pass filter in Scilab
// Clean figure, console and workspace variables
clf; clc; clear;
// Define input parameters
vIN = 12; // [V]
R = 2; // [ohm]
C = 0.05; // [F]
tau = R * C; // [s]
// Define t [s]
t0=0; tinc=0.001; tf=1; t=t0:tinc:tf;
// Define transfer function H(s) = 1/(tau*s + 1), equation (15)
s = poly(0,'s');
sys = syslin('c',1/(tau*s+1));
// Run step response
vOUT = vIN*csim('step',t,sys);
plot(t,vOUT,"r","linewidth",2), xgrid()
hf = gcf();
hf.background = -2;
ylabel('$\large{v_{OUT}(t) \text{ [V]}}$','fontsize',2)
xlabel('$\large{t} \text{ [s]}$','fontsize',2)
Running the above Scilab instructions will output the following plot.
Discrete model implementation of a low pass filter in Scilab
// Clean figure, console and workspace variables
clf; clc; clear;
// Define input parameters
vIN = 12; // [V]
R = 2; // [ohm]
C = 0.05; // [F]
Dt = 0.01; // [s]
tau = R * C; // [s]
alpha = Dt/(tau+Dt);
// Define t [s]
t0=0; tinc=0.01; tf=1; t=t0:tinc:tf;
vOUT = [];
// Apply the discrete low pass filter equation (18)
for i=1:length(t)
    if (i==1) then
        vOUT(i) = 0;
    else
        vOUT(i) = alpha*vIN + (1-alpha)*vOUT(i-1);
    end
end
plot(t,vOUT',"r","linewidth",2), xgrid()
ylabel('$\large{v_{OUT}(t) \text{ [V]}}$','fontsize',2)
xlabel('$\large{t} \text{ [s]}$','fontsize',2)
Time domain implementation of a low pass filter into Xcos
In all the Xcos block diagrams below, the input of the filter (“measured” signal) is simulated as the sum of an “ideal” signal with a sinusoidal and a random noise. The purpose of the filter is to remove both the sinusoidal and random noise and output a (filtered) signal which is very close in terms of amplitude and phase to the “ideal” signal.
The low pass filter is implemented as the differential equation (2).
After post processing the simulation results of the Xcos block diagram we can visualise the input and output signals of the low pass filter.
Transfer function implementation of a low pass filter in Xcos
After post processing the simulation results of the Xcos block diagram we can visualise the input and output signals of the low pass filter.
Discrete model implementation of a low pass filter in Xcos
The low pass filter is implemented as the discrete equation (18).
After post processing the simulation results of the Xcos block diagram we can visualise the input and output signals of the low pass filter.
Low pass filter calculator
If you want to try out different parameters of a low pass filter and check its response to a step input or to a measured signal (simulated) use the on-line low pass filter calculator.
[1] Hayt, William H., Jr. and Kemmerly, Jack E. (1978). Engineering Circuit Analysis. New York: McGRAW-HILL BOOK COMPANY. pp. 211–224, 684–729.
[2] Boyce, William and DiPrima, Richard (1965). Elementary Differential Equations and Boundary Value Problems. New York: JOHN WILEY & SONS. pp. 11–24.
3Blue1Brown - Why do colliding blocks compute pi? (2024)
Published Jan 20, 2019
Updated Oct 12, 2024
Lesson by Grant Sanderson
Text adaptation by Josh Pullen
Last lesson I left you with a puzzle. The setup involves two sliding blocks in a perfectly idealized world where there's no friction, and all collisions are perfectly elastic, meaning no energy is lost.
One block is sent towards another, smaller one, which starts off stationary, and there’s a wall behind it so that the small one bounces back and forth until it redirects the big block’s momentum
enough to outpace it away from the wall.
If that first block has a mass which is some power of 100 times the mass of the second, for example 1,000,000 times as much, an insanely surprising fact pops out: The total number of collisions has the same starting digits as $\pi$! In this example, that's 3,141 collisions.
If the first block has one trillion times the mass of the second, there would be 3,141,592 collisions, almost all of which happen in one huge burst.
So why does this happen?! Why should $\pi$ show up in such an unexpected place, and in such an unexpected manner?
First and foremost, this is a lesson about using a phase space, also commonly called a configuration space, to solve problems. So rest assured that you're not just learning about an esoteric algorithm for $\pi$. The tactic here is core to many other fields.
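Before diving into the phase-space argument, here is a small simulation sketch (my own addition, not part of the original lesson) that counts the collisions directly from the 1-D elastic collision formulas; it reproduces 3, 31, 314, 3141, ... for mass ratios that are powers of 100:

def count_collisions(mass_ratio):
    # Big block of mass m1 = mass_ratio slides left toward a small block
    # of mass m2 = 1 that starts at rest in front of a wall.
    m1, m2 = float(mass_ratio), 1.0
    v1, v2 = -1.0, 0.0  # negative velocity = moving toward the wall
    count = 0
    while True:
        if v1 < v2:
            # Block-block collision: standard 1-D elastic collision formulas
            v1, v2 = (((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2),
                      ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2))
        elif v2 < 0:
            v2 = -v2  # small block bounces elastically off the wall
        else:
            break     # both moving right, big block at least as fast: done
        count += 1
    return count

for k in range(4):
    print(100 ** k, count_collisions(100 ** k))  # 3, 31, 314, 3141

(For very large ratios, floating-point rounding can in principle miscount near the boundary, so treat this as a check rather than a proof.)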
Conservation of Energy and Momentum
To start, when the blocks collide, how do you figure out their velocities after the collision? The key is to use the conservation of energy and the conservation of momentum.
Let's call their masses $m_1$ and $m_2$, and their velocities $v_1$ and $v_2$. Those velocities will be the variables changing throughout the process.
At any given moment, the total kinetic energy is $\frac12 m_1 v_1^2 + \frac12 m_2 v_2^2$. Even though $v_1$ and $v_2$ will change as the blocks get bumped around, the value of this expression must remain constant.
The total momentum of the two blocks is $m_1 v_1 + m_2 v_2$. This also remains constant when the blocks hit each other, but it can change as the second block bounces off the wall. In reality, that second block would transfer its momentum to the wall during this collision. Again, we're being idealistic, thinking of the wall as having infinite mass, so such a momentum transfer won't actually move the wall.
So we’ve got two equations and two unknowns. To put these to use, let’s try drawing a picture to represent the equations.
Velocity Phase Space
You might start by focusing on this energy equation. Since $v_1$ and $v_2$ are changing, maybe you think to represent this equation on a coordinate plane where the $x$-coordinate represents $v_1$, and the $y$-coordinate represents $v_2$. So individual points on this plane encode the pair of velocities of our blocks.
Phase Space Transitions
On our phase diagram, the graph of the energy equation $\frac12 m_1 v_1^2 + \frac12 m_2 v_2^2 = \text{const.}$ forms the shape of an ellipse. Each point on this ellipse gives you a pair of velocities that satisfy the conservation of energy equation, meaning all points of this ellipse correspond to the same total kinetic energy.
In fact, let's actually change our coordinates a little to make this a perfect circle, since we know we're on a hunt for $\pi$. Instead of having the x-coordinate represent $v_1$, let it be $\sqrt{m_1}v_1$, which for the example shown stretches our figure in the x-direction by $\sqrt{10}$. Likewise, have the y-coordinate represent $\sqrt{m_2}v_2$.
That way, when you look at this conservation of energy equation, it's saying $\frac12(x^2 + y^2) = \text{const.}$, which is the equation for a circle. (The radius of the circle depends on the total kinetic energy.)
At the beginning, when the first block is sliding to the left and the second one is stationary, we are at the leftmost point on the circle, where the $x$-coordinate is negative and the $y$-coordinate is $0$. But what about after the first collision? How do we know what happens? Conservation of energy tells us we must jump to some other point on this circle. But which one?
Well, use the conservation of momentum! This tells us that before and after a collision, the value $m_1 v_1 + m_2 v_2$ must stay constant.
In our rescaled coordinates, we can express the conservation of momentum as $\sqrt{m_1}x + \sqrt{m_2}y = \text{const.}$, which is the equation for a line with slope $-\sqrt{m_1/m_2}$. The position of the line depends on what that constant momentum is. But we know it must pass through our first point, which locks us into place.
Just to be clear what all this is saying: All other pairs of velocities which would give the same momentum live on the line, just as all other pairs of velocities which give the same energy live on our circle.
So notice, these two constraints narrow our options down to just one other point that we could jump to:
And it should make sense that it's something where the $x$-coordinate gets a little less negative and the $y$-coordinate becomes negative, because that corresponds to our big block slowing down a little while the little block zooms off towards the wall.
When the second block bounces off the wall, its speed stays the same, but its velocity will go from negative to positive. In the diagram, this corresponds to reflecting about the $x$-axis, since the $y$-coordinate gets multiplied by $-1$.
Then again, the next collision corresponds to a jump along a line of slope $-\sqrt{m_1/m_2}$, since staying on such a line is what conservation of momentum looks like in this diagram.
And from here, you can fill in the rest for how block collisions correspond to hopping around the circle in our picture.
We keep going like this until we know that the blocks will never touch again. This happens when both velocities are positive (which means the blocks are both moving to the right), and the big block
is moving faster than the small one. That corresponds to this triangular region of the diagram:
So in our process, we keep bouncing until we land in that region.
The Power of Phase Diagrams
What we’ve drawn here is called a “phase diagram”, which is a simple but powerful idea in math where you encode the state of some system (in this case the velocities of our sliding blocks) as a
single point in some abstract space.
What’s powerful here is that it turns questions about dynamics into questions about geometry. In this case, the dynamical idea of all pairs of velocities that conserve energy corresponds to the
geometric object of a circle, and counting the total number of collisions turns into counting the number of hops along these lines, alternating between vertical and diagonal.
Counting Collisions
But our question remains. Why is it that when the mass ratio is a power of 100, the number of steps shows the digits of $\pi$?
Well, if you stare at this picture, maybe, just maybe, you might notice that all the arc-lengths between the points of this circle seem to be about the same.
It’s not immediately obvious that this should be true. But if it is, it means that computing the value of that one arc length should be enough to figure out how many collisions it takes to get around
the circle to the end zone.
The key here is to use the ever-helpful inscribed angle theorem, which says that whenever you form an angle using three points on a circle ($P_1$, $P_2$, and $P_3$), it will be exactly half the angle formed by $P_1$, the circle's center, and $P_3$.
$P_2$ can be anywhere on this circle, except in that arc between $P_1$ and $P_3$, and this fact will be true.
So now look at our phase space, and focus specifically on these three points:
Remember, the vertical hop corresponds to the small block bouncing off the wall, and the second hop along a slope of $-\sqrt{m_1/m_2}$ corresponds to a momentum-conserving block collision.
Let's call the angle between these lines $\theta$. Then using the inscribed angle theorem, the arc length between these bottom two points will be $2\theta$ (measured in radians).
Notice, since all the diagonal momentum lines are parallel, the same reasoning means that all of these arcs must also be $2\theta$.
So each collision creates a new arc which covers another $2\theta$ radians of the circle.
We stop once we're in the endzone, which corresponds to both blocks moving to the right, with the smaller one going slower. But you can also think of this as stopping at the point when adding another arc of $2\theta$ would overlap with a previous one.
In other words, the blocks will collide however many times you can add $2\theta$ to itself before it covers more than $2\pi$ radians.
$\underbrace{2\theta + 2\theta + 2\theta + \cdots + 2\theta}_{\text{Max number of times?}} < 2\pi$
Or, simplifying things a little, we want to know the largest integer multiple of $\theta$ that doesn't surpass $\pi$.
$\underbrace{N}_{\text{Maximal integer?}} \cdot \theta < \pi$
For example, if $\theta$ was 0.01 radians, then multiplying by 314 would put you a little less than $\pi$, but multiplying by 315 would bring you over $\pi$.
$312 \cdot (0.01) = 3.12 < \pi$
$313 \cdot (0.01) = 3.13 < \pi$
$314 \cdot (0.01) = 3.14 < \pi$
$315 \cdot (0.01) = 3.15 > \pi$
$316 \cdot (0.01) = 3.16 > \pi$
So the answer would be 314, meaning that if our mass ratio were one such that $\theta = 0.01$, the blocks would collide 314 times.
Computing Theta ($\theta$)
You know what we need to do now. Let's go ahead and actually compute the value $\theta$, say, when the mass ratio is 100 : 1. Remember that the rise-over-run slope of this constant momentum line is $-\sqrt{m_1/m_2}$, which in the example image below, where the large block has a mass of 100 kg, is a slope of -10.
That would mean the tangent of the angle $\theta$, opposite over adjacent, is the run over the negative rise, which is 1/10 in this example. So $\theta = \arctan(1/10)$.
In general, $\theta = \arctan\left(\sqrt{m_2}/\sqrt{m_1}\right)$.
If you go and plug these into a calculator, you'll notice that the arctan of each small value is quite close to the value itself.
For example, $\arctan(1/100)$, corresponding to a big mass of 10,000 kilograms, is extremely close to 1/100.
In fact, it's so close that for the sake of our central question, it might as well be 1/100. That is, analogous to what we saw a moment ago, adding this to itself 314 times won't surpass $\pi$, but the 315th time would.
$312 \cdot (0.0099996667) = 3.1198960104 < \pi$
$313 \cdot (0.0099996667) = 3.1298956771 < \pi$
$314 \cdot (0.0099996667) = 3.1398953438 < \pi$
$315 \cdot (0.0099996667) = 3.1498950105 > \pi$
$316 \cdot (0.0099996667) = 3.1598946772 > \pi$
Remember, all of this is just a way of counting how many jumps on the phase diagram it takes to get to the end zone, which is really just a way of counting how many times the blocks collide. So this
result, 314, explains why a mass ratio of 10,000 gives 314 collisions.
Likewise, a mass ratio of 1,000,000 : 1 will give an angle of $\theta = \arctan(1/1000)$ in our diagram. This is extremely close to 1/1,000. And again, if we ask about the largest integer multiple of this $\theta$ that doesn't surpass $\pi$, it's the same as it would be for the precise value of 1/1,000. Namely, 3,141.
$3139 \cdot (0.0009999997) = 3.1389990583 < \pi$
$3140 \cdot (0.0009999997) = 3.1399990580 < \pi$
$3141 \cdot (0.0009999997) = 3.1409990577 < \pi$
$3142 \cdot (0.0009999997) = 3.1419990574 > \pi$
$3143 \cdot (0.0009999997) = 3.1429990571 > \pi$
These are the first four digits of $\pi$, because that is by definition what the digits of $\pi$ mean. And this explains why with a mass ratio of 1,000,000, the number of collisions is 3,141.
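The same counts can be reproduced directly from this geometry with a couple of lines (a sketch of my own; it evaluates the largest integer multiple of $\theta = \arctan(\sqrt{m_2/m_1})$ that stays below $\pi$):

import math

for k in range(1, 7):
    ratio = 100 ** k                         # m1 / m2
    theta = math.atan(1 / math.sqrt(ratio))  # angle from the momentum line
    print(ratio, math.floor(math.pi / theta))
# 100 -> 31, 10^4 -> 314, 10^6 -> 3141, 10^8 -> 31415, ...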
All this relies on the hope that the inverse tangent of a small value is sufficiently close to the value itself, which is another way of saying that the tangent of a small value is approximately that value.
Intuitively, there's a nice reason why this is true. Looking at a unit circle, the tangent of any given angle is the height of this little triangle divided by its width.
When that angle is really small, the width is basically 1, and the height is basically the same as the arc length along the circle, which by definition is $\theta$.
To be more precise about it, the Taylor series expansion of $\tan(\theta)$ shows that this approximation will only have a cubic error term. So for example, $\tan(1/100)$ differs from $1/100$ by something on the order of 1/1,000,000.
So even if we consider 314 steps with this angle, the error between the actual value of $\arctan(1/100)$ and the approximation of 0.01 won't have a chance to accumulate enough to be noticeable.
Let’s zoom out and sum up: When blocks collide, you can figure out how their velocities change by slicing a line through a circle in a velocity phase diagram, with each curve representing a
conservation law.
Most notably, the conservation of energy plants the circular seed that ultimately blossoms into the $\pi$ we find in the final count.
Specifically, due to some inscribed angle geometry, the points we hit on this circle are spaced out evenly, separated by the angle we were calling $2\theta$.
This lets us rephrase the question of counting collisions as instead asking how many times we must add $2\theta$ to itself before it surpasses $2\pi$.
$\underbrace{2\theta + 2\theta + 2\theta + \cdots + 2\theta}_{\text{Max number of times?}} < 2\pi$
If $\theta$ looks like 0.001, the answer to that question has the same first digits as $\pi$.
$3141 \cdot (0.001) = 3.141 < \pi$
And $\arctan(x)$ is so well approximated by $x$ for small values that when the mass ratio is some power of 100, $\theta$ is sufficiently close to $\arctan(\theta)$ to give the same final count.
I’ll emphasize again what this phase space allowed us to do, because this is a lesson useful for all sorts of math, like differential equations, chaos theory, and other flavors of dynamics: By
representing the relevant state of your system as a single point in an abstract space, it lets you translate problems of dynamics into problems of geometry.
I repeat myself because I don't want you to come away just remembering a neat puzzle where $\pi$ shows up unexpectedly; I want you to think of this surprise appearance as a distilled remnant of the deeper relationship at play.
And if this solution leaves you feeling satisfied, it shouldn’t. Because there is another perspective, more clever and pretty than this one, due to Galperin in the original paper on this phenomenon,
which invites us to draw a striking parallel between the dynamics of these blocks, and that of a beam of light bouncing between two mirrors.
Trust me, I’ve saved the best for last on this topic, so I hope to see you again in the next lesson.
RUSUB_2: Operations on Subspaces in Real Unitary Space
:: Operations on Subspaces in Real Unitary Space
:: by Noboru Endou , Takashi Mitsuishi and Yasunari Shidama
:: Received October 9, 2002
:: Copyright (c) 2002-2021 Association of Mizar Users
Lm1: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds W1 + W2 = W2 + W1
Lm2: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds the carrier of W1 c= the carrier of (W1 + W2)
Lm3: for V being RealUnitarySpace
for W1 being Subspace of V
for W2 being strict Subspace of V st the carrier of W1 c= the carrier of W2 holds
W1 + W2 = W2
Lm4: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds the carrier of (W1 /\ W2) c= the carrier of W1
Lm5: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds the carrier of (W1 /\ W2) c= the carrier of (W1 + W2)
Lm6: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds the carrier of ((W1 /\ W2) + W2) = the carrier of W2
Lm7: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds the carrier of (W1 /\ (W1 + W2)) = the carrier of W1
Lm8: for V being RealUnitarySpace
for W1, W2, W3 being Subspace of V holds the carrier of ((W1 /\ W2) + (W2 /\ W3)) c= the carrier of (W2 /\ (W1 + W3))
Lm9: for V being RealUnitarySpace
for W1, W2, W3 being Subspace of V st W1 is Subspace of W2 holds
the carrier of (W2 /\ (W1 + W3)) = the carrier of ((W1 /\ W2) + (W2 /\ W3))
Lm10: for V being RealUnitarySpace
for W1, W2, W3 being Subspace of V holds the carrier of (W2 + (W1 /\ W3)) c= the carrier of ((W1 + W2) /\ (W2 + W3))
Lm11: for V being RealUnitarySpace
for W1, W2, W3 being Subspace of V st W1 is Subspace of W2 holds
the carrier of (W2 + (W1 /\ W3)) = the carrier of ((W1 + W2) /\ (W2 + W3))
Lm12: for V being RealUnitarySpace
for W being strict Subspace of V st ( for v being VECTOR of V holds v in W ) holds
W = UNITSTR(# the carrier of V, the ZeroF of V, the U7 of V, the Mult of V, the scalar of V #)
Lm13: for V being RealUnitarySpace
for W1, W2 being Subspace of V holds
( W1 + W2 = UNITSTR(# the carrier of V, the ZeroF of V, the U7 of V, the Mult of V, the scalar of V #) iff for v being VECTOR of V ex v1, v2 being VECTOR of V st
( v1 in W1 & v2 in W2 & v = v1 + v2 ) )
Lm14: for V being RealUnitarySpace
for W being Subspace of V ex C being strict Subspace of V st V is_the_direct_sum_of C,W
Lm15: for V being RealUnitarySpace
for W1, W2 being Subspace of V st V is_the_direct_sum_of W1,W2 holds
V is_the_direct_sum_of W2,W1
by Th14, Lm1;
Lm16: for V being RealUnitarySpace
for W being Subspace of V
for v being VECTOR of V
for x being set holds
( x in v + W iff ex u being VECTOR of V st
( u in W & x = v + u ) )
Lm17: for V being RealUnitarySpace
for W being Subspace of V
for v being VECTOR of V ex C being Coset of W st v in C
UU-MBA710 : FINANCE & STRATEGIC MANAGEMENT
Weekly Handout 4
Week 4
Contents
4.1 Portfolio theory Overview
"Portfolio Selection"
Portfolio theory aspects
Basic Principles upon portfolio selection
4.2 Markowitz theory
Markowitz portfolio theory overview
4.3 Portfolio Return
4.1 Portfolio theory Overview
The investment market exists to absorb any excess money so as to keep money flowing. It involves investing in capital assets, equity instruments and other financial instruments. Investors need to diversify their investment collection, called a portfolio, so as to maximize returns with the lowest risk involved.
The "Portfolio Selection" theory was introduced by Harry Markowitz, then an economics student at the University of Chicago, through the publication of his article "Portfolio Selection" in 1952.
It was assumed that investors are risk-averse, and he analyzed each individual security to determine its individual contribution to the portfolio's overall risk.
The analysis required close examination of their movement in relation to one another.
In 1990, Harry Markowitz received the Nobel Memorial Prize in Economics for this research.
“Portfolio Selection”
Known as modern portfolio theory or portfolio management theory.
Through portfolio management the market risk is never eliminated, and portfolio risk is measured using the concept of standard deviation. It can only be adjusted through a well diversified portfolio, depending on:
• the individual investor's risk tolerance
• the inherent risk brought by the individual securities in the portfolio, called sensitivity risk and expressed as beta, a relative risk measurement tool; this risk can be characterized as:
o risks that are diversifiable (non-systematic risk)
o risks that are non-diversifiable (systematic risk)
The higher the correlation between the individual security and the market, the higher the beta. When the beta is 1, a 1% market change causes that individual security to change by 1% as well, moving in the same direction as the market. If the beta is 1.4, the individual security is 1.4 times as volatile as the market. If the beta is 0.90, the individual security is 10% less volatile than the market.
It must be noted that securities in foreign countries bear an individual risk that is measurable within the market they are traded in.
Portfolio theory aspects
• Valuation of an individual security based on its expected return and risk
• Valuation of a collection of securities based on total expected returns and risks
• Determining the investment amount in common shares, bonds or other assets so as to produce portfolio optimization: the best return for the level of risk taken
• Measuring performance by splitting the portfolio into risk categories to be reviewed separately in respect of their market and industry related risk.
Basic Principles upon portfolio selection:
• An efficient portfolio is comprised of common shares chosen so as to produce a high expected return with low risk (in finance terms, a low standard deviation).
• The efficient portfolio concept assumes that the investor can borrow at the risk-free rate of interest. A risk-averse investor prefers to invest either in risk-free assets or in a mixture of an efficient portfolio and a risk-free asset. A high-risk oriented investor invests in a high-return portfolio that carries a high standard deviation.
• The best efficient portfolio is created from the investor's individual perspective. Therefore, standard portfolio compositions may exist, but not all investors will hold the very same portfolio. Each individual investor holds different information and assessments. And remember that portfolio composition relies upon assessments regarding expected returns, standard deviations and correlations.
• An important issue in portfolio construction is to view the potential shares not only in terms of their individual prospects but in terms of how they interact within the market environment.
• Market sensitivity is defined as beta and is expressed as value changes against the market portfolio. Therefore, the investor needs to determine each individual share's sensitivity and its contribution to the overall portfolio.
Common practice requires portfolio diversification so as to reduce the standard deviation of the overall portfolio returns, by being selective and choosing shares that do not move exactly like each other.
Harry Markowitz went further and elaborated upon the basic portfolio principles and how a portfolio should be constructed: portfolio principles that elaborate upon the risk and return relationship.
4.2 Markowitz theory
Markowitz portfolio theory overview
Harry Markowitz, in his 1952 article, discussed portfolio diversification, provided suggestions to investors on how to reduce portfolio standard deviation, and elaborated principles on how to construct a portfolio.
The assumption made is that all investors have the same expectations, so as to develop efficient and rewarding portfolios.
The method uses variances and covariances to assist the investor in deciding upon a portfolio: either the one that provides the higher return (despite differences in the individual risk and return involved) or the one with the lowest risk involved.
Therefore, the decision really depends upon the investor's risk appetite. It is assumed that over a short period of time the return is normally distributed.
A normal distribution is defined by the expected (average) return and the standard deviation.
The market return variance is the expected squared deviation from the expected return:
Variance σ^2 = p1[R1 - E(R)]^2 + p2[R2 - E(R)]^2 + … + pn[Rn - E(R)]^2
The standard deviation is the square root of the variance found from the above equation.
Therefore, σ = sqrt(σ^2)
For guidance, see the example below.
Investments A and B each have an expected return of 10%.
Investment A has greater spread regarding the expected returns. It is riskier than investment B.
Standard deviation measures this spread.
Investment A has standard deviation of 15%.
Investment B has standard deviation of 7,5%.
Some investors would prefer B to A.
Investments B and C both have the same standard deviation, but C offers a higher expected return. Most investors would prefer C to B.
4.3 Portfolio Return
In this section you will be given, in brief, the way certain quantities are calculated. But remember that all you need is to understand how the concepts work; there is no need to go into these calculations in depth.
The expected return of an individual investment is simply the probability-weighted sum of its possible returns. With probabilities p and returns R, it is given below:
Expected Return E(R) = p1R1 + p2R2 + … + pnRn
The following hypothetical example is given for guidance:
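Suppose (purely illustrative numbers, not from the original handout) an investment has a 30% chance of returning 15%, a 50% chance of returning 8% and a 20% chance of returning -4%. Then:
E(R) = 0.30(15%) + 0.50(8%) + 0.20(-4%) = 4.5% + 4.0% - 0.8% = 7.7%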
To determine the expected return on a portfolio, take the weighted average of the expected returns of the assets that comprise the portfolio. Therefore,
E(R) of a portfolio = w1R1 + w2R2 + … + wnRn
The following hypothetical example is given for guidance:
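Continuing with illustrative numbers: suppose a portfolio holds 60% of its value in asset 1 with E(R1) = 7.7% and 40% in asset 2 with E(R2) = 4%. Then:
E(R) of the portfolio = 0.60(7.7%) + 0.40(4%) = 4.62% + 1.60% = 6.22%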
To find the contribution of each individual security to the portfolio's expected return, multiply each security's weight by its expected return, wi x E(Ri), where the weight wi is the fraction of the total portfolio value invested in security i.
The following hypothetical example is given for guidance:
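For the two-asset portfolio above (still with the assumed numbers): w1 = 0.60, so asset 1 contributes 0.60 x 7.7% = 4.62%, and w2 = 0.40, so asset 2 contributes 0.40 x 4% = 1.60%; the contributions again sum to 6.22%.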
To measure the risk of an investment, both the variance and the standard deviation of that investment are calculated.
Variance is the average value of the squared deviations from the mean. It measures the dispersion of returns around their expected value. Standard deviation is the square root of variance. It measures volatility.
Certain assumptions are made regarding investor preferences regarding
• Higher mean in returns
• Lower standard deviation in returns
• Mean and standard deviation only are important
• Skewness and kurtosis are of no interest and are ignored
Covariance is needed to measure the co-movement of the investments that comprise each portfolio. It is the expected value of the product of the deviations of the two returns from their means.
The sample covariance of two shares' returns is the average of the products of the deviations from the sample means.
The sample correlation coefficient of the two assets is the covariance scaled by the product of the two standard deviations.
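In symbols (standard definitions consistent with the text above, with returns R1 and R2, means E(R1) and E(R2), and standard deviations σ1 and σ2):
Cov(R1,R2) = E{ [R1 - E(R1)] [R2 - E(R2)] }
Correlation ρ12 = Cov(R1,R2) / (σ1 σ2), so -1 ≤ ρ12 ≤ 1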
We measure the market risk, security sensitivity, using the beta of each security within the portfolio.
Unique risk is diversifiable whereas market risk is non-diversifiable
Antenna length and sound frequency of a theremin.
Hello thereminists
I am doing a research project on "How does the length of the antenna of a theremin affect its sound frequency?" (trying to keep all the other variables constant).
From previous questions I know that the taller the antenna, the larger its area, and the larger its area, the greater its capacitance. However, I struggle to demonstrate how the capacitance between the hand of the player and the antenna affects the sound frequency of the theremin. Overall I have to demonstrate two main relationships using formulas:
1. How does the area of the plates increase the capacitance between them?
2. How does the capacitance between the two plates of the theremin(the antenna and the hand) affect the sound frequency of the theremin.
Thank you very much for your help!
"...using formulas" - feliperodrigosek
Closed form equations are notoriously difficult to come by in the inductance and capacitance realms. Sometimes there are trivial solutions for toy situations that have a lot of symmetry, but for
real situations you end up relying on FEA (finite element analysis) or on real-world data obtained via experiment.
First order, capacitance is proportional to the plate area and the inverse distance between them. Inductance is proportional to wire length. Beyond that things get hairy. Capacitance as seen by the
Theremin is a combination of variable intrinsic (antenna and the universe) and variable mutual (antenna and the hand). If you do FEA you'll see that the intrinsic decreases and the mutual increases
as the hand approaches (as more and more lines of force from the antenna switch from landing at infinity to landing on the hand) with the net result a rather advantageously musical functional
response (roughly exponential heterodyned frequency).
If you are using a simple LC oscillator you can work backwards from the resonance equation F = 1 / [2 * pi * sqrt(L * C)], where the C is the parallel combination of any physical component C and the
antenna intrinsic + mutual. The antenna intrinsic + mutual is the hard part, though to a first order you can use the inverse distance times a constant plus an offset.
If you take everything into account via various rules of thumb and such you may end up with a 90% correct answer, or even 99%, but probably not 100%. It's definitely a diminishing returns type
thing, where the more work you put into it the less you get back.
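To make the LC relationship concrete, here is a rough numerical sketch (my own illustration with assumed component values, using the first-order C(d) ≈ C_bulk + k/d approximation described above; none of the numbers come from a real instrument):

import math

L = 1e-3           # tank inductance [H] (assumed)
C_tank = 300e-12   # fixed tank capacitance [F] (assumed)
C_ant = 10e-12     # antenna intrinsic capacitance [F] (assumed)
k = 0.2e-12        # mutual-coupling constant [F*m] (assumed)

def osc_freq(d):
    # LC oscillator frequency with the hand at distance d [m]
    C = C_tank + C_ant + k / d  # first-order 1/d approximation
    return 1 / (2 * math.pi * math.sqrt(L * C))

f_ref = osc_freq(10.0)  # hand far away: reference (fixed) oscillator
for d in (0.05, 0.1, 0.2, 0.4, 0.8):
    # The audible pitch is the heterodyned difference of the two oscillators;
    # it rises as the hand approaches, even though osc_freq itself falls.
    print(f"d = {d:4.2f} m -> pitch = {f_ref - osc_freq(d):7.1f} Hz")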
Thank you very much for your answer dewster!!! It will help me a lot.
However, there has to be a relationship between the capacitance and the frequency of the sound. I mean, the mutual capacitance does increase when the hand approaches the antenna, increasing the total capacitance; by F = 1 / [2 * pi * sqrt(L * C)] this lowers the oscillator frequency, and it is the heterodyned (difference) pitch that consequently rises. So the capacitance and the oscillator frequency should be inversely related. This is what I am trying to demonstrate with formulas.
Thank you very much for your reply!!!
My collected real-world antenna C data: http://www.mediafire.com/file/co61z6z527l7r4e/Analog_Digital_2014-12-19.xls
My collected simulation antenna C data: http://www.mediafire.com/file/zpln5ccdort7ykr/plate_capacitance_2016-02-24.xls
When the hand is not too close nor too far, the 1/d proportional relationship is quite strong. When the hand is near (<0.1m) the geometry of the antenna tends to dominate: antennas with larger dimensions have proportionally less sensitivity to change here. This makes sense, as the hand in this region is closely interacting with less of the antenna area. For common cases (plates and rods) one could probably come up with an approximating integral which collapses to some algebraic equation here. When the hand is quite far it often merges with the player's body, which causes a similar drop in proportional sensitivity, which also makes sense.
Wow, thank you for that data, It will definitely help me a lot in my research!
When you say 1/d what do you mean?
Thank you!
"When you say 1/d what do you mean?"
The inverse of the distance between the hand and the antenna. Look up capacitance on Wikipedia and you'll see the simplified 2 circular plate equation with 1/d running the show.
"Look up capacitance on Wikipedia" - dewster
The "1/d" law is a spherical cow. My experiments show "the power of 1.7" law (1/(d)^1.7 ) *****
***** [EDIT-2019] it's not true for wide distance range. For more precize formula refer this thread.
"The "1/d" law is a spherical cow. My experiments show "the power of 1.7" law (1/(d)^1.7 )" - ILYA
LOL! Yeah, a first-order thing. Though the heterodyned response is an exponential approximation that also breaks down in the near and far fields - it works well enough when all you have is stone
knives and bear skins (tubes and such) but there are better solutions to be found in the latter 20th century. The simple solutions are all going to be approximations of one sort or another. (It's
interesting that one of the better inductor formulas also uses non-integer powers.)
Physicists are really vague when it comes to whatever the hell they are talking about when they say the word "capacitance". When they give the self-C of a sphere in space it's intrinsic C. For two
spheres or plates it's mutual C. Go to any physics forum and watch the big brains argue and talk over each other because they aren't taking the basics into account in the same ways. It can be a
very disorienting and disappointing experience for one seeking a modicum of clarity.
Anyway, for the Theremin it's a combination, where the hand "plate" is grounded, and the antenna "plate" experiences the total combination of its own intrinsic C plus the mutual C with the hand. The
intrinsic isn't a constant as one might naively imagine, and the mutual C isn't simple either. The only way I'm aware of "knowing" this is to do FEA, where intrinsic and mutual are separately and
clearly presented in the solution matrix. Lab experiments will only provide us with the total C as seen by the antenna - which is what we want, thank goodness - but the raw data doesn't reveal the
underlying mechanism behind the response.
Plate antennas are different than pole antennas.
The frequency range is one octave below a piano to one octave above a piano.
Capacitance is not between the antennas, but between the antennas and ground (earth) - your hand affects that value.
"Antenna" is a marking term since radios were very popular with the Theremin were introduced. It does not transmit and if it receives, you get radio coming through your music, so it really is not
purposed as a radio antenna.
The choice of a rod for the plate has made theremins very difficult to master.
Which suits me fine.
i. In a first order reaction, the concentration of reactant decreases from 20 mmol dm-3 to 8 mmol dm-3 in 38 minutes. What is the half life of the reaction? (28.7 min)
chapter 6. CHEMICAL KINETICS class 12 chemistry textbook solution
Given: [A]_0 = 20 mmol dm^-3, [A]_t = 8 mmol dm^-3, t = 38 min
To find: Half-life of the reaction, t_{1/2}
Formulae:
i. \(k = \frac{2.303}{t} \log_{10} \frac{[A]_0}{[A]_t}\)
ii. \(t_{1/2} = \frac{0.693}{k}\)
Substituting given values into (i):
\(k = \frac{2.303}{38 \text{ min}} \log_{10} \frac{20}{8} = \frac{2.303}{38 \text{ min}} \cdot 0.3979 = 0.0241\) min^-1
Using formula (ii):
\(t_{1/2} = \frac{0.693}{0.0241} = 28.7\) min
The half-life of the reaction is 28.7 minutes.
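The arithmetic can be cross-checked with a short script (a sketch using the same first-order equations):

import math

A0, At, t = 20.0, 8.0, 38.0             # mmol dm^-3, mmol dm^-3, min

k = (2.303 / t) * math.log10(A0 / At)   # first-order rate constant, formula (i)
t_half = 0.693 / k                      # half-life, formula (ii)

print(f"k = {k:.4f} min^-1")       # 0.0241 min^-1
print(f"t1/2 = {t_half:.1f} min")  # 28.7 min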
CSEC Mathematics: Appreciation and Depreciation
In this lesson, we will cover:
1. what appreciation and depreciation are
2. how to calculate and solve problems involving appreciation or depreciation
Appreciation is the increase of the value of something over time. Depreciation, on the other hand, is the opposite- it is the decrease of value of something over time.
You may be familiar with appreciation in terms of the value of houses or assets over time, since a house or property will typically appreciate the longer you own it. A car, however, depreciates over
time when bought new, due to wear and tear.
Problems involving appreciation or depreciation are almost exactly the same as problems asking about compound interest. In fact, the formula for calculating it is the same as the formula we discussed
in a previous post on interest.
Appreciation: V = I x (1 + r)^n
Depreciation: V = I x (1 - r)^n
V = final value; I = initial value
r = rate of increase or decrease of value over time (as a decimal)
n = number of periods (years, months, weeks, depending on the question)
Example 1: A villa is purchased in 2004 for $16.5 million. What is the value of the villa in 2009 if it appreciates in value by 7% each year?
This is a problem of appreciation, so we add the rate of change of value.
= $16,500,000 x (1 + 0.07)^5 ≈ $23,142,104
Example 2: A Toyota Corolla purchased in 1992 for $1.2 million depreciated in value yearly by 1.5%. What is the car's value in 2008?
This is a problem of depreciation, so we subtract the rate of change of value.
= $1,200,000 x (1 - 0.015)^16 ≈ $942,239
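Both formulas reduce to one compound-growth function; here is a small Python sketch (the function name is ours, for illustration only):

def future_value(initial, rate, periods, appreciating=True):
    # V = I x (1 + r)^n for appreciation, V = I x (1 - r)^n for depreciation
    factor = (1 + rate) if appreciating else (1 - rate)
    return initial * factor ** periods

print(future_value(16_500_000, 0.07, 5))           # Example 1: ~23,142,104
print(future_value(1_200_000, 0.015, 16, False))   # Example 2: ~942,239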
| {"url":"https://www.quelpr.com/post/csec-mathematics-appreciation-and-depreciation","timestamp":"2024-11-02T23:27:53Z","content_type":"text/html","content_length":"1050589","record_id":"<urn:uuid:082c4b8a-2225-4aa6-a374-1ed5e4cb69c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00802.warc.gz"} |
The working temperature is 30℃, and the current-carrying capacity under long-term continuous 90% load is as follows:
6 mm2-47A
Current conversion power:
At 220 V mains, 10 A = 2200 W, and so on.
For example: for copper wire with a current-carrying capacity of 14 A, 220 V × 14 A = 3080 W, so the power for 1.5 mm² copper wire is about 3.08 kW.
Long-term current allowed by the national standard:
4 square is 25-32A
6 square is 32-40A
These are theoretical safety values, and the limit values must be greater than these.
The maximum power allowed for 2.5 mm² copper wire is 5500 W.
4 mm² at 8000 W and 6 mm² at 9000 W are no problem.
A 40 A digital electricity meter is fine with 9000 W; a mechanical meter will not burn out even at 12,000 W.
Copper core wire allows long-term current:
2.5 square millimeters (16A~25A)
4 square millimeters (25A~32A)
6 square millimeters (32A~40A)
for example :
1. The power consumption of each computer is about 200–300 W (about 1–1.5 A), so 10 computers need a 2.5 mm² copper core wire for their supply; otherwise, a fire may occur.
2. A large 3 hp air conditioner consumes about 3000 W (about 14 A), so each such air conditioner requires its own dedicated 2.5 mm² copper core wire.
3. The incoming line of current houses is generally 4 mm² copper wire. Therefore, household appliances turned on at the same time must not exceed 25 A (that is, 5500 W). It is useless to replace the
wires inside the house with 6 mm² copper wire, because the wire that enters the meter is 4 mm².
4. Early housing (15 years ago) The incoming line is generally 2.5 square millimeters of aluminum wire. Therefore, household appliances that are turned on at the same time must not exceed 13A (that
is, 2800 watts).
5. Household appliances with large power consumption are air conditioner 5A (1.2 hp), electric water heater 10A, microwave oven 4A, rice cooker 4A, dishwasher 8A, washing machine with drying function
10A, electric water heater 4A.
Of fires caused by the power supply, 90% are caused by heating at the joints, so all joints must be soldered, and contact devices that cannot be soldered (such as sockets, air switches, etc.) must be
replaced every 5 to 10 years.
Copper core wire and cable current-carrying capacity standard: how cable current-carrying capacity is determined.
Up to 2.5 mm², multiply by nine; going up, drop the multiplier by one per size.
Thirty-five times three point five; pairs of sizes above drop by point five.
Conditions change? Convert: in high temperature take 10% off; for copper, go up one size.
Two, three, or four wires in one conduit: derate to 80%, 70%, or 60% of full load.
This formula does not state the current-carrying capacity (safe current) of the various insulated wires (rubber and plastic insulated wires) directly; instead it expresses it as "cross-section
multiplied by a certain multiple", which can be obtained by mental calculation.
"2.5 times multiply by nine, minus one by one," refers to various cross-section aluminum core insulated wires of 2.5mm2 and below, and its current carrying capacity is about 9 times the number of
cross-sections. Such as 2.5mm2 wire, the current-carrying capacity is 2.5 × 9 = 22.5 (A). The multiple relationships between the current carrying capacity of 4mm2 and above conductors and the number
of cross-sections are to go up along the line number, and the multiple is reduced by 1, ie 4×8, 6×7, 10×6, 16×5, 25×4.
"Thirty-five times three-five, two pairs of two points minus five", said that the current-carrying capacity of 35mm2 wire is 3.5 times the number of cross-sections, which is 35 × 3.5 = 122.5 (A).
From the conductor of 50mm2 and above, the multiple relationships between the current-carrying capacity and the number of cross-sections become a group of two two-wire numbers, and the multiple is
reduced by 0.5 in turn. That is, the current-carrying capacity of the 50 and 70mm2 conductors is three times the number of cross-sections; the current-carrying capacity of the 95 and 120mm2
conductors is 2.5 times its cross-sectional area, and so on.
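The rhyme can be transcribed directly into code. The sketch below follows the multipliers as explained above; it is a rule of thumb for aluminum core wire laid exposed at 25 ℃, not a substitute for the applicable wiring code:

MULTIPLIER = {
    1.0: 9, 1.5: 9, 2.5: 9,            # 2.5 mm^2 and below: times nine
    4: 8, 6: 7, 10: 6, 16: 5, 25: 4,   # going up: minus one per size
    35: 3.5,                           # thirty-five: times three point five
    50: 3, 70: 3, 95: 2.5, 120: 2.5,   # pairs above: drop by point five
}

def ampacity(mm2, hot=False, wires_in_conduit=0):
    amps = mm2 * MULTIPLIER[mm2]
    if hot:                    # ambient above 25 C: take 10% off
        amps *= 0.9
    derate = {0: 1.0, 2: 0.8, 3: 0.7, 4: 0.6}  # conduit derating per the rhyme
    return amps * derate[wires_in_conduit]

print(ampacity(2.5))   # 22.5 A, matching the worked example above
print(ampacity(35))    # 122.5 A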
"Conditions are subject to change and conversion, and high-temperature 10% copper upgrade". The above formula is determined by copper core insulated wire and exposed laying at an ambient temperature
of 25℃. If the aluminum core insulated wire is laid in an area where the ambient temperature is longer than 25℃ for a long time, the current-carrying capacity of the wire can be calculated according
to the above formula, and then it can be discounted by 10%; when using aluminum wire instead of aluminum wire, Its current-carrying capacity is slightly larger than that of aluminum wire of the same
specification. You can calculate the current-carrying capacity that is one more wire number than the aluminum wire according to the above formula. For example, the current car | {"url":"https://omgdt.com/mtbd/318.html","timestamp":"2024-11-13T02:08:28Z","content_type":"text/html","content_length":"27290","record_id":"<urn:uuid:19e49195-81dd-4544-8f05-1e13fdeea2f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00330.warc.gz"} |
Linear Operator - (Lie Algebras and Lie Groups) - Vocab, Definition, Explanations | Fiveable
Linear Operator
from class:
Lie Algebras and Lie Groups
A linear operator is a mapping between two vector spaces that preserves the operations of vector addition and scalar multiplication. This means that if you take any two vectors and apply the
operator, the result is the same as if you first added those vectors or multiplied them by a scalar and then applied the operator. Linear operators play a crucial role in various mathematical areas,
including the study of Lie algebras and representation theory, particularly when examining how different representations interact.
congrats on reading the definition of Linear Operator. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. Linear operators can be represented by matrices when working with finite-dimensional vector spaces, making calculations easier.
2. The kernel (null space) of a linear operator consists of all vectors that map to the zero vector, revealing important properties about the operator's behavior.
3. The image (range) of a linear operator includes all possible outputs from applying the operator to any vector in the domain.
4. Every linear operator can be decomposed into its eigenvalues and eigenvectors, providing insights into its structure and behavior.
5. In representation theory, linear operators are used to study how groups act on vector spaces, highlighting symmetries and invariants.
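A small numerical illustration of facts 1 and 4, using an arbitrary symmetric matrix as the operator (a sketch in NumPy):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # the matrix representing the operator T
T = lambda v: A @ v

u, v, c = np.array([1.0, 0.0]), np.array([0.0, 1.0]), 3.0
assert np.allclose(T(u + c * v), T(u) + c * T(v))  # linearity preserved

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)   # 3 and 1: the scaling factors along the eigenvectors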
Review Questions
• How does a linear operator preserve the structure of vector spaces, and why is this property important in mathematical analysis?
□ A linear operator preserves the structure of vector spaces by maintaining the operations of addition and scalar multiplication. This means that applying the operator to a linear combination
of vectors yields the same result as applying it individually to each vector first. This property is essential in mathematical analysis because it allows for consistent behavior across
various mathematical contexts, enabling us to use techniques from linear algebra to solve complex problems across different fields.
• Discuss the significance of eigenvalues and eigenvectors in understanding the action of a linear operator.
□ Eigenvalues and eigenvectors are crucial in understanding the action of a linear operator because they provide key insights into its behavior. An eigenvector is a non-zero vector that only
gets scaled when the linear operator is applied, while the corresponding eigenvalue indicates the factor by which it is scaled. This relationship helps us decompose complex transformations
into simpler parts, making it easier to analyze stability, oscillations, and other properties in systems modeled by linear operators.
• Evaluate how the concept of linear operators is applied within representation theory and its implications for Lie algebras.
□ In representation theory, linear operators are instrumental for expressing abstract algebraic structures like groups or Lie algebras as concrete matrices acting on vector spaces. This
connection allows mathematicians to translate complex algebraic problems into more manageable forms involving matrix operations. The implications for Lie algebras are profound, as they enable
the study of symmetries in differential equations and physics through their representations, providing a deeper understanding of both theoretical and practical aspects of these mathematical structures. | {"url":"https://library.fiveable.me/key-terms/lie-algebras-and-lie-groups/linear-operator","timestamp":"2024-11-11T05:03:17Z","content_type":"text/html","content_length":"153085","record_id":"<urn:uuid:c054c974-fbf9-4ea1-a321-d1e3d1f99849>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00607.warc.gz"} |
A new construction of the asymptotic algebra associated to the $q$-Schur algebra
We denote by A the ring of Laurent polynomials in the indeterminate v and by K its field of fractions. In this paper, we are interested in the representation theory of the "generic" q-Schur algebra
\(S_q(n, r)\) over A. We will associate to every non-degenerate symmetrising trace form \(\tau\) on \(KS_q(n, r)\) a subalgebra \(J_\tau\) of \(KS_q(n, r)\) which is isomorphic to the "asymptotic"
algebra \(J(n, r)_A\) defined by J. Du. As a consequence, we give a new criterion for James' conjecture.
| {"url":"https://research-portal.st-andrews.ac.uk/en/publications/a-new-construction-of-the-asymptotic-algebra-associated-to-the-q-","timestamp":"2024-11-10T11:35:12Z","content_type":"text/html","content_length":"50074","record_id":"<urn:uuid:0bc69614-79ff-4fb2-b02c-a79edf5d81a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00742.warc.gz"} |
Unexpected result calculating the determinant of a singular matrix (42S)
10-21-2019, 01:54 AM
Post: #1
Dave Britten Posts: 2,332
Senior Member Joined: Dec 2013
Unexpected result calculating the determinant of a singular matrix (42S)
When calculating the determinant of the matrix [[-2,1,3][1,2,1][3,1,-2]] on my 42S, I would expect to get 0 as it is a singular matrix. But the 42S says it's 3.30000000001E-12. How does the 42S
calculate determinants that would lead to that result? And how do I identify when it's giving me a suspicious result? I've done very little linear algebra, so there might be something simple and
obvious going on here.
10-21-2019, 02:24 AM
(This post was last modified: 10-21-2019 02:28 AM by Valentin Albillo.)
Post: #2
Valentin Albillo Posts: 1,100
Senior Member Joined: Feb 2015
RE: Unexpected result calculating the determinant of a singular matrix (42S)
Hi, Dave:
(10-21-2019 01:54 AM)Dave Britten Wrote: When calculating the determinant of the matrix [[-2,1,3][1,2,1][3,1,-2]] on my 42S, I would expect to get 0 as it is a singular matrix. But the 42S says
it's 3.30000000001E-12. How does the 42S calculate determinants that would lead to that result? And how do I identify when it's giving me a suspicious result? I've done very little linear
algebra, so there might be something simple and obvious going on here.
The 42S uses LU-decomposition to compute determinants. This process involves divisions so inexact terms are produced and thus rounding errors do creep in and that's why you don't get an exact result
sometimes, even if the matrix has all integer elements and it's as small as 2x2.
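To see the mechanism being described, here is a minimal Gaussian-elimination determinant in Python floats — a sketch of the idea, not the 42S's actual microcode. On Dave's matrix the elimination produces 5/3, which is not exactly representable in binary floating point, so the last pivot can come out as a tiny residue instead of 0:

def lu_det(m):
    # determinant via elimination with partial pivoting: +/- product of pivots
    a = [[float(x) for x in row] for row in m]
    n, det = len(a), 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(a[r][k]))
        if a[p][k] == 0.0:
            return 0.0
        if p != k:
            a[k], a[p] = a[p], a[k]
            det = -det                 # a row swap flips the sign
        det *= a[k][k]
        for i in range(k + 1, n):
            f = a[i][k] / a[k][k]      # the division where rounding creeps in
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return det

print(lu_det([[-2, 1, 3], [1, 2, 1], [3, 1, -2]]))  # ~0, possibly a tiny residue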
For an exact way to compute determinants download and have a look at my PDF paper:
Exact Determinants and Permanents
which includes a program and revealing examples.
All My Articles & other Materials here: Valentin Albillo's HP Collection
10-21-2019, 02:30 AM
Post: #3
Dave Britten Posts: 2,332
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
Okay, I kind of wondered if there were some tricks like that going on. I remember seeing mention of LU decompositions in the Advantage Module manual, so I'm assuming that gives a similar result. The
48G, on the other hand, does return a determinant of 0 for that particular matrix, as does the 3x3 linear system solver from the 32SII manual.
10-21-2019, 02:36 AM
Post: #4
Valentin Albillo Posts: 1,100
Senior Member Joined: Feb 2015
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 02:30 AM)Dave Britten Wrote: Okay, I kind of wondered if there were some tricks like that going on. I remember seeing mention of LU decompositions in the Advantage Module manual, so
I'm assuming that gives a similar result. The 48G, on the other hand, does return a determinant of 0 for that particular matrix, as does the 3x3 linear system solver from the 32SII manual.
The 48G "cheats": it detects that all elements are integer and forces the result (computed using LU as well) to be the nearest integer, which succeeds sometimes and fails some others.
Try the 7x7 matrix I give as an example in the linked paper in the 48G and see if you get 1 as the determinant. You won't, the cheat doesn't work.
All My Articles & other Materials here: Valentin Albillo's HP Collection
10-21-2019, 03:11 AM
Post: #5
Thomas Okken Posts: 1,896
Senior Member Joined: Feb 2014
RE: Unexpected result calculating the determinant of a singular matrix (42S)
The 48G gets exactly zero on [[-.2 .1 .3][.1 .2 .1][.3 .1 -.2]] as well. Wouldn't that defeat the cheat?
10-21-2019, 04:23 AM
(This post was last modified: 10-21-2019 04:25 AM by Valentin Albillo.)
Post: #6
Valentin Albillo Posts: 1,100
Senior Member Joined: Feb 2015
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 03:11 AM)Thomas Okken Wrote: The 48G gets exactly zero on [[-.2 .1 .3][.1 .2 .1][.3 .1 -.2]] as well. Wouldn't that defeat the cheat?
No. It checks that the values of all elements are integer, not their types. The floating-point constant 2. has the integer value 2.
Whether the cheat is activated or not depends on a system flag, have a look at Messages #9, 40 and 49 in this thread (PDF document) at my site:
This is discussed in detail in other similar threads available there.
Best regards.
All My Articles & other Materials here: Valentin Albillo's HP Collection
10-21-2019, 07:39 AM
(This post was last modified: 10-21-2019 07:41 AM by pinkman.)
Post: #7
pinkman Posts: 434
Senior Member Joined: Mar 2018
RE: Unexpected result calculating the determinant of a singular matrix (42S)
Read Thomas’ question carefully
10-21-2019, 08:41 AM
Post: #8
Moggul Posts: 68
Member Joined: Jun 2019
RE: Unexpected result calculating the determinant of a singular matrix (42S)
The Free 42 on my phone gives 0. Different methodology or different precision?
10-21-2019, 11:37 AM
(This post was last modified: 10-21-2019 11:44 AM by Thomas Okken.)
Post: #9
Thomas Okken Posts: 1,896
Senior Member Joined: Feb 2014
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 08:41 AM)Moggul Wrote: The Free 42 on my phone gives 0. Different methodology or different precision?
Free42 uses LU decomposition to calculate determinants as well. The implementation is based on the one from "Numerical Recipes in C," without the "TINY" fudge factor when encountering a zero pivot.
If the algorithm were identical to the one in the HP-42S, I'd expect the same kinds of errors, just smaller because of the extra precision, but in actual fact, it returns exactly zero, just like the 48G.
(10-21-2019 04:23 AM)Valentin Albillo Wrote:
(10-21-2019 03:11 AM)Thomas Okken Wrote: The 48G gets exactly zero on [[-.2 .1 .3][.1 .2 .1][.3 .1 -.2]] as well. Wouldn't that defeat the cheat?
No. It checks that the values of all elements are integer, not their types. The floating-point constant 2. has the integer value 2.
I think you misread my post. I tried Dave's example, divided by 10. Not inexact numbers, but actual non-integer values.
10-21-2019, 11:58 AM
Post: #10
John Keith Posts: 1,067
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
The HP 50 in approximate mode and with flag -54 (the "cheat" flag) set*, returns 0 for both Dave's and Thomas's matrices. I'm surprised that the 42s gives an inexact result, I assumed the internal
code for matrix math was essentially the same as the 48/49 series.
* Note that setting the flag removes the "cheat". Also, according to the HP 50 AUR, the basis for setting tiny elements to 0 is that intermediate results are less than 1E-14. It does not say anything
about integer values.
10-21-2019, 12:48 PM
Post: #11
ttw Posts: 287
Member Joined: Jun 2014
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 11:58 AM)John Keith Wrote: The HP 50 in approximate mode and with flag -54 (the "cheat" flag) set*, returns 0 for both Dave's and Thomas's matrices. I'm surprised that the 42s gives
an inexact result, I assumed the internal code for matrix math was essentially the same as the 48/49 series.
* Note that setting the flag removes the "cheat". Also, according to the HP 50 AUR, the basis for setting tiny elements to 0 is that intermediate results are less than 1E-14. It does not say
anything about integer values.
Approximately true. However, I've been working on some number theory stuff (which I'll post if I get it into shape for public use) that uses really big (greater than 2^64) integers. Some seemingly
integer stuff ends up being floating point. Conversion of big numbers out of binary comes to mind. I've got most things to work.
Reducing some things with big integer multiples of things like Sqrt(2) or (Sqrt(5)-1)/2 and the like needs careful handling. Generally FLOOR and CEIL work well. FXND can be a problem as I found some
case (I can't reproduce it) where it would convert a number to a floating point.
That's for theory. In practice, I can just keep the numerators and denominators of stuff separate and use really close rational approximations for the irrational numbers in the final step.
10-21-2019, 01:49 PM
Post: #12
Dave Britten Posts: 2,332
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 11:58 AM)John Keith Wrote: The HP 50 in approximate mode and with flag -54 (the "cheat" flag) set*, returns 0 for both Dave's and Thomas's matrices. I'm surprised that the 42s gives
an inexact result, I assumed the internal code for matrix math was essentially the same as the 48/49 series.
It looks like the "cheat" was added with the 48G. I just tried calculating the determinant with my 48SX, and I get the same 3.30000000001E-12 as the 42S. I don't have a 28S/C handy, but I would
expect they do the same as the 42S and 48SX.
10-21-2019, 03:06 PM
Post: #13
Thomas Okken Posts: 1,896
Senior Member Joined: Feb 2014
RE: Unexpected result calculating the determinant of a singular matrix (42S)
But the 48G returns exactly zero regardless of whether flag -54 is set or clear...
(Free42 Binary does not return zero, so the fact that Free42 Decimal does is apparently just a coincidence, not the result of a better algorithm. There is no "tiny element is zero" cheat in play
either way.)
(The HP-15C returns 2e-9 (or rather, "15C Scientific Calculator by Vicinno," on iOS, but that should be the same thing).)
10-21-2019, 04:46 PM
(This post was last modified: 10-21-2019 04:48 PM by Claudio L..)
Post: #14
Claudio L. Posts: 1,885
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 03:06 PM)Thomas Okken Wrote: But the 48G returns exactly zero regardless of whether flag -54 is set or clear...
(Free42 Binary does not return zero, so the fact that Free42 Decimal does is apparently just a coincidence, not the result of a better algorithm. There is no "tiny element is zero" cheat in play
either way.)
(The HP-15C returns 2e-9 (or rather, "15C Scientific Calculator by Vicinno," on iOS, but that should be the same thing).)
May or may not be a coincidence. There are division-free decomposition algorithms that preserve integers, so if an integer matrix is given, all operations remain integer up until the end. I don't
know if this is the case for Free42 or the 48g, I know it is the case with newRPL.
My point is, you don't necessarily need a cheat to get a good result with integer matrices.
Here's one algorithm, for example (not the one I used, but same idea). I think this one is the one used by newRPL.
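For reference, one well-known fraction-free scheme is the Bareiss algorithm, sketched below in Python. For integer input every intermediate value stays an exact integer (the divisions are exact), so the singular matrix above yields exactly 0. Whether this is precisely the variant newRPL uses is an assumption here:

def bareiss_det(m):
    a = [row[:] for row in m]
    n, sign, prev = len(a), 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:               # pivot if needed
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0               # whole column is zero: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss update: this integer division is always exact
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev
        prev = a[k][k]
    return sign * a[n - 1][n - 1]

print(bareiss_det([[-2, 1, 3], [1, 2, 1], [3, 1, -2]]))  # 0, exactly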
10-21-2019, 05:54 PM
Post: #15
Thomas Okken Posts: 1,896
Senior Member Joined: Feb 2014
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 04:46 PM)Claudio L. Wrote:
(10-21-2019 03:06 PM)Thomas Okken Wrote: (Free42 Binary does not return zero, so the fact that Free42 Decimal does is apparently just a coincidence, not the result of a better algorithm.
There is no "tiny element is zero" cheat in play either way.)
May or may not be a coincidence. There are division-free decomposition algorithms that preserve integers, so if an integer matrix is given, all operations remain integer up until the end. I don't
know if this is the case for Free42 or the 48g, I know it is the case with newRPL.
Free42 does not use a division-free algorithm, that's why I said getting zero for Dave's example was a coincidence. Free42 Decimal and Free42 Binary use the same algorithm, but one returns zero and
the other does not. The only difference between the two is the floating-point system used.
10-21-2019, 06:00 PM
Post: #16
toml_12953 Posts: 2,191
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 01:54 AM)Dave Britten Wrote: When calculating the determinant of the matrix [[-2,1,3][1,2,1][3,1,-2]] on my 42S, I would expect to get 0 as it is a singular matrix. But the 42S says
it's 3.30000000001E-12. How does the 42S calculate determinants that would lead to that result? And how do I identify when it's giving me a suspicious result? I've done very little linear
algebra, so there might be something simple and obvious going on here.
Prime gets 0 in Home screen
Tom L
Cui bono?
10-21-2019, 06:12 PM
Post: #17
John Keith Posts: 1,067
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 12:48 PM)ttw Wrote: However, I've been working on some number theory stuff (which I'll post if I get it in shape for public use) that use really big (greater than 2^64) integers.
Some seemingly integer stuff ends up being floating point. Conversions of big numbers out of binary comes to mine. I've got most things to work.
Reducing some things with big integer multiple of things like Sqrt(2) or (Sqrt(5)-1)/2 and the like need careful handling. Generally FLOOR and CEIL work well. FXND can be a problem as I found
some case (I can't reproduce it) where would convert a number to a floating point.
Conversion of binary numbers to reals will lose precision if the value is greater than 10^12. To convert large binary numbers to exact integers, you can try
->STR 3. OVER SIZE 1. - SUB OBJ->
in decimal mode.
I can't see how FXND would return reals as long as you are in exact mode (check flags -3 and -105). If you want floating-point numbers with more than 12 digits you will have to use LongFloat.
I am also interested in number theory and I would like to see what you come up with.
Apologies to others for taking this thread off-topic.
10-21-2019, 07:10 PM
Post: #18
ttw Posts: 287
Member Joined: Jun 2014
RE: Unexpected result calculating the determinant of a singular matrix (42S)
I can't reproduce the FXND today. I will post the following when I get a "useful" version of converting a fraction to partial quotients (not too hard; still debating on input style: two integers or a
fraction). At times I need statistics for the results; I'll probably just add some post-processing, things like the sum of the PCs, or the max, or the alternating sum and difference.
I also have some programs that convert a list of partial quotients into the quadratic irrational which has that list as the repeated part. Going the other way is fun. I still have a bit of work to do
as this one is slow if using fractions but very complex if dealing with numerator and denominator separately. Some care is needed as the integer part (first PC) has to be handled separately, and then
there's a non-repeating part. Getting it right in general is a bit tedious.
I may just post a "good" version with arbitrary choices and let anyone who wants to use these modify them to suit.
I've been using integers which are larger than 2^64 which can cause some problems. However keeping everything in integers is pretty helpful.
I did find out a few funny things: IDIV2 is very slow compared to separate parts (even IQUOT and IREMAINDER), the matrix form for continued fractions is really slow too.
10-21-2019, 10:33 PM
Post: #19
Valentin Albillo Posts: 1,100
Senior Member Joined: Feb 2015
RE: Unexpected result calculating the determinant of a singular matrix (42S)
(10-21-2019 11:37 AM)Thomas Okken Wrote:
(10-21-2019 04:23 AM)Valentin Albillo Wrote: No. It checks that the values of all elements are integer, not their types. The floating-point constant 2. has the integer value 2.
I think you misread my post. I tried Dave's example, divided by 10. Not inexact numbers, but actual non-integer values.
Yes, absolutely right. I read your post very late at night (actually almost dawn) and on the smallish screen of a tablet; without my reading glasses the decimal point was absolutely invisible to me.
I simply assumed that your message said 2, say, where it actually said .2. My mistake, sorry.
Best regards.
All My Articles & other Materials here: Valentin Albillo's HP Collection
10-22-2019, 06:12 AM
Post: #20
Werner Posts: 902
Senior Member Joined: Dec 2013
RE: Unexpected result calculating the determinant of a singular matrix (42S)
Hi everyone.
The 48GX (and up) does not check whether the matrix elements are integer - it determines the least significant digit in the input (say it is of the order 10^s) and with Flag -54 clear it will round
the result to 10^(s*n), with n the order of the matrix.
(10-21-2019 02:36 AM)Valentin Albillo Wrote: Try the 7x7 matrix I give as an example in the linked paper in the 48G and see if you get 1 as the determinant. You won't, the cheat doesn't work.
? Of course the cheat works. With Flag -54 clear, the 48GX returns 1 exactly; with Flag -54 set it returns .999945522778. The condition number is about 10^11, and the 48GX works with 15 digits
internally, so we get 15-11 = 4 correct digits.
Also, Valentin, the 42S uses a*b-c*d when calculating the determinant of a 2x2 system, as you can see when you calculate the determinant of
The former returns -5 exactly, the latter -5.00000000001
Best Regards,
User(s) browsing this thread: 1 Guest(s) | {"url":"https://hpmuseum.org/forum/thread-13830-post-122330.html","timestamp":"2024-11-04T11:23:19Z","content_type":"application/xhtml+xml","content_length":"89107","record_id":"<urn:uuid:01079436-3caf-46d6-8281-065bccd15c6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00565.warc.gz"} |
CUET Exam Maths Syllabus 2024
The National Testing Agency (NTA) has released the detailed syllabus for the CUET 2024 Mathematics exam on its official website. This syllabus is also provided here for your reference
The Mathematics syllabus is divided into two sections:
Section A: Compulsory Section:
This section is mandatory for all students and covers fundamental mathematical concepts.
Unit 1: Algebra:
• Matrices: Introduction, types of matrices, equality, transpose, symmetric & skew-symmetric matrices, algebra of matrices, determinants, inverse of a matrix, solving simultaneous equations using the
matrix method
Unit 2: Calculus:
• Higher-order derivatives, tangents & normals, increasing & decreasing functions, maxima & minima.
Unit 3: Integration and its Applications:
• Indefinite integrals of simple functions, evaluation of indefinite integrals, definite integrals, application of integration as the area under the curve.
Unit 4: Differential Equations:
• Order and degree of differential equations, formulating and solving equations with separable variables.
Unit 5: Probability Distributions:
• Random variables and their probability distributions, expected value, variance, and standard deviation of a random variable, binomial distribution.
Unit 6: Linear Programming:
• Mathematical formulation of linear programming problems, graphical solution method for problems with two variables, feasible and infeasible regions, optimal feasible solution
Section B: Optional Section:
This section is optional and allows students to choose between two subjects:
Section B1:
Mathematics (focuses on theoretical and conceptual aspects of mathematics)
CHAPTER: Relations and Functions
Sub-unit: Types of relations
• Reflexive, symmetric, transitive, and equivalence relations.
• One-to-one and onto functions, composite functions, the inverse of a function.
• Binary operations
Sub-unit: Inverse Trigonometric Functions
• Definition, range, domain, principal value branches.
• Graphs of inverse trigonometric functions.
• Elementary properties of inverse trigonometric functions
CHAPTER: Matrices
Sub-unit: Concepts of Matrices
• Concept, notation, order, equality, types of matrices.
• Zero matrix, transpose of a matrix, symmetric and skew-symmetric matrices.
• Addition, multiplication, and scalar multiplication of matrices, simple properties.
• Non-commutativity of multiplication of matrices, existence of non-zero matrices whose product is the zero matrix (restricted to square matrices of order 2).
• Concept of elementary row and column operations.
• Invertible matrices and proof of the uniqueness of inverse
Sub-unit: Determinants
• Determinants of a square matrix (up to 3×3 matrices), properties of determinants, minors, co-factors.
• Applications of determinants in finding the area of a triangle.
• Adjoint and inverse of a square matrix.
• Consistency, inconsistency, and a number of solutions of a system of linear equations.
CHAPTER: Continuity and Differentiability
Sub-unit: Continuity and Differentiability
• Derivative of composite functions, chain rules.
• Derivatives of inverse Trigonometric functions, implicit functions.
• Exponential, logarithmic functions, derivatives of log x and ex.
• Logarithmic differentiation, derivative of functions expressed in parametric forms.
• Second-order derivatives.
• Rolle’s and Lagrange’s Mean Value theorems (without proof) and their geometric interpretations.
Sub-unit: Applications of Derivatives
• Rate of change, increasing/decreasing functions.
• Tangents and normals, approximation, maxima, and minima.
CHAPTER: Integrals
Sub-unit: Integration
• Integration as an inverse process of differentiation.
• Integration of various functions by substitution, partial fractions, and parts.
• Definite integrals as a limit of a sum.
• Fundamental Theorem of calculus (without proof), basic properties, and evaluation of definite integrals
Sub-unit: Applications of Integrals
• Finding the area under simple curves, area between curves
CHAPTER: Differential Equations
Sub-unit: Basics of Differential Equations
• Definition, order, and degree, general, and particular solutions.
• Formation of differential equations, methods of solution
UNIT: VECTORS & THREE - DIMENSIONAL GEOMETRY
CHAPTER: Vectors
Sub-unit: Basics of Vectors
• Magnitude and direction, types of vectors.
• Scalar and vector products, projection of a vector
CHAPTER: Three-dimensional Geometry
Sub-unit: Basics of Three-dimensional Geometry
• Cartesian and vector equations of lines and planes.
• Angle between lines and planes, distance of a point from a plane.
CHAPTER: Linear Programming
Sub-unit: Introduction to Linear Programming
• Terminology, mathematical formulation, graphical method of solution.
CHAPTER: Probability
Sub-unit: Basics of Probability
• Multiplication theorem, conditional probability, independent events.
• Random variable and its probability distribution, mean, and variance.
• Binomial distribution, Baye’s theorem
Section B2: Applied Mathematics (focuses on practical applications of mathematics)
Unit 1: Numbers, Quantification, and Numerical Applications
• Modulo Arithmetic: Definition and application of modulo arithmetic rules (congruence, alligation and mixture problems, numerical problems).
• Real-Life Applications: Solving problems involving boats and streams, pipes and cisterns, races and games, partnership, and numerical inequalities.
Unit 2: Algebra
• Matrices: Definitions and identification of different types of matrices (equality, transpose, symmetric & skew-symmetric matrices).
Unit 3: Calculus
• Higher Order Derivatives: Understanding differentiation of parametric and implicit functions, identifying dependent and independent variables.
• Marginal Cost & Marginal Revenue: Definitions, finding their values using derivatives.
• Maxima & Minima: Determining critical points, finding local/absolute maxima/minima values
Unit 4: Probability Distributions
• Probability Distribution: Understanding random variables and their probability distributions, finding the probability distribution of discrete random variables.
• Mathematical Expectation: Calculating expected value using the arithmetic mean of the frequency distribution.
• Variance: Calculating variance and standard deviation of a random variable.
Unit 5: Index Numbers and Time Based Data
• Index Numbers: Definition as a special type of average, constructing different types and applying time reversal test.
• Population & Sample: Definitions, differentiation, and identifying representative samples.
• Parameter & Statistics: Definitions and relation, limitations of statistics, interpretation of statistical significance and inferences, central limit theorem, and relation between population,
sampling distribution, and sample.
• Time Series: Identifying time series data, distinguishing components, and analyzing univariate data based on statistical interpretation
Unit 6: Financial Mathematics
• Perpetuity & Sinking Funds: Explaining concepts and calculating perpetuities and differentiating between sinking funds and savings accounts.
• Valuation of Bonds: Defining valuation of bonds and related terms, calculating bond value using the present value approach.
• Calculation of EMI: Explaining the concept and calculating EMI using various methods.
• Linear Method of Depreciation: Defining the concept, interpreting cost, residual value, and useful life, and calculating depreciation.
Unit 7: Linear Programming
• Introduction & Terminology: Familiarizing with terms related to linear programming problems.
• Mathematical Formulation: Formulating linear programming problems.
• Types of Linear Programming Problems: Identifying and formulating different types of LPPs.
• Graphical Method: Drawing graphs and finding solutions for systems of linear inequalities in two variables.
• Feasible & Infeasible Regions/Solutions: Identifying feasible, infeasible, and unbounded regions and understanding feasible and infeasible solutions, finding the optimal feasible solution
CUET Mathematics Exam Breakdown:
Section A:
o Compulsory for all candidates.
o 15 questions covering both Mathematics and Applied Mathematics
Section B:
o Section B1: Mathematics (25 out of 35 questions to be attempted).
o Section B2: Applied Mathematics (25 out of 35 questions to be attempted).
Candidates can choose which section (B1 or B2) to focus on and attempt more questions from that section. | {"url":"https://cuetugexam.in/cuet-exam-maths-syllabus-2024","timestamp":"2024-11-09T19:58:14Z","content_type":"text/html","content_length":"33170","record_id":"<urn:uuid:17261799-7c01-4ea0-a1f3-62e79e18e4c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00814.warc.gz"} |
CECALC.com - Reinforced Concrete Retaining Wall Calculations
Reinforced Concrete Cantilever Retaining Walls:
Retaining Wall Design
Coulomb-Rankine Formula • Calculate the active earth pressure coefficient, earth pressure, horizontal resultant and moment (a code sketch follows this list).
Uniform load soil surcharge • Calculate the earth pressure horizontal resultant from a uniform load surcharge.
Point load soil surcharge • Calculate the earth pressure horizontal resultant from a point load surcharge.
Strip load soil surcharge • Calculate the earth pressure horizontal resultant from a strip load surcharge.
Ramp load soil surcharge • Calculate the earth pressure horizontal resultant from a ramp load surcharge.
Triangle load soil surcharge • Calculate the earth pressure horizontal resultant from a triangle load surcharge.
Line load soil surcharge • Calculate the earth pressure horizontal resultant from a line load surcharge.
Slip plane angle • Calculate the slip plane angle of the soil for the soil pressure wedge calculation, directly for noncohesive soils and by trial and error for cohesive soils.
Wedge method • Calculate soil pressure resultant with or without a surface surcharge by the approximate wedge method.
Trial retaining wall design section • Create a concrete cantilever retaining wall trial design section.
Resistance to sliding - no key • Calculate the resistance to sliding for a retaining wall without a keyed base.
Resistance to sliding - key • Calculate the resistance to sliding for a retaining wall with a keyed base.
Eccentricity, max and min soil • Calculate the eccentricity, maximum and minimum soil pressure beneath the base of a retaining wall.
Bearing pressure resultants • Calculate the bearing pressure resultants for the toe and heel of a concrete cantilever retaining wall to be used in the reinforcement calculations.
Check bearing capacity • Check the bearing capacity of a concrete cantilever retaining wall.
Stem section reinforcement • Calculate the distance to steel and the area of steel reinforcement for the stem section of a concrete cantilever retaining wall.
Heel section reinforcement • Calculate the distance to steel and the area of steel reinforcement for the heel section of the base of a concrete cantilever retaining wall.
Toe section reinforcement • Calculate the distance to steel and the area of the steel reinforcement for the toe section of the base of a concrete cantilever retaining wall.
Stem steel development length • Calculate the development length for the steel reinforcement of the stem section of a concrete cantilever retaining wall.
Design Problem 1 • Calculate the lateral earth force acting on a retaining wall using the wedge method.
Design Problem 2 • Calculate the lateral earth force acting on a retaining wall using the Rankine Formula.
Design Problem 3 • Check the sliding resistance for a retaining wall.
Design Problem 4 • Check the resistance to sliding, the overturning stability, and the bearing capacity, and calculate the required areas of steel reinforcement for a proposed retaining wall. | {"url":"https://www.cecalc.com/RetainingWalls.aspx","timestamp":"2024-11-15T02:44:51Z","content_type":"application/xhtml+xml","content_length":"28454","record_id":"<urn:uuid:ee54cedb-0eed-42d2-b7d2-977bb782001e>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00702.warc.gz"} |
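As referenced in the first row above, Rankine's coefficient for a vertical wall with level backfill reduces to a one-line formula. A Python sketch with illustrative inputs (not one of the site's calculators):

import math

def rankine_active(phi_deg, gamma, H):
    # Ka = tan^2(45 - phi/2); resultant Pa = 0.5 * Ka * gamma * H^2 per unit length
    Ka = math.tan(math.radians(45 - phi_deg / 2)) ** 2
    Pa = 0.5 * Ka * gamma * H ** 2
    return Ka, Pa

Ka, Pa = rankine_active(phi_deg=30, gamma=18.0, H=4.0)  # phi in deg, kN/m^3, m
print(f"Ka = {Ka:.3f}, Pa = {Pa:.1f} kN/m")             # Ka = 0.333, Pa = 48.0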
Multi-Version Documents
The final version of my MVD paper has now appeared online. This hyperlink is permanent and can be used in citations. The paper reference is Schmidt, D. and Colomb, R, 2009. A data structure for
representing multi-version texts online, International Journal of Human-Computer Studies, 67.6, 497-514.
Thesis Submission
Also I have now submitted my thesis. The final title was 'Multiple Versions and Overlap in Digital Text'. Here's the abstract:
This thesis is unusual in that it tries to solve a problem that exists between two widely separated disciplines: the humanities (and to some extent also linguistics) on the one hand and
information science on the other.
Chapter 1 explains why it is essential to strike a balance between study of the solution and problem domains.
Chapter 2 surveys the various models of cultural heritage text, starting in the remote past, through the coming of the digital era to the present. It establishes why current models are outdated
and need to be revised, and also what significance such a revision would have.
Chapter 3 examines the history of markup in an attempt to trace how inadequacies of representation arose. It then examines two major problems in cultural heritage and linguistics digital texts:
overlapping hierarchies and textual variation. It assesses previously proposed solutions to both problems and explains why they are all inadequate. It argues that overlapping hierarchies is a
subset of the textual variation problem, and also why markup cannot be the solution to either problem.
Chapter 4 develops a new data model for representing cultural heritage and linguistics texts, called a 'variant graph', which separates the natural overlapping structures from the content. It
develops a simplified list-form of the graph that scales well as the number of versions increases. It also describes the main operations that need to be performed on the graph and explores their
algorithmic complexities.
Chapter 5 draws on research in bioinformatics and text processing to develop a greedy algorithm that aligns n versions with non-overlapping block transpositions in O(MN) time in the worst case,
where M is the size of the graph and N is the length of the new version being added or updated. It shows how this algorithm can be applied to texts in corpus linguistics and the humanities, and
tests an implementation of the algorithm on a variety of real-world texts.
Some people still think of MVD as a replacement for markup. It isn't. It complements markup systems or any technology that can represent content. As I said in the main page What's a Multi-Version
Document? an MVD represents the overlapping structure of a set of versions or markup perspectives. It doesn't need to represent any of the detail of the content, which is the responsibility of the markup.
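To make that division of labour concrete, here is a toy sketch of the 'list form' of a variant graph described in my thesis abstract: pairs of (text fragment, set of versions). The names and API are illustrative only, not the actual MVD implementation:

# Each pair holds a fragment and the set of versions containing it.
mvd = [
    ("The quick ", {"A", "B"}),
    ("brown ",     {"A"}),        # only version A has "brown"
    ("red ",       {"B"}),        # only version B has "red"
    ("fox",        {"A", "B"}),
]

def get_version(mvd, v):
    # Reading off one version is a linear scan over the list form.
    return "".join(frag for frag, versions in mvd if v in versions)

print(get_version(mvd, "A"))  # The quick brown fox
print(get_version(mvd, "B"))  # The quick red fox

The fragments themselves can contain any markup at all; the graph records only which versions share them.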
I realise that it's easy, and natural, to seek to dismiss radical ideas simply because they are radical. The difference in this case is that MVD is a technology that definitely works. It's not all
that radical anyway. Consider the direction in which multiple-sequence alignment is going in biology. They have also realised that the best way to represent multi-version genomes or protein sequences
is via a directed graph (e.g. Raphael et al., 2004. A novel method for multiple alignment of sequences with repeated and shuffled elements, Genome Research, 14, 2336-2346). I prefer to think of that
idea as parallel to mine, and his 'A-Bruijn' graph is rather different from my MVD, but it represents the same kind of data in much the same way. Acceptance that this basic idea can also be applied
to texts in humanities and linguistics is just a matter of time.
The Inadequacy of Markup
If markup is adequate for linguistics texts, why is it that every year someone thinks up a new way to manipulate markup systems to try to represent overlap? If it were adequate there would be no need
for new systems, but we continue to see 1-3 new papers on the subject every year. It's seen as a game. Look at the Balisage website: 'There's nothing so practical as a good theory'. Perceived as an
unsolvable problem, overlap is the perfect topic for a paper or a thesis.
In the humanities, overlap in markup systems is more than an annoyance; it wrecks the whole process of digitisation. In simple texts you can just about get by, but it's a question of degree. Try to
use markup to record the following structures:
1. Deletion of a paragraph break
2. Deletion of underlining
3. Changes to document structure
4. Transposition
5. Overlapping variants
These can all be done somehow in markup, I admit, but very poorly. And they are features that occur all the time in original texts. The fundamental problem is that you can't adequately fit a
non-hierarchical structure into a hierarchical template. To choose markup alone as a medium to preserve our textual cultural heritage is to resign yourself to mangling that information.
Why do we have to use markup to record complex structures it was never designed to represent? Hand that complexity over to the computer and let it work it out. That's what MVD lets you do. If you are
getting a headache shuffling around angle brackets and xml:ids, then think again. Is this any proper way for humans of the 21st century to interact with the texts of their forebears? | {"url":"https://multiversiondocs.blogspot.com/2009/03/","timestamp":"2024-11-08T11:38:47Z","content_type":"text/html","content_length":"64874","record_id":"<urn:uuid:2321ecea-1149-4862-aef5-958deb86cfb4>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00332.warc.gz"} |
Negative Numbers Order Of Operations Parentheses Addition Worksheet And | Order of Operation Worksheets
Negative Numbers Order Of Operations Parentheses Addition Worksheet And – You may have heard of an Order Of Operations Worksheet, but what exactly is it? Worksheets are also a great way for students
to practice new skills and review old ones.
What is the Order Of Operations Worksheet?
An order of operations worksheet is a kind of math worksheet that requires students to perform mathematical operations. These worksheets are divided into three main sections: subtraction,
multiplication, and addition. They also include the evaluation of parentheses and exponents. Students who are still learning how to perform these operations will find this sort of worksheet useful.
The primary objective of an order of operations worksheet is to help students learn the correct way to solve math equations. If a student doesn't yet understand the concept of order of operations,
they can review it by referring to an explanation page. In addition, an order of operations worksheet can be divided into several categories based on its difficulty.
Another important objective of an order of operations worksheet is to show students how to perform PEMDAS operations. These worksheets start with simple problems covering the basic rules and build
up to more complex problems involving all of the rules. They are an excellent way to introduce young students to the excitement of solving algebraic equations.
One of the most important things you can learn in mathematics is the order of operations. The order of operations ensures that the math problems you solve come out consistent.
An order of operations worksheet is a great way to teach students the proper way to solve math equations. Before students start using this worksheet, they may need to review concepts related to the
order of operations. To do this, they should check the concept page for the order of operations, which will give them an overview of the basic idea.
An order of operations worksheet can help students develop their addition and subtraction skills. Teachers can use Prodigy as an easy way to differentiate practice and provide engaging content.
Prodigy's worksheets are a great way to help students learn about the order of operations. Teachers can start with the basic concepts of multiplication, division, and addition to help students build
their understanding of parentheses.
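The same rules the worksheets drill are what a programming language applies when evaluating an expression. A small Python illustration:

print(3 + 4 * 2)     # 11: multiplication before addition
print((3 + 4) * 2)   # 14: parentheses first
print(2 + 12 / 4)    # 5.0: division before addition
print((2 + 12) / 4)  # 3.5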
Order Of Operations With Negatives Worksheet
Integers Order Of Operations Three Steps Including Negative Integers
Order Of Operations With Negatives Worksheets offer a wonderful resource for young students. These worksheets can be easily customized for specific needs and come in three levels of difficulty. The
first level is straightforward, requiring students to practice the DMAS approach on expressions containing four or more integers and three operators. The second level requires students to use the
PEMDAS approach to simplify expressions using inner and outer parentheses, brackets, and curly braces.
The Order Of Operations With Negatives Worksheets can be downloaded for free and printed out. They can then be worked through using addition, subtraction, division, and multiplication. Students can
also use these worksheets to review the order of operations and the use of exponents.
| {"url":"https://orderofoperationsworksheet.com/order-of-operations-with-negatives-worksheet/negative-numbers-order-of-operations-parentheses-addition-worksheet-and/","timestamp":"2024-11-09T11:21:09Z","content_type":"text/html","content_length":"28306","record_id":"<urn:uuid:11a06347-249e-4dd0-9bbc-936b882be7f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00794.warc.gz"} |
A trip through the layers of scVI
The scVI package provides a conditional variational autoencoder for the integration and analysis of scRNA-seq data. This is a generative model that uses a latent representation of cells to produce
gene UMI counts with statistical properties similar to the observed data. To use scVI effectively, it is best to think of it in terms of the generative model rather than the "autoencoder" aspect of
it. That the latent representation is inferred using amortized inference rather than any other inference algorithm is more of an implementation detail.
This post goes through step by step how scVI takes the cell representation and is able to generate scRNA-seq UMI counts.
In summary, the steps are:
\(\begin{aligned} W &= f(Z, c), \\ X &= g(W), \\ \omega &= \text{softmax}(X), \\ \Lambda &= \omega * \ell, \\ Y &\sim NB(\Lambda, \theta). \end{aligned}\)
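As a rough illustration, the whole chain can be written out in a few lines of NumPy. The random matrices Wf and Wg stand in for scVI's trained decoder networks f and g, and all the sizes are illustrative — this is a schematic of the generative process, not scVI's actual code:

import numpy as np

rng = np.random.default_rng(0)
n_cells, d_z, d_w, n_genes, n_batches = 5, 10, 128, 200, 3

Wf = 0.1 * rng.normal(size=(d_z + n_batches, d_w))   # stand-in for network f
Wg = 0.1 * rng.normal(size=(d_w, n_genes))           # stand-in for network g

Z = rng.normal(size=(n_cells, d_z))                            # cell representations
c = np.eye(n_batches)[rng.integers(n_batches, size=n_cells)]   # one-hot batches

W = np.tanh(np.concatenate([Z, c], axis=1) @ Wf)         # W = f(Z, c)
X = W @ Wg                                               # X = g(W)
omega = np.exp(X) / np.exp(X).sum(1, keepdims=True)      # softmax -> frequencies
ell = rng.integers(2_000, 10_000, size=(n_cells, 1))     # library sizes
Lam = omega * ell                                        # rates
theta = 5.0                                              # dispersion (simplified)
Y = rng.negative_binomial(theta, theta / (theta + Lam))  # sampled UMI counts
print(Y.shape, Y.sum(axis=1))   # total counts per cell roughly match ell

Each line corresponds to one of the steps walked through in the rest of this post.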
A very good example dataset is one by Hrvatin et al. This data consists of UMI counts for 25,187 genes from 48,266 cells, sampled from the visual cortex of 28 mice (in three experimental conditions).
To illustrate the generative process in scVI, each step will use three genes (where applicable) to illustrate how each step goes towards generating counts. These are: Slc17a7, a marker for excitatory
neurons; Olig1, a marker for oligodendrocytes; and Nr4a2, a gene which activates expression in some cell types when the mice are receiving light stimulation. It is clear that these genes have
heterogeneity beyond observational noise, which makes them good candidates for illustration.
Below are three histograms of the observed UMI counts of these genes. The x-axis shows UMI counts between 1 and 25 for the gene, and the y-axis shows how many cells in the raw data that number of UMI
counts is observed in.
After the data generative process has completed, the goal is to obtain data with very similar histograms to these.
The journey towards generating counts starts with the representation Z. Each cell is represented by a 10-dimensional vector zi which describes the structured heterogeneity in the data. The goal is
that cells that are close together in this representation have similar transcriptomes. A nice thing with scVI is that it estimates a posterior distribution for each zi, but in this post the focus is
on the mean of zi. It is not feasible to visualize the 10-dimensional representation of the data, but we can create a tSNE visualization of the Z, and color it by information that was provided in the
published dataset from Hrvatin et al.
Note how the cells group by cell type, but not by experimental sample.
scVI is a conditional autoencoder. This is what enables it to find a representation with the variation between batches accounted for. When generating data the first step is to introduce the
batch-to-batch variation. This is done by taking a representation zi and the batch ci of the i’th cell, passing these through a neural network f(), and producing an intermediate representation wi. As
above, cells with similar 128-dimensional wi representations will produce similar transcriptome observations. Again, we cannot feasibly visualize the 128-dimensional representation of the data, but
we can produce a tSNE visualization which allows us to see which cells are close together.
Note that compared to the previous tSNE, here cells from the same biological replicate are lumped together. But this happens within variation due to cell type.
At last, it is time to start to tie gene expression to the cells. To move from the 128-dimensional representation of the cells, a neural network g() is used to produce one real value per gene in that
cell. For this particular dataset each xi is 25,187-dimensional. Below figure shows the distribution of these xi values for the three example genes across the 48,266 cells in the dataset.
Now variation in gene expression levels can be investigated. This is particularly clear in the bimodal nature of Slc17a7 expression, owing to the mixture of excitatory neurons and other cells.
However, these numbers (on the x-axis) that are produced by g() have no direct ties to the observed data. It is not clear what a value of 3.0 means, for example. This step is here simply because
naive neural networks can only map values to unrestricted real numbers. The next step remedies this.
The original data is in UMI counts, and counts are modeled through frequencies and rates. To obtain a frequency from the X-values, scVI performs a softmax transformation so that the resulting ωi
value for each cell sums to 1. The softmax transformation is defined as
\(\omega_{i, g} = \text{softmax}(X_{i, g}) = \frac{\exp(X_{i, g})}{\sum_{k=1}^G \exp(X_{i, k})}.\)
Thus, each ωi lies on the simplex Δ25,187. Now scVI has generated a value we can interpret.
The frequency ωi,g means that every time we would count a molecule in cell i, there is a ωi,g chance that the molecule was produced by gene g. These (latent) frequencies are the most useful values
produced in scVI, and allows for differential expression analysis as well as representing gene expression on an easily interpretable scale. By convention, the frequencies can be multiplied by 10,000
to arrive at a unit of “expected counts per 10k molecules” for genes. This is similar to the idea behind TPM as a unit in bulk RNA-seq.
To be able to generate samples similar to the observed data, another technical source of variation needs to be added. Different cells have a different total number of UMI’s. The frequencies above
represent how likely we are to observe a molecule from a given gene when sampling a single molecule. The rate Λi,g represent the expected number of molecules produced by gene g in cell i when
sampling ℓi molecules. This value ℓ is usually referred to as the “library size” or “total UMI counts”. (In scVI ℓi is also a random variable, but this is a detail that is not important for the
general concept described here.)
In analysis of scRNA-seq data, the total UMI count per cell is easily influenced by technical aspects of the data. Unlike frequencies, the scales for a given gene are not directly comparable between cells.
Finally, using the rates Λi,g, the observational negative binomial model can be used to draw samples of UMI counts per cell i and gene g. (The additional dispersion parameter θ can be estimated with
a number of different strategies in scVI, and can be considered a detail here).
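To illustrate that sampling step concretely, here is a small NumPy sketch; it assumes the common mean/dispersion parameterization, where a negative binomial with mean Λ and dispersion θ maps to NumPy's (n, p) parameters as n = θ and p = θ/(θ + Λ) — scVI's internal parameterization may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_umi_counts(rate, theta):
    """Draw UMI counts from a negative binomial with mean `rate` (Lambda)
    and dispersion `theta`: variance = rate + rate**2 / theta."""
    p = theta / (theta + rate)
    return rng.negative_binomial(theta, p)

rates = np.array([0.5, 2.0, 10.0])  # illustrative Lambda values for three genes
counts = sample_umi_counts(rates, theta=3.0)
```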
This is the final step which makes the model generative. At this point, the goal is that a sample from the model with the inferred Λ from the entire dataset should be easily confused with the
observed data. The below figure shows histograms for the three example genes of the observed data in grey, and from a sample from the scVI model in red.
An even better strategy to illustrate these properties is to draw multiple samples and see whether the observed data falls within certainty intervals of the sampled data. In the figure below each red
line corresponds to a histogram value from a sample from the observational model.
The data and the samples have pretty similar distributions. Some aspects of the observed data are not seen in the samples from the observational model. In particular data sampled from the
observational model produce more 1-counts for these three genes than what was observed in the data.
(It should be noted that here samples have only been taken at the last step, from the observational negative binomial model. To perform a posterior predictive check, Z-values should also be sampled
from the approximate posterior of Z.)
The aim of this post has been to show how in scVI the representation Z, which can be used for many cell-similarity analyses is directly tied to gene expression. And the gene expression in turn is
directly tied to the sparse UMI counts that are observed through single cell RNA-sequencing. These links are very difficult to make in other analysis methods or 'pipelines' of analysis.
A Jupyter notebook that illustrates how to generate all the quantities described here and all the figures is available on GitHub | {"url":"https://www.nxn.se/p/a-trip-through-the-layers-of-scvi","timestamp":"2024-11-08T23:26:18Z","content_type":"text/html","content_length":"183301","record_id":"<urn:uuid:b488a6f4-447e-41ac-a2d9-d3a1a0453291>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00547.warc.gz"}
Website that helps with math problems
101 Test
Sample Problems From Intermediate Algebra Sample problems are under the links in the "Sample Problems" column and the corresponding review material is under the "Concepts" column. New problems are
given each time the problem links are followed.
Math Help Online. Each example essay comes with a quality report as well as a plagiarism report to show what is included with each order. Place an order now and get the best online writing assistance; composing an essay will be no trouble. Solving Algebra Problems - MathHelp.com - 1000+ Online Math ... MathHelp.com - http://www.MathHelp.com - offers comprehensive help solving Algebra problems with over 1000 online math lessons featuring a personal math teacher... Problem Solving | NZ Maths
Our mission is simple: to make math a fun part of kids' everyday lives, as beloved as the bedtime story. Choose the Math Problem of the Day, or explore over 1,000 additional math problems in English
or Spanish with various zany topics, ranging from electric eels and chocolate chips to roller coasters and flamingos.
Optical Art Task. Math - Practice, Tests, Forum/Free Help The website
was created in September 2005. It is visited by 14,000 - 28,000 people daily, mostly from USA, Bulgaria, Russia, India, Philippines, Ukraine, Serbia... In 2011 was created a Russian version, and in
2015 Serbian. Homework Helper - Refdesk.com Math Word Problems for Children - over 2000 math word problems for children to learn from and enjoy. The pages are sorted by topic and level of difficulty.
Each problem is designed to improve elementary and middle school students' critical thinking and problem-solving skills.
Five sets of free The ACT Math practice test questions that you can use to familiarize yourself with the test instructions and format. ... All the problems can be ...
Math Tutoring - Mathnasium of New Hyde Park - The Math Learning… We offer math tutoring services in New Hyde Park. Our tutors help kids with mathematics homework lessons, math tutorials and math
education. Math Online Calculators for Math & Science Use Online Calculators with MyMathDone and benefit from a simple and easy way to use calculator for all your math problems Math Homework Helping
Service | Pro-Papers.com
Get the free "Online Problem Solver" widget for your website, blog, Wordpress, Blogger, or iGoogle. Find more Mathematics widgets in Wolfram|Alpha.
Grades & Subjects: All grades, math. StudyGeek.org is a nonprofit website "where PhD experts help with math homework" — neat! The site offers detailed sections on algebra, geometry, trigonometry,
calculus, and statistics. Each area provides helpful explanations and sample problems specific to all types of math. Math | Khan Academy Learn fifth grade math aligned to the Eureka Math/EngageNY
curriculum—arithmetic with fractions and decimals, volume problems, unit conversion, graphing points, and more. Module 1: Place value and decimal fractions : 5th grade (Eureka Math/EngageNY) Websites
That Will Help You Solve Your Math Problems
Freckle Math
Getting help with math homework is easy with Tutor.com. Just tell us what you're working on, and we'll match you to the best math tutor available to help your specific question. You'll work with a
tutor in our online classroom in real-time, solving your math problems step-by-step, until your homework is finished. Free math calculators, formulas, lessons, math tests and ...
Math Games Helping Kids on Computational Fluency Door24 is a free math app designed for kids in grade 4 to grade 8. The story behind all the math games is to fix Victor the robot’s circuits with math
operations that lead to number 24. Pay Someone Do My Math Homework For Me - Get Help Now Want to pay someone to do math homework for you online and get an A? Reliable math homework help online site
ready to help with math homework answers. Math4Girls - 17 Reviews - Private Tutors - Bernal Heights, San… | {"url":"https://coursesxrem.web.app/tollefsrud79334pa/website-that-helps-with-math-problems-1651.html","timestamp":"2024-11-07T05:52:35Z","content_type":"text/html","content_length":"21567","record_id":"<urn:uuid:3f24b43b-ad6e-4237-8216-1cb3da861948>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00810.warc.gz"} |
Digital Circuits 3: Combinational Circuits
Computers are good at math. Computers, as we've seen, are made out of simple gates. Gates just do simple logic functions like AND and OR, not math like addition and subtraction. How do we reconcile this?
Simple... we make circuits out of logic gates that can do math. In this section we'll have a look at adders and subtractors.
This also provides a few good learning opportunities to bring out some lessons having to do with digital circuit design.
Let's start simply: adding 2 1-bit numbers. Recall from math class that adding numbers results in a sum and a carry. It's no different here. With two one bit numbers we have 4 distinct cases:
1. 0 + 0 = 0 with no carry
2. 0 + 1 = 1 with no carry
3. 1 + 0 = 1 with no carry
4. 1 + 1 = 0 with a carry
Since we are dealing with binary numbers, and each binary digit corresponds to a logic value, let's express this as a truth table:
What does that remind you of? Well, Sum is A XOR B, and Cout is A AND B!
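To make that correspondence concrete, here's a tiny Python check of the truth table, with `^` standing in for XOR and `&` for AND (Python as a stand-in for the gates):

```python
def half_adder(a, b):
    """Sum is A XOR B, carry-out is A AND B."""
    return a ^ b, a & b

for a in (0, 1):
    for b in (0, 1):
        s, cout = half_adder(a, b)
        print(f"{a} + {b} = {s} with carry {cout}")
```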
Boolean expressions
It's common when writing boolean expressions to use operators rather than gate names:
• a vertical centered dot in place of AND
• + in place of OR
• a circled + in place of XOR
• a bar over a negated expression rather than NOT
Using this notation, the expressions describing the above truth table are:
We'll use this format from now on.
Binary addition for adding more than single digit numbers is the same as you learned in school for decimal: you add the two corresponding digits and the carry from the digit adder to the immediate
right to give a sum digit and a carry. So our single digit adder must support an incoming carry. What we have above is referred to as a half adder, since it really only does half of the job.
What we need to do is expand on this idea to include an incoming carry. Here's the truth table:
Logic simplification
As your logic circuits (as well as the associated truth tables and equations) get larger and more complex, it's useful to have some tools and techniques to help simplify them. Why simplify them?
Mostly to require fewer gates. That means fewer chips, less silicon, fewer connections, smaller boards, faster circuits, etc. The simpler you can make a circuit and get the same job done, the better.
One useful tool was introduced by Maurice Karnaugh in 1953: Karnaugh Maps. Let's go through an example to see how it works.
To use Karnaugh Maps we need to put the truth table in terms of an OR of AND terms. These AND terms correspond to the rows in the truth table that contain a logical 1 for the output in question. For the
half adder we had:
And for the full adder the equations are:
A Karnaugh Map is a two dimensional table that has 2^n cells if there are n inputs. Adjacent rows and columns can differ by the negation of a single input. Here are the maps for the half adder:
The way it works is that the row labelled "A" corresponds to the A input being high, the other row corresponds to A being low. Similarly with the columns and the B input. The 1 in the Cout map
corresponds to the case when both A and B are high.
There's no simplification to be done on the half adder, it's trivial. The full adder is another story. Here are the maps for it.
What you are looking for to simplify using a map is groups of 1s that are some power of 2 in size. In the Sum map above, there are none. Each 1 is separate. That pattern is indicative of an XOR of
all three inputs. That can be achieved by chaining XOR gates.
The Carry out is a different story, though. There are three groups of two:
The green circle is the A . B term, leaving the other two 1s to be covered. In both those cases Cin is high and A and B differ. That is the definition of XOR, and so we can rewrite the equation to
replace some ANDs, ORs, and NOTs with an XOR.
Interestingly, A . B and A XOR B are both outputs of a half adder as shown above. We need another XOR and another AND. In fact we can use two half adders along with an additional OR gate to build the
full adder as shown below.
This full adder only does single digit addition. Multiple copies can be used to make adders for any size binary numbers. By default the carry-in to the lowest bit adder is 0*. Carry-out of one
digit's adder becomes the carry-in to the next highest digit's adder. The carry-out of the highest digit's adder is the carry-out of the entire operation.
This is pretty typical of digital circuits that work on data: if you can design a circuit to work on single bit data, multiple copies can usually be used together to operate on bigger data.
[* A CPU/MCU will have a carry bit in its flag register that can be used as the carry-in for addition operations. The carry out from such operations will be stored in that flag for future use. This
allows operations on data larger than can be added at one time.]
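As a software analogue of this chaining, here's a hedged Python sketch: a full adder built from two half adders plus an OR (Sum = A XOR B XOR Cin; Cout = A·B + Cin·(A XOR B)), rippled across a list of bits, least significant bit first:

```python
def full_adder(a, b, cin):
    """Two half adders plus an OR, exactly as in the circuit above."""
    s1, c1 = a ^ b, a & b          # first half adder
    s, c2 = s1 ^ cin, s1 & cin     # second half adder
    return s, c1 | c2              # OR combines the two carries

def ripple_carry_add(a_bits, b_bits, cin=0):
    """Add two equal-length bit lists, least significant bit first."""
    out = []
    for a, b in zip(a_bits, b_bits):
        s, cin = full_adder(a, b, cin)
        out.append(s)
    return out, cin  # sum bits plus the final carry-out

# 3 + 3 = 6: [1, 1, 0] + [1, 1, 0] -> [0, 1, 1] with carry 0
bits, carry = ripple_carry_add([1, 1, 0], [1, 1, 0])
```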
Half subtractor
As before, I'll start with subtracting 1-bit numbers, generating a difference and a borrow. A will be the minuend and B will be the subtrahend. I.e., the circuit will compute A - B.
Here's the truth table:
Converting that to equations:
This converts easily to a circuit very similar to the half adder. The only difference is the inverter on A for the computation of the borrow.
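In the same Python style as the adder sketches above, the half subtractor is one XOR for the difference and a NOT-A AND B for the borrow:

```python
def half_subtractor(a, b):
    """Difference is A XOR B; borrow is (NOT A) AND B."""
    return a ^ b, (1 - a) & b

for a in (0, 1):
    for b in (0, 1):
        d, borrow = half_subtractor(a, b)
        print(f"{a} - {b} = {d} with borrow {borrow}")
```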
Full subtractor
Here's the truth table and corresponding maps for the full subtractor, which takes into account an incoming borrow. I'll skip the step of writing out the equations, as the maps can easily be
constructed directly from the truth table.
As before, the next step is to find the groups in the map in order to simplify the logic.
Taking the red group first, we have:
From the half subtractor, we have various pieces of this, and can do the same thing we did with the full adder: use a couple half-subtractors and an OR gate:
As with the full adder, full subtractors can be strung together (the borrow output from one digit connected to the borrow input on the next) to build a circuit to subtract arbitrarily long binary numbers.
Notice that subtractors are almost the same as adders. In fact a single circuit is generally used for both, with some "controllable invertors" being used to switch between operations. Going further
than that, a CPU contains an Arithmetic-and-Logic-Unit (aka ALU) that takes two numbers, and an operation selector to configure it to perform one of a variety of arithmetic or logic operations.
Adder on a chip
This was an interesting exercise, but we'll never need to build an adder from gates. There are adder chips that can be dropped into our designs. The 7483 is one example.
There's a lot in this section:
• describing a circuit's desired behaviour with a truth table,
• extracting an AND-OR equation for each output column of the table,
• simplifying those equations using Karnaugh Maps,
• implementing the simplified equations using logic gates, and
• looking for similarities in existing designs to leverage work already done.
While you can always implement a truth table by ORing ANDed terms, with NOTs in the right places, it's usually not the most efficient. When using discrete gate ICs, one goal is always to minimize
the chip count; fewer chips means a faster circuit, using less power, and generating less heat. These things aren't as relevant when you are using MCUs (they are when designing them, though), but
it's a fun puzzle to solve. | {"url":"https://learn.adafruit.com/combinational-logic/or-and-nor","timestamp":"2024-11-10T00:05:14Z","content_type":"text/html","content_length":"103371","record_id":"<urn:uuid:2aa0d7a4-01be-495f-a9dc-9cef72f59d26>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00677.warc.gz"}
Title: The P=W conjecture and hyper-Kähler geometry.
Abstract: Topology of Hitchin’s integrable systems and character varieties play important roles in many branches of mathematics. In 2010, de Cataldo, Hausel, and Migliorini discovered a surprising
phenomenon which relates these two very different geometric objects in an unexpected way. More precisely, they predict that the topology of Hitchin systems is tightly connected to Hodge theory of
character varieties, which is now called the “P=W” conjecture. In this talk, we will discuss recent progress of this conjecture. In particular, we focus on general interactions between topology of
Lagrangian fibrations and Hodge theory in hyper-Kähler geometries. This hyper-Kähler viewpoint sheds new light on both the P=W conjecture for Hitchin systems and the Lagrangian base conjecture for
compact hyper-Kähler manifolds.
Where: Meeting virtually via zoom (e-mail reynoso@math.columbia.edu for further details)
When: Tuesday, January 19, 2021 at 12pm | {"url":"https://www.math.columbia.edu/2021/01/19/january-19-junliang-shen/","timestamp":"2024-11-06T00:46:03Z","content_type":"text/html","content_length":"39912","record_id":"<urn:uuid:1193405b-be34-4039-a0e5-2cc68a07ce92>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00375.warc.gz"} |
Introductory algebra bittinger tutorial
introductory algebra bittinger tutorial Related topics: math formulas sheet
basic mathematics,2
algebra 1 question solvers
solving nonlinear system polynomial
algebra inequality calculator
math 75a practice midterm i solutions
factoring polynomials for dummies
Taks Physics Formula Sheet
polynomial calculator
solving an expression in vertex form
quadratic equations formula
Author Message
boaX Posted: Saturday 05th of May 21:33
Hi math fanatics. This is my first post in this forum. I struggle a lot with introductory algebra bittinger tutorial equations. No matter how hard I try, I just am not able to solve
any problem in less than an hour. If things go this way, I fear I will not be able to get through my math exam.
Back to top
AllejHat Posted: Sunday 06th of May 07:15
Can you please be more elaborate as to what sort of aid you are expecting to get. Do you want to get the principles and solve your assignments on your own or do you need a utility
that would give you a step-by-step solution for your math assignments ?
From: Odense,
Back to top
Admilal`Leker Posted: Sunday 06th of May 21:14
I agree. Stress will lead you no where. Algebrator is a very useful tool. You don’t need to be a computer expert in order to operate it. Its simple to use, and it works great.
From: NW AR, USA
Back to top
Incicdor Posted: Monday 07th of May 15:24
Can a software really help me excel my math? Guys I don’t want something that will solve equations for me, instead I want something that will help me understand the subject as well.
From: THE
Back to top
malhus_pitruh Posted: Tuesday 08th of May 08:14
Algebrator is the program that I have used through several math classes - Algebra 1, College Algebra and Basic Math. It is truly a great piece of algebra software. I remember going through problems with subtracting fractions, binomials and least common measure. I would simply type in a problem from the workbook, click on Solve – and get a step-by-step solution to my algebra homework. I highly recommend the program.
From: Girona,
Back to top
Ashe Posted: Wednesday 09th of May 17:57
Sure, why not! You can grab a copy of the software from https://softmath.com/reviews-of-algebra-help.html. You are bound to get addicted to it. Best of Luck.
Back to top | {"url":"https://softmath.com/algebra-software/radical-equations/introductory-algebra-bittinger.html","timestamp":"2024-11-12T22:38:38Z","content_type":"text/html","content_length":"42605","record_id":"<urn:uuid:5f067eeb-cc17-45b8-989b-47fc780a78c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00379.warc.gz"} |
Understanding Mathematical Functions: How To Multiply Square Root Functions
Mathematical functions are essential tools in understanding and analyzing relationships between variables. They provide a systematic way of examining how one quantity depends on another. When it
comes to multiplying square root functions, it's important to understand the unique properties and significance of these functions. By mastering this concept, you can apply it to real-world problems
and gain a deeper understanding of mathematical relationships.
Key Takeaways
• Mathematical functions help analyze relationships between variables systematically.
• Multiplying square root functions is important for understanding mathematical relationships in real-world problems.
• Understanding and simplifying multiplied square root functions is crucial for easier analysis.
• Multiplying square root functions has practical applications in real-life situations.
• Identifying and overcoming challenges in multiplying square root functions is essential for mastering this concept.
Understanding Square Root Functions
In mathematics, understanding functions is crucial for solving equations and finding patterns in data. Square root functions, in particular, play a significant role in various mathematical
applications. Here, we will delve into the definition, examples, and properties of square root functions to gain a better understanding of their role in mathematics.
A. Definition of square root functions
A square root function is a function that contains a square root (√) symbol. It can be represented as f(x) = √x, where x is the input value and f(x) is the output value. In simpler terms, the square
root function gives the non-negative square root of the input value.
B. Examples of square root functions
Some common examples of square root functions include:
• f(x) = √x
• g(x) = √(x + 4)
• h(x) = 3√x
C. Properties of square root functions
Square root functions possess the following properties:
• The domain of a square root function is the set of all real numbers greater than or equal to zero. This is because a square root of a negative number is not a real number.
• The range of a square root function is also the set of all real numbers greater than or equal to zero.
• The graph of a square root function is a curve that starts from the point (0, 0) and extends to the right in the positive x-axis direction.
Understanding Mathematical Functions: How to Multiply Square Root Functions
When it comes to understanding and working with mathematical functions, multiplying square root functions can often be a daunting task. However, with a clear understanding of the steps involved and
common mistakes to avoid, it can be a more manageable process.
Explanation of How to Multiply Square Root Functions
Square root functions involve the use of the square root symbol (√) and are typically expressed in the form f(x) = √x. When multiplying two square root functions together, it's important to remember
that you are essentially finding the product of two expressions that contain the square root symbol.
• Identify the two square root functions to be multiplied.
• Express each function as f(x) = √x.
• Multiply the two functions together to find the product.
Step-by-Step Example of Multiplying Square Root Functions
To illustrate the process of multiplying square root functions, let's consider the following example:
Consider the functions f(x) = √(2x) and g(x) = √(3x).
When multiplying these functions together, the steps involved would be as follows:
• Identify the two square root functions: f(x) = √(2x) and g(x) = √(3x).
• Express each function as f(x) = √(2x) and g(x) = √(3x).
• Multiply the two functions together: f(x) * g(x) = (√(2x)) * (√(3x)) = √(2x) * √(3x) = √(2x * 3x) = √(6x^2).
Therefore, the product of the two square root functions f(x) = √(2x) and g(x) = √(3x) is √(6x^2).
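If you want to sanity-check products like this one, a small SymPy sketch works; note the positivity assumption on x, which is what justifies combining the radicals:

```python
import sympy as sp

x = sp.symbols("x", positive=True)   # sqrt(2x)*sqrt(3x) = sqrt(6x^2) needs x >= 0
product = sp.sqrt(2 * x) * sp.sqrt(3 * x)
print(sp.simplify(product))          # sqrt(6)*x, i.e. sqrt(6x^2) for positive x
```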
Common Mistakes to Avoid When Multiplying Square Root Functions
When multiplying square root functions, it's important to be mindful of common mistakes that can arise during the process. Some of the common mistakes to avoid include:
• Mixing up the order of the multiplication.
• Forgetting to simplify the product of the functions.
• Incorrectly combining terms inside the square root.
By being aware of these potential pitfalls and following the step-by-step process, you can effectively multiply square root functions with confidence and accuracy.
Understanding Mathematical Functions: How to Multiply Square Root Functions
When dealing with square root functions, it is important to know how to simplify multiplied square root functions in order to make analysis easier. In this chapter, we will discuss the techniques for
simplifying multiplied square root functions, provide examples, and highlight the importance of simplifying for easier analysis.
A. Techniques for simplifying multiplied square root functions
• Combining like terms
When multiplying square root functions, it is important to combine like terms to simplify the expression. This involves identifying terms with the same radicand and multiplying their coefficients.
• Rationalizing the denominator
In some cases, it may be necessary to rationalize the denominator of a multiplied square root function to simplify the expression. This can be done by multiplying the numerator and denominator by
the conjugate of the denominator.
B. Examples of simplifying multiplied square root functions
• Example 1
Given the functions f(x) = √(2x + 3) and g(x) = √(5x - 1), simplify the expression f(x) * g(x).
• Example 2
If h(x) = √(3x + 4) and k(x) = √(3x - 2), find the simplified form of h(x) * k(x).
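One way to work these two examples (treating the radicands as non-negative so the radicals can be combined): for Example 1, f(x) * g(x) = √(2x + 3) * √(5x - 1) = √((2x + 3)(5x - 1)) = √(10x^2 + 13x - 3); for Example 2, h(x) * k(x) = √((3x + 4)(3x - 2)) = √(9x^2 + 6x - 8).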
C. Importance of simplifying for easier analysis
• Simplifying multiplied square root functions is important for easier analysis and evaluation of the functions. By simplifying, it becomes easier to identify patterns, critical points, and other
properties of the functions.
• Additionally, simplifying the functions allows for easier comparison and manipulation, which can be useful in various mathematical calculations and applications.
Real-life Applications of Multiplying Square Root Functions
When it comes to understanding mathematical functions, it's important to consider their practical applications in real-life scenarios. One such concept is the multiplication of square root functions,
which finds use in various fields. Let's explore some examples of how multiplying square root functions is applied in real-life situations and why it's crucial to understand this concept in practical
Examples of real-life situations where multiplying square root functions is used
• Engineering: In engineering, multiplying square root functions is commonly employed in designing structures, such as bridges and buildings. The calculations involved in determining stress
distribution, load bearing capacities, and material strength often require the manipulation of square root functions.
• Physics: The study of natural phenomena and physical principles often involves the use of square root functions. Multiplying these functions is essential for analyzing and predicting the behavior
of various systems, including oscillations, waves, and heat transfer.
• Finance: Financial analysts and economists use square root functions in modeling risk and volatility in investment portfolios. Understanding how to multiply these functions is crucial for making
informed decisions and managing financial risks effectively.
• Medicine: In medical imaging and diagnostic procedures, square root functions are utilized to interpret and process data from scans and tests. Multiplying these functions helps in analyzing
complex medical data and deriving meaningful insights for diagnosis and treatment.
Importance of understanding this concept in practical applications
• Accurate Modeling: In practical applications, multiplying square root functions allows for more accurate modeling and analysis of real-world phenomena. Whether it's predicting the behavior of a
physical system or estimating financial risks, a sound understanding of this concept is essential for reliable results.
• Problem Solving: Many real-life problems involve the manipulation of square root functions, and being proficient in multiplying these functions enables individuals to solve complex problems
efficiently. From engineering challenges to financial calculations, this knowledge is invaluable for problem-solving in diverse fields.
• Innovation and Optimization: The ability to multiply square root functions is fundamental to innovation and optimization in various domains. Whether it's designing efficient structures,
developing cutting-edge technologies, or optimizing resource allocation, this concept plays a significant role in pushing the boundaries of what's possible.
Common Challenges in Multiplying Square Root Functions
When it comes to multiplying square root functions, students often face several challenges that can hinder their understanding and application of this concept. Let's take a closer look at some of the
common difficulties:
A. Identification of common difficulties when multiplying square root functions
• 1. Complexity of the functions: Square root functions can involve complex mathematical operations, making it difficult for students to grasp the concept of multiplying them together.
• 2. Understanding the properties: Students may struggle to understand the properties of square root functions and how they apply when multiplying them.
• 3. Visualization: Visualizing the multiplication of square root functions and understanding how it affects the overall function can be challenging for some students.
B. Strategies for overcoming challenges in understanding and applying this concept
• 1. Practice problems: Engaging in ample practice problems can help students familiarize themselves with the multiplication of square root functions and improve their understanding of the concept.
• 2. Use of visual aids: Utilizing visual aids such as graphs and diagrams can aid in visualizing the multiplication of square root functions and enhance comprehension.
• 3. Seeking help from educators: Students should not hesitate to seek help from their teachers or tutors to clarify any doubts and gain a deeper understanding of the concept.
Recap: Understanding how to multiply square root functions is crucial in many mathematical applications. It allows us to manipulate and simplify complex equations, making problem-solving much more
manageable. By mastering this concept, we open the door to a deeper understanding of mathematical functions and their real-world implications.
Encouragement: I strongly encourage you to further explore and practice multiplying square root functions. The more you work with this concept, the more confident and skilled you will become in using
it to solve mathematical problems. Whether you are a student looking to improve your math skills or a professional seeking to enhance your problem-solving abilities, mastering this concept will
undoubtedly benefit you in the long run. | {"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-multiply-square-root-functions","timestamp":"2024-11-13T01:42:42Z","content_type":"text/html","content_length":"216320","record_id":"<urn:uuid:4ed3280b-2273-49a8-bc4a-a7478e1813e1>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00882.warc.gz"}
Celebratio Mathematica
by David Eisenbud and Tsit-Yuen Lam
Irving Kaplansky, known to his friends as Kap, was born on March 22, 1917 in Toronto, the youngest of four children. His parents had recently emigrated from Poland, where his father Samuel
had studied to be a rabbi. In Toronto Samuel worked as a tailor, while Kap’s mother established a chain of Health Bread Bakeries that ultimately supported the whole family. Kap died on the
25th of June 2006, at his son Steven’s home, in Sherman Oaks, California.
Kap’s mathematical talent was apparent early. He attended the University of Toronto, where he got his bachelor’s and master’s degrees (1938–39). He was one of the two
Putnam Fellows in 1938, the first year of that most prestigious of all North American undergraduate mathematical competitions. Kap went to Harvard University and received his
Ph.D. there in 1941, working under Saunders Mac Lane. He was Benjamin Peirce Instructor at Harvard from 1941 to 1944, and then joined the Applied Mathematics Group doing war work at
Columbia University from 1944 to 1945.
After the war, Kap joined the Mathematics Department at the University of Chicago, where he was chair of the department from 1962 to 1967 and George Herbert Mead Distinguished Service
Professor from 1969. He retired from the University of Chicago in 1984 in order to become the second director of the Mathematical Sciences Research Institute (MSRI) in Berkeley,
established just a few years before by Shiing-Shen Chern (its first director), Calvin C. Moore, and Isadore M. Singer. He was appointed as professor of mathematics at UC Berkeley at
the same time. In 1985–86, he served as president of the American Mathematical Society.
During his lifetime, Kap received many professional honors and recognitions. These include the Quantrell Award for Excellence in Undergraduate Teaching at the University of
Chicago (1961) and the Steele Prize (Career Award) from the American Mathematical Society (1989). Kap was elected to the American Academy of Arts and Sciences in 1965, and to the
National Academy of Sciences in 1966. He received two honorary degrees: Doctor of Mathematics from the University of Waterloo, and Doctor of Science from Queen’s University.
Kap was proud of the fact that he was one of the first to suggest to the National Science Foundation (NSF) the founding of new mathematics research institutes beyond the Institute for
Advanced Study in Princeton. Already in the 1960s he had told an NSF panel that the growth of U.S. mathematics made the creation of such institutes in the Midwest and on the west coast an
important priority. Kap led MSRI until 1992, overseeing its move from its temporary quarters to its spectacular permanent location above the campus, and putting many MSRI
traditions in place. Among his many long-term contributions to the life of the institute was the creation in 1986 of a group of Friends of MSRI that included James H. Simons, William
Randolph Hearst III, Elwyn R. Berlekamp, and Steven Wolfram, three of whom subsequently became trustees and major contributors to MSRI.
Kap retired as director of MSRI and as professor of mathematics in 1992, but came to his office at MSRI to do research every day and attended every colloquium at Berkeley until he
became ill in 2005. A man of extraordinarily regular lifetime habits, he unfailingly took the same bus down from MSRI for his daily swim. He remained active in mathematical research
and publication until just eight months before his death.
In all Kap wrote some 151 journal articles (the last published in 2004) and 11 books. His mathematical interests were extraordinarily broad, and his papers touch on topological
algebra and operator algebras, the arithmetic and algebraic aspects of quadratic forms, commutative and homological algebra, noncommutative ring theory and differential
algebra, Lie theory, combinatorics and combinatorial number theory, infinite abelian groups, linear algebra, general algebra, game theory, probability and statistics. Among
his most important papers were those on topological algebra and operator theory published in 1948–1952, and those on noncommutative ring theory, such as the classic “Rings with a
Polynomial Identity” (Bulletin of the American Mathematical Society, 1948), which started a whole field. His books became legendary for their clarity, style and brevity.
The interest and skill that Kap showed in teaching is suggested by his books, but demonstrated by his mentoring of Ph.D. students. Fifty-five received their degrees from him between 1950
and 1978, and their work proved fertile: as of this writing (summer 2007) Kap’s “mathematical family,” consisting of students, grand-students…, has at least 627 members. Joe Rotman, one
of Kap’s students, wrote:
Every course, indeed, every lecture, was a delight. Courses were very well organized, as was each lecture. Results were put in perspective, their applications and importance made
explicit. Humor and droll asides were frequent. Technical details were usually prepared in advance as lemmas so as not to cloud the main ideas in a proof. Hypotheses were stated
clearly, with examples showing why they were necessary. The exposition was so smooth and exciting; I usually left the classroom feeling that I really understood everything. To deal
with such arrogance, Kap always assigned challenging problems, which made us feel a bit more humble, but which also added to our understanding. He was a wonderful teacher, both in
the short term and for the rest of my mathematical career. His taste was impeccable, his enthusiasm was contagious, and he was the model of the mathematician I would have been
happy to be.
Kap was not a naturally sociable person before his marriage, but his world was vastly enriched when he married Chellie Brenner in 1951. Chellie and Kap had three children, Steven, Alex
and Lucy. Chellie was Kap’s opposite in terms of outgoing open warmth. Chellie brought streams of friends and colleagues into their home, and in the years of Kap’s directorship at MSRI she
presided as a mother hen over the many visitors as well as over Kap himself. When Chellie became ill in the last part of Kap’s life the tables were turned, and he nursed her faithfully.
Kap was entranced by music very early. He wrote,
At age 4, I was taken to a Yiddish musical, Die Goldene Kala. It was a revelation to me that there could be this kind of entertainment with music. When I came home I sat down and played
the show’s hit song. So I was rushed off to piano lessons. After 11 years I realized there was no point in continuing; I was not going to be a pianist of any distinction…. I enjoy
playing piano to this day. […] God intended me to be the perfect accompanist—or better, the perfect rehearsal pianist. I play loud, I play in tune, but I don’t play very well.
Nevertheless, his playing was greatly enjoyed by his friends and family, and later by his colleagues at MSRI, where he would play at Christmas parties and other occasions. His daughter
Lucy became a well-known folksinger-songwriter, and he sometimes accompanied her concerts, for example at Berkeley’s Freight and Salvage. She wrote that “[f]rom as early as I can
remember I would sing while he played the piano. He taught me dozens of songs from the 1930s and ’40s, as well as from Gilbert and Sullivan operettas. I still remember most of these songs.”
Kap’s musical interest was centered on Gilbert and Sullivan, and on popular songs of the “golden age”, about 1920–1950. He composed a number of songs and was proud of a musicological
observation about songs of this period:
Most had the form \( \mathrm{AABA} \). I noticed there was a second form (“Type 2”): \( \mathrm{AA}^{\prime}\mathrm{BAA}^{\prime}\mathrm{BA}^{\prime\prime} \), where \( \mathrm{A} \) is a 4 bar theme, \( \mathrm{A}^{\prime} \) and \( \mathrm{A}^{\prime\prime} \) are variants, and \( \mathrm{B} \) is a contrasting 8 bar theme. (Though I assumed any jazz musician knew about this, nothing about it was found in the literature.) Type 2 is really better for songs. (In Woody Allen’s Radio Days, the majority of the 20 songs are Type 2.) As proof I tried to show that
you could make a passable song out of such an unpromising source of thematic material as the first 14 digits of \( \pi \).
Enid Rieser produced lyrics, and Lucy often performs this song on her tours. Kap is survived by his wife Chellie, his children Alex, Lucy and Steven, and by his grandchildren Aaron and
The authors are grateful to Hyman Bass, from whose account many of the quotes and facts above are taken. | {"url":"https://celebratio.org/Kaplansky_I/article/398/","timestamp":"2024-11-13T16:16:12Z","content_type":"text/html","content_length":"27010","record_id":"<urn:uuid:7204be30-c290-4b49-91d1-ab889dbe753d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00718.warc.gz"} |
What Is Spacetime in Einstein Theory | knowledge base
What Is Spacetime in Einstein Theory
Published: July 2, 2024
• the laws of physics and the speed of light must be the same for all uniformly moving observers, regardless of their state of relative motion.
• For this to be true, space and time can no longer be independent. Rather, they are “converted” into each other in such a way as to keep the speed of light constant for all observers.
• (This is why moving objects appear to shrink, as suspected by FitzGerald and Lorentz, and why moving observers may measure time differently, as speculated by Poincaré.) Space and time are
relative (i.e., they depend on the motion of the observer who measures them) and light is more fundamental than either.
** Note: I don’t have intuition about spacetime in Einstein theory. **
What is spacetime in Einstein theory?
• It’s like all objects in the universe sit in a smooth, four-dimensional fabric called space-time.
• This fabric is curved by the mass and energy of objects in it.
• The curvature of space-time causes objects to move on curved paths. We see these paths as the force of gravity.
• This fabric is not just space and time, but a combination of both. It’s like a 4D fabric where 3D is space and 1D is time.
Now to me this fabric thing confuses me. This just doesn’t feel right to me.
Let me know if you have any questions or comments.
It will help me to improve/learn. | {"url":"https://kbs.murarisumit.in/en/f7de1f85aff4e3440204f197756f8b68/","timestamp":"2024-11-11T21:13:42Z","content_type":"text/html","content_length":"10073","record_id":"<urn:uuid:ac18b476-31e3-4706-ac3c-0940d8a71a1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00305.warc.gz"} |
Dynamically Created Addition Worksheets
Here is a graphic preview for all of the addition worksheets. These dynamically created addition worksheets allow you to select different variables to customize for your needs. The addition
worksheets are randomly created and will never repeat so you have an endless supply of quality addition worksheets to use in the classroom or at home. Our addition worksheets are free to download,
easy to use, and very flexible.
These addition worksheets are a great resource for children in Kindergarten, 1st Grade, 2nd Grade, 3rd Grade, 4th Grade, and 5th Grade.
Click here for a Detailed Description of all the Addition Worksheets. | {"url":"https://www.math-aids.com/Addition/","timestamp":"2024-11-08T02:37:41Z","content_type":"text/html","content_length":"67540","record_id":"<urn:uuid:b9dc05be-e1d9-46cf-9b58-26b8dabb624f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00012.warc.gz"} |
Professional Baseball Pitchers' Performance and its Effect on Salary
Creative Commons CC BY 4.0
In this study we identify factors that affect a Major League Baseball (MLB) pitcher's salary. We are interested in knowing whether ability is a good indicator of compensation. To test this we created
a model to predict the salaries of pitchers in the MLB. | {"url":"https://ru.overleaf.com/articles/professional-baseball-pitchers-performance-and-its-effect-on-salary/xndsqqqnrynm","timestamp":"2024-11-07T20:08:54Z","content_type":"text/html","content_length":"55371","record_id":"<urn:uuid:7828c349-87ee-41b6-acb8-5e3d58fb396f>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00222.warc.gz"} |
Failure Modeling and Sensitivity Analysis of Ceramics Under Impact
A micromechanical multi-physics model for ceramics has been recalibrated and used to simulate impact experiments with boron carbide in abaqus. The dominant physical mechanisms in boron carbide have
been identified and simulated in the framework of an integrated constitutive model that combines crack growth, amorphization, and granular flow. The integrative model is able to accurately reproduce
some of the key cracking patterns of Sphere Indentation experiments and Edge On Impact experiments. Based on this integrative model, linear regression has been used to study the sensitivity of sphere
indentation model predictions to the input parameters. The sensitivities are connected to physical mechanisms, and trends in model outputs have been intuitively explored. These results help suggest
material modifications that might improve material performance, prioritize calibration experiments for materials-by-design iterations, and identify model parameters that require more in-depth modeling.
Issue Section:
Research Papers
Calibration, Ceramics, Damage, Density, Fracture (Materials), Sensitivity analysis, Simulation, Modeling, Flow (Dynamics), Engineering simulation, Fracture (Process), Granular materials, Boron
1 Introduction
Ceramics are brittle or quasi-brittle materials used in a vast range of applications from body and vehicle armors, semi-conductors, scratch-resistant shields, kitchenware, and everyday appliances.
While the properties of ceramics can be as varied as their chemical composition and structure, armor ceramics in particular are characterized by high energy absorption, high hardness, high
compressive strength and preferably low density. All these properties make them suitable for impact and blast resistance during combat. Boron carbide, with its relatively low density and high
hardness, is of special interest for research in the field of armor ceramics [1].
A ceramic constitutive model suitable for a given size and/or application might not be able to capture the relevant physics for another application. Particle-based models [2] attempting to capture
the very large number of microscale defects in ceramics are computationally intensive. Numerical models of discrete cracks [3,4] or crack-like features [5–10] are not suitable for modeling
simultaneous propagation of millions of micro-cracks, as is often the case for high rate simulations of armor ceramics. Continuum damage models [11–14] are applicable so long as the continuum
assumption holds true. The applicability of such models in penetration or fragmentation problems are dependent on the numerical solver and the output property of interest. Ensemble averaging of
atomistic properties of single crystal ceramics provides a more meaningful representation of micro-mechanical properties such as fracture toughness, moduli, and Poisson’s ratio. However, these models
also have some non-physical parameters that are difficult to calibrate experimentally. Often, calibration of these parameters based on macroscopic response might be appropriate for modeling the
physics and mechanisms involved in the experiments they are used to calibrate against but may not be suitable for other loading scenarios. Despite these limitations, many of these models can be used
for extreme environment simulations, some of which are often infeasible or expensive to replicate in laboratories. They can also help us understand key trends and guide material processing toward
improving material performance.
Like most other materials, scale separation becomes quintessential in modeling armor ceramics. From the point of simulations, this might not only mean differences in the physics at different scales
but also the calibration of parameters. As an example, fragmentation in ceramics is observed typically at two scales: a microstructure-dependent scale leading to smaller fragments and a
geometry-dependent macroscale [15]. Depending on the problem, fragmentation can either lead to granular phase transition and granular flow as observed in the Mescall region [16,17], or it can also
lead to disintegration and limit the peak strength, as observed in unconfined Kolsky bar experiments [18]. From a modeling viewpoint, this can lead to a different calibration of the fragmentation or
granular transition criterion for different experiments, depending on the dominant fragmentation mechanism. In addition to this, the resolution of the simulations limits the fragment size that can be
captured in a continuum granular mechanics model. The fragment statistics are used to calibrate such granular mechanics models [19] and thereby influence the calibration at a given resolution.
The key micro-mechanisms in boron carbide under impact are amorphization [20], crack growth, crack interaction, and crack coalescence [21] leading to fragmentation followed by granular flow [22]. The
current work exercises an integrated ceramics model [23–25] that combines these key features with a modified granular transition criterion developed by Ref. [19] and tries to simulate key failure
mechanisms observed in Sphere Indentation experiments [26] and Edge On Impact experiments [27–29].
In this study, the integrated model has been used to assess the sensitivity of material behavior to selected model parameters for sphere indentation simulations. Broadly, there are two types of
sensitivity analysis methods in the existing literature [30]: local and global. Some of the popular local methods include the one-at-a-time (OAT) method and the derivative-based methods. OAT method [
31–33] involves studying the effect of the output by locally perturbing one input variable at a time while keeping the other variables at their baseline values. Local derivative-based methods [34–36]
involve estimating the partial derivative of the output with respect to an input variable by small perturbations of that variable around a fixed point in the input space. The partial derivatives act
as natural sensitivity measures. Although fast and transparent, the main drawback of local methods is that they do not thoroughly explore the entire input space. The global sensitivity methods try to
alleviate this problem and also try to study the effect of large changes in the input variables. Among the global sensitivity analysis method, variance-based methods [31,37,38] and linear regression
analysis [39,40] have been most extensively used. Variance-based methods are based on a decomposition of the variance of the output into terms corresponding to the different input variables as well
as their interactions. The sensitivity measure of the output for a particular input variable is the amount of variance in the output attributable to that input variable. Since variance-based methods
allow the examination of sensitivities across the whole input space, they often make use of emulators/surrogates [41–44] to reduce the computational expense of too many model runs. Linear regression
involves fitting of the input-output data assuming that the output is linearly related to the data. The standardized regression coefficients obtained from the regression fit can then be used as
measures of parameter sensitivity.
In this work, sensitivity analysis is performed based on a linear regression fit to the ceramics model data to study the importance of parameters with respect to the selected quantities of interest. The most sensitive parameters have been identified, and their trends have been introspectively speculated and compared against simulation results. Finally, suggestions have been made toward material processing modifications and prioritization of calibration experiments and/or more in-depth modeling to support specific parameters.
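As a concrete illustration of this workflow, here is a minimal scikit-learn sketch of standardized-regression-coefficient sensitivities; the parameter names and the synthetic data below are placeholders, not the actual ceramics model inputs or outputs:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Placeholder design: rows are simulation runs, columns are sampled inputs
names = ["fracture_toughness", "flaw_density", "E_C", "friction"]
X = rng.uniform(size=(200, len(names)))
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(200)  # toy output

# Standardize inputs and output so the coefficients are comparable
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()

coefs = LinearRegression().fit(Xs, ys).coef_  # standardized regression coefficients
for name, c in sorted(zip(names, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:>20s}: {c:+.2f}")
```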
2 Overview—Integrative Model
The integrated model builds on Ref. [23] and combines multiple mechanisms that are deemed dominant in boron carbide. It incorporates amorphization induced damage from Ref. [45], fracture dominated by
the growth and interaction of wing cracks from sliding flaws [23,46], and fragmentation and transition to granular mechanics [19] and granular plasticity. A Continuum breakage mechanics model (CBM) [
47,48] is used to simulate granular flow-induced plasticity. The current implementation in the integrative model is an improvement over previous implementations by Refs. [23–25] as it combines all
the mechanisms independently developed and/or improved by the aforementioned authors with a modified physics-based granular transition criterion based on Ref. [19], along with the recalibration of some
model parameters.
2.1 Kinematics and Equation of State.
Multiplicative split of the deformation gradient tensor into parts associated with micro-crack induced damage, amorphization, and granular plasticity is used to model the kinematics.
The temperature, the Hugoniot pressure, the Grüneisen coefficient (Γ), the reference density, the cold energy, and the specific heat at constant entropy are used in the Mie–Grüneisen equation of state to compute the pressure of the intact solid.
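As a reference for the reader, a hedged sketch of one standard shock-Hugoniot form of the Mie–Grüneisen equation of state follows; the default parameter values are illustrative placeholders, not the calibrated boron carbide values, and the paper's exact formulation may differ:

```python
def mie_gruneisen_pressure(rho, e, rho0=2510.0, c0=9600.0, s=1.0, gamma0=1.0):
    """One common Mie-Gruneisen form: p = p_H * (1 - gamma0*eta/2) + gamma0*rho0*e,
    where eta = 1 - rho0/rho and, in compression, the Hugoniot reference
    pressure is p_H = rho0 * c0**2 * eta / (1 - s*eta)**2 (SI units)."""
    eta = 1.0 - rho0 / rho
    if eta > 0.0:  # compression branch
        p_h = rho0 * c0**2 * eta / (1.0 - s * eta) ** 2
    else:          # tension: fall back to the linear elastic branch
        p_h = rho0 * c0**2 * eta
    return p_h * (1.0 - 0.5 * gamma0 * eta) + gamma0 * rho0 * e
```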
2.2 Amorphization.
Amorphization is modeled as parallel bands along which sliding occurs. There are three primary parts that describe the phenomenon: a criterion for the initiation of amorphization bands, sliding along these bands, and the damage induced by these bands, which in turn affects the transition to granular flow and the rate of crack growth through degradation of the critical stress intensity factor. The damage induced by amorphization is calculated from the shear deformation, material parameters associated with the number density of failed bands, and the band spacing.
It has been assumed that the damage induced by amorphization linearly degrades the fracture toughness of the material, so that the equivalent effective fracture toughness decreases linearly with the amorphization damage up to a critical failure damage.
2.3 Fracture and Fragmentation.
The fracture model incorporates the defect distribution in the material as micro-cracks and is based on classical wing-crack growth from sliding-flaw models. Macroscale material variability has been addressed by generating microstructural realizations of the local flaw distribution. Crack interactions are modeled using an effective medium approach incorporating dynamic crack growth. A non-dimensional damage parameter is used to calculate the properties of the effective medium. The damage parameter is a summation of the damage induced by amorphization and by micro-cracking, accumulated over the flaw families, with each family characterized by its number of flaws and a representative flaw size. The degradation of the elastic properties of the medium with growing damage is calculated from this damage parameter, where \(K_0\), \(G_0\), and \(\nu_0\) denote the bulk modulus, shear modulus, and Poisson’s ratio of the undamaged material.
The local compliance matrix is used to calculate the local stress state of the material. A dynamic crack growth criterion is used to calculate the crack velocity in terms of the maximum crack velocity, the stress intensity factor, the fracture toughness, and a crack growth exponent, which is a fitting parameter. The crack velocity is then used to compute incremental crack growth and the rate of damage.
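The paper's exact crack-velocity equation is not reproduced here; purely as an illustrative stand-in, a generic law of this family might look like the following sketch (the functional form and parameter values are assumptions, not the cited equation):

```python
def crack_velocity(K_I, K_IC_eff, v_max=4500.0, gamma=1.0):
    """Illustrative dynamic crack-growth law: no growth below the effective
    toughness K_IC_eff; velocity approaches v_max (m/s) as the stress
    intensity factor K_I grows well beyond K_IC_eff."""
    if K_I <= K_IC_eff:
        return 0.0
    return v_max * (1.0 - K_IC_eff / K_I) ** gamma
```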
2.4 Transition to Granular Mechanics.
The damage parameter serves as a switch from a damaged continuum to a granular continuum, determined by a threshold damage. This threshold damage is computed using empirical transition equations derived from a numerical crack coalescence model that estimates the extent of fragmentation at a given damage level; for the onset of granular mechanics, sufficient fragmentation has to occur. Based on a parametric study, empirical transition equations were proposed that correspond to a threshold degree of fragmentation. These equations suggest that the transition to granular mechanics occurs when the effective wing crack length reaches a certain threshold determined by the initial flaw size, the effective initial flaw density, and the initial flaw orientation distribution. For a fixed defect orientation, the criterion is expressed as a transition wing crack length, which can be reformulated as a damage-based criterion in which the critical damage is a function of the initial damage. Similarly, for a random flaw orientation distribution, the criterion can be expressed either in terms of a transition wing crack length or a transition damage.
2.5 Continuum Breakage Mechanics Model.
A rate-dependent constitutive model developed by Refs. [
] based on breakage theory in geomechanics [
] and previously implemented in an integrated ceramics model in Ref. [
], is used to model the continuum deformation of granular media. Overstress theory of viscoplasticity has been used to introduce rate dependency in the model. The yield surface is defined as
is breakage energy and is physically related to the energy dissipation due to particle breakage,
is the breakage index which denotes how close the fragment distribution is to the ultimate distribution,
is the relative porosity, and
are the pressure and deviatoric stress, respectively.
, and
are material and model parameters.
is the critical breakage energy density and is related to the strain energy density at the onset of comminution,
is related to the behavior associated with dilation,
is a dilatancy parameter, and
is the friction parameter. Energy dissipation due to particle breakage, reorganization of particles and friction dissipation is accounted for in the model.
The model captures the refragmentation of the initial fragments during granular flow via the evolution of the breakage index (B). The evolution of porosity and plastic strain is driven by a non-negative multiplier from the overstress formulation, in which N[BM] is the strain rate sensitivity coefficient and a viscosity parameter controls the rate dependence. Additional material parameters control the initiation and evolution of breakage. H( ) denotes the Heaviside step function: H(x) = 1 if x ≥ 0, and H(x) = 0 otherwise.
The constitutive relationship is defined in terms of the stiffness and the grading index ($ϑ$), which denotes how far the initial fragment size distribution is from the ultimate fragment size distribution. It is calculated from the ratio of the second moments of the initial and final fragment size distributions.
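Assuming the standard breakage-mechanics definition (a sketch; the exact convention adopted here may differ), the grading index is

$\vartheta = 1 - J_2^{u} / J_2^{0}$,

where $J_2^{0}$ and $J_2^{u}$ are the second moments of the initial and ultimate fragment size distributions; $\vartheta$ approaches 1 when the initial distribution is far from the ultimate one.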
3 Numerical Simulations—Integrative Model
3.1 Calibration of Model Parameters.
Calibration of the mechanical, equation of state, microstructural, micromechanical, amorphization, and granular flow parameters follows Refs. [24,25]. The critical transition damage for initiation of granular mechanics [23–25] is newly recalibrated in this work as per Ref. [19]. The critical breakage energy density (E[C]) in Ref. [47] is recalibrated using Ref. [56]. The grading index ($ϑ$) is re-estimated, and some of the arguments about fragment size evolution for granular media in Ref. [47] are revisited in the context of a low porosity fragmenting ceramic. These updated calibrations are discussed in the sections that follow. A summary of the calibrated integrative model parameters is presented in Table 1.
3.1.1 Calibration of Transition Damage.
The damage parameter is a summation of damage due to wing crack growth and amorphization-induced damage. When the damage exceeds the threshold set in Ref. [19], granular physics is activated in the model. The constant critical threshold damage used in Refs. [23–25] is able to capture some structural fragmentation (macroscale fragments), but does not sufficiently capture microscale fragments at or below the resolution of the integrative model simulations. This does not properly represent the initiation of granular mechanics in the Mescall region, as explained in Ref. [19]. The current model therefore uses the threshold defined in Ref. [19]. Flaw sizes are assumed to follow a Pareto distribution defined by the minimum half flaw size (s[min]), the maximum half flaw size (s[max]), and the distribution exponent (α[flaw]). The mean-squared average of the initial flaw size is computed from this distribution. From the transitional criteria, the transitional wing crack damage is estimated for the fixed defect orientation distribution (Ω[fd]) and the random defect orientation distribution (Ω[fr]). For the chosen material properties (Table 1), Ω[fd] = 3.5 and Ω[fr] = 2.9.
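For reference, assuming a Pareto density truncated to [s[min], s[max]] with exponent $\alpha$, i.e. $p(s) = \alpha s^{-(\alpha+1)} / (s_{min}^{-\alpha} - s_{max}^{-\alpha})$ (a sketch; the normalization used by the authors may differ), the mean-squared average evaluates to

$\overline{s^2} = \dfrac{\alpha}{2-\alpha} \cdot \dfrac{s_{max}^{2-\alpha} - s_{min}^{2-\alpha}}{s_{min}^{-\alpha} - s_{max}^{-\alpha}}$, for $\alpha \neq 2$.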
3.1.2 Calibration of Critical Breakage Energy Density.
Reference [56] explores the particle size effect on the behavior of granular boron carbide under quasi-static compression. The evolution of porosity and bulk modulus with changing hydrostatic pressure is used to compute the critical breakage energy density, E[C] [47,55]. Equation (5.7) in Ref. [55] is used for this computation. The grading index ($ϑBM$) is calculated from the initial and final fragment size distributions for granular boron carbide from Ref. [56]. The pressure beyond which the bulk modulus of granular boron carbide starts increasing can be thought of as the critical comminution pressure, p[CR] (Eq. (5.7) in Ref. [55]), at which the breakage of fragments initiates. The grading index, the critical comminution pressure, and the corresponding bulk modulus (K[g]) are used to calculate the value of E[C]. The results of the calculation are listed in Table 2.
In general, it can be expected that the strength of an individual fragment decreases with increasing fragment size due to the higher statistical probability of defects. However, the critical breakage energy density in a granular medium might have a more complicated response than can be explained by simple Weibull size scaling. This might arise for two reasons. First, the coordination number of particles varies with the size and shape distribution of the particles and the volume fraction of the granular medium, which influences the average stress a particle experiences. Second, although larger particles are more prone to failure due to the presence of more defects, the interaction becomes more complicated when the defects are comparable to the particle size. This, in addition to other uncertainties, might help account for the initial increase in E[C] with particle size (from 170 μm to 190 μm in Table 2), followed by a generally decreasing trend with further increases in particle size (from 190 μm to 470 μm in Table 2). In this study, given the uncertainties, an average value of 1.15 MPa has been used for E[C]. The sensitivity of the output to this parameter is evaluated later in this paper.
3.1.3 Calibration of Grading Index.
The grading index is a metric related to the evolution of the fragment size distribution that signifies the proximity of the initial fragment size distribution to the final or critical distribution [47,53–55]. It is calculated using Eq. (19). For most granular media, it has been observed in simulations and experiments that the largest grains do not undergo fragmentation, as they are surrounded by many smaller grains, leading to a more hydrostatic state of stress often referred to as the "cushioning effect" [60,61]. Therefore, breakage mechanics models assume that while the smallest grains fragment, the largest ones often remain intact. This is contrary to the expectation of larger fragments being weaker due to the statistical size effect [62]. In reality, the interplay between the size effect due to the number of defects and the coordination number, which can be expected to have opposite effects on the probability of fracture with size, controls the evolution of grain fragmentation until a steady-state is reached. In the case of extremely low porosity systems like the comminuted zone in ceramics, all fragments (or grains) are expected to be sufficiently supported. Hence, the effect of coordination number should not play a role initially, and the statistical size effect should dominate the failure response. For such systems, the assumption that the largest fragments (grains) do not re-fragment might not hold. Most experiments on granular media are not designed for such low porosity, high-pressure systems. Under high-pressure dynamic loading conditions, this can lead to a very high grading index if there is significant fragmentation of the largest grain. In this work, a grading index value of 0.95 has been selected, close to the values calculated from Ref. [56] in Table 2. As with all parameters, the sensitivity of model outputs to the grading index is evaluated later in this paper.
3.2 Simulation of Sphere Indentation Experiments.
Sphere indentation experiments on boron carbide were simulated in ABAQUS. The details of the experimental technique can be found in Ref. [26]. The geometry consists of a 1/4 in. diameter tungsten carbide sphere impacting a boron carbide cylinder, 1 in. in diameter and 1 in. in height (Fig. 1). Reduced integration eight-noded brick elements (C3D8R) were used to model both the tungsten carbide sphere and the boron carbide cylinder. Similar to Ref. [25], the cylinder was discretized using a mesh size of approximately 0.55 mm, with 99,682 elements. A kinematic contact algorithm for frictionless surfaces was used. The integrated ceramics model was used for the boron carbide cylinder, with properties as listed in Table 1. The tungsten carbide sphere was modeled using the Johnson–Cook material model parameters determined by Ref. [63]. The equation of state values from Ref. [64] were used (see Table 3). The simulations were conducted for 10 impact velocities from 100 to 1000 m/s.
3.3 Simulation of Edge On Impact Experiments.
Edge-On Impact experiments were performed by Strassburger on boron carbide and other ceramics. The experimental technique was developed at EMI, and the details of the experiments can be found in Refs. [27–29]. The simulation geometry consists of a steel projectile, 30 mm in diameter and 23 mm in height, impacting a 100 mm × 100 mm, 10 mm thick boron carbide plate along its edge (Fig. 2). Reduced integration eight-noded brick elements were used in ABAQUS for both the plate and the projectile. A kinematic contact algorithm for frictionless surfaces was used. A Johnson–Cook model calibration for hardened steel [65] was used for the steel projectile (see Table 4). The integrated ceramics model was used for the boron carbide plate. K[IC] and ρ[0] were modified in this model to 2.9 MPa·m^1/2 and 2530 kg/m^3, respectively, to be consistent with the values reported in Ref. [29]. All other parameters were set to the same values as in Table 1. The simulations were run until 7.5 μs after impact for eight impact velocities from 50 to 1010 m/s.
4 Results—Integrative Model
4.1 Sphere Indentation Simulations.
Typical features observed in sphere indentation experiments are a comminuted zone under the impactor with visible radial cracking on the surface. Immediately under the comminuted zone, cone cracks are also observed. In our simulations, damage (Fig. 3(a)) and density (Fig. 3(b)) localizations are observed that are presumed to be related to larger-scale radial cracks. Figure 4(a) shows the damage contour along a section of the ceramic cylinder 15 μs after a spherical indenter impact at 600 m/s. Figures 4(b) and 4(c) show the corresponding density contour along two orthogonal planes. High damage and low-density regions under the indenter signify the presence of a comminuted zone. In addition, we observe slanted damage and density localizations that have the appearance of cone cracks. Figures 4(b) and 4(c) demonstrate that the slanted low-density regions are repeatable across different orthogonal planes, confirming that they are 3D cone crack-like features, not simply an artifact of one particular cross section. Some other features observed in experiments, such as crack branching and lateral cracking, appear at certain impact velocities but are not repeated across all scenarios.
Figure 5 shows the damage contour at the top of the cylinder, 15 μs after impact, at different velocities. Figure 6 shows the corresponding density contour. The density contour shows more radial crack-like localization than the corresponding damage contour. In either case, the number of radial crack-like features increases with impact velocity, although counting radial cracks is a subjective assessment. The number of observed radial cracks is slightly lower than the number observed in Ref. [26] for boron carbide. Radial cracks were not quantified in Refs. [66,67].
Figure 7 shows that both the percentage of material that is granular and the percentage that is amorphized, 15 μs after impact, increase with impact velocity. The percentage of material that undergoes amorphization is, however, insignificant even at an impact velocity of 1000 m/s (less than 0.1%). Figure 8 shows that the maximum depth of penetration of the indenter increases almost linearly with impact velocity. Amorphization does not appear to significantly affect the depth of penetration over the impact velocity range studied here. Reference [24] argues that significant amorphization is not observed in sphere indentation experiments until an impact velocity of around 2 km/s. We can expect the effect of amorphization to be more pronounced at higher impact velocities or for different indenter shapes.
4.2 Edge On Impact Simulations.
Figure 9 shows the evolution of the surface damage contour with time for the 1010 m/s impact velocity in the Edge-On Impact simulation. The damaged region around the indenter grows with time. Around 2.625 μs after impact, damage starts localizing into cone crack-like features, which mostly become visible around 4 μs. There is a distributed damaged region, with a darker, more fragmented granular region closer to the impact site.
Figures 10 and 11 show a comparison with the experiments performed by Strassburger [27–29] at two impact velocities, 469 m/s and 1010 m/s. The experimental images (Figs. 10(a) and 11(a)) are based on the intensity of light reflected from the surface of the ceramic plate, viewed via a high-speed camera. It is not clear exactly which model output would best represent the changes in the intensity of reflected light; the intuitive options are damage, density, and out-of-plane displacement at the surface. Figures 10(b) and 11(b) show the damage pattern for 469 m/s and 1010 m/s impact velocity, respectively. The predicted damage pattern differs slightly in the two cases: there is a longer cone crack-like damage-localized feature for the 1010 m/s impact, along with a wider fragmented or granular region. In this paper, the numerical damage front represents the boundary capturing Ω ≥ 0, and the numerical granular front, or the edge of the granular region, represents the boundary capturing Ω ≥ Ω[f]. The numerical granular front in Fig. 10(b) appears to be at around the same location as the experimental damage front in Fig. 10(a). However, in Fig. 11(b), the numerical damage front appears to be at the same location as the experimental damage front in Fig. 11(a). In both Figs. 10(c) and 11(c), the density pattern has been adjusted to show even the slightest amount of dilation; in either figure it mimics the corresponding numerical damage pattern. Predictions of out-of-plane strain (Figs. 10(d) and 11(d)) show a marked difference between the two impact velocities: the region experiencing higher out-of-plane strain grows with increasing impact velocity. However, it is still not clear whether out-of-plane strain best correlates with the change in light intensity observed in experiments.
Figure 12 shows a comparison of the numerical damage contour (bottom row) with the experimental image [27] (top row) at three time instants. It appears that the numerical damage front (dotted line) always exceeds the experimental damage front (dashed line), while the numerical granular front (dashed-dotted line) is at or behind the experimental damage front.
Another interesting feature is that the velocity of the damage front in the middle of the plate always exceeds the velocity of the damage front at the surface of the plate (Fig. 13). For an impact velocity of 469 m/s, 4.5 μs after impact, the distance to which the granular and damage fronts have progressed at the mid-section of the plate (Fig. 13(b)) exceeds that at the surface of the plate (Fig. 13(a)). The mid-section damage, orthogonal to the plane of the plate, is also much higher at the middle than at the surface (Fig. 13(c)).
The numerical damage front velocity and granular front velocity have been calculated and compared against the experimentally reported damage front velocity in Fig. 14.
In the experiments, the damage front velocity initially rises sharply as a function of impact velocity and then plateaus until an impact velocity of around 469 m/s. Beyond this impact velocity, the damage front velocity gradually rises, reaching around 12 km/s at an impact velocity of 1010 m/s. In the simulations, however, both the numerical damage front velocity and the granular front velocity increase sharply at first, followed by a more gradual increase with further increases in impact velocity. The experimental damage front velocity appears roughly bounded between the granular and numerical damage front velocities, 6 μs after impact. This is not necessarily true at an early stage, when the numerical damage front and granular front almost coincide. The inset image in Fig. 14 shows the numerical granular and damage fronts. Although the simulations do not capture the sudden rise in damage front velocity beyond 469 m/s impact velocity, they feature the onset of amorphization around 742 m/s impact velocity, with the volume of the amorphized region gradually increasing with further increases in impact velocity (see Fig. 15). Figure 16 compares the growth in granular material percentage (GMP) versus amorphized material percentage, 7.5 μs after impact, with changing impact velocity. Despite the coincidence of the onset and increase of amorphization with the sudden rise in damage front velocity reported in experiments, it is not clear how amorphization could induce a sudden change in the damage front velocity. Reference [45] does not account for new crack nucleation as a consequence of amorphization, but it is not obvious that this would lead to a rise in front velocity. Another possible explanation for the experimentally observed rise in damage front velocity beyond 469 m/s impact velocity could be some sort of phase transformation that changes material properties; the current integrative model does not capture such a phase transformation. To summarize, the experimentally observed damage pattern in Edge-On Impact experiments correlates with the growth of cracks, the change in density, and the out-of-plane displacements observed in simulations using the integrative model. Typical features such as cone cracks, a distributed crack front, and even secondary crack zones (at low impact velocity) observed in experiments [28] can be reproduced in the simulations. However, unlike the simulations, the experimental patterns for boron carbide are not always symmetric and consistently discernible. The numerical damage front in most cases exceeds the experimental damage front, except at 1010 m/s impact velocity. The change in experimental damage front velocity with impact velocity is not accurately captured by the numerical damage front.
4.3 Necessity of Parameter Prioritization.
The ballistic performance of boron carbide can be improved by modifying its mechanical properties via doping and grain boundary engineering. Reducing the defect population by controlling the free carbon content and using densification techniques [68] can help improve hardness. Significant research has been conducted toward enhancing fracture toughness, strength, and/or modulus through different sintering aids [69–76]. Amorphization mitigation via silicon doping [77,78] and boron enrichment [79,80] has also been explored. However, performance enhancement via controlling granular flow through material modifications is not well understood, and the current focus is on characterization of granular flow [81]. With this plethora of research serving as a guide for material modifications, a ranking of the properties with the most pronounced influence on ballistic performance is desirable.
The integrative model currently has a huge parameter space with significant uncertainty around most of the parameters. While many of these parameters are flags that control the onset or suppression of mechanisms, a significant number are model parameters directly or indirectly related to physical properties. As a result, designing new materials is challenging for several reasons. First, the influence of model parameters and related material properties on model predictions is not well understood. Second, designing new materials by modifying individual properties poses a fiscal challenge: performing a set of experiments to calibrate multiple iterations of a material is expensive and laborious. A work-around that addresses both issues is to understand the sensitivities of model predictions to model parameters. This helps not only in understanding the trends that guide improvements in material performance, but also in selecting the most significant model parameters. The following subsections describe the setup, results, and conclusions from a sensitivity analysis study of sphere indentation simulations in boron carbide.
5 Problem Setup—Sensitivity Analysis
Reference [82] performed a sensitivity analysis of the ballistic impact of a silicon carbide (SiC) ceramic plate with a poly-ether-ether-ketone (PEEK) layer, selecting the peak normal contact force, the plastic dissipation in the ceramic and PEEK, and the impulse transmitted to the ceramic back face as output quantities of interest (QoIs). A penetration state function defined using the residual bullet velocity and the depth of penetration is used as a QoI in the ballistic impact of a SiC/ultra-high molecular weight polyethylene composite plate in Ref. [83]. Similarly, crater size, number of radial cracks, and ejecta velocity are useful QoIs for further studies of the ballistic performance of ceramics.
In this paper, sensitivity analysis of sphere indentation simulations has been performed using the same geometry and setup as highlighted in Sec. 3.2. The depth of penetration of the spherical
indenter and the granular material percentage, 15 μs after impact, are selected as the two output QoIs.
As seen before in Fig. 7, amorphization does not play a significant role in the range of impact velocities studied. To simplify our problem, amorphization is deactivated and the CBM model for
granular mechanics is employed. We have selected 20 parameters which we suspect play an important role. These include four mechanical, six microstructural, and 10 granular flow parameters, shown in
Table 5.
The ranges of these parameters are chosen to bound commonly observed micro-mechanical values for different ceramics and granular solids as much as possible. In some cases, when a parameter is not well understood, the range was selected based on engineering intuition. Scrambled Sobol sequences [84,85] are used to generate 500 space-filling samples in the 20-dimensional parameter space bounded by the ranges given in Table 5. The integrative model was then run for each of the 500 parameter combinations, and from each simulation the percentage of granular material as well as the evolution of the depth of penetration was obtained.
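As a sketch of this sampling step (the paper does not state which implementation was used, and the bounds below are placeholders, not the Table 5 values), scrambled Sobol samples can be drawn with SciPy:

from scipy.stats import qmc

# Placeholder bounds for two of the 20 parameters (illustrative only).
l_bounds = [2.0, 0.5]
u_bounds = [4.0, 1.5]

sampler = qmc.Sobol(d=len(l_bounds), scramble=True, seed=0)
unit_samples = sampler.random(n=500)                    # scrambled Sobol points in [0, 1)^d
samples = qmc.scale(unit_samples, l_bounds, u_bounds)   # map to the parameter ranges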
In this study, a linear regression model is fit to the selected QoIs, and statistical inference is performed to understand the relative importance of each parameter for the QoIs. If x[1], …, x[p] denote the p input parameters and y denotes the QoI, the linear regression model assumes a linear relationship of the form y = β[0] + β[1]x[1] + … + β[p]x[p] + ε, where β is the vector of p + 1 unknown coefficients to be determined. After an initial analysis with the original dataset, leave-one-out cross-validation (LOOCV) is performed, and the individual cross-validated errors are used to detect potential outliers based on the residual interquartile range (IQR), IQR = Q[3] − Q[1], where Q[3] is the third quartile and Q[1] is the first quartile of the residual values of each data point from the LOOCV analysis. After removing the outliers, linear regression is again performed on the remaining dataset. A parameter reduction is then performed by eliminating a group of parameters that does not significantly reduce the coefficient of determination (R^2) compared to the value obtained with all the parameters. Linear regression is then performed on the remaining dataset with the reduced parameter set.
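A minimal sketch of the LOOCV outlier screen (assuming scikit-learn; the 1.5×IQR fence is a common convention that the text does not spell out):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loocv_outlier_mask(X, y):
    # Each point is predicted by a model fit on the remaining n-1 points.
    preds = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
    residuals = y - preds
    q1, q3 = np.percentile(residuals, [25, 75])
    iqr = q3 - q1
    # Flag points whose cross-validated residual falls outside the IQR fence.
    return (residuals < q1 - 1.5 * iqr) | (residuals > q3 + 1.5 * iqr)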
In addition, some of the 20 integrative model parameters are combined to obtain a set of derived parameters that are either non-dimensional or more physically representative. These parameters are summarized in Table 6. After replacing some of the independent parameters with the derived ones, a separate study is performed with the new set of parameters (derived and original).
6 Results—Sensitivity Analysis
6.1 Original Parameter Set: Correlation Study.
For an accurate interpretation of the linear regression coefficients, multicollinearity [86] among parameters should be avoided. The linear correlation coefficient heat map is shown in Fig. 17. It shows negligible correlation among the input parameters, suggesting the desired absence of multicollinearity. This is supported by the variance inflation factor (VIF) [86] values of each parameter shown in Table 7. The VIF for a parameter is obtained by regressing that parameter against the remaining parameters and has a lower bound of 1. The closer the VIF value is to 1, the weaker the dependence of that parameter on the others. VIFs between 1 and 5 [87] suggest a moderate correlation but represent an acceptable level of multicollinearity.
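A short sketch of the VIF computation (assuming statsmodels; X is the n-by-20 matrix of sampled parameters):

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_values(X):
    # Add an intercept column so each auxiliary regression is well posed;
    # VIF_j = 1 / (1 - R_j^2) from regressing column j on the other columns.
    X_const = sm.add_constant(np.asarray(X))
    return [variance_inflation_factor(X_const, j) for j in range(1, X_const.shape[1])]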
6.2 Derived Parameter Set: Correlation Study.
The list of 19 derived parameters is shown in Table 8. Minimum half flaw size (s[min]), maximum half flaw size (s[max]), flaw distribution exponent (α[flaw]), flaw density (η), density (ρ[0]), and
Poisson’s ratio (ν[0]) are removed and replaced by mean half size (s[mean]), flaw spacing (s[spacing]), range half flaw size (s[range]), initial damage (Ω[i]), and longitudinal wave speed (V[L]).
A correlation study, similar to the one with the original parameter set, is performed here. The correlation coefficient heat map is shown in Fig. 18, and the VIF values for each parameter are shown
in Table 9. All the VIF values are within the acceptable range of 1–5. This suggests that the parameter combinations have an acceptable level of multicollinearity and linear regression analysis can
be reliably performed.
6.3 Granular Material Percentage.
In this section, we study how the different mechanical, microstructural, and granular parameters influence the GMP at 15 μs after impact. The histogram for GMP is shown in Fig. 19. The results show that this QoI varies considerably in the sampled parameter space, from approximately 5% to 70%.
6.3.1 Analysis With Original Parameter Set.
The linear correlation coefficients between GMP and each input parameter are calculated and shown in Table 10. Fracture toughness (K[IC]) has the strongest correlation with GMP, and the correlation is negative, meaning GMP decreases as K[IC] increases. The minimum half flaw size and the friction parameter are also strongly correlated with GMP, while Poisson's ratio and the flaw distribution exponent have the weakest correlations. These correlation coefficients, however, provide only a crude preliminary picture of parameter sensitivities. For a more detailed assessment, we turn to linear regression analysis.
Three iterations of linear regression analysis are performed, and the corresponding prediction results are shown in Table 11. The reported prediction metrics are the coefficient of determination (R^2), the adjusted coefficient of determination (adj. R^2), and the root-mean-squared error (RMSE). In the first iteration, the analysis uses the original dataset with sample size 500 and all 20 parameters. For the full-data analysis, R^2 is 0.827, adj. R^2 is 0.820, and RMSE is 0.4159; for the LOOCV analysis, the corresponding values are 0.811, 0.803, and 0.435. In the second iteration, the cross-validated errors for each data point were obtained from the LOOCV results of the first iteration and used to detect potential outliers based on the residual IQR (Eq. (22)). Eight outliers were detected, and linear regression was then performed on a dataset of sample size 492 with the same 20 parameters. Eliminating the outliers improves all three metrics (R^2, adj. R^2, RMSE) for both the full-data and LOOCV analyses, as shown in Table 11; for example, R^2 improved from 0.827 to 0.841 for the full-data analysis and from 0.811 to 0.826 for LOOCV. In the third iteration, a group of eight parameters (ν[0], η, α[flaw], κ[BM], γ[B], γ[d], u, l) is removed such that the full-data R^2 without these parameters is not significantly reduced. Linear regression is then performed on the 492-sample dataset with the 12 remaining parameters. Table 11 shows that, for the full-data analysis, R^2 decreases from 0.841 to 0.839 and RMSE increases from 0.392 to 0.395, but adj. R^2 increases slightly from 0.834 to 0.835, which is desirable. For the LOOCV analysis, all three metrics improve relative to the second iteration.
The standardized t-test statistic of each parameter obtained from the third-iteration regression is shown in Fig. 20. From the p-values (not shown), all 12 parameters are statistically significant at a significance level of 0.05. The t-test statistic bar plot in Fig. 20 indicates the sign of the correlation between each parameter and the output GMP. In the figure, the parameters are arranged so that their importance increases from top to bottom. Fracture toughness (K[IC]) is thus the most sensitive parameter, followed by the minimum half flaw size (s[min]) and the friction parameter (M[BM]). The strain rate sensitivity coefficient (N[BM]) and the maximum porosity without damage (ϕ[u]) are among the least sensitive parameters, though still significant.
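As a sketch of how such standardized t-statistics and p-values can be produced (assuming statsmodels OLS; the data below are random stand-ins, not the simulation results):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.random((492, 12))                        # stand-in for the reduced design matrix
y = X @ rng.random(12) + 0.1 * rng.random(492)   # stand-in for the GMP values

fit = sm.OLS(y, sm.add_constant(X)).fit()
print(fit.tvalues)  # standardized t-statistic per coefficient (cf. Fig. 20)
print(fit.pvalues)  # p-values used for the 0.05 significance screen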
6.3.2 Analysis With Derived Parameter Set.
The linear correlation coefficients between GMP and each of the 19 derived input parameters shown in Table 8 are calculated and reported in Table 12. Fracture toughness (K[IC]) has the strongest
correlation with GMP (similar to the case in Sec. 6.3.1) while crushability parameter (κ[BM]) and range half flaw size (s[range]) have the weakest correlation.
Next, linear regression analysis is performed, and the corresponding prediction results are shown in Table 13. In the third and final iteration of the analysis, a group of five parameters (κ[BM], γ[B], N[BM], u, l) is removed such that the full-data R^2 without these parameters is not significantly reduced. All five of these parameters happen to be granular parameters of the CBM model.
The standardized t-test statistic of each parameter from the third-iteration regression is shown in Fig. 21. From the p-values, all 14 retained parameters are statistically significant at a significance level of 0.05. Fracture toughness (K[IC]) is the most sensitive parameter, followed by the mean half flaw size (s[mean]) and the friction parameter (M[BM]); the dilative behavior parameter (γ[d]) and the maximum porosity without damage (ϕ[u]) are among the least sensitive parameters, though still significant. To sum up the sensitivity analysis of the granular material percentage, the original and derived parameter cases show similar trends in the order of importance of the parameters. In both cases, the three most sensitive parameters and their ranking are very similar, the only difference being the original parameter s[min] replaced by the derived parameter s[mean].
6.4 Depth of Penetration.
One of the outputs of the integrative model is the evolution of the depth of penetration with time. Two distinct cases are observed in the 500 simulation outcomes: in one, the sphere rebounds after impacting the cylinder; in the other, the sphere continues to penetrate into the cylinder. The maximum depth of penetration for the rebound case (MDR) is selected as the output QoI for sensitivity analysis. In total, 342 simulations led to rebound of the sphere, while the rest led to penetration; thus, 342 samples are used for the sensitivity analysis of MDR. The histogram for the MDR output is shown in Fig. 22.
6.4.1 Analysis With Original Parameter Set.
The linear correlation coefficients between MDR and each input parameter are shown in Table 14, which suggests that the friction parameter (M[BM]) has the strongest correlation with MDR, while the maximum half flaw size has the weakest.
Next, linear regression analysis is performed and the corresponding prediction results are shown in Table 15. After removing the outliers and reducing the number of parameters, the R^2, adj. R^2, and
RMSE values are reported to be 0.918, 0.915, and 0.270, respectively, for the LOOCV analysis.
The standardized t-test statistics of the 11 significant parameters obtained from the third-iteration regression are shown in Fig. 23. The friction parameter (M[BM]) is the most sensitive parameter, followed by the grading index ($ϑBM$) and fracture toughness (K[IC]); flaw orientation (θ[flaw]) and the maximum porosity without damage (ϕ[u]) are among the least sensitive parameters, though still significant.
6.4.2 Analysis With Derived Parameter Set.
The linear correlation coefficients between MDR and each derived input parameter (not reported here) are very similar to those in Sec. 6.4.1. The prediction results from the linear regression analysis, shown in Table 16, indicate a slight improvement in all three metrics compared to Sec. 6.4.1. Figure 24 shows the standardized t-test statistics of the 11 significant parameters from the third-iteration regression. The friction parameter (M[BM]) is the most sensitive parameter, followed by the grading index ($ϑBM$) and fracture toughness (K[IC]); the crushability parameter (κ[BM]) and the maximum porosity without damage (ϕ[u]) are among the least sensitive parameters, though still significant.
7 Discussion
7.1 Physical Mechanisms and Trends.
The influence of model parameters on a given model output is a complex interplay of the various mechanisms with which they are associated. Although multiple physical mechanisms are active at any instant, only a few can be intuitively expected to play a crucial role in influencing the output. For example, as discussed above, amorphization may not play a critical role in these particular experiments because of the small amorphized volume observed in the present simulations, but wave propagation, crack growth, and energy dissipation due to granular flow might. We have tried to group the parameters by the mechanism they might be responsible for and to determine the likely correlations with model output; some parameters might be responsible for multiple mechanisms. The suspected influence of model parameters on physical mechanisms, and the corresponding correlation with the percentage granular region, is highlighted in Table 17. Most of the correlations reported in Tables 10 and 12 match the physical intuition in Table 17. However, the suspected correlations for ρ[0], ν[0], α[flaw], μ[flaw], and γ[d] do not match the observed correlations, possibly due to complicated parameter interactions. At any rate, the magnitude of the correlation for each of these parameters is very small, and they do not have a significant influence on the model QoIs.
As mentioned in Sec. 6.4, only 342 of the 500 simulations led to rebound within 15 μs after impact. The simulations without rebound in that window may rebound later, or may never rebound because of cylinder fragmentation. Hence, the 342 rebound cases were used for the depth of penetration sensitivity study. The depth of penetration is a more localized measure, intricately related to deformation mechanisms in the Mescall zone. This region is highly comminuted and almost certainly under granular flow, so it is unsurprising that the model parameters associated with granular mechanics play a more significant role. In addition, the percentage of the region under granular flow is expected to have a weak influence as well: when more of the region is granular, the region directly under the indenter might exhibit less granular flow. Thus, parameters associated with crack growth might have competing influences. The influence of the modulus, fracture toughness, and critical breakage energy density is physically related to overall stiffness and can be expected to be negatively correlated with the depth of penetration. For the other granular mechanics model parameters, there is an indirect influence via porosity change and volumetric deformation. Table 18 highlights the suspected correlation of model parameters with the instantaneous depth of penetration. The influence of crack growth parameters on the maximum depth of penetration is complicated and unclear. Once again, physical intuition can justify the correlations corresponding to wave speed and granular flow parameters, except for ρ[0], ν[0], and κ[BM].
7.2 Implications Toward Designing Materials.
Table 19 lists the ten most sensitive parameters from the regression studies for percent granular material and depth of penetration, using the original and derived model parameters. In either case, fracture toughness (K[IC]), granular friction (M[BM]), shear modulus (G[0]), grading index ($ϑBM$), and minimum flaw size (s[min]) appear to be important. The strain rate sensitivity coefficient of granular flow (N[BM]) is important for the depth of penetration. This study provides insights that could guide material processing modifications. It is difficult, if not impossible, to control some of these parameters, particularly the granular mechanics parameters other than M[BM] and E[C]; fortunately, these are not the most significant ones. As mentioned earlier, researchers have explored different techniques to improve K[IC] and G[0] [74–76]. Although these parameters have been assumed to be independent, some of them might be related to one another through the available processing routes, and they may be challenging to control independently. For example, it might not be possible to vary the defect population without also affecting the polycrystalline fracture toughness or the elastic moduli.
The initial damage (Ω[i]) is physically related to the volume fraction of defects. While processing a ceramic, Ω[i] might be optimized to meet a certain performance goal. Further optimization to improve performance would likely involve controlling the defect size and the defect spacing. Our study suggests that the flaw density (η) and flaw spacing (s[spacing]) are not as significant as the minimum flaw size (s[min]) or the mean flaw size (s[mean]). This might mean that smaller, closely spaced defects are more desirable than larger agglomerates. The flaw distribution parameters are related to the secondary phases in the boron carbide matrix, the most abundant of which is free carbon. The location of free carbon along fragment boundaries suggests crack growth from these sites [88,89]. Therefore, controlling the size and volume fraction of these graphitic inclusions [90] can help address fracture mechanisms. Smaller defect spacing might also lead to smaller initial fragments, and smaller fragments might in turn increase E[C] due to the size effect, which might improve impact performance. Higher granular friction (M[BM]) is one of the most desirable traits, and more angular particles can achieve it; however, it is not clear how to control angularity. Reference [15] suggests that larger fragments have lower circularity, although it might not be fair to compare a bulk-averaged estimate of granular friction with individual particle shape. Reference [91] investigated the influence of particle morphology on the frictional behavior of sand. Further research into the microstructural features that control the angularity of subsequent fragments, and therefore granular friction, will be useful.
7.3 Implications Towards Modeling and Calibration.
The results of the sensitivity analysis suggest that certain parameters, some of which control the evolution of state variables in the CBM model (Eqs. (13)–(15)), are not significant for the output quantities of interest studied in this work. If the same holds for simulations of other impact experiments (summarized in Ref. [92]), it might be assumed either that the model can be simplified to ignore those parameters or that those parameters do not need recalibration for slight changes in the material. For example, many of the granular mechanics parameters are calibrated through multiple drained and undrained triaxial compression tests and oedometric compression tests on granular solids [48]. Often these classical geomechanics experimental setups cannot be employed at the high-pressure conditions of impact experiments. So, for a new material, while it would be ideal to recalibrate granular friction (M[BM]) using pressure-shear impact experiments [81], one can rely on past data for γ[B], γ[d], and κ[BM]. Similarly, accurate estimation of the polycrystalline fracture toughness (K[IC]) is essential [93] and should be prioritized over calibrating the flaw friction (μ[flaw]), the relative density tests used to calibrate l and u, or the dry density test used to calibrate ϕ[u]. This can save both computational and experimental effort and expenditure.
8 Summary
A ceramics model that integrates multiple physical mechanisms has been recalibrated and used to simulate sphere indentation and Edge-On Impact experiments in ABAQUS, using boron carbide as a model material. The simulations are able to replicate key cracking patterns observed in experiments through damage and density localizations. Two simulation outputs from the sphere indentation experiments were identified as quantities of interest for a sensitivity analysis study: the percentage of granular material and the indentation depth, 15 μs after impact. Twenty micro-mechanical and granular flow model parameters were varied to generate 500 space-filling samples, and linear regression analysis for the two quantities of interest was conducted to identify the most significant model parameters. Connections between these model parameters and physical mechanisms have been argued, and the implications of the sensitivity analysis for material design have been explored. The results of the sensitivity study assist in prioritizing calibration experiments for new materials.
Acknowledgment
Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
The HPC at the Maryland Advanced Research Computing Center (MARCC) and the now-decommissioned COPPER cluster at the DoD HPC were used for conducting the simulations.
The authors acknowledge the contributions of Prof. J. D. Hogan, Prof. K. T. Ramesh, Prof. Nilanjan Mitra, Prof. Mark Robbins, and others in the CMEDE Ceramics Modelling group through insightful discussions. The authors would also like to thank Dr. Elmar Strassburger for permission to reproduce the experimental images. Copyright for Fig. 10(a) was obtained from the publisher of Ref. [29], John Wiley and Sons, under license number 4996030533359.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. Data provided by a third party are listed in the references.
References
"7 – Glasses and Ceramics," in The Science of Armour Materials, Woodhead Publishing, Duxford, UK.
"Arbitrary Branched and Intersecting Cracks With the Extended Finite Element Method," Int. J. Numer. Methods Eng.
"A Critical Evaluation of Cohesive Zone Models of Dynamic Fracture," J. Phys. IV France.
"Crack Band Theory for Fracture of Concrete," Matériaux et Construction.
"A Probabilistic Crack Band Model for Quasibrittle Fracture," ASME J. Appl. Mech.
"Phase Field Modeling of Crack Propagation," Philos. Mag.
"Continuum Phase Field Modeling of Dynamic Fracture: Variational Principles and Staggered FE Implementation," Int. J. Fracture.
"A Phase-Field Description of Dynamic Brittle Fracture," Comput. Methods Appl. Mech. Eng.
"Phase Field Approximation of Dynamic Brittle Fracture," Comput. Mech.
"An Improved Computational Constitutive Model for Brittle Materials," AIP Conf. Proc.
"A Micromechanical Model for High Strain Rate Behavior of Ceramics," Int. J. Solids Struct.
"Inelastic Deformation and Energy Dissipation in Ceramics: A Mechanism-Based Constitutive Model," J. Mech. Phys. Solids.
"A Constitutive Equation for Ceramic Materials Used in Lightweight Armors," Comput. Struct.
"On Compressive Brittle Fragmentation," J. Am. Ceram. Soc.
"Micromechanical Model for Comminution and Granular Flow of Brittle Material Under High Strain Rate Application to Penetration of Ceramic Targets," Int. J. Impact Eng.
"Damage Evolution of Hot-Pressed Boron Carbide Under Confined Dynamic Compression," Int. J. Impact Eng.
"Predicting High Rate Granular Transition and Fragment Statistics at the Onset of Granular Flow for Brittle Ceramics," arXiv:2011.08331.
"Shock-Induced Localized Amorphization in Boron Carbide."
"A Micromechanics Based Model to Predict Micro-Crack Coalescence in Brittle Materials Under Dynamic Compression," Eng. Fract. Mech.
"Failure Phenomenology of Confined Ceramic Targets and Impacting Rods," Int. J. Impact Eng.
"Multi-Scale Defect Interactions in High-Rate Brittle Material Failure. Part I: Model Formulation and Application to ALON," J. Mech. Phys. Solids.
"A Multi-Mechanism Constitutive Model for the Dynamic Failure of Quasi-Brittle Materials. Part II: Integrative Model," J. Mech. Phys. Solids.
"An Integrative Model for the Dynamic Behavior of Brittle Materials Based on Microcracking and Breakage Mechanics," J. Dyn. Behav. Mater.
"The Use of Sphere Indentation Experiments to Characterize Ceramic Damage Models," Int. J. Appl. Ceram. Technol.
"Investigation of Fracture Propagation During Impact in Boron Carbide," Technical Report, European Research Office of the U.S. Army.
"Visualization of Impact Damage in Ceramics Using the Edge-On Impact Technique," Int. J. Appl. Ceram. Technol.
"Edge-On Impact Investigation of Fracture Propagation in Boron Carbide," in Advances in Ceramic Armor, John Wiley & Sons, Ltd, New Jersey.
"Sensitivity Analysis: A Review of Recent Advances," Eur. J. Oper. Res.
Global Sensitivity Analysis: The Primer, John Wiley & Sons, West Sussex, UK.
"Variance Based Sensitivity Analysis of Model Output. Design and Estimator for the Total Sensitivity Index," Comput. Phys. Commun.
"Photosynthetic Control of Atmospheric Carbonyl Sulfide During the Growing Season."
"Uncertainty and Sensitivity Analysis Techniques for Use in Performance Assessment for Radioactive Waste Disposal," Reliab. Eng. Syst. Saf.
"Sensitivity Analysis of Simulation Models," Wiley Encyclopedia of Operations Research and Management Science.
"Sensitivity Estimates for Nonlinear Mathematical Models," Math. Model. Comput. Exp.
"Importance Measures in Global Sensitivity Analysis of Nonlinear Models," Reliab. Eng. Syst. Saf.
Sensitivity Analysis in Linear Regression, Wiley Series in Probability and Statistics, Hoboken, NJ.
Linear Regression Analysis, World Scientific.
Radial Basis Functions: Theory and Implementations, Cambridge University Press, Cambridge.
"An Efficient Adaptive Sparse Grid Collocation Method Through Derivative Estimation," Probab. Eng. Mech.
"Stochastic Collocation Approach With Adaptive Mesh Refinement for Parametric Uncertainty Analysis," J. Comput. Phys.
"On the Usefulness of Gradient Information in Surrogate Modeling: Application to Uncertainty Propagation in Composite Material Models," Probab. Eng. Mech.
"A Multi-Mechanism Constitutive Model for the Dynamic Failure of Quasi-Brittle Materials. Part I: Amorphization As a Failure Mode," J. Mech. Phys. Solids.
"An Interacting Micro-Crack Damage Model for Failure of Brittle Materials Under Compression," J. Mech. Phys. Solids.
"A Rate-Dependent Constitutive Model for Brittle Granular Materials Based on Breakage Mechanics," J. Am. Ceram. Soc.
"Constitutive Model for Brittle Granular Materials Considering Competition Between Breakage and Dilation," J. Eng. Mech.
"A Unified Approach to Finite Deformation Elastoplastic Analysis Based on the Use of Hyperelastic Constitutive Equations," Comput. Methods Appl. Mech. Eng.
"The Failure of Brittle Solids Containing Small Cracks Under Compressive Stress States," Acta Metall.
"Compression-Induced Nonplanar Crack Extension With Application to Splitting, Exfoliation, and Rockburst," J. Geophys. Res.: Solid Earth.
Dynamic Fracture Mechanics, Cambridge Monographs on Mechanics, Cambridge University Press.
"Breakage Mechanics—Part II: Modelling Granular Materials," J. Mech. Phys. Solids.
"Fracture Propagation in Brittle Granular Matter," Proc. R. Soc. A: Math., Phys. Eng. Sci.
"Quasi-Static Confined Uniaxial Compaction of Granular Alumina and Boron Carbide Observing the Particle Size Effects," J. Am. Ceram. Soc.
"Shock Response of Boron Carbide," Technical Report, Army Research Laboratory, Aberdeen Proving Ground, MD.
"Intact and Predamaged Boron Carbide Strength Under Moderate Confinement Pressures," J. Am. Ceram. Soc.
"The Kinematics of Gouge Deformation," Pure Appl. Geophys.
"Fragmentation of Grains in a Two-Dimensional Packing," Eur. Phys. J. B.
"The Application of Weibull Statistics to the Fracture of Soil Particles," Soils Found.
"Modeling the 14.5 mm BS41 Projectile for Ballistic Impact Computations," Comput. Ballistics II, WIT Trans. Modelling Simul.
"Multi-Scale Defect Interactions in High-Rate Failure of Brittle Materials, Part II: Application to Design of Protection Materials," J. Mech. Phys. Solids.
"The Characterization and Ballistic Evaluation of Mild Steel," Int. J. Impact Eng.
"Sphere Impact Induced Damage in Ceramics: II. Armor-Grade B4C and WC," in Advances in Ceramic Armor, John Wiley & Sons, Ltd.
"Observation and Modeling of Cone Cracks in Ceramics," in Dynamic Behavior of Materials, Springer International Publishing, Cham, Switzerland.
"Densification and Characterization of Rapid Carbothermal Synthesized Boron Carbide," Int. J. Appl. Ceram. Technol.
"Densification and Mechanical Properties of B4C With Al2O3 As a Sintering Aid," J. Am. Ceram. Soc.
"High Strength B4C–TiB2 Composites Fabricated by Reaction Hot-Pressing," J. Eur. Ceram. Soc.
"The Effect of Fe Addition on the Densification of B4C Powder by Spark Plasma Sintering," Powder Metall. Met. Ceram.
"Effect of Zirconia Addition on Pressureless Sintering of Boron Carbide," Ceram. Int.
"Effect of Al and TiO2 on Sinterability and Mechanical Properties of Boron Carbide," Mater. Sci. Eng. A.
"Enhanced Mechanical Properties of Nanocrystalline Boron Carbide by Nanoporosity and Interface Phases," Nat. Commun.
"Enhanced Fracture Toughness of Boron Carbide From Microalloying and Nanotwinning," Scr. Mater.
"Strengthening Boron Carbide Through Lithium Dopant," J. Am. Ceram. Soc.
"Stabilization of Boron Carbide Via Silicon Doping," J. Phys.: Condens. Matter.
"Locating Si Atoms in Si-doped Boron Carbide: A Route to Understand Amorphization Mitigation Mechanism," Acta Mater.
"Experimental Observations of Amorphization in Stoichiometric and Boron-Rich Boron Carbide," Acta Mater.
"Amorphization Mitigation in Boron-Rich Boron Carbides Quantified by Raman Spectroscopy."
"Granular Flow of An Advanced Ceramic Under Ultra-High Strain Rates and High Pressures," J. Mech. Phys. Solids.
"Impact Analysis of PEEK/Ceramic/Gelatin Composite for Finding Behind the Armor Trauma," Compos. Struct.
"Ballistic Reliability Study on SiC/UHMWPE Composite Armor Against Armor-Piercing Bullet," Compos. Struct.
"On the L2-Discrepancy for Anchored Boxes," J. Complex.
"Remark on Algorithm 659: Implementing Sobol's Quasirandom Sequence Generator," ACM Trans. Math. Softw.
An Introduction to Statistical Learning, Springer, New York.
A Modern Approach to Regression With R, Springer Science & Business Media, New York.
"Micromechanisms Associated With the Dynamic Compressive Failure of Hot-Pressed Boron Carbide," Scr. Mater.
"Microstructural Characterization of a Commercial Hot-Pressed Boron Carbide Armor Plate," J. Am. Ceram. Soc.
"Carbothermic Synthesis of Boron Carbide With Low Free Carbon Using Catalytic Amount of Magnesium Chloride," J. Iranian Chem. Soc.
"Influence of Particle Morphology on the Friction and Dilatancy of Sand," J. Geotech. Geoenviron. Eng.
"Dynamic Fracture and Fragmentation of Boron Carbide," in Boron Carbide: Structure, Processing, Properties and Applications, Nova Science Publishers, New York.
"Measuring the Real Fracture Toughness of Ceramics: ASTM C 1421," in Fracture Mechanics of Ceramics, New York. | {"url":"https://gasturbinespower.asmedigitalcollection.asme.org/appliedmechanics/article-split/88/5/051007/1096637/Failure-Modeling-and-Sensitivity-Analysis-of","timestamp":"2024-11-04T15:33:47Z","content_type":"text/html","content_length":"882826","record_id":"<urn:uuid:bd8b2a79-7864-43d0-a55d-3fcdf74cec82>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00213.warc.gz"}
Assignment Problem
Let C be an n by n matrix representing the costs of each of n workers to perform any of n jobs. The assignment problem is to assign jobs to workers in a way that minimizes the total cost. Since each worker can perform only one job and each job can be assigned to only one worker, the assignments represent an independent set of the matrix C.
One way to generate the optimal set is to create all permutations of the indexes necessary to traverse the matrix so that no row and column are used more than once. For instance, given this matrix
(expressed in Python):
matrix = [[5, 9, 1],
          [10, 3, 2],
          [8, 7, 4]]
You could use this code to generate the traversal indexes:
def permute(a, results):
    # Build every permutation of list `a`, appending each one to `results`.
    if len(a) == 1:
        results.append(a)
    else:
        for i in range(0, len(a)):
            element = a[i]
            # Copy of `a` with the i-th element removed.
            a_copy = [a[j] for j in range(0, len(a)) if j != i]
            subresults = []
            permute(a_copy, subresults)
            for subresult in subresults:
                results.append([element] + subresult)
results = []
permute(list(range(len(matrix))), results)  # list(range(3)) == [0, 1, 2] for a 3x3 matrix
After the call to permute(), the results matrix would look like this:
[[0, 1, 2],
[0, 2, 1],
[1, 0, 2],
[1, 2, 0],
[2, 0, 1],
[2, 1, 0]]
You could then use that index matrix to loop over the original cost matrix and calculate the smallest cost of the combinations:
import sys

minval = sys.maxsize  # running minimum over all assignments
for indexes in results:
    cost = 0
    for row, col in enumerate(indexes):
        cost += matrix[row][col]
    minval = min(cost, minval)
While this approach works fine for small matrices, it does not scale. It executes in O(n!) time: Calculating the permutations for an n x n matrix requires n! operations. For a 12x12 matrix, that’s
479,001,600 traversals. Even if you could manage to perform each traversal in just one millisecond, it would still take more than 133 hours to perform the entire traversal. A 20x20 matrix would take
2,432,902,008,176,640,000 operations. At an optimistic millisecond per operation, that’s more than 77 million years.
The Munkres algorithm runs in O(n^3) time, rather than O(n!). This package provides an implementation of that algorithm.
This version is based on http://csclab.murraystate.edu/~bob.pilgrim/445/munkres.html
This version was written for Python by Brian Clapper from the algorithm at the above web site. (The Algorithm:Munkres Perl version, in CPAN, was clearly adapted from the same web site.)
Construct a Munkres object:
from munkres import Munkres
m = Munkres()
Then use it to compute the lowest cost assignment from a cost matrix. Here’s a sample program:
from munkres import Munkres, print_matrix

matrix = [[5, 9, 1],
          [10, 3, 2],
          [8, 7, 4]]
m = Munkres()
indexes = m.compute(matrix)
print_matrix(matrix, msg='Lowest cost through this matrix:')
total = 0
for row, column in indexes:
    value = matrix[row][column]
    total += value
    print(f'({row}, {column}) -> {value}')
print(f'total cost: {total}')
Running that program produces:
Lowest cost through this matrix:
[5, 9, 1]
[10, 3, 2]
[8, 7, 4]
(0, 0) -> 5
(1, 1) -> 3
(2, 2) -> 4
total cost: 12
The instantiated Munkres object can be used multiple times on different matrices.
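For example, the same instance can compute assignments for several matrices in a row (a minimal sketch; the two matrices below are arbitrary):

from munkres import Munkres

m = Munkres()
for cost_matrix in ([[4, 1], [2, 3]],
                    [[5, 9, 1], [10, 3, 2], [8, 7, 4]]):
    # The same instance computes a fresh assignment for each matrix.
    print(m.compute(cost_matrix))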
Non-square Cost Matrices
The Munkres algorithm assumes that the cost matrix is square. However, it’s possible to use a rectangular matrix if you first pad it with 0 values to make it square. This module automatically pads
rectangular cost matrices to make them square.
• The module operates on a copy of the caller’s matrix, so any padding will not be seen by the caller.
• The cost matrix must be rectangular or square. An irregular matrix will not work.
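For example, here is a sketch with an arbitrary 2x3 cost matrix; the module pads it to 3x3 internally, and (row, column) pairs are returned only for the real rows:

from munkres import Munkres

matrix = [[5, 9, 1],
          [10, 3, 2]]
m = Munkres()
print(m.compute(matrix))  # e.g. [(0, 2), (1, 1)]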
Calculating Profit, Rather than Cost
The cost matrix is just that: A cost matrix. The Munkres algorithm finds the combination of elements (one from each row and column) that results in the smallest cost. It’s also possible to use the
algorithm to maximize profit. To do that, however, you have to convert your profit matrix to a cost matrix. The simplest way to do that is to subtract all elements from a large value. For example:
import sys
from munkres import Munkres, print_matrix

matrix = [[5, 9, 1],
          [10, 3, 2],
          [8, 7, 4]]
cost_matrix = []
for row in matrix:
    cost_row = []
    for col in row:
        cost_row += [sys.maxsize - col]
    cost_matrix += [cost_row]
m = Munkres()
indexes = m.compute(cost_matrix)
print_matrix(matrix, msg='Highest profit through this matrix:')
total = 0
for row, column in indexes:
    value = matrix[row][column]
    total += value
    print(f'({row}, {column}) -> {value}')
print(f'total profit={total}')
Running that program produces:
Highest profit through this matrix:
[5, 9, 1]
[10, 3, 2]
[8, 7, 4]
(0, 1) -> 9
(1, 0) -> 10
(2, 2) -> 4
total profit=23
The munkres module provides a convenience method for creating a cost matrix from a profit matrix. By default, it calculates the maximum profit and subtracts every profit from it to obtain a cost. If,
however, you need a more general function, you can provide the conversion function; but the convenience method takes care of the actual creation of the matrix:
import munkres
import math

cost_matrix = munkres.make_cost_matrix(
    matrix,
    lambda profit: 1000.0 - math.sqrt(profit))
So, the above profit-calculation program can be recast as:
from munkres import Munkres, print_matrix, make_cost_matrix

matrix = [[5, 9, 1],
          [10, 3, 2],
          [8, 7, 4]]
cost_matrix = make_cost_matrix(matrix)
# cost_matrix == [[5, 1, 9],
#                 [0, 7, 8],
#                 [2, 3, 6]]
m = Munkres()
indexes = m.compute(cost_matrix)
print_matrix(matrix, msg='Highest profits through this matrix:')
total = 0
for row, column in indexes:
    value = matrix[row][column]
    total += value
    print(f'({row}, {column}) -> {value}')
print(f'total profit={total}')
Disallowed Assignments
You can also mark assignments in your cost or profit matrix as disallowed. Simply use the munkres.DISALLOWED constant.
import sys
from munkres import Munkres, print_matrix, make_cost_matrix, DISALLOWED

matrix = [[5, 9, DISALLOWED],
          [10, DISALLOWED, 2],
          [8, 7, 4]]
cost_matrix = make_cost_matrix(matrix,
                               lambda cost: (sys.maxsize - cost)
                               if (cost != DISALLOWED) else DISALLOWED)
m = Munkres()
indexes = m.compute(cost_matrix)
print_matrix(matrix, msg='Highest profit through this matrix:')
total = 0
for row, column in indexes:
    value = matrix[row][column]
    total += value
    print(f'({row}, {column}) -> {value}')
print(f'total profit={total}')
Running this program produces:
Highest profit through this matrix:
[ 5, 9, D]
[10, D, 2]
[ 8, 7, 4]
(0, 1) -> 9
(1, 0) -> 10
(2, 2) -> 4
total profit=23
1. Harold W. Kuhn. The Hungarian Method for the assignment problem. Naval Research Logistics Quarterly, 2:83-97, 1955.
2. Harold W. Kuhn. Variants of the Hungarian method for assignment problems. Naval Research Logistics Quarterly, 3: 253-258, 1956.
3. Munkres, J. Algorithms for the Assignment and Transportation Problems. Journal of the Society of Industrial and Applied Mathematics, 5(1):32-38, March, 1957.
Getting and installing munkres
Because munkres is available via PyPI, if you have pip installed on your system, installing munkres is as easy as running this command:
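pip install munkres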
WARNING: As of version 1.1.0, munkres no longer supports Python 2. If you need to use it with Python 2, install an earlier version (e.g., 1.0.12):
pip install munkres==1.0.12
Installing from source
You can also install munkres from source. Either download the source (as a zip or tarball) from http://github.com/bmc/munkres/downloads, or make a local read-only clone of the Git repository using
one of the following commands:
$ git clone git://github.com/bmc/munkres.git
$ git clone http://github.com/bmc/munkres.git
Once you have a local munkres source directory, change your working directory to the source directory, and type:
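python setup.py install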
To install it somewhere other than the default location (such as in your home directory) type:
python setup.py install --prefix=$HOME
Consult the API documentation for details. The API documentation is generated from the source code, so you can also just browse the source.
This module is released under the Apache Software License, version 2. See the license file for details. | {"url":"http://software.clapper.org/munkres/index.html","timestamp":"2024-11-13T07:28:16Z","content_type":"application/xhtml+xml","content_length":"30494","record_id":"<urn:uuid:cf651a5b-6e25-4678-8971-0d131f0e457b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00481.warc.gz"} |
Axel Ljungström: Cohomology in Cubical Type Theory and Agda
Time: Wed 2020-12-09 10.00 - 12.00
Location: Zoom, meeting ID: 610 2070 5696
Participating: Axel Ljungström
Cubical Type Theory (CuTT) is an alternative to Homotopy Type Theory in which univalence has a constructive interpretation. In CuTT, we can easily define cohomology groups by means of
Eilenberg-MacLane spaces (EM-spaces). This construction allows us to work with cohomology in a very synthetic manner which often captures a good deal of naive topological intuition. Additionally,
since univalence has computational content in Cubical Agda, a proof assistant for CuTT, we should be able to make explicit computations concerning cohomology groups. These computations, however,
often turn out to be incredibly complex and we run into problems already at cohomology in dimension 2.
In this talk, I will discuss my latest efforts in making cohomology compute better. In particular, I will present a more direct definition of the addition operation on EM-spaces (over the integers)
which satisfies some definitional equalities that the usual definition does not satisfy. Furthermore, I will show how this definition is used to give easy proofs of the fact that loop spaces over
EM-spaces are commutative and of the fact that the n-th EM-space is equivalent to the loop space of the (n+1)-th EM-space. Finally, I will present direct characterisations of some cohomology groups
(e.g. those of spheres, wedge sums, the torus and the Klein bottle) and show some computations in Cubical Agda. | {"url":"https://www.kth.se/math/kalender/axel-ljungstrom-cohomology-in-cubical-type-theory-and-agda-1.1033965?date=2020-12-09&orgdate=2020-03-08&length=1&orglength=0","timestamp":"2024-11-06T09:09:04Z","content_type":"text/html","content_length":"56413","record_id":"<urn:uuid:969c710d-28ec-4411-af4c-4c77a5946e7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00383.warc.gz"}
Efficient hidden surface removal for objects with small union size
Let S be a set of n non-intersecting objects in space for which we want to determine the portions visible from some viewing point. We assume that the objects are ordered by depth from the viewing point (e.g., they are all horizontal and are viewed from infinity from above). In this paper we give two algorithms that compute the visible portions in time O((U(n) + k) log² n), where U(n') is a super-additive bound on the maximal complexity of the union of (the projections on a viewing plane of) any n' objects from the family under consideration, and k is the complexity of the resulting visibility map. Both algorithms use O(U(n) log n) working storage. The algorithms are useful when the objects are "fat" in the sense that the union of the projections of any subset of them has small (i.e., subquadratic) complexity. We present three applications of these general techniques: (i) For disks (or balls in space) we have U(n) = O(n), thus the visibility map can be computed in time O((n + k) log² n). (ii) For "fat" triangles (where each internal angle is at least some fixed angle δ) we have U(n) = O(n log log n) and the algorithms run in time O((n log log n + k) log² n). (iii) The methods also apply to computing the visibility map for a polyhedral terrain viewed from a fixed point, and yield O((nα(n) + k) log n) algorithms.
Publication series
Name Proceedings of the Annual Symposium on Computational Geometry
Conference 7th Annual Symposium on Computational Geometry, SCG 1991
Country/Territory United States
City North Conway
Period 10/06/91 → 12/06/91
ASJC Scopus subject areas
• Theoretical Computer Science
• Geometry and Topology
• Computational Mathematics
| {"url":"https://cris.bgu.ac.il/en/publications/efficient-hidden-surface-removal-for-objects-with-small-union-siz-3","timestamp":"2024-11-14T00:57:30Z","content_type":"text/html","content_length":"61132","record_id":"<urn:uuid:76407f2e-e9d8-4002-9a4e-61e6823b819f>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00875.warc.gz"}
Lesson 4
Scale Drawings
Let’s explore scale drawings.
4.1: What is a Scale Drawing?
Here are some drawings of a school bus, a quarter, and the subway lines around Boston, Massachusetts. The first three drawings are scale drawings of these objects.
The next three drawings are not scale drawings of these objects.
Discuss with your partner what a scale drawing is.
4.2: Sizing Up a Basketball Court
Your teacher will give you a scale drawing of a basketball court. The drawing does not have any measurements labeled, but it says that 1 centimeter represents 2 meters.
1. Measure the distances on the scale drawing that are labeled a–d to the nearest tenth of a centimeter. Record your results in the first row of the table.
2. The statement “1 cm represents 2 m” is the scale of the drawing. It can also be expressed as “1 cm to 2 m,” or “1 cm for every 2 m.” What do you think the scale tells us?
3. How long would each measurement from the first question be on an actual basketball court? Explain or show your reasoning.
│ measurement │(a) length│(b) width│(c) hoop│(d) 3 point line │
│ │ of court │of court │to hoop │ to sideline │
│scale drawing│ │ │ │ │
│actual court │ │ │ │ │
4. On an actual basketball court, the bench area is typically 9 meters long.
1. Without measuring, determine how long the bench area should be on the scale drawing.
2. Check your answer by measuring the bench area on the scale drawing. Did your prediction match your measurement?
4.3: Tall Structures
Here is a scale drawing of some of the world’s tallest structures.
1. About how tall is the actual Willis Tower? About how tall is the actual Great Pyramid? Be prepared to explain your reasoning.
2. About how much taller is the Burj Khalifa than the Eiffel Tower? Explain or show your reasoning.
3. Measure the line segment that shows the scale to the nearest tenth of a centimeter. Express the scale of the drawing using numbers and words.
The tallest mountain in the United States, Mount Denali in Alaska, is about 6,190 m tall. If this mountain were shown on the scale drawing, how would its height compare to the heights of the
structures? Explain or show your reasoning.
Scale drawings are two-dimensional representations of actual objects or places. Floor plans and maps are some examples of scale drawings. On a scale drawing:
• Every part corresponds to something in the actual object.
• Lengths on the drawing are enlarged or reduced by the same scale factor.
• A scale tells us how actual measurements are represented on the drawing. For example, if a map has a scale of “1 inch to 5 miles” then a \(\frac12\)-inch line segment on
that map would represent an actual distance of 2.5 miles
Sometimes the scale is shown as a segment on the drawing itself. For example, here is a scale drawing of a stop sign with a line segment that represents 25 cm of actual length.
The width of the octagon in the drawing is about three times the length of this segment, so the actual width of the sign is about \(3 \boldcdot 25\), or 75 cm.
Because a scale drawing is two-dimensional, some aspects of the three-dimensional object are not represented. For example, this scale drawing does not show the thickness of the stop sign.
A scale drawing may not show every detail of the actual object; however, the features that are shown correspond to the actual object and follow the specified scale.
• scale
A scale tells how the measurements in a scale drawing represent the actual measurements of the object.
For example, the scale on this floor plan tells us that 1 inch on the drawing represents 8 feet in the actual room. This means that 2 inches would represent 16 feet, and \(\frac12\) inch would
represent 4 feet.
• scale drawing
A scale drawing represents an actual place or object. All the measurements in the drawing correspond to the measurements of the actual object by the same scale. | {"url":"https://im.kendallhunt.com/MS_ACC/students/2/2/4/index.html","timestamp":"2024-11-12T08:53:18Z","content_type":"text/html","content_length":"85554","record_id":"<urn:uuid:bf0c390f-db3a-4020-9d37-2f1e5b53da82>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00522.warc.gz"} |
If I am running a Chi-Square test, do lower p-values mean there's a significant difference?
The Chi-Square test checks whether two categorical variables in a table are associated or, in other words, whether there is any difference between the observed frequencies (our frequencies) and the
expected frequencies (the frequencies under a hypothesis of independence).
Given the significance level chosen for the test (by default 5%, corresponding to a 95% confidence level), a lower p-value indicates stronger evidence of a statistically significant difference between the observed and expected frequencies.
For example, in a table crossing 'Favorite alphabet letter' with 'Gender', a p-value of 0.001 indicates strong evidence that the distribution of 'Favorite alphabet letter' is not independent of 'Gender'.
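As a minimal sketch, the same kind of test can be run in Python with scipy's chi2_contingency; the observed counts below are made up purely for illustration:

from scipy.stats import chi2_contingency

# Rows: gender categories; columns: favorite-letter categories
# (hypothetical observed counts).
observed = [[20, 15, 25],
            [30, 10, 20]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f'chi2 = {chi2:.3f}, p-value = {p:.4f}, dof = {dof}')
# A small p-value (e.g. below 0.05) is evidence against independence.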
| {"url":"https://support.intellexweb.com/support/solutions/articles/1000158746-if-i-am-running-a-chi-square-test-lower-p-values-means-there-s-a-significant-difference-","timestamp":"2024-11-04T14:09:28Z","content_type":"text/html","content_length":"21954","record_id":"<urn:uuid:fa0751b2-4272-434a-8c05-07deb7424457>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00457.warc.gz"}
A Cambrian Explosion of Crypto Proofs
This post is for folks with some background in cryptography. It surveys the expanding crypto-verse of proof systems and the role of symmetric STARKs within. Based on a talk delivered in San Francisco
in November 2019.
1. Introduction
For 3.5 billion years, life on earth consisted of a primordial soup of single-cell creatures. Then, within a geological eyeblink, during what is known as the Cambrian Explosion, nearly all animal
phyla we recognize today emerged.
By analogy, we are currently experiencing a Cambrian Explosion in the field of cryptographic proofs of computational integrity (CI), a subset of which include zero knowledge proofs. While a couple of
years ago there were about 1–3 new systems a year, the rate has picked up so much that today we are seeing this same amount monthly, if not weekly. To wit, in 2019 we’ve learned of new constructions
like Libra, Sonic, SuperSonic, PLONK, SLONK, Halo, Marlin, Fractal, Spartan, Succinct Aurora, and implementations like OpenZKP, Hodor, and GenSTARK. Oh, and as the ink is drying on this post,
RedShift and AirAssembly come along.
How to make sense of all this marvelous innovation? The purpose of this post is to identify the common denominators of all CI systems implemented in code and discuss a few differentiating factors.
Please note that this article will be a bit technical, as it assumes some cryptography background! This may nevertheless be worth skimming for the interested non-cryptographer to get a sense of the
lingo used in the field. With that said, our descriptions will be brief, and intentionally imprecise from a mathematical viewpoint. Another major goal of this post is to explain why our company
StarkWare is placing all its chips in terms of science, engineering and products on a specific subfamily of the CI-verse, called henceforth symmetric STARKs.
Common Ancestors
Computational integrity proof systems can help solve two fundamental problems that afflict decentralized blockchains: privacy and scalability. Zero Knowledge Proofs (ZKPs¹) provide privacy by
shielding some inputs of a computation without compromising integrity. Succinctly verifiable CI systems deliver scalability by exponentially compressing the amount of computation needed to verify the
integrity of a large batch of transactions.
All CI systems that have been realized in code share two commonalities: all use something called arithmetization, and all cryptographically enforce a concept called “low-degree compliance” (LDC)².
Arithmetization is the reduction of computational statements made by a proving algorithm. You might start from a conceptual statement like this:
“I know the keys that allow me to spend a shielded Zcash transaction”
And translate it into an algebraic statements involving a set of bounded-degree polynomials, like:
“I know four polynomials A(X), B(X), C(X), D(X), each of degree less than 1,000, such that this equality holds: A(X)*B²(X)-C(X) = (X¹⁰⁰⁰–1)*D(X)”
Low-degree compliance means using cryptography to ensure that the prover actually picks low-degree polynomials³ and evaluates those polynomials on randomly chosen points requested by the verifier. In
the example above (that we’ll keep referring to in this post), a good LDC solution assures us that when the prover is asked about x₀, it will answer with the values a₀, b₀, c₀, d₀ that are the
correct values of A, B, C and D on input x₀. The tricky part is that a prover might pick A,B,C and D after seeing the query x₀, or may decide to answer with arbitrary a₀, b₀, c₀, d₀ that appease the
verifier and do not correspond to any evaluation of pre-chosen low-degree polynomials. So, all that cryptography goes to prevent such attack vectors. (The trivial solution that requires the prover to
send the complete A,B,C, and D delivers neither scalability, nor privacy.)
With this in mind, the CI-verse can be mapped out according to (i) the cryptographic primitives used to enforce LDC, (ii) the particular LDC solutions built with those primitives and (iii) the kind
of arithmetization allowed by these choices.
2. Dimensions of Comparison
I. Cryptographic Assumptions
From 30,000 feet, the biggest theoretical distinguishing factor among different CI systems is whether their security requires symmetric primitives or asymmetric ones (see Figure 1). Typical symmetric
primitives are SHA2, Keccak (SHA3), or Blake, and we assume they are collision resistant hash (CRH) functions, pseudorandom and behave like a random oracle. Asymmetric assumptions include things like
hardness of solving the discrete logarithm problem modulo a prime number, an RSA modulus, or in an elliptic curve group, hardness of computing the size of the multiplicative group of an RSA ring, and
more exotic variants of such problems, like the “knowledge of exponent” assumption, the “adaptive root” assumption, etc.
Figure 1: Cryptographic Assumptions Family Trees
This symmetry/asymmetry divide between CI systems has many consequences, among them:
A. Computational Efficiency
The security of asymmetric primitives implemented today in code⁴ requires one to arithmetize and solve LDC problems over large algebraic domains: large prime fields and large elliptic curves over
them, in which each field/group element is hundreds of bits long, or integer rings in which each element is thousands of bits long. By contrast, constructions relying only on symmetric assumptions
arithmetize and perform LDC over any algebraic domain (ring or finite field) that contains smooth⁵ sub-groups, including very small binary fields and 2-smooth prime fields (64 bits or less), in which
arithmetic operations are fast.
Takeaway: symmetric CI systems can arithmetize over any field, leading to greater efficiency.
B. Post-Quantum Security
All asymmetric primitives currently used in the CI-verse will be broken efficiently by a quantum computer with sufficiently large state (measured in qubits), if and when such a computer appears.
Symmetric primitives, on the other hand, are plausibly post-quantum secure (perhaps with larger seeds and states per bit of security).
Takeaway: Only symmetric systems are plausibly post-quantum secure.
Figure 2: Cryptographic Assumptions and the Economic Value they support
C. Future-Proofing
The Lindy Effect theory says that “the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age.” or in plain English, old stuff survives
longer than new stuff. In the area of cryptography, this can be translated as saying that systems which rely on older, battle-tested primitives are safer and more future-proof than newer assumptions
whose tires have been kicked less (See Figure 2). From this angle, new variants of asymmetric assumptions like groups of unknown order, the generic group model and knowledge of exponent assumptions
are younger and have pulled a lighter economic cart than older assumptions like the more standard DLP and RSA assumptions that are used, e.g., for digital signatures, identity based encryption and
for SSH initialization. These assumptions are less future-proof than symmetric assumptions like the existence of a collision resistant hash because these latter assumptions (and even specific hash
functions) are the brick and mortar used to secure computers, networks, the Internet and e-commerce.
Moreover, there’s a strict mathematical hierarchy among these assumptions. The CRH assumption reigns in this hierarchy because if this assumption is broken (meaning that no safe cryptographic hash
function is to be found) then, in particular, the RSA and DLP assumptions are also broken because those assumptions imply the existence of a good CRH! Similarly, the DLP assumption reigns over the
knowledge of exponent (KoE) assumption because if the former (DLP) assumption fails to hold, then the latter (KoE) also fails to hold. Likewise, the RSA assumptions reigns over the group of unknown
order (GoUO) assumption because if RSA is broken then GoUO also breaks.
Takeaway: New asymmetric assumptions are a riskier foundation for financial infrastructure.
D. Argument Length
All points made above favor symmetric CI constructions over asymmetric ones. But there’s one area in which asymmetric constructions fare better. The communication complexity (or argument length)
associated with them is smaller by 1–3 orders of magnitude (Nielsen’s Law⁶ notwithstanding). Famously, the Groth16 SNARK is shorter than 200 bytes at an estimated level of 128-bits of security,
whereas all symmetric constructions existing today require dozens of kilobytes for the same security level. It should be noted that not all asymmetric constructions are as succinct as 200 bytes.
Recent constructions improve on Groth16 by (i) removing the need for a trusted setup (transparency) and/or (ii) handling general circuits (Groth16 requires one trusted setup per circuit). But these
newer constructions have arguments that are longer, reaching sizes from half a kilobyte (as is the case of PLONK) to a double-digit number of kilobytes, nearing the argument length of symmetric systems.
Takeaway: asymmetric circuit-specific systems (Groth16) are shortest, shorter than all asymmetric universal ones, and all symmetric systems.
To reiterate the above takeaways:
• Symmetric CI systems can arithmetize over any field, leading to greater efficiency
• Only symmetric systems are plausibly post-quantum secure
• New asymmetric assumptions are a riskier foundation for financial infrastructure
• Asymmetric circuit-specific systems (Groth16) are shortest, shorter than all asymmetric universal ones, and all symmetric systems
II. Low Degree Compliance (LDC) Schemes
There are two main ways to achieve low degree compliance: (i) hiding queries and (ii) commitment schemes (see Figure 3). Let’s go over the differences.
Figure 3: Hiding Queries & Commitment Schemes
Hiding Queries
This approach (formalized here) is the one used by the Zcash-style SNARKs like Pinocchio, libSNARK, Groth16, and systems built on them like Zcash's Sapling, Ethereum's Zokrates, etc. To get the
prover to answer correctly, we use homomorphic encryption to hide, or encrypt, x₀ and supply enough information so that the prover can evaluate A, B, C and D on x₀ . Actually, what is given to the
prover is a sequence of encryptions of powers of x₀ (i.e., encryptions of x₀¹ , x₀², … x₀¹⁰⁰⁰) so that the prover can evaluate any degree-1000 polynomial, but only polynomials of degree at most
1,000. Roughly speaking, the system is secure since the prover does not know what x₀ is, and this x₀ is randomly (pre-)selected, so that if the prover tries to cheat then with very high probability
they will be exposed. A trusted pre-processing setup phase is needed here to sample x₀ and encrypt the sequence of powers above (and additional information), leading to a proving key that is at least
as large as the circuit of the computation being proved (there's also a verification key which is much shorter). Once the setup has been completed and the keys released, each proof is a single, s
uccinct, noninteractive argument of knowledge (or SNARK, for short). Notice that this system does require some form of interaction, in the form of the pre-processing phase, which is unavoidable for
theoretical reasons. Notice also that the system is not transparent, meaning that the entropy used to sample and encrypt x₀ cannot be simply public random coins, because any party that knows x₀ can
break the system and prove falsities. Generating an encryption of x₀ and its powers without revealing x₀ is therefore a security issue that constitutes a potential single point of failure.
Commitment Schemes
This approach requires the prover to commit to the set of low-degree polynomials (A,B,C and D, in the example above) by sending some cryptographically crafted commitment message to the verifier. With
this commitment in hand, the verifier now samples and queries the prover about a randomly chosen x₀, and now the prover replies with a₀, b₀, c₀, and d₀ along with additional cryptographic information
that convinces the verifier that the four values revealed by the prover comply with the earlier commitment sent to the verifier. Such schemes are naturally interactive and many of them are
transparent (all messages generated by the verifier are simply public random coins). Transparency allows one to compress the protocol into a non-interactive one via the Fiat-Shamir heuristic (which
treats a pseudorandom function like SHA 2/3 as a random oracle that provides "public" randomness), or to use other public sources of randomness like block-headers. The most prevalent transparent
commitment scheme is via Merkle trees, and this method is plausibly post-quantum secure but leads to the large argument lengths seen in many symmetric systems (due to all the authentication paths
that need to be revealed and accompany each prover answer). This is the method used by most STARKs like libSTARK and succinct Aurora, as well as by succinct proof systems like ZKBoo, Ligero, Aurora
and Fractal (even though these systems do not satisfy the formal scalability definition of a STARK). In particular, the STARKs we're building at StarkWare (like the StarkDEX alpha and the
StarkExchange we're deploying soon) fall under this category. One may use asymmetric primitives to construct commitment schemes, e.g., ones based on the hardness of the discrete log problem over
elliptic curve groups (this is the approach taken by BulletProofs and Halo), and the groups of unknown order assumption (as done by DARK and SuperSonic). Using asymmetric commitment schemes comes
with the pros and cons mentioned previously: shorter proofs but longer computation time, quantum susceptibility, newer (and less studied) assumptions and, in some cases, loss of transparency.
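To make the Merkle-tree flavor of these commitments concrete, here is a toy sketch in Python (illustrative only: the hash, the field and the polynomial are arbitrary choices, and no real system works at this level of simplicity):

import hashlib

def merkle_root(leaves):
    # Hash the leaves, then repeatedly hash pairs until one root remains.
    layer = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the last node on odd layers
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

# Prover: evaluate a low-degree polynomial A(X) = 3X^2 + X + 7 over a small
# prime field and commit to all of its evaluations at once.
p = 97
evaluations = [(3 * x * x + x + 7) % p for x in range(p)]
commitment = merkle_root(str(v).encode() for v in evaluations)

# Verifier: later samples a random x0 and asks for A(x0); the prover answers
# with the value plus a Merkle authentication path (omitted here) showing
# that the answer is consistent with the commitment sent earlier.
print(commitment.hex())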
III. Arithmetization
The choice of cryptographic assumption and LDC methods also affect the range of arithmetization possibilities, in three noticeable ways (See Figure 4):
Figure 4: Arithmetization Effects
A. NP (circuits) vs. NEXP (programs)
Most implemented CI systems reduce computational problems to arithmetic circuits which are then converted to a set of constraints (typically, R1CS constraints, discussed below). This approach allows
for circuit-specific optimizations but requires the verifier, or some entity trusted by it, to perform a computation that is as large as the computation (circuit) being verified. For multi-use
circuits like Zcash's Sapling circuit, this arithmetization suffices. But systems that are scalable and transparent (no trusted setup) like libSTARK, succinct Aurora and the systems StarkWare is
building, must use a succinct representation of computation, one that is akin to a general computer program and which has a description that is exponentially smaller than the computation being
verified. The two existing methods for achieving this-(i) Algebraic Intermediate Representations (AIRs) used by libSTARK, genSTARK and StarkWare's systems, and (ii) succinct R1CS of
succinct-Aurora, are best described as arithmetizations of general computer programs (as opposed to circuits). These succinct representations are powerful enough to capture the complexity class of
nondeterministic exponential time (NEXP), which is exponentially more expressive and powerful than the class of nondeterministic polynomial time (NP) described by circuits.
B. Alphabet Size and Type
As pointed above, the cryptographic assumptions used also dictate to a large extent which algebraic domains can serve as the alphabet over which we arithmetize. For instance, if we use bilinear
pairings, then the alphabet we'll use for arithmetization is a cyclic group of elliptic curve points, and this group must be of large prime size, meaning that we need to arithmetize over this field.
To take another example, the SuperSonic system (in one of its versions) uses RSA integers and in this case the alphabet will be a large prime field. By contrast, when using Merkle trees the alphabet
size can be arbitrary, allowing arithmetization over any finite domain. This includes the examples above but also arbitrary prime fields, extensions of small prime fields such as binary fields. The
field type matters because smaller fields lead to faster proving and verification time.
C. R1CS vs. General Polynomial Constraints
Zcash-style SNARKs make use of bilinear pairings over elliptic curves to arithmetize the constraints of the computation. This particular use⁷ of bilinear pairings limits arithmetization to gates that
are quadratic Rank-1 Constraint Systems (R1CS). The simplicity and ubiquity of R1CS has led many other systems to use this form of arithmetization for circuits, even though more general forms of
constraints can be used, like arbitrary rank quadratic forms, or constraints of higher degree.
3. STARK vs. SNARK
This is a good opportunity to clarify the differences between STARKs and SNARKs. Both terms have concrete mathematical definitions, and certain constructions can be instantiated as STARKs, or SNARKs,
or as both. The different terms put emphasis on different properties of proof systems. Let's examine these in more detail (see Figure 5).
Figure 5: STARK vs. SNARK
STARK
The S here stands for scalability, which means that as batch size n increases, proving time scales quasi-linearly in n and, simultaneously, verifying time scales poly-logarithmically⁸ in n. The T in
STARK stands for transparency, which means all verifier messages are public random coins, (no trusted setup). According to this definition, if there’s any pre-processing setup, it must be succinct
(poly-logarithmic) and must consist merely of sampling public random coins.
SNARK
The S here stands for succinctness, which means that verifying time scales poly-logarithmically in n (without demanding quasi-linear proving time) and the N means non-interactive, which means that
after a pre-processing phase (which may be non-transparent), the proof system cannot allow any further interaction. Notice that according to this definition a non-succinct trusted setup phase is
allowed, and, generally speaking, the system need not be transparent, but it must be noninteractive (after finalizing the pre-processing phase, which is unavoidable).
Looking at the CI-verse (see Figure 5), one notices that some members of it are STARKs, others are SNARKs, some are both, while others are neither (e.g., if verification time scales worse than
poly-logarithmically in n). If you’re interested in privacy (ZKP) applications then both ZK-SNARKs and ZK-STARKs and even systems that have neither the scalability of a STARK nor the (weaker)
succinctness of a SNARK, could serve well; Bulletproofs, used by Monero, is one such notable example, in which verification time scales linearly with circuit size. When it comes to code maturity,
SNARKs have an advantage because there are quite a few good open source libraries to build on. But if you’re interested in scalability applications (where you need to build for ever growing batch
sizes), then we suggest using symmetric STARKs, because, at time of writing, they have the fastest proving time and come with the assurance that no part of the verification process (or of setting up
the system) requires more than poly-logarithmic processing time. And if you want to build systems that have minimal trust assumptions, then, again, you want to use a symmetric STARK because the only
ingredient needed there is some CRH and a source of public randomness.
4. Summary
Figure 6. Summary
We're blessed to be experiencing the marvelous Cambrian explosion of the Computational Integrity universe of proof systems, and all bets are that the proliferation of systems and innovations will
continue, at a growing rate. Moreover, this attempt to describe the expanding and shifting CI-verse will likely age poorly as new insights and constructions appear tomorrow. Having said that,
surveying the CI-space today, the biggest dividing line we see is between (i) systems that require asymmetric cryptographic assumptions-which lead to shorter proofs but are costlier to prove, have
newer assumptions which are quantum-susceptible, and many of which are non-transparent, and (ii) systems that rely only on symmetric assumptions, making them computationally efficient, transparent,
plausibly post-quantum secure and most future proof (according to the Lindy Effect metric).
The argument over which argument system to use is far from over. But at StarkWare we say: For short arguments, use Groth16/PLONK SNARKs. For everything else, there's symmetric STARKs.
Eli Ben-Sasson, StarkWare
Special thanks to Justin Drake for commenting on an earlier draft.
¹ The term ZKP is often misused to refer to all CI systems, even ones that are not, formally, ZKPs. To avoid this confusion we use the loosely defined terms of “crypto proofs” and “computational
integrity (CI) proofs”.
² You can read about STARK arithmetization and low-degree compliance here:
³ The use of univariate polynomials can be generalized vastly, e.g., to multivariate polynomials and algebraic geometry codes, but for simplicity we stick to the simplest, univariate, case.
⁴ We are specifically excluding lattice based constructions from our discussion, because they are not yet deployed in code. Such constructions are asymmetric and also plausibly post-quantum secure,
and typically use small (prime) fields.
⁵ A field is k-smooth if it contains a subgroup (multiplicative or additive) whose size has all of its prime divisors at most k. For instance, all binary fields are 2-smooth, and so are fields of size q
such that q-1 is divisible by a large power of 2.
⁶ Nielsen’s law of Internet bandwidth states that user bandwidth grows by 50% per year. This law fits data from 1983 to 2019.
⁷ Other systems (like PLONK) use pairings only to obtain a (polynomial) commitment scheme, and not for arithmetization. In such cases, arithmetization may lead to any low-degree constraints.
⁸ Formally, “quasi-linear in n” means O(n logᴼ⁽¹⁾ n) and “poly-logarithmic in n” means logᴼ⁽¹⁾ n. | {"url":"https://nakamoto.com/cambrian-explosion-of-crypto-proofs/","timestamp":"2024-11-05T01:07:07Z","content_type":"text/html","content_length":"46447","record_id":"<urn:uuid:0b776a0a-9c9c-497e-b4ff-7027c2613c61>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00354.warc.gz"} |
Discrete Structures: Harmonic Analysis and Probability
Date: 27 - 28 September 2018
Location: Mathematical Institute, University of Oxford
Event type: CRC Workshop
Organisers: Alexander Volberg (Michigan State) and Paata Ivanisvili (Princeton and UC Irvine)
This workshop will focus on progress in analysis of geometric functional inequalities in particular, but not exclusively, on discrete model: the Hamming cube.
Talks will give an overview of the area, its connections to PDE, combinatorics, martingale methods, stochastic optimization problems, various notions of convexity, and applications to random graphs.
The harmonic analysis on Hamming cube is a tool for studying the Renyi random graph model (e.g. sharp threshold results of the type of Margulis graph connectivity theorem).
The geometric functional inequalities are often rooted in their discrete versions on Hamming cube, where the harmonic analysis has many interesting specifics and often uses probabilistic ideas. For
example, martingale method leads to important geometric inequalities on Hamming cube and Gauss space, such as e.g. the concentration of measure inequalities. The concentration of measure phenomenon,
in its turn, has utmost importance in applied mathematics, in questions of spectral initialization or compressed sensing.
Speakers: Franck Barthe (Toulouse), Sergey Bobkov (Minnesota), Dario Cordero-Erausquin (Paris 6), Paata Ivanisvili (UC Irvine), Ryan O’Donnell (Carnegie Mellon), Stefanie Petermichl (Toulouse), Mark
Rudelson (Michigan), Ramon van Handel (Princeton), Alexander Volberg (Michigan State) | {"url":"https://www.claymath.org/events/discrete-structures-harmonic-analysis-and-probability/","timestamp":"2024-11-13T08:57:46Z","content_type":"text/html","content_length":"89232","record_id":"<urn:uuid:aa1d8c94-3614-44af-94d6-2816a778e615>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00687.warc.gz"} |
Lesson 13.6
Lesson 13.6 – Writing
Equations in SlopeIntercept Form
8.F.4 – Construct a function to model a linear
relationship between two quantities. Determine the
rate of change and initial value of the function from a
description of a relationship or from two (x, y) values,
including reading these from a table or from a graph.
Interpret the rate of change and initial value of a linear
function in terms of its graph or a table of values.
Find the slope of the line.
Today you will learn…
to write an equation of a line in slope-intercept form:
a. find slope
b. y-intercept
y = mx + b
On Your Own
Extra Example
Write an equation of the line that passes
through the points (0, -1) and (4, -1).
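A quick worked solution: the slope is m = (−1 − (−1)) / (4 − 0) = 0/4 = 0, and the point (0, −1) gives the y-intercept b = −1, so the equation is y = 0x + (−1), that is, y = −1.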
On Your Own
Exit Ticket
Writing Prompt:
For a line that has been graphed in a
coordinate plane, you can write the equation
Find the slope and y-intercept
pp. 606 – 607
1 – 4, 6 – 10 even, 11 - 24 | {"url":"https://studyres.com/doc/4419892/lesson-13.6","timestamp":"2024-11-06T15:05:55Z","content_type":"text/html","content_length":"63335","record_id":"<urn:uuid:50a2b99c-a073-47e6-82e0-d41c86e53284>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00889.warc.gz"} |
Associativity of Operators in C language
When an expression contains two operators with equal precedence, the tie between them is settled using the associativity of the operators.
Associativity can be of two types: left to right or right to left. Left-to-right associativity means that the left operand must be unambiguous. Unambiguous in what sense? It should not be involved in the evaluation of any other sub-expression. Similarly, in the case of right-to-left associativity, the right operand must be unambiguous. Let us understand this with an example.
Consider the expression
a = 3 / 2 * 5 ;
Here there is a tie between two operators of the same precedence, namely / and *. This tie is settled using the associativity of / and *. Both have left-to-right associativity.
The figure shows, for each operator, which operand is unambiguous and which is not.
Since both / and * associate left to right, and only / has an unambiguous left operand (a necessary condition for left-to-right associativity), / is performed first.
Consider another expression
a = b = 3 ;
Here both assignment operators have the same precedence and the same associativity (right to left). The figure shows, for each operator, which operand is unambiguous and which is not.
Because both = operators associate right to left, and only the second = has an unambiguous right operand (a necessary condition for right-to-left associativity), the second = is carried out first.
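A minimal C sketch of the two examples above; the comments restate the rules just described:

#include <stdio.h>

int main(void) {
    int a = 3 / 2 * 5;  /* / and * associate left to right: (3 / 2) * 5 = 5 */
    int b, c;
    c = b = 3;          /* = associates right to left: b = 3 runs first     */
    printf("%d %d %d\n", a, b, c);  /* prints: 5 3 3 */
    return 0;
}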
Consider another expression
z = a * b + c / d ;
Here * and / enjoy the same precedence and the same associativity (left to right). The figure shows, for each operator, which operand is unambiguous and which is not.
Here, because the operands remaining for both operators are unambiguous, the compiler is free to perform either the * or the / operation first, as it finds convenient. | {"url":"https://projugaadu.com/associativity-of-operators-in-c-language/","timestamp":"2024-11-14T22:19:40Z","content_type":"text/html","content_length":"215857","record_id":"<urn:uuid:9ec6b79c-433e-4759-8e8f-0cf4e43c7e07>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00882.warc.gz"}
MAT 118 University of Connecticut CH7 Linear Programming Performance Assessment - Course Help Online
I specifically need an ALGEBRA tutor who is able to complete SEVEN questions in a timely manner (by 10:30 AM; 5 hours). This is only 7 questions, so it should be quick and easy. It needs to be typed out.
Need at least a 90 to pass. PLEASE VIEW FILE BEFORE BIDDING
MAT 118 Chapter 7 Performance Assessment: (You are not allowed to receive any
outside help on this exam. Do not ask other students, faculty, the math clinic, a
tutor, etc. for any help. If you have a question, you can only ask your instructor.
You can use your textbook, notes, calculator, etc.)
(Total possible points =15)
This semester we have discussed various approaches to solving linear programming problems. To solve linear programming problems with 3 or more variables we used the simplex
method, however, it is still possible to use the graphical method to solve a linear programming problem that has 3 variables. In this performance assessment we will explore such a
problem both graphically and with the simplex method.
1. Use the following table to determine the two numbers a and b that you will be using
throughout this assessment.
(a) Write your name as it appears on Banner:
First name starts with:
Last name starts with:
(b) Using the table above, fill in your values for a and b and let c = a · b:
2. Consider the following linear programming problem:
subject to
w = −x − 2y + 10z
bx + ay + cz ≤ c
x ≥ 0, y ≥ 0, z ≥ 0
Plug your values in for a, b and c into the linear programming problem above and fill
in your results below. This is the linear programming problem you will solve in this
performance assessment.
subject to
w = −x − 2y + 10z
x ≥ 0, y ≥ 0, z ≥ 0
3. (6 points) The graphical method still applies to this linear programming problem. A
sketch of the feasible region, including the corner points of the region is provided below.
Find the solution to the linear programming problem using the graphical method. Show
all of your work in the space provided below the figure.
(0, 0, 1)
(0, 0, 0)
(0, b, 0) y
(a, 0, 0)
4. (5 points) This problem can also be solved using the simplex method. Using your
values for a, b and c in part 2 write down the initial tableau for your linear programming
problem in the space provided below. Circle the pivot column and row.
5. (1 point) Is s1 in the tableau in part 4 a slack variable or a surplus variable?
6. (2 points) By following the prompts at http://simplex.tode.cz/en/ use the simplex
calculator at this link to solve your linear programming problem. After entering the
linear programming problem and clicking solve, click on ’Generate Link’ at the bottom
of the page. Include this link with your performance assessment submission.
7. (1 point) Compare your answer in part 6 to your answer in part 3. Are they the same?
If they are not the same, is that reasonable? | {"url":"https://coursehelponline.com/mat-118-university-of-connecticut-ch7-linear-programming-performance-assessment/","timestamp":"2024-11-13T21:31:39Z","content_type":"text/html","content_length":"43623","record_id":"<urn:uuid:faba1741-969f-433c-a4c1-549b8b96869d>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00599.warc.gz"} |
The Journal of Law, Economics, & Organization
Uploaded from ShareLaTeX
Other (as stated in the work)
LaTeX template for The Journal of Law, Economics, & Organization, published by Oxford University Press.
This template was originally published on ShareLaTeX and subsequently moved to Overleaf in November 2019. | {"url":"https://es.overleaf.com/latex/templates/the-journal-of-law-economics-and-organization/mqcffvfgsdtd","timestamp":"2024-11-05T09:37:19Z","content_type":"text/html","content_length":"72928","record_id":"<urn:uuid:e1747ba1-56da-4f02-a6dd-470dc2c89fcf>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00522.warc.gz"}
Also presented, by coauthor Melanie Pivarski:
The rate of decay of the Wiener sausage in a local Dirichlet space.
· The Eighth Annual Math Technology Pioneers Meeting (meeting agenda)
Pioneering uses of Clickers in the wilderness of student learning (session notes)
· MathFest 2011: Mathematics Modelling projects that matter
Student Multimedia Projects Connect the Real World to Model Visualization in Multivariable Calculus
· Writing Great Clicker Questions Summer Webinar Series: Quantitative Clicker Questions
Active Learning in Elementary Statistics with Numerical Entry Clickers
· ORMATYC Spring 2011: Individualized Skills Review with Aleks prep
I am a fourth-generation Kentucky educator, and I currently serve as visiting assistant professor in the School of Natural Sciences at Indiana University Southeast. I previously served as
assistant professor of mathematics at the University of Louisville.
Before coming to the Louisville area, I was a graduate student at Cornell University, completing my Ph.D. in 2005 in probability theory and analysis under the direction of Laurent Saloff-Coste.
Before that, I was an undergraduate in mathematics at the University of Kentucky, and a master’s student in mathematics at the University of Louisville.
I have a variety of interests related to mathematical research, and some of my papers are listed below. I am interested in random walks on infinite graphs, mathematical models of disease epidemics,
and social choice theories of preference aggregation, collective choice, and voting. In each of these areas, there are aspects of the work which are readily accessible to undergraduate students and
which could lead to undergraduate research projects.
I am also interested in the development of best-practices for utilizing new technologies to improve student learning.
I have developed a bank of conceptual clicker questions to be used in College Algebra, which are posted with the clicker resources at the Mathquest web-site. I wrote a case-study about how I use
these questions to foster critical thinking and improve student communication skills, which is available at the i>clicker website. The style of these questions follows after the Cornell GoodQuestions
for Calculus project.
I teach my courses using a tablet PC, adding pen-strokes to the screen as if it were a blackboard in the Microsoft OneNote program. I can record all the action using screen-capture software called
Camtasia Studio. I have previously used a different program called Tegrity. Here is a recently posted video of extra examples on Markov Chains.
I have recently begun providing extra video examples for my classes using a LiveScribe pen. View this file in Adobe Reader X to see what these videos look like.
Professional Information
· An Extension of McGarvey's Theorem from the Perspective of the Plurality Collective Choice Mechanism, with Bob Powers. Social Choice and Welfare, 2011, DOI: 10.1007/s00355-010-0520-3.
· Isoperimetric Profiles on the Pre-fractal Sierpinski Carpet, with Melanie Pivarski. Fractals, Volume 18(2010), no.4.
· An Example of the Multi-purpose use of Clickers in College Algebra, chapter 14 of Teaching Mathematics with Classroom Voting With and Without Clickers, MAA Notes volume 79.
· Treating Cofactors Can Reverse the Expansion of a Primary Disease Epidemic, with Bingtuan Li and Susanna Remold. BMC Infectious Diseases, 2010, 10:248
· The NIP Graph of a Social Welfare Function, with Bob Powers, Social Choice and Welfare, Volume 33(2009), no. 3.
· The Mass of Sites Visited by a Random Walk on an Infinite Graph. Electronic Journal of Probability, Volume 13(2008), Paper 44.
· My Choir: The Master’s Men
· Laura, The irreverent angel | {"url":"https://www.mathdoctorg.com/home","timestamp":"2024-11-05T00:57:57Z","content_type":"text/html","content_length":"73948","record_id":"<urn:uuid:e62a0a90-17e9-4ede-ae60-325dec86e3a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00262.warc.gz"} |
How To Find The Loss Percentage
When S.P > C.P, there is a profit:
Profit = Selling price – Cost price
Selling price = Cost price + Profit
Cost price = Selling price – Profit
When S.P < C.P, there is a loss:
Loss = Cost price – Selling price
Selling price = Cost price – Loss
Cost price = Selling price + Loss
Example 1: Find the missing value. Cost Price (C.P) = Rs 7,282, Profit (P) = Rs 208, Selling Price (S.P) = ?
We know Selling Price (S.P) = Cost Price (C.P) + Profit (P) = 7282 + 208 = Rs 7,490 (ans)
Example 2: Find the missing value. Selling Price (S.P) = Rs 572, Profit (P) = Rs 72, Cost Price (C.P) = ?
We know that Cost Price (C.P) = Selling Price (S.P) – Profit (P) = 572 – 72 = Rs 500 (ans)
Example 3: Find the missing value. Cost Price (C.P) = Rs 9,684, Loss (L) = Rs 684, Selling Price (S.P) = ?
We know that Selling Price (S.P) = Cost Price (C.P) – Loss (L) = 9684 – 684 = Rs 9,000 (ans)
Example 4: Find the missing value. Selling Price (S.P) = Rs 1,973, Profit (P) = Rs 273, Cost Price (C.P) = ?
We know that Cost Price (C.P) = Selling Price (S.P) – Profit (P) = 1973 – 273 = Rs 1,700 (ans)
Example 5: C.P = Rs 320, S.P = Rs 384. Since Selling Price (S.P) > Cost Price (C.P), there is a profit.
Profit (P) = Selling Price (S.P) – Cost Price (C.P) = 384 – 320 = Rs 64. Profit% = (64/320) × 100 = 20%
Example 6: C.P = Rs 380, S.P = Rs 361. Since Selling Price (S.P) < Cost Price (C.P), there is a loss.
Loss (L) = Cost Price (C.P) – Selling Price (S.P) = 380 – 361 = Rs 19. Loss% = (19/380) × 100 = 5%
Example 7: C.P = Rs 40, Profit (P) = Rs 2. Selling Price (S.P) = Cost Price (C.P) + Profit (P) = 40 + 2 = Rs 42.
Answer: S.P = Rs 42, Profit% = (2/40) × 100 = 5%
Example 8: C.P = Rs 5,000, Profit (P) = Rs 500. Selling Price (S.P) = Cost Price (C.P) + Profit (P) = 5000 + 500 = Rs 5,500.
Answer: S.P = Rs 5,500, Profit% = (500/5000) × 100 = 10%
For the operation of this website, we record user data and share it with processors. To use this website, you must accept our Privacy Policy, including the Cookie Policy. (1) Rohan bought a
calculator for Rs. 760 and sold it for Rs. 874. Find his profit and percentage of profit.
(2) Kriti bought a saree for Rs. 2500 and sold it for Rs. 2300. Find his loss and the percentage of loss.
(3) What is the profit or loss in each of the following transactions? Also find the percent gain or percent loss in each case:
(4) Rajinder bought an almirah for Rs. 4800 and another for Rs. 3640. He sold the first almirah at a profit of 40/3%, and the second at a loss of 15%. How much has he gained or lost in the whole transaction?
(5) 24 tables were bought in a furniture store at the rate of Rs. 450 per table. A dealer sold 16 of them at Rs. 600 per table and the rest at the rate of Rs. 400 per table. Find his percentage
gain or loss.
(6) By selling a fan for Rs. 810, a trader makes a profit of Rs. 60. What is the cost price of the fan? What is his profit percentage?
(7) By selling a steel almirah for Rs. 3906, the manufacturer suffers a loss of Rs. 294. Find the cost price of the almirah and the loss per cent.
(8) The cost price of a flower pot is Rs. 120. If the dealer sells it at a loss of 10%, find the price at which it is sold.
(9) I buy a TV for Rs. 10,000 and sell at a profit of 20%. How much money do I get for this?
(10) A merchant sells an article at Rs. 300, which makes you 20% profit. Find the cost price of the item.
(11) A merchant sells an article at Rs. 320, which makes you lose 20%. Find the cost price of the item.
(12) By selling a chair for Rs. 522, a merchant gains 16%. What is its cost price?
(13) A dealer sold a damaged garment for Rs. 7360 with a loss of 8%. Find the item’s cost price.
(14) By selling a table for Rs. 3168, Rashid loses 12%. Find its cost price. What percentage would he gain or lose by selling the table for Rs. 3870?
(15) By selling an article for Rs. 4550, Tonny loses 9%. What percentage would he gain or lose by selling it for Rs. 4825?
How to find weight loss percentage, how to find the tax percentage, how to find win loss percentage, how to find percentage of weight loss, how to figure weight loss percentage, how to find loss
percentage, how to calculate weight loss percentage, how to find the percentage of weight loss, how to find the percentage loss, how to find the percentage, how to calculate body fat percentage loss,
how to find body fat percentage loss | {"url":"https://besttemplatess.com/how-to-find-the-loss-percentage/","timestamp":"2024-11-05T16:02:50Z","content_type":"text/html","content_length":"54500","record_id":"<urn:uuid:7206387b-6efa-4e25-9495-36812a2b4408>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00444.warc.gz"} |
SQLite - Create a Relationship
SQLite supports relationships just like any other relational database management system.
SQLite is a relational database management system (RDBMS). It uses the same relational model that other popular DBMSs (such as MySQL, Oracle, SQL Server, MS Access) use.
What this means, is that you can create multiple tables, then have them linking to each other via a relationship.
A relationship is where you have multiple tables that contain related data, and the data is linked by a common value that is stored in both tables.
The following diagram illustrates this concept:
So, let's add another table called Albums, then have that linked to our Artists table via a relationship.
Doing this will enable us to lookup which artist a given album belongs to.
Create the New Table
So let's go ahead and create the Albums table:
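A statement along these lines fits the description that follows (the AlbumId column is an assumption, since the original listing was not preserved; AlbumName, Year, and ArtistId match the INSERT statement used later):

CREATE TABLE Albums(
    AlbumId   INTEGER PRIMARY KEY,
    AlbumName TEXT NOT NULL,
    Year      TEXT,
    ArtistId  INTEGER,
    FOREIGN KEY(ArtistId) REFERENCES Artists(ArtistId)
);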
This is similar to when we created the Artists table; however, this time we have added FOREIGN KEY(ArtistId) REFERENCES Artists(ArtistId) to the end of the statement.
This creates a foreign key constraint on the Albums.ArtistId column. What this means is that, any data that is inserted into this column, must match a value in the Artists.ArtistId column.
If we didn't do this, it would be possible to have an album that doesn't belong to an artist. In other words, we could have orphaned records in our database. Not good if you're trying to maintain
referential integrity.
Now, if we run a .tables command, we should see both tables in the database:
sqlite> .tables
Albums Artists
Test the Relationship
Once we've created the table with the foreign key, we can test it by attempting to enter erroneous data. We can try to enter an album with an ArtistId that doesn't match an ArtistId in the referenced
table (i.e. the Artists table):
This should result in the following:
sqlite> INSERT INTO Albums (AlbumName, Year, ArtistId)
...> VALUES ('Powerslave', '1984', 70);
Error: FOREIGN KEY constraint failed
Also, running a SELECT statement on the table will return no data.
This is because the foreign key constraint blocked the wrong value from being inserted.
Didn't Work?
If you don't receive an error when trying to enter erroneous data like this, you may need to check your settings.
Run the following command: PRAGMA foreign_keys;
If this results in 0 it means that your foreign key constraints are disabled. In fact, this is the default behaviour of SQLite (it's for backwards compatibility).
To enable foreign key constraints, type the following PRAGMA foreign_keys = ON;
Now, running PRAGMA foreign_keys; should return 1, and subsequent attempts at inserting an invalid foreign key will fail.
However, if the PRAGMA foreign_keys; command returns no data, your SQLite implementation doesn't support foreign keys (either because it is older than version 3.6.19 or because it was compiled with
SQLITE_OMIT_FOREIGN_KEY or SQLITE_OMIT_TRIGGER defined).
Insert More Data
Now that the relationship has been established, we can add as much data as we need, with the confidence that only records with valid foreign keys will be inserted.
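For example, assuming the Artists table contains a row with ArtistId 1 (a hypothetical value), an insert like this one now succeeds:

INSERT INTO Albums (AlbumName, Year, ArtistId)
VALUES ('Powerslave', '1984', 1);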
Next, we'll select data from both tables using a JOIN statement. | {"url":"https://www.quackit.com/sqlite/tutorial/create_a_relationship.cfm","timestamp":"2024-11-04T15:05:36Z","content_type":"text/html","content_length":"18268","record_id":"<urn:uuid:285ba6d6-0841-4038-a3f6-26ff359247e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00540.warc.gz"} |
IIT JEE Dimensional Formulae and Dimensional Equations
Formulae of Dimensional
The expressions or formulae which tell us how and which of the fundamental quantities are present in a physical quantity are known as the Dimensional Formula of the Physical Quantity. Dimensional
formulae also help in deriving units from one system to another. It has many real-life applications and is a basic aspect of units and measurements.
Suppose there is a physical quantity X which depends on base dimensions M (Mass), L (Length) and T (Time) with respective powers a, b and c. Then its dimensional formula is represented as:
[X] = [M^aL^bT^c]
A dimensional formula is always enclosed in square brackets [ ]. Also, the dimensional formulae of trigonometric ratios, plane angle and solid angle are not defined, as these quantities are dimensionless in nature.
Image 1: Dimensional Formula of some physical quantities.
• Dimensional formula of Velocity is [M^0LT^-1]
• Dimensional formula of Volume is [M^0L^3T^0]
• Dimensional formula of Force is [MLT^-2]
• Dimensional formula of Area is [M^0L^2T^0]
• Dimensional formula of Density is [ML^-3T^0]
Benefits of Dimensional Formulae
Image 2: Dimensions are used to describe a physical quantity in terms of the seven fundamental quantities.
Dimensional Formulae has the following advantages:
• To check whether a formula is dimensionally correct or not
• To convert units from one system to another
• To derive relations between physical quantities based on their interdependence
• Dimensional Formulae explain how every physical quantity can be expressed in terms of fundamental units
Limitations of Dimensional Formulae
Image 3: Dimensional equations help in checking the correctness of an equation.
Besides having many advantages, dimensional formulae have some limitations too. They are as follows:
• Dimensional formulae are not defined for trigonometric, logarithmic and exponential functions, which means we can't predict the nature of quantities involving such functions
• The same dimensional formula can correspond to more than one physical quantity (for example, work and torque both have the formula [ML^2T^-2]), so a dimensional formula does not uniquely identify a quantity
• They can’t be used to determine proportionality constants
• Dimensional methods cannot be used to derive relations that involve the addition or subtraction of terms
Dimensional Equations
The equations obtained when we equal a physical quantity with its dimensional formulae are called Dimensional Equations. The dimensional equation helps in expressing physical quantities in terms of
the base or fundamental quantities.
Suppose there’s a physical quantity Y which depends on base quantities M (mass), L (Length) and T (Time) and their raised powers are a, b and c, then dimensional formulae of physical quantity [Y] can
be expressed as
[Y] = [M^aL^bT^c]
• Dimensional equation of velocity ‘v’ is given as [v] = [M^0LT^-1]
• Dimensional equation of acceleration ‘a’ is given as [a] = [M^0LT^-2]
• Dimensional equation of force ‘F’ is given as [F] = [MLT^-2]
• Dimensional equation of energy 'E' is given as [E] = [ML^2T^-2]
Image 4: We first simplify a physical quantity and then write its dimensions.
Dimensional Equations are fundamental aspects of dimensional analysis and form the basic foundation of units and measurements as they help in simplifying physical quantities in terms of basic or
fundamental quantities. The table below depicts some dimensional equations of some physical quantities for future reference.
│Physical Quantity │Dimensional Equation │
│Force (F) │[F] = [M L T^-2] │
│Power (P) │[P] = [M L^2 T^-3] │
│Velocity (v) │[v] = [M L T^-1] │
│Density (D) │[D] = [M L^-3 T^0] │
│Energy (E) │[E] = [M L^2 T^-2] │
│Pressure (P) │[P] = [M L^-1 T^-2] │
│Time Period of wave (T) │[T] = [M^0 L^0 T] │
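As a quick worked check (a standard illustration): consider the equation v = u + at. Both v and u have the dimensional formula [M^0 L T^-1], and [at] = [M^0 L T^-2] × [M^0 L^0 T] = [M^0 L T^-1]. Every term has the same dimensions, so the equation is dimensionally correct.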
| {"url":"https://www.askiitians.com/iit-jee-physics/general-physics/dimensional-formulae-and-dimensional-equations/","timestamp":"2024-11-03T10:08:27Z","content_type":"text/html","content_length":"200471","record_id":"<urn:uuid:bd4d9755-ec89-467a-9cf2-c128853272e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00638.warc.gz"} |
Applying Hess’s Law in Thermodynamic Calculations
Author: admintanbourit
Hess’s Law is a powerful tool in the field of thermodynamics that allows for the calculation of change in enthalpy (ΔH) of a chemical reaction, even when direct measurement of the reaction is not
possible. This law is based on the principle that the overall change in enthalpy is independent of the pathway taken to reach the final products.
The law is named after the Swiss-Russian chemist Germain Hess, who first proposed it in 1840. Hess noticed that the change in enthalpy of a reaction could be calculated by adding or
subtracting the enthalpy changes of individually known reactions. This means that the overall enthalpy change of a reaction can be determined by simply knowing the enthalpies of the compounds
involved in the reaction.
The application of Hess’s Law is particularly useful in cases where direct measurement of enthalpy changes is not feasible, such as in combustion reactions that occur in a closed system or in
reactions that take place at high temperatures and pressures. In these cases, the enthalpy changes can be determined by using a series of simpler reactions whose enthalpies are known.
To apply Hess’s Law in thermodynamic calculations, there are a few key steps to follow:
1. Understand the concept of standard enthalpy of formation. Standard enthalpy of formation (ΔH°f) is the enthalpy change that occurs when one mole of a substance is formed from its elements in their
standard states under standard conditions of temperature and pressure. This is a crucial concept in thermodynamics and serves as a reference point for calculating enthalpy changes.
2. Identify the target reaction and break it down into simpler steps. In order to use Hess’s Law, it is necessary to break down the target reaction into smaller, simpler reactions whose enthalpies
are known. This can be achieved by manipulating the target reaction using known thermochemical equations.
3. Balance the equations. The law of conservation of mass must be applied to ensure that the numbers of atoms on both sides of each equation are equal. This also applies to any compounds that appear
on both sides of the equations.
4. Use algebraic manipulation to obtain the final equation. Once the equations have been balanced, algebraic manipulation can be used to combine them in such a way that the final equation represents
the target reaction. This is necessary because the enthalpies of the individual reactions need to be added/subtracted to obtain the enthalpy change for the target reaction.
5. Apply Hess’s Law. The final equation obtained in the previous step represents the application of Hess’s Law, where the sum of the enthalpy changes of the individual reactions equals the overall
enthalpy change of the target reaction.
By following these steps, thermodynamic calculations involving enthalpy changes can be easily solved using Hess’s Law. This law has numerous applications in various fields, such as chemical
engineering, biochemistry, and environmental sciences, where it is used to determine the enthalpy changes in important reactions.
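As a brief worked illustration (a textbook example, not drawn from this article): the enthalpy of formation of carbon monoxide cannot be measured directly, because burning carbon always produces some CO2. But from the known reactions

C(s) + O2(g) → CO2(g), ΔH1 = −393.5 kJ/mol
CO(g) + ½O2(g) → CO2(g), ΔH2 = −283.0 kJ/mol

reversing the second equation and adding it to the first gives C(s) + ½O2(g) → CO(g), so by Hess's Law ΔH = ΔH1 − ΔH2 = −393.5 − (−283.0) = −110.5 kJ/mol.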
In conclusion, Hess’s Law is a fundamental concept in thermodynamics and an important tool for calculating enthalpy changes of chemical reactions. By understanding the concept of standard enthalpy of
formation and breaking down the target reaction into simpler steps, one can apply Hess’s Law to obtain accurate results. As technology continues to advance, the significance and applications of
Hess’s Law will continue to be relevant in the study of heat and energy in chemical reactions. | {"url":"https://tanbourit.com/applying-hesss-law-in-thermodynamic-calculations/","timestamp":"2024-11-09T08:54:21Z","content_type":"text/html","content_length":"112444","record_id":"<urn:uuid:8924f8c8-3bed-4944-ac76-80f273dbe5cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00013.warc.gz"} |
TBMG-3077: Effect of Gravitation on Noninteracting Trapped Fermions - Magazine Article
Effect of Gravitation on Noninteracting Trapped Fermions
A report presents a theoretical study of the thermodynamics of an ultralow-temperature gas of fermions that interact with a gravitational field and with an externally imposed trapping potential
but not with each other. The gravitational field is taken to define the z axis and the trapping potential to be of the form (m/2)(ω_x²x² + ω_y²y² + ω_z²z²), where m is the mass of a fermion; x, y, and z
are Cartesian coordinates originating at the center of the trap; and the ω values denote effective harmonic-oscillator angular frequencies with respect to motion along the respective coordinate
axes. The single-particle energy is found from the solution of the time-dependent Schroedinger equation for a Hamiltonian that includes kinetic energy plus the gravitational and trapping
potentials. The equation for the single-particle energy is combined with Fermi statistics to obtain equations for the chemical potential, internal energy, and specific heat of the gas; the number
of trapped fermions; and the spatial distribution of fermions at zero temperature. The equations reveal the ways in which the Fermi energy, the specific heat, and the shape of the Fermion cloud
are affected by the gravitational field and the anisotropy of the trapping field.
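For orientation, a standard result for a displaced harmonic oscillator (not quoted from the report itself): completing the square in z shows that a linear gravitational term mgz merely shifts the trap center and lowers every level by a constant, so the single-particle spectrum takes the form E(n_x, n_y, n_z) = ħω_x(n_x + 1/2) + ħω_y(n_y + 1/2) + ħω_z(n_z + 1/2) − mg²/(2ω_z²), with n_x, n_y, n_z = 0, 1, 2, ...; the gravitational field leaves the level spacings unchanged.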
"Effect of Gravitation on Noninteracting Trapped Fermions," Mobility Engineering, March 1, 2002.
| {"url":"https://saemobilus.sae.org/articles/effect-gravitation-noninteracting-trapped-fermions-tbmg-3077","timestamp":"2024-11-03T03:19:13Z","content_type":"text/html","content_length":"91440","record_id":"<urn:uuid:53cef0e9-0d9c-4478-9a4e-2c1dce3b1638>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00492.warc.gz"} |
Why It Matters: Probability and Probability Distributions
Recall the Big Picture—the four-step process that encompasses statistics (as it is presented in this course):
So far, we’ve discussed the first two steps:
Producing data—how data are obtained and what considerations affect the data production process.
Exploratory data analysis—tools that help us get a first feel for the data by exposing their features using graphs and numbers.
Our eventual goal is inference—drawing reliable conclusions about the population on the basis of what we’ve discovered in our sample. To really understand how inference works, though, we first need
to talk about probability, because it is the underlying foundation for the methods of statistical inference. We use an example to try to explain why probability is so essential to inference.
First, here is the general idea: As we all know, the way statistics works is that we use a sample to learn about the population from which it was drawn. Ideally, the sample should be random so that
it represents the population well.
Recall from Types of Statistical Studies and Producing Data that when we say a random sample represents the population well, we mean that there is no inherent bias in this sampling technique. It is
important to acknowledge, though, that this does not mean all random samples are necessarily “perfect.” Random samples are still random, and therefore no random sample will be exactly the same as
another. One random sample may give a fairly accurate representation of the population, whereas another random sample might be “off” purely because of chance. Unfortunately, when looking at a
particular sample (which is what happens in practice), we never know how much it differs from the population. This uncertainty is where probability comes into the picture. We use probability to
quantify how much we expect random samples to vary. This gives us a way to draw conclusions about the population in the face of the uncertainty that is generated by the use of a random sample. The
following example illustrates this important point.
Death Penalty
Suppose we are interested in estimating the percentage of U.S. adults who favor the death penalty. To do so, we choose a random sample of 1,200 U.S. adults and ask their opinion: either in favor of
or against the death penalty. We find that 744 of the 1,200, or 62%, are in favor. (Although this is only an example, 62% is quite realistic given some recent polls). Here is a picture that
illustrates what we have done and found in our example:
Our goal is to do inference—to learn and draw conclusions about the opinions of the entire population of U.S. adults regarding the death penalty on the basis of the opinions of only 1,200 of them.
Can we conclude that 62% of the population favors the death penalty? Another random sample could give a very different result, so we are uncertain. But because our sample is random, we know that our
uncertainty is due to chance, not to problems with how the sample was collected. So we can use probability to describe the likelihood that our sample is within a desired level of accuracy. For
example, probability can answer the question, How likely is it that our sample estimate is no more than 3% from the true percentage of all U.S. adults who are in favor of the death penalty?
Answering this question (which we do using probability) is obviously going to have an important impact on the confidence we can attach to the inference step. In particular, if we find it quite
unlikely that the sample percentage will be very different from the population percentage, then we have a lot of confidence that we can draw conclusions about the population on the basis of the sample.
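As a preview of the kind of calculation involved (a sketch using results developed later in the course, not a formal derivation): for a random sample of n = 1,200 with a sample proportion of 0.62, the typical chance variation (the standard error) is about sqrt(0.62 × 0.38 / 1200) ≈ 0.014, or 1.4 percentage points. Falling within 3% of the true population percentage therefore means falling within roughly two standard errors, which probability theory tells us happens well over 95% of the time.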
In this module, we discuss probability more generally. Then we begin to develop the probability machinery that underlies inference.
| {"url":"https://courses.lumenlearning.com/atd-herkimer-statisticssocsci/chapter/introduction-6/","timestamp":"2024-11-14T11:21:15Z","content_type":"text/html","content_length":"32655","record_id":"<urn:uuid:e8efdb41-91d1-47a1-a8e7-d90977c7d0ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00075.warc.gz"} |
Metric types
In Splunk Observability Cloud, there are four types of metrics: gauge, counters, cumulative counters, and histograms.
The following table lists the types of supported metrics and their default rollups in Splunk Observability Cloud:
Metric │ Description │ Rollup
Gauge metrics │ Represent data that has a specific value at each point in time. Gauge metrics can increase or decrease. │ Average
Counter metrics │ Represent a count of occurrences in a time interval. Counter metrics can only increase during the time interval. │ Sum
Cumulative counter metrics │ Represent a running count of occurrences, and measure the change in the value of the metric from the previous data point. │ Delta
Histograms │ Represent a distribution of measurements or metrics, with complete percentile data available. Data is distributed into equally sized intervals or “buckets”. │ Histogram
The type of the metric determines which default rollup function Splunk Observability Cloud applies to summarize individual incoming data points to match a specified data resolution. A rollup is a
statistical function that takes all the data points in a metric time series (MTS) over a time period and outputs a single data point. Splunk Observability Cloud applies rollups after it retrieves the
data points from storage but before it applies analytics functions. To learn more about rollups and data resolution, see Rollups in Data resolution and rollups in charts.
Splunk Observability Cloud applies the SignalFlow average() function to data points for gauge metrics. When you specify a 10-second resolution for a line graph plot, and Splunk Observability Cloud is
receiving data for the metric every second, each point in the line represents the average of 10 data points.
Gauges
Fan speed, CPU utilization, memory usage, and time spent processing a request are examples of gauge metric data.
Splunk Observability Cloud applies the SignalFlow average() function to data points for gauge metrics. When you specify a ten second resolution for a line graph plot, and Splunk Observability Cloud
is receiving data for the metric every second, each point on the line represents the average of 10 data points.
Counters
Number of requests handled, emails sent, and errors encountered are examples of counter metric data. The machine or app that generates the counter increments its value every time something happens
and resets the value at the end of each reporting interval.
Splunk Observability Cloud applies the SignalFlow sum() function to data points for counter metrics. When you specify a ten second resolution for a line graph plot, and Splunk Observability Cloud is
receiving data for the metric every second, each point on the line represents the sum of 10 data points.
Cumulative counters
Number of successful jobs, number of logged-in users, and number of warnings are examples of cumulative counter metric data. Cumulative counter metrics differ from counter metrics in the following
• Cumulative counters only reset to 0 when the monitored machine or application restarts or when the counter value reaches the maximum value representable (2^32 or 2^64).
• In most cases, you’re interested in how much the metric value changed between measurements.
Splunk Observability Cloud applies the SignalFlow delta() function to data points for cumulative counter metrics. When you specify a ten second resolution for a line graph plot, and Splunk
Observability Cloud is receiving data for the metric every second, each point on the line represents the change between the first data point received and the 10th data point received. As a result,
you don’t have to create custom SignalFlow to apply the delta() function, and the plot line represents variations.
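To make the three rollups concrete, here is a small Python sketch (purely illustrative arithmetic, not Splunk or SignalFlow code) showing how ten per-second data points collapse to one value at a 10-second resolution:

# Illustrative only: how sum, average, and delta rollups condense
# ten raw data points into a single value per 10-second window.
points = [3, 5, 4, 6, 2, 7, 5, 4, 6, 8]             # e.g. requests per second

sum_rollup = sum(points)                             # counter metric  -> 50
avg_rollup = sum(points) / len(points)               # gauge metric    -> 5.0

running = [10, 13, 18, 22, 28, 30, 37, 42, 46, 52]   # cumulative counter
delta_rollup = running[-1] - running[0]              # change in window -> 42

print(sum_rollup, avg_rollup, delta_rollup)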
Histograms
Histograms can summarize data in ways that are difficult to reproduce with other metrics. Thanks to the buckets, the distribution of your continuous data over time is easier to explore, as you
don’t have to analyze the entire dataset to see where all the data points are. At the same time, histograms help reduce usage of your subscription.
Splunk Observability Cloud applies the SignalFlow histogram() function to data points for histogram metrics, with a default percentile value of 90. You can apply several other functions to
histograms, like min, max, count, sum, percentile, and cumulative_distribution_function.
For more information, see Histogram metrics in Splunk Observability Cloud. | {"url":"https://docs.splunk.com/observability/en/metrics-and-metadata/metric-types.html","timestamp":"2024-11-10T04:59:36Z","content_type":"text/html","content_length":"79366","record_id":"<urn:uuid:158b6f5c-163e-4a80-9a9b-5db6bb748bdf>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00691.warc.gz"} |
subroutine cla_lin_berr (N, NZ, NRHS, RES, AYB, BERR)
CLA_LIN_BERR computes a component-wise relative backward error.
Function/Subroutine Documentation
subroutine cla_lin_berr ( integer N,
                          integer NZ,
                          integer NRHS,
                          complex, dimension( n, nrhs ) RES,
                          real, dimension( n, nrhs ) AYB,
                          real, dimension( nrhs ) BERR )
CLA_LIN_BERR computes a component-wise relative backward error.
CLA_LIN_BERR computes componentwise relative backward error from
the formula
max(i) ( abs(R(i)) / ( abs(op(A_s))*abs(Y) + abs(B_s) )(i) )
where abs(Z) is the componentwise absolute value of the matrix
or vector Z.
Parameters:
[in] N is INTEGER. The number of linear equations, i.e., the order of the matrix A. N >= 0.
[in] NZ is INTEGER. We add (NZ+1)*SLAMCH( 'Safe minimum' ) to R(i) in the numerator to guard against spuriously zero residuals. Default value is N.
[in] NRHS is INTEGER. The number of right hand sides, i.e., the number of columns of the matrices AYB, RES, and BERR. NRHS >= 0.
[in] RES is COMPLEX array, dimension (N,NRHS). The residual matrix, i.e., the matrix R in the relative backward error formula above.
[in] AYB is REAL array, dimension (N,NRHS). The denominator in the relative backward error formula above, i.e., the matrix abs(op(A_s))*abs(Y) + abs(B_s). The matrices A, Y, and B are from iterative refinement (see cla_gerfsx_extended.f).
[out] BERR is REAL array, dimension (NRHS). The componentwise relative backward error from the formula above.
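A minimal call sketch (the numeric values are made up; assumes the program is linked against LAPACK):

program demo
    implicit none
    integer, parameter :: n = 2, nrhs = 1
    complex :: res(n, nrhs)
    real    :: ayb(n, nrhs), berr(nrhs)
    res(:, 1) = [ (1.0e-6, 0.0), (-2.0e-6, 0.0) ]  ! residual matrix R
    ayb(:, 1) = [ 2.0, 4.0 ]                       ! abs(op(A_s))*abs(Y) + abs(B_s)
    call cla_lin_berr(n, n, nrhs, res, ayb, berr)  ! here NZ is simply set to N
    print *, 'componentwise backward error:', berr(1)
end program demo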
Author:
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 102 of file cla_lin_berr.f. | {"url":"https://netlib.org/lapack/explore-html-3.4.2/d9/de4/cla__lin__berr_8f.html","timestamp":"2024-11-11T15:02:35Z","content_type":"application/xhtml+xml","content_length":"11855","record_id":"<urn:uuid:6ac6a734-9b94-4c58-8971-c7004db34ac1>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00804.warc.gz"} |
In quantum mechanics, the canonical commutation relation is the fundamental relation between canonical conjugate quantities (quantities which are related by definition such that one is the Fourier
transform of another). For example, ${\displaystyle [{\hat {x}},{\hat {p}}_{x}]=i\hbar \mathbb {I} }$
between the position operator x and momentum operator p[x] in the x direction of a point particle in one dimension, where [x , p[x]] = x p[x] − p[x] x is the commutator of x and p[x], i is the
imaginary unit, and ℏ is the reduced Planck constant h/2π, and ${\displaystyle \mathbb {I} }$ is the unit operator. In general, position and momentum are vectors of operators and their commutation
relation between different components of position and momentum can be expressed as ${\displaystyle [{\hat {x}}_{i},{\hat {p}}_{j}]=i\hbar \delta _{ij},}$ where ${\displaystyle \delta _{ij}}$ is the
Kronecker delta.
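As a quick sanity check of the relation (not part of the article; a small SymPy sketch using the usual position representation, where x acts by multiplication and p = −iħ d/dx):

import sympy as sp

xv, hbar = sp.symbols('x hbar')
psi = sp.Function('psi')(xv)

def X(f):   # position operator: multiplication by x
    return xv * f

def P(f):   # momentum operator in the position representation
    return -sp.I * hbar * sp.diff(f, xv)

commutator = X(P(psi)) - P(X(psi))
print(sp.simplify(commutator))   # prints I*hbar*psi(x), i.e. [x, p] = i*hbar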
This relation is attributed to Werner Heisenberg, Max Born and Pascual Jordan (1925),^[1]^[2] who called it a "quantum condition" serving as a postulate of the theory; it was noted by E. Kennard
(1927)^[3] to imply the Heisenberg uncertainty principle. The Stone–von Neumann theorem gives a uniqueness result for operators satisfying (an exponentiated form of) the canonical commutation
Relation to classical mechanics
By contrast, in classical physics, all observables commute and the commutator would be zero. However, an analogous relation exists, which is obtained by replacing the commutator with the Poisson
bracket multiplied by iℏ, ${\displaystyle \{x,p\}=1\,.}$
This observation led Dirac to propose that the quantum counterparts ${\displaystyle {\hat {f}}}$ , ĝ of classical observables f, g satisfy ${\displaystyle [{\hat {f}},{\hat {g}}]=i\hbar {\widehat {\{f,g\}}}\,.}$
In 1946, Hip Groenewold demonstrated that a general systematic correspondence between quantum commutators and Poisson brackets could not hold consistently.^[4]^[5]
However, he further appreciated that such a systematic correspondence does, in fact, exist between the quantum commutator and a deformation of the Poisson bracket, today called the Moyal bracket,
and, in general, quantum operators and classical observables and distributions in phase space. He thus finally elucidated the consistent correspondence mechanism, the Wigner–Weyl transform, that
underlies an alternate equivalent mathematical representation of quantum mechanics known as deformation quantization.^[4]^[6]
Derivation from Hamiltonian mechanics
According to the correspondence principle, in certain limits the quantum equations of states must approach Hamilton's equations of motion. The latter state the following relation between the
generalized coordinate q (e.g. position) and the generalized momentum p: ${\displaystyle {\begin{cases}{\dot {q}}={\frac {\partial H}{\partial p}}=\{q,H\};\\{\dot {p}}=-{\frac {\partial H}{\partial q}}=\{p,H\}.\end{cases}}}$
In quantum mechanics the Hamiltonian ${\displaystyle {\hat {H}}}$ , (generalized) coordinate ${\displaystyle {\hat {Q}}}$ and (generalized) momentum ${\displaystyle {\hat {P}}}$ are all linear
The time derivative of a quantum state is represented by the operator ${\displaystyle -i{\hat {H}}/\hbar }$ (by the Schrödinger equation). Equivalently, since in the Schrödinger picture the operators
are not explicitly time-dependent, the operators can be seen to be evolving in time (for a contrary perspective where the operators are time dependent, see Heisenberg picture) according to their
commutation relation with the Hamiltonian: ${\displaystyle {\frac {d{\hat {Q}}}{dt}}={\frac {i}{\hbar }}[{\hat {H}},{\hat {Q}}]}$ ${\displaystyle {\frac {d{\hat {P}}}{dt}}={\frac {i}{\hbar }}[{\hat
{H}},{\hat {P}}]\,\,.}$
In order for that to reconcile in the classical limit with Hamilton's equations of motion, ${\displaystyle [{\hat {H}},{\hat {Q}}]}$ must depend entirely on the appearance of ${\displaystyle {\hat
{P}}}$ in the Hamiltonian and ${\displaystyle [{\hat {H}},{\hat {P}}]}$ must depend entirely on the appearance of ${\displaystyle {\hat {Q}}}$ in the Hamiltonian. Further, since the Hamiltonian
operator depends on the (generalized) coordinate and momentum operators, it can be viewed as a functional, and we may write (using functional derivatives): ${\displaystyle [{\hat {H}},{\hat {Q}}]={\
frac {\delta {\hat {H}}}{\delta {\hat {P}}}}\cdot [{\hat {P}},{\hat {Q}}]}$ ${\displaystyle [{\hat {H}},{\hat {P}}]={\frac {\delta {\hat {H}}}{\delta {\hat {Q}}}}\cdot [{\hat {Q}},{\hat {P}}]\,.}$
In order to obtain the classical limit we must then have ${\displaystyle [{\hat {Q}},{\hat {P}}]=i\hbar ~\mathbb {I} .}$
Weyl relations
The group ${\displaystyle H_{3}(\mathbb {R} )}$ generated by exponentiation of the 3-dimensional Lie algebra determined by the commutation relation ${\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar }$
is called the Heisenberg group. This group can be realized as the group of ${\displaystyle 3\times 3}$ upper triangular matrices with ones on the diagonal.^[7]
According to the standard mathematical formulation of quantum mechanics, quantum observables such as ${\displaystyle {\hat {x}}}$ and ${\displaystyle {\hat {p}}}$ should be represented as
self-adjoint operators on some Hilbert space. It is relatively easy to see that two operators satisfying the above canonical commutation relations cannot both be bounded. Certainly, if ${\
displaystyle {\hat {x}}}$ and ${\displaystyle {\hat {p}}}$ were trace class operators, the relation ${\displaystyle \operatorname {Tr} (AB)=\operatorname {Tr} (BA)}$ gives a nonzero number on the
right and zero on the left.
Alternately, if ${\displaystyle {\hat {x}}}$ and ${\displaystyle {\hat {p}}}$ were bounded operators, note that ${\displaystyle [{\hat {x}}^{n},{\hat {p}}]=i\hbar n{\hat {x}}^{n-1}}$ , hence the
operator norms would satisfy ${\displaystyle 2\left\|{\hat {p}}\right\|\left\|{\hat {x}}^{n-1}\right\|\left\|{\hat {x}}\right\|\geq n\hbar \left\|{\hat {x}}^{n-1}\right\|,}$ so that, for any n, ${\
displaystyle 2\left\|{\hat {p}}\right\|\left\|{\hat {x}}\right\|\geq n\hbar }$ However, n can be arbitrarily large, so at least one operator cannot be bounded, and the dimension of the underlying
Hilbert space cannot be finite. If the operators satisfy the Weyl relations (an exponentiated version of the canonical commutation relations, described below) then as a consequence of the Stone–von
Neumann theorem, both operators must be unbounded.
Still, these canonical commutation relations can be rendered somewhat "tamer" by writing them in terms of the (bounded) unitary operators ${\displaystyle \exp(it{\hat {x}})}$ and ${\displaystyle \exp
(is{\hat {p}})}$ . The resulting braiding relations for these operators are the so-called Weyl relations ${\displaystyle \exp(it{\hat {x}})\exp(is{\hat {p}})=\exp(-ist/\hbar )\exp(is{\hat {p}})\exp
(it{\hat {x}}).}$ These relations may be thought of as an exponentiated version of the canonical commutation relations; they reflect that translations in position and translations in momentum do not
commute. One can easily reformulate the Weyl relations in terms of the representations of the Heisenberg group.
The uniqueness of the canonical commutation relations—in the form of the Weyl relations—is then guaranteed by the Stone–von Neumann theorem.
For technical reasons, the Weyl relations are not strictly equivalent to the canonical commutation relation ${\displaystyle [{\hat {x}},{\hat {p}}]=i\hbar }$ . If ${\displaystyle {\hat {x}}}$ and ${\
displaystyle {\hat {p}}}$ were bounded operators, then a special case of the Baker–Campbell–Hausdorff formula would allow one to "exponentiate" the canonical commutation relations to the Weyl
relations.^[8] Since, as we have noted, any operators satisfying the canonical commutation relations must be unbounded, the Baker–Campbell–Hausdorff formula does not apply without additional domain
assumptions. Indeed, counterexamples exist satisfying the canonical commutation relations but not the Weyl relations.^[9] (These same operators give a counterexample to the naive form of the
uncertainty principle.) These technical issues are the reason that the Stone–von Neumann theorem is formulated in terms of the Weyl relations.
A discrete version of the Weyl relations, in which the parameters s and t range over ${\displaystyle \mathbb {Z} /n}$ , can be realized on a finite-dimensional Hilbert space by means of the clock and
shift matrices.
It can be shown that ${\displaystyle [F({\vec {x}}),p_{i}]=i\hbar {\frac {\partial F({\vec {x}})}{\partial x_{i}}};\qquad [x_{i},F({\vec {p}})]=i\hbar {\frac {\partial F({\vec {p}})}{\partial p_{i}}}.}$
Using ${\displaystyle C_{n+1}^{k}=C_{n}^{k}+C_{n}^{k-1}}$ , it can be shown that by mathematical induction ${\displaystyle \left[{\hat {x}}^{n},{\hat {p}}^{m}\right]=\sum _{k=1}^{\min \left(m,n\
right)}{{\frac {-\left(-i\hbar \right)^{k}n!m!}{k!\left(n-k\right)!\left(m-k\right)!}}{\hat {x}}^{n-k}{\hat {p}}^{m-k}}=\sum _{k=1}^{\min \left(m,n\right)}{{\frac {\left(i\hbar \right)^{k}n!m!}{k!\
left(n-k\right)!\left(m-k\right)!}}{\hat {p}}^{m-k}{\hat {x}}^{n-k}},}$ generally known as McCoy's formula.^[10]
In addition, the simple formula ${\displaystyle [x,p]=i\hbar \,\mathbb {I} ~,}$ valid for the quantization of the simplest classical system, can be generalized to the case of an arbitrary Lagrangian
${\displaystyle {\mathcal {L}}}$ .^[11] We identify canonical coordinates (such as x in the example above, or a field Φ(x) in the case of quantum field theory) and canonical momenta π[x] (in the
example above it is p, or more generally, some functions involving the derivatives of the canonical coordinates with respect to time): ${\displaystyle \pi _{i}\ {\stackrel {\mathrm {def} }{=}}\ {\
frac {\partial {\mathcal {L}}}{\partial (\partial x_{i}/\partial t)}}.}$
This definition of the canonical momentum ensures that one of the Euler–Lagrange equations has the form ${\displaystyle {\frac {\partial }{\partial t}}\pi _{i}={\frac {\partial {\mathcal {L}}}{\
partial x_{i}}}.}$
The canonical commutation relations then amount to ${\displaystyle [x_{i},\pi _{j}]=i\hbar \delta _{ij}\,}$ where δ[ij] is the Kronecker delta.
Gauge invariance
Canonical quantization is applied, by definition, on canonical coordinates. However, in the presence of an electromagnetic field, the canonical momentum p is not gauge invariant. The correct
gauge-invariant momentum (or "kinetic momentum") is
${\displaystyle p_{\text{kin}}=p-qA\,\!}$ (SI units) ${\displaystyle p_{\text{kin}}=p-{\frac {qA}{c}}\,\!}$ (cgs units),
where q is the particle's electric charge, A is the vector potential, and c is the speed of light. Although the quantity p[kin] is the "physical momentum", in that it is the quantity to be identified
with momentum in laboratory experiments, it does not satisfy the canonical commutation relations; only the canonical momentum does that. This can be seen as follows.
The non-relativistic Hamiltonian for a quantized charged particle of mass m in a classical electromagnetic field is (in cgs units) ${\displaystyle H={\frac {1}{2m}}\left(p-{\frac {qA}{c}}\right)^{2}
+q\phi }$ where A is the three-vector potential and φ is the scalar potential. This form of the Hamiltonian, as well as the Schrödinger equation Hψ = iħ∂ψ/∂t, the Maxwell equations and the Lorentz
force law are invariant under the gauge transformation ${\displaystyle A\to A'=A+\nabla \Lambda }$ ${\displaystyle \phi \to \phi '=\phi -{\frac {1}{c}}{\frac {\partial \Lambda }{\partial t}}}$ ${\displaystyle \psi \to \psi '=U\psi }$ ${\displaystyle H\to H'=UHU^{\dagger },}$ where ${\displaystyle U=\exp \left({\frac {iq\Lambda }{\hbar c}}\right)}$ and Λ = Λ(x,t) is the gauge function.
The angular momentum operator is ${\displaystyle L=r\times p\,\!}$ and obeys the canonical quantization relations ${\displaystyle [L_{i},L_{j}]=i\hbar {\epsilon _{ijk}}L_{k}}$ defining the Lie
algebra for so(3), where ${\displaystyle \epsilon _{ijk}}$ is the Levi-Civita symbol. Under gauge transformations, the angular momentum transforms as ${\displaystyle \langle \psi \vert L\vert \psi \
rangle \to \langle \psi ^{\prime }\vert L^{\prime }\vert \psi ^{\prime }\rangle =\langle \psi \vert L\vert \psi \rangle +{\frac {q}{\hbar c}}\langle \psi \vert r\times abla \Lambda \vert \psi \rangle
The gauge-invariant angular momentum (or "kinetic angular momentum") is given by ${\displaystyle K=r\times \left(p-{\frac {qA}{c}}\right),}$ which has the commutation relations ${\displaystyle [K_
{i},K_{j}]=i\hbar {\epsilon _{ij}}^{\,k}\left(K_{k}+{\frac {q\hbar }{c}}x_{k}\left(x\cdot B\right)\right)}$ where ${\displaystyle B=\nabla \times A}$ is the magnetic field. The inequivalence of these
two formulations shows up in the Zeeman effect and the Aharonov–Bohm effect.
Uncertainty relation and commutators
All such nontrivial commutation relations for pairs of operators lead to corresponding uncertainty relations,^[12] involving positive semi-definite expectation contributions by their respective
commutators and anticommutators. In general, for two Hermitian operators A and B, consider expectation values in a system in the state ψ, the variances around the corresponding expectation values
being (ΔA)^2 ≡ ⟨(A − ⟨A⟩)^2⟩, etc.
Then ${\displaystyle \Delta A\,\Delta B\geq {\frac {1}{2}}{\sqrt {\left|\left\langle \left[{A},{B}\right]\right\rangle \right|^{2}+\left|\left\langle \left\{A-\langle A\rangle ,B-\langle B\rangle \
right\}\right\rangle \right|^{2}}},}$ where [A,B] ≡ AB − BA is the commutator of A and B, and {A,B} ≡ AB + BA is the anticommutator.
This follows through use of the Cauchy–Schwarz inequality, since |⟨A^2⟩||⟨B^2⟩| ≥ |⟨AB⟩|^2, and AB = ([A,B] + {A,B})/2; and similarly for the shifted operators A − ⟨A⟩ and B − ⟨B⟩. (Cf.
uncertainty principle derivations.)
Substituting for A and B (and taking care with the analysis) yield Heisenberg's familiar uncertainty relation for x and p, as usual.
Uncertainty relation for angular momentum operators
For the angular momentum operators L[x] = yp[z] − zp[y], etc., one has that ${\displaystyle [{L_{x}},{L_{y}}]=i\hbar \epsilon _{xyz}{L_{z}},}$ where ${\displaystyle \epsilon _{xyz}}$ is the
Levi-Civita symbol and simply reverses the sign of the answer under pairwise interchange of the indices. An analogous relation holds for the spin operators.
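The spin-1/2 case can be verified in a few lines (again a SymPy sketch, not from the article), using S_i = (ħ/2)σ_i:

import sympy as sp

hbar = sp.symbols('hbar')
Sx = hbar/2 * sp.Matrix([[0, 1], [1, 0]])        # (hbar/2) * Pauli sigma_x
Sy = hbar/2 * sp.Matrix([[0, -sp.I], [sp.I, 0]])
Sz = hbar/2 * sp.Matrix([[1, 0], [0, -1]])

# [Sx, Sy] - i*hbar*Sz should simplify to the zero matrix
print(sp.simplify(Sx*Sy - Sy*Sx - sp.I*hbar*Sz))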
Here, for L[x] and L[y],^[12] in angular momentum multiplets ψ = |ℓ,m⟩, one has, for the transverse components of the Casimir invariant L[x]^2 + L[y]^2+ L[z]^2, the z-symmetric relations
⟨L[x]^2⟩ = ⟨L[y]^2⟩ = (ℓ(ℓ + 1) − m^2)ℏ^2/2,
as well as ⟨L[x]⟩ = ⟨L[y]⟩ = 0.
Consequently, the above inequality applied to this commutation relation specifies ${\displaystyle \Delta L_{x}\,\Delta L_{y}\geq {\frac {1}{2}}{\sqrt {\hbar ^{2}|\langle L_{z}\rangle |^{2}}}~,}$
hence ${\displaystyle {\sqrt {|\langle L_{x}^{2}\rangle \langle L_{y}^{2}\rangle |}}\geq {\frac {\hbar ^{2}}{2}}\vert m\vert }$ and therefore ${\displaystyle \ell (\ell +1)-m^{2}\geq |m|~,}$ so,
then, it yields useful constraints such as a lower bound on the Casimir invariant: ℓ(ℓ + 1) ≥ |m|(|m| + 1), and hence ℓ ≥ |m|, among others.
See also
• Hall, Brian C. (2013), Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer.
• Hall, Brian C. (2015), Lie Groups, Lie Algebras and Representations, An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer. | {"url":"https://www.knowpia.com/knowpedia/Canonical_commutation_relation","timestamp":"2024-11-11T16:11:23Z","content_type":"text/html","content_length":"249020","record_id":"<urn:uuid:0c1416db-d064-4d4a-af4f-2dee7de158d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00597.warc.gz"} |
Don’t Let R² Fool You
Has a low R² ever disappointed you during the analysis of your experimental results? Is this really the kiss of death? Is all lost? Let’s examine R² as it relates to factorial design of experiments
(DOE) and find out.
R² measures are calculated on the basis of the change in the response (Δy) relative to the total variation of the response (Δy + σ) over the range of the independent factor.
Let's look at an example. Response y is dependent on factor x in a linear fashion: y = β0 + β1x.
We run a DOE using levels x1 and x2 in Figure 1 (below) to estimate beta1 (β1). Having the independent factor levels far apart generates a large signal-to-noise ratio (Δ12) and it is relatively easy
to estimate β1. Because the signal (Δy) is large relative to the noise (σ), R² approaches one.
What if we had run a DOE using levels x3 and x4 in Figure 1 to estimate β1? Having the independent factor levels closer together generates a smaller signal-to-noise ratio (Δ34) and it is more
difficult to estimate β1. We can overcome this difficulty by running more replicates of the experiments. If enough replicates are run, β1 can be estimated with the same precision as in the first DOE
using levels x1 and x2. But, because the signal (Δy) is smaller relative to the noise (σ), R² will be smaller, no matter how many replicates are run!
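A small simulation sketch makes this concrete (illustrative Python, not Design-Expert output; the true slope, noise level, and replicate counts are made-up values):

import numpy as np
rng = np.random.default_rng(0)

def fit(x, y):
    # least-squares slope, its standard error, and R-squared
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sse = np.sum(resid ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    se = np.sqrt(sse / (len(x) - 2) / np.sum((x - x.mean()) ** 2))
    return beta[1], se, 1 - sse / sst

wide = np.repeat([-1.0, 1.0], 5)        # levels far apart, 5 replicates each
narrow = np.repeat([-0.2, 0.2], 125)    # levels close together, many replicates
for x in (wide, narrow):
    y = 2 * x + rng.normal(0, 1, x.size)
    print(fit(x, y))

Both designs estimate the slope with about the same standard error, but R² for the narrow design comes out far lower, purely because the signal spans a smaller range relative to the noise.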
In factorial design of experiments our goal is to identify the active factors and measure their effects. Experiments can be designed with replication so active factors can be found even in the
absence of a huge signal-to-noise ratio. Power allows us to determine how many replicates are needed. The delta (Δ) and sigma (σ) used in the power calculation also give us an estimate of the
expected R² (see the formula above). In many real DOEs we intentionally limit a factor’s range to avoid problems. Success is measured with the ANOVA (analysis of variance) and the t-tests on the
model coefficients. A significant p-value indicates an active factor and a reasonable estimate of its effects. A significant p-value, along with a low R², may mean a proper job of designing the
experiments, rather than a problem!
R² is an interesting statistic, but not of primary importance in factorial DOE. Don’t be fooled by R²! | {"url":"https://www.statease.com/blog/dont-let-r%C2%B2-fool-you/","timestamp":"2024-11-12T18:38:46Z","content_type":"text/html","content_length":"19179","record_id":"<urn:uuid:6dca60b2-f10c-49a9-a30c-5161e0153a4f>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00609.warc.gz"} |
Periodic piecewise function
I have some very simple questions about how to define a periodic function in Mathematica. I've never used Mathematica before so please forgive my ignorance.
What I need to do is graph and obtain the Fourier series for a 2*Pi-periodic function. My function is defined as follows:
• exp(x) when -pi < x < pi
• cosh(pi) when x = -pi or x = pi
I told this to Mathematica this way:
f[x_] := Piecewise[{{Exp[x], -Pi < x < Pi}, {Cosh[x],
x == -Pi || x == Pi}}]
I think it worked properly because when I evaluate the function I get the appropriate results.
Now the problem is I need to extend this definition to the whole real number line, taking into account that f(x+2pi) = f(x). I tried to do this several ways, but none of them worked and I couldn't
figure out a solution.
Another issue is how to plot this showing the points (n*pi, cosh(n*pi)). When I plot the function it shows the line for exp(x) but nothing for cosh(x), and I need the dots to be seen. Any help would
be appreciated.
All the solutions worked like a charm, just like I wanted. Thank you very much!
Another way is thinking recursively!
f[x_] := Which[x > Pi, f[x - 2*Pi],
x < -Pi, f[x + 2*Pi],
-Pi < x < Pi, Exp[x],
x == -Pi || x == Pi, Cosh[x]
For the periodic continuation, you could try g[x_] := f[Mod[x, 2 Pi, -Pi]], and use an Epilog with Point primitives (sketched below)
to add the discrete points to the plot.
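A sketch of that idea (the exact code from the original reply was not preserved; this version marks the points x = (2k+1)·Pi, where the periodic function takes the value Cosh[Pi]):

g[x_] := f[Mod[x, 2 Pi, -Pi]]
Plot[g[x], {x, -3 Pi, 3 Pi},
 Epilog -> {PointSize[Medium], Red,
   Point[Table[{(2 k + 1) Pi, Cosh[Pi]}, {k, -2, 1}]]}]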
EDIT: Too slow, Szabolcs already answered :-)
Would defining your function as
Exp@Mod[x, 2 Pi, -Pi]
work for your purposes? It will work for plotting (except for the Cosh part, but that value is taken only in separate points).
| {"url":"https://community.wolfram.com/groups/-/m/t/156025?sortMsg=Recent","timestamp":"2024-11-03T17:28:37Z","content_type":"text/html","content_length":"110022","record_id":"<urn:uuid:ea02b513-c8d0-4e28-b416-6cdeb22f46d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00674.warc.gz"} |
BoTorch · Bayesian Optimization in PyTorch
Core abstractions and generic optimizers.
class botorch.optim.core.OptimizationStatus(value)[source]¶
Bases: int, Enum
An enumeration.
class botorch.optim.core.OptimizationResult(step: 'int', fval: 'Union[float, int]', status: 'OptimizationStatus', runtime: 'Optional[float]' = None, message: 'Optional[str]' = None)[source]¶
Bases: object
☆ step (int)
☆ fval (float | int)
☆ status (OptimizationStatus)
☆ runtime (float | None)
☆ message (str | None)
step: int¶
fval: float | int¶
status: OptimizationStatus¶
runtime: float | None = None¶
message: str | None = None¶
botorch.optim.core.scipy_minimize(closure, parameters, bounds=None, callback=None, x0=None, method='L-BFGS-B', options=None, timeout_sec=None)[source]¶
Generic scipy.optimize.minimize-based optimization routine.
☆ closure (Callable[[], tuple[Tensor, Sequence[Tensor | None]]] | NdarrayOptimizationClosure) – Callable that returns a tensor and an iterable of gradient tensors or
NdarrayOptimizationClosure instance.
☆ parameters (dict[str, Tensor]) – A dictionary of tensors to be optimized.
☆ bounds (dict[str, tuple[float | None, float | None]] | None) – A dictionary mapping parameter names to lower and upper bounds.
☆ callback (Callable[[dict[str, Tensor], OptimizationResult], None] | None) – A callable taking parameters and an OptimizationResult as arguments.
☆ x0 (ndarray | None) – An optional initialization vector passed to scipy.optimize.minimize.
☆ method (str) – Solver type, passed along to scipy.minimize.
☆ options (dict[str, Any] | None) – Dictionary of solver options, passed along to scipy.minimize.
☆ timeout_sec (float | None) – Timeout in seconds to wait before aborting the optimization loop if not converged (will return the best found solution thus far).
An OptimizationResult summarizing the final state of the run.
Return type:
OptimizationResult
botorch.optim.core.torch_minimize(closure, parameters, bounds=None, callback=None, optimizer=<class 'torch.optim.adam.Adam'>, scheduler=None, step_limit=None, timeout_sec=None, stopping_criterion=None)[source]¶
Generic torch.optim-based optimization routine.
☆ closure (Callable[[], tuple[Tensor, Sequence[Tensor | None]]]) – Callable that returns a tensor and an iterable of gradient tensors. Responsible for setting relevant parameters’ grad
☆ parameters (dict[str, Tensor]) – A dictionary of tensors to be optimized.
☆ bounds (dict[str, tuple[float | None, float | None]] | None) – An optional dictionary of bounds for elements of parameters.
☆ callback (Callable[[dict[str, Tensor], OptimizationResult], None] | None) – A callable taking parameters and an OptimizationResult as arguments.
☆ optimizer (Optimizer | Callable[[list[Tensor]], Optimizer]) – A torch.optim.Optimizer instance or a factory that takes a list of parameters and returns an Optimizer instance.
☆ scheduler (LRScheduler | Callable[[Optimizer], LRScheduler] | None) – A torch.optim.lr_scheduler._LRScheduler instance or a factory that takes a Optimizer instance and returns a
_LRSchedule instance.
☆ step_limit (int | None) – Integer specifying a maximum number of optimization steps. One of step_limit, stopping_criterion, or timeout_sec must be passed.
☆ timeout_sec (float | None) – Timeout in seconds before terminating the optimization loop. One of step_limit, stopping_criterion, or timeout_sec must be passed.
☆ stopping_criterion (Callable[[Tensor], bool] | None) – A StoppingCriterion for the optimization loop.
An OptimizationResult summarizing the final state of the run.
Return type:
OptimizationResult
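A minimal usage sketch (not from the official docs; the quadratic objective and the parameter name are illustrative):

import torch
from botorch.optim.core import torch_minimize

x = torch.zeros(3, requires_grad=True)

def closure():
    # reset the gradient, evaluate the loss, and backpropagate,
    # returning the loss together with the gradient tensors
    if x.grad is not None:
        x.grad = None
    loss = torch.sum((x - 1.0) ** 2)
    loss.backward()
    return loss, [x.grad]

result = torch_minimize(closure=closure, parameters={"x": x}, step_limit=100)
print(result.status, x.detach())  # should approach tensor([1., 1., 1.])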
Acquisition Function Optimization¶
Methods for optimizing acquisition functions.
class botorch.optim.optimize.OptimizeAcqfInputs(acq_function, bounds, q, num_restarts, raw_samples, options, inequality_constraints, equality_constraints, nonlinear_inequality_constraints,
fixed_features, post_processing_func, batch_initial_conditions, return_best_only, gen_candidates, sequential, ic_generator=None, timeout_sec=None, return_full_tree=False,
retry_on_optimization_warning=True, ic_gen_kwargs=<factory>)[source]¶
Bases: object
Container for inputs to optimize_acqf.
See docstring for optimize_acqf for explanation of parameters.
☆ acq_function (AcquisitionFunction)
☆ bounds (Tensor)
☆ q (int)
☆ num_restarts (int)
☆ raw_samples (int | None)
☆ options (dict[str, bool | float | int | str] | None)
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None)
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None)
☆ nonlinear_inequality_constraints (list[tuple[Callable, bool]] | None)
☆ fixed_features (dict[int, float] | None)
☆ post_processing_func (Callable[[Tensor], Tensor] | None)
☆ batch_initial_conditions (Tensor | None)
☆ return_best_only (bool)
☆ gen_candidates (Callable[[Tensor, AcquisitionFunction, Any], tuple[Tensor, Tensor]])
☆ sequential (bool)
☆ ic_generator (Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[
Tensor, Tensor, float]] | None], Tensor | None] | None)
☆ timeout_sec (float | None)
☆ return_full_tree (bool)
☆ retry_on_optimization_warning (bool)
☆ ic_gen_kwargs (dict)
acq_function: AcquisitionFunction¶
bounds: Tensor¶
q: int¶
num_restarts: int¶
raw_samples: int | None¶
options: dict[str, bool | float | int | str] | None¶
inequality_constraints: list[tuple[Tensor, Tensor, float]] | None¶
equality_constraints: list[tuple[Tensor, Tensor, float]] | None¶
nonlinear_inequality_constraints: list[tuple[Callable, bool]] | None¶
fixed_features: dict[int, float] | None¶
post_processing_func: Callable[[Tensor], Tensor] | None¶
batch_initial_conditions: Tensor | None¶
return_best_only: bool¶
gen_candidates: Callable[[Tensor, AcquisitionFunction, Any], tuple[Tensor, Tensor]]¶
sequential: bool¶
ic_generator: Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[Tensor,
Tensor, float]] | None], Tensor | None] | None = None¶
timeout_sec: float | None = None¶
return_full_tree: bool = False¶
retry_on_optimization_warning: bool = True¶
ic_gen_kwargs: dict¶
property full_tree: bool¶
Return type:
Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[Tensor, Tensor,
float]] | None], Tensor | None]
botorch.optim.optimize.optimize_acqf(acq_function, bounds, q, num_restarts, raw_samples=None, options=None, inequality_constraints=None, equality_constraints=None, nonlinear_inequality_constraints=
None, fixed_features=None, post_processing_func=None, batch_initial_conditions=None, return_best_only=True, gen_candidates=None, sequential=False, *, ic_generator=None, timeout_sec=None,
return_full_tree=False, retry_on_optimization_warning=True, **ic_gen_kwargs)[source]¶
Generate a set of candidates via multi-start optimization.
☆ acq_function (AcquisitionFunction) – An AcquisitionFunction.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X (if inequality_constraints is provided, these bounds can be -inf and +inf, respectively).
☆ q (int) – The number of candidates.
☆ num_restarts (int) – The number of starting points for multistart acquisition function optimization.
☆ raw_samples (int | None) – The number of samples for initialization. This is required if batch_initial_conditions is not specified.
☆ options (dict[str, bool | float | int | str] | None) – Options for candidate generation.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X
[indices[i]] * coefficients[i]) >= rhs. indices and coefficients should be torch tensors. See the docstring of make_scipy_linear_constraints for an example. When q=1, or when applying the
same constraint to each candidate in the batch (intra-point constraint), indices should be a 1-d tensor. For inter-point constraints, in which the constraint is applied to the whole batch
of candidates, indices must be a 2-d tensor, where in each row indices[i] =(k_i, l_i) the first index k_i corresponds to the k_i-th element of the q-batch and the second index l_i
corresponds to the l_i-th feature of that element.
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X
[indices[i]] * coefficients[i]) = rhs. See the docstring of make_scipy_linear_constraints for an example.
☆ nonlinear_inequality_constraints (list[tuple[Callable, bool]] | None) – A list of tuples representing the nonlinear inequality constraints. The first element in the tuple is a callable
representing a constraint of the form callable(x) >= 0. In case of an intra-point constraint, callable() takes in a one-dimensional tensor of shape d and returns a scalar. In case of an
inter-point constraint, callable() takes a two dimensional tensor of shape q x d and again returns a scalar. The second element is a boolean, indicating if it is an intra-point or
inter-point constraint (True for intra-point. False for inter-point). For more information on intra-point vs inter-point constraints, see the docstring of the inequality_constraints
argument to optimize_acqf(). The constraints will later be passed to the scipy solver. You need to pass in batch_initial_conditions in this case. Using non-linear inequality constraints
also requires that batch_limit is set to 1, which will be done automatically if not specified in options.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ post_processing_func (Callable[[Tensor], Tensor] | None) – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations).
☆ batch_initial_conditions (Tensor | None) – A tensor to specify the initial conditions. Set this if you do not want to use default initialization strategy.
☆ return_best_only (bool) – If False, outputs the solutions corresponding to all random restart initializations of the optimization.
☆ gen_candidates (Callable[[Tensor, AcquisitionFunction, Any], tuple[Tensor, Tensor]] | None) – A callable for generating candidates (and their associated acquisition values) given a tensor of initial conditions and an acquisition function. Other common inputs include lower and upper bounds and a dictionary of options, but refer to the documentation of specific generation functions (e.g. gen_candidates_scipy and gen_candidates_torch) for method-specific inputs. Default: gen_candidates_scipy
☆ sequential (bool) – If False, uses joint optimization, otherwise uses sequential optimization.
☆ ic_generator (Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[
Tensor, Tensor, float]] | None], Tensor | None] | None) – Function for generating initial conditions. Not needed when batch_initial_conditions are provided. Defaults to
gen_one_shot_kg_initial_conditions for qKnowledgeGradient acquisition functions and gen_batch_initial_conditions otherwise. Must be specified for nonlinear inequality constraints.
☆ timeout_sec (float | None) – Max amount of time optimization can run for.
☆ return_full_tree (bool)
☆ retry_on_optimization_warning (bool) – Whether to retry candidate generation with a new set of initial conditions when it fails with an OptimizationWarning.
☆ ic_gen_kwargs (Any) – Additional keyword arguments passed to function specified by ic_generator
A two-element tuple containing
☆ a tensor of generated candidates. The shape is q x d if return_best_only is True (default), and num_restarts x q x d if return_best_only is False.
☆ a tensor of associated acquisition values. If sequential=False, this is a (num_restarts)-dim tensor of joint acquisition values (with explicit restart dimension if return_best_only=False). If sequential=True, this is a q-dim tensor of expected acquisition values conditional on having observed candidates 0, 1, …, i-1.
Return type:
tuple[Tensor, Tensor]
>>> # generate `q=2` candidates jointly using 20 random restarts
>>> # and 512 raw samples
>>> candidates, acq_value = optimize_acqf(qEI, bounds, 2, 20, 512)
>>> # generate `q=3` candidates sequentially using 15 random restarts
>>> # and 256 raw samples
>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0.], [1.]])
>>> candidates, acq_value_list = optimize_acqf(
>>> qEI, bounds, 3, 15, 256, sequential=True
>>> )
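The nonlinear-constraint path described above is easy to get wrong, so here is a minimal sketch; the fitted model is assumed, and the unit-ball constraint and feasible starting points are illustrative only.
>>> # assumed: `model` is a fitted single-output GP on a 2-d unit box
>>> from botorch.acquisition import qExpectedImprovement
>>> from botorch.optim import optimize_acqf
>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0., 0.], [1., 1.]])
>>> # intra-point constraint: callable(x) >= 0 keeps x inside the unit ball
>>> unit_ball = lambda x: 1.0 - (x ** 2).sum(dim=-1)
>>> X_init = 0.5 * torch.rand(20, 1, 2)  # feasible num_restarts x q x d starts
>>> candidates, acq_value = optimize_acqf(
>>>     qEI, bounds, q=1, num_restarts=20,
>>>     nonlinear_inequality_constraints=[(unit_ball, True)],
>>>     batch_initial_conditions=X_init,  # required with nonlinear constraints
>>> )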
botorch.optim.optimize.optimize_acqf_cyclic(acq_function, bounds, q, num_restarts, raw_samples=None, options=None, inequality_constraints=None, equality_constraints=None, fixed_features=None,
post_processing_func=None, batch_initial_conditions=None, cyclic_options=None, *, ic_generator=None, timeout_sec=None, return_full_tree=False, retry_on_optimization_warning=True, **ic_gen_kwargs)[source]¶
Generate a set of q candidates via cyclic optimization.
☆ acq_function (AcquisitionFunction) – An AcquisitionFunction
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X (if inequality_constraints is provided, these bounds can be -inf and +inf, respectively).
☆ q (int) – The number of candidates.
☆ num_restarts (int) – Number of starting points for multistart acquisition function optimization.
☆ raw_samples (int | None) – Number of samples for initialization. This is required if batch_initial_conditions is not specified.
☆ options (dict[str, bool | float | int | str] | None) – Options for candidate generation.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ post_processing_func (Callable[[Tensor], Tensor] | None) – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations).
☆ batch_initial_conditions (Tensor | None) – A tensor to specify the initial conditions. If no initial conditions are provided, the default initialization will be used.
☆ cyclic_options (dict[str, bool | float | int | str] | None) – Options for stopping criterion for outer cyclic optimization.
☆ ic_generator (Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[
Tensor, Tensor, float]] | None], Tensor | None] | None) – Function for generating initial conditions. Not needed when batch_initial_conditions are provided. Defaults to
gen_one_shot_kg_initial_conditions for qKnowledgeGradient acquisition functions and gen_batch_initial_conditions otherwise. Must be specified for nonlinear inequality constraints.
☆ timeout_sec (float | None) – Max amount of time optimization can run for.
☆ return_full_tree (bool)
☆ retry_on_optimization_warning (bool) – Whether to retry candidate generation with a new set of initial conditions when it fails with an OptimizationWarning.
☆ ic_gen_kwargs (Any) – Additional keyword arguments passed to function specified by ic_generator
A two-element tuple containing
☆ a q x d-dim tensor of generated candidates.
☆ a q-dim tensor of expected acquisition values, where the value at index i is the acquisition value conditional on having observed all candidates except candidate i.
Return type:
tuple[Tensor, Tensor]
>>> # generate `q=3` candidates cyclically using 15 random restarts
>>> # 256 raw samples, and 4 cycles
>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0.], [1.]])
>>> candidates, acq_value_list = optimize_acqf_cyclic(
>>> qEI, bounds, 3, 15, 256, cyclic_options={"maxiter": 4}
>>> )
botorch.optim.optimize.optimize_acqf_list(acq_function_list, bounds, num_restarts, raw_samples=None, options=None, inequality_constraints=None, equality_constraints=None,
nonlinear_inequality_constraints=None, fixed_features=None, fixed_features_list=None, post_processing_func=None, ic_generator=None, ic_gen_kwargs=None)[source]¶
Generate a list of candidates from a list of acquisition functions.
The acquisition functions are optimized in sequence, with previous candidates set as X_pending. This is also known as sequential greedy optimization.
☆ acq_function_list (list[AcquisitionFunction]) – A list of acquisition functions.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X (if inequality_constraints is provided, these bounds can be -inf and +inf, respectively).
☆ num_restarts (int) – Number of starting points for multistart acquisition function optimization.
☆ raw_samples (int | None) – Number of samples for initialization. This is required if batch_initial_conditions is not specified.
☆ options (dict[str, bool | float | int | str] | None) – Options for candidate generation.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs
☆ nonlinear_inequality_constraints (list[tuple[Callable, bool]] | None) – A list of tuples representing the nonlinear inequality constraints. The first element in the tuple is a callable representing a constraint of the form callable(x) >= 0. In case of an intra-point constraint, callable() takes in a one-dimensional tensor of shape d and returns a scalar. In case of an inter-point constraint, callable() takes a two-dimensional tensor of shape q x d and again returns a scalar. The second element is a boolean, indicating if it is an intra-point or inter-point constraint (True for intra-point, False for inter-point). For more information on intra-point vs inter-point constraints, see the docstring of the inequality_constraints argument to optimize_acqf(). The constraints will later be passed to the scipy solver. You need to pass in batch_initial_conditions in this case. Using non-linear inequality constraints also requires that batch_limit is set to 1, which will be done automatically if not specified in options.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ fixed_features_list (list[dict[int, float]] | None) – A list of maps {feature_index: value}. The i-th item represents the fixed_feature for the i-th optimization. If fixed_features_list
is provided, optimize_acqf_mixed is invoked.
☆ post_processing_func (Callable[[Tensor], Tensor] | None) – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations).
☆ ic_generator (Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[
Tensor, Tensor, float]] | None], Tensor | None] | None) – Function for generating initial conditions. Not needed when batch_initial_conditions are provided. Defaults to
gen_one_shot_kg_initial_conditions for qKnowledgeGradient acquisition functions and gen_batch_initial_conditions otherwise. Must be specified for nonlinear inequality constraints.
☆ ic_gen_kwargs (dict | None) – Additional keyword arguments passed to function specified by ic_generator
A two-element tuple containing
☆ a q x d-dim tensor of generated candidates.
☆ a q-dim tensor of expected acquisition values, where the value at index i is the acquisition value conditional on having observed all candidates except candidate i.
Return type:
tuple[Tensor, Tensor]
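This entry ships without an example; the following minimal sketch, assuming a fitted model, shows the sequential-greedy pattern with two different acquisition functions.
>>> from botorch.acquisition import qExpectedImprovement, qUpperConfidenceBound
>>> qEI = qExpectedImprovement(model, best_f=0.2)  # assumed: fitted `model`
>>> qUCB = qUpperConfidenceBound(model, beta=0.1)
>>> bounds = torch.tensor([[0.], [1.]])
>>> # one candidate per acquisition function; earlier picks become X_pending
>>> candidates, acq_values = optimize_acqf_list(
>>>     [qEI, qUCB], bounds, num_restarts=20, raw_samples=512
>>> )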
botorch.optim.optimize.optimize_acqf_mixed(acq_function, bounds, q, num_restarts, fixed_features_list, raw_samples=None, options=None, inequality_constraints=None, equality_constraints=None,
nonlinear_inequality_constraints=None, post_processing_func=None, batch_initial_conditions=None, ic_generator=None, ic_gen_kwargs=None)[source]¶
Optimize over a list of fixed_features and return the best solution.
This is useful for optimizing over mixed continuous and discrete domains. For q > 1 this function always performs sequential greedy optimization (with proper conditioning on generated candidates).
☆ acq_function (AcquisitionFunction) – An AcquisitionFunction
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X (if inequality_constraints is provided, these bounds can be -inf and +inf, respectively).
☆ q (int) – The number of candidates.
☆ num_restarts (int) – Number of starting points for multistart acquisition function optimization.
☆ raw_samples (int | None) – Number of samples for initialization. This is required if batch_initial_conditions is not specified.
☆ fixed_features_list (list[dict[int, float]]) – A list of maps {feature_index: value}. The i-th item represents the fixed_feature for the i-th optimization.
☆ options (dict[str, bool | float | int | str] | None) – Options for candidate generation.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs
☆ nonlinear_inequality_constraints (list[tuple[Callable, bool]] | None) – A list of tuples representing the nonlinear inequality constraints. The first element in the tuple is a callable representing a constraint of the form callable(x) >= 0. In case of an intra-point constraint, callable() takes in a one-dimensional tensor of shape d and returns a scalar. In case of an inter-point constraint, callable() takes a two-dimensional tensor of shape q x d and again returns a scalar. The second element is a boolean, indicating if it is an intra-point or inter-point constraint (True for intra-point, False for inter-point). For more information on intra-point vs inter-point constraints, see the docstring of the inequality_constraints argument to optimize_acqf(). The constraints will later be passed to the scipy solver. You need to pass in batch_initial_conditions in this case. Using non-linear inequality constraints also requires that batch_limit is set to 1, which will be done automatically if not specified in options.
☆ post_processing_func (Callable[[Tensor], Tensor] | None) – A function that post-processes an optimization result appropriately (i.e., according to round-trip transformations).
☆ batch_initial_conditions (Tensor | None) – A tensor to specify the initial conditions. Set this if you do not want to use default initialization strategy.
☆ ic_generator (Callable[[qKnowledgeGradient, Tensor, int, int, int, dict[int, float] | None, dict[str, bool | float | int] | None, list[tuple[Tensor, Tensor, float]] | None, list[tuple[
Tensor, Tensor, float]] | None], Tensor | None] | None) – Function for generating initial conditions. Not needed when batch_initial_conditions are provided. Defaults to
gen_one_shot_kg_initial_conditions for qKnowledgeGradient acquisition functions and gen_batch_initial_conditions otherwise. Must be specified for nonlinear inequality constraints.
☆ ic_gen_kwargs (dict | None) – Additional keyword arguments passed to function specified by ic_generator
A two-element tuple containing
☆ a q x d-dim tensor of generated candidates.
☆ an associated acquisition value.
Return type:
tuple[Tensor, Tensor]
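A minimal sketch, assuming a fitted model on a 3-d domain whose last feature is binary; the acquisition function is optimized once per fixed-feature assignment and the best solution is returned.
>>> qEI = qExpectedImprovement(model, best_f=0.2)  # assumed: fitted `model`
>>> bounds = torch.tensor([[0., 0., 0.], [1., 1., 1.]])
>>> candidates, acq_value = optimize_acqf_mixed(
>>>     qEI, bounds, q=1, num_restarts=20, raw_samples=512,
>>>     fixed_features_list=[{2: 0.0}, {2: 1.0}],  # feature 2 is binary
>>> )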
botorch.optim.optimize.optimize_acqf_discrete(acq_function, q, choices, max_batch_size=2048, unique=True)[source]¶
Optimize over a discrete set of points using batch evaluation.
For q > 1 this function generates candidates by means of sequential conditioning (rather than joint optimization), since for all but the smallest number of choices, the set choices^q of discrete points to evaluate quickly explodes.
☆ acq_function (AcquisitionFunction) – An AcquisitionFunction.
☆ q (int) – The number of candidates.
☆ choices (Tensor) – A num_choices x d tensor of possible choices.
☆ max_batch_size (int) – The maximum number of choices to evaluate in batch. A large limit can cause excessive memory usage if the model has a large training set.
☆ unique (bool) – If True, return unique choices; otherwise choices may be repeated (only relevant if q > 1).
A two-element tuple containing
☆ a q x d-dim tensor of generated candidates.
☆ an associated acquisition value.
Return type:
tuple[Tensor, Tensor]
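A minimal sketch, assuming a fitted 2-d model; the random candidate pool is illustrative only.
>>> qEI = qExpectedImprovement(model, best_f=0.2)  # assumed: fitted `model`
>>> choices = torch.rand(100, 2)  # num_choices x d pool of allowed points
>>> candidates, acq_value = optimize_acqf_discrete(qEI, q=2, choices=choices)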
botorch.optim.optimize.optimize_acqf_discrete_local_search(acq_function, discrete_choices, q, num_restarts=20, raw_samples=4096, inequality_constraints=None, X_avoid=None, batch_initial_conditions=
None, max_batch_size=2048, unique=True)[source]¶
Optimize acquisition function over a lattice.
This is useful when d is large and enumeration of the search space isn’t possible. For q > 1 this function always performs sequential greedy optimization (with proper conditioning on generated candidates).
NOTE: While this method supports arbitrary lattices, it has only been thoroughly tested for {0, 1}^d. Consider it to be in alpha stage for the more general case.
☆ acq_function (AcquisitionFunction) – An AcquisitionFunction
☆ discrete_choices (list[Tensor]) – A list of possible discrete choices for each dimension. Each element in the list is expected to be a torch tensor.
☆ q (int) – The number of candidates.
☆ num_restarts (int) – Number of starting points for multistart acquisition function optimization.
☆ raw_samples (int) – Number of samples for initialization. This is required if batch_initial_conditions is not specified.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X
[indices[i]] * coefficients[i]) >= rhs
☆ X_avoid (Tensor | None) – An n x d tensor of candidates that we aren’t allowed to pick.
☆ batch_initial_conditions (Tensor | None) – A tensor of size n x 1 x d to specify the initial conditions. Set this if you do not want to use default initialization strategy.
☆ max_batch_size (int) – The maximum number of choices to evaluate in batch. A large limit can cause excessive memory usage if the model has a large training set.
☆ unique (bool) – If True, return unique choices; otherwise choices may be repeated (only relevant if q > 1).
A two-element tuple containing
☆ a q x d-dim tensor of generated candidates.
☆ an associated acquisition value.
Return type:
tuple[Tensor, Tensor]
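A minimal sketch over the binary lattice {0, 1}^d, the case for which the method is best tested; the fitted model is assumed.
>>> qEI = qExpectedImprovement(model, best_f=0.2)  # assumed: fitted `model`
>>> discrete_choices = [torch.tensor([0., 1.]) for _ in range(10)]  # d = 10
>>> candidates, acq_value = optimize_acqf_discrete_local_search(
>>>     qEI, discrete_choices, q=2
>>> )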
Model Fitting Optimization¶
Tools for model fitting.
botorch.optim.fit.fit_gpytorch_mll_scipy(mll, parameters=None, bounds=None, closure=None, closure_kwargs=None, method='L-BFGS-B', options=None, callback=None, timeout_sec=None)[source]¶
Generic scipy.optimize-based fitting routine for GPyTorch MLLs.
The model and likelihood in mll must already be in train mode.
☆ mll (MarginalLogLikelihood) – MarginalLogLikelihood to be maximized.
☆ parameters (dict[str, Tensor] | None) – Optional dictionary of parameters to be optimized. Defaults to all parameters of mll that require gradients.
☆ bounds (dict[str, tuple[float | None, float | None]] | None) – A dictionary of user-specified bounds for parameters. Used to update default parameter bounds obtained from mll.
☆ closure (Callable[[], tuple[Tensor, Sequence[Tensor | None]]] | None) – Callable that returns a tensor and an iterable of gradient tensors. Responsible for setting the grad attributes of
parameters. If no closure is provided, one will be obtained by calling get_loss_closure_with_grads.
☆ closure_kwargs (dict[str, Any] | None) – Keyword arguments passed to closure.
☆ method (str) – Solver type, passed along to scipy.minimize.
☆ options (dict[str, Any] | None) – Dictionary of solver options, passed along to scipy.minimize.
☆ callback (Callable[[dict[str, Tensor], OptimizationResult], None] | None) – Optional callback taking parameters and an OptimizationResult as its sole arguments.
☆ timeout_sec (float | None) – Timeout in seconds after which to terminate the fitting loop (note that timing out can result in bad fits!).
The final OptimizationResult.
Return type:
OptimizationResult
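A minimal fitting sketch; the training tensors train_X and train_Y are assumed, and the train-mode requirement above applies.
>>> from botorch.models import SingleTaskGP
>>> from gpytorch.mlls import ExactMarginalLogLikelihood
>>> model = SingleTaskGP(train_X, train_Y)  # assumed training data
>>> mll = ExactMarginalLogLikelihood(model.likelihood, model)
>>> mll.train()  # model and likelihood must be in train mode
>>> result = fit_gpytorch_mll_scipy(mll, options={"maxiter": 100})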
botorch.optim.fit.fit_gpytorch_mll_torch(mll, parameters=None, bounds=None, closure=None, closure_kwargs=None, step_limit=None, stopping_criterion=<class 'botorch.utils.types.DEFAULT'>, optimizer=
<class 'torch.optim.adam.Adam'>, scheduler=None, callback=None, timeout_sec=None)[source]¶
Generic torch.optim-based fitting routine for GPyTorch MLLs.
☆ mll (MarginalLogLikelihood) – MarginalLogLikelihood to be maximized.
☆ parameters (dict[str, Tensor] | None) – Optional dictionary of parameters to be optimized. Defaults to all parameters of mll that require gradients.
☆ bounds (dict[str, tuple[float | None, float | None]] | None) – A dictionary of user-specified bounds for parameters. Used to update default parameter bounds obtained from mll.
☆ closure (Callable[[], tuple[Tensor, Sequence[Tensor | None]]] | None) – Callable that returns a tensor and an iterable of gradient tensors. Responsible for setting the grad attributes of
parameters. If no closure is provided, one will be obtained by calling get_loss_closure_with_grads.
☆ closure_kwargs (dict[str, Any] | None) – Keyword arguments passed to closure.
☆ step_limit (int | None) – Optional upper bound on the number of optimization steps.
☆ stopping_criterion (Callable[[Tensor], bool] | None) – A StoppingCriterion for the optimization loop.
☆ optimizer (Optimizer | Callable[[...], Optimizer]) – A torch.optim.Optimizer instance or a factory that takes a list of parameters and returns an Optimizer instance.
☆ scheduler (_LRScheduler | Callable[[...], _LRScheduler] | None) – A torch.optim.lr_scheduler._LRScheduler instance or a factory that takes an Optimizer instance and returns an _LRScheduler instance.
☆ callback (Callable[[dict[str, Tensor], OptimizationResult], None] | None) – Optional callback taking parameters and an OptimizationResult as its sole arguments.
☆ timeout_sec (float | None) – Timeout in seconds after which to terminate the fitting loop (note that timing out can result in bad fits!).
The final OptimizationResult.
Return type:
OptimizationResult
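A minimal sketch reusing the mll from the previous example (still in train mode), this time driven by a torch optimizer factory.
>>> import torch
>>> result = fit_gpytorch_mll_torch(
>>>     mll, step_limit=500, optimizer=torch.optim.Adam
>>> )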
Initialization Helpers¶
R. G. Regis, C. A. Shoemaker. Combining radial basis function surrogates and dynamic coordinate search in high-dimensional expensive black-box optimization, Engineering Optimization, 2013.
botorch.optim.initializers.transform_constraints(constraints, q, d)[source]¶
Transform constraints to sample from a d*q-dimensional space instead of a d-dimensional space.
This function assumes that constraints are the same for each input batch, and broadcasts the constraints accordingly to the input batch shape.
☆ constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an (in-)equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) (>)= rhs. If indices is a 2-d Tensor, this supports specifying constraints across the points in the q-batch (inter-point constraints). If None, this function is a no-op and simply returns None.
☆ q (int) – Size of the q-batch.
☆ d (int) – Dimensionality of the problem.
List of transformed constraints.
Return type:
List[Tuple[Tensor, Tensor, float]]
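A minimal sketch of the flattening, assuming an intra-point constraint x_0 + x_1 >= 1 on a problem with d=3 and q=2; each point of the q-batch receives an index-shifted copy of the constraint.
>>> import torch
>>> from botorch.optim.initializers import transform_constraints
>>> ineq = [(torch.tensor([0, 1]), torch.tensor([1., 1.]), 1.0)]
>>> transformed = transform_constraints(ineq, q=2, d=3)
>>> # two constraints on the flattened q*d space: indices [0, 1] and [3, 4]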
botorch.optim.initializers.transform_intra_point_constraint(constraint, d, q)[source]¶
Transforms an intra-point/pointwise constraint from a d-dimensional space to a d*q-dimensional space.
☆ constraint (tuple[Tensor, Tensor, float]) – A tuple (indices, coefficients, rhs) encoding an (in-)equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) (>)= rhs. Here indices must be one-dimensional, and the constraint is applied to all points within the q-batch.
☆ d (int) – Dimensionality of the problem.
☆ q (int) – Size of the q-batch.
Raises:
ValueError – If indices in the constraint are larger than the dimensionality d of the problem.
List of transformed constraints.
Return type:
List[Tuple[Tensor, Tensor, float]]
botorch.optim.initializers.transform_inter_point_constraint(constraint, d)[source]¶
Transforms an inter-point constraint from a d-dimensional space to a d*q-dimensional space.
☆ constraint (tuple[Tensor, Tensor, float]) – A tuple (indices, coefficients, rhs) encoding an (in-)equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) (>)= rhs. indices must be a 2-d Tensor, where in each row indices[i] = (k_i, l_i) the first index k_i corresponds to the k_i-th element of the q-batch and the second index l_i corresponds to the l_i-th feature of that element.
☆ d (int) – Dimensionality of the problem.
Raises:
ValueError – If indices in the constraint are larger than the dimensionality d of the problem.
Transformed constraint.
Return type:
Tuple[Tensor, Tensor, float]
botorch.optim.initializers.sample_q_batches_from_polytope(n, q, bounds, n_burnin, n_thinning, seed, inequality_constraints=None, equality_constraints=None)[source]¶
Samples n q-batches from a polytope of dimension d.
☆ n (int) – Number of q-batches to sample.
☆ q (int) – Number of samples per q-batch
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
☆ n_burnin (int) – The number of burn-in samples for the Markov chain sampler.
☆ n_thinning (int) – The amount of thinning. The sampler will return every n_thinning sample (after burn-in).
☆ seed (int) – The random seed.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X
[indices[i]] * coefficients[i]) >= rhs.
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X
[indices[i]] * coefficients[i]) = rhs.
A n x q x d-dim tensor of samples.
Return type:
Tensor
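A minimal sketch, drawing q-batches from the triangle x_0 + x_1 <= 1 (rewritten as -x_0 - x_1 >= -1 to match the >= convention above).
>>> bounds = torch.tensor([[0., 0.], [1., 1.]])
>>> ineq = [(torch.tensor([0, 1]), torch.tensor([-1., -1.]), -1.0)]
>>> samples = sample_q_batches_from_polytope(
>>>     n=10, q=2, bounds=bounds, n_burnin=100, n_thinning=10, seed=0,
>>>     inequality_constraints=ineq,
>>> )  # n x q x d, i.e. 10 x 2 x 2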
botorch.optim.initializers.gen_batch_initial_conditions(acq_function, bounds, q, num_restarts, raw_samples, fixed_features=None, options=None, inequality_constraints=None, equality_constraints=None,
generator=None, fixed_X_fantasies=None)[source]¶
Generate a batch of initial conditions for random-restart optimization.
TODO: Support t-batches of initial conditions.
☆ acq_function (AcquisitionFunction) – The acquisition function to be optimized.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
☆ q (int) – The number of candidates to consider.
☆ num_restarts (int) – The number of starting points for multistart acquisition function optimization.
☆ raw_samples (int) – The number of raw samples to consider in the initialization heuristic. Note: if sample_around_best is True (the default is False), then 2 * raw_samples samples are used instead of raw_samples.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ options (dict[str, bool | float | int] | None) – Options for initial condition generation. For valid options see initialize_q_batch and initialize_q_batch_nonneg. If options contains a
nonnegative=True entry, then acq_function is assumed to be non-negative (useful when using custom acquisition functions). In addition, an “init_batch_limit” option can be passed to
specify the batch limit for the initialization. This is useful for avoiding memory limits when computing the batch posterior over raw samples.
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
☆ generator (Callable[[int, int, int | None], Tensor] | None) – Callable for generating samples that are then further processed. It receives n, q and seed as arguments and returns a tensor
of shape n x q x d.
☆ fixed_X_fantasies (Tensor | None) – A fixed set of fantasy points to concatenate to the q candidates being initialized along the -2 dimension. The shape should be num_pseudo_points x d.
E.g., this should be num_fantasies x d for KG and num_fantasies*num_pareto x d for HVKG.
A num_restarts x q x d tensor of initial conditions.
Return type:
Tensor
>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> bounds = torch.tensor([[0.], [1.]])
>>> Xinit = gen_batch_initial_conditions(
>>> qEI, bounds, q=3, num_restarts=25, raw_samples=500
>>> )
botorch.optim.initializers.gen_one_shot_kg_initial_conditions(acq_function, bounds, q, num_restarts, raw_samples, fixed_features=None, options=None, inequality_constraints=None, equality_constraints=None)[source]¶
Generate a batch of smart initializations for qKnowledgeGradient.
This function generates initial conditions for optimizing one-shot KG using the maximizer of the posterior objective. Intuitively, the maximizer of the fantasized posterior will often be close to
a maximizer of the current posterior. This function uses that fact to generate the initial conditions for the fantasy points. Specifically, a fraction of 1 - frac_random (see options) is
generated by sampling from the set of maximizers of the posterior objective (obtained via random restart optimization) according to a softmax transformation of their respective values. This means
that this initialization strategy internally solves an acquisition function maximization problem. The remaining frac_random fantasy points as well as all q candidate points are chosen according
to the standard initialization strategy in gen_batch_initial_conditions.
☆ acq_function (qKnowledgeGradient) – The qKnowledgeGradient instance to be optimized.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of task features.
☆ q (int) – The number of candidates to consider.
☆ num_restarts (int) – The number of starting points for multistart acquisition function optimization.
☆ raw_samples (int) – The number of raw samples to consider in the initialization heuristic.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ options (dict[str, bool | float | int] | None) – Options for initial condition generation. These contain all settings for the standard heuristic initialization from
gen_batch_initial_conditions. In addition, they contain frac_random (the fraction of fully random fantasy points), num_inner_restarts and raw_inner_samples (the number of random restarts
and raw samples for solving the posterior objective maximization problem, respectively) and eta (temperature parameter for sampling heuristic from posterior objective maximizers).
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
A num_restarts x q’ x d tensor that can be used as initial conditions for optimize_acqf(). Here q’ = q + num_fantasies is the total number of points (candidate points plus fantasy points).
Return type:
Tensor | None
>>> qKG = qKnowledgeGradient(model, num_fantasies=64)
>>> bounds = torch.tensor([[0., 0.], [1., 1.]])
>>> Xinit = gen_one_shot_kg_initial_conditions(
>>> qKG, bounds, q=3, num_restarts=10, raw_samples=512,
>>> options={"frac_random": 0.25},
>>> )
botorch.optim.initializers.gen_one_shot_hvkg_initial_conditions(acq_function, bounds, q, num_restarts, raw_samples, fixed_features=None, options=None, inequality_constraints=None, equality_constraints=None)[source]¶
Generate a batch of smart initializations for qHypervolumeKnowledgeGradient.
This function generates initial conditions for optimizing one-shot HVKG using the hypervolume maximizing set (of fixed size) under the posterior mean. Intuitively, the hypervolume maximizing set of the fantasized posterior mean will often be close to a hypervolume maximizing set under the current posterior mean. This function uses that fact to generate the initial conditions for the fantasy points. Specifically, a fraction of 1 - frac_random (see options) of the restarts are generated by learning the hypervolume maximizing sets under the current posterior mean, where each hypervolume maximizing set is obtained from maximizing the hypervolume from a different starting point. Given a hypervolume maximizing set, the q candidate points are selected according to the standard initialization strategy in gen_batch_initial_conditions, with the fixed hypervolume maximizing set. For the remaining frac_random restarts, the fantasy points as well as all q candidate points are chosen according to the standard initialization strategy in gen_batch_initial_conditions.
☆ acq_function (qHypervolumeKnowledgeGradient) – The qHypervolumeKnowledgeGradient instance to be optimized.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of task features.
☆ q (int) – The number of candidates to consider.
☆ num_restarts (int) – The number of starting points for multistart acquisition function optimization.
☆ raw_samples (int) – The number of raw samples to consider in the initialization heuristic.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ options (dict[str, bool | float | int] | None) – Options for initial condition generation. These contain all settings for the standard heuristic initialization from
gen_batch_initial_conditions. In addition, they contain frac_random (the fraction of fully random fantasy points), num_inner_restarts and raw_inner_samples (the number of random restarts
and raw samples for solving the posterior objective maximization problem, respectively) and eta (temperature parameter for sampling heuristic from posterior objective maximizers).
☆ inequality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an inequality constraint of the form sum_i (X[indices[i]] * coefficients[i]) >= rhs.
☆ equality_constraints (list[tuple[Tensor, Tensor, float]] | None) – A list of tuples (indices, coefficients, rhs), with each tuple encoding an equality constraint of the form sum_i (X[indices[i]] * coefficients[i]) = rhs.
A num_restarts x q’ x d tensor that can be used as initial conditions for optimize_acqf(). Here q’ = q + num_fantasies is the total number of points (candidate points plus fantasy points).
Return type:
Tensor | None
>>> qHVKG = qHypervolumeKnowledgeGradient(model, ref_point)
>>> bounds = torch.tensor([[0., 0.], [1., 1.]])
>>> Xinit = gen_one_shot_hvkg_initial_conditions(
>>> qHVKG, bounds, q=3, num_restarts=10, raw_samples=512,
>>> options={"frac_random": 0.25},
>>> )
botorch.optim.initializers.gen_value_function_initial_conditions(acq_function, bounds, num_restarts, raw_samples, current_model, fixed_features=None, options=None)[source]¶
Generate a batch of smart initializations for optimizing the value function of qKnowledgeGradient.
This function generates initial conditions for optimizing the inner problem of KG, i.e. its value function, using the maximizer of the posterior objective. Intuitively, the maximizer of the fantasized posterior will often be close to a maximizer of the current posterior. This function uses that fact to generate the initial conditions for the fantasy points. Specifically, a fraction
of 1 - frac_random (see options) of raw samples is generated by sampling from the set of maximizers of the posterior objective (obtained via random restart optimization) according to a softmax
transformation of their respective values. This means that this initialization strategy internally solves an acquisition function maximization problem. The remaining raw samples are generated
using draw_sobol_samples. All raw samples are then evaluated, and the initial conditions are selected according to the standard initialization strategy in ‘initialize_q_batch’ individually for
each inner problem.
☆ acq_function (AcquisitionFunction) – The value function instance to be optimized.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of task features.
☆ num_restarts (int) – The number of starting points for multistart acquisition function optimization.
☆ raw_samples (int) – The number of raw samples to consider in the initialization heuristic.
☆ current_model (Model) – The model of the KG acquisition function that was used to generate the fantasy model of the value function.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ options (dict[str, bool | float | int] | None) – Options for initial condition generation. These contain all settings for the standard heuristic initialization from
gen_batch_initial_conditions. In addition, they contain frac_random (the fraction of fully random fantasy points), num_inner_restarts and raw_inner_samples (the number of random restarts
and raw samples for solving the posterior objective maximization problem, respectively) and eta (temperature parameter for sampling heuristic from posterior objective maximizers).
A num_restarts x batch_shape x q x d tensor that can be used as initial conditions for optimize_acqf(). Here batch_shape is the batch shape of the value function model.
Return type:
Tensor
>>> fant_X = torch.rand(5, 1, 2)
>>> fantasy_model = model.fantasize(fant_X, SobolQMCNormalSampler(16))
>>> value_function = PosteriorMean(fantasy_model)
>>> bounds = torch.tensor([[0., 0.], [1., 1.]])
>>> Xinit = gen_value_function_initial_conditions(
>>> value_function, bounds, num_restarts=10, raw_samples=512,
>>> options={"frac_random": 0.25},
>>> )
botorch.optim.initializers.initialize_q_batch(X, Y, n, eta=1.0)[source]¶
Heuristic for selecting initial conditions for candidate generation.
This heuristic selects points from X (without replacement) with probability proportional to exp(eta * Z), where Z = (Y - mean(Y)) / std(Y) and eta is a temperature parameter.
When using an acquisition function that is non-negative and possibly zero over large areas of the feature space (e.g. qEI), you should use initialize_q_batch_nonneg instead.
☆ X (Tensor) – A b x batch_shape x q x d tensor of b - batch_shape samples of q-batches from a d-dim feature space. Typically, these are generated using qMC sampling.
☆ Y (Tensor) – A tensor of b x batch_shape outcomes associated with the samples. Typically, this is the value of the batch acquisition function to be maximized.
☆ n (int) – The number of initial conditions to be generated. Must be less than b.
☆ eta (float) – Temperature parameter for weighting samples.
A n x batch_shape x q x d tensor of n - batch_shape q-batch initial conditions, where each batch of n x q x d samples is selected independently.
Return type:
Tensor
>>> # To get `n=10` starting points of q-batch size `q=3`
>>> # for model with `d=6`:
>>> qUCB = qUpperConfidenceBound(model, beta=0.1)
>>> Xrnd = torch.rand(500, 3, 6)
>>> Xinit = initialize_q_batch(Xrnd, qUCB(Xrnd), 10)
botorch.optim.initializers.initialize_q_batch_nonneg(X, Y, n, eta=1.0, alpha=0.0001)[source]¶
Heuristic for selecting initial conditions for non-neg. acquisition functions.
This function is similar to initialize_q_batch, but designed specifically for acquisition functions that are non-negative and possibly zero over large areas of the feature space (e.g. qEI). All
samples for which Y < alpha * max(Y) will be ignored (assuming that Y contains at least one positive value).
☆ X (Tensor) – A b x q x d tensor of b samples of q-batches from a d-dim. feature space. Typically, these are generated using qMC.
☆ Y (Tensor) – A tensor of b outcomes associated with the samples. Typically, this is the value of the batch acquisition function to be maximized.
☆ n (int) – The number of initial conditions to be generated. Must be less than b.
☆ eta (float) – Temperature parameter for weighting samples.
☆ alpha (float) – The threshold (as a fraction of the maximum observed value) under which to ignore samples. All input samples for which Y < alpha * max(Y) will be ignored.
A n x q x d tensor of n q-batch initial conditions.
Return type:
Tensor
>>> # To get `n=10` starting points of q-batch size `q=3`
>>> # for model with `d=6`:
>>> qEI = qExpectedImprovement(model, best_f=0.2)
>>> Xrnd = torch.rand(500, 3, 6)
>>> Xinit = initialize_q_batch_nonneg(Xrnd, qEI(Xrnd), 10)
botorch.optim.initializers.sample_points_around_best(acq_function, n_discrete_points, sigma, bounds, best_pct=5.0, subset_sigma=0.1, prob_perturb=None)[source]¶
Find best points and sample nearby points.
☆ acq_function (AcquisitionFunction) – The acquisition function.
☆ n_discrete_points (int) – The number of points to sample.
☆ sigma (float) – The standard deviation of the additive gaussian noise for perturbing the best points.
☆ bounds (Tensor) – A 2 x d-dim tensor containing the bounds.
☆ best_pct (float) – The percentage of best points to perturb.
☆ subset_sigma (float) – The standard deviation of the additive gaussian noise for perturbing a subset of dimensions of the best points.
☆ prob_perturb (float | None) – The probability of perturbing each dimension.
An optional n_discrete_points x d-dim tensor containing the sampled points. This is None if no baseline points are found.
Return type:
Tensor | None
botorch.optim.initializers.sample_truncated_normal_perturbations(X, n_discrete_points, sigma, bounds, qmc=True)[source]¶
Sample points around X.
Sample perturbed points around X such that the added perturbations are sampled from N(0, sigma^2 I) and truncated to be within [0,1]^d.
☆ X (Tensor) – A n x d-dim tensor starting points.
☆ n_discrete_points (int) – The number of points to sample.
☆ sigma (float) – The standard deviation of the additive gaussian noise for perturbing the points.
☆ bounds (Tensor) – A 2 x d-dim tensor containing the bounds.
☆ qmc (bool) – A boolean indicating whether to use qmc.
A n_discrete_points x d-dim tensor containing the sampled points.
Return type:
Tensor
botorch.optim.initializers.sample_perturbed_subset_dims(X, bounds, n_discrete_points, sigma=0.1, qmc=True, prob_perturb=None)[source]¶
Sample around X by perturbing a subset of the dimensions.
By default, dimensions are perturbed with probability equal to min(20 / d, 1). As shown in [Regis], perturbing a small number of dimensions can be beneficial. The perturbations are sampled from N(0, sigma^2 I) and truncated to be within [0,1]^d.
☆ X (Tensor) – A n x d-dim tensor starting points. X must be normalized to be within [0, 1]^d.
☆ bounds (Tensor) – The bounds to sample perturbed values from
☆ n_discrete_points (int) – The number of points to sample.
☆ sigma (float) – The standard deviation of the additive gaussian noise for perturbing the points.
☆ qmc (bool) – A boolean indicating whether to use qmc.
☆ prob_perturb (float | None) – The probability of perturbing each dimension. If omitted, defaults to min(20 / d, 1).
A n_discrete_points x d-dim tensor containing the sampled points.
Return type:
Tensor
botorch.optim.initializers.is_nonnegative(acq_function)[source]¶
Determine whether a given acquisition function is non-negative.
acq_function (AcquisitionFunction) – The AcquisitionFunction instance.
True if acq_function is non-negative, False if not, or if the behavior is unknown (for custom acquisition functions).
Return type:
bool
>>> qEI = qExpectedImprovement(model, best_f=0.1)
>>> is_nonnegative(qEI) # returns True
Stopping Criteria¶
class botorch.optim.stopping.StoppingCriterion[source]¶
Bases: ABC
Base class for evaluating optimization convergence.
Stopping criteria are implemented as objects rather than as functions, so that they can keep track of past function values between optimization steps.
abstract evaluate(fvals)[source]¶
Evaluate the stopping criterion.
fvals (Tensor) – tensor containing function values for the current iteration. If fvals contains more than one element, then the stopping criterion is evaluated element-wise and True is
returned if the stopping criterion is true for all elements.
Stopping indicator (if True, stop the optimization).
Return type:
bool
class botorch.optim.stopping.ExpMAStoppingCriterion(maxiter=10000, minimize=True, n_window=10, eta=1.0, rel_tol=1e-05)[source]¶
Bases: StoppingCriterion
Exponential moving average stopping criterion.
Computes an exponentially weighted moving average over window length n_window and checks whether the relative decrease in this moving average between steps is less than a provided tolerance level. That is, in iteration i, it computes
v[i, j] := fvals[i - n_window + j] * w[j]
for all j = 0, …, n_window, where w[j] = exp(-eta * (1 - j / n_window)). Letting ma[i] := sum_j(v[i, j]), the criterion evaluates to True whenever
(ma[i-1] - ma[i]) / abs(ma[i-1]) < rel_tol (if minimize=True)
(ma[i] - ma[i-1]) / abs(ma[i-1]) < rel_tol (if minimize=False)
☆ maxiter (int) – Maximum number of iterations.
☆ minimize (bool) – If True, assume minimization.
☆ n_window (int) – The size of the exponential moving average window.
☆ eta (float) – The exponential decay factor in the weights.
☆ rel_tol (float) – Relative tolerance for termination.
evaluate(fvals)[source]¶
Evaluate the stopping criterion.
fvals (Tensor) – tensor containing function values for the current iteration. If fvals contains more than one element, then the stopping criterion is evaluated element-wise and True is returned if the stopping criterion is true for all elements.
TODO: add support for utilizing gradient information.
Stopping indicator (if True, stop the optimization).
Return type:
bool
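A minimal usage sketch inside a hand-rolled optimization loop; the loss closure is an assumed stand-in.
>>> stopping_criterion = ExpMAStoppingCriterion(maxiter=1000, rel_tol=1e-4)
>>> for step in range(1000):
>>>     loss = closure()  # assumed: returns a scalar loss tensor
>>>     if stopping_criterion.evaluate(fvals=loss.detach()):
>>>         break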
Acquisition Function Optimization with Homotopy¶
botorch.optim.optimize_homotopy.prune_candidates(candidates, acq_values, prune_tolerance)[source]¶
Prune candidates based on their distance to other candidates.
☆ candidates (Tensor) – An n x d tensor of candidates.
☆ acq_values (Tensor) – An n tensor of candidate values.
☆ prune_tolerance (float) – The minimum distance to prune candidates.
An m x d tensor of pruned candidates.
Return type:
Tensor
botorch.optim.optimize_homotopy.optimize_acqf_homotopy(acq_function, bounds, q, homotopy, num_restarts, raw_samples=None, fixed_features=None, options=None, final_options=None,
batch_initial_conditions=None, post_processing_func=None, prune_tolerance=0.0001)[source]¶
Generate a set of candidates via multi-start optimization.
☆ acq_function (AcquisitionFunction) – An AcquisitionFunction.
☆ bounds (Tensor) – A 2 x d tensor of lower and upper bounds for each column of X.
☆ q (int) – The number of candidates.
☆ homotopy (Homotopy) – Homotopy object that will make the necessary modifications to the problem when calling step().
☆ num_restarts (int) – The number of starting points for multistart acquisition function optimization.
☆ raw_samples (int | None) – The number of samples for initialization. This is required if batch_initial_conditions is not specified.
☆ fixed_features (dict[int, float] | None) – A map {feature_index: value} for features that should be fixed to a particular value during generation.
☆ options (dict[str, bool | float | int | str] | None) – Options for candidate generation.
☆ final_options (dict[str, bool | float | int | str] | None) – Options for candidate generation in the last homotopy step.
☆ batch_initial_conditions (Tensor | None) – A tensor to specify the initial conditions. Set this if you do not want to use default initialization strategy.
☆ post_processing_func (Callable[[Tensor], Tensor] | None) – Post processing function (such as rounding or clamping) that is applied before choosing the final candidate.
☆ prune_tolerance (float) – The minimum distance used to prune candidates.
Return type:
tuple[Tensor, Tensor]
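A minimal sketch, assuming a fitted model and a tensor attribute module.weight to anneal; the schedule and Homotopy classes used here are documented under Homotopy Utilities below.
>>> from botorch.optim.homotopy import (
>>>     Homotopy, HomotopyParameter, LogLinearHomotopySchedule,
>>> )
>>> schedule = LogLinearHomotopySchedule(start=0.1, end=1e-3, num_steps=5)
>>> homotopy = Homotopy(
>>>     homotopy_parameters=[
>>>         HomotopyParameter(parameter=module.weight, schedule=schedule)
>>>     ]
>>> )
>>> candidates, acq_value = optimize_acqf_homotopy(
>>>     qEI, bounds, q=1, homotopy=homotopy, num_restarts=20, raw_samples=512
>>> )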
Core methods for building closures in torch and interfacing with numpy.
class botorch.optim.closures.core.ForwardBackwardClosure(forward, parameters, backward=<function Tensor.backward>, reducer=<built-in method sum of type object>, callback=None, context_manager=None)
Bases: object
Wrapper for fused forward and backward closures.
Initializes a ForwardBackwardClosure instance.
☆ forward (Callable[[], Tensor]) – Callable that returns a tensor.
☆ parameters (dict[str, Tensor]) – A dictionary of tensors whose grad fields are to be returned.
☆ backward (Callable[[Tensor], None]) – Callable that takes the (reduced) output of forward and sets the grad attributes of tensors in parameters.
☆ reducer (Optional[Callable[[Tensor], Tensor]]) – Optional callable used to reduce the output of the forward pass.
☆ callback (Optional[Callable[[Tensor, Sequence[Optional[Tensor]]], None]]) – Optional callable that takes the reduced output of forward and the gradients of parameters as positional arguments.
☆ context_manager (Callable) – A ContextManager used to wrap each forward-backward call. When passed as None, context_manager defaults to a zero_grad_ctx that zeroes the gradients of
parameters upon entry.
class botorch.optim.closures.core.NdarrayOptimizationClosure(closure, parameters, as_array=None, as_tensor=<built-in method as_tensor of type object>, get_state=None, set_state=None, fill_value=0.0, persistent=True)[source]¶
Bases: object
Adds stateful behavior and a numpy.ndarray-typed API to a closure with an expected return type Tuple[Tensor, Union[Tensor, Sequence[Optional[Tensor]]]].
Initializes a NdarrayOptimizationClosure instance.
☆ closure (Callable[[], tuple[Tensor, Sequence[Optional[Tensor]]]]) – A ForwardBackwardClosure instance.
☆ parameters (dict[str, Tensor]) – A dictionary of tensors representing the closure’s state. Expected to correspond with the first len(parameters) optional gradient tensors returned by closure.
☆ as_array (Callable[[Tensor], ndarray]) – Callable used to convert tensors to ndarrays.
☆ as_tensor (Callable[[ndarray], Tensor]) – Callable used to convert ndarrays to tensors.
☆ get_state (Callable[[], ndarray]) – Callable that returns the closure’s state as an ndarray. When passed as None, defaults to calling get_tensors_as_ndarray_1d on closure.parameters while
passing as_array (if given by the user).
☆ set_state (Callable[[ndarray], None]) – Callable that takes a 1-dimensional ndarray and sets the closure’s state. When passed as None, set_state defaults to calling
set_tensors_from_ndarray_1d with closure.parameters and a given ndarray while passing as_tensor.
☆ fill_value (float) – Fill value for parameters whose gradients are None. In most cases, fill_value should either be zero or NaN.
☆ persistent (bool) – Boolean specifying whether an ndarray should be retained as a persistent buffer for gradients.
property state: ndarray¶
Model Fitting Closures¶
Utilities for building model-based closures.
botorch.optim.closures.model_closures.get_loss_closure(mll, data_loader=None, **kwargs)[source]¶
Public API for GetLossClosure dispatcher.
This method, and the dispatcher that powers it, acts as a clearing house for factory functions that define how mll is evaluated.
Users may specify custom evaluation routines by registering a factory function with GetLossClosure. These factories should be registered using the type signature
Type[MarginalLogLikelihood], Type[Likelihood], Type[Model], Type[DataLoader].
The final argument, Type[DataLoader], is optional. Evaluation routines that obtain training data from, e.g., mll.model should register this argument as type(None).
☆ mll (MarginalLogLikelihood) – A MarginalLogLikelihood instance whose negative defines the loss.
☆ data_loader (DataLoader | None) – An optional DataLoader instance for cases where training data is passed in rather than obtained from mll.model.
☆ kwargs (Any)
A closure that takes zero positional arguments and returns the negated value of mll.
Return type:
Callable[[], Tensor]
botorch.optim.closures.model_closures.get_loss_closure_with_grads(mll, parameters, data_loader=None, backward=<function Tensor.backward>, reducer=<method 'sum' of 'torch._C.TensorBase' objects>,
context_manager=None, **kwargs)[source]¶
Public API for GetLossClosureWithGrads dispatcher.
In most cases, this method simply adds a backward pass to a loss closure obtained by calling get_loss_closure. For further details, see get_loss_closure.
☆ mll (MarginalLogLikelihood) – A MarginalLogLikelihood instance whose negative defines the loss.
☆ parameters (dict[str, Tensor]) – A dictionary of tensors whose grad fields are to be returned.
☆ reducer (Callable[[Tensor], Tensor] | None) – Optional callable used to reduce the output of the forward pass.
☆ data_loader (DataLoader | None) – An optional DataLoader instance for cases where training data is passed in rather than obtained from mll.model.
☆ context_manager (Callable | None) – An optional ContextManager used to wrap each forward-backward pass. Defaults to a zero_grad_ctx that zeroes the gradients of parameters upon entry.
None may be passed as an alias for nullcontext.
☆ backward (Callable[[Tensor], None])
☆ kwargs (Any)
A closure that takes zero positional arguments and returns the reduced and negated value of mll along with the gradients of parameters.
Return type:
Callable[[], tuple[Tensor, tuple[Tensor, …]]]
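A minimal sketch, assuming mll is in train mode; get_parameters is the helper documented under Model Fitting Utilities below.
>>> from botorch.optim.utils import get_parameters
>>> parameters = get_parameters(mll, requires_grad=True)
>>> closure = get_loss_closure_with_grads(mll, parameters)
>>> loss, grads = closure()  # negated mll value plus parameter gradients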
General Optimization Utilities¶
General-purpose optimization utilities.
Acquisition Optimization Utilities¶
Utilities for maximizing acquisition functions.
botorch.optim.utils.acquisition_utils.columnwise_clamp(X, lower=None, upper=None, raise_on_violation=False)[source]¶
Clamp values of a Tensor in column-wise fashion (with support for t-batches).
This function is useful in conjunction with optimizers from the torch.optim package, which don’t natively handle constraints. If you apply this after a gradient step you can be fancy and call it “projected gradient descent”. This function is also useful for post-processing candidates generated by the scipy optimizer that satisfy bounds only up to numerical accuracy.
☆ X (Tensor) – The b x n x d input tensor. If 2-dimensional, b is assumed to be 1.
☆ lower (float | Tensor | None) – The column-wise lower bounds. If scalar, apply bound to all columns.
☆ upper (float | Tensor | None) – The column-wise upper bounds. If scalar, apply bound to all columns.
☆ raise_on_violation (bool) – If True, raise an exception when the elements in X are out of the specified bounds (up to numerical accuracy). This is useful for post-processing candidates generated by optimizers that satisfy imposed bounds only up to numerical accuracy.
The clamped tensor.
Return type:
Tensor
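A minimal sketch of the projection pattern mentioned above.
>>> X = torch.tensor([[1.2, -0.1], [0.5, 0.7]])
>>> X_proj = columnwise_clamp(X, lower=0.0, upper=1.0)  # values clipped to [0, 1]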
botorch.optim.utils.acquisition_utils.fix_features(X, fixed_features=None)[source]¶
Fix feature values in a Tensor.
The fixed features will have zero gradient in downstream calculations.
☆ X (Tensor) – input Tensor with shape … x p, where p is the number of features
☆ fixed_features (dict[int, float | None] | None) – A dictionary with keys as column indices and values equal to what the feature should be set to in X. If the value is None, that column is
just considered fixed. Keys should be in the range [0, p - 1].
The tensor X with fixed features.
Return type:
Tensor
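A minimal sketch; the pinned column receives zero gradient downstream.
>>> X = torch.rand(4, 3, requires_grad=True)
>>> X_fixed = fix_features(X, fixed_features={1: 0.5})  # column 1 pinned to 0.5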
botorch.optim.utils.acquisition_utils.get_X_baseline(acq_function)[source]¶
Extract X_baseline from an acquisition function.
This tries to find the baseline set of points. First, this checks if the acquisition function has an X_baseline attribute. If it does not, then this method attempts to use the model’s
train_inputs as X_baseline.
acq_function (AcquisitionFunction) – The acquisition function.
An optional n x d-dim tensor of baseline points. This is None if no baseline points are found.
Return type:
Tensor | None
Model Fitting Utilities¶
Utilities for fitting and manipulating models.
class botorch.optim.utils.model_utils.TorchAttr(shape, dtype, device)[source]¶
Bases: NamedTuple
Create new instance of TorchAttr(shape, dtype, device)
☆ shape (Size)
☆ dtype (dtype)
☆ device (device)
shape: Size¶
Alias for field number 0
dtype: dtype¶
Alias for field number 1
device: device¶
Alias for field number 2
botorch.optim.utils.model_utils.get_data_loader(model, batch_size=1024, **kwargs)[source]¶
Return type:
DataLoader
botorch.optim.utils.model_utils.get_parameters(module, requires_grad=None, name_filter=None)[source]¶
Helper method for obtaining a module’s parameters and their respective ranges.
☆ module (Module) – The target module from which parameters are to be extracted.
☆ requires_grad (bool | None) – Optional Boolean used to filter parameters based on whether or not their requires_grad attribute matches the user-provided value.
☆ name_filter (Callable[[str], bool] | None) – Optional Boolean function used to filter parameters by name.
A dictionary of parameters.
Return type:
dict[str, Tensor]
botorch.optim.utils.model_utils.get_parameters_and_bounds(module, requires_grad=None, name_filter=None, default_bounds=(-inf, inf))[source]¶
Helper method for obtaining a module’s parameters and their respective ranges.
☆ module (Module) – The target module from which parameters are to be extracted.
☆ name_filter (Callable[[str], bool] | None) – Optional Boolean function used to filter parameters by name.
☆ requires_grad (bool | None) – Optional Boolean used to filter parameters based on whether or not their requires_grad attribute matches the user-provided value.
☆ default_bounds (tuple[float, float]) – Default lower and upper bounds for constrained parameters with None typed bounds.
A dictionary of parameters and a dictionary of parameter bounds.
Return type:
tuple[dict[str, Tensor], dict[str, tuple[float | None, float | None]]]
botorch.optim.utils.model_utils.get_name_filter(patterns)[source]¶
Returns a binary function that filters strings (or iterables whose first element is a string) according to a bank of excluded patterns. Typically used in conjunction with generators.
patterns (Iterator[Pattern | str]) – A collection of regular expressions or strings that define the set of names to be excluded.
A binary function indicating whether or not an item should be filtered.
Return type:
Callable[[str | tuple[str, Any, …]], bool]
botorch.optim.utils.model_utils.sample_all_priors(model, max_retries=100)[source]¶
Sample from hyperparameter priors (in-place).
Return type:
None
Numpy - Torch Conversion Tools¶
Utilities for interfacing Numpy and Torch.
Optimization with Timeouts¶
botorch.optim.utils.timeout.minimize_with_timeout(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, timeout_sec=None)
Wrapper around scipy.optimize.minimize to support timeout.
This method calls scipy.optimize.minimize with all arguments forwarded verbatim. The only difference is that if provided a timeout_sec argument, it will automatically stop the optimization after the timeout is reached.
Internally, this is achieved by automatically constructing a wrapper callback method that is injected to the scipy.optimize.minimize call and that keeps track of the runtime and the optimization
variables at the current iteration.
☆ fun (Callable[[ndarray, ...], float])
☆ x0 (ndarray)
☆ args (tuple[Any, ...])
☆ method (str | None)
☆ jac (str | Callable | bool | None)
☆ hess (str | Callable | HessianUpdateStrategy | None)
☆ hessp (Callable | None)
☆ bounds (Sequence[tuple[float, float]] | Bounds | None)
☆ tol (float | None)
☆ callback (Callable | None)
☆ options (dict[str, Any] | None)
☆ timeout_sec (float | None)
Return type:
OptimizeResult
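A hedged sketch of a budgeted quadratic minimization (the result fields follow scipy.optimize conventions):

import numpy as np
from botorch.optim.utils.timeout import minimize_with_timeout

res = minimize_with_timeout(
    fun=lambda x: float(np.sum(x ** 2)),
    x0=np.array([1.0, -2.0]),
    method="L-BFGS-B",
    timeout_sec=5.0,   # stop early once the time budget is exhausted
)
print(res.x)           # best iterate found within the budget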
Parameter Constraint Utilities¶
Utility functions for constrained optimization.
Homotopy Utilities¶
class botorch.optim.homotopy.FixedHomotopySchedule(values)[source]¶
Bases: object
Homotopy schedule with a fixed list of values.
Initialize FixedHomotopySchedule.
values (list[float]) – A list of values used in homotopy
property num_steps: int¶
property value: float¶
property should_stop: bool¶
restart()[source]¶
Return type:
None
step()[source]¶
Return type:
None
class botorch.optim.homotopy.LinearHomotopySchedule(start, end, num_steps)[source]¶
Bases: FixedHomotopySchedule
Linear homotopy schedule.
Initialize LinearHomotopySchedule.
☆ start (float) – start value of homotopy
☆ end (float) – end value of homotopy
☆ num_steps (int) – number of steps in the homotopy schedule.
class botorch.optim.homotopy.LogLinearHomotopySchedule(start, end, num_steps)[source]¶
Bases: FixedHomotopySchedule
Log-linear homotopy schedule.
Initialize LogLinearHomotopySchedule.
☆ start (float) – start value of homotopy
☆ end (float) – end value of homotopy
☆ num_steps (int) – number of steps in the homotopy schedule.
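A small sketch of constructing these schedules; the even spacing (in linear and log space, including both endpoints) is my assumption about the implementation:

from botorch.optim.homotopy import (
    LinearHomotopySchedule,
    LogLinearHomotopySchedule,
)

linear = LinearHomotopySchedule(start=1.0, end=0.0, num_steps=5)
# presumably 1.0, 0.75, 0.5, 0.25, 0.0
loglinear = LogLinearHomotopySchedule(start=1.0, end=0.01, num_steps=3)
# presumably 1.0, 0.1, 0.01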
class botorch.optim.homotopy.HomotopyParameter(parameter, schedule)[source]¶
Bases: object
Homotopy parameter.
The parameter is expected to either be a torch parameter or a torch tensor which may correspond to a buffer of a module. The parameter has a corresponding schedule.
parameter: Parameter | Tensor¶
schedule: FixedHomotopySchedule¶
class botorch.optim.homotopy.Homotopy(homotopy_parameters, callbacks=None)[source]¶
Bases: object
Generic homotopy class.
This class is designed to be used in optimize_acqf_homotopy. Given a set of homotopy parameters and corresponding schedules we step through the homotopies until we have solved the final problem.
We additionally support passing in a list of callbacks that will be executed each time step, reset, and restart are called.
Initialize the homotopy.
☆ homotopy_parameters (list[HomotopyParameter]) – List of homotopy parameters
☆ callbacks (Optional[list[Callable]]) – Optional list of callbacks that are executed each time restart, reset, or step are called. These may be used to, e.g., reinitialize the acquisition
function which is needed when using qNEHVI.
property should_stop: bool¶
Returns true if all schedules have reached the end.
restart()[source]¶
Restart the homotopy to use the initial value in the schedule.
Return type:
None
reset()[source]¶
Reset the homotopy parameters to their original values.
Return type:
None
step()[source]¶
Take a step according to the schedules.
Return type:
None
OpenZFS Capacity Calculator
Click on the section titles to collapse/expand. Mousing over a table cell loads the relevant data into the walkthrough section below. You can click table cells to freeze or unfreeze that cell for the walkthrough.
ZFS RAID is not like traditional RAID. Its on-disk structure is far more complex than that of a traditional RAID implementation. This complexity is driven by the wide array of data protection
features ZFS offers. Because its on-disk structure is so complex, predicting how much usable capacity you'll get from a set of hard disks given a vdev layout is surprisingly difficult. There are
layers of overhead that need to be understood and accounted for to get a reasonably accurate estimate. I've found that the best way to get my head wrapped around ZFS allocation overhead is to step
through an example.
We'll start by picking a less-than-ideal RAIDZ vdev layout so we can see the impact of all the various forms of ZFS overhead. Once we understand RAIDZ, understanding mirrored and striped vdevs will
be simple. We'll use 14x 18TB drives in two 7-wide RAIDZ2 (7wZ2) vdevs. It will generally be easier for us to work in bytes so we don't have to worry about conversion between TB and TiB.
Starting with the capacity of the individual drives, we'll subtract the size of the swap partition. The swap partition acts as an extension of the system's physical memory pool. If a running process
needs more memory than is currently available, the system can unload some of its in-memory data onto the swap space. By default, TrueNAS CORE creates a 2GiB swap partition on every disk in the data
pool. Other distributions may create a larger or smaller swap partition or might not create one at all.
18 * 1000^4 - 2 * 1024^3 = 17997852516352 bytes
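As a quick check, the same arithmetic in Python (the 2GiB figure is the TrueNAS CORE default swap size mentioned above):

disk_bytes = 18 * 1000**4 - 2 * 1024**3   # 17997852516352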
Next, we want to account for reserved sectors at the start of the disk. The layout and size of these reserved sectors will depend on your operating system and partition scheme, but we'll use FreeBSD
and GPT for this example because that is what's used by TrueNAS CORE and Enterprise. We can check sector alignment by running gpart list on one of the disks in the pool:
root@truenas[~]# gpart list da1
Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 35156249959
first: 40
entries: 128
scheme: GPT
1. Name: da1p1
Mediasize: 2147483648 (2.0G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 65536
Mode: r0w0e0
efimedia: HD(1,GPT,b1c0188e-b098-11ec-89c7-0800275344ce,0x80,0x400000)
rawuuid: b1c0188e-b098-11ec-89c7-0800275344ce
rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 2147483648
offset: 65536
type: freebsd-swap
index: 1
end: 4194431
start: 128
2. Name: da1p2
Mediasize: 17997852430336 (16T)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 2147549184
Mode: r1w1e2
efimedia: HD(2,GPT,b215c5ef-b098-11ec-89c7-0800275344ce,0x400080,0x82f39cce8)
rawuuid: b215c5ef-b098-11ec-89c7-0800275344ce
rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
label: (null)
length: 17997852430336
offset: 2147549184
type: freebsd-zfs
index: 2
end: 35156249959
start: 4194432
1. Name: da1
Mediasize: 18000000000000 (16T)
Sectorsize: 512
Mode: r1w1e3
We'll first note that the sector size used on this drive is 512 bytes. Also note that the first logical block on this disk is actually sector 40; that means we're losing 40 * 512 = 20480 bytes right off the bat.
The Name: da1p1 section describes the swap partition on this drive. We can see it's 2GiB in size (as expected) and it starts at logical block address 128 (i.e., an offset of 512 * 128 = 65536 bytes).
If we subtract this lost space from the expected partition size calculated above, we see it lines up with the actual on-disk partition size:
17997852516352 - 20480 - 65536 = 17997852430336 bytes
Before ZFS does anything with this partition, it rounds its size down to align with a 256KiB block. This rounded-down size is referred to as the osize or physical volume size of the disk in the ZFS code:
floor(17997852430336 / (256 * 1024)) * 256 * 1024 = 17997852311552 bytes
Inside the physical ZFS volume, we need to account for the special labels added to each disk. ZFS creates 4 copies of a 256KiB vdev label on each disk (2 at the start of the ZFS partition and 2 at
the end) plus a 3.5MiB embedded boot loader region. Details on the function of the vdev labels can be found here and details on how the labels are sized and arranged can be found here and in the
sections just below this (lines 541 and 548). We subtract this 4.5MiB (4x 256KiB + 3.5MiB) of space from the ZFS partition to get its "usable" size:
17997852311552 - 4 * 262144 - 3670016 = 17997847592960 bytes
Next up, we need to calculate the allocation size or "asize" of the whole vdev. We simply multiply the usable ZFS partition size by the vdev width here. We're not accounting for parity space just yet:
17997847592960 * 7 = 125984933150720 bytes
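Here's the chain of adjustments so far as a short Python sketch (the variable names are mine, not ZFS's):

part = 18 * 1000**4 - 2 * 1024**3 - 20480 - 65536   # on-disk ZFS partition
osize = part // (256 * 1024) * (256 * 1024)         # align down to 256KiB
usable = osize - 4 * 262144 - 3670016               # labels + 3.5MiB boot region
asize = usable * 7                                  # 7-wide vdev, parity ignored
# asize == 125984933150720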
That's about 114.58 TiB. ZFS takes this chunk of storage represented by the allocation size and breaks it into smaller, uniformly-sized buckets called "metaslabs". ZFS creates these metaslabs because they're much more manageable than the full vdev size when tracking used and available space via spacemaps. The size of the metaslabs is primarily controlled by the metaslab shift or "ms_shift" variable with the target size being 2^ms_shift bytes. You can read more about metaslab sizing here.
ZFS sets ms_shift so that the quantity of metaslabs is under 200. ms_shift starts at 29 and grows as high as 34. Once ms_shift is 34, it doesn't grow any larger but instead the metaslab count grows
beyond 200. 2^17 or 131,072 is the cap on the metaslab count (or ms_count); after that cap is hit, ZFS allows metaslabs to grow larger than 16 GiB. You won't hit this cap until your vdev allocation
size is at least 2^17 * 16 GiB = 2 PiB. Again, that's the size of an individual vdev, not the whole pool; you aren't going to run into this unless you put more than 125 18TB disks in a single vdev
(which is actually possible with dRAID). If you do exceed 131,072 metaslabs, ZFS will increase the ms_shift value until you're back under it again. OpenZFS can handle metaslab shift values up to 64.
On the other hand, the "cutoff" for going from ms_shift = 34 down to ms_shift = 33 is really pretty small, 1,600GiB or 1.5625TiB. In other words, unless your vdevs are smaller than 1.5625TiB, your
pool's ms_shift value will be 34. For our example, asize is well over 1.5625TiB so we have ms_shift = 34.
Once we have the value of ms_shift we can easily calculate the metaslab size by doing 2^ms_shift.
2 ^ 34 = 17179869184 bytes
With ms_shift = 34, the metaslab size will be 16GiB. We can note that if ms_shift was 33, the metaslab size would be 8GiB; the metaslab size gets cut in half each time ms_shift decreases by 1. We now
need to figure out how many full 16GiB metaslabs will fit in each vdev, so we calculate asize / metaslab_size and round down using the floor() function (the 16GiB metaslab size is represented in
bytes below):
floor(125984933150720 / 17179869184) = 7333
This gives us 7,333 metaslabs per vdev. We can check our progress so far on an actual ZFS system by using the zdb command provided by ZFS. We can check vdev asize and the metaslab shift value by
running zdb -C $pool_name and we can check metaslab count by running zdb -m $pool_name. Note on TrueNAS, you'll need to add the -U /data/zfs/zpool.cache option (i.e., zdb -U /data/zfs/zpool.cache -C
$pool_name and zdb -U /data/zfs/zpool.cache -m $pool_name).
root@truenas[~]# zdb -U /data/zfs/zpool.cache -C tank
MOS Configuration:
version: 5000
name: 'tank'
state: 0
txg: 11
pool_guid: 7584042259335681111
errata: 0
hostid: 3601001416
hostname: ''
vdev_children: 2
type: 'root'
id: 0
guid: 7584042259335681111
create_txg: 4
type: 'raidz'
id: 0
guid: 2993118147866813004
nparity: 2
metaslab_array: 268
metaslab_shift: 34
ashift: 12
asize: 125984933150720
is_log: 0
create_txg: 4
com.delphix:vdev_zap_top: 129
type: 'disk'
... (output truncated) ...
root@truenas[~]# zdb -U /data/zfs/zpool.cache -m tank
vdev 0 ms_unflushed_phys object 270
metaslabs 7333 offset spacemap free
--------------- ------------------- --------------- ------------
metaslab 0 offset 0 spacemap 274 free 16.0G
space map object 274:
smp_length = 0x18
smp_alloc = 0x12000
Flush data:
unflushed txg=5
metaslab 1 offset 400000000 spacemap 273 free 16.0G
space map object 273:
smp_length = 0x18
smp_alloc = 0x21000
Flush data:
unflushed txg=6
... (output truncated) ...
To calculate useful space in our vdev, we multiply the metaslab size by the metaslab count. This means that space within the ZFS partition but not covered by one of the metaslabs isn't useful to us
and is effectively lost. In theory, by using a smaller ms_shift value, we could recover a bit of this space, but we would end up using a lot more system memory so it's not really worth it. With 7,333
metaslabs at 16GiB per metaslab, we have:
17179869184 * 7333 = 125979980726272 bytes
That's about 114.58 TiB of useful space per vdev. If we multiply that by the quantity of vdevs, we get the ZFS pool size:
125979980726272 * 2 = 251959961452544 bytes
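Continuing the Python sketch, the metaslab bookkeeping condenses to a few lines:

ms_shift = 34                        # fixed for vdevs larger than 1.5625TiB
ms_size = 2 ** ms_shift              # 17179869184 bytes (16GiB)
ms_count = asize // ms_size          # 7333
pool_size = ms_size * ms_count * 2   # 251959961452544 bytes across two vdevs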
We can confirm this by running zpool list:
root@truenas[~]# zpool list -p -o name,size,alloc,free tank
NAME   SIZE             ALLOC    FREE
tank   251959961452544  1437696  251959960014848
The -p flag shows exact (parsable) byte values and the -o flag determines what properties will be displayed.
Note that the zpool SIZE value matches what we calculated above. We're going to set this number aside for now and calculate RAIDZ parity and padding. Before we proceed, it will be helpful to review a
few ZFS basics including ashift, minimum block size, how partial-stripe writes work, and the ZFS recordsize value.
Hard disks and SSDs divide their space into tiny logical storage buckets called "sectors". A sector is usually 4KiB but could be 512 bytes on older hard drives or 8KiB on some SSDs. A sector
represents the smallest read or write a disk can do in a single operation. ZFS tracks disks' sector size as the "ashift" where 2^ashift = sector size (so ashift = 9 for 512 byte sectors, 12 for 4KiB
sectors, 13 for 8KiB sectors).
In RAIDZ, the smallest useful write we can make is p+1 sectors wide where p is the parity level (1 for RAIDZ1, 2 for Z2, 3 for Z3). This gives us a single sector of user data and however many parity
sectors we need to protect that user data. With this in mind, ZFS allocates space on RAIDZ vdevs in even multiples of this p+1 value. It does this so we don't end up with unusable-small gaps on the
disk. For example, imagine we made a 5-sector write to a RAIDZ2 vdev (3 user data sectors and 2 parity sectors). We later delete that data and are left with a 5-sector gap on the disk. We now make a
3-sector write to the Z2 vdev, it lands in that 5-sector gap and we're left with a 2-sector gap that we can't do anything with. That space can't be recovered without totally rewriting every other
sector on the disk after it.
To avoid this, ZFS will pad out all writes to RAIDZ vdevs so they're an even multiple of this p+1 value. By "pad out" we mean it just logically includes these extra few sectors in the block to be
written but doesn't actually write anything to them. The ZFS source code refers to them as "skip" sectors.
Unlike traditional RAID5 and RAID6 implementations, ZFS supports partial-stripe writes. This has a number of important advantages but also presents some implications for space calculation that we'll
need to consider. Supporting partial stripe writes means that in our 7wZ2 vdev example, we can support a write of 12 total sectors even though 12 is not an even multiple of our stripe width (7). 12
is evenly divisible by p+1 (3 in this case), so we don't even need any padding. We would have a single full stripe of 7 sectors (2 parity sectors plus 5 data sectors) followed by a partial stripe
with 2 parity sectors and 3 data sectors. This will be important because even though we can support partial stripe writes, every stripe (including those partial stripes) needs a full set of p parity sectors.
The last ZFS concept we need to understand here is the recordsize value. The ZFS recordsize value is used to determine the largest block of data ZFS can write out. It can be set per-dataset and can
be any even power of 2 from 512 bytes up to 16MiB (values above 1MiB require changing the zfs_max_recordsize kernel module parameter). The default recordsize value is 128KiB. For capacity estimation
purposes, ZFS always assumes a 128KiB record. It's important to note that this recordsize value only considers user data, not parity or padding. It's also worth mentioning that block sizes in ZFS
will vary based on how much data needs to be written out and the recordsize value enforces the upper limit of that block size, but again, ZFS assumes all 128KiB records for space calculation
purposes, so we're going to use that value going forward.
You can read more about ZFS' handling of partial stripe writes and block padding in this article by Matt Ahrens.
Getting back to our capacity example, we have the minimum sector count already calculated above at p+1 = 3. Next, we need to figure out how many sectors will get filled up by a recordsize write
(128KiB here).
128 * 1024 / 4096 = 32 sectors
Our stripe width is 7 disks, so we can figure out how many stripes this 128KiB write will take. Remember, we need 2 parity sectors per stripe, so we divide the 32 sectors by 5 because that's the
number of data sectors per stripe:
32 / (7-2) = 6.4
We can visualize how this might look on the disks (P represents a parity sector, D represents a data sector):
As mentioned above, that partial 0.4 stripe also gets 2 parity sectors, so we have 7 stripes of parity data at 2 parity sectors per stripe, or 14 total parity sectors. We now have 32 data sectors, 14
parity sectors, adding those, we get 46 total sectors for this data block. 46 is not an even multiple of our minimum sector count (3), so we need to add 2 padding sectors. This brings our total
sector count to 48: 32 data sectors, 14 parity sectors, and 2 padding sectors.
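That sector accounting generalizes to a small helper. Here's a hedged Python sketch of my own (not ZFS code) that reproduces the 48-sector result:

import math

def raidz_sectors(record_bytes, sector=4096, width=7, parity=2):
    data = math.ceil(record_bytes / sector)        # data sectors needed
    stripes = math.ceil(data / (width - parity))   # full plus partial stripes
    total = data + stripes * parity                # every stripe gets parity
    total += -total % (parity + 1)                 # pad to a multiple of p+1
    return total

print(raidz_sectors(128 * 1024))   # 48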
With the padding sectors included, this is what the full 128KiB block might look like on disk. I've drawn two blocks so you can see how alignment of the second block gets shifted a bit to accommodate
the partial stripe we've written. The X's represent the padding sectors.
This probably looks kind of weird because we have one parity sector at the start of the second block just hanging out by itself, but even though it's not on the same exact row as the data it's
protecting, it's still providing that protection. ZFS knows where that parity data is written so it doesn't really matter what LBA it gets written to, as long as it's on the correct disk.
We can calculate a data storage efficiency ratio by dividing our 32 data sectors by the 48 total sectors it takes to store them on disk with this particular vdev layout.
32 / 48 = 0.66667
ZFS uses something similar to this ratio when allocating space but in order to simplify calculations and avoid multiplication overflows and other weird stuff it tracks this ratio as a fraction of
512. In other words, to more accurately represent how ZFS "sees" the on-disk space, we need to convert the 32/48 fraction to the nearest fraction of 512. We'll need to round down to get a whole
number in the numerator (the top part of the fraction). To do this, we calculate:
floor(0.66667 * 512) / 512 = 0.666015625 = 341/512
This 341/512 fraction is called the vdev_deflate_ratio and it's what we'll multiply the pool size calculated above by to get usable space after parity and padding. You can read a bit more on the vdev_deflate_ratio here.
251959961452544 * 0.666015625 = 167809271201792 bytes
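Or in Python, keeping the ratio as a whole fraction of 512 the way ZFS does:

deflate_num = 32 * 512 // 48                  # 341
usable_pool = pool_size * deflate_num // 512  # 167809271201792 bytes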
The last thing we need to account for is SPA slop space. ZFS reserves the last little bit of pool capacity "to ensure the pool doesn't run completely out of space due to unaccounted changes (e.g. to
the MOS)". Normally this is 1/32 of the usable pool capacity with a minimum value of 128MiB. OpenZFS 2.0.7 also introduced a maximum limit to slop space of 128GiB (this is good; slop space used to be
HUGE on large pools). You can read about SPA slop space reservation here.
For our example pool, slop space would be...
167809271201792 * 1/32 = 5244039725056 bytes
That's 4.77 TiB reserved... again, a TON of space. If we're running OpenZFS 2.0.7 or later, we'll use 128 GiB instead:
167809271201792 - 128 * 1024^3 = 167671832248320 bytes = 156156.5625 GiB = 152.4966 TiB
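The slop logic in Python (the 128GiB cap assumes OpenZFS 2.0.7 or later, per the text above):

slop = min(max(usable_pool // 32, 128 * 1024**2), 128 * 1024**3)
final = usable_pool - slop                    # 167671832248320 bytes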
And there we have it! This is the total usable capacity of a pool of 14x 18TB disks configured in 2x 7wZ2. We can confirm the calculations using zfs list:
root@truenas[~]# zfs list -p tank
NAME   USED     AVAIL            REFER   MOUNTPOINT
tank   1080288  167671831168032  196416  /mnt/tank
As with the zpool list command, the -p flag shows exact byte values.
167671831168032 + 1080288 = 167671832248320 bytes = 156156.5625 GiB = 152.4966 TiB
By adding the USED and AVAIL values, we can confirm that our calculation is accurate.
Mirrored vdevs work in a similar way but the vdev asize is just a single drive's capacity (minus ZFS labels and whatnot) and then the vdev_deflate_ratio is just 512/512 or 1.0. We skip all the parity
and padding sector stuff but we do still need to account for metaslab allocation and SPA slop space.
dRAID Capacity Calculation
Capacity calculation for dRAID vdevs is similar to that of RAIDZ but includes a few extra steps. We'll run through an abbreviated example calculation with 2x dRAID2:5d:20c:1s with 8TB disks (no swap
space reserved this time).
dRAID still aligns the space on each drive to a 256KiB block size, so we go from 8000000000000 bytes to 7999999967232 bytes per 8TB disk:
floor(8000000000000 / (256 * 1024)) * 256 * 1024 = 7999999967232 bytes
From there, we reserve space for the on-disk ZFS labels (just like in RAIDZ) but we also reserve an extra 32MiB for dRAID reflow space which is used when expanding a dRAID vdev. Details on the reflow
reserve space can be found here.
7999999967232 - (256 * 1024 * 4) - (7 * 2^19) - 2^25 = 7999961694208 bytes
dRAID does not support partial stripe writes so we go through several extra alignment operations to make sure our capacity is an even multiple of the group width. Group width in dRAID is defined as
the number of data disks in the configuration plus the number of parity disks. For our configuration, that's 5 + 2 = 7 disks. dRAID allocates 16MiB of space from each disk in the group to form a row
(details here), so we can multiply the row height (16 MiB) by the group width (7) to get the group size:
7 * 16 * 1024^2 = 117440512 bytes
First we align the individual disk's allocatable size to the row height (16 MiB):
floor(7999961694208 / (16 * 1024^2)) * 16 * 1024^2 = 7999947014144 bytes
To get the total allocatable capacity, we multiply this by the number of child disks minus the number of spare disks in the vdev:
7999947014144 * (20 - 1) = 151998993268736 bytes
And then this number is aligned to the group size which we calculated above:
floor(151998993268736 / 117440512) * 117440512 = 151998909382656 bytes
This is the allocatable size (or asize) of each of our two dRAID vdevs. We go through the same logic as RAIDZ used to determine the metaslab count but each metaslab gets its size adjusted so its
starting offset and its overall size lines up with the minimum allocation size. The minimum allocation size is the group width times the sector size (or 2^ashift). For our layout that is:
7 * 2^12 = 28672 bytes
This represents the smallest write operation we can make to our layout. To align the metaslabs, ZFS iterates over each one, rounds the starting offset up to align with the minimum allocation size, then rounds the total size of the metaslab down so it's evenly divisible by the minimum allocation size. Details on dRAID's metaslab initialization process can be found here and the code for the process is simplified and mocked up in Python below:
group_width, ashift, ms_shift = 7, 12, 34    # inputs for our example layout
vdev_asize = 151998909382656                 # aligned allocatable size from above
group_alloc_size = group_width * 2**ashift   # minimum allocation size
vdev_raw_size = 0
ms_base_size = 2**ms_shift
ms_count = vdev_asize // ms_base_size
new_ms_size = []
for i in range(ms_count):
    ms_start = i * ms_base_size
    # round the metaslab's start up to the next multiple of the min alloc size
    new_ms_start = (ms_start + group_alloc_size - 1) // group_alloc_size * group_alloc_size
    alignment_loss = new_ms_start - ms_start
    # shrink the metaslab so its size is an even multiple of the min alloc size
    new_ms_size.append((ms_base_size - alignment_loss) // group_alloc_size * group_alloc_size)
    overall_loss = ms_base_size - new_ms_size[i]   # space lost by this metaslab
    vdev_raw_size += new_ms_size[i]
Each metaslab will get a bit of space trimmed off its head and/or its tail. The table below shows the results from the first 20 iterations of the above loop:
As you can see, we'll end up with some lost space in between many of the metaslabs but it's not very much (at worst, a few gigabytes for a multi-PB sized pool). You'll also notice that the metaslab size isn't uniform across the pool; that makes it very hard (maybe impossible) to write a simple, closed-form equation for vdev_raw_size without a loop or summation. Note that for some dRAID topologies, the metaslabs just happen to line up without any shifting, every metaslab is exactly 2^ms_shift bytes, and we don't lose any extra space, but that's not very common.
If you're inclined, you can validate this non-uniform metaslab sizing using zdb -m tank. If you pull the offset listed with each metaslab and convert it from hex to decimal, you can calculate its
size. You'll see the size for each metaslab varies slightly as the above table shows. zdb -m also lists the metaslab size, but it rounds it to the nearest tenth of a GiB which is not a fine enough
resolution to see the tiny sizing variations.
As a side note, we could theoretically shift the first metaslab's offset to align with the minimum allocation size and then size it down so its overall size was an even multiple of the minimum
allocation size and all subsequent metaslabs (each sized down uniformly to be an even multiple of the min alloc size) would naturally line up where they needed to with no gaps in between. In order to
do this, however, the OpenZFS developers would need to add dRAID specific logic to higher-level functions in the code; they opted to keep it simple. The amount of usable space lost to those gaps
between the shifted metaslabs really is negligible though, like on the order of 0.00004% of overall pool space.
Once we have the vdev_raw_size, we need to calculate the deflate ratio for our dRAID vdevs. This follows a very similar process to RAIDZ deflate ratio calculation but it's a bit simpler because we
don't need to account for partial stripe parity sectors (because we don't have any partial stripes!)
We start with the recordsize (which we'll assume is the default 128KiB) and figure out how many sectors (each sized at 2^ashift) it takes to store a block of this size:
128 * 1024 / 2^12 = 32 sectors
Then we figure out how many redundancy groups this will fill by dividing it by the number of data disks per redundancy group (not the total group width, just the data disks; parity disks don't store user data):
32 / 5 = 6.4
We can't fill a partial redundancy group so we round up to 7. We then multiply this by the redundancy group width (including parity) to get the total number of sectors it takes to store the 128KiB block:
7 * 7 = 49
This configuration consumes 49 total sectors to store 32 sectors worth of data, giving us a ratio of
32 / 49 = 0.6531...
Just like with RAIDZ, we round this down to be a whole fraction of 512 to get the deflate ratio:
floor( (32 / 49) * 512 ) / 512 = 0.6523...
We end up with 334/512 (or 0.6523...) as the deflate ratio for this configuration. We multiply the vdev_raw_size by the vdev count and the deflate ratio to get our pool usable size before slop:
151990085230592 * 2 * 334/512 = 198299564324288 bytes
We compute slop space the same as we did above (we exceed the max here so we use 128 GiB) and remove that from our usable space to get final, total usable for this pool:
198299564324288 - (128 * 1024^3) = 198162125370816 bytes
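The dRAID tail end of the calculation in Python, picking up vdev_raw_size from the loop above:

vdev_raw_size = 151990085230592               # output of the metaslab loop
deflate_num = 32 * 512 // 49                  # 334
usable_pool = vdev_raw_size * 2 * deflate_num // 512
final = usable_pool - 128 * 1024**3           # 198162125370816 bytes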
We can validate this with zfs list:
jfr@zfsdev:~$ sudo zfs list -p tank
NAME   USED     AVAIL            REFER   MOUNTPOINT
tank   4545072  198162120825744  448896  /tank
By adding the values in USED and AVAIL, we can confirm our calculations are accurate:
4545072 + 198162120825744 = 198162125370816 bytes = 184552.86 GiB = 180.23 TiB
Closing Thoughts
The RAIDZ example used VirtualBox with virtual 18TB disks that hold exactly 18,000,000,000,000 bytes. Real disks won't have such an exact physical capacity; the 8TB disks in my TrueNAS system hold
8,001,563,222,016 bytes. If you run through these calculations on a real system with physical disks, I recommend checking the exact disk and partition capacity using gpart or something similar.
We took a shortcut with the dRAID example because we didn't need to include swap space. We used truncate to create sparse files to mimic 8TB disks. The syntax for the dRAID example is below:
sudo truncate -s 8TB /var/tmp/disk{0..39}
zpool create tank -o ashift=12 draid2:5d:20c:1s /var/tmp/disk{0..19} draid2:5d:20c:1s /var/tmp/disk{20..39}
You can optionally mount these files as loop devices with losetup. You can set the apparent sector size of the loop device as well so you don't need to specify the ashift value when creating your pool.
It's worth noting that none of these calculations factor in any data compression. The effect of compression on storage capacity is almost impossible to predict without running your data through the
compression algorithm you intend to use. At iX, we typically see between 1.2:1 and 1.6:1 reduction assuming the data is compressible in the first place. Compression in ZFS is done per-block and will
either shrink the block size a bit (if the block is smaller than the recordsize) or increase the amount of data in the block (if the block is equal to the recordsize).
We're also ignoring the effect that variable block sizes will have on functional pool capacity. We used a 128 KiB block because that's the ZFS default and what it uses for available capacity
calculations, but (as discussed above) ZFS may use a different block size for different data. A different block size will change the ratio of data sectors to parity+padding sectors so overall storage
efficiency might change. The calculator above includes the ability to set a recordsize value and calculate capacity based on a pool full of blocks that size. You can experiment with different
recordsize values to see its effects on efficiency. Changing a dataset's recordsize value will have effects on performance as well, so read up on it before tinkering. You can find a good high-level
discussion of recordsize tuning here, a more detailed technical discussion here, and a great generalized workload tuning guide here on the OpenZFS docs page.
Please feel free to get in touch with questions or if you spot any errors! jason@jro.io
If you're interested in how the pool annual failure rate values are derived, I have a write-up on that here.
I want to show an ellipse in C#. My code works fine when running in R, but from C# I get this message: "Object is static; operation not allowed (Exception from HRESULT: 0x8004000B (OLE_E_STATIC))".
Here is my code:
public void setScatter(int xAxis, int yAxis, int zAxis, List<string> variable)
{
    // Plot from R: show outliers with the classic & robust MVE methods
    this.comboBoxXAxis.SelectedIndex = xAxis;
    this.comboBoxYAxis.SelectedIndex = yAxis;
    dataform.rconn.EvaluateNoReturn("x<-X[," + xAxis + "] ");
    dataform.rconn.EvaluateNoReturn("y<-X[," + yAxis + "] ");
    dataform.rconn.EvaluateNoReturn("shape <- cov(X)");
    dataform.rconn.EvaluateNoReturn("center<- colMeans(X)");
    dataform.rconn.EvaluateNoReturn("d2.95 <- qchisq(0.95, df = 2)");
    //dataform.rconn.EvaluateNoReturn("gr<- grid(lty=3,col='lightgray', equilogs = 'TRUE')");
    //dataform.rconn.Evaluate("mtext('with classical (red) and robust (blue)')");
    dataform.rconn.EvaluateNoReturn("plot(x,y, main='Draw Ellipse ', pch=19,col='black', type='p')");
    dataform.rconn.EvaluateNoReturn("elp<- unname(ellipsoidPoints(shape, d2.95,center))");
    dataform.rconn.Evaluate(" lines(elp, col='red' , lty=7 , lwd=2)");
    //dataform.rconn.EvaluateNoReturn("lines(ellipsoidPoints(mve@cov, d2 = d2.95, loc=mve@center), col='blue', lty='7' , lwd='2') ");
}
Whatever code I comment out, I always get the same error, and I don't know why this happens. Any idea how to show that ellipse? Thank you very much; you have already helped me a lot in completing my thesis.
Scalable Distributed Random Number Generation Based on Homomorphic Encryption
Conference Paper
Scalable Distributed Random Number Generation Based on Homomorphic Encryption
Generating a secure source of publicly-verifiable randomness could be the single most fundamental technical challenge on a distributed network, especially in the blockchain context. Many current
proposals face serious scalability and security issues. We present a protocol which can be implemented on a blockchain that ensures unpredictable, tamper-resistant, scalable and
publicly-verifiable outcomes. The main building blocks of our protocol are homomorphic encryption (HE) and verifiable random functions (VRF). The use of homomorphic encryption enables mathematical
operations to be performed on encrypted data, to ensure no one knows the outcome prior to being generated. The protocol requires O(n) elliptic curve multiplications and additions as well as O(n)
signature signing and verification operations, which permits great scalability. We present a comparison between recent approaches to the generation of random beacons.
... Some blockchain oracle networks, e.g., Chainlink, combine an on-chain smart contract and an off-chain server to generate random numbers [19]. The smart contract listens to client requests and sends them to the server. Chainlink's server employs a Verifiable Random Function in [7] to generate randomness. ...
... In [20], Nguyen et al. propose to use Homomorphic Encryption (HE) in their DRNG system as another way to achieve linear communication and computation cost. In their protocol described in [19],
each participant generates a secret, encrypts it, and publishes the ciphertext. Then, all the ciphertexts are joined. ...
... The requester with a secret key can decrypt the joined ciphertext to receive the randomness. The communication cost of [19] is O(n), but it requires the requester to be honest. Suppose the
requester gives his secret key to a colluding participant. ...
This paper introduces Orand, a fast, publicly verifiable, scalable decentralized random number generator designed for applications where public Proof-of-Randomness is essential. A reliable source of
randomness is vital for various cryptographic applications and other applications such as decentralized gaming and blockchain proposals. Consequently, generating public randomness has attracted
attention from the cryptography research community. However, attempts to generate public randomness still have limitations, such as inadequate security or high communication and computation costs.
Orand is designed to generate public randomness in a distributed manner based on a suitable distributed verifiable random function. This approach allows Orand to enjoy low communication and
computational costs. Moreover, Orand achieves the following security properties: pseudo-randomness, unbiasability, liveness, and public verifiability.
... This assumption that Byzantine processes will not omit during the commitment phase of the protocol is also an exploitable vulnerability in the synchronous protocol HydRand [26], although it can
be modified to restart once there are missing contributions. Nguyen et al.'s proposal [22], also a synchronous protocol, assumes a Requester, a trusted entity generating FHE keys, which can be
considered as a client using the system. ...
... In the case of Algorand, a failure happens when the set (of expected cardinality c) of nodes chosen to be proposers is empty. In the protocol by Nguyen et al. [22], this happens when all selected
contributors are Byzantine. ...
... The protocol by Nguyen et al. [22] employs a summation on the secrets shared by the contributors, which results in linear computation complexity. ...
... Message signing takes place with the private key; for instance, Alice signs a message with her private key and sends both the message and the signature to Bob. Using Alice's public key, Bob can verify two things: first, that the signature originated from Alice (and not someone pretending to be Alice), and second, that the message was not changed or tampered with in transit [31], [33]. ...
... For the B-Rand protocol, homomorphic encryption means that messages encrypted with the same public key can be summed to give the encrypted sum of the unencrypted messages. When decrypted, this sum yields the sum of the unencrypted messages [33]:

Dec(Enc(m_1) + Enc(m_2)) = m_1 + m_2
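To make that additive property concrete, here is a toy Paillier-style sketch in Python. This is my own illustration with tiny, insecure parameters; it is not the scheme used by B-Rand or any of the surveyed protocols:

import math, random   # math.lcm needs Python 3.9+

p, q = 293, 433                      # toy primes: insecure, demo only
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)         # Carmichael's lambda(n)
g = n + 1                            # the standard Paillier generator choice

def enc(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(2, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    u = pow(c, lam, n2)
    return (u - 1) // n * pow(lam, -1, n) % n

c = enc(17) * enc(25) % n2           # multiplying ciphertexts adds plaintexts
assert dec(c) == 42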
... The problem of producing publicly verifiable random numbers, also referred to as beacons or random beacons, has been studied in a range of applications [33]. Andrychowicz and Dziembowski [34]
devised a protocol that uses the properties of cryptographic hash functions in PoW blockchain systems to produce an unpredictable public beacon in a peerto-peer environment. ...
Many blockchain processes require pseudo-random numbers. This is especially true of blockchain consensus mechanisms that aim to fairly distribute the opportunity to propose new blocks between the
participants in the system. The starting point for these processes is a source of randomness that participants cannot manipulate. This paper proposes two methods for embedding random number seeds in
a blockchain data structure to serve as inputs to pseudo-random number generators. Because the output of a pseudo-random number generator depends deterministically on its seed, the properties of the
seed are critical to the quality of the eventual pseudo-random number produced. Our protocol, B-Rand, embeds random number seeds that are confidential , tamper-resistant , unpredictable ,
collision-resistant , and publicly verifiable as part of every transaction. These seeds may then be used by transaction owners to participate in processes in the blockchain system that require
pseudo-random numbers. Both the Single Secret and Double Secret B-Rand protocols are highly scalable with low space and computational cost, and the worst case is linear in the number of transactions
per block.
... Nguyen-van et al. [8] Verifiable and scalable random number generation ...
... However, inducing trust as a parameter in SC execution was not addressed. Nguyen-van et al. [8] proposed generation of verifiable random numbers based on homomorphic encryption that generates unpredictable and immutable random numbers with public access. Michael Mulders [20] proposed a scheme to generate a randomization environment on Ethereum based on parameters like eth.blockstamp and eth.timestamp. ...
... Then, we evaluate throughput-latency trade-offs for SaNkhyA and compare the proposed scheme for delay in random number generation against traditional approaches in [7], [8], [14]. For block-convergence time, we compare the proposed consensus PoV against traditional consensus schemes. ...
In modern decentralized Internet-of-Things (IoT)-based sensor communications, pseudo noise-diffusion oracles are heavily investigated as random oracles for data exchange among peer nodes. As these
oracles are generated through algorithmic processes, they pass the standard random tests for finite and bounded intervals only. This ensures a false sense of privacy and confidentiality in exchange
through open protocol IoT-stacks in public channels i.e. Internet. Recently, blockchain (BC)-envisioned random sequences as input oracles are proposed about financial applications, and windfall games
like roulette, poker, and lottery. These random inputs exhibit fairness, and non-determinism in SC executions termed as probabilistic smart contracts (PSC). However, the IoT-enabled PSC process might
be controlled and forged through humans, machines, and bot-nodes through physical and computational methods. Moreover, dishonest entities like contract owners, players, and miners can coordinate
together to form collusion attacks during consensus to propagate false updates, which ensures forged block additions by miners in BC. Motivated by these facts, in this paper, we propose a
BC-envisioned IoT-enabled PSC scheme, SaNkhyA, which is executed in three phases. In the first phase, the scheme eliminates colluding dishonest miners through the proposed miner selection algorithm.
Then, in the second phase, the elected miners agree through the proposed consensus protocol to generate a stream of random bits. In the third phase, the generated random bit-stream is split through
random splitters and fed as input oracles to the proposed PSC among participating entities. In simulation, the scheme ensures a trust probability of 0.38 even at 85% collusion among miners and has an
average block processing delay of 1.3 seconds compared to serial approaches, where the block processing delay is 5.6 seconds, thereby exhibiting improved scalability. The overall computation and
communication costs are 28.48 milliseconds and 101 bytes, respectively, which indicates the efficacy of the proposed scheme compared to traditional schemes.
... We briefly go through some of the drawbacks each construction/service are facing. A detailed overview and a comparison have been given in [3]. ...
... In this section, we recall some of the primitives that build up our protocol. The materials in this demo paper including homomorphic encryption, and verifiable random functions have been
described in great detail in [3]. ...
... As proved in [3], our protocol has achieved the following properties: Unpredictability, Unbiasability, Public Verifiability, Honest Minority and Scalability. ...
Generating public randomness has been in significant demand and is also challenging, especially since the introduction of blockchain technology. Lotteries, smart contracts, and random audits are
examples where the reliability of the randomness source is a vital factor. We demonstrate a system of random number generation service for generating fair, tamper-resistant, and verifiable random
numbers. Our protocol together with this system is an R&D project aiming at providing a decentralized solution to random number generation by leveraging the blockchain technology along with
long-lasting cryptographic primitives including homomorphic encryption, verifiable random functions. The system decentralizes the process of generating random numbers by combining each party's
favored value to obtain the final random numbers. Our novel idea is to force each party to encrypt its contribution before making it public. With the help of homomorphic encryption, all encrypted contributions can be combined without performing any decryption. The solution has achieved the properties of unpredictability, tamper-resistance, and public verifiability. In addition, it only offers
a linear overall complexity with respect to the number of parties on the network, which permits great scalability.
... Authors of [13] proposed a protocol composed of three main components: Requester, Core Layer (consists of many parties responsible for PRNG), and Public Distributed Ledger (PDL). Their protocol
works in rounds where each round consists of a few stages. ...
Recent advances in blockchain gained significant social attention, mainly due to substantial price fluctuations of Bitcoin and Ethereum cryptocurrencies. By its design, blockchain is an open,
distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way, providing solutions for many complex tasks without third party involvement. To
achieve that, they employ a set of Byzantine Fault-tolerant consensus algorithms that require the implemented logic to be deterministic. The lacking source of randomness is a consequential limitation
since many application domains, like games, lotteries, or random elections, require random sources. Given the Byzantine Fault-tolerance, generating random numbers should also be publicly-verifiable
and tamper-resistant, but still hold the premises of being unpredictable.In this paper, we will provide an overview of the current research surrounding pseudo-random number generation on a
decentralized network that satisfies those requirements.
Blockchain has attracted tremendous attention in recent years due to its significant features including anonymity, security, immutability, and auditability. Blockchain technology has been used in several nonmonetary applications, including the Internet of Things. However, blockchain is resource-intensive and computationally expensive to scale, resulting in delays and large bandwidth overheads that are unsuitable for many IoT devices. In this paper, we work on a lightweight blockchain approach that is suited to IoT needs and provides end-to-end security. Decentralization is achieved in
that are unsuitable for many IoT devices. In this paper, we work on a lightweight blockchain approach that is suited for IoT needs and provides end-to-end security. Decentralization is achieved in
our lightweight blockchain implementation by building a network in which many high-resource devices collaborate to maintain the blockchain. The nodes in the network are arranged in sorted order w.r.t. execution time and count to reduce mining overheads and are accountable for handling the public blockchain. We propose a distributed execution-time-based consensus algorithm that decreases the delay and overhead of the mining process. We also propose a randomized node-selection algorithm for selecting the nodes that verify mined blocks, to eliminate double-spend and 51% attacks. The results are encouraging: they show significantly reduced mining overhead and keep a check on the double-spending problem and 51% attacks.
Bitcoin is one of the most prominent blockchain systems but is infamous for its massive energy consumption. The proof-of-work (PoW) consensus algorithm used for appending transactions to the Bitcoin
ledger (also known as Bitcoin mining) incurs substantial energy expenditure due to the energy-intensive nature of PoW. The root of this inefficiency lies in the current implementation of the PoW
algorithm. PoW establishes a linear relationship between a miner's computational power and their probability of successfully mining a block by assigning an identical cryptographic puzzle to all
miners. This paper investigates the energy inefficiency inherent in PoW mining by exploring the potential benefits of introducing a nonlinear probability of success based on a miner's computational
power. This nonlinear proof-of-work (nlPoW) algorithm reduces energy consumption without compromising the decentralised nature of Bitcoin. This study formulates four distinct nlPoW algorithms through
a meticulous design science approach by deducing requisite algorithmic specifications and structures. Rigorous statistical simulations are employed to assess the performance of nlPoW against
conventional PoW within the Bitcoin mining process. Preliminary outcomes obtained from simulating a sizable network of miners, each possessing equivalent computational power, demonstrate that nlPoW
effectively curtails the hash computations required during Bitcoin mining. nlPoW achieves energy efficiency enhancements without compromising the decentralised consensus model or substituting energy
consumption with alternate resources, a trade-off often observed in prior attempts to mitigate the energy challenge associated with PoW.
Consensus algorithms that function in permissionless blockchain systems must randomly select new block proposers in a decentralised environment. Our contribution is a new blockchain consensus
algorithm called Proof-of-Publicly Verifiable Randomness (PoPVR). It may be used in blockchain design to make permissionless blockchain systems function as pseudo-random number generators and to use
the results for decentralised consensus. The method employs verifiable random functions to embed pseudo-random number seeds in the blockchain that are confidential, tamper-resistant, unpredictable,
collision-resistant, and publicly verifiable. PoPVR does not require large-scale computation, as is the case with Proof-of-Work and is not vulnerable to the exclusion of less wealthy stakeholders
from the consensus process inherent in stake-based alternatives. It aims to promote fairness of participation in the consensus process by all participants and functions transparently using only
open-source algorithms. PoPVR may also be useful in blockchain systems where asset values cannot be directly compared, for example, logistical systems, intellectual property records and the direct
trading of commodities and services. PoPVR scales well with complexity linear in the number of transactions per block.
The scientific interest in the area of Decentralized Randomness Beacon (DRB) protocols has been thriving recently. Partially that interest is due to the success of the disruptive technologies
introduced by modern cryptography, such as cryptocurrencies, blockchain technologies, and decentralized finances, where there is an enormous need for a public, reliable, trusted, verifiable, and
distributed source of randomness. On the other hand, recent advancements in the development of new cryptographic primitives brought a huge interest in constructing a plethora of DRB protocols
differing in design and underlying primitives.To the best of our knowledge, no systematic and comprehensive work systematizes and analyzes the existing DRB protocols. Therefore, we present a
Systematization of Knowledge (SoK) intending to structure the multi-faceted body of research on DRB protocols. In this SoK, we delineate the DRB protocols along the following axes: their underlying
primitive, properties, and security. This SoK tries to fill that gap by providing basic standard definitions and requirements for DRB protocols, such as Unpredictability, Bias-resistance,
Availability (or Liveness), and Public Verifiability. We classify DRB protocols according to the nature of interactivity among protocol participants. We also highlight the most significant features
of DRB protocols such as scalability, complexity, and performance, along with a brief discussion of their improvement. We present future research directions along with a few interesting research problems. Keywords: Random beacon, Bias-resistance, Unpredictability, Secret sharing, Verifiable delay function
Sharding is the prevalent approach to breaking the trilemma of simultaneously achieving decentralization, security, and scalability in traditional blockchain systems, which are implemented as
replicated state machines relying on atomic broadcast for consensus on an immutable chain of valid transactions. Sharding is to be understood broadly as techniques for dynamically partitioning nodes
in a blockchain system into subsets (shards) that perform storage, communication, and computation tasks without fine-grained synchronization with each other. Despite much recent research on sharding
blockchains, much remains to be explored in the design space of these systems. Towards that aim, we conduct a systematic analysis of existing sharding blockchain systems and derive a conceptual
decomposition of their architecture into functional components and the underlying assumptions about system models and attackers they are built on. The functional components identified are node
selection, epoch randomness, node assignment, intra-shard consensus, cross-shard transaction processing, shard reconfiguration, and motivation mechanism. We describe interfaces, functionality, and
properties of each component and show how they compose into a sharding blockchain system. For each component, we systematically review existing approaches, identify potential and open problems, and
propose future research directions. We focus on potential security attacks and performance problems, including system throughput and latency concerns such as confirmation delays. We believe our
modular architectural decomposition and in-depth analysis of each component, based on a comprehensive literature study, provides a systematic basis for conceptualizing state-of-the-art sharding
blockchain systems, proving or improving security and performance properties of components, and developing new sharding blockchain system designs.
Random beacons play a crucial role in blockchains. Most random beacons in a blockchain are performed in a distributed approach to secure the generation of random numbers. However, blockchain nodes
are in an open environment and are vulnerable to adversary reboot attacks. After such an attack, the number of members involved in a random number generation decreases. The random numbers generated
by the system become insecure. To solve this problem while guaranteeing fast recovery of capabilities, we designed a threshold signature scheme based on share recovery. A bivariate polynomial was
generated among the participants in the distributed key generation phase. While preserving the threshold signature key share, it can also help participants who lost their shares to recover. The same
threshold setting for signing and recovery guarantees the security of the system. The results of our scheme show that we take an acceptable time overhead in distributed key generation and
simultaneously enrich the share recovery functionality for the threshold signature-based random number generation scheme.
Consensus algorithms are the core of blockchain technology, which can cause nodes to reach consistency or liveness when there are Byzantine nodes in the network. The generation of public randomness
in decentralized networks has been significantly demanding and challenging in terms of the consensus mechanism. Previously, the multi-party random number generator (mRNG), which is a mechanism for
creating a single value from the contributions of multiple decentralized parties, was mainly designed based on verifiable random functions. In this study, we first construct novel, efficient
verifiable mRNG protocols from any one-way function. The protocols can achieve the properties of fairness, no trusted third party, public verifiability, and manipulation resistance. Subsequently, we
propose a delegated PoS (DPoS)-based consensus algorithm that adopts the verifiable mRNG. The new algorithm can solve the problem of low fairness caused by the artificial election of master nodes
using DPoS, while addressing the issue of manipulating the consensus process owing to the pseudo-random numbers generated by a traditional RNG, thereby improving the credibility of the consensus process.
Permissionless blockchain systems are highly dependent on probabilistic decision models, for example, the block addition process. If it were possible to use blockchain systems as pseudo-random number
generators, they could be used to select, for example, new block proposers. The first step in this process is to embed random number seeds in the blockchain for use in pseudo-random number
generation. This paper proposes transient random number seeds (TRNS), which produce random number seeds as part of each transaction. TRNS, belonging to each recipient in a transaction and are
confidential, tamper-resistant, unpredictable, collision-resistant, and publicly verifiable. TRNS enable recipients to produce pseudo-random numbers to participate in any process where the blockchain
system depends on random selection. The TRNS protocol is highly scalable with constant computational complexity and space complexity linear in the number of transactions per block.
Edge computing is an emerging computing paradigm which offers a great opportunity to implement data mining-based services and applications for the large numbers of devices and sensors in the Internet of Things. However, the new paradigm faces security and privacy challenges due to the diversity and the limited capability of edge components. In particular, data privacy is one of the foremost concerns for all the participants. In this paper, we propose a framework for privacy-preserving data mining based on private random decision trees in edge computing, which not only gives a strong privacy guarantee but also provides a useful amount of data utility. Firstly, we design a preservation framework to implement private random decision trees satisfying local differential privacy. Secondly, we present the concrete implementations of the algorithms and the corresponding task that each participant needs to undertake. Thirdly, we analyze the key factors that influence privacy and utility, including the allocation of data and privacy budget. Fourthly, we give improved algorithms to further increase the utility with strong privacy preservation. Finally, extensive experiments demonstrate the good performance of our designed framework.
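For intuition about the local differential privacy underpinning such a framework, the classic randomized-response mechanism is a useful reference point. This is a sketch with an illustrative ε and a bit-valued attribute, not the paper's algorithm:

```python
# Randomized response: each device perturbs a private bit locally so that the
# output satisfies eps-local differential privacy, yet the aggregator can
# still estimate the population mean. Illustrative parameters only.
import math, random

def randomized_response(bit: int, eps: float) -> int:
    p_truth = math.exp(eps) / (math.exp(eps) + 1)   # probability of honesty
    return bit if random.random() < p_truth else 1 - bit

def estimate_mean(reports, eps):
    p = math.exp(eps) / (math.exp(eps) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)       # unbias the estimate

random.seed(0)
true_bits = [1] * 7000 + [0] * 3000                 # true mean is 0.7
reports = [randomized_response(b, eps=1.0) for b in true_bits]
print(round(estimate_mean(reports, 1.0), 2))        # close to 0.7
```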
Diverse technologies, such as machine learning and big data, have been driving the prosperity of the Internet of Things (IoT) and the ubiquitous proliferation of IoT devices. Consequently, it is
natural that IoT becomes the driving force to meet the increasing demand for frictionless transactions. To secure transactions in IoT, blockchain is widely deployed since it can remove the necessity
of a trusted central authority. However, the mainstream blockchain-based IoT payment platforms, dominated by Proof-of-Work (PoW) and Proof-of-Stake (PoS) consensus algorithms, face several major
security and scalability challenges that result in system failures and financial loss. Among the three leading attacks in this scenario, double-spend attacks and long-range attacks threaten the
tokens of blockchain users, while eclipse attacks target denial of service. To defeat these attacks, a novel bidirectional-linked blockchain (BLB) using chameleon hash functions is proposed, where
bidirectional pointers are constructed between blocks. Furthermore, a new Committee Members Auction (CMA) consensus algorithm is designed to improve the security and attack resistance of BLB while
guaranteeing high scalability. In CMA, distributed blockchain nodes elect committee members through a verifiable random function. The smart contract uses Shamir’s Secret Sharing scheme to distribute
the trapdoor keys to committee members. To better investigate BLB’s resistance against double-spend attacks, an improved Nakamoto’s attack analysis is presented. In addition, a modified entropy
metric is devised to measure eclipse attack resistance across different consensus algorithms. Extensive evaluation results show the superior resistance against attacks and demonstrate high
scalability of BLB compared with current leading paradigms based on PoS and PoW.
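The chameleon hash primitive that enables the bidirectional links can be sketched with a toy discrete-log instance. Parameters are deliberately tiny and insecure, and this is not the paper's exact construction:

```python
# Toy chameleon hash H(m, r) = g^m * h^r mod p with h = g^x; the trapdoor x
# lets its holder find a collision for any new message.
p = 467                      # prime with q = 233 dividing p - 1
q = 233                      # prime order of the subgroup
g = 4                        # generator of the order-q subgroup
x = 57                       # trapdoor (secret key)
h = pow(g, x, p)

def chash(m: int, r: int) -> int:
    return (pow(g, m, p) * pow(h, r, p)) % p

m1, r1 = 101, 77
digest = chash(m1, r1)

# Trapdoor collision: choose r2 so that m2 + x*r2 = m1 + x*r1 (mod q).
m2 = 150
r2 = (r1 + (m1 - m2) * pow(x, q - 2, q)) % q
assert chash(m2, r2) == digest      # same hash, different message
```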
A reliable source of randomness is a critical element in many cryptographic systems. A public randomness beacon is a randomness source generated in a distributed manner that satisfies the following
requirements: Liveness, Unpredictability, Unbiasability and Public Verifiability. In this work we introduce HERB: a new randomness beacon protocol based on additively homomorphic encryption. We show that this protocol meets the requirements listed above and additionally provides Guaranteed Output Delivery. HERB has a modular structure with two replaceable modules: a homomorphic cryptosystem and a consensus algorithm. In our analysis we instantiate HERB using ElGamal encryption and a public blockchain. We implemented a prototype using the Cosmos SDK to demonstrate the simplicity and efficiency of our approach. HERB allows splitting all protocol participants into two groups that can relate in any way. This property can be used for building more complex participation and reward systems based on the HERB solution.
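For flavor, the additive homomorphism HERB exploits can be shown with "exponential" ElGamal, where ciphertexts multiply while plaintexts add. A toy sketch with insecure parameters; recovering m from g^m needs a small-range discrete-log search, omitted here:

```python
# Exponential ElGamal over a small prime group: Enc(m) = (g^k, g^m * y^k).
# Multiplying two ciphertexts component-wise adds the plaintexts.
import random

p = 2**31 - 1                        # prime modulus (illustrative, insecure)
g = 7
sk = random.randrange(2, p - 1)      # secret key
y = pow(g, sk, p)                    # public key

def enc(m: int):
    k = random.randrange(2, p - 1)
    return (pow(g, k, p), pow(g, m, p) * pow(y, k, p) % p)

def add(c1, c2):                     # homomorphic addition of plaintexts
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def dec_to_group(c):                 # recovers g^m, not m itself
    a, b = c
    return b * pow(a, p - 1 - sk, p) % p

c = add(enc(12), enc(30))
assert dec_to_group(c) == pow(g, 42, p)
```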
We give a simple and efficient construction of a verifiable random function (VRF) on bilinear groups. Our construction is direct. In contrast to prior VRF constructions [14,15], it avoids using an inefficient Goldreich-Levin transformation, thereby saving several factors in security. Our proofs of security are based on a decisional bilinear Diffie-Hellman inversion assumption, which seems reasonable given the current state of knowledge. For small message spaces, our VRF's proofs and keys have constant size. By utilizing a collision-resistant hash function, our VRF can also be used with
arbitrary message spaces. We show that our scheme can be instantiated with an elliptic group of very reasonable size. Furthermore, it can be made distributed and proactive.
This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes: a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetic, are provably secure under appropriate assumptions in the standard model.
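The scheme derived from this trapdoor is what is now known as Paillier encryption; a compact sketch with toy primes (no safe parameter choices) shows the mechanism and its additive homomorphism:

```python
# Toy Paillier encryption: Enc(m) = g^m * r^n mod n^2 with g = n + 1.
# Decryption uses L(u) = (u - 1) / n. Multiplying ciphertexts adds plaintexts.
import math, random

p, q = 293, 433                      # toy primes, insecure by design
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c):
    return L(pow(c, lam, n2)) * mu % n

c = enc(1234) * enc(4321) % n2       # homomorphic addition of plaintexts
assert dec(c) == 5555
```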
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2001. Includes bibliographical references (pp. 163–168).
This paper describes a new replication algorithm that is able to tolerate Byzantine faults. We believe that Byzantine-fault-tolerant algorithms will be increasingly important in the future because malicious attacks and software errors are increasingly common and can cause faulty nodes to exhibit arbitrary behavior. Whereas previous algorithms assumed a synchronous system or were too slow to be used in practice, the algorithm described in this paper is practical: it works in asynchronous environments like the Internet and incorporates several important optimizations that improve the response time of previous algorithms by more than an order of magnitude. We implemented a Byzantine-fault-tolerant NFS service using our algorithm and measured its performance. The results show that our service is only 3% slower than a standard unreplicated NFS.
Uniform randomness beacons whose output can be publicly attested to be unbiased are required in several cryptographic protocols. A common approach to building such beacons is having a number of parties run a coin tossing protocol with guaranteed output delivery (so that adversaries cannot simply keep honest parties from obtaining randomness, consequently halting protocols that rely on it). However, current constructions face serious scalability issues due to high computational and communication overheads. We present a coin tossing protocol for an honest majority that allows any entity to verify that an output was honestly generated by observing publicly available information (even after the execution is complete), while achieving both guaranteed output delivery and scalability. The main building block of our construction is the first publicly verifiable secret sharing scheme for threshold access structures that requires only O(n) exponentiations. Previous schemes required O(nt) exponentiations (where t is the threshold) from each of the parties involved, making them unfit for scalable distributed randomness generation, which requires t = n/2 and thus O(n²) exponentiations.
Bias-resistant public randomness is a critical component in many (distributed) protocols. Generating public randomness is hard, however, because active adversaries may behave dishonestly to bias
public random choices toward their advantage. Existing solutions do not scale to hundreds or thousands of participants, as is needed in many decentralized systems. We propose two large-scale
distributed protocols, RandHound and RandHerd, which provide publicly verifiable, unpredictable, and unbiasable randomness against Byzantine adversaries. RandHound relies on an untrusted client to
divide a set of randomness servers into groups for scalability, and it depends on the pigeonhole principle to ensure output integrity, even for non-random, adversarial group choices. RandHerd
implements an efficient, decentralized randomness beacon. RandHerd is structurally similar to a BFT protocol, but uses RandHound in a one-time setup to arrange participants into verifiably unbiased
random secret-sharing groups, which then repeatedly produce random output at predefined intervals. Our prototype demonstrates that RandHound and RandHerd achieve good performance across hundreds of
participants while retaining a low failure probability by properly selecting protocol parameters, such as a group size and secret-sharing threshold. For example, when sharding 512 nodes into groups
of 32, our experiments show that RandHound can produce fresh random output after 240 seconds. RandHerd, after a setup phase of 260 seconds, is able to generate fresh random output in intervals of
approximately 6 seconds. For this configuration, both protocols operate at a failure probability of at most 0.08% against a Byzantine adversary.
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide
part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer
network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest
chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not
cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes
can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
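The hash-based proof-of-work at the heart of this design is easy to sketch; the difficulty and block contents below are made up:

```python
# Toy proof-of-work: find a nonce so that SHA-256(header || nonce) starts
# with a given number of zero hex digits. The difficulty here is tiny.
import hashlib

def mine(header: bytes, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(header: bytes, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
    return digest.startswith("0" * difficulty)

header = b"prev-hash|merkle-root|timestamp"
nonce = mine(header)                 # expensive to find...
assert verify(header, nonce)         # ...cheap to verify
```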
In a (t, n) threshold digital signature scheme, t out of n signers must co-operate to issue a signature. We present an efficient and robust (t, n) threshold version of Schnorr’s signature scheme. We
prove it to be as secure as Schnorr’s signature scheme, i.e., existentially unforgeable under adaptively chosen message attacks. The signature scheme is then incorporated into a (t,n) threshold
scheme for implicit certificates. We prove the implicit certificate scheme to be as secure as the distributed Schnorr signature scheme.
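The underlying single-signer Schnorr scheme that the (t, n) version distributes is compact enough to sketch over a toy group (insecure parameters, for illustration only):

```python
# Plain Schnorr signatures over a small prime-order group; the (t, n)
# threshold variant shares the secret x and assembles s jointly.
import hashlib, random

p, q, g = 467, 233, 4                # g generates the order-q subgroup of Z_p*

x = random.randrange(1, q)           # secret key
y = pow(g, x, p)                     # public key

def H(*parts) -> int:
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def sign(msg: str):
    k = random.randrange(1, q)       # per-signature nonce
    r = pow(g, k, p)
    e = H(r, msg)
    s = (k + x * e) % q
    return e, s

def verify(msg: str, sig) -> bool:
    e, s = sig
    r = pow(g, s, p) * pow(y, q - e, p) % p   # g^s * y^(-e) = g^k
    return H(r, msg) == e

assert verify("hello", sign("hello"))
```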
A publicly verifiable secret sharing (PVSS) scheme is a verifiable secret sharing scheme with the property that the validity of the shares distributed by the dealer can be verified by any party;
hence verification is not limited to the respective participants receiving the shares. We present a new construction for PVSS schemes, which compared to previous solutions by Stadler and later by
Fujisaki and Okamoto, achieves improvements both in efficiency and in the type of intractability assumptions. The running time is O(nk), where k is a security parameter, and n is the number of
participants, hence essentially optimal. The intractability assumptions are the standard Diffie-Hellman assumption and its decisional variant. We present several applications of our PVSS scheme,
among which is a new type of universally verifiable election scheme based on PVSS. The election scheme becomes quite practical and combines several advantages of related electronic voting schemes,
which makes it of interest in its own right.
An encryption method is presented with the novel property that publicly revealing an encryption key does not thereby reveal the corresponding decryption key. This has two important consequences: 1. Couriers or other secure means are not needed to transmit keys, since a message can be enciphered using an encryption key publicly revealed by the intended recipient. Only he can decipher the message, since only he knows the corresponding decryption key. 2. A message can be "signed" using a privately held decryption key. Anyone can verify this signature using the corresponding publicly revealed encryption key. Signatures cannot be forged, and a signer cannot later deny the validity of his signature. This has obvious applications in "electronic mail" and "electronic funds transfer" systems. A message is encrypted by representing it as a number M, raising M to a publicly specified power e, and then taking the remainder when the result is divided by the publicly specified product, n, of two large secret prime numbers p and q.
Introduction to Cryptography with Coding Theory
A new signature scheme is proposed, together with an implementation of the Diffie-Hellman key distribution scheme that achieves a public key cryptosystem. The security of both systems relies on the
difficulty of computing discrete logarithms over finite fields.
Elgamal encryption using elliptic curve cryptography
Rosy Sunuwar and Suraj Ketan Samal. Elgamal encryption using elliptic curve cryptography. Cryptography and Computer Security, University of Nebraska, Lincoln, 2015.
Distributed cryptography based on the proofs of work
Marcin Andrychowicz and Stefan Dziembowski. Distributed cryptography based on the proofs of work. IACR Cryptology ePrint Archive, 2014:796, 2014.
Dfinity technology overview series, consensus system
Timo Hanke, Mahnush Movahedi, and Dominic Williams. Dfinity technology overview series, consensus system. arXiv preprint arXiv:1805.04548, 2018.
Short signatures from the Weil pairing
Dan Boneh, Ben Lynn, and Hovav Shacham. Short signatures from the Weil pairing. In International Conference on the Theory and Application of Cryptology and Information Security, pages 514–532. Springer, 2001.
Randao: Verifiable random number generation
Randao.org. Randao: Verifiable random number generation. 2017.
A fully homomorphic encryption scheme
Craig Gentry. A fully homomorphic encryption scheme. PhD thesis, Stanford University, 2009.
Performance based comparison study of RSA and elliptic curve cryptography
Rounak Sinha, Hemant Kumar Srivastava, and Sumita Gupta. Performance based comparison study of RSA and elliptic curve cryptography. International Journal of Scientific & Engineering Research, 4(5):720–725, 2013.
{"url":"https://www.researchgate.net/publication/336207776_Scalable_Distributed_Random_Number_Generation_Based_on_Homomorphic_Encryption","timestamp":"2024-11-05T08:02:23Z","content_type":"text/html","content_length":"610793","record_id":"<urn:uuid:da9ecd32-c49e-49cb-8b13-d1914d8ff11b>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00772.warc.gz"}
Computational models for simulations of lithium-ion battery cells under constrained compression tests

Mohammed Yusuf Ali, Wei-Jen Lai, Jwo Pan

Department of Mechanical Engineering, The University of Michigan, Ann Arbor, MI 48109, USA
Department of Materials Science and Engineering, The University of Michigan, Ann Arbor, MI 48109, USA
Highlights
Develop computational models for simulations of lithium-ion battery cells.
Model the multi-scale buckling of lithium-ion battery cells.
Model the formation of kinks and shear bands in lithium-ion battery cells.
Model the buckling of cover sheets and justify the length selection of cell specimens.
Model the void compaction, plastic deformation and load-displacement curves.
Article info
Article history:
Received 15 February 2013
Received in revised form 4 May 2013
Accepted 8 May 2013
Available online 22 May 2013
Keywords:
Lithium-ion battery
Representative volume element
Mechanical behavior of pouch cell battery
Kink formation
Shear band formation
Computational models
Abstract
In this paper, computational models are developed for simulations of representative volume element (RVE) specimens of lithium-ion battery cells under in-plane constrained compression tests. For cell components in the finite element analyses, the effective compressive moduli are obtained from in-plane constrained compressive tests, the Poisson's ratios are based on the rule of mixture, and the stress–plastic strain curves are obtained from the tensile tests and the rule of mixture. The Gurson's material model is adopted to account for the effect of porosity in separator and electrode sheets. The computational results show that the computational models can be used to examine the micro buckling of the component sheets, the macro buckling of the cell RVE specimens, and the formation of the kinks and shear bands observed in experiments, and to simulate the load–displacement curves of the cell RVE specimens. The initial micro buckling mode of the cover sheets in general agrees with that of an approximate elastic buckling solution. Based on the computational models, the effects of the friction on the deformation pattern and void compaction are identified. Finally, the effects of the initial clearance and biaxial compression on the deformation patterns of the cell RVE specimens are demonstrated.
© 2013 Elsevier B.V. All rights reserved.
1. Introduction
Lithium-ion batteries have been considered as the solution for electric vehicles for the automotive industry due to their light weight and high energy density. The major design considerations of lithium-ion batteries involve electrochemistry, thermal management and mechanical performance. The electrochemistry has been widely studied since it directly determines the battery performance and its life cycle. Different active materials on electrodes give different types of lithium-ion batteries. However, the basic chemical reactions of the cells are similar. For automotive applications, the mechanical performance is of great importance for crashworthiness analyses. Research works were conducted on the safety performance of the battery cells under mechanical tests such as nail penetration tests, round bar crush tests, and pinch tests, for example, see Refs. [1–5]. Many research works were also conducted to understand and model the phenomena related to intercalation induced stresses, diffusion, debonding, cracking, and the effect of the coatings due to reaction in the lithium-ion batteries, for example, see Refs. [6–13]. However, these research works mainly focused on electrodes or separators and understandably do not cover the global mechanical behavior of battery cells, modules and packs.
Sahraei et al. [14] conducted a series of mechanical tests and
computational modeling works on commercial LiCoO2 cells used for cell phones. The results indicate that the compressive
mechanical behavior is characterized by the buckling and densification of the cell components. Other available testing and modeling works were also conducted on commercial LiCoO2 cylindrical or prismatic battery cells [15,16]. However, this information is of limited use for researchers to model the mechanical performance of automotive high-voltage LiFePO4 battery cells and modules for crashworthiness analyses. Sahraei et al. [14] indicated that the computational effort is quite significant to model the local buckling phenomenon of battery cells under in-plane compression. Therefore, macro homogenized material models of the representative volume elements (RVEs) for both the battery cells and modules have to be developed for crashworthiness analyses with sacrifice of the accuracy at the micro scale. Other than dealing with the multi-physics problem, one of the challenges of developing the computational models for the battery behavior is to deal with different models at different length scales as indicated in Ref. [14]. Therefore, understanding the basic mechanical behavior of the lithium-ion batteries for automotive applications is very important to develop macro homogenized material models for representative volume elements (RVEs) of cells and modules for efficient crashworthiness analyses.
Recently, Lai et al. [17,18] investigated the mechanical behaviors of lithium–iron phosphate battery cells and modules by conducting
tensile tests of individual cell and module components, constrained
compression tests of RVE specimens of dry cells and modules, and a
punch test of a small-scale dry module specimen. Their results of
in-plane tensile tests of the individual cell components indicate
that the active materials on electrodes have a very low tensile load
carrying capacity. For in-plane constrained compression tests of cell
RVE specimens, the results indicate that the load carrying behavior
of cell RVE specimens is characterized by the buckling of cells with
a wavelength approximately in the order of the thickness of the
cells and the final densification of the cell components. They also
tested module RVE specimens with different heights and the results
indicate that the load carrying behavior of module RVE specimens
is also characterized by the buckling of cells with a wavelength
approximately in the order of the thickness of the cells and the final
densification of the module components but relatively indepen-
dent of the height of the tested specimens. In addition, they
investigated the effects of adhesives between cells and foam/
aluminum heat dissipater sheets on the mechanical behavior of
module RVE specimens. The results indicate that the adhesive
Nomenclature

kink angle
shear band angle
initial shear band angle
final shear band angle
shear band height
d  kink length
w  cell thickness
Φ  Gurson yield function for porous materials
f  void volume fraction; it is defined as the ratio of the volume of voids to the total volume of the material
f0  initial void volume fraction
r  relative density of a material; it is defined as the ratio of the volume of matrix material to the total volume of the material
q  effective macroscopic Mises stress
p  macroscopic hydrostatic pressure
flow stress
yield stress
average equivalent plastic strain
q1, q2 and q3  fitting parameters
S  deviatoric part of the macroscopic Cauchy stress tensor
Σ  macroscopic Cauchy stress tensor
νi  Poisson's ratio of the i-th component for rule of mixture (ROM)
Vi  volume fraction of the i-th component for ROM
adjusted flow stress of the matrix
PEEQ  equivalent plastic strain
RVE  representative volume element
n  number of waves
m  number of half waves
l  length of the cell
k  spring constant of the elastic foundation on one side of the beam, defined as the lateral force per unit plate length per unit deflection of the neighbor components in the out-of-plane direction
E  modulus of elasticity
b  width of the specimen
h  thickness of the neighbor components
potential energy of the system
I  moment of inertia
v  deflection of the beam in the y direction
P  compressive force
thickness of the cover sheet
thickness of the anode
thickness of the cathode
thickness of the separator
thickness of the foundation (neighbor components)

Fig. 1. A schematic view of the approaches for computational model developments.
slightly increases the compressive load carrying capacity of the
module RVE specimens. Their SEM images of the active materials
on electrodes and the results of in-plane compressive and out-of-
plane compressive tests suggest the total volume fraction is up to
40% for the microscopic gaps between cell components and the
porosity of the separators and the active materials on electrodes.
Based on the compressive nominal stress–strain curves in the in-plane and out-of-plane directions, their work suggests that the lithium-ion battery cells and modules can be modeled as anisotropic foams or cellular materials.
The current study is focused on developing the computational
models for simulations of RVE specimens of lithium-ion battery
cells under in-plane constrained compression tests based on the
work of Lai et al. [17] and then comparing the computational results
with those of the tests. Fig. 1 shows a schematic view of the ap-
proaches of the developments of the computational models. Two
approaches are used for the modeling of these battery cells and
modules: a detailed model (micro approach) and a less detailed
model (macro approach). This investigation will focus on the
detailed modeling of a cell RVE specimen of lithium-ion batteries.
In the detailed model, the pouch cell battery is modeled as a layered
composite and the RVE material nominal stress–strain response is
obtained based on the properties of the cell components of layered
anode, cathode, separator and cover sheets. The less detailed
models were investigated in a companion study [19] to address the
length scale issue in mechanical modeling of the batteries. In those
less detailed models, a small-scale battery module was considered
as a homogenized material based on the response of the physical
testing of the module RVE specimens [18]. Both approaches are
useful to investigate the mechanical behavior of lithium-ion pouch
cell batteries and modules.
The purpose of this detailed model investigation is twofold: one
is to enhance understanding of the mechanical behavior of lithium-
ion battery cells used for automotive applications and the other is
to pave the groundwork for the development of user material
models to represent the battery cells and modules by homogenized
materials which are a subject of the future research. Finite element
models can be used to simulate the tensile tests for multi-layered
cell and module RVE specimens. However, a simple estimation
scheme for tensile behavior is presented in Ref. [18] based on the
rule of mixture (ROM) for composite and thus the tensile behavior
of battery cells will not be addressed here. In this investigation, the
compressive behavior of cell RVE specimens under quasi-static in-
plane compression tests is investigated using the ABAQUS explicit
finite element solver [20]. In this paper, the experimental results for
cell RVE specimens under in-plane compression tests are first
reviewed briefly for understanding the physical deformation
pattern of the porous cell RVE specimens. Next, the Gurson’s model
for porous material is presented for characterization of the sepa-
rator and the electrodes with the active materials. Then the avail-
able material data are discussed and adopted for the input of the
computational model. The details of the computational model are
presented. The computational results of the deformation pattern
and nominal stress–strain behavior are then compared with the
test results. Based on the detailed computational results, the micro
buckling modes of the component sheets are identified and an
approximate elastic buckling solution of a beam with a rigid
boundary on one side and an unattached elastic foundation on the
other side is developed and used to examine the micro buckling
mode of the cover sheets for justification of the selection of the
length of the cell RVE specimens used in the tests in Ref. [17]. Based
on the computational model, the effects of the friction between the
Fig. 2. A schematic of (a) a pouch cell and (b) a cell RVE specimen with the dimensions (25 mm × 25 mm × 4.642 mm), and (c) a side view of a small portion of a ten-unit cell RVE specimen showing the individual cell components (cover sheets, separators, cathodes and anodes). The large arrows indicate the compressive direction.
cell components and the constrained surfaces on the deformation
pattern, plastic deformation, void compaction, and the load–displacement curve are examined. The usefulness of the compu-
tational model is then presented by further exploring the effects of
the initial clearance and biaxial compression on the deformation
patterns of cell RVE specimens. Finally, some conclusions are made.
2. Experiment
A detailed description of the structure of a lithium-ion battery
module used for this investigation can be found in Refs. [17,18]. Also
note that the following definitions will be used throughout the
paper. A single-unit cell represents a basic cell containing one cathode, one anode and a separator sheet, with two aluminum cover sheets and two accompanying separator sheets. A ten-unit cell consists of ten basic cells containing ten cathode, ten anode, twenty-one separator and two aluminum cover sheets. In this investigation, the ten-unit cell is considered as a general cell RVE specimen that represents a typical assembled pouch cell.
Each cell consists of five major components: cover sheet, anode,
cathode, separator and electrolyte. Since the electrolyte is difficult
to handle during assembly due to the safety concern, all the cell and
module RVE specimens tested in this study were made without
electrolyte at the University of Michigan. Fig. 2(a) shows a sche-
matic of a pouch cell with two cover sheets and a cell RVE specimen
with the xeyezcoordinate system. A cell RVE specimen with the
dimensions is shown in Fig. 2(b). The pouch cell has a layered
structure as schematically shown in Fig. 2(b). The z-coordinate is
referred to as the out-of-plane coordinate whereas the xand y
coordinates are referred to as the in-plane coordinates. Fig. 2(c)
shows a side view of a small portion of a ten-unit cell RVE spec-
imen showing the individual cell components. The assembly of the
cell components in the generic cell RVE specimen as schematically
shown in Fig. 2(c) may be slightly different from those in usual
lithium-ion cells for convenience of assembly of the purchased cell
components. However, generic cell RVE specimens with slightly
different assemblies should have the similar buckling, kink and
shear band mechanisms under constrained compression as dis-
cussed later due to their layered structures. Constrained compres-
sion tests were conducted for cell RVE specimens with the
dimension of 25 mm × 25 mm × 4.642 mm. The details of the test
setup and results of the in-plane constrained compression tests are
discussed in Ref. [17] and are briefly reviewed in the following.
Fig. 3 shows three nominal compressive stress–strain curves of the cell RVE specimens tested at a displacement rate of 0.5 mm min⁻¹. The specimens showed almost a linear behavior in the beginning with an effective elastic modulus of 188 MPa. Note that the effective elastic modulus obtained from the composite ROM is 190 MPa using the effective elastic moduli obtained from the nominal stress–strain curves of cell components under in-plane constrained compression tests. When the strain reaches
about 2%, noticeable change of the slope takes place and the curves
continue to increase gradually up to the strains of 34%. Some minor
drops were observed during the stage after the linear region due to
the development of kinks and shear bands as shown in the defor-
mation patterns recorded as discussed later. The trends of all three
curves are quite consistent.
Figs. 4(a)–(d) show the deformation patterns of a cell RVE
specimen at the nominal strain of 1% in the initial linear stage, at
the nominal strain of 2% where the slope changes, and at the
nominal strains of 10% and 15%. Figs. 4(e) and (f) show the front and
back views of the tested cell RVE specimen at the nominal strain of
34%. A careful examination of the deformation pattern shown in
Fig. 4(a) indicates the initial linear stage corresponds to the
development of smooth buckling for the cell components. As the
displacement increases toward the nominal strain of 2% where the
slope starts to level off, the cell RVE specimen shows the devel-
opment of kinks or plastic hinges of the cell components against
the walls, as indicated in Fig. 4(b). The presence of the kinks pro-
motes the macroscopic shear band formation (strain localization in
a narrow zone), as indicated in Fig. 4(b). The shear band formation
creates a physical mechanism to accommodate efficiently for the
compression displacement and hence induces the load drop. As the
strain continues to increase, more kinks and shear bands form
across the cell RVE specimen as shown in Figs. 4(c) and (d).
Figs. 4(e) and (f) show the front and back views of the tested cell
RVE specimen at the nominal strain of about 34%. As shown in the
figures, the kinks are fully developed as folds and many shear bands
can be identified. After the efficient compaction mechanism of
shear band formation has been completed, further compression can
only be accommodated by the micro buckling of the cell compo-
nents outside the shear band regions, as marked in Figs. 4(e) and (f),
and the compression in the shear band regions.
An idealized deformation process of the cell RVE specimen un-
der an in-plane constrained compression test is proposed in
Ref. [17] to explain the shear band formation and is briefly reviewed
here. Figs. 5(a)–(c) show schematics of a cell RVE specimen before, during, and after the shear band formation under in-plane constrained compression, respectively. Figs. 5(d)–(f) show the detailed schematics of the shear band formation corresponding to Figs. 5(a)–(c), respectively. As shown in Fig. 5(a), the cell RVE
specimen (shown in gray) forms shear bands (between two parallel
dashed lines) to accommodate the volumetric reduction under
constrained compression. During the deformation, the kink angle (as shown in Fig. 5(e)) keeps decreasing from 90° toward zero, while the shear band angle (also shown in Fig. 5(e)) decreases from its initial to its final value by a small amount. In the shear band, the cell components are subjected to a compressive strain in the z′ direction, a significant amount of shear strain in the y′–z′ plane and a significant amount of rotation. Here, y′ and z′ are the local material coordinates that are fixed to the material. Outside of the shear band, the cell components are subjected to compressive strains in the y and z directions. Once the kink angle reaches zero, further compressive strains are achieved by the micro buckling of the cell components outside of the shear bands and by the void reduction and shear in the shear band as the shear band angle continues to decrease. This is illustrated
Fig. 3. Nominal compressive stress–strain curves (nominal stress in MPa vs. nominal strain, with the corresponding load in N vs. displacement in mm) of three cell RVE specimens (test results 1–3) tested at a displacement rate of 0.5 mm min⁻¹ (nominal strain rate of 0.0003 s⁻¹).
in Figs. 5(c) and (f). It should be noted that Figs. 5(a)–(f) are
idealized with the periodic shear band structures. In tests, the shear
bands do not form at the same time and the shear band angle
varies due to the imperfections of the specimens.
3. Gurson’s yield function for porous materials
The anode and cathode for this investigation are graphite coated on copper foil and LiFePO4 coated on aluminum foil, respectively. The copper foil has a thickness of 9 μm and the total thickness of the anode sheet is 0.2 mm. The aluminum foil has a thickness of 15 μm and the total thickness of the cathode sheet is 0.2 mm. Both the anode and cathode sheets are double-side coated. The separator is made of polyethylene with a porosity ranging from 36 to 44% and a thickness from 16 to 25 μm according to the manufacturer.
Fig. 6 shows SEM images of the graphite and LiFePO4 on the anode and cathode sheets, respectively, as reported in Lai et al. [17]. It
should be noted that both active materials on electrodes are in a
powder form held together by the binder and therefore possess a
high degree of porosity as seen in the SEM images. It is not the
intent of this paper to characterize the composition and
morphology of the active materials on the electrodes. For example,
the porosities of the active materials on the current collectors as
shown in Figs. 6(a) and (b) are difficult to measure and characterize.
However, the electrodes with the porous active materials can be
computationally treated as homogenized porous sheets. The results
of the tensile tests of the cell component sheets will be used to
determine approximately the plastic parts of the stressestrain
curves of the component sheets as homogenized materials, and the
results of the constrained compression tests of the cell component
specimens will be used to determine approximately the compres-
sive moduli of the component sheets as homogenized materials as
detailed later in this paper and in Lai et al. [17].
It should be mentioned that the electrode sheets are idealized as
homogenized porous sheets in the micro computational models
Fig. 4. Deformation patterns of a cell RVE specimen during a compression test at the displacement rate of 0.5 mm min⁻¹: (a) at the nominal strain of 1% in the initial linear stage, (b) at the nominal strain of 2% where the slope changes, (c) at the nominal strain of 10%, (d) at the nominal strain of 15%, (e) at the nominal strain of 34% after the test (front view), and (f) at the nominal strain of 34% after the test (back view).
used for the cell RVE specimens in this paper and the computational
effort is quite extensive. It is possible to consider the active materials
on the current collectors as particles for the electrode sheets in
computational models at smaller scales. However, this paper is
focused on development of computational models at the scales of
homogenized cell component sheets, and development of compu-
tational models at smaller scales is out of the scope of this paper. The
Gurson’s model for porous materials is adopted to model the elec-
trode sheets with the active materials as homogenized materials in
the finite element analyses. Also, the separator sheets used in the
cell RVE specimens are manufactured with a high degree of porosity
to hold electrolyte. Therefore, the Gurson’s model for porous
materials is also adopted to model the separator sheets as homog-
enized materials in the finite element analyses. A brief description of
the Gurson’s model [20] is presented in the following.
Fig. 5. Schematics of a cell RVE specimen (a) before, (b) during, and (c) after in-plane constrained compression. (d)–(f) are detailed schematics showing the shear band formation corresponding to (a)–(c), respectively. The y and z coordinates are the global coordinates and the y′ and z′ coordinates in (d)–(f) are the local material coordinates rotating with the cell components.

Fig. 6. SEM images of (a) graphite and (b) LiFePO4 on the anode and cathode sheets, respectively.

Fig. 7. (a) A schematic of the Gurson's yield contour in the normalized hydrostatic pressure (p)–Mises stress (q) plane for several values of the void volume fraction f (f = 0 recovers the Mises contour; contours for f = 0.1, 0.2 and 0.4 are shown) [20]. (b) A schematic of the uniaxial behavior in tension and compression of a porous material with a perfectly plastic matrix material and the initial void volume fraction f0.

Gurson [21] proposed a yield function Φ for porous materials containing a small volume fraction of voids. In porous materials, the void volume fraction f is defined as the ratio of the volume of voids to the total volume of the material. The relative density of a material, r, defined as the ratio of the volume of matrix material to the total volume of the material, can also be used. Note that f = 1 − r.
The Gurson’s yield function
was later modified by Tvergaard [22]
to the form
fcos hq
¼0 (1)
where q¼ð3S:S=2Þ
represents the effective macroscopic
Mises stress, p(¼S:I/3) represents the macroscopic hydrostatic
represents the flow stress of the matrix material, which
is expressed as a function of the average equivalent plastic strain
of the matrix for strain hardening materials, and q
and q
the fitting parameters. Here, Srepresents the deviatoric part of the
macroscopic Cauchy stress tensor S. The macroscopic Cauchy stress
Sis based on the current configuration of a material element with
voids. For f¼0(r¼1), the material is fully dense, and the Gurson’s
yield function reduces to the Mises yield function. The model
generally gives physically reasonable results only for f<0.1
(r>0.9). Tvergaard [22] introduced the fitting constants q
to fit the numerical results of shear band instability in square
arrays of cylindrical holes and axisymmetric spherical voids. One
can recover the original Gurson’s yield function by setting up
¼1. In the current investigation, q
¼1.5, q
¼1, and
¼2.25 [20].
Fig. 7(a) shows a schematic of the Gurson’s yield contour in the
normalized hydrostatic pressure (p)eMises stress (q) plane for
porous materials in comparison with that of the Mises material
model. The porous material model reduces to the Mises material
model as the void volume fraction freduces to zero. Fig. 7(b) shows
a schematic of the uniaxial behavior of a porous material with a
perfectly plastic matrix material and the initial void volume frac-
tion f
. Here the yield stress is denoted as
. The porous material
softens in tension and hardens in compression. The porous material
hardens in compression due to the reduction of the void volume
fraction. Phenomenological hyperfoam and crushable foam mate-
rial models are available in ABAQUS. However, more material input
data are needed for these foam models and additional material data
for the cell components are not available. Therefore, the Gurson’s
material model is adopted here for modeling the separator and the
electrodes with the active materials.
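As a numerical aid, equation (1) can be evaluated directly; the sketch below uses the fitting constants quoted above, while the stress state and matrix flow stress are invented numbers:

```python
# Evaluate the Tvergaard-modified Gurson yield function of equation (1).
# q1, q2, q3 follow the values used in the paper; the stress state and the
# matrix flow stress are illustrative only.
import math

def gurson_phi(q, p, f, sigma_m, q1=1.5, q2=1.0, q3=2.25):
    """Yield function value: negative inside the yield surface, zero on it."""
    return ((q / sigma_m) ** 2
            + 2.0 * q1 * f * math.cosh(-1.5 * q2 * p / sigma_m)
            - (1.0 + q3 * f ** 2))

sigma_m = 10.0                                            # MPa, invented
print(gurson_phi(q=6.0, p=2.0, f=0.2, sigma_m=sigma_m))   # porous material
print(gurson_phi(q=6.0, p=2.0, f=0.0, sigma_m=sigma_m))   # f = 0: Mises case
```

As the printed values suggest, porosity shrinks the elastic domain: the same stress state sits closer to yield for f = 0.2 than for the fully dense Mises material.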
4. Available material data for cell components
Tensile tests were conducted for the individual cell compo-
nents such as anode, cathode, separator and cover sheets, and the
test results were discussed in detail in Lai et al. [18]. In-plane
constrained compression tests were also conducted for the
anode, cathode, separator, and cover sheets to estimate the
compressive elastic moduli, and the test results were discussed in
detail in Lai et al. [17]. Although these tests are constrained
compression tests, only the apparent elastic part of the stress–strain responses appears to be useful to obtain the effective
compressive elastic moduli for individual components. The effec-
tive compressive elastic moduli may account for the local micro
buckling that occurs at a very small load level for each component
sheet and has an indistinguishable impact on the measurable
macroscopic response. Due to the compressive loading of the cell
RVE specimens, the effective compressive elastic moduli are thus
used for the electrodes, separator and cover sheets in the finite
element analyses of the cell RVE specimens under constrained
compression tests.
For elastic–plastic materials, the plastic strain hardening behavior is essential as input for elastic–plastic finite element analyses. For the current investigation, the elastic–plastic tensile stress–strain data for the components obtained in Ref. [18] are used to define the strain hardening behavior, due to the difficulty of obtaining such data under uniaxial 'unconstrained' compression tests. For the ABAQUS solver, the tensile test data must be converted to the true stress and true strain format for elastic–plastic finite element analyses. There is no simple way to convert the engineering stress–strain curves of the anode, cathode and separator sheets with high porosity. The conversion to the true stress–strain curve is based on the usual assumption of plastic incompressibility for metal plasticity, for lack of detailed information on the microstructure of the anode, cathode and separator. With the composite rule of mixture for the void and matrix and the assumption of the constant total volume of the void and matrix, the engineering stress–strain curve is converted to the true stress–strain curves. The anode and cathode fail at very low strains. The separator is very thin and is expected not to contribute significantly to the overall load carrying capacity of cells and modules. Therefore, the conversion to the true stress–strain curve with plastic incompressibility seems to be a reasonable option for lack of further information.
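The incompressible conversion invoked here is simple to state in code; a sketch (the sample data points are invented, merely of the magnitude of the measured curves):

```python
# Convert engineering stress-strain data to true stress-strain under the
# usual plastic incompressibility assumption: constant volume implies
# sigma_true = sigma_eng * (1 + eps_eng) and eps_true = ln(1 + eps_eng).
import math

def to_true(eng_strain, eng_stress):
    true_strain = [math.log(1.0 + e) for e in eng_strain]
    true_stress = [s * (1.0 + e) for e, s in zip(eng_strain, eng_stress)]
    return true_strain, true_stress

# Invented sample points, roughly the magnitude of a cover-sheet curve:
eng_strain = [0.00, 0.05, 0.10, 0.20]
eng_stress = [0.0, 100.0, 120.0, 130.0]     # MPa
print(to_true(eng_strain, eng_stress))
```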
The tensile and effective compressive moduli, tensile yield stress
and Poisson’s ratio of the cell components are listed in Table 1. For
the linear part of each stressestrain curve, the modulus is calcu-
lated based on each data point with respect to the origin of the
stressestrain curve. A stable average value for a range of the strain
of the apparent linear behavior is selected as the tensile modulus
for that specific material. The yield stresses for the materials of the
cell components are selected where the stresses deviate from the
apparent linear ranges. Poisson’s ratios of 0.33 for copper, 0.33 for
aluminum, 0.45 for polymer and 0.2 for the active layers are used to
obtain the effective Poisson’s ratio for the separator, anode, cath-
ode, and cover sheets using the composite rule of mixture (ROM).
The effect of the void volume fraction on the Poisson’s ratio is
estimated by treating the void as a component with zero Poisson’s
ratio using the ROM as

\[
\nu = \sum_i \nu_i V_i \tag{2}
\]

where \(\nu_i\) and \(V_i\) are the Poisson's ratio and the volume fraction of the i-th component, respectively. Table 1 lists the Poisson's ratios
Table 1
Material properties used in the finite element analyses.

Component    Tensile modulus (MPa)  Effective compressive modulus (MPa)  Tensile yield stress (MPa)  Poisson's ratio
Anode        4700                   83                                   2.11                        0.21 (f = 0%); 0.17 (f = 20%); 0.13 (f = 40%)
Cathode      5100                   275                                  1.48                        0.21 (f = 0%); 0.17 (f = 20%); 0.14 (f = 40%)
Separator    500                    90                                   10.53                       0.25 (f = 44%)
Cover sheet  5600                   575                                  9.74                        0.41

Table 2
Thicknesses and densities of the battery cell components.

Component            Thickness, mm  Density, kg m⁻³
Anode, graphite/Cu   0.2            934
Cathode, LiFePO4/Al  0.2            1712
Separator            0.02           795
Cover sheet          0.111          1338
for the cell components for different values of the initial void vol-
ume fraction f. The Poisson’s ratios listed for anode and cathode in
Table 1 correspond to the assumed void volume fraction f listed in the parentheses. It should be noted that the void volume
fractions of the active materials on electrodes are difficult to
measure due to the fact that the graphite and lithium iron
phosphate particles are loosely bonded together by a weak binder.
The void volume fraction 44% of the separator provided by the
manufacturer is adopted here. The thicknesses and the densities of
the cell components are also listed in Table 2.
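The rule-of-mixture figure quoted earlier (about 190 MPa for the cell RVE, versus 188 MPa measured) can be reproduced from Tables 1 and 2; a quick check, with layer counts following the ten-unit cell description:

```python
# Reproduce the ROM effective compressive modulus of the ten-unit cell RVE
# from the component moduli (Table 1) and thicknesses (Table 2).
layers = [
    # (count, thickness mm, effective compressive modulus MPa)
    (10, 0.200,  83.0),   # anode
    (10, 0.200, 275.0),   # cathode
    (21, 0.020,  90.0),   # separator
    ( 2, 0.111, 575.0),   # cover sheet
]

total_t = sum(n * t for n, t, _ in layers)              # 4.642 mm
E_eff = sum(n * t * E for n, t, E in layers) / total_t
print(round(total_t, 3), round(E_eff))                  # 4.642 mm, ~190 MPa

# Equation (2) works the same way for the effective Poisson's ratio:
# nu_eff = sum(V_i * nu_i), with the void treated as a phase with nu = 0.
```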
Figs. 8(a) and (b) show the representative tensile nominal and
true stress–strain data of the cell components, respectively, and Fig. 8(c) shows the stress–plastic strain curves of the cell components used in the finite element analyses. For the electrodes, the stress–plastic strain curves are provided up to the strain of failure
in the tests. In ABAQUS, the stresses are kept constant outside the
input strain range. In this case, this represents a perfectly plastic
extension of the stresseplastic strain curves of the electrodes since
they are apparently flat near the failure strains. This automatic
extension of the input material data is used to avoid numerical is-
sues that may arise due to error tolerance check used in regular-
izing the user-defined data in ABAQUS/explicit code.
As mentioned earlier for the micro and macro buckling analyses,
the compressive elastic moduli of the cell components are used and
are listed in Table 1. The cover sheet is modeled as the Mises material with the isotropic hardening rule of ABAQUS and the stress–plastic strain curve shown in Fig. 8(c) is used. For the separator sheets, the initial void volume fraction f is set at 0.44 based on the manufacturer specification. The plastic behaviors for the cell components are provided based on the stress–plastic strain curves shown in Fig. 8(c). The strain hardening behavior for the matrix material is obtained by scaling the tensile stress–plastic strain curves using the ROM as

\[
\bar{\sigma}_a = \frac{\bar{\sigma}}{1 - f} \tag{3}
\]

where \(\bar{\sigma}\) and \(\bar{\sigma}_a\) represent the flow stress and adjusted flow stress of the matrix, respectively. The stress–plastic strain curve based on equation (3) will be referred to as the 'adjusted strain hardening curve'. It should be mentioned again that the stress–plastic strain
curves are estimations based on the available information. Many
assumptions are made to obtain these curves. One goal of this
investigation is to understand the physical mechanisms of these
RVE specimens under constrained compressive tests. The details of
the various aspects of compression tests simulations for cell RVE
specimens are described in the following sections.
5. Computational models
The results of the compression tests of cell RVE specimens show that the layers of the RVE specimens are deformed by a multi-scale buckling phenomenon: both layer micro buckling and global macro buckling. Ideally, in a confined space with no clearance and only with the presence of the porosity in the separator and the active materials on the electrodes, the dense parts of the layers (copper foils in anodes, aluminum foils in cathodes, and aluminum and polymers in cover sheets) would get the room for buckling only by compressing the relatively softer and porous active materials on electrodes and separator layers laterally or in the out-of-plane direction. However, the initial microscopic gaps between the cell components also allow some room for buckling. The following approach is adopted and tested in developing the buckling model presented here. As the load increases, the local buckling of the individual sheets of the specimens develops and then the kinks of the cell components start to form laterally adjacent to the wall of the fixture. The kinematics of the development of kinks and shear bands is an efficient way to compact these porous sheets with plastically incompressible inner copper or aluminum foils. Based on the experimental observations [17], the cell global buckling or shear bands come from the plastic hinges or sharp bending due to the rigid constraint of the fixture wall, and the module
Fig. 8. (a) The representative tensile nominal stress–strain data of the cell components obtained from Ref. [18], (b) estimated true stress–strain data based on (a), and (c) the stress–plastic strain curves of the cell components (cover sheet, anode (graphite/Cu) and cathode (LiFePO4/Al)) used in the finite element analyses.
global buckling comes from the smooth bending due to a more relaxed
environment from the soft foam padding. The formation of these
kinks and shear bands in computational simulations is the key to
properly simulate the buckling behavior of the cell RVE specimens
under constrained compression tests.
The simulation of the cell RVE (ten units) specimens under
compression tests will be presented here. A similar approach can
be used for module RVE specimens under compressive loading to
estimate the nominal stressestrain response that can be used as an
input for a less detailed modeling [19].Fig. 2(b) shows a schematic
of a cell RVE specimen with the dimensions. It should be noted that
the stack-up of the cell components gives a total thickness of
4.642 mm. However, the constrained compression test fixture has a
confinement dimension of 25 mm 25 mm 5 mm. In the
beginning of the compression test, a clearance of 0.358 mm was
present in the lateral or thickness direction, and is modeled
accordingly in the finite element analyses.
Fig. 9(a) shows the finite element model setup for a cell RVE
specimen compression test using the ABAQUS/Explicit commercial
finite element code. The x–y–z coordinate system is also shown.
The explicit finite element solver is used for this simulation for a
better contact stability among all the thin sheets during the buck-
ling and under large deformation. The vertical length 25 mm and
the thickness 4.642 mm of the finite element model are similar to
the cell RVE specimen. For computational efficiency, only a half of
the cell RVE specimen width of 25 mm is used in the finite element
model. However, it should be noted that the symmetric boundary
condition was not applied in the model due to the nature of the
problem. The nominal stress vs. nominal strain curves of the
computational simulations will be compared with those of the tests.
The compression test fixture is made of steel that has a very high
stiffness compared to the cell components. Therefore, the
confinement surfaces are assumed to be rigid and modeled by
planar rigid surfaces. In the finite element model setup, the spec-
imen mesh is surrounded by six rigid surfaces. The rigid surfaces
contacting with the edges of the cell RVE component sheets with
the normals in the xand ydirections have zero clearance. The rigid
surfaces contacting the cover sheets with the normal in the zdi-
rection are 5 mm apart and provide a total of 0.358 mm initial
lateral clearance with 0.179 mm on each side. The reference nodes
of all the rigid surfaces except the top one have six degrees of
freedom constrained. The top rigid surface can only move in the
vertical direction and is given a velocity boundary condition. The
general contact algorithm of ABAQUS/Explicit is used to model the
contact interaction between the surfaces of the cell components
that contact with one another and with the rigid surfaces. All the
contact surfaces are assumed to be in friction contact with each
other and an appropriate value of the coefficient of friction is used
in the simulations as a fitting parameter. Fig. 9(b) shows detailed
views of the meshes of each layer. The anode, cathode and cover
sheets are modeled by linear hexahedral full integration solid ele-
ments (C3D8 of ABAQUS). Only a single layer of elements are used
to model each layer and a mesh size of
x¼0.25 mm and
y¼0.25 mm is used. For the cathode and anode sheets,
z¼0.2 mm. For the cover sheets,
z¼0.111 mm. The thin sepa-
rator is modeled by linear quadrilateral reduced integration shell
elements (S4R of ABAQUS) with a thickness of 0.02 mm for
convergence and computational efficiency.
The compression test speed of 0.008 mm s⁻¹ is considered as a
quasi-static condition. Using the explicit dynamics solver to model
a quasi-static event requires some special considerations. It is
computationally impractical to model the process by a time step to
satisfy the Courant–Friedrichs–Lewy condition of numerical stability.
A solution is typically obtained either by artificially increasing the
loading rate or the speed of the process in the simulation, or
increasing the mass of the system, or both. A general recommen-
dation is to limit the impact velocity to less than 1% of the wave
speed of the specimen, and a mass scaling of 5–10% is typical to
achieve a desirable stable time increment. Also the kinetic energy
of the deforming specimen should not exceed a small fraction (1–5%) of the internal energy throughout the quasi-static analysis. The
densities of the cell components are very low and the mesh size in
this simulation is fine enough to capture the micro and macro
buckling behaviors. Therefore, for a reasonable computational time,
the finite element analysis is conducted at a speed of 200 mm s⁻¹ and with a uniform mass scaling of 100 times of the actual mass.
The deformation speed and the kinetic energy are very low and
meet the recommendations of the quasi-static analysis for the
explicit solver even though a higher mass scaling is used for
computational efficiency. Different percentages of mass scaling
were examined. The results showed some impact on the initial part
of the stress–strain response up to a strain of about 1.5% and the
results are generally comparable.
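The mass-scaling rationale can be made concrete: the explicit stable time increment scales as element size over wave speed, so scaling the density by a factor k stretches it by √k. A back-of-the-envelope sketch follows; the modulus, density and element size are taken from Tables 1 and 2 and the stated mesh, but the estimate itself is ours, not the paper's:

```python
# Estimate the explicit stable time increment dt ~ L_e / c with c = sqrt(E/rho),
# and show how uniform mass scaling enlarges it. Representative numbers only.
import math

E = 5600e6       # cover sheet tensile modulus, Pa (Table 1)
rho = 1338.0     # cover sheet density, kg/m^3 (Table 2)
L_e = 0.111e-3   # through-thickness element size of the cover sheet, m

def stable_dt(mass_scale=1.0):
    c = math.sqrt(E / (rho * mass_scale))   # wave speed falls as mass grows
    return L_e / c

print(f"unscaled dt   ~ {stable_dt():.1e} s")
print(f"100x mass, dt ~ {stable_dt(100.0):.1e} s")  # sqrt(100) = 10x larger
```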
Fig. 9. (a) The finite element model setup for a cell RVE specimen under constrained compression, and (b) detailed views of the meshes where the anode, cathode and cover sheets
are modeled by linear hexahedral solid elements and the separator sheet is modeled by linear quadrilateral reduced integration shell elements.
6. Computational results
Fig. 10 shows the initial and deformed shape of the battery cell
model under quasi-static in-plane compression and the corre-
sponding experimental results. Initially, the cell RVE model is
confined by six rigid surfaces as described earlier and shown in
Fig. 10(a). The top rigid surface is moved downward in the y di-
rection with a velocity boundary condition. Fig. 10(b) shows the
deformed shape of the model after the compressive displacement
boundary condition is applied and held with the initial void volume
fraction of f = 0.2 for the electrodes with the active materials at the
nominal strain of 34%. In this case, the initial void volume fraction
of f = 0.2 is used in estimating the Poisson’s ratios and adjusting the
strain hardening curves for the anode and cathode sheets with the
active materials. In the model, a coefficient of friction of 0.1 is
adopted for all the contacting surfaces as a general value. Many
simulations with multiple combinations of the parameters such as
the void volume fraction and coefficient of friction were conducted.
Only the combinations of parameters giving reasonable results will
be presented here. Fig. 10(c) is a zoom-in view of (b). Fig. 10(d)
shows a deformed cell RVE test specimen after the in-plane
compression test. The cell RVE specimen after the compression
test shown in Fig. 10(d) is one of the three cell RVE specimens tested.
Note that a different tested specimen is shown in Fig. 4. The spec-
imen shown here has a fairly regular buckling pattern and was
selected for comparison of the buckling pattern with that of the
computational results. Regular buckling patterns are obtained from
the computational models since these computational models do
not have significant irregularities or imperfections. The buckling
patterns of the deformed finite element model are found similar
and comparable to that of the test specimen.
Figs. 11(i)–(vi) show successive snapshots of the deformation of
the cell RVE specimen during the buckling simulation. Figs. 12(i)–
(vi) show the successive snapshots of the equivalent plastic strain
(PEEQ) of the cell RVE specimen during the buckling simulation. In
Fig. 11(ii) for the strain of 1.7%, the cover sheets on both sides of the
cell specimen appear to buckle independently but the buckling is
restricted by the rigid walls. The computational results not shown
here indicate that the five buckles shown in Fig. 11(ii) develop
successively one by one from the top to the bottom and the softer
neighbor separator, anode and cathode sheets buckle with the
stiffer cover sheets. In Fig. 11(iii) for the strain of 3.4%, the buckling
peaks or valleys appear to adjust and synchronize with the macro
buckling mode of the cell RVE specimen as a homogenized beam or
plate. The absolute value of the nominal stress starts to drop at the
strain of about 2% with the increasing compressive nominal strain
based on the computational results and this appears to be related to
the starting of the macro buckling of the cell RVE specimen as a
homogenized beam or plate.
Fig. 11(iv) for the strain of 11.9% shows that kinks start to form
adjacent to the cover sheets and shear bands are formed between
the opposite pairs of kinks. As shown in Figs. 11(v) and (vi) for the
strains of 22.1% and 34%, respectively, the kinks become folds and
the folds have different depths. The spacing between the folds in
Fig. 10. (a) A cell RVE half model is confined by six rigid surfaces, (b) the deformed shape of the model after the compressive displacement is applied (with the effective elastic
compressive modulus and f = 0.2 for the electrode sheets with the active materials at the nominal strain of 34%), (c) a zoom-in view of (b), and (d) a deformed cell RVE specimen
after the in-plane compression test.
fact is not the same due to the friction and imperfections. The
plastic hinges or bends of the cell components are found to be smoother
in the computational models due to the large size of the elements in
the finite element analyses compared to those of the tests where
the bends are sharper with almost rectangular corners as shown in
Figs. 4(d) and 10(d) for two different tested cell RVE specimens.
The shear bands are formed in the sheets between the two
opposite kinks as schematically shown in Fig. 11(iv). As the defor-
mation progresses, the shear bands in the computational models
become slightly wider in the middle of the specimen compared to
those of the tests due to the smoothing of the bends coming from
the large element size in the finite element analysis. It should be
noted that in order to capture the local bending more accurately,
more layers of linear elements would have been appropriate to
model each sheet. However, for computational efficiency and for a
very high length to thickness ratio of each sheet, only a single layer
of element is used for modeling each sheet to sufficiently capture
the micro and macro deformation patterns.
The compaction of the voids in the components and microscopic
gaps between the components, along with the initial clearances,
allows room for further compression in a confined space. The kinks
grow up to certain depths and the surfaces collapse with further
compressive loading as shown in Fig. 11(v). On the other hand, the
stack of sheets that appears vertical between the two kinks
on the same side carries load by further deformation as
shown in Figs. 11(iv)–(vi) and 12(iii)–(vi). The shape of this vertical
zone across the thickness direction appears to be triangular,
with its apex at the tip of the kink on the opposite side.
Fig. 12(vi) shows that at the end of the compression, the values of
the PEEQ are higher near the top of the specimen compared to
those near the bottom. This can be attributed to the friction effect
on the top portion due to the progress of compressive deformation.
Figs. 13(i) and (ii) show the distributions of the void volume
fraction at the nominal strains of 8.5% and 34% of the constrained
compression simulation, respectively. Only the void volume frac-
tions of the anode and cathode sheets of the cell RVE specimen are
displayed in these plots. Fig. 13(i) shows that during the deforma-
tion at the nominal strain of 8.5%, the voids along the outer
boundaries of the shear bands where large bending occurs are
consumed. Fig. 13(ii) shows that at the end of the compression at
the nominal strain of 34%, the void volume fraction decreases more
near the top of the specimen compared to that near the bottom.
This can be attributed to the friction effect on the top portion due to
the progressive nature of the compressive deformation.
Fig. 14 shows a comparison of the nominal stress–strain curve
from the finite element analysis with that of the test results ob-
tained from Lai et al. [17]. As mentioned earlier, the nominal stress–
strain curve of the finite element analysis is based on Gurson’s
material model with the initial void volume fraction f = 0.2 for the
electrodes with the active materials and a friction coefficient of 0.1
based on a parametric study. The results of the parametric study
show that the formation of the kinks and shear bands affects the
stress where the slope of the nominal stress–strain curve of the
computational results changes whereas a higher coefficient of
friction raises the nominal stress to a higher value at a large strain.
The results of the parametric study will not be reported here for
brevity. The SAE class 60 filter has been used to post-process the
computational stress–strain responses to remove the computational
noise, if present, and for consistency in comparing the curves from
computations [20]. The computational results show that the
stresses drop slightly after the first noticeable global buckling at a
strain of about 2% and this is in agreement with the experimental
results. After reaching a strain of about 2%, the global buckling for
the cell RVE specimen as a homogeneous beam begins. The stress
then gradually increases as the densification or compaction con-
tinues as the strain increases. The results compare fairly well
with the test results in general. However, the computational
response drops slightly after the strain of 25%.
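For reference, the yield condition behind the porous-plasticity model used in these simulations can be written down explicitly. The form below is the standard Gurson–Tvergaard expression associated with Refs. [21,22]; it is quoted here as a sketch, since the paper itself does not reproduce the equation and the q parameters actually used in the analyses are not stated.

\Phi = \left(\frac{\sigma_{eq}}{\sigma_y}\right)^2 + 2 q_1 f \cosh\left(\frac{3 q_2 \sigma_m}{2 \sigma_y}\right) - \left(1 + q_3 f^2\right) = 0

where \sigma_{eq} is the von Mises equivalent stress, \sigma_m is the mean stress, \sigma_y is the flow stress of the matrix material, f is the void volume fraction, and q_1, q_2 and q_3 are the fitting parameters introduced by Tvergaard (q_1 = q_2 = q_3 = 1 recovers Gurson’s original model).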
The importance of the establishment and validation of the
detailed computational model in this investigation can be
demonstrated by exploring two example cases in order to visualize
the effect of clearance and biaxial compression on the deformation
patterns of cell RVE specimens under constrained compression.
Only the deformation patterns of the two example cases are briefly
presented here for demonstrating the usefulness of the computa-
tional model to understand the underlying physics of the cell RVE
specimens under constrained compression.
The effect of the initial clearance on the shear band formation of
a cell RVE specimen is demonstrated by using three initial clear-
ances of zero, 0.358 mm (for the current model) and 0.716 mm in
the finite element analyses. Figs. 15(a)e(c) show the deformation
patterns of the cell model under quasi-static in-plane compression
for the three clearance cases at the nominal strain of 34%. As the
initial clearance in the finite element models decreases from
0.716 mm to 0.358 mm and then to zero, the number of kinks in-
creases, the kink depth decreases, and the number of shear bands
Fig. 11. The successive snapshots of the deformation of the cell RVE specimen during
the buckling simulation at the nominal strains of (i) 0%, (ii) 1.7%, (iii) 3.4%, (iv) 11.9%, (v)
22.1% and (vi) 34%.
increases from 8 to 10 to 15, respectively. The details of the compu-
tational results will be reported with the corresponding experi-
mental results in the future.
Fig. 16 shows the deformation pattern of a cell RVE specimen
under equal biaxial constrained compression based on the model
shown in Fig. 10(a). Here, the top and front rigid surfaces are moved
downward in the y direction and horizontally in the x direction,
respectively, with the velocity boundary conditions such that
the compressive nominal strains are equal in both the x and y directions.
The kinks and shear bands are formed inclined to both the x and y
Fig. 12. The successive snapshots of the equivalent plastic strain (PEEQ) of the cell RVE specimen during the buckling simulation at the nominal strains of (i) 1.7%, (ii) 3.4%, (iii) 6.8%,
(iv) 13.6%, (v) 23.8% and (vi) 34%.
directions in Fig. 16. The number of kinks and shear bands on side S
is about half of that on side L due to the one-half length ratio of
side S to side L in the computational model. Fig. 16 also
shows the pattern of interactions of the shear bands initiated from
sides L and S. The details of the computational results will be re-
ported with the experimental results for biaxial compression in the
7. Discussions
Based on the experimental observations of the cell RVE speci-
mens under in-plane constrained compression, the physical
Fig. 13. The distributions of the void volume fraction at the nominal strains of (i) 8.5%
and (ii) 34% of the constrained compression simulation. Only the void volume fractions
of the anode and cathode sheets of the cell RVE specimen are shown.
Fig. 14. A comparison of the nominal stressestrain curve from the finite element
analysis using the Gurson’s material model with those of the test results. In the finite
element analyses, all the contact surfaces are assumed to be in friction contact with a
friction coefficient of 0.1.
Fig. 15. The deformation patterns of a battery cell under quasi-static in-plane
compression for three initial clearances of (a) zero, (b) 0.358 mm of the current model
and (c) 0.716 mm at the nominal strain of 34%. The finite element models are similar to
the model described in Fig. 10(a) with different initial clearances.
Fig. 16. The deformation pattern of the cell RVE specimen under equal biaxial con-
strained compression based on the model shown in Fig. 10(a) at the nominal strain of
22% in the xand ydirections.
mechanism to accommodate the compression starts with the
elastic buckling of the cell components. When a cell RVE specimen
is under in-plane constrained compression, the component sheets
buckle independently with the lateral constraints from the
neighbor component sheets as indicated in Ref. [17]. Since the
component sheets were only packed together, each component
sheet can be treated as an individual sheet or thin plate under in-
plane compression with the lateral constraints which can be
treated as unattached elastic foundations.
For the anode, cathode and separator sheets in the middle
portion of the cell RVE specimens, the buckling mode will be
dominated by the constraints on both sides of the sheets. Lai et al.
[17] presented the buckling load solutions for the cell RVE speci-
mens by treating the cell component as a uniform straight beam
supported by two equal unattached elastic foundations under end
loads with both ends hinged based on the solution listed in Refs.
[23,24]. The cover sheets have only one unattached elastic foun-
dation and are free to buckle to the unconstrained side due to the
small clearances in the die for the cell RVE specimen during the
test. However, the small clearances prevent the cover sheets from
fully developing a lower order buckling mode. For the neighbor anode,
cathode and separator sheets near the cover sheets, they can start
to buckle in a lower order mode with the cover sheets but will
also be constrained by the rigid walls through the cover sheets.
The detailed results of the finite element analyses indicate that
the cover sheets and the neighbor anode, cathode and separator
sheets are constrained by the rigid walls and buckle in a high order
mode.
Therefore, treating the cover sheet as a beam with one unat-
tached elastic foundation on one side and a small or zero clearance
to a rigid wall on the other side appears to be a reasonable approach
to gain insight on the buckling behavior and the number of the
waves or half waves for the cell RVE specimens. The details of an
elastic Rayleigh–Ritz buckling analysis are presented in Appendix
A. The results of the elastic buckling analysis indicate that the
number of waves, n, is proportional to the length l of the cell. In
other words, the wavelength of the buckling is independent of the
specimen length l. Since the cell RVE specimens buckle with mul-
tiple half waves, the selection of the length for the cell RVE speci-
mens to represent lithium-ion battery cells with a full length under
in-plane constrained compressive loading conditions in Lai et al.
[17] appears to be reasonable.
8. Conclusions
In this paper, computational models are developed for simula-
tions of representative volume element (RVE) specimens of
lithium-ion battery cells under in-plane constrained compression
tests. First, the load–displacement data and deformation patterns
for cell RVE specimens under in-plane constrained compression
tests are briefly reviewed. For the corresponding finite element
analyses based on ABAQUS, the effective compressive moduli for
cell components are obtained from in-plane constrained
compressive tests, the Poisson’s ratios for cell components are
based on the rule of mixture, and the stress–plastic strain curves of
the cell components are obtained from the tensile tests and the rule
of mixture. The Gurson’s material model is adopted to account for
the effect of porosity in separators and in the active layers of anodes
and cathodes. The computational results show that the computa-
tional models can be used to examine the micro buckling of the
component sheets, the macro buckling of the cell RVE specimens,
and the formation of the kinks and shear bands observed in ex-
periments, and to simulate the load–displacement curves of the
cell RVE specimens. The computational results also suggest that the
micro buckling of the component sheets controls the macro
buckling of the cell RVE specimens and then the formation of the
kinks and shear bands. The initial micro buckling mode of the cover
sheets in general agrees with that of the approximate elastic
buckling solution of a beam with a rigid boundary on one side and
an unattached elastic foundation on the other side. The elastic
buckling solution indicates that the buckling wavelength is a
function of the elastic bending rigidity and the out-of-plane elastic
modulus of the cell RVE specimens. The results further suggest that
the length of the cell RVE specimens is appropriately selected and
the constrained compressive behavior of the cell RVE specimens
can represent that of battery cells with a full length. Based on the
computational models, the effects of the friction between the cell
components and the constrained surfaces on the deformation
pattern, plastic deformation, void compaction, and the load–
displacement curve are identified. Finally, the usefulness of the
computational model is demonstrated by further exploring the
effects of the initial clearance and biaxial compression on the
deformation patterns of cell RVE specimens.
Acknowledgments
Helpful discussions with Yibing Shi, Guy Nusholtz, and Ronald
Elder of Chrysler, Saeed Barbat, Bill Stanko, Mark Mehall and Tau
Tyan of Ford, Jenne-Tai Wang, Ravi Nayak, Kris Yalamanchili and
Stephen Harris of GM, Christopher Orendorff of Sandia National
Laboratory, Seung-Hoon Hong of University of Michigan, and
Natalie Olds of USCAR are greatly appreciated.
Appendix A. Buckling of a beam on an elastic foundation and
rigid boundary
Fig. A1 shows a uniform straight beam under end loads and
with one unattached elastic foundation on one side and a rigid
boundary on the other side. Both ends are hinged and the beam
is supported by the elastic foundation through the lateral pres-
sure proportional to the deflection in the y direction. Here, k
represents the spring constant of the elastic foundation on one
side of the beam and is defined as the lateral force per unit plate
length per unit deflection of the neighbor components in the
out-of-plane direction. The spring constant k can be expressed in
terms of the out-of-plane elastic modulus E of the cell RVE
specimens as

k = \frac{Eb}{h}   (A1)

where b represents the width of the specimen, and h represents the
thickness of the neighbor components.
In calculating the critical value of the compressive force for a
beam with an unattached elastic foundation on one side and a rigid
boundary on the other side, the energy method can be used to
develop an approximate solution [24]. The potential energy func-
tion for the beam can be expressed in terms of the strain energy of
beam bending, the strain energy of the elastic foundation, and the
work done by the compressive force as

\Pi = \frac{EI}{2} \int_0^l (v'')^2 \, dx + \frac{k}{2} \int_0^l v^2 \, dx - \frac{P}{2} \int_0^l (v')^2 \, dx   (A2)

where \Pi is the potential energy of the system, E is the modulus
of elasticity of the beam, I is the moment of inertia of the beam, v
represents the deflection of the beam in the y direction, l is the
length of the beam, k is the spring constant for the elastic un-
attached foundation, and P is the compressive force. Note that
M.Y. Ali et al. / Journal of Power Sources 242 (2013) 325e340338
the friction effects are not considered in this simple beam model.
According to the computational results for the cell RVE speci-
mens with a small or zero tolerance in the die, the buckling mode
appears to be periodic. The deflection v of the beam must be pos-
itive due to the rigid boundary on the left side as shown in Fig. A1.
The deflection v is assumed in a form as

v = a \sin^2\left(\frac{n \pi x}{l}\right)   (A3)

where n is an integer and a is a coefficient. Substituting the
deflection v in equation (A3) into equation (A2) and evaluating the
integrals, the potential energy becomes

\Pi = a^2 \left( \frac{\pi^4 E I n^4}{l^3} + \frac{3 k l}{16} - \frac{\pi^2 P n^2}{4 l} \right)   (A4)
For the minimum potential energy, the critical buckling load is
determined at \partial \Pi / \partial a = 0 as

P = \frac{4 \pi^2 n^2 E I}{l^2} + \frac{3 k l^2}{4 \pi^2 n^2}   (A5)

where the integer n represents the number of waves as indicated in
equation (A3). It should be noted that in the buckling analysis in Lai
et al. [17], the value of m corresponds to the number of half waves for
the buckling of the entire cell RVE specimens or the number of half
waves of the anode, cathode and separator sheets in the middle
portion of the cell RVE specimens.
Following the argument presented in Ref. [24] with consider-
ation of n as an integer, the value of n at which the number of waves
changes from n to n+1 giving the same value of P can be obtained
from equation (A5) as

n(n+1) = \frac{l^2}{4 \pi^2} \sqrt{\frac{3k}{EI}}   (A6)

The solution for equation (A6) is expressed as

n = \frac{1}{2}\left(-1 + \sqrt{1 + \frac{l^2}{\pi^2} \sqrt{\frac{3k}{EI}}}\right)   (A7)
Equation (A7) is similar to the solution for the number of half
waves for the buckling of a beam supported by an attached or
unattached elastic foundation as listed in Timoshenko [24] and
used in Lai et al. [17].
Equation (A5) is plotted in Fig. A2 to demonstrate the change of
the compressive load P with n using the values of k, l, E and I that are
listed in Table A1 based on the experimental results presented in Lai
et al. [17]. Since n is an integer, equation (A5) is discrete in nature.
Fig. A2 shows that the solution of n of 12 in equation (A7) corre-
sponds to the minimum or critical buckling load for the beam.
However, when n is treated as a real number, equation (A5) be-
comes differentiable and the value of n at which the minimum or
critical compressive load P occurs can be determined. Therefore, consid-
ering n as a real number, \partial P / \partial n = 0 gives

n = \frac{l}{2\pi} \left(\frac{3k}{EI}\right)^{1/4}   (A8)
It should be noted that when 1 is neglected on the left hand side
of equation (A6) for large n’s, equation (A6) becomes equation (A8).
For the value of n in equation (A8), the minimum or critical buckling
load P can be determined as

P_{cr} = 2\sqrt{3 k E I}   (A9)
Equations (A7) and (A8) give the values of n of 11.62 and 12.11,
respectively, based on the values of k, l, E and I that are listed in
Table A1 and are obtained from the experimental results pre-
sented in Ref. [17]. Both equations give the same number of waves
of 12 for the cover sheets for the minimum or critical buckling
load. From the computational results for the cell specimen with
the zero clearance, the number of waves for the initial buckling
mode of the cover sheets is 8, which is comparable to the
approximate solution obtained from the Rayleigh–Ritz method
presented above. Equation (A8) appears to be a reasonable esti-
mation for the current investigation and simple enough to show
that the number of waves, n, is proportional to the length l of the
cell. Equation (A8) can be rewritten as

\frac{l}{n} = 2\pi \left(\frac{EI}{3k}\right)^{1/4}   (A10)
As indicated in equation (A10), the wavelength, l/n, is inde-
pendent of the specimen length. For the cell RVE specimens with
the small clearance of 0.358 mm, the number of half waves for the
buckling for the entire cell specimen is 7 and 10 from experimental
and computational results, respectively. The computational results
also show that as the clearance increases, the number of half waves
decreases. Since the cell RVE specimens buckle with multiple half
waves, the selection of the length for the cell RVE specimens in the
experimental investigation of Lai et al. [17] appears to be reason-
able based on equation (A10).
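The closed-form results above can be checked directly against the numbers quoted in this appendix. The Python sketch below evaluates equations (A7)–(A10) with the Table A1 values; the specimen length l = 25 mm is inferred by back-solving the quoted n values rather than read from the table, so treat it as an assumption.

```python
import math

# Values from Table A1 (l inferred from the quoted n values, not listed).
E_cover = 575.0   # MPa, elastic modulus of the cover sheet
I = 2.85e-3       # mm^4, moment of inertia of the cover sheet
k = 46.899        # MPa, foundation spring constant from equation (A1)
l = 25.0          # mm, assumed specimen length

EI = E_cover * I
X = (l**2 / (4 * math.pi**2)) * math.sqrt(3 * k / EI)

n_A7 = 0.5 * (-1 + math.sqrt(1 + 4 * X))            # integer-transition solution
n_A8 = (l / (2 * math.pi)) * (3 * k / EI) ** 0.25   # real-valued minimizer
P_cr = 2 * math.sqrt(3 * k * EI)                    # equation (A9)
wavelength = 2 * math.pi * (EI / (3 * k)) ** 0.25   # equation (A10), l/n

print(f"n (A7) = {n_A7:.2f}, n (A8) = {n_A8:.2f}")  # ~11.62 and ~12.11
print(f"P_cr = {P_cr:.1f} N, wavelength l/n = {wavelength:.2f} mm")
```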
Fig. A1. A schematic of a uniform straight beam with a rigid boundary on one side and
an unattached elastic foundation on the other side under end loads. Both ends are
hinged and the beam is supported by the elastic foundation through the lateral
pressure proportional to the deflection in the y direction.
References
[1] J. Nguyen, C. Taylor, Safety performance for phosphate based large format lithium-ion battery, Telecommunications Energy Conference, INTELEC 2004, 26th Annual International (2004) 146–148.
[2] M. Otsuki, T. Ogino, K. Amine, ECS Transactions 1 (2006) 13–19.
[3] C. Ashtiani, Design and development for production of the think EV battery pack, Proceeding of the AABC-09 Conference (2009) CA, USA.
[4] J. Oh, Large lithium-ion battery for automotive applications, Proceeding of the AABC-09 Conference (2009) CA, USA.
[5] W. Cai, H. Wang, H. Maleki, J. Howard, E. Lara-Curzio, Journal of Power Sources 196 (2011) 7779–7783.
[6] X. Zhang, W. Shyy, A.M. Sastry, Journal of the Electrochemical Society 154 (10) (2007) A910–A916, http://dx.doi.org/10.1149/1.2759840.
[7] X. Zhang, A.M. Sastry, W. Shyy, Journal of the Electrochemical Society 155 (7) (2008) A542–A552, http://dx.doi.org/10.1149/1.2926617.
[8] R. Deshpande, Y.-T. Cheng, M.W. Verbrugge, Journal of Power Sources 195 (2010) 5081–5088, http://dx.doi.org/10.1016/j.jpowsour.2010.02.021.
[9] Y. Hu, X. Zhao, Z. Suo, Journal of Materials Research 25 (2010) 1007–1010.
[10] X. Xiao, W. Wu, X. Huang, Journal of Power Sources 195 (2010) 7649–7660.
[11] L.Q. Zhang, X. Liu, Y. Liu, S. Huang, T. Zhu, L. Gui, S. Mao, Z. Ye, C. Wang, J.P. Sullivan, J.Y. Huang, ACS Nano 5 (2011) 4800–4809.
[12] K. Zhao, W.L. Wang, J. Gregoire, M. Pharr, Z. Suo, J.J. Vlassak, E. Kaxiras, Nano Letters 11 (2011) 2962–2967, http://dx.doi.org/10.1021/nl201501s.
[13] K. Zhao, M. Pharr, L. Hartle, J.J. Vlassak, Z. Suo, Journal of Power Sources 218 (2012) 6–14, http://dx.doi.org/10.1016/j.jpowsour.2012.06.074.
[14] E. Sahraei, R. Hill, T. Wierzbicki, Journal of Power Sources 201 (2012) 307–321, http://dx.doi.org/10.1016/j.jpowsour.2011.10.094.
[15] R. Hill, Development for a Representative Volume Element of Lithium-Ion Batteries for Thermo-Mechanical Integrity, Department of Mechanical Engineering, Massachusetts Institute of Technology, 2011.
[16] E. Sahraei, T. Wierzbicki, R. Hill, M. Luo, Crash safety of lithium-ion batteries towards development of a computational model, SAE Technical Paper 2010-01-1078 (2010), http://dx.doi.org/10.4271/2010-01-1078.
[17] W. Lai, M.Y. Ali, J. Pan, Journal of Power Sources (2013) (submitted for publication).
[18] W. Lai, M.Y. Ali, J. Pan, Journal of Power Sources (2013) (submitted for publication).
[19] M.Y. Ali, W. Lai, J. Pan, Journal of Power Sources (2013) (submitted for publication).
[20] ABAQUS Version 6.11 User Manual, SIMULIA (2012), Providence, RI.
[21] A.L. Gurson, Journal of Engineering Materials and Technology 99 (1977) 2–15.
[22] V. Tvergaard, International Journal of Fracture Mechanics 17 (1981) 389–407.
[23] W.C. Young, R.G. Budynas, Roark’s Formulas for Stress and Strain, seventh ed., McGraw-Hill, 2001.
[24] S. Timoshenko, Theory of Elastic Stability, McGraw-Hill, 1936.
Table A1
The values of the parameters for the cell RVE specimens used in the elastic buckling
solution for calculation of the critical buckling load and the number of waves.
Parameters                                                        Value
h_cover sheet                                                     0.111 mm
h_cathode                                                         0.200 mm
h_anode                                                           0.200 mm
h_separator                                                       0.020 mm
h (thickness of the neighbor components of a cover sheet)         4.531 mm
b (width of the specimen)                                         25 mm
l (length of the specimen)                                        25 mm
E_cover sheet                                                     575 MPa
I (= I_cover sheet = b h_cover sheet^3 / 12)                      2.85E-03 mm^4
E (from out-of-plane cell RVE compression tests)                  8.5 MPa
k (Equation (A1): k = Eb/h)                                       46.899 MPa
Fig. A2. The compressive load P as a function of the number of waves, n.
| {"url":"https://www.researchgate.net/publication/257226230_Computational_models_for_simulations_of_lithium-ion_battery_cells_under_constrained_compression_tests","timestamp":"2024-11-10T09:54:35Z","content_type":"text/html","content_length":"995212","record_id":"<urn:uuid:592fbb16-c9ba-44e0-a31b-ce704b2e231a>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00389.warc.gz"}
perplexus.info :: Riddles : It is a riddle
It is a riddle.
It is the strongest force,
to resist your force,
and yet it is not a force.
It never accelerates,
but it is always moving.
It never rests, and
it never gets tired.
It has been defined many ways,
but no one can quite capture it.
What is it? | {"url":"http://perplexus.info/show.php?pid=2979&cid=22973","timestamp":"2024-11-10T17:46:09Z","content_type":"text/html","content_length":"14604","record_id":"<urn:uuid:0c3068ca-06e4-4250-b523-fc14a135cb51>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00511.warc.gz"} |
Disadvantages of Three-Phase Systems in context of 3 phase current
01 Sep 2024
Title: The Disadvantages of Three-Phase Systems: A Critical Examination of the Limitations of 3-Phase Current
Three-phase systems have long been a staple of electrical power distribution, offering numerous advantages in terms of efficiency and reliability. However, despite their many benefits, three-phase
systems also possess several significant disadvantages that can have far-reaching consequences for system performance and overall operation. This article will examine the limitations of 3-phase
current, highlighting the drawbacks of this widely used technology.
Three-phase systems are a fundamental component of modern electrical power distribution, providing a reliable and efficient means of transmitting and distributing power to consumers. The use of
three-phase current allows for the transmission of more power over longer distances than single-phase systems, making it an essential aspect of many industrial and commercial applications. However,
despite its widespread adoption, three-phase systems also possess several significant disadvantages that can have far-reaching consequences for system performance and overall operation.
Disadvantages of Three-Phase Systems:
1. Harmonics: One of the primary drawbacks of three-phase systems is the presence of harmonics in the current waveform. Harmonics are periodic components of the current waveform at integer
multiples of the fundamental frequency, and they can cause significant distortion and interference in the system.
I_h(t) = I_h,peak * sin(2π * h * f * t + φ_h), h = 2, 3, 4, ...
where I_h is the h-th harmonic component of the current, I_h,peak is its amplitude, f is the fundamental frequency, φ_h is the phase of the h-th harmonic, and t is time.
The presence of harmonics can lead to a range of problems, including increased power losses, reduced system efficiency, and interference with other systems operating at different frequencies.
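As a minimal numerical illustration of harmonic distortion (synthetic amplitudes chosen for the example, not measured data; the 5th and 7th harmonics are typical of six-pulse rectifier loads):

```python
import numpy as np

f = 50.0  # fundamental frequency in Hz (assumed)
t = np.linspace(0, 0.04, 4000, endpoint=False)  # two fundamental cycles

# Fundamental plus 5th and 7th harmonic components of the line current.
i = (100 * np.sin(2 * np.pi * f * t)
     + 20 * np.sin(2 * np.pi * 5 * f * t)
     + 14 * np.sin(2 * np.pi * 7 * f * t))

# Total harmonic distortion: RMS of the harmonics over RMS of the fundamental.
thd = np.sqrt(20**2 + 14**2) / 100
print(f"THD = {thd:.1%}")  # about 24.4% for these amplitudes
```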
2. Neutral Current: Another significant disadvantage of three-phase systems is the presence of neutral current. Neutral current flows through the neutral wire in a three-phase system, and can cause
significant distortion and interference in the system.
I_neutral = I_line1 + I_line2 + I_line3
where I_neutral is the neutral current, and I_line1, I_line2, and I_line3 are the phasor currents flowing through each of the three phases.
The presence of neutral current can lead to a range of problems, including increased power losses, reduced system efficiency, and interference with other systems operating at different frequencies.
3. Unbalanced Loads: Three-phase systems are designed to operate with balanced loads, where the current flowing through each phase is equal in magnitude and displaced by 120 degrees in phase. However,
unbalanced loads can cause significant distortion and interference in the system, leading to a range of problems including increased power losses, reduced system efficiency, and equipment damage.
I_phase1 + I_phase2 + I_phase3 = 0 (balanced condition)
where I_phase1, I_phase2, and I_phase3 are the phasor currents flowing through each of the three phases; a load is unbalanced whenever this sum is non-zero.
The presence of unbalanced loads can lead to a range of problems, including increased power losses, reduced system efficiency, and equipment damage.
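The balanced and unbalanced cases can be made concrete with a short phasor calculation (illustrative magnitudes only):

```python
import numpy as np

def neutral_current(magnitudes, angles_deg):
    """Phasor sum of the three line currents returning through the neutral."""
    phasors = np.asarray(magnitudes) * np.exp(1j * np.deg2rad(angles_deg))
    return phasors.sum()

# Balanced: equal magnitudes, 120 degrees apart -> neutral current is zero.
bal = neutral_current([10.0, 10.0, 10.0], [0.0, -120.0, 120.0])
# Unbalanced magnitudes -> a non-zero current flows in the neutral wire.
unbal = neutral_current([10.0, 7.0, 12.0], [0.0, -120.0, 120.0])

print(f"|I_neutral| balanced   = {abs(bal):.3f} A")   # ~0.000
print(f"|I_neutral| unbalanced = {abs(unbal):.3f} A") # ~4.359
```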
While three-phase systems offer numerous advantages in terms of efficiency and reliability, they also possess several significant disadvantages that can have far-reaching consequences for system
performance and overall operation. The presence of harmonics, neutral current, and unbalanced loads can all lead to a range of problems, including increased power losses, reduced system efficiency,
and equipment damage. As such, it is essential to carefully consider the limitations of three-phase systems when designing and operating electrical power distribution systems.
• IEEE Standard 1459-2000, “IEEE Trial-Use Standard Definitions for the Measurement of Electric Power Quantities Under Sinusoidal, Nonsinusoidal, Balanced, or Unbalanced Conditions”
• IEEE Standard 519-2014, “IEEE Recommended Practice and Requirements for Harmonic Control in Electric Power Systems”
| {"url":"https://blog.truegeometry.com/tutorials/education/35580ce3912c7054084e1c2de5016f33/JSON_TO_ARTCL_Disadvantages_of_Three_Phase_Systems_in_context_of_3_phase_current.html","timestamp":"2024-11-04T09:04:10Z","content_type":"text/html","content_length":"19544","record_id":"<urn:uuid:55802f50-0fbe-4098-8564-9014e6174d0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00780.warc.gz"}
Index - College Algebra | OpenStax
This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.
Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.
Attribution information
• If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution:
Access for free at https://openstax.org/books/college-algebra/pages/1-introduction-to-prerequisites
• If you are redistributing all or part of this book in a digital format, then you must include on every digital page view the following attribution:
Access for free at https://openstax.org/books/college-algebra/pages/1-introduction-to-prerequisites
Citation information
• Use the information below to generate a citation.
© Dec 8, 2021 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and
OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University. | {"url":"https://openstax.org/books/college-algebra/pages/index","timestamp":"2024-11-13T06:26:14Z","content_type":"text/html","content_length":"747718","record_id":"<urn:uuid:31c25d73-f2d9-4b69-883b-350ce1472d04>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00426.warc.gz"} |
Action - (Mathematical Physics) - Vocab, Definition, Explanations | Fiveable
from class: Mathematical Physics
Action is a fundamental concept in physics, defined as the integral of the Lagrangian over time. It encapsulates the dynamics of a system, allowing for the derivation of equations of motion through
the principle of least action. This principle states that the actual path taken by a system is the one that minimizes the action, linking together various physical laws and principles in a coherent
5 Must Know Facts For Your Next Test
1. Action is mathematically represented as $$ S = \int L dt $$, where $$ S $$ is action and $$ L $$ is the Lagrangian.
2. The principle of least action can be applied to various physical theories, including classical mechanics, quantum mechanics, and field theories.
3. In Hamiltonian mechanics, action plays a crucial role in defining canonical transformations and establishing relationships between coordinates and momenta.
4. Variational calculus is often employed to determine the path that minimizes action, leading to the Euler-Lagrange equations (a brief sketch follows this list).
5. The concept of action provides deep insights into conservation laws, such as energy conservation, as systems evolve in time.
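A brief sketch of fact 4 (standard textbook material, not specific to this course page): demanding $$ \delta S = 0 $$ for $$ S = \int L dt $$ yields the Euler–Lagrange equation

$$ \frac{d}{dt} \frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0 $$

For example, with $$ L = \frac{1}{2} m \dot{x}^2 - V(x) $$ this gives $$ m \ddot{x} = -V'(x) $$, which is Newton's second law.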
Review Questions
• How does the concept of action relate to the equations of motion in classical mechanics?
□ The concept of action is central to deriving the equations of motion in classical mechanics through the principle of least action. By minimizing action, which is defined as the integral of
the Lagrangian over time, we obtain the Euler-Lagrange equations. These equations describe how a system evolves over time and are equivalent to Newton's laws of motion, providing a more
generalized approach to understanding dynamics.
• Discuss how action influences the transition from Lagrangian to Hamiltonian mechanics.
□ Action serves as a bridge between Lagrangian and Hamiltonian mechanics by allowing for the formulation of canonical transformations. In Hamiltonian mechanics, one can express dynamics in
terms of phase space variables, which are derived from minimizing action. The transition involves expressing the Lagrangian in terms of coordinates and momenta, leading to a new formulation
where the Hamiltonian represents total energy and governs system evolution through Hamilton's equations.
• Evaluate how the principle of least action connects different areas of physics and its implications for modern theoretical frameworks.
□ The principle of least action is a unifying concept that connects various areas of physics by emphasizing that systems tend to evolve along paths that minimize action. This principle has
profound implications for modern theoretical frameworks such as quantum field theory and general relativity. In these contexts, minimizing action leads to fundamental insights about particle
interactions and spacetime dynamics, showcasing its role as a cornerstone in both classical and modern physics theories.
| {"url":"https://library.fiveable.me/key-terms/math-physics/action","timestamp":"2024-11-10T14:37:22Z","content_type":"text/html","content_length":"170344","record_id":"<urn:uuid:df1fc634-75dc-42d5-916b-8a10d1293c16>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00857.warc.gz"}
Transactions Online
Takahiro MURAKAMI, Toshihisa TANAKA, Yoshihisa ISHIDA, "Measurement of Similarity between Latent Variables" in IEICE TRANSACTIONS on Fundamentals, vol. E92-A, no. 3, pp. 824-831, March 2009, doi: 10.1587/transfun.E92.A.824
Abstract: A method for measuring similarity between two variables is presented. Our approach considers the case where available observations are arbitrarily filtered versions of the variables. In
order to measure the similarity between the original variables from the observations, we propose an error-minimizing filter (EMF). The EMF is designed so that an error between outputs of the EMF is
minimized. In this paper, the EMF is constructed by a finite impulse response (FIR) filter, and the error between the outputs is evaluated by the mean square error (MSE). We show that minimization of
the MSE results in an eigenvalue problem, and the optimal solution is given in a closed form. We also reveal that the minimal MSE by the EMF is efficient in the measurement of the similarity from the
viewpoint of a correlation coefficient between the originals.
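One plausible reading of the construction described in this abstract is sketched below in Python: stack the taps of the two FIR filters, express the output error energy as a quadratic form, and take the eigenvector of the smallest eigenvalue under a unit-norm constraint. The constraint choice and the windowing details are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.linalg import toeplitz, eigh

def emf_similarity(x1, x2, L=8):
    """Error-minimizing FIR pair: min E|h1*x1 - h2*x2|^2 over unit-norm taps."""
    def conv_matrix(x):
        # Row n holds [x[n], x[n-1], ..., x[n-L+1]], so X @ h is the FIR output.
        return toeplitz(x[L-1:], x[L-1::-1])
    X = np.hstack([conv_matrix(x1), -conv_matrix(x2)])
    R = X.T @ X / X.shape[0]   # sample covariance of the stacked regressors
    w, V = eigh(R)             # smallest eigenvalue = minimal output MSE
    h = V[:, 0]                # optimal stacked taps [h1; -h2]
    return w[0], h[:L], -h[L:]

rng = np.random.default_rng(0)
s = rng.standard_normal(2000)                       # common latent variable
x1 = np.convolve(s, [1.0, 0.5], mode="same")        # two arbitrarily filtered
x2 = np.convolve(s, [0.3, -0.8, 0.2], mode="same")  # observations of s
mse, h1, h2 = emf_similarity(x1, x2)
print(f"minimal MSE = {mse:.4f}")  # near zero indicates high similarity
```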
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E92.A.824/_p
| {"url":"https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E92.A.824/_p","timestamp":"2024-11-03T10:31:43Z","content_type":"text/html","content_length":"60934","record_id":"<urn:uuid:54ff1b68-17fd-4ff6-b460-514ff2925477>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00493.warc.gz"}
How Calculus for Engineers by Donald Trim Can Help You Solve Real-World Engineering Problems
Calculus for Engineers by Donald Trim: A Comprehensive Review
Calculus is one of the most important and challenging subjects for engineering students. It provides the foundation for many advanced topics in mathematics, physics, chemistry, computer science, and
engineering. However, learning calculus can also be daunting and frustrating, especially if you don't have a good textbook that explains the concepts clearly, shows how they are applied in real-world
situations, and provides ample practice problems to test your understanding.
If you are looking for such a textbook, you might want to consider Calculus for Engineers by Donald Trim. This book is designed specifically for engineering students who want to learn calculus in a
practical and relevant way. It uses an early transcendental approach, which means that it introduces functions such as exponential, logarithmic, trigonometric, and inverse trigonometric before
differentiation and integration. This allows you to see how these functions are used in engineering problems from the start.
In this article, we will give you a comprehensive review of Calculus for Engineers by Donald Trim. We will cover the following aspects:
• What is Calculus for Engineers?
• How is Calculus for Engineers organized?
• What are the strengths and weaknesses of Calculus for Engineers?
• How does Calculus for Engineers compare to other similar books?
• How can you get the most out of Calculus for Engineers?
By the end of this article, you will have a better idea of whether Calculus for Engineers by Donald Trim is the right book for you or not. So let's get started!
What is Calculus for Engineers?
Calculus for Engineers is a textbook written by Donald W. Trim, a professor emeritus of mathematics at Brandon University in Canada. He has over 40 years of experience in teaching calculus and other
mathematics courses to engineering students. He has also authored several other books on mathematics, such as A Concise Introduction to Linear Algebra and A First Course in Mathematical Analysis.
The main features and benefits of Calculus for Engineers are:
The main features and benefits of the book
• It covers all the topics that are essential for engineering students, such as limits, continuity, differentiation, integration, techniques of integration, parametric equations, polar coordinates,
infinite sequences and series, vectors, three-dimensional analytic geometry, multivariable calculus, multiple integrals, vector calculus, and differential equations.
• It emphasizes practical applications, many of which are drawn from various engineering fields, such as civil, mechanical, electrical, chemical, and computer engineering. It shows how calculus can
be used to model and solve real-world problems involving rates of change, optimization, curve fitting, area and volume, work and energy, fluid mechanics, heat transfer, electric circuits, and
• It provides a clear and concise explanation of the concepts and theorems, with proofs when appropriate. It uses a logical and consistent notation and terminology throughout the book. It also
includes historical notes and biographies of some of the famous mathematicians who contributed to the development of calculus.
• It offers a variety of examples and exercises that range from basic to challenging. It gives detailed solutions to all the odd-numbered exercises and some of the even-numbered ones. It also
provides hints and tips for solving some of the more difficult problems. It encourages you to think critically and creatively about the concepts and methods.
The target audience and prerequisites of the book
The target audience of Calculus for Engineers is engineering students who are taking a first or second course in calculus. It is also suitable for students in other disciplines that require a solid
background in calculus, such as physics, chemistry, or computer science.
The prerequisites of Calculus for Engineers are a good knowledge of algebra, trigonometry, and geometry. You should be familiar with functions and graphs, equations and inequalities, exponential and
logarithmic functions, trigonometric functions and identities, inverse trigonometric functions, complex numbers, matrices and determinants. You should also have some experience in solving word
problems involving these topics.
How is Calculus for Engineers organized?
Calculus for Engineers is organized into 16 chapters. Each chapter consists of several sections that cover a specific topic or subtopic. Each section begins with an introduction that motivates the
topic and explains its relevance to engineering. Then it presents the definitions, formulas, rules, properties, and theorems that are needed for the topic. Next it gives examples that illustrate how
to apply the concepts and methods to solve problems. Finally it ends with exercises that allow you to practice what you have learned.
The structure and content of the book are:
The structure and content of the book
Plane Analytic Geometry and Functions
1.1 Lines, 1.2 Circles, 1.3 Functions, 1.4 Graphs of Functions, 1.5 Operations on Functions, 1.6 Inverse Functions, 1.7 Exponential Functions, 1.8 Logarithmic Functions, 1.9 Trigonometric Functions, 1.10 Inverse
Trigonometric Functions
Limits and Continuity
2.1 Limits, 2.2 One-Sided Limits, 2.3 Infinite Limits, 2.4 Limits at Infinity, 2.5 Continuity, 2.6 Properties of Continuous Functions, 2.7 The Intermediate Value Theorem
Differentiation
3.1 The Derivative, 3.2 Rules for Differentiation, 3.3 Higher-Order Derivatives, 3.4 The Chain Rule, 3.5 Implicit Differentiation, 3.6 Related Rates, 3.7 Differentials
Applications of Differentiation
4.1 Rolle's Theorem and the Mean Value Theorem, 4.2 Increasing and Decreasing Functions, 4.3 Concavity and Points of Inflection, 4.4 Curve Sketching, 4.5 Optimization Problems, 4.6 Newton's Method
The Indefinite Integral or Antiderivative
How is Calculus for Engineers organized? (continued)
The examples and exercises of the book
One of the most important features of Calculus for Engineers is the abundance and quality of examples and exercises that it provides. Each section contains several worked-out examples that
demonstrate how to apply the concepts and methods to solve various types of problems. The examples are carefully chosen to illustrate the main ideas and techniques, as well as to show common errors
and pitfalls to avoid. The examples are also annotated with comments and explanations that help you understand the steps and reasoning involved.
Each section also contains a set of exercises that allow you to practice what you have learned. The exercises are graded according to their difficulty level, from easy to hard. The exercises cover a
wide range of topics and applications, such as engineering design, optimization, modeling, simulation, data analysis, graphing, approximation, estimation, error analysis, and more. The exercises also
include some challenging problems that require you to think creatively and critically about the concepts and methods.
The book gives detailed solutions to all the odd-numbered exercises and some of the even-numbered ones at the end of each chapter. It also provides hints and tips for solving some of the more
difficult problems in a separate section at the end of each chapter. These solutions, hints, and tips are very helpful for checking your work, finding your mistakes, learning from your errors, and
improving your problem-solving skills.
What are the strengths and weaknesses of Calculus for Engineers?
Calculus for Engineers is a well-written, well-organized, well-balanced, and well-received textbook that has many strengths and few weaknesses. Here are some of the pros and cons of the book:
The pros of the book
• It is comprehensive and covers all the topics that are essential for engineering students.
• It is practical and relevant and shows how calculus can be used to solve real-world engineering problems.
• It is clear and concise and explains the concepts and theorems in a logical and consistent way.
• It is engaging and interesting and includes historical notes and biographies of famous mathematicians.
• It is challenging and stimulating and provides a variety of examples and exercises that test your understanding.
• It is helpful and supportive and gives detailed solutions, hints, and tips for solving problems.
The cons of the book
• It is expensive and may not be affordable for some students.
• It is lengthy and may not be suitable for some courses that have limited time or syllabus.
• It is rigorous and may not be easy for some students who have weak backgrounds or skills in mathematics.
How does Calculus for Engineers compare to other similar books?
There are many other books on calculus for engineers available in the market, such as Calculus for Engineers and Scientists by William Briggs, Lyle Cochran, Bernard Gillett, and Eric Schulz, Calculus
for Engineering Students by Jesus Martin Vaquero, Calculus for Engineering and the Sciences by Dale Varberg, Edwin Purcell, and Steven Rigdon, and Calculus for Engineers by Anthony Croft and Robert
Davison. How does Calculus for Engineers by Donald Trim compare to these books?
The similarities and differences between Calculus for Engineers and other calculus books are:
The similarities and differences between Calculus for Engineers and other calculus books
Calculus for Engineers and Scientists by Briggs et al.
- It also uses an early transcendental approach.- It also covers all the topics that are essential for engineering students.- It also emphasizes practical applications and provides examples and
exercises from various engineering fields.
- It is more recent and updated than Calculus for Engineers by Trim.- It is more colorful and visually appealing than Calculus for Engineers by Trim.- It is more interactive and integrated with
online resources and tools than Calculus for Engineers by Trim.
Calculus for Engineering Students by Vaquero
- It also covers all the topics that are essential for engineering students.- It also emphasizes practical applications and provides examples and exercises from various engineering fields.
- It uses a late transcendental approach, which means that it introduces differentiation and integration before exponential, logarithmic, trigonometric, and inverse trigonometric functions.- It is
more concise and compact than Calculus for Engineers by Trim.- It is more accessible and affordable than Calculus for Engineers by Trim.
Calculus for Engineering and the Sciences by Varberg et al.
- It also covers all the topics that are essential for engineering students.- It also emphasizes practical applications and provides examples and exercises from various engineering fields.
- It uses a hybrid approach, which means that it introduces exponential, logarithmic, trigonometric, and inverse trigonometric functions before differentiation, but after integration.- It is more
rigorous and theoretical than Calculus for Engineers by Trim.- It is more traditional and classic than Calculus for Engineers by Trim.
Calculus for Engineers by Croft and Davison
- It also covers all the topics that are essential for engineering students.- It also emphasizes practical applications and provides examples and exercises from various engineering fields.
- It uses a non-transcendental approach, which means that it does not cover exponential, logarithmic, trigonometric, and inverse trigonometric functions at all.- It is more simplified and streamlined
than Calculus for Engineers by Trim.- It is more focused on numerical methods than Calculus for Engineers by Trim.
The advantages and disadvantages of Calculus for Engineers over other calculus books
The advantages of Calculus for Engineers over other calculus books are:
• It is more comprehensive and covers all the topics that are essential for engineering students.
• It is more practical and relevant and shows how calculus can be used to solve real-world engineering problems.
• It is more clear and concise and explains the concepts and theorems in a logical and consistent way.
• It is more engaging and interesting and includes historical notes and biographies of famous mathematicians.
• It is more challenging and stimulating and provides a variety of examples and exercises that test your understanding.
• It is more helpful and supportive and gives detailed solutions, hints, and tips for solving problems.
The disadvantages of Calculus for Engineers over other calculus books are:
• It is more expensive and may not be affordable for some students.
• It is more lengthy and may not be suitable for some courses that have limited time or syllabus.
• It is more rigorous and may not be easy for some students who have weak backgrounds or skills in mathematics.
How can you get the most out of Calculus for Engineers?
If you decide to use Calculus for Engineers as your textbook for learning or teaching calculus, you might want to know how to get the most out of it. Here are some tips and suggestions that can help
you enhance your calculus experience:
How can you get the most out of Calculus for Engineers? (continued)
The best ways to use the book for learning or teaching calculus
• Read the introduction of each chapter and section carefully to get an overview of the topic and its relevance to engineering.
• Follow the examples and exercises step by step and try to understand the logic and reasoning behind each solution.
• Do as many exercises as you can and check your answers with the solutions provided. If you get stuck, use the hints and tips to guide you.
• Review the concepts and methods regularly and use the summary and review sections at the end of each chapter to reinforce your learning.
• Use the historical notes and biographies to learn more about the history and development of calculus and its applications.
• Ask questions and seek help from your instructor, tutor, or classmates if you have any doubts or difficulties.
The supplementary resources and tools that can enhance your calculus experience
• Use online platforms and websites that offer interactive tutorials, videos, quizzes, games, simulations, and other resources that can help you learn calculus in a fun and engaging way. Some
examples are Khan Academy, Coursera, edX, MIT OpenCourseWare, Wolfram Alpha, Desmos, GeoGebra, and more.
• Use calculators and software that can help you perform calculations, graph functions, solve equations, visualize concepts, and explore applications. Some examples are TI-84 Plus CE Graphing
Calculator, MATLAB, Maple, Mathematica, Excel, Python, RStudio, and more.
• Use books and articles that can help you deepen your understanding, broaden your perspective, and inspire your curiosity about calculus and its applications. Some examples are The Calculus
Lifesaver by Adrian Banner, The Calculus Story by David Acheson, The Joy of x by Steven Strogatz, Infinite Powers by Steven Strogatz, The Calculus Diaries by Jennifer Ouellette, The Calculus of
Friendship by Steven Strogatz, The Man Who Knew Infinity by Robert Kanigel, A Beautiful Mind by Sylvia Nasar, The Simpsons and Their Mathematical Secrets by Simon Singh, Fermat's Enigma by Simon
Singh, The Code Book by Simon Singh, The Music of the Primes by Marcus du Sautoy, The Number Devil by Hans Magnus Enzensberger, Flatland by Edwin Abbott Abbott, Gödel, Escher, Bach by Douglas
Hofstadter, The Hitchhiker's Guide to Calculus by Michael Spivak, A Tour of the Calculus by David Berlinski, The Calculus Gallery by William Dunham, The Princeton Companion to Mathematics edited by Timothy Gowers, The Math Book by Clifford Pickover, The Math Gene by Keith Devlin, The Math Instinct by
Keith Devlin, The Mathematical Universe by Max Tegmark, The Universe in a Nutshell by Stephen Hawking, A Brief History of Time by Stephen Hawking, and more.
• You can also find many online articles and blogs that discuss calculus and its applications in various fields and domains. Some examples are Quanta Magazine, Scientific American, Wired, The New
Yorker, The Guardian, Medium, Math with Bad Drawings, 3Blue1Brown, Numberphile, Veritasium, Vsauce, TED-Ed, and more.
In this article, we have given you a comprehensive review of Calculus for Engineers by Donald Trim. We have covered the following aspects:
• What is Calculus for Engineers?
• How is Calculus for Engineers organized?
• What are the strengths and weaknesses of Calculus for Engineers?
How does | {"url":"https://www.kt-gold.com/group/mysite-231-group/discussion/1d1dc49f-7084-4d70-9412-856d1d0dfa7b","timestamp":"2024-11-11T19:53:27Z","content_type":"text/html","content_length":"1050368","record_id":"<urn:uuid:5648df78-567f-417f-a697-37cae4897423>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00011.warc.gz"} |
atm to mmHg
Standard atmospheric pressure is defined as 101325 Pa = 1 standard atmosphere. Thus, to use the pressure unit "atm" (atmosphere), the conversion relation 1 atm = 101325 Pa (pascals) holds. Because this is a definition, it simply has to be memorized. The mmHg is the unit of pressure used when the pressure is measured with a mercury column; its name is read in various ways (for example, "millimetre of mercury" or "mercury-column millimetre").
By definition, 1 atm = 760 mmHg. Again, since this is a definition (a fixed value), it has to be memorized.
To convert from mmHg to atm, it suffices to take the reciprocal, which gives 1 mmHg = 1/760 atm.
In this way, mmHg and atm can be converted into each other. Since the magnitude of the numerical value in front of each unit changes with the conversion, it is good to keep a mental picture of which unit gives the larger number for the same pressure.
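As a quick sanity check, the two defined relations above are all that is needed to convert in either direction. A minimal sketch (the function names are ours, not from the source):

```python
# Conversions between atm, Pa, and mmHg (torr), using the defined relations
# 1 atm = 101325 Pa and 1 atm = 760 mmHg.
PA_PER_ATM = 101325.0
MMHG_PER_ATM = 760.0

def atm_to_mmhg(p_atm: float) -> float:
    return p_atm * MMHG_PER_ATM

def mmhg_to_atm(p_mmhg: float) -> float:
    return p_mmhg / MMHG_PER_ATM   # equivalently, 1 mmHg = 1/760 atm

print(atm_to_mmhg(1.0))    # 760.0
print(mmhg_to_atm(380.0))  # 0.5
```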
Convert atmosphere to Torr and back | {"url":"https://inversionensemble.com/","timestamp":"2024-11-10T03:11:23Z","content_type":"text/html","content_length":"6963","record_id":"<urn:uuid:8f5d05de-77d4-4815-857f-706f3acf05b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00700.warc.gz"} |
Comparative analysis of rotor system models with auto-balancers of ball, roller and pendulum type
In the paper, differential equations of motion are derived for a plane model of rotor on anisotropic viscoelastic supports, which is balanced by a ball (roller) or pendulum auto-balancer. The forces
of gravity and viscous resistance to the motion of compensating cargoes are taken into account. Cases of an unbalanced rotor motion from engine torque with a variable and constant angular velocity
are considered. Using a comparative analysis of the structure of the motion equations, cases are established in which it is possible to build a unified theory of the specified auto-balancer types and apply the results obtained for one auto-balancer type to other types.
1. Introduction
Passive auto-balancers (AB) of ball (roller) and pendulum types are used to balance fast-rotating rotors during their motion, see, for example, [1]. Under certain conditions, the compensating cargoes
(cargoes) in these AB come to a position in which the rotor is balanced and then rotates with it as one rigid whole, until the rotor balance changes or other perturbations appear in the system.
The most complete information about the auto-balancing process is provided by analytical research results. Therefore, there is a common scientific problem in developing an analytical theory of
passive balancing. The problem solution is complicated by the presence of several types of AB.
Nowadays, there are separate studies of both ball (roller) [2-6] and pendulum [7, 8] ABs. For example, within a plane model of rotor on isotropic supports, the following have been studied:
– stability of the auto-balancing mode in the case of a two-ball [4] and a two-pendulum AB [7];
– stuck mode of cargoes in the case of a two-ball [5] and a two-pendulum [8] AB.
The question arises whether the results obtained for one AB type can be transferred to other types, and if it is possible, then in what cases. In the present work, this problem is solved within the
framework of a plane model of rotor on anisotropic supports, balanced by a ball (roller) or pendulum AB.
The aim of this work is a comparative analysis of models of a rotor system with AB of ball (roller) and pendulum types in order to identify qualitative differences in the dynamic properties of
various AB types.
To build physical and mathematical models of the rotor systems with AB under consideration, elements of the theory of rotor machines with AB [1, 3-9] and classical mechanics [10] are used.
Differential equations of systems motion are derived using the theorem on the motion of the center of mechanical system mass and Lagrange’s equations of the second kind.
To get an answer to these questions, the differential equations of motion obtained for different AB types are compared.
2. Comparative analysis of mathematical models of the rotor system
2.1. Mechanical model of the rotor system
The rotor system (Fig. 1(a)) consists of a rotor of mass $M$ and a passive AB of ball, roller (Fig. 1(b)) or pendulum (Fig. 1(c)) type. The rotor is mounted with an eccentricity $e$ on an absolutely rigid shaft. The AB is mounted on the shaft. The rotor performs plane-parallel (flat) motions, and its axis is inclined to the horizon by an angle $\alpha$.
Let's draw two fixed mutually perpendicular axes $X$, $Y$ from the position of the static equilibrium of the shaft center, point O. The axes $X$, $Y$ form a right-handed coordinate system, and the $X$ axis is horizontal. The movable auxiliary axes $X_1$, $Y_1$ come out from the shaft center, point P, and are parallel to the axes $X$, $Y$.
The rotor is located on two viscoelastic supports with stiffness and viscosity coefficients $k_x$, $b_x$ and $k_y$, $b_y$ in the directions of the axes $X$, $Y$, respectively. The rotor is driven by the engine torque $M_{rot}$.
Fig. 1Plane model of the rotor on anisotropic supports and multi-mass AB: a) rotor support diagram; b) kinematics of the motion of the rotor and the ball (roller); c) kinematics of the motion of the
rotor and the pendulum; d) rolling without sliding of the ball (roller)
During motion, the shaft (point P) deviates from the position of static equilibrium (point O) by $x$, $y$ and rotates with the rotor by an angle $\phi$.
The AB consists of $n_b$ cargoes. The mass of the $j$-th cargo is $m_j$. The center of mass of the cargo, point $C_j$, moves along a circle of radius $l_j$ with its center at point P (Fig. 1(b), (c)). The position of the center of mass of the $j$-th cargo relative to the AB housing is determined by the angle $\phi_j$. The angle is measured from the axis $X_1$ to the segment $PC_j$.
The AB housing rotates with the rotor around point P at an angular velocity $\omega = \dot{\phi}$, where the overdot denotes the derivative with respect to time $t$. The segment $PC_j$ rotates at an angular velocity $\dot{\phi}_j$.
The motion of the cargo relative to the AB housing is hindered by a viscous resistance force having a module:
$F_j = \beta_j v_j^{(r)} = \beta_j l_j \left| \dot{\phi}_j - \dot{\phi} \right|, \quad j = \overline{1, n_b}, \qquad (1)$
where $\beta_j$ is the coefficient of the viscous resistance force and $v_j^{(r)} = l_j | \dot{\phi}_j - \dot{\phi} |$ is the velocity magnitude of the mass center of cargo number $j$ relative to the AB housing.
Balls (rollers) roll along their races without slipping (Fig. 1(d)).
We consider the motion of ball (roller) number $j$ relative to the AB housing as the sum of a translational motion with the center of mass at velocity $l_j \dot{\phi}_j$ and a rotational motion around the center of mass at angular velocity $\omega_j$. At the point of contact of the ball (roller) with the race, the velocities of the ball (roller) and the race coincide: $\omega (l_j + r_j) = l_j \dot{\phi}_j + \omega_j r_j$. From here we find:
$\omega_j = \left[ (l_j + r_j)\dot{\phi} - l_j \dot{\phi}_j \right] / r_j = (l_j/r_j + 1)\,\dot{\phi} - \dot{\phi}_j \, l_j / r_j. \qquad (2)$
As is customary in the analytical theory of passive ABs, we assume that the AB cargoes do not interfere with each other's motion [1, 3-9].
2.2. Mathematical model of the rotor system with a variable rotor speed
We will conditionally divide the differential equations of system motion into the motion equations of the rotor and cargoes.
The theorem on the motion of the center of mass of a mechanical system [10] gives the following differential equations of translational motion of the rotor:
$M_\Sigma \ddot{x} + b_x \dot{x} + k_x x + \ddot{S}_x = 0, \qquad M_\Sigma \ddot{y} + b_y \dot{y} + k_y y + \ddot{S}_y = 0, \qquad (3)$
where $M_\Sigma = M + \sum_{j=1}^{n_b} m_j$ is the mass of the whole system, and $S_x$, $S_y$ are the projections of the total imbalance of the rotor and cargoes onto the axes $X$, $Y$:
$S_x = \sum_{j=1}^{n_b} m_j l_j \cos\phi_j + M e \cos\phi, \qquad S_y = \sum_{j=1}^{n_b} m_j l_j \sin\phi_j + M e \sin\phi. \qquad (4)$
It should be noted that:
– gravity is not included in Eq. (3) because the position of static equilibrium of the shaft center is taken as the origin point;
– system Eq. (3) is a homogeneous system of linear differential equations with constant coefficients relative to unknown generalized coordinates $x$, $y$ and ${S}_{x}$, ${S}_{y}$;
– the form of Eq. (3) does not depend on the AB type.
We compose the remaining motion equations using Lagrange’s equations of the second kind [10].
The system kinetic energy is the sum of the kinetic energies of the rotor ${T}_{r}$ and the cargoes ${T}_{j}$:
$T = T_r + \sum_{j=1}^{n_b} T_j. \qquad (5)$
According to Kőnig’s theorem [10], the kinetic energy of the rotor has two components caused by the translational motion of the rotor with the center of mass (point C) and the rotational motion of
the rotor around the center of mass:
$T_r = T_r^{(tr)} + T_r^{(r)} = \frac{M v_C^2 + J_C \dot{\phi}^2}{2} = \frac{M \left[ \dot{x}^2 + \dot{y}^2 + 2 e \dot{\phi} \left( -\dot{x}\sin\phi + \dot{y}\cos\phi \right) \right]}{2} + \frac{J_P \dot{\phi}^2}{2}, \qquad (6)$
where $v_C$ is the velocity magnitude of the rotor's center of mass and $J_P = J_C + M e^2$ is the axial moment of inertia of the rotor about point P (the longitudinal shaft axis).
According to Kőnig’s theorem [10], the kinetic energy of the $j$-th AB cargo is equal to the kinetic energy of translational motion together with the center of mass and the kinetic energy of rotation
around the center of mass:
$T_j = T_j^{(tr)} + T_j^{(r)} = \left( m_j v_j^2 + J_{C_j} \omega_j^2 \right) / 2, \qquad (7)$
where $v_j$ is the velocity magnitude of the cargo's mass center, $J_{C_j}$ is the principal central axial moment of inertia of the cargo, and $\omega_j$ is the magnitude of the angular velocity of the cargo's rotation around its center of mass.
Kinetic energy of the $j$-th ball (roller):
$T_j = \left\{ m_j \left[ \dot{x}^2 + \dot{y}^2 + 2 l_j \dot{\phi}_j \left( -\dot{x}\sin\phi_j + \dot{y}\cos\phi_j \right) \right] + k_j m_j l_j^2 \dot{\phi}_j^2 + J_{C_j} \left[ \dot{\phi}^2 (l_j/r_j + 1)^2 - 2 \dot{\phi} \dot{\phi}_j (l_j/r_j + 1)\, l_j/r_j \right] \right\} / 2, \qquad (8)$
where $k_j = 1 + J_{C_j} / (m_j r_j^2)$ is a dimensionless coefficient. For the ball $J_{C_j} = 2 m_j r_j^2 / 5$ and for the roller $J_{C_j} = m_j r_j^2 / 2$, so, respectively, $k_j = 1 + 2/5 = 7/5$ and $k_j = 1 + 1/2 = 3/2$.
The kinetic energy of the $j$-th pendulum, taking into account that $\omega_j = \dot{\phi}_j$:
$T_j = \left\{ m_j \left[ \dot{x}^2 + \dot{y}^2 + 2 l_j \dot{\phi}_j \left( -\dot{x}\sin\phi_j + \dot{y}\cos\phi_j \right) \right] + k_j m_j l_j^2 \dot{\phi}_j^2 \right\} / 2, \qquad (9)$
where $k_j = 1 + J_{C_j} / (m_j l_j^2)$.
The kinetic energy of the whole mechanical system is obtained on the basis of Eqs. (5-9):
– in the case of a ball (roller) AB:
$T = T_r + \sum_{j=1}^{n_b} T_j = \left\{ M \left[ \dot{x}^2 + \dot{y}^2 + 2 e \dot{\phi} \left( -\dot{x}\sin\phi + \dot{y}\cos\phi \right) \right] + J_P \dot{\phi}^2 \right\} / 2 + \sum_{j=1}^{n_b} \left\{ m_j \left[ \dot{x}^2 + \dot{y}^2 + 2 l_j \dot{\phi}_j \left( -\dot{x}\sin\phi_j + \dot{y}\cos\phi_j \right) \right] + k_j m_j l_j^2 \dot{\phi}_j^2 + J_{C_j} \left[ \dot{\phi}^2 (l_j/r_j + 1)^2 - 2 \dot{\phi} \dot{\phi}_j (l_j/r_j + 1)\, l_j/r_j \right] \right\} / 2; \qquad (10)$
– in the case of a pendulum AB:
$T = T_r + \sum_{j=1}^{n_b} T_j = \left\{ M \left[ \dot{x}^2 + \dot{y}^2 + 2 e \dot{\phi} \left( -\dot{x}\sin\phi + \dot{y}\cos\phi \right) \right] + J_P \dot{\phi}^2 \right\} / 2 + \sum_{j=1}^{n_b} \left\{ m_j \left[ \dot{x}^2 + \dot{y}^2 + 2 l_j \dot{\phi}_j \left( -\dot{x}\sin\phi_j + \dot{y}\cos\phi_j \right) \right] + k_j m_j l_j^2 \dot{\phi}_j^2 \right\} / 2. \qquad (11)$
Potential energy of the system (accurate to a constant):
$\Pi = \Pi_r + \sum_{j=1}^{n_b} \Pi_j = M g \left( y + e \sin\phi \right) \cos\alpha + \sum_{j=1}^{n_b} m_j g \left( y + l_j \sin\phi_j \right) \cos\alpha = \left( M_\Sigma y + M e \sin\phi + \sum_{j=1}^{n_b} m_j l_j \sin\phi_j \right) g \cos\alpha. \qquad (12)$
Rayleigh dissipation function of the system:
$R = \frac{b_x \dot{x}^2 + b_y \dot{y}^2}{2} + \frac{1}{2} \sum_{j=1}^{n_b} \beta_j l_j^2 \left( \dot{\phi} - \dot{\phi}_j \right)^2. \qquad (13)$
The remaining motion equations of the mechanical system elements are obtained using the equations Eqs. (5-13) with the help of Lagrange’s equations of the second kind [10]:
$\frac{d}{dt} \frac{\partial T}{\partial \dot{q}_j} - \frac{\partial T}{\partial q_j} + \frac{\partial R}{\partial \dot{q}_j} + \frac{\partial \Pi}{\partial q_j} = Q_j. \qquad (14)$
When obtaining the differential equation of rotational rotor motion, it was taken into account that $q_j = \phi$ and $Q = M_{rot}$. Then this equation has the form:
– in the case of a ball (roller) AB:
$J_P \ddot{\phi} + \sum_{j=1}^{n_b} \beta_j l_j^2 \left( \dot{\phi} - \dot{\phi}_j \right) + M g e \cos\alpha \cos\phi + M e \left( -\ddot{x}\sin\phi + \ddot{y}\cos\phi \right) - \sum_{j=1}^{n_b} J_{C_j} \ddot{\phi}_j (l_j/r_j + 1)\, l_j/r_j = M_{rot}; \qquad (15)$
– in the case of a pendulum AB:
$J_P \ddot{\phi} + \sum_{j=1}^{n_b} \beta_j l_j^2 \left( \dot{\phi} - \dot{\phi}_j \right) + M g e \cos\alpha \cos\phi + M e \left( -\ddot{x}\sin\phi + \ddot{y}\cos\phi \right) = M_{rot}. \qquad (16)$
When obtaining the differential motion equations of the cargoes, it was taken into account that $q_j = \phi_j$ and $Q = 0$. As a result, the following differential motion equations were obtained:
– balls (rollers) motion equations:
$k_j m_j l_j^2 \ddot{\phi}_j + \beta_j l_j^2 \left( \dot{\phi}_j - \dot{\phi} \right) + m_j g l_j \cos\phi_j \cos\alpha - J_{C_j} \ddot{\phi} (l_j/r_j + 1)\, l_j/r_j + m_j l_j \left( -\ddot{x}\sin\phi_j + \ddot{y}\cos\phi_j \right) = 0, \quad j = \overline{1, n_b}; \qquad (17)$
– pendulums motion equations:
$k_j m_j l_j^2 \ddot{\phi}_j + \beta_j l_j^2 \left( \dot{\phi}_j - \dot{\phi} \right) + m_j g l_j \cos\phi_j \cos\alpha + m_j l_j \left( -\ddot{x}\sin\phi_j + \ddot{y}\cos\phi_j \right) = 0, \quad j = \overline{1, n_b}. \qquad (18)$
Thus, the mathematical motion model of the rotor with AB of a ball (roller) type has the form of the equations system Eqs. (3), (15), (17), and the motion model of the rotor with AB of a pendulum
type has the form of the equations system Eqs. (3), (16), (18). Both of these models correspond to a variable rotor speed.
A comparative analysis of models of the rotor system with AB shows the following. The form of the differential equations of rotational motion of the rotor and the motion of cargoes depends on the
type of AB, see Eqs. (15), (17) and, accordingly, Eqs. (16), (18). Eqs. (15), (17) contain additional terms proportional to the angular accelerations of the rotor and cargoes.
Therefore, for a variable rotor speed $\omega$, it is impossible to build a unified theory of the considered AB types. Such a theory studies the modes of acceleration and braking of the rotor, as
well as other modes associated with a change in the speed of rotor rotation.
2.3. Mathematical model of the rotor system when the rotor rotates at a constant angular velocity
Let the rotor rotate with a constant angular velocity $\omega$ and the angle of rotor rotation is equal to $\phi =\omega t$. With this in mind, from Eqs. (3), (15), (17) and from Eqs. (3), (16), (18)
we obtain the following differential equations of system motion:
– rotor motion equations:
$M_\Sigma \ddot{x} + b_x \dot{x} + k_x x + \ddot{S}_x = 0, \qquad M_\Sigma \ddot{y} + b_y \dot{y} + k_y y + \ddot{S}_y = 0; \qquad (19)$
– cargoes motion equations:
$k_j m_j l_j^2 \ddot{\phi}_j + \beta_j l_j^2 \left( \dot{\phi}_j - \omega \right) + m_j g l_j \cos\phi_j \cos\alpha + m_j l_j \left( -\ddot{x}\sin\phi_j + \ddot{y}\cos\phi_j \right) = 0, \quad j = \overline{1, n_b}. \qquad (20)$
Thus, in the case of $\omega =$ const, the form of the motion Eqs. (19), (20) does not depend on the type of AB. Therefore, it is possible to build a unified theory of the considered types of ABs
when the rotor rotates with a constant angular velocity. Such a theory includes: the theory of the existence and stability of the main (in which the cargoes rotate synchronously with the rotor) and
secondary system motions; the theory of stuck mode of cargoes in the AB.
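The paper stops at the structure of these equations, but they are straightforward to integrate numerically. Below is an illustrative sketch of ours (not from the paper) for a two-pendulum AB under Eqs. (19)-(20); all parameter values are arbitrary demonstration choices, and the rotor speed is chosen above the support resonance, where auto-balancing can occur:

```python
# Illustrative integration of Eqs. (19)-(20) for a two-pendulum AB on a rotor
# spinning at constant angular velocity omega. All parameter values are
# arbitrary demonstration choices (not from the paper).
import numpy as np
from scipy.integrate import solve_ivp

M, e, omega, alpha, g = 1.0, 0.01, 300.0, 0.0, 9.81  # rotor mass, eccentricity, speed
kx, ky, bx, by = 1.0e4, 1.2e4, 5.0, 5.0              # anisotropic supports
m = np.array([0.2, 0.2])                             # cargo (pendulum) masses
l = np.array([0.05, 0.05])                           # cargo radii
beta = np.array([0.2, 0.2])                          # viscous resistance coefficients
kj = np.array([1.0, 1.0])                            # k_j = 1 for point-mass pendulums
Msum = M + m.sum()

def rhs(t, s):
    # State s = [x, x', y, y', phi_1, phi_1', phi_2, phi_2']. The accelerations
    # are coupled through S_x, S_y in Eq. (19), so solve a small linear system
    # A @ [x'', y'', phi_1'', phi_2''] = b at every step.
    x, xd, y, yd = s[:4]
    phi, phid = s[4::2], s[5::2]
    n = len(phi)
    A = np.zeros((2 + n, 2 + n))
    b = np.zeros(2 + n)
    A[0, 0] = Msum; A[0, 2:] = -m * l * np.sin(phi)
    b[0] = (-bx * xd - kx * x + (m * l * phid**2 * np.cos(phi)).sum()
            + M * e * omega**2 * np.cos(omega * t))
    A[1, 1] = Msum; A[1, 2:] = m * l * np.cos(phi)
    b[1] = (-by * yd - ky * y + (m * l * phid**2 * np.sin(phi)).sum()
            + M * e * omega**2 * np.sin(omega * t))
    for j in range(n):  # cargo rows: Eq. (20)
        A[2 + j, 0] = -m[j] * l[j] * np.sin(phi[j])
        A[2 + j, 1] = m[j] * l[j] * np.cos(phi[j])
        A[2 + j, 2 + j] = kj[j] * m[j] * l[j]**2
        b[2 + j] = (-beta[j] * l[j]**2 * (phid[j] - omega)
                    - m[j] * g * l[j] * np.cos(phi[j]) * np.cos(alpha))
    acc = np.linalg.solve(A, b)
    ds = np.empty_like(s)
    ds[0], ds[1], ds[2], ds[3] = xd, acc[0], yd, acc[1]
    ds[4::2], ds[5::2] = phid, acc[2:]
    return ds

s0 = np.array([0, 0, 0, 0, 0.5, omega, -0.5, omega])
sol = solve_ivp(rhs, (0, 3.0), s0, max_step=2e-4)
# In the auto-balancing mode the cargoes rotate synchronously with the rotor
# and their resultant opposes the rotor unbalance M*e.
print((sol.y[4::2, -1] - omega * sol.t[-1]) % (2 * np.pi))
```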
3. Conclusions
A comparative analysis of the structure of motion models of the rotor system with AB showed the following.
1) When the rotor rotates at a variable speed, the equations of system motion for ball (roller) AB are fundamentally different from the equations of motion for pendulum AB. The difference is caused
by the fact that the pendulum can rotate freely relative to the rotor, whereas the balls (rollers) roll without sliding along circular races that are rigidly connected to the rotor. Therefore, for $\omega = \mathrm{var}$, ball (roller) ABs have dynamic properties that fundamentally differ from those of pendulum-type ABs; specifically, the angular accelerations of the cargoes (balls or rollers) and of the rotor mutually influence each other's motion, which is not observed in pendulum-type ABs.
For the specified modes of rotor motion, it is impossible to build a unified theory for ball (roller) and pendulum ABs.
2) When the rotor rotates at a constant speed, the form of the motion equations of the system does not depend on the type of AB. In this case, it is possible to build a unified theory of ball
(roller) and pendulum ABs. Therefore, the results obtained previously for one type of AB are applicable for another type of AB.
• Filimonikhin G. B. Balancing and Vibration Protection of Rotors by Autobalancers with Solid Corrective Weights. Kirovograd, KNTU, 2004, p. 352, (in Ukrainian).
• Gorbenko A. N. On the stability of self-balancing of a rotor with the help of balls. Strength of Materials, Vol. 35, Issue 3, 2003, p. 305-312.
• Chung J. Effect of gravity and angular velocity on an automatic ball balancer. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, Vol. 219,
Issue 1, 2005, p. 43-51.
• Lu Chung-Jen, Wang Ming-Cheng, Huang Shih-Hsuan Analytical study of the stability of a two-ball automatic balancer. Mechanical Systems and Signal Processing, Vol. 23, Issue 3, 2009, p. 884-896.
• Lu Chung-Jen, Tien Meng-Hsuan Pure-rotary periodic motions of a planar two-ball auto-balancer system. Mechanical Systems and Signal Processing, Vol. 32, 2012, p. 251-268.
• Strautmanis G., Mezītis M., Strautmane V., Gorbenko A. On the issue of impact of anisotropy of the rotor elastic suspension on the performance of the automatic balancer. Vibroengineering
Procedia, Vol. 17, Issue 1, 2018, p. 1-6.
• Dubovik V. A., Ziyakaev G. R. The basic motion of pendulum auto-balancers on flexible shaft with resilient supports. News of Tomsk Polytechnic University, Vol. 317, Issue 2, 2010, p. 37-39, (in Russian).
• Artyunin A. I. The "sticking" effect and the characteristics of the rotor movement with pendulum auto-balancers. Science and Education: the Electronic Scientific Publication of the Bauman NSTU, Vol. 8, 2013, p. 443-454, (in Russian).
• Goncharov V. V., Filimonikhin G. B. Form and structure of differential equations of motion and process of auto-balancing in the rotor machine with auto-balancers. Bulletin of the Tomsk
Polytechnic University, Geo Assets Engineering, Vol. 326, Issue 12, 2015, p. 20-30.
• Strauch D. Classical Mechanics: An Introduction. Springer-Verlag Berlin Heidelberg, 2009, p. 405.
About this article
Keywords: mechanical vibrations and applications; pendulum auto-balancer; ball (roller) auto-balancer; equations of motion
Copyright © 2020 Gennadiy Filimonikhin, et al.
This is an open access article distributed under the Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/21366","timestamp":"2024-11-08T12:07:51Z","content_type":"text/html","content_length":"159758","record_id":"<urn:uuid:cf898bcd-b039-4772-b87c-9fe4af74a387>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00598.warc.gz"} |
The number of four-digit numbers strictly greater than 4321 tha... | Filo
The number of four-digit numbers strictly greater than 4321 that can be formed using the digits (repetition of digits is allowed) is (a) 306 (b) 310 (c) 360 (d) 288
Explanation: the answer is (b). The count is obtained by considering the cases in which four-digit numbers strictly greater than 4321 can be formed using the given digits (repetition of digits is allowed).
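The digit set is elided from the question as archived; assuming the digits are 0-5 (as in the JEE Main 2019 problem this appears to be), a brute-force check of ours confirms option (b):

```python
# Brute-force count of four-digit numbers strictly greater than 4321 whose
# digits all lie in {0,...,5} (repetition allowed). The digit set is an
# assumption: it is elided in the source question.
DIGITS = set("012345")

count = sum(1 for n in range(4322, 10000) if set(str(n)) <= DIGITS)
print(count)  # 310 -> option (b)
```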
Topic: Permutations and Combinations (JEE Main 2019); Subject: Mathematics; Class: Class 11 | {"url":"https://askfilo.com/math-question-answers/the-number-of-four-digit-numbers-strictly-greater-than-4321-that-can-be-formed","timestamp":"2024-11-02T11:10:45Z","content_type":"text/html","content_length":"487341","record_id":"<urn:uuid:c180dc3e-4250-42b7-8054-55170ac5cd22>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00704.warc.gz"}
College Algebra Project Saving for the Future Investing Questions Discussion - Course Help Online
Need your help in completing this short project. All steps need to be shown (typed preferably) to show how you arrived at the final answer. I need an A grade in this project.
College Algebra Project
Saving for the Future
This project is to be completed individually! If the plagiarism software pings your assignment as
being turned in by another student, you will receive a 0 and possibly an XF for the course. It is
very important that you work on the assignment by yourself. I am willing to check your work for 1a-c and 2a-c to ensure you are using the formula correctly. Use the Inbox tab in the course to send me your work and I will let you know if you are on the right track before you finish out #3.
You will need to type up your answers and submit them to the Vericite submission as part of your
final grade. You can handwrite and scan your work or type it to turn it in for a chance to receive
partial credit but make sure you turn in a typed version of your final answers as well. Failure to do
so will result in a 0 on the project.
In this project you will investigate compound interest, specifically how it applies to the typical retirement
For instance, many retirement plans deduct a set amount out of an employee's paycheck. Thus, each year you would invest an additional amount on top of all previous investments, including all previously earned interest.
If you invest P dollars every year for t years in an account with an interest rate of r (expressed as a
decimal) compounded n times per year, then you will have accumulated C dollars as a function of time,
given by the following formula.
Compound Interest Formula, with Annual Investments

$C(t) = \dfrac{P\left(1 + \frac{r}{n}\right)^{n}\left[1 - \left(1 + \frac{r}{n}\right)^{nt}\right]}{1 - \left(1 + \frac{r}{n}\right)^{n}}$
If you would like the derivation of this formula, please send me an email and I will send you the
If you invest $1200 every year (P = 1200) for 3 years (t = 3) at an interest rate of 5% (r = 0.05) compounded weekly (n = 52), then the first year's investment of $1200 would earn interest for 3 years, the next year's investment of $1200 would only earn interest for 2 years, and the final investment of $1200 would only earn interest for 1 year. This lends itself to the following:
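The worked numbers are missing at this point in the source; a short sketch of ours shows what the computation looks like:

```python
# The $1200/year, 3-year, 5%-compounded-weekly example above. Each annual
# deposit grows by a factor x = (1 + r/n)^n per year, so for t = 3 the
# formula reduces to C(3) = P*(x + x**2 + x**3).
P, r, n, t = 1200, 0.05, 52, 3
x = (1 + r / n) ** n
C = P * x * (1 - x ** t) / (1 - x)
print(round(C, 2))  # approximately 3981.74 -- note we round only at the end
```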
You have to be sure not to round until the very end, where you round to the nearest cent. (So be sure to keep as many decimal places as possible until the end, as points will be taken off for rounding before then.) For more information about round-off errors click on the link:
Your answers should be typed and in a single document. If you would like to attach your work, scan it
and add it to the end of your typed document. You can either attach a Word file or a pdf file. Many word
processing programs will save a document as a pdf file if you select Save as and look for file types.
1) How much will you have accumulated over a period of 35 years if, in an IRA which has a 10%
interest rate compounded monthly, you annually invest:
a. $1
b. $5000
c. $8,000
d. Part (a) is called the effective yield of an account. How could Part (a) be used to determine
Parts (b) and (c)? (Your answer should be in complete sentences free of grammar, spelling,
and punctuation mistakes.) (Total of 15 points)
2) How much will you have accumulated, if you annually invest $3000 into an IRA at 12% interest
compounded bi-annually for:
a. 10 years
b. 20 years
c. 50 years
d. How long will it take to earn your first million dollars? Your answer should be exact, rounded to 2 decimal places. Please use logarithms to solve. (Total of 15 points)
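A sketch of the logarithm step for part 2(d), in our notation (writing $x = (1 + r/n)^n$ and $T$ for the target amount; this shows the method, not the graded answer):

$C(t) = \dfrac{P\,x\,(1 - x^{t})}{1 - x} = T \;\Longrightarrow\; x^{t} = 1 - \dfrac{T(1 - x)}{P\,x} \;\Longrightarrow\; t = \dfrac{\ln\!\left(1 - \dfrac{T(1 - x)}{P\,x}\right)}{\ln x}.$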
3) Now you will plan for your retirement. To do this we need to first determine a couple of values.
a. How much will you invest each year? Even $25 a month is a start ($300 a year); you'll be surprised at how much it will earn. You can choose a number you think you can afford based on your life circumstances, or you can dream big. State what you will use for P, r, and n to earn credit. (3 points)
The typical example of a retirement investment is an I.R.A., an Individual Retirement
Account, although other options are available. However, for this example, we will assume
that you are investing in an I.R.A. (for more information see:
http://en.wikipedia.org/wiki/Individual_Retirement_Account ) earning 8% interest
compounded annually. (This is a good estimate, basically, hope for 10%, but expect 8%. But
again this is just one example; I would see a financial advisor before investing, as there is
some risk involved, which explains the higher interest rates.) List your P, r, and n to earn
points for this question.
b. Determine the formula for the accumulated amount that you will have saved for retirement
as a function of time and be sure to simplify it as much as possible. You need to be able to
show me what you used for r, n, and P so that I can calculate your answers. Plug in those
values into the formula and simplify the equation. (5 points)
c. Graph this function from t = 0 to t = 50. (6 points)
Ways to show graphs:
• Excel
• Hand draw, take a picture with your phone, and import it into your document as a picture.
• An online graphing calculator program (try googling free graphing calculators or use
d. When do you want to retire? Use this to determine how many years you will be investing.
(65 years old is a good retirement-age estimate). You need to say how old you are if you are
retiring when you are 65 or tell me how long until you retire. State what you will use for t.
(2 points)
e. Determine how much you will have at retirement using the values you decided upon
above. (5 points)
f. How much of that is interest? (4 points)
g. Now letโ s say you wait just 5 years before you start saving for retirement, how much will
that cost you in interest? How about 10 years? How about just 1 year? (10 points)
Now you need to consider if that is enough. If you live to be 90 years old, well above
average, then from the time you retire, to the time you are 90, you will have to live on what
you have in retirement (not including social security). So if you retired at 65, you will have
another 25 years where your retirement funds have to last.
h. Determine how much you will have to live on each year. Note, we are neither taking into
account taxes nor inflation (which is about 2% a year). (5 points)
Let's look at this from the other direction then, supposing that you wanted to have $35,000 a year after retirement.
i. How much would you need to have accumulated before retirement? (5 points)
j. How much would you need to start investing each year, beginning right now, to accumulate this amount? A "short-cut" to doing this is to first compute the effective yield at your retirement age, then divide this amount into Part (i). This is the amount you will need to invest each year. (5 points)
k. That was just using $35,000, how much would you want to have each year to live on?
Dream big or reasonable depending on your occupation! Now using that value, repeat parts
(i) and (j) again. You need to state what you would want to live on and it needs to be
something besides $35,000. (10 points)
Your answer to (k) would work, if you withdrew all of your retirement funds at once and
divided it up. However, if you left the money in the account and let it draw interest, it is
possible that the interest itself would be enough to live on, or at the very least if you had to
withdraw some of the principle, the remaining portion would still continue to earn interest.
Essentially, what you have found is the upper bound for the amount of money that you will
need to invest each year to attain your financial goals.
l. Finish by summarizing what you have learned in the entire project and consider setting a
goal towards saving for retirement. (Your answer should be in complete sentences free of
grammar, spelling, and punctuation mistakes.) This should be a paragraph not just one
sentence. (10 points) | {"url":"https://coursehelponline.com/college-algebra-project-saving-for-the-future-investing-questions-discussion-2/","timestamp":"2024-11-13T21:44:03Z","content_type":"text/html","content_length":"48860","record_id":"<urn:uuid:1ef3c63b-bd9b-4c91-8d81-9e92bf09e499>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00315.warc.gz"} |
Mathematical Methods in Quantum Mechanics I (Winter Semester 2019/20)
• Classes: Lecture (0163500), Problem class (0163510)
• Weekly hours: 4+2
Hint: The title of the course in the syllabus of the faculty is "Mathematical Physics". Below the file of summaries you can find the modified file that has the hints for the exam.
Quantum mechanics is one of the subjects in physics that has influenced Mathematics during the last century. For example, it has drastically influenced the development of Functional Analysis,
Spectral Theory and Operator Theory.
The goal of this lecture series is to discuss how to investigate quantum mechanics from a mathematically rigorous point of view. This gives the course a different character than quantum mechanics courses taught from the physics point of view. Contents include observables and self-adjointness (which requires more than what physicists usually consider as self-adjoint), existence of dynamics of Schrödinger equations, spectral properties of Schrödinger operators, the uncertainty principle, and the existence and stability of atoms. Note that the course will have a second part in the summer semester, and in the second part we will also cover some topics of current research.
Prerequisites: Analysis, linear algebra. It is recommended that you have taken functional analysis or that you take it in parallel with this class. If you are not sure about your background, please talk to us.
Lecture: Wednesday 9:45-11:15 SR 2.59
Friday 8:00-9:30 SR 2.66
Problem class: Monday 15:45-17:15 SR 2.66
Summaries of the lectures
Posting of brief summaries of the lectures is planned to be done by Sunday afternoon of the week before them. This file gives a very dry text containing only the notation, definitions and theorems, without any proofs, examples, remarks or motivation. The purpose of this is to give you the possibility to see a little bit of what will come in the next week each time. It is highly recommended that you read it and think about it, even for a short time, before coming to the lecture. Such a small effort from your side might make you understand much more during the lecture than you would understand without any preparation. The file will be updated every week. If you find any typos please send an email to ioannis.anapolitanos@kit.edu, so that they get corrected.
Summaries of lectures
Hints for the exam: please read carefully, they are in red
Lecture notes and script
In the following link you find informal lecture notes for the course. Some additional small explanations given in the lecture might not always be here, but the most important parts are written here. The other way around, too: some material that we did not have enough time to go through thoroughly during the lecture might be explained in more detail in the notes. If you find any typos please send an email to ioannis.anapolitanos@kit.edu
In the following link you will find the lecture notes typed up by three of your colleagues. As it has not yet been proofread, be cautious of mistakes
Script (has still not been read for correction)
Exercise sheets
In the following link you can find the exercise sheets
Feedback for the course
Your feedback for the course (suggestions, difficulties, criticism) is highly appreciated. You can either talk to us, or if you are too shy to do this, you can anonymously write in the following
There is going to be an oral exam for the course
1. Gustafson S. J., Sigal I. M.: Mathematical Concepts of Quantum Mechanics, third edition, Springer, 2011.
2. Reed M., Simon B.: Methods of Modern Mathematical Physics, Volumes I-IV, Academic Press.
3. Berezin F. A., Shubin M. A.: The Schrödinger Equation, Kluwer Academic Publishers, 1991. | {"url":"https://www.math.kit.edu/iana1/edu/quantummech2019w/","timestamp":"2024-11-12T09:10:59Z","content_type":"text/html","content_length":"190782","record_id":"<urn:uuid:5f57fe34-7720-4022-87aa-cd5ea453a5b3>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00476.warc.gz"}
ellipse – Graphic PIZiadas
We have seen that the study of conics can be approached from different geometric viewpoints. In particular, to begin analyzing conics we defined the ellipse as a locus, saying that:
An ellipse is the locus of points in a plane whose sum of distances from two fixed points, called foci, has a constant value.
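In symbols (our notation, with foci $F_1$, $F_2$ and constant sum $2a$):

$\mathcal{E} = \{\, P \in \pi \;:\; d(P, F_1) + d(P, F_2) = 2a \,\}, \qquad 2a > d(F_1, F_2).$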
This metric definition of the curve allows us to address the important study of tangent circles, known as the "Problem of Apollonius" in any of its versions. When we approach the study of the parabola or the hyperbola we will return to this problem to generalize these concepts, reducing problems to the "fundamental problem of tangency in the case of a line" or the "fundamental problem of tangency in the case of a circle", namely, determining a circle of a coradical pencil that satisfies a tangency condition.
Conics: The ellipse as a locus
The study of conics can be approached from different geometric viewpoints. One of the most common is the analysis of the planar sections of a cone of revolution.
From this definition it is possible to infer metric properties of these curves, as well as new definitions of them.
Ellipses and Parabolas around us [School]
A recurring type of assignment in the blogs my students have developed has been the search for, and identification of, geometry in all aspects of their daily life, realizing its significance.
The conic curves studied in the metric geometry section are of great interest in aeronautical engineering studies, since they help describe the trajectories of bodies under the laws of gravity. However, as they clearly show in their work, this is not their only field of application. The short article that follows, written by the student group calling itself "The Maze Angle", is a sample of these concerns in
relation to the everyday. | {"url":"https://piziadas.com/en/tag/elipse","timestamp":"2024-11-04T04:33:22Z","content_type":"text/html","content_length":"60566","record_id":"<urn:uuid:c9e454e3-0a9e-4a12-bcff-7b2777907cf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00892.warc.gz"} |
Better streaming algorithms for Maximum Directed Cut via 'snapshots'
\[ \gdef\bias{\mathrm{bias}} \gdef\deg{\mathrm{deg}} \gdef\indeg{\mathrm{indeg}} \gdef\outdeg{\mathrm{outdeg}} \gdef\Snap{\mathrm{Snap}} \gdef\RSnap{\mathrm{RefSnap}} \]
In this blog post, I’ll discuss a new algorithm based on two joint papers of mine with Raghuvansh Saxena, Madhu Sudan, and Santhoshini Velusamy (appearing in SODA’23 and FOCS’23). The goal of this
algorithm is to “approximate” the value of a graph optimization problem called “maximum directed cut”, or Max-DICUT for short, and the algorithm operates in the so-called “streaming model”. After
defining these terms, I will describe how we reduce the problem of approximating the Max-DICUT value of a directed graph to the problem of estimating a certain matrix, which we call the “snapshot”,
associated to a directed graph; finally, I will present some ideas behind streaming algorithms for estimating these snapshot matrices.
To start, we will define the particular algorithmic model (streaming algorithms) and computational problem (Max-DICUT) that we are interested in.
Streaming algorithms
Motivated by applications to “big data”, in the last two decades, theoretical models of computing on massive inputs have been widely studied. In these models, the algorithm is given limited, partial
access to some input object, and is required to produce an output fulfilling some guarantee related to that object. Some classes of models include:
• Property testing, where an algorithm must decide whether a large object either has some property \(P\) or “really doesn’t have \(P\)”^1 given only a few queries to the object. Depending on the
specific model, the algorithm may be able to choose these queries adaptively, or, more restrictively, the queries might just be randomly and independently sampled according to some distribution.
• Online algorithms, where an algorithm is forced to make progressive decisions about an object while it is revealed piece by piece.
• Streaming algorithms, where an algorithm is allowed to make a decision about an object after seeing it revealed progressively in a “stream”, but there is a limit on the amount of information that
can be stored in memory.
This blog post is concerned with streaming algorithms. In this setting, memory space is the most important limited resource. Sometimes, there are even algorithms that pass over a data stream of
length \(n\) but maintain their internal state using only \(O(\log n)\) or even fewer bits of memory! One exciting aspect of the streaming model of computation is that space restrictions can often be
studied mathematically from the standpoint of information theory, opening an avenue for proving impossibility results.^2
Numerous algorithmic problems have been studied in the context of streaming algorithms. These include statistical problems, such as finding frequent elements in a list (so-called “heavy hitters”) or
estimating properties of the distribution of element frequencies in lists (like so-called “frequency moments”), as well as questions about graphs, such as testing for connectivity, or computing the
maximum matching size, where the stream consists of the list of edges. The common denominator between all these problems is that the “usual” algorithms might not be good streaming algorithms, whether
because they require too much space or because they require “on-demand” access to the input data.
Constraint satisfaction problems
Many “classical” computational problems can be recast into questions in the streaming model. Here, we are interested in one class of problems that have been particularly well-studied classically,
namely constraint satisfaction problems (CSPs). These occur often in practice and include many problems one might encounter in introductory algorithms courses, such as Max-3SAT, Max-CUT, and Max-\(q\)Coloring.
CSPs are defined by variables and local constraints over a finite alphabet. More formally, a CSP is defined by:
• A finite set \(\Sigma\), called an alphabet; in the typical “Boolean” case, \(\Sigma=\{0,1\}\).
• A number of variables, \(n\).
• A number of local constraints, \(m\), and a list of constraints \(C_1,\ldots,C_m\). Each constraint \(C_j\) is defined by four objects:
1. A number \(k_j \geq 1 \in \mathbb{N}\), called the arity, that determines the number of variables \(C_j\) involves.
2. A choice of \(k_j\) distinct variables \(i_{j,1},\ldots,i_{j,k_j} \in \{1,\ldots,n\}\).
3. A predicate (or “goal” function) \(f_j : \Sigma^{k_j} \to \{0,1\}\) for those variables.
4. A weight \(w_j \geq 0\).
The CSP asks us to optimize over potential assignments, which are functions \(x : \{1,\ldots,n\} \to \Sigma\) mapping each variable to an element of \(\Sigma\). In particular, the objective is to
maximize^3 the number of “satisfied” (or “happy”, if you’d like) constraints, where a constraint \(C_j\) is “satisfied” if the alphabet symbols assigned by \(x\) on the variables \(i_{j,1},\ldots,i_
{j,k_j}\) satisfy the predicate \(f_j\). The maximum number of constraints satisfied by any assignment is called the value of the CSP.
Some examples of CSPs are:
• In Max-CUT (a.k.a. “Maximum Cut”), the alphabet is Boolean (\(\Sigma = \{0,1\}\)), and all constraints are binary and use the same predicate: \(f(x,y) = x \oplus y\) (where \(\oplus\) denotes the
Boolean XOR operation). I.e., if we apply a constraint to the variables \((i_1,i_2)\), then the constraint is satisfied iff \(x(i_1) \neq x(i_2)\). Max-\(q\)Coloring is similar, over a larger
alphabet of size \(q\), with the predicate \(f(x,y)=1 \iff x \neq y\).
• In Max-DICUT (a.k.a. “Maximum Directed Cut”), the alphabet is again Boolean, and the predicate is now \(f(x,y) = x \wedge \neg y\), so that a constraint \((i_1,i_2)\) is satisfied iff \(x(i_1) =
1 \wedge x(i_2) = 0\).
• In Max-3SAT, the alphabet is also Boolean, all constraints are ternary, and the assorted predicates are all possible disjunctions on literals, such as \(f(x,y,z) = x \vee \neg y \vee z\) or \(f
(x,y,z) = \neg x \vee \neg y \vee \neg z\).
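To make these definitions concrete, here is a small brute-force evaluator of ours (illustrative only; enumerating all \(2^n\) assignments is the opposite of a streaming algorithm) that computes the Max-DICUT value of a weighted instance, normalized by the total constraint weight:

```python
# Brute force over all 2^n assignments: the Max-DICUT value of a weighted
# directed graph given as (head, tail, weight) edges. Edge i1 -> i2 is
# satisfied iff x[i1] = 1 and x[i2] = 0.
from itertools import product

def max_dicut_value(n, edges):
    total_w = sum(w for _, _, w in edges)
    best = max(
        sum(w for (i1, i2, w) in edges if x[i1] == 1 and x[i2] == 0)
        for x in product([0, 1], repeat=n)
    )
    return best / total_w  # value as a fraction of the total weight

# A directed 3-cycle has value 1/3: no assignment satisfies two of its edges.
print(max_dicut_value(3, [(0, 1, 1.0), (1, 2, 1.0), (2, 0, 1.0)]))
```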
Both Max-CUT and Max-DICUT can be described interchangeably in the language of graphs, which might be more familiar. For Max-CUT, given an instance on \(n\) variables, we can form a corresponding
undirected graph on \(n\) vertices, and add an edge \(i_1 \leftrightarrow i_2\) for each constraint \((i_1,i_2)\) in the instance (with the same weight). Now an assigns each vertex to either \(0\) or
\(1\), and an edge is satisfied iff its endpoints are on different sides of the cut. (We can think of an assignment as a “cut” which partitions the vertex-set into two sets: one side corresponding to
the variables \(\{i : x(i)=0\}\) and one for \(\{i : x(i)=1\}\).) For Max-DICUT, because of the asymmetry, we have to create a directed graph. We add an edge \(i_1 \to i_2\) for each constraint \
((i_1,i_2)\), and an edge \(i_1 \to i_2\) is satisfied iff \(i_1\) is assigned to \(1\) and \(i_2\) to \(0\). (Similarly, an assignment is an “ordered” partition of the vertex into two sets, i.e., we
have a designated “source” set and “target” set and they are not interchangeable.)
Figure. A “dictionary” between the CSP and graph versions of Max-CUT and Max-DICUT. Each variable becomes a vertex, each constraint becomes an edge (directed in Max-DICUT, undirected in Max-CUT), and
a Boolean assignment \(x\) becomes a “cut” of the vertices in the graph.
Note that in these examples, the arity is a small constant (i.e., either \(2\) or \(3\)). What makes CSPs so interesting is that we can “build up” complicated global instances on arbitrarily many
variables by applying predicates to “local” sets of a few variables at a time.
For various reasons, we are interested in studying the feasibility of approximating the values of CSPs (and not exactly determining this value). Firstly, the approximability of CSPs by “classical”
(i.e., polynomial-time) algorithms is a subject of intense interest, stemming from connections to probabilistically checkable proofs and semidefinite programming. But the theory of classical CSP
approximations relies on unproven assumptions like \(\mathbf{P} \neq \mathbf{NP}\). Space-bounded streaming algorithms generally seem very weak compared to polynomial-time algorithms, but this gives
us the satisfaction of proving unconditional hardness results — and some CSPs still admit nontrivial streaming approximation algorithms. Secondly, it turns out that exact computation of CSP value is
very hard in the streaming setting. Further, exact computation is hardest for dense instances, which is typical for many streaming problems, while approximation is, interestingly, hardest for sparse
instances, i.e., for instances with \(O(n)\) constraints. This is because of the following well-known “sparsification lemma”, which reduces computing the value (approximately) for arbitrary instances
to computing the value (approximately) for sparse instances:
Lemma (sparsification, informal). Let \(\Psi\) be an instance of a CSP with \(n\) variables and \(m\) constraints. Suppose we construct a new instance \(\Psi’\), also on \(n\) variables, but with \(m
= \Theta(n)\) constraints, by randomly sampling constraints from \(\Psi\). Then with high probability, the values of \(\Psi\) and \(\Psi’\) will be roughly the same.
(To make this lemma formal: For \(\epsilon > 0\), if \(m’ = \Theta(n/\epsilon^2)\), then we get high probability of the values being within an additive \(\pm\epsilon\). In the unweighted case,
“randomly sampling constraints” literally means that each constraint is randomly sampled from \(\Psi\)’s constraints. It is possible to generalize to the weighted case assuming the ratio of maximum
to minimum weights is bounded.)
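A quick empirical illustration of the lemma (our sketch; the instance and parameters are arbitrary):

```python
# Subsample a dense random Max-DICUT instance down to Theta(n / eps^2)
# constraints and compare the two (brute-force) values.
import random
from itertools import product

def value(n, edges):  # brute-force Max-DICUT value, as in the sketch above
    tw = sum(w for *_, w in edges)
    return max(sum(w for (a, b, w) in edges if x[a] == 1 and x[b] == 0)
               for x in product([0, 1], repeat=n)) / tw

random.seed(0)
n, eps = 8, 0.2
dense = [tuple(random.sample(range(n), 2)) + (1.0,) for _ in range(2000)]
sparse = random.choices(dense, k=int(n / eps**2))  # 200 sampled constraints
print(value(n, dense), value(n, sparse))  # typically within ~eps of each other
```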
Because of this sparsification lemma, in the remainder of this post, we will assume for simplicity that all CSP instances on \(n\) variables have \(\Theta(n)\) constraints. (Note we are assuming they
also have \(\Omega(n)\) constraints. The algorithms we describe below will also work for \(o(n)\) constraints, but this case can sometimes be messier.)
Streaming algorithms meet CSPs: Max-CUT and Max-DICUT
It is natural to ask whether streaming algorithms can use the local constraints defining an instance to deduce something about the quality of the best global assignment:
Key question: How much space does a streaming algorithm need to approximate the value of (the best global assignment to) a CSP given a pass over its list of local constraints?
This question was first posed at the 2011 Bertinoro workshop on sublinear algorithms (see the sublinear.info wiki). In this section, we examine this question through the lens of Max-CUT and Max-DICUT
, which are two of the simplest and most widely studied Boolean, binary CSPs.
Streaming CSPs and Max-CUT
For the rest of this blog post, we adopt the “graph” language for describing Max-CUT and Max-DICUT. Thus, in the streaming setting, we are interested in algorithms for Max-CUT and Max-DICUT where the
input is a stream of undirected edges (Max-CUT) or directed edges (Max-DICUT) from a graph, and the goal is to output an approximation to the value of the graph.
Now, we turn to some prior results about streaming algorithms for Max-CUT and Max-DICUT. Recall that streaming algorithms are characterized by the amount of space they use. We will be interested in
three “regimes” of space. We define these regimes using “\(O\)-tilde” notation: \(g(n) = \tilde{O}(f(n))\) if \(g(n) = O(f(n) \cdot \log^C n)\) for some constant \(C>0\). The regimes are as follows.
Large space
We use “large space” to refer to space between \(\Omega(n)\) and \(\tilde{O}(n)\). This space regime is sufficient to store entire input instances in memory! Thus, we can exactly calculate the value
of instances once we see all their constraints, simply by enumerating all possible \(2^n\) global assignments. (Recall that the streaming model places no restrictions on the time usage of
Kapralov and Krachun (STOC’19) showed that for Max-CUT, this algorithm is the best possible: no algorithms using less-than-large space can get a \((1/2+\epsilon)\)-approximation for any \(\epsilon>0
\). (\(1/2\)-approximation is “trivial” since every Max-CUT instance has value at least \(1/2\); indeed, a random assignment has expected value \(1/2\) in any instance.) However, the picture for
Max-DICUT is much more complicated.
Medium space
We use “medium space” to refer to space between \(\Omega(\sqrt n)\) and \(\tilde{O}(\sqrt n)\). This space regime is important because the “birthday paradox” phenomenon kicks in:
Key fact: Medium space is sufficient to store a set \(S\) of variables large enough that we expect that there are constraints involving at least two variables in \(S\).
Indeed, suppose we have an instance \(\Psi\) on \(n\) variables, we pick a random subset \(S \subseteq [n]\) of \(\Theta(\sqrt n)\) variables, and we look at all constraints which involve at least
two variables in \(S\). Each constraint has this property with probability roughly \(\Theta((1/\sqrt n)^2) = \Theta(1/n)\), so by linearity of expectation, we expect roughly \(\Theta(1)\) constraints
to have this property.
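A tiny simulation of this key fact (ours; the constants are arbitrary):

```python
# Birthday paradox: a random set S of Theta(sqrt(n)) vertices already captures
# Theta(1) of the m = Theta(n) edges (both endpoints in S) in expectation.
import random

random.seed(1)
n = m = 10**6
edges = [(random.randrange(n), random.randrange(n)) for _ in range(m)]
S = set(random.sample(range(n), 4 * int(n**0.5)))  # |S| = 4000
captured = sum(1 for (u, v) in edges if u in S and v in S)
print(captured)  # expectation: m * (|S|/n)^2 = 16
```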
This key fact implied the breakdown of certain lower bound techniques for problems like Max-DICUT which worked in less-than-medium space, and it is also the starting point for unlocking improved
approximation algorithms for Max-DICUT once medium space is available, as we’ll discuss below.
Small space
Finally, we use “small space” to refer to space which is \(\tilde{O}(1)\). Surprisingly, a result of Guruswami, Velingker, and Velusamy (APPROX’17, based out of CMU!) showed that even in small space,
there are nontrivial algorithms for Max-DICUT. Chou, Golovnev, and Velusamy (FOCS’20) gave a variant of this algorithm with better approximation guarantees, which they also showed is optimal in
less-than-medium space.^4 These algorithms achieve nontrivial approximations in small space by using an important tool from the literature on streaming algorithms: the small-space streaming
algorithm, from the seminal work of Indyk (2006), for estimating vector norms.
Max-DICUT and bias
The work of Chou et al. left wide open the gap between medium and large space for approximating Max-DICUT. That is: Are there medium-space (or even less-than-large-space) algorithms which get better
approximations than is possible in less-than-medium space? In the next section, I present our affirmative answer to this question, but first, I will introduce a further quantity we will need, which
first showed up in this context in the work of Guruswami et al.
Given an instance \(\Psi\) of Max-DICUT (a.k.a., a directed graph), and a vertex \(i \in \{1,\ldots,n\}\), let \(\outdeg_\Psi(i)\) denote the total weight of edges \(i \to i’\), \(\indeg_\Psi(i)\)
the total weight of edges \(i’ \to i\), and \(\deg_\Psi(i) = \outdeg_\Psi(i) + \indeg_\Psi(i)\) the total weight of edges \(i_1\to i_2\) in which \(i \in \{i_1,i_2\}\). (These are called,
respectively, the out-degree, in-degree, and total-degree of \(i\).) If \(\deg_\Psi(i) > 0\), then we define a scalar quantity called the bias of \(i\): \[ \bias_\Psi(i) := \frac{\outdeg_\Psi(i) - \indeg_\Psi(i)}{\deg_\Psi(i)}. \] Note that \(-1 \leq \bias_\Psi(i) \leq +1\). The quantity \(\bias_\Psi(i)\) captures whether the edges incident to \(i\) are mostly outgoing (\(\bias_\Psi(i) \approx +1\)), mostly incoming (\(\bias_\Psi(i) \approx -1\)), or mixed (\(\bias_\Psi(i) \approx 0\)).
Figure. Visual depictions of three vertices in a directed graph with biases close to \(+1,0,-1\), respectively. Green edges are outgoing and red edges are incoming.
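Computing biases offline is straightforward; here is a small illustrative sketch (names are ours, not from the papers):

```python
# Bias of each vertex from a list of weighted directed edges (u, v, w).
from collections import defaultdict

def biases(edges):
    outdeg, indeg = defaultdict(float), defaultdict(float)
    for u, v, w in edges:
        outdeg[u] += w
        indeg[v] += w
    verts = set(outdeg) | set(indeg)
    return {
        i: (outdeg[i] - indeg[i]) / (outdeg[i] + indeg[i])
        for i in verts if outdeg[i] + indeg[i] > 0
    }

print(biases([(1, 2, 1.0), (1, 3, 1.0), (3, 1, 2.0)]))
# vertex 1: out=2, in=2 -> bias 0; vertex 2: out=0, in=1 -> bias -1;
# vertex 3: out=2, in=1 -> bias 1/3
```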
This concept of bias, which relies crucially on the asymmetry of the predicate (and therefore has no analogue for Max-CUT), is the key to unlocking nontrivial streaming approximation algorithms for
Max-DICUT. Observe that if e.g. \(\bias_\Psi(i) = -1\), then all edges incident to \(i\) are incoming, and therefore, the optimal assignment for \(\Psi\) should assign \(i\) to \(0\).^5 Indeed, an
instance is perfectly satisfiable iff all variables have bias either \(+1\) or \(-1\). What Guruswami et al. showed was that (i) this relationship is “robust”, in that instances with “many large-bias
variables” have large value and vice versa, and (ii) whether an instance has “many large-bias variables” can be quantified using small-space streaming algorithms. Chou et al. gave an algorithm with
better approximation ratios by strengthening the inequalities in (i).
Remark: While we will not require this below, we mention that the notion of “many large-bias variables” is formalized by a quantity called the total bias of \(\Psi\), which is simply the sum over \(i\), weighted by \(\deg_\Psi(i)\), of \(|\bias_\Psi(i)|\). By definition, the total bias is equal to \(\sum_{i=1}^n |\outdeg_\Psi(i)-\indeg_\Psi(i)|\), which is simply the \(1\)-norm of the vector
associated to \(\Psi\) whose \(i\)-th entry is \(\outdeg_\Psi(i)-\indeg_\Psi(i)\)! So the Max-DICUT algorithms of Guruswami et al. and Chou et al. use the small-space \(1\)-norm sketching algorithm
of Indyk as a black-box subroutine to estimate the total bias of the input graph.
Improved algorithms from snapshot estimation
Finally, we turn to the improved streaming algorithm for Max-DICUT from our recent papers (SODA’23, FOCS’23). Our result is the following:
Theorem (Saxena, S., Sudan, Velusamy, FOCS’23). There is a medium-space streaming algorithm for Max-DICUT which achieves an approximation ratio \(\alpha\) strictly larger than the ratio \(\beta\)
possible in less-than-medium space (and achievable in small space).
The various results on streaming approximations for Max-DICUT are collected in the following figure:
Figure. A diagram of the known upper and lower bounds on streaming approximations for Max-DICUT. The exponents of \(0,1/2,1\) on the \(x\)-axis correspond to the small-, medium-, and large-space
regimes; green dots are prior upper bounds, red dots are prior lower bounds, and the blue dot is our new upper bound. Of note, Chou, Golovnev, and Velusamy showed that \(4/9\)-approximations are
achievable in small space and optimal in sub-medium space, while Kapralov and Krachun showed that \(1/2\)-approximations are optimal in sub-large space (where in fact arbitrarily good approximations
are known). Our new algorithm gives a \(0.484\)-approximation, lying strictly between \(4/9\) and \(1/2\).
The snapshot matrix
To present our algorithm, we first need to define a matrix, which we call the snapshot, associated to any directed graph \(\Psi\). This matrix has the property that a certain linear combination of
its entries gives a good approximation to the Max-DICUT value of \(\Psi\) (a better approximation than is possible with a less-than-medium space streaming algorithm). Then, the goal of our algorithm
becomes simply estimating the snapshot.
The snapshot matrix is simply the following. Recall that the interval \([-1,+1]\) is the space of possible biases of a variable in a Max-DICUT instance. Fix a partition \(I_1,\ldots,I_B\) of this interval into a finite number of subintervals. Given this partition, we can partition the (positive-degree) variables in \(\Psi\) into “bias classes”: each vertex \(i \in \{1,\ldots,n\}\) has bias \(\bias_\Psi(i)\) falling into a unique interval \(I_b\) for some \(b \in \{1,\ldots,B\}\). Edges are also partitioned into bias classes: to an edge \(i_1 \to i_2\) in \(\Psi\) we associate the class \((b_1,b_2) \in \{1,\ldots,B\} \times \{1,\ldots,B\}\), where \(b_1\) and \(b_2\) are respectively the classes of \(i_1\) and \(i_2\). The snapshot matrix, which we denote \(\mathsf{Snap}_\Psi \in \mathbb{R}_{\geq 0}^{B \times B}\), is simply the \(B \times B\) matrix which captures the weight of edges in each bias class, i.e., the \((b_1,b_2)\)-th entry is the total weight of edges \(i_1 \to i_2\) with \(\bias_\Psi(i_1) \in I_{b_1}\) and \(\bias_\Psi(i_2) \in I_{b_2}\).
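An exact, offline computation of the snapshot is short; the sketch below reuses the `biases` helper from the earlier sketch and picks \(B\) equal-width subintervals as an illustrative (not canonical) choice of partition:

```python
import numpy as np

def snapshot(edges, B=8):
    """Exact B x B snapshot of a weighted directed graph given as (u, v, w) triples."""
    bias = biases(edges)
    def cls(b):  # index of the equal-width subinterval of [-1, 1] containing b
        return min(int((b + 1) / 2 * B), B - 1)
    snap = np.zeros((B, B))
    for u, v, w in edges:
        snap[cls(bias[u]), cls(bias[v])] += w
    return snap
```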
Aside: Oblivious algorithms
At this point, we can “black-box” the notion of snapshot, since our algorithmic goal is now only to estimate the snapshot. However, to give intuition for the snapshot and show why it lets us achieve
good approximations for Max-DICUT, we first take a detour into describing a simple class of “local” algorithms for Max-DICUT. These algorithms, called oblivious algorithms, were introduced by Feige
and Jozeph (Algorithmica’17). Again, fix a partition of the space of possible biases \([-1,+1]\) into intervals \(I_1,\ldots,I_B\). For each interval \(I_b\), also fix a probability \(p_b\). Now an
oblivious algorithm is one which, given an instance \(\Psi\), inspects each variable \(i\) independently and randomly sets it to \(1\) with probability \(p_b\), where \(b\) is the class of \(i\), and
\(0\) otherwise. These algorithms are “oblivious” in the sense that they ignore everything about each variable except its bias.
As discussed in the previous section, in Max-DICUT, if a variable has bias \(+1\), we always “might as well” assign it to \(1\), and if it has bias \(-1\), we “might as well” assign it to \(0\).
Oblivious algorithms flesh out this connection by choosing how to assign every variable based on its bias. For instance, if a variable has bias \(+0.99\), we should still want to assign it to \(1\)
(at least with large probability).
Feige and Jozeph showed that for a specific choice of the partition \((I_b)\) and probabilities \((p_b)\), the oblivious algorithm gives a good approximation to the overall Max-DICUT value. In
particular, we realized the ratio achieved by their oblivious algorithm is strictly better than what Chou et al. showed was possible with a less-than-medium space streaming algorithm. (In a paper of
mine at APPROX’23, I generalized this definition and the corresponding algorithmic result to Max-\(k\)AND for all \(k \geq 2\).) Thus, to give improved streaming algorithms it suffices to “simulate”
oblivious algorithms (and in particular the oblivious algorithm of Feige and Jozeph).
Figure. The specific choice of bias partition \(I\) and probabilities \(\pi\) employed by Feige and Jozeph to achieve a \(0.483\)-approximation for Max-DICUT. Here, these two objects are presented
together as a single step function, with bias on the horizontal axis and probability on the vertical axis. This choice deterministically rounds vertices with bias \(\geq +1/2\) to \(1\) and \(\leq -1/2\) to \(0\), and it performs a (discretized version of a) linear interpolation between these extremes for vertices with bias closer to \(0\).
The key observation is then that to simulate an oblivious algorithm on an instance \(\Psi\), it suffices to only know (or estimate) the snapshot of \(\Psi\). Indeed, every edge of class \(b_1, b_2\)
is satisfied with probability \((\pi_{b_1})(1-\pi_{b_2})\) (the first factor is the probability that the first endpoint is assigned to \(1\), the second the probability that the second endpoint is
assigned to \(0\), and these two events are independent). Thus, by linearity of expectation, the expected weight of the constraints satisfied by the oblivious algorithm is
\[ \mathbb{E}\left[\mathsf{Obl}(\Psi)\right] = \sum_{b_1,b_2 = 1}^B (\pi_{b_1})(1-\pi_{b_2}) \cdot \mathsf{Snap}_\Psi(b_1,b_2), \] where the expectation is taken over the randomness of the oblivious algorithm.
The upshot of this for us is that to estimate the value of an instance \(\Psi\), it suffices to calculate some linear function of this snapshot matrix \(\mathsf{Snap}_\Psi\). Another important consequence of
this formula is that it allowed Feige and Jozeph to determine the approximation ratio of any oblivious algorithm using a linear program which minimizes the weight of constraints satisfied over all
valid snapshots.^6
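Expressed in code, the formula above is a single weighted sum over the snapshot entries; a minimal sketch (variable names ours), assuming one rounding probability `pi[b]` per bias class:

```python
import numpy as np

def oblivious_expected_value(snap, pi):
    """Expected satisfied weight of the oblivious algorithm, from the snapshot."""
    pi = np.asarray(pi)
    # entry (b1, b2) of np.outer(pi, 1 - pi) is pi[b1] * (1 - pi[b2]),
    # the probability that an edge of class (b1, b2) is satisfied
    return float(np.sum(np.outer(pi, 1 - pi) * snap))
```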
A medium-space algorithm and “smoothing” the snapshot
At this point, our goal is to use streaming algorithms to estimate a linear function of the entries of the snapshot \(\mathsf{Snap}_\Psi\). To calculate this function up to a (normalized) \(\pm \epsilon\),
it suffices to estimate each entry of the snapshot up to \(\pm \epsilon/B^2\). \(B\) is a constant and so, reparametrizing \(\epsilon\), we seek an algorithm to estimate a given entry of the snapshot
up to \(\pm \epsilon\) error.
Recall that the \((b_1,b_2)\)-th entry of the snapshot of \(\Psi\) is the weight of edges in \(\Psi\) with bias class \((b_1,b_2)\), i.e., the weight of edges from bias class \(b_1\) to bias class \(b_2\). To estimate this, we would ideally sample a random set \(E\) of \(T = O(1)\) edges in \(\Psi\), measure the biases of their endpoints, and then use the fraction of edges in the sample with
bias class \((b_1,b_2)\) as an estimate for the total fraction of edges with this bias class. But it is not clear how to use a streaming algorithm to randomly sample a small set of edges and measure
the biases of their endpoints simultaneously.^7 Indeed, this cannot be possible in small space, since we know via Chou et al.’s lower bound that medium space is necessary for improved Max-DICUT
approximations, and therefore for snapshot estimation! In this final section, we describe how we are able to estimate the snapshot using medium space.
Algorithm for bounded-degree graphs
First, suppose we were promised that in \(\Psi\), every vertex has degree at most \(D\), and \(D = O(1)\). An algorithm to estimate the \((b_1,b_2)\)-th entry of the snapshot of \(\Psi\) in this case
is the following:
1. Before the stream, sample a set \(S \subseteq \{1,\ldots,n\}\) of \(k\) random vertices, where \(k\) is a parameter to be chosen later.
2. During the stream, (i) store all edges whose endpoints are both in \(S\), and (ii) measure the biases of each vertex in \(S\).
3. After the stream, take \(E\) to be the set of edges whose endpoints are both in \(S\). Observe that we know the biases of the endpoints of all edges in \(E\), and therefore the bias class of
every edge in \(E\). Use the number of edges in \(E\) in bias class \((b_1,b_2)\) to estimate the total number of edges in \(\Psi\) in this bias class.
Observe that the expected number of edges in \(E\) is \(\sim m (k/n)^2\), where \(m\) is the number of edges in \(\Psi\). If \(m = O(n)\), then \(|E| = \Omega(1)\) (in expectation) as long as \(k = \Omega(\sqrt n)\), which is precisely why this algorithm “kicks in” once we have medium space!^8 Once \(S\) is this large, we can indeed show that \(E\) suffices to estimate the snapshot. The proof
of correctness of the estimate relies on bounded dependence of \(E\), by which we mean that in the collection of events \(\{e \in E\}_{e \in \Psi}\), each event is independent of all but \(O(1)\)
other events. Indeed, observe that since \(\Psi\) has maximum degree \(D\), every edge in \(\Psi\) is incident to \(\leq 2D-1\) other edges. (Two edges are incident if they share at least one
endpoint.) And for any two edges \(e, e’ \in \Psi\), the events “\(e \in E\)” and “\(e’ \in E\)” are not independent iff \(e\) and \(e’\) are incident.
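A compact sketch of steps 1–3 above, with the stream modeled as an iterable of weighted edges and the bias-class helper mirroring the earlier sketches (names are ours, not the paper's; the final rescaling by the class fraction among stored edges is one reasonable estimator, not necessarily the paper's exact one):

```python
import random
from collections import defaultdict

def estimate_snapshot_entry_stream(edge_stream, n, k, cls_pair, B=8):
    S = set(random.sample(range(n), k))      # step 1: sample S up front
    outdeg, indeg = defaultdict(float), defaultdict(float)
    stored = []
    m = 0.0
    for u, v, w in edge_stream:              # step 2: single pass
        m += w
        if u in S: outdeg[u] += w            # track biases of vertices in S
        if v in S: indeg[v] += w
        if u in S and v in S:                # store S-internal edges (the set E)
            stored.append((u, v, w))
    def cls(i):
        b = (outdeg[i] - indeg[i]) / (outdeg[i] + indeg[i])
        return min(int((b + 1) / 2 * B), B - 1)
    # step 3: scale the in-class fraction of E up to the whole instance
    inside = sum(w for u, v, w in stored if (cls(u), cls(v)) == cls_pair)
    total = sum(w for _, _, w in stored)
    return m * inside / total if total else 0.0
```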
The general case
General instances \(\Psi\) need not have bounded maximum degree. This poses a serious challenge for the bounded-degree algorithm we just presented. Consider the case where \(\Psi\) is a “star”, where
each edge connects a designated center vertex \(i^*\) to one of the remaining vertices. In this situation, not every vertex is created equal. Indeed, if \(i^* \not\in S\) (which happens
asymptotically almost surely), \(E\) will be empty, and therefore we learn nothing about \(\Psi\)’s snapshot.
Figure. An example graph with a highlighted subset of vertices \(S\) (green). Only edges with both endpoints in \(S\) are placed in \(E\) — in this case, there is only a single solid edge. All other
edges are not in \(E\). There is a high-degree vertex (\(1\)) which we would ideally put in \(S\): since it is adjacent to so many other vertices, adding it to \(S\) would make \(E\) much larger.
To deal with this issue, the algorithm must become substantially more complex. We design the new algorithm to treat vertices of different degrees differently, giving “higher priority” to storing
high-degree vertices, and it also captures more information than the above algorithm — in particular, it stores edges that have one endpoint in the “sampled set”, as opposed to both.
Our new algorithm aims to estimate a more detailed object than the snapshot itself, which we call the refined snapshot of \(\Psi\). To define this object, we also choose a partition into intervals \(J_1,\ldots,J_D\) of the space \([0,O(n)]\) of possible degrees. (We only need that each interval has ratio \(O(1)\) between the minimum and maximum degrees it contains. For simplicity, we pick the intervals to be powers of two: \([1,2), [2,4), [4,8),\ldots\).) This lets us define a unique degree class in \(\{1,\ldots,D\}\) for every vertex, and a corresponding degree class in \(\{1,\ldots,D\}^2\) for every edge. Now the refined snapshot is a four-dimensional array \(\mathsf{RSnap}_\Psi \in \mathbb{R}^{D^2 \times B^2}\), whose \((d_1,d_2,b_1,b_2)\)-th entry is the number of edges in \(\Psi\) with degree class \((d_1,d_2)\) and bias class \((b_1,b_2)\).
Now, how do we estimate entries of this refined snapshot, i.e., estimate the number of edges in \(\Psi\) with degree class \((d_1,d_2)\) and bias class \((b_1,b_2)\)? First, we sample a subset \(\Phi_1 \subseteq \Psi\) of \(\Psi\)’s edges, which I’ll call a slice, in the following way:
1. Sample a set \(S_1\) of vertices by including each vertex in \(\{1,\ldots,n\}\) independently w.p. \(p_1\).
2. Sample a set \(H_1\) of edges in \(\Psi\) by including each edge in \(\Psi\) independently w.p. \(q_1\).
3. \(\Phi_1\) consists of edges in \(H_1\) with at least one vertex in \(S_1\).
Here, \(p_1\) and \(q_1\) are two parameters that depend only on the degree class \(d_1\). We claim that a streaming algorithm can sample a slice (this follows from the definitions), and we observe
that this slice can be stored in medium space assuming that \(p_1 q_1 = \tilde{O}(1/\sqrt n)\), since \(\Psi\) has \(O(n)\) edges and therefore \(\Phi_1\) has \(O(p_1q_1n)\) edges in expectation. We
repeat the above process to produce a second slice \(\Phi_2\), with corresponding parameters \(p_2,q_2\), and then use the slices \(\Phi_1,\Phi_2\) to calculate our estimate of the snapshot.
The choices of \(p_1,q_1,p_2,q_2\) are delicate. Taking \(p_1,q_1\) as an example, if the highest degree in class \(J_{d_1}\) is constant, then we pick \(p_1=\Theta(1/\sqrt n)\) and \(q_1 = 1\), and
our algorithm recovers the bounded-degree algorithm above. But in general, \(q_1\) is chosen so that vertices in degree-class \(J_{d_1}\) have expected constant degree in \(H_1\), which allows us to
recover similar “bounded dependence” behavior to the bounded-degree algorithm and therefore get concentration in the estimate.
But still, how does the algorithm use the slices \(\Phi_1,\Phi_2\) to estimate the snapshot entry? Let \(W_1\) denote the set of “target” vertices in \(\Psi\) which actually have bias class \(b_1\)
and degree class \(d_1\). Similarly, define \(W_2\) as the “target” vertices in bias class \(b_2\) and degree class \(d_2\). The \((d_1,d_2,b_1,b_2)\)-th entry of the snapshot is then simply \(|\Psi \cap (W_1 \times W_2)|\). Let \(V_1 = W_1 \cap S_1\) and \(V_2 = W_2 \cap S_2\). Suppose that the algorithm, in addition to the slices \(\Phi_1,\Phi_2\), received \(V_1,V_2\) as its input. Now note that for any edge \(e = (v_1,v_2) \in \Psi \cap (W_1 \times W_2)\), the event “\(e \in \Phi_1 \cap (V_1 \times V_2)\)” has probability \(p_1 p_2 q_1\), since the events “\(v_1 \in S_1\)”, “\(v_2 \in S_2\)”, and “\(e \in H_1\)” are all independent. We could therefore hope to use \(|\Phi_1 \cap (V_1 \times V_2)|\) to estimate the snapshot entry;^9 indeed (assuming that \(d_1 < d_2\)), this turns
out to be true, and the proof goes by first conditioning on \(H_1\), and then arguing that given \(H_1\), degrees are sufficiently small to imply bounded dependence of which edges are in \(\Phi_1\)
over the choice of \(S_1,S_2\).
But unfortunately, the algorithm does not get to see the actual sets \(V_1\) and \(V_2\). Instead, we have to employ certain “proxy” sets \(\hat{V}_1,\hat{V}_2\). To define these sets, observe that
in the graph \(H_1\), for every vertex \(v \in \{1,\ldots,n\}\), \[ \mathbb{E}_{H_1}[\deg_{H_1}(v)] = q_1 \cdot \deg_{\Psi}(v). \] Thus, by just looking at the slice \(\Phi_1\), we can estimate the
degree of every vertex in \(S_1\). We can similarly estimate the bias, since \[ \mathbb{E}_{H_1}[\bias_{H_1}(v)] = \bias_\Psi(v). \] So, given \(\Phi_1\) we can define a set \(\hat{V}_1 \subseteq \{1,\ldots,n\}\) of vertices in \(S_1\) which appear to have bias class \(b_1\) and degree class \(d_1\), based on their estimated degrees and biases in the slice. \(\hat{V}_1\) is an “estimate” for \(V_1\), and similarly we can define \(\hat{V}_2\) “estimating” \(V_2\) using the second slice \(\Phi_2\).
Smoothing the snapshot
There is an additional complication caused by using “estimated” sets \(\hat{V}_1,\hat{V}_2\) instead of the actual sets \(V_1,V_2\): It is not improbable for there to be “extra” or “missing” vertices
in the estimated sets. Suppose, for instance, there is a vertex \(v\) which is in degree class \(d_1+1\), but whose degree is close to the lower limit of the interval \(J_{d_1+1}\). Then \(v\) is by
definition not in \(V_1\), but depending on the randomness of \(H_1\), it could end up in \(\hat{V}_1\) with decent probability. This means we actually cannot estimate any particular entry of the
refined snapshot with good probability!
To deal with this issue, we slightly modify the underlying problem we are trying to solve: Instead of aiming to directly estimate the refined snapshot, we aim to estimate a “smoothed” version of this
snapshot, where the entries “overlap”, in that each entry captures edges whose bias and degree classes fall into certain “windows”. More precisely, for some window-size parameter \(w\), the \((d_1,d_2,b_1,b_2)\)-th entry captures the number of edges whose degree class is in \(\{d_1-w,\ldots,d_1+w\} \times \{d_2-w,\ldots,d_2+w\}\) and whose bias class is in \(\{b_1-w,\ldots,b_1+w\} \times \{b_2-w,\ldots,b_2+w\}\). Each particular edge will fall into many (\(\sim w^4\)) of these windows, meaning that any errors from mistakenly shifting a vertex into adjacent bias or degree classes
are “averaged out” for sufficiently large \(w\). Finally, we show that estimating the “smoothed” snapshot is still sufficient to estimate the Max-DICUT value using a continuity argument, essentially
because slightly perturbing vertices’ biases cannot modify the Max-DICUT value too much.
Several interesting open questions remain after the above results on streaming algorithms for Max-DICUT. Firstly, it would be interesting to extend these results to other CSPs besides Max-DICUT. For
instance, we know of analogues for oblivious algorithms for Max-\(k\)AND for all \(k \geq 2\), but whether there are snapshot estimation algorithms that “implement” these oblivious algorithms in
less-than-large space is an open question. Also, there is a yawning gap between medium and large space. Proving any approximation impossibility result, or constructing better approximation
algorithms, in the between-medium-and-large space regime would be very exciting. We do mention that the snapshot-based approach cannot give optimal (i.e., ratio-\(1/2\)) approximations for Max-DICUT
because of another result of Feige and Jozeph, namely, a pair of graphs \(\Psi,\Phi\) which have the same snapshot, but the ratio of their Max-DICUT values is strictly less than \(1/2\).
J. Boyland, M. Hwang, T. Prasad, N. Singer, and S. Velusamy, “On sketching approximations for symmetric Boolean CSPs,” in Approximation, Randomization, and Combinatorial Optimization. Algorithms and
Techniques, A. Chakrabarti and C. Swamy, Eds., in LIPIcs, vol. 245. Schloss Dagstuhl — Leibniz-Zentrum für Informatik, Jul. 2022, p. 38:1–38:23. doi: 10.4230/LIPIcs.APPROX/RANDOM.2022.38.
C.-N. Chou, A. Golovnev, and S. Velusamy, “Optimal Streaming Approximations for all Boolean Max-2CSPs and Max-\(k\)SAT,” in IEEE 61st Annual Symposium on Foundations of Computer Science, IEEE
Computer Society, Nov. 2020, pp. 330–341. doi: 10.1109/FOCS46700.2020.00039.
U. Feige and S. Jozeph, “Oblivious Algorithms for the Maximum Directed Cut Problem,” Algorithmica, vol. 71, no. 2, pp. 409–428, Feb. 2015, doi: 10.1007/s00453-013-9806-z.
V. Guruswami, A. Velingker, and S. Velusamy, “Streaming Complexity of Approximating Max 2CSP and Max Acyclic Subgraph,” in Approximation, randomization, and combinatorial optimization. Algorithms and
techniques, K. Jansen, J. D. P. Rolim, D. Williamson, and S. S. Vempala, Eds., in LIPIcs, vol. 81. Schloss Dagstuhl — Leibniz-Zentrum für Informatik, Aug. 2017, p. 8:1-8:19. doi: 10.4230/
P. Indyk, “Stable distributions, pseudorandom generators, embeddings, and data stream computation,” J. ACM, vol. 53, no. 3, pp. 307–323, May 2006, doi: 10.1145/1147954.1147955
M. Kapralov, S. Khanna, and M. Sudan, “Streaming lower bounds for approximating MAX-CUT,” in Proceedings of the 26th Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and
Applied Mathematics, Jan. 2015, pp. 1263–1282. doi: 10.1137/1.9781611973730.84.
M. Kapralov and D. Krachun, “An optimal space lower bound for approximating MAX-CUT,” in Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, Association for Computing
Machinery, Jun. 2019, pp. 277–288. doi: 10.1145/3313276.3316364.
N. G. Singer, “Oblivious algorithms for the Max-\(k\)AND problem,” in Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, N. Megow and A. D. Smith, Eds., in
LIPIcs, vol. 275. May 2023. doi: 10.4230/LIPIcs.APPROX/RANDOM.2023.15.
R. R. Saxena, N. G. Singer, M. Sudan, and S. Velusamy, “Streaming complexity of CSPs with randomly ordered constraints,” in Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms,
Jan. 2023. doi: 10.1137/1.9781611977554.ch156.
R. R. Saxena, N. Singer, M. Sudan, and S. Velusamy, “Improved streaming algorithms for Maximum Directed Cut via smoothed snapshots,” in IEEE 63rd Annual Symposium on Foundations of Computer Science,
IEEE Computer Society, 2023, pp. 855–870. doi: 10.1109/FOCS57990.2023.00055.
More precisely, this typically means that the object is “far from” the set of objects having \(P\) in some mathematical sense. For instance, if the objects are graphs and the property \(P\) is the
graph property of bipartiteness, “really not having \(P\)” might mean that many edges in the graph must be added or deleted in order to get \(P\) to hold.
This is in contrast to more traditional areas of theory, such as time complexity, where many impossibility results are “conditional” on conjectures like \(\mathbf{P} \neq \mathbf{NP}\).
It is also interesting to study minimization versions of CSPs (i.e., trying to minimize the number of unsatisfied constraints), but that is out of scope for this post.
Specifically, Chou et al. showed a sharp threshold in the space needed for \(4/9\)-approximations. The analysis of their algorithm was subsequently simplified in a joint work of mine with Boyland,
Hwang, Prasad, and Velusamy (APPROX’22).
More precisely, there exists an optimal assignment with this property.
This is an oversimplification: The goal is to minimize the approximation ratio (i.e., the value of the oblivious assignment over the value of the optimal assignment). However, Feige and Jozeph
observe that under a symmetry assumption for \(\pi\), it suffices to only minimize over instances where (i) the (unnormalized) value of the instance is \(1\) and (ii) the all-\(1\)’s assignment is
optimal. Given (i), the algorithm’s ratio on an instance is simply the (unnormalized) expected value of the assignment produced by the oblivious algorithm, and (i) and (ii) together can be
implemented as an additional linear constraint in the LP.
This task is easier in some “nonstandard” streaming models. Firstly, suppose we were guaranteed that the edges showed up in the stream in a uniformly random order. Then since the first \(T\) edges in
the stream are a random sample of \(\Psi\)’s edges, we could simply use these edges for our set \(E\), and then record the biases of their endpoints over the remainder of the stream. Alternatively,
suppose we were allowed two passes over the stream of edges. We could then use the first pass to sample \(T\) random edges \(E\), and use the second pass to measure the biases of their endpoints.
Both of these algorithms use small space, since we are only sampling a constant number of edges.
To avoid having to sample \(S\) upfront and store it, it turns out to be instead sufficient to use a \(4\)-wise independent hash function.
It turns out to be important for the concentration bounds that we use the slice with smaller degree, e.g., if \(d_1 < d_2\) then we count edges in \(\Phi_1\). In this case, if we instead counted
edges in \(\Phi_2\), the expectation would be \(O(p_1 p_2 q_2 m)\), which could be smaller than \(1\) if \(d_2\) is very large.
More precisely, for all \(\epsilon>0\) these algorithms output some value \(\hat{v}\) satisfying \(\hat{v} \in (1\pm\epsilon) \|\mathbf{v}\|_p\) with high probability, and use \(O(\log n/\epsilon^{O(1)})\) space.
Re: [tlaplus] Meta-theorem (induction lemma) in TLA+
I want to prove a meta-theorem (or induction lemma, tactic) for a specification and reuse it in other proofs, like Coq.
I wonder if it's possible or not.
Consider the following spec:
vars == <state variables>
Act1 == ...
Act2 == ...
Next == Act1 \/ Act2
Spec == Init /\ [][Next]_vars
Then we expect the following lemma to hold for any non-temporal formula "Invariant", and we want to use it to prove, say, the spec's type invariance.
(I know it's too simple; actually, I want to consider more complicated cases.)
I was able to prove the lemma, but I could not "apply" the lemma to the type invariant theorem.
Is there anything wrong?
LEMMA SpecInduction ==
  ASSUME NEW Invariant,
         ASSUME Init PROVE Invariant,
         ASSUME Invariant, Act1 PROVE Invariant',
         ASSUME Invariant, Act2 PROVE Invariant'
  PROVE  Spec => []Invariant
<1>1. Init => Invariant OBVIOUS
<1>2. Invariant /\ Next => Invariant' OBVIOUS
<1>3. Invariant /\ UNCHANGED vars => Invariant' OBVIOUS
<1> QED BY PTL, <1>1, <1>2, <1>3 DEF Spec
3-28 REFLECTIONS ON A TRANSMISSION LINE

Transmission line characteristics are based on an infinite line. A line cannot always be terminated in
its characteristic impedance since it is sometimes operated as an OPEN-ENDED line and other times as a
SHORT-CIRCUIT at the receiving end. If the line is open-ended, it has a terminating impedance that is
infinitely large. If a line is not terminated in characteristic impedance, it is said to be finite.
When a line is not terminated in Z[0], the incident energy is not absorbed but is returned along the only
path available: the transmission line. Thus, the behavior of a finite line may be quite different from that of the infinite line.

REFLECTION OF DC VOLTAGE FROM AN OPEN CIRCUIT
The equivalent circuit of an open-ended transmission line is shown in figure 3-24, view A. Again,
losses are to be considered as negligible, and L is lumped in one branch. Assume that (1) the battery in
this circuit has an internal impedance equal to the characteristic impedance of the transmission line
(Z[i] = Z[0]); (2) the capacitors in the line are not charged before the battery is connected; and (3) since the line is open-ended, the terminating impedance is infinitely large.
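As a hedged aside, the excerpt stops short of quantifying the reflection; the formula below is standard transmission-line theory rather than something stated in this passage, and it makes the open-ended and short-circuit cases concrete:

```python
# Voltage reflection coefficient of a line with characteristic impedance Z0
# terminated in load ZL: gamma = (ZL - Z0) / (ZL + Z0).
def reflection_coefficient(ZL, Z0=50.0):
    if ZL == float('inf'):      # open-ended line
        return 1.0
    return (ZL - Z0) / (ZL + Z0)

print(reflection_coefficient(float('inf')))  # open circuit: +1 (full reflection)
print(reflection_coefficient(0.0))           # short circuit: -1 (inverted reflection)
print(reflection_coefficient(50.0))          # matched line:   0 (no reflection)
```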
A development of particle-based Eulerian property index (EPI) and an analysis of water droplet particles with phase change
Abstract (Summary)
The Lagrangian approach is mostly used to analyze particle-laden flow characteristics. Complexity and high computational cost are the drawbacks associated with the Lagrangian approach; to avoid these, the Eulerian approach is used as an alternative technique for the dispersed phase. The objective of the present study is to develop Eulerian indices to observe fluid flow characteristics laden with particles. The developed Eulerian indices, such as residence time, travel distance, and mean centrifugal force, are evaluated for particle-based fluid flow, for swirling motion in a cylinder and for a natural-draft cooling tower. The Eulerian property indices are computed for different droplet diameter cases, and the results are compared with Lagrangian property indices. The flow characteristics in cyclone separators are analyzed by using these developed indices. The effect of inlet velocity and vortex finder length in the cyclone separator is observed for the tangential velocity, mean centrifugal force, and separation efficiency. When performing particle simulations, it is important to perform statistical analysis based on modeling such as a particle diameter distribution in order to obtain more realistic results. However, using a particle diameter distribution is difficult with the Eulerian approach, because it requires very complex modeling. For this reason, the Lagrangian approach is used as an alternative technique for the dispersed phase. As a result, statistical prediction and flow field analysis were performed based on Lagrangian droplet particles with phase change from a cooling tower under a background air flow. The statistical tendencies of the particle temperature and diameter were investigated for Lagrangian water droplets according to the background air temperature and gravitational effects. As the particle diameter distribution model, the Rosin-Rammler distribution was used for about 2.5 million droplet particles. Horseshoe vortices were observed behind the cooling tower, which especially affect small droplets by increasing condensation via enhanced mixing with the cold background air. It was observed that the gravity effect increases the strength of the horseshoe vortices.
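As a hedged illustration of the droplet-size model named above (the parameter values are invented for the example, not taken from the thesis), inverse-transform sampling from a Rosin-Rammler distribution looks like this:

```python
import numpy as np

def rosin_rammler_sample(size, d_mean=50e-6, spread=3.5, rng=None):
    """Draw droplet diameters with CDF F(d) = 1 - exp(-(d/d_mean)**spread)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(0.0, 1.0, size)
    # invert the CDF: d = d_mean * (-ln(1 - u))**(1/spread)
    return d_mean * (-np.log1p(-u)) ** (1.0 / spread)

d = rosin_rammler_sample(2_500_000)   # ~2.5 million droplets, as in the study
```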
10.29: Hypothesis Test for a Difference in Two Population Means (1 of 2)
Learning Objectives
• Under appropriate conditions, conduct a hypothesis test about a difference between two population means. State a conclusion in context.
Using the Hypothesis Test for a Difference in Two Population Means
The general steps of this hypothesis test are the same as always. As expected, the details of the conditions for use of the test and the test statistic are unique to this test (but similar in many
ways to what we have seen before.)
Step 1: Determine the hypotheses.
The hypotheses for a difference in two population means are similar to those for a difference in two population proportions. The null hypothesis, H[0], is again a statement of “no effect” or “no difference.”
• H[0]: μ[1] – μ[2] = 0, which is the same as H[0]: μ[1] = μ[2]
The alternative hypothesis, H[a], can be any one of the following.
• H[a]: μ[1] – μ[2] < 0, which is the same as H[a]: μ[1] < μ[2]
• H[a]: μ[1] – μ[2] > 0, which is the same as H[a]: μ[1] > μ[2]
• H[a]: μ[1] – μ[2] ≠ 0, which is the same as H[a]: μ[1] ≠ μ[2]
Step 2: Collect the data.
As usual, how we collect the data determines whether we can use it in the inference procedure. We have our usual two requirements for data collection.
• Samples must be random to remove or minimize bias.
• Samples must be representative of the populations in question.
We use this hypothesis test when the data meets the following conditions.
• The two random samples are independent.
• The variable is normally distributed in both populations. If this variable is not known, samples of more than 30 will have a difference in sample means that can be modeled adequately by the
t-distribution. As we discussed in “Hypothesis Test for a Population Mean,” t-procedures are robust even when the variable is not normally distributed in the population. If checking normality in
the populations is impossible, then we look at the distribution in the samples. If a histogram or dotplot of the data does not show extreme skew or outliers, we take it as a sign that the
variable is not heavily skewed in the populations, and we use the inference procedure. (Note: This is the same condition we used for the one-sample t-test in “Hypothesis Test for a Population Mean.”)
Step 3: Assess the evidence.
If the conditions are met, then we calculate the t-test statistic. The t-test statistic has a familiar form:

$T = \dfrac{(\bar{x}_{1} - \bar{x}_{2}) - (\mu_{1} - \mu_{2})}{\sqrt{\dfrac{s_{1}^{2}}{n_{1}} + \dfrac{s_{2}^{2}}{n_{2}}}}$
Since the null hypothesis assumes there is no difference in the population means, the expression (μ[1] – μ[2]) is always zero.
As we learned in “Estimating a Population Mean,” the t-distribution depends on the degrees of freedom (df). In the one-sample and matched-pair cases df = n – 1. For the two-sample t-test, determining
the correct df is based on a complicated formula that we do not cover in this course. We will either give the df or use technology to find the df. With the t-test statistic and the degrees of
freedom, we can use the appropriate t-model to find the P-value, just as we did in “Hypothesis Test for a Population Mean.” We can even use the same simulation.
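As a sketch (not part of the original page), the Step 3 computation can be carried out directly from summary statistics; the degrees-of-freedom expression below is the standard Welch-Satterthwaite approximation, which we take to be the "complicated formula" the text alludes to:

```python
import math
from scipy import stats

def two_sample_t(x1, s1, n1, x2, s2, n2):
    """Two-sample t statistic, Welch df, and one-sided (greater-than) P-value."""
    se1, se2 = s1**2 / n1, s2**2 / n2
    t = (x1 - x2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1**2 / (n1 - 1) + se2**2 / (n2 - 1))
    return t, df, stats.t.sf(t, df)   # sf gives the right-tail area
```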
Step 4: State a conclusion.
To state a conclusion, we follow what we have done with other hypothesis tests. We compare our P-value to a stated level of significance.
• If the P-value ≤ α, we reject the null hypothesis in favor of the alternative hypothesis.
• If the P-value > α, we fail to reject the null hypothesis. We do not have enough evidence to support the alternative hypothesis.
As always, we state our conclusion in context, usually by referring to the alternative hypothesis.
“Context and Calories”
Does the company you keep impact what you eat? This example comes from an article titled “Impact of Group Settings and Gender on Meals Purchased by College Students” (Allen-O’Donnell, M., T. C.
Nowak, K. A. Snyder, and M. D. Cottingham, Journal of Applied Social Psychology 49(9), 2011, onlinelibrary.wiley.com/doi/10.1111/j.1559-1816.2011.00804.x/full). In this study, researchers examined
this issue in the context of gender-related theories in their field. For our purposes, we look at this research more narrowly.
Step 1: Stating the hypotheses.
In the article, the authors make the following hypothesis. “The attempt to appear feminine will be empirically demonstrated by the purchase of fewer calories by women in mixed-gender groups than by
women in same-gender groups.” We translate this into a simpler and narrower research question: Do women purchase fewer calories when they eat with men compared to when they eat with women?
Here the two populations are “women eating with women” (population 1) and “women eating with men” (population 2). The variable is the calories in the meal. We test the following hypotheses at the 5%
level of significance.
The null hypothesis is always H[0]: μ[1] – μ[2] = 0, which is the same as H[0]: μ[1] = μ[2].
The alternative hypothesis H[a]: μ[1] – μ[2] > 0, which is the same as H[a]: μ[1] > μ[2].
Here μ[1] represents the mean number of calories ordered by women when they were eating with other women, and μ[2] represents the mean number of calories ordered by women when they were eating with
Note: It does not matter which population we label as 1 or 2, but once we decide, we have to stay consistent throughout the hypothesis test. Since we expect the number of calories to be greater for
the women eating with other women, the difference is positive if “women eating with women” is population 1. If you prefer to work with positive numbers, choose the group with the larger expected mean
as population 1. This is a good general tip.
Step 2: Collect Data.
As usual, there are two major things to keep in mind when considering the collection of data.
• Samples need to be representative of the population in question.
• Samples need to be random in order to remove or minimize bias.
Representative Samples?
The researchers state their hypothesis in terms of “women.” We did the same. But the researchers gathered data by watching people eat at the HUB Rock Café II on the campus of Indiana University of
Pennsylvania during the Spring semester of 2006. Almost all of the women in the data set were white undergraduates between the ages of 18 and 24, so there are some definite limitations on the scope
of this study. These limitations will affect our conclusion (and the specific definition of the population means in our hypotheses.)
Random Samples?
The observations were collected on February 13, 2006, through February 22, 2006, between 11 a.m. and 7 p.m. We can see that the researchers included both lunch and dinner. They also made observations
on all days of the week to ensure that weekly customer patterns did not confound their findings. The authors state that “since the time period for observations and the place where [they] observed
students were limited, the sample was a convenience sample.” Despite these limitations, the researchers conducted inference procedures with the data, and the results were published in a reputable
journal. We will also conduct inference with this data, but we also include a discussion of the limitations of the study with our conclusion. The authors did this, also.
Do the data meet the conditions for use of a t-test?
The researchers reported the following sample statistics.
• In a sample of 45 women dining with other women, the average number of calories ordered was 850, and the standard deviation was 252.
• In a sample of 27 women dining with men, the average number of calories ordered was 719, and the standard deviation was 322.
One of the samples has fewer than 30 women. We need to make sure the distribution of calories in this sample is not heavily skewed and has no outliers, but we do not have access to a spreadsheet of
the actual data. Since the researchers conducted a t-test with this data, we will assume that the conditions are met. This includes the assumption that the samples are independent.
Step 3: Assess the evidence.
As noted previously, the researchers reported the following sample statistics.
• In a sample of 45 women dining with other women, the average number of calories ordered was 850, and the standard deviation was 252.
• In a sample of 27 women dining with men, the average number of calories ordered was 719, and the standard deviation was 322.
To compute the t-test statistic, make sure sample 1 corresponds to population 1. Here our population 1 is “women eating with other women.” So x[1] = 850, s[1] = 252, n[1] =45, and so on.
$T = \dfrac{850 - 719}{\sqrt{\dfrac{252^{2}}{45} + \dfrac{322^{2}}{27}}} \approx \dfrac{131}{72.47} \approx 1.81$
Using technology, we determined that the degrees of freedom are about 45 for this data. To find the P-value, we use our familiar simulation of the t-distribution. Since the alternative hypothesis is
a “greater than” statement, we look for the area to the right of T = 1.81. The P-value is 0.0385.
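Plugging the reported summary statistics into the sketch above reproduces these values (a check of our own, not part of the original study):

```python
t, df, p = two_sample_t(850, 252, 45, 719, 322, 27)
print(round(t, 2), round(df), round(p, 4))   # ~1.81, ~45, ~0.0385
```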
Step 4: State a conclusion.
Generic Conclusion
The hypotheses for this test are H[0]: μ[1] – μ[2] = 0 and H[a]: μ[1] – μ[2] > 0. Since the P-value is less than the significance level (0.0385 < 0.05), we reject H[0] and accept H[a].
Conclusion in context
At Indiana University of Pennsylvania, the mean number of calories ordered by undergraduate women eating with other women is greater than the mean number of calories ordered by undergraduate women
eating with men (P-value = 0.0385).
Comment about Conclusions
In the conclusion above, we did not generalize the findings to all women. Since the samples included only undergraduate women at one university, we included this information in our conclusion. But
our conclusion is a cautious statement of the findings. The authors see the results more broadly in the context of theories in the field of social psychology. In the context of these theories, they
write, “Our findings support the assertion that meal size is a tool for influencing the impressions of others. For traditional-age, predominantly White college women, diminished meal size appears to
be an attempt to assert femininity in groups that include men.” This viewpoint is echoed in the following summary of the study for the general public on National Public Radio (npr.org).
• Both men and women appear to choose larger portions when they eat with women, and both men and women choose smaller portions when they eat in the company of men, according to new research
published in the Journal of Applied Social Psychology. The study, conducted among a sample of 127 college students, suggests that both men and women are influenced by unconscious scripts about
how to behave in each other’s company. And these scripts change the way men and women eat when they eat together and when they eat apart.
Should we be concerned that the findings of this study are generalized in this way? Perhaps. But the authors of the article address this concern by including the following disclaimer with their
findings: “While the results of our research are suggestive, they should be replicated with larger, representative samples. Studies should be done not only with primarily White, middle-class college
students, but also with students who differ in terms of race/ethnicity, social class, age, sexual orientation, and so forth.” This is an example of good statistical practice. It is often very
difficult to select truly random samples from the populations of interest. Researchers therefore discuss the limitations of their sampling design when they discuss their conclusions.
In the following activities, you will have the opportunity to practice parts of the hypothesis test for a difference in two population means. On the next page, the activities focus on the entire
process and also incorporate technology.
Time Series Analysis | THE DATA SCIENCE INTERVIEW BOOK
A time series is simply a series of data points ordered in time. In a time series, time is often the independent variable and the goal is usually to make a forecast for the future.
Moving Average: Here the assumption is that the future value of our variable depends on the average of its $k$ previous values. A moving average has another use case: smoothing the original time series to identify trends. The wider the window, the smoother the trend. In the case of very noisy data, which is often encountered in finance, this procedure can help detect common patterns. It can also be used to detect anomalies based on a confidence level.
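A minimal pandas sketch of this idea (the window width and band scale are illustrative choices):

```python
import pandas as pd

def moving_average_bands(series: pd.Series, window: int = 24, scale: float = 1.96):
    """Rolling-mean smoothing plus simple confidence bands for anomaly checks."""
    rolling = series.rolling(window)
    mean = rolling.mean()
    band = scale * rolling.std()
    return mean, mean - band, mean + band   # smoothed trend, lower, upper
```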
Weighted average: It is a simple modification to the moving average. The weights sum up to 1, with larger weights assigned to more recent observations: $\hat{y}_{t} = \displaystyle\sum^{k}_{n=1} \omega_n y_{t+1-n}$
Exponential smoothing: Here instead of weighting the last $k$ values of the time series, we start weighting all available observations while exponentially decreasing the weights as we move
further back in time. $\hat{y}_{t} = \alpha \cdot y_t + (1-\alpha) \cdot \hat y_{t-1}$
The $\alpha$ weight is called a smoothing factor. It defines how quickly we will "forget" the last available true observation. The smaller $\alpha$ is, the more influence the previous
observations have and the smoother the series is.
Double Exponential smoothing: Up to now, the methods that we've discussed have been for a single future point prediction (with some nice smoothing). That is cool, but it is also not enough. Let's
extend exponential smoothing so that we can predict two future points (of course, we will also include more smoothing).
Series decomposition will help us -- we obtain two components: intercept (i.e. level) $\ell$ and slope (i.e. trend) $b$. We have learnt to predict intercept (or expected series value) with our
previous methods; now, we will apply the same exponential smoothing to the trend by assuming that the future direction of the time series changes depends on the previous weighted changes. As a
result, we get the following set of functions:
$\ell_x = \alpha y_x + (1-\alpha)(\ell_{x-1} + b_{x-1})$
$b_x = \beta(\ell_x - \ell_{x-1}) + (1-\beta)b_{x-1}$
$\hat{y}_{x+1} = \ell_x + b_x$
The first one describes the intercept, which, as before, depends on the current value of the series. The second term is now split into previous values of the level and of the trend. The second
function describes the trend, which depends on the level changes at the current step and on the previous value of the trend. In this case, the $\beta$ coefficient is a weight for exponential
smoothing. The final prediction is the sum of the model values of the intercept and trend.
Triple exponential smoothing a.k.a. Holt-Winters:
The idea is to add a third component - seasonality. This means that we should not use this method if our time series is not expected to have seasonality. Seasonal components in the model will
explain repeated variations around intercept and trend, and it will be specified by the length of the season, in other words by the period after which the variations repeat. For each observation
in the season, there is a separate component; for example, if the length of the season is 7 days (a weekly seasonality), we will have 7 seasonal components, one for each day of the week.
The new system of equations:
$\ell_x = \alpha(y_x - s_{x-L}) + (1-\alpha)(\ell_{x-1} + b_{x-1})$
$b_x = \beta(\ell_x - \ell_{x-1}) + (1-\beta)b_{x-1}$
$s_x = \gamma(y_x - \ell_x) + (1-\gamma)s_{x-L}$
$\hat{y}_{x+m} = \ell_x + mb_x + s_{x-L+1+(m-1)modL}$
The intercept now depends on the current value of the series minus any corresponding seasonal component. Trend remains unchanged, and the seasonal component depends on the current value of the
series minus the intercept and on the previous value of the component. Take into account that the component is smoothed through all the available seasons; for example, if we have a Monday
component, then it will only be averaged with other Mondays. You can read more on how averaging works and how the initial approximation of the trend and seasonal components is done here. Now that
we have the seasonal component, we can predict not just one or two steps ahead but an arbitrary $m$ future steps ahead, which is very encouraging.
Below is the code for a triple exponential smoothing model, which is also known by the last names of its creators, Charles Holt and his student Peter Winters. Additionally, the Brutlag method was
included in the model to produce confidence intervals:
$\hat y_{max_x}=\ell_{x-1}+b_{x-1}+s_{x-T}+m \cdot d_{t-T}$
$\hat y_{min_x}=\ell_{x-1}+b_{x-1}+s_{x-T}-m \cdot d_{t-T}$
$d_t=\gamma|y_t-\hat y_t|+(1-\gamma)d_{t-T},$
where $T$ is the length of the season, $d$ is the predicted deviation. Other parameters were taken from triple exponential smoothing. You can read more about the method and its applicability to
anomaly detection in time series here.
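The code block referenced above did not survive extraction; here is a minimal reconstruction of additive Holt-Winters with Brutlag bands, following the equations given (the initialization is deliberately crude and the class name is ours):

```python
import numpy as np

class HoltWinters:
    """Additive triple exponential smoothing with Brutlag confidence bands."""

    def __init__(self, series, slen, alpha, beta, gamma, n_preds, m=1.96):
        self.y, self.L = list(series), slen
        self.alpha, self.beta, self.gamma = alpha, beta, gamma
        self.n_preds, self.m = n_preds, m       # m scales the Brutlag bands

    def fit_predict(self):
        y, L = self.y, self.L
        level, trend = float(np.mean(y[:L])), 0.0   # crude initialization
        season = [y[i] - level for i in range(L)]   # first-season offsets
        dev = [0.0] * L                             # Brutlag deviations d_t
        pred, lower, upper = [], [], []
        for t in range(len(y) + self.n_preds):
            i = t % L
            steps = max(1, t - len(y) + 1)          # >1 only when forecasting
            yhat = level + steps * trend + season[i]
            pred.append(yhat)
            upper.append(yhat + self.m * dev[i])
            lower.append(yhat - self.m * dev[i])
            if t < len(y):                          # update on observed values
                val = y[t]
                dev[i] = self.gamma * abs(val - yhat) + (1 - self.gamma) * dev[i]
                last_level = level
                level = self.alpha * (val - season[i]) + (1 - self.alpha) * (level + trend)
                trend = self.beta * (level - last_level) + (1 - self.beta) * trend
                season[i] = self.gamma * (val - level) + (1 - self.gamma) * season[i]
        return pred, lower, upper
```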
Exponentiality is hidden in the recursiveness of the function – we multiply by $(1-\alpha)$ each time, which already contains a multiplication by $(1-\alpha)$ of previous model values.
Before we start modeling, we should mention an important property of time series: stationarity.
So why is stationarity so important? Because it is easy to make predictions on a stationary series since we can assume that the future statistical properties will not be different from those
currently observed. Most time-series models, in one way or another, try to predict those properties (mean or variance, for example). Future predictions would be wrong if the original series
were not stationary.
When running a linear regression, the assumption is that all of the observations are independent of each other. In a time series, however, we know that observations are time dependent. It turns
out that a lot of nice results that hold for independent random variables (law of large numbers and central limit theorem to name a couple) hold for stationary random variables. So by making the data
stationary, we can actually apply regression techniques to this time dependent variable.
The Dickey-Fuller test can be used as a check for stationarity. Its null hypothesis is that the series has a unit root (is non-stationary); if the test statistic is less than (i.e., more negative than) the critical value, we reject the null and treat the series as stationary.
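A sketch of this check using statsmodels (the significance level is an illustrative choice):

```python
from statsmodels.tsa.stattools import adfuller

def is_stationary(series, alpha=0.05):
    adf_stat, pvalue, _, _, crit_values, _ = adfuller(series)
    print(f"ADF statistic: {adf_stat:.3f}, p-value: {pvalue:.3f}, "
          f"5% critical value: {crit_values['5%']:.3f}")
    return pvalue < alpha   # reject the unit-root null => stationary
```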
There are a few ways to deal with non-stationarity; the most common are differencing the series (possibly several times) and applying transformations such as taking the logarithm. Once the series is stationary, plot the ACF and PACF charts and find the optimal parameters.
ARIMA family
Let's combine our first 4 letters: what we have here is the Autoregressive Moving Average (ARMA) model! If the series is stationary, it can be approximated with these 4 letters. Let's continue.
ARIMA and similar models assume some sort of causal relationship between past values and past errors and future values of the time series. Facebook Prophet doesn't look for any such causal
relationships between past and future. Instead, it simply tries to find the best curve to fit to the data, using a linear or logistic curve, and Fourier coefficients for the seasonal components.
There is also a regression component, but that is for external regressors, not for the time series itself (The Prophet model is a special case of GAM - Generalized Additive Model).
Cross Validation with Time Series
Can cross validation be used with Time Series to estimate model parameters automatically?
Normal cross-validation cannot be used for time series because one cannot randomly mix values in a fold while preserving the temporal structure. With randomization, all time dependencies between observations will be lost. But something like "cross-validation on a rolling basis" can be used, as sketched below.
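A sketch of rolling cross-validation via scikit-learn's TimeSeriesSplit (the `model`, `X` and `y` objects are assumed; any estimator with fit/predict works):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error

def rolling_cv_score(model, X, y, n_splits=5):
    """Each successive fold trains on a longer prefix of the series and
    validates on the block that immediately follows it, so no future
    observation ever leaks into training."""
    errors = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
        model.fit(X[train_idx], y[train_idx])
        errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(errors))
```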
CNNs in Time Series
How are CNNs used for Time Series Prediction?
The ability of CNNs to learn and automatically extract features from raw input data can be applied to time series forecasting problems. A sequence of observations can be treated like a
one-dimensional image that a CNN model can read and distill into the most salient elements.
The capability of CNNs has been demonstrated to great effect on time series classification tasks such as automatically detecting human activities based on raw accelerometer sensor data from fitness devices and smartphones.
CNNs support multivariate input and multivariate output, can learn arbitrary but complex functional relationships, and do not require that the model learn directly from lag observations. Instead, the model can learn a representation from a large input sequence that is most relevant for the prediction problem.
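For concreteness, a minimal univariate 1D-CNN forecaster in Keras (a sketch, not from the source; the window length and layer sizes are arbitrary choices):

```python
from tensorflow import keras
from tensorflow.keras import layers

n_steps, n_features = 30, 1  # 30 lag observations of a univariate series

model = keras.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.Conv1D(64, kernel_size=3, activation='relu'),  # extract local patterns
    layers.MaxPooling1D(pool_size=2),                     # distill salient features
    layers.Flatten(),
    layers.Dense(50, activation='relu'),
    layers.Dense(1),                                      # one-step-ahead forecast
])
model.compile(optimizer='adam', loss='mse')
# model.fit(X, y, epochs=20)  # X shaped (samples, n_steps, n_features)
```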
Data Prep for Time Series
What are some of the Data Preprocessing Operations you would use for Time Series Data?
It depends on the problem, but some common ones are listed below (a short pandas sketch follows the list):
Parsing time series information from various sources and formats.
Generating sequences of fixed-frequency dates and time spans.
Manipulating and converting date times with time zone information.
Resampling or converting a time series to a particular frequency.
Performing date and time arithmetic with absolute or relative time increments.
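These operations map directly onto pandas functionality (a sketch; column names and frequencies are illustrative only):

```python
import pandas as pd

# parsing time series information from strings
stamps = pd.to_datetime(['2024-01-01 09:30', '2024-01-02 10:00'])

# generating a fixed-frequency date range
idx = pd.date_range('2024-01-01', periods=48, freq='h')

# time zone localization and conversion
s = pd.Series(range(48), index=idx).tz_localize('UTC').tz_convert('US/Eastern')

# resampling to a coarser frequency
daily_mean = s.resample('D').mean()

# date arithmetic with absolute/relative increments
shifted = idx + pd.Timedelta(hours=3)
```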
Missing Value in Time Series
What are some of the best ways to handle missing values in Time Series Data?
The most common methodology used for handling missing, unequally spaced, or unsynchronized values is linear interpolation.
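With pandas this is a one-liner; `method='time'` weights the interpolation by the actual gaps between (possibly unequally spaced) timestamps (a sketch):

```python
import numpy as np
import pandas as pd

idx = pd.to_datetime(['2024-01-01', '2024-01-02', '2024-01-05'])
s = pd.Series([1.0, np.nan, 4.0], index=idx)

filled = s.interpolate(method='time')  # 2024-01-02 becomes 1.75, not the midpoint 2.5
```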
Can you explain why time series has to be stationary?
Stationarity is important because, in its absence, a model describing the data will vary in accuracy at different time points. As such, stationarity is required for sample statistics such as means,
variances, and correlations to accurately describe the data at all time points of interest.
What quantities are we typically interested in when we perform statistical analysis on a time series? We want to know quantities such as the expected value, the variance, and the correlation between values $s$ periods apart. To calculate these things we use a mean across many time periods. The mean across many time periods is only informative if the expected value is the same across those time periods. If these population parameters can vary, what are we really estimating by taking an average across time?
(Weak) stationarity requires that these population quantities must be the same across time, making the sample average a reasonable way to estimate them.
IQR in Time Series
How is Interquartile range used in Time series?
It is mostly used to detect outliers in Time Series data.
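A sketch (often applied to residuals after removing trend and seasonality, since raw trending data would flag the ends of the series):

```python
import numpy as np

def iqr_outliers(y, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR]; k=1.5 is the usual default."""
    q1, q3 = np.percentile(y, [25, 75])
    iqr = q3 - q1
    return (y < q1 - k * iqr) | (y > q3 + k * iqr)
```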
Irregular Data
What does irregularly-spaced spatial data mean in Time series?
A lot of techniques assume that data is sampled at regularly-spaced intervals of time. This interval between adjacent samples is called the sampling period.
A lot of data is not or cannot be sampled with a fixed sampling period. For example, if we measure the atmosphere using sensors, the terrain may not allow us to place weather stations exactly 50
miles apart.
There are many different ways to deal with this kind of data which does not have a fixed sampling period. One approach is to interpolate the data onto a grid and then use a technique intended for
gridded data.
Sliding Window
Explain the Sliding Window method in Time series?
Time series can be phrased as supervised learning. Given a sequence of numbers for a time series dataset, we can restructure the data to look like a supervised learning problem.
In the sliding window method, the previous time steps can be used as input variables, and the next time steps can be used as the output variable.
In statistics and time series analysis, this is called a lag or lag method. The number of previous time steps is called the window width or size of the lag. This sliding window is the basis for how
we can turn any time series dataset into a supervised learning problem.
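A sketch of the restructuring step:

```python
import numpy as np

def sliding_window(series, width):
    """Turn a 1-D series into (samples, width) lag inputs X and next-step targets y."""
    series = np.asarray(series)
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = series[width:]
    return X, y

# sliding_window([1, 2, 3, 4, 5], width=2)
# -> X = [[1, 2], [2, 3], [3, 4]],  y = [3, 4, 5]
```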
LSTM vs MLP
Can you discuss on the usage of LSTM vs MLP in Time Series?
Multilayer Perceptrons, or MLPs for short, can be applied to time series forecasting. A challenge with using MLPs for time series forecasting is in the preparation of the data. Specifically, lag observations must be flattened into feature vectors. My understanding is that LSTMs capture the relations between time steps, whereas simple MLPs treat each time step as a separate feature (they don't take succession into consideration).
RNNs are known to be superior to MLP in case of sequential data. But complex models like LSTM and GRU require a lot of data to achieve their potential.
Sequential vs Non-Sequential
Can a CNN (or other non-sequential deep learning models) outperform LSTM (or other sequential models) in time series data
You are right, CNN-based models can outperform RNNs. You can take a look at this paper where they compared different RNN models with TCNs (temporal convolutional networks) on different sequence modeling tasks. Even though there are no big differences in terms of results, there are some nice properties that CNN-based models offer, such as parallelism, stable gradients and a low training memory footprint. In addition to CNN-based models, there are also attention-based models (you might want to take a look at the Transformer).
Long Time Series
What's the best architecture for time series prediction with a long dataset?
LSTM is ideal for this. For even stronger representational capacity, make your LSTMs multi-layered. Using 1-dimensional convolutions in a CNN is a common way to extract information from time series too, so there's no harm in trying. Typically, you'll test many models out and take the one that has the best validation performance.
How to use Correlation in Time series data?
Pearson correlation is used to look at correlation between series ... but being time series, the correlation is looked at across different lags -- the cross-correlation function. The
cross-correlation is impacted by dependence within-series, so in many cases the within-series dependence should be removed first. So, to use this correlation, rather than smoothing the series, it's
actually more common (because it's meaningful) to look at dependence between residuals - the rough part that's left over after a suitable model is found for the variables.
You probably want to begin with some basic resources on time series models before delving into trying to figure out whether a Pearson correlation across (presumably) nonstationary, smoothed series is even meaningful. In particular, you'll probably want to look into spurious correlation. The point about spurious correlation is that series can appear correlated, but the correlation itself is not meaningful.
Consider two people tossing two distinct coins counting number of heads so far minus number of tails so far as the value of their series.
Obviously, there's no connection whatever between the two series. Clearly neither can tell you the first thing about the other!
But look at the sort of correlations you get between pairs of coins:
If I didn't tell you what those were, and you took any pair of those series by themselves, those would be impressive correlations would they not?
But they're all meaningless. Utterly spurious. None of the three pairs are really any more positively or negatively related to each other than any of the others -- it's just cumulated noise. The
spuriousness isn't just about prediction, the whole notion of considering association between series without taking account of the within-series dependence is misplaced.
All you have here is within-series dependence. There's no actual cross-series relation whatever.
Once you deal properly with the issue that makes these series auto-dependent - they're all integrated (Bernoulli random walks), so you need to difference them - the "apparent" association disappears
(the largest absolute cross-series correlation of the three is 0.048).
What that tells you is the truth -- the apparent association is a mere illusion caused by the dependence within-series.
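The experiment is easy to reproduce (a sketch; the seed and series length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# three independent "heads minus tails" series: cumulated +/-1 steps
coins = np.cumsum(rng.choice([-1, 1], size=(3, 1000)), axis=1)

print(np.corrcoef(coins))                    # large spurious cross-correlations
print(np.corrcoef(np.diff(coins, axis=1)))   # after differencing: near zero
```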
If your question asked "how to use Pearson correlation correctly with time series", then please understand: if there's within-series dependence and you don't deal with it first, you won't be using it correctly. Further, smoothing won't reduce the problem of serial dependence; quite the opposite -- it makes it even worse! Here are the correlations after smoothing (default loess smooth - of series vs index - performed in R):
| | coin1 | coin2 |
|---|---|---|
| coin2 | 0.9696378 | |
| coin3 | -0.8829326 | -0.7733559 |
They all got further from 0. They're all still nothing but meaningless noise, though now it's smoothed, cumulated noise. (By smoothing, we reduce the variability in the series we put into the
correlation calculation, so that may be why the correlation goes up.)
Check the link given above for a detailed discussion
State Space Model and Kalman Filtering
Describe in detail how State Space Models and Kalman Filtering are used in Time Series forecasting.
State Space Model vs Conventional Methodologies
What are the disadvantages of state-space models and the Kalman Filter for time-series modelling over, let's say, conventional methodologies like ARIMA, VAR or ad-hoc/heuristic methods?
Overall - compared to ARIMA, state-space models allow you to model more complex processes, have interpretable structure and easily handle data irregularities; but for this you pay with increased
complexity of a model, harder calibration, less community knowledge.
ARIMA is a universal approximator - you don't care what the true model behind your data is, and you use universal ARIMA diagnostic and fitting tools to approximate this model. It is like polynomial curve fitting - you don't care what the true function is, you always can approximate it with a polynomial of some degree.
State-space models naturally require you to write down some reasonable model for your process (which is good - you use your prior knowledge of your process to improve estimates). Of course, if you don't have any idea of your process, you always can use some universal state-space model also - e.g. represent ARIMA in a state-space form. But then ARIMA in its original form has a more parsimonious formulation - without introducing unnecessary hidden states.
Because there is such a great variety of state-space models formulations (much richer than class of ARIMA models), behavior of all these potential models is not well studied and if the model you
formulated is complicated - it's hard to say how it will behave under different circumstances. Of course, if your state-space model is simple or composed of interpretable components, there is no
such problem. But ARIMA is always the same well studied ARIMA so it should be easier to anticipate its behavior even if you use it to approximate some complex process.
Because state-space allows you directly and exactly model complex/nonlinear models, then for these complex/nonlinear models you may have problems with stability of filtering/prediction (EKF/UKF
divergence, particle filter degradation). You may also have problems with calibrating complicated-model's parameters - it's a computationally-hard optimization problem. ARIMA is simple, has less
parameters (1 noise source instead of 2 noise sources, no hidden variables) so its calibration is simpler.
For state-space there is less community knowledge and software in statistical community than for ARIMA.
Discrete Wavelet Transform
Have you heard of Discrete Wavelet Transform in time series?
A Discrete Wavelet Transform (DWT) allows you to decompose your input data into a set of discrete levels, providing you with information about the frequency content of the signal i.e. determining
whether the signal contains high frequency variations or low frequency trends. Think of it as applying several band-pass filters to your input data.
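A sketch with PyWavelets (the wavelet family, depth and threshold are arbitrary choices):

```python
import numpy as np
import pywt

signal = np.cumsum(np.random.default_rng(0).standard_normal(256))  # toy input

# one approximation (low-frequency trend) plus several detail (high-frequency) levels
coeffs = pywt.wavedec(signal, 'db4', level=3)
approximation, details = coeffs[0], coeffs[1:]

# soft-thresholding the finest details and reconstructing acts as a crude denoiser
coeffs[-1] = pywt.threshold(coeffs[-1], value=0.5, mode='soft')
denoised = pywt.waverec(coeffs, 'db4')
```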
We will explain this model by building up letter by letter. $SARIMA(p, d, q)(P, D, Q, s)$ is the Seasonal Autoregressive Integrated Moving Average model:
$AR(p)$ - autoregression model i.e. regression of the time series onto itself. The basic assumption is that the current series values depend on its previous values with some lag (or several lags).
The maximum lag in the model is referred to as $p$. To determine the initial $p$, you need to look at the PACF plot and find the biggest significant lag after which most other lags become insignificant.
$MA(q)$ - moving average model. Without going into too much detail, this models the error of the time series, again with the assumption that the current error depends on the previous with some lag,
which is referred to as $q$. The initial value can be found on the ACF plot with the same logic as before.
$AR(p) + MA(q) = ARMA(p, q)$
$I(d)$ - order of integration. This is simply the number of nonseasonal differences needed to make the series stationary.
Adding this letter to the four gives us the $ARIMA$ model which can handle non-stationary data with the help of nonseasonal differences. Great, one more letter to go!
$S(s)$ - this is responsible for seasonality and equals the season period length of the series
With this, we have three parameters: $(P, D, Q)$
$P$ - order of autoregression for the seasonal component of the model, which can be derived from PACF. But you need to look at the number of significant lags, which are the multiples of the season
period length. For example, if the period equals 24 and we see the 24-th and 48-th lags are significant in the PACF, that means the initial $P$ should be 2.
$Q$ - similar logic using the ACF plot instead.
$D$ - order of seasonal integration. This can be equal to 1 or 0, depending on whether seasonal differences were applied or not.
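Fitting such a model with statsmodels (a sketch; the toy data and the orders are placeholders, to be chosen from the ACF/PACF plots as described above):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
y = np.sin(np.arange(500) * 2 * np.pi / 24) + rng.normal(0, 0.3, 500)  # hourly toy series

# SARIMA(p, d, q)(P, D, Q, s) with a 24-observation season, as in the example above
model = sm.tsa.statespace.SARIMAX(y, order=(1, 0, 1),
                                  seasonal_order=(1, 1, 1, 24)).fit(disp=False)
forecast = model.forecast(steps=48)
```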
R squared: coefficient of determination (in econometrics, this can be interpreted as the percentage of variance explained by the model), $(-\infty, 1]$
$R^2 = 1 - \frac{SS_{res}}{SS_{tot}}$
Mean Absolute Error: this is an interpretable metric because it has the same unit of measurement as the initial series, $[0, +\infty)$
$MAE = \frac{\sum\limits_{i=1}^{n} |y_i - \hat{y}_i|}{n}$
Median Absolute Error: again, an interpretable metric that is particularly interesting because it is robust to outliers, $[0, +\infty)$
$MedAE = median(|y_1 - \hat{y}_1|, ... , |y_n - \hat{y}_n|)$
Mean Squared Error: the most commonly used metric that gives a higher penalty to large errors and vice versa, $[0, +\infty)$
$MSE = \frac{1}{n}\sum\limits_{i=1}^{n} (y_i - \hat{y}_i)^2$
Mean Squared Logarithmic Error: practically, this is the same as MSE, but we take the logarithm of the series. As a result, we give more weight to small mistakes as well. This is usually used when
the data has exponential trends, $[0, +\infty)$
$MSLE = \frac{1}{n}\sum\limits_{i=1}^{n} (\log(1+y_i) - \log(1+\hat{y}_i))^2$
Mean Absolute Percentage Error: this is the same as MAE but is computed as a percentage, which is very convenient when you want to explain the quality of the model to management, $[0, +\infty)$
$MAPE = \frac{100}{n}\sum\limits_{i=1}^{n} \frac{|y_i - \hat{y}_i|}{y_i}$
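All of these metrics are one-liners with numpy (a sketch; `y` and `y_hat` are assumed arrays of true and predicted values, and MSLE/MAPE require positive targets):

```python
import numpy as np

def report(y, y_hat):
    err = y - y_hat
    return {
        'R2':    1 - np.sum(err**2) / np.sum((y - np.mean(y))**2),
        'MAE':   np.mean(np.abs(err)),
        'MedAE': np.median(np.abs(err)),
        'MSE':   np.mean(err**2),
        'MSLE':  np.mean((np.log1p(y) - np.log1p(y_hat))**2),
        'MAPE':  100 * np.mean(np.abs(err) / y),
    }
```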
The idea is rather simple -- we train our model on a small segment of the time series from the beginning until some time $t$, make predictions for the next $n$ steps, and calculate an error. Then, we expand our training sample up to $t+n$, make predictions from $t+n$ until $t+2n$, and continue moving our test segment of the time series until we hit the last available observation. As a result, we have as many folds as $n$ will fit between the initial training sample and the last observation.
The idea is to create estimated values at the desired time stamps. These can be used to generate multivariate time series that are synchronized, equally spaced, and have no missing values. Consider the scenario where $y_i$ and $y_j$ are values for the time series at times $t_i$ and $t_j$, respectively, where $i < j$. Let $t$ be a time drawn from the interval $(t_i, t_j)$. Then, the interpolated value of the series is given by: $y = y_i + \frac{t-t_i}{t_j-t_i}(y_j-y_i)$
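Transcribed directly into code:

```python
def interpolate_at(t, t_i, y_i, t_j, y_j):
    """Linear interpolation of the series at time t, with t_i < t < t_j."""
    return y_i + (t - t_i) / (t_j - t_i) * (y_j - y_i)
```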
Looking at the time series plots below, you can notice how the mean and variance of any given segment of time would do a good job representing the whole stationary time series but a relatively poor job representing the whole non-stationary time series. For instance, the mean of the non-stationary time series is much lower for $600<t<800$, and its variance is much higher in this range than in the range $200<t<400$.
Autocorrelation: the correlation between values $s$ periods apart for a set of values.
(So, if person-1 tosses $HTHH$... they have $3-1 = 2$ for the value at the $4^{th}$ time step, and their series goes $1,0,1,2,....$)
A state space model (SSM) is a time series model in which the time series $Y_t$ is interpreted as the result of a noisy observation of a stochastic process $X_t$. The values of the variables $X_t$
and $Y_t$ can be continuous (scalar or vector) or discrete. Graphically, an SSM is represented as follows:
SSMs belong to the realm of Bayesian inference, and they have been successfully applied in many fields to solve a broad range of problems. It is usually assumed that the state process $X_t$ is
Markovian. The most well studied SSM is the Kalman filter, for which the processes above are linear and the sources of randomness are Gaussian.
Let $T$ denote the time horizon. Our broad goal is to make inference about the states $X_t$ based on a set of observations $Y_1, \ldots, Y_t$. Three questions are of particular interest:
Filtering: $t < T$. What can we infer about the current state of the system based on all available observations?
Smoothing: $t = T$. What can be inferred about the system based on the information contained in the entire data sample? In particular, how can we back fill missing observations?
Forecasting: $t > T$. What is the optimal prediction of a future observation and/or a future state of the system?
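To make the filtering question concrete, here is a minimal scalar Kalman filter (a sketch, not from the source; the model coefficients and the noise variances q and r are assumed known):

```python
import numpy as np

def kalman_filter_1d(y, a=1.0, c=1.0, q=1e-3, r=1e-1, x0=0.0, p0=1.0):
    """Scalar linear-Gaussian SSM: x_t = a*x_{t-1} + w_t,  y_t = c*x_t + v_t.
    Returns the filtered estimates E[x_t | y_1..y_t]."""
    x, p = x0, p0
    filtered = []
    for obs in y:
        x, p = a * x, a * a * p + q                     # predict
        k = p * c / (c * c * p + r)                     # Kalman gain
        x, p = x + k * (obs - c * x), (1 - k * c) * p   # update
        filtered.append(x)
    return np.array(filtered)
```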
In principle, any inference for this model can be done using the standard methods of multivariate statistics. However, these methods require storing large amounts of data and inverting $tn × tn$
matrices. Notice that, as new data arrive, the storage requirements and matrix dimensionality increase. This is frequently computationally intractable and impractical. Instead, the Kalman filter
relies on a recursive approach which does not require significant storage resources and involves inverting $n × n$ matrices only. | {"url":"https://book.thedatascienceinterviewproject.com/algorithms/time-series-analysis","timestamp":"2024-11-12T13:56:10Z","content_type":"text/html","content_length":"1050600","record_id":"<urn:uuid:192474a2-d5c2-484c-b44e-553472cb44a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00519.warc.gz"} |
∴ Answer: (b); $\frac{x^{2}}{2} + \log|x| + C$
Question asked by Filo student
Q. 6. If $m$ and $n$, respectively, are the order and the degree of the differential equation, then
a. 1
b. 2
c. 3
d. 4
Question Text: Q. 6. If $m$ and $n$, respectively, are the order and the degree of the differential equation, then
Updated On: Feb 9, 2023
Topic: Differential Equations
Subject: Mathematics
Class: Class 12
Answer Type: Video solution: 1
Upvotes: 148
Avg. Video Duration: 2 min | {"url":"https://askfilo.com/user-question-answers-mathematics/therefore-text-answer-b-frac-x-2-2-log-x-c-q-6-if-and-34313731333332","timestamp":"2024-11-08T18:07:22Z","content_type":"text/html","content_length":"276203","record_id":"<urn:uuid:49ab987b-3b56-439b-a2d5-cb1c9868426a>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00575.warc.gz"}
Getting Started With UVa Online Judge: The 3n+1 Problem - Red-Green-Code
If you’re interested in competitive programming, or you just like solving programming puzzles, UVa Online Judge is not a bad place to start. During the many years that it has been active, people have
written books, supporting web sites, and sample code to help programmers get more out of it.
As my contribution to the ongoing commentary on UVa OJ problems, I have been working through and writing about the CP3 starred problems, a subset of UVa OJ problems recommended by the authors of a
companion textbook called Competitive Programming 3.
But this week I’m picking a problem that is not on that list: the popular problem #100, also known as The 3n+1 Problem. The origin of problem #100 is the Collatz conjecture, a mathematical conjecture
posed by the German mathematician Lothar Collatz, pictured above.
Since I want my friends to keep calling me, I won’t be trying to prove this conjecture (which remains an open problem as of this writing). Instead, I’ll just be writing a short program to count the
elements in the sequences generated by the $3n+1$ algorithm. Along the way, I’ll explain a process that you can use when you’re solving other UVa OJ problems.
Solving a UVa OJ Problem
If you’re planning to solve more than a few programming puzzles, I would recommend developing a problem-solving process. That way you’re not just solving each problem in isolation. You’re also
working on the larger meta-problem of how to solve programming problems. That will be more useful in the long run.
Here’s the quick-start version of the process I use:
Read the problem
Before you start thinking too hard about the problem, it’s worthwhile to read the problem statement carefully. Competitive programming problems have a reputation for including extraneous information
and confusing language in their descriptions. One way to get through this is to take notes about key ideas as you read. That way you can refer to your notes as you solve the problem, rather than
re-reading the confusing description.
When I first read the problem statement for UVa 100, it reminded me of Project Euler Problem #14, which I had solved before. Nevertheless, when I came across this statement in the description, I
found it puzzling:
For any two numbers $i$ and $j$ you are to determine the maximum cycle length over all numbers between and including both $i$ and $j$.
When I read that, I got hung up on the “over all numbers” wording and thought it had something to do with the length of a cycle beginning with $i$ and ending with $j$. But of course it just means:
for each integer x between i and j (inclusive)
calculate the length of the Collatz sequence that starts with x
return the maximum of these lengths
If you’re confused by the wording of a problem, it often helps to skip to the example input and output to clear things up. Some people even recommend reading problems from the bottom up: start with
the sample data and end with the introduction.
Solve it on paper
After reading the problem statement carefully, the next step in solving a programming problem is to “solve the problem on paper.” That’s similar to what I did in the previous paragraph. The steps
listed above aren’t the complete solution (e.g., they don’t print anything), but they cover the key idea of the problem. You could even be less formal with your paper-based solution, and just write:
“Generate the Collatz sequences that start with each integer between i and j, and return the length of the longest one.”
Whether it is written precisely or casually, I find that it’s useful to come up with a big-picture solution before digging into the details of a problem. For difficult problems, what you come up with
in this step may not even turn out to be a correct solution. But it provides a starting point and ensures that you have at least some idea of what is being asked before you move on to coding.
Write pseudocode
Once you have a paper solution, you can write a pseudocode solution. The purpose of pseudocode is to express a complete step-by-step solution without worrying about syntax. Translating a solution in
your head directly into a real programming language means doing two complex tasks at the same time: breaking down the solution into steps, and expressing those steps in a programming language. It’s
generally more effective to get the solution steps out of your head first, and worry about the language details later.
Here’s one way you could express the pseudocode solution for UVa 100:
for each test case
    read i and j from input
    set lo = min of i and j
    set hi = max of i and j
    set maxlen = 0
    for each x from lo to hi
        set len = cycle(x)
        if len > maxlen, maxlen = len
    print i j maxlen

cycle(n):
    set len = 1
    while n > 1
        if n is odd, set n = 3*n+1
        else set n = n/2
        increment len
    return len
The style of pseudocode syntax you use is up to you. Since you won’t be feeding it into a compiler, you can use any syntax that helps you clarify the solution. Here are some language features that
are often missing from pseudocode:
• Unnecessary punctuation (parentheses, curly brackets, semicolons)
• Variable declarations
• Implementation details like the names of library methods
The key is to avoid anything that will distract you from expressing the logical steps of the solution. If you have to look up any syntax, then it doesn’t belong in pseudocode. Once you have spent a
lot of time on competitive programming, you may reach a level of expertise where you are so fluent in a programming language that you don’t have to think about syntax. At that point you’ll probably
skip the pseudocode step. But until then, it makes sense to think about the problem independently of the implementation language.
Implement the solution
Once you have pseudocode written and you think it’s correct, it’s time to translate it into real code. Since UVa OJ can be picky about details, it’s best to start with a template containing the code
that is common to all solutions (e.g., functions for reading input and writing output). You can find templates for the supported languages on the UVa OJ submission specification page. If you’re using
Java, I maintain a template that I use for my solutions. It’s overkill for this problem, but it contains some functions that will save you some headaches for other problems, especially when it comes
to input and output performance.
I’ll mention two details about implementing the solution to UVa 100. First, it’s helpful to know that the modulo operator (the % symbol in C-like languages) returns the remainder of a division
operation. For example, if n % 2 == 0 then you know that n is even. Modulo is also useful to know about if you find yourself having to solve the FizzBuzz problem, which is sometimes used as an
interview question.
The second detail has to do with runtime errors. A runtime error verdict means that an unhandled error occurred while your program was executing, and resulted in an exception or segmentation fault.
One way this can happen is when you try to process input in a format you didn’t expect. Although my first attempt at this problem gave the correct answer for all of the uDebug random input, I got a
runtime error when I submitted it to UVa OJ. I tried modifying my code to ignore any lines containing only whitespace, and my submission was accepted. This is a good reminder that you shouldn’t trust
the input from the online judge.
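To close the loop, here is a minimal Python rendering of the pseudocode above (a sketch, not the author's Java template; it memoizes cycle lengths across queries and skips blank lines, per the runtime-error note):

```python
import sys
from functools import lru_cache

@lru_cache(maxsize=None)
def cycle(n):
    length = 1
    while n > 1:
        n = 3 * n + 1 if n % 2 else n // 2
        length += 1
    return length

for line in sys.stdin:
    if not line.strip():                  # ignore whitespace-only lines
        continue
    i, j = map(int, line.split())
    lo, hi = min(i, j), max(i, j)
    print(i, j, max(cycle(x) for x in range(lo, hi + 1)))
```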
For Further Reading
So there you have it: a process to get you started solving UVa 100 and other problems on UVa Online Judge. If you’re planning on trying other UVa OJ problems, you may be interested in some of my
other articles. Here are a few suggestions:
(Image credit: Konrad Jacobs) | {"url":"https://www.redgreencode.com/getting-started-with-uva-online-judge-the-3n1-problem/","timestamp":"2024-11-04T11:39:03Z","content_type":"text/html","content_length":"65039","record_id":"<urn:uuid:c2c25255-0d95-4ce0-8c80-bb8349727249>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00638.warc.gz"} |
Sample usage for inference¶
Logical Inference and Model Building¶
>>> from nltk.test.setup_fixt import check_binary
>>> check_binary('mace4')
>>> from nltk import *
>>> from nltk.sem.drt import DrtParser
>>> from nltk.sem import logic
>>> logic._counter._value = 0
Within the area of automated reasoning, first order theorem proving and model building (or model generation) have both received much attention, and have given rise to highly sophisticated techniques.
We focus therefore on providing an NLTK interface to third party tools for these tasks. In particular, the module nltk.inference can be used to access both theorem provers and model builders.
NLTK Interface to Theorem Provers¶
The main class used to interface with a theorem prover is the Prover class, found in nltk.api. The prove() method takes three optional arguments: a goal, a list of assumptions, and a verbose boolean
to indicate whether the proof should be printed to the console. The proof goal and any assumptions need to be instances of the Expression class specified by nltk.sem.logic. There are currently three
theorem provers included with NLTK: Prover9, TableauProver, and ResolutionProver. The first is an off-the-shelf prover, while the other two are written in Python and included in the nltk.inference package.
>>> from nltk.sem import Expression
>>> read_expr = Expression.fromstring
>>> p1 = read_expr('man(socrates)')
>>> p2 = read_expr('all x.(man(x) -> mortal(x))')
>>> c = read_expr('mortal(socrates)')
>>> Prover9().prove(c, [p1,p2])
True
>>> TableauProver().prove(c, [p1,p2])
True
>>> ResolutionProver().prove(c, [p1,p2], verbose=True)
[1] {-mortal(socrates)} A
[2] {man(socrates)} A
[3] {-man(z2), mortal(z2)} A
[4] {-man(socrates)} (1, 3)
[5] {mortal(socrates)} (2, 3)
[6] {} (1, 5)
True
The ProverCommand¶
A ProverCommand is a stateful holder for a theorem prover. The command stores a theorem prover instance (of type Prover), a goal, a list of assumptions, the result of the proof, and a string version
of the entire proof. Corresponding to the three included Prover implementations, there are three ProverCommand implementations: Prover9Command, TableauProverCommand, and ResolutionProverCommand.
The ProverCommand’s constructor takes its goal and assumptions. The prove() command executes the Prover and proof() returns a String form of the proof If the prove() method has not been called, then
the prover command will be unable to display a proof.
>>> prover = ResolutionProverCommand(c, [p1,p2])
>>> print(prover.proof())
Traceback (most recent call last):
File "...", line 1212, in __run
compileflags, 1) in test.globs
File "<doctest nltk/test/inference.doctest[10]>", line 1, in <module>
File "...", line ..., in proof
raise LookupError("You have to call prove() first to get a proof!")
LookupError: You have to call prove() first to get a proof!
>>> prover.prove()
True
>>> print(prover.proof())
[1] {-mortal(socrates)} A
[2] {man(socrates)} A
[3] {-man(z4), mortal(z4)} A
[4] {-man(socrates)} (1, 3)
[5] {mortal(socrates)} (2, 3)
[6] {} (1, 5)
The prover command stores the result of proving so that if prove() is called again, then the command can return the result without executing the prover again. This allows the user to access the
result of the proof without wasting time re-computing what it already knows.
>>> prover.prove()
True
>>> prover.prove()
True
The assumptions and goal may be accessed using the assumptions() and goal() methods, respectively.
>>> prover.assumptions()
[<ApplicationExpression man(socrates)>, <AllExpression all x.(man(x) -> mortal(x))>]
>>> prover.goal()
<ApplicationExpression mortal(socrates)>
The assumptions list may be modified using the add_assumptions() and retract_assumptions() methods. Both methods take a list of Expression objects. Since adding or removing assumptions may change the
result of the proof, the stored result is cleared when either of these methods are called. That means that proof() will be unavailable until prove() is called and a call to prove() will execute the
theorem prover.
>>> prover.retract_assumptions([read_expr('man(socrates)')])
>>> print(prover.proof())
Traceback (most recent call last):
File "...", line 1212, in __run
compileflags, 1) in test.globs
File "<doctest nltk/test/inference.doctest[10]>", line 1, in <module>
File "...", line ..., in proof
raise LookupError("You have to call prove() first to get a proof!")
LookupError: You have to call prove() first to get a proof!
>>> prover.prove()
False
>>> print(prover.proof())
[1] {-mortal(socrates)} A
[2] {-man(z6), mortal(z6)} A
[3] {-man(socrates)} (1, 2)
>>> prover.add_assumptions([read_expr('man(socrates)')])
>>> prover.prove()
True
Prover9 Installation¶
You can download Prover9 from https://www.cs.unm.edu/~mccune/prover9/.
Extract the source code into a suitable directory and follow the instructions in the Prover9 README.make file to compile the executables. Install these into an appropriate location; the
prover9_search variable is currently configured to look in the following locations:
>>> p = Prover9()
>>> p.binary_locations()
Alternatively, the environment variable PROVER9HOME may be configured with the binary’s location.
The path to the correct directory can be set manually in the following manner:
>>> config_prover9(path='/usr/local/bin')
[Found prover9: /usr/local/bin/prover9]
If the executables cannot be found, Prover9 will issue a warning message:
>>> p.prove()
Traceback (most recent call last):
NLTK was unable to find the prover9 executable! Use config_prover9() or
set the PROVER9HOME environment variable.
>> config_prover9('/path/to/prover9')
For more information, on prover9, see:
Using Prover9¶
The general case in theorem proving is to determine whether S |- g holds, where S is a possibly empty set of assumptions, and g is a proof goal.
As mentioned earlier, NLTK input to Prover9 must be Expressions of nltk.sem.logic. A Prover9 instance is initialized with a proof goal and, possibly, some assumptions. The prove() method attempts to
find a proof of the goal, given the list of assumptions (in this case, none).
>>> goal = read_expr('(man(x) <-> --man(x))')
>>> prover = Prover9Command(goal)
>>> prover.prove()
True
Given a ProverCommand instance prover, the method prover.proof() will return a String of the extensive proof information provided by Prover9, shown in abbreviated form here:
============================== Prover9 ===============================
Prover9 (32) version ...
Process ... was started by ... on ...
The command was ".../prover9 -f ...".
============================== end of head ===========================
============================== INPUT =================================
% Reading from file /var/...
(all x (man(x) -> man(x))).
============================== end of search =========================
Exiting with 1 proof.
Process 6317 exit (max_proofs) Mon Jan 21 15:23:28 2008
As mentioned earlier, we may want to list some assumptions for the proof, as shown here.
>>> g = read_expr('mortal(socrates)')
>>> a1 = read_expr('all x.(man(x) -> mortal(x))')
>>> prover = Prover9Command(g, assumptions=[a1])
>>> prover.print_assumptions()
all x.(man(x) -> mortal(x))
However, the assumptions are not sufficient to derive the goal:
>>> print(prover.prove())
False
So let’s add another assumption:
>>> a2 = read_expr('man(socrates)')
>>> prover.add_assumptions([a2])
>>> prover.print_assumptions()
all x.(man(x) -> mortal(x))
>>> print(prover.prove())
True
We can also show the assumptions in Prover9 format.
>>> prover.print_assumptions(output_format='Prover9')
all x (man(x) -> mortal(x))
>>> prover.print_assumptions(output_format='Spass')
Traceback (most recent call last):
. . .
NameError: Unrecognized value for 'output_format': Spass
Assumptions can be retracted from the list of assumptions.
>>> prover.retract_assumptions([a1])
>>> prover.print_assumptions()
>>> prover.retract_assumptions([a1])
Statements can be loaded from a file and parsed. We can then add these statements as new assumptions.
>>> g = read_expr('all x.(boxer(x) -> -boxerdog(x))')
>>> prover = Prover9Command(g)
>>> prover.prove()
False
>>> import nltk.data
>>> new = nltk.data.load('grammars/sample_grammars/background0.fol')
>>> for a in new:
... print(a)
all x.(boxerdog(x) -> dog(x))
all x.(boxer(x) -> person(x))
all x.-(dog(x) & person(x))
exists x.boxer(x)
exists x.boxerdog(x)
>>> prover.add_assumptions(new)
>>> print(prover.prove())
True
>>> print(prover.proof())
============================== prooftrans ============================
Prover9 (...) version ...
Process ... was started by ... on ...
The command was ".../prover9".
============================== end of head ===========================
============================== end of input ==========================
============================== PROOF =================================
% -------- Comments from original proof --------
% Proof 1 at ... seconds.
% Length of proof is 13.
% Level of proof is 4.
% Maximum clause weight is 0.
% Given clauses 0.
1 (all x (boxerdog(x) -> dog(x))). [assumption].
2 (all x (boxer(x) -> person(x))). [assumption].
3 (all x -(dog(x) & person(x))). [assumption].
6 (all x (boxer(x) -> -boxerdog(x))). [goal].
8 -boxerdog(x) | dog(x). [clausify(1)].
9 boxerdog(c3). [deny(6)].
11 -boxer(x) | person(x). [clausify(2)].
12 boxer(c3). [deny(6)].
14 -dog(x) | -person(x). [clausify(3)].
15 dog(c3). [resolve(9,a,8,a)].
18 person(c3). [resolve(12,a,11,a)].
19 -person(c3). [resolve(15,a,14,a)].
20 $F. [resolve(19,a,18,a)].
============================== end of proof ==========================
The equiv() method¶
One application of the theorem prover functionality is to check if two Expressions have the same meaning. The equiv() method calls a theorem prover to determine whether two Expressions are logically equivalent:
>>> a = read_expr(r'exists x.(man(x) & walks(x))')
>>> b = read_expr(r'exists x.(walks(x) & man(x))')
>>> print(a.equiv(b))
True
The same method can be used on Discourse Representation Structures (DRSs). In this case, each DRS is converted to a first order logic form, and then passed to the theorem prover.
>>> dp = DrtParser()
>>> a = dp.parse(r'([x],[man(x), walks(x)])')
>>> b = dp.parse(r'([x],[walks(x), man(x)])')
>>> print(a.equiv(b))
True
NLTK Interface to Model Builders¶
The top-level to model builders is parallel to that for theorem-provers. The ModelBuilder interface is located in nltk.inference.api. It is currently only implemented by Mace, which interfaces with
the Mace4 model builder.
Typically we use a model builder to show that some set of formulas has a model, and is therefore consistent. One way of doing this is by treating our candidate set of sentences as assumptions, and leaving the goal unspecified. Thus, the following interaction shows how both {a, c1} and {a, c2} are consistent sets, since Mace succeeds in building a model for each of them, while {c1, c2} is shown to be inconsistent.
>>> a3 = read_expr('exists x.(man(x) and walks(x))')
>>> c1 = read_expr('mortal(socrates)')
>>> c2 = read_expr('-mortal(socrates)')
>>> mace = Mace()
>>> print(mace.build_model(None, [a3, c1]))
True
>>> print(mace.build_model(None, [a3, c2]))
True
We can also use the model builder as an adjunct to the theorem prover. Let's suppose we are trying to prove S |- g, i.e. that g is logically entailed by assumptions S = {s1, s2, ..., sn}. We can feed this
same input to Mace4, and the model builder will try to find a counterexample, that is, to show that g does not follow from S. So, given this input, Mace4 will try to find a model for the set S' =
{s1, s2, ..., sn, (not g)}. If g fails to follow from S, then Mace4 may well return with a counterexample faster than Prover9 concludes that it cannot find the required proof. Conversely, if g is
provable from S, Mace4 may take a long time unsuccessfully trying to find a counter model, and will eventually give up.
In the following example, we see that the model builder does succeed in building a model of the assumptions together with the negation of the goal. That is, it succeeds in finding a model where there
is a woman that every man loves; Adam is a man; Eve is a woman; but Adam does not love Eve.
>>> a4 = read_expr('exists y. (woman(y) & all x. (man(x) -> love(x,y)))')
>>> a5 = read_expr('man(adam)')
>>> a6 = read_expr('woman(eve)')
>>> g = read_expr('love(adam,eve)')
>>> print(mace.build_model(g, [a4, a5, a6]))
True
The Model Builder will fail to find a model if the assumptions do entail the goal. Mace will continue to look for models of ever-increasing sizes until the end_size number is reached. By default,
end_size is 500, but it can be set manually for quicker response time.
>>> a7 = read_expr('all x.(man(x) -> mortal(x))')
>>> a8 = read_expr('man(socrates)')
>>> g2 = read_expr('mortal(socrates)')
>>> print(Mace(end_size=50).build_model(g2, [a7, a8]))
False
There is also a ModelBuilderCommand class that, like ProverCommand, stores a ModelBuilder, a goal, assumptions, a result, and a model. The only implementation in NLTK is MaceCommand.
Using Mace4¶
Check whether Mace4 can find a model.
>>> a = read_expr('(see(mary,john) & -(mary = john))')
>>> mb = MaceCommand(assumptions=[a])
>>> mb.build_model()
True
Show the model in ‘tabular’ format.
>>> print(mb.model(format='tabular'))
% number = 1
% seconds = 0
% Interpretation of size 2
john : 0
mary : 1
see :
| 0 1
0 | 0 0
1 | 1 0
Show the model in ‘cooked’ format.
>>> print(mb.model(format='cooked'))
% number = 1
% seconds = 0
% Interpretation of size 2
john = 0.
mary = 1.
- see(0,0).
- see(0,1).
see(1,0).
- see(1,1).
The property valuation accesses the stored Valuation.
>>> print(mb.valuation)
{'john': 'a', 'mary': 'b', 'see': {('b', 'a')}}
We can return to our earlier example and inspect the model:
>>> mb = MaceCommand(g, assumptions=[a4, a5, a6])
>>> m = mb.build_model()
>>> print(mb.model(format='cooked'))
% number = 1
% seconds = 0
% Interpretation of size 2
adam = 0.
eve = 0.
c1 = 1.
man(0).
- man(1).
woman(0).
woman(1).
- love(0,0).
love(0,1).
- love(1,0).
- love(1,1).
Here, we can see that adam and eve have been assigned the same individual, namely 0 as value; 0 is both a man and a woman; a second individual 1 is also a woman; and 0 loves 1. Thus, this is an
interpretation in which there is a woman that every man loves but Adam doesn’t love Eve.
Mace can also be used with propositional logic.
>>> p = read_expr('P')
>>> q = read_expr('Q')
>>> mb = MaceCommand(q, [p, p>-q])
>>> mb.build_model()
True
>>> mb.valuation['P']
True
>>> mb.valuation['Q']
False | {"url":"https://www.nltk.org/howto/inference.html","timestamp":"2024-11-02T04:57:55Z","content_type":"text/html","content_length":"57889","record_id":"<urn:uuid:ca7a7440-edcd-45d2-91ce-10cf88dcdfeb>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00501.warc.gz"}
Influence of the transition width on the magnetocaloric effect across the magnetostructural transition of Heusler alloys
Research. Cite this article: Cugini F, Porcari G, Fabbrici S, Albertini F, Solzi M. 2016 Influence of the transition width on the magnetocaloric effect across the magnetostructural transition of Heusler alloys. Phil. Trans. R. Soc. A 374: 20150306. http://dx.doi.org/10.1098/rsta.2015.0306
Accepted: 9 May 2016
One contribution of 16 to a discussion meeting issue ‘Taking the temperature of phase transitions in cool materials’.
Subject Areas: materials science, solid state physics
Keywords: magnetocaloric effect, Heusler alloys, magneto-structural transitions, magnetic shape memory materials
Author for correspondence: F. Albertini, e-mail: [email protected]
F. Cugini(1,2), G. Porcari(1), S. Fabbrici(2), F. Albertini(2) and M. Solzi(1)
(1) Department of Physics and Earth Sciences, University of Parma, Parco Area delle Scienze 7/A, 43124 Parma, Italy
(2) IMEM-CNR Institute, Parco Area delle Scienze 37/A, 43124 Parma, Italy
ORCID: FC, 0000-0003-0275-1986; GP, 0000-0002-6960-3681; SF, 0000-0002-8756-0750; FA, 0000-0002-7210-0735; MS, 0000-0002-9912-4534
We report a complete structural and magnetothermodynamic characterization of four samples of the Heusler alloy Ni-Co-Mn-Ga-In, characterized by similar
compositions, critical temperatures and high inverse magnetocaloric effect across their metamagnetic transformation, but different transition widths. The object of this study is precisely the
sharpness of the martensitic transformation, which plays a key role in the effective use of materials and which has its origin in both intrinsic and extrinsic effects. The influence of the transition
width on the magnetocaloric properties has been evaluated by exploiting a phenomenological model of the transformation built through geometrical considerations on the entropy versus temperature
curves. A clear result is that a large temperature span of the transformation is unfavourable to the magnetocaloric performance of a material, reducing both isothermal entropy change and adiabatic
temperature change obtainable in a given magnetic field and increasing the value of the maximum field needed to fully induce the transformation. The model, which is based on standard magnetometric
and conventional calorimetric measurements, turns out to be a convenient tool for the determination of the optimum values of transformation temperature span in a trade-off between sheer performance and amplitude of the operating range of a material. This article is part of the themed issue ‘Taking the temperature of phase transitions in cool materials’.
2016 The Author(s) Published by the Royal Society. All rights reserved.

1. Introduction

Over the last two decades, magnetic refrigeration has attracted great interest as a technological alternative to the conventional gas compression–expansion technique. The finding of suitable magnetocaloric materials, alternative to Gd, with large and reversible magnetocaloric properties, i.e. isothermal entropy change (Δs) and adiabatic temperature change (ΔTad), for cyclic applications in moderate magnetic fields will play a decisive role to bring this technology into the market [1]. The research in this field was boosted by the introduction in 1997 by Pecharsky and Gschneidner of the ‘giant’ magnetocaloric material Gd5(SiGe)4, showing high Δs at room temperature, associated with a first-order magneto-structural phase transition [2,3]. Since then a great effort has been made towards material systems displaying first-order phase changes, involving a significant latent heat [4–8]. Among them, magnetic shape memory Heusler alloys represent a particularly interesting class [9]. They are rare earth-free, easy to prepare and offer large tailoring possibilities. Their interesting phenomenology arises from a martensitic phase transition from a high-temperature cubic phase (austenite) to a low-temperature low-symmetry phase (martensite) that involves a change in both structural and magnetic properties. Remarkably, thanks to the strong discontinuities of the physical properties at the martensitic transformation, caloric effects can be obtained not only by applying magnetic fields but also stress and pressure, enabling multicaloric applications [10–12]. By exploiting suitable compositional changes of Ni2+x Mn1+y X1+z (X = Ga, In, Sn, Sb, x + y + z = 0) it has been possible to control the main physical properties of this class of materials and consequently tune the magnetocaloric performances: e.g. critical temperatures, field dependence of the transformation temperature, intensity and nature of the magnetocaloric effect (from direct to inverse) [9]. In particular, a crucial goal of the magnetocaloric research has been modelling the magnetic interactions in martensitic and austenitic phases and increasing the magnetization discontinuity (ΔM) at the transformation [13,14]. In off-stoichiometric Ga-based compounds, by changing the relative amount of the constituent elements, it is possible to merge martensitic and Curie temperatures and obtain a direct first-order transformation from ferromagnetic martensite to paramagnetic austenite, giving rise to a direct magnetocaloric effect [6]. On the other hand, In-, Sn- and Sb-compounds show, in suitable stoichiometric ranges, a martensitic transformation between a paramagnetic-like martensite and a ferromagnetic austenite, a feature which makes them known in the literature as ‘metamagnetic Heuslers’. In this case, an inverse and remarkable magnetocaloric effect has been obtained [9]. NiCoMnGa-based alloys also belong to the family of metamagnetic Heuslers: stoichiometry controls the critical temperatures, and allows for the realization of materials where the sequence of structural (occurring at the critical temperature TM) and magnetic (occurring at TCM and TCA) transitions can be swapped: in Ni-Co-Mn-Ga this means that the martensitic transformation can be realized between ferromagnetic phases, between paramagnetic phases and, most interestingly, between a paramagnetic-like martensite and ferromagnetic austenite [15,16]. Additionally, it was found that partial substitution of Ga with In allows one to selectively lower the structural critical temperature while leaving the magnetic critical temperatures almost unaffected [13]. This finding introduces a further degree of freedom in designing the magneto-structural behaviour of these alloys: in particular, it allows one to further separate TM and TCA, maximizing the magnetization jump occurring at the transformation. Comparison between quaternary and In-doped quinary compositions allowed us to verify that alloys showing higher magnetization jumps displayed also higher structural discontinuities, measured by X-ray diffraction experiments as the relative volume change between the two phases. Although metamagnetic Heuslers show high values of inverse magnetocaloric effect (adiabatic temperature changes up to ΔTad ∼ 8 K in µ0H = 1.95 T [17] at the first application of magnetic field), their performances are strongly reduced on subsequent runs of the magnetic field. The hysteretic character of the transition represents a strong drawback for the cyclic use of these materials. It is well assessed that the reversibility of the magnetocaloric effect depends upon two factors: the extent of the hysteresis and the shift in the transition temperature with field [18]. The current research is addressed to systems with low hysteresis and large field dependence of the martensitic transformation temperature, enabling high reversibility rates in moderate magnetic fields (around 1 T). The possible exploitation of minor loops or artificial phase nucleation sites has been proposed; yet, a deeper understanding of thermal and magnetic hysteresis is required to improve materials performances [17,19,20]. Some recent works have been addressed to this [21–23]. Several aspects have to be taken into account, both of intrinsic and of extrinsic origin, such as crystalline symmetry and geometric compatibilities of martensite and austenite, local variation of composition, internal stresses, lattice defects, atomic ordering and dynamic properties of the transformation [24–26]. On the other hand, not only hysteresis but also the sharpness of the martensitic transformation plays a crucial role in the effective use of materials, and similarly it is due to both intrinsic and extrinsic effects. The full potential of a material can be exploited only if the applied field is large enough to induce the complete transition, the magnetocaloric effect being proportional to the transformed fraction of phase [8]. In this paper, we report on four samples of the Heusler alloy Ni-Co-Mn-Ga-In; the samples were chosen with similar compositions, critical temperatures and high inverse magnetocaloric effect across their metamagnetic transformation, but different transition widths. We will provide a complete structural and magneto-thermodynamic characterization of the alloys through in-field calorimetry and evaluate the role of the transition width in the magnetocaloric properties by taking advantage of a phenomenological model of the transformation built through geometrical consideration of the entropy versus temperature curves.

2. Methods

Ni-Co-Mn-Ga-In samples were prepared by arc melting the stoichiometric amounts of high-purity elements. To prevent oxidation, a protective Ar atmosphere was established through several Ar–vacuum cycles and pure Ti was melted for 3 min prior to every fusion to act as a getter of residual oxygen. Melted buttons were turned around and re-melted four times to improve homogeneity; samples were then wrapped in Ta foil and annealed for 72 h at 1173 K in a protective atmosphere, finalized by water quenching. The composition was experimentally determined through energy dispersive spectroscopy (EDS) microanalysis on a Philips 515 scanning electron microscope. Thermomagnetic analysis (TMA) determined the structural and magnetic critical temperatures of the grown samples by measuring the AC susceptibility in a purpose-built susceptometer working at 0.5 mT and 500 Hz. Temperature-dependent X-ray diffraction patterns were collected with a Thermo ARL X’tra diffractometer equipped with a solid-state Si(Li) Peltier detector and an environmental chamber. Specific heat measurements were performed with a homemade differential scanning calorimeter based on thermoelectric modules [27]. This in-field calorimeter is able to work in 10−5 mbar vacuum between 250 and 420 K and in magnetic fields up to 1.8 T. Its temperature control resolution is ±0.01 K and the thermal sweep is controlled by a high-power Peltier cell. The calibration was performed by using a single-crystal sapphire sample. The error of specific heat data is estimated to be about 4%. Such error is due to slightly different vacuum conditions between the calibration and consecutive measurements and due to small oscillations of the temperature sweep rate. The reported measurements were carried out with temperature sweeps in heating and cooling at a rate of 2 K min−1, in zero magnetic field and in a magnetic field µ0H = 1.8 T.

3. Results and discussion
Figure 1 shows the TMA of four Co- and In-doped NiMnGa Heusler alloys of general formula Ni50−x Cox Mn50−y (Ga,In)y : their measured compositions are reported in table 1, as well as their critical
temperatures, measured as the inflection points on the susceptibility curves recorded by TMA. All the samples show a similar transformation between a paramagnetic-like martensite, characterized by a
null signal of the susceptibility, and a ferromagnetic austenite, evidenced by the high susceptibility region in the TMA curves. The martensitic Curie temperatures occur well below room temperature,
between 170 and 223 K; the transformation temperatures are all above room temperature and in a narrow interval, ranging between 350 K for sample S2 and 388 K for sample S1. The austenitic Curie
temperatures, which are mostly influenced by the Co content, occur between 430 K (sample S1) and 476 K (sample S3). Besides these similarities, TMA highlights quite different transformation widths
and hysteresis. In order to evaluate the structural properties, powder X-ray diffraction patterns were collected for each sample in a wide temperature range across the transformation. The powders
used for the experiments were heat treated to reduce the crystal defects and stresses introduced by grinding. All samples transform from cubic austenite to tetragonal martensite; the diffraction patterns have been fitted with the Le Bail algorithm [28] to extract the lattice parameters of the two phases at various temperatures. In the following, we discuss the results for sample S3, which is
representative of the whole series. Figure 2 shows a notable region of the diffraction patterns and their evolution with temperature: the lowering of the austenitic reflections and the onset of the
tetragonal martensitic phase are clearly visible. The patterns were collected on cooling from austenite to martensite in the range 453–283 K: the martensitic diffraction peaks become clearly visible
below 370 K, and traces of the austenitic phase are detectable at the lowest temperature of the series. Figure 3 shows the temperature evolution of the martensitic lattice parameters (figure 3a) and of the tetragonal distortion cM/(aM√2) (figure 3b): the tetragonal plane (the aM lattice parameter) shows an almost negligible thermal expansion over the measurement range. On the other hand, the tetragonal cM axis shows a quite strong contraction from high temperature down to 343 K, which is the value in the measured series closest to the cooling martensitic transformation temperature measured by TMA (TA−M = 350 K) (figure 1). On further cooling, the cM axis reverses the trend and increases again to higher values. The anomalous behaviour of the tetragonal axis is evident in the evolution of the tetragonal distortion and of the martensitic volume, which show a minimum at the transformation temperature TA−M. The analysis of the temperature dependence of the lattice parameters allows the determination of several features useful for characterizing the transformation [13]: besides the tetragonal distortion of the martensitic lattice, described above, the relative volume change ΔV/V provides a measure of the structural discontinuity between martensite and austenite, while the middle eigenvalue of the transformation matrix, defined for the cubic–tetragonal transformation as λ2 = aM√2/aA, is a good parameter [24] for describing the lattice mismatch at the transformation invariant plane. The calculated unit cell volumes and relative changes are reported in figure 3c,d.
From figure 3c, it appears that the two phases have different thermal expansion coefficients: thus, the relative volume change is not constant over temperature. Additional variability, visible as scatter of the computed quantities in the graphs of figure 3, is generated by the thermal drift of the experimental set-up during the pattern acquisitions and by the error propagation induced by the calculations; it is therefore sensible to estimate the mean transformation ΔV/V by linearly fitting the computed values at different temperatures and extrapolating to TA−M. For sample S3, we estimate ΔV/V ≈ 0.85 ± 0.05%. The same approach has been followed for the estimation of the mean tetragonal distortion and of the λ2 values. Table 2 reports the computed values of the relative volume discontinuity ΔV/V, tetragonal distortion cM/(aM√2) and invariant-plane mismatch λ2 = aM√2/aA for the presented samples. The relative volume change is the quantity with the highest variability within the series: the maximum volume discontinuity is ≈1.2 ± 0.1% for sample S2, while the smallest is ≈0.6 ± 0.1% for sample S1.
Figure 1. Temperature dependence of the AC susceptibility χ (arb. units) of the four samples. To improve readability, only the range 300–500 K is displayed.
Table 1. Compositions measured by EDS, expressed in at.%. The reported error is the standard deviation estimated from the compositional mapping; the lower bound on the error is the instrument uncertainty, ±0.1%. Magnetic (TCM, TCA) and structural (TA−M, TM−A) critical temperatures are estimated as the inflection points of the susceptibility curves.

sample | Ni (at.%) | Co (at.%) | Mn (at.%) | Ga (at.%) | In (at.%) | TCM (K) | TCA (K) | TA−M (K) | TM−A (K)
S1 | 42.4 ± 0.2 | 7.1 ± 0.2 | 33.0 ± 0.2 | 15.3 ± 0.2 | 2.3 ± 0.2 | 223 | 430 | 374 | 388
S2 | 41.7 ± 0.2 | 8.6 ± 0.3 | 32.3 ± 0.4 | 14.1 ± 0.2 | 3.3 ± 0.2 | … | … | … | …
S3 | 40.0 ± 0.3 | 10.8 ± 0.2 | 31.4 ± 0.2 | 16.5 ± 0.2 | 1.4 ± 0.2 | … | … | … | …
S4 | 41.7 ± 0.2 | 8.1 ± 0.3 | 33.3 ± 0.4 | 13.8 ± 0.3 | 3.2 ± 0.2 | … | … | … | …
The values of the tetragonal distortion and of the transformation-matrix middle eigenvalue show a much smaller variance. However, a trend similar to that observed for the volume discontinuity is established: sample S1 shows the least tetragonal distortion and the highest compatibility factor (its λ2 is the closest to unity of the series), while sample S2 shows the highest tetragonal distortion and the lowest value of λ2. The thermodynamic and thermomagnetic properties of the four samples are explored by measuring their specific heat under magnetic field across the martensitic transformation. In-field differential scanning calorimetry (DSC) offers a large amount of information concerning the thermodynamic and magnetocaloric features of materials exploiting first-order transitions [29]. Figure 4 shows the specific heat of the samples measured with temperature sweeps on heating and cooling, in zero field and in a 1.8 T applied magnetic field. The specific heat of the martensitic and austenitic phases, before and after the transition, is nearly the same for the four samples (cp^mart ≈ 480 J kg−1 K−1 at 300 K and cp^aust ≈ 570 J kg−1 K−1 at 400 K), and it varies slowly with temperature.
Figure 2. Temperature evolution of the diffraction patterns (intensity vs 2θ) collected across the transformation temperature. A narrow 2θ range is displayed to highlight the temperature evolution of the indexed reflections of austenite (subscript A) and martensite (subscript M).
Figure 3. Temperature evolution of (a) the martensitic lattice parameters aM and cM, (b) the tetragonal distortion cM/(aM√2), (c) the austenitic and martensitic cell volumes and (d) the relative volume change ΔV/V.
At the transition temperature, the presence of peaks in the specific heat confirms that this magnetic transition is of first order; the heating and cooling peaks are separated by the transformation hysteresis, while the magnetic field, as expected for inverse-magnetocaloric-effect alloys, promotes the high-temperature magnetic phase, thus shifting the transformations to lower temperatures. From the shift of the heat-flow peaks, we can deduce the values of dT/µ0dH; the values, calculated for the transition on heating and on cooling, are reported in table 3. In all the samples, the magnitude of dT/µ0dH across the cooling transformation is higher than that on the heating branch. This confirms the previously reported behaviour [14,30] according to which the magnetic field shifts the transformation temperature more on cooling, owing to the larger magnetization jump.
Figure 4. Specific heat curves of the four samples in µ0H = 0 T (solid lines) and 1.8 T (dashed lines), measured with temperature sweeps on heating (red lines, right arrow) and on cooling (blue lines, left arrow).
Table 2. Mean values of the relative volume change ΔV/V, the tetragonal distortion of the martensitic cell t = cM/(aM√2) and the middle eigenvalue of the transformation matrix λ2 = aM√2/aA. The means were calculated at the transformation temperature from the linear fits, as described in the text.

sample | ΔV/V (%) | t = cM/(aM√2) | λ2 = aM√2/aA
S1 | 0.6 ± 0.1 | 1.200 ± 0.005 | 0.940 ± 0.002
S2 | 1.2 ± 0.1 | 1.210 ± 0.005 | 0.933 ± 0.002
S3 | 0.85 ± 0.05 | 1.205 ± 0.005 | 0.936 ± 0.001
S4 | 0.88 ± 0.025 | 1.202 ± 0.005 | 0.937 ± 0.001
Observing the temperatures of the peaks in the cooling and heating measurements, we can also deduce the values of the thermal hysteresis (reported in table 3), which characterize the first-order transition. The different field sensitivity of the cooling and heating critical temperatures also affects the thermal hysteresis: the in-field hysteresis is larger than the zero-field one. The peaks' shape, width and height differ for every sample and do not seem to be correlated with the stoichiometric composition. The area under the peaks corresponds to the latent heat L of the fully transformed phase, which can be calculated by integrating the specific heat data after subtraction of the baseline cp,baseline between the start and finish temperatures of the transformation (Ts and Tf):

L(H) = ∫[Ts→Tf] (cp(T, H) − cp,baseline(T)) dT.   (3.1)
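As a concrete illustration of equation (3.1): with the calorimetric data on a discrete temperature grid, the integral reduces to a simple numerical quadrature. The short Python sketch below is ours, not part of the paper, and the array names are hypothetical:

    import numpy as np

    # T: temperatures (K) spanning Ts..Tf across the peak
    # cp_meas: measured cp(T, H) in J kg^-1 K^-1; cp_base: interpolated baseline on the same grid
    def latent_heat(T, cp_meas, cp_base):
        """Equation (3.1): L(H) = integral over Ts..Tf of (cp - cp_baseline) dT."""
        return np.trapz(cp_meas - cp_base, T)   # J kg^-1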
Table 3. Characteristics of the first-order transition of the four samples, as obtained from the DSC data: transition temperature in zero applied magnetic field on heating, Tp; transition width W (full width at half maximum (FWHM) of the specific heat peak in zero field on heating); latent heat of the transition in zero applied field on heating, L; magnetic field dependence of the transition temperature, dT/µ0dH, on heating and cooling; thermal hysteresis (Hyst.) in zero and applied magnetic field.

sample | Tp (K) | W (K) | L (J kg−1) | dT/µ0dH heating (K T−1) | dT/µ0dH cooling (K T−1) | Hyst. µ0H = 0 T (K) | Hyst. µ0H = 1.8 T (K)
S1 | 385.0 ± 0.3 | 3.1 ± 0.3 | 2600 ± 80 | −2.4 ± 0.6 | −2.7 ± 0.6 | 10.4 ± 0.6 | 11.6 ± 0.6
S2 | 351.9 ± 0.5 | 10.0 ± 0.8 | 4700 ± 140 | −4.6 ± 1.0 | −5.9 ± 1.0 | 16.7 ± 1.0 | 19.1 ± 1.0
S3 | 381.4 ± 0.5 | 11.0 ± 1.0 | 4450 ± 130 | −4.5 ± 1.0 | −5.9 ± 1.0 | 21.5 ± 1.0 | 24.0 ± 1.0
S4 | 366.6 ± 0.5 | 7.7 ± 0.7 | 5150 ± 150 | −3.5 ± 1.0 | −4.3 ± 1.0 | 16.9 ± 1.0 | 18.4 ± 1.0

The calculated latent heat values are comparable with those measured in samples of similar composition [14]. Both the application of the magnetic field and the shift of the transformation to lower temperatures observed in the cooling curves result in a sizeable reduction of L. The strong
action of the magnetic field on the latent heat was already observed in Ni-Mn-Co-Ga-In [14]. The absence of In in the quaternary alloys reduces this effect, which disappears in the parent Ni2MnGa alloy, which shows a ferro–ferro martensitic transformation [14]. The comparison between zero-field and in-field specific heat data makes it possible to obtain a complete magnetocaloric characterization of the samples. The integration of the calorimetric data provides the entropy–temperature curves across the transition at different magnetic fields:

s(T, H) − s(T0) = ∫[T0→T] (cp(T′, H)/T′) dT′.   (3.2)

The adiabatic temperature change ΔTad(T) and the isothermal entropy change Δs(T) can be deduced from the obtained s–T curves. The errors associated with this numerical manipulation of specific heat data can be estimated following the discussions of Porcari et al. [27] and Pecharsky & Gschneidner [31]. The temperature behaviours of Δs(T) and ΔTad(T) for µ0ΔH = 1.8 T are reported in figure 5. The results for samples S2 and S4 have been compared with the Δs(T) obtained from magnetic measurements using the Maxwell relation and with the ΔTad(T) directly measured with a probe based on a Cernox temperature sensor [32]. The results obtained from the different techniques, provided that they are used with the proper measurement protocol and on strictly the same sample, turn out to be consistent, as already demonstrated in [27]. The peak values Δs^peak and ΔTad^peak for a magnetic field span of µ0ΔH = 1.8 T across the transformation on heating are shown in table 4. The ΔTad^peak values reported in this paper are the highest among all the Ga-based Heuslers [14,27,33,34], reaching almost 2 K T−1 for sample S2. For all the samples, the measured Δs^peak turns out to be lower than the maximum entropy change of the fully induced phase, estimated from the latent heat as Δs_full ≈ L/Tp. This means that a magnetic field of 1.8 T does not fully induce the transformation in these samples. We can observe in table 4 that there is no close relationship between the Δs^peak and Δs_full values: for instance, S4 has the largest Δs_full = 14 J kg−1 K−1 among the four samples, but it shows a Δs^peak lower than that of S2. This fact underlines that further quantities characterizing the transitions play a role in determining the Δs(T) and ΔTad(T) of real materials. A large Δs_full, which on the basis of the Clausius–Clapeyron relation is proportional to the magnetization difference between the two phases, is not enough to ensure a large Δs^peak exploitable in thermomagnetic cycles. There is instead a correlation between ΔTad^peak and dT/µ0dH; however, also in this case, none of the samples reaches the maximum expected value of ΔTad, calculated as ΔH·(dT/dH) (table 4). The key to understanding the behaviour of these materials is the transformation width W: we can observe in figure 4 that for all the samples the transition occurs over a large temperature range rather than at a well-defined temperature, as expected in principle for a first-order transition. We estimated W from the calorimetric measurements as the FWHM of the transformation peaks (table 3), because it is difficult to determine the initial and final temperatures of the transition exactly. The quantity W assumes a relevant role in determining both Δs and ΔTad.
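For concreteness, the numerical route from equation (3.2) to the quantities plotted in figure 5 can be sketched in a few lines of Python (our sketch, not the authors' code; the variable names are hypothetical):

    import numpy as np

    def entropy_curve(T, cp):
        """Equation (3.2): s(T) - s(T0) by cumulative trapezoidal integration of cp/T."""
        integrand = cp / T
        ds = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)
        return np.concatenate(([0.0], np.cumsum(ds)))

    def mce_from_curves(T, s_zero, s_field):
        """Isothermal entropy change and adiabatic temperature change from two s-T curves."""
        delta_s = s_field - s_zero               # positive across an inverse-MCE transition
        T_at_s = np.interp(s_zero, s_field, T)   # invert the in-field curve (s is monotonic in T)
        delta_Tad = T_at_s - T                   # negative here: the sample cools on field application
        return delta_s, delta_Tad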
Figure 5. ΔTad(T) (circles) and Δs(T) (squares) for µ0ΔH = 1.8 T, calculated from the DSC data on heating. (Online version in colour.)
Table 4. Peak values of the isothermal entropy change Δs^peak and of the adiabatic temperature change ΔTad^peak in a field span of 1.8 T, calculated from the specific heat data, compared with the expected entropy change of the fully induced transition, Δs_full ≈ L/Tp, and the maximum achievable adiabatic temperature change, ΔT_max = ΔH·(dT/dH). The adiabatic temperature change ΔT_calc is calculated using equation (3.8) from the data reported in table 3.

sample | Δs^peak (J kg−1 K−1) | Δs_full (J kg−1 K−1) | Δs^peak/Δs_full (%) | ΔTad^peak (K) | ΔT_max (K) | ΔTad^peak/ΔT_max (%) | ΔT_calc (K)
S1 | 6.2 ± 0.5 | 6.8 | 91 | −2.5 ± 0.2 | 4.3 | 58 | 2.7
S2 | 7.7 ± 0.8 | … | … | −3.3 ± 0.3 | … | … | …
S3 | 6.5 ± 0.9 | … | … | −3.1 ± 0.4 | … | … | …
S4 | 7.5 ± 0.8 | 14 | … | −2.9 ± 0.3 | … | … | 3.7
We can observe that S1, which has the narrowest W, is the sample in which the 1.8 T magnetic field manages to transform almost all of the phase (Δs^peak/Δs_full = 91%). At the same time, this sample has the highest ratio between the measured ΔTad^peak and the maximum exploitable ΔT_max, as deduced from the relation ΔT_max = ΔH·(dT/dH). As for thermal and magnetic hysteresis, several features contribute to the smearing of the transformation over temperature, both extrinsic and intrinsic to the material. The lattice mismatch (e.g. λ2) and the volume difference ΔV/V between the two phases contribute to the total free energy with an elastic strain energy term that plays a major role in broadening the transition and in determining the two-phase stability regions. Consistently, among the measured samples, sample S1, which shows the least pronounced discontinuity of the structural parameters at the transformation (table 2), displays the smallest transformation width (W = 3.1 ± 0.3 K). However, a direct correlation between intrinsic structural features and transition width cannot be drawn for all the samples, highlighting the crucial role of extrinsic properties in giving rise to variations of the effective local transition temperature.
Compositional inhomogeneity, caused either by improper melting or by phase splitting due to solubility limits of the various elements in the alloy, can be one of the most important contributions. Compositional mapping performed through EDS analysis shows that compositional fluctuations are present in all samples, mainly involving Mn: the composition errors reported in table 1 are the propagation of the standard deviations calculated for all elements with the experimental error of the EDS technique, which in our case is ±0.1 at.%. The uncertainties on Mn reach 0.4 at.%, while the other elements show much lower deviations, in some cases comparable to the experimental error of EDS. These values, although numerically limited, can be significant in these alloys, where a variation of 1 at.% in Mn can in some cases shift the martensitic critical temperature by tens of degrees [15]. Besides compositional inhomogeneities, microstructural features such as defects and grain boundaries strongly influence the martensitic transformation process, which proceeds through nucleation and growth of one phase into the other following an avalanche-criticality type of path [17]. Further analyses specifically targeting these aspects are needed to improve the understanding of the phenomenon, aiming at a better exploitation of magnetocaloric materials. In order to understand the role
that each thermodynamic and thermomagnetic parameter characterizing the transition plays in determining the features of the magnetocaloric effect, a simple geometrical model is constructed in the s–T plane. A similar model, based on magnetization data, was introduced in [33] to correlate the isothermal (Δs) and adiabatic (ΔTad) features of the magnetocaloric effect; in this paper, we generalize its construction to take the transformation width into account as well. The model is built by drawing the tangent lines at the inflection points of the two entropy curves across the transition, both in zero field and under the applied magnetic field, together with the tangent lines to the entropy curves below and above the transition region (figure 6a). In this way, the area of the s–T plane where the magnetocaloric effect is significant looks like a parallelogram. Physically, this model means that we are considering a linear variation of the phase fraction, the order parameter of the process, over a temperature range W centred at the peak temperature Tp of the cp(T) curve. Figure 6b shows the comparison between the measured specific heat data of sample S2 and the model constructed on such data. The specific heat peaks are approximated by a rectangular shape, which after integration gives rise to the linear trend of the entropy curves at the transition. The rectangle width W is determined by the latent heat, which must remain equal to that obtained from the specific heat data, and by the height of the specific heat peak, calculated as the peak value of a Gaussian best fit of the experimental curve. The temperature dependence of the specific heat below and above the transition is considered to be linear, and the entropy curves of the austenitic phase with and without magnetic field are assumed to overlap: the effect of the applied magnetic field on the austenitic phase, where a direct magnetocaloric contribution is expected, turns out to be negligible. Both the positive ΔTad and negative Δs contributions on the high-temperature peak-tails of the transformation, reported in figure 5 for all the samples, are experimentally observed but lie below the experimental error. Therefore, the latent heat, represented as the segment DB′ in figure 6c, is assumed to be the same in zero and applied magnetic field. Although this approximation might seem far from reality, a possible field dependence of the latent heat (leading to s(T) curves that do not overlap in the austenitic range) would not affect the determination of the peak values of either ΔTad or Δs: by construction, the peak values are realized before the in-field austenitic line starts. The only effect of the magnetic field is then to shift the transition temperature Tp to lower temperature. For clarity, we apply this construction to the entropy curves on heating only. Curves on cooling and the effects due to thermal hysteresis can be introduced too, as was done by Gottschall et al. [26], but this is outside the scope of this paper. In this simplified model, five fundamental parameters are enough to describe the transition: the temperature of the transition peak in zero applied magnetic field (Tp), the total latent heat of the transition (L), the shift of the transition temperature due to the applied magnetic field (ΔH·(dT/dH)), the transition width (W) and the specific heat of the martensitic phase before the transition (cp^mart).
Figure 6. (a) Sketch of the proposed model superimposed on the s(T) curves calculated from DSC data on heating for µ0H = 0 and 1.8 T. (b) Comparison between the measured specific heat data and the considered model of the transition. (c) Geometrical construction based on the model, used to correlate the main parameters of the first-order transition (see text for details).
Thanks to simple geometrical proportions (figure 6c), considering the triangles ABD, ACE and FC′E, we can link the magnetocaloric features at the transition, ΔTad and Δs, to these five parameters:

(dT/dH)·ΔH : ΔTad = (Δs + CC′) : Δs   (3.3)

and

W : ΔTad = (L/Tp + BB′) : Δs.   (3.4)

The segments CC′ and BB′ depend on the slope of the entropy curve before the transition and can be approximated as

BB′ = AB·tan α ≈ AB·(cp^mart/Tp) = W·(cp^mart/Tp)   (3.5)

and

CC′ = AC·tan α ≈ AC·(cp^mart/Tp) = (dT/dH)·ΔH·(cp^mart/Tp).   (3.6)

The validity of equation (3.3) for real materials was demonstrated in Porcari et al. [27] by comparing the ΔTad values obtained from (3.3) with those directly measured and with those derived from in-field specific heat data. The proportions (3.3) and (3.4) are strictly valid only for purely first-order systems and when the field-induced transition shift is smaller than the transformation width [34]: all the samples presented in this paper comply with these restrictions. By inserting (3.5) and (3.6) into (3.3) and (3.4), we obtain

Δs^peak = ((dT/dH)·ΔH / W) · (L/Tp)   (3.7)

and

ΔTad^peak = (dT/dH)·ΔH · L / (L + W·cp^mart).   (3.8)

By combining equations (3.7) and (3.8), it is possible to correlate Δs^peak and ΔTad^peak:

ΔTad^peak = Tp·Δs^peak / (L/W + cp^mart).   (3.9)
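Equation (3.8) can be checked directly against the parameters of table 3; the short Python script below (ours, not the authors' code, taking cp^mart ≈ 480 J kg−1 K−1 as quoted above) reproduces the ΔT_calc entries of table 4 for samples S1 and S4:

    cp_mart = 480.0                    # J kg^-1 K^-1, martensite before the transition
    dH = 1.8                           # T, field span
    samples = {                        # |dT/mu0dH| on heating (K/T), W (K), L (J/kg), from table 3
        "S1": (2.4, 3.1, 2600.0),
        "S4": (3.5, 7.7, 5150.0),
    }
    for name, (dTdH, W, L) in samples.items():
        dT_calc = (dTdH * dH) * L / (L + W * cp_mart)   # equation (3.8)
        print(name, round(dT_calc, 1))                  # S1 -> 2.7, S4 -> 3.7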
One can appreciate how equation (3.9) deviates from previous derivations on the same matter [35]. The reason for such a difference originates from the substantially different approximations employed to describe the first-order transformation: we must remark that the derivation appearing in Pecharsky et al. [35] is obtained outside the range of validity of the present model, i.e. by assuming idealized sharp transitions (W = 0) for which the application of the magnetic field is sufficient to complete the transformation, whereas here we are dealing with partial phase transformations and finite transition ranges. The denominator of equation (3.9) represents an effective specific heat inside the transition region, as shown in figure 6b, with the factor L/W taking into account the contribution of the latent heat spread over the whole temperature range of the transition. Equations (3.7) and (3.8) can be used to estimate the magnetocaloric features of materials from
standard magnetometric and conventional zero-field DSC measurements. The only parameters required to perform the calculation are the specific heat before the transition, the latent heat of the transformation in zero applied field, the peak temperature of the transition, its width and its change with the applied magnetic field. In table 4, we compare the ΔTad calculated using equation (3.8) with those obtained from in-field specific heat data. The calculated values show the same trend as the measured ones, but they turn out to be overestimated by about 20%. This overestimation is due to the difference between the smoother trend of the experimental entropy curves and the series of line segments of our geometrical model. Equations (3.7) and (3.8) also give some simple indications of how the various parameters affect the Δs^peak and ΔTad^peak values. The first aspect we notice is that higher values of (dT/dH) and L increase both Δs^peak and ΔTad^peak. The former plays a primary role in determining ΔTad^peak and represents its upper limit when the product W·cp^mart tends to zero. Δs^peak, instead, reaches its maximum value, limited by the entropy variation of the fully induced phase L/Tp, when the ratio ((dT/dH)·ΔH)/W tends to 1. For a smaller width of the transition, this model is no longer valid: in any case, the value of Δs^peak cannot grow further and is expected to remain constant over a finite temperature range. On the contrary, we can observe that an enlargement of the transition width always decreases both the Δs^peak and ΔTad^peak values. These general considerations can be visualized in figure 7. In figure 7a, a series of possible s–T diagrams is represented, differing in the transition width W. All the other parameters (dT/µ0dH, Tp, cp, L) are kept constant and equal to those of sample S2. Figure 7b,c reports the variation of the Δs(T) and ΔTad(T) curves on changing the W value. As discussed above, we can observe that Δs reaches its upper limit when W = ΔH·(dT/dH) (= 8.3 K in this case), while ΔTad continues to grow as W tends to zero. It is evident indeed that the transition width W plays a key role in reducing the Δs^peak and ΔTad^peak values compared with their upper limits, L/Tp and ΔT_max = ΔH·(dT/dH), respectively. As a general consideration aimed at future materials design, the sole reduction of W should not be the main goal of research: in fact, W also acts on the width of the Δs(T) and ΔTad(T) curves, enlarging the area where the magnetocaloric effect is sizeable and can thus be exploited in thermodynamic cycles.
A guideline for this analysis comes from a careful consideration of the denominator in equation (3.8). Following a straightforward mathematical manipulation, one can also rewrite equation (3.8) as

ΔTad^peak = ΔTad^max / (1 + W·cp^mart/L),   (3.10)

which evidences the combined effect of the quantities W and L in determining the ΔTad^peak value. Considering that cp^mart is almost constant for all the samples of this series, we deduce from equation (3.10) that a large latent heat allows a relevant ΔTad^peak value to be retained even in the case of a non-negligible transition width. This statement is valid if dT/µ0dH is kept constant; it was instead observed that for some materials, such as the Fe2P-based compounds, a larger latent heat decreases ΔTad^peak, owing to its significant effect on dT/µ0dH [36]. As an example, for sample S1, L = 2600 J kg−1 and the relatively small W = 3.1 K correspond to ΔTad^peak = 2.7 K, while for sample S4, L = 5150 J kg−1 and, in spite of the larger width W = 7.7 K, equation (3.10) yields ΔTad^peak = 3.7 K. In practical terms, higher values of the transformation width, and thus wider working ranges, can be tolerated without an excessive decrease of ΔTad^peak and Δs^peak as long as a high transformation latent heat is obtained.

Figure 7. (a) Simulation of different s–T diagrams on varying the transition width W (from 0.1 to 20 K); ΔTad and Δs are indicated by arrows. (b,c) Variation of the ΔTad(T) and Δs(T) curves as a function of the transition width W, calculated using the geometrical model.
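The trade-off visualized in figure 7 follows directly from equations (3.7) and (3.8); a minimal sweep over W (our Python sketch, using sample S2's parameters from table 3) is:

    Tp, L, cp_mart = 351.9, 4700.0, 480.0     # sample S2 (table 3)
    shift = 4.6 * 1.8                         # |dT/mu0dH| * field span, about 8.3 K
    for W in (0.1, 5.0, 10.0, 20.0):          # widths explored in figure 7
        ds_peak = min(shift / W, 1.0) * L / Tp       # eq. (3.7), capped at L/Tp
        dTad_peak = shift * L / (L + W * cp_mart)    # eq. (3.8)
        print(W, round(ds_peak, 2), round(dTad_peak, 2))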
4. Conclusion

In this contribution, we have presented a thorough calorimetric and structural characterization of four samples of the Heusler alloy Ni-Co-Mn-Ga-In. The studied materials show high values of the inverse magnetocaloric effect, triggered by a magnetic field near their martensitic transformation temperatures: compared with the other members of the Co- and In-doped NiMnGa family reported in the literature so far, we have measured the highest values of the adiabatic temperature change (up to almost 2 K T−1). The presented samples display similar compositions and magneto-structural phenomenology, yet their martensitic transformations occur over different temperature spans. The different transformation widths have to be ascribed to both intrinsic (e.g. the structural discontinuity between martensite and austenite) and extrinsic causes (e.g. sample inhomogeneities, defects, grain boundaries). The role of the transformation broadening in the magnetocaloric properties has been investigated by developing a geometrical model, which traces the transformation coordinates in the entropy–temperature plane. The model is readily applicable, as it relies on standard magnetometric and conventional DSC measurements. It is found that the transition width is always detrimental to the magnetocaloric performance of a material, reducing the isothermal entropy change and the adiabatic temperature change obtainable in a given magnetic field and increasing the maximum field needed to fully induce the transformation; yet, the presented model is a convenient tool for estimating the effects of the transition width on the magnetocaloric properties, allowing for the determination of optimum values of the transformation width in a trade-off between sheer performance and the amplitude of the operating range of a material.

Authors' contributions. All the authors contributed to the data analysis and helped draft and revise the manuscript. F.C. and G.P. performed the calorimetric characterization and developed the geometrical model. S.F. synthesized the samples and performed the TMA measurements and the temperature-dependent X-ray diffraction experiments. F.A. and M.S. motivated the study and supervised the research activity. All authors gave final approval for publication.
Competing interests. The authors declare no competing interests.
Funding. F.C. thanks Fondazione Cariparma for financial support.
Acknowledgements. The authors acknowledge Dr Davide Calestani (IMEM-CNR) and Dr Tiziano Rimoldi (University of Parma) for their contributions to the experimental characterization (composition measurements).
References 1. Kitanovski A, Tušek J, Tomc U, Plaznik U, Ožbolt M, Poredoš A. 2015 Magnetocaloric energy conversion, green energy and technology. Cham, Switzerland: Springer International Publishing.
2. Pecharsky K, Gschneidner Jr KA. 1997 Giant magnetocaloric effect in Gd5 (Si2 Ge2 ). Phys. Rev. Lett. 78, 4494–4497. (doi:10.1103/PhysRevLett.78.4494) 3. Pecharsky VK, Gschneidner Jr KA. 1997
Tunable magnetic regenerator alloys with a giant magnetocaloric effect for magnetic refrigeration from ∼20 to ∼290 K. Appl. Phys. Lett. 70, 3299–3302. (doi:10.1063/1.119206) 4. Tegus O, Brück E,
Buschow KHJ, de Boer FR. 2002 Transition-metal-based magnetic refrigerants for room-temperature applications. Nature 415, 150–152. (doi:10.1038/415150a) 5. Wada H, Tanabe Y. 2001 Giant magnetocaloric
effect of MnAs1−x Sbx . Appl. Phys. Lett. 79, 3302–3304. (doi:10.1063/1.1419048) 6. Pareti L, Solzi M, Albertini F, Paoluzi A. 2003 Giant entropy change at the co-occurrence of structural and
magnetic transitions in the Ni2.19 Mn0.81 Ga Heusler alloy. Eur. Phys. J. B 32, 303–307. (doi:10.1140/epjb/e2003-00102-y) 7. Krenke T, Duman E, Acet M, Wassermann EF, Moya X, Manosa L, Planes A. 2005
Inverse magnetocaloric effect in ferromagnetic Ni-Mn-Sn alloys. Nat. Mater. 4, 450–454. (doi:10.1038/nmat1395) 8. Liu J, Gottschall T, Skokov KP, Moore JD, Gutfleisch O. 2012 Giant magnetocaloric
effect driven by structural transitions. Nat. Mater. 11, 620–626. (doi:10.1038/nmat3334) 9. Acet M, Mañosa L, Planes A. 2001 Magnetic-field-induced effects in martensitic Heusler-based magnetic shape
memory alloys. In Handbook of magnetic materials 19 (ed. KHJ Buschow), ch. 4, pp. 231–289. Amsterdam, The Netherlands: Elsevier. 10. Moya X, Kar-Narayan S, Mathur ND. 2014 Caloric materials near
ferroic phase transitions. Nat. Mater. 13, 439–450. (doi:10.1038/nmat3951) 11. Mañosa L, González-Alonso D, Planes A, Bonnot E, Barrio M, Tamarit JL, Aksoy S, Acet M. 2010 Giant solid-state
barocaloric effect in the Ni-Mn-In magnetic shape-memory alloy. Nat. Mater 9, 478–481. (doi:10.1038/nmat2731) 12. Fahler S et al. 2012 Caloric effects in ferroic materials: new concepts for cooling.
Adv. Eng. Mater. 14, 10–19. (doi:10.1002/adem.201100178) 13. Albertini F et al. 2011 Reverse magnetostructural transitions by Co and In doping of NiMnGa alloys: structural, magnetic, and magnetoelastic
properties. In Advances in magnetic shape memory materials, vol. 684 (ed. VA Chernenko), pp. 149–161. Materials Science Forum. Zurich, Switzerland: Trans Tech Publications. 14. Fabbrici S, Porcari G,
Cugini F, Solzi M, Kamarad J, Arnold Z, Cabassi R, Albertini F. 2014 Co and In doped Ni-Mn-Ga magnetic shape memory alloys: a thorough structural, magnetic and magnetocaloric study. Entropy 16,
2204–2222. (doi:10.3390/e16042204) 15. Fabbrici S, Albertini F, Paoluzi A, Bolzoni F, Cabassi R, Solzi M, Righi L, Calestani G. 2009 Reverse magnetostructural transformation in Co-doped NiMnGa
multifunctional alloys. Appl. Phys. Lett. 95, 022508. (doi:10.1063/1.3179551)
16. Fabbrici S et al. 2011 From direct to inverse giant magnetocaloric effect in Co-doped NiMnGa multifunctional alloys. Acta Mater. 59, 412–419. (doi:10.1016/j.actamat.2010.09.059) 17. Gottschall T,
Skokov KP, Frincu B, Gutfleisch O. 2015 Large reversible magnetocaloric effect in Ni-Mn-In-Co. Appl. Phys. Lett. 106, 021901. (doi:10.1063/1.4905371) 18. Emre B, Yuce S, Stern-Taulats E, Planes A,
Fabbrici S, Albertini F, Mañosa L. 2013 Large reversible entropy change at the inverse magnetocaloric effect in Ni-Co-Mn-Ga-In magnetic shape memory alloys. J. Appl. Phys. 113, 213905. (doi:10.1063/
1.4808340) 19. Niemann R, Diestel A, Backen A, Roessler UK, Behler C, Hahn SM, Wagner F-X, Schultz L, Fähler S. 2014 Controlling reversibility of the magneto-structural transition in first-order
materials on the micro scale. In Proc. 6th Int. Conf. on Magnetic Refrigeration at Room Temperature (Thermag VI), Victoria, Canada, 7–10 September 2014. 20. Diestel A, Niemann R, Schleicher B,
Schwabe S, Schultz L, Fähler S. 2015 Field-temperature phase diagrams of freestanding and substrate-constrained epitaxial Ni-Mn-Ga-Co films for magnetocaloric applications. J. Appl. Phys. 118,
023908. (doi:10.1063/1.4922358) 21. Shamberger PJ, Ohuchi FS. 2009 Hysteresis of the martensitic phase transition in magnetocaloric-effect NiMnSn alloys. Phys Rev. B 79, 144407. (doi:10.1103/
PhysRevB.79. 144407) 22. Titov I, Acet M, Farle M, Gonzalez-Alonso D, Mañosa L, Planes A, Krenke T. 2012 Hysteresis effects in the inverse magnetocaloric effect in martensitic Ni-Mn-In and Ni-Mn-Sn.
J. Appl. Phys. 112, 073914. (doi:10.1063/1.4757425) 23. Srivastava V, Song YT, Bhatti K, James RD. 2011 The direct conversion of heat to electricity using multiferroic alloys. Adv. Energy Mater. 1,
97–104. (doi:10.1002/aenm.201000048) 24. Cui J et al. 2006 Combinatorial search of the thermoelastic shape-memory alloys with extremely small hysteresis width. Nat. Mater. 5, 286–290. (doi:10.1038/
nmat1593) 25. Niemann R, Kopeček J, Heczko O, Romberg J, Schultz L, Fähler S, Vives E, Mañosa L, Planes A. 2014 Localizing sources of acoustic emission during the martensitic transformation. Phys.
Rev. B 89, 214118. (doi:10.1103/PhysRevB.89.214118) 26. Gottschall T, Skokov KP, Burriel R, Gutfleisch O. 2016 On the S(T) diagram of magnetocaloric materials with first-order transition: kinetic and
cyclic effects of Heusler alloys. Acta Mater. 107, 1–8. (doi:10.1016/j.actamat.2016.01.052) 27. Porcari G, Cugini F, Fabbrici S, Pernechele C, Albertini F, Buzzi M, Mangia M, Solzi M. 2012
Convergence of direct and indirect methods in the magnetocaloric study of first order transformations: the case of Ni-Co-Mn-Ga Heusler alloys. Phys. Rev. B 86, 104432. (doi:10.1103/
PhysRevB.86.104432) 28. Le Bail A, Duroy H, Fourquet JL. 1988 Ab-initio structure determination of LiSbWO6 by X-ray powder diffraction. Mater. Res. Bull. 23, 447–452. (doi:10.1016/0025-5408(88)
90019-0) 29. Basso V, Sasso CP, Küpferling MA. 2010 Peltier cells differential calorimeter with kinetic correction for the measurement of cp (H,T) and δs(H,T) of magnetocaloric materials. Rev. Sci.
Instrum. 81, 113904. (doi:10.1063/1.3499253) 30. Khovaylo VV et al. 2010 Peculiarities of the magnetocaloric properties in Ni-Mn-Sn ferromagnetic shape memory alloys. Phys. Rev. B 81, 214406.
(doi:10.1103/PhysRevB. 81.214406) 31. Pecharsky VK, Gschneidner Jr KA. 1999 Magnetocaloric effect from indirect measurements: magnetization and heat capacity. J. Appl. Phys. 86, 565–575. (doi:10.1063
/1.370767) 32. Porcari G, Buzzi M, Cugini F, Pellicelli R, Pernechele C, Caron L, Brück E, Solzi M. 2013 Direct magnetocaloric characterization and simulation of thermomagnetic cycles. Rev. Sci.
Instrum. 84, 073907. (doi:10.1063/1.4815825) 33. Porcari G, Fabbrici S, Pernechele C, Albertini F, Buzzi M, Paoluzi A, Kamarad J, Arnold Z, Solzi M. 2012 Reverse magnetostructural transformation and
adiabatic temperature change in Co-, In-substituted Ni-Mn-Ga alloys. Phys. Rev. B 85, 024414. (doi:10.1103/PhysRevB.85.024414) 34. Porcari G. 2013 Magnetocaloric effect across first order
transformations of energy conversion materials. PhD thesis, ch. 4, University of Parma, Italy. 35. Pecharsky VK, Gschneidner Jr AK, Pecharsky AO, Tishin AM. 2001 Thermodynamics of the magnetocaloric
effect. Phys. Rev. B 64, 144406. (doi:10.1103/PhysRevB.64.144406) 36. Guillou F, Porcari G, Yibole H, van Dijk N, Brück E. 2014 Taming the first-order transition in giant magnetocaloric materials.
Adv. Mater. 26, 2671–2675. (doi:10.1002/adma.201304788) | {"url":"https://d.docksci.com/influence-of-the-transition-width-on-the-magnetocaloric-effect-across-the-magnet_5a0a1cc5d64ab24cf2c65148.html","timestamp":"2024-11-11T23:42:40Z","content_type":"text/html","content_length":"111324","record_id":"<urn:uuid:77418a3a-8aa0-4113-a2ee-afe0728306c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00790.warc.gz"} |
Stable Money
SBI FD Calculator
The State Bank of India (SBI), headquartered in Mumbai, is the largest Indian multinational public sector bank. It controls over a quarter of the market and serves over 48 crore consumers through its
22,405+ branches spread across the country. Furthermore, this bank has a long 200-year legacy, making it one of India's most reputable banking and financial services firms.
The SBI FD calculator serves as a valuable resource for investors seeking clarity on the growth of their savings over time. This tool is especially beneficial for those who wish to plan their
finances, set financial goals, and ascertain the ideal tenure and investment amount for their FDs.
Let us dive deeper into this here to clearly understand what an FD calculator is and how the SBI FD calculator works.
How to Use SBI FD Calculator?
Using the simple steps below, you can use the SBI fixed deposit calculator with Stable Money.
Step 1: Enter the Principal Amount
First, use the slider of the ‘Total Investment’ option to set the principal amount according to the investment you want to make for your fixed deposit.
Step 2: Enter the Rate of Interest
Under the ‘Rate of Interest’ header, you can change the FD interest rate as applicable or use the slider to set it to your desired rate. You should enter the interest rate applicable on the day of investment.
Step 3: Input the Tenure
You get the option to change the tenure of your FD under the ‘Time Period’ tab. Here, you can directly input the number of years you want to invest your money or drag the slider to the desired tenure.
You will get the maturity and interest values (rounded to the nearest rupee) as soon as you complete these steps. However, note that the actual value of your deposit at maturity will be
printed on the FDR (Fixed Deposit Receipt).
How Does an FD Calculator Work?
Before investing in an FD, it is always important to check your returns so that you have an idea about the returns on your investment. Using the SBI FD calculator is simple as you do not have to
manually calculate it, which can be time-consuming.
There are two methods to calculate your fixed deposit's interest and maturity value. They are simple interest and compound interest methods. Let us understand how both methods work.
Formula to Calculate SBI FD Returns
1. Simple Interest
This method calculates the interest on the principal amount throughout the total tenure of FD. To calculate simple interest, the following formula is used:
Simple Interest = (P * R * T) / 100
• P = Principal amount
• R = Rate of interest
• T = Tenure of fixed deposit
Let us look at an example to get a clear understanding of how this formula works.
Suppose you invest in a State Bank of India fixed deposit scheme with an amount of ₹50,000 for a tenure of 3 years. The interest applicable in this scheme is 7.00%. Hence, by using the formula, your
interest and maturity value are:
• P = ₹50,000
• R = 7.00%
• T = 3 years
Simple Interest = (P * R * T) / 100
= (₹50,000 * 7 * 3) / 100
= ₹10,500
Therefore, the maturity value is = ₹50,000 + ₹10,500 = ₹60,500
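The same arithmetic takes only a few lines of Python (our illustration; this is not Stable Money's or SBI's actual code):

    def fd_simple_interest(principal, rate_pct, years):
        """Simple Interest = (P * R * T) / 100; returns (interest, maturity value)."""
        interest = principal * rate_pct * years / 100
        return interest, principal + interest

    interest, maturity = fd_simple_interest(50_000, 7.0, 3)
    print(interest, maturity)   # 10500.0 60500.0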
2. Compound Interest
Compound Interest is a way of computing interest not only on the initial principal amount but also on the interest accumulated over time. To put it another way, it's interest earned on interest. This
approach can result in a large growth of the amount invested over time, particularly for longer investment periods.
The formula to calculate interest and maturity value using the compound method is:
A = P (1+r/n) ^ (n * t)
• A = Maturity Value of the deposit
• P = Principal amount
• r = Rate of interest
• t = Tenure of fixed deposit.
• n = Number of compounding in a year
Here is an example to understand how this formula works.
Let’s say you invest ₹20,000 in a State Bank of India fixed deposit scheme for 3 years. In this scheme, the interest rate is 7.00%. By applying the formula, your interest and maturity value will be:
• P = ₹20,000
• r = 7.00%
• t = 3 years
• n = 4 (once in every quarter)
Maturity Value = ₹20,000 (1+0.07/4) ^ (4*3)
= ₹24,628.79
So, the interest earned at maturity is:
Interest = Maturity value - Principal amount
= ₹24,628.79 - ₹20,000
= ₹4,628.79
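Again, a quick Python illustration of the compound formula (ours, assuming quarterly compounding as in the example above):

    def fd_compound(principal, rate_pct, years, n=4):
        """A = P * (1 + r/n) ** (n * t); n = 4 means quarterly compounding."""
        maturity = principal * (1 + rate_pct / 100 / n) ** (n * years)
        return maturity, maturity - principal

    maturity, interest = fd_compound(20_000, 7.0, 3)
    print(round(maturity, 2), round(interest, 2))   # 24628.79 4628.79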
Advantages of Using the SBI FD Calculator
Using the SBI fixed deposit calculator is beneficial to you in several ways. Below is a list of advantages of using the calculator:
• The SBI FD Calculator provides accurate estimates of maturity amounts, enabling individuals to plan their finances effectively.
• Calculating FD returns manually can be time-consuming and prone to errors. The online FD calculator simplifies the process and provides instant results within a few seconds without any error.
• Investors can use the calculator to compare different FD schemes, tenure options, and interest rates, allowing them to make informed investment decisions.
• Investors can use the calculator to align their FD investments with specific financial goals, such as buying a house, funding education, or planning for retirement.
• The SBI FD Calculator is designed with a user-friendly interface requiring no specialised financial knowledge. It is readily available on the bank’s website and can be used at any time throughout
the day.
• The calculator allows users to experiment with various investment amounts, interest rates, and tenures to find the most suitable FD plan.
Overall, the SBI FD calculator is an important tool that enables you to make informed financial decisions and plan your future monetary goals by simply following a few steps. The calculator offers
accurate answers about the interest and maturity value, thus saving you precious time.
Whether you are a novice or an experienced investor, using this calculator is easy and beneficial to get error-free results.
Frequently Asked Questions
1. What is the maximum and minimum investment amount for SBI fixed deposit schemes?
The minimum investment amount in an SBI fixed deposit is ₹1,000, with no maximum deposit limit.
2. What is the maximum and minimum tenure to invest in SBI fixed deposit schemes?
7 days is the minimum tenure for investing in a fixed deposit scheme, and 10 years is the maximum for the State Bank of India.
3. Can I use the SBI FD calculator for free?
Yes. The FD calculator is free to use for all. All you need to do is enter the values; the results will be displayed in seconds.
4. Is using an online SBI FD calculator easy?
Yes. There are only a few simple steps for calculating the interest and maturity value of your FD using the online SBI fixed deposit calculator.
5. Does SBI offer premature withdrawal of my FD?
Yes, premature withdrawal is allowed by the bank, but it is subject to a penalty. | {"url":"https://stablemoney.in/calculators/sbi-fd-calculator?type=bank","timestamp":"2024-11-06T14:01:14Z","content_type":"text/html","content_length":"55398","record_id":"<urn:uuid:84def2c0-6363-487e-8ba4-4706aba42962>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00215.warc.gz"} |
setting up a constraint
The problem is to assign a number of persons to 5 tasks while maximizing an objective using OPTMODEL. 10% or less of the persons can be assigned 2 tasks while the rest only 1. If the constraint of
"no more than 2" can be set up this way,
con no_more_2 {i in persons}: sum {j in tasks} assign[i,j] <= 2;
how should I specify the constraint of "only <= 10% can be assigned 2 tasks?" I tried the following
con percent: sum {i in persons} (if sum {j in tasks}assign[i,j] > 0 then 1 else 0) <= 10%*total ;
It did not work and gave this message:
ERROR: The specified optimization technique does not allow nonlinear constraints.
Thank you for your help!
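One standard way to make that counting constraint linear (a sketch, not an accepted answer from this thread; the variable names are made up) is to add a binary flag meaning "this person may take two tasks" and bound the number of flags:

    var two_tasks {persons} binary;

    /* each person gets at most one task unless flagged for a second */
    con link {i in persons}: sum {j in tasks} assign[i,j] <= 1 + two_tasks[i];

    /* at most 10% of the persons may carry the flag */
    con percent_linear: sum {i in persons} two_tasks[i] <= 0.10 * card(persons);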
06-14-2017 11:14 AM | {"url":"https://communities.sas.com/t5/Mathematical-Optimization/setting-up-a-constraint/td-p/366998","timestamp":"2024-11-03T01:22:14Z","content_type":"text/html","content_length":"440369","record_id":"<urn:uuid:23094275-136d-43e1-957f-556016a1207b>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00671.warc.gz"} |
cosh(x: array, /) → array
Calculates an implementation-dependent approximation to the hyperbolic cosine for each element x_i in the input array x.
The mathematical definition of the hyperbolic cosine is
\[\operatorname{cosh}(x) = \frac{e^x + e^{-x}}{2}\]
The hyperbolic cosine is an entire function in the complex plane and has no branch cuts. The function is periodic, with period \(2\pi j\), with respect to the imaginary component.
x (array) – input array whose elements each represent a hyperbolic angle. Should have a floating-point data type.
out (array) – an array containing the hyperbolic cosine of each element in x. The returned array must have a floating-point data type determined by Type Promotion Rules.
Special cases
For all operands, cosh(x) must equal cosh(-x).
For real-valued floating-point operands,
□ If x_i is NaN, the result is NaN.
□ If x_i is +0, the result is 1.
□ If x_i is -0, the result is 1.
□ If x_i is +infinity, the result is +infinity.
□ If x_i is -infinity, the result is +infinity.
For complex floating-point operands, let a = real(x_i) and b = imag(x_i).
For complex floating-point operands, cosh(conj(x)) must equal conj(cosh(x)).
□ If a is +0 and b is +0, the result is 1 + 0j.
□ If a is +0 and b is +infinity, the result is NaN + 0j (sign of the imaginary component is unspecified).
□ If a is +0 and b is NaN, the result is NaN + 0j (sign of the imaginary component is unspecified).
□ If a is a nonzero finite number and b is +infinity, the result is NaN + NaN j.
□ If a is a nonzero finite number and b is NaN, the result is NaN + NaN j.
□ If a is +infinity and b is +0, the result is +infinity + 0j.
□ If a is +infinity and b is a nonzero finite number, the result is +infinity * cis(b).
□ If a is +infinity and b is +infinity, the result is +infinity + NaN j (sign of the real component is unspecified).
□ If a is +infinity and b is NaN, the result is +infinity + NaN j.
□ If a is NaN and b is either +0 or -0, the result is NaN + 0j (sign of the imaginary component is unspecified).
□ If a is NaN and b is a nonzero finite number, the result is NaN + NaN j.
□ If a is NaN and b is NaN, the result is NaN + NaN j.
where cis(v) is cos(v) + sin(v)*1j.
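A quick numerical check of the definition, using NumPy (one library implementing this specification; the snippet is not part of the standard itself):

    import numpy as np

    x = np.array([0.0, 1.0, -1.0, 2.5])
    assert np.allclose(np.cosh(x), (np.exp(x) + np.exp(-x)) / 2)  # matches the definition
    assert np.allclose(np.cosh(x), np.cosh(-x))                   # cosh is an even function
    print(np.cosh(0.0))                                           # 1.0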
Changed in version 2022.12: Added complex data type support. | {"url":"https://data-apis.org/array-api/latest/API_specification/generated/array_api.cosh.html","timestamp":"2024-11-06T19:15:08Z","content_type":"text/html","content_length":"29665","record_id":"<urn:uuid:1b8ee79b-bdc3-4db2-8e5d-3d130b83d5b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00239.warc.gz"} |
Does a rate of change have to be positive?
6 Jun 2019: Investors who are able to predict how the yield curve will change can … if the curve is positive, this indicates that investors require a higher rate of …
Disclaimer: This paper should not be reported as representing the views of … However, changes in interest rates will also affect bank profits through capital gains or … likely reflects the positive impact of announcements by the ECB.
Overall, rate of change is always positive (even if it's distance travelled: if you go backwards, it's called slowing down at a rate of the speed you drove back). For the 2nd question, if the change is in the Y axis, then I think it's the distance the line goes up.
Rate of Change. A rate of change is a rate that describes how one quantity changes in relation to another quantity. If x is the independent variable and y is the dependent variable, then

rate of change = (change in y) / (change in x).

Rates of change can be positive or negative. Slope is simply how much the graph of a line changes in the vertical direction over a change in the horizontal direction; because of this, the slope is sometimes referred to as the rate of change. Slopes can be positive or negative: a positive slope moves upward on a graph from left to right. Rate of Change (ROC): the rate of change is the speed at which a variable changes over a specific period of time. ROC is often used when speaking about momentum, and it can generally be expressed as a ratio between a change in one variable and a corresponding change in another. If the slope is positive, this is an increasing rate of change; if the slope is negative, this is a decreasing rate of change. Rates of Change and Behavior of Graphs: the price change per year is a rate of change because it describes how an output quantity changes relative to the change in the input quantity. We can see that the price of gasoline in the table above did not change by the same amount each year, so the rate of change was not constant. Rate of change can be negative or positive, mathematically speaking (which is what would be the correct answer for a math test). Someone had mentioned that if it's negative it can be called slowing down, but that wouldn't be an answer your math teacher would give you marks for.
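To make the sign question concrete, here is a tiny Python example (ours, not from the page):

    def average_rate_of_change(f, x1, x2):
        """Slope of the secant line: (f(x2) - f(x1)) / (x2 - x1)."""
        return (f(x2) - f(x1)) / (x2 - x1)

    f = lambda x: x ** 2
    print(average_rate_of_change(f, 1, 3))    #  4.0 -> positive (f increasing there)
    print(average_rate_of_change(f, -3, -1))  # -4.0 -> negative (f decreasing there)
    print(average_rate_of_change(f, -2, 2))   #  0.0 -> zero is possible as well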
Rates of change in other directions are given by directional derivatives … as the direction changes … in particular, how they can be used to find the maximum and minimum … in the positive s-direction at s = 0, which is at the point (x0, y0, f(x0, y0)).
Bonds with shorter durations are less sensitive to changing rates and thus are less … with small coupons (something known as "positive convexity") … liquidity risk and call risk are other relevant variables that should be part of …
6 Mar 2017: If you own bonds or have money in a bond fund, there is a number you should know … how sensitive your bond investment will be to changes in interest rates … This means fluctuations in price, whether positive or negative, will be …
InfoChoice provides RBA interest rate updates & forecasts. Everyone should remain 1.5 metres away from all other people as much as possible … Other indicators look more positive for the economy. Climate change is now considered a major risk to the economy of Australia and other major countries.
How do changes in policy interest rates affect the macroeconomy? Cheaper loans should provide a possible floor for house prices in the property market …
28 Mar 2019: If you chose a fixed rate, you will experience no impact if the repo rate changes. But, if you have a prime-linked loan, you need to pay attention …
11 Dec 2019: Interest rates can change for other reasons and may not change by the same … To cover their costs, banks need to pay less on saving than they …
15 Aug 2016: There's no shortage of self-help gurus who swear that repeating positive phrases to yourself can change your life, encouraging that if you …
Marginal Cost of Funds based Lending Rate (MCLR): … but my bank does not have a single time bucket which has more than 30% share of the funds … business strategy and credit risk premium shall have either a positive value or be zero.
As forward expectations for LIBOR change, so will the fixed rate that investors … Historically the spread tended to be positive across maturities, reflecting … if five-year rates fall, using cash in the Treasury market, a trader must invest cash …
21 Aug 2017: But more formally, it means that, whenever b > a, we must always have f(b) > f(a). Beyond f increasing, there is little else one can say about f.
You've calculated the average rate of change going from r = 20 to r = 25, which I'm sure you would agree should be positive, and then it makes sense.
Review average rate of change and how to apply it to solve problems. How do I find the average rate of change of a function when given a function and two inputs (x-values)? Function values can be positive or negative, and they can increase or decrease as the input increases. If the function is decreasing, it has a negative rate of growth. If you have an x^2 term, you need to realize it is a quadratic function: you have a positive rate of change of y with respect to x.
6 Jun 2019 Investors who are able to predict how the yield curve will change can curve is positive, this indicates that investors require a higher rate of Disclaimer: This paper should not be
reported as representing the views of However, changes in interest rates will also affect bank profits through capital gains or surprises likely reflects the positive impact of announcements by the
ECB Overall, rate of change is always positive (Even if it's distance traveled, if you go backwards, its called slowing down at a rate of the speed you drove back). For the 2nd question, if the
change is in the Y axis, then I think its the distance the line goes up. A rate of change is a rate that describes how one quantity changes in relation to another quantity. Rates of change can be
positive or negative. This corresponds to an increase or decrease in the -value between the two data points. When a quantity does not change over time, it is called zero rate of change. This is true
not just for velocities, but for all rates of change. A positive rate of change means that the quantity you are measuring is increasing over time, and a negative rate of change means | {"url":"https://topbitxqdseo.netlify.app/hitzel3160xyji/does-a-rate-of-change-must-be-positive-372","timestamp":"2024-11-08T20:45:42Z","content_type":"text/html","content_length":"35929","record_id":"<urn:uuid:74d75555-7bf1-4cf9-879e-d1fcb7ecb818>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00830.warc.gz"} |
ECE 207L - FIRST ORDER RL CIRCUITS
ECE 207L - FIRST ORDER RL CIRCUITS - LAB 19
FALL 2003
A.P. FELZER
The objective of this lab is to measure the step responses of first order RL circuits.
1. Given the following first order RL circuit
0.1 H
PARTNER 1: R = 1K
PARTNER 2: R = 2K
a. Measure your resistor and inductor values. Compare with nominal values
b. Sketch what you expect the step responses of iL(t) and vL(t) to look like. Make sure your
pulse train input has pulses that are ON long enough for the circuit to reach steady state
and OFF long enough for iL(t) to return to zero.
c. Sketch the step responses of iL(t) and vL(t) from what you observe on the scope
d. Explain any differences between your predictions and observations of iL(t) and vL(t) in
parts (b) and (c)
e. Make use of your graph in part (c) to write an equation for iL(t) as a function of τ
f. Measure iL(t) at a particular time t.
g. Make use of your equation in part (e) and your measurement in part (f) to calculate the
circuit's time constant τ
h. Calculate the time constant τ using the equation τ = L/R
i. Compare your calculated and measured values of τ in parts (g) and (h) | {"url":"https://studylib.net/doc/18686664/ece-207l---first-order-rl-circuits","timestamp":"2024-11-09T06:44:29Z","content_type":"text/html","content_length":"58851","record_id":"<urn:uuid:386ee57f-ab48-4cd7-a28b-c550b3a8341b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00030.warc.gz"} |
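A quick way to sanity-check parts (e)–(h) of the lab above is to compute the expected step response numerically. The sketch below is illustrative only: it assumes a 5 V pulse amplitude (the lab sheet does not state the source voltage), with partner 1's R = 1 kΩ and the 0.1 H inductor from the schematic.

import numpy as np

V, R, L = 5.0, 1e3, 0.1          # assumed 5 V step; R and L from the lab sheet
tau = L / R                      # time constant, here 100 microseconds

t = np.linspace(0, 5 * tau, 500)
i_L = (V / R) * (1 - np.exp(-t / tau))   # inductor current rises toward V/R
v_L = V * np.exp(-t / tau)               # inductor voltage decays to zero

print(f"tau = {tau * 1e6:.0f} us, final current = {V / R * 1e3:.1f} mA")

Reading a single measured current value i_L(t1) off the scope, the same equation gives tau = -t1 / ln(1 - i_L(t1) * R / V), which is the calculation asked for in part (g).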
Built In Factorial Function In C++ With Code Examples
In this article, we will look at how to get the solution for the problem, Built In Factorial Function In C++ With Code Examples
What is a factorial of 3?
Multiplying all the whole numbers from 1 to 3 together gives 1 × 2 × 3 = 6. Thus, 3 factorial, which can be written as 3!, is equal to 6. Hence, this is the required answer.
#include <cmath>
int fact(int n){
    // tgamma(n) computes (n - 1)!, hence we pass n + 1
    // for n = 5 -> 5 * 4 * 3 * 2 * 1 = 120
    return (int)std::tgamma(n + 1);
}
int factorial(int n)
{
    return (n == 1 || n == 0) ? 1 : factorial(n - 1) * n;
}
#include <iostream>
using namespace std;
class Factorial
{
    int num;
    int factorial = 1;
public:
    void calculateFactorial();
    void show();
};
void Factorial::calculateFactorial()
{
    cout << "Enter a number : " << endl;
    cin >> num;
    if (num == 0 || num == 1)
        factorial = 1;
    while (num > 1)
    {
        factorial = factorial * num;
        num--;   // decrement so the loop terminates
    }
}
void Factorial::show()
{
    cout << "Factorial : " << factorial << endl;
}
int main()
{
    Factorial factorial;
    factorial.calculateFactorial();
    factorial.show();
    return 0;
}
Is there a built in factorial in Python?
Does Python have a factorial? Yes, we can calculate the factorial of a number with the inbuilt function in Python. The function factorial () is available in the Python library and can be used to
compute the factorial without writing the complete code. The factorial () function is defined in the math module of Python.
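For reference, a minimal usage of the built-in function the answer describes:

import math

print(math.factorial(6))   # -> 720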
Is factorial a built in function?
Using a built-in function: we will use the math module, which provides the built-in factorial() method. Let's understand the following example. We import the math module, which has the factorial() function; it takes an integer and calculates the factorial.
What is a factorial of 6?
The factorial of 6 is 6 × 5 × 4 × 3 × 2 × 1 = 720.
What is factorial in data structure?
The factorial, symbolized by an exclamation mark (!), is a quantity defined for all integers greater than or equal to 0. For an integer n greater than or equal to 1, the factorial is the product of
all integers less than or equal to n but greater than or equal to 1. The factorial value of 0 is defined as equal to 1.
What is the logic of factorial?
What is the factorial of a number? The factorial of a non-negative integer n is the product of all positive integers smaller than or equal to n. For example, the factorial of 6 is 6*5*4*3*2*1, which is 720. A factorial is represented by a number with a "!" mark at the end.
How do you write a factorial function?
The factorial function can be written as a recursive function call. Recall that factorial(n) = n × (n – 1) × (n – 2) × … × 2 × 1. The factorial function can be rewritten recursively as factorial(n) =
n × factorial(n – 1).
How do you write factorial symbol in C?
Let's see the factorial program using a loop.

#include <stdio.h>
int main()
{
    int i, fact = 1, number;
    printf("Enter a number: ");
    scanf("%d", &number);
    for (i = 1; i <= number; i++) {
        fact = fact * i;
    }
    printf("Factorial of %d is: %d\n", number, fact);
    return 0;
}
What is factorial function in C?
Factorial Program in C: all positive descending integers are multiplied together to determine the factorial of n. Hence, n! is denoted as the factorial of n. A factorial is denoted by "!".
Is there a built in factorial function in C?
Although there is no C function defined specifically for computing factorials, the C math library lets you compute the gamma function.
| {"url":"https://www.isnt.org.in/built-in-factorial-function-in-c-with-code-examples.html","timestamp":"2024-11-10T21:06:51Z","content_type":"text/html","content_length":"149724","record_id":"<urn:uuid:0d501423-3c80-453a-a19d-b278d259aadf>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00778.warc.gz"} |
Claudio Gallicchio - Reservoir Computing
Within the umbrella of randomized Neural Network approaches [1], Reservoir Computing (RC) is an extremely efficient paradigm for modeling Recurrent Neural Networks (RNNs), and it is considered a de
facto state-of-the-art approach for learning in temporal and sequential domains.
The RC paradigm has been instantiated in several equivalent forms in the literature, among which the Echo State Network (ESN) model is likely the best known in the neuro-computing area. Put simply, an
ESN operates on sequential data by recurrently encoding the external input in its reservoir state, which provides to the system a memory over the past input history. At each time step, the output is
computed from the reservoir's units activations by a layer of linear units called readout. The weights of the connections pointing to the readout are the only ones that are trained, while those
pertaining to the input-reservoir and to the reservoir's feedback connections are left untrained, hence the characteristic extreme efficiency of training algorithms for ESNs.
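As an illustration of how little of an ESN is actually trained, here is a minimal sketch. All sizes, scalings and the ridge parameter are illustrative assumptions, not values from the cited papers:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))     # untrained input weights
W = rng.uniform(-1.0, 1.0, (n_res, n_res))       # untrained recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius below 1

def run_reservoir(u_seq):
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)   # reservoir state update
        states.append(x)
    return np.array(states)

u_seq = np.sin(np.arange(200) * 0.1)   # toy input sequence
y_seq = np.roll(u_seq, -1)             # toy target: predict the next input value
X = run_reservoir(u_seq)
ridge = 1e-6                           # ridge regression for the linear readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y_seq)
y_pred = X @ W_out                     # readout output, the only trained part

The spectral-radius rescaling is a common heuristic for satisfying the Echo State Property; only W_out is fit, which is why training reduces to a single linear regression.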
Nowadays ESNs are widely used to approach a large variety of learning problems emerging in diverse real-world application domains. Currently, some fundamental questions, mainly on the true nature of
their operation, stimulate the research effort in this area, as discussed in [1].
[1] C. Gallicchio, J. D. Martin-Guerrero, A. Micheli, E. Soria-Olivas, "Randomized Machine Learning Approaches: Recent Developments and Challenges", Proceedings of the 25th European Symposium on
Artificial Neural Networks (ESANN), Bruges, Belgium, 26-28 April 2017, i6doc.com, pp. 77-86, ISBN: 978-287587038-4, 2017
Why do ESNs work? Understanding the bias of using fixed randomized weights.
The reservoir of an ESN implements a discrete-time dynamical system computed by means of a randomized basis expansion. The initialization conditions used to fix the recurrent weights in the
reservoir, according to the Echo State Property, basically constrain the system dynamics towards a Markovian characterization of the state space, as it has been shown in [2]. According to such
characterization, even without training, the reservoir is already able to develop state representations that distinguish between different input histories in a suffix-based fashion. This means (i)
that input sequences that share a common suffix will be encoded in states that are close to each other proportionally to the length of the common suffix, and (ii) that sequences that have different
suffixes will be encoded into (possibly highly) far states. The intrinsic differentiation among the input histories performed by the reservoir is exploited together with the typical high
dimensionality of the reservoir states to make the problem amenable to linear regression in the state space by the readout. Thereby, whenever the learning task to be solved is compliant with this
Markovian characterization, ESNs provide an extremely efficient way of approaching it successfully.
Further information on the suffix-based characterization of reservoirs state space organizations can be found in [2].
[2] C. Gallicchio, A. Micheli, "Architectural and Markovian factors of echo state networks", Neural Networks, Elsevier, vol. 24(5), pp. 440-456, DOI: 10.1016/j.neunet.2011.02.002, ISSN: 0893-6080,
2011. [PDF_preprint]
Extending the paradigm to structured data with Tree Echo State Networks.
The possibility of extending neural networks methodologies for learning in domains of highly structured data representations, such as trees, graphs and networks, opens the way to a broad range of
exciting real-world applications in domains such as Cheminformatics, Computational Toxicology, Document processing, Social Network Analysis, just to name a few. However, at the same time, extending
neural networks approaches to naturally deal with structured data involves the major downside of exploding training costs of the learning algorithms. Hence, the advantages of efficient methodologies
for learning in structured domains is even more clear and appealing than in the case of temporal data processing.
Recently, the Tree Echo State Network (TreeESN) model [3] has been proposed as an extension of the ESN approach to hierarchical structures. Specifically, the reservoir layer of a TreeESN implements a
stable dynamical system on trees, through which the input structure is encoded into an isomorphic structured state representation. As for standard ESNs, training TreeESNs involves only the adaptation
of a linear readout layer, making it far more efficient than other neuro-computing approaches (in which all of the model's parameters must be trained). A preliminary extension
of the same ideas on graph structured data is provided by the Graph Echo State Network model, proposed in [4].
[3] C. Gallicchio, A. Micheli, "Tree Echo State Networks", Neurocomputing (2013), vol. 101, pp. 319-337, Elsevier, DOI: 10.1016/j.neucom.2012.08.017, ISSN: 0925-2312. [PDF_preprint]
[4] C. Gallicchio, A. Micheli, "Graph Echo State Networks", Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18-23 July 2010, IEEE, pp. 1-8, DOI:
10.1109/IJCNN.2010.5596796, ISBN: 978-1-4244-6916-1, 2010.
Deep Echo State Networks.
Recently, an intriguing research direction is being devoted to the extension of the Deep Learning principles to processing of temporal information by means of RNNs. Deep Neural Networks are based on
the hierarchical composition of many non-linear hidden layers. Through learning, such architecture is able to develop a distributed, progressively more abstract representation of the involved
information, sparsely represented across the activations of the units in deeper layers. Moving this paradigm to time-series processing is extremely appealing, as it would make it possible to learn hierarchical representations of temporal data and thereby approach a large variety of problems (especially in the cognitive area) in a natural fashion.
In this context, the introduction of the deep Echo State Network (deepESN) model [5] promises to open the way to the development of novel recurrent models for time series processing that can deal
with multiple time-scale dynamics in the input while borrowing the extreme efficiency of training algorithms typical of RC. On the theoretical side, studies of the deepESN model from the dynamical-systems viewpoint [6][7] shed fresh light on the real importance of layering per se as a major design factor in deep recurrent networks. At the same time, investigations into the characterization of deepESN multi-layered dynamics support a more in-depth understanding of the true merits of learning in developing hierarchical representations of time.
The current state of research on this topic suggests that the hierarchical composition of the recurrent layers in deep RNNs is the most important aspect in the definition of a deep learning approach for time series. Learning and, as recently discovered, even the non-linearities in the hidden layers seem to play a less important role in the emerging structure of temporal data representations.
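A hierarchy of such reservoirs can be sketched in a few lines, building on the ESN sketch above. As before, all sizes and scalings are illustrative assumptions only:

import numpy as np

rng = np.random.default_rng(1)
n_res, n_layers = 50, 3

def make_reservoir(n_in):
    W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
    W = rng.uniform(-1.0, 1.0, (n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state-style rescaling
    return W_in, W

layers = [make_reservoir(1)] + [make_reservoir(n_res) for _ in range(n_layers - 1)]

def run_deep(u_seq):
    states = [np.zeros(n_res) for _ in range(n_layers)]
    history = []
    for u in u_seq:
        inp = np.atleast_1d(u)
        for i, (W_in, W) in enumerate(layers):
            states[i] = np.tanh(W_in @ inp + W @ states[i])
            inp = states[i]                      # each layer feeds the next one
        history.append(np.concatenate(states))   # a readout can use all layers
    return np.array(history)

Stacking the untrained reservoirs leaves the training cost unchanged: a single linear readout, fit on the concatenated states, remains the only adapted component.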
[5] C. Gallicchio, A. Micheli, L. Pedrelli, "Deep Reservoir Computing: A Critical Experimental Analysis", Neurocomputing (2017), DOI: 10.1016/j.neucom.2016.12.089 [LINK][PDF_preprint]
[6] C. Gallicchio, A. Micheli, "Echo State Property of Deep Reservoir Computing Networks", Cognitive Computation (2017), DOI: 10.1007/s12559-017-9461-9 [PDF_preprint]
[7] C. Gallicchio, A. Micheli, L. Silvestri, "Local Lyapunov Exponents of Deep RNN", Proceedings of the 25th European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, 26-28 April
2017, i6doc.com, pp. 559-564, ISBN: 978-287587038-4, 2017 | {"url":"https://sites.google.com/site/cgallicch/research-activity_1/reservoir-computing","timestamp":"2024-11-13T12:01:39Z","content_type":"text/html","content_length":"69562","record_id":"<urn:uuid:f1f71fdc-270c-42e4-b84e-94f212ec2322>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00292.warc.gz"} |
Steady State Vector Calculator – Easily Find Your Markov Chain Solutions
Use this tool to calculate the steady state vector of a Markov chain, providing you with the long-term probabilities for each state.
How the Calculator Works
This calculator takes input as a string representation of a square matrix, where the rows are separated by semicolons (;) and the individual numbers by commas (,). It outputs the steady state vector
by iteratively updating an initial probability distribution until it converges to a steady state.
How to Use It
Simply input the transition matrix in the specified format, and click “Calculate”. The calculator will display the steady state vector.
How It Calculates Results
This tool iteratively adjusts an initial probability distribution vector. At each step, it computes the new distribution by applying the transition rates from the input matrix. After several
iterations, the vector will converge to a steady state, assuming one exists.
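A minimal version of this iteration might look as follows, assuming the common convention that rows of the transition matrix sum to one; the calculator's exact convention, tolerance and iteration limit are not specified, so these are illustrative choices:

import numpy as np

def steady_state(P, tol=1e-10, max_iter=10000):
    P = np.asarray(P, dtype=float)               # square transition matrix
    v = np.full(P.shape[0], 1.0 / P.shape[0])    # start from the uniform distribution
    for _ in range(max_iter):
        v_next = v @ P                           # apply one step of the chain
        if np.max(np.abs(v_next - v)) < tol:
            return v_next                        # converged to a steady state
        v = v_next
    return v                                     # may not have converged

# Example in the calculator's "rows separated by ;" input format:
matrix_string = "0.9,0.1;0.5,0.5"
P = [[float(x) for x in row.split(",")] for row in matrix_string.split(";")]
print(steady_state(P))   # -> approximately [0.8333, 0.1667]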
Due to its iterative nature, the calculator may not converge to the exact steady state vector if the system has no steady state or if convergence requires more iterations than the tool performs. The input matrix must be square, and this tool is not optimized for very large matrices.
Pentagon Calculator | Find Area, Side, Perimeter, Diagonal - Areavolumecalculator.com
Pentagon Calculator contains various other calculators like Pentagon Area Calculator, Pentagon Side Calculator, Pentagon Diagonal Calculator, and Pentagon Perimeter Calculator. Select any one from the given dropdown and proceed to get the output in a short span of time; the calculator returns an exact result as soon as you enter the input.
Formulae for Pentagon
A regular pentagon is a geometrical shape having 5 sides of equal length, connected end to end at their vertices. Go through the following lines to get the various formulae to compute the pentagon perimeter, area, diagonal and side. Make use of these formulae while solving the related problems.
The formula to find the area of pentagon is as follows:
Pentagon area = (1/4) × √(5 (5 + 2√5)) × s²
Where, s is the side of a pentagon.
Perimeter of a pentagon is the length of the outline of a pentagon. The formula to find the perimeter of a pentagon is as follows:
Pentagon perimeter = 5 * s
Where, s is the length of a pentagon side.
You can use any of these formulas to compute the side of a pentagon based on the given constraints.
If the perimeter of the pentagon is given, then side of a pentagon = perimeter / 5.
If the area of the pentagon is given, then pentagon side = √[ (4 × a) / √(5 (5 + 2√5)) ], where a is the area.
The universal formula to calculate the diagonal of a pentagon is diagonal = ((1 + √5) / 2) × s, where s is the side of a pentagon.
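The formulas above (together with the height formula given in the FAQ below) translate directly into code; a small sketch with an illustrative side length of 4:

import math

def pentagon(s):
    area = 0.25 * math.sqrt(5 * (5 + 2 * math.sqrt(5))) * s ** 2
    perimeter = 5 * s
    diagonal = (1 + math.sqrt(5)) / 2 * s               # golden ratio times the side
    height = (s / 2) * math.sqrt(5 + 2 * math.sqrt(5))  # from the FAQ below
    return area, perimeter, diagonal, height

area, perimeter, diagonal, height = pentagon(4.0)
print(f"area={area:.3f}, perimeter={perimeter:.1f}, diagonal={diagonal:.3f}, height={height:.3f}")
# For s = 4: area ~ 27.528, perimeter = 20.0, diagonal ~ 6.472, height ~ 6.155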
• Area of a Pentagon
• Perimeter of a Pentagon
• Side of a Pentagon
• Diagonal of a Pentagon
Improve your skills on geometry topics and strengthen your understanding of area, volume, perimeter concepts with our collection of free and online calculators at Areavolumecalculator.com
FAQs on Pentagon Calculator
1. How to use the pentagon calculator?
• You have to select any one option from the drop down of the calculator i.e area, side, diagonal, and perimeter.
• Enter the value of the input as given in the calculator.
• You can also change the units of the input from cm, m, yard, etc.
• Press on the calculate button.
• Then, you will find the exact result in fraction of seconds.
2. What is a pentagon?
A pentagon is a five-sided, two-dimensional geometric shape. It has five sides and five corners. If a polygon has five sides and all the side lengths and angles are equal, it is called a regular pentagon.
3. What are the types of pentagons?
In total, there are four different types of pentagons. They are as follows:
1. Regular Pentagon: A geometrical shape with five equal sides and equal angles.
2. Irregular Pentagon: A geometrical shape with different lengths of sides and unequal sizes of angles.
3. Concave Pentagon: If a pentagon has an internal angle greater than 180 degrees, then it is a concave pentagon.
4. Convex Pentagon: If a pentagon has no internal angle greater than 180 degree, then it is a convex pentagon.
4. What is the formula to compute the height of a pentagon?
Pentagon height calculation formula is given below:
h = (s / 2) × √(5 + 2√5)
h is the height of a pentagon and s is the side of the pentagon. | {"url":"https://areavolumecalculator.com/pentagon-calculator/","timestamp":"2024-11-09T10:58:39Z","content_type":"text/html","content_length":"27933","record_id":"<urn:uuid:10e43883-4efc-43f2-a8f6-583a9f7d05e9>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00051.warc.gz"} |
1510 - Exact Change
□ Seller: That will be fourteen dollars.
□ Buyer: Here's a twenty.
□ Seller: Sorry, I don't have any change.
□ Buyer: OK, here's a ten and a five. Keep the change.
When travelling to remote locations, it is often helpful to bring cash, in case you want to buy something from someone who does not accept credit or debit cards. It is also helpful to bring a
variety of denominations in case the seller does not have change. Even so, you may not have the exact amount, and will have to pay a little bit more than full price. The same problem can arise
even in urban locations, for example with vending machines that do not return change.
Of course, you would like to minimize the amount you pay (though you must pay at least as much as the value of the item). Moreover, while paying the minimum amount, you would like to minimize the
number of coins or bills that you pay out.
The first line of input contains one integer specifying the number of test cases to follow. Each test case begins with a line containing an integer, the price in cents of the item you would like
to buy. The price will not exceed 10 000 cents (i.e., $100). The following line contains a single integer n, the number of bills and coins that you have. The number n is at most 100. The
following n lines each contain one integer, the value in cents of each bill or coin that you have. Note that the denominations can be any number of cents; they are not limited to the values of
coins and bills that we usually use in Canada. However, no bill or coin will have a value greater than 10 000 cents ($100). The total value of your bills and coins will always be equal to or
greater than the price of the item.
For each test case, output a single line containing two integers: the total amount paid (in cents), and the total number of coins and bills used.
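One standard way to solve this task — not necessarily the judges' reference solution — is a 0/1-knapsack dynamic program over payable amounts: first find the smallest reachable sum that is at least the price, then report the minimum number of coins achieving it. A sketch:

import sys

def solve(price, coins):
    limit = price + max(coins)          # paying above this can never be optimal
    INF = float("inf")
    dp = [INF] * (limit + 1)            # dp[v] = fewest coins paying exactly v cents
    dp[0] = 0
    for c in coins:                     # 0/1 knapsack: each coin usable at most once
        for v in range(limit, c - 1, -1):
            if dp[v - c] + 1 < dp[v]:
                dp[v] = dp[v - c] + 1
    for v in range(price, limit + 1):   # smallest payable amount >= price wins
        if dp[v] < INF:
            return v, dp[v]

data = sys.stdin.read().split()
pos = 1
for _ in range(int(data[0])):
    price = int(data[pos]); n = int(data[pos + 1])
    coins = [int(x) for x in data[pos + 2 : pos + 2 + n]]
    pos += 2 + n
    amount, count = solve(price, coins)
    print(amount, count)

The bound price + max(coins) works because any payment exceeding it could drop its largest coin and still cover the price, so a larger total is never optimal; with at most 100 coins and sums up to 20,000 cents, the table stays small.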
sample input
sample output
waterloo 4 October, 2008 | {"url":"http://hustoj.org/problem/1510","timestamp":"2024-11-13T16:00:20Z","content_type":"text/html","content_length":"9302","record_id":"<urn:uuid:355bbedb-547d-406c-9ba2-6b903fa2a773>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00658.warc.gz"} |
How do you maximize a linear programming problem?
The Maximization Linear Programming Problems
1. Write the objective function.
2. Write the constraints.
3. Graph the constraints.
4. Shade the feasibility region.
5. Find the corner points.
6. Determine the corner point that gives the maximum value.
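As a concrete illustration of these steps, consider a small hypothetical problem — maximize z = 3x + 2y subject to x + y ≤ 4, x + 3y ≤ 6, x, y ≥ 0 — solved here with SciPy rather than by hand (linprog minimizes, so the objective is negated):

from scipy.optimize import linprog

# Maximize z = 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
c = [-3, -2]                       # negated objective coefficients
A_ub = [[1, 1], [1, 3]]            # left-hand sides of the <= constraints
b_ub = [4, 6]                      # right-hand sides
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)             # optimal corner (4, 0) with maximum value 12

Evaluating the corner points (0, 0), (4, 0), (3, 1) and (0, 2) by hand gives 0, 12, 11 and 4 respectively, confirming that the maximum occurs at the corner (4, 0), exactly as step 6 prescribes.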
What is linear programming maximization?
Linear programming is a mathematical technique for solving constrained maximization and minimization problems when there are many constraints and the objective function to be optimized, as well as
the constraints faced, are linear (i.e., can be represented by straight lines).
What does maximization of objective function in LPP means?
Maximization of the objective function in an LP model means finding the allowable set of decisions at which the highest value of the objective occurs. Linear programming refers to choosing the best alternative from the available alternatives, where the objective function and constraint functions can be expressed as linear mathematical functions.
What is generally used with maximization problem?
The maximization problem can be solved resorting to the Lagrange multipliers technique, and therefore the Excel Solver can be used.
What is the difference between a minimization and a maximization problem in linear programs?
A difference between minimization and maximization problems is that:
1. minimization problems cannot be solved with the corner-point method;
2. maximization problems often have unbounded regions;
3. minimization problems often have unbounded regions.
What are the applications of linear programming in industries?
Linear programming is used in business and industry in production planning, transportation and routing, and various types of scheduling. Airlines use linear programs to schedule their flights, taking
into account both scheduling aircraft and scheduling staff.
How can we solve a maximization problem using the simplex method?
1. Set up the problem.
2. Convert the inequalities into equations.
3. Construct the initial simplex tableau.
4. The most negative entry in the bottom row identifies the pivot column.
5. Calculate the quotients.
6. Perform pivoting to make all other entries in this column zero.
What is contribution margin in linear programming graphical method?
Define and explain the linear programming graphical method; how is a profit maximization problem solved using it? The contribution margin is one measure of whether management is making the best use of resources. When the total contribution margin is maximized, management's profit objective should be satisfied.
What are the assumptions of linear programming?
Another assumption of linear programming is that the decision variables are continuous. This means combinations of outputs can take fractional values as well as integer values. For example, 5 2/3 units of product A and 10 1/3 units of product B may be produced in a week.
What is linear programming?
Linear programming is a powerful quantitative technique (or operational research technique) designed to solve allocation problems. The term 'linear programming' consists of the two words 'Linear' and 'Programming'. The word 'Linear' describes the relationship between decision variables, which are directly proportional.